in_source_id (string, 13–58 chars) | issue (string, 3–241k chars) | before_files (list, 0–3 items) | after_files (list, 0–3 items) | pr_diff (string, 109–107M chars, nullable ⌀)
---|---|---|---|---|
python__mypy-2247 | join_types(UninhabitedType, t) should be t
For some reason `join_simple` has logic to move `UninhabitedType` to the second argument but `join_types` does not.
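A minimal, self-contained sketch (toy classes, not mypy's real `Type` hierarchy) of the normalization the report asks for: moving `UninhabitedType` to the second operand in `join_types`, the same way `join_simple` already does, makes `join_types(UninhabitedType(), t)` return `t` regardless of operand order.
```
class Type:
    pass


class UninhabitedType(Type):
    """Stand-in for mypy's bottom type: no value inhabits it."""


class IntType(Type):
    """Stand-in for an ordinary type such as builtins.int."""


def join_types(s: Type, t: Type) -> Type:
    """Least-upper-bound sketch with the normalization the fix adds."""
    # join(<uninhabited>, T) == T for any T, so move the bottom type to the
    # second operand; the rest of the algorithm then only sees it as `t`.
    if isinstance(s, UninhabitedType) and not isinstance(t, UninhabitedType):
        s, t = t, s
    if isinstance(t, UninhabitedType):
        return s
    return t  # placeholder for the visitor-based handling of real joins


i = IntType()
assert join_types(UninhabitedType(), i) is i  # symmetric after the fix
assert join_types(i, UninhabitedType()) is i
```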
| [
{
"content": "\"\"\"Calculation of the least upper bound types (joins).\"\"\"\n\nfrom typing import List\n\nfrom mypy.types import (\n Type, AnyType, NoneTyp, Void, TypeVisitor, Instance, UnboundType,\n ErrorType, TypeVarType, CallableType, TupleType, ErasedType, TypeList,\n UnionType, FunctionLike, Overloaded, PartialType, DeletedType,\n UninhabitedType, TypeType, true_or_false\n)\nfrom mypy.maptype import map_instance_to_supertype\nfrom mypy.subtypes import is_subtype, is_equivalent, is_subtype_ignoring_tvars\n\nfrom mypy import experiments\n\n\ndef join_simple(declaration: Type, s: Type, t: Type) -> Type:\n \"\"\"Return a simple least upper bound given the declared type.\"\"\"\n\n if (s.can_be_true, s.can_be_false) != (t.can_be_true, t.can_be_false):\n # if types are restricted in different ways, use the more general versions\n s = true_or_false(s)\n t = true_or_false(t)\n\n if isinstance(s, AnyType):\n return s\n\n if isinstance(s, ErasedType):\n return t\n\n if is_subtype(s, t):\n return t\n\n if is_subtype(t, s):\n return s\n\n if isinstance(declaration, UnionType):\n return UnionType.make_simplified_union([s, t])\n\n if isinstance(s, NoneTyp) and not isinstance(t, NoneTyp):\n s, t = t, s\n\n if isinstance(s, UninhabitedType) and not isinstance(t, UninhabitedType):\n s, t = t, s\n\n value = t.accept(TypeJoinVisitor(s))\n\n if value is None:\n # XXX this code path probably should be avoided.\n # It seems to happen when a line (x = y) is a type error, and\n # it's not clear that assuming that x is arbitrary afterward\n # is a good idea.\n return declaration\n\n if declaration is None or is_subtype(value, declaration):\n return value\n\n return declaration\n\n\ndef join_types(s: Type, t: Type) -> Type:\n \"\"\"Return the least upper bound of s and t.\n\n For example, the join of 'int' and 'object' is 'object'.\n\n If the join does not exist, return an ErrorType instance.\n \"\"\"\n if (s.can_be_true, s.can_be_false) != (t.can_be_true, t.can_be_false):\n # if types are restricted in different ways, use the more general versions\n s = true_or_false(s)\n t = true_or_false(t)\n\n if isinstance(s, AnyType):\n return s\n\n if isinstance(s, ErasedType):\n return t\n\n if isinstance(s, UnionType) and not isinstance(t, UnionType):\n s, t = t, s\n\n if isinstance(s, NoneTyp) and not isinstance(t, NoneTyp):\n s, t = t, s\n\n # Use a visitor to handle non-trivial cases.\n return t.accept(TypeJoinVisitor(s))\n\n\nclass TypeJoinVisitor(TypeVisitor[Type]):\n \"\"\"Implementation of the least upper bound algorithm.\n\n Attributes:\n s: The other (left) type operand.\n \"\"\"\n\n def __init__(self, s: Type) -> None:\n self.s = s\n\n def visit_unbound_type(self, t: UnboundType) -> Type:\n if isinstance(self.s, Void) or isinstance(self.s, ErrorType):\n return ErrorType()\n else:\n return AnyType()\n\n def visit_union_type(self, t: UnionType) -> Type:\n if is_subtype(self.s, t):\n return t\n else:\n return UnionType.make_simplified_union([self.s, t])\n\n def visit_error_type(self, t: ErrorType) -> Type:\n return t\n\n def visit_type_list(self, t: TypeList) -> Type:\n assert False, 'Not supported'\n\n def visit_any(self, t: AnyType) -> Type:\n return t\n\n def visit_void(self, t: Void) -> Type:\n if isinstance(self.s, Void):\n return t\n else:\n return ErrorType()\n\n def visit_none_type(self, t: NoneTyp) -> Type:\n if experiments.STRICT_OPTIONAL:\n if isinstance(self.s, (NoneTyp, UninhabitedType)):\n return t\n elif isinstance(self.s, UnboundType):\n return AnyType()\n elif isinstance(self.s, Void) or 
isinstance(self.s, ErrorType):\n return ErrorType()\n else:\n return UnionType.make_simplified_union([self.s, t])\n else:\n if not isinstance(self.s, Void):\n return self.s\n else:\n return self.default(self.s)\n\n def visit_uninhabited_type(self, t: UninhabitedType) -> Type:\n if not isinstance(self.s, Void):\n return self.s\n else:\n return self.default(self.s)\n\n def visit_deleted_type(self, t: DeletedType) -> Type:\n if not isinstance(self.s, Void):\n return self.s\n else:\n return self.default(self.s)\n\n def visit_erased_type(self, t: ErasedType) -> Type:\n return self.s\n\n def visit_type_var(self, t: TypeVarType) -> Type:\n if isinstance(self.s, TypeVarType) and self.s.id == t.id:\n return self.s\n else:\n return self.default(self.s)\n\n def visit_instance(self, t: Instance) -> Type:\n if isinstance(self.s, Instance):\n return join_instances(t, self.s)\n elif isinstance(self.s, FunctionLike):\n return join_types(t, self.s.fallback)\n elif isinstance(self.s, TypeType):\n return join_types(t, self.s)\n else:\n return self.default(self.s)\n\n def visit_callable_type(self, t: CallableType) -> Type:\n # TODO: Consider subtyping instead of just similarity.\n if isinstance(self.s, CallableType) and is_similar_callables(t, self.s):\n return combine_similar_callables(t, self.s)\n elif isinstance(self.s, Overloaded):\n # Switch the order of arguments to that we'll get to visit_overloaded.\n return join_types(t, self.s)\n else:\n return join_types(t.fallback, self.s)\n\n def visit_overloaded(self, t: Overloaded) -> Type:\n # This is more complex than most other cases. Here are some\n # examples that illustrate how this works.\n #\n # First let's define a concise notation:\n # - Cn are callable types (for n in 1, 2, ...)\n # - Ov(C1, C2, ...) is an overloaded type with items C1, C2, ...\n # - Callable[[T, ...], S] is written as [T, ...] -> S.\n #\n # We want some basic properties to hold (assume Cn are all\n # unrelated via Any-similarity):\n #\n # join(Ov(C1, C2), C1) == C1\n # join(Ov(C1, C2), Ov(C1, C2)) == Ov(C1, C2)\n # join(Ov(C1, C2), Ov(C1, C3)) == C1\n # join(Ov(C2, C2), C3) == join of fallback types\n #\n # The presence of Any types makes things more interesting. The join is the\n # most general type we can get with respect to Any:\n #\n # join(Ov([int] -> int, [str] -> str), [Any] -> str) == Any -> str\n #\n # We could use a simplification step that removes redundancies, but that's not\n # implemented right now. 
Consider this example, where we get a redundancy:\n #\n # join(Ov([int, Any] -> Any, [str, Any] -> Any), [Any, int] -> Any) ==\n # Ov([Any, int] -> Any, [Any, int] -> Any)\n #\n # TODO: Use callable subtyping instead of just similarity.\n result = [] # type: List[CallableType]\n s = self.s\n if isinstance(s, FunctionLike):\n # The interesting case where both types are function types.\n for t_item in t.items():\n for s_item in s.items():\n if is_similar_callables(t_item, s_item):\n result.append(combine_similar_callables(t_item, s_item))\n if result:\n # TODO: Simplify redundancies from the result.\n if len(result) == 1:\n return result[0]\n else:\n return Overloaded(result)\n return join_types(t.fallback, s.fallback)\n return join_types(t.fallback, s)\n\n def visit_tuple_type(self, t: TupleType) -> Type:\n if isinstance(self.s, TupleType) and self.s.length() == t.length():\n items = [] # type: List[Type]\n for i in range(t.length()):\n items.append(self.join(t.items[i], self.s.items[i]))\n # join fallback types if they are different\n from typing import cast\n return TupleType(items, cast(Instance, join_instances(self.s.fallback, t.fallback)))\n else:\n return self.default(self.s)\n\n def visit_partial_type(self, t: PartialType) -> Type:\n # We only have partial information so we can't decide the join result. We should\n # never get here.\n assert False, \"Internal error\"\n\n def visit_type_type(self, t: TypeType) -> Type:\n if isinstance(self.s, TypeType):\n return TypeType(self.join(t.item, self.s.item), line=t.line)\n elif isinstance(self.s, Instance) and self.s.type.fullname() == 'builtins.type':\n return self.s\n else:\n return self.default(self.s)\n\n def join(self, s: Type, t: Type) -> Type:\n return join_types(s, t)\n\n def default(self, typ: Type) -> Type:\n if isinstance(typ, Instance):\n return object_from_instance(typ)\n elif isinstance(typ, UnboundType):\n return AnyType()\n elif isinstance(typ, Void) or isinstance(typ, ErrorType):\n return ErrorType()\n elif isinstance(typ, TupleType):\n return self.default(typ.fallback)\n elif isinstance(typ, FunctionLike):\n return self.default(typ.fallback)\n elif isinstance(typ, TypeVarType):\n return self.default(typ.upper_bound)\n else:\n return AnyType()\n\n\ndef join_instances(t: Instance, s: Instance) -> Type:\n \"\"\"Calculate the join of two instance types.\n\n Return ErrorType if the result is ambiguous.\n \"\"\"\n if t.type == s.type:\n # Simplest case: join two types with the same base type (but\n # potentially different arguments).\n if is_subtype(t, s) or is_subtype(s, t):\n # Compatible; combine type arguments.\n args = [] # type: List[Type]\n for i in range(len(t.args)):\n args.append(join_types(t.args[i], s.args[i]))\n return Instance(t.type, args)\n else:\n # Incompatible; return trivial result object.\n return object_from_instance(t)\n elif t.type.bases and is_subtype_ignoring_tvars(t, s):\n return join_instances_via_supertype(t, s)\n else:\n # Now t is not a subtype of s, and t != s. Now s could be a subtype\n # of t; alternatively, we need to find a common supertype. 
This works\n # in of the both cases.\n return join_instances_via_supertype(s, t)\n\n\ndef join_instances_via_supertype(t: Instance, s: Instance) -> Type:\n # Give preference to joins via duck typing relationship, so that\n # join(int, float) == float, for example.\n if t.type._promote and is_subtype(t.type._promote, s):\n return join_types(t.type._promote, s)\n elif s.type._promote and is_subtype(s.type._promote, t):\n return join_types(t, s.type._promote)\n # Compute the \"best\" supertype of t when joined with s.\n # The definition of \"best\" may evolve; for now it is the one with\n # the longest MRO. Ties are broken by using the earlier base.\n best = None # type: Type\n for base in t.type.bases:\n mapped = map_instance_to_supertype(t, base.type)\n res = join_instances(mapped, s)\n if best is None or is_better(res, best):\n best = res\n assert best is not None\n return best\n\n\ndef is_better(t: Type, s: Type) -> bool:\n # Given two possible results from join_instances_via_supertype(),\n # indicate whether t is the better one.\n if isinstance(t, Instance):\n if not isinstance(s, Instance):\n return True\n # Use len(mro) as a proxy for the better choice.\n if len(t.type.mro) > len(s.type.mro):\n return True\n return False\n\n\ndef is_similar_callables(t: CallableType, s: CallableType) -> bool:\n \"\"\"Return True if t and s are equivalent and have identical numbers of\n arguments, default arguments and varargs.\n \"\"\"\n\n return (len(t.arg_types) == len(s.arg_types) and t.min_args == s.min_args\n and t.is_var_arg == s.is_var_arg and is_equivalent(t, s))\n\n\ndef combine_similar_callables(t: CallableType, s: CallableType) -> CallableType:\n arg_types = [] # type: List[Type]\n for i in range(len(t.arg_types)):\n arg_types.append(join_types(t.arg_types[i], s.arg_types[i]))\n # TODO kinds and argument names\n # The fallback type can be either 'function' or 'type'. The result should have 'type' as\n # fallback only if both operands have it as 'type'.\n if t.fallback.type.fullname() != 'builtins.type':\n fallback = t.fallback\n else:\n fallback = s.fallback\n return t.copy_modified(arg_types=arg_types,\n ret_type=join_types(t.ret_type, s.ret_type),\n fallback=fallback,\n name=None)\n\n\ndef object_from_instance(instance: Instance) -> Instance:\n \"\"\"Construct the type 'builtins.object' from an instance type.\"\"\"\n # Use the fact that 'object' is always the last class in the mro.\n res = Instance(instance.type.mro[-1], [])\n return res\n\n\ndef join_type_list(types: List[Type]) -> Type:\n if not types:\n # This is a little arbitrary but reasonable. Any empty tuple should be compatible\n # with all variable length tuples, and this makes it possible. A better approach\n # would be to use a special bottom type, which we do when strict Optional\n # checking is enabled.\n if experiments.STRICT_OPTIONAL:\n return UninhabitedType()\n else:\n return NoneTyp()\n joined = types[0]\n for t in types[1:]:\n joined = join_types(joined, t)\n return joined\n",
"path": "mypy/join.py"
}
] | [
{
"content": "\"\"\"Calculation of the least upper bound types (joins).\"\"\"\n\nfrom typing import List\n\nfrom mypy.types import (\n Type, AnyType, NoneTyp, Void, TypeVisitor, Instance, UnboundType,\n ErrorType, TypeVarType, CallableType, TupleType, ErasedType, TypeList,\n UnionType, FunctionLike, Overloaded, PartialType, DeletedType,\n UninhabitedType, TypeType, true_or_false\n)\nfrom mypy.maptype import map_instance_to_supertype\nfrom mypy.subtypes import is_subtype, is_equivalent, is_subtype_ignoring_tvars\n\nfrom mypy import experiments\n\n\ndef join_simple(declaration: Type, s: Type, t: Type) -> Type:\n \"\"\"Return a simple least upper bound given the declared type.\"\"\"\n\n if (s.can_be_true, s.can_be_false) != (t.can_be_true, t.can_be_false):\n # if types are restricted in different ways, use the more general versions\n s = true_or_false(s)\n t = true_or_false(t)\n\n if isinstance(s, AnyType):\n return s\n\n if isinstance(s, ErasedType):\n return t\n\n if is_subtype(s, t):\n return t\n\n if is_subtype(t, s):\n return s\n\n if isinstance(declaration, UnionType):\n return UnionType.make_simplified_union([s, t])\n\n if isinstance(s, NoneTyp) and not isinstance(t, NoneTyp):\n s, t = t, s\n\n if isinstance(s, UninhabitedType) and not isinstance(t, UninhabitedType):\n s, t = t, s\n\n value = t.accept(TypeJoinVisitor(s))\n\n if value is None:\n # XXX this code path probably should be avoided.\n # It seems to happen when a line (x = y) is a type error, and\n # it's not clear that assuming that x is arbitrary afterward\n # is a good idea.\n return declaration\n\n if declaration is None or is_subtype(value, declaration):\n return value\n\n return declaration\n\n\ndef join_types(s: Type, t: Type) -> Type:\n \"\"\"Return the least upper bound of s and t.\n\n For example, the join of 'int' and 'object' is 'object'.\n\n If the join does not exist, return an ErrorType instance.\n \"\"\"\n if (s.can_be_true, s.can_be_false) != (t.can_be_true, t.can_be_false):\n # if types are restricted in different ways, use the more general versions\n s = true_or_false(s)\n t = true_or_false(t)\n\n if isinstance(s, AnyType):\n return s\n\n if isinstance(s, ErasedType):\n return t\n\n if isinstance(s, UnionType) and not isinstance(t, UnionType):\n s, t = t, s\n\n if isinstance(s, NoneTyp) and not isinstance(t, NoneTyp):\n s, t = t, s\n\n if isinstance(s, UninhabitedType) and not isinstance(t, UninhabitedType):\n s, t = t, s\n\n # Use a visitor to handle non-trivial cases.\n return t.accept(TypeJoinVisitor(s))\n\n\nclass TypeJoinVisitor(TypeVisitor[Type]):\n \"\"\"Implementation of the least upper bound algorithm.\n\n Attributes:\n s: The other (left) type operand.\n \"\"\"\n\n def __init__(self, s: Type) -> None:\n self.s = s\n\n def visit_unbound_type(self, t: UnboundType) -> Type:\n if isinstance(self.s, Void) or isinstance(self.s, ErrorType):\n return ErrorType()\n else:\n return AnyType()\n\n def visit_union_type(self, t: UnionType) -> Type:\n if is_subtype(self.s, t):\n return t\n else:\n return UnionType.make_simplified_union([self.s, t])\n\n def visit_error_type(self, t: ErrorType) -> Type:\n return t\n\n def visit_type_list(self, t: TypeList) -> Type:\n assert False, 'Not supported'\n\n def visit_any(self, t: AnyType) -> Type:\n return t\n\n def visit_void(self, t: Void) -> Type:\n if isinstance(self.s, Void):\n return t\n else:\n return ErrorType()\n\n def visit_none_type(self, t: NoneTyp) -> Type:\n if experiments.STRICT_OPTIONAL:\n if isinstance(self.s, (NoneTyp, UninhabitedType)):\n return t\n elif 
isinstance(self.s, UnboundType):\n return AnyType()\n elif isinstance(self.s, Void) or isinstance(self.s, ErrorType):\n return ErrorType()\n else:\n return UnionType.make_simplified_union([self.s, t])\n else:\n if not isinstance(self.s, Void):\n return self.s\n else:\n return self.default(self.s)\n\n def visit_uninhabited_type(self, t: UninhabitedType) -> Type:\n if not isinstance(self.s, Void):\n return self.s\n else:\n return self.default(self.s)\n\n def visit_deleted_type(self, t: DeletedType) -> Type:\n if not isinstance(self.s, Void):\n return self.s\n else:\n return self.default(self.s)\n\n def visit_erased_type(self, t: ErasedType) -> Type:\n return self.s\n\n def visit_type_var(self, t: TypeVarType) -> Type:\n if isinstance(self.s, TypeVarType) and self.s.id == t.id:\n return self.s\n else:\n return self.default(self.s)\n\n def visit_instance(self, t: Instance) -> Type:\n if isinstance(self.s, Instance):\n return join_instances(t, self.s)\n elif isinstance(self.s, FunctionLike):\n return join_types(t, self.s.fallback)\n elif isinstance(self.s, TypeType):\n return join_types(t, self.s)\n else:\n return self.default(self.s)\n\n def visit_callable_type(self, t: CallableType) -> Type:\n # TODO: Consider subtyping instead of just similarity.\n if isinstance(self.s, CallableType) and is_similar_callables(t, self.s):\n return combine_similar_callables(t, self.s)\n elif isinstance(self.s, Overloaded):\n # Switch the order of arguments to that we'll get to visit_overloaded.\n return join_types(t, self.s)\n else:\n return join_types(t.fallback, self.s)\n\n def visit_overloaded(self, t: Overloaded) -> Type:\n # This is more complex than most other cases. Here are some\n # examples that illustrate how this works.\n #\n # First let's define a concise notation:\n # - Cn are callable types (for n in 1, 2, ...)\n # - Ov(C1, C2, ...) is an overloaded type with items C1, C2, ...\n # - Callable[[T, ...], S] is written as [T, ...] -> S.\n #\n # We want some basic properties to hold (assume Cn are all\n # unrelated via Any-similarity):\n #\n # join(Ov(C1, C2), C1) == C1\n # join(Ov(C1, C2), Ov(C1, C2)) == Ov(C1, C2)\n # join(Ov(C1, C2), Ov(C1, C3)) == C1\n # join(Ov(C2, C2), C3) == join of fallback types\n #\n # The presence of Any types makes things more interesting. The join is the\n # most general type we can get with respect to Any:\n #\n # join(Ov([int] -> int, [str] -> str), [Any] -> str) == Any -> str\n #\n # We could use a simplification step that removes redundancies, but that's not\n # implemented right now. 
Consider this example, where we get a redundancy:\n #\n # join(Ov([int, Any] -> Any, [str, Any] -> Any), [Any, int] -> Any) ==\n # Ov([Any, int] -> Any, [Any, int] -> Any)\n #\n # TODO: Use callable subtyping instead of just similarity.\n result = [] # type: List[CallableType]\n s = self.s\n if isinstance(s, FunctionLike):\n # The interesting case where both types are function types.\n for t_item in t.items():\n for s_item in s.items():\n if is_similar_callables(t_item, s_item):\n result.append(combine_similar_callables(t_item, s_item))\n if result:\n # TODO: Simplify redundancies from the result.\n if len(result) == 1:\n return result[0]\n else:\n return Overloaded(result)\n return join_types(t.fallback, s.fallback)\n return join_types(t.fallback, s)\n\n def visit_tuple_type(self, t: TupleType) -> Type:\n if isinstance(self.s, TupleType) and self.s.length() == t.length():\n items = [] # type: List[Type]\n for i in range(t.length()):\n items.append(self.join(t.items[i], self.s.items[i]))\n # join fallback types if they are different\n from typing import cast\n return TupleType(items, cast(Instance, join_instances(self.s.fallback, t.fallback)))\n else:\n return self.default(self.s)\n\n def visit_partial_type(self, t: PartialType) -> Type:\n # We only have partial information so we can't decide the join result. We should\n # never get here.\n assert False, \"Internal error\"\n\n def visit_type_type(self, t: TypeType) -> Type:\n if isinstance(self.s, TypeType):\n return TypeType(self.join(t.item, self.s.item), line=t.line)\n elif isinstance(self.s, Instance) and self.s.type.fullname() == 'builtins.type':\n return self.s\n else:\n return self.default(self.s)\n\n def join(self, s: Type, t: Type) -> Type:\n return join_types(s, t)\n\n def default(self, typ: Type) -> Type:\n if isinstance(typ, Instance):\n return object_from_instance(typ)\n elif isinstance(typ, UnboundType):\n return AnyType()\n elif isinstance(typ, Void) or isinstance(typ, ErrorType):\n return ErrorType()\n elif isinstance(typ, TupleType):\n return self.default(typ.fallback)\n elif isinstance(typ, FunctionLike):\n return self.default(typ.fallback)\n elif isinstance(typ, TypeVarType):\n return self.default(typ.upper_bound)\n else:\n return AnyType()\n\n\ndef join_instances(t: Instance, s: Instance) -> Type:\n \"\"\"Calculate the join of two instance types.\n\n Return ErrorType if the result is ambiguous.\n \"\"\"\n if t.type == s.type:\n # Simplest case: join two types with the same base type (but\n # potentially different arguments).\n if is_subtype(t, s) or is_subtype(s, t):\n # Compatible; combine type arguments.\n args = [] # type: List[Type]\n for i in range(len(t.args)):\n args.append(join_types(t.args[i], s.args[i]))\n return Instance(t.type, args)\n else:\n # Incompatible; return trivial result object.\n return object_from_instance(t)\n elif t.type.bases and is_subtype_ignoring_tvars(t, s):\n return join_instances_via_supertype(t, s)\n else:\n # Now t is not a subtype of s, and t != s. Now s could be a subtype\n # of t; alternatively, we need to find a common supertype. 
This works\n # in of the both cases.\n return join_instances_via_supertype(s, t)\n\n\ndef join_instances_via_supertype(t: Instance, s: Instance) -> Type:\n # Give preference to joins via duck typing relationship, so that\n # join(int, float) == float, for example.\n if t.type._promote and is_subtype(t.type._promote, s):\n return join_types(t.type._promote, s)\n elif s.type._promote and is_subtype(s.type._promote, t):\n return join_types(t, s.type._promote)\n # Compute the \"best\" supertype of t when joined with s.\n # The definition of \"best\" may evolve; for now it is the one with\n # the longest MRO. Ties are broken by using the earlier base.\n best = None # type: Type\n for base in t.type.bases:\n mapped = map_instance_to_supertype(t, base.type)\n res = join_instances(mapped, s)\n if best is None or is_better(res, best):\n best = res\n assert best is not None\n return best\n\n\ndef is_better(t: Type, s: Type) -> bool:\n # Given two possible results from join_instances_via_supertype(),\n # indicate whether t is the better one.\n if isinstance(t, Instance):\n if not isinstance(s, Instance):\n return True\n # Use len(mro) as a proxy for the better choice.\n if len(t.type.mro) > len(s.type.mro):\n return True\n return False\n\n\ndef is_similar_callables(t: CallableType, s: CallableType) -> bool:\n \"\"\"Return True if t and s are equivalent and have identical numbers of\n arguments, default arguments and varargs.\n \"\"\"\n\n return (len(t.arg_types) == len(s.arg_types) and t.min_args == s.min_args\n and t.is_var_arg == s.is_var_arg and is_equivalent(t, s))\n\n\ndef combine_similar_callables(t: CallableType, s: CallableType) -> CallableType:\n arg_types = [] # type: List[Type]\n for i in range(len(t.arg_types)):\n arg_types.append(join_types(t.arg_types[i], s.arg_types[i]))\n # TODO kinds and argument names\n # The fallback type can be either 'function' or 'type'. The result should have 'type' as\n # fallback only if both operands have it as 'type'.\n if t.fallback.type.fullname() != 'builtins.type':\n fallback = t.fallback\n else:\n fallback = s.fallback\n return t.copy_modified(arg_types=arg_types,\n ret_type=join_types(t.ret_type, s.ret_type),\n fallback=fallback,\n name=None)\n\n\ndef object_from_instance(instance: Instance) -> Instance:\n \"\"\"Construct the type 'builtins.object' from an instance type.\"\"\"\n # Use the fact that 'object' is always the last class in the mro.\n res = Instance(instance.type.mro[-1], [])\n return res\n\n\ndef join_type_list(types: List[Type]) -> Type:\n if not types:\n # This is a little arbitrary but reasonable. Any empty tuple should be compatible\n # with all variable length tuples, and this makes it possible. A better approach\n # would be to use a special bottom type, which we do when strict Optional\n # checking is enabled.\n if experiments.STRICT_OPTIONAL:\n return UninhabitedType()\n else:\n return NoneTyp()\n joined = types[0]\n for t in types[1:]:\n joined = join_types(joined, t)\n return joined\n",
"path": "mypy/join.py"
}
] | diff --git a/mypy/join.py b/mypy/join.py
index c6d63331b233..c88ed2b7e58f 100644
--- a/mypy/join.py
+++ b/mypy/join.py
@@ -82,6 +82,9 @@ def join_types(s: Type, t: Type) -> Type:
 
     if isinstance(s, NoneTyp) and not isinstance(t, NoneTyp):
        s, t = t, s
 
+    if isinstance(s, UninhabitedType) and not isinstance(t, UninhabitedType):
+        s, t = t, s
+
     # Use a visitor to handle non-trivial cases.
     return t.accept(TypeJoinVisitor(s))
|
djangopackages__djangopackages-959 | 🐛 package_updater is missing the `all` argument
**Describe the bug**
The `package_updater` management command's `command()` function declares an `all` argument that is never registered as a Click option, so running the command raises a `TypeError`. This also means we should at least be testing that we can invoke `--help` on this command.
**To Reproduce**
```
root@web2:~# /usr/bin/docker compose -f /code/djangopackages/docker-compose.prod.yml run --rm django-a python manage.py package_updater
[+] Running 1/0
⠿ Container djangopackages-redis-1 Running 0.0s
Postgres is up - continuing...
Traceback (most recent call last):
File "/app/manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.9/site-packages/djclick/adapter.py", line 68, in run_from_argv
exit_code = self.main(
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.9/site-packages/djclick/adapter.py", line 50, in invoke
return super(DjangoCommandMixin, self).invoke(ctx)
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
TypeError: command() missing 1 required positional argument: 'all'
```
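For context, a minimal sketch using plain `click` (not `djclick`, and with throwaway command names) of why this blows up: Click only passes keyword arguments for parameters registered via `@click.option`/`@click.argument`, so a function parameter like `all` with no matching decorator is never supplied.
```
import click
from click.testing import CliRunner


@click.command()
@click.option("--limit", default=None, type=int)
def broken(all, limit):  # `all` has no decorator registering it
    """Click only passes `limit`, so Python raises TypeError for `all`."""


@click.command()
@click.option("--limit", default=None, type=int)
def fixed(limit):  # parameters match the registered options exactly
    click.echo(f"limit={limit}")


runner = CliRunner()
result = runner.invoke(broken, [])
assert isinstance(result.exception, TypeError)  # missing positional arg: 'all'

result = runner.invoke(fixed, ["--limit", "5"])
assert result.output.strip() == "limit=5"
```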
| [
{
"content": "import logging\nfrom time import sleep\n\nimport djclick as click\nfrom django.conf import settings\nfrom django.db.models import F\nfrom django.utils import timezone\nfrom github3 import login as github_login\nfrom github3.exceptions import NotFoundError, UnexpectedResponse\nfrom rich import print\n\nfrom core.utils import healthcheck\nfrom package.models import Package\n\nlogger = logging.getLogger(__name__)\n\n\nclass PackageUpdaterException(Exception):\n def __init__(self, error, title):\n log_message = f\"For {title}, {type(error)}: {error}\"\n logging.critical(log_message)\n logging.exception(error)\n\n\[email protected]()\[email protected](\"--limit\", default=None, type=int)\ndef command(all, limit):\n \"\"\"Updates all the GitHub Packages in the database.\"\"\"\n\n github = github_login(token=settings.GITHUB_TOKEN)\n\n packages = Package.objects.filter(\n date_deprecated__isnull=True, last_exception_count__lte=5\n ).order_by(\"last_fetched\")\n if limit:\n packages = packages[:limit]\n\n for package in packages.iterator():\n # Simple attempt to deal with Github rate limiting\n while True:\n if github.ratelimit_remaining < 50:\n print(f\"github.ratelimit_remaining=={github.ratelimit_remaining}\")\n logger.debug(f\"{__file__}::handle::sleep(120)\")\n sleep(120)\n break\n\n try:\n try:\n package.fetch_metadata(fetch_pypi=False, fetch_repo=True)\n package.fetch_commits()\n package.save()\n\n except NotFoundError as e:\n logger.error(f\"Package was not found for {package.title}.\")\n\n Package.objects.filter(pk=package.pk).update(\n date_deprecated=timezone.now(),\n last_exception=e,\n last_exception_at=timezone.now(),\n last_exception_count=F(\"last_exception_count\") + 1,\n )\n\n except UnexpectedResponse as e:\n logger.error(f\"Empty repo found for {package.title}.\")\n\n Package.objects.filter(pk=package.pk).update(\n date_deprecated=timezone.now(),\n last_exception=e,\n last_exception_at=timezone.now(),\n last_exception_count=F(\"last_exception_count\") + 1,\n )\n\n except Exception as e:\n logger.error(\n f\"Error while fetching package details for {package.title}.\"\n )\n raise PackageUpdaterException(e, package.title)\n\n except PackageUpdaterException as e:\n logger.error(f\"Unable to update {package.title}\", exc_info=True)\n Package.objects.filter(pk=package.pk).update(\n last_exception=e,\n last_exception_at=timezone.now(),\n last_exception_count=F(\"last_exception_count\") + 1,\n )\n\n logger.debug(f\"{__file__}::handle::sleep(1)\")\n sleep(1)\n\n healthcheck(settings.PACKAGE_HEALTHCHECK_URL)\n",
"path": "package/management/commands/package_updater.py"
}
] | [
{
"content": "import logging\nfrom time import sleep\n\nimport djclick as click\nfrom django.conf import settings\nfrom django.db.models import F\nfrom django.utils import timezone\nfrom github3 import login as github_login\nfrom github3.exceptions import NotFoundError, UnexpectedResponse\nfrom rich import print\n\nfrom core.utils import healthcheck\nfrom package.models import Package\n\nlogger = logging.getLogger(__name__)\n\n\nclass PackageUpdaterException(Exception):\n def __init__(self, error, title):\n log_message = f\"For {title}, {type(error)}: {error}\"\n logging.critical(log_message)\n logging.exception(error)\n\n\[email protected]()\[email protected](\"--limit\", default=None, type=int)\ndef command(limit):\n \"\"\"Updates all the GitHub Packages in the database.\"\"\"\n\n github = github_login(token=settings.GITHUB_TOKEN)\n\n packages = Package.objects.filter(\n date_deprecated__isnull=True, last_exception_count__lte=5\n ).order_by(\"last_fetched\")\n if limit:\n packages = packages[:limit]\n\n for package in packages.iterator():\n # Simple attempt to deal with Github rate limiting\n while True:\n if github.ratelimit_remaining < 50:\n print(f\"github.ratelimit_remaining=={github.ratelimit_remaining}\")\n logger.debug(f\"{__file__}::handle::sleep(120)\")\n sleep(120)\n break\n\n try:\n try:\n package.fetch_metadata(fetch_pypi=False, fetch_repo=True)\n package.fetch_commits()\n package.save()\n\n except NotFoundError as e:\n logger.error(f\"Package was not found for {package.title}.\")\n\n Package.objects.filter(pk=package.pk).update(\n date_deprecated=timezone.now(),\n last_exception=e,\n last_exception_at=timezone.now(),\n last_exception_count=F(\"last_exception_count\") + 1,\n )\n\n except UnexpectedResponse as e:\n logger.error(f\"Empty repo found for {package.title}.\")\n\n Package.objects.filter(pk=package.pk).update(\n date_deprecated=timezone.now(),\n last_exception=e,\n last_exception_at=timezone.now(),\n last_exception_count=F(\"last_exception_count\") + 1,\n )\n\n except Exception as e:\n logger.error(\n f\"Error while fetching package details for {package.title}.\"\n )\n raise PackageUpdaterException(e, package.title)\n\n except PackageUpdaterException as e:\n logger.error(f\"Unable to update {package.title}\", exc_info=True)\n Package.objects.filter(pk=package.pk).update(\n last_exception=e,\n last_exception_at=timezone.now(),\n last_exception_count=F(\"last_exception_count\") + 1,\n )\n\n logger.debug(f\"{__file__}::handle::sleep(1)\")\n sleep(1)\n\n healthcheck(settings.PACKAGE_HEALTHCHECK_URL)\n",
"path": "package/management/commands/package_updater.py"
}
] | diff --git a/package/management/commands/package_updater.py b/package/management/commands/package_updater.py
index 3d97c8c1c..a0f1ac24b 100644
--- a/package/management/commands/package_updater.py
+++ b/package/management/commands/package_updater.py
@@ -24,7 +24,7 @@ def __init__(self, error, title):
 
 @click.command()
 @click.option("--limit", default=None, type=int)
-def command(all, limit):
+def command(limit):
     """Updates all the GitHub Packages in the database."""
 
     github = github_login(token=settings.GITHUB_TOKEN)
diff --git a/package/tests/test_package_updater.py b/package/tests/test_package_updater.py
new file mode 100644
index 000000000..4e4d85a96
--- /dev/null
+++ b/package/tests/test_package_updater.py
@@ -0,0 +1,9 @@
+import pytest
+
+from click import exceptions
+from django.core.management import call_command
+
+
+def test_package_updater_command(db):
+    with pytest.raises(exceptions.Exit):
+        call_command("package_updater", "--help")
|
encode__django-rest-framework-510 | Bug : JSON integer won't match integer in a ChoiceField
I have a Model with:
```
PENDING = 1
COMPLETE = 2
CANCELLED = 3
STATUS = (
(PENDING, 'Pending'),
(COMPLETE, 'Complete'),
(CANCELLED, 'Cancelled'),
)
(...)
status = models.PositiveIntegerField(default=COMPLETE, choices=STATUS)
```
And when I perform a PUT (update) on that model (using the default ModelSerializer) with the following JSON:
```
{"id":8,"status":3,"t_type":1,"description":"Transaction example"}
```
I get the following error message:
```
"status" : "Select a valid choice. 3 is not one of the available choices."
```
Which it clearly is.
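A minimal sketch of the failure mode (plain `str` standing in for Django's `smart_unicode`, and hypothetical helper names): `ChoiceField.valid_value` stringifies only the choice key, so an integer arriving from a JSON payload never compares equal, while the same value sent as a string does.
```
STATUS = ((1, "Pending"), (2, "Complete"), (3, "Cancelled"))


def valid_value_buggy(value, choices):
    # Mirrors the check in ChoiceField.valid_value: only the choice key is
    # coerced to text, so 3 == "3" is False and the integer is rejected.
    return any(value == str(key) for key, _label in choices)


def valid_value_coerced(value, choices):
    # One possible remedy (not necessarily what the PR did): compare both
    # sides as text so 3 and "3" are treated as the same choice.
    return any(str(value) == str(key) for key, _label in choices)


assert not valid_value_buggy(3, STATUS)   # "Select a valid choice. 3 is not..."
assert valid_value_buggy("3", STATUS)     # the same value as a string passes
assert valid_value_coerced(3, STATUS)     # coercing both sides accepts the int
```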
| [
{
"content": "import copy\nimport datetime\nimport inspect\nimport re\nimport warnings\n\nfrom io import BytesIO\n\nfrom django.core import validators\nfrom django.core.exceptions import ObjectDoesNotExist, ValidationError\nfrom django.core.urlresolvers import resolve, get_script_prefix\nfrom django.conf import settings\nfrom django import forms\nfrom django.forms import widgets\nfrom django.forms.models import ModelChoiceIterator\nfrom django.utils.encoding import is_protected_type, smart_unicode\nfrom django.utils.translation import ugettext_lazy as _\nfrom rest_framework.reverse import reverse\nfrom rest_framework.compat import parse_date, parse_datetime\nfrom rest_framework.compat import timezone\nfrom urlparse import urlparse\n\n\ndef is_simple_callable(obj):\n \"\"\"\n True if the object is a callable that takes no arguments.\n \"\"\"\n return (\n (inspect.isfunction(obj) and not inspect.getargspec(obj)[0]) or\n (inspect.ismethod(obj) and len(inspect.getargspec(obj)[0]) <= 1)\n )\n\n\nclass Field(object):\n creation_counter = 0\n empty = ''\n type_name = None\n _use_files = None\n form_field_class = forms.CharField\n\n def __init__(self, source=None):\n self.parent = None\n\n self.creation_counter = Field.creation_counter\n Field.creation_counter += 1\n\n self.source = source\n\n def initialize(self, parent, field_name):\n \"\"\"\n Called to set up a field prior to field_to_native or field_from_native.\n\n parent - The parent serializer.\n model_field - The model field this field corresponds to, if one exists.\n \"\"\"\n self.parent = parent\n self.root = parent.root or parent\n self.context = self.root.context\n if self.root.partial:\n self.required = False\n\n def field_from_native(self, data, files, field_name, into):\n \"\"\"\n Given a dictionary and a field name, updates the dictionary `into`,\n with the field and it's deserialized value.\n \"\"\"\n return\n\n def field_to_native(self, obj, field_name):\n \"\"\"\n Given and object and a field name, returns the value that should be\n serialized for that field.\n \"\"\"\n if obj is None:\n return self.empty\n\n if self.source == '*':\n return self.to_native(obj)\n\n if self.source:\n value = obj\n for component in self.source.split('.'):\n value = getattr(value, component)\n if is_simple_callable(value):\n value = value()\n else:\n value = getattr(obj, field_name)\n return self.to_native(value)\n\n def to_native(self, value):\n \"\"\"\n Converts the field's value into it's simple representation.\n \"\"\"\n if is_simple_callable(value):\n value = value()\n\n if is_protected_type(value):\n return value\n elif hasattr(value, '__iter__') and not isinstance(value, (dict, basestring)):\n return [self.to_native(item) for item in value]\n elif isinstance(value, dict):\n return dict(map(self.to_native, (k, v)) for k, v in value.items())\n return smart_unicode(value)\n\n def attributes(self):\n \"\"\"\n Returns a dictionary of attributes to be used when serializing to xml.\n \"\"\"\n if self.type_name:\n return {'type': self.type_name}\n return {}\n\n\nclass WritableField(Field):\n \"\"\"\n Base for read/write fields.\n \"\"\"\n default_validators = []\n default_error_messages = {\n 'required': _('This field is required.'),\n 'invalid': _('Invalid value.'),\n }\n widget = widgets.TextInput\n default = None\n\n def __init__(self, source=None, read_only=False, required=None,\n validators=[], error_messages=None, widget=None,\n default=None, blank=None):\n\n super(WritableField, self).__init__(source=source)\n\n self.read_only = read_only\n if 
required is None:\n self.required = not(read_only)\n else:\n assert not (read_only and required), \"Cannot set required=True and read_only=True\"\n self.required = required\n\n messages = {}\n for c in reversed(self.__class__.__mro__):\n messages.update(getattr(c, 'default_error_messages', {}))\n messages.update(error_messages or {})\n self.error_messages = messages\n\n self.validators = self.default_validators + validators\n self.default = default if default is not None else self.default\n self.blank = blank\n\n # Widgets are ony used for HTML forms.\n widget = widget or self.widget\n if isinstance(widget, type):\n widget = widget()\n self.widget = widget\n\n def validate(self, value):\n if value in validators.EMPTY_VALUES and self.required:\n raise ValidationError(self.error_messages['required'])\n\n def run_validators(self, value):\n if value in validators.EMPTY_VALUES:\n return\n errors = []\n for v in self.validators:\n try:\n v(value)\n except ValidationError as e:\n if hasattr(e, 'code') and e.code in self.error_messages:\n message = self.error_messages[e.code]\n if e.params:\n message = message % e.params\n errors.append(message)\n else:\n errors.extend(e.messages)\n if errors:\n raise ValidationError(errors)\n\n def field_from_native(self, data, files, field_name, into):\n \"\"\"\n Given a dictionary and a field name, updates the dictionary `into`,\n with the field and it's deserialized value.\n \"\"\"\n if self.read_only:\n return\n\n try:\n if self._use_files:\n native = files[field_name]\n else:\n native = data[field_name]\n except KeyError:\n if self.default is not None:\n native = self.default\n else:\n if self.required:\n raise ValidationError(self.error_messages['required'])\n return\n\n value = self.from_native(native)\n if self.source == '*':\n if value:\n into.update(value)\n else:\n self.validate(value)\n self.run_validators(value)\n into[self.source or field_name] = value\n\n def from_native(self, value):\n \"\"\"\n Reverts a simple representation back to the field's value.\n \"\"\"\n return value\n\n\nclass ModelField(WritableField):\n \"\"\"\n A generic field that can be used against an arbitrary model field.\n \"\"\"\n def __init__(self, *args, **kwargs):\n try:\n self.model_field = kwargs.pop('model_field')\n except:\n raise ValueError(\"ModelField requires 'model_field' kwarg\")\n\n self.min_length = kwargs.pop('min_length',\n getattr(self.model_field, 'min_length', None))\n self.max_length = kwargs.pop('max_length',\n getattr(self.model_field, 'max_length', None))\n\n super(ModelField, self).__init__(*args, **kwargs)\n\n if self.min_length is not None:\n self.validators.append(validators.MinLengthValidator(self.min_length))\n if self.max_length is not None:\n self.validators.append(validators.MaxLengthValidator(self.max_length))\n\n def from_native(self, value):\n rel = getattr(self.model_field, \"rel\", None)\n if rel is not None:\n return rel.to._meta.get_field(rel.field_name).to_python(value)\n else:\n return self.model_field.to_python(value)\n\n def field_to_native(self, obj, field_name):\n value = self.model_field._get_val_from_obj(obj)\n if is_protected_type(value):\n return value\n return self.model_field.value_to_string(obj)\n\n def attributes(self):\n return {\n \"type\": self.model_field.get_internal_type()\n }\n\n##### Relational fields #####\n\n\n# Not actually Writable, but subclasses may need to be.\nclass RelatedField(WritableField):\n \"\"\"\n Base class for related model fields.\n\n If not overridden, this represents a to-one relationship, using 
the unicode\n representation of the target.\n \"\"\"\n widget = widgets.Select\n cache_choices = False\n empty_label = None\n default_read_only = True # TODO: Remove this\n\n def __init__(self, *args, **kwargs):\n self.queryset = kwargs.pop('queryset', None)\n self.null = kwargs.pop('null', False)\n super(RelatedField, self).__init__(*args, **kwargs)\n self.read_only = kwargs.pop('read_only', self.default_read_only)\n\n def initialize(self, parent, field_name):\n super(RelatedField, self).initialize(parent, field_name)\n if self.queryset is None and not self.read_only:\n try:\n manager = getattr(self.parent.opts.model, self.source or field_name)\n if hasattr(manager, 'related'): # Forward\n self.queryset = manager.related.model._default_manager.all()\n else: # Reverse\n self.queryset = manager.field.rel.to._default_manager.all()\n except:\n raise\n msg = ('Serializer related fields must include a `queryset`' +\n ' argument or set `read_only=True')\n raise Exception(msg)\n\n ### We need this stuff to make form choices work...\n\n # def __deepcopy__(self, memo):\n # result = super(RelatedField, self).__deepcopy__(memo)\n # result.queryset = result.queryset\n # return result\n\n def prepare_value(self, obj):\n return self.to_native(obj)\n\n def label_from_instance(self, obj):\n \"\"\"\n Return a readable representation for use with eg. select widgets.\n \"\"\"\n desc = smart_unicode(obj)\n ident = smart_unicode(self.to_native(obj))\n if desc == ident:\n return desc\n return \"%s - %s\" % (desc, ident)\n\n def _get_queryset(self):\n return self._queryset\n\n def _set_queryset(self, queryset):\n self._queryset = queryset\n self.widget.choices = self.choices\n\n queryset = property(_get_queryset, _set_queryset)\n\n def _get_choices(self):\n # If self._choices is set, then somebody must have manually set\n # the property self.choices. In this case, just return self._choices.\n if hasattr(self, '_choices'):\n return self._choices\n\n # Otherwise, execute the QuerySet in self.queryset to determine the\n # choices dynamically. Return a fresh ModelChoiceIterator that has not been\n # consumed. Note that we're instantiating a new ModelChoiceIterator *each*\n # time _get_choices() is called (and, thus, each time self.choices is\n # accessed) so that we can ensure the QuerySet has not been consumed. 
This\n # construct might look complicated but it allows for lazy evaluation of\n # the queryset.\n return ModelChoiceIterator(self)\n\n def _set_choices(self, value):\n # Setting choices also sets the choices on the widget.\n # choices can be any iterable, but we call list() on it because\n # it will be consumed more than once.\n self._choices = self.widget.choices = list(value)\n\n choices = property(_get_choices, _set_choices)\n\n ### Regular serializer stuff...\n\n def field_to_native(self, obj, field_name):\n value = getattr(obj, self.source or field_name)\n return self.to_native(value)\n\n def field_from_native(self, data, files, field_name, into):\n if self.read_only:\n return\n\n value = data.get(field_name)\n\n if value in (None, '') and not self.null:\n raise ValidationError('Value may not be null')\n elif value in (None, '') and self.null:\n into[(self.source or field_name)] = None\n else:\n into[(self.source or field_name)] = self.from_native(value)\n\n\nclass ManyRelatedMixin(object):\n \"\"\"\n Mixin to convert a related field to a many related field.\n \"\"\"\n widget = widgets.SelectMultiple\n\n def field_to_native(self, obj, field_name):\n value = getattr(obj, self.source or field_name)\n return [self.to_native(item) for item in value.all()]\n\n def field_from_native(self, data, files, field_name, into):\n if self.read_only:\n return\n\n try:\n # Form data\n value = data.getlist(self.source or field_name)\n except:\n # Non-form data\n value = data.get(self.source or field_name)\n else:\n if value == ['']:\n value = []\n into[field_name] = [self.from_native(item) for item in value]\n\n\nclass ManyRelatedField(ManyRelatedMixin, RelatedField):\n \"\"\"\n Base class for related model managers.\n\n If not overridden, this represents a to-many relationship, using the unicode\n representations of the target, and is read-only.\n \"\"\"\n pass\n\n\n### PrimaryKey relationships\n\nclass PrimaryKeyRelatedField(RelatedField):\n \"\"\"\n Represents a to-one relationship as a pk value.\n \"\"\"\n default_read_only = False\n form_field_class = forms.ChoiceField\n\n # TODO: Remove these field hacks...\n def prepare_value(self, obj):\n return self.to_native(obj.pk)\n\n def label_from_instance(self, obj):\n \"\"\"\n Return a readable representation for use with eg. 
select widgets.\n \"\"\"\n desc = smart_unicode(obj)\n ident = smart_unicode(self.to_native(obj.pk))\n if desc == ident:\n return desc\n return \"%s - %s\" % (desc, ident)\n\n # TODO: Possibly change this to just take `obj`, through prob less performant\n def to_native(self, pk):\n return pk\n\n def from_native(self, data):\n if self.queryset is None:\n raise Exception('Writable related fields must include a `queryset` argument')\n\n try:\n return self.queryset.get(pk=data)\n except ObjectDoesNotExist:\n msg = \"Invalid pk '%s' - object does not exist.\" % smart_unicode(data)\n raise ValidationError(msg)\n\n def field_to_native(self, obj, field_name):\n try:\n # Prefer obj.serializable_value for performance reasons\n pk = obj.serializable_value(self.source or field_name)\n except AttributeError:\n # RelatedObject (reverse relationship)\n obj = getattr(obj, self.source or field_name)\n return self.to_native(obj.pk)\n # Forward relationship\n return self.to_native(pk)\n\n\nclass ManyPrimaryKeyRelatedField(ManyRelatedField):\n \"\"\"\n Represents a to-many relationship as a pk value.\n \"\"\"\n default_read_only = False\n form_field_class = forms.MultipleChoiceField\n\n def prepare_value(self, obj):\n return self.to_native(obj.pk)\n\n def label_from_instance(self, obj):\n \"\"\"\n Return a readable representation for use with eg. select widgets.\n \"\"\"\n desc = smart_unicode(obj)\n ident = smart_unicode(self.to_native(obj.pk))\n if desc == ident:\n return desc\n return \"%s - %s\" % (desc, ident)\n\n def to_native(self, pk):\n return pk\n\n def field_to_native(self, obj, field_name):\n try:\n # Prefer obj.serializable_value for performance reasons\n queryset = obj.serializable_value(self.source or field_name)\n except AttributeError:\n # RelatedManager (reverse relationship)\n queryset = getattr(obj, self.source or field_name)\n return [self.to_native(item.pk) for item in queryset.all()]\n # Forward relationship\n return [self.to_native(item.pk) for item in queryset.all()]\n\n def from_native(self, data):\n if self.queryset is None:\n raise Exception('Writable related fields must include a `queryset` argument')\n\n try:\n return self.queryset.get(pk=data)\n except ObjectDoesNotExist:\n msg = \"Invalid pk '%s' - object does not exist.\" % smart_unicode(data)\n raise ValidationError(msg)\n\n### Slug relationships\n\n\nclass SlugRelatedField(RelatedField):\n default_read_only = False\n form_field_class = forms.ChoiceField\n\n def __init__(self, *args, **kwargs):\n self.slug_field = kwargs.pop('slug_field', None)\n assert self.slug_field, 'slug_field is required'\n super(SlugRelatedField, self).__init__(*args, **kwargs)\n\n def to_native(self, obj):\n return getattr(obj, self.slug_field)\n\n def from_native(self, data):\n if self.queryset is None:\n raise Exception('Writable related fields must include a `queryset` argument')\n\n try:\n return self.queryset.get(**{self.slug_field: data})\n except ObjectDoesNotExist:\n raise ValidationError('Object with %s=%s does not exist.' 
%\n (self.slug_field, unicode(data)))\n\n\nclass ManySlugRelatedField(ManyRelatedMixin, SlugRelatedField):\n form_field_class = forms.MultipleChoiceField\n\n\n### Hyperlinked relationships\n\nclass HyperlinkedRelatedField(RelatedField):\n \"\"\"\n Represents a to-one relationship, using hyperlinking.\n \"\"\"\n pk_url_kwarg = 'pk'\n slug_field = 'slug'\n slug_url_kwarg = None # Defaults to same as `slug_field` unless overridden\n default_read_only = False\n form_field_class = forms.ChoiceField\n\n def __init__(self, *args, **kwargs):\n try:\n self.view_name = kwargs.pop('view_name')\n except:\n raise ValueError(\"Hyperlinked field requires 'view_name' kwarg\")\n\n self.slug_field = kwargs.pop('slug_field', self.slug_field)\n default_slug_kwarg = self.slug_url_kwarg or self.slug_field\n self.pk_url_kwarg = kwargs.pop('pk_url_kwarg', self.pk_url_kwarg)\n self.slug_url_kwarg = kwargs.pop('slug_url_kwarg', default_slug_kwarg)\n\n self.format = kwargs.pop('format', None)\n super(HyperlinkedRelatedField, self).__init__(*args, **kwargs)\n\n def get_slug_field(self):\n \"\"\"\n Get the name of a slug field to be used to look up by slug.\n \"\"\"\n return self.slug_field\n\n def to_native(self, obj):\n view_name = self.view_name\n request = self.context.get('request', None)\n format = self.format or self.context.get('format', None)\n pk = getattr(obj, 'pk', None)\n if pk is None:\n return\n kwargs = {self.pk_url_kwarg: pk}\n try:\n return reverse(view_name, kwargs=kwargs, request=request, format=format)\n except:\n pass\n\n slug = getattr(obj, self.slug_field, None)\n\n if not slug:\n raise ValidationError('Could not resolve URL for field using view name \"%s\"' % view_name)\n\n kwargs = {self.slug_url_kwarg: slug}\n try:\n return reverse(self.view_name, kwargs=kwargs, request=request, format=format)\n except:\n pass\n\n kwargs = {self.pk_url_kwarg: obj.pk, self.slug_url_kwarg: slug}\n try:\n return reverse(self.view_name, kwargs=kwargs, request=request, format=format)\n except:\n pass\n\n raise ValidationError('Could not resolve URL for field using view name \"%s\"' % view_name)\n\n def from_native(self, value):\n # Convert URL -> model instance pk\n # TODO: Use values_list\n if self.queryset is None:\n raise Exception('Writable related fields must include a `queryset` argument')\n\n if value.startswith('http:') or value.startswith('https:'):\n # If needed convert absolute URLs to relative path\n value = urlparse(value).path\n prefix = get_script_prefix()\n if value.startswith(prefix):\n value = '/' + value[len(prefix):]\n\n try:\n match = resolve(value)\n except:\n raise ValidationError('Invalid hyperlink - No URL match')\n\n if match.url_name != self.view_name:\n raise ValidationError('Invalid hyperlink - Incorrect URL match')\n\n pk = match.kwargs.get(self.pk_url_kwarg, None)\n slug = match.kwargs.get(self.slug_url_kwarg, None)\n\n # Try explicit primary key.\n if pk is not None:\n queryset = self.queryset.filter(pk=pk)\n # Next, try looking up by slug.\n elif slug is not None:\n slug_field = self.get_slug_field()\n queryset = self.queryset.filter(**{slug_field: slug})\n # If none of those are defined, it's an error.\n else:\n raise ValidationError('Invalid hyperlink')\n\n try:\n obj = queryset.get()\n except ObjectDoesNotExist:\n raise ValidationError('Invalid hyperlink - object does not exist.')\n return obj\n\n\nclass ManyHyperlinkedRelatedField(ManyRelatedMixin, HyperlinkedRelatedField):\n \"\"\"\n Represents a to-many relationship, using hyperlinking.\n \"\"\"\n form_field_class = 
forms.MultipleChoiceField\n\n\nclass HyperlinkedIdentityField(Field):\n \"\"\"\n Represents the instance, or a property on the instance, using hyperlinking.\n \"\"\"\n pk_url_kwarg = 'pk'\n slug_field = 'slug'\n slug_url_kwarg = None # Defaults to same as `slug_field` unless overridden\n\n def __init__(self, *args, **kwargs):\n # TODO: Make view_name mandatory, and have the\n # HyperlinkedModelSerializer set it on-the-fly\n self.view_name = kwargs.pop('view_name', None)\n self.format = kwargs.pop('format', None)\n\n self.slug_field = kwargs.pop('slug_field', self.slug_field)\n default_slug_kwarg = self.slug_url_kwarg or self.slug_field\n self.pk_url_kwarg = kwargs.pop('pk_url_kwarg', self.pk_url_kwarg)\n self.slug_url_kwarg = kwargs.pop('slug_url_kwarg', default_slug_kwarg)\n\n super(HyperlinkedIdentityField, self).__init__(*args, **kwargs)\n\n def field_to_native(self, obj, field_name):\n request = self.context.get('request', None)\n format = self.format or self.context.get('format', None)\n view_name = self.view_name or self.parent.opts.view_name\n kwargs = {self.pk_url_kwarg: obj.pk}\n try:\n return reverse(view_name, kwargs=kwargs, request=request, format=format)\n except:\n pass\n\n slug = getattr(obj, self.slug_field, None)\n\n if not slug:\n raise ValidationError('Could not resolve URL for field using view name \"%s\"' % view_name)\n\n kwargs = {self.slug_url_kwarg: slug}\n try:\n return reverse(self.view_name, kwargs=kwargs, request=request, format=format)\n except:\n pass\n\n kwargs = {self.pk_url_kwarg: obj.pk, self.slug_url_kwarg: slug}\n try:\n return reverse(self.view_name, kwargs=kwargs, request=request, format=format)\n except:\n pass\n\n raise ValidationError('Could not resolve URL for field using view name \"%s\"' % view_name)\n\n\n##### Typed Fields #####\n\nclass BooleanField(WritableField):\n type_name = 'BooleanField'\n form_field_class = forms.BooleanField\n widget = widgets.CheckboxInput\n default_error_messages = {\n 'invalid': _(u\"'%s' value must be either True or False.\"),\n }\n empty = False\n\n # Note: we set default to `False` in order to fill in missing value not\n # supplied by html form. 
TODO: Fix so that only html form input gets\n # this behavior.\n default = False\n\n def from_native(self, value):\n if value in ('true', 't', 'True', '1'):\n return True\n if value in ('false', 'f', 'False', '0'):\n return False\n return bool(value)\n\n\nclass CharField(WritableField):\n type_name = 'CharField'\n form_field_class = forms.CharField\n\n def __init__(self, max_length=None, min_length=None, *args, **kwargs):\n self.max_length, self.min_length = max_length, min_length\n super(CharField, self).__init__(*args, **kwargs)\n if min_length is not None:\n self.validators.append(validators.MinLengthValidator(min_length))\n if max_length is not None:\n self.validators.append(validators.MaxLengthValidator(max_length))\n\n def validate(self, value):\n \"\"\"\n Validates that the value is supplied (if required).\n \"\"\"\n # if empty string and allow blank\n if self.blank and not value:\n return\n else:\n super(CharField, self).validate(value)\n\n def from_native(self, value):\n if isinstance(value, basestring) or value is None:\n return value\n return smart_unicode(value)\n\n\nclass URLField(CharField):\n type_name = 'URLField'\n\n def __init__(self, **kwargs):\n kwargs['max_length'] = kwargs.get('max_length', 200)\n kwargs['validators'] = [validators.URLValidator()]\n super(URLField, self).__init__(**kwargs)\n\n\nclass SlugField(CharField):\n type_name = 'SlugField'\n\n def __init__(self, *args, **kwargs):\n kwargs['max_length'] = kwargs.get('max_length', 50)\n super(SlugField, self).__init__(*args, **kwargs)\n\n\nclass ChoiceField(WritableField):\n type_name = 'ChoiceField'\n form_field_class = forms.ChoiceField\n widget = widgets.Select\n default_error_messages = {\n 'invalid_choice': _('Select a valid choice. %(value)s is not one of the available choices.'),\n }\n\n def __init__(self, choices=(), *args, **kwargs):\n super(ChoiceField, self).__init__(*args, **kwargs)\n self.choices = choices\n\n def _get_choices(self):\n return self._choices\n\n def _set_choices(self, value):\n # Setting choices also sets the choices on the widget.\n # choices can be any iterable, but we call list() on it because\n # it will be consumed more than once.\n self._choices = self.widget.choices = list(value)\n\n choices = property(_get_choices, _set_choices)\n\n def validate(self, value):\n \"\"\"\n Validates that the input is in self.choices.\n \"\"\"\n super(ChoiceField, self).validate(value)\n if value and not self.valid_value(value):\n raise ValidationError(self.error_messages['invalid_choice'] % {'value': value})\n\n def valid_value(self, value):\n \"\"\"\n Check to see if the provided value is a valid choice.\n \"\"\"\n for k, v in self.choices:\n if isinstance(v, (list, tuple)):\n # This is an optgroup, so look inside the group for options\n for k2, v2 in v:\n if value == smart_unicode(k2):\n return True\n else:\n if value == smart_unicode(k):\n return True\n return False\n\n\nclass EmailField(CharField):\n type_name = 'EmailField'\n form_field_class = forms.EmailField\n\n default_error_messages = {\n 'invalid': _('Enter a valid e-mail address.'),\n }\n default_validators = [validators.validate_email]\n\n def from_native(self, value):\n ret = super(EmailField, self).from_native(value)\n if ret is None:\n return None\n return ret.strip()\n\n def __deepcopy__(self, memo):\n result = copy.copy(self)\n memo[id(self)] = result\n #result.widget = copy.deepcopy(self.widget, memo)\n result.validators = self.validators[:]\n return result\n\n\nclass RegexField(CharField):\n type_name = 'RegexField'\n 
form_field_class = forms.RegexField\n\n def __init__(self, regex, max_length=None, min_length=None, *args, **kwargs):\n super(RegexField, self).__init__(max_length, min_length, *args, **kwargs)\n self.regex = regex\n\n def _get_regex(self):\n return self._regex\n\n def _set_regex(self, regex):\n if isinstance(regex, basestring):\n regex = re.compile(regex)\n self._regex = regex\n if hasattr(self, '_regex_validator') and self._regex_validator in self.validators:\n self.validators.remove(self._regex_validator)\n self._regex_validator = validators.RegexValidator(regex=regex)\n self.validators.append(self._regex_validator)\n\n regex = property(_get_regex, _set_regex)\n\n def __deepcopy__(self, memo):\n result = copy.copy(self)\n memo[id(self)] = result\n result.validators = self.validators[:]\n return result\n\n\nclass DateField(WritableField):\n type_name = 'DateField'\n widget = widgets.DateInput\n form_field_class = forms.DateField\n\n default_error_messages = {\n 'invalid': _(u\"'%s' value has an invalid date format. It must be \"\n u\"in YYYY-MM-DD format.\"),\n 'invalid_date': _(u\"'%s' value has the correct format (YYYY-MM-DD) \"\n u\"but it is an invalid date.\"),\n }\n empty = None\n\n def from_native(self, value):\n if value in validators.EMPTY_VALUES:\n return None\n\n if isinstance(value, datetime.datetime):\n if timezone and settings.USE_TZ and timezone.is_aware(value):\n # Convert aware datetimes to the default time zone\n # before casting them to dates (#17742).\n default_timezone = timezone.get_default_timezone()\n value = timezone.make_naive(value, default_timezone)\n return value.date()\n if isinstance(value, datetime.date):\n return value\n\n try:\n parsed = parse_date(value)\n if parsed is not None:\n return parsed\n except ValueError:\n msg = self.error_messages['invalid_date'] % value\n raise ValidationError(msg)\n\n msg = self.error_messages['invalid'] % value\n raise ValidationError(msg)\n\n\nclass DateTimeField(WritableField):\n type_name = 'DateTimeField'\n widget = widgets.DateTimeInput\n form_field_class = forms.DateTimeField\n\n default_error_messages = {\n 'invalid': _(u\"'%s' value has an invalid format. It must be in \"\n u\"YYYY-MM-DD HH:MM[:ss[.uuuuuu]][TZ] format.\"),\n 'invalid_date': _(u\"'%s' value has the correct format \"\n u\"(YYYY-MM-DD) but it is an invalid date.\"),\n 'invalid_datetime': _(u\"'%s' value has the correct format \"\n u\"(YYYY-MM-DD HH:MM[:ss[.uuuuuu]][TZ]) \"\n u\"but it is an invalid date/time.\"),\n }\n empty = None\n\n def from_native(self, value):\n if value in validators.EMPTY_VALUES:\n return None\n\n if isinstance(value, datetime.datetime):\n return value\n if isinstance(value, datetime.date):\n value = datetime.datetime(value.year, value.month, value.day)\n if settings.USE_TZ:\n # For backwards compatibility, interpret naive datetimes in\n # local time. 
This won't work during DST change, but we can't\n # do much about it, so we let the exceptions percolate up the\n # call stack.\n warnings.warn(u\"DateTimeField received a naive datetime (%s)\"\n u\" while time zone support is active.\" % value,\n RuntimeWarning)\n default_timezone = timezone.get_default_timezone()\n value = timezone.make_aware(value, default_timezone)\n return value\n\n try:\n parsed = parse_datetime(value)\n if parsed is not None:\n return parsed\n except ValueError:\n msg = self.error_messages['invalid_datetime'] % value\n raise ValidationError(msg)\n\n try:\n parsed = parse_date(value)\n if parsed is not None:\n return datetime.datetime(parsed.year, parsed.month, parsed.day)\n except ValueError:\n msg = self.error_messages['invalid_date'] % value\n raise ValidationError(msg)\n\n msg = self.error_messages['invalid'] % value\n raise ValidationError(msg)\n\n\nclass IntegerField(WritableField):\n type_name = 'IntegerField'\n form_field_class = forms.IntegerField\n\n default_error_messages = {\n 'invalid': _('Enter a whole number.'),\n 'max_value': _('Ensure this value is less than or equal to %(limit_value)s.'),\n 'min_value': _('Ensure this value is greater than or equal to %(limit_value)s.'),\n }\n\n def __init__(self, max_value=None, min_value=None, *args, **kwargs):\n self.max_value, self.min_value = max_value, min_value\n super(IntegerField, self).__init__(*args, **kwargs)\n\n if max_value is not None:\n self.validators.append(validators.MaxValueValidator(max_value))\n if min_value is not None:\n self.validators.append(validators.MinValueValidator(min_value))\n\n def from_native(self, value):\n if value in validators.EMPTY_VALUES:\n return None\n\n try:\n value = int(str(value))\n except (ValueError, TypeError):\n raise ValidationError(self.error_messages['invalid'])\n return value\n\n\nclass FloatField(WritableField):\n type_name = 'FloatField'\n form_field_class = forms.FloatField\n\n default_error_messages = {\n 'invalid': _(\"'%s' value must be a float.\"),\n }\n\n def from_native(self, value):\n if value in validators.EMPTY_VALUES:\n return None\n\n try:\n return float(value)\n except (TypeError, ValueError):\n msg = self.error_messages['invalid'] % value\n raise ValidationError(msg)\n\n\nclass FileField(WritableField):\n _use_files = True\n type_name = 'FileField'\n form_field_class = forms.FileField\n widget = widgets.FileInput\n\n default_error_messages = {\n 'invalid': _(\"No file was submitted. 
Check the encoding type on the form.\"),\n 'missing': _(\"No file was submitted.\"),\n 'empty': _(\"The submitted file is empty.\"),\n 'max_length': _('Ensure this filename has at most %(max)d characters (it has %(length)d).'),\n 'contradiction': _('Please either submit a file or check the clear checkbox, not both.')\n }\n\n def __init__(self, *args, **kwargs):\n self.max_length = kwargs.pop('max_length', None)\n self.allow_empty_file = kwargs.pop('allow_empty_file', False)\n super(FileField, self).__init__(*args, **kwargs)\n\n def from_native(self, data):\n if data in validators.EMPTY_VALUES:\n return None\n\n # UploadedFile objects should have name and size attributes.\n try:\n file_name = data.name\n file_size = data.size\n except AttributeError:\n raise ValidationError(self.error_messages['invalid'])\n\n if self.max_length is not None and len(file_name) > self.max_length:\n error_values = {'max': self.max_length, 'length': len(file_name)}\n raise ValidationError(self.error_messages['max_length'] % error_values)\n if not file_name:\n raise ValidationError(self.error_messages['invalid'])\n if not self.allow_empty_file and not file_size:\n raise ValidationError(self.error_messages['empty'])\n\n return data\n\n def to_native(self, value):\n return value.name\n\n\nclass ImageField(FileField):\n _use_files = True\n form_field_class = forms.ImageField\n\n default_error_messages = {\n 'invalid_image': _(\"Upload a valid image. The file you uploaded was either not an image or a corrupted image.\"),\n }\n\n def from_native(self, data):\n \"\"\"\n Checks that the file-upload field data contains a valid image (GIF, JPG,\n PNG, possibly others -- whatever the Python Imaging Library supports).\n \"\"\"\n f = super(ImageField, self).from_native(data)\n if f is None:\n return None\n\n from compat import Image\n assert Image is not None, 'PIL must be installed for ImageField support'\n\n # We need to get a file object for PIL. We might have a path or we might\n # have to read the data into memory.\n if hasattr(data, 'temporary_file_path'):\n file = data.temporary_file_path()\n else:\n if hasattr(data, 'read'):\n file = BytesIO(data.read())\n else:\n file = BytesIO(data['content'])\n\n try:\n # load() could spot a truncated JPEG, but it loads the entire\n # image in memory, which is a DoS vector. See #3848 and #18520.\n # verify() must be called immediately after the constructor.\n Image.open(file).verify()\n except ImportError:\n # Under PyPy, it is possible to import PIL. However, the underlying\n # _imaging C module isn't available, so an ImportError will be\n # raised. Catch and re-raise.\n raise\n except Exception: # Python Imaging Library doesn't recognize it as an image\n raise ValidationError(self.error_messages['invalid_image'])\n if hasattr(f, 'seek') and callable(f.seek):\n f.seek(0)\n return f\n\n\nclass SerializerMethodField(Field):\n \"\"\"\n A field that gets its value by calling a method on the serializer it's attached to.\n \"\"\"\n\n def __init__(self, method_name):\n self.method_name = method_name\n super(SerializerMethodField, self).__init__()\n\n def field_to_native(self, obj, field_name):\n value = getattr(self.parent, self.method_name)(obj)\n return self.to_native(value)\n",
"path": "rest_framework/fields.py"
}
] | [
{
"content": "import copy\nimport datetime\nimport inspect\nimport re\nimport warnings\n\nfrom io import BytesIO\n\nfrom django.core import validators\nfrom django.core.exceptions import ObjectDoesNotExist, ValidationError\nfrom django.core.urlresolvers import resolve, get_script_prefix\nfrom django.conf import settings\nfrom django import forms\nfrom django.forms import widgets\nfrom django.forms.models import ModelChoiceIterator\nfrom django.utils.encoding import is_protected_type, smart_unicode\nfrom django.utils.translation import ugettext_lazy as _\nfrom rest_framework.reverse import reverse\nfrom rest_framework.compat import parse_date, parse_datetime\nfrom rest_framework.compat import timezone\nfrom urlparse import urlparse\n\n\ndef is_simple_callable(obj):\n \"\"\"\n True if the object is a callable that takes no arguments.\n \"\"\"\n return (\n (inspect.isfunction(obj) and not inspect.getargspec(obj)[0]) or\n (inspect.ismethod(obj) and len(inspect.getargspec(obj)[0]) <= 1)\n )\n\n\nclass Field(object):\n creation_counter = 0\n empty = ''\n type_name = None\n _use_files = None\n form_field_class = forms.CharField\n\n def __init__(self, source=None):\n self.parent = None\n\n self.creation_counter = Field.creation_counter\n Field.creation_counter += 1\n\n self.source = source\n\n def initialize(self, parent, field_name):\n \"\"\"\n Called to set up a field prior to field_to_native or field_from_native.\n\n parent - The parent serializer.\n model_field - The model field this field corresponds to, if one exists.\n \"\"\"\n self.parent = parent\n self.root = parent.root or parent\n self.context = self.root.context\n if self.root.partial:\n self.required = False\n\n def field_from_native(self, data, files, field_name, into):\n \"\"\"\n Given a dictionary and a field name, updates the dictionary `into`,\n with the field and it's deserialized value.\n \"\"\"\n return\n\n def field_to_native(self, obj, field_name):\n \"\"\"\n Given and object and a field name, returns the value that should be\n serialized for that field.\n \"\"\"\n if obj is None:\n return self.empty\n\n if self.source == '*':\n return self.to_native(obj)\n\n if self.source:\n value = obj\n for component in self.source.split('.'):\n value = getattr(value, component)\n if is_simple_callable(value):\n value = value()\n else:\n value = getattr(obj, field_name)\n return self.to_native(value)\n\n def to_native(self, value):\n \"\"\"\n Converts the field's value into it's simple representation.\n \"\"\"\n if is_simple_callable(value):\n value = value()\n\n if is_protected_type(value):\n return value\n elif hasattr(value, '__iter__') and not isinstance(value, (dict, basestring)):\n return [self.to_native(item) for item in value]\n elif isinstance(value, dict):\n return dict(map(self.to_native, (k, v)) for k, v in value.items())\n return smart_unicode(value)\n\n def attributes(self):\n \"\"\"\n Returns a dictionary of attributes to be used when serializing to xml.\n \"\"\"\n if self.type_name:\n return {'type': self.type_name}\n return {}\n\n\nclass WritableField(Field):\n \"\"\"\n Base for read/write fields.\n \"\"\"\n default_validators = []\n default_error_messages = {\n 'required': _('This field is required.'),\n 'invalid': _('Invalid value.'),\n }\n widget = widgets.TextInput\n default = None\n\n def __init__(self, source=None, read_only=False, required=None,\n validators=[], error_messages=None, widget=None,\n default=None, blank=None):\n\n super(WritableField, self).__init__(source=source)\n\n self.read_only = read_only\n if 
required is None:\n self.required = not(read_only)\n else:\n assert not (read_only and required), \"Cannot set required=True and read_only=True\"\n self.required = required\n\n messages = {}\n for c in reversed(self.__class__.__mro__):\n messages.update(getattr(c, 'default_error_messages', {}))\n messages.update(error_messages or {})\n self.error_messages = messages\n\n self.validators = self.default_validators + validators\n self.default = default if default is not None else self.default\n self.blank = blank\n\n # Widgets are ony used for HTML forms.\n widget = widget or self.widget\n if isinstance(widget, type):\n widget = widget()\n self.widget = widget\n\n def validate(self, value):\n if value in validators.EMPTY_VALUES and self.required:\n raise ValidationError(self.error_messages['required'])\n\n def run_validators(self, value):\n if value in validators.EMPTY_VALUES:\n return\n errors = []\n for v in self.validators:\n try:\n v(value)\n except ValidationError as e:\n if hasattr(e, 'code') and e.code in self.error_messages:\n message = self.error_messages[e.code]\n if e.params:\n message = message % e.params\n errors.append(message)\n else:\n errors.extend(e.messages)\n if errors:\n raise ValidationError(errors)\n\n def field_from_native(self, data, files, field_name, into):\n \"\"\"\n Given a dictionary and a field name, updates the dictionary `into`,\n with the field and it's deserialized value.\n \"\"\"\n if self.read_only:\n return\n\n try:\n if self._use_files:\n native = files[field_name]\n else:\n native = data[field_name]\n except KeyError:\n if self.default is not None:\n native = self.default\n else:\n if self.required:\n raise ValidationError(self.error_messages['required'])\n return\n\n value = self.from_native(native)\n if self.source == '*':\n if value:\n into.update(value)\n else:\n self.validate(value)\n self.run_validators(value)\n into[self.source or field_name] = value\n\n def from_native(self, value):\n \"\"\"\n Reverts a simple representation back to the field's value.\n \"\"\"\n return value\n\n\nclass ModelField(WritableField):\n \"\"\"\n A generic field that can be used against an arbitrary model field.\n \"\"\"\n def __init__(self, *args, **kwargs):\n try:\n self.model_field = kwargs.pop('model_field')\n except:\n raise ValueError(\"ModelField requires 'model_field' kwarg\")\n\n self.min_length = kwargs.pop('min_length',\n getattr(self.model_field, 'min_length', None))\n self.max_length = kwargs.pop('max_length',\n getattr(self.model_field, 'max_length', None))\n\n super(ModelField, self).__init__(*args, **kwargs)\n\n if self.min_length is not None:\n self.validators.append(validators.MinLengthValidator(self.min_length))\n if self.max_length is not None:\n self.validators.append(validators.MaxLengthValidator(self.max_length))\n\n def from_native(self, value):\n rel = getattr(self.model_field, \"rel\", None)\n if rel is not None:\n return rel.to._meta.get_field(rel.field_name).to_python(value)\n else:\n return self.model_field.to_python(value)\n\n def field_to_native(self, obj, field_name):\n value = self.model_field._get_val_from_obj(obj)\n if is_protected_type(value):\n return value\n return self.model_field.value_to_string(obj)\n\n def attributes(self):\n return {\n \"type\": self.model_field.get_internal_type()\n }\n\n##### Relational fields #####\n\n\n# Not actually Writable, but subclasses may need to be.\nclass RelatedField(WritableField):\n \"\"\"\n Base class for related model fields.\n\n If not overridden, this represents a to-one relationship, using 
the unicode\n representation of the target.\n \"\"\"\n widget = widgets.Select\n cache_choices = False\n empty_label = None\n default_read_only = True # TODO: Remove this\n\n def __init__(self, *args, **kwargs):\n self.queryset = kwargs.pop('queryset', None)\n self.null = kwargs.pop('null', False)\n super(RelatedField, self).__init__(*args, **kwargs)\n self.read_only = kwargs.pop('read_only', self.default_read_only)\n\n def initialize(self, parent, field_name):\n super(RelatedField, self).initialize(parent, field_name)\n if self.queryset is None and not self.read_only:\n try:\n manager = getattr(self.parent.opts.model, self.source or field_name)\n if hasattr(manager, 'related'): # Forward\n self.queryset = manager.related.model._default_manager.all()\n else: # Reverse\n self.queryset = manager.field.rel.to._default_manager.all()\n except:\n raise\n msg = ('Serializer related fields must include a `queryset`' +\n ' argument or set `read_only=True')\n raise Exception(msg)\n\n ### We need this stuff to make form choices work...\n\n # def __deepcopy__(self, memo):\n # result = super(RelatedField, self).__deepcopy__(memo)\n # result.queryset = result.queryset\n # return result\n\n def prepare_value(self, obj):\n return self.to_native(obj)\n\n def label_from_instance(self, obj):\n \"\"\"\n Return a readable representation for use with eg. select widgets.\n \"\"\"\n desc = smart_unicode(obj)\n ident = smart_unicode(self.to_native(obj))\n if desc == ident:\n return desc\n return \"%s - %s\" % (desc, ident)\n\n def _get_queryset(self):\n return self._queryset\n\n def _set_queryset(self, queryset):\n self._queryset = queryset\n self.widget.choices = self.choices\n\n queryset = property(_get_queryset, _set_queryset)\n\n def _get_choices(self):\n # If self._choices is set, then somebody must have manually set\n # the property self.choices. In this case, just return self._choices.\n if hasattr(self, '_choices'):\n return self._choices\n\n # Otherwise, execute the QuerySet in self.queryset to determine the\n # choices dynamically. Return a fresh ModelChoiceIterator that has not been\n # consumed. Note that we're instantiating a new ModelChoiceIterator *each*\n # time _get_choices() is called (and, thus, each time self.choices is\n # accessed) so that we can ensure the QuerySet has not been consumed. 
This\n # construct might look complicated but it allows for lazy evaluation of\n # the queryset.\n return ModelChoiceIterator(self)\n\n def _set_choices(self, value):\n # Setting choices also sets the choices on the widget.\n # choices can be any iterable, but we call list() on it because\n # it will be consumed more than once.\n self._choices = self.widget.choices = list(value)\n\n choices = property(_get_choices, _set_choices)\n\n ### Regular serializer stuff...\n\n def field_to_native(self, obj, field_name):\n value = getattr(obj, self.source or field_name)\n return self.to_native(value)\n\n def field_from_native(self, data, files, field_name, into):\n if self.read_only:\n return\n\n value = data.get(field_name)\n\n if value in (None, '') and not self.null:\n raise ValidationError('Value may not be null')\n elif value in (None, '') and self.null:\n into[(self.source or field_name)] = None\n else:\n into[(self.source or field_name)] = self.from_native(value)\n\n\nclass ManyRelatedMixin(object):\n \"\"\"\n Mixin to convert a related field to a many related field.\n \"\"\"\n widget = widgets.SelectMultiple\n\n def field_to_native(self, obj, field_name):\n value = getattr(obj, self.source or field_name)\n return [self.to_native(item) for item in value.all()]\n\n def field_from_native(self, data, files, field_name, into):\n if self.read_only:\n return\n\n try:\n # Form data\n value = data.getlist(self.source or field_name)\n except:\n # Non-form data\n value = data.get(self.source or field_name)\n else:\n if value == ['']:\n value = []\n into[field_name] = [self.from_native(item) for item in value]\n\n\nclass ManyRelatedField(ManyRelatedMixin, RelatedField):\n \"\"\"\n Base class for related model managers.\n\n If not overridden, this represents a to-many relationship, using the unicode\n representations of the target, and is read-only.\n \"\"\"\n pass\n\n\n### PrimaryKey relationships\n\nclass PrimaryKeyRelatedField(RelatedField):\n \"\"\"\n Represents a to-one relationship as a pk value.\n \"\"\"\n default_read_only = False\n form_field_class = forms.ChoiceField\n\n # TODO: Remove these field hacks...\n def prepare_value(self, obj):\n return self.to_native(obj.pk)\n\n def label_from_instance(self, obj):\n \"\"\"\n Return a readable representation for use with eg. 
select widgets.\n \"\"\"\n desc = smart_unicode(obj)\n ident = smart_unicode(self.to_native(obj.pk))\n if desc == ident:\n return desc\n return \"%s - %s\" % (desc, ident)\n\n # TODO: Possibly change this to just take `obj`, through prob less performant\n def to_native(self, pk):\n return pk\n\n def from_native(self, data):\n if self.queryset is None:\n raise Exception('Writable related fields must include a `queryset` argument')\n\n try:\n return self.queryset.get(pk=data)\n except ObjectDoesNotExist:\n msg = \"Invalid pk '%s' - object does not exist.\" % smart_unicode(data)\n raise ValidationError(msg)\n\n def field_to_native(self, obj, field_name):\n try:\n # Prefer obj.serializable_value for performance reasons\n pk = obj.serializable_value(self.source or field_name)\n except AttributeError:\n # RelatedObject (reverse relationship)\n obj = getattr(obj, self.source or field_name)\n return self.to_native(obj.pk)\n # Forward relationship\n return self.to_native(pk)\n\n\nclass ManyPrimaryKeyRelatedField(ManyRelatedField):\n \"\"\"\n Represents a to-many relationship as a pk value.\n \"\"\"\n default_read_only = False\n form_field_class = forms.MultipleChoiceField\n\n def prepare_value(self, obj):\n return self.to_native(obj.pk)\n\n def label_from_instance(self, obj):\n \"\"\"\n Return a readable representation for use with eg. select widgets.\n \"\"\"\n desc = smart_unicode(obj)\n ident = smart_unicode(self.to_native(obj.pk))\n if desc == ident:\n return desc\n return \"%s - %s\" % (desc, ident)\n\n def to_native(self, pk):\n return pk\n\n def field_to_native(self, obj, field_name):\n try:\n # Prefer obj.serializable_value for performance reasons\n queryset = obj.serializable_value(self.source or field_name)\n except AttributeError:\n # RelatedManager (reverse relationship)\n queryset = getattr(obj, self.source or field_name)\n return [self.to_native(item.pk) for item in queryset.all()]\n # Forward relationship\n return [self.to_native(item.pk) for item in queryset.all()]\n\n def from_native(self, data):\n if self.queryset is None:\n raise Exception('Writable related fields must include a `queryset` argument')\n\n try:\n return self.queryset.get(pk=data)\n except ObjectDoesNotExist:\n msg = \"Invalid pk '%s' - object does not exist.\" % smart_unicode(data)\n raise ValidationError(msg)\n\n### Slug relationships\n\n\nclass SlugRelatedField(RelatedField):\n default_read_only = False\n form_field_class = forms.ChoiceField\n\n def __init__(self, *args, **kwargs):\n self.slug_field = kwargs.pop('slug_field', None)\n assert self.slug_field, 'slug_field is required'\n super(SlugRelatedField, self).__init__(*args, **kwargs)\n\n def to_native(self, obj):\n return getattr(obj, self.slug_field)\n\n def from_native(self, data):\n if self.queryset is None:\n raise Exception('Writable related fields must include a `queryset` argument')\n\n try:\n return self.queryset.get(**{self.slug_field: data})\n except ObjectDoesNotExist:\n raise ValidationError('Object with %s=%s does not exist.' 
%\n (self.slug_field, unicode(data)))\n\n\nclass ManySlugRelatedField(ManyRelatedMixin, SlugRelatedField):\n form_field_class = forms.MultipleChoiceField\n\n\n### Hyperlinked relationships\n\nclass HyperlinkedRelatedField(RelatedField):\n \"\"\"\n Represents a to-one relationship, using hyperlinking.\n \"\"\"\n pk_url_kwarg = 'pk'\n slug_field = 'slug'\n slug_url_kwarg = None # Defaults to same as `slug_field` unless overridden\n default_read_only = False\n form_field_class = forms.ChoiceField\n\n def __init__(self, *args, **kwargs):\n try:\n self.view_name = kwargs.pop('view_name')\n except:\n raise ValueError(\"Hyperlinked field requires 'view_name' kwarg\")\n\n self.slug_field = kwargs.pop('slug_field', self.slug_field)\n default_slug_kwarg = self.slug_url_kwarg or self.slug_field\n self.pk_url_kwarg = kwargs.pop('pk_url_kwarg', self.pk_url_kwarg)\n self.slug_url_kwarg = kwargs.pop('slug_url_kwarg', default_slug_kwarg)\n\n self.format = kwargs.pop('format', None)\n super(HyperlinkedRelatedField, self).__init__(*args, **kwargs)\n\n def get_slug_field(self):\n \"\"\"\n Get the name of a slug field to be used to look up by slug.\n \"\"\"\n return self.slug_field\n\n def to_native(self, obj):\n view_name = self.view_name\n request = self.context.get('request', None)\n format = self.format or self.context.get('format', None)\n pk = getattr(obj, 'pk', None)\n if pk is None:\n return\n kwargs = {self.pk_url_kwarg: pk}\n try:\n return reverse(view_name, kwargs=kwargs, request=request, format=format)\n except:\n pass\n\n slug = getattr(obj, self.slug_field, None)\n\n if not slug:\n raise ValidationError('Could not resolve URL for field using view name \"%s\"' % view_name)\n\n kwargs = {self.slug_url_kwarg: slug}\n try:\n return reverse(self.view_name, kwargs=kwargs, request=request, format=format)\n except:\n pass\n\n kwargs = {self.pk_url_kwarg: obj.pk, self.slug_url_kwarg: slug}\n try:\n return reverse(self.view_name, kwargs=kwargs, request=request, format=format)\n except:\n pass\n\n raise ValidationError('Could not resolve URL for field using view name \"%s\"' % view_name)\n\n def from_native(self, value):\n # Convert URL -> model instance pk\n # TODO: Use values_list\n if self.queryset is None:\n raise Exception('Writable related fields must include a `queryset` argument')\n\n if value.startswith('http:') or value.startswith('https:'):\n # If needed convert absolute URLs to relative path\n value = urlparse(value).path\n prefix = get_script_prefix()\n if value.startswith(prefix):\n value = '/' + value[len(prefix):]\n\n try:\n match = resolve(value)\n except:\n raise ValidationError('Invalid hyperlink - No URL match')\n\n if match.url_name != self.view_name:\n raise ValidationError('Invalid hyperlink - Incorrect URL match')\n\n pk = match.kwargs.get(self.pk_url_kwarg, None)\n slug = match.kwargs.get(self.slug_url_kwarg, None)\n\n # Try explicit primary key.\n if pk is not None:\n queryset = self.queryset.filter(pk=pk)\n # Next, try looking up by slug.\n elif slug is not None:\n slug_field = self.get_slug_field()\n queryset = self.queryset.filter(**{slug_field: slug})\n # If none of those are defined, it's an error.\n else:\n raise ValidationError('Invalid hyperlink')\n\n try:\n obj = queryset.get()\n except ObjectDoesNotExist:\n raise ValidationError('Invalid hyperlink - object does not exist.')\n return obj\n\n\nclass ManyHyperlinkedRelatedField(ManyRelatedMixin, HyperlinkedRelatedField):\n \"\"\"\n Represents a to-many relationship, using hyperlinking.\n \"\"\"\n form_field_class = 
forms.MultipleChoiceField\n\n\nclass HyperlinkedIdentityField(Field):\n \"\"\"\n Represents the instance, or a property on the instance, using hyperlinking.\n \"\"\"\n pk_url_kwarg = 'pk'\n slug_field = 'slug'\n slug_url_kwarg = None # Defaults to same as `slug_field` unless overridden\n\n def __init__(self, *args, **kwargs):\n # TODO: Make view_name mandatory, and have the\n # HyperlinkedModelSerializer set it on-the-fly\n self.view_name = kwargs.pop('view_name', None)\n self.format = kwargs.pop('format', None)\n\n self.slug_field = kwargs.pop('slug_field', self.slug_field)\n default_slug_kwarg = self.slug_url_kwarg or self.slug_field\n self.pk_url_kwarg = kwargs.pop('pk_url_kwarg', self.pk_url_kwarg)\n self.slug_url_kwarg = kwargs.pop('slug_url_kwarg', default_slug_kwarg)\n\n super(HyperlinkedIdentityField, self).__init__(*args, **kwargs)\n\n def field_to_native(self, obj, field_name):\n request = self.context.get('request', None)\n format = self.format or self.context.get('format', None)\n view_name = self.view_name or self.parent.opts.view_name\n kwargs = {self.pk_url_kwarg: obj.pk}\n try:\n return reverse(view_name, kwargs=kwargs, request=request, format=format)\n except:\n pass\n\n slug = getattr(obj, self.slug_field, None)\n\n if not slug:\n raise ValidationError('Could not resolve URL for field using view name \"%s\"' % view_name)\n\n kwargs = {self.slug_url_kwarg: slug}\n try:\n return reverse(self.view_name, kwargs=kwargs, request=request, format=format)\n except:\n pass\n\n kwargs = {self.pk_url_kwarg: obj.pk, self.slug_url_kwarg: slug}\n try:\n return reverse(self.view_name, kwargs=kwargs, request=request, format=format)\n except:\n pass\n\n raise ValidationError('Could not resolve URL for field using view name \"%s\"' % view_name)\n\n\n##### Typed Fields #####\n\nclass BooleanField(WritableField):\n type_name = 'BooleanField'\n form_field_class = forms.BooleanField\n widget = widgets.CheckboxInput\n default_error_messages = {\n 'invalid': _(u\"'%s' value must be either True or False.\"),\n }\n empty = False\n\n # Note: we set default to `False` in order to fill in missing value not\n # supplied by html form. 
TODO: Fix so that only html form input gets\n # this behavior.\n default = False\n\n def from_native(self, value):\n if value in ('true', 't', 'True', '1'):\n return True\n if value in ('false', 'f', 'False', '0'):\n return False\n return bool(value)\n\n\nclass CharField(WritableField):\n type_name = 'CharField'\n form_field_class = forms.CharField\n\n def __init__(self, max_length=None, min_length=None, *args, **kwargs):\n self.max_length, self.min_length = max_length, min_length\n super(CharField, self).__init__(*args, **kwargs)\n if min_length is not None:\n self.validators.append(validators.MinLengthValidator(min_length))\n if max_length is not None:\n self.validators.append(validators.MaxLengthValidator(max_length))\n\n def validate(self, value):\n \"\"\"\n Validates that the value is supplied (if required).\n \"\"\"\n # if empty string and allow blank\n if self.blank and not value:\n return\n else:\n super(CharField, self).validate(value)\n\n def from_native(self, value):\n if isinstance(value, basestring) or value is None:\n return value\n return smart_unicode(value)\n\n\nclass URLField(CharField):\n type_name = 'URLField'\n\n def __init__(self, **kwargs):\n kwargs['max_length'] = kwargs.get('max_length', 200)\n kwargs['validators'] = [validators.URLValidator()]\n super(URLField, self).__init__(**kwargs)\n\n\nclass SlugField(CharField):\n type_name = 'SlugField'\n\n def __init__(self, *args, **kwargs):\n kwargs['max_length'] = kwargs.get('max_length', 50)\n super(SlugField, self).__init__(*args, **kwargs)\n\n\nclass ChoiceField(WritableField):\n type_name = 'ChoiceField'\n form_field_class = forms.ChoiceField\n widget = widgets.Select\n default_error_messages = {\n 'invalid_choice': _('Select a valid choice. %(value)s is not one of the available choices.'),\n }\n\n def __init__(self, choices=(), *args, **kwargs):\n super(ChoiceField, self).__init__(*args, **kwargs)\n self.choices = choices\n\n def _get_choices(self):\n return self._choices\n\n def _set_choices(self, value):\n # Setting choices also sets the choices on the widget.\n # choices can be any iterable, but we call list() on it because\n # it will be consumed more than once.\n self._choices = self.widget.choices = list(value)\n\n choices = property(_get_choices, _set_choices)\n\n def validate(self, value):\n \"\"\"\n Validates that the input is in self.choices.\n \"\"\"\n super(ChoiceField, self).validate(value)\n if value and not self.valid_value(value):\n raise ValidationError(self.error_messages['invalid_choice'] % {'value': value})\n\n def valid_value(self, value):\n \"\"\"\n Check to see if the provided value is a valid choice.\n \"\"\"\n for k, v in self.choices:\n if isinstance(v, (list, tuple)):\n # This is an optgroup, so look inside the group for options\n for k2, v2 in v:\n if value == smart_unicode(k2):\n return True\n else:\n if value == smart_unicode(k) or value == k:\n return True\n return False\n\n\nclass EmailField(CharField):\n type_name = 'EmailField'\n form_field_class = forms.EmailField\n\n default_error_messages = {\n 'invalid': _('Enter a valid e-mail address.'),\n }\n default_validators = [validators.validate_email]\n\n def from_native(self, value):\n ret = super(EmailField, self).from_native(value)\n if ret is None:\n return None\n return ret.strip()\n\n def __deepcopy__(self, memo):\n result = copy.copy(self)\n memo[id(self)] = result\n #result.widget = copy.deepcopy(self.widget, memo)\n result.validators = self.validators[:]\n return result\n\n\nclass RegexField(CharField):\n type_name = 
'RegexField'\n form_field_class = forms.RegexField\n\n def __init__(self, regex, max_length=None, min_length=None, *args, **kwargs):\n super(RegexField, self).__init__(max_length, min_length, *args, **kwargs)\n self.regex = regex\n\n def _get_regex(self):\n return self._regex\n\n def _set_regex(self, regex):\n if isinstance(regex, basestring):\n regex = re.compile(regex)\n self._regex = regex\n if hasattr(self, '_regex_validator') and self._regex_validator in self.validators:\n self.validators.remove(self._regex_validator)\n self._regex_validator = validators.RegexValidator(regex=regex)\n self.validators.append(self._regex_validator)\n\n regex = property(_get_regex, _set_regex)\n\n def __deepcopy__(self, memo):\n result = copy.copy(self)\n memo[id(self)] = result\n result.validators = self.validators[:]\n return result\n\n\nclass DateField(WritableField):\n type_name = 'DateField'\n widget = widgets.DateInput\n form_field_class = forms.DateField\n\n default_error_messages = {\n 'invalid': _(u\"'%s' value has an invalid date format. It must be \"\n u\"in YYYY-MM-DD format.\"),\n 'invalid_date': _(u\"'%s' value has the correct format (YYYY-MM-DD) \"\n u\"but it is an invalid date.\"),\n }\n empty = None\n\n def from_native(self, value):\n if value in validators.EMPTY_VALUES:\n return None\n\n if isinstance(value, datetime.datetime):\n if timezone and settings.USE_TZ and timezone.is_aware(value):\n # Convert aware datetimes to the default time zone\n # before casting them to dates (#17742).\n default_timezone = timezone.get_default_timezone()\n value = timezone.make_naive(value, default_timezone)\n return value.date()\n if isinstance(value, datetime.date):\n return value\n\n try:\n parsed = parse_date(value)\n if parsed is not None:\n return parsed\n except ValueError:\n msg = self.error_messages['invalid_date'] % value\n raise ValidationError(msg)\n\n msg = self.error_messages['invalid'] % value\n raise ValidationError(msg)\n\n\nclass DateTimeField(WritableField):\n type_name = 'DateTimeField'\n widget = widgets.DateTimeInput\n form_field_class = forms.DateTimeField\n\n default_error_messages = {\n 'invalid': _(u\"'%s' value has an invalid format. It must be in \"\n u\"YYYY-MM-DD HH:MM[:ss[.uuuuuu]][TZ] format.\"),\n 'invalid_date': _(u\"'%s' value has the correct format \"\n u\"(YYYY-MM-DD) but it is an invalid date.\"),\n 'invalid_datetime': _(u\"'%s' value has the correct format \"\n u\"(YYYY-MM-DD HH:MM[:ss[.uuuuuu]][TZ]) \"\n u\"but it is an invalid date/time.\"),\n }\n empty = None\n\n def from_native(self, value):\n if value in validators.EMPTY_VALUES:\n return None\n\n if isinstance(value, datetime.datetime):\n return value\n if isinstance(value, datetime.date):\n value = datetime.datetime(value.year, value.month, value.day)\n if settings.USE_TZ:\n # For backwards compatibility, interpret naive datetimes in\n # local time. 
This won't work during DST change, but we can't\n # do much about it, so we let the exceptions percolate up the\n # call stack.\n warnings.warn(u\"DateTimeField received a naive datetime (%s)\"\n u\" while time zone support is active.\" % value,\n RuntimeWarning)\n default_timezone = timezone.get_default_timezone()\n value = timezone.make_aware(value, default_timezone)\n return value\n\n try:\n parsed = parse_datetime(value)\n if parsed is not None:\n return parsed\n except ValueError:\n msg = self.error_messages['invalid_datetime'] % value\n raise ValidationError(msg)\n\n try:\n parsed = parse_date(value)\n if parsed is not None:\n return datetime.datetime(parsed.year, parsed.month, parsed.day)\n except ValueError:\n msg = self.error_messages['invalid_date'] % value\n raise ValidationError(msg)\n\n msg = self.error_messages['invalid'] % value\n raise ValidationError(msg)\n\n\nclass IntegerField(WritableField):\n type_name = 'IntegerField'\n form_field_class = forms.IntegerField\n\n default_error_messages = {\n 'invalid': _('Enter a whole number.'),\n 'max_value': _('Ensure this value is less than or equal to %(limit_value)s.'),\n 'min_value': _('Ensure this value is greater than or equal to %(limit_value)s.'),\n }\n\n def __init__(self, max_value=None, min_value=None, *args, **kwargs):\n self.max_value, self.min_value = max_value, min_value\n super(IntegerField, self).__init__(*args, **kwargs)\n\n if max_value is not None:\n self.validators.append(validators.MaxValueValidator(max_value))\n if min_value is not None:\n self.validators.append(validators.MinValueValidator(min_value))\n\n def from_native(self, value):\n if value in validators.EMPTY_VALUES:\n return None\n\n try:\n value = int(str(value))\n except (ValueError, TypeError):\n raise ValidationError(self.error_messages['invalid'])\n return value\n\n\nclass FloatField(WritableField):\n type_name = 'FloatField'\n form_field_class = forms.FloatField\n\n default_error_messages = {\n 'invalid': _(\"'%s' value must be a float.\"),\n }\n\n def from_native(self, value):\n if value in validators.EMPTY_VALUES:\n return None\n\n try:\n return float(value)\n except (TypeError, ValueError):\n msg = self.error_messages['invalid'] % value\n raise ValidationError(msg)\n\n\nclass FileField(WritableField):\n _use_files = True\n type_name = 'FileField'\n form_field_class = forms.FileField\n widget = widgets.FileInput\n\n default_error_messages = {\n 'invalid': _(\"No file was submitted. 
Check the encoding type on the form.\"),\n 'missing': _(\"No file was submitted.\"),\n 'empty': _(\"The submitted file is empty.\"),\n 'max_length': _('Ensure this filename has at most %(max)d characters (it has %(length)d).'),\n 'contradiction': _('Please either submit a file or check the clear checkbox, not both.')\n }\n\n def __init__(self, *args, **kwargs):\n self.max_length = kwargs.pop('max_length', None)\n self.allow_empty_file = kwargs.pop('allow_empty_file', False)\n super(FileField, self).__init__(*args, **kwargs)\n\n def from_native(self, data):\n if data in validators.EMPTY_VALUES:\n return None\n\n # UploadedFile objects should have name and size attributes.\n try:\n file_name = data.name\n file_size = data.size\n except AttributeError:\n raise ValidationError(self.error_messages['invalid'])\n\n if self.max_length is not None and len(file_name) > self.max_length:\n error_values = {'max': self.max_length, 'length': len(file_name)}\n raise ValidationError(self.error_messages['max_length'] % error_values)\n if not file_name:\n raise ValidationError(self.error_messages['invalid'])\n if not self.allow_empty_file and not file_size:\n raise ValidationError(self.error_messages['empty'])\n\n return data\n\n def to_native(self, value):\n return value.name\n\n\nclass ImageField(FileField):\n _use_files = True\n form_field_class = forms.ImageField\n\n default_error_messages = {\n 'invalid_image': _(\"Upload a valid image. The file you uploaded was either not an image or a corrupted image.\"),\n }\n\n def from_native(self, data):\n \"\"\"\n Checks that the file-upload field data contains a valid image (GIF, JPG,\n PNG, possibly others -- whatever the Python Imaging Library supports).\n \"\"\"\n f = super(ImageField, self).from_native(data)\n if f is None:\n return None\n\n from compat import Image\n assert Image is not None, 'PIL must be installed for ImageField support'\n\n # We need to get a file object for PIL. We might have a path or we might\n # have to read the data into memory.\n if hasattr(data, 'temporary_file_path'):\n file = data.temporary_file_path()\n else:\n if hasattr(data, 'read'):\n file = BytesIO(data.read())\n else:\n file = BytesIO(data['content'])\n\n try:\n # load() could spot a truncated JPEG, but it loads the entire\n # image in memory, which is a DoS vector. See #3848 and #18520.\n # verify() must be called immediately after the constructor.\n Image.open(file).verify()\n except ImportError:\n # Under PyPy, it is possible to import PIL. However, the underlying\n # _imaging C module isn't available, so an ImportError will be\n # raised. Catch and re-raise.\n raise\n except Exception: # Python Imaging Library doesn't recognize it as an image\n raise ValidationError(self.error_messages['invalid_image'])\n if hasattr(f, 'seek') and callable(f.seek):\n f.seek(0)\n return f\n\n\nclass SerializerMethodField(Field):\n \"\"\"\n A field that gets its value by calling a method on the serializer it's attached to.\n \"\"\"\n\n def __init__(self, method_name):\n self.method_name = method_name\n super(SerializerMethodField, self).__init__()\n\n def field_to_native(self, obj, field_name):\n value = getattr(self.parent, self.method_name)(obj)\n return self.to_native(value)\n",
"path": "rest_framework/fields.py"
}
] | diff --git a/rest_framework/fields.py b/rest_framework/fields.py
index da588082c9..903c384e36 100644
--- a/rest_framework/fields.py
+++ b/rest_framework/fields.py
@@ -794,7 +794,7 @@ def valid_value(self, value):
if value == smart_unicode(k2):
return True
else:
- if value == smart_unicode(k):
+ if value == smart_unicode(k) or value == k:
return True
return False
diff --git a/rest_framework/tests/models.py b/rest_framework/tests/models.py
index 428bf130d0..807bcf9832 100644
--- a/rest_framework/tests/models.py
+++ b/rest_framework/tests/models.py
@@ -51,6 +51,10 @@ class Meta:
abstract = True
+class HasPositiveIntegerAsChoice(RESTFrameworkModel):
+ some_choices = ((1,'A'),(2,'B'),(3,'C'))
+ some_integer = models.PositiveIntegerField(choices=some_choices)
+
class Anchor(RESTFrameworkModel):
text = models.CharField(max_length=100, default='anchor')
diff --git a/rest_framework/tests/serializer.py b/rest_framework/tests/serializer.py
index 780177aa0c..7f2c27b05a 100644
--- a/rest_framework/tests/serializer.py
+++ b/rest_framework/tests/serializer.py
@@ -2,7 +2,7 @@
import pickle
from django.test import TestCase
from rest_framework import serializers
-from rest_framework.tests.models import (Album, ActionItem, Anchor, BasicModel,
+from rest_framework.tests.models import (HasPositiveIntegerAsChoice, Album, ActionItem, Anchor, BasicModel,
BlankFieldModel, BlogPost, Book, CallableDefaultValueModel, DefaultValueModel,
ManyToManyModel, Person, ReadOnlyManyToManyModel, Photo)
@@ -69,6 +69,11 @@ class Meta:
model = Album
fields = ['title'] # lists are also valid options
+class PositiveIntegerAsChoiceSerializer(serializers.ModelSerializer):
+ class Meta:
+ model = HasPositiveIntegerAsChoice
+ fields = ['some_integer']
+
class BasicTests(TestCase):
def setUp(self):
@@ -285,6 +290,12 @@ def test_default_modelfield_max_length_exceeded(self):
self.assertEquals(serializer.errors, {'info': [u'Ensure this value has at most 12 characters (it has 13).']})
+class PositiveIntegerAsChoiceTests(TestCase):
+ def test_positive_integer_in_json_is_correctly_parsed(self):
+ data = {'some_integer':1}
+ serializer = PositiveIntegerAsChoiceSerializer(data=data)
+ self.assertEquals(serializer.is_valid(), True)
+
class ModelValidationTests(TestCase):
def test_validate_unique(self):
"""
|
facebookresearch__ParlAI-1821 | Obsolete download link for CLEVR Dataset
Apparently, the current link to CLEVR in the source code is "https://s3-us-west-1.amazonaws.com/clevr/CLEVR_v1.0.zip", which returns the message "All access to this object has been disabled".
When I try to execute the following line of code
`!python ~/ParlAI/examples/display_data.py -t clevr`
I obtain
```
[creating task(s): clevr]
[building data: /root/ParlAI/data/CLEVR]
[ downloading: https://s3-us-west-1.amazonaws.com/clevr/CLEVR_v1.0.zip to /root/ParlAI/data/CLEVR/CLEVR_v1.0.zip ]
Downloading CLEVR_v1.0.zip: 0.00B [00:00, ?B/s]
unpacking CLEVR_v1.0.zip
Traceback (most recent call last):
File "/root/ParlAI/parlai/core/agents.py", line 819, in _create_task_agents
task_agents = my_module.create_agent(opt)
AttributeError: module 'parlai.tasks.clevr.agents' has no attribute 'create_agent'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/ParlAI/examples/display_data.py", line 22, in <module>
display_data(opt)
File "/root/ParlAI/parlai/scripts/display_data.py", line 42, in display_data
world = create_task(opt, agent)
File "/root/ParlAI/parlai/core/worlds.py", line 1151, in create_task
world = create_task_world(opt, user_agents, default_world=default_world)
File "/root/ParlAI/parlai/core/worlds.py", line 1108, in create_task_world
opt, user_agents, default_world=default_world
File "/root/ParlAI/parlai/core/worlds.py", line 1068, in _get_task_world
task_agents = _create_task_agents(opt)
File "/root/ParlAI/parlai/core/agents.py", line 822, in _create_task_agents
return create_task_agent_from_taskname(opt)
File "/root/ParlAI/parlai/core/agents.py", line 776, in create_task_agent_from_taskname
task_agents = teacher_class(opt)
File "/root/ParlAI/parlai/tasks/clevr/agents.py", line 45, in __init__
data_path, self.images_path = _path(opt)
File "/root/ParlAI/parlai/tasks/clevr/agents.py", line 15, in _path
build(opt)
File "/root/ParlAI/parlai/tasks/clevr/build.py", line 28, in build
build_data.untar(dpath, fname)
File "/root/ParlAI/parlai/core/build_data.py", line 180, in untar
shutil.unpack_archive(fullpath, path)
File "/usr/lib/python3.6/shutil.py", line 983, in unpack_archive
func(filename, extract_dir, **kwargs)
File "/usr/lib/python3.6/shutil.py", line 883, in _unpack_zipfile
raise ReadError("%s is not a zip file" % filename)
shutil.ReadError: /root/ParlAI/data/CLEVR/CLEVR_v1.0.zip is not a zip file
```
I found the following working link on the CLEVR webpage (https://cs.stanford.edu/people/jcjohns/clevr/):
https://dl.fbaipublicfiles.com/clevr/CLEVR_v1.0.zip
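
A minimal sketch, assuming outbound HTTPS access, of verifying the replacement link with only the standard library before pointing `parlai/tasks/clevr/build.py` at it (the URL below is the mirror quoted above; everything else is illustrative):

```python
# Minimal sketch: issue a HEAD request against the replacement mirror and
# print the HTTP status plus the advertised size, so the URL can be checked
# before it is wired into parlai/tasks/clevr/build.py.
from urllib.request import Request, urlopen

NEW_URL = 'https://dl.fbaipublicfiles.com/clevr/CLEVR_v1.0.zip'

req = Request(NEW_URL, method='HEAD')
with urlopen(req, timeout=30) as resp:
    print(resp.status, resp.headers.get('Content-Length'))
```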
| [
{
"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n# Download and build the data if it does not exist.\n\nimport parlai.core.build_data as build_data\nimport os\n\n\ndef build(opt):\n dpath = os.path.join(opt['datapath'], 'CLEVR')\n version = 'v1.0'\n\n if not build_data.built(dpath, version_string=version):\n print('[building data: ' + dpath + ']')\n # An older version exists, so remove these outdated files.\n if build_data.built(dpath):\n build_data.remove_dir(dpath)\n build_data.make_dir(dpath)\n\n # Download the data.\n fname = 'CLEVR_v1.0.zip'\n url = 'https://s3-us-west-1.amazonaws.com/clevr/'\n\n build_data.download(url + fname, dpath, fname)\n build_data.untar(dpath, fname)\n\n # Mark the data as built.\n build_data.mark_done(dpath, version_string=version)\n",
"path": "parlai/tasks/clevr/build.py"
}
] | [
{
"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n# Download and build the data if it does not exist.\n\nimport parlai.core.build_data as build_data\nimport os\n\n\ndef build(opt):\n dpath = os.path.join(opt['datapath'], 'CLEVR')\n version = 'v1.0'\n\n if not build_data.built(dpath, version_string=version):\n print('[building data: ' + dpath + ']')\n # An older version exists, so remove these outdated files.\n if build_data.built(dpath):\n build_data.remove_dir(dpath)\n build_data.make_dir(dpath)\n\n # Download the data.\n fname = 'CLEVR_v1.0.zip'\n url = 'https://dl.fbaipublicfiles.com/clevr/'\n\n build_data.download(url + fname, dpath, fname)\n build_data.untar(dpath, fname)\n\n # Mark the data as built.\n build_data.mark_done(dpath, version_string=version)\n",
"path": "parlai/tasks/clevr/build.py"
}
] | diff --git a/parlai/tasks/clevr/build.py b/parlai/tasks/clevr/build.py
index 39b70209252..806c9fcf32b 100644
--- a/parlai/tasks/clevr/build.py
+++ b/parlai/tasks/clevr/build.py
@@ -22,7 +22,7 @@ def build(opt):
# Download the data.
fname = 'CLEVR_v1.0.zip'
- url = 'https://s3-us-west-1.amazonaws.com/clevr/'
+ url = 'https://dl.fbaipublicfiles.com/clevr/'
build_data.download(url + fname, dpath, fname)
build_data.untar(dpath, fname)
|
pypi__warehouse-13060 | OIDC publishers should be manageable within the admin app
Breakout of #11296: PyPI's admins should be able to administer OIDC publishers (both full and "pending") from within the admin app/views.
| [
{
"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport shlex\n\nfrom paginate_sqlalchemy import SqlalchemyOrmPage as SQLAlchemyORMPage\nfrom pyramid.httpexceptions import HTTPBadRequest, HTTPMovedPermanently, HTTPSeeOther\nfrom pyramid.view import view_config\nfrom sqlalchemy import func, or_\nfrom sqlalchemy.exc import NoResultFound\nfrom sqlalchemy.orm import joinedload\n\nfrom warehouse.accounts.models import User\nfrom warehouse.forklift.legacy import MAX_FILESIZE, MAX_PROJECT_SIZE\nfrom warehouse.packaging.models import JournalEntry, Project, Release, Role\nfrom warehouse.search.tasks import reindex_project as _reindex_project\nfrom warehouse.utils.paginate import paginate_url_factory\nfrom warehouse.utils.project import confirm_project, remove_project\n\nONE_MB = 1024 * 1024 # bytes\nONE_GB = 1024 * 1024 * 1024 # bytes\nUPLOAD_LIMIT_CAP = 1073741824 # 1 GiB\n\n\n@view_config(\n route_name=\"admin.project.list\",\n renderer=\"admin/projects/list.html\",\n permission=\"moderator\",\n request_method=\"GET\",\n uses_session=True,\n)\ndef project_list(request):\n q = request.params.get(\"q\")\n\n try:\n page_num = int(request.params.get(\"page\", 1))\n except ValueError:\n raise HTTPBadRequest(\"'page' must be an integer.\") from None\n\n projects_query = request.db.query(Project).order_by(Project.normalized_name)\n exact_match = None\n\n if q:\n projects_query = projects_query.filter(\n func.ultranormalize_name(Project.name) == func.ultranormalize_name(q)\n )\n\n exact_match = (\n request.db.query(Project).filter(Project.normalized_name == q).one_or_none()\n )\n\n projects = SQLAlchemyORMPage(\n projects_query,\n page=page_num,\n items_per_page=25,\n url_maker=paginate_url_factory(request),\n )\n\n return {\"projects\": projects, \"query\": q, \"exact_match\": exact_match}\n\n\n@view_config(\n route_name=\"admin.project.detail\",\n renderer=\"admin/projects/detail.html\",\n permission=\"moderator\",\n request_method=\"GET\",\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n)\n@view_config(\n route_name=\"admin.project.detail\",\n renderer=\"admin/projects/detail.html\",\n permission=\"admin\",\n request_method=\"POST\",\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n)\ndef project_detail(project, request):\n project_name = request.matchdict[\"project_name\"]\n\n if project_name != project.normalized_name:\n raise HTTPMovedPermanently(\n request.current_route_path(project_name=project.normalized_name)\n )\n\n releases = (\n request.db.query(Release)\n .filter(Release.project == project)\n .order_by(Release._pypi_ordering.desc())\n .limit(10)\n .all()\n )\n\n maintainers = [\n role\n for role in (\n request.db.query(Role)\n .join(User)\n .filter(Role.project == project)\n .distinct(User.username)\n .all()\n )\n ]\n maintainers = sorted(maintainers, key=lambda x: (x.role_name, x.user.username))\n journal = [\n entry\n for entry in (\n request.db.query(JournalEntry)\n .options(joinedload(\"submitted_by\"))\n .filter(JournalEntry.name == 
project.name)\n .order_by(JournalEntry.submitted_date.desc(), JournalEntry.id.desc())\n .limit(30)\n )\n ]\n\n return {\n \"project\": project,\n \"releases\": releases,\n \"maintainers\": maintainers,\n \"journal\": journal,\n \"ONE_MB\": ONE_MB,\n \"MAX_FILESIZE\": MAX_FILESIZE,\n \"ONE_GB\": ONE_GB,\n \"MAX_PROJECT_SIZE\": MAX_PROJECT_SIZE,\n \"UPLOAD_LIMIT_CAP\": UPLOAD_LIMIT_CAP,\n }\n\n\n@view_config(\n route_name=\"admin.project.releases\",\n renderer=\"admin/projects/releases_list.html\",\n permission=\"moderator\",\n request_method=\"GET\",\n uses_session=True,\n)\ndef releases_list(project, request):\n q = request.params.get(\"q\")\n project_name = request.matchdict[\"project_name\"]\n\n if project_name != project.normalized_name:\n raise HTTPMovedPermanently(\n request.current_route_path(project_name=project.normalized_name)\n )\n\n try:\n page_num = int(request.params.get(\"page\", 1))\n except ValueError:\n raise HTTPBadRequest(\"'page' must be an integer.\") from None\n\n releases_query = (\n request.db.query(Release)\n .filter(Release.project == project)\n .order_by(Release._pypi_ordering.desc())\n )\n\n if q:\n terms = shlex.split(q)\n\n filters = []\n for term in terms:\n if \":\" in term:\n field, value = term.split(\":\", 1)\n if field.lower() == \"version\":\n filters.append(Release.version.ilike(value))\n\n filters = filters or [True]\n releases_query = releases_query.filter(or_(False, *filters))\n\n releases = SQLAlchemyORMPage(\n releases_query,\n page=page_num,\n items_per_page=25,\n url_maker=paginate_url_factory(request),\n )\n\n return {\"releases\": releases, \"project\": project, \"query\": q}\n\n\n@view_config(\n route_name=\"admin.project.release\",\n renderer=\"admin/projects/release_detail.html\",\n permission=\"moderator\",\n request_method=\"GET\",\n uses_session=True,\n)\ndef release_detail(release, request):\n journals = (\n request.db.query(JournalEntry)\n .options(joinedload(\"submitted_by\"))\n .filter(JournalEntry.name == release.project.name)\n .filter(JournalEntry.version == release.version)\n .order_by(JournalEntry.submitted_date.desc(), JournalEntry.id.desc())\n .all()\n )\n return {\"release\": release, \"journals\": journals}\n\n\n@view_config(\n route_name=\"admin.project.journals\",\n renderer=\"admin/projects/journals_list.html\",\n permission=\"moderator\",\n request_method=\"GET\",\n uses_session=True,\n)\ndef journals_list(project, request):\n q = request.params.get(\"q\")\n project_name = request.matchdict[\"project_name\"]\n\n if project_name != project.normalized_name:\n raise HTTPMovedPermanently(\n request.current_route_path(project_name=project.normalized_name)\n )\n\n try:\n page_num = int(request.params.get(\"page\", 1))\n except ValueError:\n raise HTTPBadRequest(\"'page' must be an integer.\") from None\n\n journals_query = (\n request.db.query(JournalEntry)\n .options(joinedload(\"submitted_by\"))\n .filter(JournalEntry.name == project.name)\n .order_by(JournalEntry.submitted_date.desc(), JournalEntry.id.desc())\n )\n\n if q:\n terms = shlex.split(q)\n\n filters = []\n for term in terms:\n if \":\" in term:\n field, value = term.split(\":\", 1)\n if field.lower() == \"version\":\n filters.append(JournalEntry.version.ilike(value))\n\n filters = filters or [True]\n journals_query = journals_query.filter(or_(False, *filters))\n\n journals = SQLAlchemyORMPage(\n journals_query,\n page=page_num,\n items_per_page=25,\n url_maker=paginate_url_factory(request),\n )\n\n return {\"journals\": journals, \"project\": project, \"query\": 
q}\n\n\n@view_config(\n route_name=\"admin.project.set_upload_limit\",\n permission=\"moderator\",\n request_method=\"POST\",\n uses_session=True,\n require_methods=False,\n)\ndef set_upload_limit(project, request):\n upload_limit = request.POST.get(\"upload_limit\", \"\")\n\n # Update the project's upload limit.\n # If the upload limit is an empty string or otherwise falsy, just set the\n # limit to None, indicating the default limit.\n if not upload_limit:\n upload_limit = None\n else:\n try:\n upload_limit = int(upload_limit)\n except ValueError:\n raise HTTPBadRequest(\n f\"Invalid value for upload limit: {upload_limit}, \"\n f\"must be integer or empty string.\"\n )\n\n # The form is in MB, but the database field is in bytes.\n upload_limit *= ONE_MB\n\n if upload_limit > UPLOAD_LIMIT_CAP:\n raise HTTPBadRequest(\n f\"Upload limit can not be more than the overall limit of \"\n f\"{UPLOAD_LIMIT_CAP / ONE_MB}MiB.\"\n )\n\n if upload_limit < MAX_FILESIZE:\n raise HTTPBadRequest(\n f\"Upload limit can not be less than the default limit of \"\n f\"{MAX_FILESIZE / ONE_MB}MB.\"\n )\n\n project.upload_limit = upload_limit\n\n request.session.flash(f\"Set the upload limit on {project.name!r}\", queue=\"success\")\n\n return HTTPSeeOther(\n request.route_path(\"admin.project.detail\", project_name=project.normalized_name)\n )\n\n\n@view_config(\n route_name=\"admin.project.set_total_size_limit\",\n permission=\"moderator\",\n request_method=\"POST\",\n uses_session=True,\n require_methods=False,\n)\ndef set_total_size_limit(project, request):\n total_size_limit = request.POST.get(\"total_size_limit\", \"\")\n\n if not total_size_limit:\n total_size_limit = None\n else:\n try:\n total_size_limit = int(total_size_limit)\n except ValueError:\n raise HTTPBadRequest(\n f\"Invalid value for total size limit: {total_size_limit}, \"\n f\"must be integer or empty string.\"\n )\n\n # The form is in GB, but the database field is in bytes.\n total_size_limit *= ONE_GB\n\n if total_size_limit < MAX_PROJECT_SIZE:\n raise HTTPBadRequest(\n f\"Total project size can not be less than the default limit of \"\n f\"{MAX_PROJECT_SIZE / ONE_GB}GB.\"\n )\n\n project.total_size_limit = total_size_limit\n\n request.session.flash(\n f\"Set the total size limit on {project.name!r}\", queue=\"success\"\n )\n\n return HTTPSeeOther(\n request.route_path(\"admin.project.detail\", project_name=project.normalized_name)\n )\n\n\n@view_config(\n route_name=\"admin.project.add_role\",\n permission=\"moderator\",\n request_method=\"POST\",\n uses_session=True,\n require_methods=False,\n)\ndef add_role(project, request):\n username = request.POST.get(\"username\")\n if not username:\n request.session.flash(\"Provide a username\", queue=\"error\")\n raise HTTPSeeOther(\n request.route_path(\n \"admin.project.detail\", project_name=project.normalized_name\n )\n )\n\n try:\n user = request.db.query(User).filter(User.username == username).one()\n except NoResultFound:\n request.session.flash(f\"Unknown username '{username}'\", queue=\"error\")\n raise HTTPSeeOther(\n request.route_path(\n \"admin.project.detail\", project_name=project.normalized_name\n )\n )\n\n role_name = request.POST.get(\"role_name\")\n if not role_name:\n request.session.flash(\"Provide a role\", queue=\"error\")\n raise HTTPSeeOther(\n request.route_path(\n \"admin.project.detail\", project_name=project.normalized_name\n )\n )\n\n already_there = (\n request.db.query(Role)\n .filter(Role.user == user, Role.project == project)\n .count()\n )\n\n if already_there > 
0:\n request.session.flash(\n f\"User '{user.username}' already has a role on this project\", queue=\"error\"\n )\n raise HTTPSeeOther(\n request.route_path(\n \"admin.project.detail\", project_name=project.normalized_name\n )\n )\n\n request.db.add(\n JournalEntry(\n name=project.name,\n action=f\"add {role_name} {user.username}\",\n submitted_by=request.user,\n submitted_from=request.remote_addr,\n )\n )\n\n request.db.add(Role(role_name=role_name, user=user, project=project))\n\n request.session.flash(\n f\"Added '{user.username}' as '{role_name}' on '{project.name}'\", queue=\"success\"\n )\n return HTTPSeeOther(\n request.route_path(\"admin.project.detail\", project_name=project.normalized_name)\n )\n\n\n@view_config(\n route_name=\"admin.project.delete_role\",\n permission=\"moderator\",\n request_method=\"POST\",\n uses_session=True,\n require_methods=False,\n)\ndef delete_role(project, request):\n confirm = request.POST.get(\"username\")\n role_id = request.matchdict.get(\"role_id\")\n\n role = request.db.query(Role).get(role_id)\n if not role:\n request.session.flash(\"This role no longer exists\", queue=\"error\")\n raise HTTPSeeOther(\n request.route_path(\n \"admin.project.detail\", project_name=project.normalized_name\n )\n )\n\n if not confirm or confirm != role.user.username:\n request.session.flash(\"Confirm the request\", queue=\"error\")\n raise HTTPSeeOther(\n request.route_path(\n \"admin.project.detail\", project_name=project.normalized_name\n )\n )\n\n request.session.flash(\n f\"Removed '{role.user.username}' as '{role.role_name}' on '{project.name}'\",\n queue=\"success\",\n )\n request.db.add(\n JournalEntry(\n name=project.name,\n action=f\"remove {role.role_name} {role.user.username}\",\n submitted_by=request.user,\n submitted_from=request.remote_addr,\n )\n )\n\n request.db.delete(role)\n\n return HTTPSeeOther(\n request.route_path(\"admin.project.detail\", project_name=project.normalized_name)\n )\n\n\n@view_config(\n route_name=\"admin.project.delete\",\n permission=\"admin\",\n request_method=\"POST\",\n uses_session=True,\n require_methods=False,\n)\ndef delete_project(project, request):\n confirm_project(project, request, fail_route=\"admin.project.detail\")\n remove_project(project, request)\n\n return HTTPSeeOther(request.route_path(\"admin.project.list\"))\n\n\n@view_config(\n route_name=\"admin.project.reindex\",\n permission=\"moderator\",\n request_method=\"GET\",\n uses_session=True,\n require_methods=False,\n)\ndef reindex_project(project, request):\n request.task(_reindex_project).delay(project.normalized_name)\n request.session.flash(\n f\"Task sent to reindex the project {project.name!r}\", queue=\"success\"\n )\n return HTTPSeeOther(\n request.route_path(\"admin.project.detail\", project_name=project.normalized_name)\n )\n",
"path": "warehouse/admin/views/projects.py"
}
] | [
{
"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport shlex\n\nfrom paginate_sqlalchemy import SqlalchemyOrmPage as SQLAlchemyORMPage\nfrom pyramid.httpexceptions import HTTPBadRequest, HTTPMovedPermanently, HTTPSeeOther\nfrom pyramid.view import view_config\nfrom sqlalchemy import func, or_\nfrom sqlalchemy.exc import NoResultFound\nfrom sqlalchemy.orm import joinedload\n\nfrom warehouse.accounts.models import User\nfrom warehouse.forklift.legacy import MAX_FILESIZE, MAX_PROJECT_SIZE\nfrom warehouse.packaging.models import JournalEntry, Project, Release, Role\nfrom warehouse.search.tasks import reindex_project as _reindex_project\nfrom warehouse.utils.paginate import paginate_url_factory\nfrom warehouse.utils.project import confirm_project, remove_project\n\nONE_MB = 1024 * 1024 # bytes\nONE_GB = 1024 * 1024 * 1024 # bytes\nUPLOAD_LIMIT_CAP = 1073741824 # 1 GiB\n\n\n@view_config(\n route_name=\"admin.project.list\",\n renderer=\"admin/projects/list.html\",\n permission=\"moderator\",\n request_method=\"GET\",\n uses_session=True,\n)\ndef project_list(request):\n q = request.params.get(\"q\")\n\n try:\n page_num = int(request.params.get(\"page\", 1))\n except ValueError:\n raise HTTPBadRequest(\"'page' must be an integer.\") from None\n\n projects_query = request.db.query(Project).order_by(Project.normalized_name)\n exact_match = None\n\n if q:\n projects_query = projects_query.filter(\n func.ultranormalize_name(Project.name) == func.ultranormalize_name(q)\n )\n\n exact_match = (\n request.db.query(Project).filter(Project.normalized_name == q).one_or_none()\n )\n\n projects = SQLAlchemyORMPage(\n projects_query,\n page=page_num,\n items_per_page=25,\n url_maker=paginate_url_factory(request),\n )\n\n return {\"projects\": projects, \"query\": q, \"exact_match\": exact_match}\n\n\n@view_config(\n route_name=\"admin.project.detail\",\n renderer=\"admin/projects/detail.html\",\n permission=\"moderator\",\n request_method=\"GET\",\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n)\n@view_config(\n route_name=\"admin.project.detail\",\n renderer=\"admin/projects/detail.html\",\n permission=\"admin\",\n request_method=\"POST\",\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n)\ndef project_detail(project, request):\n project_name = request.matchdict[\"project_name\"]\n\n if project_name != project.normalized_name:\n raise HTTPMovedPermanently(\n request.current_route_path(project_name=project.normalized_name)\n )\n\n releases = (\n request.db.query(Release)\n .filter(Release.project == project)\n .order_by(Release._pypi_ordering.desc())\n .limit(10)\n .all()\n )\n\n maintainers = [\n role\n for role in (\n request.db.query(Role)\n .join(User)\n .filter(Role.project == project)\n .distinct(User.username)\n .all()\n )\n ]\n maintainers = sorted(maintainers, key=lambda x: (x.role_name, x.user.username))\n journal = [\n entry\n for entry in (\n request.db.query(JournalEntry)\n .options(joinedload(\"submitted_by\"))\n .filter(JournalEntry.name == 
project.name)\n .order_by(JournalEntry.submitted_date.desc(), JournalEntry.id.desc())\n .limit(30)\n )\n ]\n\n return {\n \"project\": project,\n \"releases\": releases,\n \"maintainers\": maintainers,\n \"journal\": journal,\n \"oidc_publishers\": project.oidc_publishers,\n \"ONE_MB\": ONE_MB,\n \"MAX_FILESIZE\": MAX_FILESIZE,\n \"ONE_GB\": ONE_GB,\n \"MAX_PROJECT_SIZE\": MAX_PROJECT_SIZE,\n \"UPLOAD_LIMIT_CAP\": UPLOAD_LIMIT_CAP,\n }\n\n\n@view_config(\n route_name=\"admin.project.releases\",\n renderer=\"admin/projects/releases_list.html\",\n permission=\"moderator\",\n request_method=\"GET\",\n uses_session=True,\n)\ndef releases_list(project, request):\n q = request.params.get(\"q\")\n project_name = request.matchdict[\"project_name\"]\n\n if project_name != project.normalized_name:\n raise HTTPMovedPermanently(\n request.current_route_path(project_name=project.normalized_name)\n )\n\n try:\n page_num = int(request.params.get(\"page\", 1))\n except ValueError:\n raise HTTPBadRequest(\"'page' must be an integer.\") from None\n\n releases_query = (\n request.db.query(Release)\n .filter(Release.project == project)\n .order_by(Release._pypi_ordering.desc())\n )\n\n if q:\n terms = shlex.split(q)\n\n filters = []\n for term in terms:\n if \":\" in term:\n field, value = term.split(\":\", 1)\n if field.lower() == \"version\":\n filters.append(Release.version.ilike(value))\n\n filters = filters or [True]\n releases_query = releases_query.filter(or_(False, *filters))\n\n releases = SQLAlchemyORMPage(\n releases_query,\n page=page_num,\n items_per_page=25,\n url_maker=paginate_url_factory(request),\n )\n\n return {\"releases\": releases, \"project\": project, \"query\": q}\n\n\n@view_config(\n route_name=\"admin.project.release\",\n renderer=\"admin/projects/release_detail.html\",\n permission=\"moderator\",\n request_method=\"GET\",\n uses_session=True,\n)\ndef release_detail(release, request):\n journals = (\n request.db.query(JournalEntry)\n .options(joinedload(\"submitted_by\"))\n .filter(JournalEntry.name == release.project.name)\n .filter(JournalEntry.version == release.version)\n .order_by(JournalEntry.submitted_date.desc(), JournalEntry.id.desc())\n .all()\n )\n return {\"release\": release, \"journals\": journals}\n\n\n@view_config(\n route_name=\"admin.project.journals\",\n renderer=\"admin/projects/journals_list.html\",\n permission=\"moderator\",\n request_method=\"GET\",\n uses_session=True,\n)\ndef journals_list(project, request):\n q = request.params.get(\"q\")\n project_name = request.matchdict[\"project_name\"]\n\n if project_name != project.normalized_name:\n raise HTTPMovedPermanently(\n request.current_route_path(project_name=project.normalized_name)\n )\n\n try:\n page_num = int(request.params.get(\"page\", 1))\n except ValueError:\n raise HTTPBadRequest(\"'page' must be an integer.\") from None\n\n journals_query = (\n request.db.query(JournalEntry)\n .options(joinedload(\"submitted_by\"))\n .filter(JournalEntry.name == project.name)\n .order_by(JournalEntry.submitted_date.desc(), JournalEntry.id.desc())\n )\n\n if q:\n terms = shlex.split(q)\n\n filters = []\n for term in terms:\n if \":\" in term:\n field, value = term.split(\":\", 1)\n if field.lower() == \"version\":\n filters.append(JournalEntry.version.ilike(value))\n\n filters = filters or [True]\n journals_query = journals_query.filter(or_(False, *filters))\n\n journals = SQLAlchemyORMPage(\n journals_query,\n page=page_num,\n items_per_page=25,\n url_maker=paginate_url_factory(request),\n )\n\n return 
{\"journals\": journals, \"project\": project, \"query\": q}\n\n\n@view_config(\n route_name=\"admin.project.set_upload_limit\",\n permission=\"moderator\",\n request_method=\"POST\",\n uses_session=True,\n require_methods=False,\n)\ndef set_upload_limit(project, request):\n upload_limit = request.POST.get(\"upload_limit\", \"\")\n\n # Update the project's upload limit.\n # If the upload limit is an empty string or otherwise falsy, just set the\n # limit to None, indicating the default limit.\n if not upload_limit:\n upload_limit = None\n else:\n try:\n upload_limit = int(upload_limit)\n except ValueError:\n raise HTTPBadRequest(\n f\"Invalid value for upload limit: {upload_limit}, \"\n f\"must be integer or empty string.\"\n )\n\n # The form is in MB, but the database field is in bytes.\n upload_limit *= ONE_MB\n\n if upload_limit > UPLOAD_LIMIT_CAP:\n raise HTTPBadRequest(\n f\"Upload limit can not be more than the overall limit of \"\n f\"{UPLOAD_LIMIT_CAP / ONE_MB}MiB.\"\n )\n\n if upload_limit < MAX_FILESIZE:\n raise HTTPBadRequest(\n f\"Upload limit can not be less than the default limit of \"\n f\"{MAX_FILESIZE / ONE_MB}MB.\"\n )\n\n project.upload_limit = upload_limit\n\n request.session.flash(f\"Set the upload limit on {project.name!r}\", queue=\"success\")\n\n return HTTPSeeOther(\n request.route_path(\"admin.project.detail\", project_name=project.normalized_name)\n )\n\n\n@view_config(\n route_name=\"admin.project.set_total_size_limit\",\n permission=\"moderator\",\n request_method=\"POST\",\n uses_session=True,\n require_methods=False,\n)\ndef set_total_size_limit(project, request):\n total_size_limit = request.POST.get(\"total_size_limit\", \"\")\n\n if not total_size_limit:\n total_size_limit = None\n else:\n try:\n total_size_limit = int(total_size_limit)\n except ValueError:\n raise HTTPBadRequest(\n f\"Invalid value for total size limit: {total_size_limit}, \"\n f\"must be integer or empty string.\"\n )\n\n # The form is in GB, but the database field is in bytes.\n total_size_limit *= ONE_GB\n\n if total_size_limit < MAX_PROJECT_SIZE:\n raise HTTPBadRequest(\n f\"Total project size can not be less than the default limit of \"\n f\"{MAX_PROJECT_SIZE / ONE_GB}GB.\"\n )\n\n project.total_size_limit = total_size_limit\n\n request.session.flash(\n f\"Set the total size limit on {project.name!r}\", queue=\"success\"\n )\n\n return HTTPSeeOther(\n request.route_path(\"admin.project.detail\", project_name=project.normalized_name)\n )\n\n\n@view_config(\n route_name=\"admin.project.add_role\",\n permission=\"moderator\",\n request_method=\"POST\",\n uses_session=True,\n require_methods=False,\n)\ndef add_role(project, request):\n username = request.POST.get(\"username\")\n if not username:\n request.session.flash(\"Provide a username\", queue=\"error\")\n raise HTTPSeeOther(\n request.route_path(\n \"admin.project.detail\", project_name=project.normalized_name\n )\n )\n\n try:\n user = request.db.query(User).filter(User.username == username).one()\n except NoResultFound:\n request.session.flash(f\"Unknown username '{username}'\", queue=\"error\")\n raise HTTPSeeOther(\n request.route_path(\n \"admin.project.detail\", project_name=project.normalized_name\n )\n )\n\n role_name = request.POST.get(\"role_name\")\n if not role_name:\n request.session.flash(\"Provide a role\", queue=\"error\")\n raise HTTPSeeOther(\n request.route_path(\n \"admin.project.detail\", project_name=project.normalized_name\n )\n )\n\n already_there = (\n request.db.query(Role)\n .filter(Role.user == user, 
Role.project == project)\n .count()\n )\n\n if already_there > 0:\n request.session.flash(\n f\"User '{user.username}' already has a role on this project\", queue=\"error\"\n )\n raise HTTPSeeOther(\n request.route_path(\n \"admin.project.detail\", project_name=project.normalized_name\n )\n )\n\n request.db.add(\n JournalEntry(\n name=project.name,\n action=f\"add {role_name} {user.username}\",\n submitted_by=request.user,\n submitted_from=request.remote_addr,\n )\n )\n\n request.db.add(Role(role_name=role_name, user=user, project=project))\n\n request.session.flash(\n f\"Added '{user.username}' as '{role_name}' on '{project.name}'\", queue=\"success\"\n )\n return HTTPSeeOther(\n request.route_path(\"admin.project.detail\", project_name=project.normalized_name)\n )\n\n\n@view_config(\n route_name=\"admin.project.delete_role\",\n permission=\"moderator\",\n request_method=\"POST\",\n uses_session=True,\n require_methods=False,\n)\ndef delete_role(project, request):\n confirm = request.POST.get(\"username\")\n role_id = request.matchdict.get(\"role_id\")\n\n role = request.db.query(Role).get(role_id)\n if not role:\n request.session.flash(\"This role no longer exists\", queue=\"error\")\n raise HTTPSeeOther(\n request.route_path(\n \"admin.project.detail\", project_name=project.normalized_name\n )\n )\n\n if not confirm or confirm != role.user.username:\n request.session.flash(\"Confirm the request\", queue=\"error\")\n raise HTTPSeeOther(\n request.route_path(\n \"admin.project.detail\", project_name=project.normalized_name\n )\n )\n\n request.session.flash(\n f\"Removed '{role.user.username}' as '{role.role_name}' on '{project.name}'\",\n queue=\"success\",\n )\n request.db.add(\n JournalEntry(\n name=project.name,\n action=f\"remove {role.role_name} {role.user.username}\",\n submitted_by=request.user,\n submitted_from=request.remote_addr,\n )\n )\n\n request.db.delete(role)\n\n return HTTPSeeOther(\n request.route_path(\"admin.project.detail\", project_name=project.normalized_name)\n )\n\n\n@view_config(\n route_name=\"admin.project.delete\",\n permission=\"admin\",\n request_method=\"POST\",\n uses_session=True,\n require_methods=False,\n)\ndef delete_project(project, request):\n confirm_project(project, request, fail_route=\"admin.project.detail\")\n remove_project(project, request)\n\n return HTTPSeeOther(request.route_path(\"admin.project.list\"))\n\n\n@view_config(\n route_name=\"admin.project.reindex\",\n permission=\"moderator\",\n request_method=\"GET\",\n uses_session=True,\n require_methods=False,\n)\ndef reindex_project(project, request):\n request.task(_reindex_project).delay(project.normalized_name)\n request.session.flash(\n f\"Task sent to reindex the project {project.name!r}\", queue=\"success\"\n )\n return HTTPSeeOther(\n request.route_path(\"admin.project.detail\", project_name=project.normalized_name)\n )\n",
"path": "warehouse/admin/views/projects.py"
}
] | diff --git a/tests/unit/admin/views/test_projects.py b/tests/unit/admin/views/test_projects.py
index 826b9ac4837e..25eabbb6c199 100644
--- a/tests/unit/admin/views/test_projects.py
+++ b/tests/unit/admin/views/test_projects.py
@@ -19,6 +19,7 @@
from pyramid.httpexceptions import HTTPBadRequest, HTTPMovedPermanently, HTTPSeeOther
+from tests.common.db.oidc import GitHubPublisherFactory
from warehouse.admin.views import projects as views
from warehouse.packaging.models import Project, Role
from warehouse.search.tasks import reindex_project
@@ -84,6 +85,7 @@ def test_gets_project(self, db_request):
[RoleFactory(project=project) for _ in range(5)],
key=lambda x: (x.role_name, x.user.username),
)
+ oidc_publishers = [GitHubPublisherFactory(projects=[project]) for _ in range(5)]
db_request.matchdict["project_name"] = str(project.normalized_name)
result = views.project_detail(project, db_request)
@@ -92,6 +94,7 @@ def test_gets_project(self, db_request):
"releases": [],
"maintainers": roles,
"journal": journals[:30],
+ "oidc_publishers": oidc_publishers,
"ONE_MB": views.ONE_MB,
"MAX_FILESIZE": views.MAX_FILESIZE,
"MAX_PROJECT_SIZE": views.MAX_PROJECT_SIZE,
diff --git a/warehouse/admin/templates/admin/projects/detail.html b/warehouse/admin/templates/admin/projects/detail.html
index ab7c1433d824..26620eb89597 100644
--- a/warehouse/admin/templates/admin/projects/detail.html
+++ b/warehouse/admin/templates/admin/projects/detail.html
@@ -233,6 +233,36 @@ <h4 class="modal-title" id="exampleModalLabel">Remove role for {{ role.user.user
</div>
</div> <!-- .card #releases -->
+{% if oidc_publishers %}
+<div class="card card-info" id="oidc-publishers">
+ <div class="card-header">OpenID Connect Publishers</div>
+ <div class="card-body">
+ <div class="table-responsive p-0">
+ <table class="table table-hover table-striped">
+ <thead>
+ <tr>
+ <th>Publisher name</th>
+ <th>URL</th>
+ <th>repr</th>
+ </tr>
+ <tbody>
+ {% for pub in oidc_publishers %}
+ <tr>
+ <td>{{ pub.publisher_name }}</td>
+ <td><a href="{{ pub.publisher_url }}">{{ pub.publisher_url }}</a></td>
+ <td><code>{{ pub }}</code></td>
+ </tr>
+ {% endfor %}
+ </tbody>
+ </thead>
+ </table>
+ </div>
+ </div>
+</div> <!-- .card #oidc-publishers -->
+{% else %}
+No publishers configured.
+{% endif %}
+
<div class="card card-primary card-outline collapsed-card" id="journals">
<div class="card-header">
<h3 class="card-title">Journals</h3>
diff --git a/warehouse/admin/templates/admin/users/detail.html b/warehouse/admin/templates/admin/users/detail.html
index 7e34d0715fd0..db3930a8a11f 100644
--- a/warehouse/admin/templates/admin/users/detail.html
+++ b/warehouse/admin/templates/admin/users/detail.html
@@ -411,6 +411,39 @@ <h3 class="card-title">Projects</h3>
</div>
</div> <!-- .card -->
+ {% if user.pending_oidc_publishers %}
+ <div class="card">
+ <div class="card-header with-border">
+ <h3 class="card-title">Pending OpenID Connect Publishers</h3>
+ </div>
+
+ <div class="card-body">
+ <table class="table table-hover" id="pending-oidc-publishers">
+ <thead>
+ <tr>
+ <th scope="col">Project name</th>
+ <th scope="col">Publisher name</th>
+ <th scope="col">URL</th>
+ <th scope="col">repr</th>
+ </tr>
+ </thead>
+ <tbody>
+ {% for pub in user.pending_oidc_publishers %}
+ <tr>
+ <td>{{ pub.project_name }}</td>
+ <td>{{ pub.publisher_name }}</td>
+ <td><a href="{{ pub.publisher_url }}">{{ pub.publisher_url }}</a></td>
+ <td><code>{{ pub }}</code></td>
+ </tr>
+ {% endfor %}
+ </tbody>
+ </table>
+ </div>
+ </div> <!-- .card -->
+ {% else %}
+ No publishers configured.
+ {% endif %}
+
<div class="card">
<div class="card-header with-border">
<h3 class="card-title">Account activity</h3>
diff --git a/warehouse/admin/views/projects.py b/warehouse/admin/views/projects.py
index 625756550c24..8d76a35cf7ba 100644
--- a/warehouse/admin/views/projects.py
+++ b/warehouse/admin/views/projects.py
@@ -129,6 +129,7 @@ def project_detail(project, request):
"releases": releases,
"maintainers": maintainers,
"journal": journal,
+ "oidc_publishers": project.oidc_publishers,
"ONE_MB": ONE_MB,
"MAX_FILESIZE": MAX_FILESIZE,
"ONE_GB": ONE_GB,
|
ethereum__web3.py-3228 | Add API to iterate through all events in a contract
### What was wrong?
No easy way to get all events for a contract (while still parsing the results). See [this StackExchange question](https://ethereum.stackexchange.com/questions/54473/how-to-read-allevents-using-python-web3-theres-capability-in-web3-js). One option is to iterate over all the events, but it's a bit awkward right now. I think the easiest way is:
```py
from web3.contract import ContractEvent
filters = [
event.createFilter(fromBlock='latest')
for event in myContract.events
if isinstance(event, ContractEvent)
]
```
### How can it be fixed?
Some options:
- Implement `__iter__` on `Contract.events` to iterate through all events in the ABI (my favorite option, except that it's inconsistent with `contract.functions`, which is doing the wrong thing IMO)
- Add a new `Contract.all_events()` equivalent to `Contract.all_functions()`
Then the example changes to:
```py
filters = [
event.createFilter(fromBlock='latest')
for event in myContract.events
]
```
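For the first option, here is a minimal sketch of what an iterable events namespace could look like — `IterableContractEvents` and the `_events` attribute are illustrative assumptions about the container's internals, not existing web3.py API:
```py
from typing import Iterator

from web3.contract.base_contract import BaseContractEvents
from web3.contract.contract import ContractEvent


class IterableContractEvents(BaseContractEvents):
    def __iter__(self) -> Iterator[ContractEvent]:
        # Yield the per-event helper that already exists as an attribute,
        # i.e. the object normally reached via `contract.events.Transfer`.
        # `_events` (the parsed event ABI entries) is an assumed attribute name.
        for event_abi in getattr(self, "_events", []):
            yield getattr(self, event_abi["name"])
```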
---
Of course, we could also implement `contract.create_filter()` like web3.js's `contract.events.allEvents`. I kind of like that the filters are event specific right now, though. I don't think it's too big a deal to require callers to write a filter loop on events.
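As a rough point of comparison, callers can already approximate web3.js's `allEvents` with a single log filter scoped to the contract address and no topic filter; decoding each raw log back into a specific event is left to the caller (variable names below are illustrative):
```py
# One filter that matches every event the contract emits.
all_events_filter = w3.eth.filter({
    "address": myContract.address,
    "fromBlock": "latest",
})

for raw_log in all_events_filter.get_new_entries():
    # Raw, undecoded log entries: topics[0] identifies the event type.
    print(raw_log["topics"], raw_log["data"])
```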
| [
{
"content": "import copy\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n Iterable,\n List,\n Optional,\n Sequence,\n Type,\n cast,\n)\n\nfrom eth_typing import (\n ChecksumAddress,\n)\nfrom eth_utils import (\n combomethod,\n)\nfrom eth_utils.toolz import (\n partial,\n)\nfrom hexbytes import (\n HexBytes,\n)\n\nfrom web3._utils.abi import (\n fallback_func_abi_exists,\n filter_by_type,\n receive_func_abi_exists,\n)\nfrom web3._utils.compat import (\n Self,\n)\nfrom web3._utils.contracts import (\n parse_block_identifier,\n)\nfrom web3._utils.datatypes import (\n PropertyCheckingFactory,\n)\nfrom web3._utils.events import (\n EventFilterBuilder,\n get_event_data,\n)\nfrom web3._utils.filters import (\n LogFilter,\n)\nfrom web3._utils.function_identifiers import (\n FallbackFn,\n ReceiveFn,\n)\nfrom web3._utils.normalizers import (\n normalize_abi,\n normalize_address,\n normalize_bytecode,\n)\nfrom web3._utils.transactions import (\n fill_transaction_defaults,\n)\nfrom web3.contract.base_contract import (\n BaseContract,\n BaseContractCaller,\n BaseContractConstructor,\n BaseContractEvent,\n BaseContractEvents,\n BaseContractFunction,\n BaseContractFunctions,\n NonExistentFallbackFunction,\n NonExistentReceiveFunction,\n)\nfrom web3.contract.utils import (\n build_transaction_for_function,\n call_contract_function,\n estimate_gas_for_function,\n find_functions_by_identifier,\n get_function_by_identifier,\n transact_with_contract_function,\n)\nfrom web3.exceptions import (\n ABIFunctionNotFound,\n NoABIFound,\n NoABIFunctionsFound,\n Web3ValidationError,\n)\nfrom web3.types import (\n ABI,\n BlockIdentifier,\n EventData,\n StateOverride,\n TxParams,\n)\nfrom web3.utils import (\n get_abi_input_names,\n)\n\nif TYPE_CHECKING:\n from ens import ENS # noqa: F401\n from web3 import Web3 # noqa: F401\n\n\nclass ContractEvent(BaseContractEvent):\n # mypy types\n w3: \"Web3\"\n\n @combomethod\n def get_logs(\n self,\n argument_filters: Optional[Dict[str, Any]] = None,\n fromBlock: Optional[BlockIdentifier] = None,\n toBlock: Optional[BlockIdentifier] = None,\n block_hash: Optional[HexBytes] = None,\n ) -> Iterable[EventData]:\n \"\"\"Get events for this contract instance using eth_getLogs API.\n\n This is a stateless method, as opposed to create_filter.\n It can be safely called against nodes which do not provide\n eth_newFilter API, like Infura nodes.\n\n If there are many events,\n like ``Transfer`` events for a popular token,\n the Ethereum node might be overloaded and timeout\n on the underlying JSON-RPC call.\n\n Example - how to get all ERC-20 token transactions\n for the latest 10 blocks:\n\n .. code-block:: python\n\n from = max(mycontract.web3.eth.block_number - 10, 1)\n to = mycontract.web3.eth.block_number\n\n events = mycontract.events.Transfer.get_logs(fromBlock=from, toBlock=to)\n\n for e in events:\n print(e[\"args\"][\"from\"],\n e[\"args\"][\"to\"],\n e[\"args\"][\"value\"])\n\n The returned processed log values will look like:\n\n .. code-block:: python\n\n (\n AttributeDict({\n 'args': AttributeDict({}),\n 'event': 'LogNoArguments',\n 'logIndex': 0,\n 'transactionIndex': 0,\n 'transactionHash': HexBytes('...'),\n 'address': '0xF2E246BB76DF876Cef8b38ae84130F4F55De395b',\n 'blockHash': HexBytes('...'),\n 'blockNumber': 3\n }),\n AttributeDict(...),\n ...\n )\n\n See also: :func:`web3.middleware.filter.LocalFilterMiddleware`.\n\n :param argument_filters: Filter by argument values. 
Indexed arguments are\n filtered by the node while non-indexed arguments are filtered by the library.\n :param fromBlock: block number or \"latest\", defaults to \"latest\"\n :param toBlock: block number or \"latest\". Defaults to \"latest\"\n :param block_hash: block hash. block_hash cannot be set at the\n same time as fromBlock or toBlock\n :yield: Tuple of :class:`AttributeDict` instances\n \"\"\"\n event_abi = self._get_event_abi()\n\n # validate ``argument_filters`` if present\n if argument_filters is not None:\n event_arg_names = get_abi_input_names(event_abi)\n if not all(arg in event_arg_names for arg in argument_filters.keys()):\n raise Web3ValidationError(\n \"When filtering by argument names, all argument names must be \"\n \"present in the contract's event ABI.\"\n )\n\n _filter_params = self._get_event_filter_params(\n event_abi, argument_filters, fromBlock, toBlock, block_hash\n )\n # call JSON-RPC API\n logs = self.w3.eth.get_logs(_filter_params)\n\n # convert raw binary data to Python proxy objects as described by ABI:\n all_event_logs = tuple(\n get_event_data(self.w3.codec, event_abi, entry) for entry in logs\n )\n filtered_logs = self._process_get_logs_argument_filters(\n event_abi,\n all_event_logs,\n argument_filters,\n )\n return filtered_logs\n\n @combomethod\n def create_filter(\n self,\n *, # PEP 3102\n argument_filters: Optional[Dict[str, Any]] = None,\n fromBlock: Optional[BlockIdentifier] = None,\n toBlock: BlockIdentifier = \"latest\",\n address: Optional[ChecksumAddress] = None,\n topics: Optional[Sequence[Any]] = None,\n ) -> LogFilter:\n \"\"\"\n Create filter object that tracks logs emitted by this contract event.\n \"\"\"\n filter_builder = EventFilterBuilder(self._get_event_abi(), self.w3.codec)\n self._set_up_filter_builder(\n argument_filters,\n fromBlock,\n toBlock,\n address,\n topics,\n filter_builder,\n )\n log_filter = filter_builder.deploy(self.w3)\n log_filter.log_entry_formatter = get_event_data(\n self.w3.codec, self._get_event_abi()\n )\n log_filter.builder = filter_builder\n\n return log_filter\n\n @combomethod\n def build_filter(self) -> EventFilterBuilder:\n builder = EventFilterBuilder(\n self._get_event_abi(),\n self.w3.codec,\n formatter=get_event_data(self.w3.codec, self._get_event_abi()),\n )\n builder.address = self.address\n return builder\n\n\nclass ContractEvents(BaseContractEvents):\n def __init__(\n self, abi: ABI, w3: \"Web3\", address: Optional[ChecksumAddress] = None\n ) -> None:\n super().__init__(abi, w3, ContractEvent, address)\n\n\nclass ContractFunction(BaseContractFunction):\n # mypy types\n w3: \"Web3\"\n\n def __call__(self, *args: Any, **kwargs: Any) -> \"ContractFunction\":\n clone = copy.copy(self)\n if args is None:\n clone.args = tuple()\n else:\n clone.args = args\n\n if kwargs is None:\n clone.kwargs = {}\n else:\n clone.kwargs = kwargs\n clone._set_function_info()\n return clone\n\n @classmethod\n def factory(cls, class_name: str, **kwargs: Any) -> Self:\n return PropertyCheckingFactory(class_name, (cls,), kwargs)(kwargs.get(\"abi\"))\n\n def call(\n self,\n transaction: Optional[TxParams] = None,\n block_identifier: BlockIdentifier = None,\n state_override: Optional[StateOverride] = None,\n ccip_read_enabled: Optional[bool] = None,\n ) -> Any:\n \"\"\"\n Execute a contract function call using the `eth_call` interface.\n\n This method prepares a ``Caller`` object that exposes the contract\n functions and public variables as callable Python functions.\n\n Reading a public ``owner`` address variable example:\n\n 
.. code-block:: python\n\n ContractFactory = w3.eth.contract(\n abi=wallet_contract_definition[\"abi\"]\n )\n\n # Not a real contract address\n contract = ContractFactory(\"0x2f70d3d26829e412A602E83FE8EeBF80255AEeA5\")\n\n # Read \"owner\" public variable\n addr = contract.functions.owner().call()\n\n :param transaction: Dictionary of transaction info for web3 interface\n :param block_identifier: TODO\n :param state_override TODO\n :param ccip_read_enabled TODO\n :return: ``Caller`` object that has contract public functions\n and variables exposed as Python methods\n \"\"\"\n call_transaction = self._get_call_txparams(transaction)\n\n block_id = parse_block_identifier(self.w3, block_identifier)\n\n return call_contract_function(\n self.w3,\n self.address,\n self._return_data_normalizers,\n self.function_identifier,\n call_transaction,\n block_id,\n self.contract_abi,\n self.abi,\n state_override,\n ccip_read_enabled,\n self.decode_tuples,\n *self.args,\n **self.kwargs,\n )\n\n def transact(self, transaction: Optional[TxParams] = None) -> HexBytes:\n setup_transaction = self._transact(transaction)\n return transact_with_contract_function(\n self.address,\n self.w3,\n self.function_identifier,\n setup_transaction,\n self.contract_abi,\n self.abi,\n *self.args,\n **self.kwargs,\n )\n\n def estimate_gas(\n self,\n transaction: Optional[TxParams] = None,\n block_identifier: Optional[BlockIdentifier] = None,\n state_override: Optional[StateOverride] = None,\n ) -> int:\n setup_transaction = self._estimate_gas(transaction)\n return estimate_gas_for_function(\n self.address,\n self.w3,\n self.function_identifier,\n setup_transaction,\n self.contract_abi,\n self.abi,\n block_identifier,\n state_override,\n *self.args,\n **self.kwargs,\n )\n\n def build_transaction(self, transaction: Optional[TxParams] = None) -> TxParams:\n built_transaction = self._build_transaction(transaction)\n return build_transaction_for_function(\n self.address,\n self.w3,\n self.function_identifier,\n built_transaction,\n self.contract_abi,\n self.abi,\n *self.args,\n **self.kwargs,\n )\n\n @staticmethod\n def get_fallback_function(\n abi: ABI,\n w3: \"Web3\",\n address: Optional[ChecksumAddress] = None,\n ) -> \"ContractFunction\":\n if abi and fallback_func_abi_exists(abi):\n return ContractFunction.factory(\n \"fallback\",\n w3=w3,\n contract_abi=abi,\n address=address,\n function_identifier=FallbackFn,\n )()\n return cast(ContractFunction, NonExistentFallbackFunction())\n\n @staticmethod\n def get_receive_function(\n abi: ABI,\n w3: \"Web3\",\n address: Optional[ChecksumAddress] = None,\n ) -> \"ContractFunction\":\n if abi and receive_func_abi_exists(abi):\n return ContractFunction.factory(\n \"receive\",\n w3=w3,\n contract_abi=abi,\n address=address,\n function_identifier=ReceiveFn,\n )()\n return cast(ContractFunction, NonExistentReceiveFunction())\n\n\nclass ContractFunctions(BaseContractFunctions):\n def __init__(\n self,\n abi: ABI,\n w3: \"Web3\",\n address: Optional[ChecksumAddress] = None,\n decode_tuples: Optional[bool] = False,\n ) -> None:\n super().__init__(abi, w3, ContractFunction, address, decode_tuples)\n\n def __getattr__(self, function_name: str) -> \"ContractFunction\":\n if self.abi is None:\n raise NoABIFound(\n \"There is no ABI found for this contract.\",\n )\n if \"_functions\" not in self.__dict__:\n raise NoABIFunctionsFound(\n \"The abi for this contract contains no function definitions. 
\",\n \"Are you sure you provided the correct contract abi?\",\n )\n elif function_name not in self.__dict__[\"_functions\"]:\n raise ABIFunctionNotFound(\n f\"The function '{function_name}' was not found in this contract's abi.\",\n \" Are you sure you provided the correct contract abi?\",\n )\n else:\n return super().__getattribute__(function_name)\n\n\nclass Contract(BaseContract):\n # mypy types\n w3: \"Web3\"\n functions: ContractFunctions = None\n caller: \"ContractCaller\" = None\n\n # Instance of :class:`ContractEvents` presenting available Event ABIs\n events: ContractEvents = None\n\n def __init__(self, address: Optional[ChecksumAddress] = None) -> None:\n \"\"\"Create a new smart contract proxy object.\n :param address: Contract address as 0x hex string\"\"\"\n _w3 = self.w3\n if _w3 is None:\n raise AttributeError(\n \"The `Contract` class has not been initialized. Please use the \"\n \"`web3.contract` interface to create your contract class.\"\n )\n\n if address:\n self.address = normalize_address(cast(\"ENS\", _w3.ens), address)\n\n if not self.address:\n raise TypeError(\n \"The address argument is required to instantiate a contract.\"\n )\n\n self.functions = ContractFunctions(\n self.abi, _w3, self.address, decode_tuples=self.decode_tuples\n )\n self.caller = ContractCaller(\n self.abi, _w3, self.address, decode_tuples=self.decode_tuples\n )\n self.events = ContractEvents(self.abi, _w3, self.address)\n self.fallback = Contract.get_fallback_function(\n self.abi,\n _w3,\n ContractFunction,\n self.address,\n )\n self.receive = Contract.get_receive_function(\n self.abi,\n _w3,\n ContractFunction,\n self.address,\n )\n\n @classmethod\n def factory(\n cls, w3: \"Web3\", class_name: Optional[str] = None, **kwargs: Any\n ) -> Type[Self]:\n kwargs[\"w3\"] = w3\n\n normalizers = {\n \"abi\": normalize_abi,\n \"address\": partial(normalize_address, w3.ens),\n \"bytecode\": normalize_bytecode,\n \"bytecode_runtime\": normalize_bytecode,\n }\n\n contract = cast(\n Type[Self],\n PropertyCheckingFactory(\n class_name or cls.__name__,\n (cls,),\n kwargs,\n normalizers=normalizers,\n ),\n )\n contract.functions = ContractFunctions(\n contract.abi, contract.w3, decode_tuples=contract.decode_tuples\n )\n contract.caller = ContractCaller(\n contract.abi,\n contract.w3,\n contract.address,\n decode_tuples=contract.decode_tuples,\n )\n contract.events = ContractEvents(contract.abi, contract.w3)\n contract.fallback = Contract.get_fallback_function(\n contract.abi,\n contract.w3,\n ContractFunction,\n )\n contract.receive = Contract.get_receive_function(\n contract.abi,\n contract.w3,\n ContractFunction,\n )\n\n return contract\n\n @classmethod\n def constructor(cls, *args: Any, **kwargs: Any) -> \"ContractConstructor\":\n \"\"\"\n :param args: The contract constructor arguments as positional arguments\n :param kwargs: The contract constructor arguments as keyword arguments\n :return: a contract constructor object\n \"\"\"\n if cls.bytecode is None:\n raise ValueError(\n \"Cannot call constructor on a contract that does not have \"\n \"'bytecode' associated with it\"\n )\n\n return ContractConstructor(cls.w3, cls.abi, cls.bytecode, *args, **kwargs)\n\n @combomethod\n def find_functions_by_identifier(\n cls,\n contract_abi: ABI,\n w3: \"Web3\",\n address: ChecksumAddress,\n callable_check: Callable[..., Any],\n ) -> List[\"ContractFunction\"]:\n return cast(\n List[\"ContractFunction\"],\n find_functions_by_identifier(\n contract_abi, w3, address, callable_check, ContractFunction\n ),\n )\n\n 
@combomethod\n def get_function_by_identifier(\n cls, fns: Sequence[\"ContractFunction\"], identifier: str\n ) -> \"ContractFunction\":\n return get_function_by_identifier(fns, identifier)\n\n\nclass ContractCaller(BaseContractCaller):\n # mypy types\n w3: \"Web3\"\n\n def __init__(\n self,\n abi: ABI,\n w3: \"Web3\",\n address: ChecksumAddress,\n transaction: Optional[TxParams] = None,\n block_identifier: BlockIdentifier = None,\n ccip_read_enabled: Optional[bool] = None,\n decode_tuples: Optional[bool] = False,\n ) -> None:\n super().__init__(abi, w3, address, decode_tuples=decode_tuples)\n\n if self.abi:\n if transaction is None:\n transaction = {}\n\n self._functions = filter_by_type(\"function\", self.abi)\n for func in self._functions:\n fn = ContractFunction.factory(\n func[\"name\"],\n w3=w3,\n contract_abi=self.abi,\n address=self.address,\n function_identifier=func[\"name\"],\n decode_tuples=decode_tuples,\n )\n\n block_id = parse_block_identifier(w3, block_identifier)\n caller_method = partial(\n self.call_function,\n fn,\n transaction=transaction,\n block_identifier=block_id,\n ccip_read_enabled=ccip_read_enabled,\n )\n\n setattr(self, func[\"name\"], caller_method)\n\n def __call__(\n self,\n transaction: Optional[TxParams] = None,\n block_identifier: BlockIdentifier = None,\n ccip_read_enabled: Optional[bool] = None,\n ) -> \"ContractCaller\":\n if transaction is None:\n transaction = {}\n\n return type(self)(\n self.abi,\n self.w3,\n self.address,\n transaction=transaction,\n block_identifier=block_identifier,\n ccip_read_enabled=ccip_read_enabled,\n decode_tuples=self.decode_tuples,\n )\n\n\nclass ContractConstructor(BaseContractConstructor):\n # mypy types\n w3: \"Web3\"\n\n @combomethod\n def transact(self, transaction: Optional[TxParams] = None) -> HexBytes:\n return self.w3.eth.send_transaction(self._get_transaction(transaction))\n\n @combomethod\n def build_transaction(self, transaction: Optional[TxParams] = None) -> TxParams:\n \"\"\"\n Build the transaction dictionary without sending\n \"\"\"\n built_transaction = self._build_transaction(transaction)\n return fill_transaction_defaults(self.w3, built_transaction)\n\n @combomethod\n def estimate_gas(\n self,\n transaction: Optional[TxParams] = None,\n block_identifier: Optional[BlockIdentifier] = None,\n ) -> int:\n transaction = self._estimate_gas(transaction)\n\n return self.w3.eth.estimate_gas(transaction, block_identifier=block_identifier)\n",
"path": "web3/contract/contract.py"
}
] | [
{
"content": "import copy\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n Iterable,\n List,\n Optional,\n Sequence,\n Type,\n cast,\n)\n\nfrom eth_typing import (\n ChecksumAddress,\n)\nfrom eth_utils import (\n combomethod,\n)\nfrom eth_utils.toolz import (\n partial,\n)\nfrom hexbytes import (\n HexBytes,\n)\n\nfrom web3._utils.abi import (\n fallback_func_abi_exists,\n filter_by_type,\n receive_func_abi_exists,\n)\nfrom web3._utils.compat import (\n Self,\n)\nfrom web3._utils.contracts import (\n parse_block_identifier,\n)\nfrom web3._utils.datatypes import (\n PropertyCheckingFactory,\n)\nfrom web3._utils.events import (\n EventFilterBuilder,\n get_event_data,\n)\nfrom web3._utils.filters import (\n LogFilter,\n)\nfrom web3._utils.function_identifiers import (\n FallbackFn,\n ReceiveFn,\n)\nfrom web3._utils.normalizers import (\n normalize_abi,\n normalize_address,\n normalize_bytecode,\n)\nfrom web3._utils.transactions import (\n fill_transaction_defaults,\n)\nfrom web3.contract.base_contract import (\n BaseContract,\n BaseContractCaller,\n BaseContractConstructor,\n BaseContractEvent,\n BaseContractEvents,\n BaseContractFunction,\n BaseContractFunctions,\n NonExistentFallbackFunction,\n NonExistentReceiveFunction,\n)\nfrom web3.contract.utils import (\n build_transaction_for_function,\n call_contract_function,\n estimate_gas_for_function,\n find_functions_by_identifier,\n get_function_by_identifier,\n transact_with_contract_function,\n)\nfrom web3.exceptions import (\n ABIFunctionNotFound,\n NoABIFound,\n NoABIFunctionsFound,\n Web3ValidationError,\n)\nfrom web3.types import (\n ABI,\n BlockIdentifier,\n CallOverride,\n EventData,\n TxParams,\n)\nfrom web3.utils import (\n get_abi_input_names,\n)\n\nif TYPE_CHECKING:\n from ens import ENS # noqa: F401\n from web3 import Web3 # noqa: F401\n\n\nclass ContractEvent(BaseContractEvent):\n # mypy types\n w3: \"Web3\"\n\n @combomethod\n def get_logs(\n self,\n argument_filters: Optional[Dict[str, Any]] = None,\n fromBlock: Optional[BlockIdentifier] = None,\n toBlock: Optional[BlockIdentifier] = None,\n block_hash: Optional[HexBytes] = None,\n ) -> Iterable[EventData]:\n \"\"\"Get events for this contract instance using eth_getLogs API.\n\n This is a stateless method, as opposed to create_filter.\n It can be safely called against nodes which do not provide\n eth_newFilter API, like Infura nodes.\n\n If there are many events,\n like ``Transfer`` events for a popular token,\n the Ethereum node might be overloaded and timeout\n on the underlying JSON-RPC call.\n\n Example - how to get all ERC-20 token transactions\n for the latest 10 blocks:\n\n .. code-block:: python\n\n from = max(mycontract.web3.eth.block_number - 10, 1)\n to = mycontract.web3.eth.block_number\n\n events = mycontract.events.Transfer.get_logs(fromBlock=from, toBlock=to)\n\n for e in events:\n print(e[\"args\"][\"from\"],\n e[\"args\"][\"to\"],\n e[\"args\"][\"value\"])\n\n The returned processed log values will look like:\n\n .. code-block:: python\n\n (\n AttributeDict({\n 'args': AttributeDict({}),\n 'event': 'LogNoArguments',\n 'logIndex': 0,\n 'transactionIndex': 0,\n 'transactionHash': HexBytes('...'),\n 'address': '0xF2E246BB76DF876Cef8b38ae84130F4F55De395b',\n 'blockHash': HexBytes('...'),\n 'blockNumber': 3\n }),\n AttributeDict(...),\n ...\n )\n\n See also: :func:`web3.middleware.filter.LocalFilterMiddleware`.\n\n :param argument_filters: Filter by argument values. 
Indexed arguments are\n filtered by the node while non-indexed arguments are filtered by the library.\n :param fromBlock: block number or \"latest\", defaults to \"latest\"\n :param toBlock: block number or \"latest\". Defaults to \"latest\"\n :param block_hash: block hash. block_hash cannot be set at the\n same time as fromBlock or toBlock\n :yield: Tuple of :class:`AttributeDict` instances\n \"\"\"\n event_abi = self._get_event_abi()\n\n # validate ``argument_filters`` if present\n if argument_filters is not None:\n event_arg_names = get_abi_input_names(event_abi)\n if not all(arg in event_arg_names for arg in argument_filters.keys()):\n raise Web3ValidationError(\n \"When filtering by argument names, all argument names must be \"\n \"present in the contract's event ABI.\"\n )\n\n _filter_params = self._get_event_filter_params(\n event_abi, argument_filters, fromBlock, toBlock, block_hash\n )\n # call JSON-RPC API\n logs = self.w3.eth.get_logs(_filter_params)\n\n # convert raw binary data to Python proxy objects as described by ABI:\n all_event_logs = tuple(\n get_event_data(self.w3.codec, event_abi, entry) for entry in logs\n )\n filtered_logs = self._process_get_logs_argument_filters(\n event_abi,\n all_event_logs,\n argument_filters,\n )\n sorted_logs = sorted(filtered_logs, key=lambda e: e[\"logIndex\"])\n sorted_logs = sorted(sorted_logs, key=lambda e: e[\"blockNumber\"])\n return sorted_logs\n\n @combomethod\n def create_filter(\n self,\n *, # PEP 3102\n argument_filters: Optional[Dict[str, Any]] = None,\n fromBlock: Optional[BlockIdentifier] = None,\n toBlock: BlockIdentifier = \"latest\",\n address: Optional[ChecksumAddress] = None,\n topics: Optional[Sequence[Any]] = None,\n ) -> LogFilter:\n \"\"\"\n Create filter object that tracks logs emitted by this contract event.\n \"\"\"\n filter_builder = EventFilterBuilder(self._get_event_abi(), self.w3.codec)\n self._set_up_filter_builder(\n argument_filters,\n fromBlock,\n toBlock,\n address,\n topics,\n filter_builder,\n )\n log_filter = filter_builder.deploy(self.w3)\n log_filter.log_entry_formatter = get_event_data(\n self.w3.codec, self._get_event_abi()\n )\n log_filter.builder = filter_builder\n\n return log_filter\n\n @combomethod\n def build_filter(self) -> EventFilterBuilder:\n builder = EventFilterBuilder(\n self._get_event_abi(),\n self.w3.codec,\n formatter=get_event_data(self.w3.codec, self._get_event_abi()),\n )\n builder.address = self.address\n return builder\n\n\nclass ContractEvents(BaseContractEvents):\n def __init__(\n self, abi: ABI, w3: \"Web3\", address: Optional[ChecksumAddress] = None\n ) -> None:\n super().__init__(abi, w3, ContractEvent, address)\n\n\nclass ContractFunction(BaseContractFunction):\n # mypy types\n w3: \"Web3\"\n\n def __call__(self, *args: Any, **kwargs: Any) -> \"ContractFunction\":\n clone = copy.copy(self)\n if args is None:\n clone.args = tuple()\n else:\n clone.args = args\n\n if kwargs is None:\n clone.kwargs = {}\n else:\n clone.kwargs = kwargs\n clone._set_function_info()\n return clone\n\n @classmethod\n def factory(cls, class_name: str, **kwargs: Any) -> Self:\n return PropertyCheckingFactory(class_name, (cls,), kwargs)(kwargs.get(\"abi\"))\n\n def call(\n self,\n transaction: Optional[TxParams] = None,\n block_identifier: BlockIdentifier = None,\n state_override: Optional[CallOverride] = None,\n ccip_read_enabled: Optional[bool] = None,\n ) -> Any:\n \"\"\"\n Execute a contract function call using the `eth_call` interface.\n\n This method prepares a ``Caller`` object that exposes 
the contract\n functions and public variables as callable Python functions.\n\n Reading a public ``owner`` address variable example:\n\n .. code-block:: python\n\n ContractFactory = w3.eth.contract(\n abi=wallet_contract_definition[\"abi\"]\n )\n\n # Not a real contract address\n contract = ContractFactory(\"0x2f70d3d26829e412A602E83FE8EeBF80255AEeA5\")\n\n # Read \"owner\" public variable\n addr = contract.functions.owner().call()\n\n :param transaction: Dictionary of transaction info for web3 interface\n :param block_identifier: TODO\n :param state_override TODO\n :param ccip_read_enabled TODO\n :return: ``Caller`` object that has contract public functions\n and variables exposed as Python methods\n \"\"\"\n call_transaction = self._get_call_txparams(transaction)\n\n block_id = parse_block_identifier(self.w3, block_identifier)\n\n return call_contract_function(\n self.w3,\n self.address,\n self._return_data_normalizers,\n self.function_identifier,\n call_transaction,\n block_id,\n self.contract_abi,\n self.abi,\n state_override,\n ccip_read_enabled,\n self.decode_tuples,\n *self.args,\n **self.kwargs,\n )\n\n def transact(self, transaction: Optional[TxParams] = None) -> HexBytes:\n setup_transaction = self._transact(transaction)\n return transact_with_contract_function(\n self.address,\n self.w3,\n self.function_identifier,\n setup_transaction,\n self.contract_abi,\n self.abi,\n *self.args,\n **self.kwargs,\n )\n\n def estimate_gas(\n self,\n transaction: Optional[TxParams] = None,\n block_identifier: Optional[BlockIdentifier] = None,\n state_override: Optional[CallOverride] = None,\n ) -> int:\n setup_transaction = self._estimate_gas(transaction)\n return estimate_gas_for_function(\n self.address,\n self.w3,\n self.function_identifier,\n setup_transaction,\n self.contract_abi,\n self.abi,\n block_identifier,\n state_override,\n *self.args,\n **self.kwargs,\n )\n\n def build_transaction(self, transaction: Optional[TxParams] = None) -> TxParams:\n built_transaction = self._build_transaction(transaction)\n return build_transaction_for_function(\n self.address,\n self.w3,\n self.function_identifier,\n built_transaction,\n self.contract_abi,\n self.abi,\n *self.args,\n **self.kwargs,\n )\n\n @staticmethod\n def get_fallback_function(\n abi: ABI,\n w3: \"Web3\",\n address: Optional[ChecksumAddress] = None,\n ) -> \"ContractFunction\":\n if abi and fallback_func_abi_exists(abi):\n return ContractFunction.factory(\n \"fallback\",\n w3=w3,\n contract_abi=abi,\n address=address,\n function_identifier=FallbackFn,\n )()\n return cast(ContractFunction, NonExistentFallbackFunction())\n\n @staticmethod\n def get_receive_function(\n abi: ABI,\n w3: \"Web3\",\n address: Optional[ChecksumAddress] = None,\n ) -> \"ContractFunction\":\n if abi and receive_func_abi_exists(abi):\n return ContractFunction.factory(\n \"receive\",\n w3=w3,\n contract_abi=abi,\n address=address,\n function_identifier=ReceiveFn,\n )()\n return cast(ContractFunction, NonExistentReceiveFunction())\n\n\nclass ContractFunctions(BaseContractFunctions):\n def __init__(\n self,\n abi: ABI,\n w3: \"Web3\",\n address: Optional[ChecksumAddress] = None,\n decode_tuples: Optional[bool] = False,\n ) -> None:\n super().__init__(abi, w3, ContractFunction, address, decode_tuples)\n\n def __getattr__(self, function_name: str) -> \"ContractFunction\":\n if self.abi is None:\n raise NoABIFound(\n \"There is no ABI found for this contract.\",\n )\n if \"_functions\" not in self.__dict__:\n raise NoABIFunctionsFound(\n \"The abi for this contract 
contains no function definitions. \",\n \"Are you sure you provided the correct contract abi?\",\n )\n elif function_name not in self.__dict__[\"_functions\"]:\n raise ABIFunctionNotFound(\n f\"The function '{function_name}' was not found in this contract's abi.\",\n \" Are you sure you provided the correct contract abi?\",\n )\n else:\n return super().__getattribute__(function_name)\n\n\nclass Contract(BaseContract):\n # mypy types\n w3: \"Web3\"\n functions: ContractFunctions = None\n caller: \"ContractCaller\" = None\n\n # Instance of :class:`ContractEvents` presenting available Event ABIs\n events: ContractEvents = None\n\n def __init__(self, address: Optional[ChecksumAddress] = None) -> None:\n \"\"\"Create a new smart contract proxy object.\n :param address: Contract address as 0x hex string\"\"\"\n _w3 = self.w3\n if _w3 is None:\n raise AttributeError(\n \"The `Contract` class has not been initialized. Please use the \"\n \"`web3.contract` interface to create your contract class.\"\n )\n\n if address:\n self.address = normalize_address(cast(\"ENS\", _w3.ens), address)\n\n if not self.address:\n raise TypeError(\n \"The address argument is required to instantiate a contract.\"\n )\n\n self.functions = ContractFunctions(\n self.abi, _w3, self.address, decode_tuples=self.decode_tuples\n )\n self.caller = ContractCaller(\n self.abi, _w3, self.address, decode_tuples=self.decode_tuples\n )\n self.events = ContractEvents(self.abi, _w3, self.address)\n self.fallback = Contract.get_fallback_function(\n self.abi,\n _w3,\n ContractFunction,\n self.address,\n )\n self.receive = Contract.get_receive_function(\n self.abi,\n _w3,\n ContractFunction,\n self.address,\n )\n\n @classmethod\n def factory(\n cls, w3: \"Web3\", class_name: Optional[str] = None, **kwargs: Any\n ) -> Type[Self]:\n kwargs[\"w3\"] = w3\n\n normalizers = {\n \"abi\": normalize_abi,\n \"address\": partial(normalize_address, w3.ens),\n \"bytecode\": normalize_bytecode,\n \"bytecode_runtime\": normalize_bytecode,\n }\n\n contract = cast(\n Type[Self],\n PropertyCheckingFactory(\n class_name or cls.__name__,\n (cls,),\n kwargs,\n normalizers=normalizers,\n ),\n )\n contract.functions = ContractFunctions(\n contract.abi, contract.w3, decode_tuples=contract.decode_tuples\n )\n contract.caller = ContractCaller(\n contract.abi,\n contract.w3,\n contract.address,\n decode_tuples=contract.decode_tuples,\n )\n contract.events = ContractEvents(contract.abi, contract.w3)\n contract.fallback = Contract.get_fallback_function(\n contract.abi,\n contract.w3,\n ContractFunction,\n )\n contract.receive = Contract.get_receive_function(\n contract.abi,\n contract.w3,\n ContractFunction,\n )\n\n return contract\n\n @classmethod\n def constructor(cls, *args: Any, **kwargs: Any) -> \"ContractConstructor\":\n \"\"\"\n :param args: The contract constructor arguments as positional arguments\n :param kwargs: The contract constructor arguments as keyword arguments\n :return: a contract constructor object\n \"\"\"\n if cls.bytecode is None:\n raise ValueError(\n \"Cannot call constructor on a contract that does not have \"\n \"'bytecode' associated with it\"\n )\n\n return ContractConstructor(cls.w3, cls.abi, cls.bytecode, *args, **kwargs)\n\n @combomethod\n def find_functions_by_identifier(\n cls,\n contract_abi: ABI,\n w3: \"Web3\",\n address: ChecksumAddress,\n callable_check: Callable[..., Any],\n ) -> List[\"ContractFunction\"]:\n return cast(\n List[\"ContractFunction\"],\n find_functions_by_identifier(\n contract_abi, w3, address, callable_check, 
ContractFunction\n ),\n )\n\n @combomethod\n def get_function_by_identifier(\n cls, fns: Sequence[\"ContractFunction\"], identifier: str\n ) -> \"ContractFunction\":\n return get_function_by_identifier(fns, identifier)\n\n\nclass ContractCaller(BaseContractCaller):\n # mypy types\n w3: \"Web3\"\n\n def __init__(\n self,\n abi: ABI,\n w3: \"Web3\",\n address: ChecksumAddress,\n transaction: Optional[TxParams] = None,\n block_identifier: BlockIdentifier = None,\n ccip_read_enabled: Optional[bool] = None,\n decode_tuples: Optional[bool] = False,\n ) -> None:\n super().__init__(abi, w3, address, decode_tuples=decode_tuples)\n\n if self.abi:\n if transaction is None:\n transaction = {}\n\n self._functions = filter_by_type(\"function\", self.abi)\n for func in self._functions:\n fn = ContractFunction.factory(\n func[\"name\"],\n w3=w3,\n contract_abi=self.abi,\n address=self.address,\n function_identifier=func[\"name\"],\n decode_tuples=decode_tuples,\n )\n\n block_id = parse_block_identifier(w3, block_identifier)\n caller_method = partial(\n self.call_function,\n fn,\n transaction=transaction,\n block_identifier=block_id,\n ccip_read_enabled=ccip_read_enabled,\n )\n\n setattr(self, func[\"name\"], caller_method)\n\n def __call__(\n self,\n transaction: Optional[TxParams] = None,\n block_identifier: BlockIdentifier = None,\n ccip_read_enabled: Optional[bool] = None,\n ) -> \"ContractCaller\":\n if transaction is None:\n transaction = {}\n\n return type(self)(\n self.abi,\n self.w3,\n self.address,\n transaction=transaction,\n block_identifier=block_identifier,\n ccip_read_enabled=ccip_read_enabled,\n decode_tuples=self.decode_tuples,\n )\n\n\nclass ContractConstructor(BaseContractConstructor):\n # mypy types\n w3: \"Web3\"\n\n @combomethod\n def transact(self, transaction: Optional[TxParams] = None) -> HexBytes:\n return self.w3.eth.send_transaction(self._get_transaction(transaction))\n\n @combomethod\n def build_transaction(self, transaction: Optional[TxParams] = None) -> TxParams:\n \"\"\"\n Build the transaction dictionary without sending\n \"\"\"\n built_transaction = self._build_transaction(transaction)\n return fill_transaction_defaults(self.w3, built_transaction)\n\n @combomethod\n def estimate_gas(\n self,\n transaction: Optional[TxParams] = None,\n block_identifier: Optional[BlockIdentifier] = None,\n ) -> int:\n transaction = self._estimate_gas(transaction)\n\n return self.w3.eth.estimate_gas(transaction, block_identifier=block_identifier)\n",
"path": "web3/contract/contract.py"
}
] | diff --git a/docs/web3.contract.rst b/docs/web3.contract.rst
index ee836ca349..f5082e29bb 100644
--- a/docs/web3.contract.rst
+++ b/docs/web3.contract.rst
@@ -944,6 +944,8 @@ For example:
Fetches all logs for a given event within the specified block range or block hash.
+ Returns a list of decoded event logs sorted by ``logIndex``.
+
``argument_filters`` is an optional dictionary argument that can be used to filter
for logs where the event's argument values match the values provided in the
dictionary. The keys must match the event argument names as they exist in the ABI.
diff --git a/newsfragments/3228.feature.rst b/newsfragments/3228.feature.rst
new file mode 100644
index 0000000000..8a9b0a9813
--- /dev/null
+++ b/newsfragments/3228.feature.rst
@@ -0,0 +1 @@
+Contract event ``get_logs`` results sorted by each ``ContractEvent`` ``logIndex``.
\ No newline at end of file
diff --git a/tests/conftest.py b/tests/conftest.py
index 6a4b502867..730ddb0053 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -46,9 +46,10 @@ def emitter_contract_data():
return EMITTER_CONTRACT_DATA
+# This class defines events for the EmitterContract and are used to construct
+# a fixture for contract event logs. Parameterized tests that utilize an `emitter`
+# contract fixture will use this data.
class LogFunctions:
- # These appear to be for a very specific test and this doesn't need to be updated
- # for every event in the emitter contract. That ends up breaking that test.
LogAnonymous = 0
LogNoArguments = 1
LogSingleArg = 2
@@ -74,6 +75,9 @@ def emitter_contract_event_ids():
return LogFunctions
+# This class defines topics for the EmitterContract and are used to construct
+# a fixture for contract event log topics. Parameterized tests that utilize
+# an `emitter` contract fixture will use this data.
class LogTopics:
LogAnonymous = event_signature_to_log_topic("LogAnonymous()")
LogNoArguments = event_signature_to_log_topic("LogNoArguments()")
diff --git a/tests/core/contracts/test_extracting_event_data.py b/tests/core/contracts/test_extracting_event_data.py
index 07e113dbf4..630a2c0471 100644
--- a/tests/core/contracts/test_extracting_event_data.py
+++ b/tests/core/contracts/test_extracting_event_data.py
@@ -331,6 +331,78 @@ def test_argument_extraction_strict_bytes_types(
assert event_data["event"] == "LogListArgs"
+def test_contract_event_get_logs_sorted_by_log_index(w3, emitter, request_mocker):
+ get_logs_response = [
+ {
+ "type": "mined",
+ "logIndex": 10,
+ "transactionIndex": 0,
+ "transactionHash": "0xaef7f312d863780b861d8c38984b2a33f77e9508810735e2b042143f7f189f83", # noqa: E501
+ "blockHash": "0x2200ec3324fdaca4ee2f4629489d2d06fb28108dae61b63b84ef39702e2b64e7", # noqa: E501
+ "blockNumber": 3,
+ "address": "0xF2E246BB76DF876Cef8b38ae84130F4F55De395b",
+ "data": "0x",
+ "topics": [
+ "0x1e86022f78f8d04f8e3dfd13a2bdb280403e6632877c0dbee5e4eeb259908a5c"
+ ],
+ },
+ {
+ "type": "mined",
+ "logIndex": 0,
+ "transactionIndex": 0,
+ "transactionHash": "0x61e57bb1b5af14ca1b0964a84fb640bf39927961f26311a6450475a749e00cbb", # noqa: E501
+ "blockHash": "0x73dd9a3b0f581689ebd67adea0debe05672a334c723379dc506fb71a666c1754", # noqa: E501
+ "blockNumber": 4,
+ "address": "0xF2E246BB76DF876Cef8b38ae84130F4F55De395b",
+ "data": "0x",
+ "topics": [
+ "0x1e86022f78f8d04f8e3dfd13a2bdb280403e6632877c0dbee5e4eeb259908a5c"
+ ],
+ },
+ {
+ "type": "mined",
+ "logIndex": 123,
+ "transactionIndex": 0,
+ "transactionHash": "0x61e57bb1b5af14ca1b0964a84fb640bf39927961f26311a6450475a749e00cbb", # noqa: E501
+ "blockHash": "0x73dd9a3b0f581689ebd67adea0debe05672a334c723379dc506fb71a666c1754", # noqa: E501
+ "blockNumber": 1,
+ "address": "0xF2E246BB76DF876Cef8b38ae84130F4F55De395b",
+ "data": "0x",
+ "topics": [
+ "0x1e86022f78f8d04f8e3dfd13a2bdb280403e6632877c0dbee5e4eeb259908a5c"
+ ],
+ },
+ {
+ "type": "mined",
+ "logIndex": 54,
+ "transactionIndex": 0,
+ "transactionHash": "0x61e57bb1b5af14ca1b0964a84fb640bf39927961f26311a6450475a749e00cbb", # noqa: E501
+ "blockHash": "0x73dd9a3b0f581689ebd67adea0debe05672a334c723379dc506fb71a666c1754", # noqa: E501
+ "blockNumber": 1,
+ "address": "0xF2E246BB76DF876Cef8b38ae84130F4F55De395b",
+ "data": "0x",
+ "topics": [
+ "0x1e86022f78f8d04f8e3dfd13a2bdb280403e6632877c0dbee5e4eeb259908a5c"
+ ],
+ },
+ ]
+
+ with request_mocker(w3, mock_results={"eth_getLogs": get_logs_response}):
+ logs = emitter.events.LogNoArguments().get_logs()
+
+ sorted_logs = sorted(
+ emitter.events.LogNoArguments().get_logs(),
+ key=lambda l: l["logIndex"],
+ )
+ sorted_logs = sorted(
+ emitter.events.LogNoArguments().get_logs(),
+ key=lambda l: l["blockNumber"],
+ )
+
+ assert len(logs) == 4
+ assert logs == sorted_logs
+
+
@pytest.mark.parametrize(
"contract_fn,event_name,call_args,expected_args,warning_msg,process_receipt",
(
diff --git a/web3/contract/contract.py b/web3/contract/contract.py
index c9e4ac8a7c..75dbc34c4d 100644
--- a/web3/contract/contract.py
+++ b/web3/contract/contract.py
@@ -192,7 +192,9 @@ def get_logs(
all_event_logs,
argument_filters,
)
- return filtered_logs
+ sorted_logs = sorted(filtered_logs, key=lambda e: e["logIndex"])
+ sorted_logs = sorted(sorted_logs, key=lambda e: e["blockNumber"])
+ return sorted_logs
@combomethod
def create_filter(
|
saleor__saleor-1389 | Add robots meta tag and "nofollow" link attribute
1. Fragile pages should not be indexed by search engines.
```
<meta name="robots" content="nofollow, noindex">
```
- [x] Add the above meta tag to the order confirmation page
2. Pages that bring little to no content value should not be crawled
```
<meta name="robots" content="nofollow">
```
- [x] Add the above meta tag to the sign-in/sign-up/cart pages
3. Add link attribute
- [x] Links pointing to the above pages should have the `rel="nofollow"` attribute set (see the verification sketch below)
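One way to verify the requested tags is a small pytest-django style check, sketched below. The `client` fixture and the `django.urls.reverse` import path are assumptions; the URL name `account_login` and the expected markup come from the templates touched in the accompanying diff.
```
import pytest
from django.urls import reverse  # on older Django versions this lives in django.core.urlresolvers


@pytest.mark.django_db
def test_login_page_has_robots_nofollow(client):
    # The login template is expected to emit <meta name="robots" content="nofollow">
    response = client.get(reverse("account_login"))
    assert response.status_code == 200
    assert b'<meta name="robots" content="nofollow">' in response.content
```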
| [
{
"content": "from __future__ import unicode_literals\n\nfrom django.template.response import TemplateResponse\nfrom django.contrib import messages\nfrom django.conf import settings\nfrom django.utils.translation import pgettext_lazy\nfrom impersonate.views import impersonate as orig_impersonate\n\nfrom ..dashboard.views import staff_member_required\nfrom ..product.utils import products_with_availability, products_for_homepage\nfrom ..userprofile.models import User\n\n\ndef home(request):\n products = products_for_homepage()[:8]\n products = products_with_availability(\n products, discounts=request.discounts, local_currency=request.currency)\n return TemplateResponse(\n request, 'home.html',\n {'products': products, 'parent': None})\n\n\n@staff_member_required\ndef styleguide(request):\n return TemplateResponse(request, 'styleguide.html')\n\n\ndef impersonate(request, uid):\n response = orig_impersonate(request, uid)\n if request.session.modified:\n msg = pgettext_lazy(\n 'Impersonation message',\n 'You are now logged as {}'.format(User.objects.get(pk=uid)))\n messages.success(request, msg)\n return response\n",
"path": "saleor/core/views.py"
}
] | [
{
"content": "from __future__ import unicode_literals\n\nfrom django.template.response import TemplateResponse\nfrom django.contrib import messages\nfrom django.utils.translation import pgettext_lazy\nfrom impersonate.views import impersonate as orig_impersonate\n\nfrom ..dashboard.views import staff_member_required\nfrom ..product.utils import products_with_availability, products_for_homepage\nfrom ..userprofile.models import User\n\n\ndef home(request):\n products = products_for_homepage()[:8]\n products = products_with_availability(\n products, discounts=request.discounts, local_currency=request.currency)\n return TemplateResponse(\n request, 'home.html',\n {'products': products, 'parent': None})\n\n\n@staff_member_required\ndef styleguide(request):\n return TemplateResponse(request, 'styleguide.html')\n\n\ndef impersonate(request, uid):\n response = orig_impersonate(request, uid)\n if request.session.modified:\n msg = pgettext_lazy(\n 'Impersonation message',\n 'You are now logged as {}'.format(User.objects.get(pk=uid)))\n messages.success(request, msg)\n return response\n",
"path": "saleor/core/views.py"
}
] | diff --git a/saleor/core/views.py b/saleor/core/views.py
index d08fb5f9a1e..90e13056d3e 100644
--- a/saleor/core/views.py
+++ b/saleor/core/views.py
@@ -2,7 +2,6 @@
from django.template.response import TemplateResponse
from django.contrib import messages
-from django.conf import settings
from django.utils.translation import pgettext_lazy
from impersonate.views import impersonate as orig_impersonate
diff --git a/templates/account/login.html b/templates/account/login.html
index bd6af86ce21..337126d0541 100644
--- a/templates/account/login.html
+++ b/templates/account/login.html
@@ -4,6 +4,10 @@
{% block title %}{% trans "Log in" context "Login page title" %} — {{ block.super }}{% endblock %}
+{% block meta_tags %}
+ <meta name="robots" content="nofollow">
+{% endblock meta_tags %}
+
{% block content %}
<div class="col-lg-10 col-sm-12 m-auto">
@@ -13,7 +17,7 @@
<h3>{% trans "Don't have an account yet?" context "Login form secondary title" %}</h3>
<img src="{% static 'images/pirate_login.png' %}"
srcset="{% static 'images/pirate_login.png' %} 1x, {% static 'images/pirate_login2x.png' %} 2x">
- <a href="{% url 'account_signup' %}" class="btn secondary narrow">
+ <a rel="nofollow" href="{% url 'account_signup' %}" class="btn secondary narrow">
{% trans "Register" context "Login form secondary action" %}
</a>
</div>
diff --git a/templates/account/partials/login_form.html b/templates/account/partials/login_form.html
index c6a5638e963..08e38a58e15 100644
--- a/templates/account/partials/login_form.html
+++ b/templates/account/partials/login_form.html
@@ -16,7 +16,7 @@
<button class="btn primary narrow">
{% trans "Log in" context "Login form primary action" %}
</button>
- <a class="link--styled" href="{% url 'account_reset_password' %}">
+ <a rel="nofollow" class="link--styled" href="{% url 'account_reset_password' %}">
{% trans "Forgot password?" context "Login form secondary link" %}
</a>
{% with available_backends=settings.available_backends %}
diff --git a/templates/account/password_reset.html b/templates/account/password_reset.html
index faaff4da92e..3c127b539d2 100644
--- a/templates/account/password_reset.html
+++ b/templates/account/password_reset.html
@@ -5,6 +5,10 @@
{% block title %}{% trans "Password reset" context "Password reset page title" %} — {{ block.super }}{% endblock %}
+{% block meta_tags %}
+ <meta name="robots" content="nofollow">
+{% endblock meta_tags %}
+
{% block content %}
<div class="row login__forgot-password">
<div class="col-md-8 m-auto text-center">
diff --git a/templates/account/signup.html b/templates/account/signup.html
index e859ed2e256..7e479cb1022 100644
--- a/templates/account/signup.html
+++ b/templates/account/signup.html
@@ -5,6 +5,10 @@
{% block title %}{% trans "Sign Up" context "Signup page title" %} — {{ block.super }}{% endblock %}
+{% block meta_tags %}
+ <meta name="robots" content="nofollow">
+{% endblock meta_tags %}
+
{% block content %}
<div class="col-lg-10 offset-lg-1 col-sm-12">
<div class="row login">
@@ -13,7 +17,7 @@
<h3>{% trans "Already have an account?" context "Signup form secondary title" %}</h3>
<img class="signup-img" src="{% static 'images/pirate_login.png' %}"
srcset="{% static 'images/pirate_login.png' %} 1x, {% static 'images/pirate_login2x.png' %} 2x">
- <p><a href="{% url 'account_login' %}" class="btn secondary narrow">
+ <p><a rel="nofollow" href="{% url 'account_login' %}" class="btn secondary narrow">
{% trans "Log in" context "Signup form secondary action" %}
</a></p>
</div>
diff --git a/templates/base.html b/templates/base.html
index 046e403ce99..d21157bb8de 100644
--- a/templates/base.html
+++ b/templates/base.html
@@ -18,6 +18,7 @@
{% render_bundle 'storefront' 'css' %}
{% block stylesheet %}{% endblock stylesheet %}
+ {% block meta_tags %}{% endblock meta_tags %}
<!-- Le HTML5 shim, for IE6-8 support of HTML5 elements -->
<!--[if lt IE 9]>
@@ -62,11 +63,11 @@
{% endif %}
{% else %}
<li>
- <a href="{% url "account_signup" %}">
+ <a rel="nofollow" href="{% url "account_signup" %}">
{% trans "Register" context "Main navigation item" %}</a>
</li>
<li>
- <a href="{% url "account_login" %}">
+ <a rel="nofollow" href="{% url "account_login" %}">
{% trans "Log in" context "Main navigation item" %}
</a>
</li>
@@ -108,7 +109,7 @@
</div>
<div class="col-2 col-md-4">
<div class="navbar__brand__cart float-right">
- <a class="cart__icon" href="{% url "cart:index" %}">
+ <a rel="nofollow" class="cart__icon" href="{% url "cart:index" %}">
<span class="cart-label d-none d-md-inline-block">
{% trans "Your Cart" context "Main navigation item" %}
</span>
@@ -184,7 +185,7 @@
<div class="col-md-3 col-sm-6">
<ul>
<li>
- <a href="{% url "cart:index" %}">
+ <a rel="nofollow" href="{% url "cart:index" %}">
{% trans "Your Cart" context "Main navigation item" %}
</a>
</li>
@@ -220,12 +221,12 @@
{% endif %}
{% else %}
<li>
- <a href="{% url "account_signup" %}">
+ <a rel="nofollow" href="{% url "account_signup" %}">
{% trans "Register" context "Main navigation item" %}
</a>
</li>
<li>
- <a href="{% url "account_login" %}">
+ <a rel="nofollow" href="{% url "account_login" %}">
{% trans "Log in" context "Main navigation item" %}
</a>
</li>
diff --git a/templates/cart/index.html b/templates/cart/index.html
index 177c66a2a59..0700648972f 100644
--- a/templates/cart/index.html
+++ b/templates/cart/index.html
@@ -10,10 +10,14 @@
{% block breadcrumb %}
<ul class="breadcrumbs list-unstyled">
<li><a href="/">{% trans "Home" context "Main navigation item" %}</a></li>
- <li><a href="{% url 'cart:index' %}">{% trans "Cart" context "Cart breadcrumb" %}</a></li>
+ <li><a rel="nofollow" href="{% url 'cart:index' %}">{% trans "Cart" context "Cart breadcrumb" %}</a></li>
</ul>
{% endblock breadcrumb %}
+{% block meta_tags %}
+ <meta name="robots" content="nofollow">
+{% endblock meta_tags %}
+
{% block content %}
<div class="alert alert-success d-block d-sm-none remove-product-alert">
{% trans "Product has been removed from cart" context "Cart message" %}
diff --git a/templates/order/details.html b/templates/order/details.html
index 8cf5c05a910..38a6a6ad6c0 100644
--- a/templates/order/details.html
+++ b/templates/order/details.html
@@ -27,6 +27,10 @@
{% endif %}
{% endblock breadcrumb %}
+{% block meta_tags %}
+ <meta name="robots" content="noindex, nofollow">
+{% endblock meta_tags %}
+
{% block content %}
{# This view is available by just knowing url, #}
{# so we don't show all details (like delivery address) #}
|
e-valuation__EvaP-1666 | Make Typescript code Prettier
We should add automated formatting for our TypeScript files. I think https://prettier.io/ is pretty good, but the choice is open for discussion. The formatting should be done in `manage.py format` and be checked in CI.
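One possible shape for this, offered as a sketch rather than a decision: extend the existing `format` management command so it also runs Prettier over the TypeScript sources. The `npx prettier` invocation and the `evap/static/ts/src` path are assumptions based on this repository's layout.
```
import subprocess  # nosec

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    args = ""
    help = "Runs the code formatter"
    requires_migrations_checks = False

    def handle(self, *args, **options):
        # Existing formatters for the Python code base
        subprocess.run(["black", "evap"], check=False)  # nosec
        subprocess.run(["isort", "."], check=False)  # nosec
        # Sketch: let Prettier rewrite the TypeScript sources in place
        subprocess.run(["npx", "prettier", "--write", "evap/static/ts/src"], check=False)  # nosec
```
CI could then run `npx prettier --list-different evap/static/ts/src` against the same path and fail when any file is not formatted.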
| [
{
"content": "import subprocess # nosec\n\nfrom django.core.management.base import BaseCommand\n\n\nclass Command(BaseCommand):\n args = \"\"\n help = \"Runs the code formatter\"\n requires_migrations_checks = False\n\n def handle(self, *args, **options):\n subprocess.run([\"black\", \"evap\"], check=False) # nosec\n subprocess.run([\"isort\", \".\"], check=False) # nosec\n",
"path": "evap/evaluation/management/commands/format.py"
}
] | [
{
"content": "import subprocess # nosec\n\nfrom django.core.management.base import BaseCommand\n\n\nclass Command(BaseCommand):\n args = \"\"\n help = \"Runs the code formatter\"\n requires_migrations_checks = False\n\n def handle(self, *args, **options):\n subprocess.run([\"black\", \"evap\"], check=False) # nosec\n subprocess.run([\"isort\", \".\"], check=False) # nosec\n subprocess.run([\"npx\", \"prettier\", \"--write\", \"evap/static/ts/src\"], check=False) # nosec\n",
"path": "evap/evaluation/management/commands/format.py"
}
] | diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml
index 6f3f6abb19..544304bc01 100644
--- a/.github/workflows/tests.yml
+++ b/.github/workflows/tests.yml
@@ -69,22 +69,29 @@ jobs:
formatter:
runs-on: ubuntu-18.04
- container:
- image: python:3.7
-
name: Formatting
steps:
- name: Check out repository code
uses: actions/checkout@v2
- - name: Install dependencies
+ - uses: actions/setup-python@v2
+ with:
+ python-version: 3.7
+ - name: Install Python dependencies
run: pip install -r requirements-dev.txt
+ - name: Setup Node
+ uses: actions/setup-node@v2
+ - name: Install Node dependencies
+ run: npm ci
- name: Add localsettings
run: cp evap/settings_test.py evap/localsettings.py
- name: Check code formatting
run: black --check evap
- name: Check imports formatting
run: isort . --check --diff
+ - run: ls -laR evap/static/ts
+ - name: Check TypeScript formatting
+ run: npx prettier --list-different --loglevel debug --config evap/static/ts/.prettierrc.json evap/static/ts/src
backup-process:
diff --git a/.gitignore b/.gitignore
index a29c85d6db..b8388d2dc6 100644
--- a/.gitignore
+++ b/.gitignore
@@ -42,6 +42,7 @@ htmlcov
# pip puts editable packages here
src
+!evap/static/ts/.prettierrc.json
!evap/static/ts/src
# node modules
diff --git a/evap/evaluation/management/commands/format.py b/evap/evaluation/management/commands/format.py
index a1994d7d51..f513276cf2 100644
--- a/evap/evaluation/management/commands/format.py
+++ b/evap/evaluation/management/commands/format.py
@@ -11,3 +11,4 @@ class Command(BaseCommand):
def handle(self, *args, **options):
subprocess.run(["black", "evap"], check=False) # nosec
subprocess.run(["isort", "."], check=False) # nosec
+ subprocess.run(["npx", "prettier", "--write", "evap/static/ts/src"], check=False) # nosec
diff --git a/evap/evaluation/tests/test_commands.py b/evap/evaluation/tests/test_commands.py
index 4824fcd16e..f0f2347878 100644
--- a/evap/evaluation/tests/test_commands.py
+++ b/evap/evaluation/tests/test_commands.py
@@ -352,11 +352,12 @@ class TestFormatCommand(TestCase):
@patch("subprocess.run")
def test_formatters_called(self, mock_subprocess_run):
management.call_command("format")
- self.assertEqual(len(mock_subprocess_run.mock_calls), 2)
+ self.assertEqual(len(mock_subprocess_run.mock_calls), 3)
mock_subprocess_run.assert_has_calls(
[
call(["black", "evap"], check=False),
call(["isort", "."], check=False),
+ call(["npx", "prettier", "--write", "evap/static/ts/src"], check=False),
]
)
diff --git a/evap/static/ts/.prettierrc.json b/evap/static/ts/.prettierrc.json
new file mode 100644
index 0000000000..d59f981524
--- /dev/null
+++ b/evap/static/ts/.prettierrc.json
@@ -0,0 +1,6 @@
+{
+ "tabWidth": 4,
+ "arrowParens": "avoid",
+ "trailingComma": "all",
+ "printWidth": 120
+}
diff --git a/evap/static/ts/src/csrf-utils.ts b/evap/static/ts/src/csrf-utils.ts
index b221865a44..5300b1b03d 100644
--- a/evap/static/ts/src/csrf-utils.ts
+++ b/evap/static/ts/src/csrf-utils.ts
@@ -1,7 +1,8 @@
// based on: https://docs.djangoproject.com/en/3.1/ref/csrf/#ajax
function getCookie(name: string): string | null {
if (document.cookie !== "") {
- const cookie = document.cookie.split(";")
+ const cookie = document.cookie
+ .split(";")
.map(cookie => cookie.trim())
.find(cookie => cookie.substring(0, name.length + 1) === `${name}=`);
if (cookie) {
@@ -19,7 +20,7 @@ function isMethodCsrfSafe(method: string): boolean {
// setup ajax sending csrf token
$.ajaxSetup({
- beforeSend: function(xhr: JQuery.jqXHR, settings: JQuery.AjaxSettings) {
+ beforeSend: function (xhr: JQuery.jqXHR, settings: JQuery.AjaxSettings) {
const isMethodSafe = settings.method && isMethodCsrfSafe(settings.method);
if (!isMethodSafe && !this.crossDomain) {
xhr.setRequestHeader("X-CSRFToken", csrftoken);
diff --git a/evap/static/ts/src/datagrid.ts b/evap/static/ts/src/datagrid.ts
index 5262d356c5..8f0584c68d 100644
--- a/evap/static/ts/src/datagrid.ts
+++ b/evap/static/ts/src/datagrid.ts
@@ -1,27 +1,27 @@
declare const Sortable: typeof import("sortablejs");
interface Row {
- element: HTMLElement,
- searchWords: string[],
- filterValues: Map<string, string[]>,
- orderValues: Map<string, string | number>,
- isDisplayed: boolean,
+ element: HTMLElement;
+ searchWords: string[];
+ filterValues: Map<string, string[]>;
+ orderValues: Map<string, string | number>;
+ isDisplayed: boolean;
}
interface State {
- search: string,
- filter: Map<string, string[]>,
- order: [string, "asc" | "desc"][],
+ search: string;
+ filter: Map<string, string[]>;
+ order: [string, "asc" | "desc"][];
}
interface BaseParameters {
- storageKey: string,
- searchInput: HTMLInputElement,
+ storageKey: string;
+ searchInput: HTMLInputElement;
}
interface DataGridParameters extends BaseParameters {
- head: HTMLElement,
- container: HTMLElement
+ head: HTMLElement;
+ container: HTMLElement;
}
abstract class DataGrid {
@@ -33,7 +33,7 @@ abstract class DataGrid {
private delayTimer: any | null;
protected state: State;
- protected constructor({storageKey, head, container, searchInput}: DataGridParameters) {
+ protected constructor({ storageKey, head, container, searchInput }: DataGridParameters) {
this.storageKey = storageKey;
this.sortableHeaders = new Map();
head.querySelectorAll<HTMLElement>(".col-order").forEach(header => {
@@ -83,16 +83,19 @@ abstract class DataGrid {
private static NUMBER_REGEX = /^[+-]?\d+(?:[.,]\d*)?$/;
private fetchRows(): Row[] {
- let rows = [...this.container.children].map(row => row as HTMLElement).map(row => {
- const searchWords = this.findSearchableCells(row)
- .flatMap(element => DataGrid.searchWordsOf(element.textContent!));
- return {
- element: row,
- searchWords,
- filterValues: this.fetchRowFilterValues(row),
- orderValues: this.fetchRowOrderValues(row),
- } as Row;
- });
+ let rows = [...this.container.children]
+ .map(row => row as HTMLElement)
+ .map(row => {
+ const searchWords = this.findSearchableCells(row).flatMap(element =>
+ DataGrid.searchWordsOf(element.textContent!),
+ );
+ return {
+ element: row,
+ searchWords,
+ filterValues: this.fetchRowFilterValues(row),
+ orderValues: this.fetchRowOrderValues(row),
+ } as Row;
+ });
for (const column of this.sortableHeaders.keys()) {
const orderValues = rows.map(row => row.orderValues.get(column) as string);
const isNumericalColumn = orderValues.every(orderValue => DataGrid.NUMBER_REGEX.test(orderValue));
@@ -100,7 +103,7 @@ abstract class DataGrid {
rows.forEach(row => {
const numberString = (row.orderValues.get(column) as string).replace(",", ".");
row.orderValues.set(column, parseFloat(numberString));
- })
+ });
}
}
return rows;
@@ -173,9 +176,7 @@ abstract class DataGrid {
// Reflects changes to the rows to the DOM
protected renderToDOM() {
[...this.container.children].map(element => element as HTMLElement).forEach(element => element.remove());
- const elements = this.rows
- .filter(row => row.isDisplayed)
- .map(row => row.element);
+ const elements = this.rows.filter(row => row.isDisplayed).map(row => row.element);
this.container.append(...elements);
this.saveStateToStorage();
}
@@ -206,15 +207,15 @@ abstract class DataGrid {
}
interface TableGridParameters extends BaseParameters {
- table: HTMLTableElement,
- resetSearch: HTMLButtonElement,
+ table: HTMLTableElement;
+ resetSearch: HTMLButtonElement;
}
// Table based data grid which uses its head and body
export class TableGrid extends DataGrid {
private resetSearch: HTMLButtonElement;
- constructor({table, resetSearch, ...options}: TableGridParameters) {
+ constructor({ table, resetSearch, ...options }: TableGridParameters) {
super({
head: table.querySelector("thead")!,
container: table.querySelector("tbody")!,
@@ -252,13 +253,13 @@ export class TableGrid extends DataGrid {
}
interface EvaluationGridParameters extends TableGridParameters {
- filterButtons: HTMLButtonElement[],
+ filterButtons: HTMLButtonElement[];
}
export class EvaluationGrid extends TableGrid {
private filterButtons: HTMLButtonElement[];
- constructor({filterButtons, ...options}: EvaluationGridParameters) {
+ constructor({ filterButtons, ...options }: EvaluationGridParameters) {
super(options);
this.filterButtons = filterButtons;
}
@@ -295,8 +296,9 @@ export class EvaluationGrid extends TableGrid {
}
protected fetchRowFilterValues(row: HTMLElement): Map<string, string[]> {
- const evaluationState = [...row.querySelectorAll<HTMLElement>("[data-filter]")]
- .map(element => element.dataset.filter!);
+ const evaluationState = [...row.querySelectorAll<HTMLElement>("[data-filter]")].map(
+ element => element.dataset.filter!,
+ );
return new Map([["evaluationState", evaluationState]]);
}
@@ -315,13 +317,13 @@ export class EvaluationGrid extends TableGrid {
}
interface QuestionnaireParameters extends TableGridParameters {
- updateUrl: string,
+ updateUrl: string;
}
export class QuestionnaireGrid extends TableGrid {
private readonly updateUrl: string;
- constructor({updateUrl, ...options}: QuestionnaireParameters) {
+ constructor({ updateUrl, ...options }: QuestionnaireParameters) {
super(options);
this.updateUrl = updateUrl;
}
@@ -338,35 +340,41 @@ export class QuestionnaireGrid extends TableGrid {
}
const questionnaireIndices = this.rows.map((row, index) => [$(row.element).data("id"), index]);
$.post(this.updateUrl, Object.fromEntries(questionnaireIndices));
- }
+ },
});
}
private reorderRow(oldPosition: number, newPosition: number) {
- const displayedRows = this.rows.map((row, index) => ({row, index}))
- .filter(({row}) => row.isDisplayed);
+ const displayedRows = this.rows.map((row, index) => ({ row, index })).filter(({ row }) => row.isDisplayed);
this.rows.splice(displayedRows[oldPosition].index, 1);
this.rows.splice(displayedRows[newPosition].index, 0, displayedRows[oldPosition].row);
}
}
interface ResultGridParameters extends DataGridParameters {
- filterCheckboxes: Map<string, {selector: string, checkboxes: HTMLInputElement[]}>,
- sortColumnSelect: HTMLSelectElement,
- sortOrderCheckboxes: HTMLInputElement[],
- resetFilter: HTMLButtonElement,
- resetOrder: HTMLButtonElement,
+ filterCheckboxes: Map<string, { selector: string; checkboxes: HTMLInputElement[] }>;
+ sortColumnSelect: HTMLSelectElement;
+ sortOrderCheckboxes: HTMLInputElement[];
+ resetFilter: HTMLButtonElement;
+ resetOrder: HTMLButtonElement;
}
// Grid based data grid which has its container separated from its header
export class ResultGrid extends DataGrid {
- private readonly filterCheckboxes: Map<string, {selector: string, checkboxes: HTMLInputElement[]}>;
+ private readonly filterCheckboxes: Map<string, { selector: string; checkboxes: HTMLInputElement[] }>;
private sortColumnSelect: HTMLSelectElement;
private sortOrderCheckboxes: HTMLInputElement[];
private resetFilter: HTMLButtonElement;
private resetOrder: HTMLButtonElement;
- constructor({filterCheckboxes, sortColumnSelect, sortOrderCheckboxes, resetFilter, resetOrder, ...options}: ResultGridParameters) {
+ constructor({
+ filterCheckboxes,
+ sortColumnSelect,
+ sortOrderCheckboxes,
+ resetFilter,
+ resetOrder,
+ ...options
+ }: ResultGridParameters) {
super(options);
this.filterCheckboxes = filterCheckboxes;
this.sortColumnSelect = sortColumnSelect;
@@ -377,7 +385,7 @@ export class ResultGrid extends DataGrid {
public bindEvents() {
super.bindEvents();
- for (const [name, {checkboxes}] of this.filterCheckboxes.entries()) {
+ for (const [name, { checkboxes }] of this.filterCheckboxes.entries()) {
checkboxes.forEach(checkbox => {
checkbox.addEventListener("change", () => {
const values = checkboxes.filter(checkbox => checkbox.checked).map(elem => elem.value);
@@ -413,21 +421,23 @@ export class ResultGrid extends DataGrid {
const order = this.sortOrderCheckboxes.find(checkbox => checkbox.checked)!.value;
if (order === "asc" || order === "desc") {
if (column === "name-semester") {
- this.sort([["name", order], ["semester", order]]);
+ this.sort([
+ ["name", order],
+ ["semester", order],
+ ]);
} else {
this.sort([[column, order]]);
}
}
}
-
protected findSearchableCells(row: HTMLElement): HTMLElement[] {
return [...row.querySelectorAll<HTMLElement>(".evaluation-name, [data-col=responsible]")];
}
protected fetchRowFilterValues(row: HTMLElement): Map<string, string[]> {
let filterValues = new Map();
- for (const [name, {selector, checkboxes}] of this.filterCheckboxes.entries()) {
+ for (const [name, { selector, checkboxes }] of this.filterCheckboxes.entries()) {
// To store filter values independent of the language, use the corresponding id from the checkbox
const values = [...row.querySelectorAll(selector)]
.map(element => element.textContent!.trim())
@@ -438,12 +448,15 @@ export class ResultGrid extends DataGrid {
}
protected get defaultOrder(): [string, "asc" | "desc"][] {
- return [["name", "asc"], ["semester", "asc"]];
+ return [
+ ["name", "asc"],
+ ["semester", "asc"],
+ ];
}
protected reflectFilterStateOnInputs() {
super.reflectFilterStateOnInputs();
- for (const [name, {checkboxes}] of this.filterCheckboxes.entries()) {
+ for (const [name, { checkboxes }] of this.filterCheckboxes.entries()) {
checkboxes.forEach(checkbox => {
let isActive;
if (this.state.filter.has(name)) {
diff --git a/package-lock.json b/package-lock.json
index 206271ac7b..4b83b774ba 100644
--- a/package-lock.json
+++ b/package-lock.json
@@ -12,6 +12,7 @@
"@types/sortablejs": "^1.3.0",
"jest": "^27.3.1",
"jest-environment-puppeteer": "^6.0.0",
+ "prettier": "^2.4.1",
"puppeteer": "^10.4.0",
"sass": "1.32.13",
"ts-jest": "^27.0.7",
@@ -6426,6 +6427,18 @@
"node": ">= 0.8.0"
}
},
+ "node_modules/prettier": {
+ "version": "2.4.1",
+ "resolved": "https://registry.npmjs.org/prettier/-/prettier-2.4.1.tgz",
+ "integrity": "sha512-9fbDAXSBcc6Bs1mZrDYb3XKzDLm4EXXL9sC1LqKP5rZkT6KRr/rf9amVUcODVXgguK/isJz0d0hP72WeaKWsvA==",
+ "dev": true,
+ "bin": {
+ "prettier": "bin-prettier.js"
+ },
+ "engines": {
+ "node": ">=10.13.0"
+ }
+ },
"node_modules/pretty-format": {
"version": "26.6.2",
"resolved": "https://registry.npmjs.org/pretty-format/-/pretty-format-26.6.2.tgz",
@@ -12653,6 +12666,12 @@
"integrity": "sha1-IZMqVJ9eUv/ZqCf1cOBL5iqX2lQ=",
"dev": true
},
+ "prettier": {
+ "version": "2.4.1",
+ "resolved": "https://registry.npmjs.org/prettier/-/prettier-2.4.1.tgz",
+ "integrity": "sha512-9fbDAXSBcc6Bs1mZrDYb3XKzDLm4EXXL9sC1LqKP5rZkT6KRr/rf9amVUcODVXgguK/isJz0d0hP72WeaKWsvA==",
+ "dev": true
+ },
"pretty-format": {
"version": "26.6.2",
"resolved": "https://registry.npmjs.org/pretty-format/-/pretty-format-26.6.2.tgz",
diff --git a/package.json b/package.json
index ae7bee343a..ed91972473 100644
--- a/package.json
+++ b/package.json
@@ -7,10 +7,11 @@
"@types/sortablejs": "^1.3.0",
"jest": "^27.3.1",
"jest-environment-puppeteer": "^6.0.0",
+ "prettier": "^2.4.1",
"puppeteer": "^10.4.0",
+ "sass": "1.32.13",
"ts-jest": "^27.0.7",
- "typescript": "^4.4.4",
- "sass": "1.32.13"
+ "typescript": "^4.4.4"
},
"jest": {
"testRunner": "jest-jasmine2",
|
qtile__qtile-1522 | EzKey does not allow description
I think the [EzKey constructor](https://github.com/qtile/qtile/blob/master/libqtile/config.py#L155) does not allow a description (it accepts no `**kwds`), although the [Key constructor](https://github.com/qtile/qtile/blob/master/libqtile/config.py#L53) does; a possible fix is sketched below the version table.
Edit: Why do you set the description within a dictionary instead of having a constructor argument for it?
Edit 2: Forgot to include my versions:
| Item | Version |
|:---------:|:--------------:|
| Qtile (from official repositories) | 0.14.2 |
| ArchLinux | 5.4.6-arch3-1 |
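A minimal sketch of the change being asked for (inside `libqtile/config.py`), mirroring what `EzClick` and `EzDrag` already do — forward keyword arguments so that `desc` reaches `Key.__init__`. This is an illustration, not a final patch.
```
class EzKey(EzConfig, Key):
    def __init__(self, keydef, *commands, **kwargs):
        modkeys, key = self.parse(keydef)
        # Forward **kwargs so that e.g. desc="..." ends up on Key.desc
        super().__init__(modkeys, key, *commands, **kwargs)


# Usage (assuming the usual `from libqtile.command import lazy`):
# EzKey("M-S-q", lazy.shutdown(), desc="Shut down qtile")
```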
| [
{
"content": "# Copyright (c) 2012-2015 Tycho Andersen\n# Copyright (c) 2013 xarvh\n# Copyright (c) 2013 horsik\n# Copyright (c) 2013-2014 roger\n# Copyright (c) 2013 Tao Sauvage\n# Copyright (c) 2014 ramnes\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 Adi Sieker\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nimport sys\nimport warnings\n\nfrom . import configurable\nfrom . import hook\nfrom . import utils\nfrom libqtile.command_object import CommandObject\n\n\nclass Key:\n \"\"\"Defines a keybinding.\n\n Parameters\n ==========\n modifiers:\n A list of modifier specifications. Modifier specifications are one of:\n \"shift\", \"lock\", \"control\", \"mod1\", \"mod2\", \"mod3\", \"mod4\", \"mod5\".\n key:\n A key specification, e.g. \"a\", \"Tab\", \"Return\", \"space\".\n commands:\n A list of lazy command objects generated with the lazy.lazy helper.\n If multiple Call objects are specified, they are run in sequence.\n kwds:\n A dictionary containing \"desc\", allowing a description to be added\n \"\"\"\n def __init__(self, modifiers, key, *commands, **kwds):\n self.modifiers = modifiers\n self.key = key\n self.commands = commands\n self.desc = kwds.get(\"desc\", \"\")\n\n def __repr__(self):\n return \"<Key (%s, %s)>\" % (self.modifiers, self.key)\n\n\nclass Mouse:\n def __init__(self, modifiers, button, *commands, **kwargs):\n self.focus = kwargs.pop(\"focus\", \"before\")\n self.modifiers = modifiers\n self.button = button\n self.commands = commands\n self.button_code = int(self.button.replace('Button', ''))\n for k, v in kwargs.items():\n setattr(self, k, v)\n\n\nclass Drag(Mouse):\n \"\"\"Defines binding of a mouse to some dragging action\n\n On each motion event command is executed with two extra parameters added x\n and y offset from previous move\n\n It focuses clicked window by default. If you want to prevent it pass,\n `focus=None` as an argument\n \"\"\"\n def __init__(self, *args, start=False, **kwargs):\n super().__init__(*args, **kwargs)\n self.start = start\n\n def __repr__(self):\n return \"<Drag (%s, %s)>\" % (self.modifiers, self.button)\n\n\nclass Click(Mouse):\n \"\"\"Defines binding of a mouse click\n\n It focuses clicked window by default. 
If you want to prevent it, pass\n `focus=None` as an argument\n \"\"\"\n def __init__(self, modifiers, button, *commands, **kwargs):\n super().__init__(modifiers, button, *commands, **kwargs)\n\n def __repr__(self):\n return \"<Click (%s, %s)>\" % (self.modifiers, self.button)\n\n\nclass EzConfig:\n \"\"\"\n Helper class for defining key and button bindings in an emacs-like format.\n Inspired by Xmonad's XMonad.Util.EZConfig.\n \"\"\"\n\n modifier_keys = {\n 'M': 'mod4',\n 'A': 'mod1',\n 'S': 'shift',\n 'C': 'control',\n }\n\n def parse(self, spec):\n \"\"\"\n Splits an emacs keydef into modifiers and keys. For example:\n \"M-S-a\" -> ['mod4', 'shift'], 'a'\n \"A-<minus>\" -> ['mod1'], 'minus'\n \"C-<Tab>\" -> ['control'], 'Tab'\n \"\"\"\n mods = []\n keys = []\n\n for key in spec.split('-'):\n if not key:\n break\n if key in self.modifier_keys:\n if keys:\n msg = 'Modifiers must always come before key/btn: %s'\n raise utils.QtileError(msg % spec)\n mods.append(self.modifier_keys[key])\n continue\n if len(key) == 1:\n keys.append(key)\n continue\n if len(key) > 3 and key[0] == '<' and key[-1] == '>':\n keys.append(key[1:-1])\n continue\n\n if not keys:\n msg = 'Invalid key/btn specifier: %s'\n raise utils.QtileError(msg % spec)\n\n if len(keys) > 1:\n msg = 'Key chains are not supported: %s' % spec\n raise utils.QtileError(msg)\n\n return mods, keys[0]\n\n\nclass EzKey(EzConfig, Key):\n def __init__(self, keydef, *commands):\n modkeys, key = self.parse(keydef)\n super().__init__(modkeys, key, *commands)\n\n\nclass EzClick(EzConfig, Click):\n def __init__(self, btndef, *commands, **kwargs):\n modkeys, button = self.parse(btndef)\n button = 'Button%s' % button\n super().__init__(modkeys, button, *commands, **kwargs)\n\n\nclass EzDrag(EzConfig, Drag):\n def __init__(self, btndef, *commands, **kwargs):\n modkeys, button = self.parse(btndef)\n button = 'Button%s' % button\n super().__init__(modkeys, button, *commands, **kwargs)\n\n\nclass ScreenRect:\n\n def __init__(self, x, y, width, height):\n self.x = x\n self.y = y\n self.width = width\n self.height = height\n\n def __repr__(self):\n return '<%s %d,%d %d,%d>' % (\n self.__class__.__name__,\n self.x, self.y,\n self.width, self.height\n )\n\n def hsplit(self, columnwidth):\n assert columnwidth > 0\n assert columnwidth < self.width\n return (\n self.__class__(self.x, self.y, columnwidth, self.height),\n self.__class__(\n self.x + columnwidth, self.y,\n self.width - columnwidth, self.height\n )\n )\n\n def vsplit(self, rowheight):\n assert rowheight > 0\n assert rowheight < self.height\n return (\n self.__class__(self.x, self.y, self.width, rowheight),\n self.__class__(\n self.x, self.y + rowheight,\n self.width, self.height - rowheight\n )\n )\n\n\nclass Screen(CommandObject):\n \"\"\"A physical screen, and its associated paraphernalia.\n\n Define a screen with a given set of Bars of a specific geometry. Note that\n bar.Bar objects can only be placed at the top or the bottom of the screen\n (bar.Gap objects can be placed anywhere). 
Also, ``x``, ``y``, ``width``,\n and ``height`` aren't specified usually unless you are using 'fake\n screens'.\n\n Parameters\n ==========\n top: Gap/Bar object, or None.\n bottom: Gap/Bar object, or None.\n left: Gap/Bar object, or None.\n right: Gap/Bar object, or None.\n x : int or None\n y : int or None\n width : int or None\n height : int or None\n \"\"\"\n def __init__(self, top=None, bottom=None, left=None, right=None,\n x=None, y=None, width=None, height=None):\n self.group = None\n self.previous_group = None\n\n self.top = top\n self.bottom = bottom\n self.left = left\n self.right = right\n self.qtile = None\n self.index = None\n # x position of upper left corner can be > 0\n # if one screen is \"right\" of the other\n self.x = x\n self.y = y\n self.width = width\n self.height = height\n\n def _configure(self, qtile, index, x, y, width, height, group):\n self.qtile = qtile\n self.index = index\n self.x = x\n self.y = y\n self.width = width\n self.height = height\n self.set_group(group)\n for i in self.gaps:\n i._configure(qtile, self)\n\n @property\n def gaps(self):\n return (i for i in [self.top, self.bottom, self.left, self.right] if i)\n\n @property\n def dx(self):\n return self.x + self.left.size if self.left else self.x\n\n @property\n def dy(self):\n return self.y + self.top.size if self.top else self.y\n\n @property\n def dwidth(self):\n val = self.width\n if self.left:\n val -= self.left.size\n if self.right:\n val -= self.right.size\n return val\n\n @property\n def dheight(self):\n val = self.height\n if self.top:\n val -= self.top.size\n if self.bottom:\n val -= self.bottom.size\n return val\n\n def get_rect(self):\n return ScreenRect(self.dx, self.dy, self.dwidth, self.dheight)\n\n def set_group(self, new_group, save_prev=True):\n \"\"\"Put group on this screen\"\"\"\n if new_group is None:\n return\n\n if new_group.screen == self:\n return\n\n if save_prev:\n self.previous_group = self.group\n\n if new_group.screen:\n # g1 <-> s1 (self)\n # g2 (new_group) <-> s2 to\n # g1 <-> s2\n # g2 <-> s1\n g1 = self.group\n s1 = self\n g2 = new_group\n s2 = new_group.screen\n\n s2.group = g1\n g1._set_screen(s2)\n s1.group = g2\n g2._set_screen(s1)\n else:\n old_group = self.group\n self.group = new_group\n\n # display clients of the new group and then hide from old group\n # to remove the screen flickering\n new_group._set_screen(self)\n\n if old_group is not None:\n old_group._set_screen(None)\n\n hook.fire(\"setgroup\")\n hook.fire(\"focus_change\")\n hook.fire(\n \"layout_change\",\n self.group.layouts[self.group.current_layout],\n self.group\n )\n\n def toggle_group(self, group=None):\n \"\"\"Switch to the selected group or to the previously active one\"\"\"\n if group in (self.group, None):\n group = self.previous_group\n self.set_group(group)\n\n def _items(self, name):\n if name == \"layout\":\n return (True, list(range(len(self.group.layouts))))\n elif name == \"window\":\n return (True, [i.window.wid for i in self.group.windows])\n elif name == \"bar\":\n return (False, [x.position for x in self.gaps])\n\n def _select(self, name, sel):\n if name == \"layout\":\n if sel is None:\n return self.group.layout\n else:\n return utils.lget(self.group.layouts, sel)\n elif name == \"window\":\n if sel is None:\n return self.group.current_window\n else:\n for i in self.group.windows:\n if i.window.wid == sel:\n return i\n elif name == \"bar\":\n return getattr(self, sel)\n\n def resize(self, x=None, y=None, w=None, h=None):\n if x is None:\n x = self.x\n if y is None:\n y = 
self.y\n if w is None:\n w = self.width\n if h is None:\n h = self.height\n self._configure(self.qtile, self.index, x, y, w, h, self.group)\n for bar in [self.top, self.bottom, self.left, self.right]:\n if bar:\n bar.draw()\n self.qtile.call_soon(self.group.layout_all)\n\n def cmd_info(self):\n \"\"\"Returns a dictionary of info for this screen.\"\"\"\n return dict(\n index=self.index,\n width=self.width,\n height=self.height,\n x=self.x,\n y=self.y\n )\n\n def cmd_resize(self, x=None, y=None, w=None, h=None):\n \"\"\"Resize the screen\"\"\"\n self.resize(x, y, w, h)\n\n def cmd_next_group(self, skip_empty=False, skip_managed=False):\n \"\"\"Switch to the next group\"\"\"\n n = self.group.get_next_group(skip_empty, skip_managed)\n self.set_group(n)\n return n.name\n\n def cmd_prev_group(self, skip_empty=False, skip_managed=False):\n \"\"\"Switch to the previous group\"\"\"\n n = self.group.get_previous_group(skip_empty, skip_managed)\n self.set_group(n)\n return n.name\n\n def cmd_toggle_group(self, group_name=None):\n \"\"\"Switch to the selected group or to the previously active one\"\"\"\n group = self.qtile.groups_map.get(group_name)\n self.toggle_group(group)\n\n def cmd_togglegroup(self, groupName=None): # noqa\n \"\"\"Switch to the selected group or to the previously active one\n\n Deprecated: use toggle_group()\"\"\"\n warnings.warn(\"togglegroup is deprecated, use toggle_group\", DeprecationWarning)\n self.cmd_toggle_group(groupName)\n\n\nclass Group:\n \"\"\"Represents a \"dynamic\" group\n\n These groups can spawn apps, only allow certain Matched windows to be on\n them, hide when they're not in use, etc.\n Groups are identified by their name.\n\n Parameters\n ==========\n name : string\n the name of this group\n matches : default ``None``\n list of ``Match`` objects whose windows will be assigned to this group\n exclusive : boolean\n when other apps are started in this group, should we allow them here or not?\n spawn : string or list of strings\n this will be ``exec()`` d when the group is created, you can pass\n either a program name or a list of programs to ``exec()``\n layout : string\n the name of default layout for this group (e.g. 
'max' or 'stack').\n This is the name specified for a particular layout in config.py\n or if not defined it defaults in general the class name in all lower case.\n layouts : list\n the group layouts list overriding global layouts.\n Use this to define a separate list of layouts for this particular group.\n persist : boolean\n should this group stay alive with no member windows?\n init : boolean\n is this group alive when qtile starts?\n position : int\n group position\n label : string\n the display name of the group.\n Use this to define a display name other than name of the group.\n If set to None, the display name is set to the name.\n \"\"\"\n def __init__(self, name, matches=None, exclusive=False,\n spawn=None, layout=None, layouts=None, persist=True, init=True,\n layout_opts=None, screen_affinity=None, position=sys.maxsize,\n label=None):\n self.name = name\n self.label = label\n self.exclusive = exclusive\n self.spawn = spawn\n self.layout = layout\n self.layouts = layouts or []\n self.persist = persist\n self.init = init\n self.matches = matches or []\n self.layout_opts = layout_opts or {}\n\n self.screen_affinity = screen_affinity\n self.position = position\n\n def __repr__(self):\n attrs = utils.describe_attributes(\n self,\n ['exclusive', 'spawn', 'layout', 'layouts', 'persist', 'init',\n 'matches', 'layout_opts', 'screen_affinity'])\n return '<config.Group %r (%s)>' % (self.name, attrs)\n\n\nclass ScratchPad(Group):\n \"\"\"Represents a \"ScratchPad\" group\n\n ScratchPad adds a (by default) invisible group to qtile.\n That group is used as a place for currently not visible windows spawned by a\n ``DropDown`` configuration.\n\n Parameters\n ==========\n name : string\n the name of this group\n dropdowns : default ``None``\n list of DropDown objects\n position : int\n group position\n label : string\n The display name of the ScratchPad group. Defaults to the empty string\n such that the group is hidden in ``GroupList`` widget.\n \"\"\"\n def __init__(self, name, dropdowns=None, position=sys.maxsize, label=''):\n Group.__init__(self, name, layout='floating', layouts=['floating'],\n init=False, position=position, label=label)\n self.dropdowns = dropdowns if dropdowns is not None else []\n\n def __repr__(self):\n return '<config.ScratchPad %r (%s)>' % (\n self.name, ', '.join(dd.name for dd in self.dropdowns))\n\n\nclass Match:\n \"\"\"Match for dynamic groups\n\n It can match by title, class or role.\n\n ``Match`` supports both regular expression objects (i.e. the result of\n ``re.compile()``) or strings (match as a \"include\" match). 
If a window\n matches any of the things in any of the lists, it is considered a match.\n\n Parameters\n ==========\n title:\n things to match against the title (WM_NAME)\n wm_class:\n things to match against the second string in WM_CLASS atom\n role:\n things to match against the WM_ROLE atom\n wm_type:\n things to match against the WM_TYPE atom\n wm_instance_class:\n things to match against the first string in WM_CLASS atom\n net_wm_pid:\n things to match against the _NET_WM_PID atom (only int allowed in this\n rule)\n \"\"\"\n def __init__(self, title=None, wm_class=None, role=None, wm_type=None,\n wm_instance_class=None, net_wm_pid=None):\n if not title:\n title = []\n if not wm_class:\n wm_class = []\n if not role:\n role = []\n if not wm_type:\n wm_type = []\n if not wm_instance_class:\n wm_instance_class = []\n if not net_wm_pid:\n net_wm_pid = []\n\n try:\n net_wm_pid = list(map(int, net_wm_pid))\n except ValueError:\n error = 'Invalid rule for net_wm_pid: \"%s\" '\\\n 'only ints allowed' % str(net_wm_pid)\n raise utils.QtileError(error)\n\n self._rules = [('title', t) for t in title]\n self._rules += [('wm_class', w) for w in wm_class]\n self._rules += [('role', r) for r in role]\n self._rules += [('wm_type', r) for r in wm_type]\n self._rules += [('wm_instance_class', w) for w in wm_instance_class]\n self._rules += [('net_wm_pid', w) for w in net_wm_pid]\n\n def compare(self, client):\n for _type, rule in self._rules:\n if _type == \"net_wm_pid\":\n def match_func(value):\n return rule == value\n else:\n match_func = getattr(rule, 'match', None) or \\\n getattr(rule, 'count')\n\n if _type == 'title':\n value = client.name\n elif _type == 'wm_class':\n value = None\n _value = client.window.get_wm_class()\n if _value and len(_value) > 1:\n value = _value[1]\n elif _type == 'wm_instance_class':\n value = client.window.get_wm_class()\n if value:\n value = value[0]\n elif _type == 'wm_type':\n value = client.window.get_wm_type()\n elif _type == 'net_wm_pid':\n value = client.window.get_net_wm_pid()\n else:\n value = client.window.get_wm_window_role()\n\n if value and match_func(value):\n return True\n return False\n\n def map(self, callback, clients):\n \"\"\"Apply callback to each client that matches this Match\"\"\"\n for c in clients:\n if self.compare(c):\n callback(c)\n\n def __repr__(self):\n return '<Match %s>' % self._rules\n\n\nclass Rule:\n \"\"\"How to act on a Match\n\n A Rule contains a Match object, and a specification about what to do when\n that object is matched.\n\n Parameters\n ==========\n match :\n ``Match`` object associated with this ``Rule``\n float :\n auto float this window?\n intrusive :\n override the group's exclusive setting?\n break_on_match :\n Should we stop applying rules if this rule is matched?\n \"\"\"\n def __init__(self, match, group=None, float=False, intrusive=False,\n break_on_match=True):\n self.match = match\n self.group = group\n self.float = float\n self.intrusive = intrusive\n self.break_on_match = break_on_match\n\n def matches(self, w):\n return self.match.compare(w)\n\n def __repr__(self):\n actions = utils.describe_attributes(self, ['group', 'float', 'intrusive', 'break_on_match'])\n return '<Rule match=%r actions=(%s)>' % (self.match, actions)\n\n\nclass DropDown(configurable.Configurable):\n \"\"\"\n Configure a specified command and its associated window for the ScratchPad.\n That window can be shown and hidden using a configurable keystroke\n or any other scripted trigger.\n \"\"\"\n defaults = (\n (\n 'x',\n 0.1,\n 'X position of 
window as fraction of current screen width. '\n '0 is the left most position.'\n ),\n (\n 'y',\n 0.0,\n 'Y position of window as fraction of current screen height. '\n '0 is the top most position. To show the window at bottom, '\n 'you have to configure a value < 1 and an appropriate height.'\n ),\n (\n 'width',\n 0.8,\n 'Width of window as fraction of current screen width'\n ),\n (\n 'height',\n 0.35,\n 'Height of window as fraction of current screen.'\n ),\n (\n 'opacity',\n 0.9,\n 'Opacity of window as fraction. Zero is opaque.'\n ),\n (\n 'on_focus_lost_hide',\n True,\n 'Shall the window be hidden if focus is lost? If so, the DropDown '\n 'is hidden if window focus or the group is changed.'\n ),\n (\n 'warp_pointer',\n True,\n 'Shall pointer warp to center of window on activation? '\n 'This has only effect if any of the on_focus_lost_xxx '\n 'configurations is True'\n ),\n )\n\n def __init__(self, name, cmd, **config):\n \"\"\"\n Initialize DropDown window wrapper.\n Define a command to spawn a process for the first time the DropDown\n is shown.\n\n Parameters\n ==========\n name : string\n The name of the DropDown configuration.\n cmd : string\n Command to spawn a process.\n \"\"\"\n configurable.Configurable.__init__(self, **config)\n self.name = name\n self.command = cmd\n self.add_defaults(self.defaults)\n\n def info(self):\n return dict(name=self.name,\n command=self.command,\n x=self.x,\n y=self.y,\n width=self.width,\n height=self.height,\n opacity=self.opacity,\n on_focus_lost_hide=self.on_focus_lost_hide,\n warp_pointer=self.warp_pointer,)\n",
"path": "libqtile/config.py"
}
] | [
{
"content": "# Copyright (c) 2012-2015 Tycho Andersen\n# Copyright (c) 2013 xarvh\n# Copyright (c) 2013 horsik\n# Copyright (c) 2013-2014 roger\n# Copyright (c) 2013 Tao Sauvage\n# Copyright (c) 2014 ramnes\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 Adi Sieker\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nimport sys\nimport warnings\n\nfrom . import configurable\nfrom . import hook\nfrom . import utils\nfrom libqtile.command_object import CommandObject\n\n\nclass Key:\n \"\"\"Defines a keybinding.\n\n Parameters\n ==========\n modifiers:\n A list of modifier specifications. Modifier specifications are one of:\n \"shift\", \"lock\", \"control\", \"mod1\", \"mod2\", \"mod3\", \"mod4\", \"mod5\".\n key:\n A key specification, e.g. \"a\", \"Tab\", \"Return\", \"space\".\n commands:\n A list of lazy command objects generated with the lazy.lazy helper.\n If multiple Call objects are specified, they are run in sequence.\n kwds:\n A dictionary containing \"desc\", allowing a description to be added\n \"\"\"\n def __init__(self, modifiers, key, *commands, **kwds):\n self.modifiers = modifiers\n self.key = key\n self.commands = commands\n self.desc = kwds.get(\"desc\", \"\")\n\n def __repr__(self):\n return \"<Key (%s, %s)>\" % (self.modifiers, self.key)\n\n\nclass Mouse:\n def __init__(self, modifiers, button, *commands, **kwargs):\n self.focus = kwargs.pop(\"focus\", \"before\")\n self.modifiers = modifiers\n self.button = button\n self.commands = commands\n self.button_code = int(self.button.replace('Button', ''))\n for k, v in kwargs.items():\n setattr(self, k, v)\n\n\nclass Drag(Mouse):\n \"\"\"Defines binding of a mouse to some dragging action\n\n On each motion event command is executed with two extra parameters added x\n and y offset from previous move\n\n It focuses clicked window by default. If you want to prevent it pass,\n `focus=None` as an argument\n \"\"\"\n def __init__(self, *args, start=False, **kwargs):\n super().__init__(*args, **kwargs)\n self.start = start\n\n def __repr__(self):\n return \"<Drag (%s, %s)>\" % (self.modifiers, self.button)\n\n\nclass Click(Mouse):\n \"\"\"Defines binding of a mouse click\n\n It focuses clicked window by default. 
If you want to prevent it, pass\n `focus=None` as an argument\n \"\"\"\n def __init__(self, modifiers, button, *commands, **kwargs):\n super().__init__(modifiers, button, *commands, **kwargs)\n\n def __repr__(self):\n return \"<Click (%s, %s)>\" % (self.modifiers, self.button)\n\n\nclass EzConfig:\n \"\"\"\n Helper class for defining key and button bindings in an emacs-like format.\n Inspired by Xmonad's XMonad.Util.EZConfig.\n \"\"\"\n\n modifier_keys = {\n 'M': 'mod4',\n 'A': 'mod1',\n 'S': 'shift',\n 'C': 'control',\n }\n\n def parse(self, spec):\n \"\"\"\n Splits an emacs keydef into modifiers and keys. For example:\n \"M-S-a\" -> ['mod4', 'shift'], 'a'\n \"A-<minus>\" -> ['mod1'], 'minus'\n \"C-<Tab>\" -> ['control'], 'Tab'\n \"\"\"\n mods = []\n keys = []\n\n for key in spec.split('-'):\n if not key:\n break\n if key in self.modifier_keys:\n if keys:\n msg = 'Modifiers must always come before key/btn: %s'\n raise utils.QtileError(msg % spec)\n mods.append(self.modifier_keys[key])\n continue\n if len(key) == 1:\n keys.append(key)\n continue\n if len(key) > 3 and key[0] == '<' and key[-1] == '>':\n keys.append(key[1:-1])\n continue\n\n if not keys:\n msg = 'Invalid key/btn specifier: %s'\n raise utils.QtileError(msg % spec)\n\n if len(keys) > 1:\n msg = 'Key chains are not supported: %s' % spec\n raise utils.QtileError(msg)\n\n return mods, keys[0]\n\n\nclass EzKey(EzConfig, Key):\n def __init__(self, keydef, *commands, **kwargs):\n modkeys, key = self.parse(keydef)\n super().__init__(modkeys, key, *commands, **kwargs)\n\n\nclass EzClick(EzConfig, Click):\n def __init__(self, btndef, *commands, **kwargs):\n modkeys, button = self.parse(btndef)\n button = 'Button%s' % button\n super().__init__(modkeys, button, *commands, **kwargs)\n\n\nclass EzDrag(EzConfig, Drag):\n def __init__(self, btndef, *commands, **kwargs):\n modkeys, button = self.parse(btndef)\n button = 'Button%s' % button\n super().__init__(modkeys, button, *commands, **kwargs)\n\n\nclass ScreenRect:\n\n def __init__(self, x, y, width, height):\n self.x = x\n self.y = y\n self.width = width\n self.height = height\n\n def __repr__(self):\n return '<%s %d,%d %d,%d>' % (\n self.__class__.__name__,\n self.x, self.y,\n self.width, self.height\n )\n\n def hsplit(self, columnwidth):\n assert columnwidth > 0\n assert columnwidth < self.width\n return (\n self.__class__(self.x, self.y, columnwidth, self.height),\n self.__class__(\n self.x + columnwidth, self.y,\n self.width - columnwidth, self.height\n )\n )\n\n def vsplit(self, rowheight):\n assert rowheight > 0\n assert rowheight < self.height\n return (\n self.__class__(self.x, self.y, self.width, rowheight),\n self.__class__(\n self.x, self.y + rowheight,\n self.width, self.height - rowheight\n )\n )\n\n\nclass Screen(CommandObject):\n \"\"\"A physical screen, and its associated paraphernalia.\n\n Define a screen with a given set of Bars of a specific geometry. Note that\n bar.Bar objects can only be placed at the top or the bottom of the screen\n (bar.Gap objects can be placed anywhere). 
Also, ``x``, ``y``, ``width``,\n and ``height`` aren't specified usually unless you are using 'fake\n screens'.\n\n Parameters\n ==========\n top: Gap/Bar object, or None.\n bottom: Gap/Bar object, or None.\n left: Gap/Bar object, or None.\n right: Gap/Bar object, or None.\n x : int or None\n y : int or None\n width : int or None\n height : int or None\n \"\"\"\n def __init__(self, top=None, bottom=None, left=None, right=None,\n x=None, y=None, width=None, height=None):\n self.group = None\n self.previous_group = None\n\n self.top = top\n self.bottom = bottom\n self.left = left\n self.right = right\n self.qtile = None\n self.index = None\n # x position of upper left corner can be > 0\n # if one screen is \"right\" of the other\n self.x = x\n self.y = y\n self.width = width\n self.height = height\n\n def _configure(self, qtile, index, x, y, width, height, group):\n self.qtile = qtile\n self.index = index\n self.x = x\n self.y = y\n self.width = width\n self.height = height\n self.set_group(group)\n for i in self.gaps:\n i._configure(qtile, self)\n\n @property\n def gaps(self):\n return (i for i in [self.top, self.bottom, self.left, self.right] if i)\n\n @property\n def dx(self):\n return self.x + self.left.size if self.left else self.x\n\n @property\n def dy(self):\n return self.y + self.top.size if self.top else self.y\n\n @property\n def dwidth(self):\n val = self.width\n if self.left:\n val -= self.left.size\n if self.right:\n val -= self.right.size\n return val\n\n @property\n def dheight(self):\n val = self.height\n if self.top:\n val -= self.top.size\n if self.bottom:\n val -= self.bottom.size\n return val\n\n def get_rect(self):\n return ScreenRect(self.dx, self.dy, self.dwidth, self.dheight)\n\n def set_group(self, new_group, save_prev=True):\n \"\"\"Put group on this screen\"\"\"\n if new_group is None:\n return\n\n if new_group.screen == self:\n return\n\n if save_prev:\n self.previous_group = self.group\n\n if new_group.screen:\n # g1 <-> s1 (self)\n # g2 (new_group) <-> s2 to\n # g1 <-> s2\n # g2 <-> s1\n g1 = self.group\n s1 = self\n g2 = new_group\n s2 = new_group.screen\n\n s2.group = g1\n g1._set_screen(s2)\n s1.group = g2\n g2._set_screen(s1)\n else:\n old_group = self.group\n self.group = new_group\n\n # display clients of the new group and then hide from old group\n # to remove the screen flickering\n new_group._set_screen(self)\n\n if old_group is not None:\n old_group._set_screen(None)\n\n hook.fire(\"setgroup\")\n hook.fire(\"focus_change\")\n hook.fire(\n \"layout_change\",\n self.group.layouts[self.group.current_layout],\n self.group\n )\n\n def toggle_group(self, group=None):\n \"\"\"Switch to the selected group or to the previously active one\"\"\"\n if group in (self.group, None):\n group = self.previous_group\n self.set_group(group)\n\n def _items(self, name):\n if name == \"layout\":\n return (True, list(range(len(self.group.layouts))))\n elif name == \"window\":\n return (True, [i.window.wid for i in self.group.windows])\n elif name == \"bar\":\n return (False, [x.position for x in self.gaps])\n\n def _select(self, name, sel):\n if name == \"layout\":\n if sel is None:\n return self.group.layout\n else:\n return utils.lget(self.group.layouts, sel)\n elif name == \"window\":\n if sel is None:\n return self.group.current_window\n else:\n for i in self.group.windows:\n if i.window.wid == sel:\n return i\n elif name == \"bar\":\n return getattr(self, sel)\n\n def resize(self, x=None, y=None, w=None, h=None):\n if x is None:\n x = self.x\n if y is None:\n y = 
self.y\n if w is None:\n w = self.width\n if h is None:\n h = self.height\n self._configure(self.qtile, self.index, x, y, w, h, self.group)\n for bar in [self.top, self.bottom, self.left, self.right]:\n if bar:\n bar.draw()\n self.qtile.call_soon(self.group.layout_all)\n\n def cmd_info(self):\n \"\"\"Returns a dictionary of info for this screen.\"\"\"\n return dict(\n index=self.index,\n width=self.width,\n height=self.height,\n x=self.x,\n y=self.y\n )\n\n def cmd_resize(self, x=None, y=None, w=None, h=None):\n \"\"\"Resize the screen\"\"\"\n self.resize(x, y, w, h)\n\n def cmd_next_group(self, skip_empty=False, skip_managed=False):\n \"\"\"Switch to the next group\"\"\"\n n = self.group.get_next_group(skip_empty, skip_managed)\n self.set_group(n)\n return n.name\n\n def cmd_prev_group(self, skip_empty=False, skip_managed=False):\n \"\"\"Switch to the previous group\"\"\"\n n = self.group.get_previous_group(skip_empty, skip_managed)\n self.set_group(n)\n return n.name\n\n def cmd_toggle_group(self, group_name=None):\n \"\"\"Switch to the selected group or to the previously active one\"\"\"\n group = self.qtile.groups_map.get(group_name)\n self.toggle_group(group)\n\n def cmd_togglegroup(self, groupName=None): # noqa\n \"\"\"Switch to the selected group or to the previously active one\n\n Deprecated: use toggle_group()\"\"\"\n warnings.warn(\"togglegroup is deprecated, use toggle_group\", DeprecationWarning)\n self.cmd_toggle_group(groupName)\n\n\nclass Group:\n \"\"\"Represents a \"dynamic\" group\n\n These groups can spawn apps, only allow certain Matched windows to be on\n them, hide when they're not in use, etc.\n Groups are identified by their name.\n\n Parameters\n ==========\n name : string\n the name of this group\n matches : default ``None``\n list of ``Match`` objects whose windows will be assigned to this group\n exclusive : boolean\n when other apps are started in this group, should we allow them here or not?\n spawn : string or list of strings\n this will be ``exec()`` d when the group is created, you can pass\n either a program name or a list of programs to ``exec()``\n layout : string\n the name of default layout for this group (e.g. 
'max' or 'stack').\n This is the name specified for a particular layout in config.py\n or if not defined it defaults in general the class name in all lower case.\n layouts : list\n the group layouts list overriding global layouts.\n Use this to define a separate list of layouts for this particular group.\n persist : boolean\n should this group stay alive with no member windows?\n init : boolean\n is this group alive when qtile starts?\n position : int\n group position\n label : string\n the display name of the group.\n Use this to define a display name other than name of the group.\n If set to None, the display name is set to the name.\n \"\"\"\n def __init__(self, name, matches=None, exclusive=False,\n spawn=None, layout=None, layouts=None, persist=True, init=True,\n layout_opts=None, screen_affinity=None, position=sys.maxsize,\n label=None):\n self.name = name\n self.label = label\n self.exclusive = exclusive\n self.spawn = spawn\n self.layout = layout\n self.layouts = layouts or []\n self.persist = persist\n self.init = init\n self.matches = matches or []\n self.layout_opts = layout_opts or {}\n\n self.screen_affinity = screen_affinity\n self.position = position\n\n def __repr__(self):\n attrs = utils.describe_attributes(\n self,\n ['exclusive', 'spawn', 'layout', 'layouts', 'persist', 'init',\n 'matches', 'layout_opts', 'screen_affinity'])\n return '<config.Group %r (%s)>' % (self.name, attrs)\n\n\nclass ScratchPad(Group):\n \"\"\"Represents a \"ScratchPad\" group\n\n ScratchPad adds a (by default) invisible group to qtile.\n That group is used as a place for currently not visible windows spawned by a\n ``DropDown`` configuration.\n\n Parameters\n ==========\n name : string\n the name of this group\n dropdowns : default ``None``\n list of DropDown objects\n position : int\n group position\n label : string\n The display name of the ScratchPad group. Defaults to the empty string\n such that the group is hidden in ``GroupList`` widget.\n \"\"\"\n def __init__(self, name, dropdowns=None, position=sys.maxsize, label=''):\n Group.__init__(self, name, layout='floating', layouts=['floating'],\n init=False, position=position, label=label)\n self.dropdowns = dropdowns if dropdowns is not None else []\n\n def __repr__(self):\n return '<config.ScratchPad %r (%s)>' % (\n self.name, ', '.join(dd.name for dd in self.dropdowns))\n\n\nclass Match:\n \"\"\"Match for dynamic groups\n\n It can match by title, class or role.\n\n ``Match`` supports both regular expression objects (i.e. the result of\n ``re.compile()``) or strings (match as a \"include\" match). 
If a window\n matches any of the things in any of the lists, it is considered a match.\n\n Parameters\n ==========\n title:\n things to match against the title (WM_NAME)\n wm_class:\n things to match against the second string in WM_CLASS atom\n role:\n things to match against the WM_ROLE atom\n wm_type:\n things to match against the WM_TYPE atom\n wm_instance_class:\n things to match against the first string in WM_CLASS atom\n net_wm_pid:\n things to match against the _NET_WM_PID atom (only int allowed in this\n rule)\n \"\"\"\n def __init__(self, title=None, wm_class=None, role=None, wm_type=None,\n wm_instance_class=None, net_wm_pid=None):\n if not title:\n title = []\n if not wm_class:\n wm_class = []\n if not role:\n role = []\n if not wm_type:\n wm_type = []\n if not wm_instance_class:\n wm_instance_class = []\n if not net_wm_pid:\n net_wm_pid = []\n\n try:\n net_wm_pid = list(map(int, net_wm_pid))\n except ValueError:\n error = 'Invalid rule for net_wm_pid: \"%s\" '\\\n 'only ints allowed' % str(net_wm_pid)\n raise utils.QtileError(error)\n\n self._rules = [('title', t) for t in title]\n self._rules += [('wm_class', w) for w in wm_class]\n self._rules += [('role', r) for r in role]\n self._rules += [('wm_type', r) for r in wm_type]\n self._rules += [('wm_instance_class', w) for w in wm_instance_class]\n self._rules += [('net_wm_pid', w) for w in net_wm_pid]\n\n def compare(self, client):\n for _type, rule in self._rules:\n if _type == \"net_wm_pid\":\n def match_func(value):\n return rule == value\n else:\n match_func = getattr(rule, 'match', None) or \\\n getattr(rule, 'count')\n\n if _type == 'title':\n value = client.name\n elif _type == 'wm_class':\n value = None\n _value = client.window.get_wm_class()\n if _value and len(_value) > 1:\n value = _value[1]\n elif _type == 'wm_instance_class':\n value = client.window.get_wm_class()\n if value:\n value = value[0]\n elif _type == 'wm_type':\n value = client.window.get_wm_type()\n elif _type == 'net_wm_pid':\n value = client.window.get_net_wm_pid()\n else:\n value = client.window.get_wm_window_role()\n\n if value and match_func(value):\n return True\n return False\n\n def map(self, callback, clients):\n \"\"\"Apply callback to each client that matches this Match\"\"\"\n for c in clients:\n if self.compare(c):\n callback(c)\n\n def __repr__(self):\n return '<Match %s>' % self._rules\n\n\nclass Rule:\n \"\"\"How to act on a Match\n\n A Rule contains a Match object, and a specification about what to do when\n that object is matched.\n\n Parameters\n ==========\n match :\n ``Match`` object associated with this ``Rule``\n float :\n auto float this window?\n intrusive :\n override the group's exclusive setting?\n break_on_match :\n Should we stop applying rules if this rule is matched?\n \"\"\"\n def __init__(self, match, group=None, float=False, intrusive=False,\n break_on_match=True):\n self.match = match\n self.group = group\n self.float = float\n self.intrusive = intrusive\n self.break_on_match = break_on_match\n\n def matches(self, w):\n return self.match.compare(w)\n\n def __repr__(self):\n actions = utils.describe_attributes(self, ['group', 'float', 'intrusive', 'break_on_match'])\n return '<Rule match=%r actions=(%s)>' % (self.match, actions)\n\n\nclass DropDown(configurable.Configurable):\n \"\"\"\n Configure a specified command and its associated window for the ScratchPad.\n That window can be shown and hidden using a configurable keystroke\n or any other scripted trigger.\n \"\"\"\n defaults = (\n (\n 'x',\n 0.1,\n 'X position of 
window as fraction of current screen width. '\n '0 is the left most position.'\n ),\n (\n 'y',\n 0.0,\n 'Y position of window as fraction of current screen height. '\n '0 is the top most position. To show the window at bottom, '\n 'you have to configure a value < 1 and an appropriate height.'\n ),\n (\n 'width',\n 0.8,\n 'Width of window as fraction of current screen width'\n ),\n (\n 'height',\n 0.35,\n 'Height of window as fraction of current screen.'\n ),\n (\n 'opacity',\n 0.9,\n 'Opacity of window as fraction. Zero is opaque.'\n ),\n (\n 'on_focus_lost_hide',\n True,\n 'Shall the window be hidden if focus is lost? If so, the DropDown '\n 'is hidden if window focus or the group is changed.'\n ),\n (\n 'warp_pointer',\n True,\n 'Shall pointer warp to center of window on activation? '\n 'This has only effect if any of the on_focus_lost_xxx '\n 'configurations is True'\n ),\n )\n\n def __init__(self, name, cmd, **config):\n \"\"\"\n Initialize DropDown window wrapper.\n Define a command to spawn a process for the first time the DropDown\n is shown.\n\n Parameters\n ==========\n name : string\n The name of the DropDown configuration.\n cmd : string\n Command to spawn a process.\n \"\"\"\n configurable.Configurable.__init__(self, **config)\n self.name = name\n self.command = cmd\n self.add_defaults(self.defaults)\n\n def info(self):\n return dict(name=self.name,\n command=self.command,\n x=self.x,\n y=self.y,\n width=self.width,\n height=self.height,\n opacity=self.opacity,\n on_focus_lost_hide=self.on_focus_lost_hide,\n warp_pointer=self.warp_pointer,)\n",
"path": "libqtile/config.py"
}
] | diff --git a/libqtile/config.py b/libqtile/config.py
index dacff71672..0e53fc66f2 100644
--- a/libqtile/config.py
+++ b/libqtile/config.py
@@ -152,9 +152,9 @@ def parse(self, spec):
class EzKey(EzConfig, Key):
- def __init__(self, keydef, *commands):
+ def __init__(self, keydef, *commands, **kwargs):
modkeys, key = self.parse(keydef)
- super().__init__(modkeys, key, *commands)
+ super().__init__(modkeys, key, *commands, **kwargs)
class EzClick(EzConfig, Click):
|
liberapay__liberapay.com-1156 | Support avatars from Gitlab
Quite a number of open-source projects are hosted on Gitlab, including mine.
With Libravatar [shutting down](https://blog.libravatar.org/posts/Libravatar.org_is_shutting_down_on_2018-09-01/), it'd be nice to have an alternative that doesn't require creating an account on a proprietary service. (While Mastodon isn't proprietary, it's still an unnecessary extra account to take care of.)
| [
{
"content": "# coding: utf8\nfrom __future__ import print_function, unicode_literals\n\nfrom collections import namedtuple, OrderedDict\nfrom datetime import date, datetime, timedelta\nfrom decimal import Decimal, ROUND_UP\nimport re\n\nfrom jinja2 import StrictUndefined\nfrom mangopay.utils import Money\nfrom pando.utils import utc\n\n\ndef ordered_set(keys):\n return OrderedDict((k, None) for k in keys)\n\n\nclass CustomUndefined(StrictUndefined):\n __bool__ = __nonzero__ = lambda self: False\n\n def __str__(self):\n try:\n self._fail_with_undefined_error()\n except Exception as e:\n self._tell_sentry(e, {})\n return ''\n\n __unicode__ = __str__\n\n\ndef check_bits(bits):\n assert len(set(bits)) == len(bits) # no duplicates\n assert not [b for b in bits if '{0:b}'.format(b).count('1') != 1] # single bit\n\n\nEvent = namedtuple('Event', 'name bit title')\n\n\nclass Fees(namedtuple('Fees', ('var', 'fix'))):\n VAT = Decimal('0.17') # 17% (Luxembourg rate)\n VAT_1 = VAT + 1\n\n @property\n def with_vat(self):\n r = (self.var * self.VAT_1 * 100, self.fix * self.VAT_1)\n return r[0] if not r[1] else r[1].round_up() if not r[0] else r\n\n\nStandardTip = namedtuple('StandardTip', 'label weekly monthly yearly')\n\n\n_ = lambda a: a\n\nASCII_ALLOWED_IN_USERNAME = set(\"0123456789\"\n \"abcdefghijklmnopqrstuvwxyz\"\n \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n \"-_.\")\n\nAVATAR_QUERY = '?s=160&default=retro'\nAVATAR_SOURCES = 'libravatar bitbucket facebook github google mastodon twitter'.split()\n\nBIRTHDAY = date(2015, 5, 22)\n\nCURRENCIES = ordered_set(['EUR', 'USD'])\n\nD_CENT = Decimal('0.01')\nD_INF = Decimal('inf')\nD_MAX = Decimal('999999999999.99')\nD_UNIT = Decimal('1.00')\nD_ZERO = Decimal('0.00')\n\nDONATION_LIMITS_WEEKLY_EUR_USD = (Decimal('0.01'), Decimal('100.00'))\nDONATION_LIMITS_EUR_USD = {\n 'weekly': DONATION_LIMITS_WEEKLY_EUR_USD,\n 'monthly': tuple((x * Decimal(52) / Decimal(12)).quantize(D_CENT, rounding=ROUND_UP)\n for x in DONATION_LIMITS_WEEKLY_EUR_USD),\n 'yearly': tuple((x * Decimal(52)).quantize(D_CENT)\n for x in DONATION_LIMITS_WEEKLY_EUR_USD),\n}\nDONATION_LIMITS = {\n 'EUR': {k: (Money(v[0], 'EUR'), Money(v[1], 'EUR')) for k, v in DONATION_LIMITS_EUR_USD.items()},\n 'USD': {k: (Money(v[0], 'USD'), Money(v[1], 'USD')) for k, v in DONATION_LIMITS_EUR_USD.items()},\n}\n\nDOMAIN_RE = re.compile(r'''\n ^\n ([a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\\.)+\n [a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\n $\n''', re.VERBOSE)\n\nELSEWHERE_ACTIONS = {'connect', 'lock', 'unlock'}\n\nEMAIL_VERIFICATION_TIMEOUT = timedelta(hours=24)\nEMAIL_RE = re.compile(r'''\n # This is the regexp used by MangoPay (as of February 2017).\n # It rejects some valid but exotic addresses.\n # https://en.wikipedia.org/wiki/Email_address\n ^\n [a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+(\\.[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+)*\n @\n ([a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\\.)+[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\n $\n''', re.VERBOSE)\n\nEPOCH = datetime(1970, 1, 1, 0, 0, 0, 0, utc)\n\nEUROZONE = set(\"AT BE CY DE EE ES FI FR GR IE IT LT LU LV MT NL PT SI SK\".split())\nSEPA = EUROZONE | set(\"BG CH CZ DK GB GI HR HU IS LI MC NO PL RO SE\".split())\n\nEVENTS = [\n Event('income', 1, _(\"When I receive money\")),\n Event('low_balance', 2, _(\"When there isn't enough money in my wallet to cover my donations\")),\n Event('withdrawal_created', 4, _(\"When a transfer to my bank account is initiated\")),\n Event('withdrawal_failed', 8, _(\"When a transfer to my bank account fails\")),\n Event('pledgee_joined', 16, _(\"When someone I pledge to 
joins Liberapay\")),\n Event('team_invite', 32, _(\"When someone invites me to join a team\")),\n Event('payin_bankwire_failed', 64, _(\"When a bank wire transfer to my Liberapay wallet fails\")),\n Event('payin_bankwire_succeeded', 128, _(\"When a bank wire transfer to my Liberapay wallet succeeds\")),\n Event('payin_bankwire_expired', 256, _(\"When a bank wire transfer to my Liberapay wallet expires\")),\n Event('payin_directdebit_failed', 512, _(\"When a direct debit from my bank account fails\")),\n Event('payin_directdebit_succeeded', 1024, _(\"When a direct debit from my bank account succeeds\")),\n]\ncheck_bits([e.bit for e in EVENTS])\nEVENTS = OrderedDict((e.name, e) for e in EVENTS)\nEVENTS_S = ' '.join(EVENTS.keys())\n\n# https://www.mangopay.com/pricing/\nFEE_PAYIN_BANK_WIRE = Fees(Decimal('0.005'), 0) # 0.5%\nFEE_PAYIN_CARD = {\n 'EUR': Fees(Decimal('0.018'), Money('0.18', 'EUR')), # 1.8% + €0.18\n 'USD': Fees(Decimal('0.025'), Money('0.30', 'USD')), # 2.5% + $0.30\n}\nFEE_PAYIN_DIRECT_DEBIT = {\n 'EUR': Fees(0, Money('0.50', 'EUR')), # €0.50\n 'GBP': Fees(0, Money('0.50', 'GBP')), # £0.50\n}\nFEE_PAYOUT = {\n 'EUR': {\n 'domestic': (SEPA, Fees(0, 0)),\n 'foreign': Fees(0, Money('2.50', 'EUR')),\n },\n 'GBP': {\n 'domestic': ({'GB'}, Fees(0, Money('0.45', 'GBP'))),\n 'foreign': Fees(0, Money('1.90', 'GBP')),\n },\n 'USD': {\n '*': Fees(0, Money('3.00', 'USD')),\n },\n}\nFEE_PAYOUT_WARN = Decimal('0.03') # warn user when fee exceeds 3%\n\nINVOICE_DOC_MAX_SIZE = 5000000\nINVOICE_DOCS_EXTS = ['pdf', 'jpeg', 'jpg', 'png']\nINVOICE_DOCS_LIMIT = 10\n\nINVOICE_NATURES = {\n 'expense': _(\"Expense Report\"),\n}\n\nINVOICE_STATUSES = {\n 'pre': _(\"Draft\"),\n 'new': _(\"Sent (awaiting approval)\"),\n 'retracted': _(\"Retracted\"),\n 'accepted': _(\"Accepted (awaiting payment)\"),\n 'paid': _(\"Paid\"),\n 'rejected': _(\"Rejected\"),\n}\n\nJINJA_ENV_COMMON = dict(\n trim_blocks=True, lstrip_blocks=True,\n line_statement_prefix='%',\n # undefined=CustomUndefined,\n)\n\n# https://docs.mangopay.com/api-references/kyc-rules/\nKYC_DOC_MAX_SIZE = 7000000\nKYC_DOC_MAX_SIZE_MB = int(KYC_DOC_MAX_SIZE / 1000000)\nKYC_DOCS_EXTS = ['pdf', 'jpeg', 'jpg', 'gif', 'png']\nKYC_DOCS_EXTS_STR = ', '.join(KYC_DOCS_EXTS)\nKYC_INCOME_THRESHOLDS = [(i, Money(a, 'EUR')) for i, a in (\n (1, 18000),\n (2, 30000),\n (3, 50000),\n (4, 80000),\n (5, 120000),\n (6, 120000),\n)]\nKYC_PAYIN_YEARLY_THRESHOLD = Money('2500', 'EUR')\nKYC_PAYOUT_YEARLY_THRESHOLD = Money('1000', 'EUR')\n\nLAUNCH_TIME = datetime(2016, 2, 3, 12, 50, 0, 0, utc)\n\nPARTICIPANT_KINDS = {\n 'individual': _(\"Individual\"),\n 'organization': _(\"Organization\"),\n 'group': _(\"Team\"),\n}\n\nPASSWORD_MIN_SIZE = 8\nPASSWORD_MAX_SIZE = 150\n\nPAYIN_BANK_WIRE_MIN = {k: Money('2.00', k) for k in ('EUR', 'USD')} # fee ≈ 0.99%\nPAYIN_BANK_WIRE_TARGET = {k: Money('5.00', k) for k in ('EUR', 'USD')} # fee ≈ 0.6%\nPAYIN_BANK_WIRE_MAX = {k: Money('2500.00', k) for k in ('EUR', 'USD')}\nPAYIN_CARD_MIN = {\n 'EUR': Money('15.00', 'EUR'), # fee ≈ 3.5%\n 'USD': Money('20.00', 'USD'), # fee ≈ 4.58%\n}\nPAYIN_CARD_TARGET = {\n 'EUR': Money('92.00', 'EUR'), # fee ≈ 2.33%\n 'USD': Money('95.00', 'USD'), # fee ≈ 3.27%\n}\nPAYIN_CARD_MAX = {k: Money('2500.00', k) for k in ('EUR', 'USD')}\nPAYIN_DIRECT_DEBIT_COUNTRIES = {\n # https://support.gocardless.com/hc/en-gb/articles/115005758445\n 'EUR': EUROZONE | set(\"MC SM\".split()),\n}\nPAYIN_DIRECT_DEBIT_MIN_EUR_GBP = Decimal('15.00') # fee ≈ 3.78%\nPAYIN_DIRECT_DEBIT_MIN = {\n 'EUR': 
Money(PAYIN_DIRECT_DEBIT_MIN_EUR_GBP, 'EUR'),\n 'GBP': Money(PAYIN_DIRECT_DEBIT_MIN_EUR_GBP, 'GBP'),\n}\nPAYIN_DIRECT_DEBIT_TARGET_EUR_GBP = Decimal('99.00') # fee ≈ 0.59%\nPAYIN_DIRECT_DEBIT_TARGET = {\n 'EUR': Money(PAYIN_DIRECT_DEBIT_TARGET_EUR_GBP, 'EUR'),\n 'GBP': Money(PAYIN_DIRECT_DEBIT_TARGET_EUR_GBP, 'GBP'),\n}\nPAYIN_DIRECT_DEBIT_MAX = {k: Money('2500.00', k) for k in ('EUR', 'USD')}\n\nPAYMENT_METHODS = {\n 'mango-ba': _(\"Direct Debit\"),\n 'mango-bw': _(\"Bank Wire\"),\n 'mango-cc': _(\"Credit Card\"),\n}\nPAYMENT_SLUGS = {\n 'mango-ba': 'direct-debit',\n 'mango-bw': 'bankwire',\n 'mango-cc': 'card',\n}\n\nPERIOD_CONVERSION_RATES = {\n 'weekly': Decimal(1),\n 'monthly': Decimal(12) / Decimal(52),\n 'yearly': Decimal(1) / Decimal(52),\n}\n\nPOSTAL_ADDRESS_KEYS = (\n 'AddressLine1', 'AddressLine2', 'City', 'Region', 'PostalCode', 'Country'\n)\n\nPRIVACY_FIELDS = OrderedDict([\n ('hide_giving', (_(\"Hide total giving from others.\"), False)),\n ('hide_receiving', (_(\"Hide total receiving from others.\"), False)),\n ('hide_from_search', (_(\"Hide this profile from search results on Liberapay.\"), True)),\n ('profile_noindex', (_(\"Tell web search engines not to index this profile.\"), True)),\n ('hide_from_lists', (_(\"Prevent this profile from being listed on Liberapay.\"), True)),\n])\nPRIVACY_FIELDS_S = ' '.join(PRIVACY_FIELDS.keys())\n\nPRIVILEGES = dict(admin=1, run_payday=2)\ncheck_bits(list(PRIVILEGES.values()))\n\nPROFILE_VISIBILITY_ATTRS = ('profile_noindex', 'hide_from_lists', 'hide_from_search')\n\nPUBLIC_NAME_MAX_SIZE = 64\n\nQUARANTINE = timedelta(weeks=4)\n\nRATE_LIMITS = {\n 'add_email.source': (5, 60*60*24), # 5 per day\n 'add_email.target': (2, 60*60*24), # 2 per day\n 'change_currency': (4, 60*60*24*7), # 4 per week\n 'change_password': (7, 60*60*24*7), # 7 per week\n 'change_username': (7, 60*60*24*7), # 7 per week\n 'check_password': (25, 60*60*24*7), # 25 per week\n 'http-unsafe.ip-addr': (10, 10), # 10 per 10 seconds\n 'http-unsafe.user': (10, 10), # 10 per 10 seconds\n 'log-in.country': (10, 60), # 10 per minute per country\n 'log-in.email': (10, 60*60*24), # 10 per day\n 'log-in.email.not-verified': (2, 60*60*24), # 2 per day\n 'log-in.email.verified': (10, 60*60*24), # 10 per day\n 'log-in.ip-addr': (5, 5*60), # 5 per 5 minutes per IP address\n 'log-in.password': (3, 60*60), # 3 per hour\n 'make_team': (5, 60*60*24*7), # 5 per week\n 'refetch_elsewhere_data': (1, 60*60*24*7), # retry after one week\n 'refetch_repos': (1, 60*60*24), # retry after one day\n 'sign-up.ip-addr': (5, 60*60), # 5 per hour per IP address\n 'sign-up.ip-net': (15, 60*60), # 15 per hour per IP network\n 'sign-up.country': (5, 5*60), # 5 per 5 minutes per country\n 'sign-up.ip-version': (15, 5*60), # 15 per 5 minutes per IP version\n}\n\nSESSION = str('session') # bytes in python2, unicode in python3\nSESSION_REFRESH = timedelta(hours=1)\nSESSION_TIMEOUT = timedelta(hours=6)\n\n\ndef make_standard_tip(label, weekly, currency):\n return StandardTip(\n label,\n Money(weekly, currency),\n Money(weekly / PERIOD_CONVERSION_RATES['monthly'], currency),\n Money(weekly / PERIOD_CONVERSION_RATES['yearly'], currency),\n )\n\n\nSTANDARD_TIPS_EUR_USD = (\n (_(\"Symbolic\"), Decimal('0.01')),\n (_(\"Small\"), Decimal('0.25')),\n (_(\"Medium\"), Decimal('1.00')),\n (_(\"Large\"), Decimal('5.00')),\n (_(\"Maximum\"), DONATION_LIMITS_EUR_USD['weekly'][1]),\n)\nSTANDARD_TIPS = {\n 'EUR': [make_standard_tip(label, weekly, 'EUR') for label, weekly in STANDARD_TIPS_EUR_USD],\n 'USD': 
[make_standard_tip(label, weekly, 'USD') for label, weekly in STANDARD_TIPS_EUR_USD],\n}\n\nSUMMARY_MAX_SIZE = 100\n\nTAKE_THROTTLING_THRESHOLD = {k: Money('1.00', k) for k in ('EUR', 'USD')}\n\nUSERNAME_MAX_SIZE = 32\nUSERNAME_SUFFIX_BLACKLIST = set('.txt .html .htm .json .xml'.split())\n\nZERO = {c: Money(D_ZERO, c) for c in ('EUR', 'USD', None)}\n\ndel _\n",
"path": "liberapay/constants.py"
}
] | [
{
"content": "# coding: utf8\nfrom __future__ import print_function, unicode_literals\n\nfrom collections import namedtuple, OrderedDict\nfrom datetime import date, datetime, timedelta\nfrom decimal import Decimal, ROUND_UP\nimport re\n\nfrom jinja2 import StrictUndefined\nfrom mangopay.utils import Money\nfrom pando.utils import utc\n\n\ndef ordered_set(keys):\n return OrderedDict((k, None) for k in keys)\n\n\nclass CustomUndefined(StrictUndefined):\n __bool__ = __nonzero__ = lambda self: False\n\n def __str__(self):\n try:\n self._fail_with_undefined_error()\n except Exception as e:\n self._tell_sentry(e, {})\n return ''\n\n __unicode__ = __str__\n\n\ndef check_bits(bits):\n assert len(set(bits)) == len(bits) # no duplicates\n assert not [b for b in bits if '{0:b}'.format(b).count('1') != 1] # single bit\n\n\nEvent = namedtuple('Event', 'name bit title')\n\n\nclass Fees(namedtuple('Fees', ('var', 'fix'))):\n VAT = Decimal('0.17') # 17% (Luxembourg rate)\n VAT_1 = VAT + 1\n\n @property\n def with_vat(self):\n r = (self.var * self.VAT_1 * 100, self.fix * self.VAT_1)\n return r[0] if not r[1] else r[1].round_up() if not r[0] else r\n\n\nStandardTip = namedtuple('StandardTip', 'label weekly monthly yearly')\n\n\n_ = lambda a: a\n\nASCII_ALLOWED_IN_USERNAME = set(\"0123456789\"\n \"abcdefghijklmnopqrstuvwxyz\"\n \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n \"-_.\")\n\nAVATAR_QUERY = '?s=160&default=retro'\nAVATAR_SOURCES = (\n 'libravatar bitbucket facebook github gitlab google mastodon twitch twitter youtube'\n).split()\n\nBIRTHDAY = date(2015, 5, 22)\n\nCURRENCIES = ordered_set(['EUR', 'USD'])\n\nD_CENT = Decimal('0.01')\nD_INF = Decimal('inf')\nD_MAX = Decimal('999999999999.99')\nD_UNIT = Decimal('1.00')\nD_ZERO = Decimal('0.00')\n\nDONATION_LIMITS_WEEKLY_EUR_USD = (Decimal('0.01'), Decimal('100.00'))\nDONATION_LIMITS_EUR_USD = {\n 'weekly': DONATION_LIMITS_WEEKLY_EUR_USD,\n 'monthly': tuple((x * Decimal(52) / Decimal(12)).quantize(D_CENT, rounding=ROUND_UP)\n for x in DONATION_LIMITS_WEEKLY_EUR_USD),\n 'yearly': tuple((x * Decimal(52)).quantize(D_CENT)\n for x in DONATION_LIMITS_WEEKLY_EUR_USD),\n}\nDONATION_LIMITS = {\n 'EUR': {k: (Money(v[0], 'EUR'), Money(v[1], 'EUR')) for k, v in DONATION_LIMITS_EUR_USD.items()},\n 'USD': {k: (Money(v[0], 'USD'), Money(v[1], 'USD')) for k, v in DONATION_LIMITS_EUR_USD.items()},\n}\n\nDOMAIN_RE = re.compile(r'''\n ^\n ([a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\\.)+\n [a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\n $\n''', re.VERBOSE)\n\nELSEWHERE_ACTIONS = {'connect', 'lock', 'unlock'}\n\nEMAIL_VERIFICATION_TIMEOUT = timedelta(hours=24)\nEMAIL_RE = re.compile(r'''\n # This is the regexp used by MangoPay (as of February 2017).\n # It rejects some valid but exotic addresses.\n # https://en.wikipedia.org/wiki/Email_address\n ^\n [a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+(\\.[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+)*\n @\n ([a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\\.)+[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\n $\n''', re.VERBOSE)\n\nEPOCH = datetime(1970, 1, 1, 0, 0, 0, 0, utc)\n\nEUROZONE = set(\"AT BE CY DE EE ES FI FR GR IE IT LT LU LV MT NL PT SI SK\".split())\nSEPA = EUROZONE | set(\"BG CH CZ DK GB GI HR HU IS LI MC NO PL RO SE\".split())\n\nEVENTS = [\n Event('income', 1, _(\"When I receive money\")),\n Event('low_balance', 2, _(\"When there isn't enough money in my wallet to cover my donations\")),\n Event('withdrawal_created', 4, _(\"When a transfer to my bank account is initiated\")),\n Event('withdrawal_failed', 8, _(\"When a transfer to my bank account fails\")),\n Event('pledgee_joined', 16, 
_(\"When someone I pledge to joins Liberapay\")),\n Event('team_invite', 32, _(\"When someone invites me to join a team\")),\n Event('payin_bankwire_failed', 64, _(\"When a bank wire transfer to my Liberapay wallet fails\")),\n Event('payin_bankwire_succeeded', 128, _(\"When a bank wire transfer to my Liberapay wallet succeeds\")),\n Event('payin_bankwire_expired', 256, _(\"When a bank wire transfer to my Liberapay wallet expires\")),\n Event('payin_directdebit_failed', 512, _(\"When a direct debit from my bank account fails\")),\n Event('payin_directdebit_succeeded', 1024, _(\"When a direct debit from my bank account succeeds\")),\n]\ncheck_bits([e.bit for e in EVENTS])\nEVENTS = OrderedDict((e.name, e) for e in EVENTS)\nEVENTS_S = ' '.join(EVENTS.keys())\n\n# https://www.mangopay.com/pricing/\nFEE_PAYIN_BANK_WIRE = Fees(Decimal('0.005'), 0) # 0.5%\nFEE_PAYIN_CARD = {\n 'EUR': Fees(Decimal('0.018'), Money('0.18', 'EUR')), # 1.8% + €0.18\n 'USD': Fees(Decimal('0.025'), Money('0.30', 'USD')), # 2.5% + $0.30\n}\nFEE_PAYIN_DIRECT_DEBIT = {\n 'EUR': Fees(0, Money('0.50', 'EUR')), # €0.50\n 'GBP': Fees(0, Money('0.50', 'GBP')), # £0.50\n}\nFEE_PAYOUT = {\n 'EUR': {\n 'domestic': (SEPA, Fees(0, 0)),\n 'foreign': Fees(0, Money('2.50', 'EUR')),\n },\n 'GBP': {\n 'domestic': ({'GB'}, Fees(0, Money('0.45', 'GBP'))),\n 'foreign': Fees(0, Money('1.90', 'GBP')),\n },\n 'USD': {\n '*': Fees(0, Money('3.00', 'USD')),\n },\n}\nFEE_PAYOUT_WARN = Decimal('0.03') # warn user when fee exceeds 3%\n\nINVOICE_DOC_MAX_SIZE = 5000000\nINVOICE_DOCS_EXTS = ['pdf', 'jpeg', 'jpg', 'png']\nINVOICE_DOCS_LIMIT = 10\n\nINVOICE_NATURES = {\n 'expense': _(\"Expense Report\"),\n}\n\nINVOICE_STATUSES = {\n 'pre': _(\"Draft\"),\n 'new': _(\"Sent (awaiting approval)\"),\n 'retracted': _(\"Retracted\"),\n 'accepted': _(\"Accepted (awaiting payment)\"),\n 'paid': _(\"Paid\"),\n 'rejected': _(\"Rejected\"),\n}\n\nJINJA_ENV_COMMON = dict(\n trim_blocks=True, lstrip_blocks=True,\n line_statement_prefix='%',\n # undefined=CustomUndefined,\n)\n\n# https://docs.mangopay.com/api-references/kyc-rules/\nKYC_DOC_MAX_SIZE = 7000000\nKYC_DOC_MAX_SIZE_MB = int(KYC_DOC_MAX_SIZE / 1000000)\nKYC_DOCS_EXTS = ['pdf', 'jpeg', 'jpg', 'gif', 'png']\nKYC_DOCS_EXTS_STR = ', '.join(KYC_DOCS_EXTS)\nKYC_INCOME_THRESHOLDS = [(i, Money(a, 'EUR')) for i, a in (\n (1, 18000),\n (2, 30000),\n (3, 50000),\n (4, 80000),\n (5, 120000),\n (6, 120000),\n)]\nKYC_PAYIN_YEARLY_THRESHOLD = Money('2500', 'EUR')\nKYC_PAYOUT_YEARLY_THRESHOLD = Money('1000', 'EUR')\n\nLAUNCH_TIME = datetime(2016, 2, 3, 12, 50, 0, 0, utc)\n\nPARTICIPANT_KINDS = {\n 'individual': _(\"Individual\"),\n 'organization': _(\"Organization\"),\n 'group': _(\"Team\"),\n}\n\nPASSWORD_MIN_SIZE = 8\nPASSWORD_MAX_SIZE = 150\n\nPAYIN_BANK_WIRE_MIN = {k: Money('2.00', k) for k in ('EUR', 'USD')} # fee ≈ 0.99%\nPAYIN_BANK_WIRE_TARGET = {k: Money('5.00', k) for k in ('EUR', 'USD')} # fee ≈ 0.6%\nPAYIN_BANK_WIRE_MAX = {k: Money('2500.00', k) for k in ('EUR', 'USD')}\nPAYIN_CARD_MIN = {\n 'EUR': Money('15.00', 'EUR'), # fee ≈ 3.5%\n 'USD': Money('20.00', 'USD'), # fee ≈ 4.58%\n}\nPAYIN_CARD_TARGET = {\n 'EUR': Money('92.00', 'EUR'), # fee ≈ 2.33%\n 'USD': Money('95.00', 'USD'), # fee ≈ 3.27%\n}\nPAYIN_CARD_MAX = {k: Money('2500.00', k) for k in ('EUR', 'USD')}\nPAYIN_DIRECT_DEBIT_COUNTRIES = {\n # https://support.gocardless.com/hc/en-gb/articles/115005758445\n 'EUR': EUROZONE | set(\"MC SM\".split()),\n}\nPAYIN_DIRECT_DEBIT_MIN_EUR_GBP = Decimal('15.00') # fee ≈ 3.78%\nPAYIN_DIRECT_DEBIT_MIN = {\n 'EUR': 
Money(PAYIN_DIRECT_DEBIT_MIN_EUR_GBP, 'EUR'),\n 'GBP': Money(PAYIN_DIRECT_DEBIT_MIN_EUR_GBP, 'GBP'),\n}\nPAYIN_DIRECT_DEBIT_TARGET_EUR_GBP = Decimal('99.00') # fee ≈ 0.59%\nPAYIN_DIRECT_DEBIT_TARGET = {\n 'EUR': Money(PAYIN_DIRECT_DEBIT_TARGET_EUR_GBP, 'EUR'),\n 'GBP': Money(PAYIN_DIRECT_DEBIT_TARGET_EUR_GBP, 'GBP'),\n}\nPAYIN_DIRECT_DEBIT_MAX = {k: Money('2500.00', k) for k in ('EUR', 'USD')}\n\nPAYMENT_METHODS = {\n 'mango-ba': _(\"Direct Debit\"),\n 'mango-bw': _(\"Bank Wire\"),\n 'mango-cc': _(\"Credit Card\"),\n}\nPAYMENT_SLUGS = {\n 'mango-ba': 'direct-debit',\n 'mango-bw': 'bankwire',\n 'mango-cc': 'card',\n}\n\nPERIOD_CONVERSION_RATES = {\n 'weekly': Decimal(1),\n 'monthly': Decimal(12) / Decimal(52),\n 'yearly': Decimal(1) / Decimal(52),\n}\n\nPOSTAL_ADDRESS_KEYS = (\n 'AddressLine1', 'AddressLine2', 'City', 'Region', 'PostalCode', 'Country'\n)\n\nPRIVACY_FIELDS = OrderedDict([\n ('hide_giving', (_(\"Hide total giving from others.\"), False)),\n ('hide_receiving', (_(\"Hide total receiving from others.\"), False)),\n ('hide_from_search', (_(\"Hide this profile from search results on Liberapay.\"), True)),\n ('profile_noindex', (_(\"Tell web search engines not to index this profile.\"), True)),\n ('hide_from_lists', (_(\"Prevent this profile from being listed on Liberapay.\"), True)),\n])\nPRIVACY_FIELDS_S = ' '.join(PRIVACY_FIELDS.keys())\n\nPRIVILEGES = dict(admin=1, run_payday=2)\ncheck_bits(list(PRIVILEGES.values()))\n\nPROFILE_VISIBILITY_ATTRS = ('profile_noindex', 'hide_from_lists', 'hide_from_search')\n\nPUBLIC_NAME_MAX_SIZE = 64\n\nQUARANTINE = timedelta(weeks=4)\n\nRATE_LIMITS = {\n 'add_email.source': (5, 60*60*24), # 5 per day\n 'add_email.target': (2, 60*60*24), # 2 per day\n 'change_currency': (4, 60*60*24*7), # 4 per week\n 'change_password': (7, 60*60*24*7), # 7 per week\n 'change_username': (7, 60*60*24*7), # 7 per week\n 'check_password': (25, 60*60*24*7), # 25 per week\n 'http-unsafe.ip-addr': (10, 10), # 10 per 10 seconds\n 'http-unsafe.user': (10, 10), # 10 per 10 seconds\n 'log-in.country': (10, 60), # 10 per minute per country\n 'log-in.email': (10, 60*60*24), # 10 per day\n 'log-in.email.not-verified': (2, 60*60*24), # 2 per day\n 'log-in.email.verified': (10, 60*60*24), # 10 per day\n 'log-in.ip-addr': (5, 5*60), # 5 per 5 minutes per IP address\n 'log-in.password': (3, 60*60), # 3 per hour\n 'make_team': (5, 60*60*24*7), # 5 per week\n 'refetch_elsewhere_data': (1, 60*60*24*7), # retry after one week\n 'refetch_repos': (1, 60*60*24), # retry after one day\n 'sign-up.ip-addr': (5, 60*60), # 5 per hour per IP address\n 'sign-up.ip-net': (15, 60*60), # 15 per hour per IP network\n 'sign-up.country': (5, 5*60), # 5 per 5 minutes per country\n 'sign-up.ip-version': (15, 5*60), # 15 per 5 minutes per IP version\n}\n\nSESSION = str('session') # bytes in python2, unicode in python3\nSESSION_REFRESH = timedelta(hours=1)\nSESSION_TIMEOUT = timedelta(hours=6)\n\n\ndef make_standard_tip(label, weekly, currency):\n return StandardTip(\n label,\n Money(weekly, currency),\n Money(weekly / PERIOD_CONVERSION_RATES['monthly'], currency),\n Money(weekly / PERIOD_CONVERSION_RATES['yearly'], currency),\n )\n\n\nSTANDARD_TIPS_EUR_USD = (\n (_(\"Symbolic\"), Decimal('0.01')),\n (_(\"Small\"), Decimal('0.25')),\n (_(\"Medium\"), Decimal('1.00')),\n (_(\"Large\"), Decimal('5.00')),\n (_(\"Maximum\"), DONATION_LIMITS_EUR_USD['weekly'][1]),\n)\nSTANDARD_TIPS = {\n 'EUR': [make_standard_tip(label, weekly, 'EUR') for label, weekly in STANDARD_TIPS_EUR_USD],\n 'USD': 
[make_standard_tip(label, weekly, 'USD') for label, weekly in STANDARD_TIPS_EUR_USD],\n}\n\nSUMMARY_MAX_SIZE = 100\n\nTAKE_THROTTLING_THRESHOLD = {k: Money('1.00', k) for k in ('EUR', 'USD')}\n\nUSERNAME_MAX_SIZE = 32\nUSERNAME_SUFFIX_BLACKLIST = set('.txt .html .htm .json .xml'.split())\n\nZERO = {c: Money(D_ZERO, c) for c in ('EUR', 'USD', None)}\n\ndel _\n",
"path": "liberapay/constants.py"
}
] | diff --git a/liberapay/constants.py b/liberapay/constants.py
index 0d2c2efeb8..8a8b76770f 100644
--- a/liberapay/constants.py
+++ b/liberapay/constants.py
@@ -57,7 +57,9 @@ def with_vat(self):
"-_.")
AVATAR_QUERY = '?s=160&default=retro'
-AVATAR_SOURCES = 'libravatar bitbucket facebook github google mastodon twitter'.split()
+AVATAR_SOURCES = (
+ 'libravatar bitbucket facebook github gitlab google mastodon twitch twitter youtube'
+).split()
BIRTHDAY = date(2015, 5, 22)
|
deepset-ai__haystack-7396 | `HuggingFaceTGIChatGenerator` does not work properly in a Pipeline
**Describe the bug**
Reported on Discord; reproducible.
[Our example in docs](https://docs.haystack.deepset.ai/docs/huggingfacetgichatgenerator#in-a-pipeline) is broken.
**To Reproduce**
```python
from haystack.components.builders import DynamicChatPromptBuilder
from haystack.components.generators.chat import HuggingFaceTGIChatGenerator
from haystack.dataclasses import ChatMessage
from haystack.utils import Secret
from haystack import Pipeline
# no parameter init, we don't use any runtime template variables
prompt_builder = DynamicChatPromptBuilder()
llm = HuggingFaceTGIChatGenerator(model="meta-llama/Llama-2-70b-chat-hf", token=Secret.from_token("..."))
pipe = Pipeline()
pipe.add_component("prompt_builder", prompt_builder)
pipe.add_component("llm", llm)
pipe.connect("prompt_builder.prompt", "llm.messages")
location = "Berlin"
messages = [ChatMessage.from_system("Always respond in German even if some input data is in other languages."),
ChatMessage.from_user("Tell me about {{location}}")]
pipe.run(data={"prompt_builder": {"template_variables":{"location": location}, "prompt_source": messages}})
```
**Error message**
```
AttributeError Traceback (most recent call last)
[<ipython-input-6-4084a601c8bf>](https://localhost:8080/#) in <cell line: 13>()
11 pipe = Pipeline()
12 pipe.add_component("prompt_builder", prompt_builder)
---> 13 pipe.add_component("llm", llm)
14 pipe.connect("prompt_builder.prompt", "llm.messages")
15 location = "Berlin"
[/usr/local/lib/python3.10/dist-packages/haystack/core/pipeline/pipeline.py](https://localhost:8080/#) in add_component(self, name, instance)
291 name,
292 instance=instance,
--> 293 input_sockets=instance.__haystack_input__._sockets_dict, # type: ignore[attr-defined]
294 output_sockets=instance.__haystack_output__._sockets_dict, # type: ignore[attr-defined]
295 visits=0,
AttributeError: 'HuggingFaceTGIChatGenerator' object has no attribute '__haystack_input__'
```
**System:**
- Haystack version (commit or version number): 2.0.0
| [
{
"content": "from dataclasses import asdict\nfrom typing import Any, Callable, Dict, Iterable, List, Optional\nfrom urllib.parse import urlparse\n\nfrom haystack import component, default_from_dict, default_to_dict, logging\nfrom haystack.dataclasses import ChatMessage, StreamingChunk\nfrom haystack.lazy_imports import LazyImport\nfrom haystack.utils import Secret, deserialize_callable, deserialize_secrets_inplace, serialize_callable\nfrom haystack.utils.hf import HFModelType, check_generation_params, check_valid_model, list_inference_deployed_models\n\nwith LazyImport(message=\"Run 'pip install transformers'\") as transformers_import:\n from huggingface_hub import InferenceClient\n from huggingface_hub.inference._text_generation import TextGenerationResponse, TextGenerationStreamResponse, Token\n from transformers import AutoTokenizer\n\nlogger = logging.getLogger(__name__)\n\n\nclass HuggingFaceTGIChatGenerator:\n \"\"\"\n Enables text generation using HuggingFace Hub hosted chat-based LLMs. This component is designed to seamlessly\n inference chat-based models deployed on the Text Generation Inference (TGI) backend.\n\n You can use this component for chat LLMs hosted on Hugging Face inference endpoints, the rate-limited\n Inference API tier.\n\n Key Features and Compatibility:\n - Primary Compatibility: designed to work seamlessly with any chat-based model deployed using the TGI\n framework. For more information on TGI, visit [text-generation-inference](https://github.com/huggingface/text-generation-inference)\n - Hugging Face Inference Endpoints: Supports inference of TGI chat LLMs deployed on Hugging Face\n inference endpoints. For more details, refer to [inference-endpoints](https://huggingface.co/inference-endpoints)\n\n - Inference API Support: supports inference of TGI chat LLMs hosted on the rate-limited Inference\n API tier. Learn more about the Inference API at [inference-api](https://huggingface.co/inference-api).\n Discover available chat models using the following command: `wget -qO- https://api-inference.huggingface.co/framework/text-generation-inference | grep chat`\n and simply use the model ID as the model parameter for this component. You'll also need to provide a valid\n Hugging Face API token as the token parameter.\n\n - Custom TGI Endpoints: supports inference of TGI chat LLMs deployed on custom TGI endpoints. Anyone can\n deploy their own TGI endpoint using the TGI framework. For more details, refer to [inference-endpoints](https://huggingface.co/inference-endpoints)\n\n Input and Output Format:\n - ChatMessage Format: This component uses the ChatMessage format to structure both input and output,\n ensuring coherent and contextually relevant responses in chat-based text generation scenarios. 
Details on the\n ChatMessage format can be found [here](https://docs.haystack.deepset.ai/v2.0/docs/data-classes#chatmessage).\n\n\n ```python\n from haystack.components.generators.chat import HuggingFaceTGIChatGenerator\n from haystack.dataclasses import ChatMessage\n from haystack.utils import Secret\n\n messages = [ChatMessage.from_system(\"\\\\nYou are a helpful, respectful and honest assistant\"),\n ChatMessage.from_user(\"What's Natural Language Processing?\")]\n\n\n client = HuggingFaceTGIChatGenerator(model=\"HuggingFaceH4/zephyr-7b-beta\", token=Secret.from_token(\"<your-api-key>\"))\n client.warm_up()\n response = client.run(messages, generation_kwargs={\"max_new_tokens\": 120})\n print(response)\n ```\n\n For chat LLMs hosted on paid https://huggingface.co/inference-endpoints endpoint and/or your own custom TGI\n endpoint, you'll need to provide the URL of the endpoint as well as a valid token:\n\n ```python\n from haystack.components.generators.chat import HuggingFaceTGIChatGenerator\n from haystack.dataclasses import ChatMessage\n\n messages = [ChatMessage.from_system(\"\\\\nYou are a helpful, respectful and honest assistant\"),\n ChatMessage.from_user(\"What's Natural Language Processing?\")]\n\n client = HuggingFaceTGIChatGenerator(model=\"HuggingFaceH4/zephyr-7b-beta\",\n url=\"<your-tgi-endpoint-url>\",\n token=Secret.from_token(\"<your-api-key>\"))\n client.warm_up()\n response = client.run(messages, generation_kwargs={\"max_new_tokens\": 120})\n print(response)\n ```\n \"\"\"\n\n def __init__(\n self,\n model: str = \"HuggingFaceH4/zephyr-7b-beta\",\n url: Optional[str] = None,\n token: Optional[Secret] = Secret.from_env_var(\"HF_API_TOKEN\", strict=False),\n chat_template: Optional[str] = None,\n generation_kwargs: Optional[Dict[str, Any]] = None,\n stop_words: Optional[List[str]] = None,\n streaming_callback: Optional[Callable[[StreamingChunk], None]] = None,\n ):\n \"\"\"\n Initialize the HuggingFaceTGIChatGenerator instance.\n\n :param model: A string representing the model path or URL. Default is \"HuggingFaceH4/zephyr-7b-beta\".\n :param url: An optional string representing the URL of the TGI endpoint.\n :param chat_template: This optional parameter allows you to specify a Jinja template for formatting chat\n messages. While high-quality and well-supported chat models typically include their own chat templates\n accessible through their tokenizer, there are models that do not offer this feature. 
For such scenarios,\n or if you wish to use a custom template instead of the model's default, you can use this parameter to\n set your preferred chat template.\n :param token: The Hugging Face token for HTTP bearer authorization.\n You can find your HF token at https://huggingface.co/settings/tokens.\n :param generation_kwargs: A dictionary containing keyword arguments to customize text generation.\n Some examples: `max_new_tokens`, `temperature`, `top_k`, `top_p`,...\n See Hugging Face's [documentation](https://huggingface.co/docs/huggingface_hub/v0.18.0.rc0/en/package_reference/inference_client#huggingface_hub.inference._text_generation.TextGenerationParameters)\n for more information.\n :param stop_words: An optional list of strings representing the stop words.\n :param streaming_callback: An optional callable for handling streaming responses.\n \"\"\"\n transformers_import.check()\n\n if url:\n r = urlparse(url)\n is_valid_url = all([r.scheme in [\"http\", \"https\"], r.netloc])\n if not is_valid_url:\n raise ValueError(f\"Invalid TGI endpoint URL provided: {url}\")\n\n check_valid_model(model, HFModelType.GENERATION, token)\n\n # handle generation kwargs setup\n generation_kwargs = generation_kwargs.copy() if generation_kwargs else {}\n check_generation_params(generation_kwargs, [\"n\"])\n generation_kwargs[\"stop_sequences\"] = generation_kwargs.get(\"stop_sequences\", [])\n generation_kwargs[\"stop_sequences\"].extend(stop_words or [])\n generation_kwargs.setdefault(\"max_new_tokens\", 512)\n\n self.model = model\n self.url = url\n self.chat_template = chat_template\n self.token = token\n self.generation_kwargs = generation_kwargs\n self.client = InferenceClient(url or model, token=token.resolve_value() if token else None)\n self.streaming_callback = streaming_callback\n self.tokenizer = None\n\n def warm_up(self) -> None:\n \"\"\"\n If the url is not provided, check if the model is deployed on the free tier of the HF inference API.\n Load the tokenizer\n \"\"\"\n\n # is this user using HF free tier inference API?\n if self.model and not self.url:\n deployed_models = list_inference_deployed_models()\n # Determine if the specified model is deployed in the free tier.\n if self.model not in deployed_models:\n raise ValueError(\n f\"The model {self.model} is not deployed on the free tier of the HF inference API. \"\n \"To use free tier models provide the model ID and the token. Valid models are: \"\n f\"{deployed_models}\"\n )\n\n self.tokenizer = AutoTokenizer.from_pretrained(\n self.model, token=self.token.resolve_value() if self.token else None\n )\n\n # mypy can't infer that chat_template attribute exists on the object returned by AutoTokenizer.from_pretrained\n chat_template = getattr(self.tokenizer, \"chat_template\", None)\n if not chat_template and not self.chat_template:\n logger.warning(\n \"The model '{model}' doesn't have a default chat_template, and no chat_template was supplied during \"\n \"this component's initialization. 
It’s possible that the model doesn't support ChatML inference \"\n \"format, potentially leading to unexpected behavior.\",\n model=self.model,\n )\n\n def to_dict(self) -> Dict[str, Any]:\n \"\"\"\n Serialize this component to a dictionary.\n\n :return: A dictionary containing the serialized component.\n \"\"\"\n callback_name = serialize_callable(self.streaming_callback) if self.streaming_callback else None\n return default_to_dict(\n self,\n model=self.model,\n url=self.url,\n chat_template=self.chat_template,\n token=self.token.to_dict() if self.token else None,\n generation_kwargs=self.generation_kwargs,\n streaming_callback=callback_name,\n )\n\n @classmethod\n def from_dict(cls, data: Dict[str, Any]) -> \"HuggingFaceTGIChatGenerator\":\n \"\"\"\n Deserialize this component from a dictionary.\n \"\"\"\n deserialize_secrets_inplace(data[\"init_parameters\"], keys=[\"token\"])\n init_params = data.get(\"init_parameters\", {})\n serialized_callback_handler = init_params.get(\"streaming_callback\")\n if serialized_callback_handler:\n data[\"init_parameters\"][\"streaming_callback\"] = deserialize_callable(serialized_callback_handler)\n return default_from_dict(cls, data)\n\n def _get_telemetry_data(self) -> Dict[str, Any]:\n \"\"\"\n Data that is sent to Posthog for usage analytics.\n \"\"\"\n # Don't send URL as it is sensitive information\n return {\"model\": self.model}\n\n @component.output_types(replies=List[ChatMessage])\n def run(self, messages: List[ChatMessage], generation_kwargs: Optional[Dict[str, Any]] = None):\n \"\"\"\n Invoke the text generation inference based on the provided messages and generation parameters.\n\n :param messages: A list of ChatMessage instances representing the input messages.\n :param generation_kwargs: Additional keyword arguments for text generation.\n :return: A list containing the generated responses as ChatMessage instances.\n \"\"\"\n\n # check generation kwargs given as parameters to override the default ones\n additional_params = [\"n\", \"stop_words\"]\n check_generation_params(generation_kwargs, additional_params)\n\n # update generation kwargs by merging with the default ones\n generation_kwargs = {**self.generation_kwargs, **(generation_kwargs or {})}\n num_responses = generation_kwargs.pop(\"n\", 1)\n\n # merge stop_words and stop_sequences into a single list\n generation_kwargs[\"stop_sequences\"] = generation_kwargs.get(\"stop_sequences\", [])\n generation_kwargs[\"stop_sequences\"].extend(generation_kwargs.pop(\"stop_words\", []))\n\n if self.tokenizer is None:\n raise RuntimeError(\"Please call warm_up() before running LLM inference.\")\n\n # apply either model's chat template or the user-provided one\n prepared_prompt: str = self.tokenizer.apply_chat_template(\n conversation=messages, chat_template=self.chat_template, tokenize=False\n )\n prompt_token_count: int = len(self.tokenizer.encode(prepared_prompt, add_special_tokens=False))\n\n if self.streaming_callback:\n if num_responses > 1:\n raise ValueError(\"Cannot stream multiple responses, please set n=1.\")\n\n return self._run_streaming(prepared_prompt, prompt_token_count, generation_kwargs)\n\n return self._run_non_streaming(prepared_prompt, prompt_token_count, num_responses, generation_kwargs)\n\n def _run_streaming(\n self, prepared_prompt: str, prompt_token_count: int, generation_kwargs: Dict[str, Any]\n ) -> Dict[str, List[ChatMessage]]:\n res: Iterable[TextGenerationStreamResponse] = self.client.text_generation(\n prepared_prompt, stream=True, details=True, 
**generation_kwargs\n )\n chunk = None\n # pylint: disable=not-an-iterable\n for chunk in res:\n token: Token = chunk.token\n if token.special:\n continue\n chunk_metadata = {**asdict(token), **(asdict(chunk.details) if chunk.details else {})}\n stream_chunk = StreamingChunk(token.text, chunk_metadata)\n self.streaming_callback(stream_chunk) # type: ignore # streaming_callback is not None (verified in the run method)\n\n message = ChatMessage.from_assistant(chunk.generated_text)\n message.meta.update(\n {\n \"finish_reason\": chunk.details.finish_reason.value,\n \"index\": 0,\n \"model\": self.client.model,\n \"usage\": {\n \"completion_tokens\": chunk.details.generated_tokens,\n \"prompt_tokens\": prompt_token_count,\n \"total_tokens\": prompt_token_count + chunk.details.generated_tokens,\n },\n }\n )\n return {\"replies\": [message]}\n\n def _run_non_streaming(\n self, prepared_prompt: str, prompt_token_count: int, num_responses: int, generation_kwargs: Dict[str, Any]\n ) -> Dict[str, List[ChatMessage]]:\n chat_messages: List[ChatMessage] = []\n for _i in range(num_responses):\n tgr: TextGenerationResponse = self.client.text_generation(\n prepared_prompt, details=True, **generation_kwargs\n )\n message = ChatMessage.from_assistant(tgr.generated_text)\n message.meta.update(\n {\n \"finish_reason\": tgr.details.finish_reason.value,\n \"index\": _i,\n \"model\": self.client.model,\n \"usage\": {\n \"completion_tokens\": len(tgr.details.tokens),\n \"prompt_tokens\": prompt_token_count,\n \"total_tokens\": prompt_token_count + len(tgr.details.tokens),\n },\n }\n )\n chat_messages.append(message)\n return {\"replies\": chat_messages}\n",
"path": "haystack/components/generators/chat/hugging_face_tgi.py"
}
] | [
{
"content": "from dataclasses import asdict\nfrom typing import Any, Callable, Dict, Iterable, List, Optional\nfrom urllib.parse import urlparse\n\nfrom haystack import component, default_from_dict, default_to_dict, logging\nfrom haystack.dataclasses import ChatMessage, StreamingChunk\nfrom haystack.lazy_imports import LazyImport\nfrom haystack.utils import Secret, deserialize_callable, deserialize_secrets_inplace, serialize_callable\nfrom haystack.utils.hf import HFModelType, check_generation_params, check_valid_model, list_inference_deployed_models\n\nwith LazyImport(message=\"Run 'pip install transformers'\") as transformers_import:\n from huggingface_hub import InferenceClient\n from huggingface_hub.inference._text_generation import TextGenerationResponse, TextGenerationStreamResponse, Token\n from transformers import AutoTokenizer\n\nlogger = logging.getLogger(__name__)\n\n\n@component\nclass HuggingFaceTGIChatGenerator:\n \"\"\"\n Enables text generation using HuggingFace Hub hosted chat-based LLMs. This component is designed to seamlessly\n inference chat-based models deployed on the Text Generation Inference (TGI) backend.\n\n You can use this component for chat LLMs hosted on Hugging Face inference endpoints, the rate-limited\n Inference API tier.\n\n Key Features and Compatibility:\n - Primary Compatibility: designed to work seamlessly with any chat-based model deployed using the TGI\n framework. For more information on TGI, visit [text-generation-inference](https://github.com/huggingface/text-generation-inference)\n - Hugging Face Inference Endpoints: Supports inference of TGI chat LLMs deployed on Hugging Face\n inference endpoints. For more details, refer to [inference-endpoints](https://huggingface.co/inference-endpoints)\n\n - Inference API Support: supports inference of TGI chat LLMs hosted on the rate-limited Inference\n API tier. Learn more about the Inference API at [inference-api](https://huggingface.co/inference-api).\n Discover available chat models using the following command: `wget -qO- https://api-inference.huggingface.co/framework/text-generation-inference | grep chat`\n and simply use the model ID as the model parameter for this component. You'll also need to provide a valid\n Hugging Face API token as the token parameter.\n\n - Custom TGI Endpoints: supports inference of TGI chat LLMs deployed on custom TGI endpoints. Anyone can\n deploy their own TGI endpoint using the TGI framework. For more details, refer to [inference-endpoints](https://huggingface.co/inference-endpoints)\n\n Input and Output Format:\n - ChatMessage Format: This component uses the ChatMessage format to structure both input and output,\n ensuring coherent and contextually relevant responses in chat-based text generation scenarios. 
Details on the\n ChatMessage format can be found [here](https://docs.haystack.deepset.ai/v2.0/docs/data-classes#chatmessage).\n\n\n ```python\n from haystack.components.generators.chat import HuggingFaceTGIChatGenerator\n from haystack.dataclasses import ChatMessage\n from haystack.utils import Secret\n\n messages = [ChatMessage.from_system(\"\\\\nYou are a helpful, respectful and honest assistant\"),\n ChatMessage.from_user(\"What's Natural Language Processing?\")]\n\n\n client = HuggingFaceTGIChatGenerator(model=\"HuggingFaceH4/zephyr-7b-beta\", token=Secret.from_token(\"<your-api-key>\"))\n client.warm_up()\n response = client.run(messages, generation_kwargs={\"max_new_tokens\": 120})\n print(response)\n ```\n\n For chat LLMs hosted on paid https://huggingface.co/inference-endpoints endpoint and/or your own custom TGI\n endpoint, you'll need to provide the URL of the endpoint as well as a valid token:\n\n ```python\n from haystack.components.generators.chat import HuggingFaceTGIChatGenerator\n from haystack.dataclasses import ChatMessage\n\n messages = [ChatMessage.from_system(\"\\\\nYou are a helpful, respectful and honest assistant\"),\n ChatMessage.from_user(\"What's Natural Language Processing?\")]\n\n client = HuggingFaceTGIChatGenerator(model=\"HuggingFaceH4/zephyr-7b-beta\",\n url=\"<your-tgi-endpoint-url>\",\n token=Secret.from_token(\"<your-api-key>\"))\n client.warm_up()\n response = client.run(messages, generation_kwargs={\"max_new_tokens\": 120})\n print(response)\n ```\n \"\"\"\n\n def __init__(\n self,\n model: str = \"HuggingFaceH4/zephyr-7b-beta\",\n url: Optional[str] = None,\n token: Optional[Secret] = Secret.from_env_var(\"HF_API_TOKEN\", strict=False),\n chat_template: Optional[str] = None,\n generation_kwargs: Optional[Dict[str, Any]] = None,\n stop_words: Optional[List[str]] = None,\n streaming_callback: Optional[Callable[[StreamingChunk], None]] = None,\n ):\n \"\"\"\n Initialize the HuggingFaceTGIChatGenerator instance.\n\n :param model: A string representing the model path or URL. Default is \"HuggingFaceH4/zephyr-7b-beta\".\n :param url: An optional string representing the URL of the TGI endpoint.\n :param chat_template: This optional parameter allows you to specify a Jinja template for formatting chat\n messages. While high-quality and well-supported chat models typically include their own chat templates\n accessible through their tokenizer, there are models that do not offer this feature. 
For such scenarios,\n or if you wish to use a custom template instead of the model's default, you can use this parameter to\n set your preferred chat template.\n :param token: The Hugging Face token for HTTP bearer authorization.\n You can find your HF token at https://huggingface.co/settings/tokens.\n :param generation_kwargs: A dictionary containing keyword arguments to customize text generation.\n Some examples: `max_new_tokens`, `temperature`, `top_k`, `top_p`,...\n See Hugging Face's [documentation](https://huggingface.co/docs/huggingface_hub/v0.18.0.rc0/en/package_reference/inference_client#huggingface_hub.inference._text_generation.TextGenerationParameters)\n for more information.\n :param stop_words: An optional list of strings representing the stop words.\n :param streaming_callback: An optional callable for handling streaming responses.\n \"\"\"\n transformers_import.check()\n\n if url:\n r = urlparse(url)\n is_valid_url = all([r.scheme in [\"http\", \"https\"], r.netloc])\n if not is_valid_url:\n raise ValueError(f\"Invalid TGI endpoint URL provided: {url}\")\n\n check_valid_model(model, HFModelType.GENERATION, token)\n\n # handle generation kwargs setup\n generation_kwargs = generation_kwargs.copy() if generation_kwargs else {}\n check_generation_params(generation_kwargs, [\"n\"])\n generation_kwargs[\"stop_sequences\"] = generation_kwargs.get(\"stop_sequences\", [])\n generation_kwargs[\"stop_sequences\"].extend(stop_words or [])\n generation_kwargs.setdefault(\"max_new_tokens\", 512)\n\n self.model = model\n self.url = url\n self.chat_template = chat_template\n self.token = token\n self.generation_kwargs = generation_kwargs\n self.client = InferenceClient(url or model, token=token.resolve_value() if token else None)\n self.streaming_callback = streaming_callback\n self.tokenizer = None\n\n def warm_up(self) -> None:\n \"\"\"\n If the url is not provided, check if the model is deployed on the free tier of the HF inference API.\n Load the tokenizer\n \"\"\"\n\n # is this user using HF free tier inference API?\n if self.model and not self.url:\n deployed_models = list_inference_deployed_models()\n # Determine if the specified model is deployed in the free tier.\n if self.model not in deployed_models:\n raise ValueError(\n f\"The model {self.model} is not deployed on the free tier of the HF inference API. \"\n \"To use free tier models provide the model ID and the token. Valid models are: \"\n f\"{deployed_models}\"\n )\n\n self.tokenizer = AutoTokenizer.from_pretrained(\n self.model, token=self.token.resolve_value() if self.token else None\n )\n\n # mypy can't infer that chat_template attribute exists on the object returned by AutoTokenizer.from_pretrained\n chat_template = getattr(self.tokenizer, \"chat_template\", None)\n if not chat_template and not self.chat_template:\n logger.warning(\n \"The model '{model}' doesn't have a default chat_template, and no chat_template was supplied during \"\n \"this component's initialization. 
It’s possible that the model doesn't support ChatML inference \"\n \"format, potentially leading to unexpected behavior.\",\n model=self.model,\n )\n\n def to_dict(self) -> Dict[str, Any]:\n \"\"\"\n Serialize this component to a dictionary.\n\n :return: A dictionary containing the serialized component.\n \"\"\"\n callback_name = serialize_callable(self.streaming_callback) if self.streaming_callback else None\n return default_to_dict(\n self,\n model=self.model,\n url=self.url,\n chat_template=self.chat_template,\n token=self.token.to_dict() if self.token else None,\n generation_kwargs=self.generation_kwargs,\n streaming_callback=callback_name,\n )\n\n @classmethod\n def from_dict(cls, data: Dict[str, Any]) -> \"HuggingFaceTGIChatGenerator\":\n \"\"\"\n Deserialize this component from a dictionary.\n \"\"\"\n deserialize_secrets_inplace(data[\"init_parameters\"], keys=[\"token\"])\n init_params = data.get(\"init_parameters\", {})\n serialized_callback_handler = init_params.get(\"streaming_callback\")\n if serialized_callback_handler:\n data[\"init_parameters\"][\"streaming_callback\"] = deserialize_callable(serialized_callback_handler)\n return default_from_dict(cls, data)\n\n def _get_telemetry_data(self) -> Dict[str, Any]:\n \"\"\"\n Data that is sent to Posthog for usage analytics.\n \"\"\"\n # Don't send URL as it is sensitive information\n return {\"model\": self.model}\n\n @component.output_types(replies=List[ChatMessage])\n def run(self, messages: List[ChatMessage], generation_kwargs: Optional[Dict[str, Any]] = None):\n \"\"\"\n Invoke the text generation inference based on the provided messages and generation parameters.\n\n :param messages: A list of ChatMessage instances representing the input messages.\n :param generation_kwargs: Additional keyword arguments for text generation.\n :return: A list containing the generated responses as ChatMessage instances.\n \"\"\"\n\n # check generation kwargs given as parameters to override the default ones\n additional_params = [\"n\", \"stop_words\"]\n check_generation_params(generation_kwargs, additional_params)\n\n # update generation kwargs by merging with the default ones\n generation_kwargs = {**self.generation_kwargs, **(generation_kwargs or {})}\n num_responses = generation_kwargs.pop(\"n\", 1)\n\n # merge stop_words and stop_sequences into a single list\n generation_kwargs[\"stop_sequences\"] = generation_kwargs.get(\"stop_sequences\", [])\n generation_kwargs[\"stop_sequences\"].extend(generation_kwargs.pop(\"stop_words\", []))\n\n if self.tokenizer is None:\n raise RuntimeError(\"Please call warm_up() before running LLM inference.\")\n\n # apply either model's chat template or the user-provided one\n prepared_prompt: str = self.tokenizer.apply_chat_template(\n conversation=messages, chat_template=self.chat_template, tokenize=False\n )\n prompt_token_count: int = len(self.tokenizer.encode(prepared_prompt, add_special_tokens=False))\n\n if self.streaming_callback:\n if num_responses > 1:\n raise ValueError(\"Cannot stream multiple responses, please set n=1.\")\n\n return self._run_streaming(prepared_prompt, prompt_token_count, generation_kwargs)\n\n return self._run_non_streaming(prepared_prompt, prompt_token_count, num_responses, generation_kwargs)\n\n def _run_streaming(\n self, prepared_prompt: str, prompt_token_count: int, generation_kwargs: Dict[str, Any]\n ) -> Dict[str, List[ChatMessage]]:\n res: Iterable[TextGenerationStreamResponse] = self.client.text_generation(\n prepared_prompt, stream=True, details=True, 
**generation_kwargs\n )\n chunk = None\n # pylint: disable=not-an-iterable\n for chunk in res:\n token: Token = chunk.token\n if token.special:\n continue\n chunk_metadata = {**asdict(token), **(asdict(chunk.details) if chunk.details else {})}\n stream_chunk = StreamingChunk(token.text, chunk_metadata)\n self.streaming_callback(stream_chunk) # type: ignore # streaming_callback is not None (verified in the run method)\n\n message = ChatMessage.from_assistant(chunk.generated_text)\n message.meta.update(\n {\n \"finish_reason\": chunk.details.finish_reason.value,\n \"index\": 0,\n \"model\": self.client.model,\n \"usage\": {\n \"completion_tokens\": chunk.details.generated_tokens,\n \"prompt_tokens\": prompt_token_count,\n \"total_tokens\": prompt_token_count + chunk.details.generated_tokens,\n },\n }\n )\n return {\"replies\": [message]}\n\n def _run_non_streaming(\n self, prepared_prompt: str, prompt_token_count: int, num_responses: int, generation_kwargs: Dict[str, Any]\n ) -> Dict[str, List[ChatMessage]]:\n chat_messages: List[ChatMessage] = []\n for _i in range(num_responses):\n tgr: TextGenerationResponse = self.client.text_generation(\n prepared_prompt, details=True, **generation_kwargs\n )\n message = ChatMessage.from_assistant(tgr.generated_text)\n message.meta.update(\n {\n \"finish_reason\": tgr.details.finish_reason.value,\n \"index\": _i,\n \"model\": self.client.model,\n \"usage\": {\n \"completion_tokens\": len(tgr.details.tokens),\n \"prompt_tokens\": prompt_token_count,\n \"total_tokens\": prompt_token_count + len(tgr.details.tokens),\n },\n }\n )\n chat_messages.append(message)\n return {\"replies\": chat_messages}\n",
"path": "haystack/components/generators/chat/hugging_face_tgi.py"
}
] | diff --git a/haystack/components/generators/chat/hugging_face_tgi.py b/haystack/components/generators/chat/hugging_face_tgi.py
index 3e388a008d..95adbe792d 100644
--- a/haystack/components/generators/chat/hugging_face_tgi.py
+++ b/haystack/components/generators/chat/hugging_face_tgi.py
@@ -16,6 +16,7 @@
logger = logging.getLogger(__name__)
+@component
class HuggingFaceTGIChatGenerator:
"""
Enables text generation using HuggingFace Hub hosted chat-based LLMs. This component is designed to seamlessly
diff --git a/releasenotes/notes/tgi-chat-missing-decorator-799b2a133ee4708c.yaml b/releasenotes/notes/tgi-chat-missing-decorator-799b2a133ee4708c.yaml
new file mode 100644
index 0000000000..5437a83b5d
--- /dev/null
+++ b/releasenotes/notes/tgi-chat-missing-decorator-799b2a133ee4708c.yaml
@@ -0,0 +1,5 @@
+---
+fixes:
+ - |
+ Add the `@component` decorator to `HuggingFaceTGIChatGenerator`.
+ The lack of this decorator made it impossible to use the `HuggingFaceTGIChatGenerator` in a pipeline.
|
falconry__falcon-981 | Doc site: On small screen height, sidebar ("Navigation") clips at bottom.
Using a laptop with a screen height of 768 pixels.

| [
{
"content": "# -*- coding: utf-8 -*-\n#\n# Falcon documentation build configuration file, created by\n# sphinx-quickstart on Wed Mar 12 14:14:02 2014.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nimport os\n\ntry:\n import configparser\nexcept ImportError:\n import ConfigParser as configparser\n\nimport falcon\n\n# on_rtd is whether we are on readthedocs.org\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('..'))\nsys.path.insert(0, os.path.abspath('.'))\n\n# Path to custom themes\nsys.path.append(os.path.abspath('_themes'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.napoleon',\n\n # Falcon-specific extensions\n 'ext.rfc',\n 'ext.doorway',\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Falcon'\ncopyright = u\"2016 Falcon Contributors | Logo based on a <a href=https://commons.wikimedia.org/wiki/File:Brown-Falcon,-Vic,-3.1.2008.jpg>photograph by John O'Neill</a>\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n\ncfg = configparser.SafeConfigParser()\ncfg.read('../setup.cfg')\ntag = cfg.get('egg_info', 'tag_build')\n\nhtml_context = {\n 'prerelease': bool(tag), # True if tag is not the empty string\n}\n\n# The short X.Y version.\nversion = '.'.join(falcon.__version__.split('.')[0:2]) + tag\n\n# The full version, including alpha/beta/rc tags.\nrelease = falcon.__version__ + tag\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. 
function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\n# pygments_style = 'flask_theme_support.FlaskyStyle'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# Add any paths that contain custom themes here, relative to this directory.\n# html_theme_path = ['_themes']\n# html_theme = ''\n\nhtml_theme = 'alabaster'\n\n# if not on_rtd:\n# # Use the RTD theme explicitly if it is available\n# try:\n# import sphinx_rtd_theme\n\n# html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n# html_theme = \"sphinx_rtd_theme\"\n# except ImportError:\n# pass\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n 'github_user': 'falconry',\n 'github_repo': 'falcon',\n 'github_button': False,\n 'github_banner': True,\n 'fixed_sidebar': True,\n 'show_powered_by': False,\n 'extra_nav_links': {\n 'Falcon Home': 'http://falconframework.org/',\n 'Get Help': 'community/help.html',\n },\n}\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n# html_logo = '../falcon.png'\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = '_static/img/favicon.ico'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. 
These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n# html_sidebars = {\n# 'index': ['side-primary.html', 'searchbox.html'],\n# '**': ['side-secondary.html', 'localtoc.html',\n# 'relations.html', 'searchbox.html']\n# }\n\nhtml_sidebars = {\n '**': [\n 'sidebar-top.html',\n 'about.html',\n 'navigation.html',\n 'relations.html',\n 'searchbox.html',\n ]\n}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\nhtml_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Falcondoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n ('index', 'Falcon.tex', u'Falcon Documentation',\n u'Kurt Griffiths et al.', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'falcon', u'Falcon Documentation',\n [u'Kurt Griffiths et al.'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'Falcon', u'Falcon Documentation',\n u'Kurt Griffiths et al.', 'Falcon', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'http://docs.python.org/2': None}\n",
"path": "docs/conf.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n#\n# Falcon documentation build configuration file, created by\n# sphinx-quickstart on Wed Mar 12 14:14:02 2014.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nimport os\n\ntry:\n import configparser\nexcept ImportError:\n import ConfigParser as configparser\n\nimport falcon\n\n# on_rtd is whether we are on readthedocs.org\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('..'))\nsys.path.insert(0, os.path.abspath('.'))\n\n# Path to custom themes\nsys.path.append(os.path.abspath('_themes'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.napoleon',\n\n # Falcon-specific extensions\n 'ext.rfc',\n 'ext.doorway',\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Falcon'\ncopyright = u\"2016 Falcon Contributors | Logo based on a <a href=https://commons.wikimedia.org/wiki/File:Brown-Falcon,-Vic,-3.1.2008.jpg>photograph by John O'Neill</a>\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n\ncfg = configparser.SafeConfigParser()\ncfg.read('../setup.cfg')\ntag = cfg.get('egg_info', 'tag_build')\n\nhtml_context = {\n 'prerelease': bool(tag), # True if tag is not the empty string\n}\n\n# The short X.Y version.\nversion = '.'.join(falcon.__version__.split('.')[0:2]) + tag\n\n# The full version, including alpha/beta/rc tags.\nrelease = falcon.__version__ + tag\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. 
function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\n# pygments_style = 'flask_theme_support.FlaskyStyle'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# Add any paths that contain custom themes here, relative to this directory.\n# html_theme_path = ['_themes']\n# html_theme = ''\n\nhtml_theme = 'alabaster'\n\n# if not on_rtd:\n# # Use the RTD theme explicitly if it is available\n# try:\n# import sphinx_rtd_theme\n\n# html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n# html_theme = \"sphinx_rtd_theme\"\n# except ImportError:\n# pass\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n 'github_user': 'falconry',\n 'github_repo': 'falcon',\n 'github_button': False,\n 'github_banner': True,\n 'fixed_sidebar': False,\n 'show_powered_by': False,\n 'extra_nav_links': {\n 'Falcon Home': 'http://falconframework.org/',\n 'Get Help': 'community/help.html',\n },\n}\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n# html_logo = '../falcon.png'\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = '_static/img/favicon.ico'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. 
These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n# html_sidebars = {\n# 'index': ['side-primary.html', 'searchbox.html'],\n# '**': ['side-secondary.html', 'localtoc.html',\n# 'relations.html', 'searchbox.html']\n# }\n\nhtml_sidebars = {\n '**': [\n 'sidebar-top.html',\n 'about.html',\n 'navigation.html',\n 'relations.html',\n 'searchbox.html',\n ]\n}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\nhtml_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Falcondoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n ('index', 'Falcon.tex', u'Falcon Documentation',\n u'Kurt Griffiths et al.', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'falcon', u'Falcon Documentation',\n [u'Kurt Griffiths et al.'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'Falcon', u'Falcon Documentation',\n u'Kurt Griffiths et al.', 'Falcon', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'http://docs.python.org/2': None}\n",
"path": "docs/conf.py"
}
] | diff --git a/docs/_static/custom.css b/docs/_static/custom.css
index 34a4ad865..67d13d9e8 100644
--- a/docs/_static/custom.css
+++ b/docs/_static/custom.css
@@ -14,6 +14,10 @@
display: none;
}
+#logo {
+ position: relative;
+}
+
#logo a,
#logo a:hover {
border-bottom: none;
@@ -32,12 +36,12 @@
font-family: "Amethysta", "goudy old style", serif;
font-weight: bold;
- font-size: 18pt;
+ font-size: 16pt;
color: #444;
position: absolute;
- left: 9px;
- top: 2px;
+ left: 10px;
+ top: -14px;
/*margin: -4px -4px 0 0;*/
}
diff --git a/docs/conf.py b/docs/conf.py
index ad11a5e65..059a916da 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -152,7 +152,7 @@
'github_repo': 'falcon',
'github_button': False,
'github_banner': True,
- 'fixed_sidebar': True,
+ 'fixed_sidebar': False,
'show_powered_by': False,
'extra_nav_links': {
'Falcon Home': 'http://falconframework.org/',
|
yt-project__yt-4463 | BUG: setting a boolean parameter via the command line breaks the runtime
### Bug report
**Bug summary**

Running `yt config set` with a boolean value spelled `true` passes it through as the string `"true"` instead of casting it to `bool`, so the type check on the existing `colored_logs` entry rejects the assignment (see the traceback below).
**Code for reproduction**
```shell
$ yt config set --local yt colored_logs true && python -c "import yt"
```
**Actual outcome**
```python-traceback
Traceback (most recent call last):
File "/Users/robcleme/.pyenv/versions/yt-dev/bin/yt", line 8, in <module>
sys.exit(run_main())
^^^^^^^^^^
File "/Users/robcleme/dev/yt-project/yt/yt/utilities/command_line.py", line 1615, in run_main
args.func(args)
File "/Users/robcleme/dev/yt-project/yt/yt/utilities/command_line.py", line 228, in run
self(args)
File "/Users/robcleme/dev/yt-project/yt/yt/utilities/command_line.py", line 1402, in __call__
set_config(args.section, args.option, args.value, self.config_file)
File "/Users/robcleme/dev/yt-project/yt/yt/utilities/configure.py", line 195, in set_config
CONFIG.set(section, *option_path, _cast_value_helper(value))
File "/Users/robcleme/dev/yt-project/yt/yt/utilities/configure.py", line 79, in set
self.config_root.upsert_from_list(
File "/Users/robcleme/dev/yt-project/yt/yt/utilities/configuration_tree.py", line 54, in upsert_from_list
next_node.upsert_from_list(next_keys, value, extra_data)
File "/Users/robcleme/dev/yt-project/yt/yt/utilities/configuration_tree.py", line 46, in upsert_from_list
leaf.value = value
^^^^^^^^^^
File "/Users/robcleme/dev/yt-project/yt/yt/utilities/configuration_tree.py", line 187, in value
raise TypeError(msg)
TypeError: Error when setting yt.colored_logs.
Tried to assign a value of type <class 'str'>, expected type <class 'bool'>.
This entry was last modified in file: /Users/robcleme/dev/yt-project/yt/yt.toml.
```
One way to patch this would be to special-case `true` and `false` to be interpreted as booleans when received from the command line.
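A minimal sketch of that special-casing, assuming a boolean-casting helper along the lines of `_cast_bool_helper` in `yt/utilities/configure.py`; this is an illustration, not necessarily the exact patch:

```python
def _cast_bool_helper(value):
    # Sketch: accept the lowercase spellings that arrive from the command
    # line as well as the Python-style ones and real booleans.
    if value in ("true", "True", True):
        return True
    if value in ("false", "False", False):
        return False
    raise ValueError("Cannot safely cast to bool")
```

With this in place, a value such as `"true"` would resolve to `True` before the `int`/`float`/string fallbacks are tried.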
| [
{
"content": "import os\nimport sys\nimport warnings\nfrom pathlib import Path\nfrom typing import Callable, List\n\nimport tomli_w\nfrom more_itertools import always_iterable\n\nfrom yt.utilities.configuration_tree import ConfigLeaf, ConfigNode\n\nif sys.version_info >= (3, 11):\n import tomllib\nelse:\n import tomli as tomllib\n\nconfiguration_callbacks: List[Callable[[\"YTConfig\"], None]] = []\n\n\ndef config_dir():\n config_root = os.environ.get(\n \"XDG_CONFIG_HOME\", os.path.join(os.path.expanduser(\"~\"), \".config\")\n )\n conf_dir = os.path.join(config_root, \"yt\")\n return conf_dir\n\n\nclass YTConfig:\n def __init__(self, defaults=None):\n if defaults is None:\n defaults = {}\n self.config_root = ConfigNode(None)\n\n def get(self, section, *keys, callback=None):\n node_or_leaf = self.config_root.get(section, *keys)\n if isinstance(node_or_leaf, ConfigLeaf):\n if callback is not None:\n return callback(node_or_leaf)\n return node_or_leaf.value\n return node_or_leaf\n\n def get_most_specific(self, section, *keys, **kwargs):\n use_fallback = \"fallback\" in kwargs\n fallback = kwargs.pop(\"fallback\", None)\n try:\n return self.config_root.get_deepest_leaf(section, *keys)\n except KeyError as err:\n if use_fallback:\n return fallback\n else:\n raise err\n\n def update(self, new_values, metadata=None):\n if metadata is None:\n metadata = {}\n self.config_root.update(new_values, metadata)\n\n def has_section(self, section):\n try:\n self.config_root.get_child(section)\n return True\n except KeyError:\n return False\n\n def add_section(self, section):\n self.config_root.add_child(section)\n\n def remove_section(self, section):\n if self.has_section(section):\n self.config_root.remove_child(section)\n return True\n else:\n return False\n\n def set(self, *args, metadata=None):\n section, *keys, value = args\n if metadata is None:\n metadata = {\"source\": \"runtime\"}\n self.config_root.upsert_from_list(\n [section] + list(keys), value, extra_data=metadata\n )\n\n def remove(self, *args):\n self.config_root.pop_leaf(args)\n\n def read(self, file_names):\n file_names_read = []\n for fname in always_iterable(file_names):\n if not os.path.exists(fname):\n continue\n metadata = {\"source\": f\"file: {fname}\"}\n try:\n with open(fname, \"rb\") as fh:\n data = tomllib.load(fh)\n except tomllib.TOMLDecodeError as exc:\n warnings.warn(\n f\"Could not load configuration file {fname} (invalid TOML: {exc})\",\n stacklevel=2,\n )\n else:\n self.update(data, metadata=metadata)\n file_names_read.append(fname)\n\n return file_names_read\n\n def write(self, file_handler):\n value = self.config_root.as_dict()\n config_as_str = tomli_w.dumps(value)\n\n try:\n file_path = Path(file_handler)\n except TypeError:\n if not hasattr(file_handler, \"write\"):\n raise TypeError(\n f\"Expected a path to a file, or a writable object, got {file_handler}\"\n ) from None\n file_handler.write(config_as_str)\n else:\n pdir = file_path.parent\n if not pdir.exists():\n warnings.warn(\n f\"{pdir!s} does not exist, creating it (recursively)\", stacklevel=2\n )\n os.makedirs(pdir)\n file_path.write_text(config_as_str)\n\n @staticmethod\n def get_global_config_file():\n return os.path.join(config_dir(), \"yt.toml\")\n\n @staticmethod\n def get_local_config_file():\n path = Path.cwd()\n while path.parent is not path:\n candidate = path.joinpath(\"yt.toml\")\n if candidate.is_file():\n return os.path.abspath(candidate)\n else:\n path = path.parent\n\n return os.path.join(os.path.abspath(os.curdir), \"yt.toml\")\n\n def 
__setitem__(self, args, value):\n section, *keys = always_iterable(args)\n self.set(section, *keys, value, metadata=None)\n\n def __getitem__(self, key):\n section, *keys = always_iterable(key)\n return self.get(section, *keys)\n\n def __contains__(self, item):\n return item in self.config_root\n\n # Add support for IPython rich display\n # see https://ipython.readthedocs.io/en/stable/config/integrating.html\n def _repr_json_(self):\n return self.config_root._repr_json_()\n\n\nCONFIG = YTConfig()\n\n\ndef _cast_bool_helper(value):\n if value == \"True\":\n return True\n elif value == \"False\":\n return False\n else:\n raise ValueError(\"Cannot safely cast to bool\")\n\n\ndef _expand_all(s):\n return os.path.expandvars(os.path.expanduser(s))\n\n\ndef _cast_value_helper(value, types=(_cast_bool_helper, int, float, _expand_all)):\n for t in types:\n try:\n retval = t(value)\n return retval\n except ValueError:\n pass\n\n\ndef get_config(section, option):\n *option_path, option_name = option.split(\".\")\n return CONFIG.get(section, *option_path, option_name)\n\n\ndef set_config(section, option, value, config_file):\n if not CONFIG.has_section(section):\n CONFIG.add_section(section)\n\n option_path = option.split(\".\")\n CONFIG.set(section, *option_path, _cast_value_helper(value))\n write_config(config_file)\n\n\ndef write_config(config_file):\n CONFIG.write(config_file)\n\n\ndef rm_config(section, option, config_file):\n option_path = option.split(\".\")\n CONFIG.remove(section, *option_path)\n write_config(config_file)\n",
"path": "yt/utilities/configure.py"
}
] | [
{
"content": "import os\nimport sys\nimport warnings\nfrom pathlib import Path\nfrom typing import Callable, List\n\nimport tomli_w\nfrom more_itertools import always_iterable\n\nfrom yt.utilities.configuration_tree import ConfigLeaf, ConfigNode\n\nif sys.version_info >= (3, 11):\n import tomllib\nelse:\n import tomli as tomllib\n\nconfiguration_callbacks: List[Callable[[\"YTConfig\"], None]] = []\n\n\ndef config_dir():\n config_root = os.environ.get(\n \"XDG_CONFIG_HOME\", os.path.join(os.path.expanduser(\"~\"), \".config\")\n )\n conf_dir = os.path.join(config_root, \"yt\")\n return conf_dir\n\n\nclass YTConfig:\n def __init__(self, defaults=None):\n if defaults is None:\n defaults = {}\n self.config_root = ConfigNode(None)\n\n def get(self, section, *keys, callback=None):\n node_or_leaf = self.config_root.get(section, *keys)\n if isinstance(node_or_leaf, ConfigLeaf):\n if callback is not None:\n return callback(node_or_leaf)\n return node_or_leaf.value\n return node_or_leaf\n\n def get_most_specific(self, section, *keys, **kwargs):\n use_fallback = \"fallback\" in kwargs\n fallback = kwargs.pop(\"fallback\", None)\n try:\n return self.config_root.get_deepest_leaf(section, *keys)\n except KeyError as err:\n if use_fallback:\n return fallback\n else:\n raise err\n\n def update(self, new_values, metadata=None):\n if metadata is None:\n metadata = {}\n self.config_root.update(new_values, metadata)\n\n def has_section(self, section):\n try:\n self.config_root.get_child(section)\n return True\n except KeyError:\n return False\n\n def add_section(self, section):\n self.config_root.add_child(section)\n\n def remove_section(self, section):\n if self.has_section(section):\n self.config_root.remove_child(section)\n return True\n else:\n return False\n\n def set(self, *args, metadata=None):\n section, *keys, value = args\n if metadata is None:\n metadata = {\"source\": \"runtime\"}\n self.config_root.upsert_from_list(\n [section] + list(keys), value, extra_data=metadata\n )\n\n def remove(self, *args):\n self.config_root.pop_leaf(args)\n\n def read(self, file_names):\n file_names_read = []\n for fname in always_iterable(file_names):\n if not os.path.exists(fname):\n continue\n metadata = {\"source\": f\"file: {fname}\"}\n try:\n with open(fname, \"rb\") as fh:\n data = tomllib.load(fh)\n except tomllib.TOMLDecodeError as exc:\n warnings.warn(\n f\"Could not load configuration file {fname} (invalid TOML: {exc})\",\n stacklevel=2,\n )\n else:\n self.update(data, metadata=metadata)\n file_names_read.append(fname)\n\n return file_names_read\n\n def write(self, file_handler):\n value = self.config_root.as_dict()\n config_as_str = tomli_w.dumps(value)\n\n try:\n file_path = Path(file_handler)\n except TypeError:\n if not hasattr(file_handler, \"write\"):\n raise TypeError(\n f\"Expected a path to a file, or a writable object, got {file_handler}\"\n ) from None\n file_handler.write(config_as_str)\n else:\n pdir = file_path.parent\n if not pdir.exists():\n warnings.warn(\n f\"{pdir!s} does not exist, creating it (recursively)\", stacklevel=2\n )\n os.makedirs(pdir)\n file_path.write_text(config_as_str)\n\n @staticmethod\n def get_global_config_file():\n return os.path.join(config_dir(), \"yt.toml\")\n\n @staticmethod\n def get_local_config_file():\n path = Path.cwd()\n while path.parent is not path:\n candidate = path.joinpath(\"yt.toml\")\n if candidate.is_file():\n return os.path.abspath(candidate)\n else:\n path = path.parent\n\n return os.path.join(os.path.abspath(os.curdir), \"yt.toml\")\n\n def 
__setitem__(self, args, value):\n section, *keys = always_iterable(args)\n self.set(section, *keys, value, metadata=None)\n\n def __getitem__(self, key):\n section, *keys = always_iterable(key)\n return self.get(section, *keys)\n\n def __contains__(self, item):\n return item in self.config_root\n\n # Add support for IPython rich display\n # see https://ipython.readthedocs.io/en/stable/config/integrating.html\n def _repr_json_(self):\n return self.config_root._repr_json_()\n\n\nCONFIG = YTConfig()\n\n\ndef _cast_bool_helper(value):\n if value in (\"true\", \"True\", True):\n return True\n elif value in (\"false\", \"False\", False):\n return False\n else:\n raise ValueError(\"Cannot safely cast to bool\")\n\n\ndef _expand_all(s):\n return os.path.expandvars(os.path.expanduser(s))\n\n\ndef _cast_value_helper(value, types=(_cast_bool_helper, int, float, _expand_all)):\n for t in types:\n try:\n retval = t(value)\n return retval\n except ValueError:\n pass\n\n\ndef get_config(section, option):\n *option_path, option_name = option.split(\".\")\n return CONFIG.get(section, *option_path, option_name)\n\n\ndef set_config(section, option, value, config_file):\n if not CONFIG.has_section(section):\n CONFIG.add_section(section)\n\n option_path = option.split(\".\")\n CONFIG.set(section, *option_path, _cast_value_helper(value))\n write_config(config_file)\n\n\ndef write_config(config_file):\n CONFIG.write(config_file)\n\n\ndef rm_config(section, option, config_file):\n option_path = option.split(\".\")\n CONFIG.remove(section, *option_path)\n write_config(config_file)\n",
"path": "yt/utilities/configure.py"
}
] | diff --git a/yt/utilities/configure.py b/yt/utilities/configure.py
index 7894b63314f..64a034b7ee7 100644
--- a/yt/utilities/configure.py
+++ b/yt/utilities/configure.py
@@ -161,9 +161,9 @@ def _repr_json_(self):
def _cast_bool_helper(value):
- if value == "True":
+ if value in ("true", "True", True):
return True
- elif value == "False":
+ elif value in ("false", "False", False):
return False
else:
raise ValueError("Cannot safely cast to bool")
|
pretalx__pretalx-381 | installation crashes when there are no config files
## Current Behavior
```
$ cd pretalx
$ pip-3.6 install . --user
(...)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/tmp/pip-xa87l9tk-build/pretalx/settings.py", line 460, in <module>
plugins=PLUGINS
File "/tmp/pip-xa87l9tk-build/pretalx/common/settings/utils.py", line 11, in log_initial
(f'Read from: {", ".join(config_files)}', False),
TypeError: can only join an iterable
```
If there are no config files at all, the installation crashes because `config_files` is `None`.
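A small self-contained illustration of the failure mode and an obvious guard (paths shortened; not necessarily the exact pretalx code path):

```python
import configparser

config = configparser.RawConfigParser()
# When none of the candidate files exist, the value handed back as
# config_files can end up falsy/None in the surrounding pretalx code,
# and the startup log's ", ".join(config_files) then raises TypeError.
config_files = config.read(['/etc/pretalx/pretalx.cfg', 'pretalx.cfg'])
config_files = config_files or []  # guard: always return an iterable
print(f'Read from: {", ".join(config_files)}')
```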
## Your Environment
* Version used: master
* Operating System and version (desktop or mobile): FreeBSD
| [
{
"content": "import configparser\nimport os\nimport sys\n\nfrom pretalx.common.settings.utils import reduce_dict\n\nCONFIG = {\n 'filesystem': {\n 'base': {\n 'default': os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(__file__)))),\n },\n 'logs': {\n 'default': None,\n 'env': os.getenv('PRETALX_FILESYSTEM_LOGS'),\n },\n 'media': {\n 'default': None,\n 'env': os.getenv('PRETALX_FILESYSTEM_MEDIA'),\n },\n 'static': {\n 'default': None,\n 'env': os.getenv('PRETALX_FILESYSTEM_STATIC'),\n },\n },\n 'site': {\n 'debug': {\n 'default': 'runserver' in sys.argv,\n 'env': os.getenv('PRETALX_DEBUG'),\n },\n 'url': {\n 'default': 'http://localhost',\n 'env': os.getenv('PRETALX_SITE_URL'),\n },\n 'https': {\n 'env': os.getenv('PRETALX_HTTPS'),\n },\n 'cookie_domain': {\n 'default': '',\n 'env': os.getenv('PRETALX_COOKIE_DOMAIN'),\n },\n },\n 'database': {\n 'backend': {\n 'default': 'sqlite3',\n 'env': os.getenv('PRETALX_DB_TYPE'),\n },\n 'name': {\n 'env': os.getenv('PRETALX_DB_NAME'),\n },\n 'user': {\n 'default': '',\n 'env': os.getenv('PRETALX_DB_USER'),\n },\n 'password': {\n 'default': '',\n 'env': os.getenv('PRETALX_DB_PASS'),\n },\n 'host': {\n 'default': '',\n 'env': os.getenv('PRETALX_DB_HOST'),\n },\n 'port': {\n 'default': '',\n 'env': os.getenv('PRETALX_DB_PORT'),\n },\n },\n 'mail': {\n 'from': {\n 'default': 'admin@localhost',\n 'env': os.getenv('PRETALX_MAIL_FROM'),\n },\n 'host': {\n 'default': 'localhost',\n 'env': os.getenv('PRETALX_MAIL_HOST'),\n },\n 'port': {\n 'default': '25',\n 'env': os.getenv('PRETALX_MAIL_PORT'),\n },\n 'user': {\n 'default': '',\n 'env': os.getenv('PRETALX_MAIL_USER'),\n },\n 'password': {\n 'default': '',\n 'env': os.getenv('PRETALX_MAIL_PASSWORD'),\n },\n 'tls': {\n 'default': 'False',\n 'env': os.getenv('PRETALX_MAIL_TLS'),\n },\n 'ssl': {\n 'default': 'False',\n 'env': os.getenv('PRETALX_MAIL_SSL'),\n },\n },\n 'cache': {\n },\n 'celery': {\n 'broker': {\n 'default': '',\n 'env': os.getenv('PRETALX_CELERY_BROKER'),\n },\n 'backend': {\n 'default': '',\n 'env': os.getenv('PRETALX_CELERY_BACKEND'),\n },\n },\n 'logging': {\n 'email': {\n 'default': '',\n 'env': os.getenv('PRETALX_LOGGING_EMAIL'),\n },\n 'email_level': {\n 'default': '',\n 'env': os.getenv('PRETALX_LOGGING_EMAIL_LEVEL'),\n },\n },\n}\n\n\ndef read_config_files(config):\n if 'PRETALX_CONFIG_FILE' in os.environ:\n config_files = config.read_file(open(os.environ.get('PRETALX_CONFIG_FILE'), encoding='utf-8'))\n else:\n config_files = config.read([\n '/etc/pretalx/pretalx.cfg',\n os.path.expanduser('~/.pretalx.cfg'),\n 'pretalx.cfg',\n ], encoding='utf-8')\n return config, config_files\n\n\ndef read_layer(layer_name, config):\n config_dict = reduce_dict({\n section_name: {\n key: value.get(layer_name)\n for key, value in section_content.items()\n }\n for section_name, section_content in CONFIG.items()\n })\n config.read_dict(config_dict)\n return config\n\n\ndef build_config():\n config = configparser.RawConfigParser()\n config = read_layer('default', config)\n config, config_files = read_config_files(config)\n config = read_layer('env', config)\n return config, config_files\n",
"path": "src/pretalx/common/settings/config.py"
}
] | [
{
"content": "import configparser\nimport os\nimport sys\n\nfrom pretalx.common.settings.utils import reduce_dict\n\nCONFIG = {\n 'filesystem': {\n 'base': {\n 'default': os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(__file__)))),\n },\n 'logs': {\n 'default': None,\n 'env': os.getenv('PRETALX_FILESYSTEM_LOGS'),\n },\n 'media': {\n 'default': None,\n 'env': os.getenv('PRETALX_FILESYSTEM_MEDIA'),\n },\n 'static': {\n 'default': None,\n 'env': os.getenv('PRETALX_FILESYSTEM_STATIC'),\n },\n },\n 'site': {\n 'debug': {\n 'default': 'runserver' in sys.argv,\n 'env': os.getenv('PRETALX_DEBUG'),\n },\n 'url': {\n 'default': 'http://localhost',\n 'env': os.getenv('PRETALX_SITE_URL'),\n },\n 'https': {\n 'env': os.getenv('PRETALX_HTTPS'),\n },\n 'cookie_domain': {\n 'default': '',\n 'env': os.getenv('PRETALX_COOKIE_DOMAIN'),\n },\n },\n 'database': {\n 'backend': {\n 'default': 'sqlite3',\n 'env': os.getenv('PRETALX_DB_TYPE'),\n },\n 'name': {\n 'env': os.getenv('PRETALX_DB_NAME'),\n },\n 'user': {\n 'default': '',\n 'env': os.getenv('PRETALX_DB_USER'),\n },\n 'password': {\n 'default': '',\n 'env': os.getenv('PRETALX_DB_PASS'),\n },\n 'host': {\n 'default': '',\n 'env': os.getenv('PRETALX_DB_HOST'),\n },\n 'port': {\n 'default': '',\n 'env': os.getenv('PRETALX_DB_PORT'),\n },\n },\n 'mail': {\n 'from': {\n 'default': 'admin@localhost',\n 'env': os.getenv('PRETALX_MAIL_FROM'),\n },\n 'host': {\n 'default': 'localhost',\n 'env': os.getenv('PRETALX_MAIL_HOST'),\n },\n 'port': {\n 'default': '25',\n 'env': os.getenv('PRETALX_MAIL_PORT'),\n },\n 'user': {\n 'default': '',\n 'env': os.getenv('PRETALX_MAIL_USER'),\n },\n 'password': {\n 'default': '',\n 'env': os.getenv('PRETALX_MAIL_PASSWORD'),\n },\n 'tls': {\n 'default': 'False',\n 'env': os.getenv('PRETALX_MAIL_TLS'),\n },\n 'ssl': {\n 'default': 'False',\n 'env': os.getenv('PRETALX_MAIL_SSL'),\n },\n },\n 'cache': {\n },\n 'celery': {\n 'broker': {\n 'default': '',\n 'env': os.getenv('PRETALX_CELERY_BROKER'),\n },\n 'backend': {\n 'default': '',\n 'env': os.getenv('PRETALX_CELERY_BACKEND'),\n },\n },\n 'logging': {\n 'email': {\n 'default': '',\n 'env': os.getenv('PRETALX_LOGGING_EMAIL'),\n },\n 'email_level': {\n 'default': '',\n 'env': os.getenv('PRETALX_LOGGING_EMAIL_LEVEL'),\n },\n },\n}\n\n\ndef read_config_files(config):\n if 'PRETALX_CONFIG_FILE' in os.environ:\n config_files = config.read_file(open(os.environ.get('PRETALX_CONFIG_FILE'), encoding='utf-8'))\n else:\n config_files = config.read([\n '/etc/pretalx/pretalx.cfg',\n os.path.expanduser('~/.pretalx.cfg'),\n 'pretalx.cfg',\n ], encoding='utf-8')\n return config, config_files or [] # .read() returns None, if there are no config files\n\n\ndef read_layer(layer_name, config):\n config_dict = reduce_dict({\n section_name: {\n key: value.get(layer_name)\n for key, value in section_content.items()\n }\n for section_name, section_content in CONFIG.items()\n })\n config.read_dict(config_dict)\n return config\n\n\ndef build_config():\n config = configparser.RawConfigParser()\n config = read_layer('default', config)\n config, config_files = read_config_files(config)\n config = read_layer('env', config)\n return config, config_files\n",
"path": "src/pretalx/common/settings/config.py"
}
] | diff --git a/src/pretalx/common/settings/config.py b/src/pretalx/common/settings/config.py
index f89488e514..70b8a56172 100644
--- a/src/pretalx/common/settings/config.py
+++ b/src/pretalx/common/settings/config.py
@@ -128,7 +128,7 @@ def read_config_files(config):
os.path.expanduser('~/.pretalx.cfg'),
'pretalx.cfg',
], encoding='utf-8')
- return config, config_files
+ return config, config_files or [] # .read() returns None, if there are no config files
def read_layer(layer_name, config):
|
TileDB-Inc__TileDB-Py-501 | Four components should be three components?
In the recently created example "writing_dense_rgb.py" there is this fragment:
https://github.com/TileDB-Inc/TileDB-Py/blob/75ddcf56ed80ba5e1a1237b7e527ec4fbd87abb9/examples/writing_dense_rgb.py#L56-L57
It says four int32 components where it seems like it should be three int32 components; after all, the values of the attribute are RGB, not RGBA.
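A quick check with plain NumPy (independent of TileDB) confirming that the structured dtype used in the example has exactly three int32 fields:

```python
import numpy as np

# The attribute dtype from writing_dense_rgb.py: one int32 per R, G and B channel.
rgb_dtype = np.dtype('i4, i4, i4')
print(rgb_dtype.names)       # ('f0', 'f1', 'f2')
print(len(rgb_dtype.names))  # 3, so the comment should say "three"
```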
| [
{
"content": "# writing_dense_rgb.py\n#\n# LICENSE\n#\n# The MIT License\n#\n# Copyright (c) 2021 TileDB, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n# DESCRIPTION\n#\n# Please see the TileDB documentation for more information:\n# https://docs.tiledb.com/main/solutions/tiledb-embedded/api-usage/writing-arrays/writing-in-dense-subarrays\n#\n# When run, this program will create a 2D+1 multi-component (eg RGB) dense array, write some\n# data to it, and read the entire array data.\n\nimport tiledb, numpy as np\n\nimg_shape = (100, 224, 224)\nimg_uri = \"writing_dense_rgb\"\n\nimage_data = np.random.randint(low=0, high=100, size=(*img_shape, 3), dtype=np.int32)\n\n\ndef create_array():\n domain = tiledb.Domain(\n tiledb.Dim(\n name=\"image_id\", domain=(0, img_shape[0] - 1), tile=4, dtype=np.int32\n ),\n tiledb.Dim(\n name=\"x\", domain=(0, img_shape[1] - 1), tile=img_shape[1], dtype=np.int32\n ),\n tiledb.Dim(\n name=\"y\", domain=(0, img_shape[2] - 1), tile=img_shape[2], dtype=np.int32\n ),\n )\n\n # create multi-component attribute with four int32 components\n attr = tiledb.Attr(dtype=np.dtype(\"i4, i4, i4\"))\n\n schema = tiledb.ArraySchema(domain=domain, sparse=False, attrs=[attr])\n\n tiledb.Array.create(img_uri, schema)\n\n image_data_rgb = image_data.view(np.dtype(\"i4, i4, i4\"))\n\n with tiledb.open(img_uri, \"w\") as A:\n # write data to 1st image_id slot\n A[:] = image_data_rgb\n\n\ndef read_array():\n with tiledb.open(img_uri) as A:\n print(A[:].shape)\n\n\nif __name__ == \"__main__\":\n create_array()\n read_array()\n",
"path": "examples/writing_dense_rgb.py"
}
] | [
{
"content": "# writing_dense_rgb.py\n#\n# LICENSE\n#\n# The MIT License\n#\n# Copyright (c) 2021 TileDB, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n# DESCRIPTION\n#\n# Please see the TileDB documentation for more information:\n# https://docs.tiledb.com/main/solutions/tiledb-embedded/api-usage/writing-arrays/writing-in-dense-subarrays\n#\n# When run, this program will create a 2D+1 multi-component (eg RGB) dense array, write some\n# data to it, and read the entire array data.\n\nimport tiledb, numpy as np\n\nimg_shape = (100, 224, 224)\nimg_uri = \"writing_dense_rgb\"\n\nimage_data = np.random.randint(low=0, high=100, size=(*img_shape, 3), dtype=np.int32)\n\n\ndef create_array():\n domain = tiledb.Domain(\n tiledb.Dim(\n name=\"image_id\", domain=(0, img_shape[0] - 1), tile=4, dtype=np.int32\n ),\n tiledb.Dim(\n name=\"x\", domain=(0, img_shape[1] - 1), tile=img_shape[1], dtype=np.int32\n ),\n tiledb.Dim(\n name=\"y\", domain=(0, img_shape[2] - 1), tile=img_shape[2], dtype=np.int32\n ),\n )\n\n # create multi-component attribute with three int32 components\n attr = tiledb.Attr(dtype=np.dtype(\"i4, i4, i4\"))\n\n schema = tiledb.ArraySchema(domain=domain, sparse=False, attrs=[attr])\n\n tiledb.Array.create(img_uri, schema)\n\n image_data_rgb = image_data.view(np.dtype(\"i4, i4, i4\"))\n\n with tiledb.open(img_uri, \"w\") as A:\n # write data to 1st image_id slot\n A[:] = image_data_rgb\n\n\ndef read_array():\n with tiledb.open(img_uri) as A:\n print(A[:].shape)\n\n\nif __name__ == \"__main__\":\n create_array()\n read_array()\n",
"path": "examples/writing_dense_rgb.py"
}
] | diff --git a/examples/writing_dense_rgb.py b/examples/writing_dense_rgb.py
index 8a2dfd1b6b..20a0669b37 100644
--- a/examples/writing_dense_rgb.py
+++ b/examples/writing_dense_rgb.py
@@ -53,7 +53,7 @@ def create_array():
),
)
- # create multi-component attribute with four int32 components
+ # create multi-component attribute with three int32 components
attr = tiledb.Attr(dtype=np.dtype("i4, i4, i4"))
schema = tiledb.ArraySchema(domain=domain, sparse=False, attrs=[attr])
|
flairNLP__flair-1713 | encoding error while loading WIKINER_RUSSIAN
I am trying to load the Russian NER-labelled dataset 'WIKINER_RUSSIAN' into Flair, but I get an encoding error during loading.
My System spec:
Flair: 0.4.3
Python: 3.8.0

| [
{
"content": "import logging\nimport re\nimport os\nfrom pathlib import Path\nfrom typing import Union, Dict, List\n\nimport flair\nfrom flair.data import Corpus, FlairDataset, Sentence, Token\nfrom flair.datasets.base import find_train_dev_test_files\nfrom flair.file_utils import cached_path\n\nlog = logging.getLogger(\"flair\")\n\n\nclass ColumnCorpus(Corpus):\n def __init__(\n self,\n data_folder: Union[str, Path],\n column_format: Dict[int, str],\n train_file=None,\n test_file=None,\n dev_file=None,\n tag_to_bioes=None,\n column_delimiter: str = r\"\\s+\",\n comment_symbol: str = None,\n encoding: str = \"utf-8\",\n document_separator_token: str = None,\n skip_first_line: bool = False,\n in_memory: bool = True,\n ):\n \"\"\"\n Instantiates a Corpus from CoNLL column-formatted task data such as CoNLL03 or CoNLL2000.\n\n :param data_folder: base folder with the task data\n :param column_format: a map specifying the column format\n :param train_file: the name of the train file\n :param test_file: the name of the test file\n :param dev_file: the name of the dev file, if None, dev data is sampled from train\n :param tag_to_bioes: whether to convert to BIOES tagging scheme\n :param column_delimiter: default is to split on any separatator, but you can overwrite for instance with \"\\t\"\n to split only on tabs\n :param comment_symbol: if set, lines that begin with this symbol are treated as comments\n :param document_separator_token: If provided, multiple sentences are read into one object. Provide the string token\n that indicates that a new document begins\n :param skip_first_line: set to True if your dataset has a header line\n :param in_memory: If set to True, the dataset is kept in memory as Sentence objects, otherwise does disk reads\n :return: a Corpus with annotated train, dev and test data\n \"\"\"\n\n # find train, dev and test files if not specified\n dev_file, test_file, train_file = \\\n find_train_dev_test_files(data_folder, dev_file, test_file, train_file)\n\n # get train data\n train = ColumnDataset(\n train_file,\n column_format,\n tag_to_bioes,\n encoding=encoding,\n comment_symbol=comment_symbol,\n column_delimiter=column_delimiter,\n in_memory=in_memory,\n document_separator_token=document_separator_token,\n skip_first_line=skip_first_line,\n )\n\n # read in test file if exists\n test = ColumnDataset(\n test_file,\n column_format,\n tag_to_bioes,\n encoding=encoding,\n comment_symbol=comment_symbol,\n column_delimiter=column_delimiter,\n in_memory=in_memory,\n document_separator_token=document_separator_token,\n skip_first_line=skip_first_line,\n ) if test_file is not None else None\n\n # read in dev file if exists\n dev = ColumnDataset(\n dev_file,\n column_format,\n tag_to_bioes,\n encoding=encoding,\n comment_symbol=comment_symbol,\n column_delimiter=column_delimiter,\n in_memory=in_memory,\n document_separator_token=document_separator_token,\n skip_first_line=skip_first_line,\n ) if dev_file is not None else None\n\n super(ColumnCorpus, self).__init__(train, dev, test, name=str(data_folder))\n\n\nclass ColumnDataset(FlairDataset):\n # special key for space after\n SPACE_AFTER_KEY = \"space-after\"\n\n def __init__(\n self,\n path_to_column_file: Union[str, Path],\n column_name_map: Dict[int, str],\n tag_to_bioes: str = None,\n column_delimiter: str = r\"\\s+\",\n comment_symbol: str = None,\n in_memory: bool = True,\n document_separator_token: str = None,\n encoding: str = \"utf-8\",\n skip_first_line: bool = False,\n ):\n \"\"\"\n Instantiates a column dataset 
(typically used for sequence labeling or word-level prediction).\n\n :param path_to_column_file: path to the file with the column-formatted data\n :param column_name_map: a map specifying the column format\n :param tag_to_bioes: whether to convert to BIOES tagging scheme\n :param column_delimiter: default is to split on any separatator, but you can overwrite for instance with \"\\t\"\n to split only on tabs\n :param comment_symbol: if set, lines that begin with this symbol are treated as comments\n :param in_memory: If set to True, the dataset is kept in memory as Sentence objects, otherwise does disk reads\n :param document_separator_token: If provided, multiple sentences are read into one object. Provide the string token\n that indicates that a new document begins\n :param skip_first_line: set to True if your dataset has a header line\n \"\"\"\n if type(path_to_column_file) is str:\n path_to_column_file = Path(path_to_column_file)\n assert path_to_column_file.exists()\n self.path_to_column_file = path_to_column_file\n self.tag_to_bioes = tag_to_bioes\n self.column_name_map = column_name_map\n self.column_delimiter = column_delimiter\n self.comment_symbol = comment_symbol\n self.document_separator_token = document_separator_token\n\n # store either Sentence objects in memory, or only file offsets\n self.in_memory = in_memory\n if self.in_memory:\n self.sentences: List[Sentence] = []\n else:\n self.indices: List[int] = []\n\n self.total_sentence_count: int = 0\n\n # most data sets have the token text in the first column, if not, pass 'text' as column\n self.text_column: int = 0\n for column in self.column_name_map:\n if column_name_map[column] == \"text\":\n self.text_column = column\n\n # determine encoding of text file\n self.encoding = encoding\n\n sentence: Sentence = Sentence()\n sentence_started: bool = False\n with open(str(self.path_to_column_file), encoding=self.encoding) as f:\n\n # skip first line if to selected\n if skip_first_line:\n f.readline()\n\n line = f.readline()\n position = 0\n\n while line:\n\n if self.comment_symbol is not None and line.startswith(comment_symbol):\n line = f.readline()\n continue\n\n if self.__line_completes_sentence(line):\n\n if sentence_started:\n\n if self.in_memory:\n if self.tag_to_bioes is not None:\n sentence.convert_tag_scheme(\n tag_type=self.tag_to_bioes, target_scheme=\"iobes\"\n )\n self.sentences.append(sentence)\n else:\n self.indices.append(position)\n position = f.tell()\n self.total_sentence_count += 1\n sentence: Sentence = Sentence()\n sentence_started = False\n\n elif self.in_memory:\n token = self._parse_token(line)\n if not line.isspace():\n sentence.add_token(token)\n sentence_started = True\n\n elif not line.isspace():\n sentence_started = True\n\n line = f.readline()\n\n if sentence_started:\n if self.in_memory:\n self.sentences.append(sentence)\n else:\n self.indices.append(position)\n self.total_sentence_count += 1\n\n def _parse_token(self, line: str) -> Token:\n fields: List[str] = re.split(self.column_delimiter, line)\n token = Token(fields[self.text_column])\n for column in self.column_name_map:\n if len(fields) > column:\n if column != self.text_column and self.column_name_map[column] != self.SPACE_AFTER_KEY:\n token.add_label(\n self.column_name_map[column], fields[column]\n )\n if self.column_name_map[column] == self.SPACE_AFTER_KEY and fields[column] == '-':\n token.whitespace_after = False\n return token\n\n def __line_completes_sentence(self, line: str) -> bool:\n sentence_completed = line.isspace()\n if 
self.document_separator_token:\n sentence_completed = False\n fields: List[str] = re.split(self.column_delimiter, line)\n if len(fields) >= self.text_column:\n if fields[self.text_column] == self.document_separator_token:\n sentence_completed = True\n return sentence_completed\n\n def is_in_memory(self) -> bool:\n return self.in_memory\n\n def __len__(self):\n return self.total_sentence_count\n\n def __getitem__(self, index: int = 0) -> Sentence:\n\n if self.in_memory:\n sentence = self.sentences[index]\n\n else:\n with open(str(self.path_to_column_file), encoding=self.encoding) as file:\n file.seek(self.indices[index])\n line = file.readline()\n sentence: Sentence = Sentence()\n while line:\n if self.comment_symbol is not None and line.startswith(\n self.comment_symbol\n ):\n line = file.readline()\n continue\n\n if self.__line_completes_sentence(line):\n if len(sentence) > 0:\n if self.tag_to_bioes is not None:\n sentence.convert_tag_scheme(\n tag_type=self.tag_to_bioes, target_scheme=\"iobes\"\n )\n return sentence\n\n else:\n token = self._parse_token(line)\n if not line.isspace():\n sentence.add_token(token)\n\n line = file.readline()\n return sentence\n\n\nclass BIOFID(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"lemma\", 2: \"pos\", 3: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n biofid_path = \"https://raw.githubusercontent.com/texttechnologylab/BIOfid/master/BIOfid-Dataset-NER/\"\n cached_path(f\"{biofid_path}train.conll\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{biofid_path}dev.conll\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{biofid_path}test.conll\", Path(\"datasets\") / dataset_name)\n\n super(BIOFID, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass CONLL_03(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n document_as_sequence: bool = False,\n ):\n \"\"\"\n Initialize the CoNLL-03 corpus. This is only possible if you've manually downloaded it to your machine.\n Obtain the corpus from https://www.clips.uantwerpen.be/conll2003/ner/ and put the eng.testa, .testb, .train\n files in a folder called 'conll_03'. Then set the base_path parameter in the constructor to the path to the\n parent directory where the conll_03 folder resides.\n :param base_path: Path to the CoNLL-03 corpus (i.e. 
'conll_03' folder) on your machine\n :param tag_to_bioes: NER by default, need not be changed, but you could also select 'pos' or 'np' to predict\n POS tags or chunks respectively\n :param in_memory: If True, keeps dataset in memory giving speedups in training.\n :param document_as_sequence: If True, all sentences of a document are read into a single Sentence object\n \"\"\"\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"np\", 3: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # check if data there\n if not data_folder.exists():\n log.warning(\"-\" * 100)\n log.warning(f'WARNING: CoNLL-03 dataset not found at \"{data_folder}\".')\n log.warning(\n 'Instructions for obtaining the data can be found here: https://www.clips.uantwerpen.be/conll2003/ner/\"'\n )\n log.warning(\"-\" * 100)\n\n super(CONLL_03, self).__init__(\n data_folder,\n columns,\n tag_to_bioes=tag_to_bioes,\n in_memory=in_memory,\n document_separator_token=None if not document_as_sequence else \"-DOCSTART-\",\n )\n\n\nclass CONLL_03_GERMAN(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n document_as_sequence: bool = False,\n ):\n \"\"\"\n Initialize the CoNLL-03 corpus for German. This is only possible if you've manually downloaded it to your machine.\n Obtain the corpus from https://www.clips.uantwerpen.be/conll2003/ner/ and put the respective files in a folder called\n 'conll_03_german'. Then set the base_path parameter in the constructor to the path to the parent directory where\n the conll_03_german folder resides.\n :param base_path: Path to the CoNLL-03 corpus (i.e. 'conll_03_german' folder) on your machine\n :param tag_to_bioes: NER by default, need not be changed, but you could also select 'lemma', 'pos' or 'np' to predict\n word lemmas, POS tags or chunks respectively\n :param in_memory: If True, keeps dataset in memory giving speedups in training.\n :param document_as_sequence: If True, all sentences of a document are read into a single Sentence object\n \"\"\"\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"lemma\", 2: \"pos\", 3: \"np\", 4: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # check if data there\n if not data_folder.exists():\n log.warning(\"-\" * 100)\n log.warning(f'WARNING: CoNLL-03 dataset not found at \"{data_folder}\".')\n log.warning(\n 'Instructions for obtaining the data can be found here: https://www.clips.uantwerpen.be/conll2003/ner/\"'\n )\n log.warning(\"-\" * 100)\n\n super(CONLL_03_GERMAN, self).__init__(\n data_folder,\n columns,\n tag_to_bioes=tag_to_bioes,\n in_memory=in_memory,\n document_separator_token=None if not document_as_sequence else \"-DOCSTART-\",\n )\n\n\nclass CONLL_03_DUTCH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n document_as_sequence: bool = False,\n ):\n \"\"\"\n Initialize the CoNLL-03 corpus for Dutch. 
The first time you call this constructor it will automatically\n download the dataset.\n :param base_path: Default is None, meaning that corpus gets auto-downloaded and loaded. You can override this\n to point to a different folder but typically this should not be necessary.\n :param tag_to_bioes: NER by default, need not be changed, but you could also select 'pos' to predict\n POS tags instead\n :param in_memory: If True, keeps dataset in memory giving speedups in training.\n :param document_as_sequence: If True, all sentences of a document are read into a single Sentence object\n \"\"\"\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n conll_02_path = \"https://www.clips.uantwerpen.be/conll2002/ner/data/\"\n cached_path(f\"{conll_02_path}ned.testa\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{conll_02_path}ned.testb\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{conll_02_path}ned.train\", Path(\"datasets\") / dataset_name)\n\n super(CONLL_03_DUTCH, self).__init__(\n data_folder,\n columns,\n tag_to_bioes=tag_to_bioes,\n encoding=\"latin-1\",\n in_memory=in_memory,\n document_separator_token=None if not document_as_sequence else \"-DOCSTART-\",\n )\n\n\ndef add_IOB2_tags(data_file: Union[str, Path], encoding: str = \"utf8\"):\n \"\"\"\nFunction that adds IOB2 tags if only chunk names are provided (e.g. words are tagged PER instead\nof B-PER or I-PER). Replaces '0' with 'O' as the no-chunk tag since ColumnCorpus expects\nthe letter 'O'. Additionaly it removes lines with no tags in the data file and can also\nbe used if the data is only partialy IOB tagged.\nParameters\n----------\ndata_file : Union[str, Path]\n Path to the data file.\nencoding : str, optional\n Encoding used in open function. The default is \"utf8\".\n\n\"\"\"\n with open(file=data_file, mode='r', encoding=encoding) as f:\n lines = f.readlines()\n with open(file=data_file, mode='w', encoding=encoding) as f:\n pred = 'O' # remembers tag of predecessing line\n for line in lines:\n line_list = line.split()\n if len(line_list) == 2: # word with tag\n word = line_list[0]\n tag = line_list[1]\n if tag in ['0', 'O']: # no chunk\n f.write(word + ' O\\n')\n pred = 'O'\n elif '-' not in tag: # no IOB tags\n if pred == 'O': # found a new chunk\n f.write(word + ' B-' + tag + '\\n')\n pred = tag\n else: # found further part of chunk or new chunk directly after old chunk\n if pred == tag:\n f.write(word + ' I-' + tag + '\\n')\n else:\n f.write(word + ' B-' + tag + '\\n')\n pred = tag\n else: # line already has IOB tag (tag contains '-')\n f.write(line)\n pred = tag.split('-')[1]\n elif len(line_list) == 0: # empty line\n f.write('\\n')\n pred = 'O'\n\n\nclass CONLL_03_SPANISH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n ):\n \"\"\"\n Initialize the CoNLL-03 corpus for Spanish. The first time you call this constructor it will automatically\n download the dataset.\n :param base_path: Default is None, meaning that corpus gets auto-downloaded and loaded. 
You can override this\n to point to a different folder but typically this should not be necessary.\n :param tag_to_bioes: NER by default, should not be changed\n :param in_memory: If True, keeps dataset in memory giving speedups in training.\n :param document_as_sequence: If True, all sentences of a document are read into a single Sentence object\n \"\"\"\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n conll_02_path = \"https://www.clips.uantwerpen.be/conll2002/ner/data/\"\n cached_path(f\"{conll_02_path}esp.testa\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{conll_02_path}esp.testb\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{conll_02_path}esp.train\", Path(\"datasets\") / dataset_name)\n\n super(CONLL_03_SPANISH, self).__init__(\n data_folder,\n columns,\n tag_to_bioes=tag_to_bioes,\n encoding=\"latin-1\",\n in_memory=in_memory,\n )\n\n\nclass CONLL_2000(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"np\",\n in_memory: bool = True,\n ):\n \"\"\"\n Initialize the CoNLL-2000 corpus for English chunking.\n The first time you call this constructor it will automatically download the dataset.\n :param base_path: Default is None, meaning that corpus gets auto-downloaded and loaded. You can override this\n to point to a different folder but typically this should not be necessary.\n :param tag_to_bioes: 'np' by default, should not be changed, but you can set 'pos' instead to predict POS tags\n :param in_memory: If True, keeps dataset in memory giving speedups in training.\n \"\"\"\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"np\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n conll_2000_path = \"https://www.clips.uantwerpen.be/conll2000/chunking/\"\n data_file = Path(flair.cache_root) / \"datasets\" / dataset_name / \"train.txt\"\n if not data_file.is_file():\n cached_path(\n f\"{conll_2000_path}train.txt.gz\", Path(\"datasets\") / dataset_name\n )\n cached_path(\n f\"{conll_2000_path}test.txt.gz\", Path(\"datasets\") / dataset_name\n )\n import gzip, shutil\n\n with gzip.open(\n Path(flair.cache_root) / \"datasets\" / dataset_name / \"train.txt.gz\",\n \"rb\",\n ) as f_in:\n with open(\n Path(flair.cache_root) / \"datasets\" / dataset_name / \"train.txt\",\n \"wb\",\n ) as f_out:\n shutil.copyfileobj(f_in, f_out)\n with gzip.open(\n Path(flair.cache_root) / \"datasets\" / dataset_name / \"test.txt.gz\", \"rb\"\n ) as f_in:\n with open(\n Path(flair.cache_root) / \"datasets\" / dataset_name / \"test.txt\",\n \"wb\",\n ) as f_out:\n shutil.copyfileobj(f_in, f_out)\n\n super(CONLL_2000, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass DANE(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n ):\n if type(base_path) == str:\n base_path: Path = 
Path(base_path)\n\n # column format\n columns = {1: 'text', 3: 'pos', 9: 'ner'}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n data_path = Path(flair.cache_root) / \"datasets\" / dataset_name\n train_data_file = data_path / \"ddt.train.conllu\"\n if not train_data_file.is_file():\n temp_file = cached_path(\n 'https://danlp.s3.eu-central-1.amazonaws.com/datasets/ddt.zip',\n Path(\"datasets\") / dataset_name\n )\n from zipfile import ZipFile\n\n with ZipFile(temp_file, 'r') as zip_file:\n zip_file.extractall(path=data_path)\n\n # Remove CoNLL-U meta information in the last column\n for part in ['train', 'dev', 'test']:\n lines = []\n data_file = \"ddt.{}.conllu\".format(part)\n with open(data_path / data_file, 'r') as file:\n for line in file:\n if line.startswith(\"#\") or line == \"\\n\":\n lines.append(line)\n lines.append(line.replace(\"name=\", \"\").replace(\"|SpaceAfter=No\", \"\"))\n\n with open(data_path / data_file, 'w') as file:\n file.writelines(lines)\n\n print(data_path / data_file)\n\n super(DANE, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes,\n in_memory=in_memory, comment_symbol=\"#\"\n )\n\n\nclass GERMEVAL_14(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n ):\n \"\"\"\n Initialize the GermEval NER corpus for German. This is only possible if you've manually downloaded it to your\n machine. Obtain the corpus from https://sites.google.com/site/germeval2014ner/home/ and put it into some folder.\n Then point the base_path parameter in the constructor to this folder\n :param base_path: Path to the GermEval corpus on your machine\n :param tag_to_bioes: 'ner' by default, should not be changed.\n :param in_memory:If True, keeps dataset in memory giving speedups in training.\n \"\"\"\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {1: \"text\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # check if data there\n if not data_folder.exists():\n log.warning(\"-\" * 100)\n log.warning(f'WARNING: GermEval-14 dataset not found at \"{data_folder}\".')\n log.warning(\n 'Instructions for obtaining the data can be found here: https://sites.google.com/site/germeval2014ner/home/\"'\n )\n log.warning(\"-\" * 100)\n super(GERMEVAL_14, self).__init__(\n data_folder,\n columns,\n tag_to_bioes=tag_to_bioes,\n comment_symbol=\"#\",\n in_memory=in_memory,\n )\n\n\nclass INSPEC(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"keyword\",\n in_memory: bool = True,\n ):\n\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"keyword\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n inspec_path = \"https://raw.githubusercontent.com/midas-research/keyphrase-extraction-as-sequence-labeling-data/master/Inspec\"\n 
cached_path(f\"{inspec_path}/train.txt\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{inspec_path}/test.txt\", Path(\"datasets\") / dataset_name)\n if not \"dev.txt\" in os.listdir(data_folder):\n cached_path(f\"{inspec_path}/valid.txt\", Path(\"datasets\") / dataset_name)\n # rename according to train - test - dev - convention\n os.rename(data_folder / \"valid.txt\", data_folder / \"dev.txt\")\n\n super(INSPEC, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass LER_GERMAN(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n \"\"\"\n Initialize the LER_GERMAN (Legal Entity Recognition) corpus. The first time you call this constructor it will automatically\n download the dataset.\n :param base_path: Default is None, meaning that corpus gets auto-downloaded and loaded. You can override this\n to point to a different folder but typically this should not be necessary.\n :param in_memory: If True, keeps dataset in memory giving speedups in training. Not recommended due to heavy RAM usage.\n :param document_as_sequence: If True, all sentences of a document are read into a single Sentence object\n \"\"\"\n\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n ler_path = \"https://raw.githubusercontent.com/elenanereiss/Legal-Entity-Recognition/master/data/\"\n cached_path(f\"{ler_path}ler.conll\", Path(\"datasets\") / dataset_name)\n\n super(LER_GERMAN, self).__init__(\n data_folder,\n columns,\n tag_to_bioes=tag_to_bioes,\n in_memory=in_memory,\n train_file='ler.conll'\n )\n\n\nclass NER_BASQUE(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n ner_basque_path = \"http://ixa2.si.ehu.eus/eiec/\"\n data_path = Path(flair.cache_root) / \"datasets\" / dataset_name\n data_file = data_path / \"named_ent_eu.train\"\n if not data_file.is_file():\n cached_path(\n f\"{ner_basque_path}/eiec_v1.0.tgz\", Path(\"datasets\") / dataset_name\n )\n import tarfile, shutil\n\n with tarfile.open(\n Path(flair.cache_root) / \"datasets\" / dataset_name / \"eiec_v1.0.tgz\",\n \"r:gz\",\n ) as f_in:\n corpus_files = (\n \"eiec_v1.0/named_ent_eu.train\",\n \"eiec_v1.0/named_ent_eu.test\",\n )\n for corpus_file in corpus_files:\n f_in.extract(corpus_file, data_path)\n shutil.move(f\"{data_path}/{corpus_file}\", data_path)\n\n super(NER_BASQUE, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass NER_FINNISH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n 
columns = {0: \"text\", 1: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n ner_finnish_path = \"https://raw.githubusercontent.com/mpsilfve/finer-data/master/data/digitoday.\"\n cached_path(f\"{ner_finnish_path}2014.train.csv\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{ner_finnish_path}2014.dev.csv\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{ner_finnish_path}2015.test.csv\", Path(\"datasets\") / dataset_name)\n\n _remove_lines_without_annotations(data_file=Path(data_folder / \"digitoday.2015.test.csv\"))\n\n super(NER_FINNISH, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory, skip_first_line=True\n )\n\n\ndef _remove_lines_without_annotations(data_file: Union[str, Path] = None):\n with open(data_file, 'r') as f:\n lines = f.readlines()\n with open(data_file, 'w') as f:\n for line in lines:\n if len(line.split()) != 1:\n f.write(line)\n\n\nclass NER_SWEDISH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n ):\n \"\"\"\n Initialize the NER_SWEDISH corpus for Swedish. The first time you call this constructor it will automatically\n download the dataset.\n :param base_path: Default is None, meaning that corpus gets auto-downloaded and loaded. You can override this\n to point to a different folder but typically this should not be necessary.\n :param in_memory: If True, keeps dataset in memory giving speedups in training.\n :param document_as_sequence: If True, all sentences of a document are read into a single Sentence object\n \"\"\"\n\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n ner_spraakbanken_path = \"https://raw.githubusercontent.com/klintan/swedish-ner-corpus/master/\"\n cached_path(f\"{ner_spraakbanken_path}test_corpus.txt\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{ner_spraakbanken_path}train_corpus.txt\", Path(\"datasets\") / dataset_name)\n\n # data is not in IOB2 format. 
Thus we transform it to IOB2\n add_IOB2_tags(data_file=Path(data_folder / \"test_corpus.txt\"))\n add_IOB2_tags(data_file=Path(data_folder / \"train_corpus.txt\"))\n\n super(NER_SWEDISH, self).__init__(\n data_folder,\n columns,\n tag_to_bioes=tag_to_bioes,\n in_memory=in_memory,\n )\n\n\nclass SEMEVAL2017(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"keyword\",\n in_memory: bool = True,\n ):\n\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"keyword\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n semeval2017_path = \"https://raw.githubusercontent.com/midas-research/keyphrase-extraction-as-sequence-labeling-data/master/SemEval-2017\"\n cached_path(f\"{semeval2017_path}/train.txt\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{semeval2017_path}/test.txt\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{semeval2017_path}/dev.txt\", Path(\"datasets\") / dataset_name)\n\n super(SEMEVAL2017, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass SEMEVAL2010(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"keyword\",\n in_memory: bool = True,\n ):\n\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"keyword\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n semeval2010_path = \"https://raw.githubusercontent.com/midas-research/keyphrase-extraction-as-sequence-labeling-data/master/processed_semeval-2010\"\n cached_path(f\"{semeval2010_path}/train.txt\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{semeval2010_path}/test.txt\", Path(\"datasets\") / dataset_name)\n\n super(SEMEVAL2010, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_ENGLISH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n _download_wikiner(\"en\", dataset_name)\n\n super(WIKINER_ENGLISH, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_GERMAN(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # 
download data if necessary\n _download_wikiner(\"de\", dataset_name)\n\n super(WIKINER_GERMAN, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_DUTCH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n _download_wikiner(\"nl\", dataset_name)\n\n super(WIKINER_DUTCH, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_FRENCH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n _download_wikiner(\"fr\", dataset_name)\n\n super(WIKINER_FRENCH, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_ITALIAN(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n _download_wikiner(\"it\", dataset_name)\n\n super(WIKINER_ITALIAN, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_SPANISH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n _download_wikiner(\"es\", dataset_name)\n\n super(WIKINER_SPANISH, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_PORTUGUESE(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = 
Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n _download_wikiner(\"pt\", dataset_name)\n\n super(WIKINER_PORTUGUESE, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_POLISH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n _download_wikiner(\"pl\", dataset_name)\n\n super(WIKINER_POLISH, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_RUSSIAN(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n _download_wikiner(\"ru\", dataset_name)\n\n super(WIKINER_RUSSIAN, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WNUT_17(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n wnut_path = \"https://noisy-text.github.io/2017/files/\"\n cached_path(f\"{wnut_path}wnut17train.conll\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{wnut_path}emerging.dev.conll\", Path(\"datasets\") / dataset_name)\n cached_path(\n f\"{wnut_path}emerging.test.annotated\", Path(\"datasets\") / dataset_name\n )\n\n super(WNUT_17, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\ndef _download_wikiner(language_code: str, dataset_name: str):\n # download data if necessary\n wikiner_path = (\n \"https://raw.githubusercontent.com/dice-group/FOX/master/input/Wikiner/\"\n )\n lc = language_code\n\n data_file = (\n Path(flair.cache_root)\n / \"datasets\"\n / dataset_name\n / f\"aij-wikiner-{lc}-wp3.train\"\n )\n if not data_file.is_file():\n\n cached_path(\n f\"{wikiner_path}aij-wikiner-{lc}-wp3.bz2\", Path(\"datasets\") / dataset_name\n )\n import bz2, shutil\n\n # unpack and write out in CoNLL column-like format\n bz_file = bz2.BZ2File(\n Path(flair.cache_root)\n / \"datasets\"\n / dataset_name\n / f\"aij-wikiner-{lc}-wp3.bz2\",\n \"rb\",\n )\n with bz_file as f, open(\n Path(flair.cache_root)\n / \"datasets\"\n / dataset_name\n / f\"aij-wikiner-{lc}-wp3.train\",\n \"w\",\n ) as out:\n for line in f:\n line 
= line.decode(\"utf-8\")\n words = line.split(\" \")\n for word in words:\n out.write(\"\\t\".join(word.split(\"|\")) + \"\\n\")\n",
"path": "flair/datasets/sequence_labeling.py"
}
] | [
{
"content": "import logging\nimport re\nimport os\nfrom pathlib import Path\nfrom typing import Union, Dict, List\n\nimport flair\nfrom flair.data import Corpus, FlairDataset, Sentence, Token\nfrom flair.datasets.base import find_train_dev_test_files\nfrom flair.file_utils import cached_path\n\nlog = logging.getLogger(\"flair\")\n\n\nclass ColumnCorpus(Corpus):\n def __init__(\n self,\n data_folder: Union[str, Path],\n column_format: Dict[int, str],\n train_file=None,\n test_file=None,\n dev_file=None,\n tag_to_bioes=None,\n column_delimiter: str = r\"\\s+\",\n comment_symbol: str = None,\n encoding: str = \"utf-8\",\n document_separator_token: str = None,\n skip_first_line: bool = False,\n in_memory: bool = True,\n ):\n \"\"\"\n Instantiates a Corpus from CoNLL column-formatted task data such as CoNLL03 or CoNLL2000.\n\n :param data_folder: base folder with the task data\n :param column_format: a map specifying the column format\n :param train_file: the name of the train file\n :param test_file: the name of the test file\n :param dev_file: the name of the dev file, if None, dev data is sampled from train\n :param tag_to_bioes: whether to convert to BIOES tagging scheme\n :param column_delimiter: default is to split on any separatator, but you can overwrite for instance with \"\\t\"\n to split only on tabs\n :param comment_symbol: if set, lines that begin with this symbol are treated as comments\n :param document_separator_token: If provided, multiple sentences are read into one object. Provide the string token\n that indicates that a new document begins\n :param skip_first_line: set to True if your dataset has a header line\n :param in_memory: If set to True, the dataset is kept in memory as Sentence objects, otherwise does disk reads\n :return: a Corpus with annotated train, dev and test data\n \"\"\"\n\n # find train, dev and test files if not specified\n dev_file, test_file, train_file = \\\n find_train_dev_test_files(data_folder, dev_file, test_file, train_file)\n\n # get train data\n train = ColumnDataset(\n train_file,\n column_format,\n tag_to_bioes,\n encoding=encoding,\n comment_symbol=comment_symbol,\n column_delimiter=column_delimiter,\n in_memory=in_memory,\n document_separator_token=document_separator_token,\n skip_first_line=skip_first_line,\n )\n\n # read in test file if exists\n test = ColumnDataset(\n test_file,\n column_format,\n tag_to_bioes,\n encoding=encoding,\n comment_symbol=comment_symbol,\n column_delimiter=column_delimiter,\n in_memory=in_memory,\n document_separator_token=document_separator_token,\n skip_first_line=skip_first_line,\n ) if test_file is not None else None\n\n # read in dev file if exists\n dev = ColumnDataset(\n dev_file,\n column_format,\n tag_to_bioes,\n encoding=encoding,\n comment_symbol=comment_symbol,\n column_delimiter=column_delimiter,\n in_memory=in_memory,\n document_separator_token=document_separator_token,\n skip_first_line=skip_first_line,\n ) if dev_file is not None else None\n\n super(ColumnCorpus, self).__init__(train, dev, test, name=str(data_folder))\n\n\nclass ColumnDataset(FlairDataset):\n # special key for space after\n SPACE_AFTER_KEY = \"space-after\"\n\n def __init__(\n self,\n path_to_column_file: Union[str, Path],\n column_name_map: Dict[int, str],\n tag_to_bioes: str = None,\n column_delimiter: str = r\"\\s+\",\n comment_symbol: str = None,\n in_memory: bool = True,\n document_separator_token: str = None,\n encoding: str = \"utf-8\",\n skip_first_line: bool = False,\n ):\n \"\"\"\n Instantiates a column dataset 
(typically used for sequence labeling or word-level prediction).\n\n :param path_to_column_file: path to the file with the column-formatted data\n :param column_name_map: a map specifying the column format\n :param tag_to_bioes: whether to convert to BIOES tagging scheme\n :param column_delimiter: default is to split on any separatator, but you can overwrite for instance with \"\\t\"\n to split only on tabs\n :param comment_symbol: if set, lines that begin with this symbol are treated as comments\n :param in_memory: If set to True, the dataset is kept in memory as Sentence objects, otherwise does disk reads\n :param document_separator_token: If provided, multiple sentences are read into one object. Provide the string token\n that indicates that a new document begins\n :param skip_first_line: set to True if your dataset has a header line\n \"\"\"\n if type(path_to_column_file) is str:\n path_to_column_file = Path(path_to_column_file)\n assert path_to_column_file.exists()\n self.path_to_column_file = path_to_column_file\n self.tag_to_bioes = tag_to_bioes\n self.column_name_map = column_name_map\n self.column_delimiter = column_delimiter\n self.comment_symbol = comment_symbol\n self.document_separator_token = document_separator_token\n\n # store either Sentence objects in memory, or only file offsets\n self.in_memory = in_memory\n if self.in_memory:\n self.sentences: List[Sentence] = []\n else:\n self.indices: List[int] = []\n\n self.total_sentence_count: int = 0\n\n # most data sets have the token text in the first column, if not, pass 'text' as column\n self.text_column: int = 0\n for column in self.column_name_map:\n if column_name_map[column] == \"text\":\n self.text_column = column\n\n # determine encoding of text file\n self.encoding = encoding\n\n sentence: Sentence = Sentence()\n sentence_started: bool = False\n with open(str(self.path_to_column_file), encoding=self.encoding) as f:\n\n # skip first line if to selected\n if skip_first_line:\n f.readline()\n\n line = f.readline()\n position = 0\n\n while line:\n\n if self.comment_symbol is not None and line.startswith(comment_symbol):\n line = f.readline()\n continue\n\n if self.__line_completes_sentence(line):\n\n if sentence_started:\n\n if self.in_memory:\n if self.tag_to_bioes is not None:\n sentence.convert_tag_scheme(\n tag_type=self.tag_to_bioes, target_scheme=\"iobes\"\n )\n self.sentences.append(sentence)\n else:\n self.indices.append(position)\n position = f.tell()\n self.total_sentence_count += 1\n sentence: Sentence = Sentence()\n sentence_started = False\n\n elif self.in_memory:\n token = self._parse_token(line)\n if not line.isspace():\n sentence.add_token(token)\n sentence_started = True\n\n elif not line.isspace():\n sentence_started = True\n\n line = f.readline()\n\n if sentence_started:\n if self.in_memory:\n self.sentences.append(sentence)\n else:\n self.indices.append(position)\n self.total_sentence_count += 1\n\n def _parse_token(self, line: str) -> Token:\n fields: List[str] = re.split(self.column_delimiter, line)\n token = Token(fields[self.text_column])\n for column in self.column_name_map:\n if len(fields) > column:\n if column != self.text_column and self.column_name_map[column] != self.SPACE_AFTER_KEY:\n token.add_label(\n self.column_name_map[column], fields[column]\n )\n if self.column_name_map[column] == self.SPACE_AFTER_KEY and fields[column] == '-':\n token.whitespace_after = False\n return token\n\n def __line_completes_sentence(self, line: str) -> bool:\n sentence_completed = line.isspace()\n if 
self.document_separator_token:\n sentence_completed = False\n fields: List[str] = re.split(self.column_delimiter, line)\n if len(fields) >= self.text_column:\n if fields[self.text_column] == self.document_separator_token:\n sentence_completed = True\n return sentence_completed\n\n def is_in_memory(self) -> bool:\n return self.in_memory\n\n def __len__(self):\n return self.total_sentence_count\n\n def __getitem__(self, index: int = 0) -> Sentence:\n\n if self.in_memory:\n sentence = self.sentences[index]\n\n else:\n with open(str(self.path_to_column_file), encoding=self.encoding) as file:\n file.seek(self.indices[index])\n line = file.readline()\n sentence: Sentence = Sentence()\n while line:\n if self.comment_symbol is not None and line.startswith(\n self.comment_symbol\n ):\n line = file.readline()\n continue\n\n if self.__line_completes_sentence(line):\n if len(sentence) > 0:\n if self.tag_to_bioes is not None:\n sentence.convert_tag_scheme(\n tag_type=self.tag_to_bioes, target_scheme=\"iobes\"\n )\n return sentence\n\n else:\n token = self._parse_token(line)\n if not line.isspace():\n sentence.add_token(token)\n\n line = file.readline()\n return sentence\n\n\nclass BIOFID(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"lemma\", 2: \"pos\", 3: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n biofid_path = \"https://raw.githubusercontent.com/texttechnologylab/BIOfid/master/BIOfid-Dataset-NER/\"\n cached_path(f\"{biofid_path}train.conll\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{biofid_path}dev.conll\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{biofid_path}test.conll\", Path(\"datasets\") / dataset_name)\n\n super(BIOFID, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass CONLL_03(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n document_as_sequence: bool = False,\n ):\n \"\"\"\n Initialize the CoNLL-03 corpus. This is only possible if you've manually downloaded it to your machine.\n Obtain the corpus from https://www.clips.uantwerpen.be/conll2003/ner/ and put the eng.testa, .testb, .train\n files in a folder called 'conll_03'. Then set the base_path parameter in the constructor to the path to the\n parent directory where the conll_03 folder resides.\n :param base_path: Path to the CoNLL-03 corpus (i.e. 
'conll_03' folder) on your machine\n :param tag_to_bioes: NER by default, need not be changed, but you could also select 'pos' or 'np' to predict\n POS tags or chunks respectively\n :param in_memory: If True, keeps dataset in memory giving speedups in training.\n :param document_as_sequence: If True, all sentences of a document are read into a single Sentence object\n \"\"\"\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"np\", 3: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # check if data there\n if not data_folder.exists():\n log.warning(\"-\" * 100)\n log.warning(f'WARNING: CoNLL-03 dataset not found at \"{data_folder}\".')\n log.warning(\n 'Instructions for obtaining the data can be found here: https://www.clips.uantwerpen.be/conll2003/ner/\"'\n )\n log.warning(\"-\" * 100)\n\n super(CONLL_03, self).__init__(\n data_folder,\n columns,\n tag_to_bioes=tag_to_bioes,\n in_memory=in_memory,\n document_separator_token=None if not document_as_sequence else \"-DOCSTART-\",\n )\n\n\nclass CONLL_03_GERMAN(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n document_as_sequence: bool = False,\n ):\n \"\"\"\n Initialize the CoNLL-03 corpus for German. This is only possible if you've manually downloaded it to your machine.\n Obtain the corpus from https://www.clips.uantwerpen.be/conll2003/ner/ and put the respective files in a folder called\n 'conll_03_german'. Then set the base_path parameter in the constructor to the path to the parent directory where\n the conll_03_german folder resides.\n :param base_path: Path to the CoNLL-03 corpus (i.e. 'conll_03_german' folder) on your machine\n :param tag_to_bioes: NER by default, need not be changed, but you could also select 'lemma', 'pos' or 'np' to predict\n word lemmas, POS tags or chunks respectively\n :param in_memory: If True, keeps dataset in memory giving speedups in training.\n :param document_as_sequence: If True, all sentences of a document are read into a single Sentence object\n \"\"\"\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"lemma\", 2: \"pos\", 3: \"np\", 4: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # check if data there\n if not data_folder.exists():\n log.warning(\"-\" * 100)\n log.warning(f'WARNING: CoNLL-03 dataset not found at \"{data_folder}\".')\n log.warning(\n 'Instructions for obtaining the data can be found here: https://www.clips.uantwerpen.be/conll2003/ner/\"'\n )\n log.warning(\"-\" * 100)\n\n super(CONLL_03_GERMAN, self).__init__(\n data_folder,\n columns,\n tag_to_bioes=tag_to_bioes,\n in_memory=in_memory,\n document_separator_token=None if not document_as_sequence else \"-DOCSTART-\",\n )\n\n\nclass CONLL_03_DUTCH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n document_as_sequence: bool = False,\n ):\n \"\"\"\n Initialize the CoNLL-03 corpus for Dutch. 
The first time you call this constructor it will automatically\n download the dataset.\n :param base_path: Default is None, meaning that corpus gets auto-downloaded and loaded. You can override this\n to point to a different folder but typically this should not be necessary.\n :param tag_to_bioes: NER by default, need not be changed, but you could also select 'pos' to predict\n POS tags instead\n :param in_memory: If True, keeps dataset in memory giving speedups in training.\n :param document_as_sequence: If True, all sentences of a document are read into a single Sentence object\n \"\"\"\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n conll_02_path = \"https://www.clips.uantwerpen.be/conll2002/ner/data/\"\n cached_path(f\"{conll_02_path}ned.testa\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{conll_02_path}ned.testb\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{conll_02_path}ned.train\", Path(\"datasets\") / dataset_name)\n\n super(CONLL_03_DUTCH, self).__init__(\n data_folder,\n columns,\n tag_to_bioes=tag_to_bioes,\n encoding=\"latin-1\",\n in_memory=in_memory,\n document_separator_token=None if not document_as_sequence else \"-DOCSTART-\",\n )\n\n\ndef add_IOB2_tags(data_file: Union[str, Path], encoding: str = \"utf8\"):\n \"\"\"\nFunction that adds IOB2 tags if only chunk names are provided (e.g. words are tagged PER instead\nof B-PER or I-PER). Replaces '0' with 'O' as the no-chunk tag since ColumnCorpus expects\nthe letter 'O'. Additionaly it removes lines with no tags in the data file and can also\nbe used if the data is only partialy IOB tagged.\nParameters\n----------\ndata_file : Union[str, Path]\n Path to the data file.\nencoding : str, optional\n Encoding used in open function. The default is \"utf8\".\n\n\"\"\"\n with open(file=data_file, mode='r', encoding=encoding) as f:\n lines = f.readlines()\n with open(file=data_file, mode='w', encoding=encoding) as f:\n pred = 'O' # remembers tag of predecessing line\n for line in lines:\n line_list = line.split()\n if len(line_list) == 2: # word with tag\n word = line_list[0]\n tag = line_list[1]\n if tag in ['0', 'O']: # no chunk\n f.write(word + ' O\\n')\n pred = 'O'\n elif '-' not in tag: # no IOB tags\n if pred == 'O': # found a new chunk\n f.write(word + ' B-' + tag + '\\n')\n pred = tag\n else: # found further part of chunk or new chunk directly after old chunk\n if pred == tag:\n f.write(word + ' I-' + tag + '\\n')\n else:\n f.write(word + ' B-' + tag + '\\n')\n pred = tag\n else: # line already has IOB tag (tag contains '-')\n f.write(line)\n pred = tag.split('-')[1]\n elif len(line_list) == 0: # empty line\n f.write('\\n')\n pred = 'O'\n\n\nclass CONLL_03_SPANISH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n ):\n \"\"\"\n Initialize the CoNLL-03 corpus for Spanish. The first time you call this constructor it will automatically\n download the dataset.\n :param base_path: Default is None, meaning that corpus gets auto-downloaded and loaded. 
You can override this\n to point to a different folder but typically this should not be necessary.\n :param tag_to_bioes: NER by default, should not be changed\n :param in_memory: If True, keeps dataset in memory giving speedups in training.\n :param document_as_sequence: If True, all sentences of a document are read into a single Sentence object\n \"\"\"\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n conll_02_path = \"https://www.clips.uantwerpen.be/conll2002/ner/data/\"\n cached_path(f\"{conll_02_path}esp.testa\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{conll_02_path}esp.testb\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{conll_02_path}esp.train\", Path(\"datasets\") / dataset_name)\n\n super(CONLL_03_SPANISH, self).__init__(\n data_folder,\n columns,\n tag_to_bioes=tag_to_bioes,\n encoding=\"latin-1\",\n in_memory=in_memory,\n )\n\n\nclass CONLL_2000(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"np\",\n in_memory: bool = True,\n ):\n \"\"\"\n Initialize the CoNLL-2000 corpus for English chunking.\n The first time you call this constructor it will automatically download the dataset.\n :param base_path: Default is None, meaning that corpus gets auto-downloaded and loaded. You can override this\n to point to a different folder but typically this should not be necessary.\n :param tag_to_bioes: 'np' by default, should not be changed, but you can set 'pos' instead to predict POS tags\n :param in_memory: If True, keeps dataset in memory giving speedups in training.\n \"\"\"\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"np\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n conll_2000_path = \"https://www.clips.uantwerpen.be/conll2000/chunking/\"\n data_file = Path(flair.cache_root) / \"datasets\" / dataset_name / \"train.txt\"\n if not data_file.is_file():\n cached_path(\n f\"{conll_2000_path}train.txt.gz\", Path(\"datasets\") / dataset_name\n )\n cached_path(\n f\"{conll_2000_path}test.txt.gz\", Path(\"datasets\") / dataset_name\n )\n import gzip, shutil\n\n with gzip.open(\n Path(flair.cache_root) / \"datasets\" / dataset_name / \"train.txt.gz\",\n \"rb\",\n ) as f_in:\n with open(\n Path(flair.cache_root) / \"datasets\" / dataset_name / \"train.txt\",\n \"wb\",\n ) as f_out:\n shutil.copyfileobj(f_in, f_out)\n with gzip.open(\n Path(flair.cache_root) / \"datasets\" / dataset_name / \"test.txt.gz\", \"rb\"\n ) as f_in:\n with open(\n Path(flair.cache_root) / \"datasets\" / dataset_name / \"test.txt\",\n \"wb\",\n ) as f_out:\n shutil.copyfileobj(f_in, f_out)\n\n super(CONLL_2000, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass DANE(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n ):\n if type(base_path) == str:\n base_path: Path = 
Path(base_path)\n\n # column format\n columns = {1: 'text', 3: 'pos', 9: 'ner'}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n data_path = Path(flair.cache_root) / \"datasets\" / dataset_name\n train_data_file = data_path / \"ddt.train.conllu\"\n if not train_data_file.is_file():\n temp_file = cached_path(\n 'https://danlp.s3.eu-central-1.amazonaws.com/datasets/ddt.zip',\n Path(\"datasets\") / dataset_name\n )\n from zipfile import ZipFile\n\n with ZipFile(temp_file, 'r') as zip_file:\n zip_file.extractall(path=data_path)\n\n # Remove CoNLL-U meta information in the last column\n for part in ['train', 'dev', 'test']:\n lines = []\n data_file = \"ddt.{}.conllu\".format(part)\n with open(data_path / data_file, 'r') as file:\n for line in file:\n if line.startswith(\"#\") or line == \"\\n\":\n lines.append(line)\n lines.append(line.replace(\"name=\", \"\").replace(\"|SpaceAfter=No\", \"\"))\n\n with open(data_path / data_file, 'w') as file:\n file.writelines(lines)\n\n print(data_path / data_file)\n\n super(DANE, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes,\n in_memory=in_memory, comment_symbol=\"#\"\n )\n\n\nclass GERMEVAL_14(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n ):\n \"\"\"\n Initialize the GermEval NER corpus for German. This is only possible if you've manually downloaded it to your\n machine. Obtain the corpus from https://sites.google.com/site/germeval2014ner/home/ and put it into some folder.\n Then point the base_path parameter in the constructor to this folder\n :param base_path: Path to the GermEval corpus on your machine\n :param tag_to_bioes: 'ner' by default, should not be changed.\n :param in_memory:If True, keeps dataset in memory giving speedups in training.\n \"\"\"\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {1: \"text\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # check if data there\n if not data_folder.exists():\n log.warning(\"-\" * 100)\n log.warning(f'WARNING: GermEval-14 dataset not found at \"{data_folder}\".')\n log.warning(\n 'Instructions for obtaining the data can be found here: https://sites.google.com/site/germeval2014ner/home/\"'\n )\n log.warning(\"-\" * 100)\n super(GERMEVAL_14, self).__init__(\n data_folder,\n columns,\n tag_to_bioes=tag_to_bioes,\n comment_symbol=\"#\",\n in_memory=in_memory,\n )\n\n\nclass INSPEC(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"keyword\",\n in_memory: bool = True,\n ):\n\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"keyword\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n inspec_path = \"https://raw.githubusercontent.com/midas-research/keyphrase-extraction-as-sequence-labeling-data/master/Inspec\"\n 
cached_path(f\"{inspec_path}/train.txt\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{inspec_path}/test.txt\", Path(\"datasets\") / dataset_name)\n if not \"dev.txt\" in os.listdir(data_folder):\n cached_path(f\"{inspec_path}/valid.txt\", Path(\"datasets\") / dataset_name)\n # rename according to train - test - dev - convention\n os.rename(data_folder / \"valid.txt\", data_folder / \"dev.txt\")\n\n super(INSPEC, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass LER_GERMAN(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n \"\"\"\n Initialize the LER_GERMAN (Legal Entity Recognition) corpus. The first time you call this constructor it will automatically\n download the dataset.\n :param base_path: Default is None, meaning that corpus gets auto-downloaded and loaded. You can override this\n to point to a different folder but typically this should not be necessary.\n :param in_memory: If True, keeps dataset in memory giving speedups in training. Not recommended due to heavy RAM usage.\n :param document_as_sequence: If True, all sentences of a document are read into a single Sentence object\n \"\"\"\n\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n ler_path = \"https://raw.githubusercontent.com/elenanereiss/Legal-Entity-Recognition/master/data/\"\n cached_path(f\"{ler_path}ler.conll\", Path(\"datasets\") / dataset_name)\n\n super(LER_GERMAN, self).__init__(\n data_folder,\n columns,\n tag_to_bioes=tag_to_bioes,\n in_memory=in_memory,\n train_file='ler.conll'\n )\n\n\nclass NER_BASQUE(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n ner_basque_path = \"http://ixa2.si.ehu.eus/eiec/\"\n data_path = Path(flair.cache_root) / \"datasets\" / dataset_name\n data_file = data_path / \"named_ent_eu.train\"\n if not data_file.is_file():\n cached_path(\n f\"{ner_basque_path}/eiec_v1.0.tgz\", Path(\"datasets\") / dataset_name\n )\n import tarfile, shutil\n\n with tarfile.open(\n Path(flair.cache_root) / \"datasets\" / dataset_name / \"eiec_v1.0.tgz\",\n \"r:gz\",\n ) as f_in:\n corpus_files = (\n \"eiec_v1.0/named_ent_eu.train\",\n \"eiec_v1.0/named_ent_eu.test\",\n )\n for corpus_file in corpus_files:\n f_in.extract(corpus_file, data_path)\n shutil.move(f\"{data_path}/{corpus_file}\", data_path)\n\n super(NER_BASQUE, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass NER_FINNISH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n 
columns = {0: \"text\", 1: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n ner_finnish_path = \"https://raw.githubusercontent.com/mpsilfve/finer-data/master/data/digitoday.\"\n cached_path(f\"{ner_finnish_path}2014.train.csv\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{ner_finnish_path}2014.dev.csv\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{ner_finnish_path}2015.test.csv\", Path(\"datasets\") / dataset_name)\n\n _remove_lines_without_annotations(data_file=Path(data_folder / \"digitoday.2015.test.csv\"))\n\n super(NER_FINNISH, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory, skip_first_line=True\n )\n\n\ndef _remove_lines_without_annotations(data_file: Union[str, Path] = None):\n with open(data_file, 'r') as f:\n lines = f.readlines()\n with open(data_file, 'w') as f:\n for line in lines:\n if len(line.split()) != 1:\n f.write(line)\n\n\nclass NER_SWEDISH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n ):\n \"\"\"\n Initialize the NER_SWEDISH corpus for Swedish. The first time you call this constructor it will automatically\n download the dataset.\n :param base_path: Default is None, meaning that corpus gets auto-downloaded and loaded. You can override this\n to point to a different folder but typically this should not be necessary.\n :param in_memory: If True, keeps dataset in memory giving speedups in training.\n :param document_as_sequence: If True, all sentences of a document are read into a single Sentence object\n \"\"\"\n\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n ner_spraakbanken_path = \"https://raw.githubusercontent.com/klintan/swedish-ner-corpus/master/\"\n cached_path(f\"{ner_spraakbanken_path}test_corpus.txt\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{ner_spraakbanken_path}train_corpus.txt\", Path(\"datasets\") / dataset_name)\n\n # data is not in IOB2 format. 
Thus we transform it to IOB2\n add_IOB2_tags(data_file=Path(data_folder / \"test_corpus.txt\"))\n add_IOB2_tags(data_file=Path(data_folder / \"train_corpus.txt\"))\n\n super(NER_SWEDISH, self).__init__(\n data_folder,\n columns,\n tag_to_bioes=tag_to_bioes,\n in_memory=in_memory,\n )\n\n\nclass SEMEVAL2017(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"keyword\",\n in_memory: bool = True,\n ):\n\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"keyword\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n semeval2017_path = \"https://raw.githubusercontent.com/midas-research/keyphrase-extraction-as-sequence-labeling-data/master/SemEval-2017\"\n cached_path(f\"{semeval2017_path}/train.txt\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{semeval2017_path}/test.txt\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{semeval2017_path}/dev.txt\", Path(\"datasets\") / dataset_name)\n\n super(SEMEVAL2017, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass SEMEVAL2010(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"keyword\",\n in_memory: bool = True,\n ):\n\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"keyword\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n semeval2010_path = \"https://raw.githubusercontent.com/midas-research/keyphrase-extraction-as-sequence-labeling-data/master/processed_semeval-2010\"\n cached_path(f\"{semeval2010_path}/train.txt\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{semeval2010_path}/test.txt\", Path(\"datasets\") / dataset_name)\n\n super(SEMEVAL2010, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_ENGLISH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n _download_wikiner(\"en\", dataset_name)\n\n super(WIKINER_ENGLISH, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_GERMAN(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # 
download data if necessary\n _download_wikiner(\"de\", dataset_name)\n\n super(WIKINER_GERMAN, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_DUTCH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n _download_wikiner(\"nl\", dataset_name)\n\n super(WIKINER_DUTCH, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_FRENCH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n _download_wikiner(\"fr\", dataset_name)\n\n super(WIKINER_FRENCH, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_ITALIAN(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n _download_wikiner(\"it\", dataset_name)\n\n super(WIKINER_ITALIAN, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_SPANISH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n _download_wikiner(\"es\", dataset_name)\n\n super(WIKINER_SPANISH, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_PORTUGUESE(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = 
Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n _download_wikiner(\"pt\", dataset_name)\n\n super(WIKINER_PORTUGUESE, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_POLISH(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n _download_wikiner(\"pl\", dataset_name)\n\n super(WIKINER_POLISH, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WIKINER_RUSSIAN(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = False,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"pos\", 2: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n _download_wikiner(\"ru\", dataset_name)\n\n super(WIKINER_RUSSIAN, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\nclass WNUT_17(ColumnCorpus):\n def __init__(\n self,\n base_path: Union[str, Path] = None,\n tag_to_bioes: str = \"ner\",\n in_memory: bool = True,\n ):\n if type(base_path) == str:\n base_path: Path = Path(base_path)\n\n # column format\n columns = {0: \"text\", 1: \"ner\"}\n\n # this dataset name\n dataset_name = self.__class__.__name__.lower()\n\n # default dataset folder is the cache root\n if not base_path:\n base_path = Path(flair.cache_root) / \"datasets\"\n data_folder = base_path / dataset_name\n\n # download data if necessary\n wnut_path = \"https://noisy-text.github.io/2017/files/\"\n cached_path(f\"{wnut_path}wnut17train.conll\", Path(\"datasets\") / dataset_name)\n cached_path(f\"{wnut_path}emerging.dev.conll\", Path(\"datasets\") / dataset_name)\n cached_path(\n f\"{wnut_path}emerging.test.annotated\", Path(\"datasets\") / dataset_name\n )\n\n super(WNUT_17, self).__init__(\n data_folder, columns, tag_to_bioes=tag_to_bioes, in_memory=in_memory\n )\n\n\ndef _download_wikiner(language_code: str, dataset_name: str):\n # download data if necessary\n wikiner_path = (\n \"https://raw.githubusercontent.com/dice-group/FOX/master/input/Wikiner/\"\n )\n lc = language_code\n\n data_file = (\n Path(flair.cache_root)\n / \"datasets\"\n / dataset_name\n / f\"aij-wikiner-{lc}-wp3.train\"\n )\n if not data_file.is_file():\n\n cached_path(\n f\"{wikiner_path}aij-wikiner-{lc}-wp3.bz2\", Path(\"datasets\") / dataset_name\n )\n import bz2, shutil\n\n # unpack and write out in CoNLL column-like format\n bz_file = bz2.BZ2File(\n Path(flair.cache_root)\n / \"datasets\"\n / dataset_name\n / f\"aij-wikiner-{lc}-wp3.bz2\",\n \"rb\",\n )\n with bz_file as f, open(\n Path(flair.cache_root)\n / \"datasets\"\n / dataset_name\n / f\"aij-wikiner-{lc}-wp3.train\",\n \"w\",\n encoding=\"utf-8\"\n ) as out:\n 
for line in f:\n line = line.decode(\"utf-8\")\n words = line.split(\" \")\n for word in words:\n out.write(\"\\t\".join(word.split(\"|\")) + \"\\n\")\n",
"path": "flair/datasets/sequence_labeling.py"
}
] | diff --git a/flair/datasets/sequence_labeling.py b/flair/datasets/sequence_labeling.py
index 23c80881c0..fd2c859a1c 100644
--- a/flair/datasets/sequence_labeling.py
+++ b/flair/datasets/sequence_labeling.py
@@ -1310,6 +1310,7 @@ def _download_wikiner(language_code: str, dataset_name: str):
             / dataset_name
             / f"aij-wikiner-{lc}-wp3.train",
             "w",
+            encoding="utf-8"
         ) as out:
             for line in f:
                 line = line.decode("utf-8")
|
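The flair patch above adds an explicit `encoding="utf-8"` to the `open()` call that rewrites the downloaded WikiNER dump into column format. A minimal sketch of why the explicit encoding matters, using a made-up sample line in the same `word|pos|ner` layout (the file name and content are illustrative only, not part of the dataset):

```python
# Without encoding="utf-8", open() falls back to locale.getpreferredencoding(),
# which can be cp1252 on some Windows setups and raise UnicodeEncodeError
# as soon as a non-ASCII token appears.
line = "Łódź|NNP|I-LOC ligt|V|O in|Prep|O Polen|N|I-LOC"

with open("aij-wikiner-sample.train", "w", encoding="utf-8") as out:
    for word in line.split(" "):
        # Same reshaping as the flair loader: "word|pos|ner" -> tab-separated columns.
        out.write("\t".join(word.split("|")) + "\n")
```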
coala__coala-bears-900 | YapfBear: Make asciinema
@Mariatta are you interested?
| [
{
"content": "import sys\n\nfrom yapf.yapflib.yapf_api import FormatFile\n\nfrom coalib.bearlib import deprecate_settings\nfrom coalib.bearlib.spacing.SpacingHelper import SpacingHelper\nfrom coalib.bears.LocalBear import LocalBear\nfrom coalib.bears.requirements.PipRequirement import PipRequirement\nfrom coalib.misc.ContextManagers import prepare_file\nfrom coalib.results.Result import Result\nfrom coalib.results.Diff import Diff\n\n\nclass YapfBear(LocalBear):\n \"\"\"\n Check and correct formatting of Python code using ``yapf`` utility.\n\n See <https://github.com/google/yapf> for more information.\n \"\"\"\n LANGUAGES = {\"Python\", \"Python 2\", \"Python 3\"}\n AUTHORS = {'The coala developers'}\n REQUIREMENTS = {PipRequirement('yapf', '0.11')}\n AUTHORS_EMAILS = {'[email protected]'}\n LICENSE = 'AGPL-3.0'\n CAN_FIX = {'Formatting'}\n\n @deprecate_settings(indent_size='tab_width')\n def run(self, filename, file,\n max_line_length: int=79,\n indent_size: int=SpacingHelper.DEFAULT_TAB_WIDTH,\n allow_multiline_lambdas: bool=False,\n blank_line_before_nested_class_or_def: bool=False,\n continuation_tab_width: int=SpacingHelper.DEFAULT_TAB_WIDTH,\n dedent_closing_brackets: bool=False,\n indent_dictionary_value: bool=False,\n coalesce_brackets: bool=False,\n join_multiple_lines: bool=True,\n spaces_around_power_operator: bool=True,\n spaces_before_comment: int=2,\n space_between_ending_comma_and_closing_bracket: bool=False,\n split_arguments_when_comma_terminated: bool=False,\n split_before_bitwise_operator: bool=False,\n split_before_first_argument: bool=False,\n split_before_logical_operator: bool=False,\n split_before_named_assigns: bool=True,\n use_spaces: bool=True,\n based_on_style: str='pep8',\n prefer_line_break_after_opening_bracket: bool=True):\n \"\"\"\n :param max_line_length:\n Maximum number of characters for a line.\n :param indent_size:\n Number of spaces per indentation level.\n :param allow_multiline_lambdas:\n Allows lambdas to be formatted on more than one line.\n :param blank_line_before_nested_class_or_def:\n Inserts a blank line before a ``def`` or ``class`` immediately\n nested within another ``def`` or ``class``.\n :param continuation_tab_width:\n Indent width used for line continuations.\n :param dedent_closing_brackets:\n Puts closing brackets on a separate line, dedented, if the\n bracketed expression can't fit in a single line. Applies to all\n kinds of brackets, including function definitions and calls.\n :param indent_dictionary_value:\n Indents the dictionary value if it cannot fit on the same line as\n the dictionary key.\n :param coalesce_brackets:\n Prevents splitting consecutive brackets. 
Only relevant when\n ``dedent_closing_brackets`` is set.\n Example:\n If ``True``,\n\n ```\n call_func_that_takes_a_dict(\n {\n 'key1': 'value1',\n 'key2': 'value2',\n }\n )\n ```\n would reformat to:\n ```\n call_func_that_takes_a_dict({\n 'key1': 'value1',\n 'key2': 'value2',\n })\n ```\n :param join_multiple_lines:\n Joins short lines into one line.\n :param spaces_around_power_operator:\n Set to ``True`` to prefer using spaces around ``**``.\n :param spaces_before_comment:\n The number of spaces required before a trailing comment.\n :param space_between_ending_comma_and_closing_bracket:\n Inserts a space between the ending comma and closing bracket of a\n list, etc.\n :param split_arguments_when_comma_terminated:\n Splits before arguments if the argument list is terminated by a\n comma.\n :param split_before_bitwise_operator:\n Set to ``True`` to prefer splitting before ``&``, ``|`` or ``^``\n rather than after.\n :param split_before_first_argument:\n If an argument / parameter list is going to be split, then split\n before the first argument.\n :param split_before_logical_operator:\n Set to ``True`` to prefer splitting before ``and`` or ``or`` rather\n than after.\n :param split_before_named_assigns:\n Splits named assignments into individual lines.\n :param use_spaces:\n Uses spaces for indentation.\n :param based_on_style:\n The formatting style to be used as reference.\n :param prefer_line_break_after_opening_bracket:\n If True, splitting right after a open bracket will not be\n preferred.\n \"\"\"\n if not file:\n # Yapf cannot handle zero-byte files well, and adds a redundent\n # newline into the file. To avoid this, we don't parse zero-byte\n # files as they cannot have anything to format either.\n return\n\n options = \"\"\"\n[style]\nindent_width = {indent_size}\ncolumn_limit = {max_line_length}\nallow_multiline_lambdas = {allow_multiline_lambdas}\ncontinuation_indent_width = {continuation_tab_width}\ndedent_closing_brackets = {dedent_closing_brackets}\nindent_dictionary_value = {indent_dictionary_value}\njoin_multiple_lines = {join_multiple_lines}\nspaces_around_power_operator = {spaces_around_power_operator}\nspaces_before_comment = {spaces_before_comment}\ncoalesce_brackets = {coalesce_brackets}\nsplit_before_bitwise_operator = {split_before_bitwise_operator}\nsplit_before_first_argument = {split_before_first_argument}\nsplit_before_logical_operator = {split_before_logical_operator}\nsplit_before_named_assigns = {split_before_named_assigns}\nbased_on_style = {based_on_style}\nblank_line_before_nested_class_or_def = {blank_line_before_nested_class_or_def}\nsplit_arguments_when_comma_terminated = {split_arguments_when_comma_terminated}\nspace_between_ending_comma_and_closing_bracket= \\\n{space_between_ending_comma_and_closing_bracket}\n\"\"\"\n options += 'use_tabs = ' + str(not use_spaces) + \"\\n\"\n options += ('split_penalty_after_opening_bracket = ' +\n ('30' if prefer_line_break_after_opening_bracket\n else '0') + \"\\n\")\n options = options.format(**locals())\n\n try:\n with prepare_file(options.splitlines(keepends=True),\n None) as (file_, fname):\n corrected = FormatFile(filename,\n style_config=fname,\n verify=False)[0].splitlines(True)\n except SyntaxError as err:\n if isinstance(err, IndentationError):\n error_type = \"indentation errors (\" + err.args[0] + ')'\n else:\n error_type = \"syntax errors\"\n yield Result.from_values(\n self,\n \"The code cannot be parsed due to {0}.\".format(error_type),\n filename, line=err.lineno, column=err.offset)\n return\n 
diffs = Diff.from_string_arrays(file, corrected).split_diff()\n for diff in diffs:\n yield Result(self,\n \"The code does not comply with the settings \"\n \"provided.\",\n affected_code=(diff.range(filename),),\n diffs={filename: diff})\n\n @classmethod\n def check_prerequisites(cls): # pragma: no cover\n if not sys.version_info >= (3, 4):\n return 'Yapf only supports Python 2.7 and Python 3.4+'\n else:\n return True\n",
"path": "bears/python/YapfBear.py"
}
] | [
{
"content": "import sys\n\nfrom yapf.yapflib.yapf_api import FormatFile\n\nfrom coalib.bearlib import deprecate_settings\nfrom coalib.bearlib.spacing.SpacingHelper import SpacingHelper\nfrom coalib.bears.LocalBear import LocalBear\nfrom coalib.bears.requirements.PipRequirement import PipRequirement\nfrom coalib.misc.ContextManagers import prepare_file\nfrom coalib.results.Result import Result\nfrom coalib.results.Diff import Diff\n\n\nclass YapfBear(LocalBear):\n \"\"\"\n Check and correct formatting of Python code using ``yapf`` utility.\n\n See <https://github.com/google/yapf> for more information.\n \"\"\"\n LANGUAGES = {\"Python\", \"Python 2\", \"Python 3\"}\n AUTHORS = {'The coala developers'}\n REQUIREMENTS = {PipRequirement('yapf', '0.11')}\n AUTHORS_EMAILS = {'[email protected]'}\n LICENSE = 'AGPL-3.0'\n CAN_FIX = {'Formatting'}\n ASCIINEMA_URL = 'https://asciinema.org/a/89021'\n\n @deprecate_settings(indent_size='tab_width')\n def run(self, filename, file,\n max_line_length: int=79,\n indent_size: int=SpacingHelper.DEFAULT_TAB_WIDTH,\n allow_multiline_lambdas: bool=False,\n blank_line_before_nested_class_or_def: bool=False,\n continuation_tab_width: int=SpacingHelper.DEFAULT_TAB_WIDTH,\n dedent_closing_brackets: bool=False,\n indent_dictionary_value: bool=False,\n coalesce_brackets: bool=False,\n join_multiple_lines: bool=True,\n spaces_around_power_operator: bool=True,\n spaces_before_comment: int=2,\n space_between_ending_comma_and_closing_bracket: bool=False,\n split_arguments_when_comma_terminated: bool=False,\n split_before_bitwise_operator: bool=False,\n split_before_first_argument: bool=False,\n split_before_logical_operator: bool=False,\n split_before_named_assigns: bool=True,\n use_spaces: bool=True,\n based_on_style: str='pep8',\n prefer_line_break_after_opening_bracket: bool=True):\n \"\"\"\n :param max_line_length:\n Maximum number of characters for a line.\n :param indent_size:\n Number of spaces per indentation level.\n :param allow_multiline_lambdas:\n Allows lambdas to be formatted on more than one line.\n :param blank_line_before_nested_class_or_def:\n Inserts a blank line before a ``def`` or ``class`` immediately\n nested within another ``def`` or ``class``.\n :param continuation_tab_width:\n Indent width used for line continuations.\n :param dedent_closing_brackets:\n Puts closing brackets on a separate line, dedented, if the\n bracketed expression can't fit in a single line. Applies to all\n kinds of brackets, including function definitions and calls.\n :param indent_dictionary_value:\n Indents the dictionary value if it cannot fit on the same line as\n the dictionary key.\n :param coalesce_brackets:\n Prevents splitting consecutive brackets. 
Only relevant when\n ``dedent_closing_brackets`` is set.\n Example:\n If ``True``,\n\n ```\n call_func_that_takes_a_dict(\n {\n 'key1': 'value1',\n 'key2': 'value2',\n }\n )\n ```\n would reformat to:\n ```\n call_func_that_takes_a_dict({\n 'key1': 'value1',\n 'key2': 'value2',\n })\n ```\n :param join_multiple_lines:\n Joins short lines into one line.\n :param spaces_around_power_operator:\n Set to ``True`` to prefer using spaces around ``**``.\n :param spaces_before_comment:\n The number of spaces required before a trailing comment.\n :param space_between_ending_comma_and_closing_bracket:\n Inserts a space between the ending comma and closing bracket of a\n list, etc.\n :param split_arguments_when_comma_terminated:\n Splits before arguments if the argument list is terminated by a\n comma.\n :param split_before_bitwise_operator:\n Set to ``True`` to prefer splitting before ``&``, ``|`` or ``^``\n rather than after.\n :param split_before_first_argument:\n If an argument / parameter list is going to be split, then split\n before the first argument.\n :param split_before_logical_operator:\n Set to ``True`` to prefer splitting before ``and`` or ``or`` rather\n than after.\n :param split_before_named_assigns:\n Splits named assignments into individual lines.\n :param use_spaces:\n Uses spaces for indentation.\n :param based_on_style:\n The formatting style to be used as reference.\n :param prefer_line_break_after_opening_bracket:\n If True, splitting right after a open bracket will not be\n preferred.\n \"\"\"\n if not file:\n # Yapf cannot handle zero-byte files well, and adds a redundent\n # newline into the file. To avoid this, we don't parse zero-byte\n # files as they cannot have anything to format either.\n return\n\n options = \"\"\"\n[style]\nindent_width = {indent_size}\ncolumn_limit = {max_line_length}\nallow_multiline_lambdas = {allow_multiline_lambdas}\ncontinuation_indent_width = {continuation_tab_width}\ndedent_closing_brackets = {dedent_closing_brackets}\nindent_dictionary_value = {indent_dictionary_value}\njoin_multiple_lines = {join_multiple_lines}\nspaces_around_power_operator = {spaces_around_power_operator}\nspaces_before_comment = {spaces_before_comment}\ncoalesce_brackets = {coalesce_brackets}\nsplit_before_bitwise_operator = {split_before_bitwise_operator}\nsplit_before_first_argument = {split_before_first_argument}\nsplit_before_logical_operator = {split_before_logical_operator}\nsplit_before_named_assigns = {split_before_named_assigns}\nbased_on_style = {based_on_style}\nblank_line_before_nested_class_or_def = {blank_line_before_nested_class_or_def}\nsplit_arguments_when_comma_terminated = {split_arguments_when_comma_terminated}\nspace_between_ending_comma_and_closing_bracket= \\\n{space_between_ending_comma_and_closing_bracket}\n\"\"\"\n options += 'use_tabs = ' + str(not use_spaces) + \"\\n\"\n options += ('split_penalty_after_opening_bracket = ' +\n ('30' if prefer_line_break_after_opening_bracket\n else '0') + \"\\n\")\n options = options.format(**locals())\n\n try:\n with prepare_file(options.splitlines(keepends=True),\n None) as (file_, fname):\n corrected = FormatFile(filename,\n style_config=fname,\n verify=False)[0].splitlines(True)\n except SyntaxError as err:\n if isinstance(err, IndentationError):\n error_type = \"indentation errors (\" + err.args[0] + ')'\n else:\n error_type = \"syntax errors\"\n yield Result.from_values(\n self,\n \"The code cannot be parsed due to {0}.\".format(error_type),\n filename, line=err.lineno, column=err.offset)\n return\n 
diffs = Diff.from_string_arrays(file, corrected).split_diff()\n for diff in diffs:\n yield Result(self,\n \"The code does not comply with the settings \"\n \"provided.\",\n affected_code=(diff.range(filename),),\n diffs={filename: diff})\n\n @classmethod\n def check_prerequisites(cls): # pragma: no cover\n if not sys.version_info >= (3, 4):\n return 'Yapf only supports Python 2.7 and Python 3.4+'\n else:\n return True\n",
"path": "bears/python/YapfBear.py"
}
] | diff --git a/bears/python/YapfBear.py b/bears/python/YapfBear.py
index 03d686596b..d02c898f1f 100644
--- a/bears/python/YapfBear.py
+++ b/bears/python/YapfBear.py
@@ -23,6 +23,7 @@ class YapfBear(LocalBear):
     AUTHORS_EMAILS = {'[email protected]'}
     LICENSE = 'AGPL-3.0'
     CAN_FIX = {'Formatting'}
+    ASCIINEMA_URL = 'https://asciinema.org/a/89021'
 
     @deprecate_settings(indent_size='tab_width')
     def run(self, filename, file,
|
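The one-line fix above attaches a recorded terminal demo to the bear through the `ASCIINEMA_URL` class attribute. A hedged sketch of the pattern on a stripped-down bear (the class name and the empty `run` body are invented for illustration; it assumes coala's `coalib` package is installed, and it reuses the URL from the diff above):

```python
from coalib.bears.LocalBear import LocalBear


class DemoBear(LocalBear):
    """Toy bear illustrating the ASCIINEMA_URL metadata attribute."""

    LANGUAGES = {"Python"}
    AUTHORS = {'The coala developers'}
    LICENSE = 'AGPL-3.0'
    # Link to a recorded terminal session that demonstrates the bear.
    ASCIINEMA_URL = 'https://asciinema.org/a/89021'

    def run(self, filename, file):
        # A real bear would yield Result objects here; the demo bear does nothing.
        return
```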
getpelican__pelican-2521 | WARNING: Docutils has no localization for 'english'. Using 'en' instead.
1. pipenv install pelican markdown
2. pelican-quickstart
3. create an article in content
4. run pelican
**Expected**: Clean run and output created
**Observed**: Warning
> WARNING: Docutils has no localization for 'english'. Using 'en' instead.
When I change DEFAULT_LANG = 'English' in my settings to DEFAULT_LANG = 'en', it runs fine.
Should I PR that as a fix, or is there some reason it is 'English' and not 'en'?
| [
{
"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import print_function, unicode_literals\n\nimport argparse\nimport codecs\nimport locale\nimport os\nimport sys\n\nfrom jinja2 import Environment, FileSystemLoader\n\nimport pytz\n\ntry:\n import readline # NOQA\nexcept ImportError:\n pass\n\ntry:\n import tzlocal\n _DEFAULT_TIMEZONE = tzlocal.get_localzone().zone\nexcept ImportError:\n _DEFAULT_TIMEZONE = 'Europe/Paris'\n\nimport six\n\nfrom pelican import __version__\n\nlocale.setlocale(locale.LC_ALL, '')\ntry:\n _DEFAULT_LANGUAGE = locale.getlocale()[0]\nexcept ValueError:\n # Don't fail on macosx: \"unknown locale: UTF-8\"\n _DEFAULT_LANGUAGE = None\nif _DEFAULT_LANGUAGE is None:\n _DEFAULT_LANGUAGE = 'English'\nelse:\n _DEFAULT_LANGUAGE = _DEFAULT_LANGUAGE.split('_')[0]\n\n_TEMPLATES_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)),\n \"templates\")\n_jinja_env = Environment(\n loader=FileSystemLoader(_TEMPLATES_DIR),\n trim_blocks=True,\n)\n\n\n_GITHUB_PAGES_BRANCHES = {\n 'personal': 'master',\n 'project': 'gh-pages'\n}\n\nCONF = {\n 'pelican': 'pelican',\n 'pelicanopts': '',\n 'basedir': os.curdir,\n 'ftp_host': 'localhost',\n 'ftp_user': 'anonymous',\n 'ftp_target_dir': '/',\n 'ssh_host': 'localhost',\n 'ssh_port': 22,\n 'ssh_user': 'root',\n 'ssh_target_dir': '/var/www',\n 's3_bucket': 'my_s3_bucket',\n 'cloudfiles_username': 'my_rackspace_username',\n 'cloudfiles_api_key': 'my_rackspace_api_key',\n 'cloudfiles_container': 'my_cloudfiles_container',\n 'dropbox_dir': '~/Dropbox/Public/',\n 'github_pages_branch': _GITHUB_PAGES_BRANCHES['project'],\n 'default_pagination': 10,\n 'siteurl': '',\n 'lang': _DEFAULT_LANGUAGE,\n 'timezone': _DEFAULT_TIMEZONE\n}\n\n# url for list of valid timezones\n_TZ_URL = 'http://en.wikipedia.org/wiki/List_of_tz_database_time_zones'\n\n\ndef _input_compat(prompt):\n if six.PY3:\n r = input(prompt)\n else:\n r = raw_input(prompt)\n return r\n\n\nif six.PY3:\n str_compat = str\nelse:\n str_compat = unicode\n\n\n# Create a 'marked' default path, to determine if someone has supplied\n# a path on the command-line.\nclass _DEFAULT_PATH_TYPE(str_compat):\n is_default_path = True\n\n\n_DEFAULT_PATH = _DEFAULT_PATH_TYPE(os.curdir)\n\n\ndef decoding_strings(f):\n def wrapper(*args, **kwargs):\n out = f(*args, **kwargs)\n if isinstance(out, six.string_types) and not six.PY3:\n # todo: make encoding configurable?\n if six.PY3:\n return out\n else:\n return out.decode(sys.stdin.encoding)\n return out\n return wrapper\n\n\n@decoding_strings\ndef ask(question, answer=str_compat, default=None, length=None):\n if answer == str_compat:\n r = ''\n while True:\n if default:\n r = _input_compat('> {0} [{1}] '.format(question, default))\n else:\n r = _input_compat('> {0} '.format(question, default))\n\n r = r.strip()\n\n if len(r) <= 0:\n if default:\n r = default\n break\n else:\n print('You must enter something')\n else:\n if length and len(r) != length:\n print('Entry must be {0} characters long'.format(length))\n else:\n break\n\n return r\n\n elif answer == bool:\n r = None\n while True:\n if default is True:\n r = _input_compat('> {0} (Y/n) '.format(question))\n elif default is False:\n r = _input_compat('> {0} (y/N) '.format(question))\n else:\n r = _input_compat('> {0} (y/n) '.format(question))\n\n r = r.strip().lower()\n\n if r in ('y', 'yes'):\n r = True\n break\n elif r in ('n', 'no'):\n r = False\n break\n elif not r:\n r = default\n break\n else:\n print(\"You must answer 'yes' or 'no'\")\n return r\n elif answer == int:\n 
r = None\n while True:\n if default:\n r = _input_compat('> {0} [{1}] '.format(question, default))\n else:\n r = _input_compat('> {0} '.format(question))\n\n r = r.strip()\n\n if not r:\n r = default\n break\n\n try:\n r = int(r)\n break\n except ValueError:\n print('You must enter an integer')\n return r\n else:\n raise NotImplementedError(\n 'Argument `answer` must be str_compat, bool, or integer')\n\n\ndef ask_timezone(question, default, tzurl):\n \"\"\"Prompt for time zone and validate input\"\"\"\n lower_tz = [tz.lower() for tz in pytz.all_timezones]\n while True:\n r = ask(question, str_compat, default)\n r = r.strip().replace(' ', '_').lower()\n if r in lower_tz:\n r = pytz.all_timezones[lower_tz.index(r)]\n break\n else:\n print('Please enter a valid time zone:\\n'\n ' (check [{0}])'.format(tzurl))\n return r\n\n\ndef main():\n parser = argparse.ArgumentParser(\n description=\"A kickstarter for Pelican\",\n formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n parser.add_argument('-p', '--path', default=_DEFAULT_PATH,\n help=\"The path to generate the blog into\")\n parser.add_argument('-t', '--title', metavar=\"title\",\n help='Set the title of the website')\n parser.add_argument('-a', '--author', metavar=\"author\",\n help='Set the author name of the website')\n parser.add_argument('-l', '--lang', metavar=\"lang\",\n help='Set the default web site language')\n\n args = parser.parse_args()\n\n print('''Welcome to pelican-quickstart v{v}.\n\nThis script will help you create a new Pelican-based website.\n\nPlease answer the following questions so this script can generate the files\nneeded by Pelican.\n\n '''.format(v=__version__))\n\n project = os.path.join(\n os.environ.get('VIRTUAL_ENV', os.curdir), '.project')\n no_path_was_specified = hasattr(args.path, 'is_default_path')\n if os.path.isfile(project) and no_path_was_specified:\n CONF['basedir'] = open(project, 'r').read().rstrip(\"\\n\")\n print('Using project associated with current virtual environment.'\n 'Will save to:\\n%s\\n' % CONF['basedir'])\n else:\n CONF['basedir'] = os.path.abspath(os.path.expanduser(\n ask('Where do you want to create your new web site?',\n answer=str_compat, default=args.path)))\n\n CONF['sitename'] = ask('What will be the title of this web site?',\n answer=str_compat, default=args.title)\n CONF['author'] = ask('Who will be the author of this web site?',\n answer=str_compat, default=args.author)\n CONF['lang'] = ask('What will be the default language of this web site?',\n str_compat, args.lang or CONF['lang'], 2)\n\n if ask('Do you want to specify a URL prefix? e.g., https://example.com ',\n answer=bool, default=True):\n CONF['siteurl'] = ask('What is your URL prefix? 
(see '\n 'above example; no trailing slash)',\n str_compat, CONF['siteurl'])\n\n CONF['with_pagination'] = ask('Do you want to enable article pagination?',\n bool, bool(CONF['default_pagination']))\n\n if CONF['with_pagination']:\n CONF['default_pagination'] = ask('How many articles per page '\n 'do you want?',\n int, CONF['default_pagination'])\n else:\n CONF['default_pagination'] = False\n\n CONF['timezone'] = ask_timezone('What is your time zone?',\n CONF['timezone'], _TZ_URL)\n\n automation = ask('Do you want to generate a tasks.py/Makefile '\n 'to automate generation and publishing?', bool, True)\n\n if automation:\n if ask('Do you want to upload your website using FTP?',\n answer=bool, default=False):\n CONF['ftp'] = True,\n CONF['ftp_host'] = ask('What is the hostname of your FTP server?',\n str_compat, CONF['ftp_host'])\n CONF['ftp_user'] = ask('What is your username on that server?',\n str_compat, CONF['ftp_user'])\n CONF['ftp_target_dir'] = ask('Where do you want to put your '\n 'web site on that server?',\n str_compat, CONF['ftp_target_dir'])\n if ask('Do you want to upload your website using SSH?',\n answer=bool, default=False):\n CONF['ssh'] = True,\n CONF['ssh_host'] = ask('What is the hostname of your SSH server?',\n str_compat, CONF['ssh_host'])\n CONF['ssh_port'] = ask('What is the port of your SSH server?',\n int, CONF['ssh_port'])\n CONF['ssh_user'] = ask('What is your username on that server?',\n str_compat, CONF['ssh_user'])\n CONF['ssh_target_dir'] = ask('Where do you want to put your '\n 'web site on that server?',\n str_compat, CONF['ssh_target_dir'])\n\n if ask('Do you want to upload your website using Dropbox?',\n answer=bool, default=False):\n CONF['dropbox'] = True,\n CONF['dropbox_dir'] = ask('Where is your Dropbox directory?',\n str_compat, CONF['dropbox_dir'])\n\n if ask('Do you want to upload your website using S3?',\n answer=bool, default=False):\n CONF['s3'] = True,\n CONF['s3_bucket'] = ask('What is the name of your S3 bucket?',\n str_compat, CONF['s3_bucket'])\n\n if ask('Do you want to upload your website using '\n 'Rackspace Cloud Files?', answer=bool, default=False):\n CONF['cloudfiles'] = True,\n CONF['cloudfiles_username'] = ask('What is your Rackspace '\n 'Cloud username?', str_compat,\n CONF['cloudfiles_username'])\n CONF['cloudfiles_api_key'] = ask('What is your Rackspace '\n 'Cloud API key?', str_compat,\n CONF['cloudfiles_api_key'])\n CONF['cloudfiles_container'] = ask('What is the name of your '\n 'Cloud Files container?',\n str_compat,\n CONF['cloudfiles_container'])\n\n if ask('Do you want to upload your website using GitHub Pages?',\n answer=bool, default=False):\n CONF['github'] = True,\n if ask('Is this your personal page (username.github.io)?',\n answer=bool, default=False):\n CONF['github_pages_branch'] = \\\n _GITHUB_PAGES_BRANCHES['personal']\n else:\n CONF['github_pages_branch'] = \\\n _GITHUB_PAGES_BRANCHES['project']\n\n try:\n os.makedirs(os.path.join(CONF['basedir'], 'content'))\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n os.makedirs(os.path.join(CONF['basedir'], 'output'))\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'pelicanconf.py'),\n 'w', 'utf-8') as fd:\n conf_python = dict()\n for key, value in CONF.items():\n conf_python[key] = repr(value)\n\n _template = _jinja_env.get_template('pelicanconf.py.jinja2')\n fd.write(_template.render(**conf_python))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n with 
codecs.open(os.path.join(CONF['basedir'], 'publishconf.py'),\n 'w', 'utf-8') as fd:\n _template = _jinja_env.get_template('publishconf.py.jinja2')\n fd.write(_template.render(**CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n if automation:\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'tasks.py'),\n 'w', 'utf-8') as fd:\n _template = _jinja_env.get_template('tasks.py.jinja2')\n fd.write(_template.render(**CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'Makefile'),\n 'w', 'utf-8') as fd:\n py_v = 'python'\n if six.PY3:\n py_v = 'python3'\n _template = _jinja_env.get_template('Makefile.jinja2')\n fd.write(_template.render(py_v=py_v, **CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n print('Done. Your new project is available at %s' % CONF['basedir'])\n\n\nif __name__ == \"__main__\":\n main()\n",
"path": "pelican/tools/pelican_quickstart.py"
}
] | [
{
"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import print_function, unicode_literals\n\nimport argparse\nimport codecs\nimport locale\nimport os\nimport sys\n\nfrom jinja2 import Environment, FileSystemLoader\n\nimport pytz\n\ntry:\n import readline # NOQA\nexcept ImportError:\n pass\n\ntry:\n import tzlocal\n _DEFAULT_TIMEZONE = tzlocal.get_localzone().zone\nexcept ImportError:\n _DEFAULT_TIMEZONE = 'Europe/Paris'\n\nimport six\n\nfrom pelican import __version__\n\nlocale.setlocale(locale.LC_ALL, '')\ntry:\n _DEFAULT_LANGUAGE = locale.getlocale()[0]\nexcept ValueError:\n # Don't fail on macosx: \"unknown locale: UTF-8\"\n _DEFAULT_LANGUAGE = None\nif _DEFAULT_LANGUAGE is None:\n _DEFAULT_LANGUAGE = 'en'\nelse:\n _DEFAULT_LANGUAGE = _DEFAULT_LANGUAGE.split('_')[0]\n\n_TEMPLATES_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)),\n \"templates\")\n_jinja_env = Environment(\n loader=FileSystemLoader(_TEMPLATES_DIR),\n trim_blocks=True,\n)\n\n\n_GITHUB_PAGES_BRANCHES = {\n 'personal': 'master',\n 'project': 'gh-pages'\n}\n\nCONF = {\n 'pelican': 'pelican',\n 'pelicanopts': '',\n 'basedir': os.curdir,\n 'ftp_host': 'localhost',\n 'ftp_user': 'anonymous',\n 'ftp_target_dir': '/',\n 'ssh_host': 'localhost',\n 'ssh_port': 22,\n 'ssh_user': 'root',\n 'ssh_target_dir': '/var/www',\n 's3_bucket': 'my_s3_bucket',\n 'cloudfiles_username': 'my_rackspace_username',\n 'cloudfiles_api_key': 'my_rackspace_api_key',\n 'cloudfiles_container': 'my_cloudfiles_container',\n 'dropbox_dir': '~/Dropbox/Public/',\n 'github_pages_branch': _GITHUB_PAGES_BRANCHES['project'],\n 'default_pagination': 10,\n 'siteurl': '',\n 'lang': _DEFAULT_LANGUAGE,\n 'timezone': _DEFAULT_TIMEZONE\n}\n\n# url for list of valid timezones\n_TZ_URL = 'http://en.wikipedia.org/wiki/List_of_tz_database_time_zones'\n\n\ndef _input_compat(prompt):\n if six.PY3:\n r = input(prompt)\n else:\n r = raw_input(prompt)\n return r\n\n\nif six.PY3:\n str_compat = str\nelse:\n str_compat = unicode\n\n\n# Create a 'marked' default path, to determine if someone has supplied\n# a path on the command-line.\nclass _DEFAULT_PATH_TYPE(str_compat):\n is_default_path = True\n\n\n_DEFAULT_PATH = _DEFAULT_PATH_TYPE(os.curdir)\n\n\ndef decoding_strings(f):\n def wrapper(*args, **kwargs):\n out = f(*args, **kwargs)\n if isinstance(out, six.string_types) and not six.PY3:\n # todo: make encoding configurable?\n if six.PY3:\n return out\n else:\n return out.decode(sys.stdin.encoding)\n return out\n return wrapper\n\n\n@decoding_strings\ndef ask(question, answer=str_compat, default=None, length=None):\n if answer == str_compat:\n r = ''\n while True:\n if default:\n r = _input_compat('> {0} [{1}] '.format(question, default))\n else:\n r = _input_compat('> {0} '.format(question, default))\n\n r = r.strip()\n\n if len(r) <= 0:\n if default:\n r = default\n break\n else:\n print('You must enter something')\n else:\n if length and len(r) != length:\n print('Entry must be {0} characters long'.format(length))\n else:\n break\n\n return r\n\n elif answer == bool:\n r = None\n while True:\n if default is True:\n r = _input_compat('> {0} (Y/n) '.format(question))\n elif default is False:\n r = _input_compat('> {0} (y/N) '.format(question))\n else:\n r = _input_compat('> {0} (y/n) '.format(question))\n\n r = r.strip().lower()\n\n if r in ('y', 'yes'):\n r = True\n break\n elif r in ('n', 'no'):\n r = False\n break\n elif not r:\n r = default\n break\n else:\n print(\"You must answer 'yes' or 'no'\")\n return r\n elif answer == int:\n r = 
None\n while True:\n if default:\n r = _input_compat('> {0} [{1}] '.format(question, default))\n else:\n r = _input_compat('> {0} '.format(question))\n\n r = r.strip()\n\n if not r:\n r = default\n break\n\n try:\n r = int(r)\n break\n except ValueError:\n print('You must enter an integer')\n return r\n else:\n raise NotImplementedError(\n 'Argument `answer` must be str_compat, bool, or integer')\n\n\ndef ask_timezone(question, default, tzurl):\n \"\"\"Prompt for time zone and validate input\"\"\"\n lower_tz = [tz.lower() for tz in pytz.all_timezones]\n while True:\n r = ask(question, str_compat, default)\n r = r.strip().replace(' ', '_').lower()\n if r in lower_tz:\n r = pytz.all_timezones[lower_tz.index(r)]\n break\n else:\n print('Please enter a valid time zone:\\n'\n ' (check [{0}])'.format(tzurl))\n return r\n\n\ndef main():\n parser = argparse.ArgumentParser(\n description=\"A kickstarter for Pelican\",\n formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n parser.add_argument('-p', '--path', default=_DEFAULT_PATH,\n help=\"The path to generate the blog into\")\n parser.add_argument('-t', '--title', metavar=\"title\",\n help='Set the title of the website')\n parser.add_argument('-a', '--author', metavar=\"author\",\n help='Set the author name of the website')\n parser.add_argument('-l', '--lang', metavar=\"lang\",\n help='Set the default web site language')\n\n args = parser.parse_args()\n\n print('''Welcome to pelican-quickstart v{v}.\n\nThis script will help you create a new Pelican-based website.\n\nPlease answer the following questions so this script can generate the files\nneeded by Pelican.\n\n '''.format(v=__version__))\n\n project = os.path.join(\n os.environ.get('VIRTUAL_ENV', os.curdir), '.project')\n no_path_was_specified = hasattr(args.path, 'is_default_path')\n if os.path.isfile(project) and no_path_was_specified:\n CONF['basedir'] = open(project, 'r').read().rstrip(\"\\n\")\n print('Using project associated with current virtual environment.'\n 'Will save to:\\n%s\\n' % CONF['basedir'])\n else:\n CONF['basedir'] = os.path.abspath(os.path.expanduser(\n ask('Where do you want to create your new web site?',\n answer=str_compat, default=args.path)))\n\n CONF['sitename'] = ask('What will be the title of this web site?',\n answer=str_compat, default=args.title)\n CONF['author'] = ask('Who will be the author of this web site?',\n answer=str_compat, default=args.author)\n CONF['lang'] = ask('What will be the default language of this web site?',\n str_compat, args.lang or CONF['lang'], 2)\n\n if ask('Do you want to specify a URL prefix? e.g., https://example.com ',\n answer=bool, default=True):\n CONF['siteurl'] = ask('What is your URL prefix? 
(see '\n 'above example; no trailing slash)',\n str_compat, CONF['siteurl'])\n\n CONF['with_pagination'] = ask('Do you want to enable article pagination?',\n bool, bool(CONF['default_pagination']))\n\n if CONF['with_pagination']:\n CONF['default_pagination'] = ask('How many articles per page '\n 'do you want?',\n int, CONF['default_pagination'])\n else:\n CONF['default_pagination'] = False\n\n CONF['timezone'] = ask_timezone('What is your time zone?',\n CONF['timezone'], _TZ_URL)\n\n automation = ask('Do you want to generate a tasks.py/Makefile '\n 'to automate generation and publishing?', bool, True)\n\n if automation:\n if ask('Do you want to upload your website using FTP?',\n answer=bool, default=False):\n CONF['ftp'] = True,\n CONF['ftp_host'] = ask('What is the hostname of your FTP server?',\n str_compat, CONF['ftp_host'])\n CONF['ftp_user'] = ask('What is your username on that server?',\n str_compat, CONF['ftp_user'])\n CONF['ftp_target_dir'] = ask('Where do you want to put your '\n 'web site on that server?',\n str_compat, CONF['ftp_target_dir'])\n if ask('Do you want to upload your website using SSH?',\n answer=bool, default=False):\n CONF['ssh'] = True,\n CONF['ssh_host'] = ask('What is the hostname of your SSH server?',\n str_compat, CONF['ssh_host'])\n CONF['ssh_port'] = ask('What is the port of your SSH server?',\n int, CONF['ssh_port'])\n CONF['ssh_user'] = ask('What is your username on that server?',\n str_compat, CONF['ssh_user'])\n CONF['ssh_target_dir'] = ask('Where do you want to put your '\n 'web site on that server?',\n str_compat, CONF['ssh_target_dir'])\n\n if ask('Do you want to upload your website using Dropbox?',\n answer=bool, default=False):\n CONF['dropbox'] = True,\n CONF['dropbox_dir'] = ask('Where is your Dropbox directory?',\n str_compat, CONF['dropbox_dir'])\n\n if ask('Do you want to upload your website using S3?',\n answer=bool, default=False):\n CONF['s3'] = True,\n CONF['s3_bucket'] = ask('What is the name of your S3 bucket?',\n str_compat, CONF['s3_bucket'])\n\n if ask('Do you want to upload your website using '\n 'Rackspace Cloud Files?', answer=bool, default=False):\n CONF['cloudfiles'] = True,\n CONF['cloudfiles_username'] = ask('What is your Rackspace '\n 'Cloud username?', str_compat,\n CONF['cloudfiles_username'])\n CONF['cloudfiles_api_key'] = ask('What is your Rackspace '\n 'Cloud API key?', str_compat,\n CONF['cloudfiles_api_key'])\n CONF['cloudfiles_container'] = ask('What is the name of your '\n 'Cloud Files container?',\n str_compat,\n CONF['cloudfiles_container'])\n\n if ask('Do you want to upload your website using GitHub Pages?',\n answer=bool, default=False):\n CONF['github'] = True,\n if ask('Is this your personal page (username.github.io)?',\n answer=bool, default=False):\n CONF['github_pages_branch'] = \\\n _GITHUB_PAGES_BRANCHES['personal']\n else:\n CONF['github_pages_branch'] = \\\n _GITHUB_PAGES_BRANCHES['project']\n\n try:\n os.makedirs(os.path.join(CONF['basedir'], 'content'))\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n os.makedirs(os.path.join(CONF['basedir'], 'output'))\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'pelicanconf.py'),\n 'w', 'utf-8') as fd:\n conf_python = dict()\n for key, value in CONF.items():\n conf_python[key] = repr(value)\n\n _template = _jinja_env.get_template('pelicanconf.py.jinja2')\n fd.write(_template.render(**conf_python))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n with 
codecs.open(os.path.join(CONF['basedir'], 'publishconf.py'),\n 'w', 'utf-8') as fd:\n _template = _jinja_env.get_template('publishconf.py.jinja2')\n fd.write(_template.render(**CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n if automation:\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'tasks.py'),\n 'w', 'utf-8') as fd:\n _template = _jinja_env.get_template('tasks.py.jinja2')\n fd.write(_template.render(**CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'Makefile'),\n 'w', 'utf-8') as fd:\n py_v = 'python'\n if six.PY3:\n py_v = 'python3'\n _template = _jinja_env.get_template('Makefile.jinja2')\n fd.write(_template.render(py_v=py_v, **CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n print('Done. Your new project is available at %s' % CONF['basedir'])\n\n\nif __name__ == \"__main__\":\n main()\n",
"path": "pelican/tools/pelican_quickstart.py"
}
] | diff --git a/pelican/tools/pelican_quickstart.py b/pelican/tools/pelican_quickstart.py
index 529eeb527..4a6b8cbc3 100755
--- a/pelican/tools/pelican_quickstart.py
+++ b/pelican/tools/pelican_quickstart.py
@@ -34,7 +34,7 @@
# Don't fail on macosx: "unknown locale: UTF-8"
_DEFAULT_LANGUAGE = None
if _DEFAULT_LANGUAGE is None:
- _DEFAULT_LANGUAGE = 'English'
+ _DEFAULT_LANGUAGE = 'en'
else:
_DEFAULT_LANGUAGE = _DEFAULT_LANGUAGE.split('_')[0]
|
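The pelican-quickstart diff above swaps the fallback value 'English' for the locale code 'en', so the fallback has the same shape as the `xx_YY` codes produced by locale detection. A minimal sketch of that detection-plus-fallback pattern, assuming the standard-library `locale` module is what the (only partially shown) file uses:

```
import locale

# Sketch of the fallback pattern the diff targets: derive a two-letter
# language code from the system locale and fall back to a code ('en'),
# not a display name ('English'), when detection fails.
try:
    _DEFAULT_LANGUAGE = locale.getdefaultlocale()[0]  # e.g. 'en_US', or None
except ValueError:
    # Don't fail on macosx: "unknown locale: UTF-8"
    _DEFAULT_LANGUAGE = None

if _DEFAULT_LANGUAGE is None:
    _DEFAULT_LANGUAGE = 'en'  # a code, matching the split below
else:
    _DEFAULT_LANGUAGE = _DEFAULT_LANGUAGE.split('_')[0]  # 'en_US' -> 'en'
```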
huggingface__huggingface_hub-790 | Support python=3.10
Python 3.10 has been out for a while, but we do not seem to test against it. What are the roadblocks to supporting 3.10 and perhaps deprecating 3.6? (Many packages now support 3.8-3.10 and no longer support older versions.)
Ping @LysandreJik @osanseviero maybe?
| [
{
"content": "from setuptools import find_packages, setup\n\n\ndef get_version() -> str:\n rel_path = \"src/huggingface_hub/__init__.py\"\n with open(rel_path, \"r\") as fp:\n for line in fp.read().splitlines():\n if line.startswith(\"__version__\"):\n delim = '\"' if '\"' in line else \"'\"\n return line.split(delim)[1]\n raise RuntimeError(\"Unable to find version string.\")\n\n\ninstall_requires = [\n \"filelock\",\n \"requests\",\n \"tqdm\",\n \"pyyaml\",\n \"typing-extensions>=3.7.4.3\", # to be able to import TypeAlias\n \"importlib_metadata;python_version<'3.8'\",\n \"packaging>=20.9\",\n]\n\nextras = {}\n\nextras[\"torch\"] = [\n \"torch\",\n]\n\nextras[\"tensorflow\"] = [\n \"tensorflow\",\n \"pydot\",\n \"graphviz\"\n]\n\nextras[\"testing\"] = [\n \"pytest\",\n \"datasets\",\n \"soundfile\",\n]\n\nextras[\"quality\"] = [\n \"black~=22.0\",\n \"isort>=5.5.4\",\n \"flake8>=3.8.3\",\n]\n\nextras[\"all\"] = extras[\"testing\"] + extras[\"quality\"]\n\nextras[\"dev\"] = extras[\"all\"]\n\n\nsetup(\n name=\"huggingface_hub\",\n version=get_version(),\n author=\"Hugging Face, Inc.\",\n author_email=\"[email protected]\",\n description=\"Client library to download and publish models on the huggingface.co hub\",\n long_description=open(\"README.md\", \"r\", encoding=\"utf-8\").read(),\n long_description_content_type=\"text/markdown\",\n keywords=\"model-hub machine-learning models natural-language-processing deep-learning pytorch pretrained-models\",\n license=\"Apache\",\n url=\"https://github.com/huggingface/huggingface_hub\",\n package_dir={\"\": \"src\"},\n packages=find_packages(\"src\"),\n extras_require=extras,\n entry_points={\n \"console_scripts\": [\n \"huggingface-cli=huggingface_hub.commands.huggingface_cli:main\"\n ]\n },\n python_requires=\">=3.6.0\",\n install_requires=install_requires,\n classifiers=[\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n",
"path": "setup.py"
}
] | [
{
"content": "from setuptools import find_packages, setup\n\n\ndef get_version() -> str:\n rel_path = \"src/huggingface_hub/__init__.py\"\n with open(rel_path, \"r\") as fp:\n for line in fp.read().splitlines():\n if line.startswith(\"__version__\"):\n delim = '\"' if '\"' in line else \"'\"\n return line.split(delim)[1]\n raise RuntimeError(\"Unable to find version string.\")\n\n\ninstall_requires = [\n \"filelock\",\n \"requests\",\n \"tqdm\",\n \"pyyaml\",\n \"typing-extensions>=3.7.4.3\", # to be able to import TypeAlias\n \"importlib_metadata;python_version<'3.8'\",\n \"packaging>=20.9\",\n]\n\nextras = {}\n\nextras[\"torch\"] = [\n \"torch\",\n]\n\nextras[\"tensorflow\"] = [\n \"tensorflow\",\n \"pydot\",\n \"graphviz\"\n]\n\nextras[\"testing\"] = [\n \"pytest\",\n \"datasets\",\n \"soundfile\",\n]\n\nextras[\"quality\"] = [\n \"black~=22.0\",\n \"isort>=5.5.4\",\n \"flake8>=3.8.3\",\n]\n\nextras[\"all\"] = extras[\"testing\"] + extras[\"quality\"]\n\nextras[\"dev\"] = extras[\"all\"]\n\n\nsetup(\n name=\"huggingface_hub\",\n version=get_version(),\n author=\"Hugging Face, Inc.\",\n author_email=\"[email protected]\",\n description=\"Client library to download and publish models on the huggingface.co hub\",\n long_description=open(\"README.md\", \"r\", encoding=\"utf-8\").read(),\n long_description_content_type=\"text/markdown\",\n keywords=\"model-hub machine-learning models natural-language-processing deep-learning pytorch pretrained-models\",\n license=\"Apache\",\n url=\"https://github.com/huggingface/huggingface_hub\",\n package_dir={\"\": \"src\"},\n packages=find_packages(\"src\"),\n extras_require=extras,\n entry_points={\n \"console_scripts\": [\n \"huggingface-cli=huggingface_hub.commands.huggingface_cli:main\"\n ]\n },\n python_requires=\">=3.7.0\",\n install_requires=install_requires,\n classifiers=[\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n",
"path": "setup.py"
}
] | diff --git a/.github/workflows/python-tests.yml b/.github/workflows/python-tests.yml
index 37df59bca1..71bd60054a 100644
--- a/.github/workflows/python-tests.yml
+++ b/.github/workflows/python-tests.yml
@@ -22,7 +22,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
- python-version: ["3.6", "3.9"]
+ python-version: ["3.7", "3.10"]
test_repository: ["Repository only", "Everything else"]
steps:
@@ -52,7 +52,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
- python-version: ["3.6", "3.9"]
+ python-version: ["3.7", "3.10"]
steps:
- uses: actions/checkout@v2
@@ -73,7 +73,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
- python-version: ["3.6", "3.9"]
+ python-version: ["3.7", "3.10"]
steps:
- uses: actions/checkout@v2
@@ -100,7 +100,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v2
with:
- python-version: 3.9
+ python-version: "3.10"
- run: |
git config --global user.email "[email protected]"
diff --git a/setup.py b/setup.py
index d552bc5e89..1e643e2ecc 100644
--- a/setup.py
+++ b/setup.py
@@ -69,7 +69,7 @@ def get_version() -> str:
"huggingface-cli=huggingface_hub.commands.huggingface_cli:main"
]
},
- python_requires=">=3.6.0",
+ python_requires=">=3.7.0",
install_requires=install_requires,
classifiers=[
"Intended Audience :: Developers",
|
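The change above is only a floor bump: `python_requires` moves from `>=3.6.0` to `>=3.7.0` and the CI matrix tests 3.7 and 3.10 instead of 3.6 and 3.9. As a hedged illustration of how pip evaluates such a constraint (reusing `packaging`, which is already in `install_requires`; the version strings below are arbitrary examples):

```
from packaging.specifiers import SpecifierSet

# How a python_requires constraint behaves once bumped to >=3.7.0.
requires_python = SpecifierSet(">=3.7.0")

for candidate in ("3.6.15", "3.7.13", "3.10.4"):
    status = "allowed" if candidate in requires_python else "rejected"
    print(candidate, status)
# 3.6.15 rejected, 3.7.13 allowed, 3.10.4 allowed
```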
CTFd__CTFd-1918 | Users in admin scoreboard show user position instead of team position
In teams mode on the admin panel, users are shown with their individual user position on the scoreboard instead of their team's position. We should be showing both.
| [
{
"content": "from flask import render_template, request, url_for\nfrom sqlalchemy.sql import not_\n\nfrom CTFd.admin import admin\nfrom CTFd.models import Challenges, Tracking, Users\nfrom CTFd.utils import get_config\nfrom CTFd.utils.decorators import admins_only\nfrom CTFd.utils.modes import TEAMS_MODE\n\n\[email protected](\"/admin/users\")\n@admins_only\ndef users_listing():\n q = request.args.get(\"q\")\n field = request.args.get(\"field\")\n page = abs(request.args.get(\"page\", 1, type=int))\n filters = []\n users = []\n\n if q:\n # The field exists as an exposed column\n if Users.__mapper__.has_property(field):\n filters.append(getattr(Users, field).like(\"%{}%\".format(q)))\n\n if q and field == \"ip\":\n users = (\n Users.query.join(Tracking, Users.id == Tracking.user_id)\n .filter(Tracking.ip.like(\"%{}%\".format(q)))\n .order_by(Users.id.asc())\n .paginate(page=page, per_page=50)\n )\n else:\n users = (\n Users.query.filter(*filters)\n .order_by(Users.id.asc())\n .paginate(page=page, per_page=50)\n )\n\n args = dict(request.args)\n args.pop(\"page\", 1)\n\n return render_template(\n \"admin/users/users.html\",\n users=users,\n prev_page=url_for(request.endpoint, page=users.prev_num, **args),\n next_page=url_for(request.endpoint, page=users.next_num, **args),\n q=q,\n field=field,\n )\n\n\[email protected](\"/admin/users/new\")\n@admins_only\ndef users_new():\n return render_template(\"admin/users/new.html\")\n\n\[email protected](\"/admin/users/<int:user_id>\")\n@admins_only\ndef users_detail(user_id):\n # Get user object\n user = Users.query.filter_by(id=user_id).first_or_404()\n\n # Get the user's solves\n solves = user.get_solves(admin=True)\n\n # Get challenges that the user is missing\n if get_config(\"user_mode\") == TEAMS_MODE:\n if user.team:\n all_solves = user.team.get_solves(admin=True)\n else:\n all_solves = user.get_solves(admin=True)\n else:\n all_solves = user.get_solves(admin=True)\n\n solve_ids = [s.challenge_id for s in all_solves]\n missing = Challenges.query.filter(not_(Challenges.id.in_(solve_ids))).all()\n\n # Get IP addresses that the User has used\n addrs = (\n Tracking.query.filter_by(user_id=user_id).order_by(Tracking.date.desc()).all()\n )\n\n # Get Fails\n fails = user.get_fails(admin=True)\n\n # Get Awards\n awards = user.get_awards(admin=True)\n\n # Get user properties\n score = user.get_score(admin=True)\n place = user.get_place(admin=True)\n\n return render_template(\n \"admin/users/user.html\",\n solves=solves,\n user=user,\n addrs=addrs,\n score=score,\n missing=missing,\n place=place,\n fails=fails,\n awards=awards,\n )\n",
"path": "CTFd/admin/users.py"
}
] | [
{
"content": "from flask import render_template, request, url_for\nfrom sqlalchemy.sql import not_\n\nfrom CTFd.admin import admin\nfrom CTFd.models import Challenges, Tracking, Users\nfrom CTFd.utils import get_config\nfrom CTFd.utils.decorators import admins_only\nfrom CTFd.utils.modes import TEAMS_MODE\n\n\[email protected](\"/admin/users\")\n@admins_only\ndef users_listing():\n q = request.args.get(\"q\")\n field = request.args.get(\"field\")\n page = abs(request.args.get(\"page\", 1, type=int))\n filters = []\n users = []\n\n if q:\n # The field exists as an exposed column\n if Users.__mapper__.has_property(field):\n filters.append(getattr(Users, field).like(\"%{}%\".format(q)))\n\n if q and field == \"ip\":\n users = (\n Users.query.join(Tracking, Users.id == Tracking.user_id)\n .filter(Tracking.ip.like(\"%{}%\".format(q)))\n .order_by(Users.id.asc())\n .paginate(page=page, per_page=50)\n )\n else:\n users = (\n Users.query.filter(*filters)\n .order_by(Users.id.asc())\n .paginate(page=page, per_page=50)\n )\n\n args = dict(request.args)\n args.pop(\"page\", 1)\n\n return render_template(\n \"admin/users/users.html\",\n users=users,\n prev_page=url_for(request.endpoint, page=users.prev_num, **args),\n next_page=url_for(request.endpoint, page=users.next_num, **args),\n q=q,\n field=field,\n )\n\n\[email protected](\"/admin/users/new\")\n@admins_only\ndef users_new():\n return render_template(\"admin/users/new.html\")\n\n\[email protected](\"/admin/users/<int:user_id>\")\n@admins_only\ndef users_detail(user_id):\n # Get user object\n user = Users.query.filter_by(id=user_id).first_or_404()\n\n # Get the user's solves\n solves = user.get_solves(admin=True)\n\n # Get challenges that the user is missing\n if get_config(\"user_mode\") == TEAMS_MODE:\n if user.team:\n all_solves = user.team.get_solves(admin=True)\n else:\n all_solves = user.get_solves(admin=True)\n else:\n all_solves = user.get_solves(admin=True)\n\n solve_ids = [s.challenge_id for s in all_solves]\n missing = Challenges.query.filter(not_(Challenges.id.in_(solve_ids))).all()\n\n # Get IP addresses that the User has used\n addrs = (\n Tracking.query.filter_by(user_id=user_id).order_by(Tracking.date.desc()).all()\n )\n\n # Get Fails\n fails = user.get_fails(admin=True)\n\n # Get Awards\n awards = user.get_awards(admin=True)\n\n # Get user properties\n score = user.account.get_score(admin=True)\n place = user.account.get_place(admin=True)\n\n return render_template(\n \"admin/users/user.html\",\n solves=solves,\n user=user,\n addrs=addrs,\n score=score,\n missing=missing,\n place=place,\n fails=fails,\n awards=awards,\n )\n",
"path": "CTFd/admin/users.py"
}
] | diff --git a/CTFd/admin/users.py b/CTFd/admin/users.py
index 46f16c8af..f2a0c484d 100644
--- a/CTFd/admin/users.py
+++ b/CTFd/admin/users.py
@@ -88,8 +88,8 @@ def users_detail(user_id):
awards = user.get_awards(admin=True)
# Get user properties
- score = user.get_score(admin=True)
- place = user.get_place(admin=True)
+ score = user.account.get_score(admin=True)
+ place = user.account.get_place(admin=True)
return render_template(
"admin/users/user.html",
|
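The one-line CTFd fix above routes score/place lookups through `user.account` instead of `user` directly, so that in teams mode the scoreboard standing comes from the team. A toy sketch of the idea; the `account` property body here is an assumption for illustration, not CTFd's actual model code:

```
class Teams:
    def get_place(self, admin=False):
        return "1st"          # toy value


class Users:
    def __init__(self, team=None):
        self.team = team

    @property
    def account(self):
        # Teams mode: the scoreboard account is the team.
        # Users mode: the account is the user itself.
        return self.team if self.team is not None else self

    def get_place(self, admin=False):
        return "3rd"          # the user's own place (wrong one, per the issue)


member = Users(team=Teams())
print(member.get_place(admin=True))          # "3rd" -- old behaviour
print(member.account.get_place(admin=True))  # "1st" -- behaviour after the fix
```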
DataDog__dd-trace-py-1468 | Cannot install ddtrace 0.38.0 with Python 3.8 without the wheels
Hi,
I cannot install ddtrace 0.38.0 without using the provided wheel. It was working with ddtrace version 0.37.1.
### Which version of dd-trace-py are you using?
0.38.0 with Python 3.8.3 on Linux (tried from my Archlinux and from a Docker container with Debian)
### How can we reproduce your problem?
Run `pip install --no-binary=:all: ddtrace`
### What is the result that you get?
```
Collecting ddtrace==0.38.0
Using cached ddtrace-0.38.0.tar.gz (887 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing wheel metadata ... done
Requirement already satisfied: msgpack>=0.5.0 in /home/yannick/.local/share/virtualenvs/core/lib/python3.8/site-packages (from ddtrace==0.38.0) (1.0.0)
Building wheels for collected packages: ddtrace
Building wheel for ddtrace (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: /home/yannick/.local/share/virtualenvs/core/bin/python /home/yannick/.local/share/virtualenvs/core/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmp5caazvta
cwd: /tmp/pip-install-b0v_y4yt/ddtrace
Complete output (423 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.8
creating build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/util.py -> build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/tracer.py -> build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/span.py -> build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/sampler.py -> build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/provider.py -> build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/pin.py -> build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/payload.py -> build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/monkey.py -> build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/helpers.py -> build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/filters.py -> build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/encoding.py -> build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/context.py -> build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/constants.py -> build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/compat.py -> build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/api.py -> build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/_worker.py -> build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/_hooks.py -> build/lib.linux-x86_64-3.8/ddtrace
copying ddtrace/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace
creating build/lib.linux-x86_64-3.8/ddtrace/vendor
copying ddtrace/vendor/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor
creating build/lib.linux-x86_64-3.8/ddtrace/utils
copying ddtrace/utils/wrappers.py -> build/lib.linux-x86_64-3.8/ddtrace/utils
copying ddtrace/utils/time.py -> build/lib.linux-x86_64-3.8/ddtrace/utils
copying ddtrace/utils/importlib.py -> build/lib.linux-x86_64-3.8/ddtrace/utils
copying ddtrace/utils/http.py -> build/lib.linux-x86_64-3.8/ddtrace/utils
copying ddtrace/utils/hook.py -> build/lib.linux-x86_64-3.8/ddtrace/utils
copying ddtrace/utils/formats.py -> build/lib.linux-x86_64-3.8/ddtrace/utils
copying ddtrace/utils/deprecation.py -> build/lib.linux-x86_64-3.8/ddtrace/utils
copying ddtrace/utils/config.py -> build/lib.linux-x86_64-3.8/ddtrace/utils
copying ddtrace/utils/attrdict.py -> build/lib.linux-x86_64-3.8/ddtrace/utils
copying ddtrace/utils/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/utils
creating build/lib.linux-x86_64-3.8/ddtrace/settings
copying ddtrace/settings/integration.py -> build/lib.linux-x86_64-3.8/ddtrace/settings
copying ddtrace/settings/http.py -> build/lib.linux-x86_64-3.8/ddtrace/settings
copying ddtrace/settings/exceptions.py -> build/lib.linux-x86_64-3.8/ddtrace/settings
copying ddtrace/settings/config.py -> build/lib.linux-x86_64-3.8/ddtrace/settings
copying ddtrace/settings/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/settings
creating build/lib.linux-x86_64-3.8/ddtrace/propagation
copying ddtrace/propagation/utils.py -> build/lib.linux-x86_64-3.8/ddtrace/propagation
copying ddtrace/propagation/http.py -> build/lib.linux-x86_64-3.8/ddtrace/propagation
copying ddtrace/propagation/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/propagation
creating build/lib.linux-x86_64-3.8/ddtrace/profiling
copying ddtrace/profiling/scheduler.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling
copying ddtrace/profiling/recorder.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling
copying ddtrace/profiling/profiler.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling
copying ddtrace/profiling/event.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling
copying ddtrace/profiling/auto.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling
copying ddtrace/profiling/_traceback.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling
copying ddtrace/profiling/_service.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling
copying ddtrace/profiling/_periodic.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling
copying ddtrace/profiling/_line2def.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling
copying ddtrace/profiling/_attr.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling
copying ddtrace/profiling/__main__.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling
copying ddtrace/profiling/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling
creating build/lib.linux-x86_64-3.8/ddtrace/profile
copying ddtrace/profile/scheduler.py -> build/lib.linux-x86_64-3.8/ddtrace/profile
copying ddtrace/profile/recorder.py -> build/lib.linux-x86_64-3.8/ddtrace/profile
copying ddtrace/profile/profiler.py -> build/lib.linux-x86_64-3.8/ddtrace/profile
copying ddtrace/profile/event.py -> build/lib.linux-x86_64-3.8/ddtrace/profile
copying ddtrace/profile/auto.py -> build/lib.linux-x86_64-3.8/ddtrace/profile
copying ddtrace/profile/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/profile
creating build/lib.linux-x86_64-3.8/ddtrace/opentracer
copying ddtrace/opentracer/utils.py -> build/lib.linux-x86_64-3.8/ddtrace/opentracer
copying ddtrace/opentracer/tracer.py -> build/lib.linux-x86_64-3.8/ddtrace/opentracer
copying ddtrace/opentracer/tags.py -> build/lib.linux-x86_64-3.8/ddtrace/opentracer
copying ddtrace/opentracer/span_context.py -> build/lib.linux-x86_64-3.8/ddtrace/opentracer
copying ddtrace/opentracer/span.py -> build/lib.linux-x86_64-3.8/ddtrace/opentracer
copying ddtrace/opentracer/settings.py -> build/lib.linux-x86_64-3.8/ddtrace/opentracer
copying ddtrace/opentracer/helpers.py -> build/lib.linux-x86_64-3.8/ddtrace/opentracer
copying ddtrace/opentracer/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/opentracer
creating build/lib.linux-x86_64-3.8/ddtrace/internal
copying ddtrace/internal/writer.py -> build/lib.linux-x86_64-3.8/ddtrace/internal
copying ddtrace/internal/rate_limiter.py -> build/lib.linux-x86_64-3.8/ddtrace/internal
copying ddtrace/internal/logger.py -> build/lib.linux-x86_64-3.8/ddtrace/internal
copying ddtrace/internal/import_hooks.py -> build/lib.linux-x86_64-3.8/ddtrace/internal
copying ddtrace/internal/hostname.py -> build/lib.linux-x86_64-3.8/ddtrace/internal
copying ddtrace/internal/context_manager.py -> build/lib.linux-x86_64-3.8/ddtrace/internal
copying ddtrace/internal/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/internal
creating build/lib.linux-x86_64-3.8/ddtrace/http
copying ddtrace/http/headers.py -> build/lib.linux-x86_64-3.8/ddtrace/http
copying ddtrace/http/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/http
creating build/lib.linux-x86_64-3.8/ddtrace/ext
copying ddtrace/ext/system.py -> build/lib.linux-x86_64-3.8/ddtrace/ext
copying ddtrace/ext/sql.py -> build/lib.linux-x86_64-3.8/ddtrace/ext
copying ddtrace/ext/redis.py -> build/lib.linux-x86_64-3.8/ddtrace/ext
copying ddtrace/ext/priority.py -> build/lib.linux-x86_64-3.8/ddtrace/ext
copying ddtrace/ext/net.py -> build/lib.linux-x86_64-3.8/ddtrace/ext
copying ddtrace/ext/mongo.py -> build/lib.linux-x86_64-3.8/ddtrace/ext
copying ddtrace/ext/memcached.py -> build/lib.linux-x86_64-3.8/ddtrace/ext
copying ddtrace/ext/kombu.py -> build/lib.linux-x86_64-3.8/ddtrace/ext
copying ddtrace/ext/http.py -> build/lib.linux-x86_64-3.8/ddtrace/ext
copying ddtrace/ext/errors.py -> build/lib.linux-x86_64-3.8/ddtrace/ext
copying ddtrace/ext/elasticsearch.py -> build/lib.linux-x86_64-3.8/ddtrace/ext
copying ddtrace/ext/db.py -> build/lib.linux-x86_64-3.8/ddtrace/ext
copying ddtrace/ext/consul.py -> build/lib.linux-x86_64-3.8/ddtrace/ext
copying ddtrace/ext/cassandra.py -> build/lib.linux-x86_64-3.8/ddtrace/ext
copying ddtrace/ext/aws.py -> build/lib.linux-x86_64-3.8/ddtrace/ext
copying ddtrace/ext/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/ext
creating build/lib.linux-x86_64-3.8/ddtrace/contrib
copying ddtrace/contrib/util.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib
copying ddtrace/contrib/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib
creating build/lib.linux-x86_64-3.8/ddtrace/commands
copying ddtrace/commands/ddtrace_run.py -> build/lib.linux-x86_64-3.8/ddtrace/commands
copying ddtrace/commands/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/commands
creating build/lib.linux-x86_64-3.8/ddtrace/bootstrap
copying ddtrace/bootstrap/sitecustomize.py -> build/lib.linux-x86_64-3.8/ddtrace/bootstrap
copying ddtrace/bootstrap/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/bootstrap
creating build/lib.linux-x86_64-3.8/ddtrace/vendor/wrapt
copying ddtrace/vendor/wrapt/wrappers.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/wrapt
copying ddtrace/vendor/wrapt/setup.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/wrapt
copying ddtrace/vendor/wrapt/importer.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/wrapt
copying ddtrace/vendor/wrapt/decorators.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/wrapt
copying ddtrace/vendor/wrapt/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/wrapt
creating build/lib.linux-x86_64-3.8/ddtrace/vendor/six
copying ddtrace/vendor/six/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/six
creating build/lib.linux-x86_64-3.8/ddtrace/vendor/psutil
copying ddtrace/vendor/psutil/setup.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/psutil
copying ddtrace/vendor/psutil/_pswindows.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/psutil
copying ddtrace/vendor/psutil/_pssunos.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/psutil
copying ddtrace/vendor/psutil/_psposix.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/psutil
copying ddtrace/vendor/psutil/_psosx.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/psutil
copying ddtrace/vendor/psutil/_pslinux.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/psutil
copying ddtrace/vendor/psutil/_psbsd.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/psutil
copying ddtrace/vendor/psutil/_psaix.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/psutil
copying ddtrace/vendor/psutil/_compat.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/psutil
copying ddtrace/vendor/psutil/_common.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/psutil
copying ddtrace/vendor/psutil/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/psutil
creating build/lib.linux-x86_64-3.8/ddtrace/vendor/monotonic
copying ddtrace/vendor/monotonic/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/monotonic
creating build/lib.linux-x86_64-3.8/ddtrace/vendor/dogstatsd
copying ddtrace/vendor/dogstatsd/route.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/dogstatsd
copying ddtrace/vendor/dogstatsd/context_async.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/dogstatsd
copying ddtrace/vendor/dogstatsd/context.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/dogstatsd
copying ddtrace/vendor/dogstatsd/compat.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/dogstatsd
copying ddtrace/vendor/dogstatsd/base.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/dogstatsd
copying ddtrace/vendor/dogstatsd/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/dogstatsd
creating build/lib.linux-x86_64-3.8/ddtrace/vendor/debtcollector
copying ddtrace/vendor/debtcollector/updating.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/debtcollector
copying ddtrace/vendor/debtcollector/renames.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/debtcollector
copying ddtrace/vendor/debtcollector/removals.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/debtcollector
copying ddtrace/vendor/debtcollector/moves.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/debtcollector
copying ddtrace/vendor/debtcollector/_utils.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/debtcollector
copying ddtrace/vendor/debtcollector/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/debtcollector
creating build/lib.linux-x86_64-3.8/ddtrace/vendor/attr
copying ddtrace/vendor/attr/validators.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/attr
copying ddtrace/vendor/attr/filters.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/attr
copying ddtrace/vendor/attr/exceptions.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/attr
copying ddtrace/vendor/attr/converters.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/attr
copying ddtrace/vendor/attr/_version_info.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/attr
copying ddtrace/vendor/attr/_make.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/attr
copying ddtrace/vendor/attr/_funcs.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/attr
copying ddtrace/vendor/attr/_config.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/attr
copying ddtrace/vendor/attr/_compat.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/attr
copying ddtrace/vendor/attr/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/vendor/attr
creating build/lib.linux-x86_64-3.8/ddtrace/profiling/exporter
copying ddtrace/profiling/exporter/pprof_pb2.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling/exporter
copying ddtrace/profiling/exporter/pprof.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling/exporter
copying ddtrace/profiling/exporter/http.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling/exporter
copying ddtrace/profiling/exporter/file.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling/exporter
copying ddtrace/profiling/exporter/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling/exporter
creating build/lib.linux-x86_64-3.8/ddtrace/profiling/collector
copying ddtrace/profiling/collector/threading.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling/collector
copying ddtrace/profiling/collector/memory.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling/collector
copying ddtrace/profiling/collector/exceptions.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling/collector
copying ddtrace/profiling/collector/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling/collector
creating build/lib.linux-x86_64-3.8/ddtrace/profiling/bootstrap
copying ddtrace/profiling/bootstrap/sitecustomize.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling/bootstrap
copying ddtrace/profiling/bootstrap/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/profiling/bootstrap
creating build/lib.linux-x86_64-3.8/ddtrace/profile/exporter
copying ddtrace/profile/exporter/pprof_pb2.py -> build/lib.linux-x86_64-3.8/ddtrace/profile/exporter
copying ddtrace/profile/exporter/pprof.py -> build/lib.linux-x86_64-3.8/ddtrace/profile/exporter
copying ddtrace/profile/exporter/http.py -> build/lib.linux-x86_64-3.8/ddtrace/profile/exporter
copying ddtrace/profile/exporter/file.py -> build/lib.linux-x86_64-3.8/ddtrace/profile/exporter
copying ddtrace/profile/exporter/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/profile/exporter
creating build/lib.linux-x86_64-3.8/ddtrace/profile/collector
copying ddtrace/profile/collector/threading.py -> build/lib.linux-x86_64-3.8/ddtrace/profile/collector
copying ddtrace/profile/collector/stack.py -> build/lib.linux-x86_64-3.8/ddtrace/profile/collector
copying ddtrace/profile/collector/memory.py -> build/lib.linux-x86_64-3.8/ddtrace/profile/collector
copying ddtrace/profile/collector/exceptions.py -> build/lib.linux-x86_64-3.8/ddtrace/profile/collector
copying ddtrace/profile/collector/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/profile/collector
creating build/lib.linux-x86_64-3.8/ddtrace/profile/bootstrap
copying ddtrace/profile/bootstrap/sitecustomize.py -> build/lib.linux-x86_64-3.8/ddtrace/profile/bootstrap
copying ddtrace/profile/bootstrap/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/profile/bootstrap
creating build/lib.linux-x86_64-3.8/ddtrace/opentracer/propagation
copying ddtrace/opentracer/propagation/text.py -> build/lib.linux-x86_64-3.8/ddtrace/opentracer/propagation
copying ddtrace/opentracer/propagation/propagator.py -> build/lib.linux-x86_64-3.8/ddtrace/opentracer/propagation
copying ddtrace/opentracer/propagation/http.py -> build/lib.linux-x86_64-3.8/ddtrace/opentracer/propagation
copying ddtrace/opentracer/propagation/binary.py -> build/lib.linux-x86_64-3.8/ddtrace/opentracer/propagation
copying ddtrace/opentracer/propagation/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/opentracer/propagation
creating build/lib.linux-x86_64-3.8/ddtrace/internal/runtime
copying ddtrace/internal/runtime/tag_collectors.py -> build/lib.linux-x86_64-3.8/ddtrace/internal/runtime
copying ddtrace/internal/runtime/runtime_metrics.py -> build/lib.linux-x86_64-3.8/ddtrace/internal/runtime
copying ddtrace/internal/runtime/metric_collectors.py -> build/lib.linux-x86_64-3.8/ddtrace/internal/runtime
copying ddtrace/internal/runtime/container.py -> build/lib.linux-x86_64-3.8/ddtrace/internal/runtime
copying ddtrace/internal/runtime/constants.py -> build/lib.linux-x86_64-3.8/ddtrace/internal/runtime
copying ddtrace/internal/runtime/collector.py -> build/lib.linux-x86_64-3.8/ddtrace/internal/runtime
copying ddtrace/internal/runtime/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/internal/runtime
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/vertica
copying ddtrace/contrib/vertica/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/vertica
copying ddtrace/contrib/vertica/constants.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/vertica
copying ddtrace/contrib/vertica/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/vertica
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/tornado
copying ddtrace/contrib/tornado/template.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/tornado
copying ddtrace/contrib/tornado/stack_context.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/tornado
copying ddtrace/contrib/tornado/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/tornado
copying ddtrace/contrib/tornado/handlers.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/tornado
copying ddtrace/contrib/tornado/decorators.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/tornado
copying ddtrace/contrib/tornado/constants.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/tornado
copying ddtrace/contrib/tornado/compat.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/tornado
copying ddtrace/contrib/tornado/application.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/tornado
copying ddtrace/contrib/tornado/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/tornado
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/sqlite3
copying ddtrace/contrib/sqlite3/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/sqlite3
copying ddtrace/contrib/sqlite3/connection.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/sqlite3
copying ddtrace/contrib/sqlite3/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/sqlite3
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/sqlalchemy
copying ddtrace/contrib/sqlalchemy/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/sqlalchemy
copying ddtrace/contrib/sqlalchemy/engine.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/sqlalchemy
copying ddtrace/contrib/sqlalchemy/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/sqlalchemy
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/requests
copying ddtrace/contrib/requests/session.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/requests
copying ddtrace/contrib/requests/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/requests
copying ddtrace/contrib/requests/legacy.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/requests
copying ddtrace/contrib/requests/constants.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/requests
copying ddtrace/contrib/requests/connection.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/requests
copying ddtrace/contrib/requests/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/requests
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/rediscluster
copying ddtrace/contrib/rediscluster/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/rediscluster
copying ddtrace/contrib/rediscluster/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/rediscluster
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/redis
copying ddtrace/contrib/redis/util.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/redis
copying ddtrace/contrib/redis/tracers.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/redis
copying ddtrace/contrib/redis/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/redis
copying ddtrace/contrib/redis/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/redis
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/pyramid
copying ddtrace/contrib/pyramid/trace.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pyramid
copying ddtrace/contrib/pyramid/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pyramid
copying ddtrace/contrib/pyramid/constants.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pyramid
copying ddtrace/contrib/pyramid/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pyramid
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/pymysql
copying ddtrace/contrib/pymysql/tracers.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pymysql
copying ddtrace/contrib/pymysql/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pymysql
copying ddtrace/contrib/pymysql/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pymysql
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/pymongo
copying ddtrace/contrib/pymongo/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pymongo
copying ddtrace/contrib/pymongo/parse.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pymongo
copying ddtrace/contrib/pymongo/client.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pymongo
copying ddtrace/contrib/pymongo/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pymongo
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/pymemcache
copying ddtrace/contrib/pymemcache/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pymemcache
copying ddtrace/contrib/pymemcache/client.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pymemcache
copying ddtrace/contrib/pymemcache/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pymemcache
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/pylons
copying ddtrace/contrib/pylons/renderer.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pylons
copying ddtrace/contrib/pylons/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pylons
copying ddtrace/contrib/pylons/middleware.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pylons
copying ddtrace/contrib/pylons/constants.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pylons
copying ddtrace/contrib/pylons/compat.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pylons
copying ddtrace/contrib/pylons/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pylons
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/pylibmc
copying ddtrace/contrib/pylibmc/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pylibmc
copying ddtrace/contrib/pylibmc/client.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pylibmc
copying ddtrace/contrib/pylibmc/addrs.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pylibmc
copying ddtrace/contrib/pylibmc/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/pylibmc
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/psycopg
copying ddtrace/contrib/psycopg/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/psycopg
copying ddtrace/contrib/psycopg/connection.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/psycopg
copying ddtrace/contrib/psycopg/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/psycopg
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/mysqldb
copying ddtrace/contrib/mysqldb/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/mysqldb
copying ddtrace/contrib/mysqldb/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/mysqldb
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/mysql
copying ddtrace/contrib/mysql/tracers.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/mysql
copying ddtrace/contrib/mysql/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/mysql
copying ddtrace/contrib/mysql/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/mysql
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/mongoengine
copying ddtrace/contrib/mongoengine/trace.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/mongoengine
copying ddtrace/contrib/mongoengine/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/mongoengine
copying ddtrace/contrib/mongoengine/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/mongoengine
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/molten
copying ddtrace/contrib/molten/wrappers.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/molten
copying ddtrace/contrib/molten/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/molten
copying ddtrace/contrib/molten/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/molten
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/mako
copying ddtrace/contrib/mako/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/mako
copying ddtrace/contrib/mako/constants.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/mako
copying ddtrace/contrib/mako/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/mako
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/logging
copying ddtrace/contrib/logging/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/logging
copying ddtrace/contrib/logging/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/logging
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/kombu
copying ddtrace/contrib/kombu/utils.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/kombu
copying ddtrace/contrib/kombu/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/kombu
copying ddtrace/contrib/kombu/constants.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/kombu
copying ddtrace/contrib/kombu/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/kombu
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/jinja2
copying ddtrace/contrib/jinja2/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/jinja2
copying ddtrace/contrib/jinja2/constants.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/jinja2
copying ddtrace/contrib/jinja2/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/jinja2
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/httplib
copying ddtrace/contrib/httplib/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/httplib
copying ddtrace/contrib/httplib/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/httplib
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/grpc
copying ddtrace/contrib/grpc/utils.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/grpc
copying ddtrace/contrib/grpc/server_interceptor.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/grpc
copying ddtrace/contrib/grpc/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/grpc
copying ddtrace/contrib/grpc/constants.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/grpc
copying ddtrace/contrib/grpc/client_interceptor.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/grpc
copying ddtrace/contrib/grpc/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/grpc
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/gevent
copying ddtrace/contrib/gevent/provider.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/gevent
copying ddtrace/contrib/gevent/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/gevent
copying ddtrace/contrib/gevent/greenlet.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/gevent
copying ddtrace/contrib/gevent/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/gevent
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/futures
copying ddtrace/contrib/futures/threading.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/futures
copying ddtrace/contrib/futures/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/futures
copying ddtrace/contrib/futures/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/futures
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/flask_cache
copying ddtrace/contrib/flask_cache/utils.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/flask_cache
copying ddtrace/contrib/flask_cache/tracers.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/flask_cache
copying ddtrace/contrib/flask_cache/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/flask_cache
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/flask
copying ddtrace/contrib/flask/wrappers.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/flask
copying ddtrace/contrib/flask/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/flask
copying ddtrace/contrib/flask/middleware.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/flask
copying ddtrace/contrib/flask/helpers.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/flask
copying ddtrace/contrib/flask/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/flask
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/falcon
copying ddtrace/contrib/falcon/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/falcon
copying ddtrace/contrib/falcon/middleware.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/falcon
copying ddtrace/contrib/falcon/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/falcon
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/elasticsearch
copying ddtrace/contrib/elasticsearch/transport.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/elasticsearch
copying ddtrace/contrib/elasticsearch/quantize.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/elasticsearch
copying ddtrace/contrib/elasticsearch/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/elasticsearch
copying ddtrace/contrib/elasticsearch/elasticsearch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/elasticsearch
copying ddtrace/contrib/elasticsearch/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/elasticsearch
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/dogpile_cache
copying ddtrace/contrib/dogpile_cache/region.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/dogpile_cache
copying ddtrace/contrib/dogpile_cache/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/dogpile_cache
copying ddtrace/contrib/dogpile_cache/lock.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/dogpile_cache
copying ddtrace/contrib/dogpile_cache/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/dogpile_cache
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/django
copying ddtrace/contrib/django/utils.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/django
copying ddtrace/contrib/django/restframework.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/django
copying ddtrace/contrib/django/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/django
copying ddtrace/contrib/django/middleware.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/django
copying ddtrace/contrib/django/conf.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/django
copying ddtrace/contrib/django/compat.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/django
copying ddtrace/contrib/django/apps.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/django
copying ddtrace/contrib/django/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/django
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/dbapi
copying ddtrace/contrib/dbapi/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/dbapi
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/consul
copying ddtrace/contrib/consul/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/consul
copying ddtrace/contrib/consul/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/consul
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/celery
copying ddtrace/contrib/celery/utils.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/celery
copying ddtrace/contrib/celery/task.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/celery
copying ddtrace/contrib/celery/signals.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/celery
copying ddtrace/contrib/celery/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/celery
copying ddtrace/contrib/celery/constants.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/celery
copying ddtrace/contrib/celery/app.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/celery
copying ddtrace/contrib/celery/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/celery
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/cassandra
copying ddtrace/contrib/cassandra/session.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/cassandra
copying ddtrace/contrib/cassandra/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/cassandra
copying ddtrace/contrib/cassandra/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/cassandra
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/bottle
copying ddtrace/contrib/bottle/trace.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/bottle
copying ddtrace/contrib/bottle/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/bottle
copying ddtrace/contrib/bottle/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/bottle
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/botocore
copying ddtrace/contrib/botocore/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/botocore
copying ddtrace/contrib/botocore/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/botocore
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/boto
copying ddtrace/contrib/boto/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/boto
copying ddtrace/contrib/boto/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/boto
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/asyncio
copying ddtrace/contrib/asyncio/wrappers.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/asyncio
copying ddtrace/contrib/asyncio/provider.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/asyncio
copying ddtrace/contrib/asyncio/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/asyncio
copying ddtrace/contrib/asyncio/helpers.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/asyncio
copying ddtrace/contrib/asyncio/compat.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/asyncio
copying ddtrace/contrib/asyncio/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/asyncio
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/algoliasearch
copying ddtrace/contrib/algoliasearch/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/algoliasearch
copying ddtrace/contrib/algoliasearch/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/algoliasearch
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/aiopg
copying ddtrace/contrib/aiopg/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/aiopg
copying ddtrace/contrib/aiopg/connection.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/aiopg
copying ddtrace/contrib/aiopg/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/aiopg
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/aiohttp
copying ddtrace/contrib/aiohttp/template.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/aiohttp
copying ddtrace/contrib/aiohttp/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/aiohttp
copying ddtrace/contrib/aiohttp/middlewares.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/aiohttp
copying ddtrace/contrib/aiohttp/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/aiohttp
creating build/lib.linux-x86_64-3.8/ddtrace/contrib/aiobotocore
copying ddtrace/contrib/aiobotocore/patch.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/aiobotocore
copying ddtrace/contrib/aiobotocore/__init__.py -> build/lib.linux-x86_64-3.8/ddtrace/contrib/aiobotocore
running build_ext
building 'ddtrace.internal._rand' extension
creating build/temp.linux-x86_64-3.8
creating build/temp.linux-x86_64-3.8/ddtrace
creating build/temp.linux-x86_64-3.8/ddtrace/internal
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fno-semantic-interposition -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fPIC -I/usr/include/python3.8 -c ddtrace/internal/_rand.c -o build/temp.linux-x86_64-3.8/ddtrace/internal/_rand.o
gcc -pthread -shared -Wl,-O1,--sort-common,--as-needed,-z,relro,-z,now -fno-semantic-interposition -Wl,-O1,--sort-common,--as-needed,-z,relro,-z,now build/temp.linux-x86_64-3.8/ddtrace/internal/_rand.o -L/usr/lib -o build/lib.linux-x86_64-3.8/ddtrace/internal/_rand.cpython-38-x86_64-linux-gnu.so
building 'ddtrace.profiling.collector.stack' extension
creating build/temp.linux-x86_64-3.8/ddtrace/profiling
creating build/temp.linux-x86_64-3.8/ddtrace/profiling/collector
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fno-semantic-interposition -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fPIC -I/usr/include/python3.8 -c ddtrace/profiling/collector/stack.c -o build/temp.linux-x86_64-3.8/ddtrace/profiling/collector/stack.o -DPy_BUILD_CORE
ddtrace/profiling/collector/stack.c:619:10: fatal error: internal/pystate.h: No such file or directory
619 | #include <internal/pystate.h>
| ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
error: command 'gcc' failed with exit status 1
----------------------------------------
ERROR: Failed building wheel for ddtrace
Failed to build ddtrace
ERROR: Could not build wheels for ddtrace which use PEP 517 and cannot be installed directly
```
### What is the result that you expected?
I should be able to install ddtrace without using the provided wheels, as I could with previous versions.
| [
{
"content": "import os\nimport sys\n\nfrom setuptools import setup, find_packages\nfrom setuptools.command.test import test as TestCommand\n\n# ORDER MATTERS\n# Import this after setuptools or it will fail\nfrom Cython.Build import cythonize # noqa: I100\nimport Cython.Distutils\n\n\nHERE = os.path.dirname(os.path.abspath(__file__))\n\n\ndef load_module_from_project_file(mod_name, fname):\n \"\"\"\n Helper used to load a module from a file in this project\n\n DEV: Loading this way will by-pass loading all parent modules\n e.g. importing `ddtrace.vendor.psutil.setup` will load `ddtrace/__init__.py`\n which has side effects like loading the tracer\n \"\"\"\n fpath = os.path.join(HERE, fname)\n\n if sys.version_info >= (3, 5):\n import importlib.util\n\n spec = importlib.util.spec_from_file_location(mod_name, fpath)\n mod = importlib.util.module_from_spec(spec)\n spec.loader.exec_module(mod)\n return mod\n elif sys.version_info >= (3, 3):\n from importlib.machinery import SourceFileLoader\n\n return SourceFileLoader(mod_name, fpath).load_module()\n else:\n import imp\n\n return imp.load_source(mod_name, fpath)\n\n\nclass Tox(TestCommand):\n\n user_options = [(\"tox-args=\", \"a\", \"Arguments to pass to tox\")]\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.tox_args = None\n\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n\n def run_tests(self):\n # import here, cause outside the eggs aren't loaded\n import tox\n import shlex\n\n args = self.tox_args\n if args:\n args = shlex.split(self.tox_args)\n errno = tox.cmdline(args=args)\n sys.exit(errno)\n\n\nlong_description = \"\"\"\n# dd-trace-py\n\n`ddtrace` is Datadog's tracing library for Python. It is used to trace requests\nas they flow across web servers, databases and microservices so that developers\nhave great visiblity into bottlenecks and troublesome requests.\n\n## Getting Started\n\nFor a basic product overview, installation and quick start, check out our\n[setup documentation][setup docs].\n\nFor more advanced usage and configuration, check out our [API\ndocumentation][pypi docs].\n\nFor descriptions of terminology used in APM, take a look at the [official\ndocumentation][visualization docs].\n\n[setup docs]: https://docs.datadoghq.com/tracing/setup/python/\n[pypi docs]: http://pypi.datadoghq.com/trace/docs/\n[visualization docs]: https://docs.datadoghq.com/tracing/visualization/\n\"\"\"\n\n\ndef get_exts_for(name):\n try:\n mod = load_module_from_project_file(\n \"ddtrace.vendor.{}.setup\".format(name), \"ddtrace/vendor/{}/setup.py\".format(name)\n )\n return mod.get_extensions()\n except Exception as e:\n print(\"WARNING: Failed to load %s extensions, skipping: %s\" % (name, e))\n return []\n\n\n# Base `setup()` kwargs without any C-extension registering\nsetup(\n **dict(\n name=\"ddtrace\",\n description=\"Datadog tracing code\",\n url=\"https://github.com/DataDog/dd-trace-py\",\n author=\"Datadog, Inc.\",\n author_email=\"[email protected]\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n license=\"BSD\",\n packages=find_packages(exclude=[\"tests*\"]),\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*\",\n # enum34 is an enum backport for earlier versions of python\n # funcsigs backport required for vendored debtcollector\n # encoding using msgpack\n install_requires=[\n \"enum34; python_version<'3.4'\",\n \"funcsigs>=1.0.0; python_version=='2.7'\",\n 
\"msgpack>=0.5.0\",\n \"protobuf>=3\",\n \"intervaltree\",\n \"tenacity>=5\",\n ],\n extras_require={\n # users can include opentracing by having:\n # install_requires=['ddtrace[opentracing]', ...]\n \"opentracing\": [\"opentracing>=2.0.0\"],\n },\n # plugin tox\n tests_require=[\"tox\", \"flake8\"],\n cmdclass={\"test\": Tox, \"build_ext\": Cython.Distutils.build_ext},\n entry_points={\n \"console_scripts\": [\n \"ddtrace-run = ddtrace.commands.ddtrace_run:main\",\n \"pyddprofile = ddtrace.profiling.__main__:main\",\n ]\n },\n classifiers=[\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n use_scm_version=True,\n setup_requires=[\"setuptools_scm\", \"cython\"],\n ext_modules=cythonize(\n [\n Cython.Distutils.Extension(\n \"ddtrace.internal._rand\", sources=[\"ddtrace/internal/_rand.pyx\"], language=\"c\",\n ),\n Cython.Distutils.Extension(\n \"ddtrace.profiling.collector.stack\",\n sources=[\"ddtrace/profiling/collector/stack.pyx\"],\n language=\"c\",\n extra_compile_args=[\"-DPy_BUILD_CORE\"],\n ),\n Cython.Distutils.Extension(\n \"ddtrace.profiling.collector._traceback\",\n sources=[\"ddtrace/profiling/collector/_traceback.pyx\"],\n language=\"c\",\n ),\n Cython.Distutils.Extension(\n \"ddtrace.profiling._build\", sources=[\"ddtrace/profiling/_build.pyx\"], language=\"c\",\n ),\n ],\n compile_time_env={\n \"PY_MAJOR_VERSION\": sys.version_info.major,\n \"PY_MINOR_VERSION\": sys.version_info.minor,\n \"PY_MICRO_VERSION\": sys.version_info.micro,\n },\n )\n + get_exts_for(\"wrapt\")\n + get_exts_for(\"psutil\"),\n )\n)\n",
"path": "setup.py"
}
] | [
{
"content": "import os\nimport sys\n\nfrom setuptools import setup, find_packages\nfrom setuptools.command.test import test as TestCommand\n\n# ORDER MATTERS\n# Import this after setuptools or it will fail\nfrom Cython.Build import cythonize # noqa: I100\nimport Cython.Distutils\n\n\nHERE = os.path.dirname(os.path.abspath(__file__))\n\n\ndef load_module_from_project_file(mod_name, fname):\n \"\"\"\n Helper used to load a module from a file in this project\n\n DEV: Loading this way will by-pass loading all parent modules\n e.g. importing `ddtrace.vendor.psutil.setup` will load `ddtrace/__init__.py`\n which has side effects like loading the tracer\n \"\"\"\n fpath = os.path.join(HERE, fname)\n\n if sys.version_info >= (3, 5):\n import importlib.util\n\n spec = importlib.util.spec_from_file_location(mod_name, fpath)\n mod = importlib.util.module_from_spec(spec)\n spec.loader.exec_module(mod)\n return mod\n elif sys.version_info >= (3, 3):\n from importlib.machinery import SourceFileLoader\n\n return SourceFileLoader(mod_name, fpath).load_module()\n else:\n import imp\n\n return imp.load_source(mod_name, fpath)\n\n\nclass Tox(TestCommand):\n\n user_options = [(\"tox-args=\", \"a\", \"Arguments to pass to tox\")]\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.tox_args = None\n\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n\n def run_tests(self):\n # import here, cause outside the eggs aren't loaded\n import tox\n import shlex\n\n args = self.tox_args\n if args:\n args = shlex.split(self.tox_args)\n errno = tox.cmdline(args=args)\n sys.exit(errno)\n\n\nlong_description = \"\"\"\n# dd-trace-py\n\n`ddtrace` is Datadog's tracing library for Python. It is used to trace requests\nas they flow across web servers, databases and microservices so that developers\nhave great visiblity into bottlenecks and troublesome requests.\n\n## Getting Started\n\nFor a basic product overview, installation and quick start, check out our\n[setup documentation][setup docs].\n\nFor more advanced usage and configuration, check out our [API\ndocumentation][pypi docs].\n\nFor descriptions of terminology used in APM, take a look at the [official\ndocumentation][visualization docs].\n\n[setup docs]: https://docs.datadoghq.com/tracing/setup/python/\n[pypi docs]: http://pypi.datadoghq.com/trace/docs/\n[visualization docs]: https://docs.datadoghq.com/tracing/visualization/\n\"\"\"\n\n\ndef get_exts_for(name):\n try:\n mod = load_module_from_project_file(\n \"ddtrace.vendor.{}.setup\".format(name), \"ddtrace/vendor/{}/setup.py\".format(name)\n )\n return mod.get_extensions()\n except Exception as e:\n print(\"WARNING: Failed to load %s extensions, skipping: %s\" % (name, e))\n return []\n\n\n# Base `setup()` kwargs without any C-extension registering\nsetup(\n **dict(\n name=\"ddtrace\",\n description=\"Datadog tracing code\",\n url=\"https://github.com/DataDog/dd-trace-py\",\n author=\"Datadog, Inc.\",\n author_email=\"[email protected]\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n license=\"BSD\",\n packages=find_packages(exclude=[\"tests*\"]),\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*\",\n # enum34 is an enum backport for earlier versions of python\n # funcsigs backport required for vendored debtcollector\n # encoding using msgpack\n install_requires=[\n \"enum34; python_version<'3.4'\",\n \"funcsigs>=1.0.0; python_version=='2.7'\",\n 
\"msgpack>=0.5.0\",\n \"protobuf>=3\",\n \"intervaltree\",\n \"tenacity>=5\",\n ],\n extras_require={\n # users can include opentracing by having:\n # install_requires=['ddtrace[opentracing]', ...]\n \"opentracing\": [\"opentracing>=2.0.0\"],\n },\n # plugin tox\n tests_require=[\"tox\", \"flake8\"],\n cmdclass={\"test\": Tox, \"build_ext\": Cython.Distutils.build_ext},\n entry_points={\n \"console_scripts\": [\n \"ddtrace-run = ddtrace.commands.ddtrace_run:main\",\n \"pyddprofile = ddtrace.profiling.__main__:main\",\n ]\n },\n classifiers=[\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n use_scm_version=True,\n setup_requires=[\"setuptools_scm\", \"cython\"],\n ext_modules=cythonize(\n [\n Cython.Distutils.Extension(\n \"ddtrace.internal._rand\", sources=[\"ddtrace/internal/_rand.pyx\"], language=\"c\",\n ),\n Cython.Distutils.Extension(\n \"ddtrace.profiling.collector.stack\",\n sources=[\"ddtrace/profiling/collector/stack.pyx\"],\n language=\"c\",\n extra_compile_args=[\"-DPy_BUILD_CORE\"],\n ),\n Cython.Distutils.Extension(\n \"ddtrace.profiling.collector._traceback\",\n sources=[\"ddtrace/profiling/collector/_traceback.pyx\"],\n language=\"c\",\n ),\n Cython.Distutils.Extension(\n \"ddtrace.profiling._build\", sources=[\"ddtrace/profiling/_build.pyx\"], language=\"c\",\n ),\n ],\n compile_time_env={\n \"PY_MAJOR_VERSION\": sys.version_info.major,\n \"PY_MINOR_VERSION\": sys.version_info.minor,\n \"PY_MICRO_VERSION\": sys.version_info.micro,\n },\n force=True,\n )\n + get_exts_for(\"wrapt\")\n + get_exts_for(\"psutil\"),\n )\n)\n",
"path": "setup.py"
}
] | diff --git a/setup.py b/setup.py
index 0e7dd44fdcb..5caab5a5e57 100644
--- a/setup.py
+++ b/setup.py
@@ -173,6 +173,7 @@ def get_exts_for(name):
"PY_MINOR_VERSION": sys.version_info.minor,
"PY_MICRO_VERSION": sys.version_info.micro,
},
+ force=True,
)
+ get_exts_for("wrapt")
+ get_exts_for("psutil"),
diff --git a/tox.ini b/tox.ini
index dacb183bac8..0d0af1f161f 100644
--- a/tox.ini
+++ b/tox.ini
@@ -158,8 +158,7 @@ isolated_build = true
# meaning running on py3.x will fail
# https://stackoverflow.com/questions/57459123/why-do-i-need-to-run-tox-twice-to-test-a-python-package-with-c-extension
whitelist_externals=rm
-commands_pre=rm -f ddtrace/profiling/_build.c ddtrace/profiling/collector/stack.c ddtrace/profiling/collector/_traceback.c ddtrace/internal/_rand.c
- {envpython} {toxinidir}/setup.py develop
+commands_pre={envpython} {toxinidir}/setup.py develop
usedevelop =
# do not use develop mode with celery as running multiple python versions within
# same job will cause problem for tests that use ddtrace-run
|
kivy__python-for-android-2055 | Can't use AsyncImage with an HTTPS URL (or any HTTPS URL with any request): fix is to manually load certifi
### Versions
* Python: 3
* OS: Android
* Kivy: 1.10.1
* Cython: 0.29.7
### Description
Trying to open any HTTPS URL fails with:
`urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate`
It actually happens with AsyncImage, which I use like this:
```
AsyncImage:
source: 'https://i.goopics.net/27Odx.png'
```
This works perfectly on Windows, but not on Android.
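For reference, the same failure can be reproduced outside Kivy; a minimal check (my own sketch, not from the original report, assuming the same URL and that it is run on the device):
```python
# Minimal reproduction sketch: a plain urllib request to the same URL.
# On the device this raises ssl.SSLCertVerificationError when no CA bundle
# can be found, which is the same error AsyncImage hits in its loader thread.
from urllib.request import urlopen

resp = urlopen('https://i.goopics.net/27Odx.png', timeout=10)
print(resp.status, len(resp.read()))
```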
### buildozer.spec
Command:
```
buildozer android debug
```
Spec file:
```
[app]
# (str) Title of your application
title = myapp
# (str) Package name
package.name = myapp
# (str) Package domain (needed for android/ios packaging)
package.domain = org.myapp
# (str) Source code where the main.py live
source.dir = ./kivy_app
# (list) Source files to include (let empty to include all the files)
source.include_exts = py,png,jpg,kv,atlas
# (list) List of inclusions using pattern matching
#source.include_patterns = assets/*,images/*.png
# (list) Source files to exclude (let empty to not exclude anything)
#source.exclude_exts = spec
# (list) List of directory to exclude (let empty to not exclude anything)
#source.exclude_dirs = tests, bin
# (list) List of exclusions using pattern matching
#source.exclude_patterns = license,images/*/*.jpg
# (str) Application versioning (method 1)
version = 0.2
# (str) Application versioning (method 2)
# version.regex = __version__ = ['"](.*)['"]
# version.filename = %(source.dir)s/main.py
# (list) Application requirements
# comma separated e.g. requirements = sqlite3,kivy
requirements = certifi,openssl,python3,kivy,android
# (str) Custom source folders for requirements
# Sets custom source for any requirements with recipes
# requirements.source.kivy = ../../kivy
# (list) Garden requirements
#garden_requirements =
# (str) Presplash of the application
#presplash.filename = %(source.dir)s/data/presplash.png
# (str) Icon of the application
#icon.filename = %(source.dir)s/data/icon.png
# (str) Supported orientation (one of landscape, sensorLandscape, portrait or all)
orientation = all
# (list) List of service to declare
#services = NAME:ENTRYPOINT_TO_PY,NAME2:ENTRYPOINT2_TO_PY
#
# OSX Specific
#
#
# author = © Copyright Info
# change the major version of python used by the app
osx.python_version = 3.7
# Kivy version to use
osx.kivy_version = 1.10.1
#
# Android specific
#
# (bool) Indicate if the application should be fullscreen or not
fullscreen = 0
# (string) Presplash background color (for new android toolchain)
# Supported formats are: #RRGGBB #AARRGGBB or one of the following names:
# red, blue, green, black, white, gray, cyan, magenta, yellow, lightgray,
# darkgray, grey, lightgrey, darkgrey, aqua, fuchsia, lime, maroon, navy,
# olive, purple, silver, teal.
#android.presplash_color = #FFFFFF
# (list) Permissions
android.permissions = INTERNET
# (int) Target Android API, should be as high as possible.
android.api = 27
# (int) Minimum API your APK will support.
android.minapi = 21
# (str) Android NDK version to use
android.ndk = 17c
# (int) Android NDK API to use. This is the minimum API your app will support, it should usually match android.minapi.
android.ndk_api = 21
# (bool) Use --private data storage (True) or --dir public storage (False)
#android.private_storage = True
# (str) Android NDK directory (if empty, it will be automatically downloaded.)
#android.ndk_path =
# (str) Android SDK directory (if empty, it will be automatically downloaded.)
#android.sdk_path =
# (str) ANT directory (if empty, it will be automatically downloaded.)
#android.ant_path =
# (bool) If True, then skip trying to update the Android sdk
# This can be useful to avoid excess Internet downloads or save time
# when an update is due and you just want to test/build your package
android.skip_update = False
# (bool) If True, then automatically accept SDK license
# agreements. This is intended for automation only. If set to False,
# the default, you will be shown the license when first running
# buildozer.
android.accept_sdk_license = True
# (str) Android entry point, default is ok for Kivy-based app
#android.entrypoint = org.renpy.android.PythonActivity
# (list) Pattern to whitelist for the whole project
#android.whitelist =
# (str) Path to a custom whitelist file
#android.whitelist_src =
# (str) Path to a custom blacklist file
#android.blacklist_src =
# (list) List of Java .jar files to add to the libs so that pyjnius can access
# their classes. Don't add jars that you do not need, since extra jars can slow
# down the build process. Allows wildcards matching, for example:
# OUYA-ODK/libs/*.jar
#android.add_jars = foo.jar,bar.jar,path/to/more/*.jar
# (list) List of Java files to add to the android project (can be java or a
# directory containing the files)
#android.add_src =
# (list) Android AAR archives to add (currently works only with sdl2_gradle
# bootstrap)
#android.add_aars =
# (list) Gradle dependencies to add (currently works only with sdl2_gradle
# bootstrap)
#android.gradle_dependencies =
# (list) Java classes to add as activities to the manifest.
#android.add_activites = com.example.ExampleActivity
# (str) python-for-android branch to use, defaults to master
#p4a.branch = master
# (str) OUYA Console category. Should be one of GAME or APP
# If you leave this blank, OUYA support will not be enabled
#android.ouya.category = GAME
# (str) Filename of OUYA Console icon. It must be a 732x412 png image.
#android.ouya.icon.filename = %(source.dir)s/data/ouya_icon.png
# (str) XML file to include as an intent filters in <activity> tag
#android.manifest.intent_filters =
# (str) launchMode to set for the main activity
#android.manifest.launch_mode = standard
# (list) Android additional libraries to copy into libs/armeabi
#android.add_libs_armeabi = libs/android/*.so
#android.add_libs_armeabi_v7a = libs/android-v7/*.so
#android.add_libs_x86 = libs/android-x86/*.so
#android.add_libs_mips = libs/android-mips/*.so
# (bool) Indicate whether the screen should stay on
# Don't forget to add the WAKE_LOCK permission if you set this to True
#android.wakelock = False
# (list) Android application meta-data to set (key=value format)
#android.meta_data =
# (list) Android library project to add (will be added in the
# project.properties automatically.)
#android.library_references =
# (list) Android shared libraries which will be added to AndroidManifest.xml using <uses-library> tag
#android.uses_library =
# (str) Android logcat filters to use
#android.logcat_filters = *:S python:D
# (bool) Copy library instead of making a libpymodules.so
#android.copy_libs = 1
# (str) The Android arch to build for, choices: armeabi-v7a, arm64-v8a, x86, x86_64
android.arch = armeabi-v7a
#
# Python for android (p4a) specific
#
# (str) python-for-android git clone directory (if empty, it will be automatically cloned from github)
#p4a.source_dir =
# (str) The directory in which python-for-android should look for your own build recipes (if any)
#p4a.local_recipes =
# (str) Filename to the hook for p4a
#p4a.hook =
# (str) Bootstrap to use for android builds
# p4a.bootstrap = sdl2
# (int) port number to specify an explicit --port= p4a argument (eg for bootstrap flask)
#p4a.port =
#
# iOS specific
#
# (str) Path to a custom kivy-ios folder
#ios.kivy_ios_dir = ../kivy-ios
# Alternately, specify the URL and branch of a git checkout:
ios.kivy_ios_url = https://github.com/kivy/kivy-ios
ios.kivy_ios_branch = master
# Another platform dependency: ios-deploy
# Uncomment to use a custom checkout
#ios.ios_deploy_dir = ../ios_deploy
# Or specify URL and branch
ios.ios_deploy_url = https://github.com/phonegap/ios-deploy
ios.ios_deploy_branch = 1.7.0
# (str) Name of the certificate to use for signing the debug version
# Get a list of available identities: buildozer ios list_identities
#ios.codesign.debug = "iPhone Developer: <lastname> <firstname> (<hexstring>)"
# (str) Name of the certificate to use for signing the release version
#ios.codesign.release = %(ios.codesign.debug)s
[buildozer]
# (int) Log level (0 = error only, 1 = info, 2 = debug (with command output))
log_level = 2
# (int) Display warning if buildozer is run as root (0 = False, 1 = True)
warn_on_root = 1
# (str) Path to build artifact storage, absolute or relative to spec file
# build_dir = ./.buildozer
# (str) Path to build output (i.e. .apk, .ipa) storage
# bin_dir = ./bin
# -----------------------------------------------------------------------------
# List as sections
#
# You can define all the "list" as [section:key].
# Each line will be considered as a option to the list.
# Let's take [app] / source.exclude_patterns.
# Instead of doing:
#
#[app]
#source.exclude_patterns = license,data/audio/*.wav,data/images/original/*
#
# This can be translated into:
#
#[app:source.exclude_patterns]
#license
#data/audio/*.wav
#data/images/original/*
#
# -----------------------------------------------------------------------------
# Profiles
#
# You can extend section / key with a profile
# For example, you want to deploy a demo version of your application without
# HD content. You could first change the title to add "(demo)" in the name
# and extend the excluded directories to remove the HD content.
#
#[app@demo]
#title = My Application (demo)
#
#[app:source.exclude_patterns@demo]
#images/hd/*
#
# Then, invoke the command line with the "demo" profile:
#
#buildozer --profile demo android debug
```
### Logs
```
05-27 19:29:05.842 23309 23355 I python : [ERROR ] [Loader ] Failed to load image <https://i.goopics.net/27Odx.png>
05-27 19:29:05.842 23309 23355 I python : Traceback (most recent call last):
05-27 19:29:05.842 23309 23355 I python : File "/home/user/hostcwd/.buildozer/android/platform/build/build/other_builds/python3-libffi-openssl-sqlite3/armeabi-v7a__ndk_target_21/python3/Lib/urllib/request.py", line 1317, in do_open
05-27 19:29:05.842 23309 23355 I python : File "/home/user/hostcwd/.buildozer/android/platform/build/build/other_builds/python3-libffi-openssl-sqlite3/armeabi-v7a__ndk_target_21/python3/Lib/http/client.py", line 1229, in request
05-27 19:29:05.842 23309 23355 I python : File "/home/user/hostcwd/.buildozer/android/platform/build/build/other_builds/python3-libffi-openssl-sqlite3/armeabi-v7a__ndk_target_21/python3/Lib/http/client.py", line 1275, in _send_request
05-27 19:29:05.842 23309 23355 I python : File "/home/user/hostcwd/.buildozer/android/platform/build/build/other_builds/python3-libffi-openssl-sqlite3/armeabi-v7a__ndk_target_21/python3/Lib/http/client.py", line 1224, in endheaders
05-27 19:29:05.842 23309 23355 I python : File "/home/user/hostcwd/.buildozer/android/platform/build/build/other_builds/python3-libffi-openssl-sqlite3/armeabi-v7a__ndk_target_21/python3/Lib/http/client.py", line 1016, in _send_output
05-27 19:29:05.842 23309 23355 I python : File "/home/user/hostcwd/.buildozer/android/platform/build/build/other_builds/python3-libffi-openssl-sqlite3/armeabi-v7a__ndk_target_21/python3/Lib/http/client.py", line 956, in send
05-27 19:29:05.842 23309 23355 I python : File "/home/user/hostcwd/.buildozer/android/platform/build/build/other_builds/python3-libffi-openssl-sqlite3/armeabi-v7a__ndk_target_21/python3/Lib/http/client.py", line 1392, in connect
05-27 19:29:05.842 23309 23355 I python : File "/home/user/hostcwd/.buildozer/android/platform/build/build/other_builds/python3-libffi-openssl-sqlite3/armeabi-v7a__ndk_target_21/python3/Lib/ssl.py", line 412, in wrap_socket
05-27 19:29:05.842 23309 23355 I python : File "/home/user/hostcwd/.buildozer/android/platform/build/build/other_builds/python3-libffi-openssl-sqlite3/armeabi-v7a__ndk_target_21/python3/Lib/ssl.py", line 853, in _create
05-27 19:29:05.842 23309 23355 I python : File "/home/user/hostcwd/.buildozer/android/platform/build/build/other_builds/python3-libffi-openssl-sqlite3/armeabi-v7a__ndk_target_21/python3/Lib/ssl.py", line 1117, in do_handshake
05-27 19:29:05.842 23309 23355 I python : ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1051)
05-27 19:29:05.842 23309 23355 I python :
05-27 19:29:05.842 23309 23355 I python : During handling of the above exception, another exception occurred:
05-27 19:29:05.842 23309 23355 I python :
05-27 19:29:05.842 23309 23355 I python : Traceback (most recent call last):
05-27 19:29:05.842 23309 23355 I python : File "/home/user/hostcwd/.buildozer/android/platform/build/build/python-installs/kydoo/kivy/loader.py", line 342, in _load_urllib
05-27 19:29:05.842 23309 23355 I python : File "/home/user/hostcwd/.buildozer/android/platform/build/build/other_builds/python3-libffi-openssl-sqlite3/armeabi-v7a__ndk_target_21/python3/Lib/urllib/request.py", line 525, in open
05-27 19:29:05.842 23309 23355 I python : File "/home/user/hostcwd/.buildozer/android/platform/build/build/other_builds/python3-libffi-openssl-sqlite3/armeabi-v7a__ndk_target_21/python3/Lib/urllib/request.py", line 543, in _open
05-27 19:29:05.842 23309 23355 I python : File "/home/user/hostcwd/.buildozer/android/platform/build/build/other_builds/python3-libffi-openssl-sqlite3/armeabi-v7a__ndk_target_21/python3/Lib/urllib/request.py", line 503, in _call_chain
05-27 19:29:05.842 23309 23355 I python : File "/home/user/hostcwd/.buildozer/android/platform/build/build/other_builds/python3-libffi-openssl-sqlite3/armeabi-v7a__ndk_target_21/python3/Lib/urllib/request.py", line 1360, in https_open
05-27 19:29:05.842 23309 23355 I python : File "/home/user/hostcwd/.buildozer/android/platform/build/build/other_builds/python3-libffi-openssl-sqlite3/armeabi-v7a__ndk_target_21/python3/Lib/urllib/request.py", line 1319, in do_open
05-27 19:29:05.842 23309 23355 I python : urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1051)>
```
I actually found a """solution""" using:
```
import ssl
try:
_create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
# Legacy Python that doesn't verify HTTPS certificates by default
pass
else:
# Handle target environment that doesn't support HTTPS verification
ssl._create_default_https_context = _create_unverified_https_context
```
But using that in my main.py doesn't fix AsyncImage or any HTTPS call made from other .py files.
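A less drastic workaround (my own sketch, not from the original report) is to point Python's default SSL context at certifi's CA bundle instead of disabling verification; it has to run before the first HTTPS request:
```python
# Sketch only: assumes certifi is available in the APK (it is listed in
# `requirements = certifi,...` above). Run this before any HTTPS request,
# e.g. at the very top of main.py.
import os
import ssl

import certifi

# Make OpenSSL's default verify paths point at certifi's bundle, so every
# ssl.create_default_context() call (including the one urllib uses) finds it.
os.environ['SSL_CERT_FILE'] = certifi.where()

# Alternatively, build an explicit context to pass where an API accepts one.
ctx = ssl.create_default_context(cafile=certifi.where())
```
The recipe change shown in the diff below takes a related route: it bundles certifi with every build by adding it as a Python dependency of the kivy recipe (`python_depends = ['certifi']`).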
Any ideas?
Thank's
| [
{
"content": "from pythonforandroid.recipe import CythonRecipe\nfrom pythonforandroid.toolchain import current_directory, shprint\nfrom os.path import exists, join, basename\nimport sh\nimport glob\n\n\nclass KivyRecipe(CythonRecipe):\n version = '1.11.1'\n url = 'https://github.com/kivy/kivy/archive/{version}.zip'\n name = 'kivy'\n\n depends = ['sdl2', 'pyjnius', 'setuptools']\n\n def cythonize_build(self, env, build_dir='.'):\n super(KivyRecipe, self).cythonize_build(env, build_dir=build_dir)\n\n if not exists(join(build_dir, 'kivy', 'include')):\n return\n\n # If kivy is new enough to use the include dir, copy it\n # manually to the right location as we bypass this stage of\n # the build\n with current_directory(build_dir):\n build_libs_dirs = glob.glob(join('build', 'lib.*'))\n\n for dirn in build_libs_dirs:\n shprint(sh.cp, '-r', join('kivy', 'include'),\n join(dirn, 'kivy'))\n\n def cythonize_file(self, env, build_dir, filename):\n # We can ignore a few files that aren't important to the\n # android build, and may not work on Android anyway\n do_not_cythonize = ['window_x11.pyx', ]\n if basename(filename) in do_not_cythonize:\n return\n super(KivyRecipe, self).cythonize_file(env, build_dir, filename)\n\n def get_recipe_env(self, arch):\n env = super(KivyRecipe, self).get_recipe_env(arch)\n if 'sdl2' in self.ctx.recipe_build_order:\n env['USE_SDL2'] = '1'\n env['KIVY_SPLIT_EXAMPLES'] = '1'\n env['KIVY_SDL2_PATH'] = ':'.join([\n join(self.ctx.bootstrap.build_dir, 'jni', 'SDL', 'include'),\n join(self.ctx.bootstrap.build_dir, 'jni', 'SDL2_image'),\n join(self.ctx.bootstrap.build_dir, 'jni', 'SDL2_mixer'),\n join(self.ctx.bootstrap.build_dir, 'jni', 'SDL2_ttf'),\n ])\n\n return env\n\n\nrecipe = KivyRecipe()\n",
"path": "pythonforandroid/recipes/kivy/__init__.py"
}
] | [
{
"content": "from pythonforandroid.recipe import CythonRecipe\nfrom pythonforandroid.toolchain import current_directory, shprint\nfrom os.path import exists, join, basename\nimport sh\nimport glob\n\n\nclass KivyRecipe(CythonRecipe):\n version = '1.11.1'\n url = 'https://github.com/kivy/kivy/archive/{version}.zip'\n name = 'kivy'\n\n depends = ['sdl2', 'pyjnius', 'setuptools']\n python_depends = ['certifi']\n\n def cythonize_build(self, env, build_dir='.'):\n super(KivyRecipe, self).cythonize_build(env, build_dir=build_dir)\n\n if not exists(join(build_dir, 'kivy', 'include')):\n return\n\n # If kivy is new enough to use the include dir, copy it\n # manually to the right location as we bypass this stage of\n # the build\n with current_directory(build_dir):\n build_libs_dirs = glob.glob(join('build', 'lib.*'))\n\n for dirn in build_libs_dirs:\n shprint(sh.cp, '-r', join('kivy', 'include'),\n join(dirn, 'kivy'))\n\n def cythonize_file(self, env, build_dir, filename):\n # We can ignore a few files that aren't important to the\n # android build, and may not work on Android anyway\n do_not_cythonize = ['window_x11.pyx', ]\n if basename(filename) in do_not_cythonize:\n return\n super(KivyRecipe, self).cythonize_file(env, build_dir, filename)\n\n def get_recipe_env(self, arch):\n env = super(KivyRecipe, self).get_recipe_env(arch)\n if 'sdl2' in self.ctx.recipe_build_order:\n env['USE_SDL2'] = '1'\n env['KIVY_SPLIT_EXAMPLES'] = '1'\n env['KIVY_SDL2_PATH'] = ':'.join([\n join(self.ctx.bootstrap.build_dir, 'jni', 'SDL', 'include'),\n join(self.ctx.bootstrap.build_dir, 'jni', 'SDL2_image'),\n join(self.ctx.bootstrap.build_dir, 'jni', 'SDL2_mixer'),\n join(self.ctx.bootstrap.build_dir, 'jni', 'SDL2_ttf'),\n ])\n\n return env\n\n\nrecipe = KivyRecipe()\n",
"path": "pythonforandroid/recipes/kivy/__init__.py"
}
] | diff --git a/pythonforandroid/recipes/kivy/__init__.py b/pythonforandroid/recipes/kivy/__init__.py
index 3106f25ce6..a93627a021 100644
--- a/pythonforandroid/recipes/kivy/__init__.py
+++ b/pythonforandroid/recipes/kivy/__init__.py
@@ -11,6 +11,7 @@ class KivyRecipe(CythonRecipe):
name = 'kivy'
depends = ['sdl2', 'pyjnius', 'setuptools']
+ python_depends = ['certifi']
def cythonize_build(self, env, build_dir='.'):
super(KivyRecipe, self).cythonize_build(env, build_dir=build_dir)
|
blaze__blaze-1560 | Use of deprecated `flask.ext.cors` results in a warning
``` python
In [1]: import blaze as bz
C:\Python\envs\py-dev\lib\site-packages\flask\exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cors is deprecated, use flask_cors instead.
.format(x=modname), ExtDeprecationWarning
```
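The replacement is a one-line import change; a minimal sketch (assuming the `flask-cors` package is installed, which it must already be for the old import path to resolve):
```python
# Deprecated extension-style import (emits ExtDeprecationWarning on Flask >= 0.11):
# from flask.ext.cors import cross_origin

# Direct package import: same decorator, no warning.
from flask_cors import cross_origin
```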
Looks like the culprit is:
https://github.com/blaze/blaze/blob/bcddeba0230743d040bc915804af2ff906ce4758/blaze/server/server.py#L22
| [
{
"content": "from __future__ import absolute_import, division, print_function\n\nimport sys\nimport logging\nfrom logging import Formatter\nfrom functools import wraps\nimport traceback\nimport collections\nfrom datetime import datetime\nimport errno\nimport functools\nfrom hashlib import md5\nimport os\nimport socket\nfrom time import time\nfrom warnings import warn\nimport importlib\n\nfrom datashape import discover, pprint\nimport flask\nfrom flask import Blueprint, Flask, Response\nfrom flask.ext.cors import cross_origin\nfrom werkzeug.http import parse_options_header\nfrom toolz import valmap, compose\n\nimport blaze\nfrom blaze import compute, resource\nfrom blaze.compatibility import ExitStack\nfrom blaze.compute import compute_up\nfrom .serialization import json, all_formats\nfrom ..interactive import _Data\nfrom ..expr import Expr, symbol, utils as expr_utils, Symbol\n\n\n__all__ = 'Server', 'to_tree', 'from_tree', 'expr_md5'\n\n# http://www.speedguide.net/port.php?port=6363\n# http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers\nDEFAULT_PORT = 6363\n\n\nclass RC(object):\n \"\"\"\n Simple namespace for HTTP status codes.\n https://en.wikipedia.org/wiki/List_of_HTTP_status_codes\n \"\"\"\n\n OK = 200\n CREATED = 201\n\n BAD_REQUEST = 400\n UNAUTHORIZED = 401\n NOT_FOUND = 404\n FORBIDDEN = 403\n CONFLICT = 409\n UNPROCESSABLE_ENTITY = 422\n UNSUPPORTED_MEDIA_TYPE = 415\n\n INTERNAL_SERVER_ERROR = 500\n NOT_IMPLEMENTED = 501\n\n\napi = Blueprint('api', __name__)\npickle_extension_api = Blueprint('pickle_extension_api', __name__)\n\n\n_no_default = object() # sentinel\n\n\ndef _logging(func):\n @wraps(func)\n def _logger(*args, **kwargs):\n logger = flask.current_app.logger\n try:\n logger.debug(\"Calling %s\" % func.__name__)\n ret = func(*args, **kwargs)\n finally:\n logger.debug(\"Leaving %s\" % func.__name__)\n return ret\n return _logger\n\n\ndef _get_option(option, options, default=_no_default):\n try:\n return options[option]\n except KeyError:\n if default is not _no_default:\n return default\n\n # Provides a more informative error message.\n msg = 'The blaze api must be registered with {option}'\n raise TypeError(msg.format(option=option))\n\n\ndef ensure_dir(path):\n try:\n os.makedirs(path)\n except OSError as e:\n if e.errno != errno.EEXIST:\n raise\n\n# Default for logging exception tracebacks is to simply use the standard\n# `traceback.format_tb`\n_default_log_exception_formatter = compose(''.join, traceback.format_tb)\n\n\ndef _register_api(app, options, first_registration=False):\n \"\"\"\n Register the data with the blueprint.\n \"\"\"\n _get_data.cache[app] = _get_option('data', options)\n _get_format.cache[app] = {f.name: f for f in _get_option('formats', options)}\n _get_auth.cache[app] = (_get_option('authorization', options, None) or\n (lambda a: True))\n allow_profiler = _get_option('allow_profiler', options, False)\n profiler_output = _get_option('profiler_output', options, None)\n profile_by_default = _get_option('profile_by_default', options, False)\n if not allow_profiler and (profiler_output or profile_by_default):\n msg = \"cannot set %s%s%s when 'allow_profiler' is False\"\n raise ValueError(msg % ('profiler_output' if profiler_output else '',\n ' or ' if profiler_output and profile_by_default else '',\n 'profile_by_default' if profile_by_default else ''))\n if allow_profiler:\n if profiler_output is None:\n profiler_output = 'profiler_output'\n if profiler_output != ':response':\n ensure_dir(profiler_output)\n\n _get_profiler_info.cache[app] = 
(allow_profiler,\n profiler_output,\n profile_by_default)\n\n # Allowing users to dynamically add datasets to the Blaze server can be\n # dangerous, so we only expose the method if specifically requested\n allow_add = _get_option('allow_add', options, False)\n if allow_add:\n app.add_url_rule('/add', 'addserver', addserver,\n methods=['POST', 'HEAD', 'OPTIONS'])\n\n # Call the original register function.\n Blueprint.register(api, app, options, first_registration)\n\napi.register = _register_api\n\n\ndef per_app_accesor(name):\n def _get():\n return _get.cache[flask.current_app]\n _get.cache = {}\n _get.__name__ = '_get' + name\n return _get\n\n\ndef _get_format(name):\n return _get_format.cache[flask.current_app][name]\n_get_format.cache = {}\n\n_get_data = per_app_accesor('data')\n_get_auth = per_app_accesor('auth')\n_get_profiler_info = per_app_accesor('profiler_info')\n\n\ndef expr_md5(expr):\n \"\"\"Returns the md5 hash of the str of the expression.\n\n Parameters\n ----------\n expr : Expr\n The expression to hash.\n\n Returns\n -------\n hexdigest : str\n The hexdigest of the md5 of the str of ``expr``.\n \"\"\"\n exprstr = str(expr)\n if not isinstance(exprstr, bytes):\n exprstr = exprstr.encode('utf-8')\n return md5(exprstr).hexdigest()\n\n\ndef _prof_path(profiler_output, expr):\n \"\"\"Get the path to write the data for a profile run of ``expr``.\n\n Parameters\n ----------\n profiler_output : str\n The director to write into.\n expr : Expr\n The expression that was run.\n\n Returns\n -------\n prof_path : str\n The filepath to write the new profiler data.\n\n Notes\n -----\n This function ensures that the dirname of the returned path exists.\n \"\"\"\n dir_ = os.path.join(profiler_output,\n expr_md5(expr)) # Use md5 so the client knows where to look.\n ensure_dir(dir_)\n return os.path.join(dir_,\n str(int(datetime.utcnow().timestamp())))\n\n\ndef authorization(f):\n @functools.wraps(f)\n def authorized(*args, **kwargs):\n if not _get_auth()(flask.request.authorization):\n return Response('bad auth token',\n RC.UNAUTHORIZED,\n {'WWW-Authenticate': 'Basic realm=\"Login Required\"'})\n return f(*args, **kwargs)\n return authorized\n\n\ndef check_request(f):\n @functools.wraps(f)\n def check():\n raw_content_type = flask.request.headers['content-type']\n content_type, options = parse_options_header(raw_content_type)\n\n if content_type not in accepted_mimetypes:\n return ('Unsupported serialization format %s' % content_type,\n RC.UNSUPPORTED_MEDIA_TYPE)\n\n try:\n serial = _get_format(accepted_mimetypes[content_type])\n except KeyError:\n return (\"Unsupported serialization format '%s'\" % content_type,\n RC.UNSUPPORTED_MEDIA_TYPE)\n\n try:\n payload = serial.loads(flask.request.data)\n except ValueError:\n return (\"Bad data. 
Got %s \" % flask.request.data, RC.BAD_REQUEST)\n\n return f(payload, serial)\n return check\n\n\nclass FlaskWithExceptionFormatting(Flask):\n \"\"\" Add a `log_exception_formatter` instance attribute to the Flask\n application object, to allow it to store a handler function.\n \"\"\"\n log_exception_formatter = None\n\n def __init__(self, *args, **kwargs):\n self.log_exception_formatter = kwargs.pop('log_exception_formatter',\n _default_log_exception_formatter)\n super(FlaskWithExceptionFormatting, self).__init__(*args, **kwargs)\n\n\nclass Server(object):\n\n \"\"\" Blaze Data Server\n\n Host local data through a web API\n\n Parameters\n ----------\n data : dict, optional\n A dictionary mapping dataset name to any data format that blaze\n understands.\n formats : iterable, optional\n An iterable of supported serialization formats. By default, the\n server will support JSON.\n A serialization format is an object that supports:\n name, loads, and dumps.\n authorization : callable, optional\n A callable to be used to check the auth header from the client.\n This callable should accept a single argument that will either be\n None indicating that no header was passed, or an object\n containing a username and password attribute. By default, all requests\n are allowed.\n allow_profiler : bool, optional\n Allow payloads to specify `\"profile\": true` which will run the\n computation under cProfile.\n profiler_output : str, optional\n The directory to write pstats files after profile runs.\n The files will be written in a structure like:\n\n {profiler_output}/{hash(expr)}/{timestamp}\n\n This defaults to a relative path of `profiler_output`.\n This requires `allow_profiler=True`.\n\n If this is the string ':response' then writing to the local filesystem\n is disabled. Only requests that specify `profiler_output=':response'`\n will be served. All others will return a 403 (Forbidden).\n profile_by_default : bool, optional\n Run the profiler on any computation that does not explicitly set\n \"profile\": false.\n This requires `allow_profiler=True`.\n allow_add : bool, optional\n Expose an `/add` endpoint to allow datasets to be dynamically added to\n the server. Since this increases the risk of security holes, it defaults\n to `False`.\n logfile : str or file-like object, optional\n A filename or open file-like stream to which to send log output. Defaults\n to `sys.stdout`.\n loglevel : str, optional\n A string logging level (e.g. 'WARNING', 'INFO') to set how verbose log\n output should be.\n log_exception_formatter : callable, optional\n A callable to be used to format an exception traceback for logging. It\n should take a traceback argument, and return the string to be logged.\n This defaults to the standard library `traceback.format_tb`\n\n\n Examples\n --------\n >>> from pandas import DataFrame\n >>> df = DataFrame([[1, 'Alice', 100],\n ... [2, 'Bob', -200],\n ... [3, 'Alice', 300],\n ... [4, 'Dennis', 400],\n ... [5, 'Bob', -500]],\n ... 
columns=['id', 'name', 'amount'])\n\n >>> server = Server({'accounts': df})\n >>> server.run() # doctest: +SKIP\n \"\"\"\n def __init__(self,\n data=None,\n formats=None,\n authorization=None,\n allow_profiler=False,\n profiler_output=None,\n profile_by_default=False,\n allow_add=False,\n logfile=sys.stdout,\n loglevel='WARNING',\n log_exception_formatter=_default_log_exception_formatter):\n if isinstance(data, collections.Mapping):\n data = valmap(lambda v: v.data if isinstance(v, _Data) else v,\n data)\n elif isinstance(data, _Data):\n data = data._resources()\n app = self.app = FlaskWithExceptionFormatting('blaze.server.server',\n log_exception_formatter=log_exception_formatter)\n if data is None:\n data = {}\n app.register_blueprint(api,\n data=data,\n formats=formats if formats is not None else (json,),\n authorization=authorization,\n allow_profiler=allow_profiler,\n profiler_output=profiler_output,\n profile_by_default=profile_by_default,\n allow_add=allow_add)\n self.data = data\n if logfile:\n if isinstance(logfile, (str, bytes)):\n handler = logging.FileHandler(logfile)\n else:\n handler = logging.StreamHandler(logfile)\n handler.setFormatter(Formatter('[%(asctime)s %(levelname)s] %(message)s '\n '[in %(pathname)s:%(lineno)d]'))\n handler.setLevel(getattr(logging, loglevel))\n app.logger.addHandler(handler)\n\n def run(self, port=DEFAULT_PORT, retry=False, **kwargs):\n \"\"\"Run the server.\n\n Parameters\n ----------\n port : int, optional\n The port to bind to.\n retry : bool, optional\n If the port is busy, should we retry with the next available port?\n **kwargs\n Forwarded to the underlying flask app's ``run`` method.\n\n Notes\n -----\n This function blocks forever when successful.\n \"\"\"\n self.port = port\n try:\n # Blocks until the server is shut down.\n self.app.logger.debug('Starting server...')\n self.app.run(port=port, **kwargs)\n self.app.logger.debug('Stopping server...')\n except socket.error:\n if not retry:\n raise\n\n warn(\"Oops, couldn't connect on port %d. Is it busy?\" % port)\n # Attempt to start the server on a new port.\n self.run(port=port + 1, retry=retry, **kwargs)\n\n\[email protected]('/datashape', methods=['GET'])\n@cross_origin(origins='*', methods=['GET'])\n@authorization\n@_logging\ndef shape():\n return pprint(discover(_get_data()), width=0)\n\n\ndef to_tree(expr, names=None):\n \"\"\" Represent Blaze expression with core data structures\n\n Transform a Blaze expression into a form using only strings, dicts, lists\n and base types (int, float, datetime, ....) This form can be useful for\n serialization.\n\n Parameters\n ----------\n expr : Expr\n A Blaze expression\n\n Examples\n --------\n\n >>> t = symbol('t', 'var * {x: int32, y: int32}')\n >>> to_tree(t) # doctest: +SKIP\n {'op': 'Symbol',\n 'args': ['t', 'var * { x : int32, y : int32 }', False]}\n\n\n >>> to_tree(t.x.sum()) # doctest: +SKIP\n {'op': 'sum',\n 'args': [{'op': 'Column',\n 'args': [{'op': 'Symbol'\n 'args': ['t',\n 'var * { x : int32, y : int32 }',\n False]}\n 'x']}]}\n\n Simplify expresion using explicit ``names`` dictionary. 
In the example\n below we replace the ``Symbol`` node with the string ``'t'``.\n\n >>> tree = to_tree(t.x, names={t: 't'})\n >>> tree # doctest: +SKIP\n {'op': 'Column', 'args': ['t', 'x']}\n\n >>> from_tree(tree, namespace={'t': t})\n t.x\n\n See Also\n --------\n\n from_tree\n \"\"\"\n if isinstance(expr, slice):\n # NOTE: This case must come first, since `slice` objects are not\n # hashable, so a dict lookup inside `names` will raise an execption.\n return {'op': 'slice',\n 'args': [to_tree(arg, names=names) for arg in\n [expr.start, expr.stop, expr.step]]}\n if names and expr in names:\n return names[expr]\n if isinstance(expr, tuple):\n return [to_tree(arg, names=names) for arg in expr]\n if isinstance(expr, expr_utils._slice):\n return to_tree(expr.as_slice(), names=names)\n elif isinstance(expr, _Data):\n return to_tree(symbol(expr._name, expr.dshape), names)\n elif isinstance(expr, Expr):\n return {'op': type(expr).__name__,\n 'args': [to_tree(arg, names) for arg in expr._args]}\n else:\n return expr\n\n\ndef expression_from_name(name):\n \"\"\"\n\n >>> expression_from_name('By')\n <class 'blaze.expr.split_apply_combine.By'>\n\n >>> expression_from_name('And')\n <class 'blaze.expr.arithmetic.And'>\n \"\"\"\n import blaze\n if hasattr(blaze, name):\n return getattr(blaze, name)\n if hasattr(blaze.expr, name):\n return getattr(blaze.expr, name)\n for signature, func in compute_up.funcs.items():\n try:\n if signature[0].__name__ == name:\n return signature[0]\n except TypeError:\n pass\n raise ValueError('%s not found in compute_up' % name)\n\n\ndef from_tree(expr, namespace=None):\n \"\"\" Convert core data structures to Blaze expression\n\n Core data structure representations created by ``to_tree`` are converted\n back into Blaze expressions.\n\n Parameters\n ----------\n expr : dict\n\n Examples\n --------\n\n >>> t = symbol('t', 'var * {x: int32, y: int32}')\n >>> tree = to_tree(t)\n >>> tree # doctest: +SKIP\n {'op': 'Symbol',\n 'args': ['t', 'var * { x : int32, y : int32 }', False]}\n\n >>> from_tree(tree)\n <`t` symbol; dshape='var * {x: int32, y: int32}'>\n\n >>> tree = to_tree(t.x.sum())\n >>> tree # doctest: +SKIP\n {'op': 'sum',\n 'args': [{'op': 'Field',\n 'args': [{'op': 'Symbol'\n 'args': ['t',\n 'var * {x : int32, y : int32}',\n False]}\n 'x']}]}\n\n >>> from_tree(tree)\n sum(t.x)\n\n Simplify expresion using explicit ``names`` dictionary. 
In the example\n below we replace the ``Symbol`` node with the string ``'t'``.\n\n >>> tree = to_tree(t.x, names={t: 't'})\n >>> tree # doctest: +SKIP\n {'op': 'Field', 'args': ['t', 'x']}\n\n >>> from_tree(tree, namespace={'t': t})\n t.x\n\n See Also\n --------\n\n to_tree\n \"\"\"\n if isinstance(expr, dict):\n op, args = expr['op'], expr['args']\n if 'slice' == op:\n return expr_utils._slice(*[from_tree(arg, namespace)\n for arg in args])\n if hasattr(blaze.expr, op):\n cls = getattr(blaze.expr, op)\n else:\n cls = expression_from_name(op)\n if cls is Symbol:\n cls = symbol\n children = [from_tree(arg, namespace) for arg in args]\n return cls(*children)\n elif isinstance(expr, (list, tuple)):\n return tuple(from_tree(arg, namespace) for arg in expr)\n if namespace and expr in namespace:\n return namespace[expr]\n else:\n return expr\n\n\naccepted_mimetypes = {'application/vnd.blaze+{}'.format(x.name): x.name for x\n in all_formats}\n\n\[email protected]('/compute', methods=['POST', 'HEAD', 'OPTIONS'])\n@cross_origin(origins='*', methods=['POST', 'HEAD', 'OPTIONS'])\n@authorization\n@check_request\n@_logging\ndef compserver(payload, serial):\n app = flask.current_app\n (allow_profiler,\n default_profiler_output,\n profile_by_default) = _get_profiler_info()\n requested_profiler_output = payload.get('profiler_output',\n default_profiler_output)\n profile = payload.get('profile')\n profiling = (allow_profiler and\n (profile or (profile_by_default and requested_profiler_output)))\n if profile and not allow_profiler:\n return ('profiling is disabled on this server', RC.FORBIDDEN)\n\n with ExitStack() as response_construction_context_stack:\n if profiling:\n from cProfile import Profile\n\n if (default_profiler_output == ':response' and\n requested_profiler_output != ':response'):\n # writing to the local filesystem is disabled\n return (\"local filepaths are disabled on this server, only\"\n \" ':response' is allowed for the 'profiler_output' field\",\n RC.FORBIDDEN)\n\n profiler_output = requested_profiler_output\n profiler = Profile()\n profiler.enable()\n # ensure that we stop profiling in the case of an exception\n response_construction_context_stack.callback(profiler.disable)\n\n expr = '<failed to parse expr>'\n\n @response_construction_context_stack.callback\n def log_time(start=time()):\n app.logger.info('compute expr: %s\\ntotal time (s): %.3f',\n expr,\n time() - start)\n\n ns = payload.get('namespace', {})\n compute_kwargs = payload.get('compute_kwargs') or {}\n odo_kwargs = payload.get('odo_kwargs') or {}\n dataset = _get_data()\n ns[':leaf'] = symbol('leaf', discover(dataset))\n\n expr = from_tree(payload['expr'], namespace=ns)\n assert len(expr._leaves()) == 1\n leaf = expr._leaves()[0]\n\n try:\n formatter = getattr(flask.current_app, 'log_exception_formatter',\n _default_log_exception_formatter)\n result = serial.materialize(compute(expr,\n {leaf: dataset},\n **compute_kwargs),\n expr.dshape,\n odo_kwargs)\n except NotImplementedError as e:\n # Note: `sys.exc_info()[2]` holds the current traceback, for\n # Python 2 / 3 compatibility. 
It's important not to store a local\n # reference to it.\n formatted_tb = formatter(sys.exc_info()[2])\n error_msg = \"Computation not supported:\\n%s\\n%s\" % (e, formatted_tb)\n app.logger.error(error_msg)\n return (error_msg, RC.NOT_IMPLEMENTED)\n except Exception as e:\n formatted_tb = formatter(sys.exc_info()[2])\n error_msg = \"Computation failed with message:\\n%s: %s\\n%s\" % (type(e).__name__, e, formatted_tb)\n app.logger.error(error_msg)\n return (error_msg, RC.INTERNAL_SERVER_ERROR)\n\n response = {'datashape': pprint(expr.dshape, width=0),\n 'data': serial.data_dumps(result),\n 'names': expr.fields}\n\n if profiling:\n import marshal\n from pstats import Stats\n\n if profiler_output == ':response':\n from pandas.compat import BytesIO\n file = BytesIO()\n else:\n file = open(_prof_path(profiler_output, expr), 'wb')\n\n with file:\n # Use marshal to dump the stats data to the given file.\n # This is taken from cProfile which unfortunately does not have\n # an api that allows us to pass the file object directly, only\n # a file path.\n marshal.dump(Stats(profiler).stats, file)\n if profiler_output == ':response':\n response['profiler_output'] = {'__!bytes': file.getvalue()}\n\n return serial.dumps(response)\n\n\n@cross_origin(origins='*', methods=['POST', 'HEAD', 'OPTIONS'])\n@authorization\n@check_request\n@_logging\ndef addserver(payload, serial):\n \"\"\"Add a data resource to the server.\n\n The request should contain serialized MutableMapping (dictionary) like\n object, and the server should already be hosting a MutableMapping resource.\n \"\"\"\n\n data = _get_data.cache[flask.current_app]\n\n if not isinstance(data, collections.MutableMapping):\n data_not_mm_msg = (\"Cannot update blaze server data since its current \"\n \"data is a %s and not a mutable mapping (dictionary \"\n \"like).\")\n return (data_not_mm_msg % type(data), RC.UNPROCESSABLE_ENTITY)\n\n if not isinstance(payload, collections.Mapping):\n payload_not_mm_msg = (\"Need a dictionary-like payload; instead was \"\n \"given %s of type %s.\")\n return (payload_not_mm_msg % (payload, type(payload)),\n RC.UNPROCESSABLE_ENTITY)\n\n if len(payload) > 1:\n error_msg = \"Given more than one resource to add: %s\"\n return (error_msg % list(payload.keys()),\n RC.UNPROCESSABLE_ENTITY)\n\n [(name, resource_info)] = payload.items()\n flask.current_app.logger.debug(\"Attempting to add dataset '%s'\" % name)\n\n if name in data:\n msg = \"Cannot add dataset named %s, already exists on server.\"\n return (msg % name, RC.CONFLICT)\n\n try:\n imports = []\n if isinstance(resource_info, dict):\n # Extract resource creation arguments\n source = resource_info['source']\n imports = resource_info.get('imports', [])\n args = resource_info.get('args', [])\n kwargs = resource_info.get('kwargs', {})\n else:\n # Just a URI\n source, args, kwargs = resource_info, [], {}\n # If we've been given libraries to import, we need to do so\n # before we can create the resource.\n for mod in imports:\n importlib.import_module(mod)\n # Make a new resource and try to discover it.\n new_resource = {name: resource(source, *args, **kwargs)}\n # Discovery is a minimal consistency check to determine if the new\n # resource is valid.\n ds = discover(new_resource)\n if name not in ds.dict:\n raise ValueError(\"%s not added.\" % name)\n except NotImplementedError as e:\n error_msg = \"Addition not supported:\\n%s: %s\"\n return (error_msg % (type(e).__name__, e),\n RC.UNPROCESSABLE_ENTITY)\n except Exception as e:\n error_msg = \"Addition failed with 
message:\\n%s: %s\"\n return (error_msg % (type(e).__name__, e),\n RC.UNPROCESSABLE_ENTITY)\n else:\n # Now that we've established that the new resource is discoverable--and\n # thus exists and is accessible--we add the resource to the server.\n data.update(new_resource)\n\n return ('OK', RC.CREATED)\n",
"path": "blaze/server/server.py"
}
] | [
{
"content": "from __future__ import absolute_import, division, print_function\n\nimport sys\nimport logging\nfrom logging import Formatter\nfrom functools import wraps\nimport traceback\nimport collections\nfrom datetime import datetime\nimport errno\nimport functools\nfrom hashlib import md5\nimport os\nimport socket\nfrom time import time\nfrom warnings import warn\nimport importlib\n\nfrom datashape import discover, pprint\nimport flask\nfrom flask import Blueprint, Flask, Response\nfrom flask_cors import cross_origin\nfrom werkzeug.http import parse_options_header\nfrom toolz import valmap, compose\n\nimport blaze\nfrom blaze import compute, resource\nfrom blaze.compatibility import ExitStack\nfrom blaze.compute import compute_up\nfrom .serialization import json, all_formats\nfrom ..interactive import _Data\nfrom ..expr import Expr, symbol, utils as expr_utils, Symbol\n\n\n__all__ = 'Server', 'to_tree', 'from_tree', 'expr_md5'\n\n# http://www.speedguide.net/port.php?port=6363\n# http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers\nDEFAULT_PORT = 6363\n\n\nclass RC(object):\n \"\"\"\n Simple namespace for HTTP status codes.\n https://en.wikipedia.org/wiki/List_of_HTTP_status_codes\n \"\"\"\n\n OK = 200\n CREATED = 201\n\n BAD_REQUEST = 400\n UNAUTHORIZED = 401\n NOT_FOUND = 404\n FORBIDDEN = 403\n CONFLICT = 409\n UNPROCESSABLE_ENTITY = 422\n UNSUPPORTED_MEDIA_TYPE = 415\n\n INTERNAL_SERVER_ERROR = 500\n NOT_IMPLEMENTED = 501\n\n\napi = Blueprint('api', __name__)\npickle_extension_api = Blueprint('pickle_extension_api', __name__)\n\n\n_no_default = object() # sentinel\n\n\ndef _logging(func):\n @wraps(func)\n def _logger(*args, **kwargs):\n logger = flask.current_app.logger\n try:\n logger.debug(\"Calling %s\" % func.__name__)\n ret = func(*args, **kwargs)\n finally:\n logger.debug(\"Leaving %s\" % func.__name__)\n return ret\n return _logger\n\n\ndef _get_option(option, options, default=_no_default):\n try:\n return options[option]\n except KeyError:\n if default is not _no_default:\n return default\n\n # Provides a more informative error message.\n msg = 'The blaze api must be registered with {option}'\n raise TypeError(msg.format(option=option))\n\n\ndef ensure_dir(path):\n try:\n os.makedirs(path)\n except OSError as e:\n if e.errno != errno.EEXIST:\n raise\n\n# Default for logging exception tracebacks is to simply use the standard\n# `traceback.format_tb`\n_default_log_exception_formatter = compose(''.join, traceback.format_tb)\n\n\ndef _register_api(app, options, first_registration=False):\n \"\"\"\n Register the data with the blueprint.\n \"\"\"\n _get_data.cache[app] = _get_option('data', options)\n _get_format.cache[app] = {f.name: f for f in _get_option('formats', options)}\n _get_auth.cache[app] = (_get_option('authorization', options, None) or\n (lambda a: True))\n allow_profiler = _get_option('allow_profiler', options, False)\n profiler_output = _get_option('profiler_output', options, None)\n profile_by_default = _get_option('profile_by_default', options, False)\n if not allow_profiler and (profiler_output or profile_by_default):\n msg = \"cannot set %s%s%s when 'allow_profiler' is False\"\n raise ValueError(msg % ('profiler_output' if profiler_output else '',\n ' or ' if profiler_output and profile_by_default else '',\n 'profile_by_default' if profile_by_default else ''))\n if allow_profiler:\n if profiler_output is None:\n profiler_output = 'profiler_output'\n if profiler_output != ':response':\n ensure_dir(profiler_output)\n\n _get_profiler_info.cache[app] = 
(allow_profiler,\n profiler_output,\n profile_by_default)\n\n # Allowing users to dynamically add datasets to the Blaze server can be\n # dangerous, so we only expose the method if specifically requested\n allow_add = _get_option('allow_add', options, False)\n if allow_add:\n app.add_url_rule('/add', 'addserver', addserver,\n methods=['POST', 'HEAD', 'OPTIONS'])\n\n # Call the original register function.\n Blueprint.register(api, app, options, first_registration)\n\napi.register = _register_api\n\n\ndef per_app_accesor(name):\n def _get():\n return _get.cache[flask.current_app]\n _get.cache = {}\n _get.__name__ = '_get' + name\n return _get\n\n\ndef _get_format(name):\n return _get_format.cache[flask.current_app][name]\n_get_format.cache = {}\n\n_get_data = per_app_accesor('data')\n_get_auth = per_app_accesor('auth')\n_get_profiler_info = per_app_accesor('profiler_info')\n\n\ndef expr_md5(expr):\n \"\"\"Returns the md5 hash of the str of the expression.\n\n Parameters\n ----------\n expr : Expr\n The expression to hash.\n\n Returns\n -------\n hexdigest : str\n The hexdigest of the md5 of the str of ``expr``.\n \"\"\"\n exprstr = str(expr)\n if not isinstance(exprstr, bytes):\n exprstr = exprstr.encode('utf-8')\n return md5(exprstr).hexdigest()\n\n\ndef _prof_path(profiler_output, expr):\n \"\"\"Get the path to write the data for a profile run of ``expr``.\n\n Parameters\n ----------\n profiler_output : str\n The director to write into.\n expr : Expr\n The expression that was run.\n\n Returns\n -------\n prof_path : str\n The filepath to write the new profiler data.\n\n Notes\n -----\n This function ensures that the dirname of the returned path exists.\n \"\"\"\n dir_ = os.path.join(profiler_output,\n expr_md5(expr)) # Use md5 so the client knows where to look.\n ensure_dir(dir_)\n return os.path.join(dir_,\n str(int(datetime.utcnow().timestamp())))\n\n\ndef authorization(f):\n @functools.wraps(f)\n def authorized(*args, **kwargs):\n if not _get_auth()(flask.request.authorization):\n return Response('bad auth token',\n RC.UNAUTHORIZED,\n {'WWW-Authenticate': 'Basic realm=\"Login Required\"'})\n return f(*args, **kwargs)\n return authorized\n\n\ndef check_request(f):\n @functools.wraps(f)\n def check():\n raw_content_type = flask.request.headers['content-type']\n content_type, options = parse_options_header(raw_content_type)\n\n if content_type not in accepted_mimetypes:\n return ('Unsupported serialization format %s' % content_type,\n RC.UNSUPPORTED_MEDIA_TYPE)\n\n try:\n serial = _get_format(accepted_mimetypes[content_type])\n except KeyError:\n return (\"Unsupported serialization format '%s'\" % content_type,\n RC.UNSUPPORTED_MEDIA_TYPE)\n\n try:\n payload = serial.loads(flask.request.data)\n except ValueError:\n return (\"Bad data. 
Got %s \" % flask.request.data, RC.BAD_REQUEST)\n\n return f(payload, serial)\n return check\n\n\nclass FlaskWithExceptionFormatting(Flask):\n \"\"\" Add a `log_exception_formatter` instance attribute to the Flask\n application object, to allow it to store a handler function.\n \"\"\"\n log_exception_formatter = None\n\n def __init__(self, *args, **kwargs):\n self.log_exception_formatter = kwargs.pop('log_exception_formatter',\n _default_log_exception_formatter)\n super(FlaskWithExceptionFormatting, self).__init__(*args, **kwargs)\n\n\nclass Server(object):\n\n \"\"\" Blaze Data Server\n\n Host local data through a web API\n\n Parameters\n ----------\n data : dict, optional\n A dictionary mapping dataset name to any data format that blaze\n understands.\n formats : iterable, optional\n An iterable of supported serialization formats. By default, the\n server will support JSON.\n A serialization format is an object that supports:\n name, loads, and dumps.\n authorization : callable, optional\n A callable to be used to check the auth header from the client.\n This callable should accept a single argument that will either be\n None indicating that no header was passed, or an object\n containing a username and password attribute. By default, all requests\n are allowed.\n allow_profiler : bool, optional\n Allow payloads to specify `\"profile\": true` which will run the\n computation under cProfile.\n profiler_output : str, optional\n The directory to write pstats files after profile runs.\n The files will be written in a structure like:\n\n {profiler_output}/{hash(expr)}/{timestamp}\n\n This defaults to a relative path of `profiler_output`.\n This requires `allow_profiler=True`.\n\n If this is the string ':response' then writing to the local filesystem\n is disabled. Only requests that specify `profiler_output=':response'`\n will be served. All others will return a 403 (Forbidden).\n profile_by_default : bool, optional\n Run the profiler on any computation that does not explicitly set\n \"profile\": false.\n This requires `allow_profiler=True`.\n allow_add : bool, optional\n Expose an `/add` endpoint to allow datasets to be dynamically added to\n the server. Since this increases the risk of security holes, it defaults\n to `False`.\n logfile : str or file-like object, optional\n A filename or open file-like stream to which to send log output. Defaults\n to `sys.stdout`.\n loglevel : str, optional\n A string logging level (e.g. 'WARNING', 'INFO') to set how verbose log\n output should be.\n log_exception_formatter : callable, optional\n A callable to be used to format an exception traceback for logging. It\n should take a traceback argument, and return the string to be logged.\n This defaults to the standard library `traceback.format_tb`\n\n\n Examples\n --------\n >>> from pandas import DataFrame\n >>> df = DataFrame([[1, 'Alice', 100],\n ... [2, 'Bob', -200],\n ... [3, 'Alice', 300],\n ... [4, 'Dennis', 400],\n ... [5, 'Bob', -500]],\n ... 
columns=['id', 'name', 'amount'])\n\n >>> server = Server({'accounts': df})\n >>> server.run() # doctest: +SKIP\n \"\"\"\n def __init__(self,\n data=None,\n formats=None,\n authorization=None,\n allow_profiler=False,\n profiler_output=None,\n profile_by_default=False,\n allow_add=False,\n logfile=sys.stdout,\n loglevel='WARNING',\n log_exception_formatter=_default_log_exception_formatter):\n if isinstance(data, collections.Mapping):\n data = valmap(lambda v: v.data if isinstance(v, _Data) else v,\n data)\n elif isinstance(data, _Data):\n data = data._resources()\n app = self.app = FlaskWithExceptionFormatting('blaze.server.server',\n log_exception_formatter=log_exception_formatter)\n if data is None:\n data = {}\n app.register_blueprint(api,\n data=data,\n formats=formats if formats is not None else (json,),\n authorization=authorization,\n allow_profiler=allow_profiler,\n profiler_output=profiler_output,\n profile_by_default=profile_by_default,\n allow_add=allow_add)\n self.data = data\n if logfile:\n if isinstance(logfile, (str, bytes)):\n handler = logging.FileHandler(logfile)\n else:\n handler = logging.StreamHandler(logfile)\n handler.setFormatter(Formatter('[%(asctime)s %(levelname)s] %(message)s '\n '[in %(pathname)s:%(lineno)d]'))\n handler.setLevel(getattr(logging, loglevel))\n app.logger.addHandler(handler)\n\n def run(self, port=DEFAULT_PORT, retry=False, **kwargs):\n \"\"\"Run the server.\n\n Parameters\n ----------\n port : int, optional\n The port to bind to.\n retry : bool, optional\n If the port is busy, should we retry with the next available port?\n **kwargs\n Forwarded to the underlying flask app's ``run`` method.\n\n Notes\n -----\n This function blocks forever when successful.\n \"\"\"\n self.port = port\n try:\n # Blocks until the server is shut down.\n self.app.logger.debug('Starting server...')\n self.app.run(port=port, **kwargs)\n self.app.logger.debug('Stopping server...')\n except socket.error:\n if not retry:\n raise\n\n warn(\"Oops, couldn't connect on port %d. Is it busy?\" % port)\n # Attempt to start the server on a new port.\n self.run(port=port + 1, retry=retry, **kwargs)\n\n\[email protected]('/datashape', methods=['GET'])\n@cross_origin(origins='*', methods=['GET'])\n@authorization\n@_logging\ndef shape():\n return pprint(discover(_get_data()), width=0)\n\n\ndef to_tree(expr, names=None):\n \"\"\" Represent Blaze expression with core data structures\n\n Transform a Blaze expression into a form using only strings, dicts, lists\n and base types (int, float, datetime, ....) This form can be useful for\n serialization.\n\n Parameters\n ----------\n expr : Expr\n A Blaze expression\n\n Examples\n --------\n\n >>> t = symbol('t', 'var * {x: int32, y: int32}')\n >>> to_tree(t) # doctest: +SKIP\n {'op': 'Symbol',\n 'args': ['t', 'var * { x : int32, y : int32 }', False]}\n\n\n >>> to_tree(t.x.sum()) # doctest: +SKIP\n {'op': 'sum',\n 'args': [{'op': 'Column',\n 'args': [{'op': 'Symbol'\n 'args': ['t',\n 'var * { x : int32, y : int32 }',\n False]}\n 'x']}]}\n\n Simplify expresion using explicit ``names`` dictionary. 
In the example\n below we replace the ``Symbol`` node with the string ``'t'``.\n\n >>> tree = to_tree(t.x, names={t: 't'})\n >>> tree # doctest: +SKIP\n {'op': 'Column', 'args': ['t', 'x']}\n\n >>> from_tree(tree, namespace={'t': t})\n t.x\n\n See Also\n --------\n\n from_tree\n \"\"\"\n if isinstance(expr, slice):\n # NOTE: This case must come first, since `slice` objects are not\n # hashable, so a dict lookup inside `names` will raise an execption.\n return {'op': 'slice',\n 'args': [to_tree(arg, names=names) for arg in\n [expr.start, expr.stop, expr.step]]}\n if names and expr in names:\n return names[expr]\n if isinstance(expr, tuple):\n return [to_tree(arg, names=names) for arg in expr]\n if isinstance(expr, expr_utils._slice):\n return to_tree(expr.as_slice(), names=names)\n elif isinstance(expr, _Data):\n return to_tree(symbol(expr._name, expr.dshape), names)\n elif isinstance(expr, Expr):\n return {'op': type(expr).__name__,\n 'args': [to_tree(arg, names) for arg in expr._args]}\n else:\n return expr\n\n\ndef expression_from_name(name):\n \"\"\"\n\n >>> expression_from_name('By')\n <class 'blaze.expr.split_apply_combine.By'>\n\n >>> expression_from_name('And')\n <class 'blaze.expr.arithmetic.And'>\n \"\"\"\n import blaze\n if hasattr(blaze, name):\n return getattr(blaze, name)\n if hasattr(blaze.expr, name):\n return getattr(blaze.expr, name)\n for signature, func in compute_up.funcs.items():\n try:\n if signature[0].__name__ == name:\n return signature[0]\n except TypeError:\n pass\n raise ValueError('%s not found in compute_up' % name)\n\n\ndef from_tree(expr, namespace=None):\n \"\"\" Convert core data structures to Blaze expression\n\n Core data structure representations created by ``to_tree`` are converted\n back into Blaze expressions.\n\n Parameters\n ----------\n expr : dict\n\n Examples\n --------\n\n >>> t = symbol('t', 'var * {x: int32, y: int32}')\n >>> tree = to_tree(t)\n >>> tree # doctest: +SKIP\n {'op': 'Symbol',\n 'args': ['t', 'var * { x : int32, y : int32 }', False]}\n\n >>> from_tree(tree)\n <`t` symbol; dshape='var * {x: int32, y: int32}'>\n\n >>> tree = to_tree(t.x.sum())\n >>> tree # doctest: +SKIP\n {'op': 'sum',\n 'args': [{'op': 'Field',\n 'args': [{'op': 'Symbol'\n 'args': ['t',\n 'var * {x : int32, y : int32}',\n False]}\n 'x']}]}\n\n >>> from_tree(tree)\n sum(t.x)\n\n Simplify expresion using explicit ``names`` dictionary. 
In the example\n below we replace the ``Symbol`` node with the string ``'t'``.\n\n >>> tree = to_tree(t.x, names={t: 't'})\n >>> tree # doctest: +SKIP\n {'op': 'Field', 'args': ['t', 'x']}\n\n >>> from_tree(tree, namespace={'t': t})\n t.x\n\n See Also\n --------\n\n to_tree\n \"\"\"\n if isinstance(expr, dict):\n op, args = expr['op'], expr['args']\n if 'slice' == op:\n return expr_utils._slice(*[from_tree(arg, namespace)\n for arg in args])\n if hasattr(blaze.expr, op):\n cls = getattr(blaze.expr, op)\n else:\n cls = expression_from_name(op)\n if cls is Symbol:\n cls = symbol\n children = [from_tree(arg, namespace) for arg in args]\n return cls(*children)\n elif isinstance(expr, (list, tuple)):\n return tuple(from_tree(arg, namespace) for arg in expr)\n if namespace and expr in namespace:\n return namespace[expr]\n else:\n return expr\n\n\naccepted_mimetypes = {'application/vnd.blaze+{}'.format(x.name): x.name for x\n in all_formats}\n\n\[email protected]('/compute', methods=['POST', 'HEAD', 'OPTIONS'])\n@cross_origin(origins='*', methods=['POST', 'HEAD', 'OPTIONS'])\n@authorization\n@check_request\n@_logging\ndef compserver(payload, serial):\n app = flask.current_app\n (allow_profiler,\n default_profiler_output,\n profile_by_default) = _get_profiler_info()\n requested_profiler_output = payload.get('profiler_output',\n default_profiler_output)\n profile = payload.get('profile')\n profiling = (allow_profiler and\n (profile or (profile_by_default and requested_profiler_output)))\n if profile and not allow_profiler:\n return ('profiling is disabled on this server', RC.FORBIDDEN)\n\n with ExitStack() as response_construction_context_stack:\n if profiling:\n from cProfile import Profile\n\n if (default_profiler_output == ':response' and\n requested_profiler_output != ':response'):\n # writing to the local filesystem is disabled\n return (\"local filepaths are disabled on this server, only\"\n \" ':response' is allowed for the 'profiler_output' field\",\n RC.FORBIDDEN)\n\n profiler_output = requested_profiler_output\n profiler = Profile()\n profiler.enable()\n # ensure that we stop profiling in the case of an exception\n response_construction_context_stack.callback(profiler.disable)\n\n expr = '<failed to parse expr>'\n\n @response_construction_context_stack.callback\n def log_time(start=time()):\n app.logger.info('compute expr: %s\\ntotal time (s): %.3f',\n expr,\n time() - start)\n\n ns = payload.get('namespace', {})\n compute_kwargs = payload.get('compute_kwargs') or {}\n odo_kwargs = payload.get('odo_kwargs') or {}\n dataset = _get_data()\n ns[':leaf'] = symbol('leaf', discover(dataset))\n\n expr = from_tree(payload['expr'], namespace=ns)\n assert len(expr._leaves()) == 1\n leaf = expr._leaves()[0]\n\n try:\n formatter = getattr(flask.current_app, 'log_exception_formatter',\n _default_log_exception_formatter)\n result = serial.materialize(compute(expr,\n {leaf: dataset},\n **compute_kwargs),\n expr.dshape,\n odo_kwargs)\n except NotImplementedError as e:\n # Note: `sys.exc_info()[2]` holds the current traceback, for\n # Python 2 / 3 compatibility. 
It's important not to store a local\n # reference to it.\n formatted_tb = formatter(sys.exc_info()[2])\n error_msg = \"Computation not supported:\\n%s\\n%s\" % (e, formatted_tb)\n app.logger.error(error_msg)\n return (error_msg, RC.NOT_IMPLEMENTED)\n except Exception as e:\n formatted_tb = formatter(sys.exc_info()[2])\n error_msg = \"Computation failed with message:\\n%s: %s\\n%s\" % (type(e).__name__, e, formatted_tb)\n app.logger.error(error_msg)\n return (error_msg, RC.INTERNAL_SERVER_ERROR)\n\n response = {'datashape': pprint(expr.dshape, width=0),\n 'data': serial.data_dumps(result),\n 'names': expr.fields}\n\n if profiling:\n import marshal\n from pstats import Stats\n\n if profiler_output == ':response':\n from pandas.compat import BytesIO\n file = BytesIO()\n else:\n file = open(_prof_path(profiler_output, expr), 'wb')\n\n with file:\n # Use marshal to dump the stats data to the given file.\n # This is taken from cProfile which unfortunately does not have\n # an api that allows us to pass the file object directly, only\n # a file path.\n marshal.dump(Stats(profiler).stats, file)\n if profiler_output == ':response':\n response['profiler_output'] = {'__!bytes': file.getvalue()}\n\n return serial.dumps(response)\n\n\n@cross_origin(origins='*', methods=['POST', 'HEAD', 'OPTIONS'])\n@authorization\n@check_request\n@_logging\ndef addserver(payload, serial):\n \"\"\"Add a data resource to the server.\n\n The request should contain serialized MutableMapping (dictionary) like\n object, and the server should already be hosting a MutableMapping resource.\n \"\"\"\n\n data = _get_data.cache[flask.current_app]\n\n if not isinstance(data, collections.MutableMapping):\n data_not_mm_msg = (\"Cannot update blaze server data since its current \"\n \"data is a %s and not a mutable mapping (dictionary \"\n \"like).\")\n return (data_not_mm_msg % type(data), RC.UNPROCESSABLE_ENTITY)\n\n if not isinstance(payload, collections.Mapping):\n payload_not_mm_msg = (\"Need a dictionary-like payload; instead was \"\n \"given %s of type %s.\")\n return (payload_not_mm_msg % (payload, type(payload)),\n RC.UNPROCESSABLE_ENTITY)\n\n if len(payload) > 1:\n error_msg = \"Given more than one resource to add: %s\"\n return (error_msg % list(payload.keys()),\n RC.UNPROCESSABLE_ENTITY)\n\n [(name, resource_info)] = payload.items()\n flask.current_app.logger.debug(\"Attempting to add dataset '%s'\" % name)\n\n if name in data:\n msg = \"Cannot add dataset named %s, already exists on server.\"\n return (msg % name, RC.CONFLICT)\n\n try:\n imports = []\n if isinstance(resource_info, dict):\n # Extract resource creation arguments\n source = resource_info['source']\n imports = resource_info.get('imports', [])\n args = resource_info.get('args', [])\n kwargs = resource_info.get('kwargs', {})\n else:\n # Just a URI\n source, args, kwargs = resource_info, [], {}\n # If we've been given libraries to import, we need to do so\n # before we can create the resource.\n for mod in imports:\n importlib.import_module(mod)\n # Make a new resource and try to discover it.\n new_resource = {name: resource(source, *args, **kwargs)}\n # Discovery is a minimal consistency check to determine if the new\n # resource is valid.\n ds = discover(new_resource)\n if name not in ds.dict:\n raise ValueError(\"%s not added.\" % name)\n except NotImplementedError as e:\n error_msg = \"Addition not supported:\\n%s: %s\"\n return (error_msg % (type(e).__name__, e),\n RC.UNPROCESSABLE_ENTITY)\n except Exception as e:\n error_msg = \"Addition failed with 
message:\\n%s: %s\"\n return (error_msg % (type(e).__name__, e),\n RC.UNPROCESSABLE_ENTITY)\n else:\n # Now that we've established that the new resource is discoverable--and\n # thus exists and is accessible--we add the resource to the server.\n data.update(new_resource)\n\n return ('OK', RC.CREATED)\n",
"path": "blaze/server/server.py"
}
] | diff --git a/blaze/server/server.py b/blaze/server/server.py
index 1438cb0af..15abdc741 100644
--- a/blaze/server/server.py
+++ b/blaze/server/server.py
@@ -19,7 +19,7 @@
from datashape import discover, pprint
import flask
from flask import Blueprint, Flask, Response
-from flask.ext.cors import cross_origin
+from flask_cors import cross_origin
from werkzeug.http import parse_options_header
from toolz import valmap, compose
diff --git a/blaze/server/tests/test_server.py b/blaze/server/tests/test_server.py
index f32305340..6decbbea9 100644
--- a/blaze/server/tests/test_server.py
+++ b/blaze/server/tests/test_server.py
@@ -2,7 +2,7 @@
import pytest
pytest.importorskip('flask')
-pytest.importorskip('flask.ext.cors')
+pytest.importorskip('flask_cors')
from base64 import b64encode
from copy import copy
diff --git a/docs/source/whatsnew/0.12.0.txt b/docs/source/whatsnew/0.12.0.txt
new file mode 100644
index 000000000..278001bcf
--- /dev/null
+++ b/docs/source/whatsnew/0.12.0.txt
@@ -0,0 +1,45 @@
+Release 0.12.0
+-----------------
+
+:Release: 0.12.0
+
+New Expressions
+~~~~~~~~~~~~~~~
+
+None
+
+Improved Expressions
+~~~~~~~~~~~~~~~~~~~~
+
+None
+
+New Backends
+~~~~~~~~~~~~
+
+None
+
+Improved Backends
+~~~~~~~~~~~~~~~~~
+
+None
+
+Experimental Features
+~~~~~~~~~~~~~~~~~~~~~
+
+None
+
+API Changes
+~~~~~~~~~~~
+
+None
+
+Bug Fixes
+~~~~~~~~~
+
+* The ``flask.ext.cors`` import was updated to resolve a ``DeprecationWarning``
+(:issue:`1556`).
+
+Miscellaneous
+~~~~~~~~~~~~~
+
+None
|
twisted__twisted-11958 | expand mypy .* module overrides
**Is your feature request related to a problem? Please describe.**
We'd like to be able to delete a module from `pyproject.toml` to mark it as fully type annotated; however, having `.*` overrides with weaker type hinting prevents this.
**Describe the solution you'd like**
Expand the mypy `.*` module overrides in `pyproject.toml` into explicit per-module entries, so individual entries can be removed as modules become fully typed.
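For context, here is a rough sketch of what "expanding" an override means in practice. The helper below is hypothetical (not part of Twisted or of the patch); it only enumerates the concrete submodules that a wildcard such as `twisted.conch.*` currently covers, so they can be listed explicitly in `pyproject.toml` and removed one by one.

```python
# Hypothetical helper, not part of Twisted or of this change: enumerate the
# submodules hidden behind a 'package.*' mypy override so they can be listed
# explicitly in pyproject.toml.
import pkgutil

import twisted.conch  # assumes a Twisted development checkout is importable


def explicit_override_entries(package):
    """Return sorted fully qualified module names under ``package``."""
    names = {package.__name__}
    prefix = package.__name__ + "."
    # onerror ignores subpackages that fail to import while walking.
    for info in pkgutil.walk_packages(package.__path__, prefix,
                                      onerror=lambda _name: None):
        names.add(info.name)
    return sorted(names)


if __name__ == "__main__":
    # Prints entries in the format used by the module lists in pyproject.toml.
    for name in explicit_override_entries(twisted.conch):
        print(f"    '{name}',")
```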
| [
{
"content": "# -*- test-case-name: twisted.words.test.test_jabberjid -*-\n#\n# Copyright (c) Twisted Matrix Laboratories.\n# See LICENSE for details.\n\n\"\"\"\nJabber Identifier support.\n\nThis module provides an object to represent Jabber Identifiers (JIDs) and\nparse string representations into them with proper checking for illegal\ncharacters, case folding and canonicalisation through\nL{stringprep<twisted.words.protocols.jabber.xmpp_stringprep>}.\n\"\"\"\n\nfrom typing import Dict, Tuple, Union\n\nfrom twisted.words.protocols.jabber.xmpp_stringprep import (\n nameprep,\n nodeprep,\n resourceprep,\n)\n\n\nclass InvalidFormat(Exception):\n \"\"\"\n The given string could not be parsed into a valid Jabber Identifier (JID).\n \"\"\"\n\n\ndef parse(jidstring: str) -> Tuple[Union[str, None], str, Union[str, None]]:\n \"\"\"\n Parse given JID string into its respective parts and apply stringprep.\n\n @param jidstring: string representation of a JID.\n @type jidstring: L{str}\n @return: tuple of (user, host, resource), each of type L{str} as\n the parsed and stringprep'd parts of the given JID. If the\n given string did not have a user or resource part, the respective\n field in the tuple will hold L{None}.\n @rtype: L{tuple}\n \"\"\"\n user = None\n host = None\n resource = None\n\n # Search for delimiters\n user_sep = jidstring.find(\"@\")\n res_sep = jidstring.find(\"/\")\n\n if user_sep == -1:\n if res_sep == -1:\n # host\n host = jidstring\n else:\n # host/resource\n host = jidstring[0:res_sep]\n resource = jidstring[res_sep + 1 :] or None\n else:\n if res_sep == -1:\n # user@host\n user = jidstring[0:user_sep] or None\n host = jidstring[user_sep + 1 :]\n else:\n if user_sep < res_sep:\n # user@host/resource\n user = jidstring[0:user_sep] or None\n host = jidstring[user_sep + 1 : user_sep + (res_sep - user_sep)]\n resource = jidstring[res_sep + 1 :] or None\n else:\n # host/resource (with an @ in resource)\n host = jidstring[0:res_sep]\n resource = jidstring[res_sep + 1 :] or None\n\n return prep(user, host, resource)\n\n\ndef prep(\n user: Union[str, None], host: str, resource: Union[str, None]\n) -> Tuple[Union[str, None], str, Union[str, None]]:\n \"\"\"\n Perform stringprep on all JID fragments.\n\n @param user: The user part of the JID.\n @type user: L{str}\n @param host: The host part of the JID.\n @type host: L{str}\n @param resource: The resource part of the JID.\n @type resource: L{str}\n @return: The given parts with stringprep applied.\n @rtype: L{tuple}\n \"\"\"\n\n if user:\n try:\n user = nodeprep.prepare(str(user))\n except UnicodeError:\n raise InvalidFormat(\"Invalid character in username\")\n else:\n user = None\n\n if not host:\n raise InvalidFormat(\"Server address required.\")\n else:\n try:\n host = nameprep.prepare(str(host))\n except UnicodeError:\n raise InvalidFormat(\"Invalid character in hostname\")\n\n if resource:\n try:\n resource = resourceprep.prepare(str(resource))\n except UnicodeError:\n raise InvalidFormat(\"Invalid character in resource\")\n else:\n resource = None\n\n return (user, host, resource)\n\n\n__internJIDs: Dict[str, \"JID\"] = {}\n\n\ndef internJID(jidstring):\n \"\"\"\n Return interned JID.\n\n @rtype: L{JID}\n \"\"\"\n\n if jidstring in __internJIDs:\n return __internJIDs[jidstring]\n else:\n j = JID(jidstring)\n __internJIDs[jidstring] = j\n return j\n\n\nclass JID:\n \"\"\"\n Represents a stringprep'd Jabber ID.\n\n JID objects are hashable so they can be used in sets and as keys in\n dictionaries.\n \"\"\"\n\n def __init__(\n self,\n 
str: Union[str, None] = None,\n tuple: Union[Tuple[str, str, str], None] = None,\n ):\n if str:\n user, host, res = parse(str)\n elif tuple:\n user, host, res = prep(*tuple)\n else:\n raise RuntimeError(\n \"You must provide a value for either 'str' or 'tuple' arguments.\"\n )\n\n self.user = user\n self.host = host\n self.resource = res\n\n def userhost(self):\n \"\"\"\n Extract the bare JID as a unicode string.\n\n A bare JID does not have a resource part, so this returns either\n C{user@host} or just C{host}.\n\n @rtype: L{str}\n \"\"\"\n if self.user:\n return f\"{self.user}@{self.host}\"\n else:\n return self.host\n\n def userhostJID(self):\n \"\"\"\n Extract the bare JID.\n\n A bare JID does not have a resource part, so this returns a\n L{JID} object representing either C{user@host} or just C{host}.\n\n If the object this method is called upon doesn't have a resource\n set, it will return itself. Otherwise, the bare JID object will\n be created, interned using L{internJID}.\n\n @rtype: L{JID}\n \"\"\"\n if self.resource:\n return internJID(self.userhost())\n else:\n return self\n\n def full(self):\n \"\"\"\n Return the string representation of this JID.\n\n @rtype: L{str}\n \"\"\"\n if self.user:\n if self.resource:\n return f\"{self.user}@{self.host}/{self.resource}\"\n else:\n return f\"{self.user}@{self.host}\"\n else:\n if self.resource:\n return f\"{self.host}/{self.resource}\"\n else:\n return self.host\n\n def __eq__(self, other: object) -> bool:\n \"\"\"\n Equality comparison.\n\n L{JID}s compare equal if their user, host and resource parts all\n compare equal. When comparing against instances of other types, it\n uses the default comparison.\n \"\"\"\n if isinstance(other, JID):\n return (\n self.user == other.user\n and self.host == other.host\n and self.resource == other.resource\n )\n else:\n return NotImplemented\n\n def __hash__(self):\n \"\"\"\n Calculate hash.\n\n L{JID}s with identical constituent user, host and resource parts have\n equal hash values. In combination with the comparison defined on JIDs,\n this allows for using L{JID}s in sets and as dictionary keys.\n \"\"\"\n return hash((self.user, self.host, self.resource))\n\n def __unicode__(self):\n \"\"\"\n Get unicode representation.\n\n Return the string representation of this JID as a unicode string.\n @see: L{full}\n \"\"\"\n\n return self.full()\n\n __str__ = __unicode__\n\n def __repr__(self) -> str:\n \"\"\"\n Get object representation.\n\n Returns a string that would create a new JID object that compares equal\n to this one.\n \"\"\"\n return \"JID(%r)\" % self.full()\n",
"path": "src/twisted/words/protocols/jabber/jid.py"
}
] | [
{
"content": "# -*- test-case-name: twisted.words.test.test_jabberjid -*-\n#\n# Copyright (c) Twisted Matrix Laboratories.\n# See LICENSE for details.\n\n\"\"\"\nJabber Identifier support.\n\nThis module provides an object to represent Jabber Identifiers (JIDs) and\nparse string representations into them with proper checking for illegal\ncharacters, case folding and canonicalisation through\nL{stringprep<twisted.words.protocols.jabber.xmpp_stringprep>}.\n\"\"\"\n\nfrom typing import Dict, Tuple, Union\n\nfrom twisted.words.protocols.jabber.xmpp_stringprep import (\n nameprep,\n nodeprep,\n resourceprep,\n)\n\n\nclass InvalidFormat(Exception):\n \"\"\"\n The given string could not be parsed into a valid Jabber Identifier (JID).\n \"\"\"\n\n\ndef parse(jidstring: str) -> Tuple[Union[str, None], str, Union[str, None]]:\n \"\"\"\n Parse given JID string into its respective parts and apply stringprep.\n\n @param jidstring: string representation of a JID.\n @type jidstring: L{str}\n @return: tuple of (user, host, resource), each of type L{str} as\n the parsed and stringprep'd parts of the given JID. If the\n given string did not have a user or resource part, the respective\n field in the tuple will hold L{None}.\n @rtype: L{tuple}\n \"\"\"\n user = None\n host = None\n resource = None\n\n # Search for delimiters\n user_sep = jidstring.find(\"@\")\n res_sep = jidstring.find(\"/\")\n\n if user_sep == -1:\n if res_sep == -1:\n # host\n host = jidstring\n else:\n # host/resource\n host = jidstring[0:res_sep]\n resource = jidstring[res_sep + 1 :] or None\n else:\n if res_sep == -1:\n # user@host\n user = jidstring[0:user_sep] or None\n host = jidstring[user_sep + 1 :]\n else:\n if user_sep < res_sep:\n # user@host/resource\n user = jidstring[0:user_sep] or None\n host = jidstring[user_sep + 1 : user_sep + (res_sep - user_sep)]\n resource = jidstring[res_sep + 1 :] or None\n else:\n # host/resource (with an @ in resource)\n host = jidstring[0:res_sep]\n resource = jidstring[res_sep + 1 :] or None\n\n return prep(user, host, resource)\n\n\ndef prep(\n user: Union[str, None], host: str, resource: Union[str, None]\n) -> Tuple[Union[str, None], str, Union[str, None]]:\n \"\"\"\n Perform stringprep on all JID fragments.\n\n @param user: The user part of the JID.\n @type user: L{str}\n @param host: The host part of the JID.\n @type host: L{str}\n @param resource: The resource part of the JID.\n @type resource: L{str}\n @return: The given parts with stringprep applied.\n @rtype: L{tuple}\n \"\"\"\n\n if user:\n try:\n user = nodeprep.prepare(str(user))\n except UnicodeError:\n raise InvalidFormat(\"Invalid character in username\")\n else:\n user = None\n\n if not host:\n raise InvalidFormat(\"Server address required.\")\n else:\n try:\n host = nameprep.prepare(str(host))\n except UnicodeError:\n raise InvalidFormat(\"Invalid character in hostname\")\n\n if resource:\n try:\n resource = resourceprep.prepare(str(resource))\n except UnicodeError:\n raise InvalidFormat(\"Invalid character in resource\")\n else:\n resource = None\n\n return (user, host, resource)\n\n\n__internJIDs: Dict[str, \"JID\"] = {}\n\n\ndef internJID(jidstring):\n \"\"\"\n Return interned JID.\n\n @rtype: L{JID}\n \"\"\"\n\n if jidstring in __internJIDs:\n return __internJIDs[jidstring]\n else:\n j = JID(jidstring)\n __internJIDs[jidstring] = j\n return j\n\n\nclass JID:\n \"\"\"\n Represents a stringprep'd Jabber ID.\n\n JID objects are hashable so they can be used in sets and as keys in\n dictionaries.\n \"\"\"\n\n def __init__(\n self,\n 
str: Union[str, None] = None,\n tuple: Union[Tuple[Union[str, None], str, Union[str, None]], None] = None,\n ):\n if str:\n user, host, res = parse(str)\n elif tuple:\n user, host, res = prep(*tuple)\n else:\n raise RuntimeError(\n \"You must provide a value for either 'str' or 'tuple' arguments.\"\n )\n\n self.user = user\n self.host = host\n self.resource = res\n\n def userhost(self):\n \"\"\"\n Extract the bare JID as a unicode string.\n\n A bare JID does not have a resource part, so this returns either\n C{user@host} or just C{host}.\n\n @rtype: L{str}\n \"\"\"\n if self.user:\n return f\"{self.user}@{self.host}\"\n else:\n return self.host\n\n def userhostJID(self):\n \"\"\"\n Extract the bare JID.\n\n A bare JID does not have a resource part, so this returns a\n L{JID} object representing either C{user@host} or just C{host}.\n\n If the object this method is called upon doesn't have a resource\n set, it will return itself. Otherwise, the bare JID object will\n be created, interned using L{internJID}.\n\n @rtype: L{JID}\n \"\"\"\n if self.resource:\n return internJID(self.userhost())\n else:\n return self\n\n def full(self):\n \"\"\"\n Return the string representation of this JID.\n\n @rtype: L{str}\n \"\"\"\n if self.user:\n if self.resource:\n return f\"{self.user}@{self.host}/{self.resource}\"\n else:\n return f\"{self.user}@{self.host}\"\n else:\n if self.resource:\n return f\"{self.host}/{self.resource}\"\n else:\n return self.host\n\n def __eq__(self, other: object) -> bool:\n \"\"\"\n Equality comparison.\n\n L{JID}s compare equal if their user, host and resource parts all\n compare equal. When comparing against instances of other types, it\n uses the default comparison.\n \"\"\"\n if isinstance(other, JID):\n return (\n self.user == other.user\n and self.host == other.host\n and self.resource == other.resource\n )\n else:\n return NotImplemented\n\n def __hash__(self):\n \"\"\"\n Calculate hash.\n\n L{JID}s with identical constituent user, host and resource parts have\n equal hash values. In combination with the comparison defined on JIDs,\n this allows for using L{JID}s in sets and as dictionary keys.\n \"\"\"\n return hash((self.user, self.host, self.resource))\n\n def __unicode__(self):\n \"\"\"\n Get unicode representation.\n\n Return the string representation of this JID as a unicode string.\n @see: L{full}\n \"\"\"\n\n return self.full()\n\n __str__ = __unicode__\n\n def __repr__(self) -> str:\n \"\"\"\n Get object representation.\n\n Returns a string that would create a new JID object that compares equal\n to this one.\n \"\"\"\n return \"JID(%r)\" % self.full()\n",
"path": "src/twisted/words/protocols/jabber/jid.py"
}
] | diff --git a/pyproject.toml b/pyproject.toml
index 5eec9f57d8a..d4f872eaed0 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -318,14 +318,76 @@ no_implicit_reexport = false
allow_untyped_defs = true
check_untyped_defs = false
module = [
- 'twisted._threads.*',
+ 'twisted._threads.test.test_team',
+ 'twisted._threads.test.test_threadworker',
'twisted.application.app',
'twisted.application.internet',
'twisted.application.service',
'twisted.application.test.test_internet',
- 'twisted.conch.*',
- 'twisted.cred.*',
- 'twisted.enterprise.*',
+ 'twisted.conch.client.agent',
+ 'twisted.conch.client.default',
+ 'twisted.conch.client.direct',
+ 'twisted.conch.endpoints',
+ 'twisted.conch.insults.helper',
+ 'twisted.conch.insults.insults',
+ 'twisted.conch.insults.window',
+ 'twisted.conch.ls',
+ 'twisted.conch.manhole',
+ 'twisted.conch.manhole_tap',
+ 'twisted.conch.mixin',
+ 'twisted.conch.recvline',
+ 'twisted.conch.scripts.cftp',
+ 'twisted.conch.scripts.ckeygen',
+ 'twisted.conch.scripts.conch',
+ 'twisted.conch.scripts.tkconch',
+ 'twisted.conch.ssh.agent',
+ 'twisted.conch.ssh.channel',
+ 'twisted.conch.ssh.connection',
+ 'twisted.conch.ssh.factory',
+ 'twisted.conch.ssh.filetransfer',
+ 'twisted.conch.ssh.forwarding',
+ 'twisted.conch.ssh.keys',
+ 'twisted.conch.ssh.service',
+ 'twisted.conch.ssh.session',
+ 'twisted.conch.ssh.sexpy',
+ 'twisted.conch.ssh.transport',
+ 'twisted.conch.ssh.userauth',
+ 'twisted.conch.stdio',
+ 'twisted.conch.tap',
+ 'twisted.conch.telnet',
+ 'twisted.conch.test.loopback',
+ 'twisted.conch.test.test_agent',
+ 'twisted.conch.test.test_cftp',
+ 'twisted.conch.test.test_channel',
+ 'twisted.conch.test.test_checkers',
+ 'twisted.conch.test.test_ckeygen',
+ 'twisted.conch.test.test_conch',
+ 'twisted.conch.test.test_connection',
+ 'twisted.conch.test.test_default',
+ 'twisted.conch.test.test_endpoints',
+ 'twisted.conch.test.test_filetransfer',
+ 'twisted.conch.test.test_forwarding',
+ 'twisted.conch.test.test_helper',
+ 'twisted.conch.test.test_insults',
+ 'twisted.conch.test.test_keys',
+ 'twisted.conch.test.test_knownhosts',
+ 'twisted.conch.test.test_manhole',
+ 'twisted.conch.test.test_mixin',
+ 'twisted.conch.test.test_recvline',
+ 'twisted.conch.test.test_session',
+ 'twisted.conch.test.test_ssh',
+ 'twisted.conch.test.test_telnet',
+ 'twisted.conch.test.test_transport',
+ 'twisted.conch.test.test_userauth',
+ 'twisted.conch.test.test_window',
+ 'twisted.conch.ui.tkvt100',
+ 'twisted.conch.unix',
+ 'twisted.cred.checkers',
+ 'twisted.cred.strcred',
+ 'twisted.cred.test.test_cred',
+ 'twisted.cred.test.test_digestauth',
+ 'twisted.cred.test.test_strcred',
+ 'twisted.enterprise.adbapi',
'twisted.internet._baseprocess',
'twisted.internet._dumbwin32proc',
'twisted.internet._glibbase',
@@ -345,7 +407,9 @@ module = [
'twisted.internet.iocpreactor.reactor',
'twisted.internet.iocpreactor.udp',
'twisted.internet.kqreactor',
+ 'twisted.internet.posixbase',
'twisted.internet.process',
+ 'twisted.internet.protocol',
'twisted.internet.serialport',
'twisted.internet.test._posixifaces',
'twisted.internet.test.connectionmixins',
@@ -353,6 +417,7 @@ module = [
'twisted.internet.test.test_abstract',
'twisted.internet.test.test_address',
'twisted.internet.test.test_asyncioreactor',
+ 'twisted.internet.test.test_base',
'twisted.internet.test.test_baseprocess',
'twisted.internet.test.test_defer_await',
'twisted.internet.test.test_defer_yieldfrom',
@@ -381,6 +446,7 @@ module = [
'twisted.internet.test.test_udp_internals',
'twisted.internet.test.test_unix',
'twisted.internet.test.test_win32events',
+ 'twisted.internet.testing',
'twisted.internet.threads',
'twisted.internet.tksupport',
'twisted.internet.udp',
@@ -389,14 +455,64 @@ module = [
'twisted.internet.win32eventreactor',
'twisted.internet.wxreactor',
'twisted.internet.wxsupport',
- 'twisted.logger.*',
- 'twisted.mail.*',
- 'twisted.names.*',
- 'twisted.pair.*',
- 'twisted.persisted.*',
- 'twisted.plugin.*',
- 'twisted.plugins.*',
- 'twisted.positioning.*',
+ 'twisted.logger._json',
+ 'twisted.mail._cred',
+ 'twisted.mail._pop3client',
+ 'twisted.mail.alias',
+ 'twisted.mail.imap4',
+ 'twisted.mail.mail',
+ 'twisted.mail.maildir',
+ 'twisted.mail.pb',
+ 'twisted.mail.pop3',
+ 'twisted.mail.protocols',
+ 'twisted.mail.relay',
+ 'twisted.mail.relaymanager',
+ 'twisted.mail.scripts.mailmail',
+ 'twisted.mail.smtp',
+ 'twisted.mail.tap',
+ 'twisted.mail.test.pop3testserver',
+ 'twisted.mail.test.test_imap',
+ 'twisted.mail.test.test_mail',
+ 'twisted.mail.test.test_mailmail',
+ 'twisted.mail.test.test_options',
+ 'twisted.mail.test.test_pop3',
+ 'twisted.mail.test.test_pop3client',
+ 'twisted.mail.test.test_smtp',
+ 'twisted.names.authority',
+ 'twisted.names.cache',
+ 'twisted.names.client',
+ 'twisted.names.common',
+ 'twisted.names.dns',
+ 'twisted.names.hosts',
+ 'twisted.names.root',
+ 'twisted.names.secondary',
+ 'twisted.names.server',
+ 'twisted.names.srvconnect',
+ 'twisted.names.tap',
+ 'twisted.names.test.test_cache',
+ 'twisted.names.test.test_client',
+ 'twisted.names.test.test_common',
+ 'twisted.names.test.test_dns',
+ 'twisted.names.test.test_examples',
+ 'twisted.names.test.test_hosts',
+ 'twisted.names.test.test_names',
+ 'twisted.names.test.test_rootresolve',
+ 'twisted.names.test.test_server',
+ 'twisted.names.test.test_srvconnect',
+ 'twisted.names.test.test_tap',
+ 'twisted.pair.test.test_tuntap',
+ 'twisted.pair.testing',
+ 'twisted.pair.tuntap',
+ 'twisted.persisted._tokenize',
+ 'twisted.persisted.aot',
+ 'twisted.persisted.sob',
+ 'twisted.persisted.styles',
+ 'twisted.plugin',
+ 'twisted.plugins.cred_unix',
+ 'twisted.positioning._sentence',
+ 'twisted.positioning.nmea',
+ 'twisted.positioning.test.test_nmea',
+ 'twisted.positioning.test.test_sentence',
'twisted.protocols.amp',
'twisted.protocols.basic',
'twisted.protocols.finger',
@@ -413,10 +529,10 @@ module = [
'twisted.protocols.sip',
'twisted.protocols.socks',
'twisted.protocols.stateful',
- 'twisted.protocols.tls',
- 'twisted.protocols.wire',
'twisted.protocols.test.test_basic',
'twisted.protocols.test.test_tls',
+ 'twisted.protocols.tls',
+ 'twisted.protocols.wire',
'twisted.python.failure',
'twisted.python.formmethod',
'twisted.python.logfile',
@@ -443,13 +559,24 @@ module = [
'twisted.python.util',
'twisted.python.win32',
'twisted.python.zipstream',
- 'twisted.runner.procmon',
'twisted.runner.inetd',
- 'twisted.runner.test.test_procmon',
+ 'twisted.runner.procmon',
'twisted.runner.test.test_inetdconf',
- 'twisted.scripts.*',
- 'twisted.spread.*',
- 'twisted.tap.*',
+ 'twisted.runner.test.test_procmon',
+ 'twisted.scripts._twistd_unix',
+ 'twisted.scripts.test.test_scripts',
+ 'twisted.scripts.trial',
+ 'twisted.spread.banana',
+ 'twisted.spread.flavors',
+ 'twisted.spread.jelly',
+ 'twisted.spread.pb',
+ 'twisted.spread.publish',
+ 'twisted.spread.test.test_banana',
+ 'twisted.spread.test.test_jelly',
+ 'twisted.spread.test.test_pb',
+ 'twisted.spread.test.test_pbfailure',
+ 'twisted.spread.util',
+ 'twisted.tap.ftp',
'twisted.test.iosim',
'twisted.test.process_twisted',
'twisted.test.stdio_test_consumer',
@@ -487,6 +614,7 @@ module = [
'twisted.test.test_paths',
'twisted.test.test_pcp',
'twisted.test.test_persisted',
+ 'twisted.test.test_plugin',
'twisted.test.test_policies',
'twisted.test.test_postfix',
'twisted.test.test_process',
@@ -515,19 +643,149 @@ module = [
'twisted.test.test_unix',
'twisted.test.test_usage',
'twisted.test.testutils',
- 'twisted.trial.*',
- 'twisted.web.*',
- 'twisted.words.*',
- 'twisted.test.test_plugin',
- 'twisted.internet.testing',
- 'twisted.internet.test.test_base',
- 'twisted.internet.protocol',
- 'twisted.internet.posixbase',
+ 'twisted.trial._asynctest',
+ 'twisted.trial._dist.test.test_disttrial',
+ 'twisted.trial._dist.test.test_matchers',
+ 'twisted.trial._dist.test.test_stream',
+ 'twisted.trial._dist.test.test_worker',
+ 'twisted.trial._dist.test.test_workertrial',
+ 'twisted.trial._dist.workerreporter',
+ 'twisted.trial._synctest',
+ 'twisted.trial.reporter',
+ 'twisted.trial.runner',
+ 'twisted.trial.test.detests',
+ 'twisted.trial.test.erroneous',
+ 'twisted.trial.test.mockcustomsuite',
+ 'twisted.trial.test.mockcustomsuite2',
+ 'twisted.trial.test.mockcustomsuite3',
+ 'twisted.trial.test.skipping',
+ 'twisted.trial.test.suppression',
+ 'twisted.trial.test.test_assertions',
+ 'twisted.trial.test.test_asyncassertions',
+ 'twisted.trial.test.test_deferred',
+ 'twisted.trial.test.test_keyboard',
+ 'twisted.trial.test.test_loader',
+ 'twisted.trial.test.test_log',
+ 'twisted.trial.test.test_plugins',
+ 'twisted.trial.test.test_pyunitcompat',
+ 'twisted.trial.test.test_reporter',
+ 'twisted.trial.test.test_runner',
+ 'twisted.trial.test.test_script',
+ 'twisted.trial.test.test_suppression',
+ 'twisted.trial.test.test_testcase',
+ 'twisted.trial.test.test_tests',
+ 'twisted.trial.test.test_util',
+ 'twisted.trial.test.test_warning',
+ 'twisted.trial.test.weird',
+ 'twisted.trial.util',
+ 'twisted.web._auth.basic',
+ 'twisted.web._auth.wrapper',
+ 'twisted.web._http2',
+ 'twisted.web._newclient',
+ 'twisted.web._template_util',
+ 'twisted.web.client',
+ 'twisted.web.distrib',
+ 'twisted.web.domhelpers',
+ 'twisted.web.error',
+ 'twisted.web.http',
+ 'twisted.web.http_headers',
+ 'twisted.web.microdom',
+ 'twisted.web.proxy',
+ 'twisted.web.resource',
+ 'twisted.web.server',
+ 'twisted.web.soap',
+ 'twisted.web.static',
+ 'twisted.web.sux',
+ 'twisted.web.tap',
+ 'twisted.web.test.injectionhelpers',
+ 'twisted.web.test.requesthelper',
+ 'twisted.web.test.test_agent',
+ 'twisted.web.test.test_cgi',
+ 'twisted.web.test.test_distrib',
+ 'twisted.web.test.test_domhelpers',
+ 'twisted.web.test.test_http',
+ 'twisted.web.test.test_http2',
+ 'twisted.web.test.test_httpauth',
+ 'twisted.web.test.test_newclient',
+ 'twisted.web.test.test_pages',
+ 'twisted.web.test.test_proxy',
+ 'twisted.web.test.test_resource',
+ 'twisted.web.test.test_soap',
+ 'twisted.web.test.test_static',
+ 'twisted.web.test.test_tap',
+ 'twisted.web.test.test_util',
+ 'twisted.web.test.test_vhost',
+ 'twisted.web.test.test_web',
+ 'twisted.web.test.test_webclient',
+ 'twisted.web.test.test_wsgi',
+ 'twisted.web.test.test_xml',
+ 'twisted.web.test.test_xmlrpc',
+ 'twisted.web.twcgi',
+ 'twisted.web.wsgi',
+ 'twisted.web.xmlrpc',
+ 'twisted.words.im.basesupport',
+ 'twisted.words.im.ircsupport',
+ 'twisted.words.im.pbsupport',
+ 'twisted.words.protocols.irc',
+ 'twisted.words.protocols.jabber.client',
+ 'twisted.words.protocols.jabber.component',
+ 'twisted.words.protocols.jabber.error',
+ 'twisted.words.protocols.jabber.jstrports',
+ 'twisted.words.protocols.jabber.sasl',
+ 'twisted.words.protocols.jabber.xmlstream',
+ 'twisted.words.service',
+ 'twisted.words.test.test_basesupport',
+ 'twisted.words.test.test_domish',
+ 'twisted.words.test.test_irc',
+ 'twisted.words.test.test_irc_service',
+ 'twisted.words.test.test_jabberclient',
+ 'twisted.words.test.test_jabbercomponent',
+ 'twisted.words.test.test_jabberjstrports',
+ 'twisted.words.test.test_jabbersasl',
+ 'twisted.words.test.test_jabberxmlstream',
+ 'twisted.words.test.test_service',
+ 'twisted.words.test.test_xishutil',
+ 'twisted.words.test.test_xmlstream',
+ 'twisted.words.xish.domish',
+ 'twisted.words.xish.utility',
+ 'twisted.words.xish.xmlstream',
+ 'twisted.words.xish.xpath',
]
[[tool.mypy.overrides]]
allow_untyped_defs = true
module = [
+ 'twisted._threads._convenience',
+ 'twisted._threads._ithreads',
+ 'twisted._threads._memory',
+ 'twisted._threads._threadworker',
+ 'twisted._threads.test.test_convenience',
+ 'twisted._threads.test.test_memory',
+ 'twisted.conch.avatar',
+ 'twisted.conch.checkers',
+ 'twisted.conch.client.connect',
+ 'twisted.conch.client.knownhosts',
+ 'twisted.conch.client.options',
+ 'twisted.conch.error',
+ 'twisted.conch.insults.text',
+ 'twisted.conch.interfaces',
+ 'twisted.conch.manhole_ssh',
+ 'twisted.conch.openssh_compat.factory',
+ 'twisted.conch.ssh._kex',
+ 'twisted.conch.ssh.address',
+ 'twisted.conch.ssh.common',
+ 'twisted.conch.test.test_address',
+ 'twisted.conch.test.test_manhole_tap',
+ 'twisted.conch.test.test_openssh_compat',
+ 'twisted.conch.test.test_scripts',
+ 'twisted.conch.test.test_tap',
+ 'twisted.conch.test.test_text',
+ 'twisted.conch.test.test_unix',
+ 'twisted.conch.ui.ansi',
+ 'twisted.cred._digest',
+ 'twisted.cred.credentials',
+ 'twisted.cred.test.test_cramauth',
+ 'twisted.cred.test.test_simpleauth',
'twisted.internet._pollingfile',
'twisted.internet._posixserialport',
'twisted.internet._posixstdio',
@@ -537,7 +795,6 @@ module = [
'twisted.internet.epollreactor',
'twisted.internet.gireactor',
'twisted.internet.glib2reactor',
- 'twisted.internet.gtk3reactor',
'twisted.internet.iocpreactor.interfaces',
'twisted.internet.main',
'twisted.internet.pollreactor',
@@ -557,8 +814,38 @@ module = [
'twisted.internet.test.test_sigchld',
'twisted.internet.test.test_testing',
'twisted.internet.test.test_win32serialport',
- 'twisted.protocols.dict',
- 'twisted.python._pydoctor',
+ 'twisted.mail._except',
+ 'twisted.mail.bounce',
+ 'twisted.mail.interfaces',
+ 'twisted.mail.test.test_bounce',
+ 'twisted.mail.test.test_scripts',
+ 'twisted.names._rfc1982',
+ 'twisted.names.error',
+ 'twisted.names.resolve',
+ 'twisted.names.test.test_resolve',
+ 'twisted.names.test.test_rfc1982',
+ 'twisted.names.test.test_util',
+ 'twisted.pair.ethernet',
+ 'twisted.pair.ip',
+ 'twisted.pair.raw',
+ 'twisted.pair.rawudp',
+ 'twisted.pair.test.test_ethernet',
+ 'twisted.pair.test.test_ip',
+ 'twisted.pair.test.test_rawudp',
+ 'twisted.persisted._token',
+ 'twisted.persisted.crefutil',
+ 'twisted.persisted.dirdbm',
+ 'twisted.persisted.test.test_styles',
+ 'twisted.plugins.cred_anonymous',
+ 'twisted.plugins.cred_file',
+ 'twisted.plugins.cred_memory',
+ 'twisted.plugins.cred_sshkeys',
+ 'twisted.plugins.twisted_trial',
+ 'twisted.plugins.twisted_words',
+ 'twisted.positioning.base',
+ 'twisted.positioning.ipositioning',
+ 'twisted.positioning.test.receiver',
+ 'twisted.positioning.test.test_base',
'twisted.python._release',
'twisted.python._shellcomp',
'twisted.python._textattributes',
@@ -576,10 +863,14 @@ module = [
'twisted.python.roots',
'twisted.python.shortcut',
'twisted.python.syslog',
- 'twisted.python.test.test_pydoctor',
- 'twisted.python.test.test_systemd',
'twisted.runner.inetdconf',
'twisted.runner.inetdtap',
+ 'twisted.scripts._twistw',
+ 'twisted.scripts.htmlizer',
+ 'twisted.scripts.twistd',
+ 'twisted.spread.interfaces',
+ 'twisted.tap.portforward',
+ 'twisted.tap.socks',
'twisted.test.crash_test_dummy',
'twisted.test.mock_win32process',
'twisted.test.myrebuilder1',
@@ -589,7 +880,6 @@ module = [
'twisted.test.plugin_extra2',
'twisted.test.process_tester',
'twisted.test.ssl_helpers',
- 'twisted.test.test_dict',
'twisted.test.test_finger',
'twisted.test.test_formmethod',
'twisted.test.test_htb',
@@ -599,7 +889,60 @@ module = [
'twisted.test.test_rebuild',
'twisted.test.test_roots',
'twisted.test.test_shortcut',
- 'twisted.test.test_text'
+ 'twisted.test.test_text',
+ 'twisted.trial._asyncrunner',
+ 'twisted.trial._dist.distreporter',
+ 'twisted.trial._dist.disttrial',
+ 'twisted.trial._dist.functional',
+ 'twisted.trial._dist.options',
+ 'twisted.trial._dist.test.test_options',
+ 'twisted.trial._dist.worker',
+ 'twisted.trial._dist.workertrial',
+ 'twisted.trial.itrial',
+ 'twisted.trial.test',
+ 'twisted.trial.test.mockdoctest',
+ 'twisted.trial.test.moduleself',
+ 'twisted.trial.test.ordertests',
+ 'twisted.trial.test.packages',
+ 'twisted.trial.test.pyunitcases',
+ 'twisted.trial.test.sample',
+ 'twisted.trial.test.test_doctest',
+ 'twisted.trial.test.test_matchers',
+ 'twisted.trial.test.test_output',
+ 'twisted.trial.test.test_skip',
+ 'twisted.web._auth.digest',
+ 'twisted.web.demo',
+ 'twisted.web.html',
+ 'twisted.web.iweb',
+ 'twisted.web.rewrite',
+ 'twisted.web.script',
+ 'twisted.web.test._util',
+ 'twisted.web.test.test_client',
+ 'twisted.web.test.test_error',
+ 'twisted.web.test.test_html',
+ 'twisted.web.test.test_http_headers',
+ 'twisted.web.test.test_script',
+ 'twisted.web.test.test_web__responses',
+ 'twisted.web.vhost',
+ 'twisted.words.im.baseaccount',
+ 'twisted.words.im.basechat',
+ 'twisted.words.im.interfaces',
+ 'twisted.words.iwords',
+ 'twisted.words.protocols.jabber.ijabber',
+ 'twisted.words.protocols.jabber.jid',
+ 'twisted.words.protocols.jabber.sasl_mechanisms',
+ 'twisted.words.protocols.jabber.xmpp_stringprep',
+ 'twisted.words.tap',
+ 'twisted.words.test.test_basechat',
+ 'twisted.words.test.test_ircsupport',
+ 'twisted.words.test.test_jabbererror',
+ 'twisted.words.test.test_jabberjid',
+ 'twisted.words.test.test_jabbersaslmechanisms',
+ 'twisted.words.test.test_jabberxmppstringprep',
+ 'twisted.words.test.test_tap',
+ 'twisted.words.test.test_xmpproutertap',
+ 'twisted.words.test.test_xpath',
+ 'twisted.words.xmpproutertap',
]
[[tool.mypy.overrides]]
diff --git a/src/twisted/newsfragments/11957.misc b/src/twisted/newsfragments/11957.misc
new file mode 100644
index 00000000000..6c77563a40b
--- /dev/null
+++ b/src/twisted/newsfragments/11957.misc
@@ -0,0 +1 @@
+expand mypy .* module overrides
diff --git a/src/twisted/words/protocols/jabber/jid.py b/src/twisted/words/protocols/jabber/jid.py
index c263b36e47a..52e154fee4f 100644
--- a/src/twisted/words/protocols/jabber/jid.py
+++ b/src/twisted/words/protocols/jabber/jid.py
@@ -146,7 +146,7 @@ class JID:
def __init__(
self,
str: Union[str, None] = None,
- tuple: Union[Tuple[str, str, str], None] = None,
+ tuple: Union[Tuple[Union[str, None], str, Union[str, None]], None] = None,
):
if str:
user, host, res = parse(str)
diff --git a/src/twisted/words/test/test_jabberjid.py b/src/twisted/words/test/test_jabberjid.py
index 18c5cd4d708..c24f14f7192 100644
--- a/src/twisted/words/test/test_jabberjid.py
+++ b/src/twisted/words/test/test_jabberjid.py
@@ -128,21 +128,21 @@ def test_userhostJIDNoResource(self):
j = jid.JID("user@host")
self.assertIdentical(j, j.userhostJID())
- def test_fullHost(self):
+ def test_fullHost(self) -> None:
"""
Test giving a string representation of the JID with only a host part.
"""
j = jid.JID(tuple=(None, "host", None))
self.assertEqual("host", j.full())
- def test_fullHostResource(self):
+ def test_fullHostResource(self) -> None:
"""
Test giving a string representation of the JID with host, resource.
"""
j = jid.JID(tuple=(None, "host", "resource"))
self.assertEqual("host/resource", j.full())
- def test_fullUserHost(self):
+ def test_fullUserHost(self) -> None:
"""
Test giving a string representation of the JID with user, host.
"""
|
beetbox__beets-2240 | Slowdown with beet web - regression
There is a massive slowdown in queries with Python 2 and Python 3 when using the beet web interface.
When large queries are run, e.g. 'format:flac' on a FLAC-only library, the web interface answers queries about 10x slower than before the regression. To clarify, the slowdown appears to be relative to how long the query originally took.
This is caused by a regression introduced in the following commit:
https://github.com/beetbox/beets/commit/5e8ac9e4a5d06de791fe051a419ba070bbdd5bec
```
beet stats
Tracks: 43913
Total time: 51.4 weeks
Approximate total size: 976.18 GiB
Artists: 7345
Albums: 12004
Album artists: 1800
```
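For reference, one way to quantify the reported slowdown is to time the same query against a running `beet web` instance before and after the commit above. This harness is not part of beets; it assumes the web plugin's default host and port (127.0.0.1:8337) and that the `requests` package is installed.

```python
# Minimal timing harness (not part of beets): measure how long the web
# interface takes to answer the same large query, e.g. before and after
# the commit referenced above.
import time

import requests  # third-party dependency, assumed to be installed

# Default host/port of the web plugin; adjust if the server runs elsewhere.
URL = "http://127.0.0.1:8337/item/query/format:flac"


def time_query(url=URL):
    start = time.monotonic()
    response = requests.get(url)
    response.raise_for_status()
    elapsed = time.monotonic() - start
    results = response.json().get("results", [])
    print(f"{len(results)} items returned in {elapsed:.2f}s")


if __name__ == "__main__":
    time_query()
```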
| [
{
"content": "# -*- coding: utf-8 -*-\n# This file is part of beets.\n# Copyright 2016, Adrian Sampson.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"A Web interface to beets.\"\"\"\nfrom __future__ import division, absolute_import, print_function\n\nfrom beets.plugins import BeetsPlugin\nfrom beets import ui\nfrom beets import util\nimport beets.library\nimport flask\nfrom flask import g\nfrom werkzeug.routing import BaseConverter, PathConverter\nimport os\nimport json\n\n\n# Utilities.\n\ndef _rep(obj, expand=False):\n \"\"\"Get a flat -- i.e., JSON-ish -- representation of a beets Item or\n Album object. For Albums, `expand` dictates whether tracks are\n included.\n \"\"\"\n out = dict(obj)\n\n if isinstance(obj, beets.library.Item):\n out['path'] = obj.destination(fragment=True)\n\n # Get the size (in bytes) of the backing file. This is useful\n # for the Tomahawk resolver API.\n try:\n out['size'] = os.path.getsize(util.syspath(obj.path))\n except OSError:\n out['size'] = 0\n\n return out\n\n elif isinstance(obj, beets.library.Album):\n del out['artpath']\n if expand:\n out['items'] = [_rep(item) for item in obj.items()]\n return out\n\n\ndef json_generator(items, root, expand=False):\n \"\"\"Generator that dumps list of beets Items or Albums as JSON\n\n :param root: root key for JSON\n :param items: list of :class:`Item` or :class:`Album` to dump\n :param expand: If true every :class:`Album` contains its items in the json\n representation\n :returns: generator that yields strings\n \"\"\"\n yield '{\"%s\":[' % root\n first = True\n for item in items:\n if first:\n first = False\n else:\n yield ','\n yield json.dumps(_rep(item, expand=expand))\n yield ']}'\n\n\ndef is_expand():\n \"\"\"Returns whether the current request is for an expanded response.\"\"\"\n\n return flask.request.args.get('expand') is not None\n\n\ndef resource(name):\n \"\"\"Decorates a function to handle RESTful HTTP requests for a resource.\n \"\"\"\n def make_responder(retriever):\n def responder(ids):\n entities = [retriever(id) for id in ids]\n entities = [entity for entity in entities if entity]\n\n if len(entities) == 1:\n return flask.jsonify(_rep(entities[0], expand=is_expand()))\n elif entities:\n return app.response_class(\n json_generator(entities, root=name),\n mimetype='application/json'\n )\n else:\n return flask.abort(404)\n responder.__name__ = 'get_{0}'.format(name)\n return responder\n return make_responder\n\n\ndef resource_query(name):\n \"\"\"Decorates a function to handle RESTful HTTP queries for resources.\n \"\"\"\n def make_responder(query_func):\n def responder(queries):\n return app.response_class(\n json_generator(\n query_func(queries),\n root='results', expand=is_expand()\n ),\n mimetype='application/json'\n )\n responder.__name__ = 'query_{0}'.format(name)\n return responder\n return make_responder\n\n\ndef resource_list(name):\n \"\"\"Decorates a function to handle RESTful HTTP request for a list of\n resources.\n \"\"\"\n def 
make_responder(list_all):\n def responder():\n return app.response_class(\n json_generator(list_all(), root=name, expand=is_expand()),\n mimetype='application/json'\n )\n responder.__name__ = 'all_{0}'.format(name)\n return responder\n return make_responder\n\n\ndef _get_unique_table_field_values(model, field, sort_field):\n \"\"\" retrieve all unique values belonging to a key from a model \"\"\"\n if field not in model.all_keys() or sort_field not in model.all_keys():\n raise KeyError\n with g.lib.transaction() as tx:\n rows = tx.query('SELECT DISTINCT \"{0}\" FROM \"{1}\" ORDER BY \"{2}\"'\n .format(field, model._table, sort_field))\n return [row[0] for row in rows]\n\n\nclass IdListConverter(BaseConverter):\n \"\"\"Converts comma separated lists of ids in urls to integer lists.\n \"\"\"\n\n def to_python(self, value):\n ids = []\n for id in value.split(','):\n try:\n ids.append(int(id))\n except ValueError:\n pass\n return ids\n\n def to_url(self, value):\n return ','.join(value)\n\n\nclass QueryConverter(PathConverter):\n \"\"\"Converts slash separated lists of queries in the url to string list.\n \"\"\"\n\n def to_python(self, value):\n return value.split('/')\n\n def to_url(self, value):\n return ','.join(value)\n\n\n# Flask setup.\n\napp = flask.Flask(__name__)\napp.url_map.converters['idlist'] = IdListConverter\napp.url_map.converters['query'] = QueryConverter\n\n\[email protected]_request\ndef before_request():\n g.lib = app.config['lib']\n\n\n# Items.\n\[email protected]('/item/<idlist:ids>')\n@resource('items')\ndef get_item(id):\n return g.lib.get_item(id)\n\n\[email protected]('/item/')\[email protected]('/item/query/')\n@resource_list('items')\ndef all_items():\n return g.lib.items()\n\n\[email protected]('/item/<int:item_id>/file')\ndef item_file(item_id):\n item = g.lib.get_item(item_id)\n response = flask.send_file(\n util.py3_path(item.path),\n as_attachment=True,\n attachment_filename=os.path.basename(item.path),\n )\n response.headers['Content-Length'] = os.path.getsize(item.path)\n return response\n\n\[email protected]('/item/query/<query:queries>')\n@resource_query('items')\ndef item_query(queries):\n return g.lib.items(queries)\n\n\[email protected]('/item/values/<string:key>')\ndef item_unique_field_values(key):\n sort_key = flask.request.args.get('sort_key', key)\n try:\n values = _get_unique_table_field_values(beets.library.Item, key,\n sort_key)\n except KeyError:\n return flask.abort(404)\n return flask.jsonify(values=values)\n\n\n# Albums.\n\[email protected]('/album/<idlist:ids>')\n@resource('albums')\ndef get_album(id):\n return g.lib.get_album(id)\n\n\[email protected]('/album/')\[email protected]('/album/query/')\n@resource_list('albums')\ndef all_albums():\n return g.lib.albums()\n\n\[email protected]('/album/query/<query:queries>')\n@resource_query('albums')\ndef album_query(queries):\n return g.lib.albums(queries)\n\n\[email protected]('/album/<int:album_id>/art')\ndef album_art(album_id):\n album = g.lib.get_album(album_id)\n if album.artpath:\n return flask.send_file(album.artpath)\n else:\n return flask.abort(404)\n\n\[email protected]('/album/values/<string:key>')\ndef album_unique_field_values(key):\n sort_key = flask.request.args.get('sort_key', key)\n try:\n values = _get_unique_table_field_values(beets.library.Album, key,\n sort_key)\n except KeyError:\n return flask.abort(404)\n return flask.jsonify(values=values)\n\n\n# Artists.\n\[email protected]('/artist/')\ndef all_artists():\n with g.lib.transaction() as tx:\n rows = tx.query(\"SELECT 
DISTINCT albumartist FROM albums\")\n all_artists = [row[0] for row in rows]\n return flask.jsonify(artist_names=all_artists)\n\n\n# Library information.\n\[email protected]('/stats')\ndef stats():\n with g.lib.transaction() as tx:\n item_rows = tx.query(\"SELECT COUNT(*) FROM items\")\n album_rows = tx.query(\"SELECT COUNT(*) FROM albums\")\n return flask.jsonify({\n 'items': item_rows[0][0],\n 'albums': album_rows[0][0],\n })\n\n\n# UI.\n\[email protected]('/')\ndef home():\n return flask.render_template('index.html')\n\n\n# Plugin hook.\n\nclass WebPlugin(BeetsPlugin):\n def __init__(self):\n super(WebPlugin, self).__init__()\n self.config.add({\n 'host': u'127.0.0.1',\n 'port': 8337,\n 'cors': '',\n })\n\n def commands(self):\n cmd = ui.Subcommand('web', help=u'start a Web interface')\n cmd.parser.add_option(u'-d', u'--debug', action='store_true',\n default=False, help=u'debug mode')\n\n def func(lib, opts, args):\n args = ui.decargs(args)\n if args:\n self.config['host'] = args.pop(0)\n if args:\n self.config['port'] = int(args.pop(0))\n\n app.config['lib'] = lib\n # Normalizes json output\n app.config['JSONIFY_PRETTYPRINT_REGULAR'] = False\n\n # Enable CORS if required.\n if self.config['cors']:\n self._log.info(u'Enabling CORS with origin: {0}',\n self.config['cors'])\n from flask.ext.cors import CORS\n app.config['CORS_ALLOW_HEADERS'] = \"Content-Type\"\n app.config['CORS_RESOURCES'] = {\n r\"/*\": {\"origins\": self.config['cors'].get(str)}\n }\n CORS(app)\n # Start the web application.\n app.run(host=self.config['host'].as_str(),\n port=self.config['port'].get(int),\n debug=opts.debug, threaded=True)\n cmd.func = func\n return [cmd]\n",
"path": "beetsplug/web/__init__.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n# This file is part of beets.\n# Copyright 2016, Adrian Sampson.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"A Web interface to beets.\"\"\"\nfrom __future__ import division, absolute_import, print_function\n\nfrom beets.plugins import BeetsPlugin\nfrom beets import ui\nfrom beets import util\nimport beets.library\nimport flask\nfrom flask import g\nfrom werkzeug.routing import BaseConverter, PathConverter\nimport os\nimport json\n\n\n# Utilities.\n\ndef _rep(obj, expand=False):\n \"\"\"Get a flat -- i.e., JSON-ish -- representation of a beets Item or\n Album object. For Albums, `expand` dictates whether tracks are\n included.\n \"\"\"\n out = dict(obj)\n\n if isinstance(obj, beets.library.Item):\n del out['path']\n\n # Get the size (in bytes) of the backing file. This is useful\n # for the Tomahawk resolver API.\n try:\n out['size'] = os.path.getsize(util.syspath(obj.path))\n except OSError:\n out['size'] = 0\n\n return out\n\n elif isinstance(obj, beets.library.Album):\n del out['artpath']\n if expand:\n out['items'] = [_rep(item) for item in obj.items()]\n return out\n\n\ndef json_generator(items, root, expand=False):\n \"\"\"Generator that dumps list of beets Items or Albums as JSON\n\n :param root: root key for JSON\n :param items: list of :class:`Item` or :class:`Album` to dump\n :param expand: If true every :class:`Album` contains its items in the json\n representation\n :returns: generator that yields strings\n \"\"\"\n yield '{\"%s\":[' % root\n first = True\n for item in items:\n if first:\n first = False\n else:\n yield ','\n yield json.dumps(_rep(item, expand=expand))\n yield ']}'\n\n\ndef is_expand():\n \"\"\"Returns whether the current request is for an expanded response.\"\"\"\n\n return flask.request.args.get('expand') is not None\n\n\ndef resource(name):\n \"\"\"Decorates a function to handle RESTful HTTP requests for a resource.\n \"\"\"\n def make_responder(retriever):\n def responder(ids):\n entities = [retriever(id) for id in ids]\n entities = [entity for entity in entities if entity]\n\n if len(entities) == 1:\n return flask.jsonify(_rep(entities[0], expand=is_expand()))\n elif entities:\n return app.response_class(\n json_generator(entities, root=name),\n mimetype='application/json'\n )\n else:\n return flask.abort(404)\n responder.__name__ = 'get_{0}'.format(name)\n return responder\n return make_responder\n\n\ndef resource_query(name):\n \"\"\"Decorates a function to handle RESTful HTTP queries for resources.\n \"\"\"\n def make_responder(query_func):\n def responder(queries):\n return app.response_class(\n json_generator(\n query_func(queries),\n root='results', expand=is_expand()\n ),\n mimetype='application/json'\n )\n responder.__name__ = 'query_{0}'.format(name)\n return responder\n return make_responder\n\n\ndef resource_list(name):\n \"\"\"Decorates a function to handle RESTful HTTP request for a list of\n resources.\n \"\"\"\n def make_responder(list_all):\n def 
responder():\n return app.response_class(\n json_generator(list_all(), root=name, expand=is_expand()),\n mimetype='application/json'\n )\n responder.__name__ = 'all_{0}'.format(name)\n return responder\n return make_responder\n\n\ndef _get_unique_table_field_values(model, field, sort_field):\n \"\"\" retrieve all unique values belonging to a key from a model \"\"\"\n if field not in model.all_keys() or sort_field not in model.all_keys():\n raise KeyError\n with g.lib.transaction() as tx:\n rows = tx.query('SELECT DISTINCT \"{0}\" FROM \"{1}\" ORDER BY \"{2}\"'\n .format(field, model._table, sort_field))\n return [row[0] for row in rows]\n\n\nclass IdListConverter(BaseConverter):\n \"\"\"Converts comma separated lists of ids in urls to integer lists.\n \"\"\"\n\n def to_python(self, value):\n ids = []\n for id in value.split(','):\n try:\n ids.append(int(id))\n except ValueError:\n pass\n return ids\n\n def to_url(self, value):\n return ','.join(value)\n\n\nclass QueryConverter(PathConverter):\n \"\"\"Converts slash separated lists of queries in the url to string list.\n \"\"\"\n\n def to_python(self, value):\n return value.split('/')\n\n def to_url(self, value):\n return ','.join(value)\n\n\n# Flask setup.\n\napp = flask.Flask(__name__)\napp.url_map.converters['idlist'] = IdListConverter\napp.url_map.converters['query'] = QueryConverter\n\n\[email protected]_request\ndef before_request():\n g.lib = app.config['lib']\n\n\n# Items.\n\[email protected]('/item/<idlist:ids>')\n@resource('items')\ndef get_item(id):\n return g.lib.get_item(id)\n\n\[email protected]('/item/')\[email protected]('/item/query/')\n@resource_list('items')\ndef all_items():\n return g.lib.items()\n\n\[email protected]('/item/<int:item_id>/file')\ndef item_file(item_id):\n item = g.lib.get_item(item_id)\n response = flask.send_file(\n util.py3_path(item.path),\n as_attachment=True,\n attachment_filename=os.path.basename(item.path),\n )\n response.headers['Content-Length'] = os.path.getsize(item.path)\n return response\n\n\[email protected]('/item/query/<query:queries>')\n@resource_query('items')\ndef item_query(queries):\n return g.lib.items(queries)\n\n\[email protected]('/item/values/<string:key>')\ndef item_unique_field_values(key):\n sort_key = flask.request.args.get('sort_key', key)\n try:\n values = _get_unique_table_field_values(beets.library.Item, key,\n sort_key)\n except KeyError:\n return flask.abort(404)\n return flask.jsonify(values=values)\n\n\n# Albums.\n\[email protected]('/album/<idlist:ids>')\n@resource('albums')\ndef get_album(id):\n return g.lib.get_album(id)\n\n\[email protected]('/album/')\[email protected]('/album/query/')\n@resource_list('albums')\ndef all_albums():\n return g.lib.albums()\n\n\[email protected]('/album/query/<query:queries>')\n@resource_query('albums')\ndef album_query(queries):\n return g.lib.albums(queries)\n\n\[email protected]('/album/<int:album_id>/art')\ndef album_art(album_id):\n album = g.lib.get_album(album_id)\n if album.artpath:\n return flask.send_file(album.artpath)\n else:\n return flask.abort(404)\n\n\[email protected]('/album/values/<string:key>')\ndef album_unique_field_values(key):\n sort_key = flask.request.args.get('sort_key', key)\n try:\n values = _get_unique_table_field_values(beets.library.Album, key,\n sort_key)\n except KeyError:\n return flask.abort(404)\n return flask.jsonify(values=values)\n\n\n# Artists.\n\[email protected]('/artist/')\ndef all_artists():\n with g.lib.transaction() as tx:\n rows = tx.query(\"SELECT DISTINCT albumartist FROM 
albums\")\n all_artists = [row[0] for row in rows]\n return flask.jsonify(artist_names=all_artists)\n\n\n# Library information.\n\[email protected]('/stats')\ndef stats():\n with g.lib.transaction() as tx:\n item_rows = tx.query(\"SELECT COUNT(*) FROM items\")\n album_rows = tx.query(\"SELECT COUNT(*) FROM albums\")\n return flask.jsonify({\n 'items': item_rows[0][0],\n 'albums': album_rows[0][0],\n })\n\n\n# UI.\n\[email protected]('/')\ndef home():\n return flask.render_template('index.html')\n\n\n# Plugin hook.\n\nclass WebPlugin(BeetsPlugin):\n def __init__(self):\n super(WebPlugin, self).__init__()\n self.config.add({\n 'host': u'127.0.0.1',\n 'port': 8337,\n 'cors': '',\n })\n\n def commands(self):\n cmd = ui.Subcommand('web', help=u'start a Web interface')\n cmd.parser.add_option(u'-d', u'--debug', action='store_true',\n default=False, help=u'debug mode')\n\n def func(lib, opts, args):\n args = ui.decargs(args)\n if args:\n self.config['host'] = args.pop(0)\n if args:\n self.config['port'] = int(args.pop(0))\n\n app.config['lib'] = lib\n # Normalizes json output\n app.config['JSONIFY_PRETTYPRINT_REGULAR'] = False\n\n # Enable CORS if required.\n if self.config['cors']:\n self._log.info(u'Enabling CORS with origin: {0}',\n self.config['cors'])\n from flask.ext.cors import CORS\n app.config['CORS_ALLOW_HEADERS'] = \"Content-Type\"\n app.config['CORS_RESOURCES'] = {\n r\"/*\": {\"origins\": self.config['cors'].get(str)}\n }\n CORS(app)\n # Start the web application.\n app.run(host=self.config['host'].as_str(),\n port=self.config['port'].get(int),\n debug=opts.debug, threaded=True)\n cmd.func = func\n return [cmd]\n",
"path": "beetsplug/web/__init__.py"
}
] | diff --git a/beetsplug/web/__init__.py b/beetsplug/web/__init__.py
index 07e68638b0..810de87183 100644
--- a/beetsplug/web/__init__.py
+++ b/beetsplug/web/__init__.py
@@ -37,7 +37,7 @@ def _rep(obj, expand=False):
out = dict(obj)
if isinstance(obj, beets.library.Item):
- out['path'] = obj.destination(fragment=True)
+ del out['path']
# Get the size (in bytes) of the backing file. This is useful
# for the Tomahawk resolver API.
|
piskvorky__gensim-2738 | keywords.py gives `IndexError: list index out of range` when `words` parameter is provided.
Really confused why I'm getting this error. Perhaps I'm making a silly mistake, as I'm not familiar with gensim and NLP in general.
I'm running on Windows 10 Home 64-bit, conda version: 4.7.11, conda-build version: 2.18.8, Python version: 3.7.3.final.0.
My code is attempting to get keywords per sentence in a loop. To simplify matters, I've isolated the following code that triggers the error when calling `keywords` from gensim's `keywords.py`:
```python
s = "Don’t dive right into solving without a plan (and somehow hope you can muddle your way through)."
keywords(s, words=4, scores=False, split=True, lemmatize=True)
File "C:\Users\username\Anaconda3\envs\gensim\lib\site-packages\gensim\summarization\keywords.py", line 521, in keywords
extracted_lemmas = _extract_tokens(graph.nodes(), pagerank_scores, ratio, words)
File "C:\Users\username\Anaconda3\envs\gensim\lib\site-packages\gensim\summarization\keywords.py", line 304, in _extract_tokens
return [(scores[lemmas[i]], lemmas[i],) for i in range(int(length))]
File "C:\Users\username\Anaconda3\envs\gensim\lib\site-packages\gensim\summarization\keywords.py", line 304, in <listcomp>
return [(scores[lemmas[i]], lemmas[i],) for i in range(int(length))]
IndexError: list index out of range
```
I've tried setting `scores=True`, `lemmatize=False`, and `split=False`, but the same error persists. I've also tried removing the parentheses and the apostrophe, and the error persisted. What did work was removing the `words` parameter altogether, but it still shouldn't raise an error when it's provided. Thanks for the help in advance!
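For reference, here is a minimal standalone sketch (simplified from the traceback above, not gensim's actual module; the lemmas and scores are made up for illustration) of why the list comprehension overruns when `words` is larger than the number of ranked lemmas, and how clamping with `min()` avoids it:
```python
# Hypothetical lemma scores; in gensim these come from PageRank over the word graph.
scores = {"dive": 0.9, "plan": 0.7, "muddle": 0.4}
lemmas = list(scores)

def extract_tokens(lemmas, scores, ratio, words):
    # Rank lemmas by score, highest first (same idea as _extract_tokens).
    lemmas = sorted(lemmas, key=lambda s: scores[s], reverse=True)
    # Clamp with min(): an unguarded `length = words` lets range() run past
    # the end of `lemmas` whenever words > len(lemmas).
    length = len(lemmas) * ratio if words is None else min(words, len(lemmas))
    return [(scores[lemmas[i]], lemmas[i]) for i in range(int(length))]

print(extract_tokens(lemmas, scores, ratio=0.2, words=4))
# -> [(0.9, 'dive'), (0.7, 'plan'), (0.4, 'muddle')]  (only 3 lemmas exist)
```
With the unguarded `length = words`, `range(4)` would index `lemmas[3]` and raise the same `IndexError: list index out of range`.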
| [
{
"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n\n\"\"\"This module contains functions to find keywords of the text and building graph on tokens from text.\n\nExamples\n--------\nExtract keywords from text\n\n.. sourcecode:: pycon\n\n >>> from gensim.summarization import keywords\n >>> text = '''Challenges in natural language processing frequently involve\n ... speech recognition, natural language understanding, natural language\n ... generation (frequently from formal, machine-readable logical forms),\n ... connecting language and machine perception, dialog systems, or some\n ... combination thereof.'''\n >>> keywords(text).split('\\\\n')\n [u'natural language', u'machine', u'frequently']\n\n\nNotes\n-----\nCheck tags in http://www.clips.ua.ac.be/pages/mbsp-tags and use only first two letters\nfor `INCLUDING_FILTER` and `EXCLUDING_FILTER`\n\nData:\n-----\n.. data:: WINDOW_SIZE - Size of window, number of consecutive tokens in processing.\n.. data:: INCLUDING_FILTER - Including part of speech filters.\n.. data:: EXCLUDING_FILTER - Excluding part of speech filters.\n\n\"\"\"\n\nfrom gensim.summarization.pagerank_weighted import pagerank_weighted as _pagerank\nfrom gensim.summarization.textcleaner import clean_text_by_word as _clean_text_by_word\nfrom gensim.summarization.textcleaner import tokenize_by_word as _tokenize_by_word\nfrom gensim.summarization.commons import build_graph as _build_graph\nfrom gensim.summarization.commons import remove_unreachable_nodes as _remove_unreachable_nodes\nfrom gensim.utils import to_unicode\nfrom itertools import combinations as _combinations\nfrom six.moves.queue import Queue as _Queue\nfrom six.moves import range\nfrom six import iteritems\n\n\nWINDOW_SIZE = 2\n\nINCLUDING_FILTER = ['NN', 'JJ']\nEXCLUDING_FILTER = []\n\n\ndef _get_pos_filters():\n \"\"\"Get default including and excluding filters as frozen sets.\n\n Returns\n -------\n (frozenset of str, frozenset of str)\n Including and excluding filters.\n\n \"\"\"\n return frozenset(INCLUDING_FILTER), frozenset(EXCLUDING_FILTER)\n\n\ndef _get_words_for_graph(tokens, pos_filter=None):\n \"\"\"Filters given dictionary of tokens using provided part of speech filters.\n\n Parameters\n ----------\n tokens : dict\n Original units (words) as keys and processed units (tokens) as values.\n pos_filter : iterable\n Part of speech filters, optional. 
If `None` - using :func:`_get_pos_filters`.\n\n Returns\n -------\n list of str\n Filtered tokens.\n\n Raises\n ------\n ValueError\n If include and exclude filters ar not empty at the same time.\n\n \"\"\"\n if pos_filter is None:\n include_filters, exclude_filters = _get_pos_filters()\n else:\n include_filters = set(pos_filter)\n exclude_filters = frozenset([])\n if include_filters and exclude_filters:\n raise ValueError(\"Can't use both include and exclude filters, should use only one\")\n\n result = []\n for word, unit in iteritems(tokens):\n if exclude_filters and unit.tag in exclude_filters:\n continue\n if not include_filters or not unit.tag or unit.tag in include_filters:\n result.append(unit.token)\n return result\n\n\ndef _get_first_window(split_text):\n \"\"\"Get first :const:`~gensim.parsing.keywords.WINDOW_SIZE` tokens from given `split_text`.\n\n Parameters\n ----------\n split_text : list of str\n Splitted text.\n\n Returns\n -------\n list of str\n First :const:`~gensim.parsing.keywords.WINDOW_SIZE` tokens.\n\n \"\"\"\n return split_text[:WINDOW_SIZE]\n\n\ndef _set_graph_edge(graph, tokens, word_a, word_b):\n \"\"\"Sets an edge between nodes named word_a and word_b if they exists in `tokens` and `graph`, inplace.\n\n Parameters\n ----------\n graph : :class:~gensim.summarization.graph.Graph\n Given graph.\n tokens : dict\n Original units (words) as keys and processed units (tokens) as values.\n word_a : str\n First word, name of first node.\n word_b : str\n Second word, name of second node.\n\n \"\"\"\n if word_a in tokens and word_b in tokens:\n lemma_a = tokens[word_a].token\n lemma_b = tokens[word_b].token\n edge = (lemma_a, lemma_b)\n\n if graph.has_node(lemma_a) and graph.has_node(lemma_b) and not graph.has_edge(edge):\n graph.add_edge(edge)\n\n\ndef _process_first_window(graph, tokens, split_text):\n \"\"\"Sets an edges between nodes taken from first :const:`~gensim.parsing.keywords.WINDOW_SIZE`\n words of `split_text` if they exist in `tokens` and `graph`, inplace.\n\n Parameters\n ----------\n graph : :class:~gensim.summarization.graph.Graph\n Given graph.\n tokens : dict\n Original units (words) as keys and processed units (tokens) as values.\n split_text : list of str\n Splitted text.\n\n \"\"\"\n first_window = _get_first_window(split_text)\n for word_a, word_b in _combinations(first_window, 2):\n _set_graph_edge(graph, tokens, word_a, word_b)\n\n\ndef _init_queue(split_text):\n \"\"\"Initialize queue by first words from `split_text`.\n\n Parameters\n ----------\n split_text : list of str\n Splitted text.\n\n Returns\n -------\n Queue\n Initialized queue.\n\n \"\"\"\n queue = _Queue()\n first_window = _get_first_window(split_text)\n for word in first_window[1:]:\n queue.put(word)\n return queue\n\n\ndef _process_word(graph, tokens, queue, word):\n \"\"\"Sets edge between `word` and each element in queue in `graph` if such nodes\n exist in `tokens` and `graph`.\n\n Parameters\n ----------\n graph : :class:`~gensim.summarization.graph.Graph`\n Given graph.\n tokens : dict\n Original units (words) as keys and processed units (tokens) as values.\n queue : Queue\n Given queue.\n word : str\n Word, possible `node` in graph and item in `tokens`.\n\n \"\"\"\n for word_to_compare in _queue_iterator(queue):\n _set_graph_edge(graph, tokens, word, word_to_compare)\n\n\ndef _update_queue(queue, word):\n \"\"\"Updates given `queue` (removes last item and puts `word`).\n\n Parameters\n ----------\n queue : Queue\n Given queue.\n word : str\n Word to be added to queue.\n\n 
\"\"\"\n queue.get()\n queue.put(word)\n assert queue.qsize() == (WINDOW_SIZE - 1)\n\n\ndef _process_text(graph, tokens, split_text):\n \"\"\"Process `split_text` by updating given `graph` with new eges between nodes\n if they exists in `tokens` and `graph`.\n Words are taken from `split_text` with window size :const:`~gensim.parsing.keywords.WINDOW_SIZE`.\n\n Parameters\n ----------\n graph : :class:`~gensim.summarization.graph.Graph`\n Given graph.\n tokens : dict\n Original units (words) as keys and processed units (tokens) as values.\n split_text : list of str\n Splitted text.\n\n \"\"\"\n queue = _init_queue(split_text)\n for i in range(WINDOW_SIZE, len(split_text)):\n word = split_text[i]\n _process_word(graph, tokens, queue, word)\n _update_queue(queue, word)\n\n\ndef _queue_iterator(queue):\n \"\"\"Represents iterator of the given queue.\n\n Parameters\n ----------\n queue : Queue\n Given queue.\n\n Yields\n ------\n str\n Current item of queue.\n\n \"\"\"\n iterations = queue.qsize()\n for _ in range(iterations):\n var = queue.get()\n yield var\n queue.put(var)\n\n\ndef _set_graph_edges(graph, tokens, split_text):\n \"\"\"Updates given `graph` by setting eges between nodes if they exists in `tokens` and `graph`.\n Words are taken from `split_text` with window size :const:`~gensim.parsing.keywords.WINDOW_SIZE`.\n\n Parameters\n ----------\n graph : :class:~gensim.summarization.graph.Graph\n Given graph.\n tokens : dict\n Original units (words) as keys and processed units (tokens) as values.\n split_text : list of str\n Splitted text.\n\n \"\"\"\n _process_first_window(graph, tokens, split_text)\n _process_text(graph, tokens, split_text)\n\n\ndef _extract_tokens(lemmas, scores, ratio, words):\n \"\"\"Extracts tokens from provided lemmas. Most scored lemmas are used if `words` not provided.\n\n Parameters\n ----------\n lemmas : list of str\n Given lemmas.\n scores : dict\n Dictionary with lemmas and its scores.\n ratio : float\n Proportion of lemmas used for final result.\n words : int\n Number of used words. If no \"words\" option is selected, the number of\n sentences is reduced by the provided ratio, else, the ratio is ignored.\n\n Returns\n -------\n list of (float, str)\n Scores and corresponded lemmas.\n\n \"\"\"\n lemmas.sort(key=lambda s: scores[s], reverse=True)\n length = len(lemmas) * ratio if words is None else words\n return [(scores[lemmas[i]], lemmas[i],) for i in range(int(length))]\n\n\ndef _lemmas_to_words(tokens):\n \"\"\"Get words and lemmas from given tokens. 
Produces \"reversed\" `tokens`.\n\n Parameters\n ----------\n tokens : dict\n Original units (words) as keys and processed units (tokens) as values.\n\n Returns\n -------\n dict\n Lemmas as keys and lists corresponding words as values.\n\n \"\"\"\n lemma_to_word = {}\n for word, unit in iteritems(tokens):\n lemma = unit.token\n if lemma in lemma_to_word:\n lemma_to_word[lemma].append(word)\n else:\n lemma_to_word[lemma] = [word]\n return lemma_to_word\n\n\ndef _get_keywords_with_score(extracted_lemmas, lemma_to_word):\n \"\"\"Get words of `extracted_lemmas` and its scores, words contains in `lemma_to_word`.\n\n Parameters\n ----------\n extracted_lemmas : list of (float, str)\n Given lemmas with scores\n lemma_to_word : dict\n Lemmas and corresponding words.\n\n Returns\n -------\n dict\n Keywords as keys and its scores as values.\n\n \"\"\"\n\n keywords = {}\n for score, lemma in extracted_lemmas:\n keyword_list = lemma_to_word[lemma]\n for keyword in keyword_list:\n keywords[keyword] = score\n return keywords\n\n\ndef _strip_word(word):\n \"\"\"Get cleaned `word`.\n\n Parameters\n ----------\n word : str\n Given word.\n\n Returns\n -------\n str\n Cleaned word.\n \"\"\"\n stripped_word_list = list(_tokenize_by_word(word))\n return stripped_word_list[0] if stripped_word_list else \"\"\n\n\ndef _get_combined_keywords(_keywords, split_text):\n \"\"\"Get most scored words (`_keywords`) contained in `split_text` and it's combinations.\n\n Parameters\n ----------\n _keywords : dict\n Keywords as keys and its scores as values.\n split_text : list of str\n Splitted text.\n\n Returns\n -------\n list of str\n Keywords and/or its combinations.\n\n \"\"\"\n result = []\n _keywords = _keywords.copy()\n len_text = len(split_text)\n for i in range(len_text):\n word = _strip_word(split_text[i])\n if word in _keywords:\n combined_word = [word]\n if i + 1 == len_text:\n result.append(word) # appends last word if keyword and doesn't iterate\n for j in range(i + 1, len_text):\n other_word = _strip_word(split_text[j])\n if other_word in _keywords and other_word == split_text[j] and other_word not in combined_word:\n combined_word.append(other_word)\n else:\n for keyword in combined_word:\n _keywords.pop(keyword)\n result.append(\" \".join(combined_word))\n break\n return result\n\n\ndef _get_average_score(concept, _keywords):\n \"\"\"Get average score of words in `concept`.\n\n Parameters\n ----------\n concept : str\n Input text.\n _keywords : dict\n Keywords as keys and its scores as values.\n\n Returns\n -------\n float\n Average score.\n\n \"\"\"\n word_list = concept.split()\n word_counter = len(word_list)\n total = float(sum(_keywords[word] for word in word_list))\n return total / word_counter\n\n\ndef _format_results(_keywords, combined_keywords, split, scores):\n \"\"\"Formats, sorts and returns `combined_keywords` in desired format.\n\n Parameters\n ----------\n _keywords : dict\n Keywords as keys and its scores as values.\n combined_keywords : list of str\n Most ranked words and/or its combinations.\n split : bool\n Split result if True or return string otherwise, optional.\n scores : bool\n Whether return `combined_keywords` with scores, optional. 
If True\n `split` is ignored.\n\n Returns\n -------\n result: list of (str, float)\n If `scores`, keywords with scores **OR**\n result: list of str\n If `split`, keywords only **OR**\n result: str\n Keywords, joined by endl.\n\n \"\"\"\n combined_keywords.sort(key=lambda w: _get_average_score(w, _keywords), reverse=True)\n if scores:\n return [(word, _get_average_score(word, _keywords)) for word in combined_keywords]\n if split:\n return combined_keywords\n return \"\\n\".join(combined_keywords)\n\n\ndef keywords(text, ratio=0.2, words=None, split=False, scores=False, pos_filter=('NN', 'JJ'),\n lemmatize=False, deacc=True):\n \"\"\"Get most ranked words of provided text and/or its combinations.\n\n Parameters\n ----------\n\n text : str\n Input text.\n ratio : float, optional\n If no \"words\" option is selected, the number of sentences is reduced by the provided ratio,\n else, the ratio is ignored.\n words : int, optional\n Number of returned words.\n split : bool, optional\n Whether split keywords if True.\n scores : bool, optional\n Whether score of keyword.\n pos_filter : tuple, optional\n Part of speech filters.\n lemmatize : bool, optional\n If True - lemmatize words.\n deacc : bool, optional\n If True - remove accentuation.\n\n Returns\n -------\n result: list of (str, float)\n If `scores`, keywords with scores **OR**\n result: list of str\n If `split`, keywords only **OR**\n result: str\n Keywords, joined by endl.\n\n \"\"\"\n # Gets a dict of word -> lemma\n text = to_unicode(text)\n tokens = _clean_text_by_word(text, deacc=deacc)\n split_text = list(_tokenize_by_word(text))\n\n # Creates the graph and adds the edges\n graph = _build_graph(_get_words_for_graph(tokens, pos_filter))\n _set_graph_edges(graph, tokens, split_text)\n del split_text # It's no longer used\n\n _remove_unreachable_nodes(graph)\n\n if not any(True for _ in graph.iter_edges()):\n return _format_results([], [], split, scores)\n\n # Ranks the tokens using the PageRank algorithm. Returns dict of lemma -> score\n pagerank_scores = _pagerank(graph)\n\n extracted_lemmas = _extract_tokens(graph.nodes(), pagerank_scores, ratio, words)\n\n # The results can be polluted by many variations of the same word\n if lemmatize:\n lemmas_to_word = {}\n for word, unit in iteritems(tokens):\n lemmas_to_word[unit.token] = [word]\n else:\n lemmas_to_word = _lemmas_to_words(tokens)\n\n keywords = _get_keywords_with_score(extracted_lemmas, lemmas_to_word)\n\n # text.split() to keep numbers and punctuation marks, so separeted concepts are not combined\n combined_keywords = _get_combined_keywords(keywords, text.split())\n\n return _format_results(keywords, combined_keywords, split, scores)\n\n\ndef get_graph(text):\n \"\"\"Creates and returns graph from given text, cleans and tokenize text before building graph.\n\n Parameters\n ----------\n text : str\n Sequence of values.\n\n Returns\n -------\n :class:`~gensim.summarization.graph.Graph`\n Created graph.\n\n \"\"\"\n tokens = _clean_text_by_word(text)\n split_text = list(_tokenize_by_word(text))\n\n graph = _build_graph(_get_words_for_graph(tokens))\n _set_graph_edges(graph, tokens, split_text)\n\n return graph\n",
"path": "gensim/summarization/keywords.py"
}
] | [
{
"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n\n\"\"\"This module contains functions to find keywords of the text and building graph on tokens from text.\n\nExamples\n--------\nExtract keywords from text\n\n.. sourcecode:: pycon\n\n >>> from gensim.summarization import keywords\n >>> text = '''Challenges in natural language processing frequently involve\n ... speech recognition, natural language understanding, natural language\n ... generation (frequently from formal, machine-readable logical forms),\n ... connecting language and machine perception, dialog systems, or some\n ... combination thereof.'''\n >>> keywords(text).split('\\\\n')\n [u'natural language', u'machine', u'frequently']\n\n\nNotes\n-----\nCheck tags in http://www.clips.ua.ac.be/pages/mbsp-tags and use only first two letters\nfor `INCLUDING_FILTER` and `EXCLUDING_FILTER`\n\nData:\n-----\n.. data:: WINDOW_SIZE - Size of window, number of consecutive tokens in processing.\n.. data:: INCLUDING_FILTER - Including part of speech filters.\n.. data:: EXCLUDING_FILTER - Excluding part of speech filters.\n\n\"\"\"\n\nfrom gensim.summarization.pagerank_weighted import pagerank_weighted as _pagerank\nfrom gensim.summarization.textcleaner import clean_text_by_word as _clean_text_by_word\nfrom gensim.summarization.textcleaner import tokenize_by_word as _tokenize_by_word\nfrom gensim.summarization.commons import build_graph as _build_graph\nfrom gensim.summarization.commons import remove_unreachable_nodes as _remove_unreachable_nodes\nfrom gensim.utils import to_unicode\nfrom itertools import combinations as _combinations\nfrom six.moves.queue import Queue as _Queue\nfrom six.moves import range\nfrom six import iteritems\n\n\nWINDOW_SIZE = 2\n\nINCLUDING_FILTER = ['NN', 'JJ']\nEXCLUDING_FILTER = []\n\n\ndef _get_pos_filters():\n \"\"\"Get default including and excluding filters as frozen sets.\n\n Returns\n -------\n (frozenset of str, frozenset of str)\n Including and excluding filters.\n\n \"\"\"\n return frozenset(INCLUDING_FILTER), frozenset(EXCLUDING_FILTER)\n\n\ndef _get_words_for_graph(tokens, pos_filter=None):\n \"\"\"Filters given dictionary of tokens using provided part of speech filters.\n\n Parameters\n ----------\n tokens : dict\n Original units (words) as keys and processed units (tokens) as values.\n pos_filter : iterable\n Part of speech filters, optional. 
If `None` - using :func:`_get_pos_filters`.\n\n Returns\n -------\n list of str\n Filtered tokens.\n\n Raises\n ------\n ValueError\n If include and exclude filters ar not empty at the same time.\n\n \"\"\"\n if pos_filter is None:\n include_filters, exclude_filters = _get_pos_filters()\n else:\n include_filters = set(pos_filter)\n exclude_filters = frozenset([])\n if include_filters and exclude_filters:\n raise ValueError(\"Can't use both include and exclude filters, should use only one\")\n\n result = []\n for word, unit in iteritems(tokens):\n if exclude_filters and unit.tag in exclude_filters:\n continue\n if not include_filters or not unit.tag or unit.tag in include_filters:\n result.append(unit.token)\n return result\n\n\ndef _get_first_window(split_text):\n \"\"\"Get first :const:`~gensim.parsing.keywords.WINDOW_SIZE` tokens from given `split_text`.\n\n Parameters\n ----------\n split_text : list of str\n Splitted text.\n\n Returns\n -------\n list of str\n First :const:`~gensim.parsing.keywords.WINDOW_SIZE` tokens.\n\n \"\"\"\n return split_text[:WINDOW_SIZE]\n\n\ndef _set_graph_edge(graph, tokens, word_a, word_b):\n \"\"\"Sets an edge between nodes named word_a and word_b if they exists in `tokens` and `graph`, inplace.\n\n Parameters\n ----------\n graph : :class:~gensim.summarization.graph.Graph\n Given graph.\n tokens : dict\n Original units (words) as keys and processed units (tokens) as values.\n word_a : str\n First word, name of first node.\n word_b : str\n Second word, name of second node.\n\n \"\"\"\n if word_a in tokens and word_b in tokens:\n lemma_a = tokens[word_a].token\n lemma_b = tokens[word_b].token\n edge = (lemma_a, lemma_b)\n\n if graph.has_node(lemma_a) and graph.has_node(lemma_b) and not graph.has_edge(edge):\n graph.add_edge(edge)\n\n\ndef _process_first_window(graph, tokens, split_text):\n \"\"\"Sets an edges between nodes taken from first :const:`~gensim.parsing.keywords.WINDOW_SIZE`\n words of `split_text` if they exist in `tokens` and `graph`, inplace.\n\n Parameters\n ----------\n graph : :class:~gensim.summarization.graph.Graph\n Given graph.\n tokens : dict\n Original units (words) as keys and processed units (tokens) as values.\n split_text : list of str\n Splitted text.\n\n \"\"\"\n first_window = _get_first_window(split_text)\n for word_a, word_b in _combinations(first_window, 2):\n _set_graph_edge(graph, tokens, word_a, word_b)\n\n\ndef _init_queue(split_text):\n \"\"\"Initialize queue by first words from `split_text`.\n\n Parameters\n ----------\n split_text : list of str\n Splitted text.\n\n Returns\n -------\n Queue\n Initialized queue.\n\n \"\"\"\n queue = _Queue()\n first_window = _get_first_window(split_text)\n for word in first_window[1:]:\n queue.put(word)\n return queue\n\n\ndef _process_word(graph, tokens, queue, word):\n \"\"\"Sets edge between `word` and each element in queue in `graph` if such nodes\n exist in `tokens` and `graph`.\n\n Parameters\n ----------\n graph : :class:`~gensim.summarization.graph.Graph`\n Given graph.\n tokens : dict\n Original units (words) as keys and processed units (tokens) as values.\n queue : Queue\n Given queue.\n word : str\n Word, possible `node` in graph and item in `tokens`.\n\n \"\"\"\n for word_to_compare in _queue_iterator(queue):\n _set_graph_edge(graph, tokens, word, word_to_compare)\n\n\ndef _update_queue(queue, word):\n \"\"\"Updates given `queue` (removes last item and puts `word`).\n\n Parameters\n ----------\n queue : Queue\n Given queue.\n word : str\n Word to be added to queue.\n\n 
\"\"\"\n queue.get()\n queue.put(word)\n assert queue.qsize() == (WINDOW_SIZE - 1)\n\n\ndef _process_text(graph, tokens, split_text):\n \"\"\"Process `split_text` by updating given `graph` with new eges between nodes\n if they exists in `tokens` and `graph`.\n Words are taken from `split_text` with window size :const:`~gensim.parsing.keywords.WINDOW_SIZE`.\n\n Parameters\n ----------\n graph : :class:`~gensim.summarization.graph.Graph`\n Given graph.\n tokens : dict\n Original units (words) as keys and processed units (tokens) as values.\n split_text : list of str\n Splitted text.\n\n \"\"\"\n queue = _init_queue(split_text)\n for i in range(WINDOW_SIZE, len(split_text)):\n word = split_text[i]\n _process_word(graph, tokens, queue, word)\n _update_queue(queue, word)\n\n\ndef _queue_iterator(queue):\n \"\"\"Represents iterator of the given queue.\n\n Parameters\n ----------\n queue : Queue\n Given queue.\n\n Yields\n ------\n str\n Current item of queue.\n\n \"\"\"\n iterations = queue.qsize()\n for _ in range(iterations):\n var = queue.get()\n yield var\n queue.put(var)\n\n\ndef _set_graph_edges(graph, tokens, split_text):\n \"\"\"Updates given `graph` by setting eges between nodes if they exists in `tokens` and `graph`.\n Words are taken from `split_text` with window size :const:`~gensim.parsing.keywords.WINDOW_SIZE`.\n\n Parameters\n ----------\n graph : :class:~gensim.summarization.graph.Graph\n Given graph.\n tokens : dict\n Original units (words) as keys and processed units (tokens) as values.\n split_text : list of str\n Splitted text.\n\n \"\"\"\n _process_first_window(graph, tokens, split_text)\n _process_text(graph, tokens, split_text)\n\n\ndef _extract_tokens(lemmas, scores, ratio, words):\n \"\"\"Extracts tokens from provided lemmas. Most scored lemmas are used if `words` not provided.\n\n Parameters\n ----------\n lemmas : list of str\n Given lemmas.\n scores : dict\n Dictionary with lemmas and its scores.\n ratio : float\n Proportion of lemmas used for final result.\n words : int\n Number of used words. If no \"words\" option is selected, the number of\n sentences is reduced by the provided ratio, else, the ratio is ignored.\n\n Returns\n -------\n list of (float, str)\n Scores and corresponded lemmas.\n\n \"\"\"\n lemmas.sort(key=lambda s: scores[s], reverse=True)\n length = len(lemmas) * ratio if words is None else min(words, len(lemmas))\n return [(scores[lemmas[i]], lemmas[i],) for i in range(int(length))]\n\n\ndef _lemmas_to_words(tokens):\n \"\"\"Get words and lemmas from given tokens. 
Produces \"reversed\" `tokens`.\n\n Parameters\n ----------\n tokens : dict\n Original units (words) as keys and processed units (tokens) as values.\n\n Returns\n -------\n dict\n Lemmas as keys and lists corresponding words as values.\n\n \"\"\"\n lemma_to_word = {}\n for word, unit in iteritems(tokens):\n lemma = unit.token\n if lemma in lemma_to_word:\n lemma_to_word[lemma].append(word)\n else:\n lemma_to_word[lemma] = [word]\n return lemma_to_word\n\n\ndef _get_keywords_with_score(extracted_lemmas, lemma_to_word):\n \"\"\"Get words of `extracted_lemmas` and its scores, words contains in `lemma_to_word`.\n\n Parameters\n ----------\n extracted_lemmas : list of (float, str)\n Given lemmas with scores\n lemma_to_word : dict\n Lemmas and corresponding words.\n\n Returns\n -------\n dict\n Keywords as keys and its scores as values.\n\n \"\"\"\n\n keywords = {}\n for score, lemma in extracted_lemmas:\n keyword_list = lemma_to_word[lemma]\n for keyword in keyword_list:\n keywords[keyword] = score\n return keywords\n\n\ndef _strip_word(word):\n \"\"\"Get cleaned `word`.\n\n Parameters\n ----------\n word : str\n Given word.\n\n Returns\n -------\n str\n Cleaned word.\n \"\"\"\n stripped_word_list = list(_tokenize_by_word(word))\n return stripped_word_list[0] if stripped_word_list else \"\"\n\n\ndef _get_combined_keywords(_keywords, split_text):\n \"\"\"Get most scored words (`_keywords`) contained in `split_text` and it's combinations.\n\n Parameters\n ----------\n _keywords : dict\n Keywords as keys and its scores as values.\n split_text : list of str\n Splitted text.\n\n Returns\n -------\n list of str\n Keywords and/or its combinations.\n\n \"\"\"\n result = []\n _keywords = _keywords.copy()\n len_text = len(split_text)\n for i in range(len_text):\n word = _strip_word(split_text[i])\n if word in _keywords:\n combined_word = [word]\n if i + 1 == len_text:\n result.append(word) # appends last word if keyword and doesn't iterate\n for j in range(i + 1, len_text):\n other_word = _strip_word(split_text[j])\n if other_word in _keywords and other_word == split_text[j] and other_word not in combined_word:\n combined_word.append(other_word)\n else:\n for keyword in combined_word:\n _keywords.pop(keyword)\n result.append(\" \".join(combined_word))\n break\n return result\n\n\ndef _get_average_score(concept, _keywords):\n \"\"\"Get average score of words in `concept`.\n\n Parameters\n ----------\n concept : str\n Input text.\n _keywords : dict\n Keywords as keys and its scores as values.\n\n Returns\n -------\n float\n Average score.\n\n \"\"\"\n word_list = concept.split()\n word_counter = len(word_list)\n total = float(sum(_keywords[word] for word in word_list))\n return total / word_counter\n\n\ndef _format_results(_keywords, combined_keywords, split, scores):\n \"\"\"Formats, sorts and returns `combined_keywords` in desired format.\n\n Parameters\n ----------\n _keywords : dict\n Keywords as keys and its scores as values.\n combined_keywords : list of str\n Most ranked words and/or its combinations.\n split : bool\n Split result if True or return string otherwise, optional.\n scores : bool\n Whether return `combined_keywords` with scores, optional. 
If True\n `split` is ignored.\n\n Returns\n -------\n result: list of (str, float)\n If `scores`, keywords with scores **OR**\n result: list of str\n If `split`, keywords only **OR**\n result: str\n Keywords, joined by endl.\n\n \"\"\"\n combined_keywords.sort(key=lambda w: _get_average_score(w, _keywords), reverse=True)\n if scores:\n return [(word, _get_average_score(word, _keywords)) for word in combined_keywords]\n if split:\n return combined_keywords\n return \"\\n\".join(combined_keywords)\n\n\ndef keywords(text, ratio=0.2, words=None, split=False, scores=False, pos_filter=('NN', 'JJ'),\n lemmatize=False, deacc=True):\n \"\"\"Get most ranked words of provided text and/or its combinations.\n\n Parameters\n ----------\n\n text : str\n Input text.\n ratio : float, optional\n If no \"words\" option is selected, the number of sentences is reduced by the provided ratio,\n else, the ratio is ignored.\n words : int, optional\n Number of returned words.\n split : bool, optional\n Whether split keywords if True.\n scores : bool, optional\n Whether score of keyword.\n pos_filter : tuple, optional\n Part of speech filters.\n lemmatize : bool, optional\n If True - lemmatize words.\n deacc : bool, optional\n If True - remove accentuation.\n\n Returns\n -------\n result: list of (str, float)\n If `scores`, keywords with scores **OR**\n result: list of str\n If `split`, keywords only **OR**\n result: str\n Keywords, joined by endl.\n\n \"\"\"\n # Gets a dict of word -> lemma\n text = to_unicode(text)\n tokens = _clean_text_by_word(text, deacc=deacc)\n split_text = list(_tokenize_by_word(text))\n\n # Creates the graph and adds the edges\n graph = _build_graph(_get_words_for_graph(tokens, pos_filter))\n _set_graph_edges(graph, tokens, split_text)\n del split_text # It's no longer used\n\n _remove_unreachable_nodes(graph)\n\n if not any(True for _ in graph.iter_edges()):\n return _format_results([], [], split, scores)\n\n # Ranks the tokens using the PageRank algorithm. Returns dict of lemma -> score\n pagerank_scores = _pagerank(graph)\n\n extracted_lemmas = _extract_tokens(graph.nodes(), pagerank_scores, ratio, words)\n\n # The results can be polluted by many variations of the same word\n if lemmatize:\n lemmas_to_word = {}\n for word, unit in iteritems(tokens):\n lemmas_to_word[unit.token] = [word]\n else:\n lemmas_to_word = _lemmas_to_words(tokens)\n\n keywords = _get_keywords_with_score(extracted_lemmas, lemmas_to_word)\n\n # text.split() to keep numbers and punctuation marks, so separeted concepts are not combined\n combined_keywords = _get_combined_keywords(keywords, text.split())\n\n return _format_results(keywords, combined_keywords, split, scores)\n\n\ndef get_graph(text):\n \"\"\"Creates and returns graph from given text, cleans and tokenize text before building graph.\n\n Parameters\n ----------\n text : str\n Sequence of values.\n\n Returns\n -------\n :class:`~gensim.summarization.graph.Graph`\n Created graph.\n\n \"\"\"\n tokens = _clean_text_by_word(text)\n split_text = list(_tokenize_by_word(text))\n\n graph = _build_graph(_get_words_for_graph(tokens))\n _set_graph_edges(graph, tokens, split_text)\n\n return graph\n",
"path": "gensim/summarization/keywords.py"
}
] | diff --git a/gensim/summarization/keywords.py b/gensim/summarization/keywords.py
index db7c8a0dc7..2c85cf0bfe 100644
--- a/gensim/summarization/keywords.py
+++ b/gensim/summarization/keywords.py
@@ -302,7 +302,7 @@ def _extract_tokens(lemmas, scores, ratio, words):
"""
lemmas.sort(key=lambda s: scores[s], reverse=True)
- length = len(lemmas) * ratio if words is None else words
+ length = len(lemmas) * ratio if words is None else min(words, len(lemmas))
return [(scores[lemmas[i]], lemmas[i],) for i in range(int(length))]
diff --git a/gensim/test/test_keywords.py b/gensim/test/test_keywords.py
index 6011c83df4..ffe2f32a8f 100644
--- a/gensim/test/test_keywords.py
+++ b/gensim/test/test_keywords.py
@@ -101,6 +101,12 @@ def test_text_keywords_without_graph_edges(self):
kwds = keywords(text, deacc=False, scores=True)
self.assertFalse(len(kwds))
+ def test_keywords_with_words_greater_than_lemmas(self):
+ # words parameter is greater than number of words in text variable
+ text = 'Test string small length'
+ kwds = keywords(text, words=5, split=True)
+ self.assertIsNotNone(kwds)
+
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
|
hydroshare__hydroshare-2769 | Add back Active, Date joined, and last login in mezzanine listing of users
In the 3/19/18 version of HydroShare, when an admin listed users, the fields listed were:
[screenshot: admin user list columns in the 3/19/18 version]
At present, when an admin lists users, the fields are:
[screenshot: admin user list columns in the current version]
The fields Active, Date joined, and last login are needed so that, when there are problems with users creating and activating accounts (as occurred this week), an admin can list recent account creations and account-creation attempts to assess the extent of the problem and contact users who may have been impacted.
This regression was noted in https://github.com/hydroshare/hydroshare/pull/2677#issuecomment-374183106
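One way to restore these columns is to extend `list_display` on the stock `UserAdmin` before re-registering the `User` model; a sketch of an app `admin.py`, assuming the project already swaps in `UserAdmin` the way `hs_core/admin.py` does and using only the standard `auth.User` fields:
```python
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from django.contrib.auth.models import User

# Columns shown on the admin user list page; is_active, date_joined and
# last_login are the fields this issue asks to bring back.
UserAdmin.list_display = [
    'username', 'email', 'first_name', 'last_name', 'is_staff',
    'is_active', 'date_joined', 'last_login',
]

admin.site.unregister(User)
admin.site.register(User, UserAdmin)
```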
| [
{
"content": "from django import forms\nfrom django.contrib.auth.admin import UserAdmin\nfrom django.contrib.auth.forms import UserCreationForm\nfrom django.contrib.auth.models import User\nfrom django.contrib.gis import admin\nfrom django.contrib.contenttypes.admin import GenericTabularInline\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom mezzanine.pages.admin import PageAdmin\n\nfrom .models import *\n\n\nclass UserCreationFormExtended(UserCreationForm):\n def __init__(self, *args, **kwargs):\n super(UserCreationFormExtended, self).__init__(*args, **kwargs)\n self.fields['email'] = forms.EmailField(label=_(\"E-mail\"), max_length=75)\n\nUserAdmin.add_form = UserCreationFormExtended\nUserAdmin.add_fieldsets = (\n (None, {\n 'classes': ('wide',),\n 'fields': ('email', 'username', 'password1', 'password2',)\n }),\n)\n\nclass InlineResourceFiles(GenericTabularInline):\n model = ResourceFile\n\nclass GenericResourceAdmin(PageAdmin):\n inlines = PageAdmin.inlines + [InlineResourceFiles]\n\nadmin.site.unregister(User)\nadmin.site.register(User, UserAdmin)\nadmin.site.register(GenericResource, GenericResourceAdmin)\n",
"path": "hs_core/admin.py"
}
] | [
{
"content": "from django import forms\nfrom django.contrib.auth.admin import UserAdmin\nfrom django.contrib.auth.forms import UserCreationForm\nfrom django.contrib.auth.models import User\nfrom django.contrib.gis import admin\nfrom django.contrib.contenttypes.admin import GenericTabularInline\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom mezzanine.pages.admin import PageAdmin\n\nfrom .models import *\n\n\nclass UserCreationFormExtended(UserCreationForm):\n def __init__(self, *args, **kwargs):\n super(UserCreationFormExtended, self).__init__(*args, **kwargs)\n self.fields['email'] = forms.EmailField(label=_(\"E-mail\"), max_length=75)\n\nUserAdmin.add_form = UserCreationFormExtended\nUserAdmin.add_fieldsets = (\n (None, {\n 'classes': ('wide',),\n 'fields': ('email', 'username', 'password1', 'password2',)\n }),\n)\nUserAdmin.list_display = [\n 'username', 'email', 'first_name', 'last_name', 'is_staff',\n 'is_active', 'date_joined', 'last_login'\n]\n\nclass InlineResourceFiles(GenericTabularInline):\n model = ResourceFile\n\nclass GenericResourceAdmin(PageAdmin):\n inlines = PageAdmin.inlines + [InlineResourceFiles]\n\nadmin.site.unregister(User)\nadmin.site.register(User, UserAdmin)\nadmin.site.register(GenericResource, GenericResourceAdmin)\n",
"path": "hs_core/admin.py"
}
] | diff --git a/hs_core/admin.py b/hs_core/admin.py
index a0a7b4e7f1..797c926320 100755
--- a/hs_core/admin.py
+++ b/hs_core/admin.py
@@ -23,6 +23,10 @@ def __init__(self, *args, **kwargs):
'fields': ('email', 'username', 'password1', 'password2',)
}),
)
+UserAdmin.list_display = [
+ 'username', 'email', 'first_name', 'last_name', 'is_staff',
+ 'is_active', 'date_joined', 'last_login'
+]
class InlineResourceFiles(GenericTabularInline):
model = ResourceFile
diff --git a/theme/templates/resource-landing-page/title-section.html b/theme/templates/resource-landing-page/title-section.html
index 09f1ccdcb1..cd9b2f25c9 100644
--- a/theme/templates/resource-landing-page/title-section.html
+++ b/theme/templates/resource-landing-page/title-section.html
@@ -229,7 +229,7 @@ <h2 id="resource-title">{{ title }}</h2>
"@type": "Dataset",
"additionalType": ["http://schema.geolink.org/1.0/base/main#Dataset", "http://vivoweb.org/ontology/core#Dataset"],
"name": "{{ title }}",
- "description": "{{ cm.metadata.description }}",
+ "description": "{{ cm.metadata.description | escapejs }}",
"url": "https://www.hydroshare.org/resource/{{ cm.short_id }}/",
"version": "2017-06-04",
{% if cm.raccess.public %} "isAccessibleForFree": true, {% endif %}
@@ -294,7 +294,7 @@ <h2 id="resource-title">{{ title }}</h2>
"creator": {
"@id": "{{ cr.description }}",
"@type": "Person",
- "additionalType": "http://schema.geolink.org/1.0/base/main#Person", // Is this necessary?
+ "additionalType": "http://schema.geolink.org/1.0/base/main#Person",
"name": "{{ cr.name }}",
"url": "{{ cr.description }}/"
}
|
bokeh__bokeh-10311 | [BUG] Link in docs is not working for fill color property
https://docs.bokeh.org/en/latest/_modules/bokeh/core/property_mixins.html#FillProps
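For context, `FillProps` is the mixin that supplies the `fill_color` / `fill_alpha` glyph properties whose help text contains the broken link; a small usage sketch with `bokeh.plotting` (the figure and data here are made up for illustration):
```python
from bokeh.plotting import figure, show

p = figure(title="fill properties demo")
# fill_color accepts named CSS colors, hex strings, or RGB(A) tuples,
# which is what the broken "CSS colors" reference in the help text points to.
p.circle(x=[1, 2, 3], y=[4, 5, 6], size=30, fill_color="indigo", fill_alpha=0.5)
show(p)
```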
| [
{
"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' Mix-in classes that bulk add groups of properties to Bokeh models.\n\nSome groups of properties often show up in Bokeh models together. For\ninstance, any model that exposes a fill color property for use when\nrendering will almost always want to expose a fill alpha as well. To\nreduce boilerplate code and simplify defining models with these sets\nof properties, use the mix-in classes in this module:\n\n* |FillProps| --- properties for fill color and alpha\n\n* |HatchProps| --- properties for hatching pattern, color, alpha, etc.\n\n* |LineProps| --- properties for line color, dashing, width, etc.\n\n* |TextProps| --- properties for text color, font, etc.\n\nTo include these properties in a Bokeh model, use the |Include| property\nas shown here:\n\n.. code-block:: python\n\n class SomeGlyph(Glyph):\n\n fill_props = Include(FillProps, use_prefix=False, help=\"\"\"\n The %s values for the annular wedges.\n \"\"\")\n\nThis adds all the fill properties ``fill_color`` and ``fill_alpha`` to this\nmodel with one simple statement. Note that the help string contains a\nplaceholder format `%s`. When docs for this class are rendered by the\n:ref:`bokeh.sphinxext.bokeh_model` Sphinx extension, the placeholder will\nbe replaced with more information specific to each property. The setting\n``use_prefix`` means that the names of the properties added to ``SomeGlyph``\nare exactly ``fill_alpha`` and ``fill_color``. Some situations require a\ndifferent usage, for more information see the docs for |Include|.\n\n.. |Include| replace:: :class:`~bokeh.core.properties.Include`\n\n.. |FillProps| replace:: :class:`~bokeh.core.property_mixins.FillProps`\n.. |HatchProps| replace:: :class:`~bokeh.core.property_mixins.HatchProps`\n.. |LineProps| replace:: :class:`~bokeh.core.property_mixins.LineProps`\n.. 
|TextProps| replace:: :class:`~bokeh.core.property_mixins.TextProps`\n\n'''\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nimport logging # isort:skip\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Bokeh imports\nfrom .enums import (\n FontStyle,\n HatchPattern,\n HatchPatternAbbreviation,\n LineCap,\n LineJoin,\n TextAlign,\n TextBaseline,\n)\nfrom .has_props import HasProps\nfrom .properties import (\n Color,\n ColorSpec,\n DashPattern,\n Dict,\n Enum,\n Float,\n FontSize,\n FontSizeSpec,\n HatchPatternSpec,\n Include,\n Instance,\n Int,\n NumberSpec,\n Percent,\n Size,\n String,\n value,\n)\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'FillProps',\n 'HatchProps',\n 'LineProps',\n 'TextProps',\n 'ScalarFillProps',\n 'ScalarHatchProps',\n 'ScalarLineProps',\n 'ScalarTextProps',\n)\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n_color_help = \"\"\"\nA color to use to %s with.\n\nAcceptable values are:\n\n- any of the 147 named `CSS colors`_, e.g ``'green'``, ``'indigo'``\n- an RGB(A) hex value, e.g., ``'#FF0000'``, ``'#44444444'``\n- a 3-tuple of integers (r,g,b) between 0 and 255\n- a 4-tuple of (r,g,b,a) where r,g,b are integers between 0..255 and a is between 0..1\n\n.. _CSS colors: http://www.w3schools.com/cssref/css_colornames.asp\n\n\"\"\"\n\n_alpha_help = \"\"\"\nAn alpha value to use to %s with.\n\nAcceptable values are floating point numbers between 0 (transparent)\nand 1 (opaque).\n\n\"\"\"\n\nclass _BaseLineProps(HasProps):\n line_join = Enum(LineJoin, default='bevel', help=\"\"\"\n How path segments should be joined together.\n\n Acceptable values are:\n\n - ``'miter'`` |miter_join|\n - ``'round'`` |round_join|\n - ``'bevel'`` |bevel_join|\n\n .. |miter_join| image:: /_images/miter_join.png\n :height: 15\n .. |round_join| image:: /_images/round_join.png\n :height: 15\n .. |bevel_join| image:: /_images/bevel_join.png\n :height: 15\n\n \"\"\")\n\n line_cap = Enum(LineCap, help=\"\"\"\n How path segments should be terminated.\n\n Acceptable values are:\n\n - ``'butt'`` |butt_cap|\n - ``'round'`` |round_cap|\n - ``'square'`` |square_cap|\n\n .. |butt_cap| image:: /_images/butt_cap.png\n :height: 12\n .. |round_cap| image:: /_images/round_cap.png\n :height: 12\n .. 
|square_cap| image:: /_images/square_cap.png\n :height: 12\n\n \"\"\")\n\n line_dash = DashPattern(help=\"\"\"\n How should the line be dashed.\n \"\"\")\n\n line_dash_offset = Int(0, help=\"\"\"\n The distance into the ``line_dash`` (in pixels) that the pattern should\n start from.\n \"\"\")\n\nclass _BaseTextProps(HasProps):\n\n text_font = String(\"helvetica\", help=\"\"\"\n Name of a font to use for rendering text, e.g., ``'times'``,\n ``'helvetica'``.\n\n \"\"\")\n\n text_font_style = Enum(FontStyle, help=\"\"\"\n A style to use for rendering text.\n\n Acceptable values are:\n\n - ``'normal'`` normal text\n - ``'italic'`` *italic text*\n - ``'bold'`` **bold text**\n - ``\"bold italic\"`` ***bold italic text***\n\n \"\"\")\n\n text_align = Enum(TextAlign, help=\"\"\"\n Horizontal anchor point to use when rendering text.\n\n Acceptable values are:\n\n - ``'left'``\n - ``'right'``\n - ``'center'``\n\n \"\"\")\n\n text_baseline = Enum(TextBaseline, default=\"bottom\", help=\"\"\"\n Vertical anchor point to use when rendering text.\n\n Acceptable values are:\n\n - ``'top'``\n - ``'middle'``\n - ``'bottom'``\n - ``'alphabetic'``\n - ``'hanging'``\n - ``'ideographic'``\n\n \"\"\")\n\n text_line_height = Float(default=1.2, help=\"\"\"\n In multi-line text, how much additional space should be allocated for\n each line. The value is provided as a number, but should be treated as\n a percentage of font size. The default is 120%. Setting it to 1.0, so\n 100%, means no additional space will be used.\n \"\"\")\n\n#----------------------------------------------------------------------------\n# General API\n#----------------------------------------------------------------------------\n\nclass FillProps(HasProps):\n ''' Properties relevant to rendering fill regions.\n\n Mirrors the BokehJS ``properties.Fill`` class.\n\n '''\n\n fill_color = ColorSpec(default=\"gray\", help=_color_help % \"fill paths\")\n fill_alpha = NumberSpec(default=1.0, accept_datetime=False, accept_timedelta=False, help=_alpha_help % \"fill paths\")\n\nclass ScalarFillProps(HasProps):\n ''' Properties relevant to rendering fill regions.\n\n Mirrors the BokehJS ``properties.Fill`` class.\n\n '''\n\n fill_color = Color(default=\"gray\", help=_color_help % \"fill paths\")\n fill_alpha = Percent(default=1.0, help=_alpha_help)\n\n_hatch_scale_help = \"\"\"\nA rough measure of the 'size' of the hatching pattern. Generally speaking, the\nhigher the number, the more spread out the pattern will be.\n\"\"\"\n\n_hatch_pattern_help = \"\"\"\nBuilt-in patterns are can either be specified as long names:\n\n%s\n\nor as one-letter abbreviations:\n\n%s\n\"\"\" % (\", \". join(HatchPattern), \", \". 
join(repr(x) for x in HatchPatternAbbreviation))\n\n_hatch_weight_help = \"\"\"\nA width value for line-strokes used in hatching.\n\"\"\"\n\nclass HatchProps(HasProps):\n ''' Properties relevant to rendering fill regions.\n\n Mirrors the BokehJS ``properties.Hatch`` class.\n\n '''\n\n hatch_color = ColorSpec(default=\"black\", help=_color_help % \"hatching\")\n hatch_alpha = NumberSpec(default=1.0, accept_datetime=False, accept_timedelta=False, help=_alpha_help % \"hatching\")\n hatch_scale = NumberSpec(default=12.0, accept_datetime=False, accept_timedelta=False, help=_hatch_scale_help)\n hatch_pattern = HatchPatternSpec(default=None, help=_hatch_pattern_help)\n hatch_weight = NumberSpec(default=1.0, accept_datetime=False, accept_timedelta=False, help=_hatch_weight_help)\n hatch_extra = Dict(String, Instance(\"bokeh.models.textures.Texture\"))\n\nclass ScalarHatchProps(HasProps):\n ''' Properties relevant to rendering fill regions.\n\n Mirrors the BokehJS ``properties.Hatch`` class.\n\n '''\n\n hatch_color = Color(default=\"black\", help=_color_help % \"hatching\")\n hatch_alpha = Percent(default=1.0, help=_alpha_help % \"hatching\")\n hatch_scale = Size(default=12.0, help=_hatch_scale_help)\n hatch_pattern = String(default=None, help=_hatch_pattern_help) # String to accommodate user custom values\n hatch_weight = Size(default=1.0, help=_hatch_weight_help)\n hatch_extra = Dict(String, Instance(\"bokeh.models.textures.Texture\"))\n\n_line_width_help = \"\"\"\nStroke width in units of pixels.\n\"\"\"\n\nclass LineProps(HasProps):\n ''' Properties relevant to rendering path operations.\n\n Mirrors the BokehJS ``properties.Line`` class.\n\n '''\n\n base_line_props = Include(_BaseLineProps, use_prefix=False)\n\n line_color = ColorSpec(default=\"black\", help=_color_help % \"stroke paths\")\n line_width = NumberSpec(default=1, accept_datetime=False, accept_timedelta=False, help=_line_width_help)\n line_alpha = NumberSpec(default=1.0, accept_datetime=False, accept_timedelta=False, help=_alpha_help % \"stroke paths\")\n\n\nclass ScalarLineProps(HasProps):\n ''' Properties relevant to rendering path operations.\n\n Mirrors the BokehJS ``properties.Line`` class.\n\n '''\n base_line_props = Include(_BaseLineProps, use_prefix=False)\n\n line_color = Color(default=\"black\", help=_color_help % \"stroke paths\")\n line_width = Float(default=1, help=_line_width_help)\n line_alpha = Percent(default=1.0, help=_alpha_help % \"stroke paths\")\n\n\nclass TextProps(HasProps):\n ''' Properties relevant to rendering text.\n\n Mirrors the BokehJS ``properties.Text`` class.\n\n .. note::\n There is currently only support for filling text. An interface\n to stroke the outlines of text has not yet been exposed.\n\n '''\n base_text_props = Include(_BaseTextProps, use_prefix=False)\n\n text_font_size = FontSizeSpec(value(\"16px\"))\n\n text_color = ColorSpec(default=\"#444444\", help=_color_help % \"fill text\")\n\n text_alpha = NumberSpec(default=1.0, accept_datetime=False, accept_timedelta=False, help=_alpha_help % \"fill text\")\n\nclass ScalarTextProps(HasProps):\n ''' Properties relevant to rendering text.\n\n Mirrors the BokehJS ``properties.Text`` class.\n\n .. note::\n There is currently only support for filling text. 
An interface\n to stroke the outlines of text has not yet been exposed.\n\n '''\n\n base_text_props = Include(_BaseTextProps, use_prefix=False)\n\n # XXX not great\n text_font_size = FontSize(\"16px\")\n\n text_color = Color(default=\"#444444\", help=_color_help % \"fill text\")\n\n text_alpha = Percent(default=1.0, help=_alpha_help % \"fill text\")\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n",
"path": "bokeh/core/property_mixins.py"
}
] | [
{
"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' Mix-in classes that bulk add groups of properties to Bokeh models.\n\nSome groups of properties often show up in Bokeh models together. For\ninstance, any model that exposes a fill color property for use when\nrendering will almost always want to expose a fill alpha as well. To\nreduce boilerplate code and simplify defining models with these sets\nof properties, use the mix-in classes in this module:\n\n* |FillProps| --- properties for fill color and alpha\n\n* |HatchProps| --- properties for hatching pattern, color, alpha, etc.\n\n* |LineProps| --- properties for line color, dashing, width, etc.\n\n* |TextProps| --- properties for text color, font, etc.\n\nTo include these properties in a Bokeh model, use the |Include| property\nas shown here:\n\n.. code-block:: python\n\n class SomeGlyph(Glyph):\n\n fill_props = Include(FillProps, use_prefix=False, help=\"\"\"\n The %s values for the annular wedges.\n \"\"\")\n\nThis adds all the fill properties ``fill_color`` and ``fill_alpha`` to this\nmodel with one simple statement. Note that the help string contains a\nplaceholder format `%s`. When docs for this class are rendered by the\n:ref:`bokeh.sphinxext.bokeh_model` Sphinx extension, the placeholder will\nbe replaced with more information specific to each property. The setting\n``use_prefix`` means that the names of the properties added to ``SomeGlyph``\nare exactly ``fill_alpha`` and ``fill_color``. Some situations require a\ndifferent usage, for more information see the docs for |Include|.\n\n.. |Include| replace:: :class:`~bokeh.core.properties.Include`\n\n.. |FillProps| replace:: :class:`~bokeh.core.property_mixins.FillProps`\n.. |HatchProps| replace:: :class:`~bokeh.core.property_mixins.HatchProps`\n.. |LineProps| replace:: :class:`~bokeh.core.property_mixins.LineProps`\n.. 
|TextProps| replace:: :class:`~bokeh.core.property_mixins.TextProps`\n\n'''\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nimport logging # isort:skip\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Bokeh imports\nfrom .enums import (\n FontStyle,\n HatchPattern,\n HatchPatternAbbreviation,\n LineCap,\n LineJoin,\n TextAlign,\n TextBaseline,\n)\nfrom .has_props import HasProps\nfrom .properties import (\n Color,\n ColorSpec,\n DashPattern,\n Dict,\n Enum,\n Float,\n FontSize,\n FontSizeSpec,\n HatchPatternSpec,\n Include,\n Instance,\n Int,\n NumberSpec,\n Percent,\n Size,\n String,\n value,\n)\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'FillProps',\n 'HatchProps',\n 'LineProps',\n 'TextProps',\n 'ScalarFillProps',\n 'ScalarHatchProps',\n 'ScalarLineProps',\n 'ScalarTextProps',\n)\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n_color_help = \"\"\"\nA color to use to %s with.\n\nAcceptable values are:\n\n- any of the 147 named `CSS colors`_, e.g ``'green'``, ``'indigo'``\n- an RGB(A) hex value, e.g., ``'#FF0000'``, ``'#44444444'``\n- a 3-tuple of integers (r,g,b) between 0 and 255\n- a 4-tuple of (r,g,b,a) where r,g,b are integers between 0..255 and a is between 0..1\n\n.. _CSS colors: https://www.w3schools.com/colors/colors_names.asp\n\n\"\"\"\n\n_alpha_help = \"\"\"\nAn alpha value to use to %s with.\n\nAcceptable values are floating point numbers between 0 (transparent)\nand 1 (opaque).\n\n\"\"\"\n\nclass _BaseLineProps(HasProps):\n line_join = Enum(LineJoin, default='bevel', help=\"\"\"\n How path segments should be joined together.\n\n Acceptable values are:\n\n - ``'miter'`` |miter_join|\n - ``'round'`` |round_join|\n - ``'bevel'`` |bevel_join|\n\n .. |miter_join| image:: /_images/miter_join.png\n :height: 15\n .. |round_join| image:: /_images/round_join.png\n :height: 15\n .. |bevel_join| image:: /_images/bevel_join.png\n :height: 15\n\n \"\"\")\n\n line_cap = Enum(LineCap, help=\"\"\"\n How path segments should be terminated.\n\n Acceptable values are:\n\n - ``'butt'`` |butt_cap|\n - ``'round'`` |round_cap|\n - ``'square'`` |square_cap|\n\n .. |butt_cap| image:: /_images/butt_cap.png\n :height: 12\n .. |round_cap| image:: /_images/round_cap.png\n :height: 12\n .. 
|square_cap| image:: /_images/square_cap.png\n :height: 12\n\n \"\"\")\n\n line_dash = DashPattern(help=\"\"\"\n How should the line be dashed.\n \"\"\")\n\n line_dash_offset = Int(0, help=\"\"\"\n The distance into the ``line_dash`` (in pixels) that the pattern should\n start from.\n \"\"\")\n\nclass _BaseTextProps(HasProps):\n\n text_font = String(\"helvetica\", help=\"\"\"\n Name of a font to use for rendering text, e.g., ``'times'``,\n ``'helvetica'``.\n\n \"\"\")\n\n text_font_style = Enum(FontStyle, help=\"\"\"\n A style to use for rendering text.\n\n Acceptable values are:\n\n - ``'normal'`` normal text\n - ``'italic'`` *italic text*\n - ``'bold'`` **bold text**\n - ``\"bold italic\"`` ***bold italic text***\n\n \"\"\")\n\n text_align = Enum(TextAlign, help=\"\"\"\n Horizontal anchor point to use when rendering text.\n\n Acceptable values are:\n\n - ``'left'``\n - ``'right'``\n - ``'center'``\n\n \"\"\")\n\n text_baseline = Enum(TextBaseline, default=\"bottom\", help=\"\"\"\n Vertical anchor point to use when rendering text.\n\n Acceptable values are:\n\n - ``'top'``\n - ``'middle'``\n - ``'bottom'``\n - ``'alphabetic'``\n - ``'hanging'``\n - ``'ideographic'``\n\n \"\"\")\n\n text_line_height = Float(default=1.2, help=\"\"\"\n In multi-line text, how much additional space should be allocated for\n each line. The value is provided as a number, but should be treated as\n a percentage of font size. The default is 120%. Setting it to 1.0, so\n 100%, means no additional space will be used.\n \"\"\")\n\n#----------------------------------------------------------------------------\n# General API\n#----------------------------------------------------------------------------\n\nclass FillProps(HasProps):\n ''' Properties relevant to rendering fill regions.\n\n Mirrors the BokehJS ``properties.Fill`` class.\n\n '''\n\n fill_color = ColorSpec(default=\"gray\", help=_color_help % \"fill paths\")\n fill_alpha = NumberSpec(default=1.0, accept_datetime=False, accept_timedelta=False, help=_alpha_help % \"fill paths\")\n\nclass ScalarFillProps(HasProps):\n ''' Properties relevant to rendering fill regions.\n\n Mirrors the BokehJS ``properties.Fill`` class.\n\n '''\n\n fill_color = Color(default=\"gray\", help=_color_help % \"fill paths\")\n fill_alpha = Percent(default=1.0, help=_alpha_help)\n\n_hatch_scale_help = \"\"\"\nA rough measure of the 'size' of the hatching pattern. Generally speaking, the\nhigher the number, the more spread out the pattern will be.\n\"\"\"\n\n_hatch_pattern_help = \"\"\"\nBuilt-in patterns are can either be specified as long names:\n\n%s\n\nor as one-letter abbreviations:\n\n%s\n\"\"\" % (\", \". join(HatchPattern), \", \". 
join(repr(x) for x in HatchPatternAbbreviation))\n\n_hatch_weight_help = \"\"\"\nA width value for line-strokes used in hatching.\n\"\"\"\n\nclass HatchProps(HasProps):\n ''' Properties relevant to rendering fill regions.\n\n Mirrors the BokehJS ``properties.Hatch`` class.\n\n '''\n\n hatch_color = ColorSpec(default=\"black\", help=_color_help % \"hatching\")\n hatch_alpha = NumberSpec(default=1.0, accept_datetime=False, accept_timedelta=False, help=_alpha_help % \"hatching\")\n hatch_scale = NumberSpec(default=12.0, accept_datetime=False, accept_timedelta=False, help=_hatch_scale_help)\n hatch_pattern = HatchPatternSpec(default=None, help=_hatch_pattern_help)\n hatch_weight = NumberSpec(default=1.0, accept_datetime=False, accept_timedelta=False, help=_hatch_weight_help)\n hatch_extra = Dict(String, Instance(\"bokeh.models.textures.Texture\"))\n\nclass ScalarHatchProps(HasProps):\n ''' Properties relevant to rendering fill regions.\n\n Mirrors the BokehJS ``properties.Hatch`` class.\n\n '''\n\n hatch_color = Color(default=\"black\", help=_color_help % \"hatching\")\n hatch_alpha = Percent(default=1.0, help=_alpha_help % \"hatching\")\n hatch_scale = Size(default=12.0, help=_hatch_scale_help)\n hatch_pattern = String(default=None, help=_hatch_pattern_help) # String to accommodate user custom values\n hatch_weight = Size(default=1.0, help=_hatch_weight_help)\n hatch_extra = Dict(String, Instance(\"bokeh.models.textures.Texture\"))\n\n_line_width_help = \"\"\"\nStroke width in units of pixels.\n\"\"\"\n\nclass LineProps(HasProps):\n ''' Properties relevant to rendering path operations.\n\n Mirrors the BokehJS ``properties.Line`` class.\n\n '''\n\n base_line_props = Include(_BaseLineProps, use_prefix=False)\n\n line_color = ColorSpec(default=\"black\", help=_color_help % \"stroke paths\")\n line_width = NumberSpec(default=1, accept_datetime=False, accept_timedelta=False, help=_line_width_help)\n line_alpha = NumberSpec(default=1.0, accept_datetime=False, accept_timedelta=False, help=_alpha_help % \"stroke paths\")\n\n\nclass ScalarLineProps(HasProps):\n ''' Properties relevant to rendering path operations.\n\n Mirrors the BokehJS ``properties.Line`` class.\n\n '''\n base_line_props = Include(_BaseLineProps, use_prefix=False)\n\n line_color = Color(default=\"black\", help=_color_help % \"stroke paths\")\n line_width = Float(default=1, help=_line_width_help)\n line_alpha = Percent(default=1.0, help=_alpha_help % \"stroke paths\")\n\n\nclass TextProps(HasProps):\n ''' Properties relevant to rendering text.\n\n Mirrors the BokehJS ``properties.Text`` class.\n\n .. note::\n There is currently only support for filling text. An interface\n to stroke the outlines of text has not yet been exposed.\n\n '''\n base_text_props = Include(_BaseTextProps, use_prefix=False)\n\n text_font_size = FontSizeSpec(value(\"16px\"))\n\n text_color = ColorSpec(default=\"#444444\", help=_color_help % \"fill text\")\n\n text_alpha = NumberSpec(default=1.0, accept_datetime=False, accept_timedelta=False, help=_alpha_help % \"fill text\")\n\nclass ScalarTextProps(HasProps):\n ''' Properties relevant to rendering text.\n\n Mirrors the BokehJS ``properties.Text`` class.\n\n .. note::\n There is currently only support for filling text. 
An interface\n to stroke the outlines of text has not yet been exposed.\n\n '''\n\n base_text_props = Include(_BaseTextProps, use_prefix=False)\n\n # XXX not great\n text_font_size = FontSize(\"16px\")\n\n text_color = Color(default=\"#444444\", help=_color_help % \"fill text\")\n\n text_alpha = Percent(default=1.0, help=_alpha_help % \"fill text\")\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n",
"path": "bokeh/core/property_mixins.py"
}
] | diff --git a/bokeh/core/property_mixins.py b/bokeh/core/property_mixins.py
index 79b0b4577eb..85369400031 100644
--- a/bokeh/core/property_mixins.py
+++ b/bokeh/core/property_mixins.py
@@ -118,7 +118,7 @@ class SomeGlyph(Glyph):
- a 3-tuple of integers (r,g,b) between 0 and 255
- a 4-tuple of (r,g,b,a) where r,g,b are integers between 0..255 and a is between 0..1
-.. _CSS colors: http://www.w3schools.com/cssref/css_colornames.asp
+.. _CSS colors: https://www.w3schools.com/colors/colors_names.asp
"""
|
ray-project__ray-7665 | [Python] jsonschema included twice in setup.py requires list.
### What is the problem?
`jsonschema` is included twice in the Python package [setup.py `requires` list](https://github.com/ray-project/ray/blob/master/python/setup.py#L176-L183). This is causing the usage of the Ray Python library within Bazel to fail during the analysis phase due to label duplication in the generated `py_library` target's `'deps'`:
```
ERROR: .../external/requirements_py3_pypi__ray_0_9_0_dev0/BUILD:6:1: Label '@requirements_py3_pypi__jsonschema_3_2_0//:pkg' is duplicated in the 'deps' attribute of rule 'pkg'
```
This bug was introduced in the [cluster json schema validator PR](https://github.com/ray-project/ray/pull/7261/files#diff-8cf6167d58ce775a08acafcfe6f40966).
*Ray version and other system information (Python version, TensorFlow version, OS):*
Ray master commit 90b553ed058a546e036374cd0919e00604892514 (most recent commit as of this issue filing)
### Reproduction (REQUIRED)
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [x] I have verified the issue also occurs with the [latest wheels](https://ray.readthedocs.io/en/latest/installation.html).
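The duplication described above is also easy to guard against mechanically. Below is a minimal, hypothetical helper (not part of the Ray codebase; the name and usage are illustrative only) that fails fast when a requirement appears twice. The actual fix, shown in the diff further down, is simply to drop the second `"jsonschema"` entry.
```python
# Hypothetical guard for setup.py: reject duplicate entries in the requires
# list before they reach downstream tooling (e.g. Bazel-generated deps).
from collections import Counter

def assert_no_duplicate_requirements(requires):
    duplicated = [name for name, count in Counter(requires).items() if count > 1]
    assert not duplicated, "Duplicated requirements: {}".format(duplicated)

# Example: this raises because "jsonschema" is listed twice.
assert_no_duplicate_requirements(["numpy >= 1.16", "jsonschema", "pyyaml", "jsonschema"])
```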
| [
{
"content": "from itertools import chain\nimport os\nimport re\nimport shutil\nimport subprocess\nimport sys\n\nfrom setuptools import setup, find_packages, Distribution\nimport setuptools.command.build_ext as _build_ext\n\n# Ideally, we could include these files by putting them in a\n# MANIFEST.in or using the package_data argument to setup, but the\n# MANIFEST.in gets applied at the very beginning when setup.py runs\n# before these files have been created, so we have to move the files\n# manually.\n\n# NOTE: The lists below must be kept in sync with ray/BUILD.bazel.\nray_files = [\n \"ray/core/src/ray/thirdparty/redis/src/redis-server\",\n \"ray/core/src/ray/gcs/redis_module/libray_redis_module.so\",\n \"ray/core/src/plasma/plasma_store_server\",\n \"ray/_raylet.so\",\n \"ray/core/src/ray/raylet/raylet_monitor\",\n \"ray/core/src/ray/gcs/gcs_server\",\n \"ray/core/src/ray/raylet/raylet\",\n \"ray/dashboard/dashboard.py\",\n \"ray/streaming/_streaming.so\",\n]\n\nbuild_java = os.getenv(\"RAY_INSTALL_JAVA\") == \"1\"\nif build_java:\n ray_files.append(\"ray/jars/ray_dist.jar\")\n\n# These are the directories where automatically generated Python protobuf\n# bindings are created.\ngenerated_python_directories = [\n \"ray/core/generated\",\n \"ray/streaming/generated\",\n]\n\noptional_ray_files = []\n\nray_autoscaler_files = [\n \"ray/autoscaler/aws/example-full.yaml\",\n \"ray/autoscaler/azure/example-full.yaml\",\n \"ray/autoscaler/gcp/example-full.yaml\",\n \"ray/autoscaler/local/example-full.yaml\",\n \"ray/autoscaler/kubernetes/example-full.yaml\",\n \"ray/autoscaler/kubernetes/kubectl-rsync.sh\",\n \"ray/autoscaler/ray-schema.json\"\n]\n\nray_project_files = [\n \"ray/projects/schema.json\", \"ray/projects/templates/cluster_template.yaml\",\n \"ray/projects/templates/project_template.yaml\",\n \"ray/projects/templates/requirements.txt\"\n]\n\nray_dashboard_files = [\n os.path.join(dirpath, filename)\n for dirpath, dirnames, filenames in os.walk(\"ray/dashboard/client/build\")\n for filename in filenames\n]\n\noptional_ray_files += ray_autoscaler_files\noptional_ray_files += ray_project_files\noptional_ray_files += ray_dashboard_files\n\nif \"RAY_USE_NEW_GCS\" in os.environ and os.environ[\"RAY_USE_NEW_GCS\"] == \"on\":\n ray_files += [\n \"ray/core/src/credis/build/src/libmember.so\",\n \"ray/core/src/credis/build/src/libmaster.so\",\n \"ray/core/src/credis/redis/src/redis-server\"\n ]\n\nextras = {\n \"debug\": [],\n \"dashboard\": [],\n \"serve\": [\"uvicorn\", \"pygments\", \"werkzeug\", \"flask\", \"pandas\", \"blist\"],\n \"tune\": [\"tabulate\", \"tensorboardX\"]\n}\n\nextras[\"rllib\"] = extras[\"tune\"] + [\n \"atari_py\",\n \"dm_tree\",\n \"gym[atari]\",\n \"lz4\",\n \"opencv-python-headless\",\n \"pyyaml\",\n \"scipy\",\n]\n\nextras[\"streaming\"] = [\"msgpack >= 0.6.2\"]\n\nextras[\"all\"] = list(set(chain.from_iterable(extras.values())))\n\n\nclass build_ext(_build_ext.build_ext):\n def run(self):\n # Note: We are passing in sys.executable so that we use the same\n # version of Python to build packages inside the build.sh script. 
Note\n # that certain flags will not be passed along such as --user or sudo.\n # TODO(rkn): Fix this.\n command = [\"../build.sh\", \"-p\", sys.executable]\n if build_java:\n # Also build binaries for Java if the above env variable exists.\n command += [\"-l\", \"python,java\"]\n subprocess.check_call(command)\n\n # We also need to install pickle5 along with Ray, so make sure that the\n # relevant non-Python pickle5 files get copied.\n pickle5_files = self.walk_directory(\"./ray/pickle5_files/pickle5\")\n\n thirdparty_files = self.walk_directory(\"./ray/thirdparty_files\")\n\n files_to_include = ray_files + pickle5_files + thirdparty_files\n\n # Copy over the autogenerated protobuf Python bindings.\n for directory in generated_python_directories:\n for filename in os.listdir(directory):\n if filename[-3:] == \".py\":\n files_to_include.append(os.path.join(directory, filename))\n\n for filename in files_to_include:\n self.move_file(filename)\n\n # Try to copy over the optional files.\n for filename in optional_ray_files:\n try:\n self.move_file(filename)\n except Exception:\n print(\"Failed to copy optional file {}. This is ok.\"\n .format(filename))\n\n def walk_directory(self, directory):\n file_list = []\n for (root, dirs, filenames) in os.walk(directory):\n for name in filenames:\n file_list.append(os.path.join(root, name))\n return file_list\n\n def move_file(self, filename):\n # TODO(rkn): This feels very brittle. It may not handle all cases. See\n # https://github.com/apache/arrow/blob/master/python/setup.py for an\n # example.\n source = filename\n destination = os.path.join(self.build_lib, filename)\n # Create the target directory if it doesn't already exist.\n parent_directory = os.path.dirname(destination)\n if not os.path.exists(parent_directory):\n os.makedirs(parent_directory)\n if not os.path.exists(destination):\n print(\"Copying {} to {}.\".format(source, destination))\n shutil.copy(source, destination, follow_symlinks=True)\n\n\nclass BinaryDistribution(Distribution):\n def has_ext_modules(self):\n return True\n\n\ndef find_version(*filepath):\n # Extract version information from filepath\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *filepath)) as fp:\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n fp.read(), re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nrequires = [\n \"numpy >= 1.16\",\n \"filelock\",\n \"jsonschema\",\n \"funcsigs\",\n \"click\",\n \"colorama\",\n \"packaging\",\n \"pytest\",\n \"pyyaml\",\n \"jsonschema\",\n \"redis>=3.3.2\",\n # NOTE: Don't upgrade the version of six! Doing so causes installation\n # problems. 
See https://github.com/ray-project/ray/issues/4169.\n \"six >= 1.0.0\",\n \"faulthandler;python_version<'3.3'\",\n \"protobuf >= 3.8.0\",\n \"cloudpickle\",\n \"py-spy >= 0.2.0\",\n \"aiohttp\",\n \"google\",\n \"grpcio\"\n]\n\nsetup(\n name=\"ray\",\n version=find_version(\"ray\", \"__init__.py\"),\n author=\"Ray Team\",\n author_email=\"[email protected]\",\n description=(\"A system for parallel and distributed Python that unifies \"\n \"the ML ecosystem.\"),\n long_description=open(\"../README.rst\").read(),\n url=\"https://github.com/ray-project/ray\",\n keywords=(\"ray distributed parallel machine-learning \"\n \"reinforcement-learning deep-learning python\"),\n packages=find_packages(),\n cmdclass={\"build_ext\": build_ext},\n # The BinaryDistribution argument triggers build_ext.\n distclass=BinaryDistribution,\n install_requires=requires,\n setup_requires=[\"cython >= 0.29\"],\n extras_require=extras,\n entry_points={\n \"console_scripts\": [\n \"ray=ray.scripts.scripts:main\",\n \"rllib=ray.rllib.scripts:cli [rllib]\", \"tune=ray.tune.scripts:cli\"\n ]\n },\n include_package_data=True,\n zip_safe=False,\n license=\"Apache 2.0\")\n",
"path": "python/setup.py"
}
] | [
{
"content": "from itertools import chain\nimport os\nimport re\nimport shutil\nimport subprocess\nimport sys\n\nfrom setuptools import setup, find_packages, Distribution\nimport setuptools.command.build_ext as _build_ext\n\n# Ideally, we could include these files by putting them in a\n# MANIFEST.in or using the package_data argument to setup, but the\n# MANIFEST.in gets applied at the very beginning when setup.py runs\n# before these files have been created, so we have to move the files\n# manually.\n\n# NOTE: The lists below must be kept in sync with ray/BUILD.bazel.\nray_files = [\n \"ray/core/src/ray/thirdparty/redis/src/redis-server\",\n \"ray/core/src/ray/gcs/redis_module/libray_redis_module.so\",\n \"ray/core/src/plasma/plasma_store_server\",\n \"ray/_raylet.so\",\n \"ray/core/src/ray/raylet/raylet_monitor\",\n \"ray/core/src/ray/gcs/gcs_server\",\n \"ray/core/src/ray/raylet/raylet\",\n \"ray/dashboard/dashboard.py\",\n \"ray/streaming/_streaming.so\",\n]\n\nbuild_java = os.getenv(\"RAY_INSTALL_JAVA\") == \"1\"\nif build_java:\n ray_files.append(\"ray/jars/ray_dist.jar\")\n\n# These are the directories where automatically generated Python protobuf\n# bindings are created.\ngenerated_python_directories = [\n \"ray/core/generated\",\n \"ray/streaming/generated\",\n]\n\noptional_ray_files = []\n\nray_autoscaler_files = [\n \"ray/autoscaler/aws/example-full.yaml\",\n \"ray/autoscaler/azure/example-full.yaml\",\n \"ray/autoscaler/gcp/example-full.yaml\",\n \"ray/autoscaler/local/example-full.yaml\",\n \"ray/autoscaler/kubernetes/example-full.yaml\",\n \"ray/autoscaler/kubernetes/kubectl-rsync.sh\",\n \"ray/autoscaler/ray-schema.json\"\n]\n\nray_project_files = [\n \"ray/projects/schema.json\", \"ray/projects/templates/cluster_template.yaml\",\n \"ray/projects/templates/project_template.yaml\",\n \"ray/projects/templates/requirements.txt\"\n]\n\nray_dashboard_files = [\n os.path.join(dirpath, filename)\n for dirpath, dirnames, filenames in os.walk(\"ray/dashboard/client/build\")\n for filename in filenames\n]\n\noptional_ray_files += ray_autoscaler_files\noptional_ray_files += ray_project_files\noptional_ray_files += ray_dashboard_files\n\nif \"RAY_USE_NEW_GCS\" in os.environ and os.environ[\"RAY_USE_NEW_GCS\"] == \"on\":\n ray_files += [\n \"ray/core/src/credis/build/src/libmember.so\",\n \"ray/core/src/credis/build/src/libmaster.so\",\n \"ray/core/src/credis/redis/src/redis-server\"\n ]\n\nextras = {\n \"debug\": [],\n \"dashboard\": [],\n \"serve\": [\"uvicorn\", \"pygments\", \"werkzeug\", \"flask\", \"pandas\", \"blist\"],\n \"tune\": [\"tabulate\", \"tensorboardX\"]\n}\n\nextras[\"rllib\"] = extras[\"tune\"] + [\n \"atari_py\",\n \"dm_tree\",\n \"gym[atari]\",\n \"lz4\",\n \"opencv-python-headless\",\n \"pyyaml\",\n \"scipy\",\n]\n\nextras[\"streaming\"] = [\"msgpack >= 0.6.2\"]\n\nextras[\"all\"] = list(set(chain.from_iterable(extras.values())))\n\n\nclass build_ext(_build_ext.build_ext):\n def run(self):\n # Note: We are passing in sys.executable so that we use the same\n # version of Python to build packages inside the build.sh script. 
Note\n # that certain flags will not be passed along such as --user or sudo.\n # TODO(rkn): Fix this.\n command = [\"../build.sh\", \"-p\", sys.executable]\n if build_java:\n # Also build binaries for Java if the above env variable exists.\n command += [\"-l\", \"python,java\"]\n subprocess.check_call(command)\n\n # We also need to install pickle5 along with Ray, so make sure that the\n # relevant non-Python pickle5 files get copied.\n pickle5_files = self.walk_directory(\"./ray/pickle5_files/pickle5\")\n\n thirdparty_files = self.walk_directory(\"./ray/thirdparty_files\")\n\n files_to_include = ray_files + pickle5_files + thirdparty_files\n\n # Copy over the autogenerated protobuf Python bindings.\n for directory in generated_python_directories:\n for filename in os.listdir(directory):\n if filename[-3:] == \".py\":\n files_to_include.append(os.path.join(directory, filename))\n\n for filename in files_to_include:\n self.move_file(filename)\n\n # Try to copy over the optional files.\n for filename in optional_ray_files:\n try:\n self.move_file(filename)\n except Exception:\n print(\"Failed to copy optional file {}. This is ok.\"\n .format(filename))\n\n def walk_directory(self, directory):\n file_list = []\n for (root, dirs, filenames) in os.walk(directory):\n for name in filenames:\n file_list.append(os.path.join(root, name))\n return file_list\n\n def move_file(self, filename):\n # TODO(rkn): This feels very brittle. It may not handle all cases. See\n # https://github.com/apache/arrow/blob/master/python/setup.py for an\n # example.\n source = filename\n destination = os.path.join(self.build_lib, filename)\n # Create the target directory if it doesn't already exist.\n parent_directory = os.path.dirname(destination)\n if not os.path.exists(parent_directory):\n os.makedirs(parent_directory)\n if not os.path.exists(destination):\n print(\"Copying {} to {}.\".format(source, destination))\n shutil.copy(source, destination, follow_symlinks=True)\n\n\nclass BinaryDistribution(Distribution):\n def has_ext_modules(self):\n return True\n\n\ndef find_version(*filepath):\n # Extract version information from filepath\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *filepath)) as fp:\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n fp.read(), re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nrequires = [\n \"numpy >= 1.16\",\n \"filelock\",\n \"jsonschema\",\n \"funcsigs\",\n \"click\",\n \"colorama\",\n \"packaging\",\n \"pytest\",\n \"pyyaml\",\n \"redis>=3.3.2\",\n # NOTE: Don't upgrade the version of six! Doing so causes installation\n # problems. 
See https://github.com/ray-project/ray/issues/4169.\n \"six >= 1.0.0\",\n \"faulthandler;python_version<'3.3'\",\n \"protobuf >= 3.8.0\",\n \"cloudpickle\",\n \"py-spy >= 0.2.0\",\n \"aiohttp\",\n \"google\",\n \"grpcio\"\n]\n\nsetup(\n name=\"ray\",\n version=find_version(\"ray\", \"__init__.py\"),\n author=\"Ray Team\",\n author_email=\"[email protected]\",\n description=(\"A system for parallel and distributed Python that unifies \"\n \"the ML ecosystem.\"),\n long_description=open(\"../README.rst\").read(),\n url=\"https://github.com/ray-project/ray\",\n keywords=(\"ray distributed parallel machine-learning \"\n \"reinforcement-learning deep-learning python\"),\n packages=find_packages(),\n cmdclass={\"build_ext\": build_ext},\n # The BinaryDistribution argument triggers build_ext.\n distclass=BinaryDistribution,\n install_requires=requires,\n setup_requires=[\"cython >= 0.29\"],\n extras_require=extras,\n entry_points={\n \"console_scripts\": [\n \"ray=ray.scripts.scripts:main\",\n \"rllib=ray.rllib.scripts:cli [rllib]\", \"tune=ray.tune.scripts:cli\"\n ]\n },\n include_package_data=True,\n zip_safe=False,\n license=\"Apache 2.0\")\n",
"path": "python/setup.py"
}
] | diff --git a/python/setup.py b/python/setup.py
index 36af00e764bfb..a4edb34606ca9 100644
--- a/python/setup.py
+++ b/python/setup.py
@@ -180,7 +180,6 @@ def find_version(*filepath):
"packaging",
"pytest",
"pyyaml",
- "jsonschema",
"redis>=3.3.2",
# NOTE: Don't upgrade the version of six! Doing so causes installation
# problems. See https://github.com/ray-project/ray/issues/4169.
|
googleapis__google-cloud-python-1347 | Should we compare Entity._meanings in __eq__
/cc @tseaver @pcostell
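The diff below answers this in the affirmative: equality now also compares `_meanings` (and `_exclude_from_indexes`). A condensed, runnable sketch of the resulting behaviour follows; the real class lives in `gcloud/datastore/entity.py` and validates `exclude_from_indexes` via `_ensure_tuple_or_list`, which is omitted here for brevity.
```python
class Entity(dict):
    """Condensed stand-in for gcloud.datastore.entity.Entity (validation omitted)."""

    def __init__(self, key=None, exclude_from_indexes=()):
        super(Entity, self).__init__()
        self.key = key
        self._exclude_from_indexes = set(exclude_from_indexes)
        self._meanings = {}  # populated when parsing a protobuf

    def __eq__(self, other):
        if not isinstance(other, Entity):
            return False
        return (self.key == other.key and
                self._exclude_from_indexes == other._exclude_from_indexes and
                self._meanings == other._meanings and
                super(Entity, self).__eq__(other))

    def __ne__(self, other):
        return not self.__eq__(other)

# Same key and properties, but differing meanings now compare unequal.
a, b = Entity(), Entity()
a["name"] = b["name"] = "JJ"
b._meanings["name"] = (9, "JJ")
assert a != b
```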
| [
{
"content": "# Copyright 2014 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Class for representing a single entity in the Cloud Datastore.\"\"\"\n\n\nfrom gcloud._helpers import _ensure_tuple_or_list\n\n\nclass Entity(dict):\n \"\"\"Entities are akin to rows in a relational database\n\n An entity storing the actual instance of data.\n\n Each entity is officially represented with a\n :class:`gcloud.datastore.key.Key` class, however it is possible that\n you might create an Entity with only a partial Key (that is, a Key\n with a Kind, and possibly a parent, but without an ID). In such a\n case, the datastore service will automatically assign an ID to the\n partial key.\n\n Entities in this API act like dictionaries with extras built in that\n allow you to delete or persist the data stored on the entity.\n\n Entities are mutable and act like a subclass of a dictionary.\n This means you could take an existing entity and change the key\n to duplicate the object.\n\n Use :func:`gcloud.datastore.get` to retrieve an existing entity.\n\n >>> datastore.get(key)\n <Entity[{'kind': 'EntityKind', id: 1234}] {'property': 'value'}>\n\n You can the set values on the entity just like you would on any\n other dictionary.\n\n >>> entity['age'] = 20\n >>> entity['name'] = 'JJ'\n >>> entity\n <Entity[{'kind': 'EntityKind', id: 1234}] {'age': 20, 'name': 'JJ'}>\n\n And you can convert an entity to a regular Python dictionary with the\n ``dict`` builtin:\n\n >>> dict(entity)\n {'age': 20, 'name': 'JJ'}\n\n .. note::\n\n When saving an entity to the backend, values which are \"text\"\n (``unicode`` in Python2, ``str`` in Python3) will be saved using\n the 'text_value' field, after being encoded to UTF-8. When\n retrieved from the back-end, such values will be decoded to \"text\"\n again. Values which are \"bytes\" (``str`` in Python2, ``bytes`` in\n Python3), will be saved using the 'blob_value' field, without\n any decoding / encoding step.\n\n :type key: :class:`gcloud.datastore.key.Key`\n :param key: Optional key to be set on entity. 
Required for\n :func:`gcloud.datastore.put()` and\n :func:`gcloud.datastore.put_multi()`\n\n :type exclude_from_indexes: tuple of string\n :param exclude_from_indexes: Names of fields whose values are not to be\n indexed for this entity.\n \"\"\"\n\n def __init__(self, key=None, exclude_from_indexes=()):\n super(Entity, self).__init__()\n self.key = key\n self._exclude_from_indexes = set(_ensure_tuple_or_list(\n 'exclude_from_indexes', exclude_from_indexes))\n # NOTE: This will be populated when parsing a protobuf in\n # gcloud.datastore.helpers.entity_from_protobuf.\n self._meanings = {}\n\n def __eq__(self, other):\n \"\"\"Compare two entities for equality.\n\n Entities compare equal if their keys compare equal, and their\n properties compare equal.\n\n :rtype: boolean\n :returns: True if the entities compare equal, else False.\n \"\"\"\n if not isinstance(other, Entity):\n return False\n\n return (self.key == other.key and\n super(Entity, self).__eq__(other))\n\n def __ne__(self, other):\n \"\"\"Compare two entities for inequality.\n\n Entities compare equal if their keys compare equal, and their\n properties compare equal.\n\n :rtype: boolean\n :returns: False if the entities compare equal, else True.\n \"\"\"\n return not self.__eq__(other)\n\n @property\n def kind(self):\n \"\"\"Get the kind of the current entity.\n\n .. note::\n This relies entirely on the :class:`gcloud.datastore.key.Key`\n set on the entity. That means that we're not storing the kind\n of the entity at all, just the properties and a pointer to a\n Key which knows its Kind.\n \"\"\"\n if self.key:\n return self.key.kind\n\n @property\n def exclude_from_indexes(self):\n \"\"\"Names of fields which are *not* to be indexed for this entity.\n\n :rtype: sequence of field names\n \"\"\"\n return frozenset(self._exclude_from_indexes)\n\n def __repr__(self):\n if self.key:\n return '<Entity%s %s>' % (self.key.path,\n super(Entity, self).__repr__())\n else:\n return '<Entity %s>' % (super(Entity, self).__repr__())\n",
"path": "gcloud/datastore/entity.py"
}
] | [
{
"content": "# Copyright 2014 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Class for representing a single entity in the Cloud Datastore.\"\"\"\n\n\nfrom gcloud._helpers import _ensure_tuple_or_list\n\n\nclass Entity(dict):\n \"\"\"Entities are akin to rows in a relational database\n\n An entity storing the actual instance of data.\n\n Each entity is officially represented with a\n :class:`gcloud.datastore.key.Key` class, however it is possible that\n you might create an Entity with only a partial Key (that is, a Key\n with a Kind, and possibly a parent, but without an ID). In such a\n case, the datastore service will automatically assign an ID to the\n partial key.\n\n Entities in this API act like dictionaries with extras built in that\n allow you to delete or persist the data stored on the entity.\n\n Entities are mutable and act like a subclass of a dictionary.\n This means you could take an existing entity and change the key\n to duplicate the object.\n\n Use :func:`gcloud.datastore.get` to retrieve an existing entity.\n\n >>> datastore.get(key)\n <Entity[{'kind': 'EntityKind', id: 1234}] {'property': 'value'}>\n\n You can the set values on the entity just like you would on any\n other dictionary.\n\n >>> entity['age'] = 20\n >>> entity['name'] = 'JJ'\n >>> entity\n <Entity[{'kind': 'EntityKind', id: 1234}] {'age': 20, 'name': 'JJ'}>\n\n And you can convert an entity to a regular Python dictionary with the\n ``dict`` builtin:\n\n >>> dict(entity)\n {'age': 20, 'name': 'JJ'}\n\n .. note::\n\n When saving an entity to the backend, values which are \"text\"\n (``unicode`` in Python2, ``str`` in Python3) will be saved using\n the 'text_value' field, after being encoded to UTF-8. When\n retrieved from the back-end, such values will be decoded to \"text\"\n again. Values which are \"bytes\" (``str`` in Python2, ``bytes`` in\n Python3), will be saved using the 'blob_value' field, without\n any decoding / encoding step.\n\n :type key: :class:`gcloud.datastore.key.Key`\n :param key: Optional key to be set on entity. 
Required for\n :func:`gcloud.datastore.put()` and\n :func:`gcloud.datastore.put_multi()`\n\n :type exclude_from_indexes: tuple of string\n :param exclude_from_indexes: Names of fields whose values are not to be\n indexed for this entity.\n \"\"\"\n\n def __init__(self, key=None, exclude_from_indexes=()):\n super(Entity, self).__init__()\n self.key = key\n self._exclude_from_indexes = set(_ensure_tuple_or_list(\n 'exclude_from_indexes', exclude_from_indexes))\n # NOTE: This will be populated when parsing a protobuf in\n # gcloud.datastore.helpers.entity_from_protobuf.\n self._meanings = {}\n\n def __eq__(self, other):\n \"\"\"Compare two entities for equality.\n\n Entities compare equal if their keys compare equal, and their\n properties compare equal.\n\n :rtype: boolean\n :returns: True if the entities compare equal, else False.\n \"\"\"\n if not isinstance(other, Entity):\n return False\n\n return (self.key == other.key and\n self._exclude_from_indexes == other._exclude_from_indexes and\n self._meanings == other._meanings and\n super(Entity, self).__eq__(other))\n\n def __ne__(self, other):\n \"\"\"Compare two entities for inequality.\n\n Entities compare equal if their keys compare equal, and their\n properties compare equal.\n\n :rtype: boolean\n :returns: False if the entities compare equal, else True.\n \"\"\"\n return not self.__eq__(other)\n\n @property\n def kind(self):\n \"\"\"Get the kind of the current entity.\n\n .. note::\n This relies entirely on the :class:`gcloud.datastore.key.Key`\n set on the entity. That means that we're not storing the kind\n of the entity at all, just the properties and a pointer to a\n Key which knows its Kind.\n \"\"\"\n if self.key:\n return self.key.kind\n\n @property\n def exclude_from_indexes(self):\n \"\"\"Names of fields which are *not* to be indexed for this entity.\n\n :rtype: sequence of field names\n \"\"\"\n return frozenset(self._exclude_from_indexes)\n\n def __repr__(self):\n if self.key:\n return '<Entity%s %s>' % (self.key.path,\n super(Entity, self).__repr__())\n else:\n return '<Entity %s>' % (super(Entity, self).__repr__())\n",
"path": "gcloud/datastore/entity.py"
}
] | diff --git a/gcloud/datastore/entity.py b/gcloud/datastore/entity.py
index 7a25e648391a..c27db1fc76e8 100644
--- a/gcloud/datastore/entity.py
+++ b/gcloud/datastore/entity.py
@@ -98,6 +98,8 @@ def __eq__(self, other):
return False
return (self.key == other.key and
+ self._exclude_from_indexes == other._exclude_from_indexes and
+ self._meanings == other._meanings and
super(Entity, self).__eq__(other))
def __ne__(self, other):
diff --git a/gcloud/datastore/test_entity.py b/gcloud/datastore/test_entity.py
index 122b916ea58b..8cf7ee43856d 100644
--- a/gcloud/datastore/test_entity.py
+++ b/gcloud/datastore/test_entity.py
@@ -70,10 +70,21 @@ def test___eq_____ne___w_different_keys(self):
def test___eq_____ne___w_same_keys(self):
from gcloud.datastore.key import Key
+
+ name = 'foo'
+ value = 42
+ meaning = 9
+
key1 = Key(_KIND, _ID, dataset_id=_DATASET_ID)
- entity1 = self._makeOne(key=key1)
+ entity1 = self._makeOne(key=key1, exclude_from_indexes=(name,))
+ entity1[name] = value
+ entity1._meanings[name] = (meaning, value)
+
key2 = Key(_KIND, _ID, dataset_id=_DATASET_ID)
- entity2 = self._makeOne(key=key2)
+ entity2 = self._makeOne(key=key2, exclude_from_indexes=(name,))
+ entity2[name] = value
+ entity2._meanings[name] = (meaning, value)
+
self.assertTrue(entity1 == entity2)
self.assertFalse(entity1 != entity2)
@@ -140,6 +151,38 @@ def test___eq_____ne___w_same_keys_props_w_diff_entities_as_value(self):
self.assertFalse(entity1 == entity2)
self.assertTrue(entity1 != entity2)
+ def test__eq__same_value_different_exclude(self):
+ from gcloud.datastore.key import Key
+
+ name = 'foo'
+ value = 42
+ key = Key(_KIND, _ID, dataset_id=_DATASET_ID)
+
+ entity1 = self._makeOne(key=key, exclude_from_indexes=(name,))
+ entity1[name] = value
+
+ entity2 = self._makeOne(key=key, exclude_from_indexes=())
+ entity2[name] = value
+
+ self.assertFalse(entity1 == entity2)
+
+ def test__eq__same_value_different_meanings(self):
+ from gcloud.datastore.key import Key
+
+ name = 'foo'
+ value = 42
+ meaning = 9
+ key = Key(_KIND, _ID, dataset_id=_DATASET_ID)
+
+ entity1 = self._makeOne(key=key, exclude_from_indexes=(name,))
+ entity1[name] = value
+
+ entity2 = self._makeOne(key=key, exclude_from_indexes=(name,))
+ entity2[name] = value
+ entity2._meanings[name] = (meaning, value)
+
+ self.assertFalse(entity1 == entity2)
+
def test___repr___no_key_empty(self):
entity = self._makeOne()
self.assertEqual(repr(entity), '<Entity {}>')
|
espnet__espnet-2227 | espnet2 inference error without language model
If no language model is used, espnet2's asr_inference.py raises the following error:
  File "espnet2/espnet2/bin/asr_inference.py", line 152, in __init__
    self.lm_train_args = lm_train_args
UnboundLocalError: local variable 'lm_train_args' referenced before assignment
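A standalone illustration of the underlying Python pattern (not espnet code): a name bound only inside a conditional branch is later read unconditionally. The upstream fix in the diff below simply deletes the unused `self.lm_train_args` assignment; pre-initialising the name, as sketched here, is an equivalent defensive alternative.
```python
# Bug pattern: lm_train_args is only bound when an LM config is supplied.
def build(lm_train_config=None):
    if lm_train_config is not None:
        lm_train_args = {"config": lm_train_config}
    return lm_train_args  # UnboundLocalError when lm_train_config is None

# Pre-initialising the name avoids the error for the no-LM case.
def build_fixed(lm_train_config=None):
    lm_train_args = None
    if lm_train_config is not None:
        lm_train_args = {"config": lm_train_config}
    return lm_train_args

assert build_fixed() is None
```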
| [
{
"content": "#!/usr/bin/env python3\nimport argparse\nimport logging\nfrom pathlib import Path\nimport sys\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Tuple\nfrom typing import Union\n\nimport numpy as np\nimport torch\nfrom typeguard import check_argument_types\nfrom typeguard import check_return_type\nfrom typing import List\n\nfrom espnet.nets.batch_beam_search import BatchBeamSearch\nfrom espnet.nets.beam_search import BeamSearch\nfrom espnet.nets.beam_search import Hypothesis\nfrom espnet.nets.scorer_interface import BatchScorerInterface\nfrom espnet.nets.scorers.ctc import CTCPrefixScorer\nfrom espnet.nets.scorers.length_bonus import LengthBonus\nfrom espnet.utils.cli_utils import get_commandline_args\nfrom espnet2.fileio.datadir_writer import DatadirWriter\nfrom espnet2.tasks.asr import ASRTask\nfrom espnet2.tasks.lm import LMTask\nfrom espnet2.text.build_tokenizer import build_tokenizer\nfrom espnet2.text.token_id_converter import TokenIDConverter\nfrom espnet2.torch_utils.device_funcs import to_device\nfrom espnet2.torch_utils.set_all_random_seed import set_all_random_seed\nfrom espnet2.utils import config_argparse\nfrom espnet2.utils.types import str2bool\nfrom espnet2.utils.types import str2triple_str\nfrom espnet2.utils.types import str_or_none\n\n\nclass Speech2Text:\n \"\"\"Speech2Text class\n\n Examples:\n >>> import soundfile\n >>> speech2text = Speech2Text(\"asr_config.yml\", \"asr.pth\")\n >>> audio, rate = soundfile.read(\"speech.wav\")\n >>> speech2text(audio)\n [(text, token, token_int, hypothesis object), ...]\n\n \"\"\"\n\n def __init__(\n self,\n asr_train_config: Union[Path, str],\n asr_model_file: Union[Path, str] = None,\n lm_train_config: Union[Path, str] = None,\n lm_file: Union[Path, str] = None,\n token_type: str = None,\n bpemodel: str = None,\n device: str = \"cpu\",\n maxlenratio: float = 0.0,\n minlenratio: float = 0.0,\n batch_size: int = 1,\n dtype: str = \"float32\",\n beam_size: int = 20,\n ctc_weight: float = 0.5,\n lm_weight: float = 1.0,\n penalty: float = 0.0,\n nbest: int = 1,\n ):\n assert check_argument_types()\n\n # 1. Build ASR model\n scorers = {}\n asr_model, asr_train_args = ASRTask.build_model_from_file(\n asr_train_config, asr_model_file, device\n )\n asr_model.eval()\n\n decoder = asr_model.decoder\n ctc = CTCPrefixScorer(ctc=asr_model.ctc, eos=asr_model.eos)\n token_list = asr_model.token_list\n scorers.update(\n decoder=decoder, ctc=ctc, length_bonus=LengthBonus(len(token_list)),\n )\n\n # 2. Build Language model\n if lm_train_config is not None:\n lm, lm_train_args = LMTask.build_model_from_file(\n lm_train_config, lm_file, device\n )\n scorers[\"lm\"] = lm.lm\n\n # 3. 
Build BeamSearch object\n weights = dict(\n decoder=1.0 - ctc_weight,\n ctc=ctc_weight,\n lm=lm_weight,\n length_bonus=penalty,\n )\n beam_search = BeamSearch(\n beam_size=beam_size,\n weights=weights,\n scorers=scorers,\n sos=asr_model.sos,\n eos=asr_model.eos,\n vocab_size=len(token_list),\n token_list=token_list,\n pre_beam_score_key=None if ctc_weight == 1.0 else \"full\",\n )\n # TODO(karita): make all scorers batchfied\n if batch_size == 1:\n non_batch = [\n k\n for k, v in beam_search.full_scorers.items()\n if not isinstance(v, BatchScorerInterface)\n ]\n if len(non_batch) == 0:\n beam_search.__class__ = BatchBeamSearch\n logging.info(\"BatchBeamSearch implementation is selected.\")\n else:\n logging.warning(\n f\"As non-batch scorers {non_batch} are found, \"\n f\"fall back to non-batch implementation.\"\n )\n beam_search.to(device=device, dtype=getattr(torch, dtype)).eval()\n for scorer in scorers.values():\n if isinstance(scorer, torch.nn.Module):\n scorer.to(device=device, dtype=getattr(torch, dtype)).eval()\n logging.info(f\"Beam_search: {beam_search}\")\n logging.info(f\"Decoding device={device}, dtype={dtype}\")\n\n # 4. [Optional] Build Text converter: e.g. bpe-sym -> Text\n if token_type is None:\n token_type = asr_train_args.token_type\n if bpemodel is None:\n bpemodel = asr_train_args.bpemodel\n\n if token_type is None:\n tokenizer = None\n elif token_type == \"bpe\":\n if bpemodel is not None:\n tokenizer = build_tokenizer(token_type=token_type, bpemodel=bpemodel)\n else:\n tokenizer = None\n else:\n tokenizer = build_tokenizer(token_type=token_type)\n converter = TokenIDConverter(token_list=token_list)\n logging.info(f\"Text tokenizer: {tokenizer}\")\n\n self.asr_model = asr_model\n self.asr_train_args = asr_train_args\n self.lm_train_args = lm_train_args\n self.converter = converter\n self.tokenizer = tokenizer\n self.beam_search = beam_search\n self.maxlenratio = maxlenratio\n self.minlenratio = minlenratio\n self.device = device\n self.dtype = dtype\n self.nbest = nbest\n\n @torch.no_grad()\n def __call__(\n self, speech: Union[torch.Tensor, np.ndarray]\n ) -> List[Tuple[Optional[str], List[str], List[int], Hypothesis]]:\n \"\"\"Inference\n\n Args:\n data: Input speech data\n Returns:\n text, token, token_int, hyp\n\n \"\"\"\n assert check_argument_types()\n\n # Input as audio signal\n if isinstance(speech, np.ndarray):\n speech = torch.tensor(speech)\n\n # data: (Nsamples,) -> (1, Nsamples)\n speech = speech.unsqueeze(0).to(getattr(torch, self.dtype))\n # lenghts: (1,)\n lengths = speech.new_full([1], dtype=torch.long, fill_value=speech.size(1))\n batch = {\"speech\": speech, \"speech_lengths\": lengths}\n\n # a. To device\n batch = to_device(batch, device=self.device)\n\n # b. Forward Encoder\n enc, _ = self.asr_model.encode(**batch)\n assert len(enc) == 1, len(enc)\n\n # c. 
Passed the encoder result and the beam search\n nbest_hyps = self.beam_search(\n x=enc[0], maxlenratio=self.maxlenratio, minlenratio=self.minlenratio\n )\n nbest_hyps = nbest_hyps[: self.nbest]\n\n results = []\n for hyp in nbest_hyps:\n assert isinstance(hyp, Hypothesis), type(hyp)\n\n # remove sos/eos and get results\n token_int = hyp.yseq[1:-1].tolist()\n\n # remove blank symbol id, which is assumed to be 0\n token_int = list(filter(lambda x: x != 0, token_int))\n\n # Change integer-ids to tokens\n token = self.converter.ids2tokens(token_int)\n\n if self.tokenizer is not None:\n text = self.tokenizer.tokens2text(token)\n else:\n text = None\n results.append((text, token, token_int, hyp))\n\n assert check_return_type(results)\n return results\n\n\ndef inference(\n output_dir: str,\n maxlenratio: float,\n minlenratio: float,\n batch_size: int,\n dtype: str,\n beam_size: int,\n ngpu: int,\n seed: int,\n ctc_weight: float,\n lm_weight: float,\n penalty: float,\n nbest: int,\n num_workers: int,\n log_level: Union[int, str],\n data_path_and_name_and_type: Sequence[Tuple[str, str, str]],\n key_file: Optional[str],\n asr_train_config: str,\n asr_model_file: str,\n lm_train_config: Optional[str],\n lm_file: Optional[str],\n word_lm_train_config: Optional[str],\n word_lm_file: Optional[str],\n token_type: Optional[str],\n bpemodel: Optional[str],\n allow_variable_data_keys: bool,\n):\n assert check_argument_types()\n if batch_size > 1:\n raise NotImplementedError(\"batch decoding is not implemented\")\n if word_lm_train_config is not None:\n raise NotImplementedError(\"Word LM is not implemented\")\n if ngpu > 1:\n raise NotImplementedError(\"only single GPU decoding is supported\")\n\n logging.basicConfig(\n level=log_level,\n format=\"%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s\",\n )\n\n if ngpu >= 1:\n device = \"cuda\"\n else:\n device = \"cpu\"\n\n # 1. Set random-seed\n set_all_random_seed(seed)\n\n # 2. Build speech2text\n speech2text = Speech2Text(\n asr_train_config=asr_train_config,\n asr_model_file=asr_model_file,\n lm_train_config=lm_train_config,\n lm_file=lm_file,\n token_type=token_type,\n bpemodel=bpemodel,\n device=device,\n maxlenratio=maxlenratio,\n minlenratio=minlenratio,\n dtype=dtype,\n beam_size=beam_size,\n ctc_weight=ctc_weight,\n lm_weight=lm_weight,\n penalty=penalty,\n nbest=nbest,\n )\n\n # 3. 
Build data-iterator\n loader = ASRTask.build_streaming_iterator(\n data_path_and_name_and_type,\n dtype=dtype,\n batch_size=batch_size,\n key_file=key_file,\n num_workers=num_workers,\n preprocess_fn=ASRTask.build_preprocess_fn(speech2text.asr_train_args, False),\n collate_fn=ASRTask.build_collate_fn(speech2text.asr_train_args),\n allow_variable_data_keys=allow_variable_data_keys,\n inference=True,\n )\n\n # 7 .Start for-loop\n # FIXME(kamo): The output format should be discussed about\n with DatadirWriter(output_dir) as writer:\n for keys, batch in loader:\n assert isinstance(batch, dict), type(batch)\n assert all(isinstance(s, str) for s in keys), keys\n _bs = len(next(iter(batch.values())))\n assert len(keys) == _bs, f\"{len(keys)} != {_bs}\"\n batch = {k: v[0] for k, v in batch.items() if not k.endswith(\"_lengths\")}\n\n # N-best list of (text, token, token_int, hyp_object)\n results = speech2text(**batch)\n\n # Only supporting batch_size==1\n key = keys[0]\n for n, (text, token, token_int, hyp) in zip(range(1, nbest + 1), results):\n # Create a directory: outdir/{n}best_recog\n ibest_writer = writer[f\"{n}best_recog\"]\n\n # Write the result to each file\n ibest_writer[\"token\"][key] = \" \".join(token)\n ibest_writer[\"token_int\"][key] = \" \".join(map(str, token_int))\n ibest_writer[\"score\"][key] = str(hyp.score)\n\n if text is not None:\n ibest_writer[\"text\"][key] = text\n\n\ndef get_parser():\n parser = config_argparse.ArgumentParser(\n description=\"ASR Decoding\",\n formatter_class=argparse.ArgumentDefaultsHelpFormatter,\n )\n\n # Note(kamo): Use '_' instead of '-' as separator.\n # '-' is confusing if written in yaml.\n parser.add_argument(\n \"--log_level\",\n type=lambda x: x.upper(),\n default=\"INFO\",\n choices=(\"INFO\", \"ERROR\", \"WARNING\", \"INFO\", \"DEBUG\", \"NOTSET\"),\n help=\"The verbose level of logging\",\n )\n\n parser.add_argument(\"--output_dir\", type=str, required=True)\n parser.add_argument(\n \"--ngpu\", type=int, default=0, help=\"The number of gpus. 
0 indicates CPU mode\",\n )\n parser.add_argument(\"--seed\", type=int, default=0, help=\"Random seed\")\n parser.add_argument(\n \"--dtype\",\n default=\"float32\",\n choices=[\"float16\", \"float32\", \"float64\"],\n help=\"Data type\",\n )\n parser.add_argument(\n \"--num_workers\",\n type=int,\n default=1,\n help=\"The number of workers used for DataLoader\",\n )\n\n group = parser.add_argument_group(\"Input data related\")\n group.add_argument(\n \"--data_path_and_name_and_type\",\n type=str2triple_str,\n required=True,\n action=\"append\",\n )\n group.add_argument(\"--key_file\", type=str_or_none)\n group.add_argument(\"--allow_variable_data_keys\", type=str2bool, default=False)\n\n group = parser.add_argument_group(\"The model configuration related\")\n group.add_argument(\"--asr_train_config\", type=str, required=True)\n group.add_argument(\"--asr_model_file\", type=str, required=True)\n group.add_argument(\"--lm_train_config\", type=str)\n group.add_argument(\"--lm_file\", type=str)\n group.add_argument(\"--word_lm_train_config\", type=str)\n group.add_argument(\"--word_lm_file\", type=str)\n\n group = parser.add_argument_group(\"Beam-search related\")\n group.add_argument(\n \"--batch_size\", type=int, default=1, help=\"The batch size for inference\",\n )\n group.add_argument(\"--nbest\", type=int, default=1, help=\"Output N-best hypotheses\")\n group.add_argument(\"--beam_size\", type=int, default=20, help=\"Beam size\")\n group.add_argument(\"--penalty\", type=float, default=0.0, help=\"Insertion penalty\")\n group.add_argument(\n \"--maxlenratio\",\n type=float,\n default=0.0,\n help=\"Input length ratio to obtain max output length. \"\n \"If maxlenratio=0.0 (default), it uses a end-detect \"\n \"function \"\n \"to automatically find maximum hypothesis lengths\",\n )\n group.add_argument(\n \"--minlenratio\",\n type=float,\n default=0.0,\n help=\"Input length ratio to obtain min output length\",\n )\n group.add_argument(\n \"--ctc_weight\", type=float, default=0.5, help=\"CTC weight in joint decoding\",\n )\n group.add_argument(\"--lm_weight\", type=float, default=1.0, help=\"RNNLM weight\")\n\n group = parser.add_argument_group(\"Text converter related\")\n group.add_argument(\n \"--token_type\",\n type=str_or_none,\n default=None,\n choices=[\"char\", \"bpe\", None],\n help=\"The token type for ASR model. \"\n \"If not given, refers from the training args\",\n )\n group.add_argument(\n \"--bpemodel\",\n type=str_or_none,\n default=None,\n help=\"The model path of sentencepiece. \"\n \"If not given, refers from the training args\",\n )\n\n return parser\n\n\ndef main(cmd=None):\n print(get_commandline_args(), file=sys.stderr)\n parser = get_parser()\n args = parser.parse_args(cmd)\n kwargs = vars(args)\n kwargs.pop(\"config\", None)\n inference(**kwargs)\n\n\nif __name__ == \"__main__\":\n main()\n",
"path": "espnet2/bin/asr_inference.py"
}
] | [
{
"content": "#!/usr/bin/env python3\nimport argparse\nimport logging\nfrom pathlib import Path\nimport sys\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Tuple\nfrom typing import Union\n\nimport numpy as np\nimport torch\nfrom typeguard import check_argument_types\nfrom typeguard import check_return_type\nfrom typing import List\n\nfrom espnet.nets.batch_beam_search import BatchBeamSearch\nfrom espnet.nets.beam_search import BeamSearch\nfrom espnet.nets.beam_search import Hypothesis\nfrom espnet.nets.scorer_interface import BatchScorerInterface\nfrom espnet.nets.scorers.ctc import CTCPrefixScorer\nfrom espnet.nets.scorers.length_bonus import LengthBonus\nfrom espnet.utils.cli_utils import get_commandline_args\nfrom espnet2.fileio.datadir_writer import DatadirWriter\nfrom espnet2.tasks.asr import ASRTask\nfrom espnet2.tasks.lm import LMTask\nfrom espnet2.text.build_tokenizer import build_tokenizer\nfrom espnet2.text.token_id_converter import TokenIDConverter\nfrom espnet2.torch_utils.device_funcs import to_device\nfrom espnet2.torch_utils.set_all_random_seed import set_all_random_seed\nfrom espnet2.utils import config_argparse\nfrom espnet2.utils.types import str2bool\nfrom espnet2.utils.types import str2triple_str\nfrom espnet2.utils.types import str_or_none\n\n\nclass Speech2Text:\n \"\"\"Speech2Text class\n\n Examples:\n >>> import soundfile\n >>> speech2text = Speech2Text(\"asr_config.yml\", \"asr.pth\")\n >>> audio, rate = soundfile.read(\"speech.wav\")\n >>> speech2text(audio)\n [(text, token, token_int, hypothesis object), ...]\n\n \"\"\"\n\n def __init__(\n self,\n asr_train_config: Union[Path, str],\n asr_model_file: Union[Path, str] = None,\n lm_train_config: Union[Path, str] = None,\n lm_file: Union[Path, str] = None,\n token_type: str = None,\n bpemodel: str = None,\n device: str = \"cpu\",\n maxlenratio: float = 0.0,\n minlenratio: float = 0.0,\n batch_size: int = 1,\n dtype: str = \"float32\",\n beam_size: int = 20,\n ctc_weight: float = 0.5,\n lm_weight: float = 1.0,\n penalty: float = 0.0,\n nbest: int = 1,\n ):\n assert check_argument_types()\n\n # 1. Build ASR model\n scorers = {}\n asr_model, asr_train_args = ASRTask.build_model_from_file(\n asr_train_config, asr_model_file, device\n )\n asr_model.eval()\n\n decoder = asr_model.decoder\n ctc = CTCPrefixScorer(ctc=asr_model.ctc, eos=asr_model.eos)\n token_list = asr_model.token_list\n scorers.update(\n decoder=decoder, ctc=ctc, length_bonus=LengthBonus(len(token_list)),\n )\n\n # 2. Build Language model\n if lm_train_config is not None:\n lm, lm_train_args = LMTask.build_model_from_file(\n lm_train_config, lm_file, device\n )\n scorers[\"lm\"] = lm.lm\n\n # 3. 
Build BeamSearch object\n weights = dict(\n decoder=1.0 - ctc_weight,\n ctc=ctc_weight,\n lm=lm_weight,\n length_bonus=penalty,\n )\n beam_search = BeamSearch(\n beam_size=beam_size,\n weights=weights,\n scorers=scorers,\n sos=asr_model.sos,\n eos=asr_model.eos,\n vocab_size=len(token_list),\n token_list=token_list,\n pre_beam_score_key=None if ctc_weight == 1.0 else \"full\",\n )\n # TODO(karita): make all scorers batchfied\n if batch_size == 1:\n non_batch = [\n k\n for k, v in beam_search.full_scorers.items()\n if not isinstance(v, BatchScorerInterface)\n ]\n if len(non_batch) == 0:\n beam_search.__class__ = BatchBeamSearch\n logging.info(\"BatchBeamSearch implementation is selected.\")\n else:\n logging.warning(\n f\"As non-batch scorers {non_batch} are found, \"\n f\"fall back to non-batch implementation.\"\n )\n beam_search.to(device=device, dtype=getattr(torch, dtype)).eval()\n for scorer in scorers.values():\n if isinstance(scorer, torch.nn.Module):\n scorer.to(device=device, dtype=getattr(torch, dtype)).eval()\n logging.info(f\"Beam_search: {beam_search}\")\n logging.info(f\"Decoding device={device}, dtype={dtype}\")\n\n # 4. [Optional] Build Text converter: e.g. bpe-sym -> Text\n if token_type is None:\n token_type = asr_train_args.token_type\n if bpemodel is None:\n bpemodel = asr_train_args.bpemodel\n\n if token_type is None:\n tokenizer = None\n elif token_type == \"bpe\":\n if bpemodel is not None:\n tokenizer = build_tokenizer(token_type=token_type, bpemodel=bpemodel)\n else:\n tokenizer = None\n else:\n tokenizer = build_tokenizer(token_type=token_type)\n converter = TokenIDConverter(token_list=token_list)\n logging.info(f\"Text tokenizer: {tokenizer}\")\n\n self.asr_model = asr_model\n self.asr_train_args = asr_train_args\n self.converter = converter\n self.tokenizer = tokenizer\n self.beam_search = beam_search\n self.maxlenratio = maxlenratio\n self.minlenratio = minlenratio\n self.device = device\n self.dtype = dtype\n self.nbest = nbest\n\n @torch.no_grad()\n def __call__(\n self, speech: Union[torch.Tensor, np.ndarray]\n ) -> List[Tuple[Optional[str], List[str], List[int], Hypothesis]]:\n \"\"\"Inference\n\n Args:\n data: Input speech data\n Returns:\n text, token, token_int, hyp\n\n \"\"\"\n assert check_argument_types()\n\n # Input as audio signal\n if isinstance(speech, np.ndarray):\n speech = torch.tensor(speech)\n\n # data: (Nsamples,) -> (1, Nsamples)\n speech = speech.unsqueeze(0).to(getattr(torch, self.dtype))\n # lenghts: (1,)\n lengths = speech.new_full([1], dtype=torch.long, fill_value=speech.size(1))\n batch = {\"speech\": speech, \"speech_lengths\": lengths}\n\n # a. To device\n batch = to_device(batch, device=self.device)\n\n # b. Forward Encoder\n enc, _ = self.asr_model.encode(**batch)\n assert len(enc) == 1, len(enc)\n\n # c. 
Passed the encoder result and the beam search\n nbest_hyps = self.beam_search(\n x=enc[0], maxlenratio=self.maxlenratio, minlenratio=self.minlenratio\n )\n nbest_hyps = nbest_hyps[: self.nbest]\n\n results = []\n for hyp in nbest_hyps:\n assert isinstance(hyp, Hypothesis), type(hyp)\n\n # remove sos/eos and get results\n token_int = hyp.yseq[1:-1].tolist()\n\n # remove blank symbol id, which is assumed to be 0\n token_int = list(filter(lambda x: x != 0, token_int))\n\n # Change integer-ids to tokens\n token = self.converter.ids2tokens(token_int)\n\n if self.tokenizer is not None:\n text = self.tokenizer.tokens2text(token)\n else:\n text = None\n results.append((text, token, token_int, hyp))\n\n assert check_return_type(results)\n return results\n\n\ndef inference(\n output_dir: str,\n maxlenratio: float,\n minlenratio: float,\n batch_size: int,\n dtype: str,\n beam_size: int,\n ngpu: int,\n seed: int,\n ctc_weight: float,\n lm_weight: float,\n penalty: float,\n nbest: int,\n num_workers: int,\n log_level: Union[int, str],\n data_path_and_name_and_type: Sequence[Tuple[str, str, str]],\n key_file: Optional[str],\n asr_train_config: str,\n asr_model_file: str,\n lm_train_config: Optional[str],\n lm_file: Optional[str],\n word_lm_train_config: Optional[str],\n word_lm_file: Optional[str],\n token_type: Optional[str],\n bpemodel: Optional[str],\n allow_variable_data_keys: bool,\n):\n assert check_argument_types()\n if batch_size > 1:\n raise NotImplementedError(\"batch decoding is not implemented\")\n if word_lm_train_config is not None:\n raise NotImplementedError(\"Word LM is not implemented\")\n if ngpu > 1:\n raise NotImplementedError(\"only single GPU decoding is supported\")\n\n logging.basicConfig(\n level=log_level,\n format=\"%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s\",\n )\n\n if ngpu >= 1:\n device = \"cuda\"\n else:\n device = \"cpu\"\n\n # 1. Set random-seed\n set_all_random_seed(seed)\n\n # 2. Build speech2text\n speech2text = Speech2Text(\n asr_train_config=asr_train_config,\n asr_model_file=asr_model_file,\n lm_train_config=lm_train_config,\n lm_file=lm_file,\n token_type=token_type,\n bpemodel=bpemodel,\n device=device,\n maxlenratio=maxlenratio,\n minlenratio=minlenratio,\n dtype=dtype,\n beam_size=beam_size,\n ctc_weight=ctc_weight,\n lm_weight=lm_weight,\n penalty=penalty,\n nbest=nbest,\n )\n\n # 3. 
Build data-iterator\n loader = ASRTask.build_streaming_iterator(\n data_path_and_name_and_type,\n dtype=dtype,\n batch_size=batch_size,\n key_file=key_file,\n num_workers=num_workers,\n preprocess_fn=ASRTask.build_preprocess_fn(speech2text.asr_train_args, False),\n collate_fn=ASRTask.build_collate_fn(speech2text.asr_train_args),\n allow_variable_data_keys=allow_variable_data_keys,\n inference=True,\n )\n\n # 7 .Start for-loop\n # FIXME(kamo): The output format should be discussed about\n with DatadirWriter(output_dir) as writer:\n for keys, batch in loader:\n assert isinstance(batch, dict), type(batch)\n assert all(isinstance(s, str) for s in keys), keys\n _bs = len(next(iter(batch.values())))\n assert len(keys) == _bs, f\"{len(keys)} != {_bs}\"\n batch = {k: v[0] for k, v in batch.items() if not k.endswith(\"_lengths\")}\n\n # N-best list of (text, token, token_int, hyp_object)\n results = speech2text(**batch)\n\n # Only supporting batch_size==1\n key = keys[0]\n for n, (text, token, token_int, hyp) in zip(range(1, nbest + 1), results):\n # Create a directory: outdir/{n}best_recog\n ibest_writer = writer[f\"{n}best_recog\"]\n\n # Write the result to each file\n ibest_writer[\"token\"][key] = \" \".join(token)\n ibest_writer[\"token_int\"][key] = \" \".join(map(str, token_int))\n ibest_writer[\"score\"][key] = str(hyp.score)\n\n if text is not None:\n ibest_writer[\"text\"][key] = text\n\n\ndef get_parser():\n parser = config_argparse.ArgumentParser(\n description=\"ASR Decoding\",\n formatter_class=argparse.ArgumentDefaultsHelpFormatter,\n )\n\n # Note(kamo): Use '_' instead of '-' as separator.\n # '-' is confusing if written in yaml.\n parser.add_argument(\n \"--log_level\",\n type=lambda x: x.upper(),\n default=\"INFO\",\n choices=(\"INFO\", \"ERROR\", \"WARNING\", \"INFO\", \"DEBUG\", \"NOTSET\"),\n help=\"The verbose level of logging\",\n )\n\n parser.add_argument(\"--output_dir\", type=str, required=True)\n parser.add_argument(\n \"--ngpu\", type=int, default=0, help=\"The number of gpus. 
0 indicates CPU mode\",\n )\n parser.add_argument(\"--seed\", type=int, default=0, help=\"Random seed\")\n parser.add_argument(\n \"--dtype\",\n default=\"float32\",\n choices=[\"float16\", \"float32\", \"float64\"],\n help=\"Data type\",\n )\n parser.add_argument(\n \"--num_workers\",\n type=int,\n default=1,\n help=\"The number of workers used for DataLoader\",\n )\n\n group = parser.add_argument_group(\"Input data related\")\n group.add_argument(\n \"--data_path_and_name_and_type\",\n type=str2triple_str,\n required=True,\n action=\"append\",\n )\n group.add_argument(\"--key_file\", type=str_or_none)\n group.add_argument(\"--allow_variable_data_keys\", type=str2bool, default=False)\n\n group = parser.add_argument_group(\"The model configuration related\")\n group.add_argument(\"--asr_train_config\", type=str, required=True)\n group.add_argument(\"--asr_model_file\", type=str, required=True)\n group.add_argument(\"--lm_train_config\", type=str)\n group.add_argument(\"--lm_file\", type=str)\n group.add_argument(\"--word_lm_train_config\", type=str)\n group.add_argument(\"--word_lm_file\", type=str)\n\n group = parser.add_argument_group(\"Beam-search related\")\n group.add_argument(\n \"--batch_size\", type=int, default=1, help=\"The batch size for inference\",\n )\n group.add_argument(\"--nbest\", type=int, default=1, help=\"Output N-best hypotheses\")\n group.add_argument(\"--beam_size\", type=int, default=20, help=\"Beam size\")\n group.add_argument(\"--penalty\", type=float, default=0.0, help=\"Insertion penalty\")\n group.add_argument(\n \"--maxlenratio\",\n type=float,\n default=0.0,\n help=\"Input length ratio to obtain max output length. \"\n \"If maxlenratio=0.0 (default), it uses a end-detect \"\n \"function \"\n \"to automatically find maximum hypothesis lengths\",\n )\n group.add_argument(\n \"--minlenratio\",\n type=float,\n default=0.0,\n help=\"Input length ratio to obtain min output length\",\n )\n group.add_argument(\n \"--ctc_weight\", type=float, default=0.5, help=\"CTC weight in joint decoding\",\n )\n group.add_argument(\"--lm_weight\", type=float, default=1.0, help=\"RNNLM weight\")\n\n group = parser.add_argument_group(\"Text converter related\")\n group.add_argument(\n \"--token_type\",\n type=str_or_none,\n default=None,\n choices=[\"char\", \"bpe\", None],\n help=\"The token type for ASR model. \"\n \"If not given, refers from the training args\",\n )\n group.add_argument(\n \"--bpemodel\",\n type=str_or_none,\n default=None,\n help=\"The model path of sentencepiece. \"\n \"If not given, refers from the training args\",\n )\n\n return parser\n\n\ndef main(cmd=None):\n print(get_commandline_args(), file=sys.stderr)\n parser = get_parser()\n args = parser.parse_args(cmd)\n kwargs = vars(args)\n kwargs.pop(\"config\", None)\n inference(**kwargs)\n\n\nif __name__ == \"__main__\":\n main()\n",
"path": "espnet2/bin/asr_inference.py"
}
] | diff --git a/espnet2/bin/asr_inference.py b/espnet2/bin/asr_inference.py
index 61e657b6eb2..6c99fe0f265 100755
--- a/espnet2/bin/asr_inference.py
+++ b/espnet2/bin/asr_inference.py
@@ -147,7 +147,6 @@ def __init__(
self.asr_model = asr_model
self.asr_train_args = asr_train_args
- self.lm_train_args = lm_train_args
self.converter = converter
self.tokenizer = tokenizer
self.beam_search = beam_search
|
dbt-labs__dbt-core-7080 | [CT-2225] [Bug] Suddenly getting ModuleNotFoundError: No module named 'pytz'
### Is this a new bug in dbt-core?
- [X] I believe this is a new bug in dbt-core
- [X] I have searched the existing issues, and I could not find an existing issue for this bug
### Current Behavior
I am installing dbt-bigquery with meltano (which installs it in an isolated *venv*).
Today, when invoking `dbt deps` via `meltano invoke dbt-bigquery:deps`, I get a stack trace ending with
`ModuleNotFoundError: No module named 'pytz'`
### Expected Behavior
`pytz` should be found. I have noted that it is not included in the requirements. So while it's strange that it suddenly started failing, maybe it was more of an accident that it ever worked in the first place?
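For illustration, a minimal sketch of declaring the dependency explicitly in dbt-core's `core/setup.py` (this mirrors the fix shown in the diff further down; the other requirements are abbreviated):

```python
# Minimal sketch of a setup.py that declares pytz explicitly, since
# dbt.tracking does `import pytz` at import time (entries abbreviated).
from setuptools import setup, find_namespace_packages

setup(
    name="dbt-core",
    packages=find_namespace_packages(include=["dbt", "dbt.*"]),
    install_requires=[
        # ... other runtime requirements elided ...
        "werkzeug>=1,<3",
        "pytz>=2015.7",  # previously missing; only present transitively before
        "requests<3.0.0",
    ],
)
```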
### Steps To Reproduce
With versions specified as `dbt-core~=1.3.0` and `dbt-bigquery~=1.3.0`, invoking `dbt deps` should not throw a `ModuleNotFoundError`.
### Relevant log output
```shell
Traceback (most recent call last):
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/bin/dbt", line 5, in <module>
from dbt.main import main
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/main.py", line 24, in <module>
import dbt.task.build as build_task
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/build.py", line 1, in <module>
from .run import RunTask, ModelRunner as run_model_runner
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/run.py", line 8, in <module>
from .compile import CompileRunner, CompileTask
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/compile.py", line 4, in <module>
from .runnable import GraphRunnableTask
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/runnable.py", line 11, in <module>
from .printer import (
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/printer.py", line 22, in <module>
from dbt.tracking import InvocationProcessor
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/tracking.py", line 25, in <module>
import pytz
ModuleNotFoundError: No module named 'pytz'
```
### Environment
```markdown
- OS: Linux (fresh docker container inside virtual environment)
- Python: 3.9
- dbt: 1.3.1 (~=1.3.0)
```
### Which database adapter are you using with dbt?
other (mention it in "Additional Context")
### Additional Context
_No response_
| [
{
"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 7, 2):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.7.2 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.2.4\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\n \"dbt = dbt.main:main\",\n ],\n },\n install_requires=[\n \"Jinja2==2.11.3\",\n \"MarkupSafe>=0.23,<2.1\",\n \"agate>=1.6,<1.6.4\",\n \"click>=7.0,<9\",\n \"colorama>=0.3.9,<0.4.6\",\n \"hologram>=0.0.14,<=0.0.15\",\n \"isodate>=0.6,<0.7\",\n \"logbook>=1.5,<1.6\",\n \"mashumaro==2.9\",\n \"minimal-snowplow-tracker==0.0.2\",\n \"networkx>=2.3,<2.8.1;python_version<'3.8'\",\n \"networkx>=2.3,<3;python_version>='3.8'\",\n \"packaging>=20.9,<22.0\",\n \"sqlparse>=0.2.3,<0.5\",\n \"dbt-extractor~=0.4.1\",\n \"typing-extensions>=3.7.4\",\n \"werkzeug>=1,<3\",\n # the following are all to match snowflake-connector-python\n \"requests<3.0.0\",\n \"idna>=2.5,<4\",\n \"cffi>=1.9,<2.0.0\",\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n ],\n python_requires=\">=3.7.2\",\n)\n",
"path": "core/setup.py"
}
] | [
{
"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 7, 2):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.7.2 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.2.4\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\n \"dbt = dbt.main:main\",\n ],\n },\n install_requires=[\n \"Jinja2==2.11.3\",\n \"MarkupSafe>=0.23,<2.1\",\n \"agate>=1.6,<1.6.4\",\n \"click>=7.0,<9\",\n \"colorama>=0.3.9,<0.4.6\",\n \"hologram>=0.0.14,<=0.0.15\",\n \"isodate>=0.6,<0.7\",\n \"logbook>=1.5,<1.6\",\n \"mashumaro==2.9\",\n \"minimal-snowplow-tracker==0.0.2\",\n \"networkx>=2.3,<2.8.1;python_version<'3.8'\",\n \"networkx>=2.3,<3;python_version>='3.8'\",\n \"packaging>=20.9,<22.0\",\n \"sqlparse>=0.2.3,<0.5\",\n \"dbt-extractor~=0.4.1\",\n \"typing-extensions>=3.7.4\",\n \"werkzeug>=1,<3\",\n \"pytz>=2015.7\",\n # the following are all to match snowflake-connector-python\n \"requests<3.0.0\",\n \"idna>=2.5,<4\",\n \"cffi>=1.9,<2.0.0\",\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n ],\n python_requires=\">=3.7.2\",\n)\n",
"path": "core/setup.py"
}
] | diff --git a/.changes/unreleased/Fixes-20230228-130318.yaml b/.changes/unreleased/Fixes-20230228-130318.yaml
new file mode 100644
index 00000000000..abcbee150a2
--- /dev/null
+++ b/.changes/unreleased/Fixes-20230228-130318.yaml
@@ -0,0 +1,6 @@
+kind: Fixes
+body: add pytz dependency
+time: 2023-02-28T13:03:18.353468+01:00
+custom:
+ Author: sdebruyn
+ Issue: "7077"
diff --git a/core/setup.py b/core/setup.py
index b2f58533fba..56454e9049c 100644
--- a/core/setup.py
+++ b/core/setup.py
@@ -65,6 +65,7 @@
"dbt-extractor~=0.4.1",
"typing-extensions>=3.7.4",
"werkzeug>=1,<3",
+ "pytz>=2015.7",
# the following are all to match snowflake-connector-python
"requests<3.0.0",
"idna>=2.5,<4",
diff --git a/dev-requirements.txt b/dev-requirements.txt
index 2701e4cab77..e13aa4628ea 100644
--- a/dev-requirements.txt
+++ b/dev-requirements.txt
@@ -14,7 +14,6 @@ pytest-dotenv
pytest-logbook
pytest-mock
pytest-xdist
-pytz
tox>=3.13
twine
types-colorama
|
open-mmlab__mmdetection-3553 | VOCDataset object has no attribute dataset
Thanks for your error report; we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
**Describe the bug**
I tried to train my model on the Pascal VOC 2012 dataset and set the data config as follows:
```python3
batch_size = 8
data = dict(
samples_per_gpu=batch_size,
workers_per_gpu=4,
train=dict(
type=dataset_type,
ann_file=data_root + 'VOC2012/ImageSets/Main/train.txt',
img_prefix=data_root + 'VOC2012/',
pipeline=train_pipeline,),
val=dict(
type=dataset_type,
ann_file=data_root + 'VOC2012/ImageSets/Main/val.txt',
img_prefix=data_root + 'VOC2012/',
pipeline=test_pipeline,),
)
evaluation=dict(interval=1, metric='mAP')
```
But during evaluation, it raised the following error:
```shell
File "train.py", line 166, in <module>
main()
File "train.py", line 162, in main
meta=meta)
File "/home/lfc199471/mmdetection/mmdet/apis/train.py", line 128, in train_detector
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/home/lfc199471/anaconda3/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 122, in run
epoch_runner(data_loaders[i], **kwargs)
File "/home/lfc199471/anaconda3/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 46, in train
self.call_hook('after_train_epoch')
File "/home/lfc199471/anaconda3/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 282, in call_hook
getattr(hook, fn_name)(self)
File "/home/lfc199471/mmdetection/mmdet/core/evaluation/eval_hooks.py", line 28, in after_train_epoch
self.evaluate(runner, results)
File "/home/lfc199471/mmdetection/mmdet/core/evaluation/eval_hooks.py", line 32, in evaluate
results, logger=runner.logger, **self.eval_kwargs)
File "/home/lfc199471/mmdetection/mmdet/datasets/voc.py", line 43, in evaluate
ds_name = self.dataset.CLASSES
AttributeError: 'VOCDataset' object has no attribute 'dataset'
```
I checked `voc.py` in `mmdet` and found that line 43 is
```python3
ds_name = self.dataset.CLASSES
```
but `VOCDataset` and its superclasses `XMLDataset` and `CustomDataset` don't have this attribute. Is this a bug, or did I make a mistake in the config?
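For reference, a short sketch of the corrected branch inside `VOCDataset.evaluate()` (an excerpt only; it matches the one-line fix in the diff at the end of this record):

```python
# mmdet/datasets/voc.py, excerpt of VOCDataset.evaluate(): the non-VOC07
# branch should read the class names from the dataset class itself, not from
# a non-existent `self.dataset` attribute.
if self.year == 2007:
    ds_name = 'voc07'
else:
    ds_name = self.CLASSES  # was: self.dataset.CLASSES -> AttributeError
```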
**Reproduction**
1. What command or script did you run?
```
python tools/train.py --gpus 1 configs/<my_config_file>
```
2. Did you make any modifications on the code or config? Did you understand what you have modified?
Yes, please see above.
3. What dataset did you use?
Pascal VOC 2012 detection
**Environment**
1. Please run `python mmdet/utils/collect_env.py` to collect necessary environment information and paste it here.
```shell
sys.platform: linux
Python: 3.7.6 (default, Jan 8 2020, 19:59:22) [GCC 7.3.0]
CUDA available: True
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 10.2, V10.2.89
GPU 0: Tesla P100-PCIE-16GB
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.5.1
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.2
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
- CuDNN 7.6.5
- Magma 2.5.2
- Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_INTERNAL_THREADPOOL_IMPL -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
TorchVision: 0.6.0a0+35d732a
OpenCV: 4.2.0
MMCV: 0.6.1
MMDetection: 2.1.0+b44e78b
MMDetection Compiler: GCC 7.5
MMDetection CUDA Compiler: 10.2
```
2. You may add additional information that may be helpful for locating the problem, such as
- How you installed PyTorch [e.g., pip, conda, source] : conda
If you need any log file or some source code from me, just let me know.
| [
{
"content": "from mmdet.core import eval_map, eval_recalls\nfrom .builder import DATASETS\nfrom .xml_style import XMLDataset\n\n\[email protected]_module()\nclass VOCDataset(XMLDataset):\n\n CLASSES = ('aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car',\n 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse',\n 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train',\n 'tvmonitor')\n\n def __init__(self, **kwargs):\n super(VOCDataset, self).__init__(**kwargs)\n if 'VOC2007' in self.img_prefix:\n self.year = 2007\n elif 'VOC2012' in self.img_prefix:\n self.year = 2012\n else:\n raise ValueError('Cannot infer dataset year from img_prefix')\n\n def evaluate(self,\n results,\n metric='mAP',\n logger=None,\n proposal_nums=(100, 300, 1000),\n iou_thr=0.5,\n scale_ranges=None):\n \"\"\"Evaluate in VOC protocol.\n\n Args:\n results (list[list | tuple]): Testing results of the dataset.\n metric (str | list[str]): Metrics to be evaluated. Options are\n 'mAP', 'recall'.\n logger (logging.Logger | str, optional): Logger used for printing\n related information during evaluation. Default: None.\n proposal_nums (Sequence[int]): Proposal number used for evaluating\n recalls, such as recall@100, recall@1000.\n Default: (100, 300, 1000).\n iou_thr (float | list[float]): IoU threshold. It must be a float\n when evaluating mAP, and can be a list when evaluating recall.\n Default: 0.5.\n scale_ranges (list[tuple], optional): Scale ranges for evaluating\n mAP. If not specified, all bounding boxes would be included in\n evaluation. Default: None.\n\n Returns:\n dict[str, float]: AP/recall metrics.\n \"\"\"\n\n if not isinstance(metric, str):\n assert len(metric) == 1\n metric = metric[0]\n allowed_metrics = ['mAP', 'recall']\n if metric not in allowed_metrics:\n raise KeyError(f'metric {metric} is not supported')\n annotations = [self.get_ann_info(i) for i in range(len(self))]\n eval_results = {}\n if metric == 'mAP':\n assert isinstance(iou_thr, float)\n if self.year == 2007:\n ds_name = 'voc07'\n else:\n ds_name = self.dataset.CLASSES\n mean_ap, _ = eval_map(\n results,\n annotations,\n scale_ranges=None,\n iou_thr=iou_thr,\n dataset=ds_name,\n logger=logger)\n eval_results['mAP'] = mean_ap\n elif metric == 'recall':\n gt_bboxes = [ann['bboxes'] for ann in annotations]\n if isinstance(iou_thr, float):\n iou_thr = [iou_thr]\n recalls = eval_recalls(\n gt_bboxes, results, proposal_nums, iou_thr, logger=logger)\n for i, num in enumerate(proposal_nums):\n for j, iou in enumerate(iou_thr):\n eval_results[f'recall@{num}@{iou}'] = recalls[i, j]\n if recalls.shape[1] > 1:\n ar = recalls.mean(axis=1)\n for i, num in enumerate(proposal_nums):\n eval_results[f'AR@{num}'] = ar[i]\n return eval_results\n",
"path": "mmdet/datasets/voc.py"
}
] | [
{
"content": "from mmdet.core import eval_map, eval_recalls\nfrom .builder import DATASETS\nfrom .xml_style import XMLDataset\n\n\[email protected]_module()\nclass VOCDataset(XMLDataset):\n\n CLASSES = ('aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car',\n 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse',\n 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train',\n 'tvmonitor')\n\n def __init__(self, **kwargs):\n super(VOCDataset, self).__init__(**kwargs)\n if 'VOC2007' in self.img_prefix:\n self.year = 2007\n elif 'VOC2012' in self.img_prefix:\n self.year = 2012\n else:\n raise ValueError('Cannot infer dataset year from img_prefix')\n\n def evaluate(self,\n results,\n metric='mAP',\n logger=None,\n proposal_nums=(100, 300, 1000),\n iou_thr=0.5,\n scale_ranges=None):\n \"\"\"Evaluate in VOC protocol.\n\n Args:\n results (list[list | tuple]): Testing results of the dataset.\n metric (str | list[str]): Metrics to be evaluated. Options are\n 'mAP', 'recall'.\n logger (logging.Logger | str, optional): Logger used for printing\n related information during evaluation. Default: None.\n proposal_nums (Sequence[int]): Proposal number used for evaluating\n recalls, such as recall@100, recall@1000.\n Default: (100, 300, 1000).\n iou_thr (float | list[float]): IoU threshold. It must be a float\n when evaluating mAP, and can be a list when evaluating recall.\n Default: 0.5.\n scale_ranges (list[tuple], optional): Scale ranges for evaluating\n mAP. If not specified, all bounding boxes would be included in\n evaluation. Default: None.\n\n Returns:\n dict[str, float]: AP/recall metrics.\n \"\"\"\n\n if not isinstance(metric, str):\n assert len(metric) == 1\n metric = metric[0]\n allowed_metrics = ['mAP', 'recall']\n if metric not in allowed_metrics:\n raise KeyError(f'metric {metric} is not supported')\n annotations = [self.get_ann_info(i) for i in range(len(self))]\n eval_results = {}\n if metric == 'mAP':\n assert isinstance(iou_thr, float)\n if self.year == 2007:\n ds_name = 'voc07'\n else:\n ds_name = self.CLASSES\n mean_ap, _ = eval_map(\n results,\n annotations,\n scale_ranges=None,\n iou_thr=iou_thr,\n dataset=ds_name,\n logger=logger)\n eval_results['mAP'] = mean_ap\n elif metric == 'recall':\n gt_bboxes = [ann['bboxes'] for ann in annotations]\n if isinstance(iou_thr, float):\n iou_thr = [iou_thr]\n recalls = eval_recalls(\n gt_bboxes, results, proposal_nums, iou_thr, logger=logger)\n for i, num in enumerate(proposal_nums):\n for j, iou in enumerate(iou_thr):\n eval_results[f'recall@{num}@{iou}'] = recalls[i, j]\n if recalls.shape[1] > 1:\n ar = recalls.mean(axis=1)\n for i, num in enumerate(proposal_nums):\n eval_results[f'AR@{num}'] = ar[i]\n return eval_results\n",
"path": "mmdet/datasets/voc.py"
}
] | diff --git a/mmdet/datasets/voc.py b/mmdet/datasets/voc.py
index 9de96b1c774..87689b5e726 100644
--- a/mmdet/datasets/voc.py
+++ b/mmdet/datasets/voc.py
@@ -62,7 +62,7 @@ def evaluate(self,
if self.year == 2007:
ds_name = 'voc07'
else:
- ds_name = self.dataset.CLASSES
+ ds_name = self.CLASSES
mean_ap, _ = eval_map(
results,
annotations,
|
learningequality__kolibri-4343 | Enable ePUB plugin to run by default
### Observed behavior
The ePUB plugin is not enabled by default, which prevents importing & viewing ePUB files until the command `kolibri plugin kolibri.plugins.document_epub_render enable` is run.
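For illustration, a minimal sketch (abbreviated) of adding the ePUB renderer to the fallback `DEFAULT_PLUGINS` list in `kolibri/utils/conf.py`, which is what the diff below does:

```python
# kolibri/utils/conf.py (excerpt): the fallback DEFAULT_PLUGINS list, used when
# no build-time default_plugins module is present. Listing the ePUB renderer
# here enables it on first run without a manual `kolibri plugin ... enable`.
DEFAULT_PLUGINS = [
    "kolibri.plugins.facility_management",
    # ... other default plugins elided ...
    "kolibri_exercise_perseus_plugin",
    "kolibri.plugins.style_guide",
    "kolibri.plugins.document_epub_render",  # newly added default
]
```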
### User-facing consequences
Inability to view and import ePUB files.
### Context
Dev environment; tried on the `develop` and `0.11.a7` branches.
| [
{
"content": "\"\"\"\nKolibri configuration data\n==========================\n\n.. warning::\n Do not load any django.conf.settings stuff here. This configuration data\n precedes loading of settings, it is not part of the settings stack.\n\nTODO: We need to figure out our conf API. Do we store in ini/json/yaml?\n\n * How do we retrieve config data?\n * When should configuration files be loaded and written?\n\nThis module should be easier to document, for instance by having VARIABLES\ninstead of a dict.\n\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport json\nimport logging\nimport os\n\nfrom .compat import module_exists\nfrom .options import read_options_file\n\nlogger = logging.getLogger(__name__)\n\n# use default OS encoding\nwith open(os.path.join(os.path.dirname(__file__), 'KOLIBRI_CORE_JS_NAME')) as f:\n KOLIBRI_CORE_JS_NAME = f.read().strip()\n\n#: Absolute path of the main user data directory.\n#: Will be created automatically if it doesn't exist.\nKOLIBRI_HOME = os.path.abspath(os.path.expanduser(os.environ[\"KOLIBRI_HOME\"]))\n\n# Creating KOLIBRI_HOME atm. has to happen here as for instance utils.cli is not\n# called through py.test. This file is the first basic entry point of\n# Kolibri, although utils.cli may or may not precede it.\nif not os.path.exists(KOLIBRI_HOME):\n parent = os.path.dirname(KOLIBRI_HOME)\n if not os.path.exists(parent):\n raise RuntimeError(\"The parent of your KOLIBRI_HOME does not exist: {}\".format(parent))\n os.mkdir(KOLIBRI_HOME)\n\n#: Set defaults before updating the dict\nconfig = {}\n\ntry:\n # The default list for this is populated from build_tools/default_plugins.txt\n # in the root of the Kolibri repository. The default list is identical to the list below,\n # except that the style_guide plugin is not enabled in production builds.\n # Caveat: this list may have been changed at build time to specify a different list of plugins.\n from .build_config.default_plugins import plugins\n DEFAULT_PLUGINS = plugins\nexcept ImportError:\n DEFAULT_PLUGINS = [\n \"kolibri.plugins.facility_management\",\n \"kolibri.plugins.device_management\",\n \"kolibri.plugins.learn\",\n \"kolibri.plugins.document_pdf_render\",\n \"kolibri.plugins.html5_app_renderer\",\n \"kolibri.plugins.media_player\",\n \"kolibri.plugins.setup_wizard\",\n \"kolibri.plugins.coach\",\n \"kolibri.plugins.user\",\n \"kolibri_exercise_perseus_plugin\",\n \"kolibri.plugins.style_guide\",\n ]\n\n#: Everything in this list is added to django.conf.settings.INSTALLED_APPS\nconfig['INSTALLED_APPS'] = DEFAULT_PLUGINS\n\n#: Well-known plugin names that are automatically searched for and enabled on\n#: first-run.\nconfig['AUTO_SEARCH_PLUGINS'] = []\n\n#: If a config file does not exist, we assume it's the first run\nconfig['FIRST_RUN'] = True\n\nconf_file = os.path.join(KOLIBRI_HOME, \"kolibri_settings.json\")\n\n\ndef update(new_values):\n \"\"\"\n Updates current configuration with ``new_values``. 
Does not save to file.\n \"\"\"\n config.update(new_values)\n\n\ndef save(first_run=False):\n \"\"\"Saves the current state of the configuration\"\"\"\n config['FIRST_RUN'] = first_run\n # use default OS encoding\n with open(conf_file, 'w') as kolibri_conf_file:\n json.dump(config, kolibri_conf_file, indent=2, sort_keys=True)\n\n\nif not os.path.isfile(conf_file):\n logger.info(\"Initialize kolibri_settings.json..\")\n save(True)\nelse:\n # Open up the config file and overwrite defaults\n # use default OS encoding\n with open(conf_file, 'r') as kolibri_conf_file:\n config.update(json.load(kolibri_conf_file))\n\n\ndef autoremove_unavailable_plugins():\n \"\"\"\n Sanitize INSTALLED_APPS - something that should be done separately for all\n build in plugins, but we should not auto-remove plugins that are actually\n configured by the user or some other kind of hard dependency that should\n make execution stop if not loadable.\n \"\"\"\n global config\n changed = False\n # Iterate over a copy of the list so that it is not modified during the loop\n for module_path in config['INSTALLED_APPS'][:]:\n if not module_exists(module_path):\n config['INSTALLED_APPS'].remove(module_path)\n logger.error(\n (\n \"Plugin {mod} not found and disabled. To re-enable it, run:\\n\"\n \" $ kolibri plugin {mod} enable\"\n ).format(mod=module_path)\n )\n changed = True\n if changed:\n save()\n\n\ndef enable_default_plugins():\n \"\"\"\n Enable new plugins that have been added between versions\n This will have the undesired side effect of reactivating\n default plugins that have been explicitly disabled by a user.\n However, until we add disabled plugins to a blacklist, this is\n unavoidable.\n \"\"\"\n global config\n changed = False\n for module_path in DEFAULT_PLUGINS:\n if module_path not in config['INSTALLED_APPS']:\n config['INSTALLED_APPS'].append(module_path)\n logger.warning(\n (\n \"Default plugin {mod} not found in configuration. To re-disable it, run:\\n\"\n \" $ kolibri plugin {mod} disable\"\n ).format(mod=module_path)\n )\n changed = True\n\n if changed:\n save()\n\n\n# read the config file options in here so they can be accessed from a standard location\nOPTIONS = read_options_file(KOLIBRI_HOME)\n",
"path": "kolibri/utils/conf.py"
}
] | [
{
"content": "\"\"\"\nKolibri configuration data\n==========================\n\n.. warning::\n Do not load any django.conf.settings stuff here. This configuration data\n precedes loading of settings, it is not part of the settings stack.\n\nTODO: We need to figure out our conf API. Do we store in ini/json/yaml?\n\n * How do we retrieve config data?\n * When should configuration files be loaded and written?\n\nThis module should be easier to document, for instance by having VARIABLES\ninstead of a dict.\n\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport json\nimport logging\nimport os\n\nfrom .compat import module_exists\nfrom .options import read_options_file\n\nlogger = logging.getLogger(__name__)\n\n# use default OS encoding\nwith open(os.path.join(os.path.dirname(__file__), 'KOLIBRI_CORE_JS_NAME')) as f:\n KOLIBRI_CORE_JS_NAME = f.read().strip()\n\n#: Absolute path of the main user data directory.\n#: Will be created automatically if it doesn't exist.\nKOLIBRI_HOME = os.path.abspath(os.path.expanduser(os.environ[\"KOLIBRI_HOME\"]))\n\n# Creating KOLIBRI_HOME atm. has to happen here as for instance utils.cli is not\n# called through py.test. This file is the first basic entry point of\n# Kolibri, although utils.cli may or may not precede it.\nif not os.path.exists(KOLIBRI_HOME):\n parent = os.path.dirname(KOLIBRI_HOME)\n if not os.path.exists(parent):\n raise RuntimeError(\"The parent of your KOLIBRI_HOME does not exist: {}\".format(parent))\n os.mkdir(KOLIBRI_HOME)\n\n#: Set defaults before updating the dict\nconfig = {}\n\ntry:\n # The default list for this is populated from build_tools/default_plugins.txt\n # in the root of the Kolibri repository. The default list is identical to the list below,\n # except that the style_guide plugin is not enabled in production builds.\n # Caveat: this list may have been changed at build time to specify a different list of plugins.\n from .build_config.default_plugins import plugins\n DEFAULT_PLUGINS = plugins\nexcept ImportError:\n DEFAULT_PLUGINS = [\n \"kolibri.plugins.facility_management\",\n \"kolibri.plugins.device_management\",\n \"kolibri.plugins.learn\",\n \"kolibri.plugins.document_pdf_render\",\n \"kolibri.plugins.html5_app_renderer\",\n \"kolibri.plugins.media_player\",\n \"kolibri.plugins.setup_wizard\",\n \"kolibri.plugins.coach\",\n \"kolibri.plugins.user\",\n \"kolibri_exercise_perseus_plugin\",\n \"kolibri.plugins.style_guide\",\n \"kolibri.plugins.document_epub_render\",\n ]\n\n#: Everything in this list is added to django.conf.settings.INSTALLED_APPS\nconfig['INSTALLED_APPS'] = DEFAULT_PLUGINS\n\n#: Well-known plugin names that are automatically searched for and enabled on\n#: first-run.\nconfig['AUTO_SEARCH_PLUGINS'] = []\n\n#: If a config file does not exist, we assume it's the first run\nconfig['FIRST_RUN'] = True\n\nconf_file = os.path.join(KOLIBRI_HOME, \"kolibri_settings.json\")\n\n\ndef update(new_values):\n \"\"\"\n Updates current configuration with ``new_values``. 
Does not save to file.\n \"\"\"\n config.update(new_values)\n\n\ndef save(first_run=False):\n \"\"\"Saves the current state of the configuration\"\"\"\n config['FIRST_RUN'] = first_run\n # use default OS encoding\n with open(conf_file, 'w') as kolibri_conf_file:\n json.dump(config, kolibri_conf_file, indent=2, sort_keys=True)\n\n\nif not os.path.isfile(conf_file):\n logger.info(\"Initialize kolibri_settings.json..\")\n save(True)\nelse:\n # Open up the config file and overwrite defaults\n # use default OS encoding\n with open(conf_file, 'r') as kolibri_conf_file:\n config.update(json.load(kolibri_conf_file))\n\n\ndef autoremove_unavailable_plugins():\n \"\"\"\n Sanitize INSTALLED_APPS - something that should be done separately for all\n build in plugins, but we should not auto-remove plugins that are actually\n configured by the user or some other kind of hard dependency that should\n make execution stop if not loadable.\n \"\"\"\n global config\n changed = False\n # Iterate over a copy of the list so that it is not modified during the loop\n for module_path in config['INSTALLED_APPS'][:]:\n if not module_exists(module_path):\n config['INSTALLED_APPS'].remove(module_path)\n logger.error(\n (\n \"Plugin {mod} not found and disabled. To re-enable it, run:\\n\"\n \" $ kolibri plugin {mod} enable\"\n ).format(mod=module_path)\n )\n changed = True\n if changed:\n save()\n\n\ndef enable_default_plugins():\n \"\"\"\n Enable new plugins that have been added between versions\n This will have the undesired side effect of reactivating\n default plugins that have been explicitly disabled by a user.\n However, until we add disabled plugins to a blacklist, this is\n unavoidable.\n \"\"\"\n global config\n changed = False\n for module_path in DEFAULT_PLUGINS:\n if module_path not in config['INSTALLED_APPS']:\n config['INSTALLED_APPS'].append(module_path)\n logger.warning(\n (\n \"Default plugin {mod} not found in configuration. To re-disable it, run:\\n\"\n \" $ kolibri plugin {mod} disable\"\n ).format(mod=module_path)\n )\n changed = True\n\n if changed:\n save()\n\n\n# read the config file options in here so they can be accessed from a standard location\nOPTIONS = read_options_file(KOLIBRI_HOME)\n",
"path": "kolibri/utils/conf.py"
}
] | diff --git a/kolibri/utils/conf.py b/kolibri/utils/conf.py
index dd308d001b1..0576a182b59 100644
--- a/kolibri/utils/conf.py
+++ b/kolibri/utils/conf.py
@@ -68,6 +68,7 @@
"kolibri.plugins.user",
"kolibri_exercise_perseus_plugin",
"kolibri.plugins.style_guide",
+ "kolibri.plugins.document_epub_render",
]
#: Everything in this list is added to django.conf.settings.INSTALLED_APPS
|
pwr-Solaar__Solaar-1003 | Please create an AppData file for Solaar
Please consider writing and installing an AppData file with the application description and some screenshots, else Solaar looks really bad in the GNOME and KDE Software Centers. We'd love to showcase more applications, but without the extra data file we can't. See http://people.freedesktop.org/~hughsient/appdata/ for details; thanks!
Richard
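For illustration, a sketch of the packaging side of such a change, installing an AppStream metainfo file from `setup.py` (paths taken from the diff below; other entries elided):

```python
# Sketch of setup.py's _data_files() generator with the metainfo entry added,
# so GNOME/KDE software centers can show a description and screenshots.
def _data_files():
    yield 'share/applications', ['share/applications/solaar.desktop']
    yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']
    # New: install the AppStream metainfo file (destination string reproduced
    # as it appears in the linked change).
    yield 'share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml', ['share/solaar/metainfo.xml']

# used in the setup() call as: data_files=list(_data_files())
```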
| [
{
"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n# from solaar import NAME, __version__\n__version__ = '1.0.4'\nNAME = 'Solaar'\n\n\ndef _data_files():\n from os.path import dirname as _dirname\n\n yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=__version__,\n description='Linux devices manager for the Logitech Unifying Receiver.',\n long_description='''\nSolaar is a Linux device manager for Logitech's Unifying Receiver peripherals.\nIt is able to pair/unpair devices with the receiver, for many devices show\nbattery status, and show and modify some of the modifiable features of devices.\n'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 'Operating System :: POSIX :: Linux',\n 'Topic :: Utilities',\n ],\n platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.6',\n install_requires=[\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 5.1)',\n 'python-xlib (>= 0.27)',\n 'pynput (>= 1.7.0)',\n 'psutil (>= 5.7.3)',\n ],\n package_dir={'': 'lib'},\n packages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n scripts=_glob('bin/*'),\n)\n",
"path": "setup.py"
}
] | [
{
"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n# from solaar import NAME, __version__\n__version__ = '1.0.4'\nNAME = 'Solaar'\n\n\ndef _data_files():\n from os.path import dirname as _dirname\n\n yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n yield 'share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml', ['share/solaar/metainfo.xml']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=__version__,\n description='Linux devices manager for the Logitech Unifying Receiver.',\n long_description='''\nSolaar is a Linux device manager for Logitech's Unifying Receiver peripherals.\nIt is able to pair/unpair devices with the receiver, for many devices show\nbattery status, and show and modify some of the modifiable features of devices.\n'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 'Operating System :: POSIX :: Linux',\n 'Topic :: Utilities',\n ],\n platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.6',\n install_requires=[\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 5.1)',\n 'python-xlib (>= 0.27)',\n 'pynput (>= 1.7.0)',\n 'psutil (>= 5.7.3)',\n ],\n package_dir={'': 'lib'},\n packages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n scripts=_glob('bin/*'),\n)\n",
"path": "setup.py"
}
] | diff --git a/setup.py b/setup.py
index 42e3cc7e3b..9fc93ae0bd 100755
--- a/setup.py
+++ b/setup.py
@@ -24,6 +24,7 @@ def _data_files():
yield 'share/applications', ['share/applications/solaar.desktop']
yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']
+ yield 'share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml', ['share/solaar/metainfo.xml']
del _dirname
diff --git a/share/solaar/metainfo.xml b/share/solaar/metainfo.xml
new file mode 100644
index 0000000000..dfc65e87f3
--- /dev/null
+++ b/share/solaar/metainfo.xml
@@ -0,0 +1,43 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<component type="desktop-application">
+ <id>io.github.pwr_solaar.solaar</id>
+
+ <name>Solaar</name>
+ <summary>Solaar is a Linux manager for many Logitech keyboards, mice, and trackpads.</summary>
+
+ <metadata_license>CC-BY-4.0</metadata_license>
+ <project_license>GPL-2.0-only</project_license>
+
+ <recommends>
+ <control>pointing</control>
+ <control>keyboard</control>
+ <control>touch</control>
+ </recommends>
+
+ <description>
+ <p>
+ <em>
+ </em>Solaar<em>
+ </em> is a Linux manager for many Logitech keyboards, mice, and trackpads that connect wirelessly to a USB, Lightspeed, or Nano receiver, connect directly via a USB cable, or connect via Bluetooth. Solaar does not work with peripherals from other companies.
+ </p>
+ <p>
+ Solaar can be used as a GUI application or via its command-line interface. Both interfaces are able to list the connected devices and show information about each device, often including battery status. Solaar is able to pair and unpair devices with receivers as supported by the device and receiver. Solaar can also control some changeable features of devices, such as smooth scrolling or function key behavior.
+ </p>
+ <p>
+ Solaar's GUI normally uses an icon in the system tray and starts with its main window visible.
+ </p>
+</description>
+
+<launchable type="desktop-id">solaar.desktop</launchable>
+<screenshots>
+ <screenshot type="default">
+ <image>https://raw.githubusercontent.com/pwr-Solaar/Solaar/master/docs/Solaar-main-window-button-actions.png</image>
+ </screenshot>
+ <screenshot>
+ <image>https://raw.githubusercontent.com/pwr-Solaar/Solaar/master/docs/Solaar-main-window-receiver.png</image>
+ </screenshot>
+ <screenshot>
+ <image>https://raw.githubusercontent.com/pwr-Solaar/Solaar/master/docs/Solaar-menu.png</image>
+ </screenshot>
+</screenshots>
+</component>
|
fossasia__open-event-server-4135 | Unable to update the user info via patch request.
**I'm submitting a ...** (check one with "x")
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support requests here, instead ask your query in our Gitter channel at https://gitter.im/fossasia/open-event-orga-server
**Current behavior:**
I am trying to update the user's 'email' and 'phone' info by sending a PATCH request to 'https://open-event-api.herokuapp.com/users/<user_id>', but it returns 'Unknown error'. I also tried sending the request via Postman with the same access token so that I could override the info, and I still get the same error. Screenshots were attached to the original report; the request and response details are:
URL: https://open-event-api.herokuapp.com/v1/users/110
Request headers:
```
Content-Type: application/vnd.api+json
Authorization: JWT <Auth Key>
```
Response:
```
{
"errors": [
{
"detail": "Unknown error",
"source": {
"pointer": ""
},
"status": 500,
"title": "Unknown error"
}
],
"jsonapi": {
"version": "1.0"
}
}
```
Status code: 500
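For context, a hypothetical reproduction of the failing request using Python `requests` (the JSON:API payload shape, the resource type `user`, and the example email value are assumptions, not taken from the report; the URL and headers are as above):

```python
# Hypothetical reproduction of the PATCH request that returns 500.
import json
import requests

url = "https://open-event-api.herokuapp.com/v1/users/110"
headers = {
    "Content-Type": "application/vnd.api+json",
    "Authorization": "JWT <Auth Key>",  # placeholder token
}
payload = {
    "data": {
        "type": "user",          # assumed JSON:API resource type
        "id": "110",
        "attributes": {"email": "new-address@example.com"},  # example value
    }
}

response = requests.patch(url, headers=headers, data=json.dumps(payload))
print(response.status_code, response.json())  # observed: 500 "Unknown error"
```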
| [
{
"content": "from datetime import datetime\nimport pytz\nimport random\nimport humanize\nfrom flask import url_for\nfrom sqlalchemy import event, desc\nfrom sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound\nfrom flask.ext.scrypt import generate_password_hash, generate_random_salt\nfrom sqlalchemy.ext.hybrid import hybrid_property\nfrom app.api.helpers.db import get_count\nfrom app.models.session import Session\nfrom app.models.speaker import Speaker\nfrom app.models import db\nfrom app.models.notification import Notification\nfrom app.models.permission import Permission\nfrom app.models.role import Role\nfrom app.models.service import Service\nfrom app.models.custom_system_role import UserSystemRole\nfrom app.models.user_permission import UserPermission\nfrom app.models.users_events_role import UsersEventsRoles as UER\nfrom app.models.panel_permission import PanelPermission\nfrom app.models.helpers.versioning import clean_up_string, clean_html\n\n# System-wide\nADMIN = 'admin'\nSUPERADMIN = 'super_admin'\n\nSYS_ROLES_LIST = [\n ADMIN,\n SUPERADMIN,\n]\n\n# Event-specific\nORGANIZER = 'organizer'\nCOORGANIZER = 'coorganizer'\nTRACK_ORGANIZER = 'track_organizer'\nMODERATOR = 'moderator'\nATTENDEE = 'attendee'\nREGISTRAR = 'registrar'\n\n\nclass User(db.Model):\n \"\"\"User model class\"\"\"\n __tablename__ = 'users'\n\n id = db.Column(db.Integer, primary_key=True, autoincrement=True)\n _email = db.Column(db.String(120), unique=True, nullable=False)\n _password = db.Column(db.String(128), nullable=False)\n reset_password = db.Column(db.String(128))\n salt = db.Column(db.String(128))\n avatar_url = db.Column(db.String)\n tokens = db.Column(db.Text)\n first_name = db.Column(db.String, nullable=True)\n last_name = db.Column(db.String, nullable=True)\n details = db.Column(db.String)\n contact = db.Column(db.String)\n facebook_url = db.Column(db.String)\n twitter_url = db.Column(db.String)\n instagram_url = db.Column(db.String)\n google_plus_url = db.Column(db.String)\n original_image_url = db.Column(db.String, nullable=True, default=None)\n thumbnail_image_url = db.Column(db.String)\n small_image_url = db.Column(db.String)\n icon_image_url = db.Column(db.String)\n is_super_admin = db.Column(db.Boolean, default=False)\n is_admin = db.Column(db.Boolean, default=False)\n is_verified = db.Column(db.Boolean, default=False)\n last_accessed_at = db.Column(db.DateTime(timezone=True))\n created_at = db.Column(db.DateTime(timezone=True), default=datetime.now(pytz.utc))\n deleted_at = db.Column(db.DateTime(timezone=True))\n speaker = db.relationship('Speaker', backref=\"user\")\n\n @hybrid_property\n def password(self):\n \"\"\"\n Hybrid property for password\n :return:\n \"\"\"\n return self._password\n\n @password.setter\n def password(self, password):\n \"\"\"\n Setter for _password, saves hashed password, salt and reset_password string\n :param password:\n :return:\n \"\"\"\n salt = generate_random_salt()\n self._password = generate_password_hash(password, salt)\n hash_ = random.getrandbits(128)\n self.reset_password = str(hash_)\n self.salt = salt\n\n @hybrid_property\n def email(self):\n \"\"\"\n Hybrid property for email\n :return:\n \"\"\"\n return self._email\n\n @email.setter\n def email(self, email):\n \"\"\"\n Setter for _email, can be only set once\n :param email:\n :return:\n \"\"\"\n if self._email is None:\n self._email = email\n else:\n raise AttributeError(\"Email cannot be modified\")\n\n # User Permissions\n def can_publish_event(self):\n \"\"\"Checks if User can publish an 
event\n \"\"\"\n perm = UserPermission.query.filter_by(name='publish_event').first()\n if not perm:\n return self.is_verified\n\n return perm.unverified_user\n\n def can_create_event(self):\n \"\"\"Checks if User can create an event\n \"\"\"\n perm = UserPermission.query.filter_by(name='create_event').first()\n if not perm:\n return self.is_verified\n\n if self.is_verified is False:\n return perm.unverified_user\n\n return True\n\n def has_role(self, event_id):\n \"\"\"Checks if user has any of the Roles at an Event.\n Exclude Attendee Role.\n \"\"\"\n attendee_role = Role.query.filter_by(name=ATTENDEE).first()\n uer = UER.query.filter(UER.user == self, UER.event_id == event_id,\n UER.role != attendee_role).first()\n if uer is None:\n return False\n else:\n return True\n\n def _is_role(self, role_name, event_id):\n \"\"\"Checks if a user has a particular Role at an Event.\n \"\"\"\n role = Role.query.filter_by(name=role_name).first()\n uer = UER.query.filter_by(user=self,\n event_id=event_id,\n role=role).first()\n if not uer:\n return False\n else:\n return True\n\n def is_organizer(self, event_id):\n # type: (object) -> object\n return self._is_role(ORGANIZER, event_id)\n\n def is_coorganizer(self, event_id):\n return self._is_role(COORGANIZER, event_id)\n\n def is_track_organizer(self, event_id):\n return self._is_role(TRACK_ORGANIZER, event_id)\n\n def is_moderator(self, event_id):\n return self._is_role(MODERATOR, event_id)\n\n def is_registrar(self, event_id):\n return self._is_role(REGISTRAR, event_id)\n\n def is_attendee(self, event_id):\n return self._is_role(ATTENDEE, event_id)\n\n def _has_perm(self, operation, service_class, event_id):\n # Operation names and their corresponding permission in `Permissions`\n operations = {\n 'create': 'can_create',\n 'read': 'can_read',\n 'update': 'can_update',\n 'delete': 'can_delete',\n }\n if operation not in operations.keys():\n raise ValueError('No such operation defined')\n\n try:\n service_name = service_class.get_service_name()\n except AttributeError:\n # If `service_class` does not have `get_service_name()`\n return False\n\n if self.is_super_admin:\n return True\n\n service = Service.query.filter_by(name=service_name).first()\n\n uer_querylist = UER.query.filter_by(user=self,\n event_id=event_id)\n for uer in uer_querylist:\n role = uer.role\n perm = Permission.query.filter_by(role=role,\n service=service).first()\n if getattr(perm, operations[operation]):\n return True\n\n return False\n\n def can_create(self, service_class, event_id):\n return self._has_perm('create', service_class, event_id)\n\n def can_read(self, service_class, event_id):\n return self._has_perm('read', service_class, event_id)\n\n def can_update(self, service_class, event_id):\n return self._has_perm('update', service_class, event_id)\n\n def can_delete(self, service_class, event_id):\n return self._has_perm('delete', service_class, event_id)\n\n def is_speaker_at_session(self, session_id):\n try:\n session = Session.query.filter(Session.speakers.any(Speaker.user_id == self.id)).filter(\n Session.id == session_id).one()\n if session:\n return True\n else:\n return False\n except MultipleResultsFound:\n return False\n except NoResultFound:\n return False\n\n def is_speaker_at_event(self, event_id):\n try:\n session = Session.query.filter(Session.speakers.any(Speaker.user_id == self.id)).filter(\n Session.event_id == event_id).first()\n if session:\n return True\n else:\n return False\n except MultipleResultsFound:\n return False\n except NoResultFound:\n return 
False\n\n # Flask-Login integration\n def is_authenticated(self):\n return True\n\n def is_active(self):\n return True\n\n def is_anonymous(self):\n return False\n\n def get_id(self):\n return self.id\n\n @property\n def is_staff(self):\n return self.is_super_admin or self.is_admin\n\n def is_sys_role(self, role_id):\n \"\"\"Check if a user has a Custom System Role assigned.\n `role_id` is id of a `CustomSysRole` instance.\n \"\"\"\n role = UserSystemRole.query.filter_by(user=self, role_id=role_id).first()\n return bool(role)\n\n def first_access_panel(self):\n \"\"\"Check if the user is assigned a Custom Role or not\n This checks if there is an entry containing the current user in the `user_system_roles` table\n returns panel name if exists otherwise false\n \"\"\"\n custom_role = UserSystemRole.query.filter_by(user=self).first()\n if not custom_role:\n return False\n perm = PanelPermission.query.filter_by(role_id=custom_role.role_id, can_access=True).first()\n if not perm:\n return False\n return perm.panel_name\n\n def can_access_panel(self, panel_name):\n \"\"\"Check if user can access an Admin Panel\n \"\"\"\n if self.is_staff:\n return True\n\n custom_sys_roles = UserSystemRole.query.filter_by(user=self)\n for custom_role in custom_sys_roles:\n if custom_role.role.can_access(panel_name):\n return True\n\n return False\n\n def get_unread_notif_count(self):\n return get_count(Notification.query.filter_by(user=self, is_read=False))\n\n def get_unread_notifs(self):\n \"\"\"Get unread notifications with titles, humanized receiving time\n and Mark-as-read links.\n \"\"\"\n notifs = []\n unread_notifs = Notification.query.filter_by(user=self, is_read=False).order_by(\n desc(Notification.received_at))\n for notif in unread_notifs:\n notifs.append({\n 'title': notif.title,\n 'received_at': humanize.naturaltime(datetime.now(pytz.utc) - notif.received_at),\n 'mark_read': url_for('notifications.mark_as_read', notification_id=notif.id)\n })\n\n return notifs\n\n # update last access time\n def update_lat(self):\n self.last_accessed_at = datetime.now(pytz.utc)\n\n @property\n def fullname(self):\n firstname = self.firstname if self.firstname else ''\n lastname = self.lastname if self.lastname else ''\n if firstname and lastname:\n return u'{} {}'.format(firstname, lastname)\n else:\n return ''\n\n def __repr__(self):\n return '<User %r>' % self.email\n\n def __str__(self):\n return unicode(self).encode('utf-8')\n\n def __unicode__(self):\n return self.email\n\n def __setattr__(self, name, value):\n if name == 'details':\n super(User, self).__setattr__(name, clean_html(clean_up_string(value)))\n else:\n super(User, self).__setattr__(name, value)\n\n\[email protected]_for(User, 'init')\ndef receive_init(target, args, kwargs):\n target.signup_at = datetime.now(pytz.utc)\n",
"path": "app/models/user.py"
}
] | [
{
"content": "from datetime import datetime\nimport pytz\nimport random\nimport humanize\nfrom flask import url_for\nfrom sqlalchemy import event, desc\nfrom sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound\nfrom flask.ext.scrypt import generate_password_hash, generate_random_salt\nfrom sqlalchemy.ext.hybrid import hybrid_property\nfrom app.api.helpers.db import get_count\nfrom app.models.session import Session\nfrom app.models.speaker import Speaker\nfrom app.models import db\nfrom app.models.notification import Notification\nfrom app.models.permission import Permission\nfrom app.models.role import Role\nfrom app.models.service import Service\nfrom app.models.custom_system_role import UserSystemRole\nfrom app.models.user_permission import UserPermission\nfrom app.models.users_events_role import UsersEventsRoles as UER\nfrom app.models.panel_permission import PanelPermission\nfrom app.models.helpers.versioning import clean_up_string, clean_html\n\n# System-wide\nADMIN = 'admin'\nSUPERADMIN = 'super_admin'\n\nSYS_ROLES_LIST = [\n ADMIN,\n SUPERADMIN,\n]\n\n# Event-specific\nORGANIZER = 'organizer'\nCOORGANIZER = 'coorganizer'\nTRACK_ORGANIZER = 'track_organizer'\nMODERATOR = 'moderator'\nATTENDEE = 'attendee'\nREGISTRAR = 'registrar'\n\n\nclass User(db.Model):\n \"\"\"User model class\"\"\"\n __tablename__ = 'users'\n\n id = db.Column(db.Integer, primary_key=True, autoincrement=True)\n _email = db.Column(db.String(120), unique=True, nullable=False)\n _password = db.Column(db.String(128), nullable=False)\n reset_password = db.Column(db.String(128))\n salt = db.Column(db.String(128))\n avatar_url = db.Column(db.String)\n tokens = db.Column(db.Text)\n first_name = db.Column(db.String, nullable=True)\n last_name = db.Column(db.String, nullable=True)\n details = db.Column(db.String)\n contact = db.Column(db.String)\n facebook_url = db.Column(db.String)\n twitter_url = db.Column(db.String)\n instagram_url = db.Column(db.String)\n google_plus_url = db.Column(db.String)\n original_image_url = db.Column(db.String, nullable=True, default=None)\n thumbnail_image_url = db.Column(db.String)\n small_image_url = db.Column(db.String)\n icon_image_url = db.Column(db.String)\n is_super_admin = db.Column(db.Boolean, default=False)\n is_admin = db.Column(db.Boolean, default=False)\n is_verified = db.Column(db.Boolean, default=False)\n last_accessed_at = db.Column(db.DateTime(timezone=True))\n created_at = db.Column(db.DateTime(timezone=True), default=datetime.now(pytz.utc))\n deleted_at = db.Column(db.DateTime(timezone=True))\n speaker = db.relationship('Speaker', backref=\"user\")\n\n @hybrid_property\n def password(self):\n \"\"\"\n Hybrid property for password\n :return:\n \"\"\"\n return self._password\n\n @password.setter\n def password(self, password):\n \"\"\"\n Setter for _password, saves hashed password, salt and reset_password string\n :param password:\n :return:\n \"\"\"\n salt = generate_random_salt()\n self._password = generate_password_hash(password, salt)\n hash_ = random.getrandbits(128)\n self.reset_password = str(hash_)\n self.salt = salt\n\n @hybrid_property\n def email(self):\n \"\"\"\n Hybrid property for email\n :return:\n \"\"\"\n return self._email\n\n @email.setter\n def email(self, email):\n \"\"\"\n Setter for _email, can be only set once\n :param email:\n :return:\n \"\"\"\n self._email = email\n self.is_verified = False\n\n # User Permissions\n def can_publish_event(self):\n \"\"\"Checks if User can publish an event\n \"\"\"\n perm = 
UserPermission.query.filter_by(name='publish_event').first()\n if not perm:\n return self.is_verified\n\n return perm.unverified_user\n\n def can_create_event(self):\n \"\"\"Checks if User can create an event\n \"\"\"\n perm = UserPermission.query.filter_by(name='create_event').first()\n if not perm:\n return self.is_verified\n\n if self.is_verified is False:\n return perm.unverified_user\n\n return True\n\n def has_role(self, event_id):\n \"\"\"Checks if user has any of the Roles at an Event.\n Exclude Attendee Role.\n \"\"\"\n attendee_role = Role.query.filter_by(name=ATTENDEE).first()\n uer = UER.query.filter(UER.user == self, UER.event_id == event_id,\n UER.role != attendee_role).first()\n if uer is None:\n return False\n else:\n return True\n\n def _is_role(self, role_name, event_id):\n \"\"\"Checks if a user has a particular Role at an Event.\n \"\"\"\n role = Role.query.filter_by(name=role_name).first()\n uer = UER.query.filter_by(user=self,\n event_id=event_id,\n role=role).first()\n if not uer:\n return False\n else:\n return True\n\n def is_organizer(self, event_id):\n # type: (object) -> object\n return self._is_role(ORGANIZER, event_id)\n\n def is_coorganizer(self, event_id):\n return self._is_role(COORGANIZER, event_id)\n\n def is_track_organizer(self, event_id):\n return self._is_role(TRACK_ORGANIZER, event_id)\n\n def is_moderator(self, event_id):\n return self._is_role(MODERATOR, event_id)\n\n def is_registrar(self, event_id):\n return self._is_role(REGISTRAR, event_id)\n\n def is_attendee(self, event_id):\n return self._is_role(ATTENDEE, event_id)\n\n def _has_perm(self, operation, service_class, event_id):\n # Operation names and their corresponding permission in `Permissions`\n operations = {\n 'create': 'can_create',\n 'read': 'can_read',\n 'update': 'can_update',\n 'delete': 'can_delete',\n }\n if operation not in operations.keys():\n raise ValueError('No such operation defined')\n\n try:\n service_name = service_class.get_service_name()\n except AttributeError:\n # If `service_class` does not have `get_service_name()`\n return False\n\n if self.is_super_admin:\n return True\n\n service = Service.query.filter_by(name=service_name).first()\n\n uer_querylist = UER.query.filter_by(user=self,\n event_id=event_id)\n for uer in uer_querylist:\n role = uer.role\n perm = Permission.query.filter_by(role=role,\n service=service).first()\n if getattr(perm, operations[operation]):\n return True\n\n return False\n\n def can_create(self, service_class, event_id):\n return self._has_perm('create', service_class, event_id)\n\n def can_read(self, service_class, event_id):\n return self._has_perm('read', service_class, event_id)\n\n def can_update(self, service_class, event_id):\n return self._has_perm('update', service_class, event_id)\n\n def can_delete(self, service_class, event_id):\n return self._has_perm('delete', service_class, event_id)\n\n def is_speaker_at_session(self, session_id):\n try:\n session = Session.query.filter(Session.speakers.any(Speaker.user_id == self.id)).filter(\n Session.id == session_id).one()\n if session:\n return True\n else:\n return False\n except MultipleResultsFound:\n return False\n except NoResultFound:\n return False\n\n def is_speaker_at_event(self, event_id):\n try:\n session = Session.query.filter(Session.speakers.any(Speaker.user_id == self.id)).filter(\n Session.event_id == event_id).first()\n if session:\n return True\n else:\n return False\n except MultipleResultsFound:\n return False\n except NoResultFound:\n return False\n\n # Flask-Login 
integration\n def is_authenticated(self):\n return True\n\n def is_active(self):\n return True\n\n def is_anonymous(self):\n return False\n\n def get_id(self):\n return self.id\n\n @property\n def is_staff(self):\n return self.is_super_admin or self.is_admin\n\n def is_sys_role(self, role_id):\n \"\"\"Check if a user has a Custom System Role assigned.\n `role_id` is id of a `CustomSysRole` instance.\n \"\"\"\n role = UserSystemRole.query.filter_by(user=self, role_id=role_id).first()\n return bool(role)\n\n def first_access_panel(self):\n \"\"\"Check if the user is assigned a Custom Role or not\n This checks if there is an entry containing the current user in the `user_system_roles` table\n returns panel name if exists otherwise false\n \"\"\"\n custom_role = UserSystemRole.query.filter_by(user=self).first()\n if not custom_role:\n return False\n perm = PanelPermission.query.filter_by(role_id=custom_role.role_id, can_access=True).first()\n if not perm:\n return False\n return perm.panel_name\n\n def can_access_panel(self, panel_name):\n \"\"\"Check if user can access an Admin Panel\n \"\"\"\n if self.is_staff:\n return True\n\n custom_sys_roles = UserSystemRole.query.filter_by(user=self)\n for custom_role in custom_sys_roles:\n if custom_role.role.can_access(panel_name):\n return True\n\n return False\n\n def get_unread_notif_count(self):\n return get_count(Notification.query.filter_by(user=self, is_read=False))\n\n def get_unread_notifs(self):\n \"\"\"Get unread notifications with titles, humanized receiving time\n and Mark-as-read links.\n \"\"\"\n notifs = []\n unread_notifs = Notification.query.filter_by(user=self, is_read=False).order_by(\n desc(Notification.received_at))\n for notif in unread_notifs:\n notifs.append({\n 'title': notif.title,\n 'received_at': humanize.naturaltime(datetime.now(pytz.utc) - notif.received_at),\n 'mark_read': url_for('notifications.mark_as_read', notification_id=notif.id)\n })\n\n return notifs\n\n # update last access time\n def update_lat(self):\n self.last_accessed_at = datetime.now(pytz.utc)\n\n @property\n def fullname(self):\n firstname = self.firstname if self.firstname else ''\n lastname = self.lastname if self.lastname else ''\n if firstname and lastname:\n return u'{} {}'.format(firstname, lastname)\n else:\n return ''\n\n def __repr__(self):\n return '<User %r>' % self.email\n\n def __str__(self):\n return unicode(self).encode('utf-8')\n\n def __unicode__(self):\n return self.email\n\n def __setattr__(self, name, value):\n if name == 'details':\n super(User, self).__setattr__(name, clean_html(clean_up_string(value)))\n else:\n super(User, self).__setattr__(name, value)\n\n\[email protected]_for(User, 'init')\ndef receive_init(target, args, kwargs):\n target.signup_at = datetime.now(pytz.utc)\n",
"path": "app/models/user.py"
}
] | diff --git a/app/models/user.py b/app/models/user.py
index db06bc8f4b..a7debdb13b 100644
--- a/app/models/user.py
+++ b/app/models/user.py
@@ -106,10 +106,8 @@ def email(self, email):
:param email:
:return:
"""
- if self._email is None:
- self._email = email
- else:
- raise AttributeError("Email cannot be modified")
+ self._email = email
+ self.is_verified = False
# User Permissions
def can_publish_event(self):
|
sunpy__sunpy-3835 | Plot titles and x-labels overlapping in example
The plot titles and x-labels overlap in the 3rd image of https://docs.sunpy.org/en/latest/generated/gallery/acquiring_data/2011_06_07_sampledata_overview.html#sphx-glr-generated-gallery-acquiring-data-2011-06-07-sampledata-overview-py (see below). I'm guessing the `tight_layout` padding just needs tweaking.

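A minimal, hedged sketch of the kind of tweak being suggested (a standalone example, not the gallery script itself; the padding value is only a guess): `tight_layout`'s `pad` argument is measured in multiples of the font size, so increasing it leaves room between stacked subplots.
```python
import matplotlib.pyplot as plt

# Standalone sketch: six stacked axes standing in for the AIA maps.
fig, axes = plt.subplots(6, 1, figsize=(6, 28))
for ax in axes:
    ax.set_title("panel title")
    ax.set_xlabel("x-label")

# pad is expressed in multiples of the font size; a larger value leaves
# room for each panel's title above its neighbour's x-label.
fig.tight_layout(pad=8.5)
plt.show()
```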
| [
{
"content": "# -*- coding: utf-8 -*-\n\"\"\"\n========================\nSample data set overview\n========================\n\nAn overview of the coordinated sample data set.\n\"\"\"\nimport matplotlib.pyplot as plt\nimport astropy.units as u\n\nimport sunpy.map\nimport sunpy.timeseries\nimport sunpy.data.sample as sample_data\n\n###############################################################################\n# On 2011 June 7, various solar instruments observed a spectacular solar\n# eruption from NOAA AR 11226. The event included an M2.5 flare, a\n# filament eruption, a coronal mass ejection, and a global coronal EUV wave (IAU standard:\n# SOL2011-06-07T06:24:00L045C112). This event was spectacular because it\n# features the ejection of a large amount of prominence material, much of which\n# failed to escape and fell back to the solar surface.\n# This event received some press coverage (e.g. `National Geographics\n# <https://news.nationalgeographic.com/news/2011/06/110608-solar-flare-sun-science-space/>`_,\n# `Discover Magazine <http://blogs.discovermagazine.com/badastronomy/2011/06/07/the-sun-lets-loose-a-huge-explosion/>`_)\n# and the literature contains a number of a papers about it (e.g. `Li et al.\n# <https://iopscience.iop.org/article/10.1088/0004-637X/746/1/13/meta>`_,\n# `Inglis et al. <https://iopscience.iop.org/article/10.1088/0004-637X/777/1/30/meta>`_)\n\n###############################################################################\n# The following image of the flare is now fairly iconic.\naia_cutout03_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT03_IMAGE)\nfig = plt.figure()\nax = fig.add_subplot(111, projection=aia_cutout03_map)\naia_cutout03_map.plot()\nplt.show()\n\n###############################################################################\n# Let's take a look at the GOES XRS data.\ngoes = sunpy.timeseries.TimeSeries(sample_data.GOES_XRS_TIMESERIES)\nfig = plt.figure()\ngoes.plot()\nplt.show()\n\n###############################################################################\n# Next let's investigate the AIA full disk images that are available. 
Please\n# note that these images are not at the full AIA resolution.\n\naia_131_map = sunpy.map.Map(sample_data.AIA_131_IMAGE)\naia_171_map = sunpy.map.Map(sample_data.AIA_171_IMAGE)\naia_211_map = sunpy.map.Map(sample_data.AIA_211_IMAGE)\naia_335_map = sunpy.map.Map(sample_data.AIA_335_IMAGE)\naia_094_map = sunpy.map.Map(sample_data.AIA_094_IMAGE)\naia_1600_map = sunpy.map.Map(sample_data.AIA_1600_IMAGE)\n\nfig = plt.figure(figsize=(6, 28))\nax = fig.add_subplot(611, projection=aia_131_map)\naia_131_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_131_map.draw_grid()\n\nax = fig.add_subplot(612, projection=aia_171_map)\naia_171_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_171_map.draw_grid()\n\nax = fig.add_subplot(613, projection=aia_211_map)\naia_211_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_211_map.draw_grid()\n\nax = fig.add_subplot(614, projection=aia_335_map)\naia_335_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_335_map.draw_grid()\n\nax = fig.add_subplot(615, projection=aia_094_map)\naia_094_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_094_map.draw_grid()\n\nax = fig.add_subplot(616, projection=aia_1600_map)\naia_1600_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_1600_map.draw_grid()\n\nfig.tight_layout(pad=6.50)\nplt.show()\n\n###############################################################################\n# We also provide a series of AIA cutouts so that you can get a sense of the\n# dynamics of the in-falling material.\naia_cutout01_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT01_IMAGE)\naia_cutout02_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT02_IMAGE)\naia_cutout03_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT03_IMAGE)\naia_cutout04_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT04_IMAGE)\naia_cutout05_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT05_IMAGE)\n\nfig = plt.figure(figsize=(6, 28))\nax = fig.add_subplot(511, projection=aia_cutout01_map)\naia_cutout01_map.plot()\n\nax = fig.add_subplot(512, projection=aia_cutout02_map)\naia_cutout02_map.plot()\n\nax = fig.add_subplot(513, projection=aia_cutout03_map)\naia_cutout03_map.plot()\n\nax = fig.add_subplot(514, projection=aia_cutout04_map)\naia_cutout04_map.plot()\n\nax = fig.add_subplot(515, projection=aia_cutout05_map)\naia_cutout05_map.plot()\n\nfig.tight_layout(pad=5.50)\nplt.show()\n\n###############################################################################\n# There are a number of other data sources available as well, such as SWAP.\nswap_map = sunpy.map.Map(sample_data.SWAP_LEVEL1_IMAGE)\nfig = plt.figure()\nswap_map.plot()\nplt.show()\n\n###############################################################################\n# And also RHESSI.\nrhessi_map = sunpy.map.Map(sample_data.RHESSI_IMAGE)\nfig = plt.figure()\nrhessi_map.plot()\nplt.show()\n",
"path": "examples/acquiring_data/2011_06_07_sampledata_overview.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n\"\"\"\n========================\nSample data set overview\n========================\n\nAn overview of the coordinated sample data set.\n\"\"\"\nimport matplotlib.pyplot as plt\nimport astropy.units as u\n\nimport sunpy.map\nimport sunpy.timeseries\nimport sunpy.data.sample as sample_data\n\n###############################################################################\n# On 2011 June 7, various solar instruments observed a spectacular solar\n# eruption from NOAA AR 11226. The event included an M2.5 flare, a\n# filament eruption, a coronal mass ejection, and a global coronal EUV wave (IAU standard:\n# SOL2011-06-07T06:24:00L045C112). This event was spectacular because it\n# features the ejection of a large amount of prominence material, much of which\n# failed to escape and fell back to the solar surface.\n# This event received some press coverage (e.g. `National Geographics\n# <https://news.nationalgeographic.com/news/2011/06/110608-solar-flare-sun-science-space/>`_,\n# `Discover Magazine <http://blogs.discovermagazine.com/badastronomy/2011/06/07/the-sun-lets-loose-a-huge-explosion/>`_)\n# and the literature contains a number of a papers about it (e.g. `Li et al.\n# <https://iopscience.iop.org/article/10.1088/0004-637X/746/1/13/meta>`_,\n# `Inglis et al. <https://iopscience.iop.org/article/10.1088/0004-637X/777/1/30/meta>`_)\n\n###############################################################################\n# The following image of the flare is now fairly iconic.\naia_cutout03_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT03_IMAGE)\nfig = plt.figure()\nax = fig.add_subplot(111, projection=aia_cutout03_map)\naia_cutout03_map.plot()\nplt.show()\n\n###############################################################################\n# Let's take a look at the GOES XRS data.\ngoes = sunpy.timeseries.TimeSeries(sample_data.GOES_XRS_TIMESERIES)\nfig = plt.figure()\ngoes.plot()\nplt.show()\n\n###############################################################################\n# Next let's investigate the AIA full disk images that are available. 
Please\n# note that these images are not at the full AIA resolution.\n\naia_131_map = sunpy.map.Map(sample_data.AIA_131_IMAGE)\naia_171_map = sunpy.map.Map(sample_data.AIA_171_IMAGE)\naia_211_map = sunpy.map.Map(sample_data.AIA_211_IMAGE)\naia_335_map = sunpy.map.Map(sample_data.AIA_335_IMAGE)\naia_094_map = sunpy.map.Map(sample_data.AIA_094_IMAGE)\naia_1600_map = sunpy.map.Map(sample_data.AIA_1600_IMAGE)\n\nfig = plt.figure(figsize=(6, 28))\nax = fig.add_subplot(611, projection=aia_131_map)\naia_131_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_131_map.draw_grid()\n\nax = fig.add_subplot(612, projection=aia_171_map)\naia_171_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_171_map.draw_grid()\n\nax = fig.add_subplot(613, projection=aia_211_map)\naia_211_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_211_map.draw_grid()\n\nax = fig.add_subplot(614, projection=aia_335_map)\naia_335_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_335_map.draw_grid()\n\nax = fig.add_subplot(615, projection=aia_094_map)\naia_094_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_094_map.draw_grid()\n\nax = fig.add_subplot(616, projection=aia_1600_map)\naia_1600_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_1600_map.draw_grid()\n\nfig.tight_layout(pad=8.50)\nplt.show()\n\n###############################################################################\n# We also provide a series of AIA cutouts so that you can get a sense of the\n# dynamics of the in-falling material.\naia_cutout01_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT01_IMAGE)\naia_cutout02_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT02_IMAGE)\naia_cutout03_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT03_IMAGE)\naia_cutout04_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT04_IMAGE)\naia_cutout05_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT05_IMAGE)\n\nfig = plt.figure(figsize=(6, 28))\nax = fig.add_subplot(511, projection=aia_cutout01_map)\naia_cutout01_map.plot()\n\nax = fig.add_subplot(512, projection=aia_cutout02_map)\naia_cutout02_map.plot()\n\nax = fig.add_subplot(513, projection=aia_cutout03_map)\naia_cutout03_map.plot()\n\nax = fig.add_subplot(514, projection=aia_cutout04_map)\naia_cutout04_map.plot()\n\nax = fig.add_subplot(515, projection=aia_cutout05_map)\naia_cutout05_map.plot()\n\nfig.tight_layout(pad=5.50)\nplt.show()\n\n###############################################################################\n# There are a number of other data sources available as well, such as SWAP.\nswap_map = sunpy.map.Map(sample_data.SWAP_LEVEL1_IMAGE)\nfig = plt.figure()\nswap_map.plot()\nplt.show()\n\n###############################################################################\n# And also RHESSI.\nrhessi_map = sunpy.map.Map(sample_data.RHESSI_IMAGE)\nfig = plt.figure()\nrhessi_map.plot()\nplt.show()\n",
"path": "examples/acquiring_data/2011_06_07_sampledata_overview.py"
}
] | diff --git a/changelog/3835.doc.rst b/changelog/3835.doc.rst
new file mode 100644
index 00000000000..acb95ba1734
--- /dev/null
+++ b/changelog/3835.doc.rst
@@ -0,0 +1 @@
+Changed padding value of an example in the example gallery to fix the overlap of titles and x-label axes.
diff --git a/examples/acquiring_data/2011_06_07_sampledata_overview.py b/examples/acquiring_data/2011_06_07_sampledata_overview.py
index cdda728d649..b33ddb0469a 100644
--- a/examples/acquiring_data/2011_06_07_sampledata_overview.py
+++ b/examples/acquiring_data/2011_06_07_sampledata_overview.py
@@ -78,7 +78,7 @@
aia_1600_map.plot(clip_interval=(0.5, 99.9)*u.percent)
aia_1600_map.draw_grid()
-fig.tight_layout(pad=6.50)
+fig.tight_layout(pad=8.50)
plt.show()
###############################################################################
|
readthedocs__readthedocs.org-4811 | Delete untracked tags on fetch step
Currently, if a user deletes a tag, they need to wipe the environment for the change to be reflected in their version list.
There are some ways to delete untracked tags, but they require more than two commands. However, newer versions of git have a `--prune-tags` option, used as `git fetch --prune --prune-tags` (git >= 2.17). We need to update git on the servers (we currently use 2.7.4) and change the fetch command, or find a way to wipe the environment when we detect a case like this.
Raised in https://github.com/rtfd/readthedocs.org/pull/3913#issuecomment-396673349
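For reference, a minimal standalone sketch of the newer fetch invocation (using `subprocess` here rather than the project's own command runner, which is an assumption about how one might wire it up):
```python
import subprocess

def fetch_with_tag_pruning(repo_dir):
    # Requires git >= 2.17: --prune-tags removes local tags that no longer
    # exist on the remote, so a deleted tag disappears from the checkout
    # without wiping the whole environment.
    code = subprocess.call(
        ['git', 'fetch', '--tags', '--prune', '--prune-tags'],
        cwd=repo_dir,
    )
    if code != 0:
        raise RuntimeError('git fetch failed')
```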
| [
{
"content": "# -*- coding: utf-8 -*-\n\"\"\"Git-related utilities.\"\"\"\n\nfrom __future__ import (\n absolute_import, division, print_function, unicode_literals)\n\nimport csv\nimport logging\nimport os\nimport re\n\nimport git\nfrom builtins import str\nfrom django.core.exceptions import ValidationError\nfrom git.exc import BadName\nfrom six import PY2, StringIO\n\nfrom readthedocs.config import ALL\nfrom readthedocs.projects.exceptions import RepositoryError\nfrom readthedocs.projects.validators import validate_submodule_url\nfrom readthedocs.vcs_support.base import BaseVCS, VCSVersion\n\nlog = logging.getLogger(__name__)\n\n\nclass Backend(BaseVCS):\n\n \"\"\"Git VCS backend.\"\"\"\n\n supports_tags = True\n supports_branches = True\n supports_submodules = True\n fallback_branch = 'master' # default branch\n\n def __init__(self, *args, **kwargs):\n super(Backend, self).__init__(*args, **kwargs)\n self.token = kwargs.get('token', None)\n self.repo_url = self._get_clone_url()\n\n def _get_clone_url(self):\n if '://' in self.repo_url:\n hacked_url = self.repo_url.split('://')[1]\n hacked_url = re.sub('.git$', '', hacked_url)\n clone_url = 'https://%s' % hacked_url\n if self.token:\n clone_url = 'https://%s@%s' % (self.token, hacked_url)\n return clone_url\n # Don't edit URL because all hosts aren't the same\n # else:\n # clone_url = 'git://%s' % (hacked_url)\n return self.repo_url\n\n def set_remote_url(self, url):\n return self.run('git', 'remote', 'set-url', 'origin', url)\n\n def update(self):\n # Use checkout() to update repo\n # TODO: See where we call this\n self.checkout()\n\n def repo_exists(self):\n code, _, _ = self.run('git', 'status', record=False)\n return code == 0\n\n def are_submodules_available(self, config):\n \"\"\"Test whether git submodule checkout step should be performed.\"\"\"\n # TODO remove this after users migrate to a config file\n from readthedocs.projects.models import Feature\n submodules_in_config = (\n config.submodules.exclude != ALL or\n config.submodules.include\n )\n if (self.project.has_feature(Feature.SKIP_SUBMODULES) or\n not submodules_in_config):\n return False\n\n # Keep compatibility with previous projects\n code, out, _ = self.run('git', 'submodule', 'status', record=False)\n return code == 0 and bool(out)\n\n def validate_submodules(self, config):\n \"\"\"\n Returns the submodules and check that its URLs are valid.\n\n .. 
note::\n\n Allways call after `self.are_submodules_available`.\n\n :returns: tuple(bool, list)\n\n Returns true if all required submodules URLs are valid.\n Returns a list of all required submodules:\n - Include is `ALL`, returns all submodules avaliable.\n - Include is a list, returns just those.\n - Exclude is `ALL` - this should never happen.\n - Exlude is a list, returns all avaliable submodules\n but those from the list.\n \"\"\"\n repo = git.Repo(self.working_dir)\n submodules = {\n sub.path: sub\n for sub in repo.submodules\n }\n\n for sub_path in config.submodules.exclude:\n path = sub_path.rstrip('/')\n if path in submodules:\n del submodules[path]\n\n if config.submodules.include != ALL and config.submodules.include:\n submodules_include = {}\n for sub_path in config.submodules.include:\n path = sub_path.rstrip('/')\n submodules_include[path] = submodules[path]\n submodules = submodules_include\n\n for path, submodule in submodules.items():\n try:\n validate_submodule_url(submodule.url)\n except ValidationError:\n return False, []\n return True, submodules.keys()\n\n def fetch(self):\n code, _, _ = self.run('git', 'fetch', '--tags', '--prune')\n if code != 0:\n raise RepositoryError\n\n def checkout_revision(self, revision=None):\n if not revision:\n branch = self.default_branch or self.fallback_branch\n revision = 'origin/%s' % branch\n\n code, out, err = self.run('git', 'checkout', '--force', revision)\n if code != 0:\n log.warning(\"Failed to checkout revision '%s': %s\", revision, code)\n return [code, out, err]\n\n def clone(self):\n \"\"\"\n Clone the repository.\n\n .. note::\n\n Temporarily, we support skipping submodule recursive clone via a\n feature flag. This will eventually be configurable with our YAML\n config.\n \"\"\"\n # TODO remove with https://github.com/rtfd/readthedocs-build/issues/30\n from readthedocs.projects.models import Feature\n cmd = ['git', 'clone']\n cmd.extend([self.repo_url, '.'])\n code, _, _ = self.run(*cmd)\n if code != 0:\n raise RepositoryError\n\n @property\n def tags(self):\n versions = []\n repo = git.Repo(self.working_dir)\n for tag in repo.tags:\n try:\n versions.append(VCSVersion(self, str(tag.commit), str(tag)))\n except ValueError as e:\n # ValueError: Cannot resolve commit as tag TAGNAME points to a\n # blob object - use the `.object` property instead to access it\n # This is not a real tag for us, so we skip it\n # https://github.com/rtfd/readthedocs.org/issues/4440\n log.warning('Git tag skipped: %s', tag, exc_info=True)\n continue\n return versions\n\n @property\n def branches(self):\n # Only show remote branches\n retcode, stdout, _ = self.run(\n 'git',\n 'branch',\n '-r',\n record_as_success=True,\n )\n # error (or no branches found)\n if retcode != 0:\n return []\n return self.parse_branches(stdout)\n\n def parse_branches(self, data):\n \"\"\"\n Parse output of git branch -r.\n\n e.g.:\n\n origin/2.0.X\n origin/HEAD -> origin/master\n origin/develop\n origin/master\n origin/release/2.0.0\n origin/release/2.1.0\n \"\"\"\n clean_branches = []\n # StringIO below is expecting Unicode data, so ensure that it gets it.\n if not isinstance(data, str):\n data = str(data)\n delimiter = str(' ').encode('utf-8') if PY2 else str(' ')\n raw_branches = csv.reader(StringIO(data), delimiter=delimiter)\n for branch in raw_branches:\n branch = [f for f in branch if f not in ('', '*')]\n # Handle empty branches\n if branch:\n branch = branch[0]\n if branch.startswith('origin/'):\n verbose_name = branch.replace('origin/', '')\n if verbose_name in 
['HEAD']:\n continue\n clean_branches.append(\n VCSVersion(self, branch, verbose_name))\n else:\n clean_branches.append(VCSVersion(self, branch, branch))\n return clean_branches\n\n @property\n def commit(self):\n _, stdout, _ = self.run('git', 'rev-parse', 'HEAD')\n return stdout.strip()\n\n def checkout(self, identifier=None):\n self.check_working_dir()\n\n # Clone or update repository\n if self.repo_exists():\n self.set_remote_url(self.repo_url)\n self.fetch()\n else:\n self.make_clean_working_dir()\n self.clone()\n\n # Find proper identifier\n if not identifier:\n identifier = self.default_branch or self.fallback_branch\n\n identifier = self.find_ref(identifier)\n\n # Checkout the correct identifier for this branch.\n code, out, err = self.checkout_revision(identifier)\n if code != 0:\n return code, out, err\n\n # Clean any remains of previous checkouts\n self.run('git', 'clean', '-d', '-f', '-f')\n return code, out, err\n\n def update_submodules(self, config):\n if self.are_submodules_available(config):\n valid, submodules = self.validate_submodules(config)\n if valid:\n self.checkout_submodules(submodules, config)\n else:\n raise RepositoryError(RepositoryError.INVALID_SUBMODULES)\n\n def checkout_submodules(self, submodules, config):\n \"\"\"Checkout all repository submodules.\"\"\"\n self.run('git', 'submodule', 'sync')\n cmd = [\n 'git',\n 'submodule',\n 'update',\n '--init',\n '--force',\n ]\n if config.submodules.recursive:\n cmd.append('--recursive')\n cmd += submodules\n self.run(*cmd)\n\n def find_ref(self, ref):\n # Check if ref starts with 'origin/'\n if ref.startswith('origin/'):\n return ref\n\n # Check if ref is a branch of the origin remote\n if self.ref_exists('remotes/origin/' + ref):\n return 'origin/' + ref\n\n return ref\n\n def ref_exists(self, ref):\n try:\n r = git.Repo(self.working_dir)\n if r.commit(ref):\n return True\n except (BadName, ValueError):\n return False\n return False\n\n @property\n def env(self):\n env = super(Backend, self).env\n env['GIT_DIR'] = os.path.join(self.working_dir, '.git')\n # Don't prompt for username, this requires Git 2.3+\n env['GIT_TERMINAL_PROMPT'] = '0'\n return env\n",
"path": "readthedocs/vcs_support/backends/git.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n\"\"\"Git-related utilities.\"\"\"\n\nfrom __future__ import (\n absolute_import, division, print_function, unicode_literals)\n\nimport csv\nimport logging\nimport os\nimport re\n\nimport git\nfrom builtins import str\nfrom django.core.exceptions import ValidationError\nfrom git.exc import BadName\nfrom six import PY2, StringIO\n\nfrom readthedocs.config import ALL\nfrom readthedocs.projects.exceptions import RepositoryError\nfrom readthedocs.projects.validators import validate_submodule_url\nfrom readthedocs.vcs_support.base import BaseVCS, VCSVersion\n\nlog = logging.getLogger(__name__)\n\n\nclass Backend(BaseVCS):\n\n \"\"\"Git VCS backend.\"\"\"\n\n supports_tags = True\n supports_branches = True\n supports_submodules = True\n fallback_branch = 'master' # default branch\n\n def __init__(self, *args, **kwargs):\n super(Backend, self).__init__(*args, **kwargs)\n self.token = kwargs.get('token', None)\n self.repo_url = self._get_clone_url()\n\n def _get_clone_url(self):\n if '://' in self.repo_url:\n hacked_url = self.repo_url.split('://')[1]\n hacked_url = re.sub('.git$', '', hacked_url)\n clone_url = 'https://%s' % hacked_url\n if self.token:\n clone_url = 'https://%s@%s' % (self.token, hacked_url)\n return clone_url\n # Don't edit URL because all hosts aren't the same\n # else:\n # clone_url = 'git://%s' % (hacked_url)\n return self.repo_url\n\n def set_remote_url(self, url):\n return self.run('git', 'remote', 'set-url', 'origin', url)\n\n def update(self):\n # Use checkout() to update repo\n # TODO: See where we call this\n self.checkout()\n\n def repo_exists(self):\n code, _, _ = self.run('git', 'status', record=False)\n return code == 0\n\n def are_submodules_available(self, config):\n \"\"\"Test whether git submodule checkout step should be performed.\"\"\"\n # TODO remove this after users migrate to a config file\n from readthedocs.projects.models import Feature\n submodules_in_config = (\n config.submodules.exclude != ALL or\n config.submodules.include\n )\n if (self.project.has_feature(Feature.SKIP_SUBMODULES) or\n not submodules_in_config):\n return False\n\n # Keep compatibility with previous projects\n code, out, _ = self.run('git', 'submodule', 'status', record=False)\n return code == 0 and bool(out)\n\n def validate_submodules(self, config):\n \"\"\"\n Returns the submodules and check that its URLs are valid.\n\n .. 
note::\n\n Allways call after `self.are_submodules_available`.\n\n :returns: tuple(bool, list)\n\n Returns true if all required submodules URLs are valid.\n Returns a list of all required submodules:\n - Include is `ALL`, returns all submodules avaliable.\n - Include is a list, returns just those.\n - Exclude is `ALL` - this should never happen.\n - Exlude is a list, returns all avaliable submodules\n but those from the list.\n \"\"\"\n repo = git.Repo(self.working_dir)\n submodules = {\n sub.path: sub\n for sub in repo.submodules\n }\n\n for sub_path in config.submodules.exclude:\n path = sub_path.rstrip('/')\n if path in submodules:\n del submodules[path]\n\n if config.submodules.include != ALL and config.submodules.include:\n submodules_include = {}\n for sub_path in config.submodules.include:\n path = sub_path.rstrip('/')\n submodules_include[path] = submodules[path]\n submodules = submodules_include\n\n for path, submodule in submodules.items():\n try:\n validate_submodule_url(submodule.url)\n except ValidationError:\n return False, []\n return True, submodules.keys()\n\n def fetch(self):\n code, _, _ = self.run(\n 'git', 'fetch', '--tags', '--prune', '--prune-tags',\n )\n if code != 0:\n raise RepositoryError\n\n def checkout_revision(self, revision=None):\n if not revision:\n branch = self.default_branch or self.fallback_branch\n revision = 'origin/%s' % branch\n\n code, out, err = self.run('git', 'checkout', '--force', revision)\n if code != 0:\n log.warning(\"Failed to checkout revision '%s': %s\", revision, code)\n return [code, out, err]\n\n def clone(self):\n \"\"\"\n Clone the repository.\n\n .. note::\n\n Temporarily, we support skipping submodule recursive clone via a\n feature flag. This will eventually be configurable with our YAML\n config.\n \"\"\"\n # TODO remove with https://github.com/rtfd/readthedocs-build/issues/30\n from readthedocs.projects.models import Feature\n cmd = ['git', 'clone']\n cmd.extend([self.repo_url, '.'])\n code, _, _ = self.run(*cmd)\n if code != 0:\n raise RepositoryError\n\n @property\n def tags(self):\n versions = []\n repo = git.Repo(self.working_dir)\n for tag in repo.tags:\n try:\n versions.append(VCSVersion(self, str(tag.commit), str(tag)))\n except ValueError as e:\n # ValueError: Cannot resolve commit as tag TAGNAME points to a\n # blob object - use the `.object` property instead to access it\n # This is not a real tag for us, so we skip it\n # https://github.com/rtfd/readthedocs.org/issues/4440\n log.warning('Git tag skipped: %s', tag, exc_info=True)\n continue\n return versions\n\n @property\n def branches(self):\n # Only show remote branches\n retcode, stdout, _ = self.run(\n 'git',\n 'branch',\n '-r',\n record_as_success=True,\n )\n # error (or no branches found)\n if retcode != 0:\n return []\n return self.parse_branches(stdout)\n\n def parse_branches(self, data):\n \"\"\"\n Parse output of git branch -r.\n\n e.g.:\n\n origin/2.0.X\n origin/HEAD -> origin/master\n origin/develop\n origin/master\n origin/release/2.0.0\n origin/release/2.1.0\n \"\"\"\n clean_branches = []\n # StringIO below is expecting Unicode data, so ensure that it gets it.\n if not isinstance(data, str):\n data = str(data)\n delimiter = str(' ').encode('utf-8') if PY2 else str(' ')\n raw_branches = csv.reader(StringIO(data), delimiter=delimiter)\n for branch in raw_branches:\n branch = [f for f in branch if f not in ('', '*')]\n # Handle empty branches\n if branch:\n branch = branch[0]\n if branch.startswith('origin/'):\n verbose_name = branch.replace('origin/', 
'')\n if verbose_name in ['HEAD']:\n continue\n clean_branches.append(\n VCSVersion(self, branch, verbose_name))\n else:\n clean_branches.append(VCSVersion(self, branch, branch))\n return clean_branches\n\n @property\n def commit(self):\n _, stdout, _ = self.run('git', 'rev-parse', 'HEAD')\n return stdout.strip()\n\n def checkout(self, identifier=None):\n self.check_working_dir()\n\n # Clone or update repository\n if self.repo_exists():\n self.set_remote_url(self.repo_url)\n self.fetch()\n else:\n self.make_clean_working_dir()\n self.clone()\n\n # Find proper identifier\n if not identifier:\n identifier = self.default_branch or self.fallback_branch\n\n identifier = self.find_ref(identifier)\n\n # Checkout the correct identifier for this branch.\n code, out, err = self.checkout_revision(identifier)\n if code != 0:\n return code, out, err\n\n # Clean any remains of previous checkouts\n self.run('git', 'clean', '-d', '-f', '-f')\n return code, out, err\n\n def update_submodules(self, config):\n if self.are_submodules_available(config):\n valid, submodules = self.validate_submodules(config)\n if valid:\n self.checkout_submodules(submodules, config)\n else:\n raise RepositoryError(RepositoryError.INVALID_SUBMODULES)\n\n def checkout_submodules(self, submodules, config):\n \"\"\"Checkout all repository submodules.\"\"\"\n self.run('git', 'submodule', 'sync')\n cmd = [\n 'git',\n 'submodule',\n 'update',\n '--init',\n '--force',\n ]\n if config.submodules.recursive:\n cmd.append('--recursive')\n cmd += submodules\n self.run(*cmd)\n\n def find_ref(self, ref):\n # Check if ref starts with 'origin/'\n if ref.startswith('origin/'):\n return ref\n\n # Check if ref is a branch of the origin remote\n if self.ref_exists('remotes/origin/' + ref):\n return 'origin/' + ref\n\n return ref\n\n def ref_exists(self, ref):\n try:\n r = git.Repo(self.working_dir)\n if r.commit(ref):\n return True\n except (BadName, ValueError):\n return False\n return False\n\n @property\n def env(self):\n env = super(Backend, self).env\n env['GIT_DIR'] = os.path.join(self.working_dir, '.git')\n # Don't prompt for username, this requires Git 2.3+\n env['GIT_TERMINAL_PROMPT'] = '0'\n return env\n",
"path": "readthedocs/vcs_support/backends/git.py"
}
] | diff --git a/.travis.yml b/.travis.yml
index 7eff80a97e4..6f4bdf452ee 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -21,6 +21,8 @@ cache:
- ~/.cache/pip
- ~/.nvm/nvm.sh
- ~/.npm
+before_install:
+ - sudo apt-get install -y git
install:
- ./scripts/travis/install_elasticsearch.sh
- pip install tox-travis
diff --git a/docs/install.rst b/docs/install.rst
index 4d6f85d625b..d7234bf9283 100644
--- a/docs/install.rst
+++ b/docs/install.rst
@@ -13,7 +13,7 @@ since it will help you to avoid clutter in your system-wide libraries.
Additionally Read the Docs depends on:
-* `Git`_ (version >=2)
+* `Git`_ (version >=2.17.0)
* `Mercurial`_ (only if you need to work with mercurial repositories)
* `Pip`_ (version >1.5)
* `Redis`_
diff --git a/readthedocs/rtd_tests/tests/test_backend.py b/readthedocs/rtd_tests/tests/test_backend.py
index 3acae2d2036..239c2dc8f57 100644
--- a/readthedocs/rtd_tests/tests/test_backend.py
+++ b/readthedocs/rtd_tests/tests/test_backend.py
@@ -1,21 +1,33 @@
# -*- coding: utf-8 -*-
from __future__ import (
- absolute_import, division, print_function, unicode_literals)
+ absolute_import,
+ division,
+ print_function,
+ unicode_literals,
+)
+import os
from os.path import exists
+from tempfile import mkdtemp
import django_dynamic_fixture as fixture
import pytest
from django.contrib.auth.models import User
-from mock import Mock
+from mock import Mock, patch
from readthedocs.config import ALL
from readthedocs.projects.exceptions import RepositoryError
from readthedocs.projects.models import Feature, Project
from readthedocs.rtd_tests.base import RTDTestCase
from readthedocs.rtd_tests.utils import (
- create_git_tag, make_test_git, make_test_hg)
+ create_git_branch,
+ create_git_tag,
+ delete_git_branch,
+ delete_git_tag,
+ make_test_git,
+ make_test_hg,
+)
class TestGitBackend(RTDTestCase):
@@ -118,6 +130,51 @@ def test_check_invalid_submodule_urls(self):
repo.checkout('invalidsubmodule')
self.assertEqual(e.msg, RepositoryError.INVALID_SUBMODULES)
+ @patch('readthedocs.projects.models.Project.checkout_path')
+ def test_fetch_clean_tags_and_branches(self, checkout_path):
+ upstream_repo = self.project.repo
+ create_git_tag(upstream_repo, 'v01')
+ create_git_tag(upstream_repo, 'v02')
+ create_git_branch(upstream_repo, 'newbranch')
+
+ local_repo = os.path.join(mkdtemp(), 'local')
+ os.mkdir(local_repo)
+ checkout_path.return_value = local_repo
+
+ repo = self.project.vcs_repo()
+ repo.clone()
+
+ delete_git_tag(upstream_repo, 'v02')
+ delete_git_branch(upstream_repo, 'newbranch')
+
+ # We still have all branches and tags in the local repo
+ self.assertEqual(
+ set(['v01', 'v02']),
+ set(vcs.verbose_name for vcs in repo.tags)
+ )
+ self.assertEqual(
+ set([
+ 'relativesubmodule', 'invalidsubmodule',
+ 'master', 'submodule', 'newbranch',
+ ]),
+ set(vcs.verbose_name for vcs in repo.branches)
+ )
+
+ repo.checkout()
+
+ # We don't have the eliminated branches and tags in the local repo
+ self.assertEqual(
+ set(['v01']),
+ set(vcs.verbose_name for vcs in repo.tags)
+ )
+ self.assertEqual(
+ set([
+ 'relativesubmodule', 'invalidsubmodule',
+ 'master', 'submodule'
+ ]),
+ set(vcs.verbose_name for vcs in repo.branches)
+ )
+
class TestHgBackend(RTDTestCase):
def setUp(self):
diff --git a/readthedocs/vcs_support/backends/git.py b/readthedocs/vcs_support/backends/git.py
index 9b117799fb3..2959add5493 100644
--- a/readthedocs/vcs_support/backends/git.py
+++ b/readthedocs/vcs_support/backends/git.py
@@ -122,7 +122,9 @@ def validate_submodules(self, config):
return True, submodules.keys()
def fetch(self):
- code, _, _ = self.run('git', 'fetch', '--tags', '--prune')
+ code, _, _ = self.run(
+ 'git', 'fetch', '--tags', '--prune', '--prune-tags',
+ )
if code != 0:
raise RepositoryError
|
kivy__kivy-6322 | PermissionError is not available in Python2.7
### Versions
* Python: 2.7
* OS: Any
* Kivy: 1.11.0rc1 (ef216431d5b2762480596ed4a2c93a5ecbd5a355)
* Kivy installation method: Installed following [official instructions](https://kivy.org/doc/stable/installation/installation-windows.html#use-development-kivy)
### Description
`PermissionError` isn't builtin error in Python2, so this line in `logger.py` will raise an error https://github.com/kivy/kivy/blob/ef216431d5b2762480596ed4a2c93a5ecbd5a355/kivy/logger.py#L150
| [
{
"content": "'''\nLogger object\n=============\n\nDifferents logging levels are available : trace, debug, info, warning, error\nand critical.\n\nExamples of usage::\n\n from kivy.logger import Logger\n\n Logger.info('title: This is a info message.')\n Logger.debug('title: This is a debug message.')\n\n try:\n raise Exception('bleh')\n except Exception:\n Logger.exception('Something happened!')\n\nThe message passed to the logger is split into two parts, separated by a colon\n(:). The first part is used as a title, and the second part is used as the\nmessage. This way, you can \"categorize\" your message easily. ::\n\n Logger.info('Application: This is a test')\n\n # will appear as\n\n [INFO ] [Application ] This is a test\n\nLogger configuration\n--------------------\n\nThe Logger can be controlled via the Kivy configuration file::\n\n [kivy]\n log_level = info\n log_enable = 1\n log_dir = logs\n log_name = kivy_%y-%m-%d_%_.txt\n log_maxfiles = 100\n\nMore information about the allowed values are described in the\n:mod:`kivy.config` module.\n\nLogger history\n--------------\n\nEven if the logger is not enabled, you still have access to the last 100\nmessages::\n\n from kivy.logger import LoggerHistory\n\n print(LoggerHistory.history)\n\n'''\n\nimport logging\nimport os\nimport sys\nimport kivy\nfrom kivy.compat import PY2\nfrom random import randint\nfrom functools import partial\n\n__all__ = (\n 'Logger', 'LOG_LEVELS', 'COLORS', 'LoggerHistory', 'file_log_handler')\n\nLogger = None\n\nBLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = list(range(8))\n\n# These are the sequences need to get colored ouput\nRESET_SEQ = \"\\033[0m\"\nCOLOR_SEQ = \"\\033[1;%dm\"\nBOLD_SEQ = \"\\033[1m\"\n\nprevious_stderr = sys.stderr\n\n\ndef formatter_message(message, use_color=True):\n if use_color:\n message = message.replace(\"$RESET\", RESET_SEQ)\n message = message.replace(\"$BOLD\", BOLD_SEQ)\n else:\n message = message.replace(\"$RESET\", \"\").replace(\"$BOLD\", \"\")\n return message\n\n\nCOLORS = {\n 'TRACE': MAGENTA,\n 'WARNING': YELLOW,\n 'INFO': GREEN,\n 'DEBUG': CYAN,\n 'CRITICAL': RED,\n 'ERROR': RED}\n\nlogging.TRACE = 9\nLOG_LEVELS = {\n 'trace': logging.TRACE,\n 'debug': logging.DEBUG,\n 'info': logging.INFO,\n 'warning': logging.WARNING,\n 'error': logging.ERROR,\n 'critical': logging.CRITICAL}\n\n\nclass FileHandler(logging.Handler):\n history = []\n filename = 'log.txt'\n fd = None\n log_dir = ''\n\n def purge_logs(self):\n '''Purge log is called randomly to prevent the log directory from being\n filled by lots and lots of log files.\n You've a chance of 1 in 20 that purge log will be fired.\n '''\n if randint(0, 20) != 0:\n return\n if not self.log_dir:\n return\n\n from kivy.config import Config\n maxfiles = Config.getint('kivy', 'log_maxfiles')\n\n if maxfiles < 0:\n return\n\n Logger.info('Logger: Purge log fired. 
Analysing...')\n join = os.path.join\n unlink = os.unlink\n\n # search all log files\n lst = [join(self.log_dir, x) for x in os.listdir(self.log_dir)]\n if len(lst) > maxfiles:\n # get creation time on every files\n lst = [{'fn': x, 'ctime': os.path.getctime(x)} for x in lst]\n\n # sort by date\n lst = sorted(lst, key=lambda x: x['ctime'])\n\n # get the oldest (keep last maxfiles)\n lst = lst[:-maxfiles] if maxfiles else lst\n Logger.info('Logger: Purge %d log files' % len(lst))\n\n # now, unlink every file in the list\n for filename in lst:\n try:\n unlink(filename['fn'])\n except PermissionError as e:\n Logger.info('Logger: Skipped file {0}, {1}'.\n format(filename['fn'], e))\n\n Logger.info('Logger: Purge finished!')\n\n def _configure(self, *largs, **kwargs):\n from time import strftime\n from kivy.config import Config\n log_dir = Config.get('kivy', 'log_dir')\n log_name = Config.get('kivy', 'log_name')\n\n _dir = kivy.kivy_home_dir\n if log_dir and os.path.isabs(log_dir):\n _dir = log_dir\n else:\n _dir = os.path.join(_dir, log_dir)\n if not os.path.exists(_dir):\n os.makedirs(_dir)\n self.log_dir = _dir\n\n pattern = log_name.replace('%_', '@@NUMBER@@')\n pattern = os.path.join(_dir, strftime(pattern))\n n = 0\n while True:\n filename = pattern.replace('@@NUMBER@@', str(n))\n if not os.path.exists(filename):\n break\n n += 1\n if n > 10000: # prevent maybe flooding ?\n raise Exception('Too many logfile, remove them')\n\n if FileHandler.filename == filename and FileHandler.fd is not None:\n return\n FileHandler.filename = filename\n if FileHandler.fd is not None:\n FileHandler.fd.close()\n FileHandler.fd = open(filename, 'w')\n\n Logger.info('Logger: Record log in %s' % filename)\n\n def _write_message(self, record):\n if FileHandler.fd in (None, False):\n return\n\n msg = self.format(record)\n stream = FileHandler.fd\n fs = \"%s\\n\"\n stream.write('[%-7s] ' % record.levelname)\n if PY2:\n try:\n if (isinstance(msg, unicode) and\n getattr(stream, 'encoding', None)):\n ufs = u'%s\\n'\n try:\n stream.write(ufs % msg)\n except UnicodeEncodeError:\n stream.write((ufs % msg).encode(stream.encoding))\n else:\n stream.write(fs % msg)\n except UnicodeError:\n stream.write(fs % msg.encode(\"UTF-8\"))\n else:\n stream.write(fs % msg)\n stream.flush()\n\n def emit(self, message):\n # during the startup, store the message in the history\n if Logger.logfile_activated is None:\n FileHandler.history += [message]\n return\n\n # startup done, if the logfile is not activated, avoid history.\n if Logger.logfile_activated is False:\n FileHandler.history = []\n return\n\n if FileHandler.fd is None:\n try:\n self._configure()\n from kivy.config import Config\n Config.add_callback(self._configure, 'kivy', 'log_dir')\n Config.add_callback(self._configure, 'kivy', 'log_name')\n except Exception:\n # deactivate filehandler...\n FileHandler.fd = False\n Logger.exception('Error while activating FileHandler logger')\n return\n while FileHandler.history:\n _message = FileHandler.history.pop()\n self._write_message(_message)\n\n self._write_message(message)\n\n\nclass LoggerHistory(logging.Handler):\n\n history = []\n\n def emit(self, message):\n LoggerHistory.history = [message] + LoggerHistory.history[:100]\n\n\nclass ColoredFormatter(logging.Formatter):\n\n def __init__(self, msg, use_color=True):\n logging.Formatter.__init__(self, msg)\n self.use_color = use_color\n\n def format(self, record):\n try:\n msg = record.msg.split(':', 1)\n if len(msg) == 2:\n record.msg = '[%-12s]%s' % (msg[0], msg[1])\n except:\n 
pass\n levelname = record.levelname\n if record.levelno == logging.TRACE:\n levelname = 'TRACE'\n record.levelname = levelname\n if self.use_color and levelname in COLORS:\n levelname_color = (\n COLOR_SEQ % (30 + COLORS[levelname]) + levelname + RESET_SEQ)\n record.levelname = levelname_color\n return logging.Formatter.format(self, record)\n\n\nclass ConsoleHandler(logging.StreamHandler):\n\n def filter(self, record):\n try:\n msg = record.msg\n k = msg.split(':', 1)\n if k[0] == 'stderr' and len(k) == 2:\n previous_stderr.write(k[1] + '\\n')\n return False\n except:\n pass\n return True\n\n\nclass LogFile(object):\n\n def __init__(self, channel, func):\n self.buffer = ''\n self.func = func\n self.channel = channel\n self.errors = ''\n\n def write(self, s):\n s = self.buffer + s\n self.flush()\n f = self.func\n channel = self.channel\n lines = s.split('\\n')\n for l in lines[:-1]:\n f('%s: %s' % (channel, l))\n self.buffer = lines[-1]\n\n def flush(self):\n return\n\n def isatty(self):\n return False\n\n\ndef logger_config_update(section, key, value):\n if LOG_LEVELS.get(value) is None:\n raise AttributeError('Loglevel {0!r} doesn\\'t exists'.format(value))\n Logger.setLevel(level=LOG_LEVELS.get(value))\n\n\n#: Kivy default logger instance\nLogger = logging.getLogger('kivy')\nLogger.logfile_activated = None\nLogger.trace = partial(Logger.log, logging.TRACE)\n\n# set the Kivy logger as the default\nlogging.root = Logger\n\n# add default kivy logger\nLogger.addHandler(LoggerHistory())\nfile_log_handler = None\nif 'KIVY_NO_FILELOG' not in os.environ:\n file_log_handler = FileHandler()\n Logger.addHandler(file_log_handler)\n\n# Use the custom handler instead of streaming one.\nif 'KIVY_NO_CONSOLELOG' not in os.environ:\n if hasattr(sys, '_kivy_logging_handler'):\n Logger.addHandler(getattr(sys, '_kivy_logging_handler'))\n else:\n use_color = (\n os.name != 'nt' and\n os.environ.get('KIVY_BUILD') not in ('android', 'ios') and\n os.environ.get('TERM') in (\n 'rxvt',\n 'rxvt-256color',\n 'rxvt-unicode',\n 'rxvt-unicode-256color',\n 'xterm',\n 'xterm-256color',\n )\n )\n if not use_color:\n # No additional control characters will be inserted inside the\n # levelname field, 7 chars will fit \"WARNING\"\n color_fmt = formatter_message(\n '[%(levelname)-7s] %(message)s', use_color)\n else:\n # levelname field width need to take into account the length of the\n # color control codes (7+4 chars for bold+color, and reset)\n color_fmt = formatter_message(\n '[%(levelname)-18s] %(message)s', use_color)\n formatter = ColoredFormatter(color_fmt, use_color=use_color)\n console = ConsoleHandler()\n console.setFormatter(formatter)\n Logger.addHandler(console)\n\n# install stderr handlers\nsys.stderr = LogFile('stderr', Logger.warning)\n\n#: Kivy history handler\nLoggerHistory = LoggerHistory\n",
"path": "kivy/logger.py"
}
] | [
{
"content": "'''\nLogger object\n=============\n\nDifferents logging levels are available : trace, debug, info, warning, error\nand critical.\n\nExamples of usage::\n\n from kivy.logger import Logger\n\n Logger.info('title: This is a info message.')\n Logger.debug('title: This is a debug message.')\n\n try:\n raise Exception('bleh')\n except Exception:\n Logger.exception('Something happened!')\n\nThe message passed to the logger is split into two parts, separated by a colon\n(:). The first part is used as a title, and the second part is used as the\nmessage. This way, you can \"categorize\" your message easily. ::\n\n Logger.info('Application: This is a test')\n\n # will appear as\n\n [INFO ] [Application ] This is a test\n\nLogger configuration\n--------------------\n\nThe Logger can be controlled via the Kivy configuration file::\n\n [kivy]\n log_level = info\n log_enable = 1\n log_dir = logs\n log_name = kivy_%y-%m-%d_%_.txt\n log_maxfiles = 100\n\nMore information about the allowed values are described in the\n:mod:`kivy.config` module.\n\nLogger history\n--------------\n\nEven if the logger is not enabled, you still have access to the last 100\nmessages::\n\n from kivy.logger import LoggerHistory\n\n print(LoggerHistory.history)\n\n'''\n\nimport logging\nimport os\nimport sys\nimport kivy\nfrom kivy.compat import PY2\nfrom random import randint\nfrom functools import partial\n\n__all__ = (\n 'Logger', 'LOG_LEVELS', 'COLORS', 'LoggerHistory', 'file_log_handler')\n\ntry:\n PermissionError\nexcept NameError: # Python 2\n PermissionError = OSError, IOError\n\nLogger = None\n\nBLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = list(range(8))\n\n# These are the sequences need to get colored ouput\nRESET_SEQ = \"\\033[0m\"\nCOLOR_SEQ = \"\\033[1;%dm\"\nBOLD_SEQ = \"\\033[1m\"\n\nprevious_stderr = sys.stderr\n\n\ndef formatter_message(message, use_color=True):\n if use_color:\n message = message.replace(\"$RESET\", RESET_SEQ)\n message = message.replace(\"$BOLD\", BOLD_SEQ)\n else:\n message = message.replace(\"$RESET\", \"\").replace(\"$BOLD\", \"\")\n return message\n\n\nCOLORS = {\n 'TRACE': MAGENTA,\n 'WARNING': YELLOW,\n 'INFO': GREEN,\n 'DEBUG': CYAN,\n 'CRITICAL': RED,\n 'ERROR': RED}\n\nlogging.TRACE = 9\nLOG_LEVELS = {\n 'trace': logging.TRACE,\n 'debug': logging.DEBUG,\n 'info': logging.INFO,\n 'warning': logging.WARNING,\n 'error': logging.ERROR,\n 'critical': logging.CRITICAL}\n\n\nclass FileHandler(logging.Handler):\n history = []\n filename = 'log.txt'\n fd = None\n log_dir = ''\n\n def purge_logs(self):\n '''Purge log is called randomly to prevent the log directory from being\n filled by lots and lots of log files.\n You've a chance of 1 in 20 that purge log will be fired.\n '''\n if randint(0, 20) != 0:\n return\n if not self.log_dir:\n return\n\n from kivy.config import Config\n maxfiles = Config.getint('kivy', 'log_maxfiles')\n\n if maxfiles < 0:\n return\n\n Logger.info('Logger: Purge log fired. 
Analysing...')\n join = os.path.join\n unlink = os.unlink\n\n # search all log files\n lst = [join(self.log_dir, x) for x in os.listdir(self.log_dir)]\n if len(lst) > maxfiles:\n # get creation time on every files\n lst = [{'fn': x, 'ctime': os.path.getctime(x)} for x in lst]\n\n # sort by date\n lst = sorted(lst, key=lambda x: x['ctime'])\n\n # get the oldest (keep last maxfiles)\n lst = lst[:-maxfiles] if maxfiles else lst\n Logger.info('Logger: Purge %d log files' % len(lst))\n\n # now, unlink every file in the list\n for filename in lst:\n try:\n unlink(filename['fn'])\n except PermissionError as e:\n Logger.info('Logger: Skipped file {0}, {1}'.\n format(filename['fn'], e))\n\n Logger.info('Logger: Purge finished!')\n\n def _configure(self, *largs, **kwargs):\n from time import strftime\n from kivy.config import Config\n log_dir = Config.get('kivy', 'log_dir')\n log_name = Config.get('kivy', 'log_name')\n\n _dir = kivy.kivy_home_dir\n if log_dir and os.path.isabs(log_dir):\n _dir = log_dir\n else:\n _dir = os.path.join(_dir, log_dir)\n if not os.path.exists(_dir):\n os.makedirs(_dir)\n self.log_dir = _dir\n\n pattern = log_name.replace('%_', '@@NUMBER@@')\n pattern = os.path.join(_dir, strftime(pattern))\n n = 0\n while True:\n filename = pattern.replace('@@NUMBER@@', str(n))\n if not os.path.exists(filename):\n break\n n += 1\n if n > 10000: # prevent maybe flooding ?\n raise Exception('Too many logfile, remove them')\n\n if FileHandler.filename == filename and FileHandler.fd is not None:\n return\n FileHandler.filename = filename\n if FileHandler.fd is not None:\n FileHandler.fd.close()\n FileHandler.fd = open(filename, 'w')\n\n Logger.info('Logger: Record log in %s' % filename)\n\n def _write_message(self, record):\n if FileHandler.fd in (None, False):\n return\n\n msg = self.format(record)\n stream = FileHandler.fd\n fs = \"%s\\n\"\n stream.write('[%-7s] ' % record.levelname)\n if PY2:\n try:\n if (isinstance(msg, unicode) and\n getattr(stream, 'encoding', None)):\n ufs = u'%s\\n'\n try:\n stream.write(ufs % msg)\n except UnicodeEncodeError:\n stream.write((ufs % msg).encode(stream.encoding))\n else:\n stream.write(fs % msg)\n except UnicodeError:\n stream.write(fs % msg.encode(\"UTF-8\"))\n else:\n stream.write(fs % msg)\n stream.flush()\n\n def emit(self, message):\n # during the startup, store the message in the history\n if Logger.logfile_activated is None:\n FileHandler.history += [message]\n return\n\n # startup done, if the logfile is not activated, avoid history.\n if Logger.logfile_activated is False:\n FileHandler.history = []\n return\n\n if FileHandler.fd is None:\n try:\n self._configure()\n from kivy.config import Config\n Config.add_callback(self._configure, 'kivy', 'log_dir')\n Config.add_callback(self._configure, 'kivy', 'log_name')\n except Exception:\n # deactivate filehandler...\n FileHandler.fd = False\n Logger.exception('Error while activating FileHandler logger')\n return\n while FileHandler.history:\n _message = FileHandler.history.pop()\n self._write_message(_message)\n\n self._write_message(message)\n\n\nclass LoggerHistory(logging.Handler):\n\n history = []\n\n def emit(self, message):\n LoggerHistory.history = [message] + LoggerHistory.history[:100]\n\n\nclass ColoredFormatter(logging.Formatter):\n\n def __init__(self, msg, use_color=True):\n logging.Formatter.__init__(self, msg)\n self.use_color = use_color\n\n def format(self, record):\n try:\n msg = record.msg.split(':', 1)\n if len(msg) == 2:\n record.msg = '[%-12s]%s' % (msg[0], msg[1])\n except:\n 
pass\n levelname = record.levelname\n if record.levelno == logging.TRACE:\n levelname = 'TRACE'\n record.levelname = levelname\n if self.use_color and levelname in COLORS:\n levelname_color = (\n COLOR_SEQ % (30 + COLORS[levelname]) + levelname + RESET_SEQ)\n record.levelname = levelname_color\n return logging.Formatter.format(self, record)\n\n\nclass ConsoleHandler(logging.StreamHandler):\n\n def filter(self, record):\n try:\n msg = record.msg\n k = msg.split(':', 1)\n if k[0] == 'stderr' and len(k) == 2:\n previous_stderr.write(k[1] + '\\n')\n return False\n except:\n pass\n return True\n\n\nclass LogFile(object):\n\n def __init__(self, channel, func):\n self.buffer = ''\n self.func = func\n self.channel = channel\n self.errors = ''\n\n def write(self, s):\n s = self.buffer + s\n self.flush()\n f = self.func\n channel = self.channel\n lines = s.split('\\n')\n for l in lines[:-1]:\n f('%s: %s' % (channel, l))\n self.buffer = lines[-1]\n\n def flush(self):\n return\n\n def isatty(self):\n return False\n\n\ndef logger_config_update(section, key, value):\n if LOG_LEVELS.get(value) is None:\n raise AttributeError('Loglevel {0!r} doesn\\'t exists'.format(value))\n Logger.setLevel(level=LOG_LEVELS.get(value))\n\n\n#: Kivy default logger instance\nLogger = logging.getLogger('kivy')\nLogger.logfile_activated = None\nLogger.trace = partial(Logger.log, logging.TRACE)\n\n# set the Kivy logger as the default\nlogging.root = Logger\n\n# add default kivy logger\nLogger.addHandler(LoggerHistory())\nfile_log_handler = None\nif 'KIVY_NO_FILELOG' not in os.environ:\n file_log_handler = FileHandler()\n Logger.addHandler(file_log_handler)\n\n# Use the custom handler instead of streaming one.\nif 'KIVY_NO_CONSOLELOG' not in os.environ:\n if hasattr(sys, '_kivy_logging_handler'):\n Logger.addHandler(getattr(sys, '_kivy_logging_handler'))\n else:\n use_color = (\n os.name != 'nt' and\n os.environ.get('KIVY_BUILD') not in ('android', 'ios') and\n os.environ.get('TERM') in (\n 'rxvt',\n 'rxvt-256color',\n 'rxvt-unicode',\n 'rxvt-unicode-256color',\n 'xterm',\n 'xterm-256color',\n )\n )\n if not use_color:\n # No additional control characters will be inserted inside the\n # levelname field, 7 chars will fit \"WARNING\"\n color_fmt = formatter_message(\n '[%(levelname)-7s] %(message)s', use_color)\n else:\n # levelname field width need to take into account the length of the\n # color control codes (7+4 chars for bold+color, and reset)\n color_fmt = formatter_message(\n '[%(levelname)-18s] %(message)s', use_color)\n formatter = ColoredFormatter(color_fmt, use_color=use_color)\n console = ConsoleHandler()\n console.setFormatter(formatter)\n Logger.addHandler(console)\n\n# install stderr handlers\nsys.stderr = LogFile('stderr', Logger.warning)\n\n#: Kivy history handler\nLoggerHistory = LoggerHistory\n",
"path": "kivy/logger.py"
}
] | diff --git a/kivy/logger.py b/kivy/logger.py
index 48bb314fcd..bd40701b3d 100644
--- a/kivy/logger.py
+++ b/kivy/logger.py
@@ -65,6 +65,11 @@
__all__ = (
'Logger', 'LOG_LEVELS', 'COLORS', 'LoggerHistory', 'file_log_handler')
+try:
+ PermissionError
+except NameError: # Python 2
+ PermissionError = OSError, IOError
+
Logger = None
BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = list(range(8))
|
DataBiosphere__toil-2028 | Race condition in Mesos batch system
In (very!) rare cases, the Mesos driver thread can crash with a KeyError. Fortunately, the log file demonstrates exactly the interleaving that needs to happen to cause this error:
```
Launched Mesos task 1667171.
Queueing the job command: _toil_worker CactusHalGeneratorUpWrapper aws:us-west-2:birds-first-jobstore e257211f-7dde-4f82-b2d4-1162f85589fe with
job id: 1667173 ...
Launched Mesos task 1667172.
Got offer 99e660ab-d9e0-4a70-9eb0-588da54bd4b0-O5620082 for a non-preemptable slave with 12988.00 MiB memory, 31.00 core(s) and 368882.00 MiB o
f disk.
Preparing to launch Mesos task 1667173 using offer 99e660ab-d9e0-4a70-9eb0-588da54bd4b0-O5620082 ...
Offer 99e660ab-d9e0-4a70-9eb0-588da54bd4b0-O5620082 not suitable to run the tasks with requirements {'cores': 1, 'preemptable': True, 'disk': 2
147483648, 'memory': 4089446400}. Mesos offered 13618905088.0 memory, 31.0 cores and 3.86800812032e+11 of disk on a non-preemptable slave.
Offer 99e660ab-d9e0-4a70-9eb0-588da54bd4b0-O5620082 not suitable to run the tasks with requirements {'cores': 1, 'preemptable': True, 'disk': 2
147483648, 'memory': 100000000}. Mesos offered 13618905088.0 memory, 31.0 cores and 3.86800812032e+11 of disk on a non-preemptable slave.
Failed to call scheduler's resourceOffer
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/toil/batchSystems/mesos/batchSystem.py", line 468, in resourceOffers
self._updateStateToRunning(offer, runnableTasks)
File "/usr/local/lib/python2.7/dist-packages/toil/batchSystems/mesos/batchSystem.py", line 379, in _updateStateToRunning
resources = self.taskResources[resourceKey]
KeyError: 1667173
I0120 07:18:14.162212 21183 sched.cpp:2055] Asked to abort the driver
... queued
Issued job 'CactusHalGeneratorUpWrapper' e257211f-7dde-4f82-b2d4-1162f85589fe with job batch system ID: 1667173 and preemptability: True, cores: 1, disk: 2.0 G, and memory: 3.8 G
```
The `Queueing the job command... queued` messages come from `issueBatchJob` on the leader thread. The `Preparing Mesos task` and similar messages come from `resourceOffers` on the driver thread. And if we look at `issueBatchJob`, there's a subtle race:
```python
self.jobQueues.insertJob(job, jobType)
<---------- leader thread was paused here
self.taskResources[jobID] = job.resources
log.debug("... queued")
```
The job was made available for processing before it was entirely ready. If the leader thread gets interrupted *immediately* after putting the job in the queue, and `resourceOffers` gets a chance to run without interruption for a while, `taskResources` won't be filled in properly and the Mesos driver will crash.
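The fix is to make the job fully described before it becomes visible to the driver thread: write `taskResources[jobID]` first, then call `insertJob` (the two-line reorder in the accompanying diff). Below is a minimal, self-contained toy of that publish-after-initialize ordering; the name `taskResources` mirrors the batch system, everything else is an illustrative assumption rather than the real Toil code:

```python
# Toy of the ordering fix: record a job's resources *before* publishing it
# to the queue that the "driver" thread scans, so the driver never sees a
# job whose resources are missing.
import threading
import queue

taskResources = {}          # jobID -> resources, read by the driver thread
jobQueue = queue.Queue()    # jobs visible to the driver thread


def issue_batch_job(job_id, resources):
    # Correct order: make the job fully described first ...
    taskResources[job_id] = resources
    # ... and only then make it visible to the other thread.
    jobQueue.put(job_id)


def driver_loop():
    while True:
        job_id = jobQueue.get()
        if job_id is None:
            return
        # Safe: by the time a job is visible, its resources already exist.
        taskResources.pop(job_id)


thread = threading.Thread(target=driver_loop)
thread.start()
for i in range(1000):
    issue_batch_job(i, {'cores': 1})
jobQueue.put(None)
thread.join()
```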
| [
{
"content": "# Copyright (C) 2015-2016 Regents of the University of California\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nfrom builtins import filter\nfrom builtins import str\nfrom builtins import object\nimport ast\nimport logging\nimport os\nimport pwd\nimport socket\nimport time\nimport sys\nfrom contextlib import contextmanager\nfrom struct import unpack\n\ntry:\n import cPickle as pickle\nexcept ImportError:\n import pickle\n\n\n# Python 3 compatibility imports\nfrom six.moves.queue import Empty, Queue\nfrom six import iteritems, itervalues\n\nimport mesos.interface\nimport mesos.native\nfrom bd2k.util import strict_bool\nfrom mesos.interface import mesos_pb2\n\nfrom toil import resolveEntryPoint\nfrom toil.batchSystems.abstractBatchSystem import (AbstractScalableBatchSystem,\n BatchSystemLocalSupport,\n NodeInfo)\nfrom toil.batchSystems.mesos import ToilJob, ResourceRequirement, TaskData, JobQueue\n\nlog = logging.getLogger(__name__)\n\n\nclass MesosBatchSystem(BatchSystemLocalSupport,\n AbstractScalableBatchSystem,\n mesos.interface.Scheduler):\n \"\"\"\n A Toil batch system implementation that uses Apache Mesos to distribute toil jobs as Mesos\n tasks over a cluster of slave nodes. A Mesos framework consists of a scheduler and an\n executor. This class acts as the scheduler and is typically run on the master node that also\n runs the Mesos master process with which the scheduler communicates via a driver component.\n The executor is implemented in a separate class. It is run on each slave node and\n communicates with the Mesos slave process via another driver object. The scheduler may also\n be run on a separate node from the master, which we then call somewhat ambiguously the driver\n node.\n \"\"\"\n\n @classmethod\n def supportsHotDeployment(cls):\n return True\n\n @classmethod\n def supportsWorkerCleanup(cls):\n return True\n\n class ExecutorInfo(object):\n def __init__(self, nodeAddress, slaveId, nodeInfo, lastSeen):\n super(MesosBatchSystem.ExecutorInfo, self).__init__()\n self.nodeAddress = nodeAddress\n self.slaveId = slaveId\n self.nodeInfo = nodeInfo\n self.lastSeen = lastSeen\n\n def __init__(self, config, maxCores, maxMemory, maxDisk):\n super(MesosBatchSystem, self).__init__(config, maxCores, maxMemory, maxDisk)\n\n # The hot-deployed resource representing the user script. Will be passed along in every\n # Mesos task. Also see setUserScript().\n self.userScript = None\n \"\"\"\n :type: toil.resource.Resource\n \"\"\"\n\n # Dictionary of queues, which toil assigns jobs to. 
Each queue represents a job type,\n # defined by resource usage\n self.jobQueues = JobQueue()\n\n # Address of the Mesos master in the form host:port where host can be an IP or a hostname\n self.mesosMasterAddress = config.mesosMasterAddress\n\n # Written to when Mesos kills tasks, as directed by Toil\n self.killedJobIds = set()\n\n # The IDs of job to be killed\n self.killJobIds = set()\n\n # Contains jobs on which killBatchJobs were called, regardless of whether or not they\n # actually were killed or ended by themselves\n self.intendedKill = set()\n\n # Map of host address to job ids\n # this is somewhat redundant since Mesos returns the number of workers per\n # node. However, that information isn't guaranteed to reach the leader,\n # so we also track the state here. When the information is returned from\n # mesos, prefer that information over this attempt at state tracking.\n self.hostToJobIDs = {}\n\n # see self.setNodeFilter\n self.nodeFilter = []\n\n # Dict of launched jobIDs to TaskData objects\n self.runningJobMap = {}\n\n # Mesos has no easy way of getting a task's resources so we track them here\n self.taskResources = {}\n\n # Queue of jobs whose status has been updated, according to Mesos\n self.updatedJobsQueue = Queue()\n\n # The Mesos driver used by this scheduler\n self.driver = None\n\n # A dictionary mapping a node's IP to an ExecutorInfo object describing important\n # properties of our executor running on that node. Only an approximation of the truth.\n self.executors = {}\n\n # A set of Mesos slave IDs, one for each slave running on a non-preemptable node. Only an\n # approximation of the truth. Recently launched nodes may be absent from this set for a\n # while and a node's absence from this set does not imply its preemptability. But it is\n # generally safer to assume a node is preemptable since non-preemptability is a stronger\n # requirement. If we tracked the set of preemptable nodes instead, we'd have to use\n # absence as an indicator of non-preemptability and could therefore be misled into\n # believeing that a recently launched preemptable node was non-preemptable.\n self.nonPreemptableNodes = set()\n\n self.executor = self._buildExecutor()\n\n self.lastReconciliation = time.time()\n self.reconciliationPeriod = 120\n\n # These control how frequently to log a message that would indicate if no jobs are\n # currently able to run on the offers given. This can happen if the cluster is busy\n # or if the nodes in the cluster simply don't have enough resources to run the jobs\n self.lastTimeOfferLogged = 0\n self.logPeriod = 30 # seconds\n\n self.ignoredNodes = set()\n\n self._startDriver()\n\n def setUserScript(self, userScript):\n self.userScript = userScript\n\n def ignoreNode(self, nodeAddress):\n self.ignoredNodes.add(nodeAddress)\n\n def unignoreNode(self, nodeAddress):\n self.ignoredNodes.remove(nodeAddress)\n\n def issueBatchJob(self, jobNode):\n \"\"\"\n Issues the following command returning a unique jobID. 
Command is the string to run, memory\n is an int giving the number of bytes the job needs to run in and cores is the number of cpus\n needed for the job and error-file is the path of the file to place any std-err/std-out in.\n \"\"\"\n localID = self.handleLocalJob(jobNode)\n if localID:\n return localID\n self.checkResourceRequest(jobNode.memory, jobNode.cores, jobNode.disk)\n jobID = self.getNextJobID()\n job = ToilJob(jobID=jobID,\n name=str(jobNode),\n resources=ResourceRequirement(**jobNode._requirements),\n command=jobNode.command,\n userScript=self.userScript,\n environment=self.environment.copy(),\n workerCleanupInfo=self.workerCleanupInfo)\n jobType = job.resources\n log.debug(\"Queueing the job command: %s with job id: %s ...\", jobNode.command, str(jobID))\n\n # TODO: round all elements of resources\n\n self.jobQueues.insertJob(job, jobType)\n self.taskResources[jobID] = job.resources\n log.debug(\"... queued\")\n return jobID\n\n def killBatchJobs(self, jobIDs):\n self.killLocalJobs(jobIDs)\n # FIXME: probably still racy\n assert self.driver is not None\n localSet = set()\n for jobID in jobIDs:\n self.killJobIds.add(jobID)\n localSet.add(jobID)\n self.intendedKill.add(jobID)\n # FIXME: a bit too expensive for my taste\n if jobID in self.getIssuedBatchJobIDs():\n taskId = mesos_pb2.TaskID()\n taskId.value = str(jobID)\n self.driver.killTask(taskId)\n else:\n self.killJobIds.remove(jobID)\n localSet.remove(jobID)\n while localSet:\n intersection = localSet.intersection(self.killedJobIds)\n if intersection:\n localSet -= intersection\n self.killedJobIds -= intersection\n else:\n time.sleep(1)\n\n def getIssuedBatchJobIDs(self):\n jobIds = set(self.jobQueues.jobIDs())\n jobIds.update(list(self.runningJobMap.keys()))\n return list(jobIds) + list(self.getIssuedLocalJobIDs())\n\n def getRunningBatchJobIDs(self):\n currentTime = dict()\n for jobID, data in list(self.runningJobMap.items()):\n currentTime[jobID] = time.time() - data.startTime\n currentTime.update(self.getRunningLocalJobIDs())\n return currentTime\n\n def getUpdatedBatchJob(self, maxWait):\n local_tuple = self.getUpdatedLocalJob(0)\n if local_tuple:\n return local_tuple\n while True:\n try:\n item = self.updatedJobsQueue.get(timeout=maxWait)\n except Empty:\n return None\n jobId, exitValue, wallTime = item\n try:\n self.intendedKill.remove(jobId)\n except KeyError:\n log.debug('Job %s ended with status %i, took %s seconds.', jobId, exitValue,\n '???' 
if wallTime is None else str(wallTime))\n return item\n else:\n log.debug('Job %s ended naturally before it could be killed.', jobId)\n\n def nodeInUse(self, nodeIP):\n return nodeIP in self.hostToJobIDs\n\n @contextmanager\n def nodeFiltering(self, filter):\n self.nodeFilter = [filter]\n yield\n self.nodeFilter = []\n\n def getWaitDuration(self):\n \"\"\"\n Gets the period of time to wait (floating point, in seconds) between checking for\n missing/overlong jobs.\n \"\"\"\n return self.reconciliationPeriod\n\n def _buildExecutor(self):\n \"\"\"\n Creates and returns an ExecutorInfo instance representing our executor implementation.\n \"\"\"\n # The executor program is installed as a setuptools entry point by setup.py\n info = mesos_pb2.ExecutorInfo()\n info.name = \"toil\"\n info.command.value = resolveEntryPoint('_toil_mesos_executor')\n info.executor_id.value = \"toil-%i\" % os.getpid()\n info.source = pwd.getpwuid(os.getuid()).pw_name\n return info\n\n def _startDriver(self):\n \"\"\"\n The Mesos driver thread which handles the scheduler's communication with the Mesos master\n \"\"\"\n framework = mesos_pb2.FrameworkInfo()\n framework.user = \"\" # Have Mesos fill in the current user.\n framework.name = \"toil\"\n framework.principal = framework.name\n self.driver = mesos.native.MesosSchedulerDriver(self,\n framework,\n self._resolveAddress(self.mesosMasterAddress),\n True) # enable implicit acknowledgements\n assert self.driver.start() == mesos_pb2.DRIVER_RUNNING\n\n @staticmethod\n def _resolveAddress(address):\n \"\"\"\n Resolves the host in the given string. The input is of the form host[:port]. This method\n is idempotent, i.e. the host may already be a dotted IP address.\n\n >>> # noinspection PyProtectedMember\n >>> f=MesosBatchSystem._resolveAddress\n >>> f('localhost')\n '127.0.0.1'\n >>> f('127.0.0.1')\n '127.0.0.1'\n >>> f('localhost:123')\n '127.0.0.1:123'\n >>> f('127.0.0.1:123')\n '127.0.0.1:123'\n \"\"\"\n address = address.split(':')\n assert len(address) in (1, 2)\n address[0] = socket.gethostbyname(address[0])\n return ':'.join(address)\n\n def shutdown(self):\n self.shutdownLocal()\n log.debug(\"Stopping Mesos driver\")\n self.driver.stop()\n log.debug(\"Joining Mesos driver\")\n driver_result = self.driver.join()\n log.debug(\"Joined Mesos driver\")\n if driver_result != mesos_pb2.DRIVER_STOPPED:\n raise RuntimeError(\"Mesos driver failed with %i\", driver_result)\n\n def registered(self, driver, frameworkId, masterInfo):\n \"\"\"\n Invoked when the scheduler successfully registers with a Mesos master\n \"\"\"\n log.debug(\"Registered with framework ID %s\", frameworkId.value)\n\n def _declineAllOffers(self, driver, offers):\n for offer in offers:\n log.debug(\"Declining offer %s.\", offer.id.value)\n driver.declineOffer(offer.id)\n\n def _parseOffer(self, offer):\n cores = 0\n memory = 0\n disk = 0\n preemptable = None\n for attribute in offer.attributes:\n if attribute.name == 'preemptable':\n assert preemptable is None, \"Attribute 'preemptable' occurs more than once.\"\n preemptable = strict_bool(attribute.text.value)\n if preemptable is None:\n log.debug('Slave not marked as either preemptable or not. 
Assuming non-preemptable.')\n preemptable = False\n for resource in offer.resources:\n if resource.name == \"cpus\":\n cores += resource.scalar.value\n elif resource.name == \"mem\":\n memory += resource.scalar.value\n elif resource.name == \"disk\":\n disk += resource.scalar.value\n return cores, memory, disk, preemptable\n\n def _prepareToRun(self, jobType, offer):\n # Get the first element to insure FIFO\n job = self.jobQueues.nextJobOfType(jobType)\n task = self._newMesosTask(job, offer)\n return task\n\n def _updateStateToRunning(self, offer, runnableTasks):\n for task in runnableTasks:\n resourceKey = int(task.task_id.value)\n resources = self.taskResources[resourceKey]\n slaveIP = socket.gethostbyname(offer.hostname)\n try:\n self.hostToJobIDs[slaveIP].append(resourceKey)\n except KeyError:\n self.hostToJobIDs[slaveIP] = [resourceKey]\n\n self.runningJobMap[int(task.task_id.value)] = TaskData(startTime=time.time(),\n slaveID=offer.slave_id.value,\n slaveIP=slaveIP,\n executorID=task.executor.executor_id.value,\n cores=resources.cores,\n memory=resources.memory)\n del self.taskResources[resourceKey]\n log.debug('Launched Mesos task %s.', task.task_id.value)\n\n def resourceOffers(self, driver, offers):\n \"\"\"\n Invoked when resources have been offered to this framework.\n \"\"\"\n self._trackOfferedNodes(offers)\n\n jobTypes = self.jobQueues.sorted()\n\n # TODO: We may want to assert that numIssued >= numRunning\n if not jobTypes or len(self.getIssuedBatchJobIDs()) == len(self.getRunningBatchJobIDs()):\n log.debug('There are no queued tasks. Declining Mesos offers.')\n # Without jobs, we can get stuck with no jobs and no new offers until we decline it.\n self._declineAllOffers(driver, offers)\n return\n\n unableToRun = True\n # Right now, gives priority to largest jobs\n for offer in offers:\n if offer.hostname in self.ignoredNodes:\n log.debug(\"Declining offer %s because node %s is designated for termination\" %\n (offer.id.value, offer.hostname))\n driver.declineOffer(offer.id)\n continue\n runnableTasks = []\n # TODO: In an offer, can there ever be more than one resource with the same name?\n offerCores, offerMemory, offerDisk, offerPreemptable = self._parseOffer(offer)\n log.debug('Got offer %s for a %spreemptable slave with %.2f MiB memory, %.2f core(s) '\n 'and %.2f MiB of disk.', offer.id.value, '' if offerPreemptable else 'non-',\n offerMemory, offerCores, offerDisk)\n remainingCores = offerCores\n remainingMemory = offerMemory\n remainingDisk = offerDisk\n\n for jobType in jobTypes:\n runnableTasksOfType = []\n # Because we are not removing from the list until outside of the while loop, we\n # must decrement the number of jobs left to run ourselves to avoid an infinite\n # loop.\n nextToLaunchIndex = 0\n # Toil specifies disk and memory in bytes but Mesos uses MiB\n while ( not self.jobQueues.typeEmpty(jobType)\n # On a non-preemptable node we can run any job, on a preemptable node we\n # can only run preemptable jobs:\n and (not offerPreemptable or jobType.preemptable)\n and remainingCores >= jobType.cores\n and remainingDisk >= toMiB(jobType.disk)\n and remainingMemory >= toMiB(jobType.memory)):\n task = self._prepareToRun(jobType, offer)\n # TODO: this used to be a conditional but Hannes wanted it changed to an assert\n # TODO: ... 
so we can understand why it exists.\n assert int(task.task_id.value) not in self.runningJobMap\n runnableTasksOfType.append(task)\n log.debug(\"Preparing to launch Mesos task %s using offer %s ...\",\n task.task_id.value, offer.id.value)\n remainingCores -= jobType.cores\n remainingMemory -= toMiB(jobType.memory)\n remainingDisk -= toMiB(jobType.disk)\n nextToLaunchIndex += 1\n else:\n log.debug('Offer %(offer)s not suitable to run the tasks with requirements '\n '%(requirements)r. Mesos offered %(memory)s memory, %(cores)s cores '\n 'and %(disk)s of disk on a %(non)spreemptable slave.',\n dict(offer=offer.id.value,\n requirements=jobType.__dict__,\n non='' if offerPreemptable else 'non-',\n memory=fromMiB(offerMemory),\n cores=offerCores,\n disk=fromMiB(offerDisk)))\n runnableTasks.extend(runnableTasksOfType)\n # Launch all runnable tasks together so we only call launchTasks once per offer\n if runnableTasks:\n unableToRun = False\n driver.launchTasks(offer.id, runnableTasks)\n self._updateStateToRunning(offer, runnableTasks)\n else:\n log.debug('Although there are queued jobs, none of them could be run with offer %s '\n 'extended to the framework.', offer.id)\n driver.declineOffer(offer.id)\n\n if unableToRun and time.time() > (self.lastTimeOfferLogged + self.logPeriod):\n self.lastTimeOfferLogged = time.time()\n log.debug('Although there are queued jobs, none of them were able to run in '\n 'any of the offers extended to the framework. There are currently '\n '%i jobs running. Enable debug level logging to see more details about '\n 'job types and offers received.', len(self.runningJobMap))\n\n def _trackOfferedNodes(self, offers):\n for offer in offers:\n nodeAddress = socket.gethostbyname(offer.hostname)\n self._registerNode(nodeAddress, offer.slave_id.value)\n preemptable = False\n for attribute in offer.attributes:\n if attribute.name == 'preemptable':\n preemptable = strict_bool(attribute.text.value)\n if preemptable:\n try:\n self.nonPreemptableNodes.remove(offer.slave_id.value)\n except KeyError:\n pass\n else:\n self.nonPreemptableNodes.add(offer.slave_id.value)\n\n def _filterOfferedNodes(self, offers):\n if not self.nodeFilter:\n return offers\n executorInfoOrNone = [self.executors.get(socket.gethostbyname(offer.hostname)) for offer in offers]\n executorInfos = [_f for _f in executorInfoOrNone if _f]\n executorsToConsider = list(filter(self.nodeFilter[0], executorInfos))\n ipsToConsider = {ex.nodeAddress for ex in executorsToConsider}\n return [offer for offer in offers if socket.gethostbyname(offer.hostname) in ipsToConsider]\n\n def _newMesosTask(self, job, offer):\n \"\"\"\n Build the Mesos task object for a given the Toil job and Mesos offer\n \"\"\"\n task = mesos_pb2.TaskInfo()\n task.task_id.value = str(job.jobID)\n task.slave_id.value = offer.slave_id.value\n task.name = job.name\n task.data = pickle.dumps(job)\n task.executor.MergeFrom(self.executor)\n\n cpus = task.resources.add()\n cpus.name = \"cpus\"\n cpus.type = mesos_pb2.Value.SCALAR\n cpus.scalar.value = job.resources.cores\n\n disk = task.resources.add()\n disk.name = \"disk\"\n disk.type = mesos_pb2.Value.SCALAR\n if toMiB(job.resources.disk) > 1:\n disk.scalar.value = toMiB(job.resources.disk)\n else:\n log.warning(\"Job %s uses less disk than Mesos requires. 
Rounding %s up to 1 MiB.\",\n job.jobID, job.resources.disk)\n disk.scalar.value = 1\n mem = task.resources.add()\n mem.name = \"mem\"\n mem.type = mesos_pb2.Value.SCALAR\n if toMiB(job.resources.memory) > 1:\n mem.scalar.value = toMiB(job.resources.memory)\n else:\n log.warning(\"Job %s uses less memory than Mesos requires. Rounding %s up to 1 MiB.\",\n job.jobID, job.resources.memory)\n mem.scalar.value = 1\n return task\n\n def statusUpdate(self, driver, update):\n \"\"\"\n Invoked when the status of a task has changed (e.g., a slave is lost and so the task is\n lost, a task finishes and an executor sends a status update saying so, etc). Note that\n returning from this callback _acknowledges_ receipt of this status update! If for\n whatever reason the scheduler aborts during this callback (or the process exits) another\n status update will be delivered (note, however, that this is currently not true if the\n slave sending the status update is lost/fails during that time).\n \"\"\"\n jobID = int(update.task_id.value)\n stateName = mesos_pb2.TaskState.Name(update.state)\n log.debug(\"Job %i is in state '%s'.\", jobID, stateName)\n\n def jobEnded(_exitStatus, wallTime=None):\n try:\n self.killJobIds.remove(jobID)\n except KeyError:\n pass\n else:\n self.killedJobIds.add(jobID)\n self.updatedJobsQueue.put((jobID, _exitStatus, wallTime))\n slaveIP = None\n try:\n slaveIP = self.runningJobMap[jobID].slaveIP\n except KeyError:\n log.warning(\"Job %i returned exit code %i but isn't tracked as running.\",\n jobID, _exitStatus)\n else:\n del self.runningJobMap[jobID]\n\n try:\n self.hostToJobIDs[slaveIP].remove(jobID)\n except KeyError:\n log.warning(\"Job %i returned exit code %i from unknown host.\",\n jobID, _exitStatus)\n\n if update.state == mesos_pb2.TASK_FINISHED:\n jobEnded(0, wallTime=unpack('d', update.data)[0])\n elif update.state == mesos_pb2.TASK_FAILED:\n try:\n exitStatus = int(update.message)\n except ValueError:\n exitStatus = 255\n log.warning(\"Job %i failed with message '%s'\", jobID, update.message)\n else:\n log.warning('Job %i failed with exit status %i', jobID, exitStatus)\n jobEnded(exitStatus)\n elif update.state in (mesos_pb2.TASK_LOST, mesos_pb2.TASK_KILLED, mesos_pb2.TASK_ERROR):\n log.warning(\"Job %i is in unexpected state %s with message '%s'.\",\n jobID, stateName, update.message)\n jobEnded(255)\n\n def frameworkMessage(self, driver, executorId, slaveId, message):\n \"\"\"\n Invoked when an executor sends a message.\n \"\"\"\n log.debug('Got framework message from executor %s running on slave %s: %s',\n executorId.value, slaveId.value, message)\n message = ast.literal_eval(message)\n assert isinstance(message, dict)\n # Handle the mandatory fields of a message\n nodeAddress = message.pop('address')\n executor = self._registerNode(nodeAddress, slaveId.value)\n # Handle optional message fields\n for k, v in iteritems(message):\n if k == 'nodeInfo':\n assert isinstance(v, dict)\n resources = [taskData for taskData in itervalues(self.runningJobMap)\n if taskData.executorID == executorId.value]\n requestedCores = sum(taskData.cores for taskData in resources)\n requestedMemory = sum(taskData.memory for taskData in resources)\n executor.nodeInfo = NodeInfo(requestedCores=requestedCores, requestedMemory=requestedMemory, **v)\n self.executors[nodeAddress] = executor\n else:\n raise RuntimeError(\"Unknown message field '%s'.\" % k)\n\n def _registerNode(self, nodeAddress, slaveId):\n executor = self.executors.get(nodeAddress)\n if executor is None or executor.slaveId != 
slaveId:\n executor = self.ExecutorInfo(nodeAddress=nodeAddress,\n slaveId=slaveId,\n nodeInfo=None,\n lastSeen=time.time())\n self.executors[nodeAddress] = executor\n else:\n executor.lastSeen = time.time()\n return executor\n\n def getNodes(self, preemptable=None, timeout=600):\n timeout = timeout or sys.maxsize\n return {nodeAddress: executor.nodeInfo\n for nodeAddress, executor in iteritems(self.executors)\n if time.time() - executor.lastSeen < timeout\n and (preemptable is None\n or preemptable == (executor.slaveId not in self.nonPreemptableNodes))}\n\n def reregistered(self, driver, masterInfo):\n \"\"\"\n Invoked when the scheduler re-registers with a newly elected Mesos master.\n \"\"\"\n log.debug('Registered with new master')\n\n def executorLost(self, driver, executorId, slaveId, status):\n \"\"\"\n Invoked when an executor has exited/terminated.\n \"\"\"\n log.warning(\"Executor '%s' lost.\", executorId)\n\n @classmethod\n def setOptions(cl, setOption):\n setOption(\"mesosMasterAddress\", None, None, 'localhost:5050')\n\n\ndef toMiB(n):\n return n / 1024 / 1024\n\n\ndef fromMiB(n):\n return n * 1024 * 1024\n",
"path": "src/toil/batchSystems/mesos/batchSystem.py"
}
] | [
{
"content": "# Copyright (C) 2015-2016 Regents of the University of California\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nfrom builtins import filter\nfrom builtins import str\nfrom builtins import object\nimport ast\nimport logging\nimport os\nimport pwd\nimport socket\nimport time\nimport sys\nfrom contextlib import contextmanager\nfrom struct import unpack\n\ntry:\n import cPickle as pickle\nexcept ImportError:\n import pickle\n\n\n# Python 3 compatibility imports\nfrom six.moves.queue import Empty, Queue\nfrom six import iteritems, itervalues\n\nimport mesos.interface\nimport mesos.native\nfrom bd2k.util import strict_bool\nfrom mesos.interface import mesos_pb2\n\nfrom toil import resolveEntryPoint\nfrom toil.batchSystems.abstractBatchSystem import (AbstractScalableBatchSystem,\n BatchSystemLocalSupport,\n NodeInfo)\nfrom toil.batchSystems.mesos import ToilJob, ResourceRequirement, TaskData, JobQueue\n\nlog = logging.getLogger(__name__)\n\n\nclass MesosBatchSystem(BatchSystemLocalSupport,\n AbstractScalableBatchSystem,\n mesos.interface.Scheduler):\n \"\"\"\n A Toil batch system implementation that uses Apache Mesos to distribute toil jobs as Mesos\n tasks over a cluster of slave nodes. A Mesos framework consists of a scheduler and an\n executor. This class acts as the scheduler and is typically run on the master node that also\n runs the Mesos master process with which the scheduler communicates via a driver component.\n The executor is implemented in a separate class. It is run on each slave node and\n communicates with the Mesos slave process via another driver object. The scheduler may also\n be run on a separate node from the master, which we then call somewhat ambiguously the driver\n node.\n \"\"\"\n\n @classmethod\n def supportsHotDeployment(cls):\n return True\n\n @classmethod\n def supportsWorkerCleanup(cls):\n return True\n\n class ExecutorInfo(object):\n def __init__(self, nodeAddress, slaveId, nodeInfo, lastSeen):\n super(MesosBatchSystem.ExecutorInfo, self).__init__()\n self.nodeAddress = nodeAddress\n self.slaveId = slaveId\n self.nodeInfo = nodeInfo\n self.lastSeen = lastSeen\n\n def __init__(self, config, maxCores, maxMemory, maxDisk):\n super(MesosBatchSystem, self).__init__(config, maxCores, maxMemory, maxDisk)\n\n # The hot-deployed resource representing the user script. Will be passed along in every\n # Mesos task. Also see setUserScript().\n self.userScript = None\n \"\"\"\n :type: toil.resource.Resource\n \"\"\"\n\n # Dictionary of queues, which toil assigns jobs to. 
Each queue represents a job type,\n # defined by resource usage\n self.jobQueues = JobQueue()\n\n # Address of the Mesos master in the form host:port where host can be an IP or a hostname\n self.mesosMasterAddress = config.mesosMasterAddress\n\n # Written to when Mesos kills tasks, as directed by Toil\n self.killedJobIds = set()\n\n # The IDs of job to be killed\n self.killJobIds = set()\n\n # Contains jobs on which killBatchJobs were called, regardless of whether or not they\n # actually were killed or ended by themselves\n self.intendedKill = set()\n\n # Map of host address to job ids\n # this is somewhat redundant since Mesos returns the number of workers per\n # node. However, that information isn't guaranteed to reach the leader,\n # so we also track the state here. When the information is returned from\n # mesos, prefer that information over this attempt at state tracking.\n self.hostToJobIDs = {}\n\n # see self.setNodeFilter\n self.nodeFilter = []\n\n # Dict of launched jobIDs to TaskData objects\n self.runningJobMap = {}\n\n # Mesos has no easy way of getting a task's resources so we track them here\n self.taskResources = {}\n\n # Queue of jobs whose status has been updated, according to Mesos\n self.updatedJobsQueue = Queue()\n\n # The Mesos driver used by this scheduler\n self.driver = None\n\n # A dictionary mapping a node's IP to an ExecutorInfo object describing important\n # properties of our executor running on that node. Only an approximation of the truth.\n self.executors = {}\n\n # A set of Mesos slave IDs, one for each slave running on a non-preemptable node. Only an\n # approximation of the truth. Recently launched nodes may be absent from this set for a\n # while and a node's absence from this set does not imply its preemptability. But it is\n # generally safer to assume a node is preemptable since non-preemptability is a stronger\n # requirement. If we tracked the set of preemptable nodes instead, we'd have to use\n # absence as an indicator of non-preemptability and could therefore be misled into\n # believeing that a recently launched preemptable node was non-preemptable.\n self.nonPreemptableNodes = set()\n\n self.executor = self._buildExecutor()\n\n self.lastReconciliation = time.time()\n self.reconciliationPeriod = 120\n\n # These control how frequently to log a message that would indicate if no jobs are\n # currently able to run on the offers given. This can happen if the cluster is busy\n # or if the nodes in the cluster simply don't have enough resources to run the jobs\n self.lastTimeOfferLogged = 0\n self.logPeriod = 30 # seconds\n\n self.ignoredNodes = set()\n\n self._startDriver()\n\n def setUserScript(self, userScript):\n self.userScript = userScript\n\n def ignoreNode(self, nodeAddress):\n self.ignoredNodes.add(nodeAddress)\n\n def unignoreNode(self, nodeAddress):\n self.ignoredNodes.remove(nodeAddress)\n\n def issueBatchJob(self, jobNode):\n \"\"\"\n Issues the following command returning a unique jobID. 
Command is the string to run, memory\n is an int giving the number of bytes the job needs to run in and cores is the number of cpus\n needed for the job and error-file is the path of the file to place any std-err/std-out in.\n \"\"\"\n localID = self.handleLocalJob(jobNode)\n if localID:\n return localID\n self.checkResourceRequest(jobNode.memory, jobNode.cores, jobNode.disk)\n jobID = self.getNextJobID()\n job = ToilJob(jobID=jobID,\n name=str(jobNode),\n resources=ResourceRequirement(**jobNode._requirements),\n command=jobNode.command,\n userScript=self.userScript,\n environment=self.environment.copy(),\n workerCleanupInfo=self.workerCleanupInfo)\n jobType = job.resources\n log.debug(\"Queueing the job command: %s with job id: %s ...\", jobNode.command, str(jobID))\n\n # TODO: round all elements of resources\n\n self.taskResources[jobID] = job.resources\n self.jobQueues.insertJob(job, jobType)\n log.debug(\"... queued\")\n return jobID\n\n def killBatchJobs(self, jobIDs):\n self.killLocalJobs(jobIDs)\n # FIXME: probably still racy\n assert self.driver is not None\n localSet = set()\n for jobID in jobIDs:\n self.killJobIds.add(jobID)\n localSet.add(jobID)\n self.intendedKill.add(jobID)\n # FIXME: a bit too expensive for my taste\n if jobID in self.getIssuedBatchJobIDs():\n taskId = mesos_pb2.TaskID()\n taskId.value = str(jobID)\n self.driver.killTask(taskId)\n else:\n self.killJobIds.remove(jobID)\n localSet.remove(jobID)\n while localSet:\n intersection = localSet.intersection(self.killedJobIds)\n if intersection:\n localSet -= intersection\n self.killedJobIds -= intersection\n else:\n time.sleep(1)\n\n def getIssuedBatchJobIDs(self):\n jobIds = set(self.jobQueues.jobIDs())\n jobIds.update(list(self.runningJobMap.keys()))\n return list(jobIds) + list(self.getIssuedLocalJobIDs())\n\n def getRunningBatchJobIDs(self):\n currentTime = dict()\n for jobID, data in list(self.runningJobMap.items()):\n currentTime[jobID] = time.time() - data.startTime\n currentTime.update(self.getRunningLocalJobIDs())\n return currentTime\n\n def getUpdatedBatchJob(self, maxWait):\n local_tuple = self.getUpdatedLocalJob(0)\n if local_tuple:\n return local_tuple\n while True:\n try:\n item = self.updatedJobsQueue.get(timeout=maxWait)\n except Empty:\n return None\n jobId, exitValue, wallTime = item\n try:\n self.intendedKill.remove(jobId)\n except KeyError:\n log.debug('Job %s ended with status %i, took %s seconds.', jobId, exitValue,\n '???' 
if wallTime is None else str(wallTime))\n return item\n else:\n log.debug('Job %s ended naturally before it could be killed.', jobId)\n\n def nodeInUse(self, nodeIP):\n return nodeIP in self.hostToJobIDs\n\n @contextmanager\n def nodeFiltering(self, filter):\n self.nodeFilter = [filter]\n yield\n self.nodeFilter = []\n\n def getWaitDuration(self):\n \"\"\"\n Gets the period of time to wait (floating point, in seconds) between checking for\n missing/overlong jobs.\n \"\"\"\n return self.reconciliationPeriod\n\n def _buildExecutor(self):\n \"\"\"\n Creates and returns an ExecutorInfo instance representing our executor implementation.\n \"\"\"\n # The executor program is installed as a setuptools entry point by setup.py\n info = mesos_pb2.ExecutorInfo()\n info.name = \"toil\"\n info.command.value = resolveEntryPoint('_toil_mesos_executor')\n info.executor_id.value = \"toil-%i\" % os.getpid()\n info.source = pwd.getpwuid(os.getuid()).pw_name\n return info\n\n def _startDriver(self):\n \"\"\"\n The Mesos driver thread which handles the scheduler's communication with the Mesos master\n \"\"\"\n framework = mesos_pb2.FrameworkInfo()\n framework.user = \"\" # Have Mesos fill in the current user.\n framework.name = \"toil\"\n framework.principal = framework.name\n self.driver = mesos.native.MesosSchedulerDriver(self,\n framework,\n self._resolveAddress(self.mesosMasterAddress),\n True) # enable implicit acknowledgements\n assert self.driver.start() == mesos_pb2.DRIVER_RUNNING\n\n @staticmethod\n def _resolveAddress(address):\n \"\"\"\n Resolves the host in the given string. The input is of the form host[:port]. This method\n is idempotent, i.e. the host may already be a dotted IP address.\n\n >>> # noinspection PyProtectedMember\n >>> f=MesosBatchSystem._resolveAddress\n >>> f('localhost')\n '127.0.0.1'\n >>> f('127.0.0.1')\n '127.0.0.1'\n >>> f('localhost:123')\n '127.0.0.1:123'\n >>> f('127.0.0.1:123')\n '127.0.0.1:123'\n \"\"\"\n address = address.split(':')\n assert len(address) in (1, 2)\n address[0] = socket.gethostbyname(address[0])\n return ':'.join(address)\n\n def shutdown(self):\n self.shutdownLocal()\n log.debug(\"Stopping Mesos driver\")\n self.driver.stop()\n log.debug(\"Joining Mesos driver\")\n driver_result = self.driver.join()\n log.debug(\"Joined Mesos driver\")\n if driver_result != mesos_pb2.DRIVER_STOPPED:\n raise RuntimeError(\"Mesos driver failed with %i\", driver_result)\n\n def registered(self, driver, frameworkId, masterInfo):\n \"\"\"\n Invoked when the scheduler successfully registers with a Mesos master\n \"\"\"\n log.debug(\"Registered with framework ID %s\", frameworkId.value)\n\n def _declineAllOffers(self, driver, offers):\n for offer in offers:\n log.debug(\"Declining offer %s.\", offer.id.value)\n driver.declineOffer(offer.id)\n\n def _parseOffer(self, offer):\n cores = 0\n memory = 0\n disk = 0\n preemptable = None\n for attribute in offer.attributes:\n if attribute.name == 'preemptable':\n assert preemptable is None, \"Attribute 'preemptable' occurs more than once.\"\n preemptable = strict_bool(attribute.text.value)\n if preemptable is None:\n log.debug('Slave not marked as either preemptable or not. 
Assuming non-preemptable.')\n preemptable = False\n for resource in offer.resources:\n if resource.name == \"cpus\":\n cores += resource.scalar.value\n elif resource.name == \"mem\":\n memory += resource.scalar.value\n elif resource.name == \"disk\":\n disk += resource.scalar.value\n return cores, memory, disk, preemptable\n\n def _prepareToRun(self, jobType, offer):\n # Get the first element to insure FIFO\n job = self.jobQueues.nextJobOfType(jobType)\n task = self._newMesosTask(job, offer)\n return task\n\n def _updateStateToRunning(self, offer, runnableTasks):\n for task in runnableTasks:\n resourceKey = int(task.task_id.value)\n resources = self.taskResources[resourceKey]\n slaveIP = socket.gethostbyname(offer.hostname)\n try:\n self.hostToJobIDs[slaveIP].append(resourceKey)\n except KeyError:\n self.hostToJobIDs[slaveIP] = [resourceKey]\n\n self.runningJobMap[int(task.task_id.value)] = TaskData(startTime=time.time(),\n slaveID=offer.slave_id.value,\n slaveIP=slaveIP,\n executorID=task.executor.executor_id.value,\n cores=resources.cores,\n memory=resources.memory)\n del self.taskResources[resourceKey]\n log.debug('Launched Mesos task %s.', task.task_id.value)\n\n def resourceOffers(self, driver, offers):\n \"\"\"\n Invoked when resources have been offered to this framework.\n \"\"\"\n self._trackOfferedNodes(offers)\n\n jobTypes = self.jobQueues.sorted()\n\n # TODO: We may want to assert that numIssued >= numRunning\n if not jobTypes or len(self.getIssuedBatchJobIDs()) == len(self.getRunningBatchJobIDs()):\n log.debug('There are no queued tasks. Declining Mesos offers.')\n # Without jobs, we can get stuck with no jobs and no new offers until we decline it.\n self._declineAllOffers(driver, offers)\n return\n\n unableToRun = True\n # Right now, gives priority to largest jobs\n for offer in offers:\n if offer.hostname in self.ignoredNodes:\n log.debug(\"Declining offer %s because node %s is designated for termination\" %\n (offer.id.value, offer.hostname))\n driver.declineOffer(offer.id)\n continue\n runnableTasks = []\n # TODO: In an offer, can there ever be more than one resource with the same name?\n offerCores, offerMemory, offerDisk, offerPreemptable = self._parseOffer(offer)\n log.debug('Got offer %s for a %spreemptable slave with %.2f MiB memory, %.2f core(s) '\n 'and %.2f MiB of disk.', offer.id.value, '' if offerPreemptable else 'non-',\n offerMemory, offerCores, offerDisk)\n remainingCores = offerCores\n remainingMemory = offerMemory\n remainingDisk = offerDisk\n\n for jobType in jobTypes:\n runnableTasksOfType = []\n # Because we are not removing from the list until outside of the while loop, we\n # must decrement the number of jobs left to run ourselves to avoid an infinite\n # loop.\n nextToLaunchIndex = 0\n # Toil specifies disk and memory in bytes but Mesos uses MiB\n while ( not self.jobQueues.typeEmpty(jobType)\n # On a non-preemptable node we can run any job, on a preemptable node we\n # can only run preemptable jobs:\n and (not offerPreemptable or jobType.preemptable)\n and remainingCores >= jobType.cores\n and remainingDisk >= toMiB(jobType.disk)\n and remainingMemory >= toMiB(jobType.memory)):\n task = self._prepareToRun(jobType, offer)\n # TODO: this used to be a conditional but Hannes wanted it changed to an assert\n # TODO: ... 
so we can understand why it exists.\n assert int(task.task_id.value) not in self.runningJobMap\n runnableTasksOfType.append(task)\n log.debug(\"Preparing to launch Mesos task %s using offer %s ...\",\n task.task_id.value, offer.id.value)\n remainingCores -= jobType.cores\n remainingMemory -= toMiB(jobType.memory)\n remainingDisk -= toMiB(jobType.disk)\n nextToLaunchIndex += 1\n else:\n log.debug('Offer %(offer)s not suitable to run the tasks with requirements '\n '%(requirements)r. Mesos offered %(memory)s memory, %(cores)s cores '\n 'and %(disk)s of disk on a %(non)spreemptable slave.',\n dict(offer=offer.id.value,\n requirements=jobType.__dict__,\n non='' if offerPreemptable else 'non-',\n memory=fromMiB(offerMemory),\n cores=offerCores,\n disk=fromMiB(offerDisk)))\n runnableTasks.extend(runnableTasksOfType)\n # Launch all runnable tasks together so we only call launchTasks once per offer\n if runnableTasks:\n unableToRun = False\n driver.launchTasks(offer.id, runnableTasks)\n self._updateStateToRunning(offer, runnableTasks)\n else:\n log.debug('Although there are queued jobs, none of them could be run with offer %s '\n 'extended to the framework.', offer.id)\n driver.declineOffer(offer.id)\n\n if unableToRun and time.time() > (self.lastTimeOfferLogged + self.logPeriod):\n self.lastTimeOfferLogged = time.time()\n log.debug('Although there are queued jobs, none of them were able to run in '\n 'any of the offers extended to the framework. There are currently '\n '%i jobs running. Enable debug level logging to see more details about '\n 'job types and offers received.', len(self.runningJobMap))\n\n def _trackOfferedNodes(self, offers):\n for offer in offers:\n nodeAddress = socket.gethostbyname(offer.hostname)\n self._registerNode(nodeAddress, offer.slave_id.value)\n preemptable = False\n for attribute in offer.attributes:\n if attribute.name == 'preemptable':\n preemptable = strict_bool(attribute.text.value)\n if preemptable:\n try:\n self.nonPreemptableNodes.remove(offer.slave_id.value)\n except KeyError:\n pass\n else:\n self.nonPreemptableNodes.add(offer.slave_id.value)\n\n def _filterOfferedNodes(self, offers):\n if not self.nodeFilter:\n return offers\n executorInfoOrNone = [self.executors.get(socket.gethostbyname(offer.hostname)) for offer in offers]\n executorInfos = [_f for _f in executorInfoOrNone if _f]\n executorsToConsider = list(filter(self.nodeFilter[0], executorInfos))\n ipsToConsider = {ex.nodeAddress for ex in executorsToConsider}\n return [offer for offer in offers if socket.gethostbyname(offer.hostname) in ipsToConsider]\n\n def _newMesosTask(self, job, offer):\n \"\"\"\n Build the Mesos task object for a given the Toil job and Mesos offer\n \"\"\"\n task = mesos_pb2.TaskInfo()\n task.task_id.value = str(job.jobID)\n task.slave_id.value = offer.slave_id.value\n task.name = job.name\n task.data = pickle.dumps(job)\n task.executor.MergeFrom(self.executor)\n\n cpus = task.resources.add()\n cpus.name = \"cpus\"\n cpus.type = mesos_pb2.Value.SCALAR\n cpus.scalar.value = job.resources.cores\n\n disk = task.resources.add()\n disk.name = \"disk\"\n disk.type = mesos_pb2.Value.SCALAR\n if toMiB(job.resources.disk) > 1:\n disk.scalar.value = toMiB(job.resources.disk)\n else:\n log.warning(\"Job %s uses less disk than Mesos requires. 
Rounding %s up to 1 MiB.\",\n job.jobID, job.resources.disk)\n disk.scalar.value = 1\n mem = task.resources.add()\n mem.name = \"mem\"\n mem.type = mesos_pb2.Value.SCALAR\n if toMiB(job.resources.memory) > 1:\n mem.scalar.value = toMiB(job.resources.memory)\n else:\n log.warning(\"Job %s uses less memory than Mesos requires. Rounding %s up to 1 MiB.\",\n job.jobID, job.resources.memory)\n mem.scalar.value = 1\n return task\n\n def statusUpdate(self, driver, update):\n \"\"\"\n Invoked when the status of a task has changed (e.g., a slave is lost and so the task is\n lost, a task finishes and an executor sends a status update saying so, etc). Note that\n returning from this callback _acknowledges_ receipt of this status update! If for\n whatever reason the scheduler aborts during this callback (or the process exits) another\n status update will be delivered (note, however, that this is currently not true if the\n slave sending the status update is lost/fails during that time).\n \"\"\"\n jobID = int(update.task_id.value)\n stateName = mesos_pb2.TaskState.Name(update.state)\n log.debug(\"Job %i is in state '%s'.\", jobID, stateName)\n\n def jobEnded(_exitStatus, wallTime=None):\n try:\n self.killJobIds.remove(jobID)\n except KeyError:\n pass\n else:\n self.killedJobIds.add(jobID)\n self.updatedJobsQueue.put((jobID, _exitStatus, wallTime))\n slaveIP = None\n try:\n slaveIP = self.runningJobMap[jobID].slaveIP\n except KeyError:\n log.warning(\"Job %i returned exit code %i but isn't tracked as running.\",\n jobID, _exitStatus)\n else:\n del self.runningJobMap[jobID]\n\n try:\n self.hostToJobIDs[slaveIP].remove(jobID)\n except KeyError:\n log.warning(\"Job %i returned exit code %i from unknown host.\",\n jobID, _exitStatus)\n\n if update.state == mesos_pb2.TASK_FINISHED:\n jobEnded(0, wallTime=unpack('d', update.data)[0])\n elif update.state == mesos_pb2.TASK_FAILED:\n try:\n exitStatus = int(update.message)\n except ValueError:\n exitStatus = 255\n log.warning(\"Job %i failed with message '%s'\", jobID, update.message)\n else:\n log.warning('Job %i failed with exit status %i', jobID, exitStatus)\n jobEnded(exitStatus)\n elif update.state in (mesos_pb2.TASK_LOST, mesos_pb2.TASK_KILLED, mesos_pb2.TASK_ERROR):\n log.warning(\"Job %i is in unexpected state %s with message '%s'.\",\n jobID, stateName, update.message)\n jobEnded(255)\n\n def frameworkMessage(self, driver, executorId, slaveId, message):\n \"\"\"\n Invoked when an executor sends a message.\n \"\"\"\n log.debug('Got framework message from executor %s running on slave %s: %s',\n executorId.value, slaveId.value, message)\n message = ast.literal_eval(message)\n assert isinstance(message, dict)\n # Handle the mandatory fields of a message\n nodeAddress = message.pop('address')\n executor = self._registerNode(nodeAddress, slaveId.value)\n # Handle optional message fields\n for k, v in iteritems(message):\n if k == 'nodeInfo':\n assert isinstance(v, dict)\n resources = [taskData for taskData in itervalues(self.runningJobMap)\n if taskData.executorID == executorId.value]\n requestedCores = sum(taskData.cores for taskData in resources)\n requestedMemory = sum(taskData.memory for taskData in resources)\n executor.nodeInfo = NodeInfo(requestedCores=requestedCores, requestedMemory=requestedMemory, **v)\n self.executors[nodeAddress] = executor\n else:\n raise RuntimeError(\"Unknown message field '%s'.\" % k)\n\n def _registerNode(self, nodeAddress, slaveId):\n executor = self.executors.get(nodeAddress)\n if executor is None or executor.slaveId != 
slaveId:\n executor = self.ExecutorInfo(nodeAddress=nodeAddress,\n slaveId=slaveId,\n nodeInfo=None,\n lastSeen=time.time())\n self.executors[nodeAddress] = executor\n else:\n executor.lastSeen = time.time()\n return executor\n\n def getNodes(self, preemptable=None, timeout=600):\n timeout = timeout or sys.maxsize\n return {nodeAddress: executor.nodeInfo\n for nodeAddress, executor in iteritems(self.executors)\n if time.time() - executor.lastSeen < timeout\n and (preemptable is None\n or preemptable == (executor.slaveId not in self.nonPreemptableNodes))}\n\n def reregistered(self, driver, masterInfo):\n \"\"\"\n Invoked when the scheduler re-registers with a newly elected Mesos master.\n \"\"\"\n log.debug('Registered with new master')\n\n def executorLost(self, driver, executorId, slaveId, status):\n \"\"\"\n Invoked when an executor has exited/terminated.\n \"\"\"\n log.warning(\"Executor '%s' lost.\", executorId)\n\n @classmethod\n def setOptions(cl, setOption):\n setOption(\"mesosMasterAddress\", None, None, 'localhost:5050')\n\n\ndef toMiB(n):\n return n / 1024 / 1024\n\n\ndef fromMiB(n):\n return n * 1024 * 1024\n",
"path": "src/toil/batchSystems/mesos/batchSystem.py"
}
] | diff --git a/src/toil/batchSystems/mesos/batchSystem.py b/src/toil/batchSystems/mesos/batchSystem.py
index 6a671049aa..25da89b8d0 100644
--- a/src/toil/batchSystems/mesos/batchSystem.py
+++ b/src/toil/batchSystems/mesos/batchSystem.py
@@ -190,8 +190,8 @@ def issueBatchJob(self, jobNode):
# TODO: round all elements of resources
- self.jobQueues.insertJob(job, jobType)
self.taskResources[jobID] = job.resources
+ self.jobQueues.insertJob(job, jobType)
log.debug("... queued")
return jobID
|
ckan__ckan-8093 | readthedocs sphinx build failures
## CKAN version
master
## Describe the bug
Infinite recursion in the docs build; it looks like no release tags are returned by `git tag -l`, so the version helpers in `doc/conf.py` keep calling each other.
### Steps to reproduce
Check the Sphinx build logs on Read the Docs.
### Expected behavior
The docs build on Read the Docs succeeds.
### Additional details
```python-traceback
Traceback (most recent call last):
File "/home/docs/checkouts/readthedocs.org/user_builds/ckan/envs/latest/lib/python3.10/site-packages/sphinx/config.py", line 358, in eval_config_file
exec(code, namespace) # NoQA: S102
File "/home/docs/checkouts/readthedocs.org/user_builds/ckan/checkouts/latest/doc/conf.py", line 388, in <module>
current_release_tag_value = get_current_release_tag()
File "/home/docs/checkouts/readthedocs.org/user_builds/ckan/checkouts/latest/doc/conf.py", line 211, in get_current_release_tag
return get_latest_release_tag()
File "/home/docs/checkouts/readthedocs.org/user_builds/ckan/checkouts/latest/doc/conf.py", line 228, in get_latest_release_tag
return get_latest_release_version()
File "/home/docs/checkouts/readthedocs.org/user_builds/ckan/checkouts/latest/doc/conf.py", line 237, in get_latest_release_version
version = get_latest_release_tag()[len('ckan-'):]
File "/home/docs/checkouts/readthedocs.org/user_builds/ckan/checkouts/latest/doc/conf.py", line 228, in get_latest_release_tag
return get_latest_release_version()
File "/home/docs/checkouts/readthedocs.org/user_builds/ckan/checkouts/latest/doc/conf.py", line 237, in get_latest_release_version
version = get_latest_release_tag()[len('ckan-'):]
…
```
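The traceback is mutual recursion in the fallback path: when `get_release_tags()` returns an empty list (e.g. no `ckan-*` tags are available in the checkout), `get_latest_release_tag()` falls back to `get_latest_release_version()`, which immediately calls `get_latest_release_tag()` again, until the recursion limit is hit. A minimal, self-contained sketch of the cycle with a hypothetical guard; the fallback constant is an assumption for illustration, not the actual fix:

```python
# Simplified version of the helpers in doc/conf.py. With no 'ckan-*' tags,
# the original code loops: latest_tag -> latest_version -> latest_tag -> ...
import subprocess

FALLBACK_TAG = 'ckan-0.0.0'  # assumption: any sentinel that breaks the cycle


def get_release_tags():
    try:
        out = subprocess.check_output(['git', 'tag', '-l']).decode('utf8')
    except (OSError, subprocess.CalledProcessError):
        out = ''
    # Lexical sort here; the real code sorts by parsed version.
    return sorted(tag for tag in out.split() if tag.startswith('ckan-'))


def get_latest_release_tag():
    tags = get_release_tags()
    if tags:
        return tags[-1]
    # Original: return get_latest_release_version()  -> infinite recursion
    return FALLBACK_TAG


def get_latest_release_version():
    return get_latest_release_tag()[len('ckan-'):]


print(get_latest_release_version())
```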
| [
{
"content": "# -*- coding: utf-8 -*-\n#\n# CKAN documentation build configuration file, created by\n# sphinx-quickstart on Sun Oct 25 16:47:17 2009.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# The contents of this file are pickled, so don't put values in the namespace\n# that aren't pickleable (module imports are okay, they're removed automatically).\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nfrom datetime import date\nimport re\nimport os\nimport subprocess\n\nfrom packaging.version import parse as version_parse\n\nimport ckan\n\n# If your extensions (or modules documented by autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#sys.path.append(os.path.abspath('.'))\n\n# General configuration\n# ---------------------\n\nrst_epilog = '''\n\n.. |virtualenv_parent_dir| replace:: /usr/lib/ckan\n.. |virtualenv| replace:: |virtualenv_parent_dir|/default\n.. |activate| replace:: . |virtualenv|/bin/activate\n.. |config_parent_dir| replace:: /etc/ckan\n.. |config_dir| replace:: |config_parent_dir|/default\n.. |production.ini| replace:: |config_dir|/production.ini\n.. |development.ini| replace:: |config_dir|/development.ini\n.. |ckan.ini| replace:: |config_dir|/ckan.ini\n.. |git_url| replace:: \\https://github.com/ckan/ckan.git\n.. |raw_git_url| replace:: \\https://raw.githubusercontent.com/ckan/ckan\n.. |postgres| replace:: PostgreSQL\n.. |database| replace:: ckan_default\n.. |database_user| replace:: ckan_default\n.. |datastore| replace:: datastore_default\n.. |datastore_user| replace:: datastore_default\n.. |test_database| replace:: ckan_test\n.. |test_datastore| replace:: datastore_test\n.. |apache_config_file| replace:: /etc/apache2/sites-available/ckan_default.conf\n.. |apache.wsgi| replace:: |config_dir|/apache.wsgi\n.. |wsgi.py| replace:: |config_dir|/wsgi.py\n.. |data_dir| replace:: |config_dir|/data\n.. |sstore| replace:: |config_dir|/sstore\n.. |storage_parent_dir| replace:: /var/lib/ckan\n.. |storage_dir| replace:: |storage_parent_dir|/default\n.. |storage_path| replace:: |storage_parent_dir|/default\n.. |restart_uwsgi| replace:: sudo supervisorctl restart ckan-uwsgi:*\n.. |solr| replace:: Solr\n.. |restructuredtext| replace:: reStructuredText\n.. |nginx| replace:: Nginx\n.. |sqlite| replace:: SQLite\n.. |python| replace:: Python\n.. |sqlalchemy| replace:: SQLAlchemy\n.. |javascript| replace:: JavaScript\n.. |apache| replace:: Apache\n.. |nginx_config_file| replace:: /etc/nginx/sites-available/ckan\n.. |reload_nginx| replace:: sudo service nginx reload\n.. |restart_nginx| replace:: sudo service nginx restart\n.. |jquery| replace:: jQuery\n.. |nodejs| replace:: Node.js\n\n.. _Jinja2: http://jinja.pocoo.org/\n.. _CKAN front page: http://127.0.0.1:5000\n.. _bootstrap: http://getbootstrap.com/2.3.2/\n.. _CKAN issue tracker: https://github.com/ckan/ckan/issues\n\n'''\n\n# Add any Sphinx extension module names here, as strings. 
They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = [\n 'sphinx.ext.autodoc', 'sphinx.ext.todo',\n 'sphinx.ext.autosummary', 'ckan.plugins.toolkit_sphinx_extension',\n]\nautodoc_member_order = 'bysource'\ntodo_include_todos = True\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8'\n\n# The master toctree document.\nmaster_doc = 'contents'\n\n# General information about the project.\nproject = u'CKAN'\nproject_short_name = u'CKAN'\ncopyright = u'© 2009-{} '.format(date.today().strftime(\"%Y\"))\ncopyright += u'''<a href=\"https://okfn.org/\">Open Knowledge Foundation</a> and\n <a href=\"https://github.com/ckan/ckan/graphs/contributors\">contributors</a>.\n Licensed under <a\n href=\"https://creativecommons.org/licenses/by-sa/3.0/\">Creative Commons\n Attribution ShareAlike (Unported) v3.0 License</a>.<br />\n <img src=\"https://licensebuttons.net/l/by-sa/3.0/80x15.png\" alt=\"CC License Logo\" />\n <a href=\"https://opendefinition.org/\">\n <img src=\"https://assets.okfn.org/images/ok_buttons/oc_80x15_blue.png\" border=\"0\"\n alt=\"{{ _('Open Content') }}\" />\n </a>\n '''\nhtml_show_sphinx = False\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = ckan.__version__.rstrip('abcdefgh')\n# The full version, including alpha/beta/rc tags.\nrelease = ckan.__version__\nversion_re = None\npoint_releases_ = None\n\nSUPPORTED_CKAN_VERSIONS = 2\n\n\ndef get_release_tags():\n git_tags = subprocess.check_output(\n ['git', 'tag', '-l'], stderr=subprocess.STDOUT).decode('utf8')\n git_tags = git_tags.split()\n release_tags_ = [tag for tag in git_tags if tag.startswith('ckan-')]\n\n # git tag -l prints out the tags in the right order anyway, but don't rely\n # on that, sort them again here for good measure.\n release_tags_.sort(key=version_parse)\n return release_tags_\n\n\ndef parse_version(version_):\n '''Parses version string\n ckan-2.1.3 -> ('2', '1', '3')\n ckan-2.1 -> ('2', '1', None) (the occasion when we didn't do semver)\n '''\n global version_re\n if version_re is None:\n version_re = re.compile('(?:ckan-)?(\\d+)\\.(\\d+)(?:\\.(\\d+))?[a-z]?')\n if isinstance(version_, bytes):\n version_ = version_.decode()\n return version_re.match(version_).groups()\n\n\ndef get_equivalent_point_release(version_):\n '''Returns the equivalent point release of any given version.\n\n e.g.\n ckan-2.1.3 -> ckan-2.1\n ckan-2.1 -> ckan-2.1 (the occasion when we didn't do semver)\n '''\n return 'ckan-%s.%s' % parse_version(version_)[:2]\n\n\ndef get_point_releases():\n '''\n returns ['ckan-1.3', 'ckan-1.4', ... 
'ckan-2.0', 'ckan-2.1', ...]\n '''\n global point_releases_\n if point_releases_ is None:\n releases = get_release_tags()\n point_releases_ = []\n for release in releases:\n point_release = get_equivalent_point_release(release)\n if point_release not in point_releases_:\n point_releases_.append(point_release)\n return point_releases_\n\n\ndef get_status_of_this_version():\n '''Returns whether this release is supported or another category.\n '''\n equiv_point_release = get_equivalent_point_release(version)\n point_releases_ = get_point_releases()\n supported_point_releases = point_releases_[-int(SUPPORTED_CKAN_VERSIONS):]\n if equiv_point_release in supported_point_releases:\n return 'supported'\n else:\n return 'unsupported'\n\n\ndef get_current_release_tag():\n ''' Return the name of the tag for the current release\n\n e.g.: \"ckan-2.7.4\"\n\n '''\n release_tags_ = get_release_tags()\n\n current_tag = \"ckan-{}\".format(version)\n\n if release_tags_.__contains__(current_tag):\n return current_tag\n else:\n # Un-released tag (eg master or a beta version), use the latest one\n return get_latest_release_tag()\n\n\ndef get_latest_release_tag():\n '''Return the name of the git tag for the latest stable release.\n\n e.g.: \"ckan-2.7.4\"\n\n This requires git to be installed.\n\n '''\n release_tags_ = get_release_tags()\n\n if release_tags_:\n return release_tags_[-1]\n else:\n # Un-released tag (eg master or a beta version), use the latest one\n return get_latest_release_version()\n\n\ndef get_latest_release_version():\n '''Return the version number of the latest stable release.\n\n e.g. \"2.1.1\"\n\n '''\n version = get_latest_release_tag()[len('ckan-'):]\n\n # TODO: We could assert here that latest_version matches X.Y.Z.\n\n return version\n\n\ndef get_current_release_version():\n '''Return the version number of the current release.\n\n e.g. \"2.1.1\"\n\n '''\n version = get_current_release_tag()[len('ckan-'):]\n\n # TODO: We could assert here that latest_version matches X.Y.Z.\n\n return version\n\n\ndef get_previous_release_version() -> str:\n \"\"\"Returns the version number of the previous release\n\n eg if the latest release is 2.9.5, it returns 2.8.10\n\n \"\"\"\n current_version = parse_version(get_current_release_version())\n\n previous_tag_prefix = f\"ckan-{current_version[0]}.{int(current_version[1]) - 1}\"\n\n previous_version_tags = [\n r for r in get_release_tags() if r.startswith(previous_tag_prefix)\n ]\n previous_release_version = previous_version_tags[-1][len(\"ckan-\"):]\n return previous_release_version\n\n\ndef get_latest_package_name(distro, py_version=None):\n '''Return the filename of the Ubuntu package for the latest stable release.\n\n e.g. \"python-ckan_2.1-trusty_amd64.deb\"\n\n If ``py_version`` is provided, it's added as part of the iter number:\n\n e.g. \"python-ckan_2.9-py3-focal_amd64.deb\"\n\n '''\n # We don't create a new package file name for a patch release like 2.1.1,\n # instead we just update the existing 2.1 package. 
So package names only\n # have the X.Y part of the version number in them, not X.Y.Z.\n version = get_latest_release_version()\n latest_minor_version = version[:version.find(\".\", 3)]\n\n if py_version:\n name = 'python-ckan_{version}-py{py_version}-{distro}_amd64.deb'.format(\n version=latest_minor_version, distro=distro, py_version=py_version)\n else:\n name = 'python-ckan_{version}-{distro}_amd64.deb'.format(\n version=latest_minor_version, distro=distro)\n return name\n\n\ndef get_current_package_name(distro, py_version=None):\n '''Return the filename of the Ubuntu package for the current stable release.\n\n e.g. \"python-ckan_2.1-trusty_amd64.deb\"\n\n If ``py_version`` is provided, it's added as part of the iter number:\n\n e.g. \"python-ckan_2.9-py3-focal_amd64.deb\"\n\n '''\n # We don't create a new package file name for a patch release like 2.1.1,\n # instead we just update the existing 2.1 package. So package names only\n # have the X.Y part of the version number in them, not X.Y.Z.\n version = get_current_release_version()\n current_minor_version = version[:version.find(\".\", 3)]\n\n if py_version:\n name = 'python-ckan_{version}-py{py_version}-{distro}_amd64.deb'.format(\n version=current_minor_version, distro=distro, py_version=py_version)\n else:\n name = 'python-ckan_{version}-{distro}_amd64.deb'.format(\n version=current_minor_version, distro=distro)\n return name\n\n\ndef config_defaults_from_declaration():\n from ckan.config.declaration import Declaration\n decl = Declaration()\n decl.load_core_declaration()\n decl.load_plugin(\"resource_proxy\")\n decl.load_plugin(\"text_view\")\n decl.load_plugin(\"image_view\")\n decl.load_plugin(\"datatables_view\")\n decl.load_plugin(\"datastore\")\n decl.load_plugin(\"datapusher\")\n\n _write_config_options_file(decl)\n\n return {\n f\"config:{k}\": \"``{}``\".format(\n repr(decl[k].default) if decl[k].has_default() else None\n ) for k in decl.iter_options()\n }\n\n\ndef _write_config_options_file(decl):\n '''\n Write a file in the doc/ dir containing documentation for config options.\n\n '''\n filename = '_config_options.inc'\n header = '''.. Documentation for declared config options.\n **This file is autogenerated!** So don't edit it by hand.\n\n'''\n with open(filename, 'w') as f:\n f.write(header.format(filename=filename))\n f.write(decl.into_docs())\n\n\ndef write_substitutions_file(**kwargs):\n '''\n Write a file in the doc/ dir containing reStructuredText substitutions.\n\n Any keyword argument is stored as a substitution.\n '''\n filename = '_substitutions.rst'\n header = '''\n\n.. Some common reStructuredText substitutions.\n\n **This file is autogenerated!** So don't edit it by hand.\n\n You can include this file at the top of your ``*.rst`` file with a line\n like::\n\n .. include:: {filename}\n\n Then use the substitutions in this file, e.g.::\n\n |latest_release_version|\n\n'''\n with open(filename, 'w') as f:\n f.write(header.format(filename=filename))\n for name, substitution in kwargs.items():\n f.write('.. 
|{name}| replace:: {substitution}\\n'.format(\n name=name, substitution=substitution))\n\ncurrent_release_tag_value = get_current_release_tag()\ncurrent_release_version = get_current_release_version()\nprevious_release_version = get_previous_release_version()\nprevious_release_version_format = f\"**CKAN {previous_release_version}**\"\ncurrent_minor_version = current_release_version[:current_release_version.find(\".\", 3)]\nlatest_release_tag_value = get_latest_release_tag()\nlatest_release_version = get_latest_release_version()\nlatest_release_version_format = f\"**CKAN {latest_release_version}**\"\nlatest_minor_version = latest_release_version[:latest_release_version.find(\".\", 3)]\nis_master = \"a\" in release.split(\".\")[-1]\nis_supported = get_status_of_this_version() == 'supported'\nis_latest_version = version == latest_release_version\n\nwrite_substitutions_file(\n current_release_tag=current_release_tag_value,\n current_release_version=current_release_version,\n previous_release_version=previous_release_version,\n previous_release_version_format=previous_release_version_format,\n current_minor_version=current_minor_version,\n latest_release_tag=latest_release_tag_value,\n latest_release_version=latest_release_version,\n latest_release_version_format=latest_release_version_format,\n current_package_name_jammy=get_current_package_name('jammy'),\n current_package_name_focal=get_current_package_name('focal'),\n **config_defaults_from_declaration()\n)\n\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of documents that shouldn't be included in the build.\n#unused_docs = []\n\n# List of directories, relative to source directory, that shouldn't be searched\n# for source files.\nexclude_trees = ['.build']\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n\n# Options for HTML output\n# -----------------------\n\nextra_css_files = ['_static/css/custom.css']\n\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\nif not on_rtd:\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\nhtml_sidebars = {\n '**': ['globaltoc.html'],\n}\n\nhtml_context = {\n 'latest_release_tag_value': latest_release_tag_value,\n 'is_master': is_master,\n 'is_supported': is_supported,\n 'is_latest_version': is_latest_version,\n 'extra_css_files': extra_css_files,\n 'latest_minor_version': latest_minor_version,\n}\n\n# The style sheet to use for HTML and HTML Help pages. A file of that name\n# must exist either in Sphinx' static/ path, or in one of the custom paths\n# given in html_static_path.\n#html_style = 'default.css'\n\n# The name for this set of Sphinx documents. 
If None, it defaults to\n# \"<project> v<release> documentation\".\n# html_title = \"%s v%s Guide\" % (project, release)\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n# html_short_title = \"%s Admin Guide\" % (project_short_name)\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n# html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = 'images/favicon.ico'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_use_modindex = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, the reST sources are included in the HTML build as _sources/<name>.\n#html_copy_source = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# If nonempty, this is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = ''\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'CKANdoc'\n\n\n# Options for LaTeX output\n# ------------------------\n\n# The paper size ('letter' or 'a4').\n#latex_paper_size = 'letter'\n\n# The font size ('10pt', '11pt' or '12pt').\n#latex_font_size = '10pt'\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, document class [howto/manual]).\nlatex_documents = [\n ('contents', 'CKAN.tex', u'CKAN documentation',\n u'CKAN contributors', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# Additional stuff for the LaTeX preamble.\n#latex_preamble = ''\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_use_modindex = True\n",
"path": "doc/conf.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n#\n# CKAN documentation build configuration file, created by\n# sphinx-quickstart on Sun Oct 25 16:47:17 2009.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# The contents of this file are pickled, so don't put values in the namespace\n# that aren't pickleable (module imports are okay, they're removed automatically).\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nfrom datetime import date\nimport re\nimport os\nimport subprocess\n\nfrom packaging.version import parse as version_parse\n\nimport ckan\n\n# If your extensions (or modules documented by autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#sys.path.append(os.path.abspath('.'))\n\n# General configuration\n# ---------------------\n\nrst_epilog = '''\n\n.. |virtualenv_parent_dir| replace:: /usr/lib/ckan\n.. |virtualenv| replace:: |virtualenv_parent_dir|/default\n.. |activate| replace:: . |virtualenv|/bin/activate\n.. |config_parent_dir| replace:: /etc/ckan\n.. |config_dir| replace:: |config_parent_dir|/default\n.. |production.ini| replace:: |config_dir|/production.ini\n.. |development.ini| replace:: |config_dir|/development.ini\n.. |ckan.ini| replace:: |config_dir|/ckan.ini\n.. |git_url| replace:: \\https://github.com/ckan/ckan.git\n.. |raw_git_url| replace:: \\https://raw.githubusercontent.com/ckan/ckan\n.. |postgres| replace:: PostgreSQL\n.. |database| replace:: ckan_default\n.. |database_user| replace:: ckan_default\n.. |datastore| replace:: datastore_default\n.. |datastore_user| replace:: datastore_default\n.. |test_database| replace:: ckan_test\n.. |test_datastore| replace:: datastore_test\n.. |apache_config_file| replace:: /etc/apache2/sites-available/ckan_default.conf\n.. |apache.wsgi| replace:: |config_dir|/apache.wsgi\n.. |wsgi.py| replace:: |config_dir|/wsgi.py\n.. |data_dir| replace:: |config_dir|/data\n.. |sstore| replace:: |config_dir|/sstore\n.. |storage_parent_dir| replace:: /var/lib/ckan\n.. |storage_dir| replace:: |storage_parent_dir|/default\n.. |storage_path| replace:: |storage_parent_dir|/default\n.. |restart_uwsgi| replace:: sudo supervisorctl restart ckan-uwsgi:*\n.. |solr| replace:: Solr\n.. |restructuredtext| replace:: reStructuredText\n.. |nginx| replace:: Nginx\n.. |sqlite| replace:: SQLite\n.. |python| replace:: Python\n.. |sqlalchemy| replace:: SQLAlchemy\n.. |javascript| replace:: JavaScript\n.. |apache| replace:: Apache\n.. |nginx_config_file| replace:: /etc/nginx/sites-available/ckan\n.. |reload_nginx| replace:: sudo service nginx reload\n.. |restart_nginx| replace:: sudo service nginx restart\n.. |jquery| replace:: jQuery\n.. |nodejs| replace:: Node.js\n\n.. _Jinja2: http://jinja.pocoo.org/\n.. _CKAN front page: http://127.0.0.1:5000\n.. _bootstrap: http://getbootstrap.com/2.3.2/\n.. _CKAN issue tracker: https://github.com/ckan/ckan/issues\n\n'''\n\n# Add any Sphinx extension module names here, as strings. 
They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = [\n 'sphinx.ext.autodoc', 'sphinx.ext.todo',\n 'sphinx.ext.autosummary', 'ckan.plugins.toolkit_sphinx_extension',\n 'sphinx_rtd_theme',\n]\nhtml_theme = 'sphinx_rtd_theme'\nautodoc_member_order = 'bysource'\ntodo_include_todos = True\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8'\n\n# The master toctree document.\nmaster_doc = 'contents'\n\n# General information about the project.\nproject = u'CKAN'\nproject_short_name = u'CKAN'\ncopyright = u'© 2009-{} '.format(date.today().strftime(\"%Y\"))\ncopyright += u'''<a href=\"https://okfn.org/\">Open Knowledge Foundation</a> and\n <a href=\"https://github.com/ckan/ckan/graphs/contributors\">contributors</a>.\n Licensed under <a\n href=\"https://creativecommons.org/licenses/by-sa/3.0/\">Creative Commons\n Attribution ShareAlike (Unported) v3.0 License</a>.<br />\n <img src=\"https://licensebuttons.net/l/by-sa/3.0/80x15.png\" alt=\"CC License Logo\" />\n <a href=\"https://opendefinition.org/\">\n <img src=\"https://assets.okfn.org/images/ok_buttons/oc_80x15_blue.png\" border=\"0\"\n alt=\"{{ _('Open Content') }}\" />\n </a>\n '''\nhtml_show_sphinx = False\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = ckan.__version__.rstrip('abcdefgh')\n# The full version, including alpha/beta/rc tags.\nrelease = ckan.__version__\nversion_re = None\npoint_releases_ = None\n\nSUPPORTED_CKAN_VERSIONS = 2\n\n\ndef get_release_tags():\n git_tags = subprocess.check_output(\n ['git', 'tag', '-l'], stderr=subprocess.STDOUT).decode('utf8')\n git_tags = git_tags.split()\n release_tags_ = [tag for tag in git_tags if tag.startswith('ckan-')]\n\n # git tag -l prints out the tags in the right order anyway, but don't rely\n # on that, sort them again here for good measure.\n release_tags_.sort(key=version_parse)\n return release_tags_\n\n\ndef parse_version(version_):\n '''Parses version string\n ckan-2.1.3 -> ('2', '1', '3')\n ckan-2.1 -> ('2', '1', None) (the occasion when we didn't do semver)\n '''\n global version_re\n if version_re is None:\n version_re = re.compile('(?:ckan-)?(\\d+)\\.(\\d+)(?:\\.(\\d+))?[a-z]?')\n if isinstance(version_, bytes):\n version_ = version_.decode()\n return version_re.match(version_).groups()\n\n\ndef get_equivalent_point_release(version_):\n '''Returns the equivalent point release of any given version.\n\n e.g.\n ckan-2.1.3 -> ckan-2.1\n ckan-2.1 -> ckan-2.1 (the occasion when we didn't do semver)\n '''\n return 'ckan-%s.%s' % parse_version(version_)[:2]\n\n\ndef get_point_releases():\n '''\n returns ['ckan-1.3', 'ckan-1.4', ... 
'ckan-2.0', 'ckan-2.1', ...]\n '''\n global point_releases_\n if point_releases_ is None:\n releases = get_release_tags()\n point_releases_ = []\n for release in releases:\n point_release = get_equivalent_point_release(release)\n if point_release not in point_releases_:\n point_releases_.append(point_release)\n return point_releases_\n\n\ndef get_status_of_this_version():\n '''Returns whether this release is supported or another category.\n '''\n equiv_point_release = get_equivalent_point_release(version)\n point_releases_ = get_point_releases()\n supported_point_releases = point_releases_[-int(SUPPORTED_CKAN_VERSIONS):]\n if equiv_point_release in supported_point_releases:\n return 'supported'\n else:\n return 'unsupported'\n\n\ndef get_current_release_tag():\n ''' Return the name of the tag for the current release\n\n e.g.: \"ckan-2.7.4\"\n\n '''\n release_tags_ = get_release_tags()\n\n current_tag = \"ckan-{}\".format(version)\n\n if release_tags_.__contains__(current_tag):\n return current_tag\n else:\n # Un-released tag (eg master or a beta version), use the latest one\n return get_latest_release_tag()\n\n\ndef get_latest_release_tag():\n '''Return the name of the git tag for the latest stable release.\n\n e.g.: \"ckan-2.7.4\"\n\n This requires git to be installed.\n\n '''\n release_tags_ = get_release_tags()\n\n if release_tags_:\n return release_tags_[-1]\n else:\n # Un-released tag (eg master or a beta version), use the latest one\n return get_latest_release_version()\n\n\ndef get_latest_release_version():\n '''Return the version number of the latest stable release.\n\n e.g. \"2.1.1\"\n\n '''\n version = get_latest_release_tag()[len('ckan-'):]\n\n # TODO: We could assert here that latest_version matches X.Y.Z.\n\n return version\n\n\ndef get_current_release_version():\n '''Return the version number of the current release.\n\n e.g. \"2.1.1\"\n\n '''\n version = get_current_release_tag()[len('ckan-'):]\n\n # TODO: We could assert here that latest_version matches X.Y.Z.\n\n return version\n\n\ndef get_previous_release_version() -> str:\n \"\"\"Returns the version number of the previous release\n\n eg if the latest release is 2.9.5, it returns 2.8.10\n\n \"\"\"\n current_version = parse_version(get_current_release_version())\n\n previous_tag_prefix = f\"ckan-{current_version[0]}.{int(current_version[1]) - 1}\"\n\n previous_version_tags = [\n r for r in get_release_tags() if r.startswith(previous_tag_prefix)\n ]\n previous_release_version = previous_version_tags[-1][len(\"ckan-\"):]\n return previous_release_version\n\n\ndef get_latest_package_name(distro, py_version=None):\n '''Return the filename of the Ubuntu package for the latest stable release.\n\n e.g. \"python-ckan_2.1-trusty_amd64.deb\"\n\n If ``py_version`` is provided, it's added as part of the iter number:\n\n e.g. \"python-ckan_2.9-py3-focal_amd64.deb\"\n\n '''\n # We don't create a new package file name for a patch release like 2.1.1,\n # instead we just update the existing 2.1 package. 
So package names only\n # have the X.Y part of the version number in them, not X.Y.Z.\n version = get_latest_release_version()\n latest_minor_version = version[:version.find(\".\", 3)]\n\n if py_version:\n name = 'python-ckan_{version}-py{py_version}-{distro}_amd64.deb'.format(\n version=latest_minor_version, distro=distro, py_version=py_version)\n else:\n name = 'python-ckan_{version}-{distro}_amd64.deb'.format(\n version=latest_minor_version, distro=distro)\n return name\n\n\ndef get_current_package_name(distro, py_version=None):\n '''Return the filename of the Ubuntu package for the current stable release.\n\n e.g. \"python-ckan_2.1-trusty_amd64.deb\"\n\n If ``py_version`` is provided, it's added as part of the iter number:\n\n e.g. \"python-ckan_2.9-py3-focal_amd64.deb\"\n\n '''\n # We don't create a new package file name for a patch release like 2.1.1,\n # instead we just update the existing 2.1 package. So package names only\n # have the X.Y part of the version number in them, not X.Y.Z.\n version = get_current_release_version()\n current_minor_version = version[:version.find(\".\", 3)]\n\n if py_version:\n name = 'python-ckan_{version}-py{py_version}-{distro}_amd64.deb'.format(\n version=current_minor_version, distro=distro, py_version=py_version)\n else:\n name = 'python-ckan_{version}-{distro}_amd64.deb'.format(\n version=current_minor_version, distro=distro)\n return name\n\n\ndef config_defaults_from_declaration():\n from ckan.config.declaration import Declaration\n decl = Declaration()\n decl.load_core_declaration()\n decl.load_plugin(\"resource_proxy\")\n decl.load_plugin(\"text_view\")\n decl.load_plugin(\"image_view\")\n decl.load_plugin(\"datatables_view\")\n decl.load_plugin(\"datastore\")\n decl.load_plugin(\"datapusher\")\n\n _write_config_options_file(decl)\n\n return {\n f\"config:{k}\": \"``{}``\".format(\n repr(decl[k].default) if decl[k].has_default() else None\n ) for k in decl.iter_options()\n }\n\n\ndef _write_config_options_file(decl):\n '''\n Write a file in the doc/ dir containing documentation for config options.\n\n '''\n filename = '_config_options.inc'\n header = '''.. Documentation for declared config options.\n **This file is autogenerated!** So don't edit it by hand.\n\n'''\n with open(filename, 'w') as f:\n f.write(header.format(filename=filename))\n f.write(decl.into_docs())\n\n\ndef write_substitutions_file(**kwargs):\n '''\n Write a file in the doc/ dir containing reStructuredText substitutions.\n\n Any keyword argument is stored as a substitution.\n '''\n filename = '_substitutions.rst'\n header = '''\n\n.. Some common reStructuredText substitutions.\n\n **This file is autogenerated!** So don't edit it by hand.\n\n You can include this file at the top of your ``*.rst`` file with a line\n like::\n\n .. include:: {filename}\n\n Then use the substitutions in this file, e.g.::\n\n |latest_release_version|\n\n'''\n with open(filename, 'w') as f:\n f.write(header.format(filename=filename))\n for name, substitution in kwargs.items():\n f.write('.. 
|{name}| replace:: {substitution}\\n'.format(\n name=name, substitution=substitution))\n\ncurrent_release_tag_value = get_current_release_tag()\ncurrent_release_version = get_current_release_version()\nprevious_release_version = get_previous_release_version()\nprevious_release_version_format = f\"**CKAN {previous_release_version}**\"\ncurrent_minor_version = current_release_version[:current_release_version.find(\".\", 3)]\nlatest_release_tag_value = get_latest_release_tag()\nlatest_release_version = get_latest_release_version()\nlatest_release_version_format = f\"**CKAN {latest_release_version}**\"\nlatest_minor_version = latest_release_version[:latest_release_version.find(\".\", 3)]\nis_master = \"a\" in release.split(\".\")[-1]\nis_supported = get_status_of_this_version() == 'supported'\nis_latest_version = version == latest_release_version\n\nwrite_substitutions_file(\n current_release_tag=current_release_tag_value,\n current_release_version=current_release_version,\n previous_release_version=previous_release_version,\n previous_release_version_format=previous_release_version_format,\n current_minor_version=current_minor_version,\n latest_release_tag=latest_release_tag_value,\n latest_release_version=latest_release_version,\n latest_release_version_format=latest_release_version_format,\n current_package_name_jammy=get_current_package_name('jammy'),\n current_package_name_focal=get_current_package_name('focal'),\n **config_defaults_from_declaration()\n)\n\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of documents that shouldn't be included in the build.\n#unused_docs = []\n\n# List of directories, relative to source directory, that shouldn't be searched\n# for source files.\nexclude_trees = ['.build']\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n\n# Options for HTML output\n# -----------------------\n\nextra_css_files = ['_static/css/custom.css']\n\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\nif not on_rtd:\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\nhtml_sidebars = {\n '**': ['globaltoc.html'],\n}\n\nhtml_context = {\n 'latest_release_tag_value': latest_release_tag_value,\n 'is_master': is_master,\n 'is_supported': is_supported,\n 'is_latest_version': is_latest_version,\n 'extra_css_files': extra_css_files,\n 'latest_minor_version': latest_minor_version,\n}\n\n# The style sheet to use for HTML and HTML Help pages. A file of that name\n# must exist either in Sphinx' static/ path, or in one of the custom paths\n# given in html_static_path.\n#html_style = 'default.css'\n\n# The name for this set of Sphinx documents. 
If None, it defaults to\n# \"<project> v<release> documentation\".\n# html_title = \"%s v%s Guide\" % (project, release)\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n# html_short_title = \"%s Admin Guide\" % (project_short_name)\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n# html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = 'images/favicon.ico'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_use_modindex = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, the reST sources are included in the HTML build as _sources/<name>.\n#html_copy_source = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# If nonempty, this is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = ''\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'CKANdoc'\n\n\n# Options for LaTeX output\n# ------------------------\n\n# The paper size ('letter' or 'a4').\n#latex_paper_size = 'letter'\n\n# The font size ('10pt', '11pt' or '12pt').\n#latex_font_size = '10pt'\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, document class [howto/manual]).\nlatex_documents = [\n ('contents', 'CKAN.tex', u'CKAN documentation',\n u'CKAN contributors', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# Additional stuff for the LaTeX preamble.\n#latex_preamble = ''\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_use_modindex = True\n",
"path": "doc/conf.py"
}
] | diff --git a/.readthedocs.yaml b/.readthedocs.yaml
index 1bd376c77b3..b8e02ef2197 100644
--- a/.readthedocs.yaml
+++ b/.readthedocs.yaml
@@ -9,9 +9,13 @@ version: 2
build:
os: ubuntu-22.04
apt_packages:
- - libmagic-dev
+ - libmagic-dev
+ - libmagic1
tools:
python: "3.10"
+ jobs:
+ post_checkout:
+ - git fetch --tags || true
sphinx:
configuration: doc/conf.py
diff --git a/doc/conf.py b/doc/conf.py
index 0e6325a8ec2..eca46d5929f 100644
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -85,7 +85,9 @@
extensions = [
'sphinx.ext.autodoc', 'sphinx.ext.todo',
'sphinx.ext.autosummary', 'ckan.plugins.toolkit_sphinx_extension',
+ 'sphinx_rtd_theme',
]
+html_theme = 'sphinx_rtd_theme'
autodoc_member_order = 'bysource'
todo_include_todos = True
|
fossasia__open-event-server-4302 | Custom-forms: Change data.type in custom-form
**I'm submitting a ...** (check one with "x")
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support requests here, instead ask your query in our Gitter channel at https://gitter.im/fossasia/open-event-orga-server
**Current behavior:**
The type attribute is `custom_form`, which leads to a 409 error when making a request after #4300.
**Expected behavior:**
The type attribute should be `custom-form`
@enigmaeth Can you please check?
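For reference, a minimal sketch of the dasherized schema declaration (mirroring `CustomFormSchema` in `app/api/custom_forms.py`, with the remaining fields omitted; treat it as an illustration of the expected change rather than the exact patch):
```python
from marshmallow_jsonapi import fields
from marshmallow_jsonapi.flask import Schema

from app.api.helpers.utilities import dasherize


class CustomFormSchema(Schema):
    class Meta:
        # JSON:API resource type, dasherized so clients send
        # {"data": {"type": "custom-form", ...}} instead of "custom_form"
        type_ = 'custom-form'
        self_view = 'v1.custom_form_detail'
        self_view_kwargs = {'id': '<id>'}
        inflect = dasherize

    id = fields.Integer(dump_only=True)
```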
| [
{
"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom marshmallow_jsonapi.flask import Schema, Relationship\nfrom marshmallow_jsonapi import fields\nimport marshmallow.validate as validate\nfrom app.api.helpers.permissions import jwt_required\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.utilities import dasherize\nfrom app.models import db\nfrom app.models.custom_form import CustomForms\nfrom app.models.event import Event\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.query import event_query\n\n\nclass CustomFormSchema(Schema):\n \"\"\"\n API Schema for Custom Forms database model\n \"\"\"\n class Meta:\n \"\"\"\n Meta class for CustomForm Schema\n \"\"\"\n type_ = 'custom_form'\n self_view = 'v1.custom_form_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n id = fields.Integer(dump_only=True)\n field_identifier = fields.Str(required=True)\n form = fields.Str(required=True)\n type = fields.Str(default=\"text\", validate=validate.OneOf(\n choices=[\"text\", \"checkbox\", \"select\", \"file\", \"image\"]))\n is_required = fields.Boolean(default=False)\n is_included = fields.Boolean(default=False)\n is_fixed = fields.Boolean(default=False)\n event = Relationship(attribute='event',\n self_view='v1.custom_form_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'custom_form_id': '<id>'},\n schema='EventSchema',\n type_='event')\n\n\nclass CustomFormListPost(ResourceList):\n \"\"\"\n Create and List Custom Forms\n \"\"\"\n\n def before_post(self, args, kwargs, data):\n \"\"\"\n method to check for required relationship with event\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ObjectNotFound({'parameter': 'event_id'},\n \"Event: {} not found\".format(data['event_id']))\n\n schema = CustomFormSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n 'model': CustomForms\n }\n\n\nclass CustomFormList(ResourceList):\n \"\"\"\n Create and List Custom Forms\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n query method for different view_kwargs\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(CustomForms)\n query_ = event_query(self, query_, view_kwargs)\n return query_\n\n view_kwargs = True\n decorators = (jwt_required, )\n methods = ['GET', ]\n schema = CustomFormSchema\n data_layer = {'session': db.session,\n 'model': CustomForms,\n 'methods': {\n 'query': query\n }}\n\n\nclass CustomFormDetail(ResourceDetail):\n \"\"\"\n CustomForm Resource\n \"\"\"\n\n def before_get_object(self, view_kwargs):\n \"\"\"\n before get method\n :param view_kwargs:\n :return:\n \"\"\"\n event = None\n if view_kwargs.get('event_id'):\n event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')\n elif view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n\n if event:\n custom_form = safe_query(self, CustomForms, 'event_id', event.id, 'event_id')\n view_kwargs['id'] = custom_form.id\n\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=CustomForms, methods=\"PATCH,DELETE\"), )\n schema = 
CustomFormSchema\n data_layer = {'session': db.session,\n 'model': CustomForms}\n\n\nclass CustomFormRelationshipRequired(ResourceRelationship):\n \"\"\"\n CustomForm Relationship (Required)\n \"\"\"\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=CustomForms, methods=\"PATCH\"),)\n methods = ['GET', 'PATCH']\n schema = CustomFormSchema\n data_layer = {'session': db.session,\n 'model': CustomForms}\n",
"path": "app/api/custom_forms.py"
}
] | [
{
"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom marshmallow_jsonapi.flask import Schema, Relationship\nfrom marshmallow_jsonapi import fields\nimport marshmallow.validate as validate\nfrom app.api.helpers.permissions import jwt_required\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.utilities import dasherize\nfrom app.models import db\nfrom app.models.custom_form import CustomForms\nfrom app.models.event import Event\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.query import event_query\n\n\nclass CustomFormSchema(Schema):\n \"\"\"\n API Schema for Custom Forms database model\n \"\"\"\n class Meta:\n \"\"\"\n Meta class for CustomForm Schema\n \"\"\"\n type_ = 'custom-form'\n self_view = 'v1.custom_form_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n id = fields.Integer(dump_only=True)\n field_identifier = fields.Str(required=True)\n form = fields.Str(required=True)\n type = fields.Str(default=\"text\", validate=validate.OneOf(\n choices=[\"text\", \"checkbox\", \"select\", \"file\", \"image\"]))\n is_required = fields.Boolean(default=False)\n is_included = fields.Boolean(default=False)\n is_fixed = fields.Boolean(default=False)\n event = Relationship(attribute='event',\n self_view='v1.custom_form_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'custom_form_id': '<id>'},\n schema='EventSchema',\n type_='event')\n\n\nclass CustomFormListPost(ResourceList):\n \"\"\"\n Create and List Custom Forms\n \"\"\"\n\n def before_post(self, args, kwargs, data):\n \"\"\"\n method to check for required relationship with event\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ObjectNotFound({'parameter': 'event_id'},\n \"Event: {} not found\".format(data['event_id']))\n\n schema = CustomFormSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n 'model': CustomForms\n }\n\n\nclass CustomFormList(ResourceList):\n \"\"\"\n Create and List Custom Forms\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n query method for different view_kwargs\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(CustomForms)\n query_ = event_query(self, query_, view_kwargs)\n return query_\n\n view_kwargs = True\n decorators = (jwt_required, )\n methods = ['GET', ]\n schema = CustomFormSchema\n data_layer = {'session': db.session,\n 'model': CustomForms,\n 'methods': {\n 'query': query\n }}\n\n\nclass CustomFormDetail(ResourceDetail):\n \"\"\"\n CustomForm Resource\n \"\"\"\n\n def before_get_object(self, view_kwargs):\n \"\"\"\n before get method\n :param view_kwargs:\n :return:\n \"\"\"\n event = None\n if view_kwargs.get('event_id'):\n event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')\n elif view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n\n if event:\n custom_form = safe_query(self, CustomForms, 'event_id', event.id, 'event_id')\n view_kwargs['id'] = custom_form.id\n\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=CustomForms, methods=\"PATCH,DELETE\"), )\n schema = 
CustomFormSchema\n data_layer = {'session': db.session,\n 'model': CustomForms}\n\n\nclass CustomFormRelationshipRequired(ResourceRelationship):\n \"\"\"\n CustomForm Relationship (Required)\n \"\"\"\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=CustomForms, methods=\"PATCH\"),)\n methods = ['GET', 'PATCH']\n schema = CustomFormSchema\n data_layer = {'session': db.session,\n 'model': CustomForms}\n",
"path": "app/api/custom_forms.py"
}
] | diff --git a/app/api/custom_forms.py b/app/api/custom_forms.py
index 5474779926..924b5a1ad7 100644
--- a/app/api/custom_forms.py
+++ b/app/api/custom_forms.py
@@ -24,7 +24,7 @@ class Meta:
"""
Meta class for CustomForm Schema
"""
- type_ = 'custom_form'
+ type_ = 'custom-form'
self_view = 'v1.custom_form_detail'
self_view_kwargs = {'id': '<id>'}
inflect = dasherize
diff --git a/docs/api/api_blueprint.apib b/docs/api/api_blueprint.apib
index 5457207c75..bccb44396e 100644
--- a/docs/api/api_blueprint.apib
+++ b/docs/api/api_blueprint.apib
@@ -16238,7 +16238,7 @@ Create a new Custom Form with event_id.
{
"data": {
- "type": "custom_form",
+ "type": "custom-form",
"relationships": {
"event": {
"data": {
@@ -16279,7 +16279,7 @@ Create a new Custom Form with event_id.
"is-fixed": false,
"type": "text"
},
- "type": "custom_form",
+ "type": "custom-form",
"id": 1,
"links": {
"self": "/v1/custom-forms/1"
@@ -16329,7 +16329,7 @@ Get a single custom form.
"is-included": false,
"type": "text"
},
- "type": "custom_form",
+ "type": "custom-form",
"id": 1,
"links": {
"self": "/v1/custom-forms/1"
@@ -16359,7 +16359,7 @@ Update a single custom form with `id`.
{
"data": {
- "type": "custom_form",
+ "type": "custom-form",
"attributes": {
"form": "form",
"field-identifier": "abc123",
@@ -16393,7 +16393,7 @@ Update a single custom form with `id`.
"is-included": false,
"type": "text"
},
- "type": "custom_form",
+ "type": "custom-form",
"id": 1,
"links": {
"self": "/v1/custom-forms/1"
@@ -16475,7 +16475,7 @@ Get a list of Custom Forms for an event.
"is-fixed": false,
"type": "text"
},
- "type": "custom_form",
+ "type": "custom-form",
"id": 1,
"links": {
"self": "/v1/custom-forms/1"
|
DataDog__dd-trace-py-1582 | ddtrace.Pin() for multiple grpc channels doesn't work
Thanks for taking the time to report an issue!
Before reporting an issue on dd-trace-py, please be sure to provide all necessary information.
If you're hitting a bug, make sure that you're using the latest version of this library.
### Which version of dd-trace-py are you using?
0.38.2
I didn't find anything related to this issue in the release notes of any release after this version.
### Which version of the libraries are you using?
datadog==0.36.0
### How can we reproduce your problem?
Approach 1:
`servers` is a list of gRPC server addresses:
```
import grpc

from ddtrace import Pin

for server in servers:
channel = grpc.insecure_channel(server)
Pin.override(channel, service=server)
# Do something with the channel
```
Since `Pin.override(grpc.Channel, service=server)` worked with a single server, I also tried the following to see how it behaves:
Approach 2:
`servers` is a list of gRPC server addresses:
```
import grpc

from ddtrace import Pin

for server in servers:
Pin.override(grpc.Channel, service=server)
channel = grpc.insecure_channel(server)
# Do something with the channel
```
### What is the result that you get?
In Approach 1, `Pin.override` did not set the service name correctly. Everywhere in Datadog, I could see it as `grpc-client`, which is the default value.
In Approach 2, since I don't pass the channel corresponding to each server, all servers are overridden by Pin to the final server (probably because it's the last one in the loop).
### What is the result that you expected?
`ddtrace.Pin()` onto multiple gRPC channels should work, and I should be able to see the correct `service` in Datadog APM traces and the Service Map.
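For illustration, a minimal sketch of per-server pinning (the server addresses are hypothetical). It assumes, as the accompanying patch suggests, that the client interceptor picks up the pin from the channel at creation time, so the service name has to be set before `grpc.insecure_channel()` is called; this is a usage sketch, not a verified fix:
```python
import grpc

from ddtrace import Pin, patch

patch(grpc=True)

servers = ["billing:50051", "users:50051"]  # hypothetical addresses

channels = {}
for server in servers:
    # Set the per-server service name before the channel is created, so the
    # pin captured when the tracing interceptor wraps the channel carries it.
    Pin.override(grpc.Channel, service=server)
    channels[server] = grpc.insecure_channel(server)
```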
| [
{
"content": "import os\n\nimport grpc\n\nfrom ddtrace.vendor.wrapt import wrap_function_wrapper as _w\nfrom ddtrace import config, Pin\n\nfrom ...utils.wrappers import unwrap as _u\n\nfrom . import constants\nfrom .client_interceptor import create_client_interceptor, intercept_channel\nfrom .server_interceptor import create_server_interceptor\n\n\nconfig._add('grpc_server', dict(\n service_name=config._get_service(default=constants.GRPC_SERVICE_SERVER),\n distributed_tracing_enabled=True,\n))\n\n\n# Precedence for the service name:\n# 1) DD_GRPC_SERVICE if defined; or\n# 2) For compatibility, the globally set service + \"-grpc-client\"; or\n# 3) The fall-back \"grpc-client\"\nif \"DD_GRPC_SERVICE\" in os.environ:\n service = os.getenv(\"DD_GRPC_SERVICE\")\nelif config._get_service():\n service = \"{}-{}\".format(config._get_service(), constants.GRPC_SERVICE_CLIENT)\nelse:\n service = constants.GRPC_SERVICE_CLIENT\n\n\n# TODO[tbutt]: keeping name for client config unchanged to maintain backwards\n# compatibility but should change in future\nconfig._add('grpc', dict(\n service_name=service,\n distributed_tracing_enabled=True,\n))\n\n\ndef patch():\n _patch_client()\n _patch_server()\n\n\ndef unpatch():\n _unpatch_client()\n _unpatch_server()\n\n\ndef _patch_client():\n if getattr(constants.GRPC_PIN_MODULE_CLIENT, '__datadog_patch', False):\n return\n setattr(constants.GRPC_PIN_MODULE_CLIENT, '__datadog_patch', True)\n\n Pin().onto(constants.GRPC_PIN_MODULE_CLIENT)\n\n _w('grpc', 'insecure_channel', _client_channel_interceptor)\n _w('grpc', 'secure_channel', _client_channel_interceptor)\n _w('grpc', 'intercept_channel', intercept_channel)\n\n\ndef _unpatch_client():\n if not getattr(constants.GRPC_PIN_MODULE_CLIENT, '__datadog_patch', False):\n return\n setattr(constants.GRPC_PIN_MODULE_CLIENT, '__datadog_patch', False)\n\n pin = Pin.get_from(constants.GRPC_PIN_MODULE_CLIENT)\n if pin:\n pin.remove_from(constants.GRPC_PIN_MODULE_CLIENT)\n\n _u(grpc, 'secure_channel')\n _u(grpc, 'insecure_channel')\n\n\ndef _patch_server():\n if getattr(constants.GRPC_PIN_MODULE_SERVER, '__datadog_patch', False):\n return\n setattr(constants.GRPC_PIN_MODULE_SERVER, '__datadog_patch', True)\n\n Pin().onto(constants.GRPC_PIN_MODULE_SERVER)\n\n _w('grpc', 'server', _server_constructor_interceptor)\n\n\ndef _unpatch_server():\n if not getattr(constants.GRPC_PIN_MODULE_SERVER, '__datadog_patch', False):\n return\n setattr(constants.GRPC_PIN_MODULE_SERVER, '__datadog_patch', False)\n\n pin = Pin.get_from(constants.GRPC_PIN_MODULE_SERVER)\n if pin:\n pin.remove_from(constants.GRPC_PIN_MODULE_SERVER)\n\n _u(grpc, 'server')\n\n\ndef _client_channel_interceptor(wrapped, instance, args, kwargs):\n channel = wrapped(*args, **kwargs)\n\n pin = Pin.get_from(constants.GRPC_PIN_MODULE_CLIENT)\n if not pin or not pin.enabled():\n return channel\n\n (host, port) = _parse_target_from_arguments(args, kwargs)\n\n interceptor_function = create_client_interceptor(pin, host, port)\n return grpc.intercept_channel(channel, interceptor_function)\n\n\ndef _server_constructor_interceptor(wrapped, instance, args, kwargs):\n # DEV: we clone the pin on the grpc module and configure it for the server\n # interceptor\n\n pin = Pin.get_from(constants.GRPC_PIN_MODULE_SERVER)\n if not pin or not pin.enabled():\n return wrapped(*args, **kwargs)\n\n interceptor = create_server_interceptor(pin)\n\n # DEV: Inject our tracing interceptor first in the list of interceptors\n if 'interceptors' in kwargs:\n kwargs['interceptors'] = (interceptor,) + 
tuple(kwargs['interceptors'])\n else:\n kwargs['interceptors'] = (interceptor,)\n\n return wrapped(*args, **kwargs)\n\n\ndef _parse_target_from_arguments(args, kwargs):\n if 'target' in kwargs:\n target = kwargs['target']\n else:\n target = args[0]\n\n split = target.rsplit(':', 2)\n\n return (split[0], split[1] if len(split) > 1 else None)\n",
"path": "ddtrace/contrib/grpc/patch.py"
}
] | [
{
"content": "import os\n\nimport grpc\n\nfrom ddtrace.vendor.wrapt import wrap_function_wrapper as _w\nfrom ddtrace import config, Pin\n\nfrom ...utils.wrappers import unwrap as _u\n\nfrom . import constants\nfrom .client_interceptor import create_client_interceptor, intercept_channel\nfrom .server_interceptor import create_server_interceptor\n\n\nconfig._add('grpc_server', dict(\n service_name=config._get_service(default=constants.GRPC_SERVICE_SERVER),\n distributed_tracing_enabled=True,\n))\n\n\n# Precedence for the service name:\n# 1) DD_GRPC_SERVICE if defined; or\n# 2) For compatibility, the globally set service + \"-grpc-client\"; or\n# 3) The fall-back \"grpc-client\"\nif \"DD_GRPC_SERVICE\" in os.environ:\n service = os.getenv(\"DD_GRPC_SERVICE\")\nelif config._get_service():\n service = \"{}-{}\".format(config._get_service(), constants.GRPC_SERVICE_CLIENT)\nelse:\n service = constants.GRPC_SERVICE_CLIENT\n\n\n# TODO[tbutt]: keeping name for client config unchanged to maintain backwards\n# compatibility but should change in future\nconfig._add('grpc', dict(\n service_name=service,\n distributed_tracing_enabled=True,\n))\n\n\ndef patch():\n _patch_client()\n _patch_server()\n\n\ndef unpatch():\n _unpatch_client()\n _unpatch_server()\n\n\ndef _patch_client():\n if getattr(constants.GRPC_PIN_MODULE_CLIENT, '__datadog_patch', False):\n return\n setattr(constants.GRPC_PIN_MODULE_CLIENT, '__datadog_patch', True)\n\n Pin().onto(constants.GRPC_PIN_MODULE_CLIENT)\n\n _w('grpc', 'insecure_channel', _client_channel_interceptor)\n _w('grpc', 'secure_channel', _client_channel_interceptor)\n _w('grpc', 'intercept_channel', intercept_channel)\n\n\ndef _unpatch_client():\n if not getattr(constants.GRPC_PIN_MODULE_CLIENT, '__datadog_patch', False):\n return\n setattr(constants.GRPC_PIN_MODULE_CLIENT, '__datadog_patch', False)\n\n pin = Pin.get_from(constants.GRPC_PIN_MODULE_CLIENT)\n if pin:\n pin.remove_from(constants.GRPC_PIN_MODULE_CLIENT)\n\n _u(grpc, 'secure_channel')\n _u(grpc, 'insecure_channel')\n\n\ndef _patch_server():\n if getattr(constants.GRPC_PIN_MODULE_SERVER, '__datadog_patch', False):\n return\n setattr(constants.GRPC_PIN_MODULE_SERVER, '__datadog_patch', True)\n\n Pin().onto(constants.GRPC_PIN_MODULE_SERVER)\n\n _w('grpc', 'server', _server_constructor_interceptor)\n\n\ndef _unpatch_server():\n if not getattr(constants.GRPC_PIN_MODULE_SERVER, '__datadog_patch', False):\n return\n setattr(constants.GRPC_PIN_MODULE_SERVER, '__datadog_patch', False)\n\n pin = Pin.get_from(constants.GRPC_PIN_MODULE_SERVER)\n if pin:\n pin.remove_from(constants.GRPC_PIN_MODULE_SERVER)\n\n _u(grpc, 'server')\n\n\ndef _client_channel_interceptor(wrapped, instance, args, kwargs):\n channel = wrapped(*args, **kwargs)\n\n pin = Pin.get_from(channel)\n if not pin or not pin.enabled():\n return channel\n\n (host, port) = _parse_target_from_arguments(args, kwargs)\n\n interceptor_function = create_client_interceptor(pin, host, port)\n return grpc.intercept_channel(channel, interceptor_function)\n\n\ndef _server_constructor_interceptor(wrapped, instance, args, kwargs):\n # DEV: we clone the pin on the grpc module and configure it for the server\n # interceptor\n\n pin = Pin.get_from(constants.GRPC_PIN_MODULE_SERVER)\n if not pin or not pin.enabled():\n return wrapped(*args, **kwargs)\n\n interceptor = create_server_interceptor(pin)\n\n # DEV: Inject our tracing interceptor first in the list of interceptors\n if 'interceptors' in kwargs:\n kwargs['interceptors'] = (interceptor,) + tuple(kwargs['interceptors'])\n 
else:\n kwargs['interceptors'] = (interceptor,)\n\n return wrapped(*args, **kwargs)\n\n\ndef _parse_target_from_arguments(args, kwargs):\n if 'target' in kwargs:\n target = kwargs['target']\n else:\n target = args[0]\n\n split = target.rsplit(':', 2)\n\n return (split[0], split[1] if len(split) > 1 else None)\n",
"path": "ddtrace/contrib/grpc/patch.py"
}
] | diff --git a/ddtrace/contrib/grpc/patch.py b/ddtrace/contrib/grpc/patch.py
index b815d8a7c4f..1d48914dd03 100644
--- a/ddtrace/contrib/grpc/patch.py
+++ b/ddtrace/contrib/grpc/patch.py
@@ -98,7 +98,7 @@ def _unpatch_server():
def _client_channel_interceptor(wrapped, instance, args, kwargs):
channel = wrapped(*args, **kwargs)
- pin = Pin.get_from(constants.GRPC_PIN_MODULE_CLIENT)
+ pin = Pin.get_from(channel)
if not pin or not pin.enabled():
return channel
|
jazzband__django-simple-history-1218 | Creating historical records for models with M2M fields to `"self"` causes `FieldError`
**Describe the bug**
*See title.*
**To Reproduce**
Steps to reproduce the behavior:
1. Given the following model:
```python
from django.db import models

from simple_history.models import HistoricalRecords


class Person(models.Model):
relations = models.ManyToManyField("self")
history = HistoricalRecords(m2m_fields=[relations])
```
2. Run the following code (which should also create a historical record for the `Person` object):
```python
Person.objects.create()
```
3. This will produce the following error:
```
django.core.exceptions.FieldError: Cannot resolve keyword 'person' into field. Choices are: from_person, from_person_id, id, to_person, to_person_id
```
**Expected behavior**
That a `Person` object and an associated historical record are created successfully, and that the error is not raised.
**Environment (please complete the following information):**
- OS: Windows 11 22H2
- Django Simple History Version: [the current `master` branch](https://github.com/jazzband/django-simple-history/tree/636bcbc46d473862c000101ef040e4eda693117f)
- Django Version: 4.1.6
- Database Version: SQLite 3.38.4
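For context on the `FieldError` quoted in step 3: for a self-referential `ManyToManyField`, Django names the auto-created through model's foreign keys `from_person` and `to_person` rather than `person`, which appears to be why a lookup keyed on the lowercased model name cannot resolve. A minimal sketch (plain Django, no `HistoricalRecords`, assuming the model lives in a configured app) that makes the generated field names visible:
```python
from django.db import models


class Person(models.Model):
    relations = models.ManyToManyField("self")


# The auto-created through model for a self-referential M2M exposes
# "from_person"/"to_person" foreign keys (the same choices listed in the
# FieldError above) and no plain "person" field to filter on.
through = Person.relations.through
print(sorted(f.name for f in through._meta.get_fields()))
```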
| [
{
"content": "import copy\nimport importlib\nimport uuid\nimport warnings\nfrom functools import partial\n\nfrom django.apps import apps\nfrom django.conf import settings\nfrom django.contrib import admin\nfrom django.contrib.auth import get_user_model\nfrom django.core.exceptions import ImproperlyConfigured, ObjectDoesNotExist\nfrom django.db import models\nfrom django.db.models import ManyToManyField\nfrom django.db.models.fields.proxy import OrderWrt\nfrom django.db.models.fields.related import ForeignKey\nfrom django.db.models.fields.related_descriptors import (\n ForwardManyToOneDescriptor,\n ReverseManyToOneDescriptor,\n create_reverse_many_to_one_manager,\n)\nfrom django.db.models.query import QuerySet\nfrom django.db.models.signals import m2m_changed\nfrom django.forms.models import model_to_dict\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django.utils.encoding import smart_str\nfrom django.utils.functional import cached_property\nfrom django.utils.text import format_lazy\nfrom django.utils.translation import gettext_lazy as _\n\nfrom simple_history import utils\n\nfrom . import exceptions\nfrom .manager import SIMPLE_HISTORY_REVERSE_ATTR_NAME, HistoryDescriptor\nfrom .signals import (\n post_create_historical_m2m_records,\n post_create_historical_record,\n pre_create_historical_m2m_records,\n pre_create_historical_record,\n)\nfrom .utils import get_change_reason_from_object\n\ntry:\n from asgiref.local import Local as LocalContext\nexcept ImportError:\n from threading import local as LocalContext\n\nregistered_models = {}\n\n\ndef _default_get_user(request, **kwargs):\n try:\n return request.user\n except AttributeError:\n return None\n\n\ndef _history_user_getter(historical_instance):\n if historical_instance.history_user_id is None:\n return None\n User = get_user_model()\n try:\n return User.objects.get(pk=historical_instance.history_user_id)\n except User.DoesNotExist:\n return None\n\n\ndef _history_user_setter(historical_instance, user):\n if user is not None:\n historical_instance.history_user_id = user.pk\n\n\nclass HistoricalRecords:\n DEFAULT_MODEL_NAME_PREFIX = \"Historical\"\n\n thread = context = LocalContext() # retain thread for backwards compatibility\n m2m_models = {}\n\n def __init__(\n self,\n verbose_name=None,\n verbose_name_plural=None,\n bases=(models.Model,),\n user_related_name=\"+\",\n table_name=None,\n inherit=False,\n excluded_fields=None,\n history_id_field=None,\n history_change_reason_field=None,\n user_model=None,\n get_user=_default_get_user,\n cascade_delete_history=False,\n custom_model_name=None,\n app=None,\n history_user_id_field=None,\n history_user_getter=_history_user_getter,\n history_user_setter=_history_user_setter,\n related_name=None,\n use_base_model_db=False,\n user_db_constraint=True,\n no_db_index=list(),\n excluded_field_kwargs=None,\n m2m_fields=(),\n m2m_fields_model_field_name=\"_history_m2m_fields\",\n m2m_bases=(models.Model,),\n ):\n self.user_set_verbose_name = verbose_name\n self.user_set_verbose_name_plural = verbose_name_plural\n self.user_related_name = user_related_name\n self.user_db_constraint = user_db_constraint\n self.table_name = table_name\n self.inherit = inherit\n self.history_id_field = history_id_field\n self.history_change_reason_field = history_change_reason_field\n self.user_model = user_model\n self.get_user = get_user\n self.cascade_delete_history = cascade_delete_history\n self.custom_model_name = custom_model_name\n self.app = app\n self.user_id_field = 
history_user_id_field\n self.user_getter = history_user_getter\n self.user_setter = history_user_setter\n self.related_name = related_name\n self.use_base_model_db = use_base_model_db\n self.m2m_fields = m2m_fields\n self.m2m_fields_model_field_name = m2m_fields_model_field_name\n\n if isinstance(no_db_index, str):\n no_db_index = [no_db_index]\n self.no_db_index = no_db_index\n\n if excluded_fields is None:\n excluded_fields = []\n self.excluded_fields = excluded_fields\n\n if excluded_field_kwargs is None:\n excluded_field_kwargs = {}\n self.excluded_field_kwargs = excluded_field_kwargs\n try:\n if isinstance(bases, str):\n raise TypeError\n self.bases = (HistoricalChanges,) + tuple(bases)\n except TypeError:\n raise TypeError(\"The `bases` option must be a list or a tuple.\")\n try:\n if isinstance(m2m_bases, str):\n raise TypeError\n self.m2m_bases = (HistoricalChanges,) + tuple(m2m_bases)\n except TypeError:\n raise TypeError(\"The `m2m_bases` option must be a list or a tuple.\")\n\n def contribute_to_class(self, cls, name):\n self.manager_name = name\n self.module = cls.__module__\n self.cls = cls\n models.signals.class_prepared.connect(self.finalize, weak=False)\n self.add_extra_methods(cls)\n\n if cls._meta.abstract and not self.inherit:\n msg = (\n \"HistoricalRecords added to abstract model ({}) without \"\n \"inherit=True\".format(self.cls.__name__)\n )\n warnings.warn(msg, UserWarning)\n\n def add_extra_methods(self, cls):\n def save_without_historical_record(self, *args, **kwargs):\n \"\"\"\n Save model without saving a historical record\n\n Make sure you know what you're doing before you use this method.\n \"\"\"\n self.skip_history_when_saving = True\n try:\n ret = self.save(*args, **kwargs)\n finally:\n del self.skip_history_when_saving\n return ret\n\n setattr(cls, \"save_without_historical_record\", save_without_historical_record)\n\n def finalize(self, sender, **kwargs):\n inherited = False\n if self.cls is not sender: # set in concrete\n inherited = self.inherit and issubclass(sender, self.cls)\n if not inherited:\n return # set in abstract\n\n if hasattr(sender._meta, \"simple_history_manager_attribute\"):\n raise exceptions.MultipleRegistrationsError(\n \"{}.{} registered multiple times for history tracking.\".format(\n sender._meta.app_label, sender._meta.object_name\n )\n )\n history_model = self.create_history_model(sender, inherited)\n\n if inherited:\n # Make sure history model is in same module as concrete model\n module = importlib.import_module(history_model.__module__)\n else:\n module = importlib.import_module(self.module)\n setattr(module, history_model.__name__, history_model)\n\n # The HistoricalRecords object will be discarded,\n # so the signal handlers can't use weak references.\n models.signals.post_save.connect(self.post_save, sender=sender, weak=False)\n models.signals.post_delete.connect(self.post_delete, sender=sender, weak=False)\n\n m2m_fields = self.get_m2m_fields_from_model(sender)\n\n for field in m2m_fields:\n m2m_changed.connect(\n partial(self.m2m_changed, attr=field.name),\n sender=field.remote_field.through,\n weak=False,\n )\n\n descriptor = HistoryDescriptor(history_model)\n setattr(sender, self.manager_name, descriptor)\n sender._meta.simple_history_manager_attribute = self.manager_name\n\n for field in m2m_fields:\n m2m_model = self.create_history_m2m_model(\n history_model, field.remote_field.through\n )\n self.m2m_models[field] = m2m_model\n\n setattr(module, m2m_model.__name__, m2m_model)\n\n m2m_descriptor = 
HistoryDescriptor(m2m_model)\n setattr(history_model, field.name, m2m_descriptor)\n\n def get_history_model_name(self, model):\n if not self.custom_model_name:\n return f\"{self.DEFAULT_MODEL_NAME_PREFIX}{model._meta.object_name}\"\n # Must be trying to use a custom history model name\n if callable(self.custom_model_name):\n name = self.custom_model_name(model._meta.object_name)\n else:\n # simple string\n name = self.custom_model_name\n # Desired class name cannot be same as the model it is tracking\n if not (\n name.lower() == model._meta.object_name.lower()\n and model.__module__ == self.module\n ):\n return name\n raise ValueError(\n \"The 'custom_model_name' option '{}' evaluates to a name that is the same \"\n \"as the model it is tracking. This is not permitted.\".format(\n self.custom_model_name\n )\n )\n\n def create_history_m2m_model(self, model, through_model):\n attrs = {}\n\n fields = self.copy_fields(through_model)\n attrs.update(fields)\n attrs.update(self.get_extra_fields_m2m(model, through_model, fields))\n\n name = self.get_history_model_name(through_model)\n registered_models[through_model._meta.db_table] = through_model\n\n attrs.update(Meta=type(\"Meta\", (), self.get_meta_options_m2m(through_model)))\n\n m2m_history_model = type(str(name), self.m2m_bases, attrs)\n\n return m2m_history_model\n\n def create_history_model(self, model, inherited):\n \"\"\"\n Creates a historical model to associate with the model provided.\n \"\"\"\n attrs = {\n \"__module__\": self.module,\n \"_history_excluded_fields\": self.excluded_fields,\n \"_history_m2m_fields\": self.get_m2m_fields_from_model(model),\n \"tracked_fields\": self.fields_included(model),\n }\n\n app_module = \"%s.models\" % model._meta.app_label\n\n if inherited:\n # inherited use models module\n attrs[\"__module__\"] = model.__module__\n elif model.__module__ != self.module:\n # registered under different app\n attrs[\"__module__\"] = self.module\n elif app_module != self.module:\n # Abuse an internal API because the app registry is loading.\n app = apps.app_configs[model._meta.app_label]\n models_module = app.name\n attrs[\"__module__\"] = models_module\n\n fields = self.copy_fields(model)\n attrs.update(fields)\n attrs.update(self.get_extra_fields(model, fields))\n # type in python2 wants str as a first argument\n attrs.update(Meta=type(\"Meta\", (), self.get_meta_options(model)))\n if not inherited and self.table_name is not None:\n attrs[\"Meta\"].db_table = self.table_name\n\n # Set as the default then check for overrides\n name = self.get_history_model_name(model)\n\n registered_models[model._meta.db_table] = model\n history_model = type(str(name), self.bases, attrs)\n return history_model\n\n def fields_included(self, model):\n fields = []\n for field in model._meta.fields:\n if field.name not in self.excluded_fields:\n fields.append(field)\n return fields\n\n def field_excluded_kwargs(self, field):\n \"\"\"\n Find the excluded kwargs for a given field.\n \"\"\"\n return self.excluded_field_kwargs.get(field.name, set())\n\n def copy_fields(self, model):\n \"\"\"\n Creates copies of the model's original fields, returning\n a dictionary mapping field name to copied field object.\n \"\"\"\n fields = {}\n for field in self.fields_included(model):\n field = copy.copy(field)\n field.remote_field = copy.copy(field.remote_field)\n if isinstance(field, OrderWrt):\n # OrderWrt is a proxy field, switch to a plain IntegerField\n field.__class__ = models.IntegerField\n if isinstance(field, models.ForeignKey):\n old_field = 
field\n old_swappable = old_field.swappable\n old_field.swappable = False\n try:\n _name, _path, args, field_args = old_field.deconstruct()\n finally:\n old_field.swappable = old_swappable\n if getattr(old_field, \"one_to_one\", False) or isinstance(\n old_field, models.OneToOneField\n ):\n FieldType = models.ForeignKey\n else:\n FieldType = type(old_field)\n\n # Remove any excluded kwargs for the field.\n # This is useful when a custom OneToOneField is being used that\n # has a different set of arguments than ForeignKey\n for exclude_arg in self.field_excluded_kwargs(old_field):\n field_args.pop(exclude_arg, None)\n\n # If field_args['to'] is 'self' then we have a case where the object\n # has a foreign key to itself. If we pass the historical record's\n # field to = 'self', the foreign key will point to an historical\n # record rather than the base record. We can use old_field.model here.\n if field_args.get(\"to\", None) == \"self\":\n field_args[\"to\"] = old_field.model\n\n # Override certain arguments passed when creating the field\n # so that they work for the historical field.\n field_args.update(\n db_constraint=False,\n related_name=\"+\",\n null=True,\n blank=True,\n primary_key=False,\n db_index=True,\n serialize=True,\n unique=False,\n on_delete=models.DO_NOTHING,\n )\n field = FieldType(*args, **field_args)\n field.name = old_field.name\n else:\n transform_field(field)\n\n # drop db index\n if field.name in self.no_db_index:\n field.db_index = False\n\n fields[field.name] = field\n return fields\n\n def _get_history_change_reason_field(self):\n if self.history_change_reason_field:\n # User specific field from init\n history_change_reason_field = self.history_change_reason_field\n elif getattr(\n settings, \"SIMPLE_HISTORY_HISTORY_CHANGE_REASON_USE_TEXT_FIELD\", False\n ):\n # Use text field with no max length, not enforced by DB anyways\n history_change_reason_field = models.TextField(null=True)\n else:\n # Current default, with max length\n history_change_reason_field = models.CharField(max_length=100, null=True)\n\n return history_change_reason_field\n\n def _get_history_id_field(self):\n if self.history_id_field:\n history_id_field = self.history_id_field.clone()\n history_id_field.primary_key = True\n history_id_field.editable = False\n elif getattr(settings, \"SIMPLE_HISTORY_HISTORY_ID_USE_UUID\", False):\n history_id_field = models.UUIDField(\n primary_key=True, default=uuid.uuid4, editable=False\n )\n else:\n history_id_field = models.AutoField(primary_key=True)\n\n return history_id_field\n\n def _get_history_user_fields(self):\n if self.user_id_field is not None:\n # Tracking user using explicit id rather than Django ForeignKey\n history_user_fields = {\n \"history_user\": property(self.user_getter, self.user_setter),\n \"history_user_id\": self.user_id_field,\n }\n else:\n user_model = self.user_model or getattr(\n settings, \"AUTH_USER_MODEL\", \"auth.User\"\n )\n\n history_user_fields = {\n \"history_user\": models.ForeignKey(\n user_model,\n null=True,\n related_name=self.user_related_name,\n on_delete=models.SET_NULL,\n db_constraint=self.user_db_constraint,\n )\n }\n\n return history_user_fields\n\n def _get_history_related_field(self, model):\n if self.related_name:\n if self.manager_name == self.related_name:\n raise exceptions.RelatedNameConflictError(\n \"The related name must not be called like the history manager.\"\n )\n return {\n \"history_relation\": models.ForeignKey(\n model,\n on_delete=models.DO_NOTHING,\n related_name=self.related_name,\n 
db_constraint=False,\n )\n }\n else:\n return {}\n\n def get_extra_fields_m2m(self, model, through_model, fields):\n \"\"\"Return dict of extra fields added to the m2m historical record model\"\"\"\n\n extra_fields = {\n \"__module__\": model.__module__,\n \"__str__\": lambda self: \"{} as of {}\".format(\n self._meta.verbose_name, self.history.history_date\n ),\n \"history\": models.ForeignKey(\n model,\n db_constraint=False,\n on_delete=models.DO_NOTHING,\n ),\n \"instance_type\": through_model,\n \"m2m_history_id\": self._get_history_id_field(),\n }\n\n return extra_fields\n\n def get_extra_fields(self, model, fields):\n \"\"\"Return dict of extra fields added to the historical record model\"\"\"\n\n def revert_url(self):\n \"\"\"URL for this change in the default admin site.\"\"\"\n opts = model._meta\n app_label, model_name = opts.app_label, opts.model_name\n return reverse(\n f\"{admin.site.name}:{app_label}_{model_name}_simple_history\",\n args=[getattr(self, opts.pk.attname), self.history_id],\n )\n\n def get_instance(self):\n attrs = {\n field.attname: getattr(self, field.attname) for field in fields.values()\n }\n if self._history_excluded_fields:\n # We don't add ManyToManyFields to this list because they may cause\n # the subsequent `.get()` call to fail. See #706 for context.\n excluded_attnames = [\n model._meta.get_field(field).attname\n for field in self._history_excluded_fields\n if not isinstance(model._meta.get_field(field), ManyToManyField)\n ]\n try:\n values = (\n model.objects.filter(pk=getattr(self, model._meta.pk.attname))\n .values(*excluded_attnames)\n .get()\n )\n except ObjectDoesNotExist:\n pass\n else:\n attrs.update(values)\n result = model(**attrs)\n # this is the only way external code could know an instance is historical\n setattr(result, SIMPLE_HISTORY_REVERSE_ATTR_NAME, self)\n return result\n\n def get_next_record(self):\n \"\"\"\n Get the next history record for the instance. `None` if last.\n \"\"\"\n history = utils.get_history_manager_from_history(self)\n return (\n history.filter(history_date__gt=self.history_date)\n .order_by(\"history_date\")\n .first()\n )\n\n def get_prev_record(self):\n \"\"\"\n Get the previous history record for the instance. 
`None` if first.\n \"\"\"\n history = utils.get_history_manager_from_history(self)\n return (\n history.filter(history_date__lt=self.history_date)\n .order_by(\"history_date\")\n .last()\n )\n\n def get_default_history_user(instance):\n \"\"\"\n Returns the user specified by `get_user` method for manually creating\n historical objects\n \"\"\"\n return self.get_history_user(instance)\n\n extra_fields = {\n \"history_id\": self._get_history_id_field(),\n \"history_date\": models.DateTimeField(db_index=self._date_indexing is True),\n \"history_change_reason\": self._get_history_change_reason_field(),\n \"history_type\": models.CharField(\n max_length=1,\n choices=((\"+\", _(\"Created\")), (\"~\", _(\"Changed\")), (\"-\", _(\"Deleted\"))),\n ),\n \"history_object\": HistoricalObjectDescriptor(\n model, self.fields_included(model)\n ),\n \"instance\": property(get_instance),\n \"instance_type\": model,\n \"next_record\": property(get_next_record),\n \"prev_record\": property(get_prev_record),\n \"revert_url\": revert_url,\n \"__str__\": lambda self: \"{} as of {}\".format(\n self.history_object, self.history_date\n ),\n \"get_default_history_user\": staticmethod(get_default_history_user),\n }\n\n extra_fields.update(self._get_history_related_field(model))\n extra_fields.update(self._get_history_user_fields())\n\n return extra_fields\n\n @property\n def _date_indexing(self):\n \"\"\"False, True, or 'composite'; default is True\"\"\"\n result = getattr(settings, \"SIMPLE_HISTORY_DATE_INDEX\", True)\n valid = True\n if isinstance(result, str):\n result = result.lower()\n if result not in (\"composite\",):\n valid = False\n elif not isinstance(result, bool):\n valid = False\n if not valid:\n raise ImproperlyConfigured(\n \"SIMPLE_HISTORY_DATE_INDEX must be one of (False, True, 'Composite')\"\n )\n return result\n\n def get_meta_options_m2m(self, through_model):\n \"\"\"\n Returns a dictionary of fields that will be added to\n the Meta inner class of the m2m historical record model.\n \"\"\"\n name = self.get_history_model_name(through_model)\n\n meta_fields = {\"verbose_name\": name}\n\n if self.app:\n meta_fields[\"app_label\"] = self.app\n\n return meta_fields\n\n def get_meta_options(self, model):\n \"\"\"\n Returns a dictionary of fields that will be added to\n the Meta inner class of the historical record model.\n \"\"\"\n meta_fields = {\n \"ordering\": (\"-history_date\", \"-history_id\"),\n \"get_latest_by\": (\"history_date\", \"history_id\"),\n }\n if self.user_set_verbose_name:\n name = self.user_set_verbose_name\n else:\n name = format_lazy(\"historical {}\", smart_str(model._meta.verbose_name))\n if self.user_set_verbose_name_plural:\n plural_name = self.user_set_verbose_name_plural\n else:\n plural_name = format_lazy(\n \"historical {}\", smart_str(model._meta.verbose_name_plural)\n )\n meta_fields[\"verbose_name\"] = name\n meta_fields[\"verbose_name_plural\"] = plural_name\n if self.app:\n meta_fields[\"app_label\"] = self.app\n if self._date_indexing == \"composite\":\n meta_fields[\"indexes\"] = (\n models.Index(fields=(\"history_date\", model._meta.pk.attname)),\n )\n return meta_fields\n\n def post_save(self, instance, created, using=None, **kwargs):\n if not getattr(settings, \"SIMPLE_HISTORY_ENABLED\", True):\n return\n if not created and hasattr(instance, \"skip_history_when_saving\"):\n return\n if not kwargs.get(\"raw\", False):\n self.create_historical_record(instance, created and \"+\" or \"~\", using=using)\n\n def post_delete(self, instance, using=None, **kwargs):\n if 
not getattr(settings, \"SIMPLE_HISTORY_ENABLED\", True):\n return\n if self.cascade_delete_history:\n manager = getattr(instance, self.manager_name)\n manager.using(using).all().delete()\n else:\n self.create_historical_record(instance, \"-\", using=using)\n\n def get_change_reason_for_object(self, instance, history_type, using):\n \"\"\"\n Get change reason for object.\n Customize this method to automatically fill change reason from context.\n \"\"\"\n return get_change_reason_from_object(instance)\n\n def m2m_changed(self, instance, action, attr, pk_set, reverse, **_):\n if hasattr(instance, \"skip_history_when_saving\"):\n return\n\n if action in (\"post_add\", \"post_remove\", \"post_clear\"):\n # It should be safe to ~ this since the row must exist to modify m2m on it\n self.create_historical_record(instance, \"~\")\n\n def create_historical_record_m2ms(self, history_instance, instance):\n for field in history_instance._history_m2m_fields:\n m2m_history_model = self.m2m_models[field]\n original_instance = history_instance.instance\n through_model = getattr(original_instance, field.name).through\n\n insert_rows = []\n\n through_field_name = type(original_instance).__name__.lower()\n\n rows = through_model.objects.filter(**{through_field_name: instance})\n\n for row in rows:\n insert_row = {\"history\": history_instance}\n\n for through_model_field in through_model._meta.fields:\n insert_row[through_model_field.name] = getattr(\n row, through_model_field.name\n )\n insert_rows.append(m2m_history_model(**insert_row))\n\n pre_create_historical_m2m_records.send(\n sender=m2m_history_model,\n rows=insert_rows,\n history_instance=history_instance,\n instance=instance,\n field=field,\n )\n created_rows = m2m_history_model.objects.bulk_create(insert_rows)\n post_create_historical_m2m_records.send(\n sender=m2m_history_model,\n created_rows=created_rows,\n history_instance=history_instance,\n instance=instance,\n field=field,\n )\n\n def create_historical_record(self, instance, history_type, using=None):\n using = using if self.use_base_model_db else None\n history_date = getattr(instance, \"_history_date\", timezone.now())\n history_user = self.get_history_user(instance)\n history_change_reason = self.get_change_reason_for_object(\n instance, history_type, using\n )\n manager = getattr(instance, self.manager_name)\n\n attrs = {}\n for field in self.fields_included(instance):\n attrs[field.attname] = getattr(instance, field.attname)\n\n relation_field = getattr(manager.model, \"history_relation\", None)\n if relation_field is not None:\n attrs[\"history_relation\"] = instance\n\n history_instance = manager.model(\n history_date=history_date,\n history_type=history_type,\n history_user=history_user,\n history_change_reason=history_change_reason,\n **attrs,\n )\n\n pre_create_historical_record.send(\n sender=manager.model,\n instance=instance,\n history_date=history_date,\n history_user=history_user,\n history_change_reason=history_change_reason,\n history_instance=history_instance,\n using=using,\n )\n\n history_instance.save(using=using)\n self.create_historical_record_m2ms(history_instance, instance)\n\n post_create_historical_record.send(\n sender=manager.model,\n instance=instance,\n history_instance=history_instance,\n history_date=history_date,\n history_user=history_user,\n history_change_reason=history_change_reason,\n using=using,\n )\n\n def get_history_user(self, instance):\n \"\"\"Get the modifying user from instance or middleware.\"\"\"\n try:\n return instance._history_user\n 
except AttributeError:\n request = None\n try:\n if self.context.request.user.is_authenticated:\n request = self.context.request\n except AttributeError:\n pass\n\n return self.get_user(instance=instance, request=request)\n\n def get_m2m_fields_from_model(self, model):\n m2m_fields = set(self.m2m_fields)\n try:\n m2m_fields.update(getattr(model, self.m2m_fields_model_field_name))\n except AttributeError:\n pass\n return [getattr(model, field.name).field for field in m2m_fields]\n\n\ndef transform_field(field):\n \"\"\"Customize field appropriately for use in historical model\"\"\"\n field.name = field.attname\n if isinstance(field, models.BigAutoField):\n field.__class__ = models.BigIntegerField\n elif isinstance(field, models.AutoField):\n field.__class__ = models.IntegerField\n\n elif isinstance(field, models.FileField):\n # Don't copy file, just path.\n if getattr(settings, \"SIMPLE_HISTORY_FILEFIELD_TO_CHARFIELD\", False):\n field.__class__ = models.CharField\n else:\n field.__class__ = models.TextField\n\n # Historical instance shouldn't change create/update timestamps\n field.auto_now = False\n field.auto_now_add = False\n # Just setting db_collation explicitly since we're not using\n # field.deconstruct() here\n field.db_collation = None\n\n if field.primary_key or field.unique:\n # Unique fields can no longer be guaranteed unique,\n # but they should still be indexed for faster lookups.\n field.primary_key = False\n field._unique = False\n field.db_index = True\n field.serialize = True\n\n\nclass HistoricForwardManyToOneDescriptor(ForwardManyToOneDescriptor):\n \"\"\"\n Overrides get_queryset to provide historic query support, should the\n instance be historic (and therefore was generated by a timepoint query)\n and the other side of the relation also uses a history manager.\n \"\"\"\n\n def get_queryset(self, **hints) -> QuerySet:\n instance = hints.get(\"instance\")\n if instance:\n history = getattr(instance, SIMPLE_HISTORY_REVERSE_ATTR_NAME, None)\n histmgr = getattr(\n self.field.remote_field.model,\n getattr(\n self.field.remote_field.model._meta,\n \"simple_history_manager_attribute\",\n \"_notthere\",\n ),\n None,\n )\n if history and histmgr:\n return histmgr.as_of(getattr(history, \"_as_of\", history.history_date))\n return super().get_queryset(**hints)\n\n\nclass HistoricReverseManyToOneDescriptor(ReverseManyToOneDescriptor):\n \"\"\"\n Overrides get_queryset to provide historic query support, should the\n instance be historic (and therefore was generated by a timepoint query)\n and the other side of the relation also uses a history manager.\n \"\"\"\n\n @cached_property\n def related_manager_cls(self):\n related_model = self.rel.related_model\n\n class HistoricRelationModelManager(related_model._default_manager.__class__):\n def get_queryset(self):\n try:\n return self.instance._prefetched_objects_cache[\n self.field.remote_field.get_cache_name()\n ]\n except (AttributeError, KeyError):\n history = getattr(\n self.instance, SIMPLE_HISTORY_REVERSE_ATTR_NAME, None\n )\n histmgr = getattr(\n self.model,\n getattr(\n self.model._meta,\n \"simple_history_manager_attribute\",\n \"_notthere\",\n ),\n None,\n )\n if history and histmgr:\n queryset = histmgr.as_of(\n getattr(history, \"_as_of\", history.history_date)\n )\n else:\n queryset = super().get_queryset()\n return self._apply_rel_filters(queryset)\n\n return create_reverse_many_to_one_manager(\n HistoricRelationModelManager, self.rel\n )\n\n\nclass HistoricForeignKey(ForeignKey):\n \"\"\"\n Allows foreign keys to work 
properly from a historic instance.\n\n If you use as_of queries to extract historical instances from\n a model, and you have other models that are related by foreign\n key and also historic, changing them to a HistoricForeignKey\n field type will allow you to naturally cross the relationship\n boundary at the same point in time as the origin instance.\n\n A historic instance maintains an attribute (\"_historic\") when\n it is historic, holding the historic record instance and the\n timepoint used to query it (\"_as_of\"). HistoricForeignKey\n looks for this and uses an as_of query against the related\n object so the relationship is assessed at the same timepoint.\n \"\"\"\n\n forward_related_accessor_class = HistoricForwardManyToOneDescriptor\n related_accessor_class = HistoricReverseManyToOneDescriptor\n\n\ndef is_historic(instance):\n \"\"\"\n Returns True if the instance was acquired with an as_of timepoint.\n \"\"\"\n return to_historic(instance) is not None\n\n\ndef to_historic(instance):\n \"\"\"\n Returns a historic model instance if the instance was acquired with\n an as_of timepoint, or None.\n \"\"\"\n return getattr(instance, SIMPLE_HISTORY_REVERSE_ATTR_NAME, None)\n\n\nclass HistoricalObjectDescriptor:\n def __init__(self, model, fields_included):\n self.model = model\n self.fields_included = fields_included\n\n def __get__(self, instance, owner):\n if instance is None:\n return self\n values = {f.attname: getattr(instance, f.attname) for f in self.fields_included}\n return self.model(**values)\n\n\nclass HistoricalChanges:\n def diff_against(self, old_history, excluded_fields=None, included_fields=None):\n if not isinstance(old_history, type(self)):\n raise TypeError(\n (\"unsupported type(s) for diffing: \" \"'{}' and '{}'\").format(\n type(self), type(old_history)\n )\n )\n if excluded_fields is None:\n excluded_fields = set()\n\n included_m2m_fields = {field.name for field in old_history._history_m2m_fields}\n if included_fields is None:\n included_fields = {f.name for f in old_history.tracked_fields if f.editable}\n else:\n included_m2m_fields = included_m2m_fields.intersection(included_fields)\n\n fields = (\n set(included_fields)\n .difference(included_m2m_fields)\n .difference(excluded_fields)\n )\n m2m_fields = set(included_m2m_fields).difference(excluded_fields)\n\n changes = []\n changed_fields = []\n\n old_values = model_to_dict(old_history, fields=fields)\n current_values = model_to_dict(self, fields=fields)\n\n for field in fields:\n old_value = old_values[field]\n current_value = current_values[field]\n\n if old_value != current_value:\n changes.append(ModelChange(field, old_value, current_value))\n changed_fields.append(field)\n\n # Separately compare m2m fields:\n for field in m2m_fields:\n # First retrieve a single item to get the field names from:\n reference_history_m2m_item = (\n getattr(old_history, field).first() or getattr(self, field).first()\n )\n history_field_names = []\n if reference_history_m2m_item:\n # Create a list of field names to compare against.\n # The list is generated without the primary key of the intermediate\n # table, the foreign key to the history record, and the actual 'history'\n # field, to avoid false positives while diffing.\n history_field_names = [\n f.name\n for f in reference_history_m2m_item._meta.fields\n if f.editable and f.name not in [\"id\", \"m2m_history_id\", \"history\"]\n ]\n\n old_rows = list(getattr(old_history, field).values(*history_field_names))\n new_rows = list(getattr(self, 
field).values(*history_field_names))\n\n if old_rows != new_rows:\n change = ModelChange(field, old_rows, new_rows)\n changes.append(change)\n changed_fields.append(field)\n\n return ModelDelta(changes, changed_fields, old_history, self)\n\n\nclass ModelChange:\n def __init__(self, field_name, old_value, new_value):\n self.field = field_name\n self.old = old_value\n self.new = new_value\n\n\nclass ModelDelta:\n def __init__(self, changes, changed_fields, old_record, new_record):\n self.changes = changes\n self.changed_fields = changed_fields\n self.old_record = old_record\n self.new_record = new_record\n",
"path": "simple_history/models.py"
}
] | [
{
"content": "import copy\nimport importlib\nimport uuid\nimport warnings\nfrom functools import partial\n\nfrom django.apps import apps\nfrom django.conf import settings\nfrom django.contrib import admin\nfrom django.contrib.auth import get_user_model\nfrom django.core.exceptions import ImproperlyConfigured, ObjectDoesNotExist\nfrom django.db import models\nfrom django.db.models import ManyToManyField\nfrom django.db.models.fields.proxy import OrderWrt\nfrom django.db.models.fields.related import ForeignKey\nfrom django.db.models.fields.related_descriptors import (\n ForwardManyToOneDescriptor,\n ReverseManyToOneDescriptor,\n create_reverse_many_to_one_manager,\n)\nfrom django.db.models.query import QuerySet\nfrom django.db.models.signals import m2m_changed\nfrom django.forms.models import model_to_dict\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django.utils.encoding import smart_str\nfrom django.utils.functional import cached_property\nfrom django.utils.text import format_lazy\nfrom django.utils.translation import gettext_lazy as _\n\nfrom simple_history import utils\n\nfrom . import exceptions\nfrom .manager import SIMPLE_HISTORY_REVERSE_ATTR_NAME, HistoryDescriptor\nfrom .signals import (\n post_create_historical_m2m_records,\n post_create_historical_record,\n pre_create_historical_m2m_records,\n pre_create_historical_record,\n)\nfrom .utils import get_change_reason_from_object\n\ntry:\n from asgiref.local import Local as LocalContext\nexcept ImportError:\n from threading import local as LocalContext\n\nregistered_models = {}\n\n\ndef _default_get_user(request, **kwargs):\n try:\n return request.user\n except AttributeError:\n return None\n\n\ndef _history_user_getter(historical_instance):\n if historical_instance.history_user_id is None:\n return None\n User = get_user_model()\n try:\n return User.objects.get(pk=historical_instance.history_user_id)\n except User.DoesNotExist:\n return None\n\n\ndef _history_user_setter(historical_instance, user):\n if user is not None:\n historical_instance.history_user_id = user.pk\n\n\nclass HistoricalRecords:\n DEFAULT_MODEL_NAME_PREFIX = \"Historical\"\n\n thread = context = LocalContext() # retain thread for backwards compatibility\n m2m_models = {}\n\n def __init__(\n self,\n verbose_name=None,\n verbose_name_plural=None,\n bases=(models.Model,),\n user_related_name=\"+\",\n table_name=None,\n inherit=False,\n excluded_fields=None,\n history_id_field=None,\n history_change_reason_field=None,\n user_model=None,\n get_user=_default_get_user,\n cascade_delete_history=False,\n custom_model_name=None,\n app=None,\n history_user_id_field=None,\n history_user_getter=_history_user_getter,\n history_user_setter=_history_user_setter,\n related_name=None,\n use_base_model_db=False,\n user_db_constraint=True,\n no_db_index=list(),\n excluded_field_kwargs=None,\n m2m_fields=(),\n m2m_fields_model_field_name=\"_history_m2m_fields\",\n m2m_bases=(models.Model,),\n ):\n self.user_set_verbose_name = verbose_name\n self.user_set_verbose_name_plural = verbose_name_plural\n self.user_related_name = user_related_name\n self.user_db_constraint = user_db_constraint\n self.table_name = table_name\n self.inherit = inherit\n self.history_id_field = history_id_field\n self.history_change_reason_field = history_change_reason_field\n self.user_model = user_model\n self.get_user = get_user\n self.cascade_delete_history = cascade_delete_history\n self.custom_model_name = custom_model_name\n self.app = app\n self.user_id_field = 
history_user_id_field\n self.user_getter = history_user_getter\n self.user_setter = history_user_setter\n self.related_name = related_name\n self.use_base_model_db = use_base_model_db\n self.m2m_fields = m2m_fields\n self.m2m_fields_model_field_name = m2m_fields_model_field_name\n\n if isinstance(no_db_index, str):\n no_db_index = [no_db_index]\n self.no_db_index = no_db_index\n\n if excluded_fields is None:\n excluded_fields = []\n self.excluded_fields = excluded_fields\n\n if excluded_field_kwargs is None:\n excluded_field_kwargs = {}\n self.excluded_field_kwargs = excluded_field_kwargs\n try:\n if isinstance(bases, str):\n raise TypeError\n self.bases = (HistoricalChanges,) + tuple(bases)\n except TypeError:\n raise TypeError(\"The `bases` option must be a list or a tuple.\")\n try:\n if isinstance(m2m_bases, str):\n raise TypeError\n self.m2m_bases = (HistoricalChanges,) + tuple(m2m_bases)\n except TypeError:\n raise TypeError(\"The `m2m_bases` option must be a list or a tuple.\")\n\n def contribute_to_class(self, cls, name):\n self.manager_name = name\n self.module = cls.__module__\n self.cls = cls\n models.signals.class_prepared.connect(self.finalize, weak=False)\n self.add_extra_methods(cls)\n\n if cls._meta.abstract and not self.inherit:\n msg = (\n \"HistoricalRecords added to abstract model ({}) without \"\n \"inherit=True\".format(self.cls.__name__)\n )\n warnings.warn(msg, UserWarning)\n\n def add_extra_methods(self, cls):\n def save_without_historical_record(self, *args, **kwargs):\n \"\"\"\n Save model without saving a historical record\n\n Make sure you know what you're doing before you use this method.\n \"\"\"\n self.skip_history_when_saving = True\n try:\n ret = self.save(*args, **kwargs)\n finally:\n del self.skip_history_when_saving\n return ret\n\n setattr(cls, \"save_without_historical_record\", save_without_historical_record)\n\n def finalize(self, sender, **kwargs):\n inherited = False\n if self.cls is not sender: # set in concrete\n inherited = self.inherit and issubclass(sender, self.cls)\n if not inherited:\n return # set in abstract\n\n if hasattr(sender._meta, \"simple_history_manager_attribute\"):\n raise exceptions.MultipleRegistrationsError(\n \"{}.{} registered multiple times for history tracking.\".format(\n sender._meta.app_label, sender._meta.object_name\n )\n )\n history_model = self.create_history_model(sender, inherited)\n\n if inherited:\n # Make sure history model is in same module as concrete model\n module = importlib.import_module(history_model.__module__)\n else:\n module = importlib.import_module(self.module)\n setattr(module, history_model.__name__, history_model)\n\n # The HistoricalRecords object will be discarded,\n # so the signal handlers can't use weak references.\n models.signals.post_save.connect(self.post_save, sender=sender, weak=False)\n models.signals.post_delete.connect(self.post_delete, sender=sender, weak=False)\n\n m2m_fields = self.get_m2m_fields_from_model(sender)\n\n for field in m2m_fields:\n m2m_changed.connect(\n partial(self.m2m_changed, attr=field.name),\n sender=field.remote_field.through,\n weak=False,\n )\n\n descriptor = HistoryDescriptor(history_model)\n setattr(sender, self.manager_name, descriptor)\n sender._meta.simple_history_manager_attribute = self.manager_name\n\n for field in m2m_fields:\n m2m_model = self.create_history_m2m_model(\n history_model, field.remote_field.through\n )\n self.m2m_models[field] = m2m_model\n\n setattr(module, m2m_model.__name__, m2m_model)\n\n m2m_descriptor = 
HistoryDescriptor(m2m_model)\n setattr(history_model, field.name, m2m_descriptor)\n\n def get_history_model_name(self, model):\n if not self.custom_model_name:\n return f\"{self.DEFAULT_MODEL_NAME_PREFIX}{model._meta.object_name}\"\n # Must be trying to use a custom history model name\n if callable(self.custom_model_name):\n name = self.custom_model_name(model._meta.object_name)\n else:\n # simple string\n name = self.custom_model_name\n # Desired class name cannot be same as the model it is tracking\n if not (\n name.lower() == model._meta.object_name.lower()\n and model.__module__ == self.module\n ):\n return name\n raise ValueError(\n \"The 'custom_model_name' option '{}' evaluates to a name that is the same \"\n \"as the model it is tracking. This is not permitted.\".format(\n self.custom_model_name\n )\n )\n\n def create_history_m2m_model(self, model, through_model):\n attrs = {}\n\n fields = self.copy_fields(through_model)\n attrs.update(fields)\n attrs.update(self.get_extra_fields_m2m(model, through_model, fields))\n\n name = self.get_history_model_name(through_model)\n registered_models[through_model._meta.db_table] = through_model\n\n attrs.update(Meta=type(\"Meta\", (), self.get_meta_options_m2m(through_model)))\n\n m2m_history_model = type(str(name), self.m2m_bases, attrs)\n\n return m2m_history_model\n\n def create_history_model(self, model, inherited):\n \"\"\"\n Creates a historical model to associate with the model provided.\n \"\"\"\n attrs = {\n \"__module__\": self.module,\n \"_history_excluded_fields\": self.excluded_fields,\n \"_history_m2m_fields\": self.get_m2m_fields_from_model(model),\n \"tracked_fields\": self.fields_included(model),\n }\n\n app_module = \"%s.models\" % model._meta.app_label\n\n if inherited:\n # inherited use models module\n attrs[\"__module__\"] = model.__module__\n elif model.__module__ != self.module:\n # registered under different app\n attrs[\"__module__\"] = self.module\n elif app_module != self.module:\n # Abuse an internal API because the app registry is loading.\n app = apps.app_configs[model._meta.app_label]\n models_module = app.name\n attrs[\"__module__\"] = models_module\n\n fields = self.copy_fields(model)\n attrs.update(fields)\n attrs.update(self.get_extra_fields(model, fields))\n # type in python2 wants str as a first argument\n attrs.update(Meta=type(\"Meta\", (), self.get_meta_options(model)))\n if not inherited and self.table_name is not None:\n attrs[\"Meta\"].db_table = self.table_name\n\n # Set as the default then check for overrides\n name = self.get_history_model_name(model)\n\n registered_models[model._meta.db_table] = model\n history_model = type(str(name), self.bases, attrs)\n return history_model\n\n def fields_included(self, model):\n fields = []\n for field in model._meta.fields:\n if field.name not in self.excluded_fields:\n fields.append(field)\n return fields\n\n def field_excluded_kwargs(self, field):\n \"\"\"\n Find the excluded kwargs for a given field.\n \"\"\"\n return self.excluded_field_kwargs.get(field.name, set())\n\n def copy_fields(self, model):\n \"\"\"\n Creates copies of the model's original fields, returning\n a dictionary mapping field name to copied field object.\n \"\"\"\n fields = {}\n for field in self.fields_included(model):\n field = copy.copy(field)\n field.remote_field = copy.copy(field.remote_field)\n if isinstance(field, OrderWrt):\n # OrderWrt is a proxy field, switch to a plain IntegerField\n field.__class__ = models.IntegerField\n if isinstance(field, models.ForeignKey):\n old_field = 
field\n old_swappable = old_field.swappable\n old_field.swappable = False\n try:\n _name, _path, args, field_args = old_field.deconstruct()\n finally:\n old_field.swappable = old_swappable\n if getattr(old_field, \"one_to_one\", False) or isinstance(\n old_field, models.OneToOneField\n ):\n FieldType = models.ForeignKey\n else:\n FieldType = type(old_field)\n\n # Remove any excluded kwargs for the field.\n # This is useful when a custom OneToOneField is being used that\n # has a different set of arguments than ForeignKey\n for exclude_arg in self.field_excluded_kwargs(old_field):\n field_args.pop(exclude_arg, None)\n\n # If field_args['to'] is 'self' then we have a case where the object\n # has a foreign key to itself. If we pass the historical record's\n # field to = 'self', the foreign key will point to an historical\n # record rather than the base record. We can use old_field.model here.\n if field_args.get(\"to\", None) == \"self\":\n field_args[\"to\"] = old_field.model\n\n # Override certain arguments passed when creating the field\n # so that they work for the historical field.\n field_args.update(\n db_constraint=False,\n related_name=\"+\",\n null=True,\n blank=True,\n primary_key=False,\n db_index=True,\n serialize=True,\n unique=False,\n on_delete=models.DO_NOTHING,\n )\n field = FieldType(*args, **field_args)\n field.name = old_field.name\n else:\n transform_field(field)\n\n # drop db index\n if field.name in self.no_db_index:\n field.db_index = False\n\n fields[field.name] = field\n return fields\n\n def _get_history_change_reason_field(self):\n if self.history_change_reason_field:\n # User specific field from init\n history_change_reason_field = self.history_change_reason_field\n elif getattr(\n settings, \"SIMPLE_HISTORY_HISTORY_CHANGE_REASON_USE_TEXT_FIELD\", False\n ):\n # Use text field with no max length, not enforced by DB anyways\n history_change_reason_field = models.TextField(null=True)\n else:\n # Current default, with max length\n history_change_reason_field = models.CharField(max_length=100, null=True)\n\n return history_change_reason_field\n\n def _get_history_id_field(self):\n if self.history_id_field:\n history_id_field = self.history_id_field.clone()\n history_id_field.primary_key = True\n history_id_field.editable = False\n elif getattr(settings, \"SIMPLE_HISTORY_HISTORY_ID_USE_UUID\", False):\n history_id_field = models.UUIDField(\n primary_key=True, default=uuid.uuid4, editable=False\n )\n else:\n history_id_field = models.AutoField(primary_key=True)\n\n return history_id_field\n\n def _get_history_user_fields(self):\n if self.user_id_field is not None:\n # Tracking user using explicit id rather than Django ForeignKey\n history_user_fields = {\n \"history_user\": property(self.user_getter, self.user_setter),\n \"history_user_id\": self.user_id_field,\n }\n else:\n user_model = self.user_model or getattr(\n settings, \"AUTH_USER_MODEL\", \"auth.User\"\n )\n\n history_user_fields = {\n \"history_user\": models.ForeignKey(\n user_model,\n null=True,\n related_name=self.user_related_name,\n on_delete=models.SET_NULL,\n db_constraint=self.user_db_constraint,\n )\n }\n\n return history_user_fields\n\n def _get_history_related_field(self, model):\n if self.related_name:\n if self.manager_name == self.related_name:\n raise exceptions.RelatedNameConflictError(\n \"The related name must not be called like the history manager.\"\n )\n return {\n \"history_relation\": models.ForeignKey(\n model,\n on_delete=models.DO_NOTHING,\n related_name=self.related_name,\n 
db_constraint=False,\n )\n }\n else:\n return {}\n\n def get_extra_fields_m2m(self, model, through_model, fields):\n \"\"\"Return dict of extra fields added to the m2m historical record model\"\"\"\n\n extra_fields = {\n \"__module__\": model.__module__,\n \"__str__\": lambda self: \"{} as of {}\".format(\n self._meta.verbose_name, self.history.history_date\n ),\n \"history\": models.ForeignKey(\n model,\n db_constraint=False,\n on_delete=models.DO_NOTHING,\n ),\n \"instance_type\": through_model,\n \"m2m_history_id\": self._get_history_id_field(),\n }\n\n return extra_fields\n\n def get_extra_fields(self, model, fields):\n \"\"\"Return dict of extra fields added to the historical record model\"\"\"\n\n def revert_url(self):\n \"\"\"URL for this change in the default admin site.\"\"\"\n opts = model._meta\n app_label, model_name = opts.app_label, opts.model_name\n return reverse(\n f\"{admin.site.name}:{app_label}_{model_name}_simple_history\",\n args=[getattr(self, opts.pk.attname), self.history_id],\n )\n\n def get_instance(self):\n attrs = {\n field.attname: getattr(self, field.attname) for field in fields.values()\n }\n if self._history_excluded_fields:\n # We don't add ManyToManyFields to this list because they may cause\n # the subsequent `.get()` call to fail. See #706 for context.\n excluded_attnames = [\n model._meta.get_field(field).attname\n for field in self._history_excluded_fields\n if not isinstance(model._meta.get_field(field), ManyToManyField)\n ]\n try:\n values = (\n model.objects.filter(pk=getattr(self, model._meta.pk.attname))\n .values(*excluded_attnames)\n .get()\n )\n except ObjectDoesNotExist:\n pass\n else:\n attrs.update(values)\n result = model(**attrs)\n # this is the only way external code could know an instance is historical\n setattr(result, SIMPLE_HISTORY_REVERSE_ATTR_NAME, self)\n return result\n\n def get_next_record(self):\n \"\"\"\n Get the next history record for the instance. `None` if last.\n \"\"\"\n history = utils.get_history_manager_from_history(self)\n return (\n history.filter(history_date__gt=self.history_date)\n .order_by(\"history_date\")\n .first()\n )\n\n def get_prev_record(self):\n \"\"\"\n Get the previous history record for the instance. 
`None` if first.\n \"\"\"\n history = utils.get_history_manager_from_history(self)\n return (\n history.filter(history_date__lt=self.history_date)\n .order_by(\"history_date\")\n .last()\n )\n\n def get_default_history_user(instance):\n \"\"\"\n Returns the user specified by `get_user` method for manually creating\n historical objects\n \"\"\"\n return self.get_history_user(instance)\n\n extra_fields = {\n \"history_id\": self._get_history_id_field(),\n \"history_date\": models.DateTimeField(db_index=self._date_indexing is True),\n \"history_change_reason\": self._get_history_change_reason_field(),\n \"history_type\": models.CharField(\n max_length=1,\n choices=((\"+\", _(\"Created\")), (\"~\", _(\"Changed\")), (\"-\", _(\"Deleted\"))),\n ),\n \"history_object\": HistoricalObjectDescriptor(\n model, self.fields_included(model)\n ),\n \"instance\": property(get_instance),\n \"instance_type\": model,\n \"next_record\": property(get_next_record),\n \"prev_record\": property(get_prev_record),\n \"revert_url\": revert_url,\n \"__str__\": lambda self: \"{} as of {}\".format(\n self.history_object, self.history_date\n ),\n \"get_default_history_user\": staticmethod(get_default_history_user),\n }\n\n extra_fields.update(self._get_history_related_field(model))\n extra_fields.update(self._get_history_user_fields())\n\n return extra_fields\n\n @property\n def _date_indexing(self):\n \"\"\"False, True, or 'composite'; default is True\"\"\"\n result = getattr(settings, \"SIMPLE_HISTORY_DATE_INDEX\", True)\n valid = True\n if isinstance(result, str):\n result = result.lower()\n if result not in (\"composite\",):\n valid = False\n elif not isinstance(result, bool):\n valid = False\n if not valid:\n raise ImproperlyConfigured(\n \"SIMPLE_HISTORY_DATE_INDEX must be one of (False, True, 'Composite')\"\n )\n return result\n\n def get_meta_options_m2m(self, through_model):\n \"\"\"\n Returns a dictionary of fields that will be added to\n the Meta inner class of the m2m historical record model.\n \"\"\"\n name = self.get_history_model_name(through_model)\n\n meta_fields = {\"verbose_name\": name}\n\n if self.app:\n meta_fields[\"app_label\"] = self.app\n\n return meta_fields\n\n def get_meta_options(self, model):\n \"\"\"\n Returns a dictionary of fields that will be added to\n the Meta inner class of the historical record model.\n \"\"\"\n meta_fields = {\n \"ordering\": (\"-history_date\", \"-history_id\"),\n \"get_latest_by\": (\"history_date\", \"history_id\"),\n }\n if self.user_set_verbose_name:\n name = self.user_set_verbose_name\n else:\n name = format_lazy(\"historical {}\", smart_str(model._meta.verbose_name))\n if self.user_set_verbose_name_plural:\n plural_name = self.user_set_verbose_name_plural\n else:\n plural_name = format_lazy(\n \"historical {}\", smart_str(model._meta.verbose_name_plural)\n )\n meta_fields[\"verbose_name\"] = name\n meta_fields[\"verbose_name_plural\"] = plural_name\n if self.app:\n meta_fields[\"app_label\"] = self.app\n if self._date_indexing == \"composite\":\n meta_fields[\"indexes\"] = (\n models.Index(fields=(\"history_date\", model._meta.pk.attname)),\n )\n return meta_fields\n\n def post_save(self, instance, created, using=None, **kwargs):\n if not getattr(settings, \"SIMPLE_HISTORY_ENABLED\", True):\n return\n if not created and hasattr(instance, \"skip_history_when_saving\"):\n return\n if not kwargs.get(\"raw\", False):\n self.create_historical_record(instance, created and \"+\" or \"~\", using=using)\n\n def post_delete(self, instance, using=None, **kwargs):\n if 
not getattr(settings, \"SIMPLE_HISTORY_ENABLED\", True):\n return\n if self.cascade_delete_history:\n manager = getattr(instance, self.manager_name)\n manager.using(using).all().delete()\n else:\n self.create_historical_record(instance, \"-\", using=using)\n\n def get_change_reason_for_object(self, instance, history_type, using):\n \"\"\"\n Get change reason for object.\n Customize this method to automatically fill change reason from context.\n \"\"\"\n return get_change_reason_from_object(instance)\n\n def m2m_changed(self, instance, action, attr, pk_set, reverse, **_):\n if hasattr(instance, \"skip_history_when_saving\"):\n return\n\n if action in (\"post_add\", \"post_remove\", \"post_clear\"):\n # It should be safe to ~ this since the row must exist to modify m2m on it\n self.create_historical_record(instance, \"~\")\n\n def create_historical_record_m2ms(self, history_instance, instance):\n for field in history_instance._history_m2m_fields:\n m2m_history_model = self.m2m_models[field]\n original_instance = history_instance.instance\n through_model = getattr(original_instance, field.name).through\n\n insert_rows = []\n\n # `m2m_field_name()` is part of Django's internal API\n through_field_name = field.m2m_field_name()\n\n rows = through_model.objects.filter(**{through_field_name: instance})\n\n for row in rows:\n insert_row = {\"history\": history_instance}\n\n for through_model_field in through_model._meta.fields:\n insert_row[through_model_field.name] = getattr(\n row, through_model_field.name\n )\n insert_rows.append(m2m_history_model(**insert_row))\n\n pre_create_historical_m2m_records.send(\n sender=m2m_history_model,\n rows=insert_rows,\n history_instance=history_instance,\n instance=instance,\n field=field,\n )\n created_rows = m2m_history_model.objects.bulk_create(insert_rows)\n post_create_historical_m2m_records.send(\n sender=m2m_history_model,\n created_rows=created_rows,\n history_instance=history_instance,\n instance=instance,\n field=field,\n )\n\n def create_historical_record(self, instance, history_type, using=None):\n using = using if self.use_base_model_db else None\n history_date = getattr(instance, \"_history_date\", timezone.now())\n history_user = self.get_history_user(instance)\n history_change_reason = self.get_change_reason_for_object(\n instance, history_type, using\n )\n manager = getattr(instance, self.manager_name)\n\n attrs = {}\n for field in self.fields_included(instance):\n attrs[field.attname] = getattr(instance, field.attname)\n\n relation_field = getattr(manager.model, \"history_relation\", None)\n if relation_field is not None:\n attrs[\"history_relation\"] = instance\n\n history_instance = manager.model(\n history_date=history_date,\n history_type=history_type,\n history_user=history_user,\n history_change_reason=history_change_reason,\n **attrs,\n )\n\n pre_create_historical_record.send(\n sender=manager.model,\n instance=instance,\n history_date=history_date,\n history_user=history_user,\n history_change_reason=history_change_reason,\n history_instance=history_instance,\n using=using,\n )\n\n history_instance.save(using=using)\n self.create_historical_record_m2ms(history_instance, instance)\n\n post_create_historical_record.send(\n sender=manager.model,\n instance=instance,\n history_instance=history_instance,\n history_date=history_date,\n history_user=history_user,\n history_change_reason=history_change_reason,\n using=using,\n )\n\n def get_history_user(self, instance):\n \"\"\"Get the modifying user from instance or middleware.\"\"\"\n try:\n 
return instance._history_user\n except AttributeError:\n request = None\n try:\n if self.context.request.user.is_authenticated:\n request = self.context.request\n except AttributeError:\n pass\n\n return self.get_user(instance=instance, request=request)\n\n def get_m2m_fields_from_model(self, model):\n m2m_fields = set(self.m2m_fields)\n try:\n m2m_fields.update(getattr(model, self.m2m_fields_model_field_name))\n except AttributeError:\n pass\n return [getattr(model, field.name).field for field in m2m_fields]\n\n\ndef transform_field(field):\n \"\"\"Customize field appropriately for use in historical model\"\"\"\n field.name = field.attname\n if isinstance(field, models.BigAutoField):\n field.__class__ = models.BigIntegerField\n elif isinstance(field, models.AutoField):\n field.__class__ = models.IntegerField\n\n elif isinstance(field, models.FileField):\n # Don't copy file, just path.\n if getattr(settings, \"SIMPLE_HISTORY_FILEFIELD_TO_CHARFIELD\", False):\n field.__class__ = models.CharField\n else:\n field.__class__ = models.TextField\n\n # Historical instance shouldn't change create/update timestamps\n field.auto_now = False\n field.auto_now_add = False\n # Just setting db_collation explicitly since we're not using\n # field.deconstruct() here\n field.db_collation = None\n\n if field.primary_key or field.unique:\n # Unique fields can no longer be guaranteed unique,\n # but they should still be indexed for faster lookups.\n field.primary_key = False\n field._unique = False\n field.db_index = True\n field.serialize = True\n\n\nclass HistoricForwardManyToOneDescriptor(ForwardManyToOneDescriptor):\n \"\"\"\n Overrides get_queryset to provide historic query support, should the\n instance be historic (and therefore was generated by a timepoint query)\n and the other side of the relation also uses a history manager.\n \"\"\"\n\n def get_queryset(self, **hints) -> QuerySet:\n instance = hints.get(\"instance\")\n if instance:\n history = getattr(instance, SIMPLE_HISTORY_REVERSE_ATTR_NAME, None)\n histmgr = getattr(\n self.field.remote_field.model,\n getattr(\n self.field.remote_field.model._meta,\n \"simple_history_manager_attribute\",\n \"_notthere\",\n ),\n None,\n )\n if history and histmgr:\n return histmgr.as_of(getattr(history, \"_as_of\", history.history_date))\n return super().get_queryset(**hints)\n\n\nclass HistoricReverseManyToOneDescriptor(ReverseManyToOneDescriptor):\n \"\"\"\n Overrides get_queryset to provide historic query support, should the\n instance be historic (and therefore was generated by a timepoint query)\n and the other side of the relation also uses a history manager.\n \"\"\"\n\n @cached_property\n def related_manager_cls(self):\n related_model = self.rel.related_model\n\n class HistoricRelationModelManager(related_model._default_manager.__class__):\n def get_queryset(self):\n try:\n return self.instance._prefetched_objects_cache[\n self.field.remote_field.get_cache_name()\n ]\n except (AttributeError, KeyError):\n history = getattr(\n self.instance, SIMPLE_HISTORY_REVERSE_ATTR_NAME, None\n )\n histmgr = getattr(\n self.model,\n getattr(\n self.model._meta,\n \"simple_history_manager_attribute\",\n \"_notthere\",\n ),\n None,\n )\n if history and histmgr:\n queryset = histmgr.as_of(\n getattr(history, \"_as_of\", history.history_date)\n )\n else:\n queryset = super().get_queryset()\n return self._apply_rel_filters(queryset)\n\n return create_reverse_many_to_one_manager(\n HistoricRelationModelManager, self.rel\n )\n\n\nclass HistoricForeignKey(ForeignKey):\n 
\"\"\"\n Allows foreign keys to work properly from a historic instance.\n\n If you use as_of queries to extract historical instances from\n a model, and you have other models that are related by foreign\n key and also historic, changing them to a HistoricForeignKey\n field type will allow you to naturally cross the relationship\n boundary at the same point in time as the origin instance.\n\n A historic instance maintains an attribute (\"_historic\") when\n it is historic, holding the historic record instance and the\n timepoint used to query it (\"_as_of\"). HistoricForeignKey\n looks for this and uses an as_of query against the related\n object so the relationship is assessed at the same timepoint.\n \"\"\"\n\n forward_related_accessor_class = HistoricForwardManyToOneDescriptor\n related_accessor_class = HistoricReverseManyToOneDescriptor\n\n\ndef is_historic(instance):\n \"\"\"\n Returns True if the instance was acquired with an as_of timepoint.\n \"\"\"\n return to_historic(instance) is not None\n\n\ndef to_historic(instance):\n \"\"\"\n Returns a historic model instance if the instance was acquired with\n an as_of timepoint, or None.\n \"\"\"\n return getattr(instance, SIMPLE_HISTORY_REVERSE_ATTR_NAME, None)\n\n\nclass HistoricalObjectDescriptor:\n def __init__(self, model, fields_included):\n self.model = model\n self.fields_included = fields_included\n\n def __get__(self, instance, owner):\n if instance is None:\n return self\n values = {f.attname: getattr(instance, f.attname) for f in self.fields_included}\n return self.model(**values)\n\n\nclass HistoricalChanges:\n def diff_against(self, old_history, excluded_fields=None, included_fields=None):\n if not isinstance(old_history, type(self)):\n raise TypeError(\n (\"unsupported type(s) for diffing: \" \"'{}' and '{}'\").format(\n type(self), type(old_history)\n )\n )\n if excluded_fields is None:\n excluded_fields = set()\n\n included_m2m_fields = {field.name for field in old_history._history_m2m_fields}\n if included_fields is None:\n included_fields = {f.name for f in old_history.tracked_fields if f.editable}\n else:\n included_m2m_fields = included_m2m_fields.intersection(included_fields)\n\n fields = (\n set(included_fields)\n .difference(included_m2m_fields)\n .difference(excluded_fields)\n )\n m2m_fields = set(included_m2m_fields).difference(excluded_fields)\n\n changes = []\n changed_fields = []\n\n old_values = model_to_dict(old_history, fields=fields)\n current_values = model_to_dict(self, fields=fields)\n\n for field in fields:\n old_value = old_values[field]\n current_value = current_values[field]\n\n if old_value != current_value:\n changes.append(ModelChange(field, old_value, current_value))\n changed_fields.append(field)\n\n # Separately compare m2m fields:\n for field in m2m_fields:\n # First retrieve a single item to get the field names from:\n reference_history_m2m_item = (\n getattr(old_history, field).first() or getattr(self, field).first()\n )\n history_field_names = []\n if reference_history_m2m_item:\n # Create a list of field names to compare against.\n # The list is generated without the primary key of the intermediate\n # table, the foreign key to the history record, and the actual 'history'\n # field, to avoid false positives while diffing.\n history_field_names = [\n f.name\n for f in reference_history_m2m_item._meta.fields\n if f.editable and f.name not in [\"id\", \"m2m_history_id\", \"history\"]\n ]\n\n old_rows = list(getattr(old_history, field).values(*history_field_names))\n new_rows = 
list(getattr(self, field).values(*history_field_names))\n\n if old_rows != new_rows:\n change = ModelChange(field, old_rows, new_rows)\n changes.append(change)\n changed_fields.append(field)\n\n return ModelDelta(changes, changed_fields, old_history, self)\n\n\nclass ModelChange:\n def __init__(self, field_name, old_value, new_value):\n self.field = field_name\n self.old = old_value\n self.new = new_value\n\n\nclass ModelDelta:\n def __init__(self, changes, changed_fields, old_record, new_record):\n self.changes = changes\n self.changed_fields = changed_fields\n self.old_record = old_record\n self.new_record = new_record\n",
"path": "simple_history/models.py"
}
] | diff --git a/AUTHORS.rst b/AUTHORS.rst
index 875e0173b..12eeb4383 100644
--- a/AUTHORS.rst
+++ b/AUTHORS.rst
@@ -90,6 +90,7 @@ Authors
- Lucas Wiman
- Maciej "RooTer" Urbański
- Marcelo Canina (`marcanuy <https://github.com/marcanuy>`_)
+- Marco Sirabella
- Mark Davidoff
- Martin Bachwerk
- Marty Alchin
diff --git a/CHANGES.rst b/CHANGES.rst
index 0e6660d5c..fbe4f979c 100644
--- a/CHANGES.rst
+++ b/CHANGES.rst
@@ -24,6 +24,8 @@ Unreleased
``HistoricalRecords.context.request``) under some circumstances (gh-1188)
- Made ``HistoryRequestMiddleware`` async-capable (gh-1209)
- Fixed error when setting ``table_name`` with ``inherit=True`` (gh-1195)
+- Fixed ``FieldError`` when creating historical records for many-to-many fields with
+ ``to="self"`` (gh-1218)
3.3.0 (2023-03-08)
------------------
diff --git a/simple_history/models.py b/simple_history/models.py
index 19080a4d4..4ad5c2e9c 100644
--- a/simple_history/models.py
+++ b/simple_history/models.py
@@ -670,7 +670,8 @@ def create_historical_record_m2ms(self, history_instance, instance):
insert_rows = []
- through_field_name = type(original_instance).__name__.lower()
+ # `m2m_field_name()` is part of Django's internal API
+ through_field_name = field.m2m_field_name()
rows = through_model.objects.filter(**{through_field_name: instance})
diff --git a/simple_history/tests/models.py b/simple_history/tests/models.py
index a41374d7d..5c1da32ad 100644
--- a/simple_history/tests/models.py
+++ b/simple_history/tests/models.py
@@ -200,6 +200,11 @@ class PollChildRestaurantWithManyToMany(PollParentWithManyToMany):
_history_m2m_fields = [restaurants]
+class PollWithSelfManyToMany(models.Model):
+ relations = models.ManyToManyField("self")
+ history = HistoricalRecords(m2m_fields=[relations])
+
+
class CustomAttrNameForeignKey(models.ForeignKey):
def __init__(self, *args, **kwargs):
self.attr_name = kwargs.pop("attr_name", None)
diff --git a/simple_history/tests/tests/test_models.py b/simple_history/tests/tests/test_models.py
index 2f98594a7..484df73f9 100644
--- a/simple_history/tests/tests/test_models.py
+++ b/simple_history/tests/tests/test_models.py
@@ -103,6 +103,7 @@
PollWithManyToManyCustomHistoryID,
PollWithManyToManyWithIPAddress,
PollWithNonEditableField,
+ PollWithSelfManyToMany,
PollWithSeveralManyToMany,
Province,
Restaurant,
@@ -1869,6 +1870,17 @@ def test_separation(self):
self.assertEqual(add.restaurants.all().count(), 0)
self.assertEqual(add.places.all().count(), 0)
+ def test_self_field(self):
+ poll1 = PollWithSelfManyToMany.objects.create()
+ poll2 = PollWithSelfManyToMany.objects.create()
+
+ self.assertEqual(poll1.history.all().count(), 1)
+
+ poll1.relations.add(poll2)
+ self.assertIn(poll2, poll1.relations.all())
+
+ self.assertEqual(poll1.history.all().count(), 2)
+
class ManyToManyWithSignalsTest(TestCase):
def setUp(self):
|
scverse__scanpy-1979 | small spelling mistake
In the file scanpy/_utils/__init__.py on the master branch, line 412 says:
"Revieved a view of an AnnData. Making a copy."
probably meaning "received".
| [
{
"content": "\"\"\"Utility functions and classes\n\nThis file largely consists of the old _utils.py file. Over time, these functions\nshould be moved of this file.\n\"\"\"\nimport sys\nimport inspect\nimport warnings\nimport importlib.util\nfrom enum import Enum\nfrom pathlib import Path\nfrom weakref import WeakSet\nfrom collections import namedtuple\nfrom functools import partial, wraps\nfrom types import ModuleType, MethodType\nfrom typing import Union, Callable, Optional, Mapping, Any, Dict, Tuple\n\nimport numpy as np\nfrom numpy import random\nfrom scipy import sparse\nfrom anndata import AnnData, __version__ as anndata_version\nfrom textwrap import dedent\nfrom packaging import version\n\nfrom .._settings import settings\nfrom .._compat import Literal\nfrom .. import logging as logg\n\nfrom .compute.is_constant import is_constant\n\n\nclass Empty(Enum):\n token = 0\n\n\n_empty = Empty.token\n\n# e.g. https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html\nAnyRandom = Union[None, int, random.RandomState] # maybe in the future random.Generator\n\nEPS = 1e-15\n\n\ndef check_versions():\n from .._compat import pkg_version\n\n umap_version = pkg_version(\"umap-learn\")\n\n if version.parse(anndata_version) < version.parse('0.6.10'):\n from .. import __version__\n\n raise ImportError(\n f'Scanpy {__version__} needs anndata version >=0.6.10, '\n f'not {anndata_version}.\\nRun `pip install anndata -U --no-deps`.'\n )\n\n if umap_version < version.parse('0.3.0'):\n from . import __version__\n\n # make this a warning, not an error\n # it might be useful for people to still be able to run it\n logg.warning(\n f'Scanpy {__version__} needs umap ' f'version >=0.3.0, not {umap_version}.'\n )\n\n\ndef getdoc(c_or_f: Union[Callable, type]) -> Optional[str]:\n if getattr(c_or_f, '__doc__', None) is None:\n return None\n doc = inspect.getdoc(c_or_f)\n if isinstance(c_or_f, type) and hasattr(c_or_f, '__init__'):\n sig = inspect.signature(c_or_f.__init__)\n else:\n sig = inspect.signature(c_or_f)\n\n def type_doc(name: str):\n param: inspect.Parameter = sig.parameters[name]\n cls = getattr(param.annotation, '__qualname__', repr(param.annotation))\n if param.default is not param.empty:\n return f'{cls}, optional (default: {param.default!r})'\n else:\n return cls\n\n return '\\n'.join(\n f'{line} : {type_doc(line)}' if line.strip() in sig.parameters else line\n for line in doc.split('\\n')\n )\n\n\ndef deprecated_arg_names(arg_mapping: Mapping[str, str]):\n \"\"\"\n Decorator which marks a functions keyword arguments as deprecated. It will\n result in a warning being emitted when the deprecated keyword argument is\n used, and the function being called with the new argument.\n\n Parameters\n ----------\n arg_mapping\n Mapping from deprecated argument name to current argument name.\n \"\"\"\n\n def decorator(func):\n @wraps(func)\n def func_wrapper(*args, **kwargs):\n warnings.simplefilter('always', DeprecationWarning) # turn off filter\n for old, new in arg_mapping.items():\n if old in kwargs:\n warnings.warn(\n f\"Keyword argument '{old}' has been \"\n f\"deprecated in favour of '{new}'. 
\"\n f\"'{old}' will be removed in a future version.\",\n category=DeprecationWarning,\n stacklevel=2,\n )\n val = kwargs.pop(old)\n kwargs[new] = val\n # reset filter\n warnings.simplefilter('default', DeprecationWarning)\n return func(*args, **kwargs)\n\n return func_wrapper\n\n return decorator\n\n\ndef _one_of_ours(obj, root: str):\n return (\n hasattr(obj, \"__name__\")\n and not obj.__name__.split(\".\")[-1].startswith(\"_\")\n and getattr(\n obj, '__module__', getattr(obj, '__qualname__', obj.__name__)\n ).startswith(root)\n )\n\n\ndef descend_classes_and_funcs(mod: ModuleType, root: str, encountered=None):\n if encountered is None:\n encountered = WeakSet()\n for obj in vars(mod).values():\n if not _one_of_ours(obj, root):\n continue\n if callable(obj) and not isinstance(obj, MethodType):\n yield obj\n if isinstance(obj, type):\n for m in vars(obj).values():\n if callable(m) and _one_of_ours(m, root):\n yield m\n elif isinstance(obj, ModuleType) and obj not in encountered:\n if obj.__name__.startswith('scanpy.tests'):\n # Python’s import mechanism seems to add this to `scanpy`’s attributes\n continue\n encountered.add(obj)\n yield from descend_classes_and_funcs(obj, root, encountered)\n\n\ndef annotate_doc_types(mod: ModuleType, root: str):\n for c_or_f in descend_classes_and_funcs(mod, root):\n c_or_f.getdoc = partial(getdoc, c_or_f)\n\n\ndef _doc_params(**kwds):\n \"\"\"\\\n Docstrings should start with \"\\\" in the first line for proper formatting.\n \"\"\"\n\n def dec(obj):\n obj.__orig_doc__ = obj.__doc__\n obj.__doc__ = dedent(obj.__doc__).format_map(kwds)\n return obj\n\n return dec\n\n\ndef _check_array_function_arguments(**kwargs):\n \"\"\"Checks for invalid arguments when an array is passed.\n\n Helper for functions that work on either AnnData objects or array-likes.\n \"\"\"\n # TODO: Figure out a better solution for documenting dispatched functions\n invalid_args = [k for k, v in kwargs.items() if v is not None]\n if len(invalid_args) > 0:\n raise TypeError(\n f\"Arguments {invalid_args} are only valid if an AnnData object is passed.\"\n )\n\n\ndef _check_use_raw(adata: AnnData, use_raw: Union[None, bool]) -> bool:\n \"\"\"\n Normalize checking `use_raw`.\n\n My intentention here is to also provide a single place to throw a deprecation warning from in future.\n \"\"\"\n if use_raw is not None:\n return use_raw\n else:\n if adata.raw is not None:\n return True\n else:\n return False\n\n\n# --------------------------------------------------------------------------------\n# Graph stuff\n# --------------------------------------------------------------------------------\n\n\ndef get_igraph_from_adjacency(adjacency, directed=None):\n \"\"\"Get igraph graph from adjacency matrix.\"\"\"\n import igraph as ig\n\n sources, targets = adjacency.nonzero()\n weights = adjacency[sources, targets]\n if isinstance(weights, np.matrix):\n weights = weights.A1\n g = ig.Graph(directed=directed)\n g.add_vertices(adjacency.shape[0]) # this adds adjacency.shape[0] vertices\n g.add_edges(list(zip(sources, targets)))\n try:\n g.es['weight'] = weights\n except KeyError:\n pass\n if g.vcount() != adjacency.shape[0]:\n logg.warning(\n f'The constructed graph has only {g.vcount()} nodes. 
'\n 'Your adjacency matrix contained redundant nodes.'\n )\n return g\n\n\ndef get_sparse_from_igraph(graph, weight_attr=None):\n from scipy.sparse import csr_matrix\n\n edges = graph.get_edgelist()\n if weight_attr is None:\n weights = [1] * len(edges)\n else:\n weights = graph.es[weight_attr]\n if not graph.is_directed():\n edges.extend([(v, u) for u, v in edges])\n weights.extend(weights)\n shape = graph.vcount()\n shape = (shape, shape)\n if len(edges) > 0:\n return csr_matrix((weights, zip(*edges)), shape=shape)\n else:\n return csr_matrix(shape)\n\n\n# --------------------------------------------------------------------------------\n# Group stuff\n# --------------------------------------------------------------------------------\n\n\ndef compute_association_matrix_of_groups(\n adata: AnnData,\n prediction: str,\n reference: str,\n normalization: Literal['prediction', 'reference'] = 'prediction',\n threshold: float = 0.01,\n max_n_names: Optional[int] = 2,\n):\n \"\"\"Compute overlaps between groups.\n\n See ``identify_groups`` for identifying the groups.\n\n Parameters\n ----------\n adata\n prediction\n Field name of adata.obs.\n reference\n Field name of adata.obs.\n normalization\n Whether to normalize with respect to the predicted groups or the\n reference groups.\n threshold\n Do not consider associations whose overlap is below this fraction.\n max_n_names\n Control how many reference names you want to be associated with per\n predicted name. Set to `None`, if you want all.\n\n Returns\n -------\n asso_names\n List of associated reference names\n (`max_n_names` for each predicted name).\n asso_matrix\n Matrix where rows correspond to the predicted labels and columns to the\n reference labels, entries are proportional to degree of association.\n \"\"\"\n if normalization not in {'prediction', 'reference'}:\n raise ValueError(\n '`normalization` needs to be either \"prediction\" or \"reference\".'\n )\n sanitize_anndata(adata)\n cats = adata.obs[reference].cat.categories\n for cat in cats:\n if cat in settings.categories_to_ignore:\n logg.info(\n f'Ignoring category {cat!r} '\n 'as it’s in `settings.categories_to_ignore`.'\n )\n asso_names = []\n asso_matrix = []\n for ipred_group, pred_group in enumerate(adata.obs[prediction].cat.categories):\n if '?' in pred_group:\n pred_group = str(ipred_group)\n # starting from numpy version 1.13, subtractions of boolean arrays are deprecated\n mask_pred = adata.obs[prediction].values == pred_group\n mask_pred_int = mask_pred.astype(np.int8)\n asso_matrix += [[]]\n for ref_group in adata.obs[reference].cat.categories:\n mask_ref = (adata.obs[reference].values == ref_group).astype(np.int8)\n mask_ref_or_pred = mask_ref.copy()\n mask_ref_or_pred[mask_pred] = 1\n # e.g. 
if the pred group is contained in mask_ref, mask_ref and\n # mask_ref_or_pred are the same\n if normalization == 'prediction':\n # compute which fraction of the predicted group is contained in\n # the ref group\n ratio_contained = (\n np.sum(mask_pred_int) - np.sum(mask_ref_or_pred - mask_ref)\n ) / np.sum(mask_pred_int)\n else:\n # compute which fraction of the reference group is contained in\n # the predicted group\n ratio_contained = (\n np.sum(mask_ref) - np.sum(mask_ref_or_pred - mask_pred_int)\n ) / np.sum(mask_ref)\n asso_matrix[-1] += [ratio_contained]\n name_list_pred = [\n cats[i] if cats[i] not in settings.categories_to_ignore else ''\n for i in np.argsort(asso_matrix[-1])[::-1]\n if asso_matrix[-1][i] > threshold\n ]\n asso_names += ['\\n'.join(name_list_pred[:max_n_names])]\n Result = namedtuple(\n 'compute_association_matrix_of_groups', ['asso_names', 'asso_matrix']\n )\n return Result(asso_names=asso_names, asso_matrix=np.array(asso_matrix))\n\n\ndef get_associated_colors_of_groups(reference_colors, asso_matrix):\n return [\n {\n reference_colors[i_ref]: asso_matrix[i_pred, i_ref]\n for i_ref in range(asso_matrix.shape[1])\n }\n for i_pred in range(asso_matrix.shape[0])\n ]\n\n\ndef identify_groups(ref_labels, pred_labels, return_overlaps=False):\n \"\"\"Which predicted label explains which reference label?\n\n A predicted label explains the reference label which maximizes the minimum\n of ``relative_overlaps_pred`` and ``relative_overlaps_ref``.\n\n Compare this with ``compute_association_matrix_of_groups``.\n\n Returns\n -------\n A dictionary of length ``len(np.unique(ref_labels))`` that stores for each\n reference label the predicted label that best explains it.\n\n If ``return_overlaps`` is ``True``, this will in addition return the overlap\n of the reference group with the predicted group; normalized with respect to\n the reference group size and the predicted group size, respectively.\n \"\"\"\n ref_unique, ref_counts = np.unique(ref_labels, return_counts=True)\n ref_dict = dict(zip(ref_unique, ref_counts))\n pred_unique, pred_counts = np.unique(pred_labels, return_counts=True)\n pred_dict = dict(zip(pred_unique, pred_counts))\n associated_predictions = {}\n associated_overlaps = {}\n for ref_label in ref_unique:\n sub_pred_unique, sub_pred_counts = np.unique(\n pred_labels[ref_label == ref_labels], return_counts=True\n )\n relative_overlaps_pred = [\n sub_pred_counts[i] / pred_dict[n] for i, n in enumerate(sub_pred_unique)\n ]\n relative_overlaps_ref = [\n sub_pred_counts[i] / ref_dict[ref_label]\n for i, n in enumerate(sub_pred_unique)\n ]\n relative_overlaps = np.c_[relative_overlaps_pred, relative_overlaps_ref]\n relative_overlaps_min = np.min(relative_overlaps, axis=1)\n pred_best_index = np.argsort(relative_overlaps_min)[::-1]\n associated_predictions[ref_label] = sub_pred_unique[pred_best_index]\n associated_overlaps[ref_label] = relative_overlaps[pred_best_index]\n if return_overlaps:\n return associated_predictions, associated_overlaps\n else:\n return associated_predictions\n\n\n# --------------------------------------------------------------------------------\n# Other stuff\n# --------------------------------------------------------------------------------\n\n\n# backwards compat... remove this in the future\ndef sanitize_anndata(adata):\n \"\"\"Transform string annotations to categoricals.\"\"\"\n adata._sanitize()\n\n\ndef view_to_actual(adata):\n if adata.is_view:\n warnings.warn(\n \"Revieved a view of an AnnData. 
Making a copy.\",\n stacklevel=2,\n )\n adata._init_as_actual(adata.copy())\n\n\ndef moving_average(a: np.ndarray, n: int):\n \"\"\"Moving average over one-dimensional array.\n\n Parameters\n ----------\n a\n One-dimensional array.\n n\n Number of entries to average over. n=2 means averaging over the currrent\n the previous entry.\n\n Returns\n -------\n An array view storing the moving average.\n \"\"\"\n ret = np.cumsum(a, dtype=float)\n ret[n:] = ret[n:] - ret[:-n]\n return ret[n - 1 :] / n\n\n\n# --------------------------------------------------------------------------------\n# Deal with tool parameters\n# --------------------------------------------------------------------------------\n\n\ndef update_params(\n old_params: Mapping[str, Any],\n new_params: Mapping[str, Any],\n check=False,\n) -> Dict[str, Any]:\n \"\"\"\\\n Update old_params with new_params.\n\n If check==False, this merely adds and overwrites the content of old_params.\n\n If check==True, this only allows updating of parameters that are already\n present in old_params.\n\n Parameters\n ----------\n old_params\n new_params\n check\n\n Returns\n -------\n updated_params\n \"\"\"\n updated_params = dict(old_params)\n if new_params: # allow for new_params to be None\n for key, val in new_params.items():\n if key not in old_params and check:\n raise ValueError(\n '\\''\n + key\n + '\\' is not a valid parameter key, '\n + 'consider one of \\n'\n + str(list(old_params.keys()))\n )\n if val is not None:\n updated_params[key] = val\n return updated_params\n\n\n# --------------------------------------------------------------------------------\n# Others\n# --------------------------------------------------------------------------------\n\n\ndef check_nonnegative_integers(X: Union[np.ndarray, sparse.spmatrix]):\n \"\"\"Checks values of X to ensure it is count data\"\"\"\n from numbers import Integral\n\n data = X if isinstance(X, np.ndarray) else X.data\n # Check no negatives\n if np.signbit(data).any():\n return False\n # Check all are integers\n elif issubclass(data.dtype.type, Integral):\n return True\n elif np.any(~np.equal(np.mod(data, 1), 0)):\n return False\n else:\n return True\n\n\ndef select_groups(adata, groups_order_subset='all', key='groups'):\n \"\"\"Get subset of groups in adata.obs[key].\"\"\"\n groups_order = adata.obs[key].cat.categories\n if key + '_masks' in adata.uns:\n groups_masks = adata.uns[key + '_masks']\n else:\n groups_masks = np.zeros(\n (len(adata.obs[key].cat.categories), adata.obs[key].values.size), dtype=bool\n )\n for iname, name in enumerate(adata.obs[key].cat.categories):\n # if the name is not found, fallback to index retrieval\n if adata.obs[key].cat.categories[iname] in adata.obs[key].values:\n mask = adata.obs[key].cat.categories[iname] == adata.obs[key].values\n else:\n mask = str(iname) == adata.obs[key].values\n groups_masks[iname] = mask\n groups_ids = list(range(len(groups_order)))\n if groups_order_subset != 'all':\n groups_ids = []\n for name in groups_order_subset:\n groups_ids.append(\n np.where(adata.obs[key].cat.categories.values == name)[0][0]\n )\n if len(groups_ids) == 0:\n # fallback to index retrieval\n groups_ids = np.where(\n np.in1d(\n np.arange(len(adata.obs[key].cat.categories)).astype(str),\n np.array(groups_order_subset),\n )\n )[0]\n if len(groups_ids) == 0:\n logg.debug(\n f'{np.array(groups_order_subset)} invalid! 
specify valid '\n f'groups_order (or indices) from {adata.obs[key].cat.categories}',\n )\n from sys import exit\n\n exit(0)\n groups_masks = groups_masks[groups_ids]\n groups_order_subset = adata.obs[key].cat.categories[groups_ids].values\n else:\n groups_order_subset = groups_order.values\n return groups_order_subset, groups_masks\n\n\ndef warn_with_traceback(message, category, filename, lineno, file=None, line=None):\n \"\"\"Get full tracebacks when warning is raised by setting\n\n warnings.showwarning = warn_with_traceback\n\n See also\n --------\n http://stackoverflow.com/questions/22373927/get-traceback-of-warnings\n \"\"\"\n import traceback\n\n traceback.print_stack()\n log = ( # noqa: F841 # TODO Does this need fixing?\n file if hasattr(file, 'write') else sys.stderr\n )\n settings.write(warnings.formatwarning(message, category, filename, lineno, line))\n\n\ndef subsample(\n X: np.ndarray,\n subsample: int = 1,\n seed: int = 0,\n) -> Tuple[np.ndarray, np.ndarray]:\n \"\"\"\\\n Subsample a fraction of 1/subsample samples from the rows of X.\n\n Parameters\n ----------\n X\n Data array.\n subsample\n 1/subsample is the fraction of data sampled, n = X.shape[0]/subsample.\n seed\n Seed for sampling.\n\n Returns\n -------\n Xsampled\n Subsampled X.\n rows\n Indices of rows that are stored in Xsampled.\n \"\"\"\n if subsample == 1 and seed == 0:\n return X, np.arange(X.shape[0], dtype=int)\n if seed == 0:\n # this sequence is defined simply by skipping rows\n # is faster than sampling\n rows = np.arange(0, X.shape[0], subsample, dtype=int)\n n = rows.size\n Xsampled = np.array(X[rows])\n else:\n if seed < 0:\n raise ValueError(f'Invalid seed value < 0: {seed}')\n n = int(X.shape[0] / subsample)\n np.random.seed(seed)\n Xsampled, rows = subsample_n(X, n=n)\n logg.debug(f'... 
subsampled to {n} of {X.shape[0]} data points')\n return Xsampled, rows\n\n\ndef subsample_n(\n X: np.ndarray, n: int = 0, seed: int = 0\n) -> Tuple[np.ndarray, np.ndarray]:\n \"\"\"Subsample n samples from rows of array.\n\n Parameters\n ----------\n X\n Data array.\n n\n Sample size.\n seed\n Seed for sampling.\n\n Returns\n -------\n Xsampled\n Subsampled X.\n rows\n Indices of rows that are stored in Xsampled.\n \"\"\"\n if n < 0:\n raise ValueError('n must be greater 0')\n np.random.seed(seed)\n n = X.shape[0] if (n == 0 or n > X.shape[0]) else n\n rows = np.random.choice(X.shape[0], size=n, replace=False)\n Xsampled = X[rows]\n return Xsampled, rows\n\n\ndef check_presence_download(filename: Path, backup_url):\n \"\"\"Check if file is present otherwise download.\"\"\"\n if not filename.is_file():\n from ..readwrite import _download\n\n _download(backup_url, filename)\n\n\ndef lazy_import(full_name):\n \"\"\"Imports a module in a way that it’s only executed on member access\"\"\"\n try:\n return sys.modules[full_name]\n except KeyError:\n spec = importlib.util.find_spec(full_name)\n module = importlib.util.module_from_spec(spec)\n loader = importlib.util.LazyLoader(spec.loader)\n # Make module with proper locking and get it inserted into sys.modules.\n loader.exec_module(module)\n return module\n\n\n# --------------------------------------------------------------------------------\n# Neighbors\n# --------------------------------------------------------------------------------\n\n\ndef _fallback_to_uns(dct, conns, dists, conns_key, dists_key):\n if conns is None and conns_key in dct:\n conns = dct[conns_key]\n if dists is None and dists_key in dct:\n dists = dct[dists_key]\n\n return conns, dists\n\n\nclass NeighborsView:\n \"\"\"Convenience class for accessing neighbors graph representations.\n\n Allows to access neighbors distances, connectivities and settings\n dictionary in a uniform manner.\n\n Parameters\n ----------\n\n adata\n AnnData object.\n key\n This defines where to look for neighbors dictionary,\n connectivities, distances.\n\n neigh = NeighborsView(adata, key)\n neigh['distances']\n neigh['connectivities']\n neigh['params']\n 'connectivities' in neigh\n 'params' in neigh\n\n is the same as\n\n adata.obsp[adata.uns[key]['distances_key']]\n adata.obsp[adata.uns[key]['connectivities_key']]\n adata.uns[key]['params']\n adata.uns[key]['connectivities_key'] in adata.obsp\n 'params' in adata.uns[key]\n \"\"\"\n\n def __init__(self, adata, key=None):\n self._connectivities = None\n self._distances = None\n\n if key is None or key == 'neighbors':\n if 'neighbors' not in adata.uns:\n raise KeyError('No \"neighbors\" in .uns')\n self._neighbors_dict = adata.uns['neighbors']\n self._conns_key = 'connectivities'\n self._dists_key = 'distances'\n else:\n if key not in adata.uns:\n raise KeyError(f'No \"{key}\" in .uns')\n self._neighbors_dict = adata.uns[key]\n self._conns_key = self._neighbors_dict['connectivities_key']\n self._dists_key = self._neighbors_dict['distances_key']\n\n if self._conns_key in adata.obsp:\n self._connectivities = adata.obsp[self._conns_key]\n if self._dists_key in adata.obsp:\n self._distances = adata.obsp[self._dists_key]\n\n # fallback to uns\n self._connectivities, self._distances = _fallback_to_uns(\n self._neighbors_dict,\n self._connectivities,\n self._distances,\n self._conns_key,\n self._dists_key,\n )\n\n def __getitem__(self, key):\n if key == 'distances':\n if 'distances' not in self:\n raise KeyError(f'No \"{self._dists_key}\" in .obsp')\n 
return self._distances\n elif key == 'connectivities':\n if 'connectivities' not in self:\n raise KeyError(f'No \"{self._conns_key}\" in .obsp')\n return self._connectivities\n else:\n return self._neighbors_dict[key]\n\n def __contains__(self, key):\n if key == 'distances':\n return self._distances is not None\n elif key == 'connectivities':\n return self._connectivities is not None\n else:\n return key in self._neighbors_dict\n\n\ndef _choose_graph(adata, obsp, neighbors_key):\n \"\"\"Choose connectivities from neighbbors or another obsp column\"\"\"\n if obsp is not None and neighbors_key is not None:\n raise ValueError(\n 'You can\\'t specify both obsp, neighbors_key. ' 'Please select only one.'\n )\n\n if obsp is not None:\n return adata.obsp[obsp]\n else:\n neighbors = NeighborsView(adata, neighbors_key)\n if 'connectivities' not in neighbors:\n raise ValueError(\n 'You need to run `pp.neighbors` first '\n 'to compute a neighborhood graph.'\n )\n return neighbors['connectivities']\n",
"path": "scanpy/_utils/__init__.py"
}
] | [
{
"content": "\"\"\"Utility functions and classes\n\nThis file largely consists of the old _utils.py file. Over time, these functions\nshould be moved of this file.\n\"\"\"\nimport sys\nimport inspect\nimport warnings\nimport importlib.util\nfrom enum import Enum\nfrom pathlib import Path\nfrom weakref import WeakSet\nfrom collections import namedtuple\nfrom functools import partial, wraps\nfrom types import ModuleType, MethodType\nfrom typing import Union, Callable, Optional, Mapping, Any, Dict, Tuple\n\nimport numpy as np\nfrom numpy import random\nfrom scipy import sparse\nfrom anndata import AnnData, __version__ as anndata_version\nfrom textwrap import dedent\nfrom packaging import version\n\nfrom .._settings import settings\nfrom .._compat import Literal\nfrom .. import logging as logg\n\nfrom .compute.is_constant import is_constant\n\n\nclass Empty(Enum):\n token = 0\n\n\n_empty = Empty.token\n\n# e.g. https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html\nAnyRandom = Union[None, int, random.RandomState] # maybe in the future random.Generator\n\nEPS = 1e-15\n\n\ndef check_versions():\n from .._compat import pkg_version\n\n umap_version = pkg_version(\"umap-learn\")\n\n if version.parse(anndata_version) < version.parse('0.6.10'):\n from .. import __version__\n\n raise ImportError(\n f'Scanpy {__version__} needs anndata version >=0.6.10, '\n f'not {anndata_version}.\\nRun `pip install anndata -U --no-deps`.'\n )\n\n if umap_version < version.parse('0.3.0'):\n from . import __version__\n\n # make this a warning, not an error\n # it might be useful for people to still be able to run it\n logg.warning(\n f'Scanpy {__version__} needs umap ' f'version >=0.3.0, not {umap_version}.'\n )\n\n\ndef getdoc(c_or_f: Union[Callable, type]) -> Optional[str]:\n if getattr(c_or_f, '__doc__', None) is None:\n return None\n doc = inspect.getdoc(c_or_f)\n if isinstance(c_or_f, type) and hasattr(c_or_f, '__init__'):\n sig = inspect.signature(c_or_f.__init__)\n else:\n sig = inspect.signature(c_or_f)\n\n def type_doc(name: str):\n param: inspect.Parameter = sig.parameters[name]\n cls = getattr(param.annotation, '__qualname__', repr(param.annotation))\n if param.default is not param.empty:\n return f'{cls}, optional (default: {param.default!r})'\n else:\n return cls\n\n return '\\n'.join(\n f'{line} : {type_doc(line)}' if line.strip() in sig.parameters else line\n for line in doc.split('\\n')\n )\n\n\ndef deprecated_arg_names(arg_mapping: Mapping[str, str]):\n \"\"\"\n Decorator which marks a functions keyword arguments as deprecated. It will\n result in a warning being emitted when the deprecated keyword argument is\n used, and the function being called with the new argument.\n\n Parameters\n ----------\n arg_mapping\n Mapping from deprecated argument name to current argument name.\n \"\"\"\n\n def decorator(func):\n @wraps(func)\n def func_wrapper(*args, **kwargs):\n warnings.simplefilter('always', DeprecationWarning) # turn off filter\n for old, new in arg_mapping.items():\n if old in kwargs:\n warnings.warn(\n f\"Keyword argument '{old}' has been \"\n f\"deprecated in favour of '{new}'. 
\"\n f\"'{old}' will be removed in a future version.\",\n category=DeprecationWarning,\n stacklevel=2,\n )\n val = kwargs.pop(old)\n kwargs[new] = val\n # reset filter\n warnings.simplefilter('default', DeprecationWarning)\n return func(*args, **kwargs)\n\n return func_wrapper\n\n return decorator\n\n\ndef _one_of_ours(obj, root: str):\n return (\n hasattr(obj, \"__name__\")\n and not obj.__name__.split(\".\")[-1].startswith(\"_\")\n and getattr(\n obj, '__module__', getattr(obj, '__qualname__', obj.__name__)\n ).startswith(root)\n )\n\n\ndef descend_classes_and_funcs(mod: ModuleType, root: str, encountered=None):\n if encountered is None:\n encountered = WeakSet()\n for obj in vars(mod).values():\n if not _one_of_ours(obj, root):\n continue\n if callable(obj) and not isinstance(obj, MethodType):\n yield obj\n if isinstance(obj, type):\n for m in vars(obj).values():\n if callable(m) and _one_of_ours(m, root):\n yield m\n elif isinstance(obj, ModuleType) and obj not in encountered:\n if obj.__name__.startswith('scanpy.tests'):\n # Python’s import mechanism seems to add this to `scanpy`’s attributes\n continue\n encountered.add(obj)\n yield from descend_classes_and_funcs(obj, root, encountered)\n\n\ndef annotate_doc_types(mod: ModuleType, root: str):\n for c_or_f in descend_classes_and_funcs(mod, root):\n c_or_f.getdoc = partial(getdoc, c_or_f)\n\n\ndef _doc_params(**kwds):\n \"\"\"\\\n Docstrings should start with \"\\\" in the first line for proper formatting.\n \"\"\"\n\n def dec(obj):\n obj.__orig_doc__ = obj.__doc__\n obj.__doc__ = dedent(obj.__doc__).format_map(kwds)\n return obj\n\n return dec\n\n\ndef _check_array_function_arguments(**kwargs):\n \"\"\"Checks for invalid arguments when an array is passed.\n\n Helper for functions that work on either AnnData objects or array-likes.\n \"\"\"\n # TODO: Figure out a better solution for documenting dispatched functions\n invalid_args = [k for k, v in kwargs.items() if v is not None]\n if len(invalid_args) > 0:\n raise TypeError(\n f\"Arguments {invalid_args} are only valid if an AnnData object is passed.\"\n )\n\n\ndef _check_use_raw(adata: AnnData, use_raw: Union[None, bool]) -> bool:\n \"\"\"\n Normalize checking `use_raw`.\n\n My intentention here is to also provide a single place to throw a deprecation warning from in future.\n \"\"\"\n if use_raw is not None:\n return use_raw\n else:\n if adata.raw is not None:\n return True\n else:\n return False\n\n\n# --------------------------------------------------------------------------------\n# Graph stuff\n# --------------------------------------------------------------------------------\n\n\ndef get_igraph_from_adjacency(adjacency, directed=None):\n \"\"\"Get igraph graph from adjacency matrix.\"\"\"\n import igraph as ig\n\n sources, targets = adjacency.nonzero()\n weights = adjacency[sources, targets]\n if isinstance(weights, np.matrix):\n weights = weights.A1\n g = ig.Graph(directed=directed)\n g.add_vertices(adjacency.shape[0]) # this adds adjacency.shape[0] vertices\n g.add_edges(list(zip(sources, targets)))\n try:\n g.es['weight'] = weights\n except KeyError:\n pass\n if g.vcount() != adjacency.shape[0]:\n logg.warning(\n f'The constructed graph has only {g.vcount()} nodes. 
'\n 'Your adjacency matrix contained redundant nodes.'\n )\n return g\n\n\ndef get_sparse_from_igraph(graph, weight_attr=None):\n from scipy.sparse import csr_matrix\n\n edges = graph.get_edgelist()\n if weight_attr is None:\n weights = [1] * len(edges)\n else:\n weights = graph.es[weight_attr]\n if not graph.is_directed():\n edges.extend([(v, u) for u, v in edges])\n weights.extend(weights)\n shape = graph.vcount()\n shape = (shape, shape)\n if len(edges) > 0:\n return csr_matrix((weights, zip(*edges)), shape=shape)\n else:\n return csr_matrix(shape)\n\n\n# --------------------------------------------------------------------------------\n# Group stuff\n# --------------------------------------------------------------------------------\n\n\ndef compute_association_matrix_of_groups(\n adata: AnnData,\n prediction: str,\n reference: str,\n normalization: Literal['prediction', 'reference'] = 'prediction',\n threshold: float = 0.01,\n max_n_names: Optional[int] = 2,\n):\n \"\"\"Compute overlaps between groups.\n\n See ``identify_groups`` for identifying the groups.\n\n Parameters\n ----------\n adata\n prediction\n Field name of adata.obs.\n reference\n Field name of adata.obs.\n normalization\n Whether to normalize with respect to the predicted groups or the\n reference groups.\n threshold\n Do not consider associations whose overlap is below this fraction.\n max_n_names\n Control how many reference names you want to be associated with per\n predicted name. Set to `None`, if you want all.\n\n Returns\n -------\n asso_names\n List of associated reference names\n (`max_n_names` for each predicted name).\n asso_matrix\n Matrix where rows correspond to the predicted labels and columns to the\n reference labels, entries are proportional to degree of association.\n \"\"\"\n if normalization not in {'prediction', 'reference'}:\n raise ValueError(\n '`normalization` needs to be either \"prediction\" or \"reference\".'\n )\n sanitize_anndata(adata)\n cats = adata.obs[reference].cat.categories\n for cat in cats:\n if cat in settings.categories_to_ignore:\n logg.info(\n f'Ignoring category {cat!r} '\n 'as it’s in `settings.categories_to_ignore`.'\n )\n asso_names = []\n asso_matrix = []\n for ipred_group, pred_group in enumerate(adata.obs[prediction].cat.categories):\n if '?' in pred_group:\n pred_group = str(ipred_group)\n # starting from numpy version 1.13, subtractions of boolean arrays are deprecated\n mask_pred = adata.obs[prediction].values == pred_group\n mask_pred_int = mask_pred.astype(np.int8)\n asso_matrix += [[]]\n for ref_group in adata.obs[reference].cat.categories:\n mask_ref = (adata.obs[reference].values == ref_group).astype(np.int8)\n mask_ref_or_pred = mask_ref.copy()\n mask_ref_or_pred[mask_pred] = 1\n # e.g. 
if the pred group is contained in mask_ref, mask_ref and\n # mask_ref_or_pred are the same\n if normalization == 'prediction':\n # compute which fraction of the predicted group is contained in\n # the ref group\n ratio_contained = (\n np.sum(mask_pred_int) - np.sum(mask_ref_or_pred - mask_ref)\n ) / np.sum(mask_pred_int)\n else:\n # compute which fraction of the reference group is contained in\n # the predicted group\n ratio_contained = (\n np.sum(mask_ref) - np.sum(mask_ref_or_pred - mask_pred_int)\n ) / np.sum(mask_ref)\n asso_matrix[-1] += [ratio_contained]\n name_list_pred = [\n cats[i] if cats[i] not in settings.categories_to_ignore else ''\n for i in np.argsort(asso_matrix[-1])[::-1]\n if asso_matrix[-1][i] > threshold\n ]\n asso_names += ['\\n'.join(name_list_pred[:max_n_names])]\n Result = namedtuple(\n 'compute_association_matrix_of_groups', ['asso_names', 'asso_matrix']\n )\n return Result(asso_names=asso_names, asso_matrix=np.array(asso_matrix))\n\n\ndef get_associated_colors_of_groups(reference_colors, asso_matrix):\n return [\n {\n reference_colors[i_ref]: asso_matrix[i_pred, i_ref]\n for i_ref in range(asso_matrix.shape[1])\n }\n for i_pred in range(asso_matrix.shape[0])\n ]\n\n\ndef identify_groups(ref_labels, pred_labels, return_overlaps=False):\n \"\"\"Which predicted label explains which reference label?\n\n A predicted label explains the reference label which maximizes the minimum\n of ``relative_overlaps_pred`` and ``relative_overlaps_ref``.\n\n Compare this with ``compute_association_matrix_of_groups``.\n\n Returns\n -------\n A dictionary of length ``len(np.unique(ref_labels))`` that stores for each\n reference label the predicted label that best explains it.\n\n If ``return_overlaps`` is ``True``, this will in addition return the overlap\n of the reference group with the predicted group; normalized with respect to\n the reference group size and the predicted group size, respectively.\n \"\"\"\n ref_unique, ref_counts = np.unique(ref_labels, return_counts=True)\n ref_dict = dict(zip(ref_unique, ref_counts))\n pred_unique, pred_counts = np.unique(pred_labels, return_counts=True)\n pred_dict = dict(zip(pred_unique, pred_counts))\n associated_predictions = {}\n associated_overlaps = {}\n for ref_label in ref_unique:\n sub_pred_unique, sub_pred_counts = np.unique(\n pred_labels[ref_label == ref_labels], return_counts=True\n )\n relative_overlaps_pred = [\n sub_pred_counts[i] / pred_dict[n] for i, n in enumerate(sub_pred_unique)\n ]\n relative_overlaps_ref = [\n sub_pred_counts[i] / ref_dict[ref_label]\n for i, n in enumerate(sub_pred_unique)\n ]\n relative_overlaps = np.c_[relative_overlaps_pred, relative_overlaps_ref]\n relative_overlaps_min = np.min(relative_overlaps, axis=1)\n pred_best_index = np.argsort(relative_overlaps_min)[::-1]\n associated_predictions[ref_label] = sub_pred_unique[pred_best_index]\n associated_overlaps[ref_label] = relative_overlaps[pred_best_index]\n if return_overlaps:\n return associated_predictions, associated_overlaps\n else:\n return associated_predictions\n\n\n# --------------------------------------------------------------------------------\n# Other stuff\n# --------------------------------------------------------------------------------\n\n\n# backwards compat... remove this in the future\ndef sanitize_anndata(adata):\n \"\"\"Transform string annotations to categoricals.\"\"\"\n adata._sanitize()\n\n\ndef view_to_actual(adata):\n if adata.is_view:\n warnings.warn(\n \"Received a view of an AnnData. 
Making a copy.\",\n stacklevel=2,\n )\n adata._init_as_actual(adata.copy())\n\n\ndef moving_average(a: np.ndarray, n: int):\n \"\"\"Moving average over one-dimensional array.\n\n Parameters\n ----------\n a\n One-dimensional array.\n n\n Number of entries to average over. n=2 means averaging over the currrent\n the previous entry.\n\n Returns\n -------\n An array view storing the moving average.\n \"\"\"\n ret = np.cumsum(a, dtype=float)\n ret[n:] = ret[n:] - ret[:-n]\n return ret[n - 1 :] / n\n\n\n# --------------------------------------------------------------------------------\n# Deal with tool parameters\n# --------------------------------------------------------------------------------\n\n\ndef update_params(\n old_params: Mapping[str, Any],\n new_params: Mapping[str, Any],\n check=False,\n) -> Dict[str, Any]:\n \"\"\"\\\n Update old_params with new_params.\n\n If check==False, this merely adds and overwrites the content of old_params.\n\n If check==True, this only allows updating of parameters that are already\n present in old_params.\n\n Parameters\n ----------\n old_params\n new_params\n check\n\n Returns\n -------\n updated_params\n \"\"\"\n updated_params = dict(old_params)\n if new_params: # allow for new_params to be None\n for key, val in new_params.items():\n if key not in old_params and check:\n raise ValueError(\n '\\''\n + key\n + '\\' is not a valid parameter key, '\n + 'consider one of \\n'\n + str(list(old_params.keys()))\n )\n if val is not None:\n updated_params[key] = val\n return updated_params\n\n\n# --------------------------------------------------------------------------------\n# Others\n# --------------------------------------------------------------------------------\n\n\ndef check_nonnegative_integers(X: Union[np.ndarray, sparse.spmatrix]):\n \"\"\"Checks values of X to ensure it is count data\"\"\"\n from numbers import Integral\n\n data = X if isinstance(X, np.ndarray) else X.data\n # Check no negatives\n if np.signbit(data).any():\n return False\n # Check all are integers\n elif issubclass(data.dtype.type, Integral):\n return True\n elif np.any(~np.equal(np.mod(data, 1), 0)):\n return False\n else:\n return True\n\n\ndef select_groups(adata, groups_order_subset='all', key='groups'):\n \"\"\"Get subset of groups in adata.obs[key].\"\"\"\n groups_order = adata.obs[key].cat.categories\n if key + '_masks' in adata.uns:\n groups_masks = adata.uns[key + '_masks']\n else:\n groups_masks = np.zeros(\n (len(adata.obs[key].cat.categories), adata.obs[key].values.size), dtype=bool\n )\n for iname, name in enumerate(adata.obs[key].cat.categories):\n # if the name is not found, fallback to index retrieval\n if adata.obs[key].cat.categories[iname] in adata.obs[key].values:\n mask = adata.obs[key].cat.categories[iname] == adata.obs[key].values\n else:\n mask = str(iname) == adata.obs[key].values\n groups_masks[iname] = mask\n groups_ids = list(range(len(groups_order)))\n if groups_order_subset != 'all':\n groups_ids = []\n for name in groups_order_subset:\n groups_ids.append(\n np.where(adata.obs[key].cat.categories.values == name)[0][0]\n )\n if len(groups_ids) == 0:\n # fallback to index retrieval\n groups_ids = np.where(\n np.in1d(\n np.arange(len(adata.obs[key].cat.categories)).astype(str),\n np.array(groups_order_subset),\n )\n )[0]\n if len(groups_ids) == 0:\n logg.debug(\n f'{np.array(groups_order_subset)} invalid! 
specify valid '\n f'groups_order (or indices) from {adata.obs[key].cat.categories}',\n )\n from sys import exit\n\n exit(0)\n groups_masks = groups_masks[groups_ids]\n groups_order_subset = adata.obs[key].cat.categories[groups_ids].values\n else:\n groups_order_subset = groups_order.values\n return groups_order_subset, groups_masks\n\n\ndef warn_with_traceback(message, category, filename, lineno, file=None, line=None):\n \"\"\"Get full tracebacks when warning is raised by setting\n\n warnings.showwarning = warn_with_traceback\n\n See also\n --------\n http://stackoverflow.com/questions/22373927/get-traceback-of-warnings\n \"\"\"\n import traceback\n\n traceback.print_stack()\n log = ( # noqa: F841 # TODO Does this need fixing?\n file if hasattr(file, 'write') else sys.stderr\n )\n settings.write(warnings.formatwarning(message, category, filename, lineno, line))\n\n\ndef subsample(\n X: np.ndarray,\n subsample: int = 1,\n seed: int = 0,\n) -> Tuple[np.ndarray, np.ndarray]:\n \"\"\"\\\n Subsample a fraction of 1/subsample samples from the rows of X.\n\n Parameters\n ----------\n X\n Data array.\n subsample\n 1/subsample is the fraction of data sampled, n = X.shape[0]/subsample.\n seed\n Seed for sampling.\n\n Returns\n -------\n Xsampled\n Subsampled X.\n rows\n Indices of rows that are stored in Xsampled.\n \"\"\"\n if subsample == 1 and seed == 0:\n return X, np.arange(X.shape[0], dtype=int)\n if seed == 0:\n # this sequence is defined simply by skipping rows\n # is faster than sampling\n rows = np.arange(0, X.shape[0], subsample, dtype=int)\n n = rows.size\n Xsampled = np.array(X[rows])\n else:\n if seed < 0:\n raise ValueError(f'Invalid seed value < 0: {seed}')\n n = int(X.shape[0] / subsample)\n np.random.seed(seed)\n Xsampled, rows = subsample_n(X, n=n)\n logg.debug(f'... 
subsampled to {n} of {X.shape[0]} data points')\n return Xsampled, rows\n\n\ndef subsample_n(\n X: np.ndarray, n: int = 0, seed: int = 0\n) -> Tuple[np.ndarray, np.ndarray]:\n \"\"\"Subsample n samples from rows of array.\n\n Parameters\n ----------\n X\n Data array.\n n\n Sample size.\n seed\n Seed for sampling.\n\n Returns\n -------\n Xsampled\n Subsampled X.\n rows\n Indices of rows that are stored in Xsampled.\n \"\"\"\n if n < 0:\n raise ValueError('n must be greater 0')\n np.random.seed(seed)\n n = X.shape[0] if (n == 0 or n > X.shape[0]) else n\n rows = np.random.choice(X.shape[0], size=n, replace=False)\n Xsampled = X[rows]\n return Xsampled, rows\n\n\ndef check_presence_download(filename: Path, backup_url):\n \"\"\"Check if file is present otherwise download.\"\"\"\n if not filename.is_file():\n from ..readwrite import _download\n\n _download(backup_url, filename)\n\n\ndef lazy_import(full_name):\n \"\"\"Imports a module in a way that it’s only executed on member access\"\"\"\n try:\n return sys.modules[full_name]\n except KeyError:\n spec = importlib.util.find_spec(full_name)\n module = importlib.util.module_from_spec(spec)\n loader = importlib.util.LazyLoader(spec.loader)\n # Make module with proper locking and get it inserted into sys.modules.\n loader.exec_module(module)\n return module\n\n\n# --------------------------------------------------------------------------------\n# Neighbors\n# --------------------------------------------------------------------------------\n\n\ndef _fallback_to_uns(dct, conns, dists, conns_key, dists_key):\n if conns is None and conns_key in dct:\n conns = dct[conns_key]\n if dists is None and dists_key in dct:\n dists = dct[dists_key]\n\n return conns, dists\n\n\nclass NeighborsView:\n \"\"\"Convenience class for accessing neighbors graph representations.\n\n Allows to access neighbors distances, connectivities and settings\n dictionary in a uniform manner.\n\n Parameters\n ----------\n\n adata\n AnnData object.\n key\n This defines where to look for neighbors dictionary,\n connectivities, distances.\n\n neigh = NeighborsView(adata, key)\n neigh['distances']\n neigh['connectivities']\n neigh['params']\n 'connectivities' in neigh\n 'params' in neigh\n\n is the same as\n\n adata.obsp[adata.uns[key]['distances_key']]\n adata.obsp[adata.uns[key]['connectivities_key']]\n adata.uns[key]['params']\n adata.uns[key]['connectivities_key'] in adata.obsp\n 'params' in adata.uns[key]\n \"\"\"\n\n def __init__(self, adata, key=None):\n self._connectivities = None\n self._distances = None\n\n if key is None or key == 'neighbors':\n if 'neighbors' not in adata.uns:\n raise KeyError('No \"neighbors\" in .uns')\n self._neighbors_dict = adata.uns['neighbors']\n self._conns_key = 'connectivities'\n self._dists_key = 'distances'\n else:\n if key not in adata.uns:\n raise KeyError(f'No \"{key}\" in .uns')\n self._neighbors_dict = adata.uns[key]\n self._conns_key = self._neighbors_dict['connectivities_key']\n self._dists_key = self._neighbors_dict['distances_key']\n\n if self._conns_key in adata.obsp:\n self._connectivities = adata.obsp[self._conns_key]\n if self._dists_key in adata.obsp:\n self._distances = adata.obsp[self._dists_key]\n\n # fallback to uns\n self._connectivities, self._distances = _fallback_to_uns(\n self._neighbors_dict,\n self._connectivities,\n self._distances,\n self._conns_key,\n self._dists_key,\n )\n\n def __getitem__(self, key):\n if key == 'distances':\n if 'distances' not in self:\n raise KeyError(f'No \"{self._dists_key}\" in .obsp')\n 
return self._distances\n elif key == 'connectivities':\n if 'connectivities' not in self:\n raise KeyError(f'No \"{self._conns_key}\" in .obsp')\n return self._connectivities\n else:\n return self._neighbors_dict[key]\n\n def __contains__(self, key):\n if key == 'distances':\n return self._distances is not None\n elif key == 'connectivities':\n return self._connectivities is not None\n else:\n return key in self._neighbors_dict\n\n\ndef _choose_graph(adata, obsp, neighbors_key):\n \"\"\"Choose connectivities from neighbbors or another obsp column\"\"\"\n if obsp is not None and neighbors_key is not None:\n raise ValueError(\n 'You can\\'t specify both obsp, neighbors_key. ' 'Please select only one.'\n )\n\n if obsp is not None:\n return adata.obsp[obsp]\n else:\n neighbors = NeighborsView(adata, neighbors_key)\n if 'connectivities' not in neighbors:\n raise ValueError(\n 'You need to run `pp.neighbors` first '\n 'to compute a neighborhood graph.'\n )\n return neighbors['connectivities']\n",
"path": "scanpy/_utils/__init__.py"
}
] | diff --git a/scanpy/_utils/__init__.py b/scanpy/_utils/__init__.py
index 105ca8802a..fd169ff9d4 100644
--- a/scanpy/_utils/__init__.py
+++ b/scanpy/_utils/__init__.py
@@ -409,7 +409,7 @@ def sanitize_anndata(adata):
def view_to_actual(adata):
if adata.is_view:
warnings.warn(
- "Revieved a view of an AnnData. Making a copy.",
+ "Received a view of an AnnData. Making a copy.",
stacklevel=2,
)
adata._init_as_actual(adata.copy())
|
apple__coremltools-911 | CUDA tensor parameter fails to convert to numpy in InternalTorchIRGraph
## 🐞Describe the bug
- If an input parameter to a traced model is a CUDA tensor (i.e. `tensor.cuda()`), `ct.convert` fails with the error below
- Torch
## Trace
```
File "/home/josh/anaconda3/lib/python3.7/site-packages/coremltools/converters/mil/frontend/torch/internal_graph.py", line 180, in __init__
value = param.detach().numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
```
Note: a possible fix is to replace
`value = param.detach().numpy()`
with
`value = param.cpu().detach().numpy()`
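For illustration, here is a minimal, self-contained sketch of the failing pattern and the proposed workaround (not the converter code itself); it assumes PyTorch is installed, the CUDA branch only runs when a GPU is available, and the name `param` simply stands in for a parameter captured during tracing:
```
import torch

# Stand-in for a parameter captured by torch.jit.trace.
param = torch.ones(3, requires_grad=True)
if torch.cuda.is_available():
    param = param.cuda()  # mimics a model that was traced on a GPU

# param.detach().numpy() raises TypeError for a CUDA tensor;
# copying to host memory first works for both CPU and CUDA tensors.
value = param.cpu().detach().numpy()
print(value)
```
`Tensor.cpu()` returns the tensor itself when it already lives in CPU memory, so the suggested change should not alter behaviour for CPU-only models.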
## System environment:
- coremltools version: 4.0b1
- OS: Linux
- How you install python: anaconda
- python version: 3.7.6
| [
{
"content": "# Copyright (c) 2020, Apple Inc. All rights reserved.\n#\n# Use of this source code is governed by a BSD-3-clause license that can be\n# found in the LICENSE.txt file or at https://opensource.org/licenses/BSD-3-Clause\n\nfrom collections import OrderedDict\nfrom itertools import islice\n\nimport torch\n\n\ndef _make_ssa_name(name):\n \"\"\"Converts a symbol name (string) into an SSA name, by prepending '%'.\n Only used for pretty printing the graph.\n \"\"\"\n return \"%\" + name\n\n\ndef _ssa_name_list(names):\n \"\"\"Take a list of symbol names (strings) and return them as SSA names. Only\n used for pretty printing the graph.\n \"\"\"\n return [_make_ssa_name(x) for x in names]\n\n\ndef _find_new_name(old_name, node_names):\n \"\"\"Disambiguate a node's name from a list of existing node names by adding\n successively larger integers.\n \"\"\"\n count = 0\n new_name = old_name + \".\" + str(count)\n while new_name in node_names:\n count += 1\n new_name = old_name + \".\" + str(count)\n return new_name\n\n\ndef _replace_in_list(ls, old_val, new_val):\n \"\"\"Helper function to replace a value in a list.\"\"\"\n try:\n idx = ls.index(old_val)\n except ValueError:\n pass\n else:\n ls[idx] = new_val\n\n\nclass InternalTorchIRBlock:\n \"\"\"CoreML internal representation of a torch IR block.\n\n Arguments:\n raw_block: The torch._C.Block to convert, or None.\n parent: The InternalTorchIRNode this block belongs to.\n nodes: If @raw_block is None, the list of InternalTorchIRNodes in the block\n inputs: If @raw_block is None, the list of input symbols.\n outputs: If @raw_block is None, the list of output symbols.\n \"\"\"\n\n def __init__(self, raw_block=None, parent=None, nodes=None, inputs=None, outputs=None):\n self.nodes = []\n node_names = set()\n self.inputs = []\n self.outputs = []\n self.parent = parent\n\n if raw_block:\n # Add nodes\n for raw_node in raw_block.nodes():\n new_node = InternalTorchIRNode(raw_node, parent=self)\n if new_node.name == new_node.kind:\n new_node.name = _find_new_name(new_node.name, node_names)\n self.nodes.append(new_node)\n node_names.add(new_node.name)\n\n # Add inputs\n for inp in raw_block.inputs():\n self.inputs.append(inp.debugName())\n\n # Add outputs\n for outp in raw_block.outputs():\n self.outputs.append(outp.debugName())\n else:\n self.nodes = nodes\n self.inputs = inputs\n self.outputs = outputs\n\n def __str__(self, indent=2):\n indent_str = \" \" * indent\n graph_str = \"{}block({}):\\n\".format(\n indent_str, \", \".join(_ssa_name_list(self.inputs))\n )\n graph_str += \"{}\\n\".format(indent_str).join(\n [x.__str__(indent=indent + 2) for x in self.nodes]\n )\n graph_str += \"\\n{}return ({})\".format(\n indent_str, \", \".join(_ssa_name_list(self.outputs))\n )\n return graph_str\n\n def __repr__(self):\n return str(self)\n\n def replace_name(self, old_name, new_name):\n \"\"\"Replaces all instances of @old_name with @new_name in @self.\"\"\"\n\n # Replace graph inputs/outputs\n _replace_in_list(self.inputs, old_name, new_name)\n _replace_in_list(self.outputs, old_name, new_name)\n\n for node in self.nodes:\n node.replace_name(old_name, new_name)\n\n\nclass InternalTorchIRNode:\n \"\"\"CoreML internal representation of a torch IR node.\n Can construct itself from a provided torchIR node or manually constructed with\n args for testing.\n\n See InternalTorchIRGraph for the motivation behind this structure.\n\n Arguments:\n node: The torch._C.Node to convert, or None.\n parent: The InternalTorchIRGraph/Block this node belongs to.\n attr: 
If @node is not specified, the dict of named attributes.\n inputs: If @node is not specified, the list of input symbols.\n outputs: If @node is not specified, the list of output symbols.\n kind: If @node is not specified, the kind (op) of the node.\n blocks: If @node is not specified, the list of InternalTorchIRBlock.\n \"\"\"\n\n def __init__(\n self, node=None, parent=None, attr=None, inputs=None, outputs=None, kind=None, blocks=None,\n ):\n self.parent = parent\n if node is not None:\n self.inputs = [_input.debugName() for _input in node.inputs()]\n self.outputs = [output.debugName() for output in node.outputs()]\n self.kind = node.kind().split(\"::\")[-1].lower()\n self.blocks = [InternalTorchIRBlock(raw_block=b, parent=self) for b in node.blocks()]\n self.attr = {\n name: getattr(node, node.kindOf(name))(name)\n for name in node.attributeNames()\n }\n if \"value\" not in self.attr:\n self.attr[\"value\"] = None\n # If the output is boolean, explicitly cast it so type inference\n # will work correctly.\n if len(self.outputs) == 1 and next(node.outputs()).type().str() == \"bool\":\n self.attr[\"value\"] = bool(self.attr[\"value\"])\n else:\n self.inputs = inputs\n self.outputs = outputs\n self.kind = kind\n self.blocks = blocks if blocks is not None else []\n self.attr = attr if attr is not None else {\"value\": None}\n # On rare occassions, a node has no outputs. In that case, the node's\n # name will be its kind. However, this no longer guarantees the node's\n # name is unique. It will be up to the graph constructing the node to\n # make sure names are unique.\n self.name = self.outputs[0] if len(self.outputs) > 0 else self.kind\n\n def __str__(self, indent=2):\n node_str = \" \" * indent + \"{} = {}\".format(\n \", \".join(_ssa_name_list(self.outputs)), self.kind\n )\n node_str += \"[{}]\".format(\n \", \".join(\n [\"{}={}\".format(n, v) for n, v in self.attr.items() if v is not None]\n )\n )\n node_str += \"({})\".format(\", \".join(_ssa_name_list(self.inputs)))\n for b in self.blocks:\n node_str += \"\\n\" + b.__str__(indent=indent + 2)\n return node_str\n\n def __repr__(self):\n return str(self)\n\n def replace_name(self, old_name, new_name):\n \"\"\"Replaces all instances of @old_name with @new_name in @self.\"\"\"\n\n _replace_in_list(self.inputs, old_name, new_name)\n _replace_in_list(self.outputs, old_name, new_name)\n\n if self.name == old_name:\n self.name = new_name\n for block in self.blocks:\n block.replace_name(old_name, new_name)\n\n\nclass InternalTorchIRGraph:\n \"\"\"CoreML internal representation of a torch IR graph. A torch._C.Graph\n object is not an ideal structure to use in converting to CoreML. Conversion\n to an InternalTorchIRGraph is inserted between the original graph and the\n final CoreML model to address several issues:\n 1. A torch._C.graph is hard to work with. For example, its .inputs()\n and .outputs() functions return iterators, so the only way to\n determine the number of inputs/outputs is by counting to the end.\n There are other examples of why the torch structure is hard to work\n with, and this structure alleviates those isses.\n 2. torch._C.graph is an internal API and so we can't count on its\n stability. By inserting a layer in between, we can handle any changes\n to torch._C.graph here and isolate the ops code that processes the\n graph.\n 3. torch._C.graph does not expose a Python constructor. This makes\n it impossible to write unit tests that isolate specific ops since\n they have to come from actually converting a PyTorch graph. 
With an\n internal structure, we can directly build the test cases we need for\n unit testing.\n\n Arguments:\n raw_graph: raw_graph: The torch._C.Graph to convert, or None.\n params_dict: A dictionary mapping graph parameter names to tensors.\n Must be given if @raw_graph is not None.\n input_values: A list of inputs to the graph. Must be given is\n @raw_graph if not None.\n cut_at_symbols: The list of desired outputs from the graph. Symbols\n must be present in the graph. For debugging use only. Can only\n be given if @raw_graph is not None.\n nodes: If @raw_graph is None, the list of InternalTorchIRNodes in\n the graph.\n params: If @raw_graph is None, the dict mapping parameter names to\n their numpy value.\n inputs: If @raw_graph is None, the OrderedDict mapping input names\n to their example values.\n outputs: If @raw_graph is None, the list of outputs from the graph.\n \"\"\"\n\n def __init__(\n self, raw_graph=None, params_dict=None, input_values=None, cut_at_symbols=None, nodes=None, params=None, inputs=None, outputs=None,\n ):\n self.nodes = []\n node_names = set()\n self.params = {}\n self.inputs = OrderedDict()\n self.outputs = []\n\n if raw_graph is not None:\n # Add nodes\n for raw_node in raw_graph.nodes():\n new_node = InternalTorchIRNode(raw_node, parent=self)\n if new_node.name == new_node.kind:\n new_node.name = _find_new_name(new_node.name, node_names)\n self.nodes.append(new_node)\n node_names.add(new_node.name)\n\n # Add params\n for name, param in params_dict.items():\n value = param.detach().numpy()\n self.params[name] = value\n\n # Add inputs\n for index, _input in enumerate(islice(raw_graph.inputs(), len(input_values))):\n name = _input.debugName()\n value = input_values[index]\n self.inputs[name] = value\n\n # Add outputs, cutting if @cut_at_symbols is set\n output_names = cut_at_symbols\n if output_names is None:\n output_names = [x.debugName() for x in raw_graph.outputs()]\n for output in output_names:\n self.outputs.append(output)\n else:\n self.nodes = nodes\n self.params = params\n self.inputs = inputs\n self.outputs = outputs\n\n def __str__(self):\n graph_str = \"graph(\\n\"\n graph_str += self._format_inputs(self.inputs, unpack=True)\n graph_str += self._format_inputs(self.params)\n graph_str += \"):\\n\"\n graph_str += \"\\n\".join([str(x) for x in self.nodes]) + \"\\n\"\n graph_str += \"return ({})\".format(\", \".join(_ssa_name_list(self.outputs)))\n return graph_str\n\n def _format_inputs(self, inputs, unpack=False):\n def tensor_str(x):\n return \"Tensor{}\".format(\n tuple(list(x.shape.shape if unpack else x.shape) + [str(x.dtype)])\n )\n\n inp_str = \"\"\n for k, v in inputs.items():\n if isinstance(v, (tuple, list)):\n shape_str = \"({})\".format(\", \".join([tensor_str(x) for x in v]))\n else:\n shape_str = tensor_str(v)\n inp_str += \" {} : {},\\n\".format(_make_ssa_name(k), shape_str)\n return inp_str\n\n def __repr__(self):\n return str(self)\n\n def replace_name(self, old_name, new_name):\n \"\"\"Replaces all instances of @old_name with @new_name in @self.\"\"\"\n\n # Replace graph inputs/outputs\n _replace_in_list(self.inputs, old_name, new_name)\n _replace_in_list(self.outputs, old_name, new_name)\n\n for node in self.nodes:\n node.replace_name(old_name, new_name)\n",
"path": "coremltools/converters/mil/frontend/torch/internal_graph.py"
}
] | [
{
"content": "# Copyright (c) 2020, Apple Inc. All rights reserved.\n#\n# Use of this source code is governed by a BSD-3-clause license that can be\n# found in the LICENSE.txt file or at https://opensource.org/licenses/BSD-3-Clause\n\nfrom collections import OrderedDict\nfrom itertools import islice\n\nimport torch\n\n\ndef _make_ssa_name(name):\n \"\"\"Converts a symbol name (string) into an SSA name, by prepending '%'.\n Only used for pretty printing the graph.\n \"\"\"\n return \"%\" + name\n\n\ndef _ssa_name_list(names):\n \"\"\"Take a list of symbol names (strings) and return them as SSA names. Only\n used for pretty printing the graph.\n \"\"\"\n return [_make_ssa_name(x) for x in names]\n\n\ndef _find_new_name(old_name, node_names):\n \"\"\"Disambiguate a node's name from a list of existing node names by adding\n successively larger integers.\n \"\"\"\n count = 0\n new_name = old_name + \".\" + str(count)\n while new_name in node_names:\n count += 1\n new_name = old_name + \".\" + str(count)\n return new_name\n\n\ndef _replace_in_list(ls, old_val, new_val):\n \"\"\"Helper function to replace a value in a list.\"\"\"\n try:\n idx = ls.index(old_val)\n except ValueError:\n pass\n else:\n ls[idx] = new_val\n\n\nclass InternalTorchIRBlock:\n \"\"\"CoreML internal representation of a torch IR block.\n\n Arguments:\n raw_block: The torch._C.Block to convert, or None.\n parent: The InternalTorchIRNode this block belongs to.\n nodes: If @raw_block is None, the list of InternalTorchIRNodes in the block\n inputs: If @raw_block is None, the list of input symbols.\n outputs: If @raw_block is None, the list of output symbols.\n \"\"\"\n\n def __init__(self, raw_block=None, parent=None, nodes=None, inputs=None, outputs=None):\n self.nodes = []\n node_names = set()\n self.inputs = []\n self.outputs = []\n self.parent = parent\n\n if raw_block:\n # Add nodes\n for raw_node in raw_block.nodes():\n new_node = InternalTorchIRNode(raw_node, parent=self)\n if new_node.name == new_node.kind:\n new_node.name = _find_new_name(new_node.name, node_names)\n self.nodes.append(new_node)\n node_names.add(new_node.name)\n\n # Add inputs\n for inp in raw_block.inputs():\n self.inputs.append(inp.debugName())\n\n # Add outputs\n for outp in raw_block.outputs():\n self.outputs.append(outp.debugName())\n else:\n self.nodes = nodes\n self.inputs = inputs\n self.outputs = outputs\n\n def __str__(self, indent=2):\n indent_str = \" \" * indent\n graph_str = \"{}block({}):\\n\".format(\n indent_str, \", \".join(_ssa_name_list(self.inputs))\n )\n graph_str += \"{}\\n\".format(indent_str).join(\n [x.__str__(indent=indent + 2) for x in self.nodes]\n )\n graph_str += \"\\n{}return ({})\".format(\n indent_str, \", \".join(_ssa_name_list(self.outputs))\n )\n return graph_str\n\n def __repr__(self):\n return str(self)\n\n def replace_name(self, old_name, new_name):\n \"\"\"Replaces all instances of @old_name with @new_name in @self.\"\"\"\n\n # Replace graph inputs/outputs\n _replace_in_list(self.inputs, old_name, new_name)\n _replace_in_list(self.outputs, old_name, new_name)\n\n for node in self.nodes:\n node.replace_name(old_name, new_name)\n\n\nclass InternalTorchIRNode:\n \"\"\"CoreML internal representation of a torch IR node.\n Can construct itself from a provided torchIR node or manually constructed with\n args for testing.\n\n See InternalTorchIRGraph for the motivation behind this structure.\n\n Arguments:\n node: The torch._C.Node to convert, or None.\n parent: The InternalTorchIRGraph/Block this node belongs to.\n attr: 
If @node is not specified, the dict of named attributes.\n inputs: If @node is not specified, the list of input symbols.\n outputs: If @node is not specified, the list of output symbols.\n kind: If @node is not specified, the kind (op) of the node.\n blocks: If @node is not specified, the list of InternalTorchIRBlock.\n \"\"\"\n\n def __init__(\n self, node=None, parent=None, attr=None, inputs=None, outputs=None, kind=None, blocks=None,\n ):\n self.parent = parent\n if node is not None:\n self.inputs = [_input.debugName() for _input in node.inputs()]\n self.outputs = [output.debugName() for output in node.outputs()]\n self.kind = node.kind().split(\"::\")[-1].lower()\n self.blocks = [InternalTorchIRBlock(raw_block=b, parent=self) for b in node.blocks()]\n self.attr = {\n name: getattr(node, node.kindOf(name))(name)\n for name in node.attributeNames()\n }\n if \"value\" not in self.attr:\n self.attr[\"value\"] = None\n # If the output is boolean, explicitly cast it so type inference\n # will work correctly.\n if len(self.outputs) == 1 and next(node.outputs()).type().str() == \"bool\":\n self.attr[\"value\"] = bool(self.attr[\"value\"])\n else:\n self.inputs = inputs\n self.outputs = outputs\n self.kind = kind\n self.blocks = blocks if blocks is not None else []\n self.attr = attr if attr is not None else {\"value\": None}\n # On rare occassions, a node has no outputs. In that case, the node's\n # name will be its kind. However, this no longer guarantees the node's\n # name is unique. It will be up to the graph constructing the node to\n # make sure names are unique.\n self.name = self.outputs[0] if len(self.outputs) > 0 else self.kind\n\n def __str__(self, indent=2):\n node_str = \" \" * indent + \"{} = {}\".format(\n \", \".join(_ssa_name_list(self.outputs)), self.kind\n )\n node_str += \"[{}]\".format(\n \", \".join(\n [\"{}={}\".format(n, v) for n, v in self.attr.items() if v is not None]\n )\n )\n node_str += \"({})\".format(\", \".join(_ssa_name_list(self.inputs)))\n for b in self.blocks:\n node_str += \"\\n\" + b.__str__(indent=indent + 2)\n return node_str\n\n def __repr__(self):\n return str(self)\n\n def replace_name(self, old_name, new_name):\n \"\"\"Replaces all instances of @old_name with @new_name in @self.\"\"\"\n\n _replace_in_list(self.inputs, old_name, new_name)\n _replace_in_list(self.outputs, old_name, new_name)\n\n if self.name == old_name:\n self.name = new_name\n for block in self.blocks:\n block.replace_name(old_name, new_name)\n\n\nclass InternalTorchIRGraph:\n \"\"\"CoreML internal representation of a torch IR graph. A torch._C.Graph\n object is not an ideal structure to use in converting to CoreML. Conversion\n to an InternalTorchIRGraph is inserted between the original graph and the\n final CoreML model to address several issues:\n 1. A torch._C.graph is hard to work with. For example, its .inputs()\n and .outputs() functions return iterators, so the only way to\n determine the number of inputs/outputs is by counting to the end.\n There are other examples of why the torch structure is hard to work\n with, and this structure alleviates those isses.\n 2. torch._C.graph is an internal API and so we can't count on its\n stability. By inserting a layer in between, we can handle any changes\n to torch._C.graph here and isolate the ops code that processes the\n graph.\n 3. torch._C.graph does not expose a Python constructor. This makes\n it impossible to write unit tests that isolate specific ops since\n they have to come from actually converting a PyTorch graph. 
With an\n internal structure, we can directly build the test cases we need for\n unit testing.\n\n Arguments:\n raw_graph: raw_graph: The torch._C.Graph to convert, or None.\n params_dict: A dictionary mapping graph parameter names to tensors.\n Must be given if @raw_graph is not None.\n input_values: A list of inputs to the graph. Must be given is\n @raw_graph if not None.\n cut_at_symbols: The list of desired outputs from the graph. Symbols\n must be present in the graph. For debugging use only. Can only\n be given if @raw_graph is not None.\n nodes: If @raw_graph is None, the list of InternalTorchIRNodes in\n the graph.\n params: If @raw_graph is None, the dict mapping parameter names to\n their numpy value.\n inputs: If @raw_graph is None, the OrderedDict mapping input names\n to their example values.\n outputs: If @raw_graph is None, the list of outputs from the graph.\n \"\"\"\n\n def __init__(\n self, raw_graph=None, params_dict=None, input_values=None, cut_at_symbols=None, nodes=None, params=None, inputs=None, outputs=None,\n ):\n self.nodes = []\n node_names = set()\n self.params = {}\n self.inputs = OrderedDict()\n self.outputs = []\n\n if raw_graph is not None:\n # Add nodes\n for raw_node in raw_graph.nodes():\n new_node = InternalTorchIRNode(raw_node, parent=self)\n if new_node.name == new_node.kind:\n new_node.name = _find_new_name(new_node.name, node_names)\n self.nodes.append(new_node)\n node_names.add(new_node.name)\n\n # Add params\n for name, param in params_dict.items():\n value = param.detach().cpu().numpy()\n self.params[name] = value\n\n # Add inputs\n for index, _input in enumerate(islice(raw_graph.inputs(), len(input_values))):\n name = _input.debugName()\n value = input_values[index]\n self.inputs[name] = value\n\n # Add outputs, cutting if @cut_at_symbols is set\n output_names = cut_at_symbols\n if output_names is None:\n output_names = [x.debugName() for x in raw_graph.outputs()]\n for output in output_names:\n self.outputs.append(output)\n else:\n self.nodes = nodes\n self.params = params\n self.inputs = inputs\n self.outputs = outputs\n\n def __str__(self):\n graph_str = \"graph(\\n\"\n graph_str += self._format_inputs(self.inputs, unpack=True)\n graph_str += self._format_inputs(self.params)\n graph_str += \"):\\n\"\n graph_str += \"\\n\".join([str(x) for x in self.nodes]) + \"\\n\"\n graph_str += \"return ({})\".format(\", \".join(_ssa_name_list(self.outputs)))\n return graph_str\n\n def _format_inputs(self, inputs, unpack=False):\n def tensor_str(x):\n return \"Tensor{}\".format(\n tuple(list(x.shape.shape if unpack else x.shape) + [str(x.dtype)])\n )\n\n inp_str = \"\"\n for k, v in inputs.items():\n if isinstance(v, (tuple, list)):\n shape_str = \"({})\".format(\", \".join([tensor_str(x) for x in v]))\n else:\n shape_str = tensor_str(v)\n inp_str += \" {} : {},\\n\".format(_make_ssa_name(k), shape_str)\n return inp_str\n\n def __repr__(self):\n return str(self)\n\n def replace_name(self, old_name, new_name):\n \"\"\"Replaces all instances of @old_name with @new_name in @self.\"\"\"\n\n # Replace graph inputs/outputs\n _replace_in_list(self.inputs, old_name, new_name)\n _replace_in_list(self.outputs, old_name, new_name)\n\n for node in self.nodes:\n node.replace_name(old_name, new_name)\n",
"path": "coremltools/converters/mil/frontend/torch/internal_graph.py"
}
] | diff --git a/coremltools/converters/mil/frontend/torch/internal_graph.py b/coremltools/converters/mil/frontend/torch/internal_graph.py
index 197e8848f..097519f89 100644
--- a/coremltools/converters/mil/frontend/torch/internal_graph.py
+++ b/coremltools/converters/mil/frontend/torch/internal_graph.py
@@ -246,7 +246,7 @@ def __init__(
# Add params
for name, param in params_dict.items():
- value = param.detach().numpy()
+ value = param.detach().cpu().numpy()
self.params[name] = value
# Add inputs
|
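For context on the one-line patch above: `torch.Tensor.numpy()` only works on CPU tensors, so inserting `.cpu()` lets the converter handle parameters that live on a GPU (it is a no-op for tensors already on the CPU). Below is a minimal sketch of the pattern; the `to_numpy` helper name is ours for illustration, not part of the converter.

```python
import torch

def to_numpy(param):
    # .detach() drops the autograd graph, .cpu() copies GPU-resident tensors
    # to host memory (returns the same tensor if already on CPU), and only
    # then can .numpy() succeed.
    return param.detach().cpu().numpy()

p = torch.nn.Parameter(torch.randn(2, 3))
if torch.cuda.is_available():
    p = torch.nn.Parameter(p.data.cuda())
print(to_numpy(p).shape)  # (2, 3) regardless of the parameter's device
```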
xonsh__xonsh-1181 | Configuration fails on Windows due to colon in filename
`xonfig wizard` fails on Windows because the timestamped backup of the configuration file is given a name containing colons, which are not valid characters in Windows filenames.
The relevant output is:
```
Would you like to save this state, yes or no [default: no]? yes
filename [default='C:\\Users\\alowe\\.config\\xonsh\\config.json']:
Traceback (most recent call last):
File "c:\users\alowe\appdata\local\continuum\anaconda3\lib\shutil.py", line 538, in move
os.rename(src, real_dst)
OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: 'C:\\Users\\alowe\\.config\\xonsh\\config.json' -> 'C:\\Users\\alowe\\.config\\xonsh\\config.2016-06-08T11:18:52.170226.json'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\alowe\appdata\local\continuum\anaconda3\lib\site-packages\xonsh\xonfig.py", line 307, in _wizard
pv.visit()
File "c:\users\alowe\appdata\local\continuum\anaconda3\lib\site-packages\xonsh\wizard.py", line 481, in visit
rtn = super().visit(node)
File "c:\users\alowe\appdata\local\continuum\anaconda3\lib\site-packages\xonsh\wizard.py", line 302, in visit
rtn = meth(node)
File "c:\users\alowe\appdata\local\continuum\anaconda3\lib\site-packages\xonsh\wizard.py", line 538, in visit_wizard
self.visit(child)
File "c:\users\alowe\appdata\local\continuum\anaconda3\lib\site-packages\xonsh\wizard.py", line 481, in visit
rtn = super().visit(node)
File "c:\users\alowe\appdata\local\continuum\anaconda3\lib\site-packages\xonsh\wizard.py", line 302, in visit
rtn = meth(node)
File "c:\users\alowe\appdata\local\continuum\anaconda3\lib\site-packages\xonsh\wizard.py", line 623, in visit_save
backup_file(fname)
File "c:\users\alowe\appdata\local\continuum\anaconda3\lib\site-packages\xonsh\tools.py", line 1165, in backup_file
shutil.move(fname, newfname)
File "c:\users\alowe\appdata\local\continuum\anaconda3\lib\shutil.py", line 552, in move
copy_function(src, real_dst)
File "c:\users\alowe\appdata\local\continuum\anaconda3\lib\shutil.py", line 251, in copy2
copyfile(src, dst, follow_symlinks=follow_symlinks)
File "c:\users\alowe\appdata\local\continuum\anaconda3\lib\shutil.py", line 115, in copyfile
with open(dst, 'wb') as fdst:
OSError: [Errno 22] Invalid argument: 'C:\\Users\\alowe\\.config\\xonsh\\config.2016-06-08T11:18:52.170226.json'
```
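The traceback points at `backup_file()` in `xonsh/tools.py`, which builds the backup name from `datetime.now().isoformat()`; the ISO timestamp contains `:` characters that Windows rejects. As a minimal sketch of one possible workaround (not necessarily the fix adopted upstream), the colons can be replaced before the move:

```python
import os
import shutil
from datetime import datetime

def backup_file(fname):
    """Move an existing file aside using a timestamp that is valid on Windows."""
    base, ext = os.path.splitext(fname)
    # isoformat() yields e.g. '2016-06-08T11:18:52.170226'; replace the colons
    # so the resulting name is legal on NTFS/FAT as well as POSIX filesystems.
    timestamp = datetime.now().isoformat().replace(':', '-')
    shutil.move(fname, base + '.' + timestamp + ext)
```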
| [
{
"content": "# -*- coding: utf-8 -*-\n\"\"\"Misc. xonsh tools.\n\nThe following implementations were forked from the IPython project:\n\n* Copyright (c) 2008-2014, IPython Development Team\n* Copyright (C) 2001-2007 Fernando Perez <[email protected]>\n* Copyright (c) 2001, Janko Hauser <[email protected]>\n* Copyright (c) 2001, Nathaniel Gray <[email protected]>\n\nImplementations:\n\n* decode()\n* encode()\n* cast_unicode()\n* safe_hasattr()\n* indent()\n\n\"\"\"\nimport os\nimport re\nimport sys\nimport ast\nimport string\nimport ctypes\nimport builtins\nimport subprocess\nimport threading\nimport traceback\nfrom glob import iglob\nfrom warnings import warn\nfrom contextlib import contextmanager\nfrom collections import OrderedDict, Sequence, Set\n\n# adding further imports from xonsh modules is discouraged to avoid cirular\n# dependencies\nfrom xonsh.platform import (has_prompt_toolkit, scandir, win_unicode_console,\n DEFAULT_ENCODING, ON_LINUX, ON_WINDOWS,\n PYTHON_VERSION_INFO)\n\nIS_SUPERUSER = ctypes.windll.shell32.IsUserAnAdmin() != 0 if ON_WINDOWS else os.getuid() == 0\n\n\nclass XonshError(Exception):\n pass\n\n\nclass DefaultNotGivenType(object):\n \"\"\"Singleton for representing when no default value is given.\"\"\"\n\n\nDefaultNotGiven = DefaultNotGivenType()\n\nBEG_TOK_SKIPS = frozenset(['WS', 'INDENT', 'NOT', 'LPAREN'])\nEND_TOK_TYPES = frozenset(['SEMI', 'AND', 'OR', 'RPAREN'])\nRE_END_TOKS = re.compile('(;|and|\\&\\&|or|\\|\\||\\))')\nLPARENS = frozenset(['LPAREN', 'AT_LPAREN', 'BANG_LPAREN', 'DOLLAR_LPAREN',\n 'ATDOLLAR_LPAREN'])\n\n\ndef _is_not_lparen_and_rparen(lparens, rtok):\n \"\"\"Tests if an RPAREN token is matched with something other than a plain old\n LPAREN type.\n \"\"\"\n # note that any([]) is False, so this covers len(lparens) == 0\n return rtok.type == 'RPAREN' and any(x != 'LPAREN' for x in lparens)\n\n\ndef find_next_break(line, mincol=0, lexer=None):\n \"\"\"Returns the column number of the next logical break in subproc mode.\n This function may be useful in finding the maxcol argument of subproc_toks().\n \"\"\"\n if mincol >= 1:\n line = line[mincol:]\n if lexer is None:\n lexer = builtins.__xonsh_execer__.parser.lexer\n if RE_END_TOKS.search(line) is None:\n return None\n maxcol = None\n lparens = []\n lexer.input(line)\n for tok in lexer:\n if tok.type in LPARENS:\n lparens.append(tok.type)\n elif tok.type in END_TOK_TYPES:\n if _is_not_lparen_and_rparen(lparens, tok):\n lparens.pop()\n else:\n maxcol = tok.lexpos + mincol + 1\n break\n elif tok.type == 'ERRORTOKEN' and ')' in tok.value:\n maxcol = tok.lexpos + mincol + 1\n break\n return maxcol\n\n\ndef subproc_toks(line, mincol=-1, maxcol=None, lexer=None, returnline=False):\n \"\"\"Excapsulates tokens in a source code line in a uncaptured\n subprocess ![] starting at a minimum column. 
If there are no tokens\n (ie in a comment line) this returns None.\n \"\"\"\n if lexer is None:\n lexer = builtins.__xonsh_execer__.parser.lexer\n if maxcol is None:\n maxcol = len(line) + 1\n lexer.reset()\n lexer.input(line)\n toks = []\n lparens = []\n end_offset = 0\n for tok in lexer:\n pos = tok.lexpos\n if tok.type not in END_TOK_TYPES and pos >= maxcol:\n break\n if tok.type in LPARENS:\n lparens.append(tok.type)\n if len(toks) == 0 and tok.type in BEG_TOK_SKIPS:\n continue # handle indentation\n elif len(toks) > 0 and toks[-1].type in END_TOK_TYPES:\n if _is_not_lparen_and_rparen(lparens, toks[-1]):\n lparens.pop() # don't continue or break\n elif pos < maxcol and tok.type not in ('NEWLINE', 'DEDENT', 'WS'):\n toks.clear()\n if tok.type in BEG_TOK_SKIPS:\n continue\n else:\n break\n if pos < mincol:\n continue\n toks.append(tok)\n if tok.type == 'NEWLINE':\n break\n elif tok.type == 'DEDENT':\n # fake a newline when dedenting without a newline\n tok.type = 'NEWLINE'\n tok.value = '\\n'\n tok.lineno -= 1\n if len(toks) >= 2:\n prev_tok_end = toks[-2].lexpos + len(toks[-2].value)\n else:\n prev_tok_end = len(line)\n if '#' in line[prev_tok_end:]:\n tok.lexpos = prev_tok_end # prevents wrapping comments\n else:\n tok.lexpos = len(line)\n break\n else:\n if len(toks) > 0 and toks[-1].type in END_TOK_TYPES:\n if _is_not_lparen_and_rparen(lparens, toks[-1]):\n pass\n else:\n toks.pop()\n if len(toks) == 0:\n return # handle comment lines\n tok = toks[-1]\n pos = tok.lexpos\n if isinstance(tok.value, str):\n end_offset = len(tok.value.rstrip())\n else:\n el = line[pos:].split('#')[0].rstrip()\n end_offset = len(el)\n if len(toks) == 0:\n return # handle comment lines\n beg, end = toks[0].lexpos, (toks[-1].lexpos + end_offset)\n end = len(line[:end].rstrip())\n rtn = '![' + line[beg:end] + ']'\n if returnline:\n rtn = line[:beg] + rtn + line[end:]\n return rtn\n\n\ndef subexpr_from_unbalanced(expr, ltok, rtok):\n \"\"\"Attempts to pull out a valid subexpression for unbalanced grouping,\n based on opening tokens, eg. '(', and closing tokens, eg. ')'. This\n does not do full tokenization, but should be good enough for tab\n completion.\n \"\"\"\n lcnt = expr.count(ltok)\n if lcnt == 0:\n return expr\n rcnt = expr.count(rtok)\n if lcnt == rcnt:\n return expr\n subexpr = expr.rsplit(ltok, 1)[-1]\n subexpr = subexpr.rsplit(',', 1)[-1]\n subexpr = subexpr.rsplit(':', 1)[-1]\n return subexpr\n\n\ndef decode(s, encoding=None):\n encoding = encoding or DEFAULT_ENCODING\n return s.decode(encoding, \"replace\")\n\n\ndef encode(u, encoding=None):\n encoding = encoding or DEFAULT_ENCODING\n return u.encode(encoding, \"replace\")\n\n\ndef cast_unicode(s, encoding=None):\n if isinstance(s, bytes):\n return decode(s, encoding)\n return s\n\n\ndef safe_hasattr(obj, attr):\n \"\"\"In recent versions of Python, hasattr() only catches AttributeError.\n This catches all errors.\n \"\"\"\n try:\n getattr(obj, attr)\n return True\n except Exception: # pylint:disable=bare-except\n return False\n\n\ndef indent(instr, nspaces=4, ntabs=0, flatten=False):\n \"\"\"Indent a string a given number of spaces or tabstops.\n\n indent(str,nspaces=4,ntabs=0) -> indent str by ntabs+nspaces.\n\n Parameters\n ----------\n instr : basestring\n The string to be indented.\n nspaces : int (default: 4)\n The number of spaces to be indented.\n ntabs : int (default: 0)\n The number of tabs to be indented.\n flatten : bool (default: False)\n Whether to scrub existing indentation. 
If True, all lines will be\n aligned to the same indentation. If False, existing indentation will\n be strictly increased.\n\n Returns\n -------\n outstr : string indented by ntabs and nspaces.\n\n \"\"\"\n if instr is None:\n return\n ind = '\\t' * ntabs + ' ' * nspaces\n if flatten:\n pat = re.compile(r'^\\s*', re.MULTILINE)\n else:\n pat = re.compile(r'^', re.MULTILINE)\n outstr = re.sub(pat, ind, instr)\n if outstr.endswith(os.linesep + ind):\n return outstr[:-len(ind)]\n else:\n return outstr\n\n\ndef get_sep():\n \"\"\" Returns the appropriate filepath separator char depending on OS and\n xonsh options set\n \"\"\"\n return (os.altsep if ON_WINDOWS\n and builtins.__xonsh_env__.get('FORCE_POSIX_PATHS') else\n os.sep)\n\n\ndef fallback(cond, backup):\n \"\"\"Decorator for returning the object if cond is true and a backup if cond is false.\n \"\"\"\n def dec(obj):\n return obj if cond else backup\n return dec\n\n\n# The following redirect classes were taken directly from Python 3.5's source\n# code (from the contextlib module). This can be removed when 3.5 is released,\n# although redirect_stdout exists in 3.4, redirect_stderr does not.\n# See the Python software license: https://docs.python.org/3/license.html\n# Copyright (c) Python Software Foundation. All rights reserved.\nclass _RedirectStream:\n\n _stream = None\n\n def __init__(self, new_target):\n self._new_target = new_target\n # We use a list of old targets to make this CM re-entrant\n self._old_targets = []\n\n def __enter__(self):\n self._old_targets.append(getattr(sys, self._stream))\n setattr(sys, self._stream, self._new_target)\n return self._new_target\n\n def __exit__(self, exctype, excinst, exctb):\n setattr(sys, self._stream, self._old_targets.pop())\n\n\nclass redirect_stdout(_RedirectStream):\n \"\"\"Context manager for temporarily redirecting stdout to another file::\n\n # How to send help() to stderr\n with redirect_stdout(sys.stderr):\n help(dir)\n\n # How to write help() to a file\n with open('help.txt', 'w') as f:\n with redirect_stdout(f):\n help(pow)\n\n Mostly for backwards compatibility.\n \"\"\"\n _stream = \"stdout\"\n\n\nclass redirect_stderr(_RedirectStream):\n \"\"\"Context manager for temporarily redirecting stderr to another file.\"\"\"\n _stream = \"stderr\"\n\n\ndef _yield_accessible_unix_file_names(path):\n \"yield file names of executablel files in `path`\"\n\n for file_ in scandir(path):\n try:\n if file_.is_file() and os.access(file_.path, os.X_OK):\n yield file_.name\n except NotADirectoryError:\n # broken Symlink are neither dir not files\n pass\n\n\ndef _executables_in_posix(path):\n if PYTHON_VERSION_INFO < (3, 5, 0):\n for fname in os.listdir(path):\n fpath = os.path.join(path, fname)\n if (os.path.exists(fpath) and os.access(fpath, os.X_OK) and \\\n (not os.path.isdir(fpath))):\n yield fname\n else:\n yield from _yield_accessible_unix_file_names(path)\n\n\ndef _executables_in_windows(path):\n extensions = builtins.__xonsh_env__.get('PATHEXT',['.COM', '.EXE', '.BAT'])\n if PYTHON_VERSION_INFO < (3, 5, 0):\n for fname in os.listdir(path):\n fpath = os.path.join(path, fname)\n if (os.path.exists(fpath) and not os.path.isdir(fpath)):\n base_name, ext = os.path.splitext(fname)\n if ext.upper() in extensions:\n yield fname\n else:\n for fname in (x.name for x in scandir(path) if x.is_file()):\n base_name, ext = os.path.splitext(fname)\n if ext.upper() in extensions:\n yield fname\n\n\ndef executables_in(path):\n \"\"\"Returns a generator of files in `path` that the user could execute. 
\"\"\"\n if ON_WINDOWS:\n func = _executables_in_windows\n else:\n func = _executables_in_posix\n try:\n yield from func(path)\n except PermissionError:\n return\n\n\ndef command_not_found(cmd):\n \"\"\"Uses the debian/ubuntu command-not-found utility to suggest packages for a\n command that cannot currently be found.\n \"\"\"\n if not ON_LINUX:\n return ''\n elif not os.path.isfile('/usr/lib/command-not-found'):\n # utility is not on PATH\n return ''\n c = '/usr/lib/command-not-found {0}; exit 0'\n s = subprocess.check_output(c.format(cmd), universal_newlines=True,\n stderr=subprocess.STDOUT, shell=True)\n s = '\\n'.join(s.splitlines()[:-1]).strip()\n return s\n\n\ndef suggest_commands(cmd, env, aliases):\n \"\"\"Suggests alternative commands given an environment and aliases.\"\"\"\n if not env.get('SUGGEST_COMMANDS'):\n return\n thresh = env.get('SUGGEST_THRESHOLD')\n max_sugg = env.get('SUGGEST_MAX_NUM')\n if max_sugg < 0:\n max_sugg = float('inf')\n cmd = cmd.lower()\n suggested = {}\n\n for alias in builtins.aliases:\n if alias not in suggested:\n if levenshtein(alias.lower(), cmd, thresh) < thresh:\n suggested[alias] = 'Alias'\n\n for path in filter(os.path.isdir, env.get('PATH')):\n for _file in executables_in(path):\n if _file not in suggested \\\n and levenshtein(_file.lower(), cmd, thresh) < thresh:\n suggested[_file] = 'Command ({0})'.format(os.path.join(path, _file))\n\n suggested = OrderedDict(\n sorted(suggested.items(),\n key=lambda x: suggestion_sort_helper(x[0].lower(), cmd)))\n num = min(len(suggested), max_sugg)\n\n if num == 0:\n rtn = command_not_found(cmd)\n else:\n oneof = '' if num == 1 else 'one of '\n tips = 'Did you mean {}the following?'.format(oneof)\n items = list(suggested.popitem(False) for _ in range(num))\n length = max(len(key) for key, _ in items) + 2\n alternatives = '\\n'.join(' {: <{}} {}'.format(key + \":\", length, val)\n for key, val in items)\n rtn = '{}\\n{}'.format(tips, alternatives)\n c = command_not_found(cmd)\n rtn += ('\\n\\n' + c) if len(c) > 0 else ''\n return rtn\n\n\ndef print_exception(msg=None):\n \"\"\"Print exceptions with/without traceback.\"\"\"\n env = getattr(builtins, '__xonsh_env__', os.environ)\n if 'XONSH_SHOW_TRACEBACK' not in env:\n sys.stderr.write('xonsh: For full traceback set: '\n '$XONSH_SHOW_TRACEBACK = True\\n')\n if env.get('XONSH_SHOW_TRACEBACK', False):\n traceback.print_exc()\n else:\n exc_type, exc_value, exc_traceback = sys.exc_info()\n exception_only = traceback.format_exception_only(exc_type, exc_value)\n sys.stderr.write(''.join(exception_only))\n if msg:\n msg = msg if msg.endswith('\\n') else msg + '\\n'\n sys.stderr.write(msg)\n\n\n# Modified from Public Domain code, by Magnus Lie Hetland\n# from http://hetland.org/coding/python/levenshtein.py\ndef levenshtein(a, b, max_dist=float('inf')):\n \"\"\"Calculates the Levenshtein distance between a and b.\"\"\"\n n, m = len(a), len(b)\n if abs(n - m) > max_dist:\n return float('inf')\n if n > m:\n # Make sure n <= m, to use O(min(n,m)) space\n a, b = b, a\n n, m = m, n\n current = range(n + 1)\n for i in range(1, m + 1):\n previous, current = current, [i] + [0] * n\n for j in range(1, n + 1):\n add, delete = previous[j] + 1, current[j - 1] + 1\n change = previous[j - 1]\n if a[j - 1] != b[i - 1]:\n change = change + 1\n current[j] = min(add, delete, change)\n return current[n]\n\n\ndef suggestion_sort_helper(x, y):\n \"\"\"Returns a score (lower is better) for x based on how similar\n it is to y. 
Used to rank suggestions.\"\"\"\n x = x.lower()\n y = y.lower()\n lendiff = len(x) + len(y)\n inx = len([i for i in x if i not in y])\n iny = len([i for i in y if i not in x])\n return lendiff + inx + iny\n\n\ndef escape_windows_cmd_string(s):\n \"\"\"Returns a string that is usable by the Windows cmd.exe.\n The escaping is based on details here and emperical testing:\n http://www.robvanderwoude.com/escapechars.php\n \"\"\"\n for c in '()%!^<>&|\"':\n s = s.replace(c, '^' + c)\n s = s.replace('/?', '/.')\n return s\n\n\ndef argvquote(arg, force=False):\n \"\"\" Returns an argument quoted in such a way that that CommandLineToArgvW\n on Windows will return the argument string unchanged.\n This is the same thing Popen does when supplied with an list of arguments.\n Arguments in a command line should be separated by spaces; this\n function does not add these spaces. This implementation follows the\n suggestions outlined here:\n https://blogs.msdn.microsoft.com/twistylittlepassagesallalike/2011/04/23/everyone-quotes-command-line-arguments-the-wrong-way/\n \"\"\"\n if not force and len(arg) != 0 and not any([c in arg for c in ' \\t\\n\\v\"']):\n return arg\n else:\n n_backslashes = 0\n cmdline = '\"'\n for c in arg:\n if c == '\"':\n cmdline += (n_backslashes * 2 + 1) * '\\\\'\n else:\n cmdline += n_backslashes * '\\\\'\n if c != '\\\\':\n cmdline += c\n n_backslashes = 0\n else:\n n_backslashes += 1\n return cmdline + n_backslashes * 2 * '\\\\' + '\"'\n\n\ndef on_main_thread():\n \"\"\"Checks if we are on the main thread or not.\"\"\"\n return threading.current_thread() is threading.main_thread()\n\n\n@contextmanager\ndef swap(namespace, name, value, default=NotImplemented):\n \"\"\"Swaps a current variable name in a namespace for another value, and then\n replaces it when the context is exited.\n \"\"\"\n old = getattr(namespace, name, default)\n setattr(namespace, name, value)\n yield value\n if old is default:\n delattr(namespace, name)\n else:\n setattr(namespace, name, old)\n\n#\n# Validators and contervers\n#\n\n\ndef is_int(x):\n \"\"\"Tests if something is an integer\"\"\"\n return isinstance(x, int)\n\n\ndef is_float(x):\n \"\"\"Tests if something is a float\"\"\"\n return isinstance(x, float)\n\n\ndef is_string(x):\n \"\"\"Tests if something is a string\"\"\"\n return isinstance(x, str)\n\n\ndef is_callable(x):\n \"\"\"Tests if something is callable\"\"\"\n return callable(x)\n\n\ndef is_string_or_callable(x):\n \"\"\"Tests if something is a string or callable\"\"\"\n return is_string(x) or is_callable(x)\n\n\ndef always_true(x):\n \"\"\"Returns True\"\"\"\n return True\n\n\ndef always_false(x):\n \"\"\"Returns False\"\"\"\n return False\n\n\ndef ensure_string(x):\n \"\"\"Returns a string if x is not a string, and x if it already is.\"\"\"\n return str(x)\n\n\ndef is_env_path(x):\n \"\"\"This tests if something is an environment path, ie a list of strings.\"\"\"\n if isinstance(x, str):\n return False\n else:\n return (isinstance(x, Sequence) and\n all(isinstance(a, str) for a in x))\n\n\ndef str_to_env_path(x):\n \"\"\"Converts a string to an environment path, ie a list of strings,\n splitting on the OS separator.\n \"\"\"\n return x.split(os.pathsep)\n\n\ndef env_path_to_str(x):\n \"\"\"Converts an environment path to a string by joining on the OS separator.\"\"\"\n return os.pathsep.join(x)\n\n\ndef is_bool(x):\n \"\"\"Tests if something is a boolean.\"\"\"\n return isinstance(x, bool)\n\n\n_FALSES = frozenset(['', '0', 'n', 'f', 'no', 'none', 'false'])\n\ndef to_bool(x):\n 
\"\"\"\"Converts to a boolean in a semantically meaningful way.\"\"\"\n if isinstance(x, bool):\n return x\n elif isinstance(x, str):\n return False if x.lower() in _FALSES else True\n else:\n return bool(x)\n\n\ndef bool_to_str(x):\n \"\"\"Converts a bool to an empty string if False and the string '1' if True.\"\"\"\n return '1' if x else ''\n\n\n_BREAKS = frozenset(['b', 'break', 's', 'skip', 'q', 'quit'])\n\n\ndef to_bool_or_break(x):\n if isinstance(x, str) and x.lower() in _BREAKS:\n return 'break'\n else:\n return to_bool(x)\n\n\ndef is_bool_or_int(x):\n \"\"\"Returns whether a value is a boolean or integer.\"\"\"\n return is_bool(x) or is_int(x)\n\n\ndef to_bool_or_int(x):\n \"\"\"Converts a value to a boolean or an integer.\"\"\"\n if isinstance(x, str):\n return int(x) if x.isdigit() else to_bool(x)\n elif is_int(x): # bools are ints too!\n return x\n else:\n return bool(x)\n\n\ndef bool_or_int_to_str(x):\n \"\"\"Converts a boolean or integer to a string.\"\"\"\n return bool_to_str(x) if is_bool(x) else str(x)\n\n\ndef ensure_int_or_slice(x):\n \"\"\"Makes sure that x is list-indexable.\"\"\"\n if x is None:\n return slice(None)\n elif is_int(x):\n return x\n # must have a string from here on\n if ':' in x:\n x = x.strip('[]()')\n return slice(*(int(x) if len(x) > 0 else None for x in x.split(':')))\n else:\n return int(x)\n\n\ndef is_string_set(x):\n \"\"\"Tests if something is a set\"\"\"\n return (isinstance(x, Set) and\n all(isinstance(a, str) for a in x))\n\n\ndef csv_to_set(x):\n \"\"\"Convert a comma-separated list of strings to a set of strings.\"\"\"\n if not x:\n return set()\n else:\n return set(x.split(','))\n\n\ndef set_to_csv(x):\n \"\"\"Convert a set of strings to a comma-separated list of strings.\"\"\"\n return ','.join(x)\n\n\ndef is_bool_seq(x):\n \"\"\"Tests if an object is a sequence of bools.\"\"\"\n return isinstance(x, Sequence) and all(isinstance(y, bool) for y in x)\n\n\ndef csv_to_bool_seq(x):\n \"\"\"Takes a comma-separated string and converts it into a list of bools.\"\"\"\n return [to_bool(y) for y in csv_to_set(x)]\n\n\ndef bool_seq_to_csv(x):\n \"\"\"Converts a sequence of bools to a comma-separated string.\"\"\"\n return ','.join(map(str, x))\n\n\ndef is_completions_display_value(x):\n return x in {'none', 'single', 'multi'}\n\n\ndef to_completions_display_value(x):\n x = str(x).lower()\n if x in {'none', 'false'}:\n x = 'none'\n elif x in {'multi', 'true'}:\n x = 'multi'\n elif x == 'single':\n pass\n else:\n warn('\"{}\" is not a valid value for $COMPLETIONS_DISPLAY. 
'.format(x) +\n 'Using \"multi\".', RuntimeWarning)\n x = 'multi'\n return x\n\n\ndef setup_win_unicode_console(enable):\n \"\"\"\"Enables or disables unicode display on windows.\"\"\"\n enable = to_bool(enable)\n if ON_WINDOWS and win_unicode_console:\n if enable:\n win_unicode_console.enable()\n else:\n win_unicode_console.disable()\n return enable\n\n# history validation\n\n_min_to_sec = lambda x: 60.0 * float(x)\n_hour_to_sec = lambda x: 60.0 * _min_to_sec(x)\n_day_to_sec = lambda x: 24.0 * _hour_to_sec(x)\n_month_to_sec = lambda x: 30.4375 * _day_to_sec(x)\n_year_to_sec = lambda x: 365.25 * _day_to_sec(x)\n_kb_to_b = lambda x: 1024 * int(x)\n_mb_to_b = lambda x: 1024 * _kb_to_b(x)\n_gb_to_b = lambda x: 1024 * _mb_to_b(x)\n_tb_to_b = lambda x: 1024 * _tb_to_b(x)\n\nCANON_HISTORY_UNITS = frozenset(['commands', 'files', 's', 'b'])\n\nHISTORY_UNITS = {\n '': ('commands', int),\n 'c': ('commands', int),\n 'cmd': ('commands', int),\n 'cmds': ('commands', int),\n 'command': ('commands', int),\n 'commands': ('commands', int),\n 'f': ('files', int),\n 'files': ('files', int),\n 's': ('s', float),\n 'sec': ('s', float),\n 'second': ('s', float),\n 'seconds': ('s', float),\n 'm': ('s', _min_to_sec),\n 'min': ('s', _min_to_sec),\n 'mins': ('s', _min_to_sec),\n 'h': ('s', _hour_to_sec),\n 'hr': ('s', _hour_to_sec),\n 'hour': ('s', _hour_to_sec),\n 'hours': ('s', _hour_to_sec),\n 'd': ('s', _day_to_sec),\n 'day': ('s', _day_to_sec),\n 'days': ('s', _day_to_sec),\n 'mon': ('s', _month_to_sec),\n 'month': ('s', _month_to_sec),\n 'months': ('s', _month_to_sec),\n 'y': ('s', _year_to_sec),\n 'yr': ('s', _year_to_sec),\n 'yrs': ('s', _year_to_sec),\n 'year': ('s', _year_to_sec),\n 'years': ('s', _year_to_sec),\n 'b': ('b', int),\n 'byte': ('b', int),\n 'bytes': ('b', int),\n 'kb': ('b', _kb_to_b),\n 'kilobyte': ('b', _kb_to_b),\n 'kilobytes': ('b', _kb_to_b),\n 'mb': ('b', _mb_to_b),\n 'meg': ('b', _mb_to_b),\n 'megs': ('b', _mb_to_b),\n 'megabyte': ('b', _mb_to_b),\n 'megabytes': ('b', _mb_to_b),\n 'gb': ('b', _gb_to_b),\n 'gig': ('b', _gb_to_b),\n 'gigs': ('b', _gb_to_b),\n 'gigabyte': ('b', _gb_to_b),\n 'gigabytes': ('b', _gb_to_b),\n 'tb': ('b', _tb_to_b),\n 'terabyte': ('b', _tb_to_b),\n 'terabytes': ('b', _tb_to_b),\n }\n\"\"\"Maps lowercase unit names to canonical name and conversion utilities.\"\"\"\n\ndef is_history_tuple(x):\n \"\"\"Tests if something is a proper history value, units tuple.\"\"\"\n if isinstance(x, Sequence) and len(x) == 2 and isinstance(x[0], (int, float)) \\\n and x[1].lower() in CANON_HISTORY_UNITS:\n return True\n return False\n\n\ndef is_dynamic_cwd_width(x):\n \"\"\" Determine if the input is a valid input for the DYNAMIC_CWD_WIDTH\n environement variable.\n \"\"\"\n return isinstance(x, tuple) and len(x) == 2 and isinstance(x[0], float) and \\\n (x[1] in set('c%'))\n\n\ndef to_dynamic_cwd_tuple(x):\n \"\"\"Convert to a canonical cwd_width tuple.\"\"\"\n unit = 'c'\n if isinstance(x, str):\n if x[-1] == '%':\n x = x[:-1]\n unit = '%'\n else:\n unit = 'c'\n return (float(x), unit)\n else:\n return (float(x[0]), x[1])\n\n\ndef dynamic_cwd_tuple_to_str(x):\n \"\"\"Convert a canonical cwd_width tuple to a string.\"\"\"\n if x[1] == '%':\n return str(x[0]) + '%'\n else:\n return str(x[0])\n\n\nRE_HISTORY_TUPLE = re.compile('([-+]?[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?)\\s*([A-Za-z]*)')\n\ndef to_history_tuple(x):\n \"\"\"Converts to a canonincal history tuple.\"\"\"\n if not isinstance(x, (Sequence, float, int)):\n raise ValueError('history size must be given as a sequence or 
number')\n if isinstance(x, str):\n m = RE_HISTORY_TUPLE.match(x.strip().lower())\n return to_history_tuple((m.group(1), m.group(3)))\n elif isinstance(x, (float, int)):\n return to_history_tuple((x, 'commands'))\n units, converter = HISTORY_UNITS[x[1]]\n value = converter(x[0])\n return (value, units)\n\n\ndef history_tuple_to_str(x):\n \"\"\"Converts a valid history tuple to a canonical string.\"\"\"\n return '{0} {1}'.format(*x)\n\n\ndef format_color(string, **kwargs):\n \"\"\"Formats strings that may contain colors. This simply dispatches to the\n shell instances method of the same name. The results of this function should\n be directly usable by print_color().\n \"\"\"\n return builtins.__xonsh_shell__.shell.format_color(string, **kwargs)\n\n\ndef print_color(string, **kwargs):\n \"\"\"Prints a string that may contain colors. This dispatched to the shell\n method of the same name. Colors will be formatted if they have not already\n been.\n \"\"\"\n builtins.__xonsh_shell__.shell.print_color(string, **kwargs)\n\n\ndef color_style_names():\n \"\"\"Returns an iterable of all available style names.\"\"\"\n return builtins.__xonsh_shell__.shell.color_style_names()\n\n\ndef color_style():\n \"\"\"Returns the current color map.\"\"\"\n return builtins.__xonsh_shell__.shell.color_style()\n\n\ndef _get_color_indexes(style_map):\n \"\"\" Generates the color and windows color index for a style \"\"\"\n import prompt_toolkit\n table = prompt_toolkit.terminal.win32_output.ColorLookupTable()\n pt_style = prompt_toolkit.styles.style_from_dict(style_map)\n for token in style_map:\n attr = pt_style.token_to_attrs[token]\n if attr.color is not None:\n index = table.lookup_color(attr.color, attr.bgcolor)\n try:\n rgb = (int(attr.color[0:2], 16),\n int(attr.color[2:4], 16),\n int(attr.color[4:6], 16))\n except:\n rgb = None\n yield token, index, rgb\n\n\ndef intensify_colors_for_cmd_exe(style_map, replace_colors=None):\n \"\"\"Returns a modified style to where colors that maps to dark\n colors are replaced with brighter versions. 
Also expands the\n range used by the gray colors\n \"\"\"\n modified_style = {}\n stype = builtins.__xonsh_env__.get('SHELL_TYPE')\n if (not ON_WINDOWS or\n (stype not in ('prompt_toolkit', 'best')) or\n (stype == 'best' and not has_prompt_toolkit())):\n return modified_style\n if replace_colors is None:\n replace_colors = {1: '#44ffff', # subst blue with bright cyan\n 2: '#44ff44', # subst green with bright green\n 4: '#ff4444', # subst red with bright red\n 5: '#ff44ff', # subst magenta with bright magenta\n 6: '#ffff44', # subst yellow with bright yellow\n 9: '#00aaaa', # subst intense blue (hard to read)\n # with dark cyan (which is readable)\n }\n for token, idx, _ in _get_color_indexes(style_map):\n if idx in replace_colors:\n modified_style[token] = replace_colors[idx]\n return modified_style\n\n\ndef expand_gray_colors_for_cmd_exe(style_map):\n \"\"\" Expand the style's gray scale color range.\n All gray scale colors has a tendency to map to the same default GRAY\n in cmd.exe.\n \"\"\"\n modified_style = {}\n stype = builtins.__xonsh_env__.get('SHELL_TYPE')\n if (not ON_WINDOWS or\n (stype not in ('prompt_toolkit', 'best')) or\n (stype == 'best' and not has_prompt_toolkit())):\n return modified_style\n for token, idx, rgb in _get_color_indexes(style_map):\n if idx == 7 and rgb:\n if sum(rgb) <= 306:\n # Equal and below '#666666 is reset to dark gray\n modified_style[token] = '#444444'\n elif sum(rgb) >= 408:\n # Equal and above 0x888888 is reset to white\n modified_style[token] = '#ffffff'\n return modified_style\n\n\ndef intensify_colors_on_win_setter(enable):\n \"\"\" Resets the style when setting the INTENSIFY_COLORS_ON_WIN\n environment variable. \"\"\"\n enable = to_bool(enable)\n delattr(builtins.__xonsh_shell__.shell.styler, 'style_name')\n return enable\n\n\n_RE_STRING_START = \"[bBrRuU]*\"\n_RE_STRING_TRIPLE_DOUBLE = '\"\"\"'\n_RE_STRING_TRIPLE_SINGLE = \"'''\"\n_RE_STRING_DOUBLE = '\"'\n_RE_STRING_SINGLE = \"'\"\n_STRINGS = (_RE_STRING_TRIPLE_DOUBLE,\n _RE_STRING_TRIPLE_SINGLE,\n _RE_STRING_DOUBLE,\n _RE_STRING_SINGLE)\nRE_BEGIN_STRING = re.compile(\"(\" + _RE_STRING_START +\n '(' + \"|\".join(_STRINGS) +\n '))')\n\"\"\"Regular expression matching the start of a string, including quotes and\nleading characters (r, b, or u)\"\"\"\n\nRE_STRING_START = re.compile(_RE_STRING_START)\n\"\"\"Regular expression matching the characters before the quotes when starting a\nstring (r, b, or u, case insensitive)\"\"\"\n\nRE_STRING_CONT = {k: re.compile(v) for k,v in {\n '\"': r'((\\\\(.|\\n))|([^\"\\\\]))*',\n \"'\": r\"((\\\\(.|\\n))|([^'\\\\]))*\",\n '\"\"\"': r'((\\\\(.|\\n))|([^\"\\\\])|(\"(?!\"\"))|\\n)*',\n \"'''\": r\"((\\\\(.|\\n))|([^'\\\\])|('(?!''))|\\n)*\",\n}.items()}\n\"\"\"Dictionary mapping starting quote sequences to regular expressions that\nmatch the contents of a string beginning with those quotes (not including the\nterminating quotes)\"\"\"\n\n\ndef check_for_partial_string(x):\n \"\"\"\n Returns the starting index (inclusive), ending index (exclusive), and\n starting quote string of the most recent Python string found in the input.\n\n check_for_partial_string(x) -> (startix, endix, quote)\n\n Parameters\n ----------\n x : str\n The string to be checked (representing a line of terminal input)\n\n Returns\n -------\n startix : int (or None)\n The index where the most recent Python string found started\n (inclusive), or None if no strings exist in the input\n\n endix : int (or None)\n The index where the most recent Python string found ended (exclusive),\n or None 
if no strings exist in the input OR if the input ended in the\n middle of a Python string\n\n quote : str (or None)\n A string containing the quote used to start the string (e.g., b\", \",\n '''), or None if no string was found.\n \"\"\"\n string_indices = []\n starting_quote = []\n current_index = 0\n match = re.search(RE_BEGIN_STRING, x)\n while match is not None:\n # add the start in\n start = match.start()\n quote = match.group(0)\n lenquote = len(quote)\n current_index += start\n # store the starting index of the string, as well as the\n # characters in the starting quotes (e.g., \", ', \"\"\", r\", etc)\n string_indices.append(current_index)\n starting_quote.append(quote)\n # determine the string that should terminate this string\n ender = re.sub(RE_STRING_START, '', quote)\n x = x[start + lenquote:]\n current_index += lenquote\n # figure out what is inside the string\n continuer = RE_STRING_CONT[ender]\n contents = re.match(continuer, x)\n inside = contents.group(0)\n leninside = len(inside)\n current_index += contents.start() + leninside + len(ender)\n # if we are not at the end of the input string, add the ending index of\n # the string to string_indices\n if contents.end() < len(x):\n string_indices.append(current_index)\n x = x[leninside + len(ender):]\n # find the next match\n match = re.search(RE_BEGIN_STRING, x)\n numquotes = len(string_indices)\n if numquotes == 0:\n return (None, None, None)\n elif numquotes % 2:\n return (string_indices[-1], None, starting_quote[-1])\n else:\n return (string_indices[-2], string_indices[-1], starting_quote[-1])\n\n\n# expandvars is a modified version of os.path.expandvars from the Python 3.5.1\n# source code (root/Lib/ntpath.py, line 353)\n\ndef _is_in_env(name):\n ENV = builtins.__xonsh_env__\n return name in ENV._d or name in ENV.defaults\n\ndef _get_env_string(name):\n ENV = builtins.__xonsh_env__\n value = ENV.get(name)\n ensurer = ENV.get_ensurer(name)\n if ensurer.detype is bool_to_str:\n value = ensure_string(value)\n else:\n value = ensurer.detype(value)\n return value\n\n\ndef expandvars(path):\n \"\"\"Expand shell variables of the forms $var, ${var} and %var%.\n\n Unknown variables are left unchanged.\"\"\"\n ENV = builtins.__xonsh_env__\n if isinstance(path, bytes):\n path = path.decode(encoding=ENV.get('XONSH_ENCODING'),\n errors=ENV.get('XONSH_ENCODING_ERRORS'))\n if '$' not in path and (not ON_WINDOWS or '%' not in path):\n return path\n varchars = string.ascii_letters + string.digits + '_-'\n quote = '\\''\n percent = '%'\n brace = '{'\n rbrace = '}'\n dollar = '$'\n res = path[:0]\n index = 0\n pathlen = len(path)\n while index < pathlen:\n c = path[index:index+1]\n if c == quote: # no expansion within single quotes\n path = path[index + 1:]\n pathlen = len(path)\n try:\n index = path.index(c)\n res += c + path[:index + 1]\n except ValueError:\n res += c + path\n index = pathlen - 1\n elif c == percent and ON_WINDOWS: # variable or '%'\n if path[index + 1:index + 2] == percent:\n res += c\n index += 1\n else:\n path = path[index+1:]\n pathlen = len(path)\n try:\n index = path.index(percent)\n except ValueError:\n res += percent + path\n index = pathlen - 1\n else:\n var = path[:index]\n if _is_in_env(var):\n value = _get_env_string(var)\n else:\n value = percent + var + percent\n res += value\n elif c == dollar: # variable or '$$'\n if path[index + 1:index + 2] == dollar:\n res += c\n index += 1\n elif path[index + 1:index + 2] == brace:\n path = path[index+2:]\n pathlen = len(path)\n try:\n index = path.index(rbrace)\n 
except ValueError:\n res += dollar + brace + path\n index = pathlen - 1\n else:\n var = path[:index]\n try:\n var = eval(var, builtins.__xonsh_ctx__)\n if _is_in_env(var):\n value = _get_env_string(var)\n elif var is Ellipsis:\n value = dollar + brace + '...' + rbrace\n else:\n value = dollar + brace + var + rbrace\n except:\n value = dollar + brace + var + rbrace\n res += value\n else:\n var = path[:0]\n index += 1\n c = path[index:index + 1]\n while c and c in varchars:\n var += c\n index += 1\n c = path[index:index + 1]\n if _is_in_env(var):\n value = _get_env_string(var)\n else:\n value = dollar + var\n res += value\n if c:\n index -= 1\n else:\n res += c\n index += 1\n return res\n\n#\n# File handling tools\n#\n\ndef backup_file(fname):\n \"\"\"Moves an existing file to a new name that has the current time right\n before the extension.\n \"\"\"\n # lazy imports\n import shutil\n from datetime import datetime\n base, ext = os.path.splitext(fname)\n newfname = base + '.' + datetime.now().isoformat() + ext\n shutil.move(fname, newfname)\n\n\ndef normabspath(p):\n \"\"\"Retuns as normalized absolute path, namely, normcase(abspath(p))\"\"\"\n return os.path.normcase(os.path.abspath(p))\n\n\nclass CommandsCache(Set):\n \"\"\"A lazy cache representing the commands available on the file system.\"\"\"\n\n def __init__(self):\n self._cmds_cache = frozenset()\n self._path_checksum = None\n self._alias_checksum = None\n self._path_mtime = -1\n\n def __contains__(self, item):\n return item in self.all_commands\n\n def __iter__(self):\n return iter(self.all_commands)\n\n def __len__(self):\n return len(self.all_commands)\n\n @property\n def all_commands(self):\n paths = builtins.__xonsh_env__.get('PATH', [])\n paths = frozenset(x for x in paths if os.path.isdir(x))\n # did PATH change?\n path_hash = hash(paths)\n cache_valid = path_hash == self._path_checksum\n self._path_checksum = path_hash\n # did aliases change?\n al_hash = hash(frozenset(builtins.aliases))\n cache_valid = cache_valid and al_hash == self._alias_checksum\n self._alias_checksum = al_hash\n # did the contents of any directory in PATH change?\n max_mtime = 0\n for path in paths:\n mtime = os.stat(path).st_mtime\n if mtime > max_mtime:\n max_mtime = mtime\n cache_valid = cache_valid and max_mtime > self._path_mtime\n self._path_mtime = max_mtime\n if cache_valid:\n return self._cmds_cache\n allcmds = set()\n for path in paths:\n allcmds |= set(executables_in(path))\n allcmds |= set(builtins.aliases)\n self._cmds_cache = frozenset(allcmds)\n return self._cmds_cache\n\nWINDOWS_DRIVE_MATCHER = re.compile(r'^\\w:')\n\n\ndef expand_case_matching(s):\n \"\"\"Expands a string to a case insenstive globable string.\"\"\"\n t = []\n openers = {'[', '{'}\n closers = {']', '}'}\n nesting = 0\n\n drive_part = WINDOWS_DRIVE_MATCHER.match(s) if ON_WINDOWS else None\n\n if drive_part:\n drive_part = drive_part.group(0)\n t.append(drive_part)\n s = s[len(drive_part):]\n\n for c in s:\n if c in openers:\n nesting += 1\n elif c in closers:\n nesting -= 1\n elif nesting > 0:\n pass\n elif c.isalpha():\n folded = c.casefold()\n if len(folded) == 1:\n c = '[{0}{1}]'.format(c.upper(), c.lower())\n else:\n newc = ['[{0}{1}]?'.format(f.upper(), f.lower())\n for f in folded[:-1]]\n newc = ''.join(newc)\n newc += '[{0}{1}{2}]'.format(folded[-1].upper(),\n folded[-1].lower(),\n c)\n c = newc\n t.append(c)\n return ''.join(t)\n\n\ndef globpath(s, ignore_case=False):\n \"\"\"Simple wrapper around glob that also expands home and env vars.\"\"\"\n o, s = 
_iglobpath(s, ignore_case=ignore_case)\n o = list(o)\n return o if len(o) != 0 else [s]\n\n\ndef _iglobpath(s, ignore_case=False):\n s = builtins.__xonsh_expand_path__(s)\n if ignore_case:\n s = expand_case_matching(s)\n if sys.version_info > (3, 5):\n if '**' in s and '**/*' not in s:\n s = s.replace('**', '**/*')\n # `recursive` is only a 3.5+ kwarg.\n return iglob(s, recursive=True), s\n else:\n return iglob(s), s\n\ndef iglobpath(s, ignore_case=False):\n \"\"\"Simple wrapper around iglob that also expands home and env vars.\"\"\"\n return _iglobpath(s, ignore_case)[0]\n",
"path": "xonsh/tools.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n\"\"\"Misc. xonsh tools.\n\nThe following implementations were forked from the IPython project:\n\n* Copyright (c) 2008-2014, IPython Development Team\n* Copyright (C) 2001-2007 Fernando Perez <[email protected]>\n* Copyright (c) 2001, Janko Hauser <[email protected]>\n* Copyright (c) 2001, Nathaniel Gray <[email protected]>\n\nImplementations:\n\n* decode()\n* encode()\n* cast_unicode()\n* safe_hasattr()\n* indent()\n\n\"\"\"\nimport os\nimport re\nimport sys\nimport ast\nimport string\nimport ctypes\nimport builtins\nimport subprocess\nimport threading\nimport traceback\nfrom glob import iglob\nfrom warnings import warn\nfrom contextlib import contextmanager\nfrom collections import OrderedDict, Sequence, Set\n\n# adding further imports from xonsh modules is discouraged to avoid cirular\n# dependencies\nfrom xonsh.platform import (has_prompt_toolkit, scandir, win_unicode_console,\n DEFAULT_ENCODING, ON_LINUX, ON_WINDOWS,\n PYTHON_VERSION_INFO)\n\nIS_SUPERUSER = ctypes.windll.shell32.IsUserAnAdmin() != 0 if ON_WINDOWS else os.getuid() == 0\n\n\nclass XonshError(Exception):\n pass\n\n\nclass DefaultNotGivenType(object):\n \"\"\"Singleton for representing when no default value is given.\"\"\"\n\n\nDefaultNotGiven = DefaultNotGivenType()\n\nBEG_TOK_SKIPS = frozenset(['WS', 'INDENT', 'NOT', 'LPAREN'])\nEND_TOK_TYPES = frozenset(['SEMI', 'AND', 'OR', 'RPAREN'])\nRE_END_TOKS = re.compile('(;|and|\\&\\&|or|\\|\\||\\))')\nLPARENS = frozenset(['LPAREN', 'AT_LPAREN', 'BANG_LPAREN', 'DOLLAR_LPAREN',\n 'ATDOLLAR_LPAREN'])\n\n\ndef _is_not_lparen_and_rparen(lparens, rtok):\n \"\"\"Tests if an RPAREN token is matched with something other than a plain old\n LPAREN type.\n \"\"\"\n # note that any([]) is False, so this covers len(lparens) == 0\n return rtok.type == 'RPAREN' and any(x != 'LPAREN' for x in lparens)\n\n\ndef find_next_break(line, mincol=0, lexer=None):\n \"\"\"Returns the column number of the next logical break in subproc mode.\n This function may be useful in finding the maxcol argument of subproc_toks().\n \"\"\"\n if mincol >= 1:\n line = line[mincol:]\n if lexer is None:\n lexer = builtins.__xonsh_execer__.parser.lexer\n if RE_END_TOKS.search(line) is None:\n return None\n maxcol = None\n lparens = []\n lexer.input(line)\n for tok in lexer:\n if tok.type in LPARENS:\n lparens.append(tok.type)\n elif tok.type in END_TOK_TYPES:\n if _is_not_lparen_and_rparen(lparens, tok):\n lparens.pop()\n else:\n maxcol = tok.lexpos + mincol + 1\n break\n elif tok.type == 'ERRORTOKEN' and ')' in tok.value:\n maxcol = tok.lexpos + mincol + 1\n break\n return maxcol\n\n\ndef subproc_toks(line, mincol=-1, maxcol=None, lexer=None, returnline=False):\n \"\"\"Excapsulates tokens in a source code line in a uncaptured\n subprocess ![] starting at a minimum column. 
If there are no tokens\n (ie in a comment line) this returns None.\n \"\"\"\n if lexer is None:\n lexer = builtins.__xonsh_execer__.parser.lexer\n if maxcol is None:\n maxcol = len(line) + 1\n lexer.reset()\n lexer.input(line)\n toks = []\n lparens = []\n end_offset = 0\n for tok in lexer:\n pos = tok.lexpos\n if tok.type not in END_TOK_TYPES and pos >= maxcol:\n break\n if tok.type in LPARENS:\n lparens.append(tok.type)\n if len(toks) == 0 and tok.type in BEG_TOK_SKIPS:\n continue # handle indentation\n elif len(toks) > 0 and toks[-1].type in END_TOK_TYPES:\n if _is_not_lparen_and_rparen(lparens, toks[-1]):\n lparens.pop() # don't continue or break\n elif pos < maxcol and tok.type not in ('NEWLINE', 'DEDENT', 'WS'):\n toks.clear()\n if tok.type in BEG_TOK_SKIPS:\n continue\n else:\n break\n if pos < mincol:\n continue\n toks.append(tok)\n if tok.type == 'NEWLINE':\n break\n elif tok.type == 'DEDENT':\n # fake a newline when dedenting without a newline\n tok.type = 'NEWLINE'\n tok.value = '\\n'\n tok.lineno -= 1\n if len(toks) >= 2:\n prev_tok_end = toks[-2].lexpos + len(toks[-2].value)\n else:\n prev_tok_end = len(line)\n if '#' in line[prev_tok_end:]:\n tok.lexpos = prev_tok_end # prevents wrapping comments\n else:\n tok.lexpos = len(line)\n break\n else:\n if len(toks) > 0 and toks[-1].type in END_TOK_TYPES:\n if _is_not_lparen_and_rparen(lparens, toks[-1]):\n pass\n else:\n toks.pop()\n if len(toks) == 0:\n return # handle comment lines\n tok = toks[-1]\n pos = tok.lexpos\n if isinstance(tok.value, str):\n end_offset = len(tok.value.rstrip())\n else:\n el = line[pos:].split('#')[0].rstrip()\n end_offset = len(el)\n if len(toks) == 0:\n return # handle comment lines\n beg, end = toks[0].lexpos, (toks[-1].lexpos + end_offset)\n end = len(line[:end].rstrip())\n rtn = '![' + line[beg:end] + ']'\n if returnline:\n rtn = line[:beg] + rtn + line[end:]\n return rtn\n\n\ndef subexpr_from_unbalanced(expr, ltok, rtok):\n \"\"\"Attempts to pull out a valid subexpression for unbalanced grouping,\n based on opening tokens, eg. '(', and closing tokens, eg. ')'. This\n does not do full tokenization, but should be good enough for tab\n completion.\n \"\"\"\n lcnt = expr.count(ltok)\n if lcnt == 0:\n return expr\n rcnt = expr.count(rtok)\n if lcnt == rcnt:\n return expr\n subexpr = expr.rsplit(ltok, 1)[-1]\n subexpr = subexpr.rsplit(',', 1)[-1]\n subexpr = subexpr.rsplit(':', 1)[-1]\n return subexpr\n\n\ndef decode(s, encoding=None):\n encoding = encoding or DEFAULT_ENCODING\n return s.decode(encoding, \"replace\")\n\n\ndef encode(u, encoding=None):\n encoding = encoding or DEFAULT_ENCODING\n return u.encode(encoding, \"replace\")\n\n\ndef cast_unicode(s, encoding=None):\n if isinstance(s, bytes):\n return decode(s, encoding)\n return s\n\n\ndef safe_hasattr(obj, attr):\n \"\"\"In recent versions of Python, hasattr() only catches AttributeError.\n This catches all errors.\n \"\"\"\n try:\n getattr(obj, attr)\n return True\n except Exception: # pylint:disable=bare-except\n return False\n\n\ndef indent(instr, nspaces=4, ntabs=0, flatten=False):\n \"\"\"Indent a string a given number of spaces or tabstops.\n\n indent(str,nspaces=4,ntabs=0) -> indent str by ntabs+nspaces.\n\n Parameters\n ----------\n instr : basestring\n The string to be indented.\n nspaces : int (default: 4)\n The number of spaces to be indented.\n ntabs : int (default: 0)\n The number of tabs to be indented.\n flatten : bool (default: False)\n Whether to scrub existing indentation. 
If True, all lines will be\n aligned to the same indentation. If False, existing indentation will\n be strictly increased.\n\n Returns\n -------\n outstr : string indented by ntabs and nspaces.\n\n \"\"\"\n if instr is None:\n return\n ind = '\\t' * ntabs + ' ' * nspaces\n if flatten:\n pat = re.compile(r'^\\s*', re.MULTILINE)\n else:\n pat = re.compile(r'^', re.MULTILINE)\n outstr = re.sub(pat, ind, instr)\n if outstr.endswith(os.linesep + ind):\n return outstr[:-len(ind)]\n else:\n return outstr\n\n\ndef get_sep():\n \"\"\" Returns the appropriate filepath separator char depending on OS and\n xonsh options set\n \"\"\"\n return (os.altsep if ON_WINDOWS\n and builtins.__xonsh_env__.get('FORCE_POSIX_PATHS') else\n os.sep)\n\n\ndef fallback(cond, backup):\n \"\"\"Decorator for returning the object if cond is true and a backup if cond is false.\n \"\"\"\n def dec(obj):\n return obj if cond else backup\n return dec\n\n\n# The following redirect classes were taken directly from Python 3.5's source\n# code (from the contextlib module). This can be removed when 3.5 is released,\n# although redirect_stdout exists in 3.4, redirect_stderr does not.\n# See the Python software license: https://docs.python.org/3/license.html\n# Copyright (c) Python Software Foundation. All rights reserved.\nclass _RedirectStream:\n\n _stream = None\n\n def __init__(self, new_target):\n self._new_target = new_target\n # We use a list of old targets to make this CM re-entrant\n self._old_targets = []\n\n def __enter__(self):\n self._old_targets.append(getattr(sys, self._stream))\n setattr(sys, self._stream, self._new_target)\n return self._new_target\n\n def __exit__(self, exctype, excinst, exctb):\n setattr(sys, self._stream, self._old_targets.pop())\n\n\nclass redirect_stdout(_RedirectStream):\n \"\"\"Context manager for temporarily redirecting stdout to another file::\n\n # How to send help() to stderr\n with redirect_stdout(sys.stderr):\n help(dir)\n\n # How to write help() to a file\n with open('help.txt', 'w') as f:\n with redirect_stdout(f):\n help(pow)\n\n Mostly for backwards compatibility.\n \"\"\"\n _stream = \"stdout\"\n\n\nclass redirect_stderr(_RedirectStream):\n \"\"\"Context manager for temporarily redirecting stderr to another file.\"\"\"\n _stream = \"stderr\"\n\n\ndef _yield_accessible_unix_file_names(path):\n \"yield file names of executablel files in `path`\"\n\n for file_ in scandir(path):\n try:\n if file_.is_file() and os.access(file_.path, os.X_OK):\n yield file_.name\n except NotADirectoryError:\n # broken Symlink are neither dir not files\n pass\n\n\ndef _executables_in_posix(path):\n if PYTHON_VERSION_INFO < (3, 5, 0):\n for fname in os.listdir(path):\n fpath = os.path.join(path, fname)\n if (os.path.exists(fpath) and os.access(fpath, os.X_OK) and \\\n (not os.path.isdir(fpath))):\n yield fname\n else:\n yield from _yield_accessible_unix_file_names(path)\n\n\ndef _executables_in_windows(path):\n extensions = builtins.__xonsh_env__.get('PATHEXT',['.COM', '.EXE', '.BAT'])\n if PYTHON_VERSION_INFO < (3, 5, 0):\n for fname in os.listdir(path):\n fpath = os.path.join(path, fname)\n if (os.path.exists(fpath) and not os.path.isdir(fpath)):\n base_name, ext = os.path.splitext(fname)\n if ext.upper() in extensions:\n yield fname\n else:\n for fname in (x.name for x in scandir(path) if x.is_file()):\n base_name, ext = os.path.splitext(fname)\n if ext.upper() in extensions:\n yield fname\n\n\ndef executables_in(path):\n \"\"\"Returns a generator of files in `path` that the user could execute. 
\"\"\"\n if ON_WINDOWS:\n func = _executables_in_windows\n else:\n func = _executables_in_posix\n try:\n yield from func(path)\n except PermissionError:\n return\n\n\ndef command_not_found(cmd):\n \"\"\"Uses the debian/ubuntu command-not-found utility to suggest packages for a\n command that cannot currently be found.\n \"\"\"\n if not ON_LINUX:\n return ''\n elif not os.path.isfile('/usr/lib/command-not-found'):\n # utility is not on PATH\n return ''\n c = '/usr/lib/command-not-found {0}; exit 0'\n s = subprocess.check_output(c.format(cmd), universal_newlines=True,\n stderr=subprocess.STDOUT, shell=True)\n s = '\\n'.join(s.splitlines()[:-1]).strip()\n return s\n\n\ndef suggest_commands(cmd, env, aliases):\n \"\"\"Suggests alternative commands given an environment and aliases.\"\"\"\n if not env.get('SUGGEST_COMMANDS'):\n return\n thresh = env.get('SUGGEST_THRESHOLD')\n max_sugg = env.get('SUGGEST_MAX_NUM')\n if max_sugg < 0:\n max_sugg = float('inf')\n cmd = cmd.lower()\n suggested = {}\n\n for alias in builtins.aliases:\n if alias not in suggested:\n if levenshtein(alias.lower(), cmd, thresh) < thresh:\n suggested[alias] = 'Alias'\n\n for path in filter(os.path.isdir, env.get('PATH')):\n for _file in executables_in(path):\n if _file not in suggested \\\n and levenshtein(_file.lower(), cmd, thresh) < thresh:\n suggested[_file] = 'Command ({0})'.format(os.path.join(path, _file))\n\n suggested = OrderedDict(\n sorted(suggested.items(),\n key=lambda x: suggestion_sort_helper(x[0].lower(), cmd)))\n num = min(len(suggested), max_sugg)\n\n if num == 0:\n rtn = command_not_found(cmd)\n else:\n oneof = '' if num == 1 else 'one of '\n tips = 'Did you mean {}the following?'.format(oneof)\n items = list(suggested.popitem(False) for _ in range(num))\n length = max(len(key) for key, _ in items) + 2\n alternatives = '\\n'.join(' {: <{}} {}'.format(key + \":\", length, val)\n for key, val in items)\n rtn = '{}\\n{}'.format(tips, alternatives)\n c = command_not_found(cmd)\n rtn += ('\\n\\n' + c) if len(c) > 0 else ''\n return rtn\n\n\ndef print_exception(msg=None):\n \"\"\"Print exceptions with/without traceback.\"\"\"\n env = getattr(builtins, '__xonsh_env__', os.environ)\n if 'XONSH_SHOW_TRACEBACK' not in env:\n sys.stderr.write('xonsh: For full traceback set: '\n '$XONSH_SHOW_TRACEBACK = True\\n')\n if env.get('XONSH_SHOW_TRACEBACK', False):\n traceback.print_exc()\n else:\n exc_type, exc_value, exc_traceback = sys.exc_info()\n exception_only = traceback.format_exception_only(exc_type, exc_value)\n sys.stderr.write(''.join(exception_only))\n if msg:\n msg = msg if msg.endswith('\\n') else msg + '\\n'\n sys.stderr.write(msg)\n\n\n# Modified from Public Domain code, by Magnus Lie Hetland\n# from http://hetland.org/coding/python/levenshtein.py\ndef levenshtein(a, b, max_dist=float('inf')):\n \"\"\"Calculates the Levenshtein distance between a and b.\"\"\"\n n, m = len(a), len(b)\n if abs(n - m) > max_dist:\n return float('inf')\n if n > m:\n # Make sure n <= m, to use O(min(n,m)) space\n a, b = b, a\n n, m = m, n\n current = range(n + 1)\n for i in range(1, m + 1):\n previous, current = current, [i] + [0] * n\n for j in range(1, n + 1):\n add, delete = previous[j] + 1, current[j - 1] + 1\n change = previous[j - 1]\n if a[j - 1] != b[i - 1]:\n change = change + 1\n current[j] = min(add, delete, change)\n return current[n]\n\n\ndef suggestion_sort_helper(x, y):\n \"\"\"Returns a score (lower is better) for x based on how similar\n it is to y. 
Used to rank suggestions.\"\"\"\n x = x.lower()\n y = y.lower()\n lendiff = len(x) + len(y)\n inx = len([i for i in x if i not in y])\n iny = len([i for i in y if i not in x])\n return lendiff + inx + iny\n\n\ndef escape_windows_cmd_string(s):\n \"\"\"Returns a string that is usable by the Windows cmd.exe.\n The escaping is based on details here and emperical testing:\n http://www.robvanderwoude.com/escapechars.php\n \"\"\"\n for c in '()%!^<>&|\"':\n s = s.replace(c, '^' + c)\n s = s.replace('/?', '/.')\n return s\n\n\ndef argvquote(arg, force=False):\n \"\"\" Returns an argument quoted in such a way that that CommandLineToArgvW\n on Windows will return the argument string unchanged.\n This is the same thing Popen does when supplied with an list of arguments.\n Arguments in a command line should be separated by spaces; this\n function does not add these spaces. This implementation follows the\n suggestions outlined here:\n https://blogs.msdn.microsoft.com/twistylittlepassagesallalike/2011/04/23/everyone-quotes-command-line-arguments-the-wrong-way/\n \"\"\"\n if not force and len(arg) != 0 and not any([c in arg for c in ' \\t\\n\\v\"']):\n return arg\n else:\n n_backslashes = 0\n cmdline = '\"'\n for c in arg:\n if c == '\"':\n cmdline += (n_backslashes * 2 + 1) * '\\\\'\n else:\n cmdline += n_backslashes * '\\\\'\n if c != '\\\\':\n cmdline += c\n n_backslashes = 0\n else:\n n_backslashes += 1\n return cmdline + n_backslashes * 2 * '\\\\' + '\"'\n\n\ndef on_main_thread():\n \"\"\"Checks if we are on the main thread or not.\"\"\"\n return threading.current_thread() is threading.main_thread()\n\n\n@contextmanager\ndef swap(namespace, name, value, default=NotImplemented):\n \"\"\"Swaps a current variable name in a namespace for another value, and then\n replaces it when the context is exited.\n \"\"\"\n old = getattr(namespace, name, default)\n setattr(namespace, name, value)\n yield value\n if old is default:\n delattr(namespace, name)\n else:\n setattr(namespace, name, old)\n\n#\n# Validators and contervers\n#\n\n\ndef is_int(x):\n \"\"\"Tests if something is an integer\"\"\"\n return isinstance(x, int)\n\n\ndef is_float(x):\n \"\"\"Tests if something is a float\"\"\"\n return isinstance(x, float)\n\n\ndef is_string(x):\n \"\"\"Tests if something is a string\"\"\"\n return isinstance(x, str)\n\n\ndef is_callable(x):\n \"\"\"Tests if something is callable\"\"\"\n return callable(x)\n\n\ndef is_string_or_callable(x):\n \"\"\"Tests if something is a string or callable\"\"\"\n return is_string(x) or is_callable(x)\n\n\ndef always_true(x):\n \"\"\"Returns True\"\"\"\n return True\n\n\ndef always_false(x):\n \"\"\"Returns False\"\"\"\n return False\n\n\ndef ensure_string(x):\n \"\"\"Returns a string if x is not a string, and x if it already is.\"\"\"\n return str(x)\n\n\ndef is_env_path(x):\n \"\"\"This tests if something is an environment path, ie a list of strings.\"\"\"\n if isinstance(x, str):\n return False\n else:\n return (isinstance(x, Sequence) and\n all(isinstance(a, str) for a in x))\n\n\ndef str_to_env_path(x):\n \"\"\"Converts a string to an environment path, ie a list of strings,\n splitting on the OS separator.\n \"\"\"\n return x.split(os.pathsep)\n\n\ndef env_path_to_str(x):\n \"\"\"Converts an environment path to a string by joining on the OS separator.\"\"\"\n return os.pathsep.join(x)\n\n\ndef is_bool(x):\n \"\"\"Tests if something is a boolean.\"\"\"\n return isinstance(x, bool)\n\n\n_FALSES = frozenset(['', '0', 'n', 'f', 'no', 'none', 'false'])\n\ndef to_bool(x):\n 
\"\"\"\"Converts to a boolean in a semantically meaningful way.\"\"\"\n if isinstance(x, bool):\n return x\n elif isinstance(x, str):\n return False if x.lower() in _FALSES else True\n else:\n return bool(x)\n\n\ndef bool_to_str(x):\n \"\"\"Converts a bool to an empty string if False and the string '1' if True.\"\"\"\n return '1' if x else ''\n\n\n_BREAKS = frozenset(['b', 'break', 's', 'skip', 'q', 'quit'])\n\n\ndef to_bool_or_break(x):\n if isinstance(x, str) and x.lower() in _BREAKS:\n return 'break'\n else:\n return to_bool(x)\n\n\ndef is_bool_or_int(x):\n \"\"\"Returns whether a value is a boolean or integer.\"\"\"\n return is_bool(x) or is_int(x)\n\n\ndef to_bool_or_int(x):\n \"\"\"Converts a value to a boolean or an integer.\"\"\"\n if isinstance(x, str):\n return int(x) if x.isdigit() else to_bool(x)\n elif is_int(x): # bools are ints too!\n return x\n else:\n return bool(x)\n\n\ndef bool_or_int_to_str(x):\n \"\"\"Converts a boolean or integer to a string.\"\"\"\n return bool_to_str(x) if is_bool(x) else str(x)\n\n\ndef ensure_int_or_slice(x):\n \"\"\"Makes sure that x is list-indexable.\"\"\"\n if x is None:\n return slice(None)\n elif is_int(x):\n return x\n # must have a string from here on\n if ':' in x:\n x = x.strip('[]()')\n return slice(*(int(x) if len(x) > 0 else None for x in x.split(':')))\n else:\n return int(x)\n\n\ndef is_string_set(x):\n \"\"\"Tests if something is a set\"\"\"\n return (isinstance(x, Set) and\n all(isinstance(a, str) for a in x))\n\n\ndef csv_to_set(x):\n \"\"\"Convert a comma-separated list of strings to a set of strings.\"\"\"\n if not x:\n return set()\n else:\n return set(x.split(','))\n\n\ndef set_to_csv(x):\n \"\"\"Convert a set of strings to a comma-separated list of strings.\"\"\"\n return ','.join(x)\n\n\ndef is_bool_seq(x):\n \"\"\"Tests if an object is a sequence of bools.\"\"\"\n return isinstance(x, Sequence) and all(isinstance(y, bool) for y in x)\n\n\ndef csv_to_bool_seq(x):\n \"\"\"Takes a comma-separated string and converts it into a list of bools.\"\"\"\n return [to_bool(y) for y in csv_to_set(x)]\n\n\ndef bool_seq_to_csv(x):\n \"\"\"Converts a sequence of bools to a comma-separated string.\"\"\"\n return ','.join(map(str, x))\n\n\ndef is_completions_display_value(x):\n return x in {'none', 'single', 'multi'}\n\n\ndef to_completions_display_value(x):\n x = str(x).lower()\n if x in {'none', 'false'}:\n x = 'none'\n elif x in {'multi', 'true'}:\n x = 'multi'\n elif x == 'single':\n pass\n else:\n warn('\"{}\" is not a valid value for $COMPLETIONS_DISPLAY. 
'.format(x) +\n 'Using \"multi\".', RuntimeWarning)\n x = 'multi'\n return x\n\n\ndef setup_win_unicode_console(enable):\n \"\"\"\"Enables or disables unicode display on windows.\"\"\"\n enable = to_bool(enable)\n if ON_WINDOWS and win_unicode_console:\n if enable:\n win_unicode_console.enable()\n else:\n win_unicode_console.disable()\n return enable\n\n# history validation\n\n_min_to_sec = lambda x: 60.0 * float(x)\n_hour_to_sec = lambda x: 60.0 * _min_to_sec(x)\n_day_to_sec = lambda x: 24.0 * _hour_to_sec(x)\n_month_to_sec = lambda x: 30.4375 * _day_to_sec(x)\n_year_to_sec = lambda x: 365.25 * _day_to_sec(x)\n_kb_to_b = lambda x: 1024 * int(x)\n_mb_to_b = lambda x: 1024 * _kb_to_b(x)\n_gb_to_b = lambda x: 1024 * _mb_to_b(x)\n_tb_to_b = lambda x: 1024 * _tb_to_b(x)\n\nCANON_HISTORY_UNITS = frozenset(['commands', 'files', 's', 'b'])\n\nHISTORY_UNITS = {\n '': ('commands', int),\n 'c': ('commands', int),\n 'cmd': ('commands', int),\n 'cmds': ('commands', int),\n 'command': ('commands', int),\n 'commands': ('commands', int),\n 'f': ('files', int),\n 'files': ('files', int),\n 's': ('s', float),\n 'sec': ('s', float),\n 'second': ('s', float),\n 'seconds': ('s', float),\n 'm': ('s', _min_to_sec),\n 'min': ('s', _min_to_sec),\n 'mins': ('s', _min_to_sec),\n 'h': ('s', _hour_to_sec),\n 'hr': ('s', _hour_to_sec),\n 'hour': ('s', _hour_to_sec),\n 'hours': ('s', _hour_to_sec),\n 'd': ('s', _day_to_sec),\n 'day': ('s', _day_to_sec),\n 'days': ('s', _day_to_sec),\n 'mon': ('s', _month_to_sec),\n 'month': ('s', _month_to_sec),\n 'months': ('s', _month_to_sec),\n 'y': ('s', _year_to_sec),\n 'yr': ('s', _year_to_sec),\n 'yrs': ('s', _year_to_sec),\n 'year': ('s', _year_to_sec),\n 'years': ('s', _year_to_sec),\n 'b': ('b', int),\n 'byte': ('b', int),\n 'bytes': ('b', int),\n 'kb': ('b', _kb_to_b),\n 'kilobyte': ('b', _kb_to_b),\n 'kilobytes': ('b', _kb_to_b),\n 'mb': ('b', _mb_to_b),\n 'meg': ('b', _mb_to_b),\n 'megs': ('b', _mb_to_b),\n 'megabyte': ('b', _mb_to_b),\n 'megabytes': ('b', _mb_to_b),\n 'gb': ('b', _gb_to_b),\n 'gig': ('b', _gb_to_b),\n 'gigs': ('b', _gb_to_b),\n 'gigabyte': ('b', _gb_to_b),\n 'gigabytes': ('b', _gb_to_b),\n 'tb': ('b', _tb_to_b),\n 'terabyte': ('b', _tb_to_b),\n 'terabytes': ('b', _tb_to_b),\n }\n\"\"\"Maps lowercase unit names to canonical name and conversion utilities.\"\"\"\n\ndef is_history_tuple(x):\n \"\"\"Tests if something is a proper history value, units tuple.\"\"\"\n if isinstance(x, Sequence) and len(x) == 2 and isinstance(x[0], (int, float)) \\\n and x[1].lower() in CANON_HISTORY_UNITS:\n return True\n return False\n\n\ndef is_dynamic_cwd_width(x):\n \"\"\" Determine if the input is a valid input for the DYNAMIC_CWD_WIDTH\n environement variable.\n \"\"\"\n return isinstance(x, tuple) and len(x) == 2 and isinstance(x[0], float) and \\\n (x[1] in set('c%'))\n\n\ndef to_dynamic_cwd_tuple(x):\n \"\"\"Convert to a canonical cwd_width tuple.\"\"\"\n unit = 'c'\n if isinstance(x, str):\n if x[-1] == '%':\n x = x[:-1]\n unit = '%'\n else:\n unit = 'c'\n return (float(x), unit)\n else:\n return (float(x[0]), x[1])\n\n\ndef dynamic_cwd_tuple_to_str(x):\n \"\"\"Convert a canonical cwd_width tuple to a string.\"\"\"\n if x[1] == '%':\n return str(x[0]) + '%'\n else:\n return str(x[0])\n\n\nRE_HISTORY_TUPLE = re.compile('([-+]?[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?)\\s*([A-Za-z]*)')\n\ndef to_history_tuple(x):\n \"\"\"Converts to a canonincal history tuple.\"\"\"\n if not isinstance(x, (Sequence, float, int)):\n raise ValueError('history size must be given as a sequence or 
number')\n if isinstance(x, str):\n m = RE_HISTORY_TUPLE.match(x.strip().lower())\n return to_history_tuple((m.group(1), m.group(3)))\n elif isinstance(x, (float, int)):\n return to_history_tuple((x, 'commands'))\n units, converter = HISTORY_UNITS[x[1]]\n value = converter(x[0])\n return (value, units)\n\n\ndef history_tuple_to_str(x):\n \"\"\"Converts a valid history tuple to a canonical string.\"\"\"\n return '{0} {1}'.format(*x)\n\n\ndef format_color(string, **kwargs):\n \"\"\"Formats strings that may contain colors. This simply dispatches to the\n shell instances method of the same name. The results of this function should\n be directly usable by print_color().\n \"\"\"\n return builtins.__xonsh_shell__.shell.format_color(string, **kwargs)\n\n\ndef print_color(string, **kwargs):\n \"\"\"Prints a string that may contain colors. This dispatched to the shell\n method of the same name. Colors will be formatted if they have not already\n been.\n \"\"\"\n builtins.__xonsh_shell__.shell.print_color(string, **kwargs)\n\n\ndef color_style_names():\n \"\"\"Returns an iterable of all available style names.\"\"\"\n return builtins.__xonsh_shell__.shell.color_style_names()\n\n\ndef color_style():\n \"\"\"Returns the current color map.\"\"\"\n return builtins.__xonsh_shell__.shell.color_style()\n\n\ndef _get_color_indexes(style_map):\n \"\"\" Generates the color and windows color index for a style \"\"\"\n import prompt_toolkit\n table = prompt_toolkit.terminal.win32_output.ColorLookupTable()\n pt_style = prompt_toolkit.styles.style_from_dict(style_map)\n for token in style_map:\n attr = pt_style.token_to_attrs[token]\n if attr.color is not None:\n index = table.lookup_color(attr.color, attr.bgcolor)\n try:\n rgb = (int(attr.color[0:2], 16),\n int(attr.color[2:4], 16),\n int(attr.color[4:6], 16))\n except:\n rgb = None\n yield token, index, rgb\n\n\ndef intensify_colors_for_cmd_exe(style_map, replace_colors=None):\n \"\"\"Returns a modified style to where colors that maps to dark\n colors are replaced with brighter versions. 
Also expands the\n range used by the gray colors\n \"\"\"\n modified_style = {}\n stype = builtins.__xonsh_env__.get('SHELL_TYPE')\n if (not ON_WINDOWS or\n (stype not in ('prompt_toolkit', 'best')) or\n (stype == 'best' and not has_prompt_toolkit())):\n return modified_style\n if replace_colors is None:\n replace_colors = {1: '#44ffff', # subst blue with bright cyan\n 2: '#44ff44', # subst green with bright green\n 4: '#ff4444', # subst red with bright red\n 5: '#ff44ff', # subst magenta with bright magenta\n 6: '#ffff44', # subst yellow with bright yellow\n 9: '#00aaaa', # subst intense blue (hard to read)\n # with dark cyan (which is readable)\n }\n for token, idx, _ in _get_color_indexes(style_map):\n if idx in replace_colors:\n modified_style[token] = replace_colors[idx]\n return modified_style\n\n\ndef expand_gray_colors_for_cmd_exe(style_map):\n \"\"\" Expand the style's gray scale color range.\n All gray scale colors has a tendency to map to the same default GRAY\n in cmd.exe.\n \"\"\"\n modified_style = {}\n stype = builtins.__xonsh_env__.get('SHELL_TYPE')\n if (not ON_WINDOWS or\n (stype not in ('prompt_toolkit', 'best')) or\n (stype == 'best' and not has_prompt_toolkit())):\n return modified_style\n for token, idx, rgb in _get_color_indexes(style_map):\n if idx == 7 and rgb:\n if sum(rgb) <= 306:\n # Equal and below '#666666 is reset to dark gray\n modified_style[token] = '#444444'\n elif sum(rgb) >= 408:\n # Equal and above 0x888888 is reset to white\n modified_style[token] = '#ffffff'\n return modified_style\n\n\ndef intensify_colors_on_win_setter(enable):\n \"\"\" Resets the style when setting the INTENSIFY_COLORS_ON_WIN\n environment variable. \"\"\"\n enable = to_bool(enable)\n delattr(builtins.__xonsh_shell__.shell.styler, 'style_name')\n return enable\n\n\n_RE_STRING_START = \"[bBrRuU]*\"\n_RE_STRING_TRIPLE_DOUBLE = '\"\"\"'\n_RE_STRING_TRIPLE_SINGLE = \"'''\"\n_RE_STRING_DOUBLE = '\"'\n_RE_STRING_SINGLE = \"'\"\n_STRINGS = (_RE_STRING_TRIPLE_DOUBLE,\n _RE_STRING_TRIPLE_SINGLE,\n _RE_STRING_DOUBLE,\n _RE_STRING_SINGLE)\nRE_BEGIN_STRING = re.compile(\"(\" + _RE_STRING_START +\n '(' + \"|\".join(_STRINGS) +\n '))')\n\"\"\"Regular expression matching the start of a string, including quotes and\nleading characters (r, b, or u)\"\"\"\n\nRE_STRING_START = re.compile(_RE_STRING_START)\n\"\"\"Regular expression matching the characters before the quotes when starting a\nstring (r, b, or u, case insensitive)\"\"\"\n\nRE_STRING_CONT = {k: re.compile(v) for k,v in {\n '\"': r'((\\\\(.|\\n))|([^\"\\\\]))*',\n \"'\": r\"((\\\\(.|\\n))|([^'\\\\]))*\",\n '\"\"\"': r'((\\\\(.|\\n))|([^\"\\\\])|(\"(?!\"\"))|\\n)*',\n \"'''\": r\"((\\\\(.|\\n))|([^'\\\\])|('(?!''))|\\n)*\",\n}.items()}\n\"\"\"Dictionary mapping starting quote sequences to regular expressions that\nmatch the contents of a string beginning with those quotes (not including the\nterminating quotes)\"\"\"\n\n\ndef check_for_partial_string(x):\n \"\"\"\n Returns the starting index (inclusive), ending index (exclusive), and\n starting quote string of the most recent Python string found in the input.\n\n check_for_partial_string(x) -> (startix, endix, quote)\n\n Parameters\n ----------\n x : str\n The string to be checked (representing a line of terminal input)\n\n Returns\n -------\n startix : int (or None)\n The index where the most recent Python string found started\n (inclusive), or None if no strings exist in the input\n\n endix : int (or None)\n The index where the most recent Python string found ended (exclusive),\n or None 
if no strings exist in the input OR if the input ended in the\n middle of a Python string\n\n quote : str (or None)\n A string containing the quote used to start the string (e.g., b\", \",\n '''), or None if no string was found.\n \"\"\"\n string_indices = []\n starting_quote = []\n current_index = 0\n match = re.search(RE_BEGIN_STRING, x)\n while match is not None:\n # add the start in\n start = match.start()\n quote = match.group(0)\n lenquote = len(quote)\n current_index += start\n # store the starting index of the string, as well as the\n # characters in the starting quotes (e.g., \", ', \"\"\", r\", etc)\n string_indices.append(current_index)\n starting_quote.append(quote)\n # determine the string that should terminate this string\n ender = re.sub(RE_STRING_START, '', quote)\n x = x[start + lenquote:]\n current_index += lenquote\n # figure out what is inside the string\n continuer = RE_STRING_CONT[ender]\n contents = re.match(continuer, x)\n inside = contents.group(0)\n leninside = len(inside)\n current_index += contents.start() + leninside + len(ender)\n # if we are not at the end of the input string, add the ending index of\n # the string to string_indices\n if contents.end() < len(x):\n string_indices.append(current_index)\n x = x[leninside + len(ender):]\n # find the next match\n match = re.search(RE_BEGIN_STRING, x)\n numquotes = len(string_indices)\n if numquotes == 0:\n return (None, None, None)\n elif numquotes % 2:\n return (string_indices[-1], None, starting_quote[-1])\n else:\n return (string_indices[-2], string_indices[-1], starting_quote[-1])\n\n\n# expandvars is a modified version of os.path.expandvars from the Python 3.5.1\n# source code (root/Lib/ntpath.py, line 353)\n\ndef _is_in_env(name):\n ENV = builtins.__xonsh_env__\n return name in ENV._d or name in ENV.defaults\n\ndef _get_env_string(name):\n ENV = builtins.__xonsh_env__\n value = ENV.get(name)\n ensurer = ENV.get_ensurer(name)\n if ensurer.detype is bool_to_str:\n value = ensure_string(value)\n else:\n value = ensurer.detype(value)\n return value\n\n\ndef expandvars(path):\n \"\"\"Expand shell variables of the forms $var, ${var} and %var%.\n\n Unknown variables are left unchanged.\"\"\"\n ENV = builtins.__xonsh_env__\n if isinstance(path, bytes):\n path = path.decode(encoding=ENV.get('XONSH_ENCODING'),\n errors=ENV.get('XONSH_ENCODING_ERRORS'))\n if '$' not in path and (not ON_WINDOWS or '%' not in path):\n return path\n varchars = string.ascii_letters + string.digits + '_-'\n quote = '\\''\n percent = '%'\n brace = '{'\n rbrace = '}'\n dollar = '$'\n res = path[:0]\n index = 0\n pathlen = len(path)\n while index < pathlen:\n c = path[index:index+1]\n if c == quote: # no expansion within single quotes\n path = path[index + 1:]\n pathlen = len(path)\n try:\n index = path.index(c)\n res += c + path[:index + 1]\n except ValueError:\n res += c + path\n index = pathlen - 1\n elif c == percent and ON_WINDOWS: # variable or '%'\n if path[index + 1:index + 2] == percent:\n res += c\n index += 1\n else:\n path = path[index+1:]\n pathlen = len(path)\n try:\n index = path.index(percent)\n except ValueError:\n res += percent + path\n index = pathlen - 1\n else:\n var = path[:index]\n if _is_in_env(var):\n value = _get_env_string(var)\n else:\n value = percent + var + percent\n res += value\n elif c == dollar: # variable or '$$'\n if path[index + 1:index + 2] == dollar:\n res += c\n index += 1\n elif path[index + 1:index + 2] == brace:\n path = path[index+2:]\n pathlen = len(path)\n try:\n index = path.index(rbrace)\n 
except ValueError:\n res += dollar + brace + path\n index = pathlen - 1\n else:\n var = path[:index]\n try:\n var = eval(var, builtins.__xonsh_ctx__)\n if _is_in_env(var):\n value = _get_env_string(var)\n elif var is Ellipsis:\n value = dollar + brace + '...' + rbrace\n else:\n value = dollar + brace + var + rbrace\n except:\n value = dollar + brace + var + rbrace\n res += value\n else:\n var = path[:0]\n index += 1\n c = path[index:index + 1]\n while c and c in varchars:\n var += c\n index += 1\n c = path[index:index + 1]\n if _is_in_env(var):\n value = _get_env_string(var)\n else:\n value = dollar + var\n res += value\n if c:\n index -= 1\n else:\n res += c\n index += 1\n return res\n\n#\n# File handling tools\n#\n\ndef backup_file(fname):\n \"\"\"Moves an existing file to a new name that has the current time right\n before the extension.\n \"\"\"\n # lazy imports\n import shutil\n from datetime import datetime\n base, ext = os.path.splitext(fname)\n timestamp = datetime.now().strftime('%Y-%m-%d-%H-%M-%S-%f')\n newfname = '%s.%s%s' % (base, timestamp, ext)\n shutil.move(fname, newfname)\n\n\ndef normabspath(p):\n \"\"\"Retuns as normalized absolute path, namely, normcase(abspath(p))\"\"\"\n return os.path.normcase(os.path.abspath(p))\n\n\nclass CommandsCache(Set):\n \"\"\"A lazy cache representing the commands available on the file system.\"\"\"\n\n def __init__(self):\n self._cmds_cache = frozenset()\n self._path_checksum = None\n self._alias_checksum = None\n self._path_mtime = -1\n\n def __contains__(self, item):\n return item in self.all_commands\n\n def __iter__(self):\n return iter(self.all_commands)\n\n def __len__(self):\n return len(self.all_commands)\n\n @property\n def all_commands(self):\n paths = builtins.__xonsh_env__.get('PATH', [])\n paths = frozenset(x for x in paths if os.path.isdir(x))\n # did PATH change?\n path_hash = hash(paths)\n cache_valid = path_hash == self._path_checksum\n self._path_checksum = path_hash\n # did aliases change?\n al_hash = hash(frozenset(builtins.aliases))\n cache_valid = cache_valid and al_hash == self._alias_checksum\n self._alias_checksum = al_hash\n # did the contents of any directory in PATH change?\n max_mtime = 0\n for path in paths:\n mtime = os.stat(path).st_mtime\n if mtime > max_mtime:\n max_mtime = mtime\n cache_valid = cache_valid and max_mtime > self._path_mtime\n self._path_mtime = max_mtime\n if cache_valid:\n return self._cmds_cache\n allcmds = set()\n for path in paths:\n allcmds |= set(executables_in(path))\n allcmds |= set(builtins.aliases)\n self._cmds_cache = frozenset(allcmds)\n return self._cmds_cache\n\nWINDOWS_DRIVE_MATCHER = re.compile(r'^\\w:')\n\n\ndef expand_case_matching(s):\n \"\"\"Expands a string to a case insenstive globable string.\"\"\"\n t = []\n openers = {'[', '{'}\n closers = {']', '}'}\n nesting = 0\n\n drive_part = WINDOWS_DRIVE_MATCHER.match(s) if ON_WINDOWS else None\n\n if drive_part:\n drive_part = drive_part.group(0)\n t.append(drive_part)\n s = s[len(drive_part):]\n\n for c in s:\n if c in openers:\n nesting += 1\n elif c in closers:\n nesting -= 1\n elif nesting > 0:\n pass\n elif c.isalpha():\n folded = c.casefold()\n if len(folded) == 1:\n c = '[{0}{1}]'.format(c.upper(), c.lower())\n else:\n newc = ['[{0}{1}]?'.format(f.upper(), f.lower())\n for f in folded[:-1]]\n newc = ''.join(newc)\n newc += '[{0}{1}{2}]'.format(folded[-1].upper(),\n folded[-1].lower(),\n c)\n c = newc\n t.append(c)\n return ''.join(t)\n\n\ndef globpath(s, ignore_case=False):\n \"\"\"Simple wrapper around glob that also 
expands home and env vars.\"\"\"\n o, s = _iglobpath(s, ignore_case=ignore_case)\n o = list(o)\n return o if len(o) != 0 else [s]\n\n\ndef _iglobpath(s, ignore_case=False):\n s = builtins.__xonsh_expand_path__(s)\n if ignore_case:\n s = expand_case_matching(s)\n if sys.version_info > (3, 5):\n if '**' in s and '**/*' not in s:\n s = s.replace('**', '**/*')\n # `recursive` is only a 3.5+ kwarg.\n return iglob(s, recursive=True), s\n else:\n return iglob(s), s\n\ndef iglobpath(s, ignore_case=False):\n \"\"\"Simple wrapper around iglob that also expands home and env vars.\"\"\"\n return _iglobpath(s, ignore_case)[0]\n",
"path": "xonsh/tools.py"
}
] | diff --git a/CHANGELOG.rst b/CHANGELOG.rst
index 2aa7aad2f8..d473624ee9 100644
--- a/CHANGELOG.rst
+++ b/CHANGELOG.rst
@@ -12,7 +12,9 @@ Current Developments
**Removed:** None
-**Fixed:** None
+**Fixed:**
+
+* Fixed xonfig wizard failing on Windows due to colon in created filename.
**Security:** None
@@ -48,7 +50,6 @@ v0.3.4
file.
-
v0.3.3
====================
**Added:**
diff --git a/xonsh/tools.py b/xonsh/tools.py
index 277b7aeb46..a12711bb20 100644
--- a/xonsh/tools.py
+++ b/xonsh/tools.py
@@ -1171,7 +1171,8 @@ def backup_file(fname):
import shutil
from datetime import datetime
base, ext = os.path.splitext(fname)
- newfname = base + '.' + datetime.now().isoformat() + ext
+ timestamp = datetime.now().strftime('%Y-%m-%d-%H-%M-%S-%f')
+ newfname = '%s.%s%s' % (base, timestamp, ext)
shutil.move(fname, newfname)
|
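The fix above replaces `datetime.now().isoformat()` with an explicit `strftime` pattern because ISO-8601 timestamps contain colons, which Windows does not allow in filenames. A minimal standalone sketch of the difference (the fixed example datetime is arbitrary, chosen only to make the output reproducible):

```python
from datetime import datetime

# Arbitrary fixed timestamp, used only so the printed output is deterministic.
stamp = datetime(2023, 1, 19, 13, 37, 42, 123456)

print(stamp.isoformat())                       # 2023-01-19T13:37:42.123456 -> contains ':'
print(stamp.strftime('%Y-%m-%d-%H-%M-%S-%f'))  # 2023-01-19-13-37-42-123456 -> valid on Windows
```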
getmoto__moto-1462 | Add opsworks app mocks
Add mocks for the OpsWorks `create_app` and `describe_apps` calls. This is part of #1477
| [
{
"content": "from __future__ import unicode_literals\nimport logging\n# logging.getLogger('boto').setLevel(logging.CRITICAL)\n\n__title__ = 'moto'\n__version__ = '1.2.0',\n\nfrom .acm import mock_acm # flake8: noqa\nfrom .apigateway import mock_apigateway, mock_apigateway_deprecated # flake8: noqa\nfrom .autoscaling import mock_autoscaling, mock_autoscaling_deprecated # flake8: noqa\nfrom .awslambda import mock_lambda, mock_lambda_deprecated # flake8: noqa\nfrom .cloudformation import mock_cloudformation, mock_cloudformation_deprecated # flake8: noqa\nfrom .cloudwatch import mock_cloudwatch, mock_cloudwatch_deprecated # flake8: noqa\nfrom .datapipeline import mock_datapipeline, mock_datapipeline_deprecated # flake8: noqa\nfrom .dynamodb import mock_dynamodb, mock_dynamodb_deprecated # flake8: noqa\nfrom .dynamodb2 import mock_dynamodb2, mock_dynamodb2_deprecated # flake8: noqa\nfrom .ec2 import mock_ec2, mock_ec2_deprecated # flake8: noqa\nfrom .ecr import mock_ecr, mock_ecr_deprecated # flake8: noqa\nfrom .ecs import mock_ecs, mock_ecs_deprecated # flake8: noqa\nfrom .elb import mock_elb, mock_elb_deprecated # flake8: noqa\nfrom .elbv2 import mock_elbv2 # flake8: noqa\nfrom .emr import mock_emr, mock_emr_deprecated # flake8: noqa\nfrom .events import mock_events # flake8: noqa\nfrom .glacier import mock_glacier, mock_glacier_deprecated # flake8: noqa\nfrom .iam import mock_iam, mock_iam_deprecated # flake8: noqa\nfrom .kinesis import mock_kinesis, mock_kinesis_deprecated # flake8: noqa\nfrom .kms import mock_kms, mock_kms_deprecated # flake8: noqa\nfrom .opsworks import mock_opsworks, mock_opsworks_deprecated # flake8: noqa\nfrom .polly import mock_polly # flake8: noqa\nfrom .rds import mock_rds, mock_rds_deprecated # flake8: noqa\nfrom .rds2 import mock_rds2, mock_rds2_deprecated # flake8: noqa\nfrom .redshift import mock_redshift, mock_redshift_deprecated # flake8: noqa\nfrom .s3 import mock_s3, mock_s3_deprecated # flake8: noqa\nfrom .ses import mock_ses, mock_ses_deprecated # flake8: noqa\nfrom .sns import mock_sns, mock_sns_deprecated # flake8: noqa\nfrom .sqs import mock_sqs, mock_sqs_deprecated # flake8: noqa\nfrom .sts import mock_sts, mock_sts_deprecated # flake8: noqa\nfrom .ssm import mock_ssm # flake8: noqa\nfrom .route53 import mock_route53, mock_route53_deprecated # flake8: noqa\nfrom .swf import mock_swf, mock_swf_deprecated # flake8: noqa\nfrom .xray import mock_xray, mock_xray_client, XRaySegment # flake8: noqa\nfrom .logs import mock_logs, mock_logs_deprecated # flake8: noqa\nfrom .batch import mock_batch # flake8: noqa\nfrom .resourcegroupstaggingapi import mock_resourcegroupstaggingapi # flake8: noqa\nfrom .iot import mock_iot # flake8: noqa\nfrom .iotdata import mock_iotdata # flake8: noqa\n\n\ntry:\n # Need to monkey-patch botocore requests back to underlying urllib3 classes\n from botocore.awsrequest import HTTPSConnectionPool, HTTPConnectionPool, HTTPConnection, VerifiedHTTPSConnection\nexcept ImportError:\n pass\nelse:\n HTTPSConnectionPool.ConnectionCls = VerifiedHTTPSConnection\n HTTPConnectionPool.ConnectionCls = HTTPConnection\n",
"path": "moto/__init__.py"
}
] | [
{
"content": "from __future__ import unicode_literals\nimport logging\n# logging.getLogger('boto').setLevel(logging.CRITICAL)\n\n__title__ = 'moto'\n__version__ = '1.2.0'\n\nfrom .acm import mock_acm # flake8: noqa\nfrom .apigateway import mock_apigateway, mock_apigateway_deprecated # flake8: noqa\nfrom .autoscaling import mock_autoscaling, mock_autoscaling_deprecated # flake8: noqa\nfrom .awslambda import mock_lambda, mock_lambda_deprecated # flake8: noqa\nfrom .cloudformation import mock_cloudformation, mock_cloudformation_deprecated # flake8: noqa\nfrom .cloudwatch import mock_cloudwatch, mock_cloudwatch_deprecated # flake8: noqa\nfrom .datapipeline import mock_datapipeline, mock_datapipeline_deprecated # flake8: noqa\nfrom .dynamodb import mock_dynamodb, mock_dynamodb_deprecated # flake8: noqa\nfrom .dynamodb2 import mock_dynamodb2, mock_dynamodb2_deprecated # flake8: noqa\nfrom .ec2 import mock_ec2, mock_ec2_deprecated # flake8: noqa\nfrom .ecr import mock_ecr, mock_ecr_deprecated # flake8: noqa\nfrom .ecs import mock_ecs, mock_ecs_deprecated # flake8: noqa\nfrom .elb import mock_elb, mock_elb_deprecated # flake8: noqa\nfrom .elbv2 import mock_elbv2 # flake8: noqa\nfrom .emr import mock_emr, mock_emr_deprecated # flake8: noqa\nfrom .events import mock_events # flake8: noqa\nfrom .glacier import mock_glacier, mock_glacier_deprecated # flake8: noqa\nfrom .iam import mock_iam, mock_iam_deprecated # flake8: noqa\nfrom .kinesis import mock_kinesis, mock_kinesis_deprecated # flake8: noqa\nfrom .kms import mock_kms, mock_kms_deprecated # flake8: noqa\nfrom .opsworks import mock_opsworks, mock_opsworks_deprecated # flake8: noqa\nfrom .polly import mock_polly # flake8: noqa\nfrom .rds import mock_rds, mock_rds_deprecated # flake8: noqa\nfrom .rds2 import mock_rds2, mock_rds2_deprecated # flake8: noqa\nfrom .redshift import mock_redshift, mock_redshift_deprecated # flake8: noqa\nfrom .s3 import mock_s3, mock_s3_deprecated # flake8: noqa\nfrom .ses import mock_ses, mock_ses_deprecated # flake8: noqa\nfrom .sns import mock_sns, mock_sns_deprecated # flake8: noqa\nfrom .sqs import mock_sqs, mock_sqs_deprecated # flake8: noqa\nfrom .sts import mock_sts, mock_sts_deprecated # flake8: noqa\nfrom .ssm import mock_ssm # flake8: noqa\nfrom .route53 import mock_route53, mock_route53_deprecated # flake8: noqa\nfrom .swf import mock_swf, mock_swf_deprecated # flake8: noqa\nfrom .xray import mock_xray, mock_xray_client, XRaySegment # flake8: noqa\nfrom .logs import mock_logs, mock_logs_deprecated # flake8: noqa\nfrom .batch import mock_batch # flake8: noqa\nfrom .resourcegroupstaggingapi import mock_resourcegroupstaggingapi # flake8: noqa\nfrom .iot import mock_iot # flake8: noqa\nfrom .iotdata import mock_iotdata # flake8: noqa\n\n\ntry:\n # Need to monkey-patch botocore requests back to underlying urllib3 classes\n from botocore.awsrequest import HTTPSConnectionPool, HTTPConnectionPool, HTTPConnection, VerifiedHTTPSConnection\nexcept ImportError:\n pass\nelse:\n HTTPSConnectionPool.ConnectionCls = VerifiedHTTPSConnection\n HTTPConnectionPool.ConnectionCls = HTTPConnection\n",
"path": "moto/__init__.py"
}
] | diff --git a/moto/__init__.py b/moto/__init__.py
index 9d292a3e1847..c38212b42f4c 100644
--- a/moto/__init__.py
+++ b/moto/__init__.py
@@ -3,7 +3,7 @@
# logging.getLogger('boto').setLevel(logging.CRITICAL)
__title__ = 'moto'
-__version__ = '1.2.0',
+__version__ = '1.2.0'
from .acm import mock_acm # flake8: noqa
from .apigateway import mock_apigateway, mock_apigateway_deprecated # flake8: noqa
|
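The request in this record is for OpsWorks `create_app` and `describe_apps` mocks; the sketch below shows the kind of boto3-driven test that behaviour implies. `create_stack`, `create_app` and `describe_apps` are real boto3 OpsWorks operations, the ARN and name values are placeholders, and the assertion assumes the mock mirrors the real API, which is exactly what the issue asks moto to provide.

```python
import boto3
from moto import mock_opsworks


@mock_opsworks
def test_create_and_describe_apps():
    client = boto3.client("opsworks", region_name="us-east-1")
    # An app must belong to a stack, so create one first; the ARNs are dummies.
    stack_id = client.create_stack(
        Name="test_stack",
        Region="us-east-1",
        ServiceRoleArn="service_arn",
        DefaultInstanceProfileArn="profile_arn",
    )["StackId"]
    app_id = client.create_app(StackId=stack_id, Type="other", Name="TestApp")["AppId"]
    apps = client.describe_apps(StackId=stack_id)["Apps"]
    assert apps[0]["AppId"] == app_id


test_create_and_describe_apps()
```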
ManimCommunity__manim-1335 | Add import statements to examples in documentation
See title.
The examples in the documentation should also include `from manim import *` at the very least; ideally we could provide best-practice examples where we don't do a *-import, but rather import classes/functions separately.
This can of course be an iterative process: start with adding `from manim import *` first, and become more specific later.
| [
{
"content": "r\"\"\"\nA directive for including Manim videos in a Sphinx document\n===========================================================\n\nWhen rendering the HTML documentation, the ``.. manim::`` directive\nimplemented here allows to include rendered videos.\n\nIts basic usage that allows processing **inline content**\nlooks as follows::\n\n .. manim:: MyScene\n\n class MyScene(Scene):\n def construct(self):\n ...\n\nIt is required to pass the name of the class representing the\nscene to be rendered to the directive.\n\nAs a second application, the directive can also be used to\nrender scenes that are defined within doctests, for example::\n\n .. manim:: DirectiveDoctestExample\n :ref_classes: Dot\n\n >>> dot = Dot(color=RED)\n >>> dot.color\n <Color #fc6255>\n >>> class DirectiveDoctestExample(Scene):\n ... def construct(self):\n ... self.play(Create(dot))\n\n\nOptions\n-------\n\nOptions can be passed as follows::\n\n .. manim:: <Class name>\n :<option name>: <value>\n\nThe following configuration options are supported by the\ndirective:\n\n hide_source\n If this flag is present without argument,\n the source code is not displayed above the rendered video.\n\n quality : {'low', 'medium', 'high', 'fourk'}\n Controls render quality of the video, in analogy to\n the corresponding command line flags.\n\n save_as_gif\n If this flag is present without argument,\n the scene is rendered as a gif.\n\n save_last_frame\n If this flag is present without argument,\n an image representing the last frame of the scene will\n be rendered and displayed, instead of a video.\n\n ref_classes\n A list of classes, separated by spaces, that is\n rendered in a reference block after the source code.\n\n ref_functions\n A list of functions, separated by spaces,\n that is rendered in a reference block after the source code.\n\n ref_methods\n A list of methods, separated by spaces,\n that is rendered in a reference block after the source code.\n\n\"\"\"\nimport os\nimport shutil\nfrom os.path import relpath\nfrom pathlib import Path\nfrom typing import List\n\nimport jinja2\nfrom docutils import nodes\nfrom docutils.parsers.rst import Directive, directives\nfrom docutils.statemachine import StringList\n\nfrom manim import QUALITIES\n\nclassnamedict = {}\n\n\nclass skip_manim_node(nodes.Admonition, nodes.Element):\n pass\n\n\ndef visit(self, node, name=\"\"):\n self.visit_admonition(node, name)\n\n\ndef depart(self, node):\n self.depart_admonition(node)\n\n\ndef process_name_list(option_input: str, reference_type: str) -> List[str]:\n r\"\"\"Reformats a string of space separated class names\n as a list of strings containing valid Sphinx references.\n\n Tests\n -----\n\n ::\n\n >>> process_name_list(\"Tex TexTemplate\", \"class\")\n [\":class:`~.Tex`\", \":class:`~.TexTemplate`\"]\n >>> process_name_list(\"Scene.play Mobject.rotate\", \"func\")\n [\":func:`~.Scene.play`\", \":func:`~.Mobject.rotate`\"]\n \"\"\"\n return [f\":{reference_type}:`~.{name}`\" for name in option_input.split()]\n\n\nclass ManimDirective(Directive):\n r\"\"\"The manim directive, rendering videos while building\n the documentation.\n\n See the module docstring for documentation.\n \"\"\"\n has_content = True\n required_arguments = 1\n optional_arguments = 0\n option_spec = {\n \"hide_source\": bool,\n \"quality\": lambda arg: directives.choice(\n arg, (\"low\", \"medium\", \"high\", \"fourk\")\n ),\n \"save_as_gif\": bool,\n \"save_last_frame\": bool,\n \"ref_modules\": lambda arg: process_name_list(arg, \"mod\"),\n \"ref_classes\": 
lambda arg: process_name_list(arg, \"class\"),\n \"ref_functions\": lambda arg: process_name_list(arg, \"func\"),\n \"ref_methods\": lambda arg: process_name_list(arg, \"meth\"),\n }\n final_argument_whitespace = True\n\n def run(self):\n if \"skip-manim\" in self.state.document.settings.env.app.builder.tags.tags:\n node = skip_manim_node()\n self.state.nested_parse(\n StringList(self.content[0]), self.content_offset, node\n )\n return [node]\n\n from manim import config\n\n global classnamedict\n\n clsname = self.arguments[0]\n if clsname not in classnamedict:\n classnamedict[clsname] = 1\n else:\n classnamedict[clsname] += 1\n\n hide_source = \"hide_source\" in self.options\n save_as_gif = \"save_as_gif\" in self.options\n save_last_frame = \"save_last_frame\" in self.options\n assert not (save_as_gif and save_last_frame)\n\n ref_content = (\n self.options.get(\"ref_modules\", [])\n + self.options.get(\"ref_classes\", [])\n + self.options.get(\"ref_functions\", [])\n + self.options.get(\"ref_methods\", [])\n )\n if ref_content:\n ref_block = \"References: \" + \" \".join(ref_content)\n\n else:\n ref_block = \"\"\n\n if \"quality\" in self.options:\n quality = f'{self.options[\"quality\"]}_quality'\n else:\n quality = \"example_quality\"\n frame_rate = QUALITIES[quality][\"frame_rate\"]\n pixel_height = QUALITIES[quality][\"pixel_height\"]\n pixel_width = QUALITIES[quality][\"pixel_width\"]\n qualitydir = f\"{pixel_height}p{frame_rate}\"\n\n state_machine = self.state_machine\n document = state_machine.document\n\n source_file_name = document.attributes[\"source\"]\n source_rel_name = relpath(source_file_name, setup.confdir)\n source_rel_dir = os.path.dirname(source_rel_name)\n while source_rel_dir.startswith(os.path.sep):\n source_rel_dir = source_rel_dir[1:]\n\n dest_dir = os.path.abspath(\n os.path.join(setup.app.builder.outdir, source_rel_dir)\n )\n if not os.path.exists(dest_dir):\n os.makedirs(dest_dir)\n\n source_block = [\n \".. code-block:: python\",\n \"\",\n *[\" \" + line for line in self.content],\n ]\n source_block = \"\\n\".join(source_block)\n\n config.media_dir = Path(setup.confdir) / \"media\"\n config.images_dir = \"{media_dir}/images\"\n config.video_dir = \"{media_dir}/videos/{quality}\"\n output_file = f\"{clsname}-{classnamedict[clsname]}\"\n config.assets_dir = Path(\"_static\")\n\n config_code = [\n f'config[\"frame_rate\"] = {frame_rate}',\n f'config[\"pixel_height\"] = {pixel_height}',\n f'config[\"pixel_width\"] = {pixel_width}',\n f'config[\"save_last_frame\"] = {save_last_frame}',\n f'config[\"save_as_gif\"] = {save_as_gif}',\n f'config[\"write_to_movie\"] = {not save_last_frame}',\n f'config[\"output_file\"] = r\"{output_file}\"',\n ]\n\n user_code = self.content\n if user_code[0].startswith(\">>> \"): # check whether block comes from doctest\n user_code = [\n line[4:] for line in user_code if line.startswith((\">>> \", \"... 
\"))\n ]\n\n code = [\n \"from manim import *\",\n *config_code,\n *user_code,\n f\"{clsname}().render()\",\n ]\n exec(\"\\n\".join(code), globals())\n\n # copy video file to output directory\n if not (save_as_gif or save_last_frame):\n filename = f\"{output_file}.mp4\"\n filesrc = config.get_dir(\"video_dir\") / filename\n destfile = os.path.join(dest_dir, filename)\n shutil.copyfile(filesrc, destfile)\n elif save_as_gif:\n filename = f\"{output_file}.gif\"\n filesrc = config.get_dir(\"video_dir\") / filename\n elif save_last_frame:\n filename = f\"{output_file}.png\"\n filesrc = config.get_dir(\"images_dir\") / filename\n else:\n raise ValueError(\"Invalid combination of render flags received.\")\n\n rendered_template = jinja2.Template(TEMPLATE).render(\n clsname=clsname,\n clsname_lowercase=clsname.lower(),\n hide_source=hide_source,\n filesrc_rel=os.path.relpath(filesrc, setup.confdir),\n output_file=output_file,\n save_last_frame=save_last_frame,\n save_as_gif=save_as_gif,\n source_block=source_block,\n ref_block=ref_block,\n )\n state_machine.insert_input(\n rendered_template.split(\"\\n\"), source=document.attributes[\"source\"]\n )\n\n return []\n\n\ndef setup(app):\n import manim\n\n app.add_node(skip_manim_node, html=(visit, depart))\n\n setup.app = app\n setup.config = app.config\n setup.confdir = app.confdir\n\n app.add_directive(\"manim\", ManimDirective)\n\n metadata = {\"parallel_read_safe\": False, \"parallel_write_safe\": True}\n return metadata\n\n\nTEMPLATE = r\"\"\"\n{% if not hide_source %}\n.. raw:: html\n\n <div id=\"{{ clsname_lowercase }}\" class=\"admonition admonition-manim-example\">\n <p class=\"admonition-title\">Example: {{ clsname }} <a class=\"headerlink\" href=\"#{{ clsname_lowercase }}\">¶</a></p>\n\n{% endif %}\n\n{% if not (save_as_gif or save_last_frame) %}\n.. raw:: html\n\n <video class=\"manim-video\" controls loop autoplay src=\"./{{ output_file }}.mp4\"></video>\n\n{% elif save_as_gif %}\n.. image:: /{{ filesrc_rel }}\n :align: center\n\n{% elif save_last_frame %}\n.. image:: /{{ filesrc_rel }}\n :align: center\n\n{% endif %}\n{% if not hide_source %}\n{{ source_block }}\n\n{{ ref_block }}\n\n{% endif %}\n\n.. raw:: html\n\n </div>\n\"\"\"\n",
"path": "docs/source/manim_directive.py"
}
] | [
{
"content": "r\"\"\"\nA directive for including Manim videos in a Sphinx document\n===========================================================\n\nWhen rendering the HTML documentation, the ``.. manim::`` directive\nimplemented here allows to include rendered videos.\n\nIts basic usage that allows processing **inline content**\nlooks as follows::\n\n .. manim:: MyScene\n\n class MyScene(Scene):\n def construct(self):\n ...\n\nIt is required to pass the name of the class representing the\nscene to be rendered to the directive.\n\nAs a second application, the directive can also be used to\nrender scenes that are defined within doctests, for example::\n\n .. manim:: DirectiveDoctestExample\n :ref_classes: Dot\n\n >>> dot = Dot(color=RED)\n >>> dot.color\n <Color #fc6255>\n >>> class DirectiveDoctestExample(Scene):\n ... def construct(self):\n ... self.play(Create(dot))\n\n\nOptions\n-------\n\nOptions can be passed as follows::\n\n .. manim:: <Class name>\n :<option name>: <value>\n\nThe following configuration options are supported by the\ndirective:\n\n hide_source\n If this flag is present without argument,\n the source code is not displayed above the rendered video.\n\n quality : {'low', 'medium', 'high', 'fourk'}\n Controls render quality of the video, in analogy to\n the corresponding command line flags.\n\n save_as_gif\n If this flag is present without argument,\n the scene is rendered as a gif.\n\n save_last_frame\n If this flag is present without argument,\n an image representing the last frame of the scene will\n be rendered and displayed, instead of a video.\n\n ref_classes\n A list of classes, separated by spaces, that is\n rendered in a reference block after the source code.\n\n ref_functions\n A list of functions, separated by spaces,\n that is rendered in a reference block after the source code.\n\n ref_methods\n A list of methods, separated by spaces,\n that is rendered in a reference block after the source code.\n\n\"\"\"\nimport os\nimport shutil\nfrom os.path import relpath\nfrom pathlib import Path\nfrom typing import List\n\nimport jinja2\nfrom docutils import nodes\nfrom docutils.parsers.rst import Directive, directives\nfrom docutils.statemachine import StringList\n\nfrom manim import QUALITIES\n\nclassnamedict = {}\n\n\nclass skip_manim_node(nodes.Admonition, nodes.Element):\n pass\n\n\ndef visit(self, node, name=\"\"):\n self.visit_admonition(node, name)\n\n\ndef depart(self, node):\n self.depart_admonition(node)\n\n\ndef process_name_list(option_input: str, reference_type: str) -> List[str]:\n r\"\"\"Reformats a string of space separated class names\n as a list of strings containing valid Sphinx references.\n\n Tests\n -----\n\n ::\n\n >>> process_name_list(\"Tex TexTemplate\", \"class\")\n [\":class:`~.Tex`\", \":class:`~.TexTemplate`\"]\n >>> process_name_list(\"Scene.play Mobject.rotate\", \"func\")\n [\":func:`~.Scene.play`\", \":func:`~.Mobject.rotate`\"]\n \"\"\"\n return [f\":{reference_type}:`~.{name}`\" for name in option_input.split()]\n\n\nclass ManimDirective(Directive):\n r\"\"\"The manim directive, rendering videos while building\n the documentation.\n\n See the module docstring for documentation.\n \"\"\"\n has_content = True\n required_arguments = 1\n optional_arguments = 0\n option_spec = {\n \"hide_source\": bool,\n \"quality\": lambda arg: directives.choice(\n arg, (\"low\", \"medium\", \"high\", \"fourk\")\n ),\n \"save_as_gif\": bool,\n \"save_last_frame\": bool,\n \"ref_modules\": lambda arg: process_name_list(arg, \"mod\"),\n \"ref_classes\": 
lambda arg: process_name_list(arg, \"class\"),\n \"ref_functions\": lambda arg: process_name_list(arg, \"func\"),\n \"ref_methods\": lambda arg: process_name_list(arg, \"meth\"),\n }\n final_argument_whitespace = True\n\n def run(self):\n if \"skip-manim\" in self.state.document.settings.env.app.builder.tags.tags:\n node = skip_manim_node()\n self.state.nested_parse(\n StringList(self.content[0]), self.content_offset, node\n )\n return [node]\n\n from manim import config\n\n global classnamedict\n\n clsname = self.arguments[0]\n if clsname not in classnamedict:\n classnamedict[clsname] = 1\n else:\n classnamedict[clsname] += 1\n\n hide_source = \"hide_source\" in self.options\n save_as_gif = \"save_as_gif\" in self.options\n save_last_frame = \"save_last_frame\" in self.options\n assert not (save_as_gif and save_last_frame)\n\n ref_content = (\n self.options.get(\"ref_modules\", [])\n + self.options.get(\"ref_classes\", [])\n + self.options.get(\"ref_functions\", [])\n + self.options.get(\"ref_methods\", [])\n )\n if ref_content:\n ref_block = \"References: \" + \" \".join(ref_content)\n\n else:\n ref_block = \"\"\n\n if \"quality\" in self.options:\n quality = f'{self.options[\"quality\"]}_quality'\n else:\n quality = \"example_quality\"\n frame_rate = QUALITIES[quality][\"frame_rate\"]\n pixel_height = QUALITIES[quality][\"pixel_height\"]\n pixel_width = QUALITIES[quality][\"pixel_width\"]\n qualitydir = f\"{pixel_height}p{frame_rate}\"\n\n state_machine = self.state_machine\n document = state_machine.document\n\n source_file_name = document.attributes[\"source\"]\n source_rel_name = relpath(source_file_name, setup.confdir)\n source_rel_dir = os.path.dirname(source_rel_name)\n while source_rel_dir.startswith(os.path.sep):\n source_rel_dir = source_rel_dir[1:]\n\n dest_dir = os.path.abspath(\n os.path.join(setup.app.builder.outdir, source_rel_dir)\n )\n if not os.path.exists(dest_dir):\n os.makedirs(dest_dir)\n\n source_block = [\n \".. code-block:: python\",\n \"\",\n \" from manim import *\\n\",\n *[\" \" + line for line in self.content],\n ]\n source_block = \"\\n\".join(source_block)\n\n config.media_dir = Path(setup.confdir) / \"media\"\n config.images_dir = \"{media_dir}/images\"\n config.video_dir = \"{media_dir}/videos/{quality}\"\n output_file = f\"{clsname}-{classnamedict[clsname]}\"\n config.assets_dir = Path(\"_static\")\n\n config_code = [\n f'config[\"frame_rate\"] = {frame_rate}',\n f'config[\"pixel_height\"] = {pixel_height}',\n f'config[\"pixel_width\"] = {pixel_width}',\n f'config[\"save_last_frame\"] = {save_last_frame}',\n f'config[\"save_as_gif\"] = {save_as_gif}',\n f'config[\"write_to_movie\"] = {not save_last_frame}',\n f'config[\"output_file\"] = r\"{output_file}\"',\n ]\n\n user_code = self.content\n if user_code[0].startswith(\">>> \"): # check whether block comes from doctest\n user_code = [\n line[4:] for line in user_code if line.startswith((\">>> \", \"... 
\"))\n ]\n\n code = [\n \"from manim import *\",\n *config_code,\n *user_code,\n f\"{clsname}().render()\",\n ]\n exec(\"\\n\".join(code), globals())\n\n # copy video file to output directory\n if not (save_as_gif or save_last_frame):\n filename = f\"{output_file}.mp4\"\n filesrc = config.get_dir(\"video_dir\") / filename\n destfile = os.path.join(dest_dir, filename)\n shutil.copyfile(filesrc, destfile)\n elif save_as_gif:\n filename = f\"{output_file}.gif\"\n filesrc = config.get_dir(\"video_dir\") / filename\n elif save_last_frame:\n filename = f\"{output_file}.png\"\n filesrc = config.get_dir(\"images_dir\") / filename\n else:\n raise ValueError(\"Invalid combination of render flags received.\")\n\n rendered_template = jinja2.Template(TEMPLATE).render(\n clsname=clsname,\n clsname_lowercase=clsname.lower(),\n hide_source=hide_source,\n filesrc_rel=os.path.relpath(filesrc, setup.confdir),\n output_file=output_file,\n save_last_frame=save_last_frame,\n save_as_gif=save_as_gif,\n source_block=source_block,\n ref_block=ref_block,\n )\n state_machine.insert_input(\n rendered_template.split(\"\\n\"), source=document.attributes[\"source\"]\n )\n\n return []\n\n\ndef setup(app):\n import manim\n\n app.add_node(skip_manim_node, html=(visit, depart))\n\n setup.app = app\n setup.config = app.config\n setup.confdir = app.confdir\n\n app.add_directive(\"manim\", ManimDirective)\n\n metadata = {\"parallel_read_safe\": False, \"parallel_write_safe\": True}\n return metadata\n\n\nTEMPLATE = r\"\"\"\n{% if not hide_source %}\n.. raw:: html\n\n <div id=\"{{ clsname_lowercase }}\" class=\"admonition admonition-manim-example\">\n <p class=\"admonition-title\">Example: {{ clsname }} <a class=\"headerlink\" href=\"#{{ clsname_lowercase }}\">¶</a></p>\n\n{% endif %}\n\n{% if not (save_as_gif or save_last_frame) %}\n.. raw:: html\n\n <video class=\"manim-video\" controls loop autoplay src=\"./{{ output_file }}.mp4\"></video>\n\n{% elif save_as_gif %}\n.. image:: /{{ filesrc_rel }}\n :align: center\n\n{% elif save_last_frame %}\n.. image:: /{{ filesrc_rel }}\n :align: center\n\n{% endif %}\n{% if not hide_source %}\n{{ source_block }}\n\n{{ ref_block }}\n\n{% endif %}\n\n.. raw:: html\n\n </div>\n\"\"\"\n",
"path": "docs/source/manim_directive.py"
}
] | diff --git a/docs/source/manim_directive.py b/docs/source/manim_directive.py
index 4f152a928a..d049b6c1db 100644
--- a/docs/source/manim_directive.py
+++ b/docs/source/manim_directive.py
@@ -202,6 +202,7 @@ def run(self):
source_block = [
".. code-block:: python",
"",
+ " from manim import *\n",
*[" " + line for line in self.content],
]
source_block = "\n".join(source_block)
|
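The one-line diff above prepends the import to the reST block that the directive emits for each rendered example. A small sketch of the resulting `source_block`, with an illustrative scene body; the exact indentation comes from the diff, not from this sketch:

```python
content = [
    "class MyScene(Scene):",
    "    def construct(self):",
    "        ...",
]
source_block = [
    ".. code-block:: python",
    "",
    "    from manim import *\n",          # the line added by the PR
    *["    " + line for line in content],
]
print("\n".join(source_block))
```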
wagtail__wagtail-9923 | Search on listing views doesn't work unless the `?q=` param exists in the URL
### Issue Summary
Possible regression in https://github.com/wagtail/wagtail/pull/9768
`URLSearchParams.get()` returns `null` if the param doesn't exist, so the following code:
https://github.com/wagtail/wagtail/blob/a3f10acae17c892d843c419495e4204adb3ed991/client/src/entrypoints/admin/core.js#L270-L276
will crash during `currentQuery.trim()` when searching on the listing views (snippets, images, etc.) if the `?q=` param doesn't exist in the URL.
Might be a good time to add `required=False` in here as well:
https://github.com/wagtail/wagtail/blob/a3f10acae17c892d843c419495e4204adb3ed991/wagtail/admin/forms/search.py#L12
to remove this silly error when `q` is an empty string:
<img width="473" alt="image" src="https://user-images.githubusercontent.com/6379424/213499685-ce37c064-2635-434f-952f-e85fae4ab9af.png">
### Steps to Reproduce
1. Spin up bakerydemo
2. Open the images listing
3. Try to search
| [
{
"content": "from django import forms\nfrom django.utils.translation import gettext as _\nfrom django.utils.translation import gettext_lazy\n\n\nclass SearchForm(forms.Form):\n def __init__(self, *args, **kwargs):\n placeholder = kwargs.pop(\"placeholder\", _(\"Search\"))\n super().__init__(*args, **kwargs)\n self.fields[\"q\"].widget.attrs = {\"placeholder\": placeholder}\n\n q = forms.CharField(label=gettext_lazy(\"Search term\"), widget=forms.TextInput())\n",
"path": "wagtail/admin/forms/search.py"
}
] | [
{
"content": "from django import forms\nfrom django.utils.translation import gettext as _\nfrom django.utils.translation import gettext_lazy\n\n\nclass SearchForm(forms.Form):\n def __init__(self, *args, **kwargs):\n placeholder = kwargs.pop(\"placeholder\", _(\"Search\"))\n super().__init__(*args, **kwargs)\n self.fields[\"q\"].widget.attrs = {\"placeholder\": placeholder}\n\n q = forms.CharField(\n label=gettext_lazy(\"Search term\"),\n widget=forms.TextInput(),\n required=False,\n )\n",
"path": "wagtail/admin/forms/search.py"
}
] | diff --git a/client/src/entrypoints/admin/core.js b/client/src/entrypoints/admin/core.js
index c9d86855cf13..9f0561a40d0d 100644
--- a/client/src/entrypoints/admin/core.js
+++ b/client/src/entrypoints/admin/core.js
@@ -270,7 +270,7 @@ $(() => {
const search = function () {
const newQuery = $input.val();
const searchParams = new URLSearchParams(window.location.search);
- const currentQuery = searchParams.get('q');
+ const currentQuery = searchParams.get('q') || '';
// only do the query if it has changed for trimmed queries
// for example - " " === "" and "firstword " ==== "firstword"
if (currentQuery.trim() !== newQuery.trim()) {
diff --git a/wagtail/admin/forms/search.py b/wagtail/admin/forms/search.py
index 4d6f85956aea..fb2303c302a2 100644
--- a/wagtail/admin/forms/search.py
+++ b/wagtail/admin/forms/search.py
@@ -9,4 +9,8 @@ def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.fields["q"].widget.attrs = {"placeholder": placeholder}
- q = forms.CharField(label=gettext_lazy("Search term"), widget=forms.TextInput())
+ q = forms.CharField(
+ label=gettext_lazy("Search term"),
+ widget=forms.TextInput(),
+ required=False,
+ )
diff --git a/wagtail/images/tests/test_admin_views.py b/wagtail/images/tests/test_admin_views.py
index c0928aed66fa..700021e3cbcb 100644
--- a/wagtail/images/tests/test_admin_views.py
+++ b/wagtail/images/tests/test_admin_views.py
@@ -44,6 +44,16 @@ def test_simple(self):
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, "wagtailimages/images/index.html")
self.assertContains(response, "Add an image")
+ # The search box should not raise an error
+ self.assertNotContains(response, "This field is required.")
+
+ def test_empty_q(self):
+ response = self.get({"q": ""})
+ self.assertEqual(response.status_code, 200)
+ self.assertEqual(response.context["query_string"], "")
+ self.assertContains(response, "Add an image")
+ # The search box should not raise an error
+ self.assertNotContains(response, "This field is required.")
def test_search(self):
response = self.get({"q": "Hello"})
diff --git a/wagtail/snippets/tests/test_snippets.py b/wagtail/snippets/tests/test_snippets.py
index c17427a7bc1a..2988dce0154f 100644
--- a/wagtail/snippets/tests/test_snippets.py
+++ b/wagtail/snippets/tests/test_snippets.py
@@ -479,6 +479,23 @@ def test_simple(self):
self.assertIn(self.snippet_b, items)
self.assertIn(self.snippet_c, items)
+ # The search box should not raise an error
+ self.assertNotContains(response, "This field is required.")
+
+ def test_empty_q(self):
+ response = self.get({"q": ""})
+ self.assertEqual(response.status_code, 200)
+ self.assertTemplateUsed(response, "wagtailsnippets/snippets/type_index.html")
+
+ # All snippets should be in items
+ items = list(response.context["page_obj"].object_list)
+ self.assertIn(self.snippet_a, items)
+ self.assertIn(self.snippet_b, items)
+ self.assertIn(self.snippet_c, items)
+
+ # The search box should not raise an error
+ self.assertNotContains(response, "This field is required.")
+
def test_is_searchable(self):
self.assertTrue(self.get().context["is_searchable"])
|
sanic-org__sanic-1397 | Logger does not work.
**Describe the bug**
The logger does not work at the current master commit (https://github.com/huge-success/sanic/commit/7d79a86d4dc48de11cd34e8ba12e41f3a9f9ff18).
**Code snippet**
```python
from sanic import Sanic
from sanic.log import logger
from sanic.response import text
app = Sanic()
@app.listener('before_server_start')
async def setup(app, loop):
    logger.info('INFO')

@app.get('/')
async def test(request):
    return text('hello world')

if __name__ == '__main__':
    app.run()
```
There is no log output now.
**Expected behavior**
At the `0.8.3` release, it would log messages like:
```
[2018-11-05 17:34:47 +0800] [12112] [INFO] Goin' Fast @ http://127.0.0.1:8000
[2018-11-05 17:34:47 +0800] [12112] [INFO] INFO
[2018-11-05 17:34:47 +0800] [12112] [INFO] Starting worker [12112]
```
**Environment (please complete the following information):**
- OS: Ubuntu 18.04
- Version: https://github.com/huge-success/sanic/commit/7d79a86d4dc48de11cd34e8ba12e41f3a9f9ff18
**Additional context**
It seems that `getLogger()` does not get the correct logger at [line 56](https://github.com/huge-success/sanic/blob/master/sanic/log.py#L56) in `log.py`: the code asks for a logger named `sanic.root`, but no logger with that name is defined in the config. Renaming the `root` logger at [line 9](https://github.com/huge-success/sanic/blob/master/sanic/log.py#L9) to `sanic.root` should fix this bug.
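A minimal runnable sketch of that rename, trimmed to the relevant part of the logging config (handler and formatter details are omitted here):

```python
import logging
import logging.config

# Trimmed illustration: the dictConfig logger key must match the name passed
# to logging.getLogger("sanic.root"), otherwise the configured level/handlers
# are never applied to it.
LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "loggers": {
        "sanic.root": {"level": "INFO", "handlers": ["console"]},  # was "root"
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
}

logging.config.dictConfig(LOGGING_CONFIG)
logging.getLogger("sanic.root").info("INFO")  # now emitted as expected
```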
| [
{
"content": "import logging\nimport sys\n\n\nLOGGING_CONFIG_DEFAULTS = dict(\n version=1,\n disable_existing_loggers=False,\n loggers={\n \"root\": {\"level\": \"INFO\", \"handlers\": [\"console\"]},\n \"sanic.error\": {\n \"level\": \"INFO\",\n \"handlers\": [\"error_console\"],\n \"propagate\": True,\n \"qualname\": \"sanic.error\",\n },\n \"sanic.access\": {\n \"level\": \"INFO\",\n \"handlers\": [\"access_console\"],\n \"propagate\": True,\n \"qualname\": \"sanic.access\",\n },\n },\n handlers={\n \"console\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"generic\",\n \"stream\": sys.stdout,\n },\n \"error_console\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"generic\",\n \"stream\": sys.stderr,\n },\n \"access_console\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"access\",\n \"stream\": sys.stdout,\n },\n },\n formatters={\n \"generic\": {\n \"format\": \"%(asctime)s [%(process)d] [%(levelname)s] %(message)s\",\n \"datefmt\": \"[%Y-%m-%d %H:%M:%S %z]\",\n \"class\": \"logging.Formatter\",\n },\n \"access\": {\n \"format\": \"%(asctime)s - (%(name)s)[%(levelname)s][%(host)s]: \"\n + \"%(request)s %(message)s %(status)d %(byte)d\",\n \"datefmt\": \"[%Y-%m-%d %H:%M:%S %z]\",\n \"class\": \"logging.Formatter\",\n },\n },\n)\n\n\nlogger = logging.getLogger(\"sanic.root\")\nerror_logger = logging.getLogger(\"sanic.error\")\naccess_logger = logging.getLogger(\"sanic.access\")\n",
"path": "sanic/log.py"
}
] | [
{
"content": "import logging\nimport sys\n\n\nLOGGING_CONFIG_DEFAULTS = dict(\n version=1,\n disable_existing_loggers=False,\n loggers={\n \"sanic.root\": {\"level\": \"INFO\", \"handlers\": [\"console\"]},\n \"sanic.error\": {\n \"level\": \"INFO\",\n \"handlers\": [\"error_console\"],\n \"propagate\": True,\n \"qualname\": \"sanic.error\",\n },\n \"sanic.access\": {\n \"level\": \"INFO\",\n \"handlers\": [\"access_console\"],\n \"propagate\": True,\n \"qualname\": \"sanic.access\",\n },\n },\n handlers={\n \"console\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"generic\",\n \"stream\": sys.stdout,\n },\n \"error_console\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"generic\",\n \"stream\": sys.stderr,\n },\n \"access_console\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"access\",\n \"stream\": sys.stdout,\n },\n },\n formatters={\n \"generic\": {\n \"format\": \"%(asctime)s [%(process)d] [%(levelname)s] %(message)s\",\n \"datefmt\": \"[%Y-%m-%d %H:%M:%S %z]\",\n \"class\": \"logging.Formatter\",\n },\n \"access\": {\n \"format\": \"%(asctime)s - (%(name)s)[%(levelname)s][%(host)s]: \"\n + \"%(request)s %(message)s %(status)d %(byte)d\",\n \"datefmt\": \"[%Y-%m-%d %H:%M:%S %z]\",\n \"class\": \"logging.Formatter\",\n },\n },\n)\n\n\nlogger = logging.getLogger(\"sanic.root\")\nerror_logger = logging.getLogger(\"sanic.error\")\naccess_logger = logging.getLogger(\"sanic.access\")\n",
"path": "sanic/log.py"
}
] | diff --git a/sanic/log.py b/sanic/log.py
index cb8ca52475..08fc835d14 100644
--- a/sanic/log.py
+++ b/sanic/log.py
@@ -6,7 +6,7 @@
version=1,
disable_existing_loggers=False,
loggers={
- "root": {"level": "INFO", "handlers": ["console"]},
+ "sanic.root": {"level": "INFO", "handlers": ["console"]},
"sanic.error": {
"level": "INFO",
"handlers": ["error_console"],
diff --git a/tests/test_logging.py b/tests/test_logging.py
index 3af3f122db..95c55de0ca 100644
--- a/tests/test_logging.py
+++ b/tests/test_logging.py
@@ -49,7 +49,7 @@ def test_logging_defaults():
reset_logging()
app = Sanic("test_logging")
- for fmt in [h.formatter for h in logging.getLogger('root').handlers]:
+ for fmt in [h.formatter for h in logging.getLogger('sanic.root').handlers]:
assert fmt._fmt == LOGGING_CONFIG_DEFAULTS['formatters']['generic']['format']
for fmt in [h.formatter for h in logging.getLogger('sanic.error').handlers]:
@@ -68,7 +68,7 @@ def test_logging_pass_customer_logconfig():
app = Sanic("test_logging", log_config=modified_config)
- for fmt in [h.formatter for h in logging.getLogger('root').handlers]:
+ for fmt in [h.formatter for h in logging.getLogger('sanic.root').handlers]:
assert fmt._fmt == modified_config['formatters']['generic']['format']
for fmt in [h.formatter for h in logging.getLogger('sanic.error').handlers]:
@@ -82,7 +82,7 @@ def test_logging_pass_customer_logconfig():
def test_log_connection_lost(app, debug, monkeypatch):
""" Should not log Connection lost exception on non debug """
stream = StringIO()
- root = logging.getLogger('root')
+ root = logging.getLogger('sanic.root')
root.addHandler(logging.StreamHandler(stream))
monkeypatch.setattr(sanic.server, 'logger', root)
@@ -102,3 +102,15 @@ async def conn_lost(request):
assert 'Connection lost before response written @' in log
else:
assert 'Connection lost before response written @' not in log
+
+
+def test_logging_modified_root_logger_config():
+ reset_logging()
+
+ modified_config = LOGGING_CONFIG_DEFAULTS
+ modified_config['loggers']['sanic.root']['level'] = 'DEBUG'
+
+ app = Sanic("test_logging", log_config=modified_config)
+
+ assert logging.getLogger('sanic.root').getEffectiveLevel() == logging.DEBUG
+
|
dotkom__onlineweb4-810 | Active feedbacks bug
Minor bug where feedbacks where everyone has answered do not get set to inactive.
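A hedged sketch of the missing guard (names are illustrative, taken from the job code below; this is not the literal patch):

```python
# Illustrative helper: once nobody is left to remind, the feedback relation
# should be deactivated instead of staying active forever.
def deactivate_if_everyone_answered(feedback, not_responded, logger):
    if not not_responded:
        feedback.active = False
        feedback.save()
        logger.info("Everyone has answered; feedback set to inactive")
        return True
    return False
```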
| [
{
"content": "# -*- coding: utf-8 -*-\nimport datetime\nimport socket\nimport locale\nimport logging\n\nfrom django.utils import timezone\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.conf import settings\nfrom django.core.mail import EmailMessage\n\nfrom apps.events.models import Event, AttendanceEvent, Attendee\nfrom apps.feedback.models import FeedbackRelation\nfrom apps.marks.models import Mark, UserEntry\nfrom apps.mommy import Task, schedule\n\nclass FeedbackMail(Task):\n\n @staticmethod\n def run():\n logger = logging.getLogger(\"feedback\")\n logger.info(\"Feedback job started\")\n locale.setlocale(locale.LC_ALL, \"nb_NO.UTF-8\")\n active_feedbacks = FeedbackRelation.objects.filter(active=True)\n \n for feedback in active_feedbacks:\n message = FeedbackMail.generate_message(feedback, logger)\n\n if message.send:\n EmailMessage(message.subject, unicode(message), message.committee_mail, [], message.attended_mails).send()\n logger.info('Emails sent to: ' + str(message.attended_mails))\n\n if message.results_message:\n EmailMessage(\"Feedback resultat\", message.results_message,\"[email protected]\", [message.committee_mail]).send() \n logger.info('Results mail sent to :' + message.committee_mail)\n\n @staticmethod\n def generate_message(feedback, logger):\n logger.info('Processing: \"' + feedback.get_title() + '\"')\n today = timezone.now().date()\n yesterday = today + datetime.timedelta(days=-1)\n not_responded = FeedbackMail.get_users(feedback)\n logger.info('Not responded: ' + str(not_responded))\n message = Message()\n\n #return if everyone has answered\n if not not_responded:\n logger.info('Everyone has answered')\n return message\n\n \n message.attended_mails = FeedbackMail.get_user_mails(not_responded)\n\n message.committee_mail = FeedbackMail.get_committee_email(feedback)\n deadline = feedback.deadline.strftime(\"%d. 
%B\").encode(\"utf-8\")\n title = str(FeedbackMail.get_title(feedback)).encode(\"utf-8\")\n message.link = str(u\"\\n\\n\" + FeedbackMail.get_link(feedback)).encode(\"utf-8\")\n results_link = str(FeedbackMail.get_link(feedback) + \"results\").encode(\"utf-8\")\n \n start_date = feedback.get_start_date()\n deadline_diff = (feedback.deadline - today).days\n\n message.subject = u\"Feedback: %s\" % (title)\n message.intro = u\"Hei, vi ønsker tilbakemelding på \\\"%s\\\"\" % (title)\n message.mark = FeedbackMail.mark_message(feedback)\n message.contact = u\"\\n\\nEventuelle spørsmål sendes til %s \" % (message.committee_mail)\n message.start_date = FeedbackMail.start_date_message(start_date)\n\n if deadline_diff < 0: #Deadline passed\n feedback.active = False\n feedback.save()\n logger.info(\"Deadline passed feedback set to inactive\")\n\n if feedback.gives_mark:\n FeedbackMail.set_marks(title, not_responded) \n \n message.intro = u\"Fristen for å svare på \\\"%s\\\" har gått ut og du har fått en prikk.\" % (title)\n message.mark = \"\"\n message.start_date = \"\"\n message.link = \"\"\n message.send = True\n \n logger.info(\"Marks given to: \" + str(not_responded))\n\n elif deadline_diff < 1: #Last warning\n message.deadline = u\"\\n\\nI dag innen 23:59 er siste frist til å svare på skjemaet.\"\n \n message.results_message = u\"Hei, siste purremail på feedback skjema har blitt sendt til alle \" \\\n u\"gjenværende deltagere på \\\"%s\\\".\\nDere kan se feedback-resultatene på:\\n%s\\n\" % \\\n (title, results_link)\n message.send = True\n logger.info(\"Last warning message generated\")\n elif deadline_diff < 3 and feedback.gives_mark: # 3 days from the deadline\n message.deadline = u\"\\n\\nFristen for å svare på skjema er %s innen kl 23:59.\" % (deadline)\n message.send = True\n logger.info(\"Warning message generated\")\n elif FeedbackMail.send_first_notification(feedback): #Day after the event or feedback creation \n message.deadline = u\"\\n\\nFristen for å svare på skjema er %s innen kl 23:59.\" % (deadline)\n \n message.results_message = u\"Hei, nå har feedbackmail blitt sendt til alle \" \\\n u\"deltagere på \\\"%s\\\".\\nDere kan se feedback-resultatene på:\\n%s\\n\" % \\\n (title, results_link)\n message.send = True\n logger.info(\"First message generated\")\n else:\n logger.info(\"No message generated\")\n\n return message\n \n @staticmethod\n def send_first_notification(feedback):\n start_date = FeedbackMail.start_date(feedback)\n\n #The object that requires feedback doesnt have a start date\n if not start_date:\n yesterday = timezone.now().date() - datetime.timedelta(days=1)\n if feedback.created_date == yesterday.date():\n #Send the first notification the day after the feedback relation was created\n return True\n else:\n day_after_event = start_date + datetime.timedelta(1)\n if day_after_event == datetime.datetime.date(timezone.now()):\n #Send the first notification the day after the event\n return True\n return False\n\n @staticmethod\n def start_date(feedback):\n start_date = feedback.get_start_date()\n \n if start_date:\n return start_date.date()\n else:\n return False\n\n @staticmethod\n def start_date_message(start_date):\n #If the object(event) doesnt have start date it will send \n #the first notification the day after the feedbackrelation is made\n if start_date:\n start_date_string = start_date.strftime(\"%d. 
%B\").encode(\"utf-8\")\n message_start_date = u\"som du var med på den %s:\" % (start_date_string)\n else:\n message_start_date = \"\"\n \n return message_start_date \n\n @staticmethod\n def get_users(feedback):\n return feedback.get_slackers()\n\n @staticmethod\n def get_user_mails(not_responded):\n return [user.email for user in not_responded]\n\n @staticmethod\n def get_link(feedback):\n return str(settings.BASE_URL + feedback.get_absolute_url())\n\n @staticmethod\n def get_title(feedback):\n return feedback.get_title()\n\n @staticmethod\n def get_committee_email(feedback):\n return feedback.get_email()\n\n @staticmethod\n def mark_message(feedback):\n if feedback.gives_mark:\n return u\"\\nVær oppmerksom på at du får prikk dersom du ikke svarer \" \\\n u\"på disse spørsmålene innen fristen.\"\n else:\n return \"\"\n\n @staticmethod\n def set_marks(title, not_responded):\n mark = Mark()\n mark.title = u\"Manglende tilbakemelding på %s\" % (title)\n mark.category = 4 #Missed feedback\n mark.description = u\"Du har fått en prikk fordi du ikke har levert tilbakemelding.\"\n mark.save()\n \n for user in not_responded:\n user_entry = UserEntry()\n user_entry.user = user\n user_entry.mark = mark\n user_entry.save()\n \nclass Message():\n subject = \"\"\n intro = \"\"\n start_date = \"\"\n deadline = \"\"\n mark = \"\"\n contact = \"\"\n link = \"\"\n send = False\n end = u\"\\n\\nMvh\\nLinjeforeningen Online\"\n results_message = False\n\n committee_mail = \"\"\n attended_mails = False\n\n\n def __unicode__(self):\n message = \"%s %s %s %s %s %s %s\" % (\n self.intro, \n self.start_date, \n self.link, \n self.deadline, \n self.mark, \n self.contact, \n self.end)\n return message\n\nschedule.register(FeedbackMail, day_of_week='mon-sun', hour=8, minute=00)\n",
"path": "apps/feedback/mommy.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\nimport datetime\nimport socket\nimport locale\nimport logging\n\nfrom django.utils import timezone\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.conf import settings\nfrom django.core.mail import EmailMessage\n\nfrom apps.events.models import Event, AttendanceEvent, Attendee\nfrom apps.feedback.models import FeedbackRelation\nfrom apps.marks.models import Mark, UserEntry\nfrom apps.mommy import Task, schedule\n\nclass FeedbackMail(Task):\n\n @staticmethod\n def run():\n logger = logging.getLogger(\"feedback\")\n logger.info(\"Feedback job started\")\n locale.setlocale(locale.LC_ALL, \"nb_NO.UTF-8\")\n active_feedbacks = FeedbackRelation.objects.filter(active=True)\n \n for feedback in active_feedbacks:\n message = FeedbackMail.generate_message(feedback, logger)\n\n if message.send:\n EmailMessage(message.subject, unicode(message), message.committee_mail, [], message.attended_mails).send()\n logger.info('Emails sent to: ' + str(message.attended_mails))\n\n if message.results_message:\n EmailMessage(\"Feedback resultat\", message.results_message,\"[email protected]\", [message.committee_mail]).send() \n logger.info('Results mail sent to :' + message.committee_mail)\n\n @staticmethod\n def generate_message(feedback, logger):\n logger.info('Processing: \"' + feedback.get_title() + '\"')\n today = timezone.now().date()\n yesterday = today + datetime.timedelta(days=-1)\n not_responded = FeedbackMail.get_users(feedback)\n logger.info('Not responded: ' + str(not_responded))\n message = Message()\n\n #return if everyone has answered\n if not not_responded:\n feedback.active = False\n feedback.save()\n logger.info('Everyone has answered')\n logger.info('Feedback set to innactive')\n return message\n\n \n message.attended_mails = FeedbackMail.get_user_mails(not_responded)\n\n message.committee_mail = FeedbackMail.get_committee_email(feedback)\n deadline = feedback.deadline.strftime(\"%d. 
%B\").encode(\"utf-8\")\n title = str(FeedbackMail.get_title(feedback)).encode(\"utf-8\")\n message.link = str(u\"\\n\\n\" + FeedbackMail.get_link(feedback)).encode(\"utf-8\")\n results_link = str(FeedbackMail.get_link(feedback) + \"results\").encode(\"utf-8\")\n \n start_date = feedback.get_start_date()\n deadline_diff = (feedback.deadline - today).days\n\n message.subject = u\"Feedback: %s\" % (title)\n message.intro = u\"Hei, vi ønsker tilbakemelding på \\\"%s\\\"\" % (title)\n message.mark = FeedbackMail.mark_message(feedback)\n message.contact = u\"\\n\\nEventuelle spørsmål sendes til %s \" % (message.committee_mail)\n message.start_date = FeedbackMail.start_date_message(start_date)\n\n if deadline_diff < 0: #Deadline passed\n feedback.active = False\n feedback.save()\n logger.info(\"Deadline passed feedback set to inactive\")\n\n if feedback.gives_mark:\n FeedbackMail.set_marks(title, not_responded) \n \n message.intro = u\"Fristen for å svare på \\\"%s\\\" har gått ut og du har fått en prikk.\" % (title)\n message.mark = \"\"\n message.start_date = \"\"\n message.link = \"\"\n message.send = True\n \n logger.info(\"Marks given to: \" + str(not_responded))\n\n elif deadline_diff < 1: #Last warning\n message.deadline = u\"\\n\\nI dag innen 23:59 er siste frist til å svare på skjemaet.\"\n \n message.results_message = u\"Hei, siste purremail på feedback skjema har blitt sendt til alle \" \\\n u\"gjenværende deltagere på \\\"%s\\\".\\nDere kan se feedback-resultatene på:\\n%s\\n\" % \\\n (title, results_link)\n message.send = True\n logger.info(\"Last warning message generated\")\n elif deadline_diff < 3 and feedback.gives_mark: # 3 days from the deadline\n message.deadline = u\"\\n\\nFristen for å svare på skjema er %s innen kl 23:59.\" % (deadline)\n message.send = True\n logger.info(\"Warning message generated\")\n elif FeedbackMail.send_first_notification(feedback): #Day after the event or feedback creation \n message.deadline = u\"\\n\\nFristen for å svare på skjema er %s innen kl 23:59.\" % (deadline)\n \n message.results_message = u\"Hei, nå har feedbackmail blitt sendt til alle \" \\\n u\"deltagere på \\\"%s\\\".\\nDere kan se feedback-resultatene på:\\n%s\\n\" % \\\n (title, results_link)\n message.send = True\n logger.info(\"First message generated\")\n else:\n logger.info(\"No message generated\")\n\n return message\n \n @staticmethod\n def send_first_notification(feedback):\n start_date = FeedbackMail.start_date(feedback)\n\n #The object that requires feedback doesnt have a start date\n if not start_date:\n yesterday = timezone.now().date() - datetime.timedelta(days=1)\n if feedback.created_date == yesterday.date():\n #Send the first notification the day after the feedback relation was created\n return True\n else:\n day_after_event = start_date + datetime.timedelta(1)\n if day_after_event == datetime.datetime.date(timezone.now()):\n #Send the first notification the day after the event\n return True\n return False\n\n @staticmethod\n def start_date(feedback):\n start_date = feedback.get_start_date()\n \n if start_date:\n return start_date.date()\n else:\n return False\n\n @staticmethod\n def start_date_message(start_date):\n #If the object(event) doesnt have start date it will send \n #the first notification the day after the feedbackrelation is made\n if start_date:\n start_date_string = start_date.strftime(\"%d. 
%B\").encode(\"utf-8\")\n message_start_date = u\"som du var med på den %s:\" % (start_date_string)\n else:\n message_start_date = \"\"\n \n return message_start_date \n\n @staticmethod\n def get_users(feedback):\n return feedback.get_slackers()\n\n @staticmethod\n def get_user_mails(not_responded):\n return [user.email for user in not_responded]\n\n @staticmethod\n def get_link(feedback):\n return str(settings.BASE_URL + feedback.get_absolute_url())\n\n @staticmethod\n def get_title(feedback):\n return feedback.get_title()\n\n @staticmethod\n def get_committee_email(feedback):\n return feedback.get_email()\n\n @staticmethod\n def mark_message(feedback):\n if feedback.gives_mark:\n return u\"\\nVær oppmerksom på at du får prikk dersom du ikke svarer \" \\\n u\"på disse spørsmålene innen fristen.\"\n else:\n return \"\"\n\n @staticmethod\n def set_marks(title, not_responded):\n mark = Mark()\n mark.title = u\"Manglende tilbakemelding på %s\" % (title)\n mark.category = 4 #Missed feedback\n mark.description = u\"Du har fått en prikk fordi du ikke har levert tilbakemelding.\"\n mark.save()\n \n for user in not_responded:\n user_entry = UserEntry()\n user_entry.user = user\n user_entry.mark = mark\n user_entry.save()\n \nclass Message():\n subject = \"\"\n intro = \"\"\n start_date = \"\"\n deadline = \"\"\n mark = \"\"\n contact = \"\"\n link = \"\"\n send = False\n end = u\"\\n\\nMvh\\nLinjeforeningen Online\"\n results_message = False\n\n committee_mail = \"\"\n attended_mails = False\n\n\n def __unicode__(self):\n message = \"%s %s %s %s %s %s %s\" % (\n self.intro, \n self.start_date, \n self.link, \n self.deadline, \n self.mark, \n self.contact, \n self.end)\n return message\n\nschedule.register(FeedbackMail, day_of_week='mon-sun', hour=8, minute=00)\n",
"path": "apps/feedback/mommy.py"
}
] | diff --git a/apps/feedback/mommy.py b/apps/feedback/mommy.py
index 7295634f5..efd8ef73f 100644
--- a/apps/feedback/mommy.py
+++ b/apps/feedback/mommy.py
@@ -45,7 +45,10 @@ def generate_message(feedback, logger):
#return if everyone has answered
if not not_responded:
+ feedback.active = False
+ feedback.save()
logger.info('Everyone has answered')
+ logger.info('Feedback set to innactive')
return message
|
sktime__sktime-3653 | [DOC] sktime docs should link clearly to example notebooks
It seems that the sktime doc user journey does not lead clearly to the example notebooks when starting on the doc page.
This should be investigated and reworked.
Related issue: https://github.com/alan-turing-institute/sktime/issues/2127
| [
{
"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n\"\"\"Configuration file for the Sphinx documentation builder.\"\"\"\n\nimport os\nimport sys\nfrom importlib import import_module\n\nimport sktime\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nON_READTHEDOCS = os.environ.get(\"READTHEDOCS\") == \"True\"\nif not ON_READTHEDOCS:\n sys.path.insert(0, os.path.abspath(\"../..\"))\n\n# -- Project information -----------------------------------------------------\nproject = \"sktime\"\ncopyright = \"2019 - 2021 (BSD-3-Clause License)\"\nauthor = \"sktime developers\"\n\n# The full version, including alpha/beta/rc tags\nCURRENT_VERSION = f\"v{sktime.__version__}\"\n\n# If on readthedocs, and we're building the latest version, update tag to generate\n# correct links in notebooks\nif ON_READTHEDOCS:\n READTHEDOCS_VERSION = os.environ.get(\"READTHEDOCS_VERSION\")\n if READTHEDOCS_VERSION == \"latest\":\n CURRENT_VERSION = \"main\"\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosummary\",\n \"numpydoc\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.linkcode\", # link to GitHub source code via linkcode_resolve()\n \"nbsphinx\", # integrates example notebooks\n \"sphinx_gallery.load_style\",\n \"myst_parser\",\n \"sphinx_design\",\n \"sphinx_issues\",\n]\n\n# Recommended by sphinx_design when using the MyST Parser\nmyst_enable_extensions = [\"colon_fence\"]\n\n# Use bootstrap CSS from theme.\npanels_add_bootstrap_css = False\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\nsource_suffix = {\n \".rst\": \"restructuredtext\",\n \".md\": \"markdown\",\n}\n\n# The main toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\n \"_build\",\n \".ipynb_checkpoints\",\n \"Thumbs.db\",\n \".DS_Store\",\n]\n\nadd_module_names = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# see http://stackoverflow.com/q/12206334/562769\nnumpydoc_show_class_members = True\n# this is needed for some reason...\n# see https://github.com/numpy/numpydoc/issues/69\nnumpydoc_class_members_toctree = False\n\nnumpydoc_validation_checks = {\"all\"}\n\n# generate autosummary even if no references\nautosummary_generate = True\n\n# Members and inherited-members default to showing methods and attributes from a\n# class or those inherited.\n# Member-order orders the documentation in the order of how the members are defined in\n# the source code.\nautodoc_default_options = {\n \"members\": True,\n \"inherited-members\": True,\n \"member-order\": \"bysource\",\n}\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\nadd_function_parentheses = False\n\n# When building HTML using the sphinx.ext.mathjax (enabled by default),\n# Myst-Parser injects the tex2jax_ignore (MathJax v2) and mathjax_ignore (MathJax v3)\n# classes in to the top-level section of each MyST document, and adds some default\n# configuration. This ensures that MathJax processes only math, identified by the\n# dollarmath and amsmath extensions, or specified in math directives. We here silence\n# the corresponding warning that this override happens.\nsuppress_warnings = [\"myst.mathjax\"]\n\n# Link to GitHub repo for github_issues extension\nissues_github_path = \"sktime/sktime\"\n\n\ndef linkcode_resolve(domain, info):\n \"\"\"Return URL to source code corresponding.\n\n Parameters\n ----------\n domain : str\n info : dict\n\n Returns\n -------\n url : str\n \"\"\"\n\n def find_source():\n # try to find the file and line number, based on code from numpy:\n # https://github.com/numpy/numpy/blob/main/doc/source/conf.py#L286\n obj = sys.modules[info[\"module\"]]\n for part in info[\"fullname\"].split(\".\"):\n obj = getattr(obj, part)\n import inspect\n import os\n\n fn = inspect.getsourcefile(obj)\n fn = os.path.relpath(fn, start=os.path.dirname(sktime.__file__))\n source, lineno = inspect.getsourcelines(obj)\n return fn, lineno, lineno + len(source) - 1\n\n if domain != \"py\" or not info[\"module\"]:\n return None\n try:\n filename = \"sktime/%s#L%d-L%d\" % find_source()\n except Exception:\n filename = info[\"module\"].replace(\".\", \"/\") + \".py\"\n return \"https://github.com/sktime/sktime/blob/%s/%s\" % (\n CURRENT_VERSION,\n filename,\n )\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n\nhtml_theme = \"pydata_sphinx_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. 
For a list of options available for each theme, see the\n# documentation.\n\nhtml_theme_options = {\n \"icon_links\": [\n {\n \"name\": \"GitHub\",\n \"url\": \"https://github.com/sktime/sktime\",\n \"icon\": \"fab fa-github\",\n },\n {\n \"name\": \"Slack\",\n \"url\": \"https://join.slack.com/t/sktime-group/shared_invite/zt-1cghagwee-sqLJ~eHWGYgzWbqUX937ig\", # noqa: E501\n \"icon\": \"fab fa-slack\",\n },\n {\n \"name\": \"Discord\",\n \"url\": \"https://discord.com/invite/gqSab2K\",\n \"icon\": \"fab fa-discord\",\n },\n {\n \"name\": \"LinkedIn\",\n \"url\": \"https://www.linkedin.com/company/sktime/\",\n \"icon\": \"fab fa-linkedin\",\n },\n {\n \"name\": \"Twitter\",\n \"url\": \"https://twitter.com/sktime_toolbox\",\n \"icon\": \"fab fa-twitter\",\n },\n ],\n \"favicons\": [\n {\n \"rel\": \"icon\",\n \"sizes\": \"16x16\",\n \"href\": \"images/sktime-favicon.ico\",\n }\n ],\n \"show_prev_next\": False,\n \"use_edit_page_button\": False,\n \"navbar_start\": [\"navbar-logo\"],\n \"navbar_center\": [\"navbar-nav\"],\n \"navbar_end\": [\"navbar-icon-links\"],\n \"announcement\": \"<p><a href=https://docs.google.com/forms/d/e/1FAIpQLScQkrSZfNiZiQKPuBcFMtHAlL10RBZ3QSBo-I3klUHeL7Vg0A/viewform>Sign up</a> for the sktime Fall Dev days Nov 9 - 10 2022</p>\", # noqa: E501\n}\nhtml_logo = \"images/sktime-logo-text-horizontal.png\"\nhtml_context = {\n \"github_user\": \"sktime\",\n \"github_repo\": \"sktime\",\n \"github_version\": \"main\",\n \"doc_path\": \"docs/source/\",\n}\nhtml_favicon = \"images/sktime-favicon.ico\"\nhtml_sidebars = {\n \"**\": [\"search-field.html\", \"sidebar-nav-bs.html\", \"sidebar-ethical-ads.html\"]\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\nhtml_css_files = [\"css/custom.css\"]\nhtml_js_files = [\n \"js/dynamic_table.js\",\n]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\nhtml_show_sourcelink = False\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"sktimedoc\"\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n # 'preamble': '',\n # Latex figure (float) alignment\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, \"sktime.tex\", \"sktime Documentation\", \"sktime developers\", \"manual\"),\n]\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. 
List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"sktime\", \"sktime Documentation\", [author], 1)]\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"sktime\",\n \"sktime Documentation\",\n author,\n \"sktime\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\n\ndef _make_estimator_overview(app):\n \"\"\"Make estimator overview table.\"\"\"\n import pandas as pd\n\n from sktime.registry import all_estimators\n\n def _process_author_info(author_info):\n \"\"\"\n Process author information from source code files.\n\n Parameters\n ----------\n author_info : str\n Author information string from source code files.\n\n Returns\n -------\n author_info : str\n Preprocessed author information.\n\n Notes\n -----\n A list of author names is turned into a string.\n Multiple author names will be separated by a comma,\n with the final name always preceded by \"&\".\n \"\"\"\n if isinstance(author_info, list):\n if len(author_info) > 1:\n return \", \".join(author_info[:-1]) + \" & \" + author_info[-1]\n else:\n return author_info[0]\n else:\n return author_info\n\n def _does_not_start_with_underscore(input_string):\n return not input_string.startswith(\"_\")\n\n # creates dataframe as df\n COLNAMES = [\"Class Name\", \"Estimator Type\", \"Authors\"]\n\n df = pd.DataFrame([], columns=COLNAMES)\n\n for modname, modclass in all_estimators():\n algorithm_type = \"::\".join(str(modclass).split(\".\")[1:-2])\n try:\n author_info = _process_author_info(modclass.__author__)\n except AttributeError:\n try:\n author_info = _process_author_info(\n import_module(modclass.__module__).__author__\n )\n except AttributeError:\n author_info = \"no author info\"\n\n # includes part of class string\n modpath = str(modclass)[8:-2]\n path_parts = modpath.split(\".\")\n # joins strings excluding starting with '_'\n clean_path = \".\".join(list(filter(_does_not_start_with_underscore, path_parts)))\n # adds html link reference\n modname = str(\n '<a href=\"https://www.sktime.org/en/latest/api_reference'\n + \"/auto_generated/\"\n + clean_path\n + '.html\">'\n + modname\n + \"</a>\"\n )\n\n record = pd.DataFrame([modname, algorithm_type, author_info], index=COLNAMES).T\n df = pd.concat([df, record], ignore_index=True)\n with open(\"estimator_overview_table.md\", \"w\") as file:\n df.to_markdown(file, index=False)\n\n\ndef setup(app):\n \"\"\"Set up sphinx builder.\n\n Parameters\n ----------\n app : Sphinx application object\n \"\"\"\n\n def adds(pth):\n print(\"Adding stylesheet: %s\" % pth) # noqa: T201, T001\n app.add_css_file(pth)\n\n adds(\"fields.css\") # for parameters, etc.\n\n app.connect(\"builder-inited\", _make_estimator_overview)\n\n\n# -- Extension configuration -------------------------------------------------\n\n# -- Options for nbsphinx extension ---------------------------------------\nnbsphinx_execute = \"never\" # always # whether to run notebooks\nnbsphinx_allow_errors = False # False\nnbsphinx_timeout = 600 # seconds, set to -1 to disable timeout\n\n# add Binder launch buttom at the top\ncurrent_file = \"{{ env.doc2path( env.docname, base=None) }}\"\n\n# make sure Binder points to latest stable release, not main\nbinder_url = 
f\"https://mybinder.org/v2/gh/sktime/sktime/{CURRENT_VERSION}?filepath={current_file}\" # noqa\nnbsphinx_prolog = f\"\"\"\n.. |binder| image:: https://mybinder.org/badge_logo.svg\n.. _Binder: {binder_url}\n\n|Binder|_\n\"\"\"\n\n# add link to original notebook at the bottom\nnotebook_url = (\n f\"https://github.com/sktime/sktime/tree/{CURRENT_VERSION}/{current_file}\" # noqa\n)\nnbsphinx_epilog = f\"\"\"\n----\n\nGenerated using nbsphinx_. The Jupyter notebook can be found here_.\n\n.. _here: {notebook_url}\n.. _nbsphinx: https://nbsphinx.readthedocs.io/\n\"\"\"\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/{.major}\".format(sys.version_info), None),\n \"numpy\": (\"https://docs.scipy.org/doc/numpy/\", None),\n \"scipy\": (\"https://docs.scipy.org/doc/scipy/reference\", None),\n \"matplotlib\": (\"https://matplotlib.org/\", None),\n \"pandas\": (\"https://pandas.pydata.org/pandas-docs/stable/\", None),\n \"joblib\": (\"https://joblib.readthedocs.io/en/latest/\", None),\n \"scikit-learn\": (\"https://scikit-learn.org/stable/\", None),\n \"statsmodels\": (\"https://www.statsmodels.org/stable/\", None),\n}\n\n# -- Options for _todo extension ----------------------------------------------\ntodo_include_todos = False\n",
"path": "docs/source/conf.py"
}
] | [
{
"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n\"\"\"Configuration file for the Sphinx documentation builder.\"\"\"\n\nimport os\nimport sys\nfrom importlib import import_module\n\nimport sktime\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nON_READTHEDOCS = os.environ.get(\"READTHEDOCS\") == \"True\"\nif not ON_READTHEDOCS:\n sys.path.insert(0, os.path.abspath(\"../..\"))\n\n# -- Project information -----------------------------------------------------\nproject = \"sktime\"\ncopyright = \"2019 - 2021 (BSD-3-Clause License)\"\nauthor = \"sktime developers\"\n\n# The full version, including alpha/beta/rc tags\nCURRENT_VERSION = f\"v{sktime.__version__}\"\n\n# If on readthedocs, and we're building the latest version, update tag to generate\n# correct links in notebooks\nif ON_READTHEDOCS:\n READTHEDOCS_VERSION = os.environ.get(\"READTHEDOCS_VERSION\")\n if READTHEDOCS_VERSION == \"latest\":\n CURRENT_VERSION = \"main\"\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosummary\",\n \"numpydoc\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.linkcode\", # link to GitHub source code via linkcode_resolve()\n \"nbsphinx\", # integrates example notebooks\n \"sphinx_gallery.load_style\",\n \"myst_parser\",\n \"sphinx_design\",\n \"sphinx_issues\",\n]\n\n# Recommended by sphinx_design when using the MyST Parser\nmyst_enable_extensions = [\"colon_fence\"]\n\n# Notebook thumbnails\nnbsphinx_thumbnails = {\n \"examples/02_classification\": \"examples/img/tsc.png\",\n}\n\n# Use bootstrap CSS from theme.\npanels_add_bootstrap_css = False\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\nsource_suffix = {\n \".rst\": \"restructuredtext\",\n \".md\": \"markdown\",\n}\n\n# The main toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\n \"_build\",\n \".ipynb_checkpoints\",\n \"Thumbs.db\",\n \".DS_Store\",\n]\n\nadd_module_names = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# see http://stackoverflow.com/q/12206334/562769\nnumpydoc_show_class_members = True\n# this is needed for some reason...\n# see https://github.com/numpy/numpydoc/issues/69\nnumpydoc_class_members_toctree = False\n\nnumpydoc_validation_checks = {\"all\"}\n\n# generate autosummary even if no references\nautosummary_generate = True\n\n# Members and inherited-members default to showing methods and attributes from a\n# class or those inherited.\n# Member-order orders the documentation in the order of how the members are defined in\n# the source code.\nautodoc_default_options = {\n \"members\": True,\n \"inherited-members\": True,\n \"member-order\": \"bysource\",\n}\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\nadd_function_parentheses = False\n\n# When building HTML using the sphinx.ext.mathjax (enabled by default),\n# Myst-Parser injects the tex2jax_ignore (MathJax v2) and mathjax_ignore (MathJax v3)\n# classes in to the top-level section of each MyST document, and adds some default\n# configuration. This ensures that MathJax processes only math, identified by the\n# dollarmath and amsmath extensions, or specified in math directives. We here silence\n# the corresponding warning that this override happens.\nsuppress_warnings = [\"myst.mathjax\"]\n\n# Link to GitHub repo for github_issues extension\nissues_github_path = \"sktime/sktime\"\n\n\ndef linkcode_resolve(domain, info):\n \"\"\"Return URL to source code corresponding.\n\n Parameters\n ----------\n domain : str\n info : dict\n\n Returns\n -------\n url : str\n \"\"\"\n\n def find_source():\n # try to find the file and line number, based on code from numpy:\n # https://github.com/numpy/numpy/blob/main/doc/source/conf.py#L286\n obj = sys.modules[info[\"module\"]]\n for part in info[\"fullname\"].split(\".\"):\n obj = getattr(obj, part)\n import inspect\n import os\n\n fn = inspect.getsourcefile(obj)\n fn = os.path.relpath(fn, start=os.path.dirname(sktime.__file__))\n source, lineno = inspect.getsourcelines(obj)\n return fn, lineno, lineno + len(source) - 1\n\n if domain != \"py\" or not info[\"module\"]:\n return None\n try:\n filename = \"sktime/%s#L%d-L%d\" % find_source()\n except Exception:\n filename = info[\"module\"].replace(\".\", \"/\") + \".py\"\n return \"https://github.com/sktime/sktime/blob/%s/%s\" % (\n CURRENT_VERSION,\n filename,\n )\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n\nhtml_theme = \"pydata_sphinx_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. 
For a list of options available for each theme, see the\n# documentation.\n\nhtml_theme_options = {\n \"icon_links\": [\n {\n \"name\": \"GitHub\",\n \"url\": \"https://github.com/sktime/sktime\",\n \"icon\": \"fab fa-github\",\n },\n {\n \"name\": \"Slack\",\n \"url\": \"https://join.slack.com/t/sktime-group/shared_invite/zt-1cghagwee-sqLJ~eHWGYgzWbqUX937ig\", # noqa: E501\n \"icon\": \"fab fa-slack\",\n },\n {\n \"name\": \"Discord\",\n \"url\": \"https://discord.com/invite/gqSab2K\",\n \"icon\": \"fab fa-discord\",\n },\n {\n \"name\": \"LinkedIn\",\n \"url\": \"https://www.linkedin.com/company/sktime/\",\n \"icon\": \"fab fa-linkedin\",\n },\n {\n \"name\": \"Twitter\",\n \"url\": \"https://twitter.com/sktime_toolbox\",\n \"icon\": \"fab fa-twitter\",\n },\n ],\n \"favicons\": [\n {\n \"rel\": \"icon\",\n \"sizes\": \"16x16\",\n \"href\": \"images/sktime-favicon.ico\",\n }\n ],\n \"show_prev_next\": False,\n \"use_edit_page_button\": False,\n \"navbar_start\": [\"navbar-logo\"],\n \"navbar_center\": [\"navbar-nav\"],\n \"navbar_end\": [\"navbar-icon-links\"],\n \"announcement\": \"<p><a href=https://docs.google.com/forms/d/e/1FAIpQLScQkrSZfNiZiQKPuBcFMtHAlL10RBZ3QSBo-I3klUHeL7Vg0A/viewform>Sign up</a> for the sktime Fall Dev days Nov 9 - 10 2022</p>\", # noqa: E501\n}\nhtml_logo = \"images/sktime-logo-text-horizontal.png\"\nhtml_context = {\n \"github_user\": \"sktime\",\n \"github_repo\": \"sktime\",\n \"github_version\": \"main\",\n \"doc_path\": \"docs/source/\",\n}\nhtml_favicon = \"images/sktime-favicon.ico\"\nhtml_sidebars = {\n \"**\": [\"search-field.html\", \"sidebar-nav-bs.html\", \"sidebar-ethical-ads.html\"]\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\nhtml_css_files = [\"css/custom.css\"]\nhtml_js_files = [\n \"js/dynamic_table.js\",\n]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\nhtml_show_sourcelink = False\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"sktimedoc\"\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n # 'preamble': '',\n # Latex figure (float) alignment\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, \"sktime.tex\", \"sktime Documentation\", \"sktime developers\", \"manual\"),\n]\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. 
List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"sktime\", \"sktime Documentation\", [author], 1)]\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"sktime\",\n \"sktime Documentation\",\n author,\n \"sktime\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\n\ndef _make_estimator_overview(app):\n \"\"\"Make estimator overview table.\"\"\"\n import pandas as pd\n\n from sktime.registry import all_estimators\n\n def _process_author_info(author_info):\n \"\"\"\n Process author information from source code files.\n\n Parameters\n ----------\n author_info : str\n Author information string from source code files.\n\n Returns\n -------\n author_info : str\n Preprocessed author information.\n\n Notes\n -----\n A list of author names is turned into a string.\n Multiple author names will be separated by a comma,\n with the final name always preceded by \"&\".\n \"\"\"\n if isinstance(author_info, list):\n if len(author_info) > 1:\n return \", \".join(author_info[:-1]) + \" & \" + author_info[-1]\n else:\n return author_info[0]\n else:\n return author_info\n\n def _does_not_start_with_underscore(input_string):\n return not input_string.startswith(\"_\")\n\n # creates dataframe as df\n COLNAMES = [\"Class Name\", \"Estimator Type\", \"Authors\"]\n\n df = pd.DataFrame([], columns=COLNAMES)\n\n for modname, modclass in all_estimators():\n algorithm_type = \"::\".join(str(modclass).split(\".\")[1:-2])\n try:\n author_info = _process_author_info(modclass.__author__)\n except AttributeError:\n try:\n author_info = _process_author_info(\n import_module(modclass.__module__).__author__\n )\n except AttributeError:\n author_info = \"no author info\"\n\n # includes part of class string\n modpath = str(modclass)[8:-2]\n path_parts = modpath.split(\".\")\n # joins strings excluding starting with '_'\n clean_path = \".\".join(list(filter(_does_not_start_with_underscore, path_parts)))\n # adds html link reference\n modname = str(\n '<a href=\"https://www.sktime.org/en/latest/api_reference'\n + \"/auto_generated/\"\n + clean_path\n + '.html\">'\n + modname\n + \"</a>\"\n )\n\n record = pd.DataFrame([modname, algorithm_type, author_info], index=COLNAMES).T\n df = pd.concat([df, record], ignore_index=True)\n with open(\"estimator_overview_table.md\", \"w\") as file:\n df.to_markdown(file, index=False)\n\n\ndef setup(app):\n \"\"\"Set up sphinx builder.\n\n Parameters\n ----------\n app : Sphinx application object\n \"\"\"\n\n def adds(pth):\n print(\"Adding stylesheet: %s\" % pth) # noqa: T201, T001\n app.add_css_file(pth)\n\n adds(\"fields.css\") # for parameters, etc.\n\n app.connect(\"builder-inited\", _make_estimator_overview)\n\n\n# -- Extension configuration -------------------------------------------------\n\n# -- Options for nbsphinx extension ---------------------------------------\nnbsphinx_execute = \"never\" # always # whether to run notebooks\nnbsphinx_allow_errors = False # False\nnbsphinx_timeout = 600 # seconds, set to -1 to disable timeout\n\n# add Binder launch buttom at the top\ncurrent_file = \"{{ env.doc2path( env.docname, base=None) }}\"\n\n# make sure Binder points to latest stable release, not main\nbinder_url = 
f\"https://mybinder.org/v2/gh/sktime/sktime/{CURRENT_VERSION}?filepath={current_file}\" # noqa\nnbsphinx_prolog = f\"\"\"\n.. |binder| image:: https://mybinder.org/badge_logo.svg\n.. _Binder: {binder_url}\n\n|Binder|_\n\"\"\"\n\n# add link to original notebook at the bottom\nnotebook_url = (\n f\"https://github.com/sktime/sktime/tree/{CURRENT_VERSION}/{current_file}\" # noqa\n)\nnbsphinx_epilog = f\"\"\"\n----\n\nGenerated using nbsphinx_. The Jupyter notebook can be found here_.\n\n.. _here: {notebook_url}\n.. _nbsphinx: https://nbsphinx.readthedocs.io/\n\"\"\"\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/{.major}\".format(sys.version_info), None),\n \"numpy\": (\"https://docs.scipy.org/doc/numpy/\", None),\n \"scipy\": (\"https://docs.scipy.org/doc/scipy/reference\", None),\n \"matplotlib\": (\"https://matplotlib.org/\", None),\n \"pandas\": (\"https://pandas.pydata.org/pandas-docs/stable/\", None),\n \"joblib\": (\"https://joblib.readthedocs.io/en/latest/\", None),\n \"scikit-learn\": (\"https://scikit-learn.org/stable/\", None),\n \"statsmodels\": (\"https://www.statsmodels.org/stable/\", None),\n}\n\n# -- Options for _todo extension ----------------------------------------------\ntodo_include_todos = False\n",
"path": "docs/source/conf.py"
}
] | diff --git a/docs/source/conf.py b/docs/source/conf.py
index 411941bbbae..976e3c9e332 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -55,6 +55,11 @@
# Recommended by sphinx_design when using the MyST Parser
myst_enable_extensions = ["colon_fence"]
+# Notebook thumbnails
+nbsphinx_thumbnails = {
+ "examples/02_classification": "examples/img/tsc.png",
+}
+
# Use bootstrap CSS from theme.
panels_add_bootstrap_css = False
diff --git a/docs/source/examples.rst b/docs/source/examples.rst
new file mode 100644
index 00000000000..590de105a6b
--- /dev/null
+++ b/docs/source/examples.rst
@@ -0,0 +1,72 @@
+.. _examples:
+
+==========
+Examples
+==========
+
+Forecasting
+=============
+
+.. nbgallery::
+ :glob:
+
+ examples/01_forecasting.ipynb
+ examples/01a_forecasting_sklearn.ipynb
+ examples/01b_forecasting_proba.ipynb
+ examples/forecasting/*
+
+Classification
+=============
+
+.. nbgallery::
+ :glob:
+
+ examples/02_classification.ipynb
+ examples/classification/*
+
+Regression
+=============
+
+To come!
+
+Clustering
+=============
+
+.. nbgallery::
+ :glob:
+
+ examples/clustering/*
+
+Annotation
+=============
+
+.. nbgallery::
+ :glob:
+
+ examples/annotation/*
+
+Transformation
+=============
+
+.. nbgallery::
+ :glob:
+
+ examples/transformation/*
+
+Data
+=============
+
+.. nbgallery::
+ :glob:
+
+ examples/AA_datatypes_and_datasets.ipynb
+ examples/data/*
+
+Other
+=============
+
+.. nbgallery::
+ :glob:
+
+ examples/04_benchmarking.ipynb
+ examples/other/*
diff --git a/docs/source/examples/annotation/segmentation_with_clasp.ipynb b/docs/source/examples/annotation/segmentation_with_clasp.ipynb
new file mode 120000
index 00000000000..3e51aa3e367
--- /dev/null
+++ b/docs/source/examples/annotation/segmentation_with_clasp.ipynb
@@ -0,0 +1 @@
+../../../../examples/segmentation_with_clasp.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/classification/02a_classification_multivariate_cnn.ipynb b/docs/source/examples/classification/02a_classification_multivariate_cnn.ipynb
new file mode 120000
index 00000000000..7bbb59cfa55
--- /dev/null
+++ b/docs/source/examples/classification/02a_classification_multivariate_cnn.ipynb
@@ -0,0 +1 @@
+../../../../examples/02a_classification_multivariate_cnn.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/classification/channel_selection.ipynb b/docs/source/examples/classification/channel_selection.ipynb
new file mode 120000
index 00000000000..23e3736743f
--- /dev/null
+++ b/docs/source/examples/classification/channel_selection.ipynb
@@ -0,0 +1 @@
+../../../../examples/channel_selection.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/classification/dictionary_based_classification.ipynb b/docs/source/examples/classification/dictionary_based_classification.ipynb
new file mode 120000
index 00000000000..66ff49c6cb6
--- /dev/null
+++ b/docs/source/examples/classification/dictionary_based_classification.ipynb
@@ -0,0 +1 @@
+../../../../examples/dictionary_based_classification.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/classification/early_classification.ipynb b/docs/source/examples/classification/early_classification.ipynb
new file mode 120000
index 00000000000..4cdfa912314
--- /dev/null
+++ b/docs/source/examples/classification/early_classification.ipynb
@@ -0,0 +1 @@
+../../../../examples/early_classification.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/classification/interval_based_classification.ipynb b/docs/source/examples/classification/interval_based_classification.ipynb
new file mode 120000
index 00000000000..745612878dd
--- /dev/null
+++ b/docs/source/examples/classification/interval_based_classification.ipynb
@@ -0,0 +1 @@
+../../../../examples/interval_based_classification.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/clustering/partition_based_clustering.ipynb b/docs/source/examples/clustering/partition_based_clustering.ipynb
new file mode 120000
index 00000000000..cafd15ab95e
--- /dev/null
+++ b/docs/source/examples/clustering/partition_based_clustering.ipynb
@@ -0,0 +1 @@
+../../../../examples/partition_based_clustering.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/data/loading_data.ipynb b/docs/source/examples/data/loading_data.ipynb
new file mode 120000
index 00000000000..f91d9400996
--- /dev/null
+++ b/docs/source/examples/data/loading_data.ipynb
@@ -0,0 +1 @@
+../../../../examples/loading_data.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/forecasting/01c_forecasting_hierarchical_global.ipynb b/docs/source/examples/forecasting/01c_forecasting_hierarchical_global.ipynb
new file mode 120000
index 00000000000..392fa386adb
--- /dev/null
+++ b/docs/source/examples/forecasting/01c_forecasting_hierarchical_global.ipynb
@@ -0,0 +1 @@
+../../../../examples/01c_forecasting_hierarchical_global.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/forecasting/window_splitters.ipynb b/docs/source/examples/forecasting/window_splitters.ipynb
new file mode 120000
index 00000000000..58c6aeb9659
--- /dev/null
+++ b/docs/source/examples/forecasting/window_splitters.ipynb
@@ -0,0 +1 @@
+../../../../examples/window_splitters.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/other/distances.ipynb b/docs/source/examples/other/distances.ipynb
new file mode 120000
index 00000000000..1a7021840a7
--- /dev/null
+++ b/docs/source/examples/other/distances.ipynb
@@ -0,0 +1 @@
+../../../../examples/distances.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/transformation/catch22.ipynb b/docs/source/examples/transformation/catch22.ipynb
new file mode 120000
index 00000000000..aa0770a2ccb
--- /dev/null
+++ b/docs/source/examples/transformation/catch22.ipynb
@@ -0,0 +1 @@
+../../../../examples/catch22.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/transformation/feature_extraction_with_tsfresh.ipynb b/docs/source/examples/transformation/feature_extraction_with_tsfresh.ipynb
new file mode 120000
index 00000000000..1703cc6e2d0
--- /dev/null
+++ b/docs/source/examples/transformation/feature_extraction_with_tsfresh.ipynb
@@ -0,0 +1 @@
+../../../../examples/feature_extraction_with_tsfresh.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/transformation/hidalgo_segmentation.ipynb b/docs/source/examples/transformation/hidalgo_segmentation.ipynb
new file mode 120000
index 00000000000..2d0eccb8dc7
--- /dev/null
+++ b/docs/source/examples/transformation/hidalgo_segmentation.ipynb
@@ -0,0 +1 @@
+../../../../examples/hidalgo_segmentation.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/transformation/interpolation.ipynb b/docs/source/examples/transformation/interpolation.ipynb
new file mode 120000
index 00000000000..9c906f2f888
--- /dev/null
+++ b/docs/source/examples/transformation/interpolation.ipynb
@@ -0,0 +1 @@
+../../../../examples/interpolation.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/transformation/minirocket.ipynb b/docs/source/examples/transformation/minirocket.ipynb
new file mode 120000
index 00000000000..84adf880da2
--- /dev/null
+++ b/docs/source/examples/transformation/minirocket.ipynb
@@ -0,0 +1 @@
+../../../../examples/minirocket.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/transformation/plateau_finder.ipynb b/docs/source/examples/transformation/plateau_finder.ipynb
new file mode 120000
index 00000000000..10226731968
--- /dev/null
+++ b/docs/source/examples/transformation/plateau_finder.ipynb
@@ -0,0 +1 @@
+../../../../examples/plateau_finder.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/transformation/rocket.ipynb b/docs/source/examples/transformation/rocket.ipynb
new file mode 120000
index 00000000000..d1189babf03
--- /dev/null
+++ b/docs/source/examples/transformation/rocket.ipynb
@@ -0,0 +1 @@
+../../../../examples/rocket.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/transformation/signature_method.ipynb b/docs/source/examples/transformation/signature_method.ipynb
new file mode 120000
index 00000000000..fc23d016352
--- /dev/null
+++ b/docs/source/examples/transformation/signature_method.ipynb
@@ -0,0 +1 @@
+../../../../examples/signature_method.ipynb
\ No newline at end of file
diff --git a/docs/source/examples/transformation/theta_transform.ipynb b/docs/source/examples/transformation/theta_transform.ipynb
new file mode 120000
index 00000000000..b845a69a588
--- /dev/null
+++ b/docs/source/examples/transformation/theta_transform.ipynb
@@ -0,0 +1 @@
+../../../../examples/theta_transform.ipynb
\ No newline at end of file
diff --git a/docs/source/index.rst b/docs/source/index.rst
index a3df10c16e6..2280163c645 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -59,6 +59,7 @@ Contents
get_involved
developers
about
+ examples
.. grid:: 1 2 2 2
:gutter: 3
diff --git a/examples/02_classification.ipynb b/examples/02_classification.ipynb
index 750f645c535..ff815c36fc3 100644
--- a/examples/02_classification.ipynb
+++ b/examples/02_classification.ipynb
@@ -2,54 +2,47 @@
"cells": [
{
"cell_type": "markdown",
- "metadata": {
- "collapsed": true,
- "pycharm": {
- "name": "#%% md\n"
- }
- },
"source": [
"# Time Series Classification with sktime\n",
"\n",
- "The Time Series Classification (TSC) task involves training a model from a collection of time series (real valued, ordered, data) in order to predict a target variable. For example, we might want to build a model that can predict whether a patient is sick based on the ECG reading, or predict whether a device will fail based on some sensor reading. This notebook gives a quick guide to get you started."
- ]
+ "The Time Series Classification (TSC) task involves training a model from a collection of time series (real valued, ordered, data) in order to predict a target variable. For example, we might want to build a model that can predict whether a patient is sick based on the ECG reading, or predict whether a device will fail based on some sensor reading. This notebook gives a quick guide to get you started.\n",
+ "\n",
+ "<img src=\"./img/tsc.png\" width=\"600\" alt=\"time series classification\"> [<i>​</i>](./img/tsc.png)"
+ ],
+ "metadata": {
+ "collapsed": false
+ }
},
{
"cell_type": "markdown",
- "metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%% md\n"
- }
- },
"source": [
"## Datasets and Problem Types\n",
"\n",
"The UCR/UEA [TSC dataset archive](https://timeseriesclassification.com/) contains a large number of example TSC problems that have been used thousands of times in the literature to assess TSC algorithms. These datasets have certain characteristics that influence what data structure we use to store them in memory.\n",
"\n",
- "Most datasets in the archive contain time series all the same length. For example, the [ArrowHead dataset](https://timeseriesclassification.com/description.php?Dataset=ArrowHead) dataset consists of outlines of the images of arrow heads. The classification of projectile points is an important topic in anthropology.\n",
+ "Most datasets in the archive contain time series all the same length. For example, the [ArrowHead dataset](https://timeseriesclassification.com/description.php?Dataset=ArrowHead) consists of outlines of the images of arrow heads. The classification of projectile points is an important topic in anthropology.\n",
"\n",
- "<img src=\"./img/arrow-heads.png\" width=\"400\" alt=\"arrow heads\">\n",
+ "<img src=\"./img/arrow-heads.png\" width=\"600\" alt=\"arrow heads\">\n",
"\n",
"The shapes of the projectile points are converted into a sequence using the angle-based method as described in this [blog post](https://izbicki.me/blog/converting-images-into-time-series-for-data-mining.html) about converting images into time series for data mining.\n",
"\n",
- "<img src=\"./img/from-shapes-to-time-series.png\" width=\"400\" alt=\"from shapes to time series\">\n",
+ "<img src=\"./img/from-shapes-to-time-series.png\" width=\"600\" alt=\"from shapes to time series\">\n",
"\n",
"Each instance consists of a single time series (i.e. the problem is univariate) of equal length and a class label based on shape distinctions such as the presence and location of a notch in the arrow. The data set consists of 210 instances, by default split into 36 train and 175 test instances. We refer to the collection of time series as $X$ and to the collection of class labels as $y$.\n",
"\n",
"Below, we store the data in a 3D dimensional (instance, variable, time point) numpy array for $X$, and a one dimensional (instance) numpy array for $y$. In TSC the variable portion is commonly referred to as the dimension of the time series instance.\n",
"\n",
"For the single problem loader load arrow head, set the return type to `numpy3D` to store $X$ in such a 3D ndarray. The data can also be returned in other formats, e.g., `pd-multiindex` (row-index hierarchical pandas), or `numpyflat` (2D numpy with rows=instances, columns=time points; alias is `numpy2d`). The full range of options are the `Panel` data format strings desribed in tutorial AA - datatypes and data loaders (see there)."
- ]
+ ],
+ "metadata": {
+ "collapsed": false
+ }
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%%\n"
- }
+ "collapsed": false
},
"outputs": [],
"source": [
@@ -68,10 +61,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%%\n"
- }
+ "collapsed": false
},
"outputs": [],
"source": [
@@ -147,10 +137,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%%\n"
- }
+ "collapsed": false
},
"outputs": [],
"source": [
@@ -175,10 +162,7 @@
{
"cell_type": "markdown",
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%% md\n"
- }
+ "collapsed": false
},
"source": [
"Some data sets have unequal length series. Two data sets with this characteristic are shipped with sktime: PLAID (univariate) and JapaneseVowels (multivariate). We cannot store unequal length series in numpy arrays. Instead, we use nested pandas data frames, where each cell is a pandas Series. This is the default return type for all single problem loaders."
@@ -188,10 +172,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%%\n"
- }
+ "collapsed": false
},
"outputs": [],
"source": [
@@ -211,10 +192,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%%\n"
- }
+ "collapsed": false
},
"outputs": [],
"source": [
@@ -259,10 +237,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%%\n"
- }
+ "collapsed": false
},
"outputs": [],
"source": [
@@ -292,10 +267,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%%\n"
- }
+ "collapsed": false
},
"outputs": [],
"source": [
@@ -320,10 +292,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%%\n"
- }
+ "collapsed": false
},
"outputs": [],
"source": [
@@ -354,10 +323,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%%\n"
- }
+ "collapsed": false
},
"outputs": [],
"source": [
@@ -464,10 +430,7 @@
"Parameter tuning using `sklearn` `GridSearchCV`, we tune the _k_ and distance measure for a K-NN classifier:"
],
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%% md\n"
- }
+ "collapsed": false
}
},
{
@@ -495,10 +458,7 @@
"Probability calibration with the `sklearn` `CalibratedClassifierCV`:"
],
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%% md\n"
- }
+ "collapsed": false
}
},
{
@@ -519,10 +479,7 @@
"accuracy_score(arrow_test_y, y_pred)"
],
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%%\n"
- }
+ "collapsed": false
}
},
{
@@ -533,10 +490,7 @@
{
"cell_type": "markdown",
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%% md\n"
- }
+ "collapsed": false
},
"source": [
"## Multivariate Classification\n",
@@ -547,10 +501,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%%\n"
- }
+ "collapsed": false
},
"outputs": [],
"source": [
@@ -575,19 +526,13 @@
"accuracy_score(motions_test_y, y_pred)"
],
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%%\n"
- }
+ "collapsed": false
}
},
{
"cell_type": "markdown",
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%% md\n"
- }
+ "collapsed": false
},
"source": [
"`sktime` offers two other ways of building estimators for multivariate time series problems:\n",
@@ -602,10 +547,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%%\n"
- }
+ "collapsed": false
},
"outputs": [],
"source": [
@@ -630,10 +572,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "collapsed": false,
- "pycharm": {
- "name": "#%%\n"
- }
+ "collapsed": false
},
"outputs": [],
"source": [
diff --git a/examples/img/tsc.png b/examples/img/tsc.png
new file mode 100644
index 00000000000..48c5df29586
Binary files /dev/null and b/examples/img/tsc.png differ
|
huggingface__peft-646 | importing peft with an old version of bitsandbytes causes an exception
### System Info
Importing peft works with bitsandbytes version 0.39.1, but with version 0.38.1 the import raises an exception: `AttributeError: module 'bitsandbytes.nn' has no attribute 'Linear4bit'`.
Indeed, the class `SVDLinear4bit` should be defined only if `is_bnb_4bit_available()`, not just if `is_bnb_available()`.
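For illustration, here is a minimal sketch of the guarded-definition structure being asked for, assuming peft's own `import_utils` helpers (which the adalora module already imports). Class bodies are elided; the real layers also inherit from `AdaLoraLayer` and take adapter arguments. The point is only that the 4-bit wrapper sits under its own `is_bnb_4bit_available()` check rather than under the broader `is_bnb_available()` one, so an old bitsandbytes that lacks `bnb.nn.Linear4bit` never evaluates it.

```python
# Sketch of the guard structure only -- simplified, not the full implementation.
from peft.import_utils import is_bnb_4bit_available, is_bnb_available

if is_bnb_available():
    import bitsandbytes as bnb

    class SVDLinear8bitLt(bnb.nn.Linear8bitLt):
        """8-bit variant: bnb.nn.Linear8bitLt exists even in bitsandbytes 0.38.x."""


if is_bnb_4bit_available():

    class SVDLinear4bit(bnb.nn.Linear4bit):
        """4-bit variant: bnb.nn.Linear4bit only exists in newer bitsandbytes."""
```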
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
In a notebook:
!pip install 'bitsandbytes==0.38.1'
import peft
### Expected behavior
No exception: `import peft` should succeed even with bitsandbytes 0.38.1.
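For context, a small hypothetical probe (illustrative only, not peft's actual `is_bnb_4bit_available` implementation) shows why 0.38.1 trips the error: bitsandbytes itself imports fine, but `bitsandbytes.nn` simply has no `Linear4bit` attribute.

```python
# Hypothetical capability probe -- illustrative only, not peft's real helper.
import importlib.util


def bnb_has_linear4bit() -> bool:
    if importlib.util.find_spec("bitsandbytes") is None:
        return False  # bitsandbytes is not installed at all
    import bitsandbytes as bnb

    # On bitsandbytes 0.38.1 this is False, which is exactly why an
    # unconditional `class SVDLinear4bit(bnb.nn.Linear4bit, ...)` blows up.
    return hasattr(bnb.nn, "Linear4bit")


print(bnb_has_linear4bit())
```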
| [
{
"content": "import re\nimport warnings\nfrom dataclasses import dataclass, field\nfrom typing import Optional\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom transformers.pytorch_utils import Conv1D\n\nfrom ..import_utils import is_bnb_4bit_available, is_bnb_available\nfrom ..utils import (\n TRANSFORMERS_MODELS_TO_ADALORA_TARGET_MODULES_MAPPING,\n PeftType,\n _freeze_adapter,\n _get_submodules,\n transpose,\n)\nfrom .lora import (\n LoraConfig,\n LoraLayer,\n LoraModel,\n mark_only_lora_as_trainable,\n)\n\n\nif is_bnb_available():\n import bitsandbytes as bnb\n\n\n@dataclass\nclass AdaLoraConfig(LoraConfig):\n \"\"\"\n This is the configuration class to store the configuration of a [`~peft.AdaLora`].\n\n Args:\n target_r (`int`): The target average rank of incremental matrix.\n init_r (`int`): The initial rank for each incremental matrix.\n tinit (`int`): The steps of initial fine-tuning warmup.\n tfinal (`int`): The step of final fine-tuning.\n deltaT (`int`): The time internval between two budget allocations.\n beta1 (`float`): The hyperparameter of EMA for sensitivity smoothing.\n beta2 (`float`): The hyperparameter of EMA for undertainty quantification.\n orth_reg_weight (`float`): The coefficient of orthogonal regularization.\n total_step (`int`): The total training steps that should be specified before training.\n rank_pattern (`list`): The allocated rank for each weight matrix by RankAllocator.\n \"\"\"\n\n target_r: int = field(default=8, metadata={\"help\": \"Target Lora matrix dimension.\"})\n init_r: int = field(default=12, metadata={\"help\": \"Intial Lora matrix dimension.\"})\n tinit: int = field(default=0, metadata={\"help\": \"The steps of initial warmup.\"})\n tfinal: int = field(default=0, metadata={\"help\": \"The steps of final warmup.\"})\n deltaT: int = field(default=1, metadata={\"help\": \"Step interval of rank allocation.\"})\n beta1: float = field(default=0.85, metadata={\"help\": \"Hyperparameter of EMA.\"})\n beta2: float = field(default=0.85, metadata={\"help\": \"Hyperparameter of EMA.\"})\n orth_reg_weight: float = field(default=0.5, metadata={\"help\": \"The orthogonal regularization coefficient.\"})\n total_step: Optional[int] = field(default=None, metadata={\"help\": \"The total training steps.\"})\n rank_pattern: Optional[dict] = field(default=None, metadata={\"help\": \"The saved rank pattern.\"})\n\n def __post_init__(self):\n self.peft_type = PeftType.ADALORA\n\n\nclass AdaLoraModel(LoraModel):\n \"\"\"\n Creates AdaLoRA (Adaptive LoRA) model from a pretrained transformers model. 
Paper:\n https://openreview.net/pdf?id=lq62uWRJjiY\n\n Args:\n model ([`transformers.PreTrainedModel`]): The model to be adapted.\n config ([`AdaLoraConfig`]): The configuration of the AdaLora model.\n\n Returns:\n `torch.nn.Module`: The AdaLora model.\n\n Example::\n\n >>> from transformers import AutoModelForSeq2SeqLM, LoraConfig >>> from peft import AdaLoraModel, AdaLoraConfig\n >>> config = AdaLoraConfig(\n peft_type=\"ADALORA\", task_type=\"SEQ_2_SEQ_LM\", r=8, lora_alpha=32, target_modules=[\"q\", \"v\"],\n lora_dropout=0.01,\n )\n >>> model = AutoModelForSeq2SeqLM.from_pretrained(\"t5-base\") >>> model = AdaLoraModel(config, model)\n\n **Attributes**:\n - **model** ([`transformers.PreTrainedModel`]) -- The model to be adapted.\n - **peft_config** ([`AdaLoraConfig`]): The configuration of the AdaLora model.\n \"\"\"\n\n def __init__(self, model, config, adapter_name):\n nn.Module.__init__(self)\n self.model = model\n self.peft_config = config\n self.add_adapter(adapter_name, self.peft_config[adapter_name])\n\n def add_adapter(self, adapter_name, config=None):\n if config is not None:\n model_config = self.model.config.to_dict() if hasattr(self.model.config, \"to_dict\") else self.model.config\n config = self._prepare_adalora_config(config, model_config)\n self.peft_config[adapter_name] = config\n self._find_and_replace(adapter_name)\n if len(self.peft_config) > 1 and self.peft_config[adapter_name].bias != \"none\":\n raise ValueError(\n \"AdaLoraModel supports only 1 adapter with bias. When using multiple adapters, set bias to 'none' for all adapters.\"\n )\n traininable_mode_counter = 0\n for config in self.peft_config.values():\n if not config.inference_mode:\n traininable_mode_counter += 1\n\n if traininable_mode_counter > 1:\n raise ValueError(\n \"AdaLoraModel supports only 1 trainable adapter. \"\n \"When using multiple adapters, set inference_mode to True for all adapters except the one you want to train.\"\n )\n\n mark_only_lora_as_trainable(self.model, self.peft_config[adapter_name].bias)\n if self.peft_config[adapter_name].inference_mode:\n _freeze_adapter(self.model, adapter_name)\n else:\n self.trainable_adapter_name = adapter_name\n self.rankallocator = RankAllocator(self.model, self.peft_config[adapter_name], self.trainable_adapter_name)\n\n def _find_and_replace(self, adapter_name):\n lora_config = self.peft_config[adapter_name]\n loaded_in_8bit = getattr(self.model, \"is_loaded_in_8bit\", False)\n loaded_in_4bit = getattr(self.model, \"is_loaded_in_4bit\", False)\n\n if (loaded_in_8bit or loaded_in_4bit) and not is_bnb_available():\n raise ImportError(\n \"To use Lora with 8-bit quantization, please install the `bitsandbytes` package. 
\"\n \"You can install it with `pip install bitsandbytes`.\"\n )\n is_target_modules_in_base_model = False\n kwargs = {\n \"r\": lora_config.init_r,\n \"lora_alpha\": lora_config.lora_alpha,\n \"lora_dropout\": lora_config.lora_dropout,\n \"fan_in_fan_out\": lora_config.fan_in_fan_out,\n \"init_lora_weights\": lora_config.init_lora_weights,\n }\n key_list = [key for key, _ in self.model.named_modules()]\n for key in key_list:\n if isinstance(lora_config.target_modules, str):\n target_module_found = re.fullmatch(lora_config.target_modules, key)\n else:\n target_module_found = any(key.endswith(target_key) for target_key in lora_config.target_modules)\n if target_module_found:\n if not is_target_modules_in_base_model:\n is_target_modules_in_base_model = True\n parent, target, target_name = _get_submodules(self.model, key)\n bias = target.bias is not None\n if isinstance(target, LoraLayer):\n target.update_layer(\n adapter_name,\n lora_config.init_r,\n lora_config.lora_alpha,\n lora_config.lora_dropout,\n lora_config.init_lora_weights,\n )\n else:\n if loaded_in_8bit and isinstance(target, bnb.nn.Linear8bitLt):\n kwargs.update(\n {\n \"has_fp16_weights\": target.state.has_fp16_weights,\n \"memory_efficient_backward\": target.state.memory_efficient_backward,\n \"threshold\": target.state.threshold,\n \"index\": target.index,\n }\n )\n new_module = SVDLinear8bitLt(\n adapter_name, target.in_features, target.out_features, bias=bias, **kwargs\n )\n elif loaded_in_4bit and is_bnb_4bit_available() and isinstance(target, bnb.nn.Linear4bit):\n fourbit_kwargs = kwargs.copy()\n fourbit_kwargs.update(\n {\n \"compute_dtype\": target.compute_dtype,\n \"compress_statistics\": target.weight.compress_statistics,\n \"quant_type\": target.weight.quant_type,\n }\n )\n new_module = SVDLinear4bit(\n adapter_name, target.in_features, target.out_features, bias=bias, **fourbit_kwargs\n )\n else:\n if isinstance(target, torch.nn.Linear):\n in_features, out_features = target.in_features, target.out_features\n if kwargs[\"fan_in_fan_out\"]:\n warnings.warn(\n \"fan_in_fan_out is set to True but the target module is `torch.nn.Linear`. \"\n \"Setting fan_in_fan_out to False.\"\n )\n kwargs[\"fan_in_fan_out\"] = lora_config.fan_in_fan_out = False\n elif isinstance(target, Conv1D):\n in_features, out_features = (\n target.weight.ds_shape if hasattr(target.weight, \"ds_shape\") else target.weight.shape\n )\n if not kwargs[\"fan_in_fan_out\"]:\n warnings.warn(\n \"fan_in_fan_out is set to False but the target module is `Conv1D`. \"\n \"Setting fan_in_fan_out to True.\"\n )\n kwargs[\"fan_in_fan_out\"] = lora_config.fan_in_fan_out = True\n else:\n raise ValueError(\n f\"Target module {target} is not supported. \"\n f\"Currently, only `torch.nn.Linear` and `Conv1D` are supported.\"\n )\n new_module = SVDLinear(adapter_name, in_features, out_features, bias=bias, **kwargs)\n\n self._replace_module(parent, target_name, new_module, target)\n if not is_target_modules_in_base_model:\n raise ValueError(\n f\"Target modules {lora_config.target_modules} not found in the base model. 
\"\n f\"Please check the target modules and try again.\"\n )\n\n def __getattr__(self, name: str):\n \"\"\"Forward missing attributes to the wrapped module.\"\"\"\n try:\n return super().__getattr__(name) # defer to nn.Module's logic\n except AttributeError:\n return getattr(self.model, name)\n\n def forward(self, *args, **kwargs):\n outputs = self.model.forward(*args, **kwargs)\n\n # Calculate the orthogonal regularization\n orth_reg_weight = self.peft_config[self.trainable_adapter_name].orth_reg_weight\n assert orth_reg_weight > 0\n\n if hasattr(outputs, \"loss\"):\n regu_loss = 0\n num_param = 0\n for n, p in self.model.named_parameters():\n if (\"lora_A\" in n or \"lora_B\" in n) and self.trainable_adapter_name in n:\n para_cov = p @ p.T if \"lora_A\" in n else p.T @ p\n I = torch.eye(*para_cov.size(), out=torch.empty_like(para_cov))\n I.requires_grad = False\n num_param += 1\n regu_loss += torch.norm(para_cov - I, p=\"fro\")\n if num_param > 0:\n regu_loss = regu_loss / num_param\n else:\n regu_loss = 0\n outputs.loss += orth_reg_weight * regu_loss\n return outputs\n\n def resize_modules_by_rank_pattern(self, rank_pattern, adapter_name):\n lora_config = self.peft_config[adapter_name]\n for name, rank_idx in rank_pattern.items():\n if isinstance(rank_idx, list):\n rank = sum(rank_idx)\n elif isinstance(rank_idx, torch.Tensor):\n rank_idx = rank_idx.view(-1)\n rank = rank_idx.sum().item()\n else:\n raise ValueError(\"Unexcepted type of rank_idx\")\n key = \".\".join(name.split(\".\")[0:-2]) if adapter_name in name else \".\".join(name.split(\".\")[0:-1])\n _, target, _ = _get_submodules(self.model, key)\n lora_E_weights = target.lora_E[adapter_name][rank_idx]\n lora_A_weights = target.lora_A[adapter_name][rank_idx]\n lora_B_weights = target.lora_B[adapter_name][:, rank_idx]\n ranknum = target.ranknum[adapter_name]\n target.update_layer(\n adapter_name,\n rank,\n lora_config.lora_alpha,\n lora_config.lora_dropout,\n lora_config.init_lora_weights,\n )\n with torch.no_grad():\n if rank > 0:\n target.lora_E[adapter_name].copy_(lora_E_weights)\n target.lora_A[adapter_name].copy_(lora_A_weights)\n target.lora_B[adapter_name].copy_(lora_B_weights)\n # The scaling is exactly as the previous\n target.ranknum[adapter_name].copy_(ranknum)\n\n def resize_state_dict_by_rank_pattern(self, rank_pattern, state_dict, adapter_name):\n for name, rank_idx in rank_pattern.items():\n rank = sum(rank_idx)\n prefix = \".\".join(name.split(\".\")[0:-2]) if adapter_name in name else \".\".join(name.split(\".\")[0:-1])\n for layer in [\"lora_E\", \"lora_A\", \"lora_B\"]:\n key = f\"base_model.model.{prefix}.{layer}.{adapter_name}\"\n if layer != \"lora_B\":\n state_dict[key] = (\n state_dict[key][rank_idx] if rank != state_dict[key].shape[0] else state_dict[key]\n )\n else:\n state_dict[key] = (\n state_dict[key][:, rank_idx] if rank != state_dict[key].shape[1] else state_dict[key]\n )\n return state_dict\n\n def update_and_allocate(self, global_step):\n lora_config = self.peft_config[self.trainable_adapter_name]\n # Update the importance score and allocate the budget\n if global_step < lora_config.total_step - lora_config.tfinal:\n _, rank_pattern = self.rankallocator.update_and_allocate(self.model, global_step)\n if rank_pattern:\n lora_config.rank_pattern = rank_pattern\n # Finalize the budget allocation\n elif global_step == lora_config.total_step - lora_config.tfinal:\n _, rank_pattern = self.rankallocator.update_and_allocate(self.model, global_step, force_mask=True)\n # for some reason, this freezes the 
trainable parameters and nothing gets updates\n # self.resize_modules_by_rank_pattern(rank_pattern, self.trainable_adapter_name)\n lora_config.rank_pattern = rank_pattern\n self.rankallocator.reset_ipt()\n # Currently using inefficient way to mask the unimportant weights using the rank pattern\n # due to problem mentioned above\n elif global_step > lora_config.total_step - lora_config.tfinal:\n self.rankallocator.mask_using_rank_pattern(self.model, lora_config.rank_pattern)\n # Pass the function and do forward propagation\n else:\n return None\n\n @staticmethod\n def _prepare_adalora_config(peft_config, model_config):\n if peft_config.target_modules is None:\n if model_config[\"model_type\"] not in TRANSFORMERS_MODELS_TO_ADALORA_TARGET_MODULES_MAPPING:\n raise ValueError(\"Please specify `target_modules` in `peft_config`\")\n peft_config.target_modules = TRANSFORMERS_MODELS_TO_ADALORA_TARGET_MODULES_MAPPING[\n model_config[\"model_type\"]\n ]\n return peft_config\n\n\nclass AdaLoraLayer(LoraLayer):\n def __init__(\n self,\n in_features: int,\n out_features: int,\n ):\n super().__init__(in_features, out_features)\n self.lora_E = nn.ParameterDict({})\n self.lora_A = nn.ParameterDict({})\n self.lora_B = nn.ParameterDict({})\n self.ranknum = nn.ParameterDict({})\n\n def update_layer(self, adapter_name, r, lora_alpha, lora_dropout, init_lora_weights):\n self.r[adapter_name] = r\n self.lora_alpha[adapter_name] = lora_alpha\n if lora_dropout > 0.0:\n lora_dropout_layer = nn.Dropout(p=lora_dropout)\n else:\n\n def lora_dropout_layer(x):\n return x\n\n self.lora_dropout.update(nn.ModuleDict({adapter_name: lora_dropout_layer}))\n # Actual trainable parameters\n # Right singular vectors\n self.lora_A.update(nn.ParameterDict({adapter_name: nn.Parameter(torch.zeros(r, self.in_features))}))\n # Singular values\n self.lora_E.update(nn.ParameterDict({adapter_name: nn.Parameter(torch.zeros(r, 1))}))\n # Left singular vectors\n self.lora_B.update(nn.ParameterDict({adapter_name: nn.Parameter(torch.zeros(self.out_features, r))}))\n # The current rank\n self.ranknum.update(nn.ParameterDict({adapter_name: nn.Parameter(torch.zeros(1), requires_grad=False)}))\n self.ranknum[adapter_name].data.fill_(float(r))\n self.ranknum[adapter_name].requires_grad = False\n self.scaling[adapter_name] = lora_alpha if lora_alpha > 0 else float(r)\n if init_lora_weights:\n self.reset_lora_parameters(adapter_name)\n self.to(self.weight.device)\n\n def reset_lora_parameters(self, adapter_name):\n if adapter_name in self.lora_A.keys():\n nn.init.zeros_(self.lora_E[adapter_name])\n nn.init.normal_(self.lora_A[adapter_name], mean=0.0, std=0.02)\n nn.init.normal_(self.lora_B[adapter_name], mean=0.0, std=0.02)\n\n\nclass SVDLinear(nn.Linear, AdaLoraLayer):\n # SVD-based adaptation by a dense layer\n def __init__(\n self,\n adapter_name: str,\n in_features: int,\n out_features: int,\n r: int = 0,\n lora_alpha: int = 1,\n lora_dropout: float = 0.0,\n fan_in_fan_out: bool = False,\n **kwargs,\n ):\n init_lora_weights = kwargs.pop(\"init_lora_weights\", True)\n nn.Linear.__init__(self, in_features, out_features, **kwargs)\n AdaLoraLayer.__init__(self, in_features=in_features, out_features=out_features)\n # Freezing the pre-trained weight matrix\n self.weight.requires_grad = False\n\n self.fan_in_fan_out = fan_in_fan_out\n if fan_in_fan_out:\n self.weight.data = self.weight.data.T\n\n nn.Linear.reset_parameters(self)\n self.update_layer(adapter_name, r, lora_alpha, lora_dropout, init_lora_weights)\n self.active_adapter = adapter_name\n\n def 
merge(self):\n if self.active_adapter not in self.lora_A.keys():\n return\n if self.merged:\n warnings.warn(\"Already merged. Nothing to do.\")\n return\n if self.r[self.active_adapter] > 0:\n self.weight.data += (\n transpose(\n self.lora_B[self.active_adapter]\n @ (self.lora_A[self.active_adapter] * self.lora_E[self.active_adapter]),\n self.fan_in_fan_out,\n )\n * self.scaling[self.active_adapter]\n / (self.ranknum[self.active_adapter] + 1e-5)\n )\n self.merged = True\n\n def unmerge(self):\n if self.active_adapter not in self.lora_A.keys():\n return\n if not self.merged:\n warnings.warn(\"Already unmerged. Nothing to do.\")\n return\n if self.r[self.active_adapter] > 0:\n self.weight.data -= (\n transpose(\n self.lora_B[self.active_adapter]\n @ (self.lora_A[self.active_adapter] * self.lora_E[self.active_adapter])\n )\n * self.scaling[self.active_adapter]\n / (self.ranknum[self.active_adapter] + 1e-5)\n )\n self.merged = False\n\n def forward(self, x: torch.Tensor):\n if self.active_adapter not in self.lora_A.keys():\n return F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)\n if self.disable_adapters:\n if self.r[self.active_adapter] > 0 and self.merged:\n self.unmerge()\n result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)\n elif self.r[self.active_adapter] > 0 and not self.merged:\n result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)\n result += (\n (\n self.lora_dropout[self.active_adapter](x)\n @ (self.lora_A[self.active_adapter] * self.lora_E[self.active_adapter]).T\n @ self.lora_B[self.active_adapter].T\n )\n * self.scaling[self.active_adapter]\n / (self.ranknum[self.active_adapter] + 1e-5)\n )\n else:\n result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)\n return result\n\n\nif is_bnb_available():\n\n class SVDLinear8bitLt(bnb.nn.Linear8bitLt, AdaLoraLayer):\n # Low-rank matrix for SVD-based adaptation\n def __init__(\n self,\n adapter_name,\n in_features,\n out_features,\n r: int = 0,\n lora_alpha: int = 1,\n lora_dropout: float = 0.0,\n **kwargs,\n ):\n bnb.nn.Linear8bitLt.__init__(\n self,\n in_features,\n out_features,\n bias=kwargs.get(\"bias\", True),\n has_fp16_weights=kwargs.get(\"has_fp16_weights\", True),\n memory_efficient_backward=kwargs.get(\"memory_efficient_backward\", False),\n threshold=kwargs.get(\"threshold\", 0.0),\n index=kwargs.get(\"index\", None),\n )\n AdaLoraLayer.__init__(self, in_features=in_features, out_features=out_features)\n # Freezing the pre-trained weight matrix\n self.weight.requires_grad = False\n\n init_lora_weights = kwargs.pop(\"init_lora_weights\", True)\n self.update_layer(adapter_name, r, lora_alpha, lora_dropout, init_lora_weights)\n self.active_adapter = adapter_name\n\n def forward(self, x: torch.Tensor):\n result = super().forward(x)\n\n if self.disable_adapters or self.active_adapter not in self.lora_A.keys():\n return result\n elif self.r[self.active_adapter] > 0:\n if not torch.is_autocast_enabled():\n expected_dtype = result.dtype\n\n if x.dtype != torch.float32:\n x = x.float()\n output = (\n (\n self.lora_dropout[self.active_adapter](x)\n @ (self.lora_A[self.active_adapter] * self.lora_E[self.active_adapter]).T\n @ self.lora_B[self.active_adapter].T\n ).to(expected_dtype)\n * self.scaling[self.active_adapter]\n / (self.ranknum[self.active_adapter] + 1e-5)\n )\n else:\n output = (\n (\n self.lora_dropout[self.active_adapter](x)\n @ (self.lora_A[self.active_adapter] * self.lora_E[self.active_adapter]).T\n @ 
self.lora_B[self.active_adapter].T\n )\n * self.scaling[self.active_adapter]\n / (self.ranknum[self.active_adapter] + 1e-5)\n )\n result = result + output\n return result\n\n class SVDLinear4bit(bnb.nn.Linear4bit, AdaLoraLayer):\n # Low-rank matrix for SVD-based adaptation\n def __init__(\n self,\n adapter_name,\n in_features,\n out_features,\n r: int = 0,\n lora_alpha: int = 1,\n lora_dropout: float = 0.0,\n **kwargs,\n ):\n bnb.nn.Linear4bit.__init__(\n self,\n in_features,\n out_features,\n bias=kwargs.get(\"bias\", True),\n compute_dtype=kwargs.get(\"compute_dtype\", torch.float32),\n compress_statistics=kwargs.get(\"compress_statistics\", True),\n quant_type=kwargs.get(\"quant_type\", \"nf4\"),\n )\n AdaLoraLayer.__init__(self, in_features=in_features, out_features=out_features)\n # Freezing the pre-trained weight matrix\n self.weight.requires_grad = False\n\n init_lora_weights = kwargs.pop(\"init_lora_weights\", True)\n self.update_layer(adapter_name, r, lora_alpha, lora_dropout, init_lora_weights)\n self.active_adapter = adapter_name\n\n def forward(self, x: torch.Tensor):\n result = super().forward(x)\n\n if self.disable_adapters or self.active_adapter not in self.lora_A.keys():\n return result\n elif self.r[self.active_adapter] > 0:\n if not torch.is_autocast_enabled():\n expected_dtype = result.dtype\n\n if x.dtype != torch.float32:\n x = x.float()\n output = (\n (\n self.lora_dropout[self.active_adapter](x)\n @ (self.lora_A[self.active_adapter] * self.lora_E[self.active_adapter]).T\n @ self.lora_B[self.active_adapter].T\n ).to(expected_dtype)\n * self.scaling[self.active_adapter]\n / (self.ranknum[self.active_adapter] + 1e-5)\n )\n else:\n output = (\n (\n self.lora_dropout[self.active_adapter](x)\n @ (self.lora_A[self.active_adapter] * self.lora_E[self.active_adapter]).T\n @ self.lora_B[self.active_adapter].T\n )\n * self.scaling[self.active_adapter]\n / (self.ranknum[self.active_adapter] + 1e-5)\n )\n result = result + output\n return result\n\n\nclass RankAllocator(object):\n \"\"\"\n The RankAllocator for AdaLoraModel. 
Paper: https://openreview.net/pdf?id=lq62uWRJjiY\n\n Args:\n config ([`AdaLoraConfig`]): The configuration of the AdaLora model.\n model: the model that we apply AdaLoRA to.\n\n \"\"\"\n\n def __init__(self, model, peft_config, adapter_name):\n self.peft_config = peft_config\n self.adapter_name = adapter_name\n self.beta1 = peft_config.beta1\n self.beta2 = peft_config.beta2\n assert self.beta1 > 0 and self.beta1 < 1\n assert self.beta2 > 0 and self.beta2 < 1\n\n self.reset_ipt()\n self._set_budget_scheduler(model)\n\n def set_total_step(self, total_step):\n self.peft_config.total_step = total_step\n\n def reset_ipt(self):\n self.ipt = {}\n self.exp_avg_ipt = {}\n self.exp_avg_unc = {}\n\n def _set_budget_scheduler(self, model):\n self.init_bgt = 0\n self.name_set = set()\n for n, p in model.named_parameters():\n if f\"lora_A.{self.adapter_name}\" in n:\n self.init_bgt += p.size(0)\n self.name_set.add(n.replace(\"lora_A\", \"%s\"))\n self.name_set = sorted(self.name_set)\n # The total final rank budget\n self.target_bgt = self.peft_config.target_r * len(self.name_set)\n\n def budget_schedule(self, step: int):\n tinit = self.peft_config.tinit\n tfinal = self.peft_config.tfinal\n total_step = self.peft_config.total_step\n # Initial warmup\n if step <= tinit:\n budget = self.init_bgt\n mask_ind = False\n # Final fine-tuning\n elif step > total_step - tfinal:\n budget = self.target_bgt\n mask_ind = True\n else:\n # Budget decreasing with a cubic scheduler\n mul_coeff = 1 - (step - tinit) / (total_step - tfinal - tinit)\n budget = int((self.init_bgt - self.target_bgt) * (mul_coeff**3) + self.target_bgt)\n mask_ind = True if step % self.peft_config.deltaT == 0 else False\n return budget, mask_ind\n\n def update_ipt(self, model):\n # Update the sensitivity and uncertainty for every weight\n for n, p in model.named_parameters():\n if \"lora_\" in n and self.adapter_name in n:\n if n not in self.ipt:\n self.ipt[n] = torch.zeros_like(p)\n self.exp_avg_ipt[n] = torch.zeros_like(p)\n self.exp_avg_unc[n] = torch.zeros_like(p)\n with torch.no_grad():\n self.ipt[n] = (p * p.grad).abs().detach()\n # Sensitivity smoothing\n self.exp_avg_ipt[n] = self.beta1 * self.exp_avg_ipt[n] + (1 - self.beta1) * self.ipt[n]\n # Uncertainty quantification\n self.exp_avg_unc[n] = (\n self.beta2 * self.exp_avg_unc[n] + (1 - self.beta2) * (self.ipt[n] - self.exp_avg_ipt[n]).abs()\n )\n\n def _element_score(self, n):\n return self.exp_avg_ipt[n] * self.exp_avg_unc[n]\n\n def _combine_ipt(self, ipt_E, ipt_AB):\n ipt_AB = ipt_AB.sum(dim=1, keepdim=False)\n sum_ipt = ipt_E.view(-1) + ipt_AB.view(-1)\n return sum_ipt\n\n def mask_to_budget(self, model, budget):\n value_ipt = {}\n vector_ipt = {}\n triplet_ipt = {}\n # Get the importance score for A, E, B\n for n, p in model.named_parameters():\n if f\"lora_A.{self.adapter_name}\" in n:\n entry_ipt = self._element_score(n)\n comb_ipt = torch.mean(entry_ipt, dim=1, keepdim=True)\n name_m = n.replace(\"lora_A\", \"%s\")\n if name_m not in vector_ipt:\n vector_ipt[name_m] = [comb_ipt]\n else:\n vector_ipt[name_m].append(comb_ipt)\n if f\"lora_B.{self.adapter_name}\" in n:\n entry_ipt = self._element_score(n)\n comb_ipt = torch.mean(entry_ipt, dim=0, keepdim=False).view(-1, 1)\n name_m = n.replace(\"lora_B\", \"%s\")\n if name_m not in vector_ipt:\n vector_ipt[name_m] = [comb_ipt]\n else:\n vector_ipt[name_m].append(comb_ipt)\n if f\"lora_E.{self.adapter_name}\" in n:\n entry_ipt = self._element_score(n)\n name_m = n.replace(\"lora_E\", \"%s\")\n value_ipt[name_m] = entry_ipt\n\n 
all_score = []\n # Calculate the score for each triplet\n for name_m in vector_ipt:\n ipt_E = value_ipt[name_m]\n ipt_AB = torch.cat(vector_ipt[name_m], dim=1)\n sum_ipt = self._combine_ipt(ipt_E, ipt_AB)\n name_E = name_m % \"lora_E\"\n triplet_ipt[name_E] = sum_ipt.view(-1, 1)\n all_score.append(sum_ipt.view(-1))\n\n # Get the threshold by ranking ipt\n mask_threshold = torch.kthvalue(\n torch.cat(all_score),\n k=self.init_bgt - budget,\n )[0].item()\n\n rank_pattern = {}\n # Mask the unimportant triplets\n with torch.no_grad():\n for n, p in model.named_parameters():\n if f\"lora_E.{self.adapter_name}\" in n:\n p.masked_fill_(triplet_ipt[n] <= mask_threshold, 0.0)\n rank_pattern[n] = (~(triplet_ipt[n] <= mask_threshold)).view(-1).tolist()\n return rank_pattern\n\n def update_and_allocate(self, model, global_step, force_mask=False):\n # # Update the importance score and allocate the budget\n if global_step < self.peft_config.total_step - self.peft_config.tfinal:\n self.update_ipt(model)\n budget, mask_ind = self.budget_schedule(global_step)\n # Allocate the budget according to importance scores\n if mask_ind or force_mask:\n rank_pattern = self.mask_to_budget(model, budget)\n else:\n rank_pattern = None\n return budget, rank_pattern\n\n def mask_using_rank_pattern(self, model, rank_pattern):\n # Mask the unimportant triplets\n is_adapter_name_truncated = False\n if self.adapter_name not in next(iter(rank_pattern.keys())):\n is_adapter_name_truncated = True\n\n with torch.no_grad():\n for n, p in model.named_parameters():\n if f\"lora_E.{self.adapter_name}\" in n:\n key = n if not is_adapter_name_truncated else n.replace(f\".{self.adapter_name}\", \"\")\n mask = torch.Tensor(rank_pattern[key]).unsqueeze(-1).to(p.device)\n p.masked_fill_(~mask.bool(), 0.0)\n",
"path": "src/peft/tuners/adalora.py"
}
] | [
{
"content": "import re\nimport warnings\nfrom dataclasses import dataclass, field\nfrom typing import Optional\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom transformers.pytorch_utils import Conv1D\n\nfrom ..import_utils import is_bnb_4bit_available, is_bnb_available\nfrom ..utils import (\n TRANSFORMERS_MODELS_TO_ADALORA_TARGET_MODULES_MAPPING,\n PeftType,\n _freeze_adapter,\n _get_submodules,\n transpose,\n)\nfrom .lora import (\n LoraConfig,\n LoraLayer,\n LoraModel,\n mark_only_lora_as_trainable,\n)\n\n\nif is_bnb_available():\n import bitsandbytes as bnb\n\n\n@dataclass\nclass AdaLoraConfig(LoraConfig):\n \"\"\"\n This is the configuration class to store the configuration of a [`~peft.AdaLora`].\n\n Args:\n target_r (`int`): The target average rank of incremental matrix.\n init_r (`int`): The initial rank for each incremental matrix.\n tinit (`int`): The steps of initial fine-tuning warmup.\n tfinal (`int`): The step of final fine-tuning.\n deltaT (`int`): The time internval between two budget allocations.\n beta1 (`float`): The hyperparameter of EMA for sensitivity smoothing.\n beta2 (`float`): The hyperparameter of EMA for undertainty quantification.\n orth_reg_weight (`float`): The coefficient of orthogonal regularization.\n total_step (`int`): The total training steps that should be specified before training.\n rank_pattern (`list`): The allocated rank for each weight matrix by RankAllocator.\n \"\"\"\n\n target_r: int = field(default=8, metadata={\"help\": \"Target Lora matrix dimension.\"})\n init_r: int = field(default=12, metadata={\"help\": \"Intial Lora matrix dimension.\"})\n tinit: int = field(default=0, metadata={\"help\": \"The steps of initial warmup.\"})\n tfinal: int = field(default=0, metadata={\"help\": \"The steps of final warmup.\"})\n deltaT: int = field(default=1, metadata={\"help\": \"Step interval of rank allocation.\"})\n beta1: float = field(default=0.85, metadata={\"help\": \"Hyperparameter of EMA.\"})\n beta2: float = field(default=0.85, metadata={\"help\": \"Hyperparameter of EMA.\"})\n orth_reg_weight: float = field(default=0.5, metadata={\"help\": \"The orthogonal regularization coefficient.\"})\n total_step: Optional[int] = field(default=None, metadata={\"help\": \"The total training steps.\"})\n rank_pattern: Optional[dict] = field(default=None, metadata={\"help\": \"The saved rank pattern.\"})\n\n def __post_init__(self):\n self.peft_type = PeftType.ADALORA\n\n\nclass AdaLoraModel(LoraModel):\n \"\"\"\n Creates AdaLoRA (Adaptive LoRA) model from a pretrained transformers model. 
Paper:\n https://openreview.net/pdf?id=lq62uWRJjiY\n\n Args:\n model ([`transformers.PreTrainedModel`]): The model to be adapted.\n config ([`AdaLoraConfig`]): The configuration of the AdaLora model.\n\n Returns:\n `torch.nn.Module`: The AdaLora model.\n\n Example::\n\n >>> from transformers import AutoModelForSeq2SeqLM, LoraConfig >>> from peft import AdaLoraModel, AdaLoraConfig\n >>> config = AdaLoraConfig(\n peft_type=\"ADALORA\", task_type=\"SEQ_2_SEQ_LM\", r=8, lora_alpha=32, target_modules=[\"q\", \"v\"],\n lora_dropout=0.01,\n )\n >>> model = AutoModelForSeq2SeqLM.from_pretrained(\"t5-base\") >>> model = AdaLoraModel(config, model)\n\n **Attributes**:\n - **model** ([`transformers.PreTrainedModel`]) -- The model to be adapted.\n - **peft_config** ([`AdaLoraConfig`]): The configuration of the AdaLora model.\n \"\"\"\n\n def __init__(self, model, config, adapter_name):\n nn.Module.__init__(self)\n self.model = model\n self.peft_config = config\n self.add_adapter(adapter_name, self.peft_config[adapter_name])\n\n def add_adapter(self, adapter_name, config=None):\n if config is not None:\n model_config = self.model.config.to_dict() if hasattr(self.model.config, \"to_dict\") else self.model.config\n config = self._prepare_adalora_config(config, model_config)\n self.peft_config[adapter_name] = config\n self._find_and_replace(adapter_name)\n if len(self.peft_config) > 1 and self.peft_config[adapter_name].bias != \"none\":\n raise ValueError(\n \"AdaLoraModel supports only 1 adapter with bias. When using multiple adapters, set bias to 'none' for all adapters.\"\n )\n traininable_mode_counter = 0\n for config in self.peft_config.values():\n if not config.inference_mode:\n traininable_mode_counter += 1\n\n if traininable_mode_counter > 1:\n raise ValueError(\n \"AdaLoraModel supports only 1 trainable adapter. \"\n \"When using multiple adapters, set inference_mode to True for all adapters except the one you want to train.\"\n )\n\n mark_only_lora_as_trainable(self.model, self.peft_config[adapter_name].bias)\n if self.peft_config[adapter_name].inference_mode:\n _freeze_adapter(self.model, adapter_name)\n else:\n self.trainable_adapter_name = adapter_name\n self.rankallocator = RankAllocator(self.model, self.peft_config[adapter_name], self.trainable_adapter_name)\n\n def _find_and_replace(self, adapter_name):\n lora_config = self.peft_config[adapter_name]\n loaded_in_8bit = getattr(self.model, \"is_loaded_in_8bit\", False)\n loaded_in_4bit = getattr(self.model, \"is_loaded_in_4bit\", False)\n\n if (loaded_in_8bit or loaded_in_4bit) and not is_bnb_available():\n raise ImportError(\n \"To use Lora with 8-bit quantization, please install the `bitsandbytes` package. 
\"\n \"You can install it with `pip install bitsandbytes`.\"\n )\n is_target_modules_in_base_model = False\n kwargs = {\n \"r\": lora_config.init_r,\n \"lora_alpha\": lora_config.lora_alpha,\n \"lora_dropout\": lora_config.lora_dropout,\n \"fan_in_fan_out\": lora_config.fan_in_fan_out,\n \"init_lora_weights\": lora_config.init_lora_weights,\n }\n key_list = [key for key, _ in self.model.named_modules()]\n for key in key_list:\n if isinstance(lora_config.target_modules, str):\n target_module_found = re.fullmatch(lora_config.target_modules, key)\n else:\n target_module_found = any(key.endswith(target_key) for target_key in lora_config.target_modules)\n if target_module_found:\n if not is_target_modules_in_base_model:\n is_target_modules_in_base_model = True\n parent, target, target_name = _get_submodules(self.model, key)\n bias = target.bias is not None\n if isinstance(target, LoraLayer):\n target.update_layer(\n adapter_name,\n lora_config.init_r,\n lora_config.lora_alpha,\n lora_config.lora_dropout,\n lora_config.init_lora_weights,\n )\n else:\n if loaded_in_8bit and isinstance(target, bnb.nn.Linear8bitLt):\n kwargs.update(\n {\n \"has_fp16_weights\": target.state.has_fp16_weights,\n \"memory_efficient_backward\": target.state.memory_efficient_backward,\n \"threshold\": target.state.threshold,\n \"index\": target.index,\n }\n )\n new_module = SVDLinear8bitLt(\n adapter_name, target.in_features, target.out_features, bias=bias, **kwargs\n )\n elif loaded_in_4bit and is_bnb_4bit_available() and isinstance(target, bnb.nn.Linear4bit):\n fourbit_kwargs = kwargs.copy()\n fourbit_kwargs.update(\n {\n \"compute_dtype\": target.compute_dtype,\n \"compress_statistics\": target.weight.compress_statistics,\n \"quant_type\": target.weight.quant_type,\n }\n )\n new_module = SVDLinear4bit(\n adapter_name, target.in_features, target.out_features, bias=bias, **fourbit_kwargs\n )\n else:\n if isinstance(target, torch.nn.Linear):\n in_features, out_features = target.in_features, target.out_features\n if kwargs[\"fan_in_fan_out\"]:\n warnings.warn(\n \"fan_in_fan_out is set to True but the target module is `torch.nn.Linear`. \"\n \"Setting fan_in_fan_out to False.\"\n )\n kwargs[\"fan_in_fan_out\"] = lora_config.fan_in_fan_out = False\n elif isinstance(target, Conv1D):\n in_features, out_features = (\n target.weight.ds_shape if hasattr(target.weight, \"ds_shape\") else target.weight.shape\n )\n if not kwargs[\"fan_in_fan_out\"]:\n warnings.warn(\n \"fan_in_fan_out is set to False but the target module is `Conv1D`. \"\n \"Setting fan_in_fan_out to True.\"\n )\n kwargs[\"fan_in_fan_out\"] = lora_config.fan_in_fan_out = True\n else:\n raise ValueError(\n f\"Target module {target} is not supported. \"\n f\"Currently, only `torch.nn.Linear` and `Conv1D` are supported.\"\n )\n new_module = SVDLinear(adapter_name, in_features, out_features, bias=bias, **kwargs)\n\n self._replace_module(parent, target_name, new_module, target)\n if not is_target_modules_in_base_model:\n raise ValueError(\n f\"Target modules {lora_config.target_modules} not found in the base model. 
\"\n f\"Please check the target modules and try again.\"\n )\n\n def __getattr__(self, name: str):\n \"\"\"Forward missing attributes to the wrapped module.\"\"\"\n try:\n return super().__getattr__(name) # defer to nn.Module's logic\n except AttributeError:\n return getattr(self.model, name)\n\n def forward(self, *args, **kwargs):\n outputs = self.model.forward(*args, **kwargs)\n\n # Calculate the orthogonal regularization\n orth_reg_weight = self.peft_config[self.trainable_adapter_name].orth_reg_weight\n assert orth_reg_weight > 0\n\n if hasattr(outputs, \"loss\"):\n regu_loss = 0\n num_param = 0\n for n, p in self.model.named_parameters():\n if (\"lora_A\" in n or \"lora_B\" in n) and self.trainable_adapter_name in n:\n para_cov = p @ p.T if \"lora_A\" in n else p.T @ p\n I = torch.eye(*para_cov.size(), out=torch.empty_like(para_cov))\n I.requires_grad = False\n num_param += 1\n regu_loss += torch.norm(para_cov - I, p=\"fro\")\n if num_param > 0:\n regu_loss = regu_loss / num_param\n else:\n regu_loss = 0\n outputs.loss += orth_reg_weight * regu_loss\n return outputs\n\n def resize_modules_by_rank_pattern(self, rank_pattern, adapter_name):\n lora_config = self.peft_config[adapter_name]\n for name, rank_idx in rank_pattern.items():\n if isinstance(rank_idx, list):\n rank = sum(rank_idx)\n elif isinstance(rank_idx, torch.Tensor):\n rank_idx = rank_idx.view(-1)\n rank = rank_idx.sum().item()\n else:\n raise ValueError(\"Unexcepted type of rank_idx\")\n key = \".\".join(name.split(\".\")[0:-2]) if adapter_name in name else \".\".join(name.split(\".\")[0:-1])\n _, target, _ = _get_submodules(self.model, key)\n lora_E_weights = target.lora_E[adapter_name][rank_idx]\n lora_A_weights = target.lora_A[adapter_name][rank_idx]\n lora_B_weights = target.lora_B[adapter_name][:, rank_idx]\n ranknum = target.ranknum[adapter_name]\n target.update_layer(\n adapter_name,\n rank,\n lora_config.lora_alpha,\n lora_config.lora_dropout,\n lora_config.init_lora_weights,\n )\n with torch.no_grad():\n if rank > 0:\n target.lora_E[adapter_name].copy_(lora_E_weights)\n target.lora_A[adapter_name].copy_(lora_A_weights)\n target.lora_B[adapter_name].copy_(lora_B_weights)\n # The scaling is exactly as the previous\n target.ranknum[adapter_name].copy_(ranknum)\n\n def resize_state_dict_by_rank_pattern(self, rank_pattern, state_dict, adapter_name):\n for name, rank_idx in rank_pattern.items():\n rank = sum(rank_idx)\n prefix = \".\".join(name.split(\".\")[0:-2]) if adapter_name in name else \".\".join(name.split(\".\")[0:-1])\n for layer in [\"lora_E\", \"lora_A\", \"lora_B\"]:\n key = f\"base_model.model.{prefix}.{layer}.{adapter_name}\"\n if layer != \"lora_B\":\n state_dict[key] = (\n state_dict[key][rank_idx] if rank != state_dict[key].shape[0] else state_dict[key]\n )\n else:\n state_dict[key] = (\n state_dict[key][:, rank_idx] if rank != state_dict[key].shape[1] else state_dict[key]\n )\n return state_dict\n\n def update_and_allocate(self, global_step):\n lora_config = self.peft_config[self.trainable_adapter_name]\n # Update the importance score and allocate the budget\n if global_step < lora_config.total_step - lora_config.tfinal:\n _, rank_pattern = self.rankallocator.update_and_allocate(self.model, global_step)\n if rank_pattern:\n lora_config.rank_pattern = rank_pattern\n # Finalize the budget allocation\n elif global_step == lora_config.total_step - lora_config.tfinal:\n _, rank_pattern = self.rankallocator.update_and_allocate(self.model, global_step, force_mask=True)\n # for some reason, this freezes the 
trainable parameters and nothing gets updates\n # self.resize_modules_by_rank_pattern(rank_pattern, self.trainable_adapter_name)\n lora_config.rank_pattern = rank_pattern\n self.rankallocator.reset_ipt()\n # Currently using inefficient way to mask the unimportant weights using the rank pattern\n # due to problem mentioned above\n elif global_step > lora_config.total_step - lora_config.tfinal:\n self.rankallocator.mask_using_rank_pattern(self.model, lora_config.rank_pattern)\n # Pass the function and do forward propagation\n else:\n return None\n\n @staticmethod\n def _prepare_adalora_config(peft_config, model_config):\n if peft_config.target_modules is None:\n if model_config[\"model_type\"] not in TRANSFORMERS_MODELS_TO_ADALORA_TARGET_MODULES_MAPPING:\n raise ValueError(\"Please specify `target_modules` in `peft_config`\")\n peft_config.target_modules = TRANSFORMERS_MODELS_TO_ADALORA_TARGET_MODULES_MAPPING[\n model_config[\"model_type\"]\n ]\n return peft_config\n\n\nclass AdaLoraLayer(LoraLayer):\n def __init__(\n self,\n in_features: int,\n out_features: int,\n ):\n super().__init__(in_features, out_features)\n self.lora_E = nn.ParameterDict({})\n self.lora_A = nn.ParameterDict({})\n self.lora_B = nn.ParameterDict({})\n self.ranknum = nn.ParameterDict({})\n\n def update_layer(self, adapter_name, r, lora_alpha, lora_dropout, init_lora_weights):\n self.r[adapter_name] = r\n self.lora_alpha[adapter_name] = lora_alpha\n if lora_dropout > 0.0:\n lora_dropout_layer = nn.Dropout(p=lora_dropout)\n else:\n\n def lora_dropout_layer(x):\n return x\n\n self.lora_dropout.update(nn.ModuleDict({adapter_name: lora_dropout_layer}))\n # Actual trainable parameters\n # Right singular vectors\n self.lora_A.update(nn.ParameterDict({adapter_name: nn.Parameter(torch.zeros(r, self.in_features))}))\n # Singular values\n self.lora_E.update(nn.ParameterDict({adapter_name: nn.Parameter(torch.zeros(r, 1))}))\n # Left singular vectors\n self.lora_B.update(nn.ParameterDict({adapter_name: nn.Parameter(torch.zeros(self.out_features, r))}))\n # The current rank\n self.ranknum.update(nn.ParameterDict({adapter_name: nn.Parameter(torch.zeros(1), requires_grad=False)}))\n self.ranknum[adapter_name].data.fill_(float(r))\n self.ranknum[adapter_name].requires_grad = False\n self.scaling[adapter_name] = lora_alpha if lora_alpha > 0 else float(r)\n if init_lora_weights:\n self.reset_lora_parameters(adapter_name)\n self.to(self.weight.device)\n\n def reset_lora_parameters(self, adapter_name):\n if adapter_name in self.lora_A.keys():\n nn.init.zeros_(self.lora_E[adapter_name])\n nn.init.normal_(self.lora_A[adapter_name], mean=0.0, std=0.02)\n nn.init.normal_(self.lora_B[adapter_name], mean=0.0, std=0.02)\n\n\nclass SVDLinear(nn.Linear, AdaLoraLayer):\n # SVD-based adaptation by a dense layer\n def __init__(\n self,\n adapter_name: str,\n in_features: int,\n out_features: int,\n r: int = 0,\n lora_alpha: int = 1,\n lora_dropout: float = 0.0,\n fan_in_fan_out: bool = False,\n **kwargs,\n ):\n init_lora_weights = kwargs.pop(\"init_lora_weights\", True)\n nn.Linear.__init__(self, in_features, out_features, **kwargs)\n AdaLoraLayer.__init__(self, in_features=in_features, out_features=out_features)\n # Freezing the pre-trained weight matrix\n self.weight.requires_grad = False\n\n self.fan_in_fan_out = fan_in_fan_out\n if fan_in_fan_out:\n self.weight.data = self.weight.data.T\n\n nn.Linear.reset_parameters(self)\n self.update_layer(adapter_name, r, lora_alpha, lora_dropout, init_lora_weights)\n self.active_adapter = adapter_name\n\n def 
merge(self):\n if self.active_adapter not in self.lora_A.keys():\n return\n if self.merged:\n warnings.warn(\"Already merged. Nothing to do.\")\n return\n if self.r[self.active_adapter] > 0:\n self.weight.data += (\n transpose(\n self.lora_B[self.active_adapter]\n @ (self.lora_A[self.active_adapter] * self.lora_E[self.active_adapter]),\n self.fan_in_fan_out,\n )\n * self.scaling[self.active_adapter]\n / (self.ranknum[self.active_adapter] + 1e-5)\n )\n self.merged = True\n\n def unmerge(self):\n if self.active_adapter not in self.lora_A.keys():\n return\n if not self.merged:\n warnings.warn(\"Already unmerged. Nothing to do.\")\n return\n if self.r[self.active_adapter] > 0:\n self.weight.data -= (\n transpose(\n self.lora_B[self.active_adapter]\n @ (self.lora_A[self.active_adapter] * self.lora_E[self.active_adapter])\n )\n * self.scaling[self.active_adapter]\n / (self.ranknum[self.active_adapter] + 1e-5)\n )\n self.merged = False\n\n def forward(self, x: torch.Tensor):\n if self.active_adapter not in self.lora_A.keys():\n return F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)\n if self.disable_adapters:\n if self.r[self.active_adapter] > 0 and self.merged:\n self.unmerge()\n result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)\n elif self.r[self.active_adapter] > 0 and not self.merged:\n result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)\n result += (\n (\n self.lora_dropout[self.active_adapter](x)\n @ (self.lora_A[self.active_adapter] * self.lora_E[self.active_adapter]).T\n @ self.lora_B[self.active_adapter].T\n )\n * self.scaling[self.active_adapter]\n / (self.ranknum[self.active_adapter] + 1e-5)\n )\n else:\n result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)\n return result\n\n\nif is_bnb_available():\n\n class SVDLinear8bitLt(bnb.nn.Linear8bitLt, AdaLoraLayer):\n # Low-rank matrix for SVD-based adaptation\n def __init__(\n self,\n adapter_name,\n in_features,\n out_features,\n r: int = 0,\n lora_alpha: int = 1,\n lora_dropout: float = 0.0,\n **kwargs,\n ):\n bnb.nn.Linear8bitLt.__init__(\n self,\n in_features,\n out_features,\n bias=kwargs.get(\"bias\", True),\n has_fp16_weights=kwargs.get(\"has_fp16_weights\", True),\n memory_efficient_backward=kwargs.get(\"memory_efficient_backward\", False),\n threshold=kwargs.get(\"threshold\", 0.0),\n index=kwargs.get(\"index\", None),\n )\n AdaLoraLayer.__init__(self, in_features=in_features, out_features=out_features)\n # Freezing the pre-trained weight matrix\n self.weight.requires_grad = False\n\n init_lora_weights = kwargs.pop(\"init_lora_weights\", True)\n self.update_layer(adapter_name, r, lora_alpha, lora_dropout, init_lora_weights)\n self.active_adapter = adapter_name\n\n def forward(self, x: torch.Tensor):\n result = super().forward(x)\n\n if self.disable_adapters or self.active_adapter not in self.lora_A.keys():\n return result\n elif self.r[self.active_adapter] > 0:\n if not torch.is_autocast_enabled():\n expected_dtype = result.dtype\n\n if x.dtype != torch.float32:\n x = x.float()\n output = (\n (\n self.lora_dropout[self.active_adapter](x)\n @ (self.lora_A[self.active_adapter] * self.lora_E[self.active_adapter]).T\n @ self.lora_B[self.active_adapter].T\n ).to(expected_dtype)\n * self.scaling[self.active_adapter]\n / (self.ranknum[self.active_adapter] + 1e-5)\n )\n else:\n output = (\n (\n self.lora_dropout[self.active_adapter](x)\n @ (self.lora_A[self.active_adapter] * self.lora_E[self.active_adapter]).T\n @ 
self.lora_B[self.active_adapter].T\n )\n * self.scaling[self.active_adapter]\n / (self.ranknum[self.active_adapter] + 1e-5)\n )\n result = result + output\n return result\n\n\nif is_bnb_4bit_available():\n\n class SVDLinear4bit(bnb.nn.Linear4bit, AdaLoraLayer):\n # Low-rank matrix for SVD-based adaptation\n def __init__(\n self,\n adapter_name,\n in_features,\n out_features,\n r: int = 0,\n lora_alpha: int = 1,\n lora_dropout: float = 0.0,\n **kwargs,\n ):\n bnb.nn.Linear4bit.__init__(\n self,\n in_features,\n out_features,\n bias=kwargs.get(\"bias\", True),\n compute_dtype=kwargs.get(\"compute_dtype\", torch.float32),\n compress_statistics=kwargs.get(\"compress_statistics\", True),\n quant_type=kwargs.get(\"quant_type\", \"nf4\"),\n )\n AdaLoraLayer.__init__(self, in_features=in_features, out_features=out_features)\n # Freezing the pre-trained weight matrix\n self.weight.requires_grad = False\n\n init_lora_weights = kwargs.pop(\"init_lora_weights\", True)\n self.update_layer(adapter_name, r, lora_alpha, lora_dropout, init_lora_weights)\n self.active_adapter = adapter_name\n\n def forward(self, x: torch.Tensor):\n result = super().forward(x)\n\n if self.disable_adapters or self.active_adapter not in self.lora_A.keys():\n return result\n elif self.r[self.active_adapter] > 0:\n if not torch.is_autocast_enabled():\n expected_dtype = result.dtype\n\n if x.dtype != torch.float32:\n x = x.float()\n output = (\n (\n self.lora_dropout[self.active_adapter](x)\n @ (self.lora_A[self.active_adapter] * self.lora_E[self.active_adapter]).T\n @ self.lora_B[self.active_adapter].T\n ).to(expected_dtype)\n * self.scaling[self.active_adapter]\n / (self.ranknum[self.active_adapter] + 1e-5)\n )\n else:\n output = (\n (\n self.lora_dropout[self.active_adapter](x)\n @ (self.lora_A[self.active_adapter] * self.lora_E[self.active_adapter]).T\n @ self.lora_B[self.active_adapter].T\n )\n * self.scaling[self.active_adapter]\n / (self.ranknum[self.active_adapter] + 1e-5)\n )\n result = result + output\n return result\n\n\nclass RankAllocator(object):\n \"\"\"\n The RankAllocator for AdaLoraModel. 
Paper: https://openreview.net/pdf?id=lq62uWRJjiY\n\n Args:\n config ([`AdaLoraConfig`]): The configuration of the AdaLora model.\n model: the model that we apply AdaLoRA to.\n\n \"\"\"\n\n def __init__(self, model, peft_config, adapter_name):\n self.peft_config = peft_config\n self.adapter_name = adapter_name\n self.beta1 = peft_config.beta1\n self.beta2 = peft_config.beta2\n assert self.beta1 > 0 and self.beta1 < 1\n assert self.beta2 > 0 and self.beta2 < 1\n\n self.reset_ipt()\n self._set_budget_scheduler(model)\n\n def set_total_step(self, total_step):\n self.peft_config.total_step = total_step\n\n def reset_ipt(self):\n self.ipt = {}\n self.exp_avg_ipt = {}\n self.exp_avg_unc = {}\n\n def _set_budget_scheduler(self, model):\n self.init_bgt = 0\n self.name_set = set()\n for n, p in model.named_parameters():\n if f\"lora_A.{self.adapter_name}\" in n:\n self.init_bgt += p.size(0)\n self.name_set.add(n.replace(\"lora_A\", \"%s\"))\n self.name_set = sorted(self.name_set)\n # The total final rank budget\n self.target_bgt = self.peft_config.target_r * len(self.name_set)\n\n def budget_schedule(self, step: int):\n tinit = self.peft_config.tinit\n tfinal = self.peft_config.tfinal\n total_step = self.peft_config.total_step\n # Initial warmup\n if step <= tinit:\n budget = self.init_bgt\n mask_ind = False\n # Final fine-tuning\n elif step > total_step - tfinal:\n budget = self.target_bgt\n mask_ind = True\n else:\n # Budget decreasing with a cubic scheduler\n mul_coeff = 1 - (step - tinit) / (total_step - tfinal - tinit)\n budget = int((self.init_bgt - self.target_bgt) * (mul_coeff**3) + self.target_bgt)\n mask_ind = True if step % self.peft_config.deltaT == 0 else False\n return budget, mask_ind\n\n def update_ipt(self, model):\n # Update the sensitivity and uncertainty for every weight\n for n, p in model.named_parameters():\n if \"lora_\" in n and self.adapter_name in n:\n if n not in self.ipt:\n self.ipt[n] = torch.zeros_like(p)\n self.exp_avg_ipt[n] = torch.zeros_like(p)\n self.exp_avg_unc[n] = torch.zeros_like(p)\n with torch.no_grad():\n self.ipt[n] = (p * p.grad).abs().detach()\n # Sensitivity smoothing\n self.exp_avg_ipt[n] = self.beta1 * self.exp_avg_ipt[n] + (1 - self.beta1) * self.ipt[n]\n # Uncertainty quantification\n self.exp_avg_unc[n] = (\n self.beta2 * self.exp_avg_unc[n] + (1 - self.beta2) * (self.ipt[n] - self.exp_avg_ipt[n]).abs()\n )\n\n def _element_score(self, n):\n return self.exp_avg_ipt[n] * self.exp_avg_unc[n]\n\n def _combine_ipt(self, ipt_E, ipt_AB):\n ipt_AB = ipt_AB.sum(dim=1, keepdim=False)\n sum_ipt = ipt_E.view(-1) + ipt_AB.view(-1)\n return sum_ipt\n\n def mask_to_budget(self, model, budget):\n value_ipt = {}\n vector_ipt = {}\n triplet_ipt = {}\n # Get the importance score for A, E, B\n for n, p in model.named_parameters():\n if f\"lora_A.{self.adapter_name}\" in n:\n entry_ipt = self._element_score(n)\n comb_ipt = torch.mean(entry_ipt, dim=1, keepdim=True)\n name_m = n.replace(\"lora_A\", \"%s\")\n if name_m not in vector_ipt:\n vector_ipt[name_m] = [comb_ipt]\n else:\n vector_ipt[name_m].append(comb_ipt)\n if f\"lora_B.{self.adapter_name}\" in n:\n entry_ipt = self._element_score(n)\n comb_ipt = torch.mean(entry_ipt, dim=0, keepdim=False).view(-1, 1)\n name_m = n.replace(\"lora_B\", \"%s\")\n if name_m not in vector_ipt:\n vector_ipt[name_m] = [comb_ipt]\n else:\n vector_ipt[name_m].append(comb_ipt)\n if f\"lora_E.{self.adapter_name}\" in n:\n entry_ipt = self._element_score(n)\n name_m = n.replace(\"lora_E\", \"%s\")\n value_ipt[name_m] = entry_ipt\n\n 
all_score = []\n # Calculate the score for each triplet\n for name_m in vector_ipt:\n ipt_E = value_ipt[name_m]\n ipt_AB = torch.cat(vector_ipt[name_m], dim=1)\n sum_ipt = self._combine_ipt(ipt_E, ipt_AB)\n name_E = name_m % \"lora_E\"\n triplet_ipt[name_E] = sum_ipt.view(-1, 1)\n all_score.append(sum_ipt.view(-1))\n\n # Get the threshold by ranking ipt\n mask_threshold = torch.kthvalue(\n torch.cat(all_score),\n k=self.init_bgt - budget,\n )[0].item()\n\n rank_pattern = {}\n # Mask the unimportant triplets\n with torch.no_grad():\n for n, p in model.named_parameters():\n if f\"lora_E.{self.adapter_name}\" in n:\n p.masked_fill_(triplet_ipt[n] <= mask_threshold, 0.0)\n rank_pattern[n] = (~(triplet_ipt[n] <= mask_threshold)).view(-1).tolist()\n return rank_pattern\n\n def update_and_allocate(self, model, global_step, force_mask=False):\n # # Update the importance score and allocate the budget\n if global_step < self.peft_config.total_step - self.peft_config.tfinal:\n self.update_ipt(model)\n budget, mask_ind = self.budget_schedule(global_step)\n # Allocate the budget according to importance scores\n if mask_ind or force_mask:\n rank_pattern = self.mask_to_budget(model, budget)\n else:\n rank_pattern = None\n return budget, rank_pattern\n\n def mask_using_rank_pattern(self, model, rank_pattern):\n # Mask the unimportant triplets\n is_adapter_name_truncated = False\n if self.adapter_name not in next(iter(rank_pattern.keys())):\n is_adapter_name_truncated = True\n\n with torch.no_grad():\n for n, p in model.named_parameters():\n if f\"lora_E.{self.adapter_name}\" in n:\n key = n if not is_adapter_name_truncated else n.replace(f\".{self.adapter_name}\", \"\")\n mask = torch.Tensor(rank_pattern[key]).unsqueeze(-1).to(p.device)\n p.masked_fill_(~mask.bool(), 0.0)\n",
"path": "src/peft/tuners/adalora.py"
}
] | diff --git a/src/peft/tuners/adalora.py b/src/peft/tuners/adalora.py
index d1ff7f2e4f..aff877adac 100644
--- a/src/peft/tuners/adalora.py
+++ b/src/peft/tuners/adalora.py
@@ -523,6 +523,9 @@ def forward(self, x: torch.Tensor):
result = result + output
return result
+
+if is_bnb_4bit_available():
+
class SVDLinear4bit(bnb.nn.Linear4bit, AdaLoraLayer):
# Low-rank matrix for SVD-based adaptation
def __init__(
|
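All three SVDLinear variants in the adalora record above build their adapter update from the same low-rank product, `delta_W = lora_B @ (lora_A * lora_E) * scaling / (ranknum + 1e-5)`, applied either to the frozen weight (`merge`/`unmerge`) or to the activations (`forward`). The short torch sketch below checks that equivalence; the shapes and random values are invented for illustration, and dropout, `fan_in_fan_out` handling and the quantized paths are omitted.

```python
import torch

# Illustrative shapes only -- not peft defaults.
out_features, in_features, r = 4, 3, 2
A = torch.randn(r, in_features)   # right singular vectors (lora_A)
E = torch.randn(r, 1)             # singular values (lora_E), broadcast over the rows of A
B = torch.randn(out_features, r)  # left singular vectors (lora_B)
scaling, ranknum = 1.0, float(r)

# Weight-space update used by merge()/unmerge()
delta_W = B @ (A * E) * scaling / (ranknum + 1e-5)            # (out_features, in_features)

# Activation-space path used by forward(): x @ (A*E).T @ B.T
x = torch.randn(5, in_features)
low_rank_out = x @ (A * E).T @ B.T * scaling / (ranknum + 1e-5)

# Both routes contribute the same term to the layer output.
assert torch.allclose(x @ delta_W.T, low_rank_out, atol=1e-5)
```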
pyload__pyload-1733 | HOOK LinkdecrypterCom: 'LinkdecrypterComHook' object has no attribute 'COOKIES'
03.08.2015 20:46:43 INFO Free space: 6.48 TiB
03.08.2015 20:46:43 INFO Activating Accounts...
03.08.2015 20:46:43 INFO Activating Plugins...
03.08.2015 20:46:43 WARNING HOOK AntiStandby: Unable to change system power state | [Errno 2] No such file or directory
03.08.2015 20:46:43 WARNING HOOK AntiStandby: Unable to change display power state | [Errno 2] No such file or directory
03.08.2015 20:46:43 INFO HOOK XFileSharingPro: Handling any hoster I can!
03.08.2015 20:46:43 WARNING HOOK UpdateManager: Unable to retrieve server to get updates
03.08.2015 20:46:43 INFO HOOK XFileSharingPro: Handling any crypter I can!
03.08.2015 20:46:43 INFO pyLoad is up and running
03.08.2015 20:46:45 INFO HOOK LinkdecrypterCom: Reloading supported crypter list
03.08.2015 20:46:45 WARNING HOOK LinkdecrypterCom: 'LinkdecrypterComHook' object has no attribute 'COOKIES' | Waiting 1 minute and retry
03.08.2015 20:46:53 INFO HOOK ClickAndLoad: Proxy listening on 127.0.0.1:9666
03.08.2015 20:46:53 INFO HOOK LinkdecrypterCom: Reloading supported crypter list
03.08.2015 20:46:53 WARNING HOOK LinkdecrypterCom: 'LinkdecrypterComHook' object has no attribute 'COOKIES' | Waiting 1 minute and retry
03.08.2015 20:47:45 WARNING HOOK LinkdecrypterCom: 'LinkdecrypterComHook' object has no attribute 'COOKIES' | Waiting 1 minute and retry
03.08.2015 20:47:53 WARNING HOOK LinkdecrypterCom: 'LinkdecrypterComHook' object has no attribute 'COOKIES' | Waiting 1 minute and retry
| [
{
"content": "# -*- coding: utf-8 -*-\n\nimport re\n\nfrom module.plugins.internal.MultiHook import MultiHook\n\n\nclass LinkdecrypterComHook(MultiHook):\n __name__ = \"LinkdecrypterComHook\"\n __type__ = \"hook\"\n __version__ = \"1.07\"\n __status__ = \"testing\"\n\n __config__ = [(\"activated\" , \"bool\" , \"Activated\" , True ),\n (\"pluginmode\" , \"all;listed;unlisted\", \"Use for plugins\" , \"all\"),\n (\"pluginlist\" , \"str\" , \"Plugin list (comma separated)\", \"\" ),\n (\"reload\" , \"bool\" , \"Reload plugin list\" , True ),\n (\"reloadinterval\", \"int\" , \"Reload interval in hours\" , 12 )]\n\n __description__ = \"\"\"Linkdecrypter.com hook plugin\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"Walter Purcaro\", \"[email protected]\")]\n\n\n def get_hosters(self):\n list = re.search(r'>Supported\\(\\d+\\)</b>: <i>(.[\\w.\\-, ]+)',\n self.load(\"http://linkdecrypter.com/\").replace(\"(g)\", \"\")).group(1).split(', ')\n try:\n list.remove(\"download.serienjunkies.org\")\n except ValueError:\n pass\n\n return list\n",
"path": "module/plugins/hooks/LinkdecrypterComHook.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n\nimport re\n\nfrom module.plugins.internal.MultiHook import MultiHook\n\n\nclass LinkdecrypterComHook(MultiHook):\n __name__ = \"LinkdecrypterComHook\"\n __type__ = \"hook\"\n __version__ = \"1.07\"\n __status__ = \"testing\"\n\n __config__ = [(\"activated\" , \"bool\" , \"Activated\" , True ),\n (\"pluginmode\" , \"all;listed;unlisted\", \"Use for plugins\" , \"all\"),\n (\"pluginlist\" , \"str\" , \"Plugin list (comma separated)\", \"\" ),\n (\"reload\" , \"bool\" , \"Reload plugin list\" , True ),\n (\"reloadinterval\", \"int\" , \"Reload interval in hours\" , 12 )]\n\n __description__ = \"\"\"Linkdecrypter.com hook plugin\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"Walter Purcaro\", \"[email protected]\")]\n\n COOKIES = False\n\n def get_hosters(self):\n list = re.search(r'>Supported\\(\\d+\\)</b>: <i>(.[\\w.\\-, ]+)',\n self.load(\"http://linkdecrypter.com/\").replace(\"(g)\", \"\")).group(1).split(', ')\n try:\n list.remove(\"download.serienjunkies.org\")\n except ValueError:\n pass\n\n return list\n",
"path": "module/plugins/hooks/LinkdecrypterComHook.py"
}
] | diff --git a/module/plugins/hooks/LinkdecrypterComHook.py b/module/plugins/hooks/LinkdecrypterComHook.py
index 6930afdb50..bf437fb6d8 100644
--- a/module/plugins/hooks/LinkdecrypterComHook.py
+++ b/module/plugins/hooks/LinkdecrypterComHook.py
@@ -21,6 +21,7 @@ class LinkdecrypterComHook(MultiHook):
__license__ = "GPLv3"
__authors__ = [("Walter Purcaro", "[email protected]")]
+ COOKIES = False
def get_hosters(self):
list = re.search(r'>Supported\(\d+\)</b>: <i>(.[\w.\-, ]+)',
|
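The one-line patch above is the entire fix: the crypter-list reload fails with AttributeError, presumably because the plugin base code looks up a class-level `COOKIES` setting on the concrete hook, and LinkdecrypterComHook never declared one. The toy reproduction below models that failure mode; the class names and the base-class behaviour are assumptions for illustration, not pyload code.

```python
# BaseHook stands in for the assumed base-class attribute lookup -- not real pyload code.
class BaseHook:
    def load(self, url):
        # the base class reads a class-level setting from the concrete plugin
        return f"GET {url} cookies={self.COOKIES}"

class BrokenHook(BaseHook):
    pass  # no COOKIES declared -> AttributeError on every load()

class FixedHook(BaseHook):
    COOKIES = False  # the one-line fix: declare the attribute the base class expects

try:
    BrokenHook().load("http://linkdecrypter.com/")
except AttributeError as err:
    print(err)  # 'BrokenHook' object has no attribute 'COOKIES'

print(FixedHook().load("http://linkdecrypter.com/"))  # GET http://linkdecrypter.com/ cookies=False
```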
networkx__networkx-6600 | Error in method description in ismags.py
The docstring of `partition_to_color` method in ismags.py seems off to me. The description is not clear, and it's hard to understand what the method is supposed to do.
```python
def partition_to_color(partitions):
"""
Creates a dictionary with for every item in partition for every partition
in partitions the index of partition in partitions.
Parameters
----------
partitions: collections.abc.Sequence[collections.abc.Iterable]
As returned by :func:`make_partitions`.
Returns
-------
dict
"""
colors = {}
for color, keys in enumerate(partitions):
for key in keys:
colors[key] = color
return colors
```
I think the following description explains the method better.
```python
def partition_to_color(partitions):
"""
Creates a dictionary that maps each item in each partition to the index of
the partition it belongs to
"""
```
If the new description looks alright, I'll go ahead and make the changes.
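Whichever wording is adopted, the behaviour being documented is easy to pin down with a tiny example. The function body below is copied from the issue; the input is a toy value rather than anything from the networkx test suite.

```python
def partition_to_color(partitions):
    colors = {}
    for color, keys in enumerate(partitions):
        for key in keys:
            colors[key] = color
    return colors

partitions = [{"a", "b"}, {"c"}, {"d", "e"}]
colors = partition_to_color(partitions)
# Every item maps to the index of the partition that contains it.
assert colors == {"a": 0, "b": 0, "c": 1, "d": 2, "e": 2}
```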
| [
{
"content": "\"\"\"\n****************\nISMAGS Algorithm\n****************\n\nProvides a Python implementation of the ISMAGS algorithm. [1]_\n\nIt is capable of finding (subgraph) isomorphisms between two graphs, taking the\nsymmetry of the subgraph into account. In most cases the VF2 algorithm is\nfaster (at least on small graphs) than this implementation, but in some cases\nthere is an exponential number of isomorphisms that are symmetrically\nequivalent. In that case, the ISMAGS algorithm will provide only one solution\nper symmetry group.\n\n>>> petersen = nx.petersen_graph()\n>>> ismags = nx.isomorphism.ISMAGS(petersen, petersen)\n>>> isomorphisms = list(ismags.isomorphisms_iter(symmetry=False))\n>>> len(isomorphisms)\n120\n>>> isomorphisms = list(ismags.isomorphisms_iter(symmetry=True))\n>>> answer = [{0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9}]\n>>> answer == isomorphisms\nTrue\n\nIn addition, this implementation also provides an interface to find the\nlargest common induced subgraph [2]_ between any two graphs, again taking\nsymmetry into account. Given `graph` and `subgraph` the algorithm will remove\nnodes from the `subgraph` until `subgraph` is isomorphic to a subgraph of\n`graph`. Since only the symmetry of `subgraph` is taken into account it is\nworth thinking about how you provide your graphs:\n\n>>> graph1 = nx.path_graph(4)\n>>> graph2 = nx.star_graph(3)\n>>> ismags = nx.isomorphism.ISMAGS(graph1, graph2)\n>>> ismags.is_isomorphic()\nFalse\n>>> largest_common_subgraph = list(ismags.largest_common_subgraph())\n>>> answer = [{1: 0, 0: 1, 2: 2}, {2: 0, 1: 1, 3: 2}]\n>>> answer == largest_common_subgraph\nTrue\n>>> ismags2 = nx.isomorphism.ISMAGS(graph2, graph1)\n>>> largest_common_subgraph = list(ismags2.largest_common_subgraph())\n>>> answer = [\n... {1: 0, 0: 1, 2: 2},\n... {1: 0, 0: 1, 3: 2},\n... {2: 0, 0: 1, 1: 2},\n... {2: 0, 0: 1, 3: 2},\n... {3: 0, 0: 1, 1: 2},\n... {3: 0, 0: 1, 2: 2},\n... ]\n>>> answer == largest_common_subgraph\nTrue\n\nHowever, when not taking symmetry into account, it doesn't matter:\n\n>>> largest_common_subgraph = list(ismags.largest_common_subgraph(symmetry=False))\n>>> answer = [\n... {1: 0, 0: 1, 2: 2},\n... {1: 0, 2: 1, 0: 2},\n... {2: 0, 1: 1, 3: 2},\n... {2: 0, 3: 1, 1: 2},\n... {1: 0, 0: 1, 2: 3},\n... {1: 0, 2: 1, 0: 3},\n... {2: 0, 1: 1, 3: 3},\n... {2: 0, 3: 1, 1: 3},\n... {1: 0, 0: 2, 2: 3},\n... {1: 0, 2: 2, 0: 3},\n... {2: 0, 1: 2, 3: 3},\n... {2: 0, 3: 2, 1: 3},\n... ]\n>>> answer == largest_common_subgraph\nTrue\n>>> largest_common_subgraph = list(ismags2.largest_common_subgraph(symmetry=False))\n>>> answer = [\n... {1: 0, 0: 1, 2: 2},\n... {1: 0, 0: 1, 3: 2},\n... {2: 0, 0: 1, 1: 2},\n... {2: 0, 0: 1, 3: 2},\n... {3: 0, 0: 1, 1: 2},\n... {3: 0, 0: 1, 2: 2},\n... {1: 1, 0: 2, 2: 3},\n... {1: 1, 0: 2, 3: 3},\n... {2: 1, 0: 2, 1: 3},\n... {2: 1, 0: 2, 3: 3},\n... {3: 1, 0: 2, 1: 3},\n... {3: 1, 0: 2, 2: 3},\n... ]\n>>> answer == largest_common_subgraph\nTrue\n\nNotes\n-----\n - The current implementation works for undirected graphs only. The algorithm\n in general should work for directed graphs as well though.\n - Node keys for both provided graphs need to be fully orderable as well as\n hashable.\n - Node and edge equality is assumed to be transitive: if A is equal to B, and\n B is equal to C, then A is equal to C.\n\nReferences\n----------\n .. [1] M. Houbraken, S. Demeyer, T. Michoel, P. Audenaert, D. Colle,\n M. 
Pickavet, \"The Index-Based Subgraph Matching Algorithm with General\n Symmetries (ISMAGS): Exploiting Symmetry for Faster Subgraph\n Enumeration\", PLoS One 9(5): e97896, 2014.\n https://doi.org/10.1371/journal.pone.0097896\n .. [2] https://en.wikipedia.org/wiki/Maximum_common_induced_subgraph\n\"\"\"\n\n__all__ = [\"ISMAGS\"]\n\nimport itertools\nfrom collections import Counter, defaultdict\nfrom functools import reduce, wraps\n\n\ndef are_all_equal(iterable):\n \"\"\"\n Returns ``True`` if and only if all elements in `iterable` are equal; and\n ``False`` otherwise.\n\n Parameters\n ----------\n iterable: collections.abc.Iterable\n The container whose elements will be checked.\n\n Returns\n -------\n bool\n ``True`` iff all elements in `iterable` compare equal, ``False``\n otherwise.\n \"\"\"\n try:\n shape = iterable.shape\n except AttributeError:\n pass\n else:\n if len(shape) > 1:\n message = \"The function does not works on multidimensional arrays.\"\n raise NotImplementedError(message) from None\n\n iterator = iter(iterable)\n first = next(iterator, None)\n return all(item == first for item in iterator)\n\n\ndef make_partitions(items, test):\n \"\"\"\n Partitions items into sets based on the outcome of ``test(item1, item2)``.\n Pairs of items for which `test` returns `True` end up in the same set.\n\n Parameters\n ----------\n items : collections.abc.Iterable[collections.abc.Hashable]\n Items to partition\n test : collections.abc.Callable[collections.abc.Hashable, collections.abc.Hashable]\n A function that will be called with 2 arguments, taken from items.\n Should return `True` if those 2 items need to end up in the same\n partition, and `False` otherwise.\n\n Returns\n -------\n list[set]\n A list of sets, with each set containing part of the items in `items`,\n such that ``all(test(*pair) for pair in itertools.combinations(set, 2))\n == True``\n\n Notes\n -----\n The function `test` is assumed to be transitive: if ``test(a, b)`` and\n ``test(b, c)`` return ``True``, then ``test(a, c)`` must also be ``True``.\n \"\"\"\n partitions = []\n for item in items:\n for partition in partitions:\n p_item = next(iter(partition))\n if test(item, p_item):\n partition.add(item)\n break\n else: # No break\n partitions.append({item})\n return partitions\n\n\ndef partition_to_color(partitions):\n \"\"\"\n Creates a dictionary with for every item in partition for every partition\n in partitions the index of partition in partitions.\n\n Parameters\n ----------\n partitions: collections.abc.Sequence[collections.abc.Iterable]\n As returned by :func:`make_partitions`.\n\n Returns\n -------\n dict\n \"\"\"\n colors = {}\n for color, keys in enumerate(partitions):\n for key in keys:\n colors[key] = color\n return colors\n\n\ndef intersect(collection_of_sets):\n \"\"\"\n Given an collection of sets, returns the intersection of those sets.\n\n Parameters\n ----------\n collection_of_sets: collections.abc.Collection[set]\n A collection of sets.\n\n Returns\n -------\n set\n An intersection of all sets in `collection_of_sets`. Will have the same\n type as the item initially taken from `collection_of_sets`.\n \"\"\"\n collection_of_sets = list(collection_of_sets)\n first = collection_of_sets.pop()\n out = reduce(set.intersection, collection_of_sets, set(first))\n return type(first)(out)\n\n\nclass ISMAGS:\n \"\"\"\n Implements the ISMAGS subgraph matching algorithm. [1]_ ISMAGS stands for\n \"Index-based Subgraph Matching Algorithm with General Symmetries\". 
As the\n name implies, it is symmetry aware and will only generate non-symmetric\n isomorphisms.\n\n Notes\n -----\n The implementation imposes additional conditions compared to the VF2\n algorithm on the graphs provided and the comparison functions\n (:attr:`node_equality` and :attr:`edge_equality`):\n\n - Node keys in both graphs must be orderable as well as hashable.\n - Equality must be transitive: if A is equal to B, and B is equal to C,\n then A must be equal to C.\n\n Attributes\n ----------\n graph: networkx.Graph\n subgraph: networkx.Graph\n node_equality: collections.abc.Callable\n The function called to see if two nodes should be considered equal.\n It's signature looks like this:\n ``f(graph1: networkx.Graph, node1, graph2: networkx.Graph, node2) -> bool``.\n `node1` is a node in `graph1`, and `node2` a node in `graph2`.\n Constructed from the argument `node_match`.\n edge_equality: collections.abc.Callable\n The function called to see if two edges should be considered equal.\n It's signature looks like this:\n ``f(graph1: networkx.Graph, edge1, graph2: networkx.Graph, edge2) -> bool``.\n `edge1` is an edge in `graph1`, and `edge2` an edge in `graph2`.\n Constructed from the argument `edge_match`.\n\n References\n ----------\n .. [1] M. Houbraken, S. Demeyer, T. Michoel, P. Audenaert, D. Colle,\n M. Pickavet, \"The Index-Based Subgraph Matching Algorithm with General\n Symmetries (ISMAGS): Exploiting Symmetry for Faster Subgraph\n Enumeration\", PLoS One 9(5): e97896, 2014.\n https://doi.org/10.1371/journal.pone.0097896\n \"\"\"\n\n def __init__(self, graph, subgraph, node_match=None, edge_match=None, cache=None):\n \"\"\"\n Parameters\n ----------\n graph: networkx.Graph\n subgraph: networkx.Graph\n node_match: collections.abc.Callable or None\n Function used to determine whether two nodes are equivalent. Its\n signature should look like ``f(n1: dict, n2: dict) -> bool``, with\n `n1` and `n2` node property dicts. See also\n :func:`~networkx.algorithms.isomorphism.categorical_node_match` and\n friends.\n If `None`, all nodes are considered equal.\n edge_match: collections.abc.Callable or None\n Function used to determine whether two edges are equivalent. Its\n signature should look like ``f(e1: dict, e2: dict) -> bool``, with\n `e1` and `e2` edge property dicts. See also\n :func:`~networkx.algorithms.isomorphism.categorical_edge_match` and\n friends.\n If `None`, all edges are considered equal.\n cache: collections.abc.Mapping\n A cache used for caching graph symmetries.\n \"\"\"\n # TODO: graph and subgraph setter methods that invalidate the caches.\n # TODO: allow for precomputed partitions and colors\n self.graph = graph\n self.subgraph = subgraph\n self._symmetry_cache = cache\n # Naming conventions are taken from the original paper. 
For your\n # sanity:\n # sg: subgraph\n # g: graph\n # e: edge(s)\n # n: node(s)\n # So: sgn means \"subgraph nodes\".\n self._sgn_partitions_ = None\n self._sge_partitions_ = None\n\n self._sgn_colors_ = None\n self._sge_colors_ = None\n\n self._gn_partitions_ = None\n self._ge_partitions_ = None\n\n self._gn_colors_ = None\n self._ge_colors_ = None\n\n self._node_compat_ = None\n self._edge_compat_ = None\n\n if node_match is None:\n self.node_equality = self._node_match_maker(lambda n1, n2: True)\n self._sgn_partitions_ = [set(self.subgraph.nodes)]\n self._gn_partitions_ = [set(self.graph.nodes)]\n self._node_compat_ = {0: 0}\n else:\n self.node_equality = self._node_match_maker(node_match)\n if edge_match is None:\n self.edge_equality = self._edge_match_maker(lambda e1, e2: True)\n self._sge_partitions_ = [set(self.subgraph.edges)]\n self._ge_partitions_ = [set(self.graph.edges)]\n self._edge_compat_ = {0: 0}\n else:\n self.edge_equality = self._edge_match_maker(edge_match)\n\n @property\n def _sgn_partitions(self):\n if self._sgn_partitions_ is None:\n\n def nodematch(node1, node2):\n return self.node_equality(self.subgraph, node1, self.subgraph, node2)\n\n self._sgn_partitions_ = make_partitions(self.subgraph.nodes, nodematch)\n return self._sgn_partitions_\n\n @property\n def _sge_partitions(self):\n if self._sge_partitions_ is None:\n\n def edgematch(edge1, edge2):\n return self.edge_equality(self.subgraph, edge1, self.subgraph, edge2)\n\n self._sge_partitions_ = make_partitions(self.subgraph.edges, edgematch)\n return self._sge_partitions_\n\n @property\n def _gn_partitions(self):\n if self._gn_partitions_ is None:\n\n def nodematch(node1, node2):\n return self.node_equality(self.graph, node1, self.graph, node2)\n\n self._gn_partitions_ = make_partitions(self.graph.nodes, nodematch)\n return self._gn_partitions_\n\n @property\n def _ge_partitions(self):\n if self._ge_partitions_ is None:\n\n def edgematch(edge1, edge2):\n return self.edge_equality(self.graph, edge1, self.graph, edge2)\n\n self._ge_partitions_ = make_partitions(self.graph.edges, edgematch)\n return self._ge_partitions_\n\n @property\n def _sgn_colors(self):\n if self._sgn_colors_ is None:\n self._sgn_colors_ = partition_to_color(self._sgn_partitions)\n return self._sgn_colors_\n\n @property\n def _sge_colors(self):\n if self._sge_colors_ is None:\n self._sge_colors_ = partition_to_color(self._sge_partitions)\n return self._sge_colors_\n\n @property\n def _gn_colors(self):\n if self._gn_colors_ is None:\n self._gn_colors_ = partition_to_color(self._gn_partitions)\n return self._gn_colors_\n\n @property\n def _ge_colors(self):\n if self._ge_colors_ is None:\n self._ge_colors_ = partition_to_color(self._ge_partitions)\n return self._ge_colors_\n\n @property\n def _node_compatibility(self):\n if self._node_compat_ is not None:\n return self._node_compat_\n self._node_compat_ = {}\n for sgn_part_color, gn_part_color in itertools.product(\n range(len(self._sgn_partitions)), range(len(self._gn_partitions))\n ):\n sgn = next(iter(self._sgn_partitions[sgn_part_color]))\n gn = next(iter(self._gn_partitions[gn_part_color]))\n if self.node_equality(self.subgraph, sgn, self.graph, gn):\n self._node_compat_[sgn_part_color] = gn_part_color\n return self._node_compat_\n\n @property\n def _edge_compatibility(self):\n if self._edge_compat_ is not None:\n return self._edge_compat_\n self._edge_compat_ = {}\n for sge_part_color, ge_part_color in itertools.product(\n range(len(self._sge_partitions)), range(len(self._ge_partitions))\n 
):\n sge = next(iter(self._sge_partitions[sge_part_color]))\n ge = next(iter(self._ge_partitions[ge_part_color]))\n if self.edge_equality(self.subgraph, sge, self.graph, ge):\n self._edge_compat_[sge_part_color] = ge_part_color\n return self._edge_compat_\n\n @staticmethod\n def _node_match_maker(cmp):\n @wraps(cmp)\n def comparer(graph1, node1, graph2, node2):\n return cmp(graph1.nodes[node1], graph2.nodes[node2])\n\n return comparer\n\n @staticmethod\n def _edge_match_maker(cmp):\n @wraps(cmp)\n def comparer(graph1, edge1, graph2, edge2):\n return cmp(graph1.edges[edge1], graph2.edges[edge2])\n\n return comparer\n\n def find_isomorphisms(self, symmetry=True):\n \"\"\"Find all subgraph isomorphisms between subgraph and graph\n\n Finds isomorphisms where :attr:`subgraph` <= :attr:`graph`.\n\n Parameters\n ----------\n symmetry: bool\n Whether symmetry should be taken into account. If False, found\n isomorphisms may be symmetrically equivalent.\n\n Yields\n ------\n dict\n The found isomorphism mappings of {graph_node: subgraph_node}.\n \"\"\"\n # The networkx VF2 algorithm is slightly funny in when it yields an\n # empty dict and when not.\n if not self.subgraph:\n yield {}\n return\n elif not self.graph:\n return\n elif len(self.graph) < len(self.subgraph):\n return\n\n if symmetry:\n _, cosets = self.analyze_symmetry(\n self.subgraph, self._sgn_partitions, self._sge_colors\n )\n constraints = self._make_constraints(cosets)\n else:\n constraints = []\n\n candidates = self._find_nodecolor_candidates()\n la_candidates = self._get_lookahead_candidates()\n for sgn in self.subgraph:\n extra_candidates = la_candidates[sgn]\n if extra_candidates:\n candidates[sgn] = candidates[sgn] | {frozenset(extra_candidates)}\n\n if any(candidates.values()):\n start_sgn = min(candidates, key=lambda n: min(candidates[n], key=len))\n candidates[start_sgn] = (intersect(candidates[start_sgn]),)\n yield from self._map_nodes(start_sgn, candidates, constraints)\n else:\n return\n\n @staticmethod\n def _find_neighbor_color_count(graph, node, node_color, edge_color):\n \"\"\"\n For `node` in `graph`, count the number of edges of a specific color\n it has to nodes of a specific color.\n \"\"\"\n counts = Counter()\n neighbors = graph[node]\n for neighbor in neighbors:\n n_color = node_color[neighbor]\n if (node, neighbor) in edge_color:\n e_color = edge_color[node, neighbor]\n else:\n e_color = edge_color[neighbor, node]\n counts[e_color, n_color] += 1\n return counts\n\n def _get_lookahead_candidates(self):\n \"\"\"\n Returns a mapping of {subgraph node: collection of graph nodes} for\n which the graph nodes are feasible candidates for the subgraph node, as\n determined by looking ahead one edge.\n \"\"\"\n g_counts = {}\n for gn in self.graph:\n g_counts[gn] = self._find_neighbor_color_count(\n self.graph, gn, self._gn_colors, self._ge_colors\n )\n candidates = defaultdict(set)\n for sgn in self.subgraph:\n sg_count = self._find_neighbor_color_count(\n self.subgraph, sgn, self._sgn_colors, self._sge_colors\n )\n new_sg_count = Counter()\n for (sge_color, sgn_color), count in sg_count.items():\n try:\n ge_color = self._edge_compatibility[sge_color]\n gn_color = self._node_compatibility[sgn_color]\n except KeyError:\n pass\n else:\n new_sg_count[ge_color, gn_color] = count\n\n for gn, g_count in g_counts.items():\n if all(new_sg_count[x] <= g_count[x] for x in new_sg_count):\n # Valid candidate\n candidates[sgn].add(gn)\n return candidates\n\n def largest_common_subgraph(self, symmetry=True):\n \"\"\"\n Find the 
largest common induced subgraphs between :attr:`subgraph` and\n :attr:`graph`.\n\n Parameters\n ----------\n symmetry: bool\n Whether symmetry should be taken into account. If False, found\n largest common subgraphs may be symmetrically equivalent.\n\n Yields\n ------\n dict\n The found isomorphism mappings of {graph_node: subgraph_node}.\n \"\"\"\n # The networkx VF2 algorithm is slightly funny in when it yields an\n # empty dict and when not.\n if not self.subgraph:\n yield {}\n return\n elif not self.graph:\n return\n\n if symmetry:\n _, cosets = self.analyze_symmetry(\n self.subgraph, self._sgn_partitions, self._sge_colors\n )\n constraints = self._make_constraints(cosets)\n else:\n constraints = []\n\n candidates = self._find_nodecolor_candidates()\n\n if any(candidates.values()):\n yield from self._largest_common_subgraph(candidates, constraints)\n else:\n return\n\n def analyze_symmetry(self, graph, node_partitions, edge_colors):\n \"\"\"\n Find a minimal set of permutations and corresponding co-sets that\n describe the symmetry of `graph`, given the node and edge equalities\n given by `node_partitions` and `edge_colors`, respectively.\n\n Parameters\n ----------\n graph : networkx.Graph\n The graph whose symmetry should be analyzed.\n node_partitions : list of sets\n A list of sets containing node keys. Node keys in the same set\n are considered equivalent. Every node key in `graph` should be in\n exactly one of the sets. If all nodes are equivalent, this should\n be ``[set(graph.nodes)]``.\n edge_colors : dict mapping edges to their colors\n A dict mapping every edge in `graph` to its corresponding color.\n Edges with the same color are considered equivalent. If all edges\n are equivalent, this should be ``{e: 0 for e in graph.edges}``.\n\n\n Returns\n -------\n set[frozenset]\n The found permutations. This is a set of frozensets of pairs of node\n keys which can be exchanged without changing :attr:`subgraph`.\n dict[collections.abc.Hashable, set[collections.abc.Hashable]]\n The found co-sets. 
The co-sets is a dictionary of\n ``{node key: set of node keys}``.\n Every key-value pair describes which ``values`` can be interchanged\n without changing nodes less than ``key``.\n \"\"\"\n if self._symmetry_cache is not None:\n key = hash(\n (\n tuple(graph.nodes),\n tuple(graph.edges),\n tuple(map(tuple, node_partitions)),\n tuple(edge_colors.items()),\n )\n )\n if key in self._symmetry_cache:\n return self._symmetry_cache[key]\n node_partitions = list(\n self._refine_node_partitions(graph, node_partitions, edge_colors)\n )\n assert len(node_partitions) == 1\n node_partitions = node_partitions[0]\n permutations, cosets = self._process_ordered_pair_partitions(\n graph, node_partitions, node_partitions, edge_colors\n )\n if self._symmetry_cache is not None:\n self._symmetry_cache[key] = permutations, cosets\n return permutations, cosets\n\n def is_isomorphic(self, symmetry=False):\n \"\"\"\n Returns True if :attr:`graph` is isomorphic to :attr:`subgraph` and\n False otherwise.\n\n Returns\n -------\n bool\n \"\"\"\n return len(self.subgraph) == len(self.graph) and self.subgraph_is_isomorphic(\n symmetry\n )\n\n def subgraph_is_isomorphic(self, symmetry=False):\n \"\"\"\n Returns True if a subgraph of :attr:`graph` is isomorphic to\n :attr:`subgraph` and False otherwise.\n\n Returns\n -------\n bool\n \"\"\"\n # symmetry=False, since we only need to know whether there is any\n # example; figuring out all symmetry elements probably costs more time\n # than it gains.\n isom = next(self.subgraph_isomorphisms_iter(symmetry=symmetry), None)\n return isom is not None\n\n def isomorphisms_iter(self, symmetry=True):\n \"\"\"\n Does the same as :meth:`find_isomorphisms` if :attr:`graph` and\n :attr:`subgraph` have the same number of nodes.\n \"\"\"\n if len(self.graph) == len(self.subgraph):\n yield from self.subgraph_isomorphisms_iter(symmetry=symmetry)\n\n def subgraph_isomorphisms_iter(self, symmetry=True):\n \"\"\"Alternative name for :meth:`find_isomorphisms`.\"\"\"\n return self.find_isomorphisms(symmetry)\n\n def _find_nodecolor_candidates(self):\n \"\"\"\n Per node in subgraph find all nodes in graph that have the same color.\n \"\"\"\n candidates = defaultdict(set)\n for sgn in self.subgraph.nodes:\n sgn_color = self._sgn_colors[sgn]\n if sgn_color in self._node_compatibility:\n gn_color = self._node_compatibility[sgn_color]\n candidates[sgn].add(frozenset(self._gn_partitions[gn_color]))\n else:\n candidates[sgn].add(frozenset())\n candidates = dict(candidates)\n for sgn, options in candidates.items():\n candidates[sgn] = frozenset(options)\n return candidates\n\n @staticmethod\n def _make_constraints(cosets):\n \"\"\"\n Turn cosets into constraints.\n \"\"\"\n constraints = []\n for node_i, node_ts in cosets.items():\n for node_t in node_ts:\n if node_i != node_t:\n # Node i must be smaller than node t.\n constraints.append((node_i, node_t))\n return constraints\n\n @staticmethod\n def _find_node_edge_color(graph, node_colors, edge_colors):\n \"\"\"\n For every node in graph, come up with a color that combines 1) the\n color of the node, and 2) the number of edges of a color to each type\n of node.\n \"\"\"\n counts = defaultdict(lambda: defaultdict(int))\n for node1, node2 in graph.edges:\n if (node1, node2) in edge_colors:\n # FIXME directed graphs\n ecolor = edge_colors[node1, node2]\n else:\n ecolor = edge_colors[node2, node1]\n # Count per node how many edges it has of what color to nodes of\n # what color\n counts[node1][ecolor, node_colors[node2]] += 1\n counts[node2][ecolor, 
node_colors[node1]] += 1\n\n node_edge_colors = {}\n for node in graph.nodes:\n node_edge_colors[node] = node_colors[node], set(counts[node].items())\n\n return node_edge_colors\n\n @staticmethod\n def _get_permutations_by_length(items):\n \"\"\"\n Get all permutations of items, but only permute items with the same\n length.\n\n >>> found = list(ISMAGS._get_permutations_by_length([[1], [2], [3, 4], [4, 5]]))\n >>> answer = [\n ... (([1], [2]), ([3, 4], [4, 5])),\n ... (([1], [2]), ([4, 5], [3, 4])),\n ... (([2], [1]), ([3, 4], [4, 5])),\n ... (([2], [1]), ([4, 5], [3, 4])),\n ... ]\n >>> found == answer\n True\n \"\"\"\n by_len = defaultdict(list)\n for item in items:\n by_len[len(item)].append(item)\n\n yield from itertools.product(\n *(itertools.permutations(by_len[l]) for l in sorted(by_len))\n )\n\n @classmethod\n def _refine_node_partitions(cls, graph, node_partitions, edge_colors, branch=False):\n \"\"\"\n Given a partition of nodes in graph, make the partitions smaller such\n that all nodes in a partition have 1) the same color, and 2) the same\n number of edges to specific other partitions.\n \"\"\"\n\n def equal_color(node1, node2):\n return node_edge_colors[node1] == node_edge_colors[node2]\n\n node_partitions = list(node_partitions)\n node_colors = partition_to_color(node_partitions)\n node_edge_colors = cls._find_node_edge_color(graph, node_colors, edge_colors)\n if all(\n are_all_equal(node_edge_colors[node] for node in partition)\n for partition in node_partitions\n ):\n yield node_partitions\n return\n\n new_partitions = []\n output = [new_partitions]\n for partition in node_partitions:\n if not are_all_equal(node_edge_colors[node] for node in partition):\n refined = make_partitions(partition, equal_color)\n if (\n branch\n and len(refined) != 1\n and len({len(r) for r in refined}) != len([len(r) for r in refined])\n ):\n # This is where it breaks. There are multiple new cells\n # in refined with the same length, and their order\n # matters.\n # So option 1) Hit it with a big hammer and simply make all\n # orderings.\n permutations = cls._get_permutations_by_length(refined)\n new_output = []\n for n_p in output:\n for permutation in permutations:\n new_output.append(n_p + list(permutation[0]))\n output = new_output\n else:\n for n_p in output:\n n_p.extend(sorted(refined, key=len))\n else:\n for n_p in output:\n n_p.append(partition)\n for n_p in output:\n yield from cls._refine_node_partitions(graph, n_p, edge_colors, branch)\n\n def _edges_of_same_color(self, sgn1, sgn2):\n \"\"\"\n Returns all edges in :attr:`graph` that have the same colour as the\n edge between sgn1 and sgn2 in :attr:`subgraph`.\n \"\"\"\n if (sgn1, sgn2) in self._sge_colors:\n # FIXME directed graphs\n sge_color = self._sge_colors[sgn1, sgn2]\n else:\n sge_color = self._sge_colors[sgn2, sgn1]\n if sge_color in self._edge_compatibility:\n ge_color = self._edge_compatibility[sge_color]\n g_edges = self._ge_partitions[ge_color]\n else:\n g_edges = []\n return g_edges\n\n def _map_nodes(self, sgn, candidates, constraints, mapping=None, to_be_mapped=None):\n \"\"\"\n Find all subgraph isomorphisms honoring constraints.\n \"\"\"\n if mapping is None:\n mapping = {}\n else:\n mapping = mapping.copy()\n if to_be_mapped is None:\n to_be_mapped = set(self.subgraph.nodes)\n\n # Note, we modify candidates here. 
Doesn't seem to affect results, but\n # remember this.\n # candidates = candidates.copy()\n sgn_candidates = intersect(candidates[sgn])\n candidates[sgn] = frozenset([sgn_candidates])\n for gn in sgn_candidates:\n # We're going to try to map sgn to gn.\n if gn in mapping.values() or sgn not in to_be_mapped:\n # gn is already mapped to something\n continue # pragma: no cover\n\n # REDUCTION and COMBINATION\n mapping[sgn] = gn\n # BASECASE\n if to_be_mapped == set(mapping.keys()):\n yield {v: k for k, v in mapping.items()}\n continue\n left_to_map = to_be_mapped - set(mapping.keys())\n\n new_candidates = candidates.copy()\n sgn_neighbours = set(self.subgraph[sgn])\n not_gn_neighbours = set(self.graph.nodes) - set(self.graph[gn])\n for sgn2 in left_to_map:\n if sgn2 not in sgn_neighbours:\n gn2_options = not_gn_neighbours\n else:\n # Get all edges to gn of the right color:\n g_edges = self._edges_of_same_color(sgn, sgn2)\n # FIXME directed graphs\n # And all nodes involved in those which are connected to gn\n gn2_options = {n for e in g_edges for n in e if gn in e}\n # Node color compatibility should be taken care of by the\n # initial candidate lists made by find_subgraphs\n\n # Add gn2_options to the right collection. Since new_candidates\n # is a dict of frozensets of frozensets of node indices it's\n # a bit clunky. We can't do .add, and + also doesn't work. We\n # could do |, but I deem union to be clearer.\n new_candidates[sgn2] = new_candidates[sgn2].union(\n [frozenset(gn2_options)]\n )\n\n if (sgn, sgn2) in constraints:\n gn2_options = {gn2 for gn2 in self.graph if gn2 > gn}\n elif (sgn2, sgn) in constraints:\n gn2_options = {gn2 for gn2 in self.graph if gn2 < gn}\n else:\n continue # pragma: no cover\n new_candidates[sgn2] = new_candidates[sgn2].union(\n [frozenset(gn2_options)]\n )\n\n # The next node is the one that is unmapped and has fewest\n # candidates\n # Pylint disables because it's a one-shot function.\n next_sgn = min(\n left_to_map, key=lambda n: min(new_candidates[n], key=len)\n ) # pylint: disable=cell-var-from-loop\n yield from self._map_nodes(\n next_sgn,\n new_candidates,\n constraints,\n mapping=mapping,\n to_be_mapped=to_be_mapped,\n )\n # Unmap sgn-gn. Strictly not necessary since it'd get overwritten\n # when making a new mapping for sgn.\n # del mapping[sgn]\n\n def _largest_common_subgraph(self, candidates, constraints, to_be_mapped=None):\n \"\"\"\n Find all largest common subgraphs honoring constraints.\n \"\"\"\n if to_be_mapped is None:\n to_be_mapped = {frozenset(self.subgraph.nodes)}\n\n # The LCS problem is basically a repeated subgraph isomorphism problem\n # with smaller and smaller subgraphs. We store the nodes that are\n # \"part of\" the subgraph in to_be_mapped, and we make it a little\n # smaller every iteration.\n\n # pylint disable because it's guarded against by default value\n current_size = len(\n next(iter(to_be_mapped), [])\n ) # pylint: disable=stop-iteration-return\n\n found_iso = False\n if current_size <= len(self.graph):\n # There's no point in trying to find isomorphisms of\n # graph >= subgraph if subgraph has more nodes than graph.\n\n # Try the isomorphism first with the nodes with lowest ID. So sort\n # them. Those are more likely to be part of the final\n # correspondence. This makes finding the first answer(s) faster. 
In\n # theory.\n for nodes in sorted(to_be_mapped, key=sorted):\n # Find the isomorphism between subgraph[to_be_mapped] <= graph\n next_sgn = min(nodes, key=lambda n: min(candidates[n], key=len))\n isomorphs = self._map_nodes(\n next_sgn, candidates, constraints, to_be_mapped=nodes\n )\n\n # This is effectively `yield from isomorphs`, except that we look\n # whether an item was yielded.\n try:\n item = next(isomorphs)\n except StopIteration:\n pass\n else:\n yield item\n yield from isomorphs\n found_iso = True\n\n # BASECASE\n if found_iso or current_size == 1:\n # Shrinking has no point because either 1) we end up with a smaller\n # common subgraph (and we want the largest), or 2) there'll be no\n # more subgraph.\n return\n\n left_to_be_mapped = set()\n for nodes in to_be_mapped:\n for sgn in nodes:\n # We're going to remove sgn from to_be_mapped, but subject to\n # symmetry constraints. We know that for every constraint we\n # have those subgraph nodes are equal. So whenever we would\n # remove the lower part of a constraint, remove the higher\n # instead. This is all dealth with by _remove_node. And because\n # left_to_be_mapped is a set, we don't do double work.\n\n # And finally, make the subgraph one node smaller.\n # REDUCTION\n new_nodes = self._remove_node(sgn, nodes, constraints)\n left_to_be_mapped.add(new_nodes)\n # COMBINATION\n yield from self._largest_common_subgraph(\n candidates, constraints, to_be_mapped=left_to_be_mapped\n )\n\n @staticmethod\n def _remove_node(node, nodes, constraints):\n \"\"\"\n Returns a new set where node has been removed from nodes, subject to\n symmetry constraints. We know, that for every constraint we have\n those subgraph nodes are equal. So whenever we would remove the\n lower part of a constraint, remove the higher instead.\n \"\"\"\n while True:\n for low, high in constraints:\n if low == node and high in nodes:\n node = high\n break\n else: # no break, couldn't find node in constraints\n break\n return frozenset(nodes - {node})\n\n @staticmethod\n def _find_permutations(top_partitions, bottom_partitions):\n \"\"\"\n Return the pairs of top/bottom partitions where the partitions are\n different. Ensures that all partitions in both top and bottom\n partitions have size 1.\n \"\"\"\n # Find permutations\n permutations = set()\n for top, bot in zip(top_partitions, bottom_partitions):\n # top and bot have only one element\n if len(top) != 1 or len(bot) != 1:\n raise IndexError(\n \"Not all nodes are coupled. This is\"\n f\" impossible: {top_partitions}, {bottom_partitions}\"\n )\n if top != bot:\n permutations.add(frozenset((next(iter(top)), next(iter(bot)))))\n return permutations\n\n @staticmethod\n def _update_orbits(orbits, permutations):\n \"\"\"\n Update orbits based on permutations. 
Orbits is modified in place.\n For every pair of items in permutations their respective orbits are\n merged.\n \"\"\"\n for permutation in permutations:\n node, node2 = permutation\n # Find the orbits that contain node and node2, and replace the\n # orbit containing node with the union\n first = second = None\n for idx, orbit in enumerate(orbits):\n if first is not None and second is not None:\n break\n if node in orbit:\n first = idx\n if node2 in orbit:\n second = idx\n if first != second:\n orbits[first].update(orbits[second])\n del orbits[second]\n\n def _couple_nodes(\n self,\n top_partitions,\n bottom_partitions,\n pair_idx,\n t_node,\n b_node,\n graph,\n edge_colors,\n ):\n \"\"\"\n Generate new partitions from top and bottom_partitions where t_node is\n coupled to b_node. pair_idx is the index of the partitions where t_ and\n b_node can be found.\n \"\"\"\n t_partition = top_partitions[pair_idx]\n b_partition = bottom_partitions[pair_idx]\n assert t_node in t_partition and b_node in b_partition\n # Couple node to node2. This means they get their own partition\n new_top_partitions = [top.copy() for top in top_partitions]\n new_bottom_partitions = [bot.copy() for bot in bottom_partitions]\n new_t_groups = {t_node}, t_partition - {t_node}\n new_b_groups = {b_node}, b_partition - {b_node}\n # Replace the old partitions with the coupled ones\n del new_top_partitions[pair_idx]\n del new_bottom_partitions[pair_idx]\n new_top_partitions[pair_idx:pair_idx] = new_t_groups\n new_bottom_partitions[pair_idx:pair_idx] = new_b_groups\n\n new_top_partitions = self._refine_node_partitions(\n graph, new_top_partitions, edge_colors\n )\n new_bottom_partitions = self._refine_node_partitions(\n graph, new_bottom_partitions, edge_colors, branch=True\n )\n new_top_partitions = list(new_top_partitions)\n assert len(new_top_partitions) == 1\n new_top_partitions = new_top_partitions[0]\n for bot in new_bottom_partitions:\n yield list(new_top_partitions), bot\n\n def _process_ordered_pair_partitions(\n self,\n graph,\n top_partitions,\n bottom_partitions,\n edge_colors,\n orbits=None,\n cosets=None,\n ):\n \"\"\"\n Processes ordered pair partitions as per the reference paper. Finds and\n returns all permutations and cosets that leave the graph unchanged.\n \"\"\"\n if orbits is None:\n orbits = [{node} for node in graph.nodes]\n else:\n # Note that we don't copy orbits when we are given one. This means\n # we leak information between the recursive branches. 
This is\n # intentional!\n orbits = orbits\n if cosets is None:\n cosets = {}\n else:\n cosets = cosets.copy()\n\n assert all(\n len(t_p) == len(b_p) for t_p, b_p in zip(top_partitions, bottom_partitions)\n )\n\n # BASECASE\n if all(len(top) == 1 for top in top_partitions):\n # All nodes are mapped\n permutations = self._find_permutations(top_partitions, bottom_partitions)\n self._update_orbits(orbits, permutations)\n if permutations:\n return [permutations], cosets\n else:\n return [], cosets\n\n permutations = []\n unmapped_nodes = {\n (node, idx)\n for idx, t_partition in enumerate(top_partitions)\n for node in t_partition\n if len(t_partition) > 1\n }\n node, pair_idx = min(unmapped_nodes)\n b_partition = bottom_partitions[pair_idx]\n\n for node2 in sorted(b_partition):\n if len(b_partition) == 1:\n # Can never result in symmetry\n continue\n if node != node2 and any(\n node in orbit and node2 in orbit for orbit in orbits\n ):\n # Orbit prune branch\n continue\n # REDUCTION\n # Couple node to node2\n partitions = self._couple_nodes(\n top_partitions,\n bottom_partitions,\n pair_idx,\n node,\n node2,\n graph,\n edge_colors,\n )\n for opp in partitions:\n new_top_partitions, new_bottom_partitions = opp\n\n new_perms, new_cosets = self._process_ordered_pair_partitions(\n graph,\n new_top_partitions,\n new_bottom_partitions,\n edge_colors,\n orbits,\n cosets,\n )\n # COMBINATION\n permutations += new_perms\n cosets.update(new_cosets)\n\n mapped = {\n k\n for top, bottom in zip(top_partitions, bottom_partitions)\n for k in top\n if len(top) == 1 and top == bottom\n }\n ks = {k for k in graph.nodes if k < node}\n # Have all nodes with ID < node been mapped?\n find_coset = ks <= mapped and node not in cosets\n if find_coset:\n # Find the orbit that contains node\n for orbit in orbits:\n if node in orbit:\n cosets[node] = orbit.copy()\n return permutations, cosets\n",
"path": "networkx/algorithms/isomorphism/ismags.py"
}
] | [
{
"content": "\"\"\"\n****************\nISMAGS Algorithm\n****************\n\nProvides a Python implementation of the ISMAGS algorithm. [1]_\n\nIt is capable of finding (subgraph) isomorphisms between two graphs, taking the\nsymmetry of the subgraph into account. In most cases the VF2 algorithm is\nfaster (at least on small graphs) than this implementation, but in some cases\nthere is an exponential number of isomorphisms that are symmetrically\nequivalent. In that case, the ISMAGS algorithm will provide only one solution\nper symmetry group.\n\n>>> petersen = nx.petersen_graph()\n>>> ismags = nx.isomorphism.ISMAGS(petersen, petersen)\n>>> isomorphisms = list(ismags.isomorphisms_iter(symmetry=False))\n>>> len(isomorphisms)\n120\n>>> isomorphisms = list(ismags.isomorphisms_iter(symmetry=True))\n>>> answer = [{0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9}]\n>>> answer == isomorphisms\nTrue\n\nIn addition, this implementation also provides an interface to find the\nlargest common induced subgraph [2]_ between any two graphs, again taking\nsymmetry into account. Given `graph` and `subgraph` the algorithm will remove\nnodes from the `subgraph` until `subgraph` is isomorphic to a subgraph of\n`graph`. Since only the symmetry of `subgraph` is taken into account it is\nworth thinking about how you provide your graphs:\n\n>>> graph1 = nx.path_graph(4)\n>>> graph2 = nx.star_graph(3)\n>>> ismags = nx.isomorphism.ISMAGS(graph1, graph2)\n>>> ismags.is_isomorphic()\nFalse\n>>> largest_common_subgraph = list(ismags.largest_common_subgraph())\n>>> answer = [{1: 0, 0: 1, 2: 2}, {2: 0, 1: 1, 3: 2}]\n>>> answer == largest_common_subgraph\nTrue\n>>> ismags2 = nx.isomorphism.ISMAGS(graph2, graph1)\n>>> largest_common_subgraph = list(ismags2.largest_common_subgraph())\n>>> answer = [\n... {1: 0, 0: 1, 2: 2},\n... {1: 0, 0: 1, 3: 2},\n... {2: 0, 0: 1, 1: 2},\n... {2: 0, 0: 1, 3: 2},\n... {3: 0, 0: 1, 1: 2},\n... {3: 0, 0: 1, 2: 2},\n... ]\n>>> answer == largest_common_subgraph\nTrue\n\nHowever, when not taking symmetry into account, it doesn't matter:\n\n>>> largest_common_subgraph = list(ismags.largest_common_subgraph(symmetry=False))\n>>> answer = [\n... {1: 0, 0: 1, 2: 2},\n... {1: 0, 2: 1, 0: 2},\n... {2: 0, 1: 1, 3: 2},\n... {2: 0, 3: 1, 1: 2},\n... {1: 0, 0: 1, 2: 3},\n... {1: 0, 2: 1, 0: 3},\n... {2: 0, 1: 1, 3: 3},\n... {2: 0, 3: 1, 1: 3},\n... {1: 0, 0: 2, 2: 3},\n... {1: 0, 2: 2, 0: 3},\n... {2: 0, 1: 2, 3: 3},\n... {2: 0, 3: 2, 1: 3},\n... ]\n>>> answer == largest_common_subgraph\nTrue\n>>> largest_common_subgraph = list(ismags2.largest_common_subgraph(symmetry=False))\n>>> answer = [\n... {1: 0, 0: 1, 2: 2},\n... {1: 0, 0: 1, 3: 2},\n... {2: 0, 0: 1, 1: 2},\n... {2: 0, 0: 1, 3: 2},\n... {3: 0, 0: 1, 1: 2},\n... {3: 0, 0: 1, 2: 2},\n... {1: 1, 0: 2, 2: 3},\n... {1: 1, 0: 2, 3: 3},\n... {2: 1, 0: 2, 1: 3},\n... {2: 1, 0: 2, 3: 3},\n... {3: 1, 0: 2, 1: 3},\n... {3: 1, 0: 2, 2: 3},\n... ]\n>>> answer == largest_common_subgraph\nTrue\n\nNotes\n-----\n - The current implementation works for undirected graphs only. The algorithm\n in general should work for directed graphs as well though.\n - Node keys for both provided graphs need to be fully orderable as well as\n hashable.\n - Node and edge equality is assumed to be transitive: if A is equal to B, and\n B is equal to C, then A is equal to C.\n\nReferences\n----------\n .. [1] M. Houbraken, S. Demeyer, T. Michoel, P. Audenaert, D. Colle,\n M. 
Pickavet, \"The Index-Based Subgraph Matching Algorithm with General\n Symmetries (ISMAGS): Exploiting Symmetry for Faster Subgraph\n Enumeration\", PLoS One 9(5): e97896, 2014.\n https://doi.org/10.1371/journal.pone.0097896\n .. [2] https://en.wikipedia.org/wiki/Maximum_common_induced_subgraph\n\"\"\"\n\n__all__ = [\"ISMAGS\"]\n\nimport itertools\nfrom collections import Counter, defaultdict\nfrom functools import reduce, wraps\n\n\ndef are_all_equal(iterable):\n \"\"\"\n Returns ``True`` if and only if all elements in `iterable` are equal; and\n ``False`` otherwise.\n\n Parameters\n ----------\n iterable: collections.abc.Iterable\n The container whose elements will be checked.\n\n Returns\n -------\n bool\n ``True`` iff all elements in `iterable` compare equal, ``False``\n otherwise.\n \"\"\"\n try:\n shape = iterable.shape\n except AttributeError:\n pass\n else:\n if len(shape) > 1:\n message = \"The function does not works on multidimensional arrays.\"\n raise NotImplementedError(message) from None\n\n iterator = iter(iterable)\n first = next(iterator, None)\n return all(item == first for item in iterator)\n\n\ndef make_partitions(items, test):\n \"\"\"\n Partitions items into sets based on the outcome of ``test(item1, item2)``.\n Pairs of items for which `test` returns `True` end up in the same set.\n\n Parameters\n ----------\n items : collections.abc.Iterable[collections.abc.Hashable]\n Items to partition\n test : collections.abc.Callable[collections.abc.Hashable, collections.abc.Hashable]\n A function that will be called with 2 arguments, taken from items.\n Should return `True` if those 2 items need to end up in the same\n partition, and `False` otherwise.\n\n Returns\n -------\n list[set]\n A list of sets, with each set containing part of the items in `items`,\n such that ``all(test(*pair) for pair in itertools.combinations(set, 2))\n == True``\n\n Notes\n -----\n The function `test` is assumed to be transitive: if ``test(a, b)`` and\n ``test(b, c)`` return ``True``, then ``test(a, c)`` must also be ``True``.\n \"\"\"\n partitions = []\n for item in items:\n for partition in partitions:\n p_item = next(iter(partition))\n if test(item, p_item):\n partition.add(item)\n break\n else: # No break\n partitions.append({item})\n return partitions\n\n\ndef partition_to_color(partitions):\n \"\"\"\n Creates a dictionary that maps each item in each partition to the index of\n the partition to which it belongs.\n\n Parameters\n ----------\n partitions: collections.abc.Sequence[collections.abc.Iterable]\n As returned by :func:`make_partitions`.\n\n Returns\n -------\n dict\n \"\"\"\n colors = {}\n for color, keys in enumerate(partitions):\n for key in keys:\n colors[key] = color\n return colors\n\n\ndef intersect(collection_of_sets):\n \"\"\"\n Given an collection of sets, returns the intersection of those sets.\n\n Parameters\n ----------\n collection_of_sets: collections.abc.Collection[set]\n A collection of sets.\n\n Returns\n -------\n set\n An intersection of all sets in `collection_of_sets`. Will have the same\n type as the item initially taken from `collection_of_sets`.\n \"\"\"\n collection_of_sets = list(collection_of_sets)\n first = collection_of_sets.pop()\n out = reduce(set.intersection, collection_of_sets, set(first))\n return type(first)(out)\n\n\nclass ISMAGS:\n \"\"\"\n Implements the ISMAGS subgraph matching algorithm. [1]_ ISMAGS stands for\n \"Index-based Subgraph Matching Algorithm with General Symmetries\". 
As the\n name implies, it is symmetry aware and will only generate non-symmetric\n isomorphisms.\n\n Notes\n -----\n The implementation imposes additional conditions compared to the VF2\n algorithm on the graphs provided and the comparison functions\n (:attr:`node_equality` and :attr:`edge_equality`):\n\n - Node keys in both graphs must be orderable as well as hashable.\n - Equality must be transitive: if A is equal to B, and B is equal to C,\n then A must be equal to C.\n\n Attributes\n ----------\n graph: networkx.Graph\n subgraph: networkx.Graph\n node_equality: collections.abc.Callable\n The function called to see if two nodes should be considered equal.\n It's signature looks like this:\n ``f(graph1: networkx.Graph, node1, graph2: networkx.Graph, node2) -> bool``.\n `node1` is a node in `graph1`, and `node2` a node in `graph2`.\n Constructed from the argument `node_match`.\n edge_equality: collections.abc.Callable\n The function called to see if two edges should be considered equal.\n It's signature looks like this:\n ``f(graph1: networkx.Graph, edge1, graph2: networkx.Graph, edge2) -> bool``.\n `edge1` is an edge in `graph1`, and `edge2` an edge in `graph2`.\n Constructed from the argument `edge_match`.\n\n References\n ----------\n .. [1] M. Houbraken, S. Demeyer, T. Michoel, P. Audenaert, D. Colle,\n M. Pickavet, \"The Index-Based Subgraph Matching Algorithm with General\n Symmetries (ISMAGS): Exploiting Symmetry for Faster Subgraph\n Enumeration\", PLoS One 9(5): e97896, 2014.\n https://doi.org/10.1371/journal.pone.0097896\n \"\"\"\n\n def __init__(self, graph, subgraph, node_match=None, edge_match=None, cache=None):\n \"\"\"\n Parameters\n ----------\n graph: networkx.Graph\n subgraph: networkx.Graph\n node_match: collections.abc.Callable or None\n Function used to determine whether two nodes are equivalent. Its\n signature should look like ``f(n1: dict, n2: dict) -> bool``, with\n `n1` and `n2` node property dicts. See also\n :func:`~networkx.algorithms.isomorphism.categorical_node_match` and\n friends.\n If `None`, all nodes are considered equal.\n edge_match: collections.abc.Callable or None\n Function used to determine whether two edges are equivalent. Its\n signature should look like ``f(e1: dict, e2: dict) -> bool``, with\n `e1` and `e2` edge property dicts. See also\n :func:`~networkx.algorithms.isomorphism.categorical_edge_match` and\n friends.\n If `None`, all edges are considered equal.\n cache: collections.abc.Mapping\n A cache used for caching graph symmetries.\n \"\"\"\n # TODO: graph and subgraph setter methods that invalidate the caches.\n # TODO: allow for precomputed partitions and colors\n self.graph = graph\n self.subgraph = subgraph\n self._symmetry_cache = cache\n # Naming conventions are taken from the original paper. 
For your\n # sanity:\n # sg: subgraph\n # g: graph\n # e: edge(s)\n # n: node(s)\n # So: sgn means \"subgraph nodes\".\n self._sgn_partitions_ = None\n self._sge_partitions_ = None\n\n self._sgn_colors_ = None\n self._sge_colors_ = None\n\n self._gn_partitions_ = None\n self._ge_partitions_ = None\n\n self._gn_colors_ = None\n self._ge_colors_ = None\n\n self._node_compat_ = None\n self._edge_compat_ = None\n\n if node_match is None:\n self.node_equality = self._node_match_maker(lambda n1, n2: True)\n self._sgn_partitions_ = [set(self.subgraph.nodes)]\n self._gn_partitions_ = [set(self.graph.nodes)]\n self._node_compat_ = {0: 0}\n else:\n self.node_equality = self._node_match_maker(node_match)\n if edge_match is None:\n self.edge_equality = self._edge_match_maker(lambda e1, e2: True)\n self._sge_partitions_ = [set(self.subgraph.edges)]\n self._ge_partitions_ = [set(self.graph.edges)]\n self._edge_compat_ = {0: 0}\n else:\n self.edge_equality = self._edge_match_maker(edge_match)\n\n @property\n def _sgn_partitions(self):\n if self._sgn_partitions_ is None:\n\n def nodematch(node1, node2):\n return self.node_equality(self.subgraph, node1, self.subgraph, node2)\n\n self._sgn_partitions_ = make_partitions(self.subgraph.nodes, nodematch)\n return self._sgn_partitions_\n\n @property\n def _sge_partitions(self):\n if self._sge_partitions_ is None:\n\n def edgematch(edge1, edge2):\n return self.edge_equality(self.subgraph, edge1, self.subgraph, edge2)\n\n self._sge_partitions_ = make_partitions(self.subgraph.edges, edgematch)\n return self._sge_partitions_\n\n @property\n def _gn_partitions(self):\n if self._gn_partitions_ is None:\n\n def nodematch(node1, node2):\n return self.node_equality(self.graph, node1, self.graph, node2)\n\n self._gn_partitions_ = make_partitions(self.graph.nodes, nodematch)\n return self._gn_partitions_\n\n @property\n def _ge_partitions(self):\n if self._ge_partitions_ is None:\n\n def edgematch(edge1, edge2):\n return self.edge_equality(self.graph, edge1, self.graph, edge2)\n\n self._ge_partitions_ = make_partitions(self.graph.edges, edgematch)\n return self._ge_partitions_\n\n @property\n def _sgn_colors(self):\n if self._sgn_colors_ is None:\n self._sgn_colors_ = partition_to_color(self._sgn_partitions)\n return self._sgn_colors_\n\n @property\n def _sge_colors(self):\n if self._sge_colors_ is None:\n self._sge_colors_ = partition_to_color(self._sge_partitions)\n return self._sge_colors_\n\n @property\n def _gn_colors(self):\n if self._gn_colors_ is None:\n self._gn_colors_ = partition_to_color(self._gn_partitions)\n return self._gn_colors_\n\n @property\n def _ge_colors(self):\n if self._ge_colors_ is None:\n self._ge_colors_ = partition_to_color(self._ge_partitions)\n return self._ge_colors_\n\n @property\n def _node_compatibility(self):\n if self._node_compat_ is not None:\n return self._node_compat_\n self._node_compat_ = {}\n for sgn_part_color, gn_part_color in itertools.product(\n range(len(self._sgn_partitions)), range(len(self._gn_partitions))\n ):\n sgn = next(iter(self._sgn_partitions[sgn_part_color]))\n gn = next(iter(self._gn_partitions[gn_part_color]))\n if self.node_equality(self.subgraph, sgn, self.graph, gn):\n self._node_compat_[sgn_part_color] = gn_part_color\n return self._node_compat_\n\n @property\n def _edge_compatibility(self):\n if self._edge_compat_ is not None:\n return self._edge_compat_\n self._edge_compat_ = {}\n for sge_part_color, ge_part_color in itertools.product(\n range(len(self._sge_partitions)), range(len(self._ge_partitions))\n 
):\n sge = next(iter(self._sge_partitions[sge_part_color]))\n ge = next(iter(self._ge_partitions[ge_part_color]))\n if self.edge_equality(self.subgraph, sge, self.graph, ge):\n self._edge_compat_[sge_part_color] = ge_part_color\n return self._edge_compat_\n\n @staticmethod\n def _node_match_maker(cmp):\n @wraps(cmp)\n def comparer(graph1, node1, graph2, node2):\n return cmp(graph1.nodes[node1], graph2.nodes[node2])\n\n return comparer\n\n @staticmethod\n def _edge_match_maker(cmp):\n @wraps(cmp)\n def comparer(graph1, edge1, graph2, edge2):\n return cmp(graph1.edges[edge1], graph2.edges[edge2])\n\n return comparer\n\n def find_isomorphisms(self, symmetry=True):\n \"\"\"Find all subgraph isomorphisms between subgraph and graph\n\n Finds isomorphisms where :attr:`subgraph` <= :attr:`graph`.\n\n Parameters\n ----------\n symmetry: bool\n Whether symmetry should be taken into account. If False, found\n isomorphisms may be symmetrically equivalent.\n\n Yields\n ------\n dict\n The found isomorphism mappings of {graph_node: subgraph_node}.\n \"\"\"\n # The networkx VF2 algorithm is slightly funny in when it yields an\n # empty dict and when not.\n if not self.subgraph:\n yield {}\n return\n elif not self.graph:\n return\n elif len(self.graph) < len(self.subgraph):\n return\n\n if symmetry:\n _, cosets = self.analyze_symmetry(\n self.subgraph, self._sgn_partitions, self._sge_colors\n )\n constraints = self._make_constraints(cosets)\n else:\n constraints = []\n\n candidates = self._find_nodecolor_candidates()\n la_candidates = self._get_lookahead_candidates()\n for sgn in self.subgraph:\n extra_candidates = la_candidates[sgn]\n if extra_candidates:\n candidates[sgn] = candidates[sgn] | {frozenset(extra_candidates)}\n\n if any(candidates.values()):\n start_sgn = min(candidates, key=lambda n: min(candidates[n], key=len))\n candidates[start_sgn] = (intersect(candidates[start_sgn]),)\n yield from self._map_nodes(start_sgn, candidates, constraints)\n else:\n return\n\n @staticmethod\n def _find_neighbor_color_count(graph, node, node_color, edge_color):\n \"\"\"\n For `node` in `graph`, count the number of edges of a specific color\n it has to nodes of a specific color.\n \"\"\"\n counts = Counter()\n neighbors = graph[node]\n for neighbor in neighbors:\n n_color = node_color[neighbor]\n if (node, neighbor) in edge_color:\n e_color = edge_color[node, neighbor]\n else:\n e_color = edge_color[neighbor, node]\n counts[e_color, n_color] += 1\n return counts\n\n def _get_lookahead_candidates(self):\n \"\"\"\n Returns a mapping of {subgraph node: collection of graph nodes} for\n which the graph nodes are feasible candidates for the subgraph node, as\n determined by looking ahead one edge.\n \"\"\"\n g_counts = {}\n for gn in self.graph:\n g_counts[gn] = self._find_neighbor_color_count(\n self.graph, gn, self._gn_colors, self._ge_colors\n )\n candidates = defaultdict(set)\n for sgn in self.subgraph:\n sg_count = self._find_neighbor_color_count(\n self.subgraph, sgn, self._sgn_colors, self._sge_colors\n )\n new_sg_count = Counter()\n for (sge_color, sgn_color), count in sg_count.items():\n try:\n ge_color = self._edge_compatibility[sge_color]\n gn_color = self._node_compatibility[sgn_color]\n except KeyError:\n pass\n else:\n new_sg_count[ge_color, gn_color] = count\n\n for gn, g_count in g_counts.items():\n if all(new_sg_count[x] <= g_count[x] for x in new_sg_count):\n # Valid candidate\n candidates[sgn].add(gn)\n return candidates\n\n def largest_common_subgraph(self, symmetry=True):\n \"\"\"\n Find the 
largest common induced subgraphs between :attr:`subgraph` and\n :attr:`graph`.\n\n Parameters\n ----------\n symmetry: bool\n Whether symmetry should be taken into account. If False, found\n largest common subgraphs may be symmetrically equivalent.\n\n Yields\n ------\n dict\n The found isomorphism mappings of {graph_node: subgraph_node}.\n \"\"\"\n # The networkx VF2 algorithm is slightly funny in when it yields an\n # empty dict and when not.\n if not self.subgraph:\n yield {}\n return\n elif not self.graph:\n return\n\n if symmetry:\n _, cosets = self.analyze_symmetry(\n self.subgraph, self._sgn_partitions, self._sge_colors\n )\n constraints = self._make_constraints(cosets)\n else:\n constraints = []\n\n candidates = self._find_nodecolor_candidates()\n\n if any(candidates.values()):\n yield from self._largest_common_subgraph(candidates, constraints)\n else:\n return\n\n def analyze_symmetry(self, graph, node_partitions, edge_colors):\n \"\"\"\n Find a minimal set of permutations and corresponding co-sets that\n describe the symmetry of `graph`, given the node and edge equalities\n given by `node_partitions` and `edge_colors`, respectively.\n\n Parameters\n ----------\n graph : networkx.Graph\n The graph whose symmetry should be analyzed.\n node_partitions : list of sets\n A list of sets containing node keys. Node keys in the same set\n are considered equivalent. Every node key in `graph` should be in\n exactly one of the sets. If all nodes are equivalent, this should\n be ``[set(graph.nodes)]``.\n edge_colors : dict mapping edges to their colors\n A dict mapping every edge in `graph` to its corresponding color.\n Edges with the same color are considered equivalent. If all edges\n are equivalent, this should be ``{e: 0 for e in graph.edges}``.\n\n\n Returns\n -------\n set[frozenset]\n The found permutations. This is a set of frozensets of pairs of node\n keys which can be exchanged without changing :attr:`subgraph`.\n dict[collections.abc.Hashable, set[collections.abc.Hashable]]\n The found co-sets. 
The co-sets is a dictionary of\n ``{node key: set of node keys}``.\n Every key-value pair describes which ``values`` can be interchanged\n without changing nodes less than ``key``.\n \"\"\"\n if self._symmetry_cache is not None:\n key = hash(\n (\n tuple(graph.nodes),\n tuple(graph.edges),\n tuple(map(tuple, node_partitions)),\n tuple(edge_colors.items()),\n )\n )\n if key in self._symmetry_cache:\n return self._symmetry_cache[key]\n node_partitions = list(\n self._refine_node_partitions(graph, node_partitions, edge_colors)\n )\n assert len(node_partitions) == 1\n node_partitions = node_partitions[0]\n permutations, cosets = self._process_ordered_pair_partitions(\n graph, node_partitions, node_partitions, edge_colors\n )\n if self._symmetry_cache is not None:\n self._symmetry_cache[key] = permutations, cosets\n return permutations, cosets\n\n def is_isomorphic(self, symmetry=False):\n \"\"\"\n Returns True if :attr:`graph` is isomorphic to :attr:`subgraph` and\n False otherwise.\n\n Returns\n -------\n bool\n \"\"\"\n return len(self.subgraph) == len(self.graph) and self.subgraph_is_isomorphic(\n symmetry\n )\n\n def subgraph_is_isomorphic(self, symmetry=False):\n \"\"\"\n Returns True if a subgraph of :attr:`graph` is isomorphic to\n :attr:`subgraph` and False otherwise.\n\n Returns\n -------\n bool\n \"\"\"\n # symmetry=False, since we only need to know whether there is any\n # example; figuring out all symmetry elements probably costs more time\n # than it gains.\n isom = next(self.subgraph_isomorphisms_iter(symmetry=symmetry), None)\n return isom is not None\n\n def isomorphisms_iter(self, symmetry=True):\n \"\"\"\n Does the same as :meth:`find_isomorphisms` if :attr:`graph` and\n :attr:`subgraph` have the same number of nodes.\n \"\"\"\n if len(self.graph) == len(self.subgraph):\n yield from self.subgraph_isomorphisms_iter(symmetry=symmetry)\n\n def subgraph_isomorphisms_iter(self, symmetry=True):\n \"\"\"Alternative name for :meth:`find_isomorphisms`.\"\"\"\n return self.find_isomorphisms(symmetry)\n\n def _find_nodecolor_candidates(self):\n \"\"\"\n Per node in subgraph find all nodes in graph that have the same color.\n \"\"\"\n candidates = defaultdict(set)\n for sgn in self.subgraph.nodes:\n sgn_color = self._sgn_colors[sgn]\n if sgn_color in self._node_compatibility:\n gn_color = self._node_compatibility[sgn_color]\n candidates[sgn].add(frozenset(self._gn_partitions[gn_color]))\n else:\n candidates[sgn].add(frozenset())\n candidates = dict(candidates)\n for sgn, options in candidates.items():\n candidates[sgn] = frozenset(options)\n return candidates\n\n @staticmethod\n def _make_constraints(cosets):\n \"\"\"\n Turn cosets into constraints.\n \"\"\"\n constraints = []\n for node_i, node_ts in cosets.items():\n for node_t in node_ts:\n if node_i != node_t:\n # Node i must be smaller than node t.\n constraints.append((node_i, node_t))\n return constraints\n\n @staticmethod\n def _find_node_edge_color(graph, node_colors, edge_colors):\n \"\"\"\n For every node in graph, come up with a color that combines 1) the\n color of the node, and 2) the number of edges of a color to each type\n of node.\n \"\"\"\n counts = defaultdict(lambda: defaultdict(int))\n for node1, node2 in graph.edges:\n if (node1, node2) in edge_colors:\n # FIXME directed graphs\n ecolor = edge_colors[node1, node2]\n else:\n ecolor = edge_colors[node2, node1]\n # Count per node how many edges it has of what color to nodes of\n # what color\n counts[node1][ecolor, node_colors[node2]] += 1\n counts[node2][ecolor, 
node_colors[node1]] += 1\n\n node_edge_colors = {}\n for node in graph.nodes:\n node_edge_colors[node] = node_colors[node], set(counts[node].items())\n\n return node_edge_colors\n\n @staticmethod\n def _get_permutations_by_length(items):\n \"\"\"\n Get all permutations of items, but only permute items with the same\n length.\n\n >>> found = list(ISMAGS._get_permutations_by_length([[1], [2], [3, 4], [4, 5]]))\n >>> answer = [\n ... (([1], [2]), ([3, 4], [4, 5])),\n ... (([1], [2]), ([4, 5], [3, 4])),\n ... (([2], [1]), ([3, 4], [4, 5])),\n ... (([2], [1]), ([4, 5], [3, 4])),\n ... ]\n >>> found == answer\n True\n \"\"\"\n by_len = defaultdict(list)\n for item in items:\n by_len[len(item)].append(item)\n\n yield from itertools.product(\n *(itertools.permutations(by_len[l]) for l in sorted(by_len))\n )\n\n @classmethod\n def _refine_node_partitions(cls, graph, node_partitions, edge_colors, branch=False):\n \"\"\"\n Given a partition of nodes in graph, make the partitions smaller such\n that all nodes in a partition have 1) the same color, and 2) the same\n number of edges to specific other partitions.\n \"\"\"\n\n def equal_color(node1, node2):\n return node_edge_colors[node1] == node_edge_colors[node2]\n\n node_partitions = list(node_partitions)\n node_colors = partition_to_color(node_partitions)\n node_edge_colors = cls._find_node_edge_color(graph, node_colors, edge_colors)\n if all(\n are_all_equal(node_edge_colors[node] for node in partition)\n for partition in node_partitions\n ):\n yield node_partitions\n return\n\n new_partitions = []\n output = [new_partitions]\n for partition in node_partitions:\n if not are_all_equal(node_edge_colors[node] for node in partition):\n refined = make_partitions(partition, equal_color)\n if (\n branch\n and len(refined) != 1\n and len({len(r) for r in refined}) != len([len(r) for r in refined])\n ):\n # This is where it breaks. There are multiple new cells\n # in refined with the same length, and their order\n # matters.\n # So option 1) Hit it with a big hammer and simply make all\n # orderings.\n permutations = cls._get_permutations_by_length(refined)\n new_output = []\n for n_p in output:\n for permutation in permutations:\n new_output.append(n_p + list(permutation[0]))\n output = new_output\n else:\n for n_p in output:\n n_p.extend(sorted(refined, key=len))\n else:\n for n_p in output:\n n_p.append(partition)\n for n_p in output:\n yield from cls._refine_node_partitions(graph, n_p, edge_colors, branch)\n\n def _edges_of_same_color(self, sgn1, sgn2):\n \"\"\"\n Returns all edges in :attr:`graph` that have the same colour as the\n edge between sgn1 and sgn2 in :attr:`subgraph`.\n \"\"\"\n if (sgn1, sgn2) in self._sge_colors:\n # FIXME directed graphs\n sge_color = self._sge_colors[sgn1, sgn2]\n else:\n sge_color = self._sge_colors[sgn2, sgn1]\n if sge_color in self._edge_compatibility:\n ge_color = self._edge_compatibility[sge_color]\n g_edges = self._ge_partitions[ge_color]\n else:\n g_edges = []\n return g_edges\n\n def _map_nodes(self, sgn, candidates, constraints, mapping=None, to_be_mapped=None):\n \"\"\"\n Find all subgraph isomorphisms honoring constraints.\n \"\"\"\n if mapping is None:\n mapping = {}\n else:\n mapping = mapping.copy()\n if to_be_mapped is None:\n to_be_mapped = set(self.subgraph.nodes)\n\n # Note, we modify candidates here. 
Doesn't seem to affect results, but\n # remember this.\n # candidates = candidates.copy()\n sgn_candidates = intersect(candidates[sgn])\n candidates[sgn] = frozenset([sgn_candidates])\n for gn in sgn_candidates:\n # We're going to try to map sgn to gn.\n if gn in mapping.values() or sgn not in to_be_mapped:\n # gn is already mapped to something\n continue # pragma: no cover\n\n # REDUCTION and COMBINATION\n mapping[sgn] = gn\n # BASECASE\n if to_be_mapped == set(mapping.keys()):\n yield {v: k for k, v in mapping.items()}\n continue\n left_to_map = to_be_mapped - set(mapping.keys())\n\n new_candidates = candidates.copy()\n sgn_neighbours = set(self.subgraph[sgn])\n not_gn_neighbours = set(self.graph.nodes) - set(self.graph[gn])\n for sgn2 in left_to_map:\n if sgn2 not in sgn_neighbours:\n gn2_options = not_gn_neighbours\n else:\n # Get all edges to gn of the right color:\n g_edges = self._edges_of_same_color(sgn, sgn2)\n # FIXME directed graphs\n # And all nodes involved in those which are connected to gn\n gn2_options = {n for e in g_edges for n in e if gn in e}\n # Node color compatibility should be taken care of by the\n # initial candidate lists made by find_subgraphs\n\n # Add gn2_options to the right collection. Since new_candidates\n # is a dict of frozensets of frozensets of node indices it's\n # a bit clunky. We can't do .add, and + also doesn't work. We\n # could do |, but I deem union to be clearer.\n new_candidates[sgn2] = new_candidates[sgn2].union(\n [frozenset(gn2_options)]\n )\n\n if (sgn, sgn2) in constraints:\n gn2_options = {gn2 for gn2 in self.graph if gn2 > gn}\n elif (sgn2, sgn) in constraints:\n gn2_options = {gn2 for gn2 in self.graph if gn2 < gn}\n else:\n continue # pragma: no cover\n new_candidates[sgn2] = new_candidates[sgn2].union(\n [frozenset(gn2_options)]\n )\n\n # The next node is the one that is unmapped and has fewest\n # candidates\n # Pylint disables because it's a one-shot function.\n next_sgn = min(\n left_to_map, key=lambda n: min(new_candidates[n], key=len)\n ) # pylint: disable=cell-var-from-loop\n yield from self._map_nodes(\n next_sgn,\n new_candidates,\n constraints,\n mapping=mapping,\n to_be_mapped=to_be_mapped,\n )\n # Unmap sgn-gn. Strictly not necessary since it'd get overwritten\n # when making a new mapping for sgn.\n # del mapping[sgn]\n\n def _largest_common_subgraph(self, candidates, constraints, to_be_mapped=None):\n \"\"\"\n Find all largest common subgraphs honoring constraints.\n \"\"\"\n if to_be_mapped is None:\n to_be_mapped = {frozenset(self.subgraph.nodes)}\n\n # The LCS problem is basically a repeated subgraph isomorphism problem\n # with smaller and smaller subgraphs. We store the nodes that are\n # \"part of\" the subgraph in to_be_mapped, and we make it a little\n # smaller every iteration.\n\n # pylint disable because it's guarded against by default value\n current_size = len(\n next(iter(to_be_mapped), [])\n ) # pylint: disable=stop-iteration-return\n\n found_iso = False\n if current_size <= len(self.graph):\n # There's no point in trying to find isomorphisms of\n # graph >= subgraph if subgraph has more nodes than graph.\n\n # Try the isomorphism first with the nodes with lowest ID. So sort\n # them. Those are more likely to be part of the final\n # correspondence. This makes finding the first answer(s) faster. 
In\n # theory.\n for nodes in sorted(to_be_mapped, key=sorted):\n # Find the isomorphism between subgraph[to_be_mapped] <= graph\n next_sgn = min(nodes, key=lambda n: min(candidates[n], key=len))\n isomorphs = self._map_nodes(\n next_sgn, candidates, constraints, to_be_mapped=nodes\n )\n\n # This is effectively `yield from isomorphs`, except that we look\n # whether an item was yielded.\n try:\n item = next(isomorphs)\n except StopIteration:\n pass\n else:\n yield item\n yield from isomorphs\n found_iso = True\n\n # BASECASE\n if found_iso or current_size == 1:\n # Shrinking has no point because either 1) we end up with a smaller\n # common subgraph (and we want the largest), or 2) there'll be no\n # more subgraph.\n return\n\n left_to_be_mapped = set()\n for nodes in to_be_mapped:\n for sgn in nodes:\n # We're going to remove sgn from to_be_mapped, but subject to\n # symmetry constraints. We know that for every constraint we\n # have those subgraph nodes are equal. So whenever we would\n # remove the lower part of a constraint, remove the higher\n # instead. This is all dealth with by _remove_node. And because\n # left_to_be_mapped is a set, we don't do double work.\n\n # And finally, make the subgraph one node smaller.\n # REDUCTION\n new_nodes = self._remove_node(sgn, nodes, constraints)\n left_to_be_mapped.add(new_nodes)\n # COMBINATION\n yield from self._largest_common_subgraph(\n candidates, constraints, to_be_mapped=left_to_be_mapped\n )\n\n @staticmethod\n def _remove_node(node, nodes, constraints):\n \"\"\"\n Returns a new set where node has been removed from nodes, subject to\n symmetry constraints. We know, that for every constraint we have\n those subgraph nodes are equal. So whenever we would remove the\n lower part of a constraint, remove the higher instead.\n \"\"\"\n while True:\n for low, high in constraints:\n if low == node and high in nodes:\n node = high\n break\n else: # no break, couldn't find node in constraints\n break\n return frozenset(nodes - {node})\n\n @staticmethod\n def _find_permutations(top_partitions, bottom_partitions):\n \"\"\"\n Return the pairs of top/bottom partitions where the partitions are\n different. Ensures that all partitions in both top and bottom\n partitions have size 1.\n \"\"\"\n # Find permutations\n permutations = set()\n for top, bot in zip(top_partitions, bottom_partitions):\n # top and bot have only one element\n if len(top) != 1 or len(bot) != 1:\n raise IndexError(\n \"Not all nodes are coupled. This is\"\n f\" impossible: {top_partitions}, {bottom_partitions}\"\n )\n if top != bot:\n permutations.add(frozenset((next(iter(top)), next(iter(bot)))))\n return permutations\n\n @staticmethod\n def _update_orbits(orbits, permutations):\n \"\"\"\n Update orbits based on permutations. 
Orbits is modified in place.\n For every pair of items in permutations their respective orbits are\n merged.\n \"\"\"\n for permutation in permutations:\n node, node2 = permutation\n # Find the orbits that contain node and node2, and replace the\n # orbit containing node with the union\n first = second = None\n for idx, orbit in enumerate(orbits):\n if first is not None and second is not None:\n break\n if node in orbit:\n first = idx\n if node2 in orbit:\n second = idx\n if first != second:\n orbits[first].update(orbits[second])\n del orbits[second]\n\n def _couple_nodes(\n self,\n top_partitions,\n bottom_partitions,\n pair_idx,\n t_node,\n b_node,\n graph,\n edge_colors,\n ):\n \"\"\"\n Generate new partitions from top and bottom_partitions where t_node is\n coupled to b_node. pair_idx is the index of the partitions where t_ and\n b_node can be found.\n \"\"\"\n t_partition = top_partitions[pair_idx]\n b_partition = bottom_partitions[pair_idx]\n assert t_node in t_partition and b_node in b_partition\n # Couple node to node2. This means they get their own partition\n new_top_partitions = [top.copy() for top in top_partitions]\n new_bottom_partitions = [bot.copy() for bot in bottom_partitions]\n new_t_groups = {t_node}, t_partition - {t_node}\n new_b_groups = {b_node}, b_partition - {b_node}\n # Replace the old partitions with the coupled ones\n del new_top_partitions[pair_idx]\n del new_bottom_partitions[pair_idx]\n new_top_partitions[pair_idx:pair_idx] = new_t_groups\n new_bottom_partitions[pair_idx:pair_idx] = new_b_groups\n\n new_top_partitions = self._refine_node_partitions(\n graph, new_top_partitions, edge_colors\n )\n new_bottom_partitions = self._refine_node_partitions(\n graph, new_bottom_partitions, edge_colors, branch=True\n )\n new_top_partitions = list(new_top_partitions)\n assert len(new_top_partitions) == 1\n new_top_partitions = new_top_partitions[0]\n for bot in new_bottom_partitions:\n yield list(new_top_partitions), bot\n\n def _process_ordered_pair_partitions(\n self,\n graph,\n top_partitions,\n bottom_partitions,\n edge_colors,\n orbits=None,\n cosets=None,\n ):\n \"\"\"\n Processes ordered pair partitions as per the reference paper. Finds and\n returns all permutations and cosets that leave the graph unchanged.\n \"\"\"\n if orbits is None:\n orbits = [{node} for node in graph.nodes]\n else:\n # Note that we don't copy orbits when we are given one. This means\n # we leak information between the recursive branches. 
This is\n # intentional!\n orbits = orbits\n if cosets is None:\n cosets = {}\n else:\n cosets = cosets.copy()\n\n assert all(\n len(t_p) == len(b_p) for t_p, b_p in zip(top_partitions, bottom_partitions)\n )\n\n # BASECASE\n if all(len(top) == 1 for top in top_partitions):\n # All nodes are mapped\n permutations = self._find_permutations(top_partitions, bottom_partitions)\n self._update_orbits(orbits, permutations)\n if permutations:\n return [permutations], cosets\n else:\n return [], cosets\n\n permutations = []\n unmapped_nodes = {\n (node, idx)\n for idx, t_partition in enumerate(top_partitions)\n for node in t_partition\n if len(t_partition) > 1\n }\n node, pair_idx = min(unmapped_nodes)\n b_partition = bottom_partitions[pair_idx]\n\n for node2 in sorted(b_partition):\n if len(b_partition) == 1:\n # Can never result in symmetry\n continue\n if node != node2 and any(\n node in orbit and node2 in orbit for orbit in orbits\n ):\n # Orbit prune branch\n continue\n # REDUCTION\n # Couple node to node2\n partitions = self._couple_nodes(\n top_partitions,\n bottom_partitions,\n pair_idx,\n node,\n node2,\n graph,\n edge_colors,\n )\n for opp in partitions:\n new_top_partitions, new_bottom_partitions = opp\n\n new_perms, new_cosets = self._process_ordered_pair_partitions(\n graph,\n new_top_partitions,\n new_bottom_partitions,\n edge_colors,\n orbits,\n cosets,\n )\n # COMBINATION\n permutations += new_perms\n cosets.update(new_cosets)\n\n mapped = {\n k\n for top, bottom in zip(top_partitions, bottom_partitions)\n for k in top\n if len(top) == 1 and top == bottom\n }\n ks = {k for k in graph.nodes if k < node}\n # Have all nodes with ID < node been mapped?\n find_coset = ks <= mapped and node not in cosets\n if find_coset:\n # Find the orbit that contains node\n for orbit in orbits:\n if node in orbit:\n cosets[node] = orbit.copy()\n return permutations, cosets\n",
"path": "networkx/algorithms/isomorphism/ismags.py"
}
] | diff --git a/networkx/algorithms/isomorphism/ismags.py b/networkx/algorithms/isomorphism/ismags.py
index 4145be1150c..76fdee05c5a 100644
--- a/networkx/algorithms/isomorphism/ismags.py
+++ b/networkx/algorithms/isomorphism/ismags.py
@@ -184,8 +184,8 @@ def make_partitions(items, test):
def partition_to_color(partitions):
"""
- Creates a dictionary with for every item in partition for every partition
- in partitions the index of partition in partitions.
+ Creates a dictionary that maps each item in each partition to the index of
+ the partition to which it belongs.
Parameters
----------
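For reference, a minimal sketch of the behavior the reworded docstring above describes. The sample partitions are made up for illustration (not taken from the networkx test suite), and the loop simply mirrors the body of `partition_to_color` shown in the file above.

```python
# Illustration only: map every item to the index of the partition containing it.
partitions = [{"a", "b"}, {"c"}, {"d", "e"}]  # hypothetical input

colors = {}
for color, keys in enumerate(partitions):
    for key in keys:
        colors[key] = color

# Each item is tagged with the index of its partition.
assert colors == {"a": 0, "b": 0, "c": 1, "d": 2, "e": 2}
```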
|
biolab__orange3-4619 | Table doesn't output the current tab selection when switching between tabs
**Describe the bug**
If the Table widget has multiple tabs and some instances are selected on each tab, the output does not change when the user switches between tabs. The widget does not output the selection of the current tab; instead it outputs the last selection made on one of the previous tabs, and the user has to re-select the instances.
**Orange version:**
3.25.dev
**Expected behavior**
The output should be updated every time the current tab changes.

| [
{
"content": "import sys\nimport threading\nimport io\nimport csv\nimport itertools\nimport concurrent.futures\n\nfrom collections import OrderedDict, namedtuple\nfrom typing import List, Tuple, Iterable\n\nfrom math import isnan\n\nimport numpy\nfrom scipy.sparse import issparse\n\nfrom AnyQt.QtWidgets import (\n QTableView, QHeaderView, QAbstractButton, QApplication, QStyleOptionHeader,\n QStyle, QStylePainter, QStyledItemDelegate\n)\nfrom AnyQt.QtGui import QColor, QClipboard, QMouseEvent\nfrom AnyQt.QtCore import (\n Qt, QSize, QEvent, QByteArray, QMimeData, QObject, QMetaObject,\n QAbstractProxyModel, QIdentityProxyModel, QModelIndex,\n QItemSelectionModel, QItemSelection, QItemSelectionRange,\n Signal)\nfrom AnyQt.QtCore import pyqtSlot as Slot\n\nimport Orange.data\nfrom Orange.data.storage import Storage\nfrom Orange.data.table import Table\nfrom Orange.data.sql.table import SqlTable\nfrom Orange.statistics import basic_stats\n\nfrom Orange.widgets import gui\nfrom Orange.widgets.settings import Setting, DomainContextHandler\nfrom Orange.widgets.utils.widgetpreview import WidgetPreview\nfrom Orange.widgets.widget import OWWidget, Input, Output\nfrom Orange.widgets.utils import datacaching\nfrom Orange.widgets.utils.annotated_data import (create_annotated_table,\n ANNOTATED_DATA_SIGNAL_NAME)\nfrom Orange.widgets.utils.itemmodels import TableModel\n\n\nclass RichTableModel(TableModel):\n \"\"\"A TableModel with some extra bells and whistles/\n\n (adds support for gui.BarRole, include variable labels and icons\n in the header)\n \"\"\"\n #: Rich header data flags.\n Name, Labels, Icon = 1, 2, 4\n\n def __init__(self, sourcedata, parent=None):\n super().__init__(sourcedata, parent)\n\n self._header_flags = RichTableModel.Name\n self._continuous = [var.is_continuous for var in self.vars]\n labels = []\n for var in self.vars:\n if isinstance(var, Orange.data.Variable):\n labels.extend(var.attributes.keys())\n self._labels = list(sorted(\n {label for label in labels if not label.startswith(\"_\")}))\n\n def data(self, index, role=Qt.DisplayRole,\n # for faster local lookup\n _BarRole=gui.TableBarItem.BarRole):\n # pylint: disable=arguments-differ\n if role == _BarRole and self._continuous[index.column()]:\n val = super().data(index, TableModel.ValueRole)\n if val is None or isnan(val):\n return None\n\n dist = super().data(index, TableModel.VariableStatsRole)\n if dist is not None and dist.max > dist.min:\n return (val - dist.min) / (dist.max - dist.min)\n else:\n return None\n elif role == Qt.TextAlignmentRole and self._continuous[index.column()]:\n return Qt.AlignRight | Qt.AlignVCenter\n else:\n return super().data(index, role)\n\n def headerData(self, section, orientation, role):\n if orientation == Qt.Horizontal and role == Qt.DisplayRole:\n var = super().headerData(\n section, orientation, TableModel.VariableRole)\n if var is None:\n return super().headerData(\n section, orientation, Qt.DisplayRole)\n\n lines = []\n if self._header_flags & RichTableModel.Name:\n lines.append(var.name)\n if self._header_flags & RichTableModel.Labels:\n lines.extend(str(var.attributes.get(label, \"\"))\n for label in self._labels)\n return \"\\n\".join(lines)\n elif orientation == Qt.Horizontal and role == Qt.DecorationRole and \\\n self._header_flags & RichTableModel.Icon:\n var = super().headerData(\n section, orientation, TableModel.VariableRole)\n if var is not None:\n return gui.attributeIconDict[var]\n else:\n return None\n else:\n return super().headerData(section, orientation, role)\n\n def 
setRichHeaderFlags(self, flags):\n if flags != self._header_flags:\n self._header_flags = flags\n self.headerDataChanged.emit(\n Qt.Horizontal, 0, self.columnCount() - 1)\n\n def richHeaderFlags(self):\n return self._header_flags\n\n\nclass TableSliceProxy(QIdentityProxyModel):\n def __init__(self, parent=None, rowSlice=slice(0, -1), **kwargs):\n super().__init__(parent, **kwargs)\n self.__rowslice = rowSlice\n\n def setRowSlice(self, rowslice):\n if rowslice.step is not None and rowslice.step != 1:\n raise ValueError(\"invalid stride\")\n\n if self.__rowslice != rowslice:\n self.beginResetModel()\n self.__rowslice = rowslice\n self.endResetModel()\n\n def mapToSource(self, proxyindex):\n model = self.sourceModel()\n if model is None or not proxyindex.isValid():\n return QModelIndex()\n\n row, col = proxyindex.row(), proxyindex.column()\n row = row + self.__rowslice.start\n assert 0 <= row < model.rowCount()\n return model.createIndex(row, col, proxyindex.internalPointer())\n\n def mapFromSource(self, sourceindex):\n model = self.sourceModel()\n if model is None or not sourceindex.isValid():\n return QModelIndex()\n row, col = sourceindex.row(), sourceindex.column()\n row = row - self.__rowslice.start\n assert 0 <= row < self.rowCount()\n return self.createIndex(row, col, sourceindex.internalPointer())\n\n def rowCount(self, parent=QModelIndex()):\n if parent.isValid():\n return 0\n count = super().rowCount()\n start, stop, step = self.__rowslice.indices(count)\n assert step == 1\n return stop - start\n\n\nclass BlockSelectionModel(QItemSelectionModel):\n \"\"\"\n Item selection model ensuring the selection maintains a simple block\n like structure.\n\n e.g.\n\n [a b] c [d e]\n [f g] h [i j]\n\n is allowed but this is not\n\n [a] b c d e\n [f g] h [i j]\n\n I.e. 
select the Cartesian product of row and column indices.\n\n \"\"\"\n def __init__(self, model, parent=None, selectBlocks=True, **kwargs):\n super().__init__(model, parent, **kwargs)\n self.__selectBlocks = selectBlocks\n\n def select(self, selection, flags):\n \"\"\"Reimplemented.\"\"\"\n if isinstance(selection, QModelIndex):\n selection = QItemSelection(selection, selection)\n\n if not self.__selectBlocks:\n super().select(selection, flags)\n return\n\n model = self.model()\n\n def to_ranges(spans):\n return list(range(*r) for r in spans)\n\n if flags & QItemSelectionModel.Current: # no current selection support\n flags &= ~QItemSelectionModel.Current\n if flags & QItemSelectionModel.Toggle: # no toggle support either\n flags &= ~QItemSelectionModel.Toggle\n flags |= QItemSelectionModel.Select\n\n if flags == QItemSelectionModel.ClearAndSelect:\n # extend selection ranges in `selection` to span all row/columns\n sel_rows = selection_rows(selection)\n sel_cols = selection_columns(selection)\n selection = QItemSelection()\n for row_range, col_range in \\\n itertools.product(to_ranges(sel_rows), to_ranges(sel_cols)):\n selection.select(\n model.index(row_range.start, col_range.start),\n model.index(row_range.stop - 1, col_range.stop - 1)\n )\n elif flags & (QItemSelectionModel.Select |\n QItemSelectionModel.Deselect):\n # extend all selection ranges in `selection` with the full current\n # row/col spans\n rows, cols = selection_blocks(self.selection())\n sel_rows = selection_rows(selection)\n sel_cols = selection_columns(selection)\n ext_selection = QItemSelection()\n for row_range, col_range in \\\n itertools.product(to_ranges(rows), to_ranges(sel_cols)):\n ext_selection.select(\n model.index(row_range.start, col_range.start),\n model.index(row_range.stop - 1, col_range.stop - 1)\n )\n for row_range, col_range in \\\n itertools.product(to_ranges(sel_rows), to_ranges(cols)):\n ext_selection.select(\n model.index(row_range.start, col_range.start),\n model.index(row_range.stop - 1, col_range.stop - 1)\n )\n selection.merge(ext_selection, QItemSelectionModel.Select)\n super().select(selection, flags)\n\n def selectBlocks(self):\n \"\"\"Is the block selection in effect.\"\"\"\n return self.__selectBlocks\n\n def setSelectBlocks(self, state):\n \"\"\"Set the block selection state.\n\n If set to False, the selection model behaves as the base\n QItemSelectionModel\n\n \"\"\"\n self.__selectBlocks = state\n\n\ndef selection_rows(selection):\n # type: (QItemSelection) -> List[Tuple[int, int]]\n \"\"\"\n Return a list of ranges for all referenced rows contained in selection\n\n Parameters\n ----------\n selection : QItemSelection\n\n Returns\n -------\n rows : List[Tuple[int, int]]\n \"\"\"\n spans = set(range(s.top(), s.bottom() + 1) for s in selection)\n indices = sorted(set(itertools.chain(*spans)))\n return list(ranges(indices))\n\n\ndef selection_columns(selection):\n # type: (QItemSelection) -> List[Tuple[int, int]]\n \"\"\"\n Return a list of ranges for all referenced columns contained in selection\n\n Parameters\n ----------\n selection : QItemSelection\n\n Returns\n -------\n rows : List[Tuple[int, int]]\n \"\"\"\n spans = {range(s.left(), s.right() + 1) for s in selection}\n indices = sorted(set(itertools.chain(*spans)))\n return list(ranges(indices))\n\n\ndef selection_blocks(selection):\n # type: (QItemSelection) -> Tuple[List[Tuple[int, int]], List[Tuple[int, int]]]\n if selection.count() > 0:\n rowranges = {range(span.top(), span.bottom() + 1)\n for span in selection}\n colranges = 
{range(span.left(), span.right() + 1)\n for span in selection}\n else:\n return [], []\n\n rows = sorted(set(itertools.chain(*rowranges)))\n cols = sorted(set(itertools.chain(*colranges)))\n return list(ranges(rows)), list(ranges(cols))\n\n\ndef ranges(indices):\n # type: (Iterable[int]) -> Iterable[Tuple[int, int]]\n \"\"\"\n Group consecutive indices into `(start, stop)` tuple 'ranges'.\n\n >>> list(ranges([1, 2, 3, 5, 3, 4]))\n >>> [(1, 4), (5, 6), (3, 5)]\n\n \"\"\"\n g = itertools.groupby(enumerate(indices),\n key=lambda t: t[1] - t[0])\n for _, range_ind in g:\n range_ind = list(range_ind)\n _, start = range_ind[0]\n _, end = range_ind[-1]\n yield start, end + 1\n\n\ndef table_selection_to_mime_data(table):\n \"\"\"Copy the current selection in a QTableView to the clipboard.\n \"\"\"\n lines = table_selection_to_list(table)\n\n as_csv = lines_to_csv_string(lines, dialect=\"excel\").encode(\"utf-8\")\n as_tsv = lines_to_csv_string(lines, dialect=\"excel-tab\").encode(\"utf-8\")\n\n mime = QMimeData()\n mime.setData(\"text/csv\", QByteArray(as_csv))\n mime.setData(\"text/tab-separated-values\", QByteArray(as_tsv))\n mime.setData(\"text/plain\", QByteArray(as_tsv))\n return mime\n\n\ndef lines_to_csv_string(lines, dialect=\"excel\"):\n stream = io.StringIO()\n writer = csv.writer(stream, dialect=dialect)\n writer.writerows(lines)\n return stream.getvalue()\n\n\ndef table_selection_to_list(table):\n model = table.model()\n indexes = table.selectedIndexes()\n\n rows = sorted(set(index.row() for index in indexes))\n columns = sorted(set(index.column() for index in indexes))\n\n lines = []\n for row in rows:\n line = []\n for col in columns:\n val = model.index(row, col).data(Qt.DisplayRole)\n # TODO: use style item delegate displayText?\n line.append(str(val))\n lines.append(line)\n\n return lines\n\n\nTableSlot = namedtuple(\"TableSlot\", [\"input_id\", \"table\", \"summary\", \"view\"])\n\n\nclass TableView(QTableView):\n #: Signal emitted when selection finished. 
It is not emitted during\n #: mouse drag selection updates.\n selectionFinished = Signal()\n\n __mouseDown = False\n __selectionDidChange = False\n\n def setSelectionModel(self, selectionModel: QItemSelectionModel) -> None:\n sm = self.selectionModel()\n if sm is not None:\n sm.selectionChanged.disconnect(self.__on_selectionChanged)\n super().setSelectionModel(selectionModel)\n if selectionModel is not None:\n selectionModel.selectionChanged.connect(self.__on_selectionChanged)\n\n def __on_selectionChanged(self):\n if self.__mouseDown:\n self.__selectionDidChange = True\n else:\n self.selectionFinished.emit()\n\n def mousePressEvent(self, event: QMouseEvent) -> None:\n self.__mouseDown = event.button() == Qt.LeftButton\n super().mousePressEvent(event)\n\n def mouseReleaseEvent(self, event: QMouseEvent) -> None:\n super().mouseReleaseEvent(event)\n if self.__mouseDown and event.button() == Qt.LeftButton:\n self.__mouseDown = False\n if self.__selectionDidChange:\n self.__selectionDidChange = False\n self.selectionFinished.emit()\n\n\nclass DataTableView(TableView):\n dataset: Table\n input_slot: TableSlot\n\n\nclass OWDataTable(OWWidget):\n name = \"Data Table\"\n description = \"View the dataset in a spreadsheet.\"\n icon = \"icons/Table.svg\"\n priority = 50\n keywords = []\n\n buttons_area_orientation = Qt.Vertical\n\n class Inputs:\n data = Input(\"Data\", Table, multiple=True)\n\n class Outputs:\n selected_data = Output(\"Selected Data\", Table, default=True)\n annotated_data = Output(ANNOTATED_DATA_SIGNAL_NAME, Table)\n\n show_distributions = Setting(False)\n dist_color_RGB = Setting((220, 220, 220, 255))\n show_attribute_labels = Setting(True)\n select_rows = Setting(True)\n auto_commit = Setting(True)\n\n color_by_class = Setting(True)\n settingsHandler = DomainContextHandler(\n match_values=DomainContextHandler.MATCH_VALUES_ALL)\n selected_rows = Setting([], schema_only=True)\n selected_cols = Setting([], schema_only=True)\n\n def __init__(self):\n super().__init__()\n\n self._inputs = OrderedDict()\n\n self.__pending_selected_rows = self.selected_rows\n self.selected_rows = None\n self.__pending_selected_cols = self.selected_cols\n self.selected_cols = None\n\n self.dist_color = QColor(*self.dist_color_RGB)\n\n info_box = gui.vBox(self.controlArea, \"Info\")\n self.info_ex = gui.widgetLabel(info_box, 'No data on input.', )\n self.info_ex.setWordWrap(True)\n self.info_attr = gui.widgetLabel(info_box, ' ')\n self.info_attr.setWordWrap(True)\n self.info_class = gui.widgetLabel(info_box, ' ')\n self.info_class.setWordWrap(True)\n self.info_meta = gui.widgetLabel(info_box, ' ')\n self.info_meta.setWordWrap(True)\n info_box.setMinimumWidth(200)\n gui.separator(self.controlArea)\n\n box = gui.vBox(self.controlArea, \"Variables\")\n self.c_show_attribute_labels = gui.checkBox(\n box, self, \"show_attribute_labels\",\n \"Show variable labels (if present)\",\n callback=self._on_show_variable_labels_changed)\n\n gui.checkBox(box, self, \"show_distributions\",\n 'Visualize numeric values',\n callback=self._on_distribution_color_changed)\n gui.checkBox(box, self, \"color_by_class\", 'Color by instance classes',\n callback=self._on_distribution_color_changed)\n\n box = gui.vBox(self.controlArea, \"Selection\")\n\n gui.checkBox(box, self, \"select_rows\", \"Select full rows\",\n callback=self._on_select_rows_changed)\n\n gui.rubber(self.controlArea)\n\n reset = gui.button(\n None, self, \"Restore Original Order\", callback=self.restore_order,\n tooltip=\"Show rows in the original order\", 
autoDefault=False)\n self.buttonsArea.layout().insertWidget(0, reset)\n gui.auto_send(self.buttonsArea, self, \"auto_commit\")\n\n # GUI with tabs\n self.tabs = gui.tabWidget(self.mainArea)\n self.tabs.currentChanged.connect(self._on_current_tab_changed)\n\n def copy_to_clipboard(self):\n self.copy()\n\n @staticmethod\n def sizeHint():\n return QSize(800, 500)\n\n @Inputs.data\n def set_dataset(self, data, tid=None):\n \"\"\"Set the input dataset.\"\"\"\n self.closeContext()\n if data is not None:\n datasetname = getattr(data, \"name\", \"Data\")\n if tid in self._inputs:\n # update existing input slot\n slot = self._inputs[tid]\n view = slot.view\n # reset the (header) view state.\n view.setModel(None)\n view.horizontalHeader().setSortIndicator(-1, Qt.AscendingOrder)\n assert self.tabs.indexOf(view) != -1\n self.tabs.setTabText(self.tabs.indexOf(view), datasetname)\n else:\n view = DataTableView()\n view.setSortingEnabled(True)\n view.setHorizontalScrollMode(QTableView.ScrollPerPixel)\n\n if self.select_rows:\n view.setSelectionBehavior(QTableView.SelectRows)\n\n header = view.horizontalHeader()\n header.setSectionsMovable(True)\n header.setSectionsClickable(True)\n header.setSortIndicatorShown(True)\n header.setSortIndicator(-1, Qt.AscendingOrder)\n\n # QHeaderView does not 'reset' the model sort column,\n # because there is no guaranty (requirement) that the\n # models understand the -1 sort column.\n def sort_reset(index, order):\n if view.model() is not None and index == -1:\n view.model().sort(index, order)\n\n header.sortIndicatorChanged.connect(sort_reset)\n self.tabs.addTab(view, datasetname)\n\n view.dataset = data\n self.tabs.setCurrentWidget(view)\n\n self._setup_table_view(view, data)\n slot = TableSlot(tid, data, table_summary(data), view)\n view.input_slot = slot\n self._inputs[tid] = slot\n\n self.tabs.setCurrentIndex(self.tabs.indexOf(view))\n\n self.set_info(slot.summary)\n\n if isinstance(slot.summary.len, concurrent.futures.Future):\n def update(_):\n QMetaObject.invokeMethod(\n self, \"_update_info\", Qt.QueuedConnection)\n\n slot.summary.len.add_done_callback(update)\n\n elif tid in self._inputs:\n slot = self._inputs.pop(tid)\n view = slot.view\n view.hide()\n view.deleteLater()\n self.tabs.removeTab(self.tabs.indexOf(view))\n\n current = self.tabs.currentWidget()\n if current is not None:\n self.set_info(current.input_slot.summary)\n\n self.tabs.tabBar().setVisible(self.tabs.count() > 1)\n self.openContext(data)\n\n if data and self.__pending_selected_rows is not None:\n self.selected_rows = self.__pending_selected_rows\n self.__pending_selected_rows = None\n else:\n self.selected_rows = []\n\n if data and self.__pending_selected_cols is not None:\n self.selected_cols = self.__pending_selected_cols\n self.__pending_selected_cols = None\n else:\n self.selected_cols = []\n\n self.set_selection()\n self.unconditional_commit()\n\n def _setup_table_view(self, view, data):\n \"\"\"Setup the `view` (QTableView) with `data` (Orange.data.Table)\n \"\"\"\n if data is None:\n view.setModel(None)\n return\n\n datamodel = RichTableModel(data)\n\n rowcount = data.approx_len()\n\n if self.color_by_class and data.domain.has_discrete_class:\n color_schema = [\n QColor(*c) for c in data.domain.class_var.colors]\n else:\n color_schema = None\n if self.show_distributions:\n view.setItemDelegate(\n gui.TableBarItem(\n self, color=self.dist_color, color_schema=color_schema)\n )\n else:\n view.setItemDelegate(QStyledItemDelegate(self))\n\n # Enable/disable view sorting based on data's 
type\n view.setSortingEnabled(is_sortable(data))\n header = view.horizontalHeader()\n header.setSectionsClickable(is_sortable(data))\n header.setSortIndicatorShown(is_sortable(data))\n\n view.setModel(datamodel)\n\n vheader = view.verticalHeader()\n option = view.viewOptions()\n size = view.style().sizeFromContents(\n QStyle.CT_ItemViewItem, option,\n QSize(20, 20), view)\n\n vheader.setDefaultSectionSize(size.height() + 2)\n vheader.setMinimumSectionSize(5)\n vheader.setSectionResizeMode(QHeaderView.Fixed)\n\n # Limit the number of rows displayed in the QTableView\n # (workaround for QTBUG-18490 / QTBUG-28631)\n maxrows = (2 ** 31 - 1) // (vheader.defaultSectionSize() + 2)\n if rowcount > maxrows:\n sliceproxy = TableSliceProxy(\n parent=view, rowSlice=slice(0, maxrows))\n sliceproxy.setSourceModel(datamodel)\n # First reset the view (without this the header view retains\n # it's state - at this point invalid/broken)\n view.setModel(None)\n view.setModel(sliceproxy)\n\n assert view.model().rowCount() <= maxrows\n assert vheader.sectionSize(0) > 1 or datamodel.rowCount() == 0\n\n # update the header (attribute names)\n self._update_variable_labels(view)\n\n selmodel = BlockSelectionModel(\n view.model(), parent=view, selectBlocks=not self.select_rows)\n view.setSelectionModel(selmodel)\n view.selectionFinished.connect(self.update_selection)\n\n #noinspection PyBroadException\n def set_corner_text(self, table, text):\n \"\"\"Set table corner text.\"\"\"\n # As this is an ugly hack, do everything in\n # try - except blocks, as it may stop working in newer Qt.\n # pylint: disable=broad-except\n if not hasattr(table, \"btn\") and not hasattr(table, \"btnfailed\"):\n try:\n btn = table.findChild(QAbstractButton)\n\n class Efc(QObject):\n @staticmethod\n def eventFilter(o, e):\n if (isinstance(o, QAbstractButton) and\n e.type() == QEvent.Paint):\n # paint by hand (borrowed from QTableCornerButton)\n btn = o\n opt = QStyleOptionHeader()\n opt.initFrom(btn)\n state = QStyle.State_None\n if btn.isEnabled():\n state |= QStyle.State_Enabled\n if btn.isActiveWindow():\n state |= QStyle.State_Active\n if btn.isDown():\n state |= QStyle.State_Sunken\n opt.state = state\n opt.rect = btn.rect()\n opt.text = btn.text()\n opt.position = QStyleOptionHeader.OnlyOneSection\n painter = QStylePainter(btn)\n painter.drawControl(QStyle.CE_Header, opt)\n return True # eat event\n return False\n table.efc = Efc()\n # disconnect default handler for clicks and connect a new one, which supports\n # both selection and deselection of all data\n btn.clicked.disconnect()\n btn.installEventFilter(table.efc)\n btn.clicked.connect(self._on_select_all)\n table.btn = btn\n\n if sys.platform == \"darwin\":\n btn.setAttribute(Qt.WA_MacSmallSize)\n\n except Exception:\n table.btnfailed = True\n\n if hasattr(table, \"btn\"):\n try:\n btn = table.btn\n btn.setText(text)\n opt = QStyleOptionHeader()\n opt.text = btn.text()\n s = btn.style().sizeFromContents(\n QStyle.CT_HeaderSection,\n opt, QSize(),\n btn).expandedTo(QApplication.globalStrut())\n if s.isValid():\n table.verticalHeader().setMinimumWidth(s.width())\n except Exception:\n pass\n\n def _on_select_all(self, _):\n data_info = self.tabs.currentWidget().input_slot.summary\n if len(self.selected_rows) == data_info.len \\\n and len(self.selected_cols) == len(data_info.domain):\n self.tabs.currentWidget().clearSelection()\n else:\n self.tabs.currentWidget().selectAll()\n\n def _on_current_tab_changed(self, index):\n \"\"\"Update the info box on current tab change\"\"\"\n view = 
self.tabs.widget(index)\n if view is not None and view.model() is not None:\n self.set_info(view.input_slot.summary)\n else:\n self.set_info(None)\n\n def _update_variable_labels(self, view):\n \"Update the variable labels visibility for `view`\"\n model = view.model()\n if isinstance(model, TableSliceProxy):\n model = model.sourceModel()\n\n if self.show_attribute_labels:\n model.setRichHeaderFlags(\n RichTableModel.Labels | RichTableModel.Name)\n\n labelnames = set()\n domain = model.source.domain\n for a in itertools.chain(domain.metas, domain.variables):\n labelnames.update(a.attributes.keys())\n labelnames = sorted(\n [label for label in labelnames if not label.startswith(\"_\")])\n self.set_corner_text(view, \"\\n\".join([\"\"] + labelnames))\n else:\n model.setRichHeaderFlags(RichTableModel.Name)\n self.set_corner_text(view, \"\")\n\n def _on_show_variable_labels_changed(self):\n \"\"\"The variable labels (var.attribues) visibility was changed.\"\"\"\n for slot in self._inputs.values():\n self._update_variable_labels(slot.view)\n\n def _on_distribution_color_changed(self):\n for ti in range(self.tabs.count()):\n widget = self.tabs.widget(ti)\n model = widget.model()\n while isinstance(model, QAbstractProxyModel):\n model = model.sourceModel()\n data = model.source\n class_var = data.domain.class_var\n if self.color_by_class and class_var and class_var.is_discrete:\n color_schema = [QColor(*c) for c in class_var.colors]\n else:\n color_schema = None\n if self.show_distributions:\n delegate = gui.TableBarItem(self, color=self.dist_color,\n color_schema=color_schema)\n else:\n delegate = QStyledItemDelegate(self)\n widget.setItemDelegate(delegate)\n tab = self.tabs.currentWidget()\n if tab:\n tab.reset()\n\n def _on_select_rows_changed(self):\n for slot in self._inputs.values():\n selection_model = slot.view.selectionModel()\n selection_model.setSelectBlocks(not self.select_rows)\n if self.select_rows:\n slot.view.setSelectionBehavior(QTableView.SelectRows)\n # Expand the current selection to full row selection.\n selection_model.select(\n selection_model.selection(),\n QItemSelectionModel.Select | QItemSelectionModel.Rows\n )\n else:\n slot.view.setSelectionBehavior(QTableView.SelectItems)\n\n def restore_order(self):\n \"\"\"Restore the original data order of the current view.\"\"\"\n table = self.tabs.currentWidget()\n if table is not None:\n table.horizontalHeader().setSortIndicator(-1, Qt.AscendingOrder)\n\n def set_info(self, summary):\n if summary is None:\n self.info_ex.setText(\"No data on input.\")\n self.info_attr.setText(\"\")\n self.info_class.setText(\"\")\n self.info_meta.setText(\"\")\n else:\n info_len, info_attr, info_class, info_meta = \\\n format_summary(summary)\n\n self.info_ex.setText(info_len)\n self.info_attr.setText(info_attr)\n self.info_class.setText(info_class)\n self.info_meta.setText(info_meta)\n\n @Slot()\n def _update_info(self):\n current = self.tabs.currentWidget()\n if current is not None and current.model() is not None:\n self.set_info(current.input_slot.summary)\n\n def update_selection(self, *_):\n self.commit()\n\n def set_selection(self):\n if self.selected_rows and self.selected_cols:\n view = self.tabs.currentWidget()\n model = view.model()\n if model.rowCount() <= self.selected_rows[-1] or \\\n model.columnCount() <= self.selected_cols[-1]:\n return\n\n selection = QItemSelection()\n rowranges = list(ranges(self.selected_rows))\n colranges = list(ranges(self.selected_cols))\n\n for rowstart, rowend in rowranges:\n for colstart, colend in 
colranges:\n selection.append(\n QItemSelectionRange(\n view.model().index(rowstart, colstart),\n view.model().index(rowend - 1, colend - 1)\n )\n )\n view.selectionModel().select(\n selection, QItemSelectionModel.ClearAndSelect)\n\n @staticmethod\n def get_selection(view):\n \"\"\"\n Return the selected row and column indices of the selection in view.\n \"\"\"\n selmodel = view.selectionModel()\n\n selection = selmodel.selection()\n model = view.model()\n # map through the proxies into input table.\n while isinstance(model, QAbstractProxyModel):\n selection = model.mapSelectionToSource(selection)\n model = model.sourceModel()\n\n assert isinstance(selmodel, BlockSelectionModel)\n assert isinstance(model, TableModel)\n\n row_spans, col_spans = selection_blocks(selection)\n rows = list(itertools.chain.from_iterable(itertools.starmap(range, row_spans)))\n cols = list(itertools.chain.from_iterable(itertools.starmap(range, col_spans)))\n rows = numpy.array(rows, dtype=numpy.intp)\n # map the rows through the applied sorting (if any)\n rows = model.mapToSourceRows(rows)\n rows.sort()\n rows = rows.tolist()\n return rows, cols\n\n @staticmethod\n def _get_model(view):\n model = view.model()\n while isinstance(model, QAbstractProxyModel):\n model = model.sourceModel()\n return model\n\n def commit(self):\n \"\"\"\n Commit/send the current selected row/column selection.\n \"\"\"\n selected_data = table = rowsel = None\n view = self.tabs.currentWidget()\n if view and view.model() is not None:\n model = self._get_model(view)\n table = model.source # The input data table\n\n # Selections of individual instances are not implemented\n # for SqlTables\n if isinstance(table, SqlTable):\n self.Outputs.selected_data.send(selected_data)\n self.Outputs.annotated_data.send(None)\n return\n\n rowsel, colsel = self.get_selection(view)\n self.selected_rows, self.selected_cols = rowsel, colsel\n\n def select(data, rows, domain):\n \"\"\"\n Select the data subset with specified rows and domain subsets.\n\n If either rows or domain is None they mean select all.\n \"\"\"\n if rows is not None and domain is not None:\n return data.from_table(domain, data, rows)\n elif rows is not None:\n return data.from_table(data.domain, rows)\n elif domain is not None:\n return data.from_table(domain, data)\n else:\n return data\n\n domain = table.domain\n\n if len(colsel) < len(domain) + len(domain.metas):\n # only a subset of the columns is selected\n allvars = domain.class_vars + domain.metas + domain.attributes\n columns = [(c, model.headerData(c, Qt.Horizontal,\n TableModel.DomainRole))\n for c in colsel]\n assert all(role is not None for _, role in columns)\n\n def select_vars(role):\n \"\"\"select variables for role (TableModel.DomainRole)\"\"\"\n return [allvars[c] for c, r in columns if r == role]\n\n attrs = select_vars(TableModel.Attribute)\n if attrs and issparse(table.X):\n # for sparse data you can only select all attributes\n attrs = table.domain.attributes\n class_vars = select_vars(TableModel.ClassVar)\n metas = select_vars(TableModel.Meta)\n domain = Orange.data.Domain(attrs, class_vars, metas)\n\n # Avoid a copy if all/none rows are selected.\n if not rowsel:\n selected_data = None\n elif len(rowsel) == len(table):\n selected_data = select(table, None, domain)\n else:\n selected_data = select(table, rowsel, domain)\n\n self.Outputs.selected_data.send(selected_data)\n self.Outputs.annotated_data.send(create_annotated_table(table, rowsel))\n\n def copy(self):\n \"\"\"\n Copy current table selection to the 
clipboard.\n \"\"\"\n view = self.tabs.currentWidget()\n if view is not None:\n mime = table_selection_to_mime_data(view)\n QApplication.clipboard().setMimeData(\n mime, QClipboard.Clipboard\n )\n\n def send_report(self):\n view = self.tabs.currentWidget()\n if not view or not view.model():\n return\n model = self._get_model(view)\n self.report_data_brief(model.source)\n self.report_table(view)\n\n\n# Table Summary\n\n# Basic statistics for X/Y/metas arrays\nDenseArray = namedtuple(\n \"DenseArray\", [\"nans\", \"non_nans\", \"stats\"])\nSparseArray = namedtuple(\n \"SparseArray\", [\"nans\", \"non_nans\", \"stats\"])\nSparseBoolArray = namedtuple(\n \"SparseBoolArray\", [\"nans\", \"non_nans\", \"stats\"])\nNotAvailable = namedtuple(\"NotAvailable\", [])\n\n#: Orange.data.Table summary\nSummary = namedtuple(\n \"Summary\",\n [\"len\", \"domain\", \"X\", \"Y\", \"M\"])\n\n#: Orange.data.sql.table.SqlTable summary\nApproxSummary = namedtuple(\n \"ApproxSummary\",\n [\"approx_len\", \"len\", \"domain\", \"X\", \"Y\", \"M\"])\n\n\ndef table_summary(table):\n if isinstance(table, SqlTable):\n approx_len = table.approx_len()\n len_future = concurrent.futures.Future()\n\n def _len():\n len_future.set_result(len(table))\n threading.Thread(target=_len).start() # KILL ME !!!\n\n return ApproxSummary(approx_len, len_future, table.domain,\n NotAvailable(), NotAvailable(), NotAvailable())\n else:\n domain = table.domain\n n_instances = len(table)\n # dist = basic_stats.DomainBasicStats(table, include_metas=True)\n bstats = datacaching.getCached(\n table, basic_stats.DomainBasicStats, (table, True)\n )\n\n dist = bstats.stats\n # pylint: disable=unbalanced-tuple-unpacking\n X_dist, Y_dist, M_dist = numpy.split(\n dist, numpy.cumsum([len(domain.attributes),\n len(domain.class_vars)]))\n\n def parts(array, density, col_dist):\n array = numpy.atleast_2d(array)\n nans = sum([dist.nans for dist in col_dist])\n non_nans = sum([dist.non_nans for dist in col_dist])\n if density == Storage.DENSE:\n return DenseArray(nans, non_nans, col_dist)\n elif density == Storage.SPARSE:\n return SparseArray(nans, non_nans, col_dist)\n elif density == Storage.SPARSE_BOOL:\n return SparseBoolArray(nans, non_nans, col_dist)\n elif density == Storage.MISSING:\n return NotAvailable()\n else:\n assert False\n return None\n\n X_part = parts(table.X, table.X_density(), X_dist)\n Y_part = parts(table.Y, table.Y_density(), Y_dist)\n M_part = parts(table.metas, table.metas_density(), M_dist)\n return Summary(n_instances, domain, X_part, Y_part, M_part)\n\n\ndef format_summary(summary):\n text = []\n if isinstance(summary, ApproxSummary):\n if summary.len.done():\n text += [\"{} instances\".format(summary.len.result())]\n else:\n text += [\"~{} instances\".format(summary.approx_len)]\n\n elif isinstance(summary, Summary):\n text += [\"{} instances\".format(summary.len)]\n\n if sum(p.nans for p in [summary.X, summary.Y, summary.M]) == 0:\n text[-1] += \" (no missing values)\"\n\n def format_part(part):\n if isinstance(part, NotAvailable):\n return \"\"\n elif part.nans + part.non_nans == 0:\n return \"\"\n\n if isinstance(part, DenseArray):\n total = part.nans + part.non_nans\n miss = (\"%.1f%%\" % (100 * part.nans / total) if part.nans > 0\n else \"no\")\n return \" (%s missing values)\" % miss\n elif isinstance(part, (SparseArray, SparseBoolArray)):\n text = \" ({}, density {:.2f}%)\"\n tag = \"sparse\" if isinstance(part, SparseArray) else \"tags\"\n total = part.nans + part.non_nans\n return text.format(tag, 100 * part.non_nans / 
total)\n else:\n # MISSING, N/A\n return \"\"\n\n def sp(n):\n if n == 0:\n return \"No\", \"s\"\n elif n == 1:\n return str(n), ''\n else:\n return str(n), 's'\n\n text += [(\"%s feature%s\" % sp(len(summary.domain.attributes)))\n + format_part(summary.X)]\n\n if not summary.domain.class_vars:\n text += [\"No target variable.\"]\n else:\n if len(summary.domain.class_vars) > 1:\n c_text = \"%s outcome%s\" % sp(len(summary.domain.class_vars))\n elif summary.domain.has_continuous_class:\n c_text = \"Continuous target variable\"\n else:\n c_text = \"Discrete class with %s value%s\" % sp(\n len(summary.domain.class_var.values))\n c_text += format_part(summary.Y)\n text += [c_text]\n\n text += [(\"%s meta attribute%s\" % sp(len(summary.domain.metas)))\n + format_part(summary.M)]\n\n return text\n\n\ndef is_sortable(table):\n if isinstance(table, SqlTable):\n return False\n elif isinstance(table, Orange.data.Table):\n return True\n else:\n return False\n\n\ndef test_model():\n app = QApplication([])\n view = QTableView(\n sortingEnabled=True\n )\n data = Orange.data.Table(\"lenses\")\n model = TableModel(data)\n\n view.setModel(model)\n\n view.show()\n view.raise_()\n return app.exec()\n\n\nif __name__ == \"__main__\": # pragma: no cover\n WidgetPreview(OWDataTable).run(\n [(Table(\"iris\"), \"iris\"),\n (Table(\"brown-selected\"), \"brown-selected\"),\n (Table(\"housing\"), \"housing\")])\n",
"path": "Orange/widgets/data/owtable.py"
}
] | [
{
"content": "import sys\nimport threading\nimport io\nimport csv\nimport itertools\nimport concurrent.futures\n\nfrom collections import OrderedDict, namedtuple\nfrom typing import List, Tuple, Iterable\n\nfrom math import isnan\n\nimport numpy\nfrom scipy.sparse import issparse\n\nfrom AnyQt.QtWidgets import (\n QTableView, QHeaderView, QAbstractButton, QApplication, QStyleOptionHeader,\n QStyle, QStylePainter, QStyledItemDelegate\n)\nfrom AnyQt.QtGui import QColor, QClipboard, QMouseEvent\nfrom AnyQt.QtCore import (\n Qt, QSize, QEvent, QByteArray, QMimeData, QObject, QMetaObject,\n QAbstractProxyModel, QIdentityProxyModel, QModelIndex,\n QItemSelectionModel, QItemSelection, QItemSelectionRange,\n Signal)\nfrom AnyQt.QtCore import pyqtSlot as Slot\n\nimport Orange.data\nfrom Orange.data.storage import Storage\nfrom Orange.data.table import Table\nfrom Orange.data.sql.table import SqlTable\nfrom Orange.statistics import basic_stats\n\nfrom Orange.widgets import gui\nfrom Orange.widgets.settings import Setting, DomainContextHandler\nfrom Orange.widgets.utils.widgetpreview import WidgetPreview\nfrom Orange.widgets.widget import OWWidget, Input, Output\nfrom Orange.widgets.utils import datacaching\nfrom Orange.widgets.utils.annotated_data import (create_annotated_table,\n ANNOTATED_DATA_SIGNAL_NAME)\nfrom Orange.widgets.utils.itemmodels import TableModel\n\n\nclass RichTableModel(TableModel):\n \"\"\"A TableModel with some extra bells and whistles/\n\n (adds support for gui.BarRole, include variable labels and icons\n in the header)\n \"\"\"\n #: Rich header data flags.\n Name, Labels, Icon = 1, 2, 4\n\n def __init__(self, sourcedata, parent=None):\n super().__init__(sourcedata, parent)\n\n self._header_flags = RichTableModel.Name\n self._continuous = [var.is_continuous for var in self.vars]\n labels = []\n for var in self.vars:\n if isinstance(var, Orange.data.Variable):\n labels.extend(var.attributes.keys())\n self._labels = list(sorted(\n {label for label in labels if not label.startswith(\"_\")}))\n\n def data(self, index, role=Qt.DisplayRole,\n # for faster local lookup\n _BarRole=gui.TableBarItem.BarRole):\n # pylint: disable=arguments-differ\n if role == _BarRole and self._continuous[index.column()]:\n val = super().data(index, TableModel.ValueRole)\n if val is None or isnan(val):\n return None\n\n dist = super().data(index, TableModel.VariableStatsRole)\n if dist is not None and dist.max > dist.min:\n return (val - dist.min) / (dist.max - dist.min)\n else:\n return None\n elif role == Qt.TextAlignmentRole and self._continuous[index.column()]:\n return Qt.AlignRight | Qt.AlignVCenter\n else:\n return super().data(index, role)\n\n def headerData(self, section, orientation, role):\n if orientation == Qt.Horizontal and role == Qt.DisplayRole:\n var = super().headerData(\n section, orientation, TableModel.VariableRole)\n if var is None:\n return super().headerData(\n section, orientation, Qt.DisplayRole)\n\n lines = []\n if self._header_flags & RichTableModel.Name:\n lines.append(var.name)\n if self._header_flags & RichTableModel.Labels:\n lines.extend(str(var.attributes.get(label, \"\"))\n for label in self._labels)\n return \"\\n\".join(lines)\n elif orientation == Qt.Horizontal and role == Qt.DecorationRole and \\\n self._header_flags & RichTableModel.Icon:\n var = super().headerData(\n section, orientation, TableModel.VariableRole)\n if var is not None:\n return gui.attributeIconDict[var]\n else:\n return None\n else:\n return super().headerData(section, orientation, role)\n\n def 
setRichHeaderFlags(self, flags):\n if flags != self._header_flags:\n self._header_flags = flags\n self.headerDataChanged.emit(\n Qt.Horizontal, 0, self.columnCount() - 1)\n\n def richHeaderFlags(self):\n return self._header_flags\n\n\nclass TableSliceProxy(QIdentityProxyModel):\n def __init__(self, parent=None, rowSlice=slice(0, -1), **kwargs):\n super().__init__(parent, **kwargs)\n self.__rowslice = rowSlice\n\n def setRowSlice(self, rowslice):\n if rowslice.step is not None and rowslice.step != 1:\n raise ValueError(\"invalid stride\")\n\n if self.__rowslice != rowslice:\n self.beginResetModel()\n self.__rowslice = rowslice\n self.endResetModel()\n\n def mapToSource(self, proxyindex):\n model = self.sourceModel()\n if model is None or not proxyindex.isValid():\n return QModelIndex()\n\n row, col = proxyindex.row(), proxyindex.column()\n row = row + self.__rowslice.start\n assert 0 <= row < model.rowCount()\n return model.createIndex(row, col, proxyindex.internalPointer())\n\n def mapFromSource(self, sourceindex):\n model = self.sourceModel()\n if model is None or not sourceindex.isValid():\n return QModelIndex()\n row, col = sourceindex.row(), sourceindex.column()\n row = row - self.__rowslice.start\n assert 0 <= row < self.rowCount()\n return self.createIndex(row, col, sourceindex.internalPointer())\n\n def rowCount(self, parent=QModelIndex()):\n if parent.isValid():\n return 0\n count = super().rowCount()\n start, stop, step = self.__rowslice.indices(count)\n assert step == 1\n return stop - start\n\n\nclass BlockSelectionModel(QItemSelectionModel):\n \"\"\"\n Item selection model ensuring the selection maintains a simple block\n like structure.\n\n e.g.\n\n [a b] c [d e]\n [f g] h [i j]\n\n is allowed but this is not\n\n [a] b c d e\n [f g] h [i j]\n\n I.e. 
select the Cartesian product of row and column indices.\n\n \"\"\"\n def __init__(self, model, parent=None, selectBlocks=True, **kwargs):\n super().__init__(model, parent, **kwargs)\n self.__selectBlocks = selectBlocks\n\n def select(self, selection, flags):\n \"\"\"Reimplemented.\"\"\"\n if isinstance(selection, QModelIndex):\n selection = QItemSelection(selection, selection)\n\n if not self.__selectBlocks:\n super().select(selection, flags)\n return\n\n model = self.model()\n\n def to_ranges(spans):\n return list(range(*r) for r in spans)\n\n if flags & QItemSelectionModel.Current: # no current selection support\n flags &= ~QItemSelectionModel.Current\n if flags & QItemSelectionModel.Toggle: # no toggle support either\n flags &= ~QItemSelectionModel.Toggle\n flags |= QItemSelectionModel.Select\n\n if flags == QItemSelectionModel.ClearAndSelect:\n # extend selection ranges in `selection` to span all row/columns\n sel_rows = selection_rows(selection)\n sel_cols = selection_columns(selection)\n selection = QItemSelection()\n for row_range, col_range in \\\n itertools.product(to_ranges(sel_rows), to_ranges(sel_cols)):\n selection.select(\n model.index(row_range.start, col_range.start),\n model.index(row_range.stop - 1, col_range.stop - 1)\n )\n elif flags & (QItemSelectionModel.Select |\n QItemSelectionModel.Deselect):\n # extend all selection ranges in `selection` with the full current\n # row/col spans\n rows, cols = selection_blocks(self.selection())\n sel_rows = selection_rows(selection)\n sel_cols = selection_columns(selection)\n ext_selection = QItemSelection()\n for row_range, col_range in \\\n itertools.product(to_ranges(rows), to_ranges(sel_cols)):\n ext_selection.select(\n model.index(row_range.start, col_range.start),\n model.index(row_range.stop - 1, col_range.stop - 1)\n )\n for row_range, col_range in \\\n itertools.product(to_ranges(sel_rows), to_ranges(cols)):\n ext_selection.select(\n model.index(row_range.start, col_range.start),\n model.index(row_range.stop - 1, col_range.stop - 1)\n )\n selection.merge(ext_selection, QItemSelectionModel.Select)\n super().select(selection, flags)\n\n def selectBlocks(self):\n \"\"\"Is the block selection in effect.\"\"\"\n return self.__selectBlocks\n\n def setSelectBlocks(self, state):\n \"\"\"Set the block selection state.\n\n If set to False, the selection model behaves as the base\n QItemSelectionModel\n\n \"\"\"\n self.__selectBlocks = state\n\n\ndef selection_rows(selection):\n # type: (QItemSelection) -> List[Tuple[int, int]]\n \"\"\"\n Return a list of ranges for all referenced rows contained in selection\n\n Parameters\n ----------\n selection : QItemSelection\n\n Returns\n -------\n rows : List[Tuple[int, int]]\n \"\"\"\n spans = set(range(s.top(), s.bottom() + 1) for s in selection)\n indices = sorted(set(itertools.chain(*spans)))\n return list(ranges(indices))\n\n\ndef selection_columns(selection):\n # type: (QItemSelection) -> List[Tuple[int, int]]\n \"\"\"\n Return a list of ranges for all referenced columns contained in selection\n\n Parameters\n ----------\n selection : QItemSelection\n\n Returns\n -------\n rows : List[Tuple[int, int]]\n \"\"\"\n spans = {range(s.left(), s.right() + 1) for s in selection}\n indices = sorted(set(itertools.chain(*spans)))\n return list(ranges(indices))\n\n\ndef selection_blocks(selection):\n # type: (QItemSelection) -> Tuple[List[Tuple[int, int]], List[Tuple[int, int]]]\n if selection.count() > 0:\n rowranges = {range(span.top(), span.bottom() + 1)\n for span in selection}\n colranges = 
{range(span.left(), span.right() + 1)\n for span in selection}\n else:\n return [], []\n\n rows = sorted(set(itertools.chain(*rowranges)))\n cols = sorted(set(itertools.chain(*colranges)))\n return list(ranges(rows)), list(ranges(cols))\n\n\ndef ranges(indices):\n # type: (Iterable[int]) -> Iterable[Tuple[int, int]]\n \"\"\"\n Group consecutive indices into `(start, stop)` tuple 'ranges'.\n\n >>> list(ranges([1, 2, 3, 5, 3, 4]))\n >>> [(1, 4), (5, 6), (3, 5)]\n\n \"\"\"\n g = itertools.groupby(enumerate(indices),\n key=lambda t: t[1] - t[0])\n for _, range_ind in g:\n range_ind = list(range_ind)\n _, start = range_ind[0]\n _, end = range_ind[-1]\n yield start, end + 1\n\n\ndef table_selection_to_mime_data(table):\n \"\"\"Copy the current selection in a QTableView to the clipboard.\n \"\"\"\n lines = table_selection_to_list(table)\n\n as_csv = lines_to_csv_string(lines, dialect=\"excel\").encode(\"utf-8\")\n as_tsv = lines_to_csv_string(lines, dialect=\"excel-tab\").encode(\"utf-8\")\n\n mime = QMimeData()\n mime.setData(\"text/csv\", QByteArray(as_csv))\n mime.setData(\"text/tab-separated-values\", QByteArray(as_tsv))\n mime.setData(\"text/plain\", QByteArray(as_tsv))\n return mime\n\n\ndef lines_to_csv_string(lines, dialect=\"excel\"):\n stream = io.StringIO()\n writer = csv.writer(stream, dialect=dialect)\n writer.writerows(lines)\n return stream.getvalue()\n\n\ndef table_selection_to_list(table):\n model = table.model()\n indexes = table.selectedIndexes()\n\n rows = sorted(set(index.row() for index in indexes))\n columns = sorted(set(index.column() for index in indexes))\n\n lines = []\n for row in rows:\n line = []\n for col in columns:\n val = model.index(row, col).data(Qt.DisplayRole)\n # TODO: use style item delegate displayText?\n line.append(str(val))\n lines.append(line)\n\n return lines\n\n\nTableSlot = namedtuple(\"TableSlot\", [\"input_id\", \"table\", \"summary\", \"view\"])\n\n\nclass TableView(QTableView):\n #: Signal emitted when selection finished. 
It is not emitted during\n #: mouse drag selection updates.\n selectionFinished = Signal()\n\n __mouseDown = False\n __selectionDidChange = False\n\n def setSelectionModel(self, selectionModel: QItemSelectionModel) -> None:\n sm = self.selectionModel()\n if sm is not None:\n sm.selectionChanged.disconnect(self.__on_selectionChanged)\n super().setSelectionModel(selectionModel)\n if selectionModel is not None:\n selectionModel.selectionChanged.connect(self.__on_selectionChanged)\n\n def __on_selectionChanged(self):\n if self.__mouseDown:\n self.__selectionDidChange = True\n else:\n self.selectionFinished.emit()\n\n def mousePressEvent(self, event: QMouseEvent) -> None:\n self.__mouseDown = event.button() == Qt.LeftButton\n super().mousePressEvent(event)\n\n def mouseReleaseEvent(self, event: QMouseEvent) -> None:\n super().mouseReleaseEvent(event)\n if self.__mouseDown and event.button() == Qt.LeftButton:\n self.__mouseDown = False\n if self.__selectionDidChange:\n self.__selectionDidChange = False\n self.selectionFinished.emit()\n\n\nclass DataTableView(TableView):\n dataset: Table\n input_slot: TableSlot\n\n\nclass OWDataTable(OWWidget):\n name = \"Data Table\"\n description = \"View the dataset in a spreadsheet.\"\n icon = \"icons/Table.svg\"\n priority = 50\n keywords = []\n\n buttons_area_orientation = Qt.Vertical\n\n class Inputs:\n data = Input(\"Data\", Table, multiple=True)\n\n class Outputs:\n selected_data = Output(\"Selected Data\", Table, default=True)\n annotated_data = Output(ANNOTATED_DATA_SIGNAL_NAME, Table)\n\n show_distributions = Setting(False)\n dist_color_RGB = Setting((220, 220, 220, 255))\n show_attribute_labels = Setting(True)\n select_rows = Setting(True)\n auto_commit = Setting(True)\n\n color_by_class = Setting(True)\n settingsHandler = DomainContextHandler(\n match_values=DomainContextHandler.MATCH_VALUES_ALL)\n selected_rows = Setting([], schema_only=True)\n selected_cols = Setting([], schema_only=True)\n\n def __init__(self):\n super().__init__()\n\n self._inputs = OrderedDict()\n\n self.__pending_selected_rows = self.selected_rows\n self.selected_rows = None\n self.__pending_selected_cols = self.selected_cols\n self.selected_cols = None\n\n self.dist_color = QColor(*self.dist_color_RGB)\n\n info_box = gui.vBox(self.controlArea, \"Info\")\n self.info_ex = gui.widgetLabel(info_box, 'No data on input.', )\n self.info_ex.setWordWrap(True)\n self.info_attr = gui.widgetLabel(info_box, ' ')\n self.info_attr.setWordWrap(True)\n self.info_class = gui.widgetLabel(info_box, ' ')\n self.info_class.setWordWrap(True)\n self.info_meta = gui.widgetLabel(info_box, ' ')\n self.info_meta.setWordWrap(True)\n info_box.setMinimumWidth(200)\n gui.separator(self.controlArea)\n\n box = gui.vBox(self.controlArea, \"Variables\")\n self.c_show_attribute_labels = gui.checkBox(\n box, self, \"show_attribute_labels\",\n \"Show variable labels (if present)\",\n callback=self._on_show_variable_labels_changed)\n\n gui.checkBox(box, self, \"show_distributions\",\n 'Visualize numeric values',\n callback=self._on_distribution_color_changed)\n gui.checkBox(box, self, \"color_by_class\", 'Color by instance classes',\n callback=self._on_distribution_color_changed)\n\n box = gui.vBox(self.controlArea, \"Selection\")\n\n gui.checkBox(box, self, \"select_rows\", \"Select full rows\",\n callback=self._on_select_rows_changed)\n\n gui.rubber(self.controlArea)\n\n reset = gui.button(\n None, self, \"Restore Original Order\", callback=self.restore_order,\n tooltip=\"Show rows in the original order\", 
autoDefault=False)\n self.buttonsArea.layout().insertWidget(0, reset)\n gui.auto_send(self.buttonsArea, self, \"auto_commit\")\n\n # GUI with tabs\n self.tabs = gui.tabWidget(self.mainArea)\n self.tabs.currentChanged.connect(self._on_current_tab_changed)\n\n def copy_to_clipboard(self):\n self.copy()\n\n @staticmethod\n def sizeHint():\n return QSize(800, 500)\n\n @Inputs.data\n def set_dataset(self, data, tid=None):\n \"\"\"Set the input dataset.\"\"\"\n self.closeContext()\n if data is not None:\n datasetname = getattr(data, \"name\", \"Data\")\n if tid in self._inputs:\n # update existing input slot\n slot = self._inputs[tid]\n view = slot.view\n # reset the (header) view state.\n view.setModel(None)\n view.horizontalHeader().setSortIndicator(-1, Qt.AscendingOrder)\n assert self.tabs.indexOf(view) != -1\n self.tabs.setTabText(self.tabs.indexOf(view), datasetname)\n else:\n view = DataTableView()\n view.setSortingEnabled(True)\n view.setHorizontalScrollMode(QTableView.ScrollPerPixel)\n\n if self.select_rows:\n view.setSelectionBehavior(QTableView.SelectRows)\n\n header = view.horizontalHeader()\n header.setSectionsMovable(True)\n header.setSectionsClickable(True)\n header.setSortIndicatorShown(True)\n header.setSortIndicator(-1, Qt.AscendingOrder)\n\n # QHeaderView does not 'reset' the model sort column,\n # because there is no guaranty (requirement) that the\n # models understand the -1 sort column.\n def sort_reset(index, order):\n if view.model() is not None and index == -1:\n view.model().sort(index, order)\n\n header.sortIndicatorChanged.connect(sort_reset)\n self.tabs.addTab(view, datasetname)\n\n view.dataset = data\n self.tabs.setCurrentWidget(view)\n\n self._setup_table_view(view, data)\n slot = TableSlot(tid, data, table_summary(data), view)\n view.input_slot = slot\n self._inputs[tid] = slot\n\n self.tabs.setCurrentIndex(self.tabs.indexOf(view))\n\n self.set_info(slot.summary)\n\n if isinstance(slot.summary.len, concurrent.futures.Future):\n def update(_):\n QMetaObject.invokeMethod(\n self, \"_update_info\", Qt.QueuedConnection)\n\n slot.summary.len.add_done_callback(update)\n\n elif tid in self._inputs:\n slot = self._inputs.pop(tid)\n view = slot.view\n view.hide()\n view.deleteLater()\n self.tabs.removeTab(self.tabs.indexOf(view))\n\n current = self.tabs.currentWidget()\n if current is not None:\n self.set_info(current.input_slot.summary)\n\n self.tabs.tabBar().setVisible(self.tabs.count() > 1)\n self.openContext(data)\n\n if data and self.__pending_selected_rows is not None:\n self.selected_rows = self.__pending_selected_rows\n self.__pending_selected_rows = None\n else:\n self.selected_rows = []\n\n if data and self.__pending_selected_cols is not None:\n self.selected_cols = self.__pending_selected_cols\n self.__pending_selected_cols = None\n else:\n self.selected_cols = []\n\n self.set_selection()\n self.unconditional_commit()\n\n def _setup_table_view(self, view, data):\n \"\"\"Setup the `view` (QTableView) with `data` (Orange.data.Table)\n \"\"\"\n if data is None:\n view.setModel(None)\n return\n\n datamodel = RichTableModel(data)\n\n rowcount = data.approx_len()\n\n if self.color_by_class and data.domain.has_discrete_class:\n color_schema = [\n QColor(*c) for c in data.domain.class_var.colors]\n else:\n color_schema = None\n if self.show_distributions:\n view.setItemDelegate(\n gui.TableBarItem(\n self, color=self.dist_color, color_schema=color_schema)\n )\n else:\n view.setItemDelegate(QStyledItemDelegate(self))\n\n # Enable/disable view sorting based on data's 
type\n view.setSortingEnabled(is_sortable(data))\n header = view.horizontalHeader()\n header.setSectionsClickable(is_sortable(data))\n header.setSortIndicatorShown(is_sortable(data))\n\n view.setModel(datamodel)\n\n vheader = view.verticalHeader()\n option = view.viewOptions()\n size = view.style().sizeFromContents(\n QStyle.CT_ItemViewItem, option,\n QSize(20, 20), view)\n\n vheader.setDefaultSectionSize(size.height() + 2)\n vheader.setMinimumSectionSize(5)\n vheader.setSectionResizeMode(QHeaderView.Fixed)\n\n # Limit the number of rows displayed in the QTableView\n # (workaround for QTBUG-18490 / QTBUG-28631)\n maxrows = (2 ** 31 - 1) // (vheader.defaultSectionSize() + 2)\n if rowcount > maxrows:\n sliceproxy = TableSliceProxy(\n parent=view, rowSlice=slice(0, maxrows))\n sliceproxy.setSourceModel(datamodel)\n # First reset the view (without this the header view retains\n # it's state - at this point invalid/broken)\n view.setModel(None)\n view.setModel(sliceproxy)\n\n assert view.model().rowCount() <= maxrows\n assert vheader.sectionSize(0) > 1 or datamodel.rowCount() == 0\n\n # update the header (attribute names)\n self._update_variable_labels(view)\n\n selmodel = BlockSelectionModel(\n view.model(), parent=view, selectBlocks=not self.select_rows)\n view.setSelectionModel(selmodel)\n view.selectionFinished.connect(self.update_selection)\n\n #noinspection PyBroadException\n def set_corner_text(self, table, text):\n \"\"\"Set table corner text.\"\"\"\n # As this is an ugly hack, do everything in\n # try - except blocks, as it may stop working in newer Qt.\n # pylint: disable=broad-except\n if not hasattr(table, \"btn\") and not hasattr(table, \"btnfailed\"):\n try:\n btn = table.findChild(QAbstractButton)\n\n class Efc(QObject):\n @staticmethod\n def eventFilter(o, e):\n if (isinstance(o, QAbstractButton) and\n e.type() == QEvent.Paint):\n # paint by hand (borrowed from QTableCornerButton)\n btn = o\n opt = QStyleOptionHeader()\n opt.initFrom(btn)\n state = QStyle.State_None\n if btn.isEnabled():\n state |= QStyle.State_Enabled\n if btn.isActiveWindow():\n state |= QStyle.State_Active\n if btn.isDown():\n state |= QStyle.State_Sunken\n opt.state = state\n opt.rect = btn.rect()\n opt.text = btn.text()\n opt.position = QStyleOptionHeader.OnlyOneSection\n painter = QStylePainter(btn)\n painter.drawControl(QStyle.CE_Header, opt)\n return True # eat event\n return False\n table.efc = Efc()\n # disconnect default handler for clicks and connect a new one, which supports\n # both selection and deselection of all data\n btn.clicked.disconnect()\n btn.installEventFilter(table.efc)\n btn.clicked.connect(self._on_select_all)\n table.btn = btn\n\n if sys.platform == \"darwin\":\n btn.setAttribute(Qt.WA_MacSmallSize)\n\n except Exception:\n table.btnfailed = True\n\n if hasattr(table, \"btn\"):\n try:\n btn = table.btn\n btn.setText(text)\n opt = QStyleOptionHeader()\n opt.text = btn.text()\n s = btn.style().sizeFromContents(\n QStyle.CT_HeaderSection,\n opt, QSize(),\n btn).expandedTo(QApplication.globalStrut())\n if s.isValid():\n table.verticalHeader().setMinimumWidth(s.width())\n except Exception:\n pass\n\n def _on_select_all(self, _):\n data_info = self.tabs.currentWidget().input_slot.summary\n if len(self.selected_rows) == data_info.len \\\n and len(self.selected_cols) == len(data_info.domain):\n self.tabs.currentWidget().clearSelection()\n else:\n self.tabs.currentWidget().selectAll()\n\n def _on_current_tab_changed(self, index):\n \"\"\"Update the info box on current tab change\"\"\"\n view = 
self.tabs.widget(index)\n if view is not None and view.model() is not None:\n self.set_info(view.input_slot.summary)\n self.update_selection()\n else:\n self.set_info(None)\n\n def _update_variable_labels(self, view):\n \"Update the variable labels visibility for `view`\"\n model = view.model()\n if isinstance(model, TableSliceProxy):\n model = model.sourceModel()\n\n if self.show_attribute_labels:\n model.setRichHeaderFlags(\n RichTableModel.Labels | RichTableModel.Name)\n\n labelnames = set()\n domain = model.source.domain\n for a in itertools.chain(domain.metas, domain.variables):\n labelnames.update(a.attributes.keys())\n labelnames = sorted(\n [label for label in labelnames if not label.startswith(\"_\")])\n self.set_corner_text(view, \"\\n\".join([\"\"] + labelnames))\n else:\n model.setRichHeaderFlags(RichTableModel.Name)\n self.set_corner_text(view, \"\")\n\n def _on_show_variable_labels_changed(self):\n \"\"\"The variable labels (var.attribues) visibility was changed.\"\"\"\n for slot in self._inputs.values():\n self._update_variable_labels(slot.view)\n\n def _on_distribution_color_changed(self):\n for ti in range(self.tabs.count()):\n widget = self.tabs.widget(ti)\n model = widget.model()\n while isinstance(model, QAbstractProxyModel):\n model = model.sourceModel()\n data = model.source\n class_var = data.domain.class_var\n if self.color_by_class and class_var and class_var.is_discrete:\n color_schema = [QColor(*c) for c in class_var.colors]\n else:\n color_schema = None\n if self.show_distributions:\n delegate = gui.TableBarItem(self, color=self.dist_color,\n color_schema=color_schema)\n else:\n delegate = QStyledItemDelegate(self)\n widget.setItemDelegate(delegate)\n tab = self.tabs.currentWidget()\n if tab:\n tab.reset()\n\n def _on_select_rows_changed(self):\n for slot in self._inputs.values():\n selection_model = slot.view.selectionModel()\n selection_model.setSelectBlocks(not self.select_rows)\n if self.select_rows:\n slot.view.setSelectionBehavior(QTableView.SelectRows)\n # Expand the current selection to full row selection.\n selection_model.select(\n selection_model.selection(),\n QItemSelectionModel.Select | QItemSelectionModel.Rows\n )\n else:\n slot.view.setSelectionBehavior(QTableView.SelectItems)\n\n def restore_order(self):\n \"\"\"Restore the original data order of the current view.\"\"\"\n table = self.tabs.currentWidget()\n if table is not None:\n table.horizontalHeader().setSortIndicator(-1, Qt.AscendingOrder)\n\n def set_info(self, summary):\n if summary is None:\n self.info_ex.setText(\"No data on input.\")\n self.info_attr.setText(\"\")\n self.info_class.setText(\"\")\n self.info_meta.setText(\"\")\n else:\n info_len, info_attr, info_class, info_meta = \\\n format_summary(summary)\n\n self.info_ex.setText(info_len)\n self.info_attr.setText(info_attr)\n self.info_class.setText(info_class)\n self.info_meta.setText(info_meta)\n\n @Slot()\n def _update_info(self):\n current = self.tabs.currentWidget()\n if current is not None and current.model() is not None:\n self.set_info(current.input_slot.summary)\n\n def update_selection(self, *_):\n self.commit()\n\n def set_selection(self):\n if self.selected_rows and self.selected_cols:\n view = self.tabs.currentWidget()\n model = view.model()\n if model.rowCount() <= self.selected_rows[-1] or \\\n model.columnCount() <= self.selected_cols[-1]:\n return\n\n selection = QItemSelection()\n rowranges = list(ranges(self.selected_rows))\n colranges = list(ranges(self.selected_cols))\n\n for rowstart, rowend in rowranges:\n 
for colstart, colend in colranges:\n selection.append(\n QItemSelectionRange(\n view.model().index(rowstart, colstart),\n view.model().index(rowend - 1, colend - 1)\n )\n )\n view.selectionModel().select(\n selection, QItemSelectionModel.ClearAndSelect)\n\n @staticmethod\n def get_selection(view):\n \"\"\"\n Return the selected row and column indices of the selection in view.\n \"\"\"\n selmodel = view.selectionModel()\n\n selection = selmodel.selection()\n model = view.model()\n # map through the proxies into input table.\n while isinstance(model, QAbstractProxyModel):\n selection = model.mapSelectionToSource(selection)\n model = model.sourceModel()\n\n assert isinstance(selmodel, BlockSelectionModel)\n assert isinstance(model, TableModel)\n\n row_spans, col_spans = selection_blocks(selection)\n rows = list(itertools.chain.from_iterable(itertools.starmap(range, row_spans)))\n cols = list(itertools.chain.from_iterable(itertools.starmap(range, col_spans)))\n rows = numpy.array(rows, dtype=numpy.intp)\n # map the rows through the applied sorting (if any)\n rows = model.mapToSourceRows(rows)\n rows.sort()\n rows = rows.tolist()\n return rows, cols\n\n @staticmethod\n def _get_model(view):\n model = view.model()\n while isinstance(model, QAbstractProxyModel):\n model = model.sourceModel()\n return model\n\n def commit(self):\n \"\"\"\n Commit/send the current selected row/column selection.\n \"\"\"\n selected_data = table = rowsel = None\n view = self.tabs.currentWidget()\n if view and view.model() is not None:\n model = self._get_model(view)\n table = model.source # The input data table\n\n # Selections of individual instances are not implemented\n # for SqlTables\n if isinstance(table, SqlTable):\n self.Outputs.selected_data.send(selected_data)\n self.Outputs.annotated_data.send(None)\n return\n\n rowsel, colsel = self.get_selection(view)\n self.selected_rows, self.selected_cols = rowsel, colsel\n\n def select(data, rows, domain):\n \"\"\"\n Select the data subset with specified rows and domain subsets.\n\n If either rows or domain is None they mean select all.\n \"\"\"\n if rows is not None and domain is not None:\n return data.from_table(domain, data, rows)\n elif rows is not None:\n return data.from_table(data.domain, rows)\n elif domain is not None:\n return data.from_table(domain, data)\n else:\n return data\n\n domain = table.domain\n\n if len(colsel) < len(domain) + len(domain.metas):\n # only a subset of the columns is selected\n allvars = domain.class_vars + domain.metas + domain.attributes\n columns = [(c, model.headerData(c, Qt.Horizontal,\n TableModel.DomainRole))\n for c in colsel]\n assert all(role is not None for _, role in columns)\n\n def select_vars(role):\n \"\"\"select variables for role (TableModel.DomainRole)\"\"\"\n return [allvars[c] for c, r in columns if r == role]\n\n attrs = select_vars(TableModel.Attribute)\n if attrs and issparse(table.X):\n # for sparse data you can only select all attributes\n attrs = table.domain.attributes\n class_vars = select_vars(TableModel.ClassVar)\n metas = select_vars(TableModel.Meta)\n domain = Orange.data.Domain(attrs, class_vars, metas)\n\n # Avoid a copy if all/none rows are selected.\n if not rowsel:\n selected_data = None\n elif len(rowsel) == len(table):\n selected_data = select(table, None, domain)\n else:\n selected_data = select(table, rowsel, domain)\n\n self.Outputs.selected_data.send(selected_data)\n self.Outputs.annotated_data.send(create_annotated_table(table, rowsel))\n\n def copy(self):\n \"\"\"\n Copy current table 
selection to the clipboard.\n \"\"\"\n view = self.tabs.currentWidget()\n if view is not None:\n mime = table_selection_to_mime_data(view)\n QApplication.clipboard().setMimeData(\n mime, QClipboard.Clipboard\n )\n\n def send_report(self):\n view = self.tabs.currentWidget()\n if not view or not view.model():\n return\n model = self._get_model(view)\n self.report_data_brief(model.source)\n self.report_table(view)\n\n\n# Table Summary\n\n# Basic statistics for X/Y/metas arrays\nDenseArray = namedtuple(\n \"DenseArray\", [\"nans\", \"non_nans\", \"stats\"])\nSparseArray = namedtuple(\n \"SparseArray\", [\"nans\", \"non_nans\", \"stats\"])\nSparseBoolArray = namedtuple(\n \"SparseBoolArray\", [\"nans\", \"non_nans\", \"stats\"])\nNotAvailable = namedtuple(\"NotAvailable\", [])\n\n#: Orange.data.Table summary\nSummary = namedtuple(\n \"Summary\",\n [\"len\", \"domain\", \"X\", \"Y\", \"M\"])\n\n#: Orange.data.sql.table.SqlTable summary\nApproxSummary = namedtuple(\n \"ApproxSummary\",\n [\"approx_len\", \"len\", \"domain\", \"X\", \"Y\", \"M\"])\n\n\ndef table_summary(table):\n if isinstance(table, SqlTable):\n approx_len = table.approx_len()\n len_future = concurrent.futures.Future()\n\n def _len():\n len_future.set_result(len(table))\n threading.Thread(target=_len).start() # KILL ME !!!\n\n return ApproxSummary(approx_len, len_future, table.domain,\n NotAvailable(), NotAvailable(), NotAvailable())\n else:\n domain = table.domain\n n_instances = len(table)\n # dist = basic_stats.DomainBasicStats(table, include_metas=True)\n bstats = datacaching.getCached(\n table, basic_stats.DomainBasicStats, (table, True)\n )\n\n dist = bstats.stats\n # pylint: disable=unbalanced-tuple-unpacking\n X_dist, Y_dist, M_dist = numpy.split(\n dist, numpy.cumsum([len(domain.attributes),\n len(domain.class_vars)]))\n\n def parts(array, density, col_dist):\n array = numpy.atleast_2d(array)\n nans = sum([dist.nans for dist in col_dist])\n non_nans = sum([dist.non_nans for dist in col_dist])\n if density == Storage.DENSE:\n return DenseArray(nans, non_nans, col_dist)\n elif density == Storage.SPARSE:\n return SparseArray(nans, non_nans, col_dist)\n elif density == Storage.SPARSE_BOOL:\n return SparseBoolArray(nans, non_nans, col_dist)\n elif density == Storage.MISSING:\n return NotAvailable()\n else:\n assert False\n return None\n\n X_part = parts(table.X, table.X_density(), X_dist)\n Y_part = parts(table.Y, table.Y_density(), Y_dist)\n M_part = parts(table.metas, table.metas_density(), M_dist)\n return Summary(n_instances, domain, X_part, Y_part, M_part)\n\n\ndef format_summary(summary):\n text = []\n if isinstance(summary, ApproxSummary):\n if summary.len.done():\n text += [\"{} instances\".format(summary.len.result())]\n else:\n text += [\"~{} instances\".format(summary.approx_len)]\n\n elif isinstance(summary, Summary):\n text += [\"{} instances\".format(summary.len)]\n\n if sum(p.nans for p in [summary.X, summary.Y, summary.M]) == 0:\n text[-1] += \" (no missing values)\"\n\n def format_part(part):\n if isinstance(part, NotAvailable):\n return \"\"\n elif part.nans + part.non_nans == 0:\n return \"\"\n\n if isinstance(part, DenseArray):\n total = part.nans + part.non_nans\n miss = (\"%.1f%%\" % (100 * part.nans / total) if part.nans > 0\n else \"no\")\n return \" (%s missing values)\" % miss\n elif isinstance(part, (SparseArray, SparseBoolArray)):\n text = \" ({}, density {:.2f}%)\"\n tag = \"sparse\" if isinstance(part, SparseArray) else \"tags\"\n total = part.nans + part.non_nans\n return text.format(tag, 100 * 
part.non_nans / total)\n else:\n # MISSING, N/A\n return \"\"\n\n def sp(n):\n if n == 0:\n return \"No\", \"s\"\n elif n == 1:\n return str(n), ''\n else:\n return str(n), 's'\n\n text += [(\"%s feature%s\" % sp(len(summary.domain.attributes)))\n + format_part(summary.X)]\n\n if not summary.domain.class_vars:\n text += [\"No target variable.\"]\n else:\n if len(summary.domain.class_vars) > 1:\n c_text = \"%s outcome%s\" % sp(len(summary.domain.class_vars))\n elif summary.domain.has_continuous_class:\n c_text = \"Continuous target variable\"\n else:\n c_text = \"Discrete class with %s value%s\" % sp(\n len(summary.domain.class_var.values))\n c_text += format_part(summary.Y)\n text += [c_text]\n\n text += [(\"%s meta attribute%s\" % sp(len(summary.domain.metas)))\n + format_part(summary.M)]\n\n return text\n\n\ndef is_sortable(table):\n if isinstance(table, SqlTable):\n return False\n elif isinstance(table, Orange.data.Table):\n return True\n else:\n return False\n\n\ndef test_model():\n app = QApplication([])\n view = QTableView(\n sortingEnabled=True\n )\n data = Orange.data.Table(\"lenses\")\n model = TableModel(data)\n\n view.setModel(model)\n\n view.show()\n view.raise_()\n return app.exec()\n\n\nif __name__ == \"__main__\": # pragma: no cover\n WidgetPreview(OWDataTable).run(\n [(Table(\"iris\"), \"iris\"),\n (Table(\"brown-selected\"), \"brown-selected\"),\n (Table(\"housing\"), \"housing\")])\n",
"path": "Orange/widgets/data/owtable.py"
}
] | diff --git a/Orange/widgets/data/owtable.py b/Orange/widgets/data/owtable.py
index 7288937d537..abd2e0f0fb1 100644
--- a/Orange/widgets/data/owtable.py
+++ b/Orange/widgets/data/owtable.py
@@ -718,6 +718,7 @@ def _on_current_tab_changed(self, index):
view = self.tabs.widget(index)
if view is not None and view.model() is not None:
self.set_info(view.input_slot.summary)
+ self.update_selection()
else:
self.set_info(None)
|
encode__django-rest-framework-7158 | RemoteUserAuthentication.authenticate calls django.contrib.auth.authenticate without request argument
## Checklist
- [X] I have verified that the issue exists against the `master` branch of Django REST framework.
- [X] I have searched for similar issues in both open and closed tickets and cannot find a duplicate.
- [X] This is not a usage question. (Those should be directed to the [discussion group](https://groups.google.com/forum/#!forum/django-rest-framework) instead.)
- [X] This cannot be dealt with as a third party library. (We prefer new functionality to be [in the form of third party libraries](https://www.django-rest-framework.org/community/third-party-packages/#about-third-party-packages) where possible.)
- [X] I have reduced the issue to the simplest possible case.
- [x] I have included a failing test as a pull request. (If you are unable to do so we can still accept the issue.)
## Expected behavior
`user = authenticate(request=request, remote_user=request.META.get(self.header))`
## Actual behavior
`user = authenticate(remote_user=request.META.get(self.header))`
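Django's `django.contrib.auth.authenticate()` forwards its `request` argument to every configured authentication backend, and `RemoteUserBackend.authenticate(request, remote_user)` accepts it precisely so backends can use per-request context. A minimal sketch of a custom backend that relies on that context (the class and its IP handling are hypothetical, not part of Django or DRF):
```python
from django.contrib.auth.backends import RemoteUserBackend


class AuditedRemoteUserBackend(RemoteUserBackend):
    """Hypothetical backend that inspects the request during authentication."""

    def authenticate(self, request, remote_user=None):
        # If the caller omits request=..., this parameter arrives as None and
        # any per-request logic (auditing, rate limiting, ...) silently degrades.
        if request is not None:
            client_ip = request.META.get("REMOTE_ADDR")
            # e.g. audit-log or rate-limit by client_ip here
        return super().authenticate(request, remote_user)
```
With the one-line change in the diff below, `request` reaches such backends instead of always arriving as `None`.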
| [
{
"content": "\"\"\"\nProvides various authentication policies.\n\"\"\"\nimport base64\nimport binascii\n\nfrom django.contrib.auth import authenticate, get_user_model\nfrom django.middleware.csrf import CsrfViewMiddleware\nfrom django.utils.translation import gettext_lazy as _\n\nfrom rest_framework import HTTP_HEADER_ENCODING, exceptions\n\n\ndef get_authorization_header(request):\n \"\"\"\n Return request's 'Authorization:' header, as a bytestring.\n\n Hide some test client ickyness where the header can be unicode.\n \"\"\"\n auth = request.META.get('HTTP_AUTHORIZATION', b'')\n if isinstance(auth, str):\n # Work around django test client oddness\n auth = auth.encode(HTTP_HEADER_ENCODING)\n return auth\n\n\nclass CSRFCheck(CsrfViewMiddleware):\n def _reject(self, request, reason):\n # Return the failure reason instead of an HttpResponse\n return reason\n\n\nclass BaseAuthentication:\n \"\"\"\n All authentication classes should extend BaseAuthentication.\n \"\"\"\n\n def authenticate(self, request):\n \"\"\"\n Authenticate the request and return a two-tuple of (user, token).\n \"\"\"\n raise NotImplementedError(\".authenticate() must be overridden.\")\n\n def authenticate_header(self, request):\n \"\"\"\n Return a string to be used as the value of the `WWW-Authenticate`\n header in a `401 Unauthenticated` response, or `None` if the\n authentication scheme should return `403 Permission Denied` responses.\n \"\"\"\n pass\n\n\nclass BasicAuthentication(BaseAuthentication):\n \"\"\"\n HTTP Basic authentication against username/password.\n \"\"\"\n www_authenticate_realm = 'api'\n\n def authenticate(self, request):\n \"\"\"\n Returns a `User` if a correct username and password have been supplied\n using HTTP Basic authentication. Otherwise returns `None`.\n \"\"\"\n auth = get_authorization_header(request).split()\n\n if not auth or auth[0].lower() != b'basic':\n return None\n\n if len(auth) == 1:\n msg = _('Invalid basic header. No credentials provided.')\n raise exceptions.AuthenticationFailed(msg)\n elif len(auth) > 2:\n msg = _('Invalid basic header. Credentials string should not contain spaces.')\n raise exceptions.AuthenticationFailed(msg)\n\n try:\n auth_parts = base64.b64decode(auth[1]).decode(HTTP_HEADER_ENCODING).partition(':')\n except (TypeError, UnicodeDecodeError, binascii.Error):\n msg = _('Invalid basic header. 
Credentials not correctly base64 encoded.')\n raise exceptions.AuthenticationFailed(msg)\n\n userid, password = auth_parts[0], auth_parts[2]\n return self.authenticate_credentials(userid, password, request)\n\n def authenticate_credentials(self, userid, password, request=None):\n \"\"\"\n Authenticate the userid and password against username and password\n with optional request for context.\n \"\"\"\n credentials = {\n get_user_model().USERNAME_FIELD: userid,\n 'password': password\n }\n user = authenticate(request=request, **credentials)\n\n if user is None:\n raise exceptions.AuthenticationFailed(_('Invalid username/password.'))\n\n if not user.is_active:\n raise exceptions.AuthenticationFailed(_('User inactive or deleted.'))\n\n return (user, None)\n\n def authenticate_header(self, request):\n return 'Basic realm=\"%s\"' % self.www_authenticate_realm\n\n\nclass SessionAuthentication(BaseAuthentication):\n \"\"\"\n Use Django's session framework for authentication.\n \"\"\"\n\n def authenticate(self, request):\n \"\"\"\n Returns a `User` if the request session currently has a logged in user.\n Otherwise returns `None`.\n \"\"\"\n\n # Get the session-based user from the underlying HttpRequest object\n user = getattr(request._request, 'user', None)\n\n # Unauthenticated, CSRF validation not required\n if not user or not user.is_active:\n return None\n\n self.enforce_csrf(request)\n\n # CSRF passed with authenticated user\n return (user, None)\n\n def enforce_csrf(self, request):\n \"\"\"\n Enforce CSRF validation for session based authentication.\n \"\"\"\n check = CSRFCheck()\n # populates request.META['CSRF_COOKIE'], which is used in process_view()\n check.process_request(request)\n reason = check.process_view(request, None, (), {})\n if reason:\n # CSRF failed, bail with explicit error message\n raise exceptions.PermissionDenied('CSRF Failed: %s' % reason)\n\n\nclass TokenAuthentication(BaseAuthentication):\n \"\"\"\n Simple token based authentication.\n\n Clients should authenticate by passing the token key in the \"Authorization\"\n HTTP header, prepended with the string \"Token \". For example:\n\n Authorization: Token 401f7ac837da42b97f613d789819ff93537bee6a\n \"\"\"\n\n keyword = 'Token'\n model = None\n\n def get_model(self):\n if self.model is not None:\n return self.model\n from rest_framework.authtoken.models import Token\n return Token\n\n \"\"\"\n A custom token model may be used, but must have the following properties.\n\n * key -- The string identifying the token\n * user -- The user to which the token belongs\n \"\"\"\n\n def authenticate(self, request):\n auth = get_authorization_header(request).split()\n\n if not auth or auth[0].lower() != self.keyword.lower().encode():\n return None\n\n if len(auth) == 1:\n msg = _('Invalid token header. No credentials provided.')\n raise exceptions.AuthenticationFailed(msg)\n elif len(auth) > 2:\n msg = _('Invalid token header. Token string should not contain spaces.')\n raise exceptions.AuthenticationFailed(msg)\n\n try:\n token = auth[1].decode()\n except UnicodeError:\n msg = _('Invalid token header. 
Token string should not contain invalid characters.')\n raise exceptions.AuthenticationFailed(msg)\n\n return self.authenticate_credentials(token)\n\n def authenticate_credentials(self, key):\n model = self.get_model()\n try:\n token = model.objects.select_related('user').get(key=key)\n except model.DoesNotExist:\n raise exceptions.AuthenticationFailed(_('Invalid token.'))\n\n if not token.user.is_active:\n raise exceptions.AuthenticationFailed(_('User inactive or deleted.'))\n\n return (token.user, token)\n\n def authenticate_header(self, request):\n return self.keyword\n\n\nclass RemoteUserAuthentication(BaseAuthentication):\n \"\"\"\n REMOTE_USER authentication.\n\n To use this, set up your web server to perform authentication, which will\n set the REMOTE_USER environment variable. You will need to have\n 'django.contrib.auth.backends.RemoteUserBackend in your\n AUTHENTICATION_BACKENDS setting\n \"\"\"\n\n # Name of request header to grab username from. This will be the key as\n # used in the request.META dictionary, i.e. the normalization of headers to\n # all uppercase and the addition of \"HTTP_\" prefix apply.\n header = \"REMOTE_USER\"\n\n def authenticate(self, request):\n user = authenticate(remote_user=request.META.get(self.header))\n if user and user.is_active:\n return (user, None)\n",
"path": "rest_framework/authentication.py"
}
] | [
{
"content": "\"\"\"\nProvides various authentication policies.\n\"\"\"\nimport base64\nimport binascii\n\nfrom django.contrib.auth import authenticate, get_user_model\nfrom django.middleware.csrf import CsrfViewMiddleware\nfrom django.utils.translation import gettext_lazy as _\n\nfrom rest_framework import HTTP_HEADER_ENCODING, exceptions\n\n\ndef get_authorization_header(request):\n \"\"\"\n Return request's 'Authorization:' header, as a bytestring.\n\n Hide some test client ickyness where the header can be unicode.\n \"\"\"\n auth = request.META.get('HTTP_AUTHORIZATION', b'')\n if isinstance(auth, str):\n # Work around django test client oddness\n auth = auth.encode(HTTP_HEADER_ENCODING)\n return auth\n\n\nclass CSRFCheck(CsrfViewMiddleware):\n def _reject(self, request, reason):\n # Return the failure reason instead of an HttpResponse\n return reason\n\n\nclass BaseAuthentication:\n \"\"\"\n All authentication classes should extend BaseAuthentication.\n \"\"\"\n\n def authenticate(self, request):\n \"\"\"\n Authenticate the request and return a two-tuple of (user, token).\n \"\"\"\n raise NotImplementedError(\".authenticate() must be overridden.\")\n\n def authenticate_header(self, request):\n \"\"\"\n Return a string to be used as the value of the `WWW-Authenticate`\n header in a `401 Unauthenticated` response, or `None` if the\n authentication scheme should return `403 Permission Denied` responses.\n \"\"\"\n pass\n\n\nclass BasicAuthentication(BaseAuthentication):\n \"\"\"\n HTTP Basic authentication against username/password.\n \"\"\"\n www_authenticate_realm = 'api'\n\n def authenticate(self, request):\n \"\"\"\n Returns a `User` if a correct username and password have been supplied\n using HTTP Basic authentication. Otherwise returns `None`.\n \"\"\"\n auth = get_authorization_header(request).split()\n\n if not auth or auth[0].lower() != b'basic':\n return None\n\n if len(auth) == 1:\n msg = _('Invalid basic header. No credentials provided.')\n raise exceptions.AuthenticationFailed(msg)\n elif len(auth) > 2:\n msg = _('Invalid basic header. Credentials string should not contain spaces.')\n raise exceptions.AuthenticationFailed(msg)\n\n try:\n auth_parts = base64.b64decode(auth[1]).decode(HTTP_HEADER_ENCODING).partition(':')\n except (TypeError, UnicodeDecodeError, binascii.Error):\n msg = _('Invalid basic header. 
Credentials not correctly base64 encoded.')\n raise exceptions.AuthenticationFailed(msg)\n\n userid, password = auth_parts[0], auth_parts[2]\n return self.authenticate_credentials(userid, password, request)\n\n def authenticate_credentials(self, userid, password, request=None):\n \"\"\"\n Authenticate the userid and password against username and password\n with optional request for context.\n \"\"\"\n credentials = {\n get_user_model().USERNAME_FIELD: userid,\n 'password': password\n }\n user = authenticate(request=request, **credentials)\n\n if user is None:\n raise exceptions.AuthenticationFailed(_('Invalid username/password.'))\n\n if not user.is_active:\n raise exceptions.AuthenticationFailed(_('User inactive or deleted.'))\n\n return (user, None)\n\n def authenticate_header(self, request):\n return 'Basic realm=\"%s\"' % self.www_authenticate_realm\n\n\nclass SessionAuthentication(BaseAuthentication):\n \"\"\"\n Use Django's session framework for authentication.\n \"\"\"\n\n def authenticate(self, request):\n \"\"\"\n Returns a `User` if the request session currently has a logged in user.\n Otherwise returns `None`.\n \"\"\"\n\n # Get the session-based user from the underlying HttpRequest object\n user = getattr(request._request, 'user', None)\n\n # Unauthenticated, CSRF validation not required\n if not user or not user.is_active:\n return None\n\n self.enforce_csrf(request)\n\n # CSRF passed with authenticated user\n return (user, None)\n\n def enforce_csrf(self, request):\n \"\"\"\n Enforce CSRF validation for session based authentication.\n \"\"\"\n check = CSRFCheck()\n # populates request.META['CSRF_COOKIE'], which is used in process_view()\n check.process_request(request)\n reason = check.process_view(request, None, (), {})\n if reason:\n # CSRF failed, bail with explicit error message\n raise exceptions.PermissionDenied('CSRF Failed: %s' % reason)\n\n\nclass TokenAuthentication(BaseAuthentication):\n \"\"\"\n Simple token based authentication.\n\n Clients should authenticate by passing the token key in the \"Authorization\"\n HTTP header, prepended with the string \"Token \". For example:\n\n Authorization: Token 401f7ac837da42b97f613d789819ff93537bee6a\n \"\"\"\n\n keyword = 'Token'\n model = None\n\n def get_model(self):\n if self.model is not None:\n return self.model\n from rest_framework.authtoken.models import Token\n return Token\n\n \"\"\"\n A custom token model may be used, but must have the following properties.\n\n * key -- The string identifying the token\n * user -- The user to which the token belongs\n \"\"\"\n\n def authenticate(self, request):\n auth = get_authorization_header(request).split()\n\n if not auth or auth[0].lower() != self.keyword.lower().encode():\n return None\n\n if len(auth) == 1:\n msg = _('Invalid token header. No credentials provided.')\n raise exceptions.AuthenticationFailed(msg)\n elif len(auth) > 2:\n msg = _('Invalid token header. Token string should not contain spaces.')\n raise exceptions.AuthenticationFailed(msg)\n\n try:\n token = auth[1].decode()\n except UnicodeError:\n msg = _('Invalid token header. 
Token string should not contain invalid characters.')\n raise exceptions.AuthenticationFailed(msg)\n\n return self.authenticate_credentials(token)\n\n def authenticate_credentials(self, key):\n model = self.get_model()\n try:\n token = model.objects.select_related('user').get(key=key)\n except model.DoesNotExist:\n raise exceptions.AuthenticationFailed(_('Invalid token.'))\n\n if not token.user.is_active:\n raise exceptions.AuthenticationFailed(_('User inactive or deleted.'))\n\n return (token.user, token)\n\n def authenticate_header(self, request):\n return self.keyword\n\n\nclass RemoteUserAuthentication(BaseAuthentication):\n \"\"\"\n REMOTE_USER authentication.\n\n To use this, set up your web server to perform authentication, which will\n set the REMOTE_USER environment variable. You will need to have\n 'django.contrib.auth.backends.RemoteUserBackend in your\n AUTHENTICATION_BACKENDS setting\n \"\"\"\n\n # Name of request header to grab username from. This will be the key as\n # used in the request.META dictionary, i.e. the normalization of headers to\n # all uppercase and the addition of \"HTTP_\" prefix apply.\n header = \"REMOTE_USER\"\n\n def authenticate(self, request):\n user = authenticate(request=request, remote_user=request.META.get(self.header))\n if user and user.is_active:\n return (user, None)\n",
"path": "rest_framework/authentication.py"
}
] | diff --git a/rest_framework/authentication.py b/rest_framework/authentication.py
index 1e30728d34..1dfc23d7f9 100644
--- a/rest_framework/authentication.py
+++ b/rest_framework/authentication.py
@@ -220,6 +220,6 @@ class RemoteUserAuthentication(BaseAuthentication):
header = "REMOTE_USER"
def authenticate(self, request):
- user = authenticate(remote_user=request.META.get(self.header))
+ user = authenticate(request=request, remote_user=request.META.get(self.header))
if user and user.is_active:
return (user, None)
|
benoitc__gunicorn-1708 | gunicorn crashed on start with --reload flag
Setup: Vagrant, virtualenv, gunicorn 19.3.0.
The following command produces the traceback below:
`gunicorn -c /data/shared/api/gunicorn_config.py -b unix:/tmp/api-dev-gunicorn.sock --log-level INFO --reload wsgi:app`
```
Exception in thread Thread-1:
Traceback (most recent call last):
File "/home/vagrant/.pyenv/versions/2.7.6/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/data/virtualenv/default/lib/python2.7/site-packages/gunicorn/reloader.py", line 41, in run
for filename in self.get_files():
File "/data/virtualenv/default/lib/python2.7/site-packages/gunicorn/reloader.py", line 30, in get_files
if hasattr(module, '__file__')
File "/data/virtualenv/default/lib/python2.7/re.py", line 151, in sub
return _compile(pattern, flags).sub(repl, string, count)
TypeError: expected string or buffer
```
If I remove `--reload`, it boots up fine.
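The root cause is that `hasattr(module, '__file__')` is also true for modules whose `__file__` attribute exists but is set to `None` (for example namespace packages or some dynamically created modules), so the `re.sub` call in `Reloader.get_files()` receives `None`. A minimal sketch of the failure mode and of the `getattr` guard applied in the patch below (the module name is made up for illustration):
```python
import re
import types

mod = types.ModuleType("fake_namespace_pkg")
mod.__file__ = None

hasattr(mod, "__file__")                 # True -> the old filter keeps this module
# re.sub('py[co]$', 'py', mod.__file__)  # raises the TypeError shown in the traceback

# The patched filter also skips a None (or empty) __file__:
bool(getattr(mod, "__file__", None))     # False -> module is filtered out
```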
| [
{
"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nimport os\nimport os.path\nimport re\nimport sys\nimport time\nimport threading\n\n\nclass Reloader(threading.Thread):\n def __init__(self, extra_files=None, interval=1, callback=None):\n super(Reloader, self).__init__()\n self.setDaemon(True)\n self._extra_files = set(extra_files or ())\n self._extra_files_lock = threading.RLock()\n self._interval = interval\n self._callback = callback\n\n def add_extra_file(self, filename):\n with self._extra_files_lock:\n self._extra_files.add(filename)\n\n def get_files(self):\n fnames = [\n re.sub('py[co]$', 'py', module.__file__)\n for module in list(sys.modules.values())\n if hasattr(module, '__file__')\n ]\n\n with self._extra_files_lock:\n fnames.extend(self._extra_files)\n\n return fnames\n\n def run(self):\n mtimes = {}\n while True:\n for filename in self.get_files():\n try:\n mtime = os.stat(filename).st_mtime\n except OSError:\n continue\n old_time = mtimes.get(filename)\n if old_time is None:\n mtimes[filename] = mtime\n continue\n elif mtime > old_time:\n if self._callback:\n self._callback(filename)\n time.sleep(self._interval)\n\nhas_inotify = False\nif sys.platform.startswith('linux'):\n try:\n from inotify.adapters import Inotify\n import inotify.constants\n has_inotify = True\n except ImportError:\n pass\n\n\nif has_inotify:\n\n class InotifyReloader(threading.Thread):\n event_mask = (inotify.constants.IN_CREATE | inotify.constants.IN_DELETE\n | inotify.constants.IN_DELETE_SELF | inotify.constants.IN_MODIFY\n | inotify.constants.IN_MOVE_SELF | inotify.constants.IN_MOVED_FROM\n | inotify.constants.IN_MOVED_TO)\n\n def __init__(self, extra_files=None, callback=None):\n super(InotifyReloader, self).__init__()\n self.setDaemon(True)\n self._callback = callback\n self._dirs = set()\n self._watcher = Inotify()\n\n for extra_file in extra_files:\n self.add_extra_file(extra_file)\n\n def add_extra_file(self, filename):\n dirname = os.path.dirname(filename)\n\n if dirname in self._dirs:\n return\n\n self._watcher.add_watch(dirname, mask=self.event_mask)\n self._dirs.add(dirname)\n\n def get_dirs(self):\n fnames = [\n os.path.dirname(re.sub('py[co]$', 'py', module.__file__))\n for module in list(sys.modules.values())\n if hasattr(module, '__file__')\n ]\n\n return set(fnames)\n\n def run(self):\n self._dirs = self.get_dirs()\n\n for dirname in self._dirs:\n self._watcher.add_watch(dirname, mask=self.event_mask)\n\n for event in self._watcher.event_gen():\n if event is None:\n continue\n\n filename = event[3]\n\n self._callback(filename)\n\nelse:\n\n class InotifyReloader(object):\n def __init__(self, callback=None):\n raise ImportError('You must have the inotify module installed to '\n 'use the inotify reloader')\n\n\npreferred_reloader = InotifyReloader if has_inotify else Reloader\n\nreloader_engines = {\n 'auto': preferred_reloader,\n 'poll': Reloader,\n 'inotify': InotifyReloader,\n}\n",
"path": "gunicorn/reloader.py"
}
] | [
{
"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nimport os\nimport os.path\nimport re\nimport sys\nimport time\nimport threading\n\n\nclass Reloader(threading.Thread):\n def __init__(self, extra_files=None, interval=1, callback=None):\n super(Reloader, self).__init__()\n self.setDaemon(True)\n self._extra_files = set(extra_files or ())\n self._extra_files_lock = threading.RLock()\n self._interval = interval\n self._callback = callback\n\n def add_extra_file(self, filename):\n with self._extra_files_lock:\n self._extra_files.add(filename)\n\n def get_files(self):\n fnames = [\n re.sub('py[co]$', 'py', module.__file__)\n for module in list(sys.modules.values())\n if getattr(module, '__file__', None)\n ]\n\n with self._extra_files_lock:\n fnames.extend(self._extra_files)\n\n return fnames\n\n def run(self):\n mtimes = {}\n while True:\n for filename in self.get_files():\n try:\n mtime = os.stat(filename).st_mtime\n except OSError:\n continue\n old_time = mtimes.get(filename)\n if old_time is None:\n mtimes[filename] = mtime\n continue\n elif mtime > old_time:\n if self._callback:\n self._callback(filename)\n time.sleep(self._interval)\n\nhas_inotify = False\nif sys.platform.startswith('linux'):\n try:\n from inotify.adapters import Inotify\n import inotify.constants\n has_inotify = True\n except ImportError:\n pass\n\n\nif has_inotify:\n\n class InotifyReloader(threading.Thread):\n event_mask = (inotify.constants.IN_CREATE | inotify.constants.IN_DELETE\n | inotify.constants.IN_DELETE_SELF | inotify.constants.IN_MODIFY\n | inotify.constants.IN_MOVE_SELF | inotify.constants.IN_MOVED_FROM\n | inotify.constants.IN_MOVED_TO)\n\n def __init__(self, extra_files=None, callback=None):\n super(InotifyReloader, self).__init__()\n self.setDaemon(True)\n self._callback = callback\n self._dirs = set()\n self._watcher = Inotify()\n\n for extra_file in extra_files:\n self.add_extra_file(extra_file)\n\n def add_extra_file(self, filename):\n dirname = os.path.dirname(filename)\n\n if dirname in self._dirs:\n return\n\n self._watcher.add_watch(dirname, mask=self.event_mask)\n self._dirs.add(dirname)\n\n def get_dirs(self):\n fnames = [\n os.path.dirname(re.sub('py[co]$', 'py', module.__file__))\n for module in list(sys.modules.values())\n if hasattr(module, '__file__')\n ]\n\n return set(fnames)\n\n def run(self):\n self._dirs = self.get_dirs()\n\n for dirname in self._dirs:\n self._watcher.add_watch(dirname, mask=self.event_mask)\n\n for event in self._watcher.event_gen():\n if event is None:\n continue\n\n filename = event[3]\n\n self._callback(filename)\n\nelse:\n\n class InotifyReloader(object):\n def __init__(self, callback=None):\n raise ImportError('You must have the inotify module installed to '\n 'use the inotify reloader')\n\n\npreferred_reloader = InotifyReloader if has_inotify else Reloader\n\nreloader_engines = {\n 'auto': preferred_reloader,\n 'poll': Reloader,\n 'inotify': InotifyReloader,\n}\n",
"path": "gunicorn/reloader.py"
}
] | diff --git a/gunicorn/reloader.py b/gunicorn/reloader.py
index b1ce743f9..4ab868e94 100644
--- a/gunicorn/reloader.py
+++ b/gunicorn/reloader.py
@@ -28,7 +28,7 @@ def get_files(self):
fnames = [
re.sub('py[co]$', 'py', module.__file__)
for module in list(sys.modules.values())
- if hasattr(module, '__file__')
+ if getattr(module, '__file__', None)
]
with self._extra_files_lock:
|
ivy-llc__ivy-13420 | standard_gamma
| [
{
"content": "# local\nimport ivy\nfrom ivy.functional.frontends.numpy.func_wrapper import (\n to_ivy_arrays_and_back,\n from_zero_dim_arrays_to_scalar,\n)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef random_sample(size=None):\n return ivy.random_uniform(low=0.0, high=1.0, shape=size, dtype=\"float64\")\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef dirichlet(alpha, size=None):\n return ivy.dirichlet(alpha, size=size)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef uniform(low=0.0, high=1.0, size=None):\n return ivy.random_uniform(low=low, high=high, shape=size, dtype=\"float64\")\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef geometric(p, size=None):\n if p < 0 or p > 1:\n raise ValueError(\"p must be in the interval [0, 1]\")\n oneMinusP = ivy.subtract(1, p)\n sizeMinusOne = ivy.subtract(size, 1)\n\n return ivy.multiply(ivy.pow(oneMinusP, sizeMinusOne), p)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef normal(loc=0.0, scale=1.0, size=None):\n return ivy.random_normal(mean=loc, std=scale, shape=size, dtype=\"float64\")\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef poisson(lam=1.0, size=None):\n return ivy.poisson(lam=lam, shape=size)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef multinomial(n, pvals, size=None):\n assert not ivy.exists(size) or (len(size) > 0 and len(size) < 3)\n batch_size = 1\n if ivy.exists(size):\n if len(size) == 2:\n batch_size = size[0]\n num_samples = size[1]\n else:\n num_samples = size[0]\n else:\n num_samples = len(pvals)\n return ivy.multinomial(n, num_samples, batch_size=batch_size, probs=pvals)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef permutation(x, /):\n if isinstance(x, int):\n x = ivy.arange(x)\n return ivy.shuffle(x)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef beta(a, b, size=None):\n return ivy.beta(a, b, shape=size)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef shuffle(x, /):\n if isinstance(x, int):\n x = ivy.arange(x)\n return ivy.shuffle(x)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef standard_normal(size=None):\n return ivy.random_normal(mean=0.0, std=1.0, shape=size, dtype=\"float64\")\n",
"path": "ivy/functional/frontends/numpy/random/functions.py"
}
] | [
{
"content": "# local\nimport ivy\nfrom ivy.functional.frontends.numpy.func_wrapper import (\n to_ivy_arrays_and_back,\n from_zero_dim_arrays_to_scalar,\n)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef random_sample(size=None):\n return ivy.random_uniform(low=0.0, high=1.0, shape=size, dtype=\"float64\")\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef dirichlet(alpha, size=None):\n return ivy.dirichlet(alpha, size=size)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef uniform(low=0.0, high=1.0, size=None):\n return ivy.random_uniform(low=low, high=high, shape=size, dtype=\"float64\")\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef geometric(p, size=None):\n if p < 0 or p > 1:\n raise ValueError(\"p must be in the interval [0, 1]\")\n oneMinusP = ivy.subtract(1, p)\n sizeMinusOne = ivy.subtract(size, 1)\n\n return ivy.multiply(ivy.pow(oneMinusP, sizeMinusOne), p)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef normal(loc=0.0, scale=1.0, size=None):\n return ivy.random_normal(mean=loc, std=scale, shape=size, dtype=\"float64\")\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef poisson(lam=1.0, size=None):\n return ivy.poisson(lam=lam, shape=size)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef multinomial(n, pvals, size=None):\n assert not ivy.exists(size) or (len(size) > 0 and len(size) < 3)\n batch_size = 1\n if ivy.exists(size):\n if len(size) == 2:\n batch_size = size[0]\n num_samples = size[1]\n else:\n num_samples = size[0]\n else:\n num_samples = len(pvals)\n return ivy.multinomial(n, num_samples, batch_size=batch_size, probs=pvals)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef permutation(x, /):\n if isinstance(x, int):\n x = ivy.arange(x)\n return ivy.shuffle(x)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef beta(a, b, size=None):\n return ivy.beta(a, b, shape=size)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef shuffle(x, /):\n if isinstance(x, int):\n x = ivy.arange(x)\n return ivy.shuffle(x)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef standard_normal(size=None):\n return ivy.random_normal(mean=0.0, std=1.0, shape=size, dtype=\"float64\")\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef standard_gamma(alpha):\n return ivy.gamma(alpha, beta=1.0, dtype=\"float64\")\n",
"path": "ivy/functional/frontends/numpy/random/functions.py"
}
] | diff --git a/ivy/functional/frontends/numpy/random/functions.py b/ivy/functional/frontends/numpy/random/functions.py
index 4dc1c4a1e42f5..bbd25a45c5e3d 100644
--- a/ivy/functional/frontends/numpy/random/functions.py
+++ b/ivy/functional/frontends/numpy/random/functions.py
@@ -89,3 +89,9 @@ def shuffle(x, /):
@from_zero_dim_arrays_to_scalar
def standard_normal(size=None):
return ivy.random_normal(mean=0.0, std=1.0, shape=size, dtype="float64")
+
+
+@to_ivy_arrays_and_back
+@from_zero_dim_arrays_to_scalar
+def standard_gamma(alpha):
+ return ivy.gamma(alpha, beta=1.0, dtype="float64")
diff --git a/ivy_tests/test_ivy/test_frontends/test_numpy/test_random/test_functions.py b/ivy_tests/test_ivy/test_frontends/test_numpy/test_random/test_functions.py
index fb21c870aebfb..ab4e80c6d7210 100644
--- a/ivy_tests/test_ivy/test_frontends/test_numpy/test_random/test_functions.py
+++ b/ivy_tests/test_ivy/test_frontends/test_numpy/test_random/test_functions.py
@@ -349,3 +349,32 @@ def test_numpy_standard_normal(
test_values=False,
size=size,
)
+
+
+@handle_frontend_test(
+ fn_tree="numpy.random.standard_gamma",
+ dtype_and_x=helpers.dtype_and_values(
+ available_dtypes=helpers.get_dtypes("float"),
+ shape=st.tuples(st.integers(min_value=1, max_value=2)),
+ min_value=1,
+ max_value=100,
+ ),
+ test_with_out=st.just(False),
+)
+def test_numpy_standard_gamma(
+ dtype_and_x,
+ frontend,
+ test_flags,
+ fn_tree,
+ on_device,
+):
+ input_dtype, x = dtype_and_x
+ helpers.test_frontend_function(
+ input_dtypes=input_dtype,
+ test_flags=test_flags,
+ frontend=frontend,
+ fn_tree=fn_tree,
+ on_device=on_device,
+ alpha=x[0],
+ test_values=False,
+ )
|
fedora-infra__bodhi-1061 | Bodhi sends notifications to old address after e-mail change
I've changed my e-mail addresses in all locations I could think of:
- [fedmsg](https://apps.fedoraproject.org/notifications)
- [bugzilla](https://bugzilla.redhat.com/)
- [Fedora Admin](https://admin.fedoraproject.org/accounts/)
But I still get notifications from bodhi at my old address about updates I've commented on, see the message below for an example.
It looks like this message doesn't come from fedmsg; the mail doesn't have any X-Fedmsg header fields. If I click on "Manage Alerts" in bodhi, it shows my fedmsg settings (the new e-mail address).
I initially thought this was a caching issue, but I changed my address months ago and I still get notifications at my old address. In addition, I also get fedmsg-style notifications at my new address, but only about my own comments.
I'm not sure whether this is a bug or whether I forgot to change my address somewhere. If it's the latter, I would expect "Manage Alerts" to point to the right location.
Example message:
> Return-Path: [email protected]
> Delivered-To: [email protected]
> Received: from mx-out-2.rwth-aachen.de (mx-out-2.rwth-aachen.de [134.130.5.187])
> (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits))
> (Client CN "mx-out-2.rwth-aachen.de", Issuer "RWTH Aachen CA" (verified OK))
> by lagrande.kbsg.rwth-aachen.de (Postfix) with ESMTPS id 7FA403FAD7
> for [email protected]; Fri, 2 Sep 2016 01:00:39 +0200 (CEST)
> X-IronPort-Anti-Spam-Filtered: true
> X-IronPort-Anti-Spam-Result: A0BnAQAJsshXhwK1hNFdHAEBBAEBgywBAQEBAXV8pHaRLIQRJIV4AoIkAQIBAQEBAQITAQEBCgsJCRkvhGICAQOBCSwPFg9IiGEOuwcBAQEBAQEEAQEBAQEBASCGLIIDhnABAQVkgXwLWIIvBZlQhiCJB4F3ToQPgw2GAIZwhViDeYMdEQqBTTw0hE2CHwEBAQ
> X-IPAS-Result: A0BnAQAJsshXhwK1hNFdHAEBBAEBgywBAQEBAXV8pHaRLIQRJIV4AoIkAQIBAQEBAQITAQEBCgsJCRkvhGICAQOBCSwPFg9IiGEOuwcBAQEBAQEEAQEBAQEBASCGLIIDhnABAQVkgXwLWIIvBZlQhiCJB4F3ToQPgw2GAIZwhViDeYMdEQqBTTw0hE2CHwEBAQ
> X-IronPort-AV: E=Sophos;i="5.30,268,1470693600";
> d="scan'208";a="456213363"
> Received: from bastion01.fedoraproject.org (HELO bastion.fedoraproject.org) ([209.132.181.2])
> by mx-2.rz.rwth-aachen.de with ESMTP; 02 Sep 2016 01:00:39 +0200
> Received: from bodhi03.phx2.fedoraproject.org (bodhi03.phx2.fedoraproject.org [10.5.126.115])
> by bastion01.phx2.fedoraproject.org (Postfix) with ESMTP id C7A8A6070D39
> for [email protected]; Thu, 1 Sep 2016 23:00:36 +0000 (UTC)
> From: [email protected]
> To: [email protected]
> X-Bodhi-Update-Builds: kernel-4.7.2-201.fc24
> In-Reply-To: [email protected]
> X-Bodhi-Update-Pushed: True
> X-Bodhi-Update-Type: security
> X-Bodhi-Update-Release: F24
> References: [email protected]
> X-Bodhi-Update-Status: testing
> X-Bodhi-Update-Request: stable
> X-Bodhi-Update-Submitter: labbott
> X-Bodhi-Update-Title: kernel-4.7.2-201.fc24
> X-Bodhi: fedoraproject.org
> Subject: [Fedora Update] [CRITPATH] [comment] kernel-4.7.2-201.fc24
> Message-Id: [email protected]
> Date: Thu, 1 Sep 2016 23:00:36 +0000 (UTC)
> [message body skipped]
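Judging from the fix to `remember_me()` in `bodhi/server/security.py` (see the diff below), bodhi keeps its own copy of the e-mail address in its `User` table and, before the change, only filled that field in when it was empty, so an address changed elsewhere was never propagated and comment notifications kept using the stale value. A minimal sketch of the old versus new synchronisation logic, using a stand-in class and made-up addresses rather than the real model:

```python
class UserStandIn:
    """Toy stand-in for bodhi's User model, just enough for the sketch."""
    def __init__(self, name, email):
        self.name = name
        self.email = email


def sync_email_old(user, email_from_openid):
    # Pre-fix behaviour: only fill in a missing address, never update it.
    if not user.email:
        user.email = email_from_openid


def sync_email_new(user, email_from_openid):
    # Post-fix behaviour: follow whatever address the OpenID provider reports.
    if user.email != email_from_openid:
        user.email = email_from_openid


user = UserStandIn("someone", "old@example.org")
sync_email_old(user, "new@example.org")
print(user.email)   # old@example.org -> mail keeps going to the stale address
sync_email_new(user, "new@example.org")
print(user.email)   # new@example.org
```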
| [
{
"content": "# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\nfrom cornice.errors import Errors\n\nfrom pyramid.security import (Allow, ALL_PERMISSIONS, DENY_ALL)\nfrom pyramid.security import remember, forget\nfrom pyramid.httpexceptions import HTTPFound\nfrom pyramid.threadlocal import get_current_registry\n\nfrom . import log\nfrom .models import User, Group\n\n\n#\n# Pyramid ACL factories\n#\n\ndef admin_only_acl(request):\n \"\"\"Generate our admin-only ACL\"\"\"\n return [(Allow, 'group:' + group, ALL_PERMISSIONS) for group in\n request.registry.settings['admin_groups'].split()] + \\\n [DENY_ALL]\n\n\ndef packagers_allowed_acl(request):\n \"\"\"Generate an ACL for update submission\"\"\"\n groups = request.registry.settings['mandatory_packager_groups'].split()\n return [\n (Allow, 'group:' + group, ALL_PERMISSIONS) for group in groups\n ] + [DENY_ALL]\n\n\n#\n# OpenID views\n#\n\ndef login(request):\n login_url = request.route_url('login')\n referrer = request.url\n if referrer == login_url:\n referrer = request.route_url('home')\n came_from = request.params.get('came_from', referrer)\n request.session['came_from'] = came_from\n oid_url = request.registry.settings['openid.url']\n return HTTPFound(location=request.route_url('verify_openid',\n _query=dict(openid=oid_url)))\n\n\ndef logout(request):\n headers = forget(request)\n return HTTPFound(location=request.route_url('home'), headers=headers)\n\n\n#\n# openid.success_callback\n#\n\ndef remember_me(context, request, info, *args, **kw):\n \"\"\" Called upon successful login \"\"\"\n log.debug('remember_me(%s)' % locals())\n log.debug('remember_me: request.params = %r' % request.params)\n endpoint = request.params['openid.op_endpoint']\n if endpoint != request.registry.settings['openid.provider']:\n log.warn('Invalid OpenID provider: %s' % endpoint)\n request.session.flash('Invalid OpenID provider. You can only use: %s' %\n request.registry.settings['openid.provider'])\n return HTTPFound(location=request.route_url('home'))\n\n username = unicode(info['identity_url'].split('http://')[1].split('.')[0])\n email = info['sreg']['email']\n log.debug('remember_me: groups = %s' % info['groups'])\n log.info('%s successfully logged in' % username)\n\n # Find the user in our database. 
Create it if it doesn't exist.\n db = request.db\n user = db.query(User).filter_by(name=username).first()\n if not user:\n user = User(name=username, email=email)\n db.add(user)\n db.flush()\n else:\n # We used to not track email addresses, so fill in the fields as people\n # log back in\n if not user.email:\n user.email = email\n db.flush()\n\n # Keep track of what groups the user is a memeber of\n for group_name in info['groups']:\n # Drop empty group names https://github.com/fedora-infra/bodhi/issues/306\n if not group_name.strip():\n continue\n\n group = db.query(Group).filter_by(name=group_name).first()\n if not group:\n group = Group(name=group_name)\n db.add(group)\n db.flush()\n if group not in user.groups:\n log.info('Adding %s to %s group', user.name, group.name)\n user.groups.append(group)\n\n # See if the user was removed from any groups\n for group in user.groups:\n if group.name not in info['groups']:\n log.info('Removing %s from %s group', user.name, group.name)\n user.groups.remove(group)\n\n headers = remember(request, username)\n came_from = request.session['came_from']\n del(request.session['came_from'])\n\n # Mitigate \"Covert Redirect\"\n if not came_from.startswith(request.host_url):\n came_from = '/'\n\n response = HTTPFound(location=came_from)\n response.headerlist.extend(headers)\n return response\n\n\nclass CorsOrigins(object):\n \"\"\" Proxy-list class to load CORS config after scan-time.\n\n This should appear to behave just like a list, but it loads values from the\n pyramid configuration for its values. AFAIK, we have to do things this way\n since Cornice expects its cors configuration to be present at import-time,\n but the configuration isn't available until later, at Pyramid scan-time.\n Luckily, Cornice doesn't iterate over that configuration until\n request-time, so we can load this then.\n\n >>> cors_origins_ro = CorsOrigins('cors_origins_ro')\n >>> cors_origins_ro[0]\n ['*']\n >>> cors_origins_rw = CorsOrigins('cors_origins_rw')\n >>> cors_origins_rw[0]\n ['bodhi.fedoraproject.org']\n\n \"\"\"\n def __init__(self, name):\n self.name = name\n self.origins = None\n\n def initialize(self):\n if self.origins is None:\n settings = get_current_registry().settings\n self.origins = settings.get(self.name, 'localhost').split(',')\n\n def __len__(self):\n if self.origins is None:\n self.initialize()\n return len(self.origins)\n\n def __getitem__(self, key):\n if self.origins is None:\n self.initialize()\n return self.origins[key]\n\n def __iter__(self):\n if self.origins is None:\n self.initialize()\n return iter(self.originals)\n\n def __contains__(self, item):\n if self.origins is None:\n self.initialize()\n return item in self.originals\n\n\ncors_origins_ro = CorsOrigins('cors_origins_ro')\ncors_origins_rw = CorsOrigins('cors_origins_rw')\n\n\nclass ProtectedRequest(object):\n \"\"\" A proxy to the request object.\n\n The point here is that you can set 'errors' on this request, but they\n will be sent to /dev/null and hidden from cornice. Otherwise, this\n object behaves just like a normal request object.\n \"\"\"\n def __init__(self, real_request):\n # Hide errors added to this from the real request\n self.errors = Errors()\n # But proxy other attributes to the real request\n self.real_request = real_request\n for attr in ['db', 'registry', 'validated', 'buildinfo', 'user']:\n setattr(self, attr, getattr(self.real_request, attr))\n",
"path": "bodhi/server/security.py"
}
] | [
{
"content": "# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\nfrom cornice.errors import Errors\n\nfrom pyramid.security import (Allow, ALL_PERMISSIONS, DENY_ALL)\nfrom pyramid.security import remember, forget\nfrom pyramid.httpexceptions import HTTPFound\nfrom pyramid.threadlocal import get_current_registry\n\nfrom . import log\nfrom .models import User, Group\n\n\n#\n# Pyramid ACL factories\n#\n\ndef admin_only_acl(request):\n \"\"\"Generate our admin-only ACL\"\"\"\n return [(Allow, 'group:' + group, ALL_PERMISSIONS) for group in\n request.registry.settings['admin_groups'].split()] + \\\n [DENY_ALL]\n\n\ndef packagers_allowed_acl(request):\n \"\"\"Generate an ACL for update submission\"\"\"\n groups = request.registry.settings['mandatory_packager_groups'].split()\n return [\n (Allow, 'group:' + group, ALL_PERMISSIONS) for group in groups\n ] + [DENY_ALL]\n\n\n#\n# OpenID views\n#\n\ndef login(request):\n login_url = request.route_url('login')\n referrer = request.url\n if referrer == login_url:\n referrer = request.route_url('home')\n came_from = request.params.get('came_from', referrer)\n request.session['came_from'] = came_from\n oid_url = request.registry.settings['openid.url']\n return HTTPFound(location=request.route_url('verify_openid',\n _query=dict(openid=oid_url)))\n\n\ndef logout(request):\n headers = forget(request)\n return HTTPFound(location=request.route_url('home'), headers=headers)\n\n\n#\n# openid.success_callback\n#\n\ndef remember_me(context, request, info, *args, **kw):\n \"\"\" Called upon successful login \"\"\"\n log.debug('remember_me(%s)' % locals())\n log.debug('remember_me: request.params = %r' % request.params)\n endpoint = request.params['openid.op_endpoint']\n if endpoint != request.registry.settings['openid.provider']:\n log.warn('Invalid OpenID provider: %s' % endpoint)\n request.session.flash('Invalid OpenID provider. You can only use: %s' %\n request.registry.settings['openid.provider'])\n return HTTPFound(location=request.route_url('home'))\n\n username = unicode(info['identity_url'].split('http://')[1].split('.')[0])\n email = info['sreg']['email']\n log.debug('remember_me: groups = %s' % info['groups'])\n log.info('%s successfully logged in' % username)\n\n # Find the user in our database. 
Create it if it doesn't exist.\n db = request.db\n user = db.query(User).filter_by(name=username).first()\n if not user:\n user = User(name=username, email=email)\n db.add(user)\n db.flush()\n else:\n # Update email address if the address changed\n if user.email != email:\n user.email = email\n db.flush()\n\n # Keep track of what groups the user is a memeber of\n for group_name in info['groups']:\n # Drop empty group names https://github.com/fedora-infra/bodhi/issues/306\n if not group_name.strip():\n continue\n\n group = db.query(Group).filter_by(name=group_name).first()\n if not group:\n group = Group(name=group_name)\n db.add(group)\n db.flush()\n if group not in user.groups:\n log.info('Adding %s to %s group', user.name, group.name)\n user.groups.append(group)\n\n # See if the user was removed from any groups\n for group in user.groups:\n if group.name not in info['groups']:\n log.info('Removing %s from %s group', user.name, group.name)\n user.groups.remove(group)\n\n headers = remember(request, username)\n came_from = request.session['came_from']\n del(request.session['came_from'])\n\n # Mitigate \"Covert Redirect\"\n if not came_from.startswith(request.host_url):\n came_from = '/'\n\n response = HTTPFound(location=came_from)\n response.headerlist.extend(headers)\n return response\n\n\nclass CorsOrigins(object):\n \"\"\" Proxy-list class to load CORS config after scan-time.\n\n This should appear to behave just like a list, but it loads values from the\n pyramid configuration for its values. AFAIK, we have to do things this way\n since Cornice expects its cors configuration to be present at import-time,\n but the configuration isn't available until later, at Pyramid scan-time.\n Luckily, Cornice doesn't iterate over that configuration until\n request-time, so we can load this then.\n\n >>> cors_origins_ro = CorsOrigins('cors_origins_ro')\n >>> cors_origins_ro[0]\n ['*']\n >>> cors_origins_rw = CorsOrigins('cors_origins_rw')\n >>> cors_origins_rw[0]\n ['bodhi.fedoraproject.org']\n\n \"\"\"\n def __init__(self, name):\n self.name = name\n self.origins = None\n\n def initialize(self):\n if self.origins is None:\n settings = get_current_registry().settings\n self.origins = settings.get(self.name, 'localhost').split(',')\n\n def __len__(self):\n if self.origins is None:\n self.initialize()\n return len(self.origins)\n\n def __getitem__(self, key):\n if self.origins is None:\n self.initialize()\n return self.origins[key]\n\n def __iter__(self):\n if self.origins is None:\n self.initialize()\n return iter(self.originals)\n\n def __contains__(self, item):\n if self.origins is None:\n self.initialize()\n return item in self.originals\n\n\ncors_origins_ro = CorsOrigins('cors_origins_ro')\ncors_origins_rw = CorsOrigins('cors_origins_rw')\n\n\nclass ProtectedRequest(object):\n \"\"\" A proxy to the request object.\n\n The point here is that you can set 'errors' on this request, but they\n will be sent to /dev/null and hidden from cornice. Otherwise, this\n object behaves just like a normal request object.\n \"\"\"\n def __init__(self, real_request):\n # Hide errors added to this from the real request\n self.errors = Errors()\n # But proxy other attributes to the real request\n self.real_request = real_request\n for attr in ['db', 'registry', 'validated', 'buildinfo', 'user']:\n setattr(self, attr, getattr(self.real_request, attr))\n",
"path": "bodhi/server/security.py"
}
] | diff --git a/bodhi/server/security.py b/bodhi/server/security.py
index e95947dbd9..c2633ba2df 100644
--- a/bodhi/server/security.py
+++ b/bodhi/server/security.py
@@ -91,9 +91,8 @@ def remember_me(context, request, info, *args, **kw):
db.add(user)
db.flush()
else:
- # We used to not track email addresses, so fill in the fields as people
- # log back in
- if not user.email:
+ # Update email address if the address changed
+ if user.email != email:
user.email = email
db.flush()
|
huggingface__optimum-360 | AutoConfig is not imported in optimization.py
### System Info
```shell
optimum master: fb7e303d9254fcee194aa76f4a0b7fa9d9b140d0
```
### Who can help?
@echarlaix
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Try to optimize a model; it fails with `NameError: name 'AutoConfig' is not defined` because `AutoConfig` is never imported in `optimization.py`.
### Expected behavior
No runtime error
I made a PR to fix this: https://github.com/huggingface/optimum/pull/360
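For reproduction context: the error comes from the local-directory branch of `ORTOptimizer.from_pretrained()`, which calls `AutoConfig.from_pretrained()` even though `optimization.py` never imports `AutoConfig`; the linked PR adds `from transformers.models.auto.configuration_auto import AutoConfig`. A minimal trigger, with an illustrative directory name (any local directory containing a `config.json` and an exported ONNX model hits the same code path):

```python
from optimum.onnxruntime.optimization import ORTOptimizer

# Path is illustrative: point it at any exported ONNX model directory.
# Before the fix this raises NameError: name 'AutoConfig' is not defined.
optimizer = ORTOptimizer.from_pretrained("./exported_onnx_model")
```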
| [
{
"content": "# Copyright 2021 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport logging\nimport os\nfrom pathlib import Path\nfrom typing import Callable, Dict, List, Optional, Tuple, Union\n\nimport transformers\n\nfrom onnx import load_model\nfrom onnxruntime.transformers.fusion_options import FusionOptions\nfrom onnxruntime.transformers.onnx_model_bert import BertOnnxModel\nfrom onnxruntime.transformers.optimizer import get_fusion_statistics, optimize_model\n\nfrom ..utils import CONFIG_NAME\nfrom .configuration import OptimizationConfig, ORTConfig\nfrom .modeling_ort import ORTModel\nfrom .modeling_seq2seq import ORTModelForSeq2SeqLM\nfrom .utils import ONNX_WEIGHTS_NAME, ORTConfigManager\n\n\nLOGGER = logging.getLogger(__name__)\n\n\nclass ORTOptimizer:\n \"\"\"\n Handles the ONNX Runtime optimization process for models shared on huggingface.co/models.\n \"\"\"\n\n def __init__(self, onnx_model_path: List[os.PathLike], config: transformers.PretrainedConfig):\n \"\"\"\n Args:\n onnx_model_path (`List[os.PathLike]`):\n The paths of the onnx models to optimize.\n config (`transformers.PretrainedConfig`):\n An instance of the configuration associated to the model to optimize.\n \"\"\"\n super().__init__()\n self.onnx_model_path = onnx_model_path\n self.config = config\n\n @classmethod\n def from_pretrained(cls, model_or_path: Union[str, os.PathLike, ORTModel], file_names: Optional[List[str]] = None):\n \"\"\"\n Args:\n model_or_path (`Union[str, os.PathLike, ORTModel]`):\n The path to a local directory hosting the model to optimize or an instance of an `ORTModel` to quantize.\n Can be either:\n - A path to a local *directory* containing the model to optimize.\n - An instance of ORTModel.\n file_names(`List[str]`, *optional*):\n The list of file names of the models to optimize.\n \"\"\"\n if isinstance(model_or_path, ORTModel):\n if isinstance(model_or_path, ORTModelForSeq2SeqLM):\n model_save_dir = model_or_path.model_save_dir\n onnx_model_path = [\n model_save_dir.joinpath(model_or_path.encoder_file_name),\n model_save_dir.joinpath(model_or_path.decoder_file_name),\n ]\n # Add the decoder with past key/values if present\n if model_or_path.use_cache:\n onnx_model_path.append(model_save_dir.joinpath(model_or_path.decoder_file_with_past_name))\n else:\n onnx_model_path = [model_or_path.model_save_dir.joinpath(model_or_path.latest_model_name)]\n return cls(onnx_model_path, config=model_or_path.config)\n elif os.path.isdir(model_or_path):\n file_names = [ONNX_WEIGHTS_NAME] if file_names is None else file_names\n model_or_path = Path(model_or_path)\n if CONFIG_NAME not in os.listdir(model_or_path):\n raise ValueError(f\"The local directory does not contain the configuration file {CONFIG_NAME}.\")\n config = AutoConfig.from_pretrained(model_or_path)\n onnx_model_path = []\n for file_name in file_names:\n onnx_model_path.append(model_or_path.joinpath(file_name))\n return cls(onnx_model_path, config=config)\n else:\n raise ValueError(f\"Unable to 
load the model from {model_or_path}.\")\n\n def optimize(\n self,\n optimization_config: OptimizationConfig,\n save_dir: Union[str, os.PathLike],\n file_suffix: str = \"optimized\",\n use_external_data_format: bool = False,\n ):\n \"\"\"\n Optimize a model given the optimization specifications defined in `optimization_config`.\n\n Args:\n optimization_config (`OptimizationConfig`):\n The configuration containing the parameters related to optimization.\n save_dir (`Union[str, os.PathLike]`):\n The path used to save the optimized model.\n file_suffix (`str`, *optional*, defaults to `\"optimized\"`):\n The file suffix used to save the optimized model.\n use_external_data_format (`bool`, *optional*, defaults to `False`):\n Whether to use external data format to store model of size >= 2Gb.\n \"\"\"\n save_dir = Path(save_dir)\n save_dir.mkdir(parents=True, exist_ok=True)\n model_type = self.config.model_type\n ORTConfigManager.check_supported_model_or_raise(model_type)\n\n # Save the model configuration\n self.config.save_pretrained(save_dir)\n\n # Create and save the configuration summarizing all the parameters related to optimization\n ort_config = ORTConfig(optimization=optimization_config)\n ort_config.save_pretrained(save_dir)\n\n num_heads = getattr(self.config, ORTConfigManager.get_num_heads_name(model_type))\n hidden_size = getattr(self.config, ORTConfigManager.get_hidden_size_name(model_type))\n model_type = ORTConfigManager.get_model_ort_type(model_type)\n optimization_config.model_type = model_type\n optimization_options = FusionOptions.parse(optimization_config)\n LOGGER.info(\"Optimizing model...\")\n\n for model_path in self.onnx_model_path:\n optimizer = optimize_model(\n model_path.as_posix(),\n model_type,\n num_heads,\n hidden_size,\n opt_level=optimization_config.optimization_level,\n optimization_options=optimization_options,\n use_gpu=optimization_config.optimize_for_gpu,\n only_onnxruntime=optimization_config.optimize_with_onnxruntime_only,\n )\n\n if optimization_config.fp16:\n # keep_io_types to keep inputs/outputs as float32\n optimizer.convert_float_to_float16(keep_io_types=True)\n\n output_path = save_dir.joinpath(f\"{model_path.stem}_{file_suffix}\").with_suffix(model_path.suffix)\n optimizer.save_model_to_file(output_path.as_posix(), use_external_data_format)\n\n LOGGER.info(f\"Optimized model saved at: {save_dir} (external data format: \" f\"{use_external_data_format})\")\n\n return Path(save_dir)\n\n @staticmethod\n def get_fused_operators(onnx_model_path: Union[str, os.PathLike]) -> Dict[str, int]:\n \"\"\"\n Compute the dictionary mapping the name of the fused operators to their number of apparition in the model.\n\n Args:\n onnx_model_path (`Union[str, os.PathLike]`):\n Path of the ONNX model.\n\n Returns:\n The dictionary mapping the name of the fused operators to their number of apparition in the model.\n \"\"\"\n onnx_optimized_model = BertOnnxModel(load_model(onnx_model_path))\n fused_operator = onnx_optimized_model.get_fused_operator_statistics()\n LOGGER.info(\n f\"The following operators were fused : { ', '.join([k for k,v in fused_operator.items() if v > 0])}\"\n )\n return {k: v for k, v in fused_operator.items() if v > 0}\n\n @staticmethod\n def get_nodes_number_difference(\n onnx_model_path: Union[str, os.PathLike], onnx_optimized_model_path: Union[str, os.PathLike]\n ) -> int:\n \"\"\"\n Compute the difference in the number of nodes between the original and the optimized model.\n\n Args:\n onnx_model_path (`Union[str, os.PathLike]`):\n Path of the 
ONNX model.\n onnx_optimized_model_path (`Union[str, os.PathLike]`):\n Path of the optimized ONNX model.\n\n Returns:\n The difference in the number of nodes between the original and the optimized model.\n \"\"\"\n onnx_model = BertOnnxModel(load_model(onnx_model_path))\n onnx_optimized_model = BertOnnxModel(load_model(onnx_optimized_model_path))\n\n # Information in the number of nodes decrease resulting from optimization\n nodes_number_onnx_model = len(onnx_model.nodes())\n nodes_number_onnx_optimized_model = len(onnx_optimized_model.nodes())\n difference_nodes_number = nodes_number_onnx_model - nodes_number_onnx_optimized_model\n LOGGER.info(\n f\"There are {nodes_number_onnx_model} nodes before optimization and {nodes_number_onnx_optimized_model}\"\n f\"nodes after. The number of nodes removed is {difference_nodes_number}\"\n )\n return difference_nodes_number\n\n @staticmethod\n def get_operators_difference(\n onnx_model_path: Union[str, os.PathLike], onnx_optimized_model_path: Union[str, os.PathLike]\n ) -> Dict[str, int]:\n \"\"\"\n Compute the dictionary mapping the operators name to the difference in the number of corresponding nodes between\n the original and the optimized model.\n\n Args:\n onnx_model_path (`Union[str, os.PathLike]`):\n Path of the ONNX model.\n onnx_optimized_model_path (`Union[str, os.PathLike]`):\n Path of the optimized ONNX model.\n\n Returns:\n The dictionary mapping the operators name to the difference in the number of corresponding nodes between the\n original and the optimized model.\n \"\"\"\n onnx_model = BertOnnxModel(load_model(onnx_model_path))\n onnx_optimized_model = BertOnnxModel(load_model(onnx_optimized_model_path))\n\n def nodes_difference_given_type(op_type):\n onnx_model_nodes_with_op_type = len(onnx_model.get_nodes_by_op_type(op_type))\n onnx_optimized_model_nodes_with_op_type = len(onnx_optimized_model.get_nodes_by_op_type(op_type))\n return onnx_model_nodes_with_op_type - onnx_optimized_model_nodes_with_op_type\n\n # Compute operators difference between the original and the optimized models\n op_types = set()\n for model in [onnx_model, onnx_optimized_model]:\n for node in model.nodes():\n op_types.add(node.op_type)\n\n operators_difference = dict(map(lambda op_type: (op_type, nodes_difference_given_type(op_type)), op_types))\n return {k: v for k, v in operators_difference.items() if v != 0}\n",
"path": "optimum/onnxruntime/optimization.py"
}
] | [
{
"content": "# Copyright 2021 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport logging\nimport os\nfrom pathlib import Path\nfrom typing import Callable, Dict, List, Optional, Tuple, Union\n\nimport transformers\nfrom transformers.models.auto.configuration_auto import AutoConfig\n\nfrom onnx import load_model\nfrom onnxruntime.transformers.fusion_options import FusionOptions\nfrom onnxruntime.transformers.onnx_model_bert import BertOnnxModel\nfrom onnxruntime.transformers.optimizer import get_fusion_statistics, optimize_model\n\nfrom ..utils import CONFIG_NAME\nfrom .configuration import OptimizationConfig, ORTConfig\nfrom .modeling_ort import ORTModel\nfrom .modeling_seq2seq import ORTModelForSeq2SeqLM\nfrom .utils import ONNX_WEIGHTS_NAME, ORTConfigManager\n\n\nLOGGER = logging.getLogger(__name__)\n\n\nclass ORTOptimizer:\n \"\"\"\n Handles the ONNX Runtime optimization process for models shared on huggingface.co/models.\n \"\"\"\n\n def __init__(self, onnx_model_path: List[os.PathLike], config: transformers.PretrainedConfig):\n \"\"\"\n Args:\n onnx_model_path (`List[os.PathLike]`):\n The paths of the onnx models to optimize.\n config (`transformers.PretrainedConfig`):\n An instance of the configuration associated to the model to optimize.\n \"\"\"\n super().__init__()\n self.onnx_model_path = onnx_model_path\n self.config = config\n\n @classmethod\n def from_pretrained(cls, model_or_path: Union[str, os.PathLike, ORTModel], file_names: Optional[List[str]] = None):\n \"\"\"\n Args:\n model_or_path (`Union[str, os.PathLike, ORTModel]`):\n The path to a local directory hosting the model to optimize or an instance of an `ORTModel` to quantize.\n Can be either:\n - A path to a local *directory* containing the model to optimize.\n - An instance of ORTModel.\n file_names(`List[str]`, *optional*):\n The list of file names of the models to optimize.\n \"\"\"\n if isinstance(model_or_path, ORTModel):\n if isinstance(model_or_path, ORTModelForSeq2SeqLM):\n model_save_dir = model_or_path.model_save_dir\n onnx_model_path = [\n model_save_dir.joinpath(model_or_path.encoder_file_name),\n model_save_dir.joinpath(model_or_path.decoder_file_name),\n ]\n # Add the decoder with past key/values if present\n if model_or_path.use_cache:\n onnx_model_path.append(model_save_dir.joinpath(model_or_path.decoder_file_with_past_name))\n else:\n onnx_model_path = [model_or_path.model_save_dir.joinpath(model_or_path.latest_model_name)]\n return cls(onnx_model_path, config=model_or_path.config)\n elif os.path.isdir(model_or_path):\n file_names = [ONNX_WEIGHTS_NAME] if file_names is None else file_names\n model_or_path = Path(model_or_path)\n if CONFIG_NAME not in os.listdir(model_or_path):\n raise ValueError(f\"The local directory does not contain the configuration file {CONFIG_NAME}.\")\n config = AutoConfig.from_pretrained(model_or_path)\n onnx_model_path = []\n for file_name in file_names:\n onnx_model_path.append(model_or_path.joinpath(file_name))\n return 
cls(onnx_model_path, config=config)\n else:\n raise ValueError(f\"Unable to load the model from {model_or_path}.\")\n\n def optimize(\n self,\n optimization_config: OptimizationConfig,\n save_dir: Union[str, os.PathLike],\n file_suffix: str = \"optimized\",\n use_external_data_format: bool = False,\n ):\n \"\"\"\n Optimize a model given the optimization specifications defined in `optimization_config`.\n\n Args:\n optimization_config (`OptimizationConfig`):\n The configuration containing the parameters related to optimization.\n save_dir (`Union[str, os.PathLike]`):\n The path used to save the optimized model.\n file_suffix (`str`, *optional*, defaults to `\"optimized\"`):\n The file suffix used to save the optimized model.\n use_external_data_format (`bool`, *optional*, defaults to `False`):\n Whether to use external data format to store model of size >= 2Gb.\n \"\"\"\n save_dir = Path(save_dir)\n save_dir.mkdir(parents=True, exist_ok=True)\n model_type = self.config.model_type\n ORTConfigManager.check_supported_model_or_raise(model_type)\n\n # Save the model configuration\n self.config.save_pretrained(save_dir)\n\n # Create and save the configuration summarizing all the parameters related to optimization\n ort_config = ORTConfig(optimization=optimization_config)\n ort_config.save_pretrained(save_dir)\n\n num_heads = getattr(self.config, ORTConfigManager.get_num_heads_name(model_type))\n hidden_size = getattr(self.config, ORTConfigManager.get_hidden_size_name(model_type))\n model_type = ORTConfigManager.get_model_ort_type(model_type)\n optimization_config.model_type = model_type\n optimization_options = FusionOptions.parse(optimization_config)\n LOGGER.info(\"Optimizing model...\")\n\n for model_path in self.onnx_model_path:\n optimizer = optimize_model(\n model_path.as_posix(),\n model_type,\n num_heads,\n hidden_size,\n opt_level=optimization_config.optimization_level,\n optimization_options=optimization_options,\n use_gpu=optimization_config.optimize_for_gpu,\n only_onnxruntime=optimization_config.optimize_with_onnxruntime_only,\n )\n\n if optimization_config.fp16:\n # keep_io_types to keep inputs/outputs as float32\n optimizer.convert_float_to_float16(keep_io_types=True)\n\n output_path = save_dir.joinpath(f\"{model_path.stem}_{file_suffix}\").with_suffix(model_path.suffix)\n optimizer.save_model_to_file(output_path.as_posix(), use_external_data_format)\n\n LOGGER.info(f\"Optimized model saved at: {save_dir} (external data format: \" f\"{use_external_data_format})\")\n\n return Path(save_dir)\n\n @staticmethod\n def get_fused_operators(onnx_model_path: Union[str, os.PathLike]) -> Dict[str, int]:\n \"\"\"\n Compute the dictionary mapping the name of the fused operators to their number of apparition in the model.\n\n Args:\n onnx_model_path (`Union[str, os.PathLike]`):\n Path of the ONNX model.\n\n Returns:\n The dictionary mapping the name of the fused operators to their number of apparition in the model.\n \"\"\"\n onnx_optimized_model = BertOnnxModel(load_model(onnx_model_path))\n fused_operator = onnx_optimized_model.get_fused_operator_statistics()\n LOGGER.info(\n f\"The following operators were fused : { ', '.join([k for k,v in fused_operator.items() if v > 0])}\"\n )\n return {k: v for k, v in fused_operator.items() if v > 0}\n\n @staticmethod\n def get_nodes_number_difference(\n onnx_model_path: Union[str, os.PathLike], onnx_optimized_model_path: Union[str, os.PathLike]\n ) -> int:\n \"\"\"\n Compute the difference in the number of nodes between the original and the optimized 
model.\n\n Args:\n onnx_model_path (`Union[str, os.PathLike]`):\n Path of the ONNX model.\n onnx_optimized_model_path (`Union[str, os.PathLike]`):\n Path of the optimized ONNX model.\n\n Returns:\n The difference in the number of nodes between the original and the optimized model.\n \"\"\"\n onnx_model = BertOnnxModel(load_model(onnx_model_path))\n onnx_optimized_model = BertOnnxModel(load_model(onnx_optimized_model_path))\n\n # Information in the number of nodes decrease resulting from optimization\n nodes_number_onnx_model = len(onnx_model.nodes())\n nodes_number_onnx_optimized_model = len(onnx_optimized_model.nodes())\n difference_nodes_number = nodes_number_onnx_model - nodes_number_onnx_optimized_model\n LOGGER.info(\n f\"There are {nodes_number_onnx_model} nodes before optimization and {nodes_number_onnx_optimized_model}\"\n f\"nodes after. The number of nodes removed is {difference_nodes_number}\"\n )\n return difference_nodes_number\n\n @staticmethod\n def get_operators_difference(\n onnx_model_path: Union[str, os.PathLike], onnx_optimized_model_path: Union[str, os.PathLike]\n ) -> Dict[str, int]:\n \"\"\"\n Compute the dictionary mapping the operators name to the difference in the number of corresponding nodes between\n the original and the optimized model.\n\n Args:\n onnx_model_path (`Union[str, os.PathLike]`):\n Path of the ONNX model.\n onnx_optimized_model_path (`Union[str, os.PathLike]`):\n Path of the optimized ONNX model.\n\n Returns:\n The dictionary mapping the operators name to the difference in the number of corresponding nodes between the\n original and the optimized model.\n \"\"\"\n onnx_model = BertOnnxModel(load_model(onnx_model_path))\n onnx_optimized_model = BertOnnxModel(load_model(onnx_optimized_model_path))\n\n def nodes_difference_given_type(op_type):\n onnx_model_nodes_with_op_type = len(onnx_model.get_nodes_by_op_type(op_type))\n onnx_optimized_model_nodes_with_op_type = len(onnx_optimized_model.get_nodes_by_op_type(op_type))\n return onnx_model_nodes_with_op_type - onnx_optimized_model_nodes_with_op_type\n\n # Compute operators difference between the original and the optimized models\n op_types = set()\n for model in [onnx_model, onnx_optimized_model]:\n for node in model.nodes():\n op_types.add(node.op_type)\n\n operators_difference = dict(map(lambda op_type: (op_type, nodes_difference_given_type(op_type)), op_types))\n return {k: v for k, v in operators_difference.items() if v != 0}\n",
"path": "optimum/onnxruntime/optimization.py"
}
] | diff --git a/optimum/onnxruntime/optimization.py b/optimum/onnxruntime/optimization.py
index 2448f3b478..3b0eea7623 100644
--- a/optimum/onnxruntime/optimization.py
+++ b/optimum/onnxruntime/optimization.py
@@ -17,6 +17,7 @@
from typing import Callable, Dict, List, Optional, Tuple, Union
import transformers
+from transformers.models.auto.configuration_auto import AutoConfig
from onnx import load_model
from onnxruntime.transformers.fusion_options import FusionOptions
|
beeware__toga-569 | Error looking for icon for tutorial for 0.3.0.dev9
This is with Python 3.6.5 in a clean venv:
```
(.venv) PS C:\Users\_\Desktop\toga_tutorial> python .\helloworld.py
[Winforms] No valid icon format available for C:\Users\brcan\Desktop\toga_tutorial\.venv\lib\site-packages\toga\resources\tiberius; fall back on Tiberius instead
Unhandled Exception: Python.Runtime.PythonException: FileNotFoundException : Could not find file 'C:\Users\brcan\Desktop\toga_tutorial\.venv\lib\site-packages\toga\resources\tiberius.ico'.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath, Boolean checkHost)
at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share)
at System.Drawing.Icon..ctor(String fileName, Int32 width, Int32 height)
at Python.Runtime.Dispatcher.Dispatch(ArrayList args)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
```
| [
{
"content": "#/usr/bin/env python\nimport io\nimport re\n\nfrom setuptools import setup, find_packages\n\nwith io.open('toga/__init__.py', encoding='utf8') as version_file:\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file.read(), re.M)\n if version_match:\n version = version_match.group(1)\n else:\n raise RuntimeError(\"Unable to find version string.\")\n\n\nwith io.open('README.rst', encoding='utf8') as readme:\n long_description = readme.read()\n\n\nsetup(\n name='toga-core',\n version=version,\n description='A Python native, OS native GUI toolkit.',\n long_description=long_description,\n author='Russell Keith-Magee',\n author_email='[email protected]',\n url='http://pybee.org/toga',\n packages=find_packages(exclude='tests'),\n python_requires='>=3.5',\n package_data={\n 'toga': ['resources/*.icns', 'resources/*.png'],\n },\n include_package_data=True,\n install_requires=[\n 'travertino>=0.1.0'\n ],\n tests_require=[\n 'toga-dummy==%s' % version\n ],\n license='New BSD',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: User Interfaces',\n 'Topic :: Software Development :: Widget Sets',\n ],\n test_suite='tests',\n zip_safe=False,\n)\n",
"path": "src/core/setup.py"
}
] | [
{
"content": "#/usr/bin/env python\nimport io\nimport re\n\nfrom setuptools import setup, find_packages\n\nwith io.open('toga/__init__.py', encoding='utf8') as version_file:\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file.read(), re.M)\n if version_match:\n version = version_match.group(1)\n else:\n raise RuntimeError(\"Unable to find version string.\")\n\n\nwith io.open('README.rst', encoding='utf8') as readme:\n long_description = readme.read()\n\n\nsetup(\n name='toga-core',\n version=version,\n description='A Python native, OS native GUI toolkit.',\n long_description=long_description,\n author='Russell Keith-Magee',\n author_email='[email protected]',\n url='http://pybee.org/toga',\n packages=find_packages(exclude='tests'),\n python_requires='>=3.5',\n package_data={\n 'toga': ['resources/*.icns', 'resources/*.ico', 'resources/*.png'],\n },\n include_package_data=True,\n install_requires=[\n 'travertino>=0.1.0'\n ],\n tests_require=[\n 'toga-dummy==%s' % version\n ],\n license='New BSD',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: User Interfaces',\n 'Topic :: Software Development :: Widget Sets',\n ],\n test_suite='tests',\n zip_safe=False,\n)\n",
"path": "src/core/setup.py"
}
] | diff --git a/src/core/setup.py b/src/core/setup.py
index a15d41397d..c4d67176e3 100644
--- a/src/core/setup.py
+++ b/src/core/setup.py
@@ -27,7 +27,7 @@
packages=find_packages(exclude='tests'),
python_requires='>=3.5',
package_data={
- 'toga': ['resources/*.icns', 'resources/*.png'],
+ 'toga': ['resources/*.icns', 'resources/*.ico', 'resources/*.png'],
},
include_package_data=True,
install_requires=[
|
evennia__evennia-2813 | [BUG - Develop] Can't `|` two SaverDicts
#### Describe the bug
When combining two attributes containing dict data, it fails with a traceback.
```
File "./TestGame/typeclasses/characters.py", line 30, in test_attr
return self.db.db_one | self.db.db_two
File "./evennia/evennia/utils/dbserialize.py", line 243, in __or__
return self._data | other
TypeError: unsupported operand type(s) for |: 'dict' and '_SaverDict
```
#### To Reproduce
Steps to reproduce the behavior:
1. Store dicts in two attributes or attribute properties.
2. Use the `|` operator on them.
3. See the error.
#### Develop-branch commit
22fa2c6b8
| [
{
"content": "\"\"\"\nThis module handles serialization of arbitrary python structural data,\nintended primarily to be stored in the database. It also supports\nstoring Django model instances (which plain pickle cannot do).\n\nThis serialization is used internally by the server, notably for\nstoring data in Attributes and for piping data to process pools.\n\nThe purpose of dbserialize is to handle all forms of data. For\nwell-structured non-arbitrary exchange, such as communicating with a\nrich web client, a simpler JSON serialization makes more sense.\n\nThis module also implements the `SaverList`, `SaverDict` and `SaverSet`\nclasses. These are iterables that track their position in a nested\nstructure and makes sure to send updates up to their root. This is\nused by Attributes - without it, one would not be able to update mutables\nin-situ, e.g `obj.db.mynestedlist[3][5] = 3` would never be saved and\nbe out of sync with the database.\n\n\"\"\"\nfrom collections import OrderedDict, defaultdict, deque\nfrom collections.abc import MutableMapping, MutableSequence, MutableSet\nfrom functools import update_wrapper\n\ntry:\n from pickle import UnpicklingError, dumps, loads\nexcept ImportError:\n from pickle import dumps, loads\n\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.utils.safestring import SafeString\nfrom evennia.utils import logger\nfrom evennia.utils.utils import is_iter, to_bytes, uses_database\n\n__all__ = (\"to_pickle\", \"from_pickle\", \"do_pickle\", \"do_unpickle\", \"dbserialize\", \"dbunserialize\")\n\nPICKLE_PROTOCOL = 2\n\n\n# message to send if editing an already deleted Attribute in a savermutable\n_ERROR_DELETED_ATTR = (\n \"{cls_name} {obj} has had its root Attribute deleted. 
\"\n \"It must be cast to a {non_saver_name} before it can be modified further.\"\n)\n\n\ndef _get_mysql_db_version():\n \"\"\"\n This is a helper method for specifically getting the version\n string of a MySQL database.\n\n Returns:\n mysql_version (str): The currently used mysql database\n version.\n\n \"\"\"\n from django.db import connection\n\n conn = connection.cursor()\n conn.execute(\"SELECT VERSION()\")\n version = conn.fetchone()\n return version and str(version[0]) or \"\"\n\n\n# initialization and helpers\n\n\n_GA = object.__getattribute__\n_SA = object.__setattr__\n_FROM_MODEL_MAP = None\n_TO_MODEL_MAP = None\n_IGNORE_DATETIME_MODELS = None\n_SESSION_HANDLER = None\n\n\ndef _IS_PACKED_DBOBJ(o):\n return isinstance(o, tuple) and len(o) == 4 and o[0] == \"__packed_dbobj__\"\n\n\ndef _IS_PACKED_SESSION(o):\n return isinstance(o, tuple) and len(o) == 3 and o[0] == \"__packed_session__\"\n\n\nif uses_database(\"mysql\") and _get_mysql_db_version() < \"5.6.4\":\n # mysql <5.6.4 don't support millisecond precision\n _DATESTRING = \"%Y:%m:%d-%H:%M:%S:000000\"\nelse:\n _DATESTRING = \"%Y:%m:%d-%H:%M:%S:%f\"\n\n\ndef _TO_DATESTRING(obj):\n \"\"\"\n Creates datestring hash.\n\n Args:\n obj (Object): Database object.\n\n Returns:\n datestring (str): A datestring hash.\n\n \"\"\"\n try:\n return _GA(obj, \"db_date_created\").strftime(_DATESTRING)\n except AttributeError:\n # this can happen if object is not yet saved - no datestring is then set\n try:\n obj.save()\n except AttributeError:\n # we have received a None object, for example due to an erroneous save.\n return None\n return _GA(obj, \"db_date_created\").strftime(_DATESTRING)\n\n\ndef _init_globals():\n \"\"\"Lazy importing to avoid circular import issues\"\"\"\n global _FROM_MODEL_MAP, _TO_MODEL_MAP, _SESSION_HANDLER, _IGNORE_DATETIME_MODELS\n if not _FROM_MODEL_MAP:\n _FROM_MODEL_MAP = defaultdict(str)\n _FROM_MODEL_MAP.update(dict((c.model, c.natural_key()) for c in ContentType.objects.all()))\n if not _TO_MODEL_MAP:\n from django.conf import settings\n\n _TO_MODEL_MAP = defaultdict(str)\n _TO_MODEL_MAP.update(\n dict((c.natural_key(), c.model_class()) for c in ContentType.objects.all())\n )\n _IGNORE_DATETIME_MODELS = []\n for src_key, dst_key in settings.ATTRIBUTE_STORED_MODEL_RENAME:\n _TO_MODEL_MAP[src_key] = _TO_MODEL_MAP.get(dst_key, None)\n _IGNORE_DATETIME_MODELS.append(src_key)\n if not _SESSION_HANDLER:\n from evennia.server.sessionhandler import SESSION_HANDLER as _SESSION_HANDLER\n\n\n#\n# SaverList, SaverDict, SaverSet - Attribute-specific helper classes and functions\n#\n\n\ndef _save(method):\n \"\"\"method decorator that saves data to Attribute\"\"\"\n\n def save_wrapper(self, *args, **kwargs):\n self.__doc__ = method.__doc__\n ret = method(self, *args, **kwargs)\n self._save_tree()\n return ret\n\n return update_wrapper(save_wrapper, method)\n\n\nclass _SaverMutable(object):\n \"\"\"\n Parent class for properly handling of nested mutables in\n an Attribute. 
If not used something like\n obj.db.mylist[1][2] = \"test\" (allocation to a nested list)\n will not save the updated value to the database.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"store all properties for tracking the tree\"\"\"\n self._parent = kwargs.pop(\"_parent\", None)\n self._db_obj = kwargs.pop(\"_db_obj\", None)\n self._data = None\n\n def __bool__(self):\n \"\"\"Make sure to evaluate as False if empty\"\"\"\n return bool(self._data)\n\n def _save_tree(self):\n \"\"\"recursively traverse back up the tree, save when we reach the root\"\"\"\n if self._parent:\n self._parent._save_tree()\n elif self._db_obj:\n if not self._db_obj.pk:\n cls_name = self.__class__.__name__\n try:\n non_saver_name = cls_name.split(\"_Saver\", 1)[1].lower()\n except IndexError:\n non_saver_name = cls_name\n raise ValueError(\n _ERROR_DELETED_ATTR.format(\n cls_name=cls_name, obj=self, non_saver_name=non_saver_name\n )\n )\n self._db_obj.value = self\n else:\n logger.log_err(\"_SaverMutable %s has no root Attribute to save to.\" % self)\n\n def _convert_mutables(self, data):\n \"\"\"converts mutables to Saver* variants and assigns ._parent property\"\"\"\n\n def process_tree(item, parent):\n \"\"\"recursively populate the tree, storing parents\"\"\"\n dtype = type(item)\n if dtype in (str, int, float, bool, tuple):\n return item\n elif dtype == list:\n dat = _SaverList(_parent=parent)\n dat._data.extend(process_tree(val, dat) for val in item)\n return dat\n elif dtype == dict:\n dat = _SaverDict(_parent=parent)\n dat._data.update((key, process_tree(val, dat)) for key, val in item.items())\n return dat\n elif dtype == defaultdict:\n dat = _SaverDefaultDict(item.default_factory, _parent=parent)\n dat._data.update((key, process_tree(val, dat)) for key, val in item.items())\n return dat\n elif dtype == set:\n dat = _SaverSet(_parent=parent)\n dat._data.update(process_tree(val, dat) for val in item)\n return dat\n return item\n\n return process_tree(data, self)\n\n def __repr__(self):\n return self._data.__repr__()\n\n def __len__(self):\n return self._data.__len__()\n\n def __iter__(self):\n return self._data.__iter__()\n\n def __getitem__(self, key):\n return self._data.__getitem__(key)\n\n def __eq__(self, other):\n return self._data == other\n\n def __ne__(self, other):\n return self._data != other\n\n def __lt__(self, other):\n return self._data < other\n\n def __gt__(self, other):\n return self._data > other\n\n def __or__(self, other):\n return self._data | other\n\n @_save\n def __setitem__(self, key, value):\n self._data.__setitem__(key, self._convert_mutables(value))\n\n @_save\n def __delitem__(self, key):\n self._data.__delitem__(key)\n\n def deserialize(self):\n \"\"\"Deserializes this mutable into its corresponding non-Saver type.\"\"\"\n return deserialize(self)\n\n\nclass _SaverList(_SaverMutable, MutableSequence):\n \"\"\"\n A list that saves itself to an Attribute when updated.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._data = list()\n\n @_save\n def __iadd__(self, otherlist):\n self._data = self._data.__add__(otherlist)\n return self._data\n\n def __add__(self, otherlist):\n return list(self._data) + otherlist\n\n @_save\n def insert(self, index, value):\n self._data.insert(index, self._convert_mutables(value))\n\n def __eq__(self, other):\n try:\n return list(self._data) == list(other)\n except TypeError:\n return False\n\n def __ne__(self, other):\n try:\n return list(self._data) != list(other)\n except TypeError:\n 
return True\n\n def index(self, value, *args):\n return self._data.index(value, *args)\n\n @_save\n def sort(self, *, key=None, reverse=False):\n self._data.sort(key=key, reverse=reverse)\n\n def copy(self):\n return self._data.copy()\n\n\nclass _SaverDict(_SaverMutable, MutableMapping):\n \"\"\"\n A dict that stores changes to an Attribute when updated\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._data = dict()\n\n def has_key(self, key):\n return key in self._data\n\n @_save\n def update(self, *args, **kwargs):\n self._data.update(*args, **kwargs)\n\n\nclass _SaverDefaultDict(_SaverDict):\n \"\"\"\n A defaultdict that stores changes to an attribute when updated\n \"\"\"\n\n def __init__(self, factory, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._data = defaultdict(factory)\n self.default_factory = factory\n\n def __getitem__(self, key):\n if key not in self._data.keys():\n # detect the case of db.foo['a'] with no immediate assignment\n # (important: using `key in self._data` would be always True!)\n default_value = self._data[key]\n self.__setitem__(key, default_value)\n return self._data[key]\n\n\nclass _SaverSet(_SaverMutable, MutableSet):\n \"\"\"\n A set that saves to an Attribute when updated\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._data = set()\n\n def __contains__(self, value):\n return self._data.__contains__(value)\n\n @_save\n def add(self, value):\n self._data.add(self._convert_mutables(value))\n\n @_save\n def discard(self, value):\n self._data.discard(value)\n\n\nclass _SaverOrderedDict(_SaverMutable, MutableMapping):\n \"\"\"\n An ordereddict that can be saved and operated on.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._data = OrderedDict()\n\n def has_key(self, key):\n return key in self._data\n\n\nclass _SaverDeque(_SaverMutable):\n \"\"\"\n A deque that can be saved and operated on.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._data = deque()\n\n @_save\n def append(self, *args, **kwargs):\n self._data.append(*args, **kwargs)\n\n @_save\n def appendleft(self, *args, **kwargs):\n self._data.appendleft(*args, **kwargs)\n\n @_save\n def clear(self):\n self._data.clear()\n\n @_save\n def extendleft(self, *args, **kwargs):\n self._data.extendleft(*args, **kwargs)\n\n # maxlen property\n def _getmaxlen(self):\n return self._data.maxlen\n\n def _setmaxlen(self, value):\n self._data.maxlen = value\n\n def _delmaxlen(self):\n del self._data.maxlen\n\n maxlen = property(_getmaxlen, _setmaxlen, _delmaxlen)\n\n @_save\n def pop(self, *args, **kwargs):\n return self._data.pop(*args, **kwargs)\n\n @_save\n def popleft(self, *args, **kwargs):\n return self._data.popleft(*args, **kwargs)\n\n @_save\n def reverse(self):\n self._data.reverse()\n\n @_save\n def rotate(self, *args):\n self._data.rotate(*args)\n\n @_save\n def remove(self, *args):\n self._data.remove(*args)\n\n\n_DESERIALIZE_MAPPING = {\n _SaverList.__name__: list,\n _SaverDict.__name__: dict,\n _SaverSet.__name__: set,\n _SaverOrderedDict.__name__: OrderedDict,\n _SaverDeque.__name__: deque,\n _SaverDefaultDict.__name__: defaultdict,\n}\n\n\ndef deserialize(obj):\n \"\"\"\n Make sure to *fully* decouple a structure from the database, by turning all _Saver*-mutables\n inside it back into their normal Python forms.\n\n \"\"\"\n\n def _iter(obj):\n # breakpoint()\n typ = type(obj)\n tname = typ.__name__\n if tname 
in (\"_SaverDict\", \"dict\"):\n return {_iter(key): _iter(val) for key, val in obj.items()}\n elif tname in (\"_SaverOrderedDict\", \"OrderedDict\"):\n return OrderedDict([(_iter(key), _iter(val)) for key, val in obj.items()])\n elif tname in (\"_SaverDefaultDict\", \"defaultdict\"):\n return defaultdict(\n obj.default_factory, {_iter(key): _iter(val) for key, val in obj.items()}\n )\n elif tname in _DESERIALIZE_MAPPING:\n return _DESERIALIZE_MAPPING[tname](_iter(val) for val in obj)\n elif is_iter(obj):\n return typ(_iter(val) for val in obj)\n return obj\n\n return _iter(obj)\n\n\n#\n# serialization helpers\n\n\ndef pack_dbobj(item):\n \"\"\"\n Check and convert django database objects to an internal representation.\n\n Args:\n item (any): A database entity to pack\n\n Returns:\n packed (any or tuple): Either returns the original input item\n or the packing tuple `(\"__packed_dbobj__\", key, creation_time, id)`.\n\n \"\"\"\n _init_globals()\n obj = item\n natural_key = _FROM_MODEL_MAP[\n hasattr(obj, \"id\")\n and hasattr(obj, \"db_date_created\")\n and hasattr(obj, \"__dbclass__\")\n and obj.__dbclass__.__name__.lower()\n ]\n # build the internal representation as a tuple\n # (\"__packed_dbobj__\", key, creation_time, id)\n return (\n natural_key\n and (\"__packed_dbobj__\", natural_key, _TO_DATESTRING(obj), _GA(obj, \"id\"))\n or item\n )\n\n\ndef unpack_dbobj(item):\n \"\"\"\n Check and convert internal representations back to Django database\n models.\n\n Args:\n item (packed_dbobj): The fact that item is a packed dbobj\n should be checked before this call.\n\n Returns:\n unpacked (any): Either the original input or converts the\n internal store back to a database representation (its\n typeclass is returned if applicable).\n\n \"\"\"\n _init_globals()\n try:\n obj = item[3] and _TO_MODEL_MAP[item[1]].objects.get(id=item[3])\n except ObjectDoesNotExist:\n return None\n except TypeError:\n if hasattr(item, \"pk\"):\n # this happens if item is already an obj\n return item\n return None\n if item[1] in _IGNORE_DATETIME_MODELS:\n # if we are replacing models we ignore the datatime\n return obj\n else:\n # even if we got back a match, check the sanity of the date (some\n # databases may 're-use' the id)\n return _TO_DATESTRING(obj) == item[2] and obj or None\n\n\ndef pack_session(item):\n \"\"\"\n Handle the safe serializion of Sessions objects (these contain\n hidden references to database objects (accounts, puppets) so they\n can't be safely serialized).\n\n Args:\n item (Session)): This item must have all properties of a session\n before entering this call.\n\n Returns:\n packed (tuple or None): A session-packed tuple on the form\n `(__packed_session__, sessid, conn_time)`. 
If this sessid\n does not match a session in the Session handler, None is returned.\n\n \"\"\"\n _init_globals()\n session = _SESSION_HANDLER.get(item.sessid)\n if session and session.conn_time == item.conn_time:\n # we require connection times to be identical for the Session\n # to be accepted as actually being a session (sessids gets\n # reused all the time).\n return (\n item.conn_time\n and item.sessid\n and (\"__packed_session__\", _GA(item, \"sessid\"), _GA(item, \"conn_time\"))\n )\n return None\n\n\ndef unpack_session(item):\n \"\"\"\n Check and convert internal representations back to Sessions.\n\n Args:\n item (packed_session): The fact that item is a packed session\n should be checked before this call.\n\n Returns:\n unpacked (any): Either the original input or converts the\n internal store back to a Session. If Session no longer\n exists, None will be returned.\n \"\"\"\n _init_globals()\n session = _SESSION_HANDLER.get(item[1])\n if session and session.conn_time == item[2]:\n # we require connection times to be identical for the Session\n # to be accepted as the same as the one stored (sessids gets\n # reused all the time).\n return session\n return None\n\n\n#\n# Access methods\n\n\ndef to_pickle(data):\n \"\"\"\n This prepares data on arbitrary form to be pickled. It handles any\n nested structure and returns data on a form that is safe to pickle\n (including having converted any database models to their internal\n representation). We also convert any Saver*-type objects back to\n their normal representations, they are not pickle-safe.\n\n Args:\n data (any): Data to pickle.\n\n Returns:\n data (any): Pickled data.\n\n \"\"\"\n\n def process_item(item):\n \"\"\"Recursive processor and identification of data\"\"\"\n\n dtype = type(item)\n\n if dtype in (str, int, float, bool, bytes, SafeString):\n return item\n elif dtype == tuple:\n return tuple(process_item(val) for val in item)\n elif dtype in (list, _SaverList):\n return [process_item(val) for val in item]\n elif dtype in (dict, _SaverDict):\n return dict((process_item(key), process_item(val)) for key, val in item.items())\n elif dtype in (defaultdict, _SaverDefaultDict):\n return defaultdict(\n item.default_factory,\n ((process_item(key), process_item(val)) for key, val in item.items()),\n )\n elif dtype in (set, _SaverSet):\n return set(process_item(val) for val in item)\n elif dtype in (OrderedDict, _SaverOrderedDict):\n return OrderedDict((process_item(key), process_item(val)) for key, val in item.items())\n elif dtype in (deque, _SaverDeque):\n return deque(process_item(val) for val in item)\n\n # not one of the base types\n if hasattr(item, \"__serialize_dbobjs__\"):\n # Allows custom serialization of any dbobjects embedded in\n # the item that Evennia will otherwise not find (these would\n # otherwise lead to an error). 
Use the dbserialize helper from\n # this method.\n try:\n item.__serialize_dbobjs__()\n except TypeError as err:\n # we catch typerrors so we can handle both classes (requiring\n # classmethods) and instances\n pass\n\n if hasattr(item, \"__iter__\"):\n # we try to conserve the iterable class, if not convert to list\n try:\n return item.__class__([process_item(val) for val in item])\n except (AttributeError, TypeError):\n return [process_item(val) for val in item]\n elif hasattr(item, \"sessid\") and hasattr(item, \"conn_time\"):\n return pack_session(item)\n try:\n return pack_dbobj(item)\n except TypeError:\n return item\n except Exception:\n logger.log_err(f\"The object {item} of type {type(item)} could not be stored.\")\n raise\n\n return process_item(data)\n\n\n# @transaction.autocommit\ndef from_pickle(data, db_obj=None):\n \"\"\"\n This should be fed a just de-pickled data object. It will be converted back\n to a form that may contain database objects again. Note that if a database\n object was removed (or changed in-place) in the database, None will be\n returned.\n\n Args:\n data (any): Pickled data to unpickle.\n db_obj (Atribute, any): This is the model instance (normally\n an Attribute) that _Saver*-type iterables (_SaverList etc)\n will save to when they update. It must have a 'value' property\n that saves assigned data to the database. Skip if not\n serializing onto a given object. If db_obj is given, this\n function will convert lists, dicts and sets to their\n _SaverList, _SaverDict and _SaverSet counterparts.\n\n Returns:\n data (any): Unpickled data.\n\n \"\"\"\n\n def process_item(item):\n \"\"\"Recursive processor and identification of data\"\"\"\n # breakpoint()\n dtype = type(item)\n if dtype in (str, int, float, bool, bytes, SafeString):\n return item\n elif _IS_PACKED_DBOBJ(item):\n # this must be checked before tuple\n return unpack_dbobj(item)\n elif _IS_PACKED_SESSION(item):\n return unpack_session(item)\n elif dtype == tuple:\n return tuple(process_item(val) for val in item)\n elif dtype == dict:\n return dict((process_item(key), process_item(val)) for key, val in item.items())\n elif dtype == defaultdict:\n return defaultdict(\n item.default_factory,\n ((process_item(key), process_item(val)) for key, val in item.items()),\n )\n elif dtype == set:\n return set(process_item(val) for val in item)\n elif dtype == OrderedDict:\n return OrderedDict((process_item(key), process_item(val)) for key, val in item.items())\n elif dtype == deque:\n return deque(process_item(val) for val in item)\n elif hasattr(item, \"__iter__\"):\n try:\n # we try to conserve the iterable class if\n # it accepts an iterator\n return item.__class__(process_item(val) for val in item)\n except (AttributeError, TypeError):\n return [process_item(val) for val in item]\n\n if hasattr(item, \"__deserialize_dbobjs__\"):\n # this allows the object to custom-deserialize any embedded dbobjs\n # that we previously serialized with __serialize_dbobjs__.\n # use the dbunserialize helper in this module.\n try:\n item.__deserialize_dbobjs__()\n except (TypeError, UnpicklingError):\n # handle recoveries both of classes (requiring classmethods\n # or instances. Unpickling errors can happen when re-loading the\n # data from cache (because the hidden entity was already\n # deserialized and stored back on the object, unpickling it\n # again fails). 
TODO: Maybe one could avoid this retry in a\n # more graceful way?\n pass\n\n return item\n\n def process_tree(item, parent):\n \"\"\"Recursive processor, building a parent-tree from iterable data\"\"\"\n # breakpoint()\n dtype = type(item)\n if dtype in (str, int, float, bool, bytes, SafeString):\n return item\n elif _IS_PACKED_DBOBJ(item):\n # this must be checked before tuple\n return unpack_dbobj(item)\n elif dtype == tuple:\n return tuple(process_tree(val, item) for val in item)\n elif dtype == list:\n dat = _SaverList(_parent=parent)\n dat._data.extend(process_tree(val, dat) for val in item)\n return dat\n elif dtype == dict:\n dat = _SaverDict(_parent=parent)\n dat._data.update(\n (process_item(key), process_tree(val, dat)) for key, val in item.items()\n )\n return dat\n elif dtype == defaultdict:\n dat = _SaverDefaultDict(item.default_factory, _parent=parent)\n dat._data.update(\n (process_item(key), process_tree(val, dat)) for key, val in item.items()\n )\n return dat\n elif dtype == set:\n dat = _SaverSet(_parent=parent)\n dat._data.update(set(process_tree(val, dat) for val in item))\n return dat\n elif dtype == OrderedDict:\n dat = _SaverOrderedDict(_parent=parent)\n dat._data.update(\n (process_item(key), process_tree(val, dat)) for key, val in item.items()\n )\n return dat\n elif dtype == deque:\n dat = _SaverDeque(_parent=parent)\n dat._data.extend(process_item(val) for val in item)\n return dat\n elif hasattr(item, \"__iter__\"):\n try:\n # we try to conserve the iterable class if it\n # accepts an iterator\n return item.__class__(process_tree(val, parent) for val in item)\n except (AttributeError, TypeError):\n dat = _SaverList(_parent=parent)\n dat._data.extend(process_tree(val, dat) for val in item)\n return dat\n\n if hasattr(item, \"__deserialize_dbobjs__\"):\n try:\n item.__deserialize_dbobjs__()\n except (TypeError, UnpicklingError):\n pass\n\n return item\n\n if db_obj:\n # convert lists, dicts and sets to their Saved* counterparts. 
It\n # is only relevant if the \"root\" is an iterable of the right type.\n dtype = type(data)\n if dtype == list:\n dat = _SaverList(_db_obj=db_obj)\n dat._data.extend(process_tree(val, dat) for val in data)\n return dat\n elif dtype == dict:\n dat = _SaverDict(_db_obj=db_obj)\n dat._data.update(\n (process_item(key), process_tree(val, dat)) for key, val in data.items()\n )\n return dat\n elif dtype == defaultdict:\n dat = _SaverDefaultDict(data.default_factory, _db_obj=db_obj)\n dat._data.update(\n (process_item(key), process_tree(val, dat)) for key, val in data.items()\n )\n return dat\n elif dtype == set:\n dat = _SaverSet(_db_obj=db_obj)\n dat._data.update(process_tree(val, dat) for val in data)\n return dat\n elif dtype == OrderedDict:\n dat = _SaverOrderedDict(_db_obj=db_obj)\n dat._data.update(\n (process_item(key), process_tree(val, dat)) for key, val in data.items()\n )\n return dat\n elif dtype == deque:\n dat = _SaverDeque(_db_obj=db_obj)\n dat._data.extend(process_item(val) for val in data)\n return dat\n return process_item(data)\n\n\ndef do_pickle(data):\n \"\"\"Perform pickle to string\"\"\"\n try:\n return dumps(data, protocol=PICKLE_PROTOCOL)\n except Exception:\n logger.log_err(f\"Could not pickle data for storage: {data}\")\n raise\n\n\ndef do_unpickle(data):\n \"\"\"Retrieve pickle from pickled string\"\"\"\n try:\n return loads(to_bytes(data))\n except Exception:\n logger.log_err(f\"Could not unpickle data from storage: {data}\")\n raise\n\n\ndef dbserialize(data):\n \"\"\"Serialize to pickled form in one step\"\"\"\n return do_pickle(to_pickle(data))\n\n\ndef dbunserialize(data, db_obj=None):\n \"\"\"Un-serialize in one step. See from_pickle for help db_obj.\"\"\"\n return from_pickle(do_unpickle(data), db_obj=db_obj)\n",
"path": "evennia/utils/dbserialize.py"
}
] | [
{
"content": "\"\"\"\nThis module handles serialization of arbitrary python structural data,\nintended primarily to be stored in the database. It also supports\nstoring Django model instances (which plain pickle cannot do).\n\nThis serialization is used internally by the server, notably for\nstoring data in Attributes and for piping data to process pools.\n\nThe purpose of dbserialize is to handle all forms of data. For\nwell-structured non-arbitrary exchange, such as communicating with a\nrich web client, a simpler JSON serialization makes more sense.\n\nThis module also implements the `SaverList`, `SaverDict` and `SaverSet`\nclasses. These are iterables that track their position in a nested\nstructure and makes sure to send updates up to their root. This is\nused by Attributes - without it, one would not be able to update mutables\nin-situ, e.g `obj.db.mynestedlist[3][5] = 3` would never be saved and\nbe out of sync with the database.\n\n\"\"\"\nfrom collections import OrderedDict, defaultdict, deque\nfrom collections.abc import MutableMapping, MutableSequence, MutableSet\nfrom functools import update_wrapper\n\ntry:\n from pickle import UnpicklingError, dumps, loads\nexcept ImportError:\n from pickle import dumps, loads\n\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.utils.safestring import SafeString\nfrom evennia.utils import logger\nfrom evennia.utils.utils import is_iter, to_bytes, uses_database\n\n__all__ = (\"to_pickle\", \"from_pickle\", \"do_pickle\", \"do_unpickle\", \"dbserialize\", \"dbunserialize\")\n\nPICKLE_PROTOCOL = 2\n\n\n# message to send if editing an already deleted Attribute in a savermutable\n_ERROR_DELETED_ATTR = (\n \"{cls_name} {obj} has had its root Attribute deleted. 
\"\n \"It must be cast to a {non_saver_name} before it can be modified further.\"\n)\n\n\ndef _get_mysql_db_version():\n \"\"\"\n This is a helper method for specifically getting the version\n string of a MySQL database.\n\n Returns:\n mysql_version (str): The currently used mysql database\n version.\n\n \"\"\"\n from django.db import connection\n\n conn = connection.cursor()\n conn.execute(\"SELECT VERSION()\")\n version = conn.fetchone()\n return version and str(version[0]) or \"\"\n\n\n# initialization and helpers\n\n\n_GA = object.__getattribute__\n_SA = object.__setattr__\n_FROM_MODEL_MAP = None\n_TO_MODEL_MAP = None\n_IGNORE_DATETIME_MODELS = None\n_SESSION_HANDLER = None\n\n\ndef _IS_PACKED_DBOBJ(o):\n return isinstance(o, tuple) and len(o) == 4 and o[0] == \"__packed_dbobj__\"\n\n\ndef _IS_PACKED_SESSION(o):\n return isinstance(o, tuple) and len(o) == 3 and o[0] == \"__packed_session__\"\n\n\nif uses_database(\"mysql\") and _get_mysql_db_version() < \"5.6.4\":\n # mysql <5.6.4 don't support millisecond precision\n _DATESTRING = \"%Y:%m:%d-%H:%M:%S:000000\"\nelse:\n _DATESTRING = \"%Y:%m:%d-%H:%M:%S:%f\"\n\n\ndef _TO_DATESTRING(obj):\n \"\"\"\n Creates datestring hash.\n\n Args:\n obj (Object): Database object.\n\n Returns:\n datestring (str): A datestring hash.\n\n \"\"\"\n try:\n return _GA(obj, \"db_date_created\").strftime(_DATESTRING)\n except AttributeError:\n # this can happen if object is not yet saved - no datestring is then set\n try:\n obj.save()\n except AttributeError:\n # we have received a None object, for example due to an erroneous save.\n return None\n return _GA(obj, \"db_date_created\").strftime(_DATESTRING)\n\n\ndef _init_globals():\n \"\"\"Lazy importing to avoid circular import issues\"\"\"\n global _FROM_MODEL_MAP, _TO_MODEL_MAP, _SESSION_HANDLER, _IGNORE_DATETIME_MODELS\n if not _FROM_MODEL_MAP:\n _FROM_MODEL_MAP = defaultdict(str)\n _FROM_MODEL_MAP.update(dict((c.model, c.natural_key()) for c in ContentType.objects.all()))\n if not _TO_MODEL_MAP:\n from django.conf import settings\n\n _TO_MODEL_MAP = defaultdict(str)\n _TO_MODEL_MAP.update(\n dict((c.natural_key(), c.model_class()) for c in ContentType.objects.all())\n )\n _IGNORE_DATETIME_MODELS = []\n for src_key, dst_key in settings.ATTRIBUTE_STORED_MODEL_RENAME:\n _TO_MODEL_MAP[src_key] = _TO_MODEL_MAP.get(dst_key, None)\n _IGNORE_DATETIME_MODELS.append(src_key)\n if not _SESSION_HANDLER:\n from evennia.server.sessionhandler import SESSION_HANDLER as _SESSION_HANDLER\n\n\n#\n# SaverList, SaverDict, SaverSet - Attribute-specific helper classes and functions\n#\n\n\ndef _save(method):\n \"\"\"method decorator that saves data to Attribute\"\"\"\n\n def save_wrapper(self, *args, **kwargs):\n self.__doc__ = method.__doc__\n ret = method(self, *args, **kwargs)\n self._save_tree()\n return ret\n\n return update_wrapper(save_wrapper, method)\n\n\nclass _SaverMutable(object):\n \"\"\"\n Parent class for properly handling of nested mutables in\n an Attribute. 
If not used something like\n obj.db.mylist[1][2] = \"test\" (allocation to a nested list)\n will not save the updated value to the database.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"store all properties for tracking the tree\"\"\"\n self._parent = kwargs.pop(\"_parent\", None)\n self._db_obj = kwargs.pop(\"_db_obj\", None)\n self._data = None\n\n def __bool__(self):\n \"\"\"Make sure to evaluate as False if empty\"\"\"\n return bool(self._data)\n\n def _save_tree(self):\n \"\"\"recursively traverse back up the tree, save when we reach the root\"\"\"\n if self._parent:\n self._parent._save_tree()\n elif self._db_obj:\n if not self._db_obj.pk:\n cls_name = self.__class__.__name__\n try:\n non_saver_name = cls_name.split(\"_Saver\", 1)[1].lower()\n except IndexError:\n non_saver_name = cls_name\n raise ValueError(\n _ERROR_DELETED_ATTR.format(\n cls_name=cls_name, obj=self, non_saver_name=non_saver_name\n )\n )\n self._db_obj.value = self\n else:\n logger.log_err(\"_SaverMutable %s has no root Attribute to save to.\" % self)\n\n def _convert_mutables(self, data):\n \"\"\"converts mutables to Saver* variants and assigns ._parent property\"\"\"\n\n def process_tree(item, parent):\n \"\"\"recursively populate the tree, storing parents\"\"\"\n dtype = type(item)\n if dtype in (str, int, float, bool, tuple):\n return item\n elif dtype == list:\n dat = _SaverList(_parent=parent)\n dat._data.extend(process_tree(val, dat) for val in item)\n return dat\n elif dtype == dict:\n dat = _SaverDict(_parent=parent)\n dat._data.update((key, process_tree(val, dat)) for key, val in item.items())\n return dat\n elif dtype == defaultdict:\n dat = _SaverDefaultDict(item.default_factory, _parent=parent)\n dat._data.update((key, process_tree(val, dat)) for key, val in item.items())\n return dat\n elif dtype == set:\n dat = _SaverSet(_parent=parent)\n dat._data.update(process_tree(val, dat) for val in item)\n return dat\n return item\n\n return process_tree(data, self)\n\n def __repr__(self):\n return self._data.__repr__()\n\n def __len__(self):\n return self._data.__len__()\n\n def __iter__(self):\n return self._data.__iter__()\n\n def __getitem__(self, key):\n return self._data.__getitem__(key)\n\n def __eq__(self, other):\n return self._data == other\n\n def __ne__(self, other):\n return self._data != other\n\n def __lt__(self, other):\n return self._data < other\n\n def __gt__(self, other):\n return self._data > other\n\n def __or__(self, other):\n return self._data | other\n\n def __ror__(self, other):\n return self._data | other\n\n @_save\n def __setitem__(self, key, value):\n self._data.__setitem__(key, self._convert_mutables(value))\n\n @_save\n def __delitem__(self, key):\n self._data.__delitem__(key)\n\n def deserialize(self):\n \"\"\"Deserializes this mutable into its corresponding non-Saver type.\"\"\"\n return deserialize(self)\n\n\nclass _SaverList(_SaverMutable, MutableSequence):\n \"\"\"\n A list that saves itself to an Attribute when updated.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._data = list()\n\n @_save\n def __iadd__(self, otherlist):\n self._data = self._data.__add__(otherlist)\n return self._data\n\n def __add__(self, otherlist):\n return list(self._data) + otherlist\n\n @_save\n def insert(self, index, value):\n self._data.insert(index, self._convert_mutables(value))\n\n def __eq__(self, other):\n try:\n return list(self._data) == list(other)\n except TypeError:\n return False\n\n def __ne__(self, other):\n try:\n return 
list(self._data) != list(other)\n except TypeError:\n return True\n\n def index(self, value, *args):\n return self._data.index(value, *args)\n\n @_save\n def sort(self, *, key=None, reverse=False):\n self._data.sort(key=key, reverse=reverse)\n\n def copy(self):\n return self._data.copy()\n\n\nclass _SaverDict(_SaverMutable, MutableMapping):\n \"\"\"\n A dict that stores changes to an Attribute when updated\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._data = dict()\n\n def has_key(self, key):\n return key in self._data\n\n @_save\n def update(self, *args, **kwargs):\n self._data.update(*args, **kwargs)\n\n\nclass _SaverDefaultDict(_SaverDict):\n \"\"\"\n A defaultdict that stores changes to an attribute when updated\n \"\"\"\n\n def __init__(self, factory, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._data = defaultdict(factory)\n self.default_factory = factory\n\n def __getitem__(self, key):\n if key not in self._data.keys():\n # detect the case of db.foo['a'] with no immediate assignment\n # (important: using `key in self._data` would be always True!)\n default_value = self._data[key]\n self.__setitem__(key, default_value)\n return self._data[key]\n\n\nclass _SaverSet(_SaverMutable, MutableSet):\n \"\"\"\n A set that saves to an Attribute when updated\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._data = set()\n\n def __contains__(self, value):\n return self._data.__contains__(value)\n\n @_save\n def add(self, value):\n self._data.add(self._convert_mutables(value))\n\n @_save\n def discard(self, value):\n self._data.discard(value)\n\n\nclass _SaverOrderedDict(_SaverMutable, MutableMapping):\n \"\"\"\n An ordereddict that can be saved and operated on.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._data = OrderedDict()\n\n def has_key(self, key):\n return key in self._data\n\n\nclass _SaverDeque(_SaverMutable):\n \"\"\"\n A deque that can be saved and operated on.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._data = deque()\n\n @_save\n def append(self, *args, **kwargs):\n self._data.append(*args, **kwargs)\n\n @_save\n def appendleft(self, *args, **kwargs):\n self._data.appendleft(*args, **kwargs)\n\n @_save\n def clear(self):\n self._data.clear()\n\n @_save\n def extendleft(self, *args, **kwargs):\n self._data.extendleft(*args, **kwargs)\n\n # maxlen property\n def _getmaxlen(self):\n return self._data.maxlen\n\n def _setmaxlen(self, value):\n self._data.maxlen = value\n\n def _delmaxlen(self):\n del self._data.maxlen\n\n maxlen = property(_getmaxlen, _setmaxlen, _delmaxlen)\n\n @_save\n def pop(self, *args, **kwargs):\n return self._data.pop(*args, **kwargs)\n\n @_save\n def popleft(self, *args, **kwargs):\n return self._data.popleft(*args, **kwargs)\n\n @_save\n def reverse(self):\n self._data.reverse()\n\n @_save\n def rotate(self, *args):\n self._data.rotate(*args)\n\n @_save\n def remove(self, *args):\n self._data.remove(*args)\n\n\n_DESERIALIZE_MAPPING = {\n _SaverList.__name__: list,\n _SaverDict.__name__: dict,\n _SaverSet.__name__: set,\n _SaverOrderedDict.__name__: OrderedDict,\n _SaverDeque.__name__: deque,\n _SaverDefaultDict.__name__: defaultdict,\n}\n\n\ndef deserialize(obj):\n \"\"\"\n Make sure to *fully* decouple a structure from the database, by turning all _Saver*-mutables\n inside it back into their normal Python forms.\n\n \"\"\"\n\n def _iter(obj):\n # 
breakpoint()\n typ = type(obj)\n tname = typ.__name__\n if tname in (\"_SaverDict\", \"dict\"):\n return {_iter(key): _iter(val) for key, val in obj.items()}\n elif tname in (\"_SaverOrderedDict\", \"OrderedDict\"):\n return OrderedDict([(_iter(key), _iter(val)) for key, val in obj.items()])\n elif tname in (\"_SaverDefaultDict\", \"defaultdict\"):\n return defaultdict(\n obj.default_factory, {_iter(key): _iter(val) for key, val in obj.items()}\n )\n elif tname in _DESERIALIZE_MAPPING:\n return _DESERIALIZE_MAPPING[tname](_iter(val) for val in obj)\n elif is_iter(obj):\n return typ(_iter(val) for val in obj)\n return obj\n\n return _iter(obj)\n\n\n#\n# serialization helpers\n\n\ndef pack_dbobj(item):\n \"\"\"\n Check and convert django database objects to an internal representation.\n\n Args:\n item (any): A database entity to pack\n\n Returns:\n packed (any or tuple): Either returns the original input item\n or the packing tuple `(\"__packed_dbobj__\", key, creation_time, id)`.\n\n \"\"\"\n _init_globals()\n obj = item\n natural_key = _FROM_MODEL_MAP[\n hasattr(obj, \"id\")\n and hasattr(obj, \"db_date_created\")\n and hasattr(obj, \"__dbclass__\")\n and obj.__dbclass__.__name__.lower()\n ]\n # build the internal representation as a tuple\n # (\"__packed_dbobj__\", key, creation_time, id)\n return (\n natural_key\n and (\"__packed_dbobj__\", natural_key, _TO_DATESTRING(obj), _GA(obj, \"id\"))\n or item\n )\n\n\ndef unpack_dbobj(item):\n \"\"\"\n Check and convert internal representations back to Django database\n models.\n\n Args:\n item (packed_dbobj): The fact that item is a packed dbobj\n should be checked before this call.\n\n Returns:\n unpacked (any): Either the original input or converts the\n internal store back to a database representation (its\n typeclass is returned if applicable).\n\n \"\"\"\n _init_globals()\n try:\n obj = item[3] and _TO_MODEL_MAP[item[1]].objects.get(id=item[3])\n except ObjectDoesNotExist:\n return None\n except TypeError:\n if hasattr(item, \"pk\"):\n # this happens if item is already an obj\n return item\n return None\n if item[1] in _IGNORE_DATETIME_MODELS:\n # if we are replacing models we ignore the datatime\n return obj\n else:\n # even if we got back a match, check the sanity of the date (some\n # databases may 're-use' the id)\n return _TO_DATESTRING(obj) == item[2] and obj or None\n\n\ndef pack_session(item):\n \"\"\"\n Handle the safe serializion of Sessions objects (these contain\n hidden references to database objects (accounts, puppets) so they\n can't be safely serialized).\n\n Args:\n item (Session)): This item must have all properties of a session\n before entering this call.\n\n Returns:\n packed (tuple or None): A session-packed tuple on the form\n `(__packed_session__, sessid, conn_time)`. 
If this sessid\n does not match a session in the Session handler, None is returned.\n\n \"\"\"\n _init_globals()\n session = _SESSION_HANDLER.get(item.sessid)\n if session and session.conn_time == item.conn_time:\n # we require connection times to be identical for the Session\n # to be accepted as actually being a session (sessids gets\n # reused all the time).\n return (\n item.conn_time\n and item.sessid\n and (\"__packed_session__\", _GA(item, \"sessid\"), _GA(item, \"conn_time\"))\n )\n return None\n\n\ndef unpack_session(item):\n \"\"\"\n Check and convert internal representations back to Sessions.\n\n Args:\n item (packed_session): The fact that item is a packed session\n should be checked before this call.\n\n Returns:\n unpacked (any): Either the original input or converts the\n internal store back to a Session. If Session no longer\n exists, None will be returned.\n \"\"\"\n _init_globals()\n session = _SESSION_HANDLER.get(item[1])\n if session and session.conn_time == item[2]:\n # we require connection times to be identical for the Session\n # to be accepted as the same as the one stored (sessids gets\n # reused all the time).\n return session\n return None\n\n\n#\n# Access methods\n\n\ndef to_pickle(data):\n \"\"\"\n This prepares data on arbitrary form to be pickled. It handles any\n nested structure and returns data on a form that is safe to pickle\n (including having converted any database models to their internal\n representation). We also convert any Saver*-type objects back to\n their normal representations, they are not pickle-safe.\n\n Args:\n data (any): Data to pickle.\n\n Returns:\n data (any): Pickled data.\n\n \"\"\"\n\n def process_item(item):\n \"\"\"Recursive processor and identification of data\"\"\"\n\n dtype = type(item)\n\n if dtype in (str, int, float, bool, bytes, SafeString):\n return item\n elif dtype == tuple:\n return tuple(process_item(val) for val in item)\n elif dtype in (list, _SaverList):\n return [process_item(val) for val in item]\n elif dtype in (dict, _SaverDict):\n return dict((process_item(key), process_item(val)) for key, val in item.items())\n elif dtype in (defaultdict, _SaverDefaultDict):\n return defaultdict(\n item.default_factory,\n ((process_item(key), process_item(val)) for key, val in item.items()),\n )\n elif dtype in (set, _SaverSet):\n return set(process_item(val) for val in item)\n elif dtype in (OrderedDict, _SaverOrderedDict):\n return OrderedDict((process_item(key), process_item(val)) for key, val in item.items())\n elif dtype in (deque, _SaverDeque):\n return deque(process_item(val) for val in item)\n\n # not one of the base types\n if hasattr(item, \"__serialize_dbobjs__\"):\n # Allows custom serialization of any dbobjects embedded in\n # the item that Evennia will otherwise not find (these would\n # otherwise lead to an error). 
Use the dbserialize helper from\n # this method.\n try:\n item.__serialize_dbobjs__()\n except TypeError as err:\n # we catch typerrors so we can handle both classes (requiring\n # classmethods) and instances\n pass\n\n if hasattr(item, \"__iter__\"):\n # we try to conserve the iterable class, if not convert to list\n try:\n return item.__class__([process_item(val) for val in item])\n except (AttributeError, TypeError):\n return [process_item(val) for val in item]\n elif hasattr(item, \"sessid\") and hasattr(item, \"conn_time\"):\n return pack_session(item)\n try:\n return pack_dbobj(item)\n except TypeError:\n return item\n except Exception:\n logger.log_err(f\"The object {item} of type {type(item)} could not be stored.\")\n raise\n\n return process_item(data)\n\n\n# @transaction.autocommit\ndef from_pickle(data, db_obj=None):\n \"\"\"\n This should be fed a just de-pickled data object. It will be converted back\n to a form that may contain database objects again. Note that if a database\n object was removed (or changed in-place) in the database, None will be\n returned.\n\n Args:\n data (any): Pickled data to unpickle.\n db_obj (Atribute, any): This is the model instance (normally\n an Attribute) that _Saver*-type iterables (_SaverList etc)\n will save to when they update. It must have a 'value' property\n that saves assigned data to the database. Skip if not\n serializing onto a given object. If db_obj is given, this\n function will convert lists, dicts and sets to their\n _SaverList, _SaverDict and _SaverSet counterparts.\n\n Returns:\n data (any): Unpickled data.\n\n \"\"\"\n\n def process_item(item):\n \"\"\"Recursive processor and identification of data\"\"\"\n # breakpoint()\n dtype = type(item)\n if dtype in (str, int, float, bool, bytes, SafeString):\n return item\n elif _IS_PACKED_DBOBJ(item):\n # this must be checked before tuple\n return unpack_dbobj(item)\n elif _IS_PACKED_SESSION(item):\n return unpack_session(item)\n elif dtype == tuple:\n return tuple(process_item(val) for val in item)\n elif dtype == dict:\n return dict((process_item(key), process_item(val)) for key, val in item.items())\n elif dtype == defaultdict:\n return defaultdict(\n item.default_factory,\n ((process_item(key), process_item(val)) for key, val in item.items()),\n )\n elif dtype == set:\n return set(process_item(val) for val in item)\n elif dtype == OrderedDict:\n return OrderedDict((process_item(key), process_item(val)) for key, val in item.items())\n elif dtype == deque:\n return deque(process_item(val) for val in item)\n elif hasattr(item, \"__iter__\"):\n try:\n # we try to conserve the iterable class if\n # it accepts an iterator\n return item.__class__(process_item(val) for val in item)\n except (AttributeError, TypeError):\n return [process_item(val) for val in item]\n\n if hasattr(item, \"__deserialize_dbobjs__\"):\n # this allows the object to custom-deserialize any embedded dbobjs\n # that we previously serialized with __serialize_dbobjs__.\n # use the dbunserialize helper in this module.\n try:\n item.__deserialize_dbobjs__()\n except (TypeError, UnpicklingError):\n # handle recoveries both of classes (requiring classmethods\n # or instances. Unpickling errors can happen when re-loading the\n # data from cache (because the hidden entity was already\n # deserialized and stored back on the object, unpickling it\n # again fails). 
TODO: Maybe one could avoid this retry in a\n # more graceful way?\n pass\n\n return item\n\n def process_tree(item, parent):\n \"\"\"Recursive processor, building a parent-tree from iterable data\"\"\"\n # breakpoint()\n dtype = type(item)\n if dtype in (str, int, float, bool, bytes, SafeString):\n return item\n elif _IS_PACKED_DBOBJ(item):\n # this must be checked before tuple\n return unpack_dbobj(item)\n elif dtype == tuple:\n return tuple(process_tree(val, item) for val in item)\n elif dtype == list:\n dat = _SaverList(_parent=parent)\n dat._data.extend(process_tree(val, dat) for val in item)\n return dat\n elif dtype == dict:\n dat = _SaverDict(_parent=parent)\n dat._data.update(\n (process_item(key), process_tree(val, dat)) for key, val in item.items()\n )\n return dat\n elif dtype == defaultdict:\n dat = _SaverDefaultDict(item.default_factory, _parent=parent)\n dat._data.update(\n (process_item(key), process_tree(val, dat)) for key, val in item.items()\n )\n return dat\n elif dtype == set:\n dat = _SaverSet(_parent=parent)\n dat._data.update(set(process_tree(val, dat) for val in item))\n return dat\n elif dtype == OrderedDict:\n dat = _SaverOrderedDict(_parent=parent)\n dat._data.update(\n (process_item(key), process_tree(val, dat)) for key, val in item.items()\n )\n return dat\n elif dtype == deque:\n dat = _SaverDeque(_parent=parent)\n dat._data.extend(process_item(val) for val in item)\n return dat\n elif hasattr(item, \"__iter__\"):\n try:\n # we try to conserve the iterable class if it\n # accepts an iterator\n return item.__class__(process_tree(val, parent) for val in item)\n except (AttributeError, TypeError):\n dat = _SaverList(_parent=parent)\n dat._data.extend(process_tree(val, dat) for val in item)\n return dat\n\n if hasattr(item, \"__deserialize_dbobjs__\"):\n try:\n item.__deserialize_dbobjs__()\n except (TypeError, UnpicklingError):\n pass\n\n return item\n\n if db_obj:\n # convert lists, dicts and sets to their Saved* counterparts. 
It\n # is only relevant if the \"root\" is an iterable of the right type.\n dtype = type(data)\n if dtype == list:\n dat = _SaverList(_db_obj=db_obj)\n dat._data.extend(process_tree(val, dat) for val in data)\n return dat\n elif dtype == dict:\n dat = _SaverDict(_db_obj=db_obj)\n dat._data.update(\n (process_item(key), process_tree(val, dat)) for key, val in data.items()\n )\n return dat\n elif dtype == defaultdict:\n dat = _SaverDefaultDict(data.default_factory, _db_obj=db_obj)\n dat._data.update(\n (process_item(key), process_tree(val, dat)) for key, val in data.items()\n )\n return dat\n elif dtype == set:\n dat = _SaverSet(_db_obj=db_obj)\n dat._data.update(process_tree(val, dat) for val in data)\n return dat\n elif dtype == OrderedDict:\n dat = _SaverOrderedDict(_db_obj=db_obj)\n dat._data.update(\n (process_item(key), process_tree(val, dat)) for key, val in data.items()\n )\n return dat\n elif dtype == deque:\n dat = _SaverDeque(_db_obj=db_obj)\n dat._data.extend(process_item(val) for val in data)\n return dat\n return process_item(data)\n\n\ndef do_pickle(data):\n \"\"\"Perform pickle to string\"\"\"\n try:\n return dumps(data, protocol=PICKLE_PROTOCOL)\n except Exception:\n logger.log_err(f\"Could not pickle data for storage: {data}\")\n raise\n\n\ndef do_unpickle(data):\n \"\"\"Retrieve pickle from pickled string\"\"\"\n try:\n return loads(to_bytes(data))\n except Exception:\n logger.log_err(f\"Could not unpickle data from storage: {data}\")\n raise\n\n\ndef dbserialize(data):\n \"\"\"Serialize to pickled form in one step\"\"\"\n return do_pickle(to_pickle(data))\n\n\ndef dbunserialize(data, db_obj=None):\n \"\"\"Un-serialize in one step. See from_pickle for help db_obj.\"\"\"\n return from_pickle(do_unpickle(data), db_obj=db_obj)\n",
"path": "evennia/utils/dbserialize.py"
}
] | diff --git a/evennia/utils/dbserialize.py b/evennia/utils/dbserialize.py
index 11321d8dfd7..0b8b0e63b8d 100644
--- a/evennia/utils/dbserialize.py
+++ b/evennia/utils/dbserialize.py
@@ -243,6 +243,9 @@ def __gt__(self, other):
     def __or__(self, other):
         return self._data | other
 
+    def __ror__(self, other):
+        return self._data | other
+
     @_save
     def __setitem__(self, key, value):
         self._data.__setitem__(key, self._convert_mutables(value))
|
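The hunk above only adds a reflected `__ror__` next to the existing `__or__`. A minimal, self-contained sketch of why both methods matter is below; `SaverDictLike` is a hypothetical stand-in, not Evennia's actual `_SaverDict`, and the dict union operator assumes Python 3.9+. When a plain `dict` sits on the left of `|`, `dict.__or__` returns `NotImplemented` for the non-dict wrapper, so Python falls back to the right operand's `__ror__`.

```python
# Hypothetical stand-in for a Saver-style mapping wrapper (requires Python 3.9+
# for dict union). Illustrates the reflected-operator dispatch only.
class SaverDictLike:
    def __init__(self, data):
        self._data = dict(data)

    def __or__(self, other):
        # saver | plain  ->  delegate to the wrapped dict
        return self._data | other

    def __ror__(self, other):
        # plain | saver  ->  reached because dict.__or__ returns NotImplemented
        # for a non-dict right operand
        return other | self._data


saver = SaverDictLike({"a": 1})
print(saver | {"b": 2})   # {'a': 1, 'b': 2}
print({"b": 2} | saver)   # works only because __ror__ is defined
```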
ipython__ipython-1991 | %page not working
```
%page
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-336-e5a187ccb094> in <module>()
----> 1 get_ipython().magic(u'page')
c:\python26\lib\site-packages\ipython-0.13.dev-py2.6.egg\IPython\core\interactiveshell.pyc in magic(self, arg_s)
2150 magic_name, _, magic_arg_s = arg_s.partition(' ')
2151 magic_name = magic_name.lstrip(prefilter.ESC_MAGIC)
-> 2152 return self.run_line_magic(magic_name, magic_arg_s)
2153
2154 #-------------------------------------------------------------------------
c:\python26\lib\site-packages\ipython-0.13.dev-py2.6.egg\IPython\core\interactiveshell.pyc in run_line_magic(self, magic_name, line)
2076 args.append(sys._getframe(stack_depth).f_locals)
2077 with self.builtin_trap:
-> 2078 result = fn(*args)
2079 return result
2080
c:\python26\lib\site-packages\ipython-0.13.dev-py2.6.egg\IPython\core\magics\basic.pyc in page(self, parameter_s)
c:\python26\lib\site-packages\ipython-0.13.dev-py2.6.egg\IPython\core\magic.pyc in <lambda>(f, *a, **k)
188 # but it's overkill for just that one bit of state.
189 def magic_deco(arg):
--> 190 call = lambda f, *a, **k: f(*a, **k)
191
192 if callable(arg):
c:\python26\lib\site-packages\ipython-0.13.dev-py2.6.egg\IPython\core\magics\basic.pyc in page(self, parameter_s)
186
187 oname = args and args or '_'
--> 188 info = self._ofind(oname)
189 if info['found']:
190 txt = (raw and str or pformat)( info['obj'] )
AttributeError: 'BasicMagics' object has no attribute '_ofind'
```
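The traceback bottoms out in `BasicMagics.page`, which calls `self._ofind(oname)`; the error message shows that `_ofind` is not an attribute of the magics class but of the interactive shell it wraps. The sketch below is a minimal, self-contained illustration of that delegation pattern; `FakeShell` and `FakeBasicMagics` are hypothetical names, not IPython's API, and reaching the helper through `self.shell` is one plausible shape of a fix rather than the confirmed upstream change.

```python
# Hypothetical, simplified stand-ins; only the attribute-delegation pattern matters.
class FakeShell:
    def __init__(self, user_ns):
        self.user_ns = user_ns

    def _ofind(self, oname):
        # crude stand-in for the shell-side object finder
        return {"found": oname in self.user_ns, "obj": self.user_ns.get(oname)}


class FakeBasicMagics:
    def __init__(self, shell):
        self.shell = shell

    def page(self, oname="_"):
        # calling self._ofind(oname) here would raise AttributeError, as in the
        # traceback above; the helper lives on the shell object instead
        info = self.shell._ofind(oname)
        if info["found"]:
            print(repr(info["obj"]))
        else:
            print("Object `%s` not found" % oname)


FakeBasicMagics(FakeShell({"_": 42})).page()  # prints 42
```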
| [
{
"content": "\"\"\"Implementation of basic magic functions.\n\"\"\"\n#-----------------------------------------------------------------------------\n# Copyright (c) 2012 The IPython Development Team.\n#\n# Distributed under the terms of the Modified BSD License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\nfrom __future__ import print_function\n\n# Stdlib\nimport io\nimport sys\nfrom pprint import pformat\n\n# Our own packages\nfrom IPython.core.error import UsageError\nfrom IPython.core.inputsplitter import ESC_MAGIC\nfrom IPython.core.magic import Magics, magics_class, line_magic\nfrom IPython.utils.text import format_screen\nfrom IPython.core import magic_arguments, page\nfrom IPython.testing.skipdoctest import skip_doctest\nfrom IPython.utils.ipstruct import Struct\nfrom IPython.utils.path import unquote_filename\nfrom IPython.utils.warn import warn, error\n\n#-----------------------------------------------------------------------------\n# Magics class implementation\n#-----------------------------------------------------------------------------\n\n@magics_class\nclass BasicMagics(Magics):\n \"\"\"Magics that provide central IPython functionality.\n\n These are various magics that don't fit into specific categories but that\n are all part of the base 'IPython experience'.\"\"\"\n\n def _lsmagic(self):\n mesc = ESC_MAGIC\n cesc = mesc*2\n mman = self.shell.magics_manager\n magics = mman.lsmagic()\n out = ['Available line magics:',\n mesc + (' '+mesc).join(sorted(magics['line'])),\n '',\n 'Available cell magics:',\n cesc + (' '+cesc).join(sorted(magics['cell'])),\n '',\n mman.auto_status()]\n return '\\n'.join(out)\n\n @line_magic\n def lsmagic(self, parameter_s=''):\n \"\"\"List currently available magic functions.\"\"\"\n print(self._lsmagic())\n\n @line_magic\n def magic(self, parameter_s=''):\n \"\"\"Print information about the magic function system.\n\n Supported formats: -latex, -brief, -rest\n \"\"\"\n\n mode = ''\n try:\n mode = parameter_s.split()[0][1:]\n if mode == 'rest':\n rest_docs = []\n except IndexError:\n pass\n\n magic_docs = []\n escapes = dict(line=ESC_MAGIC, cell=ESC_MAGIC*2)\n magics = self.shell.magics_manager.magics\n\n for mtype in ('line', 'cell'):\n escape = escapes[mtype]\n for fname, fn in magics[mtype].iteritems():\n\n if mode == 'brief':\n # only first line\n if fn.__doc__:\n fndoc = fn.__doc__.split('\\n',1)[0]\n else:\n fndoc = 'No documentation'\n else:\n if fn.__doc__:\n fndoc = fn.__doc__.rstrip()\n else:\n fndoc = 'No documentation'\n\n if mode == 'rest':\n rest_docs.append('**%s%s**::\\n\\n\\t%s\\n\\n' %\n (escape, fname, fndoc))\n else:\n magic_docs.append('%s%s:\\n\\t%s\\n' %\n (escape, fname, fndoc))\n\n magic_docs = ''.join(magic_docs)\n\n if mode == 'rest':\n return \"\".join(rest_docs)\n\n if mode == 'latex':\n print(self.format_latex(magic_docs))\n return\n else:\n magic_docs = format_screen(magic_docs)\n if mode == 'brief':\n return magic_docs\n\n out = [\"\"\"\nIPython's 'magic' functions\n===========================\n\nThe magic function system provides a series of functions which allow you to\ncontrol the behavior of IPython itself, plus a lot of system-type\nfeatures. 
There are two kinds of magics, line-oriented and cell-oriented.\n\nLine magics are prefixed with the % character and work much like OS\ncommand-line calls: they get as an argument the rest of the line, where\narguments are passed without parentheses or quotes. For example, this will\ntime the given statement::\n\n %timeit range(1000)\n\nCell magics are prefixed with a double %%, and they are functions that get as\nan argument not only the rest of the line, but also the lines below it in a\nseparate argument. These magics are called with two arguments: the rest of the\ncall line and the body of the cell, consisting of the lines below the first.\nFor example::\n\n %%timeit x = numpy.random.randn((100, 100))\n numpy.linalg.svd(x)\n\nwill time the execution of the numpy svd routine, running the assignment of x\nas part of the setup phase, which is not timed.\n\nIn a line-oriented client (the terminal or Qt console IPython), starting a new\ninput with %% will automatically enter cell mode, and IPython will continue\nreading input until a blank line is given. In the notebook, simply type the\nwhole cell as one entity, but keep in mind that the %% escape can only be at\nthe very start of the cell.\n\nNOTE: If you have 'automagic' enabled (via the command line option or with the\n%automagic function), you don't need to type in the % explicitly for line\nmagics; cell magics always require an explicit '%%' escape. By default,\nIPython ships with automagic on, so you should only rarely need the % escape.\n\nExample: typing '%cd mydir' (without the quotes) changes you working directory\nto 'mydir', if it exists.\n\nFor a list of the available magic functions, use %lsmagic. For a description\nof any of them, type %magic_name?, e.g. '%cd?'.\n\nCurrently the magic system has the following functions:\"\"\",\n magic_docs,\n \"Summary of magic functions (from %slsmagic):\",\n self._lsmagic(),\n ]\n page.page('\\n'.join(out))\n\n\n @line_magic\n def page(self, parameter_s=''):\n \"\"\"Pretty print the object and display it through a pager.\n\n %page [options] OBJECT\n\n If no object is given, use _ (last output).\n\n Options:\n\n -r: page str(object), don't pretty-print it.\"\"\"\n\n # After a function contributed by Olivier Aubert, slightly modified.\n\n # Process options/args\n opts, args = self.parse_options(parameter_s, 'r')\n raw = 'r' in opts\n\n oname = args and args or '_'\n info = self._ofind(oname)\n if info['found']:\n txt = (raw and str or pformat)( info['obj'] )\n page.page(txt)\n else:\n print('Object `%s` not found' % oname)\n\n @line_magic\n def profile(self, parameter_s=''):\n \"\"\"Print your currently active IPython profile.\"\"\"\n from IPython.core.application import BaseIPythonApplication\n if BaseIPythonApplication.initialized():\n print(BaseIPythonApplication.instance().profile)\n else:\n error(\"profile is an application-level value, but you don't appear to be in an IPython application\")\n\n @line_magic\n def pprint(self, parameter_s=''):\n \"\"\"Toggle pretty printing on/off.\"\"\"\n ptformatter = self.shell.display_formatter.formatters['text/plain']\n ptformatter.pprint = bool(1 - ptformatter.pprint)\n print('Pretty printing has been turned',\n ['OFF','ON'][ptformatter.pprint])\n\n @line_magic\n def colors(self, parameter_s=''):\n \"\"\"Switch color scheme for prompts, info system and exception handlers.\n\n Currently implemented schemes: NoColor, Linux, LightBG.\n\n Color scheme names are not case-sensitive.\n\n Examples\n --------\n To get a plain black and white terminal::\n\n 
%colors nocolor\n \"\"\"\n def color_switch_err(name):\n warn('Error changing %s color schemes.\\n%s' %\n (name, sys.exc_info()[1]))\n\n\n new_scheme = parameter_s.strip()\n if not new_scheme:\n raise UsageError(\n \"%colors: you must specify a color scheme. See '%colors?'\")\n return\n # local shortcut\n shell = self.shell\n\n import IPython.utils.rlineimpl as readline\n\n if not shell.colors_force and \\\n not readline.have_readline and sys.platform == \"win32\":\n msg = \"\"\"\\\nProper color support under MS Windows requires the pyreadline library.\nYou can find it at:\nhttp://ipython.org/pyreadline.html\nGary's readline needs the ctypes module, from:\nhttp://starship.python.net/crew/theller/ctypes\n(Note that ctypes is already part of Python versions 2.5 and newer).\n\nDefaulting color scheme to 'NoColor'\"\"\"\n new_scheme = 'NoColor'\n warn(msg)\n\n # readline option is 0\n if not shell.colors_force and not shell.has_readline:\n new_scheme = 'NoColor'\n\n # Set prompt colors\n try:\n shell.prompt_manager.color_scheme = new_scheme\n except:\n color_switch_err('prompt')\n else:\n shell.colors = \\\n shell.prompt_manager.color_scheme_table.active_scheme_name\n # Set exception colors\n try:\n shell.InteractiveTB.set_colors(scheme = new_scheme)\n shell.SyntaxTB.set_colors(scheme = new_scheme)\n except:\n color_switch_err('exception')\n\n # Set info (for 'object?') colors\n if shell.color_info:\n try:\n shell.inspector.set_active_scheme(new_scheme)\n except:\n color_switch_err('object inspector')\n else:\n shell.inspector.set_active_scheme('NoColor')\n\n @line_magic\n def xmode(self, parameter_s=''):\n \"\"\"Switch modes for the exception handlers.\n\n Valid modes: Plain, Context and Verbose.\n\n If called without arguments, acts as a toggle.\"\"\"\n\n def xmode_switch_err(name):\n warn('Error changing %s exception modes.\\n%s' %\n (name,sys.exc_info()[1]))\n\n shell = self.shell\n new_mode = parameter_s.strip().capitalize()\n try:\n shell.InteractiveTB.set_mode(mode=new_mode)\n print('Exception reporting mode:',shell.InteractiveTB.mode)\n except:\n xmode_switch_err('user')\n\n @line_magic\n def quickref(self,arg):\n \"\"\" Show a quick reference sheet \"\"\"\n from IPython.core.usage import quick_reference\n qr = quick_reference + self.magic('-brief')\n page.page(qr)\n\n @line_magic\n def doctest_mode(self, parameter_s=''):\n \"\"\"Toggle doctest mode on and off.\n\n This mode is intended to make IPython behave as much as possible like a\n plain Python shell, from the perspective of how its prompts, exceptions\n and output look. This makes it easy to copy and paste parts of a\n session into doctests. It does so by:\n\n - Changing the prompts to the classic ``>>>`` ones.\n - Changing the exception reporting mode to 'Plain'.\n - Disabling pretty-printing of output.\n\n Note that IPython also supports the pasting of code snippets that have\n leading '>>>' and '...' prompts in them. This means that you can paste\n doctests from files or docstrings (even if they have leading\n whitespace), and the code will execute correctly. 
You can then use\n '%history -t' to see the translated history; this will give you the\n input after removal of all the leading prompts and whitespace, which\n can be pasted back into an editor.\n\n With these features, you can switch into this mode easily whenever you\n need to do testing and changes to doctests, without having to leave\n your existing IPython session.\n \"\"\"\n\n # Shorthands\n shell = self.shell\n pm = shell.prompt_manager\n meta = shell.meta\n disp_formatter = self.shell.display_formatter\n ptformatter = disp_formatter.formatters['text/plain']\n # dstore is a data store kept in the instance metadata bag to track any\n # changes we make, so we can undo them later.\n dstore = meta.setdefault('doctest_mode',Struct())\n save_dstore = dstore.setdefault\n\n # save a few values we'll need to recover later\n mode = save_dstore('mode',False)\n save_dstore('rc_pprint',ptformatter.pprint)\n save_dstore('xmode',shell.InteractiveTB.mode)\n save_dstore('rc_separate_out',shell.separate_out)\n save_dstore('rc_separate_out2',shell.separate_out2)\n save_dstore('rc_prompts_pad_left',pm.justify)\n save_dstore('rc_separate_in',shell.separate_in)\n save_dstore('rc_plain_text_only',disp_formatter.plain_text_only)\n save_dstore('prompt_templates',(pm.in_template, pm.in2_template, pm.out_template))\n\n if mode == False:\n # turn on\n pm.in_template = '>>> '\n pm.in2_template = '... '\n pm.out_template = ''\n\n # Prompt separators like plain python\n shell.separate_in = ''\n shell.separate_out = ''\n shell.separate_out2 = ''\n\n pm.justify = False\n\n ptformatter.pprint = False\n disp_formatter.plain_text_only = True\n\n shell.magic('xmode Plain')\n else:\n # turn off\n pm.in_template, pm.in2_template, pm.out_template = dstore.prompt_templates\n\n shell.separate_in = dstore.rc_separate_in\n\n shell.separate_out = dstore.rc_separate_out\n shell.separate_out2 = dstore.rc_separate_out2\n\n pm.justify = dstore.rc_prompts_pad_left\n\n ptformatter.pprint = dstore.rc_pprint\n disp_formatter.plain_text_only = dstore.rc_plain_text_only\n\n shell.magic('xmode ' + dstore.xmode)\n\n # Store new mode and inform\n dstore.mode = bool(1-int(mode))\n mode_label = ['OFF','ON'][dstore.mode]\n print('Doctest mode is:', mode_label)\n\n @line_magic\n def gui(self, parameter_s=''):\n \"\"\"Enable or disable IPython GUI event loop integration.\n\n %gui [GUINAME]\n\n This magic replaces IPython's threaded shells that were activated\n using the (pylab/wthread/etc.) command line flags. GUI toolkits\n can now be enabled at runtime and keyboard\n interrupts should work without any problems. 
The following toolkits\n are supported: wxPython, PyQt4, PyGTK, Tk and Cocoa (OSX)::\n\n %gui wx # enable wxPython event loop integration\n %gui qt4|qt # enable PyQt4 event loop integration\n %gui gtk # enable PyGTK event loop integration\n %gui gtk3 # enable Gtk3 event loop integration\n %gui tk # enable Tk event loop integration\n %gui osx # enable Cocoa event loop integration\n # (requires %matplotlib 1.1)\n %gui # disable all event loop integration\n\n WARNING: after any of these has been called you can simply create\n an application object, but DO NOT start the event loop yourself, as\n we have already handled that.\n \"\"\"\n opts, arg = self.parse_options(parameter_s, '')\n if arg=='': arg = None\n try:\n return self.shell.enable_gui(arg)\n except Exception as e:\n # print simple error message, rather than traceback if we can't\n # hook up the GUI\n error(str(e))\n\n @skip_doctest\n @line_magic\n def precision(self, s=''):\n \"\"\"Set floating point precision for pretty printing.\n\n Can set either integer precision or a format string.\n\n If numpy has been imported and precision is an int,\n numpy display precision will also be set, via ``numpy.set_printoptions``.\n\n If no argument is given, defaults will be restored.\n\n Examples\n --------\n ::\n\n In [1]: from math import pi\n\n In [2]: %precision 3\n Out[2]: u'%.3f'\n\n In [3]: pi\n Out[3]: 3.142\n\n In [4]: %precision %i\n Out[4]: u'%i'\n\n In [5]: pi\n Out[5]: 3\n\n In [6]: %precision %e\n Out[6]: u'%e'\n\n In [7]: pi**10\n Out[7]: 9.364805e+04\n\n In [8]: %precision\n Out[8]: u'%r'\n\n In [9]: pi**10\n Out[9]: 93648.047476082982\n \"\"\"\n ptformatter = self.shell.display_formatter.formatters['text/plain']\n ptformatter.float_precision = s\n return ptformatter.float_format\n\n @magic_arguments.magic_arguments()\n @magic_arguments.argument(\n '-e', '--export', action='store_true', default=False,\n help='Export IPython history as a notebook. The filename argument '\n 'is used to specify the notebook name and format. For example '\n 'a filename of notebook.ipynb will result in a notebook name '\n 'of \"notebook\" and a format of \"xml\". Likewise using a \".json\" '\n 'or \".py\" file extension will write the notebook in the json '\n 'or py formats.'\n )\n @magic_arguments.argument(\n '-f', '--format',\n help='Convert an existing IPython notebook to a new format. This option '\n 'specifies the new format and can have the values: xml, json, py. '\n 'The target filename is chosen automatically based on the new '\n 'format. The filename argument gives the name of the source file.'\n )\n @magic_arguments.argument(\n 'filename', type=unicode,\n help='Notebook name or filename'\n )\n @line_magic\n def notebook(self, s):\n \"\"\"Export and convert IPython notebooks.\n\n This function can export the current IPython history to a notebook file\n or can convert an existing notebook file into a different format. For\n example, to export the history to \"foo.ipynb\" do \"%notebook -e foo.ipynb\".\n To export the history to \"foo.py\" do \"%notebook -e foo.py\". To convert\n \"foo.ipynb\" to \"foo.json\" do \"%notebook -f json foo.ipynb\". 
Possible\n formats include (json/ipynb, py).\n \"\"\"\n args = magic_arguments.parse_argstring(self.notebook, s)\n\n from IPython.nbformat import current\n args.filename = unquote_filename(args.filename)\n if args.export:\n fname, name, format = current.parse_filename(args.filename)\n cells = []\n hist = list(self.shell.history_manager.get_range())\n for session, prompt_number, input in hist[:-1]:\n cells.append(current.new_code_cell(prompt_number=prompt_number,\n input=input))\n worksheet = current.new_worksheet(cells=cells)\n nb = current.new_notebook(name=name,worksheets=[worksheet])\n with io.open(fname, 'w', encoding='utf-8') as f:\n current.write(nb, f, format);\n elif args.format is not None:\n old_fname, old_name, old_format = current.parse_filename(args.filename)\n new_format = args.format\n if new_format == u'xml':\n raise ValueError('Notebooks cannot be written as xml.')\n elif new_format == u'ipynb' or new_format == u'json':\n new_fname = old_name + u'.ipynb'\n new_format = u'json'\n elif new_format == u'py':\n new_fname = old_name + u'.py'\n else:\n raise ValueError('Invalid notebook format: %s' % new_format)\n with io.open(old_fname, 'r', encoding='utf-8') as f:\n nb = current.read(f, old_format)\n with io.open(new_fname, 'w', encoding='utf-8') as f:\n current.write(nb, f, new_format)\n",
"path": "IPython/core/magics/basic.py"
}
] | [
{
"content": "\"\"\"Implementation of basic magic functions.\n\"\"\"\n#-----------------------------------------------------------------------------\n# Copyright (c) 2012 The IPython Development Team.\n#\n# Distributed under the terms of the Modified BSD License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\nfrom __future__ import print_function\n\n# Stdlib\nimport io\nimport sys\nfrom pprint import pformat\n\n# Our own packages\nfrom IPython.core.error import UsageError\nfrom IPython.core.inputsplitter import ESC_MAGIC\nfrom IPython.core.magic import Magics, magics_class, line_magic\nfrom IPython.utils.text import format_screen\nfrom IPython.core import magic_arguments, page\nfrom IPython.testing.skipdoctest import skip_doctest\nfrom IPython.utils.ipstruct import Struct\nfrom IPython.utils.path import unquote_filename\nfrom IPython.utils.warn import warn, error\n\n#-----------------------------------------------------------------------------\n# Magics class implementation\n#-----------------------------------------------------------------------------\n\n@magics_class\nclass BasicMagics(Magics):\n \"\"\"Magics that provide central IPython functionality.\n\n These are various magics that don't fit into specific categories but that\n are all part of the base 'IPython experience'.\"\"\"\n\n def _lsmagic(self):\n mesc = ESC_MAGIC\n cesc = mesc*2\n mman = self.shell.magics_manager\n magics = mman.lsmagic()\n out = ['Available line magics:',\n mesc + (' '+mesc).join(sorted(magics['line'])),\n '',\n 'Available cell magics:',\n cesc + (' '+cesc).join(sorted(magics['cell'])),\n '',\n mman.auto_status()]\n return '\\n'.join(out)\n\n @line_magic\n def lsmagic(self, parameter_s=''):\n \"\"\"List currently available magic functions.\"\"\"\n print(self._lsmagic())\n\n @line_magic\n def magic(self, parameter_s=''):\n \"\"\"Print information about the magic function system.\n\n Supported formats: -latex, -brief, -rest\n \"\"\"\n\n mode = ''\n try:\n mode = parameter_s.split()[0][1:]\n if mode == 'rest':\n rest_docs = []\n except IndexError:\n pass\n\n magic_docs = []\n escapes = dict(line=ESC_MAGIC, cell=ESC_MAGIC*2)\n magics = self.shell.magics_manager.magics\n\n for mtype in ('line', 'cell'):\n escape = escapes[mtype]\n for fname, fn in magics[mtype].iteritems():\n\n if mode == 'brief':\n # only first line\n if fn.__doc__:\n fndoc = fn.__doc__.split('\\n',1)[0]\n else:\n fndoc = 'No documentation'\n else:\n if fn.__doc__:\n fndoc = fn.__doc__.rstrip()\n else:\n fndoc = 'No documentation'\n\n if mode == 'rest':\n rest_docs.append('**%s%s**::\\n\\n\\t%s\\n\\n' %\n (escape, fname, fndoc))\n else:\n magic_docs.append('%s%s:\\n\\t%s\\n' %\n (escape, fname, fndoc))\n\n magic_docs = ''.join(magic_docs)\n\n if mode == 'rest':\n return \"\".join(rest_docs)\n\n if mode == 'latex':\n print(self.format_latex(magic_docs))\n return\n else:\n magic_docs = format_screen(magic_docs)\n if mode == 'brief':\n return magic_docs\n\n out = [\"\"\"\nIPython's 'magic' functions\n===========================\n\nThe magic function system provides a series of functions which allow you to\ncontrol the behavior of IPython itself, plus a lot of system-type\nfeatures. 
There are two kinds of magics, line-oriented and cell-oriented.\n\nLine magics are prefixed with the % character and work much like OS\ncommand-line calls: they get as an argument the rest of the line, where\narguments are passed without parentheses or quotes. For example, this will\ntime the given statement::\n\n %timeit range(1000)\n\nCell magics are prefixed with a double %%, and they are functions that get as\nan argument not only the rest of the line, but also the lines below it in a\nseparate argument. These magics are called with two arguments: the rest of the\ncall line and the body of the cell, consisting of the lines below the first.\nFor example::\n\n %%timeit x = numpy.random.randn((100, 100))\n numpy.linalg.svd(x)\n\nwill time the execution of the numpy svd routine, running the assignment of x\nas part of the setup phase, which is not timed.\n\nIn a line-oriented client (the terminal or Qt console IPython), starting a new\ninput with %% will automatically enter cell mode, and IPython will continue\nreading input until a blank line is given. In the notebook, simply type the\nwhole cell as one entity, but keep in mind that the %% escape can only be at\nthe very start of the cell.\n\nNOTE: If you have 'automagic' enabled (via the command line option or with the\n%automagic function), you don't need to type in the % explicitly for line\nmagics; cell magics always require an explicit '%%' escape. By default,\nIPython ships with automagic on, so you should only rarely need the % escape.\n\nExample: typing '%cd mydir' (without the quotes) changes you working directory\nto 'mydir', if it exists.\n\nFor a list of the available magic functions, use %lsmagic. For a description\nof any of them, type %magic_name?, e.g. '%cd?'.\n\nCurrently the magic system has the following functions:\"\"\",\n magic_docs,\n \"Summary of magic functions (from %slsmagic):\",\n self._lsmagic(),\n ]\n page.page('\\n'.join(out))\n\n\n @line_magic\n def page(self, parameter_s=''):\n \"\"\"Pretty print the object and display it through a pager.\n\n %page [options] OBJECT\n\n If no object is given, use _ (last output).\n\n Options:\n\n -r: page str(object), don't pretty-print it.\"\"\"\n\n # After a function contributed by Olivier Aubert, slightly modified.\n\n # Process options/args\n opts, args = self.parse_options(parameter_s, 'r')\n raw = 'r' in opts\n\n oname = args and args or '_'\n info = self.shell._ofind(oname)\n if info['found']:\n txt = (raw and str or pformat)( info['obj'] )\n page.page(txt)\n else:\n print('Object `%s` not found' % oname)\n\n @line_magic\n def profile(self, parameter_s=''):\n \"\"\"Print your currently active IPython profile.\"\"\"\n from IPython.core.application import BaseIPythonApplication\n if BaseIPythonApplication.initialized():\n print(BaseIPythonApplication.instance().profile)\n else:\n error(\"profile is an application-level value, but you don't appear to be in an IPython application\")\n\n @line_magic\n def pprint(self, parameter_s=''):\n \"\"\"Toggle pretty printing on/off.\"\"\"\n ptformatter = self.shell.display_formatter.formatters['text/plain']\n ptformatter.pprint = bool(1 - ptformatter.pprint)\n print('Pretty printing has been turned',\n ['OFF','ON'][ptformatter.pprint])\n\n @line_magic\n def colors(self, parameter_s=''):\n \"\"\"Switch color scheme for prompts, info system and exception handlers.\n\n Currently implemented schemes: NoColor, Linux, LightBG.\n\n Color scheme names are not case-sensitive.\n\n Examples\n --------\n To get a plain black and white 
terminal::\n\n %colors nocolor\n \"\"\"\n def color_switch_err(name):\n warn('Error changing %s color schemes.\\n%s' %\n (name, sys.exc_info()[1]))\n\n\n new_scheme = parameter_s.strip()\n if not new_scheme:\n raise UsageError(\n \"%colors: you must specify a color scheme. See '%colors?'\")\n return\n # local shortcut\n shell = self.shell\n\n import IPython.utils.rlineimpl as readline\n\n if not shell.colors_force and \\\n not readline.have_readline and sys.platform == \"win32\":\n msg = \"\"\"\\\nProper color support under MS Windows requires the pyreadline library.\nYou can find it at:\nhttp://ipython.org/pyreadline.html\nGary's readline needs the ctypes module, from:\nhttp://starship.python.net/crew/theller/ctypes\n(Note that ctypes is already part of Python versions 2.5 and newer).\n\nDefaulting color scheme to 'NoColor'\"\"\"\n new_scheme = 'NoColor'\n warn(msg)\n\n # readline option is 0\n if not shell.colors_force and not shell.has_readline:\n new_scheme = 'NoColor'\n\n # Set prompt colors\n try:\n shell.prompt_manager.color_scheme = new_scheme\n except:\n color_switch_err('prompt')\n else:\n shell.colors = \\\n shell.prompt_manager.color_scheme_table.active_scheme_name\n # Set exception colors\n try:\n shell.InteractiveTB.set_colors(scheme = new_scheme)\n shell.SyntaxTB.set_colors(scheme = new_scheme)\n except:\n color_switch_err('exception')\n\n # Set info (for 'object?') colors\n if shell.color_info:\n try:\n shell.inspector.set_active_scheme(new_scheme)\n except:\n color_switch_err('object inspector')\n else:\n shell.inspector.set_active_scheme('NoColor')\n\n @line_magic\n def xmode(self, parameter_s=''):\n \"\"\"Switch modes for the exception handlers.\n\n Valid modes: Plain, Context and Verbose.\n\n If called without arguments, acts as a toggle.\"\"\"\n\n def xmode_switch_err(name):\n warn('Error changing %s exception modes.\\n%s' %\n (name,sys.exc_info()[1]))\n\n shell = self.shell\n new_mode = parameter_s.strip().capitalize()\n try:\n shell.InteractiveTB.set_mode(mode=new_mode)\n print('Exception reporting mode:',shell.InteractiveTB.mode)\n except:\n xmode_switch_err('user')\n\n @line_magic\n def quickref(self,arg):\n \"\"\" Show a quick reference sheet \"\"\"\n from IPython.core.usage import quick_reference\n qr = quick_reference + self.magic('-brief')\n page.page(qr)\n\n @line_magic\n def doctest_mode(self, parameter_s=''):\n \"\"\"Toggle doctest mode on and off.\n\n This mode is intended to make IPython behave as much as possible like a\n plain Python shell, from the perspective of how its prompts, exceptions\n and output look. This makes it easy to copy and paste parts of a\n session into doctests. It does so by:\n\n - Changing the prompts to the classic ``>>>`` ones.\n - Changing the exception reporting mode to 'Plain'.\n - Disabling pretty-printing of output.\n\n Note that IPython also supports the pasting of code snippets that have\n leading '>>>' and '...' prompts in them. This means that you can paste\n doctests from files or docstrings (even if they have leading\n whitespace), and the code will execute correctly. 
You can then use\n '%history -t' to see the translated history; this will give you the\n input after removal of all the leading prompts and whitespace, which\n can be pasted back into an editor.\n\n With these features, you can switch into this mode easily whenever you\n need to do testing and changes to doctests, without having to leave\n your existing IPython session.\n \"\"\"\n\n # Shorthands\n shell = self.shell\n pm = shell.prompt_manager\n meta = shell.meta\n disp_formatter = self.shell.display_formatter\n ptformatter = disp_formatter.formatters['text/plain']\n # dstore is a data store kept in the instance metadata bag to track any\n # changes we make, so we can undo them later.\n dstore = meta.setdefault('doctest_mode',Struct())\n save_dstore = dstore.setdefault\n\n # save a few values we'll need to recover later\n mode = save_dstore('mode',False)\n save_dstore('rc_pprint',ptformatter.pprint)\n save_dstore('xmode',shell.InteractiveTB.mode)\n save_dstore('rc_separate_out',shell.separate_out)\n save_dstore('rc_separate_out2',shell.separate_out2)\n save_dstore('rc_prompts_pad_left',pm.justify)\n save_dstore('rc_separate_in',shell.separate_in)\n save_dstore('rc_plain_text_only',disp_formatter.plain_text_only)\n save_dstore('prompt_templates',(pm.in_template, pm.in2_template, pm.out_template))\n\n if mode == False:\n # turn on\n pm.in_template = '>>> '\n pm.in2_template = '... '\n pm.out_template = ''\n\n # Prompt separators like plain python\n shell.separate_in = ''\n shell.separate_out = ''\n shell.separate_out2 = ''\n\n pm.justify = False\n\n ptformatter.pprint = False\n disp_formatter.plain_text_only = True\n\n shell.magic('xmode Plain')\n else:\n # turn off\n pm.in_template, pm.in2_template, pm.out_template = dstore.prompt_templates\n\n shell.separate_in = dstore.rc_separate_in\n\n shell.separate_out = dstore.rc_separate_out\n shell.separate_out2 = dstore.rc_separate_out2\n\n pm.justify = dstore.rc_prompts_pad_left\n\n ptformatter.pprint = dstore.rc_pprint\n disp_formatter.plain_text_only = dstore.rc_plain_text_only\n\n shell.magic('xmode ' + dstore.xmode)\n\n # Store new mode and inform\n dstore.mode = bool(1-int(mode))\n mode_label = ['OFF','ON'][dstore.mode]\n print('Doctest mode is:', mode_label)\n\n @line_magic\n def gui(self, parameter_s=''):\n \"\"\"Enable or disable IPython GUI event loop integration.\n\n %gui [GUINAME]\n\n This magic replaces IPython's threaded shells that were activated\n using the (pylab/wthread/etc.) command line flags. GUI toolkits\n can now be enabled at runtime and keyboard\n interrupts should work without any problems. 
The following toolkits\n are supported: wxPython, PyQt4, PyGTK, Tk and Cocoa (OSX)::\n\n %gui wx # enable wxPython event loop integration\n %gui qt4|qt # enable PyQt4 event loop integration\n %gui gtk # enable PyGTK event loop integration\n %gui gtk3 # enable Gtk3 event loop integration\n %gui tk # enable Tk event loop integration\n %gui osx # enable Cocoa event loop integration\n # (requires %matplotlib 1.1)\n %gui # disable all event loop integration\n\n WARNING: after any of these has been called you can simply create\n an application object, but DO NOT start the event loop yourself, as\n we have already handled that.\n \"\"\"\n opts, arg = self.parse_options(parameter_s, '')\n if arg=='': arg = None\n try:\n return self.shell.enable_gui(arg)\n except Exception as e:\n # print simple error message, rather than traceback if we can't\n # hook up the GUI\n error(str(e))\n\n @skip_doctest\n @line_magic\n def precision(self, s=''):\n \"\"\"Set floating point precision for pretty printing.\n\n Can set either integer precision or a format string.\n\n If numpy has been imported and precision is an int,\n numpy display precision will also be set, via ``numpy.set_printoptions``.\n\n If no argument is given, defaults will be restored.\n\n Examples\n --------\n ::\n\n In [1]: from math import pi\n\n In [2]: %precision 3\n Out[2]: u'%.3f'\n\n In [3]: pi\n Out[3]: 3.142\n\n In [4]: %precision %i\n Out[4]: u'%i'\n\n In [5]: pi\n Out[5]: 3\n\n In [6]: %precision %e\n Out[6]: u'%e'\n\n In [7]: pi**10\n Out[7]: 9.364805e+04\n\n In [8]: %precision\n Out[8]: u'%r'\n\n In [9]: pi**10\n Out[9]: 93648.047476082982\n \"\"\"\n ptformatter = self.shell.display_formatter.formatters['text/plain']\n ptformatter.float_precision = s\n return ptformatter.float_format\n\n @magic_arguments.magic_arguments()\n @magic_arguments.argument(\n '-e', '--export', action='store_true', default=False,\n help='Export IPython history as a notebook. The filename argument '\n 'is used to specify the notebook name and format. For example '\n 'a filename of notebook.ipynb will result in a notebook name '\n 'of \"notebook\" and a format of \"xml\". Likewise using a \".json\" '\n 'or \".py\" file extension will write the notebook in the json '\n 'or py formats.'\n )\n @magic_arguments.argument(\n '-f', '--format',\n help='Convert an existing IPython notebook to a new format. This option '\n 'specifies the new format and can have the values: xml, json, py. '\n 'The target filename is chosen automatically based on the new '\n 'format. The filename argument gives the name of the source file.'\n )\n @magic_arguments.argument(\n 'filename', type=unicode,\n help='Notebook name or filename'\n )\n @line_magic\n def notebook(self, s):\n \"\"\"Export and convert IPython notebooks.\n\n This function can export the current IPython history to a notebook file\n or can convert an existing notebook file into a different format. For\n example, to export the history to \"foo.ipynb\" do \"%notebook -e foo.ipynb\".\n To export the history to \"foo.py\" do \"%notebook -e foo.py\". To convert\n \"foo.ipynb\" to \"foo.json\" do \"%notebook -f json foo.ipynb\". 
Possible\n formats include (json/ipynb, py).\n \"\"\"\n args = magic_arguments.parse_argstring(self.notebook, s)\n\n from IPython.nbformat import current\n args.filename = unquote_filename(args.filename)\n if args.export:\n fname, name, format = current.parse_filename(args.filename)\n cells = []\n hist = list(self.shell.history_manager.get_range())\n for session, prompt_number, input in hist[:-1]:\n cells.append(current.new_code_cell(prompt_number=prompt_number,\n input=input))\n worksheet = current.new_worksheet(cells=cells)\n nb = current.new_notebook(name=name,worksheets=[worksheet])\n with io.open(fname, 'w', encoding='utf-8') as f:\n current.write(nb, f, format);\n elif args.format is not None:\n old_fname, old_name, old_format = current.parse_filename(args.filename)\n new_format = args.format\n if new_format == u'xml':\n raise ValueError('Notebooks cannot be written as xml.')\n elif new_format == u'ipynb' or new_format == u'json':\n new_fname = old_name + u'.ipynb'\n new_format = u'json'\n elif new_format == u'py':\n new_fname = old_name + u'.py'\n else:\n raise ValueError('Invalid notebook format: %s' % new_format)\n with io.open(old_fname, 'r', encoding='utf-8') as f:\n nb = current.read(f, old_format)\n with io.open(new_fname, 'w', encoding='utf-8') as f:\n current.write(nb, f, new_format)\n",
"path": "IPython/core/magics/basic.py"
}
] | diff --git a/IPython/core/magics/basic.py b/IPython/core/magics/basic.py
index 534a210a973..d7f35263238 100644
--- a/IPython/core/magics/basic.py
+++ b/IPython/core/magics/basic.py
@@ -185,7 +185,7 @@ def page(self, parameter_s=''):
raw = 'r' in opts
oname = args and args or '_'
- info = self._ofind(oname)
+ info = self.shell._ofind(oname)
if info['found']:
txt = (raw and str or pformat)( info['obj'] )
page.page(txt)
|
PennyLaneAI__pennylane-5623 | [BUG] `param_shift` with `broadcast=True` does not work with zero-length recipes
### Expected behavior
`param_shift` has feature parity between `broadcast=True` and `broadcast=False`
### Actual behavior
The (somewhat esoteric) example from the test suite below runs with `broadcast=False` but not with `broadcast=True`.
### Additional information
If the commented `assert` line is left out, the gradient computation raises an error, because some of the created tapes are not valid.
If the commented `assert` line in the code below is uncommented instead, it fails and shows that too many tapes are being created when `broadcast=True`. This is because unnecessary tapes with `batch_size=0` are being created, and these are the invalid tapes at the core of the bug.
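As a side note on why this particular recipe ends up with no shifted terms left: the `process_shifts` helper from `pennylane/gradients/general_shift_rules.py` (quoted further below) fuses recipe terms that share the same multiplier and shift. The sketch below only illustrates that behavior and paraphrases what `param_shift` does internally; it is not the exact call sequence.
```python
import numpy as np
from pennylane.gradients.general_shift_rules import process_shifts

# The custom recipe from the reproduction script: two terms sharing the same
# multiplier (1) and shift (0), with coefficients that cancel exactly.
recipe = np.array([[-1e7, 1, 0], [1e7, 1, 0]], dtype=float)

# Terms with identical (multiplier, shift) pairs are fused, so the recipe
# collapses to a single term with coefficient 0 at shift 0.
print(process_shifts(recipe))
# [[0. 1. 0.]]
```
Presumably, once `param_shift` has extracted that shift-0 term for the unshifted evaluation, nothing is left of the recipe, and with `broadcast=True` a broadcasted tape is still generated, now with `batch_size=0`; that is the invalid tape the execution in the traceback below then fails on.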
### Source code
```shell
ops_with_custom_recipe = [1]
broadcast = True
dev = qml.device("default.qubit", wires=2)
x = [0.543, -0.654]
with qml.queuing.AnnotatedQueue() as q:
qml.RX(x[0], wires=[0])
qml.RX(x[1], wires=[0])
qml.expval(qml.PauliZ(0))
tape = qml.tape.QuantumScript.from_queue(q)
gradient_recipes = tuple(
[[-1e7, 1, 0], [1e7, 1, 0]] if i in ops_with_custom_recipe else None for i in range(2)
)
tapes, fn = qml.gradients.param_shift(tape, gradient_recipes=gradient_recipes, broadcast=broadcast)
num_ops_standard = (2 - len(ops_with_custom_recipe))
# assert len(tapes) == (1 if broadcast else 2) * num_ops_standard + (tape.num_params != num_ops_standard)
grad = fn(qml.execute(tapes, dev, None))
print(grad)
```
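For reference, a worked count of what the commented `assert` expects, assuming its bookkeeping as written: `num_ops_standard = 2 - len([1]) = 1`, so it expects `1 * 1 + (tape.num_params != 1) = 2` tapes for `broadcast=True` (presumably one broadcasted tape for the parameter with the standard recipe plus one unshifted tape), and `2 * 1 + 1 = 3` tapes for `broadcast=False`. The observed behavior is that more tapes than this are created when `broadcast=True`.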
### Tracebacks
```shell
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[2], line 19
17 num_ops_standard = (2 - len(ops_with_custom_recipe))
18 # assert len(tapes) == (1 if broadcast else 2) * num_ops_standard + (tape.num_params != num_ops_standard)
---> 19 grad = fn(qml.execute(tapes, dev, None))
20 print(grad)
File ~/repos/pennylane/pennylane/workflow/execution.py:616, in execute(tapes, device, gradient_fn, interface, transform_program, config, grad_on_execution, gradient_kwargs, cache, cachesize, max_diff, override_shots, expand_fn, max_expansion, device_batch_transform, device_vjp)
614 # Exiting early if we do not need to deal with an interface boundary
615 if no_interface_boundary_required:
--> 616 results = inner_execute(tapes)
617 return post_processing(results)
619 _grad_on_execution = False
File ~/repos/pennylane/pennylane/workflow/execution.py:297, in _make_inner_execute.<locals>.inner_execute(tapes, **_)
294 transformed_tapes, transform_post_processing = transform_program(tapes)
296 if transformed_tapes:
--> 297 results = device_execution(transformed_tapes)
298 else:
299 results = ()
File ~/repos/pennylane/pennylane/devices/modifiers/simulator_tracking.py:30, in _track_execute.<locals>.execute(self, circuits, execution_config)
28 @wraps(untracked_execute)
29 def execute(self, circuits, execution_config=DefaultExecutionConfig):
---> 30 results = untracked_execute(self, circuits, execution_config)
31 if isinstance(circuits, QuantumScript):
32 batch = (circuits,)
File ~/repos/pennylane/pennylane/devices/modifiers/single_tape_support.py:32, in _make_execute.<locals>.execute(self, circuits, execution_config)
30 is_single_circuit = True
31 circuits = (circuits,)
---> 32 results = batch_execute(self, circuits, execution_config)
33 return results[0] if is_single_circuit else results
File ~/repos/pennylane/pennylane/devices/default_qubit.py:594, in DefaultQubit.execute(self, circuits, execution_config)
591 prng_keys = [self.get_prng_keys()[0] for _ in range(len(circuits))]
593 if max_workers is None:
--> 594 return tuple(
595 _simulate_wrapper(
596 c,
597 {
598 "rng": self._rng,
599 "debugger": self._debugger,
600 "interface": interface,
601 "state_cache": self._state_cache,
602 "prng_key": _key,
603 },
604 )
605 for c, _key in zip(circuits, prng_keys)
606 )
608 vanilla_circuits = [convert_to_numpy_parameters(c) for c in circuits]
609 seeds = self._rng.integers(2**31 - 1, size=len(vanilla_circuits))
File ~/repos/pennylane/pennylane/devices/default_qubit.py:595, in <genexpr>(.0)
591 prng_keys = [self.get_prng_keys()[0] for _ in range(len(circuits))]
593 if max_workers is None:
594 return tuple(
--> 595 _simulate_wrapper(
596 c,
597 {
598 "rng": self._rng,
599 "debugger": self._debugger,
600 "interface": interface,
601 "state_cache": self._state_cache,
602 "prng_key": _key,
603 },
604 )
605 for c, _key in zip(circuits, prng_keys)
606 )
608 vanilla_circuits = [convert_to_numpy_parameters(c) for c in circuits]
609 seeds = self._rng.integers(2**31 - 1, size=len(vanilla_circuits))
File ~/repos/pennylane/pennylane/devices/default_qubit.py:842, in _simulate_wrapper(circuit, kwargs)
841 def _simulate_wrapper(circuit, kwargs):
--> 842 return simulate(circuit, **kwargs)
File ~/repos/pennylane/pennylane/devices/qubit/simulate.py:292, in simulate(circuit, debugger, state_cache, **execution_kwargs)
290 if state_cache is not None:
291 state_cache[circuit.hash] = state
--> 292 return measure_final_state(circuit, state, is_state_batched, rng=rng, prng_key=meas_key)
File ~/repos/pennylane/pennylane/devices/qubit/simulate.py:213, in measure_final_state(circuit, state, is_state_batched, **execution_kwargs)
210 raise TypeError("Native mid-circuit measurements are only supported with finite shots.")
212 if len(circuit.measurements) == 1:
--> 213 return measure(circuit.measurements[0], state, is_state_batched=is_state_batched)
215 return tuple(
216 measure(mp, state, is_state_batched=is_state_batched) for mp in circuit.measurements
217 )
219 # finite-shot case
File ~/repos/pennylane/pennylane/devices/qubit/measure.py:233, in measure(measurementprocess, state, is_state_batched)
220 def measure(
221 measurementprocess: MeasurementProcess, state: TensorLike, is_state_batched: bool = False
222 ) -> TensorLike:
223 """Apply a measurement process to a state.
224
225 Args:
(...)
231 Tensorlike: the result of the measurement
232 """
--> 233 return get_measurement_function(measurementprocess, state)(
234 measurementprocess, state, is_state_batched
235 )
File ~/repos/pennylane/pennylane/devices/qubit/measure.py:72, in state_diagonalizing_gates(measurementprocess, state, is_state_batched)
70 wires = Wires(range(total_indices))
71 flattened_state = flatten_state(state, total_indices)
---> 72 return measurementprocess.process_state(flattened_state, wires)
File ~/repos/pennylane/pennylane/measurements/expval.py:142, in ExpectationMP.process_state(self, state, wire_order)
140 return qml.math.squeeze(self.eigvals())
141 with qml.queuing.QueuingManager.stop_recording():
--> 142 prob = qml.probs(wires=self.wires).process_state(state=state, wire_order=wire_order)
143 # In case of broadcasting, `prob` has two axes and this is a matrix-vector product
144 return self._calculate_expectation(prob)
File ~/repos/pennylane/pennylane/measurements/probs.py:238, in ProbabilityMP.process_state(self, state, wire_order)
236 prob = qml.math.transpose(prob, desired_axes)
237 # flatten and return probabilities
--> 238 return qml.math.reshape(prob, flat_shape)
File ~/venvs/dev/lib/python3.10/site-packages/autoray/autoray.py:80, in do(fn, like, *args, **kwargs)
31 """Do function named ``fn`` on ``(*args, **kwargs)``, peforming single
32 dispatch to retrieve ``fn`` based on whichever library defines the class of
33 the ``args[0]``, or the ``like`` keyword argument if specified.
(...)
77 <tf.Tensor: id=91, shape=(3, 3), dtype=float32>
78 """
79 backend = choose_backend(fn, *args, like=like, **kwargs)
---> 80 return get_lib_fn(backend, fn)(*args, **kwargs)
File ~/venvs/dev/lib/python3.10/site-packages/numpy/core/fromnumeric.py:285, in reshape(a, newshape, order)
200 @array_function_dispatch(_reshape_dispatcher)
201 def reshape(a, newshape, order='C'):
202 """
203 Gives a new shape to an array without changing its data.
204
(...)
283 [5, 6]])
284 """
--> 285 return _wrapfunc(a, 'reshape', newshape, order=order)
File ~/venvs/dev/lib/python3.10/site-packages/numpy/core/fromnumeric.py:59, in _wrapfunc(obj, method, *args, **kwds)
56 return _wrapit(obj, method, *args, **kwds)
58 try:
---> 59 return bound(*args, **kwds)
60 except TypeError:
61 # A TypeError occurs if the object does have such a method in its
62 # class, but its signature is not identical to that of NumPy's. This
(...)
66 # Call _wrapit from within the except clause to ensure a potential
67 # exception has a traceback chain.
68 return _wrapit(obj, method, *args, **kwds)
ValueError: cannot reshape array of size 0 into shape (0,newaxis)
```
### System information
```shell
pl dev
```
### Existing GitHub issues
- [X] I have searched existing GitHub issues to make sure the issue does not already exist.
| [
{
"content": "# Copyright 2018-2021 Xanadu Quantum Technologies Inc.\r\n\r\n# Licensed under the Apache License, Version 2.0 (the \"License\");\r\n# you may not use this file except in compliance with the License.\r\n# You may obtain a copy of the License at\r\n\r\n# http://www.apache.org/licenses/LICENSE-2.0\r\n\r\n# Unless required by applicable law or agreed to in writing, software\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n# See the License for the specific language governing permissions and\r\n# limitations under the License.\r\n\"\"\"Contains a function for generating generalized parameter shift rules and\r\nhelper methods for processing shift rules as well as for creating tapes with\r\nshifted parameters.\"\"\"\r\nimport functools\r\nimport itertools\r\nimport numbers\r\nimport warnings\r\n\r\nimport numpy as np\r\nfrom scipy.linalg import solve as linalg_solve\r\nimport pennylane as qml\r\nfrom pennylane.measurements import MeasurementProcess\r\nfrom pennylane.ops.functions import bind_new_parameters\r\nfrom pennylane.tape import QuantumScript\r\n\r\n\r\ndef process_shifts(rule, tol=1e-10, batch_duplicates=True):\r\n \"\"\"Utility function to process gradient rules.\r\n\r\n Args:\r\n rule (array): a ``(M, N)`` array corresponding to ``M`` terms\r\n with parameter shifts. ``N`` has to be either ``2`` or ``3``.\r\n The first column corresponds to the linear combination coefficients;\r\n the last column contains the shift values.\r\n If ``N=3``, the middle column contains the multipliers.\r\n tol (float): floating point tolerance used when comparing shifts/coefficients\r\n Terms with coefficients below ``tol`` will be removed.\r\n batch_duplicates (bool): whether to check the input ``rule`` for duplicate\r\n shift values in its second column.\r\n\r\n Returns:\r\n array: The processed shift rule with small entries rounded to 0, sorted\r\n with respect to the absolute value of the shifts, and groups of shift\r\n terms with identical (multiplier and) shift fused into one term each,\r\n if ``batch_duplicates=True``.\r\n\r\n This utility function accepts coefficients and shift values as well as optionally\r\n multipliers, and performs the following processing:\r\n\r\n - Set all small (within absolute tolerance ``tol``) coefficients and shifts to 0\r\n\r\n - Remove terms where the coefficients are 0 (including the ones set to 0 in the previous step)\r\n\r\n - Terms with the same shift value (and multiplier) are combined into a single term.\r\n\r\n - Finally, the terms are sorted according to the absolute value of ``shift``,\r\n This ensures that a zero-shift term, if it exists, is returned first.\r\n \"\"\"\r\n # set all small coefficients, multipliers if present, and shifts to zero.\r\n rule[np.abs(rule) < tol] = 0\r\n\r\n # remove columns where the coefficients are 0\r\n rule = rule[~(rule[:, 0] == 0)]\r\n\r\n if batch_duplicates:\r\n round_decimals = int(-np.log10(tol))\r\n rounded_rule = np.round(rule[:, 1:], round_decimals)\r\n # determine unique shifts or (multiplier, shift) combinations\r\n unique_mods = np.unique(rounded_rule, axis=0)\r\n\r\n if rule.shape[0] != unique_mods.shape[0]:\r\n matches = np.all(rounded_rule[:, np.newaxis] == unique_mods[np.newaxis, :], axis=-1)\r\n # TODO: The following line probably can be done in numpy\r\n coeffs = [np.sum(rule[slc, 0]) for slc in matches.T]\r\n rule = np.hstack([np.stack(coeffs)[:, np.newaxis], unique_mods])\r\n\r\n # sort 
columns according to abs(shift)\r\n return rule[np.argsort(np.abs(rule[:, -1]), kind=\"stable\")]\r\n\r\n\r\[email protected]_cache(maxsize=None)\r\ndef eigvals_to_frequencies(eigvals):\r\n r\"\"\"Convert an eigenvalue spectrum to frequency values, defined\r\n as the the set of positive, unique differences of the eigenvalues in the spectrum.\r\n\r\n Args:\r\n eigvals (tuple[int, float]): eigenvalue spectra\r\n\r\n Returns:\r\n tuple[int, float]: frequencies\r\n\r\n **Example**\r\n\r\n >>> eigvals = (-0.5, 0, 0, 0.5)\r\n >>> eigvals_to_frequencies(eigvals)\r\n (0.5, 1.0)\r\n \"\"\"\r\n unique_eigvals = sorted(set(eigvals))\r\n return tuple({j - i for i, j in itertools.combinations(unique_eigvals, 2)})\r\n\r\n\r\[email protected]_cache(maxsize=None)\r\ndef frequencies_to_period(frequencies, decimals=5):\r\n r\"\"\"Returns the period of a Fourier series as defined\r\n by a set of frequencies.\r\n\r\n The period is simply :math:`2\\pi/gcd(frequencies)`,\r\n where :math:`\\text{gcd}` is the greatest common divisor.\r\n\r\n Args:\r\n spectra (tuple[int, float]): frequency spectra\r\n decimals (int): Number of decimal places to round to\r\n if there are non-integral frequencies.\r\n\r\n Returns:\r\n tuple[int, float]: frequencies\r\n\r\n **Example**\r\n\r\n >>> frequencies = (0.5, 1.0)\r\n >>> frequencies_to_period(frequencies)\r\n 12.566370614359172\r\n \"\"\"\r\n try:\r\n gcd = np.gcd.reduce(frequencies)\r\n\r\n except TypeError:\r\n # np.gcd only support integer frequencies\r\n exponent = 10**decimals\r\n frequencies = np.round(frequencies, decimals) * exponent\r\n gcd = np.gcd.reduce(np.int64(frequencies)) / exponent\r\n\r\n return 2 * np.pi / gcd\r\n\r\n\r\[email protected]_cache(maxsize=None)\r\ndef _get_shift_rule(frequencies, shifts=None):\r\n n_freqs = len(frequencies)\r\n frequencies = qml.math.sort(qml.math.stack(frequencies))\r\n freq_min = frequencies[0]\r\n\r\n if len(set(frequencies)) != n_freqs or freq_min <= 0:\r\n raise ValueError(\r\n f\"Expected frequencies to be a list of unique positive values, instead got {frequencies}.\"\r\n )\r\n\r\n mu = np.arange(1, n_freqs + 1)\r\n\r\n if shifts is None: # assume equidistant shifts\r\n shifts = (2 * mu - 1) * np.pi / (2 * n_freqs * freq_min)\r\n equ_shifts = True\r\n else:\r\n shifts = qml.math.sort(qml.math.stack(shifts))\r\n if len(shifts) != n_freqs:\r\n raise ValueError(\r\n f\"Expected number of shifts to equal the number of frequencies ({n_freqs}), instead got {shifts}.\"\r\n )\r\n if len(set(shifts)) != n_freqs:\r\n raise ValueError(f\"Shift values must be unique, instead got {shifts}\")\r\n\r\n equ_shifts = np.allclose(shifts, (2 * mu - 1) * np.pi / (2 * n_freqs * freq_min))\r\n\r\n if len(set(np.round(np.diff(frequencies), 10))) <= 1 and equ_shifts: # equidistant case\r\n coeffs = (\r\n freq_min\r\n * (-1) ** (mu - 1)\r\n / (4 * n_freqs * np.sin(np.pi * (2 * mu - 1) / (4 * n_freqs)) ** 2)\r\n )\r\n\r\n else: # non-equidistant case\r\n sin_matrix = -4 * np.sin(np.outer(shifts, frequencies))\r\n det_sin_matrix = np.linalg.det(sin_matrix)\r\n if abs(det_sin_matrix) < 1e-6:\r\n warnings.warn(\r\n f\"Solving linear problem with near zero determinant ({det_sin_matrix}) \"\r\n \"may give unstable results for the parameter shift rules.\"\r\n )\r\n\r\n coeffs = -2 * linalg_solve(sin_matrix.T, frequencies)\r\n\r\n coeffs = np.concatenate((coeffs, -coeffs))\r\n shifts = np.concatenate((shifts, -shifts)) # pylint: disable=invalid-unary-operand-type\r\n return np.stack([coeffs, shifts]).T\r\n\r\n\r\ndef 
_iterate_shift_rule_with_multipliers(rule, order, period):\r\n r\"\"\"Helper method to repeat a shift rule that includes multipliers multiple\r\n times along the same parameter axis for higher-order derivatives.\"\"\"\r\n combined_rules = []\r\n\r\n for partial_rules in itertools.product(rule, repeat=order):\r\n c, m, s = np.stack(partial_rules).T\r\n cumul_shift = 0.0\r\n for _m, _s in zip(m, s):\r\n cumul_shift *= _m\r\n cumul_shift += _s\r\n if period is not None:\r\n cumul_shift = np.mod(cumul_shift + 0.5 * period, period) - 0.5 * period\r\n combined_rules.append(np.stack([np.prod(c), np.prod(m), cumul_shift]))\r\n\r\n # combine all terms in the linear combination into a single\r\n # array, with column order (coefficients, multipliers, shifts)\r\n return qml.math.stack(combined_rules)\r\n\r\n\r\ndef _iterate_shift_rule(rule, order, period=None):\r\n r\"\"\"Helper method to repeat a shift rule multiple times along the same\r\n parameter axis for higher-order derivatives.\"\"\"\r\n if len(rule[0]) == 3:\r\n return _iterate_shift_rule_with_multipliers(rule, order, period)\r\n\r\n # TODO: optimization: Without multipliers, the order of shifts does not matter,\r\n # so that we can only iterate over the symmetric part of the combined_rules tensor.\r\n # This requires the corresponding multinomial prefactors to be included in the coeffs.\r\n combined_rules = np.array(list(itertools.product(rule, repeat=order)))\r\n # multiply the coefficients of each rule\r\n coeffs = np.prod(combined_rules[..., 0], axis=1)\r\n # sum the shifts of each rule\r\n shifts = np.sum(combined_rules[..., 1], axis=1)\r\n if period is not None:\r\n # if a period is provided, make sure the shift value is within [-period/2, period/2)\r\n shifts = np.mod(shifts + 0.5 * period, period) - 0.5 * period\r\n return qml.math.stack([coeffs, shifts]).T\r\n\r\n\r\ndef _combine_shift_rules(rules):\r\n r\"\"\"Helper method to combine shift rules for multiple parameters into\r\n simultaneous multivariate shift rules.\"\"\"\r\n combined_rules = []\r\n\r\n for partial_rules in itertools.product(*rules):\r\n c, *m, s = np.stack(partial_rules).T\r\n combined = np.concatenate([[np.prod(c)], *m, s])\r\n combined_rules.append(np.stack(combined))\r\n\r\n return np.stack(combined_rules)\r\n\r\n\r\[email protected]_cache()\r\ndef generate_shift_rule(frequencies, shifts=None, order=1):\r\n r\"\"\"Computes the parameter shift rule for a unitary based on its generator's eigenvalue\r\n frequency spectrum.\r\n\r\n To compute gradients of circuit parameters in variational quantum algorithms, expressions for\r\n cost function first derivatives with respect to the variational parameters can be cast into\r\n linear combinations of expectation values at shifted parameter values. The coefficients and\r\n shifts defining the linear combination can be obtained from the unitary generator's eigenvalue\r\n frequency spectrum. Details can be found in\r\n `Wierichs et al. (2022) <https://doi.org/10.22331/q-2022-03-30-677>`__.\r\n\r\n Args:\r\n frequencies (tuple[int or float]): The tuple of eigenvalue frequencies. Eigenvalue\r\n frequencies are defined as the unique positive differences obtained from a set of\r\n eigenvalues.\r\n shifts (tuple[int or float]): the tuple of shift values. If unspecified,\r\n equidistant shifts are assumed. 
If supplied, the length of this tuple should match the\r\n number of given frequencies.\r\n order (int): the order of differentiation to compute the shift rule for\r\n\r\n Returns:\r\n tuple: a tuple of coefficients and shifts describing the gradient rule for the\r\n parameter-shift method. For parameter :math:`\\phi`, the coefficients :math:`c_i` and the\r\n shifts :math:`s_i` combine to give a gradient rule of the following form:\r\n\r\n .. math:: \\frac{\\partial}{\\partial\\phi}f = \\sum_{i} c_i f(\\phi + s_i).\r\n\r\n where :math:`f(\\phi) = \\langle 0|U(\\phi)^\\dagger \\hat{O} U(\\phi)|0\\rangle`\r\n for some observable :math:`\\hat{O}` and the unitary :math:`U(\\phi)=e^{iH\\phi}`.\r\n\r\n Raises:\r\n ValueError: if ``frequencies`` is not a list of unique positive values, or if ``shifts``\r\n (if specified) is not a list of unique values the same length as ``frequencies``.\r\n\r\n **Examples**\r\n\r\n An example of obtaining the frequencies from generator eigenvalues, and obtaining the parameter\r\n shift rule:\r\n\r\n >>> eigvals = (-0.5, 0, 0, 0.5)\r\n >>> frequencies = eigvals_to_frequencies(eigvals)\r\n >>> generate_shift_rule(frequencies)\r\n array([[ 0.4267767 , 1.57079633],\r\n [-0.4267767 , -1.57079633],\r\n [-0.0732233 , 4.71238898],\r\n [ 0.0732233 , -4.71238898]])\r\n\r\n An example with explicitly specified shift values:\r\n\r\n >>> frequencies = (1, 2, 4)\r\n >>> shifts = (np.pi / 3, 2 * np.pi / 3, np.pi / 4)\r\n >>> generate_shift_rule(frequencies, shifts)\r\n array([[ 3. , 0.78539816],\r\n [-3. , -0.78539816],\r\n [-2.09077028, 1.04719755],\r\n [ 2.09077028, -1.04719755],\r\n [ 0.2186308 , 2.0943951 ],\r\n [-0.2186308 , -2.0943951 ]])\r\n\r\n Higher order shift rules (corresponding to the :math:`n`-th derivative of the parameter) can be\r\n requested via the ``order`` argument. For example, to extract the second order shift rule for a\r\n gate with generator :math:`X/2`:\r\n\r\n >>> eigvals = (0.5, -0.5)\r\n >>> frequencies = eigvals_to_frequencies(eigvals)\r\n >>> generate_shift_rule(frequencies, order=2)\r\n array([[-0.5 , 0. ],\r\n [ 0.5 , -3.14159265]])\r\n\r\n This corresponds to the shift rule\r\n :math:`\\frac{\\partial^2 f}{\\partial \\phi^2} = \\frac{1}{2} \\left[f(\\phi) - f(\\phi-\\pi)\\right]`.\r\n \"\"\"\r\n frequencies = tuple(f for f in frequencies if f > 0)\r\n rule = _get_shift_rule(frequencies, shifts=shifts)\r\n\r\n if order > 1:\r\n T = frequencies_to_period(frequencies)\r\n rule = _iterate_shift_rule(rule, order, period=T)\r\n\r\n return process_shifts(rule, tol=1e-10)\r\n\r\n\r\ndef generate_multi_shift_rule(frequencies, shifts=None, orders=None):\r\n r\"\"\"Computes the parameter shift rule with respect to two parametrized unitaries,\r\n given their generator's eigenvalue frequency spectrum. This corresponds to a\r\n shift rule that computes off-diagonal elements of higher order derivative tensors.\r\n For the second order, this corresponds to the Hessian.\r\n\r\n Args:\r\n frequencies (list[tuple[int or float]]): List of eigenvalue frequencies corresponding\r\n to the each parametrized unitary.\r\n shifts (list[tuple[int or float]]): List of shift values corresponding to each parametrized\r\n unitary. If unspecified, equidistant shifts are assumed. 
If supplied, the length\r\n of each tuple in the list must be the same as the length of the corresponding tuple in\r\n ``frequencies``.\r\n orders (list[int]): the order of differentiation for each parametrized unitary.\r\n If unspecified, the first order derivative shift rule is computed for each parametrized\r\n unitary.\r\n\r\n Returns:\r\n tuple: a tuple of coefficients, shifts for the first parameter, and shifts for the\r\n second parameter, describing the gradient rule\r\n for the parameter-shift method.\r\n\r\n For parameters :math:`\\phi_a` and :math:`\\phi_b`, the\r\n coefficients :math:`c_i` and the shifts :math:`s^{(a)}_i`, :math:`s^{(b)}_i`,\r\n combine to give a gradient rule of the following form:\r\n\r\n .. math::\r\n\r\n \\frac{\\partial^2}{\\partial\\phi_a \\partial\\phi_b}f\r\n = \\sum_{i} c_i f(\\phi_a + s^{(a)}_i, \\phi_b + s^{(b)}_i).\r\n\r\n where :math:`f(\\phi_a, \\phi_b) = \\langle 0|U(\\phi_a)^\\dagger V(\\phi_b)^\\dagger \\hat{O} V(\\phi_b) U(\\phi_a)|0\\rangle`\r\n for some observable :math:`\\hat{O}` and unitaries :math:`U(\\phi_a)=e^{iH_a\\phi_a}` and :math:`V(\\phi_b)=e^{iH_b\\phi_b}`.\r\n\r\n **Example**\r\n\r\n >>> generate_multi_shift_rule([(1,), (1,)])\r\n array([[ 0.25 , 1.57079633, 1.57079633],\r\n [-0.25 , 1.57079633, -1.57079633],\r\n [-0.25 , -1.57079633, 1.57079633],\r\n [ 0.25 , -1.57079633, -1.57079633]])\r\n\r\n This corresponds to the gradient rule\r\n\r\n .. math::\r\n\r\n \\begin{align*}\r\n \\frac{\\partial^2 f}{\\partial x\\partial y} &= \\frac{1}{4}\r\n [f(x+\\pi/2, y+\\pi/2) - f(x+\\pi/2, y-\\pi/2)\\\\\r\n &\\phantom{\\frac{1}{4}[}-f(x-\\pi/2, y+\\pi/2) + f(x-\\pi/2, y-\\pi/2) ].\r\n \\end{align*}\r\n\r\n \"\"\"\r\n rules = []\r\n shifts = shifts or [None] * len(frequencies)\r\n orders = orders or [1] * len(frequencies)\r\n\r\n for f, s, o in zip(frequencies, shifts, orders):\r\n rule = generate_shift_rule(f, shifts=s, order=o)\r\n rules.append(process_shifts(rule))\r\n\r\n return _combine_shift_rules(rules)\r\n\r\n\r\ndef _copy_and_shift_params(tape, indices, shifts, multipliers, cast=False):\r\n \"\"\"Create a copy of a tape and of parameters, and set the new tape to the parameters\r\n rescaled and shifted as indicated by ``indices``, ``multipliers`` and ``shifts``.\"\"\"\r\n all_ops = tape.circuit\r\n\r\n for idx, shift, multiplier in zip(indices, shifts, multipliers):\r\n _, op_idx, p_idx = tape.get_operation(idx)\r\n op = (\r\n all_ops[op_idx].obs\r\n if isinstance(all_ops[op_idx], MeasurementProcess)\r\n else all_ops[op_idx]\r\n )\r\n\r\n # Shift copied parameter\r\n new_params = list(op.data)\r\n if not isinstance(new_params[p_idx], numbers.Integral):\r\n multiplier = qml.math.convert_like(multiplier, new_params[p_idx])\r\n multiplier = qml.math.cast_like(multiplier, new_params[p_idx])\r\n shift = qml.math.convert_like(shift, new_params[p_idx])\r\n shift = qml.math.cast_like(shift, new_params[p_idx])\r\n new_params[p_idx] = new_params[p_idx] * multiplier\r\n new_params[p_idx] = new_params[p_idx] + shift\r\n if cast:\r\n dtype = getattr(new_params[p_idx], \"dtype\", float)\r\n new_params[p_idx] = qml.math.cast(new_params[p_idx], dtype)\r\n\r\n # Create operator with shifted parameter and put into shifted tape\r\n shifted_op = bind_new_parameters(op, new_params)\r\n if op_idx < len(tape.operations):\r\n all_ops[op_idx] = shifted_op\r\n else:\r\n mp = all_ops[op_idx].__class__\r\n all_ops[op_idx] = mp(obs=shifted_op)\r\n\r\n # pylint: disable=protected-access\r\n ops = all_ops[: len(tape.operations)]\r\n meas = 
all_ops[len(tape.operations) :]\r\n return QuantumScript(ops=ops, measurements=meas, shots=tape.shots)\r\n\r\n\r\ndef generate_shifted_tapes(tape, index, shifts, multipliers=None, broadcast=False):\r\n r\"\"\"Generate a list of tapes or a single broadcasted tape, where one marked\r\n trainable parameter has been shifted by the provided shift values.\r\n\r\n Args:\r\n tape (.QuantumTape): input quantum tape\r\n index (int): index of the trainable parameter to shift\r\n shifts (Sequence[float or int]): sequence of shift values.\r\n The length determines how many parameter-shifted tapes are created.\r\n multipliers (Sequence[float or int]): sequence of multiplier values.\r\n The length should match the one of ``shifts``. Each multiplier scales the\r\n corresponding gate parameter before the shift is applied. If not provided, the\r\n parameters will not be scaled.\r\n broadcast (bool): Whether or not to use broadcasting to create a single tape\r\n with the shifted parameters.\r\n\r\n Returns:\r\n list[QuantumTape]: List of quantum tapes. In each tape the parameter indicated\r\n by ``index`` has been shifted by the values in ``shifts``. The number of tapes\r\n matches the length of ``shifts`` and ``multipliers`` (if provided).\r\n If ``broadcast=True`` was used, the list contains a single broadcasted tape\r\n with all shifts distributed over the broadcasting dimension. In this case,\r\n the ``batch_size`` of the returned tape matches the length of ``shifts``.\r\n \"\"\"\r\n\r\n if multipliers is None:\r\n multipliers = np.ones_like(shifts)\r\n\r\n if broadcast:\r\n return (_copy_and_shift_params(tape, [index], [shifts], [multipliers]),)\r\n\r\n return tuple(\r\n _copy_and_shift_params(tape, [index], [shift], [multiplier])\r\n for shift, multiplier in zip(shifts, multipliers)\r\n )\r\n\r\n\r\ndef generate_multishifted_tapes(tape, indices, shifts, multipliers=None):\r\n r\"\"\"Generate a list of tapes where multiple marked trainable\r\n parameters have been shifted by the provided shift values.\r\n\r\n Args:\r\n tape (.QuantumTape): input quantum tape\r\n indices (Sequence[int]): indices of the trainable parameters to shift\r\n shifts (Sequence[Sequence[float or int]]): Nested sequence of shift values.\r\n The length of the outer Sequence determines how many parameter-shifted\r\n tapes are created. The lengths of the inner sequences should match and\r\n have the same length as ``indices``.\r\n multipliers (Sequence[Sequence[float or int]]): Nested sequence\r\n of multiplier values of the same format as `shifts``. Each multiplier\r\n scales the corresponding gate parameter before the shift is applied.\r\n If not provided, the parameters will not be scaled.\r\n\r\n Returns:\r\n list[QuantumTape]: List of quantum tapes. Each tape has the marked parameters\r\n indicated by ``indices`` shifted by the values of ``shifts``. The number\r\n of tapes will match the summed lengths of all inner sequences in ``shifts``\r\n and ``multipliers`` (if provided).\r\n \"\"\"\r\n if multipliers is None:\r\n multipliers = np.ones_like(shifts)\r\n\r\n tapes = [\r\n _copy_and_shift_params(tape, indices, _shifts, _multipliers, cast=True)\r\n for _shifts, _multipliers in zip(shifts, multipliers)\r\n ]\r\n\r\n return tapes\r\n",
"path": "pennylane/gradients/general_shift_rules.py"
}
] | [
{
"content": "# Copyright 2018-2021 Xanadu Quantum Technologies Inc.\r\n\r\n# Licensed under the Apache License, Version 2.0 (the \"License\");\r\n# you may not use this file except in compliance with the License.\r\n# You may obtain a copy of the License at\r\n\r\n# http://www.apache.org/licenses/LICENSE-2.0\r\n\r\n# Unless required by applicable law or agreed to in writing, software\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n# See the License for the specific language governing permissions and\r\n# limitations under the License.\r\n\"\"\"Contains a function for generating generalized parameter shift rules and\r\nhelper methods for processing shift rules as well as for creating tapes with\r\nshifted parameters.\"\"\"\r\nimport functools\r\nimport itertools\r\nimport numbers\r\nimport warnings\r\n\r\nimport numpy as np\r\nfrom scipy.linalg import solve as linalg_solve\r\nimport pennylane as qml\r\nfrom pennylane.measurements import MeasurementProcess\r\nfrom pennylane.ops.functions import bind_new_parameters\r\nfrom pennylane.tape import QuantumScript\r\n\r\n\r\ndef process_shifts(rule, tol=1e-10, batch_duplicates=True):\r\n \"\"\"Utility function to process gradient rules.\r\n\r\n Args:\r\n rule (array): a ``(M, N)`` array corresponding to ``M`` terms\r\n with parameter shifts. ``N`` has to be either ``2`` or ``3``.\r\n The first column corresponds to the linear combination coefficients;\r\n the last column contains the shift values.\r\n If ``N=3``, the middle column contains the multipliers.\r\n tol (float): floating point tolerance used when comparing shifts/coefficients\r\n Terms with coefficients below ``tol`` will be removed.\r\n batch_duplicates (bool): whether to check the input ``rule`` for duplicate\r\n shift values in its second column.\r\n\r\n Returns:\r\n array: The processed shift rule with small entries rounded to 0, sorted\r\n with respect to the absolute value of the shifts, and groups of shift\r\n terms with identical (multiplier and) shift fused into one term each,\r\n if ``batch_duplicates=True``.\r\n\r\n This utility function accepts coefficients and shift values as well as optionally\r\n multipliers, and performs the following processing:\r\n\r\n - Set all small (within absolute tolerance ``tol``) coefficients and shifts to 0\r\n\r\n - Remove terms where the coefficients are 0 (including the ones set to 0 in the previous step)\r\n\r\n - Terms with the same shift value (and multiplier) are combined into a single term.\r\n\r\n - Finally, the terms are sorted according to the absolute value of ``shift``,\r\n This ensures that a zero-shift term, if it exists, is returned first.\r\n \"\"\"\r\n # set all small coefficients, multipliers if present, and shifts to zero.\r\n rule[np.abs(rule) < tol] = 0\r\n\r\n # remove columns where the coefficients are 0\r\n rule = rule[~(rule[:, 0] == 0)]\r\n\r\n if batch_duplicates:\r\n round_decimals = int(-np.log10(tol))\r\n rounded_rule = np.round(rule[:, 1:], round_decimals)\r\n # determine unique shifts or (multiplier, shift) combinations\r\n unique_mods = np.unique(rounded_rule, axis=0)\r\n\r\n if rule.shape[0] != unique_mods.shape[0]:\r\n matches = np.all(rounded_rule[:, np.newaxis] == unique_mods[np.newaxis, :], axis=-1)\r\n # TODO: The following line probably can be done in numpy\r\n coeffs = [np.sum(rule[slc, 0]) for slc in matches.T]\r\n rule = np.hstack([np.stack(coeffs)[:, np.newaxis], unique_mods])\r\n\r\n # sort 
columns according to abs(shift)\r\n return rule[np.argsort(np.abs(rule[:, -1]), kind=\"stable\")]\r\n\r\n\r\[email protected]_cache(maxsize=None)\r\ndef eigvals_to_frequencies(eigvals):\r\n r\"\"\"Convert an eigenvalue spectrum to frequency values, defined\r\n as the the set of positive, unique differences of the eigenvalues in the spectrum.\r\n\r\n Args:\r\n eigvals (tuple[int, float]): eigenvalue spectra\r\n\r\n Returns:\r\n tuple[int, float]: frequencies\r\n\r\n **Example**\r\n\r\n >>> eigvals = (-0.5, 0, 0, 0.5)\r\n >>> eigvals_to_frequencies(eigvals)\r\n (0.5, 1.0)\r\n \"\"\"\r\n unique_eigvals = sorted(set(eigvals))\r\n return tuple({j - i for i, j in itertools.combinations(unique_eigvals, 2)})\r\n\r\n\r\[email protected]_cache(maxsize=None)\r\ndef frequencies_to_period(frequencies, decimals=5):\r\n r\"\"\"Returns the period of a Fourier series as defined\r\n by a set of frequencies.\r\n\r\n The period is simply :math:`2\\pi/gcd(frequencies)`,\r\n where :math:`\\text{gcd}` is the greatest common divisor.\r\n\r\n Args:\r\n spectra (tuple[int, float]): frequency spectra\r\n decimals (int): Number of decimal places to round to\r\n if there are non-integral frequencies.\r\n\r\n Returns:\r\n tuple[int, float]: frequencies\r\n\r\n **Example**\r\n\r\n >>> frequencies = (0.5, 1.0)\r\n >>> frequencies_to_period(frequencies)\r\n 12.566370614359172\r\n \"\"\"\r\n try:\r\n gcd = np.gcd.reduce(frequencies)\r\n\r\n except TypeError:\r\n # np.gcd only support integer frequencies\r\n exponent = 10**decimals\r\n frequencies = np.round(frequencies, decimals) * exponent\r\n gcd = np.gcd.reduce(np.int64(frequencies)) / exponent\r\n\r\n return 2 * np.pi / gcd\r\n\r\n\r\[email protected]_cache(maxsize=None)\r\ndef _get_shift_rule(frequencies, shifts=None):\r\n n_freqs = len(frequencies)\r\n frequencies = qml.math.sort(qml.math.stack(frequencies))\r\n freq_min = frequencies[0]\r\n\r\n if len(set(frequencies)) != n_freqs or freq_min <= 0:\r\n raise ValueError(\r\n f\"Expected frequencies to be a list of unique positive values, instead got {frequencies}.\"\r\n )\r\n\r\n mu = np.arange(1, n_freqs + 1)\r\n\r\n if shifts is None: # assume equidistant shifts\r\n shifts = (2 * mu - 1) * np.pi / (2 * n_freqs * freq_min)\r\n equ_shifts = True\r\n else:\r\n shifts = qml.math.sort(qml.math.stack(shifts))\r\n if len(shifts) != n_freqs:\r\n raise ValueError(\r\n f\"Expected number of shifts to equal the number of frequencies ({n_freqs}), instead got {shifts}.\"\r\n )\r\n if len(set(shifts)) != n_freqs:\r\n raise ValueError(f\"Shift values must be unique, instead got {shifts}\")\r\n\r\n equ_shifts = np.allclose(shifts, (2 * mu - 1) * np.pi / (2 * n_freqs * freq_min))\r\n\r\n if len(set(np.round(np.diff(frequencies), 10))) <= 1 and equ_shifts: # equidistant case\r\n coeffs = (\r\n freq_min\r\n * (-1) ** (mu - 1)\r\n / (4 * n_freqs * np.sin(np.pi * (2 * mu - 1) / (4 * n_freqs)) ** 2)\r\n )\r\n\r\n else: # non-equidistant case\r\n sin_matrix = -4 * np.sin(np.outer(shifts, frequencies))\r\n det_sin_matrix = np.linalg.det(sin_matrix)\r\n if abs(det_sin_matrix) < 1e-6:\r\n warnings.warn(\r\n f\"Solving linear problem with near zero determinant ({det_sin_matrix}) \"\r\n \"may give unstable results for the parameter shift rules.\"\r\n )\r\n\r\n coeffs = -2 * linalg_solve(sin_matrix.T, frequencies)\r\n\r\n coeffs = np.concatenate((coeffs, -coeffs))\r\n shifts = np.concatenate((shifts, -shifts)) # pylint: disable=invalid-unary-operand-type\r\n return np.stack([coeffs, shifts]).T\r\n\r\n\r\ndef 
_iterate_shift_rule_with_multipliers(rule, order, period):\r\n r\"\"\"Helper method to repeat a shift rule that includes multipliers multiple\r\n times along the same parameter axis for higher-order derivatives.\"\"\"\r\n combined_rules = []\r\n\r\n for partial_rules in itertools.product(rule, repeat=order):\r\n c, m, s = np.stack(partial_rules).T\r\n cumul_shift = 0.0\r\n for _m, _s in zip(m, s):\r\n cumul_shift *= _m\r\n cumul_shift += _s\r\n if period is not None:\r\n cumul_shift = np.mod(cumul_shift + 0.5 * period, period) - 0.5 * period\r\n combined_rules.append(np.stack([np.prod(c), np.prod(m), cumul_shift]))\r\n\r\n # combine all terms in the linear combination into a single\r\n # array, with column order (coefficients, multipliers, shifts)\r\n return qml.math.stack(combined_rules)\r\n\r\n\r\ndef _iterate_shift_rule(rule, order, period=None):\r\n r\"\"\"Helper method to repeat a shift rule multiple times along the same\r\n parameter axis for higher-order derivatives.\"\"\"\r\n if len(rule[0]) == 3:\r\n return _iterate_shift_rule_with_multipliers(rule, order, period)\r\n\r\n # TODO: optimization: Without multipliers, the order of shifts does not matter,\r\n # so that we can only iterate over the symmetric part of the combined_rules tensor.\r\n # This requires the corresponding multinomial prefactors to be included in the coeffs.\r\n combined_rules = np.array(list(itertools.product(rule, repeat=order)))\r\n # multiply the coefficients of each rule\r\n coeffs = np.prod(combined_rules[..., 0], axis=1)\r\n # sum the shifts of each rule\r\n shifts = np.sum(combined_rules[..., 1], axis=1)\r\n if period is not None:\r\n # if a period is provided, make sure the shift value is within [-period/2, period/2)\r\n shifts = np.mod(shifts + 0.5 * period, period) - 0.5 * period\r\n return qml.math.stack([coeffs, shifts]).T\r\n\r\n\r\ndef _combine_shift_rules(rules):\r\n r\"\"\"Helper method to combine shift rules for multiple parameters into\r\n simultaneous multivariate shift rules.\"\"\"\r\n combined_rules = []\r\n\r\n for partial_rules in itertools.product(*rules):\r\n c, *m, s = np.stack(partial_rules).T\r\n combined = np.concatenate([[np.prod(c)], *m, s])\r\n combined_rules.append(np.stack(combined))\r\n\r\n return np.stack(combined_rules)\r\n\r\n\r\[email protected]_cache()\r\ndef generate_shift_rule(frequencies, shifts=None, order=1):\r\n r\"\"\"Computes the parameter shift rule for a unitary based on its generator's eigenvalue\r\n frequency spectrum.\r\n\r\n To compute gradients of circuit parameters in variational quantum algorithms, expressions for\r\n cost function first derivatives with respect to the variational parameters can be cast into\r\n linear combinations of expectation values at shifted parameter values. The coefficients and\r\n shifts defining the linear combination can be obtained from the unitary generator's eigenvalue\r\n frequency spectrum. Details can be found in\r\n `Wierichs et al. (2022) <https://doi.org/10.22331/q-2022-03-30-677>`__.\r\n\r\n Args:\r\n frequencies (tuple[int or float]): The tuple of eigenvalue frequencies. Eigenvalue\r\n frequencies are defined as the unique positive differences obtained from a set of\r\n eigenvalues.\r\n shifts (tuple[int or float]): the tuple of shift values. If unspecified,\r\n equidistant shifts are assumed. 
If supplied, the length of this tuple should match the\r\n number of given frequencies.\r\n order (int): the order of differentiation to compute the shift rule for\r\n\r\n Returns:\r\n tuple: a tuple of coefficients and shifts describing the gradient rule for the\r\n parameter-shift method. For parameter :math:`\\phi`, the coefficients :math:`c_i` and the\r\n shifts :math:`s_i` combine to give a gradient rule of the following form:\r\n\r\n .. math:: \\frac{\\partial}{\\partial\\phi}f = \\sum_{i} c_i f(\\phi + s_i).\r\n\r\n where :math:`f(\\phi) = \\langle 0|U(\\phi)^\\dagger \\hat{O} U(\\phi)|0\\rangle`\r\n for some observable :math:`\\hat{O}` and the unitary :math:`U(\\phi)=e^{iH\\phi}`.\r\n\r\n Raises:\r\n ValueError: if ``frequencies`` is not a list of unique positive values, or if ``shifts``\r\n (if specified) is not a list of unique values the same length as ``frequencies``.\r\n\r\n **Examples**\r\n\r\n An example of obtaining the frequencies from generator eigenvalues, and obtaining the parameter\r\n shift rule:\r\n\r\n >>> eigvals = (-0.5, 0, 0, 0.5)\r\n >>> frequencies = eigvals_to_frequencies(eigvals)\r\n >>> generate_shift_rule(frequencies)\r\n array([[ 0.4267767 , 1.57079633],\r\n [-0.4267767 , -1.57079633],\r\n [-0.0732233 , 4.71238898],\r\n [ 0.0732233 , -4.71238898]])\r\n\r\n An example with explicitly specified shift values:\r\n\r\n >>> frequencies = (1, 2, 4)\r\n >>> shifts = (np.pi / 3, 2 * np.pi / 3, np.pi / 4)\r\n >>> generate_shift_rule(frequencies, shifts)\r\n array([[ 3. , 0.78539816],\r\n [-3. , -0.78539816],\r\n [-2.09077028, 1.04719755],\r\n [ 2.09077028, -1.04719755],\r\n [ 0.2186308 , 2.0943951 ],\r\n [-0.2186308 , -2.0943951 ]])\r\n\r\n Higher order shift rules (corresponding to the :math:`n`-th derivative of the parameter) can be\r\n requested via the ``order`` argument. For example, to extract the second order shift rule for a\r\n gate with generator :math:`X/2`:\r\n\r\n >>> eigvals = (0.5, -0.5)\r\n >>> frequencies = eigvals_to_frequencies(eigvals)\r\n >>> generate_shift_rule(frequencies, order=2)\r\n array([[-0.5 , 0. ],\r\n [ 0.5 , -3.14159265]])\r\n\r\n This corresponds to the shift rule\r\n :math:`\\frac{\\partial^2 f}{\\partial \\phi^2} = \\frac{1}{2} \\left[f(\\phi) - f(\\phi-\\pi)\\right]`.\r\n \"\"\"\r\n frequencies = tuple(f for f in frequencies if f > 0)\r\n rule = _get_shift_rule(frequencies, shifts=shifts)\r\n\r\n if order > 1:\r\n T = frequencies_to_period(frequencies)\r\n rule = _iterate_shift_rule(rule, order, period=T)\r\n\r\n return process_shifts(rule, tol=1e-10)\r\n\r\n\r\ndef generate_multi_shift_rule(frequencies, shifts=None, orders=None):\r\n r\"\"\"Computes the parameter shift rule with respect to two parametrized unitaries,\r\n given their generator's eigenvalue frequency spectrum. This corresponds to a\r\n shift rule that computes off-diagonal elements of higher order derivative tensors.\r\n For the second order, this corresponds to the Hessian.\r\n\r\n Args:\r\n frequencies (list[tuple[int or float]]): List of eigenvalue frequencies corresponding\r\n to the each parametrized unitary.\r\n shifts (list[tuple[int or float]]): List of shift values corresponding to each parametrized\r\n unitary. If unspecified, equidistant shifts are assumed. 
If supplied, the length\r\n of each tuple in the list must be the same as the length of the corresponding tuple in\r\n ``frequencies``.\r\n orders (list[int]): the order of differentiation for each parametrized unitary.\r\n If unspecified, the first order derivative shift rule is computed for each parametrized\r\n unitary.\r\n\r\n Returns:\r\n tuple: a tuple of coefficients, shifts for the first parameter, and shifts for the\r\n second parameter, describing the gradient rule\r\n for the parameter-shift method.\r\n\r\n For parameters :math:`\\phi_a` and :math:`\\phi_b`, the\r\n coefficients :math:`c_i` and the shifts :math:`s^{(a)}_i`, :math:`s^{(b)}_i`,\r\n combine to give a gradient rule of the following form:\r\n\r\n .. math::\r\n\r\n \\frac{\\partial^2}{\\partial\\phi_a \\partial\\phi_b}f\r\n = \\sum_{i} c_i f(\\phi_a + s^{(a)}_i, \\phi_b + s^{(b)}_i).\r\n\r\n where :math:`f(\\phi_a, \\phi_b) = \\langle 0|U(\\phi_a)^\\dagger V(\\phi_b)^\\dagger \\hat{O} V(\\phi_b) U(\\phi_a)|0\\rangle`\r\n for some observable :math:`\\hat{O}` and unitaries :math:`U(\\phi_a)=e^{iH_a\\phi_a}` and :math:`V(\\phi_b)=e^{iH_b\\phi_b}`.\r\n\r\n **Example**\r\n\r\n >>> generate_multi_shift_rule([(1,), (1,)])\r\n array([[ 0.25 , 1.57079633, 1.57079633],\r\n [-0.25 , 1.57079633, -1.57079633],\r\n [-0.25 , -1.57079633, 1.57079633],\r\n [ 0.25 , -1.57079633, -1.57079633]])\r\n\r\n This corresponds to the gradient rule\r\n\r\n .. math::\r\n\r\n \\begin{align*}\r\n \\frac{\\partial^2 f}{\\partial x\\partial y} &= \\frac{1}{4}\r\n [f(x+\\pi/2, y+\\pi/2) - f(x+\\pi/2, y-\\pi/2)\\\\\r\n &\\phantom{\\frac{1}{4}[}-f(x-\\pi/2, y+\\pi/2) + f(x-\\pi/2, y-\\pi/2) ].\r\n \\end{align*}\r\n\r\n \"\"\"\r\n rules = []\r\n shifts = shifts or [None] * len(frequencies)\r\n orders = orders or [1] * len(frequencies)\r\n\r\n for f, s, o in zip(frequencies, shifts, orders):\r\n rule = generate_shift_rule(f, shifts=s, order=o)\r\n rules.append(process_shifts(rule))\r\n\r\n return _combine_shift_rules(rules)\r\n\r\n\r\ndef _copy_and_shift_params(tape, indices, shifts, multipliers, cast=False):\r\n \"\"\"Create a copy of a tape and of parameters, and set the new tape to the parameters\r\n rescaled and shifted as indicated by ``indices``, ``multipliers`` and ``shifts``.\"\"\"\r\n all_ops = tape.circuit\r\n\r\n for idx, shift, multiplier in zip(indices, shifts, multipliers):\r\n _, op_idx, p_idx = tape.get_operation(idx)\r\n op = (\r\n all_ops[op_idx].obs\r\n if isinstance(all_ops[op_idx], MeasurementProcess)\r\n else all_ops[op_idx]\r\n )\r\n\r\n # Shift copied parameter\r\n new_params = list(op.data)\r\n if not isinstance(new_params[p_idx], numbers.Integral):\r\n multiplier = qml.math.convert_like(multiplier, new_params[p_idx])\r\n multiplier = qml.math.cast_like(multiplier, new_params[p_idx])\r\n shift = qml.math.convert_like(shift, new_params[p_idx])\r\n shift = qml.math.cast_like(shift, new_params[p_idx])\r\n new_params[p_idx] = new_params[p_idx] * multiplier\r\n new_params[p_idx] = new_params[p_idx] + shift\r\n if cast:\r\n dtype = getattr(new_params[p_idx], \"dtype\", float)\r\n new_params[p_idx] = qml.math.cast(new_params[p_idx], dtype)\r\n\r\n # Create operator with shifted parameter and put into shifted tape\r\n shifted_op = bind_new_parameters(op, new_params)\r\n if op_idx < len(tape.operations):\r\n all_ops[op_idx] = shifted_op\r\n else:\r\n mp = all_ops[op_idx].__class__\r\n all_ops[op_idx] = mp(obs=shifted_op)\r\n\r\n # pylint: disable=protected-access\r\n ops = all_ops[: len(tape.operations)]\r\n meas = 
all_ops[len(tape.operations) :]\r\n return QuantumScript(ops=ops, measurements=meas, shots=tape.shots)\r\n\r\n\r\ndef generate_shifted_tapes(tape, index, shifts, multipliers=None, broadcast=False):\r\n r\"\"\"Generate a list of tapes or a single broadcasted tape, where one marked\r\n trainable parameter has been shifted by the provided shift values.\r\n\r\n Args:\r\n tape (.QuantumTape): input quantum tape\r\n index (int): index of the trainable parameter to shift\r\n shifts (Sequence[float or int]): sequence of shift values.\r\n The length determines how many parameter-shifted tapes are created.\r\n multipliers (Sequence[float or int]): sequence of multiplier values.\r\n The length should match the one of ``shifts``. Each multiplier scales the\r\n corresponding gate parameter before the shift is applied. If not provided, the\r\n parameters will not be scaled.\r\n broadcast (bool): Whether or not to use broadcasting to create a single tape\r\n with the shifted parameters.\r\n\r\n Returns:\r\n list[QuantumTape]: List of quantum tapes. In each tape the parameter indicated\r\n by ``index`` has been shifted by the values in ``shifts``. The number of tapes\r\n matches the length of ``shifts`` and ``multipliers`` (if provided).\r\n If ``broadcast=True`` was used, the list contains a single broadcasted tape\r\n with all shifts distributed over the broadcasting dimension. In this case,\r\n the ``batch_size`` of the returned tape matches the length of ``shifts``.\r\n \"\"\"\r\n\r\n if len(shifts) == 0:\r\n return tuple()\r\n\r\n if multipliers is None:\r\n multipliers = np.ones_like(shifts)\r\n\r\n if broadcast:\r\n return (_copy_and_shift_params(tape, [index], [shifts], [multipliers]),)\r\n\r\n return tuple(\r\n _copy_and_shift_params(tape, [index], [shift], [multiplier])\r\n for shift, multiplier in zip(shifts, multipliers)\r\n )\r\n\r\n\r\ndef generate_multishifted_tapes(tape, indices, shifts, multipliers=None):\r\n r\"\"\"Generate a list of tapes where multiple marked trainable\r\n parameters have been shifted by the provided shift values.\r\n\r\n Args:\r\n tape (.QuantumTape): input quantum tape\r\n indices (Sequence[int]): indices of the trainable parameters to shift\r\n shifts (Sequence[Sequence[float or int]]): Nested sequence of shift values.\r\n The length of the outer Sequence determines how many parameter-shifted\r\n tapes are created. The lengths of the inner sequences should match and\r\n have the same length as ``indices``.\r\n multipliers (Sequence[Sequence[float or int]]): Nested sequence\r\n of multiplier values of the same format as `shifts``. Each multiplier\r\n scales the corresponding gate parameter before the shift is applied.\r\n If not provided, the parameters will not be scaled.\r\n\r\n Returns:\r\n list[QuantumTape]: List of quantum tapes. Each tape has the marked parameters\r\n indicated by ``indices`` shifted by the values of ``shifts``. The number\r\n of tapes will match the summed lengths of all inner sequences in ``shifts``\r\n and ``multipliers`` (if provided).\r\n \"\"\"\r\n if multipliers is None:\r\n multipliers = np.ones_like(shifts)\r\n\r\n tapes = [\r\n _copy_and_shift_params(tape, indices, _shifts, _multipliers, cast=True)\r\n for _shifts, _multipliers in zip(shifts, multipliers)\r\n ]\r\n\r\n return tapes\r\n",
"path": "pennylane/gradients/general_shift_rules.py"
}
] | diff --git a/doc/releases/changelog-0.36.0.md b/doc/releases/changelog-0.36.0.md
index 77826722ffd..c81e7c5710b 100644
--- a/doc/releases/changelog-0.36.0.md
+++ b/doc/releases/changelog-0.36.0.md
@@ -564,8 +564,9 @@
[(#5610)](https://github.com/PennyLaneAI/pennylane/pull/5610)
* Using shot vectors with `param_shift(... broadcast=True)` caused a bug. This combination is no longer supported
- and will be added again in the next release.
+ and will be added again in the next release. Fixed a bug with custom gradient recipes that only consist of unshifted terms.
[(#5612)](https://github.com/PennyLaneAI/pennylane/pull/5612)
+ [(#5623)](https://github.com/PennyLaneAI/pennylane/pull/5623)
* Cast the keys of the `CountsMP` measurements returned `dynamic_one_shot` to the type produced by `MeasurementValue.concretize`.
[(#5587)](https://github.com/PennyLaneAI/pennylane/pull/5587)
diff --git a/pennylane/gradients/general_shift_rules.py b/pennylane/gradients/general_shift_rules.py
index 07e2b9fce56..cccb0f5a930 100644
--- a/pennylane/gradients/general_shift_rules.py
+++ b/pennylane/gradients/general_shift_rules.py
@@ -451,6 +451,9 @@ def generate_shifted_tapes(tape, index, shifts, multipliers=None, broadcast=Fals
the ``batch_size`` of the returned tape matches the length of ``shifts``.
"""
+ if len(shifts) == 0:
+ return tuple()
+
if multipliers is None:
multipliers = np.ones_like(shifts)
diff --git a/tests/gradients/parameter_shift/test_parameter_shift.py b/tests/gradients/parameter_shift/test_parameter_shift.py
index 7a0efd917d4..0dd03d8faac 100644
--- a/tests/gradients/parameter_shift/test_parameter_shift.py
+++ b/tests/gradients/parameter_shift/test_parameter_shift.py
@@ -536,7 +536,8 @@ def test_all_zero_diff_methods_multiple_returns_tape(self):
# tapes, _ = qml.gradients.param_shift(circuit.tape, broadcast=broadcast)
# assert tapes == []
- def test_with_gradient_recipes(self):
+ @pytest.mark.parametrize("broadcast", [True, False])
+ def test_with_gradient_recipes(self, broadcast):
"""Test that the function behaves as expected"""
with qml.queuing.AnnotatedQueue() as q:
@@ -549,18 +550,34 @@ def test_with_gradient_recipes(self):
tape = qml.tape.QuantumScript.from_queue(q)
tape.trainable_params = {0, 2}
gradient_recipes = ([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], [[1, 1, 1], [2, 2, 2], [3, 3, 3]])
- tapes, _ = qml.gradients.param_shift(tape, gradient_recipes=gradient_recipes)
+ tapes, _ = param_shift(tape, gradient_recipes=gradient_recipes, broadcast=broadcast)
- assert len(tapes) == 5
- assert [t.batch_size for t in tapes] == [None] * 5
- assert tapes[0].get_parameters(trainable_only=False) == [0.2 * 1.0 + 0.3, 2.0, 3.0, 4.0]
- assert tapes[1].get_parameters(trainable_only=False) == [0.5 * 1.0 + 0.6, 2.0, 3.0, 4.0]
- assert tapes[2].get_parameters(trainable_only=False) == [1.0, 2.0, 1 * 3.0 + 1, 4.0]
- assert tapes[3].get_parameters(trainable_only=False) == [1.0, 2.0, 2 * 3.0 + 2, 4.0]
- assert tapes[4].get_parameters(trainable_only=False) == [1.0, 2.0, 3 * 3.0 + 3, 4.0]
+ if broadcast:
+ assert len(tapes) == 2
+ assert [t.batch_size for t in tapes] == [2, 3]
+
+ shifted_batch = [0.2 * 1.0 + 0.3, 0.5 * 1.0 + 0.6]
+ tape_par = tapes[0].get_parameters(trainable_only=False)
+ assert np.allclose(tape_par[0], shifted_batch)
+ assert tape_par[1:] == [2.0, 3.0, 4.0]
+
+ shifted_batch = [1 * 3.0 + 1, 2 * 3.0 + 2, 3 * 3.0 + 3]
+ tape_par = tapes[1].get_parameters(trainable_only=False)
+ assert tape_par[:2] == [1.0, 2.0]
+ assert np.allclose(tape_par[2], shifted_batch)
+ assert tape_par[3:] == [4.0]
+ else:
+ assert len(tapes) == 5
+ assert [t.batch_size for t in tapes] == [None] * 5
+ assert tapes[0].get_parameters(trainable_only=False) == [0.2 * 1.0 + 0.3, 2.0, 3.0, 4.0]
+ assert tapes[1].get_parameters(trainable_only=False) == [0.5 * 1.0 + 0.6, 2.0, 3.0, 4.0]
+ assert tapes[2].get_parameters(trainable_only=False) == [1.0, 2.0, 1 * 3.0 + 1, 4.0]
+ assert tapes[3].get_parameters(trainable_only=False) == [1.0, 2.0, 2 * 3.0 + 2, 4.0]
+ assert tapes[4].get_parameters(trainable_only=False) == [1.0, 2.0, 3 * 3.0 + 3, 4.0]
+ @pytest.mark.parametrize("broadcast", [True, False])
@pytest.mark.parametrize("ops_with_custom_recipe", [[0], [1], [0, 1]])
- def test_recycled_unshifted_tape(self, ops_with_custom_recipe):
+ def test_recycled_unshifted_tape(self, ops_with_custom_recipe, broadcast):
"""Test that if the gradient recipe has a zero-shift component, then
the tape is executed only once using the current parameter
values."""
@@ -577,23 +594,28 @@ def test_recycled_unshifted_tape(self, ops_with_custom_recipe):
[[-1e7, 1, 0], [1e7, 1, 1e-7]] if i in ops_with_custom_recipe else None
for i in range(2)
)
- tapes, fn = qml.gradients.param_shift(tape, gradient_recipes=gradient_recipes)
+ tapes, fn = param_shift(tape, gradient_recipes=gradient_recipes, broadcast=broadcast)
- # two tapes per parameter that doesn't use a custom recipe,
+ # two (one with broadcast) tapes per parameter that doesn't use a custom recipe,
# one tape per parameter that uses custom recipe,
# plus one global call if at least one uses the custom recipe
- num_ops_standard_recipe = tape.num_params - len(ops_with_custom_recipe)
- assert len(tapes) == 2 * num_ops_standard_recipe + len(ops_with_custom_recipe) + 1
+ num_custom = len(ops_with_custom_recipe)
+ num_ops_standard_recipe = tape.num_params - num_custom
+ tapes_per_param = 1 if broadcast else 2
+ assert len(tapes) == tapes_per_param * num_ops_standard_recipe + num_custom + 1
# Test that executing the tapes and the postprocessing function works
grad = fn(qml.execute(tapes, dev, None))
assert qml.math.allclose(grad, -np.sin(x[0] + x[1]), atol=1e-5)
+ @pytest.mark.parametrize("broadcast", [False, True])
@pytest.mark.parametrize("ops_with_custom_recipe", [[0], [1], [0, 1]])
@pytest.mark.parametrize("multi_measure", [False, True])
- def test_custom_recipe_unshifted_only(self, ops_with_custom_recipe, multi_measure):
+ def test_custom_recipe_unshifted_only(self, ops_with_custom_recipe, multi_measure, broadcast):
"""Test that if the gradient recipe has a zero-shift component, then
the tape is executed only once using the current parameter
values."""
+ if multi_measure and broadcast:
+ pytest.skip("Multiple measurements are not supported with `broadcast=True` yet.")
dev = qml.device("default.qubit", wires=2)
x = [0.543, -0.654]
@@ -608,12 +630,13 @@ def test_custom_recipe_unshifted_only(self, ops_with_custom_recipe, multi_measur
gradient_recipes = tuple(
[[-1e7, 1, 0], [1e7, 1, 0]] if i in ops_with_custom_recipe else None for i in range(2)
)
- tapes, fn = qml.gradients.param_shift(tape, gradient_recipes=gradient_recipes)
+ tapes, fn = param_shift(tape, gradient_recipes=gradient_recipes, broadcast=broadcast)
- # two tapes per parameter that doesn't use a custom recipe,
+ # two (one with broadcast) tapes per parameter that doesn't use a custom recipe,
# plus one global (unshifted) call if at least one uses the custom recipe
num_ops_standard_recipe = tape.num_params - len(ops_with_custom_recipe)
- assert len(tapes) == 2 * num_ops_standard_recipe + int(
+ tapes_per_param = 1 if broadcast else 2
+ assert len(tapes) == tapes_per_param * num_ops_standard_recipe + int(
tape.num_params != num_ops_standard_recipe
)
# Test that executing the tapes and the postprocessing function works
@@ -662,8 +685,9 @@ def test_custom_recipe_mixing_unshifted_shifted(self, ops_with_custom_recipe):
assert qml.math.allclose(grad[0], -np.sin(x[0] + x[1]), atol=1e-5)
assert qml.math.allclose(grad[1], 0, atol=1e-5)
+ @pytest.mark.parametrize("broadcast", [True, False])
@pytest.mark.parametrize("y_wire", [0, 1])
- def test_f0_provided(self, y_wire):
+ def test_f0_provided(self, y_wire, broadcast):
"""Test that if the original tape output is provided, then
the tape is not executed additionally at the current parameter
values."""
@@ -677,7 +701,7 @@ def test_f0_provided(self, y_wire):
tape = qml.tape.QuantumScript.from_queue(q)
gradient_recipes = ([[-1e7, 1, 0], [1e7, 1, 1e7]],) * 2
f0 = dev.execute(tape)
- tapes, fn = qml.gradients.param_shift(tape, gradient_recipes=gradient_recipes, f0=f0)
+ tapes, fn = param_shift(tape, gradient_recipes=gradient_recipes, f0=f0, broadcast=broadcast)
# one tape per parameter that impacts the expval
assert len(tapes) == 2 if y_wire == 0 else 1
|
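The three-line guard added to `generate_shifted_tapes` in the diff above is what this record's changelog entry refers to: a custom recipe made up only of unshifted terms leaves an empty sequence of shift values, which now short-circuits to an empty tuple of tapes. Below is a minimal sketch of that behaviour, assuming a PennyLane build matching the `general_shift_rules.py` shown in this record; the toy single-`RX` tape is only an illustrative example, not part of the patch or its tests.

```python
# Minimal sketch (not part of the patch): exercise generate_shifted_tapes directly.
import numpy as np
import pennylane as qml
from pennylane.gradients.general_shift_rules import generate_shifted_tapes

# A one-parameter tape; trainable parameter index 0 is the RX rotation angle.
tape = qml.tape.QuantumScript([qml.RX(0.3, wires=0)], [qml.expval(qml.PauliZ(0))])

# No shift values (e.g. a custom recipe containing only unshifted terms)
# now yields no shifted tapes at all.
assert generate_shifted_tapes(tape, index=0, shifts=[]) == ()

# An ordinary two-term rule still yields one shifted tape per shift value.
shifted = generate_shifted_tapes(tape, index=0, shifts=[np.pi / 2, -np.pi / 2])
assert len(shifted) == 2
```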
pypa__setuptools-936 | Graft with Asterisk broken after 28.4.0
28.4.0 is the last release in which a wildcard graft such as `graft */data` worked. After that release, the same MANIFEST.in line emits `warning: no directories found matching '*/data'`.
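For context, the patched `FileList.graft` in the "after" version of `setuptools/command/egg_info.py` shown later in this record expands the directory pattern with setuptools' own `glob` before walking each match. The snippet below is a standalone restatement of that method; the free-function form and the `file_list` parameter are purely illustrative and not part of the actual patch.

```python
# Standalone sketch of the patched graft behaviour from this record: expand the
# wildcard directory pattern first, then collect every file under each match.
import distutils.filelist
from setuptools.glob import glob


def graft(file_list, dir_pattern):
    """Add all files under each directory matching ``dir_pattern`` to ``file_list``."""
    found = []
    for match_dir in glob(dir_pattern):  # e.g. '*/data' -> ['pkg_a/data', 'pkg_b/data']
        found += distutils.filelist.findall(match_dir)
    file_list.extend(found)
    return bool(found)  # False triggers "no directories found matching ..." in process_template_line
```

The pre-patch implementation passed the raw pattern straight to `distutils.filelist.findall`, which treats `*/data` as a literal directory name, finds nothing, and so triggers the warning reported in this issue.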
| [
{
"content": "\"\"\"setuptools.command.egg_info\n\nCreate a distribution's .egg-info directory and contents\"\"\"\n\nfrom distutils.filelist import FileList as _FileList\nfrom distutils.errors import DistutilsInternalError\nfrom distutils.util import convert_path\nfrom distutils import log\nimport distutils.errors\nimport distutils.filelist\nimport os\nimport re\nimport sys\nimport io\nimport warnings\nimport time\nimport collections\n\nimport six\nfrom six.moves import map\n\nfrom setuptools import Command\nfrom setuptools.command.sdist import sdist\nfrom setuptools.command.sdist import walk_revctrl\nfrom setuptools.command.setopt import edit_config\nfrom setuptools.command import bdist_egg\nfrom pkg_resources import (\n parse_requirements, safe_name, parse_version,\n safe_version, yield_lines, EntryPoint, iter_entry_points, to_filename)\nimport setuptools.unicode_utils as unicode_utils\nfrom setuptools.glob import glob\n\nimport packaging\n\n\ndef translate_pattern(glob):\n \"\"\"\n Translate a file path glob like '*.txt' in to a regular expression.\n This differs from fnmatch.translate which allows wildcards to match\n directory separators. It also knows about '**/' which matches any number of\n directories.\n \"\"\"\n pat = ''\n\n # This will split on '/' within [character classes]. This is deliberate.\n chunks = glob.split(os.path.sep)\n\n sep = re.escape(os.sep)\n valid_char = '[^%s]' % (sep,)\n\n for c, chunk in enumerate(chunks):\n last_chunk = c == len(chunks) - 1\n\n # Chunks that are a literal ** are globstars. They match anything.\n if chunk == '**':\n if last_chunk:\n # Match anything if this is the last component\n pat += '.*'\n else:\n # Match '(name/)*'\n pat += '(?:%s+%s)*' % (valid_char, sep)\n continue # Break here as the whole path component has been handled\n\n # Find any special characters in the remainder\n i = 0\n chunk_len = len(chunk)\n while i < chunk_len:\n char = chunk[i]\n if char == '*':\n # Match any number of name characters\n pat += valid_char + '*'\n elif char == '?':\n # Match a name character\n pat += valid_char\n elif char == '[':\n # Character class\n inner_i = i + 1\n # Skip initial !/] chars\n if inner_i < chunk_len and chunk[inner_i] == '!':\n inner_i = inner_i + 1\n if inner_i < chunk_len and chunk[inner_i] == ']':\n inner_i = inner_i + 1\n\n # Loop till the closing ] is found\n while inner_i < chunk_len and chunk[inner_i] != ']':\n inner_i = inner_i + 1\n\n if inner_i >= chunk_len:\n # Got to the end of the string without finding a closing ]\n # Do not treat this as a matching group, but as a literal [\n pat += re.escape(char)\n else:\n # Grab the insides of the [brackets]\n inner = chunk[i + 1:inner_i]\n char_class = ''\n\n # Class negation\n if inner[0] == '!':\n char_class = '^'\n inner = inner[1:]\n\n char_class += re.escape(inner)\n pat += '[%s]' % (char_class,)\n\n # Skip to the end ]\n i = inner_i\n else:\n pat += re.escape(char)\n i += 1\n\n # Join each chunk with the dir separator\n if not last_chunk:\n pat += sep\n\n return re.compile(pat + r'\\Z(?ms)')\n\n\nclass egg_info(Command):\n description = \"create a distribution's .egg-info directory\"\n\n user_options = [\n ('egg-base=', 'e', \"directory containing .egg-info directories\"\n \" (default: top of the source tree)\"),\n ('tag-date', 'd', \"Add date stamp (e.g. 
20050528) to version number\"),\n ('tag-build=', 'b', \"Specify explicit tag to add to version number\"),\n ('no-date', 'D', \"Don't include date stamp [default]\"),\n ]\n\n boolean_options = ['tag-date']\n negative_opt = {\n 'no-date': 'tag-date',\n }\n\n def initialize_options(self):\n self.egg_name = None\n self.egg_version = None\n self.egg_base = None\n self.egg_info = None\n self.tag_build = None\n self.tag_date = 0\n self.broken_egg_info = False\n self.vtags = None\n\n ####################################\n # allow the 'tag_svn_revision' to be detected and\n # set, supporting sdists built on older Setuptools.\n @property\n def tag_svn_revision(self):\n pass\n\n @tag_svn_revision.setter\n def tag_svn_revision(self, value):\n pass\n ####################################\n\n def save_version_info(self, filename):\n \"\"\"\n Materialize the value of date into the\n build tag. Install build keys in a deterministic order\n to avoid arbitrary reordering on subsequent builds.\n \"\"\"\n # python 2.6 compatibility\n odict = getattr(collections, 'OrderedDict', dict)\n egg_info = odict()\n # follow the order these keys would have been added\n # when PYTHONHASHSEED=0\n egg_info['tag_build'] = self.tags()\n egg_info['tag_date'] = 0\n edit_config(filename, dict(egg_info=egg_info))\n\n def finalize_options(self):\n self.egg_name = safe_name(self.distribution.get_name())\n self.vtags = self.tags()\n self.egg_version = self.tagged_version()\n\n parsed_version = parse_version(self.egg_version)\n\n try:\n is_version = isinstance(parsed_version, packaging.version.Version)\n spec = (\n \"%s==%s\" if is_version else \"%s===%s\"\n )\n list(\n parse_requirements(spec % (self.egg_name, self.egg_version))\n )\n except ValueError:\n raise distutils.errors.DistutilsOptionError(\n \"Invalid distribution name or version syntax: %s-%s\" %\n (self.egg_name, self.egg_version)\n )\n\n if self.egg_base is None:\n dirs = self.distribution.package_dir\n self.egg_base = (dirs or {}).get('', os.curdir)\n\n self.ensure_dirname('egg_base')\n self.egg_info = to_filename(self.egg_name) + '.egg-info'\n if self.egg_base != os.curdir:\n self.egg_info = os.path.join(self.egg_base, self.egg_info)\n if '-' in self.egg_name:\n self.check_broken_egg_info()\n\n # Set package version for the benefit of dumber commands\n # (e.g. sdist, bdist_wininst, etc.)\n #\n self.distribution.metadata.version = self.egg_version\n\n # If we bootstrapped around the lack of a PKG-INFO, as might be the\n # case in a fresh checkout, make sure that any special tags get added\n # to the version info\n #\n pd = self.distribution._patched_dist\n if pd is not None and pd.key == self.egg_name.lower():\n pd._version = self.egg_version\n pd._parsed_version = parse_version(self.egg_version)\n self.distribution._patched_dist = None\n\n def write_or_delete_file(self, what, filename, data, force=False):\n \"\"\"Write `data` to `filename` or delete if empty\n\n If `data` is non-empty, this routine is the same as ``write_file()``.\n If `data` is empty but not ``None``, this is the same as calling\n ``delete_file(filename)`. 
If `data` is ``None``, then this is a no-op\n unless `filename` exists, in which case a warning is issued about the\n orphaned file (if `force` is false), or deleted (if `force` is true).\n \"\"\"\n if data:\n self.write_file(what, filename, data)\n elif os.path.exists(filename):\n if data is None and not force:\n log.warn(\n \"%s not set in setup(), but %s exists\", what, filename\n )\n return\n else:\n self.delete_file(filename)\n\n def write_file(self, what, filename, data):\n \"\"\"Write `data` to `filename` (if not a dry run) after announcing it\n\n `what` is used in a log message to identify what is being written\n to the file.\n \"\"\"\n log.info(\"writing %s to %s\", what, filename)\n if six.PY3:\n data = data.encode(\"utf-8\")\n if not self.dry_run:\n f = open(filename, 'wb')\n f.write(data)\n f.close()\n\n def delete_file(self, filename):\n \"\"\"Delete `filename` (if not a dry run) after announcing it\"\"\"\n log.info(\"deleting %s\", filename)\n if not self.dry_run:\n os.unlink(filename)\n\n def tagged_version(self):\n version = self.distribution.get_version()\n # egg_info may be called more than once for a distribution,\n # in which case the version string already contains all tags.\n if self.vtags and version.endswith(self.vtags):\n return safe_version(version)\n return safe_version(version + self.vtags)\n\n def run(self):\n self.mkpath(self.egg_info)\n installer = self.distribution.fetch_build_egg\n for ep in iter_entry_points('egg_info.writers'):\n ep.require(installer=installer)\n writer = ep.resolve()\n writer(self, ep.name, os.path.join(self.egg_info, ep.name))\n\n # Get rid of native_libs.txt if it was put there by older bdist_egg\n nl = os.path.join(self.egg_info, \"native_libs.txt\")\n if os.path.exists(nl):\n self.delete_file(nl)\n\n self.find_sources()\n\n def tags(self):\n version = ''\n if self.tag_build:\n version += self.tag_build\n if self.tag_date:\n version += time.strftime(\"-%Y%m%d\")\n return version\n\n def find_sources(self):\n \"\"\"Generate SOURCES.txt manifest file\"\"\"\n manifest_filename = os.path.join(self.egg_info, \"SOURCES.txt\")\n mm = manifest_maker(self.distribution)\n mm.manifest = manifest_filename\n mm.run()\n self.filelist = mm.filelist\n\n def check_broken_egg_info(self):\n bei = self.egg_name + '.egg-info'\n if self.egg_base != os.curdir:\n bei = os.path.join(self.egg_base, bei)\n if os.path.exists(bei):\n log.warn(\n \"-\" * 78 + '\\n'\n \"Note: Your current .egg-info directory has a '-' in its name;\"\n '\\nthis will not work correctly with \"setup.py develop\".\\n\\n'\n 'Please rename %s to %s to correct this problem.\\n' + '-' * 78,\n bei, self.egg_info\n )\n self.broken_egg_info = self.egg_info\n self.egg_info = bei # make it work for now\n\n\nclass FileList(_FileList):\n # Implementations of the various MANIFEST.in commands\n\n def process_template_line(self, line):\n # Parse the line: split it up, make sure the right number of words\n # is there, and return the relevant words. 'action' is always\n # defined: it's the first word of the line. 
Which of the other\n # three are defined depends on the action; it'll be either\n # patterns, (dir and patterns), or (dir_pattern).\n (action, patterns, dir, dir_pattern) = self._parse_template_line(line)\n\n # OK, now we know that the action is valid and we have the\n # right number of words on the line for that action -- so we\n # can proceed with minimal error-checking.\n if action == 'include':\n self.debug_print(\"include \" + ' '.join(patterns))\n for pattern in patterns:\n if not self.include(pattern):\n log.warn(\"warning: no files found matching '%s'\", pattern)\n\n elif action == 'exclude':\n self.debug_print(\"exclude \" + ' '.join(patterns))\n for pattern in patterns:\n if not self.exclude(pattern):\n log.warn((\"warning: no previously-included files \"\n \"found matching '%s'\"), pattern)\n\n elif action == 'global-include':\n self.debug_print(\"global-include \" + ' '.join(patterns))\n for pattern in patterns:\n if not self.global_include(pattern):\n log.warn((\"warning: no files found matching '%s' \"\n \"anywhere in distribution\"), pattern)\n\n elif action == 'global-exclude':\n self.debug_print(\"global-exclude \" + ' '.join(patterns))\n for pattern in patterns:\n if not self.global_exclude(pattern):\n log.warn((\"warning: no previously-included files matching \"\n \"'%s' found anywhere in distribution\"),\n pattern)\n\n elif action == 'recursive-include':\n self.debug_print(\"recursive-include %s %s\" %\n (dir, ' '.join(patterns)))\n for pattern in patterns:\n if not self.recursive_include(dir, pattern):\n log.warn((\"warning: no files found matching '%s' \"\n \"under directory '%s'\"),\n pattern, dir)\n\n elif action == 'recursive-exclude':\n self.debug_print(\"recursive-exclude %s %s\" %\n (dir, ' '.join(patterns)))\n for pattern in patterns:\n if not self.recursive_exclude(dir, pattern):\n log.warn((\"warning: no previously-included files matching \"\n \"'%s' found under directory '%s'\"),\n pattern, dir)\n\n elif action == 'graft':\n self.debug_print(\"graft \" + dir_pattern)\n if not self.graft(dir_pattern):\n log.warn(\"warning: no directories found matching '%s'\",\n dir_pattern)\n\n elif action == 'prune':\n self.debug_print(\"prune \" + dir_pattern)\n if not self.prune(dir_pattern):\n log.warn((\"no previously-included directories found \"\n \"matching '%s'\"), dir_pattern)\n\n else:\n raise DistutilsInternalError(\n \"this cannot happen: invalid action '%s'\" % action)\n\n def _remove_files(self, predicate):\n \"\"\"\n Remove all files from the file list that match the predicate.\n Return True if any matching files were removed\n \"\"\"\n found = False\n for i in range(len(self.files) - 1, -1, -1):\n if predicate(self.files[i]):\n self.debug_print(\" removing \" + self.files[i])\n del self.files[i]\n found = True\n return found\n\n def include(self, pattern):\n \"\"\"Include files that match 'pattern'.\"\"\"\n found = [f for f in glob(pattern) if not os.path.isdir(f)]\n self.extend(found)\n return bool(found)\n\n def exclude(self, pattern):\n \"\"\"Exclude files that match 'pattern'.\"\"\"\n match = translate_pattern(pattern)\n return self._remove_files(match.match)\n\n def recursive_include(self, dir, pattern):\n \"\"\"\n Include all files anywhere in 'dir/' that match the pattern.\n \"\"\"\n full_pattern = os.path.join(dir, '**', pattern)\n found = [f for f in glob(full_pattern, recursive=True)\n if not os.path.isdir(f)]\n self.extend(found)\n return bool(found)\n\n def recursive_exclude(self, dir, pattern):\n \"\"\"\n Exclude any file anywhere in 'dir/' that 
match the pattern.\n \"\"\"\n match = translate_pattern(os.path.join(dir, '**', pattern))\n return self._remove_files(match.match)\n\n def graft(self, dir):\n \"\"\"Include all files from 'dir/'.\"\"\"\n found = distutils.filelist.findall(dir)\n self.extend(found)\n return bool(found)\n\n def prune(self, dir):\n \"\"\"Filter out files from 'dir/'.\"\"\"\n match = translate_pattern(os.path.join(dir, '**'))\n return self._remove_files(match.match)\n\n def global_include(self, pattern):\n \"\"\"\n Include all files anywhere in the current directory that match the\n pattern. This is very inefficient on large file trees.\n \"\"\"\n if self.allfiles is None:\n self.findall()\n match = translate_pattern(os.path.join('**', pattern))\n found = [f for f in self.allfiles if match.match(f)]\n self.extend(found)\n return bool(found)\n\n def global_exclude(self, pattern):\n \"\"\"\n Exclude all files anywhere that match the pattern.\n \"\"\"\n match = translate_pattern(os.path.join('**', pattern))\n return self._remove_files(match.match)\n\n def append(self, item):\n if item.endswith('\\r'): # Fix older sdists built on Windows\n item = item[:-1]\n path = convert_path(item)\n\n if self._safe_path(path):\n self.files.append(path)\n\n def extend(self, paths):\n self.files.extend(filter(self._safe_path, paths))\n\n def _repair(self):\n \"\"\"\n Replace self.files with only safe paths\n\n Because some owners of FileList manipulate the underlying\n ``files`` attribute directly, this method must be called to\n repair those paths.\n \"\"\"\n self.files = list(filter(self._safe_path, self.files))\n\n def _safe_path(self, path):\n enc_warn = \"'%s' not %s encodable -- skipping\"\n\n # To avoid accidental trans-codings errors, first to unicode\n u_path = unicode_utils.filesys_decode(path)\n if u_path is None:\n log.warn(\"'%s' in unexpected encoding -- skipping\" % path)\n return False\n\n # Must ensure utf-8 encodability\n utf8_path = unicode_utils.try_encode(u_path, \"utf-8\")\n if utf8_path is None:\n log.warn(enc_warn, path, 'utf-8')\n return False\n\n try:\n # accept is either way checks out\n if os.path.exists(u_path) or os.path.exists(utf8_path):\n return True\n # this will catch any encode errors decoding u_path\n except UnicodeEncodeError:\n log.warn(enc_warn, path, sys.getfilesystemencoding())\n\n\nclass manifest_maker(sdist):\n template = \"MANIFEST.in\"\n\n def initialize_options(self):\n self.use_defaults = 1\n self.prune = 1\n self.manifest_only = 1\n self.force_manifest = 1\n\n def finalize_options(self):\n pass\n\n def run(self):\n self.filelist = FileList()\n if not os.path.exists(self.manifest):\n self.write_manifest() # it must exist so it'll get in the list\n self.add_defaults()\n if os.path.exists(self.template):\n self.read_template()\n self.prune_file_list()\n self.filelist.sort()\n self.filelist.remove_duplicates()\n self.write_manifest()\n\n def _manifest_normalize(self, path):\n path = unicode_utils.filesys_decode(path)\n return path.replace(os.sep, '/')\n\n def write_manifest(self):\n \"\"\"\n Write the file list in 'self.filelist' to the manifest file\n named by 'self.manifest'.\n \"\"\"\n self.filelist._repair()\n\n # Now _repairs should encodability, but not unicode\n files = [self._manifest_normalize(f) for f in self.filelist.files]\n msg = \"writing manifest file '%s'\" % self.manifest\n self.execute(write_file, (self.manifest, files), msg)\n\n def warn(self, msg):\n if not self._should_suppress_warning(msg):\n sdist.warn(self, msg)\n\n @staticmethod\n def 
_should_suppress_warning(msg):\n \"\"\"\n suppress missing-file warnings from sdist\n \"\"\"\n return re.match(r\"standard file .*not found\", msg)\n\n def add_defaults(self):\n sdist.add_defaults(self)\n self.filelist.append(self.template)\n self.filelist.append(self.manifest)\n rcfiles = list(walk_revctrl())\n if rcfiles:\n self.filelist.extend(rcfiles)\n elif os.path.exists(self.manifest):\n self.read_manifest()\n ei_cmd = self.get_finalized_command('egg_info')\n self.filelist.graft(ei_cmd.egg_info)\n\n def prune_file_list(self):\n build = self.get_finalized_command('build')\n base_dir = self.distribution.get_fullname()\n self.filelist.prune(build.build_base)\n self.filelist.prune(base_dir)\n sep = re.escape(os.sep)\n self.filelist.exclude_pattern(r'(^|' + sep + r')(RCS|CVS|\\.svn)' + sep,\n is_regex=1)\n\n\ndef write_file(filename, contents):\n \"\"\"Create a file with the specified name and write 'contents' (a\n sequence of strings without line terminators) to it.\n \"\"\"\n contents = \"\\n\".join(contents)\n\n # assuming the contents has been vetted for utf-8 encoding\n contents = contents.encode(\"utf-8\")\n\n with open(filename, \"wb\") as f: # always write POSIX-style manifest\n f.write(contents)\n\n\ndef write_pkg_info(cmd, basename, filename):\n log.info(\"writing %s\", filename)\n if not cmd.dry_run:\n metadata = cmd.distribution.metadata\n metadata.version, oldver = cmd.egg_version, metadata.version\n metadata.name, oldname = cmd.egg_name, metadata.name\n try:\n # write unescaped data to PKG-INFO, so older pkg_resources\n # can still parse it\n metadata.write_pkg_info(cmd.egg_info)\n finally:\n metadata.name, metadata.version = oldname, oldver\n\n safe = getattr(cmd.distribution, 'zip_safe', None)\n\n bdist_egg.write_safety_flag(cmd.egg_info, safe)\n\n\ndef warn_depends_obsolete(cmd, basename, filename):\n if os.path.exists(filename):\n log.warn(\n \"WARNING: 'depends.txt' is not used by setuptools 0.6!\\n\"\n \"Use the install_requires/extras_require setup() args instead.\"\n )\n\n\ndef _write_requirements(stream, reqs):\n lines = yield_lines(reqs or ())\n append_cr = lambda line: line + '\\n'\n lines = map(append_cr, lines)\n stream.writelines(lines)\n\n\ndef write_requirements(cmd, basename, filename):\n dist = cmd.distribution\n data = six.StringIO()\n _write_requirements(data, dist.install_requires)\n extras_require = dist.extras_require or {}\n for extra in sorted(extras_require):\n data.write('\\n[{extra}]\\n'.format(**vars()))\n _write_requirements(data, extras_require[extra])\n cmd.write_or_delete_file(\"requirements\", filename, data.getvalue())\n\n\ndef write_setup_requirements(cmd, basename, filename):\n data = StringIO()\n _write_requirements(data, cmd.distribution.setup_requires)\n cmd.write_or_delete_file(\"setup-requirements\", filename, data.getvalue())\n\n\ndef write_toplevel_names(cmd, basename, filename):\n pkgs = dict.fromkeys(\n [\n k.split('.', 1)[0]\n for k in cmd.distribution.iter_distribution_names()\n ]\n )\n cmd.write_file(\"top-level names\", filename, '\\n'.join(sorted(pkgs)) + '\\n')\n\n\ndef overwrite_arg(cmd, basename, filename):\n write_arg(cmd, basename, filename, True)\n\n\ndef write_arg(cmd, basename, filename, force=False):\n argname = os.path.splitext(basename)[0]\n value = getattr(cmd.distribution, argname, None)\n if value is not None:\n value = '\\n'.join(value) + '\\n'\n cmd.write_or_delete_file(argname, filename, value, force)\n\n\ndef write_entries(cmd, basename, filename):\n ep = cmd.distribution.entry_points\n\n if isinstance(ep, 
six.string_types) or ep is None:\n data = ep\n elif ep is not None:\n data = []\n for section, contents in sorted(ep.items()):\n if not isinstance(contents, six.string_types):\n contents = EntryPoint.parse_group(section, contents)\n contents = '\\n'.join(sorted(map(str, contents.values())))\n data.append('[%s]\\n%s\\n\\n' % (section, contents))\n data = ''.join(data)\n\n cmd.write_or_delete_file('entry points', filename, data, True)\n\n\ndef get_pkg_info_revision():\n \"\"\"\n Get a -r### off of PKG-INFO Version in case this is an sdist of\n a subversion revision.\n \"\"\"\n warnings.warn(\"get_pkg_info_revision is deprecated.\", DeprecationWarning)\n if os.path.exists('PKG-INFO'):\n with io.open('PKG-INFO') as f:\n for line in f:\n match = re.match(r\"Version:.*-r(\\d+)\\s*$\", line)\n if match:\n return int(match.group(1))\n return 0\n",
"path": "setuptools/command/egg_info.py"
}
] | [
{
"content": "\"\"\"setuptools.command.egg_info\n\nCreate a distribution's .egg-info directory and contents\"\"\"\n\nfrom distutils.filelist import FileList as _FileList\nfrom distutils.errors import DistutilsInternalError\nfrom distutils.util import convert_path\nfrom distutils import log\nimport distutils.errors\nimport distutils.filelist\nimport os\nimport re\nimport sys\nimport io\nimport warnings\nimport time\nimport collections\n\nimport six\nfrom six.moves import map\n\nfrom setuptools import Command\nfrom setuptools.command.sdist import sdist\nfrom setuptools.command.sdist import walk_revctrl\nfrom setuptools.command.setopt import edit_config\nfrom setuptools.command import bdist_egg\nfrom pkg_resources import (\n parse_requirements, safe_name, parse_version,\n safe_version, yield_lines, EntryPoint, iter_entry_points, to_filename)\nimport setuptools.unicode_utils as unicode_utils\nfrom setuptools.glob import glob\n\nimport packaging\n\n\ndef translate_pattern(glob):\n \"\"\"\n Translate a file path glob like '*.txt' in to a regular expression.\n This differs from fnmatch.translate which allows wildcards to match\n directory separators. It also knows about '**/' which matches any number of\n directories.\n \"\"\"\n pat = ''\n\n # This will split on '/' within [character classes]. This is deliberate.\n chunks = glob.split(os.path.sep)\n\n sep = re.escape(os.sep)\n valid_char = '[^%s]' % (sep,)\n\n for c, chunk in enumerate(chunks):\n last_chunk = c == len(chunks) - 1\n\n # Chunks that are a literal ** are globstars. They match anything.\n if chunk == '**':\n if last_chunk:\n # Match anything if this is the last component\n pat += '.*'\n else:\n # Match '(name/)*'\n pat += '(?:%s+%s)*' % (valid_char, sep)\n continue # Break here as the whole path component has been handled\n\n # Find any special characters in the remainder\n i = 0\n chunk_len = len(chunk)\n while i < chunk_len:\n char = chunk[i]\n if char == '*':\n # Match any number of name characters\n pat += valid_char + '*'\n elif char == '?':\n # Match a name character\n pat += valid_char\n elif char == '[':\n # Character class\n inner_i = i + 1\n # Skip initial !/] chars\n if inner_i < chunk_len and chunk[inner_i] == '!':\n inner_i = inner_i + 1\n if inner_i < chunk_len and chunk[inner_i] == ']':\n inner_i = inner_i + 1\n\n # Loop till the closing ] is found\n while inner_i < chunk_len and chunk[inner_i] != ']':\n inner_i = inner_i + 1\n\n if inner_i >= chunk_len:\n # Got to the end of the string without finding a closing ]\n # Do not treat this as a matching group, but as a literal [\n pat += re.escape(char)\n else:\n # Grab the insides of the [brackets]\n inner = chunk[i + 1:inner_i]\n char_class = ''\n\n # Class negation\n if inner[0] == '!':\n char_class = '^'\n inner = inner[1:]\n\n char_class += re.escape(inner)\n pat += '[%s]' % (char_class,)\n\n # Skip to the end ]\n i = inner_i\n else:\n pat += re.escape(char)\n i += 1\n\n # Join each chunk with the dir separator\n if not last_chunk:\n pat += sep\n\n return re.compile(pat + r'\\Z(?ms)')\n\n\nclass egg_info(Command):\n description = \"create a distribution's .egg-info directory\"\n\n user_options = [\n ('egg-base=', 'e', \"directory containing .egg-info directories\"\n \" (default: top of the source tree)\"),\n ('tag-date', 'd', \"Add date stamp (e.g. 
20050528) to version number\"),\n ('tag-build=', 'b', \"Specify explicit tag to add to version number\"),\n ('no-date', 'D', \"Don't include date stamp [default]\"),\n ]\n\n boolean_options = ['tag-date']\n negative_opt = {\n 'no-date': 'tag-date',\n }\n\n def initialize_options(self):\n self.egg_name = None\n self.egg_version = None\n self.egg_base = None\n self.egg_info = None\n self.tag_build = None\n self.tag_date = 0\n self.broken_egg_info = False\n self.vtags = None\n\n ####################################\n # allow the 'tag_svn_revision' to be detected and\n # set, supporting sdists built on older Setuptools.\n @property\n def tag_svn_revision(self):\n pass\n\n @tag_svn_revision.setter\n def tag_svn_revision(self, value):\n pass\n ####################################\n\n def save_version_info(self, filename):\n \"\"\"\n Materialize the value of date into the\n build tag. Install build keys in a deterministic order\n to avoid arbitrary reordering on subsequent builds.\n \"\"\"\n # python 2.6 compatibility\n odict = getattr(collections, 'OrderedDict', dict)\n egg_info = odict()\n # follow the order these keys would have been added\n # when PYTHONHASHSEED=0\n egg_info['tag_build'] = self.tags()\n egg_info['tag_date'] = 0\n edit_config(filename, dict(egg_info=egg_info))\n\n def finalize_options(self):\n self.egg_name = safe_name(self.distribution.get_name())\n self.vtags = self.tags()\n self.egg_version = self.tagged_version()\n\n parsed_version = parse_version(self.egg_version)\n\n try:\n is_version = isinstance(parsed_version, packaging.version.Version)\n spec = (\n \"%s==%s\" if is_version else \"%s===%s\"\n )\n list(\n parse_requirements(spec % (self.egg_name, self.egg_version))\n )\n except ValueError:\n raise distutils.errors.DistutilsOptionError(\n \"Invalid distribution name or version syntax: %s-%s\" %\n (self.egg_name, self.egg_version)\n )\n\n if self.egg_base is None:\n dirs = self.distribution.package_dir\n self.egg_base = (dirs or {}).get('', os.curdir)\n\n self.ensure_dirname('egg_base')\n self.egg_info = to_filename(self.egg_name) + '.egg-info'\n if self.egg_base != os.curdir:\n self.egg_info = os.path.join(self.egg_base, self.egg_info)\n if '-' in self.egg_name:\n self.check_broken_egg_info()\n\n # Set package version for the benefit of dumber commands\n # (e.g. sdist, bdist_wininst, etc.)\n #\n self.distribution.metadata.version = self.egg_version\n\n # If we bootstrapped around the lack of a PKG-INFO, as might be the\n # case in a fresh checkout, make sure that any special tags get added\n # to the version info\n #\n pd = self.distribution._patched_dist\n if pd is not None and pd.key == self.egg_name.lower():\n pd._version = self.egg_version\n pd._parsed_version = parse_version(self.egg_version)\n self.distribution._patched_dist = None\n\n def write_or_delete_file(self, what, filename, data, force=False):\n \"\"\"Write `data` to `filename` or delete if empty\n\n If `data` is non-empty, this routine is the same as ``write_file()``.\n If `data` is empty but not ``None``, this is the same as calling\n ``delete_file(filename)`. 
If `data` is ``None``, then this is a no-op\n unless `filename` exists, in which case a warning is issued about the\n orphaned file (if `force` is false), or deleted (if `force` is true).\n \"\"\"\n if data:\n self.write_file(what, filename, data)\n elif os.path.exists(filename):\n if data is None and not force:\n log.warn(\n \"%s not set in setup(), but %s exists\", what, filename\n )\n return\n else:\n self.delete_file(filename)\n\n def write_file(self, what, filename, data):\n \"\"\"Write `data` to `filename` (if not a dry run) after announcing it\n\n `what` is used in a log message to identify what is being written\n to the file.\n \"\"\"\n log.info(\"writing %s to %s\", what, filename)\n if six.PY3:\n data = data.encode(\"utf-8\")\n if not self.dry_run:\n f = open(filename, 'wb')\n f.write(data)\n f.close()\n\n def delete_file(self, filename):\n \"\"\"Delete `filename` (if not a dry run) after announcing it\"\"\"\n log.info(\"deleting %s\", filename)\n if not self.dry_run:\n os.unlink(filename)\n\n def tagged_version(self):\n version = self.distribution.get_version()\n # egg_info may be called more than once for a distribution,\n # in which case the version string already contains all tags.\n if self.vtags and version.endswith(self.vtags):\n return safe_version(version)\n return safe_version(version + self.vtags)\n\n def run(self):\n self.mkpath(self.egg_info)\n installer = self.distribution.fetch_build_egg\n for ep in iter_entry_points('egg_info.writers'):\n ep.require(installer=installer)\n writer = ep.resolve()\n writer(self, ep.name, os.path.join(self.egg_info, ep.name))\n\n # Get rid of native_libs.txt if it was put there by older bdist_egg\n nl = os.path.join(self.egg_info, \"native_libs.txt\")\n if os.path.exists(nl):\n self.delete_file(nl)\n\n self.find_sources()\n\n def tags(self):\n version = ''\n if self.tag_build:\n version += self.tag_build\n if self.tag_date:\n version += time.strftime(\"-%Y%m%d\")\n return version\n\n def find_sources(self):\n \"\"\"Generate SOURCES.txt manifest file\"\"\"\n manifest_filename = os.path.join(self.egg_info, \"SOURCES.txt\")\n mm = manifest_maker(self.distribution)\n mm.manifest = manifest_filename\n mm.run()\n self.filelist = mm.filelist\n\n def check_broken_egg_info(self):\n bei = self.egg_name + '.egg-info'\n if self.egg_base != os.curdir:\n bei = os.path.join(self.egg_base, bei)\n if os.path.exists(bei):\n log.warn(\n \"-\" * 78 + '\\n'\n \"Note: Your current .egg-info directory has a '-' in its name;\"\n '\\nthis will not work correctly with \"setup.py develop\".\\n\\n'\n 'Please rename %s to %s to correct this problem.\\n' + '-' * 78,\n bei, self.egg_info\n )\n self.broken_egg_info = self.egg_info\n self.egg_info = bei # make it work for now\n\n\nclass FileList(_FileList):\n # Implementations of the various MANIFEST.in commands\n\n def process_template_line(self, line):\n # Parse the line: split it up, make sure the right number of words\n # is there, and return the relevant words. 'action' is always\n # defined: it's the first word of the line. 
Which of the other\n # three are defined depends on the action; it'll be either\n # patterns, (dir and patterns), or (dir_pattern).\n (action, patterns, dir, dir_pattern) = self._parse_template_line(line)\n\n # OK, now we know that the action is valid and we have the\n # right number of words on the line for that action -- so we\n # can proceed with minimal error-checking.\n if action == 'include':\n self.debug_print(\"include \" + ' '.join(patterns))\n for pattern in patterns:\n if not self.include(pattern):\n log.warn(\"warning: no files found matching '%s'\", pattern)\n\n elif action == 'exclude':\n self.debug_print(\"exclude \" + ' '.join(patterns))\n for pattern in patterns:\n if not self.exclude(pattern):\n log.warn((\"warning: no previously-included files \"\n \"found matching '%s'\"), pattern)\n\n elif action == 'global-include':\n self.debug_print(\"global-include \" + ' '.join(patterns))\n for pattern in patterns:\n if not self.global_include(pattern):\n log.warn((\"warning: no files found matching '%s' \"\n \"anywhere in distribution\"), pattern)\n\n elif action == 'global-exclude':\n self.debug_print(\"global-exclude \" + ' '.join(patterns))\n for pattern in patterns:\n if not self.global_exclude(pattern):\n log.warn((\"warning: no previously-included files matching \"\n \"'%s' found anywhere in distribution\"),\n pattern)\n\n elif action == 'recursive-include':\n self.debug_print(\"recursive-include %s %s\" %\n (dir, ' '.join(patterns)))\n for pattern in patterns:\n if not self.recursive_include(dir, pattern):\n log.warn((\"warning: no files found matching '%s' \"\n \"under directory '%s'\"),\n pattern, dir)\n\n elif action == 'recursive-exclude':\n self.debug_print(\"recursive-exclude %s %s\" %\n (dir, ' '.join(patterns)))\n for pattern in patterns:\n if not self.recursive_exclude(dir, pattern):\n log.warn((\"warning: no previously-included files matching \"\n \"'%s' found under directory '%s'\"),\n pattern, dir)\n\n elif action == 'graft':\n self.debug_print(\"graft \" + dir_pattern)\n if not self.graft(dir_pattern):\n log.warn(\"warning: no directories found matching '%s'\",\n dir_pattern)\n\n elif action == 'prune':\n self.debug_print(\"prune \" + dir_pattern)\n if not self.prune(dir_pattern):\n log.warn((\"no previously-included directories found \"\n \"matching '%s'\"), dir_pattern)\n\n else:\n raise DistutilsInternalError(\n \"this cannot happen: invalid action '%s'\" % action)\n\n def _remove_files(self, predicate):\n \"\"\"\n Remove all files from the file list that match the predicate.\n Return True if any matching files were removed\n \"\"\"\n found = False\n for i in range(len(self.files) - 1, -1, -1):\n if predicate(self.files[i]):\n self.debug_print(\" removing \" + self.files[i])\n del self.files[i]\n found = True\n return found\n\n def include(self, pattern):\n \"\"\"Include files that match 'pattern'.\"\"\"\n found = [f for f in glob(pattern) if not os.path.isdir(f)]\n self.extend(found)\n return bool(found)\n\n def exclude(self, pattern):\n \"\"\"Exclude files that match 'pattern'.\"\"\"\n match = translate_pattern(pattern)\n return self._remove_files(match.match)\n\n def recursive_include(self, dir, pattern):\n \"\"\"\n Include all files anywhere in 'dir/' that match the pattern.\n \"\"\"\n full_pattern = os.path.join(dir, '**', pattern)\n found = [f for f in glob(full_pattern, recursive=True)\n if not os.path.isdir(f)]\n self.extend(found)\n return bool(found)\n\n def recursive_exclude(self, dir, pattern):\n \"\"\"\n Exclude any file anywhere in 'dir/' that 
match the pattern.\n \"\"\"\n match = translate_pattern(os.path.join(dir, '**', pattern))\n return self._remove_files(match.match)\n\n def graft(self, dir):\n \"\"\"Include all files from 'dir/'.\"\"\"\n found = []\n for match_dir in glob(dir):\n found += distutils.filelist.findall(match_dir)\n self.extend(found)\n return bool(found)\n\n def prune(self, dir):\n \"\"\"Filter out files from 'dir/'.\"\"\"\n match = translate_pattern(os.path.join(dir, '**'))\n return self._remove_files(match.match)\n\n def global_include(self, pattern):\n \"\"\"\n Include all files anywhere in the current directory that match the\n pattern. This is very inefficient on large file trees.\n \"\"\"\n if self.allfiles is None:\n self.findall()\n match = translate_pattern(os.path.join('**', pattern))\n found = [f for f in self.allfiles if match.match(f)]\n self.extend(found)\n return bool(found)\n\n def global_exclude(self, pattern):\n \"\"\"\n Exclude all files anywhere that match the pattern.\n \"\"\"\n match = translate_pattern(os.path.join('**', pattern))\n return self._remove_files(match.match)\n\n def append(self, item):\n if item.endswith('\\r'): # Fix older sdists built on Windows\n item = item[:-1]\n path = convert_path(item)\n\n if self._safe_path(path):\n self.files.append(path)\n\n def extend(self, paths):\n self.files.extend(filter(self._safe_path, paths))\n\n def _repair(self):\n \"\"\"\n Replace self.files with only safe paths\n\n Because some owners of FileList manipulate the underlying\n ``files`` attribute directly, this method must be called to\n repair those paths.\n \"\"\"\n self.files = list(filter(self._safe_path, self.files))\n\n def _safe_path(self, path):\n enc_warn = \"'%s' not %s encodable -- skipping\"\n\n # To avoid accidental trans-codings errors, first to unicode\n u_path = unicode_utils.filesys_decode(path)\n if u_path is None:\n log.warn(\"'%s' in unexpected encoding -- skipping\" % path)\n return False\n\n # Must ensure utf-8 encodability\n utf8_path = unicode_utils.try_encode(u_path, \"utf-8\")\n if utf8_path is None:\n log.warn(enc_warn, path, 'utf-8')\n return False\n\n try:\n # accept is either way checks out\n if os.path.exists(u_path) or os.path.exists(utf8_path):\n return True\n # this will catch any encode errors decoding u_path\n except UnicodeEncodeError:\n log.warn(enc_warn, path, sys.getfilesystemencoding())\n\n\nclass manifest_maker(sdist):\n template = \"MANIFEST.in\"\n\n def initialize_options(self):\n self.use_defaults = 1\n self.prune = 1\n self.manifest_only = 1\n self.force_manifest = 1\n\n def finalize_options(self):\n pass\n\n def run(self):\n self.filelist = FileList()\n if not os.path.exists(self.manifest):\n self.write_manifest() # it must exist so it'll get in the list\n self.add_defaults()\n if os.path.exists(self.template):\n self.read_template()\n self.prune_file_list()\n self.filelist.sort()\n self.filelist.remove_duplicates()\n self.write_manifest()\n\n def _manifest_normalize(self, path):\n path = unicode_utils.filesys_decode(path)\n return path.replace(os.sep, '/')\n\n def write_manifest(self):\n \"\"\"\n Write the file list in 'self.filelist' to the manifest file\n named by 'self.manifest'.\n \"\"\"\n self.filelist._repair()\n\n # Now _repairs should encodability, but not unicode\n files = [self._manifest_normalize(f) for f in self.filelist.files]\n msg = \"writing manifest file '%s'\" % self.manifest\n self.execute(write_file, (self.manifest, files), msg)\n\n def warn(self, msg):\n if not self._should_suppress_warning(msg):\n sdist.warn(self, 
msg)\n\n @staticmethod\n def _should_suppress_warning(msg):\n \"\"\"\n suppress missing-file warnings from sdist\n \"\"\"\n return re.match(r\"standard file .*not found\", msg)\n\n def add_defaults(self):\n sdist.add_defaults(self)\n self.filelist.append(self.template)\n self.filelist.append(self.manifest)\n rcfiles = list(walk_revctrl())\n if rcfiles:\n self.filelist.extend(rcfiles)\n elif os.path.exists(self.manifest):\n self.read_manifest()\n ei_cmd = self.get_finalized_command('egg_info')\n self.filelist.graft(ei_cmd.egg_info)\n\n def prune_file_list(self):\n build = self.get_finalized_command('build')\n base_dir = self.distribution.get_fullname()\n self.filelist.prune(build.build_base)\n self.filelist.prune(base_dir)\n sep = re.escape(os.sep)\n self.filelist.exclude_pattern(r'(^|' + sep + r')(RCS|CVS|\\.svn)' + sep,\n is_regex=1)\n\n\ndef write_file(filename, contents):\n \"\"\"Create a file with the specified name and write 'contents' (a\n sequence of strings without line terminators) to it.\n \"\"\"\n contents = \"\\n\".join(contents)\n\n # assuming the contents has been vetted for utf-8 encoding\n contents = contents.encode(\"utf-8\")\n\n with open(filename, \"wb\") as f: # always write POSIX-style manifest\n f.write(contents)\n\n\ndef write_pkg_info(cmd, basename, filename):\n log.info(\"writing %s\", filename)\n if not cmd.dry_run:\n metadata = cmd.distribution.metadata\n metadata.version, oldver = cmd.egg_version, metadata.version\n metadata.name, oldname = cmd.egg_name, metadata.name\n try:\n # write unescaped data to PKG-INFO, so older pkg_resources\n # can still parse it\n metadata.write_pkg_info(cmd.egg_info)\n finally:\n metadata.name, metadata.version = oldname, oldver\n\n safe = getattr(cmd.distribution, 'zip_safe', None)\n\n bdist_egg.write_safety_flag(cmd.egg_info, safe)\n\n\ndef warn_depends_obsolete(cmd, basename, filename):\n if os.path.exists(filename):\n log.warn(\n \"WARNING: 'depends.txt' is not used by setuptools 0.6!\\n\"\n \"Use the install_requires/extras_require setup() args instead.\"\n )\n\n\ndef _write_requirements(stream, reqs):\n lines = yield_lines(reqs or ())\n append_cr = lambda line: line + '\\n'\n lines = map(append_cr, lines)\n stream.writelines(lines)\n\n\ndef write_requirements(cmd, basename, filename):\n dist = cmd.distribution\n data = six.StringIO()\n _write_requirements(data, dist.install_requires)\n extras_require = dist.extras_require or {}\n for extra in sorted(extras_require):\n data.write('\\n[{extra}]\\n'.format(**vars()))\n _write_requirements(data, extras_require[extra])\n cmd.write_or_delete_file(\"requirements\", filename, data.getvalue())\n\n\ndef write_setup_requirements(cmd, basename, filename):\n data = StringIO()\n _write_requirements(data, cmd.distribution.setup_requires)\n cmd.write_or_delete_file(\"setup-requirements\", filename, data.getvalue())\n\n\ndef write_toplevel_names(cmd, basename, filename):\n pkgs = dict.fromkeys(\n [\n k.split('.', 1)[0]\n for k in cmd.distribution.iter_distribution_names()\n ]\n )\n cmd.write_file(\"top-level names\", filename, '\\n'.join(sorted(pkgs)) + '\\n')\n\n\ndef overwrite_arg(cmd, basename, filename):\n write_arg(cmd, basename, filename, True)\n\n\ndef write_arg(cmd, basename, filename, force=False):\n argname = os.path.splitext(basename)[0]\n value = getattr(cmd.distribution, argname, None)\n if value is not None:\n value = '\\n'.join(value) + '\\n'\n cmd.write_or_delete_file(argname, filename, value, force)\n\n\ndef write_entries(cmd, basename, filename):\n ep = 
cmd.distribution.entry_points\n\n if isinstance(ep, six.string_types) or ep is None:\n data = ep\n elif ep is not None:\n data = []\n for section, contents in sorted(ep.items()):\n if not isinstance(contents, six.string_types):\n contents = EntryPoint.parse_group(section, contents)\n contents = '\\n'.join(sorted(map(str, contents.values())))\n data.append('[%s]\\n%s\\n\\n' % (section, contents))\n data = ''.join(data)\n\n cmd.write_or_delete_file('entry points', filename, data, True)\n\n\ndef get_pkg_info_revision():\n \"\"\"\n Get a -r### off of PKG-INFO Version in case this is an sdist of\n a subversion revision.\n \"\"\"\n warnings.warn(\"get_pkg_info_revision is deprecated.\", DeprecationWarning)\n if os.path.exists('PKG-INFO'):\n with io.open('PKG-INFO') as f:\n for line in f:\n match = re.match(r\"Version:.*-r(\\d+)\\s*$\", line)\n if match:\n return int(match.group(1))\n return 0\n",
"path": "setuptools/command/egg_info.py"
}
] | diff --git a/setuptools/command/egg_info.py b/setuptools/command/egg_info.py
index 5ab54dc70f..62bf00aaa9 100755
--- a/setuptools/command/egg_info.py
+++ b/setuptools/command/egg_info.py
@@ -429,7 +429,9 @@ def recursive_exclude(self, dir, pattern):
def graft(self, dir):
"""Include all files from 'dir/'."""
- found = distutils.filelist.findall(dir)
+ found = []
+ for match_dir in glob(dir):
+ found += distutils.filelist.findall(match_dir)
self.extend(found)
return bool(found)
diff --git a/setuptools/tests/test_manifest.py b/setuptools/tests/test_manifest.py
index cf39346a18..3b34c88813 100644
--- a/setuptools/tests/test_manifest.py
+++ b/setuptools/tests/test_manifest.py
@@ -206,6 +206,15 @@ def test_graft(self):
l('app/static/app.css'), l('app/static/app.css.map')])
assert files == self.get_files()
+ def test_graft_glob_syntax(self):
+ """Include the whole app/static/ directory."""
+ l = make_local_path
+ self.make_manifest("graft */static")
+ files = default_files | set([
+ l('app/static/app.js'), l('app/static/app.js.map'),
+ l('app/static/app.css'), l('app/static/app.css.map')])
+ assert files == self.get_files()
+
def test_graft_global_exclude(self):
"""Exclude all *.map files in the project."""
l = make_local_path
|
microsoft__Qcodes-940 | error when saving to drive other than current path
This is due to Windows' handling of drive letters. A minimal example:
``` python
import qcodes,os
datadir = r'd:\Temp'
qcodes.DataSet.default_io = qcodes.DiskIO(datadir)
p=qcodes.Parameter('p', set_cmd=None)
q=qcodes.Parameter('q', set_cmd=None)
ds=qcodes.Loop(p[0:10:1]).each(q).run() # fine
qcodes.DataSet.default_io = qcodes.DiskIO(r'c:\Temp')
ds=qcodes.Loop(p[0:10:1]).each(p).run() # error
```
This generates the error `ValueError: path is on mount 'd:', start on mount 'c:'`
Also see https://bugs.python.org/issue7195
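The error itself comes from the standard library: `os.path.relpath` (used by `DiskIO.to_location` below) cannot express a relative path across two drive letters. A minimal sketch of that behaviour, using `ntpath` explicitly so it runs the same way on any OS (the concrete paths are illustrative assumptions):

```python
# Minimal sketch of the stdlib behaviour behind the error above.
import ntpath  # Windows path semantics, importable on any platform

try:
    print(ntpath.relpath(r"d:\Temp\data.dat", start=r"c:\Temp"))
except ValueError as exc:
    print(exc)  # -> path is on mount 'd:', start on mount 'c:'
```

The fix below replaces that `relpath` call in `to_location`, so the two mounts are never compared.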
| [
{
"content": "\"\"\"\nIO managers for QCodes.\n\nIO managers wrap whatever physical storage layer the user wants to use\nin an interface mimicking the built-in <open> context manager, with\nsome restrictions to minimize the overhead in creating new IO managers.\n\nThe main thing these managers need to implement is the open context manager:\n\n- Only the context manager needs to be implemented, not separate\n open function and close methods.\n\n- open takes the standard parameters:\n\n - filename: (string)\n - mode: (string) only 'r' (read), 'w' (write), and 'a' (append) are\n expected to be implemented. As with normal file objects, the only\n difference between write and append is that write empties the file\n before adding new data, and append leaves the existing contents in\n place but starts writing at the end.\n - encoding: If a special output encoding is desired. i.e. 'utf8\n\n- the file-like object returned should implement a minimal set of operations.\n\n In read mode:\n - read([size]): read to the end or at most size bytes into a string\n - readline([size]): read until a newline or up to size bytes, into a string\n - iter(): usually return self, but can be any iterator over lines\n - next(): assuming iter() returns self, this yields the next line.\n\n In write or append mode:\n - write(s): add string s to the end of the file.\n - writelines(seq): add a sequence of strings\n\nIO managers should also implement:\n\n- a join method, ala os.path.join(\\*args).\n- a list method, that returns all objects matching location\n- a remove method, ala os.remove(path) except that it will remove directories\n as well as files, since we're allowing \"locations\" to be directories\n or files.\n\"\"\"\n\nfrom contextlib import contextmanager\nimport os\nimport re\nimport shutil\nfrom fnmatch import fnmatch\n\nALLOWED_OPEN_MODES = ('r', 'w', 'a')\n\n\nclass DiskIO:\n\n \"\"\"\n Simple IO object to wrap disk operations with a custom base location.\n\n Also accepts both forward and backward slashes at any point, and\n normalizes both to the OS we are currently on.\n\n Args:\n base_location (str): a path to the root data folder.\n Converted to an absolute path immediately, so even if you supply a\n relative path, later changes to the OS working directory will not\n affect data paths.\n \"\"\"\n\n def __init__(self, base_location):\n if base_location is None:\n self.base_location = None\n else:\n base_location = self._normalize_slashes(base_location)\n self.base_location = os.path.abspath(base_location)\n\n @contextmanager\n def open(self, filename, mode, encoding=None):\n \"\"\"\n Mimic the interface of the built in open context manager.\n\n Args:\n filename (str): path relative to base_location.\n\n mode (str): 'r' (read), 'w' (write), or 'a' (append).\n Other open modes are not supported because we don't want\n to force all IO managers to support others.\n\n Returns:\n context manager yielding the open file\n \"\"\"\n if mode not in ALLOWED_OPEN_MODES:\n raise ValueError('mode {} not allowed in IO managers'.format(mode))\n\n filepath = self.to_path(filename)\n\n # make directories if needed\n dirpath = os.path.dirname(filepath)\n if not os.path.exists(dirpath):\n os.makedirs(dirpath)\n\n # normally we'd construct this context manager with try/finally, but\n # here we already have a context manager for open so we just wrap it\n with open(filepath, mode, encoding=encoding) as f:\n yield f\n\n def _normalize_slashes(self, location):\n # note that this is NOT os.path.join - the difference is os.path.join\n 
# discards empty strings, so if you use it on a re.split absolute\n # path you will get a relative path!\n return os.sep.join(re.split('[\\\\\\\\/]', location))\n\n def to_path(self, location):\n \"\"\"\n Convert a location string into a path on the local file system.\n\n For DiskIO this just fixes slashes and prepends the base location,\n doing nothing active with the file. But for other io managers that\n refer to remote storage, this method may actually fetch the file and\n put it at a temporary local path.\n\n Args:\n location (str): A location string for a complete dataset or\n a file within it.\n\n Returns:\n path (str): The path on disk to which this location maps.\n \"\"\"\n location = self._normalize_slashes(location)\n if self.base_location:\n return os.path.join(self.base_location, location)\n else:\n return location\n\n def to_location(self, path):\n \"\"\"\n Convert a local filesystem path into a location string.\n\n Args:\n path (str): a path on the local file system.\n\n Returns:\n location (str): the location string corresponding to this path.\n \"\"\"\n if self.base_location:\n return os.path.relpath(path, self.base_location)\n else:\n return path\n\n def __repr__(self):\n \"\"\"Show the base location in the repr.\"\"\"\n return '<DiskIO, base_location={}>'.format(repr(self.base_location))\n\n def join(self, *args):\n \"\"\"Context-dependent os.path.join for this io manager.\"\"\"\n return os.path.join(*list(map(self._normalize_slashes, args)))\n\n def isfile(self, location):\n \"\"\"Check whether this location matches a file.\"\"\"\n path = self.to_path(location)\n return os.path.isfile(path)\n\n def list(self, location, maxdepth=1, include_dirs=False):\n \"\"\"\n Return all files that match location.\n\n This is either files whose names match up to an arbitrary extension,\n or any files within an exactly matching directory name.\n\n Args:\n location (str): the location to match.\n May contain the usual path wildcards * and ?\n\n maxdepth (int, optional): maximum levels of directory nesting to\n recurse into looking for files. Default 1.\n\n include_dirs (bool, optional): whether to allow directories in\n the results or just files. 
Default False.\n\n Returns:\n A list of matching files and/or directories, as locations\n relative to our base_location.\n \"\"\"\n location = self._normalize_slashes(location)\n search_dir, pattern = os.path.split(location)\n path = self.to_path(search_dir)\n\n if not os.path.isdir(path):\n return []\n\n matches = [fn for fn in os.listdir(path) if fnmatch(fn, pattern + '*')]\n out = []\n\n for match in matches:\n matchpath = self.join(path, match)\n if os.path.isdir(matchpath) and fnmatch(match, pattern):\n if maxdepth > 0:\n # exact directory match - walk down to maxdepth\n for root, dirs, files in os.walk(matchpath, topdown=True):\n depth = root[len(path):].count(os.path.sep)\n if depth == maxdepth:\n dirs[:] = [] # don't recurse any further\n\n for fn in files + (dirs if include_dirs else []):\n out.append(self.to_location(self.join(root, fn)))\n\n elif include_dirs:\n out.append(self.join(search_dir, match))\n\n elif (os.path.isfile(matchpath) and\n (fnmatch(match, pattern) or\n fnmatch(os.path.splitext(match)[0], pattern))):\n # exact filename match, or match up to an extension\n # note that we need fnmatch(match, pattern) in addition to the\n # splitext test to cover the case of the base filename itself\n # containing a dot.\n out.append(self.join(search_dir, match))\n\n return out\n\n def remove(self, filename):\n \"\"\"Delete a file or folder and prune the directory tree.\"\"\"\n path = self.to_path(filename)\n if os.path.isdir(path):\n shutil.rmtree(path)\n else:\n os.remove(path)\n\n filepath = os.path.split(path)[0]\n try:\n os.removedirs(filepath)\n except OSError:\n # directory was not empty - good that we're not removing it!\n pass\n\n def remove_all(self, location):\n \"\"\"\n Delete all files/directories in the dataset at this location.\n\n Afterward prunes the directory tree.\n \"\"\"\n for fn in self.list(location):\n self.remove(fn)\n",
"path": "qcodes/data/io.py"
}
] | [
{
"content": "\"\"\"\nIO managers for QCodes.\n\nIO managers wrap whatever physical storage layer the user wants to use\nin an interface mimicking the built-in <open> context manager, with\nsome restrictions to minimize the overhead in creating new IO managers.\n\nThe main thing these managers need to implement is the open context manager:\n\n- Only the context manager needs to be implemented, not separate\n open function and close methods.\n\n- open takes the standard parameters:\n\n - filename: (string)\n - mode: (string) only 'r' (read), 'w' (write), and 'a' (append) are\n expected to be implemented. As with normal file objects, the only\n difference between write and append is that write empties the file\n before adding new data, and append leaves the existing contents in\n place but starts writing at the end.\n - encoding: If a special output encoding is desired. i.e. 'utf8\n\n- the file-like object returned should implement a minimal set of operations.\n\n In read mode:\n - read([size]): read to the end or at most size bytes into a string\n - readline([size]): read until a newline or up to size bytes, into a string\n - iter(): usually return self, but can be any iterator over lines\n - next(): assuming iter() returns self, this yields the next line.\n\n In write or append mode:\n - write(s): add string s to the end of the file.\n - writelines(seq): add a sequence of strings\n\nIO managers should also implement:\n\n- a join method, ala os.path.join(\\*args).\n- a list method, that returns all objects matching location\n- a remove method, ala os.remove(path) except that it will remove directories\n as well as files, since we're allowing \"locations\" to be directories\n or files.\n\"\"\"\n\nfrom contextlib import contextmanager\nimport os\nimport re\nimport shutil\nfrom fnmatch import fnmatch\n\nALLOWED_OPEN_MODES = ('r', 'w', 'a')\n\n\nclass DiskIO:\n\n \"\"\"\n Simple IO object to wrap disk operations with a custom base location.\n\n Also accepts both forward and backward slashes at any point, and\n normalizes both to the OS we are currently on.\n\n Args:\n base_location (str): a path to the root data folder.\n Converted to an absolute path immediately, so even if you supply a\n relative path, later changes to the OS working directory will not\n affect data paths.\n \"\"\"\n\n def __init__(self, base_location):\n if base_location is None:\n self.base_location = None\n else:\n base_location = self._normalize_slashes(base_location)\n self.base_location = os.path.abspath(base_location)\n\n @contextmanager\n def open(self, filename, mode, encoding=None):\n \"\"\"\n Mimic the interface of the built in open context manager.\n\n Args:\n filename (str): path relative to base_location.\n\n mode (str): 'r' (read), 'w' (write), or 'a' (append).\n Other open modes are not supported because we don't want\n to force all IO managers to support others.\n\n Returns:\n context manager yielding the open file\n \"\"\"\n if mode not in ALLOWED_OPEN_MODES:\n raise ValueError('mode {} not allowed in IO managers'.format(mode))\n\n filepath = self.to_path(filename)\n\n # make directories if needed\n dirpath = os.path.dirname(filepath)\n if not os.path.exists(dirpath):\n os.makedirs(dirpath)\n\n # normally we'd construct this context manager with try/finally, but\n # here we already have a context manager for open so we just wrap it\n with open(filepath, mode, encoding=encoding) as f:\n yield f\n\n def _normalize_slashes(self, location):\n # note that this is NOT os.path.join - the difference is os.path.join\n 
# discards empty strings, so if you use it on a re.split absolute\n # path you will get a relative path!\n return os.sep.join(re.split('[\\\\\\\\/]', location))\n\n def to_path(self, location):\n \"\"\"\n Convert a location string into a path on the local file system.\n\n For DiskIO this just fixes slashes and prepends the base location,\n doing nothing active with the file. But for other io managers that\n refer to remote storage, this method may actually fetch the file and\n put it at a temporary local path.\n\n Args:\n location (str): A location string for a complete dataset or\n a file within it.\n\n Returns:\n path (str): The path on disk to which this location maps.\n \"\"\"\n location = self._normalize_slashes(location)\n if self.base_location:\n return os.path.join(self.base_location, location)\n else:\n return location\n\n def to_location(self, path):\n \"\"\"\n Convert a local filesystem path into a location string.\n\n Args:\n path (str): a path on the local file system.\n\n Returns:\n location (str): the location string corresponding to this path.\n \"\"\"\n if self.base_location:\n return os.path.join(self.base_location, path)\n else:\n return path\n\n def __repr__(self):\n \"\"\"Show the base location in the repr.\"\"\"\n return '<DiskIO, base_location={}>'.format(repr(self.base_location))\n\n def join(self, *args):\n \"\"\"Context-dependent os.path.join for this io manager.\"\"\"\n return os.path.join(*list(map(self._normalize_slashes, args)))\n\n def isfile(self, location):\n \"\"\"Check whether this location matches a file.\"\"\"\n path = self.to_path(location)\n return os.path.isfile(path)\n\n def list(self, location, maxdepth=1, include_dirs=False):\n \"\"\"\n Return all files that match location.\n\n This is either files whose names match up to an arbitrary extension,\n or any files within an exactly matching directory name.\n\n Args:\n location (str): the location to match.\n May contain the usual path wildcards * and ?\n\n maxdepth (int, optional): maximum levels of directory nesting to\n recurse into looking for files. Default 1.\n\n include_dirs (bool, optional): whether to allow directories in\n the results or just files. 
Default False.\n\n Returns:\n A list of matching files and/or directories, as locations\n relative to our base_location.\n \"\"\"\n location = self._normalize_slashes(location)\n search_dir, pattern = os.path.split(location)\n path = self.to_path(search_dir)\n\n if not os.path.isdir(path):\n return []\n\n matches = [fn for fn in os.listdir(path) if fnmatch(fn, pattern + '*')]\n out = []\n\n for match in matches:\n matchpath = self.join(path, match)\n if os.path.isdir(matchpath) and fnmatch(match, pattern):\n if maxdepth > 0:\n # exact directory match - walk down to maxdepth\n for root, dirs, files in os.walk(matchpath, topdown=True):\n depth = root[len(path):].count(os.path.sep)\n if depth == maxdepth:\n dirs[:] = [] # don't recurse any further\n\n for fn in files + (dirs if include_dirs else []):\n out.append(self.to_location(self.join(root, fn)))\n\n elif include_dirs:\n out.append(self.join(search_dir, match))\n\n elif (os.path.isfile(matchpath) and\n (fnmatch(match, pattern) or\n fnmatch(os.path.splitext(match)[0], pattern))):\n # exact filename match, or match up to an extension\n # note that we need fnmatch(match, pattern) in addition to the\n # splitext test to cover the case of the base filename itself\n # containing a dot.\n out.append(self.join(search_dir, match))\n\n return out\n\n def remove(self, filename):\n \"\"\"Delete a file or folder and prune the directory tree.\"\"\"\n path = self.to_path(filename)\n if os.path.isdir(path):\n shutil.rmtree(path)\n else:\n os.remove(path)\n\n filepath = os.path.split(path)[0]\n try:\n os.removedirs(filepath)\n except OSError:\n # directory was not empty - good that we're not removing it!\n pass\n\n def remove_all(self, location):\n \"\"\"\n Delete all files/directories in the dataset at this location.\n\n Afterward prunes the directory tree.\n \"\"\"\n for fn in self.list(location):\n self.remove(fn)\n",
"path": "qcodes/data/io.py"
}
] | diff --git a/qcodes/data/io.py b/qcodes/data/io.py
index 4b7a82c8588..93ff782a28b 100644
--- a/qcodes/data/io.py
+++ b/qcodes/data/io.py
@@ -141,7 +141,7 @@ def to_location(self, path):
location (str): the location string corresponding to this path.
"""
if self.base_location:
- return os.path.relpath(path, self.base_location)
+ return os.path.join(self.base_location, path)
else:
return path
|
strawberry-graphql__strawberry-2481 | Increased CPU usage when subscribing with the graphql-transport-ws protocol
## Describe the Bug
We have a Strawberry GraphQL server that we have been stress testing and running CPU performance tests on. We have found that there is a noticeable and consistent increase in the CPU usage of our server application when our client subscribes using the _graphql-transport-ws_ protocol compared to using the _graphql-ws_ protocol.
I have done a bit of investigating and further profiling using py-spy and discovered that the Strawberry code is creating a `NextMessage` object ([here](https://github.com/strawberry-graphql/strawberry/blob/db9c22a53205cd82330a9c84d44ac1ee2731eafb/strawberry/subscriptions/protocols/graphql_transport_ws/handlers.py#L261)) for each message, which it then converts to a dictionary ([here](https://github.com/strawberry-graphql/strawberry/blob/db9c22a53205cd82330a9c84d44ac1ee2731eafb/strawberry/subscriptions/protocols/graphql_transport_ws/handlers.py#L283)) using the `dataclasses` `asdict()` method ([here](https://github.com/strawberry-graphql/strawberry/blob/db9c22a53205cd82330a9c84d44ac1ee2731eafb/strawberry/subscriptions/protocols/graphql_transport_ws/types.py#L12)). Some internet research shows that this `asdict()` method does a `deepcopy` of everything within the class. I ran a few timing tests and the `asdict()` method takes an order of magnitude longer than a simple `.__dict__` on the object. This is only done in the _graphql-transport-ws_ implementation and not the _graphql-ws_ implementation, which explains why there is a difference in CPU usage between the two protocols.
I do not believe that we need to be doing a deepcopy when turning the class into a dictionary. What's more, I wonder whether we even need to be creating the `NextMessage` object, because as far as I can see we create it and pass it to a function that immediately turns it into a dictionary. So why don't we just create it as a dictionary and send it instead? This would bypass the conversion and the time it costs.
I.e. instead of lines 261 and 262 ([here](https://github.com/strawberry-graphql/strawberry/blob/db9c22a53205cd82330a9c84d44ac1ee2731eafb/strawberry/subscriptions/protocols/graphql_transport_ws/handlers.py#L261)), which do:
```
next_message = NextMessage(id=operation_id, payload=next_payload)
await self.send_message(next_message)
```
we could do something like:
```
next_message = {"id":operation_id, "payload": next_payload, "type": "next"}
await self.send_json(next_message)
```
When I ran the performance tests with the above change, the CPU usage dropped and was consistent with the _graphql-ws_ protocol's performance.
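For context, here is a minimal, self-contained sketch (not part of the original report) that reproduces the gap between `asdict()` and a hand-written conversion; the message class and payload shape below are illustrative assumptions, not Strawberry's actual types:

```python
# Compares dataclasses.asdict (recursive deep copy) with a direct dict build.
import timeit
from dataclasses import dataclass, asdict
from typing import Any, Dict


@dataclass
class NextMessage:
    id: str
    payload: Dict[str, Any]
    type: str = "next"


msg = NextMessage(id="op-1", payload={"data": {"values": list(range(100))}})

t_asdict = timeit.timeit(lambda: asdict(msg), number=10_000)
t_manual = timeit.timeit(
    lambda: {"id": msg.id, "payload": msg.payload, "type": msg.type},
    number=10_000,
)
print(f"asdict: {t_asdict:.3f}s  manual dict: {t_manual:.3f}s")
```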
## System Information
- Operating system: Centos 7
- Strawberry version (if applicable): 0.154.1
## Additional Context
I have created a simple demo Strawberry GraphQL server and Python client on GitHub, available at: https://github.com/rjwills28/strawberry_cpu_demo/tree/master.
Instructions on how to install and run are in the readme. It simulates the tests that we were running, where we have a server providing subscription updates at 10Hz and a client that creates 100 different subscriptions. Follow the example in the readme to first run with the _graphql-ws_ protocol (command line argument `-p 1`) and then with the _graphql-transport-ws_ protocol (`-p 2`). Run both a few times and you should see that the average CPU usage is on the whole higher for the latter protocol. Please let me know if you have any problems running this.
| [
{
"content": "from dataclasses import asdict, dataclass\nfrom typing import Any, Dict, List, Optional\n\nfrom graphql import GraphQLFormattedError\n\nfrom strawberry.unset import UNSET\n\n\n@dataclass\nclass GraphQLTransportMessage:\n def as_dict(self) -> dict:\n data = asdict(self)\n if getattr(self, \"payload\", None) is UNSET:\n # Unset fields must have a JSON value of \"undefined\" not \"null\"\n data.pop(\"payload\")\n return data\n\n\n@dataclass\nclass ConnectionInitMessage(GraphQLTransportMessage):\n \"\"\"\n Direction: Client -> Server\n \"\"\"\n\n payload: Optional[Dict[str, Any]] = UNSET\n type: str = \"connection_init\"\n\n\n@dataclass\nclass ConnectionAckMessage(GraphQLTransportMessage):\n \"\"\"\n Direction: Server -> Client\n \"\"\"\n\n payload: Optional[Dict[str, Any]] = UNSET\n type: str = \"connection_ack\"\n\n\n@dataclass\nclass PingMessage(GraphQLTransportMessage):\n \"\"\"\n Direction: bidirectional\n \"\"\"\n\n payload: Optional[Dict[str, Any]] = UNSET\n type: str = \"ping\"\n\n\n@dataclass\nclass PongMessage(GraphQLTransportMessage):\n \"\"\"\n Direction: bidirectional\n \"\"\"\n\n payload: Optional[Dict[str, Any]] = UNSET\n type: str = \"pong\"\n\n\n@dataclass\nclass SubscribeMessagePayload:\n query: str\n operationName: Optional[str] = None\n variables: Optional[Dict[str, Any]] = None\n extensions: Optional[Dict[str, Any]] = None\n\n\n@dataclass\nclass SubscribeMessage(GraphQLTransportMessage):\n \"\"\"\n Direction: Client -> Server\n \"\"\"\n\n id: str\n payload: SubscribeMessagePayload\n type: str = \"subscribe\"\n\n\n@dataclass\nclass NextMessage(GraphQLTransportMessage):\n \"\"\"\n Direction: Server -> Client\n \"\"\"\n\n id: str\n payload: Dict[str, Any] # TODO: shape like ExecutionResult\n type: str = \"next\"\n\n\n@dataclass\nclass ErrorMessage(GraphQLTransportMessage):\n \"\"\"\n Direction: Server -> Client\n \"\"\"\n\n id: str\n payload: List[GraphQLFormattedError]\n type: str = \"error\"\n\n\n@dataclass\nclass CompleteMessage(GraphQLTransportMessage):\n \"\"\"\n Direction: bidirectional\n \"\"\"\n\n id: str\n type: str = \"complete\"\n",
"path": "strawberry/subscriptions/protocols/graphql_transport_ws/types.py"
}
] | [
{
"content": "from dataclasses import asdict, dataclass\nfrom typing import Any, Dict, List, Optional\n\nfrom graphql import GraphQLFormattedError\n\nfrom strawberry.unset import UNSET\n\n\n@dataclass\nclass GraphQLTransportMessage:\n def as_dict(self) -> dict:\n data = asdict(self)\n if getattr(self, \"payload\", None) is UNSET:\n # Unset fields must have a JSON value of \"undefined\" not \"null\"\n data.pop(\"payload\")\n return data\n\n\n@dataclass\nclass ConnectionInitMessage(GraphQLTransportMessage):\n \"\"\"\n Direction: Client -> Server\n \"\"\"\n\n payload: Optional[Dict[str, Any]] = UNSET\n type: str = \"connection_init\"\n\n\n@dataclass\nclass ConnectionAckMessage(GraphQLTransportMessage):\n \"\"\"\n Direction: Server -> Client\n \"\"\"\n\n payload: Optional[Dict[str, Any]] = UNSET\n type: str = \"connection_ack\"\n\n\n@dataclass\nclass PingMessage(GraphQLTransportMessage):\n \"\"\"\n Direction: bidirectional\n \"\"\"\n\n payload: Optional[Dict[str, Any]] = UNSET\n type: str = \"ping\"\n\n\n@dataclass\nclass PongMessage(GraphQLTransportMessage):\n \"\"\"\n Direction: bidirectional\n \"\"\"\n\n payload: Optional[Dict[str, Any]] = UNSET\n type: str = \"pong\"\n\n\n@dataclass\nclass SubscribeMessagePayload:\n query: str\n operationName: Optional[str] = None\n variables: Optional[Dict[str, Any]] = None\n extensions: Optional[Dict[str, Any]] = None\n\n\n@dataclass\nclass SubscribeMessage(GraphQLTransportMessage):\n \"\"\"\n Direction: Client -> Server\n \"\"\"\n\n id: str\n payload: SubscribeMessagePayload\n type: str = \"subscribe\"\n\n\n@dataclass\nclass NextMessage(GraphQLTransportMessage):\n \"\"\"\n Direction: Server -> Client\n \"\"\"\n\n id: str\n payload: Dict[str, Any] # TODO: shape like ExecutionResult\n type: str = \"next\"\n\n def as_dict(self) -> dict:\n return {\"id\": self.id, \"payload\": self.payload, \"type\": self.type}\n\n\n@dataclass\nclass ErrorMessage(GraphQLTransportMessage):\n \"\"\"\n Direction: Server -> Client\n \"\"\"\n\n id: str\n payload: List[GraphQLFormattedError]\n type: str = \"error\"\n\n\n@dataclass\nclass CompleteMessage(GraphQLTransportMessage):\n \"\"\"\n Direction: bidirectional\n \"\"\"\n\n id: str\n type: str = \"complete\"\n",
"path": "strawberry/subscriptions/protocols/graphql_transport_ws/types.py"
}
] | diff --git a/RELEASE.md b/RELEASE.md
new file mode 100644
index 0000000000..d3ebb00904
--- /dev/null
+++ b/RELEASE.md
@@ -0,0 +1,5 @@
+Release type: patch
+
+This release fixes a bug in subscriptions using the graphql-transport-ws protocol
+where the conversion of the NextMessage object to a dictionary took an unnecessary
+amount of time leading to an increase in CPU usage.
diff --git a/strawberry/subscriptions/protocols/graphql_transport_ws/types.py b/strawberry/subscriptions/protocols/graphql_transport_ws/types.py
index 04f844e1a0..72033f7ff4 100644
--- a/strawberry/subscriptions/protocols/graphql_transport_ws/types.py
+++ b/strawberry/subscriptions/protocols/graphql_transport_ws/types.py
@@ -85,6 +85,9 @@ class NextMessage(GraphQLTransportMessage):
payload: Dict[str, Any] # TODO: shape like ExecutionResult
type: str = "next"
+ def as_dict(self) -> dict:
+ return {"id": self.id, "payload": self.payload, "type": self.type}
+
@dataclass
class ErrorMessage(GraphQLTransportMessage):
|
oobabooga__text-generation-webui-3014 | Error when downloading model from UI
### Describe the bug
I just downloaded the latest version of text-generation-webui on Ubuntu and started the UI, but it is no longer allowing me to download a model from the UI. I tried downloading 'anon8231489123/vicuna-13b-GPTQ-4bit-128g' but got the following error:
Traceback (most recent call last):
  File "/home/squirol/ben2/oobabooga_linux/text-generation-webui/server.py", line 134, in download_model_wrapper
    downloader = downloader_module.ModelDownloader()
TypeError: ModelDownloader.__init__() missing 1 required positional argument: 'max_retries'
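A minimal, hypothetical sketch of the failure mode (the names mirror the traceback, but this is not the project's actual code): the web UI constructs the downloader with no arguments, so a required positional parameter breaks that path, while a default value keeps both call styles working.

```python
# Illustrative only: a required positional parameter raises the TypeError above.
class ModelDownloader:
    def __init__(self, max_retries=5):  # a default keeps ModelDownloader() valid
        self.max_retries = max_retries


ui_downloader = ModelDownloader()                 # call style used by the web UI
cli_downloader = ModelDownloader(max_retries=10)  # call style used by the CLI script
print(ui_downloader.max_retries, cli_downloader.max_retries)  # -> 5 10
```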
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
1. Launch web UI using ./start_linux.sh
2. Open browser to http://127.0.0.1:7860/
3. Enter 'anon8231489123/vicuna-13b-GPTQ-4bit-128g' and select Download in UI
4. View exception under Download button
### Screenshot

### Logs
```shell
N/A
```
### System Info
```shell
Ubuntu
NVIDIA
```
| [
{
"content": "'''\nDownloads models from Hugging Face to models/username_modelname.\n\nExample:\npython download-model.py facebook/opt-1.3b\n\n'''\n\nimport argparse\nimport base64\nimport datetime\nimport hashlib\nimport json\nimport os\nimport re\nimport sys\nfrom pathlib import Path\n\nimport requests\nimport tqdm\nfrom requests.adapters import HTTPAdapter\nfrom tqdm.contrib.concurrent import thread_map\n\n\nclass ModelDownloader:\n def __init__(self, max_retries):\n self.s = requests.Session()\n if max_retries:\n self.s.mount('https://cdn-lfs.huggingface.co', HTTPAdapter(max_retries=max_retries))\n self.s.mount('https://huggingface.co', HTTPAdapter(max_retries=max_retries))\n if os.getenv('HF_USER') is not None and os.getenv('HF_PASS') is not None:\n self.s.auth = (os.getenv('HF_USER'), os.getenv('HF_PASS'))\n\n def sanitize_model_and_branch_names(self, model, branch):\n if model[-1] == '/':\n model = model[:-1]\n\n if branch is None:\n branch = \"main\"\n else:\n pattern = re.compile(r\"^[a-zA-Z0-9._-]+$\")\n if not pattern.match(branch):\n raise ValueError(\n \"Invalid branch name. Only alphanumeric characters, period, underscore and dash are allowed.\")\n\n return model, branch\n\n def get_download_links_from_huggingface(self, model, branch, text_only=False):\n base = \"https://huggingface.co\"\n page = f\"/api/models/{model}/tree/{branch}\"\n cursor = b\"\"\n\n links = []\n sha256 = []\n classifications = []\n has_pytorch = False\n has_pt = False\n # has_ggml = False\n has_safetensors = False\n is_lora = False\n while True:\n url = f\"{base}{page}\" + (f\"?cursor={cursor.decode()}\" if cursor else \"\")\n r = self.s.get(url, timeout=20)\n r.raise_for_status()\n content = r.content\n\n dict = json.loads(content)\n if len(dict) == 0:\n break\n\n for i in range(len(dict)):\n fname = dict[i]['path']\n if not is_lora and fname.endswith(('adapter_config.json', 'adapter_model.bin')):\n is_lora = True\n\n is_pytorch = re.match(\"(pytorch|adapter)_model.*\\.bin\", fname)\n is_safetensors = re.match(\".*\\.safetensors\", fname)\n is_pt = re.match(\".*\\.pt\", fname)\n is_ggml = re.match(\".*ggml.*\\.bin\", fname)\n is_tokenizer = re.match(\"(tokenizer|ice).*\\.model\", fname)\n is_text = re.match(\".*\\.(txt|json|py|md)\", fname) or is_tokenizer\n if any((is_pytorch, is_safetensors, is_pt, is_ggml, is_tokenizer, is_text)):\n if 'lfs' in dict[i]:\n sha256.append([fname, dict[i]['lfs']['oid']])\n\n if is_text:\n links.append(f\"https://huggingface.co/{model}/resolve/{branch}/{fname}\")\n classifications.append('text')\n continue\n\n if not text_only:\n links.append(f\"https://huggingface.co/{model}/resolve/{branch}/{fname}\")\n if is_safetensors:\n has_safetensors = True\n classifications.append('safetensors')\n elif is_pytorch:\n has_pytorch = True\n classifications.append('pytorch')\n elif is_pt:\n has_pt = True\n classifications.append('pt')\n elif is_ggml:\n # has_ggml = True\n classifications.append('ggml')\n\n cursor = base64.b64encode(f'{{\"file_name\":\"{dict[-1][\"path\"]}\"}}'.encode()) + b':50'\n cursor = base64.b64encode(cursor)\n cursor = cursor.replace(b'=', b'%3D')\n\n # If both pytorch and safetensors are available, download safetensors only\n if (has_pytorch or has_pt) and has_safetensors:\n for i in range(len(classifications) - 1, -1, -1):\n if classifications[i] in ['pytorch', 'pt']:\n links.pop(i)\n\n return links, sha256, is_lora\n\n def get_output_folder(self, model, branch, is_lora, base_folder=None):\n if base_folder is None:\n base_folder = 'models' if not is_lora else 
'loras'\n\n output_folder = f\"{'_'.join(model.split('/')[-2:])}\"\n if branch != 'main':\n output_folder += f'_{branch}'\n\n output_folder = Path(base_folder) / output_folder\n return output_folder\n\n def get_single_file(self, url, output_folder, start_from_scratch=False):\n filename = Path(url.rsplit('/', 1)[1])\n output_path = output_folder / filename\n headers = {}\n mode = 'wb'\n if output_path.exists() and not start_from_scratch:\n\n # Check if the file has already been downloaded completely\n r = self.s.get(url, stream=True, timeout=20)\n total_size = int(r.headers.get('content-length', 0))\n if output_path.stat().st_size >= total_size:\n return\n\n # Otherwise, resume the download from where it left off\n headers = {'Range': f'bytes={output_path.stat().st_size}-'}\n mode = 'ab'\n\n with self.s.get(url, stream=True, headers=headers, timeout=20) as r:\n r.raise_for_status() # Do not continue the download if the request was unsuccessful\n total_size = int(r.headers.get('content-length', 0))\n block_size = 1024 * 1024 # 1MB\n with open(output_path, mode) as f:\n with tqdm.tqdm(total=total_size, unit='iB', unit_scale=True, bar_format='{l_bar}{bar}| {n_fmt:6}/{total_fmt:6} {rate_fmt:6}') as t:\n count = 0\n for data in r.iter_content(block_size):\n t.update(len(data))\n f.write(data)\n if total_size != 0 and self.progress_bar is not None:\n count += len(data)\n self.progress_bar(float(count) / float(total_size), f\"Downloading {filename}\")\n\n def start_download_threads(self, file_list, output_folder, start_from_scratch=False, threads=1):\n thread_map(lambda url: self.get_single_file(url, output_folder, start_from_scratch=start_from_scratch), file_list, max_workers=threads, disable=True)\n\n def download_model_files(self, model, branch, links, sha256, output_folder, progress_bar=None, start_from_scratch=False, threads=1):\n self.progress_bar = progress_bar\n\n # Creating the folder and writing the metadata\n output_folder.mkdir(parents=True, exist_ok=True)\n metadata = f'url: https://huggingface.co/{model}\\n' \\\n f'branch: {branch}\\n' \\\n f'download date: {datetime.datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\")}\\n'\n\n sha256_str = '\\n'.join([f' {item[1]} {item[0]}' for item in sha256])\n if sha256_str:\n metadata += f'sha256sum:\\n{sha256_str}'\n\n metadata += '\\n'\n (output_folder / 'huggingface-metadata.txt').write_text(metadata)\n\n # Downloading the files\n print(f\"Downloading the model to {output_folder}\")\n self.start_download_threads(links, output_folder, start_from_scratch=start_from_scratch, threads=threads)\n\n def check_model_files(self, model, branch, links, sha256, output_folder):\n # Validate the checksums\n validated = True\n for i in range(len(sha256)):\n fpath = (output_folder / sha256[i][0])\n\n if not fpath.exists():\n print(f\"The following file is missing: {fpath}\")\n validated = False\n continue\n\n with open(output_folder / sha256[i][0], \"rb\") as f:\n bytes = f.read()\n file_hash = hashlib.sha256(bytes).hexdigest()\n if file_hash != sha256[i][1]:\n print(f'Checksum failed: {sha256[i][0]} {sha256[i][1]}')\n validated = False\n else:\n print(f'Checksum validated: {sha256[i][0]} {sha256[i][1]}')\n\n if validated:\n print('[+] Validated checksums of all model files!')\n else:\n print('[-] Invalid checksums. 
Rerun download-model.py with the --clean flag.')\n\n\nif __name__ == '__main__':\n\n parser = argparse.ArgumentParser()\n parser.add_argument('MODEL', type=str, default=None, nargs='?')\n parser.add_argument('--branch', type=str, default='main', help='Name of the Git branch to download from.')\n parser.add_argument('--threads', type=int, default=1, help='Number of files to download simultaneously.')\n parser.add_argument('--text-only', action='store_true', help='Only download text files (txt/json).')\n parser.add_argument('--output', type=str, default=None, help='The folder where the model should be saved.')\n parser.add_argument('--clean', action='store_true', help='Does not resume the previous download.')\n parser.add_argument('--check', action='store_true', help='Validates the checksums of model files.')\n parser.add_argument('--max-retries', type=int, default=5, help='Max retries count when get error in download time.')\n args = parser.parse_args()\n\n branch = args.branch\n model = args.MODEL\n\n if model is None:\n print(\"Error: Please specify the model you'd like to download (e.g. 'python download-model.py facebook/opt-1.3b').\")\n sys.exit()\n\n downloader = ModelDownloader(max_retries=args.max_retries)\n # Cleaning up the model/branch names\n try:\n model, branch = downloader.sanitize_model_and_branch_names(model, branch)\n except ValueError as err_branch:\n print(f\"Error: {err_branch}\")\n sys.exit()\n\n # Getting the download links from Hugging Face\n links, sha256, is_lora = downloader.get_download_links_from_huggingface(model, branch, text_only=args.text_only)\n\n # Getting the output folder\n output_folder = downloader.get_output_folder(model, branch, is_lora, base_folder=args.output)\n\n if args.check:\n # Check previously downloaded files\n downloader.check_model_files(model, branch, links, sha256, output_folder)\n else:\n # Download files\n downloader.download_model_files(model, branch, links, sha256, output_folder, threads=args.threads)\n",
"path": "download-model.py"
}
] | [
{
"content": "'''\nDownloads models from Hugging Face to models/username_modelname.\n\nExample:\npython download-model.py facebook/opt-1.3b\n\n'''\n\nimport argparse\nimport base64\nimport datetime\nimport hashlib\nimport json\nimport os\nimport re\nimport sys\nfrom pathlib import Path\n\nimport requests\nimport tqdm\nfrom requests.adapters import HTTPAdapter\nfrom tqdm.contrib.concurrent import thread_map\n\n\nclass ModelDownloader:\n def __init__(self, max_retries = 5):\n self.s = requests.Session()\n if max_retries:\n self.s.mount('https://cdn-lfs.huggingface.co', HTTPAdapter(max_retries=max_retries))\n self.s.mount('https://huggingface.co', HTTPAdapter(max_retries=max_retries))\n if os.getenv('HF_USER') is not None and os.getenv('HF_PASS') is not None:\n self.s.auth = (os.getenv('HF_USER'), os.getenv('HF_PASS'))\n\n def sanitize_model_and_branch_names(self, model, branch):\n if model[-1] == '/':\n model = model[:-1]\n\n if branch is None:\n branch = \"main\"\n else:\n pattern = re.compile(r\"^[a-zA-Z0-9._-]+$\")\n if not pattern.match(branch):\n raise ValueError(\n \"Invalid branch name. Only alphanumeric characters, period, underscore and dash are allowed.\")\n\n return model, branch\n\n def get_download_links_from_huggingface(self, model, branch, text_only=False):\n base = \"https://huggingface.co\"\n page = f\"/api/models/{model}/tree/{branch}\"\n cursor = b\"\"\n\n links = []\n sha256 = []\n classifications = []\n has_pytorch = False\n has_pt = False\n # has_ggml = False\n has_safetensors = False\n is_lora = False\n while True:\n url = f\"{base}{page}\" + (f\"?cursor={cursor.decode()}\" if cursor else \"\")\n r = self.s.get(url, timeout=20)\n r.raise_for_status()\n content = r.content\n\n dict = json.loads(content)\n if len(dict) == 0:\n break\n\n for i in range(len(dict)):\n fname = dict[i]['path']\n if not is_lora and fname.endswith(('adapter_config.json', 'adapter_model.bin')):\n is_lora = True\n\n is_pytorch = re.match(\"(pytorch|adapter)_model.*\\.bin\", fname)\n is_safetensors = re.match(\".*\\.safetensors\", fname)\n is_pt = re.match(\".*\\.pt\", fname)\n is_ggml = re.match(\".*ggml.*\\.bin\", fname)\n is_tokenizer = re.match(\"(tokenizer|ice).*\\.model\", fname)\n is_text = re.match(\".*\\.(txt|json|py|md)\", fname) or is_tokenizer\n if any((is_pytorch, is_safetensors, is_pt, is_ggml, is_tokenizer, is_text)):\n if 'lfs' in dict[i]:\n sha256.append([fname, dict[i]['lfs']['oid']])\n\n if is_text:\n links.append(f\"https://huggingface.co/{model}/resolve/{branch}/{fname}\")\n classifications.append('text')\n continue\n\n if not text_only:\n links.append(f\"https://huggingface.co/{model}/resolve/{branch}/{fname}\")\n if is_safetensors:\n has_safetensors = True\n classifications.append('safetensors')\n elif is_pytorch:\n has_pytorch = True\n classifications.append('pytorch')\n elif is_pt:\n has_pt = True\n classifications.append('pt')\n elif is_ggml:\n # has_ggml = True\n classifications.append('ggml')\n\n cursor = base64.b64encode(f'{{\"file_name\":\"{dict[-1][\"path\"]}\"}}'.encode()) + b':50'\n cursor = base64.b64encode(cursor)\n cursor = cursor.replace(b'=', b'%3D')\n\n # If both pytorch and safetensors are available, download safetensors only\n if (has_pytorch or has_pt) and has_safetensors:\n for i in range(len(classifications) - 1, -1, -1):\n if classifications[i] in ['pytorch', 'pt']:\n links.pop(i)\n\n return links, sha256, is_lora\n\n def get_output_folder(self, model, branch, is_lora, base_folder=None):\n if base_folder is None:\n base_folder = 'models' if not is_lora 
else 'loras'\n\n output_folder = f\"{'_'.join(model.split('/')[-2:])}\"\n if branch != 'main':\n output_folder += f'_{branch}'\n\n output_folder = Path(base_folder) / output_folder\n return output_folder\n\n def get_single_file(self, url, output_folder, start_from_scratch=False):\n filename = Path(url.rsplit('/', 1)[1])\n output_path = output_folder / filename\n headers = {}\n mode = 'wb'\n if output_path.exists() and not start_from_scratch:\n\n # Check if the file has already been downloaded completely\n r = self.s.get(url, stream=True, timeout=20)\n total_size = int(r.headers.get('content-length', 0))\n if output_path.stat().st_size >= total_size:\n return\n\n # Otherwise, resume the download from where it left off\n headers = {'Range': f'bytes={output_path.stat().st_size}-'}\n mode = 'ab'\n\n with self.s.get(url, stream=True, headers=headers, timeout=20) as r:\n r.raise_for_status() # Do not continue the download if the request was unsuccessful\n total_size = int(r.headers.get('content-length', 0))\n block_size = 1024 * 1024 # 1MB\n with open(output_path, mode) as f:\n with tqdm.tqdm(total=total_size, unit='iB', unit_scale=True, bar_format='{l_bar}{bar}| {n_fmt:6}/{total_fmt:6} {rate_fmt:6}') as t:\n count = 0\n for data in r.iter_content(block_size):\n t.update(len(data))\n f.write(data)\n if total_size != 0 and self.progress_bar is not None:\n count += len(data)\n self.progress_bar(float(count) / float(total_size), f\"Downloading {filename}\")\n\n def start_download_threads(self, file_list, output_folder, start_from_scratch=False, threads=1):\n thread_map(lambda url: self.get_single_file(url, output_folder, start_from_scratch=start_from_scratch), file_list, max_workers=threads, disable=True)\n\n def download_model_files(self, model, branch, links, sha256, output_folder, progress_bar=None, start_from_scratch=False, threads=1):\n self.progress_bar = progress_bar\n\n # Creating the folder and writing the metadata\n output_folder.mkdir(parents=True, exist_ok=True)\n metadata = f'url: https://huggingface.co/{model}\\n' \\\n f'branch: {branch}\\n' \\\n f'download date: {datetime.datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\")}\\n'\n\n sha256_str = '\\n'.join([f' {item[1]} {item[0]}' for item in sha256])\n if sha256_str:\n metadata += f'sha256sum:\\n{sha256_str}'\n\n metadata += '\\n'\n (output_folder / 'huggingface-metadata.txt').write_text(metadata)\n\n # Downloading the files\n print(f\"Downloading the model to {output_folder}\")\n self.start_download_threads(links, output_folder, start_from_scratch=start_from_scratch, threads=threads)\n\n def check_model_files(self, model, branch, links, sha256, output_folder):\n # Validate the checksums\n validated = True\n for i in range(len(sha256)):\n fpath = (output_folder / sha256[i][0])\n\n if not fpath.exists():\n print(f\"The following file is missing: {fpath}\")\n validated = False\n continue\n\n with open(output_folder / sha256[i][0], \"rb\") as f:\n bytes = f.read()\n file_hash = hashlib.sha256(bytes).hexdigest()\n if file_hash != sha256[i][1]:\n print(f'Checksum failed: {sha256[i][0]} {sha256[i][1]}')\n validated = False\n else:\n print(f'Checksum validated: {sha256[i][0]} {sha256[i][1]}')\n\n if validated:\n print('[+] Validated checksums of all model files!')\n else:\n print('[-] Invalid checksums. 
Rerun download-model.py with the --clean flag.')\n\n\nif __name__ == '__main__':\n\n parser = argparse.ArgumentParser()\n parser.add_argument('MODEL', type=str, default=None, nargs='?')\n parser.add_argument('--branch', type=str, default='main', help='Name of the Git branch to download from.')\n parser.add_argument('--threads', type=int, default=1, help='Number of files to download simultaneously.')\n parser.add_argument('--text-only', action='store_true', help='Only download text files (txt/json).')\n parser.add_argument('--output', type=str, default=None, help='The folder where the model should be saved.')\n parser.add_argument('--clean', action='store_true', help='Does not resume the previous download.')\n parser.add_argument('--check', action='store_true', help='Validates the checksums of model files.')\n parser.add_argument('--max-retries', type=int, default=5, help='Max retries count when get error in download time.')\n args = parser.parse_args()\n\n branch = args.branch\n model = args.MODEL\n\n if model is None:\n print(\"Error: Please specify the model you'd like to download (e.g. 'python download-model.py facebook/opt-1.3b').\")\n sys.exit()\n\n downloader = ModelDownloader(max_retries=args.max_retries)\n # Cleaning up the model/branch names\n try:\n model, branch = downloader.sanitize_model_and_branch_names(model, branch)\n except ValueError as err_branch:\n print(f\"Error: {err_branch}\")\n sys.exit()\n\n # Getting the download links from Hugging Face\n links, sha256, is_lora = downloader.get_download_links_from_huggingface(model, branch, text_only=args.text_only)\n\n # Getting the output folder\n output_folder = downloader.get_output_folder(model, branch, is_lora, base_folder=args.output)\n\n if args.check:\n # Check previously downloaded files\n downloader.check_model_files(model, branch, links, sha256, output_folder)\n else:\n # Download files\n downloader.download_model_files(model, branch, links, sha256, output_folder, threads=args.threads)\n",
"path": "download-model.py"
}
] | diff --git a/download-model.py b/download-model.py
index 9ee7790664..2642c40545 100644
--- a/download-model.py
+++ b/download-model.py
@@ -23,7 +23,7 @@
class ModelDownloader:
- def __init__(self, max_retries):
+ def __init__(self, max_retries = 5):
self.s = requests.Session()
if max_retries:
self.s.mount('https://cdn-lfs.huggingface.co', HTTPAdapter(max_retries=max_retries))
|
GeotrekCE__Geotrek-admin-805 | ADMIN - Path segment looping back on itself
Impossible to enter the CIRCUIT DES LACS correctly.
It often returns a 504 BAD GATEWAY when saving. The route was nevertheless modified, but differently from the way it was entered. Needs further investigation.
| [
{
"content": "from django.utils.translation import ugettext_lazy as _\n\nimport floppyforms as forms\n\nfrom geotrek.common.forms import CommonForm\nfrom .models import Path\nfrom .helpers import PathHelper\nfrom .fields import TopologyField, SnappedLineStringField\n\n\nclass TopologyForm(CommonForm):\n \"\"\"\n This form is a bit specific :\n\n We use a field (topology) in order to edit the whole instance.\n Thus, at init, we load the instance into field, and at save, we\n save the field into the instance.\n\n The geom field is fully ignored, since we edit a topology.\n \"\"\"\n topology = TopologyField(label=\"\")\n\n def __init__(self, *args, **kwargs):\n super(TopologyForm, self).__init__(*args, **kwargs)\n if self.instance and self.instance.pk:\n self.fields['topology'].initial = self.instance\n\n def clean(self, *args, **kwargs):\n data = super(TopologyForm, self).clean()\n # geom is computed at db-level and never edited\n if 'geom' in self.errors:\n del self.errors['geom']\n return data\n\n def save(self, *args, **kwargs):\n topology = self.cleaned_data.pop('topology')\n instance = super(TopologyForm, self).save(*args, **kwargs)\n instance.mutate(topology)\n return instance\n\n geomfields = ['topology']\n\n class Meta(CommonForm.Meta):\n fields = CommonForm.Meta.fields + ['topology']\n\n MEDIA_JS = (\"core/dijkstra.js\",\n \"core/leaflet-geomutils.js\",\n \"core/multipath.js\",\n \"core/topology_helper.js\") + CommonForm.MEDIA_JS\n\n\nclass PathForm(CommonForm):\n geom = SnappedLineStringField()\n\n reverse_geom = forms.BooleanField(required=False,\n label=_(\"Reverse path\"),\n help_text=_(\"The path will be reversed once saved\"))\n\n geomfields = ['geom']\n\n class Meta(CommonForm.Meta):\n model = Path\n fields = CommonForm.Meta.fields + \\\n ['structure',\n 'name', 'stake', 'comfort', 'trail', 'departure', 'arrival', 'comments',\n 'datasource', 'networks', 'usages', 'valid', 'reverse_geom', 'geom']\n\n def __init__(self, *args, **kwargs):\n super(PathForm, self).__init__(*args, **kwargs)\n self.fields['geom'].label = ''\n\n def clean_geom(self):\n geom = self.cleaned_data['geom']\n if geom is None:\n raise forms.ValidationError(_(\"Invalid snapped geometry.\"))\n if not geom.simple:\n raise forms.ValidationError(_(\"Geometry is not simple.\"))\n if not PathHelper.disjoint(geom, self.cleaned_data.get('pk') or -1):\n raise forms.ValidationError(_(\"Geometry overlaps another.\"))\n return geom\n\n def save(self, commit=True):\n path = super(PathForm, self).save(commit=False)\n\n if self.cleaned_data.get('reverse_geom'):\n path.reverse()\n\n if commit:\n path.save()\n self.save_m2m()\n\n return path\n",
"path": "geotrek/core/forms.py"
}
] | [
{
"content": "from django.utils.translation import ugettext_lazy as _\n\nimport floppyforms as forms\n\nfrom geotrek.common.forms import CommonForm\nfrom .models import Path\nfrom .helpers import PathHelper\nfrom .fields import TopologyField, SnappedLineStringField\n\n\nclass TopologyForm(CommonForm):\n \"\"\"\n This form is a bit specific :\n\n We use a field (topology) in order to edit the whole instance.\n Thus, at init, we load the instance into field, and at save, we\n save the field into the instance.\n\n The geom field is fully ignored, since we edit a topology.\n \"\"\"\n topology = TopologyField(label=\"\")\n\n def __init__(self, *args, **kwargs):\n super(TopologyForm, self).__init__(*args, **kwargs)\n if self.instance and self.instance.pk:\n self.fields['topology'].initial = self.instance\n\n def clean(self, *args, **kwargs):\n data = super(TopologyForm, self).clean()\n # geom is computed at db-level and never edited\n if 'geom' in self.errors:\n del self.errors['geom']\n return data\n\n def save(self, *args, **kwargs):\n topology = self.cleaned_data.pop('topology')\n instance = super(TopologyForm, self).save(*args, **kwargs)\n instance.mutate(topology)\n return instance\n\n geomfields = ['topology']\n\n class Meta(CommonForm.Meta):\n fields = CommonForm.Meta.fields + ['topology']\n\n MEDIA_JS = (\"core/dijkstra.js\",\n \"core/multipath.js\",\n \"core/topology_helper.js\") + CommonForm.MEDIA_JS\n\n\nclass PathForm(CommonForm):\n geom = SnappedLineStringField()\n\n reverse_geom = forms.BooleanField(required=False,\n label=_(\"Reverse path\"),\n help_text=_(\"The path will be reversed once saved\"))\n\n geomfields = ['geom']\n\n class Meta(CommonForm.Meta):\n model = Path\n fields = CommonForm.Meta.fields + \\\n ['structure',\n 'name', 'stake', 'comfort', 'trail', 'departure', 'arrival', 'comments',\n 'datasource', 'networks', 'usages', 'valid', 'reverse_geom', 'geom']\n\n def __init__(self, *args, **kwargs):\n super(PathForm, self).__init__(*args, **kwargs)\n self.fields['geom'].label = ''\n\n def clean_geom(self):\n geom = self.cleaned_data['geom']\n if geom is None:\n raise forms.ValidationError(_(\"Invalid snapped geometry.\"))\n if not geom.simple:\n raise forms.ValidationError(_(\"Geometry is not simple.\"))\n if not PathHelper.disjoint(geom, self.cleaned_data.get('pk') or -1):\n raise forms.ValidationError(_(\"Geometry overlaps another.\"))\n return geom\n\n def save(self, commit=True):\n path = super(PathForm, self).save(commit=False)\n\n if self.cleaned_data.get('reverse_geom'):\n path.reverse()\n\n if commit:\n path.save()\n self.save_m2m()\n\n return path\n",
"path": "geotrek/core/forms.py"
}
] | diff --git a/CHANGES b/CHANGES
index ed9dbb2e7a..1deda94e32 100644
--- a/CHANGES
+++ b/CHANGES
@@ -35,6 +35,7 @@ CHANGELOG
* Allow server host to capture pages (fixes #733)
* Adjust map capture according to geometry aspect ratio (fixes #627)
* Always show path layer in detail pages (fixes #781)
+* Fix restore of topology on loop paths (fixes #760)
0.19.1 (2013-07-15)
diff --git a/geotrek/core/forms.py b/geotrek/core/forms.py
index f429aad222..42eebfd7d7 100644
--- a/geotrek/core/forms.py
+++ b/geotrek/core/forms.py
@@ -44,7 +44,6 @@ class Meta(CommonForm.Meta):
fields = CommonForm.Meta.fields + ['topology']
MEDIA_JS = ("core/dijkstra.js",
- "core/leaflet-geomutils.js",
"core/multipath.js",
"core/topology_helper.js") + CommonForm.MEDIA_JS
diff --git a/geotrek/core/static/core/leaflet-geomutils.js b/geotrek/core/static/core/leaflet-geomutils.js
deleted file mode 100644
index aa00e31d47..0000000000
--- a/geotrek/core/static/core/leaflet-geomutils.js
+++ /dev/null
@@ -1,325 +0,0 @@
-L.GeomUtils = (function() {
- var self;
- return self = {
-
- // Calculate if a point p is between a and b
- isBetween: function(x, a, b, epsilon) {
- epsilon = epsilon || 0.5;
- var d = x.distanceTo(a) + x.distanceTo(b) - a.distanceTo(b);
- return d < epsilon;
- },
-
- // Use LatLng
- getPercentageDistanceFromPolyline: function(ll, polyline) {
- // Will test every point, considering a point is in a segment with an error of 2 meters
- return self.getPercentageDistance(ll, polyline.getLatLngs(), 5 /* in meters */, true);
- },
-
- // May be used for performance issue but you will loose precision
- getPercentageDistanceFromPolylineAsPoints: function(point, polyline) {
- return self.getPercentageDistance(point, polyline._parts[0], 5, true);
- },
-
- // You may pass latlng or point to this function
- getPercentageDistance: function(x, xs, epsilon, only_first, recurse) {
- var xs_len = 0.0
- , distance_found = false
- , closest_idx = null
- , distance = Number.MAX_VALUE;
-
- for (var i = 0; i < xs.length - 1; i++) {
- var x1 = xs[i], x2 = xs[i+1];
-
- // We iterate on each segment of the path
- if (!distance_found || !only_first) {
- if (self.isBetween(x, x1, x2, epsilon)) {
- distance_found = true;
- xdistance = xs_len + x.distanceTo(x1);
-
- if (only_first || xdistance < distance) {
- distance = xdistance;
- closest_idx = i;
- }
- }
- }
-
- xs_len += x1.distanceTo(x2);
- }
-
- if (!distance_found) {
- if (!recurse) {
- console.warn('Could not find ' + x + ' in ' + xs);
- return null;
- }
- // Try with closest point.
- var seg = L.GeomUtils.closestSegment(x, xs)
- , p = L.LineUtil.closestPointOnSegment(x, seg[0], seg[1]);
- return L.GeomUtils.getPercentageDistance(p, xs, epsilon, only_first, true);
- }
- var percent = Math.round((distance / xs_len)*10000)/10000;
- return { 'distance': percent, 'closest': closest_idx };
- },
-
- getLatLngFromPos: function(map, polyline, pos_list, equal_delta) {
- equal_delta === equal_delta === undefined ? 2 /*in meters*/ : equal_delta;
-
- // Safety check : should be ordered and 0.0 <= X <=1.0!
- $.each(pos_list, function(i, pos) {
- var prev_pos = pos[i - 1];
- var sorted = prev_pos === undefined ? true : pos > prev_pos;
- if (! (pos >= 0 && pos <= 1 && sorted)) {
- throw 'Wrong value: ' + pos_list;
- }
- });
-
- // Polyline related
- var polyline_lls = polyline.getLatLngs();
- var d_len = self.getDistances(polyline_lls)
- , polyline_len = d_len.length
- , polyline_distances = d_len.distances;
-
- // Simple situation... simple solution.
- if (pos_list.length == 1) {
- if (pos_list[0] == 0.0) return [self.cloneLatLng(polyline_lls[0])];
- if (pos_list[0] == 1.0) return [self.cloneLatLng(polyline_lls[polyline_lls.length-1])];
- }
-
- var ds = $.map(pos_list, function(pos) { return polyline_len * pos; });
-
- var res = [];
- var i;
-
- var current_distance = ds.shift()
- , current_geom = [];
-
- // If pos is 0.0, take first latlng
- if (current_distance == 0.0) {
- res.push(self.cloneLatLng(polyline_distances[0].x1));
- current_distance = ds.shift()
- }
-
- for (i = 0; i < polyline_distances.length; i++) {
- var dist = polyline_distances[i];
- var new_acc = dist.acc + dist.distance;
-
- var delta = Math.abs(current_distance - new_acc)
- var distance_equal = delta < equal_delta;
-
- if (distance_equal || current_distance < new_acc) {
- if (distance_equal) {
- // Same point
- res.push(self.cloneLatLng(dist.x2));
- } else {
- // current_distance < new_acc
- // New point
-
- var dist_from_point = current_distance - dist.acc;
- var ratio_dist = dist_from_point / dist.distance;
- var ll = self.getPointOnLine(map, ratio_dist, dist.x1, dist.x2);
-
- res.push(ll);
- }
-
- if (ds.length == 0) break;
- current_distance = ds.shift()
- }
- }
-
- if (res.length < 1) console.warn("Could not get LatLng from position " + pos_list);
- if (window.DEBUG) {
- console.log("Invert getLatLngFromPos("+ pos_list[0] + ") : " +
- JSON.stringify(self.getPercentageDistanceFromPolyline(res[0], polyline)));
- }
- return res;
- },
-
- cloneLatLng: function(latlng) {
- return new L.LatLng(latlng.lat, latlng.lng);
- },
-
- getPointOnLine: function(map, ratio_dist, ll1, ll2) {
- if (ratio_dist == 0.0) return ll1;
- if (ratio_dist == 1.0) return ll2;
- var zoom = map.getMaxZoom()
- , p1 = map.project(ll1, zoom)
- , p2 = map.project(ll2, zoom)
- , d = p1.distanceTo(p2);
-
- var x_new = p1.x + (p2.x - p1.x) * ratio_dist
- , y_new = p1.y + (p2.y - p1.y) * ratio_dist
- , ll_new = map.unproject(new L.Point(x_new, y_new), zoom);
- console.assert(!ll_new.equals(ll1) && !ll_new.equals(ll2), ratio_dist + ' got extremity (margin is ' + L.LatLng.MAX_MARGIN + ')');
- return ll_new;
- },
-
- getGradient: function(x1, y1, x2, y2) {
- var a = (y2 - y1) / (x2 - x1);
- var b = y1 - (a * x1);
- return {'a': a, 'b': b};
- },
-
- getDistances: function(xs) {
- var xs_len = 0.0, d, distances = [];
-
- for (var i = 0; i < xs.length - 1; i++) {
- var x1 = xs[i], x2 = xs[i+1];
- d = x1.distanceTo(x2);
-
- // acc: so far (without distance)
- distances.push({
- 'i1': i, 'i2': i+1,
- 'x1': x1, 'x2': x2,
- 'acc': xs_len, 'distance': d
- });
-
- xs_len += d
- }
- return {'length': xs_len, 'distances': distances};
- },
-
- // Calculate length (works for either points or latlngs)
- length: function(xs) {
- var xs_len = 0;
- for (var i = 0; i < xs.length - 1; i++) {
- xs_len += xs[i].distanceTo(xs[i+1]);
- }
- return xs_len;
- },
-
- distance: function (map, latlng1, latlng2) {
- return map.latLngToLayerPoint(latlng1).distanceTo(map.latLngToLayerPoint(latlng2));
- },
-
- distanceSegment: function (map, latlng, latlngA, latlngB) {
- var p = map.latLngToLayerPoint(latlng),
- p1 = map.latLngToLayerPoint(latlngA),
- p2 = map.latLngToLayerPoint(latlngB);
- return L.LineUtil.pointToSegmentDistance(p, p1, p2);
- },
-
- latlngOnSegment: function (map, latlng, latlngA, latlngB) {
- var maxzoom = map.getMaxZoom();
- var p = map.project(latlng, maxzoom),
- p1 = map.project(latlngA, maxzoom),
- p2 = map.project(latlngB, maxzoom);
- closest = L.LineUtil.closestPointOnSegment(p, p1, p2);
- return map.unproject(closest, maxzoom);
- },
-
- closestSegment: function (p, points) {
- var mindist = Number.MAX_VALUE
- , idx = 0;
- for (var i=0; i<points.length-1; i++) {
- var x = points[i]
- , d = p.distanceTo(x);
- if (d < mindist) {
- idx = i;
- }
- }
- return [points[idx], points[idx+1]];
- },
-
- closestOnLine: function (map, latlng, linestring) {
- return self.closestOnLatLngs(map, latlng, linestring.getLatLngs());
- },
-
- closestOnLatLngs: function (map, latlng, lls) {
- // Iterate on line segments
- var segmentmindist = Number.MAX_VALUE,
- ll = null;
- // Keep the closest point of all segments
- for (var j = 0; j < lls.length - 1; j++) {
- var p1 = lls[j],
- p2 = lls[j+1],
- d = self.distanceSegment(map, latlng, p1, p2);
- if (d < segmentmindist) {
- segmentmindist = d;
- ll = self.latlngOnSegment(map, latlng, p1, p2);
- }
- }
- return ll;
- },
-
- closest: function (map, marker, snaplist, snap_distance) {
- var mindist = Number.MAX_VALUE,
- chosen = null,
- point = null;
- var n = snaplist.length;
- // /!\ Careful with size of this list, iterated at every marker move!
- if (n>1000) console.warn("Snap list is very big : " + n + " objects!");
-
- // Iterate the whole snaplist
- for (var i = 0; i < n ; i++) {
- var object = snaplist[i],
- ll = null,
- distance = Number.MAX_VALUE;
- if (object.getLatLng) {
- // Single dimension, snap on points
- ll = object.getLatLng();
- distance = self.distance(map, marker.getLatLng(), ll);
- }
- else {
- ll = L.GeomUtils.closestOnLine(map, marker.getLatLng(), object);
- distance = L.GeomUtils.distance(map, marker.getLatLng(), ll);
- }
- // Keep the closest point of all objects
- if (distance < snap_distance && distance < mindist) {
- mindist = distance;
- chosen = object;
- point = ll;
- }
- }
- // Try to snap on line points (extremities and middle points)
- if (chosen && chosen.getLatLngs) {
- var mindist = snap_distance,
- linepoint = null;
- for (var i=0; i<chosen.getLatLngs().length; i++) {
- var lp = chosen.getLatLngs()[i],
- distance = L.GeomUtils.distance(map, point, lp);
- if (distance < mindist) {
- linepoint = lp;
- mindist = distance;
- }
- }
- if (linepoint) point = linepoint;
- }
- return [chosen, point];
- },
-
- isBefore: function (polyline, other) {
- var lls = polyline.getLatLngs(),
- ll_p = lls[lls.length - 1];
- if (!other) return false;
- var lls = other.getLatLngs()
- , ll_a = lls[0];
- return ll_p.equals(ll_a);
- },
-
- isAfter: function (polyline, other) {
- var ll_p = polyline.getLatLngs()[0];
- if (!other) return false;
- var lls = other.getLatLngs()
- , ll_b = lls[lls.length - 1];
- return ll_p.equals(ll_b);
- },
-
- isStartAtEdges: function (polyline, other) {
- /**
- * Returns true if the first point of the polyline
- * is equal to start or end of the other
- */
- var ll_p = polyline.getLatLngs()[0];
- if (!other) return false;
-
- var lls = other.getLatLngs()
- , ll_a = lls[0]
- , ll_b = lls[lls.length - 1];
-
- return ll_p.equals(ll_a) || ll_p.equals(ll_b);
- },
-
- lineReverse: function (line) {
- return L.polyline(line.getLatLngs().slice(0).reverse());
- }
- };
-})();
diff --git a/geotrek/core/static/core/multipath.js b/geotrek/core/static/core/multipath.js
index 8c28e6ed1a..49c3cc83f4 100644
--- a/geotrek/core/static/core/multipath.js
+++ b/geotrek/core/static/core/multipath.js
@@ -302,13 +302,12 @@ L.Handler.MultiPath = L.Handler.extend({
pop.toggleActivate();
- // If this was clicked, the marker should be close enought, snap it.
+ // If this was clicked, the marker should be close enough, snap it.
self.forceMarkerToLayer(marker, layer);
},
forceMarkerToLayer: function(marker, layer) {
- var self = this;
- var closest = L.GeomUtils.closestOnLine(self.map, marker.getLatLng(), layer);
+ var closest = L.GeometryUtil.closest(this.map, layer, marker.getLatLng());
marker.editing.updateClosest(marker, [layer, closest]);
},
@@ -436,7 +435,7 @@ L.Handler.MultiPath = L.Handler.extend({
*
* Each sub-topoogy is a way between markers. The first marker
* of the first sub-topology is the beginning, the last of the last is the end.
- * All others are intermediary points.
+ * All others are intermediary points (via markers)
*/
var self = this;
@@ -455,8 +454,8 @@ L.Handler.MultiPath = L.Handler.extend({
var start_layer = this.idToLayer(paths[0]);
var end_layer = this.idToLayer(paths[paths.length - 1]);
- var start_ll = L.GeomUtils.getLatLngFromPos(this.map, start_layer, [ first_pos ])[0];
- var end_ll = L.GeomUtils.getLatLngFromPos(this.map, end_layer, [ last_pos ])[0];
+ var start_ll = L.GeometryUtil.interpolateOnLine(this.map, start_layer, first_pos).latLng;
+ var end_ll = L.GeometryUtil.interpolateOnLine(this.map, end_layer, last_pos).latLng;
var state = {
start_ll: start_ll,
@@ -474,7 +473,7 @@ L.Handler.MultiPath = L.Handler.extend({
var pos2latlng = function (pos, layer) {
var used_pos = pos;
if (pos instanceof Array) {
- used_pos = pos[0];
+ used_pos = pos[1]; // Default is second position (think of last path of topology)
if (pos[0] == 0.0 && pos[1] != 1.0)
used_pos = pos[1];
if (pos[0] == 1.0 && pos[1] != 0.0)
@@ -485,11 +484,11 @@ L.Handler.MultiPath = L.Handler.extend({
used_pos = pos[0];
console.log("Chose " + used_pos + " for " + pos);
}
- var ll = L.GeomUtils.getLatLngFromPos(self.map, layer, [ used_pos ])[0];
- if (!ll) {
+ var interpolated = L.GeometryUtil.interpolateOnLine(self.map, layer, used_pos);
+ if (!interpolated) {
throw ('Could not interpolate ' + used_pos + ' on layer ' + layer.properties.pk);
}
- return ll;
+ return interpolated.latLng;
};
for (var i=0; i<topo.length; i++) {
@@ -738,7 +737,7 @@ Geotrek.PointOnPolyline = function (marker) {
// if valid
this.ll = null;
this.polyline = null;
- this.length = null;
+ this.path_length = null;
this.percent_distance = null;
this._activated = false;
@@ -753,10 +752,10 @@ Geotrek.PointOnPolyline = function (marker) {
this.ll = e.location;
this.polyline = e.object;
- this.length = L.GeomUtils.length(this.polyline.getLatLngs());
- var dd = L.GeomUtils.getPercentageDistanceFromPolyline(this.ll, this.polyline);
+ this.path_length = L.GeometryUtil.length(this.polyline);
+ var dd = L.GeometryUtil.locateOnLine(this.polyline._map, this.polyline, this.ll);
if (dd) {
- this.percent_distance = dd.distance;
+ this.percent_distance = dd;
this.events.fire('valid');
}
},
@@ -809,8 +808,8 @@ Geotrek.PointOnPolyline.prototype.addToGraph = function(graph) {
// To which nodes dist start_point/end_point corresponds ?
// The edge.nodes_id are ordered, it corresponds to polylines: coords[0] and coords[coords.length - 1]
- var dist_start_point = this.percent_distance * this.length
- , dist_end_point = (1 - this.percent_distance) * this.length
+ var dist_start_point = this.percent_distance * this.path_length
+ , dist_end_point = (1 - this.percent_distance) * this.path_length
;
var new_node_id = Geotrek.getNextId();
diff --git a/geotrek/core/tests/topology.py b/geotrek/core/tests/topology.py
index ea649a5134..68195cd576 100644
--- a/geotrek/core/tests/topology.py
+++ b/geotrek/core/tests/topology.py
@@ -424,6 +424,8 @@ def test_return_path_serialized(self):
(7, 10, 0), (5, 10, 0), (5, 0, 0),
(7.5, 0, 0)))
+
+class TopologyLoopTests(TestCase):
def test_simple_loop(self):
"""
==========
@@ -521,11 +523,14 @@ def test_spoon_loop_2(self):
(17, 5, 0), (20, 5, 0), # extra point due middle aggregation
(20, 0, 0), (16, 0, 0), (10, 0, 0), (3, 0, 0)))
- # Deserializing should work too
- topod = Topology.deserialize("""
- [{"positions":{"0":[0.3,1],"1":[0, 0.4]},"paths":[%(pk1)s,%(pk2)s]},
- {"positions":{"0":[0.4, 0.8]},"paths":[%(pk2)s]},
- {"positions":{"0":[0.8,1],"1":[1,0.3]},"paths":[%(pk2)s,%(pk1)s]}]""" % {'pk1': p1.pk, 'pk2': p2.pk})
+ # De/Serializing should work too
+ serialized = """
+ [{"kind": "TOPOLOGY","positions":{"0":[0.3,1],"1":[0, 0.4]},"paths":[%(pk1)s,%(pk2)s],"offset": 0.0},
+ {"kind": "TOPOLOGY","positions":{"0":[0.4, 0.8]},"paths":[%(pk2)s],"offset": 0.0},
+ {"kind": "TOPOLOGY","positions":{"0":[0.8,1],"1":[1,0.3]},"paths":[%(pk2)s,%(pk1)s],"offset": 0.0}]""" % {'pk1': p1.pk, 'pk2': p2.pk}
+
+ self.assertEqual(json.loads(serialized), json.loads(topo.serialize()))
+ topod = Topology.deserialize(serialized)
self.assertEqual(topo.geom, topod.geom)
self.assertEqual(len(topod.aggregations.all()), 7)
|
buildbot__buildbot-5219 | Gerrit Change Event Monitor: assertion error: codebase cannot be None
I am getting this stacktrace in the log with Buildbot 2.6.0. I do not see this problem with Buildbot 2.5.1.
2020-02-05 09:11:59-0500 [-] Unhandled Error
Traceback (most recent call last):
File "/home/buildbot/build-venv/lib/python3.6/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
result = g.send(result)
File "/home/buildbot/build-venv/lib/python3.6/site-packages/buildbot/changes/gerritchangesource.py", line 180, in addChange
self.master.db.sourcestamps.findOrCreateId(**stampdict))
File "/home/buildbot/build-venv/lib/python3.6/site-packages/twisted/internet/defer.py", line 1613, in unwindGenerator
return _cancellableInlineCallbacks(gen)
File "/home/buildbot/build-venv/lib/python3.6/site-packages/twisted/internet/defer.py", line 1529, in _cancellableInlineCallbacks
_inlineCallbacks(None, g, status)
--- <exception caught here> ---
File "/home/buildbot/build-venv/lib/python3.6/site-packages/buildbot/changes/gerritchangesource.py", line 343, in outReceived
yield self.change_source.lineReceived(line)
File "/home/buildbot/build-venv/lib/python3.6/site-packages/buildbot/changes/gerritchangesource.py", line 253, in addChangeFromEvent
'properties': properties})
File "/home/buildbot/build-venv/lib/python3.6/site-packages/buildbot/changes/gerritchangesource.py", line 180, in addChange
self.master.db.sourcestamps.findOrCreateId(**stampdict))
File "/home/buildbot/build-venv/lib/python3.6/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
result = g.send(result)
File "/home/buildbot/build-venv/lib/python3.6/site-packages/buildbot/db/sourcestamps.py", line 58, in findOrCreateId
assert codebase is not None, "codebase cannot be None"
builtins.AssertionError: codebase cannot be None
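The diff below fixes this by always supplying a codebase when `GerritChangeSourceBase.addChange` builds the sourcestamp kwargs; a minimal sketch of that idea (the helper name is illustrative, not part of Buildbot):

```python
def sourcestamp_dict(chdict):
    """Build kwargs for master.db.sourcestamps.findOrCreateId().

    Mirrors the fix in the diff below: an explicit (empty) codebase is always
    included, so the "codebase cannot be None" assertion can no longer fire.
    """
    return {
        "branch": chdict["branch"],
        "revision": chdict["revision"],
        "patch_author": chdict["author"],
        "patch_comment": chdict["comments"],
        "repository": chdict["repository"],
        "project": chdict["project"],
        "codebase": '',  # default to the empty codebase instead of omitting the key
    }
```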
| [
{
"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nimport copy\nimport datetime\nimport json\n\nfrom twisted.internet import defer\nfrom twisted.internet import reactor\nfrom twisted.internet import utils\nfrom twisted.python import log\n\nfrom buildbot import config\nfrom buildbot import util\nfrom buildbot.changes import base\nfrom buildbot.changes.filter import ChangeFilter\nfrom buildbot.util import bytes2unicode\nfrom buildbot.util import httpclientservice\nfrom buildbot.util.protocol import LineProcessProtocol\n\n\ndef _canonicalize_event(event):\n \"\"\"\n Return an event dictionary which is consistent between the gerrit\n event stream and the gerrit event log formats.\n \"\"\"\n # For \"patchset-created\" the events-log JSON looks like:\n # \"project\": {\"name\": \"buildbot\"}\n # while the stream-events JSON looks like:\n # \"project\": \"buildbot\"\n # so we canonicalize them to the latter\n if \"change\" not in event:\n return event\n\n change = event[\"change\"]\n if \"project\" not in change:\n return event\n\n project = change[\"project\"]\n if not isinstance(project, dict):\n return event\n\n if \"name\" not in project:\n return event\n\n event = copy.deepcopy(event)\n event[\"change\"][\"project\"] = project[\"name\"]\n return event\n\n\nclass GerritChangeFilter(ChangeFilter):\n\n \"\"\"This gerrit specific change filter helps creating pre-commit and post-commit builders\"\"\"\n\n def __init__(self,\n eventtype=None, eventtype_re=None, eventtype_fn=None, **kw):\n super().__init__(**kw)\n\n self.checks.update(\n self.createChecks(\n (eventtype, eventtype_re, eventtype_fn, \"prop:event.type\"),\n ))\n # for branch change filter, we take the real gerrit branch\n # instead of the change's branch, which is also used as a grouping key\n if \"branch\" in self.checks:\n self.checks[\"prop:event.change.branch\"] = self.checks[\"branch\"]\n del self.checks[\"branch\"]\n\n\ndef _gerrit_user_to_author(props, username=\"unknown\"):\n \"\"\"\n Convert Gerrit account properties to Buildbot format\n\n Take into account missing values\n \"\"\"\n username = props.get(\"username\", username)\n username = props.get(\"name\", username)\n if \"email\" in props:\n username += \" <%(email)s>\" % props\n return username\n\n\nclass GerritChangeSourceBase(base.ChangeSource):\n\n \"\"\"This source will maintain a connection to gerrit ssh server\n that will provide us gerrit events in json format.\"\"\"\n\n compare_attrs = (\"gerritserver\", \"gerritport\")\n name = None\n # list of properties that are no of no use to be put in the event dict\n EVENT_PROPERTY_BLACKLIST = [\"event.eventCreatedOn\"]\n\n def checkConfig(self,\n gitBaseURL=None,\n handled_events=(\"patchset-created\", \"ref-updated\"),\n debug=False,\n get_files=False):\n\n if gitBaseURL is None:\n config.error(\"gitBaseURL must be specified\")\n\n def 
reconfigService(self,\n gitBaseURL=None,\n handled_events=(\"patchset-created\", \"ref-updated\"),\n debug=False,\n get_files=False):\n self.gitBaseURL = gitBaseURL\n self.handled_events = list(handled_events)\n self._get_files = get_files\n self.debug = debug\n\n def lineReceived(self, line):\n try:\n event = json.loads(bytes2unicode(line))\n except ValueError:\n log.msg(\"bad json line: {}\".format(line))\n return defer.succeed(None)\n\n if not(isinstance(event, dict) and \"type\" in event):\n if self.debug:\n log.msg(\"no type in event {}\".format(line))\n return defer.succeed(None)\n\n return self.eventReceived(event)\n\n def eventReceived(self, event):\n if not (event['type'] in self.handled_events):\n if self.debug:\n log.msg(\"the event type '{}' is not setup to handle\".format(event['type']))\n return defer.succeed(None)\n\n # flatten the event dictionary, for easy access with WithProperties\n def flatten(properties, base, event):\n for k, v in event.items():\n name = \"{}.{}\".format(base, k)\n if name in self.EVENT_PROPERTY_BLACKLIST:\n continue\n if isinstance(v, dict):\n flatten(properties, name, v)\n else: # already there\n properties[name] = v\n\n properties = {}\n flatten(properties, \"event\", event)\n properties[\"event.source\"] = self.__class__.__name__\n func_name = \"eventReceived_{}\".format(event[\"type\"].replace(\"-\", \"_\"))\n func = getattr(self, func_name, None)\n if func is None:\n return self.addChangeFromEvent(properties, event)\n\n return func(properties, event)\n\n @defer.inlineCallbacks\n def addChange(self, chdict):\n stampdict = {\n \"branch\": chdict[\"branch\"],\n \"revision\": chdict[\"revision\"],\n \"patch_author\": chdict[\"author\"],\n \"patch_comment\": chdict[\"comments\"],\n \"repository\": chdict[\"repository\"],\n \"project\": chdict[\"project\"],\n }\n\n stampid, found_existing = yield(\n self.master.db.sourcestamps.findOrCreateId(**stampdict))\n\n if found_existing:\n if self.debug or True:\n eventstr = \"{}/{} -- {}:{}\".format(\n self.gitBaseURL, chdict[\"project\"], chdict[\"branch\"],\n chdict[\"revision\"])\n message = (\n \"gerrit: duplicate change event {} by {}\"\n .format(eventstr, self.__class__.__name__))\n log.msg(message.encode(\"utf-8\"))\n defer.returnValue(None)\n\n if self.debug:\n eventstr = \"{} -- {}:{}\".format(\n chdict[\"repository\"], chdict[\"branch\"], chdict[\"revision\"])\n message = (\n \"gerrit: adding change from {} in {}\"\n .format(eventstr, self.__class__.__name__))\n log.msg(message.encode(\"utf-8\"))\n\n try:\n yield self.master.data.updates.addChange(**chdict)\n except Exception:\n # eat failures..\n log.err('error adding change from GerritChangeSource')\n\n def getGroupingPolicyFromEvent(self, event):\n # At the moment, buildbot's change grouping strategy is hardcoded at various place\n # to be the 'branch' of an event.\n # With gerrit, you usually want to group by branch on post commit, and by changeid\n # on pre-commit.\n # we keep this customization point here, waiting to have a better grouping strategy support\n # in the core\n event_change = event[\"change\"]\n if event['type'] in ('patchset-created',):\n return \"{}/{}\".format(event_change[\"branch\"], event_change['number'])\n return event_change[\"branch\"]\n\n @defer.inlineCallbacks\n def addChangeFromEvent(self, properties, event):\n if \"change\" not in event:\n if self.debug:\n log.msg(\"unsupported event {}\".format(event[\"type\"]))\n return defer.returnValue(None)\n\n if \"patchSet\" not in event:\n if self.debug:\n 
log.msg(\"unsupported event {}\".format(event[\"type\"]))\n return defer.returnValue(None)\n\n event = _canonicalize_event(event)\n event_change = event[\"change\"]\n\n files = [\"unknown\"]\n if self._get_files:\n files = yield self.getFiles(\n change=event_change[\"number\"],\n patchset=event[\"patchSet\"][\"number\"]\n )\n\n yield self.addChange({\n 'author': _gerrit_user_to_author(event_change[\"owner\"]),\n 'project': util.bytes2unicode(event_change[\"project\"]),\n 'repository': \"{}/{}\".format(\n self.gitBaseURL, event_change[\"project\"]),\n 'branch': self.getGroupingPolicyFromEvent(event),\n 'revision': event[\"patchSet\"][\"revision\"],\n 'revlink': event_change[\"url\"],\n 'comments': event_change[\"subject\"],\n 'files': files,\n 'category': event[\"type\"],\n 'properties': properties})\n return None\n\n def eventReceived_ref_updated(self, properties, event):\n ref = event[\"refUpdate\"]\n author = \"gerrit\"\n\n if \"submitter\" in event:\n author = _gerrit_user_to_author(event[\"submitter\"], author)\n\n return self.addChange(dict(\n author=author,\n project=ref[\"project\"],\n repository=\"{}/{}\".format(self.gitBaseURL, ref[\"project\"]),\n branch=ref[\"refName\"],\n revision=ref[\"newRev\"],\n comments=\"Gerrit: patchset(s) merged.\",\n files=[\"unknown\"],\n category=event[\"type\"],\n properties=properties))\n\n\nclass GerritChangeSource(GerritChangeSourceBase):\n\n \"\"\"This source will maintain a connection to gerrit ssh server\n that will provide us gerrit events in json format.\"\"\"\n\n compare_attrs = (\"gerritserver\", \"gerritport\")\n\n STREAM_GOOD_CONNECTION_TIME = 120\n \"(seconds) connections longer than this are considered good, and reset the backoff timer\"\n\n STREAM_BACKOFF_MIN = 0.5\n \"(seconds) minimum, but nonzero, time to wait before retrying a failed connection\"\n\n STREAM_BACKOFF_EXPONENT = 1.5\n \"multiplier used to increase the backoff from MIN to MAX on repeated failures\"\n\n STREAM_BACKOFF_MAX = 60\n \"(seconds) maximum time to wait before retrying a failed connection\"\n\n name = None\n\n def checkConfig(self,\n gerritserver,\n username,\n gerritport=29418,\n identity_file=None,\n **kwargs):\n if self.name is None:\n self.name = \"GerritChangeSource:{}@{}:{}\".format(username, gerritserver, gerritport)\n if 'gitBaseURL' not in kwargs:\n kwargs['gitBaseURL'] = \"automatic at reconfigure\"\n super().checkConfig(**kwargs)\n\n def reconfigService(self,\n gerritserver,\n username,\n gerritport=29418,\n identity_file=None,\n name=None,\n **kwargs):\n if 'gitBaseURL' not in kwargs:\n kwargs['gitBaseURL'] = \"ssh://{}@{}:{}\".format(username, gerritserver, gerritport)\n self.gerritserver = gerritserver\n self.gerritport = gerritport\n self.username = username\n self.identity_file = identity_file\n self.process = None\n self.wantProcess = False\n self.streamProcessTimeout = self.STREAM_BACKOFF_MIN\n return super().reconfigService(**kwargs)\n\n class LocalPP(LineProcessProtocol):\n\n def __init__(self, change_source):\n super().__init__()\n self.change_source = change_source\n\n @defer.inlineCallbacks\n def outLineReceived(self, line):\n if self.change_source.debug:\n log.msg(b\"gerrit: \" + line)\n yield self.change_source.lineReceived(line)\n\n def errLineReceived(self, line):\n if self.change_source.debug:\n log.msg(b\"gerrit stderr: \" + line)\n\n def processEnded(self, status):\n super().processEnded(status)\n self.change_source.streamProcessStopped()\n\n def streamProcessStopped(self):\n self.process = None\n\n # if the service is stopped, 
don't try to restart the process\n if not self.wantProcess or not self.running:\n return\n\n now = util.now()\n if now - self.lastStreamProcessStart < \\\n self.STREAM_GOOD_CONNECTION_TIME:\n # bad startup; start the stream process again after a timeout,\n # and then increase the timeout\n log.msg(\n \"'gerrit stream-events' failed; restarting after %ds\"\n % round(self.streamProcessTimeout))\n self.master.reactor.callLater(\n self.streamProcessTimeout, self.startStreamProcess)\n self.streamProcessTimeout *= self.STREAM_BACKOFF_EXPONENT\n if self.streamProcessTimeout > self.STREAM_BACKOFF_MAX:\n self.streamProcessTimeout = self.STREAM_BACKOFF_MAX\n else:\n # good startup, but lost connection; restart immediately,\n # and set the timeout to its minimum\n\n # make sure we log the reconnection, so that it might be detected\n # and network connectivity fixed\n log.msg(\"gerrit stream-events lost connection. Reconnecting...\")\n self.startStreamProcess()\n self.streamProcessTimeout = self.STREAM_BACKOFF_MIN\n\n def _buildGerritCommand(self, *gerrit_args):\n '''Get an ssh command list which invokes gerrit with the given args on the\n remote host'''\n\n cmd = [\n \"ssh\",\n \"{}@{}\".format(self.username, self.gerritserver),\n \"-p\", str(self.gerritport)\n ]\n\n if self.identity_file is not None:\n cmd.extend([\"-i\", self.identity_file])\n\n cmd.append(\"gerrit\")\n cmd.extend(gerrit_args)\n return cmd\n\n def startStreamProcess(self):\n if self.debug:\n log.msg(\"starting 'gerrit stream-events'\")\n\n cmd = self._buildGerritCommand(\"stream-events\")\n self.lastStreamProcessStart = util.now()\n self.process = reactor.spawnProcess(self.LocalPP(self), \"ssh\", cmd, env=None)\n\n @defer.inlineCallbacks\n def getFiles(self, change, patchset):\n cmd = self._buildGerritCommand(\"query\", str(change), \"--format\", \"JSON\",\n \"--files\", \"--patch-sets\")\n\n if self.debug:\n log.msg(\"querying gerrit for changed files in change {}/{}: {}\".format(change, patchset,\n cmd))\n\n out = yield utils.getProcessOutput(cmd[0], cmd[1:], env=None)\n out = out.splitlines()[0]\n res = json.loads(bytes2unicode(out))\n\n if res.get(\"rowCount\") == 0:\n return [\"unknown\"]\n\n patchsets = {i[\"number\"]: i[\"files\"] for i in res[\"patchSets\"]}\n return [i[\"file\"] for i in patchsets[int(patchset)]]\n\n def activate(self):\n self.wantProcess = True\n self.startStreamProcess()\n\n def deactivate(self):\n self.wantProcess = False\n if self.process:\n self.process.signalProcess(\"KILL\")\n # TODO: if this occurs while the process is restarting, some exceptions\n # may be logged, although things will settle down normally\n\n def describe(self):\n status = \"\"\n if not self.process:\n status = \"[NOT CONNECTED - check log]\"\n return ((\"GerritChangeSource watching the remote \"\n \"Gerrit repository {}@{} {}\").format(self.username, self.gerritserver, status))\n\n\nclass GerritEventLogPoller(GerritChangeSourceBase):\n\n POLL_INTERVAL_SEC = 30\n FIRST_FETCH_LOOKBACK_DAYS = 30\n\n def checkConfig(self,\n baseURL,\n auth,\n pollInterval=POLL_INTERVAL_SEC,\n pollAtLaunch=True,\n firstFetchLookback=FIRST_FETCH_LOOKBACK_DAYS,\n **kwargs):\n if self.name is None:\n self.name = \"GerritEventLogPoller:{}\".format(baseURL)\n super().checkConfig(**kwargs)\n\n @defer.inlineCallbacks\n def reconfigService(self,\n baseURL,\n auth,\n pollInterval=POLL_INTERVAL_SEC,\n pollAtLaunch=True,\n firstFetchLookback=FIRST_FETCH_LOOKBACK_DAYS,\n **kwargs):\n\n yield super().reconfigService(**kwargs)\n if baseURL.endswith('/'):\n baseURL 
= baseURL[:-1]\n\n self._pollInterval = pollInterval\n self._pollAtLaunch = pollAtLaunch\n self._oid = yield self.master.db.state.getObjectId(self.name, self.__class__.__name__)\n self._http = yield httpclientservice.HTTPClientService.getService(\n self.master, baseURL, auth=auth)\n\n self._first_fetch_lookback = firstFetchLookback\n self._last_event_time = None\n\n @staticmethod\n def now():\n \"\"\"patchable now (datetime is not patchable as builtin)\"\"\"\n return datetime.datetime.utcnow()\n\n @defer.inlineCallbacks\n def poll(self):\n last_event_ts = yield self.master.db.state.getState(self._oid, 'last_event_ts', None)\n if last_event_ts is None:\n # If there is not last event time stored in the database, then set\n # the last event time to some historical look-back\n last_event = self.now() - datetime.timedelta(days=self._first_fetch_lookback)\n else:\n last_event = datetime.datetime.utcfromtimestamp(last_event_ts)\n last_event_formatted = last_event.strftime(\"%Y-%m-%d %H:%M:%S\")\n\n if self.debug:\n log.msg(\"Polling gerrit: {}\".format(last_event_formatted).encode(\"utf-8\"))\n\n res = yield self._http.get(\"/plugins/events-log/events/\",\n params=dict(t1=last_event_formatted))\n lines = yield res.content()\n for line in lines.splitlines():\n yield self.lineReceived(line)\n\n @defer.inlineCallbacks\n def eventReceived(self, event):\n res = yield super().eventReceived(event)\n if 'eventCreatedOn' in event:\n yield self.master.db.state.setState(self._oid, 'last_event_ts', event['eventCreatedOn'])\n return res\n\n @defer.inlineCallbacks\n def getFiles(self, change, patchset):\n res = yield self._http.get(\"/changes/{}/revisions/{}/files/\".format(change, patchset))\n res = yield res.content()\n\n res = res.splitlines()[1].decode('utf8') # the first line of every response is `)]}'`\n return list(json.loads(res))\n\n # FIXME this copy the code from PollingChangeSource\n # but as PollingChangeSource and its subclasses need to be ported to reconfigurability\n # we can't use it right now\n @base.poll_method\n def doPoll(self):\n d = defer.maybeDeferred(self.poll)\n d.addErrback(log.err, 'while polling for changes')\n return d\n\n def force(self):\n self.doPoll()\n\n def activate(self):\n self.doPoll.start(interval=self._pollInterval, now=self._pollAtLaunch)\n\n def deactivate(self):\n return self.doPoll.stop()\n\n def describe(self):\n msg = (\"GerritEventLogPoller watching the remote \"\n \"Gerrit repository {}\")\n return msg.format(self.name)\n",
"path": "master/buildbot/changes/gerritchangesource.py"
}
] | [
{
"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nimport copy\nimport datetime\nimport json\n\nfrom twisted.internet import defer\nfrom twisted.internet import reactor\nfrom twisted.internet import utils\nfrom twisted.python import log\n\nfrom buildbot import config\nfrom buildbot import util\nfrom buildbot.changes import base\nfrom buildbot.changes.filter import ChangeFilter\nfrom buildbot.util import bytes2unicode\nfrom buildbot.util import httpclientservice\nfrom buildbot.util.protocol import LineProcessProtocol\n\n\ndef _canonicalize_event(event):\n \"\"\"\n Return an event dictionary which is consistent between the gerrit\n event stream and the gerrit event log formats.\n \"\"\"\n # For \"patchset-created\" the events-log JSON looks like:\n # \"project\": {\"name\": \"buildbot\"}\n # while the stream-events JSON looks like:\n # \"project\": \"buildbot\"\n # so we canonicalize them to the latter\n if \"change\" not in event:\n return event\n\n change = event[\"change\"]\n if \"project\" not in change:\n return event\n\n project = change[\"project\"]\n if not isinstance(project, dict):\n return event\n\n if \"name\" not in project:\n return event\n\n event = copy.deepcopy(event)\n event[\"change\"][\"project\"] = project[\"name\"]\n return event\n\n\nclass GerritChangeFilter(ChangeFilter):\n\n \"\"\"This gerrit specific change filter helps creating pre-commit and post-commit builders\"\"\"\n\n def __init__(self,\n eventtype=None, eventtype_re=None, eventtype_fn=None, **kw):\n super().__init__(**kw)\n\n self.checks.update(\n self.createChecks(\n (eventtype, eventtype_re, eventtype_fn, \"prop:event.type\"),\n ))\n # for branch change filter, we take the real gerrit branch\n # instead of the change's branch, which is also used as a grouping key\n if \"branch\" in self.checks:\n self.checks[\"prop:event.change.branch\"] = self.checks[\"branch\"]\n del self.checks[\"branch\"]\n\n\ndef _gerrit_user_to_author(props, username=\"unknown\"):\n \"\"\"\n Convert Gerrit account properties to Buildbot format\n\n Take into account missing values\n \"\"\"\n username = props.get(\"username\", username)\n username = props.get(\"name\", username)\n if \"email\" in props:\n username += \" <%(email)s>\" % props\n return username\n\n\nclass GerritChangeSourceBase(base.ChangeSource):\n\n \"\"\"This source will maintain a connection to gerrit ssh server\n that will provide us gerrit events in json format.\"\"\"\n\n compare_attrs = (\"gerritserver\", \"gerritport\")\n name = None\n # list of properties that are no of no use to be put in the event dict\n EVENT_PROPERTY_BLACKLIST = [\"event.eventCreatedOn\"]\n\n def checkConfig(self,\n gitBaseURL=None,\n handled_events=(\"patchset-created\", \"ref-updated\"),\n debug=False,\n get_files=False):\n\n if gitBaseURL is None:\n config.error(\"gitBaseURL must be specified\")\n\n def 
reconfigService(self,\n gitBaseURL=None,\n handled_events=(\"patchset-created\", \"ref-updated\"),\n debug=False,\n get_files=False):\n self.gitBaseURL = gitBaseURL\n self.handled_events = list(handled_events)\n self._get_files = get_files\n self.debug = debug\n\n def lineReceived(self, line):\n try:\n event = json.loads(bytes2unicode(line))\n except ValueError:\n log.msg(\"bad json line: {}\".format(line))\n return defer.succeed(None)\n\n if not(isinstance(event, dict) and \"type\" in event):\n if self.debug:\n log.msg(\"no type in event {}\".format(line))\n return defer.succeed(None)\n\n return self.eventReceived(event)\n\n def eventReceived(self, event):\n if not (event['type'] in self.handled_events):\n if self.debug:\n log.msg(\"the event type '{}' is not setup to handle\".format(event['type']))\n return defer.succeed(None)\n\n # flatten the event dictionary, for easy access with WithProperties\n def flatten(properties, base, event):\n for k, v in event.items():\n name = \"{}.{}\".format(base, k)\n if name in self.EVENT_PROPERTY_BLACKLIST:\n continue\n if isinstance(v, dict):\n flatten(properties, name, v)\n else: # already there\n properties[name] = v\n\n properties = {}\n flatten(properties, \"event\", event)\n properties[\"event.source\"] = self.__class__.__name__\n func_name = \"eventReceived_{}\".format(event[\"type\"].replace(\"-\", \"_\"))\n func = getattr(self, func_name, None)\n if func is None:\n return self.addChangeFromEvent(properties, event)\n\n return func(properties, event)\n\n @defer.inlineCallbacks\n def addChange(self, chdict):\n stampdict = {\n \"branch\": chdict[\"branch\"],\n \"revision\": chdict[\"revision\"],\n \"patch_author\": chdict[\"author\"],\n \"patch_comment\": chdict[\"comments\"],\n \"repository\": chdict[\"repository\"],\n \"project\": chdict[\"project\"],\n \"codebase\": '',\n }\n\n stampid, found_existing = yield(\n self.master.db.sourcestamps.findOrCreateId(**stampdict))\n\n if found_existing:\n if self.debug or True:\n eventstr = \"{}/{} -- {}:{}\".format(\n self.gitBaseURL, chdict[\"project\"], chdict[\"branch\"],\n chdict[\"revision\"])\n message = (\n \"gerrit: duplicate change event {} by {}\"\n .format(eventstr, self.__class__.__name__))\n log.msg(message.encode(\"utf-8\"))\n defer.returnValue(None)\n\n if self.debug:\n eventstr = \"{} -- {}:{}\".format(\n chdict[\"repository\"], chdict[\"branch\"], chdict[\"revision\"])\n message = (\n \"gerrit: adding change from {} in {}\"\n .format(eventstr, self.__class__.__name__))\n log.msg(message.encode(\"utf-8\"))\n\n try:\n yield self.master.data.updates.addChange(**chdict)\n except Exception:\n # eat failures..\n log.err('error adding change from GerritChangeSource')\n\n def getGroupingPolicyFromEvent(self, event):\n # At the moment, buildbot's change grouping strategy is hardcoded at various place\n # to be the 'branch' of an event.\n # With gerrit, you usually want to group by branch on post commit, and by changeid\n # on pre-commit.\n # we keep this customization point here, waiting to have a better grouping strategy support\n # in the core\n event_change = event[\"change\"]\n if event['type'] in ('patchset-created',):\n return \"{}/{}\".format(event_change[\"branch\"], event_change['number'])\n return event_change[\"branch\"]\n\n @defer.inlineCallbacks\n def addChangeFromEvent(self, properties, event):\n if \"change\" not in event:\n if self.debug:\n log.msg(\"unsupported event {}\".format(event[\"type\"]))\n return defer.returnValue(None)\n\n if \"patchSet\" not in event:\n if self.debug:\n 
log.msg(\"unsupported event {}\".format(event[\"type\"]))\n return defer.returnValue(None)\n\n event = _canonicalize_event(event)\n event_change = event[\"change\"]\n\n files = [\"unknown\"]\n if self._get_files:\n files = yield self.getFiles(\n change=event_change[\"number\"],\n patchset=event[\"patchSet\"][\"number\"]\n )\n\n yield self.addChange({\n 'author': _gerrit_user_to_author(event_change[\"owner\"]),\n 'project': util.bytes2unicode(event_change[\"project\"]),\n 'repository': \"{}/{}\".format(\n self.gitBaseURL, event_change[\"project\"]),\n 'branch': self.getGroupingPolicyFromEvent(event),\n 'revision': event[\"patchSet\"][\"revision\"],\n 'revlink': event_change[\"url\"],\n 'comments': event_change[\"subject\"],\n 'files': files,\n 'category': event[\"type\"],\n 'properties': properties})\n return None\n\n def eventReceived_ref_updated(self, properties, event):\n ref = event[\"refUpdate\"]\n author = \"gerrit\"\n\n if \"submitter\" in event:\n author = _gerrit_user_to_author(event[\"submitter\"], author)\n\n return self.addChange(dict(\n author=author,\n project=ref[\"project\"],\n repository=\"{}/{}\".format(self.gitBaseURL, ref[\"project\"]),\n branch=ref[\"refName\"],\n revision=ref[\"newRev\"],\n comments=\"Gerrit: patchset(s) merged.\",\n files=[\"unknown\"],\n category=event[\"type\"],\n properties=properties))\n\n\nclass GerritChangeSource(GerritChangeSourceBase):\n\n \"\"\"This source will maintain a connection to gerrit ssh server\n that will provide us gerrit events in json format.\"\"\"\n\n compare_attrs = (\"gerritserver\", \"gerritport\")\n\n STREAM_GOOD_CONNECTION_TIME = 120\n \"(seconds) connections longer than this are considered good, and reset the backoff timer\"\n\n STREAM_BACKOFF_MIN = 0.5\n \"(seconds) minimum, but nonzero, time to wait before retrying a failed connection\"\n\n STREAM_BACKOFF_EXPONENT = 1.5\n \"multiplier used to increase the backoff from MIN to MAX on repeated failures\"\n\n STREAM_BACKOFF_MAX = 60\n \"(seconds) maximum time to wait before retrying a failed connection\"\n\n name = None\n\n def checkConfig(self,\n gerritserver,\n username,\n gerritport=29418,\n identity_file=None,\n **kwargs):\n if self.name is None:\n self.name = \"GerritChangeSource:{}@{}:{}\".format(username, gerritserver, gerritport)\n if 'gitBaseURL' not in kwargs:\n kwargs['gitBaseURL'] = \"automatic at reconfigure\"\n super().checkConfig(**kwargs)\n\n def reconfigService(self,\n gerritserver,\n username,\n gerritport=29418,\n identity_file=None,\n name=None,\n **kwargs):\n if 'gitBaseURL' not in kwargs:\n kwargs['gitBaseURL'] = \"ssh://{}@{}:{}\".format(username, gerritserver, gerritport)\n self.gerritserver = gerritserver\n self.gerritport = gerritport\n self.username = username\n self.identity_file = identity_file\n self.process = None\n self.wantProcess = False\n self.streamProcessTimeout = self.STREAM_BACKOFF_MIN\n return super().reconfigService(**kwargs)\n\n class LocalPP(LineProcessProtocol):\n\n def __init__(self, change_source):\n super().__init__()\n self.change_source = change_source\n\n @defer.inlineCallbacks\n def outLineReceived(self, line):\n if self.change_source.debug:\n log.msg(b\"gerrit: \" + line)\n yield self.change_source.lineReceived(line)\n\n def errLineReceived(self, line):\n if self.change_source.debug:\n log.msg(b\"gerrit stderr: \" + line)\n\n def processEnded(self, status):\n super().processEnded(status)\n self.change_source.streamProcessStopped()\n\n def streamProcessStopped(self):\n self.process = None\n\n # if the service is stopped, 
don't try to restart the process\n if not self.wantProcess or not self.running:\n return\n\n now = util.now()\n if now - self.lastStreamProcessStart < \\\n self.STREAM_GOOD_CONNECTION_TIME:\n # bad startup; start the stream process again after a timeout,\n # and then increase the timeout\n log.msg(\n \"'gerrit stream-events' failed; restarting after %ds\"\n % round(self.streamProcessTimeout))\n self.master.reactor.callLater(\n self.streamProcessTimeout, self.startStreamProcess)\n self.streamProcessTimeout *= self.STREAM_BACKOFF_EXPONENT\n if self.streamProcessTimeout > self.STREAM_BACKOFF_MAX:\n self.streamProcessTimeout = self.STREAM_BACKOFF_MAX\n else:\n # good startup, but lost connection; restart immediately,\n # and set the timeout to its minimum\n\n # make sure we log the reconnection, so that it might be detected\n # and network connectivity fixed\n log.msg(\"gerrit stream-events lost connection. Reconnecting...\")\n self.startStreamProcess()\n self.streamProcessTimeout = self.STREAM_BACKOFF_MIN\n\n def _buildGerritCommand(self, *gerrit_args):\n '''Get an ssh command list which invokes gerrit with the given args on the\n remote host'''\n\n cmd = [\n \"ssh\",\n \"{}@{}\".format(self.username, self.gerritserver),\n \"-p\", str(self.gerritport)\n ]\n\n if self.identity_file is not None:\n cmd.extend([\"-i\", self.identity_file])\n\n cmd.append(\"gerrit\")\n cmd.extend(gerrit_args)\n return cmd\n\n def startStreamProcess(self):\n if self.debug:\n log.msg(\"starting 'gerrit stream-events'\")\n\n cmd = self._buildGerritCommand(\"stream-events\")\n self.lastStreamProcessStart = util.now()\n self.process = reactor.spawnProcess(self.LocalPP(self), \"ssh\", cmd, env=None)\n\n @defer.inlineCallbacks\n def getFiles(self, change, patchset):\n cmd = self._buildGerritCommand(\"query\", str(change), \"--format\", \"JSON\",\n \"--files\", \"--patch-sets\")\n\n if self.debug:\n log.msg(\"querying gerrit for changed files in change {}/{}: {}\".format(change, patchset,\n cmd))\n\n out = yield utils.getProcessOutput(cmd[0], cmd[1:], env=None)\n out = out.splitlines()[0]\n res = json.loads(bytes2unicode(out))\n\n if res.get(\"rowCount\") == 0:\n return [\"unknown\"]\n\n patchsets = {i[\"number\"]: i[\"files\"] for i in res[\"patchSets\"]}\n return [i[\"file\"] for i in patchsets[int(patchset)]]\n\n def activate(self):\n self.wantProcess = True\n self.startStreamProcess()\n\n def deactivate(self):\n self.wantProcess = False\n if self.process:\n self.process.signalProcess(\"KILL\")\n # TODO: if this occurs while the process is restarting, some exceptions\n # may be logged, although things will settle down normally\n\n def describe(self):\n status = \"\"\n if not self.process:\n status = \"[NOT CONNECTED - check log]\"\n return ((\"GerritChangeSource watching the remote \"\n \"Gerrit repository {}@{} {}\").format(self.username, self.gerritserver, status))\n\n\nclass GerritEventLogPoller(GerritChangeSourceBase):\n\n POLL_INTERVAL_SEC = 30\n FIRST_FETCH_LOOKBACK_DAYS = 30\n\n def checkConfig(self,\n baseURL,\n auth,\n pollInterval=POLL_INTERVAL_SEC,\n pollAtLaunch=True,\n firstFetchLookback=FIRST_FETCH_LOOKBACK_DAYS,\n **kwargs):\n if self.name is None:\n self.name = \"GerritEventLogPoller:{}\".format(baseURL)\n super().checkConfig(**kwargs)\n\n @defer.inlineCallbacks\n def reconfigService(self,\n baseURL,\n auth,\n pollInterval=POLL_INTERVAL_SEC,\n pollAtLaunch=True,\n firstFetchLookback=FIRST_FETCH_LOOKBACK_DAYS,\n **kwargs):\n\n yield super().reconfigService(**kwargs)\n if baseURL.endswith('/'):\n baseURL 
= baseURL[:-1]\n\n self._pollInterval = pollInterval\n self._pollAtLaunch = pollAtLaunch\n self._oid = yield self.master.db.state.getObjectId(self.name, self.__class__.__name__)\n self._http = yield httpclientservice.HTTPClientService.getService(\n self.master, baseURL, auth=auth)\n\n self._first_fetch_lookback = firstFetchLookback\n self._last_event_time = None\n\n @staticmethod\n def now():\n \"\"\"patchable now (datetime is not patchable as builtin)\"\"\"\n return datetime.datetime.utcnow()\n\n @defer.inlineCallbacks\n def poll(self):\n last_event_ts = yield self.master.db.state.getState(self._oid, 'last_event_ts', None)\n if last_event_ts is None:\n # If there is not last event time stored in the database, then set\n # the last event time to some historical look-back\n last_event = self.now() - datetime.timedelta(days=self._first_fetch_lookback)\n else:\n last_event = datetime.datetime.utcfromtimestamp(last_event_ts)\n last_event_formatted = last_event.strftime(\"%Y-%m-%d %H:%M:%S\")\n\n if self.debug:\n log.msg(\"Polling gerrit: {}\".format(last_event_formatted).encode(\"utf-8\"))\n\n res = yield self._http.get(\"/plugins/events-log/events/\",\n params=dict(t1=last_event_formatted))\n lines = yield res.content()\n for line in lines.splitlines():\n yield self.lineReceived(line)\n\n @defer.inlineCallbacks\n def eventReceived(self, event):\n res = yield super().eventReceived(event)\n if 'eventCreatedOn' in event:\n yield self.master.db.state.setState(self._oid, 'last_event_ts', event['eventCreatedOn'])\n return res\n\n @defer.inlineCallbacks\n def getFiles(self, change, patchset):\n res = yield self._http.get(\"/changes/{}/revisions/{}/files/\".format(change, patchset))\n res = yield res.content()\n\n res = res.splitlines()[1].decode('utf8') # the first line of every response is `)]}'`\n return list(json.loads(res))\n\n # FIXME this copy the code from PollingChangeSource\n # but as PollingChangeSource and its subclasses need to be ported to reconfigurability\n # we can't use it right now\n @base.poll_method\n def doPoll(self):\n d = defer.maybeDeferred(self.poll)\n d.addErrback(log.err, 'while polling for changes')\n return d\n\n def force(self):\n self.doPoll()\n\n def activate(self):\n self.doPoll.start(interval=self._pollInterval, now=self._pollAtLaunch)\n\n def deactivate(self):\n return self.doPoll.stop()\n\n def describe(self):\n msg = (\"GerritEventLogPoller watching the remote \"\n \"Gerrit repository {}\")\n return msg.format(self.name)\n",
"path": "master/buildbot/changes/gerritchangesource.py"
}
] | diff --git a/master/buildbot/changes/gerritchangesource.py b/master/buildbot/changes/gerritchangesource.py
index fd50922571c2..c800e3f290d2 100644
--- a/master/buildbot/changes/gerritchangesource.py
+++ b/master/buildbot/changes/gerritchangesource.py
@@ -171,6 +171,7 @@ def addChange(self, chdict):
"patch_comment": chdict["comments"],
"repository": chdict["repository"],
"project": chdict["project"],
+ "codebase": '',
}
stampid, found_existing = yield(
diff --git a/master/buildbot/newsfragments/handle-default-codebase-in-gerrit.bugfix b/master/buildbot/newsfragments/handle-default-codebase-in-gerrit.bugfix
new file mode 100644
index 000000000000..6e68e22c54c4
--- /dev/null
+++ b/master/buildbot/newsfragments/handle-default-codebase-in-gerrit.bugfix
@@ -0,0 +1,2 @@
+Work around incomplete support for codebases in GerritChangeSource (:issue:`5190`). This avoids an internal assertion when the configuration file does not specify any codebases.
+
diff --git a/master/buildbot/test/fakedb/sourcestamps.py b/master/buildbot/test/fakedb/sourcestamps.py
index eb9404d03925..c5f118b3ef9f 100644
--- a/master/buildbot/test/fakedb/sourcestamps.py
+++ b/master/buildbot/test/fakedb/sourcestamps.py
@@ -98,6 +98,11 @@ def findOrCreateId(self, branch=None, revision=None, repository=None,
patch_body=None, patch_level=None,
patch_author=None, patch_comment=None,
patch_subdir=None):
+
+ assert codebase is not None, "codebase cannot be None"
+ assert project is not None, "project cannot be None"
+ assert repository is not None, "repository cannot be None"
+
if patch_body:
patchid = len(self.patches) + 1
while patchid in self.patches:
|
kubeflow__pipelines-4319 | allow output artifact store configuration (vs hard coded)
It seems like the output artifacts are always stored in a specific MinIO service, port, namespace, bucket, secret, etc. (`minio-service.kubeflow:9000`).
see: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148
It would be great to make this flexible, e.g. allow using S3, or change the namespace or bucket names.
I suggest making it configurable; I can do such a PR if we agree it's needed. A sketch of the idea follows below.
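As an illustration only: none of these environment variable names exist in the KFP SDK, the endpoint default is the value quoted above, and the bucket and secret names are assumptions. The artifact store location could be resolved from the environment instead of being fixed:

```python
import os

def artifact_store_config():
    """Hypothetical helper: read the artifact store location from env vars
    instead of hard-coding minio-service.kubeflow:9000."""
    return {
        'endpoint': os.environ.get('ARTIFACT_STORE_ENDPOINT', 'minio-service.kubeflow:9000'),
        'bucket': os.environ.get('ARTIFACT_STORE_BUCKET', 'mlpipeline'),
        'secret_name': os.environ.get('ARTIFACT_STORE_SECRET', 'mlpipeline-minio-artifact'),
    }
```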
flexible pipeline service (host) path in client SDK
When creating an SDK `Client()`, the path to the `ml-pipeline` API service is loaded from a hard-coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`), which indicates a specific k8s namespace. It would be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:
`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`
to:
`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME', Client.IN_CLUSTER_DNS_NAME)`
Also note that when a user provides the `host` parameter, the IPython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`); this seems like a potential bug.
If it's acceptable, I can submit a PR for the line change above.
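A minimal sketch of the proposed change (the env var name `ML_PIPELINE_DNS_NAME` is only this issue's suggestion; the SDK code shown below uses `KF_PIPELINES_ENDPOINT` for a similar host override):

```python
import os

# In-cluster default quoted above; the real Client formats the namespace in.
IN_CLUSTER_DNS_NAME = 'ml-pipeline.kubeflow.svc.cluster.local:8888'

def resolve_host(host=None):
    """Return the explicit host, an env-var override, or the in-cluster default.

    ML_PIPELINE_DNS_NAME is the env var name proposed in this issue, not one
    the current SDK defines.
    """
    return host or os.environ.get('ML_PIPELINE_DNS_NAME', IN_CLUSTER_DNS_NAME)
```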
| [
{
"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport time\nimport logging\nimport json\nimport os\nimport re\nimport tarfile\nimport tempfile\nimport warnings\nimport yaml\nimport zipfile\nimport datetime\nfrom typing import Mapping, Callable, Optional\n\nimport kfp\nimport kfp_server_api\n\nfrom kfp.compiler import compiler\nfrom kfp.compiler._k8s_helper import sanitize_k8s_name\n\nfrom kfp._auth import get_auth_token, get_gcp_access_token\n\n# TTL of the access token associated with the client. This is needed because\n# `gcloud auth print-access-token` generates a token with TTL=1 hour, after\n# which the authentication expires. This TTL is needed for kfp.Client()\n# initialized with host=<inverse proxy endpoint>.\n# Set to 55 mins to provide some safe margin.\n_GCP_ACCESS_TOKEN_TIMEOUT = datetime.timedelta(minutes=55)\n# Operators on scalar values. Only applies to one of |int_value|,\n# |long_value|, |string_value| or |timestamp_value|.\n_FILTER_OPERATIONS = {\"UNKNOWN\": 0,\n \"EQUALS\" : 1,\n \"NOT_EQUALS\" : 2,\n \"GREATER_THAN\": 3,\n \"GREATER_THAN_EQUALS\": 5,\n \"LESS_THAN\": 6,\n \"LESS_THAN_EQUALS\": 7}\n\ndef _add_generated_apis(target_struct, api_module, api_client):\n \"\"\"Initializes a hierarchical API object based on the generated API module.\n PipelineServiceApi.create_pipeline becomes target_struct.pipelines.create_pipeline\n \"\"\"\n Struct = type('Struct', (), {})\n\n def camel_case_to_snake_case(name):\n import re\n return re.sub('([a-z0-9])([A-Z])', r'\\1_\\2', name).lower()\n\n for api_name in dir(api_module):\n if not api_name.endswith('ServiceApi'):\n continue\n\n short_api_name = camel_case_to_snake_case(api_name[0:-len('ServiceApi')]) + 's'\n api_struct = Struct()\n setattr(target_struct, short_api_name, api_struct)\n service_api = getattr(api_module.api, api_name)\n initialized_service_api = service_api(api_client)\n for member_name in dir(initialized_service_api):\n if member_name.startswith('_') or member_name.endswith('_with_http_info'):\n continue\n\n bound_member = getattr(initialized_service_api, member_name)\n setattr(api_struct, member_name, bound_member)\n models_struct = Struct()\n for member_name in dir(api_module.models):\n if not member_name[0].islower():\n setattr(models_struct, member_name, getattr(api_module.models, member_name))\n target_struct.api_models = models_struct\n\n\nKF_PIPELINES_ENDPOINT_ENV = 'KF_PIPELINES_ENDPOINT'\nKF_PIPELINES_UI_ENDPOINT_ENV = 'KF_PIPELINES_UI_ENDPOINT'\nKF_PIPELINES_DEFAULT_EXPERIMENT_NAME = 'KF_PIPELINES_DEFAULT_EXPERIMENT_NAME'\nKF_PIPELINES_OVERRIDE_EXPERIMENT_NAME = 'KF_PIPELINES_OVERRIDE_EXPERIMENT_NAME'\n\n\nclass Client(object):\n \"\"\"API Client for KubeFlow Pipeline.\n\n Args:\n host: The host name to use to talk to Kubeflow Pipelines. If not set, the in-cluster\n service DNS name will be used, which only works if the current environment is a pod\n in the same cluster (such as a Jupyter instance spawned by Kubeflow's\n JupyterHub). 
If you have a different connection to cluster, such as a kubectl\n proxy connection, then set it to something like \"127.0.0.1:8080/pipeline.\n If you connect to an IAP enabled cluster, set it to\n https://<your-deployment>.endpoints.<your-project>.cloud.goog/pipeline\".\n client_id: The client ID used by Identity-Aware Proxy.\n namespace: The namespace where the kubeflow pipeline system is run.\n other_client_id: The client ID used to obtain the auth codes and refresh tokens.\n Reference: https://cloud.google.com/iap/docs/authentication-howto#authenticating_from_a_desktop_app.\n other_client_secret: The client secret used to obtain the auth codes and refresh tokens.\n existing_token: Pass in token directly, it's used for cases better get token outside of SDK, e.x. GCP Cloud Functions\n or caller already has a token\n cookies: CookieJar object containing cookies that will be passed to the pipelines API.\n proxy: HTTP or HTTPS proxy server\n ssl_ca_cert: Cert for proxy\n \"\"\"\n\n # in-cluster DNS name of the pipeline service\n IN_CLUSTER_DNS_NAME = 'ml-pipeline.{}.svc.cluster.local:8888'\n KUBE_PROXY_PATH = 'api/v1/namespaces/{}/services/ml-pipeline:http/proxy/'\n\n LOCAL_KFP_CONTEXT = os.path.expanduser('~/.config/kfp/context.json')\n\n # TODO: Wrap the configurations for different authentication methods.\n def __init__(self, host=None, client_id=None, namespace='kubeflow', other_client_id=None, other_client_secret=None, existing_token=None, cookies=None, proxy=None, ssl_ca_cert=None):\n \"\"\"Create a new instance of kfp client.\n \"\"\"\n host = host or os.environ.get(KF_PIPELINES_ENDPOINT_ENV)\n self._uihost = os.environ.get(KF_PIPELINES_UI_ENDPOINT_ENV, host)\n config = self._load_config(host, client_id, namespace, other_client_id, other_client_secret, existing_token, proxy, ssl_ca_cert)\n # Save the loaded API client configuration, as a reference if update is\n # needed.\n self._existing_config = config\n api_client = kfp_server_api.api_client.ApiClient(config, cookie=cookies)\n _add_generated_apis(self, kfp_server_api, api_client)\n self._job_api = kfp_server_api.api.job_service_api.JobServiceApi(api_client)\n self._run_api = kfp_server_api.api.run_service_api.RunServiceApi(api_client)\n self._experiment_api = kfp_server_api.api.experiment_service_api.ExperimentServiceApi(api_client)\n self._pipelines_api = kfp_server_api.api.pipeline_service_api.PipelineServiceApi(api_client)\n self._upload_api = kfp_server_api.api.PipelineUploadServiceApi(api_client)\n self._load_context_setting_or_default()\n\n def _load_config(self, host, client_id, namespace, other_client_id, other_client_secret, existing_token, proxy, ssl_ca_cert):\n config = kfp_server_api.configuration.Configuration()\n\n if proxy:\n # https://github.com/kubeflow/pipelines/blob/c6ac5e0b1fd991e19e96419f0f508ec0a4217c29/backend/api/python_http_client/kfp_server_api/rest.py#L100\n config.proxy = proxy\n\n if ssl_ca_cert:\n config.ssl_ca_cert = ssl_ca_cert\n\n host = host or ''\n # Preprocess the host endpoint to prevent some common user mistakes.\n if not client_id:\n # always preserving the protocol (http://localhost requires it)\n host = host.rstrip('/')\n\n if host:\n config.host = host\n\n token = None\n\n # \"existing_token\" is designed to accept token generated outside of SDK. 
Here is an example.\n #\n # https://cloud.google.com/functions/docs/securing/function-identity\n # https://cloud.google.com/endpoints/docs/grpc/service-account-authentication\n #\n # import requests\n # import kfp\n #\n # def get_access_token():\n # url = 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token'\n # r = requests.get(url, headers={'Metadata-Flavor': 'Google'})\n # r.raise_for_status()\n # access_token = r.json()['access_token']\n # return access_token\n #\n # client = kfp.Client(host='<KFPHost>', existing_token=get_access_token())\n #\n if existing_token:\n token = existing_token\n self._is_refresh_token = False\n elif client_id:\n token = get_auth_token(client_id, other_client_id, other_client_secret)\n self._is_refresh_token = True\n elif self._is_inverse_proxy_host(host):\n token = get_gcp_access_token()\n self._is_refresh_token = False\n\n if token:\n config.api_key['authorization'] = token\n config.api_key_prefix['authorization'] = 'Bearer'\n return config\n\n if host:\n # if host is explicitly set with auth token, it's probably a port forward address.\n return config\n\n import kubernetes as k8s\n in_cluster = True\n try:\n k8s.config.load_incluster_config()\n except:\n in_cluster = False\n pass\n\n if in_cluster:\n config.host = Client.IN_CLUSTER_DNS_NAME.format(namespace)\n return config\n\n try:\n k8s.config.load_kube_config(client_configuration=config)\n except:\n print('Failed to load kube config.')\n return config\n\n if config.host:\n config.host = config.host + '/' + Client.KUBE_PROXY_PATH.format(namespace)\n return config\n\n def _is_inverse_proxy_host(self, host):\n if host:\n return re.match(r'\\S+.googleusercontent.com/{0,1}$', host)\n if re.match(r'\\w+', host):\n warnings.warn(\n 'The received host is %s, please include the full endpoint address '\n '(with \".(pipelines/notebooks).googleusercontent.com\")' % host)\n return False\n\n def _is_ipython(self):\n \"\"\"Returns whether we are running in notebook.\"\"\"\n try:\n import IPython\n ipy = IPython.get_ipython()\n if ipy is None:\n return False\n except ImportError:\n return False\n\n return True\n\n def _get_url_prefix(self):\n if self._uihost:\n # User's own connection.\n if self._uihost.startswith('http://') or self._uihost.startswith('https://'):\n return self._uihost\n else:\n return 'http://' + self._uihost\n\n # In-cluster pod. 
We could use relative URL.\n return '/pipeline'\n\n def _load_context_setting_or_default(self):\n if os.path.exists(Client.LOCAL_KFP_CONTEXT):\n with open(Client.LOCAL_KFP_CONTEXT, 'r') as f:\n self._context_setting = json.load(f)\n else:\n self._context_setting = {\n 'namespace': '',\n }\n \n def _refresh_api_client_token(self):\n \"\"\"Refreshes the existing token associated with the kfp_api_client.\"\"\"\n if getattr(self, '_is_refresh_token', None):\n return\n\n new_token = get_gcp_access_token()\n self._existing_config.api_key['authorization'] = new_token\n\n def set_user_namespace(self, namespace):\n \"\"\"Set user namespace into local context setting file.\n \n This function should only be used when Kubeflow Pipelines is in the multi-user mode.\n\n Args:\n namespace: kubernetes namespace the user has access to.\n \"\"\"\n self._context_setting['namespace'] = namespace\n with open(Client.LOCAL_KFP_CONTEXT, 'w') as f:\n json.dump(self._context_setting, f)\n\n def get_user_namespace(self):\n \"\"\"Get user namespace in context config.\n\n Returns:\n namespace: kubernetes namespace from the local context file or empty if it wasn't set.\n \"\"\"\n return self._context_setting['namespace']\n\n def create_experiment(self, name, description=None, namespace=None):\n \"\"\"Create a new experiment.\n\n Args:\n name: The name of the experiment.\n description: Description of the experiment.\n namespace: Kubernetes namespace where the experiment should be created.\n For single user deployment, leave it as None;\n For multi user, input a namespace where the user is authorized.\n\n Returns:\n An Experiment object. Most important field is id.\n \"\"\"\n namespace = namespace or self.get_user_namespace()\n experiment = None\n try:\n experiment = self.get_experiment(experiment_name=name, namespace=namespace)\n except:\n # Ignore error if the experiment does not exist.\n pass\n\n if not experiment:\n logging.info('Creating experiment {}.'.format(name))\n\n resource_references = []\n if namespace:\n key = kfp_server_api.models.ApiResourceKey(id=namespace, type=kfp_server_api.models.ApiResourceType.NAMESPACE)\n reference = kfp_server_api.models.ApiResourceReference(key=key, relationship=kfp_server_api.models.ApiRelationship.OWNER)\n resource_references.append(reference)\n\n experiment = kfp_server_api.models.ApiExperiment(\n name=name,\n description=description,\n resource_references=resource_references)\n experiment = self._experiment_api.create_experiment(body=experiment)\n\n if self._is_ipython():\n import IPython\n html = \\\n ('Experiment link <a href=\"%s/#/experiments/details/%s\" target=\"_blank\" >here</a>'\n % (self._get_url_prefix(), experiment.id))\n IPython.display.display(IPython.display.HTML(html))\n return experiment\n\n def get_pipeline_id(self, name):\n \"\"\"Find the id of a pipeline by name.\n\n Args:\n name: Pipeline name.\n\n Returns:\n Returns the pipeline id if a pipeline with the name exists.\n \"\"\"\n pipeline_filter = json.dumps({\n \"predicates\": [\n {\n \"op\": _FILTER_OPERATIONS[\"EQUALS\"],\n \"key\": \"name\",\n \"stringValue\": name,\n }\n ]\n })\n result = self._pipelines_api.list_pipelines(filter=pipeline_filter)\n if len(result.pipelines)==1:\n return result.pipelines[0].id\n elif len(result.pipelines)>1:\n raise ValueError(\"Multiple pipelines with the name: {} found, the name needs to be unique\".format(name))\n return None\n\n def list_experiments(self, page_token='', page_size=10, sort_by='', namespace=None):\n \"\"\"List experiments.\n\n Args:\n page_token: 
Token for starting of the page.\n page_size: Size of the page.\n sort_by: Can be '[field_name]', '[field_name] des'. For example, 'name desc'.\n namespace: Kubernetes namespace where the experiment was created.\n For single user deployment, leave it as None;\n For multi user, input a namespace where the user is authorized.\n \n Returns:\n A response object including a list of experiments and next page token.\n \"\"\"\n namespace = namespace or self.get_user_namespace()\n response = self._experiment_api.list_experiment(\n page_token=page_token,\n page_size=page_size,\n sort_by=sort_by,\n resource_reference_key_type=kfp_server_api.models.api_resource_type.ApiResourceType.NAMESPACE,\n resource_reference_key_id=namespace)\n return response\n\n def get_experiment(self, experiment_id=None, experiment_name=None, namespace=None):\n \"\"\"Get details of an experiment\n\n Either experiment_id or experiment_name is required\n\n Args:\n experiment_id: Id of the experiment. (Optional)\n experiment_name: Name of the experiment. (Optional)\n namespace: Kubernetes namespace where the experiment was created.\n For single user deployment, leave it as None;\n For multi user, input the namespace where the user is authorized.\n\n Returns:\n A response object including details of a experiment.\n\n Throws:\n Exception if experiment is not found or None of the arguments is provided\n \"\"\"\n namespace = namespace or self.get_user_namespace()\n if experiment_id is None and experiment_name is None:\n raise ValueError('Either experiment_id or experiment_name is required')\n if experiment_id is not None:\n return self._experiment_api.get_experiment(id=experiment_id)\n next_page_token = ''\n while next_page_token is not None:\n list_experiments_response = self.list_experiments(page_size=100, page_token=next_page_token, namespace=namespace)\n next_page_token = list_experiments_response.next_page_token\n for experiment in list_experiments_response.experiments or []:\n if experiment.name == experiment_name:\n return self._experiment_api.get_experiment(id=experiment.id)\n raise ValueError('No experiment is found with name {}.'.format(experiment_name))\n\n def _extract_pipeline_yaml(self, package_file):\n def _choose_pipeline_yaml_file(file_list) -> str:\n yaml_files = [file for file in file_list if file.endswith('.yaml')]\n if len(yaml_files) == 0:\n raise ValueError('Invalid package. Missing pipeline yaml file in the package.')\n\n if 'pipeline.yaml' in yaml_files:\n return 'pipeline.yaml'\n else:\n if len(yaml_files) == 1:\n return yaml_files[0]\n raise ValueError('Invalid package. 
There is no pipeline.yaml file and there are multiple yaml files.')\n\n if package_file.endswith('.tar.gz') or package_file.endswith('.tgz'):\n with tarfile.open(package_file, \"r:gz\") as tar:\n file_names = [member.name for member in tar if member.isfile()]\n pipeline_yaml_file = _choose_pipeline_yaml_file(file_names)\n with tar.extractfile(tar.getmember(pipeline_yaml_file)) as f:\n return yaml.safe_load(f)\n elif package_file.endswith('.zip'):\n with zipfile.ZipFile(package_file, 'r') as zip:\n pipeline_yaml_file = _choose_pipeline_yaml_file(zip.namelist())\n with zip.open(pipeline_yaml_file) as f:\n return yaml.safe_load(f)\n elif package_file.endswith('.yaml') or package_file.endswith('.yml'):\n with open(package_file, 'r') as f:\n return yaml.safe_load(f)\n else:\n raise ValueError('The package_file '+ package_file + ' should end with one of the following formats: [.tar.gz, .tgz, .zip, .yaml, .yml]')\n\n def list_pipelines(self, page_token='', page_size=10, sort_by=''):\n \"\"\"List pipelines.\n\n Args:\n page_token: Token for starting of the page.\n page_size: Size of the page.\n sort_by: one of 'field_name', 'field_name desc'. For example, 'name desc'.\n\n Returns:\n A response object including a list of pipelines and next page token.\n \"\"\"\n return self._pipelines_api.list_pipelines(page_token=page_token, page_size=page_size, sort_by=sort_by)\n\n def list_pipeline_versions(self, pipeline_id: str, page_token='', page_size=10, sort_by=''):\n \"\"\"List all versions of a given pipeline.\n\n Args:\n pipeline_id: The id of a pipeline.\n page_token: Token for starting of the page.\n page_size: Size of the page.\n sort_by: one of 'field_name', 'field_name desc'. For example, 'name desc'.\n\n Returns:\n A response object including a list of pipeline versions and next page token.\n \"\"\"\n return self._pipelines_api.list_pipeline_versions(\n resource_key_type=\"PIPELINE\",\n resource_key_id=pipeline_id,\n page_token=page_token,\n page_size=page_size,\n sort_by=sort_by\n )\n\n # TODO: provide default namespace, similar to kubectl default namespaces.\n def run_pipeline(self, experiment_id, job_name, pipeline_package_path=None, params={}, pipeline_id=None, version_id=None):\n \"\"\"Run a specified pipeline.\n\n Args:\n experiment_id: The id of an experiment.\n job_name: Name of the job.\n pipeline_package_path: Local path of the pipeline package(the filename should end with one of the following .tar.gz, .tgz, .zip, .yaml, .yml).\n params: A dictionary with key (string) as param name and value (string) as as param value.\n pipeline_id: The id of a pipeline.\n version_id: The id of a pipeline version.\n If both pipeline_id and version_id are specified, version_id will take precendence.\n If only pipeline_id is specified, the default version of this pipeline is used to create the run.\n\n Returns:\n A run object. 
Most important field is id.\n \"\"\"\n job_config = self._create_job_config(\n experiment_id=experiment_id,\n params=params,\n pipeline_package_path=pipeline_package_path,\n pipeline_id=pipeline_id,\n version_id=version_id)\n run_body = kfp_server_api.models.ApiRun(\n pipeline_spec=job_config.spec, resource_references=job_config.resource_references, name=job_name)\n\n response = self._run_api.create_run(body=run_body)\n\n if self._is_ipython():\n import IPython\n html = ('Run link <a href=\"%s/#/runs/details/%s\" target=\"_blank\" >here</a>'\n % (self._get_url_prefix(), response.run.id))\n IPython.display.display(IPython.display.HTML(html))\n return response.run\n\n def create_recurring_run(self, experiment_id, job_name, description=None, start_time=None, end_time=None, interval_second=None, cron_expression=None, max_concurrency=1, no_catchup=None, params={}, pipeline_package_path=None, pipeline_id=None, version_id=None, enabled=True):\n \"\"\"Create a recurring run.\n\n Args:\n experiment_id: The string id of an experiment.\n job_name: Name of the job.\n description: An optional job description.\n start_time: The RFC3339 time string of the time when to start the job.\n end_time: The RFC3339 time string of the time when to end the job.\n interval_second: Integer indicating the seconds between two recurring runs in for a periodic schedule.\n cron_expression: A cron expression representing a set of times, using 5 space-separated fields, e.g. \"0 0 9 ? * 2-6\".\n max_concurrency: Integer indicating how many jobs can be run in parallel.\n no_catchup: Whether the recurring run should catch up if behind schedule.\n For example, if the recurring run is paused for a while and re-enabled\n afterwards. If no_catchup=False, the scheduler will catch up on (backfill) each\n missed interval. Otherwise, it only schedules the latest interval if more than one interval\n is ready to be scheduled.\n Usually, if your pipeline handles backfill internally, you should turn catchup\n off to avoid duplicate backfill. (default: {False})\n pipeline_package_path: Local path of the pipeline package(the filename should end with one of the following .tar.gz, .tgz, .zip, .yaml, .yml).\n params: A dictionary with key (string) as param name and value (string) as param value.\n pipeline_id: The string ID of a pipeline.\n version_id: The string ID of a pipeline version. \n If both pipeline_id and version_id are specified, pipeline_id will take precendence\n This will change in a future version, so it is recommended to use version_id by itself.\n enabled: A bool indicating whether the recurring run is enabled or disabled.\n\n Returns:\n A Job object. 
Most important field is id.\n \"\"\"\n job_config = self._create_job_config(\n experiment_id=experiment_id,\n params=params,\n pipeline_package_path=pipeline_package_path,\n pipeline_id=pipeline_id,\n version_id=version_id)\n\n if all([interval_second, cron_expression]) or not any([interval_second, cron_expression]):\n raise ValueError('Either interval_second or cron_expression is required')\n if interval_second is not None:\n trigger = kfp_server_api.models.ApiTrigger(\n periodic_schedule=kfp_server_api.models.ApiPeriodicSchedule(\n start_time=start_time, end_time=end_time, interval_second=interval_second)\n )\n if cron_expression is not None:\n trigger = kfp_server_api.models.ApiTrigger(\n cron_schedule=kfp_server_api.models.ApiCronSchedule(\n start_time=start_time, end_time=end_time, cron=cron_expression)\n )\n\n job_body = kfp_server_api.models.ApiJob(\n enabled=enabled,\n pipeline_spec=job_config.spec,\n resource_references=job_config.resource_references,\n name=job_name,\n description=description,\n no_catchup=no_catchup,\n trigger=trigger,\n max_concurrency=max_concurrency)\n return self._job_api.create_job(body=job_body)\n\n def _create_job_config(self, experiment_id, params, pipeline_package_path, pipeline_id, version_id):\n \"\"\"Create a JobConfig with spec and resource_references.\n\n Args:\n experiment_id: The id of an experiment.\n pipeline_package_path: Local path of the pipeline package(the filename should end with one of the following .tar.gz, .tgz, .zip, .yaml, .yml).\n params: A dictionary with key (string) as param name and value (string) as param value.\n pipeline_id: The id of a pipeline.\n version_id: The id of a pipeline version. \n If both pipeline_id and version_id are specified, pipeline_id will take precendence\n This will change in a future version, so it is recommended to use version_id by itself.\n\n Returns:\n A JobConfig object with attributes spec and resource_reference.\n \"\"\"\n \n class JobConfig:\n def __init__(self, spec, resource_references):\n self.spec = spec\n self.resource_references = resource_references\n\n pipeline_json_string = None\n if pipeline_package_path:\n pipeline_obj = self._extract_pipeline_yaml(pipeline_package_path)\n pipeline_json_string = json.dumps(pipeline_obj)\n api_params = [kfp_server_api.ApiParameter(\n name=sanitize_k8s_name(name=k, allow_capital_underscore=True),\n value=str(v)) for k,v in params.items()]\n resource_references = []\n key = kfp_server_api.models.ApiResourceKey(id=experiment_id,\n type=kfp_server_api.models.ApiResourceType.EXPERIMENT)\n reference = kfp_server_api.models.ApiResourceReference(key=key,\n relationship=kfp_server_api.models.ApiRelationship.OWNER)\n resource_references.append(reference)\n\n if version_id:\n key = kfp_server_api.models.ApiResourceKey(id=version_id,\n type=kfp_server_api.models.ApiResourceType.PIPELINE_VERSION)\n reference = kfp_server_api.models.ApiResourceReference(key=key,\n relationship=kfp_server_api.models.ApiRelationship.CREATOR)\n resource_references.append(reference)\n\n spec = kfp_server_api.models.ApiPipelineSpec(\n pipeline_id=pipeline_id,\n workflow_manifest=pipeline_json_string,\n parameters=api_params)\n return JobConfig(spec=spec, resource_references=resource_references)\n\n def create_run_from_pipeline_func(self, pipeline_func: Callable, arguments: Mapping[str, str], run_name=None, experiment_name=None, pipeline_conf: kfp.dsl.PipelineConf = None, namespace=None):\n \"\"\"Runs pipeline on KFP-enabled Kubernetes cluster.\n\n This command compiles the pipeline 
function, creates or gets an experiment and submits the pipeline for execution.\n\n Args:\n pipeline_func: A function that describes a pipeline by calling components and composing them into execution graph.\n arguments: Arguments to the pipeline function provided as a dict.\n run_name: Optional. Name of the run to be shown in the UI.\n experiment_name: Optional. Name of the experiment to add the run to.\n namespace: Kubernetes namespace where the pipeline runs are created.\n For single user deployment, leave it as None;\n For multi user, input a namespace where the user is authorized\n \"\"\"\n #TODO: Check arguments against the pipeline function\n pipeline_name = pipeline_func.__name__\n run_name = run_name or pipeline_name + ' ' + datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')\n with tempfile.TemporaryDirectory() as tmpdir:\n pipeline_package_path = os.path.join(tmpdir, 'pipeline.yaml')\n compiler.Compiler().compile(pipeline_func, pipeline_package_path, pipeline_conf=pipeline_conf)\n return self.create_run_from_pipeline_package(pipeline_package_path, arguments, run_name, experiment_name, namespace)\n\n def create_run_from_pipeline_package(self, pipeline_file: str, arguments: Mapping[str, str], run_name=None, experiment_name=None, namespace=None):\n \"\"\"Runs pipeline on KFP-enabled Kubernetes cluster.\n\n This command compiles the pipeline function, creates or gets an experiment and submits the pipeline for execution.\n\n Args:\n pipeline_file: A compiled pipeline package file.\n arguments: Arguments to the pipeline function provided as a dict.\n run_name: Optional. Name of the run to be shown in the UI.\n experiment_name: Optional. Name of the experiment to add the run to.\n namespace: Kubernetes namespace where the pipeline runs are created.\n For single user deployment, leave it as None;\n For multi user, input a namespace where the user is authorized\n \"\"\"\n\n class RunPipelineResult:\n def __init__(self, client, run_info):\n self._client = client\n self.run_info = run_info\n self.run_id = run_info.id\n\n def wait_for_run_completion(self, timeout=None):\n timeout = timeout or datetime.timedelta.max\n return self._client.wait_for_run_completion(self.run_id, timeout)\n\n def __repr__(self):\n return 'RunPipelineResult(run_id={})'.format(self.run_id)\n\n #TODO: Check arguments against the pipeline function\n pipeline_name = os.path.basename(pipeline_file)\n experiment_name = experiment_name or os.environ.get(KF_PIPELINES_DEFAULT_EXPERIMENT_NAME, None)\n overridden_experiment_name = os.environ.get(KF_PIPELINES_OVERRIDE_EXPERIMENT_NAME, experiment_name)\n if overridden_experiment_name != experiment_name:\n import warnings\n warnings.warn('Changing experiment name from \"{}\" to \"{}\".'.format(experiment_name, overridden_experiment_name))\n experiment_name = overridden_experiment_name or 'Default'\n run_name = run_name or (pipeline_name + ' ' +\n datetime.datetime.now().strftime(\n '%Y-%m-%d %H-%M-%S'))\n experiment = self.create_experiment(name=experiment_name, namespace=namespace)\n run_info = self.run_pipeline(experiment.id, run_name, pipeline_file, arguments)\n return RunPipelineResult(self, run_info)\n\n def list_runs(self, page_token='', page_size=10, sort_by='', experiment_id=None, namespace=None):\n \"\"\"List runs, optionally can be filtered by experiment or namespace.\n\n Args:\n page_token: Token for starting of the page.\n page_size: Size of the page.\n sort_by: One of 'field_name', 'field_name desc'. 
For example, 'name desc'.\n experiment_id: Experiment id to filter upon\n namespace: Kubernetes namespace to filter upon.\n For single user deployment, leave it as None;\n For multi user, input a namespace where the user is authorized.\n\n Returns:\n A response object including a list of experiments and next page token.\n \"\"\"\n namespace = namespace or self.get_user_namespace()\n if experiment_id is not None:\n response = self._run_api.list_runs(page_token=page_token, page_size=page_size, sort_by=sort_by, resource_reference_key_type=kfp_server_api.models.api_resource_type.ApiResourceType.EXPERIMENT, resource_reference_key_id=experiment_id)\n elif namespace:\n response = self._run_api.list_runs(page_token=page_token, page_size=page_size, sort_by=sort_by, resource_reference_key_type=kfp_server_api.models.api_resource_type.ApiResourceType.NAMESPACE, resource_reference_key_id=namespace)\n else:\n response = self._run_api.list_runs(page_token=page_token, page_size=page_size, sort_by=sort_by)\n return response\n\n def list_recurring_runs(self, page_token='', page_size=10, sort_by='', experiment_id=None):\n \"\"\"List recurring runs.\n\n Args:\n page_token: Token for starting of the page.\n page_size: Size of the page.\n sort_by: One of 'field_name', 'field_name desc'. For example, 'name desc'.\n experiment_id: Experiment id to filter upon.\n\n Returns:\n A response object including a list of recurring_runs and next page token.\n \"\"\"\n if experiment_id is not None:\n response = self._job_api.list_jobs(page_token=page_token, page_size=page_size, sort_by=sort_by, resource_reference_key_type=kfp_server_api.models.api_resource_type.ApiResourceType.EXPERIMENT, resource_reference_key_id=experiment_id)\n else:\n response = self._job_api.list_jobs(page_token=page_token, page_size=page_size, sort_by=sort_by)\n return response\n\n def get_recurring_run(self, job_id):\n \"\"\"Get recurring_run details.\n\n Args:\n job_id: id of the recurring_run.\n\n Returns:\n A response object including details of a recurring_run.\n\n Throws:\n Exception if recurring_run is not found.\n \"\"\"\n return self._job_api.get_job(id=job_id)\n\n\n def get_run(self, run_id):\n \"\"\"Get run details.\n\n Args:\n run_id: id of the run.\n\n Returns:\n A response object including details of a run.\n\n Throws:\n Exception if run is not found.\n \"\"\"\n return self._run_api.get_run(run_id=run_id)\n\n def wait_for_run_completion(self, run_id, timeout):\n \"\"\"Waits for a run to complete.\n\n Args:\n run_id: Run id, returned from run_pipeline.\n timeout: Timeout in seconds.\n\n Returns:\n A run detail object: Most important fields are run and pipeline_runtime.\n\n Raises:\n TimeoutError: if the pipeline run failed to finish before the specified timeout.\n \"\"\"\n status = 'Running:'\n start_time = datetime.datetime.now()\n last_token_refresh_time = datetime.datetime.now()\n while (status is None or\n status.lower() not in ['succeeded', 'failed', 'skipped', 'error']):\n # Refreshes the access token before it hits the TTL.\n if (datetime.datetime.now() - last_token_refresh_time\n > _GCP_ACCESS_TOKEN_TIMEOUT):\n self._refresh_api_client_token()\n last_token_refresh_time = datetime.datetime.now()\n \n get_run_response = self._run_api.get_run(run_id=run_id)\n status = get_run_response.run.status\n elapsed_time = (datetime.datetime.now() - start_time).seconds\n logging.info('Waiting for the job to complete...')\n if elapsed_time > timeout:\n raise TimeoutError('Run timeout')\n time.sleep(5)\n return get_run_response\n\n def 
_get_workflow_json(self, run_id):\n \"\"\"Get the workflow json.\n\n Args:\n run_id: run id, returned from run_pipeline.\n\n Returns:\n workflow: Json workflow\n \"\"\"\n get_run_response = self._run_api.get_run(run_id=run_id)\n workflow = get_run_response.pipeline_runtime.workflow_manifest\n workflow_json = json.loads(workflow)\n return workflow_json\n\n def upload_pipeline(\n self,\n pipeline_package_path: str = None,\n pipeline_name: str = None,\n description: str = None,\n ):\n \"\"\"Uploads the pipeline to the Kubeflow Pipelines cluster.\n\n Args:\n pipeline_package_path: Local path to the pipeline package.\n pipeline_name: Optional. Name of the pipeline to be shown in the UI.\n description: Optional. Description of the pipeline to be shown in the UI.\n\n Returns:\n Server response object containing pipleine id and other information.\n \"\"\"\n\n response = self._upload_api.upload_pipeline(pipeline_package_path, name=pipeline_name, description=description)\n if self._is_ipython():\n import IPython\n html = 'Pipeline link <a href=%s/#/pipelines/details/%s>here</a>' % (self._get_url_prefix(), response.id)\n IPython.display.display(IPython.display.HTML(html))\n return response\n\n def upload_pipeline_version(\n self,\n pipeline_package_path,\n pipeline_version_name: str,\n pipeline_id: Optional[str] = None,\n pipeline_name: Optional[str] = None\n ):\n \"\"\"Uploads a new version of the pipeline to the Kubeflow Pipelines cluster.\n Args:\n pipeline_package_path: Local path to the pipeline package.\n pipeline_version_name: Name of the pipeline version to be shown in the UI.\n pipeline_id: Optional. Id of the pipeline.\n pipeline_name: Optional. Name of the pipeline.\n Returns:\n Server response object containing pipleine id and other information.\n Throws:\n ValueError when none or both of pipeline_id or pipeline_name are specified\n Exception if pipeline id is not found.\n \"\"\"\n\n if all([pipeline_id, pipeline_name]) or not any([pipeline_id, pipeline_name]):\n raise ValueError('Either pipeline_id or pipeline_name is required')\n\n if pipeline_name:\n pipeline_id = self.get_pipeline_id(pipeline_name)\n\n response = self._upload_api.upload_pipeline_version(\n pipeline_package_path, \n name=pipeline_version_name, \n pipelineid=pipeline_id\n )\n\n if self._is_ipython():\n import IPython\n html = 'Pipeline link <a href=%s/#/pipelines/details/%s>here</a>' % (self._get_url_prefix(), response.id)\n IPython.display.display(IPython.display.HTML(html))\n return response\n\n def get_pipeline(self, pipeline_id):\n \"\"\"Get pipeline details.\n\n Args:\n pipeline_id: id of the pipeline.\n\n Returns:\n A response object including details of a pipeline.\n\n Throws:\n Exception if pipeline is not found.\n \"\"\"\n return self._pipelines_api.get_pipeline(id=pipeline_id)\n\n def delete_pipeline(self, pipeline_id):\n \"\"\"Delete pipeline.\n\n Args:\n pipeline_id: id of the pipeline.\n\n Returns:\n Object. If the method is called asynchronously, returns the request thread.\n\n Throws:\n Exception if pipeline is not found.\n \"\"\"\n return self._pipelines_api.delete_pipeline(id=pipeline_id)\n\n def list_pipeline_versions(self, pipeline_id, page_token='', page_size=10, sort_by=''):\n \"\"\"Lists pipeline versions.\n\n Args:\n pipeline_id: Id of the pipeline to list versions\n page_token: Token for starting of the page.\n page_size: Size of the page.\n sort_by: One of 'field_name', 'field_name des'. 
For example, 'name des'.\n\n Returns:\n A response object including a list of versions and next page token.\n \"\"\"\n\n return self._pipelines_api.list_pipeline_versions(page_token=page_token, page_size=page_size, sort_by=sort_by, resource_key_type=kfp_server_api.models.api_resource_type.ApiResourceType.PIPELINE, resource_key_id=pipeline_id)\n",
"path": "sdk/python/kfp/_client.py"
}
] | [
{
"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport time\nimport logging\nimport json\nimport os\nimport re\nimport tarfile\nimport tempfile\nimport warnings\nimport yaml\nimport zipfile\nimport datetime\nfrom typing import Mapping, Callable, Optional\n\nimport kfp\nimport kfp_server_api\n\nfrom kfp.compiler import compiler\nfrom kfp.compiler._k8s_helper import sanitize_k8s_name\n\nfrom kfp._auth import get_auth_token, get_gcp_access_token\n\n# TTL of the access token associated with the client. This is needed because\n# `gcloud auth print-access-token` generates a token with TTL=1 hour, after\n# which the authentication expires. This TTL is needed for kfp.Client()\n# initialized with host=<inverse proxy endpoint>.\n# Set to 55 mins to provide some safe margin.\n_GCP_ACCESS_TOKEN_TIMEOUT = datetime.timedelta(minutes=55)\n# Operators on scalar values. Only applies to one of |int_value|,\n# |long_value|, |string_value| or |timestamp_value|.\n_FILTER_OPERATIONS = {\"UNKNOWN\": 0,\n \"EQUALS\" : 1,\n \"NOT_EQUALS\" : 2,\n \"GREATER_THAN\": 3,\n \"GREATER_THAN_EQUALS\": 5,\n \"LESS_THAN\": 6,\n \"LESS_THAN_EQUALS\": 7}\n\ndef _add_generated_apis(target_struct, api_module, api_client):\n \"\"\"Initializes a hierarchical API object based on the generated API module.\n PipelineServiceApi.create_pipeline becomes target_struct.pipelines.create_pipeline\n \"\"\"\n Struct = type('Struct', (), {})\n\n def camel_case_to_snake_case(name):\n import re\n return re.sub('([a-z0-9])([A-Z])', r'\\1_\\2', name).lower()\n\n for api_name in dir(api_module):\n if not api_name.endswith('ServiceApi'):\n continue\n\n short_api_name = camel_case_to_snake_case(api_name[0:-len('ServiceApi')]) + 's'\n api_struct = Struct()\n setattr(target_struct, short_api_name, api_struct)\n service_api = getattr(api_module.api, api_name)\n initialized_service_api = service_api(api_client)\n for member_name in dir(initialized_service_api):\n if member_name.startswith('_') or member_name.endswith('_with_http_info'):\n continue\n\n bound_member = getattr(initialized_service_api, member_name)\n setattr(api_struct, member_name, bound_member)\n models_struct = Struct()\n for member_name in dir(api_module.models):\n if not member_name[0].islower():\n setattr(models_struct, member_name, getattr(api_module.models, member_name))\n target_struct.api_models = models_struct\n\n\nKF_PIPELINES_ENDPOINT_ENV = 'KF_PIPELINES_ENDPOINT'\nKF_PIPELINES_UI_ENDPOINT_ENV = 'KF_PIPELINES_UI_ENDPOINT'\nKF_PIPELINES_DEFAULT_EXPERIMENT_NAME = 'KF_PIPELINES_DEFAULT_EXPERIMENT_NAME'\nKF_PIPELINES_OVERRIDE_EXPERIMENT_NAME = 'KF_PIPELINES_OVERRIDE_EXPERIMENT_NAME'\n\n\nclass Client(object):\n \"\"\"API Client for KubeFlow Pipeline.\n\n Args:\n host: The host name to use to talk to Kubeflow Pipelines. If not set, the in-cluster\n service DNS name will be used, which only works if the current environment is a pod\n in the same cluster (such as a Jupyter instance spawned by Kubeflow's\n JupyterHub). 
If you have a different connection to cluster, such as a kubectl\n proxy connection, then set it to something like \"127.0.0.1:8080/pipeline.\n If you connect to an IAP enabled cluster, set it to\n https://<your-deployment>.endpoints.<your-project>.cloud.goog/pipeline\".\n client_id: The client ID used by Identity-Aware Proxy.\n namespace: The namespace where the kubeflow pipeline system is run.\n other_client_id: The client ID used to obtain the auth codes and refresh tokens.\n Reference: https://cloud.google.com/iap/docs/authentication-howto#authenticating_from_a_desktop_app.\n other_client_secret: The client secret used to obtain the auth codes and refresh tokens.\n existing_token: Pass in token directly, it's used for cases better get token outside of SDK, e.x. GCP Cloud Functions\n or caller already has a token\n cookies: CookieJar object containing cookies that will be passed to the pipelines API.\n proxy: HTTP or HTTPS proxy server\n ssl_ca_cert: Cert for proxy\n \"\"\"\n\n # in-cluster DNS name of the pipeline service\n IN_CLUSTER_DNS_NAME = 'ml-pipeline.{}.svc.cluster.local:8888'\n KUBE_PROXY_PATH = 'api/v1/namespaces/{}/services/ml-pipeline:http/proxy/'\n\n LOCAL_KFP_CONTEXT = os.path.expanduser('~/.config/kfp/context.json')\n\n # TODO: Wrap the configurations for different authentication methods.\n def __init__(self, host=None, client_id=None, namespace='kubeflow', other_client_id=None, other_client_secret=None, existing_token=None, cookies=None, proxy=None, ssl_ca_cert=None):\n \"\"\"Create a new instance of kfp client.\n \"\"\"\n host = host or os.environ.get(KF_PIPELINES_ENDPOINT_ENV)\n self._uihost = os.environ.get(KF_PIPELINES_UI_ENDPOINT_ENV, host)\n config = self._load_config(host, client_id, namespace, other_client_id, other_client_secret, existing_token, proxy, ssl_ca_cert)\n # Save the loaded API client configuration, as a reference if update is\n # needed.\n self._existing_config = config\n api_client = kfp_server_api.api_client.ApiClient(config, cookie=cookies)\n _add_generated_apis(self, kfp_server_api, api_client)\n self._job_api = kfp_server_api.api.job_service_api.JobServiceApi(api_client)\n self._run_api = kfp_server_api.api.run_service_api.RunServiceApi(api_client)\n self._experiment_api = kfp_server_api.api.experiment_service_api.ExperimentServiceApi(api_client)\n self._pipelines_api = kfp_server_api.api.pipeline_service_api.PipelineServiceApi(api_client)\n self._upload_api = kfp_server_api.api.PipelineUploadServiceApi(api_client)\n self._load_context_setting_or_default()\n\n def _load_config(self, host, client_id, namespace, other_client_id, other_client_secret, existing_token, proxy, ssl_ca_cert):\n config = kfp_server_api.configuration.Configuration()\n\n if proxy:\n # https://github.com/kubeflow/pipelines/blob/c6ac5e0b1fd991e19e96419f0f508ec0a4217c29/backend/api/python_http_client/kfp_server_api/rest.py#L100\n config.proxy = proxy\n\n if ssl_ca_cert:\n config.ssl_ca_cert = ssl_ca_cert\n\n host = host or ''\n # Preprocess the host endpoint to prevent some common user mistakes.\n if not client_id:\n # always preserving the protocol (http://localhost requires it)\n host = host.rstrip('/')\n\n if host:\n config.host = host\n\n token = None\n\n # \"existing_token\" is designed to accept token generated outside of SDK. 
Here is an example.\n #\n # https://cloud.google.com/functions/docs/securing/function-identity\n # https://cloud.google.com/endpoints/docs/grpc/service-account-authentication\n #\n # import requests\n # import kfp\n #\n # def get_access_token():\n # url = 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token'\n # r = requests.get(url, headers={'Metadata-Flavor': 'Google'})\n # r.raise_for_status()\n # access_token = r.json()['access_token']\n # return access_token\n #\n # client = kfp.Client(host='<KFPHost>', existing_token=get_access_token())\n #\n if existing_token:\n token = existing_token\n self._is_refresh_token = False\n elif client_id:\n token = get_auth_token(client_id, other_client_id, other_client_secret)\n self._is_refresh_token = True\n elif self._is_inverse_proxy_host(host):\n token = get_gcp_access_token()\n self._is_refresh_token = False\n\n if token:\n config.api_key['authorization'] = token\n config.api_key_prefix['authorization'] = 'Bearer'\n return config\n\n if host:\n # if host is explicitly set with auth token, it's probably a port forward address.\n return config\n\n import kubernetes as k8s\n in_cluster = True\n try:\n k8s.config.load_incluster_config()\n except:\n in_cluster = False\n pass\n\n if in_cluster:\n config.host = Client.IN_CLUSTER_DNS_NAME.format(namespace)\n return config\n\n try:\n k8s.config.load_kube_config(client_configuration=config)\n except:\n print('Failed to load kube config.')\n return config\n\n if config.host:\n config.host = config.host + '/' + Client.KUBE_PROXY_PATH.format(namespace)\n return config\n\n def _is_inverse_proxy_host(self, host):\n if host:\n return re.match(r'\\S+.googleusercontent.com/{0,1}$', host)\n if re.match(r'\\w+', host):\n warnings.warn(\n 'The received host is %s, please include the full endpoint address '\n '(with \".(pipelines/notebooks).googleusercontent.com\")' % host)\n return False\n\n def _is_ipython(self):\n \"\"\"Returns whether we are running in notebook.\"\"\"\n try:\n import IPython\n ipy = IPython.get_ipython()\n if ipy is None:\n return False\n except ImportError:\n return False\n\n return True\n\n def _get_url_prefix(self):\n if self._uihost:\n # User's own connection.\n if self._uihost.startswith('http://') or self._uihost.startswith('https://'):\n return self._uihost\n else:\n return 'http://' + self._uihost\n\n # In-cluster pod. 
We could use relative URL.\n return '/pipeline'\n\n def _load_context_setting_or_default(self):\n if os.path.exists(Client.LOCAL_KFP_CONTEXT):\n with open(Client.LOCAL_KFP_CONTEXT, 'r') as f:\n self._context_setting = json.load(f)\n else:\n self._context_setting = {\n 'namespace': '',\n }\n \n def _refresh_api_client_token(self):\n \"\"\"Refreshes the existing token associated with the kfp_api_client.\"\"\"\n if getattr(self, '_is_refresh_token', None):\n return\n\n new_token = get_gcp_access_token()\n self._existing_config.api_key['authorization'] = new_token\n\n def set_user_namespace(self, namespace):\n \"\"\"Set user namespace into local context setting file.\n \n This function should only be used when Kubeflow Pipelines is in the multi-user mode.\n\n Args:\n namespace: kubernetes namespace the user has access to.\n \"\"\"\n self._context_setting['namespace'] = namespace\n with open(Client.LOCAL_KFP_CONTEXT, 'w') as f:\n json.dump(self._context_setting, f)\n\n def get_user_namespace(self):\n \"\"\"Get user namespace in context config.\n\n Returns:\n namespace: kubernetes namespace from the local context file or empty if it wasn't set.\n \"\"\"\n return self._context_setting['namespace']\n\n def create_experiment(self, name, description=None, namespace=None):\n \"\"\"Create a new experiment.\n\n Args:\n name: The name of the experiment.\n description: Description of the experiment.\n namespace: Kubernetes namespace where the experiment should be created.\n For single user deployment, leave it as None;\n For multi user, input a namespace where the user is authorized.\n\n Returns:\n An Experiment object. Most important field is id.\n \"\"\"\n namespace = namespace or self.get_user_namespace()\n experiment = None\n try:\n experiment = self.get_experiment(experiment_name=name, namespace=namespace)\n except:\n # Ignore error if the experiment does not exist.\n pass\n\n if not experiment:\n logging.info('Creating experiment {}.'.format(name))\n\n resource_references = []\n if namespace:\n key = kfp_server_api.models.ApiResourceKey(id=namespace, type=kfp_server_api.models.ApiResourceType.NAMESPACE)\n reference = kfp_server_api.models.ApiResourceReference(key=key, relationship=kfp_server_api.models.ApiRelationship.OWNER)\n resource_references.append(reference)\n\n experiment = kfp_server_api.models.ApiExperiment(\n name=name,\n description=description,\n resource_references=resource_references)\n experiment = self._experiment_api.create_experiment(body=experiment)\n\n if self._is_ipython():\n import IPython\n html = \\\n ('Experiment link <a href=\"%s/#/experiments/details/%s\" target=\"_blank\" >here</a>'\n % (self._get_url_prefix(), experiment.id))\n IPython.display.display(IPython.display.HTML(html))\n return experiment\n\n def get_pipeline_id(self, name):\n \"\"\"Find the id of a pipeline by name.\n\n Args:\n name: Pipeline name.\n\n Returns:\n Returns the pipeline id if a pipeline with the name exists.\n \"\"\"\n pipeline_filter = json.dumps({\n \"predicates\": [\n {\n \"op\": _FILTER_OPERATIONS[\"EQUALS\"],\n \"key\": \"name\",\n \"stringValue\": name,\n }\n ]\n })\n result = self._pipelines_api.list_pipelines(filter=pipeline_filter)\n if result.pipelines is None:\n return None\n if len(result.pipelines)==1:\n return result.pipelines[0].id\n elif len(result.pipelines)>1:\n raise ValueError(\"Multiple pipelines with the name: {} found, the name needs to be unique\".format(name))\n return None\n\n def list_experiments(self, page_token='', page_size=10, sort_by='', namespace=None):\n 
\"\"\"List experiments.\n\n Args:\n page_token: Token for starting of the page.\n page_size: Size of the page.\n sort_by: Can be '[field_name]', '[field_name] des'. For example, 'name desc'.\n namespace: Kubernetes namespace where the experiment was created.\n For single user deployment, leave it as None;\n For multi user, input a namespace where the user is authorized.\n \n Returns:\n A response object including a list of experiments and next page token.\n \"\"\"\n namespace = namespace or self.get_user_namespace()\n response = self._experiment_api.list_experiment(\n page_token=page_token,\n page_size=page_size,\n sort_by=sort_by,\n resource_reference_key_type=kfp_server_api.models.api_resource_type.ApiResourceType.NAMESPACE,\n resource_reference_key_id=namespace)\n return response\n\n def get_experiment(self, experiment_id=None, experiment_name=None, namespace=None):\n \"\"\"Get details of an experiment\n\n Either experiment_id or experiment_name is required\n\n Args:\n experiment_id: Id of the experiment. (Optional)\n experiment_name: Name of the experiment. (Optional)\n namespace: Kubernetes namespace where the experiment was created.\n For single user deployment, leave it as None;\n For multi user, input the namespace where the user is authorized.\n\n Returns:\n A response object including details of a experiment.\n\n Throws:\n Exception if experiment is not found or None of the arguments is provided\n \"\"\"\n namespace = namespace or self.get_user_namespace()\n if experiment_id is None and experiment_name is None:\n raise ValueError('Either experiment_id or experiment_name is required')\n if experiment_id is not None:\n return self._experiment_api.get_experiment(id=experiment_id)\n next_page_token = ''\n while next_page_token is not None:\n list_experiments_response = self.list_experiments(page_size=100, page_token=next_page_token, namespace=namespace)\n next_page_token = list_experiments_response.next_page_token\n for experiment in list_experiments_response.experiments or []:\n if experiment.name == experiment_name:\n return self._experiment_api.get_experiment(id=experiment.id)\n raise ValueError('No experiment is found with name {}.'.format(experiment_name))\n\n def _extract_pipeline_yaml(self, package_file):\n def _choose_pipeline_yaml_file(file_list) -> str:\n yaml_files = [file for file in file_list if file.endswith('.yaml')]\n if len(yaml_files) == 0:\n raise ValueError('Invalid package. Missing pipeline yaml file in the package.')\n\n if 'pipeline.yaml' in yaml_files:\n return 'pipeline.yaml'\n else:\n if len(yaml_files) == 1:\n return yaml_files[0]\n raise ValueError('Invalid package. 
There is no pipeline.yaml file and there are multiple yaml files.')\n\n if package_file.endswith('.tar.gz') or package_file.endswith('.tgz'):\n with tarfile.open(package_file, \"r:gz\") as tar:\n file_names = [member.name for member in tar if member.isfile()]\n pipeline_yaml_file = _choose_pipeline_yaml_file(file_names)\n with tar.extractfile(tar.getmember(pipeline_yaml_file)) as f:\n return yaml.safe_load(f)\n elif package_file.endswith('.zip'):\n with zipfile.ZipFile(package_file, 'r') as zip:\n pipeline_yaml_file = _choose_pipeline_yaml_file(zip.namelist())\n with zip.open(pipeline_yaml_file) as f:\n return yaml.safe_load(f)\n elif package_file.endswith('.yaml') or package_file.endswith('.yml'):\n with open(package_file, 'r') as f:\n return yaml.safe_load(f)\n else:\n raise ValueError('The package_file '+ package_file + ' should end with one of the following formats: [.tar.gz, .tgz, .zip, .yaml, .yml]')\n\n def list_pipelines(self, page_token='', page_size=10, sort_by=''):\n \"\"\"List pipelines.\n\n Args:\n page_token: Token for starting of the page.\n page_size: Size of the page.\n sort_by: one of 'field_name', 'field_name desc'. For example, 'name desc'.\n\n Returns:\n A response object including a list of pipelines and next page token.\n \"\"\"\n return self._pipelines_api.list_pipelines(page_token=page_token, page_size=page_size, sort_by=sort_by)\n\n def list_pipeline_versions(self, pipeline_id: str, page_token='', page_size=10, sort_by=''):\n \"\"\"List all versions of a given pipeline.\n\n Args:\n pipeline_id: The id of a pipeline.\n page_token: Token for starting of the page.\n page_size: Size of the page.\n sort_by: one of 'field_name', 'field_name desc'. For example, 'name desc'.\n\n Returns:\n A response object including a list of pipeline versions and next page token.\n \"\"\"\n return self._pipelines_api.list_pipeline_versions(\n resource_key_type=\"PIPELINE\",\n resource_key_id=pipeline_id,\n page_token=page_token,\n page_size=page_size,\n sort_by=sort_by\n )\n\n # TODO: provide default namespace, similar to kubectl default namespaces.\n def run_pipeline(self, experiment_id, job_name, pipeline_package_path=None, params={}, pipeline_id=None, version_id=None):\n \"\"\"Run a specified pipeline.\n\n Args:\n experiment_id: The id of an experiment.\n job_name: Name of the job.\n pipeline_package_path: Local path of the pipeline package(the filename should end with one of the following .tar.gz, .tgz, .zip, .yaml, .yml).\n params: A dictionary with key (string) as param name and value (string) as as param value.\n pipeline_id: The id of a pipeline.\n version_id: The id of a pipeline version.\n If both pipeline_id and version_id are specified, version_id will take precendence.\n If only pipeline_id is specified, the default version of this pipeline is used to create the run.\n\n Returns:\n A run object. 
Most important field is id.\n \"\"\"\n job_config = self._create_job_config(\n experiment_id=experiment_id,\n params=params,\n pipeline_package_path=pipeline_package_path,\n pipeline_id=pipeline_id,\n version_id=version_id)\n run_body = kfp_server_api.models.ApiRun(\n pipeline_spec=job_config.spec, resource_references=job_config.resource_references, name=job_name)\n\n response = self._run_api.create_run(body=run_body)\n\n if self._is_ipython():\n import IPython\n html = ('Run link <a href=\"%s/#/runs/details/%s\" target=\"_blank\" >here</a>'\n % (self._get_url_prefix(), response.run.id))\n IPython.display.display(IPython.display.HTML(html))\n return response.run\n\n def create_recurring_run(self, experiment_id, job_name, description=None, start_time=None, end_time=None, interval_second=None, cron_expression=None, max_concurrency=1, no_catchup=None, params={}, pipeline_package_path=None, pipeline_id=None, version_id=None, enabled=True):\n \"\"\"Create a recurring run.\n\n Args:\n experiment_id: The string id of an experiment.\n job_name: Name of the job.\n description: An optional job description.\n start_time: The RFC3339 time string of the time when to start the job.\n end_time: The RFC3339 time string of the time when to end the job.\n interval_second: Integer indicating the seconds between two recurring runs in for a periodic schedule.\n cron_expression: A cron expression representing a set of times, using 5 space-separated fields, e.g. \"0 0 9 ? * 2-6\".\n max_concurrency: Integer indicating how many jobs can be run in parallel.\n no_catchup: Whether the recurring run should catch up if behind schedule.\n For example, if the recurring run is paused for a while and re-enabled\n afterwards. If no_catchup=False, the scheduler will catch up on (backfill) each\n missed interval. Otherwise, it only schedules the latest interval if more than one interval\n is ready to be scheduled.\n Usually, if your pipeline handles backfill internally, you should turn catchup\n off to avoid duplicate backfill. (default: {False})\n pipeline_package_path: Local path of the pipeline package(the filename should end with one of the following .tar.gz, .tgz, .zip, .yaml, .yml).\n params: A dictionary with key (string) as param name and value (string) as param value.\n pipeline_id: The string ID of a pipeline.\n version_id: The string ID of a pipeline version. \n If both pipeline_id and version_id are specified, pipeline_id will take precendence\n This will change in a future version, so it is recommended to use version_id by itself.\n enabled: A bool indicating whether the recurring run is enabled or disabled.\n\n Returns:\n A Job object. 
Most important field is id.\n \"\"\"\n job_config = self._create_job_config(\n experiment_id=experiment_id,\n params=params,\n pipeline_package_path=pipeline_package_path,\n pipeline_id=pipeline_id,\n version_id=version_id)\n\n if all([interval_second, cron_expression]) or not any([interval_second, cron_expression]):\n raise ValueError('Either interval_second or cron_expression is required')\n if interval_second is not None:\n trigger = kfp_server_api.models.ApiTrigger(\n periodic_schedule=kfp_server_api.models.ApiPeriodicSchedule(\n start_time=start_time, end_time=end_time, interval_second=interval_second)\n )\n if cron_expression is not None:\n trigger = kfp_server_api.models.ApiTrigger(\n cron_schedule=kfp_server_api.models.ApiCronSchedule(\n start_time=start_time, end_time=end_time, cron=cron_expression)\n )\n\n job_body = kfp_server_api.models.ApiJob(\n enabled=enabled,\n pipeline_spec=job_config.spec,\n resource_references=job_config.resource_references,\n name=job_name,\n description=description,\n no_catchup=no_catchup,\n trigger=trigger,\n max_concurrency=max_concurrency)\n return self._job_api.create_job(body=job_body)\n\n def _create_job_config(self, experiment_id, params, pipeline_package_path, pipeline_id, version_id):\n \"\"\"Create a JobConfig with spec and resource_references.\n\n Args:\n experiment_id: The id of an experiment.\n pipeline_package_path: Local path of the pipeline package(the filename should end with one of the following .tar.gz, .tgz, .zip, .yaml, .yml).\n params: A dictionary with key (string) as param name and value (string) as param value.\n pipeline_id: The id of a pipeline.\n version_id: The id of a pipeline version. \n If both pipeline_id and version_id are specified, pipeline_id will take precendence\n This will change in a future version, so it is recommended to use version_id by itself.\n\n Returns:\n A JobConfig object with attributes spec and resource_reference.\n \"\"\"\n \n class JobConfig:\n def __init__(self, spec, resource_references):\n self.spec = spec\n self.resource_references = resource_references\n\n pipeline_json_string = None\n if pipeline_package_path:\n pipeline_obj = self._extract_pipeline_yaml(pipeline_package_path)\n pipeline_json_string = json.dumps(pipeline_obj)\n api_params = [kfp_server_api.ApiParameter(\n name=sanitize_k8s_name(name=k, allow_capital_underscore=True),\n value=str(v)) for k,v in params.items()]\n resource_references = []\n key = kfp_server_api.models.ApiResourceKey(id=experiment_id,\n type=kfp_server_api.models.ApiResourceType.EXPERIMENT)\n reference = kfp_server_api.models.ApiResourceReference(key=key,\n relationship=kfp_server_api.models.ApiRelationship.OWNER)\n resource_references.append(reference)\n\n if version_id:\n key = kfp_server_api.models.ApiResourceKey(id=version_id,\n type=kfp_server_api.models.ApiResourceType.PIPELINE_VERSION)\n reference = kfp_server_api.models.ApiResourceReference(key=key,\n relationship=kfp_server_api.models.ApiRelationship.CREATOR)\n resource_references.append(reference)\n\n spec = kfp_server_api.models.ApiPipelineSpec(\n pipeline_id=pipeline_id,\n workflow_manifest=pipeline_json_string,\n parameters=api_params)\n return JobConfig(spec=spec, resource_references=resource_references)\n\n def create_run_from_pipeline_func(self, pipeline_func: Callable, arguments: Mapping[str, str], run_name=None, experiment_name=None, pipeline_conf: kfp.dsl.PipelineConf = None, namespace=None):\n \"\"\"Runs pipeline on KFP-enabled Kubernetes cluster.\n\n This command compiles the pipeline 
function, creates or gets an experiment and submits the pipeline for execution.\n\n Args:\n pipeline_func: A function that describes a pipeline by calling components and composing them into execution graph.\n arguments: Arguments to the pipeline function provided as a dict.\n run_name: Optional. Name of the run to be shown in the UI.\n experiment_name: Optional. Name of the experiment to add the run to.\n namespace: Kubernetes namespace where the pipeline runs are created.\n For single user deployment, leave it as None;\n For multi user, input a namespace where the user is authorized\n \"\"\"\n #TODO: Check arguments against the pipeline function\n pipeline_name = pipeline_func.__name__\n run_name = run_name or pipeline_name + ' ' + datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S')\n with tempfile.TemporaryDirectory() as tmpdir:\n pipeline_package_path = os.path.join(tmpdir, 'pipeline.yaml')\n compiler.Compiler().compile(pipeline_func, pipeline_package_path, pipeline_conf=pipeline_conf)\n return self.create_run_from_pipeline_package(pipeline_package_path, arguments, run_name, experiment_name, namespace)\n\n def create_run_from_pipeline_package(self, pipeline_file: str, arguments: Mapping[str, str], run_name=None, experiment_name=None, namespace=None):\n \"\"\"Runs pipeline on KFP-enabled Kubernetes cluster.\n\n This command compiles the pipeline function, creates or gets an experiment and submits the pipeline for execution.\n\n Args:\n pipeline_file: A compiled pipeline package file.\n arguments: Arguments to the pipeline function provided as a dict.\n run_name: Optional. Name of the run to be shown in the UI.\n experiment_name: Optional. Name of the experiment to add the run to.\n namespace: Kubernetes namespace where the pipeline runs are created.\n For single user deployment, leave it as None;\n For multi user, input a namespace where the user is authorized\n \"\"\"\n\n class RunPipelineResult:\n def __init__(self, client, run_info):\n self._client = client\n self.run_info = run_info\n self.run_id = run_info.id\n\n def wait_for_run_completion(self, timeout=None):\n timeout = timeout or datetime.timedelta.max\n return self._client.wait_for_run_completion(self.run_id, timeout)\n\n def __repr__(self):\n return 'RunPipelineResult(run_id={})'.format(self.run_id)\n\n #TODO: Check arguments against the pipeline function\n pipeline_name = os.path.basename(pipeline_file)\n experiment_name = experiment_name or os.environ.get(KF_PIPELINES_DEFAULT_EXPERIMENT_NAME, None)\n overridden_experiment_name = os.environ.get(KF_PIPELINES_OVERRIDE_EXPERIMENT_NAME, experiment_name)\n if overridden_experiment_name != experiment_name:\n import warnings\n warnings.warn('Changing experiment name from \"{}\" to \"{}\".'.format(experiment_name, overridden_experiment_name))\n experiment_name = overridden_experiment_name or 'Default'\n run_name = run_name or (pipeline_name + ' ' +\n datetime.datetime.now().strftime(\n '%Y-%m-%d %H-%M-%S'))\n experiment = self.create_experiment(name=experiment_name, namespace=namespace)\n run_info = self.run_pipeline(experiment.id, run_name, pipeline_file, arguments)\n return RunPipelineResult(self, run_info)\n\n def list_runs(self, page_token='', page_size=10, sort_by='', experiment_id=None, namespace=None):\n \"\"\"List runs, optionally can be filtered by experiment or namespace.\n\n Args:\n page_token: Token for starting of the page.\n page_size: Size of the page.\n sort_by: One of 'field_name', 'field_name desc'. 
For example, 'name desc'.\n experiment_id: Experiment id to filter upon\n namespace: Kubernetes namespace to filter upon.\n For single user deployment, leave it as None;\n For multi user, input a namespace where the user is authorized.\n\n Returns:\n A response object including a list of experiments and next page token.\n \"\"\"\n namespace = namespace or self.get_user_namespace()\n if experiment_id is not None:\n response = self._run_api.list_runs(page_token=page_token, page_size=page_size, sort_by=sort_by, resource_reference_key_type=kfp_server_api.models.api_resource_type.ApiResourceType.EXPERIMENT, resource_reference_key_id=experiment_id)\n elif namespace:\n response = self._run_api.list_runs(page_token=page_token, page_size=page_size, sort_by=sort_by, resource_reference_key_type=kfp_server_api.models.api_resource_type.ApiResourceType.NAMESPACE, resource_reference_key_id=namespace)\n else:\n response = self._run_api.list_runs(page_token=page_token, page_size=page_size, sort_by=sort_by)\n return response\n\n def list_recurring_runs(self, page_token='', page_size=10, sort_by='', experiment_id=None):\n \"\"\"List recurring runs.\n\n Args:\n page_token: Token for starting of the page.\n page_size: Size of the page.\n sort_by: One of 'field_name', 'field_name desc'. For example, 'name desc'.\n experiment_id: Experiment id to filter upon.\n\n Returns:\n A response object including a list of recurring_runs and next page token.\n \"\"\"\n if experiment_id is not None:\n response = self._job_api.list_jobs(page_token=page_token, page_size=page_size, sort_by=sort_by, resource_reference_key_type=kfp_server_api.models.api_resource_type.ApiResourceType.EXPERIMENT, resource_reference_key_id=experiment_id)\n else:\n response = self._job_api.list_jobs(page_token=page_token, page_size=page_size, sort_by=sort_by)\n return response\n\n def get_recurring_run(self, job_id):\n \"\"\"Get recurring_run details.\n\n Args:\n job_id: id of the recurring_run.\n\n Returns:\n A response object including details of a recurring_run.\n\n Throws:\n Exception if recurring_run is not found.\n \"\"\"\n return self._job_api.get_job(id=job_id)\n\n\n def get_run(self, run_id):\n \"\"\"Get run details.\n\n Args:\n run_id: id of the run.\n\n Returns:\n A response object including details of a run.\n\n Throws:\n Exception if run is not found.\n \"\"\"\n return self._run_api.get_run(run_id=run_id)\n\n def wait_for_run_completion(self, run_id, timeout):\n \"\"\"Waits for a run to complete.\n\n Args:\n run_id: Run id, returned from run_pipeline.\n timeout: Timeout in seconds.\n\n Returns:\n A run detail object: Most important fields are run and pipeline_runtime.\n\n Raises:\n TimeoutError: if the pipeline run failed to finish before the specified timeout.\n \"\"\"\n status = 'Running:'\n start_time = datetime.datetime.now()\n last_token_refresh_time = datetime.datetime.now()\n while (status is None or\n status.lower() not in ['succeeded', 'failed', 'skipped', 'error']):\n # Refreshes the access token before it hits the TTL.\n if (datetime.datetime.now() - last_token_refresh_time\n > _GCP_ACCESS_TOKEN_TIMEOUT):\n self._refresh_api_client_token()\n last_token_refresh_time = datetime.datetime.now()\n \n get_run_response = self._run_api.get_run(run_id=run_id)\n status = get_run_response.run.status\n elapsed_time = (datetime.datetime.now() - start_time).seconds\n logging.info('Waiting for the job to complete...')\n if elapsed_time > timeout:\n raise TimeoutError('Run timeout')\n time.sleep(5)\n return get_run_response\n\n def 
_get_workflow_json(self, run_id):\n \"\"\"Get the workflow json.\n\n Args:\n run_id: run id, returned from run_pipeline.\n\n Returns:\n workflow: Json workflow\n \"\"\"\n get_run_response = self._run_api.get_run(run_id=run_id)\n workflow = get_run_response.pipeline_runtime.workflow_manifest\n workflow_json = json.loads(workflow)\n return workflow_json\n\n def upload_pipeline(\n self,\n pipeline_package_path: str = None,\n pipeline_name: str = None,\n description: str = None,\n ):\n \"\"\"Uploads the pipeline to the Kubeflow Pipelines cluster.\n\n Args:\n pipeline_package_path: Local path to the pipeline package.\n pipeline_name: Optional. Name of the pipeline to be shown in the UI.\n description: Optional. Description of the pipeline to be shown in the UI.\n\n Returns:\n Server response object containing pipleine id and other information.\n \"\"\"\n\n response = self._upload_api.upload_pipeline(pipeline_package_path, name=pipeline_name, description=description)\n if self._is_ipython():\n import IPython\n html = 'Pipeline link <a href=%s/#/pipelines/details/%s>here</a>' % (self._get_url_prefix(), response.id)\n IPython.display.display(IPython.display.HTML(html))\n return response\n\n def upload_pipeline_version(\n self,\n pipeline_package_path,\n pipeline_version_name: str,\n pipeline_id: Optional[str] = None,\n pipeline_name: Optional[str] = None\n ):\n \"\"\"Uploads a new version of the pipeline to the Kubeflow Pipelines cluster.\n Args:\n pipeline_package_path: Local path to the pipeline package.\n pipeline_version_name: Name of the pipeline version to be shown in the UI.\n pipeline_id: Optional. Id of the pipeline.\n pipeline_name: Optional. Name of the pipeline.\n Returns:\n Server response object containing pipleine id and other information.\n Throws:\n ValueError when none or both of pipeline_id or pipeline_name are specified\n Exception if pipeline id is not found.\n \"\"\"\n\n if all([pipeline_id, pipeline_name]) or not any([pipeline_id, pipeline_name]):\n raise ValueError('Either pipeline_id or pipeline_name is required')\n\n if pipeline_name:\n pipeline_id = self.get_pipeline_id(pipeline_name)\n\n response = self._upload_api.upload_pipeline_version(\n pipeline_package_path, \n name=pipeline_version_name, \n pipelineid=pipeline_id\n )\n\n if self._is_ipython():\n import IPython\n html = 'Pipeline link <a href=%s/#/pipelines/details/%s>here</a>' % (self._get_url_prefix(), response.id)\n IPython.display.display(IPython.display.HTML(html))\n return response\n\n def get_pipeline(self, pipeline_id):\n \"\"\"Get pipeline details.\n\n Args:\n pipeline_id: id of the pipeline.\n\n Returns:\n A response object including details of a pipeline.\n\n Throws:\n Exception if pipeline is not found.\n \"\"\"\n return self._pipelines_api.get_pipeline(id=pipeline_id)\n\n def delete_pipeline(self, pipeline_id):\n \"\"\"Delete pipeline.\n\n Args:\n pipeline_id: id of the pipeline.\n\n Returns:\n Object. If the method is called asynchronously, returns the request thread.\n\n Throws:\n Exception if pipeline is not found.\n \"\"\"\n return self._pipelines_api.delete_pipeline(id=pipeline_id)\n\n def list_pipeline_versions(self, pipeline_id, page_token='', page_size=10, sort_by=''):\n \"\"\"Lists pipeline versions.\n\n Args:\n pipeline_id: Id of the pipeline to list versions\n page_token: Token for starting of the page.\n page_size: Size of the page.\n sort_by: One of 'field_name', 'field_name des'. 
For example, 'name des'.\n\n Returns:\n A response object including a list of versions and next page token.\n \"\"\"\n\n return self._pipelines_api.list_pipeline_versions(page_token=page_token, page_size=page_size, sort_by=sort_by, resource_key_type=kfp_server_api.models.api_resource_type.ApiResourceType.PIPELINE, resource_key_id=pipeline_id)\n",
"path": "sdk/python/kfp/_client.py"
}
] | diff --git a/sdk/python/kfp/_client.py b/sdk/python/kfp/_client.py
index 6565a273f22..fd8d056306c 100644
--- a/sdk/python/kfp/_client.py
+++ b/sdk/python/kfp/_client.py
@@ -346,6 +346,8 @@ def get_pipeline_id(self, name):
]
})
result = self._pipelines_api.list_pipelines(filter=pipeline_filter)
+ if result.pipelines is None:
+ return None
if len(result.pipelines)==1:
return result.pipelines[0].id
elif len(result.pipelines)>1:
|
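The diff above adds a `None` guard in `get_pipeline_id` so that a filter matching no pipelines no longer crashes on `len(result.pipelines)`. A minimal sketch of the same defensive pattern, assuming nothing about the real KFP SDK (`FakeResult` and `find_pipeline_id` are illustrative names, not SDK objects):

```python
from typing import Optional


class FakeResult:
    """Stand-in for a list_pipelines response; pipelines may be None."""

    def __init__(self, pipelines):
        self.pipelines = pipelines


def find_pipeline_id(result: FakeResult) -> Optional[str]:
    # Guard first: a filter that matches nothing yields pipelines=None,
    # and calling len(None) would raise TypeError.
    if result.pipelines is None:
        return None
    if len(result.pipelines) == 1:
        return result.pipelines[0]
    if len(result.pipelines) > 1:
        raise ValueError("Multiple pipelines share this name; look up by id instead.")
    return None


assert find_pipeline_id(FakeResult(None)) is None
assert find_pipeline_id(FakeResult(["abc-123"])) == "abc-123"
```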
holoviz__panel-1064 | outdated param dependency
It seems panel 0.8 uses `CalendarDateRange` from param. This [was introduced in param 1.9.2](https://github.com/holoviz/param/releases/tag/v1.9.2), but the param dependency is still pinned at >=1.9.0:
https://github.com/holoviz/panel/blob/master/setup.py#L93
This can lead to errors like
```
param.CalendarDateRange: DateRangeSlider,
AttributeError: module 'param' has no attribute 'CalendarDateRange'
```
when upgrading to panel 0.8.0.
Will make a simple PR to fix this
| [
{
"content": "#!/usr/bin/env python\n\nimport os\nimport shutil\nimport sys\nimport json\n\nfrom setuptools import setup, find_packages\nfrom setuptools.command.develop import develop\nfrom setuptools.command.install import install\nfrom setuptools.command.sdist import sdist\n\nimport pyct.build\n\n\ndef get_setup_version(reponame):\n \"\"\"\n Helper to get the current version from either git describe or the\n .version file (if available).\n \"\"\"\n basepath = os.path.split(__file__)[0]\n version_file_path = os.path.join(basepath, reponame, '.version')\n try:\n from param import version\n except Exception:\n version = None\n if version is not None:\n return version.Version.setup_version(basepath, reponame, archive_commit=\"$Format:%h$\")\n else:\n print(\"WARNING: param>=1.6.0 unavailable. If you are installing a package, \"\n \"this warning can safely be ignored. If you are creating a package or \"\n \"otherwise operating in a git repository, you should install param>=1.6.0.\")\n return json.load(open(version_file_path, 'r'))['version_string']\n\n\ndef _build_paneljs():\n from bokeh.ext import build\n print(\"Building custom models:\")\n panel_dir = os.path.join(os.path.dirname(__file__), \"panel\")\n build(panel_dir)\n\n\nclass CustomDevelopCommand(develop):\n \"\"\"Custom installation for development mode.\"\"\"\n\n def run(self):\n _build_paneljs()\n develop.run(self)\n\n\nclass CustomInstallCommand(install):\n \"\"\"Custom installation for install mode.\"\"\"\n\n def run(self):\n _build_paneljs()\n install.run(self)\n\n\nclass CustomSdistCommand(sdist):\n \"\"\"Custom installation for sdist mode.\"\"\"\n\n def run(self):\n _build_paneljs()\n sdist.run(self)\n\n\n_COMMANDS = {\n 'develop': CustomDevelopCommand,\n 'install': CustomInstallCommand,\n 'sdist': CustomSdistCommand,\n}\n\ntry:\n from wheel.bdist_wheel import bdist_wheel\n\n class CustomBdistWheelCommand(bdist_wheel):\n \"\"\"Custom bdist_wheel command to force cancelling qiskit-terra wheel\n creation.\"\"\"\n\n def run(self):\n \"\"\"Do nothing so the command intentionally fails.\"\"\"\n _build_paneljs()\n bdist_wheel.run(self)\n\n _COMMANDS['bdist_wheel'] = CustomBdistWheelCommand\nexcept Exception:\n pass\n\n########## dependencies ##########\n\ninstall_requires = [\n 'bokeh >=1.4.0,<2.0',\n 'param >=1.9.0',\n 'pyviz_comms >=0.7.3',\n 'markdown',\n 'tqdm',\n 'pyct >=0.4.4'\n]\n\n_recommended = [\n 'notebook >=5.4',\n 'holoviews >=1.12.0',\n 'matplotlib',\n 'pillow',\n 'plotly'\n]\n\nextras_require = {\n 'tests': [\n 'flake8',\n 'parameterized',\n 'pytest',\n 'scipy',\n 'nbsmoke >=0.2.0',\n 'pytest-cov',\n 'codecov',\n # For examples\n 'hvplot',\n 'plotly',\n 'altair',\n 'streamz',\n 'vega_datasets',\n 'vtk',\n 'scikit-learn',\n 'datashader',\n 'jupyter_bokeh',\n 'django',\n 'pyvista',\n ],\n 'recommended': _recommended,\n 'doc': _recommended + [\n 'nbsite >=0.6.1',\n 'sphinx_holoviz_theme',\n 'selenium',\n 'phantomjs',\n 'lxml',\n ]\n}\n\nextras_require['all'] = sorted(set(sum(extras_require.values(), [])))\n\n# Superset of what's in pyproject.toml (includes non-python\n# dependencies). Also, pyproject.toml isn't supported by all tools\n# anyway (e.g. older versions of pip, or conda - which also supports\n# non-python dependencies). 
Note that setup_requires isn't used\n# because it doesn't work well with pip.\nextras_require['build'] = [\n 'param >=1.9.0',\n 'pyct >=0.4.4',\n 'setuptools >=30.3.0',\n 'bokeh >=1.4.0',\n 'pyviz_comms >=0.6.0',\n # non-python dependency\n 'nodejs >=9.11.1',\n]\n\nsetup_args = dict(\n name='panel',\n version=get_setup_version(\"panel\"),\n description='A high level app and dashboarding solution for Python.',\n long_description=open('README.md').read() if os.path.isfile('README.md') else 'Consult README.md',\n long_description_content_type=\"text/markdown\",\n author=\"HoloViz\",\n author_email=\"[email protected]\",\n maintainer=\"HoloViz\",\n maintainer_email=\"[email protected]\",\n platforms=['Windows', 'Mac OS X', 'Linux'],\n license='BSD',\n url='http://panel.holoviz.org',\n cmdclass=_COMMANDS,\n packages=find_packages(),\n include_package_data=True,\n classifiers=[\n \"License :: OSI Approved :: BSD License\",\n \"Development Status :: 5 - Production/Stable\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Operating System :: OS Independent\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Financial and Insurance Industry\",\n \"Intended Audience :: Healthcare Industry\",\n \"Intended Audience :: Information Technology\",\n \"Intended Audience :: Legal Industry\",\n \"Intended Audience :: Other Audience\",\n \"Intended Audience :: Science/Research\",\n \"Natural Language :: English\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Visualization\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n \"Topic :: Office/Business\",\n \"Topic :: Office/Business :: Financial\",\n \"Topic :: Software Development :: Libraries\"],\n python_requires=\">=2.7\",\n entry_points={\n 'console_scripts': [\n 'panel = panel.cli:main'\n ]},\n install_requires=install_requires,\n extras_require=extras_require,\n tests_require=extras_require['tests']\n)\n\nif __name__ == \"__main__\":\n example_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),\n 'panel', 'examples')\n\n if 'develop' not in sys.argv and 'egg_info' not in sys.argv:\n pyct.build.examples(example_path, __file__, force=True)\n\n setup(**setup_args)\n\n if os.path.isdir(example_path):\n shutil.rmtree(example_path)\n",
"path": "setup.py"
}
] | [
{
"content": "#!/usr/bin/env python\n\nimport os\nimport shutil\nimport sys\nimport json\n\nfrom setuptools import setup, find_packages\nfrom setuptools.command.develop import develop\nfrom setuptools.command.install import install\nfrom setuptools.command.sdist import sdist\n\nimport pyct.build\n\n\ndef get_setup_version(reponame):\n \"\"\"\n Helper to get the current version from either git describe or the\n .version file (if available).\n \"\"\"\n basepath = os.path.split(__file__)[0]\n version_file_path = os.path.join(basepath, reponame, '.version')\n try:\n from param import version\n except:\n version = None\n if version is not None:\n return version.Version.setup_version(basepath, reponame, archive_commit=\"$Format:%h$\")\n else:\n print(\"WARNING: param>=1.6.0 unavailable. If you are installing a package, this warning can safely be ignored. If you are creating a package or otherwise operating in a git repository, you should install param>=1.6.0.\")\n return json.load(open(version_file_path, 'r'))['version_string']\n\n\ndef _build_paneljs():\n from bokeh.ext import build\n print(\"Building custom models:\")\n panel_dir = os.path.join(os.path.dirname(__file__), \"panel\")\n build(panel_dir)\n\n\nclass CustomDevelopCommand(develop):\n \"\"\"Custom installation for development mode.\"\"\"\n\n def run(self):\n _build_paneljs()\n develop.run(self)\n\n\nclass CustomInstallCommand(install):\n \"\"\"Custom installation for install mode.\"\"\"\n\n def run(self):\n _build_paneljs()\n install.run(self)\n\n\nclass CustomSdistCommand(sdist):\n \"\"\"Custom installation for sdist mode.\"\"\"\n\n def run(self):\n _build_paneljs()\n sdist.run(self)\n\n\n_COMMANDS = {\n 'develop': CustomDevelopCommand,\n 'install': CustomInstallCommand,\n 'sdist': CustomSdistCommand,\n}\n\ntry:\n from wheel.bdist_wheel import bdist_wheel\n\n class CustomBdistWheelCommand(bdist_wheel):\n \"\"\"Custom bdist_wheel command to force cancelling qiskit-terra wheel\n creation.\"\"\"\n\n def run(self):\n \"\"\"Do nothing so the command intentionally fails.\"\"\"\n _build_paneljs()\n bdist_wheel.run(self)\n\n _COMMANDS['bdist_wheel'] = CustomBdistWheelCommand\nexcept:\n pass\n\n########## dependencies ##########\n\ninstall_requires = [\n 'bokeh >=1.4.0',\n 'param >=1.9.0',\n 'pyviz_comms >=0.7.2',\n 'markdown',\n 'pyct >=0.4.4'\n]\n\n_recommended = [\n 'notebook >=5.4',\n 'holoviews >=1.12.0',\n 'matplotlib',\n 'pillow',\n 'plotly'\n]\n\nextras_require = {\n 'tests': [\n 'flake8',\n 'parameterized',\n 'pytest',\n 'scipy',\n 'nbsmoke >=0.2.0',\n 'pytest-cov',\n 'codecov',\n # For examples\n 'hvplot',\n 'plotly',\n 'altair',\n 'streamz',\n 'vega_datasets',\n 'vtk',\n 'scikit-learn',\n 'datashader',\n 'jupyter_bokeh',\n 'nodejs'\n ],\n 'recommended': _recommended,\n 'doc': _recommended + [\n 'nbsite >=0.6.1',\n 'sphinx_holoviz_theme',\n 'selenium',\n 'phantomjs',\n 'lxml',\n 'pyvista'\n ]\n}\n\nextras_require['all'] = sorted(set(sum(extras_require.values(), [])))\n\n# Superset of what's in pyproject.toml (includes non-python\n# dependencies). Also, pyproject.toml isn't supported by all tools\n# anyway (e.g. older versions of pip, or conda - which also supports\n# non-python dependencies). 
Note that setup_requires isn't used\n# because it doesn't work well with pip.\nextras_require['build'] = [\n 'param >=1.9.2',\n 'pyct >=0.4.4',\n 'setuptools >=30.3.0',\n 'bokeh >=1.4.0',\n 'pyviz_comms >=0.6.0',\n # non-python dependency\n 'nodejs >=9.11.1',\n]\n\nsetup_args = dict(\n name='panel',\n version=get_setup_version(\"panel\"),\n description='A high level dashboarding library for python visualization libraries.',\n long_description=open('README.md').read() if os.path.isfile('README.md') else 'Consult README.md',\n long_description_content_type=\"text/markdown\",\n author=\"PyViz developers\",\n author_email=\"[email protected]\",\n maintainer=\"PyViz\",\n maintainer_email=\"[email protected]\",\n platforms=['Windows', 'Mac OS X', 'Linux'],\n license='BSD',\n url='http://pyviz.org',\n cmdclass=_COMMANDS,\n packages=find_packages(),\n include_package_data=True,\n classifiers=[\n \"License :: OSI Approved :: BSD License\",\n \"Development Status :: 5 - Production/Stable\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Operating System :: OS Independent\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Financial and Insurance Industry\",\n \"Intended Audience :: Healthcare Industry\",\n \"Intended Audience :: Information Technology\",\n \"Intended Audience :: Legal Industry\",\n \"Intended Audience :: Other Audience\",\n \"Intended Audience :: Science/Research\",\n \"Natural Language :: English\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Visualization\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n \"Topic :: Office/Business\",\n \"Topic :: Office/Business :: Financial\",\n \"Topic :: Software Development :: Libraries\"],\n python_requires=\">=2.7\",\n entry_points={\n 'console_scripts': [\n 'panel = panel.cli:main'\n ]},\n install_requires=install_requires,\n extras_require=extras_require,\n tests_require=extras_require['tests']\n)\n\nif __name__ == \"__main__\":\n example_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),\n 'panel', 'examples')\n\n if 'develop' not in sys.argv and 'egg_info' not in sys.argv:\n pyct.build.examples(example_path, __file__, force=True)\n\n setup(**setup_args)\n\n if os.path.isdir(example_path):\n shutil.rmtree(example_path)\n",
"path": "setup.py"
}
] | diff --git a/setup.py b/setup.py
index bc598ae72d..14848474a0 100644
--- a/setup.py
+++ b/setup.py
@@ -142,7 +142,7 @@ def run(self):
# non-python dependencies). Note that setup_requires isn't used
# because it doesn't work well with pip.
extras_require['build'] = [
- 'param >=1.9.0',
+ 'param >=1.9.2',
'pyct >=0.4.4',
'setuptools >=30.3.0',
'bokeh >=1.4.0',
|
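Beyond bumping the pin, a defensive import-time check can make the failure mode clearer for users stuck on an old param. This is a hedged sketch, not Panel code; it only assumes `param` exposes `__version__` and may or may not define `CalendarDateRange`:

```python
import param

# Fail early with an actionable message instead of an AttributeError deep in the widgets.
if not hasattr(param, "CalendarDateRange"):
    raise ImportError(
        "param %s is too old: CalendarDateRange was added in param 1.9.2; "
        "upgrade with `pip install 'param>=1.9.2'`."
        % getattr(param, "__version__", "unknown")
    )
```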
holoviz__panel-2616 | --autoreload raises AttributeError: 'NoneType' object has no attribute 'stop'
I'm on the current Panel master. When I run `panel serve 'script.py' --autoreload` with this code
```python
import panel as pn
pn.extension()
import numpy as np
import holoviews as hv
from holoviews import opts, streams
from holoviews.plotting.links import DataLink
hv.extension('bokeh')
curve = hv.Curve(np.random.randn(10).cumsum()).opts(responsive=True, line_width=6)
table = hv.Table(curve).opts(editable=True)
component=pn.pane.HoloViews(table, height=500, sizing_mode="stretch_both")
pn.template.FastListTemplate(title="Table", main=[component]).servable()
```
and then change the code, I get this error:
```bash
2021-08-04 06:40:44,760 Error thrown from periodic callback:
2021-08-04 06:40:44,763 Traceback (most recent call last):
File "c:\repos\private\panel_docker\panel\.venv\lib\site-packages\tornado\gen.py", line 526, in callback
result_list.append(f.result())
File "c:\repos\private\panel_docker\panel\.venv\lib\site-packages\bokeh\server\session.py", line 67, in _needs_document_lock_wrapper
result = func(self, *args, **kwargs)
File "c:\repos\private\panel_docker\panel\.venv\lib\site-packages\bokeh\server\session.py", line 195, in with_document_locked
return func(*args, **kwargs)
File "c:\repos\private\panel_docker\panel\.venv\lib\site-packages\bokeh\document\document.py", line 1212, in wrapper
return doc._with_self_as_curdoc(invoke)
File "c:\repos\private\panel_docker\panel\.venv\lib\site-packages\bokeh\document\document.py", line 1198, in _with_self_as_curdoc
return f()
File "c:\repos\private\panel_docker\panel\.venv\lib\site-packages\bokeh\document\document.py", line 1211, in invoke
return f(*args, **kwargs)
File "c:\repos\private\panel_docker\panel\panel\io\callbacks.py", line 72, in _periodic_callback
self.callback()
File "c:\repos\private\panel_docker\panel\panel\io\reload.py", line 155, in _reload_on_update
_check_file(modify_times, path)
File "c:\repos\private\panel_docker\panel\panel\io\reload.py", line 134, in _check_file
_reload(module)
File "c:\repos\private\panel_docker\panel\panel\io\reload.py", line 117, in _reload
cb.stop()
File "c:\repos\private\panel_docker\panel\panel\io\callbacks.py", line 134, in stop
self._cb.stop()
AttributeError: 'NoneType' object has no attribute 'stop'
```
I believe this would be a major issue if 0.12.1 were released before fixing it, @philippjfr
| [
{
"content": "\"\"\"\nDefines callbacks to be executed on a thread or by scheduling it\non a running bokeh server.\n\"\"\"\nimport time\nimport param\n\nfrom bokeh.io import curdoc as _curdoc\n\nfrom ..util import edit_readonly\nfrom .state import state\n\n\nclass PeriodicCallback(param.Parameterized):\n \"\"\"\n Periodic encapsulates a periodic callback which will run both\n in tornado based notebook environments and on bokeh server. By\n default the callback will run until the stop method is called,\n but count and timeout values can be set to limit the number of\n executions or the maximum length of time for which the callback\n will run. The callback may also be started and stopped by setting\n the running parameter to True or False respectively.\n \"\"\"\n\n callback = param.Callable(doc=\"\"\"\n The callback to execute periodically.\"\"\")\n\n count = param.Integer(default=None, doc=\"\"\"\n Number of times the callback will be executed, by default\n this is unlimited.\"\"\")\n\n period = param.Integer(default=500, doc=\"\"\"\n Period in milliseconds at which the callback is executed.\"\"\")\n\n timeout = param.Integer(default=None, doc=\"\"\"\n Timeout in milliseconds from the start time at which the callback\n expires.\"\"\")\n\n running = param.Boolean(default=False, doc=\"\"\"\n Toggles whether the periodic callback is currently running.\"\"\")\n\n def __init__(self, **params):\n super().__init__(**params)\n self._counter = 0\n self._start_time = None\n self._cb = None\n self._updating = False\n self._doc = None\n\n @param.depends('running', watch=True)\n def _start(self):\n if not self.running or self._updating:\n return\n self.start()\n\n @param.depends('running', watch=True)\n def _stop(self):\n if self.running or self._updating:\n return\n self.stop()\n\n @param.depends('period', watch=True)\n def _update_period(self):\n if self._cb:\n self.stop()\n self.start()\n\n def _periodic_callback(self):\n with edit_readonly(state):\n state.busy = True\n try:\n self.callback()\n finally:\n with edit_readonly(state):\n state.busy = False\n self._counter += 1\n if self.timeout is not None:\n dt = (time.time() - self._start_time) * 1000\n if dt > self.timeout:\n self.stop()\n if self._counter == self.count:\n self.stop()\n\n @property\n def counter(self):\n \"\"\"\n Returns the execution count of the periodic callback.\n \"\"\"\n return self._counter\n\n def _cleanup(self, session_context):\n self.stop()\n\n def start(self):\n \"\"\"\n Starts running the periodic callback.\n \"\"\"\n if self._cb is not None:\n raise RuntimeError('Periodic callback has already started.')\n if not self.running:\n try:\n self._updating = True\n self.running = True\n finally:\n self._updating = False\n self._start_time = time.time()\n if state.curdoc:\n self._doc = state.curdoc\n self._cb = self._doc.add_periodic_callback(self._periodic_callback, self.period)\n else:\n from tornado.ioloop import PeriodicCallback\n self._cb = PeriodicCallback(self._periodic_callback, self.period)\n self._cb.start()\n try:\n state.on_session_destroyed(self._cleanup)\n except Exception:\n pass\n\n def stop(self):\n \"\"\"\n Stops running the periodic callback.\n \"\"\"\n if self.running:\n try:\n self._updating = True\n self.running = False\n finally:\n self._updating = False\n self._counter = 0\n self._timeout = None\n if self._doc:\n self._doc.remove_periodic_callback(self._cb)\n else:\n self._cb.stop()\n self._cb = None\n doc = self._doc or _curdoc()\n if doc:\n doc.session_destroyed_callbacks = {\n cb for cb in 
doc.session_destroyed_callbacks\n if cb is not self._cleanup\n }\n self._doc = None\n",
"path": "panel/io/callbacks.py"
}
] | [
{
"content": "\"\"\"\nDefines callbacks to be executed on a thread or by scheduling it\non a running bokeh server.\n\"\"\"\nimport time\nimport param\n\nfrom bokeh.io import curdoc as _curdoc\n\nfrom ..util import edit_readonly\nfrom .state import state\n\n\nclass PeriodicCallback(param.Parameterized):\n \"\"\"\n Periodic encapsulates a periodic callback which will run both\n in tornado based notebook environments and on bokeh server. By\n default the callback will run until the stop method is called,\n but count and timeout values can be set to limit the number of\n executions or the maximum length of time for which the callback\n will run. The callback may also be started and stopped by setting\n the running parameter to True or False respectively.\n \"\"\"\n\n callback = param.Callable(doc=\"\"\"\n The callback to execute periodically.\"\"\")\n\n count = param.Integer(default=None, doc=\"\"\"\n Number of times the callback will be executed, by default\n this is unlimited.\"\"\")\n\n period = param.Integer(default=500, doc=\"\"\"\n Period in milliseconds at which the callback is executed.\"\"\")\n\n timeout = param.Integer(default=None, doc=\"\"\"\n Timeout in milliseconds from the start time at which the callback\n expires.\"\"\")\n\n running = param.Boolean(default=False, doc=\"\"\"\n Toggles whether the periodic callback is currently running.\"\"\")\n\n def __init__(self, **params):\n super().__init__(**params)\n self._counter = 0\n self._start_time = None\n self._cb = None\n self._updating = False\n self._doc = None\n\n @param.depends('running', watch=True)\n def _start(self):\n if not self.running or self._updating:\n return\n self.start()\n\n @param.depends('running', watch=True)\n def _stop(self):\n if self.running or self._updating:\n return\n self.stop()\n\n @param.depends('period', watch=True)\n def _update_period(self):\n if self._cb:\n self.stop()\n self.start()\n\n def _periodic_callback(self):\n with edit_readonly(state):\n state.busy = True\n try:\n self.callback()\n finally:\n with edit_readonly(state):\n state.busy = False\n self._counter += 1\n if self.timeout is not None:\n dt = (time.time() - self._start_time) * 1000\n if dt > self.timeout:\n self.stop()\n if self._counter == self.count:\n self.stop()\n\n @property\n def counter(self):\n \"\"\"\n Returns the execution count of the periodic callback.\n \"\"\"\n return self._counter\n\n def _cleanup(self, session_context):\n self.stop()\n\n def start(self):\n \"\"\"\n Starts running the periodic callback.\n \"\"\"\n if self._cb is not None:\n raise RuntimeError('Periodic callback has already started.')\n if not self.running:\n try:\n self._updating = True\n self.running = True\n finally:\n self._updating = False\n self._start_time = time.time()\n if state.curdoc:\n self._doc = state.curdoc\n self._cb = self._doc.add_periodic_callback(self._periodic_callback, self.period)\n else:\n from tornado.ioloop import PeriodicCallback\n self._cb = PeriodicCallback(self._periodic_callback, self.period)\n self._cb.start()\n try:\n state.on_session_destroyed(self._cleanup)\n except Exception:\n pass\n\n def stop(self):\n \"\"\"\n Stops running the periodic callback.\n \"\"\"\n if self.running:\n try:\n self._updating = True\n self.running = False\n finally:\n self._updating = False\n self._counter = 0\n self._timeout = None\n if self._doc:\n self._doc.remove_periodic_callback(self._cb)\n elif self._cb:\n self._cb.stop()\n self._cb = None\n doc = self._doc or _curdoc()\n if doc:\n doc.session_destroyed_callbacks = {\n cb for cb in 
doc.session_destroyed_callbacks\n if cb is not self._cleanup\n }\n self._doc = None\n",
"path": "panel/io/callbacks.py"
}
] | diff --git a/panel/io/callbacks.py b/panel/io/callbacks.py
index b6ceb263f0..0176a1f7bd 100644
--- a/panel/io/callbacks.py
+++ b/panel/io/callbacks.py
@@ -130,7 +130,7 @@ def stop(self):
self._timeout = None
if self._doc:
self._doc.remove_periodic_callback(self._cb)
- else:
+ elif self._cb:
self._cb.stop()
self._cb = None
doc = self._doc or _curdoc()
|
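The one-line fix turns the unconditional `else: self._cb.stop()` into `elif self._cb:`, so `stop()` tolerates a callback that was never started, which is exactly what `--autoreload` triggers during teardown. A stripped-down sketch of the idea, independent of Bokeh and Tornado; `TinyPeriodicCallback` is illustrative only:

```python
class TinyPeriodicCallback:
    def __init__(self):
        self._cb = None       # underlying timer; only set by start()
        self.running = False

    def start(self):
        self._cb = object()   # stand-in for a real tornado PeriodicCallback
        self.running = True

    def stop(self):
        self.running = False
        # The fix: never assume start() ran; only touch the timer if it exists.
        if self._cb is not None:
            # a real implementation would call self._cb.stop() here
            self._cb = None


cb = TinyPeriodicCallback()
cb.stop()                     # safe even though start() was never called
cb.start()
cb.stop()
assert cb._cb is None and not cb.running
```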
qtile__qtile-2254 | Qtile starts with the default config on login
Hi, today when I logged in to my Arch Linux machine with qtile, it opened with the default config. I saw another post with a similar problem, but the suggested fix didn't work. This is the qtile log:
```
2021-02-22 13:35:55,667 WARNING libqtile lifecycle.py:_atexit():L38 Qtile will now terminate
2021-02-22 13:36:01,032 WARNING libqtile floating.py:__init__():L109 Non-config.Match objects in float_rules are deprecated
2021-02-22 13:36:01,032 ERROR libqtile confreader.py:load():L106 Could not import config file '/home/sailentk/.config/qtile/config.py'
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/libqtile/confreader.py", line 101, in load
config = __import__(name) # noqa: F811
File "/home/sailentk/.config/qtile/config.py", line 9, in <module>
from settings.widgets import widget_defaults, extension_defaults
File "/home/sailentk/.config/qtile/settings/widgets.py", line 64, in <module>
widget.Pacman(**base(bg='color4'), update_interval=1800),
File "/usr/lib/python3.9/site-packages/libqtile/utils.py", line 226, in __getattr__
raise AttributeError
AttributeError
2021-02-22 13:36:01,033 ERROR libqtile manager.py:load_config():L107 Error while reading config file (Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/libqtile/confreader.py", line 101, in load
config = __import__(name) # noqa: F811
File "/home/sailentk/.config/qtile/config.py", line 9, in <module>
from settings.widgets import widget_defaults, extension_defaults
File "/home/sailentk/.config/qtile/settings/widgets.py", line 64, in <module>
widget.Pacman(**base(bg='color4'), update_interval=1800),
File "/usr/lib/python3.9/site-packages/libqtile/utils.py", line 226, in __getattr__
raise AttributeError
AttributeError
)
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/libqtile/confreader.py", line 101, in load
config = __import__(name) # noqa: F811
File "/home/sailentk/.config/qtile/config.py", line 9, in <module>
from settings.widgets import widget_defaults, extension_defaults
File "/home/sailentk/.config/qtile/settings/widgets.py", line 64, in <module>
widget.Pacman(**base(bg='color4'), update_interval=1800),
File "/usr/lib/python3.9/site-packages/libqtile/utils.py", line 226, in __getattr__
raise AttributeError
AttributeError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/libqtile/core/manager.py", line 104, in load_config
self.config.load()
File "/usr/lib/python3.9/site-packages/libqtile/confreader.py", line 108, in load
raise ConfigError(tb)
libqtile.confreader.ConfigError: Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/libqtile/confreader.py", line 101, in load
config = __import__(name) # noqa: F811
File "/home/sailentk/.config/qtile/config.py", line 9, in <module>
from settings.widgets import widget_defaults, extension_defaults
File "/home/sailentk/.config/qtile/settings/widgets.py", line 64, in <module>
widget.Pacman(**base(bg='color4'), update_interval=1800),
File "/usr/lib/python3.9/site-packages/libqtile/utils.py", line 226, in __getattr__
raise AttributeError
AttributeError
```
| [
{
"content": "# Copyright (c) 2021, Tycho Andersen. All rights reserved.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nimport os\nimport os.path\nimport shutil\nimport sys\n\nBACKUP_SUFFIX = \".migrate.bak\"\n\ntry:\n import bowler\nexcept ImportError:\n pass\n\n\ndef rename_hook(config, fro, to):\n # could match on dotted_name< 'hook' '.' 'subscribe' '.' '{name}' >\n # but the replacement gets more complicated...\n selector = \"'{name}'\".format(name=fro)\n q = bowler.Query(config).select_pattern(selector)\n q.current.kwargs[\"name\"] = fro\n return q.rename(to)\n\n\ndef client_name_updated(config):\n \"\"\" Rename window_name_change -> client_name_updated\"\"\"\n return rename_hook(config, \"window_name_change\", \"client_name_updated\")\n\n\ndef tile_master_windows_rename(config):\n return (\n bowler.Query(config)\n .select_function(\"Tile\")\n .modify_argument(\"masterWindows\", \"master_length\")\n )\n\n\ndef threaded_poll_text_rename(config):\n return (\n bowler.Query(config)\n .select_class(\"ThreadedPollText\")\n .rename(\"ThreadPoolText\")\n )\n\n\nMIGRATIONS = [\n client_name_updated,\n tile_master_windows_rename,\n threaded_poll_text_rename,\n]\n\n\nMODULE_RENAMES = [\n (\"libqtile.command_graph\", \"libqtile.command.graph\"),\n (\"libqtile.command_client\", \"libqtile.command.client\"),\n (\"libqtile.command_interface\", \"libqtile.command.interface\"),\n (\"libqtile.command_object\", \"libqtile.command.object\"),\n]\n\nfor (fro, to) in MODULE_RENAMES:\n def f(config, fro=fro, to=to):\n return (\n bowler.Query(config)\n .select_module(fro)\n .rename(to)\n )\n MIGRATIONS.append(f)\n\n\ndef do_migrate(args):\n if \"bowler\" not in sys.modules:\n print(\"bowler can't be found, not migrating config file\")\n print(\"install it and try again\")\n sys.exit(1)\n\n shutil.copyfile(args.config, args.config+BACKUP_SUFFIX)\n\n for m in MIGRATIONS:\n m(args.config).execute(interactive=args.interactive, write=True)\n\n\ndef add_subcommand(subparsers):\n parser = subparsers.add_parser(\n \"migrate\", help=\"Migrate a configuration file to the current API\"\n )\n parser.add_argument(\n \"-c\",\n \"--config\",\n action=\"store\",\n default=os.path.expanduser(\n os.path.join(os.getenv(\"XDG_CONFIG_HOME\", \"~/.config\"), \"qtile\", \"config.py\")\n ),\n help=\"Use the specified configuration file\",\n )\n parser.add_argument(\n \"--interactive\",\n action=\"store_true\",\n help=\"Interactively apply diff (similar to git add -p)\",\n )\n parser.set_defaults(func=do_migrate)\n",
"path": "libqtile/scripts/migrate.py"
}
] | [
{
"content": "# Copyright (c) 2021, Tycho Andersen. All rights reserved.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nimport os\nimport os.path\nimport shutil\nimport sys\n\nBACKUP_SUFFIX = \".migrate.bak\"\n\ntry:\n import bowler\nexcept ImportError:\n pass\n\n\ndef rename_hook(config, fro, to):\n # could match on dotted_name< 'hook' '.' 'subscribe' '.' '{name}' >\n # but the replacement gets more complicated...\n selector = \"'{name}'\".format(name=fro)\n q = bowler.Query(config).select_pattern(selector)\n q.current.kwargs[\"name\"] = fro\n return q.rename(to)\n\n\ndef client_name_updated(config):\n \"\"\" Rename window_name_change -> client_name_updated\"\"\"\n return rename_hook(config, \"window_name_change\", \"client_name_updated\")\n\n\ndef tile_master_windows_rename(config):\n return (\n bowler.Query(config)\n .select_function(\"Tile\")\n .modify_argument(\"masterWindows\", \"master_length\")\n )\n\n\ndef threaded_poll_text_rename(config):\n return (\n bowler.Query(config)\n .select_class(\"ThreadedPollText\")\n .rename(\"ThreadPoolText\")\n )\n\n\ndef pacman_to_checkupdates(config):\n return (\n bowler.Query(config)\n .select_class(\"Pacman\")\n .rename(\"CheckUpdates\")\n )\n\n\nMIGRATIONS = [\n client_name_updated,\n tile_master_windows_rename,\n threaded_poll_text_rename,\n pacman_to_checkupdates,\n]\n\n\nMODULE_RENAMES = [\n (\"libqtile.command_graph\", \"libqtile.command.graph\"),\n (\"libqtile.command_client\", \"libqtile.command.client\"),\n (\"libqtile.command_interface\", \"libqtile.command.interface\"),\n (\"libqtile.command_object\", \"libqtile.command.object\"),\n]\n\nfor (fro, to) in MODULE_RENAMES:\n def f(config, fro=fro, to=to):\n return (\n bowler.Query(config)\n .select_module(fro)\n .rename(to)\n )\n MIGRATIONS.append(f)\n\n\ndef do_migrate(args):\n if \"bowler\" not in sys.modules:\n print(\"bowler can't be found, not migrating config file\")\n print(\"install it and try again\")\n sys.exit(1)\n\n shutil.copyfile(args.config, args.config+BACKUP_SUFFIX)\n\n for m in MIGRATIONS:\n m(args.config).execute(interactive=args.interactive, write=True)\n\n\ndef add_subcommand(subparsers):\n parser = subparsers.add_parser(\n \"migrate\", help=\"Migrate a configuration file to the current API\"\n )\n parser.add_argument(\n \"-c\",\n \"--config\",\n action=\"store\",\n default=os.path.expanduser(\n os.path.join(os.getenv(\"XDG_CONFIG_HOME\", \"~/.config\"), \"qtile\", \"config.py\")\n ),\n help=\"Use the specified configuration file\",\n )\n parser.add_argument(\n 
\"--interactive\",\n action=\"store_true\",\n help=\"Interactively apply diff (similar to git add -p)\",\n )\n parser.set_defaults(func=do_migrate)\n",
"path": "libqtile/scripts/migrate.py"
}
] | diff --git a/libqtile/scripts/migrate.py b/libqtile/scripts/migrate.py
index 9d7931cbe9..98b1a2ace0 100644
--- a/libqtile/scripts/migrate.py
+++ b/libqtile/scripts/migrate.py
@@ -59,10 +59,19 @@ def threaded_poll_text_rename(config):
)
+def pacman_to_checkupdates(config):
+ return (
+ bowler.Query(config)
+ .select_class("Pacman")
+ .rename("CheckUpdates")
+ )
+
+
MIGRATIONS = [
client_name_updated,
tile_master_windows_rename,
threaded_poll_text_rename,
+ pacman_to_checkupdates,
]
diff --git a/test/test_migrate.py b/test/test_migrate.py
index f4fc4cc4fc..84f007440f 100644
--- a/test/test_migrate.py
+++ b/test/test_migrate.py
@@ -119,3 +119,21 @@ class MyWidget(ThreadPoolText):
""")
check_migrate(orig, expected)
+
+
+def test_pacman():
+ orig = textwrap.dedent("""
+ from libqtile import bar
+ from libqtile.widget import Pacman
+
+ bar.Bar([Pacman()])
+ """)
+
+ expected = textwrap.dedent("""
+ from libqtile import bar
+ from libqtile.widget import CheckUpdates
+
+ bar.Bar([CheckUpdates()])
+ """)
+
+ check_migrate(orig, expected)
|
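The added migration is a plain class rename driven by bowler, mirroring the other entries in `MIGRATIONS`. A hedged sketch of that same rename in isolation (it assumes bowler is installed; the config path is a hypothetical example):

```python
import bowler


def pacman_to_checkupdates(config_path):
    # Select the removed Pacman widget class in the user's config and rename it
    # to CheckUpdates, the widget that replaced it.
    return (
        bowler.Query(config_path)
        .select_class("Pacman")
        .rename("CheckUpdates")
    )


# Applied the way `qtile migrate` applies it (write=True edits the file in place):
# pacman_to_checkupdates("/home/user/.config/qtile/config.py").execute(
#     interactive=False, write=True)
```

In a config edited by hand, the equivalent fix is replacing `widget.Pacman(...)` with `widget.CheckUpdates(...)`.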
translate__translate-3435 | multistring needs a __hash__ method
In old ttk you could do something like
``` python
foo = multistring("foo")
foodict = {foo: "bar"}
assert 'foo' in foodict
```
It seems this no longer works - not sure why, but a `__hash__` method that returns `hash(str(self))` should fix the problem, I believe.
@claudep @julen any thoughts on this?
| [
{
"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2006 Zuza Software Foundation\n#\n# This file is part of translate.\n#\n# translate is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# translate is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Supports a hybrid Unicode string that can also have a list of alternate\nstrings in the strings attribute\n\"\"\"\n\nimport warnings\n\nimport six\n\nfrom .deprecation import RemovedInTTK2Warning\n\n\ndef _create_text_type(newtype, string, encoding):\n \"\"\"Helper to construct a text type out of characters or bytes. Required to\n temporarily preserve backwards compatibility. Must be removed in TTK2.\n \"\"\"\n if isinstance(string, six.text_type):\n return six.text_type.__new__(newtype, string)\n\n warnings.warn(\n 'Passing non-ASCII bytes as well as the `encoding` argument to '\n '`multistring` is deprecated. Always pass unicode characters instead.',\n RemovedInTTK2Warning, stacklevel=2,\n )\n return six.text_type.__new__(newtype, string or six.binary_type(), encoding)\n\n\nclass multistring(six.text_type):\n\n def __new__(newtype, string=u\"\", *args, **kwargs):\n encoding = kwargs.pop('encoding', 'utf-8')\n if isinstance(string, list):\n if not string:\n raise ValueError(\"multistring must contain at least one string\")\n newstring = _create_text_type(newtype, string[0], encoding)\n newstring.strings = [newstring] + [multistring.__new__(newtype, altstring) for altstring in string[1:]]\n else:\n newstring = _create_text_type(newtype, string, encoding)\n newstring.strings = [newstring]\n return newstring\n\n def __init__(self, *args, **kwargs):\n super(multistring, self).__init__()\n if not hasattr(self, \"strings\"):\n self.strings = []\n\n def __cmp__(self, otherstring):\n def cmp_compat(s1, s2):\n # Python 3 compatible cmp() equivalent\n return (s1 > s2) - (s1 < s2)\n if isinstance(otherstring, multistring):\n parentcompare = cmp_compat(six.text_type(self), otherstring)\n if parentcompare:\n return parentcompare\n else:\n return cmp_compat(self.strings[1:], otherstring.strings[1:])\n elif isinstance(otherstring, six.text_type):\n return cmp_compat(six.text_type(self), otherstring)\n elif isinstance(otherstring, bytes):\n return cmp_compat(self.encode('utf-8'), otherstring)\n elif isinstance(otherstring, list) and otherstring:\n return cmp_compat(self, multistring(otherstring))\n else:\n return cmp_compat(str(type(self)), str(type(otherstring)))\n\n def __hash__(self):\n return hash(''.join(self.strings))\n\n def __ne__(self, otherstring):\n return self.__cmp__(otherstring) != 0\n\n def __eq__(self, otherstring):\n return self.__cmp__(otherstring) == 0\n\n def __repr__(self):\n _repr = u\"multistring(%r)\" % (\n [six.text_type(item) for item in self.strings]\n )\n return _repr.encode('utf-8') if six.PY2 else _repr\n\n def __str__(self):\n if six.PY2:\n return self.encode('utf-8')\n return super(multistring, self).__str__()\n\n def replace(self, old, new, count=None):\n if count is None:\n newstr = multistring(super(multistring, 
self).replace(old, new))\n else:\n newstr = multistring(super(multistring, self).replace(old, new, count))\n for s in self.strings[1:]:\n if count is None:\n newstr.strings.append(s.replace(old, new))\n else:\n newstr.strings.append(s.replace(old, new, count))\n return newstr\n",
"path": "translate/misc/multistring.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2006 Zuza Software Foundation\n#\n# This file is part of translate.\n#\n# translate is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# translate is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Supports a hybrid Unicode string that can also have a list of alternate\nstrings in the strings attribute\n\"\"\"\n\nimport warnings\n\nimport six\n\nfrom .deprecation import RemovedInTTK2Warning\n\n\ndef _create_text_type(newtype, string, encoding):\n \"\"\"Helper to construct a text type out of characters or bytes. Required to\n temporarily preserve backwards compatibility. Must be removed in TTK2.\n \"\"\"\n if isinstance(string, six.text_type):\n return six.text_type.__new__(newtype, string)\n\n warnings.warn(\n 'Passing non-ASCII bytes as well as the `encoding` argument to '\n '`multistring` is deprecated. Always pass unicode characters instead.',\n RemovedInTTK2Warning, stacklevel=2,\n )\n return six.text_type.__new__(newtype, string or six.binary_type(), encoding)\n\n\nclass multistring(six.text_type):\n\n def __new__(newtype, string=u\"\", *args, **kwargs):\n encoding = kwargs.pop('encoding', 'utf-8')\n if isinstance(string, list):\n if not string:\n raise ValueError(\"multistring must contain at least one string\")\n newstring = _create_text_type(newtype, string[0], encoding)\n newstring.strings = [newstring] + [multistring.__new__(newtype, altstring) for altstring in string[1:]]\n else:\n newstring = _create_text_type(newtype, string, encoding)\n newstring.strings = [newstring]\n return newstring\n\n def __init__(self, *args, **kwargs):\n super(multistring, self).__init__()\n if not hasattr(self, \"strings\"):\n self.strings = []\n\n def __cmp__(self, otherstring):\n def cmp_compat(s1, s2):\n # Python 3 compatible cmp() equivalent\n return (s1 > s2) - (s1 < s2)\n if isinstance(otherstring, multistring):\n parentcompare = cmp_compat(six.text_type(self), otherstring)\n if parentcompare:\n return parentcompare\n else:\n return cmp_compat(self.strings[1:], otherstring.strings[1:])\n elif isinstance(otherstring, six.text_type):\n return cmp_compat(six.text_type(self), otherstring)\n elif isinstance(otherstring, bytes):\n return cmp_compat(self.encode('utf-8'), otherstring)\n elif isinstance(otherstring, list) and otherstring:\n return cmp_compat(self, multistring(otherstring))\n else:\n return cmp_compat(str(type(self)), str(type(otherstring)))\n\n def __hash__(self):\n return hash(str(self))\n\n def __ne__(self, otherstring):\n return self.__cmp__(otherstring) != 0\n\n def __eq__(self, otherstring):\n return self.__cmp__(otherstring) == 0\n\n def __repr__(self):\n _repr = u\"multistring(%r)\" % (\n [six.text_type(item) for item in self.strings]\n )\n return _repr.encode('utf-8') if six.PY2 else _repr\n\n def __str__(self):\n if six.PY2:\n return self.encode('utf-8')\n return super(multistring, self).__str__()\n\n def replace(self, old, new, count=None):\n if count is None:\n newstr = multistring(super(multistring, 
self).replace(old, new))\n else:\n newstr = multistring(super(multistring, self).replace(old, new, count))\n for s in self.strings[1:]:\n if count is None:\n newstr.strings.append(s.replace(old, new))\n else:\n newstr.strings.append(s.replace(old, new, count))\n return newstr\n",
"path": "translate/misc/multistring.py"
}
] | diff --git a/translate/misc/multistring.py b/translate/misc/multistring.py
index c32a957266..87e6a9ec79 100644
--- a/translate/misc/multistring.py
+++ b/translate/misc/multistring.py
@@ -82,7 +82,7 @@ def cmp_compat(s1, s2):
return cmp_compat(str(type(self)), str(type(otherstring)))
def __hash__(self):
- return hash(''.join(self.strings))
+ return hash(str(self))
def __ne__(self, otherstring):
return self.__cmp__(otherstring) != 0
diff --git a/translate/misc/test_multistring.py b/translate/misc/test_multistring.py
index 1ca2e431fd..31d8c1319c 100644
--- a/translate/misc/test_multistring.py
+++ b/translate/misc/test_multistring.py
@@ -97,3 +97,12 @@ def test_list_coercion(self):
assert six.text_type([t(u"tést")]) == u"[multistring(['tést'])]"
else:
assert six.text_type([t(u"tést")]) == u"[multistring([u't\\xe9st'])]"
+
+ def test_multistring_hash(self):
+ t = multistring.multistring
+ foo = t([u"foo", u"bar"])
+ foodict = {foo: "baz"}
+ assert u"foo" in foodict
+ foodict2 = {"foo": "baz"}
+ assert foo in foodict2
+ assert hash(str(foo)) == hash(foo)
|
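The fix keys the hash on `str(self)` so a `multistring` hashes exactly like its primary text, which is what the new test asserts. A minimal standalone sketch of the underlying rule — a class that defines `__eq__` needs a `__hash__` consistent with it for dict lookups against plain strings to work; `TinyMulti` is an illustrative stand-in, not the real class:

```python
# Dict lookup hashes the key first, then compares with ==. If
# hash(multistring("foo")) != hash("foo"), the lookup misses even though
# the two values compare equal.
class TinyMulti(str):
    def __new__(cls, strings):
        obj = str.__new__(cls, strings[0])
        obj.strings = list(strings)
        return obj

    def __eq__(self, other):
        return str(self) == str(other)

    def __hash__(self):
        # Same rule as the fix: hash only the primary string.
        return hash(str(self))


foo = TinyMulti(["foo", "bar"])
d = {foo: "baz"}
assert "foo" in d              # plain-string lookup finds the multistring key
assert foo in {"foo": "baz"}   # and the multistring finds the plain-string key
assert hash(foo) == hash("foo")
```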
mozilla__bugbug-200 | Use 'product' and 'component' features in the models
b7369ea8bf282941ce4b378ad5ad3c832db20668 introduced the features, but we are still not using them.
| [
{
"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport xgboost\nfrom imblearn.over_sampling import BorderlineSMOTE\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import bug_features\nfrom bugbug import bugzilla\nfrom bugbug import labels\nfrom bugbug.model import Model\n\n\nclass BugModel(Model):\n def __init__(self, lemmatization=False):\n Model.__init__(self, lemmatization)\n\n self.sampler = BorderlineSMOTE(random_state=0)\n\n feature_extractors = [\n bug_features.has_str(),\n bug_features.severity(),\n # Ignore keywords that would make the ML completely skewed\n # (we are going to use them as 100% rules in the evaluation phase).\n bug_features.keywords({'regression', 'talos-regression', 'feature'}),\n bug_features.is_coverity_issue(),\n bug_features.has_crash_signature(),\n bug_features.has_url(),\n bug_features.has_w3c_url(),\n bug_features.has_github_url(),\n bug_features.whiteboard(),\n bug_features.patches(),\n bug_features.landings(),\n bug_features.title(),\n bug_features.blocked_bugs_number(),\n bug_features.ever_affected(),\n bug_features.affected_then_unaffected(),\n ]\n\n cleanup_functions = [\n bug_features.cleanup_url,\n bug_features.cleanup_fileref,\n bug_features.cleanup_synonyms,\n ]\n\n self.extraction_pipeline = Pipeline([\n ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions)),\n ('union', ColumnTransformer([\n ('data', DictVectorizer(), 'data'),\n\n ('title', self.text_vectorizer(min_df=0.001), 'title'),\n\n ('first_comment', self.text_vectorizer(min_df=0.001), 'first_comment'),\n\n ('comments', self.text_vectorizer(min_df=0.001), 'comments'),\n ])),\n ])\n\n self.clf = xgboost.XGBClassifier(n_jobs=16)\n self.clf.set_params(predictor='cpu_predictor')\n\n def get_bugbug_labels(self, kind='bug'):\n assert kind in ['bug', 'regression', 'defect_feature_task']\n\n classes = {}\n\n for bug_id, category in labels.get_labels('bug_nobug'):\n assert category in ['True', 'False'], f'unexpected category {category}'\n if kind == 'bug':\n classes[int(bug_id)] = 1 if category == 'True' else 0\n elif kind == 'regression':\n if category == 'False':\n classes[int(bug_id)] = 0\n elif kind == 'defect_feature_task':\n if category == 'True':\n classes[int(bug_id)] = 'd'\n\n for bug_id, category in labels.get_labels('regression_bug_nobug'):\n assert category in ['nobug', 'bug_unknown_regression', 'bug_no_regression', 'regression'], f'unexpected category {category}'\n if kind == 'bug':\n classes[int(bug_id)] = 1 if category != 'nobug' else 0\n elif kind == 'regression':\n if category == 'bug_unknown_regression':\n continue\n\n classes[int(bug_id)] = 1 if category == 'regression' else 0\n elif kind == 'defect_feature_task':\n if category != 'nobug':\n classes[int(bug_id)] = 'd'\n\n for bug_id, category in labels.get_labels('defect_feature_task'):\n assert category in ['d', 'f', 't']\n if kind == 'bug':\n classes[int(bug_id)] = 1 if category == 'd' else 0\n elif kind == 'regression':\n if category in ['f', 't']:\n classes[int(bug_id)] = 0\n elif kind == 'defect_feature_task':\n classes[int(bug_id)] = category\n\n # Augment labes by using bugs marked as 'regression' or 'feature', as they are basically labelled.\n bug_ids = set()\n for bug in bugzilla.get_bugs():\n 
bug_id = int(bug['id'])\n\n bug_ids.add(bug_id)\n\n if bug_id in classes:\n continue\n\n if any(keyword in bug['keywords'] for keyword in ['regression', 'talos-regression']) or ('cf_has_regression_range' in bug and bug['cf_has_regression_range'] == 'yes'):\n if kind in ['bug', 'regression']:\n classes[bug_id] = 1\n else:\n classes[bug_id] = 'd'\n elif any(keyword in bug['keywords'] for keyword in ['feature']):\n if kind in ['bug', 'regression']:\n classes[bug_id] = 0\n else:\n classes[bug_id] = 'f'\n elif kind == 'regression':\n for history in bug['history']:\n for change in history['changes']:\n if change['field_name'] == 'keywords' and change['removed'] == 'regression':\n classes[bug_id] = 0\n\n # Remove labels which belong to bugs for which we have no data.\n return {bug_id: label for bug_id, label in classes.items() if bug_id in bug_ids}\n\n def get_labels(self):\n return self.get_bugbug_labels('bug')\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps['union'].get_feature_names()\n\n def overwrite_classes(self, bugs, classes, probabilities):\n for i, bug in enumerate(bugs):\n if any(keyword in bug['keywords'] for keyword in ['regression', 'talos-regression']) or ('cf_has_regression_range' in bug and bug['cf_has_regression_range'] == 'yes'):\n classes[i] = 1 if not probabilities else [0., 1.]\n elif 'feature' in bug['keywords']:\n classes[i] = 0 if not probabilities else [1., 0.]\n\n return classes\n",
"path": "bugbug/models/bug.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport xgboost\nfrom imblearn.over_sampling import BorderlineSMOTE\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import bug_features\nfrom bugbug import bugzilla\nfrom bugbug import labels\nfrom bugbug.model import Model\n\n\nclass BugModel(Model):\n def __init__(self, lemmatization=False):\n Model.__init__(self, lemmatization)\n\n self.sampler = BorderlineSMOTE(random_state=0)\n\n feature_extractors = [\n bug_features.has_str(),\n bug_features.severity(),\n # Ignore keywords that would make the ML completely skewed\n # (we are going to use them as 100% rules in the evaluation phase).\n bug_features.keywords({'regression', 'talos-regression', 'feature'}),\n bug_features.is_coverity_issue(),\n bug_features.has_crash_signature(),\n bug_features.has_url(),\n bug_features.has_w3c_url(),\n bug_features.has_github_url(),\n bug_features.whiteboard(),\n bug_features.patches(),\n bug_features.landings(),\n bug_features.title(),\n bug_features.blocked_bugs_number(),\n bug_features.ever_affected(),\n bug_features.affected_then_unaffected(),\n bug_features.product(),\n bug_features.component(),\n ]\n\n cleanup_functions = [\n bug_features.cleanup_url,\n bug_features.cleanup_fileref,\n bug_features.cleanup_synonyms,\n ]\n\n self.extraction_pipeline = Pipeline([\n ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions)),\n ('union', ColumnTransformer([\n ('data', DictVectorizer(), 'data'),\n\n ('title', self.text_vectorizer(min_df=0.001), 'title'),\n\n ('first_comment', self.text_vectorizer(min_df=0.001), 'first_comment'),\n\n ('comments', self.text_vectorizer(min_df=0.001), 'comments'),\n ])),\n ])\n\n self.clf = xgboost.XGBClassifier(n_jobs=16)\n self.clf.set_params(predictor='cpu_predictor')\n\n def get_bugbug_labels(self, kind='bug'):\n assert kind in ['bug', 'regression', 'defect_feature_task']\n\n classes = {}\n\n for bug_id, category in labels.get_labels('bug_nobug'):\n assert category in ['True', 'False'], f'unexpected category {category}'\n if kind == 'bug':\n classes[int(bug_id)] = 1 if category == 'True' else 0\n elif kind == 'regression':\n if category == 'False':\n classes[int(bug_id)] = 0\n elif kind == 'defect_feature_task':\n if category == 'True':\n classes[int(bug_id)] = 'd'\n\n for bug_id, category in labels.get_labels('regression_bug_nobug'):\n assert category in ['nobug', 'bug_unknown_regression', 'bug_no_regression', 'regression'], f'unexpected category {category}'\n if kind == 'bug':\n classes[int(bug_id)] = 1 if category != 'nobug' else 0\n elif kind == 'regression':\n if category == 'bug_unknown_regression':\n continue\n\n classes[int(bug_id)] = 1 if category == 'regression' else 0\n elif kind == 'defect_feature_task':\n if category != 'nobug':\n classes[int(bug_id)] = 'd'\n\n for bug_id, category in labels.get_labels('defect_feature_task'):\n assert category in ['d', 'f', 't']\n if kind == 'bug':\n classes[int(bug_id)] = 1 if category == 'd' else 0\n elif kind == 'regression':\n if category in ['f', 't']:\n classes[int(bug_id)] = 0\n elif kind == 'defect_feature_task':\n classes[int(bug_id)] = category\n\n # Augment labes by using bugs marked as 'regression' or 'feature', as they are basically labelled.\n 
bug_ids = set()\n for bug in bugzilla.get_bugs():\n bug_id = int(bug['id'])\n\n bug_ids.add(bug_id)\n\n if bug_id in classes:\n continue\n\n if any(keyword in bug['keywords'] for keyword in ['regression', 'talos-regression']) or ('cf_has_regression_range' in bug and bug['cf_has_regression_range'] == 'yes'):\n if kind in ['bug', 'regression']:\n classes[bug_id] = 1\n else:\n classes[bug_id] = 'd'\n elif any(keyword in bug['keywords'] for keyword in ['feature']):\n if kind in ['bug', 'regression']:\n classes[bug_id] = 0\n else:\n classes[bug_id] = 'f'\n elif kind == 'regression':\n for history in bug['history']:\n for change in history['changes']:\n if change['field_name'] == 'keywords' and change['removed'] == 'regression':\n classes[bug_id] = 0\n\n # Remove labels which belong to bugs for which we have no data.\n return {bug_id: label for bug_id, label in classes.items() if bug_id in bug_ids}\n\n def get_labels(self):\n return self.get_bugbug_labels('bug')\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps['union'].get_feature_names()\n\n def overwrite_classes(self, bugs, classes, probabilities):\n for i, bug in enumerate(bugs):\n if any(keyword in bug['keywords'] for keyword in ['regression', 'talos-regression']) or ('cf_has_regression_range' in bug and bug['cf_has_regression_range'] == 'yes'):\n classes[i] = 1 if not probabilities else [0., 1.]\n elif 'feature' in bug['keywords']:\n classes[i] = 0 if not probabilities else [1., 0.]\n\n return classes\n",
"path": "bugbug/models/bug.py"
}
] | diff --git a/bugbug/models/bug.py b/bugbug/models/bug.py
index d55345cc74..f0ee0b4b68 100644
--- a/bugbug/models/bug.py
+++ b/bugbug/models/bug.py
@@ -39,6 +39,8 @@ def __init__(self, lemmatization=False):
bug_features.blocked_bugs_number(),
bug_features.ever_affected(),
bug_features.affected_then_unaffected(),
+ bug_features.product(),
+ bug_features.component(),
]
cleanup_functions = [
|
bookwyrm-social__bookwyrm-3239 | OWASP Core Rule Set 913101
**Describe the bug**
BookWyrm's user agent is blocked by an OWASP-compliant web application firewall (WAF) for violating rule 913101. No other fediverse applications violate this rule.
**To Reproduce**
This issue cannot be reproduced between ordinary servers and clients; it only appears when requests pass through an OWASP-compliant WAF.
**Expected behavior**
The WAF allows communication.
**Screenshots**
`python-requests/2.31.0 (BookWyrm/0.6.6; +https://bookwyrm.social/)`
```
[Thu Nov 09 04:13:56.824444 2023] [security2:error] [pid 2117:tid 140508772919040] [client 143.110.147.80:53962] [client 143.110.147.80] ModSecurity: Warning. Matched phrase "python-requests" at REQUEST_HEADERS:User-Agent. [file "/usr/apache/conf/waf/rules/REQUEST-913-SCANNER-DETECTION.conf"] [line "143"] [id "913101"] [msg "Found User-Agent associated with scripting/generic HTTP client"] [data "Matched Data: python-requests found within REQUEST_HEADERS:User-Agent: python-requests/2.31.0 (bookwyrm/0.6.6; +https://bookwyrm.social/)"] [severity "CRITICAL"] [ver "OWASP_CRS/3.3.3"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-reputation-scripting"] [tag "OWASP_CRS"] [tag "capec/1000/118/224/541/310"] [tag "PCI/6.5.10"] [tag "paranoia-level/2"] [hostname "muri.network"] [uri "/users/Yae"] [unique_id "ZUxchFymkmHm47qNPINTzgAAAKI"]
[Thu Nov 09 04:13:56.824875 2023] [security2:error] [pid 2117:tid 140508772919040] [client 143.110.147.80:53962] [client 143.110.147.80] ModSecurity: Access denied with code 403 (phase 2). Operator GE matched 5 at TX:anomaly_score. [file "/usr/apache/conf/waf/rules/REQUEST-949-BLOCKING-EVALUATION.conf"] [line "94"] [id "949110"] [msg "Inbound Anomaly Score Exceeded (Total Score: 5)"] [severity "CRITICAL"] [ver "OWASP_CRS/3.3.3"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-generic"] [hostname "muri.network"] [uri "/users/Yae"] [unique_id "ZUxchFymkmHm47qNPINTzgAAAKI"]
[Thu Nov 09 04:13:56.825023 2023] [security2:error] [pid 2117:tid 140508772919040] [client 143.110.147.80:53962] [client 143.110.147.80] ModSecurity: Warning. Operator GE matched 5 at TX:inbound_anomaly_score. [file "/usr/apache/conf/waf/rules/RESPONSE-980-CORRELATION.conf"] [line "92"] [id "980130"] [msg "Inbound Anomaly Score Exceeded (Total Inbound Score: 5 - SQLI=0,XSS=0,RFI=0,LFI=0,RCE=0,PHPI=0,HTTP=0,SESS=0): individual paranoia level scores: 0, 5, 0, 0"] [ver "OWASP_CRS/3.3.3"] [tag "event-correlation"] [hostname "muri.network"] [uri "/users/Yae"] [unique_id "ZUxchFymkmHm47qNPINTzgAAAKI"]
```
**Instance**
bookwyrm.social and all other servers.
**Additional context**
BookWyrm server administrators could allow rule 913101 for communication between BookWyrm servers.
The problem arises when BookWyrm servers send requests to other fediverse application servers.
BookWyrm's outgoing requests carry a user agent that violates 913101, while requests from most other fediverse software do not.
Security staff on other fediverse servers are unlikely to exclude 913101 from their WAFs, which limits BookWyrm's federated communication.
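
For illustration, here is a minimal sketch (with assumed version and domain values, not BookWyrm's actual settings module) of how an application-branded User-Agent avoids the literal `python-requests` token that rule 913101 matches on:

```python
# Sketch only: replace the default python-requests token with an
# application-specific User-Agent string. Values below are hypothetical.
import requests

VERSION = "0.6.6"            # hypothetical version for illustration
DOMAIN = "bookwyrm.social"   # hypothetical instance domain

# Default requests UA, e.g. "python-requests/2.31.0" -- this is what the WAF flags.
print(requests.utils.default_user_agent())

# An application-branded UA that no longer contains "python-requests".
USER_AGENT = f"BookWyrm/{VERSION} (+https://{DOMAIN}/)"
session = requests.Session()
session.headers.update({"User-Agent": USER_AGENT})
print(session.headers["User-Agent"])
```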
---
| [
{
"content": "\"\"\" bookwyrm settings and configuration \"\"\"\nimport os\nfrom typing import AnyStr\n\nfrom environs import Env\n\n\nimport requests\nfrom django.utils.translation import gettext_lazy as _\nfrom django.core.exceptions import ImproperlyConfigured\n\n\n# pylint: disable=line-too-long\n\nenv = Env()\nenv.read_env()\nDOMAIN = env(\"DOMAIN\")\n\nwith open(\"VERSION\", encoding=\"utf-8\") as f:\n version = f.read()\n version = version.replace(\"\\n\", \"\")\nf.close()\n\nVERSION = version\n\nRELEASE_API = env(\n \"RELEASE_API\",\n \"https://api.github.com/repos/bookwyrm-social/bookwyrm/releases/latest\",\n)\n\nPAGE_LENGTH = env.int(\"PAGE_LENGTH\", 15)\nDEFAULT_LANGUAGE = env(\"DEFAULT_LANGUAGE\", \"English\")\n\nJS_CACHE = \"8a89cad7\"\n\n# email\nEMAIL_BACKEND = env(\"EMAIL_BACKEND\", \"django.core.mail.backends.smtp.EmailBackend\")\nEMAIL_HOST = env(\"EMAIL_HOST\")\nEMAIL_PORT = env.int(\"EMAIL_PORT\", 587)\nEMAIL_HOST_USER = env(\"EMAIL_HOST_USER\")\nEMAIL_HOST_PASSWORD = env(\"EMAIL_HOST_PASSWORD\")\nEMAIL_USE_TLS = env.bool(\"EMAIL_USE_TLS\", True)\nEMAIL_USE_SSL = env.bool(\"EMAIL_USE_SSL\", False)\nEMAIL_SENDER_NAME = env(\"EMAIL_SENDER_NAME\", \"admin\")\nEMAIL_SENDER_DOMAIN = env(\"EMAIL_SENDER_DOMAIN\", DOMAIN)\nEMAIL_SENDER = f\"{EMAIL_SENDER_NAME}@{EMAIL_SENDER_DOMAIN}\"\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR: AnyStr = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nLOCALE_PATHS = [\n os.path.join(BASE_DIR, \"locale\"),\n]\nLANGUAGE_COOKIE_NAME = env.str(\"LANGUAGE_COOKIE_NAME\", \"django_language\")\n\nSTATIC_ROOT = os.path.join(BASE_DIR, env(\"STATIC_ROOT\", \"static\"))\nMEDIA_ROOT = os.path.join(BASE_DIR, env(\"MEDIA_ROOT\", \"images\"))\n\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n\n# Preview image\nENABLE_PREVIEW_IMAGES = env.bool(\"ENABLE_PREVIEW_IMAGES\", False)\nPREVIEW_BG_COLOR = env.str(\"PREVIEW_BG_COLOR\", \"use_dominant_color_light\")\nPREVIEW_TEXT_COLOR = env.str(\"PREVIEW_TEXT_COLOR\", \"#363636\")\nPREVIEW_IMG_WIDTH = env.int(\"PREVIEW_IMG_WIDTH\", 1200)\nPREVIEW_IMG_HEIGHT = env.int(\"PREVIEW_IMG_HEIGHT\", 630)\nPREVIEW_DEFAULT_COVER_COLOR = env.str(\"PREVIEW_DEFAULT_COVER_COLOR\", \"#002549\")\nPREVIEW_DEFAULT_FONT = env.str(\"PREVIEW_DEFAULT_FONT\", \"Source Han Sans\")\n\nFONTS = {\n \"Source Han Sans\": {\n \"directory\": \"source_han_sans\",\n \"filename\": \"SourceHanSans-VF.ttf.ttc\",\n \"url\": \"https://github.com/adobe-fonts/source-han-sans/raw/release/Variable/OTC/SourceHanSans-VF.ttf.ttc\",\n }\n}\nFONT_DIR = os.path.join(STATIC_ROOT, \"fonts\")\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = env.bool(\"DEBUG\", True)\nUSE_HTTPS = env.bool(\"USE_HTTPS\", not DEBUG)\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = env(\"SECRET_KEY\")\nif not DEBUG and SECRET_KEY == \"7(2w1sedok=aznpq)ta1mc4i%4h=xx@hxwx*o57ctsuml0x%fr\":\n raise ImproperlyConfigured(\"You must change the SECRET_KEY env variable\")\n\nALLOWED_HOSTS = env.list(\"ALLOWED_HOSTS\", [\"*\"])\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django.contrib.humanize\",\n \"file_resubmit\",\n \"sass_processor\",\n 
\"bookwyrm\",\n \"celery\",\n \"django_celery_beat\",\n \"imagekit\",\n \"storages\",\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"csp.middleware.CSPMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"bookwyrm.middleware.TimezoneMiddleware\",\n \"bookwyrm.middleware.IPBlocklistMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"bookwyrm.middleware.FileTooBig\",\n]\n\nROOT_URLCONF = \"bookwyrm.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [\"templates\"],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n \"bookwyrm.context_processors.site_settings\",\n ],\n },\n },\n]\n\nLOG_LEVEL = env(\"LOG_LEVEL\", \"INFO\").upper()\n# Override aspects of the default handler to our taste\n# See https://docs.djangoproject.com/en/3.2/topics/logging/#default-logging-configuration\n# for a reference to the defaults we're overriding\n#\n# It seems that in order to override anything you have to include its\n# entire dependency tree (handlers and filters) which makes this a\n# bit verbose\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"filters\": {\n # These are copied from the default configuration, required for\n # implementing mail_admins below\n \"require_debug_false\": {\n \"()\": \"django.utils.log.RequireDebugFalse\",\n },\n \"require_debug_true\": {\n \"()\": \"django.utils.log.RequireDebugTrue\",\n },\n \"ignore_missing_variable\": {\n \"()\": \"bookwyrm.utils.log.IgnoreVariableDoesNotExist\",\n },\n },\n \"handlers\": {\n # Overrides the default handler to make it log to console\n # regardless of the DEBUG setting (default is to not log to\n # console if DEBUG=False)\n \"console\": {\n \"level\": LOG_LEVEL,\n \"filters\": [\"ignore_missing_variable\"],\n \"class\": \"logging.StreamHandler\",\n },\n # This is copied as-is from the default logger, and is\n # required for the django section below\n \"mail_admins\": {\n \"level\": \"ERROR\",\n \"filters\": [\"require_debug_false\"],\n \"class\": \"django.utils.log.AdminEmailHandler\",\n },\n },\n \"loggers\": {\n # Install our new console handler for Django's logger, and\n # override the log level while we're at it\n \"django\": {\n \"handlers\": [\"console\", \"mail_admins\"],\n \"level\": LOG_LEVEL,\n },\n \"django.utils.autoreload\": {\n \"level\": \"INFO\",\n },\n # Add a bookwyrm-specific logger\n \"bookwyrm\": {\n \"handlers\": [\"console\"],\n \"level\": LOG_LEVEL,\n },\n },\n}\n\nSTATICFILES_FINDERS = [\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n \"sass_processor.finders.CssFinder\",\n]\n\nSASS_PROCESSOR_INCLUDE_FILE_PATTERN = r\"^.+\\.[s]{0,1}(?:a|c)ss$\"\n# when debug is disabled, make sure to compile themes once with `./bw-dev compile_themes`\nSASS_PROCESSOR_ENABLED = DEBUG\n\n# minify css is production but not dev\nif not DEBUG:\n SASS_OUTPUT_STYLE = \"compressed\"\n\nWSGI_APPLICATION = 
\"bookwyrm.wsgi.application\"\n\n# redis/activity streams settings\nREDIS_ACTIVITY_HOST = env(\"REDIS_ACTIVITY_HOST\", \"localhost\")\nREDIS_ACTIVITY_PORT = env.int(\"REDIS_ACTIVITY_PORT\", 6379)\nREDIS_ACTIVITY_PASSWORD = requests.utils.quote(env(\"REDIS_ACTIVITY_PASSWORD\", \"\"))\nREDIS_ACTIVITY_DB_INDEX = env.int(\"REDIS_ACTIVITY_DB_INDEX\", 0)\nREDIS_ACTIVITY_URL = env(\n \"REDIS_ACTIVITY_URL\",\n f\"redis://:{REDIS_ACTIVITY_PASSWORD}@{REDIS_ACTIVITY_HOST}:{REDIS_ACTIVITY_PORT}/{REDIS_ACTIVITY_DB_INDEX}\",\n)\nMAX_STREAM_LENGTH = env.int(\"MAX_STREAM_LENGTH\", 200)\n\nSTREAMS = [\n {\"key\": \"home\", \"name\": _(\"Home Timeline\"), \"shortname\": _(\"Home\")},\n {\"key\": \"books\", \"name\": _(\"Books Timeline\"), \"shortname\": _(\"Books\")},\n]\n\n# Search configuration\n# total time in seconds that the instance will spend searching connectors\nSEARCH_TIMEOUT = env.int(\"SEARCH_TIMEOUT\", 8)\n# timeout for a query to an individual connector\nQUERY_TIMEOUT = env.int(\"INTERACTIVE_QUERY_TIMEOUT\", env.int(\"QUERY_TIMEOUT\", 5))\n\n# Redis cache backend\nif env.bool(\"USE_DUMMY_CACHE\", False):\n CACHES = {\n \"default\": {\n \"BACKEND\": \"django.core.cache.backends.dummy.DummyCache\",\n },\n \"file_resubmit\": {\n \"BACKEND\": \"django.core.cache.backends.dummy.DummyCache\",\n \"LOCATION\": \"/tmp/file_resubmit_tests/\",\n },\n }\nelse:\n CACHES = {\n \"default\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": REDIS_ACTIVITY_URL,\n \"OPTIONS\": {\n \"CLIENT_CLASS\": \"django_redis.client.DefaultClient\",\n },\n },\n \"file_resubmit\": {\n \"BACKEND\": \"django.core.cache.backends.filebased.FileBasedCache\",\n \"LOCATION\": \"/tmp/file_resubmit/\",\n },\n }\n\n SESSION_ENGINE = \"django.contrib.sessions.backends.cache\"\n SESSION_CACHE_ALIAS = \"default\"\n\n# Database\n# https://docs.djangoproject.com/en/3.2/ref/settings/#databases\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql_psycopg2\",\n \"NAME\": env(\"POSTGRES_DB\", \"bookwyrm\"),\n \"USER\": env(\"POSTGRES_USER\", \"bookwyrm\"),\n \"PASSWORD\": env(\"POSTGRES_PASSWORD\", \"bookwyrm\"),\n \"HOST\": env(\"POSTGRES_HOST\", \"\"),\n \"PORT\": env.int(\"PGPORT\", 5432),\n },\n}\n\n\nLOGIN_URL = \"/login/\"\nAUTH_USER_MODEL = \"bookwyrm.User\"\n\n# Password validation\n# https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.2/topics/i18n/\n\nLANGUAGE_CODE = env(\"LANGUAGE_CODE\", \"en-us\")\nLANGUAGES = [\n (\"en-us\", _(\"English\")),\n (\"ca-es\", _(\"Català (Catalan)\")),\n (\"de-de\", _(\"Deutsch (German)\")),\n (\"eo-uy\", _(\"Esperanto (Esperanto)\")),\n (\"es-es\", _(\"Español (Spanish)\")),\n (\"eu-es\", _(\"Euskara (Basque)\")),\n (\"gl-es\", _(\"Galego (Galician)\")),\n (\"it-it\", _(\"Italiano (Italian)\")),\n (\"fi-fi\", _(\"Suomi (Finnish)\")),\n (\"fr-fr\", _(\"Français (French)\")),\n (\"lt-lt\", _(\"Lietuvių (Lithuanian)\")),\n (\"nl-nl\", _(\"Nederlands (Dutch)\")),\n (\"no-no\", _(\"Norsk (Norwegian)\")),\n (\"pl-pl\", _(\"Polski (Polish)\")),\n (\"pt-br\", _(\"Português do 
Brasil (Brazilian Portuguese)\")),\n (\"pt-pt\", _(\"Português Europeu (European Portuguese)\")),\n (\"ro-ro\", _(\"Română (Romanian)\")),\n (\"sv-se\", _(\"Svenska (Swedish)\")),\n (\"uk-ua\", _(\"Українська (Ukrainian)\")),\n (\"zh-hans\", _(\"简体中文 (Simplified Chinese)\")),\n (\"zh-hant\", _(\"繁體中文 (Traditional Chinese)\")),\n]\n\nLANGUAGE_ARTICLES = {\n \"English\": {\"the\", \"a\", \"an\"},\n \"Español (Spanish)\": {\"un\", \"una\", \"unos\", \"unas\", \"el\", \"la\", \"los\", \"las\"},\n}\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\nagent = requests.utils.default_user_agent()\nUSER_AGENT = f\"{agent} (BookWyrm/{VERSION}; +https://{DOMAIN}/)\"\n\n# Imagekit generated thumbnails\nENABLE_THUMBNAIL_GENERATION = env.bool(\"ENABLE_THUMBNAIL_GENERATION\", False)\nIMAGEKIT_CACHEFILE_DIR = \"thumbnails\"\nIMAGEKIT_DEFAULT_CACHEFILE_STRATEGY = \"bookwyrm.thumbnail_generation.Strategy\"\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.2/howto/static-files/\n\nPROJECT_DIR = os.path.dirname(os.path.abspath(__file__))\nCSP_ADDITIONAL_HOSTS = env.list(\"CSP_ADDITIONAL_HOSTS\", [])\n\n# Storage\n\nPROTOCOL = \"http\"\nif USE_HTTPS:\n PROTOCOL = \"https\"\n SESSION_COOKIE_SECURE = True\n CSRF_COOKIE_SECURE = True\n\nUSE_S3 = env.bool(\"USE_S3\", False)\nUSE_AZURE = env.bool(\"USE_AZURE\", False)\n\nif USE_S3:\n # AWS settings\n AWS_ACCESS_KEY_ID = env(\"AWS_ACCESS_KEY_ID\")\n AWS_SECRET_ACCESS_KEY = env(\"AWS_SECRET_ACCESS_KEY\")\n AWS_STORAGE_BUCKET_NAME = env(\"AWS_STORAGE_BUCKET_NAME\")\n AWS_S3_CUSTOM_DOMAIN = env(\"AWS_S3_CUSTOM_DOMAIN\", None)\n AWS_S3_REGION_NAME = env(\"AWS_S3_REGION_NAME\", \"\")\n AWS_S3_ENDPOINT_URL = env(\"AWS_S3_ENDPOINT_URL\", None)\n AWS_DEFAULT_ACL = \"public-read\"\n AWS_S3_OBJECT_PARAMETERS = {\"CacheControl\": \"max-age=86400\"}\n # S3 Static settings\n STATIC_LOCATION = \"static\"\n STATIC_URL = f\"{PROTOCOL}://{AWS_S3_CUSTOM_DOMAIN}/{STATIC_LOCATION}/\"\n STATICFILES_STORAGE = \"bookwyrm.storage_backends.StaticStorage\"\n # S3 Media settings\n MEDIA_LOCATION = \"images\"\n MEDIA_URL = f\"{PROTOCOL}://{AWS_S3_CUSTOM_DOMAIN}/{MEDIA_LOCATION}/\"\n MEDIA_FULL_URL = MEDIA_URL\n STATIC_FULL_URL = STATIC_URL\n DEFAULT_FILE_STORAGE = \"bookwyrm.storage_backends.ImagesStorage\"\n CSP_DEFAULT_SRC = [\"'self'\", AWS_S3_CUSTOM_DOMAIN] + CSP_ADDITIONAL_HOSTS\n CSP_SCRIPT_SRC = [\"'self'\", AWS_S3_CUSTOM_DOMAIN] + CSP_ADDITIONAL_HOSTS\nelif USE_AZURE:\n AZURE_ACCOUNT_NAME = env(\"AZURE_ACCOUNT_NAME\")\n AZURE_ACCOUNT_KEY = env(\"AZURE_ACCOUNT_KEY\")\n AZURE_CONTAINER = env(\"AZURE_CONTAINER\")\n AZURE_CUSTOM_DOMAIN = env(\"AZURE_CUSTOM_DOMAIN\")\n # Azure Static settings\n STATIC_LOCATION = \"static\"\n STATIC_URL = (\n f\"{PROTOCOL}://{AZURE_CUSTOM_DOMAIN}/{AZURE_CONTAINER}/{STATIC_LOCATION}/\"\n )\n STATICFILES_STORAGE = \"bookwyrm.storage_backends.AzureStaticStorage\"\n # Azure Media settings\n MEDIA_LOCATION = \"images\"\n MEDIA_URL = (\n f\"{PROTOCOL}://{AZURE_CUSTOM_DOMAIN}/{AZURE_CONTAINER}/{MEDIA_LOCATION}/\"\n )\n MEDIA_FULL_URL = MEDIA_URL\n STATIC_FULL_URL = STATIC_URL\n DEFAULT_FILE_STORAGE = \"bookwyrm.storage_backends.AzureImagesStorage\"\n CSP_DEFAULT_SRC = [\"'self'\", AZURE_CUSTOM_DOMAIN] + CSP_ADDITIONAL_HOSTS\n CSP_SCRIPT_SRC = [\"'self'\", AZURE_CUSTOM_DOMAIN] + CSP_ADDITIONAL_HOSTS\nelse:\n STATIC_URL = \"/static/\"\n MEDIA_URL = \"/images/\"\n MEDIA_FULL_URL = f\"{PROTOCOL}://{DOMAIN}{MEDIA_URL}\"\n STATIC_FULL_URL = f\"{PROTOCOL}://{DOMAIN}{STATIC_URL}\"\n CSP_DEFAULT_SRC = 
[\"'self'\"] + CSP_ADDITIONAL_HOSTS\n CSP_SCRIPT_SRC = [\"'self'\"] + CSP_ADDITIONAL_HOSTS\n\nCSP_INCLUDE_NONCE_IN = [\"script-src\"]\n\nOTEL_EXPORTER_OTLP_ENDPOINT = env(\"OTEL_EXPORTER_OTLP_ENDPOINT\", None)\nOTEL_EXPORTER_OTLP_HEADERS = env(\"OTEL_EXPORTER_OTLP_HEADERS\", None)\nOTEL_SERVICE_NAME = env(\"OTEL_SERVICE_NAME\", None)\nOTEL_EXPORTER_CONSOLE = env.bool(\"OTEL_EXPORTER_CONSOLE\", False)\n\nTWO_FACTOR_LOGIN_MAX_SECONDS = env.int(\"TWO_FACTOR_LOGIN_MAX_SECONDS\", 60)\nTWO_FACTOR_LOGIN_VALIDITY_WINDOW = env.int(\"TWO_FACTOR_LOGIN_VALIDITY_WINDOW\", 2)\n\nHTTP_X_FORWARDED_PROTO = env.bool(\"SECURE_PROXY_SSL_HEADER\", False)\nif HTTP_X_FORWARDED_PROTO:\n SECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\n\n# Instance Actor for signing GET requests to \"secure mode\"\n# Mastodon servers.\n# Do not change this setting unless you already have an existing\n# user with the same username - in which case you should change it!\nINSTANCE_ACTOR_USERNAME = \"bookwyrm.instance.actor\"\n\nDATA_UPLOAD_MAX_MEMORY_SIZE = env.int(\"DATA_UPLOAD_MAX_MEMORY_SIZE\", (1024**2 * 100))\n",
"path": "bookwyrm/settings.py"
}
] | [
{
"content": "\"\"\" bookwyrm settings and configuration \"\"\"\nimport os\nfrom typing import AnyStr\n\nfrom environs import Env\n\n\nimport requests\nfrom django.utils.translation import gettext_lazy as _\nfrom django.core.exceptions import ImproperlyConfigured\n\n\n# pylint: disable=line-too-long\n\nenv = Env()\nenv.read_env()\nDOMAIN = env(\"DOMAIN\")\n\nwith open(\"VERSION\", encoding=\"utf-8\") as f:\n version = f.read()\n version = version.replace(\"\\n\", \"\")\nf.close()\n\nVERSION = version\n\nRELEASE_API = env(\n \"RELEASE_API\",\n \"https://api.github.com/repos/bookwyrm-social/bookwyrm/releases/latest\",\n)\n\nPAGE_LENGTH = env.int(\"PAGE_LENGTH\", 15)\nDEFAULT_LANGUAGE = env(\"DEFAULT_LANGUAGE\", \"English\")\n\nJS_CACHE = \"8a89cad7\"\n\n# email\nEMAIL_BACKEND = env(\"EMAIL_BACKEND\", \"django.core.mail.backends.smtp.EmailBackend\")\nEMAIL_HOST = env(\"EMAIL_HOST\")\nEMAIL_PORT = env.int(\"EMAIL_PORT\", 587)\nEMAIL_HOST_USER = env(\"EMAIL_HOST_USER\")\nEMAIL_HOST_PASSWORD = env(\"EMAIL_HOST_PASSWORD\")\nEMAIL_USE_TLS = env.bool(\"EMAIL_USE_TLS\", True)\nEMAIL_USE_SSL = env.bool(\"EMAIL_USE_SSL\", False)\nEMAIL_SENDER_NAME = env(\"EMAIL_SENDER_NAME\", \"admin\")\nEMAIL_SENDER_DOMAIN = env(\"EMAIL_SENDER_DOMAIN\", DOMAIN)\nEMAIL_SENDER = f\"{EMAIL_SENDER_NAME}@{EMAIL_SENDER_DOMAIN}\"\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR: AnyStr = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nLOCALE_PATHS = [\n os.path.join(BASE_DIR, \"locale\"),\n]\nLANGUAGE_COOKIE_NAME = env.str(\"LANGUAGE_COOKIE_NAME\", \"django_language\")\n\nSTATIC_ROOT = os.path.join(BASE_DIR, env(\"STATIC_ROOT\", \"static\"))\nMEDIA_ROOT = os.path.join(BASE_DIR, env(\"MEDIA_ROOT\", \"images\"))\n\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n\n# Preview image\nENABLE_PREVIEW_IMAGES = env.bool(\"ENABLE_PREVIEW_IMAGES\", False)\nPREVIEW_BG_COLOR = env.str(\"PREVIEW_BG_COLOR\", \"use_dominant_color_light\")\nPREVIEW_TEXT_COLOR = env.str(\"PREVIEW_TEXT_COLOR\", \"#363636\")\nPREVIEW_IMG_WIDTH = env.int(\"PREVIEW_IMG_WIDTH\", 1200)\nPREVIEW_IMG_HEIGHT = env.int(\"PREVIEW_IMG_HEIGHT\", 630)\nPREVIEW_DEFAULT_COVER_COLOR = env.str(\"PREVIEW_DEFAULT_COVER_COLOR\", \"#002549\")\nPREVIEW_DEFAULT_FONT = env.str(\"PREVIEW_DEFAULT_FONT\", \"Source Han Sans\")\n\nFONTS = {\n \"Source Han Sans\": {\n \"directory\": \"source_han_sans\",\n \"filename\": \"SourceHanSans-VF.ttf.ttc\",\n \"url\": \"https://github.com/adobe-fonts/source-han-sans/raw/release/Variable/OTC/SourceHanSans-VF.ttf.ttc\",\n }\n}\nFONT_DIR = os.path.join(STATIC_ROOT, \"fonts\")\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = env.bool(\"DEBUG\", True)\nUSE_HTTPS = env.bool(\"USE_HTTPS\", not DEBUG)\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = env(\"SECRET_KEY\")\nif not DEBUG and SECRET_KEY == \"7(2w1sedok=aznpq)ta1mc4i%4h=xx@hxwx*o57ctsuml0x%fr\":\n raise ImproperlyConfigured(\"You must change the SECRET_KEY env variable\")\n\nALLOWED_HOSTS = env.list(\"ALLOWED_HOSTS\", [\"*\"])\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django.contrib.humanize\",\n \"file_resubmit\",\n \"sass_processor\",\n 
\"bookwyrm\",\n \"celery\",\n \"django_celery_beat\",\n \"imagekit\",\n \"storages\",\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"csp.middleware.CSPMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"bookwyrm.middleware.TimezoneMiddleware\",\n \"bookwyrm.middleware.IPBlocklistMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"bookwyrm.middleware.FileTooBig\",\n]\n\nROOT_URLCONF = \"bookwyrm.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [\"templates\"],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n \"bookwyrm.context_processors.site_settings\",\n ],\n },\n },\n]\n\nLOG_LEVEL = env(\"LOG_LEVEL\", \"INFO\").upper()\n# Override aspects of the default handler to our taste\n# See https://docs.djangoproject.com/en/3.2/topics/logging/#default-logging-configuration\n# for a reference to the defaults we're overriding\n#\n# It seems that in order to override anything you have to include its\n# entire dependency tree (handlers and filters) which makes this a\n# bit verbose\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"filters\": {\n # These are copied from the default configuration, required for\n # implementing mail_admins below\n \"require_debug_false\": {\n \"()\": \"django.utils.log.RequireDebugFalse\",\n },\n \"require_debug_true\": {\n \"()\": \"django.utils.log.RequireDebugTrue\",\n },\n \"ignore_missing_variable\": {\n \"()\": \"bookwyrm.utils.log.IgnoreVariableDoesNotExist\",\n },\n },\n \"handlers\": {\n # Overrides the default handler to make it log to console\n # regardless of the DEBUG setting (default is to not log to\n # console if DEBUG=False)\n \"console\": {\n \"level\": LOG_LEVEL,\n \"filters\": [\"ignore_missing_variable\"],\n \"class\": \"logging.StreamHandler\",\n },\n # This is copied as-is from the default logger, and is\n # required for the django section below\n \"mail_admins\": {\n \"level\": \"ERROR\",\n \"filters\": [\"require_debug_false\"],\n \"class\": \"django.utils.log.AdminEmailHandler\",\n },\n },\n \"loggers\": {\n # Install our new console handler for Django's logger, and\n # override the log level while we're at it\n \"django\": {\n \"handlers\": [\"console\", \"mail_admins\"],\n \"level\": LOG_LEVEL,\n },\n \"django.utils.autoreload\": {\n \"level\": \"INFO\",\n },\n # Add a bookwyrm-specific logger\n \"bookwyrm\": {\n \"handlers\": [\"console\"],\n \"level\": LOG_LEVEL,\n },\n },\n}\n\nSTATICFILES_FINDERS = [\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n \"sass_processor.finders.CssFinder\",\n]\n\nSASS_PROCESSOR_INCLUDE_FILE_PATTERN = r\"^.+\\.[s]{0,1}(?:a|c)ss$\"\n# when debug is disabled, make sure to compile themes once with `./bw-dev compile_themes`\nSASS_PROCESSOR_ENABLED = DEBUG\n\n# minify css is production but not dev\nif not DEBUG:\n SASS_OUTPUT_STYLE = \"compressed\"\n\nWSGI_APPLICATION = 
\"bookwyrm.wsgi.application\"\n\n# redis/activity streams settings\nREDIS_ACTIVITY_HOST = env(\"REDIS_ACTIVITY_HOST\", \"localhost\")\nREDIS_ACTIVITY_PORT = env.int(\"REDIS_ACTIVITY_PORT\", 6379)\nREDIS_ACTIVITY_PASSWORD = requests.utils.quote(env(\"REDIS_ACTIVITY_PASSWORD\", \"\"))\nREDIS_ACTIVITY_DB_INDEX = env.int(\"REDIS_ACTIVITY_DB_INDEX\", 0)\nREDIS_ACTIVITY_URL = env(\n \"REDIS_ACTIVITY_URL\",\n f\"redis://:{REDIS_ACTIVITY_PASSWORD}@{REDIS_ACTIVITY_HOST}:{REDIS_ACTIVITY_PORT}/{REDIS_ACTIVITY_DB_INDEX}\",\n)\nMAX_STREAM_LENGTH = env.int(\"MAX_STREAM_LENGTH\", 200)\n\nSTREAMS = [\n {\"key\": \"home\", \"name\": _(\"Home Timeline\"), \"shortname\": _(\"Home\")},\n {\"key\": \"books\", \"name\": _(\"Books Timeline\"), \"shortname\": _(\"Books\")},\n]\n\n# Search configuration\n# total time in seconds that the instance will spend searching connectors\nSEARCH_TIMEOUT = env.int(\"SEARCH_TIMEOUT\", 8)\n# timeout for a query to an individual connector\nQUERY_TIMEOUT = env.int(\"INTERACTIVE_QUERY_TIMEOUT\", env.int(\"QUERY_TIMEOUT\", 5))\n\n# Redis cache backend\nif env.bool(\"USE_DUMMY_CACHE\", False):\n CACHES = {\n \"default\": {\n \"BACKEND\": \"django.core.cache.backends.dummy.DummyCache\",\n },\n \"file_resubmit\": {\n \"BACKEND\": \"django.core.cache.backends.dummy.DummyCache\",\n \"LOCATION\": \"/tmp/file_resubmit_tests/\",\n },\n }\nelse:\n CACHES = {\n \"default\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": REDIS_ACTIVITY_URL,\n \"OPTIONS\": {\n \"CLIENT_CLASS\": \"django_redis.client.DefaultClient\",\n },\n },\n \"file_resubmit\": {\n \"BACKEND\": \"django.core.cache.backends.filebased.FileBasedCache\",\n \"LOCATION\": \"/tmp/file_resubmit/\",\n },\n }\n\n SESSION_ENGINE = \"django.contrib.sessions.backends.cache\"\n SESSION_CACHE_ALIAS = \"default\"\n\n# Database\n# https://docs.djangoproject.com/en/3.2/ref/settings/#databases\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql_psycopg2\",\n \"NAME\": env(\"POSTGRES_DB\", \"bookwyrm\"),\n \"USER\": env(\"POSTGRES_USER\", \"bookwyrm\"),\n \"PASSWORD\": env(\"POSTGRES_PASSWORD\", \"bookwyrm\"),\n \"HOST\": env(\"POSTGRES_HOST\", \"\"),\n \"PORT\": env.int(\"PGPORT\", 5432),\n },\n}\n\n\nLOGIN_URL = \"/login/\"\nAUTH_USER_MODEL = \"bookwyrm.User\"\n\n# Password validation\n# https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.2/topics/i18n/\n\nLANGUAGE_CODE = env(\"LANGUAGE_CODE\", \"en-us\")\nLANGUAGES = [\n (\"en-us\", _(\"English\")),\n (\"ca-es\", _(\"Català (Catalan)\")),\n (\"de-de\", _(\"Deutsch (German)\")),\n (\"eo-uy\", _(\"Esperanto (Esperanto)\")),\n (\"es-es\", _(\"Español (Spanish)\")),\n (\"eu-es\", _(\"Euskara (Basque)\")),\n (\"gl-es\", _(\"Galego (Galician)\")),\n (\"it-it\", _(\"Italiano (Italian)\")),\n (\"fi-fi\", _(\"Suomi (Finnish)\")),\n (\"fr-fr\", _(\"Français (French)\")),\n (\"lt-lt\", _(\"Lietuvių (Lithuanian)\")),\n (\"nl-nl\", _(\"Nederlands (Dutch)\")),\n (\"no-no\", _(\"Norsk (Norwegian)\")),\n (\"pl-pl\", _(\"Polski (Polish)\")),\n (\"pt-br\", _(\"Português do 
Brasil (Brazilian Portuguese)\")),\n (\"pt-pt\", _(\"Português Europeu (European Portuguese)\")),\n (\"ro-ro\", _(\"Română (Romanian)\")),\n (\"sv-se\", _(\"Svenska (Swedish)\")),\n (\"uk-ua\", _(\"Українська (Ukrainian)\")),\n (\"zh-hans\", _(\"简体中文 (Simplified Chinese)\")),\n (\"zh-hant\", _(\"繁體中文 (Traditional Chinese)\")),\n]\n\nLANGUAGE_ARTICLES = {\n \"English\": {\"the\", \"a\", \"an\"},\n \"Español (Spanish)\": {\"un\", \"una\", \"unos\", \"unas\", \"el\", \"la\", \"los\", \"las\"},\n}\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\nUSER_AGENT = f\"BookWyrm (BookWyrm/{VERSION}; +https://{DOMAIN}/)\"\n\n# Imagekit generated thumbnails\nENABLE_THUMBNAIL_GENERATION = env.bool(\"ENABLE_THUMBNAIL_GENERATION\", False)\nIMAGEKIT_CACHEFILE_DIR = \"thumbnails\"\nIMAGEKIT_DEFAULT_CACHEFILE_STRATEGY = \"bookwyrm.thumbnail_generation.Strategy\"\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.2/howto/static-files/\n\nPROJECT_DIR = os.path.dirname(os.path.abspath(__file__))\nCSP_ADDITIONAL_HOSTS = env.list(\"CSP_ADDITIONAL_HOSTS\", [])\n\n# Storage\n\nPROTOCOL = \"http\"\nif USE_HTTPS:\n PROTOCOL = \"https\"\n SESSION_COOKIE_SECURE = True\n CSRF_COOKIE_SECURE = True\n\nUSE_S3 = env.bool(\"USE_S3\", False)\nUSE_AZURE = env.bool(\"USE_AZURE\", False)\n\nif USE_S3:\n # AWS settings\n AWS_ACCESS_KEY_ID = env(\"AWS_ACCESS_KEY_ID\")\n AWS_SECRET_ACCESS_KEY = env(\"AWS_SECRET_ACCESS_KEY\")\n AWS_STORAGE_BUCKET_NAME = env(\"AWS_STORAGE_BUCKET_NAME\")\n AWS_S3_CUSTOM_DOMAIN = env(\"AWS_S3_CUSTOM_DOMAIN\", None)\n AWS_S3_REGION_NAME = env(\"AWS_S3_REGION_NAME\", \"\")\n AWS_S3_ENDPOINT_URL = env(\"AWS_S3_ENDPOINT_URL\", None)\n AWS_DEFAULT_ACL = \"public-read\"\n AWS_S3_OBJECT_PARAMETERS = {\"CacheControl\": \"max-age=86400\"}\n # S3 Static settings\n STATIC_LOCATION = \"static\"\n STATIC_URL = f\"{PROTOCOL}://{AWS_S3_CUSTOM_DOMAIN}/{STATIC_LOCATION}/\"\n STATICFILES_STORAGE = \"bookwyrm.storage_backends.StaticStorage\"\n # S3 Media settings\n MEDIA_LOCATION = \"images\"\n MEDIA_URL = f\"{PROTOCOL}://{AWS_S3_CUSTOM_DOMAIN}/{MEDIA_LOCATION}/\"\n MEDIA_FULL_URL = MEDIA_URL\n STATIC_FULL_URL = STATIC_URL\n DEFAULT_FILE_STORAGE = \"bookwyrm.storage_backends.ImagesStorage\"\n CSP_DEFAULT_SRC = [\"'self'\", AWS_S3_CUSTOM_DOMAIN] + CSP_ADDITIONAL_HOSTS\n CSP_SCRIPT_SRC = [\"'self'\", AWS_S3_CUSTOM_DOMAIN] + CSP_ADDITIONAL_HOSTS\nelif USE_AZURE:\n AZURE_ACCOUNT_NAME = env(\"AZURE_ACCOUNT_NAME\")\n AZURE_ACCOUNT_KEY = env(\"AZURE_ACCOUNT_KEY\")\n AZURE_CONTAINER = env(\"AZURE_CONTAINER\")\n AZURE_CUSTOM_DOMAIN = env(\"AZURE_CUSTOM_DOMAIN\")\n # Azure Static settings\n STATIC_LOCATION = \"static\"\n STATIC_URL = (\n f\"{PROTOCOL}://{AZURE_CUSTOM_DOMAIN}/{AZURE_CONTAINER}/{STATIC_LOCATION}/\"\n )\n STATICFILES_STORAGE = \"bookwyrm.storage_backends.AzureStaticStorage\"\n # Azure Media settings\n MEDIA_LOCATION = \"images\"\n MEDIA_URL = (\n f\"{PROTOCOL}://{AZURE_CUSTOM_DOMAIN}/{AZURE_CONTAINER}/{MEDIA_LOCATION}/\"\n )\n MEDIA_FULL_URL = MEDIA_URL\n STATIC_FULL_URL = STATIC_URL\n DEFAULT_FILE_STORAGE = \"bookwyrm.storage_backends.AzureImagesStorage\"\n CSP_DEFAULT_SRC = [\"'self'\", AZURE_CUSTOM_DOMAIN] + CSP_ADDITIONAL_HOSTS\n CSP_SCRIPT_SRC = [\"'self'\", AZURE_CUSTOM_DOMAIN] + CSP_ADDITIONAL_HOSTS\nelse:\n STATIC_URL = \"/static/\"\n MEDIA_URL = \"/images/\"\n MEDIA_FULL_URL = f\"{PROTOCOL}://{DOMAIN}{MEDIA_URL}\"\n STATIC_FULL_URL = f\"{PROTOCOL}://{DOMAIN}{STATIC_URL}\"\n CSP_DEFAULT_SRC = [\"'self'\"] + CSP_ADDITIONAL_HOSTS\n CSP_SCRIPT_SRC = 
[\"'self'\"] + CSP_ADDITIONAL_HOSTS\n\nCSP_INCLUDE_NONCE_IN = [\"script-src\"]\n\nOTEL_EXPORTER_OTLP_ENDPOINT = env(\"OTEL_EXPORTER_OTLP_ENDPOINT\", None)\nOTEL_EXPORTER_OTLP_HEADERS = env(\"OTEL_EXPORTER_OTLP_HEADERS\", None)\nOTEL_SERVICE_NAME = env(\"OTEL_SERVICE_NAME\", None)\nOTEL_EXPORTER_CONSOLE = env.bool(\"OTEL_EXPORTER_CONSOLE\", False)\n\nTWO_FACTOR_LOGIN_MAX_SECONDS = env.int(\"TWO_FACTOR_LOGIN_MAX_SECONDS\", 60)\nTWO_FACTOR_LOGIN_VALIDITY_WINDOW = env.int(\"TWO_FACTOR_LOGIN_VALIDITY_WINDOW\", 2)\n\nHTTP_X_FORWARDED_PROTO = env.bool(\"SECURE_PROXY_SSL_HEADER\", False)\nif HTTP_X_FORWARDED_PROTO:\n SECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\n\n# Instance Actor for signing GET requests to \"secure mode\"\n# Mastodon servers.\n# Do not change this setting unless you already have an existing\n# user with the same username - in which case you should change it!\nINSTANCE_ACTOR_USERNAME = \"bookwyrm.instance.actor\"\n\nDATA_UPLOAD_MAX_MEMORY_SIZE = env.int(\"DATA_UPLOAD_MAX_MEMORY_SIZE\", (1024**2 * 100))\n",
"path": "bookwyrm/settings.py"
}
] | diff --git a/bookwyrm/settings.py b/bookwyrm/settings.py
index cc941da849..adc9bd0ef1 100644
--- a/bookwyrm/settings.py
+++ b/bookwyrm/settings.py
@@ -347,8 +347,7 @@
USE_TZ = True
-agent = requests.utils.default_user_agent()
-USER_AGENT = f"{agent} (BookWyrm/{VERSION}; +https://{DOMAIN}/)"
+USER_AGENT = f"BookWyrm (BookWyrm/{VERSION}; +https://{DOMAIN}/)"
# Imagekit generated thumbnails
ENABLE_THUMBNAIL_GENERATION = env.bool("ENABLE_THUMBNAIL_GENERATION", False)
|
Kinto__kinto-1342 | `kinto create-user` command should fallback to KINTO_INI env variable for the config file;
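
A minimal sketch of the requested fallback (illustrative only; the shipped change may differ):

```python
# Sketch: prefer the KINTO_INI environment variable, otherwise keep the
# existing hard-coded default used by the `kinto` CLI.
import os

DEFAULT_CONFIG_FILE = os.getenv('KINTO_INI', 'config/kinto.ini')

if __name__ == '__main__':
    print('Using configuration file:', DEFAULT_CONFIG_FILE)
```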
| [
{
"content": "import argparse\nimport os\nimport sys\nimport logging\nimport logging.config\n\nfrom kinto.core import scripts\nfrom kinto.plugins.accounts.scripts import create_user\nfrom pyramid.scripts import pserve\nfrom pyramid.paster import bootstrap\nfrom kinto import __version__\nfrom kinto.config import init\n\nDEFAULT_CONFIG_FILE = 'config/kinto.ini'\nDEFAULT_PORT = 8888\nDEFAULT_LOG_LEVEL = logging.INFO\nDEFAULT_LOG_FORMAT = '%(levelname)-5.5s %(message)s'\n\n\ndef main(args=None):\n \"\"\"The main routine.\"\"\"\n if args is None:\n args = sys.argv[1:]\n\n parser = argparse.ArgumentParser(description='Kinto Command-Line '\n 'Interface')\n commands = ('init', 'start', 'migrate', 'delete-collection', 'version',\n 'rebuild-quotas', 'create-user')\n subparsers = parser.add_subparsers(title='subcommands',\n description='Main Kinto CLI commands',\n dest='subcommand',\n help='Choose and run with --help')\n subparsers.required = True\n\n for command in commands:\n subparser = subparsers.add_parser(command)\n subparser.set_defaults(which=command)\n\n subparser.add_argument('--ini',\n help='Application configuration file',\n dest='ini_file',\n required=False,\n default=DEFAULT_CONFIG_FILE)\n\n subparser.add_argument('-q', '--quiet', action='store_const',\n const=logging.CRITICAL, dest='verbosity',\n help='Show only critical errors.')\n\n subparser.add_argument('-v', '--debug', action='store_const',\n const=logging.DEBUG, dest='verbosity',\n help='Show all messages, including debug messages.')\n\n if command == 'init':\n subparser.add_argument('--backend',\n help='{memory,redis,postgresql}',\n dest='backend',\n required=False,\n default=None)\n subparser.add_argument('--host',\n help='Host to listen() on.',\n dest='host',\n required=False,\n default='127.0.0.1')\n elif command == 'migrate':\n subparser.add_argument('--dry-run',\n action='store_true',\n help='Simulate the migration operations '\n 'and show information',\n dest='dry_run',\n required=False,\n default=False)\n elif command == 'delete-collection':\n subparser.add_argument('--bucket',\n help='The bucket where the collection '\n 'belongs to.',\n required=True)\n subparser.add_argument('--collection',\n help='The collection to remove.',\n required=True)\n\n elif command == 'rebuild-quotas':\n subparser.add_argument('--dry-run',\n action='store_true',\n help='Simulate the rebuild operation '\n 'and show information',\n dest='dry_run',\n required=False,\n default=False)\n\n elif command == 'start':\n subparser.add_argument('--reload',\n action='store_true',\n help='Restart when code or config changes',\n required=False,\n default=False)\n subparser.add_argument('--port',\n type=int,\n help='Listening port number',\n required=False,\n default=DEFAULT_PORT)\n\n elif command == 'create-user':\n subparser.add_argument('-u', '--username',\n help='Superuser username',\n required=False,\n default=None)\n subparser.add_argument('-p', '--password',\n help='Superuser password',\n required=False,\n default=None)\n\n # Parse command-line arguments\n parsed_args = vars(parser.parse_args(args))\n\n config_file = parsed_args['ini_file']\n which_command = parsed_args['which']\n\n # Initialize logging from\n level = parsed_args.get('verbosity') or DEFAULT_LOG_LEVEL\n logging.basicConfig(level=level, format=DEFAULT_LOG_FORMAT)\n\n if which_command == 'init':\n if os.path.exists(config_file):\n print('{} already exists.'.format(config_file), file=sys.stderr)\n return 1\n\n backend = parsed_args['backend']\n if not backend:\n while True:\n prompt = 
('Select the backend you would like to use: '\n '(1 - postgresql, 2 - redis, default - memory) ')\n answer = input(prompt).strip()\n try:\n backends = {'1': 'postgresql', '2': 'redis', '': 'memory'}\n backend = backends[answer]\n break\n except KeyError:\n pass\n\n init(config_file, backend, parsed_args['host'])\n\n # Install postgresql libraries if necessary\n if backend == 'postgresql':\n try:\n import psycopg2 # NOQA\n except ImportError:\n import pip\n pip.main(['install', 'kinto[postgresql]'])\n elif backend == 'redis':\n try:\n import kinto_redis # NOQA\n except ImportError:\n import pip\n pip.main(['install', 'kinto[redis]'])\n\n elif which_command == 'migrate':\n dry_run = parsed_args['dry_run']\n env = bootstrap(config_file)\n scripts.migrate(env, dry_run=dry_run)\n\n elif which_command == 'delete-collection':\n env = bootstrap(config_file)\n return scripts.delete_collection(env,\n parsed_args['bucket'],\n parsed_args['collection'])\n\n elif which_command == 'rebuild-quotas':\n dry_run = parsed_args['dry_run']\n env = bootstrap(config_file)\n return scripts.rebuild_quotas(env, dry_run=dry_run)\n\n elif which_command == 'create-user':\n username = parsed_args['username']\n password = parsed_args['password']\n env = bootstrap(config_file)\n return create_user(env, username=username, password=password)\n\n elif which_command == 'start':\n pserve_argv = ['pserve']\n\n if parsed_args['reload']:\n pserve_argv.append('--reload')\n\n if level == logging.DEBUG:\n pserve_argv.append('-v')\n\n if level == logging.CRITICAL:\n pserve_argv.append('-q')\n\n pserve_argv.append(config_file)\n pserve_argv.append('http_port={}'.format(parsed_args['port']))\n pserve.main(argv=pserve_argv)\n\n else:\n print(__version__)\n\n return 0\n",
"path": "kinto/__main__.py"
}
] | [
{
"content": "import argparse\nimport os\nimport sys\nimport logging\nimport logging.config\n\nfrom kinto.core import scripts\nfrom kinto.plugins.accounts.scripts import create_user\nfrom pyramid.scripts import pserve\nfrom pyramid.paster import bootstrap\nfrom kinto import __version__\nfrom kinto.config import init\n\nDEFAULT_CONFIG_FILE = os.getenv('KINTO_INI', 'config/kinto.ini')\nDEFAULT_PORT = 8888\nDEFAULT_LOG_LEVEL = logging.INFO\nDEFAULT_LOG_FORMAT = '%(levelname)-5.5s %(message)s'\n\n\ndef main(args=None):\n \"\"\"The main routine.\"\"\"\n if args is None:\n args = sys.argv[1:]\n\n parser = argparse.ArgumentParser(description='Kinto Command-Line '\n 'Interface')\n commands = ('init', 'start', 'migrate', 'delete-collection', 'version',\n 'rebuild-quotas', 'create-user')\n subparsers = parser.add_subparsers(title='subcommands',\n description='Main Kinto CLI commands',\n dest='subcommand',\n help='Choose and run with --help')\n subparsers.required = True\n\n for command in commands:\n subparser = subparsers.add_parser(command)\n subparser.set_defaults(which=command)\n\n subparser.add_argument('--ini',\n help='Application configuration file',\n dest='ini_file',\n required=False,\n default=DEFAULT_CONFIG_FILE)\n\n subparser.add_argument('-q', '--quiet', action='store_const',\n const=logging.CRITICAL, dest='verbosity',\n help='Show only critical errors.')\n\n subparser.add_argument('-v', '--debug', action='store_const',\n const=logging.DEBUG, dest='verbosity',\n help='Show all messages, including debug messages.')\n\n if command == 'init':\n subparser.add_argument('--backend',\n help='{memory,redis,postgresql}',\n dest='backend',\n required=False,\n default=None)\n subparser.add_argument('--host',\n help='Host to listen() on.',\n dest='host',\n required=False,\n default='127.0.0.1')\n elif command == 'migrate':\n subparser.add_argument('--dry-run',\n action='store_true',\n help='Simulate the migration operations '\n 'and show information',\n dest='dry_run',\n required=False,\n default=False)\n elif command == 'delete-collection':\n subparser.add_argument('--bucket',\n help='The bucket where the collection '\n 'belongs to.',\n required=True)\n subparser.add_argument('--collection',\n help='The collection to remove.',\n required=True)\n\n elif command == 'rebuild-quotas':\n subparser.add_argument('--dry-run',\n action='store_true',\n help='Simulate the rebuild operation '\n 'and show information',\n dest='dry_run',\n required=False,\n default=False)\n\n elif command == 'start':\n subparser.add_argument('--reload',\n action='store_true',\n help='Restart when code or config changes',\n required=False,\n default=False)\n subparser.add_argument('--port',\n type=int,\n help='Listening port number',\n required=False,\n default=DEFAULT_PORT)\n\n elif command == 'create-user':\n subparser.add_argument('-u', '--username',\n help='Superuser username',\n required=False,\n default=None)\n subparser.add_argument('-p', '--password',\n help='Superuser password',\n required=False,\n default=None)\n\n # Parse command-line arguments\n parsed_args = vars(parser.parse_args(args))\n\n config_file = parsed_args['ini_file']\n which_command = parsed_args['which']\n\n # Initialize logging from\n level = parsed_args.get('verbosity') or DEFAULT_LOG_LEVEL\n logging.basicConfig(level=level, format=DEFAULT_LOG_FORMAT)\n\n if which_command == 'init':\n if os.path.exists(config_file):\n print('{} already exists.'.format(config_file), file=sys.stderr)\n return 1\n\n backend = parsed_args['backend']\n if not backend:\n 
while True:\n prompt = ('Select the backend you would like to use: '\n '(1 - postgresql, 2 - redis, default - memory) ')\n answer = input(prompt).strip()\n try:\n backends = {'1': 'postgresql', '2': 'redis', '': 'memory'}\n backend = backends[answer]\n break\n except KeyError:\n pass\n\n init(config_file, backend, parsed_args['host'])\n\n # Install postgresql libraries if necessary\n if backend == 'postgresql':\n try:\n import psycopg2 # NOQA\n except ImportError:\n import pip\n pip.main(['install', 'kinto[postgresql]'])\n elif backend == 'redis':\n try:\n import kinto_redis # NOQA\n except ImportError:\n import pip\n pip.main(['install', 'kinto[redis]'])\n\n elif which_command == 'migrate':\n dry_run = parsed_args['dry_run']\n env = bootstrap(config_file)\n scripts.migrate(env, dry_run=dry_run)\n\n elif which_command == 'delete-collection':\n env = bootstrap(config_file)\n return scripts.delete_collection(env,\n parsed_args['bucket'],\n parsed_args['collection'])\n\n elif which_command == 'rebuild-quotas':\n dry_run = parsed_args['dry_run']\n env = bootstrap(config_file)\n return scripts.rebuild_quotas(env, dry_run=dry_run)\n\n elif which_command == 'create-user':\n username = parsed_args['username']\n password = parsed_args['password']\n env = bootstrap(config_file)\n return create_user(env, username=username, password=password)\n\n elif which_command == 'start':\n pserve_argv = ['pserve']\n\n if parsed_args['reload']:\n pserve_argv.append('--reload')\n\n if level == logging.DEBUG:\n pserve_argv.append('-v')\n\n if level == logging.CRITICAL:\n pserve_argv.append('-q')\n\n pserve_argv.append(config_file)\n pserve_argv.append('http_port={}'.format(parsed_args['port']))\n pserve.main(argv=pserve_argv)\n\n else:\n print(__version__)\n\n return 0\n",
"path": "kinto/__main__.py"
}
] | diff --git a/CHANGELOG.rst b/CHANGELOG.rst
index f5f507aa2..74ec541ae 100644
--- a/CHANGELOG.rst
+++ b/CHANGELOG.rst
@@ -8,7 +8,8 @@ This document describes changes between each past release.
**Bug fixes**
-- Fix `create-user` command for PostgreSQL backend (#1340)
+- Use the ``KINTO_INI`` env variable to findout the configuration file. (#1339)
+- Fix ``create-user`` command for PostgreSQL backend (#1340)
- Make sure ``create-user`` command updates password (#1336)
diff --git a/docs/commandline.rst b/docs/commandline.rst
index 48b750acc..e56d2080c 100644
--- a/docs/commandline.rst
+++ b/docs/commandline.rst
@@ -5,8 +5,10 @@ Command Line
When Kinto is installed, a command ``kinto`` becomes available.
-It accepts a ``--ini`` parameter, whose default value is ``config/kinto.ini``,
-and a set of «sub commands» are available.
+It accepts a ``--ini`` parameter, whose default value is
+``config/kinto.ini`` or the ``KINTO_INI`` env variable if defined.
+
+A set of «sub commands» are available.
::
diff --git a/kinto/__main__.py b/kinto/__main__.py
index 8d53ed682..471742cfe 100644
--- a/kinto/__main__.py
+++ b/kinto/__main__.py
@@ -11,7 +11,7 @@
from kinto import __version__
from kinto.config import init
-DEFAULT_CONFIG_FILE = 'config/kinto.ini'
+DEFAULT_CONFIG_FILE = os.getenv('KINTO_INI', 'config/kinto.ini')
DEFAULT_PORT = 8888
DEFAULT_LOG_LEVEL = logging.INFO
DEFAULT_LOG_FORMAT = '%(levelname)-5.5s %(message)s'
|
horovod__horovod-1693 | horovodrun convenience script does not account for 'OpenRTE' in the output of mpirun --version
**Environment:**
1. Framework: (TensorFlow, PyTorch)
2. Framework version: 1.14.0
3. Horovod version: 0.16.4
4. MPI version: 3.1.4/4.0.1
5. CUDA version: 10.1
6. NCCL version: 2.4.8
7. Python version: 3.6
8. OS and version: Ubuntu, Docker
9. GCC version: 5.4.0
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
Yes; it hasn't been specifically asked before.
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
1. horovodrun outputs the following when used with Open MPI 4.0.1.
```
horovodrun -np 1 -H localhost:1 python pytorch_mnist.py
Open MPI not found in output of mpirun --version.
Traceback (most recent call last):
File "/opt/conda/bin/horovodrun", line 21, in <module>
run.run()
File "/opt/conda/lib/python3.6/site-packages/horovod/run/run.py", line 448, in run
'horovodrun convenience script currently only supports '
Exception: horovodrun convenience script currently only supports Open MPI.
Choose one of:
1. Install Open MPI 4.0.0+ and re-install Horovod (use --no-cache-dir pip option).
2. Run distributed training script using the standard way provided by your MPI distribution (usually mpirun, srun, or jsrun).
root@3da487b92c3d:/horovod/examples# mpirun --version
mpirun.real (OpenRTE) 4.0.1
Report bugs to http://www.open-mpi.org/community/help/
```
2. When Open MPI is installed as follows:
```
RUN wget https://www.open-mpi.org/software/ompi/v4.0/downloads/openmpi-$OPEN_MPI_VERSION.tar.gz \
&& gunzip -c openmpi-$OPEN_MPI_VERSION.tar.gz | tar xf - \
&& cd openmpi-$OPEN_MPI_VERSION \
&& ./configure --prefix=/home/.openmpi \
&& make all install \
&& cd .. \
&& rm openmpi-$OPEN_MPI_VERSION.tar.gz \
&& rm -rf openmpi-$OPEN_MPI_VERSION
```
3. The horovodrun check expects 'Open MPI' to be present in the output of `mpirun --version` [[link](https://github.com/horovod/horovod/blob/master/horovod/run/mpi_run.py)]. However, when Open MPI is installed as above, `mpirun --version` reports the following:
```
root@3b5149353790:/horovod/examples# mpirun --version
mpirun.real (OpenRTE) 4.0.1
Report bugs to http://www.open-mpi.org/community/help/
```
4. Either Open MPI was installed incorrectly (in which case, can the Horovod documentation clarify how to install it correctly?), or the horovodrun convenience script does not account for the presence of 'OpenRTE' in the output of `mpirun --version`.
I'm unable to work out when 'OpenRTE' appears in the output of `mpirun --version` and when it doesn't. I saw the option `--enable-orterun-prefix-by-default`, but I'm not using it to build Open MPI.
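
As a hedged sketch (not Horovod's exact implementation, which lives in `horovod/run/mpi_run.py`), the detection could accept either banner:

```python
# Sketch only: treat Open MPI as present whether `mpirun --version` prints
# the "Open MPI" banner or the "OpenRTE" banner seen with some builds.
import subprocess

def is_open_mpi() -> bool:
    try:
        output = subprocess.run(
            ["mpirun", "--version"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return False
    return "Open MPI" in output or "OpenRTE" in output

if __name__ == "__main__":
    print(is_open_mpi())
```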
| [
{
"content": "# Copyright 2019 Uber Technologies, Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\nfrom __future__ import print_function\nimport six\nimport traceback\nimport sys\nimport os\nfrom horovod.run.common.util import env as env_util, safe_shell_exec, secret, codec\n\n# Open MPI Flags\n_OMPI_FLAGS = ['-mca pml ob1', '-mca btl ^openib']\n# Spectrum MPI Flags\n_SMPI_FLAGS = ['-gpu', '-disable_gdr']\n# MPICH Flags\n_MPICH_FLAGS = []\n# Threshold for large cluster MPI issues:\n_LARGE_CLUSTER_THRESHOLD = 64\n\ntry:\n from shlex import quote\nexcept ImportError:\n from pipes import quote\n\n\ndef _get_mpi_implementation_flags():\n output = six.StringIO()\n command = 'mpirun --version'\n try:\n exit_code = safe_shell_exec.execute(command, stdout=output,\n stderr=output)\n output_msg = output.getvalue()\n except Exception:\n print(traceback.format_exc(), file=sys.stderr)\n return None\n finally:\n output.close()\n\n if exit_code == 0:\n if 'Open MPI' in output_msg:\n return list(_OMPI_FLAGS)\n elif 'IBM Spectrum MPI' in output_msg:\n return list(_SMPI_FLAGS)\n elif 'MPICH' in output_msg:\n return list(_MPICH_FLAGS)\n print('Open MPI/Spectrum MPI/MPICH not found in output of mpirun --version.',\n file=sys.stderr)\n return None\n else:\n print(\"Was not able to run %s:\\n%s\" % (command, output_msg),\n file=sys.stderr)\n return None\n\n\ndef mpi_run(settings, common_intfs, env, command, stdout=None, stderr=None, run_func=safe_shell_exec.execute):\n \"\"\"\n Runs mpi_run.\n\n Args:\n settings: Settings for running MPI.\n Note: settings.num_proc and settings.hosts must not be None.\n common_intfs: Interfaces to include by MPI.\n env: Environment dictionary to use for running MPI.\n command: Command and arguments to run as a list of string.\n stdout: Stdout of the mpi process.\n Only used when settings.run_func_mode is True.\n stderr: Stderr of the mpi process.\n Only used when settings.run_func_mode is True.\n run_func: Run function to use. Must have arguments 'command' and 'env'.\n Only used when settings.run_func_mode is True.\n Defaults to safe_shell_exec.execute.\n \"\"\"\n mpi_impl_flags = _get_mpi_implementation_flags()\n if mpi_impl_flags is None:\n raise Exception(\n 'horovodrun convenience script does not find an installed MPI.\\n\\n'\n 'Choose one of:\\n'\n '1. Install Open MPI 4.0.0+ or IBM Spectrum MPI or MPICH and re-install Horovod '\n '(use --no-cache-dir pip option).\\n'\n '2. Run distributed '\n 'training script using the standard way provided by your'\n ' MPI distribution (usually mpirun, srun, or jsrun).\\n'\n '3. 
Use built-in gloo option (horovodrun --gloo ...).')\n\n ssh_port_arg = '-mca plm_rsh_args \\\"-p {ssh_port}\\\"'.format(\n ssh_port=settings.ssh_port) if settings.ssh_port else ''\n\n # if user does not specify any hosts, mpirun by default uses local host.\n # There is no need to specify localhost.\n hosts_arg = '-H {hosts}'.format(hosts=settings.hosts)\n\n tcp_intf_arg = '-mca btl_tcp_if_include {common_intfs}'.format(\n common_intfs=','.join(common_intfs)) if common_intfs else ''\n nccl_socket_intf_arg = '-x NCCL_SOCKET_IFNAME={common_intfs}'.format(\n common_intfs=','.join(common_intfs)) if common_intfs else ''\n\n # On large cluster runs (e.g. Summit), we need extra settings to work around OpenMPI issues\n if settings.num_hosts and settings.num_hosts >= _LARGE_CLUSTER_THRESHOLD:\n mpi_impl_flags.append('-mca plm_rsh_no_tree_spawn true')\n mpi_impl_flags.append('-mca plm_rsh_num_concurrent {}'.format(settings.num_proc))\n\n # Pass all the env variables to the mpirun command.\n mpirun_command = (\n 'mpirun --allow-run-as-root --tag-output '\n '-np {num_proc} {hosts_arg} '\n '-bind-to none -map-by slot '\n '{mpi_args} '\n '{ssh_port_arg} '\n '{tcp_intf_arg} '\n '{nccl_socket_intf_arg} '\n '{output_filename_arg} '\n '{env} {extra_mpi_args} {command}' # expect a lot of environment variables\n .format(num_proc=settings.num_proc,\n hosts_arg=hosts_arg,\n mpi_args=' '.join(mpi_impl_flags),\n tcp_intf_arg=tcp_intf_arg,\n nccl_socket_intf_arg=nccl_socket_intf_arg,\n ssh_port_arg=ssh_port_arg,\n output_filename_arg='--output-filename ' + settings.output_filename\n if settings.output_filename else '',\n env=' '.join('-x %s' % key for key in sorted(env.keys())\n if env_util.is_exportable(key)),\n\n extra_mpi_args=settings.extra_mpi_args if settings.extra_mpi_args else '',\n command=' '.join(quote(par) for par in command))\n )\n\n if settings.verbose >= 2:\n print(mpirun_command)\n\n # Execute the mpirun command.\n if settings.run_func_mode:\n exit_code = run_func(command=mpirun_command, env=env, stdout=stdout, stderr=stderr)\n if exit_code != 0:\n raise RuntimeError(\"mpirun failed with exit code {exit_code}\".format(exit_code=exit_code))\n else:\n os.execve('/bin/sh', ['/bin/sh', '-c', mpirun_command], env)\n\n",
"path": "horovod/run/mpi_run.py"
}
] | [
{
"content": "# Copyright 2019 Uber Technologies, Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\nfrom __future__ import print_function\nimport six\nimport traceback\nimport sys\nimport os\nfrom horovod.run.common.util import env as env_util, safe_shell_exec, secret, codec\n\n# Open MPI Flags\n_OMPI_FLAGS = ['-mca pml ob1', '-mca btl ^openib']\n# Spectrum MPI Flags\n_SMPI_FLAGS = ['-gpu', '-disable_gdr']\n# MPICH Flags\n_MPICH_FLAGS = []\n# Threshold for large cluster MPI issues:\n_LARGE_CLUSTER_THRESHOLD = 64\n\ntry:\n from shlex import quote\nexcept ImportError:\n from pipes import quote\n\n\ndef _get_mpi_implementation_flags():\n output = six.StringIO()\n command = 'mpirun --version'\n try:\n exit_code = safe_shell_exec.execute(command, stdout=output,\n stderr=output)\n output_msg = output.getvalue()\n except Exception:\n print(traceback.format_exc(), file=sys.stderr)\n return None\n finally:\n output.close()\n\n if exit_code == 0:\n if 'Open MPI' in output_msg or 'OpenRTE' in output_msg:\n return list(_OMPI_FLAGS)\n elif 'IBM Spectrum MPI' in output_msg:\n return list(_SMPI_FLAGS)\n elif 'MPICH' in output_msg:\n return list(_MPICH_FLAGS)\n print('Open MPI/Spectrum MPI/MPICH not found in output of mpirun --version.',\n file=sys.stderr)\n return None\n else:\n print(\"Was not able to run %s:\\n%s\" % (command, output_msg),\n file=sys.stderr)\n return None\n\n\ndef mpi_run(settings, common_intfs, env, command, stdout=None, stderr=None, run_func=safe_shell_exec.execute):\n \"\"\"\n Runs mpi_run.\n\n Args:\n settings: Settings for running MPI.\n Note: settings.num_proc and settings.hosts must not be None.\n common_intfs: Interfaces to include by MPI.\n env: Environment dictionary to use for running MPI.\n command: Command and arguments to run as a list of string.\n stdout: Stdout of the mpi process.\n Only used when settings.run_func_mode is True.\n stderr: Stderr of the mpi process.\n Only used when settings.run_func_mode is True.\n run_func: Run function to use. Must have arguments 'command' and 'env'.\n Only used when settings.run_func_mode is True.\n Defaults to safe_shell_exec.execute.\n \"\"\"\n mpi_impl_flags = _get_mpi_implementation_flags()\n if mpi_impl_flags is None:\n raise Exception(\n 'horovodrun convenience script does not find an installed MPI.\\n\\n'\n 'Choose one of:\\n'\n '1. Install Open MPI 4.0.0+ or IBM Spectrum MPI or MPICH and re-install Horovod '\n '(use --no-cache-dir pip option).\\n'\n '2. Run distributed '\n 'training script using the standard way provided by your'\n ' MPI distribution (usually mpirun, srun, or jsrun).\\n'\n '3. 
Use built-in gloo option (horovodrun --gloo ...).')\n\n ssh_port_arg = '-mca plm_rsh_args \\\"-p {ssh_port}\\\"'.format(\n ssh_port=settings.ssh_port) if settings.ssh_port else ''\n\n # if user does not specify any hosts, mpirun by default uses local host.\n # There is no need to specify localhost.\n hosts_arg = '-H {hosts}'.format(hosts=settings.hosts)\n\n tcp_intf_arg = '-mca btl_tcp_if_include {common_intfs}'.format(\n common_intfs=','.join(common_intfs)) if common_intfs else ''\n nccl_socket_intf_arg = '-x NCCL_SOCKET_IFNAME={common_intfs}'.format(\n common_intfs=','.join(common_intfs)) if common_intfs else ''\n\n # On large cluster runs (e.g. Summit), we need extra settings to work around OpenMPI issues\n if settings.num_hosts and settings.num_hosts >= _LARGE_CLUSTER_THRESHOLD:\n mpi_impl_flags.append('-mca plm_rsh_no_tree_spawn true')\n mpi_impl_flags.append('-mca plm_rsh_num_concurrent {}'.format(settings.num_proc))\n\n # Pass all the env variables to the mpirun command.\n mpirun_command = (\n 'mpirun --allow-run-as-root --tag-output '\n '-np {num_proc} {hosts_arg} '\n '-bind-to none -map-by slot '\n '{mpi_args} '\n '{ssh_port_arg} '\n '{tcp_intf_arg} '\n '{nccl_socket_intf_arg} '\n '{output_filename_arg} '\n '{env} {extra_mpi_args} {command}' # expect a lot of environment variables\n .format(num_proc=settings.num_proc,\n hosts_arg=hosts_arg,\n mpi_args=' '.join(mpi_impl_flags),\n tcp_intf_arg=tcp_intf_arg,\n nccl_socket_intf_arg=nccl_socket_intf_arg,\n ssh_port_arg=ssh_port_arg,\n output_filename_arg='--output-filename ' + settings.output_filename\n if settings.output_filename else '',\n env=' '.join('-x %s' % key for key in sorted(env.keys())\n if env_util.is_exportable(key)),\n\n extra_mpi_args=settings.extra_mpi_args if settings.extra_mpi_args else '',\n command=' '.join(quote(par) for par in command))\n )\n\n if settings.verbose >= 2:\n print(mpirun_command)\n\n # Execute the mpirun command.\n if settings.run_func_mode:\n exit_code = run_func(command=mpirun_command, env=env, stdout=stdout, stderr=stderr)\n if exit_code != 0:\n raise RuntimeError(\"mpirun failed with exit code {exit_code}\".format(exit_code=exit_code))\n else:\n os.execve('/bin/sh', ['/bin/sh', '-c', mpirun_command], env)\n\n",
"path": "horovod/run/mpi_run.py"
}
] | diff --git a/horovod/run/mpi_run.py b/horovod/run/mpi_run.py
index 9fbc55e085..18c41ca747 100644
--- a/horovod/run/mpi_run.py
+++ b/horovod/run/mpi_run.py
@@ -49,7 +49,7 @@ def _get_mpi_implementation_flags():
output.close()
if exit_code == 0:
- if 'Open MPI' in output_msg:
+ if 'Open MPI' in output_msg or 'OpenRTE' in output_msg:
return list(_OMPI_FLAGS)
elif 'IBM Spectrum MPI' in output_msg:
return list(_SMPI_FLAGS)
|
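
For orientation, the diff above only broadens the substring check used to recognize Open MPI, whose `mpirun --version` output may say `OpenRTE` rather than `Open MPI`. Below is a minimal standalone sketch of that branching; the flag constants are copied from the file above, but the helper name `classify_mpirun_version` and the sample version strings are invented for illustration and are not part of Horovod.

```python
# Sketch of the version-string branching touched by the patch above: some
# Open MPI builds identify themselves as "OpenRTE" in `mpirun --version`,
# so both substrings must map to the Open MPI flag set.
_OMPI_FLAGS = ['-mca pml ob1', '-mca btl ^openib']   # copied from mpi_run.py
_SMPI_FLAGS = ['-gpu', '-disable_gdr']
_MPICH_FLAGS = []

def classify_mpirun_version(output_msg):
    """Return the extra mpirun flags implied by `mpirun --version` output."""
    if 'Open MPI' in output_msg or 'OpenRTE' in output_msg:
        return list(_OMPI_FLAGS)
    elif 'IBM Spectrum MPI' in output_msg:
        return list(_SMPI_FLAGS)
    elif 'MPICH' in output_msg:
        return list(_MPICH_FLAGS)
    return None  # unknown implementation -> caller raises the "no MPI found" error

# Before the patch, an "OpenRTE" banner fell through to "no MPI found":
assert classify_mpirun_version('mpirun (OpenRTE) 2.1.1') == _OMPI_FLAGS
assert classify_mpirun_version('HYDRA build details: MPICH Version: 3.3') == _MPICH_FLAGS
```
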
pulp__pulpcore-3381 | Export is not locking on the exported repositories
SSIA (subject says it all)
| [
{
"content": "from django_filters.rest_framework import filters\n\nfrom drf_spectacular.utils import extend_schema\nfrom rest_framework import mixins\n\nfrom pulpcore.app.models import (\n Export,\n Exporter,\n FilesystemExport,\n FilesystemExporter,\n Publication,\n PulpExport,\n PulpExporter,\n RepositoryVersion,\n)\n\nfrom pulpcore.app.serializers import (\n AsyncOperationResponseSerializer,\n ExportSerializer,\n ExporterSerializer,\n FilesystemExporterSerializer,\n FilesystemExportSerializer,\n PulpExporterSerializer,\n PulpExportSerializer,\n)\n\nfrom pulpcore.app.tasks.export import fs_publication_export, fs_repo_version_export, pulp_export\n\nfrom pulpcore.app.viewsets import (\n AsyncRemoveMixin,\n AsyncUpdateMixin,\n BaseFilterSet,\n NamedModelViewSet,\n)\nfrom pulpcore.app.viewsets.base import NAME_FILTER_OPTIONS\nfrom pulpcore.plugin.tasking import dispatch\nfrom pulpcore.app.response import OperationPostponedResponse\n\n\nclass ExporterFilter(BaseFilterSet):\n \"\"\"\n Plugin file system exporter filter should:\n - inherit from this class\n - add any specific filters if needed\n - define a `Meta` class which should:\n - specify a plugin remote model for which filter is defined\n - extend `fields` with specific ones\n \"\"\"\n\n name = filters.CharFilter()\n\n class Meta:\n model = Exporter\n fields = {\n \"name\": NAME_FILTER_OPTIONS,\n }\n\n\nclass ExporterViewSet(\n NamedModelViewSet,\n mixins.CreateModelMixin,\n AsyncUpdateMixin,\n mixins.RetrieveModelMixin,\n mixins.ListModelMixin,\n AsyncRemoveMixin,\n):\n \"\"\"\n ViewSet for viewing exporters.\n \"\"\"\n\n queryset = Exporter.objects.all()\n serializer_class = ExporterSerializer\n endpoint_name = \"exporters\"\n router_lookup = \"exporter\"\n filterset_class = ExporterFilter\n\n\nclass PulpExporterViewSet(ExporterViewSet):\n \"\"\"\n ViewSet for viewing PulpExporters.\n \"\"\"\n\n endpoint_name = \"pulp\"\n serializer_class = PulpExporterSerializer\n queryset = PulpExporter.objects.all()\n\n\nclass FilesystemExporterViewSet(ExporterViewSet):\n \"\"\"\n Endpoint for managing FilesystemExporters. 
FilesystemExporters are provided as a tech preview.\n \"\"\"\n\n endpoint_name = \"filesystem\"\n serializer_class = FilesystemExporterSerializer\n queryset = FilesystemExporter.objects.all()\n\n\nclass ExportViewSet(\n NamedModelViewSet,\n mixins.CreateModelMixin,\n mixins.RetrieveModelMixin,\n mixins.ListModelMixin,\n mixins.DestroyModelMixin,\n):\n \"\"\"\n ViewSet for viewing exports from an Exporter.\n \"\"\"\n\n endpoint_name = \"exports\"\n nest_prefix = \"exporters\"\n router_lookup = \"export\"\n lookup_field = \"pk\"\n parent_lookup_kwargs = {\"exporter_pk\": \"exporter__pk\"}\n serializer_class = ExportSerializer\n queryset = Export.objects.all()\n parent_viewset = ExporterViewSet\n\n\nclass PulpExportViewSet(ExportViewSet):\n \"\"\"\n ViewSet for viewing exports from a PulpExporter.\n \"\"\"\n\n parent_viewset = PulpExporterViewSet\n serializer_class = PulpExportSerializer\n queryset = PulpExport.objects.all()\n\n @extend_schema(\n request=PulpExportSerializer,\n description=\"Trigger an asynchronous task to export a set of repositories\",\n responses={202: AsyncOperationResponseSerializer},\n )\n def create(self, request, exporter_pk):\n \"\"\"\n Generates a Task to export the set of repositories assigned to a specific PulpExporter.\n \"\"\"\n # Validate Exporter\n exporter = PulpExporter.objects.get(pk=exporter_pk).cast()\n ExporterSerializer.validate_path(exporter.path, check_is_dir=True)\n\n # Validate Export\n serializer = PulpExportSerializer(data=request.data, context={\"exporter\": exporter})\n serializer.is_valid(raise_exception=True)\n\n # Invoke the export\n task = dispatch(\n pulp_export,\n exclusive_resources=[exporter],\n kwargs={\"exporter_pk\": str(exporter.pk), \"params\": request.data},\n )\n\n return OperationPostponedResponse(task, request)\n\n\nclass FilesystemExportViewSet(ExportViewSet):\n \"\"\"\n Endpoint for managing FilesystemExports. This endpoint is provided as a tech preview.\n \"\"\"\n\n parent_viewset = FilesystemExporterViewSet\n serializer_class = FilesystemExportSerializer\n queryset = FilesystemExport.objects.all()\n\n @extend_schema(\n request=FilesystemExportSerializer,\n description=\"Trigger an asynchronous task to export files to the filesystem\",\n responses={202: AsyncOperationResponseSerializer},\n )\n def create(self, request, exporter_pk):\n \"\"\"\n Generates a Task to export files to the filesystem.\n \"\"\"\n # Validate Exporter\n exporter = FilesystemExporter.objects.get(pk=exporter_pk).cast()\n ExporterSerializer.validate_path(exporter.path, check_is_dir=True)\n\n # Validate Export\n serializer = FilesystemExportSerializer(data=request.data, context={\"exporter\": exporter})\n serializer.is_valid(raise_exception=True)\n\n if request.data.get(\"publication\"):\n publication = self.get_resource(request.data[\"publication\"], Publication)\n\n task = dispatch(\n fs_publication_export,\n exclusive_resources=[exporter],\n kwargs={\"exporter_pk\": exporter.pk, \"publication_pk\": publication.pk},\n )\n else:\n repo_version = self.get_resource(request.data[\"repository_version\"], RepositoryVersion)\n\n task = dispatch(\n fs_repo_version_export,\n exclusive_resources=[exporter],\n kwargs={\"exporter_pk\": str(exporter.pk), \"repo_version_pk\": repo_version.pk},\n )\n\n return OperationPostponedResponse(task, request)\n",
"path": "pulpcore/app/viewsets/exporter.py"
}
] | [
{
"content": "from django_filters.rest_framework import filters\n\nfrom drf_spectacular.utils import extend_schema\nfrom rest_framework import mixins\n\nfrom pulpcore.app.models import (\n Export,\n Exporter,\n FilesystemExport,\n FilesystemExporter,\n Publication,\n PulpExport,\n PulpExporter,\n RepositoryVersion,\n)\n\nfrom pulpcore.app.serializers import (\n AsyncOperationResponseSerializer,\n ExportSerializer,\n ExporterSerializer,\n FilesystemExporterSerializer,\n FilesystemExportSerializer,\n PulpExporterSerializer,\n PulpExportSerializer,\n)\n\nfrom pulpcore.app.tasks.export import fs_publication_export, fs_repo_version_export, pulp_export\n\nfrom pulpcore.app.viewsets import (\n AsyncRemoveMixin,\n AsyncUpdateMixin,\n BaseFilterSet,\n NamedModelViewSet,\n)\nfrom pulpcore.app.viewsets.base import NAME_FILTER_OPTIONS\nfrom pulpcore.plugin.tasking import dispatch\nfrom pulpcore.app.response import OperationPostponedResponse\n\n\nclass ExporterFilter(BaseFilterSet):\n \"\"\"\n Plugin file system exporter filter should:\n - inherit from this class\n - add any specific filters if needed\n - define a `Meta` class which should:\n - specify a plugin remote model for which filter is defined\n - extend `fields` with specific ones\n \"\"\"\n\n name = filters.CharFilter()\n\n class Meta:\n model = Exporter\n fields = {\n \"name\": NAME_FILTER_OPTIONS,\n }\n\n\nclass ExporterViewSet(\n NamedModelViewSet,\n mixins.CreateModelMixin,\n AsyncUpdateMixin,\n mixins.RetrieveModelMixin,\n mixins.ListModelMixin,\n AsyncRemoveMixin,\n):\n \"\"\"\n ViewSet for viewing exporters.\n \"\"\"\n\n queryset = Exporter.objects.all()\n serializer_class = ExporterSerializer\n endpoint_name = \"exporters\"\n router_lookup = \"exporter\"\n filterset_class = ExporterFilter\n\n\nclass PulpExporterViewSet(ExporterViewSet):\n \"\"\"\n ViewSet for viewing PulpExporters.\n \"\"\"\n\n endpoint_name = \"pulp\"\n serializer_class = PulpExporterSerializer\n queryset = PulpExporter.objects.all()\n\n\nclass FilesystemExporterViewSet(ExporterViewSet):\n \"\"\"\n Endpoint for managing FilesystemExporters. 
FilesystemExporters are provided as a tech preview.\n \"\"\"\n\n endpoint_name = \"filesystem\"\n serializer_class = FilesystemExporterSerializer\n queryset = FilesystemExporter.objects.all()\n\n\nclass ExportViewSet(\n NamedModelViewSet,\n mixins.CreateModelMixin,\n mixins.RetrieveModelMixin,\n mixins.ListModelMixin,\n mixins.DestroyModelMixin,\n):\n \"\"\"\n ViewSet for viewing exports from an Exporter.\n \"\"\"\n\n endpoint_name = \"exports\"\n nest_prefix = \"exporters\"\n router_lookup = \"export\"\n lookup_field = \"pk\"\n parent_lookup_kwargs = {\"exporter_pk\": \"exporter__pk\"}\n serializer_class = ExportSerializer\n queryset = Export.objects.all()\n parent_viewset = ExporterViewSet\n\n\nclass PulpExportViewSet(ExportViewSet):\n \"\"\"\n ViewSet for viewing exports from a PulpExporter.\n \"\"\"\n\n parent_viewset = PulpExporterViewSet\n serializer_class = PulpExportSerializer\n queryset = PulpExport.objects.all()\n\n @extend_schema(\n request=PulpExportSerializer,\n description=\"Trigger an asynchronous task to export a set of repositories\",\n responses={202: AsyncOperationResponseSerializer},\n )\n def create(self, request, exporter_pk):\n \"\"\"\n Generates a Task to export the set of repositories assigned to a specific PulpExporter.\n \"\"\"\n # Validate Exporter\n exporter = PulpExporter.objects.get(pk=exporter_pk).cast()\n ExporterSerializer.validate_path(exporter.path, check_is_dir=True)\n\n # Validate Export\n serializer = PulpExportSerializer(data=request.data, context={\"exporter\": exporter})\n serializer.is_valid(raise_exception=True)\n\n # Invoke the export\n task = dispatch(\n pulp_export,\n exclusive_resources=[exporter],\n shared_resources=exporter.repositories.all(),\n kwargs={\"exporter_pk\": str(exporter.pk), \"params\": request.data},\n )\n\n return OperationPostponedResponse(task, request)\n\n\nclass FilesystemExportViewSet(ExportViewSet):\n \"\"\"\n Endpoint for managing FilesystemExports. This endpoint is provided as a tech preview.\n \"\"\"\n\n parent_viewset = FilesystemExporterViewSet\n serializer_class = FilesystemExportSerializer\n queryset = FilesystemExport.objects.all()\n\n @extend_schema(\n request=FilesystemExportSerializer,\n description=\"Trigger an asynchronous task to export files to the filesystem\",\n responses={202: AsyncOperationResponseSerializer},\n )\n def create(self, request, exporter_pk):\n \"\"\"\n Generates a Task to export files to the filesystem.\n \"\"\"\n # Validate Exporter\n exporter = FilesystemExporter.objects.get(pk=exporter_pk).cast()\n ExporterSerializer.validate_path(exporter.path, check_is_dir=True)\n\n # Validate Export\n serializer = FilesystemExportSerializer(data=request.data, context={\"exporter\": exporter})\n serializer.is_valid(raise_exception=True)\n\n if request.data.get(\"publication\"):\n publication = self.get_resource(request.data[\"publication\"], Publication)\n\n task = dispatch(\n fs_publication_export,\n exclusive_resources=[exporter],\n kwargs={\"exporter_pk\": exporter.pk, \"publication_pk\": publication.pk},\n )\n else:\n repo_version = self.get_resource(request.data[\"repository_version\"], RepositoryVersion)\n\n task = dispatch(\n fs_repo_version_export,\n exclusive_resources=[exporter],\n kwargs={\"exporter_pk\": str(exporter.pk), \"repo_version_pk\": repo_version.pk},\n )\n\n return OperationPostponedResponse(task, request)\n",
"path": "pulpcore/app/viewsets/exporter.py"
}
] | diff --git a/CHANGES/3370.bugfix b/CHANGES/3370.bugfix
new file mode 100644
index 0000000000..7653714719
--- /dev/null
+++ b/CHANGES/3370.bugfix
@@ -0,0 +1 @@
+Insured that pulp-export correctly locks repos-being-exported.
diff --git a/pulpcore/app/viewsets/exporter.py b/pulpcore/app/viewsets/exporter.py
index 3918874387..099722f093 100644
--- a/pulpcore/app/viewsets/exporter.py
+++ b/pulpcore/app/viewsets/exporter.py
@@ -146,6 +146,7 @@ def create(self, request, exporter_pk):
task = dispatch(
pulp_export,
exclusive_resources=[exporter],
+ shared_resources=exporter.repositories.all(),
kwargs={"exporter_pk": str(exporter.pk), "params": request.data},
)
|
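
The one-line fix above declares the exporter's repositories as shared resources of the export task, so a repository cannot be modified exclusively while it is being exported, yet read-only tasks can still overlap. The toy model below illustrates that shared/exclusive semantics; it is not pulpcore's tasking code, and the function and resource names are invented for illustration.

```python
# Toy model of the locking semantics the fix relies on: a task declares
# exclusive resources (writers) and shared resources (readers). Two tasks may
# run concurrently unless one needs exclusive access to something the other
# touches at all.
def may_run_concurrently(task_a, task_b):
    a_excl, a_shared = task_a
    b_excl, b_shared = task_b
    a_all = a_excl | a_shared
    b_all = b_excl | b_shared
    return not (a_excl & b_all) and not (b_excl & a_all)

export = ({'exporter-1'}, {'repo-1', 'repo-2'})   # after the fix: repos held shared
sync_repo1 = ({'repo-1'}, set())                  # a sync writes to repo-1

# The export now conflicts with a concurrent modification of repo-1 ...
assert not may_run_concurrently(export, sync_repo1)
# ... but two tasks that only read repo-1 could still overlap.
assert may_run_concurrently(export, ({'exporter-2'}, {'repo-1'}))
```
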
bokeh__bokeh-8730 | Delay between autoload.js and websocket request
#### ALL software version info (bokeh, python, notebook, OS, browser, any other relevant packages)
bokeh 1.0.2
python 3.6.5
OS CentOS-7.5.1804
#### Description of expected behavior and the observed behavior
For whatever reason, it appears that on some requests, there can be a significant delay between the autoload.js request and the subsequent websocket connection. Normally, this process takes no more than 1-2 seconds:
```
doc.session_context.request.arguments: {'bokeh-autoload-element': [b'1088'], 'bokeh-app-path': [b'/graphs/enviz_graphs'], 'bokeh-absolute-url': [b'https://*redacted*/graphs/enviz_graphs'], 'processor_id': [b'83,187,196,1114,206,335,536,212,214,1173,217,250,252,256,265,876,268,298,999']}
2019-01-18 22:44:45,794 root_url should end with a /, adding one
2019-01-18 22:44:45,797 200 GET /graphs/enviz_graphs/autoload.js?bokeh-autoload-element=1089&bokeh-app-path=/graphs/enviz_graphs&bokeh-absolute-url=https://*redacted*/graphs/enviz_graphs&processor_id=83%2C187%2C196%2C1114%2C206%2C335%2C536%2C212%2C214%2C1173%2C217%2C250%2C252%2C256%2C265%2C876%2C268%2C298%2C999 (10.50.1.159) 398.52ms
2019-01-18 22:44:47,291 101 GET /graphs/enviz_graphs/ws?bokeh-protocol-version=1.0&bokeh-session-id=ImqIQZ1sbiZS4KsAOocVHGFgUGfJJLwHxG44Irv9Xls9&pid=83,187,196,1114,206,335,536,212,214,1173,217,250,252,256,265,876,268,298,999 (10.50.1.159) 0.56ms
2019-01-18 22:44:47,291 WebSocket connection opened
2019-01-18 22:44:47,291 Receiver created for Protocol('1.0')
2019-01-18 22:44:47,291 ProtocolHandler created for Protocol('1.0')
2019-01-18 22:44:47,291 ServerConnection created
2019-01-18 22:44:47,350 Sending pull-doc-reply from session 'ImqIQZ1sbiZS4KsAOocVHGFgUGfJJLwHxG44Irv9Xls9'
```
Notice the autoload request at 22:44:45 and the ws request at 22:44:47. (2 seconds)
However, sometimes the ws request can arrive nearly a minute later:
```
doc.session_context.request.arguments: {'bokeh-autoload-element': [b'1090'], 'bokeh-app-path': [b'/graphs/enviz_graphs'], 'bokeh-absolute-url': [b'https://*redacted*/graphs/enviz_graphs'], 'processor_id': [b'83,187,196,1114,206,335,536,212,214,1173,217,250,252,256,265,876,268,298,300,1347,1350,1352,284,307,1115,1229,999,92,']}
2019-01-18 22:45:10,741 root_url should end with a /, adding one
2019-01-18 22:45:10,745 200 GET /graphs/enviz_graphs/autoload.js?bokeh-autoload-element=1090&bokeh-app-path=/graphs/enviz_graphs&bokeh-absolute-url=https://*redacted*/graphs/enviz_graphs&processor_id=83%2C187%2C196%2C1114%2C206%2C335%2C536%2C212%2C214%2C1173%2C217%2C250%2C252%2C256%2C265%2C876%2C268%2C298%2C300%2C1347%2C1350%2C1352%2C284%2C307%2C1115%2C1229%2C999%2C92%2C (10.50.1.159) 392.75ms
2019-01-18 22:45:35,357 Scheduling 1 sessions to discard
2019-01-18 22:45:35,357 Discarding session '1fz6E0KuyuaCaCscdyKLyI2YJze38csKNckNQkotkrE8' last in use 24616.089113235474 milliseconds ago
2019-01-18 22:45:35,358 Deleting 1 modules for <bokeh.document.document.Document object at 0x7f8bb89f8438>
2019-01-18 22:45:50,352 [pid 11775] 1 clients connected
2019-01-18 22:45:50,352 [pid 11775] /enviz_graphs has 1 sessions with 0 unused
2019-01-18 22:46:05,562 101 GET /graphs/enviz_graphs/ws?bokeh-protocol-version=1.0&bokeh-session-id=1fz6E0KuyuaCaCscdyKLyI2YJze38csKNckNQkotkrE8&pid=83,187,196,1114,206,335,536,212,214,1173,217,250,252,256,265,876,268,298,300,1347,1350,1352,284,307,1115,1229,999,92, (10.50.1.159) 0.58ms
2019-01-18 22:46:05,562 WebSocket connection opened
doc.session_context.request.arguments: {'pid': [b'83,187,196,1114,206,335,536,212,214,1173,217,250,252,256,265,876,268,298,300,1347,1350,1352,284,307,1115,1229,999,92,']}
2019-01-18 22:46:05,563 Error running application handler <bokeh.application.handlers.directory.DirectoryHandler object at 0x7f8bb8f75cf8>: local variable 'current_pids' referenced before assignment
File "env_frontend.py", line 30, in modify_doc:
if len(current_pids) < 1: Traceback (most recent call last):
File "/*redacted*/enviz/venv/lib64/python3.6/site-packages/bokeh/application/handlers/code_runner.py", line 180, in run
exec(self._code, module.__dict__)
File "/*redacted*/enviz/venv/lib/python3.6/site-packages/enviz_graphs/main.py", line 7, in <module>
modify_doc(doc)
File "/*redacted*/enviz/venv/lib/python3.6/site-packages/enviz_graphs/env_frontend.py", line 30, in modify_doc
if len(current_pids) < 1:
UnboundLocalError: local variable 'current_pids' referenced before assignment
2019-01-18 22:46:05,563 Receiver created for Protocol('1.0')
2019-01-18 22:46:05,563 ProtocolHandler created for Protocol('1.0')
2019-01-18 22:46:05,563 ServerConnection created
2019-01-18 22:46:05,631 Sending pull-doc-reply from session '1fz6E0KuyuaCaCscdyKLyI2YJze38csKNckNQkotkrE8'
```
Notice the autoload request at 22:45:10 and the ws request at 22:46:05. (55 seconds)
In that gap, it appears that the session created by the autoload request was discarded as unused at 22:45:35 (we have the default 15000 ms unused-session timeout for that), before the ws request ever arrived.
In both cases, the request for autoload.js takes less than 400 ms, so the slowdown seems like it would be in the browser, though I don't yet have any browser profiling that caught it.
Then, when the ws request comes in, it tries to create a new session, but fails to run our module, as the correct keys aren't in doc.session_context.request.arguments.
After this, every request to the bokeh server fails at requesting autoload until we restart the server, as it appears that doc.session_context.request.arguments is always None after that.
#### Complete, minimal, self-contained example code that reproduces the issue
N/A
#### Stack traceback and/or browser JavaScript console output
N/A
#### Screenshots or screencasts of the bug in action
N/A
| [
{
"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' Provide a Bokeh Application Handler to build up documents by running\nthe code from ``main.py`` or ``main.ipynb`` files in specified directories.\n\nThe directory may also optionally contain:\n\n* A ``server_lifecyle.py`` module to provide lifecycle callbacks for the\n application and sessions.\n\n* A ``static`` subdirectory containing app-specific static resources to\n serve.\n\n* A ``theme.yaml`` file containing a Bokeh theme to automatically apply to\n all new documents.\n\n* A ``templates`` subdirectory containing templates for app display\n\nA full directory layout might look like:\n\n.. code-block:: none\n\n myapp\n |\n +---main.py\n +---server_lifecycle.py\n +---static\n +---theme.yaml\n +---templates\n +---index.html\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\nfrom os.path import basename, dirname, exists, join\n\n# External imports\nfrom jinja2 import Environment, FileSystemLoader\n\n# Bokeh imports\nfrom .handler import Handler\nfrom .script import ScriptHandler\nfrom .server_lifecycle import ServerLifecycleHandler\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'DirectoryHandler',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\nclass DirectoryHandler(Handler):\n ''' Load an application directory which modifies a Document.\n\n '''\n\n def __init__(self, *args, **kwargs):\n '''\n Keywords:\n filename (str) : a path to an application directory with either \"main.py\" or \"main.ipynb\"\n\n argv (list[str], optional) : a list of string arguments to make available as sys.argv to main.py\n '''\n super(DirectoryHandler, self).__init__(*args, **kwargs)\n\n if 'filename' not in kwargs:\n raise ValueError('Must pass a filename to DirectoryHandler')\n src_path = kwargs['filename']\n argv = kwargs.get('argv', [])\n\n main_py = join(src_path, 'main.py')\n main_ipy = join(src_path, 'main.ipynb')\n if exists(main_py) and exists(main_ipy):\n log.warning(\"Found both 'main.py' and 'main.ipynb' in %s, using 'main.py'\" % (src_path))\n main = main_py\n elif exists(main_py):\n main = main_py\n elif exists(main_ipy):\n main = main_ipy\n else:\n raise ValueError(\"No 'main.py' or 'main.ipynb' in %s\" % (src_path))\n self._path = src_path\n self._main = main\n self._main_handler = ScriptHandler(filename=self._main, argv=argv)\n\n 
lifecycle = join(src_path, 'server_lifecycle.py')\n if exists(lifecycle):\n self._lifecycle = lifecycle\n self._lifecycle_handler = ServerLifecycleHandler(filename=self._lifecycle, argv=argv)\n else:\n self._lifecycle = None\n self._lifecycle_handler = Handler() # no-op handler\n\n self._theme = None\n themeyaml = join(src_path, 'theme.yaml')\n if exists(themeyaml):\n from bokeh.themes import Theme\n self._theme = Theme(filename=themeyaml)\n\n appstatic = join(src_path, 'static')\n if exists(appstatic):\n self._static = appstatic\n\n self._template = None\n appindex = join(src_path, 'templates', 'index.html')\n if exists(appindex):\n env = Environment(loader=FileSystemLoader(dirname(appindex)))\n self._template = env.get_template('index.html')\n\n # Properties --------------------------------------------------------------\n\n @property\n def error(self):\n ''' If the handler fails, may contain a related error message.\n\n '''\n return self._main_handler.error or self._lifecycle_handler.error\n\n @property\n def error_detail(self):\n ''' If the handler fails, may contain a traceback or other details.\n\n '''\n return self._main_handler.error_detail or self._lifecycle_handler.error_detail\n\n @property\n def failed(self):\n ''' ``True`` if the handler failed to modify the doc\n\n '''\n return self._main_handler.failed or self._lifecycle_handler.failed\n\n @property\n def safe_to_fork(self):\n ''' Whether it is still safe for the Bokeh server to fork new workers.\n\n ``False`` if the configured code (script, notebook, etc.) has already\n been run.\n\n '''\n return self._main_handler.safe_to_fork\n\n # Public methods ----------------------------------------------------------\n\n def modify_document(self, doc):\n ''' Execute the configured ``main.py`` or ``main.ipynb`` to modify the\n document.\n\n This method will also search the app directory for any theme or\n template files, and automatically configure the document with them\n if they are found.\n\n '''\n if self.failed:\n return\n # Note: we do NOT copy self._theme, which assumes the Theme\n # class is immutable (has no setters)\n if self._theme is not None:\n doc.theme = self._theme\n\n if self._template is not None:\n doc.template = self._template\n\n # This internal handler should never add a template\n self._main_handler.modify_document(doc)\n\n def on_server_loaded(self, server_context):\n ''' Execute `on_server_unloaded`` from ``server_lifecycle.py`` (if\n it is defined) when the server is first started.\n\n Args:\n server_context (ServerContext) :\n\n '''\n return self._lifecycle_handler.on_server_loaded(server_context)\n\n def on_server_unloaded(self, server_context):\n ''' Execute ``on_server_unloaded`` from ``server_lifecycle.py`` (if\n it is defined) when the server cleanly exits. (Before stopping the\n server's ``IOLoop``.)\n\n Args:\n server_context (ServerContext) :\n\n .. 
warning::\n In practice this code may not run, since servers are often killed\n by a signal.\n\n\n '''\n return self._lifecycle_handler.on_server_unloaded(server_context)\n\n def on_session_created(self, session_context):\n ''' Execute ``on_session_created`` from ``server_lifecycle.py`` (if\n it is defined) when a new session is created.\n\n Args:\n session_context (SessionContext) :\n\n '''\n return self._lifecycle_handler.on_session_created(session_context)\n\n def on_session_destroyed(self, session_context):\n ''' Execute ``on_session_destroyed`` from ``server_lifecycle.py`` (if\n it is defined) when a session is destroyed.\n\n Args:\n session_context (SessionContext) :\n\n '''\n return self._lifecycle_handler.on_session_destroyed(session_context)\n\n def url_path(self):\n ''' The last path component for the basename of the path to the\n configured directory.\n\n '''\n if self.failed:\n return None\n else:\n # TODO should fix invalid URL characters\n return '/' + basename(self._path)\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n",
"path": "bokeh/application/handlers/directory.py"
}
] | [
{
"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' Provide a Bokeh Application Handler to build up documents by running\nthe code from ``main.py`` or ``main.ipynb`` files in specified directories.\n\nThe directory may also optionally contain:\n\n* A ``server_lifecyle.py`` module to provide lifecycle callbacks for the\n application and sessions.\n\n* A ``static`` subdirectory containing app-specific static resources to\n serve.\n\n* A ``theme.yaml`` file containing a Bokeh theme to automatically apply to\n all new documents.\n\n* A ``templates`` subdirectory containing templates for app display\n\nA full directory layout might look like:\n\n.. code-block:: none\n\n myapp\n |\n +---main.py\n +---server_lifecycle.py\n +---static\n +---theme.yaml\n +---templates\n +---index.html\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\nfrom os.path import basename, dirname, exists, join\n\n# External imports\nfrom jinja2 import Environment, FileSystemLoader\n\n# Bokeh imports\nfrom .handler import Handler\nfrom .script import ScriptHandler\nfrom .server_lifecycle import ServerLifecycleHandler\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'DirectoryHandler',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\nclass DirectoryHandler(Handler):\n ''' Load an application directory which modifies a Document.\n\n '''\n\n def __init__(self, *args, **kwargs):\n '''\n Keywords:\n filename (str) : a path to an application directory with either \"main.py\" or \"main.ipynb\"\n\n argv (list[str], optional) : a list of string arguments to make available as sys.argv to main.py\n '''\n super(DirectoryHandler, self).__init__(*args, **kwargs)\n\n if 'filename' not in kwargs:\n raise ValueError('Must pass a filename to DirectoryHandler')\n src_path = kwargs['filename']\n argv = kwargs.get('argv', [])\n\n main_py = join(src_path, 'main.py')\n main_ipy = join(src_path, 'main.ipynb')\n if exists(main_py) and exists(main_ipy):\n log.warning(\"Found both 'main.py' and 'main.ipynb' in %s, using 'main.py'\" % (src_path))\n main = main_py\n elif exists(main_py):\n main = main_py\n elif exists(main_ipy):\n main = main_ipy\n else:\n raise ValueError(\"No 'main.py' or 'main.ipynb' in %s\" % (src_path))\n self._path = src_path\n self._main = main\n self._main_handler = ScriptHandler(filename=self._main, argv=argv)\n\n 
lifecycle = join(src_path, 'server_lifecycle.py')\n if exists(lifecycle):\n self._lifecycle = lifecycle\n self._lifecycle_handler = ServerLifecycleHandler(filename=self._lifecycle, argv=argv)\n else:\n self._lifecycle = None\n self._lifecycle_handler = Handler() # no-op handler\n\n self._theme = None\n themeyaml = join(src_path, 'theme.yaml')\n if exists(themeyaml):\n from bokeh.themes import Theme\n self._theme = Theme(filename=themeyaml)\n\n appstatic = join(src_path, 'static')\n if exists(appstatic):\n self._static = appstatic\n\n self._template = None\n appindex = join(src_path, 'templates', 'index.html')\n if exists(appindex):\n env = Environment(loader=FileSystemLoader(dirname(appindex)))\n self._template = env.get_template('index.html')\n\n # Properties --------------------------------------------------------------\n\n @property\n def error(self):\n ''' If the handler fails, may contain a related error message.\n\n '''\n return self._main_handler.error or self._lifecycle_handler.error\n\n @property\n def error_detail(self):\n ''' If the handler fails, may contain a traceback or other details.\n\n '''\n return self._main_handler.error_detail or self._lifecycle_handler.error_detail\n\n @property\n def failed(self):\n ''' ``True`` if the handler failed to modify the doc\n\n '''\n return self._main_handler.failed or self._lifecycle_handler.failed\n\n @property\n def safe_to_fork(self):\n ''' Whether it is still safe for the Bokeh server to fork new workers.\n\n ``False`` if the configured code (script, notebook, etc.) has already\n been run.\n\n '''\n return self._main_handler.safe_to_fork\n\n # Public methods ----------------------------------------------------------\n\n def modify_document(self, doc):\n ''' Execute the configured ``main.py`` or ``main.ipynb`` to modify the\n document.\n\n This method will also search the app directory for any theme or\n template files, and automatically configure the document with them\n if they are found.\n\n '''\n if self._lifecycle_handler.failed:\n return\n # Note: we do NOT copy self._theme, which assumes the Theme\n # class is immutable (has no setters)\n if self._theme is not None:\n doc.theme = self._theme\n\n if self._template is not None:\n doc.template = self._template\n\n # This internal handler should never add a template\n self._main_handler.modify_document(doc)\n\n def on_server_loaded(self, server_context):\n ''' Execute `on_server_unloaded`` from ``server_lifecycle.py`` (if\n it is defined) when the server is first started.\n\n Args:\n server_context (ServerContext) :\n\n '''\n return self._lifecycle_handler.on_server_loaded(server_context)\n\n def on_server_unloaded(self, server_context):\n ''' Execute ``on_server_unloaded`` from ``server_lifecycle.py`` (if\n it is defined) when the server cleanly exits. (Before stopping the\n server's ``IOLoop``.)\n\n Args:\n server_context (ServerContext) :\n\n .. 
warning::\n In practice this code may not run, since servers are often killed\n by a signal.\n\n\n '''\n return self._lifecycle_handler.on_server_unloaded(server_context)\n\n def on_session_created(self, session_context):\n ''' Execute ``on_session_created`` from ``server_lifecycle.py`` (if\n it is defined) when a new session is created.\n\n Args:\n session_context (SessionContext) :\n\n '''\n return self._lifecycle_handler.on_session_created(session_context)\n\n def on_session_destroyed(self, session_context):\n ''' Execute ``on_session_destroyed`` from ``server_lifecycle.py`` (if\n it is defined) when a session is destroyed.\n\n Args:\n session_context (SessionContext) :\n\n '''\n return self._lifecycle_handler.on_session_destroyed(session_context)\n\n def url_path(self):\n ''' The last path component for the basename of the path to the\n configured directory.\n\n '''\n if self.failed:\n return None\n else:\n # TODO should fix invalid URL characters\n return '/' + basename(self._path)\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n",
"path": "bokeh/application/handlers/directory.py"
}
] | diff --git a/bokeh/application/handlers/directory.py b/bokeh/application/handlers/directory.py
index 58c050aedfc..5f79257ab06 100644
--- a/bokeh/application/handlers/directory.py
+++ b/bokeh/application/handlers/directory.py
@@ -176,7 +176,7 @@ def modify_document(self, doc):
if they are found.
'''
- if self.failed:
+ if self._lifecycle_handler.failed:
return
# Note: we do NOT copy self._theme, which assumes the Theme
# class is immutable (has no setters)
|
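
The fix above makes `modify_document` bail out only when the lifecycle handler failed, not when either handler failed: `self.failed` also reflects the most recent run of `main.py`, so a single failed session (like the late-websocket one in the report) previously latched the app into returning empty documents until restart. The snippet below is a tiny illustration of the difference between the two checks, using toy objects rather than Bokeh's classes.

```python
# Toy illustration (not Bokeh's classes): the combined check latches once a
# single run of main.py has failed, while the lifecycle-only check lets the
# next session try to run main.py again.
class Flag:
    failed = False

main_handler, lifecycle_handler = Flag(), Flag()

def old_check():                      # behaviour before the patch
    return main_handler.failed or lifecycle_handler.failed

def new_check():                      # behaviour after the patch
    return lifecycle_handler.failed

main_handler.failed = True            # one session's run of main.py raised
assert old_check() is True            # old: skip modify_document from now on
assert new_check() is False           # new: re-run main.py for the next session
```
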
facebookresearch__ParlAI-3351 | BERT classifier doesn't work under distributed_train
The default tokenization is `re`; I think it's building the dictionary along the way...
**Logs**
Please paste the command line output:
```
ValueError: Dictionaries should be pre-built before distributed train.
ValueError: Dictionaries should be pre-built before distributed train.
```
| [
{
"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\nfrom parlai.core.dict import DictionaryAgent\nfrom parlai.zoo.bert.build import download\nfrom parlai.utils.misc import warn_once\n\ntry:\n from pytorch_pretrained_bert import BertTokenizer\nexcept ImportError:\n raise ImportError(\n 'BERT rankers needs pytorch-pretrained-BERT installed. \\n '\n 'pip install pytorch-pretrained-bert'\n )\nfrom .helpers import VOCAB_PATH\n\nimport os\n\n\nclass BertDictionaryAgent(DictionaryAgent):\n \"\"\"\n Allow to use the Torch Agent with the wordpiece dictionary of Hugging Face.\n \"\"\"\n\n def __init__(self, opt):\n super().__init__(opt)\n # initialize from vocab path\n warn_once(\n 'WARNING: BERT uses a Hugging Face tokenizer; ParlAI dictionary args are ignored'\n )\n download(opt['datapath'])\n vocab_path = os.path.join(opt['datapath'], 'models', 'bert_models', VOCAB_PATH)\n self.tokenizer = BertTokenizer.from_pretrained(vocab_path)\n\n self.start_token = '[CLS]'\n self.end_token = '[SEP]'\n self.null_token = '[PAD]'\n self.start_idx = self.tokenizer.convert_tokens_to_ids(['[CLS]'])[\n 0\n ] # should be 101\n self.end_idx = self.tokenizer.convert_tokens_to_ids(['[SEP]'])[\n 0\n ] # should be 102\n self.pad_idx = self.tokenizer.convert_tokens_to_ids(['[PAD]'])[0] # should be 0\n # set tok2ind for special tokens\n self.tok2ind[self.start_token] = self.start_idx\n self.tok2ind[self.end_token] = self.end_idx\n self.tok2ind[self.null_token] = self.pad_idx\n # set ind2tok for special tokens\n self.ind2tok[self.start_idx] = self.start_token\n self.ind2tok[self.end_idx] = self.end_token\n self.ind2tok[self.pad_idx] = self.null_token\n\n def txt2vec(self, text, vec_type=list):\n tokens = self.tokenizer.tokenize(text)\n tokens_id = self.tokenizer.convert_tokens_to_ids(tokens)\n return tokens_id\n\n def vec2txt(self, vec):\n if not isinstance(vec, list):\n # assume tensor\n idxs = [idx.item() for idx in vec.cpu()]\n else:\n idxs = vec\n toks = self.tokenizer.convert_ids_to_tokens(idxs)\n return ' '.join(toks)\n\n def act(self):\n return {}\n",
"path": "parlai/agents/bert_ranker/bert_dictionary.py"
}
] | [
{
"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\nfrom parlai.core.dict import DictionaryAgent\nfrom parlai.zoo.bert.build import download\nfrom parlai.utils.misc import warn_once\n\ntry:\n from pytorch_pretrained_bert import BertTokenizer\nexcept ImportError:\n raise ImportError(\n 'BERT rankers needs pytorch-pretrained-BERT installed. \\n '\n 'pip install pytorch-pretrained-bert'\n )\nfrom .helpers import VOCAB_PATH\n\nimport os\n\n\nclass BertDictionaryAgent(DictionaryAgent):\n \"\"\"\n Allow to use the Torch Agent with the wordpiece dictionary of Hugging Face.\n \"\"\"\n\n def is_prebuit(self):\n return True\n\n def __init__(self, opt):\n super().__init__(opt)\n # initialize from vocab path\n warn_once(\n 'WARNING: BERT uses a Hugging Face tokenizer; ParlAI dictionary args are ignored'\n )\n download(opt['datapath'])\n vocab_path = os.path.join(opt['datapath'], 'models', 'bert_models', VOCAB_PATH)\n self.tokenizer = BertTokenizer.from_pretrained(vocab_path)\n\n self.start_token = '[CLS]'\n self.end_token = '[SEP]'\n self.null_token = '[PAD]'\n self.start_idx = self.tokenizer.convert_tokens_to_ids(['[CLS]'])[\n 0\n ] # should be 101\n self.end_idx = self.tokenizer.convert_tokens_to_ids(['[SEP]'])[\n 0\n ] # should be 102\n self.pad_idx = self.tokenizer.convert_tokens_to_ids(['[PAD]'])[0] # should be 0\n # set tok2ind for special tokens\n self.tok2ind[self.start_token] = self.start_idx\n self.tok2ind[self.end_token] = self.end_idx\n self.tok2ind[self.null_token] = self.pad_idx\n # set ind2tok for special tokens\n self.ind2tok[self.start_idx] = self.start_token\n self.ind2tok[self.end_idx] = self.end_token\n self.ind2tok[self.pad_idx] = self.null_token\n\n def txt2vec(self, text, vec_type=list):\n tokens = self.tokenizer.tokenize(text)\n tokens_id = self.tokenizer.convert_tokens_to_ids(tokens)\n return tokens_id\n\n def vec2txt(self, vec):\n if not isinstance(vec, list):\n # assume tensor\n idxs = [idx.item() for idx in vec.cpu()]\n else:\n idxs = vec\n toks = self.tokenizer.convert_ids_to_tokens(idxs)\n return ' '.join(toks)\n\n def act(self):\n return {}\n",
"path": "parlai/agents/bert_ranker/bert_dictionary.py"
}
] | diff --git a/parlai/agents/bert_ranker/bert_dictionary.py b/parlai/agents/bert_ranker/bert_dictionary.py
index 1711024073f..268a12fd490 100644
--- a/parlai/agents/bert_ranker/bert_dictionary.py
+++ b/parlai/agents/bert_ranker/bert_dictionary.py
@@ -24,6 +24,9 @@ class BertDictionaryAgent(DictionaryAgent):
Allow to use the Torch Agent with the wordpiece dictionary of Hugging Face.
"""
+ def is_prebuit(self):
+ return True
+
def __init__(self, opt):
super().__init__(opt)
# initialize from vocab path
|
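
The patch above (the added method is spelled `is_prebuit` in the diff) makes the BERT dictionary report itself as pre-built, since its wordpiece vocabulary is downloaded rather than built from training data, which is exactly what the distributed-train check in the error message refuses to do on the fly. Below is a hedged sketch of that kind of guard; the class layout and function name are illustrative rather than ParlAI's actual internals, and the sketch uses the intended spelling `is_prebuilt`.

```python
# Sketch of the guard this fix targets (names are illustrative, not ParlAI's
# actual code): distributed training refuses to build a dictionary on the fly,
# so a dictionary whose vocabulary ships ready-made should advertise itself
# as pre-built.
class DictionaryAgent:
    def is_prebuilt(self):
        # A plain dictionary is built from the training data, which must
        # happen before workers are forked for distributed training.
        return False

class BertDictionaryAgent(DictionaryAgent):
    def is_prebuilt(self):
        # The Hugging Face wordpiece vocab is downloaded, never built.
        return True

def check_ok_for_distributed_train(dictionary):
    if not dictionary.is_prebuilt():
        raise ValueError('Dictionaries should be pre-built before distributed train.')

check_ok_for_distributed_train(BertDictionaryAgent())   # passes after the fix
try:
    check_ok_for_distributed_train(DictionaryAgent())
except ValueError as exc:
    print(exc)  # the error from the issue report
```
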
pre-commit__pre-commit-1259 | [FR][bug?] pre-commit hook repo self-test
Given a repo with `.pre-commit-hooks.yaml` defined (like https://github.com/ansible/ansible-lint), I want to integrate testing of the hooks declared in it.
I can do `pre-commit try-repo https://github.com/ansible/ansible-lint.git`, but this hits the remote, which I want to avoid. I know that Git itself can work with local fs paths (like `/path/to/.git`) perfectly fine.
So I tried:
<details>
<summary>
<code>$ <kbd>pre-commit try-repo .git -vvv</kbd></code>
</summary>
```console
➜ pre-commit try-repo .git -vvv
[WARNING] Creating temporary repo with uncommitted changes...
An unexpected error has occurred: CalledProcessError: Command: ('/usr/bin/git', 'add', '-u')
Return code: 128
Expected return code: 0
Output: (none)
Errors:
fatal: this operation must be run in a work tree
Check the log at ~/.cache/pre-commit/pre-commit.log
```
</details>
The log doesn't reveal anything more than the fact that the Git command failed.
<details>
<summary>
<code>$ <kbd>cat ~/.cache/pre-commit/pre-commit.log</kbd></code>
</summary>
```console
An unexpected error has occurred: CalledProcessError: Command: ('/usr/bin/git', 'add', '-u')
Return code: 128
Expected return code: 0
Output: (none)
Errors:
fatal: this operation must be run in a work tree
Traceback (most recent call last):
File "~/.pyenv/versions/3.7.1/lib/python3.7/site-packages/pre_commit/error_handler.py", line 46, in error_handler
yield
File "~/.pyenv/versions/3.7.1/lib/python3.7/site-packages/pre_commit/main.py", line 296, in main
return try_repo(args)
File "~/.pyenv/versions/3.7.1/lib/python3.7/site-packages/pre_commit/commands/try_repo.py", line 55, in try_repo
repo, ref = _repo_ref(tempdir, args.repo, args.ref)
File "~/.pyenv/versions/3.7.1/lib/python3.7/site-packages/pre_commit/commands/try_repo.py", line 45, in _repo_ref
cmd_output('git', 'add', '-u', cwd=repo, env=env)
File "~/.pyenv/versions/3.7.1/lib/python3.7/site-packages/pre_commit/util.py", line 153, in cmd_output
returncode, cmd, retcode, output=(stdout, stderr),
pre_commit.util.CalledProcessError: Command: ('/usr/bin/git', 'add', '-u')
Return code: 128
Expected return code: 0
Output: (none)
Errors:
fatal: this operation must be run in a work tree
```
</details>
It must be pretty easy to fix.
| [
{
"content": "from __future__ import unicode_literals\n\nimport logging\nimport os.path\nimport sys\n\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cmd_output_b\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef zsplit(s):\n s = s.strip('\\0')\n if s:\n return s.split('\\0')\n else:\n return []\n\n\ndef no_git_env(_env=None):\n # Too many bugs dealing with environment variables and GIT:\n # https://github.com/pre-commit/pre-commit/issues/300\n # In git 2.6.3 (maybe others), git exports GIT_WORK_TREE while running\n # pre-commit hooks\n # In git 1.9.1 (maybe others), git exports GIT_DIR and GIT_INDEX_FILE\n # while running pre-commit hooks in submodules.\n # GIT_DIR: Causes git clone to clone wrong thing\n # GIT_INDEX_FILE: Causes 'error invalid object ...' during commit\n _env = _env if _env is not None else os.environ\n return {\n k: v for k, v in _env.items()\n if not k.startswith('GIT_') or\n k in {'GIT_EXEC_PATH', 'GIT_SSH', 'GIT_SSH_COMMAND', 'GIT_SSL_CAINFO'}\n }\n\n\ndef get_root():\n return cmd_output('git', 'rev-parse', '--show-toplevel')[1].strip()\n\n\ndef get_git_dir(git_root='.'):\n opts = ('--git-common-dir', '--git-dir')\n _, out, _ = cmd_output('git', 'rev-parse', *opts, cwd=git_root)\n for line, opt in zip(out.splitlines(), opts):\n if line != opt: # pragma: no branch (git < 2.5)\n return os.path.normpath(os.path.join(git_root, line))\n else:\n raise AssertionError('unreachable: no git dir')\n\n\ndef get_remote_url(git_root):\n _, out, _ = cmd_output('git', 'config', 'remote.origin.url', cwd=git_root)\n return out.strip()\n\n\ndef is_in_merge_conflict():\n git_dir = get_git_dir('.')\n return (\n os.path.exists(os.path.join(git_dir, 'MERGE_MSG')) and\n os.path.exists(os.path.join(git_dir, 'MERGE_HEAD'))\n )\n\n\ndef parse_merge_msg_for_conflicts(merge_msg):\n # Conflicted files start with tabs\n return [\n line.lstrip(b'#').strip().decode('UTF-8')\n for line in merge_msg.splitlines()\n # '#\\t' for git 2.4.1\n if line.startswith((b'\\t', b'#\\t'))\n ]\n\n\ndef get_conflicted_files():\n logger.info('Checking merge-conflict files only.')\n # Need to get the conflicted files from the MERGE_MSG because they could\n # have resolved the conflict by choosing one side or the other\n with open(os.path.join(get_git_dir('.'), 'MERGE_MSG'), 'rb') as f:\n merge_msg = f.read()\n merge_conflict_filenames = parse_merge_msg_for_conflicts(merge_msg)\n\n # This will get the rest of the changes made after the merge.\n # If they resolved the merge conflict by choosing a mesh of both sides\n # this will also include the conflicted files\n tree_hash = cmd_output('git', 'write-tree')[1].strip()\n merge_diff_filenames = zsplit(\n cmd_output(\n 'git', 'diff', '--name-only', '--no-ext-diff', '-z',\n '-m', tree_hash, 'HEAD', 'MERGE_HEAD',\n )[1],\n )\n return set(merge_conflict_filenames) | set(merge_diff_filenames)\n\n\ndef get_staged_files(cwd=None):\n return zsplit(\n cmd_output(\n 'git', 'diff', '--staged', '--name-only', '--no-ext-diff', '-z',\n # Everything except for D\n '--diff-filter=ACMRTUXB',\n cwd=cwd,\n )[1],\n )\n\n\ndef intent_to_add_files():\n _, stdout, _ = cmd_output('git', 'status', '--porcelain', '-z')\n parts = list(reversed(zsplit(stdout)))\n intent_to_add = []\n while parts:\n line = parts.pop()\n status, filename = line[:3], line[3:]\n if status[0] in {'C', 'R'}: # renames / moves have an additional arg\n parts.pop()\n if status[1] == 'A':\n intent_to_add.append(filename)\n return intent_to_add\n\n\ndef get_all_files():\n return zsplit(cmd_output('git', 
'ls-files', '-z')[1])\n\n\ndef get_changed_files(new, old):\n return zsplit(\n cmd_output(\n 'git', 'diff', '--name-only', '--no-ext-diff', '-z',\n '{}...{}'.format(old, new),\n )[1],\n )\n\n\ndef head_rev(remote):\n _, out, _ = cmd_output('git', 'ls-remote', '--exit-code', remote, 'HEAD')\n return out.split()[0]\n\n\ndef has_diff(*args, **kwargs):\n repo = kwargs.pop('repo', '.')\n assert not kwargs, kwargs\n cmd = ('git', 'diff', '--quiet', '--no-ext-diff') + args\n return cmd_output_b(*cmd, cwd=repo, retcode=None)[0]\n\n\ndef has_core_hookpaths_set():\n _, out, _ = cmd_output_b('git', 'config', 'core.hooksPath', retcode=None)\n return bool(out.strip())\n\n\ndef init_repo(path, remote):\n if os.path.isdir(remote):\n remote = os.path.abspath(remote)\n\n env = no_git_env()\n cmd_output_b('git', 'init', path, env=env)\n cmd_output_b('git', 'remote', 'add', 'origin', remote, cwd=path, env=env)\n\n\ndef commit(repo='.'):\n env = no_git_env()\n name, email = 'pre-commit', '[email protected]'\n env['GIT_AUTHOR_NAME'] = env['GIT_COMMITTER_NAME'] = name\n env['GIT_AUTHOR_EMAIL'] = env['GIT_COMMITTER_EMAIL'] = email\n cmd = ('git', 'commit', '--no-edit', '--no-gpg-sign', '-n', '-minit')\n cmd_output_b(*cmd, cwd=repo, env=env)\n\n\ndef git_path(name, repo='.'):\n _, out, _ = cmd_output('git', 'rev-parse', '--git-path', name, cwd=repo)\n return os.path.join(repo, out.strip())\n\n\ndef check_for_cygwin_mismatch():\n \"\"\"See https://github.com/pre-commit/pre-commit/issues/354\"\"\"\n if sys.platform in ('cygwin', 'win32'): # pragma: no cover (windows)\n is_cygwin_python = sys.platform == 'cygwin'\n toplevel = cmd_output('git', 'rev-parse', '--show-toplevel')[1]\n is_cygwin_git = toplevel.startswith('/')\n\n if is_cygwin_python ^ is_cygwin_git:\n exe_type = {True: '(cygwin)', False: '(windows)'}\n logger.warn(\n 'pre-commit has detected a mix of cygwin python / git\\n'\n 'This combination is not supported, it is likely you will '\n 'receive an error later in the program.\\n'\n 'Make sure to use cygwin git+python while using cygwin\\n'\n 'These can be installed through the cygwin installer.\\n'\n ' - python {}\\n'\n ' - git {}\\n'.format(\n exe_type[is_cygwin_python], exe_type[is_cygwin_git],\n ),\n )\n",
"path": "pre_commit/git.py"
}
] | [
{
"content": "from __future__ import unicode_literals\n\nimport logging\nimport os.path\nimport sys\n\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cmd_output_b\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef zsplit(s):\n s = s.strip('\\0')\n if s:\n return s.split('\\0')\n else:\n return []\n\n\ndef no_git_env(_env=None):\n # Too many bugs dealing with environment variables and GIT:\n # https://github.com/pre-commit/pre-commit/issues/300\n # In git 2.6.3 (maybe others), git exports GIT_WORK_TREE while running\n # pre-commit hooks\n # In git 1.9.1 (maybe others), git exports GIT_DIR and GIT_INDEX_FILE\n # while running pre-commit hooks in submodules.\n # GIT_DIR: Causes git clone to clone wrong thing\n # GIT_INDEX_FILE: Causes 'error invalid object ...' during commit\n _env = _env if _env is not None else os.environ\n return {\n k: v for k, v in _env.items()\n if not k.startswith('GIT_') or\n k in {'GIT_EXEC_PATH', 'GIT_SSH', 'GIT_SSH_COMMAND', 'GIT_SSL_CAINFO'}\n }\n\n\ndef get_root():\n return cmd_output('git', 'rev-parse', '--show-toplevel')[1].strip()\n\n\ndef get_git_dir(git_root='.'):\n opts = ('--git-common-dir', '--git-dir')\n _, out, _ = cmd_output('git', 'rev-parse', *opts, cwd=git_root)\n for line, opt in zip(out.splitlines(), opts):\n if line != opt: # pragma: no branch (git < 2.5)\n return os.path.normpath(os.path.join(git_root, line))\n else:\n raise AssertionError('unreachable: no git dir')\n\n\ndef get_remote_url(git_root):\n _, out, _ = cmd_output('git', 'config', 'remote.origin.url', cwd=git_root)\n return out.strip()\n\n\ndef is_in_merge_conflict():\n git_dir = get_git_dir('.')\n return (\n os.path.exists(os.path.join(git_dir, 'MERGE_MSG')) and\n os.path.exists(os.path.join(git_dir, 'MERGE_HEAD'))\n )\n\n\ndef parse_merge_msg_for_conflicts(merge_msg):\n # Conflicted files start with tabs\n return [\n line.lstrip(b'#').strip().decode('UTF-8')\n for line in merge_msg.splitlines()\n # '#\\t' for git 2.4.1\n if line.startswith((b'\\t', b'#\\t'))\n ]\n\n\ndef get_conflicted_files():\n logger.info('Checking merge-conflict files only.')\n # Need to get the conflicted files from the MERGE_MSG because they could\n # have resolved the conflict by choosing one side or the other\n with open(os.path.join(get_git_dir('.'), 'MERGE_MSG'), 'rb') as f:\n merge_msg = f.read()\n merge_conflict_filenames = parse_merge_msg_for_conflicts(merge_msg)\n\n # This will get the rest of the changes made after the merge.\n # If they resolved the merge conflict by choosing a mesh of both sides\n # this will also include the conflicted files\n tree_hash = cmd_output('git', 'write-tree')[1].strip()\n merge_diff_filenames = zsplit(\n cmd_output(\n 'git', 'diff', '--name-only', '--no-ext-diff', '-z',\n '-m', tree_hash, 'HEAD', 'MERGE_HEAD',\n )[1],\n )\n return set(merge_conflict_filenames) | set(merge_diff_filenames)\n\n\ndef get_staged_files(cwd=None):\n return zsplit(\n cmd_output(\n 'git', 'diff', '--staged', '--name-only', '--no-ext-diff', '-z',\n # Everything except for D\n '--diff-filter=ACMRTUXB',\n cwd=cwd,\n )[1],\n )\n\n\ndef intent_to_add_files():\n _, stdout, _ = cmd_output('git', 'status', '--porcelain', '-z')\n parts = list(reversed(zsplit(stdout)))\n intent_to_add = []\n while parts:\n line = parts.pop()\n status, filename = line[:3], line[3:]\n if status[0] in {'C', 'R'}: # renames / moves have an additional arg\n parts.pop()\n if status[1] == 'A':\n intent_to_add.append(filename)\n return intent_to_add\n\n\ndef get_all_files():\n return zsplit(cmd_output('git', 
'ls-files', '-z')[1])\n\n\ndef get_changed_files(new, old):\n return zsplit(\n cmd_output(\n 'git', 'diff', '--name-only', '--no-ext-diff', '-z',\n '{}...{}'.format(old, new),\n )[1],\n )\n\n\ndef head_rev(remote):\n _, out, _ = cmd_output('git', 'ls-remote', '--exit-code', remote, 'HEAD')\n return out.split()[0]\n\n\ndef has_diff(*args, **kwargs):\n repo = kwargs.pop('repo', '.')\n assert not kwargs, kwargs\n cmd = ('git', 'diff', '--quiet', '--no-ext-diff') + args\n return cmd_output_b(*cmd, cwd=repo, retcode=None)[0] == 1\n\n\ndef has_core_hookpaths_set():\n _, out, _ = cmd_output_b('git', 'config', 'core.hooksPath', retcode=None)\n return bool(out.strip())\n\n\ndef init_repo(path, remote):\n if os.path.isdir(remote):\n remote = os.path.abspath(remote)\n\n env = no_git_env()\n cmd_output_b('git', 'init', path, env=env)\n cmd_output_b('git', 'remote', 'add', 'origin', remote, cwd=path, env=env)\n\n\ndef commit(repo='.'):\n env = no_git_env()\n name, email = 'pre-commit', '[email protected]'\n env['GIT_AUTHOR_NAME'] = env['GIT_COMMITTER_NAME'] = name\n env['GIT_AUTHOR_EMAIL'] = env['GIT_COMMITTER_EMAIL'] = email\n cmd = ('git', 'commit', '--no-edit', '--no-gpg-sign', '-n', '-minit')\n cmd_output_b(*cmd, cwd=repo, env=env)\n\n\ndef git_path(name, repo='.'):\n _, out, _ = cmd_output('git', 'rev-parse', '--git-path', name, cwd=repo)\n return os.path.join(repo, out.strip())\n\n\ndef check_for_cygwin_mismatch():\n \"\"\"See https://github.com/pre-commit/pre-commit/issues/354\"\"\"\n if sys.platform in ('cygwin', 'win32'): # pragma: no cover (windows)\n is_cygwin_python = sys.platform == 'cygwin'\n toplevel = cmd_output('git', 'rev-parse', '--show-toplevel')[1]\n is_cygwin_git = toplevel.startswith('/')\n\n if is_cygwin_python ^ is_cygwin_git:\n exe_type = {True: '(cygwin)', False: '(windows)'}\n logger.warn(\n 'pre-commit has detected a mix of cygwin python / git\\n'\n 'This combination is not supported, it is likely you will '\n 'receive an error later in the program.\\n'\n 'Make sure to use cygwin git+python while using cygwin\\n'\n 'These can be installed through the cygwin installer.\\n'\n ' - python {}\\n'\n ' - git {}\\n'.format(\n exe_type[is_cygwin_python], exe_type[is_cygwin_git],\n ),\n )\n",
"path": "pre_commit/git.py"
}
] | diff --git a/pre_commit/git.py b/pre_commit/git.py
index c8faf60f7..136cefef5 100644
--- a/pre_commit/git.py
+++ b/pre_commit/git.py
@@ -141,7 +141,7 @@ def has_diff(*args, **kwargs):
repo = kwargs.pop('repo', '.')
assert not kwargs, kwargs
cmd = ('git', 'diff', '--quiet', '--no-ext-diff') + args
- return cmd_output_b(*cmd, cwd=repo, retcode=None)[0]
+ return cmd_output_b(*cmd, cwd=repo, retcode=None)[0] == 1
def has_core_hookpaths_set():
diff --git a/tests/commands/try_repo_test.py b/tests/commands/try_repo_test.py
index 536eb9bc4..1849c70a5 100644
--- a/tests/commands/try_repo_test.py
+++ b/tests/commands/try_repo_test.py
@@ -98,6 +98,15 @@ def test_try_repo_relative_path(cap_out, tempdir_factory):
assert not try_repo(try_repo_opts(relative_repo, hook='bash_hook'))
+def test_try_repo_bare_repo(cap_out, tempdir_factory):
+ repo = make_repo(tempdir_factory, 'modified_file_returns_zero_repo')
+ with cwd(git_dir(tempdir_factory)):
+ _add_test_file()
+ bare_repo = os.path.join(repo, '.git')
+ # previously crashed attempting modification changes
+ assert not try_repo(try_repo_opts(bare_repo, hook='bash_hook'))
+
+
def test_try_repo_specific_revision(cap_out, tempdir_factory):
repo = make_repo(tempdir_factory, 'script_hooks_repo')
ref = git.head_rev(repo)
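
The `== 1` comparison above is the substance of the fix: `git diff --quiet` exits with 0 when there are no changes, 1 when there are changes, and a higher code on errors such as running against a bare repository, so returning the raw exit code made errors look like unstaged changes. A minimal standalone sketch of that semantics (using `subprocess` directly instead of pre-commit's `cmd_output_b` helper, so the exact error codes are an assumption):

```
import subprocess


def has_diff(*args, repo='.'):
    # `git diff --quiet` exits 0 (no changes), 1 (changes) and >1 on
    # errors, e.g. when `repo` points at a bare repository.
    cmd = ('git', 'diff', '--quiet', '--no-ext-diff') + args
    ret = subprocess.call(cmd, cwd=repo)
    # Only exit code 1 means "there is a diff"; other non-zero codes are
    # errors and must not be reported as unstaged changes.
    return ret == 1
```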
|
urllib3__urllib3-1782 | iterating a closed response improperly produces data
Consider the following script:
```
import urllib3
http = urllib3.PoolManager()
resp = http.request("GET", "https://www.python.org")
resp.close()
for d in resp:
    print(repr(d))
```
With urllib3 1.25.7, this program prints `b''`. With urllib3 1.24.3, one sees:
```
Traceback (most recent call last):
File "example.py", line 6, in <module>
for d in resp:
ValueError: I/O operation on closed file.
```
The latter is in line with what I expect.
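
The regression comes down to how `HTTPResponse.__iter__` seeds its line buffer (see the diff further down): with `buffer = [b""]` the trailing `if buffer:` flush is truthy even when `stream()` yields nothing for a closed response, so a spurious `b''` is emitted. A simplified stand-in for that logic, not urllib3's actual method:

```
def iter_lines(chunks, buffer):
    # Simplified model of the trailing flush in HTTPResponse.__iter__;
    # the real line-splitting logic is omitted.
    for chunk in chunks:
        buffer.append(chunk)
    if buffer:
        yield b"".join(buffer)


# A closed response streams no chunks at all:
print(list(iter_lines([], [b""])))  # [b''] -- the 1.25.7 behaviour
print(list(iter_lines([], [])))     # []   -- nothing left over with an empty seed
```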
| [
{
"content": "from __future__ import absolute_import\nfrom contextlib import contextmanager\nimport zlib\nimport io\nimport logging\nfrom socket import timeout as SocketTimeout\nfrom socket import error as SocketError\n\ntry:\n import brotli\nexcept ImportError:\n brotli = None\n\nfrom ._collections import HTTPHeaderDict\nfrom .exceptions import (\n BodyNotHttplibCompatible,\n ProtocolError,\n DecodeError,\n ReadTimeoutError,\n ResponseNotChunked,\n IncompleteRead,\n InvalidHeader,\n)\nfrom .packages.six import string_types as basestring, PY3\nfrom .packages.six.moves import http_client as httplib\nfrom .connection import HTTPException, BaseSSLError\nfrom .util.response import is_fp_closed, is_response_to_head\n\nlog = logging.getLogger(__name__)\n\n\nclass DeflateDecoder(object):\n def __init__(self):\n self._first_try = True\n self._data = b\"\"\n self._obj = zlib.decompressobj()\n\n def __getattr__(self, name):\n return getattr(self._obj, name)\n\n def decompress(self, data):\n if not data:\n return data\n\n if not self._first_try:\n return self._obj.decompress(data)\n\n self._data += data\n try:\n decompressed = self._obj.decompress(data)\n if decompressed:\n self._first_try = False\n self._data = None\n return decompressed\n except zlib.error:\n self._first_try = False\n self._obj = zlib.decompressobj(-zlib.MAX_WBITS)\n try:\n return self.decompress(self._data)\n finally:\n self._data = None\n\n\nclass GzipDecoderState(object):\n\n FIRST_MEMBER = 0\n OTHER_MEMBERS = 1\n SWALLOW_DATA = 2\n\n\nclass GzipDecoder(object):\n def __init__(self):\n self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS)\n self._state = GzipDecoderState.FIRST_MEMBER\n\n def __getattr__(self, name):\n return getattr(self._obj, name)\n\n def decompress(self, data):\n ret = bytearray()\n if self._state == GzipDecoderState.SWALLOW_DATA or not data:\n return bytes(ret)\n while True:\n try:\n ret += self._obj.decompress(data)\n except zlib.error:\n previous_state = self._state\n # Ignore data after the first error\n self._state = GzipDecoderState.SWALLOW_DATA\n if previous_state == GzipDecoderState.OTHER_MEMBERS:\n # Allow trailing garbage acceptable in other gzip clients\n return bytes(ret)\n raise\n data = self._obj.unused_data\n if not data:\n return bytes(ret)\n self._state = GzipDecoderState.OTHER_MEMBERS\n self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS)\n\n\nif brotli is not None:\n\n class BrotliDecoder(object):\n # Supports both 'brotlipy' and 'Brotli' packages\n # since they share an import name. 
The top branches\n # are for 'brotlipy' and bottom branches for 'Brotli'\n def __init__(self):\n self._obj = brotli.Decompressor()\n\n def decompress(self, data):\n if hasattr(self._obj, \"decompress\"):\n return self._obj.decompress(data)\n return self._obj.process(data)\n\n def flush(self):\n if hasattr(self._obj, \"flush\"):\n return self._obj.flush()\n return b\"\"\n\n\nclass MultiDecoder(object):\n \"\"\"\n From RFC7231:\n If one or more encodings have been applied to a representation, the\n sender that applied the encodings MUST generate a Content-Encoding\n header field that lists the content codings in the order in which\n they were applied.\n \"\"\"\n\n def __init__(self, modes):\n self._decoders = [_get_decoder(m.strip()) for m in modes.split(\",\")]\n\n def flush(self):\n return self._decoders[0].flush()\n\n def decompress(self, data):\n for d in reversed(self._decoders):\n data = d.decompress(data)\n return data\n\n\ndef _get_decoder(mode):\n if \",\" in mode:\n return MultiDecoder(mode)\n\n if mode == \"gzip\":\n return GzipDecoder()\n\n if brotli is not None and mode == \"br\":\n return BrotliDecoder()\n\n return DeflateDecoder()\n\n\nclass HTTPResponse(io.IOBase):\n \"\"\"\n HTTP Response container.\n\n Backwards-compatible to httplib's HTTPResponse but the response ``body`` is\n loaded and decoded on-demand when the ``data`` property is accessed. This\n class is also compatible with the Python standard library's :mod:`io`\n module, and can hence be treated as a readable object in the context of that\n framework.\n\n Extra parameters for behaviour not present in httplib.HTTPResponse:\n\n :param preload_content:\n If True, the response's body will be preloaded during construction.\n\n :param decode_content:\n If True, will attempt to decode the body based on the\n 'content-encoding' header.\n\n :param original_response:\n When this HTTPResponse wrapper is generated from an httplib.HTTPResponse\n object, it's convenient to include the original for debug purposes. It's\n otherwise unused.\n\n :param retries:\n The retries contains the last :class:`~urllib3.util.retry.Retry` that\n was used during the request.\n\n :param enforce_content_length:\n Enforce content length checking. Body returned by server must match\n value of Content-Length header, if present. 
Otherwise, raise error.\n \"\"\"\n\n CONTENT_DECODERS = [\"gzip\", \"deflate\"]\n if brotli is not None:\n CONTENT_DECODERS += [\"br\"]\n REDIRECT_STATUSES = [301, 302, 303, 307, 308]\n\n def __init__(\n self,\n body=\"\",\n headers=None,\n status=0,\n version=0,\n reason=None,\n strict=0,\n preload_content=True,\n decode_content=True,\n original_response=None,\n pool=None,\n connection=None,\n msg=None,\n retries=None,\n enforce_content_length=False,\n request_method=None,\n request_url=None,\n auto_close=True,\n ):\n\n if isinstance(headers, HTTPHeaderDict):\n self.headers = headers\n else:\n self.headers = HTTPHeaderDict(headers)\n self.status = status\n self.version = version\n self.reason = reason\n self.strict = strict\n self.decode_content = decode_content\n self.retries = retries\n self.enforce_content_length = enforce_content_length\n self.auto_close = auto_close\n\n self._decoder = None\n self._body = None\n self._fp = None\n self._original_response = original_response\n self._fp_bytes_read = 0\n self.msg = msg\n self._request_url = request_url\n\n if body and isinstance(body, (basestring, bytes)):\n self._body = body\n\n self._pool = pool\n self._connection = connection\n\n if hasattr(body, \"read\"):\n self._fp = body\n\n # Are we using the chunked-style of transfer encoding?\n self.chunked = False\n self.chunk_left = None\n tr_enc = self.headers.get(\"transfer-encoding\", \"\").lower()\n # Don't incur the penalty of creating a list and then discarding it\n encodings = (enc.strip() for enc in tr_enc.split(\",\"))\n if \"chunked\" in encodings:\n self.chunked = True\n\n # Determine length of response\n self.length_remaining = self._init_length(request_method)\n\n # If requested, preload the body.\n if preload_content and not self._body:\n self._body = self.read(decode_content=decode_content)\n\n def get_redirect_location(self):\n \"\"\"\n Should we redirect and where to?\n\n :returns: Truthy redirect location string if we got a redirect status\n code and valid location. ``None`` if redirect status and no\n location. ``False`` if not a redirect status code.\n \"\"\"\n if self.status in self.REDIRECT_STATUSES:\n return self.headers.get(\"location\")\n\n return False\n\n def release_conn(self):\n if not self._pool or not self._connection:\n return\n\n self._pool._put_conn(self._connection)\n self._connection = None\n\n @property\n def data(self):\n # For backwords-compat with earlier urllib3 0.4 and earlier.\n if self._body:\n return self._body\n\n if self._fp:\n return self.read(cache_content=True)\n\n @property\n def connection(self):\n return self._connection\n\n def isclosed(self):\n return is_fp_closed(self._fp)\n\n def tell(self):\n \"\"\"\n Obtain the number of bytes pulled over the wire so far. May differ from\n the amount of content returned by :meth:``HTTPResponse.read`` if bytes\n are encoded on the wire (e.g, compressed).\n \"\"\"\n return self._fp_bytes_read\n\n def _init_length(self, request_method):\n \"\"\"\n Set initial length value for Response content if available.\n \"\"\"\n length = self.headers.get(\"content-length\")\n\n if length is not None:\n if self.chunked:\n # This Response will fail with an IncompleteRead if it can't be\n # received as chunked. This method falls back to attempt reading\n # the response before raising an exception.\n log.warning(\n \"Received response with both Content-Length and \"\n \"Transfer-Encoding set. This is expressly forbidden \"\n \"by RFC 7230 sec 3.3.2. 
Ignoring Content-Length and \"\n \"attempting to process response as Transfer-Encoding: \"\n \"chunked.\"\n )\n return None\n\n try:\n # RFC 7230 section 3.3.2 specifies multiple content lengths can\n # be sent in a single Content-Length header\n # (e.g. Content-Length: 42, 42). This line ensures the values\n # are all valid ints and that as long as the `set` length is 1,\n # all values are the same. Otherwise, the header is invalid.\n lengths = set([int(val) for val in length.split(\",\")])\n if len(lengths) > 1:\n raise InvalidHeader(\n \"Content-Length contained multiple \"\n \"unmatching values (%s)\" % length\n )\n length = lengths.pop()\n except ValueError:\n length = None\n else:\n if length < 0:\n length = None\n\n # Convert status to int for comparison\n # In some cases, httplib returns a status of \"_UNKNOWN\"\n try:\n status = int(self.status)\n except ValueError:\n status = 0\n\n # Check for responses that shouldn't include a body\n if status in (204, 304) or 100 <= status < 200 or request_method == \"HEAD\":\n length = 0\n\n return length\n\n def _init_decoder(self):\n \"\"\"\n Set-up the _decoder attribute if necessary.\n \"\"\"\n # Note: content-encoding value should be case-insensitive, per RFC 7230\n # Section 3.2\n content_encoding = self.headers.get(\"content-encoding\", \"\").lower()\n if self._decoder is None:\n if content_encoding in self.CONTENT_DECODERS:\n self._decoder = _get_decoder(content_encoding)\n elif \",\" in content_encoding:\n encodings = [\n e.strip()\n for e in content_encoding.split(\",\")\n if e.strip() in self.CONTENT_DECODERS\n ]\n if len(encodings):\n self._decoder = _get_decoder(content_encoding)\n\n DECODER_ERROR_CLASSES = (IOError, zlib.error)\n if brotli is not None:\n DECODER_ERROR_CLASSES += (brotli.error,)\n\n def _decode(self, data, decode_content, flush_decoder):\n \"\"\"\n Decode the data passed in and potentially flush the decoder.\n \"\"\"\n if not decode_content:\n return data\n\n try:\n if self._decoder:\n data = self._decoder.decompress(data)\n except self.DECODER_ERROR_CLASSES as e:\n content_encoding = self.headers.get(\"content-encoding\", \"\").lower()\n raise DecodeError(\n \"Received response with content-encoding: %s, but \"\n \"failed to decode it.\" % content_encoding,\n e,\n )\n if flush_decoder:\n data += self._flush_decoder()\n\n return data\n\n def _flush_decoder(self):\n \"\"\"\n Flushes the decoder. 
Should only be called if the decoder is actually\n being used.\n \"\"\"\n if self._decoder:\n buf = self._decoder.decompress(b\"\")\n return buf + self._decoder.flush()\n\n return b\"\"\n\n @contextmanager\n def _error_catcher(self):\n \"\"\"\n Catch low-level python exceptions, instead re-raising urllib3\n variants, so that low-level exceptions are not leaked in the\n high-level api.\n\n On exit, release the connection back to the pool.\n \"\"\"\n clean_exit = False\n\n try:\n try:\n yield\n\n except SocketTimeout:\n # FIXME: Ideally we'd like to include the url in the ReadTimeoutError but\n # there is yet no clean way to get at it from this context.\n raise ReadTimeoutError(self._pool, None, \"Read timed out.\")\n\n except BaseSSLError as e:\n # FIXME: Is there a better way to differentiate between SSLErrors?\n if \"read operation timed out\" not in str(e): # Defensive:\n # This shouldn't happen but just in case we're missing an edge\n # case, let's avoid swallowing SSL errors.\n raise\n\n raise ReadTimeoutError(self._pool, None, \"Read timed out.\")\n\n except (HTTPException, SocketError) as e:\n # This includes IncompleteRead.\n raise ProtocolError(\"Connection broken: %r\" % e, e)\n\n # If no exception is thrown, we should avoid cleaning up\n # unnecessarily.\n clean_exit = True\n finally:\n # If we didn't terminate cleanly, we need to throw away our\n # connection.\n if not clean_exit:\n # The response may not be closed but we're not going to use it\n # anymore so close it now to ensure that the connection is\n # released back to the pool.\n if self._original_response:\n self._original_response.close()\n\n # Closing the response may not actually be sufficient to close\n # everything, so if we have a hold of the connection close that\n # too.\n if self._connection:\n self._connection.close()\n\n # If we hold the original response but it's closed now, we should\n # return the connection back to the pool.\n if self._original_response and self._original_response.isclosed():\n self.release_conn()\n\n def read(self, amt=None, decode_content=None, cache_content=False):\n \"\"\"\n Similar to :meth:`httplib.HTTPResponse.read`, but with two additional\n parameters: ``decode_content`` and ``cache_content``.\n\n :param amt:\n How much of the content to read. If specified, caching is skipped\n because it doesn't make sense to cache partial content as the full\n response.\n\n :param decode_content:\n If True, will attempt to decode the body based on the\n 'content-encoding' header.\n\n :param cache_content:\n If True, will save the returned data such that the same result is\n returned despite of the state of the underlying file object. This\n is useful if you want the ``.data`` property to continue working\n after having ``.read()`` the file object. (Overridden if ``amt`` is\n set.)\n \"\"\"\n self._init_decoder()\n if decode_content is None:\n decode_content = self.decode_content\n\n if self._fp is None:\n return\n\n flush_decoder = False\n fp_closed = getattr(self._fp, \"closed\", False)\n\n with self._error_catcher():\n if amt is None:\n # cStringIO doesn't like amt=None\n data = self._fp.read() if not fp_closed else b\"\"\n flush_decoder = True\n else:\n cache_content = False\n data = self._fp.read(amt) if not fp_closed else b\"\"\n if (\n amt != 0 and not data\n ): # Platform-specific: Buggy versions of Python.\n # Close the connection when no data is returned\n #\n # This is redundant to what httplib/http.client _should_\n # already do. 
However, versions of python released before\n # December 15, 2012 (http://bugs.python.org/issue16298) do\n # not properly close the connection in all cases. There is\n # no harm in redundantly calling close.\n self._fp.close()\n flush_decoder = True\n if self.enforce_content_length and self.length_remaining not in (\n 0,\n None,\n ):\n # This is an edge case that httplib failed to cover due\n # to concerns of backward compatibility. We're\n # addressing it here to make sure IncompleteRead is\n # raised during streaming, so all calls with incorrect\n # Content-Length are caught.\n raise IncompleteRead(self._fp_bytes_read, self.length_remaining)\n\n if data:\n self._fp_bytes_read += len(data)\n if self.length_remaining is not None:\n self.length_remaining -= len(data)\n\n data = self._decode(data, decode_content, flush_decoder)\n\n if cache_content:\n self._body = data\n\n return data\n\n def stream(self, amt=2 ** 16, decode_content=None):\n \"\"\"\n A generator wrapper for the read() method. A call will block until\n ``amt`` bytes have been read from the connection or until the\n connection is closed.\n\n :param amt:\n How much of the content to read. The generator will return up to\n much data per iteration, but may return less. This is particularly\n likely when using compressed data. However, the empty string will\n never be returned.\n\n :param decode_content:\n If True, will attempt to decode the body based on the\n 'content-encoding' header.\n \"\"\"\n if self.chunked and self.supports_chunked_reads():\n for line in self.read_chunked(amt, decode_content=decode_content):\n yield line\n else:\n while not is_fp_closed(self._fp):\n data = self.read(amt=amt, decode_content=decode_content)\n\n if data:\n yield data\n\n @classmethod\n def from_httplib(ResponseCls, r, **response_kw):\n \"\"\"\n Given an :class:`httplib.HTTPResponse` instance ``r``, return a\n corresponding :class:`urllib3.response.HTTPResponse` object.\n\n Remaining parameters are passed to the HTTPResponse constructor, along\n with ``original_response=r``.\n \"\"\"\n headers = r.msg\n\n if not isinstance(headers, HTTPHeaderDict):\n if PY3:\n headers = HTTPHeaderDict(headers.items())\n else:\n # Python 2.7\n headers = HTTPHeaderDict.from_httplib(headers)\n\n # HTTPResponse objects in Python 3 don't have a .strict attribute\n strict = getattr(r, \"strict\", 0)\n resp = ResponseCls(\n body=r,\n headers=headers,\n status=r.status,\n version=r.version,\n reason=r.reason,\n strict=strict,\n original_response=r,\n **response_kw\n )\n return resp\n\n # Backwards-compatibility methods for httplib.HTTPResponse\n def getheaders(self):\n return self.headers\n\n def getheader(self, name, default=None):\n return self.headers.get(name, default)\n\n # Backwards compatibility for http.cookiejar\n def info(self):\n return self.headers\n\n # Overrides from io.IOBase\n def close(self):\n if not self.closed:\n self._fp.close()\n\n if self._connection:\n self._connection.close()\n\n if not self.auto_close:\n io.IOBase.close(self)\n\n @property\n def closed(self):\n if not self.auto_close:\n return io.IOBase.closed.__get__(self)\n elif self._fp is None:\n return True\n elif hasattr(self._fp, \"isclosed\"):\n return self._fp.isclosed()\n elif hasattr(self._fp, \"closed\"):\n return self._fp.closed\n else:\n return True\n\n def fileno(self):\n if self._fp is None:\n raise IOError(\"HTTPResponse has no file to get a fileno from\")\n elif hasattr(self._fp, \"fileno\"):\n return self._fp.fileno()\n else:\n raise IOError(\n \"The file-like object this 
HTTPResponse is wrapped \"\n \"around has no file descriptor\"\n )\n\n def flush(self):\n if (\n self._fp is not None\n and hasattr(self._fp, \"flush\")\n and not getattr(self._fp, \"closed\", False)\n ):\n return self._fp.flush()\n\n def readable(self):\n # This method is required for `io` module compatibility.\n return True\n\n def readinto(self, b):\n # This method is required for `io` module compatibility.\n temp = self.read(len(b))\n if len(temp) == 0:\n return 0\n else:\n b[: len(temp)] = temp\n return len(temp)\n\n def supports_chunked_reads(self):\n \"\"\"\n Checks if the underlying file-like object looks like a\n httplib.HTTPResponse object. We do this by testing for the fp\n attribute. If it is present we assume it returns raw chunks as\n processed by read_chunked().\n \"\"\"\n return hasattr(self._fp, \"fp\")\n\n def _update_chunk_length(self):\n # First, we'll figure out length of a chunk and then\n # we'll try to read it from socket.\n if self.chunk_left is not None:\n return\n line = self._fp.fp.readline()\n line = line.split(b\";\", 1)[0]\n try:\n self.chunk_left = int(line, 16)\n except ValueError:\n # Invalid chunked protocol response, abort.\n self.close()\n raise httplib.IncompleteRead(line)\n\n def _handle_chunk(self, amt):\n returned_chunk = None\n if amt is None:\n chunk = self._fp._safe_read(self.chunk_left)\n returned_chunk = chunk\n self._fp._safe_read(2) # Toss the CRLF at the end of the chunk.\n self.chunk_left = None\n elif amt < self.chunk_left:\n value = self._fp._safe_read(amt)\n self.chunk_left = self.chunk_left - amt\n returned_chunk = value\n elif amt == self.chunk_left:\n value = self._fp._safe_read(amt)\n self._fp._safe_read(2) # Toss the CRLF at the end of the chunk.\n self.chunk_left = None\n returned_chunk = value\n else: # amt > self.chunk_left\n returned_chunk = self._fp._safe_read(self.chunk_left)\n self._fp._safe_read(2) # Toss the CRLF at the end of the chunk.\n self.chunk_left = None\n return returned_chunk\n\n def read_chunked(self, amt=None, decode_content=None):\n \"\"\"\n Similar to :meth:`HTTPResponse.read`, but with an additional\n parameter: ``decode_content``.\n\n :param amt:\n How much of the content to read. If specified, caching is skipped\n because it doesn't make sense to cache partial content as the full\n response.\n\n :param decode_content:\n If True, will attempt to decode the body based on the\n 'content-encoding' header.\n \"\"\"\n self._init_decoder()\n # FIXME: Rewrite this method and make it a class with a better structured logic.\n if not self.chunked:\n raise ResponseNotChunked(\n \"Response is not chunked. \"\n \"Header 'transfer-encoding: chunked' is missing.\"\n )\n if not self.supports_chunked_reads():\n raise BodyNotHttplibCompatible(\n \"Body should be httplib.HTTPResponse like. \"\n \"It should have have an fp attribute which returns raw chunks.\"\n )\n\n with self._error_catcher():\n # Don't bother reading the body of a HEAD request.\n if self._original_response and is_response_to_head(self._original_response):\n self._original_response.close()\n return\n\n # If a response is already read and closed\n # then return immediately.\n if self._fp.fp is None:\n return\n\n while True:\n self._update_chunk_length()\n if self.chunk_left == 0:\n break\n chunk = self._handle_chunk(amt)\n decoded = self._decode(\n chunk, decode_content=decode_content, flush_decoder=False\n )\n if decoded:\n yield decoded\n\n if decode_content:\n # On CPython and PyPy, we should never need to flush the\n # decoder. 
However, on Jython we *might* need to, so\n # lets defensively do it anyway.\n decoded = self._flush_decoder()\n if decoded: # Platform-specific: Jython.\n yield decoded\n\n # Chunk content ends with \\r\\n: discard it.\n while True:\n line = self._fp.fp.readline()\n if not line:\n # Some sites may not end with '\\r\\n'.\n break\n if line == b\"\\r\\n\":\n break\n\n # We read everything; close the \"file\".\n if self._original_response:\n self._original_response.close()\n\n def geturl(self):\n \"\"\"\n Returns the URL that was the source of this response.\n If the request that generated this response redirected, this method\n will return the final redirect location.\n \"\"\"\n if self.retries is not None and len(self.retries.history):\n return self.retries.history[-1].redirect_location\n else:\n return self._request_url\n\n def __iter__(self):\n buffer = [b\"\"]\n for chunk in self.stream(decode_content=True):\n if b\"\\n\" in chunk:\n chunk = chunk.split(b\"\\n\")\n yield b\"\".join(buffer) + chunk[0] + b\"\\n\"\n for x in chunk[1:-1]:\n yield x + b\"\\n\"\n if chunk[-1]:\n buffer = [chunk[-1]]\n else:\n buffer = []\n else:\n buffer.append(chunk)\n if buffer:\n yield b\"\".join(buffer)\n",
"path": "src/urllib3/response.py"
}
] | [
{
"content": "from __future__ import absolute_import\nfrom contextlib import contextmanager\nimport zlib\nimport io\nimport logging\nfrom socket import timeout as SocketTimeout\nfrom socket import error as SocketError\n\ntry:\n import brotli\nexcept ImportError:\n brotli = None\n\nfrom ._collections import HTTPHeaderDict\nfrom .exceptions import (\n BodyNotHttplibCompatible,\n ProtocolError,\n DecodeError,\n ReadTimeoutError,\n ResponseNotChunked,\n IncompleteRead,\n InvalidHeader,\n)\nfrom .packages.six import string_types as basestring, PY3\nfrom .packages.six.moves import http_client as httplib\nfrom .connection import HTTPException, BaseSSLError\nfrom .util.response import is_fp_closed, is_response_to_head\n\nlog = logging.getLogger(__name__)\n\n\nclass DeflateDecoder(object):\n def __init__(self):\n self._first_try = True\n self._data = b\"\"\n self._obj = zlib.decompressobj()\n\n def __getattr__(self, name):\n return getattr(self._obj, name)\n\n def decompress(self, data):\n if not data:\n return data\n\n if not self._first_try:\n return self._obj.decompress(data)\n\n self._data += data\n try:\n decompressed = self._obj.decompress(data)\n if decompressed:\n self._first_try = False\n self._data = None\n return decompressed\n except zlib.error:\n self._first_try = False\n self._obj = zlib.decompressobj(-zlib.MAX_WBITS)\n try:\n return self.decompress(self._data)\n finally:\n self._data = None\n\n\nclass GzipDecoderState(object):\n\n FIRST_MEMBER = 0\n OTHER_MEMBERS = 1\n SWALLOW_DATA = 2\n\n\nclass GzipDecoder(object):\n def __init__(self):\n self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS)\n self._state = GzipDecoderState.FIRST_MEMBER\n\n def __getattr__(self, name):\n return getattr(self._obj, name)\n\n def decompress(self, data):\n ret = bytearray()\n if self._state == GzipDecoderState.SWALLOW_DATA or not data:\n return bytes(ret)\n while True:\n try:\n ret += self._obj.decompress(data)\n except zlib.error:\n previous_state = self._state\n # Ignore data after the first error\n self._state = GzipDecoderState.SWALLOW_DATA\n if previous_state == GzipDecoderState.OTHER_MEMBERS:\n # Allow trailing garbage acceptable in other gzip clients\n return bytes(ret)\n raise\n data = self._obj.unused_data\n if not data:\n return bytes(ret)\n self._state = GzipDecoderState.OTHER_MEMBERS\n self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS)\n\n\nif brotli is not None:\n\n class BrotliDecoder(object):\n # Supports both 'brotlipy' and 'Brotli' packages\n # since they share an import name. 
The top branches\n # are for 'brotlipy' and bottom branches for 'Brotli'\n def __init__(self):\n self._obj = brotli.Decompressor()\n\n def decompress(self, data):\n if hasattr(self._obj, \"decompress\"):\n return self._obj.decompress(data)\n return self._obj.process(data)\n\n def flush(self):\n if hasattr(self._obj, \"flush\"):\n return self._obj.flush()\n return b\"\"\n\n\nclass MultiDecoder(object):\n \"\"\"\n From RFC7231:\n If one or more encodings have been applied to a representation, the\n sender that applied the encodings MUST generate a Content-Encoding\n header field that lists the content codings in the order in which\n they were applied.\n \"\"\"\n\n def __init__(self, modes):\n self._decoders = [_get_decoder(m.strip()) for m in modes.split(\",\")]\n\n def flush(self):\n return self._decoders[0].flush()\n\n def decompress(self, data):\n for d in reversed(self._decoders):\n data = d.decompress(data)\n return data\n\n\ndef _get_decoder(mode):\n if \",\" in mode:\n return MultiDecoder(mode)\n\n if mode == \"gzip\":\n return GzipDecoder()\n\n if brotli is not None and mode == \"br\":\n return BrotliDecoder()\n\n return DeflateDecoder()\n\n\nclass HTTPResponse(io.IOBase):\n \"\"\"\n HTTP Response container.\n\n Backwards-compatible to httplib's HTTPResponse but the response ``body`` is\n loaded and decoded on-demand when the ``data`` property is accessed. This\n class is also compatible with the Python standard library's :mod:`io`\n module, and can hence be treated as a readable object in the context of that\n framework.\n\n Extra parameters for behaviour not present in httplib.HTTPResponse:\n\n :param preload_content:\n If True, the response's body will be preloaded during construction.\n\n :param decode_content:\n If True, will attempt to decode the body based on the\n 'content-encoding' header.\n\n :param original_response:\n When this HTTPResponse wrapper is generated from an httplib.HTTPResponse\n object, it's convenient to include the original for debug purposes. It's\n otherwise unused.\n\n :param retries:\n The retries contains the last :class:`~urllib3.util.retry.Retry` that\n was used during the request.\n\n :param enforce_content_length:\n Enforce content length checking. Body returned by server must match\n value of Content-Length header, if present. 
Otherwise, raise error.\n \"\"\"\n\n CONTENT_DECODERS = [\"gzip\", \"deflate\"]\n if brotli is not None:\n CONTENT_DECODERS += [\"br\"]\n REDIRECT_STATUSES = [301, 302, 303, 307, 308]\n\n def __init__(\n self,\n body=\"\",\n headers=None,\n status=0,\n version=0,\n reason=None,\n strict=0,\n preload_content=True,\n decode_content=True,\n original_response=None,\n pool=None,\n connection=None,\n msg=None,\n retries=None,\n enforce_content_length=False,\n request_method=None,\n request_url=None,\n auto_close=True,\n ):\n\n if isinstance(headers, HTTPHeaderDict):\n self.headers = headers\n else:\n self.headers = HTTPHeaderDict(headers)\n self.status = status\n self.version = version\n self.reason = reason\n self.strict = strict\n self.decode_content = decode_content\n self.retries = retries\n self.enforce_content_length = enforce_content_length\n self.auto_close = auto_close\n\n self._decoder = None\n self._body = None\n self._fp = None\n self._original_response = original_response\n self._fp_bytes_read = 0\n self.msg = msg\n self._request_url = request_url\n\n if body and isinstance(body, (basestring, bytes)):\n self._body = body\n\n self._pool = pool\n self._connection = connection\n\n if hasattr(body, \"read\"):\n self._fp = body\n\n # Are we using the chunked-style of transfer encoding?\n self.chunked = False\n self.chunk_left = None\n tr_enc = self.headers.get(\"transfer-encoding\", \"\").lower()\n # Don't incur the penalty of creating a list and then discarding it\n encodings = (enc.strip() for enc in tr_enc.split(\",\"))\n if \"chunked\" in encodings:\n self.chunked = True\n\n # Determine length of response\n self.length_remaining = self._init_length(request_method)\n\n # If requested, preload the body.\n if preload_content and not self._body:\n self._body = self.read(decode_content=decode_content)\n\n def get_redirect_location(self):\n \"\"\"\n Should we redirect and where to?\n\n :returns: Truthy redirect location string if we got a redirect status\n code and valid location. ``None`` if redirect status and no\n location. ``False`` if not a redirect status code.\n \"\"\"\n if self.status in self.REDIRECT_STATUSES:\n return self.headers.get(\"location\")\n\n return False\n\n def release_conn(self):\n if not self._pool or not self._connection:\n return\n\n self._pool._put_conn(self._connection)\n self._connection = None\n\n @property\n def data(self):\n # For backwords-compat with earlier urllib3 0.4 and earlier.\n if self._body:\n return self._body\n\n if self._fp:\n return self.read(cache_content=True)\n\n @property\n def connection(self):\n return self._connection\n\n def isclosed(self):\n return is_fp_closed(self._fp)\n\n def tell(self):\n \"\"\"\n Obtain the number of bytes pulled over the wire so far. May differ from\n the amount of content returned by :meth:``HTTPResponse.read`` if bytes\n are encoded on the wire (e.g, compressed).\n \"\"\"\n return self._fp_bytes_read\n\n def _init_length(self, request_method):\n \"\"\"\n Set initial length value for Response content if available.\n \"\"\"\n length = self.headers.get(\"content-length\")\n\n if length is not None:\n if self.chunked:\n # This Response will fail with an IncompleteRead if it can't be\n # received as chunked. This method falls back to attempt reading\n # the response before raising an exception.\n log.warning(\n \"Received response with both Content-Length and \"\n \"Transfer-Encoding set. This is expressly forbidden \"\n \"by RFC 7230 sec 3.3.2. 
Ignoring Content-Length and \"\n \"attempting to process response as Transfer-Encoding: \"\n \"chunked.\"\n )\n return None\n\n try:\n # RFC 7230 section 3.3.2 specifies multiple content lengths can\n # be sent in a single Content-Length header\n # (e.g. Content-Length: 42, 42). This line ensures the values\n # are all valid ints and that as long as the `set` length is 1,\n # all values are the same. Otherwise, the header is invalid.\n lengths = set([int(val) for val in length.split(\",\")])\n if len(lengths) > 1:\n raise InvalidHeader(\n \"Content-Length contained multiple \"\n \"unmatching values (%s)\" % length\n )\n length = lengths.pop()\n except ValueError:\n length = None\n else:\n if length < 0:\n length = None\n\n # Convert status to int for comparison\n # In some cases, httplib returns a status of \"_UNKNOWN\"\n try:\n status = int(self.status)\n except ValueError:\n status = 0\n\n # Check for responses that shouldn't include a body\n if status in (204, 304) or 100 <= status < 200 or request_method == \"HEAD\":\n length = 0\n\n return length\n\n def _init_decoder(self):\n \"\"\"\n Set-up the _decoder attribute if necessary.\n \"\"\"\n # Note: content-encoding value should be case-insensitive, per RFC 7230\n # Section 3.2\n content_encoding = self.headers.get(\"content-encoding\", \"\").lower()\n if self._decoder is None:\n if content_encoding in self.CONTENT_DECODERS:\n self._decoder = _get_decoder(content_encoding)\n elif \",\" in content_encoding:\n encodings = [\n e.strip()\n for e in content_encoding.split(\",\")\n if e.strip() in self.CONTENT_DECODERS\n ]\n if len(encodings):\n self._decoder = _get_decoder(content_encoding)\n\n DECODER_ERROR_CLASSES = (IOError, zlib.error)\n if brotli is not None:\n DECODER_ERROR_CLASSES += (brotli.error,)\n\n def _decode(self, data, decode_content, flush_decoder):\n \"\"\"\n Decode the data passed in and potentially flush the decoder.\n \"\"\"\n if not decode_content:\n return data\n\n try:\n if self._decoder:\n data = self._decoder.decompress(data)\n except self.DECODER_ERROR_CLASSES as e:\n content_encoding = self.headers.get(\"content-encoding\", \"\").lower()\n raise DecodeError(\n \"Received response with content-encoding: %s, but \"\n \"failed to decode it.\" % content_encoding,\n e,\n )\n if flush_decoder:\n data += self._flush_decoder()\n\n return data\n\n def _flush_decoder(self):\n \"\"\"\n Flushes the decoder. 
Should only be called if the decoder is actually\n being used.\n \"\"\"\n if self._decoder:\n buf = self._decoder.decompress(b\"\")\n return buf + self._decoder.flush()\n\n return b\"\"\n\n @contextmanager\n def _error_catcher(self):\n \"\"\"\n Catch low-level python exceptions, instead re-raising urllib3\n variants, so that low-level exceptions are not leaked in the\n high-level api.\n\n On exit, release the connection back to the pool.\n \"\"\"\n clean_exit = False\n\n try:\n try:\n yield\n\n except SocketTimeout:\n # FIXME: Ideally we'd like to include the url in the ReadTimeoutError but\n # there is yet no clean way to get at it from this context.\n raise ReadTimeoutError(self._pool, None, \"Read timed out.\")\n\n except BaseSSLError as e:\n # FIXME: Is there a better way to differentiate between SSLErrors?\n if \"read operation timed out\" not in str(e): # Defensive:\n # This shouldn't happen but just in case we're missing an edge\n # case, let's avoid swallowing SSL errors.\n raise\n\n raise ReadTimeoutError(self._pool, None, \"Read timed out.\")\n\n except (HTTPException, SocketError) as e:\n # This includes IncompleteRead.\n raise ProtocolError(\"Connection broken: %r\" % e, e)\n\n # If no exception is thrown, we should avoid cleaning up\n # unnecessarily.\n clean_exit = True\n finally:\n # If we didn't terminate cleanly, we need to throw away our\n # connection.\n if not clean_exit:\n # The response may not be closed but we're not going to use it\n # anymore so close it now to ensure that the connection is\n # released back to the pool.\n if self._original_response:\n self._original_response.close()\n\n # Closing the response may not actually be sufficient to close\n # everything, so if we have a hold of the connection close that\n # too.\n if self._connection:\n self._connection.close()\n\n # If we hold the original response but it's closed now, we should\n # return the connection back to the pool.\n if self._original_response and self._original_response.isclosed():\n self.release_conn()\n\n def read(self, amt=None, decode_content=None, cache_content=False):\n \"\"\"\n Similar to :meth:`httplib.HTTPResponse.read`, but with two additional\n parameters: ``decode_content`` and ``cache_content``.\n\n :param amt:\n How much of the content to read. If specified, caching is skipped\n because it doesn't make sense to cache partial content as the full\n response.\n\n :param decode_content:\n If True, will attempt to decode the body based on the\n 'content-encoding' header.\n\n :param cache_content:\n If True, will save the returned data such that the same result is\n returned despite of the state of the underlying file object. This\n is useful if you want the ``.data`` property to continue working\n after having ``.read()`` the file object. (Overridden if ``amt`` is\n set.)\n \"\"\"\n self._init_decoder()\n if decode_content is None:\n decode_content = self.decode_content\n\n if self._fp is None:\n return\n\n flush_decoder = False\n fp_closed = getattr(self._fp, \"closed\", False)\n\n with self._error_catcher():\n if amt is None:\n # cStringIO doesn't like amt=None\n data = self._fp.read() if not fp_closed else b\"\"\n flush_decoder = True\n else:\n cache_content = False\n data = self._fp.read(amt) if not fp_closed else b\"\"\n if (\n amt != 0 and not data\n ): # Platform-specific: Buggy versions of Python.\n # Close the connection when no data is returned\n #\n # This is redundant to what httplib/http.client _should_\n # already do. 
However, versions of python released before\n # December 15, 2012 (http://bugs.python.org/issue16298) do\n # not properly close the connection in all cases. There is\n # no harm in redundantly calling close.\n self._fp.close()\n flush_decoder = True\n if self.enforce_content_length and self.length_remaining not in (\n 0,\n None,\n ):\n # This is an edge case that httplib failed to cover due\n # to concerns of backward compatibility. We're\n # addressing it here to make sure IncompleteRead is\n # raised during streaming, so all calls with incorrect\n # Content-Length are caught.\n raise IncompleteRead(self._fp_bytes_read, self.length_remaining)\n\n if data:\n self._fp_bytes_read += len(data)\n if self.length_remaining is not None:\n self.length_remaining -= len(data)\n\n data = self._decode(data, decode_content, flush_decoder)\n\n if cache_content:\n self._body = data\n\n return data\n\n def stream(self, amt=2 ** 16, decode_content=None):\n \"\"\"\n A generator wrapper for the read() method. A call will block until\n ``amt`` bytes have been read from the connection or until the\n connection is closed.\n\n :param amt:\n How much of the content to read. The generator will return up to\n much data per iteration, but may return less. This is particularly\n likely when using compressed data. However, the empty string will\n never be returned.\n\n :param decode_content:\n If True, will attempt to decode the body based on the\n 'content-encoding' header.\n \"\"\"\n if self.chunked and self.supports_chunked_reads():\n for line in self.read_chunked(amt, decode_content=decode_content):\n yield line\n else:\n while not is_fp_closed(self._fp):\n data = self.read(amt=amt, decode_content=decode_content)\n\n if data:\n yield data\n\n @classmethod\n def from_httplib(ResponseCls, r, **response_kw):\n \"\"\"\n Given an :class:`httplib.HTTPResponse` instance ``r``, return a\n corresponding :class:`urllib3.response.HTTPResponse` object.\n\n Remaining parameters are passed to the HTTPResponse constructor, along\n with ``original_response=r``.\n \"\"\"\n headers = r.msg\n\n if not isinstance(headers, HTTPHeaderDict):\n if PY3:\n headers = HTTPHeaderDict(headers.items())\n else:\n # Python 2.7\n headers = HTTPHeaderDict.from_httplib(headers)\n\n # HTTPResponse objects in Python 3 don't have a .strict attribute\n strict = getattr(r, \"strict\", 0)\n resp = ResponseCls(\n body=r,\n headers=headers,\n status=r.status,\n version=r.version,\n reason=r.reason,\n strict=strict,\n original_response=r,\n **response_kw\n )\n return resp\n\n # Backwards-compatibility methods for httplib.HTTPResponse\n def getheaders(self):\n return self.headers\n\n def getheader(self, name, default=None):\n return self.headers.get(name, default)\n\n # Backwards compatibility for http.cookiejar\n def info(self):\n return self.headers\n\n # Overrides from io.IOBase\n def close(self):\n if not self.closed:\n self._fp.close()\n\n if self._connection:\n self._connection.close()\n\n if not self.auto_close:\n io.IOBase.close(self)\n\n @property\n def closed(self):\n if not self.auto_close:\n return io.IOBase.closed.__get__(self)\n elif self._fp is None:\n return True\n elif hasattr(self._fp, \"isclosed\"):\n return self._fp.isclosed()\n elif hasattr(self._fp, \"closed\"):\n return self._fp.closed\n else:\n return True\n\n def fileno(self):\n if self._fp is None:\n raise IOError(\"HTTPResponse has no file to get a fileno from\")\n elif hasattr(self._fp, \"fileno\"):\n return self._fp.fileno()\n else:\n raise IOError(\n \"The file-like object this 
HTTPResponse is wrapped \"\n \"around has no file descriptor\"\n )\n\n def flush(self):\n if (\n self._fp is not None\n and hasattr(self._fp, \"flush\")\n and not getattr(self._fp, \"closed\", False)\n ):\n return self._fp.flush()\n\n def readable(self):\n # This method is required for `io` module compatibility.\n return True\n\n def readinto(self, b):\n # This method is required for `io` module compatibility.\n temp = self.read(len(b))\n if len(temp) == 0:\n return 0\n else:\n b[: len(temp)] = temp\n return len(temp)\n\n def supports_chunked_reads(self):\n \"\"\"\n Checks if the underlying file-like object looks like a\n httplib.HTTPResponse object. We do this by testing for the fp\n attribute. If it is present we assume it returns raw chunks as\n processed by read_chunked().\n \"\"\"\n return hasattr(self._fp, \"fp\")\n\n def _update_chunk_length(self):\n # First, we'll figure out length of a chunk and then\n # we'll try to read it from socket.\n if self.chunk_left is not None:\n return\n line = self._fp.fp.readline()\n line = line.split(b\";\", 1)[0]\n try:\n self.chunk_left = int(line, 16)\n except ValueError:\n # Invalid chunked protocol response, abort.\n self.close()\n raise httplib.IncompleteRead(line)\n\n def _handle_chunk(self, amt):\n returned_chunk = None\n if amt is None:\n chunk = self._fp._safe_read(self.chunk_left)\n returned_chunk = chunk\n self._fp._safe_read(2) # Toss the CRLF at the end of the chunk.\n self.chunk_left = None\n elif amt < self.chunk_left:\n value = self._fp._safe_read(amt)\n self.chunk_left = self.chunk_left - amt\n returned_chunk = value\n elif amt == self.chunk_left:\n value = self._fp._safe_read(amt)\n self._fp._safe_read(2) # Toss the CRLF at the end of the chunk.\n self.chunk_left = None\n returned_chunk = value\n else: # amt > self.chunk_left\n returned_chunk = self._fp._safe_read(self.chunk_left)\n self._fp._safe_read(2) # Toss the CRLF at the end of the chunk.\n self.chunk_left = None\n return returned_chunk\n\n def read_chunked(self, amt=None, decode_content=None):\n \"\"\"\n Similar to :meth:`HTTPResponse.read`, but with an additional\n parameter: ``decode_content``.\n\n :param amt:\n How much of the content to read. If specified, caching is skipped\n because it doesn't make sense to cache partial content as the full\n response.\n\n :param decode_content:\n If True, will attempt to decode the body based on the\n 'content-encoding' header.\n \"\"\"\n self._init_decoder()\n # FIXME: Rewrite this method and make it a class with a better structured logic.\n if not self.chunked:\n raise ResponseNotChunked(\n \"Response is not chunked. \"\n \"Header 'transfer-encoding: chunked' is missing.\"\n )\n if not self.supports_chunked_reads():\n raise BodyNotHttplibCompatible(\n \"Body should be httplib.HTTPResponse like. \"\n \"It should have have an fp attribute which returns raw chunks.\"\n )\n\n with self._error_catcher():\n # Don't bother reading the body of a HEAD request.\n if self._original_response and is_response_to_head(self._original_response):\n self._original_response.close()\n return\n\n # If a response is already read and closed\n # then return immediately.\n if self._fp.fp is None:\n return\n\n while True:\n self._update_chunk_length()\n if self.chunk_left == 0:\n break\n chunk = self._handle_chunk(amt)\n decoded = self._decode(\n chunk, decode_content=decode_content, flush_decoder=False\n )\n if decoded:\n yield decoded\n\n if decode_content:\n # On CPython and PyPy, we should never need to flush the\n # decoder. 
However, on Jython we *might* need to, so\n # lets defensively do it anyway.\n decoded = self._flush_decoder()\n if decoded: # Platform-specific: Jython.\n yield decoded\n\n # Chunk content ends with \\r\\n: discard it.\n while True:\n line = self._fp.fp.readline()\n if not line:\n # Some sites may not end with '\\r\\n'.\n break\n if line == b\"\\r\\n\":\n break\n\n # We read everything; close the \"file\".\n if self._original_response:\n self._original_response.close()\n\n def geturl(self):\n \"\"\"\n Returns the URL that was the source of this response.\n If the request that generated this response redirected, this method\n will return the final redirect location.\n \"\"\"\n if self.retries is not None and len(self.retries.history):\n return self.retries.history[-1].redirect_location\n else:\n return self._request_url\n\n def __iter__(self):\n buffer = []\n for chunk in self.stream(decode_content=True):\n if b\"\\n\" in chunk:\n chunk = chunk.split(b\"\\n\")\n yield b\"\".join(buffer) + chunk[0] + b\"\\n\"\n for x in chunk[1:-1]:\n yield x + b\"\\n\"\n if chunk[-1]:\n buffer = [chunk[-1]]\n else:\n buffer = []\n else:\n buffer.append(chunk)\n if buffer:\n yield b\"\".join(buffer)\n",
"path": "src/urllib3/response.py"
}
] | diff --git a/src/urllib3/response.py b/src/urllib3/response.py
index adc321e713..6090a7350f 100644
--- a/src/urllib3/response.py
+++ b/src/urllib3/response.py
@@ -792,7 +792,7 @@ def geturl(self):
return self._request_url
def __iter__(self):
- buffer = [b""]
+ buffer = []
for chunk in self.stream(decode_content=True):
if b"\n" in chunk:
chunk = chunk.split(b"\n")
diff --git a/test/test_response.py b/test/test_response.py
index c6a9c3ad04..dccce8e56b 100644
--- a/test/test_response.py
+++ b/test/test_response.py
@@ -859,8 +859,9 @@ def test_geturl_retries(self):
@pytest.mark.parametrize(
["payload", "expected_stream"],
[
- (b"", [b""]),
+ (b"", []),
(b"\n", [b"\n"]),
+ (b"\n\n\n", [b"\n", b"\n", b"\n"]),
(b"abc\ndef", [b"abc\n", b"def"]),
(b"Hello\nworld\n\n\n!", [b"Hello\n", b"world\n", b"\n", b"\n", b"!"]),
],
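
The new `(b"", [])` case can also be exercised directly against `HTTPResponse` with an in-memory body; the snippet below is a hypothetical standalone check written against the constructor shown above, not part of the patch:

```
import io
from urllib3.response import HTTPResponse

# Build responses around in-memory bodies and iterate them line by line.
empty = HTTPResponse(body=io.BytesIO(b""), preload_content=False)
assert list(empty) == []  # no spurious b'' for an empty body

lines = HTTPResponse(body=io.BytesIO(b"abc\ndef"), preload_content=False)
assert list(lines) == [b"abc\n", b"def"]
```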
|
openfun__richie-2306 | Frontend - Rename teacher dashboard menu entry
## Feature Request
**Is your feature request related to a problem or unsupported use case? Please describe.**
Currently, we call the dashboard dedicated to managing trainings the "teacher dashboard". After a demonstration, it appears this term is confusing, as users other than teachers have to use this dashboard (university members, course leaders...).
**Describe the solution you'd like**
We should rename this entry to something like "Administration dashboard" or "Training dashboard"?
Furthermore, we could display this entry apart from the others to make it explicit that it is an extra entry not available to all users.
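
On the backend side, the label for this entry comes from the `profile_dashboard_urls` mapping in the Django settings included below. A hedged sketch of one possible rename, assuming the "Training dashboard" wording is picked (the final label is still an open question in this issue):

```
from django.utils.translation import gettext_lazy as _

# Possible rename of the "teacher dashboard" entry; the label choice
# ("Administration dashboard" vs "Training dashboard") is still open.
profile_dashboard_urls = {
    "dashboard": {
        "label": _("Dashboard"),
        "href": _("/dashboard/"),
    },
    "dashboard_teacher": {
        # was: _("Teacher dashboard")
        "label": _("Training dashboard"),
        "href": _("/dashboard/teacher"),
    },
}
```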
| [
{
"content": "\"\"\"\nDjango settings for richie project.\n\"\"\"\n\nimport json\nimport os\n\nfrom django.utils.translation import gettext_lazy as _\n\n# pylint: disable=ungrouped-imports\nimport sentry_sdk\nfrom configurations import Configuration, values\nfrom sentry_sdk.integrations.django import DjangoIntegration\n\nfrom richie.apps.courses.settings.mixins import RichieCoursesConfigurationMixin\n\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nDATA_DIR = os.path.join(\"/\", \"data\")\n\n\ndef get_release():\n \"\"\"Get the current release of the application.\n\n By release, we mean the release from the version.json file à la Mozilla [1]\n (if any). If this file has not been found, it defaults to \"NA\".\n\n [1]\n https://github.com/mozilla-services/Dockerflow/blob/master/docs/version_object.md\n \"\"\"\n # Try to get the current release from the version.json file generated by the\n # CI during the Docker image build\n try:\n with open(os.path.join(BASE_DIR, \"version.json\"), encoding=\"utf8\") as version:\n return json.load(version)[\"version\"]\n except FileNotFoundError:\n return \"NA\" # Default: not available\n\n\nclass StyleguideMixin:\n \"\"\"\n Theme styleguide reference\n\n Only used to build styleguide page without to hardcode properties and\n values into styleguide template.\n \"\"\"\n\n STYLEGUIDE = {\n # Available font family names\n \"fonts\": [\"hind\", \"montserrat\"],\n # Named color palette\n \"palette\": [\n \"black\",\n \"dark-grey\",\n \"charcoal\",\n \"slate-grey\",\n \"battleship-grey\",\n \"light-grey\",\n \"silver\",\n \"azure2\",\n \"smoke\",\n \"white\",\n \"denim\",\n \"firebrick6\",\n \"grey32\",\n \"grey59\",\n \"grey87\",\n \"purplish-grey\",\n \"midnightblue\",\n \"indianred3\",\n ],\n # Available gradient background\n \"gradient_colors\": [\n \"neutral-gradient\",\n \"middle-gradient\",\n \"dark-gradient\",\n \"white-mask-gradient\",\n ],\n # Available color schemes\n \"schemes\": [\n \"primary\",\n \"secondary\",\n \"tertiary\",\n \"clear\",\n \"light\",\n \"lightest\",\n \"neutral-gradient\",\n \"middle-gradient\",\n \"dark-gradient\",\n \"white-mask-gradient\",\n \"transparent-darkest\",\n \"clouds\",\n \"waves\",\n \"purplish-grey\",\n \"battleship-grey\",\n ],\n }\n\n\nclass DRFMixin:\n \"\"\"\n Django Rest Framework configuration mixin.\n NB: DRF picks its settings from the REST_FRAMEWORK namespace on the settings, hence\n the nesting of all our values inside that prop\n \"\"\"\n\n REST_FRAMEWORK = {\n \"ALLOWED_VERSIONS\": (\"1.0\",),\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n \"rest_framework.authentication.SessionAuthentication\",\n ),\n \"DEFAULT_VERSION\": \"1.0\",\n \"DEFAULT_VERSIONING_CLASS\": \"rest_framework.versioning.URLPathVersioning\",\n }\n\n\nclass Base(StyleguideMixin, DRFMixin, RichieCoursesConfigurationMixin, Configuration):\n \"\"\"\n This is the base configuration every configuration (aka environnement) should inherit from. 
It\n is recommended to configure third-party applications by creating a configuration mixins in\n ./configurations and compose the Base configuration with those mixins.\n\n It depends on an environment variable that SHOULD be defined:\n\n * DJANGO_SECRET_KEY\n\n You may also want to override default configuration by setting the following environment\n variables:\n\n * DJANGO_SENTRY_DSN\n * RICHIE_ES_HOST\n * DB_NAME\n * DB_HOST\n * DB_PASSWORD\n * DB_USER\n \"\"\"\n\n DEBUG = False\n\n SITE_ID = 1\n\n # Security\n ALLOWED_HOSTS = []\n SECRET_KEY = values.Value(\"ThisIsAnExampleKeyForDevPurposeOnly\")\n # System check reference:\n # https://docs.djangoproject.com/en/2.2/ref/checks/#security\n SILENCED_SYSTEM_CHECKS = values.ListValue(\n [\n # Allow the X_FRAME_OPTIONS to be set to \"SAMEORIGIN\"\n \"security.W019\"\n ]\n )\n # The X_FRAME_OPTIONS value should be set to \"SAMEORIGIN\" to display\n # DjangoCMS frontend admin frames. Dockerflow raises a system check security\n # warning with this setting, one should add \"security.W019\" to the\n # SILENCED_SYSTEM_CHECKS setting (see above).\n X_FRAME_OPTIONS = \"SAMEORIGIN\"\n\n # Application definition\n ROOT_URLCONF = \"urls\"\n WSGI_APPLICATION = \"wsgi.application\"\n\n # Database\n DATABASES = {\n \"default\": {\n \"ENGINE\": values.Value(\n \"django.db.backends.postgresql_psycopg2\",\n environ_name=\"DB_ENGINE\",\n environ_prefix=None,\n ),\n \"NAME\": values.Value(\"richie\", environ_name=\"DB_NAME\", environ_prefix=None),\n \"USER\": values.Value(\"fun\", environ_name=\"DB_USER\", environ_prefix=None),\n \"PASSWORD\": values.Value(\n \"pass\", environ_name=\"DB_PASSWORD\", environ_prefix=None\n ),\n \"HOST\": values.Value(\n \"localhost\", environ_name=\"DB_HOST\", environ_prefix=None\n ),\n \"PORT\": values.Value(5432, environ_name=\"DB_PORT\", environ_prefix=None),\n }\n }\n DEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n MIGRATION_MODULES = {}\n\n # Static files (CSS, JavaScript, Images)\n STATIC_URL = \"/static/\"\n MEDIA_URL = \"/media/\"\n MEDIA_ROOT = os.path.join(DATA_DIR, \"media\")\n STATIC_ROOT = os.path.join(DATA_DIR, \"static\")\n\n # Login/registration related settings\n LOGIN_REDIRECT_URL = \"/\"\n LOGOUT_REDIRECT_URL = \"/\"\n LOGIN_URL = \"login\"\n LOGOUT_URL = \"logout\"\n\n AUTHENTICATION_BACKENDS = (\"django.contrib.auth.backends.ModelBackend\",)\n\n # Mapping between edx and richie profile fields\n EDX_USER_PROFILE_TO_DJANGO = values.DictValue()\n\n # Feature flags\n FEATURES = values.DictValue(environ_name=\"FEATURES\", environ_prefix=None)\n\n # Joanie\n \"\"\"\n NB: Richie picks all Joanie's settings from the JOANIE_BACKEND namespace in the\n settings, hence the nesting of all Joanie's values inside that prop.\n\n If BASE_URL is defined, this setting is bound into RICHIE_LMS_BACKENDS to use Joanie\n as a LMS BACKEND.\n \"\"\"\n JOANIE_BACKEND = {\n \"BASE_URL\": values.Value(environ_name=\"JOANIE_BASE_URL\", environ_prefix=None),\n \"BACKEND\": values.Value(\n \"richie.apps.courses.lms.joanie.JoanieBackend\",\n environ_name=\"JOANIE_BACKEND\",\n environ_prefix=None,\n ),\n \"JS_BACKEND\": values.Value(\n \"joanie\", environ_name=\"JOANIE_JS_BACKEND\", environ_prefix=None\n ),\n \"COURSE_REGEX\": values.Value(\n r\"^.*/api/v1.0(?P<resource_uri>(?:/(?:courses|course-runs|products)/[^/]+)+)/?$\",\n environ_name=\"JOANIE_COURSE_REGEX\",\n environ_prefix=None,\n ),\n \"JS_COURSE_REGEX\": values.Value(\n r\"^.*/api/v1.0((?:/(?:courses|course-runs|products)/[^/]+)+)/?$\",\n 
environ_name=\"JOANIE_JS_COURSE_REGEX\",\n environ_prefix=None,\n ),\n # Course runs synchronization\n \"COURSE_RUN_SYNC_NO_UPDATE_FIELDS\": [],\n \"DEFAULT_COURSE_RUN_SYNC_MODE\": \"sync_to_public\",\n }\n\n # LMS\n RICHIE_LMS_BACKENDS = [\n {\n # We configure default values that work with the test configuration of\n # github.com/openfun/openedx-docker.\n \"BASE_URL\": values.Value(environ_name=\"EDX_BASE_URL\", environ_prefix=None),\n # Django backend\n \"BACKEND\": values.Value(\n \"richie.apps.courses.lms.edx.EdXLMSBackend\",\n environ_name=\"EDX_BACKEND\",\n environ_prefix=None,\n ),\n \"COURSE_REGEX\": values.Value(\n r\"^.*/courses/(?P<course_id>.*)/course/?$\",\n environ_name=\"EDX_COURSE_REGEX\",\n environ_prefix=None,\n ),\n # React frontend\n \"JS_BACKEND\": values.Value(\n \"openedx-hawthorn\", environ_name=\"EDX_JS_BACKEND\", environ_prefix=None\n ),\n \"JS_COURSE_REGEX\": values.Value(\n r\"^.*/courses/(.*)/course/?$\",\n environ_name=\"EDX_JS_COURSE_REGEX\",\n environ_prefix=None,\n ),\n # Course runs synchronization\n \"COURSE_RUN_SYNC_NO_UPDATE_FIELDS\": [],\n \"DEFAULT_COURSE_RUN_SYNC_MODE\": \"sync_to_public\",\n }\n ]\n RICHIE_COURSE_RUN_SYNC_SECRETS = values.ListValue([])\n\n # AUTHENTICATION\n profile_dashboard_urls = {\n \"dashboard\": {\n \"label\": _(\"Dashboard\"),\n \"href\": _(\"{base_url:s}/dashboard/\"),\n },\n }\n if (\n FEATURES.get(\"REACT_DASHBOARD\", False) # pylint: disable=no-member\n and JOANIE_BACKEND.get(\"BASE_URL\") is not None\n ):\n profile_dashboard_urls = {\n \"dashboard\": {\n \"label\": _(\"Dashboard\"),\n \"href\": _(\"/dashboard/\"),\n },\n \"dashboard_teacher\": {\n \"label\": _(\"Teacher dashboard\"),\n \"href\": _(\"/dashboard/teacher\"),\n },\n }\n\n RICHIE_AUTHENTICATION_DELEGATION = {\n \"BASE_URL\": values.Value(\n \"\", environ_name=\"AUTHENTICATION_BASE_URL\", environ_prefix=None\n ),\n \"BACKEND\": values.Value(\n \"dummy\", environ_name=\"AUTHENTICATION_BACKEND\", environ_prefix=None\n ),\n # PROFILE_URLS are custom links to access to Auth profile views\n # from Richie. 
Link order will reflect the order of display in frontend.\n # (i) Info - {base_url} is RICHIE_AUTHENTICATION_DELEGATION.BASE_URL\n # (i) If you need to bind user data into href url, wrap the property between ()\n # e.g: for user.username = johndoe, /u/(username) will be /u/johndoe\n \"PROFILE_URLS\": values.DictValue(\n {\n **profile_dashboard_urls,\n \"profile\": {\n \"label\": _(\"Profile\"),\n \"href\": _(\"{base_url:s}/u/(username)\"),\n },\n \"account\": {\n \"label\": _(\"Account\"),\n \"href\": _(\"{base_url:s}/account/settings\"),\n },\n },\n environ_name=\"AUTHENTICATION_PROFILE_URLS\",\n environ_prefix=None,\n ),\n }\n\n # Elasticsearch\n RICHIE_ES_HOST = values.ListValue(\n [\"elasticsearch\"], environ_name=\"RICHIE_ES_HOST\", environ_prefix=None\n )\n RICHIE_ES_INDICES_PREFIX = values.Value(\n default=\"richie\", environ_name=\"RICHIE_ES_INDICES_PREFIX\", environ_prefix=None\n )\n RICHIE_ES_STATE_WEIGHTS = values.ListValue(None)\n\n # LTI Content\n RICHIE_LTI_PROVIDERS = {\n \"lti_provider_test\": {\n \"oauth_consumer_key\": values.Value(\n \"InsecureOauthConsumerKey\",\n environ_name=\"LTI_TEST_OAUTH_CONSUMER_KEY\",\n environ_prefix=None,\n ),\n \"shared_secret\": values.Value(\n \"InsecureSharedSecret\",\n environ_name=\"LTI_TEST_SHARED_SECRET\",\n environ_prefix=None,\n ),\n \"base_url\": values.Value(\n \"https://lti.tools/saltire/tp\",\n environ_name=\"LTI_TEST_BASE_URL\",\n environ_prefix=None,\n ),\n \"display_name\": \"LTI Provider Test Video\",\n \"is_base_url_regex\": False,\n \"is_automatic_resizing\": True,\n \"inline_ratio\": 0.5625,\n }\n }\n\n # Internationalization\n TIME_ZONE = \"Europe/Paris\"\n USE_I18N = True\n USE_L10N = True\n USE_TZ = True\n\n # Templates\n TEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [os.path.join(BASE_DIR, \"templates\")],\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n \"django.template.context_processors.i18n\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.template.context_processors.media\",\n \"django.template.context_processors.csrf\",\n \"django.template.context_processors.tz\",\n \"sekizai.context_processors.sekizai\",\n \"django.template.context_processors.static\",\n \"cms.context_processors.cms_settings\",\n \"richie.apps.core.context_processors.site_metas\",\n ],\n \"loaders\": [\n \"django.template.loaders.filesystem.Loader\",\n \"django.template.loaders.app_directories.Loader\",\n ],\n },\n }\n ]\n\n MIDDLEWARE = (\n \"richie.apps.core.cache.LimitBrowserCacheTTLHeaders\",\n \"cms.middleware.utils.ApphookReloadMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"dockerflow.django.middleware.DockerflowMiddleware\",\n \"cms.middleware.user.CurrentUserMiddleware\",\n \"cms.middleware.page.CurrentPageMiddleware\",\n \"cms.middleware.toolbar.ToolbarMiddleware\",\n \"cms.middleware.language.LanguageCookieMiddleware\",\n \"dj_pagination.middleware.PaginationMiddleware\",\n )\n\n # Django applications from 
the highest priority to the lowest\n INSTALLED_APPS = (\n # Richie stuff\n \"richie.apps.demo\",\n \"richie.apps.search\",\n \"richie.apps.courses\",\n \"richie.apps.core\",\n \"richie.plugins.glimpse\",\n \"richie.plugins.html_sitemap\",\n \"richie.plugins.large_banner\",\n \"richie.plugins.nesteditem\",\n \"richie.plugins.plain_text\",\n \"richie.plugins.section\",\n \"richie.plugins.simple_picture\",\n \"richie.plugins.simple_text_ckeditor\",\n \"richie.plugins.lti_consumer\",\n \"richie\",\n # Third party apps\n \"dj_pagination\",\n \"dockerflow.django\",\n \"parler\",\n \"rest_framework\",\n # Django-cms\n \"djangocms_admin_style\",\n \"djangocms_googlemap\",\n \"djangocms_link\",\n \"djangocms_picture\",\n \"djangocms_text_ckeditor\",\n \"djangocms_video\",\n \"cms\",\n \"menus\",\n \"sekizai\",\n \"treebeard\",\n \"filer\",\n \"easy_thumbnails\",\n # django-autocomplete-light\n \"dal\",\n \"dal_select2\",\n # Django\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.admin\",\n \"django.contrib.sites\",\n \"django.contrib.sitemaps\",\n \"django.contrib.staticfiles\",\n \"django.contrib.messages\",\n \"django.contrib.humanize\",\n )\n\n # Languages\n # - Django\n LANGUAGE_CODE = \"en\"\n\n # Careful! Languages should be ordered by priority, as this tuple is used to get\n # fallback/default languages throughout the app.\n # Use \"en\" as default as it is the language that is most likely to be spoken by any visitor\n # when their preferred language, whatever it is, is unavailable\n LANGUAGES = ((\"en\", _(\"English\")), (\"fr\", _(\"French\")))\n\n # - Django CMS\n CMS_LANGUAGES = {\n \"default\": {\n \"public\": True,\n \"hide_untranslated\": False,\n \"redirect_on_fallback\": False,\n \"fallbacks\": [\"en\", \"fr\"],\n },\n 1: [\n {\n \"public\": True,\n \"code\": \"en\",\n \"hide_untranslated\": False,\n \"name\": _(\"English\"),\n \"fallbacks\": [\"fr\"],\n \"redirect_on_fallback\": False,\n },\n {\n \"public\": True,\n \"code\": \"fr\",\n \"hide_untranslated\": False,\n \"name\": _(\"French\"),\n \"fallbacks\": [\"en\"],\n \"redirect_on_fallback\": False,\n },\n ],\n }\n\n # - Django Parler\n PARLER_LANGUAGES = CMS_LANGUAGES\n\n # Permisions\n # - Django CMS\n CMS_PERMISSION = True\n\n # - Django Filer\n FILER_ENABLE_PERMISSIONS = True\n FILER_IS_PUBLIC_DEFAULT = True\n\n # - Django Pagination\n PAGINATION_INVALID_PAGE_RAISES_404 = True\n PAGINATION_DEFAULT_WINDOW = 2\n PAGINATION_DEFAULT_MARGIN = 1\n\n # Logging\n LOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": True,\n \"formatters\": {\n \"verbose\": {\n \"format\": \"%(levelname)s %(asctime)s %(module)s \"\n \"%(process)d %(thread)d %(message)s\"\n }\n },\n \"handlers\": {\n \"console\": {\n \"level\": \"DEBUG\",\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"verbose\",\n }\n },\n \"loggers\": {\n \"django.db.backends\": {\n \"level\": \"ERROR\",\n \"handlers\": [\"console\"],\n \"propagate\": False,\n }\n },\n }\n\n # Cache\n CACHES = {\n \"default\": {\n \"BACKEND\": values.Value(\n \"django.core.cache.backends.locmem.LocMemCache\",\n environ_name=\"CACHE_DEFAULT_BACKEND\",\n environ_prefix=None,\n ),\n \"LOCATION\": values.Value(\n \"\", environ_name=\"CACHE_DEFAULT_LOCATION\", environ_prefix=None\n ),\n \"OPTIONS\": values.DictValue(\n {}, environ_name=\"CACHE_DEFAULT_OPTIONS\", environ_prefix=None\n ),\n },\n \"search\": {\n \"BACKEND\": values.Value(\n \"django.core.cache.backends.locmem.LocMemCache\",\n 
environ_name=\"SEARCH_CACHE_BACKEND\",\n environ_prefix=None,\n ),\n \"LOCATION\": values.Value(\n \"search_cache\",\n environ_name=\"SEARCH_CACHE_NAME\",\n environ_prefix=None,\n ),\n \"TIMEOUT\": 60,\n },\n }\n\n # For more details about CMS_CACHE_DURATION, see :\n # http://docs.django-cms.org/en/latest/reference/configuration.html#cms-cache-durations\n CMS_CACHE_DURATIONS = values.DictValue(\n {\"menus\": 3600, \"content\": 60, \"permissions\": 3600}\n )\n\n # Sessions\n SESSION_ENGINE = values.Value(\"django.contrib.sessions.backends.db\")\n\n # Sentry\n SENTRY_DSN = values.Value(None, environ_name=\"SENTRY_DSN\")\n\n # Web Analytics\n WEB_ANALYTICS = values.DictValue(\n None,\n environ_name=\"WEB_ANALYTICS\",\n environ_prefix=None,\n )\n\n # Performance configuration, preconnect to the media CDN\n MEDIA_HOSTNAME_PRECONNECT = values.BooleanValue(\n False, environ_name=\"MEDIA_HOSTNAME_PRECONNECT\", environ_prefix=None\n )\n\n # Minimum enrollment count value that would be shown on course detail page\n RICHIE_MINIMUM_COURSE_RUNS_ENROLLMENT_COUNT = values.Value(\n 5000,\n environ_name=\"RICHIE_MINIMUM_COURSE_RUNS_ENROLLMENT_COUNT\",\n environ_prefix=None,\n )\n\n @classmethod\n def _get_environment(cls):\n \"\"\"Environment in which the application is launched.\"\"\"\n return cls.__name__.lower()\n\n # pylint: disable=invalid-name\n @property\n def ENVIRONMENT(self):\n \"\"\"Environment in which the application is launched.\"\"\"\n return self._get_environment()\n\n # pylint: disable=invalid-name\n @property\n def RELEASE(self):\n \"\"\"\n Return the release information.\n\n Delegate to the module function to enable easier testing.\n \"\"\"\n return get_release()\n\n @classmethod\n def post_setup(cls):\n \"\"\"Post setup configuration.\n This is the place where you can configure settings that require other\n settings to be loaded.\n \"\"\"\n super().post_setup()\n\n # The SENTRY_DSN setting should be available to activate sentry for an environment\n if cls.SENTRY_DSN is not None:\n sentry_sdk.init( # pylint: disable=abstract-class-instantiated\n dsn=cls.SENTRY_DSN,\n environment=cls._get_environment(),\n release=get_release(),\n integrations=[DjangoIntegration()],\n )\n with sentry_sdk.configure_scope() as scope:\n scope.set_extra(\"application\", \"backend\")\n\n # If a Joanie Backend has been configured, we add it into LMS_BACKENDS dict\n if cls.JOANIE_BACKEND.get(\"BASE_URL\") is not None:\n cls.RICHIE_LMS_BACKENDS.append(cls.JOANIE_BACKEND)\n\n\nclass Development(Base):\n \"\"\"\n Development environment settings\n\n We set DEBUG to True and configure the server to respond from all hosts.\n \"\"\"\n\n DEBUG = True\n ALLOWED_HOSTS = [\"*\"]\n # Needed by LTI Consumer plugin\n # When we use a LTI provider on localhost domain, browser security needs to be lowered,\n # as crossdomain iframe posting is dangerous.\n SECURE_REFERRER_POLICY = \"unsafe-url\"\n\n\nclass Test(Base):\n \"\"\"Test environment settings\"\"\"\n\n CACHES = {\n \"default\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": \"mymaster/redis-sentinel:26379,redis-sentinel:26379/0\",\n \"OPTIONS\": {\"CLIENT_CLASS\": \"richie.apps.core.cache.SentinelClient\"},\n },\n \"search\": {\n \"BACKEND\": \"django.core.cache.backends.locmem.LocMemCache\",\n \"LOCATION\": \"search_cache\",\n \"TIMEOUT\": 60,\n },\n }\n\n RICHIE_LMS_BACKENDS = [\n {\n \"BASE_URL\": \"http://localhost:8073\",\n \"BACKEND\": \"richie.apps.courses.lms.edx.EdXLMSBackend\",\n \"COURSE_REGEX\": 
r\"^.*/courses/(?P<course_id>.*)/course/?$\",\n \"JS_BACKEND\": \"dummy\",\n \"JS_COURSE_REGEX\": r\"^.*/courses/(.*)/course/?$\",\n }\n ]\n\n\nclass ContinuousIntegration(Test):\n \"\"\"\n Continuous Integration environment settings\n\n nota bene: it should inherit from the Test environment.\n \"\"\"\n\n\nclass Production(Base):\n \"\"\"Production environment settings\n\n You must define the DJANGO_ALLOWED_HOSTS and DJANGO_SECRET_KEY environment\n variables in Production configuration (and derived configurations):\n\n DJANGO_ALLOWED_HOSTS=\"foo.com,foo.fr\"\n DJANGO_SECRET_KEY=\"your-secret-key\"\n \"\"\"\n\n # Security\n SECRET_KEY = values.SecretValue()\n ALLOWED_HOSTS = values.ListValue(None)\n CSRF_COOKIE_SECURE = True\n SECURE_BROWSER_XSS_FILTER = True\n SECURE_CONTENT_TYPE_NOSNIFF = True\n SESSION_COOKIE_SECURE = True\n\n # For static files in production, we want to use a backend that includes a hash in\n # the filename, that is calculated from the file content, so that browsers always\n # get the updated version of each file.\n STATICFILES_STORAGE = (\n \"django.contrib.staticfiles.storage.ManifestStaticFilesStorage\"\n )\n\n # For more details about CMS_CACHE_DURATION, see :\n # http://docs.django-cms.org/en/latest/reference/configuration.html#cms-cache-durations\n CMS_CACHE_DURATIONS = values.DictValue(\n {\"menus\": 3600, \"content\": 86400, \"permissions\": 86400}\n )\n\n # By default, Django CMS sends cached responses with a\n # Cache-control: max-age value that reflects the server cache TTL\n # (CMS_CACHE_DURATIONS[\"content\"])\n #\n # The thing is : we can invalidate a server side cache entry, but we cannot\n # invalidate our client browser cache entries. That's why we want to set a\n # long TTL on the server side, but a much lower TTL on the browser cache.\n #\n # This setting allows to define a maximum value for the max-age header\n # returned by Django CMS views.\n MAX_BROWSER_CACHE_TTL = 600\n\n\nclass Feature(Production):\n \"\"\"\n Feature environment settings\n\n nota bene: it should inherit from the Production environment.\n \"\"\"\n\n\nclass Staging(Production):\n \"\"\"\n Staging environment settings\n\n nota bene: it should inherit from the Production environment.\n \"\"\"\n\n\nclass PreProduction(Production):\n \"\"\"\n Pre-production environment settings\n\n nota bene: it should inherit from the Production environment.\n \"\"\"\n",
"path": "sandbox/settings.py"
}
] | [
{
"content": "\"\"\"\nDjango settings for richie project.\n\"\"\"\n\nimport json\nimport os\n\nfrom django.utils.translation import gettext_lazy as _\n\n# pylint: disable=ungrouped-imports\nimport sentry_sdk\nfrom configurations import Configuration, values\nfrom sentry_sdk.integrations.django import DjangoIntegration\n\nfrom richie.apps.courses.settings.mixins import RichieCoursesConfigurationMixin\n\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nDATA_DIR = os.path.join(\"/\", \"data\")\n\n\ndef get_release():\n \"\"\"Get the current release of the application.\n\n By release, we mean the release from the version.json file à la Mozilla [1]\n (if any). If this file has not been found, it defaults to \"NA\".\n\n [1]\n https://github.com/mozilla-services/Dockerflow/blob/master/docs/version_object.md\n \"\"\"\n # Try to get the current release from the version.json file generated by the\n # CI during the Docker image build\n try:\n with open(os.path.join(BASE_DIR, \"version.json\"), encoding=\"utf8\") as version:\n return json.load(version)[\"version\"]\n except FileNotFoundError:\n return \"NA\" # Default: not available\n\n\nclass StyleguideMixin:\n \"\"\"\n Theme styleguide reference\n\n Only used to build styleguide page without to hardcode properties and\n values into styleguide template.\n \"\"\"\n\n STYLEGUIDE = {\n # Available font family names\n \"fonts\": [\"hind\", \"montserrat\"],\n # Named color palette\n \"palette\": [\n \"black\",\n \"dark-grey\",\n \"charcoal\",\n \"slate-grey\",\n \"battleship-grey\",\n \"light-grey\",\n \"silver\",\n \"azure2\",\n \"smoke\",\n \"white\",\n \"denim\",\n \"firebrick6\",\n \"grey32\",\n \"grey59\",\n \"grey87\",\n \"purplish-grey\",\n \"midnightblue\",\n \"indianred3\",\n ],\n # Available gradient background\n \"gradient_colors\": [\n \"neutral-gradient\",\n \"middle-gradient\",\n \"dark-gradient\",\n \"white-mask-gradient\",\n ],\n # Available color schemes\n \"schemes\": [\n \"primary\",\n \"secondary\",\n \"tertiary\",\n \"clear\",\n \"light\",\n \"lightest\",\n \"neutral-gradient\",\n \"middle-gradient\",\n \"dark-gradient\",\n \"white-mask-gradient\",\n \"transparent-darkest\",\n \"clouds\",\n \"waves\",\n \"purplish-grey\",\n \"battleship-grey\",\n ],\n }\n\n\nclass DRFMixin:\n \"\"\"\n Django Rest Framework configuration mixin.\n NB: DRF picks its settings from the REST_FRAMEWORK namespace on the settings, hence\n the nesting of all our values inside that prop\n \"\"\"\n\n REST_FRAMEWORK = {\n \"ALLOWED_VERSIONS\": (\"1.0\",),\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n \"rest_framework.authentication.SessionAuthentication\",\n ),\n \"DEFAULT_VERSION\": \"1.0\",\n \"DEFAULT_VERSIONING_CLASS\": \"rest_framework.versioning.URLPathVersioning\",\n }\n\n\nclass Base(StyleguideMixin, DRFMixin, RichieCoursesConfigurationMixin, Configuration):\n \"\"\"\n This is the base configuration every configuration (aka environnement) should inherit from. 
It\n is recommended to configure third-party applications by creating a configuration mixins in\n ./configurations and compose the Base configuration with those mixins.\n\n It depends on an environment variable that SHOULD be defined:\n\n * DJANGO_SECRET_KEY\n\n You may also want to override default configuration by setting the following environment\n variables:\n\n * DJANGO_SENTRY_DSN\n * RICHIE_ES_HOST\n * DB_NAME\n * DB_HOST\n * DB_PASSWORD\n * DB_USER\n \"\"\"\n\n DEBUG = False\n\n SITE_ID = 1\n\n # Security\n ALLOWED_HOSTS = []\n SECRET_KEY = values.Value(\"ThisIsAnExampleKeyForDevPurposeOnly\")\n # System check reference:\n # https://docs.djangoproject.com/en/2.2/ref/checks/#security\n SILENCED_SYSTEM_CHECKS = values.ListValue(\n [\n # Allow the X_FRAME_OPTIONS to be set to \"SAMEORIGIN\"\n \"security.W019\"\n ]\n )\n # The X_FRAME_OPTIONS value should be set to \"SAMEORIGIN\" to display\n # DjangoCMS frontend admin frames. Dockerflow raises a system check security\n # warning with this setting, one should add \"security.W019\" to the\n # SILENCED_SYSTEM_CHECKS setting (see above).\n X_FRAME_OPTIONS = \"SAMEORIGIN\"\n\n # Application definition\n ROOT_URLCONF = \"urls\"\n WSGI_APPLICATION = \"wsgi.application\"\n\n # Database\n DATABASES = {\n \"default\": {\n \"ENGINE\": values.Value(\n \"django.db.backends.postgresql_psycopg2\",\n environ_name=\"DB_ENGINE\",\n environ_prefix=None,\n ),\n \"NAME\": values.Value(\"richie\", environ_name=\"DB_NAME\", environ_prefix=None),\n \"USER\": values.Value(\"fun\", environ_name=\"DB_USER\", environ_prefix=None),\n \"PASSWORD\": values.Value(\n \"pass\", environ_name=\"DB_PASSWORD\", environ_prefix=None\n ),\n \"HOST\": values.Value(\n \"localhost\", environ_name=\"DB_HOST\", environ_prefix=None\n ),\n \"PORT\": values.Value(5432, environ_name=\"DB_PORT\", environ_prefix=None),\n }\n }\n DEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n MIGRATION_MODULES = {}\n\n # Static files (CSS, JavaScript, Images)\n STATIC_URL = \"/static/\"\n MEDIA_URL = \"/media/\"\n MEDIA_ROOT = os.path.join(DATA_DIR, \"media\")\n STATIC_ROOT = os.path.join(DATA_DIR, \"static\")\n\n # Login/registration related settings\n LOGIN_REDIRECT_URL = \"/\"\n LOGOUT_REDIRECT_URL = \"/\"\n LOGIN_URL = \"login\"\n LOGOUT_URL = \"logout\"\n\n AUTHENTICATION_BACKENDS = (\"django.contrib.auth.backends.ModelBackend\",)\n\n # Mapping between edx and richie profile fields\n EDX_USER_PROFILE_TO_DJANGO = values.DictValue()\n\n # Feature flags\n FEATURES = values.DictValue(environ_name=\"FEATURES\", environ_prefix=None)\n\n # Joanie\n \"\"\"\n NB: Richie picks all Joanie's settings from the JOANIE_BACKEND namespace in the\n settings, hence the nesting of all Joanie's values inside that prop.\n\n If BASE_URL is defined, this setting is bound into RICHIE_LMS_BACKENDS to use Joanie\n as a LMS BACKEND.\n \"\"\"\n JOANIE_BACKEND = {\n \"BASE_URL\": values.Value(environ_name=\"JOANIE_BASE_URL\", environ_prefix=None),\n \"BACKEND\": values.Value(\n \"richie.apps.courses.lms.joanie.JoanieBackend\",\n environ_name=\"JOANIE_BACKEND\",\n environ_prefix=None,\n ),\n \"JS_BACKEND\": values.Value(\n \"joanie\", environ_name=\"JOANIE_JS_BACKEND\", environ_prefix=None\n ),\n \"COURSE_REGEX\": values.Value(\n r\"^.*/api/v1.0(?P<resource_uri>(?:/(?:courses|course-runs|products)/[^/]+)+)/?$\",\n environ_name=\"JOANIE_COURSE_REGEX\",\n environ_prefix=None,\n ),\n \"JS_COURSE_REGEX\": values.Value(\n r\"^.*/api/v1.0((?:/(?:courses|course-runs|products)/[^/]+)+)/?$\",\n 
environ_name=\"JOANIE_JS_COURSE_REGEX\",\n environ_prefix=None,\n ),\n # Course runs synchronization\n \"COURSE_RUN_SYNC_NO_UPDATE_FIELDS\": [],\n \"DEFAULT_COURSE_RUN_SYNC_MODE\": \"sync_to_public\",\n }\n\n # LMS\n RICHIE_LMS_BACKENDS = [\n {\n # We configure default values that work with the test configuration of\n # github.com/openfun/openedx-docker.\n \"BASE_URL\": values.Value(environ_name=\"EDX_BASE_URL\", environ_prefix=None),\n # Django backend\n \"BACKEND\": values.Value(\n \"richie.apps.courses.lms.edx.EdXLMSBackend\",\n environ_name=\"EDX_BACKEND\",\n environ_prefix=None,\n ),\n \"COURSE_REGEX\": values.Value(\n r\"^.*/courses/(?P<course_id>.*)/course/?$\",\n environ_name=\"EDX_COURSE_REGEX\",\n environ_prefix=None,\n ),\n # React frontend\n \"JS_BACKEND\": values.Value(\n \"openedx-hawthorn\", environ_name=\"EDX_JS_BACKEND\", environ_prefix=None\n ),\n \"JS_COURSE_REGEX\": values.Value(\n r\"^.*/courses/(.*)/course/?$\",\n environ_name=\"EDX_JS_COURSE_REGEX\",\n environ_prefix=None,\n ),\n # Course runs synchronization\n \"COURSE_RUN_SYNC_NO_UPDATE_FIELDS\": [],\n \"DEFAULT_COURSE_RUN_SYNC_MODE\": \"sync_to_public\",\n }\n ]\n RICHIE_COURSE_RUN_SYNC_SECRETS = values.ListValue([])\n\n # AUTHENTICATION\n profile_dashboard_urls = {\n \"dashboard\": {\n \"label\": _(\"Dashboard\"),\n \"href\": _(\"{base_url:s}/dashboard/\"),\n },\n }\n if (\n FEATURES.get(\"REACT_DASHBOARD\", False) # pylint: disable=no-member\n and JOANIE_BACKEND.get(\"BASE_URL\") is not None\n ):\n profile_dashboard_urls = {\n \"dashboard\": {\n \"label\": _(\"Dashboard\"),\n \"href\": _(\"/dashboard/\"),\n },\n \"dashboard_teacher\": {\n \"label\": _(\"Course administration\"),\n \"href\": _(\"/dashboard/teacher\"),\n },\n }\n\n RICHIE_AUTHENTICATION_DELEGATION = {\n \"BASE_URL\": values.Value(\n \"\", environ_name=\"AUTHENTICATION_BASE_URL\", environ_prefix=None\n ),\n \"BACKEND\": values.Value(\n \"dummy\", environ_name=\"AUTHENTICATION_BACKEND\", environ_prefix=None\n ),\n # PROFILE_URLS are custom links to access to Auth profile views\n # from Richie. 
Link order will reflect the order of display in frontend.\n # (i) Info - {base_url} is RICHIE_AUTHENTICATION_DELEGATION.BASE_URL\n # (i) If you need to bind user data into href url, wrap the property between ()\n # e.g: for user.username = johndoe, /u/(username) will be /u/johndoe\n \"PROFILE_URLS\": values.DictValue(\n {\n **profile_dashboard_urls,\n \"profile\": {\n \"label\": _(\"Profile\"),\n \"href\": _(\"{base_url:s}/u/(username)\"),\n },\n \"account\": {\n \"label\": _(\"Account\"),\n \"href\": _(\"{base_url:s}/account/settings\"),\n },\n },\n environ_name=\"AUTHENTICATION_PROFILE_URLS\",\n environ_prefix=None,\n ),\n }\n\n # Elasticsearch\n RICHIE_ES_HOST = values.ListValue(\n [\"elasticsearch\"], environ_name=\"RICHIE_ES_HOST\", environ_prefix=None\n )\n RICHIE_ES_INDICES_PREFIX = values.Value(\n default=\"richie\", environ_name=\"RICHIE_ES_INDICES_PREFIX\", environ_prefix=None\n )\n RICHIE_ES_STATE_WEIGHTS = values.ListValue(None)\n\n # LTI Content\n RICHIE_LTI_PROVIDERS = {\n \"lti_provider_test\": {\n \"oauth_consumer_key\": values.Value(\n \"InsecureOauthConsumerKey\",\n environ_name=\"LTI_TEST_OAUTH_CONSUMER_KEY\",\n environ_prefix=None,\n ),\n \"shared_secret\": values.Value(\n \"InsecureSharedSecret\",\n environ_name=\"LTI_TEST_SHARED_SECRET\",\n environ_prefix=None,\n ),\n \"base_url\": values.Value(\n \"https://lti.tools/saltire/tp\",\n environ_name=\"LTI_TEST_BASE_URL\",\n environ_prefix=None,\n ),\n \"display_name\": \"LTI Provider Test Video\",\n \"is_base_url_regex\": False,\n \"is_automatic_resizing\": True,\n \"inline_ratio\": 0.5625,\n }\n }\n\n # Internationalization\n TIME_ZONE = \"Europe/Paris\"\n USE_I18N = True\n USE_L10N = True\n USE_TZ = True\n\n # Templates\n TEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [os.path.join(BASE_DIR, \"templates\")],\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n \"django.template.context_processors.i18n\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.template.context_processors.media\",\n \"django.template.context_processors.csrf\",\n \"django.template.context_processors.tz\",\n \"sekizai.context_processors.sekizai\",\n \"django.template.context_processors.static\",\n \"cms.context_processors.cms_settings\",\n \"richie.apps.core.context_processors.site_metas\",\n ],\n \"loaders\": [\n \"django.template.loaders.filesystem.Loader\",\n \"django.template.loaders.app_directories.Loader\",\n ],\n },\n }\n ]\n\n MIDDLEWARE = (\n \"richie.apps.core.cache.LimitBrowserCacheTTLHeaders\",\n \"cms.middleware.utils.ApphookReloadMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"dockerflow.django.middleware.DockerflowMiddleware\",\n \"cms.middleware.user.CurrentUserMiddleware\",\n \"cms.middleware.page.CurrentPageMiddleware\",\n \"cms.middleware.toolbar.ToolbarMiddleware\",\n \"cms.middleware.language.LanguageCookieMiddleware\",\n \"dj_pagination.middleware.PaginationMiddleware\",\n )\n\n # Django applications from 
the highest priority to the lowest\n INSTALLED_APPS = (\n # Richie stuff\n \"richie.apps.demo\",\n \"richie.apps.search\",\n \"richie.apps.courses\",\n \"richie.apps.core\",\n \"richie.plugins.glimpse\",\n \"richie.plugins.html_sitemap\",\n \"richie.plugins.large_banner\",\n \"richie.plugins.nesteditem\",\n \"richie.plugins.plain_text\",\n \"richie.plugins.section\",\n \"richie.plugins.simple_picture\",\n \"richie.plugins.simple_text_ckeditor\",\n \"richie.plugins.lti_consumer\",\n \"richie\",\n # Third party apps\n \"dj_pagination\",\n \"dockerflow.django\",\n \"parler\",\n \"rest_framework\",\n # Django-cms\n \"djangocms_admin_style\",\n \"djangocms_googlemap\",\n \"djangocms_link\",\n \"djangocms_picture\",\n \"djangocms_text_ckeditor\",\n \"djangocms_video\",\n \"cms\",\n \"menus\",\n \"sekizai\",\n \"treebeard\",\n \"filer\",\n \"easy_thumbnails\",\n # django-autocomplete-light\n \"dal\",\n \"dal_select2\",\n # Django\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.admin\",\n \"django.contrib.sites\",\n \"django.contrib.sitemaps\",\n \"django.contrib.staticfiles\",\n \"django.contrib.messages\",\n \"django.contrib.humanize\",\n )\n\n # Languages\n # - Django\n LANGUAGE_CODE = \"en\"\n\n # Careful! Languages should be ordered by priority, as this tuple is used to get\n # fallback/default languages throughout the app.\n # Use \"en\" as default as it is the language that is most likely to be spoken by any visitor\n # when their preferred language, whatever it is, is unavailable\n LANGUAGES = ((\"en\", _(\"English\")), (\"fr\", _(\"French\")))\n\n # - Django CMS\n CMS_LANGUAGES = {\n \"default\": {\n \"public\": True,\n \"hide_untranslated\": False,\n \"redirect_on_fallback\": False,\n \"fallbacks\": [\"en\", \"fr\"],\n },\n 1: [\n {\n \"public\": True,\n \"code\": \"en\",\n \"hide_untranslated\": False,\n \"name\": _(\"English\"),\n \"fallbacks\": [\"fr\"],\n \"redirect_on_fallback\": False,\n },\n {\n \"public\": True,\n \"code\": \"fr\",\n \"hide_untranslated\": False,\n \"name\": _(\"French\"),\n \"fallbacks\": [\"en\"],\n \"redirect_on_fallback\": False,\n },\n ],\n }\n\n # - Django Parler\n PARLER_LANGUAGES = CMS_LANGUAGES\n\n # Permisions\n # - Django CMS\n CMS_PERMISSION = True\n\n # - Django Filer\n FILER_ENABLE_PERMISSIONS = True\n FILER_IS_PUBLIC_DEFAULT = True\n\n # - Django Pagination\n PAGINATION_INVALID_PAGE_RAISES_404 = True\n PAGINATION_DEFAULT_WINDOW = 2\n PAGINATION_DEFAULT_MARGIN = 1\n\n # Logging\n LOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": True,\n \"formatters\": {\n \"verbose\": {\n \"format\": \"%(levelname)s %(asctime)s %(module)s \"\n \"%(process)d %(thread)d %(message)s\"\n }\n },\n \"handlers\": {\n \"console\": {\n \"level\": \"DEBUG\",\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"verbose\",\n }\n },\n \"loggers\": {\n \"django.db.backends\": {\n \"level\": \"ERROR\",\n \"handlers\": [\"console\"],\n \"propagate\": False,\n }\n },\n }\n\n # Cache\n CACHES = {\n \"default\": {\n \"BACKEND\": values.Value(\n \"django.core.cache.backends.locmem.LocMemCache\",\n environ_name=\"CACHE_DEFAULT_BACKEND\",\n environ_prefix=None,\n ),\n \"LOCATION\": values.Value(\n \"\", environ_name=\"CACHE_DEFAULT_LOCATION\", environ_prefix=None\n ),\n \"OPTIONS\": values.DictValue(\n {}, environ_name=\"CACHE_DEFAULT_OPTIONS\", environ_prefix=None\n ),\n },\n \"search\": {\n \"BACKEND\": values.Value(\n \"django.core.cache.backends.locmem.LocMemCache\",\n 
environ_name=\"SEARCH_CACHE_BACKEND\",\n environ_prefix=None,\n ),\n \"LOCATION\": values.Value(\n \"search_cache\",\n environ_name=\"SEARCH_CACHE_NAME\",\n environ_prefix=None,\n ),\n \"TIMEOUT\": 60,\n },\n }\n\n # For more details about CMS_CACHE_DURATION, see :\n # http://docs.django-cms.org/en/latest/reference/configuration.html#cms-cache-durations\n CMS_CACHE_DURATIONS = values.DictValue(\n {\"menus\": 3600, \"content\": 60, \"permissions\": 3600}\n )\n\n # Sessions\n SESSION_ENGINE = values.Value(\"django.contrib.sessions.backends.db\")\n\n # Sentry\n SENTRY_DSN = values.Value(None, environ_name=\"SENTRY_DSN\")\n\n # Web Analytics\n WEB_ANALYTICS = values.DictValue(\n None,\n environ_name=\"WEB_ANALYTICS\",\n environ_prefix=None,\n )\n\n # Performance configuration, preconnect to the media CDN\n MEDIA_HOSTNAME_PRECONNECT = values.BooleanValue(\n False, environ_name=\"MEDIA_HOSTNAME_PRECONNECT\", environ_prefix=None\n )\n\n # Minimum enrollment count value that would be shown on course detail page\n RICHIE_MINIMUM_COURSE_RUNS_ENROLLMENT_COUNT = values.Value(\n 5000,\n environ_name=\"RICHIE_MINIMUM_COURSE_RUNS_ENROLLMENT_COUNT\",\n environ_prefix=None,\n )\n\n @classmethod\n def _get_environment(cls):\n \"\"\"Environment in which the application is launched.\"\"\"\n return cls.__name__.lower()\n\n # pylint: disable=invalid-name\n @property\n def ENVIRONMENT(self):\n \"\"\"Environment in which the application is launched.\"\"\"\n return self._get_environment()\n\n # pylint: disable=invalid-name\n @property\n def RELEASE(self):\n \"\"\"\n Return the release information.\n\n Delegate to the module function to enable easier testing.\n \"\"\"\n return get_release()\n\n @classmethod\n def post_setup(cls):\n \"\"\"Post setup configuration.\n This is the place where you can configure settings that require other\n settings to be loaded.\n \"\"\"\n super().post_setup()\n\n # The SENTRY_DSN setting should be available to activate sentry for an environment\n if cls.SENTRY_DSN is not None:\n sentry_sdk.init( # pylint: disable=abstract-class-instantiated\n dsn=cls.SENTRY_DSN,\n environment=cls._get_environment(),\n release=get_release(),\n integrations=[DjangoIntegration()],\n )\n with sentry_sdk.configure_scope() as scope:\n scope.set_extra(\"application\", \"backend\")\n\n # If a Joanie Backend has been configured, we add it into LMS_BACKENDS dict\n if cls.JOANIE_BACKEND.get(\"BASE_URL\") is not None:\n cls.RICHIE_LMS_BACKENDS.append(cls.JOANIE_BACKEND)\n\n\nclass Development(Base):\n \"\"\"\n Development environment settings\n\n We set DEBUG to True and configure the server to respond from all hosts.\n \"\"\"\n\n DEBUG = True\n ALLOWED_HOSTS = [\"*\"]\n # Needed by LTI Consumer plugin\n # When we use a LTI provider on localhost domain, browser security needs to be lowered,\n # as crossdomain iframe posting is dangerous.\n SECURE_REFERRER_POLICY = \"unsafe-url\"\n\n\nclass Test(Base):\n \"\"\"Test environment settings\"\"\"\n\n CACHES = {\n \"default\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": \"mymaster/redis-sentinel:26379,redis-sentinel:26379/0\",\n \"OPTIONS\": {\"CLIENT_CLASS\": \"richie.apps.core.cache.SentinelClient\"},\n },\n \"search\": {\n \"BACKEND\": \"django.core.cache.backends.locmem.LocMemCache\",\n \"LOCATION\": \"search_cache\",\n \"TIMEOUT\": 60,\n },\n }\n\n RICHIE_LMS_BACKENDS = [\n {\n \"BASE_URL\": \"http://localhost:8073\",\n \"BACKEND\": \"richie.apps.courses.lms.edx.EdXLMSBackend\",\n \"COURSE_REGEX\": 
r\"^.*/courses/(?P<course_id>.*)/course/?$\",\n \"JS_BACKEND\": \"dummy\",\n \"JS_COURSE_REGEX\": r\"^.*/courses/(.*)/course/?$\",\n }\n ]\n\n\nclass ContinuousIntegration(Test):\n \"\"\"\n Continuous Integration environment settings\n\n nota bene: it should inherit from the Test environment.\n \"\"\"\n\n\nclass Production(Base):\n \"\"\"Production environment settings\n\n You must define the DJANGO_ALLOWED_HOSTS and DJANGO_SECRET_KEY environment\n variables in Production configuration (and derived configurations):\n\n DJANGO_ALLOWED_HOSTS=\"foo.com,foo.fr\"\n DJANGO_SECRET_KEY=\"your-secret-key\"\n \"\"\"\n\n # Security\n SECRET_KEY = values.SecretValue()\n ALLOWED_HOSTS = values.ListValue(None)\n CSRF_COOKIE_SECURE = True\n SECURE_BROWSER_XSS_FILTER = True\n SECURE_CONTENT_TYPE_NOSNIFF = True\n SESSION_COOKIE_SECURE = True\n\n # For static files in production, we want to use a backend that includes a hash in\n # the filename, that is calculated from the file content, so that browsers always\n # get the updated version of each file.\n STATICFILES_STORAGE = (\n \"django.contrib.staticfiles.storage.ManifestStaticFilesStorage\"\n )\n\n # For more details about CMS_CACHE_DURATION, see :\n # http://docs.django-cms.org/en/latest/reference/configuration.html#cms-cache-durations\n CMS_CACHE_DURATIONS = values.DictValue(\n {\"menus\": 3600, \"content\": 86400, \"permissions\": 86400}\n )\n\n # By default, Django CMS sends cached responses with a\n # Cache-control: max-age value that reflects the server cache TTL\n # (CMS_CACHE_DURATIONS[\"content\"])\n #\n # The thing is : we can invalidate a server side cache entry, but we cannot\n # invalidate our client browser cache entries. That's why we want to set a\n # long TTL on the server side, but a much lower TTL on the browser cache.\n #\n # This setting allows to define a maximum value for the max-age header\n # returned by Django CMS views.\n MAX_BROWSER_CACHE_TTL = 600\n\n\nclass Feature(Production):\n \"\"\"\n Feature environment settings\n\n nota bene: it should inherit from the Production environment.\n \"\"\"\n\n\nclass Staging(Production):\n \"\"\"\n Staging environment settings\n\n nota bene: it should inherit from the Production environment.\n \"\"\"\n\n\nclass PreProduction(Production):\n \"\"\"\n Pre-production environment settings\n\n nota bene: it should inherit from the Production environment.\n \"\"\"\n",
"path": "sandbox/settings.py"
}
] | diff --git a/sandbox/settings.py b/sandbox/settings.py
index e83e9c0fc0..98fcb07910 100644
--- a/sandbox/settings.py
+++ b/sandbox/settings.py
@@ -284,7 +284,7 @@ class Base(StyleguideMixin, DRFMixin, RichieCoursesConfigurationMixin, Configura
"href": _("/dashboard/"),
},
"dashboard_teacher": {
- "label": _("Teacher dashboard"),
+ "label": _("Course administration"),
"href": _("/dashboard/teacher"),
},
}
diff --git a/src/frontend/js/widgets/UserLogin/components/UserMenu/DesktopUserMenu.tsx b/src/frontend/js/widgets/UserLogin/components/UserMenu/DesktopUserMenu.tsx
index d8ed22e4dd..8f2c71d03e 100644
--- a/src/frontend/js/widgets/UserLogin/components/UserMenu/DesktopUserMenu.tsx
+++ b/src/frontend/js/widgets/UserLogin/components/UserMenu/DesktopUserMenu.tsx
@@ -1,6 +1,7 @@
import { FC } from 'react';
import { defineMessages, FormattedMessage } from 'react-intl';
import { useSelect } from 'downshift';
+import classNames from 'classnames';
import { location } from 'utils/indirection/window';
import { UserHelper } from 'utils/UserHelper';
import { UserMenuProps } from '.';
@@ -36,6 +37,21 @@ export const DesktopUserMenu: FC<UserMenuProps> = ({ user }) => {
},
});
+ const teacherDasbhoardUrl = user.urls.find((link) => {
+ return link.key === 'dashboard_teacher';
+ });
+ let menuLinkList;
+ if (teacherDasbhoardUrl) {
+ menuLinkList = [
+ teacherDasbhoardUrl,
+ ...user.urls.filter((link) => {
+ return link.key !== 'dashboard_teacher';
+ }),
+ ];
+ } else {
+ menuLinkList = user.urls;
+ }
+
return (
<div className="user-menu user-menu--desktop selector">
<label {...getLabelProps()} className="offscreen">
@@ -52,8 +68,14 @@ export const DesktopUserMenu: FC<UserMenuProps> = ({ user }) => {
className={`selector__list ${isOpen ? '' : 'selector__list--is-closed'}`}
>
{isOpen &&
- user.urls.map((link, index) => (
- <li key={link.key} {...getItemProps({ item: link, index })}>
+ menuLinkList.map((link, index) => (
+ <li
+ key={link.key}
+ {...getItemProps({ item: link, index })}
+ className={classNames({
+ 'selector__list__item--bordered': link.key === 'dashboard_teacher',
+ })}
+ >
{typeof link.action === 'string' ? (
<a
className={`selector__list__link ${
diff --git a/src/frontend/scss/objects/_selector.scss b/src/frontend/scss/objects/_selector.scss
index 280ba4861f..917d75e9c4 100644
--- a/src/frontend/scss/objects/_selector.scss
+++ b/src/frontend/scss/objects/_selector.scss
@@ -57,6 +57,12 @@
margin-left: calc(3rem - 12px);
}
+ &__item {
+ &--bordered:not(:last-child) {
+ border-bottom: $onepixel solid r-theme-val(topbar, item-divider-border);
+ }
+ }
+
&__link {
@include button-reset-style();
background: r-theme-val(selector, base-background);
|
ckan__ckan-5478 | routes manual reference URL in comment is broken
**CKAN version**
latest
**Describe the bug**
The URL in the [comment](https://github.com/ckan/ckan/blob/0f87337fd937a15545ed761367b5d27d888e3803/ckan/config/routing.py#L6) is broken.
**Steps to reproduce**
Steps to reproduce the behavior:
Open a browser and go to "http://routes.groovie.org/docs/"

**Expected behavior**
A valid documentation reference.
| [
{
"content": "# encoding: utf-8\n\"\"\"Routes configuration\n\nThe more specific and detailed routes should be defined first so they\nmay take precedent over the more generic routes. For more information\nrefer to the routes manual at http://routes.groovie.org/docs/\n\n\"\"\"\nimport re\n\nfrom routes.mapper import SubMapper, Mapper as _Mapper\n\nimport ckan.plugins as p\nfrom ckan.common import config, current_app\n\nnamed_routes = {}\n\n\nclass Mapper(_Mapper):\n ''' This Mapper allows us to intercept the connect calls used by routes\n so that we can collect named routes and later use them to create links\n via some helper functions like build_nav(). '''\n\n def connect(self, *args, **kw):\n '''Connect a new route, storing any named routes for later.\n\n This custom connect() method wraps the standard connect() method,\n and additionally saves any named routes that are connected in a dict\n ckan.routing.named_routes, which ends up being accessible via the\n Pylons config as config['routes.named_routes'].\n\n Also takes some additional params:\n\n :param ckan_icon: name of the icon to be associated with this route,\n e.g. 'group', 'time'. Available icons are listed here:\n http://fortawesome.github.io/Font-Awesome/3.2.1/icons/\n :type ckan_icon: string\n :param highlight_actions: space-separated list of controller actions\n that should be treated as the same as this named route for menu\n highlighting purposes, e.g. 'index search'\n :type highlight_actions: string\n\n '''\n\n ckan_icon = kw.pop('ckan_icon', None)\n highlight_actions = kw.pop('highlight_actions', kw.get('action', ''))\n ckan_core = kw.pop('ckan_core', None)\n out = _Mapper.connect(self, *args, **kw)\n route = self.matchlist[-1]\n if ckan_core is not None:\n route._ckan_core = ckan_core\n if len(args) == 1 or args[0].startswith('_redirect_'):\n return out\n # we have a named route\n needed = []\n matches = re.findall('\\{([^:}]*)(\\}|:)', args[1])\n for match in matches:\n needed.append(match[0])\n route_data = {\n 'icon': ckan_icon,\n # needed lists the names of the parameters that need defining\n # for the route to be generated\n 'needed': needed,\n 'controller': kw.get('controller'),\n 'action': kw.get('action', ''),\n 'highlight_actions': highlight_actions\n }\n named_routes[args[0]] = route_data\n return out\n\n\ndef make_map():\n \"\"\"Create, configure and return the routes Mapper\"\"\"\n # import controllers here rather than at root level because\n # pylons config is initialised by this point.\n\n # Helpers to reduce code clutter\n GET = dict(method=['GET'])\n PUT = dict(method=['PUT'])\n POST = dict(method=['POST'])\n DELETE = dict(method=['DELETE'])\n GET_POST = dict(method=['GET', 'POST'])\n PUT_POST = dict(method=['PUT', 'POST'])\n PUT_POST_DELETE = dict(method=['PUT', 'POST', 'DELETE'])\n OPTIONS = dict(method=['OPTIONS'])\n\n map = Mapper(\n directory=config['pylons.paths']['controllers'],\n always_scan=config['debug'])\n map.minimization = False\n map.explicit = True\n\n # CUSTOM ROUTES HERE\n for plugin in p.PluginImplementations(p.IRoutes):\n map = plugin.before_map(map)\n\n # The ErrorController route (handles 404/500 error pages); it should\n # likely stay at the top, ensuring it can always be resolved.\n map.connect('/error/{action}', controller='error', ckan_core=True)\n map.connect('/error/{action}/{id}', controller='error', ckan_core=True)\n\n map.connect(\n '*url',\n controller='home',\n action='cors_options',\n conditions=OPTIONS,\n ckan_core=True)\n\n # Mark all routes added from extensions on the 
`before_map` extension point\n # as non-core\n for route in map.matchlist:\n if not hasattr(route, '_ckan_core'):\n route._ckan_core = False\n\n # /api/util ver 1, 2 or none\n with SubMapper(\n map, controller='api', path_prefix='/api{ver:/1|/2|}',\n ver='/1') as m:\n m.connect('/util/dataset/munge_name', action='munge_package_name')\n m.connect(\n '/util/dataset/munge_title_to_name',\n action='munge_title_to_package_name')\n m.connect('/util/tag/munge', action='munge_tag')\n\n ###########\n ## /END API\n ###########\n\n map.redirect('/packages', '/dataset')\n map.redirect('/packages/{url:.*}', '/dataset/{url}')\n map.redirect('/package', '/dataset')\n map.redirect('/package/{url:.*}', '/dataset/{url}')\n\n # users\n map.redirect('/users/{url:.*}', '/user/{url}')\n\n # Mark all unmarked routes added up until now as core routes\n for route in map.matchlist:\n if not hasattr(route, '_ckan_core'):\n route._ckan_core = True\n\n for plugin in p.PluginImplementations(p.IRoutes):\n map = plugin.after_map(map)\n\n # Mark all routes added from extensions on the `after_map` extension point\n # as non-core\n for route in map.matchlist:\n if not hasattr(route, '_ckan_core'):\n route._ckan_core = False\n\n # sometimes we get requests for favicon.ico we should redirect to\n # the real favicon location.\n map.redirect('/favicon.ico', config.get('ckan.favicon'))\n\n map.redirect('/*(url)/', '/{url}', _redirect_code='301 Moved Permanently')\n\n return map\n",
"path": "ckan/config/routing.py"
}
] | [
{
"content": "# encoding: utf-8\n\"\"\"Routes configuration\n\nThe more specific and detailed routes should be defined first so they\nmay take precedent over the more generic routes. For more information\nrefer to the routes manual at https://routes.readthedocs.io/en/latest/\n\n\"\"\"\nimport re\n\nfrom routes.mapper import SubMapper, Mapper as _Mapper\n\nimport ckan.plugins as p\nfrom ckan.common import config, current_app\n\nnamed_routes = {}\n\n\nclass Mapper(_Mapper):\n ''' This Mapper allows us to intercept the connect calls used by routes\n so that we can collect named routes and later use them to create links\n via some helper functions like build_nav(). '''\n\n def connect(self, *args, **kw):\n '''Connect a new route, storing any named routes for later.\n\n This custom connect() method wraps the standard connect() method,\n and additionally saves any named routes that are connected in a dict\n ckan.routing.named_routes, which ends up being accessible via the\n Pylons config as config['routes.named_routes'].\n\n Also takes some additional params:\n\n :param ckan_icon: name of the icon to be associated with this route,\n e.g. 'group', 'time'. Available icons are listed here:\n http://fortawesome.github.io/Font-Awesome/3.2.1/icons/\n :type ckan_icon: string\n :param highlight_actions: space-separated list of controller actions\n that should be treated as the same as this named route for menu\n highlighting purposes, e.g. 'index search'\n :type highlight_actions: string\n\n '''\n\n ckan_icon = kw.pop('ckan_icon', None)\n highlight_actions = kw.pop('highlight_actions', kw.get('action', ''))\n ckan_core = kw.pop('ckan_core', None)\n out = _Mapper.connect(self, *args, **kw)\n route = self.matchlist[-1]\n if ckan_core is not None:\n route._ckan_core = ckan_core\n if len(args) == 1 or args[0].startswith('_redirect_'):\n return out\n # we have a named route\n needed = []\n matches = re.findall('\\{([^:}]*)(\\}|:)', args[1])\n for match in matches:\n needed.append(match[0])\n route_data = {\n 'icon': ckan_icon,\n # needed lists the names of the parameters that need defining\n # for the route to be generated\n 'needed': needed,\n 'controller': kw.get('controller'),\n 'action': kw.get('action', ''),\n 'highlight_actions': highlight_actions\n }\n named_routes[args[0]] = route_data\n return out\n\n\ndef make_map():\n \"\"\"Create, configure and return the routes Mapper\"\"\"\n # import controllers here rather than at root level because\n # pylons config is initialised by this point.\n\n # Helpers to reduce code clutter\n GET = dict(method=['GET'])\n PUT = dict(method=['PUT'])\n POST = dict(method=['POST'])\n DELETE = dict(method=['DELETE'])\n GET_POST = dict(method=['GET', 'POST'])\n PUT_POST = dict(method=['PUT', 'POST'])\n PUT_POST_DELETE = dict(method=['PUT', 'POST', 'DELETE'])\n OPTIONS = dict(method=['OPTIONS'])\n\n map = Mapper(\n directory=config['pylons.paths']['controllers'],\n always_scan=config['debug'])\n map.minimization = False\n map.explicit = True\n\n # CUSTOM ROUTES HERE\n for plugin in p.PluginImplementations(p.IRoutes):\n map = plugin.before_map(map)\n\n # The ErrorController route (handles 404/500 error pages); it should\n # likely stay at the top, ensuring it can always be resolved.\n map.connect('/error/{action}', controller='error', ckan_core=True)\n map.connect('/error/{action}/{id}', controller='error', ckan_core=True)\n\n map.connect(\n '*url',\n controller='home',\n action='cors_options',\n conditions=OPTIONS,\n ckan_core=True)\n\n # Mark all routes added from extensions 
on the `before_map` extension point\n # as non-core\n for route in map.matchlist:\n if not hasattr(route, '_ckan_core'):\n route._ckan_core = False\n\n # /api/util ver 1, 2 or none\n with SubMapper(\n map, controller='api', path_prefix='/api{ver:/1|/2|}',\n ver='/1') as m:\n m.connect('/util/dataset/munge_name', action='munge_package_name')\n m.connect(\n '/util/dataset/munge_title_to_name',\n action='munge_title_to_package_name')\n m.connect('/util/tag/munge', action='munge_tag')\n\n ###########\n ## /END API\n ###########\n\n map.redirect('/packages', '/dataset')\n map.redirect('/packages/{url:.*}', '/dataset/{url}')\n map.redirect('/package', '/dataset')\n map.redirect('/package/{url:.*}', '/dataset/{url}')\n\n # users\n map.redirect('/users/{url:.*}', '/user/{url}')\n\n # Mark all unmarked routes added up until now as core routes\n for route in map.matchlist:\n if not hasattr(route, '_ckan_core'):\n route._ckan_core = True\n\n for plugin in p.PluginImplementations(p.IRoutes):\n map = plugin.after_map(map)\n\n # Mark all routes added from extensions on the `after_map` extension point\n # as non-core\n for route in map.matchlist:\n if not hasattr(route, '_ckan_core'):\n route._ckan_core = False\n\n # sometimes we get requests for favicon.ico we should redirect to\n # the real favicon location.\n map.redirect('/favicon.ico', config.get('ckan.favicon'))\n\n map.redirect('/*(url)/', '/{url}', _redirect_code='301 Moved Permanently')\n\n return map\n",
"path": "ckan/config/routing.py"
}
] | diff --git a/ckan/config/routing.py b/ckan/config/routing.py
index f4632a2643a..af723c55448 100644
--- a/ckan/config/routing.py
+++ b/ckan/config/routing.py
@@ -3,7 +3,7 @@
The more specific and detailed routes should be defined first so they
may take precedent over the more generic routes. For more information
-refer to the routes manual at http://routes.groovie.org/docs/
+refer to the routes manual at https://routes.readthedocs.io/en/latest/
"""
import re
|
abey79__vpype-607 | Default to QT_QPA_PLATFORM=xcb on Linux/Wayland
If we detect a Linux box running on Wayland, we should force Qt to use the xcb platform, as the Wayland backend doesn't work properly with moderngl.
This may be a good way to detect Wayland:
```
XDG_SESSION_TYPE=wayland
```
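Below is a hedged, minimal sketch of that check, not vpype's shipped code (the function name `_force_xcb_on_wayland` is made up for illustration): force Qt onto the xcb/XWayland platform plugin only on Linux, only when the session reports Wayland, and only if the user has not already chosen a platform plugin.
```
# Hedged sketch of the workaround described above, not vpype's actual code.
# It must run before Qt is imported/initialized for the variable to matter.
import os
import sys


def _force_xcb_on_wayland() -> None:
    # Only relevant on Linux.
    if not sys.platform.startswith("linux"):
        return
    # Only when the desktop session reports Wayland.
    if os.environ.get("XDG_SESSION_TYPE", "") != "wayland":
        return
    # setdefault respects an explicit user choice of Qt platform plugin.
    os.environ.setdefault("QT_QPA_PLATFORM", "xcb")


_force_xcb_on_wayland()
```
The change recorded in this row's after_files/pr_diff (in `vpype_viewer/qtviewer/__init__.py`) follows essentially the same logic, using an explicit membership check instead of `setdefault`.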
Relevant discussions:
- https://github.com/abey79/vsketch/issues/353
- https://discord.com/channels/550302843777712148/696045774970028062/1072436292798926868
| [
{
"content": "from .viewer import *\n",
"path": "vpype_viewer/qtviewer/__init__.py"
}
] | [
{
"content": "def _check_wayland():\n \"\"\"Fix QT env variable on Wayland-based systems.\n\n See https://github.com/abey79/vpype/issues/596\n \"\"\"\n import os\n import sys\n\n if sys.platform.startswith(\"linux\"):\n if os.environ.get(\"XDG_SESSION_TYPE\", \"\") == \"wayland\":\n if \"QT_QPA_PLATFORM\" not in os.environ:\n os.environ[\"QT_QPA_PLATFORM\"] = \"xcb\"\n\n\n_check_wayland()\n\n\nfrom .viewer import *\n",
"path": "vpype_viewer/qtviewer/__init__.py"
}
] | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 7c88f7ee..8f30b192 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -12,6 +12,7 @@ Release date: UNRELEASED
### Bug fixes
* Fixed a design issue with the `read` command where disjoints groups of digit in layer names would be used to determine layer IDs. Only the first contiguous group of digit is used, so a layer named "01-layer1" would now have layer ID of 1 instead of 11 (#606)
+* Fixed an issue on Wayland-based Linux distribution where using the viewer (e.g. with the `show` command) would crash (#607)
### API changes
diff --git a/vpype_viewer/qtviewer/__init__.py b/vpype_viewer/qtviewer/__init__.py
index d8bfbc32..8f8f143e 100644
--- a/vpype_viewer/qtviewer/__init__.py
+++ b/vpype_viewer/qtviewer/__init__.py
@@ -1 +1,18 @@
+def _check_wayland():
+ """Fix QT env variable on Wayland-based systems.
+
+ See https://github.com/abey79/vpype/issues/596
+ """
+ import os
+ import sys
+
+ if sys.platform.startswith("linux"):
+ if os.environ.get("XDG_SESSION_TYPE", "") == "wayland":
+ if "QT_QPA_PLATFORM" not in os.environ:
+ os.environ["QT_QPA_PLATFORM"] = "xcb"
+
+
+_check_wayland()
+
+
from .viewer import *
|
fonttools__fonttools-2274 | When parsing MVAR with lazy=True, recordSize is wrong
Reproduction:
```
from fontTools import ttLib
import io
import sys
file_path = sys.argv[1]
fontdata = open(file_path, "rb").read()
font = ttLib.TTFont(io.BytesIO(fontdata), lazy=True)
mvar = font["MVAR"].table
print(mvar.ValueRecord.recordSize)
for rec in mvar.ValueRecord:
    print(rec.ValueTag, "->", rec.VarIdx)
```
Running this against the latest version of the Recursive variable font gives:
16
hcrn -> 65538
sbxo -> 65536
stro -> 131072
undo -> 1
xhgt -> 0
Ê -> 732
@ -> 1073741824
-> 0
-> 16384
Record size should be 8.
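As a cross-check of that expected value, here is a hedged sketch that decodes the MVAR header and value records directly from the raw table bytes with `struct`, following the OpenType layout (six uint16 header fields, then 8-byte records made of a 4-byte tag plus two uint16 delta-set indices). It reuses the `font` object from the reproduction above and assumes `font.reader` exposes the raw sfnt table data; it is not part of the reported bug or its fix.
```
# Hedged sketch: read MVAR straight from the raw bytes, bypassing the lazy
# otBase reader, to confirm what valueRecordSize the font actually stores.
import struct

raw = font.reader["MVAR"]  # raw table bytes from the sfnt reader (assumption)
major, minor, _reserved, record_size, record_count, _store_offset = struct.unpack(
    ">6H", raw[:12]
)
print("valueRecordSize in the binary header:", record_size)  # expected: 8

pos = 12
for _ in range(record_count):
    tag = raw[pos:pos + 4].decode("latin-1")
    outer, inner = struct.unpack(">2H", raw[pos + 4:pos + 8])
    # fontTools packs VarIdx as (outer << 16) | inner, so these values should
    # match what the reproduction script prints once the bug is fixed.
    print(tag, "->", (outer << 16) | inner)
    pos += record_size
```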
| [
{
"content": "from fontTools.misc.py23 import Tag, bytesjoin\nfrom .DefaultTable import DefaultTable\nimport sys\nimport array\nimport struct\nimport logging\n\nlog = logging.getLogger(__name__)\n\nclass OverflowErrorRecord(object):\n\tdef __init__(self, overflowTuple):\n\t\tself.tableType = overflowTuple[0]\n\t\tself.LookupListIndex = overflowTuple[1]\n\t\tself.SubTableIndex = overflowTuple[2]\n\t\tself.itemName = overflowTuple[3]\n\t\tself.itemIndex = overflowTuple[4]\n\n\tdef __repr__(self):\n\t\treturn str((self.tableType, \"LookupIndex:\", self.LookupListIndex, \"SubTableIndex:\", self.SubTableIndex, \"ItemName:\", self.itemName, \"ItemIndex:\", self.itemIndex))\n\nclass OTLOffsetOverflowError(Exception):\n\tdef __init__(self, overflowErrorRecord):\n\t\tself.value = overflowErrorRecord\n\n\tdef __str__(self):\n\t\treturn repr(self.value)\n\n\nclass BaseTTXConverter(DefaultTable):\n\n\t\"\"\"Generic base class for TTX table converters. It functions as an\n\tadapter between the TTX (ttLib actually) table model and the model\n\twe use for OpenType tables, which is necessarily subtly different.\n\t\"\"\"\n\n\tdef decompile(self, data, font):\n\t\tfrom . import otTables\n\t\treader = OTTableReader(data, tableTag=self.tableTag)\n\t\ttableClass = getattr(otTables, self.tableTag)\n\t\tself.table = tableClass()\n\t\tself.table.decompile(reader, font)\n\n\tdef compile(self, font):\n\t\t\"\"\" Create a top-level OTTableWriter for the GPOS/GSUB table.\n\t\t\tCall the compile method for the the table\n\t\t\t\tfor each 'converter' record in the table converter list\n\t\t\t\t\tcall converter's write method for each item in the value.\n\t\t\t\t\t\t- For simple items, the write method adds a string to the\n\t\t\t\t\t\twriter's self.items list.\n\t\t\t\t\t\t- For Struct/Table/Subtable items, it add first adds new writer to the\n\t\t\t\t\t\tto the writer's self.items, then calls the item's compile method.\n\t\t\t\t\t\tThis creates a tree of writers, rooted at the GUSB/GPOS writer, with\n\t\t\t\t\t\teach writer representing a table, and the writer.items list containing\n\t\t\t\t\t\tthe child data strings and writers.\n\t\t\tcall the getAllData method\n\t\t\t\tcall _doneWriting, which removes duplicates\n\t\t\t\tcall _gatherTables. 
This traverses the tables, adding unique occurences to a flat list of tables\n\t\t\t\tTraverse the flat list of tables, calling getDataLength on each to update their position\n\t\t\t\tTraverse the flat list of tables again, calling getData each get the data in the table, now that\n\t\t\t\tpos's and offset are known.\n\n\t\t\t\tIf a lookup subtable overflows an offset, we have to start all over.\n\t\t\"\"\"\n\t\toverflowRecord = None\n\n\t\twhile True:\n\t\t\ttry:\n\t\t\t\twriter = OTTableWriter(tableTag=self.tableTag)\n\t\t\t\tself.table.compile(writer, font)\n\t\t\t\treturn writer.getAllData()\n\n\t\t\texcept OTLOffsetOverflowError as e:\n\n\t\t\t\tif overflowRecord == e.value:\n\t\t\t\t\traise # Oh well...\n\n\t\t\t\toverflowRecord = e.value\n\t\t\t\tlog.info(\"Attempting to fix OTLOffsetOverflowError %s\", e)\n\t\t\t\tlastItem = overflowRecord\n\n\t\t\t\tok = 0\n\t\t\t\tif overflowRecord.itemName is None:\n\t\t\t\t\tfrom .otTables import fixLookupOverFlows\n\t\t\t\t\tok = fixLookupOverFlows(font, overflowRecord)\n\t\t\t\telse:\n\t\t\t\t\tfrom .otTables import fixSubTableOverFlows\n\t\t\t\t\tok = fixSubTableOverFlows(font, overflowRecord)\n\t\t\t\tif not ok:\n\t\t\t\t\t# Try upgrading lookup to Extension and hope\n\t\t\t\t\t# that cross-lookup sharing not happening would\n\t\t\t\t\t# fix overflow...\n\t\t\t\t\tfrom .otTables import fixLookupOverFlows\n\t\t\t\t\tok = fixLookupOverFlows(font, overflowRecord)\n\t\t\t\t\tif not ok:\n\t\t\t\t\t\traise\n\n\tdef toXML(self, writer, font):\n\t\tself.table.toXML2(writer, font)\n\n\tdef fromXML(self, name, attrs, content, font):\n\t\tfrom . import otTables\n\t\tif not hasattr(self, \"table\"):\n\t\t\ttableClass = getattr(otTables, self.tableTag)\n\t\t\tself.table = tableClass()\n\t\tself.table.fromXML(name, attrs, content, font)\n\t\tself.table.populateDefaults()\n\n\nclass OTTableReader(object):\n\n\t\"\"\"Helper class to retrieve data from an OpenType table.\"\"\"\n\n\t__slots__ = ('data', 'offset', 'pos', 'localState', 'tableTag')\n\n\tdef __init__(self, data, localState=None, offset=0, tableTag=None):\n\t\tself.data = data\n\t\tself.offset = offset\n\t\tself.pos = offset\n\t\tself.localState = localState\n\t\tself.tableTag = tableTag\n\n\tdef advance(self, count):\n\t\tself.pos += count\n\n\tdef seek(self, pos):\n\t\tself.pos = pos\n\n\tdef copy(self):\n\t\tother = self.__class__(self.data, self.localState, self.offset, self.tableTag)\n\t\tother.pos = self.pos\n\t\treturn other\n\n\tdef getSubReader(self, offset):\n\t\toffset = self.offset + offset\n\t\treturn self.__class__(self.data, self.localState, offset, self.tableTag)\n\n\tdef readValue(self, typecode, staticSize):\n\t\tpos = self.pos\n\t\tnewpos = pos + staticSize\n\t\tvalue, = struct.unpack(f\">{typecode}\", self.data[pos:newpos])\n\t\tself.pos = newpos\n\t\treturn value\n\n\tdef readUShort(self):\n\t\treturn self.readValue(\"H\", staticSize=2)\n\n\tdef readArray(self, typecode, staticSize, count):\n\t\tpos = self.pos\n\t\tnewpos = pos + count * staticSize\n\t\tvalue = array.array(typecode, self.data[pos:newpos])\n\t\tif sys.byteorder != \"big\": value.byteswap()\n\t\tself.pos = newpos\n\t\treturn value\n\n\tdef readUShortArray(self, count):\n\t\treturn self.readArray(\"H\", staticSize=2, count=count)\n\n\tdef readInt8(self):\n\t\treturn self.readValue(\"b\", staticSize=1)\n\n\tdef readShort(self):\n\t\treturn self.readValue(\"h\", staticSize=2)\n\n\tdef readLong(self):\n\t\treturn self.readValue(\"l\", staticSize=4)\n\n\tdef readUInt8(self):\n\t\treturn self.readValue(\"B\", 
staticSize=1)\n\n\tdef readUInt24(self):\n\t\tpos = self.pos\n\t\tnewpos = pos + 3\n\t\tvalue, = struct.unpack(\">l\", b'\\0'+self.data[pos:newpos])\n\t\tself.pos = newpos\n\t\treturn value\n\n\tdef readULong(self):\n\t\treturn self.readValue(\"L\", staticSize=4)\n\n\tdef readTag(self):\n\t\tpos = self.pos\n\t\tnewpos = pos + 4\n\t\tvalue = Tag(self.data[pos:newpos])\n\t\tassert len(value) == 4, value\n\t\tself.pos = newpos\n\t\treturn value\n\n\tdef readData(self, count):\n\t\tpos = self.pos\n\t\tnewpos = pos + count\n\t\tvalue = self.data[pos:newpos]\n\t\tself.pos = newpos\n\t\treturn value\n\n\tdef __setitem__(self, name, value):\n\t\tstate = self.localState.copy() if self.localState else dict()\n\t\tstate[name] = value\n\t\tself.localState = state\n\n\tdef __getitem__(self, name):\n\t\treturn self.localState and self.localState[name]\n\n\tdef __contains__(self, name):\n\t\treturn self.localState and name in self.localState\n\n\nclass OTTableWriter(object):\n\n\t\"\"\"Helper class to gather and assemble data for OpenType tables.\"\"\"\n\n\tdef __init__(self, localState=None, tableTag=None, offsetSize=2):\n\t\tself.items = []\n\t\tself.pos = None\n\t\tself.localState = localState\n\t\tself.tableTag = tableTag\n\t\tself.offsetSize = offsetSize\n\t\tself.parent = None\n\n\t# DEPRECATED: 'longOffset' is kept as a property for backward compat with old code.\n\t# You should use 'offsetSize' instead (2, 3 or 4 bytes).\n\t@property\n\tdef longOffset(self):\n\t\treturn self.offsetSize == 4\n\n\[email protected]\n\tdef longOffset(self, value):\n\t\tself.offsetSize = 4 if value else 2\n\n\tdef __setitem__(self, name, value):\n\t\tstate = self.localState.copy() if self.localState else dict()\n\t\tstate[name] = value\n\t\tself.localState = state\n\n\tdef __getitem__(self, name):\n\t\treturn self.localState[name]\n\n\tdef __delitem__(self, name):\n\t\tdel self.localState[name]\n\n\t# assembler interface\n\n\tdef getDataLength(self):\n\t\t\"\"\"Return the length of this table in bytes, without subtables.\"\"\"\n\t\tl = 0\n\t\tfor item in self.items:\n\t\t\tif hasattr(item, \"getCountData\"):\n\t\t\t\tl += item.size\n\t\t\telif hasattr(item, \"getData\"):\n\t\t\t\tl += item.offsetSize\n\t\t\telse:\n\t\t\t\tl = l + len(item)\n\t\treturn l\n\n\tdef getData(self):\n\t\t\"\"\"Assemble the data for this writer/table, without subtables.\"\"\"\n\t\titems = list(self.items) # make a shallow copy\n\t\tpos = self.pos\n\t\tnumItems = len(items)\n\t\tfor i in range(numItems):\n\t\t\titem = items[i]\n\n\t\t\tif hasattr(item, \"getData\"):\n\t\t\t\tif item.offsetSize == 4:\n\t\t\t\t\titems[i] = packULong(item.pos - pos)\n\t\t\t\telif item.offsetSize == 2:\n\t\t\t\t\ttry:\n\t\t\t\t\t\titems[i] = packUShort(item.pos - pos)\n\t\t\t\t\texcept struct.error:\n\t\t\t\t\t\t# provide data to fix overflow problem.\n\t\t\t\t\t\toverflowErrorRecord = self.getOverflowErrorRecord(item)\n\n\t\t\t\t\t\traise OTLOffsetOverflowError(overflowErrorRecord)\n\t\t\t\telif item.offsetSize == 3:\n\t\t\t\t\titems[i] = packUInt24(item.pos - pos)\n\t\t\t\telse:\n\t\t\t\t\traise ValueError(item.offsetSize)\n\n\t\treturn bytesjoin(items)\n\n\tdef __hash__(self):\n\t\t# only works after self._doneWriting() has been called\n\t\treturn hash(self.items)\n\n\tdef __ne__(self, other):\n\t\tresult = self.__eq__(other)\n\t\treturn result if result is NotImplemented else not result\n\n\tdef __eq__(self, other):\n\t\tif type(self) != type(other):\n\t\t\treturn NotImplemented\n\t\treturn self.offsetSize == other.offsetSize and self.items == 
other.items\n\n\tdef _doneWriting(self, internedTables):\n\t\t# Convert CountData references to data string items\n\t\t# collapse duplicate table references to a unique entry\n\t\t# \"tables\" are OTTableWriter objects.\n\n\t\t# For Extension Lookup types, we can\n\t\t# eliminate duplicates only within the tree under the Extension Lookup,\n\t\t# as offsets may exceed 64K even between Extension LookupTable subtables.\n\t\tisExtension = hasattr(self, \"Extension\")\n\n\t\t# Certain versions of Uniscribe reject the font if the GSUB/GPOS top-level\n\t\t# arrays (ScriptList, FeatureList, LookupList) point to the same, possibly\n\t\t# empty, array. So, we don't share those.\n\t\t# See: https://github.com/fonttools/fonttools/issues/518\n\t\tdontShare = hasattr(self, 'DontShare')\n\n\t\tif isExtension:\n\t\t\tinternedTables = {}\n\n\t\titems = self.items\n\t\tfor i in range(len(items)):\n\t\t\titem = items[i]\n\t\t\tif hasattr(item, \"getCountData\"):\n\t\t\t\titems[i] = item.getCountData()\n\t\t\telif hasattr(item, \"getData\"):\n\t\t\t\titem._doneWriting(internedTables)\n\t\t\t\tif not dontShare:\n\t\t\t\t\titems[i] = item = internedTables.setdefault(item, item)\n\t\tself.items = tuple(items)\n\n\tdef _gatherTables(self, tables, extTables, done):\n\t\t# Convert table references in self.items tree to a flat\n\t\t# list of tables in depth-first traversal order.\n\t\t# \"tables\" are OTTableWriter objects.\n\t\t# We do the traversal in reverse order at each level, in order to\n\t\t# resolve duplicate references to be the last reference in the list of tables.\n\t\t# For extension lookups, duplicate references can be merged only within the\n\t\t# writer tree under the extension lookup.\n\n\t\tdone[id(self)] = True\n\n\t\tnumItems = len(self.items)\n\t\tiRange = list(range(numItems))\n\t\tiRange.reverse()\n\n\t\tisExtension = hasattr(self, \"Extension\")\n\n\t\tselfTables = tables\n\n\t\tif isExtension:\n\t\t\tassert extTables is not None, \"Program or XML editing error. 
Extension subtables cannot contain extensions subtables\"\n\t\t\ttables, extTables, done = extTables, None, {}\n\n\t\t# add Coverage table if it is sorted last.\n\t\tsortCoverageLast = 0\n\t\tif hasattr(self, \"sortCoverageLast\"):\n\t\t\t# Find coverage table\n\t\t\tfor i in range(numItems):\n\t\t\t\titem = self.items[i]\n\t\t\t\tif hasattr(item, \"name\") and (item.name == \"Coverage\"):\n\t\t\t\t\tsortCoverageLast = 1\n\t\t\t\t\tbreak\n\t\t\tif id(item) not in done:\n\t\t\t\titem._gatherTables(tables, extTables, done)\n\t\t\telse:\n\t\t\t\t# We're a new parent of item\n\t\t\t\tpass\n\n\t\tfor i in iRange:\n\t\t\titem = self.items[i]\n\t\t\tif not hasattr(item, \"getData\"):\n\t\t\t\tcontinue\n\n\t\t\tif sortCoverageLast and (i==1) and item.name == 'Coverage':\n\t\t\t\t# we've already 'gathered' it above\n\t\t\t\tcontinue\n\n\t\t\tif id(item) not in done:\n\t\t\t\titem._gatherTables(tables, extTables, done)\n\t\t\telse:\n\t\t\t\t# Item is already written out by other parent\n\t\t\t\tpass\n\n\t\tselfTables.append(self)\n\n\tdef getAllData(self):\n\t\t\"\"\"Assemble all data, including all subtables.\"\"\"\n\t\tinternedTables = {}\n\t\tself._doneWriting(internedTables)\n\t\ttables = []\n\t\textTables = []\n\t\tdone = {}\n\t\tself._gatherTables(tables, extTables, done)\n\t\ttables.reverse()\n\t\textTables.reverse()\n\t\t# Gather all data in two passes: the absolute positions of all\n\t\t# subtable are needed before the actual data can be assembled.\n\t\tpos = 0\n\t\tfor table in tables:\n\t\t\ttable.pos = pos\n\t\t\tpos = pos + table.getDataLength()\n\n\t\tfor table in extTables:\n\t\t\ttable.pos = pos\n\t\t\tpos = pos + table.getDataLength()\n\n\t\tdata = []\n\t\tfor table in tables:\n\t\t\ttableData = table.getData()\n\t\t\tdata.append(tableData)\n\n\t\tfor table in extTables:\n\t\t\ttableData = table.getData()\n\t\t\tdata.append(tableData)\n\n\t\treturn bytesjoin(data)\n\n\t# interface for gathering data, as used by table.compile()\n\n\tdef getSubWriter(self, offsetSize=2):\n\t\tsubwriter = self.__class__(self.localState, self.tableTag, offsetSize=offsetSize)\n\t\tsubwriter.parent = self # because some subtables have idential values, we discard\n\t\t\t\t\t# the duplicates under the getAllData method. 
Hence some\n\t\t\t\t\t# subtable writers can have more than one parent writer.\n\t\t\t\t\t# But we just care about first one right now.\n\t\treturn subwriter\n\n\tdef writeValue(self, typecode, value):\n\t\tself.items.append(struct.pack(f\">{typecode}\", value))\n\n\tdef writeUShort(self, value):\n\t\tassert 0 <= value < 0x10000, value\n\t\tself.items.append(struct.pack(\">H\", value))\n\n\tdef writeShort(self, value):\n\t\tassert -32768 <= value < 32768, value\n\t\tself.items.append(struct.pack(\">h\", value))\n\n\tdef writeUInt8(self, value):\n\t\tassert 0 <= value < 256, value\n\t\tself.items.append(struct.pack(\">B\", value))\n\n\tdef writeInt8(self, value):\n\t\tassert -128 <= value < 128, value\n\t\tself.items.append(struct.pack(\">b\", value))\n\n\tdef writeUInt24(self, value):\n\t\tassert 0 <= value < 0x1000000, value\n\t\tb = struct.pack(\">L\", value)\n\t\tself.items.append(b[1:])\n\n\tdef writeLong(self, value):\n\t\tself.items.append(struct.pack(\">l\", value))\n\n\tdef writeULong(self, value):\n\t\tself.items.append(struct.pack(\">L\", value))\n\n\tdef writeTag(self, tag):\n\t\ttag = Tag(tag).tobytes()\n\t\tassert len(tag) == 4, tag\n\t\tself.items.append(tag)\n\n\tdef writeSubTable(self, subWriter):\n\t\tself.items.append(subWriter)\n\n\tdef writeCountReference(self, table, name, size=2, value=None):\n\t\tref = CountReference(table, name, size=size, value=value)\n\t\tself.items.append(ref)\n\t\treturn ref\n\n\tdef writeStruct(self, format, values):\n\t\tdata = struct.pack(*(format,) + values)\n\t\tself.items.append(data)\n\n\tdef writeData(self, data):\n\t\tself.items.append(data)\n\n\tdef getOverflowErrorRecord(self, item):\n\t\tLookupListIndex = SubTableIndex = itemName = itemIndex = None\n\t\tif self.name == 'LookupList':\n\t\t\tLookupListIndex = item.repeatIndex\n\t\telif self.name == 'Lookup':\n\t\t\tLookupListIndex = self.repeatIndex\n\t\t\tSubTableIndex = item.repeatIndex\n\t\telse:\n\t\t\titemName = getattr(item, 'name', '<none>')\n\t\t\tif hasattr(item, 'repeatIndex'):\n\t\t\t\titemIndex = item.repeatIndex\n\t\t\tif self.name == 'SubTable':\n\t\t\t\tLookupListIndex = self.parent.repeatIndex\n\t\t\t\tSubTableIndex = self.repeatIndex\n\t\t\telif self.name == 'ExtSubTable':\n\t\t\t\tLookupListIndex = self.parent.parent.repeatIndex\n\t\t\t\tSubTableIndex = self.parent.repeatIndex\n\t\t\telse: # who knows how far below the SubTable level we are! 
Climb back up to the nearest subtable.\n\t\t\t\titemName = \".\".join([self.name, itemName])\n\t\t\t\tp1 = self.parent\n\t\t\t\twhile p1 and p1.name not in ['ExtSubTable', 'SubTable']:\n\t\t\t\t\titemName = \".\".join([p1.name, itemName])\n\t\t\t\t\tp1 = p1.parent\n\t\t\t\tif p1:\n\t\t\t\t\tif p1.name == 'ExtSubTable':\n\t\t\t\t\t\tLookupListIndex = p1.parent.parent.repeatIndex\n\t\t\t\t\t\tSubTableIndex = p1.parent.repeatIndex\n\t\t\t\t\telse:\n\t\t\t\t\t\tLookupListIndex = p1.parent.repeatIndex\n\t\t\t\t\t\tSubTableIndex = p1.repeatIndex\n\n\t\treturn OverflowErrorRecord( (self.tableTag, LookupListIndex, SubTableIndex, itemName, itemIndex) )\n\n\nclass CountReference(object):\n\t\"\"\"A reference to a Count value, not a count of references.\"\"\"\n\tdef __init__(self, table, name, size=None, value=None):\n\t\tself.table = table\n\t\tself.name = name\n\t\tself.size = size\n\t\tif value is not None:\n\t\t\tself.setValue(value)\n\tdef setValue(self, value):\n\t\ttable = self.table\n\t\tname = self.name\n\t\tif table[name] is None:\n\t\t\ttable[name] = value\n\t\telse:\n\t\t\tassert table[name] == value, (name, table[name], value)\n\tdef getValue(self):\n\t\treturn self.table[self.name]\n\tdef getCountData(self):\n\t\tv = self.table[self.name]\n\t\tif v is None: v = 0\n\t\treturn {1:packUInt8, 2:packUShort, 4:packULong}[self.size](v)\n\n\ndef packUInt8 (value):\n\treturn struct.pack(\">B\", value)\n\ndef packUShort(value):\n\treturn struct.pack(\">H\", value)\n\ndef packULong(value):\n\tassert 0 <= value < 0x100000000, value\n\treturn struct.pack(\">L\", value)\n\ndef packUInt24(value):\n\tassert 0 <= value < 0x1000000, value\n\treturn struct.pack(\">L\", value)[1:]\n\n\nclass BaseTable(object):\n\n\t\"\"\"Generic base class for all OpenType (sub)tables.\"\"\"\n\n\tdef __getattr__(self, attr):\n\t\treader = self.__dict__.get(\"reader\")\n\t\tif reader:\n\t\t\tdel self.reader\n\t\t\tfont = self.font\n\t\t\tdel self.font\n\t\t\tself.decompile(reader, font)\n\t\t\treturn getattr(self, attr)\n\n\t\traise AttributeError(attr)\n\n\tdef ensureDecompiled(self):\n\t\treader = self.__dict__.get(\"reader\")\n\t\tif reader:\n\t\t\tdel self.reader\n\t\t\tfont = self.font\n\t\t\tdel self.font\n\t\t\tself.decompile(reader, font)\n\n\t@classmethod\n\tdef getRecordSize(cls, reader):\n\t\ttotalSize = 0\n\t\tfor conv in cls.converters:\n\t\t\tsize = conv.getRecordSize(reader)\n\t\t\tif size is NotImplemented: return NotImplemented\n\t\t\tcountValue = 1\n\t\t\tif conv.repeat:\n\t\t\t\tif conv.repeat in reader:\n\t\t\t\t\tcountValue = reader[conv.repeat]\n\t\t\t\telse:\n\t\t\t\t\treturn NotImplemented\n\t\t\ttotalSize += size * countValue\n\t\treturn totalSize\n\n\tdef getConverters(self):\n\t\treturn self.converters\n\n\tdef getConverterByName(self, name):\n\t\treturn self.convertersByName[name]\n\n\tdef populateDefaults(self, propagator=None):\n\t\tfor conv in self.getConverters():\n\t\t\tif conv.repeat:\n\t\t\t\tif not hasattr(self, conv.name):\n\t\t\t\t\tsetattr(self, conv.name, [])\n\t\t\t\tcountValue = len(getattr(self, conv.name)) - conv.aux\n\t\t\t\ttry:\n\t\t\t\t\tcount_conv = self.getConverterByName(conv.repeat)\n\t\t\t\t\tsetattr(self, conv.repeat, countValue)\n\t\t\t\texcept KeyError:\n\t\t\t\t\t# conv.repeat is a propagated count\n\t\t\t\t\tif propagator and conv.repeat in propagator:\n\t\t\t\t\t\tpropagator[conv.repeat].setValue(countValue)\n\t\t\telse:\n\t\t\t\tif conv.aux and not eval(conv.aux, None, self.__dict__):\n\t\t\t\t\tcontinue\n\t\t\t\tif hasattr(self, conv.name):\n\t\t\t\t\tcontinue # 
Warn if it should NOT be present?!\n\t\t\t\tif hasattr(conv, 'writeNullOffset'):\n\t\t\t\t\tsetattr(self, conv.name, None) # Warn?\n\t\t\t\t#elif not conv.isCount:\n\t\t\t\t#\t# Warn?\n\t\t\t\t#\tpass\n\n\tdef decompile(self, reader, font):\n\t\tself.readFormat(reader)\n\t\ttable = {}\n\t\tself.__rawTable = table # for debugging\n\t\tfor conv in self.getConverters():\n\t\t\tif conv.name == \"SubTable\":\n\t\t\t\tconv = conv.getConverter(reader.tableTag,\n\t\t\t\t\t\ttable[\"LookupType\"])\n\t\t\tif conv.name == \"ExtSubTable\":\n\t\t\t\tconv = conv.getConverter(reader.tableTag,\n\t\t\t\t\t\ttable[\"ExtensionLookupType\"])\n\t\t\tif conv.name == \"FeatureParams\":\n\t\t\t\tconv = conv.getConverter(reader[\"FeatureTag\"])\n\t\t\tif conv.name == \"SubStruct\":\n\t\t\t\tconv = conv.getConverter(reader.tableTag,\n\t\t\t\t table[\"MorphType\"])\n\t\t\ttry:\n\t\t\t\tif conv.repeat:\n\t\t\t\t\tif isinstance(conv.repeat, int):\n\t\t\t\t\t\tcountValue = conv.repeat\n\t\t\t\t\telif conv.repeat in table:\n\t\t\t\t\t\tcountValue = table[conv.repeat]\n\t\t\t\t\telse:\n\t\t\t\t\t\t# conv.repeat is a propagated count\n\t\t\t\t\t\tcountValue = reader[conv.repeat]\n\t\t\t\t\tcountValue += conv.aux\n\t\t\t\t\ttable[conv.name] = conv.readArray(reader, font, table, countValue)\n\t\t\t\telse:\n\t\t\t\t\tif conv.aux and not eval(conv.aux, None, table):\n\t\t\t\t\t\tcontinue\n\t\t\t\t\ttable[conv.name] = conv.read(reader, font, table)\n\t\t\t\t\tif conv.isPropagated:\n\t\t\t\t\t\treader[conv.name] = table[conv.name]\n\t\t\texcept Exception as e:\n\t\t\t\tname = conv.name\n\t\t\t\te.args = e.args + (name,)\n\t\t\t\traise\n\n\t\tif hasattr(self, 'postRead'):\n\t\t\tself.postRead(table, font)\n\t\telse:\n\t\t\tself.__dict__.update(table)\n\n\t\tdel self.__rawTable # succeeded, get rid of debugging info\n\n\tdef compile(self, writer, font):\n\t\tself.ensureDecompiled()\n\t\t# TODO Following hack to be removed by rewriting how FormatSwitching tables\n\t\t# are handled.\n\t\t# https://github.com/fonttools/fonttools/pull/2238#issuecomment-805192631\n\t\tif hasattr(self, 'preWrite'):\n\t\t\tdeleteFormat = not hasattr(self, 'Format')\n\t\t\ttable = self.preWrite(font)\n\t\t\tdeleteFormat = deleteFormat and hasattr(self, 'Format')\n\t\telse:\n\t\t\tdeleteFormat = False\n\t\t\ttable = self.__dict__.copy()\n\n\t\t# some count references may have been initialized in a custom preWrite; we set\n\t\t# these in the writer's state beforehand (instead of sequentially) so they will\n\t\t# be propagated to all nested subtables even if the count appears in the current\n\t\t# table only *after* the offset to the subtable that it is counting.\n\t\tfor conv in self.getConverters():\n\t\t\tif conv.isCount and conv.isPropagated:\n\t\t\t\tvalue = table.get(conv.name)\n\t\t\t\tif isinstance(value, CountReference):\n\t\t\t\t\twriter[conv.name] = value\n\n\t\tif hasattr(self, 'sortCoverageLast'):\n\t\t\twriter.sortCoverageLast = 1\n\n\t\tif hasattr(self, 'DontShare'):\n\t\t\twriter.DontShare = True\n\n\t\tif hasattr(self.__class__, 'LookupType'):\n\t\t\twriter['LookupType'].setValue(self.__class__.LookupType)\n\n\t\tself.writeFormat(writer)\n\t\tfor conv in self.getConverters():\n\t\t\tvalue = table.get(conv.name) # TODO Handle defaults instead of defaulting to None!\n\t\t\tif conv.repeat:\n\t\t\t\tif value is None:\n\t\t\t\t\tvalue = []\n\t\t\t\tcountValue = len(value) - conv.aux\n\t\t\t\tif isinstance(conv.repeat, int):\n\t\t\t\t\tassert len(value) == conv.repeat, 'expected %d values, got %d' % (conv.repeat, len(value))\n\t\t\t\telif 
conv.repeat in table:\n\t\t\t\t\tCountReference(table, conv.repeat, value=countValue)\n\t\t\t\telse:\n\t\t\t\t\t# conv.repeat is a propagated count\n\t\t\t\t\twriter[conv.repeat].setValue(countValue)\n\t\t\t\tvalues = value\n\t\t\t\tfor i, value in enumerate(values):\n\t\t\t\t\ttry:\n\t\t\t\t\t\tconv.write(writer, font, table, value, i)\n\t\t\t\t\texcept Exception as e:\n\t\t\t\t\t\tname = value.__class__.__name__ if value is not None else conv.name\n\t\t\t\t\t\te.args = e.args + (name+'['+str(i)+']',)\n\t\t\t\t\t\traise\n\t\t\telif conv.isCount:\n\t\t\t\t# Special-case Count values.\n\t\t\t\t# Assumption: a Count field will *always* precede\n\t\t\t\t# the actual array(s).\n\t\t\t\t# We need a default value, as it may be set later by a nested\n\t\t\t\t# table. We will later store it here.\n\t\t\t\t# We add a reference: by the time the data is assembled\n\t\t\t\t# the Count value will be filled in.\n\t\t\t\t# We ignore the current count value since it will be recomputed,\n\t\t\t\t# unless it's a CountReference that was already initialized in a custom preWrite.\n\t\t\t\tif isinstance(value, CountReference):\n\t\t\t\t\tref = value\n\t\t\t\t\tref.size = conv.staticSize\n\t\t\t\t\twriter.writeData(ref)\n\t\t\t\t\ttable[conv.name] = ref.getValue()\n\t\t\t\telse:\n\t\t\t\t\tref = writer.writeCountReference(table, conv.name, conv.staticSize)\n\t\t\t\t\ttable[conv.name] = None\n\t\t\t\tif conv.isPropagated:\n\t\t\t\t\twriter[conv.name] = ref\n\t\t\telif conv.isLookupType:\n\t\t\t\t# We make sure that subtables have the same lookup type,\n\t\t\t\t# and that the type is the same as the one set on the\n\t\t\t\t# Lookup object, if any is set.\n\t\t\t\tif conv.name not in table:\n\t\t\t\t\ttable[conv.name] = None\n\t\t\t\tref = writer.writeCountReference(table, conv.name, conv.staticSize, table[conv.name])\n\t\t\t\twriter['LookupType'] = ref\n\t\t\telse:\n\t\t\t\tif conv.aux and not eval(conv.aux, None, table):\n\t\t\t\t\tcontinue\n\t\t\t\ttry:\n\t\t\t\t\tconv.write(writer, font, table, value)\n\t\t\t\texcept Exception as e:\n\t\t\t\t\tname = value.__class__.__name__ if value is not None else conv.name\n\t\t\t\t\te.args = e.args + (name,)\n\t\t\t\t\traise\n\t\t\t\tif conv.isPropagated:\n\t\t\t\t\twriter[conv.name] = value\n\n\t\tif deleteFormat:\n\t\t\tdel self.Format\n\n\tdef readFormat(self, reader):\n\t\tpass\n\n\tdef writeFormat(self, writer):\n\t\tpass\n\n\tdef toXML(self, xmlWriter, font, attrs=None, name=None):\n\t\ttableName = name if name else self.__class__.__name__\n\t\tif attrs is None:\n\t\t\tattrs = []\n\t\tif hasattr(self, \"Format\"):\n\t\t\tattrs = attrs + [(\"Format\", self.Format)]\n\t\txmlWriter.begintag(tableName, attrs)\n\t\txmlWriter.newline()\n\t\tself.toXML2(xmlWriter, font)\n\t\txmlWriter.endtag(tableName)\n\t\txmlWriter.newline()\n\n\tdef toXML2(self, xmlWriter, font):\n\t\t# Simpler variant of toXML, *only* for the top level tables (like GPOS, GSUB).\n\t\t# This is because in TTX our parent writes our main tag, and in otBase.py we\n\t\t# do it ourselves. 
I think I'm getting schizophrenic...\n\t\tfor conv in self.getConverters():\n\t\t\tif conv.repeat:\n\t\t\t\tvalue = getattr(self, conv.name, [])\n\t\t\t\tfor i in range(len(value)):\n\t\t\t\t\titem = value[i]\n\t\t\t\t\tconv.xmlWrite(xmlWriter, font, item, conv.name,\n\t\t\t\t\t\t\t[(\"index\", i)])\n\t\t\telse:\n\t\t\t\tif conv.aux and not eval(conv.aux, None, vars(self)):\n\t\t\t\t\tcontinue\n\t\t\t\tvalue = getattr(self, conv.name, None) # TODO Handle defaults instead of defaulting to None!\n\t\t\t\tconv.xmlWrite(xmlWriter, font, value, conv.name, [])\n\n\tdef fromXML(self, name, attrs, content, font):\n\t\ttry:\n\t\t\tconv = self.getConverterByName(name)\n\t\texcept KeyError:\n\t\t\traise # XXX on KeyError, raise nice error\n\t\tvalue = conv.xmlRead(attrs, content, font)\n\t\tif conv.repeat:\n\t\t\tseq = getattr(self, conv.name, None)\n\t\t\tif seq is None:\n\t\t\t\tseq = []\n\t\t\t\tsetattr(self, conv.name, seq)\n\t\t\tseq.append(value)\n\t\telse:\n\t\t\tsetattr(self, conv.name, value)\n\n\tdef __ne__(self, other):\n\t\tresult = self.__eq__(other)\n\t\treturn result if result is NotImplemented else not result\n\n\tdef __eq__(self, other):\n\t\tif type(self) != type(other):\n\t\t\treturn NotImplemented\n\n\t\tself.ensureDecompiled()\n\t\tother.ensureDecompiled()\n\n\t\treturn self.__dict__ == other.__dict__\n\n\nclass FormatSwitchingBaseTable(BaseTable):\n\n\t\"\"\"Minor specialization of BaseTable, for tables that have multiple\n\tformats, eg. CoverageFormat1 vs. CoverageFormat2.\"\"\"\n\n\t@classmethod\n\tdef getRecordSize(cls, reader):\n\t\treturn NotImplemented\n\n\tdef getConverters(self):\n\t\treturn self.converters.get(self.Format, [])\n\n\tdef getConverterByName(self, name):\n\t\treturn self.convertersByName[self.Format][name]\n\n\tdef readFormat(self, reader):\n\t\tself.Format = reader.readUShort()\n\n\tdef writeFormat(self, writer):\n\t\twriter.writeUShort(self.Format)\n\n\tdef toXML(self, xmlWriter, font, attrs=None, name=None):\n\t\tBaseTable.toXML(self, xmlWriter, font, attrs, name)\n\n\nclass UInt8FormatSwitchingBaseTable(FormatSwitchingBaseTable):\n\tdef readFormat(self, reader):\n\t\tself.Format = reader.readUInt8()\n\n\tdef writeFormat(self, writer):\n\t\twriter.writeUInt8(self.Format)\n\n\nformatSwitchingBaseTables = {\n\t\"uint16\": FormatSwitchingBaseTable,\n\t\"uint8\": UInt8FormatSwitchingBaseTable,\n}\n\ndef getFormatSwitchingBaseTableClass(formatType):\n\ttry:\n\t\treturn formatSwitchingBaseTables[formatType]\n\texcept KeyError:\n\t\traise TypeError(f\"Unsupported format type: {formatType!r}\")\n\n\n#\n# Support for ValueRecords\n#\n# This data type is so different from all other OpenType data types that\n# it requires quite a bit of code for itself. 
It even has special support\n# in OTTableReader and OTTableWriter...\n#\n\nvalueRecordFormat = [\n#\tMask\t Name\t\tisDevice signed\n\t(0x0001, \"XPlacement\",\t0,\t1),\n\t(0x0002, \"YPlacement\",\t0,\t1),\n\t(0x0004, \"XAdvance\",\t0,\t1),\n\t(0x0008, \"YAdvance\",\t0,\t1),\n\t(0x0010, \"XPlaDevice\",\t1,\t0),\n\t(0x0020, \"YPlaDevice\",\t1,\t0),\n\t(0x0040, \"XAdvDevice\",\t1,\t0),\n\t(0x0080, \"YAdvDevice\",\t1,\t0),\n#\treserved:\n\t(0x0100, \"Reserved1\",\t0,\t0),\n\t(0x0200, \"Reserved2\",\t0,\t0),\n\t(0x0400, \"Reserved3\",\t0,\t0),\n\t(0x0800, \"Reserved4\",\t0,\t0),\n\t(0x1000, \"Reserved5\",\t0,\t0),\n\t(0x2000, \"Reserved6\",\t0,\t0),\n\t(0x4000, \"Reserved7\",\t0,\t0),\n\t(0x8000, \"Reserved8\",\t0,\t0),\n]\n\ndef _buildDict():\n\td = {}\n\tfor mask, name, isDevice, signed in valueRecordFormat:\n\t\td[name] = mask, isDevice, signed\n\treturn d\n\nvalueRecordFormatDict = _buildDict()\n\n\nclass ValueRecordFactory(object):\n\n\t\"\"\"Given a format code, this object convert ValueRecords.\"\"\"\n\n\tdef __init__(self, valueFormat):\n\t\tformat = []\n\t\tfor mask, name, isDevice, signed in valueRecordFormat:\n\t\t\tif valueFormat & mask:\n\t\t\t\tformat.append((name, isDevice, signed))\n\t\tself.format = format\n\n\tdef __len__(self):\n\t\treturn len(self.format)\n\n\tdef readValueRecord(self, reader, font):\n\t\tformat = self.format\n\t\tif not format:\n\t\t\treturn None\n\t\tvalueRecord = ValueRecord()\n\t\tfor name, isDevice, signed in format:\n\t\t\tif signed:\n\t\t\t\tvalue = reader.readShort()\n\t\t\telse:\n\t\t\t\tvalue = reader.readUShort()\n\t\t\tif isDevice:\n\t\t\t\tif value:\n\t\t\t\t\tfrom . import otTables\n\t\t\t\t\tsubReader = reader.getSubReader(value)\n\t\t\t\t\tvalue = getattr(otTables, name)()\n\t\t\t\t\tvalue.decompile(subReader, font)\n\t\t\t\telse:\n\t\t\t\t\tvalue = None\n\t\t\tsetattr(valueRecord, name, value)\n\t\treturn valueRecord\n\n\tdef writeValueRecord(self, writer, font, valueRecord):\n\t\tfor name, isDevice, signed in self.format:\n\t\t\tvalue = getattr(valueRecord, name, 0)\n\t\t\tif isDevice:\n\t\t\t\tif value:\n\t\t\t\t\tsubWriter = writer.getSubWriter()\n\t\t\t\t\twriter.writeSubTable(subWriter)\n\t\t\t\t\tvalue.compile(subWriter, font)\n\t\t\t\telse:\n\t\t\t\t\twriter.writeUShort(0)\n\t\t\telif signed:\n\t\t\t\twriter.writeShort(value)\n\t\t\telse:\n\t\t\t\twriter.writeUShort(value)\n\n\nclass ValueRecord(object):\n\n\t# see ValueRecordFactory\n\n\tdef __init__(self, valueFormat=None, src=None):\n\t\tif valueFormat is not None:\n\t\t\tfor mask, name, isDevice, signed in valueRecordFormat:\n\t\t\t\tif valueFormat & mask:\n\t\t\t\t\tsetattr(self, name, None if isDevice else 0)\n\t\t\tif src is not None:\n\t\t\t\tfor key,val in src.__dict__.items():\n\t\t\t\t\tif not hasattr(self, key):\n\t\t\t\t\t\tcontinue\n\t\t\t\t\tsetattr(self, key, val)\n\t\telif src is not None:\n\t\t\tself.__dict__ = src.__dict__.copy()\n\n\tdef getFormat(self):\n\t\tformat = 0\n\t\tfor name in self.__dict__.keys():\n\t\t\tformat = format | valueRecordFormatDict[name][0]\n\t\treturn format\n\n\tdef toXML(self, xmlWriter, font, valueName, attrs=None):\n\t\tif attrs is None:\n\t\t\tsimpleItems = []\n\t\telse:\n\t\t\tsimpleItems = list(attrs)\n\t\tfor mask, name, isDevice, format in valueRecordFormat[:4]: # \"simple\" values\n\t\t\tif hasattr(self, name):\n\t\t\t\tsimpleItems.append((name, getattr(self, name)))\n\t\tdeviceItems = []\n\t\tfor mask, name, isDevice, format in valueRecordFormat[4:8]: # device records\n\t\t\tif hasattr(self, name):\n\t\t\t\tdevice = getattr(self, 
name)\n\t\t\t\tif device is not None:\n\t\t\t\t\tdeviceItems.append((name, device))\n\t\tif deviceItems:\n\t\t\txmlWriter.begintag(valueName, simpleItems)\n\t\t\txmlWriter.newline()\n\t\t\tfor name, deviceRecord in deviceItems:\n\t\t\t\tif deviceRecord is not None:\n\t\t\t\t\tdeviceRecord.toXML(xmlWriter, font, name=name)\n\t\t\txmlWriter.endtag(valueName)\n\t\t\txmlWriter.newline()\n\t\telse:\n\t\t\txmlWriter.simpletag(valueName, simpleItems)\n\t\t\txmlWriter.newline()\n\n\tdef fromXML(self, name, attrs, content, font):\n\t\tfrom . import otTables\n\t\tfor k, v in attrs.items():\n\t\t\tsetattr(self, k, int(v))\n\t\tfor element in content:\n\t\t\tif not isinstance(element, tuple):\n\t\t\t\tcontinue\n\t\t\tname, attrs, content = element\n\t\t\tvalue = getattr(otTables, name)()\n\t\t\tfor elem2 in content:\n\t\t\t\tif not isinstance(elem2, tuple):\n\t\t\t\t\tcontinue\n\t\t\t\tname2, attrs2, content2 = elem2\n\t\t\t\tvalue.fromXML(name2, attrs2, content2, font)\n\t\t\tsetattr(self, name, value)\n\n\tdef __ne__(self, other):\n\t\tresult = self.__eq__(other)\n\t\treturn result if result is NotImplemented else not result\n\n\tdef __eq__(self, other):\n\t\tif type(self) != type(other):\n\t\t\treturn NotImplemented\n\t\treturn self.__dict__ == other.__dict__\n",
"path": "Lib/fontTools/ttLib/tables/otBase.py"
}
] | [
{
"content": "from fontTools.misc.py23 import Tag, bytesjoin\nfrom .DefaultTable import DefaultTable\nimport sys\nimport array\nimport struct\nimport logging\n\nlog = logging.getLogger(__name__)\n\nclass OverflowErrorRecord(object):\n\tdef __init__(self, overflowTuple):\n\t\tself.tableType = overflowTuple[0]\n\t\tself.LookupListIndex = overflowTuple[1]\n\t\tself.SubTableIndex = overflowTuple[2]\n\t\tself.itemName = overflowTuple[3]\n\t\tself.itemIndex = overflowTuple[4]\n\n\tdef __repr__(self):\n\t\treturn str((self.tableType, \"LookupIndex:\", self.LookupListIndex, \"SubTableIndex:\", self.SubTableIndex, \"ItemName:\", self.itemName, \"ItemIndex:\", self.itemIndex))\n\nclass OTLOffsetOverflowError(Exception):\n\tdef __init__(self, overflowErrorRecord):\n\t\tself.value = overflowErrorRecord\n\n\tdef __str__(self):\n\t\treturn repr(self.value)\n\n\nclass BaseTTXConverter(DefaultTable):\n\n\t\"\"\"Generic base class for TTX table converters. It functions as an\n\tadapter between the TTX (ttLib actually) table model and the model\n\twe use for OpenType tables, which is necessarily subtly different.\n\t\"\"\"\n\n\tdef decompile(self, data, font):\n\t\tfrom . import otTables\n\t\treader = OTTableReader(data, tableTag=self.tableTag)\n\t\ttableClass = getattr(otTables, self.tableTag)\n\t\tself.table = tableClass()\n\t\tself.table.decompile(reader, font)\n\n\tdef compile(self, font):\n\t\t\"\"\" Create a top-level OTTableWriter for the GPOS/GSUB table.\n\t\t\tCall the compile method for the the table\n\t\t\t\tfor each 'converter' record in the table converter list\n\t\t\t\t\tcall converter's write method for each item in the value.\n\t\t\t\t\t\t- For simple items, the write method adds a string to the\n\t\t\t\t\t\twriter's self.items list.\n\t\t\t\t\t\t- For Struct/Table/Subtable items, it add first adds new writer to the\n\t\t\t\t\t\tto the writer's self.items, then calls the item's compile method.\n\t\t\t\t\t\tThis creates a tree of writers, rooted at the GUSB/GPOS writer, with\n\t\t\t\t\t\teach writer representing a table, and the writer.items list containing\n\t\t\t\t\t\tthe child data strings and writers.\n\t\t\tcall the getAllData method\n\t\t\t\tcall _doneWriting, which removes duplicates\n\t\t\t\tcall _gatherTables. 
This traverses the tables, adding unique occurences to a flat list of tables\n\t\t\t\tTraverse the flat list of tables, calling getDataLength on each to update their position\n\t\t\t\tTraverse the flat list of tables again, calling getData each get the data in the table, now that\n\t\t\t\tpos's and offset are known.\n\n\t\t\t\tIf a lookup subtable overflows an offset, we have to start all over.\n\t\t\"\"\"\n\t\toverflowRecord = None\n\n\t\twhile True:\n\t\t\ttry:\n\t\t\t\twriter = OTTableWriter(tableTag=self.tableTag)\n\t\t\t\tself.table.compile(writer, font)\n\t\t\t\treturn writer.getAllData()\n\n\t\t\texcept OTLOffsetOverflowError as e:\n\n\t\t\t\tif overflowRecord == e.value:\n\t\t\t\t\traise # Oh well...\n\n\t\t\t\toverflowRecord = e.value\n\t\t\t\tlog.info(\"Attempting to fix OTLOffsetOverflowError %s\", e)\n\t\t\t\tlastItem = overflowRecord\n\n\t\t\t\tok = 0\n\t\t\t\tif overflowRecord.itemName is None:\n\t\t\t\t\tfrom .otTables import fixLookupOverFlows\n\t\t\t\t\tok = fixLookupOverFlows(font, overflowRecord)\n\t\t\t\telse:\n\t\t\t\t\tfrom .otTables import fixSubTableOverFlows\n\t\t\t\t\tok = fixSubTableOverFlows(font, overflowRecord)\n\t\t\t\tif not ok:\n\t\t\t\t\t# Try upgrading lookup to Extension and hope\n\t\t\t\t\t# that cross-lookup sharing not happening would\n\t\t\t\t\t# fix overflow...\n\t\t\t\t\tfrom .otTables import fixLookupOverFlows\n\t\t\t\t\tok = fixLookupOverFlows(font, overflowRecord)\n\t\t\t\t\tif not ok:\n\t\t\t\t\t\traise\n\n\tdef toXML(self, writer, font):\n\t\tself.table.toXML2(writer, font)\n\n\tdef fromXML(self, name, attrs, content, font):\n\t\tfrom . import otTables\n\t\tif not hasattr(self, \"table\"):\n\t\t\ttableClass = getattr(otTables, self.tableTag)\n\t\t\tself.table = tableClass()\n\t\tself.table.fromXML(name, attrs, content, font)\n\t\tself.table.populateDefaults()\n\n\nclass OTTableReader(object):\n\n\t\"\"\"Helper class to retrieve data from an OpenType table.\"\"\"\n\n\t__slots__ = ('data', 'offset', 'pos', 'localState', 'tableTag')\n\n\tdef __init__(self, data, localState=None, offset=0, tableTag=None):\n\t\tself.data = data\n\t\tself.offset = offset\n\t\tself.pos = offset\n\t\tself.localState = localState\n\t\tself.tableTag = tableTag\n\n\tdef advance(self, count):\n\t\tself.pos += count\n\n\tdef seek(self, pos):\n\t\tself.pos = pos\n\n\tdef copy(self):\n\t\tother = self.__class__(self.data, self.localState, self.offset, self.tableTag)\n\t\tother.pos = self.pos\n\t\treturn other\n\n\tdef getSubReader(self, offset):\n\t\toffset = self.offset + offset\n\t\treturn self.__class__(self.data, self.localState, offset, self.tableTag)\n\n\tdef readValue(self, typecode, staticSize):\n\t\tpos = self.pos\n\t\tnewpos = pos + staticSize\n\t\tvalue, = struct.unpack(f\">{typecode}\", self.data[pos:newpos])\n\t\tself.pos = newpos\n\t\treturn value\n\n\tdef readUShort(self):\n\t\treturn self.readValue(\"H\", staticSize=2)\n\n\tdef readArray(self, typecode, staticSize, count):\n\t\tpos = self.pos\n\t\tnewpos = pos + count * staticSize\n\t\tvalue = array.array(typecode, self.data[pos:newpos])\n\t\tif sys.byteorder != \"big\": value.byteswap()\n\t\tself.pos = newpos\n\t\treturn value\n\n\tdef readUShortArray(self, count):\n\t\treturn self.readArray(\"H\", staticSize=2, count=count)\n\n\tdef readInt8(self):\n\t\treturn self.readValue(\"b\", staticSize=1)\n\n\tdef readShort(self):\n\t\treturn self.readValue(\"h\", staticSize=2)\n\n\tdef readLong(self):\n\t\treturn self.readValue(\"l\", staticSize=4)\n\n\tdef readUInt8(self):\n\t\treturn self.readValue(\"B\", 
staticSize=1)\n\n\tdef readUInt24(self):\n\t\tpos = self.pos\n\t\tnewpos = pos + 3\n\t\tvalue, = struct.unpack(\">l\", b'\\0'+self.data[pos:newpos])\n\t\tself.pos = newpos\n\t\treturn value\n\n\tdef readULong(self):\n\t\treturn self.readValue(\"L\", staticSize=4)\n\n\tdef readTag(self):\n\t\tpos = self.pos\n\t\tnewpos = pos + 4\n\t\tvalue = Tag(self.data[pos:newpos])\n\t\tassert len(value) == 4, value\n\t\tself.pos = newpos\n\t\treturn value\n\n\tdef readData(self, count):\n\t\tpos = self.pos\n\t\tnewpos = pos + count\n\t\tvalue = self.data[pos:newpos]\n\t\tself.pos = newpos\n\t\treturn value\n\n\tdef __setitem__(self, name, value):\n\t\tstate = self.localState.copy() if self.localState else dict()\n\t\tstate[name] = value\n\t\tself.localState = state\n\n\tdef __getitem__(self, name):\n\t\treturn self.localState and self.localState[name]\n\n\tdef __contains__(self, name):\n\t\treturn self.localState and name in self.localState\n\n\nclass OTTableWriter(object):\n\n\t\"\"\"Helper class to gather and assemble data for OpenType tables.\"\"\"\n\n\tdef __init__(self, localState=None, tableTag=None, offsetSize=2):\n\t\tself.items = []\n\t\tself.pos = None\n\t\tself.localState = localState\n\t\tself.tableTag = tableTag\n\t\tself.offsetSize = offsetSize\n\t\tself.parent = None\n\n\t# DEPRECATED: 'longOffset' is kept as a property for backward compat with old code.\n\t# You should use 'offsetSize' instead (2, 3 or 4 bytes).\n\t@property\n\tdef longOffset(self):\n\t\treturn self.offsetSize == 4\n\n\[email protected]\n\tdef longOffset(self, value):\n\t\tself.offsetSize = 4 if value else 2\n\n\tdef __setitem__(self, name, value):\n\t\tstate = self.localState.copy() if self.localState else dict()\n\t\tstate[name] = value\n\t\tself.localState = state\n\n\tdef __getitem__(self, name):\n\t\treturn self.localState[name]\n\n\tdef __delitem__(self, name):\n\t\tdel self.localState[name]\n\n\t# assembler interface\n\n\tdef getDataLength(self):\n\t\t\"\"\"Return the length of this table in bytes, without subtables.\"\"\"\n\t\tl = 0\n\t\tfor item in self.items:\n\t\t\tif hasattr(item, \"getCountData\"):\n\t\t\t\tl += item.size\n\t\t\telif hasattr(item, \"getData\"):\n\t\t\t\tl += item.offsetSize\n\t\t\telse:\n\t\t\t\tl = l + len(item)\n\t\treturn l\n\n\tdef getData(self):\n\t\t\"\"\"Assemble the data for this writer/table, without subtables.\"\"\"\n\t\titems = list(self.items) # make a shallow copy\n\t\tpos = self.pos\n\t\tnumItems = len(items)\n\t\tfor i in range(numItems):\n\t\t\titem = items[i]\n\n\t\t\tif hasattr(item, \"getData\"):\n\t\t\t\tif item.offsetSize == 4:\n\t\t\t\t\titems[i] = packULong(item.pos - pos)\n\t\t\t\telif item.offsetSize == 2:\n\t\t\t\t\ttry:\n\t\t\t\t\t\titems[i] = packUShort(item.pos - pos)\n\t\t\t\t\texcept struct.error:\n\t\t\t\t\t\t# provide data to fix overflow problem.\n\t\t\t\t\t\toverflowErrorRecord = self.getOverflowErrorRecord(item)\n\n\t\t\t\t\t\traise OTLOffsetOverflowError(overflowErrorRecord)\n\t\t\t\telif item.offsetSize == 3:\n\t\t\t\t\titems[i] = packUInt24(item.pos - pos)\n\t\t\t\telse:\n\t\t\t\t\traise ValueError(item.offsetSize)\n\n\t\treturn bytesjoin(items)\n\n\tdef __hash__(self):\n\t\t# only works after self._doneWriting() has been called\n\t\treturn hash(self.items)\n\n\tdef __ne__(self, other):\n\t\tresult = self.__eq__(other)\n\t\treturn result if result is NotImplemented else not result\n\n\tdef __eq__(self, other):\n\t\tif type(self) != type(other):\n\t\t\treturn NotImplemented\n\t\treturn self.offsetSize == other.offsetSize and self.items == 
other.items\n\n\tdef _doneWriting(self, internedTables):\n\t\t# Convert CountData references to data string items\n\t\t# collapse duplicate table references to a unique entry\n\t\t# \"tables\" are OTTableWriter objects.\n\n\t\t# For Extension Lookup types, we can\n\t\t# eliminate duplicates only within the tree under the Extension Lookup,\n\t\t# as offsets may exceed 64K even between Extension LookupTable subtables.\n\t\tisExtension = hasattr(self, \"Extension\")\n\n\t\t# Certain versions of Uniscribe reject the font if the GSUB/GPOS top-level\n\t\t# arrays (ScriptList, FeatureList, LookupList) point to the same, possibly\n\t\t# empty, array. So, we don't share those.\n\t\t# See: https://github.com/fonttools/fonttools/issues/518\n\t\tdontShare = hasattr(self, 'DontShare')\n\n\t\tif isExtension:\n\t\t\tinternedTables = {}\n\n\t\titems = self.items\n\t\tfor i in range(len(items)):\n\t\t\titem = items[i]\n\t\t\tif hasattr(item, \"getCountData\"):\n\t\t\t\titems[i] = item.getCountData()\n\t\t\telif hasattr(item, \"getData\"):\n\t\t\t\titem._doneWriting(internedTables)\n\t\t\t\tif not dontShare:\n\t\t\t\t\titems[i] = item = internedTables.setdefault(item, item)\n\t\tself.items = tuple(items)\n\n\tdef _gatherTables(self, tables, extTables, done):\n\t\t# Convert table references in self.items tree to a flat\n\t\t# list of tables in depth-first traversal order.\n\t\t# \"tables\" are OTTableWriter objects.\n\t\t# We do the traversal in reverse order at each level, in order to\n\t\t# resolve duplicate references to be the last reference in the list of tables.\n\t\t# For extension lookups, duplicate references can be merged only within the\n\t\t# writer tree under the extension lookup.\n\n\t\tdone[id(self)] = True\n\n\t\tnumItems = len(self.items)\n\t\tiRange = list(range(numItems))\n\t\tiRange.reverse()\n\n\t\tisExtension = hasattr(self, \"Extension\")\n\n\t\tselfTables = tables\n\n\t\tif isExtension:\n\t\t\tassert extTables is not None, \"Program or XML editing error. 
Extension subtables cannot contain extensions subtables\"\n\t\t\ttables, extTables, done = extTables, None, {}\n\n\t\t# add Coverage table if it is sorted last.\n\t\tsortCoverageLast = 0\n\t\tif hasattr(self, \"sortCoverageLast\"):\n\t\t\t# Find coverage table\n\t\t\tfor i in range(numItems):\n\t\t\t\titem = self.items[i]\n\t\t\t\tif hasattr(item, \"name\") and (item.name == \"Coverage\"):\n\t\t\t\t\tsortCoverageLast = 1\n\t\t\t\t\tbreak\n\t\t\tif id(item) not in done:\n\t\t\t\titem._gatherTables(tables, extTables, done)\n\t\t\telse:\n\t\t\t\t# We're a new parent of item\n\t\t\t\tpass\n\n\t\tfor i in iRange:\n\t\t\titem = self.items[i]\n\t\t\tif not hasattr(item, \"getData\"):\n\t\t\t\tcontinue\n\n\t\t\tif sortCoverageLast and (i==1) and item.name == 'Coverage':\n\t\t\t\t# we've already 'gathered' it above\n\t\t\t\tcontinue\n\n\t\t\tif id(item) not in done:\n\t\t\t\titem._gatherTables(tables, extTables, done)\n\t\t\telse:\n\t\t\t\t# Item is already written out by other parent\n\t\t\t\tpass\n\n\t\tselfTables.append(self)\n\n\tdef getAllData(self):\n\t\t\"\"\"Assemble all data, including all subtables.\"\"\"\n\t\tinternedTables = {}\n\t\tself._doneWriting(internedTables)\n\t\ttables = []\n\t\textTables = []\n\t\tdone = {}\n\t\tself._gatherTables(tables, extTables, done)\n\t\ttables.reverse()\n\t\textTables.reverse()\n\t\t# Gather all data in two passes: the absolute positions of all\n\t\t# subtable are needed before the actual data can be assembled.\n\t\tpos = 0\n\t\tfor table in tables:\n\t\t\ttable.pos = pos\n\t\t\tpos = pos + table.getDataLength()\n\n\t\tfor table in extTables:\n\t\t\ttable.pos = pos\n\t\t\tpos = pos + table.getDataLength()\n\n\t\tdata = []\n\t\tfor table in tables:\n\t\t\ttableData = table.getData()\n\t\t\tdata.append(tableData)\n\n\t\tfor table in extTables:\n\t\t\ttableData = table.getData()\n\t\t\tdata.append(tableData)\n\n\t\treturn bytesjoin(data)\n\n\t# interface for gathering data, as used by table.compile()\n\n\tdef getSubWriter(self, offsetSize=2):\n\t\tsubwriter = self.__class__(self.localState, self.tableTag, offsetSize=offsetSize)\n\t\tsubwriter.parent = self # because some subtables have idential values, we discard\n\t\t\t\t\t# the duplicates under the getAllData method. 
Hence some\n\t\t\t\t\t# subtable writers can have more than one parent writer.\n\t\t\t\t\t# But we just care about first one right now.\n\t\treturn subwriter\n\n\tdef writeValue(self, typecode, value):\n\t\tself.items.append(struct.pack(f\">{typecode}\", value))\n\n\tdef writeUShort(self, value):\n\t\tassert 0 <= value < 0x10000, value\n\t\tself.items.append(struct.pack(\">H\", value))\n\n\tdef writeShort(self, value):\n\t\tassert -32768 <= value < 32768, value\n\t\tself.items.append(struct.pack(\">h\", value))\n\n\tdef writeUInt8(self, value):\n\t\tassert 0 <= value < 256, value\n\t\tself.items.append(struct.pack(\">B\", value))\n\n\tdef writeInt8(self, value):\n\t\tassert -128 <= value < 128, value\n\t\tself.items.append(struct.pack(\">b\", value))\n\n\tdef writeUInt24(self, value):\n\t\tassert 0 <= value < 0x1000000, value\n\t\tb = struct.pack(\">L\", value)\n\t\tself.items.append(b[1:])\n\n\tdef writeLong(self, value):\n\t\tself.items.append(struct.pack(\">l\", value))\n\n\tdef writeULong(self, value):\n\t\tself.items.append(struct.pack(\">L\", value))\n\n\tdef writeTag(self, tag):\n\t\ttag = Tag(tag).tobytes()\n\t\tassert len(tag) == 4, tag\n\t\tself.items.append(tag)\n\n\tdef writeSubTable(self, subWriter):\n\t\tself.items.append(subWriter)\n\n\tdef writeCountReference(self, table, name, size=2, value=None):\n\t\tref = CountReference(table, name, size=size, value=value)\n\t\tself.items.append(ref)\n\t\treturn ref\n\n\tdef writeStruct(self, format, values):\n\t\tdata = struct.pack(*(format,) + values)\n\t\tself.items.append(data)\n\n\tdef writeData(self, data):\n\t\tself.items.append(data)\n\n\tdef getOverflowErrorRecord(self, item):\n\t\tLookupListIndex = SubTableIndex = itemName = itemIndex = None\n\t\tif self.name == 'LookupList':\n\t\t\tLookupListIndex = item.repeatIndex\n\t\telif self.name == 'Lookup':\n\t\t\tLookupListIndex = self.repeatIndex\n\t\t\tSubTableIndex = item.repeatIndex\n\t\telse:\n\t\t\titemName = getattr(item, 'name', '<none>')\n\t\t\tif hasattr(item, 'repeatIndex'):\n\t\t\t\titemIndex = item.repeatIndex\n\t\t\tif self.name == 'SubTable':\n\t\t\t\tLookupListIndex = self.parent.repeatIndex\n\t\t\t\tSubTableIndex = self.repeatIndex\n\t\t\telif self.name == 'ExtSubTable':\n\t\t\t\tLookupListIndex = self.parent.parent.repeatIndex\n\t\t\t\tSubTableIndex = self.parent.repeatIndex\n\t\t\telse: # who knows how far below the SubTable level we are! 
Climb back up to the nearest subtable.\n\t\t\t\titemName = \".\".join([self.name, itemName])\n\t\t\t\tp1 = self.parent\n\t\t\t\twhile p1 and p1.name not in ['ExtSubTable', 'SubTable']:\n\t\t\t\t\titemName = \".\".join([p1.name, itemName])\n\t\t\t\t\tp1 = p1.parent\n\t\t\t\tif p1:\n\t\t\t\t\tif p1.name == 'ExtSubTable':\n\t\t\t\t\t\tLookupListIndex = p1.parent.parent.repeatIndex\n\t\t\t\t\t\tSubTableIndex = p1.parent.repeatIndex\n\t\t\t\t\telse:\n\t\t\t\t\t\tLookupListIndex = p1.parent.repeatIndex\n\t\t\t\t\t\tSubTableIndex = p1.repeatIndex\n\n\t\treturn OverflowErrorRecord( (self.tableTag, LookupListIndex, SubTableIndex, itemName, itemIndex) )\n\n\nclass CountReference(object):\n\t\"\"\"A reference to a Count value, not a count of references.\"\"\"\n\tdef __init__(self, table, name, size=None, value=None):\n\t\tself.table = table\n\t\tself.name = name\n\t\tself.size = size\n\t\tif value is not None:\n\t\t\tself.setValue(value)\n\tdef setValue(self, value):\n\t\ttable = self.table\n\t\tname = self.name\n\t\tif table[name] is None:\n\t\t\ttable[name] = value\n\t\telse:\n\t\t\tassert table[name] == value, (name, table[name], value)\n\tdef getValue(self):\n\t\treturn self.table[self.name]\n\tdef getCountData(self):\n\t\tv = self.table[self.name]\n\t\tif v is None: v = 0\n\t\treturn {1:packUInt8, 2:packUShort, 4:packULong}[self.size](v)\n\n\ndef packUInt8 (value):\n\treturn struct.pack(\">B\", value)\n\ndef packUShort(value):\n\treturn struct.pack(\">H\", value)\n\ndef packULong(value):\n\tassert 0 <= value < 0x100000000, value\n\treturn struct.pack(\">L\", value)\n\ndef packUInt24(value):\n\tassert 0 <= value < 0x1000000, value\n\treturn struct.pack(\">L\", value)[1:]\n\n\nclass BaseTable(object):\n\n\t\"\"\"Generic base class for all OpenType (sub)tables.\"\"\"\n\n\tdef __getattr__(self, attr):\n\t\treader = self.__dict__.get(\"reader\")\n\t\tif reader:\n\t\t\tdel self.reader\n\t\t\tfont = self.font\n\t\t\tdel self.font\n\t\t\tself.decompile(reader, font)\n\t\t\treturn getattr(self, attr)\n\n\t\traise AttributeError(attr)\n\n\tdef ensureDecompiled(self):\n\t\treader = self.__dict__.get(\"reader\")\n\t\tif reader:\n\t\t\tdel self.reader\n\t\t\tfont = self.font\n\t\t\tdel self.font\n\t\t\tself.decompile(reader, font)\n\n\t@classmethod\n\tdef getRecordSize(cls, reader):\n\t\ttotalSize = 0\n\t\tfor conv in cls.converters:\n\t\t\tsize = conv.getRecordSize(reader)\n\t\t\tif size is NotImplemented: return NotImplemented\n\t\t\tcountValue = 1\n\t\t\tif conv.repeat:\n\t\t\t\tif conv.repeat in reader:\n\t\t\t\t\tcountValue = reader[conv.repeat] + conv.aux\n\t\t\t\telse:\n\t\t\t\t\treturn NotImplemented\n\t\t\ttotalSize += size * countValue\n\t\treturn totalSize\n\n\tdef getConverters(self):\n\t\treturn self.converters\n\n\tdef getConverterByName(self, name):\n\t\treturn self.convertersByName[name]\n\n\tdef populateDefaults(self, propagator=None):\n\t\tfor conv in self.getConverters():\n\t\t\tif conv.repeat:\n\t\t\t\tif not hasattr(self, conv.name):\n\t\t\t\t\tsetattr(self, conv.name, [])\n\t\t\t\tcountValue = len(getattr(self, conv.name)) - conv.aux\n\t\t\t\ttry:\n\t\t\t\t\tcount_conv = self.getConverterByName(conv.repeat)\n\t\t\t\t\tsetattr(self, conv.repeat, countValue)\n\t\t\t\texcept KeyError:\n\t\t\t\t\t# conv.repeat is a propagated count\n\t\t\t\t\tif propagator and conv.repeat in propagator:\n\t\t\t\t\t\tpropagator[conv.repeat].setValue(countValue)\n\t\t\telse:\n\t\t\t\tif conv.aux and not eval(conv.aux, None, self.__dict__):\n\t\t\t\t\tcontinue\n\t\t\t\tif hasattr(self, 
conv.name):\n\t\t\t\t\tcontinue # Warn if it should NOT be present?!\n\t\t\t\tif hasattr(conv, 'writeNullOffset'):\n\t\t\t\t\tsetattr(self, conv.name, None) # Warn?\n\t\t\t\t#elif not conv.isCount:\n\t\t\t\t#\t# Warn?\n\t\t\t\t#\tpass\n\n\tdef decompile(self, reader, font):\n\t\tself.readFormat(reader)\n\t\ttable = {}\n\t\tself.__rawTable = table # for debugging\n\t\tfor conv in self.getConverters():\n\t\t\tif conv.name == \"SubTable\":\n\t\t\t\tconv = conv.getConverter(reader.tableTag,\n\t\t\t\t\t\ttable[\"LookupType\"])\n\t\t\tif conv.name == \"ExtSubTable\":\n\t\t\t\tconv = conv.getConverter(reader.tableTag,\n\t\t\t\t\t\ttable[\"ExtensionLookupType\"])\n\t\t\tif conv.name == \"FeatureParams\":\n\t\t\t\tconv = conv.getConverter(reader[\"FeatureTag\"])\n\t\t\tif conv.name == \"SubStruct\":\n\t\t\t\tconv = conv.getConverter(reader.tableTag,\n\t\t\t\t table[\"MorphType\"])\n\t\t\ttry:\n\t\t\t\tif conv.repeat:\n\t\t\t\t\tif isinstance(conv.repeat, int):\n\t\t\t\t\t\tcountValue = conv.repeat\n\t\t\t\t\telif conv.repeat in table:\n\t\t\t\t\t\tcountValue = table[conv.repeat]\n\t\t\t\t\telse:\n\t\t\t\t\t\t# conv.repeat is a propagated count\n\t\t\t\t\t\tcountValue = reader[conv.repeat]\n\t\t\t\t\tcountValue += conv.aux\n\t\t\t\t\ttable[conv.name] = conv.readArray(reader, font, table, countValue)\n\t\t\t\telse:\n\t\t\t\t\tif conv.aux and not eval(conv.aux, None, table):\n\t\t\t\t\t\tcontinue\n\t\t\t\t\ttable[conv.name] = conv.read(reader, font, table)\n\t\t\t\t\tif conv.isPropagated:\n\t\t\t\t\t\treader[conv.name] = table[conv.name]\n\t\t\texcept Exception as e:\n\t\t\t\tname = conv.name\n\t\t\t\te.args = e.args + (name,)\n\t\t\t\traise\n\n\t\tif hasattr(self, 'postRead'):\n\t\t\tself.postRead(table, font)\n\t\telse:\n\t\t\tself.__dict__.update(table)\n\n\t\tdel self.__rawTable # succeeded, get rid of debugging info\n\n\tdef compile(self, writer, font):\n\t\tself.ensureDecompiled()\n\t\t# TODO Following hack to be removed by rewriting how FormatSwitching tables\n\t\t# are handled.\n\t\t# https://github.com/fonttools/fonttools/pull/2238#issuecomment-805192631\n\t\tif hasattr(self, 'preWrite'):\n\t\t\tdeleteFormat = not hasattr(self, 'Format')\n\t\t\ttable = self.preWrite(font)\n\t\t\tdeleteFormat = deleteFormat and hasattr(self, 'Format')\n\t\telse:\n\t\t\tdeleteFormat = False\n\t\t\ttable = self.__dict__.copy()\n\n\t\t# some count references may have been initialized in a custom preWrite; we set\n\t\t# these in the writer's state beforehand (instead of sequentially) so they will\n\t\t# be propagated to all nested subtables even if the count appears in the current\n\t\t# table only *after* the offset to the subtable that it is counting.\n\t\tfor conv in self.getConverters():\n\t\t\tif conv.isCount and conv.isPropagated:\n\t\t\t\tvalue = table.get(conv.name)\n\t\t\t\tif isinstance(value, CountReference):\n\t\t\t\t\twriter[conv.name] = value\n\n\t\tif hasattr(self, 'sortCoverageLast'):\n\t\t\twriter.sortCoverageLast = 1\n\n\t\tif hasattr(self, 'DontShare'):\n\t\t\twriter.DontShare = True\n\n\t\tif hasattr(self.__class__, 'LookupType'):\n\t\t\twriter['LookupType'].setValue(self.__class__.LookupType)\n\n\t\tself.writeFormat(writer)\n\t\tfor conv in self.getConverters():\n\t\t\tvalue = table.get(conv.name) # TODO Handle defaults instead of defaulting to None!\n\t\t\tif conv.repeat:\n\t\t\t\tif value is None:\n\t\t\t\t\tvalue = []\n\t\t\t\tcountValue = len(value) - conv.aux\n\t\t\t\tif isinstance(conv.repeat, int):\n\t\t\t\t\tassert len(value) == conv.repeat, 'expected %d values, got %d' % (conv.repeat, 
len(value))\n\t\t\t\telif conv.repeat in table:\n\t\t\t\t\tCountReference(table, conv.repeat, value=countValue)\n\t\t\t\telse:\n\t\t\t\t\t# conv.repeat is a propagated count\n\t\t\t\t\twriter[conv.repeat].setValue(countValue)\n\t\t\t\tvalues = value\n\t\t\t\tfor i, value in enumerate(values):\n\t\t\t\t\ttry:\n\t\t\t\t\t\tconv.write(writer, font, table, value, i)\n\t\t\t\t\texcept Exception as e:\n\t\t\t\t\t\tname = value.__class__.__name__ if value is not None else conv.name\n\t\t\t\t\t\te.args = e.args + (name+'['+str(i)+']',)\n\t\t\t\t\t\traise\n\t\t\telif conv.isCount:\n\t\t\t\t# Special-case Count values.\n\t\t\t\t# Assumption: a Count field will *always* precede\n\t\t\t\t# the actual array(s).\n\t\t\t\t# We need a default value, as it may be set later by a nested\n\t\t\t\t# table. We will later store it here.\n\t\t\t\t# We add a reference: by the time the data is assembled\n\t\t\t\t# the Count value will be filled in.\n\t\t\t\t# We ignore the current count value since it will be recomputed,\n\t\t\t\t# unless it's a CountReference that was already initialized in a custom preWrite.\n\t\t\t\tif isinstance(value, CountReference):\n\t\t\t\t\tref = value\n\t\t\t\t\tref.size = conv.staticSize\n\t\t\t\t\twriter.writeData(ref)\n\t\t\t\t\ttable[conv.name] = ref.getValue()\n\t\t\t\telse:\n\t\t\t\t\tref = writer.writeCountReference(table, conv.name, conv.staticSize)\n\t\t\t\t\ttable[conv.name] = None\n\t\t\t\tif conv.isPropagated:\n\t\t\t\t\twriter[conv.name] = ref\n\t\t\telif conv.isLookupType:\n\t\t\t\t# We make sure that subtables have the same lookup type,\n\t\t\t\t# and that the type is the same as the one set on the\n\t\t\t\t# Lookup object, if any is set.\n\t\t\t\tif conv.name not in table:\n\t\t\t\t\ttable[conv.name] = None\n\t\t\t\tref = writer.writeCountReference(table, conv.name, conv.staticSize, table[conv.name])\n\t\t\t\twriter['LookupType'] = ref\n\t\t\telse:\n\t\t\t\tif conv.aux and not eval(conv.aux, None, table):\n\t\t\t\t\tcontinue\n\t\t\t\ttry:\n\t\t\t\t\tconv.write(writer, font, table, value)\n\t\t\t\texcept Exception as e:\n\t\t\t\t\tname = value.__class__.__name__ if value is not None else conv.name\n\t\t\t\t\te.args = e.args + (name,)\n\t\t\t\t\traise\n\t\t\t\tif conv.isPropagated:\n\t\t\t\t\twriter[conv.name] = value\n\n\t\tif deleteFormat:\n\t\t\tdel self.Format\n\n\tdef readFormat(self, reader):\n\t\tpass\n\n\tdef writeFormat(self, writer):\n\t\tpass\n\n\tdef toXML(self, xmlWriter, font, attrs=None, name=None):\n\t\ttableName = name if name else self.__class__.__name__\n\t\tif attrs is None:\n\t\t\tattrs = []\n\t\tif hasattr(self, \"Format\"):\n\t\t\tattrs = attrs + [(\"Format\", self.Format)]\n\t\txmlWriter.begintag(tableName, attrs)\n\t\txmlWriter.newline()\n\t\tself.toXML2(xmlWriter, font)\n\t\txmlWriter.endtag(tableName)\n\t\txmlWriter.newline()\n\n\tdef toXML2(self, xmlWriter, font):\n\t\t# Simpler variant of toXML, *only* for the top level tables (like GPOS, GSUB).\n\t\t# This is because in TTX our parent writes our main tag, and in otBase.py we\n\t\t# do it ourselves. 
I think I'm getting schizophrenic...\n\t\tfor conv in self.getConverters():\n\t\t\tif conv.repeat:\n\t\t\t\tvalue = getattr(self, conv.name, [])\n\t\t\t\tfor i in range(len(value)):\n\t\t\t\t\titem = value[i]\n\t\t\t\t\tconv.xmlWrite(xmlWriter, font, item, conv.name,\n\t\t\t\t\t\t\t[(\"index\", i)])\n\t\t\telse:\n\t\t\t\tif conv.aux and not eval(conv.aux, None, vars(self)):\n\t\t\t\t\tcontinue\n\t\t\t\tvalue = getattr(self, conv.name, None) # TODO Handle defaults instead of defaulting to None!\n\t\t\t\tconv.xmlWrite(xmlWriter, font, value, conv.name, [])\n\n\tdef fromXML(self, name, attrs, content, font):\n\t\ttry:\n\t\t\tconv = self.getConverterByName(name)\n\t\texcept KeyError:\n\t\t\traise # XXX on KeyError, raise nice error\n\t\tvalue = conv.xmlRead(attrs, content, font)\n\t\tif conv.repeat:\n\t\t\tseq = getattr(self, conv.name, None)\n\t\t\tif seq is None:\n\t\t\t\tseq = []\n\t\t\t\tsetattr(self, conv.name, seq)\n\t\t\tseq.append(value)\n\t\telse:\n\t\t\tsetattr(self, conv.name, value)\n\n\tdef __ne__(self, other):\n\t\tresult = self.__eq__(other)\n\t\treturn result if result is NotImplemented else not result\n\n\tdef __eq__(self, other):\n\t\tif type(self) != type(other):\n\t\t\treturn NotImplemented\n\n\t\tself.ensureDecompiled()\n\t\tother.ensureDecompiled()\n\n\t\treturn self.__dict__ == other.__dict__\n\n\nclass FormatSwitchingBaseTable(BaseTable):\n\n\t\"\"\"Minor specialization of BaseTable, for tables that have multiple\n\tformats, eg. CoverageFormat1 vs. CoverageFormat2.\"\"\"\n\n\t@classmethod\n\tdef getRecordSize(cls, reader):\n\t\treturn NotImplemented\n\n\tdef getConverters(self):\n\t\treturn self.converters.get(self.Format, [])\n\n\tdef getConverterByName(self, name):\n\t\treturn self.convertersByName[self.Format][name]\n\n\tdef readFormat(self, reader):\n\t\tself.Format = reader.readUShort()\n\n\tdef writeFormat(self, writer):\n\t\twriter.writeUShort(self.Format)\n\n\tdef toXML(self, xmlWriter, font, attrs=None, name=None):\n\t\tBaseTable.toXML(self, xmlWriter, font, attrs, name)\n\n\nclass UInt8FormatSwitchingBaseTable(FormatSwitchingBaseTable):\n\tdef readFormat(self, reader):\n\t\tself.Format = reader.readUInt8()\n\n\tdef writeFormat(self, writer):\n\t\twriter.writeUInt8(self.Format)\n\n\nformatSwitchingBaseTables = {\n\t\"uint16\": FormatSwitchingBaseTable,\n\t\"uint8\": UInt8FormatSwitchingBaseTable,\n}\n\ndef getFormatSwitchingBaseTableClass(formatType):\n\ttry:\n\t\treturn formatSwitchingBaseTables[formatType]\n\texcept KeyError:\n\t\traise TypeError(f\"Unsupported format type: {formatType!r}\")\n\n\n#\n# Support for ValueRecords\n#\n# This data type is so different from all other OpenType data types that\n# it requires quite a bit of code for itself. 
It even has special support\n# in OTTableReader and OTTableWriter...\n#\n\nvalueRecordFormat = [\n#\tMask\t Name\t\tisDevice signed\n\t(0x0001, \"XPlacement\",\t0,\t1),\n\t(0x0002, \"YPlacement\",\t0,\t1),\n\t(0x0004, \"XAdvance\",\t0,\t1),\n\t(0x0008, \"YAdvance\",\t0,\t1),\n\t(0x0010, \"XPlaDevice\",\t1,\t0),\n\t(0x0020, \"YPlaDevice\",\t1,\t0),\n\t(0x0040, \"XAdvDevice\",\t1,\t0),\n\t(0x0080, \"YAdvDevice\",\t1,\t0),\n#\treserved:\n\t(0x0100, \"Reserved1\",\t0,\t0),\n\t(0x0200, \"Reserved2\",\t0,\t0),\n\t(0x0400, \"Reserved3\",\t0,\t0),\n\t(0x0800, \"Reserved4\",\t0,\t0),\n\t(0x1000, \"Reserved5\",\t0,\t0),\n\t(0x2000, \"Reserved6\",\t0,\t0),\n\t(0x4000, \"Reserved7\",\t0,\t0),\n\t(0x8000, \"Reserved8\",\t0,\t0),\n]\n\ndef _buildDict():\n\td = {}\n\tfor mask, name, isDevice, signed in valueRecordFormat:\n\t\td[name] = mask, isDevice, signed\n\treturn d\n\nvalueRecordFormatDict = _buildDict()\n\n\nclass ValueRecordFactory(object):\n\n\t\"\"\"Given a format code, this object convert ValueRecords.\"\"\"\n\n\tdef __init__(self, valueFormat):\n\t\tformat = []\n\t\tfor mask, name, isDevice, signed in valueRecordFormat:\n\t\t\tif valueFormat & mask:\n\t\t\t\tformat.append((name, isDevice, signed))\n\t\tself.format = format\n\n\tdef __len__(self):\n\t\treturn len(self.format)\n\n\tdef readValueRecord(self, reader, font):\n\t\tformat = self.format\n\t\tif not format:\n\t\t\treturn None\n\t\tvalueRecord = ValueRecord()\n\t\tfor name, isDevice, signed in format:\n\t\t\tif signed:\n\t\t\t\tvalue = reader.readShort()\n\t\t\telse:\n\t\t\t\tvalue = reader.readUShort()\n\t\t\tif isDevice:\n\t\t\t\tif value:\n\t\t\t\t\tfrom . import otTables\n\t\t\t\t\tsubReader = reader.getSubReader(value)\n\t\t\t\t\tvalue = getattr(otTables, name)()\n\t\t\t\t\tvalue.decompile(subReader, font)\n\t\t\t\telse:\n\t\t\t\t\tvalue = None\n\t\t\tsetattr(valueRecord, name, value)\n\t\treturn valueRecord\n\n\tdef writeValueRecord(self, writer, font, valueRecord):\n\t\tfor name, isDevice, signed in self.format:\n\t\t\tvalue = getattr(valueRecord, name, 0)\n\t\t\tif isDevice:\n\t\t\t\tif value:\n\t\t\t\t\tsubWriter = writer.getSubWriter()\n\t\t\t\t\twriter.writeSubTable(subWriter)\n\t\t\t\t\tvalue.compile(subWriter, font)\n\t\t\t\telse:\n\t\t\t\t\twriter.writeUShort(0)\n\t\t\telif signed:\n\t\t\t\twriter.writeShort(value)\n\t\t\telse:\n\t\t\t\twriter.writeUShort(value)\n\n\nclass ValueRecord(object):\n\n\t# see ValueRecordFactory\n\n\tdef __init__(self, valueFormat=None, src=None):\n\t\tif valueFormat is not None:\n\t\t\tfor mask, name, isDevice, signed in valueRecordFormat:\n\t\t\t\tif valueFormat & mask:\n\t\t\t\t\tsetattr(self, name, None if isDevice else 0)\n\t\t\tif src is not None:\n\t\t\t\tfor key,val in src.__dict__.items():\n\t\t\t\t\tif not hasattr(self, key):\n\t\t\t\t\t\tcontinue\n\t\t\t\t\tsetattr(self, key, val)\n\t\telif src is not None:\n\t\t\tself.__dict__ = src.__dict__.copy()\n\n\tdef getFormat(self):\n\t\tformat = 0\n\t\tfor name in self.__dict__.keys():\n\t\t\tformat = format | valueRecordFormatDict[name][0]\n\t\treturn format\n\n\tdef toXML(self, xmlWriter, font, valueName, attrs=None):\n\t\tif attrs is None:\n\t\t\tsimpleItems = []\n\t\telse:\n\t\t\tsimpleItems = list(attrs)\n\t\tfor mask, name, isDevice, format in valueRecordFormat[:4]: # \"simple\" values\n\t\t\tif hasattr(self, name):\n\t\t\t\tsimpleItems.append((name, getattr(self, name)))\n\t\tdeviceItems = []\n\t\tfor mask, name, isDevice, format in valueRecordFormat[4:8]: # device records\n\t\t\tif hasattr(self, name):\n\t\t\t\tdevice = getattr(self, 
name)\n\t\t\t\tif device is not None:\n\t\t\t\t\tdeviceItems.append((name, device))\n\t\tif deviceItems:\n\t\t\txmlWriter.begintag(valueName, simpleItems)\n\t\t\txmlWriter.newline()\n\t\t\tfor name, deviceRecord in deviceItems:\n\t\t\t\tif deviceRecord is not None:\n\t\t\t\t\tdeviceRecord.toXML(xmlWriter, font, name=name)\n\t\t\txmlWriter.endtag(valueName)\n\t\t\txmlWriter.newline()\n\t\telse:\n\t\t\txmlWriter.simpletag(valueName, simpleItems)\n\t\t\txmlWriter.newline()\n\n\tdef fromXML(self, name, attrs, content, font):\n\t\tfrom . import otTables\n\t\tfor k, v in attrs.items():\n\t\t\tsetattr(self, k, int(v))\n\t\tfor element in content:\n\t\t\tif not isinstance(element, tuple):\n\t\t\t\tcontinue\n\t\t\tname, attrs, content = element\n\t\t\tvalue = getattr(otTables, name)()\n\t\t\tfor elem2 in content:\n\t\t\t\tif not isinstance(elem2, tuple):\n\t\t\t\t\tcontinue\n\t\t\t\tname2, attrs2, content2 = elem2\n\t\t\t\tvalue.fromXML(name2, attrs2, content2, font)\n\t\t\tsetattr(self, name, value)\n\n\tdef __ne__(self, other):\n\t\tresult = self.__eq__(other)\n\t\treturn result if result is NotImplemented else not result\n\n\tdef __eq__(self, other):\n\t\tif type(self) != type(other):\n\t\t\treturn NotImplemented\n\t\treturn self.__dict__ == other.__dict__\n",
"path": "Lib/fontTools/ttLib/tables/otBase.py"
}
] | diff --git a/Lib/fontTools/ttLib/tables/otBase.py b/Lib/fontTools/ttLib/tables/otBase.py
index 3c07f9e11a..24c6197006 100644
--- a/Lib/fontTools/ttLib/tables/otBase.py
+++ b/Lib/fontTools/ttLib/tables/otBase.py
@@ -571,7 +571,7 @@ def getRecordSize(cls, reader):
countValue = 1
if conv.repeat:
if conv.repeat in reader:
- countValue = reader[conv.repeat]
+ countValue = reader[conv.repeat] + conv.aux
else:
return NotImplemented
totalSize += size * countValue
diff --git a/Tests/ttLib/tables/M_V_A_R_test.py b/Tests/ttLib/tables/M_V_A_R_test.py
index 3972d8c302..a8b092e0ed 100644
--- a/Tests/ttLib/tables/M_V_A_R_test.py
+++ b/Tests/ttLib/tables/M_V_A_R_test.py
@@ -8,8 +8,8 @@
MVAR_DATA = deHexStr(
'0001 0000 ' # 0: version=1.0
'0000 0008 ' # 4: reserved=0, valueRecordSize=8
- '0007 ' # 8: valueRecordCount=7
- '0044 ' # 10: offsetToItemVariationStore=68
+ '0009 ' # 8: valueRecordCount=9
+ '0054 ' # 10: offsetToItemVariationStore=84
'6861 7363 ' # 12: ValueRecord.valueTag="hasc"
'0000 ' # 16: ValueRecord.deltaSetOuterIndex
'0003 ' # 18: ValueRecord.deltaSetInnerIndex
@@ -31,30 +31,36 @@
'7370 796F ' # 60: ValueRecord.valueTag="spyo"
'0000 ' # 64: ValueRecord.deltaSetOuterIndex
'0002 ' # 66: ValueRecord.deltaSetInnerIndex
- '0001 ' # 68: VarStore.format=1
- '0000 000C ' # 70: VarStore.offsetToVariationRegionList=12
- '0001 ' # 74: VarStore.itemVariationDataCount=1
- '0000 0016 ' # 76: VarStore.itemVariationDataOffsets[0]=22
- '0001 ' # 80: VarRegionList.axisCount=1
- '0001 ' # 82: VarRegionList.regionCount=1
- '0000 ' # 84: variationRegions[0].regionAxes[0].startCoord=0.0
- '4000 ' # 86: variationRegions[0].regionAxes[0].peakCoord=1.0
- '4000 ' # 88: variationRegions[0].regionAxes[0].endCoord=1.0
- '0004 ' # 90: VarData.ItemCount=4
- '0001 ' # 92: VarData.NumShorts=1
- '0001 ' # 94: VarData.VarRegionCount=1
- '0000 ' # 96: VarData.VarRegionIndex[0]=0
- 'FF38 ' # 98: VarData.deltaSets[0]=-200
- 'FFCE ' # 100: VarData.deltaSets[0]=-50
- '0064 ' # 102: VarData.deltaSets[0]=100
- '00C8 ' # 104: VarData.deltaSets[0]=200
+ '7465 7374 ' # 68: ValueRecord.valueTag="test"
+ '0000 ' # 72: ValueRecord.deltaSetOuterIndex
+ '0002 ' # 74: ValueRecord.deltaSetInnerIndex
+ '7465 7332 ' # 76: ValueRecord.valueTag="tes2"
+ '0000 ' # 78: ValueRecord.deltaSetOuterIndex
+ '0002 ' # 82: ValueRecord.deltaSetInnerIndex
+ '0001 ' # 84: VarStore.format=1
+ '0000 000C ' # 86: VarStore.offsetToVariationRegionList=12
+ '0001 ' # 90: VarStore.itemVariationDataCount=1
+ '0000 0016 ' # 92: VarStore.itemVariationDataOffsets[0]=22
+ '0001 ' # 96: VarRegionList.axisCount=1
+ '0001 ' # 98: VarRegionList.regionCount=1
+ '0000 ' # 100: variationRegions[0].regionAxes[0].startCoord=0.0
+ '4000 ' # 102: variationRegions[0].regionAxes[0].peakCoord=1.0
+ '4000 ' # 104: variationRegions[0].regionAxes[0].endCoord=1.0
+ '0004 ' # 106: VarData.ItemCount=4
+ '0001 ' # 108: VarData.NumShorts=1
+ '0001 ' # 110: VarData.VarRegionCount=1
+ '0000 ' # 112: VarData.VarRegionIndex[0]=0
+ 'FF38 ' # 114: VarData.deltaSets[0]=-200
+ 'FFCE ' # 116: VarData.deltaSets[0]=-50
+ '0064 ' # 118: VarData.deltaSets[0]=100
+ '00C8 ' # 120: VarData.deltaSets[0]=200
)
MVAR_XML = [
'<Version value="0x00010000"/>',
'<Reserved value="0"/>',
'<ValueRecordSize value="8"/>',
- '<!-- ValueRecordCount=7 -->',
+ '<!-- ValueRecordCount=9 -->',
'<VarStore Format="1">',
' <Format value="1"/>',
' <VarRegionList>',
@@ -108,6 +114,14 @@
' <ValueTag value="spyo"/>',
' <VarIdx value="2"/>',
'</ValueRecord>',
+ '<ValueRecord index="7">',
+ ' <ValueTag value="test"/>',
+ ' <VarIdx value="2"/>',
+ '</ValueRecord>',
+ '<ValueRecord index="8">',
+ ' <ValueTag value="tes2"/>',
+ ' <VarIdx value="2"/>',
+ '</ValueRecord>',
]
@@ -123,6 +137,13 @@ def test_decompile_toXML(self):
mvar.decompile(MVAR_DATA, font)
self.assertEqual(getXML(mvar.toXML), MVAR_XML)
+
+ def test_decompile_toXML_lazy(self):
+ mvar = newTable('MVAR')
+ font = TTFont(lazy=True)
+ mvar.decompile(MVAR_DATA, font)
+ self.assertEqual(getXML(mvar.toXML), MVAR_XML)
+
def test_compile_fromXML(self):
mvar = newTable('MVAR')
font = TTFont()
|
SeldonIO__MLServer-1172 | Star imports from `mlserver.codecs` not working
For example:
```python
from mlserver.codecs import *
```
Throws an error:
```python
Traceback (most recent call last):
File "/home/janis/.conda/envs/py310/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3460, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-b8cc62508f29>", line 1, in <module>
from mlserver.codecs import *
AttributeError: module 'mlserver.codecs' has no attribute 'StringRequestCodec'
```
This is likely because `__all__` is out-of-date with the actual imports. I haven't tested other sub-packages, but it might be worth looking at those too.
P.S. I'm not a big fan of `__all__` and star imports in particular; the main issue is that the existence of `__all__` gives rise to two public APIs, which may diverge (as they have in this case).
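One way to guard against this kind of drift (a hypothetical regression test, not something from the report or the MLServer code base) is to assert that every name in `__all__` actually resolves on the package:
```python
# Hypothetical check (not part of the original issue or repo): every name
# listed in mlserver.codecs.__all__ should be an attribute of the package.
import importlib


def test_all_names_resolve():
    codecs = importlib.import_module("mlserver.codecs")
    missing = [name for name in codecs.__all__ if not hasattr(codecs, name)]
    # With the bug described above, this reports ['StringRequestCodec'].
    assert not missing, f"__all__ lists names that are not importable: {missing}"
```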
| [
{
"content": "from .numpy import NumpyCodec, NumpyRequestCodec\nfrom .pandas import PandasCodec\nfrom .string import StringCodec\nfrom .base64 import Base64Codec\nfrom .datetime import DatetimeCodec\nfrom .errors import CodecError\nfrom .decorator import decode_args\nfrom .base import (\n InputCodec,\n RequestCodec,\n register_input_codec,\n register_request_codec,\n InputCodecLike,\n RequestCodecLike,\n)\nfrom .utils import (\n DecodedParameterName,\n has_decoded,\n get_decoded,\n get_decoded_or_raw,\n encode_inference_response,\n encode_response_output,\n decode_request_input,\n decode_inference_request,\n)\n\n__all__ = [\n \"CodecError\",\n \"NumpyCodec\",\n \"NumpyRequestCodec\",\n \"StringCodec\",\n \"StringRequestCodec\",\n \"Base64Codec\",\n \"DatetimeCodec\",\n \"PandasCodec\",\n \"InputCodec\",\n \"InputCodecLike\",\n \"RequestCodec\",\n \"RequestCodecLike\",\n \"DecodedParameterName\",\n \"register_input_codec\",\n \"register_request_codec\",\n \"has_decoded\",\n \"get_decoded\",\n \"get_decoded_or_raw\",\n \"encode_inference_response\",\n \"encode_response_output\",\n \"decode_request_input\",\n \"decode_inference_request\",\n \"decode_args\",\n]\n",
"path": "mlserver/codecs/__init__.py"
}
] | [
{
"content": "from .numpy import NumpyCodec, NumpyRequestCodec\nfrom .pandas import PandasCodec\nfrom .string import StringCodec, StringRequestCodec\nfrom .base64 import Base64Codec\nfrom .datetime import DatetimeCodec\nfrom .errors import CodecError\nfrom .decorator import decode_args\nfrom .base import (\n InputCodec,\n RequestCodec,\n register_input_codec,\n register_request_codec,\n InputCodecLike,\n RequestCodecLike,\n)\nfrom .utils import (\n DecodedParameterName,\n has_decoded,\n get_decoded,\n get_decoded_or_raw,\n encode_inference_response,\n encode_response_output,\n decode_request_input,\n decode_inference_request,\n)\n\n__all__ = [\n \"CodecError\",\n \"NumpyCodec\",\n \"NumpyRequestCodec\",\n \"StringCodec\",\n \"StringRequestCodec\",\n \"Base64Codec\",\n \"DatetimeCodec\",\n \"PandasCodec\",\n \"InputCodec\",\n \"InputCodecLike\",\n \"RequestCodec\",\n \"RequestCodecLike\",\n \"DecodedParameterName\",\n \"register_input_codec\",\n \"register_request_codec\",\n \"has_decoded\",\n \"get_decoded\",\n \"get_decoded_or_raw\",\n \"encode_inference_response\",\n \"encode_response_output\",\n \"decode_request_input\",\n \"decode_inference_request\",\n \"decode_args\",\n]\n",
"path": "mlserver/codecs/__init__.py"
}
] | diff --git a/mlserver/codecs/__init__.py b/mlserver/codecs/__init__.py
index 47f6a1880..99211dd32 100644
--- a/mlserver/codecs/__init__.py
+++ b/mlserver/codecs/__init__.py
@@ -1,6 +1,6 @@
from .numpy import NumpyCodec, NumpyRequestCodec
from .pandas import PandasCodec
-from .string import StringCodec
+from .string import StringCodec, StringRequestCodec
from .base64 import Base64Codec
from .datetime import DatetimeCodec
from .errors import CodecError
|
saulpw__visidata-591 | [wishlist] pasting data into input(): should not react to newlines
I've come up against this a few times now, where I've accidentally pasted multi-line data into the regex search input. This does two things:
1. When a newline character is pasted, it is taken as an Enter, which completes the command
2. If there are any characters after the newline, these are entered as key commands, which could potentially mess up the table
Recreate:
1. Open a sheet
2. Copy some text to search for. This will include a new line character
3. Use / to open the input prompt
4. Paste the text
I often do this when I copy data from a cell using zY (the cell data may be partially hidden). The cell value may contain a newline character that I'm unaware of, and upon pasting into the input field it searches for everything up to the newline, but then enters a bunch of unintended key combinations from the following line.
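The fix applied in the diff below flushes the keyboard buffer once the line editor returns. In plain curses the idea looks roughly like this (a minimal sketch under that assumption, not VisiData's actual editline code):
```python
# Minimal sketch of the idea behind the fix: after a line of input has been
# read (input stops at the first newline), discard whatever keystrokes are
# still queued so the rest of the paste is not replayed as commands.
import curses


def read_line_then_flush(scr):
    curses.echo()
    try:
        line = scr.getstr().decode("utf-8")  # reading stops at the newline
    finally:
        curses.noecho()
    curses.flushinp()  # drop any buffered keystrokes left over from the paste
    return line
```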
| [
{
"content": "from contextlib import suppress\nimport collections\nimport curses\n\nimport visidata\n\nfrom visidata import EscapeException, ExpectedException, clipdraw, Sheet, VisiData\nfrom visidata import vd, status, error, warning, fail, options, theme, colors\nfrom visidata import launchExternalEditor, suspend, ColumnItem, ENTER\n\n__all__ = ['confirm', 'CompleteKey']\n\ntheme('color_edit_cell', 'normal', 'cell color to use when editing cell')\ntheme('disp_edit_fill', '_', 'edit field fill character')\ntheme('disp_unprintable', '·', 'substitute character for unprintables')\n\nVisiData.init('lastInputs', lambda: collections.defaultdict(list)) # [input_type] -> list of prevInputs\n\n# editline helpers\n\nclass EnableCursor:\n def __enter__(self):\n with suppress(curses.error):\n curses.mousemask(0)\n curses.curs_set(1)\n\n def __exit__(self, exc_type, exc_val, tb):\n with suppress(curses.error):\n curses.curs_set(0)\n curses.mousemask(-1)\n\ndef until_get_wch(scr):\n 'Ignores get_wch timeouts'\n ret = None\n while not ret:\n try:\n ret = scr.get_wch()\n except curses.error:\n pass\n\n return ret\n\n\ndef splice(v, i, s):\n 'Insert `s` into string `v` at `i` (such that v[i] == s[0]).'\n return v if i < 0 else v[:i] + s + v[i:]\n\n\ndef clean_printable(s):\n 'Escape unprintable characters.'\n return ''.join(c if c.isprintable() else options.disp_unprintable for c in str(s))\n\n\ndef delchar(s, i, remove=1):\n 'Delete `remove` characters from str `s` beginning at position `i`.'\n return s if i < 0 else s[:i] + s[i+remove:]\n\n\nclass CompleteState:\n def __init__(self, completer_func):\n self.comps_idx = -1\n self.completer_func = completer_func\n self.former_i = None\n self.just_completed = False\n\n def complete(self, v, i, state_incr):\n self.just_completed = True\n self.comps_idx += state_incr\n\n if self.former_i is None:\n self.former_i = i\n try:\n r = self.completer_func(v[:self.former_i], self.comps_idx)\n except Exception as e:\n # raise # beep/flash; how to report exception?\n return v, i\n\n if not r:\n # beep/flash to indicate no matches?\n return v, i\n\n v = r + v[i:]\n return v, len(v)\n\n def reset(self):\n if self.just_completed:\n self.just_completed = False\n else:\n self.former_i = None\n self.comps_idx = -1\n\nclass HistoryState:\n def __init__(self, history):\n self.history = history\n self.hist_idx = None\n self.prev_val = None\n\n def up(self, v, i):\n if self.hist_idx is None:\n self.hist_idx = len(self.history)\n self.prev_val = v\n if self.hist_idx > 0:\n self.hist_idx -= 1\n v = self.history[self.hist_idx]\n i = len(v)\n return v, i\n\n def down(self, v, i):\n if self.hist_idx is None:\n return v, i\n elif self.hist_idx < len(self.history)-1:\n self.hist_idx += 1\n v = self.history[self.hist_idx]\n else:\n v = self.prev_val\n self.hist_idx = None\n i = len(v)\n return v, i\n\n\n# history: earliest entry first\[email protected]\ndef editline(vd, scr, y, x, w, i=0, attr=curses.A_NORMAL, value='', fillchar=' ', truncchar='-', unprintablechar='.', completer=lambda text,idx: None, history=[], display=True, updater=lambda val: None):\n 'A better curses line editing widget.'\n with EnableCursor():\n ESC='^['\n ENTER='^J'\n TAB='^I'\n\n history_state = HistoryState(history)\n complete_state = CompleteState(completer)\n insert_mode = True\n first_action = True\n v = str(value) # value under edit\n\n # i = 0 # index into v, initial value can be passed in as argument as of 1.2\n if i != 0:\n first_action = False\n\n left_truncchar = right_truncchar = truncchar\n\n def 
rfind_nonword(s, a, b):\n if not s:\n return 0\n\n while not s[b].isalnum() and b >= a: # first skip non-word chars\n b -= 1\n while s[b].isalnum() and b >= a:\n b -= 1\n return b\n\n while True:\n updater(v)\n\n if display:\n dispval = clean_printable(v)\n else:\n dispval = '*' * len(v)\n\n dispi = i # the onscreen offset within the field where v[i] is displayed\n if len(dispval) < w: # entire value fits\n dispval += fillchar*(w-len(dispval)-1)\n elif i == len(dispval): # cursor after value (will append)\n dispi = w-1\n dispval = left_truncchar + dispval[len(dispval)-w+2:] + fillchar\n elif i >= len(dispval)-w//2: # cursor within halfwidth of end\n dispi = w-(len(dispval)-i)\n dispval = left_truncchar + dispval[len(dispval)-w+1:]\n elif i <= w//2: # cursor within halfwidth of beginning\n dispval = dispval[:w-1] + right_truncchar\n else:\n dispi = w//2 # visual cursor stays right in the middle\n k = 1 if w%2==0 else 0 # odd widths have one character more\n dispval = left_truncchar + dispval[i-w//2+1:i+w//2-k] + right_truncchar\n\n prew = clipdraw(scr, y, x, dispval[:dispi], attr, w)\n clipdraw(scr, y, x+prew, dispval[dispi:], attr, w-prew+1)\n scr.move(y, x+prew)\n ch = vd.getkeystroke(scr)\n if ch == '': continue\n elif ch == 'KEY_IC': insert_mode = not insert_mode\n elif ch == '^A' or ch == 'KEY_HOME': i = 0\n elif ch == '^B' or ch == 'KEY_LEFT': i -= 1\n elif ch in ('^C', '^Q', ESC): raise EscapeException(ch)\n elif ch == '^D' or ch == 'KEY_DC': v = delchar(v, i)\n elif ch == '^E' or ch == 'KEY_END': i = len(v)\n elif ch == '^F' or ch == 'KEY_RIGHT': i += 1\n elif ch in ('^H', 'KEY_BACKSPACE', '^?'): i -= 1; v = delchar(v, i)\n elif ch == TAB: v, i = complete_state.complete(v, i, +1)\n elif ch == 'KEY_BTAB': v, i = complete_state.complete(v, i, -1)\n elif ch == ENTER: break\n elif ch == '^K': v = v[:i] # ^Kill to end-of-line\n elif ch == '^O': v = launchExternalEditor(v)\n elif ch == '^R': v = str(value) # ^Reload initial value\n elif ch == '^T': v = delchar(splice(v, i-2, v[i-1]), i) # swap chars\n elif ch == '^U': v = v[i:]; i = 0 # clear to beginning\n elif ch == '^V': v = splice(v, i, until_get_wch(scr)); i += 1 # literal character\n elif ch == '^W': j = rfind_nonword(v, 0, i-1); v = v[:j+1] + v[i:]; i = j+1 # erase word\n elif ch == '^Z': suspend()\n elif history and ch == 'KEY_UP': v, i = history_state.up(v, i)\n elif history and ch == 'KEY_DOWN': v, i = history_state.down(v, i)\n elif ch.startswith('KEY_'): pass\n else:\n if first_action:\n v = ''\n if insert_mode:\n v = splice(v, i, ch)\n else:\n v = v[:i] + ch + v[i+1:]\n\n i += 1\n\n if i < 0: i = 0\n if i > len(v): i = len(v)\n first_action = False\n complete_state.reset()\n\n return v\n\n\[email protected]\ndef editText(vd, y, x, w, record=True, display=True, **kwargs):\n 'Wrap editline; if record=True, get input from the cmdlog in batch mode, save input to the cmdlog if display=True.'\n v = None\n if record and vd.cmdlog:\n v = vd.getLastArgs()\n\n if v is None:\n v = vd.editline(vd.sheets[0]._scr, y, x, w, display=display, **kwargs)\n\n if display:\n status('\"%s\"' % v)\n if record and vd.cmdlog:\n vd.setLastArgs(v)\n return v\n\n\[email protected]\ndef inputsingle(vd, prompt, record=True):\n 'Display prompt and return single character of user input.'\n sheet = vd.sheets[0]\n rstatuslen = vd.drawRightStatus(sheet._scr, sheet)\n\n v = None\n if record and vd.cmdlog:\n v = vd.getLastArgs()\n\n if v is not None:\n return v\n\n y = sheet.windowHeight-1\n w = sheet.windowWidth\n rstatuslen = vd.drawRightStatus(sheet._scr, 
sheet)\n promptlen = clipdraw(sheet._scr, y, 0, prompt, 0, w=w-rstatuslen-1)\n sheet._scr.move(y, w-promptlen-rstatuslen-2)\n v = vd.getkeystroke(sheet._scr)\n\n if record and vd.cmdlog:\n vd.setLastArgs(v)\n\n return v\n\n\[email protected]\ndef input(self, prompt, type=None, defaultLast=False, history=[], **kwargs):\n '''Display prompt and return line of user input.\n\n type: list of previous items, or a string indicating the type of input.\n defaultLast: on empty input, if True, return last history item\n '''\n if type:\n if isinstance(type, str):\n history = self.lastInputs[type]\n else:\n history = type\n\n sheet = self.sheets[0]\n rstatuslen = self.drawRightStatus(sheet._scr, sheet)\n attr = 0\n promptlen = clipdraw(sheet._scr, sheet.windowHeight-1, 0, prompt, attr, w=sheet.windowWidth-rstatuslen-1)\n ret = self.editText(sheet.windowHeight-1, promptlen, sheet.windowWidth-promptlen-rstatuslen-2,\n attr=colors.color_edit_cell,\n unprintablechar=options.disp_unprintable,\n truncchar=options.disp_truncator,\n history=history,\n **kwargs)\n\n if ret:\n if isinstance(type, str):\n self.lastInputs[type].append(ret)\n elif defaultLast:\n history or fail(\"no previous input\")\n ret = history[-1]\n\n return ret\n\n\[email protected]_api\ndef confirm(vd, prompt, exc=EscapeException):\n yn = vd.input(prompt, value='no', record=False)[:1]\n if not yn or yn not in 'Yy':\n msg = 'disconfirmed: ' + prompt\n if exc:\n raise exc(msg)\n warning(msg)\n return False\n return True\n\n\nclass CompleteKey:\n def __init__(self, items):\n self.items = items\n\n def __call__(self, val, state):\n opts = [x for x in self.items if x.startswith(val)]\n return opts[state%len(opts)] if opts else val\n\n\[email protected]\ndef editCell(self, vcolidx=None, rowidx=None, value=None, **kwargs):\n 'Call `editText` at its place on the screen. Returns the new value, properly typed'\n\n if vcolidx is None:\n vcolidx = self.cursorVisibleColIndex\n x, w = self._visibleColLayout.get(vcolidx, (0, 0))\n\n col = self.visibleCols[vcolidx]\n if rowidx is None:\n rowidx = self.cursorRowIndex\n\n if rowidx < 0: # header\n y = 0\n value = value or col.name\n else:\n y, h = self._rowLayout.get(rowidx, (0, 0))\n value = value or col.getDisplayValue(self.rows[self.cursorRowIndex])\n\n editargs = dict(value=value,\n fillchar=options.disp_edit_fill,\n truncchar=options.disp_truncator)\n editargs.update(kwargs) # update with user-specified args\n r = vd.editText(y, x, w, **editargs)\n if rowidx >= 0: # if not header\n r = col.type(r) # convert input to column type, let exceptions be raised\n\n return r\n",
"path": "visidata/_input.py"
}
] | [
{
"content": "from contextlib import suppress\nimport collections\nimport curses\n\nimport visidata\n\nfrom visidata import EscapeException, ExpectedException, clipdraw, Sheet, VisiData\nfrom visidata import vd, status, error, warning, fail, options, theme, colors\nfrom visidata import launchExternalEditor, suspend, ColumnItem, ENTER\n\n__all__ = ['confirm', 'CompleteKey']\n\ntheme('color_edit_cell', 'normal', 'cell color to use when editing cell')\ntheme('disp_edit_fill', '_', 'edit field fill character')\ntheme('disp_unprintable', '·', 'substitute character for unprintables')\n\nVisiData.init('lastInputs', lambda: collections.defaultdict(list)) # [input_type] -> list of prevInputs\n\n# editline helpers\n\nclass EnableCursor:\n def __enter__(self):\n with suppress(curses.error):\n curses.mousemask(0)\n curses.curs_set(1)\n\n def __exit__(self, exc_type, exc_val, tb):\n with suppress(curses.error):\n curses.curs_set(0)\n curses.mousemask(-1)\n\ndef until_get_wch(scr):\n 'Ignores get_wch timeouts'\n ret = None\n while not ret:\n try:\n ret = scr.get_wch()\n except curses.error:\n pass\n\n return ret\n\n\ndef splice(v, i, s):\n 'Insert `s` into string `v` at `i` (such that v[i] == s[0]).'\n return v if i < 0 else v[:i] + s + v[i:]\n\n\ndef clean_printable(s):\n 'Escape unprintable characters.'\n return ''.join(c if c.isprintable() else options.disp_unprintable for c in str(s))\n\n\ndef delchar(s, i, remove=1):\n 'Delete `remove` characters from str `s` beginning at position `i`.'\n return s if i < 0 else s[:i] + s[i+remove:]\n\n\nclass CompleteState:\n def __init__(self, completer_func):\n self.comps_idx = -1\n self.completer_func = completer_func\n self.former_i = None\n self.just_completed = False\n\n def complete(self, v, i, state_incr):\n self.just_completed = True\n self.comps_idx += state_incr\n\n if self.former_i is None:\n self.former_i = i\n try:\n r = self.completer_func(v[:self.former_i], self.comps_idx)\n except Exception as e:\n # raise # beep/flash; how to report exception?\n return v, i\n\n if not r:\n # beep/flash to indicate no matches?\n return v, i\n\n v = r + v[i:]\n return v, len(v)\n\n def reset(self):\n if self.just_completed:\n self.just_completed = False\n else:\n self.former_i = None\n self.comps_idx = -1\n\nclass HistoryState:\n def __init__(self, history):\n self.history = history\n self.hist_idx = None\n self.prev_val = None\n\n def up(self, v, i):\n if self.hist_idx is None:\n self.hist_idx = len(self.history)\n self.prev_val = v\n if self.hist_idx > 0:\n self.hist_idx -= 1\n v = self.history[self.hist_idx]\n i = len(v)\n return v, i\n\n def down(self, v, i):\n if self.hist_idx is None:\n return v, i\n elif self.hist_idx < len(self.history)-1:\n self.hist_idx += 1\n v = self.history[self.hist_idx]\n else:\n v = self.prev_val\n self.hist_idx = None\n i = len(v)\n return v, i\n\n\n# history: earliest entry first\[email protected]\ndef editline(vd, scr, y, x, w, i=0, attr=curses.A_NORMAL, value='', fillchar=' ', truncchar='-', unprintablechar='.', completer=lambda text,idx: None, history=[], display=True, updater=lambda val: None):\n 'A better curses line editing widget.'\n with EnableCursor():\n ESC='^['\n ENTER='^J'\n TAB='^I'\n\n history_state = HistoryState(history)\n complete_state = CompleteState(completer)\n insert_mode = True\n first_action = True\n v = str(value) # value under edit\n\n # i = 0 # index into v, initial value can be passed in as argument as of 1.2\n if i != 0:\n first_action = False\n\n left_truncchar = right_truncchar = truncchar\n\n def 
rfind_nonword(s, a, b):\n if not s:\n return 0\n\n while not s[b].isalnum() and b >= a: # first skip non-word chars\n b -= 1\n while s[b].isalnum() and b >= a:\n b -= 1\n return b\n\n while True:\n updater(v)\n\n if display:\n dispval = clean_printable(v)\n else:\n dispval = '*' * len(v)\n\n dispi = i # the onscreen offset within the field where v[i] is displayed\n if len(dispval) < w: # entire value fits\n dispval += fillchar*(w-len(dispval)-1)\n elif i == len(dispval): # cursor after value (will append)\n dispi = w-1\n dispval = left_truncchar + dispval[len(dispval)-w+2:] + fillchar\n elif i >= len(dispval)-w//2: # cursor within halfwidth of end\n dispi = w-(len(dispval)-i)\n dispval = left_truncchar + dispval[len(dispval)-w+1:]\n elif i <= w//2: # cursor within halfwidth of beginning\n dispval = dispval[:w-1] + right_truncchar\n else:\n dispi = w//2 # visual cursor stays right in the middle\n k = 1 if w%2==0 else 0 # odd widths have one character more\n dispval = left_truncchar + dispval[i-w//2+1:i+w//2-k] + right_truncchar\n\n prew = clipdraw(scr, y, x, dispval[:dispi], attr, w)\n clipdraw(scr, y, x+prew, dispval[dispi:], attr, w-prew+1)\n scr.move(y, x+prew)\n ch = vd.getkeystroke(scr)\n if ch == '': continue\n elif ch == 'KEY_IC': insert_mode = not insert_mode\n elif ch == '^A' or ch == 'KEY_HOME': i = 0\n elif ch == '^B' or ch == 'KEY_LEFT': i -= 1\n elif ch in ('^C', '^Q', ESC): raise EscapeException(ch)\n elif ch == '^D' or ch == 'KEY_DC': v = delchar(v, i)\n elif ch == '^E' or ch == 'KEY_END': i = len(v)\n elif ch == '^F' or ch == 'KEY_RIGHT': i += 1\n elif ch in ('^H', 'KEY_BACKSPACE', '^?'): i -= 1; v = delchar(v, i)\n elif ch == TAB: v, i = complete_state.complete(v, i, +1)\n elif ch == 'KEY_BTAB': v, i = complete_state.complete(v, i, -1)\n elif ch == ENTER: break\n elif ch == '^K': v = v[:i] # ^Kill to end-of-line\n elif ch == '^O': v = launchExternalEditor(v)\n elif ch == '^R': v = str(value) # ^Reload initial value\n elif ch == '^T': v = delchar(splice(v, i-2, v[i-1]), i) # swap chars\n elif ch == '^U': v = v[i:]; i = 0 # clear to beginning\n elif ch == '^V': v = splice(v, i, until_get_wch(scr)); i += 1 # literal character\n elif ch == '^W': j = rfind_nonword(v, 0, i-1); v = v[:j+1] + v[i:]; i = j+1 # erase word\n elif ch == '^Z': suspend()\n elif history and ch == 'KEY_UP': v, i = history_state.up(v, i)\n elif history and ch == 'KEY_DOWN': v, i = history_state.down(v, i)\n elif ch.startswith('KEY_'): pass\n else:\n if first_action:\n v = ''\n if insert_mode:\n v = splice(v, i, ch)\n else:\n v = v[:i] + ch + v[i+1:]\n\n i += 1\n\n if i < 0: i = 0\n if i > len(v): i = len(v)\n first_action = False\n complete_state.reset()\n\n return v\n\n\[email protected]\ndef editText(vd, y, x, w, record=True, display=True, **kwargs):\n 'Wrap editline; if record=True, get input from the cmdlog in batch mode, save input to the cmdlog if display=True.'\n v = None\n if record and vd.cmdlog:\n v = vd.getLastArgs()\n\n if v is None:\n v = vd.editline(vd.sheets[0]._scr, y, x, w, display=display, **kwargs)\n\n if display:\n status('\"%s\"' % v)\n if record and vd.cmdlog:\n vd.setLastArgs(v)\n\n # clear keyboard buffer upon exit from input()\n # input() stops when it reaches an ENTER, and we do not want the expressions\n # that follow to register as keystrokes\n # see issue#585\n curses.flushinp()\n\n return v\n\n\[email protected]\ndef inputsingle(vd, prompt, record=True):\n 'Display prompt and return single character of user input.'\n sheet = vd.sheets[0]\n rstatuslen = 
vd.drawRightStatus(sheet._scr, sheet)\n\n v = None\n if record and vd.cmdlog:\n v = vd.getLastArgs()\n\n if v is not None:\n return v\n\n y = sheet.windowHeight-1\n w = sheet.windowWidth\n rstatuslen = vd.drawRightStatus(sheet._scr, sheet)\n promptlen = clipdraw(sheet._scr, y, 0, prompt, 0, w=w-rstatuslen-1)\n sheet._scr.move(y, w-promptlen-rstatuslen-2)\n v = vd.getkeystroke(sheet._scr)\n\n if record and vd.cmdlog:\n vd.setLastArgs(v)\n\n return v\n\n\[email protected]\ndef input(self, prompt, type=None, defaultLast=False, history=[], **kwargs):\n '''Display prompt and return line of user input.\n\n type: list of previous items, or a string indicating the type of input.\n defaultLast: on empty input, if True, return last history item\n '''\n if type:\n if isinstance(type, str):\n history = self.lastInputs[type]\n else:\n history = type\n\n sheet = self.sheets[0]\n rstatuslen = self.drawRightStatus(sheet._scr, sheet)\n attr = 0\n promptlen = clipdraw(sheet._scr, sheet.windowHeight-1, 0, prompt, attr, w=sheet.windowWidth-rstatuslen-1)\n ret = self.editText(sheet.windowHeight-1, promptlen, sheet.windowWidth-promptlen-rstatuslen-2,\n attr=colors.color_edit_cell,\n unprintablechar=options.disp_unprintable,\n truncchar=options.disp_truncator,\n history=history,\n **kwargs)\n\n if ret:\n if isinstance(type, str):\n self.lastInputs[type].append(ret)\n elif defaultLast:\n history or fail(\"no previous input\")\n ret = history[-1]\n\n return ret\n\n\[email protected]_api\ndef confirm(vd, prompt, exc=EscapeException):\n yn = vd.input(prompt, value='no', record=False)[:1]\n if not yn or yn not in 'Yy':\n msg = 'disconfirmed: ' + prompt\n if exc:\n raise exc(msg)\n warning(msg)\n return False\n return True\n\n\nclass CompleteKey:\n def __init__(self, items):\n self.items = items\n\n def __call__(self, val, state):\n opts = [x for x in self.items if x.startswith(val)]\n return opts[state%len(opts)] if opts else val\n\n\[email protected]\ndef editCell(self, vcolidx=None, rowidx=None, value=None, **kwargs):\n 'Call `editText` at its place on the screen. Returns the new value, properly typed'\n\n if vcolidx is None:\n vcolidx = self.cursorVisibleColIndex\n x, w = self._visibleColLayout.get(vcolidx, (0, 0))\n\n col = self.visibleCols[vcolidx]\n if rowidx is None:\n rowidx = self.cursorRowIndex\n\n if rowidx < 0: # header\n y = 0\n value = value or col.name\n else:\n y, h = self._rowLayout.get(rowidx, (0, 0))\n value = value or col.getDisplayValue(self.rows[self.cursorRowIndex])\n\n editargs = dict(value=value,\n fillchar=options.disp_edit_fill,\n truncchar=options.disp_truncator)\n editargs.update(kwargs) # update with user-specified args\n r = vd.editText(y, x, w, **editargs)\n if rowidx >= 0: # if not header\n r = col.type(r) # convert input to column type, let exceptions be raised\n\n return r\n",
"path": "visidata/_input.py"
}
] | diff --git a/visidata/_input.py b/visidata/_input.py
index 677d14b4c..1f67ef44b 100644
--- a/visidata/_input.py
+++ b/visidata/_input.py
@@ -232,6 +232,13 @@ def editText(vd, y, x, w, record=True, display=True, **kwargs):
status('"%s"' % v)
if record and vd.cmdlog:
vd.setLastArgs(v)
+
+ # clear keyboard buffer upon exit from input()
+ # input() stops when it reaches an ENTER, and we do not want the expressions
+ # that follow to register as keystrokes
+ # see issue#585
+ curses.flushinp()
+
return v
|
ivy-llc__ivy-13703 | ptp
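For context (not part of the original one-word issue): `ptp` stands for "peak to peak", the maximum minus the minimum along an axis, which is what the diff below implements via `ivy.max` and `ivy.min`. A quick NumPy illustration:
```python
# ptp ("peak to peak") is simply max - min along the chosen axis.
import numpy as np

a = np.array([[4, 9, 2, 10],
              [6, 9, 7, 12]])
print(np.ptp(a))          # 10 -> 12 - 2 over the flattened array
print(np.ptp(a, axis=1))  # [8 6] -> per-row max minus min
```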
| [
{
"content": "# local\n\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes\nfrom ivy.functional.frontends.jax.func_wrapper import (\n to_ivy_arrays_and_back,\n handle_jax_dtype,\n)\nfrom ivy.functional.frontends.jax.numpy import promote_types_of_jax_inputs\n\n\n@to_ivy_arrays_and_back\ndef einsum(\n subscripts,\n *operands,\n out=None,\n optimize=\"optimal\",\n precision=None,\n _use_xeinsum=False,\n _dot_general=None,\n):\n return ivy.einsum(subscripts, *operands, out=out)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef mean(a, axis=None, dtype=None, out=None, keepdims=False, *, where=None):\n axis = tuple(axis) if isinstance(axis, list) else axis\n if dtype is None:\n dtype = \"float32\" if ivy.is_int_dtype(a) else a.dtype\n ret = ivy.mean(a, axis=axis, keepdims=keepdims, out=out)\n if ivy.is_array(where):\n where = ivy.array(where, dtype=ivy.bool)\n ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)\n return ivy.astype(ret, ivy.as_ivy_dtype(dtype), copy=False)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef var(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False, *, where=None):\n axis = tuple(axis) if isinstance(axis, list) else axis\n if dtype is None:\n dtype = \"float32\" if ivy.is_int_dtype(a) else a.dtype\n ret = ivy.var(a, axis=axis, correction=ddof, keepdims=keepdims, out=out)\n if ivy.is_array(where):\n where = ivy.array(where, dtype=ivy.bool)\n ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)\n return ivy.astype(ret, ivy.as_ivy_dtype(dtype), copy=False)\n\n\n@to_ivy_arrays_and_back\ndef argmin(a, axis=None, out=None, keepdims=None):\n return ivy.argmin(a, axis=axis, out=out, keepdims=keepdims)\n\n\n@to_ivy_arrays_and_back\ndef bincount(x, weights=None, minlength=0, *, length=None):\n x_list = []\n for i in range(x.shape[0]):\n x_list.append(int(x[i]))\n max_val = int(ivy.max(ivy.array(x_list)))\n ret = [x_list.count(i) for i in range(0, max_val + 1)]\n ret = ivy.array(ret)\n ret = ivy.astype(ret, ivy.as_ivy_dtype(ivy.int64))\n return ret\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef cumprod(a, axis=None, dtype=None, out=None):\n if dtype is None:\n dtype = ivy.as_ivy_dtype(a.dtype)\n return ivy.cumprod(a, axis=axis, dtype=dtype, out=out)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef cumsum(a, axis=0, dtype=None, out=None):\n if dtype is None:\n dtype = ivy.uint8\n return ivy.cumsum(a, axis, dtype=dtype, out=out)\n\n\ncumproduct = cumprod\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef sum(\n a,\n axis=None,\n dtype=None,\n out=None,\n keepdims=False,\n initial=None,\n where=None,\n promote_integers=True,\n):\n if dtype is None:\n dtype = \"float32\" if ivy.is_int_dtype(a.dtype) else ivy.as_ivy_dtype(a.dtype)\n\n # TODO: promote_integers is only supported from JAX v0.3.14\n if dtype is None and promote_integers:\n if ivy.is_bool_dtype(dtype):\n dtype = ivy.default_int_dtype()\n elif ivy.is_uint_dtype(dtype):\n if ivy.dtype_bits(dtype) < ivy.dtype_bits(ivy.default_uint_dtype()):\n dtype = ivy.default_uint_dtype()\n elif ivy.is_int_dtype(dtype):\n if ivy.dtype_bits(dtype) < ivy.dtype_bits(ivy.default_int_dtype()):\n dtype = ivy.default_int_dtype()\n\n if initial:\n if axis is None:\n a = ivy.reshape(a, (1, -1))\n axis = 0\n s = list(ivy.shape(a))\n s[axis] = 1\n header = ivy.full(s, initial)\n a = ivy.concat([a, header], axis=axis)\n\n ret = ivy.sum(a, axis=axis, keepdims=keepdims, out=out)\n\n if ivy.is_array(where):\n where = ivy.array(where, dtype=ivy.bool)\n ret = 
ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)\n return ivy.astype(ret, ivy.as_ivy_dtype(dtype))\n\n\n@to_ivy_arrays_and_back\ndef min(a, axis=None, out=None, keepdims=False, where=None):\n ret = ivy.min(a, axis=axis, out=out, keepdims=keepdims)\n if ivy.is_array(where):\n where = ivy.array(where, dtype=ivy.bool)\n ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)\n return ret\n\n\namin = min\n\n\n@to_ivy_arrays_and_back\ndef max(a, axis=None, out=None, keepdims=False, where=None):\n ret = ivy.max(a, axis=axis, out=out, keepdims=keepdims)\n if ivy.is_array(where):\n where = ivy.array(where, dtype=ivy.bool)\n ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)\n return ret\n\n\namax = max\n\n\n@to_ivy_arrays_and_back\ndef average(a, axis=None, weights=None, returned=False, keepdims=False):\n\n # canonicalize_axis to ensure axis or the values in axis > 0\n if isinstance(axis, tuple) or isinstance(axis, list):\n a_ndim = len(ivy.shape(a))\n new_axis = [0] * len(axis)\n for i, v in enumerate(axis):\n if not -a_ndim <= v < a_ndim:\n raise ValueError(\n f\"axis {v} is out of bounds for array of \\\n dimension {a_ndim}\"\n )\n if v < 0:\n new_axis[i] = v + a_ndim\n else:\n new_axis[i] = v\n axis = tuple(new_axis)\n\n if weights is None:\n ret = ivy.mean(a, axis=axis, keepdims=keepdims)\n if axis is None:\n fill_value = int(a.size) if ivy.is_int_dtype(ret) else float(a.size)\n weights_sum = ivy.full(shape=(), fill_value=fill_value, dtype=ret.dtype)\n else:\n if isinstance(axis, tuple):\n # prod with axis has dtype Sequence[int]\n fill_value = 1\n for d in axis:\n fill_value *= a.shape[d]\n else:\n fill_value = a.shape[axis]\n fill_value = int(fill_value) if ivy.is_int_dtype(ret) else float(fill_value)\n weights_sum = ivy.full_like(ret, fill_value=fill_value)\n else:\n a = ivy.asarray(a, copy=False)\n weights = ivy.asarray(weights, copy=False)\n a, weights = promote_types_of_jax_inputs(a, weights)\n\n a_shape = ivy.shape(a)\n a_ndim = len(a_shape)\n weights_shape = ivy.shape(weights)\n\n # Make sure the dimensions work out\n if a_shape != weights_shape:\n if len(weights_shape) != 1:\n raise ValueError(\n \"1D weights expected when shapes of a and \\\n weights differ.\"\n )\n if axis is None:\n raise ValueError(\n \"Axis must be specified when shapes of a and \\\n weights differ.\"\n )\n elif isinstance(axis, tuple):\n raise ValueError(\n \"Single axis expected when shapes of a and \\\n weights differ\"\n )\n elif not weights.shape[0] == a.shape[axis]:\n raise ValueError(\n \"Length of weights not compatible with \\\n specified axis.\"\n )\n\n weights = ivy.broadcast_to(\n weights, shape=(a_ndim - 1) * (1,) + weights_shape\n )\n weights = ivy.moveaxis(weights, -1, axis)\n\n weights_sum = ivy.sum(weights, axis=axis)\n ret = ivy.sum(a * weights, axis=axis, keepdims=keepdims) / weights_sum\n\n if returned:\n if ret.shape != weights_sum.shape:\n weights_sum = ivy.broadcast_to(weights_sum, shape=ret.shape)\n return ret, weights_sum\n\n return ret\n\n\n@to_ivy_arrays_and_back\ndef nanmax(\n a,\n axis=None,\n out=None,\n keepdims=False,\n initial=None,\n where=True,\n):\n nan_mask = ivy.isnan(a)\n a = ivy.where(ivy.logical_not(nan_mask), a, a.full_like(-ivy.inf))\n where_mask = None\n if initial is not None:\n if ivy.is_array(where):\n a = ivy.where(where, a, a.full_like(initial))\n where_mask = ivy.all(ivy.logical_not(where), axis=axis, keepdims=keepdims)\n s = ivy.shape(a, as_array=True)\n if axis is not None:\n if isinstance(axis, 
(tuple, list)) or ivy.is_array(axis):\n # introducing the initial in one dimension is enough\n ax = axis[0] % len(s)\n s[ax] = 1\n else:\n ax = axis % len(s)\n s[ax] = 1\n header = ivy.full(ivy.Shape(s.to_list()), initial, dtype=ivy.dtype(a))\n if axis:\n if isinstance(axis, (tuple, list)) or ivy.is_array(axis):\n a = ivy.concat([a, header], axis=axis[0])\n else:\n a = ivy.concat([a, header], axis=axis)\n else:\n a = ivy.concat([a, header], axis=0)\n res = ivy.max(a, axis=axis, keepdims=keepdims, out=out)\n if nan_mask is not None:\n nan_mask = ivy.all(nan_mask, axis=axis, keepdims=keepdims, out=out)\n if ivy.any(nan_mask):\n res = ivy.where(\n ivy.logical_not(nan_mask),\n res,\n initial if initial is not None else ivy.nan,\n out=out,\n )\n if where_mask is not None and ivy.any(where_mask):\n res = ivy.where(ivy.logical_not(where_mask), res, ivy.nan, out=out)\n return res\n\n\n@to_ivy_arrays_and_back\ndef nanmin(\n a,\n axis=None,\n out=None,\n keepdims=False,\n initial=None,\n where=True,\n):\n nan_mask = ivy.isnan(a)\n a = ivy.where(ivy.logical_not(nan_mask), a, a.full_like(+ivy.inf))\n where_mask = None\n if initial is not None:\n if ivy.is_array(where):\n a = ivy.where(where, a, a.full_like(initial))\n where_mask = ivy.all(ivy.logical_not(where), axis=axis, keepdims=keepdims)\n s = ivy.shape(a, as_array=True)\n if axis is not None:\n if isinstance(axis, (tuple, list)) or ivy.is_array(axis):\n # introducing the initial in one dimension is enough\n ax = axis[0] % len(s)\n s[ax] = 1\n else:\n ax = axis % len(s)\n s[ax] = 1\n header = ivy.full(ivy.Shape(s.to_list()), initial, dtype=ivy.dtype(a))\n if axis:\n if isinstance(axis, (tuple, list)) or ivy.is_array(axis):\n a = ivy.concat([a, header], axis=axis[0])\n else:\n a = ivy.concat([a, header], axis=axis)\n else:\n a = ivy.concat([a, header], axis=0)\n res = ivy.min(a, axis=axis, keepdims=keepdims, out=out)\n if nan_mask is not None:\n nan_mask = ivy.all(nan_mask, axis=axis, keepdims=keepdims, out=out)\n if ivy.any(nan_mask):\n res = ivy.where(\n ivy.logical_not(nan_mask),\n res,\n initial if initial is not None else ivy.nan,\n out=out,\n )\n if where_mask is not None and ivy.any(where_mask):\n res = ivy.where(ivy.logical_not(where_mask), res, ivy.nan, out=out)\n return res\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef nanstd(\n a, /, *, axis=None, dtype=None, out=None, ddof=0, keepdims=False, where=True\n):\n a = ivy.nan_to_num(a)\n axis = tuple(axis) if isinstance(axis, list) else axis\n\n if dtype:\n a = ivy.astype(ivy.array(a), ivy.as_ivy_dtype(dtype))\n\n ret = ivy.std(a, axis=axis, correction=ddof, keepdims=keepdims, out=out)\n if ivy.is_array(where):\n ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)\n\n return ret\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef nanvar(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False, *, where=True):\n is_nan = ivy.isnan(a)\n axis = tuple(axis) if isinstance(axis, list) else axis\n if dtype is None:\n dtype = \"float16\" if ivy.is_int_dtype(a) else a.dtype\n ret = ivy.var(a, axis=axis, correction=ddof, keepdims=keepdims, out=out)\n if not ivy.any(is_nan):\n if dtype:\n a = ivy.astype(ivy.array(a), ivy.as_ivy_dtype(dtype))\n else:\n dtype = \"float\" if ivy.is_int_dtype(a) else a.dtype\n\n ret = ivy.var(a, axis=axis, correction=ddof, keepdims=keepdims, out=out)\n\n if ivy.is_array(where):\n where = ivy.array(where, dtype=ivy.bool)\n ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)\n\n else:\n a = [i for i in a if 
ivy.isnan(i) is False]\n\n if dtype:\n a = ivy.astype(ivy.array(a), ivy.as_ivy_dtype(dtype))\n else:\n dtype = \"float\" if ivy.is_int_dtype(a) else a.dtype\n\n ret = ivy.var(a, axis=axis, correction=ddof, keepdims=keepdims, out=out)\n\n if ivy.is_array(where):\n where = ivy.array(where, dtype=ivy.bool)\n ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)\n\n all_nan = ivy.isnan(ret)\n if ivy.all(all_nan):\n ret = ivy.astype(ret, ivy.array([float(\"inf\")]))\n return ret\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef nancumsum(a, axis=None, dtype=None, out=None):\n a = ivy.where(ivy.isnan(a), ivy.zeros_like(a), a)\n return ivy.cumsum(a, axis=axis, dtype=dtype, out=out)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef nancumprod(a, axis=None, dtype=None, out=None):\n a = ivy.where(ivy.isnan(a), ivy.zeros_like(a), a)\n return ivy.cumprod(a, axis=axis, dtype=dtype, out=out)\n\n\n@handle_jax_dtype\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"bfloat16\")}, \"jax\")\n@to_ivy_arrays_and_back\ndef std(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False, *, where=None):\n axis = tuple(axis) if isinstance(axis, list) else axis\n if dtype is None:\n dtype = \"float32\" if ivy.is_int_dtype(a) else a.dtype\n std_a = ivy.std(a, axis=axis, correction=ddof, keepdims=keepdims, out=out)\n if ivy.is_array(where):\n where = ivy.array(where, dtype=ivy.bool)\n std_a = ivy.where(\n where, std_a, ivy.default(out, ivy.zeros_like(std_a)), out=out\n )\n return ivy.astype(std_a, ivy.as_ivy_dtype(dtype), copy=False)\n\n\n@to_ivy_arrays_and_back\ndef corrcoef(x, y=None, rowvar=True):\n return ivy.corrcoef(x, y=y, rowvar=rowvar)\n",
"path": "ivy/functional/frontends/jax/numpy/statistical.py"
}
] | [
{
"content": "# local\n\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes\nfrom ivy.functional.frontends.jax.func_wrapper import (\n to_ivy_arrays_and_back,\n handle_jax_dtype,\n)\nfrom ivy.functional.frontends.jax.numpy import promote_types_of_jax_inputs\n\n\n@to_ivy_arrays_and_back\ndef einsum(\n subscripts,\n *operands,\n out=None,\n optimize=\"optimal\",\n precision=None,\n _use_xeinsum=False,\n _dot_general=None,\n):\n return ivy.einsum(subscripts, *operands, out=out)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef mean(a, axis=None, dtype=None, out=None, keepdims=False, *, where=None):\n axis = tuple(axis) if isinstance(axis, list) else axis\n if dtype is None:\n dtype = \"float32\" if ivy.is_int_dtype(a) else a.dtype\n ret = ivy.mean(a, axis=axis, keepdims=keepdims, out=out)\n if ivy.is_array(where):\n where = ivy.array(where, dtype=ivy.bool)\n ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)\n return ivy.astype(ret, ivy.as_ivy_dtype(dtype), copy=False)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef var(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False, *, where=None):\n axis = tuple(axis) if isinstance(axis, list) else axis\n if dtype is None:\n dtype = \"float32\" if ivy.is_int_dtype(a) else a.dtype\n ret = ivy.var(a, axis=axis, correction=ddof, keepdims=keepdims, out=out)\n if ivy.is_array(where):\n where = ivy.array(where, dtype=ivy.bool)\n ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)\n return ivy.astype(ret, ivy.as_ivy_dtype(dtype), copy=False)\n\n\n@to_ivy_arrays_and_back\ndef argmin(a, axis=None, out=None, keepdims=None):\n return ivy.argmin(a, axis=axis, out=out, keepdims=keepdims)\n\n\n@to_ivy_arrays_and_back\ndef bincount(x, weights=None, minlength=0, *, length=None):\n x_list = []\n for i in range(x.shape[0]):\n x_list.append(int(x[i]))\n max_val = int(ivy.max(ivy.array(x_list)))\n ret = [x_list.count(i) for i in range(0, max_val + 1)]\n ret = ivy.array(ret)\n ret = ivy.astype(ret, ivy.as_ivy_dtype(ivy.int64))\n return ret\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef cumprod(a, axis=None, dtype=None, out=None):\n if dtype is None:\n dtype = ivy.as_ivy_dtype(a.dtype)\n return ivy.cumprod(a, axis=axis, dtype=dtype, out=out)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef cumsum(a, axis=0, dtype=None, out=None):\n if dtype is None:\n dtype = ivy.uint8\n return ivy.cumsum(a, axis, dtype=dtype, out=out)\n\n\ncumproduct = cumprod\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef sum(\n a,\n axis=None,\n dtype=None,\n out=None,\n keepdims=False,\n initial=None,\n where=None,\n promote_integers=True,\n):\n if dtype is None:\n dtype = \"float32\" if ivy.is_int_dtype(a.dtype) else ivy.as_ivy_dtype(a.dtype)\n\n # TODO: promote_integers is only supported from JAX v0.3.14\n if dtype is None and promote_integers:\n if ivy.is_bool_dtype(dtype):\n dtype = ivy.default_int_dtype()\n elif ivy.is_uint_dtype(dtype):\n if ivy.dtype_bits(dtype) < ivy.dtype_bits(ivy.default_uint_dtype()):\n dtype = ivy.default_uint_dtype()\n elif ivy.is_int_dtype(dtype):\n if ivy.dtype_bits(dtype) < ivy.dtype_bits(ivy.default_int_dtype()):\n dtype = ivy.default_int_dtype()\n\n if initial:\n if axis is None:\n a = ivy.reshape(a, (1, -1))\n axis = 0\n s = list(ivy.shape(a))\n s[axis] = 1\n header = ivy.full(s, initial)\n a = ivy.concat([a, header], axis=axis)\n\n ret = ivy.sum(a, axis=axis, keepdims=keepdims, out=out)\n\n if ivy.is_array(where):\n where = ivy.array(where, dtype=ivy.bool)\n ret = 
ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)\n return ivy.astype(ret, ivy.as_ivy_dtype(dtype))\n\n\n@to_ivy_arrays_and_back\ndef min(a, axis=None, out=None, keepdims=False, where=None):\n ret = ivy.min(a, axis=axis, out=out, keepdims=keepdims)\n if ivy.is_array(where):\n where = ivy.array(where, dtype=ivy.bool)\n ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)\n return ret\n\n\namin = min\n\n\n@to_ivy_arrays_and_back\ndef max(a, axis=None, out=None, keepdims=False, where=None):\n ret = ivy.max(a, axis=axis, out=out, keepdims=keepdims)\n if ivy.is_array(where):\n where = ivy.array(where, dtype=ivy.bool)\n ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)\n return ret\n\n\namax = max\n\n\n@to_ivy_arrays_and_back\ndef average(a, axis=None, weights=None, returned=False, keepdims=False):\n\n # canonicalize_axis to ensure axis or the values in axis > 0\n if isinstance(axis, tuple) or isinstance(axis, list):\n a_ndim = len(ivy.shape(a))\n new_axis = [0] * len(axis)\n for i, v in enumerate(axis):\n if not -a_ndim <= v < a_ndim:\n raise ValueError(\n f\"axis {v} is out of bounds for array of \\\n dimension {a_ndim}\"\n )\n if v < 0:\n new_axis[i] = v + a_ndim\n else:\n new_axis[i] = v\n axis = tuple(new_axis)\n\n if weights is None:\n ret = ivy.mean(a, axis=axis, keepdims=keepdims)\n if axis is None:\n fill_value = int(a.size) if ivy.is_int_dtype(ret) else float(a.size)\n weights_sum = ivy.full(shape=(), fill_value=fill_value, dtype=ret.dtype)\n else:\n if isinstance(axis, tuple):\n # prod with axis has dtype Sequence[int]\n fill_value = 1\n for d in axis:\n fill_value *= a.shape[d]\n else:\n fill_value = a.shape[axis]\n fill_value = int(fill_value) if ivy.is_int_dtype(ret) else float(fill_value)\n weights_sum = ivy.full_like(ret, fill_value=fill_value)\n else:\n a = ivy.asarray(a, copy=False)\n weights = ivy.asarray(weights, copy=False)\n a, weights = promote_types_of_jax_inputs(a, weights)\n\n a_shape = ivy.shape(a)\n a_ndim = len(a_shape)\n weights_shape = ivy.shape(weights)\n\n # Make sure the dimensions work out\n if a_shape != weights_shape:\n if len(weights_shape) != 1:\n raise ValueError(\n \"1D weights expected when shapes of a and \\\n weights differ.\"\n )\n if axis is None:\n raise ValueError(\n \"Axis must be specified when shapes of a and \\\n weights differ.\"\n )\n elif isinstance(axis, tuple):\n raise ValueError(\n \"Single axis expected when shapes of a and \\\n weights differ\"\n )\n elif not weights.shape[0] == a.shape[axis]:\n raise ValueError(\n \"Length of weights not compatible with \\\n specified axis.\"\n )\n\n weights = ivy.broadcast_to(\n weights, shape=(a_ndim - 1) * (1,) + weights_shape\n )\n weights = ivy.moveaxis(weights, -1, axis)\n\n weights_sum = ivy.sum(weights, axis=axis)\n ret = ivy.sum(a * weights, axis=axis, keepdims=keepdims) / weights_sum\n\n if returned:\n if ret.shape != weights_sum.shape:\n weights_sum = ivy.broadcast_to(weights_sum, shape=ret.shape)\n return ret, weights_sum\n\n return ret\n\n\n@to_ivy_arrays_and_back\ndef nanmax(\n a,\n axis=None,\n out=None,\n keepdims=False,\n initial=None,\n where=True,\n):\n nan_mask = ivy.isnan(a)\n a = ivy.where(ivy.logical_not(nan_mask), a, a.full_like(-ivy.inf))\n where_mask = None\n if initial is not None:\n if ivy.is_array(where):\n a = ivy.where(where, a, a.full_like(initial))\n where_mask = ivy.all(ivy.logical_not(where), axis=axis, keepdims=keepdims)\n s = ivy.shape(a, as_array=True)\n if axis is not None:\n if isinstance(axis, 
(tuple, list)) or ivy.is_array(axis):\n # introducing the initial in one dimension is enough\n ax = axis[0] % len(s)\n s[ax] = 1\n else:\n ax = axis % len(s)\n s[ax] = 1\n header = ivy.full(ivy.Shape(s.to_list()), initial, dtype=ivy.dtype(a))\n if axis:\n if isinstance(axis, (tuple, list)) or ivy.is_array(axis):\n a = ivy.concat([a, header], axis=axis[0])\n else:\n a = ivy.concat([a, header], axis=axis)\n else:\n a = ivy.concat([a, header], axis=0)\n res = ivy.max(a, axis=axis, keepdims=keepdims, out=out)\n if nan_mask is not None:\n nan_mask = ivy.all(nan_mask, axis=axis, keepdims=keepdims, out=out)\n if ivy.any(nan_mask):\n res = ivy.where(\n ivy.logical_not(nan_mask),\n res,\n initial if initial is not None else ivy.nan,\n out=out,\n )\n if where_mask is not None and ivy.any(where_mask):\n res = ivy.where(ivy.logical_not(where_mask), res, ivy.nan, out=out)\n return res\n\n\n@to_ivy_arrays_and_back\ndef nanmin(\n a,\n axis=None,\n out=None,\n keepdims=False,\n initial=None,\n where=True,\n):\n nan_mask = ivy.isnan(a)\n a = ivy.where(ivy.logical_not(nan_mask), a, a.full_like(+ivy.inf))\n where_mask = None\n if initial is not None:\n if ivy.is_array(where):\n a = ivy.where(where, a, a.full_like(initial))\n where_mask = ivy.all(ivy.logical_not(where), axis=axis, keepdims=keepdims)\n s = ivy.shape(a, as_array=True)\n if axis is not None:\n if isinstance(axis, (tuple, list)) or ivy.is_array(axis):\n # introducing the initial in one dimension is enough\n ax = axis[0] % len(s)\n s[ax] = 1\n else:\n ax = axis % len(s)\n s[ax] = 1\n header = ivy.full(ivy.Shape(s.to_list()), initial, dtype=ivy.dtype(a))\n if axis:\n if isinstance(axis, (tuple, list)) or ivy.is_array(axis):\n a = ivy.concat([a, header], axis=axis[0])\n else:\n a = ivy.concat([a, header], axis=axis)\n else:\n a = ivy.concat([a, header], axis=0)\n res = ivy.min(a, axis=axis, keepdims=keepdims, out=out)\n if nan_mask is not None:\n nan_mask = ivy.all(nan_mask, axis=axis, keepdims=keepdims, out=out)\n if ivy.any(nan_mask):\n res = ivy.where(\n ivy.logical_not(nan_mask),\n res,\n initial if initial is not None else ivy.nan,\n out=out,\n )\n if where_mask is not None and ivy.any(where_mask):\n res = ivy.where(ivy.logical_not(where_mask), res, ivy.nan, out=out)\n return res\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef nanstd(\n a, /, *, axis=None, dtype=None, out=None, ddof=0, keepdims=False, where=True\n):\n a = ivy.nan_to_num(a)\n axis = tuple(axis) if isinstance(axis, list) else axis\n\n if dtype:\n a = ivy.astype(ivy.array(a), ivy.as_ivy_dtype(dtype))\n\n ret = ivy.std(a, axis=axis, correction=ddof, keepdims=keepdims, out=out)\n if ivy.is_array(where):\n ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)\n\n return ret\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef nanvar(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False, *, where=True):\n is_nan = ivy.isnan(a)\n axis = tuple(axis) if isinstance(axis, list) else axis\n if dtype is None:\n dtype = \"float16\" if ivy.is_int_dtype(a) else a.dtype\n ret = ivy.var(a, axis=axis, correction=ddof, keepdims=keepdims, out=out)\n if not ivy.any(is_nan):\n if dtype:\n a = ivy.astype(ivy.array(a), ivy.as_ivy_dtype(dtype))\n else:\n dtype = \"float\" if ivy.is_int_dtype(a) else a.dtype\n\n ret = ivy.var(a, axis=axis, correction=ddof, keepdims=keepdims, out=out)\n\n if ivy.is_array(where):\n where = ivy.array(where, dtype=ivy.bool)\n ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)\n\n else:\n a = [i for i in a if 
ivy.isnan(i) is False]\n\n if dtype:\n a = ivy.astype(ivy.array(a), ivy.as_ivy_dtype(dtype))\n else:\n dtype = \"float\" if ivy.is_int_dtype(a) else a.dtype\n\n ret = ivy.var(a, axis=axis, correction=ddof, keepdims=keepdims, out=out)\n\n if ivy.is_array(where):\n where = ivy.array(where, dtype=ivy.bool)\n ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)\n\n all_nan = ivy.isnan(ret)\n if ivy.all(all_nan):\n ret = ivy.astype(ret, ivy.array([float(\"inf\")]))\n return ret\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef nancumsum(a, axis=None, dtype=None, out=None):\n a = ivy.where(ivy.isnan(a), ivy.zeros_like(a), a)\n return ivy.cumsum(a, axis=axis, dtype=dtype, out=out)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef nancumprod(a, axis=None, dtype=None, out=None):\n a = ivy.where(ivy.isnan(a), ivy.zeros_like(a), a)\n return ivy.cumprod(a, axis=axis, dtype=dtype, out=out)\n\n\n@handle_jax_dtype\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"bfloat16\")}, \"jax\")\n@to_ivy_arrays_and_back\ndef std(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False, *, where=None):\n axis = tuple(axis) if isinstance(axis, list) else axis\n if dtype is None:\n dtype = \"float32\" if ivy.is_int_dtype(a) else a.dtype\n std_a = ivy.std(a, axis=axis, correction=ddof, keepdims=keepdims, out=out)\n if ivy.is_array(where):\n where = ivy.array(where, dtype=ivy.bool)\n std_a = ivy.where(\n where, std_a, ivy.default(out, ivy.zeros_like(std_a)), out=out\n )\n return ivy.astype(std_a, ivy.as_ivy_dtype(dtype), copy=False)\n\n\n@to_ivy_arrays_and_back\ndef corrcoef(x, y=None, rowvar=True):\n return ivy.corrcoef(x, y=y, rowvar=rowvar)\n\n\n@to_ivy_arrays_and_back\ndef ptp(a, axis=None, out=None, keepdims=False):\n x = ivy.max(a, axis=axis, keepdims=keepdims)\n y = ivy.min(a, axis=axis, keepdims=keepdims)\n return ivy.subtract(x, y)\n",
"path": "ivy/functional/frontends/jax/numpy/statistical.py"
}
] | diff --git a/ivy/functional/frontends/jax/numpy/statistical.py b/ivy/functional/frontends/jax/numpy/statistical.py
index 0a4778ab4d3b6..3fa96b1baf311 100644
--- a/ivy/functional/frontends/jax/numpy/statistical.py
+++ b/ivy/functional/frontends/jax/numpy/statistical.py
@@ -420,3 +420,10 @@ def std(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False, *, where=Non
@to_ivy_arrays_and_back
def corrcoef(x, y=None, rowvar=True):
return ivy.corrcoef(x, y=y, rowvar=rowvar)
+
+
+@to_ivy_arrays_and_back
+def ptp(a, axis=None, out=None, keepdims=False):
+ x = ivy.max(a, axis=axis, keepdims=keepdims)
+ y = ivy.min(a, axis=axis, keepdims=keepdims)
+ return ivy.subtract(x, y)
diff --git a/ivy_tests/test_ivy/test_frontends/test_jax/test_jax_numpy_statistical.py b/ivy_tests/test_ivy/test_frontends/test_jax/test_jax_numpy_statistical.py
index b79dd658548f2..d627ff4fe382a 100644
--- a/ivy_tests/test_ivy/test_frontends/test_jax/test_jax_numpy_statistical.py
+++ b/ivy_tests/test_ivy/test_frontends/test_jax/test_jax_numpy_statistical.py
@@ -847,3 +847,31 @@ def test_jax_numpy_corrcoef(
y=x[1],
rowvar=rowvar,
)
+
+
+# ptp
+@handle_frontend_test(
+ fn_tree="jax.numpy.ptp",
+ dtype_and_x_axis_dtype=_get_castable_dtypes_values(allow_nan=False),
+ keep_dims=st.booleans(),
+)
+def test_jax_numpy_ptp(
+ dtype_and_x_axis_dtype,
+ frontend,
+ test_flags,
+ fn_tree,
+ on_device,
+ keep_dims,
+):
+ input_dtypes, x, axis, dtype = dtype_and_x_axis_dtype
+ np_frontend_helpers.test_frontend_function(
+ input_dtypes=input_dtypes,
+ frontend=frontend,
+ test_flags=test_flags,
+ fn_tree=fn_tree,
+ on_device=on_device,
+ a=x[0],
+ axis=axis,
+ out=None,
+ keepdims=keep_dims
+ )
|
scverse__scanpy-2248 | read_10x_h5() `genome` argument appears recently broken for 10x v2 format
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of scanpy.
- [x] (optional) I have confirmed this bug exists on the master branch of scanpy.
---
To reproduce this issue:
1. download the public 10x dataset here (https://cf.10xgenomics.com/samples/cell-exp/2.1.0/hgmm_12k/hgmm_12k_raw_gene_bc_matrices_h5.h5)
2. run the following
```python
import scanpy as sc
adata_human = sc.read_10x_h5('hgmm_12k_raw_gene_bc_matrices_h5.h5', genome='hg19')
adata_mouse = sc.read_10x_h5('hgmm_12k_raw_gene_bc_matrices_h5.h5', genome='mm10')
assert (adata_human.X != adata_mouse.X).sum() > 0, 'these count matrices are equal'
```
which produces the assertion error. We see that the loaded data is the same regardless of the `genome` argument. A look at the file itself shows this is not the case (notice the number of gene names, which differs between hg19 and mm10):

#### Versions
Also I think I can say confidently that this was working fine as of scanpy 1.8.1
<details>
-----
anndata 0.8.0
scanpy 1.9.1
-----
PIL 8.1.0
appnope 0.1.2
backcall 0.2.0
cached_property 1.5.2
cellbender NA
cffi 1.14.5
colorcet 3.0.0
cycler 0.10.0
cython_runtime NA
dateutil 2.8.1
decorator 5.0.9
fontTools 4.33.3
h5py 3.2.0
igraph 0.9.10
ipykernel 5.5.5
ipython_genutils 0.2.0
ipywidgets 7.6.3
jedi 0.18.0
joblib 1.0.1
kiwisolver 1.3.1
leidenalg 0.8.10
llvmlite 0.38.0
lxml 4.8.0
matplotlib 3.5.1
matplotlib_inline NA
mkl 2.3.0
mpl_toolkits NA
natsort 7.1.1
numba 0.55.1
numexpr 2.7.3
numpy 1.19.2
packaging 20.9
pandas 1.2.3
param 1.12.1
parso 0.8.2
pexpect 4.8.0
pickleshare 0.7.5
pkg_resources NA
prompt_toolkit 3.0.18
psutil 5.8.0
ptyprocess 0.7.0
pycparser 2.20
pygments 2.8.0
pynndescent 0.5.6
pyparsing 2.4.7
pytz 2021.1
scipy 1.6.1
seaborn 0.11.2
session_info 1.0.0
six 1.15.0
sklearn 0.24.1
skmisc 0.1.4
sphinxcontrib NA
statsmodels 0.12.2
storemagic NA
tables 3.6.1
texttable 1.6.4
tornado 6.1
tqdm 4.55.1
traitlets 5.0.5
typing_extensions NA
umap 0.5.3
wcwidth 0.2.5
yaml 6.0
zipp NA
zmq 22.0.3
-----
IPython 7.23.1
jupyter_client 6.1.12
jupyter_core 4.7.1
notebook 6.4.0
-----
Python 3.7.9 (default, Aug 31 2020, 07:22:35) [Clang 10.0.0 ]
Darwin-20.6.0-x86_64-i386-64bit
-----
</details>
| [
{
"content": "\"\"\"Reading and Writing\n\"\"\"\nfrom pathlib import Path, PurePath\nfrom typing import Union, Dict, Optional, Tuple, BinaryIO\n\nimport h5py\nimport json\nimport numpy as np\nimport pandas as pd\nfrom matplotlib.image import imread\nimport anndata\nfrom anndata import (\n AnnData,\n read_csv,\n read_text,\n read_excel,\n read_mtx,\n read_loom,\n read_hdf,\n)\nfrom anndata import read as read_h5ad\n\nfrom ._settings import settings\nfrom ._compat import Literal\nfrom ._utils import Empty, _empty\nfrom . import logging as logg\n\n# .gz and .bz2 suffixes are also allowed for text formats\ntext_exts = {\n 'csv',\n 'tsv',\n 'tab',\n 'data',\n 'txt', # these four are all equivalent\n}\navail_exts = {\n 'anndata',\n 'xlsx',\n 'h5',\n 'h5ad',\n 'mtx',\n 'mtx.gz',\n 'soft.gz',\n 'loom',\n} | text_exts\n\"\"\"Available file formats for reading data. \"\"\"\n\n\n# --------------------------------------------------------------------------------\n# Reading and Writing data files and AnnData objects\n# --------------------------------------------------------------------------------\n\n\ndef read(\n filename: Union[Path, str],\n backed: Optional[Literal['r', 'r+']] = None,\n sheet: Optional[str] = None,\n ext: Optional[str] = None,\n delimiter: Optional[str] = None,\n first_column_names: bool = False,\n backup_url: Optional[str] = None,\n cache: bool = False,\n cache_compression: Union[Literal['gzip', 'lzf'], None, Empty] = _empty,\n **kwargs,\n) -> AnnData:\n \"\"\"\\\n Read file and return :class:`~anndata.AnnData` object.\n\n To speed up reading, consider passing ``cache=True``, which creates an hdf5\n cache file.\n\n Parameters\n ----------\n filename\n If the filename has no file extension, it is interpreted as a key for\n generating a filename via ``sc.settings.writedir / (filename +\n sc.settings.file_format_data)``. This is the same behavior as in\n ``sc.read(filename, ...)``.\n backed\n If ``'r'``, load :class:`~anndata.AnnData` in ``backed`` mode instead\n of fully loading it into memory (`memory` mode). If you want to modify\n backed attributes of the AnnData object, you need to choose ``'r+'``.\n sheet\n Name of sheet/table in hdf5 or Excel file.\n ext\n Extension that indicates the file type. If ``None``, uses extension of\n filename.\n delimiter\n Delimiter that separates data within text file. If ``None``, will split at\n arbitrary number of white spaces, which is different from enforcing\n splitting at any single white space ``' '``.\n first_column_names\n Assume the first column stores row names. This is only necessary if\n these are not strings: strings in the first column are automatically\n assumed to be row names.\n backup_url\n Retrieve the file from an URL if not present on disk.\n cache\n If `False`, read from source, if `True`, read from fast 'h5ad' cache.\n cache_compression\n See the h5py :ref:`dataset_compression`.\n (Default: `settings.cache_compression`)\n kwargs\n Parameters passed to :func:`~anndata.read_loom`.\n\n Returns\n -------\n An :class:`~anndata.AnnData` object\n \"\"\"\n filename = Path(filename) # allow passing strings\n if is_valid_filename(filename):\n return _read(\n filename,\n backed=backed,\n sheet=sheet,\n ext=ext,\n delimiter=delimiter,\n first_column_names=first_column_names,\n backup_url=backup_url,\n cache=cache,\n cache_compression=cache_compression,\n **kwargs,\n )\n # generate filename and read to dict\n filekey = str(filename)\n filename = settings.writedir / (filekey + '.' 
+ settings.file_format_data)\n if not filename.exists():\n raise ValueError(\n f'Reading with filekey {filekey!r} failed, '\n f'the inferred filename {filename!r} does not exist. '\n 'If you intended to provide a filename, either use a filename '\n f'ending on one of the available extensions {avail_exts} '\n 'or pass the parameter `ext`.'\n )\n return read_h5ad(filename, backed=backed)\n\n\ndef read_10x_h5(\n filename: Union[str, Path],\n genome: Optional[str] = None,\n gex_only: bool = True,\n backup_url: Optional[str] = None,\n) -> AnnData:\n \"\"\"\\\n Read 10x-Genomics-formatted hdf5 file.\n\n Parameters\n ----------\n filename\n Path to a 10x hdf5 file.\n genome\n Filter expression to genes within this genome. For legacy 10x h5\n files, this must be provided if the data contains more than one genome.\n gex_only\n Only keep 'Gene Expression' data and ignore other feature types,\n e.g. 'Antibody Capture', 'CRISPR Guide Capture', or 'Custom'\n backup_url\n Retrieve the file from an URL if not present on disk.\n\n Returns\n -------\n Annotated data matrix, where observations/cells are named by their\n barcode and variables/genes by gene name. Stores the following information:\n\n :attr:`~anndata.AnnData.X`\n The data matrix is stored\n :attr:`~anndata.AnnData.obs_names`\n Cell names\n :attr:`~anndata.AnnData.var_names`\n Gene names\n :attr:`~anndata.AnnData.var`\\\\ `['gene_ids']`\n Gene IDs\n :attr:`~anndata.AnnData.var`\\\\ `['feature_types']`\n Feature types\n \"\"\"\n start = logg.info(f'reading {filename}')\n is_present = _check_datafile_present_and_download(filename, backup_url=backup_url)\n if not is_present:\n logg.debug(f'... did not find original file {filename}')\n with h5py.File(str(filename), 'r') as f:\n v3 = '/matrix' in f\n if v3:\n adata = _read_v3_10x_h5(filename, start=start)\n if genome:\n if genome not in adata.var['genome'].values:\n raise ValueError(\n f\"Could not find data corresponding to genome '{genome}' in '{filename}'. \"\n f'Available genomes are: {list(adata.var[\"genome\"].unique())}.'\n )\n adata = adata[:, adata.var['genome'] == genome]\n if gex_only:\n adata = adata[:, adata.var['feature_types'] == 'Gene Expression']\n if adata.is_view:\n adata = adata.copy()\n else:\n adata = _read_legacy_10x_h5(filename, genome=genome, start=start)\n return adata\n\n\ndef _read_legacy_10x_h5(filename, *, genome=None, start=None):\n \"\"\"\n Read hdf5 file from Cell Ranger v2 or earlier versions.\n \"\"\"\n with h5py.File(str(filename), 'r') as f:\n try:\n children = list(f.keys())\n if not genome:\n if len(children) > 1:\n raise ValueError(\n f\"'{filename}' contains more than one genome. For legacy 10x h5 \"\n \"files you must specify the genome if more than one is present. \"\n f\"Available genomes are: {children}\"\n )\n genome = children[0]\n elif genome not in children:\n raise ValueError(\n f\"Could not find genome '{genome}' in '{filename}'. 
\"\n f'Available genomes are: {children}'\n )\n\n dsets = {}\n _collect_datasets(dsets, f)\n\n # AnnData works with csr matrices\n # 10x stores the transposed data, so we do the transposition right away\n from scipy.sparse import csr_matrix\n\n M, N = dsets['shape']\n data = dsets['data']\n if dsets['data'].dtype == np.dtype('int32'):\n data = dsets['data'].view('float32')\n data[:] = dsets['data']\n matrix = csr_matrix(\n (data, dsets['indices'], dsets['indptr']),\n shape=(N, M),\n )\n # the csc matrix is automatically the transposed csr matrix\n # as scanpy expects it, so, no need for a further transpostion\n adata = AnnData(\n matrix,\n obs=dict(obs_names=dsets['barcodes'].astype(str)),\n var=dict(\n var_names=dsets['gene_names'].astype(str),\n gene_ids=dsets['genes'].astype(str),\n ),\n )\n logg.info('', time=start)\n return adata\n except KeyError:\n raise Exception('File is missing one or more required datasets.')\n\n\ndef _collect_datasets(dsets: dict, group: h5py.Group):\n for k, v in group.items():\n if isinstance(v, h5py.Dataset):\n dsets[k] = v[:]\n else:\n _collect_datasets(dsets, v)\n\n\ndef _read_v3_10x_h5(filename, *, start=None):\n \"\"\"\n Read hdf5 file from Cell Ranger v3 or later versions.\n \"\"\"\n with h5py.File(str(filename), 'r') as f:\n try:\n dsets = {}\n _collect_datasets(dsets, f[\"matrix\"])\n\n from scipy.sparse import csr_matrix\n\n M, N = dsets['shape']\n data = dsets['data']\n if dsets['data'].dtype == np.dtype('int32'):\n data = dsets['data'].view('float32')\n data[:] = dsets['data']\n matrix = csr_matrix(\n (data, dsets['indices'], dsets['indptr']),\n shape=(N, M),\n )\n adata = AnnData(\n matrix,\n obs=dict(obs_names=dsets['barcodes'].astype(str)),\n var=dict(\n var_names=dsets['name'].astype(str),\n gene_ids=dsets['id'].astype(str),\n feature_types=dsets['feature_type'].astype(str),\n genome=dsets['genome'].astype(str),\n ),\n )\n logg.info('', time=start)\n return adata\n except KeyError:\n raise Exception('File is missing one or more required datasets.')\n\n\ndef read_visium(\n path: Union[str, Path],\n genome: Optional[str] = None,\n *,\n count_file: str = \"filtered_feature_bc_matrix.h5\",\n library_id: str = None,\n load_images: Optional[bool] = True,\n source_image_path: Optional[Union[str, Path]] = None,\n) -> AnnData:\n \"\"\"\\\n Read 10x-Genomics-formatted visum dataset.\n\n In addition to reading regular 10x output,\n this looks for the `spatial` folder and loads images,\n coordinates and scale factors.\n Based on the `Space Ranger output docs`_.\n\n See :func:`~scanpy.pl.spatial` for a compatible plotting function.\n\n .. _Space Ranger output docs: https://support.10xgenomics.com/spatial-gene-expression/software/pipelines/latest/output/overview\n\n Parameters\n ----------\n path\n Path to directory for visium datafiles.\n genome\n Filter expression to genes within this genome.\n count_file\n Which file in the passed directory to use as the count file. Typically would be one of:\n 'filtered_feature_bc_matrix.h5' or 'raw_feature_bc_matrix.h5'.\n library_id\n Identifier for the visium library. Can be modified when concatenating multiple adata objects.\n source_image_path\n Path to the high-resolution tissue image. Path will be included in\n `.uns[\"spatial\"][library_id][\"metadata\"][\"source_image_path\"]`.\n\n Returns\n -------\n Annotated data matrix, where observations/cells are named by their\n barcode and variables/genes by gene name. 
Stores the following information:\n\n :attr:`~anndata.AnnData.X`\n The data matrix is stored\n :attr:`~anndata.AnnData.obs_names`\n Cell names\n :attr:`~anndata.AnnData.var_names`\n Gene names\n :attr:`~anndata.AnnData.var`\\\\ `['gene_ids']`\n Gene IDs\n :attr:`~anndata.AnnData.var`\\\\ `['feature_types']`\n Feature types\n :attr:`~anndata.AnnData.uns`\\\\ `['spatial']`\n Dict of spaceranger output files with 'library_id' as key\n :attr:`~anndata.AnnData.uns`\\\\ `['spatial'][library_id]['images']`\n Dict of images (`'hires'` and `'lowres'`)\n :attr:`~anndata.AnnData.uns`\\\\ `['spatial'][library_id]['scalefactors']`\n Scale factors for the spots\n :attr:`~anndata.AnnData.uns`\\\\ `['spatial'][library_id]['metadata']`\n Files metadata: 'chemistry_description', 'software_version', 'source_image_path'\n :attr:`~anndata.AnnData.obsm`\\\\ `['spatial']`\n Spatial spot coordinates, usable as `basis` by :func:`~scanpy.pl.embedding`.\n \"\"\"\n path = Path(path)\n adata = read_10x_h5(path / count_file, genome=genome)\n\n adata.uns[\"spatial\"] = dict()\n\n from h5py import File\n\n with File(path / count_file, mode=\"r\") as f:\n attrs = dict(f.attrs)\n if library_id is None:\n library_id = str(attrs.pop(\"library_ids\")[0], \"utf-8\")\n\n adata.uns[\"spatial\"][library_id] = dict()\n\n if load_images:\n files = dict(\n tissue_positions_file=path / 'spatial/tissue_positions_list.csv',\n scalefactors_json_file=path / 'spatial/scalefactors_json.json',\n hires_image=path / 'spatial/tissue_hires_image.png',\n lowres_image=path / 'spatial/tissue_lowres_image.png',\n )\n\n # check if files exists, continue if images are missing\n for f in files.values():\n if not f.exists():\n if any(x in str(f) for x in [\"hires_image\", \"lowres_image\"]):\n logg.warning(\n f\"You seem to be missing an image file.\\n\"\n f\"Could not find '{f}'.\"\n )\n else:\n raise OSError(f\"Could not find '{f}'\")\n\n adata.uns[\"spatial\"][library_id]['images'] = dict()\n for res in ['hires', 'lowres']:\n try:\n adata.uns[\"spatial\"][library_id]['images'][res] = imread(\n str(files[f'{res}_image'])\n )\n except Exception:\n raise OSError(f\"Could not find '{res}_image'\")\n\n # read json scalefactors\n adata.uns[\"spatial\"][library_id]['scalefactors'] = json.loads(\n files['scalefactors_json_file'].read_bytes()\n )\n\n adata.uns[\"spatial\"][library_id][\"metadata\"] = {\n k: (str(attrs[k], \"utf-8\") if isinstance(attrs[k], bytes) else attrs[k])\n for k in (\"chemistry_description\", \"software_version\")\n if k in attrs\n }\n\n # read coordinates\n positions = pd.read_csv(files['tissue_positions_file'], header=None)\n positions.columns = [\n 'barcode',\n 'in_tissue',\n 'array_row',\n 'array_col',\n 'pxl_col_in_fullres',\n 'pxl_row_in_fullres',\n ]\n positions.index = positions['barcode']\n\n adata.obs = adata.obs.join(positions, how=\"left\")\n\n adata.obsm['spatial'] = adata.obs[\n ['pxl_row_in_fullres', 'pxl_col_in_fullres']\n ].to_numpy()\n adata.obs.drop(\n columns=['barcode', 'pxl_row_in_fullres', 'pxl_col_in_fullres'],\n inplace=True,\n )\n\n # put image path in uns\n if source_image_path is not None:\n # get an absolute path\n source_image_path = str(Path(source_image_path).resolve())\n adata.uns[\"spatial\"][library_id][\"metadata\"][\"source_image_path\"] = str(\n source_image_path\n )\n\n return adata\n\n\ndef read_10x_mtx(\n path: Union[Path, str],\n var_names: Literal['gene_symbols', 'gene_ids'] = 'gene_symbols',\n make_unique: bool = True,\n cache: bool = False,\n cache_compression: Union[Literal['gzip', 'lzf'], 
None, Empty] = _empty,\n gex_only: bool = True,\n *,\n prefix: str = None,\n) -> AnnData:\n \"\"\"\\\n Read 10x-Genomics-formatted mtx directory.\n\n Parameters\n ----------\n path\n Path to directory for `.mtx` and `.tsv` files,\n e.g. './filtered_gene_bc_matrices/hg19/'.\n var_names\n The variables index.\n make_unique\n Whether to make the variables index unique by appending '-1',\n '-2' etc. or not.\n cache\n If `False`, read from source, if `True`, read from fast 'h5ad' cache.\n cache_compression\n See the h5py :ref:`dataset_compression`.\n (Default: `settings.cache_compression`)\n gex_only\n Only keep 'Gene Expression' data and ignore other feature types,\n e.g. 'Antibody Capture', 'CRISPR Guide Capture', or 'Custom'\n prefix\n Any prefix before `matrix.mtx`, `genes.tsv` and `barcodes.tsv`. For instance,\n if the files are named `patientA_matrix.mtx`, `patientA_genes.tsv` and\n `patientA_barcodes.tsv` the prefix is `patientA_`.\n (Default: no prefix)\n\n Returns\n -------\n An :class:`~anndata.AnnData` object\n \"\"\"\n path = Path(path)\n prefix = \"\" if prefix is None else prefix\n genefile_exists = (path / f'{prefix}genes.tsv').is_file()\n read = _read_legacy_10x_mtx if genefile_exists else _read_v3_10x_mtx\n adata = read(\n str(path),\n var_names=var_names,\n make_unique=make_unique,\n cache=cache,\n cache_compression=cache_compression,\n prefix=prefix,\n )\n if genefile_exists or not gex_only:\n return adata\n else:\n gex_rows = list(\n map(lambda x: x == 'Gene Expression', adata.var['feature_types'])\n )\n return adata[:, gex_rows].copy()\n\n\ndef _read_legacy_10x_mtx(\n path,\n var_names='gene_symbols',\n make_unique=True,\n cache=False,\n cache_compression=_empty,\n *,\n prefix=\"\",\n):\n \"\"\"\n Read mex from output from Cell Ranger v2 or earlier versions\n \"\"\"\n path = Path(path)\n adata = read(\n path / f'{prefix}matrix.mtx',\n cache=cache,\n cache_compression=cache_compression,\n ).T # transpose the data\n genes = pd.read_csv(path / f'{prefix}genes.tsv', header=None, sep='\\t')\n if var_names == 'gene_symbols':\n var_names = genes[1].values\n if make_unique:\n var_names = anndata.utils.make_index_unique(pd.Index(var_names))\n adata.var_names = var_names\n adata.var['gene_ids'] = genes[0].values\n elif var_names == 'gene_ids':\n adata.var_names = genes[0].values\n adata.var['gene_symbols'] = genes[1].values\n else:\n raise ValueError(\"`var_names` needs to be 'gene_symbols' or 'gene_ids'\")\n adata.obs_names = pd.read_csv(path / f'{prefix}barcodes.tsv', header=None)[0].values\n return adata\n\n\ndef _read_v3_10x_mtx(\n path,\n var_names='gene_symbols',\n make_unique=True,\n cache=False,\n cache_compression=_empty,\n *,\n prefix=\"\",\n):\n \"\"\"\n Read mtx from output from Cell Ranger v3 or later versions\n \"\"\"\n path = Path(path)\n adata = read(\n path / f'{prefix}matrix.mtx.gz',\n cache=cache,\n cache_compression=cache_compression,\n ).T # transpose the data\n genes = pd.read_csv(path / f'{prefix}features.tsv.gz', header=None, sep='\\t')\n if var_names == 'gene_symbols':\n var_names = genes[1].values\n if make_unique:\n var_names = anndata.utils.make_index_unique(pd.Index(var_names))\n adata.var_names = var_names\n adata.var['gene_ids'] = genes[0].values\n elif var_names == 'gene_ids':\n adata.var_names = genes[0].values\n adata.var['gene_symbols'] = genes[1].values\n else:\n raise ValueError(\"`var_names` needs to be 'gene_symbols' or 'gene_ids'\")\n adata.var['feature_types'] = genes[2].values\n adata.obs_names = pd.read_csv(path / f'{prefix}barcodes.tsv.gz', 
header=None)[\n 0\n ].values\n return adata\n\n\ndef write(\n filename: Union[str, Path],\n adata: AnnData,\n ext: Optional[Literal['h5', 'csv', 'txt', 'npz']] = None,\n compression: Optional[Literal['gzip', 'lzf']] = 'gzip',\n compression_opts: Optional[int] = None,\n):\n \"\"\"\\\n Write :class:`~anndata.AnnData` objects to file.\n\n Parameters\n ----------\n filename\n If the filename has no file extension, it is interpreted as a key for\n generating a filename via `sc.settings.writedir / (filename +\n sc.settings.file_format_data)`. This is the same behavior as in\n :func:`~scanpy.read`.\n adata\n Annotated data matrix.\n ext\n File extension from wich to infer file format. If `None`, defaults to\n `sc.settings.file_format_data`.\n compression\n See http://docs.h5py.org/en/latest/high/dataset.html.\n compression_opts\n See http://docs.h5py.org/en/latest/high/dataset.html.\n \"\"\"\n filename = Path(filename) # allow passing strings\n if is_valid_filename(filename):\n filename = filename\n ext_ = is_valid_filename(filename, return_ext=True)\n if ext is None:\n ext = ext_\n elif ext != ext_:\n raise ValueError(\n 'It suffices to provide the file type by '\n 'providing a proper extension to the filename.'\n 'One of \"txt\", \"csv\", \"h5\" or \"npz\".'\n )\n else:\n key = filename\n ext = settings.file_format_data if ext is None else ext\n filename = _get_filename_from_key(key, ext)\n if ext == 'csv':\n adata.write_csvs(filename)\n else:\n adata.write(\n filename, compression=compression, compression_opts=compression_opts\n )\n\n\n# -------------------------------------------------------------------------------\n# Reading and writing parameter files\n# -------------------------------------------------------------------------------\n\n\ndef read_params(\n filename: Union[Path, str], asheader: bool = False\n) -> Dict[str, Union[int, float, bool, str, None]]:\n \"\"\"\\\n Read parameter dictionary from text file.\n\n Assumes that parameters are specified in the format::\n\n par1 = value1\n par2 = value2\n\n Comments that start with '#' are allowed.\n\n Parameters\n ----------\n filename\n Filename of data file.\n asheader\n Read the dictionary from the header (comment section) of a file.\n\n Returns\n -------\n Dictionary that stores parameters.\n \"\"\"\n filename = str(filename) # allow passing pathlib.Path objects\n from collections import OrderedDict\n\n params = OrderedDict([])\n for line in open(filename):\n if '=' in line:\n if not asheader or line.startswith('#'):\n line = line[1:] if line.startswith('#') else line\n key, val = line.split('=')\n key = key.strip()\n val = val.strip()\n params[key] = convert_string(val)\n return params\n\n\ndef write_params(path: Union[Path, str], *args, **maps):\n \"\"\"\\\n Write parameters to file, so that it's readable by read_params.\n\n Uses INI file format.\n \"\"\"\n path = Path(path)\n if not path.parent.is_dir():\n path.parent.mkdir(parents=True)\n if len(args) == 1:\n maps[None] = args[0]\n with path.open('w') as f:\n for header, map in maps.items():\n if header is not None:\n f.write(f'[{header}]\\n')\n for key, val in map.items():\n f.write(f'{key} = {val}\\n')\n\n\n# -------------------------------------------------------------------------------\n# Reading and Writing data files\n# -------------------------------------------------------------------------------\n\n\ndef _read(\n filename: Path,\n backed=None,\n sheet=None,\n ext=None,\n delimiter=None,\n first_column_names=None,\n backup_url=None,\n cache=False,\n cache_compression=None,\n 
suppress_cache_warning=False,\n **kwargs,\n):\n if ext is not None and ext not in avail_exts:\n raise ValueError(\n 'Please provide one of the available extensions.\\n' f'{avail_exts}'\n )\n else:\n ext = is_valid_filename(filename, return_ext=True)\n is_present = _check_datafile_present_and_download(filename, backup_url=backup_url)\n if not is_present:\n logg.debug(f'... did not find original file {filename}')\n # read hdf5 files\n if ext in {'h5', 'h5ad'}:\n if sheet is None:\n return read_h5ad(filename, backed=backed)\n else:\n logg.debug(f'reading sheet {sheet} from file {filename}')\n return read_hdf(filename, sheet)\n # read other file types\n path_cache = settings.cachedir / _slugify(filename).replace(\n '.' + ext, '.h5ad'\n ) # type: Path\n if path_cache.suffix in {'.gz', '.bz2'}:\n path_cache = path_cache.with_suffix('')\n if cache and path_cache.is_file():\n logg.info(f'... reading from cache file {path_cache}')\n return read_h5ad(path_cache)\n\n if not is_present:\n raise FileNotFoundError(f'Did not find file {filename}.')\n logg.debug(f'reading {filename}')\n if not cache and not suppress_cache_warning:\n logg.hint(\n 'This might be very slow. Consider passing `cache=True`, '\n 'which enables much faster reading from a cache file.'\n )\n # do the actual reading\n if ext == 'xlsx' or ext == 'xls':\n if sheet is None:\n raise ValueError(\"Provide `sheet` parameter when reading '.xlsx' files.\")\n else:\n adata = read_excel(filename, sheet)\n elif ext in {'mtx', 'mtx.gz'}:\n adata = read_mtx(filename)\n elif ext == 'csv':\n adata = read_csv(filename, first_column_names=first_column_names)\n elif ext in {'txt', 'tab', 'data', 'tsv'}:\n if ext == 'data':\n logg.hint(\n \"... assuming '.data' means tab or white-space \" 'separated text file',\n )\n logg.hint('change this by passing `ext` to sc.read')\n adata = read_text(filename, delimiter, first_column_names)\n elif ext == 'soft.gz':\n adata = _read_softgz(filename)\n elif ext == 'loom':\n adata = read_loom(filename=filename, **kwargs)\n else:\n raise ValueError(f'Unknown extension {ext}.')\n if cache:\n logg.info(\n f'... writing an {settings.file_format_data} '\n 'cache file to speedup reading next time'\n )\n if cache_compression is _empty:\n cache_compression = settings.cache_compression\n if not path_cache.parent.is_dir():\n path_cache.parent.mkdir(parents=True)\n # write for faster reading when calling the next time\n adata.write(path_cache, compression=cache_compression)\n return adata\n\n\ndef _slugify(path: Union[str, PurePath]) -> str:\n \"\"\"Make a path into a filename.\"\"\"\n if not isinstance(path, PurePath):\n path = PurePath(path)\n parts = list(path.parts)\n if parts[0] == '/':\n parts.pop(0)\n elif len(parts[0]) == 3 and parts[0][1:] == ':\\\\':\n parts[0] = parts[0][0] # C:\\ → C\n filename = '-'.join(parts)\n assert '/' not in filename, filename\n assert not filename[1:].startswith(':'), filename\n return filename\n\n\ndef _read_softgz(filename: Union[str, bytes, Path, BinaryIO]) -> AnnData:\n \"\"\"\\\n Read a SOFT format data file.\n\n The SOFT format is documented here\n http://www.ncbi.nlm.nih.gov/geo/info/soft2.html.\n\n Notes\n -----\n The function is based on a script by Kerby Shedden.\n http://dept.stat.lsa.umich.edu/~kshedden/Python-Workshop/gene_expression_comparison.html\n \"\"\"\n import gzip\n\n with gzip.open(filename, mode='rt') as file:\n # The header part of the file contains information about the\n # samples. 
Read that information first.\n samples_info = {}\n for line in file:\n if line.startswith(\"!dataset_table_begin\"):\n break\n elif line.startswith(\"!subset_description\"):\n subset_description = line.split(\"=\")[1].strip()\n elif line.startswith(\"!subset_sample_id\"):\n subset_ids = line.split(\"=\")[1].split(\",\")\n subset_ids = [x.strip() for x in subset_ids]\n for k in subset_ids:\n samples_info[k] = subset_description\n # Next line is the column headers (sample id's)\n sample_names = file.readline().strip().split(\"\\t\")\n # The column indices that contain gene expression data\n indices = [i for i, x in enumerate(sample_names) if x.startswith(\"GSM\")]\n # Restrict the column headers to those that we keep\n sample_names = [sample_names[i] for i in indices]\n # Get a list of sample labels\n groups = [samples_info[k] for k in sample_names]\n # Read the gene expression data as a list of lists, also get the gene\n # identifiers\n gene_names, X = [], []\n for line in file:\n # This is what signals the end of the gene expression data\n # section in the file\n if line.startswith(\"!dataset_table_end\"):\n break\n V = line.split(\"\\t\")\n # Extract the values that correspond to gene expression measures\n # and convert the strings to numbers\n x = [float(V[i]) for i in indices]\n X.append(x)\n gene_names.append(V[1])\n # Convert the Python list of lists to a Numpy array and transpose to match\n # the Scanpy convention of storing samples in rows and variables in colums.\n X = np.array(X).T\n obs = pd.DataFrame({\"groups\": groups}, index=sample_names)\n var = pd.DataFrame(index=gene_names)\n return AnnData(X=X, obs=obs, var=var, dtype=X.dtype)\n\n\n# -------------------------------------------------------------------------------\n# Type conversion\n# -------------------------------------------------------------------------------\n\n\ndef is_float(string: str) -> float:\n \"\"\"Check whether string is float.\n\n See also\n --------\n http://stackoverflow.com/questions/736043/checking-if-a-string-can-be-converted-to-float-in-python\n \"\"\"\n try:\n float(string)\n return True\n except ValueError:\n return False\n\n\ndef is_int(string: str) -> bool:\n \"\"\"Check whether string is integer.\"\"\"\n try:\n int(string)\n return True\n except ValueError:\n return False\n\n\ndef convert_bool(string: str) -> Tuple[bool, bool]:\n \"\"\"Check whether string is boolean.\"\"\"\n if string == 'True':\n return True, True\n elif string == 'False':\n return True, False\n else:\n return False, False\n\n\ndef convert_string(string: str) -> Union[int, float, bool, str, None]:\n \"\"\"Convert string to int, float or bool.\"\"\"\n if is_int(string):\n return int(string)\n elif is_float(string):\n return float(string)\n elif convert_bool(string)[0]:\n return convert_bool(string)[1]\n elif string == 'None':\n return None\n else:\n return string\n\n\n# -------------------------------------------------------------------------------\n# Helper functions for reading and writing\n# -------------------------------------------------------------------------------\n\n\ndef get_used_files():\n \"\"\"Get files used by processes with name scanpy.\"\"\"\n import psutil\n\n loop_over_scanpy_processes = (\n proc for proc in psutil.process_iter() if proc.name() == 'scanpy'\n )\n filenames = []\n for proc in loop_over_scanpy_processes:\n try:\n flist = proc.open_files()\n for nt in flist:\n filenames.append(nt.path)\n # This catches a race condition where a process ends\n # before we can examine its files\n except 
psutil.NoSuchProcess:\n pass\n return set(filenames)\n\n\ndef _get_filename_from_key(key, ext=None) -> Path:\n ext = settings.file_format_data if ext is None else ext\n return settings.writedir / f'{key}.{ext}'\n\n\ndef _download(url: str, path: Path):\n try:\n import ipywidgets\n from tqdm.auto import tqdm\n except ImportError:\n from tqdm import tqdm\n\n from urllib.request import urlopen, Request\n from urllib.error import URLError\n\n blocksize = 1024 * 8\n blocknum = 0\n\n try:\n req = Request(url, headers={\"User-agent\": \"scanpy-user\"})\n\n try:\n open_url = urlopen(req)\n except URLError:\n logg.warning(\n 'Failed to open the url with default certificates, trying with certifi.'\n )\n\n from certifi import where\n from ssl import create_default_context\n\n open_url = urlopen(req, context=create_default_context(cafile=where()))\n\n with open_url as resp:\n total = resp.info().get(\"content-length\", None)\n with tqdm(\n unit=\"B\",\n unit_scale=True,\n miniters=1,\n unit_divisor=1024,\n total=total if total is None else int(total),\n ) as t, path.open(\"wb\") as f:\n block = resp.read(blocksize)\n while block:\n f.write(block)\n blocknum += 1\n t.update(len(block))\n block = resp.read(blocksize)\n\n except (KeyboardInterrupt, Exception):\n # Make sure file doesn’t exist half-downloaded\n if path.is_file():\n path.unlink()\n raise\n\n\ndef _check_datafile_present_and_download(path, backup_url=None):\n \"\"\"Check whether the file is present, otherwise download.\"\"\"\n path = Path(path)\n if path.is_file():\n return True\n if backup_url is None:\n return False\n logg.info(\n f'try downloading from url\\n{backup_url}\\n'\n '... this may take a while but only happens once'\n )\n if not path.parent.is_dir():\n logg.info(f'creating directory {path.parent}/ for saving data')\n path.parent.mkdir(parents=True)\n\n _download(backup_url, path)\n return True\n\n\ndef is_valid_filename(filename: Path, return_ext=False):\n \"\"\"Check whether the argument is a filename.\"\"\"\n ext = filename.suffixes\n\n if len(ext) > 2:\n logg.warning(\n f'Your filename has more than two extensions: {ext}.\\n'\n f'Only considering the two last: {ext[-2:]}.'\n )\n ext = ext[-2:]\n\n # cases for gzipped/bzipped text files\n if len(ext) == 2 and ext[0][1:] in text_exts and ext[1][1:] in ('gz', 'bz2'):\n return ext[0][1:] if return_ext else True\n elif ext and ext[-1][1:] in avail_exts:\n return ext[-1][1:] if return_ext else True\n elif ''.join(ext) == '.soft.gz':\n return 'soft.gz' if return_ext else True\n elif ''.join(ext) == '.mtx.gz':\n return 'mtx.gz' if return_ext else True\n elif not return_ext:\n return False\n raise ValueError(\n f'''\\\n{filename!r} does not end on a valid extension.\nPlease, provide one of the available extensions.\n{avail_exts}\nText files with .gz and .bz2 extensions are also supported.\\\n'''\n )\n",
"path": "scanpy/readwrite.py"
}
] | [
{
"content": "\"\"\"Reading and Writing\n\"\"\"\nfrom pathlib import Path, PurePath\nfrom typing import Union, Dict, Optional, Tuple, BinaryIO\n\nimport h5py\nimport json\nimport numpy as np\nimport pandas as pd\nfrom matplotlib.image import imread\nimport anndata\nfrom anndata import (\n AnnData,\n read_csv,\n read_text,\n read_excel,\n read_mtx,\n read_loom,\n read_hdf,\n)\nfrom anndata import read as read_h5ad\n\nfrom ._settings import settings\nfrom ._compat import Literal\nfrom ._utils import Empty, _empty\nfrom . import logging as logg\n\n# .gz and .bz2 suffixes are also allowed for text formats\ntext_exts = {\n 'csv',\n 'tsv',\n 'tab',\n 'data',\n 'txt', # these four are all equivalent\n}\navail_exts = {\n 'anndata',\n 'xlsx',\n 'h5',\n 'h5ad',\n 'mtx',\n 'mtx.gz',\n 'soft.gz',\n 'loom',\n} | text_exts\n\"\"\"Available file formats for reading data. \"\"\"\n\n\n# --------------------------------------------------------------------------------\n# Reading and Writing data files and AnnData objects\n# --------------------------------------------------------------------------------\n\n\ndef read(\n filename: Union[Path, str],\n backed: Optional[Literal['r', 'r+']] = None,\n sheet: Optional[str] = None,\n ext: Optional[str] = None,\n delimiter: Optional[str] = None,\n first_column_names: bool = False,\n backup_url: Optional[str] = None,\n cache: bool = False,\n cache_compression: Union[Literal['gzip', 'lzf'], None, Empty] = _empty,\n **kwargs,\n) -> AnnData:\n \"\"\"\\\n Read file and return :class:`~anndata.AnnData` object.\n\n To speed up reading, consider passing ``cache=True``, which creates an hdf5\n cache file.\n\n Parameters\n ----------\n filename\n If the filename has no file extension, it is interpreted as a key for\n generating a filename via ``sc.settings.writedir / (filename +\n sc.settings.file_format_data)``. This is the same behavior as in\n ``sc.read(filename, ...)``.\n backed\n If ``'r'``, load :class:`~anndata.AnnData` in ``backed`` mode instead\n of fully loading it into memory (`memory` mode). If you want to modify\n backed attributes of the AnnData object, you need to choose ``'r+'``.\n sheet\n Name of sheet/table in hdf5 or Excel file.\n ext\n Extension that indicates the file type. If ``None``, uses extension of\n filename.\n delimiter\n Delimiter that separates data within text file. If ``None``, will split at\n arbitrary number of white spaces, which is different from enforcing\n splitting at any single white space ``' '``.\n first_column_names\n Assume the first column stores row names. This is only necessary if\n these are not strings: strings in the first column are automatically\n assumed to be row names.\n backup_url\n Retrieve the file from an URL if not present on disk.\n cache\n If `False`, read from source, if `True`, read from fast 'h5ad' cache.\n cache_compression\n See the h5py :ref:`dataset_compression`.\n (Default: `settings.cache_compression`)\n kwargs\n Parameters passed to :func:`~anndata.read_loom`.\n\n Returns\n -------\n An :class:`~anndata.AnnData` object\n \"\"\"\n filename = Path(filename) # allow passing strings\n if is_valid_filename(filename):\n return _read(\n filename,\n backed=backed,\n sheet=sheet,\n ext=ext,\n delimiter=delimiter,\n first_column_names=first_column_names,\n backup_url=backup_url,\n cache=cache,\n cache_compression=cache_compression,\n **kwargs,\n )\n # generate filename and read to dict\n filekey = str(filename)\n filename = settings.writedir / (filekey + '.' 
+ settings.file_format_data)\n if not filename.exists():\n raise ValueError(\n f'Reading with filekey {filekey!r} failed, '\n f'the inferred filename {filename!r} does not exist. '\n 'If you intended to provide a filename, either use a filename '\n f'ending on one of the available extensions {avail_exts} '\n 'or pass the parameter `ext`.'\n )\n return read_h5ad(filename, backed=backed)\n\n\ndef read_10x_h5(\n filename: Union[str, Path],\n genome: Optional[str] = None,\n gex_only: bool = True,\n backup_url: Optional[str] = None,\n) -> AnnData:\n \"\"\"\\\n Read 10x-Genomics-formatted hdf5 file.\n\n Parameters\n ----------\n filename\n Path to a 10x hdf5 file.\n genome\n Filter expression to genes within this genome. For legacy 10x h5\n files, this must be provided if the data contains more than one genome.\n gex_only\n Only keep 'Gene Expression' data and ignore other feature types,\n e.g. 'Antibody Capture', 'CRISPR Guide Capture', or 'Custom'\n backup_url\n Retrieve the file from an URL if not present on disk.\n\n Returns\n -------\n Annotated data matrix, where observations/cells are named by their\n barcode and variables/genes by gene name. Stores the following information:\n\n :attr:`~anndata.AnnData.X`\n The data matrix is stored\n :attr:`~anndata.AnnData.obs_names`\n Cell names\n :attr:`~anndata.AnnData.var_names`\n Gene names\n :attr:`~anndata.AnnData.var`\\\\ `['gene_ids']`\n Gene IDs\n :attr:`~anndata.AnnData.var`\\\\ `['feature_types']`\n Feature types\n \"\"\"\n start = logg.info(f'reading {filename}')\n is_present = _check_datafile_present_and_download(filename, backup_url=backup_url)\n if not is_present:\n logg.debug(f'... did not find original file {filename}')\n with h5py.File(str(filename), 'r') as f:\n v3 = '/matrix' in f\n if v3:\n adata = _read_v3_10x_h5(filename, start=start)\n if genome:\n if genome not in adata.var['genome'].values:\n raise ValueError(\n f\"Could not find data corresponding to genome '{genome}' in '{filename}'. \"\n f'Available genomes are: {list(adata.var[\"genome\"].unique())}.'\n )\n adata = adata[:, adata.var['genome'] == genome]\n if gex_only:\n adata = adata[:, adata.var['feature_types'] == 'Gene Expression']\n if adata.is_view:\n adata = adata.copy()\n else:\n adata = _read_legacy_10x_h5(filename, genome=genome, start=start)\n return adata\n\n\ndef _read_legacy_10x_h5(filename, *, genome=None, start=None):\n \"\"\"\n Read hdf5 file from Cell Ranger v2 or earlier versions.\n \"\"\"\n with h5py.File(str(filename), 'r') as f:\n try:\n children = list(f.keys())\n if not genome:\n if len(children) > 1:\n raise ValueError(\n f\"'{filename}' contains more than one genome. For legacy 10x h5 \"\n \"files you must specify the genome if more than one is present. \"\n f\"Available genomes are: {children}\"\n )\n genome = children[0]\n elif genome not in children:\n raise ValueError(\n f\"Could not find genome '{genome}' in '{filename}'. 
\"\n f'Available genomes are: {children}'\n )\n\n dsets = {}\n _collect_datasets(dsets, f[genome])\n\n # AnnData works with csr matrices\n # 10x stores the transposed data, so we do the transposition right away\n from scipy.sparse import csr_matrix\n\n M, N = dsets['shape']\n data = dsets['data']\n if dsets['data'].dtype == np.dtype('int32'):\n data = dsets['data'].view('float32')\n data[:] = dsets['data']\n matrix = csr_matrix(\n (data, dsets['indices'], dsets['indptr']),\n shape=(N, M),\n )\n # the csc matrix is automatically the transposed csr matrix\n # as scanpy expects it, so, no need for a further transpostion\n adata = AnnData(\n matrix,\n obs=dict(obs_names=dsets['barcodes'].astype(str)),\n var=dict(\n var_names=dsets['gene_names'].astype(str),\n gene_ids=dsets['genes'].astype(str),\n ),\n )\n logg.info('', time=start)\n return adata\n except KeyError:\n raise Exception('File is missing one or more required datasets.')\n\n\ndef _collect_datasets(dsets: dict, group: h5py.Group):\n for k, v in group.items():\n if isinstance(v, h5py.Dataset):\n dsets[k] = v[:]\n else:\n _collect_datasets(dsets, v)\n\n\ndef _read_v3_10x_h5(filename, *, start=None):\n \"\"\"\n Read hdf5 file from Cell Ranger v3 or later versions.\n \"\"\"\n with h5py.File(str(filename), 'r') as f:\n try:\n dsets = {}\n _collect_datasets(dsets, f[\"matrix\"])\n\n from scipy.sparse import csr_matrix\n\n M, N = dsets['shape']\n data = dsets['data']\n if dsets['data'].dtype == np.dtype('int32'):\n data = dsets['data'].view('float32')\n data[:] = dsets['data']\n matrix = csr_matrix(\n (data, dsets['indices'], dsets['indptr']),\n shape=(N, M),\n )\n adata = AnnData(\n matrix,\n obs=dict(obs_names=dsets['barcodes'].astype(str)),\n var=dict(\n var_names=dsets['name'].astype(str),\n gene_ids=dsets['id'].astype(str),\n feature_types=dsets['feature_type'].astype(str),\n genome=dsets['genome'].astype(str),\n ),\n )\n logg.info('', time=start)\n return adata\n except KeyError:\n raise Exception('File is missing one or more required datasets.')\n\n\ndef read_visium(\n path: Union[str, Path],\n genome: Optional[str] = None,\n *,\n count_file: str = \"filtered_feature_bc_matrix.h5\",\n library_id: str = None,\n load_images: Optional[bool] = True,\n source_image_path: Optional[Union[str, Path]] = None,\n) -> AnnData:\n \"\"\"\\\n Read 10x-Genomics-formatted visum dataset.\n\n In addition to reading regular 10x output,\n this looks for the `spatial` folder and loads images,\n coordinates and scale factors.\n Based on the `Space Ranger output docs`_.\n\n See :func:`~scanpy.pl.spatial` for a compatible plotting function.\n\n .. _Space Ranger output docs: https://support.10xgenomics.com/spatial-gene-expression/software/pipelines/latest/output/overview\n\n Parameters\n ----------\n path\n Path to directory for visium datafiles.\n genome\n Filter expression to genes within this genome.\n count_file\n Which file in the passed directory to use as the count file. Typically would be one of:\n 'filtered_feature_bc_matrix.h5' or 'raw_feature_bc_matrix.h5'.\n library_id\n Identifier for the visium library. Can be modified when concatenating multiple adata objects.\n source_image_path\n Path to the high-resolution tissue image. Path will be included in\n `.uns[\"spatial\"][library_id][\"metadata\"][\"source_image_path\"]`.\n\n Returns\n -------\n Annotated data matrix, where observations/cells are named by their\n barcode and variables/genes by gene name. 
Stores the following information:\n\n :attr:`~anndata.AnnData.X`\n The data matrix is stored\n :attr:`~anndata.AnnData.obs_names`\n Cell names\n :attr:`~anndata.AnnData.var_names`\n Gene names\n :attr:`~anndata.AnnData.var`\\\\ `['gene_ids']`\n Gene IDs\n :attr:`~anndata.AnnData.var`\\\\ `['feature_types']`\n Feature types\n :attr:`~anndata.AnnData.uns`\\\\ `['spatial']`\n Dict of spaceranger output files with 'library_id' as key\n :attr:`~anndata.AnnData.uns`\\\\ `['spatial'][library_id]['images']`\n Dict of images (`'hires'` and `'lowres'`)\n :attr:`~anndata.AnnData.uns`\\\\ `['spatial'][library_id]['scalefactors']`\n Scale factors for the spots\n :attr:`~anndata.AnnData.uns`\\\\ `['spatial'][library_id]['metadata']`\n Files metadata: 'chemistry_description', 'software_version', 'source_image_path'\n :attr:`~anndata.AnnData.obsm`\\\\ `['spatial']`\n Spatial spot coordinates, usable as `basis` by :func:`~scanpy.pl.embedding`.\n \"\"\"\n path = Path(path)\n adata = read_10x_h5(path / count_file, genome=genome)\n\n adata.uns[\"spatial\"] = dict()\n\n from h5py import File\n\n with File(path / count_file, mode=\"r\") as f:\n attrs = dict(f.attrs)\n if library_id is None:\n library_id = str(attrs.pop(\"library_ids\")[0], \"utf-8\")\n\n adata.uns[\"spatial\"][library_id] = dict()\n\n if load_images:\n files = dict(\n tissue_positions_file=path / 'spatial/tissue_positions_list.csv',\n scalefactors_json_file=path / 'spatial/scalefactors_json.json',\n hires_image=path / 'spatial/tissue_hires_image.png',\n lowres_image=path / 'spatial/tissue_lowres_image.png',\n )\n\n # check if files exists, continue if images are missing\n for f in files.values():\n if not f.exists():\n if any(x in str(f) for x in [\"hires_image\", \"lowres_image\"]):\n logg.warning(\n f\"You seem to be missing an image file.\\n\"\n f\"Could not find '{f}'.\"\n )\n else:\n raise OSError(f\"Could not find '{f}'\")\n\n adata.uns[\"spatial\"][library_id]['images'] = dict()\n for res in ['hires', 'lowres']:\n try:\n adata.uns[\"spatial\"][library_id]['images'][res] = imread(\n str(files[f'{res}_image'])\n )\n except Exception:\n raise OSError(f\"Could not find '{res}_image'\")\n\n # read json scalefactors\n adata.uns[\"spatial\"][library_id]['scalefactors'] = json.loads(\n files['scalefactors_json_file'].read_bytes()\n )\n\n adata.uns[\"spatial\"][library_id][\"metadata\"] = {\n k: (str(attrs[k], \"utf-8\") if isinstance(attrs[k], bytes) else attrs[k])\n for k in (\"chemistry_description\", \"software_version\")\n if k in attrs\n }\n\n # read coordinates\n positions = pd.read_csv(files['tissue_positions_file'], header=None)\n positions.columns = [\n 'barcode',\n 'in_tissue',\n 'array_row',\n 'array_col',\n 'pxl_col_in_fullres',\n 'pxl_row_in_fullres',\n ]\n positions.index = positions['barcode']\n\n adata.obs = adata.obs.join(positions, how=\"left\")\n\n adata.obsm['spatial'] = adata.obs[\n ['pxl_row_in_fullres', 'pxl_col_in_fullres']\n ].to_numpy()\n adata.obs.drop(\n columns=['barcode', 'pxl_row_in_fullres', 'pxl_col_in_fullres'],\n inplace=True,\n )\n\n # put image path in uns\n if source_image_path is not None:\n # get an absolute path\n source_image_path = str(Path(source_image_path).resolve())\n adata.uns[\"spatial\"][library_id][\"metadata\"][\"source_image_path\"] = str(\n source_image_path\n )\n\n return adata\n\n\ndef read_10x_mtx(\n path: Union[Path, str],\n var_names: Literal['gene_symbols', 'gene_ids'] = 'gene_symbols',\n make_unique: bool = True,\n cache: bool = False,\n cache_compression: Union[Literal['gzip', 'lzf'], 
None, Empty] = _empty,\n gex_only: bool = True,\n *,\n prefix: str = None,\n) -> AnnData:\n \"\"\"\\\n Read 10x-Genomics-formatted mtx directory.\n\n Parameters\n ----------\n path\n Path to directory for `.mtx` and `.tsv` files,\n e.g. './filtered_gene_bc_matrices/hg19/'.\n var_names\n The variables index.\n make_unique\n Whether to make the variables index unique by appending '-1',\n '-2' etc. or not.\n cache\n If `False`, read from source, if `True`, read from fast 'h5ad' cache.\n cache_compression\n See the h5py :ref:`dataset_compression`.\n (Default: `settings.cache_compression`)\n gex_only\n Only keep 'Gene Expression' data and ignore other feature types,\n e.g. 'Antibody Capture', 'CRISPR Guide Capture', or 'Custom'\n prefix\n Any prefix before `matrix.mtx`, `genes.tsv` and `barcodes.tsv`. For instance,\n if the files are named `patientA_matrix.mtx`, `patientA_genes.tsv` and\n `patientA_barcodes.tsv` the prefix is `patientA_`.\n (Default: no prefix)\n\n Returns\n -------\n An :class:`~anndata.AnnData` object\n \"\"\"\n path = Path(path)\n prefix = \"\" if prefix is None else prefix\n genefile_exists = (path / f'{prefix}genes.tsv').is_file()\n read = _read_legacy_10x_mtx if genefile_exists else _read_v3_10x_mtx\n adata = read(\n str(path),\n var_names=var_names,\n make_unique=make_unique,\n cache=cache,\n cache_compression=cache_compression,\n prefix=prefix,\n )\n if genefile_exists or not gex_only:\n return adata\n else:\n gex_rows = list(\n map(lambda x: x == 'Gene Expression', adata.var['feature_types'])\n )\n return adata[:, gex_rows].copy()\n\n\ndef _read_legacy_10x_mtx(\n path,\n var_names='gene_symbols',\n make_unique=True,\n cache=False,\n cache_compression=_empty,\n *,\n prefix=\"\",\n):\n \"\"\"\n Read mex from output from Cell Ranger v2 or earlier versions\n \"\"\"\n path = Path(path)\n adata = read(\n path / f'{prefix}matrix.mtx',\n cache=cache,\n cache_compression=cache_compression,\n ).T # transpose the data\n genes = pd.read_csv(path / f'{prefix}genes.tsv', header=None, sep='\\t')\n if var_names == 'gene_symbols':\n var_names = genes[1].values\n if make_unique:\n var_names = anndata.utils.make_index_unique(pd.Index(var_names))\n adata.var_names = var_names\n adata.var['gene_ids'] = genes[0].values\n elif var_names == 'gene_ids':\n adata.var_names = genes[0].values\n adata.var['gene_symbols'] = genes[1].values\n else:\n raise ValueError(\"`var_names` needs to be 'gene_symbols' or 'gene_ids'\")\n adata.obs_names = pd.read_csv(path / f'{prefix}barcodes.tsv', header=None)[0].values\n return adata\n\n\ndef _read_v3_10x_mtx(\n path,\n var_names='gene_symbols',\n make_unique=True,\n cache=False,\n cache_compression=_empty,\n *,\n prefix=\"\",\n):\n \"\"\"\n Read mtx from output from Cell Ranger v3 or later versions\n \"\"\"\n path = Path(path)\n adata = read(\n path / f'{prefix}matrix.mtx.gz',\n cache=cache,\n cache_compression=cache_compression,\n ).T # transpose the data\n genes = pd.read_csv(path / f'{prefix}features.tsv.gz', header=None, sep='\\t')\n if var_names == 'gene_symbols':\n var_names = genes[1].values\n if make_unique:\n var_names = anndata.utils.make_index_unique(pd.Index(var_names))\n adata.var_names = var_names\n adata.var['gene_ids'] = genes[0].values\n elif var_names == 'gene_ids':\n adata.var_names = genes[0].values\n adata.var['gene_symbols'] = genes[1].values\n else:\n raise ValueError(\"`var_names` needs to be 'gene_symbols' or 'gene_ids'\")\n adata.var['feature_types'] = genes[2].values\n adata.obs_names = pd.read_csv(path / f'{prefix}barcodes.tsv.gz', 
header=None)[\n 0\n ].values\n return adata\n\n\ndef write(\n filename: Union[str, Path],\n adata: AnnData,\n ext: Optional[Literal['h5', 'csv', 'txt', 'npz']] = None,\n compression: Optional[Literal['gzip', 'lzf']] = 'gzip',\n compression_opts: Optional[int] = None,\n):\n \"\"\"\\\n Write :class:`~anndata.AnnData` objects to file.\n\n Parameters\n ----------\n filename\n If the filename has no file extension, it is interpreted as a key for\n generating a filename via `sc.settings.writedir / (filename +\n sc.settings.file_format_data)`. This is the same behavior as in\n :func:`~scanpy.read`.\n adata\n Annotated data matrix.\n ext\n File extension from wich to infer file format. If `None`, defaults to\n `sc.settings.file_format_data`.\n compression\n See http://docs.h5py.org/en/latest/high/dataset.html.\n compression_opts\n See http://docs.h5py.org/en/latest/high/dataset.html.\n \"\"\"\n filename = Path(filename) # allow passing strings\n if is_valid_filename(filename):\n filename = filename\n ext_ = is_valid_filename(filename, return_ext=True)\n if ext is None:\n ext = ext_\n elif ext != ext_:\n raise ValueError(\n 'It suffices to provide the file type by '\n 'providing a proper extension to the filename.'\n 'One of \"txt\", \"csv\", \"h5\" or \"npz\".'\n )\n else:\n key = filename\n ext = settings.file_format_data if ext is None else ext\n filename = _get_filename_from_key(key, ext)\n if ext == 'csv':\n adata.write_csvs(filename)\n else:\n adata.write(\n filename, compression=compression, compression_opts=compression_opts\n )\n\n\n# -------------------------------------------------------------------------------\n# Reading and writing parameter files\n# -------------------------------------------------------------------------------\n\n\ndef read_params(\n filename: Union[Path, str], asheader: bool = False\n) -> Dict[str, Union[int, float, bool, str, None]]:\n \"\"\"\\\n Read parameter dictionary from text file.\n\n Assumes that parameters are specified in the format::\n\n par1 = value1\n par2 = value2\n\n Comments that start with '#' are allowed.\n\n Parameters\n ----------\n filename\n Filename of data file.\n asheader\n Read the dictionary from the header (comment section) of a file.\n\n Returns\n -------\n Dictionary that stores parameters.\n \"\"\"\n filename = str(filename) # allow passing pathlib.Path objects\n from collections import OrderedDict\n\n params = OrderedDict([])\n for line in open(filename):\n if '=' in line:\n if not asheader or line.startswith('#'):\n line = line[1:] if line.startswith('#') else line\n key, val = line.split('=')\n key = key.strip()\n val = val.strip()\n params[key] = convert_string(val)\n return params\n\n\ndef write_params(path: Union[Path, str], *args, **maps):\n \"\"\"\\\n Write parameters to file, so that it's readable by read_params.\n\n Uses INI file format.\n \"\"\"\n path = Path(path)\n if not path.parent.is_dir():\n path.parent.mkdir(parents=True)\n if len(args) == 1:\n maps[None] = args[0]\n with path.open('w') as f:\n for header, map in maps.items():\n if header is not None:\n f.write(f'[{header}]\\n')\n for key, val in map.items():\n f.write(f'{key} = {val}\\n')\n\n\n# -------------------------------------------------------------------------------\n# Reading and Writing data files\n# -------------------------------------------------------------------------------\n\n\ndef _read(\n filename: Path,\n backed=None,\n sheet=None,\n ext=None,\n delimiter=None,\n first_column_names=None,\n backup_url=None,\n cache=False,\n cache_compression=None,\n 
suppress_cache_warning=False,\n **kwargs,\n):\n if ext is not None and ext not in avail_exts:\n raise ValueError(\n 'Please provide one of the available extensions.\\n' f'{avail_exts}'\n )\n else:\n ext = is_valid_filename(filename, return_ext=True)\n is_present = _check_datafile_present_and_download(filename, backup_url=backup_url)\n if not is_present:\n logg.debug(f'... did not find original file {filename}')\n # read hdf5 files\n if ext in {'h5', 'h5ad'}:\n if sheet is None:\n return read_h5ad(filename, backed=backed)\n else:\n logg.debug(f'reading sheet {sheet} from file {filename}')\n return read_hdf(filename, sheet)\n # read other file types\n path_cache = settings.cachedir / _slugify(filename).replace(\n '.' + ext, '.h5ad'\n ) # type: Path\n if path_cache.suffix in {'.gz', '.bz2'}:\n path_cache = path_cache.with_suffix('')\n if cache and path_cache.is_file():\n logg.info(f'... reading from cache file {path_cache}')\n return read_h5ad(path_cache)\n\n if not is_present:\n raise FileNotFoundError(f'Did not find file {filename}.')\n logg.debug(f'reading {filename}')\n if not cache and not suppress_cache_warning:\n logg.hint(\n 'This might be very slow. Consider passing `cache=True`, '\n 'which enables much faster reading from a cache file.'\n )\n # do the actual reading\n if ext == 'xlsx' or ext == 'xls':\n if sheet is None:\n raise ValueError(\"Provide `sheet` parameter when reading '.xlsx' files.\")\n else:\n adata = read_excel(filename, sheet)\n elif ext in {'mtx', 'mtx.gz'}:\n adata = read_mtx(filename)\n elif ext == 'csv':\n adata = read_csv(filename, first_column_names=first_column_names)\n elif ext in {'txt', 'tab', 'data', 'tsv'}:\n if ext == 'data':\n logg.hint(\n \"... assuming '.data' means tab or white-space \" 'separated text file',\n )\n logg.hint('change this by passing `ext` to sc.read')\n adata = read_text(filename, delimiter, first_column_names)\n elif ext == 'soft.gz':\n adata = _read_softgz(filename)\n elif ext == 'loom':\n adata = read_loom(filename=filename, **kwargs)\n else:\n raise ValueError(f'Unknown extension {ext}.')\n if cache:\n logg.info(\n f'... writing an {settings.file_format_data} '\n 'cache file to speedup reading next time'\n )\n if cache_compression is _empty:\n cache_compression = settings.cache_compression\n if not path_cache.parent.is_dir():\n path_cache.parent.mkdir(parents=True)\n # write for faster reading when calling the next time\n adata.write(path_cache, compression=cache_compression)\n return adata\n\n\ndef _slugify(path: Union[str, PurePath]) -> str:\n \"\"\"Make a path into a filename.\"\"\"\n if not isinstance(path, PurePath):\n path = PurePath(path)\n parts = list(path.parts)\n if parts[0] == '/':\n parts.pop(0)\n elif len(parts[0]) == 3 and parts[0][1:] == ':\\\\':\n parts[0] = parts[0][0] # C:\\ → C\n filename = '-'.join(parts)\n assert '/' not in filename, filename\n assert not filename[1:].startswith(':'), filename\n return filename\n\n\ndef _read_softgz(filename: Union[str, bytes, Path, BinaryIO]) -> AnnData:\n \"\"\"\\\n Read a SOFT format data file.\n\n The SOFT format is documented here\n http://www.ncbi.nlm.nih.gov/geo/info/soft2.html.\n\n Notes\n -----\n The function is based on a script by Kerby Shedden.\n http://dept.stat.lsa.umich.edu/~kshedden/Python-Workshop/gene_expression_comparison.html\n \"\"\"\n import gzip\n\n with gzip.open(filename, mode='rt') as file:\n # The header part of the file contains information about the\n # samples. 
Read that information first.\n samples_info = {}\n for line in file:\n if line.startswith(\"!dataset_table_begin\"):\n break\n elif line.startswith(\"!subset_description\"):\n subset_description = line.split(\"=\")[1].strip()\n elif line.startswith(\"!subset_sample_id\"):\n subset_ids = line.split(\"=\")[1].split(\",\")\n subset_ids = [x.strip() for x in subset_ids]\n for k in subset_ids:\n samples_info[k] = subset_description\n # Next line is the column headers (sample id's)\n sample_names = file.readline().strip().split(\"\\t\")\n # The column indices that contain gene expression data\n indices = [i for i, x in enumerate(sample_names) if x.startswith(\"GSM\")]\n # Restrict the column headers to those that we keep\n sample_names = [sample_names[i] for i in indices]\n # Get a list of sample labels\n groups = [samples_info[k] for k in sample_names]\n # Read the gene expression data as a list of lists, also get the gene\n # identifiers\n gene_names, X = [], []\n for line in file:\n # This is what signals the end of the gene expression data\n # section in the file\n if line.startswith(\"!dataset_table_end\"):\n break\n V = line.split(\"\\t\")\n # Extract the values that correspond to gene expression measures\n # and convert the strings to numbers\n x = [float(V[i]) for i in indices]\n X.append(x)\n gene_names.append(V[1])\n # Convert the Python list of lists to a Numpy array and transpose to match\n # the Scanpy convention of storing samples in rows and variables in colums.\n X = np.array(X).T\n obs = pd.DataFrame({\"groups\": groups}, index=sample_names)\n var = pd.DataFrame(index=gene_names)\n return AnnData(X=X, obs=obs, var=var, dtype=X.dtype)\n\n\n# -------------------------------------------------------------------------------\n# Type conversion\n# -------------------------------------------------------------------------------\n\n\ndef is_float(string: str) -> float:\n \"\"\"Check whether string is float.\n\n See also\n --------\n http://stackoverflow.com/questions/736043/checking-if-a-string-can-be-converted-to-float-in-python\n \"\"\"\n try:\n float(string)\n return True\n except ValueError:\n return False\n\n\ndef is_int(string: str) -> bool:\n \"\"\"Check whether string is integer.\"\"\"\n try:\n int(string)\n return True\n except ValueError:\n return False\n\n\ndef convert_bool(string: str) -> Tuple[bool, bool]:\n \"\"\"Check whether string is boolean.\"\"\"\n if string == 'True':\n return True, True\n elif string == 'False':\n return True, False\n else:\n return False, False\n\n\ndef convert_string(string: str) -> Union[int, float, bool, str, None]:\n \"\"\"Convert string to int, float or bool.\"\"\"\n if is_int(string):\n return int(string)\n elif is_float(string):\n return float(string)\n elif convert_bool(string)[0]:\n return convert_bool(string)[1]\n elif string == 'None':\n return None\n else:\n return string\n\n\n# -------------------------------------------------------------------------------\n# Helper functions for reading and writing\n# -------------------------------------------------------------------------------\n\n\ndef get_used_files():\n \"\"\"Get files used by processes with name scanpy.\"\"\"\n import psutil\n\n loop_over_scanpy_processes = (\n proc for proc in psutil.process_iter() if proc.name() == 'scanpy'\n )\n filenames = []\n for proc in loop_over_scanpy_processes:\n try:\n flist = proc.open_files()\n for nt in flist:\n filenames.append(nt.path)\n # This catches a race condition where a process ends\n # before we can examine its files\n except 
psutil.NoSuchProcess:\n pass\n return set(filenames)\n\n\ndef _get_filename_from_key(key, ext=None) -> Path:\n ext = settings.file_format_data if ext is None else ext\n return settings.writedir / f'{key}.{ext}'\n\n\ndef _download(url: str, path: Path):\n try:\n import ipywidgets\n from tqdm.auto import tqdm\n except ImportError:\n from tqdm import tqdm\n\n from urllib.request import urlopen, Request\n from urllib.error import URLError\n\n blocksize = 1024 * 8\n blocknum = 0\n\n try:\n req = Request(url, headers={\"User-agent\": \"scanpy-user\"})\n\n try:\n open_url = urlopen(req)\n except URLError:\n logg.warning(\n 'Failed to open the url with default certificates, trying with certifi.'\n )\n\n from certifi import where\n from ssl import create_default_context\n\n open_url = urlopen(req, context=create_default_context(cafile=where()))\n\n with open_url as resp:\n total = resp.info().get(\"content-length\", None)\n with tqdm(\n unit=\"B\",\n unit_scale=True,\n miniters=1,\n unit_divisor=1024,\n total=total if total is None else int(total),\n ) as t, path.open(\"wb\") as f:\n block = resp.read(blocksize)\n while block:\n f.write(block)\n blocknum += 1\n t.update(len(block))\n block = resp.read(blocksize)\n\n except (KeyboardInterrupt, Exception):\n # Make sure file doesn’t exist half-downloaded\n if path.is_file():\n path.unlink()\n raise\n\n\ndef _check_datafile_present_and_download(path, backup_url=None):\n \"\"\"Check whether the file is present, otherwise download.\"\"\"\n path = Path(path)\n if path.is_file():\n return True\n if backup_url is None:\n return False\n logg.info(\n f'try downloading from url\\n{backup_url}\\n'\n '... this may take a while but only happens once'\n )\n if not path.parent.is_dir():\n logg.info(f'creating directory {path.parent}/ for saving data')\n path.parent.mkdir(parents=True)\n\n _download(backup_url, path)\n return True\n\n\ndef is_valid_filename(filename: Path, return_ext=False):\n \"\"\"Check whether the argument is a filename.\"\"\"\n ext = filename.suffixes\n\n if len(ext) > 2:\n logg.warning(\n f'Your filename has more than two extensions: {ext}.\\n'\n f'Only considering the two last: {ext[-2:]}.'\n )\n ext = ext[-2:]\n\n # cases for gzipped/bzipped text files\n if len(ext) == 2 and ext[0][1:] in text_exts and ext[1][1:] in ('gz', 'bz2'):\n return ext[0][1:] if return_ext else True\n elif ext and ext[-1][1:] in avail_exts:\n return ext[-1][1:] if return_ext else True\n elif ''.join(ext) == '.soft.gz':\n return 'soft.gz' if return_ext else True\n elif ''.join(ext) == '.mtx.gz':\n return 'mtx.gz' if return_ext else True\n elif not return_ext:\n return False\n raise ValueError(\n f'''\\\n{filename!r} does not end on a valid extension.\nPlease, provide one of the available extensions.\n{avail_exts}\nText files with .gz and .bz2 extensions are also supported.\\\n'''\n )\n",
"path": "scanpy/readwrite.py"
}
] | diff --git a/scanpy/readwrite.py b/scanpy/readwrite.py
index f2aef41297..5ff556ddc5 100644
--- a/scanpy/readwrite.py
+++ b/scanpy/readwrite.py
@@ -219,7 +219,7 @@ def _read_legacy_10x_h5(filename, *, genome=None, start=None):
)
dsets = {}
- _collect_datasets(dsets, f)
+ _collect_datasets(dsets, f[genome])
# AnnData works with csr matrices
# 10x stores the transposed data, so we do the transposition right away
diff --git a/scanpy/tests/_data/10x_data/1.2.0/multiple_genomes.h5 b/scanpy/tests/_data/10x_data/1.2.0/multiple_genomes.h5
new file mode 100644
index 0000000000..3d04d4e909
Binary files /dev/null and b/scanpy/tests/_data/10x_data/1.2.0/multiple_genomes.h5 differ
diff --git a/scanpy/tests/test_read_10x.py b/scanpy/tests/test_read_10x.py
index e9cf188a4b..0f4373334a 100644
--- a/scanpy/tests/test_read_10x.py
+++ b/scanpy/tests/test_read_10x.py
@@ -73,6 +73,23 @@ def test_read_10x_h5_v1():
assert_anndata_equal(spec_genome_v1, nospec_genome_v1)
+def test_read_10x_h5_v2_multiple_genomes():
+ genome1_v1 = sc.read_10x_h5(
+ ROOT / '1.2.0' / 'multiple_genomes.h5',
+ genome='hg19_chr21',
+ )
+ genome2_v1 = sc.read_10x_h5(
+ ROOT / '1.2.0' / 'multiple_genomes.h5',
+ genome='another_genome',
+ )
+ # the test data are such that X is the same shape for both "genomes",
+ # but the values are different
+ assert (genome1_v1.X != genome2_v1.X).sum() > 0, (
+ 'loading data from two different genomes in 10x v2 format. '
+ 'should be different, but is the same. '
+ )
+
+
def test_read_10x_h5():
spec_genome_v3 = sc.read_10x_h5(
ROOT / '3.0.0' / 'filtered_feature_bc_matrix.h5',
|
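A note on the diff above: in a legacy (CellRanger v2 / 1.2.0-style) 10x HDF5 file, each genome lives in its own top-level group, so collecting over the whole file can pick up datasets from the wrong genome, while `f[genome]` restricts `_collect_datasets` to the group the caller asked for. The sketch below is only an illustration of that layout, not scanpy code; the file path and genome names are taken from the new test in the diff, and the `h5py` calls are my assumption about how to inspect such a file.

```python
import h5py

# Hypothetical inspection of the test fixture added in the diff above.
# In the legacy 10x format, each genome is a separate top-level group.
with h5py.File("scanpy/tests/_data/10x_data/1.2.0/multiple_genomes.h5", "r") as f:
    print(list(f.keys()))             # e.g. ['another_genome', 'hg19_chr21']
    genome_group = f["hg19_chr21"]    # the group the fixed call now traverses
    print(list(genome_group.keys()))  # datasets belonging to that genome only
```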
paperless-ngx__paperless-ngx-6474 | [BUG] E-Mail-Filter matching on non-Emails
### Description
I am using an Email-Rule to receive invoices: two different rules for two different users.
To assign them to a user, I add a workflow that uses the E-Mail-Rule-Filter and then assigns the owner and the storage location.
For whatever reason this workflow is applied to every scanned document (which is consumed via the consumption folder).
The Mail Rule:

The Workflow:

### Steps to reproduce
1. Add a mail rule to collect a document from mail
2. Add a workflow that filters on the mail rule from step 1.
3. Add a document to the consumption folder.
### Webserver logs
```bash
...
[2024-04-20 11:05:11,431] [INFO] [paperless.management.consumer] Adding /usr/src/paperless/consume/20240420_100319.pdf to the task queue.
[2024-04-20 11:05:12,235] [DEBUG] [paperless.tasks] Skipping plugin CollatePlugin
[2024-04-20 11:05:12,236] [DEBUG] [paperless.tasks] Skipping plugin BarcodePlugin
[2024-04-20 11:05:12,236] [DEBUG] [paperless.tasks] Executing plugin WorkflowTriggerPlugin
[2024-04-20 11:05:13,475] [INFO] [paperless.matching] Document matched WorkflowTrigger 6 from Workflow: Pauschal zu Finkman
[2024-04-20 11:05:13,678] [INFO] [paperless.matching] Document did not match Workflow: Tag Kontoauszug
[2024-04-20 11:05:13,678] [DEBUG] [paperless.matching] ('Document path /usr/src/paperless/consume/20240420_100319.pdf does not match Sven/Kontoauszug/*',)
[2024-04-20 11:05:13,686] [INFO] [paperless.matching] Document did not match Workflow: Lohnabrechnung
[2024-04-20 11:05:13,686] [DEBUG] [paperless.matching] ('Document path /usr/src/paperless/consume/20240420_100319.pdf does not match Sven/Lohnabrechnung/*',)
[2024-04-20 11:05:13,693] [INFO] [paperless.matching] Document did not match Workflow: Karo Import
[2024-04-20 11:05:13,694] [DEBUG] [paperless.matching] ('Document path /usr/src/paperless/consume/20240420_100319.pdf does not match */Karo/*',)
[2024-04-20 11:05:13,698] [INFO] [paperless.matching] Document did not match Workflow: Kinder zu alle
[2024-04-20 11:05:13,699] [DEBUG] [paperless.matching] No matching triggers with type 1 found
[2024-04-20 11:05:13,703] [INFO] [paperless.matching] Document did not match Workflow: Landstuhl zu ImmoGbr
[2024-04-20 11:05:13,704] [DEBUG] [paperless.matching] No matching triggers with type 1 found
[2024-04-20 11:05:13,710] [INFO] [paperless.matching] Document matched WorkflowTrigger 7 from Workflow: Karo Email Import zu Karo Speicherordner
[2024-04-20 11:05:13,730] [INFO] [paperless.tasks] WorkflowTriggerPlugin completed with: Applying WorkflowAction 6 from Workflow: Pauschal zu Finkman
Applying WorkflowAction 7 from Workflow: Karo Email Import zu Karo Speicherordner
[2024-04-20 11:05:14,030] [INFO] [paperless.consumer] Consuming 20240420_100319.pdf
...
```
### Browser logs
_No response_
### Paperless-ngx version
2.7.2
### Host OS
Synology
### Installation method
Docker - official image
### Browser
Safari
### Configuration changes
_No response_
### Other
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description.
| [
{
"content": "import logging\nimport re\nfrom fnmatch import fnmatch\nfrom typing import Union\n\nfrom documents.classifier import DocumentClassifier\nfrom documents.data_models import ConsumableDocument\nfrom documents.data_models import DocumentSource\nfrom documents.models import Correspondent\nfrom documents.models import Document\nfrom documents.models import DocumentType\nfrom documents.models import MatchingModel\nfrom documents.models import StoragePath\nfrom documents.models import Tag\nfrom documents.models import Workflow\nfrom documents.models import WorkflowTrigger\nfrom documents.permissions import get_objects_for_user_owner_aware\n\nlogger = logging.getLogger(\"paperless.matching\")\n\n\ndef log_reason(\n matching_model: Union[MatchingModel, WorkflowTrigger],\n document: Document,\n reason: str,\n):\n class_name = type(matching_model).__name__\n name = (\n matching_model.name if hasattr(matching_model, \"name\") else str(matching_model)\n )\n logger.debug(\n f\"{class_name} {name} matched on document {document} because {reason}\",\n )\n\n\ndef match_correspondents(document: Document, classifier: DocumentClassifier, user=None):\n pred_id = classifier.predict_correspondent(document.content) if classifier else None\n\n if user is None and document.owner is not None:\n user = document.owner\n\n if user is not None:\n correspondents = get_objects_for_user_owner_aware(\n user,\n \"documents.view_correspondent\",\n Correspondent,\n )\n else:\n correspondents = Correspondent.objects.all()\n\n return list(\n filter(\n lambda o: matches(o, document)\n or (o.pk == pred_id and o.matching_algorithm == MatchingModel.MATCH_AUTO),\n correspondents,\n ),\n )\n\n\ndef match_document_types(document: Document, classifier: DocumentClassifier, user=None):\n pred_id = classifier.predict_document_type(document.content) if classifier else None\n\n if user is None and document.owner is not None:\n user = document.owner\n\n if user is not None:\n document_types = get_objects_for_user_owner_aware(\n user,\n \"documents.view_documenttype\",\n DocumentType,\n )\n else:\n document_types = DocumentType.objects.all()\n\n return list(\n filter(\n lambda o: matches(o, document)\n or (o.pk == pred_id and o.matching_algorithm == MatchingModel.MATCH_AUTO),\n document_types,\n ),\n )\n\n\ndef match_tags(document: Document, classifier: DocumentClassifier, user=None):\n predicted_tag_ids = classifier.predict_tags(document.content) if classifier else []\n\n if user is None and document.owner is not None:\n user = document.owner\n\n if user is not None:\n tags = get_objects_for_user_owner_aware(user, \"documents.view_tag\", Tag)\n else:\n tags = Tag.objects.all()\n\n return list(\n filter(\n lambda o: matches(o, document)\n or (\n o.matching_algorithm == MatchingModel.MATCH_AUTO\n and o.pk in predicted_tag_ids\n ),\n tags,\n ),\n )\n\n\ndef match_storage_paths(document: Document, classifier: DocumentClassifier, user=None):\n pred_id = classifier.predict_storage_path(document.content) if classifier else None\n\n if user is None and document.owner is not None:\n user = document.owner\n\n if user is not None:\n storage_paths = get_objects_for_user_owner_aware(\n user,\n \"documents.view_storagepath\",\n StoragePath,\n )\n else:\n storage_paths = StoragePath.objects.all()\n\n return list(\n filter(\n lambda o: matches(o, document)\n or (o.pk == pred_id and o.matching_algorithm == MatchingModel.MATCH_AUTO),\n storage_paths,\n ),\n )\n\n\ndef matches(matching_model: MatchingModel, document: Document):\n search_kwargs = 
{}\n\n document_content = document.content\n\n # Check that match is not empty\n if not matching_model.match.strip():\n return False\n\n if matching_model.is_insensitive:\n search_kwargs = {\"flags\": re.IGNORECASE}\n\n if matching_model.matching_algorithm == MatchingModel.MATCH_NONE:\n return False\n\n elif matching_model.matching_algorithm == MatchingModel.MATCH_ALL:\n for word in _split_match(matching_model):\n search_result = re.search(rf\"\\b{word}\\b\", document_content, **search_kwargs)\n if not search_result:\n return False\n log_reason(\n matching_model,\n document,\n f\"it contains all of these words: {matching_model.match}\",\n )\n return True\n\n elif matching_model.matching_algorithm == MatchingModel.MATCH_ANY:\n for word in _split_match(matching_model):\n if re.search(rf\"\\b{word}\\b\", document_content, **search_kwargs):\n log_reason(matching_model, document, f\"it contains this word: {word}\")\n return True\n return False\n\n elif matching_model.matching_algorithm == MatchingModel.MATCH_LITERAL:\n result = bool(\n re.search(\n rf\"\\b{re.escape(matching_model.match)}\\b\",\n document_content,\n **search_kwargs,\n ),\n )\n if result:\n log_reason(\n matching_model,\n document,\n f'it contains this string: \"{matching_model.match}\"',\n )\n return result\n\n elif matching_model.matching_algorithm == MatchingModel.MATCH_REGEX:\n try:\n match = re.search(\n re.compile(matching_model.match, **search_kwargs),\n document_content,\n )\n except re.error:\n logger.error(\n f\"Error while processing regular expression {matching_model.match}\",\n )\n return False\n if match:\n log_reason(\n matching_model,\n document,\n f\"the string {match.group()} matches the regular expression \"\n f\"{matching_model.match}\",\n )\n return bool(match)\n\n elif matching_model.matching_algorithm == MatchingModel.MATCH_FUZZY:\n from rapidfuzz import fuzz\n\n match = re.sub(r\"[^\\w\\s]\", \"\", matching_model.match)\n text = re.sub(r\"[^\\w\\s]\", \"\", document_content)\n if matching_model.is_insensitive:\n match = match.lower()\n text = text.lower()\n if fuzz.partial_ratio(match, text, score_cutoff=90):\n # TODO: make this better\n log_reason(\n matching_model,\n document,\n f\"parts of the document content somehow match the string \"\n f\"{matching_model.match}\",\n )\n return True\n else:\n return False\n\n elif matching_model.matching_algorithm == MatchingModel.MATCH_AUTO:\n # this is done elsewhere.\n return False\n\n else:\n raise NotImplementedError(\"Unsupported matching algorithm\")\n\n\ndef _split_match(matching_model):\n \"\"\"\n Splits the match to individual keywords, getting rid of unnecessary\n spaces and grouping quoted words together.\n\n Example:\n ' some random words \"with quotes \" and spaces'\n ==>\n [\"some\", \"random\", \"words\", \"with+quotes\", \"and\", \"spaces\"]\n \"\"\"\n findterms = re.compile(r'\"([^\"]+)\"|(\\S+)').findall\n normspace = re.compile(r\"\\s+\").sub\n return [\n # normspace(\" \", (t[0] or t[1]).strip()).replace(\" \", r\"\\s+\")\n re.escape(normspace(\" \", (t[0] or t[1]).strip())).replace(r\"\\ \", r\"\\s+\")\n for t in findterms(matching_model.match)\n ]\n\n\ndef consumable_document_matches_workflow(\n document: ConsumableDocument,\n trigger: WorkflowTrigger,\n) -> tuple[bool, str]:\n \"\"\"\n Returns True if the ConsumableDocument matches all filters from the workflow trigger,\n False otherwise. 
Includes a reason if doesn't match\n \"\"\"\n\n trigger_matched = True\n reason = \"\"\n\n # Document source vs trigger source\n if len(trigger.sources) > 0 and document.source not in [\n int(x) for x in list(trigger.sources)\n ]:\n reason = (\n f\"Document source {document.source.name} not in\"\n f\" {[DocumentSource(int(x)).name for x in trigger.sources]}\",\n )\n trigger_matched = False\n\n # Document mail rule vs trigger mail rule\n if (\n document.mailrule_id is not None\n and trigger.filter_mailrule is not None\n and document.mailrule_id != trigger.filter_mailrule.pk\n ):\n reason = (\n f\"Document mail rule {document.mailrule_id}\"\n f\" != {trigger.filter_mailrule.pk}\",\n )\n trigger_matched = False\n\n # Document filename vs trigger filename\n if (\n trigger.filter_filename is not None\n and len(trigger.filter_filename) > 0\n and not fnmatch(\n document.original_file.name.lower(),\n trigger.filter_filename.lower(),\n )\n ):\n reason = (\n f\"Document filename {document.original_file.name} does not match\"\n f\" {trigger.filter_filename.lower()}\",\n )\n trigger_matched = False\n\n # Document path vs trigger path\n if (\n trigger.filter_path is not None\n and len(trigger.filter_path) > 0\n and not fnmatch(\n document.original_file,\n trigger.filter_path,\n )\n ):\n reason = (\n f\"Document path {document.original_file}\"\n f\" does not match {trigger.filter_path}\",\n )\n trigger_matched = False\n\n return (trigger_matched, reason)\n\n\ndef existing_document_matches_workflow(\n document: Document,\n trigger: WorkflowTrigger,\n) -> tuple[bool, str]:\n \"\"\"\n Returns True if the Document matches all filters from the workflow trigger,\n False otherwise. Includes a reason if doesn't match\n \"\"\"\n\n trigger_matched = True\n reason = \"\"\n\n if trigger.matching_algorithm > MatchingModel.MATCH_NONE and not matches(\n trigger,\n document,\n ):\n reason = (\n f\"Document content matching settings for algorithm '{trigger.matching_algorithm}' did not match\",\n )\n trigger_matched = False\n\n # Document tags vs trigger has_tags\n if (\n trigger.filter_has_tags.all().count() > 0\n and document.tags.filter(\n id__in=trigger.filter_has_tags.all().values_list(\"id\"),\n ).count()\n == 0\n ):\n reason = (\n f\"Document tags {document.tags.all()} do not include\"\n f\" {trigger.filter_has_tags.all()}\",\n )\n trigger_matched = False\n\n # Document correspondent vs trigger has_correspondent\n if (\n trigger.filter_has_correspondent is not None\n and document.correspondent != trigger.filter_has_correspondent\n ):\n reason = (\n f\"Document correspondent {document.correspondent} does not match {trigger.filter_has_correspondent}\",\n )\n trigger_matched = False\n\n # Document document_type vs trigger has_document_type\n if (\n trigger.filter_has_document_type is not None\n and document.document_type != trigger.filter_has_document_type\n ):\n reason = (\n f\"Document doc type {document.document_type} does not match {trigger.filter_has_document_type}\",\n )\n trigger_matched = False\n\n # Document original_filename vs trigger filename\n if (\n trigger.filter_filename is not None\n and len(trigger.filter_filename) > 0\n and document.original_filename is not None\n and not fnmatch(\n document.original_filename.lower(),\n trigger.filter_filename.lower(),\n )\n ):\n reason = (\n f\"Document filename {document.original_filename} does not match\"\n f\" {trigger.filter_filename.lower()}\",\n )\n trigger_matched = False\n\n return (trigger_matched, reason)\n\n\ndef document_matches_workflow(\n document: 
Union[ConsumableDocument, Document],\n workflow: Workflow,\n trigger_type: WorkflowTrigger.WorkflowTriggerType,\n) -> bool:\n \"\"\"\n Returns True if the ConsumableDocument or Document matches all filters and\n settings from the workflow trigger, False otherwise\n \"\"\"\n\n trigger_matched = True\n if workflow.triggers.filter(type=trigger_type).count() == 0:\n trigger_matched = False\n logger.info(f\"Document did not match {workflow}\")\n logger.debug(f\"No matching triggers with type {trigger_type} found\")\n else:\n for trigger in workflow.triggers.filter(type=trigger_type):\n if trigger_type == WorkflowTrigger.WorkflowTriggerType.CONSUMPTION:\n trigger_matched, reason = consumable_document_matches_workflow(\n document,\n trigger,\n )\n elif (\n trigger_type == WorkflowTrigger.WorkflowTriggerType.DOCUMENT_ADDED\n or trigger_type == WorkflowTrigger.WorkflowTriggerType.DOCUMENT_UPDATED\n ):\n trigger_matched, reason = existing_document_matches_workflow(\n document,\n trigger,\n )\n else:\n # New trigger types need to be explicitly checked above\n raise Exception(f\"Trigger type {trigger_type} not yet supported\")\n\n if trigger_matched:\n logger.info(f\"Document matched {trigger} from {workflow}\")\n # matched, bail early\n return True\n else:\n logger.info(f\"Document did not match {workflow}\")\n logger.debug(reason)\n\n return trigger_matched\n",
"path": "src/documents/matching.py"
}
] | [
{
"content": "import logging\nimport re\nfrom fnmatch import fnmatch\nfrom typing import Union\n\nfrom documents.classifier import DocumentClassifier\nfrom documents.data_models import ConsumableDocument\nfrom documents.data_models import DocumentSource\nfrom documents.models import Correspondent\nfrom documents.models import Document\nfrom documents.models import DocumentType\nfrom documents.models import MatchingModel\nfrom documents.models import StoragePath\nfrom documents.models import Tag\nfrom documents.models import Workflow\nfrom documents.models import WorkflowTrigger\nfrom documents.permissions import get_objects_for_user_owner_aware\n\nlogger = logging.getLogger(\"paperless.matching\")\n\n\ndef log_reason(\n matching_model: Union[MatchingModel, WorkflowTrigger],\n document: Document,\n reason: str,\n):\n class_name = type(matching_model).__name__\n name = (\n matching_model.name if hasattr(matching_model, \"name\") else str(matching_model)\n )\n logger.debug(\n f\"{class_name} {name} matched on document {document} because {reason}\",\n )\n\n\ndef match_correspondents(document: Document, classifier: DocumentClassifier, user=None):\n pred_id = classifier.predict_correspondent(document.content) if classifier else None\n\n if user is None and document.owner is not None:\n user = document.owner\n\n if user is not None:\n correspondents = get_objects_for_user_owner_aware(\n user,\n \"documents.view_correspondent\",\n Correspondent,\n )\n else:\n correspondents = Correspondent.objects.all()\n\n return list(\n filter(\n lambda o: matches(o, document)\n or (o.pk == pred_id and o.matching_algorithm == MatchingModel.MATCH_AUTO),\n correspondents,\n ),\n )\n\n\ndef match_document_types(document: Document, classifier: DocumentClassifier, user=None):\n pred_id = classifier.predict_document_type(document.content) if classifier else None\n\n if user is None and document.owner is not None:\n user = document.owner\n\n if user is not None:\n document_types = get_objects_for_user_owner_aware(\n user,\n \"documents.view_documenttype\",\n DocumentType,\n )\n else:\n document_types = DocumentType.objects.all()\n\n return list(\n filter(\n lambda o: matches(o, document)\n or (o.pk == pred_id and o.matching_algorithm == MatchingModel.MATCH_AUTO),\n document_types,\n ),\n )\n\n\ndef match_tags(document: Document, classifier: DocumentClassifier, user=None):\n predicted_tag_ids = classifier.predict_tags(document.content) if classifier else []\n\n if user is None and document.owner is not None:\n user = document.owner\n\n if user is not None:\n tags = get_objects_for_user_owner_aware(user, \"documents.view_tag\", Tag)\n else:\n tags = Tag.objects.all()\n\n return list(\n filter(\n lambda o: matches(o, document)\n or (\n o.matching_algorithm == MatchingModel.MATCH_AUTO\n and o.pk in predicted_tag_ids\n ),\n tags,\n ),\n )\n\n\ndef match_storage_paths(document: Document, classifier: DocumentClassifier, user=None):\n pred_id = classifier.predict_storage_path(document.content) if classifier else None\n\n if user is None and document.owner is not None:\n user = document.owner\n\n if user is not None:\n storage_paths = get_objects_for_user_owner_aware(\n user,\n \"documents.view_storagepath\",\n StoragePath,\n )\n else:\n storage_paths = StoragePath.objects.all()\n\n return list(\n filter(\n lambda o: matches(o, document)\n or (o.pk == pred_id and o.matching_algorithm == MatchingModel.MATCH_AUTO),\n storage_paths,\n ),\n )\n\n\ndef matches(matching_model: MatchingModel, document: Document):\n search_kwargs = 
{}\n\n document_content = document.content\n\n # Check that match is not empty\n if not matching_model.match.strip():\n return False\n\n if matching_model.is_insensitive:\n search_kwargs = {\"flags\": re.IGNORECASE}\n\n if matching_model.matching_algorithm == MatchingModel.MATCH_NONE:\n return False\n\n elif matching_model.matching_algorithm == MatchingModel.MATCH_ALL:\n for word in _split_match(matching_model):\n search_result = re.search(rf\"\\b{word}\\b\", document_content, **search_kwargs)\n if not search_result:\n return False\n log_reason(\n matching_model,\n document,\n f\"it contains all of these words: {matching_model.match}\",\n )\n return True\n\n elif matching_model.matching_algorithm == MatchingModel.MATCH_ANY:\n for word in _split_match(matching_model):\n if re.search(rf\"\\b{word}\\b\", document_content, **search_kwargs):\n log_reason(matching_model, document, f\"it contains this word: {word}\")\n return True\n return False\n\n elif matching_model.matching_algorithm == MatchingModel.MATCH_LITERAL:\n result = bool(\n re.search(\n rf\"\\b{re.escape(matching_model.match)}\\b\",\n document_content,\n **search_kwargs,\n ),\n )\n if result:\n log_reason(\n matching_model,\n document,\n f'it contains this string: \"{matching_model.match}\"',\n )\n return result\n\n elif matching_model.matching_algorithm == MatchingModel.MATCH_REGEX:\n try:\n match = re.search(\n re.compile(matching_model.match, **search_kwargs),\n document_content,\n )\n except re.error:\n logger.error(\n f\"Error while processing regular expression {matching_model.match}\",\n )\n return False\n if match:\n log_reason(\n matching_model,\n document,\n f\"the string {match.group()} matches the regular expression \"\n f\"{matching_model.match}\",\n )\n return bool(match)\n\n elif matching_model.matching_algorithm == MatchingModel.MATCH_FUZZY:\n from rapidfuzz import fuzz\n\n match = re.sub(r\"[^\\w\\s]\", \"\", matching_model.match)\n text = re.sub(r\"[^\\w\\s]\", \"\", document_content)\n if matching_model.is_insensitive:\n match = match.lower()\n text = text.lower()\n if fuzz.partial_ratio(match, text, score_cutoff=90):\n # TODO: make this better\n log_reason(\n matching_model,\n document,\n f\"parts of the document content somehow match the string \"\n f\"{matching_model.match}\",\n )\n return True\n else:\n return False\n\n elif matching_model.matching_algorithm == MatchingModel.MATCH_AUTO:\n # this is done elsewhere.\n return False\n\n else:\n raise NotImplementedError(\"Unsupported matching algorithm\")\n\n\ndef _split_match(matching_model):\n \"\"\"\n Splits the match to individual keywords, getting rid of unnecessary\n spaces and grouping quoted words together.\n\n Example:\n ' some random words \"with quotes \" and spaces'\n ==>\n [\"some\", \"random\", \"words\", \"with+quotes\", \"and\", \"spaces\"]\n \"\"\"\n findterms = re.compile(r'\"([^\"]+)\"|(\\S+)').findall\n normspace = re.compile(r\"\\s+\").sub\n return [\n # normspace(\" \", (t[0] or t[1]).strip()).replace(\" \", r\"\\s+\")\n re.escape(normspace(\" \", (t[0] or t[1]).strip())).replace(r\"\\ \", r\"\\s+\")\n for t in findterms(matching_model.match)\n ]\n\n\ndef consumable_document_matches_workflow(\n document: ConsumableDocument,\n trigger: WorkflowTrigger,\n) -> tuple[bool, str]:\n \"\"\"\n Returns True if the ConsumableDocument matches all filters from the workflow trigger,\n False otherwise. 
Includes a reason if doesn't match\n \"\"\"\n\n trigger_matched = True\n reason = \"\"\n\n # Document source vs trigger source\n if len(trigger.sources) > 0 and document.source not in [\n int(x) for x in list(trigger.sources)\n ]:\n reason = (\n f\"Document source {document.source.name} not in\"\n f\" {[DocumentSource(int(x)).name for x in trigger.sources]}\",\n )\n trigger_matched = False\n\n # Document mail rule vs trigger mail rule\n if (\n trigger.filter_mailrule is not None\n and document.mailrule_id != trigger.filter_mailrule.pk\n ):\n reason = (\n f\"Document mail rule {document.mailrule_id}\"\n f\" != {trigger.filter_mailrule.pk}\",\n )\n trigger_matched = False\n\n # Document filename vs trigger filename\n if (\n trigger.filter_filename is not None\n and len(trigger.filter_filename) > 0\n and not fnmatch(\n document.original_file.name.lower(),\n trigger.filter_filename.lower(),\n )\n ):\n reason = (\n f\"Document filename {document.original_file.name} does not match\"\n f\" {trigger.filter_filename.lower()}\",\n )\n trigger_matched = False\n\n # Document path vs trigger path\n if (\n trigger.filter_path is not None\n and len(trigger.filter_path) > 0\n and not fnmatch(\n document.original_file,\n trigger.filter_path,\n )\n ):\n reason = (\n f\"Document path {document.original_file}\"\n f\" does not match {trigger.filter_path}\",\n )\n trigger_matched = False\n\n return (trigger_matched, reason)\n\n\ndef existing_document_matches_workflow(\n document: Document,\n trigger: WorkflowTrigger,\n) -> tuple[bool, str]:\n \"\"\"\n Returns True if the Document matches all filters from the workflow trigger,\n False otherwise. Includes a reason if doesn't match\n \"\"\"\n\n trigger_matched = True\n reason = \"\"\n\n if trigger.matching_algorithm > MatchingModel.MATCH_NONE and not matches(\n trigger,\n document,\n ):\n reason = (\n f\"Document content matching settings for algorithm '{trigger.matching_algorithm}' did not match\",\n )\n trigger_matched = False\n\n # Document tags vs trigger has_tags\n if (\n trigger.filter_has_tags.all().count() > 0\n and document.tags.filter(\n id__in=trigger.filter_has_tags.all().values_list(\"id\"),\n ).count()\n == 0\n ):\n reason = (\n f\"Document tags {document.tags.all()} do not include\"\n f\" {trigger.filter_has_tags.all()}\",\n )\n trigger_matched = False\n\n # Document correspondent vs trigger has_correspondent\n if (\n trigger.filter_has_correspondent is not None\n and document.correspondent != trigger.filter_has_correspondent\n ):\n reason = (\n f\"Document correspondent {document.correspondent} does not match {trigger.filter_has_correspondent}\",\n )\n trigger_matched = False\n\n # Document document_type vs trigger has_document_type\n if (\n trigger.filter_has_document_type is not None\n and document.document_type != trigger.filter_has_document_type\n ):\n reason = (\n f\"Document doc type {document.document_type} does not match {trigger.filter_has_document_type}\",\n )\n trigger_matched = False\n\n # Document original_filename vs trigger filename\n if (\n trigger.filter_filename is not None\n and len(trigger.filter_filename) > 0\n and document.original_filename is not None\n and not fnmatch(\n document.original_filename.lower(),\n trigger.filter_filename.lower(),\n )\n ):\n reason = (\n f\"Document filename {document.original_filename} does not match\"\n f\" {trigger.filter_filename.lower()}\",\n )\n trigger_matched = False\n\n return (trigger_matched, reason)\n\n\ndef document_matches_workflow(\n document: Union[ConsumableDocument, Document],\n 
workflow: Workflow,\n trigger_type: WorkflowTrigger.WorkflowTriggerType,\n) -> bool:\n \"\"\"\n Returns True if the ConsumableDocument or Document matches all filters and\n settings from the workflow trigger, False otherwise\n \"\"\"\n\n trigger_matched = True\n if workflow.triggers.filter(type=trigger_type).count() == 0:\n trigger_matched = False\n logger.info(f\"Document did not match {workflow}\")\n logger.debug(f\"No matching triggers with type {trigger_type} found\")\n else:\n for trigger in workflow.triggers.filter(type=trigger_type):\n if trigger_type == WorkflowTrigger.WorkflowTriggerType.CONSUMPTION:\n trigger_matched, reason = consumable_document_matches_workflow(\n document,\n trigger,\n )\n elif (\n trigger_type == WorkflowTrigger.WorkflowTriggerType.DOCUMENT_ADDED\n or trigger_type == WorkflowTrigger.WorkflowTriggerType.DOCUMENT_UPDATED\n ):\n trigger_matched, reason = existing_document_matches_workflow(\n document,\n trigger,\n )\n else:\n # New trigger types need to be explicitly checked above\n raise Exception(f\"Trigger type {trigger_type} not yet supported\")\n\n if trigger_matched:\n logger.info(f\"Document matched {trigger} from {workflow}\")\n # matched, bail early\n return True\n else:\n logger.info(f\"Document did not match {workflow}\")\n logger.debug(reason)\n\n return trigger_matched\n",
"path": "src/documents/matching.py"
}
] | diff --git a/src/documents/matching.py b/src/documents/matching.py
index 6ffa1b3aac8..586ca3a6a6e 100644
--- a/src/documents/matching.py
+++ b/src/documents/matching.py
@@ -269,8 +269,7 @@ def consumable_document_matches_workflow(
# Document mail rule vs trigger mail rule
if (
- document.mailrule_id is not None
- and trigger.filter_mailrule is not None
+ trigger.filter_mailrule is not None
and document.mailrule_id != trigger.filter_mailrule.pk
):
reason = (
|
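To spell out the fix above (a hedged reading of the diff, using the names from `src/documents/matching.py` shown in this record): the old check could only reject a document when `document.mailrule_id is not None`, so a consumption-folder document, whose `mailrule_id` is `None`, short-circuited the condition and matched every mail-rule-filtered workflow. The self-contained sketch below demonstrates the difference with hypothetical stand-in objects:

```python
# Stand-in objects for illustration only; the real logic lives in
# consumable_document_matches_workflow in src/documents/matching.py.
class Obj:
    def __init__(self, **kw):
        self.__dict__.update(kw)

doc = Obj(mailrule_id=None)               # document from the consumption folder
trigger = Obj(filter_mailrule=Obj(pk=7))  # trigger filtering on mail rule 7

# Old check: "doc.mailrule_id is not None" short-circuits, so the filter never rejects.
old_rejects = (
    doc.mailrule_id is not None
    and trigger.filter_mailrule is not None
    and doc.mailrule_id != trigger.filter_mailrule.pk
)

# New check: None != 7, so the non-mail document is rejected by the mail-rule filter.
new_rejects = (
    trigger.filter_mailrule is not None
    and doc.mailrule_id != trigger.filter_mailrule.pk
)

print(old_rejects, new_rejects)  # False True
```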
mitmproxy__mitmproxy-2067 | brotli encode/decode crash
##### Steps to reproduce the problem:
1. load google.com in browser
2. press `enter` on `GET https://www.google.com/ HTTP/2.0`
3. press `z` to select encoding in either `Request` or `Response`
4. press `b` to select brotli
##### Any other comments? What have you tried so far?
```
Traceback (most recent call last):
File "/home/whackashoe/code/mitmproxy/mitmproxy/tools/console/master.py", line 281, in run
self.loop.run()
File "/home/whackashoe/code/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 278, in run
self._run()
File "/home/whackashoe/code/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 376, in _run
self.event_loop.run()
File "/home/whackashoe/code/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 682, in run
self._loop()
File "/home/whackashoe/code/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 719, in _loop
self._watch_files[fd]()
File "/home/whackashoe/code/mitmproxy/venv/lib/python3.5/site-packages/urwid/raw_display.py", line 393, in <lambda>
event_loop, callback, self.get_available_raw_input())
File "/home/whackashoe/code/mitmproxy/venv/lib/python3.5/site-packages/urwid/raw_display.py", line 493, in parse_input
callback(processed, processed_codes)
File "/home/whackashoe/code/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 403, in _update
self.process_input(keys)
File "/home/whackashoe/code/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 503, in process_input
k = self._topmost_widget.keypress(self.screen_size, k)
File "/home/whackashoe/code/mitmproxy/mitmproxy/tools/console/window.py", line 84, in keypress
k = super().keypress(size, k)
File "/home/whackashoe/code/mitmproxy/venv/lib/python3.5/site-packages/urwid/container.py", line 1116, in keypress
return self.footer.keypress((maxcol,),key)
File "/home/whackashoe/code/mitmproxy/mitmproxy/tools/console/statusbar.py", line 155, in keypress
return self.master.ab.keypress(*args, **kwargs)
File "/home/whackashoe/code/mitmproxy/mitmproxy/tools/console/statusbar.py", line 108, in keypress
self.prompt_execute(k)
File "/home/whackashoe/code/mitmproxy/mitmproxy/tools/console/statusbar.py", line 133, in prompt_execute
msg = p(txt)
File "/home/whackashoe/code/mitmproxy/mitmproxy/tools/console/statusbar.py", line 31, in __call__
return self.callback(txt, *self.args)
File "/home/whackashoe/code/mitmproxy/mitmproxy/tools/console/flowview.py", line 686, in encode_callback
conn.encode(encoding_map[key])
File "/home/whackashoe/code/mitmproxy/mitmproxy/net/http/message.py", line 245, in encode
raise ValueError("Invalid content encoding {}".format(repr(e)))
ValueError: Invalid content encoding 'brotli'
```
Here is a patch which suppresses the error:
```
diff --git a/mitmproxy/tools/console/flowview.py b/mitmproxy/tools/console/flowview.py
index a97a9b3..650ef42 100644
--- a/mitmproxy/tools/console/flowview.py
+++ b/mitmproxy/tools/console/flowview.py
@@ -683,5 +683,9 @@ class FlowView(tabs.Tabs):
"d": "deflate",
"b": "brotli",
}
- conn.encode(encoding_map[key])
+ try:
+ conn.encode(encoding_map[key])
+ except ValueError:
+ pass
+
signals.flow_change.send(self, flow = self.flow)
```
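The patch above swallows the failure silently. A possible alternative, only a sketch and not necessarily how this was fixed upstream, would follow the pattern the same file already uses for decode errors and report the problem through `signals.status_message` (that call is used elsewhere in `flowview.py`, as visible in the file contents included later in this record):

```python
    def encode_callback(self, key, conn):
        encoding_map = {
            "z": "gzip",
            "d": "deflate",
            "b": "brotli",
        }
        try:
            conn.encode(encoding_map[key])
        except ValueError:
            # Mirror the existing decode handling in FlowView.keypress:
            # surface the error to the user instead of crashing or passing silently.
            signals.status_message.send(
                message="Could not encode - invalid encoding or missing brotli support?"
            )
        signals.flow_change.send(self, flow=self.flow)
```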
##### System information
```
$ mitmproxy --version
Mitmproxy version: 3.0.0 (2.0.0dev0020-0x2aecffd)
Python version: 3.5.0
Platform: Linux-3.13.0-107-generic-x86_64-with-Ubuntu-14.04-trusty
SSL version: OpenSSL 1.0.2k 26 Jan 2017
Linux distro: Ubuntu 14.04 trusty
```
| [
{
"content": "import math\nimport os\nimport sys\nfrom functools import lru_cache\nfrom typing import Optional, Union # noqa\n\nimport urwid\n\nfrom mitmproxy import contentviews\nfrom mitmproxy import exceptions\nfrom mitmproxy import export\nfrom mitmproxy import http\nfrom mitmproxy.net.http import Headers\nfrom mitmproxy.net.http import status_codes\nfrom mitmproxy.tools.console import common\nfrom mitmproxy.tools.console import flowdetailview\nfrom mitmproxy.tools.console import grideditor\nfrom mitmproxy.tools.console import searchable\nfrom mitmproxy.tools.console import signals\nfrom mitmproxy.tools.console import tabs\n\n\nclass SearchError(Exception):\n pass\n\n\ndef _mkhelp():\n text = []\n keys = [\n (\"A\", \"accept all intercepted flows\"),\n (\"a\", \"accept this intercepted flow\"),\n (\"b\", \"save request/response body\"),\n (\"C\", \"export flow to clipboard\"),\n (\"D\", \"duplicate flow\"),\n (\"d\", \"delete flow\"),\n (\"e\", \"edit request/response\"),\n (\"f\", \"load full body data\"),\n (\"m\", \"change body display mode for this entity\\n(default mode can be changed in the options)\"),\n (None,\n common.highlight_key(\"automatic\", \"a\") +\n [(\"text\", \": automatic detection\")]\n ),\n (None,\n common.highlight_key(\"hex\", \"e\") +\n [(\"text\", \": Hex\")]\n ),\n (None,\n common.highlight_key(\"html\", \"h\") +\n [(\"text\", \": HTML\")]\n ),\n (None,\n common.highlight_key(\"image\", \"i\") +\n [(\"text\", \": Image\")]\n ),\n (None,\n common.highlight_key(\"javascript\", \"j\") +\n [(\"text\", \": JavaScript\")]\n ),\n (None,\n common.highlight_key(\"json\", \"s\") +\n [(\"text\", \": JSON\")]\n ),\n (None,\n common.highlight_key(\"urlencoded\", \"u\") +\n [(\"text\", \": URL-encoded data\")]\n ),\n (None,\n common.highlight_key(\"raw\", \"r\") +\n [(\"text\", \": raw data\")]\n ),\n (None,\n common.highlight_key(\"xml\", \"x\") +\n [(\"text\", \": XML\")]\n ),\n (\"E\", \"export flow to file\"),\n (\"r\", \"replay request\"),\n (\"V\", \"revert changes to request\"),\n (\"v\", \"view body in external viewer\"),\n (\"w\", \"save all flows matching current view filter\"),\n (\"W\", \"save this flow\"),\n (\"x\", \"delete body\"),\n (\"z\", \"encode/decode a request/response\"),\n (\"tab\", \"next tab\"),\n (\"h, l\", \"previous tab, next tab\"),\n (\"space\", \"next flow\"),\n (\"|\", \"run script on this flow\"),\n (\"/\", \"search (case sensitive)\"),\n (\"n\", \"repeat search forward\"),\n (\"N\", \"repeat search backwards\"),\n ]\n text.extend(common.format_keyvals(keys, key=\"key\", val=\"text\", indent=4))\n return text\n\n\nhelp_context = _mkhelp()\n\nfooter = [\n ('heading_key', \"?\"), \":help \",\n ('heading_key', \"q\"), \":back \",\n]\n\n\nclass FlowViewHeader(urwid.WidgetWrap):\n\n def __init__(self, master: \"mitmproxy.console.master.ConsoleMaster\", f: http.HTTPFlow):\n self.master = master\n self.flow = f\n self._w = common.format_flow(\n f,\n False,\n extended=True,\n hostheader=self.master.options.showhost\n )\n signals.flow_change.connect(self.sig_flow_change)\n\n def sig_flow_change(self, sender, flow):\n if flow == self.flow:\n self._w = common.format_flow(\n flow,\n False,\n extended=True,\n hostheader=self.master.options.showhost\n )\n\n\nTAB_REQ = 0\nTAB_RESP = 1\n\n\nclass FlowView(tabs.Tabs):\n highlight_color = \"focusfield\"\n\n def __init__(self, master, view, flow, tab_offset):\n self.master, self.view, self.flow = master, view, flow\n super().__init__(\n [\n (self.tab_request, self.view_request),\n (self.tab_response, 
self.view_response),\n (self.tab_details, self.view_details),\n ],\n tab_offset\n )\n\n self.show()\n self.last_displayed_body = None\n signals.flow_change.connect(self.sig_flow_change)\n\n def tab_request(self):\n if self.flow.intercepted and not self.flow.response:\n return \"Request intercepted\"\n else:\n return \"Request\"\n\n def tab_response(self):\n if self.flow.intercepted and self.flow.response:\n return \"Response intercepted\"\n else:\n return \"Response\"\n\n def tab_details(self):\n return \"Detail\"\n\n def view_request(self):\n return self.conn_text(self.flow.request)\n\n def view_response(self):\n return self.conn_text(self.flow.response)\n\n def view_details(self):\n return flowdetailview.flowdetails(self.view, self.flow)\n\n def sig_flow_change(self, sender, flow):\n if flow == self.flow:\n self.show()\n\n def content_view(self, viewmode, message):\n if message.raw_content is None:\n msg, body = \"\", [urwid.Text([(\"error\", \"[content missing]\")])]\n return msg, body\n else:\n s = self.view.settings[self.flow]\n full = s.get((self.tab_offset, \"fullcontents\"), False)\n if full:\n limit = sys.maxsize\n else:\n limit = contentviews.VIEW_CUTOFF\n\n flow_modify_cache_invalidation = hash((\n message.raw_content,\n message.headers.fields,\n getattr(message, \"path\", None),\n ))\n # we need to pass the message off-band because it's not hashable\n self._get_content_view_message = message\n return self._get_content_view(viewmode, limit, flow_modify_cache_invalidation)\n\n @lru_cache(maxsize=200)\n def _get_content_view(self, viewmode, max_lines, _):\n message = self._get_content_view_message\n self._get_content_view_message = None\n description, lines, error = contentviews.get_message_content_view(\n viewmode, message\n )\n if error:\n signals.add_log(error, \"error\")\n # Give hint that you have to tab for the response.\n if description == \"No content\" and isinstance(message, http.HTTPRequest):\n description = \"No request content (press tab to view response)\"\n\n # If the users has a wide terminal, he gets fewer lines; this should not be an issue.\n chars_per_line = 80\n max_chars = max_lines * chars_per_line\n total_chars = 0\n text_objects = []\n for line in lines:\n txt = []\n for (style, text) in line:\n if total_chars + len(text) > max_chars:\n text = text[:max_chars - total_chars]\n txt.append((style, text))\n total_chars += len(text)\n if total_chars == max_chars:\n break\n\n # round up to the next line.\n total_chars = int(math.ceil(total_chars / chars_per_line) * chars_per_line)\n\n text_objects.append(urwid.Text(txt))\n if total_chars == max_chars:\n text_objects.append(urwid.Text([\n (\"highlight\", \"Stopped displaying data after %d lines. 
Press \" % max_lines),\n (\"key\", \"f\"),\n (\"highlight\", \" to load all data.\")\n ]))\n break\n\n return description, text_objects\n\n def viewmode_get(self):\n override = self.view.settings[self.flow].get(\n (self.tab_offset, \"prettyview\"),\n None\n )\n return self.master.options.default_contentview if override is None else override\n\n def conn_text(self, conn):\n if conn:\n txt = common.format_keyvals(\n [(h + \":\", v) for (h, v) in conn.headers.items(multi=True)],\n key = \"header\",\n val = \"text\"\n )\n viewmode = self.viewmode_get()\n msg, body = self.content_view(viewmode, conn)\n\n cols = [\n urwid.Text(\n [\n (\"heading\", msg),\n ]\n ),\n urwid.Text(\n [\n \" \",\n ('heading', \"[\"),\n ('heading_key', \"m\"),\n ('heading', (\":%s]\" % viewmode)),\n ],\n align=\"right\"\n )\n ]\n title = urwid.AttrWrap(urwid.Columns(cols), \"heading\")\n\n txt.append(title)\n txt.extend(body)\n else:\n txt = [\n urwid.Text(\"\"),\n urwid.Text(\n [\n (\"highlight\", \"No response. Press \"),\n (\"key\", \"e\"),\n (\"highlight\", \" and edit any aspect to add one.\"),\n ]\n )\n ]\n return searchable.Searchable(self.view, txt)\n\n def set_method_raw(self, m):\n if m:\n self.flow.request.method = m\n signals.flow_change.send(self, flow = self.flow)\n\n def edit_method(self, m):\n if m == \"e\":\n signals.status_prompt.send(\n prompt = \"Method\",\n text = self.flow.request.method,\n callback = self.set_method_raw\n )\n else:\n for i in common.METHOD_OPTIONS:\n if i[1] == m:\n self.flow.request.method = i[0].upper()\n signals.flow_change.send(self, flow = self.flow)\n\n def set_url(self, url):\n request = self.flow.request\n try:\n request.url = str(url)\n except ValueError:\n return \"Invalid URL.\"\n signals.flow_change.send(self, flow = self.flow)\n\n def set_resp_status_code(self, status_code):\n try:\n status_code = int(status_code)\n except ValueError:\n return None\n self.flow.response.status_code = status_code\n if status_code in status_codes.RESPONSES:\n self.flow.response.reason = status_codes.RESPONSES[status_code]\n signals.flow_change.send(self, flow = self.flow)\n\n def set_resp_reason(self, reason):\n self.flow.response.reason = reason\n signals.flow_change.send(self, flow = self.flow)\n\n def set_headers(self, fields, conn):\n conn.headers = Headers(fields)\n signals.flow_change.send(self, flow = self.flow)\n\n def set_query(self, lst, conn):\n conn.query = lst\n signals.flow_change.send(self, flow = self.flow)\n\n def set_path_components(self, lst, conn):\n conn.path_components = lst\n signals.flow_change.send(self, flow = self.flow)\n\n def set_form(self, lst, conn):\n conn.urlencoded_form = lst\n signals.flow_change.send(self, flow = self.flow)\n\n def edit_form(self, conn):\n self.master.view_grideditor(\n grideditor.URLEncodedFormEditor(\n self.master,\n conn.urlencoded_form.items(multi=True),\n self.set_form,\n conn\n )\n )\n\n def edit_form_confirm(self, key, conn):\n if key == \"y\":\n self.edit_form(conn)\n\n def set_cookies(self, lst, conn):\n conn.cookies = lst\n signals.flow_change.send(self, flow = self.flow)\n\n def set_setcookies(self, data, conn):\n conn.cookies = data\n signals.flow_change.send(self, flow = self.flow)\n\n def edit(self, part):\n if self.tab_offset == TAB_REQ:\n message = self.flow.request\n else:\n if not self.flow.response:\n self.flow.response = http.HTTPResponse.make(200, b\"\")\n message = self.flow.response\n\n self.flow.backup()\n if message == self.flow.request and part == \"c\":\n self.master.view_grideditor(\n 
grideditor.CookieEditor(\n self.master,\n message.cookies.items(multi=True),\n self.set_cookies,\n message\n )\n )\n if message == self.flow.response and part == \"c\":\n self.master.view_grideditor(\n grideditor.SetCookieEditor(\n self.master,\n message.cookies.items(multi=True),\n self.set_setcookies,\n message\n )\n )\n if part == \"r\":\n # Fix an issue caused by some editors when editing a\n # request/response body. Many editors make it hard to save a\n # file without a terminating newline on the last line. When\n # editing message bodies, this can cause problems. For now, I\n # just strip the newlines off the end of the body when we return\n # from an editor.\n c = self.master.spawn_editor(message.get_content(strict=False) or b\"\")\n message.content = c.rstrip(b\"\\n\")\n elif part == \"f\":\n if not message.urlencoded_form and message.raw_content:\n signals.status_prompt_onekey.send(\n prompt = \"Existing body is not a URL-encoded form. Clear and edit?\",\n keys = [\n (\"yes\", \"y\"),\n (\"no\", \"n\"),\n ],\n callback = self.edit_form_confirm,\n args = (message,)\n )\n else:\n self.edit_form(message)\n elif part == \"h\":\n self.master.view_grideditor(\n grideditor.HeaderEditor(\n self.master,\n message.headers.fields,\n self.set_headers,\n message\n )\n )\n elif part == \"p\":\n p = message.path_components\n self.master.view_grideditor(\n grideditor.PathEditor(\n self.master,\n p,\n self.set_path_components,\n message\n )\n )\n elif part == \"q\":\n self.master.view_grideditor(\n grideditor.QueryEditor(\n self.master,\n message.query.items(multi=True),\n self.set_query, message\n )\n )\n elif part == \"u\":\n signals.status_prompt.send(\n prompt = \"URL\",\n text = message.url,\n callback = self.set_url\n )\n elif part == \"m\" and message == self.flow.request:\n signals.status_prompt_onekey.send(\n prompt = \"Method\",\n keys = common.METHOD_OPTIONS,\n callback = self.edit_method\n )\n elif part == \"o\":\n signals.status_prompt.send(\n prompt = \"Code\",\n text = str(message.status_code),\n callback = self.set_resp_status_code\n )\n elif part == \"m\" and message == self.flow.response:\n signals.status_prompt.send(\n prompt = \"Message\",\n text = message.reason,\n callback = self.set_resp_reason\n )\n signals.flow_change.send(self, flow = self.flow)\n\n def view_flow(self, flow):\n signals.pop_view_state.send(self)\n self.master.view_flow(flow, self.tab_offset)\n\n def _view_nextprev_flow(self, idx, flow):\n if not self.view.inbounds(idx):\n signals.status_message.send(message=\"No more flows\")\n return\n self.view_flow(self.view[idx])\n\n def view_next_flow(self, flow):\n return self._view_nextprev_flow(self.view.index(flow) + 1, flow)\n\n def view_prev_flow(self, flow):\n return self._view_nextprev_flow(self.view.index(flow) - 1, flow)\n\n def change_this_display_mode(self, t):\n view = contentviews.get_by_shortcut(t)\n if view:\n self.view.settings[self.flow][(self.tab_offset, \"prettyview\")] = view.name\n else:\n self.view.settings[self.flow][(self.tab_offset, \"prettyview\")] = None\n signals.flow_change.send(self, flow=self.flow)\n\n def keypress(self, size, key):\n conn = None # type: Optional[Union[http.HTTPRequest, http.HTTPResponse]]\n if self.tab_offset == TAB_REQ:\n conn = self.flow.request\n elif self.tab_offset == TAB_RESP:\n conn = self.flow.response\n\n key = super().keypress(size, key)\n\n # Special case: Space moves over to the next flow.\n # We need to catch that before applying common.shortcuts()\n if key == \" \":\n self.view_next_flow(self.flow)\n 
return\n\n key = common.shortcuts(key)\n if key in (\"up\", \"down\", \"page up\", \"page down\"):\n # Pass scroll events to the wrapped widget\n self._w.keypress(size, key)\n elif key == \"a\":\n self.flow.resume()\n self.master.view.update(self.flow)\n elif key == \"A\":\n for f in self.view:\n if f.intercepted:\n f.resume()\n self.master.view.update(self.flow)\n elif key == \"d\":\n if self.flow.killable:\n self.flow.kill()\n self.view.remove(self.flow)\n if not self.view.focus.flow:\n self.master.view_flowlist()\n else:\n self.view_flow(self.view.focus.flow)\n elif key == \"D\":\n cp = self.flow.copy()\n self.master.view.add(cp)\n self.master.view.focus.flow = cp\n self.view_flow(cp)\n signals.status_message.send(message=\"Duplicated.\")\n elif key == \"p\":\n self.view_prev_flow(self.flow)\n elif key == \"r\":\n try:\n self.master.replay_request(self.flow)\n except exceptions.ReplayException as e:\n signals.add_log(\"Replay error: %s\" % e, \"warn\")\n signals.flow_change.send(self, flow = self.flow)\n elif key == \"V\":\n if self.flow.modified():\n self.flow.revert()\n signals.flow_change.send(self, flow = self.flow)\n signals.status_message.send(message=\"Reverted.\")\n else:\n signals.status_message.send(message=\"Flow not modified.\")\n elif key == \"W\":\n signals.status_prompt_path.send(\n prompt = \"Save this flow\",\n callback = self.master.save_one_flow,\n args = (self.flow,)\n )\n elif key == \"|\":\n signals.status_prompt_path.send(\n prompt = \"Send flow to script\",\n callback = self.master.run_script_once,\n args = (self.flow,)\n )\n elif key == \"e\":\n if self.tab_offset == TAB_REQ:\n signals.status_prompt_onekey.send(\n prompt=\"Edit request\",\n keys=(\n (\"cookies\", \"c\"),\n (\"query\", \"q\"),\n (\"path\", \"p\"),\n (\"url\", \"u\"),\n (\"header\", \"h\"),\n (\"form\", \"f\"),\n (\"raw body\", \"r\"),\n (\"method\", \"m\"),\n ),\n callback=self.edit\n )\n elif self.tab_offset == TAB_RESP:\n signals.status_prompt_onekey.send(\n prompt=\"Edit response\",\n keys=(\n (\"cookies\", \"c\"),\n (\"code\", \"o\"),\n (\"message\", \"m\"),\n (\"header\", \"h\"),\n (\"raw body\", \"r\"),\n ),\n callback=self.edit\n )\n else:\n signals.status_message.send(\n message=\"Tab to the request or response\",\n expire=1\n )\n elif key in set(\"bfgmxvzEC\") and not conn:\n signals.status_message.send(\n message = \"Tab to the request or response\",\n expire = 1\n )\n return\n elif key == \"b\":\n if self.tab_offset == TAB_REQ:\n common.ask_save_body(\"q\", self.flow)\n else:\n common.ask_save_body(\"s\", self.flow)\n elif key == \"f\":\n self.view.settings[self.flow][(self.tab_offset, \"fullcontents\")] = True\n signals.flow_change.send(self, flow = self.flow)\n signals.status_message.send(message=\"Loading all body data...\")\n elif key == \"m\":\n p = list(contentviews.view_prompts)\n p.insert(0, (\"Clear\", \"C\"))\n signals.status_prompt_onekey.send(\n self,\n prompt = \"Display mode\",\n keys = p,\n callback = self.change_this_display_mode\n )\n elif key == \"E\":\n if self.tab_offset == TAB_REQ:\n scope = \"q\"\n else:\n scope = \"s\"\n signals.status_prompt_onekey.send(\n self,\n prompt = \"Export to file\",\n keys = [(e[0], e[1]) for e in export.EXPORTERS],\n callback = common.export_to_clip_or_file,\n args = (scope, self.flow, common.ask_save_path)\n )\n elif key == \"C\":\n if self.tab_offset == TAB_REQ:\n scope = \"q\"\n else:\n scope = \"s\"\n signals.status_prompt_onekey.send(\n self,\n prompt = \"Export to clipboard\",\n keys = [(e[0], e[1]) for e in export.EXPORTERS],\n 
callback = common.export_to_clip_or_file,\n args = (scope, self.flow, common.copy_to_clipboard_or_prompt)\n )\n elif key == \"x\":\n conn.content = None\n signals.flow_change.send(self, flow=self.flow)\n elif key == \"v\":\n if conn.raw_content:\n t = conn.headers.get(\"content-type\")\n if \"EDITOR\" in os.environ or \"PAGER\" in os.environ:\n self.master.spawn_external_viewer(conn.get_content(strict=False), t)\n else:\n signals.status_message.send(\n message = \"Error! Set $EDITOR or $PAGER.\"\n )\n elif key == \"z\":\n self.flow.backup()\n e = conn.headers.get(\"content-encoding\", \"identity\")\n if e != \"identity\":\n try:\n conn.decode()\n except ValueError:\n signals.status_message.send(\n message = \"Could not decode - invalid data?\"\n )\n else:\n signals.status_prompt_onekey.send(\n prompt = \"Select encoding: \",\n keys = (\n (\"gzip\", \"z\"),\n (\"deflate\", \"d\"),\n (\"brotli\", \"b\"),\n ),\n callback = self.encode_callback,\n args = (conn,)\n )\n signals.flow_change.send(self, flow = self.flow)\n else:\n # Key is not handled here.\n return key\n\n def encode_callback(self, key, conn):\n encoding_map = {\n \"z\": \"gzip\",\n \"d\": \"deflate\",\n \"b\": \"brotli\",\n }\n conn.encode(encoding_map[key])\n signals.flow_change.send(self, flow = self.flow)\n",
"path": "mitmproxy/tools/console/flowview.py"
}
] | [
{
"content": "import math\nimport os\nimport sys\nfrom functools import lru_cache\nfrom typing import Optional, Union # noqa\n\nimport urwid\n\nfrom mitmproxy import contentviews\nfrom mitmproxy import exceptions\nfrom mitmproxy import export\nfrom mitmproxy import http\nfrom mitmproxy.net.http import Headers\nfrom mitmproxy.net.http import status_codes\nfrom mitmproxy.tools.console import common\nfrom mitmproxy.tools.console import flowdetailview\nfrom mitmproxy.tools.console import grideditor\nfrom mitmproxy.tools.console import searchable\nfrom mitmproxy.tools.console import signals\nfrom mitmproxy.tools.console import tabs\n\n\nclass SearchError(Exception):\n pass\n\n\ndef _mkhelp():\n text = []\n keys = [\n (\"A\", \"accept all intercepted flows\"),\n (\"a\", \"accept this intercepted flow\"),\n (\"b\", \"save request/response body\"),\n (\"C\", \"export flow to clipboard\"),\n (\"D\", \"duplicate flow\"),\n (\"d\", \"delete flow\"),\n (\"e\", \"edit request/response\"),\n (\"f\", \"load full body data\"),\n (\"m\", \"change body display mode for this entity\\n(default mode can be changed in the options)\"),\n (None,\n common.highlight_key(\"automatic\", \"a\") +\n [(\"text\", \": automatic detection\")]\n ),\n (None,\n common.highlight_key(\"hex\", \"e\") +\n [(\"text\", \": Hex\")]\n ),\n (None,\n common.highlight_key(\"html\", \"h\") +\n [(\"text\", \": HTML\")]\n ),\n (None,\n common.highlight_key(\"image\", \"i\") +\n [(\"text\", \": Image\")]\n ),\n (None,\n common.highlight_key(\"javascript\", \"j\") +\n [(\"text\", \": JavaScript\")]\n ),\n (None,\n common.highlight_key(\"json\", \"s\") +\n [(\"text\", \": JSON\")]\n ),\n (None,\n common.highlight_key(\"urlencoded\", \"u\") +\n [(\"text\", \": URL-encoded data\")]\n ),\n (None,\n common.highlight_key(\"raw\", \"r\") +\n [(\"text\", \": raw data\")]\n ),\n (None,\n common.highlight_key(\"xml\", \"x\") +\n [(\"text\", \": XML\")]\n ),\n (\"E\", \"export flow to file\"),\n (\"r\", \"replay request\"),\n (\"V\", \"revert changes to request\"),\n (\"v\", \"view body in external viewer\"),\n (\"w\", \"save all flows matching current view filter\"),\n (\"W\", \"save this flow\"),\n (\"x\", \"delete body\"),\n (\"z\", \"encode/decode a request/response\"),\n (\"tab\", \"next tab\"),\n (\"h, l\", \"previous tab, next tab\"),\n (\"space\", \"next flow\"),\n (\"|\", \"run script on this flow\"),\n (\"/\", \"search (case sensitive)\"),\n (\"n\", \"repeat search forward\"),\n (\"N\", \"repeat search backwards\"),\n ]\n text.extend(common.format_keyvals(keys, key=\"key\", val=\"text\", indent=4))\n return text\n\n\nhelp_context = _mkhelp()\n\nfooter = [\n ('heading_key', \"?\"), \":help \",\n ('heading_key', \"q\"), \":back \",\n]\n\n\nclass FlowViewHeader(urwid.WidgetWrap):\n\n def __init__(self, master: \"mitmproxy.console.master.ConsoleMaster\", f: http.HTTPFlow):\n self.master = master\n self.flow = f\n self._w = common.format_flow(\n f,\n False,\n extended=True,\n hostheader=self.master.options.showhost\n )\n signals.flow_change.connect(self.sig_flow_change)\n\n def sig_flow_change(self, sender, flow):\n if flow == self.flow:\n self._w = common.format_flow(\n flow,\n False,\n extended=True,\n hostheader=self.master.options.showhost\n )\n\n\nTAB_REQ = 0\nTAB_RESP = 1\n\n\nclass FlowView(tabs.Tabs):\n highlight_color = \"focusfield\"\n\n def __init__(self, master, view, flow, tab_offset):\n self.master, self.view, self.flow = master, view, flow\n super().__init__(\n [\n (self.tab_request, self.view_request),\n (self.tab_response, 
self.view_response),\n (self.tab_details, self.view_details),\n ],\n tab_offset\n )\n\n self.show()\n self.last_displayed_body = None\n signals.flow_change.connect(self.sig_flow_change)\n\n def tab_request(self):\n if self.flow.intercepted and not self.flow.response:\n return \"Request intercepted\"\n else:\n return \"Request\"\n\n def tab_response(self):\n if self.flow.intercepted and self.flow.response:\n return \"Response intercepted\"\n else:\n return \"Response\"\n\n def tab_details(self):\n return \"Detail\"\n\n def view_request(self):\n return self.conn_text(self.flow.request)\n\n def view_response(self):\n return self.conn_text(self.flow.response)\n\n def view_details(self):\n return flowdetailview.flowdetails(self.view, self.flow)\n\n def sig_flow_change(self, sender, flow):\n if flow == self.flow:\n self.show()\n\n def content_view(self, viewmode, message):\n if message.raw_content is None:\n msg, body = \"\", [urwid.Text([(\"error\", \"[content missing]\")])]\n return msg, body\n else:\n s = self.view.settings[self.flow]\n full = s.get((self.tab_offset, \"fullcontents\"), False)\n if full:\n limit = sys.maxsize\n else:\n limit = contentviews.VIEW_CUTOFF\n\n flow_modify_cache_invalidation = hash((\n message.raw_content,\n message.headers.fields,\n getattr(message, \"path\", None),\n ))\n # we need to pass the message off-band because it's not hashable\n self._get_content_view_message = message\n return self._get_content_view(viewmode, limit, flow_modify_cache_invalidation)\n\n @lru_cache(maxsize=200)\n def _get_content_view(self, viewmode, max_lines, _):\n message = self._get_content_view_message\n self._get_content_view_message = None\n description, lines, error = contentviews.get_message_content_view(\n viewmode, message\n )\n if error:\n signals.add_log(error, \"error\")\n # Give hint that you have to tab for the response.\n if description == \"No content\" and isinstance(message, http.HTTPRequest):\n description = \"No request content (press tab to view response)\"\n\n # If the users has a wide terminal, he gets fewer lines; this should not be an issue.\n chars_per_line = 80\n max_chars = max_lines * chars_per_line\n total_chars = 0\n text_objects = []\n for line in lines:\n txt = []\n for (style, text) in line:\n if total_chars + len(text) > max_chars:\n text = text[:max_chars - total_chars]\n txt.append((style, text))\n total_chars += len(text)\n if total_chars == max_chars:\n break\n\n # round up to the next line.\n total_chars = int(math.ceil(total_chars / chars_per_line) * chars_per_line)\n\n text_objects.append(urwid.Text(txt))\n if total_chars == max_chars:\n text_objects.append(urwid.Text([\n (\"highlight\", \"Stopped displaying data after %d lines. 
Press \" % max_lines),\n (\"key\", \"f\"),\n (\"highlight\", \" to load all data.\")\n ]))\n break\n\n return description, text_objects\n\n def viewmode_get(self):\n override = self.view.settings[self.flow].get(\n (self.tab_offset, \"prettyview\"),\n None\n )\n return self.master.options.default_contentview if override is None else override\n\n def conn_text(self, conn):\n if conn:\n txt = common.format_keyvals(\n [(h + \":\", v) for (h, v) in conn.headers.items(multi=True)],\n key = \"header\",\n val = \"text\"\n )\n viewmode = self.viewmode_get()\n msg, body = self.content_view(viewmode, conn)\n\n cols = [\n urwid.Text(\n [\n (\"heading\", msg),\n ]\n ),\n urwid.Text(\n [\n \" \",\n ('heading', \"[\"),\n ('heading_key', \"m\"),\n ('heading', (\":%s]\" % viewmode)),\n ],\n align=\"right\"\n )\n ]\n title = urwid.AttrWrap(urwid.Columns(cols), \"heading\")\n\n txt.append(title)\n txt.extend(body)\n else:\n txt = [\n urwid.Text(\"\"),\n urwid.Text(\n [\n (\"highlight\", \"No response. Press \"),\n (\"key\", \"e\"),\n (\"highlight\", \" and edit any aspect to add one.\"),\n ]\n )\n ]\n return searchable.Searchable(self.view, txt)\n\n def set_method_raw(self, m):\n if m:\n self.flow.request.method = m\n signals.flow_change.send(self, flow = self.flow)\n\n def edit_method(self, m):\n if m == \"e\":\n signals.status_prompt.send(\n prompt = \"Method\",\n text = self.flow.request.method,\n callback = self.set_method_raw\n )\n else:\n for i in common.METHOD_OPTIONS:\n if i[1] == m:\n self.flow.request.method = i[0].upper()\n signals.flow_change.send(self, flow = self.flow)\n\n def set_url(self, url):\n request = self.flow.request\n try:\n request.url = str(url)\n except ValueError:\n return \"Invalid URL.\"\n signals.flow_change.send(self, flow = self.flow)\n\n def set_resp_status_code(self, status_code):\n try:\n status_code = int(status_code)\n except ValueError:\n return None\n self.flow.response.status_code = status_code\n if status_code in status_codes.RESPONSES:\n self.flow.response.reason = status_codes.RESPONSES[status_code]\n signals.flow_change.send(self, flow = self.flow)\n\n def set_resp_reason(self, reason):\n self.flow.response.reason = reason\n signals.flow_change.send(self, flow = self.flow)\n\n def set_headers(self, fields, conn):\n conn.headers = Headers(fields)\n signals.flow_change.send(self, flow = self.flow)\n\n def set_query(self, lst, conn):\n conn.query = lst\n signals.flow_change.send(self, flow = self.flow)\n\n def set_path_components(self, lst, conn):\n conn.path_components = lst\n signals.flow_change.send(self, flow = self.flow)\n\n def set_form(self, lst, conn):\n conn.urlencoded_form = lst\n signals.flow_change.send(self, flow = self.flow)\n\n def edit_form(self, conn):\n self.master.view_grideditor(\n grideditor.URLEncodedFormEditor(\n self.master,\n conn.urlencoded_form.items(multi=True),\n self.set_form,\n conn\n )\n )\n\n def edit_form_confirm(self, key, conn):\n if key == \"y\":\n self.edit_form(conn)\n\n def set_cookies(self, lst, conn):\n conn.cookies = lst\n signals.flow_change.send(self, flow = self.flow)\n\n def set_setcookies(self, data, conn):\n conn.cookies = data\n signals.flow_change.send(self, flow = self.flow)\n\n def edit(self, part):\n if self.tab_offset == TAB_REQ:\n message = self.flow.request\n else:\n if not self.flow.response:\n self.flow.response = http.HTTPResponse.make(200, b\"\")\n message = self.flow.response\n\n self.flow.backup()\n if message == self.flow.request and part == \"c\":\n self.master.view_grideditor(\n 
grideditor.CookieEditor(\n self.master,\n message.cookies.items(multi=True),\n self.set_cookies,\n message\n )\n )\n if message == self.flow.response and part == \"c\":\n self.master.view_grideditor(\n grideditor.SetCookieEditor(\n self.master,\n message.cookies.items(multi=True),\n self.set_setcookies,\n message\n )\n )\n if part == \"r\":\n # Fix an issue caused by some editors when editing a\n # request/response body. Many editors make it hard to save a\n # file without a terminating newline on the last line. When\n # editing message bodies, this can cause problems. For now, I\n # just strip the newlines off the end of the body when we return\n # from an editor.\n c = self.master.spawn_editor(message.get_content(strict=False) or b\"\")\n message.content = c.rstrip(b\"\\n\")\n elif part == \"f\":\n if not message.urlencoded_form and message.raw_content:\n signals.status_prompt_onekey.send(\n prompt = \"Existing body is not a URL-encoded form. Clear and edit?\",\n keys = [\n (\"yes\", \"y\"),\n (\"no\", \"n\"),\n ],\n callback = self.edit_form_confirm,\n args = (message,)\n )\n else:\n self.edit_form(message)\n elif part == \"h\":\n self.master.view_grideditor(\n grideditor.HeaderEditor(\n self.master,\n message.headers.fields,\n self.set_headers,\n message\n )\n )\n elif part == \"p\":\n p = message.path_components\n self.master.view_grideditor(\n grideditor.PathEditor(\n self.master,\n p,\n self.set_path_components,\n message\n )\n )\n elif part == \"q\":\n self.master.view_grideditor(\n grideditor.QueryEditor(\n self.master,\n message.query.items(multi=True),\n self.set_query, message\n )\n )\n elif part == \"u\":\n signals.status_prompt.send(\n prompt = \"URL\",\n text = message.url,\n callback = self.set_url\n )\n elif part == \"m\" and message == self.flow.request:\n signals.status_prompt_onekey.send(\n prompt = \"Method\",\n keys = common.METHOD_OPTIONS,\n callback = self.edit_method\n )\n elif part == \"o\":\n signals.status_prompt.send(\n prompt = \"Code\",\n text = str(message.status_code),\n callback = self.set_resp_status_code\n )\n elif part == \"m\" and message == self.flow.response:\n signals.status_prompt.send(\n prompt = \"Message\",\n text = message.reason,\n callback = self.set_resp_reason\n )\n signals.flow_change.send(self, flow = self.flow)\n\n def view_flow(self, flow):\n signals.pop_view_state.send(self)\n self.master.view_flow(flow, self.tab_offset)\n\n def _view_nextprev_flow(self, idx, flow):\n if not self.view.inbounds(idx):\n signals.status_message.send(message=\"No more flows\")\n return\n self.view_flow(self.view[idx])\n\n def view_next_flow(self, flow):\n return self._view_nextprev_flow(self.view.index(flow) + 1, flow)\n\n def view_prev_flow(self, flow):\n return self._view_nextprev_flow(self.view.index(flow) - 1, flow)\n\n def change_this_display_mode(self, t):\n view = contentviews.get_by_shortcut(t)\n if view:\n self.view.settings[self.flow][(self.tab_offset, \"prettyview\")] = view.name\n else:\n self.view.settings[self.flow][(self.tab_offset, \"prettyview\")] = None\n signals.flow_change.send(self, flow=self.flow)\n\n def keypress(self, size, key):\n conn = None # type: Optional[Union[http.HTTPRequest, http.HTTPResponse]]\n if self.tab_offset == TAB_REQ:\n conn = self.flow.request\n elif self.tab_offset == TAB_RESP:\n conn = self.flow.response\n\n key = super().keypress(size, key)\n\n # Special case: Space moves over to the next flow.\n # We need to catch that before applying common.shortcuts()\n if key == \" \":\n self.view_next_flow(self.flow)\n 
return\n\n key = common.shortcuts(key)\n if key in (\"up\", \"down\", \"page up\", \"page down\"):\n # Pass scroll events to the wrapped widget\n self._w.keypress(size, key)\n elif key == \"a\":\n self.flow.resume()\n self.master.view.update(self.flow)\n elif key == \"A\":\n for f in self.view:\n if f.intercepted:\n f.resume()\n self.master.view.update(self.flow)\n elif key == \"d\":\n if self.flow.killable:\n self.flow.kill()\n self.view.remove(self.flow)\n if not self.view.focus.flow:\n self.master.view_flowlist()\n else:\n self.view_flow(self.view.focus.flow)\n elif key == \"D\":\n cp = self.flow.copy()\n self.master.view.add(cp)\n self.master.view.focus.flow = cp\n self.view_flow(cp)\n signals.status_message.send(message=\"Duplicated.\")\n elif key == \"p\":\n self.view_prev_flow(self.flow)\n elif key == \"r\":\n try:\n self.master.replay_request(self.flow)\n except exceptions.ReplayException as e:\n signals.add_log(\"Replay error: %s\" % e, \"warn\")\n signals.flow_change.send(self, flow = self.flow)\n elif key == \"V\":\n if self.flow.modified():\n self.flow.revert()\n signals.flow_change.send(self, flow = self.flow)\n signals.status_message.send(message=\"Reverted.\")\n else:\n signals.status_message.send(message=\"Flow not modified.\")\n elif key == \"W\":\n signals.status_prompt_path.send(\n prompt = \"Save this flow\",\n callback = self.master.save_one_flow,\n args = (self.flow,)\n )\n elif key == \"|\":\n signals.status_prompt_path.send(\n prompt = \"Send flow to script\",\n callback = self.master.run_script_once,\n args = (self.flow,)\n )\n elif key == \"e\":\n if self.tab_offset == TAB_REQ:\n signals.status_prompt_onekey.send(\n prompt=\"Edit request\",\n keys=(\n (\"cookies\", \"c\"),\n (\"query\", \"q\"),\n (\"path\", \"p\"),\n (\"url\", \"u\"),\n (\"header\", \"h\"),\n (\"form\", \"f\"),\n (\"raw body\", \"r\"),\n (\"method\", \"m\"),\n ),\n callback=self.edit\n )\n elif self.tab_offset == TAB_RESP:\n signals.status_prompt_onekey.send(\n prompt=\"Edit response\",\n keys=(\n (\"cookies\", \"c\"),\n (\"code\", \"o\"),\n (\"message\", \"m\"),\n (\"header\", \"h\"),\n (\"raw body\", \"r\"),\n ),\n callback=self.edit\n )\n else:\n signals.status_message.send(\n message=\"Tab to the request or response\",\n expire=1\n )\n elif key in set(\"bfgmxvzEC\") and not conn:\n signals.status_message.send(\n message = \"Tab to the request or response\",\n expire = 1\n )\n return\n elif key == \"b\":\n if self.tab_offset == TAB_REQ:\n common.ask_save_body(\"q\", self.flow)\n else:\n common.ask_save_body(\"s\", self.flow)\n elif key == \"f\":\n self.view.settings[self.flow][(self.tab_offset, \"fullcontents\")] = True\n signals.flow_change.send(self, flow = self.flow)\n signals.status_message.send(message=\"Loading all body data...\")\n elif key == \"m\":\n p = list(contentviews.view_prompts)\n p.insert(0, (\"Clear\", \"C\"))\n signals.status_prompt_onekey.send(\n self,\n prompt = \"Display mode\",\n keys = p,\n callback = self.change_this_display_mode\n )\n elif key == \"E\":\n if self.tab_offset == TAB_REQ:\n scope = \"q\"\n else:\n scope = \"s\"\n signals.status_prompt_onekey.send(\n self,\n prompt = \"Export to file\",\n keys = [(e[0], e[1]) for e in export.EXPORTERS],\n callback = common.export_to_clip_or_file,\n args = (scope, self.flow, common.ask_save_path)\n )\n elif key == \"C\":\n if self.tab_offset == TAB_REQ:\n scope = \"q\"\n else:\n scope = \"s\"\n signals.status_prompt_onekey.send(\n self,\n prompt = \"Export to clipboard\",\n keys = [(e[0], e[1]) for e in export.EXPORTERS],\n 
callback = common.export_to_clip_or_file,\n args = (scope, self.flow, common.copy_to_clipboard_or_prompt)\n )\n elif key == \"x\":\n conn.content = None\n signals.flow_change.send(self, flow=self.flow)\n elif key == \"v\":\n if conn.raw_content:\n t = conn.headers.get(\"content-type\")\n if \"EDITOR\" in os.environ or \"PAGER\" in os.environ:\n self.master.spawn_external_viewer(conn.get_content(strict=False), t)\n else:\n signals.status_message.send(\n message = \"Error! Set $EDITOR or $PAGER.\"\n )\n elif key == \"z\":\n self.flow.backup()\n e = conn.headers.get(\"content-encoding\", \"identity\")\n if e != \"identity\":\n try:\n conn.decode()\n except ValueError:\n signals.status_message.send(\n message = \"Could not decode - invalid data?\"\n )\n else:\n signals.status_prompt_onekey.send(\n prompt = \"Select encoding: \",\n keys = (\n (\"gzip\", \"z\"),\n (\"deflate\", \"d\"),\n (\"brotli\", \"b\"),\n ),\n callback = self.encode_callback,\n args = (conn,)\n )\n signals.flow_change.send(self, flow = self.flow)\n else:\n # Key is not handled here.\n return key\n\n def encode_callback(self, key, conn):\n encoding_map = {\n \"z\": \"gzip\",\n \"d\": \"deflate\",\n \"b\": \"br\",\n }\n conn.encode(encoding_map[key])\n signals.flow_change.send(self, flow = self.flow)\n",
"path": "mitmproxy/tools/console/flowview.py"
}
] | diff --git a/mitmproxy/tools/console/flowview.py b/mitmproxy/tools/console/flowview.py
index a97a9b3156..90cca1c5ac 100644
--- a/mitmproxy/tools/console/flowview.py
+++ b/mitmproxy/tools/console/flowview.py
@@ -681,7 +681,7 @@ def encode_callback(self, key, conn):
encoding_map = {
"z": "gzip",
"d": "deflate",
- "b": "brotli",
+ "b": "br",
}
conn.encode(encoding_map[key])
signals.flow_change.send(self, flow = self.flow)
|
getnikola__nikola-3437 | The post_list plugin prevents 'else' functionality in templates
### Environment
**Python Version:**
3.7.8
**Nikola Version:**
8.1.1
**Operating System:**
Mac OS Catalina (10.15.5) / Ubuntu 19.10
### Description:
In the default template for the `post-list` plugin, namely `post_list_directive.tmpl`
```python
{% if posts %}
<ul class="post-list">
...
```
This suggests that the template may sometimes be called with no posts.
In `list_post.tmpl`, which you can also use with `post-list`, we have this:
```jinja
{% if posts %}
<ul class="postlist">
{% for post in posts %}
<li><time class="listdate" datetime="{{ post.formatted_date('webiso') }}" title="{{ post.formatted_date(date_format)|e }}">{{ post.formatted_date(date_format)|e }}</time> <a href="{{ post.permalink() }}" class="listtitle">{{ post.title()|e }}</a></li>
{% endfor %}
</ul>
{% else %}
<p>{{ messages("No posts found.") }}</p>
{% endif %}
```
This template is clearly expected to handle the case where there are no posts.
However, when the plugin returns no posts, the `else` block is not executed. In fact, it appears that the template is not called at all when no posts are returned.
This is because of these lines in `post_list.py`, at around lines 221-222:
```python
if not posts:
return '', []
```
Because these empty values are returned early, rendering never reaches the template. Removing those lines fixes the problem and allows the template's `else` clause to work.
I can't see that this change breaks anything else, so I'll submit a pull request for it, unless someone has an objection.
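A minimal, self-contained sketch (plain Jinja2, not Nikola's actual rendering pipeline) of the behaviour described above: as long as the handler actually renders the template, an empty `posts` list reaches the `{% else %}` branch; returning early before rendering is what suppresses it.
```python
# Illustrative only: a stand-in for the shortcode handler, not Nikola's API.
import jinja2

template = jinja2.Template(
    "{% if posts %}{{ posts|length }} post(s){% else %}No posts found.{% endif %}"
)

def render_post_list(posts):
    # An early `if not posts: return ''` here would skip the template entirely,
    # which is the behaviour reported above.
    return template.render(posts=posts)

print(render_post_list([]))          # -> No posts found.
print(render_post_list(["a", "b"]))  # -> 2 post(s)
```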
| [
{
"content": "# -*- coding: utf-8 -*-\n\n# Copyright © 2013-2020 Udo Spallek, Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Post list shortcode.\"\"\"\n\n\nimport operator\nimport os\nimport uuid\n\nimport natsort\n\nfrom nikola import utils\nfrom nikola.packages.datecond import date_in_range\nfrom nikola.plugin_categories import ShortcodePlugin\n\n\nclass PostListShortcode(ShortcodePlugin):\n \"\"\"Provide a shortcode to create a list of posts.\n\n Post List\n =========\n :Directive Arguments: None.\n :Directive Options: lang, start, stop, reverse, sort, date, tags, categories, sections, slugs, post_type, template, id\n :Directive Content: None.\n\n The posts appearing in the list can be filtered by options.\n *List slicing* is provided with the *start*, *stop* and *reverse* options.\n\n The following not required options are recognized:\n\n ``start`` : integer\n The index of the first post to show.\n A negative value like ``-3`` will show the *last* three posts in the\n post-list.\n Defaults to None.\n\n ``stop`` : integer\n The index of the last post to show.\n A value negative value like ``-1`` will show every post, but not the\n *last* in the post-list.\n Defaults to None.\n\n ``reverse`` : flag\n Reverse the order of the post-list.\n Defaults is to not reverse the order of posts.\n\n ``sort`` : string\n Sort post list by one of each post's attributes, usually ``title`` or a\n custom ``priority``. Defaults to None (chronological sorting).\n\n ``date`` : string\n Show posts that match date range specified by this option. 
Format:\n\n * comma-separated clauses (AND)\n * clause: attribute comparison_operator value (spaces optional)\n * attribute: year, month, day, hour, month, second, weekday, isoweekday; or empty for full datetime\n * comparison_operator: == != <= >= < >\n * value: integer, 'now', 'today', or dateutil-compatible date input\n\n ``tags`` : string [, string...]\n Filter posts to show only posts having at least one of the ``tags``.\n Defaults to None.\n\n ``require_all_tags`` : flag\n Change tag filter behaviour to show only posts that have all specified ``tags``.\n Defaults to False.\n\n ``categories`` : string [, string...]\n Filter posts to show only posts having one of the ``categories``.\n Defaults to None.\n\n ``sections`` : string [, string...]\n Filter posts to show only posts having one of the ``sections``.\n Defaults to None.\n\n ``slugs`` : string [, string...]\n Filter posts to show only posts having at least one of the ``slugs``.\n Defaults to None.\n\n ``post_type`` (or ``type``) : string\n Show only ``posts``, ``pages`` or ``all``.\n Replaces ``all``. Defaults to ``posts``.\n\n ``lang`` : string\n The language of post *titles* and *links*.\n Defaults to default language.\n\n ``template`` : string\n The name of an alternative template to render the post-list.\n Defaults to ``post_list_directive.tmpl``\n\n ``id`` : string\n A manual id for the post list.\n Defaults to a random name composed by 'post_list_' + uuid.uuid4().hex.\n \"\"\"\n\n name = \"post_list\"\n\n def set_site(self, site):\n \"\"\"Set the site.\"\"\"\n super().set_site(site)\n site.register_shortcode('post-list', self.handler)\n\n def handler(self, start=None, stop=None, reverse=False, tags=None, require_all_tags=False, categories=None,\n sections=None, slugs=None, post_type='post', type=False,\n lang=None, template='post_list_directive.tmpl', sort=None,\n id=None, data=None, state=None, site=None, date=None, filename=None, post=None):\n \"\"\"Generate HTML for post-list.\"\"\"\n if lang is None:\n lang = utils.LocaleBorg().current_lang\n if site.invariant: # for testing purposes\n post_list_id = id or 'post_list_' + 'fixedvaluethatisnotauuid'\n else:\n post_list_id = id or 'post_list_' + uuid.uuid4().hex\n\n # Get post from filename if available\n if filename:\n self_post = site.post_per_input_file.get(filename)\n else:\n self_post = None\n\n if self_post:\n self_post.register_depfile(\"####MAGIC####TIMELINE\", lang=lang)\n\n # If we get strings for start/stop, make them integers\n if start is not None:\n start = int(start)\n if stop is not None:\n stop = int(stop)\n\n # Parse tags/categories/sections/slugs (input is strings)\n categories = [c.strip().lower() for c in categories.split(',')] if categories else []\n sections = [s.strip().lower() for s in sections.split(',')] if sections else []\n slugs = [s.strip() for s in slugs.split(',')] if slugs else []\n\n filtered_timeline = []\n posts = []\n step = None if reverse is False else -1\n\n if type is not False:\n post_type = type\n\n if post_type == 'page' or post_type == 'pages':\n timeline = [p for p in site.timeline if not p.use_in_feeds]\n elif post_type == 'all':\n timeline = [p for p in site.timeline]\n else: # post\n timeline = [p for p in site.timeline if p.use_in_feeds]\n\n # self_post should be removed from timeline because this is redundant\n timeline = [p for p in timeline if p.source_path != filename]\n\n if categories:\n timeline = [p for p in timeline if p.meta('category', lang=lang).lower() in categories]\n\n if sections:\n timeline = [p for p in 
timeline if p.section_name(lang).lower() in sections]\n\n if tags:\n tags = {t.strip().lower() for t in tags.split(',')}\n if require_all_tags:\n compare = set.issubset\n else:\n compare = operator.and_\n for post in timeline:\n post_tags = {t.lower() for t in post.tags}\n if compare(tags, post_tags):\n filtered_timeline.append(post)\n else:\n filtered_timeline = timeline\n\n if sort:\n filtered_timeline = natsort.natsorted(filtered_timeline, key=lambda post: post.meta[lang][sort], alg=natsort.ns.F | natsort.ns.IC)\n\n if date:\n _now = utils.current_time()\n filtered_timeline = [p for p in filtered_timeline if date_in_range(utils.html_unescape(date), p.date, now=_now)]\n\n for post in filtered_timeline[start:stop:step]:\n if slugs:\n cont = True\n for slug in slugs:\n if slug == post.meta('slug'):\n cont = False\n\n if cont:\n continue\n\n bp = post.translated_base_path(lang)\n if os.path.exists(bp) and state:\n state.document.settings.record_dependencies.add(bp)\n elif os.path.exists(bp) and self_post:\n self_post.register_depfile(bp, lang=lang)\n\n posts += [post]\n\n if not posts:\n return '', []\n\n template_deps = site.template_system.template_deps(template)\n if state:\n # Register template as a dependency (Issue #2391)\n for d in template_deps:\n state.document.settings.record_dependencies.add(d)\n elif self_post:\n for d in template_deps:\n self_post.register_depfile(d, lang=lang)\n\n template_data = {\n 'lang': lang,\n 'posts': posts,\n # Need to provide str, not TranslatableSetting (Issue #2104)\n 'date_format': site.GLOBAL_CONTEXT.get('date_format')[lang],\n 'post_list_id': post_list_id,\n 'messages': site.MESSAGES,\n '_link': site.link,\n }\n output = site.template_system.render_template(\n template, None, template_data)\n return output, template_deps\n\n\n# Request file name from shortcode (Issue #2412)\nPostListShortcode.handler.nikola_shortcode_pass_filename = True\n",
"path": "nikola/plugins/shortcode/post_list.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n\n# Copyright © 2013-2020 Udo Spallek, Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Post list shortcode.\"\"\"\n\n\nimport operator\nimport os\nimport uuid\n\nimport natsort\n\nfrom nikola import utils\nfrom nikola.packages.datecond import date_in_range\nfrom nikola.plugin_categories import ShortcodePlugin\n\n\nclass PostListShortcode(ShortcodePlugin):\n \"\"\"Provide a shortcode to create a list of posts.\n\n Post List\n =========\n :Directive Arguments: None.\n :Directive Options: lang, start, stop, reverse, sort, date, tags, categories, sections, slugs, post_type, template, id\n :Directive Content: None.\n\n The posts appearing in the list can be filtered by options.\n *List slicing* is provided with the *start*, *stop* and *reverse* options.\n\n The following not required options are recognized:\n\n ``start`` : integer\n The index of the first post to show.\n A negative value like ``-3`` will show the *last* three posts in the\n post-list.\n Defaults to None.\n\n ``stop`` : integer\n The index of the last post to show.\n A value negative value like ``-1`` will show every post, but not the\n *last* in the post-list.\n Defaults to None.\n\n ``reverse`` : flag\n Reverse the order of the post-list.\n Defaults is to not reverse the order of posts.\n\n ``sort`` : string\n Sort post list by one of each post's attributes, usually ``title`` or a\n custom ``priority``. Defaults to None (chronological sorting).\n\n ``date`` : string\n Show posts that match date range specified by this option. 
Format:\n\n * comma-separated clauses (AND)\n * clause: attribute comparison_operator value (spaces optional)\n * attribute: year, month, day, hour, month, second, weekday, isoweekday; or empty for full datetime\n * comparison_operator: == != <= >= < >\n * value: integer, 'now', 'today', or dateutil-compatible date input\n\n ``tags`` : string [, string...]\n Filter posts to show only posts having at least one of the ``tags``.\n Defaults to None.\n\n ``require_all_tags`` : flag\n Change tag filter behaviour to show only posts that have all specified ``tags``.\n Defaults to False.\n\n ``categories`` : string [, string...]\n Filter posts to show only posts having one of the ``categories``.\n Defaults to None.\n\n ``sections`` : string [, string...]\n Filter posts to show only posts having one of the ``sections``.\n Defaults to None.\n\n ``slugs`` : string [, string...]\n Filter posts to show only posts having at least one of the ``slugs``.\n Defaults to None.\n\n ``post_type`` (or ``type``) : string\n Show only ``posts``, ``pages`` or ``all``.\n Replaces ``all``. Defaults to ``posts``.\n\n ``lang`` : string\n The language of post *titles* and *links*.\n Defaults to default language.\n\n ``template`` : string\n The name of an alternative template to render the post-list.\n Defaults to ``post_list_directive.tmpl``\n\n ``id`` : string\n A manual id for the post list.\n Defaults to a random name composed by 'post_list_' + uuid.uuid4().hex.\n \"\"\"\n\n name = \"post_list\"\n\n def set_site(self, site):\n \"\"\"Set the site.\"\"\"\n super().set_site(site)\n site.register_shortcode('post-list', self.handler)\n\n def handler(self, start=None, stop=None, reverse=False, tags=None, require_all_tags=False, categories=None,\n sections=None, slugs=None, post_type='post', type=False,\n lang=None, template='post_list_directive.tmpl', sort=None,\n id=None, data=None, state=None, site=None, date=None, filename=None, post=None):\n \"\"\"Generate HTML for post-list.\"\"\"\n if lang is None:\n lang = utils.LocaleBorg().current_lang\n if site.invariant: # for testing purposes\n post_list_id = id or 'post_list_' + 'fixedvaluethatisnotauuid'\n else:\n post_list_id = id or 'post_list_' + uuid.uuid4().hex\n\n # Get post from filename if available\n if filename:\n self_post = site.post_per_input_file.get(filename)\n else:\n self_post = None\n\n if self_post:\n self_post.register_depfile(\"####MAGIC####TIMELINE\", lang=lang)\n\n # If we get strings for start/stop, make them integers\n if start is not None:\n start = int(start)\n if stop is not None:\n stop = int(stop)\n\n # Parse tags/categories/sections/slugs (input is strings)\n categories = [c.strip().lower() for c in categories.split(',')] if categories else []\n sections = [s.strip().lower() for s in sections.split(',')] if sections else []\n slugs = [s.strip() for s in slugs.split(',')] if slugs else []\n\n filtered_timeline = []\n posts = []\n step = None if reverse is False else -1\n\n if type is not False:\n post_type = type\n\n if post_type == 'page' or post_type == 'pages':\n timeline = [p for p in site.timeline if not p.use_in_feeds]\n elif post_type == 'all':\n timeline = [p for p in site.timeline]\n else: # post\n timeline = [p for p in site.timeline if p.use_in_feeds]\n\n # self_post should be removed from timeline because this is redundant\n timeline = [p for p in timeline if p.source_path != filename]\n\n if categories:\n timeline = [p for p in timeline if p.meta('category', lang=lang).lower() in categories]\n\n if sections:\n timeline = [p for p in 
timeline if p.section_name(lang).lower() in sections]\n\n if tags:\n tags = {t.strip().lower() for t in tags.split(',')}\n if require_all_tags:\n compare = set.issubset\n else:\n compare = operator.and_\n for post in timeline:\n post_tags = {t.lower() for t in post.tags}\n if compare(tags, post_tags):\n filtered_timeline.append(post)\n else:\n filtered_timeline = timeline\n\n if sort:\n filtered_timeline = natsort.natsorted(filtered_timeline, key=lambda post: post.meta[lang][sort], alg=natsort.ns.F | natsort.ns.IC)\n\n if date:\n _now = utils.current_time()\n filtered_timeline = [p for p in filtered_timeline if date_in_range(utils.html_unescape(date), p.date, now=_now)]\n\n for post in filtered_timeline[start:stop:step]:\n if slugs:\n cont = True\n for slug in slugs:\n if slug == post.meta('slug'):\n cont = False\n\n if cont:\n continue\n\n bp = post.translated_base_path(lang)\n if os.path.exists(bp) and state:\n state.document.settings.record_dependencies.add(bp)\n elif os.path.exists(bp) and self_post:\n self_post.register_depfile(bp, lang=lang)\n\n posts += [post]\n\n template_deps = site.template_system.template_deps(template)\n if state:\n # Register template as a dependency (Issue #2391)\n for d in template_deps:\n state.document.settings.record_dependencies.add(d)\n elif self_post:\n for d in template_deps:\n self_post.register_depfile(d, lang=lang)\n\n template_data = {\n 'lang': lang,\n 'posts': posts,\n # Need to provide str, not TranslatableSetting (Issue #2104)\n 'date_format': site.GLOBAL_CONTEXT.get('date_format')[lang],\n 'post_list_id': post_list_id,\n 'messages': site.MESSAGES,\n '_link': site.link,\n }\n output = site.template_system.render_template(\n template, None, template_data)\n return output, template_deps\n\n\n# Request file name from shortcode (Issue #2412)\nPostListShortcode.handler.nikola_shortcode_pass_filename = True\n",
"path": "nikola/plugins/shortcode/post_list.py"
}
] | diff --git a/CHANGES.txt b/CHANGES.txt
index 6d599abaa6..dd742c785e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -9,6 +9,8 @@ Features
Bugfixes
--------
+* Allow else clause in post-list plugin. (Issue #3436)
+
New in v8.1.1
=============
diff --git a/nikola/plugins/shortcode/post_list.py b/nikola/plugins/shortcode/post_list.py
index b71e523626..462984a576 100644
--- a/nikola/plugins/shortcode/post_list.py
+++ b/nikola/plugins/shortcode/post_list.py
@@ -218,9 +218,6 @@ def handler(self, start=None, stop=None, reverse=False, tags=None, require_all_t
posts += [post]
- if not posts:
- return '', []
-
template_deps = site.template_system.template_deps(template)
if state:
# Register template as a dependency (Issue #2391)
|
getredash__redash-464 | Error running query: datetime.time(13, 52, 27) is not JSON serializable
My table schema:
``` sql
CREATE TABLE F_entrances (
id SERIAL PRIMARY KEY,
timeOfEntrance time,
customerId int REFERENCES D_customers
);
```
(and yes, I committed the horrible sin of camel_case vs underScore. I'll be fixing that soonish)
The query
``` sql
SELECT
timeofentrance
FROM F_entrances
```
This query gives me the error `Error running query: datetime.time(13, 52, 27) is not JSON serializable`. I worked around it with `to_char`, but this seems to be a problem at the [Python layer](http://stackoverflow.com/a/11875813/1216976).
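For reference, the usual remedy for this class of error is a `json.JSONEncoder` subclass whose `default` hook converts time values with `isoformat()`. A minimal standalone sketch of the general pattern (not redash's own encoder):
```python
import datetime
import json

class TimeAwareEncoder(json.JSONEncoder):
    def default(self, o):
        # datetime.datetime is a subclass of datetime.date, so it is covered too.
        if isinstance(o, (datetime.date, datetime.time)):
            return o.isoformat()
        return super().default(o)

print(json.dumps({"timeofentrance": datetime.time(13, 52, 27)}, cls=TimeAwareEncoder))
# -> {"timeofentrance": "13:52:27"}
```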
| [
{
"content": "import cStringIO\nimport csv\nimport codecs\nimport decimal\nimport datetime\nimport json\nimport re\nimport hashlib\nimport sqlparse\nimport pytz\n\nCOMMENTS_REGEX = re.compile(\"/\\*.*?\\*/\")\n\n\nclass SQLMetaData(object):\n TABLE_SELECTION_KEYWORDS = ('FROM', 'JOIN', 'LEFT JOIN', 'FULL JOIN', 'RIGHT JOIN', 'CROSS JOIN', 'INNER JOIN',\n 'OUTER JOIN', 'LEFT OUTER JOIN', 'RIGHT OUTER JOIN', 'FULL OUTER JOIN')\n\n def __init__(self, sql):\n self.sql = sql\n self.parsed_sql = sqlparse.parse(self.sql)\n\n self.has_ddl_statements = self._find_ddl_statements()\n self.has_non_select_dml_statements = self._find_dml_statements()\n self.used_tables = self._find_tables()\n\n def _find_ddl_statements(self):\n for statement in self.parsed_sql:\n if len([x for x in statement.flatten() if x.ttype == sqlparse.tokens.DDL]):\n return True\n\n return False\n\n def _find_tables(self):\n tables = set()\n for statement in self.parsed_sql:\n tables.update(self.extract_table_names(statement.tokens))\n\n return tables\n\n def extract_table_names(self, tokens):\n tables = set()\n tokens = [t for t in tokens if t.ttype not in (sqlparse.tokens.Whitespace, sqlparse.tokens.Newline)]\n\n for i in range(len(tokens)):\n if tokens[i].is_group():\n tables.update(self.extract_table_names(tokens[i].tokens))\n else:\n if tokens[i].ttype == sqlparse.tokens.Keyword and tokens[i].normalized in self.TABLE_SELECTION_KEYWORDS:\n if isinstance(tokens[i + 1], sqlparse.sql.Identifier):\n tables.add(tokens[i + 1].value)\n\n if isinstance(tokens[i + 1], sqlparse.sql.IdentifierList):\n tables.update(set([t.value for t in tokens[i+1].get_identifiers()]))\n return tables\n\n def _find_dml_statements(self):\n for statement in self.parsed_sql:\n for token in statement.flatten():\n if token.ttype == sqlparse.tokens.DML and token.normalized != 'SELECT':\n return True\n\n return False\n\n\ndef utcnow():\n \"\"\"Return datetime.now value with timezone specified.\n\n Without the timezone data, when the timestamp stored to the database it gets the current timezone of the server,\n which leads to errors in calculations.\n \"\"\"\n return datetime.datetime.now(pytz.utc)\n\ndef slugify(s):\n return re.sub('[^a-z0-9_\\-]+', '-', s.lower())\n\n\ndef gen_query_hash(sql):\n \"\"\"Returns hash of the given query after stripping all comments, line breaks and multiple\n spaces, and lower casing all text.\n\n TODO: possible issue - the following queries will get the same id:\n 1. SELECT 1 FROM table WHERE column='Value';\n 2. 
SELECT 1 FROM table where column='value';\n \"\"\"\n sql = COMMENTS_REGEX.sub(\"\", sql)\n sql = \"\".join(sql.split()).lower()\n return hashlib.md5(sql.encode('utf-8')).hexdigest()\n\n\nclass JSONEncoder(json.JSONEncoder):\n \"\"\"Custom JSON encoding class, to handle Decimal and datetime.date instances.\n \"\"\"\n def default(self, o):\n if isinstance(o, decimal.Decimal):\n return float(o)\n\n if isinstance(o, datetime.date):\n return o.isoformat()\n \n super(JSONEncoder, self).default(o)\n\n\ndef json_dumps(data):\n return json.dumps(data, cls=JSONEncoder)\n\n\nclass UnicodeWriter:\n \"\"\"\n A CSV writer which will write rows to CSV file \"f\",\n which is encoded in the given encoding.\n \"\"\"\n def __init__(self, f, dialect=csv.excel, encoding=\"utf-8\", **kwds):\n # Redirect output to a queue\n self.queue = cStringIO.StringIO()\n self.writer = csv.writer(self.queue, dialect=dialect, **kwds)\n self.stream = f\n self.encoder = codecs.getincrementalencoder(encoding)()\n\n def _encode_utf8(self, val):\n if isinstance(val, (unicode, str)):\n return val.encode('utf-8')\n\n return val\n\n def writerow(self, row):\n self.writer.writerow([self._encode_utf8(s) for s in row])\n # Fetch UTF-8 output from the queue ...\n data = self.queue.getvalue()\n data = data.decode(\"utf-8\")\n # ... and reencode it into the target encoding\n data = self.encoder.encode(data)\n # write to the target stream\n self.stream.write(data)\n # empty queue\n self.queue.truncate(0)\n\n def writerows(self, rows):\n for row in rows:\n self.writerow(row)\n",
"path": "redash/utils.py"
}
] | [
{
"content": "import cStringIO\nimport csv\nimport codecs\nimport decimal\nimport datetime\nimport json\nimport re\nimport hashlib\nimport sqlparse\nimport pytz\n\nCOMMENTS_REGEX = re.compile(\"/\\*.*?\\*/\")\n\n\nclass SQLMetaData(object):\n TABLE_SELECTION_KEYWORDS = ('FROM', 'JOIN', 'LEFT JOIN', 'FULL JOIN', 'RIGHT JOIN', 'CROSS JOIN', 'INNER JOIN',\n 'OUTER JOIN', 'LEFT OUTER JOIN', 'RIGHT OUTER JOIN', 'FULL OUTER JOIN')\n\n def __init__(self, sql):\n self.sql = sql\n self.parsed_sql = sqlparse.parse(self.sql)\n\n self.has_ddl_statements = self._find_ddl_statements()\n self.has_non_select_dml_statements = self._find_dml_statements()\n self.used_tables = self._find_tables()\n\n def _find_ddl_statements(self):\n for statement in self.parsed_sql:\n if len([x for x in statement.flatten() if x.ttype == sqlparse.tokens.DDL]):\n return True\n\n return False\n\n def _find_tables(self):\n tables = set()\n for statement in self.parsed_sql:\n tables.update(self.extract_table_names(statement.tokens))\n\n return tables\n\n def extract_table_names(self, tokens):\n tables = set()\n tokens = [t for t in tokens if t.ttype not in (sqlparse.tokens.Whitespace, sqlparse.tokens.Newline)]\n\n for i in range(len(tokens)):\n if tokens[i].is_group():\n tables.update(self.extract_table_names(tokens[i].tokens))\n else:\n if tokens[i].ttype == sqlparse.tokens.Keyword and tokens[i].normalized in self.TABLE_SELECTION_KEYWORDS:\n if isinstance(tokens[i + 1], sqlparse.sql.Identifier):\n tables.add(tokens[i + 1].value)\n\n if isinstance(tokens[i + 1], sqlparse.sql.IdentifierList):\n tables.update(set([t.value for t in tokens[i+1].get_identifiers()]))\n return tables\n\n def _find_dml_statements(self):\n for statement in self.parsed_sql:\n for token in statement.flatten():\n if token.ttype == sqlparse.tokens.DML and token.normalized != 'SELECT':\n return True\n\n return False\n\n\ndef utcnow():\n \"\"\"Return datetime.now value with timezone specified.\n\n Without the timezone data, when the timestamp stored to the database it gets the current timezone of the server,\n which leads to errors in calculations.\n \"\"\"\n return datetime.datetime.now(pytz.utc)\n\ndef slugify(s):\n return re.sub('[^a-z0-9_\\-]+', '-', s.lower())\n\n\ndef gen_query_hash(sql):\n \"\"\"Returns hash of the given query after stripping all comments, line breaks and multiple\n spaces, and lower casing all text.\n\n TODO: possible issue - the following queries will get the same id:\n 1. SELECT 1 FROM table WHERE column='Value';\n 2. 
SELECT 1 FROM table where column='value';\n \"\"\"\n sql = COMMENTS_REGEX.sub(\"\", sql)\n sql = \"\".join(sql.split()).lower()\n return hashlib.md5(sql.encode('utf-8')).hexdigest()\n\n\nclass JSONEncoder(json.JSONEncoder):\n \"\"\"Custom JSON encoding class, to handle Decimal and datetime.date instances.\n \"\"\"\n def default(self, o):\n if isinstance(o, decimal.Decimal):\n return float(o)\n\n if isinstance(o, (datetime.date, datetime.time, datetime.timedelta)):\n return o.isoformat()\n \n super(JSONEncoder, self).default(o)\n\n\ndef json_dumps(data):\n return json.dumps(data, cls=JSONEncoder)\n\n\nclass UnicodeWriter:\n \"\"\"\n A CSV writer which will write rows to CSV file \"f\",\n which is encoded in the given encoding.\n \"\"\"\n def __init__(self, f, dialect=csv.excel, encoding=\"utf-8\", **kwds):\n # Redirect output to a queue\n self.queue = cStringIO.StringIO()\n self.writer = csv.writer(self.queue, dialect=dialect, **kwds)\n self.stream = f\n self.encoder = codecs.getincrementalencoder(encoding)()\n\n def _encode_utf8(self, val):\n if isinstance(val, (unicode, str)):\n return val.encode('utf-8')\n\n return val\n\n def writerow(self, row):\n self.writer.writerow([self._encode_utf8(s) for s in row])\n # Fetch UTF-8 output from the queue ...\n data = self.queue.getvalue()\n data = data.decode(\"utf-8\")\n # ... and reencode it into the target encoding\n data = self.encoder.encode(data)\n # write to the target stream\n self.stream.write(data)\n # empty queue\n self.queue.truncate(0)\n\n def writerows(self, rows):\n for row in rows:\n self.writerow(row)\n",
"path": "redash/utils.py"
}
] | diff --git a/redash/utils.py b/redash/utils.py
index 41b0d813f8..41d23372f2 100644
--- a/redash/utils.py
+++ b/redash/utils.py
@@ -95,7 +95,7 @@ def default(self, o):
if isinstance(o, decimal.Decimal):
return float(o)
- if isinstance(o, datetime.date):
+ if isinstance(o, (datetime.date, datetime.time, datetime.timedelta)):
return o.isoformat()
super(JSONEncoder, self).default(o)
|
napari__napari-1088 | ListModel.append does not check type
## 🐛 Bug
While working on layer groups, I found a strange lack of type checking when appending to a `ListModel` (which inherits from `TypedList`): [`ListModel.append`](https://github.com/napari/napari/blob/59ed366e9d492a2389c451468fd8b9f96508b4e2/napari/utils/list/_model.py#L59) jumps right over `TypedList.append`:
https://github.com/napari/napari/blob/59ed366e9d492a2389c451468fd8b9f96508b4e2/napari/utils/list/_model.py#L58-L60
... and if you try to append something that is not a `Layer` to a `LayerList`, it works fine until an error (unrelated to typing) is thrown in `components.layerlist._add`. Is that line supposed to be `TypedList.append(self, obj)`, or was it intentional?
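A minimal sketch (simplified, illustrative classes, not napari's real hierarchy) of why `super(TypedList, self).append(obj)` skips the type check: the two-argument form starts attribute lookup *after* `TypedList` in the MRO, so the untyped `list.append` runs instead.
```python
class TypedList(list):
    def append(self, obj):
        if not isinstance(obj, int):  # stand-in for the basetype check
            raise TypeError(f"{obj!r} is not an int")
        super().append(obj)

class ListModel(TypedList):
    def append(self, obj):
        super(TypedList, self).append(obj)  # resolves to list.append -> no check
        # TypedList.append(self, obj)       # would restore the type check

m = ListModel()
m.append("not an int")  # silently accepted, mirroring the reported behaviour
print(m)                # ['not an int']
```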
| [
{
"content": "from ...utils.event import EmitterGroup\n\nfrom ._multi import MultiIndexList\nfrom ._typed import TypedList\n\n\nclass ListModel(MultiIndexList, TypedList):\n \"\"\"List with events, tuple-indexing, typing, and filtering.\n\n Parameters\n ----------\n basetype : type\n Type of the elements in the list.\n iterable : iterable, optional\n Elements to initialize the list with.\n lookup : dict of type : function(object, ``basetype``) -> bool\n Functions that determine if an object is a reference to an\n element of the list.\n\n Attributes\n ----------\n events : vispy.util.event.EmitterGroup\n Group of events for adding, removing, and reordering elements\n within the list.\n \"\"\"\n\n def __init__(self, basetype, iterable=(), lookup=None):\n super().__init__(basetype, iterable, lookup)\n self.events = EmitterGroup(\n source=self,\n auto_connect=True,\n added=None,\n removed=None,\n reordered=None,\n changed=None,\n )\n self.events.added.connect(self.events.changed)\n self.events.removed.connect(self.events.changed)\n self.events.reordered.connect(self.events.changed)\n\n def __setitem__(self, query, values):\n indices = tuple(self.__prsitem__(query))\n new_indices = tuple(values)\n\n if sorted(indices) != sorted(self.index(v) for v in new_indices):\n raise TypeError(\n 'must be a reordering of indices; '\n 'setting of list items not allowed'\n )\n\n super().__setitem__(indices, new_indices)\n self.events.reordered()\n\n def insert(self, index, obj):\n super().insert(index, obj)\n self.events.added(item=obj, index=self.__locitem__(index))\n\n def append(self, obj):\n super(TypedList, self).append(obj)\n self.events.added(item=obj, index=len(self) - 1)\n\n def pop(self, key):\n obj = super().pop(key)\n self.events.removed(item=obj, index=key)\n return obj\n",
"path": "napari/utils/list/_model.py"
}
] | [
{
"content": "from ...utils.event import EmitterGroup\n\nfrom ._multi import MultiIndexList\nfrom ._typed import TypedList\n\n\nclass ListModel(MultiIndexList, TypedList):\n \"\"\"List with events, tuple-indexing, typing, and filtering.\n\n Parameters\n ----------\n basetype : type\n Type of the elements in the list.\n iterable : iterable, optional\n Elements to initialize the list with.\n lookup : dict of type : function(object, ``basetype``) -> bool\n Functions that determine if an object is a reference to an\n element of the list.\n\n Attributes\n ----------\n events : vispy.util.event.EmitterGroup\n Group of events for adding, removing, and reordering elements\n within the list.\n \"\"\"\n\n def __init__(self, basetype, iterable=(), lookup=None):\n super().__init__(basetype, iterable, lookup)\n self.events = EmitterGroup(\n source=self,\n auto_connect=True,\n added=None,\n removed=None,\n reordered=None,\n changed=None,\n )\n self.events.added.connect(self.events.changed)\n self.events.removed.connect(self.events.changed)\n self.events.reordered.connect(self.events.changed)\n\n def __setitem__(self, query, values):\n indices = tuple(self.__prsitem__(query))\n new_indices = tuple(values)\n\n if sorted(indices) != sorted(self.index(v) for v in new_indices):\n raise TypeError(\n 'must be a reordering of indices; '\n 'setting of list items not allowed'\n )\n\n super().__setitem__(indices, new_indices)\n self.events.reordered()\n\n def insert(self, index, obj):\n super().insert(index, obj)\n self.events.added(item=obj, index=self.__locitem__(index))\n\n def append(self, obj):\n TypedList.append(self, obj)\n self.events.added(item=obj, index=len(self) - 1)\n\n def pop(self, key):\n obj = super().pop(key)\n self.events.removed(item=obj, index=key)\n return obj\n",
"path": "napari/utils/list/_model.py"
}
] | diff --git a/napari/components/_tests/test_layers_list.py b/napari/components/_tests/test_layers_list.py
index 96b273f0def..d19c975c970 100644
--- a/napari/components/_tests/test_layers_list.py
+++ b/napari/components/_tests/test_layers_list.py
@@ -1,6 +1,7 @@
from napari.components import LayerList
from napari.layers import Image
import numpy as np
+import pytest
def test_empty_layers_list():
@@ -27,6 +28,10 @@ def test_adding_layer():
layer = Image(np.random.random((10, 10)))
layers.append(layer)
+ # LayerList should err if you add anything other than a layer
+ with pytest.raises(TypeError):
+ layers.append('something')
+
assert len(layers) == 1
diff --git a/napari/utils/list/_model.py b/napari/utils/list/_model.py
index daaad2ba34b..ad90c2f1be0 100644
--- a/napari/utils/list/_model.py
+++ b/napari/utils/list/_model.py
@@ -56,7 +56,7 @@ def insert(self, index, obj):
self.events.added(item=obj, index=self.__locitem__(index))
def append(self, obj):
- super(TypedList, self).append(obj)
+ TypedList.append(self, obj)
self.events.added(item=obj, index=len(self) - 1)
def pop(self, key):
|
ansible__molecule-2716 | Add a config option to run podman as root
# Issue Type
- Feature request
# Molecule and Ansible details
```
ansible 2.9.9
molecule 3.0.4
```
Molecule installation method (one of):
- pip
Ansible installation method (one of):
- OS package
# Desired Behavior
Podman allows running containers in a rootless mode, although this introduces some limitations.
For example, this makes it difficult to test Ansible roles that rely on a working systemd instance running in the container.
It would therefore be nice to be able to specify in the Molecule config whether Podman should be executed as root or as the current user.
| [
{
"content": "# Copyright (c) 2015-2018 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\"\"\"Schema v2 Validation Module.\"\"\"\n\nimport collections\nimport functools\nimport re\n\nimport cerberus\nimport cerberus.errors\n\nfrom molecule import api\nfrom molecule import interpolation, util\n\n\ndef coerce_env(env, keep_string, v):\n \"\"\"Interpolate environment.\"\"\"\n i = interpolation.Interpolator(interpolation.TemplateWithDefaults, env)\n\n return i.interpolate(v, keep_string)\n\n\ndef pre_validate_base_schema(env, keep_string):\n \"\"\"Pre-validate base schema.\"\"\"\n return {\n \"dependency\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\n \"type\": \"string\",\n \"molecule_env_var\": True,\n \"allowed\": [\"galaxy\", \"gilt\", \"shell\"],\n }\n },\n },\n \"driver\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\n \"type\": \"string\",\n \"molecule_env_var\": True,\n \"allowed\": api.drivers(),\n # NOTE(retr0h): Some users use an environment variable to\n # change the driver name. 
May add this coercion to rest of\n # config using allowed validation.\n \"coerce\": (str, functools.partial(coerce_env, env, keep_string)),\n }\n },\n },\n \"lint\": {\"type\": \"string\"},\n \"platforms\": {\n \"type\": \"list\",\n \"schema\": {\n \"type\": \"dict\",\n \"schema\": {\n \"registry\": {\n \"type\": \"dict\",\n \"schema\": {\n \"credentials\": {\n \"type\": \"dict\",\n \"schema\": {\"password\": {\"type\": \"string\"}},\n }\n },\n }\n },\n },\n },\n \"provisioner\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\n \"type\": \"string\",\n \"molecule_env_var\": True,\n \"allowed\": [\"ansible\"],\n },\n },\n },\n \"scenario\": {\"type\": \"dict\", \"schema\": {\"name\": {\"molecule_env_var\": True}}},\n \"verifier\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\n \"type\": \"string\",\n \"molecule_env_var\": True,\n \"allowed\": api.verifiers(),\n },\n },\n },\n }\n\n\nbase_schema = {\n \"dependency\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\"type\": \"string\"},\n \"enabled\": {\"type\": \"boolean\"},\n \"options\": {\"type\": \"dict\"},\n \"env\": {\n \"type\": \"dict\",\n \"keysrules\": {\"type\": \"string\", \"regex\": \"^[A-Z0-9_-]+$\"},\n },\n \"command\": {\"type\": \"string\", \"nullable\": True},\n },\n },\n \"driver\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\"type\": \"string\"},\n \"provider\": {\n \"type\": \"dict\",\n \"schema\": {\"name\": {\"type\": \"string\", \"nullable\": True}},\n },\n \"options\": {\"type\": \"dict\", \"schema\": {\"managed\": {\"type\": \"boolean\"}}},\n \"ssh_connection_options\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"safe_files\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n },\n },\n \"lint\": {\"type\": \"string\"},\n \"platforms\": {\n \"type\": \"list\",\n \"schema\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\n \"type\": \"string\",\n \"required\": True,\n \"unique\": True, # https://github.com/pyeve/cerberus/issues/467\n },\n \"groups\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"children\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n },\n },\n },\n \"provisioner\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\"type\": \"string\"},\n \"log\": {\"type\": \"boolean\"},\n \"config_options\": {\n \"type\": \"dict\",\n \"schema\": {\n \"defaults\": {\n \"type\": \"dict\",\n \"schema\": {\n \"roles_path\": {\"type\": \"string\", \"disallowed\": True},\n \"library\": {\"type\": \"string\", \"disallowed\": True},\n \"filter_plugins\": {\"type\": \"string\", \"disallowed\": True},\n },\n },\n \"privilege_escalation\": {\"type\": \"dict\", \"disallowed\": True},\n },\n },\n \"connection_options\": {\"type\": \"dict\"},\n \"options\": {\"type\": \"dict\"},\n \"env\": {\n \"type\": \"dict\",\n \"keysrules\": {\"type\": \"string\", \"regex\": \"^[A-Z0-9_-]+$\"},\n \"valuesrules\": {\"nullable\": False},\n \"schema\": {\n \"ANSIBLE_BECOME\": {\"type\": \"string\", \"disallowed\": True},\n \"ANSIBLE_BECOME_METHOD\": {\"type\": \"string\", \"disallowed\": True},\n \"ANSIBLE_BECOME_USER\": {\"type\": \"string\", \"disallowed\": True},\n },\n },\n \"inventory\": {\n \"type\": \"dict\",\n \"schema\": {\n \"hosts\": {\"type\": \"dict\"},\n \"host_vars\": {\"type\": \"dict\"},\n \"group_vars\": {\"type\": \"dict\"},\n \"links\": {\"type\": \"dict\"},\n },\n },\n \"children\": {\"type\": \"dict\"},\n \"playbooks\": {\n \"type\": \"dict\",\n \"schema\": {\n \"cleanup\": {\"type\": \"string\"},\n \"create\": {\"type\": 
\"string\"},\n \"converge\": {\"type\": \"string\"},\n \"destroy\": {\"type\": \"string\"},\n \"prepare\": {\"type\": \"string\"},\n \"side_effect\": {\"type\": \"string\"},\n \"verify\": {\"type\": \"string\"},\n },\n },\n },\n },\n \"scenario\": {\n \"type\": \"dict\",\n \"schema\": {\n \"check_sequence\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"converge_sequence\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"create_sequence\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"destroy_sequence\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"test_sequence\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n },\n },\n \"verifier\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\"type\": \"string\"},\n \"enabled\": {\"type\": \"boolean\"},\n \"options\": {\"type\": \"dict\"},\n \"env\": {\n \"type\": \"dict\",\n \"keysrules\": {\"type\": \"string\", \"regex\": \"^[A-Z0-9_-]+$\"},\n },\n \"directory\": {\"type\": \"string\"},\n \"additional_files_or_dirs\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n },\n },\n}\n\nplatforms_docker_schema = {\n \"platforms\": {\n \"type\": \"list\",\n \"schema\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\"type\": \"string\"},\n \"hostname\": {\"type\": \"string\"},\n \"image\": {\"type\": \"string\"},\n \"dockerfile\": {\"type\": \"string\"},\n \"pull\": {\"type\": \"boolean\"},\n \"pre_build_image\": {\"type\": \"boolean\"},\n \"registry\": {\n \"type\": \"dict\",\n \"schema\": {\n \"url\": {\"type\": \"string\"},\n \"credentials\": {\n \"type\": \"dict\",\n \"schema\": {\n \"username\": {\"type\": \"string\"},\n \"password\": {\"type\": \"string\"},\n \"email\": {\"type\": \"string\"},\n },\n },\n },\n },\n \"override_command\": {\"type\": \"boolean\", \"nullable\": True},\n \"command\": {\"type\": \"string\", \"nullable\": True},\n \"tty\": {\"type\": \"boolean\", \"nullable\": True},\n \"pid_mode\": {\"type\": \"string\"},\n \"privileged\": {\"type\": \"boolean\"},\n \"security_opts\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"volumes\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"keep_volumes\": {\"type\": \"boolean\"},\n \"tmpfs\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"capabilities\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"sysctls\": {\"type\": \"dict\", \"keysrules\": {\"type\": \"string\"}},\n \"exposed_ports\": {\n \"type\": \"list\",\n \"schema\": {\"type\": \"string\", \"coerce\": \"exposed_ports\"},\n },\n \"published_ports\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"user\": {\"type\": \"string\"},\n \"ulimits\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"dns_servers\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"etc_hosts\": {\n \"type\": [\"string\", \"dict\"],\n \"keysrules\": {\"type\": \"string\"},\n },\n \"env\": {\n \"type\": \"dict\",\n \"keysrules\": {\"type\": \"string\", \"regex\": \"^[a-zA-Z0-9._-]+$\"},\n },\n \"restart_policy\": {\"type\": \"string\"},\n \"restart_retries\": {\"type\": \"integer\"},\n \"networks\": {\n \"type\": \"list\",\n \"schema\": {\"type\": \"dict\", \"schema\": {\"name\": {\"type\": \"string\"}}},\n },\n \"network_mode\": {\"type\": \"string\"},\n \"purge_networks\": {\"type\": \"boolean\"},\n \"docker_host\": {\"type\": \"string\"},\n \"cacert_path\": {\"type\": \"string\"},\n \"cert_path\": {\"type\": \"string\"},\n \"key_path\": {\"type\": \"string\"},\n 
\"tls_verify\": {\"type\": \"boolean\"},\n },\n },\n }\n}\n\nplatforms_podman_schema = {\n \"platforms\": {\n \"type\": \"list\",\n \"schema\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\"type\": \"string\"},\n \"hostname\": {\"type\": \"string\"},\n \"image\": {\"type\": \"string\"},\n \"dockerfile\": {\"type\": \"string\"},\n \"pull\": {\"type\": \"boolean\"},\n \"pre_build_image\": {\"type\": \"boolean\"},\n \"registry\": {\n \"type\": \"dict\",\n \"schema\": {\n \"url\": {\"type\": \"string\"},\n \"credentials\": {\n \"type\": \"dict\",\n \"schema\": {\n \"username\": {\"type\": \"string\"},\n \"password\": {\"type\": \"string\"},\n },\n },\n },\n },\n \"override_command\": {\"type\": \"boolean\", \"nullable\": True},\n \"command\": {\"type\": \"string\", \"nullable\": True},\n \"tty\": {\"type\": \"boolean\", \"nullable\": True},\n \"pid_mode\": {\"type\": \"string\"},\n \"privileged\": {\"type\": \"boolean\"},\n \"security_opts\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"volumes\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"tmpfs\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"capabilities\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"exposed_ports\": {\n \"type\": \"list\",\n \"schema\": {\"type\": \"string\", \"coerce\": \"exposed_ports\"},\n },\n \"published_ports\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"ulimits\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"dns_servers\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"etc_hosts\": {\n \"type\": [\"string\", \"dict\"],\n \"keysrules\": {\"type\": \"string\"},\n },\n \"env\": {\n \"type\": \"dict\",\n \"keysrules\": {\"type\": \"string\", \"regex\": \"^[a-zA-Z0-9._-]+$\"},\n },\n \"restart_policy\": {\"type\": \"string\"},\n \"restart_retries\": {\"type\": \"integer\"},\n \"network\": {\"type\": \"string\"},\n \"cert_path\": {\"type\": \"string\"},\n \"tls_verify\": {\"type\": \"boolean\"},\n \"cgroup_manager\": {\"type\": \"string\"},\n \"storage_opt\": {\"type\": \"string\"},\n \"storage_driver\": {\"type\": \"string\"},\n },\n },\n }\n}\n\n\ndependency_command_nullable_schema = {\n \"dependency\": {\n \"type\": \"dict\",\n \"schema\": {\"command\": {\"type\": \"string\", \"nullable\": False}},\n }\n}\n\n\nclass Validator(cerberus.Validator):\n \"\"\"Validator Class.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Construct Validator.\"\"\"\n super(Validator, self).__init__(*args, **kwargs)\n\n def _validate_unique(self, unique, field, value):\n \"\"\"Ensure value uniqueness.\n\n The rule's arguments are validated against this schema:\n {'type': 'boolean'}\n \"\"\"\n if unique:\n root_key = self.schema_path[0]\n data = (doc[field] for doc in self.root_document[root_key])\n for key, count in collections.Counter(data).items():\n if count > 1:\n msg = \"'{}' is not unique\".format(key)\n self._error(field, msg)\n\n def _validate_disallowed(self, disallowed, field, value):\n \"\"\"Readonly but with a custom error.\n\n The rule's arguments are validated against this schema:\n {'type': 'boolean'}\n \"\"\"\n if disallowed:\n msg = \"disallowed user provided config option\"\n self._error(field, msg)\n\n def _normalize_coerce_exposed_ports(self, value):\n \"\"\"Coerce ``exposed_ports`` values to string.\n\n Not all types that can be specified by the user are acceptable and\n therefore we cannot simply pass a ``'coerce': 'string'`` to the schema\n definition.\n \"\"\"\n if type(value) == 
int:\n return str(value)\n return value\n\n def _validate_molecule_env_var(self, molecule_env_var, field, value):\n \"\"\"Readonly but with a custom error.\n\n The rule's arguments are validated against this schema:\n {'type': 'boolean'}\n \"\"\"\n # TODO(retr0h): This needs to be better handled.\n pattern = r\"^[{$]+MOLECULE[_a-z0-9A-Z]+[}]*$\"\n\n if molecule_env_var:\n if re.match(pattern, value):\n msg = \"cannot reference $MOLECULE special variables \" \"in this section\"\n self._error(field, msg)\n\n\ndef pre_validate(stream, env, keep_string):\n \"\"\"Pre-validate stream.\"\"\"\n data = util.safe_load(stream)\n\n v = Validator(allow_unknown=True)\n v.validate(data, pre_validate_base_schema(env, keep_string))\n\n return v.errors, data\n\n\ndef validate(c):\n \"\"\"Perform schema validation.\"\"\"\n schema = base_schema\n\n # Dependency\n if c[\"dependency\"][\"name\"] == \"shell\":\n schema = util.merge_dicts(schema, dependency_command_nullable_schema)\n\n # Driver\n if c[\"driver\"][\"name\"] == \"docker\":\n schema = util.merge_dicts(schema, platforms_docker_schema)\n elif c[\"driver\"][\"name\"] == \"podman\":\n schema = util.merge_dicts(schema, platforms_podman_schema)\n\n # Verifier\n schema = util.merge_dicts(schema, api.verifiers()[c[\"verifier\"][\"name\"]].schema())\n\n v = Validator(allow_unknown=True)\n v.validate(c, schema)\n\n return v.errors\n",
"path": "molecule/model/schema_v3.py"
}
] | [
{
"content": "# Copyright (c) 2015-2018 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\"\"\"Schema v2 Validation Module.\"\"\"\n\nimport collections\nimport functools\nimport re\n\nimport cerberus\nimport cerberus.errors\n\nfrom molecule import api\nfrom molecule import interpolation, util\n\n\ndef coerce_env(env, keep_string, v):\n \"\"\"Interpolate environment.\"\"\"\n i = interpolation.Interpolator(interpolation.TemplateWithDefaults, env)\n\n return i.interpolate(v, keep_string)\n\n\ndef pre_validate_base_schema(env, keep_string):\n \"\"\"Pre-validate base schema.\"\"\"\n return {\n \"dependency\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\n \"type\": \"string\",\n \"molecule_env_var\": True,\n \"allowed\": [\"galaxy\", \"gilt\", \"shell\"],\n }\n },\n },\n \"driver\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\n \"type\": \"string\",\n \"molecule_env_var\": True,\n \"allowed\": api.drivers(),\n # NOTE(retr0h): Some users use an environment variable to\n # change the driver name. 
May add this coercion to rest of\n # config using allowed validation.\n \"coerce\": (str, functools.partial(coerce_env, env, keep_string)),\n }\n },\n },\n \"lint\": {\"type\": \"string\"},\n \"platforms\": {\n \"type\": \"list\",\n \"schema\": {\n \"type\": \"dict\",\n \"schema\": {\n \"registry\": {\n \"type\": \"dict\",\n \"schema\": {\n \"credentials\": {\n \"type\": \"dict\",\n \"schema\": {\"password\": {\"type\": \"string\"}},\n }\n },\n }\n },\n },\n },\n \"provisioner\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\n \"type\": \"string\",\n \"molecule_env_var\": True,\n \"allowed\": [\"ansible\"],\n },\n },\n },\n \"scenario\": {\"type\": \"dict\", \"schema\": {\"name\": {\"molecule_env_var\": True}}},\n \"verifier\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\n \"type\": \"string\",\n \"molecule_env_var\": True,\n \"allowed\": api.verifiers(),\n },\n },\n },\n }\n\n\nbase_schema = {\n \"dependency\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\"type\": \"string\"},\n \"enabled\": {\"type\": \"boolean\"},\n \"options\": {\"type\": \"dict\"},\n \"env\": {\n \"type\": \"dict\",\n \"keysrules\": {\"type\": \"string\", \"regex\": \"^[A-Z0-9_-]+$\"},\n },\n \"command\": {\"type\": \"string\", \"nullable\": True},\n },\n },\n \"driver\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\"type\": \"string\"},\n \"provider\": {\n \"type\": \"dict\",\n \"schema\": {\"name\": {\"type\": \"string\", \"nullable\": True}},\n },\n \"options\": {\"type\": \"dict\", \"schema\": {\"managed\": {\"type\": \"boolean\"}}},\n \"ssh_connection_options\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"safe_files\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n },\n },\n \"lint\": {\"type\": \"string\"},\n \"platforms\": {\n \"type\": \"list\",\n \"schema\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\n \"type\": \"string\",\n \"required\": True,\n \"unique\": True, # https://github.com/pyeve/cerberus/issues/467\n },\n \"groups\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"children\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n },\n },\n },\n \"provisioner\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\"type\": \"string\"},\n \"log\": {\"type\": \"boolean\"},\n \"config_options\": {\n \"type\": \"dict\",\n \"schema\": {\n \"defaults\": {\n \"type\": \"dict\",\n \"schema\": {\n \"roles_path\": {\"type\": \"string\", \"disallowed\": True},\n \"library\": {\"type\": \"string\", \"disallowed\": True},\n \"filter_plugins\": {\"type\": \"string\", \"disallowed\": True},\n },\n },\n \"privilege_escalation\": {\"type\": \"dict\", \"disallowed\": True},\n },\n },\n \"connection_options\": {\"type\": \"dict\"},\n \"options\": {\"type\": \"dict\"},\n \"env\": {\n \"type\": \"dict\",\n \"keysrules\": {\"type\": \"string\", \"regex\": \"^[A-Z0-9_-]+$\"},\n \"valuesrules\": {\"nullable\": False},\n \"schema\": {\n \"ANSIBLE_BECOME\": {\"type\": \"string\", \"disallowed\": True},\n \"ANSIBLE_BECOME_METHOD\": {\"type\": \"string\", \"disallowed\": True},\n \"ANSIBLE_BECOME_USER\": {\"type\": \"string\", \"disallowed\": True},\n },\n },\n \"inventory\": {\n \"type\": \"dict\",\n \"schema\": {\n \"hosts\": {\"type\": \"dict\"},\n \"host_vars\": {\"type\": \"dict\"},\n \"group_vars\": {\"type\": \"dict\"},\n \"links\": {\"type\": \"dict\"},\n },\n },\n \"children\": {\"type\": \"dict\"},\n \"playbooks\": {\n \"type\": \"dict\",\n \"schema\": {\n \"cleanup\": {\"type\": \"string\"},\n \"create\": {\"type\": 
\"string\"},\n \"converge\": {\"type\": \"string\"},\n \"destroy\": {\"type\": \"string\"},\n \"prepare\": {\"type\": \"string\"},\n \"side_effect\": {\"type\": \"string\"},\n \"verify\": {\"type\": \"string\"},\n },\n },\n },\n },\n \"scenario\": {\n \"type\": \"dict\",\n \"schema\": {\n \"check_sequence\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"converge_sequence\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"create_sequence\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"destroy_sequence\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"test_sequence\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n },\n },\n \"verifier\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\"type\": \"string\"},\n \"enabled\": {\"type\": \"boolean\"},\n \"options\": {\"type\": \"dict\"},\n \"env\": {\n \"type\": \"dict\",\n \"keysrules\": {\"type\": \"string\", \"regex\": \"^[A-Z0-9_-]+$\"},\n },\n \"directory\": {\"type\": \"string\"},\n \"additional_files_or_dirs\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n },\n },\n}\n\nplatforms_docker_schema = {\n \"platforms\": {\n \"type\": \"list\",\n \"schema\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\"type\": \"string\"},\n \"hostname\": {\"type\": \"string\"},\n \"image\": {\"type\": \"string\"},\n \"dockerfile\": {\"type\": \"string\"},\n \"pull\": {\"type\": \"boolean\"},\n \"pre_build_image\": {\"type\": \"boolean\"},\n \"registry\": {\n \"type\": \"dict\",\n \"schema\": {\n \"url\": {\"type\": \"string\"},\n \"credentials\": {\n \"type\": \"dict\",\n \"schema\": {\n \"username\": {\"type\": \"string\"},\n \"password\": {\"type\": \"string\"},\n \"email\": {\"type\": \"string\"},\n },\n },\n },\n },\n \"override_command\": {\"type\": \"boolean\", \"nullable\": True},\n \"command\": {\"type\": \"string\", \"nullable\": True},\n \"tty\": {\"type\": \"boolean\", \"nullable\": True},\n \"pid_mode\": {\"type\": \"string\"},\n \"privileged\": {\"type\": \"boolean\"},\n \"security_opts\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"volumes\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"keep_volumes\": {\"type\": \"boolean\"},\n \"tmpfs\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"capabilities\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"sysctls\": {\"type\": \"dict\", \"keysrules\": {\"type\": \"string\"}},\n \"exposed_ports\": {\n \"type\": \"list\",\n \"schema\": {\"type\": \"string\", \"coerce\": \"exposed_ports\"},\n },\n \"published_ports\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"user\": {\"type\": \"string\"},\n \"ulimits\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"dns_servers\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"etc_hosts\": {\n \"type\": [\"string\", \"dict\"],\n \"keysrules\": {\"type\": \"string\"},\n },\n \"env\": {\n \"type\": \"dict\",\n \"keysrules\": {\"type\": \"string\", \"regex\": \"^[a-zA-Z0-9._-]+$\"},\n },\n \"restart_policy\": {\"type\": \"string\"},\n \"restart_retries\": {\"type\": \"integer\"},\n \"networks\": {\n \"type\": \"list\",\n \"schema\": {\"type\": \"dict\", \"schema\": {\"name\": {\"type\": \"string\"}}},\n },\n \"network_mode\": {\"type\": \"string\"},\n \"purge_networks\": {\"type\": \"boolean\"},\n \"docker_host\": {\"type\": \"string\"},\n \"cacert_path\": {\"type\": \"string\"},\n \"cert_path\": {\"type\": \"string\"},\n \"key_path\": {\"type\": \"string\"},\n 
\"tls_verify\": {\"type\": \"boolean\"},\n },\n },\n }\n}\n\nplatforms_podman_schema = {\n \"platforms\": {\n \"type\": \"list\",\n \"schema\": {\n \"type\": \"dict\",\n \"schema\": {\n \"name\": {\"type\": \"string\"},\n \"hostname\": {\"type\": \"string\"},\n \"image\": {\"type\": \"string\"},\n \"dockerfile\": {\"type\": \"string\"},\n \"pull\": {\"type\": \"boolean\"},\n \"pre_build_image\": {\"type\": \"boolean\"},\n \"registry\": {\n \"type\": \"dict\",\n \"schema\": {\n \"url\": {\"type\": \"string\"},\n \"credentials\": {\n \"type\": \"dict\",\n \"schema\": {\n \"username\": {\"type\": \"string\"},\n \"password\": {\"type\": \"string\"},\n },\n },\n },\n },\n \"override_command\": {\"type\": \"boolean\", \"nullable\": True},\n \"command\": {\"type\": \"string\", \"nullable\": True},\n \"tty\": {\"type\": \"boolean\", \"nullable\": True},\n \"pid_mode\": {\"type\": \"string\"},\n \"privileged\": {\"type\": \"boolean\"},\n \"security_opts\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"volumes\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"tmpfs\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"capabilities\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"exposed_ports\": {\n \"type\": \"list\",\n \"schema\": {\"type\": \"string\", \"coerce\": \"exposed_ports\"},\n },\n \"published_ports\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"ulimits\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"dns_servers\": {\"type\": \"list\", \"schema\": {\"type\": \"string\"}},\n \"etc_hosts\": {\n \"type\": [\"string\", \"dict\"],\n \"keysrules\": {\"type\": \"string\"},\n },\n \"env\": {\n \"type\": \"dict\",\n \"keysrules\": {\"type\": \"string\", \"regex\": \"^[a-zA-Z0-9._-]+$\"},\n },\n \"restart_policy\": {\"type\": \"string\"},\n \"restart_retries\": {\"type\": \"integer\"},\n \"network\": {\"type\": \"string\"},\n \"cert_path\": {\"type\": \"string\"},\n \"tls_verify\": {\"type\": \"boolean\"},\n \"cgroup_manager\": {\"type\": \"string\"},\n \"storage_opt\": {\"type\": \"string\"},\n \"storage_driver\": {\"type\": \"string\"},\n \"rootless\": {\"type\": \"boolean\"},\n },\n },\n }\n}\n\n\ndependency_command_nullable_schema = {\n \"dependency\": {\n \"type\": \"dict\",\n \"schema\": {\"command\": {\"type\": \"string\", \"nullable\": False}},\n }\n}\n\n\nclass Validator(cerberus.Validator):\n \"\"\"Validator Class.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Construct Validator.\"\"\"\n super(Validator, self).__init__(*args, **kwargs)\n\n def _validate_unique(self, unique, field, value):\n \"\"\"Ensure value uniqueness.\n\n The rule's arguments are validated against this schema:\n {'type': 'boolean'}\n \"\"\"\n if unique:\n root_key = self.schema_path[0]\n data = (doc[field] for doc in self.root_document[root_key])\n for key, count in collections.Counter(data).items():\n if count > 1:\n msg = \"'{}' is not unique\".format(key)\n self._error(field, msg)\n\n def _validate_disallowed(self, disallowed, field, value):\n \"\"\"Readonly but with a custom error.\n\n The rule's arguments are validated against this schema:\n {'type': 'boolean'}\n \"\"\"\n if disallowed:\n msg = \"disallowed user provided config option\"\n self._error(field, msg)\n\n def _normalize_coerce_exposed_ports(self, value):\n \"\"\"Coerce ``exposed_ports`` values to string.\n\n Not all types that can be specified by the user are acceptable and\n therefore we cannot simply pass a ``'coerce': 'string'`` to the schema\n 
definition.\n \"\"\"\n if type(value) == int:\n return str(value)\n return value\n\n def _validate_molecule_env_var(self, molecule_env_var, field, value):\n \"\"\"Readonly but with a custom error.\n\n The rule's arguments are validated against this schema:\n {'type': 'boolean'}\n \"\"\"\n # TODO(retr0h): This needs to be better handled.\n pattern = r\"^[{$]+MOLECULE[_a-z0-9A-Z]+[}]*$\"\n\n if molecule_env_var:\n if re.match(pattern, value):\n msg = \"cannot reference $MOLECULE special variables \" \"in this section\"\n self._error(field, msg)\n\n\ndef pre_validate(stream, env, keep_string):\n \"\"\"Pre-validate stream.\"\"\"\n data = util.safe_load(stream)\n\n v = Validator(allow_unknown=True)\n v.validate(data, pre_validate_base_schema(env, keep_string))\n\n return v.errors, data\n\n\ndef validate(c):\n \"\"\"Perform schema validation.\"\"\"\n schema = base_schema\n\n # Dependency\n if c[\"dependency\"][\"name\"] == \"shell\":\n schema = util.merge_dicts(schema, dependency_command_nullable_schema)\n\n # Driver\n if c[\"driver\"][\"name\"] == \"docker\":\n schema = util.merge_dicts(schema, platforms_docker_schema)\n elif c[\"driver\"][\"name\"] == \"podman\":\n schema = util.merge_dicts(schema, platforms_podman_schema)\n\n # Verifier\n schema = util.merge_dicts(schema, api.verifiers()[c[\"verifier\"][\"name\"]].schema())\n\n v = Validator(allow_unknown=True)\n v.validate(c, schema)\n\n return v.errors\n",
"path": "molecule/model/schema_v3.py"
}
] | diff --git a/molecule/model/schema_v3.py b/molecule/model/schema_v3.py
index ffc5ace19c..0d7b59030e 100644
--- a/molecule/model/schema_v3.py
+++ b/molecule/model/schema_v3.py
@@ -358,6 +358,7 @@ def pre_validate_base_schema(env, keep_string):
"cgroup_manager": {"type": "string"},
"storage_opt": {"type": "string"},
"storage_driver": {"type": "string"},
+ "rootless": {"type": "boolean"},
},
},
}
diff --git a/molecule/provisioner/ansible/playbooks/podman/create.yml b/molecule/provisioner/ansible/playbooks/podman/create.yml
index 068863f554..36f945442c 100644
--- a/molecule/provisioner/ansible/playbooks/podman/create.yml
+++ b/molecule/provisioner/ansible/playbooks/podman/create.yml
@@ -4,6 +4,7 @@
connection: local
gather_facts: false
no_log: "{{ molecule_no_log }}"
+ become: "{{ not (item.rootless|default(true)) }}"
tasks:
- name: Log into a container registry
diff --git a/molecule/provisioner/ansible/playbooks/podman/destroy.yml b/molecule/provisioner/ansible/playbooks/podman/destroy.yml
index 7da2dc60b3..71389dd7a7 100644
--- a/molecule/provisioner/ansible/playbooks/podman/destroy.yml
+++ b/molecule/provisioner/ansible/playbooks/podman/destroy.yml
@@ -4,6 +4,7 @@
connection: local
gather_facts: false
no_log: "{{ molecule_no_log }}"
+ become: "{{ not (item.rootless|default(true)) }}"
tasks:
- name: Destroy molecule instance(s)
shell: podman container exists {{ item.name }} && podman rm -f {{ item.name }} || true
diff --git a/molecule/test/resources/playbooks/podman/create.yml b/molecule/test/resources/playbooks/podman/create.yml
index d2c04a8b3e..1af6685eb5 100644
--- a/molecule/test/resources/playbooks/podman/create.yml
+++ b/molecule/test/resources/playbooks/podman/create.yml
@@ -3,6 +3,7 @@
hosts: localhost
connection: local
gather_facts: false
+ become: "{{ not (item.rootless|default(true)) }}"
tasks:
- name: Log into a container registry
command: >
diff --git a/molecule/test/resources/playbooks/podman/destroy.yml b/molecule/test/resources/playbooks/podman/destroy.yml
index 7da2dc60b3..71389dd7a7 100644
--- a/molecule/test/resources/playbooks/podman/destroy.yml
+++ b/molecule/test/resources/playbooks/podman/destroy.yml
@@ -4,6 +4,7 @@
connection: local
gather_facts: false
no_log: "{{ molecule_no_log }}"
+ become: "{{ not (item.rootless|default(true)) }}"
tasks:
- name: Destroy molecule instance(s)
shell: podman container exists {{ item.name }} && podman rm -f {{ item.name }} || true
|
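
For reference, a minimal sketch (not part of the dataset row above) of what the added `rootless` field means for schema validation. It uses the public Cerberus API against a cut-down platform schema; the key names mirror the diff, but the reduced schema and the sample platform entry are illustrative assumptions, not the project's full `platforms_podman_schema` or its custom `Validator` subclass.

```python
# Minimal sketch: validate a podman platform entry that uses the new
# "rootless" flag against a cut-down schema. The reduced schema below is
# illustrative only; the real schema carries many more keys and coercers.
import cerberus

# Cut-down platform schema: just enough keys to show the new field.
platform_schema = {
    "name": {"type": "string", "required": True},
    "image": {"type": "string"},
    "rootless": {"type": "boolean"},  # field added by the diff above
}

validator = cerberus.Validator(platform_schema, allow_unknown=True)

# A platform entry as it might appear (already parsed) from molecule.yml.
platform = {"name": "instance", "image": "docker.io/library/alpine", "rootless": True}
print(validator.validate(platform))  # True: a boolean value is accepted
print(validator.errors)              # {}

# A wrong type is rejected, which is the point of adding the schema entry.
print(validator.validate({"name": "instance", "rootless": "yes"}))  # False
print(validator.errors)  # e.g. {'rootless': ['must be of boolean type']}
```

On the playbook side of the diff, the same flag drives privilege escalation: `become: "{{ not (item.rootless|default(true)) }}"` means instances are treated as rootless unless a platform explicitly sets `rootless: false`, in which case the create/destroy plays run with `become` enabled.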