problem_id
stringlengths 18
22
| source
stringclasses 1
value | task_type
stringclasses 1
value | in_source_id
stringlengths 13
58
| prompt
stringlengths 1.71k
9.01k
| golden_diff
stringlengths 151
4.94k
| verification_info
stringlengths 465
11.3k
| num_tokens_prompt
int64 557
2.05k
| num_tokens_diff
int64 48
1.02k
|
---|---|---|---|---|---|---|---|---|
gh_patches_debug_12330 | rasdani/github-patches | git_diff | falconry__falcon-1883 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
deprecated() utility raises AttributeError under Meinheld
The inner function of our [`deprecated()`](https://falcon.readthedocs.io/en/latest/api/util.html#falcon.deprecated) utility generator grabs the current stack frame object via [`inspect.getcurrentframe()`](https://docs.python.org/3/library/inspect.html#inspect.currentframe), and then uses its attributes to provide a more informative deprecation warning.
However, as warned in the latter's docs, this function is not guaranteed to return a valid stack frame object on all Python implementations; it may also return `None`. It seems that running Gunicorn+Meinheld workers can trigger this situation even under CPython.
Discovered using the following command line under CPython 3.7 and 3.8:
```
gunicorn --workers=8 --worker-class="egg:meinheld#gunicorn_worker" test:app
```
For instance, assigning a value to the deprecated [`Response.body`](https://falcon.readthedocs.io/en/latest/api/request_and_response_wsgi.html#falcon.Response.body) yields
```
2021-03-11 23:31:42 [FALCON] [ERROR] GET /things => Traceback (most recent call last):
File "falcon/app.py", line 361, in falcon.app.App.__call__
File "/tmp/benchmark/test3.py", line 13, in on_get
resp.body = ('\nTwo things awe me most, the starry sky '
File "falcon/util/deprecation.py", line 67, in falcon.util.deprecation.deprecated.decorator.wrapper
AttributeError: 'NoneType' object has no attribute 'f_code'
```
</issue>
<code>
[start of falcon/util/deprecation.py]
1 # Copyright 2013 by Rackspace Hosting, Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Miscellaneous deprecation utilities.
16
17 This module provides decorators to mark functions and classes as deprecated.
18 """
19
20 import functools
21 import inspect
22 import warnings
23
24
25 __all__ = (
26 'DeprecatedWarning',
27 'deprecated',
28 'deprecated_args',
29 )
30
31
32 # NOTE(kgriffs): We don't want our deprecations to be ignored by default,
33 # so create our own type.
34 #
35 # TODO(kgriffs): Revisit this decision if users complain.
36 class DeprecatedWarning(UserWarning):
37 pass
38
39
40 def deprecated(instructions, is_property=False):
41 """Flag a method as deprecated.
42
43 This function returns a decorator which can be used to mark deprecated
44 functions. Applying this decorator will result in a warning being
45 emitted when the function is used.
46
47 Args:
48 instructions (str): Specific guidance for the developer, e.g.:
49 'Please migrate to add_proxy(...)'
50 is_property (bool): If the deprecated object is a property. It
51 will omit the ``(...)`` from the generated documentation
52 """
53
54 def decorator(func):
55
56 object_name = 'property' if is_property else 'function'
57 post_name = '' if is_property else '(...)'
58 message = 'Call to deprecated {} {}{}. {}'.format(
59 object_name, func.__name__, post_name, instructions)
60
61 @functools.wraps(func)
62 def wrapper(*args, **kwargs):
63 frame = inspect.currentframe().f_back
64
65 warnings.warn_explicit(message,
66 category=DeprecatedWarning,
67 filename=inspect.getfile(frame.f_code),
68 lineno=frame.f_lineno)
69
70 return func(*args, **kwargs)
71
72 return wrapper
73
74 return decorator
75
76
77 def deprecated_args(*, allowed_positional, is_method=True):
78 """Flag a method call with positional args as deprecated.
79
80 Keyword Args:
81 allowed_positional (int): Number of allowed positional arguments
82 is_method (bool, optional): The decorated function is a method. Will
83 add one to the number of allowed positional args to account for
84 ``self``. Defaults to True.
85 """
86
87 template = (
88 'Calls with{} positional args are deprecated.'
89 ' Please specify them as keyword arguments instead.'
90 )
91 text = ' more than {}'.format(allowed_positional) if allowed_positional else ''
92 warn_text = template.format(text)
93 if is_method:
94 allowed_positional += 1
95
96 def deprecated_args(fn):
97 @functools.wraps(fn)
98 def wraps(*args, **kwargs):
99 if len(args) > allowed_positional:
100 warnings.warn(warn_text, DeprecatedWarning, stacklevel=2)
101 return fn(*args, **kwargs)
102
103 return wraps
104
105 return deprecated_args
106
[end of falcon/util/deprecation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/falcon/util/deprecation.py b/falcon/util/deprecation.py
--- a/falcon/util/deprecation.py
+++ b/falcon/util/deprecation.py
@@ -18,7 +18,6 @@
"""
import functools
-import inspect
import warnings
@@ -60,12 +59,7 @@
@functools.wraps(func)
def wrapper(*args, **kwargs):
- frame = inspect.currentframe().f_back
-
- warnings.warn_explicit(message,
- category=DeprecatedWarning,
- filename=inspect.getfile(frame.f_code),
- lineno=frame.f_lineno)
+ warnings.warn(message, category=DeprecatedWarning, stacklevel=2)
return func(*args, **kwargs)
| {"golden_diff": "diff --git a/falcon/util/deprecation.py b/falcon/util/deprecation.py\n--- a/falcon/util/deprecation.py\n+++ b/falcon/util/deprecation.py\n@@ -18,7 +18,6 @@\n \"\"\"\n \n import functools\n-import inspect\n import warnings\n \n \n@@ -60,12 +59,7 @@\n \n @functools.wraps(func)\n def wrapper(*args, **kwargs):\n- frame = inspect.currentframe().f_back\n-\n- warnings.warn_explicit(message,\n- category=DeprecatedWarning,\n- filename=inspect.getfile(frame.f_code),\n- lineno=frame.f_lineno)\n+ warnings.warn(message, category=DeprecatedWarning, stacklevel=2)\n \n return func(*args, **kwargs)\n", "issue": "deprecated() utility raises AttributeError under Meinheld\nThe inner function of our [`deprecated()`](https://falcon.readthedocs.io/en/latest/api/util.html#falcon.deprecated) utility generator grabs the current stack frame object via [`inspect.getcurrentframe()`](https://docs.python.org/3/library/inspect.html#inspect.currentframe), and then uses its attributes to provide a more informative deprecation warning.\r\n\r\nHowever, as warned in the latter's docs, this function is not guaranteed to return a valid stack frame object on all Python implementations; it may also return `None`. It seems that running Gunicorn+Meinheld workers can trigger this situation even under CPython.\r\n\r\nDiscovered using the following command line under CPython 3.7 and 3.8:\r\n```\r\ngunicorn --workers=8 --worker-class=\"egg:meinheld#gunicorn_worker\" test:app\r\n```\r\n\r\nFor instance, assigning a value to the deprecated [`Response.body`](https://falcon.readthedocs.io/en/latest/api/request_and_response_wsgi.html#falcon.Response.body) yields\r\n```\r\n2021-03-11 23:31:42 [FALCON] [ERROR] GET /things => Traceback (most recent call last):\r\n File \"falcon/app.py\", line 361, in falcon.app.App.__call__\r\n File \"/tmp/benchmark/test3.py\", line 13, in on_get\r\n resp.body = ('\\nTwo things awe me most, the starry sky '\r\n File \"falcon/util/deprecation.py\", line 67, in falcon.util.deprecation.deprecated.decorator.wrapper\r\nAttributeError: 'NoneType' object has no attribute 'f_code'\r\n```\n", "before_files": [{"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Miscellaneous deprecation utilities.\n\nThis module provides decorators to mark functions and classes as deprecated.\n\"\"\"\n\nimport functools\nimport inspect\nimport warnings\n\n\n__all__ = (\n 'DeprecatedWarning',\n 'deprecated',\n 'deprecated_args',\n)\n\n\n# NOTE(kgriffs): We don't want our deprecations to be ignored by default,\n# so create our own type.\n#\n# TODO(kgriffs): Revisit this decision if users complain.\nclass DeprecatedWarning(UserWarning):\n pass\n\n\ndef deprecated(instructions, is_property=False):\n \"\"\"Flag a method as deprecated.\n\n This function returns a decorator which can be used to mark deprecated\n functions. Applying this decorator will result in a warning being\n emitted when the function is used.\n\n Args:\n instructions (str): Specific guidance for the developer, e.g.:\n 'Please migrate to add_proxy(...)'\n is_property (bool): If the deprecated object is a property. It\n will omit the ``(...)`` from the generated documentation\n \"\"\"\n\n def decorator(func):\n\n object_name = 'property' if is_property else 'function'\n post_name = '' if is_property else '(...)'\n message = 'Call to deprecated {} {}{}. {}'.format(\n object_name, func.__name__, post_name, instructions)\n\n @functools.wraps(func)\n def wrapper(*args, **kwargs):\n frame = inspect.currentframe().f_back\n\n warnings.warn_explicit(message,\n category=DeprecatedWarning,\n filename=inspect.getfile(frame.f_code),\n lineno=frame.f_lineno)\n\n return func(*args, **kwargs)\n\n return wrapper\n\n return decorator\n\n\ndef deprecated_args(*, allowed_positional, is_method=True):\n \"\"\"Flag a method call with positional args as deprecated.\n\n Keyword Args:\n allowed_positional (int): Number of allowed positional arguments\n is_method (bool, optional): The decorated function is a method. Will\n add one to the number of allowed positional args to account for\n ``self``. Defaults to True.\n \"\"\"\n\n template = (\n 'Calls with{} positional args are deprecated.'\n ' Please specify them as keyword arguments instead.'\n )\n text = ' more than {}'.format(allowed_positional) if allowed_positional else ''\n warn_text = template.format(text)\n if is_method:\n allowed_positional += 1\n\n def deprecated_args(fn):\n @functools.wraps(fn)\n def wraps(*args, **kwargs):\n if len(args) > allowed_positional:\n warnings.warn(warn_text, DeprecatedWarning, stacklevel=2)\n return fn(*args, **kwargs)\n\n return wraps\n\n return deprecated_args\n", "path": "falcon/util/deprecation.py"}]} | 1,832 | 165 |
gh_patches_debug_1321 | rasdani/github-patches | git_diff | pyodide__pyodide-717 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Calling yaml.load() without Loader=... is deprecated
For each built packages there is now the following deprecation warning ,
```
pyodide_build/common.py:27: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
return yaml.load(fd)
```
it would be nice to fix this.
</issue>
<code>
[start of pyodide_build/common.py]
1 from pathlib import Path
2 from typing import Optional, Set
3
4
5 ROOTDIR = Path(__file__).parents[1].resolve() / "tools"
6 HOSTPYTHON = ROOTDIR / ".." / "cpython" / "build" / "3.8.2" / "host"
7 TARGETPYTHON = ROOTDIR / ".." / "cpython" / "installs" / "python-3.8.2"
8 DEFAULTCFLAGS = ""
9 DEFAULTLDFLAGS = " ".join(
10 [
11 "-O3",
12 "-s",
13 "BINARYEN_METHOD='native-wasm'",
14 "-Werror",
15 "-s",
16 "EMULATED_FUNCTION_POINTERS=1",
17 "-s",
18 "EMULATE_FUNCTION_POINTER_CASTS=1",
19 "-s",
20 "SIDE_MODULE=1",
21 "-s",
22 "WASM=1",
23 "--memory-init-file",
24 "0",
25 ]
26 )
27
28
29 def parse_package(package):
30 # Import yaml here because pywasmcross needs to run in the built native
31 # Python, which won't have PyYAML
32 import yaml
33
34 # TODO: Validate against a schema
35 with open(package) as fd:
36 return yaml.load(fd)
37
38
39 def _parse_package_subset(query: Optional[str]) -> Optional[Set[str]]:
40 """Parse the list of packages specified with PYODIDE_PACKAGES env var.
41
42 Also add the list of mandatory packages: ['micropip', 'distlib']
43
44 Returns:
45 a set of package names to build or None.
46 """
47 if query is None:
48 return None
49 packages = query.split(",")
50 packages = [el.strip() for el in packages]
51 packages = ["micropip", "distlib"] + packages
52 return set(packages)
53
[end of pyodide_build/common.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyodide_build/common.py b/pyodide_build/common.py
--- a/pyodide_build/common.py
+++ b/pyodide_build/common.py
@@ -33,7 +33,7 @@
# TODO: Validate against a schema
with open(package) as fd:
- return yaml.load(fd)
+ return yaml.safe_load(fd)
def _parse_package_subset(query: Optional[str]) -> Optional[Set[str]]:
| {"golden_diff": "diff --git a/pyodide_build/common.py b/pyodide_build/common.py\n--- a/pyodide_build/common.py\n+++ b/pyodide_build/common.py\n@@ -33,7 +33,7 @@\n \n # TODO: Validate against a schema\n with open(package) as fd:\n- return yaml.load(fd)\n+ return yaml.safe_load(fd)\n \n \n def _parse_package_subset(query: Optional[str]) -> Optional[Set[str]]:\n", "issue": "Calling yaml.load() without Loader=... is deprecated\nFor each built packages there is now the following deprecation warning ,\r\n```\r\npyodide_build/common.py:27: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.\r\n return yaml.load(fd)\r\n```\r\nit would be nice to fix this.\n", "before_files": [{"content": "from pathlib import Path\nfrom typing import Optional, Set\n\n\nROOTDIR = Path(__file__).parents[1].resolve() / \"tools\"\nHOSTPYTHON = ROOTDIR / \"..\" / \"cpython\" / \"build\" / \"3.8.2\" / \"host\"\nTARGETPYTHON = ROOTDIR / \"..\" / \"cpython\" / \"installs\" / \"python-3.8.2\"\nDEFAULTCFLAGS = \"\"\nDEFAULTLDFLAGS = \" \".join(\n [\n \"-O3\",\n \"-s\",\n \"BINARYEN_METHOD='native-wasm'\",\n \"-Werror\",\n \"-s\",\n \"EMULATED_FUNCTION_POINTERS=1\",\n \"-s\",\n \"EMULATE_FUNCTION_POINTER_CASTS=1\",\n \"-s\",\n \"SIDE_MODULE=1\",\n \"-s\",\n \"WASM=1\",\n \"--memory-init-file\",\n \"0\",\n ]\n)\n\n\ndef parse_package(package):\n # Import yaml here because pywasmcross needs to run in the built native\n # Python, which won't have PyYAML\n import yaml\n\n # TODO: Validate against a schema\n with open(package) as fd:\n return yaml.load(fd)\n\n\ndef _parse_package_subset(query: Optional[str]) -> Optional[Set[str]]:\n \"\"\"Parse the list of packages specified with PYODIDE_PACKAGES env var.\n\n Also add the list of mandatory packages: ['micropip', 'distlib']\n\n Returns:\n a set of package names to build or None.\n \"\"\"\n if query is None:\n return None\n packages = query.split(\",\")\n packages = [el.strip() for el in packages]\n packages = [\"micropip\", \"distlib\"] + packages\n return set(packages)\n", "path": "pyodide_build/common.py"}]} | 1,105 | 99 |
gh_patches_debug_20517 | rasdani/github-patches | git_diff | quantumlib__Cirq-1863 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Split cirq packages into with/without contrib
Otherwise there's no way to easily pip install the contrib-requirements
</issue>
<code>
[start of setup.py]
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 from setuptools import find_packages, setup
17
18 # This reads the __version__ variable from cirq/_version.py
19 __version__ = ''
20 exec(open('cirq/_version.py').read())
21
22 description = ('A framework for creating, editing, and invoking '
23 'Noisy Intermediate Scale Quantum (NISQ) circuits.')
24
25 # README file as long_description.
26 long_description = io.open('README.rst', encoding='utf-8').read()
27
28 # Read in requirements
29 requirements = open('requirements.txt').readlines()
30 requirements = [r.strip() for r in requirements]
31
32 cirq_packages = ['cirq'] + [
33 'cirq.' + package for package in find_packages(where='cirq')
34 ]
35
36 setup(name='cirq',
37 version=__version__,
38 url='http://github.com/quantumlib/cirq',
39 author='The Cirq Developers',
40 author_email='[email protected]',
41 python_requires=('>=3.6.0'),
42 install_requires=requirements,
43 license='Apache 2',
44 description=description,
45 long_description=long_description,
46 packages=cirq_packages,
47 package_data={
48 'cirq.api.google.v1': ['*.proto'],
49 'cirq.api.google.v2': ['*.proto'],
50 })
51
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -28,6 +28,10 @@
# Read in requirements
requirements = open('requirements.txt').readlines()
requirements = [r.strip() for r in requirements]
+contrib_requirements = open('cirq/contrib/contrib-requirements.txt').readlines()
+contrib_requirements = [r.strip() for r in contrib_requirements]
+dev_requirements = open('dev_tools/conf/pip-list-dev-tools.txt').readlines()
+dev_requirements = [r.strip() for r in dev_requirements]
cirq_packages = ['cirq'] + [
'cirq.' + package for package in find_packages(where='cirq')
@@ -40,6 +44,10 @@
author_email='[email protected]',
python_requires=('>=3.6.0'),
install_requires=requirements,
+ extras_require={
+ 'contrib': contrib_requirements,
+ 'dev': dev_requirements,
+ },
license='Apache 2',
description=description,
long_description=long_description,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -28,6 +28,10 @@\n # Read in requirements\n requirements = open('requirements.txt').readlines()\n requirements = [r.strip() for r in requirements]\n+contrib_requirements = open('cirq/contrib/contrib-requirements.txt').readlines()\n+contrib_requirements = [r.strip() for r in contrib_requirements]\n+dev_requirements = open('dev_tools/conf/pip-list-dev-tools.txt').readlines()\n+dev_requirements = [r.strip() for r in dev_requirements]\n \n cirq_packages = ['cirq'] + [\n 'cirq.' + package for package in find_packages(where='cirq')\n@@ -40,6 +44,10 @@\n author_email='[email protected]',\n python_requires=('>=3.6.0'),\n install_requires=requirements,\n+ extras_require={\n+ 'contrib': contrib_requirements,\n+ 'dev': dev_requirements,\n+ },\n license='Apache 2',\n description=description,\n long_description=long_description,\n", "issue": "Split cirq packages into with/without contrib\nOtherwise there's no way to easily pip install the contrib-requirements\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nfrom setuptools import find_packages, setup\n\n# This reads the __version__ variable from cirq/_version.py\n__version__ = ''\nexec(open('cirq/_version.py').read())\n\ndescription = ('A framework for creating, editing, and invoking '\n 'Noisy Intermediate Scale Quantum (NISQ) circuits.')\n\n# README file as long_description.\nlong_description = io.open('README.rst', encoding='utf-8').read()\n\n# Read in requirements\nrequirements = open('requirements.txt').readlines()\nrequirements = [r.strip() for r in requirements]\n\ncirq_packages = ['cirq'] + [\n 'cirq.' + package for package in find_packages(where='cirq')\n]\n\nsetup(name='cirq',\n version=__version__,\n url='http://github.com/quantumlib/cirq',\n author='The Cirq Developers',\n author_email='[email protected]',\n python_requires=('>=3.6.0'),\n install_requires=requirements,\n license='Apache 2',\n description=description,\n long_description=long_description,\n packages=cirq_packages,\n package_data={\n 'cirq.api.google.v1': ['*.proto'],\n 'cirq.api.google.v2': ['*.proto'],\n })\n", "path": "setup.py"}]} | 1,053 | 239 |
gh_patches_debug_2369 | rasdani/github-patches | git_diff | Pyomo__pyomo-2265 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Consistent semantic versioning
## Summary
The most recent version of Pyomo released was 6.2, as opposed to 6.2.0. It seems inconsistent with the way many other packages are versioned (e.g. NumFocus packages), although I am unaware if there is a standard specified anywhere. Is there a benefit to the former as opposed to the latter?
## Context
Managing our dependencies, we automate pulling in new versions of packages, running them through our CI prior to upgrading. We run this in two ways - one allowing all upgrades and one allowing only compatible upgrades (PEP 440). This always requires manual review because not all packages use semantic versioning (or the same semantic versioning). One manual override we had to apply this time was pinning `Pyomo ~= 6.2.0` instead of what our script automatically pulled in `Pyomo ~= 6.2`.
Consistent semantic versioning
## Summary
The most recent version of Pyomo released was 6.2, as opposed to 6.2.0. It seems inconsistent with the way many other packages are versioned (e.g. NumFocus packages), although I am unaware if there is a standard specified anywhere. Is there a benefit to the former as opposed to the latter?
## Context
Managing our dependencies, we automate pulling in new versions of packages, running them through our CI prior to upgrading. We run this in two ways - one allowing all upgrades and one allowing only compatible upgrades (PEP 440). This always requires manual review because not all packages use semantic versioning (or the same semantic versioning). One manual override we had to apply this time was pinning `Pyomo ~= 6.2.0` instead of what our script automatically pulled in `Pyomo ~= 6.2`.
</issue>
<code>
[start of pyomo/version/info.py]
1 # ___________________________________________________________________________
2 #
3 # Pyomo: Python Optimization Modeling Objects
4 # Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC
5 # Under the terms of Contract DE-NA0003525 with National Technology and
6 # Engineering Solutions of Sandia, LLC, the U.S. Government retains certain
7 # rights in this software.
8 # This software is distributed under the 3-clause BSD License.
9 # ___________________________________________________________________________
10
11 _init_url="$URL$"
12
13 # NOTE: releaselevel should be left at 'invalid' for trunk development
14 # and set to 'final' for releases. During development, the
15 # major.minor.micro should point ot the NEXT release (generally, the
16 # next micro release after the current release).
17 #
18 # Note: When cutting a release, also update the major/minor/micro in
19 #
20 # pyomo/RELEASE.txt
21 #
22 # The VOTD zipbuilder will automatically change releaselevel to "VOTD
23 # {hash}" and set the serial number to YYMMDDhhmm. The serial number
24 # should generally be left at 0, unless a downstream package is tracking
25 # main and needs a hard reference to "suitably new" development.
26 major=6
27 minor=2
28 micro=1
29 releaselevel='invalid'
30 #releaselevel='final'
31 serial=0
32
33 if releaselevel == 'final':
34 pass
35 elif '/tags/' in _init_url: #pragma:nocover
36 releaselevel = 'final'
37 elif releaselevel == 'invalid':
38 from os.path import abspath, dirname, exists, join
39 if __file__.endswith('setup.py'):
40 # This file is being sources (exec'ed) from setup.py.
41 # dirname(__file__) setup.py's scope is the root sourec directory
42 _rootdir = os.path.dirname(__file__)
43 else:
44 # Eventually this should import PYOMO_ROOT_DIR from
45 # pyomo.common instead of reimplementing that logic here.
46 #
47 # __file__ fails if script is called in different ways on Windows
48 # __file__ fails if someone does os.chdir() before
49 # sys.argv[0] also fails because it doesn't not always contains the path
50 from inspect import getfile, currentframe
51 _rootdir = join(dirname(abspath(getfile(currentframe()))), '..', '..')
52
53 if exists(join(_rootdir, '.git')):
54 try:
55 with open(join(_rootdir, '.git', 'HEAD')) as _FILE:
56 _ref = _FILE.readline().strip() #pragma:nocover
57 releaselevel = 'devel {%s}' % (
58 _ref.split('/')[-1].split('\\')[-1], ) #pragma:nocover
59 except:
60 releaselevel = 'devel' #pragma:nocover
61 elif exists(join(_rootdir, '.svn')):
62 releaselevel = 'devel {svn}' #pragma:nocover
63 else:
64 releaselevel = 'VOTD' #pragma:nocover
65
66
67 version_info = (major, minor, micro, releaselevel, serial)
68
69 version = '.'.join(str(x) for x in version_info[:(3 if micro else 2)])
70 __version__ = version
71 if releaselevel != 'final':
72 version += ' ('+releaselevel+')'
73 if releaselevel.startswith('devel'):
74 __version__ += ".dev%d" % (serial,)
75 elif releaselevel.startswith('VOTD'):
76 __version__ += "a%d" % (serial,)
77
[end of pyomo/version/info.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyomo/version/info.py b/pyomo/version/info.py
--- a/pyomo/version/info.py
+++ b/pyomo/version/info.py
@@ -66,7 +66,7 @@
version_info = (major, minor, micro, releaselevel, serial)
-version = '.'.join(str(x) for x in version_info[:(3 if micro else 2)])
+version = '.'.join(str(x) for x in version_info[:3])
__version__ = version
if releaselevel != 'final':
version += ' ('+releaselevel+')'
| {"golden_diff": "diff --git a/pyomo/version/info.py b/pyomo/version/info.py\n--- a/pyomo/version/info.py\n+++ b/pyomo/version/info.py\n@@ -66,7 +66,7 @@\n \n version_info = (major, minor, micro, releaselevel, serial)\n \n-version = '.'.join(str(x) for x in version_info[:(3 if micro else 2)])\n+version = '.'.join(str(x) for x in version_info[:3])\n __version__ = version\n if releaselevel != 'final':\n version += ' ('+releaselevel+')'\n", "issue": "Consistent semantic versioning\n## Summary\r\n\r\nThe most recent version of Pyomo released was 6.2, as opposed to 6.2.0. It seems inconsistent with the way many other packages are versioned (e.g. NumFocus packages), although I am unaware if there is a standard specified anywhere. Is there a benefit to the former as opposed to the latter? \r\n\r\n## Context\r\n\r\nManaging our dependencies, we automate pulling in new versions of packages, running them through our CI prior to upgrading. We run this in two ways - one allowing all upgrades and one allowing only compatible upgrades (PEP 440). This always requires manual review because not all packages use semantic versioning (or the same semantic versioning). One manual override we had to apply this time was pinning `Pyomo ~= 6.2.0` instead of what our script automatically pulled in `Pyomo ~= 6.2`.\nConsistent semantic versioning\n## Summary\r\n\r\nThe most recent version of Pyomo released was 6.2, as opposed to 6.2.0. It seems inconsistent with the way many other packages are versioned (e.g. NumFocus packages), although I am unaware if there is a standard specified anywhere. Is there a benefit to the former as opposed to the latter? \r\n\r\n## Context\r\n\r\nManaging our dependencies, we automate pulling in new versions of packages, running them through our CI prior to upgrading. We run this in two ways - one allowing all upgrades and one allowing only compatible upgrades (PEP 440). This always requires manual review because not all packages use semantic versioning (or the same semantic versioning). One manual override we had to apply this time was pinning `Pyomo ~= 6.2.0` instead of what our script automatically pulled in `Pyomo ~= 6.2`.\n", "before_files": [{"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and \n# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain \n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\n\n_init_url=\"$URL$\"\n\n# NOTE: releaselevel should be left at 'invalid' for trunk development\n# and set to 'final' for releases. During development, the\n# major.minor.micro should point ot the NEXT release (generally, the\n# next micro release after the current release).\n#\n# Note: When cutting a release, also update the major/minor/micro in\n#\n# pyomo/RELEASE.txt\n#\n# The VOTD zipbuilder will automatically change releaselevel to \"VOTD\n# {hash}\" and set the serial number to YYMMDDhhmm. The serial number\n# should generally be left at 0, unless a downstream package is tracking\n# main and needs a hard reference to \"suitably new\" development.\nmajor=6\nminor=2\nmicro=1\nreleaselevel='invalid'\n#releaselevel='final'\nserial=0\n\nif releaselevel == 'final':\n pass\nelif '/tags/' in _init_url: #pragma:nocover\n releaselevel = 'final'\nelif releaselevel == 'invalid':\n from os.path import abspath, dirname, exists, join\n if __file__.endswith('setup.py'):\n # This file is being sources (exec'ed) from setup.py.\n # dirname(__file__) setup.py's scope is the root sourec directory\n _rootdir = os.path.dirname(__file__)\n else:\n # Eventually this should import PYOMO_ROOT_DIR from\n # pyomo.common instead of reimplementing that logic here.\n #\n # __file__ fails if script is called in different ways on Windows\n # __file__ fails if someone does os.chdir() before\n # sys.argv[0] also fails because it doesn't not always contains the path\n from inspect import getfile, currentframe\n _rootdir = join(dirname(abspath(getfile(currentframe()))), '..', '..')\n\n if exists(join(_rootdir, '.git')):\n try:\n with open(join(_rootdir, '.git', 'HEAD')) as _FILE:\n _ref = _FILE.readline().strip() #pragma:nocover\n releaselevel = 'devel {%s}' % (\n _ref.split('/')[-1].split('\\\\')[-1], ) #pragma:nocover\n except:\n releaselevel = 'devel' #pragma:nocover\n elif exists(join(_rootdir, '.svn')):\n releaselevel = 'devel {svn}' #pragma:nocover\n else:\n releaselevel = 'VOTD' #pragma:nocover\n\n\nversion_info = (major, minor, micro, releaselevel, serial)\n\nversion = '.'.join(str(x) for x in version_info[:(3 if micro else 2)])\n__version__ = version\nif releaselevel != 'final':\n version += ' ('+releaselevel+')'\nif releaselevel.startswith('devel'):\n __version__ += \".dev%d\" % (serial,)\nelif releaselevel.startswith('VOTD'):\n __version__ += \"a%d\" % (serial,)\n", "path": "pyomo/version/info.py"}]} | 1,834 | 124 |
gh_patches_debug_36212 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-403 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error adding more than one implementation of an interface
**Observed Behaviour**: When i try to add two implementations of an interface, i get a duplicated type name exception
**Expected Behaviour**: Instead of trying to recreate the interface type again, reuse it.
**Steps to reproduce**:
1. Create an interface
2. Create two types which implement the interface
3. Launch `strawberry server app`
4. See it fails with ` Schema must contain uniquely named types but contains multiple types named '<InterfaceName>'`
**Snippet to reproduce the issue**
````python
from typing import List, Optional, Union
import strawberry
from strawberry import field
@strawberry.interface
class Person:
name: str
email: str
@strawberry.type
class Speaker(Person):
job: str
@strawberry.type
class Attendee(Person):
interests: List[str]
def get_people_by_name(name: str):
return []
@strawberry.type
class Query:
searchPeopleByName: List[Union[Speaker, Attendee]] = field(resolver=get_people_by_name)
schema = strawberry.Schema(query=Query)
````
**Full traceback:**
```
File "/mnt/c/Users/<User>/code/nerdearla/test_app.py", line 30, in <module>
schema = strawberry.Schema(query=Query)
File "/home/crow/.virtualenvs/venv/lib/python3.8/site-packages/strawberry/schema/schema.py", line 42, in __init__
self._schema = GraphQLSchema(
File "/home/crow/.virtualenvs/venv/lib/python3.8/site-packages/graphql/type/schema.py", line 240, in __init__
raise TypeError(
TypeError: Schema must contain uniquely named types but contains multiple types named 'Person'.
```
</issue>
<code>
[start of strawberry/schema/types/object_type.py]
1 from typing import Type, cast
2
3 from graphql import GraphQLInputObjectType, GraphQLObjectType
4 from graphql.type.definition import GraphQLInterfaceType
5 from strawberry.type import TypeDefinition
6
7 from .fields import get_field
8 from .types import ConcreteType, GraphQLType, TypeMap
9
10
11 def _get_object_type_for_type_definition(
12 type_definition: TypeDefinition, type_map: TypeMap
13 ) -> GraphQLType:
14
15 TypeClass: Type = GraphQLObjectType
16
17 kwargs = {}
18
19 if type_definition.is_input:
20 TypeClass = GraphQLInputObjectType
21 elif type_definition.is_interface:
22 TypeClass = GraphQLInterfaceType
23
24 if type_definition.interfaces:
25 kwargs["interfaces"] = [
26 _get_object_type_for_type_definition(interface, type_map)
27 for interface in type_definition.interfaces
28 ]
29
30 assert not type_definition.is_generic
31
32 return TypeClass(
33 name=type_definition.name,
34 fields=lambda: {
35 field.name: get_field(field, type_definition.is_input, type_map)
36 for field in type_definition.fields
37 },
38 description=type_definition.description,
39 **kwargs,
40 )
41
42
43 def get_object_type(origin: Type, type_map: TypeMap) -> GraphQLObjectType:
44 """Returns a root type (Query, Mutation, Subscription) from a decorated type"""
45
46 if not hasattr(origin, "_type_definition"):
47 raise ValueError(f"Wrong type passed to get object type {origin}")
48
49 type_definition: TypeDefinition = origin._type_definition
50
51 name = type_definition.name
52
53 if name not in type_map:
54 object_type = _get_object_type_for_type_definition(type_definition, type_map)
55
56 type_map[name] = ConcreteType(
57 definition=type_definition, implementation=object_type
58 )
59
60 return cast(GraphQLObjectType, type_map[name].implementation)
61
[end of strawberry/schema/types/object_type.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/schema/types/object_type.py b/strawberry/schema/types/object_type.py
--- a/strawberry/schema/types/object_type.py
+++ b/strawberry/schema/types/object_type.py
@@ -12,32 +12,43 @@
type_definition: TypeDefinition, type_map: TypeMap
) -> GraphQLType:
- TypeClass: Type = GraphQLObjectType
-
- kwargs = {}
-
- if type_definition.is_input:
- TypeClass = GraphQLInputObjectType
- elif type_definition.is_interface:
- TypeClass = GraphQLInterfaceType
-
- if type_definition.interfaces:
- kwargs["interfaces"] = [
- _get_object_type_for_type_definition(interface, type_map)
- for interface in type_definition.interfaces
- ]
-
- assert not type_definition.is_generic
-
- return TypeClass(
- name=type_definition.name,
- fields=lambda: {
- field.name: get_field(field, type_definition.is_input, type_map)
- for field in type_definition.fields
- },
- description=type_definition.description,
- **kwargs,
- )
+ if type_definition.name not in type_map:
+ TypeClass: Type = GraphQLObjectType
+
+ kwargs = {}
+
+ if type_definition.is_input:
+ TypeClass = GraphQLInputObjectType
+ elif type_definition.is_interface:
+ TypeClass = GraphQLInterfaceType
+
+ if type_definition.interfaces:
+ kwargs["interfaces"] = [
+ _get_object_type_for_type_definition(interface, type_map)
+ for interface in type_definition.interfaces
+ ]
+ # this tells GraphQL core what the returned object's actual type is
+ kwargs["is_type_of"] = lambda obj, _: isinstance( # type: ignore
+ obj, type_definition.origin
+ )
+
+ assert not type_definition.is_generic
+
+ object_type = TypeClass(
+ name=type_definition.name,
+ fields=lambda: {
+ field.name: get_field(field, type_definition.is_input, type_map)
+ for field in type_definition.fields
+ },
+ description=type_definition.description,
+ **kwargs,
+ )
+
+ type_map[type_definition.name] = ConcreteType(
+ definition=type_definition, implementation=object_type
+ )
+
+ return type_map[type_definition.name].implementation
def get_object_type(origin: Type, type_map: TypeMap) -> GraphQLObjectType:
@@ -48,13 +59,7 @@
type_definition: TypeDefinition = origin._type_definition
- name = type_definition.name
-
- if name not in type_map:
- object_type = _get_object_type_for_type_definition(type_definition, type_map)
-
- type_map[name] = ConcreteType(
- definition=type_definition, implementation=object_type
- )
-
- return cast(GraphQLObjectType, type_map[name].implementation)
+ return cast(
+ GraphQLObjectType,
+ _get_object_type_for_type_definition(type_definition, type_map),
+ )
| {"golden_diff": "diff --git a/strawberry/schema/types/object_type.py b/strawberry/schema/types/object_type.py\n--- a/strawberry/schema/types/object_type.py\n+++ b/strawberry/schema/types/object_type.py\n@@ -12,32 +12,43 @@\n type_definition: TypeDefinition, type_map: TypeMap\n ) -> GraphQLType:\n \n- TypeClass: Type = GraphQLObjectType\n-\n- kwargs = {}\n-\n- if type_definition.is_input:\n- TypeClass = GraphQLInputObjectType\n- elif type_definition.is_interface:\n- TypeClass = GraphQLInterfaceType\n-\n- if type_definition.interfaces:\n- kwargs[\"interfaces\"] = [\n- _get_object_type_for_type_definition(interface, type_map)\n- for interface in type_definition.interfaces\n- ]\n-\n- assert not type_definition.is_generic\n-\n- return TypeClass(\n- name=type_definition.name,\n- fields=lambda: {\n- field.name: get_field(field, type_definition.is_input, type_map)\n- for field in type_definition.fields\n- },\n- description=type_definition.description,\n- **kwargs,\n- )\n+ if type_definition.name not in type_map:\n+ TypeClass: Type = GraphQLObjectType\n+\n+ kwargs = {}\n+\n+ if type_definition.is_input:\n+ TypeClass = GraphQLInputObjectType\n+ elif type_definition.is_interface:\n+ TypeClass = GraphQLInterfaceType\n+\n+ if type_definition.interfaces:\n+ kwargs[\"interfaces\"] = [\n+ _get_object_type_for_type_definition(interface, type_map)\n+ for interface in type_definition.interfaces\n+ ]\n+ # this tells GraphQL core what the returned object's actual type is\n+ kwargs[\"is_type_of\"] = lambda obj, _: isinstance( # type: ignore\n+ obj, type_definition.origin\n+ )\n+\n+ assert not type_definition.is_generic\n+\n+ object_type = TypeClass(\n+ name=type_definition.name,\n+ fields=lambda: {\n+ field.name: get_field(field, type_definition.is_input, type_map)\n+ for field in type_definition.fields\n+ },\n+ description=type_definition.description,\n+ **kwargs,\n+ )\n+\n+ type_map[type_definition.name] = ConcreteType(\n+ definition=type_definition, implementation=object_type\n+ )\n+\n+ return type_map[type_definition.name].implementation\n \n \n def get_object_type(origin: Type, type_map: TypeMap) -> GraphQLObjectType:\n@@ -48,13 +59,7 @@\n \n type_definition: TypeDefinition = origin._type_definition\n \n- name = type_definition.name\n-\n- if name not in type_map:\n- object_type = _get_object_type_for_type_definition(type_definition, type_map)\n-\n- type_map[name] = ConcreteType(\n- definition=type_definition, implementation=object_type\n- )\n-\n- return cast(GraphQLObjectType, type_map[name].implementation)\n+ return cast(\n+ GraphQLObjectType,\n+ _get_object_type_for_type_definition(type_definition, type_map),\n+ )\n", "issue": "Error adding more than one implementation of an interface\n**Observed Behaviour**: When i try to add two implementations of an interface, i get a duplicated type name exception\r\n\r\n**Expected Behaviour**: Instead of trying to recreate the interface type again, reuse it.\r\n\r\n**Steps to reproduce**:\r\n1. Create an interface\r\n2. Create two types which implement the interface\r\n3. Launch `strawberry server app`\r\n4. See it fails with ` Schema must contain uniquely named types but contains multiple types named '<InterfaceName>'`\r\n\r\n**Snippet to reproduce the issue**\r\n````python\r\nfrom typing import List, Optional, Union\r\nimport strawberry\r\nfrom strawberry import field\r\n\r\n\r\[email protected]\r\nclass Person:\r\n name: str\r\n email: str\r\n\r\n\r\[email protected]\r\nclass Speaker(Person):\r\n job: str \r\n\r\n\r\[email protected]\r\nclass Attendee(Person):\r\n interests: List[str]\r\n\r\n\r\ndef get_people_by_name(name: str): \r\n return []\r\n\r\n\r\[email protected]\r\nclass Query:\r\n searchPeopleByName: List[Union[Speaker, Attendee]] = field(resolver=get_people_by_name)\r\n\r\nschema = strawberry.Schema(query=Query)\r\n````\r\n**Full traceback:**\r\n```\r\n File \"/mnt/c/Users/<User>/code/nerdearla/test_app.py\", line 30, in <module>\r\n schema = strawberry.Schema(query=Query)\r\n File \"/home/crow/.virtualenvs/venv/lib/python3.8/site-packages/strawberry/schema/schema.py\", line 42, in __init__\r\n self._schema = GraphQLSchema(\r\n File \"/home/crow/.virtualenvs/venv/lib/python3.8/site-packages/graphql/type/schema.py\", line 240, in __init__\r\n raise TypeError(\r\nTypeError: Schema must contain uniquely named types but contains multiple types named 'Person'.\r\n```\n", "before_files": [{"content": "from typing import Type, cast\n\nfrom graphql import GraphQLInputObjectType, GraphQLObjectType\nfrom graphql.type.definition import GraphQLInterfaceType\nfrom strawberry.type import TypeDefinition\n\nfrom .fields import get_field\nfrom .types import ConcreteType, GraphQLType, TypeMap\n\n\ndef _get_object_type_for_type_definition(\n type_definition: TypeDefinition, type_map: TypeMap\n) -> GraphQLType:\n\n TypeClass: Type = GraphQLObjectType\n\n kwargs = {}\n\n if type_definition.is_input:\n TypeClass = GraphQLInputObjectType\n elif type_definition.is_interface:\n TypeClass = GraphQLInterfaceType\n\n if type_definition.interfaces:\n kwargs[\"interfaces\"] = [\n _get_object_type_for_type_definition(interface, type_map)\n for interface in type_definition.interfaces\n ]\n\n assert not type_definition.is_generic\n\n return TypeClass(\n name=type_definition.name,\n fields=lambda: {\n field.name: get_field(field, type_definition.is_input, type_map)\n for field in type_definition.fields\n },\n description=type_definition.description,\n **kwargs,\n )\n\n\ndef get_object_type(origin: Type, type_map: TypeMap) -> GraphQLObjectType:\n \"\"\"Returns a root type (Query, Mutation, Subscription) from a decorated type\"\"\"\n\n if not hasattr(origin, \"_type_definition\"):\n raise ValueError(f\"Wrong type passed to get object type {origin}\")\n\n type_definition: TypeDefinition = origin._type_definition\n\n name = type_definition.name\n\n if name not in type_map:\n object_type = _get_object_type_for_type_definition(type_definition, type_map)\n\n type_map[name] = ConcreteType(\n definition=type_definition, implementation=object_type\n )\n\n return cast(GraphQLObjectType, type_map[name].implementation)\n", "path": "strawberry/schema/types/object_type.py"}]} | 1,424 | 671 |
gh_patches_debug_26521 | rasdani/github-patches | git_diff | internetarchive__openlibrary-7718 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Publisher search endpoint solr performance
<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->
Our /search/publishers endpoint is doing a strange roll-up and submitting many solr select queries causing performance issues. Solution presumably is to not make more than 1 solr query on /search/publishers.
### Proposal
Change the backend call for /search/publishers to make a single query to solr `publisher:(...)` query.
### Evidence / Screenshot (if possible)

<img width="775" alt="Screenshot 2023-03-23 at 12 18 55 PM" src="https://user-images.githubusercontent.com/978325/227324919-d19b91c5-d19b-4746-9908-43e0f7cf1cbd.png">
### Relevant url?
<!-- `https://openlibrary.org/...` -->
http://testing.openlibrary.org/search/publishers?q=Black%20Dolls%20And%20White%20Dolls%20From%201940%20Through%201970%3A%20Their%20Impact%20Then%20On%20Black%20And%20White%20Children%27s%20Development%20
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
https://github.com/internetarchive/openlibrary/blob/b897c8c51a79308e38f9825fac82864a5cc7d3ae/openlibrary/plugins/worksearch/publishers.py#L82
### Stakeholders
<!-- @ tag stakeholders of this bug -->
</issue>
<code>
[start of openlibrary/plugins/worksearch/publishers.py]
1 """Publisher pages
2 """
3 from infogami.utils import delegate, stats
4 from infogami.utils.view import render_template, safeint
5 import web
6 import logging
7
8 from . import subjects
9 from . import search
10
11 logger = logging.getLogger("openlibrary.worksearch")
12
13
14 class publishers(subjects.subjects):
15 path = '(/publishers/[^/]+)'
16
17 def GET(self, key):
18 key = key.replace("_", " ")
19 page = subjects.get_subject(key, details=True)
20
21 if not page or page.work_count == 0:
22 web.ctx.status = "404 Not Found"
23 return render_template('publishers/notfound.tmpl', key)
24
25 return render_template("publishers/view", page)
26
27 def is_enabled(self):
28 return "publishers" in web.ctx.features
29
30
31 class publishers_json(subjects.subjects_json):
32 path = '(/publishers/[^/]+)'
33 encoding = "json"
34
35 def is_enabled(self):
36 return "publishers" in web.ctx.features
37
38 def normalize_key(self, key):
39 return key
40
41 def process_key(self, key):
42 return key.replace("_", " ")
43
44
45 class index(delegate.page):
46 path = "/publishers"
47
48 def GET(self):
49 return render_template("publishers/index")
50
51 def is_enabled(self):
52 return "publishers" in web.ctx.features
53
54
55 class publisher_search(delegate.page):
56 path = '/search/publishers'
57
58 def GET(self):
59 i = web.input(q="")
60 solr = search.get_solr()
61 q = {"publisher": i.q}
62
63 result = solr.select(
64 q,
65 facets=["publisher_facet"],
66 fields=["publisher", "publisher_facet"],
67 rows=0,
68 )
69 result = self.process_result(result)
70 return render_template('search/publishers', i.q, result)
71
72 def process_result(self, result):
73 solr = search.get_solr()
74
75 def process(p):
76 return web.storage(
77 name=p.value,
78 key="/publishers/" + p.value.replace(" ", "_"),
79 count=solr.select({"publisher_facet": p.value}, rows=0)['num_found'],
80 )
81
82 publisher_facets = result['facets']['publisher_facet'][:25]
83 return [process(p) for p in publisher_facets]
84
85
86 class PublisherEngine(subjects.SubjectEngine):
87 def normalize_key(self, key):
88 return key
89
90 def get_ebook_count(self, name, value, publish_year):
91 # Query solr for this publish_year and publish_year combination and read the has_fulltext=true facet
92 solr = search.get_solr()
93 q = {"publisher_facet": value}
94
95 if isinstance(publish_year, list):
96 q['publish_year'] = tuple(publish_year) # range
97 elif publish_year:
98 q['publish_year'] = publish_year
99
100 result = solr.select(q, facets=["has_fulltext"], rows=0)
101 counts = {v.value: v.count for v in result["facets"]["has_fulltext"]}
102 return counts.get('true')
103
104
105 def setup():
106 subjects.SUBJECTS.append(
107 subjects.SubjectMeta(
108 name="publisher",
109 key="publishers",
110 prefix="/publishers/",
111 facet="publisher_facet",
112 facet_key="publisher_facet",
113 Engine=PublisherEngine,
114 )
115 )
116
[end of openlibrary/plugins/worksearch/publishers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/openlibrary/plugins/worksearch/publishers.py b/openlibrary/plugins/worksearch/publishers.py
--- a/openlibrary/plugins/worksearch/publishers.py
+++ b/openlibrary/plugins/worksearch/publishers.py
@@ -57,30 +57,28 @@
def GET(self):
i = web.input(q="")
- solr = search.get_solr()
- q = {"publisher": i.q}
-
- result = solr.select(
- q,
+ result = search.get_solr().select(
+ {"publisher": i.q, "type": "work"},
facets=["publisher_facet"],
- fields=["publisher", "publisher_facet"],
+ facet_mincount=1,
+ facet_limit=25,
+ facet_contains=i.q,
+ facet_contains_ignoreCase='true',
rows=0,
)
result = self.process_result(result)
return render_template('search/publishers', i.q, result)
def process_result(self, result):
- solr = search.get_solr()
-
- def process(p):
- return web.storage(
+ publisher_facets = result['facets']['publisher_facet']
+ return [
+ web.storage(
name=p.value,
key="/publishers/" + p.value.replace(" ", "_"),
- count=solr.select({"publisher_facet": p.value}, rows=0)['num_found'],
+ count=p.count,
)
-
- publisher_facets = result['facets']['publisher_facet'][:25]
- return [process(p) for p in publisher_facets]
+ for p in publisher_facets
+ ]
class PublisherEngine(subjects.SubjectEngine):
| {"golden_diff": "diff --git a/openlibrary/plugins/worksearch/publishers.py b/openlibrary/plugins/worksearch/publishers.py\n--- a/openlibrary/plugins/worksearch/publishers.py\n+++ b/openlibrary/plugins/worksearch/publishers.py\n@@ -57,30 +57,28 @@\n \n def GET(self):\n i = web.input(q=\"\")\n- solr = search.get_solr()\n- q = {\"publisher\": i.q}\n-\n- result = solr.select(\n- q,\n+ result = search.get_solr().select(\n+ {\"publisher\": i.q, \"type\": \"work\"},\n facets=[\"publisher_facet\"],\n- fields=[\"publisher\", \"publisher_facet\"],\n+ facet_mincount=1,\n+ facet_limit=25,\n+ facet_contains=i.q,\n+ facet_contains_ignoreCase='true',\n rows=0,\n )\n result = self.process_result(result)\n return render_template('search/publishers', i.q, result)\n \n def process_result(self, result):\n- solr = search.get_solr()\n-\n- def process(p):\n- return web.storage(\n+ publisher_facets = result['facets']['publisher_facet']\n+ return [\n+ web.storage(\n name=p.value,\n key=\"/publishers/\" + p.value.replace(\" \", \"_\"),\n- count=solr.select({\"publisher_facet\": p.value}, rows=0)['num_found'],\n+ count=p.count,\n )\n-\n- publisher_facets = result['facets']['publisher_facet'][:25]\n- return [process(p) for p in publisher_facets]\n+ for p in publisher_facets\n+ ]\n \n \n class PublisherEngine(subjects.SubjectEngine):\n", "issue": "Publisher search endpoint solr performance\n<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->\r\n\r\nOur /search/publishers endpoint is doing a strange roll-up and submitting many solr select queries causing performance issues. Solution presumably is to not make more than 1 solr query on /search/publishers.\r\n\r\n### Proposal\r\n\r\nChange the backend call for /search/publishers to make a single query to solr `publisher:(...)` query.\r\n\r\n### Evidence / Screenshot (if possible)\r\n\r\n<img width=\"775\" alt=\"Screenshot 2023-03-23 at 12 18 55 PM\" src=\"https://user-images.githubusercontent.com/978325/227324919-d19b91c5-d19b-4746-9908-43e0f7cf1cbd.png\">\r\n\r\n### Relevant url?\r\n<!-- `https://openlibrary.org/...` -->\r\n\r\nhttp://testing.openlibrary.org/search/publishers?q=Black%20Dolls%20And%20White%20Dolls%20From%201940%20Through%201970%3A%20Their%20Impact%20Then%20On%20Black%20And%20White%20Children%27s%20Development%20\r\n\r\n### Related files\r\n<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->\r\n\r\nhttps://github.com/internetarchive/openlibrary/blob/b897c8c51a79308e38f9825fac82864a5cc7d3ae/openlibrary/plugins/worksearch/publishers.py#L82\r\n\r\n### Stakeholders\r\n<!-- @ tag stakeholders of this bug -->\r\n\n", "before_files": [{"content": "\"\"\"Publisher pages\n\"\"\"\nfrom infogami.utils import delegate, stats\nfrom infogami.utils.view import render_template, safeint\nimport web\nimport logging\n\nfrom . import subjects\nfrom . import search\n\nlogger = logging.getLogger(\"openlibrary.worksearch\")\n\n\nclass publishers(subjects.subjects):\n path = '(/publishers/[^/]+)'\n\n def GET(self, key):\n key = key.replace(\"_\", \" \")\n page = subjects.get_subject(key, details=True)\n\n if not page or page.work_count == 0:\n web.ctx.status = \"404 Not Found\"\n return render_template('publishers/notfound.tmpl', key)\n\n return render_template(\"publishers/view\", page)\n\n def is_enabled(self):\n return \"publishers\" in web.ctx.features\n\n\nclass publishers_json(subjects.subjects_json):\n path = '(/publishers/[^/]+)'\n encoding = \"json\"\n\n def is_enabled(self):\n return \"publishers\" in web.ctx.features\n\n def normalize_key(self, key):\n return key\n\n def process_key(self, key):\n return key.replace(\"_\", \" \")\n\n\nclass index(delegate.page):\n path = \"/publishers\"\n\n def GET(self):\n return render_template(\"publishers/index\")\n\n def is_enabled(self):\n return \"publishers\" in web.ctx.features\n\n\nclass publisher_search(delegate.page):\n path = '/search/publishers'\n\n def GET(self):\n i = web.input(q=\"\")\n solr = search.get_solr()\n q = {\"publisher\": i.q}\n\n result = solr.select(\n q,\n facets=[\"publisher_facet\"],\n fields=[\"publisher\", \"publisher_facet\"],\n rows=0,\n )\n result = self.process_result(result)\n return render_template('search/publishers', i.q, result)\n\n def process_result(self, result):\n solr = search.get_solr()\n\n def process(p):\n return web.storage(\n name=p.value,\n key=\"/publishers/\" + p.value.replace(\" \", \"_\"),\n count=solr.select({\"publisher_facet\": p.value}, rows=0)['num_found'],\n )\n\n publisher_facets = result['facets']['publisher_facet'][:25]\n return [process(p) for p in publisher_facets]\n\n\nclass PublisherEngine(subjects.SubjectEngine):\n def normalize_key(self, key):\n return key\n\n def get_ebook_count(self, name, value, publish_year):\n # Query solr for this publish_year and publish_year combination and read the has_fulltext=true facet\n solr = search.get_solr()\n q = {\"publisher_facet\": value}\n\n if isinstance(publish_year, list):\n q['publish_year'] = tuple(publish_year) # range\n elif publish_year:\n q['publish_year'] = publish_year\n\n result = solr.select(q, facets=[\"has_fulltext\"], rows=0)\n counts = {v.value: v.count for v in result[\"facets\"][\"has_fulltext\"]}\n return counts.get('true')\n\n\ndef setup():\n subjects.SUBJECTS.append(\n subjects.SubjectMeta(\n name=\"publisher\",\n key=\"publishers\",\n prefix=\"/publishers/\",\n facet=\"publisher_facet\",\n facet_key=\"publisher_facet\",\n Engine=PublisherEngine,\n )\n )\n", "path": "openlibrary/plugins/worksearch/publishers.py"}]} | 2,010 | 374 |
gh_patches_debug_18174 | rasdani/github-patches | git_diff | ephios-dev__ephios-82 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Redirect anonymous users to login view instead of raising 403
this also raises 403 if users are not logged in. this is not what we want
_Originally posted by @jeriox in https://github.com/jeriox/jep/pull/48#discussion_r479789720_
</issue>
<code>
[start of jep/permissions.py]
1 import guardian.mixins
2 from django.contrib.auth.models import Permission, Group
3 from guardian.ctypes import get_content_type
4 from guardian.utils import get_group_obj_perms_model
5
6
7 def get_groups_with_perms(obj, only_with_perms_in):
8
9 ctype = get_content_type(obj)
10 group_model = get_group_obj_perms_model(obj)
11
12 group_rel_name = group_model.group.field.related_query_name()
13
14 if group_model.objects.is_generic():
15 group_filters = {
16 "%s__content_type" % group_rel_name: ctype,
17 "%s__object_pk" % group_rel_name: obj.pk,
18 }
19 else:
20 group_filters = {"%s__content_object" % group_rel_name: obj}
21
22 permission_ids = Permission.objects.filter(
23 content_type=ctype, codename__in=only_with_perms_in
24 ).values_list("id", flat=True)
25 group_filters.update(
26 {"%s__permission_id__in" % group_rel_name: permission_ids,}
27 )
28 return Group.objects.filter(**group_filters).distinct()
29
30
31 class CustomPermissionRequiredMixin(guardian.mixins.PermissionRequiredMixin):
32 raise_exception = True
33 accept_global_perms = True
34
35 # FIXME redirect non logged in users and raise Permission for others
36
[end of jep/permissions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/jep/permissions.py b/jep/permissions.py
--- a/jep/permissions.py
+++ b/jep/permissions.py
@@ -1,8 +1,12 @@
import guardian.mixins
+from django.contrib.auth import REDIRECT_FIELD_NAME
from django.contrib.auth.models import Permission, Group
+from django.contrib.auth.views import redirect_to_login
from guardian.ctypes import get_content_type
from guardian.utils import get_group_obj_perms_model
+from jep import settings
+
def get_groups_with_perms(obj, only_with_perms_in):
@@ -32,4 +36,10 @@
raise_exception = True
accept_global_perms = True
- # FIXME redirect non logged in users and raise Permission for others
+ def on_permission_check_fail(self, request, response, obj=None):
+ if request.user.is_authenticated:
+ return response
+ else:
+ return redirect_to_login(
+ self.request.get_full_path(), settings.LOGIN_URL, REDIRECT_FIELD_NAME
+ )
| {"golden_diff": "diff --git a/jep/permissions.py b/jep/permissions.py\n--- a/jep/permissions.py\n+++ b/jep/permissions.py\n@@ -1,8 +1,12 @@\n import guardian.mixins\n+from django.contrib.auth import REDIRECT_FIELD_NAME\n from django.contrib.auth.models import Permission, Group\n+from django.contrib.auth.views import redirect_to_login\n from guardian.ctypes import get_content_type\n from guardian.utils import get_group_obj_perms_model\n \n+from jep import settings\n+\n \n def get_groups_with_perms(obj, only_with_perms_in):\n \n@@ -32,4 +36,10 @@\n raise_exception = True\n accept_global_perms = True\n \n- # FIXME redirect non logged in users and raise Permission for others\n+ def on_permission_check_fail(self, request, response, obj=None):\n+ if request.user.is_authenticated:\n+ return response\n+ else:\n+ return redirect_to_login(\n+ self.request.get_full_path(), settings.LOGIN_URL, REDIRECT_FIELD_NAME\n+ )\n", "issue": "Redirect anonymous users to login view instead of raising 403\nthis also raises 403 if users are not logged in. this is not what we want\r\n\r\n_Originally posted by @jeriox in https://github.com/jeriox/jep/pull/48#discussion_r479789720_\n", "before_files": [{"content": "import guardian.mixins\nfrom django.contrib.auth.models import Permission, Group\nfrom guardian.ctypes import get_content_type\nfrom guardian.utils import get_group_obj_perms_model\n\n\ndef get_groups_with_perms(obj, only_with_perms_in):\n\n ctype = get_content_type(obj)\n group_model = get_group_obj_perms_model(obj)\n\n group_rel_name = group_model.group.field.related_query_name()\n\n if group_model.objects.is_generic():\n group_filters = {\n \"%s__content_type\" % group_rel_name: ctype,\n \"%s__object_pk\" % group_rel_name: obj.pk,\n }\n else:\n group_filters = {\"%s__content_object\" % group_rel_name: obj}\n\n permission_ids = Permission.objects.filter(\n content_type=ctype, codename__in=only_with_perms_in\n ).values_list(\"id\", flat=True)\n group_filters.update(\n {\"%s__permission_id__in\" % group_rel_name: permission_ids,}\n )\n return Group.objects.filter(**group_filters).distinct()\n\n\nclass CustomPermissionRequiredMixin(guardian.mixins.PermissionRequiredMixin):\n raise_exception = True\n accept_global_perms = True\n\n # FIXME redirect non logged in users and raise Permission for others\n", "path": "jep/permissions.py"}]} | 931 | 223 |
gh_patches_debug_15802 | rasdani/github-patches | git_diff | lutris__lutris-1179 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Have logger scroll automatically only when at the bottom
Currently the logger scrolls whenever it outputs which makes scrolling up useless unless the game is stopped. This behavior is annoying.
</issue>
<code>
[start of lutris/gui/logwindow.py]
1 from gi.repository import Gtk
2 from lutris.gui.widgets.dialogs import Dialog
3
4
5 class LogTextView(Gtk.TextView):
6 def __init__(self, buffer):
7 super(LogTextView, self).__init__()
8
9 self.set_buffer(buffer)
10 self.set_editable(False)
11 self.set_monospace(True)
12 self.set_left_margin(10)
13 self.set_wrap_mode(Gtk.WrapMode.CHAR)
14 self.get_style_context().add_class('lutris-logview')
15 self.connect("size-allocate", self.autoscroll)
16
17 def autoscroll(self, *args):
18 adj = self.get_vadjustment()
19 adj.set_value(adj.get_upper() - adj.get_page_size())
20
21
22 class LogWindow(Dialog):
23 def __init__(self, title, buffer, parent):
24 super(LogWindow, self).__init__(title, parent, 0,
25 ('_OK', Gtk.ResponseType.OK))
26 self.set_size_request(640, 480)
27 self.grid = Gtk.Grid()
28 self.buffer = buffer
29 self.logtextview = LogTextView(self.buffer)
30
31 scrolledwindow = Gtk.ScrolledWindow(hexpand=True, vexpand=True,
32 child=self.logtextview)
33 self.vbox.add(scrolledwindow)
34 self.show_all()
35
[end of lutris/gui/logwindow.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lutris/gui/logwindow.py b/lutris/gui/logwindow.py
--- a/lutris/gui/logwindow.py
+++ b/lutris/gui/logwindow.py
@@ -10,13 +10,16 @@
self.set_editable(False)
self.set_monospace(True)
self.set_left_margin(10)
+ self.scroll_max = 0
self.set_wrap_mode(Gtk.WrapMode.CHAR)
self.get_style_context().add_class('lutris-logview')
self.connect("size-allocate", self.autoscroll)
def autoscroll(self, *args):
adj = self.get_vadjustment()
- adj.set_value(adj.get_upper() - adj.get_page_size())
+ if adj.get_value() == self.scroll_max or self.scroll_max == 0:
+ adj.set_value(adj.get_upper() - adj.get_page_size())
+ self.scroll_max = adj.get_upper() - adj.get_page_size()
class LogWindow(Dialog):
| {"golden_diff": "diff --git a/lutris/gui/logwindow.py b/lutris/gui/logwindow.py\n--- a/lutris/gui/logwindow.py\n+++ b/lutris/gui/logwindow.py\n@@ -10,13 +10,16 @@\n self.set_editable(False)\n self.set_monospace(True)\n self.set_left_margin(10)\n+ self.scroll_max = 0\n self.set_wrap_mode(Gtk.WrapMode.CHAR)\n self.get_style_context().add_class('lutris-logview')\n self.connect(\"size-allocate\", self.autoscroll)\n \n def autoscroll(self, *args):\n adj = self.get_vadjustment()\n- adj.set_value(adj.get_upper() - adj.get_page_size())\n+ if adj.get_value() == self.scroll_max or self.scroll_max == 0:\n+ adj.set_value(adj.get_upper() - adj.get_page_size())\n+ self.scroll_max = adj.get_upper() - adj.get_page_size()\n \n \n class LogWindow(Dialog):\n", "issue": "Have logger scroll automatically only when at the bottom\nCurrently the logger scrolls whenever it outputs which makes scrolling up useless unless the game is stopped. This behavior is annoying.\n", "before_files": [{"content": "from gi.repository import Gtk\nfrom lutris.gui.widgets.dialogs import Dialog\n\n\nclass LogTextView(Gtk.TextView):\n def __init__(self, buffer):\n super(LogTextView, self).__init__()\n\n self.set_buffer(buffer)\n self.set_editable(False)\n self.set_monospace(True)\n self.set_left_margin(10)\n self.set_wrap_mode(Gtk.WrapMode.CHAR)\n self.get_style_context().add_class('lutris-logview')\n self.connect(\"size-allocate\", self.autoscroll)\n\n def autoscroll(self, *args):\n adj = self.get_vadjustment()\n adj.set_value(adj.get_upper() - adj.get_page_size())\n\n\nclass LogWindow(Dialog):\n def __init__(self, title, buffer, parent):\n super(LogWindow, self).__init__(title, parent, 0,\n ('_OK', Gtk.ResponseType.OK))\n self.set_size_request(640, 480)\n self.grid = Gtk.Grid()\n self.buffer = buffer\n self.logtextview = LogTextView(self.buffer)\n\n scrolledwindow = Gtk.ScrolledWindow(hexpand=True, vexpand=True,\n child=self.logtextview)\n self.vbox.add(scrolledwindow)\n self.show_all()\n", "path": "lutris/gui/logwindow.py"}]} | 898 | 215 |
gh_patches_debug_16370 | rasdani/github-patches | git_diff | open-mmlab__mmaction2-676 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Localizer train cfg & test cfg ?
</issue>
<code>
[start of configs/_base_/models/bsn_tem.py]
1 # model settings
2 model = dict(
3 type='TEM',
4 temporal_dim=100,
5 boundary_ratio=0.1,
6 tem_feat_dim=400,
7 tem_hidden_dim=512,
8 tem_match_threshold=0.5)
9 # model training and testing settings
10 train_cfg = None
11 test_cfg = dict(average_clips='score')
12
[end of configs/_base_/models/bsn_tem.py]
[start of configs/_base_/models/bsn_pem.py]
1 # model settings
2 model = dict(
3 type='PEM',
4 pem_feat_dim=32,
5 pem_hidden_dim=256,
6 pem_u_ratio_m=1,
7 pem_u_ratio_l=2,
8 pem_high_temporal_iou_threshold=0.6,
9 pem_low_temporal_iou_threshold=0.2,
10 soft_nms_alpha=0.75,
11 soft_nms_low_threshold=0.65,
12 soft_nms_high_threshold=0.9,
13 post_process_top_k=100)
14 # model training and testing settings
15 train_cfg = None
16 test_cfg = dict(average_clips='score')
17
[end of configs/_base_/models/bsn_pem.py]
[start of configs/_base_/models/bmn_400x100.py]
1 # model settings
2 model = dict(
3 type='BMN',
4 temporal_dim=100,
5 boundary_ratio=0.5,
6 num_samples=32,
7 num_samples_per_bin=3,
8 feat_dim=400,
9 soft_nms_alpha=0.4,
10 soft_nms_low_threshold=0.5,
11 soft_nms_high_threshold=0.9,
12 post_process_top_k=100)
13 # model training and testing settings
14 train_cfg = None
15 test_cfg = dict(average_clips='score')
16
[end of configs/_base_/models/bmn_400x100.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/configs/_base_/models/bmn_400x100.py b/configs/_base_/models/bmn_400x100.py
--- a/configs/_base_/models/bmn_400x100.py
+++ b/configs/_base_/models/bmn_400x100.py
@@ -10,6 +10,3 @@
soft_nms_low_threshold=0.5,
soft_nms_high_threshold=0.9,
post_process_top_k=100)
-# model training and testing settings
-train_cfg = None
-test_cfg = dict(average_clips='score')
diff --git a/configs/_base_/models/bsn_pem.py b/configs/_base_/models/bsn_pem.py
--- a/configs/_base_/models/bsn_pem.py
+++ b/configs/_base_/models/bsn_pem.py
@@ -11,6 +11,3 @@
soft_nms_low_threshold=0.65,
soft_nms_high_threshold=0.9,
post_process_top_k=100)
-# model training and testing settings
-train_cfg = None
-test_cfg = dict(average_clips='score')
diff --git a/configs/_base_/models/bsn_tem.py b/configs/_base_/models/bsn_tem.py
--- a/configs/_base_/models/bsn_tem.py
+++ b/configs/_base_/models/bsn_tem.py
@@ -6,6 +6,3 @@
tem_feat_dim=400,
tem_hidden_dim=512,
tem_match_threshold=0.5)
-# model training and testing settings
-train_cfg = None
-test_cfg = dict(average_clips='score')
| {"golden_diff": "diff --git a/configs/_base_/models/bmn_400x100.py b/configs/_base_/models/bmn_400x100.py\n--- a/configs/_base_/models/bmn_400x100.py\n+++ b/configs/_base_/models/bmn_400x100.py\n@@ -10,6 +10,3 @@\n soft_nms_low_threshold=0.5,\n soft_nms_high_threshold=0.9,\n post_process_top_k=100)\n-# model training and testing settings\n-train_cfg = None\n-test_cfg = dict(average_clips='score')\ndiff --git a/configs/_base_/models/bsn_pem.py b/configs/_base_/models/bsn_pem.py\n--- a/configs/_base_/models/bsn_pem.py\n+++ b/configs/_base_/models/bsn_pem.py\n@@ -11,6 +11,3 @@\n soft_nms_low_threshold=0.65,\n soft_nms_high_threshold=0.9,\n post_process_top_k=100)\n-# model training and testing settings\n-train_cfg = None\n-test_cfg = dict(average_clips='score')\ndiff --git a/configs/_base_/models/bsn_tem.py b/configs/_base_/models/bsn_tem.py\n--- a/configs/_base_/models/bsn_tem.py\n+++ b/configs/_base_/models/bsn_tem.py\n@@ -6,6 +6,3 @@\n tem_feat_dim=400,\n tem_hidden_dim=512,\n tem_match_threshold=0.5)\n-# model training and testing settings\n-train_cfg = None\n-test_cfg = dict(average_clips='score')\n", "issue": "Localizer train cfg & test cfg ?\n\n", "before_files": [{"content": "# model settings\nmodel = dict(\n type='TEM',\n temporal_dim=100,\n boundary_ratio=0.1,\n tem_feat_dim=400,\n tem_hidden_dim=512,\n tem_match_threshold=0.5)\n# model training and testing settings\ntrain_cfg = None\ntest_cfg = dict(average_clips='score')\n", "path": "configs/_base_/models/bsn_tem.py"}, {"content": "# model settings\nmodel = dict(\n type='PEM',\n pem_feat_dim=32,\n pem_hidden_dim=256,\n pem_u_ratio_m=1,\n pem_u_ratio_l=2,\n pem_high_temporal_iou_threshold=0.6,\n pem_low_temporal_iou_threshold=0.2,\n soft_nms_alpha=0.75,\n soft_nms_low_threshold=0.65,\n soft_nms_high_threshold=0.9,\n post_process_top_k=100)\n# model training and testing settings\ntrain_cfg = None\ntest_cfg = dict(average_clips='score')\n", "path": "configs/_base_/models/bsn_pem.py"}, {"content": "# model settings\nmodel = dict(\n type='BMN',\n temporal_dim=100,\n boundary_ratio=0.5,\n num_samples=32,\n num_samples_per_bin=3,\n feat_dim=400,\n soft_nms_alpha=0.4,\n soft_nms_low_threshold=0.5,\n soft_nms_high_threshold=0.9,\n post_process_top_k=100)\n# model training and testing settings\ntrain_cfg = None\ntest_cfg = dict(average_clips='score')\n", "path": "configs/_base_/models/bmn_400x100.py"}]} | 1,023 | 398 |
gh_patches_debug_16280 | rasdani/github-patches | git_diff | mirumee__ariadne-35 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
If value from resolver is callable, call it with **kwargs.
[Apollo doc](https://www.apollographql.com/docs/graphql-tools/resolvers) for default resolver says that if `field_name` resolves to function, it will be called with query arguments:
> Calls a function on obj with the relevant field name and passes the query arguments into that function
This can be useful for situations when parent resolver returned an object with getter functions.
</issue>
<code>
[start of ariadne/resolvers.py]
1 from graphql import GraphQLObjectType, GraphQLScalarType, GraphQLSchema
2 from graphql.execution.base import ResolveInfo
3
4
5 def resolve_parent_field(parent, name: str):
6 if isinstance(parent, dict):
7 return parent.get(name)
8 return getattr(parent, name, None)
9
10
11 def default_resolver(parent, info: ResolveInfo):
12 return resolve_parent_field(parent, info.field_name)
13
14
15 def resolve_to(name: str):
16 def resolver(parent, *_):
17 return resolve_parent_field(parent, name)
18
19 return resolver
20
21
22 def add_resolve_functions_to_schema(schema: GraphQLSchema, resolvers: dict):
23 for type_name, type_object in schema.get_type_map().items():
24 if isinstance(type_object, GraphQLObjectType):
25 add_resolve_functions_to_object(type_name, type_object, resolvers)
26 if isinstance(type_object, GraphQLScalarType):
27 add_resolve_functions_to_scalar(type_name, type_object, resolvers)
28
29
30 def add_resolve_functions_to_object(name: str, obj: GraphQLObjectType, resolvers: dict):
31 type_resolvers = resolvers.get(name, {})
32 for field_name, field_object in obj.fields.items():
33 field_resolver = type_resolvers.get(field_name)
34 if field_resolver:
35 field_object.resolver = field_resolver
36 elif field_object.resolver is None:
37 field_object.resolver = default_resolver
38
39
40 def add_resolve_functions_to_scalar(name: str, obj: GraphQLObjectType, resolvers: dict):
41 scalar_resolvers = resolvers.get(name, {})
42
43 serialize = scalar_resolvers.get("serialize", obj.serialize)
44 obj.serialize = serialize
45
46 parse_literal = scalar_resolvers.get("parse_literal", obj.parse_literal)
47 obj.parse_literal = parse_literal
48
49 parse_value = scalar_resolvers.get("parse_value", obj.parse_value)
50 obj.parse_value = parse_value
51
[end of ariadne/resolvers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ariadne/resolvers.py b/ariadne/resolvers.py
--- a/ariadne/resolvers.py
+++ b/ariadne/resolvers.py
@@ -2,19 +2,23 @@
from graphql.execution.base import ResolveInfo
-def resolve_parent_field(parent, name: str):
+def resolve_parent_field(parent, name: str, **kwargs: dict):
if isinstance(parent, dict):
- return parent.get(name)
- return getattr(parent, name, None)
+ value = parent.get(name)
+ else:
+ value = getattr(parent, name, None)
+ if callable(value):
+ return value(**kwargs)
+ return value
-def default_resolver(parent, info: ResolveInfo):
- return resolve_parent_field(parent, info.field_name)
+def default_resolver(parent, info: ResolveInfo, **kwargs):
+ return resolve_parent_field(parent, info.field_name, **kwargs)
def resolve_to(name: str):
- def resolver(parent, *_):
- return resolve_parent_field(parent, name)
+ def resolver(parent, *_, **kwargs):
+ return resolve_parent_field(parent, name, **kwargs)
return resolver
| {"golden_diff": "diff --git a/ariadne/resolvers.py b/ariadne/resolvers.py\n--- a/ariadne/resolvers.py\n+++ b/ariadne/resolvers.py\n@@ -2,19 +2,23 @@\n from graphql.execution.base import ResolveInfo\n \n \n-def resolve_parent_field(parent, name: str):\n+def resolve_parent_field(parent, name: str, **kwargs: dict):\n if isinstance(parent, dict):\n- return parent.get(name)\n- return getattr(parent, name, None)\n+ value = parent.get(name)\n+ else:\n+ value = getattr(parent, name, None)\n+ if callable(value):\n+ return value(**kwargs)\n+ return value\n \n \n-def default_resolver(parent, info: ResolveInfo):\n- return resolve_parent_field(parent, info.field_name)\n+def default_resolver(parent, info: ResolveInfo, **kwargs):\n+ return resolve_parent_field(parent, info.field_name, **kwargs)\n \n \n def resolve_to(name: str):\n- def resolver(parent, *_):\n- return resolve_parent_field(parent, name)\n+ def resolver(parent, *_, **kwargs):\n+ return resolve_parent_field(parent, name, **kwargs)\n \n return resolver\n", "issue": "If value from resolver is callable, call it with **kwargs.\n[Apollo doc](https://www.apollographql.com/docs/graphql-tools/resolvers) for default resolver says that if `field_name` resolves to function, it will be called with query arguments:\r\n\r\n> Calls a function on obj with the relevant field name and passes the query arguments into that function\r\n\r\nThis can be useful for situations when parent resolver returned an object with getter functions.\n", "before_files": [{"content": "from graphql import GraphQLObjectType, GraphQLScalarType, GraphQLSchema\nfrom graphql.execution.base import ResolveInfo\n\n\ndef resolve_parent_field(parent, name: str):\n if isinstance(parent, dict):\n return parent.get(name)\n return getattr(parent, name, None)\n\n\ndef default_resolver(parent, info: ResolveInfo):\n return resolve_parent_field(parent, info.field_name)\n\n\ndef resolve_to(name: str):\n def resolver(parent, *_):\n return resolve_parent_field(parent, name)\n\n return resolver\n\n\ndef add_resolve_functions_to_schema(schema: GraphQLSchema, resolvers: dict):\n for type_name, type_object in schema.get_type_map().items():\n if isinstance(type_object, GraphQLObjectType):\n add_resolve_functions_to_object(type_name, type_object, resolvers)\n if isinstance(type_object, GraphQLScalarType):\n add_resolve_functions_to_scalar(type_name, type_object, resolvers)\n\n\ndef add_resolve_functions_to_object(name: str, obj: GraphQLObjectType, resolvers: dict):\n type_resolvers = resolvers.get(name, {})\n for field_name, field_object in obj.fields.items():\n field_resolver = type_resolvers.get(field_name)\n if field_resolver:\n field_object.resolver = field_resolver\n elif field_object.resolver is None:\n field_object.resolver = default_resolver\n\n\ndef add_resolve_functions_to_scalar(name: str, obj: GraphQLObjectType, resolvers: dict):\n scalar_resolvers = resolvers.get(name, {})\n\n serialize = scalar_resolvers.get(\"serialize\", obj.serialize)\n obj.serialize = serialize\n\n parse_literal = scalar_resolvers.get(\"parse_literal\", obj.parse_literal)\n obj.parse_literal = parse_literal\n\n parse_value = scalar_resolvers.get(\"parse_value\", obj.parse_value)\n obj.parse_value = parse_value\n", "path": "ariadne/resolvers.py"}]} | 1,102 | 260 |
gh_patches_debug_13586 | rasdani/github-patches | git_diff | pwndbg__pwndbg-146 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"show print elements 0" causes exceptions on stop
```
pwndbg> show print elements
Limit on string chars or array elements to print is unlimited.
Traceback (most recent call last):
File "/home/david/.pwndbg/pwndbg/events.py", line 111, in caller
func()
File "/home/david/.pwndbg/pwndbg/strings.py", line 34, in update_length
length = int(message)
File "/home/david/.pwndbg/pwndbg/inthook.py", line 44, in __new__
return _int(_int(value, *a, **kw))
ValueError: invalid literal for int() with base 10: 'unlimited'
Python Exception <class 'ValueError'> invalid literal for int() with base 10: 'unlimited':
```
</issue>
<code>
[start of pwndbg/strings.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 """
4 Functionality for resolving ASCII printable strings within
5 the debuggee's address space.
6 """
7 from __future__ import absolute_import
8 from __future__ import division
9 from __future__ import print_function
10 from __future__ import unicode_literals
11
12 import string
13
14 import gdb
15
16 import pwndbg.events
17 import pwndbg.memory
18 import pwndbg.typeinfo
19
20 length = 15
21
22 @pwndbg.events.stop
23 def update_length():
24 r"""
25 Unfortunately there's not a better way to get at this info.
26
27 >>> gdb.execute('show print elements', from_tty=False, to_string=True)
28 'Limit on string chars or array elements to print is 21.\n'
29 """
30 global length
31 message = gdb.execute('show print elements', from_tty=False, to_string=True)
32 message = message.split()[-1]
33 message = message.strip('.')
34 length = int(message)
35
36 def get(address, maxlen = None):
37 if maxlen is None:
38 maxlen = length
39
40 try:
41 sz = pwndbg.memory.string(address)
42 sz = sz.decode('latin-1', 'replace')
43
44 if not sz or not all(s in string.printable for s in sz):
45 return None
46 except Exception as e:
47 return None
48
49 if len(sz) < maxlen:
50 return sz
51
52 return sz[:maxlen] + '...'
53
[end of pwndbg/strings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pwndbg/strings.py b/pwndbg/strings.py
--- a/pwndbg/strings.py
+++ b/pwndbg/strings.py
@@ -31,7 +31,10 @@
message = gdb.execute('show print elements', from_tty=False, to_string=True)
message = message.split()[-1]
message = message.strip('.')
- length = int(message)
+ if message == 'unlimited':
+ length = 0
+ else:
+ length = int(message)
def get(address, maxlen = None):
if maxlen is None:
@@ -46,7 +49,7 @@
except Exception as e:
return None
- if len(sz) < maxlen:
+ if len(sz) < maxlen or not maxlen:
return sz
return sz[:maxlen] + '...'
| {"golden_diff": "diff --git a/pwndbg/strings.py b/pwndbg/strings.py\n--- a/pwndbg/strings.py\n+++ b/pwndbg/strings.py\n@@ -31,7 +31,10 @@\n message = gdb.execute('show print elements', from_tty=False, to_string=True)\n message = message.split()[-1]\n message = message.strip('.')\n- length = int(message)\n+ if message == 'unlimited':\n+ length = 0\n+ else:\n+ length = int(message)\n \n def get(address, maxlen = None):\n if maxlen is None:\n@@ -46,7 +49,7 @@\n except Exception as e:\n return None\n \n- if len(sz) < maxlen:\n+ if len(sz) < maxlen or not maxlen:\n return sz\n \n return sz[:maxlen] + '...'\n", "issue": "\"show print elements 0\" causes exceptions on stop\n```\r\npwndbg> show print elements\r\nLimit on string chars or array elements to print is unlimited.\r\nTraceback (most recent call last):\r\n File \"/home/david/.pwndbg/pwndbg/events.py\", line 111, in caller\r\n func()\r\n File \"/home/david/.pwndbg/pwndbg/strings.py\", line 34, in update_length\r\n length = int(message)\r\n File \"/home/david/.pwndbg/pwndbg/inthook.py\", line 44, in __new__\r\n return _int(_int(value, *a, **kw))\r\nValueError: invalid literal for int() with base 10: 'unlimited'\r\nPython Exception <class 'ValueError'> invalid literal for int() with base 10: 'unlimited': \r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nFunctionality for resolving ASCII printable strings within\nthe debuggee's address space.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport string\n\nimport gdb\n\nimport pwndbg.events\nimport pwndbg.memory\nimport pwndbg.typeinfo\n\nlength = 15\n\[email protected]\ndef update_length():\n r\"\"\"\n Unfortunately there's not a better way to get at this info.\n\n >>> gdb.execute('show print elements', from_tty=False, to_string=True)\n 'Limit on string chars or array elements to print is 21.\\n'\n \"\"\"\n global length\n message = gdb.execute('show print elements', from_tty=False, to_string=True)\n message = message.split()[-1]\n message = message.strip('.')\n length = int(message)\n\ndef get(address, maxlen = None):\n if maxlen is None:\n maxlen = length\n\n try:\n sz = pwndbg.memory.string(address)\n sz = sz.decode('latin-1', 'replace')\n\n if not sz or not all(s in string.printable for s in sz):\n return None\n except Exception as e:\n return None\n\n if len(sz) < maxlen:\n return sz\n\n return sz[:maxlen] + '...'\n", "path": "pwndbg/strings.py"}]} | 1,138 | 191 |
gh_patches_debug_1981 | rasdani/github-patches | git_diff | vyperlang__vyper-2905 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Missing @view decorator for interface ERC20Detailed.py
### Version Information
* vyper Version (output of `vyper --version`): 0.3.3
* OS: linux
* Python Version (output of `python --version`): Python 3.9.5
### What's your issue about?
**Issue**
Error using `ERC20Detailed.py` as an interface to a vyper class. Trying to compile the following snippet produces the following error.
```
# @version 0.3.3
from vyper.interfaces import ERC20Detailed
@view
@external
def getSymbol() -> String[32]:
return ERC20Detailed(0x5f3b5DfEb7B28CDbD7FAba78963EE202a494e2A2).symbol()
```
**Error**
```
vyper.exceptions.StateAccessViolation: May not call state modifying function 'symbol' within a constant
function.vyper.exceptions.StateAccessViolation: May not call state modifying function 'symbol' within a constant function.
```
**Reason**
This issue occurs because `ERC20Detailed.py` does not contain `@view` decorator for its interfaces
### How can it be fixed?
Adding `@view` decorator to interface under `vyper.builtin_interfaces.ERC20Detailed.py`
```
@external
@view
def name() -> String[1]:
pass
@external
@view
def symbol() -> String[1]:
pass
@external
@view
def decimals() -> uint8:
pass
```
**Why?**
Running `vyper -f interface examples/tokens/ERC20.vy` generates the following
```
...
@view
@external
def name() -> String[32]:
pass
@view
@external
def symbol() -> String[32]:
pass
@view
@external
def decimals() -> uint8:
pass
...
```
Adding `@view` decorator to `vyper.builtin_interfaces.ERC20Detailed.py` would make interface consistent.
</issue>
<code>
[start of vyper/builtin_interfaces/ERC20Detailed.py]
1 """
2 NOTE: interface uses `String[1]` where 1 is the lower bound of the string returned by the function.
3 For end-users this means they can't use `implements: ERC20Detailed` unless their implementation
4 uses a value n >= 1. Regardless this is fine as one can't do String[0] where n == 0.
5 """
6
7 interface_code = """
8 @external
9 def name() -> String[1]:
10 pass
11
12 @external
13 def symbol() -> String[1]:
14 pass
15
16 @external
17 def decimals() -> uint8:
18 pass
19 """
20
[end of vyper/builtin_interfaces/ERC20Detailed.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/vyper/builtin_interfaces/ERC20Detailed.py b/vyper/builtin_interfaces/ERC20Detailed.py
--- a/vyper/builtin_interfaces/ERC20Detailed.py
+++ b/vyper/builtin_interfaces/ERC20Detailed.py
@@ -5,14 +5,17 @@
"""
interface_code = """
+@view
@external
def name() -> String[1]:
pass
+@view
@external
def symbol() -> String[1]:
pass
+@view
@external
def decimals() -> uint8:
pass
| {"golden_diff": "diff --git a/vyper/builtin_interfaces/ERC20Detailed.py b/vyper/builtin_interfaces/ERC20Detailed.py\n--- a/vyper/builtin_interfaces/ERC20Detailed.py\n+++ b/vyper/builtin_interfaces/ERC20Detailed.py\n@@ -5,14 +5,17 @@\n \"\"\"\n \n interface_code = \"\"\"\n+@view\n @external\n def name() -> String[1]:\n pass\n \n+@view\n @external\n def symbol() -> String[1]:\n pass\n \n+@view\n @external\n def decimals() -> uint8:\n pass\n", "issue": "Missing @view decorator for interface ERC20Detailed.py\n### Version Information\r\n* vyper Version (output of `vyper --version`): 0.3.3\r\n* OS: linux\r\n* Python Version (output of `python --version`): Python 3.9.5\r\n### What's your issue about?\r\n**Issue**\r\nError using `ERC20Detailed.py` as an interface to a vyper class. Trying to compile the following snippet produces the following error.\r\n```\r\n# @version 0.3.3\r\n\r\nfrom vyper.interfaces import ERC20Detailed\r\n\r\n@view\r\n@external\r\ndef getSymbol() -> String[32]:\r\n return ERC20Detailed(0x5f3b5DfEb7B28CDbD7FAba78963EE202a494e2A2).symbol()\r\n```\r\n**Error**\r\n```\r\nvyper.exceptions.StateAccessViolation: May not call state modifying function 'symbol' within a constant\r\nfunction.vyper.exceptions.StateAccessViolation: May not call state modifying function 'symbol' within a constant function.\r\n```\r\n**Reason**\r\nThis issue occurs because `ERC20Detailed.py` does not contain `@view` decorator for its interfaces\r\n### How can it be fixed?\r\nAdding `@view` decorator to interface under `vyper.builtin_interfaces.ERC20Detailed.py`\r\n```\r\n@external\r\n@view\r\ndef name() -> String[1]:\r\n pass\r\n \r\n@external\r\n@view\r\ndef symbol() -> String[1]:\r\n pass\r\n \r\n@external\r\n@view\r\ndef decimals() -> uint8:\r\n pass\r\n```\r\n**Why?**\r\nRunning `vyper -f interface examples/tokens/ERC20.vy` generates the following\r\n```\r\n...\r\n@view\r\n@external\r\ndef name() -> String[32]:\r\n pass\r\n \r\n@view\r\n@external\r\ndef symbol() -> String[32]:\r\n pass\r\n \r\n@view\r\n@external\r\ndef decimals() -> uint8:\r\n pass\r\n...\r\n```\r\n\r\nAdding `@view` decorator to `vyper.builtin_interfaces.ERC20Detailed.py` would make interface consistent.\n", "before_files": [{"content": "\"\"\"\nNOTE: interface uses `String[1]` where 1 is the lower bound of the string returned by the function.\n For end-users this means they can't use `implements: ERC20Detailed` unless their implementation\n uses a value n >= 1. Regardless this is fine as one can't do String[0] where n == 0.\n\"\"\"\n\ninterface_code = \"\"\"\n@external\ndef name() -> String[1]:\n pass\n\n@external\ndef symbol() -> String[1]:\n pass\n\n@external\ndef decimals() -> uint8:\n pass\n\"\"\"\n", "path": "vyper/builtin_interfaces/ERC20Detailed.py"}]} | 1,159 | 128 |
gh_patches_debug_2390 | rasdani/github-patches | git_diff | Qiskit__qiskit-2448 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No module named 'vcr': requirement is missing (vcrpy)
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: 0.10.1
- **Python version**: 3.7.3
- **Operating system**: windows 10
### What is the current behavior?
Fresh qiskit installation inside a new environment on windows 10.
In one of the terra tutorial (using_the_transpiler) `from qiskit.test.mock import FakeTokyo` is failing 'ModuleNotFoundError: No module named vcr'
### Suggested solutions
'pip install vcrpy'
'vcrpy' needs to be added in requirements.
</issue>
<code>
[start of qiskit/util.py]
1 # -*- coding: utf-8 -*-
2 # This code is part of Qiskit.
3 #
4 # (C) Copyright IBM 2017.
5 #
6 # This code is licensed under the Apache License, Version 2.0. You may
7 # obtain a copy of this license in the LICENSE.txt file in the root directory
8 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
9 #
10 # Any modifications or derivative works of this code must retain this
11 # copyright notice, and modified files need to carry a notice indicating
12 # that they have been altered from the originals.
13
14 """Common utilities for Qiskit."""
15
16 import platform
17 import re
18 import socket
19 import sys
20 import warnings
21
22 import psutil
23 from marshmallow.warnings import ChangedInMarshmallow3Warning
24
25
26 def _check_python_version():
27 """Check for Python version 3.5+."""
28 if sys.version_info < (3, 5):
29 raise Exception('Qiskit requires Python version 3.5 or greater.')
30
31
32 def _filter_deprecation_warnings():
33 """Apply filters to deprecation warnings.
34
35 Force the `DeprecationWarning` warnings to be displayed for the qiskit
36 module, overriding the system configuration as they are ignored by default
37 [1] for end-users. Additionally, silence the `ChangedInMarshmallow3Warning`
38 messages.
39
40 TODO: on Python 3.7, this might not be needed due to PEP-0565 [2].
41
42 [1] https://docs.python.org/3/library/warnings.html#default-warning-filters
43 [2] https://www.python.org/dev/peps/pep-0565/
44 """
45 deprecation_filter = ('always', None, DeprecationWarning,
46 re.compile(r'^qiskit\.*', re.UNICODE), 0)
47
48 # Instead of using warnings.simple_filter() directly, the internal
49 # _add_filter() function is used for being able to match against the
50 # module.
51 try:
52 warnings._add_filter(*deprecation_filter, append=False)
53 except AttributeError:
54 # ._add_filter is internal and not available in some Python versions.
55 pass
56
57 # Add a filter for ignoring ChangedInMarshmallow3Warning, as we depend on
58 # marhsmallow 2 explicitly. 2.17.0 introduced new deprecation warnings that
59 # are useful for eventually migrating, but too verbose for our purposes.
60 warnings.simplefilter('ignore', category=ChangedInMarshmallow3Warning)
61
62
63 _check_python_version()
64 _filter_deprecation_warnings()
65
66
67 def local_hardware_info():
68 """Basic hardware information about the local machine.
69
70 Gives actual number of CPU's in the machine, even when hyperthreading is
71 turned on. CPU count defaults to 1 when true count can't be determined.
72
73 Returns:
74 dict: The hardware information.
75 """
76 results = {
77 'os': platform.system(),
78 'memory': psutil.virtual_memory().total / (1024 ** 3),
79 'cpus': psutil.cpu_count(logical=False) or 1
80 }
81 return results
82
83
84 def _has_connection(hostname, port):
85 """Checks if internet connection exists to host via specified port.
86
87 If any exception is raised while trying to open a socket this will return
88 false.
89
90 Args:
91 hostname (str): Hostname to connect to.
92 port (int): Port to connect to
93
94 Returns:
95 bool: Has connection or not
96
97 """
98 try:
99 host = socket.gethostbyname(hostname)
100 socket.create_connection((host, port), 2)
101 return True
102 except Exception: # pylint: disable=broad-except
103 return False
104
[end of qiskit/util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qiskit/util.py b/qiskit/util.py
--- a/qiskit/util.py
+++ b/qiskit/util.py
@@ -97,7 +97,7 @@
"""
try:
host = socket.gethostbyname(hostname)
- socket.create_connection((host, port), 2)
+ socket.create_connection((host, port), 2).close()
return True
except Exception: # pylint: disable=broad-except
return False
| {"golden_diff": "diff --git a/qiskit/util.py b/qiskit/util.py\n--- a/qiskit/util.py\n+++ b/qiskit/util.py\n@@ -97,7 +97,7 @@\n \"\"\"\n try:\n host = socket.gethostbyname(hostname)\n- socket.create_connection((host, port), 2)\n+ socket.create_connection((host, port), 2).close()\n return True\n except Exception: # pylint: disable=broad-except\n return False\n", "issue": "No module named 'vcr': requirement is missing (vcrpy) \n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues -->\r\n\r\n### Information\r\n\r\n- **Qiskit Terra version**: 0.10.1\r\n- **Python version**: 3.7.3\r\n- **Operating system**: windows 10\r\n\r\n### What is the current behavior?\r\nFresh qiskit installation inside a new environment on windows 10. \r\nIn one of the terra tutorial (using_the_transpiler) `from qiskit.test.mock import FakeTokyo` is failing 'ModuleNotFoundError: No module named vcr'\r\n\r\n### Suggested solutions\r\n'pip install vcrpy' \r\n'vcrpy' needs to be added in requirements.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"Common utilities for Qiskit.\"\"\"\n\nimport platform\nimport re\nimport socket\nimport sys\nimport warnings\n\nimport psutil\nfrom marshmallow.warnings import ChangedInMarshmallow3Warning\n\n\ndef _check_python_version():\n \"\"\"Check for Python version 3.5+.\"\"\"\n if sys.version_info < (3, 5):\n raise Exception('Qiskit requires Python version 3.5 or greater.')\n\n\ndef _filter_deprecation_warnings():\n \"\"\"Apply filters to deprecation warnings.\n\n Force the `DeprecationWarning` warnings to be displayed for the qiskit\n module, overriding the system configuration as they are ignored by default\n [1] for end-users. Additionally, silence the `ChangedInMarshmallow3Warning`\n messages.\n\n TODO: on Python 3.7, this might not be needed due to PEP-0565 [2].\n\n [1] https://docs.python.org/3/library/warnings.html#default-warning-filters\n [2] https://www.python.org/dev/peps/pep-0565/\n \"\"\"\n deprecation_filter = ('always', None, DeprecationWarning,\n re.compile(r'^qiskit\\.*', re.UNICODE), 0)\n\n # Instead of using warnings.simple_filter() directly, the internal\n # _add_filter() function is used for being able to match against the\n # module.\n try:\n warnings._add_filter(*deprecation_filter, append=False)\n except AttributeError:\n # ._add_filter is internal and not available in some Python versions.\n pass\n\n # Add a filter for ignoring ChangedInMarshmallow3Warning, as we depend on\n # marhsmallow 2 explicitly. 2.17.0 introduced new deprecation warnings that\n # are useful for eventually migrating, but too verbose for our purposes.\n warnings.simplefilter('ignore', category=ChangedInMarshmallow3Warning)\n\n\n_check_python_version()\n_filter_deprecation_warnings()\n\n\ndef local_hardware_info():\n \"\"\"Basic hardware information about the local machine.\n\n Gives actual number of CPU's in the machine, even when hyperthreading is\n turned on. CPU count defaults to 1 when true count can't be determined.\n\n Returns:\n dict: The hardware information.\n \"\"\"\n results = {\n 'os': platform.system(),\n 'memory': psutil.virtual_memory().total / (1024 ** 3),\n 'cpus': psutil.cpu_count(logical=False) or 1\n }\n return results\n\n\ndef _has_connection(hostname, port):\n \"\"\"Checks if internet connection exists to host via specified port.\n\n If any exception is raised while trying to open a socket this will return\n false.\n\n Args:\n hostname (str): Hostname to connect to.\n port (int): Port to connect to\n\n Returns:\n bool: Has connection or not\n\n \"\"\"\n try:\n host = socket.gethostbyname(hostname)\n socket.create_connection((host, port), 2)\n return True\n except Exception: # pylint: disable=broad-except\n return False\n", "path": "qiskit/util.py"}]} | 1,729 | 109 |
gh_patches_debug_16223 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-1930 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bump msrest to the 0.6.19 or higher
Is your feature request related to a problem? Please describe.
Old version of msrest is used in botframework components -> https://github.com/microsoft/botbuilder-python/search?q=msrest%3D%3D0.6.10 . This blocks us to use latest versions of the service bus client or event using the new language studio python libraries.
With msrest=0.6.10, we're blocked to using 0.50 service bus package and other packages like event grid.
Describe the solution you'd like
EDITED: Upgrade msrest to the at least 0.6.19 or higher.
Describe alternatives you've considered
No alternatives.
</issue>
<code>
[start of libraries/botframework-connector/setup.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 import os
5 from setuptools import setup
6
7 NAME = "botframework-connector"
8 VERSION = os.environ["packageVersion"] if "packageVersion" in os.environ else "4.15.0"
9 REQUIRES = [
10 "msrest==0.6.10",
11 "requests>=2.23.0,<2.26",
12 "PyJWT>=1.5.3,<2.0.0",
13 "botbuilder-schema==4.15.0",
14 "msal==1.6.0",
15 ]
16
17 root = os.path.abspath(os.path.dirname(__file__))
18
19 with open(os.path.join(root, "README.rst"), encoding="utf-8") as f:
20 long_description = f.read()
21
22 setup(
23 name=NAME,
24 version=VERSION,
25 description="Microsoft Bot Framework Bot Builder SDK for Python.",
26 author="Microsoft",
27 url="https://www.github.com/Microsoft/botbuilder-python",
28 keywords=["BotFrameworkConnector", "bots", "ai", "botframework", "botbuilder"],
29 install_requires=REQUIRES,
30 packages=[
31 "botframework.connector",
32 "botframework.connector.auth",
33 "botframework.connector.async_mixin",
34 "botframework.connector.operations",
35 "botframework.connector.models",
36 "botframework.connector.aio",
37 "botframework.connector.aio.operations_async",
38 "botframework.connector.skills",
39 "botframework.connector.teams",
40 "botframework.connector.teams.operations",
41 "botframework.connector.token_api",
42 "botframework.connector.token_api.aio",
43 "botframework.connector.token_api.aio.operations_async",
44 "botframework.connector.token_api.models",
45 "botframework.connector.token_api.operations",
46 ],
47 include_package_data=True,
48 long_description=long_description,
49 long_description_content_type="text/x-rst",
50 license="MIT",
51 classifiers=[
52 "Programming Language :: Python :: 3.7",
53 "Intended Audience :: Developers",
54 "License :: OSI Approved :: MIT License",
55 "Operating System :: OS Independent",
56 "Development Status :: 5 - Production/Stable",
57 "Topic :: Scientific/Engineering :: Artificial Intelligence",
58 ],
59 )
60
[end of libraries/botframework-connector/setup.py]
[start of libraries/botbuilder-schema/setup.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 import os
5 from setuptools import setup
6
7 NAME = "botbuilder-schema"
8 VERSION = os.environ["packageVersion"] if "packageVersion" in os.environ else "4.15.0"
9 REQUIRES = ["msrest==0.6.10"]
10
11 root = os.path.abspath(os.path.dirname(__file__))
12
13 with open(os.path.join(root, "README.rst"), encoding="utf-8") as f:
14 long_description = f.read()
15
16 setup(
17 name=NAME,
18 version=VERSION,
19 description="BotBuilder Schema",
20 author="Microsoft",
21 url="https://github.com/Microsoft/botbuilder-python",
22 keywords=["BotBuilderSchema", "bots", "ai", "botframework", "botbuilder"],
23 long_description=long_description,
24 long_description_content_type="text/x-rst",
25 license="MIT",
26 install_requires=REQUIRES,
27 packages=["botbuilder.schema", "botbuilder.schema.teams",],
28 include_package_data=True,
29 classifiers=[
30 "Programming Language :: Python :: 3.7",
31 "Intended Audience :: Developers",
32 "License :: OSI Approved :: MIT License",
33 "Operating System :: OS Independent",
34 "Development Status :: 5 - Production/Stable",
35 "Topic :: Scientific/Engineering :: Artificial Intelligence",
36 ],
37 )
38
[end of libraries/botbuilder-schema/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libraries/botbuilder-schema/setup.py b/libraries/botbuilder-schema/setup.py
--- a/libraries/botbuilder-schema/setup.py
+++ b/libraries/botbuilder-schema/setup.py
@@ -6,7 +6,7 @@
NAME = "botbuilder-schema"
VERSION = os.environ["packageVersion"] if "packageVersion" in os.environ else "4.15.0"
-REQUIRES = ["msrest==0.6.10"]
+REQUIRES = ["msrest==0.6.19"]
root = os.path.abspath(os.path.dirname(__file__))
diff --git a/libraries/botframework-connector/setup.py b/libraries/botframework-connector/setup.py
--- a/libraries/botframework-connector/setup.py
+++ b/libraries/botframework-connector/setup.py
@@ -7,7 +7,7 @@
NAME = "botframework-connector"
VERSION = os.environ["packageVersion"] if "packageVersion" in os.environ else "4.15.0"
REQUIRES = [
- "msrest==0.6.10",
+ "msrest==0.6.19",
"requests>=2.23.0,<2.26",
"PyJWT>=1.5.3,<2.0.0",
"botbuilder-schema==4.15.0",
| {"golden_diff": "diff --git a/libraries/botbuilder-schema/setup.py b/libraries/botbuilder-schema/setup.py\n--- a/libraries/botbuilder-schema/setup.py\n+++ b/libraries/botbuilder-schema/setup.py\n@@ -6,7 +6,7 @@\n \r\n NAME = \"botbuilder-schema\"\r\n VERSION = os.environ[\"packageVersion\"] if \"packageVersion\" in os.environ else \"4.15.0\"\r\n-REQUIRES = [\"msrest==0.6.10\"]\r\n+REQUIRES = [\"msrest==0.6.19\"]\r\n \r\n root = os.path.abspath(os.path.dirname(__file__))\r\n \r\ndiff --git a/libraries/botframework-connector/setup.py b/libraries/botframework-connector/setup.py\n--- a/libraries/botframework-connector/setup.py\n+++ b/libraries/botframework-connector/setup.py\n@@ -7,7 +7,7 @@\n NAME = \"botframework-connector\"\n VERSION = os.environ[\"packageVersion\"] if \"packageVersion\" in os.environ else \"4.15.0\"\n REQUIRES = [\n- \"msrest==0.6.10\",\n+ \"msrest==0.6.19\",\n \"requests>=2.23.0,<2.26\",\n \"PyJWT>=1.5.3,<2.0.0\",\n \"botbuilder-schema==4.15.0\",\n", "issue": "Bump msrest to the 0.6.19 or higher\nIs your feature request related to a problem? Please describe.\r\nOld version of msrest is used in botframework components -> https://github.com/microsoft/botbuilder-python/search?q=msrest%3D%3D0.6.10 . This blocks us to use latest versions of the service bus client or event using the new language studio python libraries.\r\n\r\nWith msrest=0.6.10, we're blocked to using 0.50 service bus package and other packages like event grid.\r\n\r\nDescribe the solution you'd like\r\nEDITED: Upgrade msrest to the at least 0.6.19 or higher.\r\n\r\nDescribe alternatives you've considered\r\nNo alternatives.\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nimport os\nfrom setuptools import setup\n\nNAME = \"botframework-connector\"\nVERSION = os.environ[\"packageVersion\"] if \"packageVersion\" in os.environ else \"4.15.0\"\nREQUIRES = [\n \"msrest==0.6.10\",\n \"requests>=2.23.0,<2.26\",\n \"PyJWT>=1.5.3,<2.0.0\",\n \"botbuilder-schema==4.15.0\",\n \"msal==1.6.0\",\n]\n\nroot = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(root, \"README.rst\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nsetup(\n name=NAME,\n version=VERSION,\n description=\"Microsoft Bot Framework Bot Builder SDK for Python.\",\n author=\"Microsoft\",\n url=\"https://www.github.com/Microsoft/botbuilder-python\",\n keywords=[\"BotFrameworkConnector\", \"bots\", \"ai\", \"botframework\", \"botbuilder\"],\n install_requires=REQUIRES,\n packages=[\n \"botframework.connector\",\n \"botframework.connector.auth\",\n \"botframework.connector.async_mixin\",\n \"botframework.connector.operations\",\n \"botframework.connector.models\",\n \"botframework.connector.aio\",\n \"botframework.connector.aio.operations_async\",\n \"botframework.connector.skills\",\n \"botframework.connector.teams\",\n \"botframework.connector.teams.operations\",\n \"botframework.connector.token_api\",\n \"botframework.connector.token_api.aio\",\n \"botframework.connector.token_api.aio.operations_async\",\n \"botframework.connector.token_api.models\",\n \"botframework.connector.token_api.operations\",\n ],\n include_package_data=True,\n long_description=long_description,\n long_description_content_type=\"text/x-rst\",\n license=\"MIT\",\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "libraries/botframework-connector/setup.py"}, {"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\r\n# Licensed under the MIT License.\r\n\r\nimport os\r\nfrom setuptools import setup\r\n\r\nNAME = \"botbuilder-schema\"\r\nVERSION = os.environ[\"packageVersion\"] if \"packageVersion\" in os.environ else \"4.15.0\"\r\nREQUIRES = [\"msrest==0.6.10\"]\r\n\r\nroot = os.path.abspath(os.path.dirname(__file__))\r\n\r\nwith open(os.path.join(root, \"README.rst\"), encoding=\"utf-8\") as f:\r\n long_description = f.read()\r\n\r\nsetup(\r\n name=NAME,\r\n version=VERSION,\r\n description=\"BotBuilder Schema\",\r\n author=\"Microsoft\",\r\n url=\"https://github.com/Microsoft/botbuilder-python\",\r\n keywords=[\"BotBuilderSchema\", \"bots\", \"ai\", \"botframework\", \"botbuilder\"],\r\n long_description=long_description,\r\n long_description_content_type=\"text/x-rst\",\r\n license=\"MIT\",\r\n install_requires=REQUIRES,\r\n packages=[\"botbuilder.schema\", \"botbuilder.schema.teams\",],\r\n include_package_data=True,\r\n classifiers=[\r\n \"Programming Language :: Python :: 3.7\",\r\n \"Intended Audience :: Developers\",\r\n \"License :: OSI Approved :: MIT License\",\r\n \"Operating System :: OS Independent\",\r\n \"Development Status :: 5 - Production/Stable\",\r\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\r\n ],\r\n)\r\n", "path": "libraries/botbuilder-schema/setup.py"}]} | 1,673 | 299 |
gh_patches_debug_11520 | rasdani/github-patches | git_diff | gratipay__gratipay.com-2999 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Font problem in production
> Font from origin 'https://assets.gratipay.com' has been blocked from loading by Cross-Origin Resource Sharing policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://gratipay.com' is therefore not allowed access.
</issue>
<code>
[start of gratipay/utils/cache_static.py]
1 """
2 Handles caching of static resources.
3 """
4 from base64 import b64encode
5 from hashlib import md5
6
7 from aspen import Response
8
9
10 ETAGS = {}
11
12
13 def asset_etag(path):
14 if path.endswith('.spt'):
15 return ''
16 if path in ETAGS:
17 h = ETAGS[path]
18 else:
19 with open(path) as f:
20 h = ETAGS[path] = b64encode(md5(f.read()).digest(), '-_').replace('=', '~')
21 return h
22
23
24 # algorithm functions
25
26 def get_etag_for_file(dispatch_result):
27 return {'etag': asset_etag(dispatch_result.match)}
28
29
30 def try_to_serve_304(website, dispatch_result, request, etag):
31 """Try to serve a 304 for static resources.
32 """
33 if not etag:
34 # This is a request for a dynamic resource.
35 return
36
37 qs_etag = request.line.uri.querystring.get('etag')
38 if qs_etag and qs_etag != etag:
39 # Don't serve one version of a file as if it were another.
40 raise Response(410)
41
42 headers_etag = request.headers.get('If-None-Match')
43 if not headers_etag:
44 # This client doesn't want a 304.
45 return
46
47 if headers_etag != etag:
48 # Cache miss, the client sent an old or invalid etag.
49 return
50
51 # Huzzah!
52 # =======
53 # We can serve a 304! :D
54
55 raise Response(304)
56
57
58 def add_caching_to_response(website, response, request=None, etag=None):
59 """Set caching headers for static resources.
60 """
61 if etag is None:
62 return
63 assert request is not None # sanity check
64
65 if response.code not in (200, 304):
66 return
67
68 # https://developers.google.com/speed/docs/best-practices/caching
69 response.headers['Vary'] = 'accept-encoding'
70 response.headers['Etag'] = etag
71
72 if request.line.uri.querystring.get('etag'):
73 # We can cache "indefinitely" when the querystring contains the etag.
74 response.headers['Cache-Control'] = 'public, max-age=31536000'
75 else:
76 # Otherwise we cache for 5 seconds
77 response.headers['Cache-Control'] = 'public, max-age=5'
78
[end of gratipay/utils/cache_static.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gratipay/utils/cache_static.py b/gratipay/utils/cache_static.py
--- a/gratipay/utils/cache_static.py
+++ b/gratipay/utils/cache_static.py
@@ -68,6 +68,9 @@
# https://developers.google.com/speed/docs/best-practices/caching
response.headers['Vary'] = 'accept-encoding'
response.headers['Etag'] = etag
+ # Set CORS header for https://assets.gratipay.com (see issue #2970)
+ if 'Access-Control-Allow-Origin' not in response.headers:
+ response.headers['Access-Control-Allow-Origin'] = 'https://gratipay.com'
if request.line.uri.querystring.get('etag'):
# We can cache "indefinitely" when the querystring contains the etag.
| {"golden_diff": "diff --git a/gratipay/utils/cache_static.py b/gratipay/utils/cache_static.py\n--- a/gratipay/utils/cache_static.py\n+++ b/gratipay/utils/cache_static.py\n@@ -68,6 +68,9 @@\n # https://developers.google.com/speed/docs/best-practices/caching\n response.headers['Vary'] = 'accept-encoding'\n response.headers['Etag'] = etag\n+ # Set CORS header for https://assets.gratipay.com (see issue #2970)\n+ if 'Access-Control-Allow-Origin' not in response.headers:\n+ response.headers['Access-Control-Allow-Origin'] = 'https://gratipay.com'\n \n if request.line.uri.querystring.get('etag'):\n # We can cache \"indefinitely\" when the querystring contains the etag.\n", "issue": "Font problem in production\n> Font from origin 'https://assets.gratipay.com' has been blocked from loading by Cross-Origin Resource Sharing policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://gratipay.com' is therefore not allowed access. \n\n", "before_files": [{"content": "\"\"\"\nHandles caching of static resources.\n\"\"\"\nfrom base64 import b64encode\nfrom hashlib import md5\n\nfrom aspen import Response\n\n\nETAGS = {}\n\n\ndef asset_etag(path):\n if path.endswith('.spt'):\n return ''\n if path in ETAGS:\n h = ETAGS[path]\n else:\n with open(path) as f:\n h = ETAGS[path] = b64encode(md5(f.read()).digest(), '-_').replace('=', '~')\n return h\n\n\n# algorithm functions\n\ndef get_etag_for_file(dispatch_result):\n return {'etag': asset_etag(dispatch_result.match)}\n\n\ndef try_to_serve_304(website, dispatch_result, request, etag):\n \"\"\"Try to serve a 304 for static resources.\n \"\"\"\n if not etag:\n # This is a request for a dynamic resource.\n return\n\n qs_etag = request.line.uri.querystring.get('etag')\n if qs_etag and qs_etag != etag:\n # Don't serve one version of a file as if it were another.\n raise Response(410)\n\n headers_etag = request.headers.get('If-None-Match')\n if not headers_etag:\n # This client doesn't want a 304.\n return\n\n if headers_etag != etag:\n # Cache miss, the client sent an old or invalid etag.\n return\n\n # Huzzah!\n # =======\n # We can serve a 304! :D\n\n raise Response(304)\n\n\ndef add_caching_to_response(website, response, request=None, etag=None):\n \"\"\"Set caching headers for static resources.\n \"\"\"\n if etag is None:\n return\n assert request is not None # sanity check\n\n if response.code not in (200, 304):\n return\n\n # https://developers.google.com/speed/docs/best-practices/caching\n response.headers['Vary'] = 'accept-encoding'\n response.headers['Etag'] = etag\n\n if request.line.uri.querystring.get('etag'):\n # We can cache \"indefinitely\" when the querystring contains the etag.\n response.headers['Cache-Control'] = 'public, max-age=31536000'\n else:\n # Otherwise we cache for 5 seconds\n response.headers['Cache-Control'] = 'public, max-age=5'\n", "path": "gratipay/utils/cache_static.py"}]} | 1,298 | 181 |
gh_patches_debug_2632 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-5433 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of extensions/inference/inference_ops_cuda.py]
1 from ..cuda_extension import _CudaExtension
2 from ..utils import get_cuda_cc_flag
3
4
5 class InferenceOpsCudaExtension(_CudaExtension):
6 def __init__(self):
7 super().__init__(name="inference_ops_cuda")
8
9 def sources_files(self):
10 ret = [
11 self.csrc_abs_path(fname)
12 for fname in [
13 "cuda/colossal_inference_C_frontend.cpp",
14 "cuda/decode_kv_cache_memcpy_kernel.cu",
15 ]
16 ]
17 return ret
18
19 def include_dirs(self):
20 ret = [self.get_cuda_home_include()]
21 return ret
22
23 def cxx_flags(self):
24 version_dependent_macros = ["-DVERSION_GE_1_1", "-DVERSION_GE_1_3", "-DVERSION_GE_1_5"]
25 return ["-O3"] + version_dependent_macros
26
27 def nvcc_flags(self):
28 extra_cuda_flags = ["-lineinfo"]
29 extra_cuda_flags.extend(get_cuda_cc_flag())
30 return ["-O3", "--use_fast_math"] + extra_cuda_flags
31
[end of extensions/inference/inference_ops_cuda.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/extensions/inference/inference_ops_cuda.py b/extensions/inference/inference_ops_cuda.py
--- a/extensions/inference/inference_ops_cuda.py
+++ b/extensions/inference/inference_ops_cuda.py
@@ -12,6 +12,7 @@
for fname in [
"cuda/colossal_inference_C_frontend.cpp",
"cuda/decode_kv_cache_memcpy_kernel.cu",
+ "cuda/activation_kernel.cu",
]
]
return ret
| {"golden_diff": "diff --git a/extensions/inference/inference_ops_cuda.py b/extensions/inference/inference_ops_cuda.py\n--- a/extensions/inference/inference_ops_cuda.py\n+++ b/extensions/inference/inference_ops_cuda.py\n@@ -12,6 +12,7 @@\n for fname in [\n \"cuda/colossal_inference_C_frontend.cpp\",\n \"cuda/decode_kv_cache_memcpy_kernel.cu\",\n+ \"cuda/activation_kernel.cu\",\n ]\n ]\n return ret\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from ..cuda_extension import _CudaExtension\nfrom ..utils import get_cuda_cc_flag\n\n\nclass InferenceOpsCudaExtension(_CudaExtension):\n def __init__(self):\n super().__init__(name=\"inference_ops_cuda\")\n\n def sources_files(self):\n ret = [\n self.csrc_abs_path(fname)\n for fname in [\n \"cuda/colossal_inference_C_frontend.cpp\",\n \"cuda/decode_kv_cache_memcpy_kernel.cu\",\n ]\n ]\n return ret\n\n def include_dirs(self):\n ret = [self.get_cuda_home_include()]\n return ret\n\n def cxx_flags(self):\n version_dependent_macros = [\"-DVERSION_GE_1_1\", \"-DVERSION_GE_1_3\", \"-DVERSION_GE_1_5\"]\n return [\"-O3\"] + version_dependent_macros\n\n def nvcc_flags(self):\n extra_cuda_flags = [\"-lineinfo\"]\n extra_cuda_flags.extend(get_cuda_cc_flag())\n return [\"-O3\", \"--use_fast_math\"] + extra_cuda_flags\n", "path": "extensions/inference/inference_ops_cuda.py"}]} | 848 | 104 |
gh_patches_debug_153 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-1018 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ratings don't federate
**Describe the bug**
I do follow someone on bookwyrm.social from bookwyrm.social and wyrms.de. I have seen on b.s that they rated some books without reviewing them, but those ratings do not appear on w.d. All other posts federate properly (I think).
**Expeceted behaviour**
The rating should show up on connected instances and ideally also be used on those to calculate the average rating of the book.
Here is one example that's not visible from w.d: https://bookwyrm.social/user/tastytea/reviewrating/21469
</issue>
<code>
[start of bookwyrm/activitypub/note.py]
1 """ note serializer and children thereof """
2 from dataclasses import dataclass, field
3 from typing import Dict, List
4 from django.apps import apps
5
6 from .base_activity import ActivityObject, Link
7 from .image import Document
8
9
10 @dataclass(init=False)
11 class Tombstone(ActivityObject):
12 """the placeholder for a deleted status"""
13
14 type: str = "Tombstone"
15
16 def to_model(self, *args, **kwargs): # pylint: disable=unused-argument
17 """this should never really get serialized, just searched for"""
18 model = apps.get_model("bookwyrm.Status")
19 return model.find_existing_by_remote_id(self.id)
20
21
22 @dataclass(init=False)
23 class Note(ActivityObject):
24 """Note activity"""
25
26 published: str
27 attributedTo: str
28 content: str = ""
29 to: List[str] = field(default_factory=lambda: [])
30 cc: List[str] = field(default_factory=lambda: [])
31 replies: Dict = field(default_factory=lambda: {})
32 inReplyTo: str = ""
33 summary: str = ""
34 tag: List[Link] = field(default_factory=lambda: [])
35 attachment: List[Document] = field(default_factory=lambda: [])
36 sensitive: bool = False
37 type: str = "Note"
38
39
40 @dataclass(init=False)
41 class Article(Note):
42 """what's an article except a note with more fields"""
43
44 name: str
45 type: str = "Article"
46
47
48 @dataclass(init=False)
49 class GeneratedNote(Note):
50 """just a re-typed note"""
51
52 type: str = "GeneratedNote"
53
54
55 @dataclass(init=False)
56 class Comment(Note):
57 """like a note but with a book"""
58
59 inReplyToBook: str
60 type: str = "Comment"
61
62
63 @dataclass(init=False)
64 class Quotation(Comment):
65 """a quote and commentary on a book"""
66
67 quote: str
68 type: str = "Quotation"
69
70
71 @dataclass(init=False)
72 class Review(Comment):
73 """a full book review"""
74
75 name: str = None
76 rating: int = None
77 type: str = "Review"
78
79
80 @dataclass(init=False)
81 class Rating(Comment):
82 """just a star rating"""
83
84 rating: int
85 content: str = None
86 type: str = "Rating"
87
[end of bookwyrm/activitypub/note.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bookwyrm/activitypub/note.py b/bookwyrm/activitypub/note.py
--- a/bookwyrm/activitypub/note.py
+++ b/bookwyrm/activitypub/note.py
@@ -83,4 +83,5 @@
rating: int
content: str = None
+ name: str = None # not used, but the model inherits from Review
type: str = "Rating"
| {"golden_diff": "diff --git a/bookwyrm/activitypub/note.py b/bookwyrm/activitypub/note.py\n--- a/bookwyrm/activitypub/note.py\n+++ b/bookwyrm/activitypub/note.py\n@@ -83,4 +83,5 @@\n \n rating: int\n content: str = None\n+ name: str = None # not used, but the model inherits from Review\n type: str = \"Rating\"\n", "issue": "Ratings don't federate\n**Describe the bug**\r\nI do follow someone on bookwyrm.social from bookwyrm.social and wyrms.de. I have seen on b.s that they rated some books without reviewing them, but those ratings do not appear on w.d. All other posts federate properly (I think).\r\n\r\n**Expeceted behaviour**\r\nThe rating should show up on connected instances and ideally also be used on those to calculate the average rating of the book.\r\n\r\nHere is one example that's not visible from w.d: https://bookwyrm.social/user/tastytea/reviewrating/21469\n", "before_files": [{"content": "\"\"\" note serializer and children thereof \"\"\"\nfrom dataclasses import dataclass, field\nfrom typing import Dict, List\nfrom django.apps import apps\n\nfrom .base_activity import ActivityObject, Link\nfrom .image import Document\n\n\n@dataclass(init=False)\nclass Tombstone(ActivityObject):\n \"\"\"the placeholder for a deleted status\"\"\"\n\n type: str = \"Tombstone\"\n\n def to_model(self, *args, **kwargs): # pylint: disable=unused-argument\n \"\"\"this should never really get serialized, just searched for\"\"\"\n model = apps.get_model(\"bookwyrm.Status\")\n return model.find_existing_by_remote_id(self.id)\n\n\n@dataclass(init=False)\nclass Note(ActivityObject):\n \"\"\"Note activity\"\"\"\n\n published: str\n attributedTo: str\n content: str = \"\"\n to: List[str] = field(default_factory=lambda: [])\n cc: List[str] = field(default_factory=lambda: [])\n replies: Dict = field(default_factory=lambda: {})\n inReplyTo: str = \"\"\n summary: str = \"\"\n tag: List[Link] = field(default_factory=lambda: [])\n attachment: List[Document] = field(default_factory=lambda: [])\n sensitive: bool = False\n type: str = \"Note\"\n\n\n@dataclass(init=False)\nclass Article(Note):\n \"\"\"what's an article except a note with more fields\"\"\"\n\n name: str\n type: str = \"Article\"\n\n\n@dataclass(init=False)\nclass GeneratedNote(Note):\n \"\"\"just a re-typed note\"\"\"\n\n type: str = \"GeneratedNote\"\n\n\n@dataclass(init=False)\nclass Comment(Note):\n \"\"\"like a note but with a book\"\"\"\n\n inReplyToBook: str\n type: str = \"Comment\"\n\n\n@dataclass(init=False)\nclass Quotation(Comment):\n \"\"\"a quote and commentary on a book\"\"\"\n\n quote: str\n type: str = \"Quotation\"\n\n\n@dataclass(init=False)\nclass Review(Comment):\n \"\"\"a full book review\"\"\"\n\n name: str = None\n rating: int = None\n type: str = \"Review\"\n\n\n@dataclass(init=False)\nclass Rating(Comment):\n \"\"\"just a star rating\"\"\"\n\n rating: int\n content: str = None\n type: str = \"Rating\"\n", "path": "bookwyrm/activitypub/note.py"}]} | 1,339 | 97 |
gh_patches_debug_591 | rasdani/github-patches | git_diff | pex-tool__pex-1140 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.23
On the docket:
+ [x] Upgrade Pex to Pip 20.3.1. #1133
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.22"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.22"
+__version__ = "2.1.23"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.22\"\n+__version__ = \"2.1.23\"\n", "issue": "Release 2.1.23\nOn the docket:\r\n+ [x] Upgrade Pex to Pip 20.3.1. #1133\r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.22\"\n", "path": "pex/version.py"}]} | 621 | 97 |
gh_patches_debug_13712 | rasdani/github-patches | git_diff | chainer__chainer-1312 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`split_axis` doesn't support empty sections
This code causes a TypeError.
`functions.split_axis(x, [], 0)`
</issue>
<code>
[start of chainer/functions/array/split_axis.py]
1 import collections
2
3 import six
4
5 import chainer
6 from chainer import cuda
7 from chainer import function
8 from chainer.utils import type_check
9
10
11 class SplitAxis(function.Function):
12
13 """Function that splits multiple arrays along the specified axis."""
14
15 def __init__(self, indices_or_sections, axis):
16 if not isinstance(indices_or_sections, (int, collections.Iterable)):
17 raise TypeError('indices_or_sections must be integer or 1-D array')
18 self.indices_or_sections = indices_or_sections
19 self.axis = axis
20
21 def check_type_forward(self, in_types):
22 type_check.expect(in_types.size() == 1)
23 type_check.expect(in_types[0].ndim > self.axis)
24
25 if isinstance(self.indices_or_sections, collections.Iterable):
26 max_index = type_check.Variable(
27 self.indices_or_sections[-1], 'max_index')
28 type_check.expect(in_types[0].shape[self.axis] > max_index)
29 else:
30 sections = type_check.Variable(
31 self.indices_or_sections, 'sections')
32 type_check.expect(in_types[0].shape[self.axis] % sections == 0)
33
34 def forward(self, x):
35 if isinstance(self.indices_or_sections, collections.Iterable):
36 cdimx = x[0].shape[self.axis]
37 ind = list(self.indices_or_sections)
38 ind.append(cdimx)
39 prev_i = 0
40 for i in ind:
41 cdimy = max(0, min(i, cdimx) - prev_i)
42 if cdimy == 0:
43 raise ValueError('Not support if shape contains 0')
44 prev_i = i
45 xp = cuda.get_array_module(*x)
46 return tuple(xp.split(x[0], self.indices_or_sections, self.axis))
47
48 def backward(self, x, gys):
49 xp = cuda.get_array_module(*x)
50 if any(gy is None for gy in gys):
51 gx = xp.zeros_like(x[0])
52 gxs = xp.split(gx, self.indices_or_sections, self.axis)
53 for gxi, gy in six.moves.zip(gxs, gys):
54 if gy is None:
55 continue
56 gxi[:] = gy
57 return gx,
58 else:
59 return xp.concatenate(gys, axis=self.axis),
60
61
62 def split_axis(x, indices_or_sections, axis, force_tuple=False):
63 """Splits given variables along an axis.
64
65 Args:
66 x (tuple of Variables): Variables to be split.
67 indices_or_sections (int or 1-D array): If this argument is an integer,
68 N, the array will be divided into N equal arrays along axis.
69 If it is a 1-D array of sorted integers, it
70 indicates the positions where the array is split.
71 axis (int): Axis that the input array is split along.
72 force_tuple (bool): If ``True``, this method returns a tuple even when
73 the number of outputs is one.
74
75 Returns:
76 tuple or Variable: Tuple of :class:`~chainer.Variable` objects
77 if the number of outputs is more than 1 or
78 :class:`~chainer.Variable` otherwise.
79 When ``force_tuple`` is ``True``, returned value is always a tuple
80 regardless of the number of outputs.
81
82 .. note::
83 This function raises :class:`ValueError` if at least
84 one of the outputs is split to zero-size
85 (i.e. ``axis``-th value of its shape is zero).
86
87 """
88 res = SplitAxis(indices_or_sections, axis)(x)
89 if force_tuple and isinstance(res, chainer.Variable):
90 res = (res,)
91 return res
92
[end of chainer/functions/array/split_axis.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/functions/array/split_axis.py b/chainer/functions/array/split_axis.py
--- a/chainer/functions/array/split_axis.py
+++ b/chainer/functions/array/split_axis.py
@@ -23,9 +23,10 @@
type_check.expect(in_types[0].ndim > self.axis)
if isinstance(self.indices_or_sections, collections.Iterable):
- max_index = type_check.Variable(
- self.indices_or_sections[-1], 'max_index')
- type_check.expect(in_types[0].shape[self.axis] > max_index)
+ if len(self.indices_or_sections) > 0:
+ max_index = type_check.Variable(
+ self.indices_or_sections[-1], 'max_index')
+ type_check.expect(in_types[0].shape[self.axis] > max_index)
else:
sections = type_check.Variable(
self.indices_or_sections, 'sections')
| {"golden_diff": "diff --git a/chainer/functions/array/split_axis.py b/chainer/functions/array/split_axis.py\n--- a/chainer/functions/array/split_axis.py\n+++ b/chainer/functions/array/split_axis.py\n@@ -23,9 +23,10 @@\n type_check.expect(in_types[0].ndim > self.axis)\n \n if isinstance(self.indices_or_sections, collections.Iterable):\n- max_index = type_check.Variable(\n- self.indices_or_sections[-1], 'max_index')\n- type_check.expect(in_types[0].shape[self.axis] > max_index)\n+ if len(self.indices_or_sections) > 0:\n+ max_index = type_check.Variable(\n+ self.indices_or_sections[-1], 'max_index')\n+ type_check.expect(in_types[0].shape[self.axis] > max_index)\n else:\n sections = type_check.Variable(\n self.indices_or_sections, 'sections')\n", "issue": "`split_axis` doesn't support empty sections\nThis code causes a TypeError.\n`functions.split_axis(x, [], 0)`\n\n", "before_files": [{"content": "import collections\n\nimport six\n\nimport chainer\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.utils import type_check\n\n\nclass SplitAxis(function.Function):\n\n \"\"\"Function that splits multiple arrays along the specified axis.\"\"\"\n\n def __init__(self, indices_or_sections, axis):\n if not isinstance(indices_or_sections, (int, collections.Iterable)):\n raise TypeError('indices_or_sections must be integer or 1-D array')\n self.indices_or_sections = indices_or_sections\n self.axis = axis\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 1)\n type_check.expect(in_types[0].ndim > self.axis)\n\n if isinstance(self.indices_or_sections, collections.Iterable):\n max_index = type_check.Variable(\n self.indices_or_sections[-1], 'max_index')\n type_check.expect(in_types[0].shape[self.axis] > max_index)\n else:\n sections = type_check.Variable(\n self.indices_or_sections, 'sections')\n type_check.expect(in_types[0].shape[self.axis] % sections == 0)\n\n def forward(self, x):\n if isinstance(self.indices_or_sections, collections.Iterable):\n cdimx = x[0].shape[self.axis]\n ind = list(self.indices_or_sections)\n ind.append(cdimx)\n prev_i = 0\n for i in ind:\n cdimy = max(0, min(i, cdimx) - prev_i)\n if cdimy == 0:\n raise ValueError('Not support if shape contains 0')\n prev_i = i\n xp = cuda.get_array_module(*x)\n return tuple(xp.split(x[0], self.indices_or_sections, self.axis))\n\n def backward(self, x, gys):\n xp = cuda.get_array_module(*x)\n if any(gy is None for gy in gys):\n gx = xp.zeros_like(x[0])\n gxs = xp.split(gx, self.indices_or_sections, self.axis)\n for gxi, gy in six.moves.zip(gxs, gys):\n if gy is None:\n continue\n gxi[:] = gy\n return gx,\n else:\n return xp.concatenate(gys, axis=self.axis),\n\n\ndef split_axis(x, indices_or_sections, axis, force_tuple=False):\n \"\"\"Splits given variables along an axis.\n\n Args:\n x (tuple of Variables): Variables to be split.\n indices_or_sections (int or 1-D array): If this argument is an integer,\n N, the array will be divided into N equal arrays along axis.\n If it is a 1-D array of sorted integers, it\n indicates the positions where the array is split.\n axis (int): Axis that the input array is split along.\n force_tuple (bool): If ``True``, this method returns a tuple even when\n the number of outputs is one.\n\n Returns:\n tuple or Variable: Tuple of :class:`~chainer.Variable` objects\n if the number of outputs is more than 1 or\n :class:`~chainer.Variable` otherwise.\n When ``force_tuple`` is ``True``, returned value is always a tuple\n regardless of the number of outputs.\n\n .. note::\n This function raises :class:`ValueError` if at least\n one of the outputs is split to zero-size\n (i.e. ``axis``-th value of its shape is zero).\n\n \"\"\"\n res = SplitAxis(indices_or_sections, axis)(x)\n if force_tuple and isinstance(res, chainer.Variable):\n res = (res,)\n return res\n", "path": "chainer/functions/array/split_axis.py"}]} | 1,527 | 198 |
gh_patches_debug_35028 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-1071 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Compatibility with pydantic 1.4
I'm trying to use strawberry in a project that has pydantic pinned at 1.4. I chatted with @patrick91 on discord about this, and he thought it would be reasonable to achieve compatibility with this version.
Pydantic appears to only be used in the [strawberry.experimental](https://github.com/strawberry-graphql/strawberry/blob/main/strawberry/experimental/__init__.py) module, which only gets loaded if pydantic is present. One way to solve this for me in particular would be to lazily load strawberry.experimental/pydantic, such that when an older version of pydantic is present, one can still import other packages in strawberry.
Thank you!
</issue>
<code>
[start of strawberry/experimental/pydantic/fields.py]
1 from decimal import Decimal
2 from typing import Optional
3 from uuid import UUID
4
5 import pydantic
6
7 from .exceptions import UnsupportedTypeError
8
9
10 FIELDS_MAP = {
11 pydantic.NoneStr: Optional[str],
12 pydantic.NoneBytes: Optional[bytes],
13 pydantic.StrBytes: None,
14 pydantic.NoneStrBytes: None,
15 pydantic.StrictStr: str,
16 pydantic.ConstrainedBytes: bytes,
17 pydantic.conbytes: bytes,
18 pydantic.ConstrainedList: None,
19 pydantic.conlist: None,
20 pydantic.ConstrainedSet: None,
21 pydantic.conset: None,
22 pydantic.ConstrainedStr: str,
23 pydantic.constr: str,
24 pydantic.EmailStr: str,
25 pydantic.PyObject: None,
26 pydantic.ConstrainedInt: int,
27 pydantic.conint: int,
28 pydantic.PositiveInt: int,
29 pydantic.NegativeInt: int,
30 pydantic.ConstrainedFloat: float,
31 pydantic.confloat: float,
32 pydantic.PositiveFloat: float,
33 pydantic.NegativeFloat: float,
34 pydantic.ConstrainedDecimal: Decimal,
35 pydantic.condecimal: Decimal,
36 pydantic.UUID1: UUID,
37 pydantic.UUID3: UUID,
38 pydantic.UUID4: UUID,
39 pydantic.UUID5: UUID,
40 pydantic.FilePath: None,
41 pydantic.DirectoryPath: None,
42 pydantic.Json: None,
43 pydantic.JsonWrapper: None,
44 pydantic.SecretStr: str,
45 pydantic.SecretBytes: bytes,
46 pydantic.StrictBool: bool,
47 pydantic.StrictInt: int,
48 pydantic.StrictFloat: float,
49 pydantic.PaymentCardNumber: None,
50 pydantic.ByteSize: None,
51 pydantic.AnyUrl: str,
52 pydantic.AnyHttpUrl: str,
53 pydantic.HttpUrl: str,
54 pydantic.PostgresDsn: str,
55 pydantic.RedisDsn: str,
56 }
57
58
59 def get_basic_type(type_):
60 if isinstance(type_, type):
61 if issubclass(type_, pydantic.ConstrainedInt):
62 return int
63 if issubclass(type_, pydantic.ConstrainedStr):
64 return str
65
66 if type_ in FIELDS_MAP:
67 type_ = FIELDS_MAP.get(type_)
68
69 if type_ is None:
70 raise UnsupportedTypeError()
71
72 return type_
73
[end of strawberry/experimental/pydantic/fields.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/experimental/pydantic/fields.py b/strawberry/experimental/pydantic/fields.py
--- a/strawberry/experimental/pydantic/fields.py
+++ b/strawberry/experimental/pydantic/fields.py
@@ -7,52 +7,59 @@
from .exceptions import UnsupportedTypeError
+ATTR_TO_TYPE_MAP = {
+ "NoneStr": Optional[str],
+ "NoneBytes": Optional[bytes],
+ "StrBytes": None,
+ "NoneStrBytes": None,
+ "StrictStr": str,
+ "ConstrainedBytes": bytes,
+ "conbytes": bytes,
+ "ConstrainedList": None,
+ "conlist": None,
+ "ConstrainedSet": None,
+ "conset": None,
+ "ConstrainedStr": str,
+ "constr": str,
+ "EmailStr": str,
+ "PyObject": None,
+ "ConstrainedInt": int,
+ "conint": int,
+ "PositiveInt": int,
+ "NegativeInt": int,
+ "ConstrainedFloat": float,
+ "confloat": float,
+ "PositiveFloat": float,
+ "NegativeFloat": float,
+ "ConstrainedDecimal": Decimal,
+ "condecimal": Decimal,
+ "UUID1": UUID,
+ "UUID3": UUID,
+ "UUID4": UUID,
+ "UUID5": UUID,
+ "FilePath": None,
+ "DirectoryPath": None,
+ "Json": None,
+ "JsonWrapper": None,
+ "SecretStr": str,
+ "SecretBytes": bytes,
+ "StrictBool": bool,
+ "StrictInt": int,
+ "StrictFloat": float,
+ "PaymentCardNumber": None,
+ "ByteSize": None,
+ "AnyUrl": str,
+ "AnyHttpUrl": str,
+ "HttpUrl": str,
+ "PostgresDsn": str,
+ "RedisDsn": str,
+}
+
+
FIELDS_MAP = {
- pydantic.NoneStr: Optional[str],
- pydantic.NoneBytes: Optional[bytes],
- pydantic.StrBytes: None,
- pydantic.NoneStrBytes: None,
- pydantic.StrictStr: str,
- pydantic.ConstrainedBytes: bytes,
- pydantic.conbytes: bytes,
- pydantic.ConstrainedList: None,
- pydantic.conlist: None,
- pydantic.ConstrainedSet: None,
- pydantic.conset: None,
- pydantic.ConstrainedStr: str,
- pydantic.constr: str,
- pydantic.EmailStr: str,
- pydantic.PyObject: None,
- pydantic.ConstrainedInt: int,
- pydantic.conint: int,
- pydantic.PositiveInt: int,
- pydantic.NegativeInt: int,
- pydantic.ConstrainedFloat: float,
- pydantic.confloat: float,
- pydantic.PositiveFloat: float,
- pydantic.NegativeFloat: float,
- pydantic.ConstrainedDecimal: Decimal,
- pydantic.condecimal: Decimal,
- pydantic.UUID1: UUID,
- pydantic.UUID3: UUID,
- pydantic.UUID4: UUID,
- pydantic.UUID5: UUID,
- pydantic.FilePath: None,
- pydantic.DirectoryPath: None,
- pydantic.Json: None,
- pydantic.JsonWrapper: None,
- pydantic.SecretStr: str,
- pydantic.SecretBytes: bytes,
- pydantic.StrictBool: bool,
- pydantic.StrictInt: int,
- pydantic.StrictFloat: float,
- pydantic.PaymentCardNumber: None,
- pydantic.ByteSize: None,
- pydantic.AnyUrl: str,
- pydantic.AnyHttpUrl: str,
- pydantic.HttpUrl: str,
- pydantic.PostgresDsn: str,
- pydantic.RedisDsn: str,
+ getattr(pydantic, field_name): type
+ for field_name, type in ATTR_TO_TYPE_MAP.items()
+ if hasattr(pydantic, field_name)
}
| {"golden_diff": "diff --git a/strawberry/experimental/pydantic/fields.py b/strawberry/experimental/pydantic/fields.py\n--- a/strawberry/experimental/pydantic/fields.py\n+++ b/strawberry/experimental/pydantic/fields.py\n@@ -7,52 +7,59 @@\n from .exceptions import UnsupportedTypeError\n \n \n+ATTR_TO_TYPE_MAP = {\n+ \"NoneStr\": Optional[str],\n+ \"NoneBytes\": Optional[bytes],\n+ \"StrBytes\": None,\n+ \"NoneStrBytes\": None,\n+ \"StrictStr\": str,\n+ \"ConstrainedBytes\": bytes,\n+ \"conbytes\": bytes,\n+ \"ConstrainedList\": None,\n+ \"conlist\": None,\n+ \"ConstrainedSet\": None,\n+ \"conset\": None,\n+ \"ConstrainedStr\": str,\n+ \"constr\": str,\n+ \"EmailStr\": str,\n+ \"PyObject\": None,\n+ \"ConstrainedInt\": int,\n+ \"conint\": int,\n+ \"PositiveInt\": int,\n+ \"NegativeInt\": int,\n+ \"ConstrainedFloat\": float,\n+ \"confloat\": float,\n+ \"PositiveFloat\": float,\n+ \"NegativeFloat\": float,\n+ \"ConstrainedDecimal\": Decimal,\n+ \"condecimal\": Decimal,\n+ \"UUID1\": UUID,\n+ \"UUID3\": UUID,\n+ \"UUID4\": UUID,\n+ \"UUID5\": UUID,\n+ \"FilePath\": None,\n+ \"DirectoryPath\": None,\n+ \"Json\": None,\n+ \"JsonWrapper\": None,\n+ \"SecretStr\": str,\n+ \"SecretBytes\": bytes,\n+ \"StrictBool\": bool,\n+ \"StrictInt\": int,\n+ \"StrictFloat\": float,\n+ \"PaymentCardNumber\": None,\n+ \"ByteSize\": None,\n+ \"AnyUrl\": str,\n+ \"AnyHttpUrl\": str,\n+ \"HttpUrl\": str,\n+ \"PostgresDsn\": str,\n+ \"RedisDsn\": str,\n+}\n+\n+\n FIELDS_MAP = {\n- pydantic.NoneStr: Optional[str],\n- pydantic.NoneBytes: Optional[bytes],\n- pydantic.StrBytes: None,\n- pydantic.NoneStrBytes: None,\n- pydantic.StrictStr: str,\n- pydantic.ConstrainedBytes: bytes,\n- pydantic.conbytes: bytes,\n- pydantic.ConstrainedList: None,\n- pydantic.conlist: None,\n- pydantic.ConstrainedSet: None,\n- pydantic.conset: None,\n- pydantic.ConstrainedStr: str,\n- pydantic.constr: str,\n- pydantic.EmailStr: str,\n- pydantic.PyObject: None,\n- pydantic.ConstrainedInt: int,\n- pydantic.conint: int,\n- pydantic.PositiveInt: int,\n- pydantic.NegativeInt: int,\n- pydantic.ConstrainedFloat: float,\n- pydantic.confloat: float,\n- pydantic.PositiveFloat: float,\n- pydantic.NegativeFloat: float,\n- pydantic.ConstrainedDecimal: Decimal,\n- pydantic.condecimal: Decimal,\n- pydantic.UUID1: UUID,\n- pydantic.UUID3: UUID,\n- pydantic.UUID4: UUID,\n- pydantic.UUID5: UUID,\n- pydantic.FilePath: None,\n- pydantic.DirectoryPath: None,\n- pydantic.Json: None,\n- pydantic.JsonWrapper: None,\n- pydantic.SecretStr: str,\n- pydantic.SecretBytes: bytes,\n- pydantic.StrictBool: bool,\n- pydantic.StrictInt: int,\n- pydantic.StrictFloat: float,\n- pydantic.PaymentCardNumber: None,\n- pydantic.ByteSize: None,\n- pydantic.AnyUrl: str,\n- pydantic.AnyHttpUrl: str,\n- pydantic.HttpUrl: str,\n- pydantic.PostgresDsn: str,\n- pydantic.RedisDsn: str,\n+ getattr(pydantic, field_name): type\n+ for field_name, type in ATTR_TO_TYPE_MAP.items()\n+ if hasattr(pydantic, field_name)\n }\n", "issue": "Compatibility with pydantic 1.4\nI'm trying to use strawberry in a project that has pydantic pinned at 1.4. I chatted with @patrick91 on discord about this, and he thought it would be reasonable to achieve compatibility with this version.\r\n\r\nPydantic appears to only be used in the [strawberry.experimental](https://github.com/strawberry-graphql/strawberry/blob/main/strawberry/experimental/__init__.py) module, which only gets loaded if pydantic is present. One way to solve this for me in particular would be to lazily load strawberry.experimental/pydantic, such that when an older version of pydantic is present, one can still import other packages in strawberry.\r\n\r\nThank you!\n", "before_files": [{"content": "from decimal import Decimal\nfrom typing import Optional\nfrom uuid import UUID\n\nimport pydantic\n\nfrom .exceptions import UnsupportedTypeError\n\n\nFIELDS_MAP = {\n pydantic.NoneStr: Optional[str],\n pydantic.NoneBytes: Optional[bytes],\n pydantic.StrBytes: None,\n pydantic.NoneStrBytes: None,\n pydantic.StrictStr: str,\n pydantic.ConstrainedBytes: bytes,\n pydantic.conbytes: bytes,\n pydantic.ConstrainedList: None,\n pydantic.conlist: None,\n pydantic.ConstrainedSet: None,\n pydantic.conset: None,\n pydantic.ConstrainedStr: str,\n pydantic.constr: str,\n pydantic.EmailStr: str,\n pydantic.PyObject: None,\n pydantic.ConstrainedInt: int,\n pydantic.conint: int,\n pydantic.PositiveInt: int,\n pydantic.NegativeInt: int,\n pydantic.ConstrainedFloat: float,\n pydantic.confloat: float,\n pydantic.PositiveFloat: float,\n pydantic.NegativeFloat: float,\n pydantic.ConstrainedDecimal: Decimal,\n pydantic.condecimal: Decimal,\n pydantic.UUID1: UUID,\n pydantic.UUID3: UUID,\n pydantic.UUID4: UUID,\n pydantic.UUID5: UUID,\n pydantic.FilePath: None,\n pydantic.DirectoryPath: None,\n pydantic.Json: None,\n pydantic.JsonWrapper: None,\n pydantic.SecretStr: str,\n pydantic.SecretBytes: bytes,\n pydantic.StrictBool: bool,\n pydantic.StrictInt: int,\n pydantic.StrictFloat: float,\n pydantic.PaymentCardNumber: None,\n pydantic.ByteSize: None,\n pydantic.AnyUrl: str,\n pydantic.AnyHttpUrl: str,\n pydantic.HttpUrl: str,\n pydantic.PostgresDsn: str,\n pydantic.RedisDsn: str,\n}\n\n\ndef get_basic_type(type_):\n if isinstance(type_, type):\n if issubclass(type_, pydantic.ConstrainedInt):\n return int\n if issubclass(type_, pydantic.ConstrainedStr):\n return str\n\n if type_ in FIELDS_MAP:\n type_ = FIELDS_MAP.get(type_)\n\n if type_ is None:\n raise UnsupportedTypeError()\n\n return type_\n", "path": "strawberry/experimental/pydantic/fields.py"}]} | 1,389 | 976 |
gh_patches_debug_9298 | rasdani/github-patches | git_diff | joke2k__faker-1607 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
es_ES postalcode is not generating valid codes
* Faker version: 11.3
* OS: Any
When using postcode for es_ES and using it with a field that requires a valid Postal Code, it fails sometimes.
I will assume that there is no logic with postal code generation for Spain.
### Steps to reproduce
Generate postal codes
### Expected behavior
Get a valid Spain postal code
### Actual behavior
Unexpected. Many are wrong
----
I'll dig now into the code. Let's see if I can get some more information and fix it :thinking: Do not expect much from me
</issue>
<code>
[start of faker/providers/address/es_ES/__init__.py]
1 from ..es import Provider as AddressProvider
2
3
4 class Provider(AddressProvider):
5 building_number_formats = ("%", "%#", "%#", "%#", "%##")
6 street_prefixes = (
7 "Plaza",
8 "Calle",
9 "Avenida",
10 "Via",
11 "Vial",
12 "Rambla",
13 "Glorieta",
14 "Urbanización",
15 "Callejón",
16 "Cañada",
17 "Alameda",
18 "Acceso",
19 "C.",
20 "Ronda",
21 "Pasaje",
22 "Cuesta",
23 "Pasadizo",
24 "Paseo",
25 "Camino",
26 )
27 postcode_formats = ("#####",)
28 states = (
29 "Álava",
30 "Albacete",
31 "Alicante",
32 "Almería",
33 "Asturias",
34 "Ávila",
35 "Badajoz",
36 "Baleares",
37 "Barcelona",
38 "Burgos",
39 "Cáceres",
40 "Cádiz",
41 "Cantabria",
42 "Castellón",
43 "Ceuta",
44 "Ciudad",
45 "Córdoba",
46 "Cuenca",
47 "Girona",
48 "Granada",
49 "Guadalajara",
50 "Guipúzcoa",
51 "Huelva",
52 "Huesca",
53 "Jaén",
54 "La Coruña",
55 "La Rioja",
56 "Las Palmas",
57 "León",
58 "Lleida",
59 "Lugo",
60 "Madrid",
61 "Málaga",
62 "Melilla",
63 "Murcia",
64 "Navarra",
65 "Ourense",
66 "Palencia",
67 "Pontevedra",
68 "Salamanca",
69 "Santa Cruz de Tenerife",
70 "Segovia",
71 "Sevilla",
72 "Soria",
73 "Tarragona",
74 "Teruel",
75 "Toledo",
76 "Valencia",
77 "Valladolid",
78 "Vizcaya",
79 "Zamora",
80 "Zaragoza",
81 )
82
83 # Source:
84 # https://administracionelectronica.gob.es/ctt/resources/Soluciones
85 # /238/Descargas/Catalogo-de-Comunidades-Autonomas.xlsx
86 regions = (
87 "Andalucía",
88 "Aragón",
89 "Principado de Asturias",
90 "Illes Balears",
91 "Canarias",
92 "Cantabria",
93 "Castilla y León",
94 "Castilla-La Mancha",
95 "Cataluña",
96 "Comunitat Valenciana",
97 "Extremadura",
98 "Galicia",
99 "Comunidad de Madrid",
100 "Región de Murcia",
101 "Comunidad Foral de Navarra",
102 "País Vasco",
103 "La Rioja",
104 "Ciudad Autónoma de Ceuta",
105 "Ciudad Autónoma de Melilla",
106 )
107
108 city_formats = ("{{state_name}}",)
109
110 street_name_formats = (
111 "{{street_prefix}} {{first_name}} {{last_name}}",
112 "{{street_prefix}} de {{first_name}} {{last_name}}",
113 )
114 street_address_formats = (
115 "{{street_name}} {{building_number}}",
116 "{{street_name}} {{building_number}} {{secondary_address}} ",
117 )
118 address_formats = ("{{street_address}}\n{{city}}, {{postcode}}",)
119 secondary_address_formats = ("Apt. ##", "Piso #", "Puerta #")
120
121 def state_name(self) -> str:
122 return self.random_element(self.states)
123
124 def street_prefix(self) -> str:
125 return self.random_element(self.street_prefixes)
126
127 def secondary_address(self) -> str:
128 return self.numerify(self.random_element(self.secondary_address_formats))
129
130 def administrative_unit(self) -> str:
131 return self.random_element(self.states)
132
133 state = administrative_unit
134
135 def region(self) -> str:
136 return self.random_element(self.regions)
137
138 autonomous_community = region
139
[end of faker/providers/address/es_ES/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/faker/providers/address/es_ES/__init__.py b/faker/providers/address/es_ES/__init__.py
--- a/faker/providers/address/es_ES/__init__.py
+++ b/faker/providers/address/es_ES/__init__.py
@@ -24,7 +24,6 @@
"Paseo",
"Camino",
)
- postcode_formats = ("#####",)
states = (
"Álava",
"Albacete",
@@ -135,4 +134,7 @@
def region(self) -> str:
return self.random_element(self.regions)
+ def postcode(self) -> str:
+ return str(self.generator.random.randint(1000, 52100)).zfill(5)
+
autonomous_community = region
| {"golden_diff": "diff --git a/faker/providers/address/es_ES/__init__.py b/faker/providers/address/es_ES/__init__.py\n--- a/faker/providers/address/es_ES/__init__.py\n+++ b/faker/providers/address/es_ES/__init__.py\n@@ -24,7 +24,6 @@\n \"Paseo\",\n \"Camino\",\n )\n- postcode_formats = (\"#####\",)\n states = (\n \"\u00c1lava\",\n \"Albacete\",\n@@ -135,4 +134,7 @@\n def region(self) -> str:\n return self.random_element(self.regions)\n \n+ def postcode(self) -> str:\n+ return str(self.generator.random.randint(1000, 52100)).zfill(5)\n+\n autonomous_community = region\n", "issue": "es_ES postalcode is not generating valid codes\n* Faker version: 11.3\r\n* OS: Any\r\n\r\nWhen using postcode for es_ES and using it with a field that requires a valid Postal Code, it fails sometimes.\r\nI will assume that there is no logic with postal code generation for Spain.\r\n\r\n### Steps to reproduce\r\n\r\nGenerate postal codes\r\n\r\n### Expected behavior\r\n\r\nGet a valid Spain postal code\r\n\r\n### Actual behavior\r\n\r\nUnexpected. Many are wrong\r\n\r\n----\r\n\r\nI'll dig now into the code. Let's see if I can get some more information and fix it :thinking: Do not expect much from me\n", "before_files": [{"content": "from ..es import Provider as AddressProvider\n\n\nclass Provider(AddressProvider):\n building_number_formats = (\"%\", \"%#\", \"%#\", \"%#\", \"%##\")\n street_prefixes = (\n \"Plaza\",\n \"Calle\",\n \"Avenida\",\n \"Via\",\n \"Vial\",\n \"Rambla\",\n \"Glorieta\",\n \"Urbanizaci\u00f3n\",\n \"Callej\u00f3n\",\n \"Ca\u00f1ada\",\n \"Alameda\",\n \"Acceso\",\n \"C.\",\n \"Ronda\",\n \"Pasaje\",\n \"Cuesta\",\n \"Pasadizo\",\n \"Paseo\",\n \"Camino\",\n )\n postcode_formats = (\"#####\",)\n states = (\n \"\u00c1lava\",\n \"Albacete\",\n \"Alicante\",\n \"Almer\u00eda\",\n \"Asturias\",\n \"\u00c1vila\",\n \"Badajoz\",\n \"Baleares\",\n \"Barcelona\",\n \"Burgos\",\n \"C\u00e1ceres\",\n \"C\u00e1diz\",\n \"Cantabria\",\n \"Castell\u00f3n\",\n \"Ceuta\",\n \"Ciudad\",\n \"C\u00f3rdoba\",\n \"Cuenca\",\n \"Girona\",\n \"Granada\",\n \"Guadalajara\",\n \"Guip\u00fazcoa\",\n \"Huelva\",\n \"Huesca\",\n \"Ja\u00e9n\",\n \"La Coru\u00f1a\",\n \"La Rioja\",\n \"Las Palmas\",\n \"Le\u00f3n\",\n \"Lleida\",\n \"Lugo\",\n \"Madrid\",\n \"M\u00e1laga\",\n \"Melilla\",\n \"Murcia\",\n \"Navarra\",\n \"Ourense\",\n \"Palencia\",\n \"Pontevedra\",\n \"Salamanca\",\n \"Santa Cruz de Tenerife\",\n \"Segovia\",\n \"Sevilla\",\n \"Soria\",\n \"Tarragona\",\n \"Teruel\",\n \"Toledo\",\n \"Valencia\",\n \"Valladolid\",\n \"Vizcaya\",\n \"Zamora\",\n \"Zaragoza\",\n )\n\n # Source:\n # https://administracionelectronica.gob.es/ctt/resources/Soluciones\n # /238/Descargas/Catalogo-de-Comunidades-Autonomas.xlsx\n regions = (\n \"Andaluc\u00eda\",\n \"Arag\u00f3n\",\n \"Principado de Asturias\",\n \"Illes Balears\",\n \"Canarias\",\n \"Cantabria\",\n \"Castilla y Le\u00f3n\",\n \"Castilla-La Mancha\",\n \"Catalu\u00f1a\",\n \"Comunitat Valenciana\",\n \"Extremadura\",\n \"Galicia\",\n \"Comunidad de Madrid\",\n \"Regi\u00f3n de Murcia\",\n \"Comunidad Foral de Navarra\",\n \"Pa\u00eds Vasco\",\n \"La Rioja\",\n \"Ciudad Aut\u00f3noma de Ceuta\",\n \"Ciudad Aut\u00f3noma de Melilla\",\n )\n\n city_formats = (\"{{state_name}}\",)\n\n street_name_formats = (\n \"{{street_prefix}} {{first_name}} {{last_name}}\",\n \"{{street_prefix}} de {{first_name}} {{last_name}}\",\n )\n street_address_formats = (\n \"{{street_name}} {{building_number}}\",\n \"{{street_name}} {{building_number}} {{secondary_address}} \",\n )\n address_formats = (\"{{street_address}}\\n{{city}}, {{postcode}}\",)\n secondary_address_formats = (\"Apt. ##\", \"Piso #\", \"Puerta #\")\n\n def state_name(self) -> str:\n return self.random_element(self.states)\n\n def street_prefix(self) -> str:\n return self.random_element(self.street_prefixes)\n\n def secondary_address(self) -> str:\n return self.numerify(self.random_element(self.secondary_address_formats))\n\n def administrative_unit(self) -> str:\n return self.random_element(self.states)\n\n state = administrative_unit\n\n def region(self) -> str:\n return self.random_element(self.regions)\n\n autonomous_community = region\n", "path": "faker/providers/address/es_ES/__init__.py"}]} | 1,879 | 177 |
gh_patches_debug_11009 | rasdani/github-patches | git_diff | pyca__cryptography-7895 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bump BoringSSL and/or OpenSSL in CI
## BoringSSL
[Commit: e2e613c269a6bb3d7c0271150fff48d11fdbbace](https://boringssl.googlesource.com/boringssl/+/e2e613c269a6bb3d7c0271150fff48d11fdbbace)
[Diff](https://boringssl.googlesource.com/boringssl/+/d77fdbff010ee70776036c41155d1b3711ede548..e2e613c269a6bb3d7c0271150fff48d11fdbbace) between the last commit hash merged to this repository and the new commit.
## OpenSSL
[Commit: dc45d4c6faeb53bb68401141d899b9f857bbc51d](https://github.com/openssl/openssl/commit/dc45d4c6faeb53bb68401141d899b9f857bbc51d)
[Diff](https://github.com/openssl/openssl/compare/efec0f4611ee854f2b0b3da0c135e839bf8e7d04...dc45d4c6faeb53bb68401141d899b9f857bbc51d) between the last commit hash merged to this repository and the new commit.
</issue>
<code>
[start of src/_cffi_src/openssl/rsa.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5
6 INCLUDES = """
7 #include <openssl/rsa.h>
8 """
9
10 TYPES = """
11 typedef ... RSA;
12 typedef ... BN_GENCB;
13 static const int RSA_PKCS1_PADDING;
14 static const int RSA_NO_PADDING;
15 static const int RSA_PKCS1_OAEP_PADDING;
16 static const int RSA_PKCS1_PSS_PADDING;
17 static const int RSA_F4;
18 static const int RSA_PSS_SALTLEN_AUTO;
19 """
20
21 FUNCTIONS = """
22 RSA *RSA_new(void);
23 void RSA_free(RSA *);
24 int RSA_generate_key_ex(RSA *, int, BIGNUM *, BN_GENCB *);
25 int RSA_check_key(const RSA *);
26 RSA *RSAPublicKey_dup(RSA *);
27 int RSA_blinding_on(RSA *, BN_CTX *);
28 int RSA_print(BIO *, const RSA *, int);
29
30 int RSA_set0_key(RSA *, BIGNUM *, BIGNUM *, BIGNUM *);
31 int RSA_set0_factors(RSA *, BIGNUM *, BIGNUM *);
32 int RSA_set0_crt_params(RSA *, BIGNUM *, BIGNUM *, BIGNUM *);
33 void RSA_get0_key(const RSA *, const BIGNUM **, const BIGNUM **,
34 const BIGNUM **);
35 void RSA_get0_factors(const RSA *, const BIGNUM **, const BIGNUM **);
36 void RSA_get0_crt_params(const RSA *, const BIGNUM **, const BIGNUM **,
37 const BIGNUM **);
38 int EVP_PKEY_CTX_set_rsa_padding(EVP_PKEY_CTX *, int);
39 int EVP_PKEY_CTX_set_rsa_pss_saltlen(EVP_PKEY_CTX *, int);
40 int EVP_PKEY_CTX_set_rsa_mgf1_md(EVP_PKEY_CTX *, EVP_MD *);
41 int EVP_PKEY_CTX_set0_rsa_oaep_label(EVP_PKEY_CTX *, unsigned char *, int);
42
43 int EVP_PKEY_CTX_set_rsa_oaep_md(EVP_PKEY_CTX *, EVP_MD *);
44 """
45
46 CUSTOMIZATIONS = """
47 // BoringSSL doesn't define this constant, but the value is used for
48 // automatic salt length computation as in OpenSSL and LibreSSL
49 #if !defined(RSA_PSS_SALTLEN_AUTO)
50 #define RSA_PSS_SALTLEN_AUTO -2
51 #endif
52 """
53
[end of src/_cffi_src/openssl/rsa.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/_cffi_src/openssl/rsa.py b/src/_cffi_src/openssl/rsa.py
--- a/src/_cffi_src/openssl/rsa.py
+++ b/src/_cffi_src/openssl/rsa.py
@@ -16,6 +16,8 @@
static const int RSA_PKCS1_PSS_PADDING;
static const int RSA_F4;
static const int RSA_PSS_SALTLEN_AUTO;
+
+static const int Cryptography_HAS_IMPLICIT_RSA_REJECTION;
"""
FUNCTIONS = """
@@ -49,4 +51,10 @@
#if !defined(RSA_PSS_SALTLEN_AUTO)
#define RSA_PSS_SALTLEN_AUTO -2
#endif
+
+#if defined(EVP_PKEY_CTRL_RSA_IMPLICIT_REJECTION)
+static const int Cryptography_HAS_IMPLICIT_RSA_REJECTION = 1;
+#else
+static const int Cryptography_HAS_IMPLICIT_RSA_REJECTION = 0;
+#endif
"""
| {"golden_diff": "diff --git a/src/_cffi_src/openssl/rsa.py b/src/_cffi_src/openssl/rsa.py\n--- a/src/_cffi_src/openssl/rsa.py\n+++ b/src/_cffi_src/openssl/rsa.py\n@@ -16,6 +16,8 @@\n static const int RSA_PKCS1_PSS_PADDING;\n static const int RSA_F4;\n static const int RSA_PSS_SALTLEN_AUTO;\n+\n+static const int Cryptography_HAS_IMPLICIT_RSA_REJECTION;\n \"\"\"\n \n FUNCTIONS = \"\"\"\n@@ -49,4 +51,10 @@\n #if !defined(RSA_PSS_SALTLEN_AUTO)\n #define RSA_PSS_SALTLEN_AUTO -2\n #endif\n+\n+#if defined(EVP_PKEY_CTRL_RSA_IMPLICIT_REJECTION)\n+static const int Cryptography_HAS_IMPLICIT_RSA_REJECTION = 1;\n+#else\n+static const int Cryptography_HAS_IMPLICIT_RSA_REJECTION = 0;\n+#endif\n \"\"\"\n", "issue": "Bump BoringSSL and/or OpenSSL in CI\n## BoringSSL\n[Commit: e2e613c269a6bb3d7c0271150fff48d11fdbbace](https://boringssl.googlesource.com/boringssl/+/e2e613c269a6bb3d7c0271150fff48d11fdbbace)\n\n[Diff](https://boringssl.googlesource.com/boringssl/+/d77fdbff010ee70776036c41155d1b3711ede548..e2e613c269a6bb3d7c0271150fff48d11fdbbace) between the last commit hash merged to this repository and the new commit.\n## OpenSSL\n[Commit: dc45d4c6faeb53bb68401141d899b9f857bbc51d](https://github.com/openssl/openssl/commit/dc45d4c6faeb53bb68401141d899b9f857bbc51d)\n\n[Diff](https://github.com/openssl/openssl/compare/efec0f4611ee854f2b0b3da0c135e839bf8e7d04...dc45d4c6faeb53bb68401141d899b9f857bbc51d) between the last commit hash merged to this repository and the new commit.\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\n\nINCLUDES = \"\"\"\n#include <openssl/rsa.h>\n\"\"\"\n\nTYPES = \"\"\"\ntypedef ... RSA;\ntypedef ... BN_GENCB;\nstatic const int RSA_PKCS1_PADDING;\nstatic const int RSA_NO_PADDING;\nstatic const int RSA_PKCS1_OAEP_PADDING;\nstatic const int RSA_PKCS1_PSS_PADDING;\nstatic const int RSA_F4;\nstatic const int RSA_PSS_SALTLEN_AUTO;\n\"\"\"\n\nFUNCTIONS = \"\"\"\nRSA *RSA_new(void);\nvoid RSA_free(RSA *);\nint RSA_generate_key_ex(RSA *, int, BIGNUM *, BN_GENCB *);\nint RSA_check_key(const RSA *);\nRSA *RSAPublicKey_dup(RSA *);\nint RSA_blinding_on(RSA *, BN_CTX *);\nint RSA_print(BIO *, const RSA *, int);\n\nint RSA_set0_key(RSA *, BIGNUM *, BIGNUM *, BIGNUM *);\nint RSA_set0_factors(RSA *, BIGNUM *, BIGNUM *);\nint RSA_set0_crt_params(RSA *, BIGNUM *, BIGNUM *, BIGNUM *);\nvoid RSA_get0_key(const RSA *, const BIGNUM **, const BIGNUM **,\n const BIGNUM **);\nvoid RSA_get0_factors(const RSA *, const BIGNUM **, const BIGNUM **);\nvoid RSA_get0_crt_params(const RSA *, const BIGNUM **, const BIGNUM **,\n const BIGNUM **);\nint EVP_PKEY_CTX_set_rsa_padding(EVP_PKEY_CTX *, int);\nint EVP_PKEY_CTX_set_rsa_pss_saltlen(EVP_PKEY_CTX *, int);\nint EVP_PKEY_CTX_set_rsa_mgf1_md(EVP_PKEY_CTX *, EVP_MD *);\nint EVP_PKEY_CTX_set0_rsa_oaep_label(EVP_PKEY_CTX *, unsigned char *, int);\n\nint EVP_PKEY_CTX_set_rsa_oaep_md(EVP_PKEY_CTX *, EVP_MD *);\n\"\"\"\n\nCUSTOMIZATIONS = \"\"\"\n// BoringSSL doesn't define this constant, but the value is used for\n// automatic salt length computation as in OpenSSL and LibreSSL\n#if !defined(RSA_PSS_SALTLEN_AUTO)\n#define RSA_PSS_SALTLEN_AUTO -2\n#endif\n\"\"\"\n", "path": "src/_cffi_src/openssl/rsa.py"}]} | 1,533 | 208 |
gh_patches_debug_13019 | rasdani/github-patches | git_diff | getsentry__sentry-52100 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SDK Crash Detection: Store Project ID and Event ID
Store project ID and event ID in the SDK crash detection context to find the original SDK crash event, which is only possible with admin Sentry rights.
https://github.com/getsentry/sentry/blob/2c31ee009b44964f78b9e7e8282e602b7ef849b0/src/sentry/utils/sdk_crashes/sdk_crash_detection.py#L40C2-L42
</issue>
<code>
[start of src/sentry/utils/sdk_crashes/sdk_crash_detection.py]
1 from __future__ import annotations
2
3 from typing import Any, Mapping, Optional
4
5 from sentry.eventstore.models import Event
6 from sentry.issues.grouptype import GroupCategory
7 from sentry.utils.safe import get_path, set_path
8 from sentry.utils.sdk_crashes.cocoa_sdk_crash_detector import CocoaSDKCrashDetector
9 from sentry.utils.sdk_crashes.event_stripper import strip_event_data
10 from sentry.utils.sdk_crashes.sdk_crash_detector import SDKCrashDetector
11
12
13 class SDKCrashReporter:
14 def report(self, event_data: Mapping[str, Any], event_project_id: int) -> Event:
15 from sentry.event_manager import EventManager
16
17 manager = EventManager(dict(event_data))
18 manager.normalize()
19 return manager.save(project_id=event_project_id)
20
21
22 class SDKCrashDetection:
23 def __init__(
24 self,
25 sdk_crash_reporter: SDKCrashReporter,
26 sdk_crash_detector: SDKCrashDetector,
27 ):
28 self.sdk_crash_reporter = sdk_crash_reporter
29 self.cocoa_sdk_crash_detector = sdk_crash_detector
30
31 def detect_sdk_crash(self, event: Event, event_project_id: int) -> Optional[Event]:
32 should_detect_sdk_crash = (
33 event.group
34 and event.group.issue_category == GroupCategory.ERROR
35 and event.group.platform == "cocoa"
36 )
37 if not should_detect_sdk_crash:
38 return None
39
40 context = get_path(event.data, "contexts", "sdk_crash_detection")
41 if context is not None and context.get("detected", False):
42 return None
43
44 # Getting the frames and checking if the event is unhandled might different per platform.
45 # We will change this once we implement this for more platforms.
46 is_unhandled = (
47 get_path(event.data, "exception", "values", -1, "mechanism", "handled") is False
48 )
49 if is_unhandled is False:
50 return None
51
52 frames = get_path(event.data, "exception", "values", -1, "stacktrace", "frames")
53 if not frames:
54 return None
55
56 if self.cocoa_sdk_crash_detector.is_sdk_crash(frames):
57 sdk_crash_event_data = strip_event_data(event.data, self.cocoa_sdk_crash_detector)
58
59 set_path(
60 sdk_crash_event_data, "contexts", "sdk_crash_detection", value={"detected": True}
61 )
62
63 return self.sdk_crash_reporter.report(sdk_crash_event_data, event_project_id)
64
65 return None
66
67
68 _crash_reporter = SDKCrashReporter()
69 _cocoa_sdk_crash_detector = CocoaSDKCrashDetector()
70
71 sdk_crash_detection = SDKCrashDetection(_crash_reporter, _cocoa_sdk_crash_detector)
72
[end of src/sentry/utils/sdk_crashes/sdk_crash_detection.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/sentry/utils/sdk_crashes/sdk_crash_detection.py b/src/sentry/utils/sdk_crashes/sdk_crash_detection.py
--- a/src/sentry/utils/sdk_crashes/sdk_crash_detection.py
+++ b/src/sentry/utils/sdk_crashes/sdk_crash_detection.py
@@ -57,7 +57,14 @@
sdk_crash_event_data = strip_event_data(event.data, self.cocoa_sdk_crash_detector)
set_path(
- sdk_crash_event_data, "contexts", "sdk_crash_detection", value={"detected": True}
+ sdk_crash_event_data,
+ "contexts",
+ "sdk_crash_detection",
+ value={
+ "detected": True,
+ "original_project_id": event.project.id,
+ "original_event_id": event.event_id,
+ },
)
return self.sdk_crash_reporter.report(sdk_crash_event_data, event_project_id)
| {"golden_diff": "diff --git a/src/sentry/utils/sdk_crashes/sdk_crash_detection.py b/src/sentry/utils/sdk_crashes/sdk_crash_detection.py\n--- a/src/sentry/utils/sdk_crashes/sdk_crash_detection.py\n+++ b/src/sentry/utils/sdk_crashes/sdk_crash_detection.py\n@@ -57,7 +57,14 @@\n sdk_crash_event_data = strip_event_data(event.data, self.cocoa_sdk_crash_detector)\n \n set_path(\n- sdk_crash_event_data, \"contexts\", \"sdk_crash_detection\", value={\"detected\": True}\n+ sdk_crash_event_data,\n+ \"contexts\",\n+ \"sdk_crash_detection\",\n+ value={\n+ \"detected\": True,\n+ \"original_project_id\": event.project.id,\n+ \"original_event_id\": event.event_id,\n+ },\n )\n \n return self.sdk_crash_reporter.report(sdk_crash_event_data, event_project_id)\n", "issue": "SDK Crash Detection: Store Project ID and Event ID\nStore project ID and event ID in the SDK crash detection context to find the original SDK crash event, which is only possible with admin Sentry rights.\r\n\r\nhttps://github.com/getsentry/sentry/blob/2c31ee009b44964f78b9e7e8282e602b7ef849b0/src/sentry/utils/sdk_crashes/sdk_crash_detection.py#L40C2-L42\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any, Mapping, Optional\n\nfrom sentry.eventstore.models import Event\nfrom sentry.issues.grouptype import GroupCategory\nfrom sentry.utils.safe import get_path, set_path\nfrom sentry.utils.sdk_crashes.cocoa_sdk_crash_detector import CocoaSDKCrashDetector\nfrom sentry.utils.sdk_crashes.event_stripper import strip_event_data\nfrom sentry.utils.sdk_crashes.sdk_crash_detector import SDKCrashDetector\n\n\nclass SDKCrashReporter:\n def report(self, event_data: Mapping[str, Any], event_project_id: int) -> Event:\n from sentry.event_manager import EventManager\n\n manager = EventManager(dict(event_data))\n manager.normalize()\n return manager.save(project_id=event_project_id)\n\n\nclass SDKCrashDetection:\n def __init__(\n self,\n sdk_crash_reporter: SDKCrashReporter,\n sdk_crash_detector: SDKCrashDetector,\n ):\n self.sdk_crash_reporter = sdk_crash_reporter\n self.cocoa_sdk_crash_detector = sdk_crash_detector\n\n def detect_sdk_crash(self, event: Event, event_project_id: int) -> Optional[Event]:\n should_detect_sdk_crash = (\n event.group\n and event.group.issue_category == GroupCategory.ERROR\n and event.group.platform == \"cocoa\"\n )\n if not should_detect_sdk_crash:\n return None\n\n context = get_path(event.data, \"contexts\", \"sdk_crash_detection\")\n if context is not None and context.get(\"detected\", False):\n return None\n\n # Getting the frames and checking if the event is unhandled might different per platform.\n # We will change this once we implement this for more platforms.\n is_unhandled = (\n get_path(event.data, \"exception\", \"values\", -1, \"mechanism\", \"handled\") is False\n )\n if is_unhandled is False:\n return None\n\n frames = get_path(event.data, \"exception\", \"values\", -1, \"stacktrace\", \"frames\")\n if not frames:\n return None\n\n if self.cocoa_sdk_crash_detector.is_sdk_crash(frames):\n sdk_crash_event_data = strip_event_data(event.data, self.cocoa_sdk_crash_detector)\n\n set_path(\n sdk_crash_event_data, \"contexts\", \"sdk_crash_detection\", value={\"detected\": True}\n )\n\n return self.sdk_crash_reporter.report(sdk_crash_event_data, event_project_id)\n\n return None\n\n\n_crash_reporter = SDKCrashReporter()\n_cocoa_sdk_crash_detector = CocoaSDKCrashDetector()\n\nsdk_crash_detection = SDKCrashDetection(_crash_reporter, _cocoa_sdk_crash_detector)\n", "path": "src/sentry/utils/sdk_crashes/sdk_crash_detection.py"}]} | 1,398 | 205 |
gh_patches_debug_35563 | rasdani/github-patches | git_diff | litestar-org__litestar-784 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: StaticFiles sends files as `content-disposition: 'attachment'` in html-mode
**Describe the bug**
When using `StaticFiles` in html-mode, files are being sent with `content-disposition: 'attachment'`
**To Reproduce**
Create an `html/index.html` file. Run:
```python
from starlite import Starlite, StaticFilesConfig, TestClient
app = Starlite(
static_files_config=[StaticFilesConfig(path="/", directories=["html"], html_mode=True)], route_handlers=[]
)
with TestClient(app=app) as client:
res = client.get("/index.html")
assert not res.headers["content-disposition"].startswith("attachment")
```
Bug: StaticFiles sends files as `content-disposition: 'attachment'` in html-mode
**Describe the bug**
When using `StaticFiles` in html-mode, files are being sent with `content-disposition: 'attachment'`
**To Reproduce**
Create an `html/index.html` file. Run:
```python
from starlite import Starlite, StaticFilesConfig, TestClient
app = Starlite(
static_files_config=[StaticFilesConfig(path="/", directories=["html"], html_mode=True)], route_handlers=[]
)
with TestClient(app=app) as client:
res = client.get("/index.html")
assert not res.headers["content-disposition"].startswith("attachment")
```
</issue>
<code>
[start of starlite/static_files/base.py]
1 from os.path import commonpath, join
2 from typing import TYPE_CHECKING, List, Tuple, Union
3
4 from starlite.enums import ScopeType
5 from starlite.exceptions import MethodNotAllowedException, NotFoundException
6 from starlite.response import FileResponse
7 from starlite.status_codes import HTTP_404_NOT_FOUND
8 from starlite.utils.file import FileSystemAdapter
9
10 if TYPE_CHECKING:
11
12 from starlite.types import Receive, Scope, Send
13 from starlite.types.composite_types import PathType
14 from starlite.types.file_types import FileInfo, FileSystemProtocol
15
16
17 class StaticFiles:
18 __slots__ = ("is_html_mode", "directories", "adapter")
19
20 def __init__(self, is_html_mode: bool, directories: List["PathType"], file_system: "FileSystemProtocol") -> None:
21 """This class is an ASGI App that handles file sending.
22
23 Args:
24 is_html_mode: Flag dictating whether serving html. If true, the default file will be 'index.html'.
25 directories: A list of directories to serve files from.
26 file_system: The file_system spec to use for serving files.
27 """
28 self.adapter = FileSystemAdapter(file_system)
29 self.directories = directories
30 self.is_html_mode = is_html_mode
31
32 async def get_fs_info(
33 self, directories: List["PathType"], file_path: str
34 ) -> Union[Tuple[str, "FileInfo"], Tuple[None, None]]:
35 """Resolves the file path and returns the resolved path and a.
36
37 [stat_result][os.stat_result].
38
39 Args:
40 directories: A list of directory paths.
41 file_path: A file path to resolve
42
43 Returns:
44 A tuple with an optional resolved [Path][anyio.Path] instance and an optional [stat_result][os.stat_result].
45 """
46 for directory in directories:
47 try:
48 joined_path = join(directory, file_path) # noqa: PL118
49 file_info = await self.adapter.info(joined_path)
50 if file_info and commonpath([str(directory), file_info["name"], joined_path]) == str(directory):
51 return joined_path, file_info
52 except FileNotFoundError:
53 continue
54 return None, None
55
56 async def __call__(self, scope: "Scope", receive: "Receive", send: "Send") -> None:
57 if scope["type"] != ScopeType.HTTP or scope["method"] not in {"GET", "HEAD"}:
58 raise MethodNotAllowedException()
59
60 split_path = scope["path"].split("/")
61 filename = split_path[-1]
62 joined_path = join(*split_path) # noqa: PL118
63 resolved_path, fs_info = await self.get_fs_info(directories=self.directories, file_path=joined_path)
64
65 if fs_info and fs_info["type"] == "directory" and self.is_html_mode:
66 filename = "index.html"
67 resolved_path, fs_info = await self.get_fs_info(
68 directories=self.directories, file_path=join(resolved_path or joined_path, filename)
69 )
70
71 if fs_info and fs_info["type"] == "file":
72 await FileResponse(
73 path=resolved_path or joined_path,
74 file_info=fs_info,
75 file_system=self.adapter.file_system,
76 filename=filename,
77 is_head_response=scope["method"] == "HEAD",
78 )(scope, receive, send)
79 return
80
81 if self.is_html_mode:
82 filename = "404.html"
83 resolved_path, fs_info = await self.get_fs_info(directories=self.directories, file_path=filename)
84 if fs_info and fs_info["type"] == "file":
85 await FileResponse(
86 path=resolved_path or joined_path,
87 file_info=fs_info,
88 file_system=self.adapter.file_system,
89 filename=filename,
90 is_head_response=scope["method"] == "HEAD",
91 status_code=HTTP_404_NOT_FOUND,
92 )(scope, receive, send)
93 return
94
95 raise NotFoundException(f"no file or directory match the path {resolved_path or joined_path} was found")
96
[end of starlite/static_files/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/starlite/static_files/base.py b/starlite/static_files/base.py
--- a/starlite/static_files/base.py
+++ b/starlite/static_files/base.py
@@ -8,6 +8,7 @@
from starlite.utils.file import FileSystemAdapter
if TYPE_CHECKING:
+ from typing_extensions import Literal
from starlite.types import Receive, Scope, Send
from starlite.types.composite_types import PathType
@@ -61,12 +62,15 @@
filename = split_path[-1]
joined_path = join(*split_path) # noqa: PL118
resolved_path, fs_info = await self.get_fs_info(directories=self.directories, file_path=joined_path)
+ content_disposition_type: "Literal['inline', 'attachment']" = "attachment"
- if fs_info and fs_info["type"] == "directory" and self.is_html_mode:
- filename = "index.html"
- resolved_path, fs_info = await self.get_fs_info(
- directories=self.directories, file_path=join(resolved_path or joined_path, filename)
- )
+ if self.is_html_mode:
+ content_disposition_type = "inline"
+ if fs_info and fs_info["type"] == "directory":
+ filename = "index.html"
+ resolved_path, fs_info = await self.get_fs_info(
+ directories=self.directories, file_path=join(resolved_path or joined_path, filename)
+ )
if fs_info and fs_info["type"] == "file":
await FileResponse(
@@ -75,6 +79,7 @@
file_system=self.adapter.file_system,
filename=filename,
is_head_response=scope["method"] == "HEAD",
+ content_disposition_type=content_disposition_type,
)(scope, receive, send)
return
@@ -89,6 +94,7 @@
filename=filename,
is_head_response=scope["method"] == "HEAD",
status_code=HTTP_404_NOT_FOUND,
+ content_disposition_type=content_disposition_type,
)(scope, receive, send)
return
| {"golden_diff": "diff --git a/starlite/static_files/base.py b/starlite/static_files/base.py\n--- a/starlite/static_files/base.py\n+++ b/starlite/static_files/base.py\n@@ -8,6 +8,7 @@\n from starlite.utils.file import FileSystemAdapter\n \n if TYPE_CHECKING:\n+ from typing_extensions import Literal\n \n from starlite.types import Receive, Scope, Send\n from starlite.types.composite_types import PathType\n@@ -61,12 +62,15 @@\n filename = split_path[-1]\n joined_path = join(*split_path) # noqa: PL118\n resolved_path, fs_info = await self.get_fs_info(directories=self.directories, file_path=joined_path)\n+ content_disposition_type: \"Literal['inline', 'attachment']\" = \"attachment\"\n \n- if fs_info and fs_info[\"type\"] == \"directory\" and self.is_html_mode:\n- filename = \"index.html\"\n- resolved_path, fs_info = await self.get_fs_info(\n- directories=self.directories, file_path=join(resolved_path or joined_path, filename)\n- )\n+ if self.is_html_mode:\n+ content_disposition_type = \"inline\"\n+ if fs_info and fs_info[\"type\"] == \"directory\":\n+ filename = \"index.html\"\n+ resolved_path, fs_info = await self.get_fs_info(\n+ directories=self.directories, file_path=join(resolved_path or joined_path, filename)\n+ )\n \n if fs_info and fs_info[\"type\"] == \"file\":\n await FileResponse(\n@@ -75,6 +79,7 @@\n file_system=self.adapter.file_system,\n filename=filename,\n is_head_response=scope[\"method\"] == \"HEAD\",\n+ content_disposition_type=content_disposition_type,\n )(scope, receive, send)\n return\n \n@@ -89,6 +94,7 @@\n filename=filename,\n is_head_response=scope[\"method\"] == \"HEAD\",\n status_code=HTTP_404_NOT_FOUND,\n+ content_disposition_type=content_disposition_type,\n )(scope, receive, send)\n return\n", "issue": "Bug: StaticFiles sends files as `content-disposition: 'attachment'` in html-mode\n**Describe the bug**\r\nWhen using `StaticFiles` in html-mode, files are being sent with `content-disposition: 'attachment'`\r\n\r\n**To Reproduce**\r\nCreate an `html/index.html` file. Run:\r\n\r\n```python\r\nfrom starlite import Starlite, StaticFilesConfig, TestClient\r\n\r\napp = Starlite(\r\n static_files_config=[StaticFilesConfig(path=\"/\", directories=[\"html\"], html_mode=True)], route_handlers=[]\r\n)\r\n\r\nwith TestClient(app=app) as client:\r\n res = client.get(\"/index.html\")\r\n assert not res.headers[\"content-disposition\"].startswith(\"attachment\")\r\n```\r\n\nBug: StaticFiles sends files as `content-disposition: 'attachment'` in html-mode\n**Describe the bug**\r\nWhen using `StaticFiles` in html-mode, files are being sent with `content-disposition: 'attachment'`\r\n\r\n**To Reproduce**\r\nCreate an `html/index.html` file. Run:\r\n\r\n```python\r\nfrom starlite import Starlite, StaticFilesConfig, TestClient\r\n\r\napp = Starlite(\r\n static_files_config=[StaticFilesConfig(path=\"/\", directories=[\"html\"], html_mode=True)], route_handlers=[]\r\n)\r\n\r\nwith TestClient(app=app) as client:\r\n res = client.get(\"/index.html\")\r\n assert not res.headers[\"content-disposition\"].startswith(\"attachment\")\r\n```\r\n\n", "before_files": [{"content": "from os.path import commonpath, join\nfrom typing import TYPE_CHECKING, List, Tuple, Union\n\nfrom starlite.enums import ScopeType\nfrom starlite.exceptions import MethodNotAllowedException, NotFoundException\nfrom starlite.response import FileResponse\nfrom starlite.status_codes import HTTP_404_NOT_FOUND\nfrom starlite.utils.file import FileSystemAdapter\n\nif TYPE_CHECKING:\n\n from starlite.types import Receive, Scope, Send\n from starlite.types.composite_types import PathType\n from starlite.types.file_types import FileInfo, FileSystemProtocol\n\n\nclass StaticFiles:\n __slots__ = (\"is_html_mode\", \"directories\", \"adapter\")\n\n def __init__(self, is_html_mode: bool, directories: List[\"PathType\"], file_system: \"FileSystemProtocol\") -> None:\n \"\"\"This class is an ASGI App that handles file sending.\n\n Args:\n is_html_mode: Flag dictating whether serving html. If true, the default file will be 'index.html'.\n directories: A list of directories to serve files from.\n file_system: The file_system spec to use for serving files.\n \"\"\"\n self.adapter = FileSystemAdapter(file_system)\n self.directories = directories\n self.is_html_mode = is_html_mode\n\n async def get_fs_info(\n self, directories: List[\"PathType\"], file_path: str\n ) -> Union[Tuple[str, \"FileInfo\"], Tuple[None, None]]:\n \"\"\"Resolves the file path and returns the resolved path and a.\n\n [stat_result][os.stat_result].\n\n Args:\n directories: A list of directory paths.\n file_path: A file path to resolve\n\n Returns:\n A tuple with an optional resolved [Path][anyio.Path] instance and an optional [stat_result][os.stat_result].\n \"\"\"\n for directory in directories:\n try:\n joined_path = join(directory, file_path) # noqa: PL118\n file_info = await self.adapter.info(joined_path)\n if file_info and commonpath([str(directory), file_info[\"name\"], joined_path]) == str(directory):\n return joined_path, file_info\n except FileNotFoundError:\n continue\n return None, None\n\n async def __call__(self, scope: \"Scope\", receive: \"Receive\", send: \"Send\") -> None:\n if scope[\"type\"] != ScopeType.HTTP or scope[\"method\"] not in {\"GET\", \"HEAD\"}:\n raise MethodNotAllowedException()\n\n split_path = scope[\"path\"].split(\"/\")\n filename = split_path[-1]\n joined_path = join(*split_path) # noqa: PL118\n resolved_path, fs_info = await self.get_fs_info(directories=self.directories, file_path=joined_path)\n\n if fs_info and fs_info[\"type\"] == \"directory\" and self.is_html_mode:\n filename = \"index.html\"\n resolved_path, fs_info = await self.get_fs_info(\n directories=self.directories, file_path=join(resolved_path or joined_path, filename)\n )\n\n if fs_info and fs_info[\"type\"] == \"file\":\n await FileResponse(\n path=resolved_path or joined_path,\n file_info=fs_info,\n file_system=self.adapter.file_system,\n filename=filename,\n is_head_response=scope[\"method\"] == \"HEAD\",\n )(scope, receive, send)\n return\n\n if self.is_html_mode:\n filename = \"404.html\"\n resolved_path, fs_info = await self.get_fs_info(directories=self.directories, file_path=filename)\n if fs_info and fs_info[\"type\"] == \"file\":\n await FileResponse(\n path=resolved_path or joined_path,\n file_info=fs_info,\n file_system=self.adapter.file_system,\n filename=filename,\n is_head_response=scope[\"method\"] == \"HEAD\",\n status_code=HTTP_404_NOT_FOUND,\n )(scope, receive, send)\n return\n\n raise NotFoundException(f\"no file or directory match the path {resolved_path or joined_path} was found\")\n", "path": "starlite/static_files/base.py"}]} | 1,886 | 469 |
gh_patches_debug_27756 | rasdani/github-patches | git_diff | scrapy__scrapy-5002 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
refactoring curl_to_request_kwargs to reduce cyclomatic complexity
<!--
Thanks for taking an interest in Scrapy!
If you have a question that starts with "How to...", please see the Scrapy Community page: https://scrapy.org/community/.
The GitHub issue tracker's purpose is to deal with bug reports and feature requests for the project itself.
Keep in mind that by filing an issue, you are expected to comply with Scrapy's Code of Conduct, including treating everyone with respect: https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md
The following is a suggested template to structure your pull request, you can find more guidelines at https://doc.scrapy.org/en/latest/contributing.html#writing-patches and https://doc.scrapy.org/en/latest/contributing.html#submitting-patches
-->
## Summary
After some exploring with cyclomatic complexity tools (lizard), the function was found to have the second highest complexity.
## Motivation
Low complexity allows for higher readability, testability and maintainability.
## Solution
Refactor
## Additional context
N/A
</issue>
<code>
[start of scrapy/utils/curl.py]
1 import argparse
2 import warnings
3 from shlex import split
4 from http.cookies import SimpleCookie
5 from urllib.parse import urlparse
6
7 from w3lib.http import basic_auth_header
8
9
10 class CurlParser(argparse.ArgumentParser):
11 def error(self, message):
12 error_msg = f'There was an error parsing the curl command: {message}'
13 raise ValueError(error_msg)
14
15
16 curl_parser = CurlParser()
17 curl_parser.add_argument('url')
18 curl_parser.add_argument('-H', '--header', dest='headers', action='append')
19 curl_parser.add_argument('-X', '--request', dest='method')
20 curl_parser.add_argument('-d', '--data', '--data-raw', dest='data')
21 curl_parser.add_argument('-u', '--user', dest='auth')
22
23
24 safe_to_ignore_arguments = [
25 ['--compressed'],
26 # `--compressed` argument is not safe to ignore, but it's included here
27 # because the `HttpCompressionMiddleware` is enabled by default
28 ['-s', '--silent'],
29 ['-v', '--verbose'],
30 ['-#', '--progress-bar']
31 ]
32
33 for argument in safe_to_ignore_arguments:
34 curl_parser.add_argument(*argument, action='store_true')
35
36
37 def curl_to_request_kwargs(curl_command, ignore_unknown_options=True):
38 """Convert a cURL command syntax to Request kwargs.
39
40 :param str curl_command: string containing the curl command
41 :param bool ignore_unknown_options: If true, only a warning is emitted when
42 cURL options are unknown. Otherwise
43 raises an error. (default: True)
44 :return: dictionary of Request kwargs
45 """
46
47 curl_args = split(curl_command)
48
49 if curl_args[0] != 'curl':
50 raise ValueError('A curl command must start with "curl"')
51
52 parsed_args, argv = curl_parser.parse_known_args(curl_args[1:])
53
54 if argv:
55 msg = f'Unrecognized options: {", ".join(argv)}'
56 if ignore_unknown_options:
57 warnings.warn(msg)
58 else:
59 raise ValueError(msg)
60
61 url = parsed_args.url
62
63 # curl automatically prepends 'http' if the scheme is missing, but Request
64 # needs the scheme to work
65 parsed_url = urlparse(url)
66 if not parsed_url.scheme:
67 url = 'http://' + url
68
69 method = parsed_args.method or 'GET'
70
71 result = {'method': method.upper(), 'url': url}
72
73 headers = []
74 cookies = {}
75 for header in parsed_args.headers or ():
76 name, val = header.split(':', 1)
77 name = name.strip()
78 val = val.strip()
79 if name.title() == 'Cookie':
80 for name, morsel in SimpleCookie(val).items():
81 cookies[name] = morsel.value
82 else:
83 headers.append((name, val))
84
85 if parsed_args.auth:
86 user, password = parsed_args.auth.split(':', 1)
87 headers.append(('Authorization', basic_auth_header(user, password)))
88
89 if headers:
90 result['headers'] = headers
91 if cookies:
92 result['cookies'] = cookies
93 if parsed_args.data:
94 result['body'] = parsed_args.data
95 if not parsed_args.method:
96 # if the "data" is specified but the "method" is not specified,
97 # the default method is 'POST'
98 result['method'] = 'POST'
99
100 return result
101
[end of scrapy/utils/curl.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/utils/curl.py b/scrapy/utils/curl.py
--- a/scrapy/utils/curl.py
+++ b/scrapy/utils/curl.py
@@ -34,6 +34,26 @@
curl_parser.add_argument(*argument, action='store_true')
+def _parse_headers_and_cookies(parsed_args):
+ headers = []
+ cookies = {}
+ for header in parsed_args.headers or ():
+ name, val = header.split(':', 1)
+ name = name.strip()
+ val = val.strip()
+ if name.title() == 'Cookie':
+ for name, morsel in SimpleCookie(val).items():
+ cookies[name] = morsel.value
+ else:
+ headers.append((name, val))
+
+ if parsed_args.auth:
+ user, password = parsed_args.auth.split(':', 1)
+ headers.append(('Authorization', basic_auth_header(user, password)))
+
+ return headers, cookies
+
+
def curl_to_request_kwargs(curl_command, ignore_unknown_options=True):
"""Convert a cURL command syntax to Request kwargs.
@@ -70,21 +90,7 @@
result = {'method': method.upper(), 'url': url}
- headers = []
- cookies = {}
- for header in parsed_args.headers or ():
- name, val = header.split(':', 1)
- name = name.strip()
- val = val.strip()
- if name.title() == 'Cookie':
- for name, morsel in SimpleCookie(val).items():
- cookies[name] = morsel.value
- else:
- headers.append((name, val))
-
- if parsed_args.auth:
- user, password = parsed_args.auth.split(':', 1)
- headers.append(('Authorization', basic_auth_header(user, password)))
+ headers, cookies = _parse_headers_and_cookies(parsed_args)
if headers:
result['headers'] = headers
| {"golden_diff": "diff --git a/scrapy/utils/curl.py b/scrapy/utils/curl.py\n--- a/scrapy/utils/curl.py\n+++ b/scrapy/utils/curl.py\n@@ -34,6 +34,26 @@\n curl_parser.add_argument(*argument, action='store_true')\n \n \n+def _parse_headers_and_cookies(parsed_args):\n+ headers = []\n+ cookies = {}\n+ for header in parsed_args.headers or ():\n+ name, val = header.split(':', 1)\n+ name = name.strip()\n+ val = val.strip()\n+ if name.title() == 'Cookie':\n+ for name, morsel in SimpleCookie(val).items():\n+ cookies[name] = morsel.value\n+ else:\n+ headers.append((name, val))\n+\n+ if parsed_args.auth:\n+ user, password = parsed_args.auth.split(':', 1)\n+ headers.append(('Authorization', basic_auth_header(user, password)))\n+\n+ return headers, cookies\n+\n+\n def curl_to_request_kwargs(curl_command, ignore_unknown_options=True):\n \"\"\"Convert a cURL command syntax to Request kwargs.\n \n@@ -70,21 +90,7 @@\n \n result = {'method': method.upper(), 'url': url}\n \n- headers = []\n- cookies = {}\n- for header in parsed_args.headers or ():\n- name, val = header.split(':', 1)\n- name = name.strip()\n- val = val.strip()\n- if name.title() == 'Cookie':\n- for name, morsel in SimpleCookie(val).items():\n- cookies[name] = morsel.value\n- else:\n- headers.append((name, val))\n-\n- if parsed_args.auth:\n- user, password = parsed_args.auth.split(':', 1)\n- headers.append(('Authorization', basic_auth_header(user, password)))\n+ headers, cookies = _parse_headers_and_cookies(parsed_args)\n \n if headers:\n result['headers'] = headers\n", "issue": "refactoring curl_to_request_kwargs to reduce cyclomatic complexity\n<!--\r\n\r\nThanks for taking an interest in Scrapy!\r\n\r\nIf you have a question that starts with \"How to...\", please see the Scrapy Community page: https://scrapy.org/community/.\r\nThe GitHub issue tracker's purpose is to deal with bug reports and feature requests for the project itself.\r\n\r\nKeep in mind that by filing an issue, you are expected to comply with Scrapy's Code of Conduct, including treating everyone with respect: https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md\r\n\r\nThe following is a suggested template to structure your pull request, you can find more guidelines at https://doc.scrapy.org/en/latest/contributing.html#writing-patches and https://doc.scrapy.org/en/latest/contributing.html#submitting-patches\r\n\r\n-->\r\n\r\n## Summary\r\n\r\nAfter some exploring with cyclomatic complexity tools (lizard), the function was found to have the second highest complexity. \r\n\r\n## Motivation\r\n\r\nLow complexity allows for higher readability, testability and maintainability. \r\n\r\n## Solution\r\n\r\nRefactor\r\n\r\n## Additional context\r\n\r\nN/A\r\n\n", "before_files": [{"content": "import argparse\nimport warnings\nfrom shlex import split\nfrom http.cookies import SimpleCookie\nfrom urllib.parse import urlparse\n\nfrom w3lib.http import basic_auth_header\n\n\nclass CurlParser(argparse.ArgumentParser):\n def error(self, message):\n error_msg = f'There was an error parsing the curl command: {message}'\n raise ValueError(error_msg)\n\n\ncurl_parser = CurlParser()\ncurl_parser.add_argument('url')\ncurl_parser.add_argument('-H', '--header', dest='headers', action='append')\ncurl_parser.add_argument('-X', '--request', dest='method')\ncurl_parser.add_argument('-d', '--data', '--data-raw', dest='data')\ncurl_parser.add_argument('-u', '--user', dest='auth')\n\n\nsafe_to_ignore_arguments = [\n ['--compressed'],\n # `--compressed` argument is not safe to ignore, but it's included here\n # because the `HttpCompressionMiddleware` is enabled by default\n ['-s', '--silent'],\n ['-v', '--verbose'],\n ['-#', '--progress-bar']\n]\n\nfor argument in safe_to_ignore_arguments:\n curl_parser.add_argument(*argument, action='store_true')\n\n\ndef curl_to_request_kwargs(curl_command, ignore_unknown_options=True):\n \"\"\"Convert a cURL command syntax to Request kwargs.\n\n :param str curl_command: string containing the curl command\n :param bool ignore_unknown_options: If true, only a warning is emitted when\n cURL options are unknown. Otherwise\n raises an error. (default: True)\n :return: dictionary of Request kwargs\n \"\"\"\n\n curl_args = split(curl_command)\n\n if curl_args[0] != 'curl':\n raise ValueError('A curl command must start with \"curl\"')\n\n parsed_args, argv = curl_parser.parse_known_args(curl_args[1:])\n\n if argv:\n msg = f'Unrecognized options: {\", \".join(argv)}'\n if ignore_unknown_options:\n warnings.warn(msg)\n else:\n raise ValueError(msg)\n\n url = parsed_args.url\n\n # curl automatically prepends 'http' if the scheme is missing, but Request\n # needs the scheme to work\n parsed_url = urlparse(url)\n if not parsed_url.scheme:\n url = 'http://' + url\n\n method = parsed_args.method or 'GET'\n\n result = {'method': method.upper(), 'url': url}\n\n headers = []\n cookies = {}\n for header in parsed_args.headers or ():\n name, val = header.split(':', 1)\n name = name.strip()\n val = val.strip()\n if name.title() == 'Cookie':\n for name, morsel in SimpleCookie(val).items():\n cookies[name] = morsel.value\n else:\n headers.append((name, val))\n\n if parsed_args.auth:\n user, password = parsed_args.auth.split(':', 1)\n headers.append(('Authorization', basic_auth_header(user, password)))\n\n if headers:\n result['headers'] = headers\n if cookies:\n result['cookies'] = cookies\n if parsed_args.data:\n result['body'] = parsed_args.data\n if not parsed_args.method:\n # if the \"data\" is specified but the \"method\" is not specified,\n # the default method is 'POST'\n result['method'] = 'POST'\n\n return result\n", "path": "scrapy/utils/curl.py"}]} | 1,682 | 430 |
gh_patches_debug_802 | rasdani/github-patches | git_diff | pyca__cryptography-1599 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update year in copyright notice for vectors
Refs #1597
</issue>
<code>
[start of vectors/cryptography_vectors/__about__.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 __all__ = [
8 "__title__", "__summary__", "__uri__", "__version__", "__author__",
9 "__email__", "__license__", "__copyright__",
10 ]
11
12 __title__ = "cryptography_vectors"
13 __summary__ = "Test vectors for the cryptography package."
14
15 __uri__ = "https://github.com/pyca/cryptography"
16
17 __version__ = "0.8.dev1"
18
19 __author__ = "The cryptography developers"
20 __email__ = "[email protected]"
21
22 __license__ = "BSD or Apache License, Version 2.0"
23 __copyright__ = "Copyright 2013-2014 %s" % __author__
24
[end of vectors/cryptography_vectors/__about__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/vectors/cryptography_vectors/__about__.py b/vectors/cryptography_vectors/__about__.py
--- a/vectors/cryptography_vectors/__about__.py
+++ b/vectors/cryptography_vectors/__about__.py
@@ -20,4 +20,4 @@
__email__ = "[email protected]"
__license__ = "BSD or Apache License, Version 2.0"
-__copyright__ = "Copyright 2013-2014 %s" % __author__
+__copyright__ = "Copyright 2013-2015 %s" % __author__
| {"golden_diff": "diff --git a/vectors/cryptography_vectors/__about__.py b/vectors/cryptography_vectors/__about__.py\n--- a/vectors/cryptography_vectors/__about__.py\n+++ b/vectors/cryptography_vectors/__about__.py\n@@ -20,4 +20,4 @@\n __email__ = \"[email protected]\"\n \n __license__ = \"BSD or Apache License, Version 2.0\"\n-__copyright__ = \"Copyright 2013-2014 %s\" % __author__\n+__copyright__ = \"Copyright 2013-2015 %s\" % __author__\n", "issue": "Update year in copyright notice for vectors\nRefs #1597 \n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\n__all__ = [\n \"__title__\", \"__summary__\", \"__uri__\", \"__version__\", \"__author__\",\n \"__email__\", \"__license__\", \"__copyright__\",\n]\n\n__title__ = \"cryptography_vectors\"\n__summary__ = \"Test vectors for the cryptography package.\"\n\n__uri__ = \"https://github.com/pyca/cryptography\"\n\n__version__ = \"0.8.dev1\"\n\n__author__ = \"The cryptography developers\"\n__email__ = \"[email protected]\"\n\n__license__ = \"BSD or Apache License, Version 2.0\"\n__copyright__ = \"Copyright 2013-2014 %s\" % __author__\n", "path": "vectors/cryptography_vectors/__about__.py"}]} | 805 | 138 |
gh_patches_debug_1453 | rasdani/github-patches | git_diff | rlworkgroup__garage-971 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pytest flag --strict-markers requires version 4.5.0
pytest flag `--strict-markers` in https://github.com/rlworkgroup/garage/blob/master/setup.cfg#L79 requires version >= 4.5.0.
See https://docs.pytest.org/en/latest/changelog.html#pytest-4-5-0-2019-05-11
</issue>
<code>
[start of setup.py]
1 """setuptools based setup module."""
2 from setuptools import find_packages
3 from setuptools import setup
4
5 TF_VERSION = '<1.16,>=1.15.0'
6 GYM_VERSION = '==0.12.4'
7
8 # Required dependencies
9 REQUIRED = [
10 # Please keep alphabetized
11 'akro==0.0.6',
12 'cached_property',
13 'click',
14 'cloudpickle',
15 'cma==2.7.0',
16 'dowel==0.0.2',
17 'gym[atari,box2d,classic_control]' + GYM_VERSION,
18 'joblib<0.13,>=0.12',
19 'matplotlib',
20 'numpy>=1.14.5',
21 'psutil',
22 # Pyglet 1.4.0 introduces some api change which breaks some
23 # gym environments
24 # See: https://github.com/openai/gym/issues/1588
25 'pyglet<1.4.0,>=1.3.0',
26 'pyprind',
27 'python-dateutil',
28 'torch==1.3.0',
29 'ray',
30 'scikit-image',
31 'scipy',
32 'tensorflow' + TF_VERSION,
33 'tensorflow-probability',
34 'torchvision==0.4.1'
35 ]
36
37 # Dependencies for optional features
38 EXTRAS = {}
39
40 EXTRAS['mujoco'] = [
41 'mujoco-py<2.1,>=2.0',
42 'gym[all]' + GYM_VERSION,
43 ]
44
45 EXTRAS['dm_control'] = [
46 # dm_control throws an error during install about not being able to
47 # find a build dependency (absl-py). Later pip executes the `install`
48 # command again and the install succeeds because absl-py has been
49 # installed. This is stupid, but harmless.
50 'dm_control @ https://api.github.com/repos/deepmind/dm_control/tarball/7a36377879c57777e5d5b4da5aae2cd2a29b607a', # pylint: disable=line-too-long; # noqa: E501
51 ]
52
53 EXTRAS['all'] = list(set(sum(EXTRAS.values(), [])))
54
55 # dependencies for using gpu, not included in 'all'
56 EXTRAS['gpu'] = ['tensorflow-gpu' + TF_VERSION]
57
58 # Development dependencies (*not* included in 'all')
59 EXTRAS['dev'] = [
60 # Please keep alphabetized
61 'baselines @ https://api.github.com/repos/openai/baselines/tarball/f2729693253c0ef4d4086231d36e0a4307ec1cb3', # pylint: disable=line-too-long; # noqa: E501
62 'flake8',
63 'flake8-docstrings>=1.5.0',
64 'flake8-import-order',
65 'gtimer',
66 'pandas',
67 'pep8-naming==0.7.0',
68 'pre-commit',
69 'pycodestyle>=2.5.0',
70 'pydocstyle>=4.0.0',
71 'pylint>=2.4.3',
72 'pytest>=3.6', # Required for pytest-cov on Python 3.6
73 'pytest-cov',
74 'pytest-xdist',
75 'recommonmark',
76 'rlkit @ git+https://github.com/vitchyr/rlkit/@1d469a509b797ca04a39b8734c1816ca7d108fc8', # pylint: disable=line-too-long; # noqa: E501
77 'seaborn',
78 'sphinx',
79 'sphinx_rtd_theme',
80 'yapf==0.28.0',
81 ]
82
83 with open('README.md') as f:
84 README = f.read()
85
86 # Get the package version dynamically
87 with open('VERSION') as v:
88 VERSION = v.read().strip()
89
90 setup(
91 name='garage',
92 version=VERSION,
93 author='Reinforcement Learning Working Group',
94 description='A toolkit for reproducible reinforcement learning research',
95 url='https://github.com/rlworkgroup/garage',
96 packages=find_packages(where='src'),
97 package_dir={'': 'src'},
98 scripts=['scripts/garage'],
99 python_requires='>=3.5',
100 install_requires=REQUIRED,
101 extras_require=EXTRAS,
102 license='MIT',
103 long_description=README,
104 long_description_content_type='text/markdown',
105 classifiers=[
106 'Development Status :: 4 - Beta',
107 'Intended Audience :: Developers',
108 'Intended Audience :: Education',
109 'Intended Audience :: Science/Research',
110 'License :: OSI Approved :: MIT License',
111 'Programming Language :: Python :: 3.5',
112 'Programming Language :: Python :: 3.6',
113 'Programming Language :: Python :: 3.7',
114 'Programming Language :: Python :: 3 :: Only',
115 'Topic :: Scientific/Engineering :: Artificial Intelligence',
116 'Topic :: Scientific/Engineering :: Mathematics',
117 'Topic :: Software Development :: Libraries',
118 ],
119 )
120
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -69,7 +69,7 @@
'pycodestyle>=2.5.0',
'pydocstyle>=4.0.0',
'pylint>=2.4.3',
- 'pytest>=3.6', # Required for pytest-cov on Python 3.6
+ 'pytest>=4.5.0', # Required for strict-markers
'pytest-cov',
'pytest-xdist',
'recommonmark',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -69,7 +69,7 @@\n 'pycodestyle>=2.5.0',\n 'pydocstyle>=4.0.0',\n 'pylint>=2.4.3',\n- 'pytest>=3.6', # Required for pytest-cov on Python 3.6\n+ 'pytest>=4.5.0', # Required for strict-markers\n 'pytest-cov',\n 'pytest-xdist',\n 'recommonmark',\n", "issue": "pytest flag --strict-markers requires version 4.5.0\npytest flag `--strict-markers` in https://github.com/rlworkgroup/garage/blob/master/setup.cfg#L79 requires version >= 4.5.0. \r\n\r\nSee https://docs.pytest.org/en/latest/changelog.html#pytest-4-5-0-2019-05-11\n", "before_files": [{"content": "\"\"\"setuptools based setup module.\"\"\"\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nTF_VERSION = '<1.16,>=1.15.0'\nGYM_VERSION = '==0.12.4'\n\n# Required dependencies\nREQUIRED = [\n # Please keep alphabetized\n 'akro==0.0.6',\n 'cached_property',\n 'click',\n 'cloudpickle',\n 'cma==2.7.0',\n 'dowel==0.0.2',\n 'gym[atari,box2d,classic_control]' + GYM_VERSION,\n 'joblib<0.13,>=0.12',\n 'matplotlib',\n 'numpy>=1.14.5',\n 'psutil',\n # Pyglet 1.4.0 introduces some api change which breaks some\n # gym environments\n # See: https://github.com/openai/gym/issues/1588\n 'pyglet<1.4.0,>=1.3.0',\n 'pyprind',\n 'python-dateutil',\n 'torch==1.3.0',\n 'ray',\n 'scikit-image',\n 'scipy',\n 'tensorflow' + TF_VERSION,\n 'tensorflow-probability',\n 'torchvision==0.4.1'\n]\n\n# Dependencies for optional features\nEXTRAS = {}\n\nEXTRAS['mujoco'] = [\n 'mujoco-py<2.1,>=2.0',\n 'gym[all]' + GYM_VERSION,\n]\n\nEXTRAS['dm_control'] = [\n # dm_control throws an error during install about not being able to\n # find a build dependency (absl-py). Later pip executes the `install`\n # command again and the install succeeds because absl-py has been\n # installed. This is stupid, but harmless.\n 'dm_control @ https://api.github.com/repos/deepmind/dm_control/tarball/7a36377879c57777e5d5b4da5aae2cd2a29b607a', # pylint: disable=line-too-long; # noqa: E501\n]\n\nEXTRAS['all'] = list(set(sum(EXTRAS.values(), [])))\n\n# dependencies for using gpu, not included in 'all'\nEXTRAS['gpu'] = ['tensorflow-gpu' + TF_VERSION]\n\n# Development dependencies (*not* included in 'all')\nEXTRAS['dev'] = [\n # Please keep alphabetized\n 'baselines @ https://api.github.com/repos/openai/baselines/tarball/f2729693253c0ef4d4086231d36e0a4307ec1cb3', # pylint: disable=line-too-long; # noqa: E501\n 'flake8',\n 'flake8-docstrings>=1.5.0',\n 'flake8-import-order',\n 'gtimer',\n 'pandas',\n 'pep8-naming==0.7.0',\n 'pre-commit',\n 'pycodestyle>=2.5.0',\n 'pydocstyle>=4.0.0',\n 'pylint>=2.4.3',\n 'pytest>=3.6', # Required for pytest-cov on Python 3.6\n 'pytest-cov',\n 'pytest-xdist',\n 'recommonmark',\n 'rlkit @ git+https://github.com/vitchyr/rlkit/@1d469a509b797ca04a39b8734c1816ca7d108fc8', # pylint: disable=line-too-long; # noqa: E501\n 'seaborn',\n 'sphinx',\n 'sphinx_rtd_theme',\n 'yapf==0.28.0',\n]\n\nwith open('README.md') as f:\n README = f.read()\n\n# Get the package version dynamically\nwith open('VERSION') as v:\n VERSION = v.read().strip()\n\nsetup(\n name='garage',\n version=VERSION,\n author='Reinforcement Learning Working Group',\n description='A toolkit for reproducible reinforcement learning research',\n url='https://github.com/rlworkgroup/garage',\n packages=find_packages(where='src'),\n package_dir={'': 'src'},\n scripts=['scripts/garage'],\n python_requires='>=3.5',\n install_requires=REQUIRED,\n extras_require=EXTRAS,\n license='MIT',\n long_description=README,\n long_description_content_type='text/markdown',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries',\n ],\n)\n", "path": "setup.py"}]} | 2,032 | 128 |
gh_patches_debug_20214 | rasdani/github-patches | git_diff | getsentry__sentry-python-921 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Not working with older boto version
Hello, we use it in Django==2.1.7 app and this row breaks the app.
https://github.com/getsentry/sentry-python/blob/cc08a6bed116e09db41c712c20ab63eb0a839e41/sentry_sdk/integrations/boto3.py#L36
For versions
boto3==1.7.45
botocore==1.10.84
this throws
`
AttributeError: 'str' object has no attribute 'hyphenize'`
I'm not sure the base of the integrations but I thought they must be enabled in settings, but this part of Boto3Integration is triggered even if we have not enabled it in django settings.
</issue>
<code>
[start of sentry_sdk/integrations/boto3.py]
1 from __future__ import absolute_import
2
3 from sentry_sdk import Hub
4 from sentry_sdk.integrations import Integration, DidNotEnable
5 from sentry_sdk.tracing import Span
6
7 from sentry_sdk._functools import partial
8 from sentry_sdk._types import MYPY
9
10 if MYPY:
11 from typing import Any
12 from typing import Dict
13 from typing import Optional
14 from typing import Type
15
16 try:
17 from botocore.client import BaseClient # type: ignore
18 from botocore.response import StreamingBody # type: ignore
19 from botocore.awsrequest import AWSRequest # type: ignore
20 except ImportError:
21 raise DidNotEnable("botocore is not installed")
22
23
24 class Boto3Integration(Integration):
25 identifier = "boto3"
26
27 @staticmethod
28 def setup_once():
29 # type: () -> None
30 orig_init = BaseClient.__init__
31
32 def sentry_patched_init(self, *args, **kwargs):
33 # type: (Type[BaseClient], *Any, **Any) -> None
34 orig_init(self, *args, **kwargs)
35 meta = self.meta
36 service_id = meta.service_model.service_id.hyphenize()
37 meta.events.register(
38 "request-created",
39 partial(_sentry_request_created, service_id=service_id),
40 )
41 meta.events.register("after-call", _sentry_after_call)
42 meta.events.register("after-call-error", _sentry_after_call_error)
43
44 BaseClient.__init__ = sentry_patched_init
45
46
47 def _sentry_request_created(service_id, request, operation_name, **kwargs):
48 # type: (str, AWSRequest, str, **Any) -> None
49 hub = Hub.current
50 if hub.get_integration(Boto3Integration) is None:
51 return
52
53 description = "aws.%s.%s" % (service_id, operation_name)
54 span = hub.start_span(
55 hub=hub,
56 op="aws.request",
57 description=description,
58 )
59 span.set_tag("aws.service_id", service_id)
60 span.set_tag("aws.operation_name", operation_name)
61 span.set_data("aws.request.url", request.url)
62
63 # We do it in order for subsequent http calls/retries be
64 # attached to this span.
65 span.__enter__()
66
67 # request.context is an open-ended data-structure
68 # where we can add anything useful in request life cycle.
69 request.context["_sentrysdk_span"] = span
70
71
72 def _sentry_after_call(context, parsed, **kwargs):
73 # type: (Dict[str, Any], Dict[str, Any], **Any) -> None
74 span = context.pop("_sentrysdk_span", None) # type: Optional[Span]
75
76 # Span could be absent if the integration is disabled.
77 if span is None:
78 return
79 span.__exit__(None, None, None)
80
81 body = parsed.get("Body")
82 if not isinstance(body, StreamingBody):
83 return
84
85 streaming_span = span.start_child(
86 op="aws.request.stream",
87 description=span.description,
88 )
89
90 orig_read = body.read
91 orig_close = body.close
92
93 def sentry_streaming_body_read(*args, **kwargs):
94 # type: (*Any, **Any) -> bytes
95 try:
96 ret = orig_read(*args, **kwargs)
97 if not ret:
98 streaming_span.finish()
99 return ret
100 except Exception:
101 streaming_span.finish()
102 raise
103
104 body.read = sentry_streaming_body_read
105
106 def sentry_streaming_body_close(*args, **kwargs):
107 # type: (*Any, **Any) -> None
108 streaming_span.finish()
109 orig_close(*args, **kwargs)
110
111 body.close = sentry_streaming_body_close
112
113
114 def _sentry_after_call_error(context, exception, **kwargs):
115 # type: (Dict[str, Any], Type[BaseException], **Any) -> None
116 span = context.pop("_sentrysdk_span", None) # type: Optional[Span]
117
118 # Span could be absent if the integration is disabled.
119 if span is None:
120 return
121 span.__exit__(type(exception), exception, None)
122
[end of sentry_sdk/integrations/boto3.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sentry_sdk/integrations/boto3.py b/sentry_sdk/integrations/boto3.py
--- a/sentry_sdk/integrations/boto3.py
+++ b/sentry_sdk/integrations/boto3.py
@@ -14,6 +14,7 @@
from typing import Type
try:
+ from botocore import __version__ as BOTOCORE_VERSION # type: ignore
from botocore.client import BaseClient # type: ignore
from botocore.response import StreamingBody # type: ignore
from botocore.awsrequest import AWSRequest # type: ignore
@@ -27,6 +28,14 @@
@staticmethod
def setup_once():
# type: () -> None
+ try:
+ version = tuple(map(int, BOTOCORE_VERSION.split(".")[:3]))
+ except (ValueError, TypeError):
+ raise DidNotEnable(
+ "Unparsable botocore version: {}".format(BOTOCORE_VERSION)
+ )
+ if version < (1, 12):
+ raise DidNotEnable("Botocore 1.12 or newer is required.")
orig_init = BaseClient.__init__
def sentry_patched_init(self, *args, **kwargs):
| {"golden_diff": "diff --git a/sentry_sdk/integrations/boto3.py b/sentry_sdk/integrations/boto3.py\n--- a/sentry_sdk/integrations/boto3.py\n+++ b/sentry_sdk/integrations/boto3.py\n@@ -14,6 +14,7 @@\n from typing import Type\n \n try:\n+ from botocore import __version__ as BOTOCORE_VERSION # type: ignore\n from botocore.client import BaseClient # type: ignore\n from botocore.response import StreamingBody # type: ignore\n from botocore.awsrequest import AWSRequest # type: ignore\n@@ -27,6 +28,14 @@\n @staticmethod\n def setup_once():\n # type: () -> None\n+ try:\n+ version = tuple(map(int, BOTOCORE_VERSION.split(\".\")[:3]))\n+ except (ValueError, TypeError):\n+ raise DidNotEnable(\n+ \"Unparsable botocore version: {}\".format(BOTOCORE_VERSION)\n+ )\n+ if version < (1, 12):\n+ raise DidNotEnable(\"Botocore 1.12 or newer is required.\")\n orig_init = BaseClient.__init__\n \n def sentry_patched_init(self, *args, **kwargs):\n", "issue": "Not working with older boto version\nHello, we use it in Django==2.1.7 app and this row breaks the app.\r\n\r\nhttps://github.com/getsentry/sentry-python/blob/cc08a6bed116e09db41c712c20ab63eb0a839e41/sentry_sdk/integrations/boto3.py#L36\r\n\r\nFor versions\r\nboto3==1.7.45\r\nbotocore==1.10.84\r\n\r\nthis throws\r\n`\r\nAttributeError: 'str' object has no attribute 'hyphenize'`\r\n\r\nI'm not sure the base of the integrations but I thought they must be enabled in settings, but this part of Boto3Integration is triggered even if we have not enabled it in django settings.\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom sentry_sdk import Hub\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk.tracing import Span\n\nfrom sentry_sdk._functools import partial\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from typing import Any\n from typing import Dict\n from typing import Optional\n from typing import Type\n\ntry:\n from botocore.client import BaseClient # type: ignore\n from botocore.response import StreamingBody # type: ignore\n from botocore.awsrequest import AWSRequest # type: ignore\nexcept ImportError:\n raise DidNotEnable(\"botocore is not installed\")\n\n\nclass Boto3Integration(Integration):\n identifier = \"boto3\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n orig_init = BaseClient.__init__\n\n def sentry_patched_init(self, *args, **kwargs):\n # type: (Type[BaseClient], *Any, **Any) -> None\n orig_init(self, *args, **kwargs)\n meta = self.meta\n service_id = meta.service_model.service_id.hyphenize()\n meta.events.register(\n \"request-created\",\n partial(_sentry_request_created, service_id=service_id),\n )\n meta.events.register(\"after-call\", _sentry_after_call)\n meta.events.register(\"after-call-error\", _sentry_after_call_error)\n\n BaseClient.__init__ = sentry_patched_init\n\n\ndef _sentry_request_created(service_id, request, operation_name, **kwargs):\n # type: (str, AWSRequest, str, **Any) -> None\n hub = Hub.current\n if hub.get_integration(Boto3Integration) is None:\n return\n\n description = \"aws.%s.%s\" % (service_id, operation_name)\n span = hub.start_span(\n hub=hub,\n op=\"aws.request\",\n description=description,\n )\n span.set_tag(\"aws.service_id\", service_id)\n span.set_tag(\"aws.operation_name\", operation_name)\n span.set_data(\"aws.request.url\", request.url)\n\n # We do it in order for subsequent http calls/retries be\n # attached to this span.\n span.__enter__()\n\n # request.context is an open-ended data-structure\n # where we can add anything useful in request life cycle.\n request.context[\"_sentrysdk_span\"] = span\n\n\ndef _sentry_after_call(context, parsed, **kwargs):\n # type: (Dict[str, Any], Dict[str, Any], **Any) -> None\n span = context.pop(\"_sentrysdk_span\", None) # type: Optional[Span]\n\n # Span could be absent if the integration is disabled.\n if span is None:\n return\n span.__exit__(None, None, None)\n\n body = parsed.get(\"Body\")\n if not isinstance(body, StreamingBody):\n return\n\n streaming_span = span.start_child(\n op=\"aws.request.stream\",\n description=span.description,\n )\n\n orig_read = body.read\n orig_close = body.close\n\n def sentry_streaming_body_read(*args, **kwargs):\n # type: (*Any, **Any) -> bytes\n try:\n ret = orig_read(*args, **kwargs)\n if not ret:\n streaming_span.finish()\n return ret\n except Exception:\n streaming_span.finish()\n raise\n\n body.read = sentry_streaming_body_read\n\n def sentry_streaming_body_close(*args, **kwargs):\n # type: (*Any, **Any) -> None\n streaming_span.finish()\n orig_close(*args, **kwargs)\n\n body.close = sentry_streaming_body_close\n\n\ndef _sentry_after_call_error(context, exception, **kwargs):\n # type: (Dict[str, Any], Type[BaseException], **Any) -> None\n span = context.pop(\"_sentrysdk_span\", None) # type: Optional[Span]\n\n # Span could be absent if the integration is disabled.\n if span is None:\n return\n span.__exit__(type(exception), exception, None)\n", "path": "sentry_sdk/integrations/boto3.py"}]} | 1,900 | 286 |
gh_patches_debug_30061 | rasdani/github-patches | git_diff | Miserlou__Zappa-1993 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Set_Cookie option sets duplicate cookies on AWS Lambda
## Context
I have an API running Python3.7 and Zappa (in a virtualenv).
I am setting 6 cookies by using the option "set_cookie" in flask. It looks something like this:
```
resp = make_response(jsonify({'success':'true', 'message': 'Successfully authenticated!'}), 200)
resp.set_cookie("1", value="1", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
resp.set_cookie("2", value="2", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
resp.set_cookie("3", value="3", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
resp.set_cookie("4", value="4", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
resp.set_cookie("5", value="5", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
resp.set_cookie("6", value="6", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
return resp
```
On localhost testing Flask, this works as expected.
If I deploy the same code to AWS using Zappa, the response header will show 36 "set-cookie" headers. So the formula here is n^2. So if I add 4 cookies using the above method, it will show 16 in the request header.
The browser takes care of duplicate cookies, but the response from the API is still huge because of this issue.
Same thing happens if I use:
`resp.headers.add("set-cookie""1"="1; Domain=.example.com; Max-Age=3600; Secure; Path=/; SameSite=Lax")`
## Expected Behavior
I believe Zappa or something at AWS is at fault here. Expected behaviour is to send 6 "set-cookie" headers and not 36.
## Actual Behavior
Sets n^2 cookies as response.
## Steps to Reproduce
Deploy a Flask route using Zappa which sets the cookies. Use the code above.
## Your Environment
* Zappa version used: 0.48.2
* Operating System and Python version: Ubuntu 18.04, Python3.7
* The output of `pip freeze`: https://pastebin.com/d4QTaTuG
* Your `zappa_settings.py`: https://pastebin.com/d1GK8sbe
</issue>
<code>
[start of zappa/middleware.py]
1 from werkzeug.wsgi import ClosingIterator
2
3
4 def all_casings(input_string):
5 """
6 Permute all casings of a given string.
7
8 A pretty algorithm, via @Amber
9 http://stackoverflow.com/questions/6792803/finding-all-possible-case-permutations-in-python
10 """
11 if not input_string:
12 yield ""
13 else:
14 first = input_string[:1]
15 if first.lower() == first.upper():
16 for sub_casing in all_casings(input_string[1:]):
17 yield first + sub_casing
18 else:
19 for sub_casing in all_casings(input_string[1:]):
20 yield first.lower() + sub_casing
21 yield first.upper() + sub_casing
22
23
24 class ZappaWSGIMiddleware(object):
25 """
26 Middleware functions necessary for a Zappa deployment.
27
28 Most hacks have now been remove except for Set-Cookie permutation.
29 """
30 def __init__(self, application):
31 self.application = application
32
33 def __call__(self, environ, start_response):
34 """
35 We must case-mangle the Set-Cookie header name or AWS will use only a
36 single one of these headers.
37 """
38
39 def encode_response(status, headers, exc_info=None):
40 """
41 Create an APIGW-acceptable version of our cookies.
42
43 We have to use a bizarre hack that turns multiple Set-Cookie headers into
44 their case-permutated format, ex:
45
46 Set-cookie:
47 sEt-cookie:
48 seT-cookie:
49
50 To get around an API Gateway limitation.
51
52 This is weird, but better than our previous hack of creating a Base58-encoded
53 supercookie.
54 """
55
56 # All the non-cookie headers should be sent unharmed.
57
58 # The main app can send 'set-cookie' headers in any casing
59 # Related: https://github.com/Miserlou/Zappa/issues/990
60 new_headers = [header for header in headers
61 if ((type(header[0]) != str) or (header[0].lower() != 'set-cookie'))]
62 cookie_headers = [header for header in headers
63 if ((type(header[0]) == str) and (header[0].lower() == "set-cookie"))]
64 for header, new_name in zip(cookie_headers,
65 all_casings("Set-Cookie")):
66 new_headers.append((new_name, header[1]))
67 return start_response(status, new_headers, exc_info)
68
69 # Call the application with our modifier
70 response = self.application(environ, encode_response)
71
72 # Return the response as a WSGI-safe iterator
73 return ClosingIterator(response)
74
[end of zappa/middleware.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zappa/middleware.py b/zappa/middleware.py
--- a/zappa/middleware.py
+++ b/zappa/middleware.py
@@ -38,32 +38,17 @@
def encode_response(status, headers, exc_info=None):
"""
- Create an APIGW-acceptable version of our cookies.
-
- We have to use a bizarre hack that turns multiple Set-Cookie headers into
- their case-permutated format, ex:
-
- Set-cookie:
- sEt-cookie:
- seT-cookie:
-
- To get around an API Gateway limitation.
-
- This is weird, but better than our previous hack of creating a Base58-encoded
- supercookie.
+ This makes the 'set-cookie' headers name lowercase,
+ all the non-cookie headers should be sent unharmed.
+ Related: https://github.com/Miserlou/Zappa/issues/1965
"""
- # All the non-cookie headers should be sent unharmed.
-
- # The main app can send 'set-cookie' headers in any casing
- # Related: https://github.com/Miserlou/Zappa/issues/990
new_headers = [header for header in headers
if ((type(header[0]) != str) or (header[0].lower() != 'set-cookie'))]
- cookie_headers = [header for header in headers
+ cookie_headers = [(header[0].lower(), header[1]) for header in headers
if ((type(header[0]) == str) and (header[0].lower() == "set-cookie"))]
- for header, new_name in zip(cookie_headers,
- all_casings("Set-Cookie")):
- new_headers.append((new_name, header[1]))
+ new_headers = new_headers + cookie_headers
+
return start_response(status, new_headers, exc_info)
# Call the application with our modifier
| {"golden_diff": "diff --git a/zappa/middleware.py b/zappa/middleware.py\n--- a/zappa/middleware.py\n+++ b/zappa/middleware.py\n@@ -38,32 +38,17 @@\n \n def encode_response(status, headers, exc_info=None):\n \"\"\"\n- Create an APIGW-acceptable version of our cookies.\n-\n- We have to use a bizarre hack that turns multiple Set-Cookie headers into\n- their case-permutated format, ex:\n-\n- Set-cookie:\n- sEt-cookie:\n- seT-cookie:\n-\n- To get around an API Gateway limitation.\n-\n- This is weird, but better than our previous hack of creating a Base58-encoded\n- supercookie.\n+ This makes the 'set-cookie' headers name lowercase,\n+ all the non-cookie headers should be sent unharmed.\n+ Related: https://github.com/Miserlou/Zappa/issues/1965\n \"\"\"\n \n- # All the non-cookie headers should be sent unharmed.\n- \n- # The main app can send 'set-cookie' headers in any casing\n- # Related: https://github.com/Miserlou/Zappa/issues/990\n new_headers = [header for header in headers\n if ((type(header[0]) != str) or (header[0].lower() != 'set-cookie'))]\n- cookie_headers = [header for header in headers \n+ cookie_headers = [(header[0].lower(), header[1]) for header in headers\n if ((type(header[0]) == str) and (header[0].lower() == \"set-cookie\"))]\n- for header, new_name in zip(cookie_headers,\n- all_casings(\"Set-Cookie\")):\n- new_headers.append((new_name, header[1]))\n+ new_headers = new_headers + cookie_headers\n+\n return start_response(status, new_headers, exc_info)\n \n # Call the application with our modifier\n", "issue": "Set_Cookie option sets duplicate cookies on AWS Lambda\n## Context\r\nI have an API running Python3.7 and Zappa (in a virtualenv).\r\nI am setting 6 cookies by using the option \"set_cookie\" in flask. It looks something like this:\r\n```\r\nresp = make_response(jsonify({'success':'true', 'message': 'Successfully authenticated!'}), 200)\r\nresp.set_cookie(\"1\", value=\"1\", secure=True, samesite='Lax', domain=\".example.com\",max_age=3600)\r\nresp.set_cookie(\"2\", value=\"2\", secure=True, samesite='Lax', domain=\".example.com\",max_age=3600)\r\nresp.set_cookie(\"3\", value=\"3\", secure=True, samesite='Lax', domain=\".example.com\",max_age=3600)\r\nresp.set_cookie(\"4\", value=\"4\", secure=True, samesite='Lax', domain=\".example.com\",max_age=3600)\r\nresp.set_cookie(\"5\", value=\"5\", secure=True, samesite='Lax', domain=\".example.com\",max_age=3600)\r\nresp.set_cookie(\"6\", value=\"6\", secure=True, samesite='Lax', domain=\".example.com\",max_age=3600)\r\nreturn resp\r\n```\r\n\r\nOn localhost testing Flask, this works as expected.\r\n\r\nIf I deploy the same code to AWS using Zappa, the response header will show 36 \"set-cookie\" headers. So the formula here is n^2. So if I add 4 cookies using the above method, it will show 16 in the request header.\r\n\r\nThe browser takes care of duplicate cookies, but the response from the API is still huge because of this issue.\r\n\r\nSame thing happens if I use: \r\n`resp.headers.add(\"set-cookie\"\"1\"=\"1; Domain=.example.com; Max-Age=3600; Secure; Path=/; SameSite=Lax\")`\r\n\r\n## Expected Behavior\r\nI believe Zappa or something at AWS is at fault here. Expected behaviour is to send 6 \"set-cookie\" headers and not 36.\r\n\r\n## Actual Behavior\r\nSets n^2 cookies as response.\r\n\r\n## Steps to Reproduce\r\nDeploy a Flask route using Zappa which sets the cookies. Use the code above.\r\n\r\n## Your Environment\r\n* Zappa version used: 0.48.2\r\n* Operating System and Python version: Ubuntu 18.04, Python3.7\r\n* The output of `pip freeze`: https://pastebin.com/d4QTaTuG\r\n* Your `zappa_settings.py`: https://pastebin.com/d1GK8sbe\n", "before_files": [{"content": "from werkzeug.wsgi import ClosingIterator\n\n\ndef all_casings(input_string):\n \"\"\"\n Permute all casings of a given string.\n\n A pretty algorithm, via @Amber\n http://stackoverflow.com/questions/6792803/finding-all-possible-case-permutations-in-python\n \"\"\"\n if not input_string:\n yield \"\"\n else:\n first = input_string[:1]\n if first.lower() == first.upper():\n for sub_casing in all_casings(input_string[1:]):\n yield first + sub_casing\n else:\n for sub_casing in all_casings(input_string[1:]):\n yield first.lower() + sub_casing\n yield first.upper() + sub_casing\n\n\nclass ZappaWSGIMiddleware(object):\n \"\"\"\n Middleware functions necessary for a Zappa deployment.\n\n Most hacks have now been remove except for Set-Cookie permutation.\n \"\"\"\n def __init__(self, application):\n self.application = application\n\n def __call__(self, environ, start_response):\n \"\"\"\n We must case-mangle the Set-Cookie header name or AWS will use only a\n single one of these headers.\n \"\"\"\n\n def encode_response(status, headers, exc_info=None):\n \"\"\"\n Create an APIGW-acceptable version of our cookies.\n\n We have to use a bizarre hack that turns multiple Set-Cookie headers into\n their case-permutated format, ex:\n\n Set-cookie:\n sEt-cookie:\n seT-cookie:\n\n To get around an API Gateway limitation.\n\n This is weird, but better than our previous hack of creating a Base58-encoded\n supercookie.\n \"\"\"\n\n # All the non-cookie headers should be sent unharmed.\n \n # The main app can send 'set-cookie' headers in any casing\n # Related: https://github.com/Miserlou/Zappa/issues/990\n new_headers = [header for header in headers\n if ((type(header[0]) != str) or (header[0].lower() != 'set-cookie'))]\n cookie_headers = [header for header in headers \n if ((type(header[0]) == str) and (header[0].lower() == \"set-cookie\"))]\n for header, new_name in zip(cookie_headers,\n all_casings(\"Set-Cookie\")):\n new_headers.append((new_name, header[1]))\n return start_response(status, new_headers, exc_info)\n\n # Call the application with our modifier\n response = self.application(environ, encode_response)\n\n # Return the response as a WSGI-safe iterator\n return ClosingIterator(response)\n", "path": "zappa/middleware.py"}]} | 1,823 | 431 |
gh_patches_debug_32826 | rasdani/github-patches | git_diff | dotkom__onlineweb4-1498 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Don't send additional 500 email if no useful information in it.
After the implementation if #1485 we get an additional email for _all_ 500 errors, even if there is no supplied information. Let's not send an email if there's no useful information in it.
</issue>
<code>
[start of onlineweb4/views.py]
1 # -*- coding: utf-8 -*-
2 import logging
3 from smtplib import SMTPException
4
5 from django.conf import settings
6 from django.contrib import messages
7 from django.core.mail import send_mail
8 from django.shortcuts import redirect, render
9 from onlineweb4.forms import ErrorForm
10
11
12 def server_error(request):
13 log = logging.getLogger(__name__)
14
15 if request.method == 'POST':
16 form = ErrorForm(request.POST)
17 message = request.POST.get('reason', 'Ingen forklaring oppgitt.')
18 try:
19 log.error('%s triggered a 500 server error and provided the following description: %s' % (
20 request.user,
21 message
22 ))
23 send_mail('500error user-report', message,
24 settings.DEFAULT_FROM_EMAIL, [settings.EMAIL_DOTKOM])
25 log.debug('Finished sending error email to %s' % settings.EMAIL_DOTKOM)
26
27 messages.success(request, 'Feilmeldingen din ble sendt til %s' % settings.EMAIL_DOTKOM)
28
29 return redirect('home')
30 except SMTPException:
31 messages.error(request, 'Det oppstod en uventet feil under sending av feilmeldingen')
32 return redirect('home')
33
34 return render(request, '500.html', {'error_form': ErrorForm})
35
[end of onlineweb4/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/onlineweb4/views.py b/onlineweb4/views.py
--- a/onlineweb4/views.py
+++ b/onlineweb4/views.py
@@ -1,21 +1,22 @@
# -*- coding: utf-8 -*-
import logging
-from smtplib import SMTPException
from django.conf import settings
from django.contrib import messages
from django.core.mail import send_mail
from django.shortcuts import redirect, render
from onlineweb4.forms import ErrorForm
+from smtplib import SMTPException
def server_error(request):
log = logging.getLogger(__name__)
if request.method == 'POST':
- form = ErrorForm(request.POST)
- message = request.POST.get('reason', 'Ingen forklaring oppgitt.')
- try:
+ message = request.POST.get('reason')
+ if not message:
+ return redirect('home')
+ try:
log.error('%s triggered a 500 server error and provided the following description: %s' % (
request.user,
message
@@ -23,12 +24,9 @@
send_mail('500error user-report', message,
settings.DEFAULT_FROM_EMAIL, [settings.EMAIL_DOTKOM])
log.debug('Finished sending error email to %s' % settings.EMAIL_DOTKOM)
-
messages.success(request, 'Feilmeldingen din ble sendt til %s' % settings.EMAIL_DOTKOM)
-
return redirect('home')
except SMTPException:
messages.error(request, 'Det oppstod en uventet feil under sending av feilmeldingen')
return redirect('home')
-
return render(request, '500.html', {'error_form': ErrorForm})
| {"golden_diff": "diff --git a/onlineweb4/views.py b/onlineweb4/views.py\n--- a/onlineweb4/views.py\n+++ b/onlineweb4/views.py\n@@ -1,21 +1,22 @@\n # -*- coding: utf-8 -*-\n import logging\n-from smtplib import SMTPException\n \n from django.conf import settings\n from django.contrib import messages\n from django.core.mail import send_mail\n from django.shortcuts import redirect, render\n from onlineweb4.forms import ErrorForm\n+from smtplib import SMTPException\n \n \n def server_error(request):\n log = logging.getLogger(__name__)\n \n if request.method == 'POST':\n- form = ErrorForm(request.POST)\n- message = request.POST.get('reason', 'Ingen forklaring oppgitt.')\n- try: \n+ message = request.POST.get('reason')\n+ if not message:\n+ return redirect('home')\n+ try:\n log.error('%s triggered a 500 server error and provided the following description: %s' % (\n request.user,\n message\n@@ -23,12 +24,9 @@\n send_mail('500error user-report', message,\n settings.DEFAULT_FROM_EMAIL, [settings.EMAIL_DOTKOM])\n log.debug('Finished sending error email to %s' % settings.EMAIL_DOTKOM)\n-\n messages.success(request, 'Feilmeldingen din ble sendt til %s' % settings.EMAIL_DOTKOM)\n-\n return redirect('home')\n except SMTPException:\n messages.error(request, 'Det oppstod en uventet feil under sending av feilmeldingen')\n return redirect('home')\n-\n return render(request, '500.html', {'error_form': ErrorForm})\n", "issue": "Don't send additional 500 email if no useful information in it.\nAfter the implementation if #1485 we get an additional email for _all_ 500 errors, even if there is no supplied information. Let's not send an email if there's no useful information in it.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport logging\nfrom smtplib import SMTPException\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.core.mail import send_mail\nfrom django.shortcuts import redirect, render\nfrom onlineweb4.forms import ErrorForm\n\n\ndef server_error(request):\n log = logging.getLogger(__name__)\n\n if request.method == 'POST':\n form = ErrorForm(request.POST)\n message = request.POST.get('reason', 'Ingen forklaring oppgitt.')\n try: \n log.error('%s triggered a 500 server error and provided the following description: %s' % (\n request.user,\n message\n ))\n send_mail('500error user-report', message,\n settings.DEFAULT_FROM_EMAIL, [settings.EMAIL_DOTKOM])\n log.debug('Finished sending error email to %s' % settings.EMAIL_DOTKOM)\n\n messages.success(request, 'Feilmeldingen din ble sendt til %s' % settings.EMAIL_DOTKOM)\n\n return redirect('home')\n except SMTPException:\n messages.error(request, 'Det oppstod en uventet feil under sending av feilmeldingen')\n return redirect('home')\n\n return render(request, '500.html', {'error_form': ErrorForm})\n", "path": "onlineweb4/views.py"}]} | 942 | 378 |
gh_patches_debug_1048 | rasdani/github-patches | git_diff | mindee__doctr-243 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pb: unitest text_export_size not passing on tf 2.3.1
Unitest text_export_size not OK locally on tf 2.3.1 :
```
def test_export_sizes(test_convert_to_tflite, test_convert_to_fp16, test_quantize_model):
assert sys.getsizeof(test_convert_to_tflite) > sys.getsizeof(test_convert_to_fp16)
> assert sys.getsizeof(test_convert_to_fp16) > sys.getsizeof(test_quantize_model)
E AssertionError: assert 3041 > 3041
```
</issue>
<code>
[start of setup.py]
1 # Copyright (C) 2021, Mindee.
2
3 # This program is licensed under the Apache License version 2.
4 # See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.
5
6 """
7 Package installation setup
8 """
9
10 import os
11 from pathlib import Path
12 import subprocess
13
14 from setuptools import find_packages, setup
15
16
17 version = "0.1.2a0"
18 sha = 'Unknown'
19 package_name = 'doctr'
20
21 cwd = Path(__file__).parent.absolute()
22
23 if os.getenv('BUILD_VERSION'):
24 version = os.getenv('BUILD_VERSION')
25 elif sha != 'Unknown':
26 try:
27 sha = subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=cwd).decode('ascii').strip()
28 except Exception:
29 pass
30 version += '+' + sha[:7]
31 print(f"Building wheel {package_name}-{version}")
32
33 with open(cwd.joinpath(package_name, 'version.py'), 'w') as f:
34 f.write(f"__version__ = '{version}'\n")
35
36 with open('README.md', 'r') as f:
37 readme = f.read()
38
39 requirements = [
40 "numpy>=1.16.0",
41 "scipy>=1.4.0",
42 "opencv-python>=4.2",
43 "tensorflow>=2.3.0",
44 "PyMuPDF>=1.16.0,<1.18.11",
45 "pyclipper>=1.2.0",
46 "shapely>=1.6.0",
47 "matplotlib>=3.1.0",
48 "mplcursors>=0.3",
49 "rapidfuzz>=1.0.0",
50 "weasyprint>=52.2",
51 ]
52
53 setup(
54 # Metadata
55 name=os.getenv('PKG_INDEX') if os.getenv('PKG_INDEX') else package_name,
56 version=version,
57 author='François-Guillaume Fernandez, Charles Gaillard',
58 author_email='[email protected]',
59 description='Extract valuable text information from your documents',
60 long_description=readme,
61 long_description_content_type="text/markdown",
62 url='https://github.com/mindee/doctr',
63 download_url='https://github.com/mindee/doctr/tags',
64 license='Apache',
65 classifiers=[
66 'Development Status :: 3 - Alpha',
67 'Intended Audience :: Developers',
68 'Intended Audience :: Science/Research',
69 'License :: OSI Approved :: Apache Software License',
70 'Natural Language :: English',
71 'Operating System :: OS Independent',
72 'Programming Language :: Python :: 3',
73 'Programming Language :: Python :: 3.6',
74 'Programming Language :: Python :: 3.7',
75 'Topic :: Scientific/Engineering',
76 'Topic :: Scientific/Engineering :: Artificial Intelligence',
77 'Topic :: Software Development',
78 'Topic :: Software Development :: Libraries',
79 'Topic :: Software Development :: Libraries :: Python Modules',
80 ],
81 keywords=['ocr', 'deep learning', 'tensorflow', 'text detection', 'text recognition'],
82
83 # Package info
84 packages=find_packages(exclude=('test',)),
85 zip_safe=True,
86 python_requires='>=3.6.0',
87 include_package_data=True,
88 install_requires=requirements,
89 package_data={'': ['LICENSE']}
90 )
91
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -40,7 +40,7 @@
"numpy>=1.16.0",
"scipy>=1.4.0",
"opencv-python>=4.2",
- "tensorflow>=2.3.0",
+ "tensorflow>=2.4.0",
"PyMuPDF>=1.16.0,<1.18.11",
"pyclipper>=1.2.0",
"shapely>=1.6.0",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -40,7 +40,7 @@\n \"numpy>=1.16.0\",\n \"scipy>=1.4.0\",\n \"opencv-python>=4.2\",\n- \"tensorflow>=2.3.0\",\n+ \"tensorflow>=2.4.0\",\n \"PyMuPDF>=1.16.0,<1.18.11\",\n \"pyclipper>=1.2.0\",\n \"shapely>=1.6.0\",\n", "issue": "Pb: unitest text_export_size not passing on tf 2.3.1\nUnitest text_export_size not OK locally on tf 2.3.1 : \r\n\r\n```\r\ndef test_export_sizes(test_convert_to_tflite, test_convert_to_fp16, test_quantize_model):\r\n assert sys.getsizeof(test_convert_to_tflite) > sys.getsizeof(test_convert_to_fp16)\r\n> assert sys.getsizeof(test_convert_to_fp16) > sys.getsizeof(test_quantize_model)\r\nE AssertionError: assert 3041 > 3041\r\n\r\n```\n", "before_files": [{"content": "# Copyright (C) 2021, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\n\"\"\"\nPackage installation setup\n\"\"\"\n\nimport os\nfrom pathlib import Path\nimport subprocess\n\nfrom setuptools import find_packages, setup\n\n\nversion = \"0.1.2a0\"\nsha = 'Unknown'\npackage_name = 'doctr'\n\ncwd = Path(__file__).parent.absolute()\n\nif os.getenv('BUILD_VERSION'):\n version = os.getenv('BUILD_VERSION')\nelif sha != 'Unknown':\n try:\n sha = subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=cwd).decode('ascii').strip()\n except Exception:\n pass\n version += '+' + sha[:7]\nprint(f\"Building wheel {package_name}-{version}\")\n\nwith open(cwd.joinpath(package_name, 'version.py'), 'w') as f:\n f.write(f\"__version__ = '{version}'\\n\")\n\nwith open('README.md', 'r') as f:\n readme = f.read()\n\nrequirements = [\n \"numpy>=1.16.0\",\n \"scipy>=1.4.0\",\n \"opencv-python>=4.2\",\n \"tensorflow>=2.3.0\",\n \"PyMuPDF>=1.16.0,<1.18.11\",\n \"pyclipper>=1.2.0\",\n \"shapely>=1.6.0\",\n \"matplotlib>=3.1.0\",\n \"mplcursors>=0.3\",\n \"rapidfuzz>=1.0.0\",\n \"weasyprint>=52.2\",\n]\n\nsetup(\n # Metadata\n name=os.getenv('PKG_INDEX') if os.getenv('PKG_INDEX') else package_name,\n version=version,\n author='Fran\u00e7ois-Guillaume Fernandez, Charles Gaillard',\n author_email='[email protected]',\n description='Extract valuable text information from your documents',\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n url='https://github.com/mindee/doctr',\n download_url='https://github.com/mindee/doctr/tags',\n license='Apache',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Natural Language :: English',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n keywords=['ocr', 'deep learning', 'tensorflow', 'text detection', 'text recognition'],\n\n # Package info\n packages=find_packages(exclude=('test',)),\n zip_safe=True,\n python_requires='>=3.6.0',\n include_package_data=True,\n install_requires=requirements,\n package_data={'': ['LICENSE']}\n)\n", "path": "setup.py"}]} | 1,537 | 131 |
gh_patches_debug_16475 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-8544 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Could not import `pywintypes` or `win32api` from `win32ctypes.pywin32`
## Description of the issue
Error when running the executable.
Issue is present in 6.6.0 and "latest development version". Issue is not present in 6.5.0.
Output differs between versions at this point:
6.6.0:
```
import 'win32ctypes.core' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001C6D9E41BB0>
# win32ctypes.core._common not found in PYZ
# win32ctypes.core.ctypes not found in PYZ
# destroy win32ctypes.pywin32.win32api
# destroy win32ctypes.pywin32
# destroy PyInstaller
Could not import `pywintypes` or `win32api` from `win32ctypes.pywin32`.
Please make sure that `pywin32-ctypes` is installed and importable, for example:
pip install pywin32-ctypes
```
6.5.0:
```
# cffi not found in PYZ
# code object from '[...]\\cffi\\__init__.pyc'
# cffi.api not found in PYZ
# code object from '[...]\\cffi\\api.pyc'
# cffi.lock not found in PYZ
# code object from '[...]\\cffi\\lock.pyc'
import 'cffi.lock' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEB845C0>
# cffi.error not found in PYZ
# code object from '[...]\\cffi\\error.pyc'
import 'cffi.error' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEB846B0>
# cffi.model not found in PYZ
# code object from '[...]\\cffi\\model.pyc'
import 'cffi.model' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEB848F0>
import 'cffi.api' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEB52330>
import 'cffi' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEB51EB0>
import 'win32ctypes.core' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEB51B80>
# win32ctypes.core._common not found in PYZ
# win32ctypes.core.cffi not found in PYZ
# code object from '[...]\\win32ctypes\\core\\cffi\\__init__.pyc'
import 'win32ctypes.core.cffi' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEB86BA0>
# win32ctypes.core.cffi._common not found in PYZ
# code object from '[...]\\win32ctypes\\core\\cffi\\_common.pyc'
# win32ctypes.core.cffi._util not found in PYZ
# code object from '[...]\\win32ctypes\\core\\cffi\\_util.pyc'
# win32ctypes.core.compat not found in PYZ
# code object from '[...]\\win32ctypes\\core\\compat.pyc'
import 'win32ctypes.core.compat' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEB87440>
# _cffi_backend not found in PYZ
# extension module '_cffi_backend' loaded from '[...]\\_cffi_backend.cp312-win_amd64.pyd'
# extension module '_cffi_backend' executed from '[...]\\_cffi_backend.cp312-win_amd64.pyd'
import '_cffi_backend' # <_frozen_importlib_external.ExtensionFileLoader object at 0x000001F4AEB876B0>
# cffi.cparser not found in PYZ
# code object from '[...]\\cffi\\cparser.pyc'
# cffi.commontypes not found in PYZ
# code object from '[...]\\cffi\\commontypes.pyc'
import 'cffi.commontypes' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEBA91F0>
```
### Context information (for bug reports)
* 502 INFO: PyInstaller: 6.6.0, contrib hooks: 2024.6
* 503 INFO: Python: 3.12.0
* 541 INFO: Platform: Windows-10-10.0.19045-SP0
</issue>
<code>
[start of PyInstaller/hooks/hook-win32ctypes.core.py]
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2020-2023, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License (version 2
5 # or later) with exception for distributing the bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
10 #-----------------------------------------------------------------------------
11
12 # TODO: remove this hook during PyInstaller 4.5 release cycle!
13
14 from PyInstaller.utils.hooks import can_import_module, collect_submodules
15
16 # We need to collect submodules from win32ctypes.core.cffi or win32ctypes.core.ctypes for win32ctypes.core to work. The
17 # use of the backend is determined by availability of cffi.
18 if can_import_module('cffi'):
19 hiddenimports = collect_submodules('win32ctypes.core.cffi')
20 else:
21 hiddenimports = collect_submodules('win32ctypes.core.ctypes')
22
[end of PyInstaller/hooks/hook-win32ctypes.core.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/PyInstaller/hooks/hook-win32ctypes.core.py b/PyInstaller/hooks/hook-win32ctypes.core.py
--- a/PyInstaller/hooks/hook-win32ctypes.core.py
+++ b/PyInstaller/hooks/hook-win32ctypes.core.py
@@ -13,9 +13,10 @@
from PyInstaller.utils.hooks import can_import_module, collect_submodules
-# We need to collect submodules from win32ctypes.core.cffi or win32ctypes.core.ctypes for win32ctypes.core to work. The
-# use of the backend is determined by availability of cffi.
+# We need to collect submodules from win32ctypes.core.cffi or win32ctypes.core.ctypes for win32ctypes.core to work.
+# Always collect the `ctypes` backend, and add the `cffi` one if `cffi` is available. Having the `ctypes` backend always
+# available helps in situations when `cffi` is available in the build environment, but is disabled at run-time or not
+# collected (e.g., due to `--exclude cffi`).
+hiddenimports = collect_submodules('win32ctypes.core.ctypes')
if can_import_module('cffi'):
- hiddenimports = collect_submodules('win32ctypes.core.cffi')
-else:
- hiddenimports = collect_submodules('win32ctypes.core.ctypes')
+ hiddenimports += collect_submodules('win32ctypes.core.cffi')
| {"golden_diff": "diff --git a/PyInstaller/hooks/hook-win32ctypes.core.py b/PyInstaller/hooks/hook-win32ctypes.core.py\n--- a/PyInstaller/hooks/hook-win32ctypes.core.py\n+++ b/PyInstaller/hooks/hook-win32ctypes.core.py\n@@ -13,9 +13,10 @@\n \n from PyInstaller.utils.hooks import can_import_module, collect_submodules\n \n-# We need to collect submodules from win32ctypes.core.cffi or win32ctypes.core.ctypes for win32ctypes.core to work. The\n-# use of the backend is determined by availability of cffi.\n+# We need to collect submodules from win32ctypes.core.cffi or win32ctypes.core.ctypes for win32ctypes.core to work.\n+# Always collect the `ctypes` backend, and add the `cffi` one if `cffi` is available. Having the `ctypes` backend always\n+# available helps in situations when `cffi` is available in the build environment, but is disabled at run-time or not\n+# collected (e.g., due to `--exclude cffi`).\n+hiddenimports = collect_submodules('win32ctypes.core.ctypes')\n if can_import_module('cffi'):\n- hiddenimports = collect_submodules('win32ctypes.core.cffi')\n-else:\n- hiddenimports = collect_submodules('win32ctypes.core.ctypes')\n+ hiddenimports += collect_submodules('win32ctypes.core.cffi')\n", "issue": "Could not import `pywintypes` or `win32api` from `win32ctypes.pywin32`\n## Description of the issue\r\n\r\nError when running the executable.\r\n\r\nIssue is present in 6.6.0 and \"latest development version\". Issue is not present in 6.5.0.\r\n\r\nOutput differs between versions at this point:\r\n\r\n6.6.0:\r\n```\r\nimport 'win32ctypes.core' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001C6D9E41BB0>\r\n# win32ctypes.core._common not found in PYZ\r\n# win32ctypes.core.ctypes not found in PYZ\r\n# destroy win32ctypes.pywin32.win32api\r\n# destroy win32ctypes.pywin32\r\n# destroy PyInstaller\r\nCould not import `pywintypes` or `win32api` from `win32ctypes.pywin32`.\r\nPlease make sure that `pywin32-ctypes` is installed and importable, for example:\r\n\r\npip install pywin32-ctypes\r\n\r\n```\r\n\r\n6.5.0:\r\n```\r\n# cffi not found in PYZ\r\n# code object from '[...]\\\\cffi\\\\__init__.pyc'\r\n# cffi.api not found in PYZ\r\n# code object from '[...]\\\\cffi\\\\api.pyc'\r\n# cffi.lock not found in PYZ\r\n# code object from '[...]\\\\cffi\\\\lock.pyc'\r\nimport 'cffi.lock' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEB845C0>\r\n# cffi.error not found in PYZ\r\n# code object from '[...]\\\\cffi\\\\error.pyc'\r\nimport 'cffi.error' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEB846B0>\r\n# cffi.model not found in PYZ\r\n# code object from '[...]\\\\cffi\\\\model.pyc'\r\nimport 'cffi.model' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEB848F0>\r\nimport 'cffi.api' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEB52330>\r\nimport 'cffi' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEB51EB0>\r\nimport 'win32ctypes.core' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEB51B80>\r\n# win32ctypes.core._common not found in PYZ\r\n# win32ctypes.core.cffi not found in PYZ\r\n# code object from '[...]\\\\win32ctypes\\\\core\\\\cffi\\\\__init__.pyc'\r\nimport 'win32ctypes.core.cffi' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEB86BA0>\r\n# win32ctypes.core.cffi._common not found in PYZ\r\n# code object from '[...]\\\\win32ctypes\\\\core\\\\cffi\\\\_common.pyc'\r\n# win32ctypes.core.cffi._util not found in PYZ\r\n# code object from '[...]\\\\win32ctypes\\\\core\\\\cffi\\\\_util.pyc'\r\n# win32ctypes.core.compat not found in PYZ\r\n# code object from '[...]\\\\win32ctypes\\\\core\\\\compat.pyc'\r\nimport 'win32ctypes.core.compat' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEB87440>\r\n# _cffi_backend not found in PYZ\r\n# extension module '_cffi_backend' loaded from '[...]\\\\_cffi_backend.cp312-win_amd64.pyd'\r\n# extension module '_cffi_backend' executed from '[...]\\\\_cffi_backend.cp312-win_amd64.pyd'\r\nimport '_cffi_backend' # <_frozen_importlib_external.ExtensionFileLoader object at 0x000001F4AEB876B0>\r\n# cffi.cparser not found in PYZ\r\n# code object from '[...]\\\\cffi\\\\cparser.pyc'\r\n# cffi.commontypes not found in PYZ\r\n# code object from '[...]\\\\cffi\\\\commontypes.pyc'\r\nimport 'cffi.commontypes' # <_frozen_importlib_external.SourcelessFileLoader object at 0x000001F4AEBA91F0>\r\n```\r\n\r\n\r\n### Context information (for bug reports)\r\n\r\n* 502 INFO: PyInstaller: 6.6.0, contrib hooks: 2024.6\r\n* 503 INFO: Python: 3.12.0\r\n* 541 INFO: Platform: Windows-10-10.0.19045-SP0\r\n\r\n\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2020-2023, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\n# TODO: remove this hook during PyInstaller 4.5 release cycle!\n\nfrom PyInstaller.utils.hooks import can_import_module, collect_submodules\n\n# We need to collect submodules from win32ctypes.core.cffi or win32ctypes.core.ctypes for win32ctypes.core to work. The\n# use of the backend is determined by availability of cffi.\nif can_import_module('cffi'):\n hiddenimports = collect_submodules('win32ctypes.core.cffi')\nelse:\n hiddenimports = collect_submodules('win32ctypes.core.ctypes')\n", "path": "PyInstaller/hooks/hook-win32ctypes.core.py"}]} | 1,961 | 339 |
gh_patches_debug_16407 | rasdani/github-patches | git_diff | buildbot__buildbot-5729 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
change_hook/poller not working for ReconfigurablePollingChangeSource
In [poller.py](https://github.com/buildbot/buildbot/blob/a0e1d8840e8856ead136a1ad6e2021931355af15/master/buildbot/www/hooks/poller.py#L40), the change sources are filtered like this:
```python
for source in change_svc:
if not isinstance(source, PollingChangeSource):
continue
```
This means that any pollers derived from the super-class `ReconfigurablePollingChangeSource` will not be found. Since [new code is supposed to use `ReconfigurablePollingChangeSource`](https://docs.buildbot.net/current/developer/cls-changesources.html?highlight=reconfigurablepollingchangesource#pollingchangesource), the code should probably read:
```python
for source in change_svc:
if not isinstance(source, ReconfigurablePollingChangeSource):
continue
```
</issue>
<code>
[start of master/buildbot/www/hooks/poller.py]
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16 # This change hook allows GitHub or a hand crafted curl invocation to "knock on
17 # the door" and trigger a change source to poll.
18
19
20 from buildbot.changes.base import PollingChangeSource
21 from buildbot.util import bytes2unicode
22 from buildbot.util import unicode2bytes
23 from buildbot.www.hooks.base import BaseHookHandler
24
25
26 class PollingHandler(BaseHookHandler):
27
28 def getChanges(self, req):
29 change_svc = req.site.master.change_svc
30 poll_all = b"poller" not in req.args
31
32 allow_all = True
33 allowed = []
34 if isinstance(self.options, dict) and b"allowed" in self.options:
35 allow_all = False
36 allowed = self.options[b"allowed"]
37
38 pollers = []
39
40 for source in change_svc:
41 if not isinstance(source, PollingChangeSource):
42 continue
43 if not hasattr(source, "name"):
44 continue
45 if (not poll_all and
46 unicode2bytes(source.name) not in req.args[b'poller']):
47 continue
48 if not allow_all and unicode2bytes(source.name) not in allowed:
49 continue
50 pollers.append(source)
51
52 if not poll_all:
53 missing = (set(req.args[b'poller']) -
54 set(unicode2bytes(s.name) for s in pollers))
55 if missing:
56 raise ValueError("Could not find pollers: {}".format(
57 bytes2unicode(b",".join(missing))))
58
59 for p in pollers:
60 p.force()
61
62 return [], None
63
64
65 poller = PollingHandler
66
[end of master/buildbot/www/hooks/poller.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/master/buildbot/www/hooks/poller.py b/master/buildbot/www/hooks/poller.py
--- a/master/buildbot/www/hooks/poller.py
+++ b/master/buildbot/www/hooks/poller.py
@@ -17,7 +17,7 @@
# the door" and trigger a change source to poll.
-from buildbot.changes.base import PollingChangeSource
+from buildbot.changes.base import ReconfigurablePollingChangeSource
from buildbot.util import bytes2unicode
from buildbot.util import unicode2bytes
from buildbot.www.hooks.base import BaseHookHandler
@@ -38,7 +38,7 @@
pollers = []
for source in change_svc:
- if not isinstance(source, PollingChangeSource):
+ if not isinstance(source, ReconfigurablePollingChangeSource):
continue
if not hasattr(source, "name"):
continue
| {"golden_diff": "diff --git a/master/buildbot/www/hooks/poller.py b/master/buildbot/www/hooks/poller.py\n--- a/master/buildbot/www/hooks/poller.py\n+++ b/master/buildbot/www/hooks/poller.py\n@@ -17,7 +17,7 @@\n # the door\" and trigger a change source to poll.\n \n \n-from buildbot.changes.base import PollingChangeSource\n+from buildbot.changes.base import ReconfigurablePollingChangeSource\n from buildbot.util import bytes2unicode\n from buildbot.util import unicode2bytes\n from buildbot.www.hooks.base import BaseHookHandler\n@@ -38,7 +38,7 @@\n pollers = []\n \n for source in change_svc:\n- if not isinstance(source, PollingChangeSource):\n+ if not isinstance(source, ReconfigurablePollingChangeSource):\n continue\n if not hasattr(source, \"name\"):\n continue\n", "issue": "change_hook/poller not working for ReconfigurablePollingChangeSource\nIn [poller.py](https://github.com/buildbot/buildbot/blob/a0e1d8840e8856ead136a1ad6e2021931355af15/master/buildbot/www/hooks/poller.py#L40), the change sources are filtered like this:\r\n\r\n```python\r\n for source in change_svc:\r\n if not isinstance(source, PollingChangeSource):\r\n continue\r\n```\r\n\r\nThis means that any pollers derived from the super-class `ReconfigurablePollingChangeSource` will not be found. Since [new code is supposed to use `ReconfigurablePollingChangeSource`](https://docs.buildbot.net/current/developer/cls-changesources.html?highlight=reconfigurablepollingchangesource#pollingchangesource), the code should probably read:\r\n\r\n```python\r\n for source in change_svc:\r\n if not isinstance(source, ReconfigurablePollingChangeSource):\r\n continue\r\n```\r\n\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\n# This change hook allows GitHub or a hand crafted curl invocation to \"knock on\n# the door\" and trigger a change source to poll.\n\n\nfrom buildbot.changes.base import PollingChangeSource\nfrom buildbot.util import bytes2unicode\nfrom buildbot.util import unicode2bytes\nfrom buildbot.www.hooks.base import BaseHookHandler\n\n\nclass PollingHandler(BaseHookHandler):\n\n def getChanges(self, req):\n change_svc = req.site.master.change_svc\n poll_all = b\"poller\" not in req.args\n\n allow_all = True\n allowed = []\n if isinstance(self.options, dict) and b\"allowed\" in self.options:\n allow_all = False\n allowed = self.options[b\"allowed\"]\n\n pollers = []\n\n for source in change_svc:\n if not isinstance(source, PollingChangeSource):\n continue\n if not hasattr(source, \"name\"):\n continue\n if (not poll_all and\n unicode2bytes(source.name) not in req.args[b'poller']):\n continue\n if not allow_all and unicode2bytes(source.name) not in allowed:\n continue\n pollers.append(source)\n\n if not poll_all:\n missing = (set(req.args[b'poller']) -\n set(unicode2bytes(s.name) for s in pollers))\n if missing:\n raise ValueError(\"Could not find pollers: {}\".format(\n bytes2unicode(b\",\".join(missing))))\n\n for p in pollers:\n p.force()\n\n return [], None\n\n\npoller = PollingHandler\n", "path": "master/buildbot/www/hooks/poller.py"}]} | 1,397 | 195 |
gh_patches_debug_12830 | rasdani/github-patches | git_diff | mars-project__mars-82 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
By default use core number as n_parallel for threaded scheduling
Use core number as `n_parallel` for threaded scheduling, currently 1 thread by default.
</issue>
<code>
[start of mars/session.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 # Copyright 1999-2018 Alibaba Group Holding Ltd.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 import numpy as np
18
19
20 class LocalSession(object):
21 def __init__(self):
22 from .tensor.execution.core import Executor
23
24 self._executor = Executor()
25 self._endpoint = None
26
27 @property
28 def endpoint(self):
29 return self._endpoint
30
31 @endpoint.setter
32 def endpoint(self, endpoint):
33 if endpoint is not None:
34 raise ValueError('Local session cannot set endpoint')
35 self._endpoint = endpoint
36
37 def run(self, *tensors, **kw):
38 if self._executor is None:
39 raise RuntimeError('Session has closed')
40 return self._executor.execute_tensors(tensors, **kw)
41
42 def decref(self, *keys):
43 self._executor.decref(*keys)
44
45 def __enter__(self):
46 return self
47
48 def __exit__(self, *_):
49 self._executor = None
50
51
52 class Session(object):
53 _default_session = None
54
55 def __init__(self, endpoint=None):
56 if endpoint is not None:
57 if 'http' in endpoint:
58 # connect to web
59 from .web.session import Session as WebSession
60
61 self._sess = WebSession(endpoint)
62 else:
63 # connect to local cluster
64 from .deploy.local.session import LocalClusterSession
65
66 self._sess = LocalClusterSession(endpoint)
67 else:
68 self._sess = LocalSession()
69
70 self._executed_keys = set()
71
72 def run(self, *tensors, **kw):
73 from . import tensor as mt
74
75 ret_list = False
76 if len(tensors) == 1 and isinstance(tensors[0], (tuple, list)):
77 ret_list = True
78 tensors = tensors[0]
79 elif len(tensors) > 1:
80 ret_list = True
81
82 tensors = tuple(mt.tensor(t) for t in tensors)
83 result = self._sess.run(*tensors, **kw)
84 self._executed_keys.update(t.key for t in tensors)
85 for t in tensors:
86 t._execute_session = self
87
88 ret = []
89 for r, t in zip(result, tensors):
90 if r is None:
91 ret.append(r)
92 continue
93 if t.isscalar() and hasattr(r, 'item'):
94 ret.append(np.asscalar(r))
95 else:
96 ret.append(r)
97 if ret_list:
98 return ret
99 return ret[0]
100
101 @property
102 def endpoint(self):
103 return self._sess.endpoint
104
105 @endpoint.setter
106 def endpoint(self, endpoint):
107 self._sess.endpoint = endpoint
108
109 def decref(self, *keys):
110 if hasattr(self._sess, 'decref'):
111 self._sess.decref(*keys)
112
113 def __getattr__(self, attr):
114 try:
115 obj = self._sess.__getattribute__(attr)
116 return obj
117 except AttributeError:
118 raise
119
120 def __enter__(self):
121 self._sess.__enter__()
122 return self
123
124 def __exit__(self, exc_type, exc_val, exc_tb):
125 self._sess.__exit__(exc_type, exc_val, exc_tb)
126
127 close = __exit__
128
129 def as_default(self):
130 Session._default_session = self
131 return self
132
133 @classmethod
134 def default_or_local(cls):
135 if cls._default_session is not None:
136 return cls._default_session
137
138 cls._default_session = Session()
139 return cls._default_session
140
141
142 def new_session(scheduler=None):
143 return Session(scheduler)
144
[end of mars/session.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mars/session.py b/mars/session.py
--- a/mars/session.py
+++ b/mars/session.py
@@ -16,6 +16,11 @@
import numpy as np
+try:
+ from .resource import cpu_count
+except ImportError:
+ from multiprocessing import cpu_count
+
class LocalSession(object):
def __init__(self):
@@ -37,6 +42,8 @@
def run(self, *tensors, **kw):
if self._executor is None:
raise RuntimeError('Session has closed')
+ if 'n_parallel' not in kw:
+ kw['n_parallel'] = cpu_count()
return self._executor.execute_tensors(tensors, **kw)
def decref(self, *keys):
| {"golden_diff": "diff --git a/mars/session.py b/mars/session.py\n--- a/mars/session.py\n+++ b/mars/session.py\n@@ -16,6 +16,11 @@\n \n import numpy as np\n \n+try:\n+ from .resource import cpu_count\n+except ImportError:\n+ from multiprocessing import cpu_count\n+\n \n class LocalSession(object):\n def __init__(self):\n@@ -37,6 +42,8 @@\n def run(self, *tensors, **kw):\n if self._executor is None:\n raise RuntimeError('Session has closed')\n+ if 'n_parallel' not in kw:\n+ kw['n_parallel'] = cpu_count()\n return self._executor.execute_tensors(tensors, **kw)\n \n def decref(self, *keys):\n", "issue": "By default use core number as n_parallel for threaded scheduling\nUse core number as `n_parallel` for threaded scheduling, currently 1 thread by default.\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\n\n\nclass LocalSession(object):\n def __init__(self):\n from .tensor.execution.core import Executor\n\n self._executor = Executor()\n self._endpoint = None\n\n @property\n def endpoint(self):\n return self._endpoint\n\n @endpoint.setter\n def endpoint(self, endpoint):\n if endpoint is not None:\n raise ValueError('Local session cannot set endpoint')\n self._endpoint = endpoint\n\n def run(self, *tensors, **kw):\n if self._executor is None:\n raise RuntimeError('Session has closed')\n return self._executor.execute_tensors(tensors, **kw)\n\n def decref(self, *keys):\n self._executor.decref(*keys)\n\n def __enter__(self):\n return self\n\n def __exit__(self, *_):\n self._executor = None\n\n\nclass Session(object):\n _default_session = None\n\n def __init__(self, endpoint=None):\n if endpoint is not None:\n if 'http' in endpoint:\n # connect to web\n from .web.session import Session as WebSession\n\n self._sess = WebSession(endpoint)\n else:\n # connect to local cluster\n from .deploy.local.session import LocalClusterSession\n\n self._sess = LocalClusterSession(endpoint)\n else:\n self._sess = LocalSession()\n\n self._executed_keys = set()\n\n def run(self, *tensors, **kw):\n from . import tensor as mt\n\n ret_list = False\n if len(tensors) == 1 and isinstance(tensors[0], (tuple, list)):\n ret_list = True\n tensors = tensors[0]\n elif len(tensors) > 1:\n ret_list = True\n\n tensors = tuple(mt.tensor(t) for t in tensors)\n result = self._sess.run(*tensors, **kw)\n self._executed_keys.update(t.key for t in tensors)\n for t in tensors:\n t._execute_session = self\n\n ret = []\n for r, t in zip(result, tensors):\n if r is None:\n ret.append(r)\n continue\n if t.isscalar() and hasattr(r, 'item'):\n ret.append(np.asscalar(r))\n else:\n ret.append(r)\n if ret_list:\n return ret\n return ret[0]\n\n @property\n def endpoint(self):\n return self._sess.endpoint\n\n @endpoint.setter\n def endpoint(self, endpoint):\n self._sess.endpoint = endpoint\n\n def decref(self, *keys):\n if hasattr(self._sess, 'decref'):\n self._sess.decref(*keys)\n\n def __getattr__(self, attr):\n try:\n obj = self._sess.__getattribute__(attr)\n return obj\n except AttributeError:\n raise\n\n def __enter__(self):\n self._sess.__enter__()\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n self._sess.__exit__(exc_type, exc_val, exc_tb)\n\n close = __exit__\n\n def as_default(self):\n Session._default_session = self\n return self\n\n @classmethod\n def default_or_local(cls):\n if cls._default_session is not None:\n return cls._default_session\n\n cls._default_session = Session()\n return cls._default_session\n\n\ndef new_session(scheduler=None):\n return Session(scheduler)\n", "path": "mars/session.py"}]} | 1,800 | 171 |
gh_patches_debug_15984 | rasdani/github-patches | git_diff | OpenMined__PySyft-5397 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Adding loguru compatiblity with pytest caplog
## Description
`caplog` fixture in pytest captures the logging output for testing if appropriate warnings have been raised.
By default pytest uses the standard `logging` module, but since we are using `loguru` appropriate patching needs to be added.
## Additional Context
https://loguru.readthedocs.io/en/stable/resources/migration.html#making-things-work-with-pytest-and-caplog
</issue>
<code>
[start of src/syft/logger.py]
1 # stdlib
2 import os
3 from typing import Any
4 from typing import Callable
5 from typing import NoReturn
6 from typing import TextIO
7 from typing import Union
8
9 # third party
10 from loguru import logger
11
12 LOG_FORMAT = "[{time}][{level}][{module}]][{process.id}] {message}"
13
14 logger.remove()
15 DEFAULT_SINK = "syft_{time}.log"
16
17
18 def remove() -> None:
19 logger.remove()
20
21
22 def add(
23 sink: Union[None, str, os.PathLike, TextIO] = None,
24 level: str = "ERROR",
25 ) -> None:
26 sink = DEFAULT_SINK if sink is None else sink
27 try:
28 logger.add(
29 sink=sink,
30 format=LOG_FORMAT,
31 enqueue=True,
32 colorize=False,
33 diagnose=True,
34 backtrace=True,
35 rotation="10 MB",
36 retention="1 day",
37 level=level,
38 )
39 except BaseException:
40 logger.add(
41 sink=sink,
42 format=LOG_FORMAT,
43 enqueue=True,
44 colorize=False,
45 diagnose=True,
46 backtrace=True,
47 level=level,
48 )
49
50
51 def traceback_and_raise(e: Any, verbose: bool = False) -> NoReturn:
52 try:
53 if verbose:
54 logger.opt(lazy=True).exception(e)
55 else:
56 logger.opt(lazy=True).critical(e)
57 except BaseException as ex:
58 logger.debug("failed to print exception", ex)
59 if not issubclass(type(e), Exception):
60 e = Exception(e)
61 raise e
62
63
64 def create_log_and_print_function(level: str) -> Callable:
65 def log_and_print(*args: Any, **kwargs: Any) -> None:
66 try:
67 method = getattr(logger.opt(lazy=True), level, None)
68 if "print" in kwargs and kwargs["print"] is True:
69 del kwargs["print"]
70 print(*args, **kwargs)
71 if "end" in kwargs:
72 # clean up extra end for printing
73 del kwargs["end"]
74
75 if method is not None:
76 method(*args, **kwargs)
77 else:
78 raise Exception(f"no method {level} on logger")
79 except BaseException as e:
80 msg = f"failed to log exception. {e}"
81 try:
82 logger.debug(msg)
83 except Exception as e:
84 print(f"{msg}. {e}")
85
86 return log_and_print
87
88
89 def traceback(*args: Any, **kwargs: Any) -> None:
90 return create_log_and_print_function(level="exception")(*args, **kwargs)
91
92
93 def critical(*args: Any, **kwargs: Any) -> None:
94 return create_log_and_print_function(level="critical")(*args, **kwargs)
95
96
97 def error(*args: Any, **kwargs: Any) -> None:
98 return create_log_and_print_function(level="error")(*args, **kwargs)
99
100
101 def warning(*args: Any, **kwargs: Any) -> None:
102 return create_log_and_print_function(level="warning")(*args, **kwargs)
103
104
105 def info(*args: Any, **kwargs: Any) -> None:
106 return create_log_and_print_function(level="info")(*args, **kwargs)
107
108
109 def debug(*args: Any, **kwargs: Any) -> None:
110 return create_log_and_print_function(level="debug")(*args, **kwargs)
111
112
113 def trace(*args: Any, **kwargs: Any) -> None:
114 return create_log_and_print_function(level="trace")(*args, **kwargs)
115
[end of src/syft/logger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/syft/logger.py b/src/syft/logger.py
--- a/src/syft/logger.py
+++ b/src/syft/logger.py
@@ -1,4 +1,5 @@
# stdlib
+import logging
import os
from typing import Any
from typing import Callable
@@ -20,7 +21,7 @@
def add(
- sink: Union[None, str, os.PathLike, TextIO] = None,
+ sink: Union[None, str, os.PathLike, TextIO, logging.Handler] = None,
level: str = "ERROR",
) -> None:
sink = DEFAULT_SINK if sink is None else sink
@@ -40,7 +41,6 @@
logger.add(
sink=sink,
format=LOG_FORMAT,
- enqueue=True,
colorize=False,
diagnose=True,
backtrace=True,
| {"golden_diff": "diff --git a/src/syft/logger.py b/src/syft/logger.py\n--- a/src/syft/logger.py\n+++ b/src/syft/logger.py\n@@ -1,4 +1,5 @@\n # stdlib\n+import logging\n import os\n from typing import Any\n from typing import Callable\n@@ -20,7 +21,7 @@\n \n \n def add(\n- sink: Union[None, str, os.PathLike, TextIO] = None,\n+ sink: Union[None, str, os.PathLike, TextIO, logging.Handler] = None,\n level: str = \"ERROR\",\n ) -> None:\n sink = DEFAULT_SINK if sink is None else sink\n@@ -40,7 +41,6 @@\n logger.add(\n sink=sink,\n format=LOG_FORMAT,\n- enqueue=True,\n colorize=False,\n diagnose=True,\n backtrace=True,\n", "issue": "Adding loguru compatiblity with pytest caplog\n## Description\r\n`caplog` fixture in pytest captures the logging output for testing if appropriate warnings have been raised.\r\n\r\nBy default pytest uses the standard `logging` module, but since we are using `loguru` appropriate patching needs to be added.\r\n\r\n## Additional Context\r\nhttps://loguru.readthedocs.io/en/stable/resources/migration.html#making-things-work-with-pytest-and-caplog\r\n\n", "before_files": [{"content": "# stdlib\nimport os\nfrom typing import Any\nfrom typing import Callable\nfrom typing import NoReturn\nfrom typing import TextIO\nfrom typing import Union\n\n# third party\nfrom loguru import logger\n\nLOG_FORMAT = \"[{time}][{level}][{module}]][{process.id}] {message}\"\n\nlogger.remove()\nDEFAULT_SINK = \"syft_{time}.log\"\n\n\ndef remove() -> None:\n logger.remove()\n\n\ndef add(\n sink: Union[None, str, os.PathLike, TextIO] = None,\n level: str = \"ERROR\",\n) -> None:\n sink = DEFAULT_SINK if sink is None else sink\n try:\n logger.add(\n sink=sink,\n format=LOG_FORMAT,\n enqueue=True,\n colorize=False,\n diagnose=True,\n backtrace=True,\n rotation=\"10 MB\",\n retention=\"1 day\",\n level=level,\n )\n except BaseException:\n logger.add(\n sink=sink,\n format=LOG_FORMAT,\n enqueue=True,\n colorize=False,\n diagnose=True,\n backtrace=True,\n level=level,\n )\n\n\ndef traceback_and_raise(e: Any, verbose: bool = False) -> NoReturn:\n try:\n if verbose:\n logger.opt(lazy=True).exception(e)\n else:\n logger.opt(lazy=True).critical(e)\n except BaseException as ex:\n logger.debug(\"failed to print exception\", ex)\n if not issubclass(type(e), Exception):\n e = Exception(e)\n raise e\n\n\ndef create_log_and_print_function(level: str) -> Callable:\n def log_and_print(*args: Any, **kwargs: Any) -> None:\n try:\n method = getattr(logger.opt(lazy=True), level, None)\n if \"print\" in kwargs and kwargs[\"print\"] is True:\n del kwargs[\"print\"]\n print(*args, **kwargs)\n if \"end\" in kwargs:\n # clean up extra end for printing\n del kwargs[\"end\"]\n\n if method is not None:\n method(*args, **kwargs)\n else:\n raise Exception(f\"no method {level} on logger\")\n except BaseException as e:\n msg = f\"failed to log exception. {e}\"\n try:\n logger.debug(msg)\n except Exception as e:\n print(f\"{msg}. {e}\")\n\n return log_and_print\n\n\ndef traceback(*args: Any, **kwargs: Any) -> None:\n return create_log_and_print_function(level=\"exception\")(*args, **kwargs)\n\n\ndef critical(*args: Any, **kwargs: Any) -> None:\n return create_log_and_print_function(level=\"critical\")(*args, **kwargs)\n\n\ndef error(*args: Any, **kwargs: Any) -> None:\n return create_log_and_print_function(level=\"error\")(*args, **kwargs)\n\n\ndef warning(*args: Any, **kwargs: Any) -> None:\n return create_log_and_print_function(level=\"warning\")(*args, **kwargs)\n\n\ndef info(*args: Any, **kwargs: Any) -> None:\n return create_log_and_print_function(level=\"info\")(*args, **kwargs)\n\n\ndef debug(*args: Any, **kwargs: Any) -> None:\n return create_log_and_print_function(level=\"debug\")(*args, **kwargs)\n\n\ndef trace(*args: Any, **kwargs: Any) -> None:\n return create_log_and_print_function(level=\"trace\")(*args, **kwargs)\n", "path": "src/syft/logger.py"}]} | 1,628 | 198 |
gh_patches_debug_36091 | rasdani/github-patches | git_diff | freedomofpress__securedrop-5894 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
v2 removal on restore does not cover HTTPS services
The logic added in https://github.com/freedomofpress/securedrop/pull/5677 to disable v2 onion services when a backup is restored to a Focal server does not remove config lines for HTTPS services (port 443), potentially resulting in a broken configuration.
</issue>
<code>
[start of install_files/ansible-base/roles/restore/files/disable_v2.py]
1 #!/usr/bin/env python3
2 # To execute on prod:
3 # python3 disable_v2.py /etc/tor/torrc /etc/tor/torrc
4 # To execute for testing locally:
5 # python3 disable_v2.py /etc/tor/torrc /tmp/dumytorrc
6 import sys
7
8
9 def filter_v2(filename):
10 # Read the file
11 with open(filename) as f:
12 data = f.readlines()
13 # We will store the filtered lines to result
14 result = []
15
16 i = 0
17 while i < len(data):
18 line = data[i]
19 if line == "HiddenServiceDir /var/lib/tor/services/source\n":
20 i += 1
21 while data[i].strip() == "":
22 i += 1
23 line = data[i]
24 if line == "HiddenServiceVersion 2\n":
25 i += 1
26 line = data[i]
27 while data[i].strip() == "":
28 i += 1
29 line = data[i]
30 if line == "HiddenServicePort 80 127.0.0.1:80\n":
31 i += 1
32 continue
33 # Now check for journalist
34 if line == "HiddenServiceDir /var/lib/tor/services/journalist\n":
35 i += 1
36 while data[i].strip() == "":
37 i += 1
38 line = data[i]
39 if line == "HiddenServiceVersion 2\n":
40 i += 1
41 line = data[i]
42 while data[i].strip() == "":
43 i += 1
44 line = data[i]
45 if line == "HiddenServicePort 80 127.0.0.1:8080\n":
46 i += 1
47 line = data[i]
48 while data[i].strip() == "":
49 i += 1
50 line = data[i]
51 if line == "HiddenServiceAuthorizeClient stealth journalist\n":
52 i += 1
53 continue
54 # Now the v2 ssh access
55 if line == "HiddenServiceDir /var/lib/tor/services/ssh\n":
56 i += 1
57 while data[i].strip() == "":
58 i += 1
59 line = data[i]
60 if line == "HiddenServiceVersion 2\n":
61 i += 1
62 line = data[i]
63 while data[i].strip() == "":
64 i += 1
65 line = data[i]
66 if line == "HiddenServicePort 22 127.0.0.1:22\n":
67 i += 1
68 line = data[i]
69 while data[i].strip() == "":
70 i += 1
71 line = data[i]
72 if line == "HiddenServiceAuthorizeClient stealth admin\n":
73 i += 1
74 continue
75
76 result.append(line)
77 i += 1
78
79 # Now return the result
80 return result
81
82
83 if __name__ == "__main__":
84 filename = sys.argv[1]
85 outputfilename = sys.argv[2]
86 result = filter_v2(filename)
87 with open(outputfilename, "w") as fobj:
88 for line in result:
89 fobj.write(line)
90
[end of install_files/ansible-base/roles/restore/files/disable_v2.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/install_files/ansible-base/roles/restore/files/disable_v2.py b/install_files/ansible-base/roles/restore/files/disable_v2.py
deleted file mode 100644
--- a/install_files/ansible-base/roles/restore/files/disable_v2.py
+++ /dev/null
@@ -1,89 +0,0 @@
-#!/usr/bin/env python3
-# To execute on prod:
-# python3 disable_v2.py /etc/tor/torrc /etc/tor/torrc
-# To execute for testing locally:
-# python3 disable_v2.py /etc/tor/torrc /tmp/dumytorrc
-import sys
-
-
-def filter_v2(filename):
- # Read the file
- with open(filename) as f:
- data = f.readlines()
- # We will store the filtered lines to result
- result = []
-
- i = 0
- while i < len(data):
- line = data[i]
- if line == "HiddenServiceDir /var/lib/tor/services/source\n":
- i += 1
- while data[i].strip() == "":
- i += 1
- line = data[i]
- if line == "HiddenServiceVersion 2\n":
- i += 1
- line = data[i]
- while data[i].strip() == "":
- i += 1
- line = data[i]
- if line == "HiddenServicePort 80 127.0.0.1:80\n":
- i += 1
- continue
- # Now check for journalist
- if line == "HiddenServiceDir /var/lib/tor/services/journalist\n":
- i += 1
- while data[i].strip() == "":
- i += 1
- line = data[i]
- if line == "HiddenServiceVersion 2\n":
- i += 1
- line = data[i]
- while data[i].strip() == "":
- i += 1
- line = data[i]
- if line == "HiddenServicePort 80 127.0.0.1:8080\n":
- i += 1
- line = data[i]
- while data[i].strip() == "":
- i += 1
- line = data[i]
- if line == "HiddenServiceAuthorizeClient stealth journalist\n":
- i += 1
- continue
- # Now the v2 ssh access
- if line == "HiddenServiceDir /var/lib/tor/services/ssh\n":
- i += 1
- while data[i].strip() == "":
- i += 1
- line = data[i]
- if line == "HiddenServiceVersion 2\n":
- i += 1
- line = data[i]
- while data[i].strip() == "":
- i += 1
- line = data[i]
- if line == "HiddenServicePort 22 127.0.0.1:22\n":
- i += 1
- line = data[i]
- while data[i].strip() == "":
- i += 1
- line = data[i]
- if line == "HiddenServiceAuthorizeClient stealth admin\n":
- i += 1
- continue
-
- result.append(line)
- i += 1
-
- # Now return the result
- return result
-
-
-if __name__ == "__main__":
- filename = sys.argv[1]
- outputfilename = sys.argv[2]
- result = filter_v2(filename)
- with open(outputfilename, "w") as fobj:
- for line in result:
- fobj.write(line)
| {"golden_diff": "diff --git a/install_files/ansible-base/roles/restore/files/disable_v2.py b/install_files/ansible-base/roles/restore/files/disable_v2.py\ndeleted file mode 100644\n--- a/install_files/ansible-base/roles/restore/files/disable_v2.py\n+++ /dev/null\n@@ -1,89 +0,0 @@\n-#!/usr/bin/env python3\n-# To execute on prod:\n-# python3 disable_v2.py /etc/tor/torrc /etc/tor/torrc\n-# To execute for testing locally:\n-# python3 disable_v2.py /etc/tor/torrc /tmp/dumytorrc\n-import sys\n-\n-\n-def filter_v2(filename):\n- # Read the file\n- with open(filename) as f:\n- data = f.readlines()\n- # We will store the filtered lines to result\n- result = []\n-\n- i = 0\n- while i < len(data):\n- line = data[i]\n- if line == \"HiddenServiceDir /var/lib/tor/services/source\\n\":\n- i += 1\n- while data[i].strip() == \"\":\n- i += 1\n- line = data[i]\n- if line == \"HiddenServiceVersion 2\\n\":\n- i += 1\n- line = data[i]\n- while data[i].strip() == \"\":\n- i += 1\n- line = data[i]\n- if line == \"HiddenServicePort 80 127.0.0.1:80\\n\":\n- i += 1\n- continue\n- # Now check for journalist\n- if line == \"HiddenServiceDir /var/lib/tor/services/journalist\\n\":\n- i += 1\n- while data[i].strip() == \"\":\n- i += 1\n- line = data[i]\n- if line == \"HiddenServiceVersion 2\\n\":\n- i += 1\n- line = data[i]\n- while data[i].strip() == \"\":\n- i += 1\n- line = data[i]\n- if line == \"HiddenServicePort 80 127.0.0.1:8080\\n\":\n- i += 1\n- line = data[i]\n- while data[i].strip() == \"\":\n- i += 1\n- line = data[i]\n- if line == \"HiddenServiceAuthorizeClient stealth journalist\\n\":\n- i += 1\n- continue\n- # Now the v2 ssh access\n- if line == \"HiddenServiceDir /var/lib/tor/services/ssh\\n\":\n- i += 1\n- while data[i].strip() == \"\":\n- i += 1\n- line = data[i]\n- if line == \"HiddenServiceVersion 2\\n\":\n- i += 1\n- line = data[i]\n- while data[i].strip() == \"\":\n- i += 1\n- line = data[i]\n- if line == \"HiddenServicePort 22 127.0.0.1:22\\n\":\n- i += 1\n- line = data[i]\n- while data[i].strip() == \"\":\n- i += 1\n- line = data[i]\n- if line == \"HiddenServiceAuthorizeClient stealth admin\\n\":\n- i += 1\n- continue\n-\n- result.append(line)\n- i += 1\n-\n- # Now return the result\n- return result\n-\n-\n-if __name__ == \"__main__\":\n- filename = sys.argv[1]\n- outputfilename = sys.argv[2]\n- result = filter_v2(filename)\n- with open(outputfilename, \"w\") as fobj:\n- for line in result:\n- fobj.write(line)\n", "issue": "v2 removal on restore does not cover HTTPS services\nThe logic added in https://github.com/freedomofpress/securedrop/pull/5677 to disable v2 onion services when a backup is restored to a Focal server does not remove config lines for HTTPS services (port 443), potentially resulting in a broken configuration.\n", "before_files": [{"content": "#!/usr/bin/env python3\n# To execute on prod:\n# python3 disable_v2.py /etc/tor/torrc /etc/tor/torrc\n# To execute for testing locally:\n# python3 disable_v2.py /etc/tor/torrc /tmp/dumytorrc\nimport sys\n\n\ndef filter_v2(filename):\n # Read the file\n with open(filename) as f:\n data = f.readlines()\n # We will store the filtered lines to result\n result = []\n\n i = 0\n while i < len(data):\n line = data[i]\n if line == \"HiddenServiceDir /var/lib/tor/services/source\\n\":\n i += 1\n while data[i].strip() == \"\":\n i += 1\n line = data[i]\n if line == \"HiddenServiceVersion 2\\n\":\n i += 1\n line = data[i]\n while data[i].strip() == \"\":\n i += 1\n line = data[i]\n if line == \"HiddenServicePort 80 127.0.0.1:80\\n\":\n i += 1\n continue\n # Now check for journalist\n if line == \"HiddenServiceDir /var/lib/tor/services/journalist\\n\":\n i += 1\n while data[i].strip() == \"\":\n i += 1\n line = data[i]\n if line == \"HiddenServiceVersion 2\\n\":\n i += 1\n line = data[i]\n while data[i].strip() == \"\":\n i += 1\n line = data[i]\n if line == \"HiddenServicePort 80 127.0.0.1:8080\\n\":\n i += 1\n line = data[i]\n while data[i].strip() == \"\":\n i += 1\n line = data[i]\n if line == \"HiddenServiceAuthorizeClient stealth journalist\\n\":\n i += 1\n continue\n # Now the v2 ssh access\n if line == \"HiddenServiceDir /var/lib/tor/services/ssh\\n\":\n i += 1\n while data[i].strip() == \"\":\n i += 1\n line = data[i]\n if line == \"HiddenServiceVersion 2\\n\":\n i += 1\n line = data[i]\n while data[i].strip() == \"\":\n i += 1\n line = data[i]\n if line == \"HiddenServicePort 22 127.0.0.1:22\\n\":\n i += 1\n line = data[i]\n while data[i].strip() == \"\":\n i += 1\n line = data[i]\n if line == \"HiddenServiceAuthorizeClient stealth admin\\n\":\n i += 1\n continue\n\n result.append(line)\n i += 1\n\n # Now return the result\n return result\n\n\nif __name__ == \"__main__\":\n filename = sys.argv[1]\n outputfilename = sys.argv[2]\n result = filter_v2(filename)\n with open(outputfilename, \"w\") as fobj:\n for line in result:\n fobj.write(line)\n", "path": "install_files/ansible-base/roles/restore/files/disable_v2.py"}]} | 1,497 | 857 |
gh_patches_debug_10651 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-5440 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of colossalai/legacy/tensor/tensor_spec.py]
1 from dataclasses import dataclass
2 from typing import Optional
3
4 from colossalai.legacy.tensor.distspec import DistPlacementPattern, _DistSpec
5 from colossalai.legacy.tensor.process_group import ProcessGroup
6
7 from .compute_spec import ComputeSpec
8
9
10 @dataclass
11 class ColoTensorSpec:
12 """ColoTensorSpec
13
14 A data class for specifications of the `ColoTensor`.
15 It contains attributes of `ProcessGroup`, `_DistSpec`, `ComputeSpec`.
16 The latter two attributes are optional. If not set, they are default value is `Replicate()` and `None`.
17 """
18
19 pg: ProcessGroup
20 dist_attr: Optional[_DistSpec] = _DistSpec(DistPlacementPattern.REPLICATE)
21 compute_attr: Optional[ComputeSpec] = None
22
[end of colossalai/legacy/tensor/tensor_spec.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/colossalai/legacy/tensor/tensor_spec.py b/colossalai/legacy/tensor/tensor_spec.py
--- a/colossalai/legacy/tensor/tensor_spec.py
+++ b/colossalai/legacy/tensor/tensor_spec.py
@@ -1,4 +1,4 @@
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Optional
from colossalai.legacy.tensor.distspec import DistPlacementPattern, _DistSpec
@@ -17,5 +17,5 @@
"""
pg: ProcessGroup
- dist_attr: Optional[_DistSpec] = _DistSpec(DistPlacementPattern.REPLICATE)
+ dist_attr: Optional[_DistSpec] = field(default_factory=lambda: _DistSpec(DistPlacementPattern.REPLICATE))
compute_attr: Optional[ComputeSpec] = None
| {"golden_diff": "diff --git a/colossalai/legacy/tensor/tensor_spec.py b/colossalai/legacy/tensor/tensor_spec.py\n--- a/colossalai/legacy/tensor/tensor_spec.py\n+++ b/colossalai/legacy/tensor/tensor_spec.py\n@@ -1,4 +1,4 @@\n-from dataclasses import dataclass\n+from dataclasses import dataclass, field\n from typing import Optional\n \n from colossalai.legacy.tensor.distspec import DistPlacementPattern, _DistSpec\n@@ -17,5 +17,5 @@\n \"\"\"\n \n pg: ProcessGroup\n- dist_attr: Optional[_DistSpec] = _DistSpec(DistPlacementPattern.REPLICATE)\n+ dist_attr: Optional[_DistSpec] = field(default_factory=lambda: _DistSpec(DistPlacementPattern.REPLICATE))\n compute_attr: Optional[ComputeSpec] = None\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from dataclasses import dataclass\nfrom typing import Optional\n\nfrom colossalai.legacy.tensor.distspec import DistPlacementPattern, _DistSpec\nfrom colossalai.legacy.tensor.process_group import ProcessGroup\n\nfrom .compute_spec import ComputeSpec\n\n\n@dataclass\nclass ColoTensorSpec:\n \"\"\"ColoTensorSpec\n\n A data class for specifications of the `ColoTensor`.\n It contains attributes of `ProcessGroup`, `_DistSpec`, `ComputeSpec`.\n The latter two attributes are optional. If not set, they are default value is `Replicate()` and `None`.\n \"\"\"\n\n pg: ProcessGroup\n dist_attr: Optional[_DistSpec] = _DistSpec(DistPlacementPattern.REPLICATE)\n compute_attr: Optional[ComputeSpec] = None\n", "path": "colossalai/legacy/tensor/tensor_spec.py"}]} | 772 | 191 |
gh_patches_debug_12715 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-3240 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Export ExponentialBucketHistogramAggregation in opentelemetry.sdk.metrics.view
**Is your feature request related to a problem?**
We want to use the Exponential Histograms features publicly released in version [1.17.0](https://github.com/open-telemetry/opentelemetry-python/blob/main/CHANGELOG.md#version-1170038b0-2023-03-22).
**Describe the solution you'd like**
I'd like to use the public API.
**Describe alternatives you've considered**
One can import it from `opentelemetry.sdk.metrics._internal.aggregation`
**Additional context**
Currently the code in https://github.com/open-telemetry/opentelemetry-python/blob/b6a1b22fa65f41bdefb01d64b76e5e793d039f6d/opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py#L25-L33 is not exporting the newly added `ExponentialBucketHistogramAggregation`
Export ExponentialBucketHistogramAggregation in opentelemetry.sdk.metrics.view
**Is your feature request related to a problem?**
We want to use the Exponential Histograms features publicly released in version [1.17.0](https://github.com/open-telemetry/opentelemetry-python/blob/main/CHANGELOG.md#version-1170038b0-2023-03-22).
**Describe the solution you'd like**
I'd like to use the public API.
**Describe alternatives you've considered**
One can import it from `opentelemetry.sdk.metrics._internal.aggregation`
**Additional context**
Currently the code in https://github.com/open-telemetry/opentelemetry-python/blob/b6a1b22fa65f41bdefb01d64b76e5e793d039f6d/opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py#L25-L33 is not exporting the newly added `ExponentialBucketHistogramAggregation`
</issue>
<code>
[start of opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from opentelemetry.sdk.metrics._internal.aggregation import (
16 Aggregation,
17 DefaultAggregation,
18 DropAggregation,
19 ExplicitBucketHistogramAggregation,
20 LastValueAggregation,
21 SumAggregation,
22 )
23 from opentelemetry.sdk.metrics._internal.view import View
24
25 __all__ = [
26 "Aggregation",
27 "DefaultAggregation",
28 "DropAggregation",
29 "ExplicitBucketHistogramAggregation",
30 "LastValueAggregation",
31 "SumAggregation",
32 "View",
33 ]
34
[end of opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py
--- a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py
@@ -17,6 +17,7 @@
DefaultAggregation,
DropAggregation,
ExplicitBucketHistogramAggregation,
+ ExponentialBucketHistogramAggregation,
LastValueAggregation,
SumAggregation,
)
@@ -27,6 +28,7 @@
"DefaultAggregation",
"DropAggregation",
"ExplicitBucketHistogramAggregation",
+ "ExponentialBucketHistogramAggregation",
"LastValueAggregation",
"SumAggregation",
"View",
| {"golden_diff": "diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py\n--- a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py\n+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py\n@@ -17,6 +17,7 @@\n DefaultAggregation,\n DropAggregation,\n ExplicitBucketHistogramAggregation,\n+ ExponentialBucketHistogramAggregation,\n LastValueAggregation,\n SumAggregation,\n )\n@@ -27,6 +28,7 @@\n \"DefaultAggregation\",\n \"DropAggregation\",\n \"ExplicitBucketHistogramAggregation\",\n+ \"ExponentialBucketHistogramAggregation\",\n \"LastValueAggregation\",\n \"SumAggregation\",\n \"View\",\n", "issue": "Export ExponentialBucketHistogramAggregation in opentelemetry.sdk.metrics.view\n**Is your feature request related to a problem?**\r\nWe want to use the Exponential Histograms features publicly released in version [1.17.0](https://github.com/open-telemetry/opentelemetry-python/blob/main/CHANGELOG.md#version-1170038b0-2023-03-22).\r\n\r\n**Describe the solution you'd like**\r\nI'd like to use the public API.\r\n\r\n**Describe alternatives you've considered**\r\nOne can import it from `opentelemetry.sdk.metrics._internal.aggregation`\r\n\r\n**Additional context**\r\nCurrently the code in https://github.com/open-telemetry/opentelemetry-python/blob/b6a1b22fa65f41bdefb01d64b76e5e793d039f6d/opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py#L25-L33 is not exporting the newly added `ExponentialBucketHistogramAggregation`\r\n\nExport ExponentialBucketHistogramAggregation in opentelemetry.sdk.metrics.view\n**Is your feature request related to a problem?**\r\nWe want to use the Exponential Histograms features publicly released in version [1.17.0](https://github.com/open-telemetry/opentelemetry-python/blob/main/CHANGELOG.md#version-1170038b0-2023-03-22).\r\n\r\n**Describe the solution you'd like**\r\nI'd like to use the public API.\r\n\r\n**Describe alternatives you've considered**\r\nOne can import it from `opentelemetry.sdk.metrics._internal.aggregation`\r\n\r\n**Additional context**\r\nCurrently the code in https://github.com/open-telemetry/opentelemetry-python/blob/b6a1b22fa65f41bdefb01d64b76e5e793d039f6d/opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py#L25-L33 is not exporting the newly added `ExponentialBucketHistogramAggregation`\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom opentelemetry.sdk.metrics._internal.aggregation import (\n Aggregation,\n DefaultAggregation,\n DropAggregation,\n ExplicitBucketHistogramAggregation,\n LastValueAggregation,\n SumAggregation,\n)\nfrom opentelemetry.sdk.metrics._internal.view import View\n\n__all__ = [\n \"Aggregation\",\n \"DefaultAggregation\",\n \"DropAggregation\",\n \"ExplicitBucketHistogramAggregation\",\n \"LastValueAggregation\",\n \"SumAggregation\",\n \"View\",\n]\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py"}]} | 1,305 | 186 |
gh_patches_debug_9367 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-14997 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[https://www.franceinter.fr] WARNING: unable to extract upload date
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.12.14*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.12.14**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
```
youtube-dl-mp3 "https://www.franceinter.fr/emissions/les-concerts-d-inter/les-concerts-d-inter-14-decembre-2017"
[FranceInter] les-concerts-d-inter/les-concerts-d-inter-14-decembre-2017: Downloading webpage
WARNING: unable to extract upload date; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-v']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2017.12.14
[debug] Python version 2.7.12 - Linux-4.4.0-103-generic-x86_64-with-Ubuntu-16.04-xenial
[debug] exe versions: avconv 2.8.11-0ubuntu0.16.04.1, avprobe 2.8.11-0ubuntu0.16.04.1, ffmpeg 2.8.11-0ubuntu0.16.04.1, ffprobe 2.8.11-0ubuntu0.16.04.1
[debug] Proxy map: {}
```
</issue>
<code>
[start of youtube_dl/extractor/franceinter.py]
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 from .common import InfoExtractor
5 from ..utils import month_by_name
6
7
8 class FranceInterIE(InfoExtractor):
9 _VALID_URL = r'https?://(?:www\.)?franceinter\.fr/emissions/(?P<id>[^?#]+)'
10
11 _TEST = {
12 'url': 'https://www.franceinter.fr/emissions/affaires-sensibles/affaires-sensibles-07-septembre-2016',
13 'md5': '9e54d7bdb6fdc02a841007f8a975c094',
14 'info_dict': {
15 'id': 'affaires-sensibles/affaires-sensibles-07-septembre-2016',
16 'ext': 'mp3',
17 'title': 'Affaire Cahuzac : le contentieux du compte en Suisse',
18 'description': 'md5:401969c5d318c061f86bda1fa359292b',
19 'upload_date': '20160907',
20 },
21 }
22
23 def _real_extract(self, url):
24 video_id = self._match_id(url)
25
26 webpage = self._download_webpage(url, video_id)
27
28 video_url = self._search_regex(
29 r'(?s)<div[^>]+class=["\']page-diffusion["\'][^>]*>.*?<button[^>]+data-url=(["\'])(?P<url>(?:(?!\1).)+)\1',
30 webpage, 'video url', group='url')
31
32 title = self._og_search_title(webpage)
33 description = self._og_search_description(webpage)
34
35 upload_date_str = self._search_regex(
36 r'class=["\']cover-emission-period["\'][^>]*>[^<]+\s+(\d{1,2}\s+[^\s]+\s+\d{4})<',
37 webpage, 'upload date', fatal=False)
38 if upload_date_str:
39 upload_date_list = upload_date_str.split()
40 upload_date_list.reverse()
41 upload_date_list[1] = '%02d' % (month_by_name(upload_date_list[1], lang='fr') or 0)
42 upload_date_list[2] = '%02d' % int(upload_date_list[2])
43 upload_date = ''.join(upload_date_list)
44 else:
45 upload_date = None
46
47 return {
48 'id': video_id,
49 'title': title,
50 'description': description,
51 'upload_date': upload_date,
52 'formats': [{
53 'url': video_url,
54 'vcodec': 'none',
55 }],
56 }
57
[end of youtube_dl/extractor/franceinter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/youtube_dl/extractor/franceinter.py b/youtube_dl/extractor/franceinter.py
--- a/youtube_dl/extractor/franceinter.py
+++ b/youtube_dl/extractor/franceinter.py
@@ -33,7 +33,7 @@
description = self._og_search_description(webpage)
upload_date_str = self._search_regex(
- r'class=["\']cover-emission-period["\'][^>]*>[^<]+\s+(\d{1,2}\s+[^\s]+\s+\d{4})<',
+ r'class=["\']\s*cover-emission-period\s*["\'][^>]*>[^<]+\s+(\d{1,2}\s+[^\s]+\s+\d{4})<',
webpage, 'upload date', fatal=False)
if upload_date_str:
upload_date_list = upload_date_str.split()
| {"golden_diff": "diff --git a/youtube_dl/extractor/franceinter.py b/youtube_dl/extractor/franceinter.py\n--- a/youtube_dl/extractor/franceinter.py\n+++ b/youtube_dl/extractor/franceinter.py\n@@ -33,7 +33,7 @@\n description = self._og_search_description(webpage)\n \n upload_date_str = self._search_regex(\n- r'class=[\"\\']cover-emission-period[\"\\'][^>]*>[^<]+\\s+(\\d{1,2}\\s+[^\\s]+\\s+\\d{4})<',\n+ r'class=[\"\\']\\s*cover-emission-period\\s*[\"\\'][^>]*>[^<]+\\s+(\\d{1,2}\\s+[^\\s]+\\s+\\d{4})<',\n webpage, 'upload date', fatal=False)\n if upload_date_str:\n upload_date_list = upload_date_str.split()\n", "issue": "[https://www.franceinter.fr] WARNING: unable to extract upload date\n---\r\n\r\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.12.14*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.\r\n- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.12.14**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n\r\n### What is the purpose of your *issue*?\r\n- [ ] Bug report (encountered problems with youtube-dl)\r\n- [x] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [ ] Question\r\n- [ ] Other\r\n\r\n---\r\n\r\n``` \r\nyoutube-dl-mp3 \"https://www.franceinter.fr/emissions/les-concerts-d-inter/les-concerts-d-inter-14-decembre-2017\"\r\n[FranceInter] les-concerts-d-inter/les-concerts-d-inter-14-decembre-2017: Downloading webpage\r\nWARNING: unable to extract upload date; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\r\n```\r\n\r\n```\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: [u'-v']\r\n[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8\r\n[debug] youtube-dl version 2017.12.14\r\n[debug] Python version 2.7.12 - Linux-4.4.0-103-generic-x86_64-with-Ubuntu-16.04-xenial\r\n[debug] exe versions: avconv 2.8.11-0ubuntu0.16.04.1, avprobe 2.8.11-0ubuntu0.16.04.1, ffmpeg 2.8.11-0ubuntu0.16.04.1, ffprobe 2.8.11-0ubuntu0.16.04.1\r\n[debug] Proxy map: {}\r\n```\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nfrom .common import InfoExtractor\nfrom ..utils import month_by_name\n\n\nclass FranceInterIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?franceinter\\.fr/emissions/(?P<id>[^?#]+)'\n\n _TEST = {\n 'url': 'https://www.franceinter.fr/emissions/affaires-sensibles/affaires-sensibles-07-septembre-2016',\n 'md5': '9e54d7bdb6fdc02a841007f8a975c094',\n 'info_dict': {\n 'id': 'affaires-sensibles/affaires-sensibles-07-septembre-2016',\n 'ext': 'mp3',\n 'title': 'Affaire Cahuzac : le contentieux du compte en Suisse',\n 'description': 'md5:401969c5d318c061f86bda1fa359292b',\n 'upload_date': '20160907',\n },\n }\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n\n webpage = self._download_webpage(url, video_id)\n\n video_url = self._search_regex(\n r'(?s)<div[^>]+class=[\"\\']page-diffusion[\"\\'][^>]*>.*?<button[^>]+data-url=([\"\\'])(?P<url>(?:(?!\\1).)+)\\1',\n webpage, 'video url', group='url')\n\n title = self._og_search_title(webpage)\n description = self._og_search_description(webpage)\n\n upload_date_str = self._search_regex(\n r'class=[\"\\']cover-emission-period[\"\\'][^>]*>[^<]+\\s+(\\d{1,2}\\s+[^\\s]+\\s+\\d{4})<',\n webpage, 'upload date', fatal=False)\n if upload_date_str:\n upload_date_list = upload_date_str.split()\n upload_date_list.reverse()\n upload_date_list[1] = '%02d' % (month_by_name(upload_date_list[1], lang='fr') or 0)\n upload_date_list[2] = '%02d' % int(upload_date_list[2])\n upload_date = ''.join(upload_date_list)\n else:\n upload_date = None\n\n return {\n 'id': video_id,\n 'title': title,\n 'description': description,\n 'upload_date': upload_date,\n 'formats': [{\n 'url': video_url,\n 'vcodec': 'none',\n }],\n }\n", "path": "youtube_dl/extractor/franceinter.py"}]} | 1,957 | 202 |
gh_patches_debug_7992 | rasdani/github-patches | git_diff | lightly-ai__lightly-305 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix import of ApiWorkflowClient
# Fix import of ApiWorkflowClient
Currently, the following import statement (from the docs) does not work:
```python
from lightly.api import ApiWorkflowClient
```
TODO:
- [x] We need to fix this by exposing the client in the `__init__.py` file.
- [x] Make sure the other imports in the docs work as well
</issue>
<code>
[start of lightly/active_learning/utils/__init__.py]
[end of lightly/active_learning/utils/__init__.py]
[start of lightly/api/__init__.py]
1 """ The lightly.api module provides access to the Lightly web-app. """
2
3 # Copyright (c) 2020. Lightly AG and its affiliates.
4 # All Rights Reserved
5
6 from lightly.api import routes
7
[end of lightly/api/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lightly/active_learning/utils/__init__.py b/lightly/active_learning/utils/__init__.py
--- a/lightly/active_learning/utils/__init__.py
+++ b/lightly/active_learning/utils/__init__.py
@@ -0,0 +1,7 @@
+""" Collection of Utils for Active Learning """
+
+# Copyright (c) 2020. Lightly AG and its affiliates.
+# All Rights Reserved
+
+from lightly.active_learning.utils.bounding_box import BoundingBox
+from lightly.active_learning.utils.object_detection_output import ObjectDetectionOutput
\ No newline at end of file
diff --git a/lightly/api/__init__.py b/lightly/api/__init__.py
--- a/lightly/api/__init__.py
+++ b/lightly/api/__init__.py
@@ -3,4 +3,5 @@
# Copyright (c) 2020. Lightly AG and its affiliates.
# All Rights Reserved
+from lightly.api.api_workflow_client import ApiWorkflowClient
from lightly.api import routes
| {"golden_diff": "diff --git a/lightly/active_learning/utils/__init__.py b/lightly/active_learning/utils/__init__.py\n--- a/lightly/active_learning/utils/__init__.py\n+++ b/lightly/active_learning/utils/__init__.py\n@@ -0,0 +1,7 @@\n+\"\"\" Collection of Utils for Active Learning \"\"\"\n+\n+# Copyright (c) 2020. Lightly AG and its affiliates.\n+# All Rights Reserved\n+\n+from lightly.active_learning.utils.bounding_box import BoundingBox\n+from lightly.active_learning.utils.object_detection_output import ObjectDetectionOutput\n\\ No newline at end of file\ndiff --git a/lightly/api/__init__.py b/lightly/api/__init__.py\n--- a/lightly/api/__init__.py\n+++ b/lightly/api/__init__.py\n@@ -3,4 +3,5 @@\n # Copyright (c) 2020. Lightly AG and its affiliates.\n # All Rights Reserved\n \n+from lightly.api.api_workflow_client import ApiWorkflowClient\n from lightly.api import routes\n", "issue": "Fix import of ApiWorkflowClient\n# Fix import of ApiWorkflowClient\r\n\r\nCurrently, the following import statement (from the docs) does not work:\r\n```python\r\nfrom lightly.api import ApiWorkflowClient\r\n```\r\n\r\nTODO:\r\n- [x] We need to fix this by exposing the client in the `__init__.py` file. \r\n- [x] Make sure the other imports in the docs work as well\n", "before_files": [{"content": "", "path": "lightly/active_learning/utils/__init__.py"}, {"content": "\"\"\" The lightly.api module provides access to the Lightly web-app. \"\"\"\n\n# Copyright (c) 2020. Lightly AG and its affiliates.\n# All Rights Reserved\n\nfrom lightly.api import routes\n", "path": "lightly/api/__init__.py"}]} | 701 | 223 |
gh_patches_debug_5392 | rasdani/github-patches | git_diff | streamlink__streamlink-1351 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Kanal7 Defective again
Only 2 months later they have changed the design.
Not opening with latest 0.9.0 Release:
[cli][info] Found matching plugin kanal7 for URL http://www.kanal7.com/canli-izle
error: No playable streams found on this URL: http://www.kanal7.com/canli-izle
</issue>
<code>
[start of src/streamlink/plugins/kanal7.py]
1 from __future__ import print_function
2 import re
3
4 from streamlink.plugin import Plugin
5 from streamlink.plugin.api import http
6 from streamlink.plugin.api import useragents
7 from streamlink.plugin.api import validate
8 from streamlink.stream import HLSStream
9
10
11 class Kanal7(Plugin):
12 url_re = re.compile(r"https?://(?:www.)?kanal7.com/canli-izle")
13 iframe_re = re.compile(r'iframe .*?src="(http://[^"]*?)"')
14 stream_re = re.compile(r'src="(http[^"]*?)"')
15
16 @classmethod
17 def can_handle_url(cls, url):
18 return cls.url_re.match(url) is not None
19
20 def find_iframe(self, url):
21 res = http.get(url)
22 # find iframe url
23 iframe = self.iframe_re.search(res.text)
24 iframe_url = iframe and iframe.group(1)
25 if iframe_url:
26 self.logger.debug("Found iframe: {}", iframe_url)
27 return iframe_url
28
29 def _get_streams(self):
30 iframe1 = self.find_iframe(self.url)
31 if iframe1:
32 iframe2 = self.find_iframe(iframe1)
33 if iframe2:
34 ires = http.get(iframe2)
35 stream_m = self.stream_re.search(ires.text)
36 stream_url = stream_m and stream_m.group(1)
37 if stream_url:
38 yield "live", HLSStream(self.session, stream_url, headers={"Referer": iframe2})
39 else:
40 self.logger.error("Could not find second iframe, has the page layout changed?")
41 else:
42 self.logger.error("Could not find iframe, has the page layout changed?")
43
44
45 __plugin__ = Kanal7
46
[end of src/streamlink/plugins/kanal7.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/kanal7.py b/src/streamlink/plugins/kanal7.py
--- a/src/streamlink/plugins/kanal7.py
+++ b/src/streamlink/plugins/kanal7.py
@@ -11,7 +11,7 @@
class Kanal7(Plugin):
url_re = re.compile(r"https?://(?:www.)?kanal7.com/canli-izle")
iframe_re = re.compile(r'iframe .*?src="(http://[^"]*?)"')
- stream_re = re.compile(r'src="(http[^"]*?)"')
+ stream_re = re.compile(r'''tp_file\s+=\s+['"](http[^"]*?)['"]''')
@classmethod
def can_handle_url(cls, url):
| {"golden_diff": "diff --git a/src/streamlink/plugins/kanal7.py b/src/streamlink/plugins/kanal7.py\n--- a/src/streamlink/plugins/kanal7.py\n+++ b/src/streamlink/plugins/kanal7.py\n@@ -11,7 +11,7 @@\n class Kanal7(Plugin):\n url_re = re.compile(r\"https?://(?:www.)?kanal7.com/canli-izle\")\n iframe_re = re.compile(r'iframe .*?src=\"(http://[^\"]*?)\"')\n- stream_re = re.compile(r'src=\"(http[^\"]*?)\"')\n+ stream_re = re.compile(r'''tp_file\\s+=\\s+['\"](http[^\"]*?)['\"]''')\n \n @classmethod\n def can_handle_url(cls, url):\n", "issue": "Kanal7 Defective again\nOnly 2 months later they have changed the design.\r\n\r\nNot opening with latest 0.9.0 Release:\r\n\r\n[cli][info] Found matching plugin kanal7 for URL http://www.kanal7.com/canli-izle\r\nerror: No playable streams found on this URL: http://www.kanal7.com/canli-izle\n", "before_files": [{"content": "from __future__ import print_function\nimport re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.plugin.api import useragents\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HLSStream\n\n\nclass Kanal7(Plugin):\n url_re = re.compile(r\"https?://(?:www.)?kanal7.com/canli-izle\")\n iframe_re = re.compile(r'iframe .*?src=\"(http://[^\"]*?)\"')\n stream_re = re.compile(r'src=\"(http[^\"]*?)\"')\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n def find_iframe(self, url):\n res = http.get(url)\n # find iframe url\n iframe = self.iframe_re.search(res.text)\n iframe_url = iframe and iframe.group(1)\n if iframe_url:\n self.logger.debug(\"Found iframe: {}\", iframe_url)\n return iframe_url\n\n def _get_streams(self):\n iframe1 = self.find_iframe(self.url)\n if iframe1:\n iframe2 = self.find_iframe(iframe1)\n if iframe2:\n ires = http.get(iframe2)\n stream_m = self.stream_re.search(ires.text)\n stream_url = stream_m and stream_m.group(1)\n if stream_url:\n yield \"live\", HLSStream(self.session, stream_url, headers={\"Referer\": iframe2})\n else:\n self.logger.error(\"Could not find second iframe, has the page layout changed?\")\n else:\n self.logger.error(\"Could not find iframe, has the page layout changed?\")\n\n\n__plugin__ = Kanal7\n", "path": "src/streamlink/plugins/kanal7.py"}]} | 1,076 | 173 |
gh_patches_debug_41012 | rasdani/github-patches | git_diff | cornellius-gp__gpytorch-1468 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Docs] Missing Grid.py documentation
# 📚 Documentation/Examples
** Is there documentation missing? **
The utils section of [GPyTorch documentation](https://gpytorch.readthedocs.io) does not include any information on grid.py, which is referenced [elsewhere in the docs](https://docs.gpytorch.ai/en/stable/kernels.html?highlight=choose_grid_size#gpytorch.kernels.GridKernel.update_grid).
</issue>
<code>
[start of gpytorch/utils/grid.py]
1 #!/usr/bin/env python3
2
3 import math
4 from typing import List, Tuple
5
6 import torch
7
8
9 def scale_to_bounds(x, lower_bound, upper_bound):
10 """
11 Scale the input data so that it lies in between the lower and upper bounds.
12
13 Args:
14 :attr:`x` (Tensor `n` or `b x n`):
15 the input
16 :attr:`lower_bound` (float)
17 :attr:`upper_bound` (float)
18
19 Returns:
20 :obj:`torch.Tensor`
21 """
22 # Scale features so they fit inside grid bounds
23 min_val = x.min()
24 max_val = x.max()
25 diff = max_val - min_val
26 x = (x - min_val) * (0.95 * (upper_bound - lower_bound) / diff) + 0.95 * lower_bound
27 return x
28
29
30 def choose_grid_size(train_inputs, ratio=1.0, kronecker_structure=True):
31 """
32 Given some training inputs, determine a good grid size for KISS-GP.
33
34 Args:
35 :attr:`train_inputs` (Tensor `n` or `n x d` or `b x n x d`):
36 training data
37 :attr:`ratio` (float, optional):
38 Ratio - number of grid points to the amount of data (default: 1.)
39 :attr:`kronecker_structure` (bool, default=True):
40 Whether or not the model will use Kronecker structure in the grid
41 (set to True unless there is an additive or product decomposition in the prior)
42
43 Returns:
44 :obj:`int`
45 """
46 # Scale features so they fit inside grid bounds
47 num_data = train_inputs.numel() if train_inputs.dim() == 1 else train_inputs.size(-2)
48 num_dim = 1 if train_inputs.dim() == 1 else train_inputs.size(-1)
49 if kronecker_structure:
50 return int(ratio * math.pow(num_data, 1.0 / num_dim))
51 else:
52 return ratio * num_data
53
54
55 def convert_legacy_grid(grid: torch.Tensor) -> List[torch.Tensor]:
56 return [grid[:, i] for i in range(grid.size(-1))]
57
58
59 def create_data_from_grid(grid: List[torch.Tensor]) -> torch.Tensor:
60 """
61 Args:
62 :attr:`grid` (List[Tensor])
63 Each Tensor is a 1D set of increments for the grid in that dimension
64 Returns:
65 `grid_data` (Tensor)
66 Returns the set of points on the grid going by column-major order
67 (due to legacy reasons).
68 """
69 if torch.is_tensor(grid):
70 grid = convert_legacy_grid(grid)
71 ndims = len(grid)
72 assert all(axis.dim() == 1 for axis in grid)
73 projections = torch.meshgrid(*grid)
74 grid_tensor = torch.stack(projections, axis=-1)
75 # Note that if we did
76 # grid_data = grid_tensor.reshape(-1, ndims)
77 # instead, we would be iterating through the points of our grid from the
78 # last data dimension to the first data dimension. However, due to legacy
79 # reasons, we need to iterate from the first data dimension to the last data
80 # dimension when creating grid_data
81 grid_data = grid_tensor.permute(*(reversed(range(ndims + 1)))).reshape(ndims, -1).transpose(0, 1)
82 return grid_data
83
84
85 def create_grid(
86 grid_sizes: List[int], grid_bounds: List[Tuple[float, float]], extend: bool = True, device="cpu", dtype=torch.float,
87 ) -> List[torch.Tensor]:
88 """
89 Creates a grid represented by a list of 1D Tensors representing the
90 projections of the grid into each dimension
91
92 If `extend`, we extend the grid by two points past the specified boundary
93 which can be important for getting good grid interpolations
94 """
95 grid = []
96 for i in range(len(grid_bounds)):
97 grid_diff = float(grid_bounds[i][1] - grid_bounds[i][0]) / (grid_sizes[i] - 2)
98 if extend:
99 proj = torch.linspace(
100 grid_bounds[i][0] - grid_diff, grid_bounds[i][1] + grid_diff, grid_sizes[i], device=device, dtype=dtype,
101 )
102 else:
103 proj = torch.linspace(grid_bounds[i][0], grid_bounds[i][1], grid_sizes[i], device=device, dtype=dtype,)
104 grid.append(proj)
105 return grid
106
[end of gpytorch/utils/grid.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gpytorch/utils/grid.py b/gpytorch/utils/grid.py
--- a/gpytorch/utils/grid.py
+++ b/gpytorch/utils/grid.py
@@ -10,14 +10,12 @@
"""
Scale the input data so that it lies in between the lower and upper bounds.
- Args:
- :attr:`x` (Tensor `n` or `b x n`):
- the input
- :attr:`lower_bound` (float)
- :attr:`upper_bound` (float)
-
- Returns:
- :obj:`torch.Tensor`
+ :param x: the input data
+ :type x: torch.Tensor (... x n x d)
+ :param float lower_bound: lower bound of scaled data
+ :param float upper_bound: upper bound of scaled data
+ :return: scaled data
+ :rtype: torch.Tensor (... x n x d)
"""
# Scale features so they fit inside grid bounds
min_val = x.min()
@@ -31,17 +29,15 @@
"""
Given some training inputs, determine a good grid size for KISS-GP.
- Args:
- :attr:`train_inputs` (Tensor `n` or `n x d` or `b x n x d`):
- training data
- :attr:`ratio` (float, optional):
- Ratio - number of grid points to the amount of data (default: 1.)
- :attr:`kronecker_structure` (bool, default=True):
- Whether or not the model will use Kronecker structure in the grid
- (set to True unless there is an additive or product decomposition in the prior)
-
- Returns:
- :obj:`int`
+ :param x: the input data
+ :type x: torch.Tensor (... x n x d)
+ :param ratio: Amount of grid points per data point (default: 1.)
+ :type ratio: float, optional
+ :param kronecker_structure: Whether or not the model will use Kronecker structure in the grid
+ (set to True unless there is an additive or product decomposition in the prior)
+ :type kronecker_structure: bool, optional
+ :return: Grid size
+ :rtype: int
"""
# Scale features so they fit inside grid bounds
num_data = train_inputs.numel() if train_inputs.dim() == 1 else train_inputs.size(-2)
@@ -58,13 +54,10 @@
def create_data_from_grid(grid: List[torch.Tensor]) -> torch.Tensor:
"""
- Args:
- :attr:`grid` (List[Tensor])
- Each Tensor is a 1D set of increments for the grid in that dimension
- Returns:
- `grid_data` (Tensor)
- Returns the set of points on the grid going by column-major order
- (due to legacy reasons).
+ :param grid: Each Tensor is a 1D set of increments for the grid in that dimension
+ :type grid: List[torch.Tensor]
+ :return: The set of points on the grid going by column-major order
+ :rtype: torch.Tensor
"""
if torch.is_tensor(grid):
grid = convert_legacy_grid(grid)
@@ -90,7 +83,18 @@
projections of the grid into each dimension
If `extend`, we extend the grid by two points past the specified boundary
- which can be important for getting good grid interpolations
+ which can be important for getting good grid interpolations.
+
+ :param grid_sizes: Sizes of each grid dimension
+ :type grid_sizes: List[int]
+ :param grid_bounds: Lower and upper bounds of each grid dimension
+ :type grid_sizes: List[Tuple[float, float]]
+ :param device: target device for output (default: cpu)
+ :type device: torch.device, optional
+ :param dtype: target dtype for output (default: torch.float)
+ :type dtype: torch.dtype, optional
+ :return: Grid points for each dimension. Grid points are stored in a :obj:`torch.Tensor` with shape `grid_sizes[i]`.
+ :rtype: List[torch.Tensor]
"""
grid = []
for i in range(len(grid_bounds)):
| {"golden_diff": "diff --git a/gpytorch/utils/grid.py b/gpytorch/utils/grid.py\n--- a/gpytorch/utils/grid.py\n+++ b/gpytorch/utils/grid.py\n@@ -10,14 +10,12 @@\n \"\"\"\n Scale the input data so that it lies in between the lower and upper bounds.\n \n- Args:\n- :attr:`x` (Tensor `n` or `b x n`):\n- the input\n- :attr:`lower_bound` (float)\n- :attr:`upper_bound` (float)\n-\n- Returns:\n- :obj:`torch.Tensor`\n+ :param x: the input data\n+ :type x: torch.Tensor (... x n x d)\n+ :param float lower_bound: lower bound of scaled data\n+ :param float upper_bound: upper bound of scaled data\n+ :return: scaled data\n+ :rtype: torch.Tensor (... x n x d)\n \"\"\"\n # Scale features so they fit inside grid bounds\n min_val = x.min()\n@@ -31,17 +29,15 @@\n \"\"\"\n Given some training inputs, determine a good grid size for KISS-GP.\n \n- Args:\n- :attr:`train_inputs` (Tensor `n` or `n x d` or `b x n x d`):\n- training data\n- :attr:`ratio` (float, optional):\n- Ratio - number of grid points to the amount of data (default: 1.)\n- :attr:`kronecker_structure` (bool, default=True):\n- Whether or not the model will use Kronecker structure in the grid\n- (set to True unless there is an additive or product decomposition in the prior)\n-\n- Returns:\n- :obj:`int`\n+ :param x: the input data\n+ :type x: torch.Tensor (... x n x d)\n+ :param ratio: Amount of grid points per data point (default: 1.)\n+ :type ratio: float, optional\n+ :param kronecker_structure: Whether or not the model will use Kronecker structure in the grid\n+ (set to True unless there is an additive or product decomposition in the prior)\n+ :type kronecker_structure: bool, optional\n+ :return: Grid size\n+ :rtype: int\n \"\"\"\n # Scale features so they fit inside grid bounds\n num_data = train_inputs.numel() if train_inputs.dim() == 1 else train_inputs.size(-2)\n@@ -58,13 +54,10 @@\n \n def create_data_from_grid(grid: List[torch.Tensor]) -> torch.Tensor:\n \"\"\"\n- Args:\n- :attr:`grid` (List[Tensor])\n- Each Tensor is a 1D set of increments for the grid in that dimension\n- Returns:\n- `grid_data` (Tensor)\n- Returns the set of points on the grid going by column-major order\n- (due to legacy reasons).\n+ :param grid: Each Tensor is a 1D set of increments for the grid in that dimension\n+ :type grid: List[torch.Tensor]\n+ :return: The set of points on the grid going by column-major order\n+ :rtype: torch.Tensor\n \"\"\"\n if torch.is_tensor(grid):\n grid = convert_legacy_grid(grid)\n@@ -90,7 +83,18 @@\n projections of the grid into each dimension\n \n If `extend`, we extend the grid by two points past the specified boundary\n- which can be important for getting good grid interpolations\n+ which can be important for getting good grid interpolations.\n+\n+ :param grid_sizes: Sizes of each grid dimension\n+ :type grid_sizes: List[int]\n+ :param grid_bounds: Lower and upper bounds of each grid dimension\n+ :type grid_sizes: List[Tuple[float, float]]\n+ :param device: target device for output (default: cpu)\n+ :type device: torch.device, optional\n+ :param dtype: target dtype for output (default: torch.float)\n+ :type dtype: torch.dtype, optional\n+ :return: Grid points for each dimension. Grid points are stored in a :obj:`torch.Tensor` with shape `grid_sizes[i]`.\n+ :rtype: List[torch.Tensor]\n \"\"\"\n grid = []\n for i in range(len(grid_bounds)):\n", "issue": "[Docs] Missing Grid.py documentation\n# \ud83d\udcda Documentation/Examples\r\n\r\n** Is there documentation missing? **\r\nThe utils section of [GPyTorch documentation](https://gpytorch.readthedocs.io) does not include any information on grid.py, which is referenced [elsewhere in the docs](https://docs.gpytorch.ai/en/stable/kernels.html?highlight=choose_grid_size#gpytorch.kernels.GridKernel.update_grid).\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport math\nfrom typing import List, Tuple\n\nimport torch\n\n\ndef scale_to_bounds(x, lower_bound, upper_bound):\n \"\"\"\n Scale the input data so that it lies in between the lower and upper bounds.\n\n Args:\n :attr:`x` (Tensor `n` or `b x n`):\n the input\n :attr:`lower_bound` (float)\n :attr:`upper_bound` (float)\n\n Returns:\n :obj:`torch.Tensor`\n \"\"\"\n # Scale features so they fit inside grid bounds\n min_val = x.min()\n max_val = x.max()\n diff = max_val - min_val\n x = (x - min_val) * (0.95 * (upper_bound - lower_bound) / diff) + 0.95 * lower_bound\n return x\n\n\ndef choose_grid_size(train_inputs, ratio=1.0, kronecker_structure=True):\n \"\"\"\n Given some training inputs, determine a good grid size for KISS-GP.\n\n Args:\n :attr:`train_inputs` (Tensor `n` or `n x d` or `b x n x d`):\n training data\n :attr:`ratio` (float, optional):\n Ratio - number of grid points to the amount of data (default: 1.)\n :attr:`kronecker_structure` (bool, default=True):\n Whether or not the model will use Kronecker structure in the grid\n (set to True unless there is an additive or product decomposition in the prior)\n\n Returns:\n :obj:`int`\n \"\"\"\n # Scale features so they fit inside grid bounds\n num_data = train_inputs.numel() if train_inputs.dim() == 1 else train_inputs.size(-2)\n num_dim = 1 if train_inputs.dim() == 1 else train_inputs.size(-1)\n if kronecker_structure:\n return int(ratio * math.pow(num_data, 1.0 / num_dim))\n else:\n return ratio * num_data\n\n\ndef convert_legacy_grid(grid: torch.Tensor) -> List[torch.Tensor]:\n return [grid[:, i] for i in range(grid.size(-1))]\n\n\ndef create_data_from_grid(grid: List[torch.Tensor]) -> torch.Tensor:\n \"\"\"\n Args:\n :attr:`grid` (List[Tensor])\n Each Tensor is a 1D set of increments for the grid in that dimension\n Returns:\n `grid_data` (Tensor)\n Returns the set of points on the grid going by column-major order\n (due to legacy reasons).\n \"\"\"\n if torch.is_tensor(grid):\n grid = convert_legacy_grid(grid)\n ndims = len(grid)\n assert all(axis.dim() == 1 for axis in grid)\n projections = torch.meshgrid(*grid)\n grid_tensor = torch.stack(projections, axis=-1)\n # Note that if we did\n # grid_data = grid_tensor.reshape(-1, ndims)\n # instead, we would be iterating through the points of our grid from the\n # last data dimension to the first data dimension. However, due to legacy\n # reasons, we need to iterate from the first data dimension to the last data\n # dimension when creating grid_data\n grid_data = grid_tensor.permute(*(reversed(range(ndims + 1)))).reshape(ndims, -1).transpose(0, 1)\n return grid_data\n\n\ndef create_grid(\n grid_sizes: List[int], grid_bounds: List[Tuple[float, float]], extend: bool = True, device=\"cpu\", dtype=torch.float,\n) -> List[torch.Tensor]:\n \"\"\"\n Creates a grid represented by a list of 1D Tensors representing the\n projections of the grid into each dimension\n\n If `extend`, we extend the grid by two points past the specified boundary\n which can be important for getting good grid interpolations\n \"\"\"\n grid = []\n for i in range(len(grid_bounds)):\n grid_diff = float(grid_bounds[i][1] - grid_bounds[i][0]) / (grid_sizes[i] - 2)\n if extend:\n proj = torch.linspace(\n grid_bounds[i][0] - grid_diff, grid_bounds[i][1] + grid_diff, grid_sizes[i], device=device, dtype=dtype,\n )\n else:\n proj = torch.linspace(grid_bounds[i][0], grid_bounds[i][1], grid_sizes[i], device=device, dtype=dtype,)\n grid.append(proj)\n return grid\n", "path": "gpytorch/utils/grid.py"}]} | 1,816 | 962 |
gh_patches_debug_27885 | rasdani/github-patches | git_diff | pwr-Solaar__Solaar-743 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
authorship of Solaar in setup.py
Daniel Pavel is listed as the sole author of Solaar in setup.py
As far as I can tell, this puts him and his email in several repositories, such as PyPI https://pypi.org/project/solaar/
Who should be put there?
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python3
2
3 from glob import glob as _glob
4 try:
5 from setuptools import setup
6 except ImportError:
7 from distutils.core import setup
8
9 autostart_path = '/etc/xdg/autostart'
10
11 import sys
12 backup_path_0 = sys.path[0]
13 sys.path[0] = backup_path_0 + '/lib'
14 #from solaar import NAME, __version__
15 __version__ = '1.0.2-rc1'
16 NAME = 'Solaar'
17
18 sys.path[0] = backup_path_0
19
20 if 'install' in sys.argv:
21 # naively guess where the autostart .desktop file should be installed
22 if '--prefix' in sys.argv or any(x.startswith('--prefix=') for x in sys.argv) or '--home' in sys.argv:
23 autostart_path = 'etc/xdg/autostart'
24 elif '--user' in sys.argv:
25 from os import environ
26 from os import path
27 xdg_config_home = environ.get('XDG_CONFIG_HOME', path.expanduser(path.join('~', '.config')))
28 autostart_path = path.join(xdg_config_home, 'autostart')
29 del environ, path, xdg_config_home
30
31 del sys, backup_path_0
32
33
34 def _data_files():
35 from os.path import dirname as _dirname
36
37 yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')
38 yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')
39 yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']
40
41 for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):
42 yield _dirname(mo), [mo]
43
44 yield 'share/applications', ['share/applications/solaar.desktop']
45 yield autostart_path, ['share/autostart/solaar.desktop']
46
47 del _dirname
48
49
50 setup(name=NAME.lower(),
51 version=__version__,
52 description='Linux devices manager for the Logitech Unifying Receiver.',
53 long_description='''
54 Solaar is a Linux device manager for Logitech's Unifying Receiver peripherals.
55 It is able to pair/unpair devices to the receiver, and for some devices read
56 battery status.
57 '''.strip(),
58 author='Daniel Pavel',
59 author_email='[email protected]',
60 license='GPLv2',
61 url='http://pwr-solaar.github.io/Solaar/',
62 classifiers=[
63 'Development Status :: 4 - Beta',
64 'Environment :: X11 Applications :: GTK',
65 'Environment :: Console',
66 'Intended Audience :: End Users/Desktop',
67 'License :: DFSG approved',
68 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',
69 'Natural Language :: English',
70 'Programming Language :: Python :: 3 :: Only',
71 'Operating System :: POSIX :: Linux',
72 'Topic :: Utilities',
73 ],
74
75 platforms=['linux'],
76
77 # sudo apt install python-gi python3-gi \
78 # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1
79 # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],
80
81 python_requires='>=3.2',
82 install_requires=['pyudev (>= 0.13)', ],
83 package_dir={'': 'lib'},
84 packages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],
85 data_files=list(_data_files()),
86 scripts=_glob('bin/*'),
87 )
88
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -8,28 +8,10 @@
autostart_path = '/etc/xdg/autostart'
-import sys
-backup_path_0 = sys.path[0]
-sys.path[0] = backup_path_0 + '/lib'
#from solaar import NAME, __version__
__version__ = '1.0.2-rc1'
NAME = 'Solaar'
-sys.path[0] = backup_path_0
-
-if 'install' in sys.argv:
- # naively guess where the autostart .desktop file should be installed
- if '--prefix' in sys.argv or any(x.startswith('--prefix=') for x in sys.argv) or '--home' in sys.argv:
- autostart_path = 'etc/xdg/autostart'
- elif '--user' in sys.argv:
- from os import environ
- from os import path
- xdg_config_home = environ.get('XDG_CONFIG_HOME', path.expanduser(path.join('~', '.config')))
- autostart_path = path.join(xdg_config_home, 'autostart')
- del environ, path, xdg_config_home
-
-del sys, backup_path_0
-
def _data_files():
from os.path import dirname as _dirname
@@ -43,6 +25,7 @@
yield 'share/applications', ['share/applications/solaar.desktop']
yield autostart_path, ['share/autostart/solaar.desktop']
+ yield '/etc/udev/rules.d', ['rules.d/42-logitech-unify-permissions.rules']
del _dirname
@@ -56,7 +39,6 @@
battery status.
'''.strip(),
author='Daniel Pavel',
- author_email='[email protected]',
license='GPLv2',
url='http://pwr-solaar.github.io/Solaar/',
classifiers=[
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -8,28 +8,10 @@\n \n autostart_path = '/etc/xdg/autostart'\n \n-import sys\n-backup_path_0 = sys.path[0]\n-sys.path[0] = backup_path_0 + '/lib'\n #from solaar import NAME, __version__\n __version__ = '1.0.2-rc1'\n NAME = 'Solaar'\n \n-sys.path[0] = backup_path_0\n-\n-if 'install' in sys.argv:\n-\t# naively guess where the autostart .desktop file should be installed\n-\tif '--prefix' in sys.argv or any(x.startswith('--prefix=') for x in sys.argv) or '--home' in sys.argv:\n-\t\tautostart_path = 'etc/xdg/autostart'\n-\telif '--user' in sys.argv:\n-\t\tfrom os import environ\n-\t\tfrom os import path\n-\t\txdg_config_home = environ.get('XDG_CONFIG_HOME', path.expanduser(path.join('~', '.config')))\n-\t\tautostart_path = path.join(xdg_config_home, 'autostart')\n-\t\tdel environ, path, xdg_config_home\n-\n-del sys, backup_path_0\n-\n \n def _data_files():\n \tfrom os.path import dirname as _dirname\n@@ -43,6 +25,7 @@\n \n \tyield 'share/applications', ['share/applications/solaar.desktop']\n \tyield autostart_path, ['share/autostart/solaar.desktop']\n+\tyield '/etc/udev/rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n \n \tdel _dirname\n \n@@ -56,7 +39,6 @@\n battery status.\n '''.strip(),\n \t\tauthor='Daniel Pavel',\n-\t\tauthor_email='[email protected]',\n \t\tlicense='GPLv2',\n \t\turl='http://pwr-solaar.github.io/Solaar/',\n \t\tclassifiers=[\n", "issue": "authorship of Solaar in setup.py\nDaniel Pavel is listed as the sole author of Solaar in setup.py \r\n\r\nAs far as I can tell, this puts him and his email in several repositories, such as PyPI https://pypi.org/project/solaar/\r\n\r\nWho should be put there?\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nautostart_path = '/etc/xdg/autostart'\n\nimport sys\nbackup_path_0 = sys.path[0]\nsys.path[0] = backup_path_0 + '/lib'\n#from solaar import NAME, __version__\n__version__ = '1.0.2-rc1'\nNAME = 'Solaar'\n\nsys.path[0] = backup_path_0\n\nif 'install' in sys.argv:\n\t# naively guess where the autostart .desktop file should be installed\n\tif '--prefix' in sys.argv or any(x.startswith('--prefix=') for x in sys.argv) or '--home' in sys.argv:\n\t\tautostart_path = 'etc/xdg/autostart'\n\telif '--user' in sys.argv:\n\t\tfrom os import environ\n\t\tfrom os import path\n\t\txdg_config_home = environ.get('XDG_CONFIG_HOME', path.expanduser(path.join('~', '.config')))\n\t\tautostart_path = path.join(xdg_config_home, 'autostart')\n\t\tdel environ, path, xdg_config_home\n\ndel sys, backup_path_0\n\n\ndef _data_files():\n\tfrom os.path import dirname as _dirname\n\n\tyield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n\tyield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n\tyield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n\tfor mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n\t\tyield _dirname(mo), [mo]\n\n\tyield 'share/applications', ['share/applications/solaar.desktop']\n\tyield autostart_path, ['share/autostart/solaar.desktop']\n\n\tdel _dirname\n\n\nsetup(name=NAME.lower(),\n\t\tversion=__version__,\n\t\tdescription='Linux devices manager for the Logitech Unifying Receiver.',\n\t\tlong_description='''\nSolaar is a Linux device manager for Logitech's Unifying Receiver peripherals.\nIt is able to pair/unpair devices to the receiver, and for some devices read\nbattery status.\n'''.strip(),\n\t\tauthor='Daniel Pavel',\n\t\tauthor_email='[email protected]',\n\t\tlicense='GPLv2',\n\t\turl='http://pwr-solaar.github.io/Solaar/',\n\t\tclassifiers=[\n\t\t\t'Development Status :: 4 - Beta',\n\t\t\t'Environment :: X11 Applications :: GTK',\n\t\t\t'Environment :: Console',\n\t\t\t'Intended Audience :: End Users/Desktop',\n\t\t\t'License :: DFSG approved',\n\t\t\t'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n\t\t\t'Natural Language :: English',\n\t\t\t'Programming Language :: Python :: 3 :: Only',\n\t\t\t'Operating System :: POSIX :: Linux',\n\t\t\t'Topic :: Utilities',\n\t\t\t],\n\n\t\tplatforms=['linux'],\n\n\t\t# sudo apt install python-gi python3-gi \\\n\t\t# gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n\t\t# os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n\n\t\tpython_requires='>=3.2',\n\t\tinstall_requires=['pyudev (>= 0.13)', ],\n\t\tpackage_dir={'': 'lib'},\n\t\tpackages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n\t\tdata_files=list(_data_files()),\n\t\tscripts=_glob('bin/*'),\n\t)\n", "path": "setup.py"}]} | 1,606 | 448 |
gh_patches_debug_60773 | rasdani/github-patches | git_diff | data-for-change__anyway-1848 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix walla scraping - see test_scrape_sanity_online_walla
</issue>
<code>
[start of anyway/parsers/rss_sites.py]
1 import requests
2 from bs4 import BeautifulSoup
3 import feedparser
4 from anyway.parsers import timezones
5
6
7 def parse_html_walla(item_rss, html_soup):
8 # For some reason there's html here
9 description = BeautifulSoup(item_rss["summary"], features="lxml").text
10
11 author = html_soup.find("div", class_="author").find("a").get_text()
12 return author, description
13
14
15 def parse_html_ynet(item_rss, html_soup):
16 # This is rather fragile
17 # description_text: "[description] ([author]) [unrelated stuff]"
18 description_text = html_soup.find(id="ArticleBodyComponent").get_text()
19 author = description_text.split("(")[-1].split(")")[0].strip()
20 description = description_text.rsplit("(")[0].strip()
21 return author, description
22
23
24 sites_config = {
25 "ynet": {
26 "rss": "https://www.ynet.co.il:443/Integration/StoryRss1854.xml",
27 "parser": parse_html_ynet,
28 },
29 "walla": {"rss": "https://rss.walla.co.il:443/feed/22", "parser": parse_html_walla},
30 }
31
32
33 def _fetch(url: str) -> str:
34 return requests.get(url).text
35
36
37 def scrape_raw(site_name: str, *, rss_source=None, fetch_html=_fetch):
38 config = sites_config[site_name]
39 if rss_source is None:
40 rss_source = config["rss"]
41 rss_dict = feedparser.parse(rss_source)
42 if rss_dict.get("bozo_exception"):
43 raise rss_dict["bozo_exception"]
44
45 for item_rss in rss_dict["items"]:
46 html_text = fetch_html(item_rss["link"])
47 author, description = config["parser"](item_rss, BeautifulSoup(html_text, "lxml"))
48 yield {
49 "link": item_rss["link"],
50 "date": timezones.from_rss(item_rss["published_parsed"]),
51 "source": site_name,
52 "author": author,
53 "title": item_rss["title"],
54 "description": description,
55 "accident": False,
56 }
57
58
59 def scrape(*args, **kwargs):
60 # lazily load dependencies, so this module will behave like an independent library
61 from anyway.models import NewsFlash
62
63 for dict_item in scrape_raw(*args, **kwargs):
64 yield NewsFlash(**dict_item)
65
[end of anyway/parsers/rss_sites.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/anyway/parsers/rss_sites.py b/anyway/parsers/rss_sites.py
--- a/anyway/parsers/rss_sites.py
+++ b/anyway/parsers/rss_sites.py
@@ -8,7 +8,7 @@
# For some reason there's html here
description = BeautifulSoup(item_rss["summary"], features="lxml").text
- author = html_soup.find("div", class_="author").find("a").get_text()
+ author = html_soup.find("div", class_="author").get_text().strip()
return author, description
| {"golden_diff": "diff --git a/anyway/parsers/rss_sites.py b/anyway/parsers/rss_sites.py\n--- a/anyway/parsers/rss_sites.py\n+++ b/anyway/parsers/rss_sites.py\n@@ -8,7 +8,7 @@\n # For some reason there's html here\n description = BeautifulSoup(item_rss[\"summary\"], features=\"lxml\").text\n \n- author = html_soup.find(\"div\", class_=\"author\").find(\"a\").get_text()\n+ author = html_soup.find(\"div\", class_=\"author\").get_text().strip()\n return author, description\n", "issue": "Fix walla scraping - see test_scrape_sanity_online_walla\n\n", "before_files": [{"content": "import requests\nfrom bs4 import BeautifulSoup\nimport feedparser\nfrom anyway.parsers import timezones\n\n\ndef parse_html_walla(item_rss, html_soup):\n # For some reason there's html here\n description = BeautifulSoup(item_rss[\"summary\"], features=\"lxml\").text\n\n author = html_soup.find(\"div\", class_=\"author\").find(\"a\").get_text()\n return author, description\n\n\ndef parse_html_ynet(item_rss, html_soup):\n # This is rather fragile\n # description_text: \"[description] ([author]) [unrelated stuff]\"\n description_text = html_soup.find(id=\"ArticleBodyComponent\").get_text()\n author = description_text.split(\"(\")[-1].split(\")\")[0].strip()\n description = description_text.rsplit(\"(\")[0].strip()\n return author, description\n\n\nsites_config = {\n \"ynet\": {\n \"rss\": \"https://www.ynet.co.il:443/Integration/StoryRss1854.xml\",\n \"parser\": parse_html_ynet,\n },\n \"walla\": {\"rss\": \"https://rss.walla.co.il:443/feed/22\", \"parser\": parse_html_walla},\n}\n\n\ndef _fetch(url: str) -> str:\n return requests.get(url).text\n\n\ndef scrape_raw(site_name: str, *, rss_source=None, fetch_html=_fetch):\n config = sites_config[site_name]\n if rss_source is None:\n rss_source = config[\"rss\"]\n rss_dict = feedparser.parse(rss_source)\n if rss_dict.get(\"bozo_exception\"):\n raise rss_dict[\"bozo_exception\"]\n\n for item_rss in rss_dict[\"items\"]:\n html_text = fetch_html(item_rss[\"link\"])\n author, description = config[\"parser\"](item_rss, BeautifulSoup(html_text, \"lxml\"))\n yield {\n \"link\": item_rss[\"link\"],\n \"date\": timezones.from_rss(item_rss[\"published_parsed\"]),\n \"source\": site_name,\n \"author\": author,\n \"title\": item_rss[\"title\"],\n \"description\": description,\n \"accident\": False,\n }\n\n\ndef scrape(*args, **kwargs):\n # lazily load dependencies, so this module will behave like an independent library\n from anyway.models import NewsFlash\n\n for dict_item in scrape_raw(*args, **kwargs):\n yield NewsFlash(**dict_item)\n", "path": "anyway/parsers/rss_sites.py"}]} | 1,200 | 130 |
gh_patches_debug_17789 | rasdani/github-patches | git_diff | encode__starlette-1018 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Memory usage streaming large responses
We've been running into memory issues when providing very large async generators to a streaming response. We have these generators producing large (larger than memory set) responses in a way that allows us to only keep small chunks in memory at a time. However, it looks like the BaseHTTPMiddleware implementation uses an asyncio queue to store the individual chunks:
https://github.com/encode/starlette/blob/master/starlette/middleware/base.py#L30
This prevents any network backpressure handling -- if the client that is receiving the streaming response is on a slow connection, the queue will happily grow without bound and consume all memory, triggering kernel out-of-memory, when the ideal handling here would be for send to block (yield) when this happens. I believe this would naturally happen if there were no queue here at all, so I am wondering why it needs to be here?
Would a PR to remove the queueing be accepted?
If not, what is the appropriate way to override this to not use a queue? We can write our own, but the use of BaseHTTPMiddleware is hardcoded: https://github.com/encode/starlette/blob/519f5750b5e797bb3d4805fd29657674304ce397/starlette/applications.py#L197, leaving only some fairly hacky approaches to preventing this queueing.
</issue>
<code>
[start of starlette/middleware/base.py]
1 import asyncio
2 import typing
3
4 from starlette.requests import Request
5 from starlette.responses import Response, StreamingResponse
6 from starlette.types import ASGIApp, Receive, Scope, Send
7
8 RequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]
9 DispatchFunction = typing.Callable[
10 [Request, RequestResponseEndpoint], typing.Awaitable[Response]
11 ]
12
13
14 class BaseHTTPMiddleware:
15 def __init__(self, app: ASGIApp, dispatch: DispatchFunction = None) -> None:
16 self.app = app
17 self.dispatch_func = self.dispatch if dispatch is None else dispatch
18
19 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
20 if scope["type"] != "http":
21 await self.app(scope, receive, send)
22 return
23
24 request = Request(scope, receive=receive)
25 response = await self.dispatch_func(request, self.call_next)
26 await response(scope, receive, send)
27
28 async def call_next(self, request: Request) -> Response:
29 loop = asyncio.get_event_loop()
30 queue = asyncio.Queue() # type: asyncio.Queue
31
32 scope = request.scope
33 receive = request.receive
34 send = queue.put
35
36 async def coro() -> None:
37 try:
38 await self.app(scope, receive, send)
39 finally:
40 await queue.put(None)
41
42 task = loop.create_task(coro())
43 message = await queue.get()
44 if message is None:
45 task.result()
46 raise RuntimeError("No response returned.")
47 assert message["type"] == "http.response.start"
48
49 async def body_stream() -> typing.AsyncGenerator[bytes, None]:
50 while True:
51 message = await queue.get()
52 if message is None:
53 break
54 assert message["type"] == "http.response.body"
55 yield message.get("body", b"")
56 task.result()
57
58 response = StreamingResponse(
59 status_code=message["status"], content=body_stream()
60 )
61 response.raw_headers = message["headers"]
62 return response
63
64 async def dispatch(
65 self, request: Request, call_next: RequestResponseEndpoint
66 ) -> Response:
67 raise NotImplementedError() # pragma: no cover
68
[end of starlette/middleware/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/starlette/middleware/base.py b/starlette/middleware/base.py
--- a/starlette/middleware/base.py
+++ b/starlette/middleware/base.py
@@ -3,7 +3,7 @@
from starlette.requests import Request
from starlette.responses import Response, StreamingResponse
-from starlette.types import ASGIApp, Receive, Scope, Send
+from starlette.types import ASGIApp, Message, Receive, Scope, Send
RequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]
DispatchFunction = typing.Callable[
@@ -27,7 +27,7 @@
async def call_next(self, request: Request) -> Response:
loop = asyncio.get_event_loop()
- queue = asyncio.Queue() # type: asyncio.Queue
+ queue: "asyncio.Queue[typing.Optional[Message]]" = asyncio.Queue(maxsize=1)
scope = request.scope
receive = request.receive
| {"golden_diff": "diff --git a/starlette/middleware/base.py b/starlette/middleware/base.py\n--- a/starlette/middleware/base.py\n+++ b/starlette/middleware/base.py\n@@ -3,7 +3,7 @@\n \n from starlette.requests import Request\n from starlette.responses import Response, StreamingResponse\n-from starlette.types import ASGIApp, Receive, Scope, Send\n+from starlette.types import ASGIApp, Message, Receive, Scope, Send\n \n RequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]\n DispatchFunction = typing.Callable[\n@@ -27,7 +27,7 @@\n \n async def call_next(self, request: Request) -> Response:\n loop = asyncio.get_event_loop()\n- queue = asyncio.Queue() # type: asyncio.Queue\n+ queue: \"asyncio.Queue[typing.Optional[Message]]\" = asyncio.Queue(maxsize=1)\n \n scope = request.scope\n receive = request.receive\n", "issue": "Memory usage streaming large responses\nWe've been running into memory issues when providing very large async generators to a streaming response. We have these generators producing large (larger than memory set) responses in a way that allows us to only keep small chunks in memory at a time. However, it looks like the BaseHTTPMiddleware implementation uses an asyncio queue to store the individual chunks:\r\n\r\nhttps://github.com/encode/starlette/blob/master/starlette/middleware/base.py#L30\r\n\r\nThis prevents any network backpressure handling -- if the client that is receiving the streaming response is on a slow connection, the queue will happily grow without bound and consume all memory, triggering kernel out-of-memory, when the ideal handling here would be for send to block (yield) when this happens. I believe this would naturally happen if there were no queue here at all, so I am wondering why it needs to be here?\r\n\r\nWould a PR to remove the queueing be accepted?\r\n\r\nIf not, what is the appropriate way to override this to not use a queue? We can write our own, but the use of BaseHTTPMiddleware is hardcoded: https://github.com/encode/starlette/blob/519f5750b5e797bb3d4805fd29657674304ce397/starlette/applications.py#L197, leaving only some fairly hacky approaches to preventing this queueing.\n", "before_files": [{"content": "import asyncio\nimport typing\n\nfrom starlette.requests import Request\nfrom starlette.responses import Response, StreamingResponse\nfrom starlette.types import ASGIApp, Receive, Scope, Send\n\nRequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]\nDispatchFunction = typing.Callable[\n [Request, RequestResponseEndpoint], typing.Awaitable[Response]\n]\n\n\nclass BaseHTTPMiddleware:\n def __init__(self, app: ASGIApp, dispatch: DispatchFunction = None) -> None:\n self.app = app\n self.dispatch_func = self.dispatch if dispatch is None else dispatch\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n if scope[\"type\"] != \"http\":\n await self.app(scope, receive, send)\n return\n\n request = Request(scope, receive=receive)\n response = await self.dispatch_func(request, self.call_next)\n await response(scope, receive, send)\n\n async def call_next(self, request: Request) -> Response:\n loop = asyncio.get_event_loop()\n queue = asyncio.Queue() # type: asyncio.Queue\n\n scope = request.scope\n receive = request.receive\n send = queue.put\n\n async def coro() -> None:\n try:\n await self.app(scope, receive, send)\n finally:\n await queue.put(None)\n\n task = loop.create_task(coro())\n message = await queue.get()\n if message is None:\n task.result()\n raise RuntimeError(\"No response returned.\")\n assert message[\"type\"] == \"http.response.start\"\n\n async def body_stream() -> typing.AsyncGenerator[bytes, None]:\n while True:\n message = await queue.get()\n if message is None:\n break\n assert message[\"type\"] == \"http.response.body\"\n yield message.get(\"body\", b\"\")\n task.result()\n\n response = StreamingResponse(\n status_code=message[\"status\"], content=body_stream()\n )\n response.raw_headers = message[\"headers\"]\n return response\n\n async def dispatch(\n self, request: Request, call_next: RequestResponseEndpoint\n ) -> Response:\n raise NotImplementedError() # pragma: no cover\n", "path": "starlette/middleware/base.py"}]} | 1,443 | 207 |
gh_patches_debug_335 | rasdani/github-patches | git_diff | pymodbus-dev__pymodbus-1395 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip show pymodbus, misses information.
```
pymodbus) pymodbus % pip show pymodbus
Name: pymodbus
Version: 3.1.x
Summary: A fully featured modbus protocol stack in python
Home-page: https://github.com/pymodbus-dev/pymodbus/
Author: attr: pymodbus.__author__
Author-email:
License: BSD-3-Clause
Location: /Users/jan/repos/pymodbus
Editable project location: /Users/jan/repos/pymodbus
Requires: setuptools
Required-by:
```
Normally it gets the information from setup.cfg, but for some reason it does not work with "pip show".
</issue>
<code>
[start of pymodbus/__init__.py]
1 """Pymodbus: Modbus Protocol Implementation.
2
3 Released under the the BSD license
4 """
5
6 from logging import WARNING
7
8 import pymodbus.version as __version
9 from pymodbus.logging import Log
10
11
12 __version__ = __version.version.short()
13 __author__ = "Galen Collins"
14 __maintainer__ = "dhoomakethu, janiversen"
15
16
17 def pymodbus_apply_logging_config(level=WARNING):
18 """Apply basic logging configuration used by default by Pymodbus maintainers.
19
20 Please call this function to format logging appropriately when opening issues.
21 """
22 Log.apply_logging_config(level)
23
[end of pymodbus/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pymodbus/__init__.py b/pymodbus/__init__.py
--- a/pymodbus/__init__.py
+++ b/pymodbus/__init__.py
@@ -10,7 +10,7 @@
__version__ = __version.version.short()
-__author__ = "Galen Collins"
+__author__ = "Galen Collins, Jan Iversen"
__maintainer__ = "dhoomakethu, janiversen"
| {"golden_diff": "diff --git a/pymodbus/__init__.py b/pymodbus/__init__.py\n--- a/pymodbus/__init__.py\n+++ b/pymodbus/__init__.py\n@@ -10,7 +10,7 @@\n \n \n __version__ = __version.version.short()\n-__author__ = \"Galen Collins\"\n+__author__ = \"Galen Collins, Jan Iversen\"\n __maintainer__ = \"dhoomakethu, janiversen\"\n", "issue": "pip show pymodbus, misses information.\n```\r\npymodbus) pymodbus % pip show pymodbus\r\n\r\nName: pymodbus\r\nVersion: 3.1.x\r\nSummary: A fully featured modbus protocol stack in python\r\nHome-page: https://github.com/pymodbus-dev/pymodbus/\r\nAuthor: attr: pymodbus.__author__\r\nAuthor-email: \r\nLicense: BSD-3-Clause\r\nLocation: /Users/jan/repos/pymodbus\r\nEditable project location: /Users/jan/repos/pymodbus\r\nRequires: setuptools\r\nRequired-by: \r\n```\r\nNormally it gets the information from setup.cfg, but for some reason it does not work with \"pip show\".\n", "before_files": [{"content": "\"\"\"Pymodbus: Modbus Protocol Implementation.\n\nReleased under the the BSD license\n\"\"\"\n\nfrom logging import WARNING\n\nimport pymodbus.version as __version\nfrom pymodbus.logging import Log\n\n\n__version__ = __version.version.short()\n__author__ = \"Galen Collins\"\n__maintainer__ = \"dhoomakethu, janiversen\"\n\n\ndef pymodbus_apply_logging_config(level=WARNING):\n \"\"\"Apply basic logging configuration used by default by Pymodbus maintainers.\n\n Please call this function to format logging appropriately when opening issues.\n \"\"\"\n Log.apply_logging_config(level)\n", "path": "pymodbus/__init__.py"}]} | 859 | 108 |
gh_patches_debug_33263 | rasdani/github-patches | git_diff | kserve__kserve-2684 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GPL License Violation in the kserve python package
The version of the `kserve` package that is currently on PyPI (version `0.10`) violates the GPL license because it depends on [`table-logger`](https://github.com/AleksTk/table-logger), distributed under GPLv2 (you'll see that the library is now MIT, the author updated the license just a few days ago, but hasn't released a new version with the new license yet). No GPLv2 packages should be imported given that `kserve` has an Apache 2 license.
This was recently fixed by this PR https://github.com/kserve/kserve/pull/2673, which accidentally resolved the issue by replacing `table-logger` with `tabulate` (MIT License)
cc @yuzisun @cliveseldon @jinchihe @ellistarn
Is it possible to quickly release a patch release `0.10.1` to include the above patch and make sure `kserve` is compliant with the Apache license? As it stands, any distribution and vendor using `kserve` is liable for a license violation.
</issue>
<code>
[start of python/kserve/kserve/api/watch.py]
1 # Copyright 2021 The KServe Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import time
16 from kubernetes import client
17 from kubernetes import watch as k8s_watch
18 from table_logger import TableLogger
19
20 from ..constants import constants
21 from ..utils import utils
22
23
24 def isvc_watch(name=None, namespace=None, timeout_seconds=600, generation=0):
25 """Watch the created or patched InferenceService in the specified namespace"""
26
27 if namespace is None:
28 namespace = utils.get_default_target_namespace()
29
30 tbl = TableLogger(
31 columns='NAME,READY,PREV,LATEST,URL',
32 colwidth={'NAME': 20, 'READY': 10, 'PREV': 25, 'LATEST': 25, 'URL': 65},
33 border=False)
34
35 stream = k8s_watch.Watch().stream(
36 client.CustomObjectsApi().list_namespaced_custom_object,
37 constants.KSERVE_GROUP,
38 constants.KSERVE_V1BETA1_VERSION,
39 namespace,
40 constants.KSERVE_PLURAL,
41 timeout_seconds=timeout_seconds)
42
43 for event in stream:
44 isvc = event['object']
45 isvc_name = isvc['metadata']['name']
46 if name and name != isvc_name:
47 continue
48 else:
49 status = 'Unknown'
50 if isvc.get('status', ''):
51 url = isvc['status'].get('url', '')
52 traffic = isvc['status'].get('components', {}).get(
53 'predictor', {}).get('traffic', [])
54 traffic_percent = 100
55 if constants.OBSERVED_GENERATION in isvc['status']:
56 observed_generation = isvc['status'][constants.OBSERVED_GENERATION]
57 for t in traffic:
58 if t["latestRevision"]:
59 traffic_percent = t["percent"]
60
61 if generation != 0 and observed_generation != generation:
62 continue
63 for condition in isvc['status'].get('conditions', {}):
64 if condition.get('type', '') == 'Ready':
65 status = condition.get('status', 'Unknown')
66 tbl(isvc_name, status, 100-traffic_percent, traffic_percent, url)
67 if status == 'True':
68 break
69
70 else:
71 tbl(isvc_name, status, '', '', '')
72 # Sleep 2 to avoid status section is not generated within a very short time.
73 time.sleep(2)
74 continue
75
[end of python/kserve/kserve/api/watch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/kserve/kserve/api/watch.py b/python/kserve/kserve/api/watch.py
--- a/python/kserve/kserve/api/watch.py
+++ b/python/kserve/kserve/api/watch.py
@@ -13,9 +13,10 @@
# limitations under the License.
import time
+
from kubernetes import client
from kubernetes import watch as k8s_watch
-from table_logger import TableLogger
+from tabulate import tabulate
from ..constants import constants
from ..utils import utils
@@ -27,10 +28,8 @@
if namespace is None:
namespace = utils.get_default_target_namespace()
- tbl = TableLogger(
- columns='NAME,READY,PREV,LATEST,URL',
- colwidth={'NAME': 20, 'READY': 10, 'PREV': 25, 'LATEST': 25, 'URL': 65},
- border=False)
+ headers = ['NAME', 'READY', 'PREV', 'LATEST', 'URL']
+ table_fmt = 'plain'
stream = k8s_watch.Watch().stream(
client.CustomObjectsApi().list_namespaced_custom_object,
@@ -63,12 +62,13 @@
for condition in isvc['status'].get('conditions', {}):
if condition.get('type', '') == 'Ready':
status = condition.get('status', 'Unknown')
- tbl(isvc_name, status, 100-traffic_percent, traffic_percent, url)
+ print(tabulate([[isvc_name, status, 100 - traffic_percent, traffic_percent, url]],
+ headers=headers, tablefmt=table_fmt))
if status == 'True':
break
else:
- tbl(isvc_name, status, '', '', '')
+ print(tabulate([[isvc_name, status, '', '', '']], headers=headers, tablefmt=table_fmt))
# Sleep 2 to avoid status section is not generated within a very short time.
time.sleep(2)
continue
| {"golden_diff": "diff --git a/python/kserve/kserve/api/watch.py b/python/kserve/kserve/api/watch.py\n--- a/python/kserve/kserve/api/watch.py\n+++ b/python/kserve/kserve/api/watch.py\n@@ -13,9 +13,10 @@\n # limitations under the License.\n \n import time\n+\n from kubernetes import client\n from kubernetes import watch as k8s_watch\n-from table_logger import TableLogger\n+from tabulate import tabulate\n \n from ..constants import constants\n from ..utils import utils\n@@ -27,10 +28,8 @@\n if namespace is None:\n namespace = utils.get_default_target_namespace()\n \n- tbl = TableLogger(\n- columns='NAME,READY,PREV,LATEST,URL',\n- colwidth={'NAME': 20, 'READY': 10, 'PREV': 25, 'LATEST': 25, 'URL': 65},\n- border=False)\n+ headers = ['NAME', 'READY', 'PREV', 'LATEST', 'URL']\n+ table_fmt = 'plain'\n \n stream = k8s_watch.Watch().stream(\n client.CustomObjectsApi().list_namespaced_custom_object,\n@@ -63,12 +62,13 @@\n for condition in isvc['status'].get('conditions', {}):\n if condition.get('type', '') == 'Ready':\n status = condition.get('status', 'Unknown')\n- tbl(isvc_name, status, 100-traffic_percent, traffic_percent, url)\n+ print(tabulate([[isvc_name, status, 100 - traffic_percent, traffic_percent, url]],\n+ headers=headers, tablefmt=table_fmt))\n if status == 'True':\n break\n \n else:\n- tbl(isvc_name, status, '', '', '')\n+ print(tabulate([[isvc_name, status, '', '', '']], headers=headers, tablefmt=table_fmt))\n # Sleep 2 to avoid status section is not generated within a very short time.\n time.sleep(2)\n continue\n", "issue": "GPL License Violation in the kserve python package\nThe version of the `kserve` package that is currently on PyPI (version `0.10`) violates the GPL license because it depends on [`table-logger`](https://github.com/AleksTk/table-logger), distributed under GPLv2 (you'll see that the library is now MIT, the author updated the license just a few days ago, but hasn't released a new version with the new license yet). No GPLv2 packages should be imported given that `kserve` has an Apache 2 license.\r\n\r\n\r\nThis was recently fixed by this PR https://github.com/kserve/kserve/pull/2673, which accidentally resolved the issue by replacing `table-logger` with `tabulate` (MIT License)\r\n\r\ncc @yuzisun @cliveseldon @jinchihe @ellistarn \r\n\r\nIs it possible to quickly release a patch release `0.10.1` to include the above patch and make sure `kserve` is compliant with the Apache license? As it stands, any distribution and vendor using `kserve` is liable for a license violation.\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2021 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport time\nfrom kubernetes import client\nfrom kubernetes import watch as k8s_watch\nfrom table_logger import TableLogger\n\nfrom ..constants import constants\nfrom ..utils import utils\n\n\ndef isvc_watch(name=None, namespace=None, timeout_seconds=600, generation=0):\n \"\"\"Watch the created or patched InferenceService in the specified namespace\"\"\"\n\n if namespace is None:\n namespace = utils.get_default_target_namespace()\n\n tbl = TableLogger(\n columns='NAME,READY,PREV,LATEST,URL',\n colwidth={'NAME': 20, 'READY': 10, 'PREV': 25, 'LATEST': 25, 'URL': 65},\n border=False)\n\n stream = k8s_watch.Watch().stream(\n client.CustomObjectsApi().list_namespaced_custom_object,\n constants.KSERVE_GROUP,\n constants.KSERVE_V1BETA1_VERSION,\n namespace,\n constants.KSERVE_PLURAL,\n timeout_seconds=timeout_seconds)\n\n for event in stream:\n isvc = event['object']\n isvc_name = isvc['metadata']['name']\n if name and name != isvc_name:\n continue\n else:\n status = 'Unknown'\n if isvc.get('status', ''):\n url = isvc['status'].get('url', '')\n traffic = isvc['status'].get('components', {}).get(\n 'predictor', {}).get('traffic', [])\n traffic_percent = 100\n if constants.OBSERVED_GENERATION in isvc['status']:\n observed_generation = isvc['status'][constants.OBSERVED_GENERATION]\n for t in traffic:\n if t[\"latestRevision\"]:\n traffic_percent = t[\"percent\"]\n\n if generation != 0 and observed_generation != generation:\n continue\n for condition in isvc['status'].get('conditions', {}):\n if condition.get('type', '') == 'Ready':\n status = condition.get('status', 'Unknown')\n tbl(isvc_name, status, 100-traffic_percent, traffic_percent, url)\n if status == 'True':\n break\n\n else:\n tbl(isvc_name, status, '', '', '')\n # Sleep 2 to avoid status section is not generated within a very short time.\n time.sleep(2)\n continue\n", "path": "python/kserve/kserve/api/watch.py"}]} | 1,563 | 451 |
gh_patches_debug_27026 | rasdani/github-patches | git_diff | DataDog__integrations-core-619 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
postfix integration should not require sudo to root
Reading the source code to integrations-core/postfix/check.py I note that it does a sudo to root to run the find command.
This is noted in the docs / comments :
> WARNING: the user that dd-agent runs as must have sudo access for the 'find' command
> --
> | sudo access is not required when running dd-agent as root (not recommended)
> |
> | example /etc/sudoers entry:
> | dd-agent ALL=(ALL) NOPASSWD:/usr/bin/find /var/spool/postfix* -type f
root should not be required here - postfix user should be sufficient. That would be combined with a '-u postfix' on line 64's sudo command to allow this to work.
This is a concern because find has a -exec parameter and your command list has a wildcard in it - this could be used to run arbitrary commands as root if the dd-agent user is compromised.
</issue>
<code>
[start of postfix/check.py]
1 # (C) Datadog, Inc. 2013-2016
2 # (C) Josiah C Webb <[email protected]> 2013
3 # All rights reserved
4 # Licensed under Simplified BSD License (see LICENSE)
5
6 # stdlib
7 import os
8
9 # project
10 from checks import AgentCheck
11 from utils.subprocess_output import get_subprocess_output
12
13 class PostfixCheck(AgentCheck):
14 """This check provides metrics on the number of messages in a given postfix queue
15
16 WARNING: the user that dd-agent runs as must have sudo access for the 'find' command
17 sudo access is not required when running dd-agent as root (not recommended)
18
19 example /etc/sudoers entry:
20 dd-agent ALL=(ALL) NOPASSWD:/usr/bin/find /var/spool/postfix* -type f
21
22 YAML config options:
23 "directory" - the value of 'postconf -h queue_directory'
24 "queues" - the postfix mail queues you would like to get message count totals for
25 """
26 def check(self, instance):
27 config = self._get_config(instance)
28
29 directory = config['directory']
30 queues = config['queues']
31 tags = config['tags']
32
33 self._get_queue_count(directory, queues, tags)
34
35 def _get_config(self, instance):
36 directory = instance.get('directory', None)
37 queues = instance.get('queues', None)
38 tags = instance.get('tags', [])
39 if not queues or not directory:
40 raise Exception('missing required yaml config entry')
41
42 instance_config = {
43 'directory': directory,
44 'queues': queues,
45 'tags': tags,
46 }
47
48 return instance_config
49
50 def _get_queue_count(self, directory, queues, tags):
51 for queue in queues:
52 queue_path = os.path.join(directory, queue)
53 if not os.path.exists(queue_path):
54 raise Exception('%s does not exist' % queue_path)
55
56 count = 0
57 if os.geteuid() == 0:
58 # dd-agent is running as root (not recommended)
59 count = sum(len(files) for root, dirs, files in os.walk(queue_path))
60 else:
61 # can dd-agent user run sudo?
62 test_sudo = os.system('setsid sudo -l < /dev/null')
63 if test_sudo == 0:
64 output, _, _ = get_subprocess_output(['sudo', 'find', queue_path, '-type', 'f'], self.log, False)
65 count = len(output.splitlines())
66 else:
67 raise Exception('The dd-agent user does not have sudo access')
68
69 # emit an individually tagged metric
70 self.gauge('postfix.queue.size', count, tags=tags + ['queue:%s' % queue, 'instance:%s' % os.path.basename(directory)])
71
72 # these can be retrieved in a single graph statement
73 # for example:
74 # sum:postfix.queue.size{instance:postfix-2,queue:incoming,host:hostname.domain.tld}
75
[end of postfix/check.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/postfix/check.py b/postfix/check.py
--- a/postfix/check.py
+++ b/postfix/check.py
@@ -17,7 +17,9 @@
sudo access is not required when running dd-agent as root (not recommended)
example /etc/sudoers entry:
- dd-agent ALL=(ALL) NOPASSWD:/usr/bin/find /var/spool/postfix* -type f
+ dd-agent ALL=(postfix) NOPASSWD:/usr/bin/find /var/spool/postfix/incoming -type f
+ dd-agent ALL=(postfix) NOPASSWD:/usr/bin/find /var/spool/postfix/active -type f
+ dd-agent ALL=(postfix) NOPASSWD:/usr/bin/find /var/spool/postfix/deferred -type f
YAML config options:
"directory" - the value of 'postconf -h queue_directory'
@@ -61,7 +63,9 @@
# can dd-agent user run sudo?
test_sudo = os.system('setsid sudo -l < /dev/null')
if test_sudo == 0:
- output, _, _ = get_subprocess_output(['sudo', 'find', queue_path, '-type', 'f'], self.log, False)
+ # default to `root` for backward compatibility
+ postfix_user = self.init_config.get('postfix_user', 'root')
+ output, _, _ = get_subprocess_output(['sudo', '-u', postfix_user, 'find', queue_path, '-type', 'f'], self.log, False)
count = len(output.splitlines())
else:
raise Exception('The dd-agent user does not have sudo access')
| {"golden_diff": "diff --git a/postfix/check.py b/postfix/check.py\n--- a/postfix/check.py\n+++ b/postfix/check.py\n@@ -17,7 +17,9 @@\n sudo access is not required when running dd-agent as root (not recommended)\n \n example /etc/sudoers entry:\n- dd-agent ALL=(ALL) NOPASSWD:/usr/bin/find /var/spool/postfix* -type f\n+ dd-agent ALL=(postfix) NOPASSWD:/usr/bin/find /var/spool/postfix/incoming -type f\n+ dd-agent ALL=(postfix) NOPASSWD:/usr/bin/find /var/spool/postfix/active -type f\n+ dd-agent ALL=(postfix) NOPASSWD:/usr/bin/find /var/spool/postfix/deferred -type f\n \n YAML config options:\n \"directory\" - the value of 'postconf -h queue_directory'\n@@ -61,7 +63,9 @@\n # can dd-agent user run sudo?\n test_sudo = os.system('setsid sudo -l < /dev/null')\n if test_sudo == 0:\n- output, _, _ = get_subprocess_output(['sudo', 'find', queue_path, '-type', 'f'], self.log, False)\n+ # default to `root` for backward compatibility\n+ postfix_user = self.init_config.get('postfix_user', 'root')\n+ output, _, _ = get_subprocess_output(['sudo', '-u', postfix_user, 'find', queue_path, '-type', 'f'], self.log, False)\n count = len(output.splitlines())\n else:\n raise Exception('The dd-agent user does not have sudo access')\n", "issue": "postfix integration should not require sudo to root\nReading the source code to integrations-core/postfix/check.py I note that it does a sudo to root to run the find command.\r\n\r\nThis is noted in the docs / comments :\r\n\r\n> WARNING: the user that dd-agent runs as must have sudo access for the 'find' command\r\n> --\r\n> \u00a0 | sudo access is not required when running dd-agent as root (not recommended)\r\n> \u00a0 | \u00a0\r\n> \u00a0 | example /etc/sudoers entry:\r\n> \u00a0 | dd-agent ALL=(ALL) NOPASSWD:/usr/bin/find /var/spool/postfix* -type f\r\n\r\nroot should not be required here - postfix user should be sufficient. That would be combined with a '-u postfix' on line 64's sudo command to allow this to work.\r\n\r\nThis is a concern because find has a -exec parameter and your command list has a wildcard in it - this could be used to run arbitrary commands as root if the dd-agent user is compromised.\r\n\n", "before_files": [{"content": "# (C) Datadog, Inc. 2013-2016\n# (C) Josiah C Webb <[email protected]> 2013\n# All rights reserved\n# Licensed under Simplified BSD License (see LICENSE)\n\n# stdlib\nimport os\n\n# project\nfrom checks import AgentCheck\nfrom utils.subprocess_output import get_subprocess_output\n\nclass PostfixCheck(AgentCheck):\n \"\"\"This check provides metrics on the number of messages in a given postfix queue\n\n WARNING: the user that dd-agent runs as must have sudo access for the 'find' command\n sudo access is not required when running dd-agent as root (not recommended)\n\n example /etc/sudoers entry:\n dd-agent ALL=(ALL) NOPASSWD:/usr/bin/find /var/spool/postfix* -type f\n\n YAML config options:\n \"directory\" - the value of 'postconf -h queue_directory'\n \"queues\" - the postfix mail queues you would like to get message count totals for\n \"\"\"\n def check(self, instance):\n config = self._get_config(instance)\n\n directory = config['directory']\n queues = config['queues']\n tags = config['tags']\n\n self._get_queue_count(directory, queues, tags)\n\n def _get_config(self, instance):\n directory = instance.get('directory', None)\n queues = instance.get('queues', None)\n tags = instance.get('tags', [])\n if not queues or not directory:\n raise Exception('missing required yaml config entry')\n\n instance_config = {\n 'directory': directory,\n 'queues': queues,\n 'tags': tags,\n }\n\n return instance_config\n\n def _get_queue_count(self, directory, queues, tags):\n for queue in queues:\n queue_path = os.path.join(directory, queue)\n if not os.path.exists(queue_path):\n raise Exception('%s does not exist' % queue_path)\n\n count = 0\n if os.geteuid() == 0:\n # dd-agent is running as root (not recommended)\n count = sum(len(files) for root, dirs, files in os.walk(queue_path))\n else:\n # can dd-agent user run sudo?\n test_sudo = os.system('setsid sudo -l < /dev/null')\n if test_sudo == 0:\n output, _, _ = get_subprocess_output(['sudo', 'find', queue_path, '-type', 'f'], self.log, False)\n count = len(output.splitlines())\n else:\n raise Exception('The dd-agent user does not have sudo access')\n\n # emit an individually tagged metric\n self.gauge('postfix.queue.size', count, tags=tags + ['queue:%s' % queue, 'instance:%s' % os.path.basename(directory)])\n\n # these can be retrieved in a single graph statement\n # for example:\n # sum:postfix.queue.size{instance:postfix-2,queue:incoming,host:hostname.domain.tld}\n", "path": "postfix/check.py"}]} | 1,539 | 369 |
gh_patches_debug_15930 | rasdani/github-patches | git_diff | conan-io__conan-4591 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[question][suggestion] Listing profile should be recursive
Hello,
I use multiple profiles, and those are organized in subdirectories:
+ `~/.conan/profiles/application/x64_gcc6_app1`
+ `~/.conan/profiles/application/x64_msvc_app1`
+ `~/.conan/profiles/compilers/x64_gcc6`
+ `~/.conan/profiles/compilers/x64_msvc`
The "applications" profile include other profiles etc. This works pretty well, so I assume
using subdirectory in profiles is supported and is not a problem.
However, the `conan profile list` command does not list profiles contained in subdirectories.
I believe it should recursively search for profile files rather than only list the files available directly in the `~/.conan/profiles` directory.
I'm wondering if there is a particular reason why the search is limited to the `~/.conan/profiles` directory and if you'd be open to changing this behavior.
</issue>
<code>
[start of conans/client/cmd/profile.py]
1 import os
2
3 from conans.client.conf.detect import detect_defaults_settings
4 from conans.client.profile_loader import get_profile_path, read_profile
5 from conans.errors import ConanException
6 from conans.model.options import OptionsValues
7 from conans.model.profile import Profile
8 from conans.unicode import get_cwd
9 from conans.util.files import save
10
11
12 def _get_profile_keys(key):
13 # settings.compiler.version => settings, compiler.version
14 tmp = key.split(".")
15 first_key = tmp[0]
16 rest_key = ".".join(tmp[1:]) if len(tmp) > 1 else None
17 if first_key not in ("build_requires", "settings", "options", "env"):
18 raise ConanException("Invalid specified key: %s" % key)
19
20 return first_key, rest_key
21
22
23 def cmd_profile_list(cache_profiles_path, output):
24 folder = cache_profiles_path
25 if os.path.exists(folder):
26 return [name for name in os.listdir(folder)
27 if not os.path.isdir(os.path.join(folder, name))]
28 else:
29 output.info("No profiles defined")
30 return []
31
32
33 def cmd_profile_create(profile_name, cache_profiles_path, output, detect=False):
34 profile_path = get_profile_path(profile_name, cache_profiles_path, get_cwd(),
35 exists=False)
36 if os.path.exists(profile_path):
37 raise ConanException("Profile already exists")
38
39 profile = Profile()
40 if detect:
41 settings = detect_defaults_settings(output)
42 for name, value in settings:
43 profile.settings[name] = value
44
45 contents = profile.dumps()
46 save(profile_path, contents)
47
48 if detect:
49 output.info("Profile created with detected settings: %s" % profile_path)
50 else:
51 output.info("Empty profile created: %s" % profile_path)
52 return profile_path
53
54
55 def cmd_profile_update(profile_name, key, value, cache_profiles_path):
56 first_key, rest_key = _get_profile_keys(key)
57
58 profile, _ = read_profile(profile_name, get_cwd(), cache_profiles_path)
59 if first_key == "settings":
60 profile.settings[rest_key] = value
61 elif first_key == "options":
62 tmp = OptionsValues([(rest_key, value)])
63 profile.options.update(tmp)
64 elif first_key == "env":
65 profile.env_values.update_replace(rest_key, value)
66 elif first_key == "build_requires":
67 raise ConanException("Edit the profile manually to change the build_requires")
68
69 contents = profile.dumps()
70 profile_path = get_profile_path(profile_name, cache_profiles_path, get_cwd())
71 save(profile_path, contents)
72
73
74 def cmd_profile_get(profile_name, key, cache_profiles_path):
75 first_key, rest_key = _get_profile_keys(key)
76 profile, _ = read_profile(profile_name, get_cwd(), cache_profiles_path)
77 try:
78 if first_key == "settings":
79 return profile.settings[rest_key]
80 elif first_key == "options":
81 return dict(profile.options.as_list())[rest_key]
82 elif first_key == "env":
83 package = None
84 var = rest_key
85 if ":" in rest_key:
86 package, var = rest_key.split(":")
87 return profile.env_values.data[package][var]
88 elif first_key == "build_requires":
89 raise ConanException("List the profile manually to see the build_requires")
90 except KeyError:
91 raise ConanException("Key not found: '%s'" % key)
92
93
94 def cmd_profile_delete_key(profile_name, key, cache_profiles_path):
95 first_key, rest_key = _get_profile_keys(key)
96 profile, _ = read_profile(profile_name, get_cwd(), cache_profiles_path)
97
98 try:
99 package, name = rest_key.split(":")
100 except ValueError:
101 package = None
102 name = rest_key
103
104 try:
105 if first_key == "settings":
106 del profile.settings[rest_key]
107 elif first_key == "options":
108 profile.options.remove(name, package)
109 elif first_key == "env":
110 profile.env_values.remove(name, package)
111 elif first_key == "build_requires":
112 raise ConanException("Edit the profile manually to delete a build_require")
113 except KeyError:
114 raise ConanException("Profile key '%s' doesn't exist" % key)
115
116 contents = profile.dumps()
117 profile_path = get_profile_path(profile_name, cache_profiles_path, get_cwd())
118 save(profile_path, contents)
119
[end of conans/client/cmd/profile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conans/client/cmd/profile.py b/conans/client/cmd/profile.py
--- a/conans/client/cmd/profile.py
+++ b/conans/client/cmd/profile.py
@@ -21,13 +21,18 @@
def cmd_profile_list(cache_profiles_path, output):
- folder = cache_profiles_path
- if os.path.exists(folder):
- return [name for name in os.listdir(folder)
- if not os.path.isdir(os.path.join(folder, name))]
- else:
+ profiles = []
+ if os.path.exists(cache_profiles_path):
+ for current_directory, _, files in os.walk(cache_profiles_path, followlinks=True):
+ for filename in files:
+ rel_path = os.path.relpath(os.path.join(current_directory, filename),
+ cache_profiles_path)
+ profiles.append(rel_path)
+
+ if not profiles:
output.info("No profiles defined")
- return []
+ profiles.sort()
+ return profiles
def cmd_profile_create(profile_name, cache_profiles_path, output, detect=False):
| {"golden_diff": "diff --git a/conans/client/cmd/profile.py b/conans/client/cmd/profile.py\n--- a/conans/client/cmd/profile.py\n+++ b/conans/client/cmd/profile.py\n@@ -21,13 +21,18 @@\n \n \n def cmd_profile_list(cache_profiles_path, output):\n- folder = cache_profiles_path\n- if os.path.exists(folder):\n- return [name for name in os.listdir(folder)\n- if not os.path.isdir(os.path.join(folder, name))]\n- else:\n+ profiles = []\n+ if os.path.exists(cache_profiles_path):\n+ for current_directory, _, files in os.walk(cache_profiles_path, followlinks=True):\n+ for filename in files:\n+ rel_path = os.path.relpath(os.path.join(current_directory, filename),\n+ cache_profiles_path)\n+ profiles.append(rel_path)\n+\n+ if not profiles:\n output.info(\"No profiles defined\")\n- return []\n+ profiles.sort()\n+ return profiles\n \n \n def cmd_profile_create(profile_name, cache_profiles_path, output, detect=False):\n", "issue": "[question][suggestion] Listing profile should be recursive\nHello,\r\n\r\nI use multiple profiles, and those are organized in subdirectories:\r\n + `~/.conan/profiles/application/x64_gcc6_app1`\r\n + `~/.conan/profiles/application/x64_msvc_app1`\r\n + `~/.conan/profiles/compilers/x64_gcc6`\r\n + `~/.conan/profiles/compilers/x64_msvc`\r\n\r\nThe \"applications\" profile include other profiles etc. This works pretty well, so I assume \r\nusing subdirectory in profiles is supported and is not a problem.\r\n\r\nHowever, the `conan profile list` command does not list profiles contained in subdirectories.\r\nI believe it should recursively search for profile files rather than only list the files available directly in the `~/.conan/profiles` directory.\r\n\r\nI'm wondering if there is a particular reason why the search is limited to the `~/.conan/profiles` directory and if you'd be open to changing this behavior.\r\n\n", "before_files": [{"content": "import os\n\nfrom conans.client.conf.detect import detect_defaults_settings\nfrom conans.client.profile_loader import get_profile_path, read_profile\nfrom conans.errors import ConanException\nfrom conans.model.options import OptionsValues\nfrom conans.model.profile import Profile\nfrom conans.unicode import get_cwd\nfrom conans.util.files import save\n\n\ndef _get_profile_keys(key):\n # settings.compiler.version => settings, compiler.version\n tmp = key.split(\".\")\n first_key = tmp[0]\n rest_key = \".\".join(tmp[1:]) if len(tmp) > 1 else None\n if first_key not in (\"build_requires\", \"settings\", \"options\", \"env\"):\n raise ConanException(\"Invalid specified key: %s\" % key)\n\n return first_key, rest_key\n\n\ndef cmd_profile_list(cache_profiles_path, output):\n folder = cache_profiles_path\n if os.path.exists(folder):\n return [name for name in os.listdir(folder)\n if not os.path.isdir(os.path.join(folder, name))]\n else:\n output.info(\"No profiles defined\")\n return []\n\n\ndef cmd_profile_create(profile_name, cache_profiles_path, output, detect=False):\n profile_path = get_profile_path(profile_name, cache_profiles_path, get_cwd(),\n exists=False)\n if os.path.exists(profile_path):\n raise ConanException(\"Profile already exists\")\n\n profile = Profile()\n if detect:\n settings = detect_defaults_settings(output)\n for name, value in settings:\n profile.settings[name] = value\n\n contents = profile.dumps()\n save(profile_path, contents)\n\n if detect:\n output.info(\"Profile created with detected settings: %s\" % profile_path)\n else:\n output.info(\"Empty profile created: %s\" % profile_path)\n return profile_path\n\n\ndef cmd_profile_update(profile_name, key, value, cache_profiles_path):\n first_key, rest_key = _get_profile_keys(key)\n\n profile, _ = read_profile(profile_name, get_cwd(), cache_profiles_path)\n if first_key == \"settings\":\n profile.settings[rest_key] = value\n elif first_key == \"options\":\n tmp = OptionsValues([(rest_key, value)])\n profile.options.update(tmp)\n elif first_key == \"env\":\n profile.env_values.update_replace(rest_key, value)\n elif first_key == \"build_requires\":\n raise ConanException(\"Edit the profile manually to change the build_requires\")\n\n contents = profile.dumps()\n profile_path = get_profile_path(profile_name, cache_profiles_path, get_cwd())\n save(profile_path, contents)\n\n\ndef cmd_profile_get(profile_name, key, cache_profiles_path):\n first_key, rest_key = _get_profile_keys(key)\n profile, _ = read_profile(profile_name, get_cwd(), cache_profiles_path)\n try:\n if first_key == \"settings\":\n return profile.settings[rest_key]\n elif first_key == \"options\":\n return dict(profile.options.as_list())[rest_key]\n elif first_key == \"env\":\n package = None\n var = rest_key\n if \":\" in rest_key:\n package, var = rest_key.split(\":\")\n return profile.env_values.data[package][var]\n elif first_key == \"build_requires\":\n raise ConanException(\"List the profile manually to see the build_requires\")\n except KeyError:\n raise ConanException(\"Key not found: '%s'\" % key)\n\n\ndef cmd_profile_delete_key(profile_name, key, cache_profiles_path):\n first_key, rest_key = _get_profile_keys(key)\n profile, _ = read_profile(profile_name, get_cwd(), cache_profiles_path)\n\n try:\n package, name = rest_key.split(\":\")\n except ValueError:\n package = None\n name = rest_key\n\n try:\n if first_key == \"settings\":\n del profile.settings[rest_key]\n elif first_key == \"options\":\n profile.options.remove(name, package)\n elif first_key == \"env\":\n profile.env_values.remove(name, package)\n elif first_key == \"build_requires\":\n raise ConanException(\"Edit the profile manually to delete a build_require\")\n except KeyError:\n raise ConanException(\"Profile key '%s' doesn't exist\" % key)\n\n contents = profile.dumps()\n profile_path = get_profile_path(profile_name, cache_profiles_path, get_cwd())\n save(profile_path, contents)\n", "path": "conans/client/cmd/profile.py"}]} | 1,939 | 225 |
gh_patches_debug_60854 | rasdani/github-patches | git_diff | airctic__icevision-441 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add icedata to icevision.all
## 🚀 Feature
Currently to train a dataset available with icedata the following two lines are necessary:
```python
import icedata
from icevision.all import *
```
Because icedata already depends on icevision, icevision cannot depend on icedata. **But** I guess we can add icedata as a soft dependency to `.all`, we just have to be sure not to use `icedata` internally in icevision.
</issue>
<code>
[start of icevision/all.py]
1 from icevision.imports import *
2 from icevision import *
3
[end of icevision/all.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/icevision/all.py b/icevision/all.py
--- a/icevision/all.py
+++ b/icevision/all.py
@@ -1,2 +1,9 @@
from icevision.imports import *
from icevision import *
+
+# soft import icedata
+try:
+ import icedata
+except ModuleNotFoundError as e:
+ if str(e) != f"No module named 'icedata'":
+ raise e
| {"golden_diff": "diff --git a/icevision/all.py b/icevision/all.py\n--- a/icevision/all.py\n+++ b/icevision/all.py\n@@ -1,2 +1,9 @@\n from icevision.imports import *\n from icevision import *\n+\n+# soft import icedata\n+try:\n+ import icedata\n+except ModuleNotFoundError as e:\n+ if str(e) != f\"No module named 'icedata'\":\n+ raise e\n", "issue": "Add icedata to icevision.all\n## \ud83d\ude80 Feature\r\nCurrently to train a dataset available with icedata the following two lines are necessary:\r\n```python\r\nimport icedata\r\nfrom icevision.all import *\r\n```\r\n\r\nBecause icedata already depends on icevision, icevision cannot depend on icedata. **But** I guess we can add icedata as a soft dependency to `.all`, we just have to be sure not to use `icedata` internally in icevision.\n", "before_files": [{"content": "from icevision.imports import *\nfrom icevision import *\n", "path": "icevision/all.py"}]} | 653 | 101 |
gh_patches_debug_5156 | rasdani/github-patches | git_diff | DataBiosphere__toil-2834 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Example script duplicated
`examples/hello.py` and `src/toil/test/docs/scripts/tutorial_arguments.py` are duplicate scripts with the same contents.
┆Issue is synchronized with this [Jira Task](https://ucsc-cgl.atlassian.net/browse/TOIL-443)
┆Issue Number: TOIL-443
</issue>
<code>
[start of examples/hello.py]
1 from toil.common import Toil
2 from toil.job import Job
3
4 class HelloWorld(Job):
5 def __init__(self, message):
6 Job.__init__(self, memory="1G", cores=2, disk="2G")
7 self.message = message
8
9 def run(self, fileStore):
10 return "Hello, world!, here's a message: %s" % self.message
11
12 if __name__=="__main__":
13 parser = Job.Runner.getDefaultArgumentParser()
14 options = parser.parse_args()
15
16 hello_job = HelloWorld("Woot")
17
18 with Toil(options) as toil:
19 print(toil.start(hello_job))
20
[end of examples/hello.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/hello.py b/examples/hello.py
deleted file mode 100644
--- a/examples/hello.py
+++ /dev/null
@@ -1,19 +0,0 @@
-from toil.common import Toil
-from toil.job import Job
-
-class HelloWorld(Job):
- def __init__(self, message):
- Job.__init__(self, memory="1G", cores=2, disk="2G")
- self.message = message
-
- def run(self, fileStore):
- return "Hello, world!, here's a message: %s" % self.message
-
-if __name__=="__main__":
- parser = Job.Runner.getDefaultArgumentParser()
- options = parser.parse_args()
-
- hello_job = HelloWorld("Woot")
-
- with Toil(options) as toil:
- print(toil.start(hello_job))
| {"golden_diff": "diff --git a/examples/hello.py b/examples/hello.py\ndeleted file mode 100644\n--- a/examples/hello.py\n+++ /dev/null\n@@ -1,19 +0,0 @@\n-from toil.common import Toil\n-from toil.job import Job\n-\n-class HelloWorld(Job):\n- def __init__(self, message):\n- Job.__init__(self, memory=\"1G\", cores=2, disk=\"2G\")\n- self.message = message\n-\n- def run(self, fileStore):\n- return \"Hello, world!, here's a message: %s\" % self.message\n-\n-if __name__==\"__main__\":\n- parser = Job.Runner.getDefaultArgumentParser()\n- options = parser.parse_args()\n-\n- hello_job = HelloWorld(\"Woot\")\n-\n- with Toil(options) as toil:\n- print(toil.start(hello_job))\n", "issue": "Example script duplicated\n`examples/hello.py` and `src/toil/test/docs/scripts/tutorial_arguments.py` are duplicate scripts with the same contents.\n\n\u2506Issue is synchronized with this [Jira Task](https://ucsc-cgl.atlassian.net/browse/TOIL-443)\n\u2506Issue Number: TOIL-443\n\n", "before_files": [{"content": "from toil.common import Toil\nfrom toil.job import Job\n\nclass HelloWorld(Job):\n def __init__(self, message):\n Job.__init__(self, memory=\"1G\", cores=2, disk=\"2G\")\n self.message = message\n\n def run(self, fileStore):\n return \"Hello, world!, here's a message: %s\" % self.message\n\nif __name__==\"__main__\":\n parser = Job.Runner.getDefaultArgumentParser()\n options = parser.parse_args()\n\n hello_job = HelloWorld(\"Woot\")\n\n with Toil(options) as toil:\n print(toil.start(hello_job))\n", "path": "examples/hello.py"}]} | 778 | 200 |
gh_patches_debug_18244 | rasdani/github-patches | git_diff | interlegis__sapl-1606 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exclusão Tramitação - status de tramitação
Ao incluir uma tramitação à matéria, em que esta tramitação tenha seu indicador de tramitação definido nas tabelas auxiliares como fim, a matéria em questão passa de "em tramitação - sim para não" corretamente. Se essa tramitação por ventura for excluída, não seria o caso de alterar novamente a matéria de "em tramitação - não para sim" ?
</issue>
<code>
[start of sapl/materia/receivers.py]
1 from django.dispatch import receiver
2
3 from sapl.materia.signals import tramitacao_signal
4 from sapl.utils import get_base_url
5
6 from .email_utils import do_envia_email_tramitacao
7
8
9 @receiver(tramitacao_signal)
10 def handle_tramitacao_signal(sender, **kwargs):
11 tramitacao = kwargs.get("post")
12 request = kwargs.get("request")
13 materia = tramitacao.materia
14
15 do_envia_email_tramitacao(
16 get_base_url(request),
17 materia,
18 tramitacao.status,
19 tramitacao.unidade_tramitacao_destino)
20
[end of sapl/materia/receivers.py]
[start of sapl/materia/apps.py]
1 from django import apps
2 from django.utils.translation import ugettext_lazy as _
3
4
5 class AppConfig(apps.AppConfig):
6 name = 'sapl.materia'
7 label = 'materia'
8 verbose_name = _('Matéria')
9
[end of sapl/materia/apps.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sapl/materia/apps.py b/sapl/materia/apps.py
--- a/sapl/materia/apps.py
+++ b/sapl/materia/apps.py
@@ -6,3 +6,6 @@
name = 'sapl.materia'
label = 'materia'
verbose_name = _('Matéria')
+
+ def ready(self):
+ from . import receivers
\ No newline at end of file
diff --git a/sapl/materia/receivers.py b/sapl/materia/receivers.py
--- a/sapl/materia/receivers.py
+++ b/sapl/materia/receivers.py
@@ -1,5 +1,7 @@
+from django.db.models.signals import post_delete, post_save
from django.dispatch import receiver
+from sapl.materia.models import Tramitacao
from sapl.materia.signals import tramitacao_signal
from sapl.utils import get_base_url
@@ -17,3 +19,11 @@
materia,
tramitacao.status,
tramitacao.unidade_tramitacao_destino)
+
+
+@receiver(post_delete, sender=Tramitacao)
+def status_tramitacao_materia(sender, instance, **kwargs):
+ if instance.turno == 'F':
+ materia = instance.materia
+ materia.em_tramitacao = True
+ materia.save()
| {"golden_diff": "diff --git a/sapl/materia/apps.py b/sapl/materia/apps.py\n--- a/sapl/materia/apps.py\n+++ b/sapl/materia/apps.py\n@@ -6,3 +6,6 @@\n name = 'sapl.materia'\n label = 'materia'\n verbose_name = _('Mat\u00e9ria')\n+\n+ def ready(self):\n+ from . import receivers\n\\ No newline at end of file\ndiff --git a/sapl/materia/receivers.py b/sapl/materia/receivers.py\n--- a/sapl/materia/receivers.py\n+++ b/sapl/materia/receivers.py\n@@ -1,5 +1,7 @@\n+from django.db.models.signals import post_delete, post_save\n from django.dispatch import receiver\n \n+from sapl.materia.models import Tramitacao\n from sapl.materia.signals import tramitacao_signal\n from sapl.utils import get_base_url\n \n@@ -17,3 +19,11 @@\n materia,\n tramitacao.status,\n tramitacao.unidade_tramitacao_destino)\n+\n+\n+@receiver(post_delete, sender=Tramitacao)\n+def status_tramitacao_materia(sender, instance, **kwargs):\n+ if instance.turno == 'F':\n+ materia = instance.materia\n+ materia.em_tramitacao = True\n+ materia.save()\n", "issue": "Exclus\u00e3o Tramita\u00e7\u00e3o - status de tramita\u00e7\u00e3o\nAo incluir uma tramita\u00e7\u00e3o \u00e0 mat\u00e9ria, em que esta tramita\u00e7\u00e3o tenha seu indicador de tramita\u00e7\u00e3o definido nas tabelas auxiliares como fim, a mat\u00e9ria em quest\u00e3o passa de \"em tramita\u00e7\u00e3o - sim para n\u00e3o\" corretamente. Se essa tramita\u00e7\u00e3o por ventura for exclu\u00edda, n\u00e3o seria o caso de alterar novamente a mat\u00e9ria de \"em tramita\u00e7\u00e3o - n\u00e3o para sim\" ?\n", "before_files": [{"content": "from django.dispatch import receiver\n\nfrom sapl.materia.signals import tramitacao_signal\nfrom sapl.utils import get_base_url\n\nfrom .email_utils import do_envia_email_tramitacao\n\n\n@receiver(tramitacao_signal)\ndef handle_tramitacao_signal(sender, **kwargs):\n tramitacao = kwargs.get(\"post\")\n request = kwargs.get(\"request\")\n materia = tramitacao.materia\n\n do_envia_email_tramitacao(\n get_base_url(request),\n materia,\n tramitacao.status,\n tramitacao.unidade_tramitacao_destino)\n", "path": "sapl/materia/receivers.py"}, {"content": "from django import apps\nfrom django.utils.translation import ugettext_lazy as _\n\n\nclass AppConfig(apps.AppConfig):\n name = 'sapl.materia'\n label = 'materia'\n verbose_name = _('Mat\u00e9ria')\n", "path": "sapl/materia/apps.py"}]} | 886 | 301 |
gh_patches_debug_4716 | rasdani/github-patches | git_diff | crytic__slither-2331 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug]: High level call does not always have function
### Describe the issue:
I have two somehow similar examples, in the first one I have the function on the high level call in function safeAdd. In the next example I edited the array size to be constant instead of a literal, and I got None instead of a function object
### Code example to reproduce the issue:
```solidity
library SafeMath {
uint256 private constant twelve = 12;
struct A {uint256 a;}
function add(A[twelve] storage z) internal { }
}
contract MathContract {
uint256 private constant twelve = 12;
using SafeMath for SafeMath.A[12];
SafeMath.A[12] public z;
function safeAdd() public {
z.add();
}
}
```
```solidity
library SafeMath {
uint256 private constant twelve = 12;
struct A {uint256 a;}
function add(A[twelve] storage z) internal { }
}
contract MathContract {
uint256 private constant twelve = 12;
using SafeMath for SafeMath.A[twelve];
SafeMath.A[twelve] public z;
function safeAdd() public {
z.add();
}
}
```
### Version:
0.10.0
### Relevant log output:
```shell
>>> from slither import Slither
>>> math = Slither('a.sol').contracts[1]
>>> math.name
'MathContract'
>>> f = math.functions[0]
>>> f.name
'safeAdd'
>>> f.nodes
[<slither.core.cfg.node.Node object at 0x7f5460aa1e50>, <slither.core.cfg.node.No
de object at 0x7f5460aa2090>]
>>> f.nodes[1]
<slither.core.cfg.node.Node object at 0x7f5460aa2090>
>>> f.nodes[1].irs
[<slither.slithir.operations.library_call.LibraryCall object at 0x7f5460a748d0>]
>>> f.nodes[1].irs[0].function
<slither.core.declarations.function_contract.FunctionContract object at 0x7f5460a
9e090>
>>> f.nodes[1].irs[0].function.name
'add'
----------------------------------------------------------------------------------
>>> from slither import Slither
>>> math = Slither('a.sol').contracts[1]
>>> math.name
'MathContract'
>>> f = math.functions[0]
>>> f.name
'safeAdd'
>>> f.nodes
[<slither.core.cfg.node.Node object at 0x7f9d6379db10>, <slither.core.cfg.node.No
de object at 0x7f9d63a47850>]
>>> f.nodes[1]
<slither.core.cfg.node.Node object at 0x7f9d63a47850>
>>> f.nodes[1].irs
[<slither.slithir.operations.high_level_call.HighLevelCall object at 0x7f9d63a376
90>]
>>> f.nodes[1].irs[0].function
>>> print(f.nodes[1].irs[0].function)
None
```
</issue>
<code>
[start of slither/core/solidity_types/array_type.py]
1 from typing import Union, Optional, Tuple, Any, TYPE_CHECKING
2
3 from slither.core.expressions.expression import Expression
4 from slither.core.expressions.literal import Literal
5 from slither.core.solidity_types.elementary_type import ElementaryType
6 from slither.core.solidity_types.type import Type
7 from slither.visitors.expression.constants_folding import ConstantFolding
8
9 if TYPE_CHECKING:
10 from slither.core.expressions.binary_operation import BinaryOperation
11 from slither.core.expressions.identifier import Identifier
12
13
14 class ArrayType(Type):
15 def __init__(
16 self,
17 t: Type,
18 length: Optional[Union["Identifier", Literal, "BinaryOperation", int]],
19 ) -> None:
20 assert isinstance(t, Type)
21 if length:
22 if isinstance(length, int):
23 length = Literal(length, ElementaryType("uint256"))
24
25 super().__init__()
26 self._type: Type = t
27 assert length is None or isinstance(length, Expression)
28 self._length: Optional[Expression] = length
29
30 if length:
31 if not isinstance(length, Literal):
32 cf = ConstantFolding(length, "uint256")
33 length = cf.result()
34 self._length_value: Optional[Literal] = length
35 else:
36 self._length_value = None
37
38 @property
39 def type(self) -> Type:
40 return self._type
41
42 @property
43 def is_dynamic(self) -> bool:
44 return self.length is None
45
46 @property
47 def length(self) -> Optional[Expression]:
48 return self._length
49
50 @property
51 def length_value(self) -> Optional[Literal]:
52 return self._length_value
53
54 @property
55 def is_fixed_array(self) -> bool:
56 return bool(self.length)
57
58 @property
59 def is_dynamic_array(self) -> bool:
60 return not self.is_fixed_array
61
62 @property
63 def storage_size(self) -> Tuple[int, bool]:
64 if self._length_value:
65 elem_size, _ = self._type.storage_size
66 return elem_size * int(str(self._length_value)), True
67 return 32, True
68
69 def __str__(self) -> str:
70 if self._length:
71 return str(self._type) + f"[{str(self._length_value)}]"
72 return str(self._type) + "[]"
73
74 def __eq__(self, other: Any) -> bool:
75 if not isinstance(other, ArrayType):
76 return False
77 return self._type == other.type and self.length == other.length
78
79 def __hash__(self) -> int:
80 return hash(str(self))
81
[end of slither/core/solidity_types/array_type.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/slither/core/solidity_types/array_type.py b/slither/core/solidity_types/array_type.py
--- a/slither/core/solidity_types/array_type.py
+++ b/slither/core/solidity_types/array_type.py
@@ -74,7 +74,7 @@
def __eq__(self, other: Any) -> bool:
if not isinstance(other, ArrayType):
return False
- return self._type == other.type and self.length == other.length
+ return self._type == other.type and self._length_value == other.length_value
def __hash__(self) -> int:
return hash(str(self))
| {"golden_diff": "diff --git a/slither/core/solidity_types/array_type.py b/slither/core/solidity_types/array_type.py\n--- a/slither/core/solidity_types/array_type.py\n+++ b/slither/core/solidity_types/array_type.py\n@@ -74,7 +74,7 @@\n def __eq__(self, other: Any) -> bool:\n if not isinstance(other, ArrayType):\n return False\n- return self._type == other.type and self.length == other.length\n+ return self._type == other.type and self._length_value == other.length_value\n \n def __hash__(self) -> int:\n return hash(str(self))\n", "issue": "[Bug]: High level call does not always have function\n### Describe the issue:\r\n\r\nI have two somehow similar examples, in the first one I have the function on the high level call in function safeAdd. In the next example I edited the array size to be constant instead of a literal, and I got None instead of a function object\r\n\r\n### Code example to reproduce the issue:\r\n\r\n```solidity\r\nlibrary SafeMath {\r\n uint256 private constant twelve = 12; \r\n struct A {uint256 a;}\r\n function add(A[twelve] storage z) internal { }\r\n}\r\n\r\ncontract MathContract {\r\n uint256 private constant twelve = 12; \r\n using SafeMath for SafeMath.A[12];\r\n SafeMath.A[12] public z;\r\n function safeAdd() public {\r\n z.add();\r\n }\r\n}\r\n```\r\n```solidity\r\nlibrary SafeMath {\r\n uint256 private constant twelve = 12; \r\n struct A {uint256 a;}\r\n function add(A[twelve] storage z) internal { }\r\n}\r\n\r\ncontract MathContract {\r\n uint256 private constant twelve = 12; \r\n using SafeMath for SafeMath.A[twelve];\r\n SafeMath.A[twelve] public z;\r\n function safeAdd() public {\r\n z.add();\r\n }\r\n}\r\n```\r\n\r\n### Version:\r\n\r\n0.10.0\r\n\r\n### Relevant log output:\r\n\r\n```shell\r\n>>> from slither import Slither\r\n>>> math = Slither('a.sol').contracts[1]\r\n>>> math.name\r\n'MathContract'\r\n>>> f = math.functions[0]\r\n>>> f.name\r\n'safeAdd'\r\n>>> f.nodes\r\n[<slither.core.cfg.node.Node object at 0x7f5460aa1e50>, <slither.core.cfg.node.No\r\nde object at 0x7f5460aa2090>]\r\n>>> f.nodes[1]\r\n<slither.core.cfg.node.Node object at 0x7f5460aa2090>\r\n>>> f.nodes[1].irs\r\n[<slither.slithir.operations.library_call.LibraryCall object at 0x7f5460a748d0>]\r\n>>> f.nodes[1].irs[0].function\r\n<slither.core.declarations.function_contract.FunctionContract object at 0x7f5460a\r\n9e090>\r\n>>> f.nodes[1].irs[0].function.name\r\n'add'\r\n----------------------------------------------------------------------------------\r\n>>> from slither import Slither\r\n>>> math = Slither('a.sol').contracts[1]\r\n>>> math.name\r\n'MathContract'\r\n>>> f = math.functions[0]\r\n>>> f.name\r\n'safeAdd'\r\n>>> f.nodes\r\n[<slither.core.cfg.node.Node object at 0x7f9d6379db10>, <slither.core.cfg.node.No\r\nde object at 0x7f9d63a47850>]\r\n>>> f.nodes[1]\r\n<slither.core.cfg.node.Node object at 0x7f9d63a47850>\r\n>>> f.nodes[1].irs\r\n[<slither.slithir.operations.high_level_call.HighLevelCall object at 0x7f9d63a376\r\n90>]\r\n>>> f.nodes[1].irs[0].function\r\n>>> print(f.nodes[1].irs[0].function)\r\nNone\r\n```\r\n\n", "before_files": [{"content": "from typing import Union, Optional, Tuple, Any, TYPE_CHECKING\n\nfrom slither.core.expressions.expression import Expression\nfrom slither.core.expressions.literal import Literal\nfrom slither.core.solidity_types.elementary_type import ElementaryType\nfrom slither.core.solidity_types.type import Type\nfrom slither.visitors.expression.constants_folding import ConstantFolding\n\nif TYPE_CHECKING:\n from slither.core.expressions.binary_operation import BinaryOperation\n from slither.core.expressions.identifier import Identifier\n\n\nclass ArrayType(Type):\n def __init__(\n self,\n t: Type,\n length: Optional[Union[\"Identifier\", Literal, \"BinaryOperation\", int]],\n ) -> None:\n assert isinstance(t, Type)\n if length:\n if isinstance(length, int):\n length = Literal(length, ElementaryType(\"uint256\"))\n\n super().__init__()\n self._type: Type = t\n assert length is None or isinstance(length, Expression)\n self._length: Optional[Expression] = length\n\n if length:\n if not isinstance(length, Literal):\n cf = ConstantFolding(length, \"uint256\")\n length = cf.result()\n self._length_value: Optional[Literal] = length\n else:\n self._length_value = None\n\n @property\n def type(self) -> Type:\n return self._type\n\n @property\n def is_dynamic(self) -> bool:\n return self.length is None\n\n @property\n def length(self) -> Optional[Expression]:\n return self._length\n\n @property\n def length_value(self) -> Optional[Literal]:\n return self._length_value\n\n @property\n def is_fixed_array(self) -> bool:\n return bool(self.length)\n\n @property\n def is_dynamic_array(self) -> bool:\n return not self.is_fixed_array\n\n @property\n def storage_size(self) -> Tuple[int, bool]:\n if self._length_value:\n elem_size, _ = self._type.storage_size\n return elem_size * int(str(self._length_value)), True\n return 32, True\n\n def __str__(self) -> str:\n if self._length:\n return str(self._type) + f\"[{str(self._length_value)}]\"\n return str(self._type) + \"[]\"\n\n def __eq__(self, other: Any) -> bool:\n if not isinstance(other, ArrayType):\n return False\n return self._type == other.type and self.length == other.length\n\n def __hash__(self) -> int:\n return hash(str(self))\n", "path": "slither/core/solidity_types/array_type.py"}]} | 2,004 | 143 |
gh_patches_debug_21235 | rasdani/github-patches | git_diff | facebookresearch__hydra-1630 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Document hydra.callbacks api.
- [ ] Document API
- [ ] Add news fragment if missing
</issue>
<code>
[start of hydra/experimental/callback.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 import logging
3 from typing import Any
4
5 from omegaconf import DictConfig
6
7 from hydra.core.utils import JobReturn
8
9 logger = logging.getLogger(__name__)
10
11
12 class Callback:
13 def on_run_start(self, config: DictConfig, **kwargs: Any) -> None:
14 """
15 Called in RUN mode before job starts.
16 """
17 ...
18
19 def on_run_end(self, config: DictConfig, **kwargs: Any) -> None:
20 """
21 Called in RUN mode after job ends.
22 """
23 ...
24
25 def on_multirun_start(self, config: DictConfig, **kwargs: Any) -> None:
26 """
27 Called in MULTIRUN mode before any job starts.
28 """
29 ...
30
31 def on_multirun_end(self, config: DictConfig, **kwargs: Any) -> None:
32 """
33 Called in MULTIRUN mode after all job end.
34 """
35 ...
36
37 def on_job_start(self, config: DictConfig, **kwargs: Any) -> None:
38 """
39 Called in both RUN and MULTIRUN modes inside a Hydra job; before running
40 application code.
41 """
42 ...
43
44 def on_job_end(
45 self, config: DictConfig, job_return: JobReturn, **kwargs: Any
46 ) -> None:
47 """
48 Called in both RUN and MULTIRUN modes inside a Hydra job; after running
49 application code.
50 """
51 ...
52
[end of hydra/experimental/callback.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hydra/experimental/callback.py b/hydra/experimental/callback.py
--- a/hydra/experimental/callback.py
+++ b/hydra/experimental/callback.py
@@ -30,14 +30,14 @@
def on_multirun_end(self, config: DictConfig, **kwargs: Any) -> None:
"""
- Called in MULTIRUN mode after all job end.
+ Called in MULTIRUN mode after all jobs end.
"""
...
def on_job_start(self, config: DictConfig, **kwargs: Any) -> None:
"""
- Called in both RUN and MULTIRUN modes inside a Hydra job; before running
- application code.
+ Called in both RUN and MULTIRUN modes, once for each Hydra job (before running
+ application code).
"""
...
@@ -45,7 +45,7 @@
self, config: DictConfig, job_return: JobReturn, **kwargs: Any
) -> None:
"""
- Called in both RUN and MULTIRUN modes inside a Hydra job; after running
- application code.
+ Called in both RUN and MULTIRUN modes, once for each Hydra job (after running
+ application code).
"""
...
| {"golden_diff": "diff --git a/hydra/experimental/callback.py b/hydra/experimental/callback.py\n--- a/hydra/experimental/callback.py\n+++ b/hydra/experimental/callback.py\n@@ -30,14 +30,14 @@\n \n def on_multirun_end(self, config: DictConfig, **kwargs: Any) -> None:\n \"\"\"\n- Called in MULTIRUN mode after all job end.\n+ Called in MULTIRUN mode after all jobs end.\n \"\"\"\n ...\n \n def on_job_start(self, config: DictConfig, **kwargs: Any) -> None:\n \"\"\"\n- Called in both RUN and MULTIRUN modes inside a Hydra job; before running\n- application code.\n+ Called in both RUN and MULTIRUN modes, once for each Hydra job (before running\n+ application code).\n \"\"\"\n ...\n \n@@ -45,7 +45,7 @@\n self, config: DictConfig, job_return: JobReturn, **kwargs: Any\n ) -> None:\n \"\"\"\n- Called in both RUN and MULTIRUN modes inside a Hydra job; after running\n- application code.\n+ Called in both RUN and MULTIRUN modes, once for each Hydra job (after running\n+ application code).\n \"\"\"\n ...\n", "issue": "Document hydra.callbacks api.\n- [ ] Document API\r\n- [ ] Add news fragment if missing\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport logging\nfrom typing import Any\n\nfrom omegaconf import DictConfig\n\nfrom hydra.core.utils import JobReturn\n\nlogger = logging.getLogger(__name__)\n\n\nclass Callback:\n def on_run_start(self, config: DictConfig, **kwargs: Any) -> None:\n \"\"\"\n Called in RUN mode before job starts.\n \"\"\"\n ...\n\n def on_run_end(self, config: DictConfig, **kwargs: Any) -> None:\n \"\"\"\n Called in RUN mode after job ends.\n \"\"\"\n ...\n\n def on_multirun_start(self, config: DictConfig, **kwargs: Any) -> None:\n \"\"\"\n Called in MULTIRUN mode before any job starts.\n \"\"\"\n ...\n\n def on_multirun_end(self, config: DictConfig, **kwargs: Any) -> None:\n \"\"\"\n Called in MULTIRUN mode after all job end.\n \"\"\"\n ...\n\n def on_job_start(self, config: DictConfig, **kwargs: Any) -> None:\n \"\"\"\n Called in both RUN and MULTIRUN modes inside a Hydra job; before running\n application code.\n \"\"\"\n ...\n\n def on_job_end(\n self, config: DictConfig, job_return: JobReturn, **kwargs: Any\n ) -> None:\n \"\"\"\n Called in both RUN and MULTIRUN modes inside a Hydra job; after running\n application code.\n \"\"\"\n ...\n", "path": "hydra/experimental/callback.py"}]} | 973 | 280 |
gh_patches_debug_8080 | rasdani/github-patches | git_diff | keras-team__keras-nlp-195 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Why does the docstring say vocab size should be no larger than 999?
https://github.com/keras-team/keras-nlp/blob/e3adddaa98bbe1aee071117c01678fe3017dae80/keras_nlp/layers/token_and_position_embedding.py#L30
Seems like a very small vocab to me
</issue>
<code>
[start of keras_nlp/layers/token_and_position_embedding.py]
1 # Copyright 2022 The KerasNLP Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Creates an Embedding Layer and adds Positional Embeddings"""
16
17 from tensorflow import keras
18
19 import keras_nlp.layers
20
21
22 class TokenAndPositionEmbedding(keras.layers.Layer):
23 """A layer which sums a token and position embedding.
24
25 This layer assumes that the last dimension in the input corresponds
26 to the sequence dimension.
27
28 Args:
29 vocabulary_size: The size of the vocabulary (should be no larger
30 than 999)
31 sequence_length: The maximum length of input sequence
32 embedding_dim: The output dimension of the embedding layer
33 embeddings_initializer: The initializer to use for the Embedding
34 Layers
35 mask_zero: Boolean, whether or not the input value 0 is a special
36 "padding" value that should be masked out.
37 This is useful when using recurrent layers which may take variable
38 length input. If this is True, then all subsequent layers in the
39 model need to support masking or an exception will be raised.
40 If mask_zero` is set to True, as a consequence, index 0 cannot be
41 used in the vocabulary
42 (input_dim should equal size of vocabulary + 1).
43
44 Examples:
45 ```python
46 seq_length = 50
47 vocab_size = 5000
48 embed_dim = 128
49 inputs = keras.Input(shape=(seq_length,))
50 embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(
51 vocabulary_size=vocab_size,
52 sequence_length=seq_length,
53 embedding_dim=embed_dim,
54 )
55 outputs = embedding_layer(inputs)
56 ```
57 """
58
59 def __init__(
60 self,
61 vocabulary_size,
62 sequence_length,
63 embedding_dim,
64 embeddings_initializer="glorot_uniform",
65 mask_zero=False,
66 **kwargs
67 ):
68 super().__init__(**kwargs)
69 if vocabulary_size is None:
70 raise ValueError(
71 "`vocabulary_size` must be an Integer, received `None`."
72 )
73 if sequence_length is None:
74 raise ValueError(
75 "`sequence_length` must be an Integer, received `None`."
76 )
77 if embedding_dim is None:
78 raise ValueError(
79 "`embedding_dim` must be an Integer, received `None`."
80 )
81 self.vocabulary_size = int(vocabulary_size)
82 self.sequence_length = int(sequence_length)
83 self.embedding_dim = int(embedding_dim)
84 self.token_embedding = keras.layers.Embedding(
85 vocabulary_size,
86 embedding_dim,
87 embeddings_initializer=embeddings_initializer,
88 mask_zero=mask_zero,
89 )
90 self.position_embedding = keras_nlp.layers.PositionEmbedding(
91 sequence_length=sequence_length,
92 initializer=embeddings_initializer,
93 )
94 self.supports_masking = self.token_embedding.supports_masking
95
96 def get_config(self):
97 config = super().get_config()
98 config.update(
99 {
100 "vocabulary_size": self.vocabulary_size,
101 "sequence_length": self.sequence_length,
102 "embedding_dim": self.embedding_dim,
103 "embeddings_initializer": keras.initializers.serialize(
104 self.token_embedding.embeddings_initializer
105 ),
106 "mask_zero": self.token_embedding.mask_zero,
107 },
108 )
109 return config
110
111 def call(self, inputs):
112 embedded_tokens = self.token_embedding(inputs)
113 embedded_positions = self.position_embedding(embedded_tokens)
114 outputs = embedded_tokens + embedded_positions
115 return outputs
116
117 def compute_mask(self, inputs, mask=None):
118 return self.token_embedding.compute_mask(inputs, mask=mask)
119
[end of keras_nlp/layers/token_and_position_embedding.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/keras_nlp/layers/token_and_position_embedding.py b/keras_nlp/layers/token_and_position_embedding.py
--- a/keras_nlp/layers/token_and_position_embedding.py
+++ b/keras_nlp/layers/token_and_position_embedding.py
@@ -26,8 +26,7 @@
to the sequence dimension.
Args:
- vocabulary_size: The size of the vocabulary (should be no larger
- than 999)
+ vocabulary_size: The size of the vocabulary.
sequence_length: The maximum length of input sequence
embedding_dim: The output dimension of the embedding layer
embeddings_initializer: The initializer to use for the Embedding
| {"golden_diff": "diff --git a/keras_nlp/layers/token_and_position_embedding.py b/keras_nlp/layers/token_and_position_embedding.py\n--- a/keras_nlp/layers/token_and_position_embedding.py\n+++ b/keras_nlp/layers/token_and_position_embedding.py\n@@ -26,8 +26,7 @@\n to the sequence dimension.\n \n Args:\n- vocabulary_size: The size of the vocabulary (should be no larger\n- than 999)\n+ vocabulary_size: The size of the vocabulary.\n sequence_length: The maximum length of input sequence\n embedding_dim: The output dimension of the embedding layer\n embeddings_initializer: The initializer to use for the Embedding\n", "issue": "Why does the docstring say vocab size should be no larger than 999?\nhttps://github.com/keras-team/keras-nlp/blob/e3adddaa98bbe1aee071117c01678fe3017dae80/keras_nlp/layers/token_and_position_embedding.py#L30\r\n\r\nSeems like a very small vocab to me\n", "before_files": [{"content": "# Copyright 2022 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Creates an Embedding Layer and adds Positional Embeddings\"\"\"\n\nfrom tensorflow import keras\n\nimport keras_nlp.layers\n\n\nclass TokenAndPositionEmbedding(keras.layers.Layer):\n \"\"\"A layer which sums a token and position embedding.\n\n This layer assumes that the last dimension in the input corresponds\n to the sequence dimension.\n\n Args:\n vocabulary_size: The size of the vocabulary (should be no larger\n than 999)\n sequence_length: The maximum length of input sequence\n embedding_dim: The output dimension of the embedding layer\n embeddings_initializer: The initializer to use for the Embedding\n Layers\n mask_zero: Boolean, whether or not the input value 0 is a special\n \"padding\" value that should be masked out.\n This is useful when using recurrent layers which may take variable\n length input. If this is True, then all subsequent layers in the\n model need to support masking or an exception will be raised.\n If mask_zero` is set to True, as a consequence, index 0 cannot be\n used in the vocabulary\n (input_dim should equal size of vocabulary + 1).\n\n Examples:\n ```python\n seq_length = 50\n vocab_size = 5000\n embed_dim = 128\n inputs = keras.Input(shape=(seq_length,))\n embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(\n vocabulary_size=vocab_size,\n sequence_length=seq_length,\n embedding_dim=embed_dim,\n )\n outputs = embedding_layer(inputs)\n ```\n \"\"\"\n\n def __init__(\n self,\n vocabulary_size,\n sequence_length,\n embedding_dim,\n embeddings_initializer=\"glorot_uniform\",\n mask_zero=False,\n **kwargs\n ):\n super().__init__(**kwargs)\n if vocabulary_size is None:\n raise ValueError(\n \"`vocabulary_size` must be an Integer, received `None`.\"\n )\n if sequence_length is None:\n raise ValueError(\n \"`sequence_length` must be an Integer, received `None`.\"\n )\n if embedding_dim is None:\n raise ValueError(\n \"`embedding_dim` must be an Integer, received `None`.\"\n )\n self.vocabulary_size = int(vocabulary_size)\n self.sequence_length = int(sequence_length)\n self.embedding_dim = int(embedding_dim)\n self.token_embedding = keras.layers.Embedding(\n vocabulary_size,\n embedding_dim,\n embeddings_initializer=embeddings_initializer,\n mask_zero=mask_zero,\n )\n self.position_embedding = keras_nlp.layers.PositionEmbedding(\n sequence_length=sequence_length,\n initializer=embeddings_initializer,\n )\n self.supports_masking = self.token_embedding.supports_masking\n\n def get_config(self):\n config = super().get_config()\n config.update(\n {\n \"vocabulary_size\": self.vocabulary_size,\n \"sequence_length\": self.sequence_length,\n \"embedding_dim\": self.embedding_dim,\n \"embeddings_initializer\": keras.initializers.serialize(\n self.token_embedding.embeddings_initializer\n ),\n \"mask_zero\": self.token_embedding.mask_zero,\n },\n )\n return config\n\n def call(self, inputs):\n embedded_tokens = self.token_embedding(inputs)\n embedded_positions = self.position_embedding(embedded_tokens)\n outputs = embedded_tokens + embedded_positions\n return outputs\n\n def compute_mask(self, inputs, mask=None):\n return self.token_embedding.compute_mask(inputs, mask=mask)\n", "path": "keras_nlp/layers/token_and_position_embedding.py"}]} | 1,758 | 154 |
gh_patches_debug_4313 | rasdani/github-patches | git_diff | fossasia__open-event-server-352 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
HTML Template rendered when page doesn't exist in API
If a paginated API endpoint is called with a non-existant page number, a template is rendered which should never happen in case of REST APIs.
```
http http://localhost:5000/api/v1/event/page/2
HTTP/1.0 404 NOT FOUND
Content-Length: 1062
Content-Type: text/html; charset=utf-8
Date: Sat, 21 May 2016 07:51:38 GMT
Server: Werkzeug/0.11.7 Python/2.7.10
<!DOCTYPE html>
<html>
<head lang="en">
<meta charset="UTF-8">
<title>You got 404'd</title>
<link href="/admin/static/bootstrap/bootstrap3/css/bootstrap.min.css" rel="stylesheet">
<link href="/static/admin/css/roboto.css" rel="stylesheet">
<link href="/static/admin/css/material-custom.css" rel="stylesheet">
</head>
<body>
<div class="container">
<div class="row">
<div class="col-md-push-3 col-md-6" style="margin-top: 20px;">
<div class="jumbotron">
<h2 style="font-weight: 100; ">Page Not Found</h2>
<p class="lead">Oops, the page you're looking for does not exist.</p>
<p style="font-size: 14px;">
You may want to head back to the homepage and restart your journey.
</p>
<a href="/" class="btn btn-large btn-info" style="background-color: #3f51b5;">
<i class="glyphicon glyphicon-home"></i> Take Me Home
</a>
</div>
</div>
</div>
</div>
</body>
</html>
```
</issue>
<code>
[start of open_event/helpers/object_formatter.py]
1 """Copyright 2015 Rafal Kowalski"""
2 from flask import jsonify
3
4 from .query_filter import QueryFilter
5
6
7 PER_PAGE = 20
8
9
10 class ObjectFormatter(object):
11 """Object formatter class"""
12 @staticmethod
13 def get_json(name, query, request, page=None):
14 """Returns formatted json"""
15 objects = QueryFilter(request.args, query).get_filtered_data()
16 count = objects.count()
17 if not page:
18 return jsonify(
19 {name: [
20 table_object.serialize
21 for table_object in
22 objects]})
23 else:
24 pagination = objects.paginate(page, PER_PAGE)
25 return jsonify({
26 name: [
27 table_object.serialize
28 for table_object in
29 pagination.items
30 ],
31 'total_pages': pagination.pages,
32 'page': pagination.page
33 })
34
[end of open_event/helpers/object_formatter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/open_event/helpers/object_formatter.py b/open_event/helpers/object_formatter.py
--- a/open_event/helpers/object_formatter.py
+++ b/open_event/helpers/object_formatter.py
@@ -21,6 +21,8 @@
for table_object in
objects]})
else:
+ if count <= ((page-1) * PER_PAGE): # no results possible
+ return jsonify({})
pagination = objects.paginate(page, PER_PAGE)
return jsonify({
name: [
| {"golden_diff": "diff --git a/open_event/helpers/object_formatter.py b/open_event/helpers/object_formatter.py\n--- a/open_event/helpers/object_formatter.py\n+++ b/open_event/helpers/object_formatter.py\n@@ -21,6 +21,8 @@\n for table_object in\n objects]})\n else:\n+ if count <= ((page-1) * PER_PAGE): # no results possible\n+ return jsonify({})\n pagination = objects.paginate(page, PER_PAGE)\n return jsonify({\n name: [\n", "issue": "HTML Template rendered when page doesn't exist in API\nIf a paginated API endpoint is called with a non-existant page number, a template is rendered which should never happen in case of REST APIs.\n\n```\nhttp http://localhost:5000/api/v1/event/page/2\nHTTP/1.0 404 NOT FOUND\nContent-Length: 1062\nContent-Type: text/html; charset=utf-8\nDate: Sat, 21 May 2016 07:51:38 GMT\nServer: Werkzeug/0.11.7 Python/2.7.10\n\n<!DOCTYPE html>\n<html>\n<head lang=\"en\">\n <meta charset=\"UTF-8\">\n <title>You got 404'd</title>\n <link href=\"/admin/static/bootstrap/bootstrap3/css/bootstrap.min.css\" rel=\"stylesheet\">\n <link href=\"/static/admin/css/roboto.css\" rel=\"stylesheet\">\n <link href=\"/static/admin/css/material-custom.css\" rel=\"stylesheet\">\n</head>\n<body>\n<div class=\"container\">\n <div class=\"row\">\n <div class=\"col-md-push-3 col-md-6\" style=\"margin-top: 20px;\">\n <div class=\"jumbotron\">\n <h2 style=\"font-weight: 100; \">Page Not Found</h2>\n <p class=\"lead\">Oops, the page you're looking for does not exist.</p>\n <p style=\"font-size: 14px;\">\n You may want to head back to the homepage and restart your journey.\n </p>\n <a href=\"/\" class=\"btn btn-large btn-info\" style=\"background-color: #3f51b5;\">\n <i class=\"glyphicon glyphicon-home\"></i> Take Me Home\n </a>\n </div>\n </div>\n </div>\n</div>\n</body>\n</html>\n```\n\n", "before_files": [{"content": "\"\"\"Copyright 2015 Rafal Kowalski\"\"\"\nfrom flask import jsonify\n\nfrom .query_filter import QueryFilter\n\n\nPER_PAGE = 20\n\n\nclass ObjectFormatter(object):\n \"\"\"Object formatter class\"\"\"\n @staticmethod\n def get_json(name, query, request, page=None):\n \"\"\"Returns formatted json\"\"\"\n objects = QueryFilter(request.args, query).get_filtered_data()\n count = objects.count()\n if not page:\n return jsonify(\n {name: [\n table_object.serialize\n for table_object in\n objects]})\n else:\n pagination = objects.paginate(page, PER_PAGE)\n return jsonify({\n name: [\n table_object.serialize\n for table_object in\n pagination.items\n ],\n 'total_pages': pagination.pages,\n 'page': pagination.page\n })\n", "path": "open_event/helpers/object_formatter.py"}]} | 1,187 | 105 |
gh_patches_debug_2905 | rasdani/github-patches | git_diff | mabel-dev__opteryx-1689 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
🪲 VIEWs load error should be in debug mode only
### Thank you for taking the time to report a problem with Opteryx.
_To help us to respond to your request we ask that you try to provide the below detail about the bug._
**Describe the bug** _A clear and specific description of what the bug is. What the error, incorrect or unexpected behaviour was._
**Expected behaviour** _A clear and concise description of what you expected to happen._
**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._
~~~sql
~~~
**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._
</issue>
<code>
[start of opteryx/planner/views/__init__.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import orjson
14
15 from opteryx.planner.logical_planner import LogicalPlan
16
17
18 def _load_views():
19 try:
20 with open("views.json", "rb") as defs:
21 return orjson.loads(defs.read())
22 except Exception as err:
23 print(f"[OPTERYX] Unable to open views definition file. {err}")
24 return {}
25
26
27 VIEWS = _load_views()
28
29
30 def is_view(view_name: str) -> bool:
31 return view_name in VIEWS
32
33
34 def view_as_plan(view_name: str) -> LogicalPlan:
35 from opteryx.planner.logical_planner import do_logical_planning_phase
36 from opteryx.third_party import sqloxide
37 from opteryx.utils.sql import clean_statement
38 from opteryx.utils.sql import remove_comments
39
40 operation = VIEWS.get(view_name)["statement"]
41
42 clean_sql = clean_statement(remove_comments(operation))
43 parsed_statements = sqloxide.parse_sql(clean_sql, dialect="mysql")
44 logical_plan, _, _ = next(do_logical_planning_phase(parsed_statements))
45
46 return logical_plan
47
[end of opteryx/planner/views/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opteryx/planner/views/__init__.py b/opteryx/planner/views/__init__.py
--- a/opteryx/planner/views/__init__.py
+++ b/opteryx/planner/views/__init__.py
@@ -20,7 +20,7 @@
with open("views.json", "rb") as defs:
return orjson.loads(defs.read())
except Exception as err:
- print(f"[OPTERYX] Unable to open views definition file. {err}")
+ # DEBUG:: log (f"[OPTERYX] Unable to open views definition file. {err}")
return {}
| {"golden_diff": "diff --git a/opteryx/planner/views/__init__.py b/opteryx/planner/views/__init__.py\n--- a/opteryx/planner/views/__init__.py\n+++ b/opteryx/planner/views/__init__.py\n@@ -20,7 +20,7 @@\n with open(\"views.json\", \"rb\") as defs:\n return orjson.loads(defs.read())\n except Exception as err:\n- print(f\"[OPTERYX] Unable to open views definition file. {err}\")\n+ # DEBUG:: log (f\"[OPTERYX] Unable to open views definition file. {err}\")\n return {}\n", "issue": "\ud83e\udeb2 VIEWs load error should be in debug mode only\n### Thank you for taking the time to report a problem with Opteryx.\r\n_To help us to respond to your request we ask that you try to provide the below detail about the bug._\r\n\r\n**Describe the bug** _A clear and specific description of what the bug is. What the error, incorrect or unexpected behaviour was._\r\n\r\n\r\n**Expected behaviour** _A clear and concise description of what you expected to happen._\r\n\r\n\r\n**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._\r\n\r\n~~~sql\r\n\r\n~~~\r\n\r\n**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._\r\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport orjson\n\nfrom opteryx.planner.logical_planner import LogicalPlan\n\n\ndef _load_views():\n try:\n with open(\"views.json\", \"rb\") as defs:\n return orjson.loads(defs.read())\n except Exception as err:\n print(f\"[OPTERYX] Unable to open views definition file. {err}\")\n return {}\n\n\nVIEWS = _load_views()\n\n\ndef is_view(view_name: str) -> bool:\n return view_name in VIEWS\n\n\ndef view_as_plan(view_name: str) -> LogicalPlan:\n from opteryx.planner.logical_planner import do_logical_planning_phase\n from opteryx.third_party import sqloxide\n from opteryx.utils.sql import clean_statement\n from opteryx.utils.sql import remove_comments\n\n operation = VIEWS.get(view_name)[\"statement\"]\n\n clean_sql = clean_statement(remove_comments(operation))\n parsed_statements = sqloxide.parse_sql(clean_sql, dialect=\"mysql\")\n logical_plan, _, _ = next(do_logical_planning_phase(parsed_statements))\n\n return logical_plan\n", "path": "opteryx/planner/views/__init__.py"}]} | 1,156 | 138 |
gh_patches_debug_14047 | rasdani/github-patches | git_diff | liqd__a4-opin-1146 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
add languages to project detail page in wagtail (user manuals)
The titles of the user manuals or only in english and there are no fieldsfor translating them in wagtail. Is it possible to add the languages? :)


</issue>
<code>
[start of home/templatetags/base_tags.py]
1 import bleach
2 import feedparser
3 from dateutil import parser
4 from django import template
5 from django.conf import settings
6
7 from home.models.snippets import NavigationMenu
8
9 register = template.Library()
10
11
12 @register.assignment_tag(takes_context=True)
13 def get_site_root(context):
14 return context['request'].site.root_page
15
16
17 @register.inclusion_tag('tags/top_menu.html', takes_context=True)
18 def top_menu(context, parent, calling_page=None):
19 menuitems = parent.get_children().live().in_menu().specific()
20
21 return {
22 'calling_page': calling_page,
23 'menuitems': menuitems,
24 'request': context['request'],
25 }
26
27
28 @register.inclusion_tag('includes/rss_import.html', takes_context=True)
29 def import_rss(context, rss_import):
30
31 feeds = feedparser.parse(rss_import.url)
32 entries = feeds.entries[:2]
33
34 result = []
35
36 for entry in entries:
37 try:
38 published = parser.parse(entry["published"])
39 except Exception:
40 published = ''
41
42 result.append({
43 'published': published,
44 'title': entry.title,
45 'link': entry.link,
46 'description': bleach.clean(entry.summary,
47 tags=[],
48 attributes={},
49 styles=[],
50 strip=True
51 )
52 }
53 )
54
55 return {
56 'title': rss_import.translated_rss_title,
57 'entries': result
58 }
59
60
61 @register.assignment_tag(takes_context=False)
62 def load_site_menu(menu_name):
63 menu = NavigationMenu.objects.filter(menu_name=menu_name)
64
65 if menu:
66 return menu[0].menu_items.all()
67 else:
68 return None
69
70
71 @register.filter(name='clear_class')
72 def clear_class(columns_per_row, count):
73 if (count-1) % (12/int(columns_per_row)) == 0:
74 return "m-clear"
75 else:
76 return ""
77
78
79 @register.simple_tag
80 def settings_value(name):
81 return getattr(settings, name, "")
82
[end of home/templatetags/base_tags.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/home/templatetags/base_tags.py b/home/templatetags/base_tags.py
--- a/home/templatetags/base_tags.py
+++ b/home/templatetags/base_tags.py
@@ -3,12 +3,22 @@
from dateutil import parser
from django import template
from django.conf import settings
+from django.core.exceptions import ObjectDoesNotExist
+from home.models.manual_pages import ManualsDetailPage
from home.models.snippets import NavigationMenu
register = template.Library()
[email protected]_tag(takes_context=True)
+def get_page_name(context, page):
+ try:
+ return ManualsDetailPage.objects.get(id=page.id).translated_title
+ except ObjectDoesNotExist:
+ return page
+
+
@register.assignment_tag(takes_context=True)
def get_site_root(context):
return context['request'].site.root_page
| {"golden_diff": "diff --git a/home/templatetags/base_tags.py b/home/templatetags/base_tags.py\n--- a/home/templatetags/base_tags.py\n+++ b/home/templatetags/base_tags.py\n@@ -3,12 +3,22 @@\n from dateutil import parser\n from django import template\n from django.conf import settings\n+from django.core.exceptions import ObjectDoesNotExist\n \n+from home.models.manual_pages import ManualsDetailPage\n from home.models.snippets import NavigationMenu\n \n register = template.Library()\n \n \[email protected]_tag(takes_context=True)\n+def get_page_name(context, page):\n+ try:\n+ return ManualsDetailPage.objects.get(id=page.id).translated_title\n+ except ObjectDoesNotExist:\n+ return page\n+\n+\n @register.assignment_tag(takes_context=True)\n def get_site_root(context):\n return context['request'].site.root_page\n", "issue": "add languages to project detail page in wagtail (user manuals)\nThe titles of the user manuals or only in english and there are no fieldsfor translating them in wagtail. Is it possible to add the languages? :)\r\n\r\n\r\n\n", "before_files": [{"content": "import bleach\nimport feedparser\nfrom dateutil import parser\nfrom django import template\nfrom django.conf import settings\n\nfrom home.models.snippets import NavigationMenu\n\nregister = template.Library()\n\n\[email protected]_tag(takes_context=True)\ndef get_site_root(context):\n return context['request'].site.root_page\n\n\[email protected]_tag('tags/top_menu.html', takes_context=True)\ndef top_menu(context, parent, calling_page=None):\n menuitems = parent.get_children().live().in_menu().specific()\n\n return {\n 'calling_page': calling_page,\n 'menuitems': menuitems,\n 'request': context['request'],\n }\n\n\[email protected]_tag('includes/rss_import.html', takes_context=True)\ndef import_rss(context, rss_import):\n\n feeds = feedparser.parse(rss_import.url)\n entries = feeds.entries[:2]\n\n result = []\n\n for entry in entries:\n try:\n published = parser.parse(entry[\"published\"])\n except Exception:\n published = ''\n\n result.append({\n 'published': published,\n 'title': entry.title,\n 'link': entry.link,\n 'description': bleach.clean(entry.summary,\n tags=[],\n attributes={},\n styles=[],\n strip=True\n )\n }\n )\n\n return {\n 'title': rss_import.translated_rss_title,\n 'entries': result\n }\n\n\[email protected]_tag(takes_context=False)\ndef load_site_menu(menu_name):\n menu = NavigationMenu.objects.filter(menu_name=menu_name)\n\n if menu:\n return menu[0].menu_items.all()\n else:\n return None\n\n\[email protected](name='clear_class')\ndef clear_class(columns_per_row, count):\n if (count-1) % (12/int(columns_per_row)) == 0:\n return \"m-clear\"\n else:\n return \"\"\n\n\[email protected]_tag\ndef settings_value(name):\n return getattr(settings, name, \"\")\n", "path": "home/templatetags/base_tags.py"}]} | 1,295 | 192 |
gh_patches_debug_10988 | rasdani/github-patches | git_diff | chainer__chainer-7738 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve chainerx import check
#7518 has disabled the following:
1. Check out the source code and `import chainer` without pip install.
2. pip install chainer in non-editable mode (with/without chainerx) and import chainer from the source root directory.
\1. should be supported. In 2., we should give more comprehensible error.
</issue>
<code>
[start of chainerx/__init__.py]
1 import os
2 import warnings
3
4 from chainerx import _build_info
5
6
7 if _build_info.build_chainerx:
8 from chainerx import _core
9 _available = True
10 else:
11 _available = False
12
13
14 if _available:
15 from numpy import dtype # NOQA
16 from numpy import ( # NOQA
17 Inf, Infinity, NAN, NINF, NZERO, NaN, PINF, PZERO,
18 e, euler_gamma,
19 inf, infty, nan,
20 newaxis,
21 pi)
22 from numpy import (
23 bool_, int8, int16, int32, int64, uint8, float16, float32, float64) # NOQA
24 all_dtypes = (
25 bool_, int8, int16, int32, int64, uint8, float16, float32, float64)
26
27 from chainerx._core import * # NOQA
28 from chainerx._core import _to_cupy # NOQA
29
30 from builtins import bool, int, float # NOQA
31
32 from chainerx import _device # NOQA
33
34 from chainerx.creation.from_data import asanyarray # NOQA
35 from chainerx.creation.from_data import fromfile # NOQA
36 from chainerx.creation.from_data import fromfunction # NOQA
37 from chainerx.creation.from_data import fromiter # NOQA
38 from chainerx.creation.from_data import fromstring # NOQA
39 from chainerx.creation.from_data import loadtxt # NOQA
40
41 from chainerx.manipulation.shape import ravel # NOQA
42
43 from chainerx.math.misc import clip # NOQA
44
45 from chainerx import random # NOQA
46
47 _global_context = _core.Context()
48 _core.set_global_default_context(_global_context)
49
50 # Implements ndarray methods in Python
51 from chainerx import _ndarray
52 _ndarray.populate()
53
54 # Temporary workaround implementations that fall back to NumPy/CuPy's
55 # respective functions.
56 from chainerx import _fallback_workarounds
57 _fallback_workarounds.populate()
58
59 # Dynamically inject docstrings
60 from chainerx import _docs
61 _docs.set_docs()
62
63 from chainerx import _cuda
64 # Share memory pool with CuPy.
65 if bool(int(os.getenv('CHAINERX_CUDA_CUPY_SHARE_ALLOCATOR', '0'))):
66 _cuda.cupy_share_allocator()
67 else:
68 class ndarray(object):
69
70 """Dummy class for type testing."""
71
72 def __init__(self, *args, **kwargs):
73 raise RuntimeError('chainerx is not available.')
74
75
76 def is_available():
77 return _available
78
79
80 if _available and _core._is_debug():
81 # Warn if the ChainerX core binary is built in debug mode
82 warnings.warn(
83 'ChainerX core binary is built in debug mode.', stacklevel=2)
84
[end of chainerx/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainerx/__init__.py b/chainerx/__init__.py
--- a/chainerx/__init__.py
+++ b/chainerx/__init__.py
@@ -1,8 +1,22 @@
import os
import warnings
-from chainerx import _build_info
-
+try:
+ from chainerx import _build_info
+except ImportError:
+ raise ImportError(
+ '''\
+Cannot import chainerx because _build_info.py cannot be found.
+
+The chainer and chainerx module being imported was not correctly \
+installed by `pip install`.
+
+It may be caused by either of the following reasons.
+
+1. You are directly importing chainer source files without installing it with \
+`pip install`.
+2. You installed chainer in non-editable mode (`pip install` without -e) and \
+are importing chainer source files instead of the installed module.''')
if _build_info.build_chainerx:
from chainerx import _core
| {"golden_diff": "diff --git a/chainerx/__init__.py b/chainerx/__init__.py\n--- a/chainerx/__init__.py\n+++ b/chainerx/__init__.py\n@@ -1,8 +1,22 @@\n import os\n import warnings\n \n-from chainerx import _build_info\n-\n+try:\n+ from chainerx import _build_info\n+except ImportError:\n+ raise ImportError(\n+ '''\\\n+Cannot import chainerx because _build_info.py cannot be found.\n+\n+The chainer and chainerx module being imported was not correctly \\\n+installed by `pip install`.\n+\n+It may be caused by either of the following reasons.\n+\n+1. You are directly importing chainer source files without installing it with \\\n+`pip install`.\n+2. You installed chainer in non-editable mode (`pip install` without -e) and \\\n+are importing chainer source files instead of the installed module.''')\n \n if _build_info.build_chainerx:\n from chainerx import _core\n", "issue": "Improve chainerx import check\n#7518 has disabled the following:\r\n1. Check out the source code and `import chainer` without pip install.\r\n2. pip install chainer in non-editable mode (with/without chainerx) and import chainer from the source root directory.\r\n\r\n\\1. should be supported. In 2., we should give more comprehensible error.\n", "before_files": [{"content": "import os\nimport warnings\n\nfrom chainerx import _build_info\n\n\nif _build_info.build_chainerx:\n from chainerx import _core\n _available = True\nelse:\n _available = False\n\n\nif _available:\n from numpy import dtype # NOQA\n from numpy import ( # NOQA\n Inf, Infinity, NAN, NINF, NZERO, NaN, PINF, PZERO,\n e, euler_gamma,\n inf, infty, nan,\n newaxis,\n pi)\n from numpy import (\n bool_, int8, int16, int32, int64, uint8, float16, float32, float64) # NOQA\n all_dtypes = (\n bool_, int8, int16, int32, int64, uint8, float16, float32, float64)\n\n from chainerx._core import * # NOQA\n from chainerx._core import _to_cupy # NOQA\n\n from builtins import bool, int, float # NOQA\n\n from chainerx import _device # NOQA\n\n from chainerx.creation.from_data import asanyarray # NOQA\n from chainerx.creation.from_data import fromfile # NOQA\n from chainerx.creation.from_data import fromfunction # NOQA\n from chainerx.creation.from_data import fromiter # NOQA\n from chainerx.creation.from_data import fromstring # NOQA\n from chainerx.creation.from_data import loadtxt # NOQA\n\n from chainerx.manipulation.shape import ravel # NOQA\n\n from chainerx.math.misc import clip # NOQA\n\n from chainerx import random # NOQA\n\n _global_context = _core.Context()\n _core.set_global_default_context(_global_context)\n\n # Implements ndarray methods in Python\n from chainerx import _ndarray\n _ndarray.populate()\n\n # Temporary workaround implementations that fall back to NumPy/CuPy's\n # respective functions.\n from chainerx import _fallback_workarounds\n _fallback_workarounds.populate()\n\n # Dynamically inject docstrings\n from chainerx import _docs\n _docs.set_docs()\n\n from chainerx import _cuda\n # Share memory pool with CuPy.\n if bool(int(os.getenv('CHAINERX_CUDA_CUPY_SHARE_ALLOCATOR', '0'))):\n _cuda.cupy_share_allocator()\nelse:\n class ndarray(object):\n\n \"\"\"Dummy class for type testing.\"\"\"\n\n def __init__(self, *args, **kwargs):\n raise RuntimeError('chainerx is not available.')\n\n\ndef is_available():\n return _available\n\n\nif _available and _core._is_debug():\n # Warn if the ChainerX core binary is built in debug mode\n warnings.warn(\n 'ChainerX core binary is built in debug mode.', stacklevel=2)\n", "path": "chainerx/__init__.py"}]} | 1,451 | 223 |
gh_patches_debug_28311 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-3152 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
when bplan is archived, office worker update email is sent
**URL:**
**user:**
**expected behaviour:** Office workers only get an update email when bplan is "published, changed or created as draft"
**behaviour:** The automatic archiving also changes the bplan and by that triggers the email. Thinking about it, it should also trigger the get-point thing (and might result in failure errors being thrown).
**important screensize:**
**device & browser:**
**Comment/Question:**
Screenshot?
</issue>
<code>
[start of meinberlin/apps/bplan/signals.py]
1 from django.db.models.signals import post_save
2 from django.dispatch import receiver
3
4 from . import emails
5 from . import tasks
6 from .models import Bplan
7 from .models import Statement
8
9
10 @receiver(post_save, sender=Bplan)
11 def get_location(sender, instance, update_fields, **kwargs):
12 if instance.identifier and (not update_fields or
13 'point' not in update_fields):
14 tasks.get_location_information(instance.pk)
15
16
17 @receiver(post_save, sender=Statement)
18 def send_notification(sender, instance, created, **kwargs):
19 if created:
20 emails.OfficeWorkerNotification.send(instance)
21
22 if instance.email:
23 emails.SubmitterConfirmation.send(instance)
24
25
26 @receiver(post_save, sender=Bplan)
27 def send_update(sender, instance, update_fields, **kwargs):
28 if not update_fields or 'point' not in update_fields:
29 emails.OfficeWorkerUpdateConfirmation.send(instance)
30
[end of meinberlin/apps/bplan/signals.py]
[start of meinberlin/apps/bplan/management/commands/bplan_auto_archive.py]
1 from datetime import timedelta
2
3 from django.core.management.base import BaseCommand
4 from django.utils import timezone
5
6 from meinberlin.apps.bplan import models as bplan_models
7
8
9 class Command(BaseCommand):
10 help = 'Archive finished bplan projects and delete old statements.'
11
12 def handle(self, *args, **options):
13 bplans = bplan_models.Bplan.objects.filter(is_draft=False)
14 for bplan in bplans:
15 if bplan.has_finished and not bplan.is_archived:
16 bplan.is_archived = True
17 bplan.save()
18 self.stdout.write('Archived bplan {}.'.format(bplan.name))
19
20 # Delete statements of archived projects
21 # To prevent deleting statements that have not been sent by mail yet
22 # only statements older then 48h are deleted.
23 num_deleted, _ = bplan_models.Statement.objects\
24 .filter(module__project__is_archived=True)\
25 .filter(created__lt=timezone.now() - timedelta(hours=48))\
26 .delete()
27 if num_deleted:
28 self.stdout.write('Deleted {} statements from archived bplans.'
29 .format(num_deleted))
30
[end of meinberlin/apps/bplan/management/commands/bplan_auto_archive.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/meinberlin/apps/bplan/management/commands/bplan_auto_archive.py b/meinberlin/apps/bplan/management/commands/bplan_auto_archive.py
--- a/meinberlin/apps/bplan/management/commands/bplan_auto_archive.py
+++ b/meinberlin/apps/bplan/management/commands/bplan_auto_archive.py
@@ -14,7 +14,9 @@
for bplan in bplans:
if bplan.has_finished and not bplan.is_archived:
bplan.is_archived = True
- bplan.save()
+ bplan.save(
+ update_fields=['is_archived']
+ )
self.stdout.write('Archived bplan {}.'.format(bplan.name))
# Delete statements of archived projects
diff --git a/meinberlin/apps/bplan/signals.py b/meinberlin/apps/bplan/signals.py
--- a/meinberlin/apps/bplan/signals.py
+++ b/meinberlin/apps/bplan/signals.py
@@ -10,7 +10,8 @@
@receiver(post_save, sender=Bplan)
def get_location(sender, instance, update_fields, **kwargs):
if instance.identifier and (not update_fields or
- 'point' not in update_fields):
+ ('point' not in update_fields and
+ 'is_archived' not in update_fields)):
tasks.get_location_information(instance.pk)
@@ -25,5 +26,6 @@
@receiver(post_save, sender=Bplan)
def send_update(sender, instance, update_fields, **kwargs):
- if not update_fields or 'point' not in update_fields:
+ if (not update_fields or ('point' not in update_fields and
+ 'is_archived' not in update_fields)):
emails.OfficeWorkerUpdateConfirmation.send(instance)
| {"golden_diff": "diff --git a/meinberlin/apps/bplan/management/commands/bplan_auto_archive.py b/meinberlin/apps/bplan/management/commands/bplan_auto_archive.py\n--- a/meinberlin/apps/bplan/management/commands/bplan_auto_archive.py\n+++ b/meinberlin/apps/bplan/management/commands/bplan_auto_archive.py\n@@ -14,7 +14,9 @@\n for bplan in bplans:\n if bplan.has_finished and not bplan.is_archived:\n bplan.is_archived = True\n- bplan.save()\n+ bplan.save(\n+ update_fields=['is_archived']\n+ )\n self.stdout.write('Archived bplan {}.'.format(bplan.name))\n \n # Delete statements of archived projects\ndiff --git a/meinberlin/apps/bplan/signals.py b/meinberlin/apps/bplan/signals.py\n--- a/meinberlin/apps/bplan/signals.py\n+++ b/meinberlin/apps/bplan/signals.py\n@@ -10,7 +10,8 @@\n @receiver(post_save, sender=Bplan)\n def get_location(sender, instance, update_fields, **kwargs):\n if instance.identifier and (not update_fields or\n- 'point' not in update_fields):\n+ ('point' not in update_fields and\n+ 'is_archived' not in update_fields)):\n tasks.get_location_information(instance.pk)\n \n \n@@ -25,5 +26,6 @@\n \n @receiver(post_save, sender=Bplan)\n def send_update(sender, instance, update_fields, **kwargs):\n- if not update_fields or 'point' not in update_fields:\n+ if (not update_fields or ('point' not in update_fields and\n+ 'is_archived' not in update_fields)):\n emails.OfficeWorkerUpdateConfirmation.send(instance)\n", "issue": "when bplan is archived, office worker update email is sent\n**URL:** \r\n**user:** \r\n**expected behaviour:** Office workers only get an update email when bplan is \"published, changed or created as draft\"\r\n**behaviour:** The automatic archiving also changes the bplan and by that triggers the email. Thinking about it, it should also trigger the get-point thing (and might result in failure errors being thrown).\r\n**important screensize:**\r\n**device & browser:** \r\n**Comment/Question:** \r\n\r\nScreenshot?\r\n\n", "before_files": [{"content": "from django.db.models.signals import post_save\nfrom django.dispatch import receiver\n\nfrom . import emails\nfrom . import tasks\nfrom .models import Bplan\nfrom .models import Statement\n\n\n@receiver(post_save, sender=Bplan)\ndef get_location(sender, instance, update_fields, **kwargs):\n if instance.identifier and (not update_fields or\n 'point' not in update_fields):\n tasks.get_location_information(instance.pk)\n\n\n@receiver(post_save, sender=Statement)\ndef send_notification(sender, instance, created, **kwargs):\n if created:\n emails.OfficeWorkerNotification.send(instance)\n\n if instance.email:\n emails.SubmitterConfirmation.send(instance)\n\n\n@receiver(post_save, sender=Bplan)\ndef send_update(sender, instance, update_fields, **kwargs):\n if not update_fields or 'point' not in update_fields:\n emails.OfficeWorkerUpdateConfirmation.send(instance)\n", "path": "meinberlin/apps/bplan/signals.py"}, {"content": "from datetime import timedelta\n\nfrom django.core.management.base import BaseCommand\nfrom django.utils import timezone\n\nfrom meinberlin.apps.bplan import models as bplan_models\n\n\nclass Command(BaseCommand):\n help = 'Archive finished bplan projects and delete old statements.'\n\n def handle(self, *args, **options):\n bplans = bplan_models.Bplan.objects.filter(is_draft=False)\n for bplan in bplans:\n if bplan.has_finished and not bplan.is_archived:\n bplan.is_archived = True\n bplan.save()\n self.stdout.write('Archived bplan {}.'.format(bplan.name))\n\n # Delete statements of archived projects\n # To prevent deleting statements that have not been sent by mail yet\n # only statements older then 48h are deleted.\n num_deleted, _ = bplan_models.Statement.objects\\\n .filter(module__project__is_archived=True)\\\n .filter(created__lt=timezone.now() - timedelta(hours=48))\\\n .delete()\n if num_deleted:\n self.stdout.write('Deleted {} statements from archived bplans.'\n .format(num_deleted))\n", "path": "meinberlin/apps/bplan/management/commands/bplan_auto_archive.py"}]} | 1,222 | 396 |
gh_patches_debug_31063 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-781 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Single-subnet ELB detection
*cfn-lint version: 0.10.3*
*Description of issue.*
When configuring `AWS::ElasticLoadBalancingV2::LoadBalancer`, if you specify only 1 subnet, you get back from AWS:
> 2019-02-05 12:22:39 +1100 Elb AWS::ElasticLoadBalancingV2::LoadBalancer CREATE_FAILED At least two subnets in two different Availability Zones must be specified (Service: AmazonElasticLoadBalancingV2; Status Code: 400; Error Code: ValidationError; Request ID: ...)
This could be covered by a cfn-lint check.
</issue>
<code>
[start of src/cfnlint/rules/resources/elb/Elb.py]
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 import six
18 from cfnlint import CloudFormationLintRule
19 from cfnlint import RuleMatch
20
21
22 class Elb(CloudFormationLintRule):
23 """Check if Elb Resource Properties"""
24 id = 'E2503'
25 shortdesc = 'Resource ELB Properties'
26 description = 'See if Elb Resource Properties are set correctly \
27 HTTPS has certificate HTTP has no certificate'
28 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-elb-listener.html'
29 tags = ['properties', 'elb']
30
31 def check_protocol_value(self, value, path, **kwargs):
32 """
33 Check Protocol Value
34 """
35 matches = []
36 if isinstance(value, six.string_types):
37 if value.upper() not in kwargs['accepted_protocols']:
38 message = 'Protocol must be {0} is invalid at {1}'
39 matches.append(RuleMatch(path, message.format((', '.join(kwargs['accepted_protocols'])), ('/'.join(map(str, path))))))
40 elif value.upper() in kwargs['certificate_protocols']:
41 if not kwargs['certificates']:
42 message = 'Certificates should be specified when using HTTPS for {0}'
43 matches.append(RuleMatch(path, message.format(('/'.join(map(str, path))))))
44
45 return matches
46
47 def match(self, cfn):
48 """Check ELB Resource Parameters"""
49
50 matches = []
51
52 results = cfn.get_resource_properties(['AWS::ElasticLoadBalancingV2::Listener'])
53 for result in results:
54 matches.extend(
55 cfn.check_value(
56 result['Value'], 'Protocol', result['Path'],
57 check_value=self.check_protocol_value,
58 accepted_protocols=['HTTP', 'HTTPS', 'TCP', 'TLS'],
59 certificate_protocols=['HTTPS', 'TLS'],
60 certificates=result['Value'].get('Certificates')))
61
62 results = cfn.get_resource_properties(['AWS::ElasticLoadBalancing::LoadBalancer', 'Listeners'])
63 for result in results:
64 if isinstance(result['Value'], list):
65 for index, listener in enumerate(result['Value']):
66 matches.extend(
67 cfn.check_value(
68 listener, 'Protocol', result['Path'] + [index],
69 check_value=self.check_protocol_value,
70 accepted_protocols=['HTTP', 'HTTPS', 'TCP', 'SSL'],
71 certificate_protocols=['HTTPS', 'SSL'],
72 certificates=listener.get('SSLCertificateId')))
73
74 results = cfn.get_resource_properties(['AWS::ElasticLoadBalancingV2::LoadBalancer'])
75 for result in results:
76 properties = result['Value']
77 if 'Type' in properties and properties['Type'] == 'network':
78 if 'SecurityGroups' in properties:
79 path = result['Path'] + ['SecurityGroups']
80 matches.append(RuleMatch(path, 'Security groups are not supported for load balancers with type "network"'))
81
82 return matches
83
[end of src/cfnlint/rules/resources/elb/Elb.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/resources/elb/Elb.py b/src/cfnlint/rules/resources/elb/Elb.py
--- a/src/cfnlint/rules/resources/elb/Elb.py
+++ b/src/cfnlint/rules/resources/elb/Elb.py
@@ -44,6 +44,24 @@
return matches
+ def check_alb_subnets(self, props, path):
+ """ Validate at least two subnets with ALBs"""
+ matches = []
+ elb_type = props.get('Type')
+ if elb_type == 'application':
+ subnets = props.get('Subnets')
+ if isinstance(subnets, list):
+ if len(subnets) < 2:
+ path = path + ['Subnets']
+ matches.append(RuleMatch(path, 'You must specify at least two Subnets for load balancers with type "application"'))
+ subnet_mappings = props.get('SubnetMappings')
+ if isinstance(subnet_mappings, list):
+ if len(subnet_mappings) < 2:
+ path = path + ['SubnetMappings']
+ matches.append(RuleMatch(path, 'You must specify at least two SubnetMappings for load balancers with type "application"'))
+
+ return matches
+
def match(self, cfn):
"""Check ELB Resource Parameters"""
@@ -74,9 +92,12 @@
results = cfn.get_resource_properties(['AWS::ElasticLoadBalancingV2::LoadBalancer'])
for result in results:
properties = result['Value']
- if 'Type' in properties and properties['Type'] == 'network':
+ elb_type = properties.get('Type')
+ if elb_type == 'network':
if 'SecurityGroups' in properties:
path = result['Path'] + ['SecurityGroups']
matches.append(RuleMatch(path, 'Security groups are not supported for load balancers with type "network"'))
+ matches.extend(self.check_alb_subnets(properties, result['Path']))
+
return matches
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/elb/Elb.py b/src/cfnlint/rules/resources/elb/Elb.py\n--- a/src/cfnlint/rules/resources/elb/Elb.py\n+++ b/src/cfnlint/rules/resources/elb/Elb.py\n@@ -44,6 +44,24 @@\n \n return matches\n \n+ def check_alb_subnets(self, props, path):\n+ \"\"\" Validate at least two subnets with ALBs\"\"\"\n+ matches = []\n+ elb_type = props.get('Type')\n+ if elb_type == 'application':\n+ subnets = props.get('Subnets')\n+ if isinstance(subnets, list):\n+ if len(subnets) < 2:\n+ path = path + ['Subnets']\n+ matches.append(RuleMatch(path, 'You must specify at least two Subnets for load balancers with type \"application\"'))\n+ subnet_mappings = props.get('SubnetMappings')\n+ if isinstance(subnet_mappings, list):\n+ if len(subnet_mappings) < 2:\n+ path = path + ['SubnetMappings']\n+ matches.append(RuleMatch(path, 'You must specify at least two SubnetMappings for load balancers with type \"application\"'))\n+\n+ return matches\n+\n def match(self, cfn):\n \"\"\"Check ELB Resource Parameters\"\"\"\n \n@@ -74,9 +92,12 @@\n results = cfn.get_resource_properties(['AWS::ElasticLoadBalancingV2::LoadBalancer'])\n for result in results:\n properties = result['Value']\n- if 'Type' in properties and properties['Type'] == 'network':\n+ elb_type = properties.get('Type')\n+ if elb_type == 'network':\n if 'SecurityGroups' in properties:\n path = result['Path'] + ['SecurityGroups']\n matches.append(RuleMatch(path, 'Security groups are not supported for load balancers with type \"network\"'))\n \n+ matches.extend(self.check_alb_subnets(properties, result['Path']))\n+\n return matches\n", "issue": "Single-subnet ELB detection\n*cfn-lint version: 0.10.3*\r\n\r\n*Description of issue.*\r\n\r\nWhen configuring `AWS::ElasticLoadBalancingV2::LoadBalancer`, if you specify only 1 subnet, you get back from AWS:\r\n\r\n> 2019-02-05 12:22:39 +1100 Elb AWS::ElasticLoadBalancingV2::LoadBalancer CREATE_FAILED At least two subnets in two different Availability Zones must be specified (Service: AmazonElasticLoadBalancingV2; Status Code: 400; Error Code: ValidationError; Request ID: ...)\r\n\r\nThis could be covered by a cfn-lint check.\n", "before_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport six\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\n\nclass Elb(CloudFormationLintRule):\n \"\"\"Check if Elb Resource Properties\"\"\"\n id = 'E2503'\n shortdesc = 'Resource ELB Properties'\n description = 'See if Elb Resource Properties are set correctly \\\nHTTPS has certificate HTTP has no certificate'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-elb-listener.html'\n tags = ['properties', 'elb']\n\n def check_protocol_value(self, value, path, **kwargs):\n \"\"\"\n Check Protocol Value\n \"\"\"\n matches = []\n if isinstance(value, six.string_types):\n if value.upper() not in kwargs['accepted_protocols']:\n message = 'Protocol must be {0} is invalid at {1}'\n matches.append(RuleMatch(path, message.format((', '.join(kwargs['accepted_protocols'])), ('/'.join(map(str, path))))))\n elif value.upper() in kwargs['certificate_protocols']:\n if not kwargs['certificates']:\n message = 'Certificates should be specified when using HTTPS for {0}'\n matches.append(RuleMatch(path, message.format(('/'.join(map(str, path))))))\n\n return matches\n\n def match(self, cfn):\n \"\"\"Check ELB Resource Parameters\"\"\"\n\n matches = []\n\n results = cfn.get_resource_properties(['AWS::ElasticLoadBalancingV2::Listener'])\n for result in results:\n matches.extend(\n cfn.check_value(\n result['Value'], 'Protocol', result['Path'],\n check_value=self.check_protocol_value,\n accepted_protocols=['HTTP', 'HTTPS', 'TCP', 'TLS'],\n certificate_protocols=['HTTPS', 'TLS'],\n certificates=result['Value'].get('Certificates')))\n\n results = cfn.get_resource_properties(['AWS::ElasticLoadBalancing::LoadBalancer', 'Listeners'])\n for result in results:\n if isinstance(result['Value'], list):\n for index, listener in enumerate(result['Value']):\n matches.extend(\n cfn.check_value(\n listener, 'Protocol', result['Path'] + [index],\n check_value=self.check_protocol_value,\n accepted_protocols=['HTTP', 'HTTPS', 'TCP', 'SSL'],\n certificate_protocols=['HTTPS', 'SSL'],\n certificates=listener.get('SSLCertificateId')))\n\n results = cfn.get_resource_properties(['AWS::ElasticLoadBalancingV2::LoadBalancer'])\n for result in results:\n properties = result['Value']\n if 'Type' in properties and properties['Type'] == 'network':\n if 'SecurityGroups' in properties:\n path = result['Path'] + ['SecurityGroups']\n matches.append(RuleMatch(path, 'Security groups are not supported for load balancers with type \"network\"'))\n\n return matches\n", "path": "src/cfnlint/rules/resources/elb/Elb.py"}]} | 1,684 | 452 |
gh_patches_debug_42674 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-2517 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[FEATURE] Add a pattern for result_id of ChosenInlineResultHandler
In this way you can separate the results of your inline queries and redirect them to specific function as it happens for callback queries.
</issue>
<code>
[start of telegram/ext/choseninlineresulthandler.py]
1 #!/usr/bin/env python
2 #
3 # A library that provides a Python interface to the Telegram Bot API
4 # Copyright (C) 2015-2021
5 # Leandro Toledo de Souza <[email protected]>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Lesser Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Lesser Public License for more details.
16 #
17 # You should have received a copy of the GNU Lesser Public License
18 # along with this program. If not, see [http://www.gnu.org/licenses/].
19 """This module contains the ChosenInlineResultHandler class."""
20
21 from typing import Optional, TypeVar, Union
22
23 from telegram import Update
24
25 from .handler import Handler
26
27 RT = TypeVar('RT')
28
29
30 class ChosenInlineResultHandler(Handler[Update]):
31 """Handler class to handle Telegram updates that contain a chosen inline result.
32
33 Note:
34 :attr:`pass_user_data` and :attr:`pass_chat_data` determine whether a ``dict`` you
35 can use to keep any data in will be sent to the :attr:`callback` function. Related to
36 either the user or the chat that the update was sent in. For each update from the same user
37 or in the same chat, it will be the same ``dict``.
38
39 Note that this is DEPRECATED, and you should use context based callbacks. See
40 https://git.io/fxJuV for more info.
41
42 Warning:
43 When setting ``run_async`` to :obj:`True`, you cannot rely on adding custom
44 attributes to :class:`telegram.ext.CallbackContext`. See its docs for more info.
45
46 Args:
47 callback (:obj:`callable`): The callback function for this handler. Will be called when
48 :attr:`check_update` has determined that an update should be processed by this handler.
49 Callback signature for context based API:
50
51 ``def callback(update: Update, context: CallbackContext)``
52
53 The return value of the callback is usually ignored except for the special case of
54 :class:`telegram.ext.ConversationHandler`.
55 pass_update_queue (:obj:`bool`, optional): If set to :obj:`True`, a keyword argument called
56 ``update_queue`` will be passed to the callback function. It will be the ``Queue``
57 instance used by the :class:`telegram.ext.Updater` and :class:`telegram.ext.Dispatcher`
58 that contains new updates which can be used to insert updates. Default is :obj:`False`.
59 DEPRECATED: Please switch to context based callbacks.
60 pass_job_queue (:obj:`bool`, optional): If set to :obj:`True`, a keyword argument called
61 ``job_queue`` will be passed to the callback function. It will be a
62 :class:`telegram.ext.JobQueue` instance created by the :class:`telegram.ext.Updater`
63 which can be used to schedule new jobs. Default is :obj:`False`.
64 DEPRECATED: Please switch to context based callbacks.
65 pass_user_data (:obj:`bool`, optional): If set to :obj:`True`, a keyword argument called
66 ``user_data`` will be passed to the callback function. Default is :obj:`False`.
67 DEPRECATED: Please switch to context based callbacks.
68 pass_chat_data (:obj:`bool`, optional): If set to :obj:`True`, a keyword argument called
69 ``chat_data`` will be passed to the callback function. Default is :obj:`False`.
70 DEPRECATED: Please switch to context based callbacks.
71 run_async (:obj:`bool`): Determines whether the callback will run asynchronously.
72 Defaults to :obj:`False`.
73
74 Attributes:
75 callback (:obj:`callable`): The callback function for this handler.
76 pass_update_queue (:obj:`bool`): Determines whether ``update_queue`` will be
77 passed to the callback function.
78 pass_job_queue (:obj:`bool`): Determines whether ``job_queue`` will be passed to
79 the callback function.
80 pass_user_data (:obj:`bool`): Determines whether ``user_data`` will be passed to
81 the callback function.
82 pass_chat_data (:obj:`bool`): Determines whether ``chat_data`` will be passed to
83 the callback function.
84 run_async (:obj:`bool`): Determines whether the callback will run asynchronously.
85
86 """
87
88 def check_update(self, update: object) -> Optional[Union[bool, object]]:
89 """Determines whether an update should be passed to this handlers :attr:`callback`.
90
91 Args:
92 update (:class:`telegram.Update` | :obj:`object`): Incoming update.
93
94 Returns:
95 :obj:`bool`
96
97 """
98 return isinstance(update, Update) and update.chosen_inline_result
99
[end of telegram/ext/choseninlineresulthandler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/telegram/ext/choseninlineresulthandler.py b/telegram/ext/choseninlineresulthandler.py
--- a/telegram/ext/choseninlineresulthandler.py
+++ b/telegram/ext/choseninlineresulthandler.py
@@ -17,15 +17,19 @@
# You should have received a copy of the GNU Lesser Public License
# along with this program. If not, see [http://www.gnu.org/licenses/].
"""This module contains the ChosenInlineResultHandler class."""
-
-from typing import Optional, TypeVar, Union
+import re
+from typing import Optional, TypeVar, Union, Callable, TYPE_CHECKING, Pattern, Match, cast
from telegram import Update
+from telegram.utils.helpers import DefaultValue, DEFAULT_FALSE
from .handler import Handler
RT = TypeVar('RT')
+if TYPE_CHECKING:
+ from telegram.ext import CallbackContext, Dispatcher
+
class ChosenInlineResultHandler(Handler[Update]):
"""Handler class to handle Telegram updates that contain a chosen inline result.
@@ -70,6 +74,12 @@
DEPRECATED: Please switch to context based callbacks.
run_async (:obj:`bool`): Determines whether the callback will run asynchronously.
Defaults to :obj:`False`.
+ pattern (:obj:`str` | `Pattern`, optional): Regex pattern. If not :obj:`None`, ``re.match``
+ is used on :attr:`telegram.ChosenInlineResult.result_id` to determine if an update
+ should be handled by this handler. This is accessible in the callback as
+ :attr:`telegram.ext.CallbackContext.matches`.
+
+ .. versionadded:: 13.6
Attributes:
callback (:obj:`callable`): The callback function for this handler.
@@ -82,9 +92,37 @@
pass_chat_data (:obj:`bool`): Determines whether ``chat_data`` will be passed to
the callback function.
run_async (:obj:`bool`): Determines whether the callback will run asynchronously.
+ pattern (`Pattern`): Optional. Regex pattern to test
+ :attr:`telegram.ChosenInlineResult.result_id` against.
+
+ .. versionadded:: 13.6
"""
+ def __init__(
+ self,
+ callback: Callable[[Update, 'CallbackContext'], RT],
+ pass_update_queue: bool = False,
+ pass_job_queue: bool = False,
+ pass_user_data: bool = False,
+ pass_chat_data: bool = False,
+ run_async: Union[bool, DefaultValue] = DEFAULT_FALSE,
+ pattern: Union[str, Pattern] = None,
+ ):
+ super().__init__(
+ callback,
+ pass_update_queue=pass_update_queue,
+ pass_job_queue=pass_job_queue,
+ pass_user_data=pass_user_data,
+ pass_chat_data=pass_chat_data,
+ run_async=run_async,
+ )
+
+ if isinstance(pattern, str):
+ pattern = re.compile(pattern)
+
+ self.pattern = pattern
+
def check_update(self, update: object) -> Optional[Union[bool, object]]:
"""Determines whether an update should be passed to this handlers :attr:`callback`.
@@ -95,4 +133,24 @@
:obj:`bool`
"""
- return isinstance(update, Update) and update.chosen_inline_result
+ if isinstance(update, Update) and update.chosen_inline_result:
+ if self.pattern:
+ match = re.match(self.pattern, update.chosen_inline_result.result_id)
+ if match:
+ return match
+ else:
+ return True
+ return None
+
+ def collect_additional_context(
+ self,
+ context: 'CallbackContext',
+ update: Update,
+ dispatcher: 'Dispatcher',
+ check_result: Union[bool, Match],
+ ) -> None:
+ """This function adds the matched regex pattern result to
+ :attr:`telegram.ext.CallbackContext.matches`."""
+ if self.pattern:
+ check_result = cast(Match, check_result)
+ context.matches = [check_result]
| {"golden_diff": "diff --git a/telegram/ext/choseninlineresulthandler.py b/telegram/ext/choseninlineresulthandler.py\n--- a/telegram/ext/choseninlineresulthandler.py\n+++ b/telegram/ext/choseninlineresulthandler.py\n@@ -17,15 +17,19 @@\n # You should have received a copy of the GNU Lesser Public License\n # along with this program. If not, see [http://www.gnu.org/licenses/].\n \"\"\"This module contains the ChosenInlineResultHandler class.\"\"\"\n-\n-from typing import Optional, TypeVar, Union\n+import re\n+from typing import Optional, TypeVar, Union, Callable, TYPE_CHECKING, Pattern, Match, cast\n \n from telegram import Update\n \n+from telegram.utils.helpers import DefaultValue, DEFAULT_FALSE\n from .handler import Handler\n \n RT = TypeVar('RT')\n \n+if TYPE_CHECKING:\n+ from telegram.ext import CallbackContext, Dispatcher\n+\n \n class ChosenInlineResultHandler(Handler[Update]):\n \"\"\"Handler class to handle Telegram updates that contain a chosen inline result.\n@@ -70,6 +74,12 @@\n DEPRECATED: Please switch to context based callbacks.\n run_async (:obj:`bool`): Determines whether the callback will run asynchronously.\n Defaults to :obj:`False`.\n+ pattern (:obj:`str` | `Pattern`, optional): Regex pattern. If not :obj:`None`, ``re.match``\n+ is used on :attr:`telegram.ChosenInlineResult.result_id` to determine if an update\n+ should be handled by this handler. This is accessible in the callback as\n+ :attr:`telegram.ext.CallbackContext.matches`.\n+\n+ .. versionadded:: 13.6\n \n Attributes:\n callback (:obj:`callable`): The callback function for this handler.\n@@ -82,9 +92,37 @@\n pass_chat_data (:obj:`bool`): Determines whether ``chat_data`` will be passed to\n the callback function.\n run_async (:obj:`bool`): Determines whether the callback will run asynchronously.\n+ pattern (`Pattern`): Optional. Regex pattern to test\n+ :attr:`telegram.ChosenInlineResult.result_id` against.\n+\n+ .. versionadded:: 13.6\n \n \"\"\"\n \n+ def __init__(\n+ self,\n+ callback: Callable[[Update, 'CallbackContext'], RT],\n+ pass_update_queue: bool = False,\n+ pass_job_queue: bool = False,\n+ pass_user_data: bool = False,\n+ pass_chat_data: bool = False,\n+ run_async: Union[bool, DefaultValue] = DEFAULT_FALSE,\n+ pattern: Union[str, Pattern] = None,\n+ ):\n+ super().__init__(\n+ callback,\n+ pass_update_queue=pass_update_queue,\n+ pass_job_queue=pass_job_queue,\n+ pass_user_data=pass_user_data,\n+ pass_chat_data=pass_chat_data,\n+ run_async=run_async,\n+ )\n+\n+ if isinstance(pattern, str):\n+ pattern = re.compile(pattern)\n+\n+ self.pattern = pattern\n+\n def check_update(self, update: object) -> Optional[Union[bool, object]]:\n \"\"\"Determines whether an update should be passed to this handlers :attr:`callback`.\n \n@@ -95,4 +133,24 @@\n :obj:`bool`\n \n \"\"\"\n- return isinstance(update, Update) and update.chosen_inline_result\n+ if isinstance(update, Update) and update.chosen_inline_result:\n+ if self.pattern:\n+ match = re.match(self.pattern, update.chosen_inline_result.result_id)\n+ if match:\n+ return match\n+ else:\n+ return True\n+ return None\n+\n+ def collect_additional_context(\n+ self,\n+ context: 'CallbackContext',\n+ update: Update,\n+ dispatcher: 'Dispatcher',\n+ check_result: Union[bool, Match],\n+ ) -> None:\n+ \"\"\"This function adds the matched regex pattern result to\n+ :attr:`telegram.ext.CallbackContext.matches`.\"\"\"\n+ if self.pattern:\n+ check_result = cast(Match, check_result)\n+ context.matches = [check_result]\n", "issue": "[FEATURE] Add a pattern for result_id of ChosenInlineResultHandler\nIn this way you can separate the results of your inline queries and redirect them to specific function as it happens for callback queries.\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2021\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains the ChosenInlineResultHandler class.\"\"\"\n\nfrom typing import Optional, TypeVar, Union\n\nfrom telegram import Update\n\nfrom .handler import Handler\n\nRT = TypeVar('RT')\n\n\nclass ChosenInlineResultHandler(Handler[Update]):\n \"\"\"Handler class to handle Telegram updates that contain a chosen inline result.\n\n Note:\n :attr:`pass_user_data` and :attr:`pass_chat_data` determine whether a ``dict`` you\n can use to keep any data in will be sent to the :attr:`callback` function. Related to\n either the user or the chat that the update was sent in. For each update from the same user\n or in the same chat, it will be the same ``dict``.\n\n Note that this is DEPRECATED, and you should use context based callbacks. See\n https://git.io/fxJuV for more info.\n\n Warning:\n When setting ``run_async`` to :obj:`True`, you cannot rely on adding custom\n attributes to :class:`telegram.ext.CallbackContext`. See its docs for more info.\n\n Args:\n callback (:obj:`callable`): The callback function for this handler. Will be called when\n :attr:`check_update` has determined that an update should be processed by this handler.\n Callback signature for context based API:\n\n ``def callback(update: Update, context: CallbackContext)``\n\n The return value of the callback is usually ignored except for the special case of\n :class:`telegram.ext.ConversationHandler`.\n pass_update_queue (:obj:`bool`, optional): If set to :obj:`True`, a keyword argument called\n ``update_queue`` will be passed to the callback function. It will be the ``Queue``\n instance used by the :class:`telegram.ext.Updater` and :class:`telegram.ext.Dispatcher`\n that contains new updates which can be used to insert updates. Default is :obj:`False`.\n DEPRECATED: Please switch to context based callbacks.\n pass_job_queue (:obj:`bool`, optional): If set to :obj:`True`, a keyword argument called\n ``job_queue`` will be passed to the callback function. It will be a\n :class:`telegram.ext.JobQueue` instance created by the :class:`telegram.ext.Updater`\n which can be used to schedule new jobs. Default is :obj:`False`.\n DEPRECATED: Please switch to context based callbacks.\n pass_user_data (:obj:`bool`, optional): If set to :obj:`True`, a keyword argument called\n ``user_data`` will be passed to the callback function. Default is :obj:`False`.\n DEPRECATED: Please switch to context based callbacks.\n pass_chat_data (:obj:`bool`, optional): If set to :obj:`True`, a keyword argument called\n ``chat_data`` will be passed to the callback function. Default is :obj:`False`.\n DEPRECATED: Please switch to context based callbacks.\n run_async (:obj:`bool`): Determines whether the callback will run asynchronously.\n Defaults to :obj:`False`.\n\n Attributes:\n callback (:obj:`callable`): The callback function for this handler.\n pass_update_queue (:obj:`bool`): Determines whether ``update_queue`` will be\n passed to the callback function.\n pass_job_queue (:obj:`bool`): Determines whether ``job_queue`` will be passed to\n the callback function.\n pass_user_data (:obj:`bool`): Determines whether ``user_data`` will be passed to\n the callback function.\n pass_chat_data (:obj:`bool`): Determines whether ``chat_data`` will be passed to\n the callback function.\n run_async (:obj:`bool`): Determines whether the callback will run asynchronously.\n\n \"\"\"\n\n def check_update(self, update: object) -> Optional[Union[bool, object]]:\n \"\"\"Determines whether an update should be passed to this handlers :attr:`callback`.\n\n Args:\n update (:class:`telegram.Update` | :obj:`object`): Incoming update.\n\n Returns:\n :obj:`bool`\n\n \"\"\"\n return isinstance(update, Update) and update.chosen_inline_result\n", "path": "telegram/ext/choseninlineresulthandler.py"}]} | 1,856 | 914 |
gh_patches_debug_6980 | rasdani/github-patches | git_diff | sopel-irc__sopel-2174 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
UTF-8 Check fails on windows in Powershell
### Description
The current version of sopel in pip (py3.9.6) and from git master both prompt this message upon running sopel even with a utf8 configuration:
> WARNING!!! You are running with a non-UTF8 locale environment variable (e.g. LC_ALL is set to "C"), which makes Python 3 do stupid things. If you get strange errors, please set it to something like "en_US.UTF-8".
Despite having a "English_United States", "utf8" locale from python.
### Reproduction steps
Pull into a fresh python dev environment on windows 10.
Install sopel via pip or from source.
Run sopel.
### Expected behavior
To not get a warning about UTF-8 since it's configured.
### Logs
```
(.venv) PS C:\Users\Michael\Documents\Visual Studio Projects\Python\sopel> $PSVersionTable
Name Value
---- -----
PSVersion 7.1.4
PSEdition Core
GitCommitId 7.1.4
OS Microsoft Windows 10.0.19041
Platform Win32NT
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
WSManStackVersion 3.0
(.venv) PS C:\Users\Michael\Documents\Visual Studio Projects\Python\sopel> Get-WinSystemLocale
LCID Name DisplayName
---- ---- -----------
1033 en-US English (United States)
(.venv) PS C:\Users\Michael\Documents\Visual Studio Projects\Python\sopel> echo $OutputEncoding
Preamble :
BodyName : utf-8
EncodingName : Unicode (UTF-8)
HeaderName : utf-8
WebName : utf-8
WindowsCodePage : 1200
IsBrowserDisplay : True
IsBrowserSave : True
IsMailNewsDisplay : True
IsMailNewsSave : True
IsSingleByte : False
EncoderFallback : System.Text.EncoderReplacementFallback
DecoderFallback : System.Text.DecoderReplacementFallback
IsReadOnly : True
CodePage : 65001
(.venv) PS C:\Users\Michael\Documents\Visual Studio Projects\Python\sopel> py
Python 3.9.6 (tags/v3.9.6:db3ff76, Jun 28 2021, 15:26:21) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import locale
>>> locale.getlocale()
('English_United States', 'utf8')
>>> exit()
```
### Environment
- Sopel `.version`: 8.0.0
- Sopel installed via: pip && source
- Python version: 3.9.6
- Operating system: Windows 10
- IRCd `/version`: N/A
- Relevant plugins: N/A
</issue>
<code>
[start of sopel/__init__.py]
1 # ASCII ONLY IN THIS FILE THOUGH!!!!!!!
2 # Python does some stupid bullshit of respecting LC_ALL over the encoding on the
3 # file, so in order to undo Python's ridiculous fucking idiocy, we have to have
4 # our own check.
5
6 # Copyright 2008, Sean B. Palmer, inamidst.com
7 # Copyright 2012, Elsie Powell, http://embolalia.com
8 # Copyright 2012, Elad Alfassa <[email protected]>
9 #
10 # Licensed under the Eiffel Forum License 2.
11
12 from __future__ import generator_stop
13
14 from collections import namedtuple
15 import locale
16 import re
17 import sys
18
19 import pkg_resources
20
21 __all__ = [
22 'bot',
23 'config',
24 'db',
25 'formatting',
26 'irc',
27 'loader',
28 'logger',
29 'module', # deprecated in 7.1, removed in 9.0
30 'plugin',
31 'tools',
32 'trigger',
33 'version_info',
34 ]
35
36 loc = locale.getlocale()
37 if not loc[1] or 'UTF-8' not in loc[1]:
38 print('WARNING!!! You are running with a non-UTF8 locale environment '
39 'variable (e.g. LC_ALL is set to "C"), which makes Python 3 do '
40 'stupid things. If you get strange errors, please set it to '
41 'something like "en_US.UTF-8".', file=sys.stderr)
42
43
44 __version__ = pkg_resources.get_distribution('sopel').version
45
46
47 def _version_info(version=__version__):
48 regex = re.compile(r'(\d+)\.(\d+)\.(\d+)(?:[\-\.]?(a|b|rc)(\d+))?.*')
49 version_groups = regex.match(version).groups()
50 major, minor, micro = (int(piece) for piece in version_groups[0:3])
51 level = version_groups[3]
52 serial = int(version_groups[4] or 0)
53 if level == 'a':
54 level = 'alpha'
55 elif level == 'b':
56 level = 'beta'
57 elif level == 'rc':
58 level = 'candidate'
59 elif not level and version_groups[4] is None:
60 level = 'final'
61 else:
62 level = 'alpha'
63 version_type = namedtuple('version_info',
64 'major, minor, micro, releaselevel, serial')
65 return version_type(major, minor, micro, level, serial)
66
67
68 version_info = _version_info()
69
[end of sopel/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sopel/__init__.py b/sopel/__init__.py
--- a/sopel/__init__.py
+++ b/sopel/__init__.py
@@ -34,7 +34,7 @@
]
loc = locale.getlocale()
-if not loc[1] or 'UTF-8' not in loc[1]:
+if not loc[1] or ('UTF-8' not in loc[1] and 'utf8' not in loc[1]):
print('WARNING!!! You are running with a non-UTF8 locale environment '
'variable (e.g. LC_ALL is set to "C"), which makes Python 3 do '
'stupid things. If you get strange errors, please set it to '
| {"golden_diff": "diff --git a/sopel/__init__.py b/sopel/__init__.py\n--- a/sopel/__init__.py\n+++ b/sopel/__init__.py\n@@ -34,7 +34,7 @@\n ]\n \n loc = locale.getlocale()\n-if not loc[1] or 'UTF-8' not in loc[1]:\n+if not loc[1] or ('UTF-8' not in loc[1] and 'utf8' not in loc[1]):\n print('WARNING!!! You are running with a non-UTF8 locale environment '\n 'variable (e.g. LC_ALL is set to \"C\"), which makes Python 3 do '\n 'stupid things. If you get strange errors, please set it to '\n", "issue": "UTF-8 Check fails on windows in Powershell\n### Description\r\nThe current version of sopel in pip (py3.9.6) and from git master both prompt this message upon running sopel even with a utf8 configuration:\r\n\r\n> WARNING!!! You are running with a non-UTF8 locale environment variable (e.g. LC_ALL is set to \"C\"), which makes Python 3 do stupid things. If you get strange errors, please set it to something like \"en_US.UTF-8\".\r\n\r\nDespite having a \"English_United States\", \"utf8\" locale from python.\r\n\r\n### Reproduction steps\r\nPull into a fresh python dev environment on windows 10.\r\nInstall sopel via pip or from source.\r\nRun sopel.\r\n\r\n### Expected behavior\r\nTo not get a warning about UTF-8 since it's configured.\r\n\r\n### Logs\r\n```\r\n(.venv) PS C:\\Users\\Michael\\Documents\\Visual Studio Projects\\Python\\sopel> $PSVersionTable \r\n\r\nName Value\r\n---- -----\r\nPSVersion 7.1.4\r\nPSEdition Core\r\nGitCommitId 7.1.4\r\nOS Microsoft Windows 10.0.19041\r\nPlatform Win32NT\r\nPSCompatibleVersions {1.0, 2.0, 3.0, 4.0\u2026}\r\nPSRemotingProtocolVersion 2.3\r\nSerializationVersion 1.1.0.1\r\nWSManStackVersion 3.0\r\n\r\n(.venv) PS C:\\Users\\Michael\\Documents\\Visual Studio Projects\\Python\\sopel> Get-WinSystemLocale \r\n\r\nLCID Name DisplayName\r\n---- ---- -----------\r\n1033 en-US English (United States)\r\n\r\n(.venv) PS C:\\Users\\Michael\\Documents\\Visual Studio Projects\\Python\\sopel> echo $OutputEncoding\r\n\r\nPreamble : \r\nBodyName : utf-8\r\nEncodingName : Unicode (UTF-8)\r\nHeaderName : utf-8\r\nWebName : utf-8\r\nWindowsCodePage : 1200\r\nIsBrowserDisplay : True\r\nIsBrowserSave : True\r\nIsMailNewsDisplay : True\r\nIsMailNewsSave : True\r\nIsSingleByte : False\r\nEncoderFallback : System.Text.EncoderReplacementFallback\r\nDecoderFallback : System.Text.DecoderReplacementFallback\r\nIsReadOnly : True\r\nCodePage : 65001\r\n\r\n\r\n(.venv) PS C:\\Users\\Michael\\Documents\\Visual Studio Projects\\Python\\sopel> py\r\nPython 3.9.6 (tags/v3.9.6:db3ff76, Jun 28 2021, 15:26:21) [MSC v.1929 64 bit (AMD64)] on win32\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import locale\r\n>>> locale.getlocale()\r\n('English_United States', 'utf8')\r\n>>> exit()\r\n```\r\n\r\n### Environment\r\n- Sopel `.version`: 8.0.0\r\n- Sopel installed via: pip && source\r\n- Python version: 3.9.6\r\n- Operating system: Windows 10\r\n- IRCd `/version`: N/A\r\n- Relevant plugins: N/A\r\n\n", "before_files": [{"content": "# ASCII ONLY IN THIS FILE THOUGH!!!!!!!\n# Python does some stupid bullshit of respecting LC_ALL over the encoding on the\n# file, so in order to undo Python's ridiculous fucking idiocy, we have to have\n# our own check.\n\n# Copyright 2008, Sean B. Palmer, inamidst.com\n# Copyright 2012, Elsie Powell, http://embolalia.com\n# Copyright 2012, Elad Alfassa <[email protected]>\n#\n# Licensed under the Eiffel Forum License 2.\n\nfrom __future__ import generator_stop\n\nfrom collections import namedtuple\nimport locale\nimport re\nimport sys\n\nimport pkg_resources\n\n__all__ = [\n 'bot',\n 'config',\n 'db',\n 'formatting',\n 'irc',\n 'loader',\n 'logger',\n 'module', # deprecated in 7.1, removed in 9.0\n 'plugin',\n 'tools',\n 'trigger',\n 'version_info',\n]\n\nloc = locale.getlocale()\nif not loc[1] or 'UTF-8' not in loc[1]:\n print('WARNING!!! You are running with a non-UTF8 locale environment '\n 'variable (e.g. LC_ALL is set to \"C\"), which makes Python 3 do '\n 'stupid things. If you get strange errors, please set it to '\n 'something like \"en_US.UTF-8\".', file=sys.stderr)\n\n\n__version__ = pkg_resources.get_distribution('sopel').version\n\n\ndef _version_info(version=__version__):\n regex = re.compile(r'(\\d+)\\.(\\d+)\\.(\\d+)(?:[\\-\\.]?(a|b|rc)(\\d+))?.*')\n version_groups = regex.match(version).groups()\n major, minor, micro = (int(piece) for piece in version_groups[0:3])\n level = version_groups[3]\n serial = int(version_groups[4] or 0)\n if level == 'a':\n level = 'alpha'\n elif level == 'b':\n level = 'beta'\n elif level == 'rc':\n level = 'candidate'\n elif not level and version_groups[4] is None:\n level = 'final'\n else:\n level = 'alpha'\n version_type = namedtuple('version_info',\n 'major, minor, micro, releaselevel, serial')\n return version_type(major, minor, micro, level, serial)\n\n\nversion_info = _version_info()\n", "path": "sopel/__init__.py"}]} | 1,943 | 165 |
gh_patches_debug_35368 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-2262 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Adding IAM parameters to redshift adapter
### Describe the feature
I would the arguments to get_cluster_credentials to be added to the dbt profile configuration. In particular DbGroups to allow the temporary user to be added to a group and AutoCreate to allow auto creation of users that do not exist.
### Describe alternatives you've considered
Since these are IAM specific configurations the only other alternative is to not use the temporary credentials.
### Additional context
This is a feature specifically for redshift users.
### Who will this benefit?
This feature will be useful for dbt users who want to use temporary and dynamic credentials with redshift.
</issue>
<code>
[start of plugins/redshift/dbt/adapters/redshift/connections.py]
1 from multiprocessing import Lock
2 from contextlib import contextmanager
3 from typing import NewType
4
5 from dbt.adapters.postgres import PostgresConnectionManager
6 from dbt.adapters.postgres import PostgresCredentials
7 from dbt.logger import GLOBAL_LOGGER as logger # noqa
8 import dbt.exceptions
9 import dbt.flags
10
11 import boto3
12
13 from hologram import FieldEncoder, JsonSchemaMixin
14 from hologram.helpers import StrEnum
15
16 from dataclasses import dataclass, field
17 from typing import Optional
18
19 drop_lock: Lock = dbt.flags.MP_CONTEXT.Lock()
20
21
22 IAMDuration = NewType('IAMDuration', int)
23
24
25 class IAMDurationEncoder(FieldEncoder):
26 @property
27 def json_schema(self):
28 return {'type': 'integer', 'minimum': 0, 'maximum': 65535}
29
30
31 JsonSchemaMixin.register_field_encoders({IAMDuration: IAMDurationEncoder()})
32
33
34 class RedshiftConnectionMethod(StrEnum):
35 DATABASE = 'database'
36 IAM = 'iam'
37
38
39 @dataclass
40 class RedshiftCredentials(PostgresCredentials):
41 method: RedshiftConnectionMethod = RedshiftConnectionMethod.DATABASE
42 password: Optional[str] = None
43 cluster_id: Optional[str] = field(
44 default=None,
45 metadata={'description': 'If using IAM auth, the name of the cluster'},
46 )
47 iam_duration_seconds: int = 900
48 search_path: Optional[str] = None
49 keepalives_idle: int = 240
50
51 @property
52 def type(self):
53 return 'redshift'
54
55 def _connection_keys(self):
56 keys = super()._connection_keys()
57 return keys + ('method', 'cluster_id', 'iam_duration_seconds')
58
59
60 class RedshiftConnectionManager(PostgresConnectionManager):
61 TYPE = 'redshift'
62
63 @contextmanager
64 def fresh_transaction(self, name=None):
65 """On entrance to this context manager, hold an exclusive lock and
66 create a fresh transaction for redshift, then commit and begin a new
67 one before releasing the lock on exit.
68
69 See drop_relation in RedshiftAdapter for more information.
70
71 :param Optional[str] name: The name of the connection to use, or None
72 to use the default.
73 """
74 with drop_lock:
75 connection = self.get_thread_connection()
76
77 if connection.transaction_open:
78 self.commit()
79
80 self.begin()
81 yield
82
83 self.commit()
84 self.begin()
85
86 @classmethod
87 def fetch_cluster_credentials(cls, db_user, db_name, cluster_id,
88 duration_s):
89 """Fetches temporary login credentials from AWS. The specified user
90 must already exist in the database, or else an error will occur"""
91 boto_client = boto3.client('redshift')
92
93 try:
94 return boto_client.get_cluster_credentials(
95 DbUser=db_user,
96 DbName=db_name,
97 ClusterIdentifier=cluster_id,
98 DurationSeconds=duration_s,
99 AutoCreate=False)
100
101 except boto_client.exceptions.ClientError as e:
102 raise dbt.exceptions.FailedToConnectException(
103 "Unable to get temporary Redshift cluster credentials: {}"
104 .format(e))
105
106 @classmethod
107 def get_tmp_iam_cluster_credentials(cls, credentials):
108 cluster_id = credentials.cluster_id
109
110 # default via:
111 # boto3.readthedocs.io/en/latest/reference/services/redshift.html
112 iam_duration_s = credentials.iam_duration_seconds
113
114 if not cluster_id:
115 raise dbt.exceptions.FailedToConnectException(
116 "'cluster_id' must be provided in profile if IAM "
117 "authentication method selected")
118
119 cluster_creds = cls.fetch_cluster_credentials(
120 credentials.user,
121 credentials.database,
122 credentials.cluster_id,
123 iam_duration_s,
124 )
125
126 # replace username and password with temporary redshift credentials
127 return credentials.replace(user=cluster_creds.get('DbUser'),
128 password=cluster_creds.get('DbPassword'))
129
130 @classmethod
131 def get_credentials(cls, credentials):
132 method = credentials.method
133
134 # Support missing 'method' for backwards compatibility
135 if method == 'database' or method is None:
136 logger.debug("Connecting to Redshift using 'database' credentials")
137 # this requirement is really annoying to encode into json schema,
138 # so validate it here
139 if credentials.password is None:
140 raise dbt.exceptions.FailedToConnectException(
141 "'password' field is required for 'database' credentials"
142 )
143 return credentials
144
145 elif method == 'iam':
146 logger.debug("Connecting to Redshift using 'IAM' credentials")
147 return cls.get_tmp_iam_cluster_credentials(credentials)
148
149 else:
150 raise dbt.exceptions.FailedToConnectException(
151 "Invalid 'method' in profile: '{}'".format(method))
152
[end of plugins/redshift/dbt/adapters/redshift/connections.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugins/redshift/dbt/adapters/redshift/connections.py b/plugins/redshift/dbt/adapters/redshift/connections.py
--- a/plugins/redshift/dbt/adapters/redshift/connections.py
+++ b/plugins/redshift/dbt/adapters/redshift/connections.py
@@ -14,7 +14,7 @@
from hologram.helpers import StrEnum
from dataclasses import dataclass, field
-from typing import Optional
+from typing import Optional, List
drop_lock: Lock = dbt.flags.MP_CONTEXT.Lock()
@@ -47,6 +47,8 @@
iam_duration_seconds: int = 900
search_path: Optional[str] = None
keepalives_idle: int = 240
+ autocreate: bool = False
+ db_groups: List[str] = field(default_factory=list)
@property
def type(self):
@@ -85,7 +87,7 @@
@classmethod
def fetch_cluster_credentials(cls, db_user, db_name, cluster_id,
- duration_s):
+ duration_s, autocreate, db_groups):
"""Fetches temporary login credentials from AWS. The specified user
must already exist in the database, or else an error will occur"""
boto_client = boto3.client('redshift')
@@ -96,7 +98,8 @@
DbName=db_name,
ClusterIdentifier=cluster_id,
DurationSeconds=duration_s,
- AutoCreate=False)
+ AutoCreate=autocreate,
+ DbGroups=db_groups,)
except boto_client.exceptions.ClientError as e:
raise dbt.exceptions.FailedToConnectException(
@@ -121,6 +124,8 @@
credentials.database,
credentials.cluster_id,
iam_duration_s,
+ credentials.autocreate,
+ credentials.db_groups,
)
# replace username and password with temporary redshift credentials
| {"golden_diff": "diff --git a/plugins/redshift/dbt/adapters/redshift/connections.py b/plugins/redshift/dbt/adapters/redshift/connections.py\n--- a/plugins/redshift/dbt/adapters/redshift/connections.py\n+++ b/plugins/redshift/dbt/adapters/redshift/connections.py\n@@ -14,7 +14,7 @@\n from hologram.helpers import StrEnum\n \n from dataclasses import dataclass, field\n-from typing import Optional\n+from typing import Optional, List\n \n drop_lock: Lock = dbt.flags.MP_CONTEXT.Lock()\n \n@@ -47,6 +47,8 @@\n iam_duration_seconds: int = 900\n search_path: Optional[str] = None\n keepalives_idle: int = 240\n+ autocreate: bool = False\n+ db_groups: List[str] = field(default_factory=list)\n \n @property\n def type(self):\n@@ -85,7 +87,7 @@\n \n @classmethod\n def fetch_cluster_credentials(cls, db_user, db_name, cluster_id,\n- duration_s):\n+ duration_s, autocreate, db_groups):\n \"\"\"Fetches temporary login credentials from AWS. The specified user\n must already exist in the database, or else an error will occur\"\"\"\n boto_client = boto3.client('redshift')\n@@ -96,7 +98,8 @@\n DbName=db_name,\n ClusterIdentifier=cluster_id,\n DurationSeconds=duration_s,\n- AutoCreate=False)\n+ AutoCreate=autocreate,\n+ DbGroups=db_groups,)\n \n except boto_client.exceptions.ClientError as e:\n raise dbt.exceptions.FailedToConnectException(\n@@ -121,6 +124,8 @@\n credentials.database,\n credentials.cluster_id,\n iam_duration_s,\n+ credentials.autocreate,\n+ credentials.db_groups,\n )\n \n # replace username and password with temporary redshift credentials\n", "issue": "Adding IAM parameters to redshift adapter\n### Describe the feature\r\nI would the arguments to get_cluster_credentials to be added to the dbt profile configuration. In particular DbGroups to allow the temporary user to be added to a group and AutoCreate to allow auto creation of users that do not exist. \r\n\r\n### Describe alternatives you've considered\r\nSince these are IAM specific configurations the only other alternative is to not use the temporary credentials.\r\n\r\n### Additional context\r\nThis is a feature specifically for redshift users.\r\n\r\n### Who will this benefit?\r\nThis feature will be useful for dbt users who want to use temporary and dynamic credentials with redshift. \r\n\n", "before_files": [{"content": "from multiprocessing import Lock\nfrom contextlib import contextmanager\nfrom typing import NewType\n\nfrom dbt.adapters.postgres import PostgresConnectionManager\nfrom dbt.adapters.postgres import PostgresCredentials\nfrom dbt.logger import GLOBAL_LOGGER as logger # noqa\nimport dbt.exceptions\nimport dbt.flags\n\nimport boto3\n\nfrom hologram import FieldEncoder, JsonSchemaMixin\nfrom hologram.helpers import StrEnum\n\nfrom dataclasses import dataclass, field\nfrom typing import Optional\n\ndrop_lock: Lock = dbt.flags.MP_CONTEXT.Lock()\n\n\nIAMDuration = NewType('IAMDuration', int)\n\n\nclass IAMDurationEncoder(FieldEncoder):\n @property\n def json_schema(self):\n return {'type': 'integer', 'minimum': 0, 'maximum': 65535}\n\n\nJsonSchemaMixin.register_field_encoders({IAMDuration: IAMDurationEncoder()})\n\n\nclass RedshiftConnectionMethod(StrEnum):\n DATABASE = 'database'\n IAM = 'iam'\n\n\n@dataclass\nclass RedshiftCredentials(PostgresCredentials):\n method: RedshiftConnectionMethod = RedshiftConnectionMethod.DATABASE\n password: Optional[str] = None\n cluster_id: Optional[str] = field(\n default=None,\n metadata={'description': 'If using IAM auth, the name of the cluster'},\n )\n iam_duration_seconds: int = 900\n search_path: Optional[str] = None\n keepalives_idle: int = 240\n\n @property\n def type(self):\n return 'redshift'\n\n def _connection_keys(self):\n keys = super()._connection_keys()\n return keys + ('method', 'cluster_id', 'iam_duration_seconds')\n\n\nclass RedshiftConnectionManager(PostgresConnectionManager):\n TYPE = 'redshift'\n\n @contextmanager\n def fresh_transaction(self, name=None):\n \"\"\"On entrance to this context manager, hold an exclusive lock and\n create a fresh transaction for redshift, then commit and begin a new\n one before releasing the lock on exit.\n\n See drop_relation in RedshiftAdapter for more information.\n\n :param Optional[str] name: The name of the connection to use, or None\n to use the default.\n \"\"\"\n with drop_lock:\n connection = self.get_thread_connection()\n\n if connection.transaction_open:\n self.commit()\n\n self.begin()\n yield\n\n self.commit()\n self.begin()\n\n @classmethod\n def fetch_cluster_credentials(cls, db_user, db_name, cluster_id,\n duration_s):\n \"\"\"Fetches temporary login credentials from AWS. The specified user\n must already exist in the database, or else an error will occur\"\"\"\n boto_client = boto3.client('redshift')\n\n try:\n return boto_client.get_cluster_credentials(\n DbUser=db_user,\n DbName=db_name,\n ClusterIdentifier=cluster_id,\n DurationSeconds=duration_s,\n AutoCreate=False)\n\n except boto_client.exceptions.ClientError as e:\n raise dbt.exceptions.FailedToConnectException(\n \"Unable to get temporary Redshift cluster credentials: {}\"\n .format(e))\n\n @classmethod\n def get_tmp_iam_cluster_credentials(cls, credentials):\n cluster_id = credentials.cluster_id\n\n # default via:\n # boto3.readthedocs.io/en/latest/reference/services/redshift.html\n iam_duration_s = credentials.iam_duration_seconds\n\n if not cluster_id:\n raise dbt.exceptions.FailedToConnectException(\n \"'cluster_id' must be provided in profile if IAM \"\n \"authentication method selected\")\n\n cluster_creds = cls.fetch_cluster_credentials(\n credentials.user,\n credentials.database,\n credentials.cluster_id,\n iam_duration_s,\n )\n\n # replace username and password with temporary redshift credentials\n return credentials.replace(user=cluster_creds.get('DbUser'),\n password=cluster_creds.get('DbPassword'))\n\n @classmethod\n def get_credentials(cls, credentials):\n method = credentials.method\n\n # Support missing 'method' for backwards compatibility\n if method == 'database' or method is None:\n logger.debug(\"Connecting to Redshift using 'database' credentials\")\n # this requirement is really annoying to encode into json schema,\n # so validate it here\n if credentials.password is None:\n raise dbt.exceptions.FailedToConnectException(\n \"'password' field is required for 'database' credentials\"\n )\n return credentials\n\n elif method == 'iam':\n logger.debug(\"Connecting to Redshift using 'IAM' credentials\")\n return cls.get_tmp_iam_cluster_credentials(credentials)\n\n else:\n raise dbt.exceptions.FailedToConnectException(\n \"Invalid 'method' in profile: '{}'\".format(method))\n", "path": "plugins/redshift/dbt/adapters/redshift/connections.py"}]} | 2,040 | 418 |
gh_patches_debug_27588 | rasdani/github-patches | git_diff | engnadeau__pybotics-412 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Flaky `test_fk()` results
- Fails occasionally due to a small difference in the x-component of the transform matrix
- https://ci.appveyor.com/project/nnadeau/pybotics/build/1.0.732/job/c9mmdfvctt9jasie
- https://ci.appveyor.com/project/nnadeau/pybotics/build/1.0.732/job/qs645jqgd49iwa3s
- This corresponds to the first row of the UR10 resources
```
-45.0,147.0,-39.0,96.0,49.0,67.0,-0.10936549564013165,0.9937209495290638,0.02361912001841469,341.25528339185024,-0.30766985716369466,-0.011247226994026319,-0.9514266965341137,-658.5887448195482,-0. 9451869906829335,-0.11132014351410438,0.30696804115695675,-625.3245786240964,0.0,0.0,0.0,1.0
```
</issue>
<code>
[start of pybotics/predefined_models.py]
1 """Predefined robot models."""
2 from typing import Any
3
4 import numpy as np # type: ignore
5
6 from pybotics import Robot
7 from pybotics.kinematic_chain import MDHKinematicChain
8
9
10 class KukaLBRiiwa7(Robot):
11 """KUKA LBR iiwa 7 R800 collaborative robot."""
12
13 # TODO: add manufacturer's joint limits
14 kinematic_chain = MDHKinematicChain(
15 np.array([
16 0, 0, 0, 340,
17 -np.pi / 2, 0, 0, 0,
18 np.pi / 2, 0, 0, 400,
19 np.pi / 2, 0, 0, 0,
20 -np.pi / 2, 0, 0, 400,
21 -np.pi / 2, 0, 0, 0,
22 np.pi / 2, 0, 0, 126
23 ])
24 )
25
26 def __init__(self, **kwargs: Any) -> None:
27 """Init robot."""
28 super().__init__(self.kinematic_chain, **kwargs)
29
30
31 class MecademicMeca500(Robot):
32 """Mecademic Meca500 small robot."""
33
34 # TODO: add manufacturer's joint limits
35 kinematic_chain = MDHKinematicChain(
36 np.array([
37 0, 0, 0, 135,
38 -np.pi / 2, 0, -np.pi / 2, 0,
39 0, 135, 0, 0,
40 -np.pi / 2, 38, 0, 120,
41 np.pi / 2, 0, 0, 0,
42 -np.pi / 2, 0, np.pi, 72
43 ])
44 )
45
46 def __init__(self, **kwargs: Any) -> None:
47 """Init robot."""
48 super().__init__(self.kinematic_chain, **kwargs)
49
50
51 class PUMA560(Robot):
52 """PUMA 560 robot."""
53
54 # TODO: add manufacturer's joint limits
55 kinematic_chain = MDHKinematicChain(
56 np.array([
57 0, 0, 0, 0,
58 -np.pi / 2, 0, 0, 0,
59 0, 612.7, 0, 0,
60 0, 571.6, 0, 163.9,
61 -np.pi / 2, 0, 0, 115.7,
62 np.pi / 2, 0, np.pi, 92.2
63 ])
64 )
65
66 def __init__(self, **kwargs: Any) -> None:
67 """Init robot."""
68 super().__init__(self.kinematic_chain, **kwargs)
69
70
71 class UR10(Robot):
72 """Universal Robots UR10 collaborative robot."""
73
74 # TODO: add manufacturer's joint limits
75 kinematic_chain = MDHKinematicChain(
76 np.array([
77 0, 0, 0, 118,
78 np.pi / 2, 0, np.pi, 0,
79 0, 612.7, 0, 0,
80 0, 571.6, 0, 163.9,
81 -np.pi / 2, 0, 0, 115.7,
82 np.pi / 2, 0, np.pi, 92.2
83 ])
84 )
85
86 def __init__(self, **kwargs: Any) -> None:
87 """Init robot."""
88 super().__init__(self.kinematic_chain, **kwargs)
89
[end of pybotics/predefined_models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pybotics/predefined_models.py b/pybotics/predefined_models.py
--- a/pybotics/predefined_models.py
+++ b/pybotics/predefined_models.py
@@ -1,4 +1,5 @@
"""Predefined robot models."""
+from copy import deepcopy
from typing import Any
import numpy as np # type: ignore
@@ -25,7 +26,7 @@
def __init__(self, **kwargs: Any) -> None:
"""Init robot."""
- super().__init__(self.kinematic_chain, **kwargs)
+ super().__init__(deepcopy(self.kinematic_chain), **kwargs)
class MecademicMeca500(Robot):
@@ -45,7 +46,7 @@
def __init__(self, **kwargs: Any) -> None:
"""Init robot."""
- super().__init__(self.kinematic_chain, **kwargs)
+ super().__init__(deepcopy(self.kinematic_chain), **kwargs)
class PUMA560(Robot):
@@ -65,7 +66,7 @@
def __init__(self, **kwargs: Any) -> None:
"""Init robot."""
- super().__init__(self.kinematic_chain, **kwargs)
+ super().__init__(deepcopy(self.kinematic_chain), **kwargs)
class UR10(Robot):
@@ -85,4 +86,4 @@
def __init__(self, **kwargs: Any) -> None:
"""Init robot."""
- super().__init__(self.kinematic_chain, **kwargs)
+ super().__init__(deepcopy(self.kinematic_chain), **kwargs)
| {"golden_diff": "diff --git a/pybotics/predefined_models.py b/pybotics/predefined_models.py\n--- a/pybotics/predefined_models.py\n+++ b/pybotics/predefined_models.py\n@@ -1,4 +1,5 @@\n \"\"\"Predefined robot models.\"\"\"\n+from copy import deepcopy\n from typing import Any\n \n import numpy as np # type: ignore\n@@ -25,7 +26,7 @@\n \n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n- super().__init__(self.kinematic_chain, **kwargs)\n+ super().__init__(deepcopy(self.kinematic_chain), **kwargs)\n \n \n class MecademicMeca500(Robot):\n@@ -45,7 +46,7 @@\n \n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n- super().__init__(self.kinematic_chain, **kwargs)\n+ super().__init__(deepcopy(self.kinematic_chain), **kwargs)\n \n \n class PUMA560(Robot):\n@@ -65,7 +66,7 @@\n \n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n- super().__init__(self.kinematic_chain, **kwargs)\n+ super().__init__(deepcopy(self.kinematic_chain), **kwargs)\n \n \n class UR10(Robot):\n@@ -85,4 +86,4 @@\n \n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n- super().__init__(self.kinematic_chain, **kwargs)\n+ super().__init__(deepcopy(self.kinematic_chain), **kwargs)\n", "issue": "Flaky `test_fk()` results\n- Fails occasionally due to a small difference in the x-component of the transform matrix\r\n - https://ci.appveyor.com/project/nnadeau/pybotics/build/1.0.732/job/c9mmdfvctt9jasie\r\n - https://ci.appveyor.com/project/nnadeau/pybotics/build/1.0.732/job/qs645jqgd49iwa3s\r\n- This corresponds to the first row of the UR10 resources\r\n```\r\n-45.0,147.0,-39.0,96.0,49.0,67.0,-0.10936549564013165,0.9937209495290638,0.02361912001841469,341.25528339185024,-0.30766985716369466,-0.011247226994026319,-0.9514266965341137,-658.5887448195482,-0. 9451869906829335,-0.11132014351410438,0.30696804115695675,-625.3245786240964,0.0,0.0,0.0,1.0\r\n```\r\n\n", "before_files": [{"content": "\"\"\"Predefined robot models.\"\"\"\nfrom typing import Any\n\nimport numpy as np # type: ignore\n\nfrom pybotics import Robot\nfrom pybotics.kinematic_chain import MDHKinematicChain\n\n\nclass KukaLBRiiwa7(Robot):\n \"\"\"KUKA LBR iiwa 7 R800 collaborative robot.\"\"\"\n\n # TODO: add manufacturer's joint limits\n kinematic_chain = MDHKinematicChain(\n np.array([\n 0, 0, 0, 340,\n -np.pi / 2, 0, 0, 0,\n np.pi / 2, 0, 0, 400,\n np.pi / 2, 0, 0, 0,\n -np.pi / 2, 0, 0, 400,\n -np.pi / 2, 0, 0, 0,\n np.pi / 2, 0, 0, 126\n ])\n )\n\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n super().__init__(self.kinematic_chain, **kwargs)\n\n\nclass MecademicMeca500(Robot):\n \"\"\"Mecademic Meca500 small robot.\"\"\"\n\n # TODO: add manufacturer's joint limits\n kinematic_chain = MDHKinematicChain(\n np.array([\n 0, 0, 0, 135,\n -np.pi / 2, 0, -np.pi / 2, 0,\n 0, 135, 0, 0,\n -np.pi / 2, 38, 0, 120,\n np.pi / 2, 0, 0, 0,\n -np.pi / 2, 0, np.pi, 72\n ])\n )\n\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n super().__init__(self.kinematic_chain, **kwargs)\n\n\nclass PUMA560(Robot):\n \"\"\"PUMA 560 robot.\"\"\"\n\n # TODO: add manufacturer's joint limits\n kinematic_chain = MDHKinematicChain(\n np.array([\n 0, 0, 0, 0,\n -np.pi / 2, 0, 0, 0,\n 0, 612.7, 0, 0,\n 0, 571.6, 0, 163.9,\n -np.pi / 2, 0, 0, 115.7,\n np.pi / 2, 0, np.pi, 92.2\n ])\n )\n\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n super().__init__(self.kinematic_chain, **kwargs)\n\n\nclass UR10(Robot):\n \"\"\"Universal Robots UR10 collaborative robot.\"\"\"\n\n # TODO: add manufacturer's joint limits\n kinematic_chain = MDHKinematicChain(\n np.array([\n 0, 0, 0, 118,\n np.pi / 2, 0, np.pi, 0,\n 0, 612.7, 0, 0,\n 0, 571.6, 0, 163.9,\n -np.pi / 2, 0, 0, 115.7,\n np.pi / 2, 0, np.pi, 92.2\n ])\n )\n\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n super().__init__(self.kinematic_chain, **kwargs)\n", "path": "pybotics/predefined_models.py"}]} | 1,961 | 370 |
gh_patches_debug_640 | rasdani/github-patches | git_diff | pex-tool__pex-1922 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.106
On the docket:
+ [x] Providing a direct reference to a wheel with a local version fails to resolve #1919
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.105"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.105"
+__version__ = "2.1.106"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.105\"\n+__version__ = \"2.1.106\"\n", "issue": "Release 2.1.106\nOn the docket:\r\n+ [x] Providing a direct reference to a wheel with a local version fails to resolve #1919 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.105\"\n", "path": "pex/version.py"}]} | 623 | 99 |
gh_patches_debug_20855 | rasdani/github-patches | git_diff | pydantic__pydantic-1328 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Configurable SecretStr
This issue revisits this comment https://github.com/samuelcolvin/pydantic/issues/462#issuecomment-480326378 by @tiangolo.
I think it would be a good idea that there is a standard way of getting secrets exported for propagation to other services.
This is not something that tends to happen in three tier apps where e.g. the db creds are secret but rather a case that happens _a lot_ when dealing with microservice architectures wherein request payloads may serialize and deserialize multiple times through the end to end request lifecycle.
`.json()` to me is semantically like `.export` and as such defaulting to revealing secret makes sense. But that would be a breaking change.
Other approaches:
- `.json(reveal_secrets=True)`
- `.export()`
But maybe we can take the breaking change path via https://github.com/samuelcolvin/pydantic/issues/576 and then:
- `.json(keep_secrets=True)`
To be clear I don't see `.json` as being something used for logging. Something like `structlog` would work with `pydantic.dict()` instead:
```
log.info('something', data=model.dict())
```
I _think_ `.dict` defaulting to maintaining secrets seems right. But we could have, too:
```
log.info('something', data=model.dict(reveal_secrets=True))
```
But than we should make considerations around API consistency across methods and ensure usability is good overall, not just per case.
</issue>
<code>
[start of docs/examples/types_secret_types.py]
1 from pydantic import BaseModel, SecretStr, SecretBytes, ValidationError
2
3 class SimpleModel(BaseModel):
4 password: SecretStr
5 password_bytes: SecretBytes
6
7 sm = SimpleModel(password='IAmSensitive', password_bytes=b'IAmSensitiveBytes')
8
9 # Standard access methods will not display the secret
10 print(sm)
11 print(sm.password)
12 print(sm.json())
13
14 # Use get_secret_value method to see the secret's content.
15 print(sm.password.get_secret_value())
16 print(sm.password_bytes.get_secret_value())
17
18 try:
19 SimpleModel(password=[1, 2, 3], password_bytes=[1, 2, 3])
20 except ValidationError as e:
21 print(e)
22
[end of docs/examples/types_secret_types.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/examples/types_secret_types.py b/docs/examples/types_secret_types.py
--- a/docs/examples/types_secret_types.py
+++ b/docs/examples/types_secret_types.py
@@ -9,6 +9,7 @@
# Standard access methods will not display the secret
print(sm)
print(sm.password)
+print(sm.dict())
print(sm.json())
# Use get_secret_value method to see the secret's content.
@@ -19,3 +20,26 @@
SimpleModel(password=[1, 2, 3], password_bytes=[1, 2, 3])
except ValidationError as e:
print(e)
+
+# If you want the secret to be dumped as plain-text using the json method,
+# you can use json_encoders in the Config class.
+class SimpleModelDumpable(BaseModel):
+ password: SecretStr
+ password_bytes: SecretBytes
+
+ class Config:
+ json_encoders = {
+ SecretStr: lambda v: v.get_secret_value() if v else None,
+ SecretBytes: lambda v: v.get_secret_value() if v else None,
+ }
+
+sm2 = SimpleModelDumpable(password='IAmSensitive',
+ password_bytes=b'IAmSensitiveBytes')
+
+# Standard access methods will not display the secret
+print(sm2)
+print(sm2.password)
+print(sm2.dict())
+
+# But the json method will
+print(sm2.json())
| {"golden_diff": "diff --git a/docs/examples/types_secret_types.py b/docs/examples/types_secret_types.py\n--- a/docs/examples/types_secret_types.py\n+++ b/docs/examples/types_secret_types.py\n@@ -9,6 +9,7 @@\n # Standard access methods will not display the secret\n print(sm)\n print(sm.password)\n+print(sm.dict())\n print(sm.json())\n \n # Use get_secret_value method to see the secret's content.\n@@ -19,3 +20,26 @@\n SimpleModel(password=[1, 2, 3], password_bytes=[1, 2, 3])\n except ValidationError as e:\n print(e)\n+\n+# If you want the secret to be dumped as plain-text using the json method,\n+# you can use json_encoders in the Config class.\n+class SimpleModelDumpable(BaseModel):\n+ password: SecretStr\n+ password_bytes: SecretBytes\n+\n+ class Config:\n+ json_encoders = {\n+ SecretStr: lambda v: v.get_secret_value() if v else None,\n+ SecretBytes: lambda v: v.get_secret_value() if v else None,\n+ }\n+\n+sm2 = SimpleModelDumpable(password='IAmSensitive', \n+ password_bytes=b'IAmSensitiveBytes')\n+\n+# Standard access methods will not display the secret\n+print(sm2)\n+print(sm2.password)\n+print(sm2.dict())\n+\n+# But the json method will\n+print(sm2.json())\n", "issue": "Configurable SecretStr\nThis issue revisits this comment https://github.com/samuelcolvin/pydantic/issues/462#issuecomment-480326378 by @tiangolo.\r\n\r\nI think it would be a good idea that there is a standard way of getting secrets exported for propagation to other services.\r\n\r\nThis is not something that tends to happen in three tier apps where e.g. the db creds are secret but rather a case that happens _a lot_ when dealing with microservice architectures wherein request payloads may serialize and deserialize multiple times through the end to end request lifecycle.\r\n\r\n`.json()` to me is semantically like `.export` and as such defaulting to revealing secret makes sense. But that would be a breaking change.\r\n\r\nOther approaches:\r\n\r\n- `.json(reveal_secrets=True)`\r\n- `.export()`\r\n\r\nBut maybe we can take the breaking change path via https://github.com/samuelcolvin/pydantic/issues/576 and then:\r\n\r\n- `.json(keep_secrets=True)`\r\n\r\nTo be clear I don't see `.json` as being something used for logging. Something like `structlog` would work with `pydantic.dict()` instead:\r\n\r\n```\r\nlog.info('something', data=model.dict())\r\n```\r\n\r\nI _think_ `.dict` defaulting to maintaining secrets seems right. But we could have, too:\r\n\r\n```\r\nlog.info('something', data=model.dict(reveal_secrets=True))\r\n```\r\n\r\nBut than we should make considerations around API consistency across methods and ensure usability is good overall, not just per case.\n", "before_files": [{"content": "from pydantic import BaseModel, SecretStr, SecretBytes, ValidationError\n\nclass SimpleModel(BaseModel):\n password: SecretStr\n password_bytes: SecretBytes\n\nsm = SimpleModel(password='IAmSensitive', password_bytes=b'IAmSensitiveBytes')\n\n# Standard access methods will not display the secret\nprint(sm)\nprint(sm.password)\nprint(sm.json())\n\n# Use get_secret_value method to see the secret's content.\nprint(sm.password.get_secret_value())\nprint(sm.password_bytes.get_secret_value())\n\ntry:\n SimpleModel(password=[1, 2, 3], password_bytes=[1, 2, 3])\nexcept ValidationError as e:\n print(e)\n", "path": "docs/examples/types_secret_types.py"}]} | 1,039 | 308 |
gh_patches_debug_13660 | rasdani/github-patches | git_diff | Lightning-AI__pytorch-lightning-3042 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect default cuda device when using single gpu other than cuda:0
## 🐛 Bug
The default `cuda` is not set properly to the `trainer.root_gpu` in single-GPU mode. The tensors created with `device='cuda'` will be placed on the incorrect gpu, and the dataloader will acquire memory on the incorrect gpu when `pin_memory=True`.
Maybe we'll need to add
`torch.cuda.set_device(self.trainer.root_gpu)` to https://github.com/PyTorchLightning/pytorch-lightning/blob/5dfc7b157e7febab692036b7392dac8b52f41b87/pytorch_lightning/accelerators/gpu_backend.py#L24
as `DDPBackend` did:
https://github.com/PyTorchLightning/pytorch-lightning/blob/5dfc7b157e7febab692036b7392dac8b52f41b87/pytorch_lightning/accelerators/ddp_backend.py#L195
### To Reproduce
Running the following code will get
`RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!`
#### Code sample
```python
import pytorch_lightning as pl
import torch
from torch import nn
from torch.utils import data
class Dataset(data.Dataset):
def __getitem__(self, item):
return torch.zeros(1)
def __len__(self):
return 5
class Model(pl.LightningModule):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.x = nn.Parameter(torch.zeros(1))
def forward(self, *args, **kwargs):
return self.x
def training_step(self, *args, **kwargs):
return self.x + torch.zeros(1, device='cuda') # RuntimeError.
def train_dataloader(self):
return data.DataLoader(Dataset(), num_workers=1, pin_memory=True)
def configure_optimizers(self):
return torch.optim.SGD(self.parameters(), 1.0)
if __name__ == '__main__':
trainer = pl.Trainer(gpus=[1], num_sanity_val_steps=0, max_epochs=1)
model = Model()
trainer.fit(model)
```
### Expected behavior
No `RuntimeError` occurs.
### Environment
* CUDA:
- GPU:
- available:
- version:
* Packages:
- numpy: 1.18.5
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.0rc16
- tensorboard: 2.3.0
- tqdm: 4.48.2
* System:
- OS: Windows
- architecture:
- 64bit
- WindowsPE
- processor:
- python: 3.7.3
- version: 10.0.18362
### Additional context
<!-- Add any other context about the problem here. -->
</issue>
<code>
[start of pytorch_lightning/accelerators/gpu_backend.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from pytorch_lightning.core import LightningModule
16 from pytorch_lightning.utilities import AMPType
17
18 try:
19 from apex import amp
20 except ImportError:
21 amp = None
22
23
24 class GPUBackend(object):
25 amp_backend: AMPType
26
27 def __init__(self, trainer):
28 self.trainer = trainer
29
30 def setup(self, model):
31
32 # call setup
33 self.trainer.call_setup_hook(model)
34
35 model.cuda(self.trainer.root_gpu)
36
37 # CHOOSE OPTIMIZER
38 # allow for lr schedulers as well
39 optimizers, lr_schedulers, optimizer_frequencies = self.trainer.init_optimizers(model)
40 self.trainer.optimizers = optimizers
41 self.trainer.lr_schedulers = lr_schedulers
42 self.trainer.optimizer_frequencies = optimizer_frequencies
43
44 if self.trainer.amp_backend == AMPType.APEX:
45 model = self._setup_nvidia_apex(model)
46 return model
47
48 def train(self, model):
49 results = self.trainer.run_pretrain_routine(model)
50 return results
51
52 def _setup_nvidia_apex(self, model: LightningModule):
53 model, optimizers = model.configure_apex(amp, model, self.trainer.optimizers, self.trainer.amp_level)
54 self.trainer.optimizers = optimizers
55 self.trainer.reinit_scheduler_properties(self.trainer.optimizers, self.trainer.lr_schedulers)
56 return model
57
[end of pytorch_lightning/accelerators/gpu_backend.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pytorch_lightning/accelerators/gpu_backend.py b/pytorch_lightning/accelerators/gpu_backend.py
--- a/pytorch_lightning/accelerators/gpu_backend.py
+++ b/pytorch_lightning/accelerators/gpu_backend.py
@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import torch
from pytorch_lightning.core import LightningModule
from pytorch_lightning.utilities import AMPType
@@ -32,6 +33,7 @@
# call setup
self.trainer.call_setup_hook(model)
+ torch.cuda.set_device(self.trainer.root_gpu)
model.cuda(self.trainer.root_gpu)
# CHOOSE OPTIMIZER
| {"golden_diff": "diff --git a/pytorch_lightning/accelerators/gpu_backend.py b/pytorch_lightning/accelerators/gpu_backend.py\n--- a/pytorch_lightning/accelerators/gpu_backend.py\n+++ b/pytorch_lightning/accelerators/gpu_backend.py\n@@ -12,6 +12,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+import torch\n from pytorch_lightning.core import LightningModule\n from pytorch_lightning.utilities import AMPType\n \n@@ -32,6 +33,7 @@\n # call setup\n self.trainer.call_setup_hook(model)\n \n+ torch.cuda.set_device(self.trainer.root_gpu)\n model.cuda(self.trainer.root_gpu)\n \n # CHOOSE OPTIMIZER\n", "issue": "Incorrect default cuda device when using single gpu other than cuda:0\n## \ud83d\udc1b Bug\r\n\r\nThe default `cuda` is not set properly to the `trainer.root_gpu` in single-GPU mode. The tensors created with `device='cuda'` will be placed on the incorrect gpu, and the dataloader will acquire memory on the incorrect gpu when `pin_memory=True`.\r\n\r\nMaybe we'll need to add\r\n`torch.cuda.set_device(self.trainer.root_gpu)` to https://github.com/PyTorchLightning/pytorch-lightning/blob/5dfc7b157e7febab692036b7392dac8b52f41b87/pytorch_lightning/accelerators/gpu_backend.py#L24\r\nas `DDPBackend` did:\r\n\r\nhttps://github.com/PyTorchLightning/pytorch-lightning/blob/5dfc7b157e7febab692036b7392dac8b52f41b87/pytorch_lightning/accelerators/ddp_backend.py#L195\r\n\r\n### To Reproduce\r\n\r\nRunning the following code will get \r\n\r\n`RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!`\r\n\r\n#### Code sample\r\n\r\n```python\r\nimport pytorch_lightning as pl\r\nimport torch\r\nfrom torch import nn\r\nfrom torch.utils import data\r\n\r\n\r\nclass Dataset(data.Dataset):\r\n\r\n def __getitem__(self, item):\r\n return torch.zeros(1)\r\n\r\n def __len__(self):\r\n return 5\r\n\r\n\r\nclass Model(pl.LightningModule):\r\n\r\n def __init__(self, *args, **kwargs):\r\n super().__init__(*args, **kwargs)\r\n self.x = nn.Parameter(torch.zeros(1))\r\n\r\n def forward(self, *args, **kwargs):\r\n return self.x\r\n\r\n def training_step(self, *args, **kwargs):\r\n return self.x + torch.zeros(1, device='cuda') # RuntimeError.\r\n\r\n def train_dataloader(self):\r\n return data.DataLoader(Dataset(), num_workers=1, pin_memory=True)\r\n\r\n def configure_optimizers(self):\r\n return torch.optim.SGD(self.parameters(), 1.0)\r\n\r\n\r\nif __name__ == '__main__':\r\n trainer = pl.Trainer(gpus=[1], num_sanity_val_steps=0, max_epochs=1)\r\n model = Model()\r\n trainer.fit(model)\r\n\r\n```\r\n\r\n### Expected behavior\r\n\r\nNo `RuntimeError` occurs.\r\n\r\n### Environment\r\n\r\n* CUDA:\r\n\t- GPU:\r\n\t- available:\r\n\t- version:\r\n* Packages:\r\n\t- numpy: 1.18.5\r\n\t- pyTorch_debug: False\r\n\t- pyTorch_version: 1.6.0\r\n\t- pytorch-lightning: 0.9.0rc16\r\n\t- tensorboard: 2.3.0\r\n\t- tqdm: 4.48.2\r\n* System:\r\n\t- OS: Windows\r\n\t- architecture:\r\n\t\t- 64bit\r\n\t\t- WindowsPE\r\n\t- processor:\r\n\t- python: 3.7.3\r\n\t- version: 10.0.18362\r\n\r\n### Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom pytorch_lightning.core import LightningModule\nfrom pytorch_lightning.utilities import AMPType\n\ntry:\n from apex import amp\nexcept ImportError:\n amp = None\n\n\nclass GPUBackend(object):\n amp_backend: AMPType\n\n def __init__(self, trainer):\n self.trainer = trainer\n\n def setup(self, model):\n\n # call setup\n self.trainer.call_setup_hook(model)\n\n model.cuda(self.trainer.root_gpu)\n\n # CHOOSE OPTIMIZER\n # allow for lr schedulers as well\n optimizers, lr_schedulers, optimizer_frequencies = self.trainer.init_optimizers(model)\n self.trainer.optimizers = optimizers\n self.trainer.lr_schedulers = lr_schedulers\n self.trainer.optimizer_frequencies = optimizer_frequencies\n\n if self.trainer.amp_backend == AMPType.APEX:\n model = self._setup_nvidia_apex(model)\n return model\n\n def train(self, model):\n results = self.trainer.run_pretrain_routine(model)\n return results\n\n def _setup_nvidia_apex(self, model: LightningModule):\n model, optimizers = model.configure_apex(amp, model, self.trainer.optimizers, self.trainer.amp_level)\n self.trainer.optimizers = optimizers\n self.trainer.reinit_scheduler_properties(self.trainer.optimizers, self.trainer.lr_schedulers)\n return model\n", "path": "pytorch_lightning/accelerators/gpu_backend.py"}]} | 1,810 | 173 |
gh_patches_debug_6024 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-340 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SQLAlchemy executemany not detected in hooks
In the Django ORM integration, we record `executemany` calls as the `SQL/Many` operation, but this is missing in the SQLAlchemy integration.
</issue>
<code>
[start of src/scout_apm/sqlalchemy.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from sqlalchemy import event
5
6 from scout_apm.core.tracked_request import TrackedRequest
7
8
9 def before_cursor_execute(conn, cursor, statement, parameters, context, executemany):
10 tracked_request = TrackedRequest.instance()
11 span = tracked_request.start_span(operation="SQL/Query")
12 span.tag("db.statement", statement)
13
14
15 def after_cursor_execute(conn, cursor, statement, parameters, context, executemany):
16 tracked_request = TrackedRequest.instance()
17 span = tracked_request.current_span()
18 if span is not None:
19 tracked_request.callset.update(statement, 1, span.duration())
20 if tracked_request.callset.should_capture_backtrace(statement):
21 span.capture_backtrace()
22 tracked_request.stop_span()
23
24
25 def instrument_sqlalchemy(engine):
26 if getattr(engine, "_scout_instrumented", False):
27 return
28 event.listen(engine, "before_cursor_execute", before_cursor_execute)
29 event.listen(engine, "after_cursor_execute", after_cursor_execute)
30 engine._scout_instrumented = True
31
[end of src/scout_apm/sqlalchemy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/scout_apm/sqlalchemy.py b/src/scout_apm/sqlalchemy.py
--- a/src/scout_apm/sqlalchemy.py
+++ b/src/scout_apm/sqlalchemy.py
@@ -7,8 +7,12 @@
def before_cursor_execute(conn, cursor, statement, parameters, context, executemany):
+ if executemany:
+ operation = "SQL/Many"
+ else:
+ operation = "SQL/Query"
tracked_request = TrackedRequest.instance()
- span = tracked_request.start_span(operation="SQL/Query")
+ span = tracked_request.start_span(operation=operation)
span.tag("db.statement", statement)
| {"golden_diff": "diff --git a/src/scout_apm/sqlalchemy.py b/src/scout_apm/sqlalchemy.py\n--- a/src/scout_apm/sqlalchemy.py\n+++ b/src/scout_apm/sqlalchemy.py\n@@ -7,8 +7,12 @@\n \n \n def before_cursor_execute(conn, cursor, statement, parameters, context, executemany):\n+ if executemany:\n+ operation = \"SQL/Many\"\n+ else:\n+ operation = \"SQL/Query\"\n tracked_request = TrackedRequest.instance()\n- span = tracked_request.start_span(operation=\"SQL/Query\")\n+ span = tracked_request.start_span(operation=operation)\n span.tag(\"db.statement\", statement)\n", "issue": "SQLAlchemy executemany not detected in hooks\nIn the Django ORM integration, we record `executemany` calls as the `SQL/Many` operation, but this is missing in the SQLAlchemy integration.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom sqlalchemy import event\n\nfrom scout_apm.core.tracked_request import TrackedRequest\n\n\ndef before_cursor_execute(conn, cursor, statement, parameters, context, executemany):\n tracked_request = TrackedRequest.instance()\n span = tracked_request.start_span(operation=\"SQL/Query\")\n span.tag(\"db.statement\", statement)\n\n\ndef after_cursor_execute(conn, cursor, statement, parameters, context, executemany):\n tracked_request = TrackedRequest.instance()\n span = tracked_request.current_span()\n if span is not None:\n tracked_request.callset.update(statement, 1, span.duration())\n if tracked_request.callset.should_capture_backtrace(statement):\n span.capture_backtrace()\n tracked_request.stop_span()\n\n\ndef instrument_sqlalchemy(engine):\n if getattr(engine, \"_scout_instrumented\", False):\n return\n event.listen(engine, \"before_cursor_execute\", before_cursor_execute)\n event.listen(engine, \"after_cursor_execute\", after_cursor_execute)\n engine._scout_instrumented = True\n", "path": "src/scout_apm/sqlalchemy.py"}]} | 878 | 149 |
gh_patches_debug_38250 | rasdani/github-patches | git_diff | DDMAL__CantusDB-536 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Certain Genre Detail pages are _very_ slow to load
I think the problem is trying to display the "chants by genre" table - for a genre with lots of example chants, e.g. [Antiphon](http://206.12.88.113/genre/122), I've tried loading the page several times and keep getting a 502 error.
</issue>
<code>
[start of django/cantusdb_project/main_app/views/genre.py]
1 from typing import Dict, List
2
3 from django.views.generic import ListView
4 from django.views.generic.detail import SingleObjectMixin
5 from extra_views import SearchableListMixin
6 from main_app.models import Genre
7
8
9 class GenreDetailView(SingleObjectMixin, ListView):
10 paginate_by = 100
11 template_name = "genre_detail.html"
12
13 def get_genre_cantus_ids(self, display_unpublished=True) -> List[Dict]:
14 """
15 Get a list with data on each unique ``cantus_id`` related to this Genre.
16
17 The list contains dicts and each dict has the following keys:
18
19 ``cantus_id``: The ``cantus_id``
20 ``num_chants``: The number of Chants that have this ``cantus_id``
21 ``first_incipit``: The incipit of first Chant with this ``cantus_id``
22 ``first_incipit_url``: The url of first Chant with this ``cantus_id``
23
24 Returns:
25 List[Dict]: A list of dicts with data on each unique ``cantus_id``
26 """
27 cantus_ids = (self.object.chant_set
28 .exclude(cantus_id=None)
29 .values_list("cantus_id", flat=True)
30 .distinct("cantus_id")
31 )
32 if not display_unpublished:
33 cantus_ids = cantus_ids.filter(source__published=True)
34
35 cantus_ids_list = list(cantus_ids)
36
37 chant_list = []
38 for cantus_id in cantus_ids_list:
39 chants = self.object.chant_set.filter(cantus_id=cantus_id)
40 num_chants = chants.count()
41 first_chant = chants.first()
42 first_incipit_url = first_chant.get_absolute_url()
43 first_incipit = first_chant.incipit
44 chant_list.append(
45 {
46 "cantus_id": cantus_id,
47 "num_chants": num_chants,
48 "first_incipit": first_incipit,
49 "first_incipit_url": first_incipit_url,
50 }
51 )
52 # Sort list based on number of Chants per cantus_id (descending)
53 chant_list = sorted(chant_list, key=lambda k: k["num_chants"], reverse=True)
54 return chant_list
55
56 def get(self, request, *args, **kwargs):
57 self.object = self.get_object(queryset=Genre.objects.all())
58 return super().get(request, *args, **kwargs)
59
60 def get_context_data(self, **kwargs):
61 context = super().get_context_data(**kwargs)
62 context["genre"] = self.object
63 return context
64
65 def get_queryset(self):
66 display_unpublished = self.request.user.is_authenticated
67 search_term = self.request.GET.get("incipit")
68 if not search_term:
69 return self.get_genre_cantus_ids(display_unpublished=display_unpublished)
70 else:
71 search_term = search_term.strip(" ")
72 filtered_chants = [
73 chant
74 for chant in self.get_genre_cantus_ids(display_unpublished=display_unpublished)
75 if search_term.lower() in chant["first_incipit"].lower()
76 ]
77 return filtered_chants
78
79
80 class GenreListView(SearchableListMixin, ListView):
81 model = Genre
82 paginate_by = 100
83 context_object_name = "genres"
84 template_name = "genre_list.html"
85
86 def get_queryset(self):
87 queryset = super().get_queryset()
88 mass_office = self.request.GET.get("mass_office", None)
89 if mass_office in ["Mass", "Office", "Old Hispanic"]:
90 queryset = queryset.filter(mass_office__contains=mass_office)
91 return queryset.order_by("name")
92
[end of django/cantusdb_project/main_app/views/genre.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/django/cantusdb_project/main_app/views/genre.py b/django/cantusdb_project/main_app/views/genre.py
--- a/django/cantusdb_project/main_app/views/genre.py
+++ b/django/cantusdb_project/main_app/views/genre.py
@@ -1,81 +1,13 @@
-from typing import Dict, List
-
-from django.views.generic import ListView
-from django.views.generic.detail import SingleObjectMixin
+from django.views.generic import DetailView, ListView
from extra_views import SearchableListMixin
from main_app.models import Genre
-class GenreDetailView(SingleObjectMixin, ListView):
- paginate_by = 100
+class GenreDetailView(DetailView):
+ model = Genre
+ context_object_name = "genre"
template_name = "genre_detail.html"
- def get_genre_cantus_ids(self, display_unpublished=True) -> List[Dict]:
- """
- Get a list with data on each unique ``cantus_id`` related to this Genre.
-
- The list contains dicts and each dict has the following keys:
-
- ``cantus_id``: The ``cantus_id``
- ``num_chants``: The number of Chants that have this ``cantus_id``
- ``first_incipit``: The incipit of first Chant with this ``cantus_id``
- ``first_incipit_url``: The url of first Chant with this ``cantus_id``
-
- Returns:
- List[Dict]: A list of dicts with data on each unique ``cantus_id``
- """
- cantus_ids = (self.object.chant_set
- .exclude(cantus_id=None)
- .values_list("cantus_id", flat=True)
- .distinct("cantus_id")
- )
- if not display_unpublished:
- cantus_ids = cantus_ids.filter(source__published=True)
-
- cantus_ids_list = list(cantus_ids)
-
- chant_list = []
- for cantus_id in cantus_ids_list:
- chants = self.object.chant_set.filter(cantus_id=cantus_id)
- num_chants = chants.count()
- first_chant = chants.first()
- first_incipit_url = first_chant.get_absolute_url()
- first_incipit = first_chant.incipit
- chant_list.append(
- {
- "cantus_id": cantus_id,
- "num_chants": num_chants,
- "first_incipit": first_incipit,
- "first_incipit_url": first_incipit_url,
- }
- )
- # Sort list based on number of Chants per cantus_id (descending)
- chant_list = sorted(chant_list, key=lambda k: k["num_chants"], reverse=True)
- return chant_list
-
- def get(self, request, *args, **kwargs):
- self.object = self.get_object(queryset=Genre.objects.all())
- return super().get(request, *args, **kwargs)
-
- def get_context_data(self, **kwargs):
- context = super().get_context_data(**kwargs)
- context["genre"] = self.object
- return context
-
- def get_queryset(self):
- display_unpublished = self.request.user.is_authenticated
- search_term = self.request.GET.get("incipit")
- if not search_term:
- return self.get_genre_cantus_ids(display_unpublished=display_unpublished)
- else:
- search_term = search_term.strip(" ")
- filtered_chants = [
- chant
- for chant in self.get_genre_cantus_ids(display_unpublished=display_unpublished)
- if search_term.lower() in chant["first_incipit"].lower()
- ]
- return filtered_chants
-
class GenreListView(SearchableListMixin, ListView):
model = Genre
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/views/genre.py b/django/cantusdb_project/main_app/views/genre.py\n--- a/django/cantusdb_project/main_app/views/genre.py\n+++ b/django/cantusdb_project/main_app/views/genre.py\n@@ -1,81 +1,13 @@\n-from typing import Dict, List\n-\n-from django.views.generic import ListView\n-from django.views.generic.detail import SingleObjectMixin\n+from django.views.generic import DetailView, ListView\n from extra_views import SearchableListMixin\n from main_app.models import Genre\n \n \n-class GenreDetailView(SingleObjectMixin, ListView):\n- paginate_by = 100\n+class GenreDetailView(DetailView):\n+ model = Genre\n+ context_object_name = \"genre\"\n template_name = \"genre_detail.html\"\n \n- def get_genre_cantus_ids(self, display_unpublished=True) -> List[Dict]:\n- \"\"\"\n- Get a list with data on each unique ``cantus_id`` related to this Genre.\n-\n- The list contains dicts and each dict has the following keys:\n-\n- ``cantus_id``: The ``cantus_id``\n- ``num_chants``: The number of Chants that have this ``cantus_id``\n- ``first_incipit``: The incipit of first Chant with this ``cantus_id``\n- ``first_incipit_url``: The url of first Chant with this ``cantus_id``\n-\n- Returns:\n- List[Dict]: A list of dicts with data on each unique ``cantus_id``\n- \"\"\"\n- cantus_ids = (self.object.chant_set\n- .exclude(cantus_id=None)\n- .values_list(\"cantus_id\", flat=True)\n- .distinct(\"cantus_id\")\n- )\n- if not display_unpublished:\n- cantus_ids = cantus_ids.filter(source__published=True)\n- \n- cantus_ids_list = list(cantus_ids)\n-\n- chant_list = []\n- for cantus_id in cantus_ids_list:\n- chants = self.object.chant_set.filter(cantus_id=cantus_id)\n- num_chants = chants.count()\n- first_chant = chants.first()\n- first_incipit_url = first_chant.get_absolute_url()\n- first_incipit = first_chant.incipit\n- chant_list.append(\n- {\n- \"cantus_id\": cantus_id,\n- \"num_chants\": num_chants,\n- \"first_incipit\": first_incipit,\n- \"first_incipit_url\": first_incipit_url,\n- }\n- )\n- # Sort list based on number of Chants per cantus_id (descending)\n- chant_list = sorted(chant_list, key=lambda k: k[\"num_chants\"], reverse=True)\n- return chant_list\n-\n- def get(self, request, *args, **kwargs):\n- self.object = self.get_object(queryset=Genre.objects.all())\n- return super().get(request, *args, **kwargs)\n-\n- def get_context_data(self, **kwargs):\n- context = super().get_context_data(**kwargs)\n- context[\"genre\"] = self.object\n- return context\n-\n- def get_queryset(self):\n- display_unpublished = self.request.user.is_authenticated\n- search_term = self.request.GET.get(\"incipit\")\n- if not search_term:\n- return self.get_genre_cantus_ids(display_unpublished=display_unpublished)\n- else:\n- search_term = search_term.strip(\" \")\n- filtered_chants = [\n- chant\n- for chant in self.get_genre_cantus_ids(display_unpublished=display_unpublished)\n- if search_term.lower() in chant[\"first_incipit\"].lower()\n- ]\n- return filtered_chants\n-\n \n class GenreListView(SearchableListMixin, ListView):\n model = Genre\n", "issue": "Certain Genre Detail pages are _very_ slow to load\nI think the problem is trying to display the \"chants by genre\" table - for a genre with lots of example chants, e.g. [Antiphon](http://206.12.88.113/genre/122), I've tried loading the page several times and keep getting a 502 error.\n", "before_files": [{"content": "from typing import Dict, List\n\nfrom django.views.generic import ListView\nfrom django.views.generic.detail import SingleObjectMixin\nfrom extra_views import SearchableListMixin\nfrom main_app.models import Genre\n\n\nclass GenreDetailView(SingleObjectMixin, ListView):\n paginate_by = 100\n template_name = \"genre_detail.html\"\n\n def get_genre_cantus_ids(self, display_unpublished=True) -> List[Dict]:\n \"\"\"\n Get a list with data on each unique ``cantus_id`` related to this Genre.\n\n The list contains dicts and each dict has the following keys:\n\n ``cantus_id``: The ``cantus_id``\n ``num_chants``: The number of Chants that have this ``cantus_id``\n ``first_incipit``: The incipit of first Chant with this ``cantus_id``\n ``first_incipit_url``: The url of first Chant with this ``cantus_id``\n\n Returns:\n List[Dict]: A list of dicts with data on each unique ``cantus_id``\n \"\"\"\n cantus_ids = (self.object.chant_set\n .exclude(cantus_id=None)\n .values_list(\"cantus_id\", flat=True)\n .distinct(\"cantus_id\")\n )\n if not display_unpublished:\n cantus_ids = cantus_ids.filter(source__published=True)\n \n cantus_ids_list = list(cantus_ids)\n\n chant_list = []\n for cantus_id in cantus_ids_list:\n chants = self.object.chant_set.filter(cantus_id=cantus_id)\n num_chants = chants.count()\n first_chant = chants.first()\n first_incipit_url = first_chant.get_absolute_url()\n first_incipit = first_chant.incipit\n chant_list.append(\n {\n \"cantus_id\": cantus_id,\n \"num_chants\": num_chants,\n \"first_incipit\": first_incipit,\n \"first_incipit_url\": first_incipit_url,\n }\n )\n # Sort list based on number of Chants per cantus_id (descending)\n chant_list = sorted(chant_list, key=lambda k: k[\"num_chants\"], reverse=True)\n return chant_list\n\n def get(self, request, *args, **kwargs):\n self.object = self.get_object(queryset=Genre.objects.all())\n return super().get(request, *args, **kwargs)\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context[\"genre\"] = self.object\n return context\n\n def get_queryset(self):\n display_unpublished = self.request.user.is_authenticated\n search_term = self.request.GET.get(\"incipit\")\n if not search_term:\n return self.get_genre_cantus_ids(display_unpublished=display_unpublished)\n else:\n search_term = search_term.strip(\" \")\n filtered_chants = [\n chant\n for chant in self.get_genre_cantus_ids(display_unpublished=display_unpublished)\n if search_term.lower() in chant[\"first_incipit\"].lower()\n ]\n return filtered_chants\n\n\nclass GenreListView(SearchableListMixin, ListView):\n model = Genre\n paginate_by = 100\n context_object_name = \"genres\"\n template_name = \"genre_list.html\"\n\n def get_queryset(self):\n queryset = super().get_queryset()\n mass_office = self.request.GET.get(\"mass_office\", None)\n if mass_office in [\"Mass\", \"Office\", \"Old Hispanic\"]:\n queryset = queryset.filter(mass_office__contains=mass_office)\n return queryset.order_by(\"name\")\n", "path": "django/cantusdb_project/main_app/views/genre.py"}]} | 1,599 | 866 |
gh_patches_debug_15081 | rasdani/github-patches | git_diff | vispy__vispy-713 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement reusable GLSL functions for color space transformations
At least RGB <-> HSV. Put them in `vispy.visuals.glsl`.
</issue>
<code>
[start of vispy/visuals/glsl/color.py]
1 """Color-related GLSL functions."""
2
3
4 # -----------------------------------------------------------------------------
5 # Colormaps
6 # -----------------------------------------------------------------------------
7
8 """Texture lookup for a discrete color map stored in a 1*ncolors 2D texture.
9
10 The `get_color()` function returns a RGB color from an index integer
11 referring to the colormap.
12
13
14 Inputs
15 ------
16
17 index (int): The color index.
18
19
20 Template variables
21 ------------------
22
23 $ncolors (int): The number of colors in the colormap.
24
25 $colormap (2D texture sampler): The sampler for the 2D 1*ncolors colormap
26 texture.
27
28
29 Outputs
30 -------
31
32 color (vec3): The color.
33
34 """
35 COLORMAP_TEXTURE = """
36 vec3 get_color(int index) {
37 float x = (float(index) + .5) / float($ncolors);
38 return texture2D($colormap, vec2(x, .5)).rgb;
39 }
40 """
41
[end of vispy/visuals/glsl/color.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/vispy/visuals/glsl/color.py b/vispy/visuals/glsl/color.py
--- a/vispy/visuals/glsl/color.py
+++ b/vispy/visuals/glsl/color.py
@@ -38,3 +38,33 @@
return texture2D($colormap, vec2(x, .5)).rgb;
}
"""
+
+
+# -----------------------------------------------------------------------------
+# Color space transformations
+# -----------------------------------------------------------------------------
+
+# From http://lolengine.net/blog/2013/07/27/rgb-to-hsv-in-glsl
+# TODO: unit tests
+HSV_TO_RGB = """
+vec3 hsv_to_rgb(vec3 c)
+{
+ vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
+ vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
+ return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
+}
+"""
+
+
+RGB_TO_HSV = """
+vec3 rgb_to_hsv(vec3 c)
+{
+ vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
+ vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
+ vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
+
+ float d = q.x - min(q.w, q.y);
+ float e = 1.0e-10;
+ return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
+}
+"""
| {"golden_diff": "diff --git a/vispy/visuals/glsl/color.py b/vispy/visuals/glsl/color.py\n--- a/vispy/visuals/glsl/color.py\n+++ b/vispy/visuals/glsl/color.py\n@@ -38,3 +38,33 @@\n return texture2D($colormap, vec2(x, .5)).rgb;\n }\n \"\"\"\n+\n+\n+# -----------------------------------------------------------------------------\n+# Color space transformations\n+# -----------------------------------------------------------------------------\n+\n+# From http://lolengine.net/blog/2013/07/27/rgb-to-hsv-in-glsl\n+# TODO: unit tests\n+HSV_TO_RGB = \"\"\"\n+vec3 hsv_to_rgb(vec3 c)\n+{\n+ vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);\n+ vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);\n+ return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);\n+}\n+\"\"\"\n+\n+\n+RGB_TO_HSV = \"\"\"\n+vec3 rgb_to_hsv(vec3 c)\n+{\n+ vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);\n+ vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));\n+ vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));\n+\n+ float d = q.x - min(q.w, q.y);\n+ float e = 1.0e-10;\n+ return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);\n+}\n+\"\"\"\n", "issue": "Implement reusable GLSL functions for color space transformations\nAt least RGB <-> HSV. Put them in `vispy.visuals.glsl`.\n\n", "before_files": [{"content": "\"\"\"Color-related GLSL functions.\"\"\"\n\n\n# -----------------------------------------------------------------------------\n# Colormaps\n# -----------------------------------------------------------------------------\n\n\"\"\"Texture lookup for a discrete color map stored in a 1*ncolors 2D texture.\n\nThe `get_color()` function returns a RGB color from an index integer\nreferring to the colormap.\n\n\nInputs\n------\n\nindex (int): The color index.\n\n\nTemplate variables\n------------------\n\n$ncolors (int): The number of colors in the colormap.\n\n$colormap (2D texture sampler): The sampler for the 2D 1*ncolors colormap\n texture.\n\n\nOutputs\n-------\n\ncolor (vec3): The color.\n\n\"\"\"\nCOLORMAP_TEXTURE = \"\"\"\nvec3 get_color(int index) {\n float x = (float(index) + .5) / float($ncolors);\n return texture2D($colormap, vec2(x, .5)).rgb;\n}\n\"\"\"\n", "path": "vispy/visuals/glsl/color.py"}]} | 842 | 439 |
gh_patches_debug_39445 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1558 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Garden waste type to Cornwall, UK source
### I propose a feature for:
Sources
### Describe your wanted feature
Can you please add Garden to the cornwall_gov_uk.py source?
Change - COLLECTIONS = {"Rubbish", "Recycling"}
to COLLECTIONS = {"Rubbish", "Recycling", "Garden"}
For my house I'm getting the following html snip so I think it should work.
<div id="my-waste-collection">
<h3 class="font-weight-bolder">Current collections</h3>
<div class="row text-center">
<div class="col-12 col-md-4">
<div id="recycling" class="collection text-center service">
<span>Recycling</span>
<span>SAT</span>
<span>30 Dec</span>
</div>
</div>
<div class="col-12 col-md-4">
<div id="rubbish" class="collection text-center service">
<span>Rubbish</span>
<span>TUE</span>
<span>2 Jan</span>
</div>
</div>
<div class="col-12 col-md-4">
<div id="gardenhassubscription" class="collection text-cente r service">
<span>Garden</span>
<span>FRI</span>
<span>22 Dec</span>
</div>
</div>
</div>
</issue>
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py]
1 from datetime import date, datetime
2
3 import requests
4 from bs4 import BeautifulSoup
5 from waste_collection_schedule import Collection
6
7 TITLE = "Cornwall Council"
8 DESCRIPTION = "Source for cornwall.gov.uk services for Cornwall Council"
9 URL = "https://cornwall.gov.uk"
10 TEST_CASES = {
11 "known_uprn": {"uprn": "100040118005"},
12 "unknown_uprn": {"postcode": "TR261SP", "housenumberorname": "7"},
13 }
14
15 SEARCH_URLS = {
16 "uprn_search": "https://www.cornwall.gov.uk/my-area/",
17 "collection_search": "https://www.cornwall.gov.uk/umbraco/Surface/Waste/MyCollectionDays?subscribe=False",
18 }
19 COLLECTIONS = {"Rubbish", "Recycling"}
20
21
22 class Source:
23 def __init__(
24 self, uprn=None, postcode=None, housenumberorname=None
25 ): # argX correspond to the args dict in the source configuration
26 self._uprn = uprn
27 self._postcode = postcode
28 self._housenumberorname = housenumberorname
29
30 def fetch(self):
31 entries = []
32 session = requests.Session()
33
34 # Find the UPRN based on the postcode and the property name/number
35 if self._uprn is None:
36 args = {"Postcode": self._postcode}
37 r = session.get(SEARCH_URLS["uprn_search"], params=args)
38 r.raise_for_status()
39 soup = BeautifulSoup(r.text, features="html.parser")
40 propertyUprns = soup.find(id="Uprn").find_all("option")
41 for match in propertyUprns:
42 if match.text.startswith(self._housenumberorname):
43 self._uprn = match["value"]
44
45 # Get the collection days based on the UPRN (either supplied through arguments or searched for above)
46 if self._uprn is not None:
47 args = {"uprn": self._uprn}
48 r = session.get(SEARCH_URLS["collection_search"], params=args)
49 r.raise_for_status()
50 soup = BeautifulSoup(r.text, features="html.parser")
51 for collection in COLLECTIONS:
52 d = (
53 soup.find(id=collection.lower()).find_all("span")[-1].text
54 + " "
55 + str(date.today().year)
56 )
57
58 entries.append(
59 Collection(
60 datetime.strptime(d, "%d %b %Y").date(),
61 collection,
62 )
63 )
64
65 return entries
66
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py
@@ -2,7 +2,7 @@
import requests
from bs4 import BeautifulSoup
-from waste_collection_schedule import Collection
+from waste_collection_schedule import Collection # type: ignore[attr-defined]
TITLE = "Cornwall Council"
DESCRIPTION = "Source for cornwall.gov.uk services for Cornwall Council"
@@ -10,13 +10,18 @@
TEST_CASES = {
"known_uprn": {"uprn": "100040118005"},
"unknown_uprn": {"postcode": "TR261SP", "housenumberorname": "7"},
+ "uprn_with_garden": {"uprn": "100040080721"},
}
SEARCH_URLS = {
"uprn_search": "https://www.cornwall.gov.uk/my-area/",
"collection_search": "https://www.cornwall.gov.uk/umbraco/Surface/Waste/MyCollectionDays?subscribe=False",
}
-COLLECTIONS = {"Rubbish", "Recycling"}
+ICON_MAP = {
+ "Rubbish": "mdi:delete",
+ "Recycling": "mdi:recycle",
+ "Garden": "mdi:flower",
+}
class Source:
@@ -41,25 +46,29 @@
for match in propertyUprns:
if match.text.startswith(self._housenumberorname):
self._uprn = match["value"]
+ if self._uprn is None:
+ raise Exception(
+ f"No UPRN found for {self._postcode} {self._housenumberorname}"
+ )
# Get the collection days based on the UPRN (either supplied through arguments or searched for above)
- if self._uprn is not None:
- args = {"uprn": self._uprn}
- r = session.get(SEARCH_URLS["collection_search"], params=args)
- r.raise_for_status()
- soup = BeautifulSoup(r.text, features="html.parser")
- for collection in COLLECTIONS:
- d = (
- soup.find(id=collection.lower()).find_all("span")[-1].text
- + " "
- + str(date.today().year)
- )
+ args = {"uprn": self._uprn}
+ r = session.get(SEARCH_URLS["collection_search"], params=args)
+ r.raise_for_status()
+ soup = BeautifulSoup(r.text, features="html.parser")
+ for collection_div in soup.find_all("div", class_="collection"):
+ spans = collection_div.find_all("span")
+ if not spans:
+ continue
+ collection = spans[0].text
+ d = spans[-1].text + " " + str(date.today().year)
- entries.append(
- Collection(
- datetime.strptime(d, "%d %b %Y").date(),
- collection,
- )
+ entries.append(
+ Collection(
+ datetime.strptime(d, "%d %b %Y").date(),
+ collection,
+ icon=ICON_MAP.get(collection),
)
+ )
return entries
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py\n@@ -2,7 +2,7 @@\n \n import requests\n from bs4 import BeautifulSoup\n-from waste_collection_schedule import Collection\n+from waste_collection_schedule import Collection # type: ignore[attr-defined]\n \n TITLE = \"Cornwall Council\"\n DESCRIPTION = \"Source for cornwall.gov.uk services for Cornwall Council\"\n@@ -10,13 +10,18 @@\n TEST_CASES = {\n \"known_uprn\": {\"uprn\": \"100040118005\"},\n \"unknown_uprn\": {\"postcode\": \"TR261SP\", \"housenumberorname\": \"7\"},\n+ \"uprn_with_garden\": {\"uprn\": \"100040080721\"},\n }\n \n SEARCH_URLS = {\n \"uprn_search\": \"https://www.cornwall.gov.uk/my-area/\",\n \"collection_search\": \"https://www.cornwall.gov.uk/umbraco/Surface/Waste/MyCollectionDays?subscribe=False\",\n }\n-COLLECTIONS = {\"Rubbish\", \"Recycling\"}\n+ICON_MAP = {\n+ \"Rubbish\": \"mdi:delete\",\n+ \"Recycling\": \"mdi:recycle\",\n+ \"Garden\": \"mdi:flower\",\n+}\n \n \n class Source:\n@@ -41,25 +46,29 @@\n for match in propertyUprns:\n if match.text.startswith(self._housenumberorname):\n self._uprn = match[\"value\"]\n+ if self._uprn is None:\n+ raise Exception(\n+ f\"No UPRN found for {self._postcode} {self._housenumberorname}\"\n+ )\n \n # Get the collection days based on the UPRN (either supplied through arguments or searched for above)\n- if self._uprn is not None:\n- args = {\"uprn\": self._uprn}\n- r = session.get(SEARCH_URLS[\"collection_search\"], params=args)\n- r.raise_for_status()\n- soup = BeautifulSoup(r.text, features=\"html.parser\")\n- for collection in COLLECTIONS:\n- d = (\n- soup.find(id=collection.lower()).find_all(\"span\")[-1].text\n- + \" \"\n- + str(date.today().year)\n- )\n+ args = {\"uprn\": self._uprn}\n+ r = session.get(SEARCH_URLS[\"collection_search\"], params=args)\n+ r.raise_for_status()\n+ soup = BeautifulSoup(r.text, features=\"html.parser\")\n+ for collection_div in soup.find_all(\"div\", class_=\"collection\"):\n+ spans = collection_div.find_all(\"span\")\n+ if not spans:\n+ continue\n+ collection = spans[0].text\n+ d = spans[-1].text + \" \" + str(date.today().year)\n \n- entries.append(\n- Collection(\n- datetime.strptime(d, \"%d %b %Y\").date(),\n- collection,\n- )\n+ entries.append(\n+ Collection(\n+ datetime.strptime(d, \"%d %b %Y\").date(),\n+ collection,\n+ icon=ICON_MAP.get(collection),\n )\n+ )\n \n return entries\n", "issue": "Add Garden waste type to Cornwall, UK source\n### I propose a feature for:\n\nSources\n\n### Describe your wanted feature\n\nCan you please add Garden to the cornwall_gov_uk.py source?\r\n\r\nChange - COLLECTIONS = {\"Rubbish\", \"Recycling\"}\r\nto COLLECTIONS = {\"Rubbish\", \"Recycling\", \"Garden\"}\r\n\r\nFor my house I'm getting the following html snip so I think it should work. \r\n\r\n<div id=\"my-waste-collection\">\r\n <h3 class=\"font-weight-bolder\">Current collections</h3>\r\n <div class=\"row text-center\">\r\n <div class=\"col-12 col-md-4\">\r\n <div id=\"recycling\" class=\"collection text-center service\">\r\n <span>Recycling</span>\r\n <span>SAT</span>\r\n <span>30 Dec</span>\r\n </div>\r\n </div>\r\n <div class=\"col-12 col-md-4\">\r\n <div id=\"rubbish\" class=\"collection text-center service\">\r\n <span>Rubbish</span>\r\n <span>TUE</span>\r\n <span>2 Jan</span>\r\n </div>\r\n </div>\r\n <div class=\"col-12 col-md-4\">\r\n <div id=\"gardenhassubscription\" class=\"collection text-cente r service\">\r\n <span>Garden</span>\r\n <span>FRI</span>\r\n <span>22 Dec</span>\r\n </div>\r\n </div>\r\n </div>\n", "before_files": [{"content": "from datetime import date, datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection\n\nTITLE = \"Cornwall Council\"\nDESCRIPTION = \"Source for cornwall.gov.uk services for Cornwall Council\"\nURL = \"https://cornwall.gov.uk\"\nTEST_CASES = {\n \"known_uprn\": {\"uprn\": \"100040118005\"},\n \"unknown_uprn\": {\"postcode\": \"TR261SP\", \"housenumberorname\": \"7\"},\n}\n\nSEARCH_URLS = {\n \"uprn_search\": \"https://www.cornwall.gov.uk/my-area/\",\n \"collection_search\": \"https://www.cornwall.gov.uk/umbraco/Surface/Waste/MyCollectionDays?subscribe=False\",\n}\nCOLLECTIONS = {\"Rubbish\", \"Recycling\"}\n\n\nclass Source:\n def __init__(\n self, uprn=None, postcode=None, housenumberorname=None\n ): # argX correspond to the args dict in the source configuration\n self._uprn = uprn\n self._postcode = postcode\n self._housenumberorname = housenumberorname\n\n def fetch(self):\n entries = []\n session = requests.Session()\n\n # Find the UPRN based on the postcode and the property name/number\n if self._uprn is None:\n args = {\"Postcode\": self._postcode}\n r = session.get(SEARCH_URLS[\"uprn_search\"], params=args)\n r.raise_for_status()\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n propertyUprns = soup.find(id=\"Uprn\").find_all(\"option\")\n for match in propertyUprns:\n if match.text.startswith(self._housenumberorname):\n self._uprn = match[\"value\"]\n\n # Get the collection days based on the UPRN (either supplied through arguments or searched for above)\n if self._uprn is not None:\n args = {\"uprn\": self._uprn}\n r = session.get(SEARCH_URLS[\"collection_search\"], params=args)\n r.raise_for_status()\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n for collection in COLLECTIONS:\n d = (\n soup.find(id=collection.lower()).find_all(\"span\")[-1].text\n + \" \"\n + str(date.today().year)\n )\n\n entries.append(\n Collection(\n datetime.strptime(d, \"%d %b %Y\").date(),\n collection,\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py"}]} | 1,569 | 783 |
gh_patches_debug_25618 | rasdani/github-patches | git_diff | fonttools__fonttools-2014 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove MacOS module in macCreatorType
utils.macCreatorType uses either the `xattr` module or the `MacOS` module to do its thing. But the MacOS module has been removed from Python 3.x. If we only support 3.x, we should remove the `MacOS`-related code.
</issue>
<code>
[start of Lib/fontTools/misc/macCreatorType.py]
1 from fontTools.misc.py23 import *
2 import sys
3 try:
4 import xattr
5 except ImportError:
6 xattr = None
7 try:
8 import MacOS
9 except ImportError:
10 MacOS = None
11
12
13 def _reverseString(s):
14 s = list(s)
15 s.reverse()
16 return strjoin(s)
17
18
19 def getMacCreatorAndType(path):
20 """Returns file creator and file type codes for a path.
21
22 Args:
23 path (str): A file path.
24
25 Returns:
26 A tuple of two :py:class:`fontTools.py23.Tag` objects, the first
27 representing the file creator and the second representing the
28 file type.
29 """
30 if xattr is not None:
31 try:
32 finderInfo = xattr.getxattr(path, 'com.apple.FinderInfo')
33 except (KeyError, IOError):
34 pass
35 else:
36 fileType = Tag(finderInfo[:4])
37 fileCreator = Tag(finderInfo[4:8])
38 return fileCreator, fileType
39 if MacOS is not None:
40 fileCreator, fileType = MacOS.GetCreatorAndType(path)
41 if sys.version_info[:2] < (2, 7) and sys.byteorder == "little":
42 # work around bug in MacOS.GetCreatorAndType() on intel:
43 # http://bugs.python.org/issue1594
44 # (fixed with Python 2.7)
45 fileCreator = _reverseString(fileCreator)
46 fileType = _reverseString(fileType)
47 return fileCreator, fileType
48 else:
49 return None, None
50
51
52 def setMacCreatorAndType(path, fileCreator, fileType):
53 """Set file creator and file type codes for a path.
54
55 Note that if the ``xattr`` module is not installed, no action is
56 taken but no error is raised.
57
58 Args:
59 path (str): A file path.
60 fileCreator: A four-character file creator tag.
61 fileType: A four-character file type tag.
62
63 """
64 if xattr is not None:
65 from fontTools.misc.textTools import pad
66 if not all(len(s) == 4 for s in (fileCreator, fileType)):
67 raise TypeError('arg must be string of 4 chars')
68 finderInfo = pad(bytesjoin([fileType, fileCreator]), 32)
69 xattr.setxattr(path, 'com.apple.FinderInfo', finderInfo)
70 if MacOS is not None:
71 MacOS.SetCreatorAndType(path, fileCreator, fileType)
72
[end of Lib/fontTools/misc/macCreatorType.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Lib/fontTools/misc/macCreatorType.py b/Lib/fontTools/misc/macCreatorType.py
--- a/Lib/fontTools/misc/macCreatorType.py
+++ b/Lib/fontTools/misc/macCreatorType.py
@@ -4,10 +4,6 @@
import xattr
except ImportError:
xattr = None
-try:
- import MacOS
-except ImportError:
- MacOS = None
def _reverseString(s):
@@ -36,17 +32,7 @@
fileType = Tag(finderInfo[:4])
fileCreator = Tag(finderInfo[4:8])
return fileCreator, fileType
- if MacOS is not None:
- fileCreator, fileType = MacOS.GetCreatorAndType(path)
- if sys.version_info[:2] < (2, 7) and sys.byteorder == "little":
- # work around bug in MacOS.GetCreatorAndType() on intel:
- # http://bugs.python.org/issue1594
- # (fixed with Python 2.7)
- fileCreator = _reverseString(fileCreator)
- fileType = _reverseString(fileType)
- return fileCreator, fileType
- else:
- return None, None
+ return None, None
def setMacCreatorAndType(path, fileCreator, fileType):
@@ -67,5 +53,3 @@
raise TypeError('arg must be string of 4 chars')
finderInfo = pad(bytesjoin([fileType, fileCreator]), 32)
xattr.setxattr(path, 'com.apple.FinderInfo', finderInfo)
- if MacOS is not None:
- MacOS.SetCreatorAndType(path, fileCreator, fileType)
| {"golden_diff": "diff --git a/Lib/fontTools/misc/macCreatorType.py b/Lib/fontTools/misc/macCreatorType.py\n--- a/Lib/fontTools/misc/macCreatorType.py\n+++ b/Lib/fontTools/misc/macCreatorType.py\n@@ -4,10 +4,6 @@\n \timport xattr\n except ImportError:\n \txattr = None\n-try:\n-\timport MacOS\n-except ImportError:\n-\tMacOS = None\n \n \n def _reverseString(s):\n@@ -36,17 +32,7 @@\n \t\t\tfileType = Tag(finderInfo[:4])\n \t\t\tfileCreator = Tag(finderInfo[4:8])\n \t\t\treturn fileCreator, fileType\n-\tif MacOS is not None:\n-\t\tfileCreator, fileType = MacOS.GetCreatorAndType(path)\n-\t\tif sys.version_info[:2] < (2, 7) and sys.byteorder == \"little\":\n-\t\t\t# work around bug in MacOS.GetCreatorAndType() on intel:\n-\t\t\t# http://bugs.python.org/issue1594\n-\t\t\t# (fixed with Python 2.7)\n-\t\t\tfileCreator = _reverseString(fileCreator)\n-\t\t\tfileType = _reverseString(fileType)\n-\t\treturn fileCreator, fileType\n-\telse:\n-\t\treturn None, None\n+\treturn None, None\n \n \n def setMacCreatorAndType(path, fileCreator, fileType):\n@@ -67,5 +53,3 @@\n \t\t\traise TypeError('arg must be string of 4 chars')\n \t\tfinderInfo = pad(bytesjoin([fileType, fileCreator]), 32)\n \t\txattr.setxattr(path, 'com.apple.FinderInfo', finderInfo)\n-\tif MacOS is not None:\n-\t\tMacOS.SetCreatorAndType(path, fileCreator, fileType)\n", "issue": "Remove MacOS module in macCreatorType\nutils.macCreatorType uses either the `xattr` module or the `MacOS` module to do its thing. But the MacOS module has been removed from Python 3.x. If we only support 3.x, we should remove the `MacOS`-related code.\n", "before_files": [{"content": "from fontTools.misc.py23 import *\nimport sys\ntry:\n\timport xattr\nexcept ImportError:\n\txattr = None\ntry:\n\timport MacOS\nexcept ImportError:\n\tMacOS = None\n\n\ndef _reverseString(s):\n\ts = list(s)\n\ts.reverse()\n\treturn strjoin(s)\n\n\ndef getMacCreatorAndType(path):\n\t\"\"\"Returns file creator and file type codes for a path.\n\n\tArgs:\n\t\tpath (str): A file path.\n\n\tReturns:\n\t\tA tuple of two :py:class:`fontTools.py23.Tag` objects, the first\n\t\trepresenting the file creator and the second representing the\n\t\tfile type.\n\t\"\"\"\n\tif xattr is not None:\n\t\ttry:\n\t\t\tfinderInfo = xattr.getxattr(path, 'com.apple.FinderInfo')\n\t\texcept (KeyError, IOError):\n\t\t\tpass\n\t\telse:\n\t\t\tfileType = Tag(finderInfo[:4])\n\t\t\tfileCreator = Tag(finderInfo[4:8])\n\t\t\treturn fileCreator, fileType\n\tif MacOS is not None:\n\t\tfileCreator, fileType = MacOS.GetCreatorAndType(path)\n\t\tif sys.version_info[:2] < (2, 7) and sys.byteorder == \"little\":\n\t\t\t# work around bug in MacOS.GetCreatorAndType() on intel:\n\t\t\t# http://bugs.python.org/issue1594\n\t\t\t# (fixed with Python 2.7)\n\t\t\tfileCreator = _reverseString(fileCreator)\n\t\t\tfileType = _reverseString(fileType)\n\t\treturn fileCreator, fileType\n\telse:\n\t\treturn None, None\n\n\ndef setMacCreatorAndType(path, fileCreator, fileType):\n\t\"\"\"Set file creator and file type codes for a path.\n\n\tNote that if the ``xattr`` module is not installed, no action is\n\ttaken but no error is raised.\n\n\tArgs:\n\t\tpath (str): A file path.\n\t\tfileCreator: A four-character file creator tag.\n\t\tfileType: A four-character file type tag.\n\n\t\"\"\"\n\tif xattr is not None:\n\t\tfrom fontTools.misc.textTools import pad\n\t\tif not all(len(s) == 4 for s in (fileCreator, fileType)):\n\t\t\traise TypeError('arg must be string of 4 chars')\n\t\tfinderInfo = pad(bytesjoin([fileType, fileCreator]), 32)\n\t\txattr.setxattr(path, 'com.apple.FinderInfo', finderInfo)\n\tif MacOS is not None:\n\t\tMacOS.SetCreatorAndType(path, fileCreator, fileType)\n", "path": "Lib/fontTools/misc/macCreatorType.py"}]} | 1,297 | 379 |
gh_patches_debug_90 | rasdani/github-patches | git_diff | archlinux__archinstall-470 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PermissionError redeclared in exceptions.py shadows built-in PermissionError class
```
class PermissionError(BaseException):
pass
```
Can we remove this and just use the built-in? Or we could rename ours to something different.
</issue>
<code>
[start of archinstall/lib/exceptions.py]
1 class RequirementError(BaseException):
2 pass
3
4
5 class DiskError(BaseException):
6 pass
7
8
9 class UnknownFilesystemFormat(BaseException):
10 pass
11
12
13 class ProfileError(BaseException):
14 pass
15
16
17 class SysCallError(BaseException):
18 def __init__(self, message, exit_code):
19 super(SysCallError, self).__init__(message)
20 self.message = message
21 self.exit_code = exit_code
22
23
24 class ProfileNotFound(BaseException):
25 pass
26
27
28 class HardwareIncompatibilityError(BaseException):
29 pass
30
31
32 class PermissionError(BaseException):
33 pass
34
35
36 class UserError(BaseException):
37 pass
38
39
40 class ServiceException(BaseException):
41 pass
42
[end of archinstall/lib/exceptions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/archinstall/lib/exceptions.py b/archinstall/lib/exceptions.py
--- a/archinstall/lib/exceptions.py
+++ b/archinstall/lib/exceptions.py
@@ -29,10 +29,6 @@
pass
-class PermissionError(BaseException):
- pass
-
-
class UserError(BaseException):
pass
| {"golden_diff": "diff --git a/archinstall/lib/exceptions.py b/archinstall/lib/exceptions.py\n--- a/archinstall/lib/exceptions.py\n+++ b/archinstall/lib/exceptions.py\n@@ -29,10 +29,6 @@\n \tpass\n \n \n-class PermissionError(BaseException):\n-\tpass\n-\n-\n class UserError(BaseException):\n \tpass\n", "issue": "PermissionError redeclared in exceptions.py shadows built-in PermissionError class\n```\r\nclass PermissionError(BaseException):\r\n\tpass\r\n```\r\n\r\nCan we remove this and just use the built-in? Or we could rename ours to something different.\n", "before_files": [{"content": "class RequirementError(BaseException):\n\tpass\n\n\nclass DiskError(BaseException):\n\tpass\n\n\nclass UnknownFilesystemFormat(BaseException):\n\tpass\n\n\nclass ProfileError(BaseException):\n\tpass\n\n\nclass SysCallError(BaseException):\n\tdef __init__(self, message, exit_code):\n\t\tsuper(SysCallError, self).__init__(message)\n\t\tself.message = message\n\t\tself.exit_code = exit_code\n\n\nclass ProfileNotFound(BaseException):\n\tpass\n\n\nclass HardwareIncompatibilityError(BaseException):\n\tpass\n\n\nclass PermissionError(BaseException):\n\tpass\n\n\nclass UserError(BaseException):\n\tpass\n\n\nclass ServiceException(BaseException):\n\tpass\n", "path": "archinstall/lib/exceptions.py"}]} | 809 | 74 |
gh_patches_debug_26469 | rasdani/github-patches | git_diff | optuna__optuna-1074 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Apply lazy import for `optuna.dashboard`.
Optuna always imports the dependencies of `optuna.dashboard` (e.g., `bokeh`), which makes unnecessary overhead in many use cases. Similar to #334, we can apply lazy import for them.
</issue>
<code>
[start of optuna/__init__.py]
1 from optuna import dashboard # NOQA
2 from optuna import distributions # NOQA
3 from optuna import exceptions # NOQA
4 from optuna import importance # NOQA
5 from optuna import integration # NOQA
6 from optuna import logging # NOQA
7 from optuna import pruners # NOQA
8 from optuna import samplers # NOQA
9 from optuna import storages # NOQA
10 from optuna import structs # NOQA
11 from optuna import study # NOQA
12 from optuna import trial # NOQA
13 from optuna import version # NOQA
14 from optuna import visualization # NOQA
15
16 from optuna.study import create_study # NOQA
17 from optuna.study import delete_study # NOQA
18 from optuna.study import get_all_study_summaries # NOQA
19 from optuna.study import load_study # NOQA
20 from optuna.study import Study # NOQA
21 from optuna.trial import Trial # NOQA
22 from optuna.version import __version__ # NOQA
23
[end of optuna/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/optuna/__init__.py b/optuna/__init__.py
--- a/optuna/__init__.py
+++ b/optuna/__init__.py
@@ -1,4 +1,6 @@
-from optuna import dashboard # NOQA
+import importlib
+import types
+
from optuna import distributions # NOQA
from optuna import exceptions # NOQA
from optuna import importance # NOQA
@@ -20,3 +22,37 @@
from optuna.study import Study # NOQA
from optuna.trial import Trial # NOQA
from optuna.version import __version__ # NOQA
+from optuna.type_checking import TYPE_CHECKING # NOQA
+
+
+if TYPE_CHECKING:
+ from optuna import dashboard # NOQA
+else:
+ from typing import Any
+
+ class _LazyImport(types.ModuleType):
+ """Module wrapper for lazy import.
+
+ This class wraps specified module and lazily import it when they are actually accessed.
+ Otherwise, `import optuna` becomes slower because it imports all submodules and
+ their dependencies (e.g., bokeh) all at once.
+ Within this project's usage, importlib override this module's attribute on the first
+ access and the imported submodule is directly accessed from the second access.
+
+ Args:
+ name: Name of module to apply lazy import.
+ """
+
+ def __init__(self, name: str) -> None:
+ super(_LazyImport, self).__init__(name)
+ self._name = name
+
+ def _load(self) -> types.ModuleType:
+ module = importlib.import_module(self._name)
+ self.__dict__.update(module.__dict__)
+ return module
+
+ def __getattr__(self, item: str) -> Any:
+ return getattr(self._load(), item)
+
+ dashboard = _LazyImport("optuna.dashboard")
| {"golden_diff": "diff --git a/optuna/__init__.py b/optuna/__init__.py\n--- a/optuna/__init__.py\n+++ b/optuna/__init__.py\n@@ -1,4 +1,6 @@\n-from optuna import dashboard # NOQA\n+import importlib\n+import types\n+\n from optuna import distributions # NOQA\n from optuna import exceptions # NOQA\n from optuna import importance # NOQA\n@@ -20,3 +22,37 @@\n from optuna.study import Study # NOQA\n from optuna.trial import Trial # NOQA\n from optuna.version import __version__ # NOQA\n+from optuna.type_checking import TYPE_CHECKING # NOQA\n+\n+\n+if TYPE_CHECKING:\n+ from optuna import dashboard # NOQA\n+else:\n+ from typing import Any\n+\n+ class _LazyImport(types.ModuleType):\n+ \"\"\"Module wrapper for lazy import.\n+\n+ This class wraps specified module and lazily import it when they are actually accessed.\n+ Otherwise, `import optuna` becomes slower because it imports all submodules and\n+ their dependencies (e.g., bokeh) all at once.\n+ Within this project's usage, importlib override this module's attribute on the first\n+ access and the imported submodule is directly accessed from the second access.\n+\n+ Args:\n+ name: Name of module to apply lazy import.\n+ \"\"\"\n+\n+ def __init__(self, name: str) -> None:\n+ super(_LazyImport, self).__init__(name)\n+ self._name = name\n+\n+ def _load(self) -> types.ModuleType:\n+ module = importlib.import_module(self._name)\n+ self.__dict__.update(module.__dict__)\n+ return module\n+\n+ def __getattr__(self, item: str) -> Any:\n+ return getattr(self._load(), item)\n+\n+ dashboard = _LazyImport(\"optuna.dashboard\")\n", "issue": "Apply lazy import for `optuna.dashboard`.\nOptuna always imports the dependencies of `optuna.dashboard` (e.g., `bokeh`), which makes unnecessary overhead in many use cases. Similar to #334, we can apply lazy import for them.\n", "before_files": [{"content": "from optuna import dashboard # NOQA\nfrom optuna import distributions # NOQA\nfrom optuna import exceptions # NOQA\nfrom optuna import importance # NOQA\nfrom optuna import integration # NOQA\nfrom optuna import logging # NOQA\nfrom optuna import pruners # NOQA\nfrom optuna import samplers # NOQA\nfrom optuna import storages # NOQA\nfrom optuna import structs # NOQA\nfrom optuna import study # NOQA\nfrom optuna import trial # NOQA\nfrom optuna import version # NOQA\nfrom optuna import visualization # NOQA\n\nfrom optuna.study import create_study # NOQA\nfrom optuna.study import delete_study # NOQA\nfrom optuna.study import get_all_study_summaries # NOQA\nfrom optuna.study import load_study # NOQA\nfrom optuna.study import Study # NOQA\nfrom optuna.trial import Trial # NOQA\nfrom optuna.version import __version__ # NOQA\n", "path": "optuna/__init__.py"}]} | 861 | 436 |
gh_patches_debug_4624 | rasdani/github-patches | git_diff | conan-io__conan-2419 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'install_folder' attribute is not always set
To reproduce, take https://github.com/memsharded/conan-hello example with one addition:
```python
def package(self):
print("self.source_folder:", self.source_folder)
print("self.build_folder:", self.build_folder)
print("self.install_folder:", self.install_folder)
...
```
now package it:
```
conan create . dbely/testing
```
everything goes well, with the output
```
Hello/0.1@dbely/testing: Calling package()
self.source_folder: C:\Users\dbely\.conan\data\Hello\0.1\dbely\testing\build\6cc50b139b9c3d27b3e9042d5f5372d327b3a9f7
self.build_folder: C:\Users\dbely\.conan\data\Hello\0.1\dbely\testing\build\6cc50b139b9c3d27b3e9042d5f5372d327b3a9f7
self.install_folder: C:\Users\dbely\.conan\data\Hello\0.1\dbely\testing\build\6cc50b139b9c3d27b3e9042d5f5372d327b3a9f7
```
Now do it step by step:
```
conan source .
conan install .
conan build .
conan package .
```
All the commands succeed except the last one:
```
PROJECT: Calling package()
self.source_folder: C:\Users\dbely\conan\conan-hello.git
self.build_folder: C:\Users\dbely\conan\conan-hello.git
ERROR: Hello/0.1@PROJECT: Error in package() method, line 21
print("self.install_folder:", self.install_folder)
AttributeError: 'HelloConan' object has no attribute 'install_folder'
```
</issue>
<code>
[start of conans/client/packager.py]
1 import os
2 import shutil
3
4 from conans.client import tools
5 from conans.util.files import mkdir, save, rmdir
6 from conans.util.log import logger
7 from conans.paths import CONANINFO, CONAN_MANIFEST
8 from conans.errors import ConanException, ConanExceptionInUserConanfileMethod, conanfile_exception_formatter
9 from conans.model.manifest import FileTreeManifest
10 from conans.client.output import ScopedOutput
11 from conans.client.file_copier import FileCopier
12
13
14 def create_package(conanfile, source_folder, build_folder, package_folder, install_folder,
15 output, local=False, copy_info=False):
16 """ copies built artifacts, libs, headers, data, etc from build_folder to
17 package folder
18 """
19 mkdir(package_folder)
20
21 # Make the copy of all the patterns
22 output.info("Generating the package")
23 output.info("Package folder %s" % (package_folder))
24
25 try:
26 package_output = ScopedOutput("%s package()" % output.scope, output)
27 output.highlight("Calling package()")
28 conanfile.package_folder = package_folder
29 conanfile.source_folder = source_folder
30 conanfile.build_folder = build_folder
31
32 def recipe_has(conanfile, attribute):
33 return attribute in conanfile.__class__.__dict__
34
35 if source_folder != build_folder:
36 conanfile.copy = FileCopier(source_folder, package_folder, build_folder)
37 with conanfile_exception_formatter(str(conanfile), "package"):
38 with tools.chdir(source_folder):
39 conanfile.package()
40 warn = recipe_has(conanfile, "package")
41 conanfile.copy.report(package_output, warn=warn)
42
43 conanfile.copy = FileCopier(build_folder, package_folder)
44 with tools.chdir(build_folder):
45 with conanfile_exception_formatter(str(conanfile), "package"):
46 conanfile.package()
47 warn = recipe_has(conanfile, "build") and recipe_has(conanfile, "package")
48 conanfile.copy.report(package_output, warn=warn)
49 except Exception as e:
50 if not local:
51 os.chdir(build_folder)
52 try:
53 rmdir(package_folder)
54 except Exception as e_rm:
55 output.error("Unable to remove package folder %s\n%s" % (package_folder, str(e_rm)))
56 output.warn("**** Please delete it manually ****")
57
58 if isinstance(e, ConanExceptionInUserConanfileMethod):
59 raise
60 raise ConanException(e)
61
62 _create_aux_files(install_folder, package_folder, conanfile, copy_info)
63 output.success("Package '%s' created" % os.path.basename(package_folder))
64
65
66 def _create_aux_files(install_folder, package_folder, conanfile, copy_info):
67 """ auxiliary method that creates CONANINFO and manifest in
68 the package_folder
69 """
70 logger.debug("Creating config files to %s" % package_folder)
71 if copy_info:
72 try:
73 shutil.copy(os.path.join(install_folder, CONANINFO), package_folder)
74 except IOError:
75 raise ConanException("%s does not exist inside of your %s folder. "
76 "Try to re-build it again to solve it."
77 % (CONANINFO, install_folder))
78 else:
79 save(os.path.join(package_folder, CONANINFO), conanfile.info.dumps())
80
81 # Create the digest for the package
82 digest = FileTreeManifest.create(package_folder)
83 save(os.path.join(package_folder, CONAN_MANIFEST), str(digest))
84
[end of conans/client/packager.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conans/client/packager.py b/conans/client/packager.py
--- a/conans/client/packager.py
+++ b/conans/client/packager.py
@@ -27,6 +27,7 @@
output.highlight("Calling package()")
conanfile.package_folder = package_folder
conanfile.source_folder = source_folder
+ conanfile.install_folder = install_folder
conanfile.build_folder = build_folder
def recipe_has(conanfile, attribute):
| {"golden_diff": "diff --git a/conans/client/packager.py b/conans/client/packager.py\n--- a/conans/client/packager.py\n+++ b/conans/client/packager.py\n@@ -27,6 +27,7 @@\n output.highlight(\"Calling package()\")\n conanfile.package_folder = package_folder\n conanfile.source_folder = source_folder\n+ conanfile.install_folder = install_folder\n conanfile.build_folder = build_folder\n \n def recipe_has(conanfile, attribute):\n", "issue": "'install_folder' attribute is not always set\nTo reproduce, take https://github.com/memsharded/conan-hello example with one addition:\r\n```python\r\n def package(self):\r\n print(\"self.source_folder:\", self.source_folder)\r\n print(\"self.build_folder:\", self.build_folder)\r\n print(\"self.install_folder:\", self.install_folder)\r\n ...\r\n```\r\nnow package it:\r\n```\r\nconan create . dbely/testing\r\n```\r\neverything goes well, with the output\r\n```\r\nHello/0.1@dbely/testing: Calling package()\r\nself.source_folder: C:\\Users\\dbely\\.conan\\data\\Hello\\0.1\\dbely\\testing\\build\\6cc50b139b9c3d27b3e9042d5f5372d327b3a9f7\r\nself.build_folder: C:\\Users\\dbely\\.conan\\data\\Hello\\0.1\\dbely\\testing\\build\\6cc50b139b9c3d27b3e9042d5f5372d327b3a9f7\r\nself.install_folder: C:\\Users\\dbely\\.conan\\data\\Hello\\0.1\\dbely\\testing\\build\\6cc50b139b9c3d27b3e9042d5f5372d327b3a9f7\r\n```\r\nNow do it step by step:\r\n```\r\nconan source .\r\nconan install .\r\nconan build .\r\nconan package .\r\n```\r\nAll the commands succeed except the last one:\r\n```\r\nPROJECT: Calling package()\r\nself.source_folder: C:\\Users\\dbely\\conan\\conan-hello.git\r\nself.build_folder: C:\\Users\\dbely\\conan\\conan-hello.git\r\nERROR: Hello/0.1@PROJECT: Error in package() method, line 21\r\n print(\"self.install_folder:\", self.install_folder)\r\n AttributeError: 'HelloConan' object has no attribute 'install_folder'\r\n```\r\n\n", "before_files": [{"content": "import os\nimport shutil\n\nfrom conans.client import tools\nfrom conans.util.files import mkdir, save, rmdir\nfrom conans.util.log import logger\nfrom conans.paths import CONANINFO, CONAN_MANIFEST\nfrom conans.errors import ConanException, ConanExceptionInUserConanfileMethod, conanfile_exception_formatter\nfrom conans.model.manifest import FileTreeManifest\nfrom conans.client.output import ScopedOutput\nfrom conans.client.file_copier import FileCopier\n\n\ndef create_package(conanfile, source_folder, build_folder, package_folder, install_folder,\n output, local=False, copy_info=False):\n \"\"\" copies built artifacts, libs, headers, data, etc from build_folder to\n package folder\n \"\"\"\n mkdir(package_folder)\n\n # Make the copy of all the patterns\n output.info(\"Generating the package\")\n output.info(\"Package folder %s\" % (package_folder))\n\n try:\n package_output = ScopedOutput(\"%s package()\" % output.scope, output)\n output.highlight(\"Calling package()\")\n conanfile.package_folder = package_folder\n conanfile.source_folder = source_folder\n conanfile.build_folder = build_folder\n\n def recipe_has(conanfile, attribute):\n return attribute in conanfile.__class__.__dict__\n\n if source_folder != build_folder:\n conanfile.copy = FileCopier(source_folder, package_folder, build_folder)\n with conanfile_exception_formatter(str(conanfile), \"package\"):\n with tools.chdir(source_folder):\n conanfile.package()\n warn = recipe_has(conanfile, \"package\")\n conanfile.copy.report(package_output, warn=warn)\n\n conanfile.copy = FileCopier(build_folder, package_folder)\n with tools.chdir(build_folder):\n with conanfile_exception_formatter(str(conanfile), \"package\"):\n conanfile.package()\n warn = recipe_has(conanfile, \"build\") and recipe_has(conanfile, \"package\")\n conanfile.copy.report(package_output, warn=warn)\n except Exception as e:\n if not local:\n os.chdir(build_folder)\n try:\n rmdir(package_folder)\n except Exception as e_rm:\n output.error(\"Unable to remove package folder %s\\n%s\" % (package_folder, str(e_rm)))\n output.warn(\"**** Please delete it manually ****\")\n\n if isinstance(e, ConanExceptionInUserConanfileMethod):\n raise\n raise ConanException(e)\n\n _create_aux_files(install_folder, package_folder, conanfile, copy_info)\n output.success(\"Package '%s' created\" % os.path.basename(package_folder))\n\n\ndef _create_aux_files(install_folder, package_folder, conanfile, copy_info):\n \"\"\" auxiliary method that creates CONANINFO and manifest in\n the package_folder\n \"\"\"\n logger.debug(\"Creating config files to %s\" % package_folder)\n if copy_info:\n try:\n shutil.copy(os.path.join(install_folder, CONANINFO), package_folder)\n except IOError:\n raise ConanException(\"%s does not exist inside of your %s folder. \"\n \"Try to re-build it again to solve it.\"\n % (CONANINFO, install_folder))\n else:\n save(os.path.join(package_folder, CONANINFO), conanfile.info.dumps())\n\n # Create the digest for the package\n digest = FileTreeManifest.create(package_folder)\n save(os.path.join(package_folder, CONAN_MANIFEST), str(digest))\n", "path": "conans/client/packager.py"}]} | 1,893 | 111 |
gh_patches_debug_15615 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-2832 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Hardcoded language ISO in instance info
This line of code
https://github.com/bookwyrm-social/bookwyrm/blob/290b74039297349693f4f139fa58659a19d1e1ac/bookwyrm/views/wellknown.py#L113
needs to be changed that the actual language of the instance is shown. My instance is set to German but it tells the world that it is in English.
The problem I have here (and why I just not solved it right now) is that the language is represented by a ISO 639-1 two-letter code and I do not see that we have it here. The question is: Do we need 4 letter codes in `.env`/`settings.py`or can we somehow derive from this setting and get the two letter code?
Anyhow, I am too confused to do it atm.
</issue>
<code>
[start of bookwyrm/views/wellknown.py]
1 """ responds to various requests to /.well-know """
2
3 from dateutil.relativedelta import relativedelta
4 from django.http import HttpResponseNotFound
5 from django.http import JsonResponse
6 from django.shortcuts import get_object_or_404
7 from django.template.response import TemplateResponse
8 from django.utils import timezone
9 from django.views.decorators.http import require_GET
10
11 from bookwyrm import models
12 from bookwyrm.settings import DOMAIN, VERSION
13
14
15 @require_GET
16 def webfinger(request):
17 """allow other servers to ask about a user"""
18 resource = request.GET.get("resource")
19 if not resource or not resource.startswith("acct:"):
20 return HttpResponseNotFound()
21
22 username = resource.replace("acct:", "")
23 user = get_object_or_404(models.User, username__iexact=username)
24
25 return JsonResponse(
26 {
27 "subject": f"acct:{user.username}",
28 "links": [
29 {
30 "rel": "self",
31 "type": "application/activity+json",
32 "href": user.remote_id,
33 },
34 {
35 "rel": "http://ostatus.org/schema/1.0/subscribe",
36 "template": f"https://{DOMAIN}/ostatus_subscribe?acct={{uri}}",
37 },
38 ],
39 }
40 )
41
42
43 @require_GET
44 def nodeinfo_pointer(_):
45 """direct servers to nodeinfo"""
46 return JsonResponse(
47 {
48 "links": [
49 {
50 "rel": "http://nodeinfo.diaspora.software/ns/schema/2.0",
51 "href": f"https://{DOMAIN}/nodeinfo/2.0",
52 }
53 ]
54 }
55 )
56
57
58 @require_GET
59 def nodeinfo(_):
60 """basic info about the server"""
61 status_count = models.Status.objects.filter(user__local=True, deleted=False).count()
62 user_count = models.User.objects.filter(is_active=True, local=True).count()
63
64 month_ago = timezone.now() - relativedelta(months=1)
65 last_month_count = models.User.objects.filter(
66 is_active=True, local=True, last_active_date__gt=month_ago
67 ).count()
68
69 six_months_ago = timezone.now() - relativedelta(months=6)
70 six_month_count = models.User.objects.filter(
71 is_active=True, local=True, last_active_date__gt=six_months_ago
72 ).count()
73
74 site = models.SiteSettings.get()
75 return JsonResponse(
76 {
77 "version": "2.0",
78 "software": {"name": "bookwyrm", "version": VERSION},
79 "protocols": ["activitypub"],
80 "usage": {
81 "users": {
82 "total": user_count,
83 "activeMonth": last_month_count,
84 "activeHalfyear": six_month_count,
85 },
86 "localPosts": status_count,
87 },
88 "openRegistrations": site.allow_registration,
89 }
90 )
91
92
93 @require_GET
94 def instance_info(_):
95 """let's talk about your cool unique instance"""
96 user_count = models.User.objects.filter(is_active=True, local=True).count()
97 status_count = models.Status.objects.filter(user__local=True, deleted=False).count()
98
99 site = models.SiteSettings.get()
100 logo = site.logo_url
101 return JsonResponse(
102 {
103 "uri": DOMAIN,
104 "title": site.name,
105 "short_description": site.instance_short_description,
106 "description": site.instance_description,
107 "version": VERSION,
108 "stats": {
109 "user_count": user_count,
110 "status_count": status_count,
111 },
112 "thumbnail": logo,
113 "languages": ["en"],
114 "registrations": site.allow_registration,
115 "approval_required": not site.allow_registration
116 and site.allow_invite_requests,
117 "email": site.admin_email,
118 }
119 )
120
121
122 @require_GET
123 def peers(_):
124 """list of federated servers this instance connects with"""
125 names = models.FederatedServer.objects.filter(status="federated").values_list(
126 "server_name", flat=True
127 )
128 return JsonResponse(list(names), safe=False)
129
130
131 @require_GET
132 def host_meta(request):
133 """meta of the host"""
134 return TemplateResponse(request, "host_meta.xml", {"DOMAIN": DOMAIN})
135
136
137 @require_GET
138 def opensearch(request):
139 """Open Search xml spec"""
140 site = models.SiteSettings.get()
141 image = site.favicon_url
142 return TemplateResponse(
143 request, "opensearch.xml", {"image": image, "DOMAIN": DOMAIN}
144 )
145
[end of bookwyrm/views/wellknown.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bookwyrm/views/wellknown.py b/bookwyrm/views/wellknown.py
--- a/bookwyrm/views/wellknown.py
+++ b/bookwyrm/views/wellknown.py
@@ -9,7 +9,7 @@
from django.views.decorators.http import require_GET
from bookwyrm import models
-from bookwyrm.settings import DOMAIN, VERSION
+from bookwyrm.settings import DOMAIN, VERSION, LANGUAGE_CODE
@require_GET
@@ -110,7 +110,7 @@
"status_count": status_count,
},
"thumbnail": logo,
- "languages": ["en"],
+ "languages": [LANGUAGE_CODE[:2]],
"registrations": site.allow_registration,
"approval_required": not site.allow_registration
and site.allow_invite_requests,
| {"golden_diff": "diff --git a/bookwyrm/views/wellknown.py b/bookwyrm/views/wellknown.py\n--- a/bookwyrm/views/wellknown.py\n+++ b/bookwyrm/views/wellknown.py\n@@ -9,7 +9,7 @@\n from django.views.decorators.http import require_GET\n \n from bookwyrm import models\n-from bookwyrm.settings import DOMAIN, VERSION\n+from bookwyrm.settings import DOMAIN, VERSION, LANGUAGE_CODE\n \n \n @require_GET\n@@ -110,7 +110,7 @@\n \"status_count\": status_count,\n },\n \"thumbnail\": logo,\n- \"languages\": [\"en\"],\n+ \"languages\": [LANGUAGE_CODE[:2]],\n \"registrations\": site.allow_registration,\n \"approval_required\": not site.allow_registration\n and site.allow_invite_requests,\n", "issue": "Hardcoded language ISO in instance info\nThis line of code \r\n\r\nhttps://github.com/bookwyrm-social/bookwyrm/blob/290b74039297349693f4f139fa58659a19d1e1ac/bookwyrm/views/wellknown.py#L113\r\n\r\nneeds to be changed that the actual language of the instance is shown. My instance is set to German but it tells the world that it is in English.\r\n\r\nThe problem I have here (and why I just not solved it right now) is that the language is represented by a ISO 639-1 two-letter code and I do not see that we have it here. The question is: Do we need 4 letter codes in `.env`/`settings.py`or can we somehow derive from this setting and get the two letter code? \r\n\r\nAnyhow, I am too confused to do it atm.\n", "before_files": [{"content": "\"\"\" responds to various requests to /.well-know \"\"\"\n\nfrom dateutil.relativedelta import relativedelta\nfrom django.http import HttpResponseNotFound\nfrom django.http import JsonResponse\nfrom django.shortcuts import get_object_or_404\nfrom django.template.response import TemplateResponse\nfrom django.utils import timezone\nfrom django.views.decorators.http import require_GET\n\nfrom bookwyrm import models\nfrom bookwyrm.settings import DOMAIN, VERSION\n\n\n@require_GET\ndef webfinger(request):\n \"\"\"allow other servers to ask about a user\"\"\"\n resource = request.GET.get(\"resource\")\n if not resource or not resource.startswith(\"acct:\"):\n return HttpResponseNotFound()\n\n username = resource.replace(\"acct:\", \"\")\n user = get_object_or_404(models.User, username__iexact=username)\n\n return JsonResponse(\n {\n \"subject\": f\"acct:{user.username}\",\n \"links\": [\n {\n \"rel\": \"self\",\n \"type\": \"application/activity+json\",\n \"href\": user.remote_id,\n },\n {\n \"rel\": \"http://ostatus.org/schema/1.0/subscribe\",\n \"template\": f\"https://{DOMAIN}/ostatus_subscribe?acct={{uri}}\",\n },\n ],\n }\n )\n\n\n@require_GET\ndef nodeinfo_pointer(_):\n \"\"\"direct servers to nodeinfo\"\"\"\n return JsonResponse(\n {\n \"links\": [\n {\n \"rel\": \"http://nodeinfo.diaspora.software/ns/schema/2.0\",\n \"href\": f\"https://{DOMAIN}/nodeinfo/2.0\",\n }\n ]\n }\n )\n\n\n@require_GET\ndef nodeinfo(_):\n \"\"\"basic info about the server\"\"\"\n status_count = models.Status.objects.filter(user__local=True, deleted=False).count()\n user_count = models.User.objects.filter(is_active=True, local=True).count()\n\n month_ago = timezone.now() - relativedelta(months=1)\n last_month_count = models.User.objects.filter(\n is_active=True, local=True, last_active_date__gt=month_ago\n ).count()\n\n six_months_ago = timezone.now() - relativedelta(months=6)\n six_month_count = models.User.objects.filter(\n is_active=True, local=True, last_active_date__gt=six_months_ago\n ).count()\n\n site = models.SiteSettings.get()\n return JsonResponse(\n {\n \"version\": \"2.0\",\n \"software\": {\"name\": \"bookwyrm\", \"version\": VERSION},\n \"protocols\": [\"activitypub\"],\n \"usage\": {\n \"users\": {\n \"total\": user_count,\n \"activeMonth\": last_month_count,\n \"activeHalfyear\": six_month_count,\n },\n \"localPosts\": status_count,\n },\n \"openRegistrations\": site.allow_registration,\n }\n )\n\n\n@require_GET\ndef instance_info(_):\n \"\"\"let's talk about your cool unique instance\"\"\"\n user_count = models.User.objects.filter(is_active=True, local=True).count()\n status_count = models.Status.objects.filter(user__local=True, deleted=False).count()\n\n site = models.SiteSettings.get()\n logo = site.logo_url\n return JsonResponse(\n {\n \"uri\": DOMAIN,\n \"title\": site.name,\n \"short_description\": site.instance_short_description,\n \"description\": site.instance_description,\n \"version\": VERSION,\n \"stats\": {\n \"user_count\": user_count,\n \"status_count\": status_count,\n },\n \"thumbnail\": logo,\n \"languages\": [\"en\"],\n \"registrations\": site.allow_registration,\n \"approval_required\": not site.allow_registration\n and site.allow_invite_requests,\n \"email\": site.admin_email,\n }\n )\n\n\n@require_GET\ndef peers(_):\n \"\"\"list of federated servers this instance connects with\"\"\"\n names = models.FederatedServer.objects.filter(status=\"federated\").values_list(\n \"server_name\", flat=True\n )\n return JsonResponse(list(names), safe=False)\n\n\n@require_GET\ndef host_meta(request):\n \"\"\"meta of the host\"\"\"\n return TemplateResponse(request, \"host_meta.xml\", {\"DOMAIN\": DOMAIN})\n\n\n@require_GET\ndef opensearch(request):\n \"\"\"Open Search xml spec\"\"\"\n site = models.SiteSettings.get()\n image = site.favicon_url\n return TemplateResponse(\n request, \"opensearch.xml\", {\"image\": image, \"DOMAIN\": DOMAIN}\n )\n", "path": "bookwyrm/views/wellknown.py"}]} | 2,025 | 175 |
gh_patches_debug_19455 | rasdani/github-patches | git_diff | python__peps-2658 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Links on topic pages are broken
If you go to a page like https://peps.python.org/topic/packaging/ and try to click on any of the PEPs, you get a github 404, and the URL is ``https://peps.python.org/topic/pep-0582``.
</issue>
<code>
[start of pep_sphinx_extensions/__init__.py]
1 """Sphinx extensions for performant PEP processing"""
2
3 from __future__ import annotations
4
5 from typing import TYPE_CHECKING
6
7 from docutils.writers.html5_polyglot import HTMLTranslator
8 from sphinx import environment
9 from sphinx import project
10
11 from pep_sphinx_extensions.pep_processor.html import pep_html_builder
12 from pep_sphinx_extensions.pep_processor.html import pep_html_translator
13 from pep_sphinx_extensions.pep_processor.parsing import pep_parser
14 from pep_sphinx_extensions.pep_processor.parsing import pep_role
15 from pep_sphinx_extensions.pep_processor.transforms import pep_references
16 from pep_sphinx_extensions.pep_zero_generator.pep_index_generator import create_pep_zero
17
18 if TYPE_CHECKING:
19 from sphinx.application import Sphinx
20 from sphinx.config import Config
21
22
23 def find_files(self: environment.BuildEnvironment, config: Config, _b) -> None:
24 """Find all pep source files."""
25 import fnmatch
26 from pathlib import Path
27
28 root = Path(self.project.srcdir).absolute()
29 self.project.docnames = set()
30 for pattern in config.include_patterns:
31 for path in root.glob(pattern):
32 filename = str(path.relative_to(root))
33 if any(fnmatch.fnmatch(filename, pattern) for pattern in config.exclude_patterns):
34 continue
35
36 doc_name = self.project.path2doc(filename)
37 if not doc_name:
38 continue
39
40 if doc_name not in self.project.docnames:
41 self.project.docnames.add(doc_name)
42 continue
43
44 other_files = [str(f.relative_to(root)) for f in root.glob(f"{doc_name}.*")]
45 project.logger.warning(
46 f'multiple files found for the document "{doc_name}": {other_files!r}\n'
47 f'Use {self.doc2path(doc_name)!r} for the build.', once=True)
48
49
50 environment.BuildEnvironment.find_files = find_files
51
52
53 def _depart_maths():
54 pass # No-op callable for the type checker
55
56
57 def _update_config_for_builder(app: Sphinx) -> None:
58 app.env.document_ids = {} # For PEPReferenceRoleTitleText
59 if app.builder.name == "dirhtml":
60 app.env.settings["pep_url"] = "../pep-{:0>4}"
61
62 # internal_builder exists if Sphinx is run by build.py
63 if "internal_builder" not in app.tags:
64 app.connect("build-finished", _post_build) # Post-build tasks
65
66
67 def _post_build(app: Sphinx, exception: Exception | None) -> None:
68 from pathlib import Path
69
70 from build import create_index_file
71
72 if exception is not None:
73 return
74 create_index_file(Path(app.outdir), app.builder.name)
75
76
77 def setup(app: Sphinx) -> dict[str, bool]:
78 """Initialize Sphinx extension."""
79
80 environment.default_settings["pep_url"] = "pep-{:0>4}.html"
81 environment.default_settings["halt_level"] = 2 # Fail on Docutils warning
82
83 # Register plugin logic
84 app.add_builder(pep_html_builder.FileBuilder, override=True)
85 app.add_builder(pep_html_builder.DirectoryBuilder, override=True)
86
87 app.add_source_parser(pep_parser.PEPParser) # Add PEP transforms
88
89 app.set_translator("html", pep_html_translator.PEPTranslator) # Docutils Node Visitor overrides (html builder)
90 app.set_translator("dirhtml", pep_html_translator.PEPTranslator) # Docutils Node Visitor overrides (dirhtml builder)
91
92 app.add_role("pep", pep_role.PEPRole(), override=True) # Transform PEP references to links
93
94 app.add_post_transform(pep_references.PEPReferenceRoleTitleText)
95
96 # Register event callbacks
97 app.connect("builder-inited", _update_config_for_builder) # Update configuration values for builder used
98 app.connect("env-before-read-docs", create_pep_zero) # PEP 0 hook
99
100 # Mathematics rendering
101 inline_maths = HTMLTranslator.visit_math, _depart_maths
102 block_maths = HTMLTranslator.visit_math_block, _depart_maths
103 app.add_html_math_renderer("maths_to_html", inline_maths, block_maths) # Render maths to HTML
104
105 # Parallel safety: https://www.sphinx-doc.org/en/master/extdev/index.html#extension-metadata
106 return {"parallel_read_safe": True, "parallel_write_safe": True}
107
[end of pep_sphinx_extensions/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pep_sphinx_extensions/__init__.py b/pep_sphinx_extensions/__init__.py
--- a/pep_sphinx_extensions/__init__.py
+++ b/pep_sphinx_extensions/__init__.py
@@ -57,7 +57,7 @@
def _update_config_for_builder(app: Sphinx) -> None:
app.env.document_ids = {} # For PEPReferenceRoleTitleText
if app.builder.name == "dirhtml":
- app.env.settings["pep_url"] = "../pep-{:0>4}"
+ app.env.settings["pep_url"] = "/pep-{:0>4}"
# internal_builder exists if Sphinx is run by build.py
if "internal_builder" not in app.tags:
@@ -77,7 +77,7 @@
def setup(app: Sphinx) -> dict[str, bool]:
"""Initialize Sphinx extension."""
- environment.default_settings["pep_url"] = "pep-{:0>4}.html"
+ environment.default_settings["pep_url"] = "/pep-{:0>4}.html"
environment.default_settings["halt_level"] = 2 # Fail on Docutils warning
# Register plugin logic
| {"golden_diff": "diff --git a/pep_sphinx_extensions/__init__.py b/pep_sphinx_extensions/__init__.py\n--- a/pep_sphinx_extensions/__init__.py\n+++ b/pep_sphinx_extensions/__init__.py\n@@ -57,7 +57,7 @@\n def _update_config_for_builder(app: Sphinx) -> None:\n app.env.document_ids = {} # For PEPReferenceRoleTitleText\n if app.builder.name == \"dirhtml\":\n- app.env.settings[\"pep_url\"] = \"../pep-{:0>4}\"\n+ app.env.settings[\"pep_url\"] = \"/pep-{:0>4}\"\n \n # internal_builder exists if Sphinx is run by build.py\n if \"internal_builder\" not in app.tags:\n@@ -77,7 +77,7 @@\n def setup(app: Sphinx) -> dict[str, bool]:\n \"\"\"Initialize Sphinx extension.\"\"\"\n \n- environment.default_settings[\"pep_url\"] = \"pep-{:0>4}.html\"\n+ environment.default_settings[\"pep_url\"] = \"/pep-{:0>4}.html\"\n environment.default_settings[\"halt_level\"] = 2 # Fail on Docutils warning\n \n # Register plugin logic\n", "issue": "Links on topic pages are broken\nIf you go to a page like https://peps.python.org/topic/packaging/ and try to click on any of the PEPs, you get a github 404, and the URL is ``https://peps.python.org/topic/pep-0582``. \n", "before_files": [{"content": "\"\"\"Sphinx extensions for performant PEP processing\"\"\"\n\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom docutils.writers.html5_polyglot import HTMLTranslator\nfrom sphinx import environment\nfrom sphinx import project\n\nfrom pep_sphinx_extensions.pep_processor.html import pep_html_builder\nfrom pep_sphinx_extensions.pep_processor.html import pep_html_translator\nfrom pep_sphinx_extensions.pep_processor.parsing import pep_parser\nfrom pep_sphinx_extensions.pep_processor.parsing import pep_role\nfrom pep_sphinx_extensions.pep_processor.transforms import pep_references\nfrom pep_sphinx_extensions.pep_zero_generator.pep_index_generator import create_pep_zero\n\nif TYPE_CHECKING:\n from sphinx.application import Sphinx\n from sphinx.config import Config\n\n\ndef find_files(self: environment.BuildEnvironment, config: Config, _b) -> None:\n \"\"\"Find all pep source files.\"\"\"\n import fnmatch\n from pathlib import Path\n\n root = Path(self.project.srcdir).absolute()\n self.project.docnames = set()\n for pattern in config.include_patterns:\n for path in root.glob(pattern):\n filename = str(path.relative_to(root))\n if any(fnmatch.fnmatch(filename, pattern) for pattern in config.exclude_patterns):\n continue\n\n doc_name = self.project.path2doc(filename)\n if not doc_name:\n continue\n\n if doc_name not in self.project.docnames:\n self.project.docnames.add(doc_name)\n continue\n\n other_files = [str(f.relative_to(root)) for f in root.glob(f\"{doc_name}.*\")]\n project.logger.warning(\n f'multiple files found for the document \"{doc_name}\": {other_files!r}\\n'\n f'Use {self.doc2path(doc_name)!r} for the build.', once=True)\n\n\nenvironment.BuildEnvironment.find_files = find_files\n\n\ndef _depart_maths():\n pass # No-op callable for the type checker\n\n\ndef _update_config_for_builder(app: Sphinx) -> None:\n app.env.document_ids = {} # For PEPReferenceRoleTitleText\n if app.builder.name == \"dirhtml\":\n app.env.settings[\"pep_url\"] = \"../pep-{:0>4}\"\n\n # internal_builder exists if Sphinx is run by build.py\n if \"internal_builder\" not in app.tags:\n app.connect(\"build-finished\", _post_build) # Post-build tasks\n\n\ndef _post_build(app: Sphinx, exception: Exception | None) -> None:\n from pathlib import Path\n\n from build import create_index_file\n\n if exception is not None:\n return\n create_index_file(Path(app.outdir), app.builder.name)\n\n\ndef setup(app: Sphinx) -> dict[str, bool]:\n \"\"\"Initialize Sphinx extension.\"\"\"\n\n environment.default_settings[\"pep_url\"] = \"pep-{:0>4}.html\"\n environment.default_settings[\"halt_level\"] = 2 # Fail on Docutils warning\n\n # Register plugin logic\n app.add_builder(pep_html_builder.FileBuilder, override=True)\n app.add_builder(pep_html_builder.DirectoryBuilder, override=True)\n\n app.add_source_parser(pep_parser.PEPParser) # Add PEP transforms\n\n app.set_translator(\"html\", pep_html_translator.PEPTranslator) # Docutils Node Visitor overrides (html builder)\n app.set_translator(\"dirhtml\", pep_html_translator.PEPTranslator) # Docutils Node Visitor overrides (dirhtml builder)\n\n app.add_role(\"pep\", pep_role.PEPRole(), override=True) # Transform PEP references to links\n\n app.add_post_transform(pep_references.PEPReferenceRoleTitleText)\n\n # Register event callbacks\n app.connect(\"builder-inited\", _update_config_for_builder) # Update configuration values for builder used\n app.connect(\"env-before-read-docs\", create_pep_zero) # PEP 0 hook\n\n # Mathematics rendering\n inline_maths = HTMLTranslator.visit_math, _depart_maths\n block_maths = HTMLTranslator.visit_math_block, _depart_maths\n app.add_html_math_renderer(\"maths_to_html\", inline_maths, block_maths) # Render maths to HTML\n\n # Parallel safety: https://www.sphinx-doc.org/en/master/extdev/index.html#extension-metadata\n return {\"parallel_read_safe\": True, \"parallel_write_safe\": True}\n", "path": "pep_sphinx_extensions/__init__.py"}]} | 1,779 | 270 |
gh_patches_debug_30969 | rasdani/github-patches | git_diff | opensearch-project__opensearch-build-1395 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[META] Automate artifact signing with OpensearchSignerClient for linux
### Is your feature request related to a problem? Please describe
Signing currently is a long painstaking process. It needs to be automated to save time when releasing artifacts. We need a tool to be able to sign all artifacts -
- opensearch
- opensearch-dashboards
### Describe the solution you'd like
Use the existing `Signer.py` class to sign the artifacts and generate ".sig" file along with ".asc" files.
### Tasks
- [x] #1382
- [x] #1383
- [x] #1385
### Acceptance Criteria
- [ ] User is able to provide the artifact directory and tool is able to sign all the artifacts in the directory
- [ ] User is able to pass the artifact path and the tool is able to sign the artifact
### Next Steps
Next steps would include to extend this process over for mac and windows
</issue>
<code>
[start of src/sign_workflow/signer.py]
1 #!/usr/bin/env python
2
3 # SPDX-License-Identifier: Apache-2.0
4 #
5 # The OpenSearch Contributors require contributions made to
6 # this file be licensed under the Apache-2.0 license or a
7 # compatible open source license.
8
9 import logging
10 import os
11
12 from git.git_repository import GitRepository
13
14 """
15 This class is responsible for signing an artifact using the OpenSearch-signer-client and verifying its signature.
16 The signed artifacts will be found in the same location as the original artifacts.
17 """
18
19
20 class Signer:
21 ACCEPTED_FILE_TYPES = [".zip", ".jar", ".war", ".pom", ".module", ".tar.gz"]
22
23 def __init__(self):
24 self.git_repo = GitRepository(self.get_repo_url(), "HEAD", working_subdirectory="src")
25 self.git_repo.execute("./bootstrap")
26 self.git_repo.execute("rm config.cfg")
27
28 def sign_artifact(self, artifact, basepath, signature_type):
29 self.generate_signature_and_verify(artifact, basepath, signature_type)
30
31 def sign_artifacts(self, artifacts, basepath, signature_type):
32 for artifact in artifacts:
33 if not self.is_valid_file_type(artifact):
34 logging.info(f"Skipping signing of file ${artifact}")
35 continue
36 self.generate_signature_and_verify(artifact, basepath, signature_type)
37
38 def generate_signature_and_verify(self, artifact, basepath, signature_type):
39 location = os.path.join(basepath, artifact)
40 self.sign(location, signature_type)
41 self.verify(location + signature_type)
42
43 def is_valid_file_type(self, file_name):
44 return any(
45 file_name.endswith(x) for x in Signer.ACCEPTED_FILE_TYPES
46 )
47
48 def get_repo_url(self):
49 if "GITHUB_TOKEN" in os.environ:
50 return "https://${GITHUB_TOKEN}@github.com/opensearch-project/opensearch-signer-client.git"
51 return "https://github.com/opensearch-project/opensearch-signer-client.git"
52
53 def sign(self, filename, signature_type):
54 signature_file = filename + signature_type
55 signing_cmd = [
56 "./opensearch-signer-client",
57 "-i",
58 filename,
59 "-o",
60 signature_file,
61 "-p",
62 "pgp",
63 ]
64 self.git_repo.execute(" ".join(signing_cmd))
65
66 def verify(self, filename):
67 verify_cmd = ["gpg", "--verify-files", filename]
68 self.git_repo.execute(" ".join(verify_cmd))
69
[end of src/sign_workflow/signer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/sign_workflow/signer.py b/src/sign_workflow/signer.py
--- a/src/sign_workflow/signer.py
+++ b/src/sign_workflow/signer.py
@@ -26,12 +26,15 @@
self.git_repo.execute("rm config.cfg")
def sign_artifact(self, artifact, basepath, signature_type):
+ if not self.is_valid_file_type(artifact):
+ logging.info(f"Skipping signing of file {artifact}")
+ return
self.generate_signature_and_verify(artifact, basepath, signature_type)
def sign_artifacts(self, artifacts, basepath, signature_type):
for artifact in artifacts:
if not self.is_valid_file_type(artifact):
- logging.info(f"Skipping signing of file ${artifact}")
+ logging.info(f"Skipping signing of file {artifact}")
continue
self.generate_signature_and_verify(artifact, basepath, signature_type)
@@ -50,8 +53,14 @@
return "https://${GITHUB_TOKEN}@github.com/opensearch-project/opensearch-signer-client.git"
return "https://github.com/opensearch-project/opensearch-signer-client.git"
+ def __remove_existing_signature__(self, signature_file):
+ if os.path.exists(signature_file):
+ logging.warning(f"Removing existing signature file {signature_file}")
+ os.remove(signature_file)
+
def sign(self, filename, signature_type):
signature_file = filename + signature_type
+ self.__remove_existing_signature__(signature_file)
signing_cmd = [
"./opensearch-signer-client",
"-i",
| {"golden_diff": "diff --git a/src/sign_workflow/signer.py b/src/sign_workflow/signer.py\n--- a/src/sign_workflow/signer.py\n+++ b/src/sign_workflow/signer.py\n@@ -26,12 +26,15 @@\n self.git_repo.execute(\"rm config.cfg\")\n \n def sign_artifact(self, artifact, basepath, signature_type):\n+ if not self.is_valid_file_type(artifact):\n+ logging.info(f\"Skipping signing of file {artifact}\")\n+ return\n self.generate_signature_and_verify(artifact, basepath, signature_type)\n \n def sign_artifacts(self, artifacts, basepath, signature_type):\n for artifact in artifacts:\n if not self.is_valid_file_type(artifact):\n- logging.info(f\"Skipping signing of file ${artifact}\")\n+ logging.info(f\"Skipping signing of file {artifact}\")\n continue\n self.generate_signature_and_verify(artifact, basepath, signature_type)\n \n@@ -50,8 +53,14 @@\n return \"https://${GITHUB_TOKEN}@github.com/opensearch-project/opensearch-signer-client.git\"\n return \"https://github.com/opensearch-project/opensearch-signer-client.git\"\n \n+ def __remove_existing_signature__(self, signature_file):\n+ if os.path.exists(signature_file):\n+ logging.warning(f\"Removing existing signature file {signature_file}\")\n+ os.remove(signature_file)\n+\n def sign(self, filename, signature_type):\n signature_file = filename + signature_type\n+ self.__remove_existing_signature__(signature_file)\n signing_cmd = [\n \"./opensearch-signer-client\",\n \"-i\",\n", "issue": "[META] Automate artifact signing with OpensearchSignerClient for linux\n### Is your feature request related to a problem? Please describe\r\n\r\nSigning currently is a long painstaking process. It needs to be automated to save time when releasing artifacts. We need a tool to be able to sign all artifacts - \r\n- opensearch\r\n- opensearch-dashboards\r\n\r\n### Describe the solution you'd like\r\n\r\nUse the existing `Signer.py` class to sign the artifacts and generate \".sig\" file along with \".asc\" files.\r\n\r\n### Tasks\r\n- [x] #1382\r\n- [x] #1383\r\n- [x] #1385\r\n\r\n### Acceptance Criteria\r\n- [ ] User is able to provide the artifact directory and tool is able to sign all the artifacts in the directory\r\n- [ ] User is able to pass the artifact path and the tool is able to sign the artifact\r\n\r\n### Next Steps\r\nNext steps would include to extend this process over for mac and windows\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport logging\nimport os\n\nfrom git.git_repository import GitRepository\n\n\"\"\"\nThis class is responsible for signing an artifact using the OpenSearch-signer-client and verifying its signature.\nThe signed artifacts will be found in the same location as the original artifacts.\n\"\"\"\n\n\nclass Signer:\n ACCEPTED_FILE_TYPES = [\".zip\", \".jar\", \".war\", \".pom\", \".module\", \".tar.gz\"]\n\n def __init__(self):\n self.git_repo = GitRepository(self.get_repo_url(), \"HEAD\", working_subdirectory=\"src\")\n self.git_repo.execute(\"./bootstrap\")\n self.git_repo.execute(\"rm config.cfg\")\n\n def sign_artifact(self, artifact, basepath, signature_type):\n self.generate_signature_and_verify(artifact, basepath, signature_type)\n\n def sign_artifacts(self, artifacts, basepath, signature_type):\n for artifact in artifacts:\n if not self.is_valid_file_type(artifact):\n logging.info(f\"Skipping signing of file ${artifact}\")\n continue\n self.generate_signature_and_verify(artifact, basepath, signature_type)\n\n def generate_signature_and_verify(self, artifact, basepath, signature_type):\n location = os.path.join(basepath, artifact)\n self.sign(location, signature_type)\n self.verify(location + signature_type)\n\n def is_valid_file_type(self, file_name):\n return any(\n file_name.endswith(x) for x in Signer.ACCEPTED_FILE_TYPES\n )\n\n def get_repo_url(self):\n if \"GITHUB_TOKEN\" in os.environ:\n return \"https://${GITHUB_TOKEN}@github.com/opensearch-project/opensearch-signer-client.git\"\n return \"https://github.com/opensearch-project/opensearch-signer-client.git\"\n\n def sign(self, filename, signature_type):\n signature_file = filename + signature_type\n signing_cmd = [\n \"./opensearch-signer-client\",\n \"-i\",\n filename,\n \"-o\",\n signature_file,\n \"-p\",\n \"pgp\",\n ]\n self.git_repo.execute(\" \".join(signing_cmd))\n\n def verify(self, filename):\n verify_cmd = [\"gpg\", \"--verify-files\", filename]\n self.git_repo.execute(\" \".join(verify_cmd))\n", "path": "src/sign_workflow/signer.py"}]} | 1,404 | 347 |
gh_patches_debug_40057 | rasdani/github-patches | git_diff | larq__larq-97 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support epoch level logging for QuantizedLogger
Currently this only work on a per batch bases and not with tensorboard using `update_freq="epoch"`
</issue>
<code>
[start of larq/callbacks.py]
1 import tensorflow as tf
2 import numpy as np
3
4
5 class QuantizationLogger(tf.keras.callbacks.Callback):
6 """Callback that adds quantization specific metrics.
7
8 !!! note ""
9 In order for metrics to be picked up by TensorBoard this callback needs to be
10 applied before the TensorBoard callback and use the same update frequency.
11
12 !!! example
13 ```python
14 callbacks = [
15 QuantizationLogger(update_freq=100),
16 tf.keras.callbacks.TensorBoard(update_freq=100),
17 ]
18 model.fit(X_train, Y_train, callbacks=callbacks)
19 ```
20
21 # Metrics
22 - `changed_quantization_ration`: The ration of quantized weights in each layer that
23 changed during the weight update.
24
25 # Arguments
26 update_freq: `'batch'` or integer. When using `'batch'`, computes the metrics after
27 each batch. If using an integer the callback will compute the metrics every
28 `update_freq` batches. Note that computing too frequently can slow down training.
29 """
30
31 def __init__(self, update_freq="batch"):
32 self.previous_weights = {}
33 self.update_freq = update_freq if update_freq != "batch" else 1
34
35 def on_batch_end(self, batch, logs=None):
36 should_log = batch > 0 and (batch + 1) % self.update_freq == 0
37 should_store = (batch + 2) % self.update_freq == 0
38
39 if should_log or should_store:
40 ops = []
41 op_names = []
42 for layer in self.model.layers:
43 if hasattr(layer, "quantized_weights"):
44 for i, weight in enumerate(layer.quantized_weights):
45 ops.append(weight)
46 op_names.append(layer.name if i == 0 else f"{layer.name}_{i}")
47
48 for key, value in zip(op_names, tf.keras.backend.batch_get_value(ops)):
49 if should_log:
50 logs[f"changed_quantization_ration/{key.replace(':', '_')}"] = 1 - (
51 np.count_nonzero(value == self.previous_weights[key])
52 / value.size
53 )
54 if should_store:
55 self.previous_weights[key] = value
56
57 if should_log and not should_store:
58 # We don't need it in the next batch anymore
59 self.previous_weights = {}
60
[end of larq/callbacks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/larq/callbacks.py b/larq/callbacks.py
--- a/larq/callbacks.py
+++ b/larq/callbacks.py
@@ -11,10 +11,7 @@
!!! example
```python
- callbacks = [
- QuantizationLogger(update_freq=100),
- tf.keras.callbacks.TensorBoard(update_freq=100),
- ]
+ callbacks = [QuantizationLogger(), tf.keras.callbacks.TensorBoard()]
model.fit(X_train, Y_train, callbacks=callbacks)
```
@@ -23,19 +20,18 @@
changed during the weight update.
# Arguments
- update_freq: `'batch'` or integer. When using `'batch'`, computes the metrics after
- each batch. If using an integer the callback will compute the metrics every
- `update_freq` batches. Note that computing too frequently can slow down training.
+ update_freq: `'batch'` or `'epoch'` or integer. When using `'batch'`, computes the
+ metrics after each batch. The same applies for `'epoch'`. If using an integer
+ the callback will compute the metrics every `update_freq` batches.
+ Note that computing too frequently can slow down training.
"""
- def __init__(self, update_freq="batch"):
- self.previous_weights = {}
+ def __init__(self, update_freq="epoch"):
+ self.batch_previous_weights = {}
+ self.epoch_previous_weights = {}
self.update_freq = update_freq if update_freq != "batch" else 1
- def on_batch_end(self, batch, logs=None):
- should_log = batch > 0 and (batch + 1) % self.update_freq == 0
- should_store = (batch + 2) % self.update_freq == 0
-
+ def _maybe_log_and_store(self, storage, logs, should_log=True, should_store=True):
if should_log or should_store:
ops = []
op_names = []
@@ -46,14 +42,29 @@
op_names.append(layer.name if i == 0 else f"{layer.name}_{i}")
for key, value in zip(op_names, tf.keras.backend.batch_get_value(ops)):
+ value = value.astype(np.int8)
if should_log:
logs[f"changed_quantization_ration/{key.replace(':', '_')}"] = 1 - (
- np.count_nonzero(value == self.previous_weights[key])
- / value.size
+ np.count_nonzero(value == storage[key]) / value.size
)
if should_store:
- self.previous_weights[key] = value
+ storage[key] = value
if should_log and not should_store:
# We don't need it in the next batch anymore
- self.previous_weights = {}
+ storage = {}
+
+ def on_batch_end(self, batch, logs=None):
+ if self.update_freq != "epoch":
+ self._maybe_log_and_store(
+ self.batch_previous_weights,
+ logs,
+ should_log=batch > 0 and (batch + 1) % self.update_freq == 0,
+ should_store=(batch + 2) % self.update_freq == 0,
+ )
+
+ def on_train_begin(self, logs=None):
+ self._maybe_log_and_store(self.epoch_previous_weights, logs, should_log=False)
+
+ def on_epoch_end(self, epoch, logs=None):
+ self._maybe_log_and_store(self.epoch_previous_weights, logs)
| {"golden_diff": "diff --git a/larq/callbacks.py b/larq/callbacks.py\n--- a/larq/callbacks.py\n+++ b/larq/callbacks.py\n@@ -11,10 +11,7 @@\n \n !!! example\n ```python\n- callbacks = [\n- QuantizationLogger(update_freq=100),\n- tf.keras.callbacks.TensorBoard(update_freq=100),\n- ]\n+ callbacks = [QuantizationLogger(), tf.keras.callbacks.TensorBoard()]\n model.fit(X_train, Y_train, callbacks=callbacks)\n ```\n \n@@ -23,19 +20,18 @@\n changed during the weight update.\n \n # Arguments\n- update_freq: `'batch'` or integer. When using `'batch'`, computes the metrics after\n- each batch. If using an integer the callback will compute the metrics every\n- `update_freq` batches. Note that computing too frequently can slow down training.\n+ update_freq: `'batch'` or `'epoch'` or integer. When using `'batch'`, computes the\n+ metrics after each batch. The same applies for `'epoch'`. If using an integer\n+ the callback will compute the metrics every `update_freq` batches.\n+ Note that computing too frequently can slow down training.\n \"\"\"\n \n- def __init__(self, update_freq=\"batch\"):\n- self.previous_weights = {}\n+ def __init__(self, update_freq=\"epoch\"):\n+ self.batch_previous_weights = {}\n+ self.epoch_previous_weights = {}\n self.update_freq = update_freq if update_freq != \"batch\" else 1\n \n- def on_batch_end(self, batch, logs=None):\n- should_log = batch > 0 and (batch + 1) % self.update_freq == 0\n- should_store = (batch + 2) % self.update_freq == 0\n-\n+ def _maybe_log_and_store(self, storage, logs, should_log=True, should_store=True):\n if should_log or should_store:\n ops = []\n op_names = []\n@@ -46,14 +42,29 @@\n op_names.append(layer.name if i == 0 else f\"{layer.name}_{i}\")\n \n for key, value in zip(op_names, tf.keras.backend.batch_get_value(ops)):\n+ value = value.astype(np.int8)\n if should_log:\n logs[f\"changed_quantization_ration/{key.replace(':', '_')}\"] = 1 - (\n- np.count_nonzero(value == self.previous_weights[key])\n- / value.size\n+ np.count_nonzero(value == storage[key]) / value.size\n )\n if should_store:\n- self.previous_weights[key] = value\n+ storage[key] = value\n \n if should_log and not should_store:\n # We don't need it in the next batch anymore\n- self.previous_weights = {}\n+ storage = {}\n+\n+ def on_batch_end(self, batch, logs=None):\n+ if self.update_freq != \"epoch\":\n+ self._maybe_log_and_store(\n+ self.batch_previous_weights,\n+ logs,\n+ should_log=batch > 0 and (batch + 1) % self.update_freq == 0,\n+ should_store=(batch + 2) % self.update_freq == 0,\n+ )\n+\n+ def on_train_begin(self, logs=None):\n+ self._maybe_log_and_store(self.epoch_previous_weights, logs, should_log=False)\n+\n+ def on_epoch_end(self, epoch, logs=None):\n+ self._maybe_log_and_store(self.epoch_previous_weights, logs)\n", "issue": "Support epoch level logging for QuantizedLogger\nCurrently this only work on a per batch bases and not with tensorboard using `update_freq=\"epoch\"`\n", "before_files": [{"content": "import tensorflow as tf\nimport numpy as np\n\n\nclass QuantizationLogger(tf.keras.callbacks.Callback):\n \"\"\"Callback that adds quantization specific metrics.\n\n !!! note \"\"\n In order for metrics to be picked up by TensorBoard this callback needs to be\n applied before the TensorBoard callback and use the same update frequency.\n\n !!! example\n ```python\n callbacks = [\n QuantizationLogger(update_freq=100),\n tf.keras.callbacks.TensorBoard(update_freq=100),\n ]\n model.fit(X_train, Y_train, callbacks=callbacks)\n ```\n\n # Metrics\n - `changed_quantization_ration`: The ration of quantized weights in each layer that\n changed during the weight update.\n\n # Arguments\n update_freq: `'batch'` or integer. When using `'batch'`, computes the metrics after\n each batch. If using an integer the callback will compute the metrics every\n `update_freq` batches. Note that computing too frequently can slow down training.\n \"\"\"\n\n def __init__(self, update_freq=\"batch\"):\n self.previous_weights = {}\n self.update_freq = update_freq if update_freq != \"batch\" else 1\n\n def on_batch_end(self, batch, logs=None):\n should_log = batch > 0 and (batch + 1) % self.update_freq == 0\n should_store = (batch + 2) % self.update_freq == 0\n\n if should_log or should_store:\n ops = []\n op_names = []\n for layer in self.model.layers:\n if hasattr(layer, \"quantized_weights\"):\n for i, weight in enumerate(layer.quantized_weights):\n ops.append(weight)\n op_names.append(layer.name if i == 0 else f\"{layer.name}_{i}\")\n\n for key, value in zip(op_names, tf.keras.backend.batch_get_value(ops)):\n if should_log:\n logs[f\"changed_quantization_ration/{key.replace(':', '_')}\"] = 1 - (\n np.count_nonzero(value == self.previous_weights[key])\n / value.size\n )\n if should_store:\n self.previous_weights[key] = value\n\n if should_log and not should_store:\n # We don't need it in the next batch anymore\n self.previous_weights = {}\n", "path": "larq/callbacks.py"}]} | 1,164 | 777 |
gh_patches_debug_9359 | rasdani/github-patches | git_diff | saleor__saleor-4919 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add description and deprecation support to filters
`FilterInputObjectType` doesn't provide a way to document fields. We could add two fields to the meta-class: `descriptions = {field: description}` and `deprecations = {field: reason}`.
</issue>
<code>
[start of saleor/graphql/core/types/filter_input.py]
1 import six
2 from graphene import InputField, InputObjectType
3 from graphene.types.inputobjecttype import InputObjectTypeOptions
4 from graphene.types.utils import yank_fields_from_attrs
5 from graphene_django.filter.utils import get_filterset_class
6
7 from .converter import convert_form_field
8
9
10 class FilterInputObjectType(InputObjectType):
11 """Class for storing and serving django-filtres as graphQL input.
12
13 FilterSet class which inherits from django-filters.FilterSet should be
14 provided with using fitlerset_class argument.
15 """
16
17 @classmethod
18 def __init_subclass_with_meta__(
19 cls, _meta=None, model=None, filterset_class=None, fields=None, **options
20 ):
21 cls.custom_filterset_class = filterset_class
22 cls.filterset_class = None
23 cls.fields = fields
24 cls.model = model
25
26 if not _meta:
27 _meta = InputObjectTypeOptions(cls)
28
29 fields = cls.get_filtering_args_from_filterset()
30 fields = yank_fields_from_attrs(fields, _as=InputField)
31 if _meta.fields:
32 _meta.fields.update(fields)
33 else:
34 _meta.fields = fields
35
36 super().__init_subclass_with_meta__(_meta=_meta, **options)
37
38 @classmethod
39 def get_filtering_args_from_filterset(cls):
40 """Retrieve the filtering arguments from the queryset.
41
42 Inspect a FilterSet and produce the arguments to pass to a Graphene field.
43 These arguments will be available to filter against in the GraphQL.
44 """
45 if not cls.custom_filterset_class:
46 assert cls.model and cls.fields, (
47 "Provide filterset class or model and fields requested to "
48 "create default filterset"
49 )
50
51 meta = dict(model=cls.model, fields=cls.fields)
52 cls.filterset_class = get_filterset_class(cls.custom_filterset_class, **meta)
53
54 args = {}
55 for name, filter_field in six.iteritems(cls.filterset_class.base_filters):
56 input_class = getattr(filter_field, "input_class", None)
57 if input_class:
58 field_type = convert_form_field(filter_field)
59 else:
60 field_type = convert_form_field(filter_field.field)
61 field_type.description = filter_field.label
62 kwargs = getattr(field_type, "kwargs", {})
63 field_type.kwargs = kwargs
64 args[name] = field_type
65 return args
66
[end of saleor/graphql/core/types/filter_input.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/graphql/core/types/filter_input.py b/saleor/graphql/core/types/filter_input.py
--- a/saleor/graphql/core/types/filter_input.py
+++ b/saleor/graphql/core/types/filter_input.py
@@ -58,7 +58,7 @@
field_type = convert_form_field(filter_field)
else:
field_type = convert_form_field(filter_field.field)
- field_type.description = filter_field.label
+ field_type.description = getattr(filter_field, "help_text", "")
kwargs = getattr(field_type, "kwargs", {})
field_type.kwargs = kwargs
args[name] = field_type
| {"golden_diff": "diff --git a/saleor/graphql/core/types/filter_input.py b/saleor/graphql/core/types/filter_input.py\n--- a/saleor/graphql/core/types/filter_input.py\n+++ b/saleor/graphql/core/types/filter_input.py\n@@ -58,7 +58,7 @@\n field_type = convert_form_field(filter_field)\n else:\n field_type = convert_form_field(filter_field.field)\n- field_type.description = filter_field.label\n+ field_type.description = getattr(filter_field, \"help_text\", \"\")\n kwargs = getattr(field_type, \"kwargs\", {})\n field_type.kwargs = kwargs\n args[name] = field_type\n", "issue": "Add description and deprecation support to filters\n`FilterInputObjectType` doesn't provide a way to document fields. We could add two fields to the meta-class: `descriptions = {field: description}` and `deprecations = {field: reason}`.\n", "before_files": [{"content": "import six\nfrom graphene import InputField, InputObjectType\nfrom graphene.types.inputobjecttype import InputObjectTypeOptions\nfrom graphene.types.utils import yank_fields_from_attrs\nfrom graphene_django.filter.utils import get_filterset_class\n\nfrom .converter import convert_form_field\n\n\nclass FilterInputObjectType(InputObjectType):\n \"\"\"Class for storing and serving django-filtres as graphQL input.\n\n FilterSet class which inherits from django-filters.FilterSet should be\n provided with using fitlerset_class argument.\n \"\"\"\n\n @classmethod\n def __init_subclass_with_meta__(\n cls, _meta=None, model=None, filterset_class=None, fields=None, **options\n ):\n cls.custom_filterset_class = filterset_class\n cls.filterset_class = None\n cls.fields = fields\n cls.model = model\n\n if not _meta:\n _meta = InputObjectTypeOptions(cls)\n\n fields = cls.get_filtering_args_from_filterset()\n fields = yank_fields_from_attrs(fields, _as=InputField)\n if _meta.fields:\n _meta.fields.update(fields)\n else:\n _meta.fields = fields\n\n super().__init_subclass_with_meta__(_meta=_meta, **options)\n\n @classmethod\n def get_filtering_args_from_filterset(cls):\n \"\"\"Retrieve the filtering arguments from the queryset.\n\n Inspect a FilterSet and produce the arguments to pass to a Graphene field.\n These arguments will be available to filter against in the GraphQL.\n \"\"\"\n if not cls.custom_filterset_class:\n assert cls.model and cls.fields, (\n \"Provide filterset class or model and fields requested to \"\n \"create default filterset\"\n )\n\n meta = dict(model=cls.model, fields=cls.fields)\n cls.filterset_class = get_filterset_class(cls.custom_filterset_class, **meta)\n\n args = {}\n for name, filter_field in six.iteritems(cls.filterset_class.base_filters):\n input_class = getattr(filter_field, \"input_class\", None)\n if input_class:\n field_type = convert_form_field(filter_field)\n else:\n field_type = convert_form_field(filter_field.field)\n field_type.description = filter_field.label\n kwargs = getattr(field_type, \"kwargs\", {})\n field_type.kwargs = kwargs\n args[name] = field_type\n return args\n", "path": "saleor/graphql/core/types/filter_input.py"}]} | 1,218 | 136 |
gh_patches_debug_15652 | rasdani/github-patches | git_diff | airctic__icevision-1058 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
getting_started_object_detection.ipynb fails after CentripetalNet support merge
## 🐛 Bug
**Describe the bug**
`getting_started_object_detection.ipynb` fails to run with the following error.
`AttributeError: 'VFNet' object has no attribute 'mask_head'
`
**To Reproduce**
Steps to reproduce the behavior: Run the getting started notebook.
**Expected behavior**
Model should instantiate.
**Screenshots**

</issue>
<code>
[start of icevision/models/mmdet/utils.py]
1 __all__ = [
2 "MMDetBackboneConfig",
3 "mmdet_configs_path",
4 "param_groups",
5 "MMDetBackboneConfig",
6 "create_model_config",
7 ]
8
9 from icevision.imports import *
10 from icevision.utils import *
11 from icevision.backbones import BackboneConfig
12 from icevision.models.mmdet.download_configs import download_mmdet_configs
13 from mmdet.models.detectors import *
14 from mmcv import Config
15 from mmdet.models.backbones.ssd_vgg import SSDVGG
16 from mmdet.models.backbones.csp_darknet import CSPDarknet
17 from mmdet.models.backbones.swin import SwinTransformer
18 from mmdet.models.backbones.hourglass import HourglassNet
19
20
21 mmdet_configs_path = download_mmdet_configs()
22
23
24 class MMDetBackboneConfig(BackboneConfig):
25 def __init__(self, model_name, config_path, weights_url):
26 self.model_name = model_name
27 self.config_path = config_path
28 self.weights_url = weights_url
29 self.pretrained: bool
30
31 def __call__(self, pretrained: bool = True) -> "MMDetBackboneConfig":
32 self.pretrained = pretrained
33 return self
34
35
36 def param_groups(model):
37 body = model.backbone
38
39 layers = []
40
41 # add the backbone
42 if isinstance(body, SSDVGG):
43 layers += [body.features]
44 elif isinstance(body, CSPDarknet):
45 layers += [body.stem.conv.conv, body.stem.conv.bn]
46 layers += [body.stage1, body.stage2, body.stage3, body.stage4]
47
48 elif isinstance(body, HourglassNet):
49 layers += [
50 body.stem,
51 body.hourglass_modules,
52 body.inters,
53 body.conv1x1s,
54 body.out_convs,
55 body.remap_convs,
56 body.relu,
57 ]
58
59 elif isinstance(body, SwinTransformer):
60 layers += [
61 body.patch_embed.adap_padding,
62 body.patch_embed.projection,
63 body.patch_embed.norm,
64 body.drop_after_pos,
65 body.stages,
66 ]
67 # Swin backbone for two-stage detector has norm0 attribute
68 if getattr(body, "norm0", False):
69 layers += [body.norm0]
70
71 layers += [body.norm1, body.norm2, body.norm3]
72 else:
73 layers += [nn.Sequential(body.conv1, body.bn1)]
74 layers += [getattr(body, l) for l in body.res_layers]
75
76 # add the neck module if it exists (DETR doesn't have a neck module)
77 layers += [module for name, module in model.named_modules() if name == "neck"]
78
79 # add the head
80 if isinstance(model, SingleStageDetector):
81 layers += [model.bbox_head]
82
83 # YOLACT has mask_head and segm_head
84 if getattr(model, "mask_head"):
85 layers += [model.mask_head]
86 if getattr(model, "segm_head"):
87 layers += [model.segm_head]
88
89 elif isinstance(model, TwoStageDetector):
90 layers += [nn.Sequential(model.rpn_head, model.roi_head)]
91 else:
92 raise RuntimeError(
93 "{model} must inherit either from SingleStageDetector or TwoStageDetector class"
94 )
95
96 _param_groups = [list(layer.parameters()) for layer in layers]
97 check_all_model_params_in_groups2(model, _param_groups)
98 return _param_groups
99
100
101 def create_model_config(
102 backbone: MMDetBackboneConfig,
103 pretrained: bool = True,
104 checkpoints_path: Optional[Union[str, Path]] = "checkpoints",
105 force_download=False,
106 cfg_options=None,
107 ):
108
109 model_name = backbone.model_name
110 config_path = backbone.config_path
111 weights_url = backbone.weights_url
112
113 # download weights
114 weights_path = None
115 if pretrained and weights_url:
116 save_dir = Path(checkpoints_path) / model_name
117 save_dir.mkdir(exist_ok=True, parents=True)
118
119 fname = Path(weights_url).name
120 weights_path = save_dir / fname
121
122 if not weights_path.exists() or force_download:
123 download_url(url=weights_url, save_path=str(weights_path))
124
125 cfg = Config.fromfile(config_path)
126
127 if cfg_options is not None:
128 cfg.merge_from_dict(cfg_options)
129
130 return cfg, weights_path
131
[end of icevision/models/mmdet/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/icevision/models/mmdet/utils.py b/icevision/models/mmdet/utils.py
--- a/icevision/models/mmdet/utils.py
+++ b/icevision/models/mmdet/utils.py
@@ -6,6 +6,7 @@
"create_model_config",
]
+from numpy import False_
from icevision.imports import *
from icevision.utils import *
from icevision.backbones import BackboneConfig
@@ -81,9 +82,9 @@
layers += [model.bbox_head]
# YOLACT has mask_head and segm_head
- if getattr(model, "mask_head"):
+ if hasattr(model, "mask_head"):
layers += [model.mask_head]
- if getattr(model, "segm_head"):
+ if hasattr(model, "segm_head"):
layers += [model.segm_head]
elif isinstance(model, TwoStageDetector):
| {"golden_diff": "diff --git a/icevision/models/mmdet/utils.py b/icevision/models/mmdet/utils.py\n--- a/icevision/models/mmdet/utils.py\n+++ b/icevision/models/mmdet/utils.py\n@@ -6,6 +6,7 @@\n \"create_model_config\",\n ]\n \n+from numpy import False_\n from icevision.imports import *\n from icevision.utils import *\n from icevision.backbones import BackboneConfig\n@@ -81,9 +82,9 @@\n layers += [model.bbox_head]\n \n # YOLACT has mask_head and segm_head\n- if getattr(model, \"mask_head\"):\n+ if hasattr(model, \"mask_head\"):\n layers += [model.mask_head]\n- if getattr(model, \"segm_head\"):\n+ if hasattr(model, \"segm_head\"):\n layers += [model.segm_head]\n \n elif isinstance(model, TwoStageDetector):\n", "issue": "getting_started_object_detection.ipynb fails after CentripetalNet support merge\n## \ud83d\udc1b Bug\r\n**Describe the bug**\r\n`getting_started_object_detection.ipynb` fails to run with the following error.\r\n\r\n`AttributeError: 'VFNet' object has no attribute 'mask_head'\r\n`\r\n**To Reproduce**\r\nSteps to reproduce the behavior: Run the getting started notebook.\r\n\r\n**Expected behavior**\r\nModel should instantiate.\r\n\r\n**Screenshots**\r\n\n", "before_files": [{"content": "__all__ = [\n \"MMDetBackboneConfig\",\n \"mmdet_configs_path\",\n \"param_groups\",\n \"MMDetBackboneConfig\",\n \"create_model_config\",\n]\n\nfrom icevision.imports import *\nfrom icevision.utils import *\nfrom icevision.backbones import BackboneConfig\nfrom icevision.models.mmdet.download_configs import download_mmdet_configs\nfrom mmdet.models.detectors import *\nfrom mmcv import Config\nfrom mmdet.models.backbones.ssd_vgg import SSDVGG\nfrom mmdet.models.backbones.csp_darknet import CSPDarknet\nfrom mmdet.models.backbones.swin import SwinTransformer\nfrom mmdet.models.backbones.hourglass import HourglassNet\n\n\nmmdet_configs_path = download_mmdet_configs()\n\n\nclass MMDetBackboneConfig(BackboneConfig):\n def __init__(self, model_name, config_path, weights_url):\n self.model_name = model_name\n self.config_path = config_path\n self.weights_url = weights_url\n self.pretrained: bool\n\n def __call__(self, pretrained: bool = True) -> \"MMDetBackboneConfig\":\n self.pretrained = pretrained\n return self\n\n\ndef param_groups(model):\n body = model.backbone\n\n layers = []\n\n # add the backbone\n if isinstance(body, SSDVGG):\n layers += [body.features]\n elif isinstance(body, CSPDarknet):\n layers += [body.stem.conv.conv, body.stem.conv.bn]\n layers += [body.stage1, body.stage2, body.stage3, body.stage4]\n\n elif isinstance(body, HourglassNet):\n layers += [\n body.stem,\n body.hourglass_modules,\n body.inters,\n body.conv1x1s,\n body.out_convs,\n body.remap_convs,\n body.relu,\n ]\n\n elif isinstance(body, SwinTransformer):\n layers += [\n body.patch_embed.adap_padding,\n body.patch_embed.projection,\n body.patch_embed.norm,\n body.drop_after_pos,\n body.stages,\n ]\n # Swin backbone for two-stage detector has norm0 attribute\n if getattr(body, \"norm0\", False):\n layers += [body.norm0]\n\n layers += [body.norm1, body.norm2, body.norm3]\n else:\n layers += [nn.Sequential(body.conv1, body.bn1)]\n layers += [getattr(body, l) for l in body.res_layers]\n\n # add the neck module if it exists (DETR doesn't have a neck module)\n layers += [module for name, module in model.named_modules() if name == \"neck\"]\n\n # add the head\n if isinstance(model, SingleStageDetector):\n layers += [model.bbox_head]\n\n # YOLACT has mask_head and segm_head\n if getattr(model, \"mask_head\"):\n layers += [model.mask_head]\n if getattr(model, \"segm_head\"):\n layers += [model.segm_head]\n\n elif isinstance(model, TwoStageDetector):\n layers += [nn.Sequential(model.rpn_head, model.roi_head)]\n else:\n raise RuntimeError(\n \"{model} must inherit either from SingleStageDetector or TwoStageDetector class\"\n )\n\n _param_groups = [list(layer.parameters()) for layer in layers]\n check_all_model_params_in_groups2(model, _param_groups)\n return _param_groups\n\n\ndef create_model_config(\n backbone: MMDetBackboneConfig,\n pretrained: bool = True,\n checkpoints_path: Optional[Union[str, Path]] = \"checkpoints\",\n force_download=False,\n cfg_options=None,\n):\n\n model_name = backbone.model_name\n config_path = backbone.config_path\n weights_url = backbone.weights_url\n\n # download weights\n weights_path = None\n if pretrained and weights_url:\n save_dir = Path(checkpoints_path) / model_name\n save_dir.mkdir(exist_ok=True, parents=True)\n\n fname = Path(weights_url).name\n weights_path = save_dir / fname\n\n if not weights_path.exists() or force_download:\n download_url(url=weights_url, save_path=str(weights_path))\n\n cfg = Config.fromfile(config_path)\n\n if cfg_options is not None:\n cfg.merge_from_dict(cfg_options)\n\n return cfg, weights_path\n", "path": "icevision/models/mmdet/utils.py"}]} | 1,919 | 199 |
gh_patches_debug_32487 | rasdani/github-patches | git_diff | PlasmaPy__PlasmaPy-2730 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Expands examples in docstrings of `HarrisSheet` and its methods
Currently, the docstring for `HarrisSheet` ([permalink](https://github.com/PlasmaPy/PlasmaPy/blob/2c1ee2e74e86d9519d1a306a6f78413683ca9a02/src/plasmapy/plasma/equilibria1d.py#L12)) doesn't contain any examples. It would be helpful to add a simple working example that shows how to use it.
One possibility would be to adapt some of the [tests](https://github.com/PlasmaPy/PlasmaPy/blob/2c1ee2e74e86d9519d1a306a6f78413683ca9a02/tests/plasma/test_equilibria1d.py).
</issue>
<code>
[start of src/plasmapy/plasma/equilibria1d.py]
1 """Functionality for representing one-dimensional equilibria."""
2
3 __all__ = ["HarrisSheet"]
4
5 import astropy.constants as const
6 import astropy.units as u
7 import numpy as np
8
9 from plasmapy.utils.decorators.validators import validate_quantities
10
11
12 class HarrisSheet:
13 r"""
14 Define a Harris Sheet Equilibrium.
15
16 Magnetic field will be in the :math:`±x` direction and the current
17 density will be in the :math:`±z` direction in a :math:`\hat{x} ×
18 \hat{y} = \hat{z}` coordinate system.
19
20 Parameters
21 ----------
22 B0 : `~astropy.units.Quantity`
23 Magnitude of magnetic field in the limit of :math:`y → ∞` in
24 units convertible to teslas.
25
26 delta : `~astropy.units.Quantity`
27 The thickness of the current sheet in units convertible to
28 meters.
29
30 P0 : `~astropy.units.Quantity`
31 The plasma pressure in the limit of :math:`y → ∞` in units
32 convertible to pascals.
33
34 Notes
35 -----
36 A current sheet is current limited to a surface.
37
38 A Harris sheet is a 1D ideal MHD equilibrium. In resistive MHD if
39 there is any resistivity, it won't be a true equilibrium since the
40 resistivity will gradually smooth the profile out over time.
41
42 A Harris sheet is often used as the initial condition for
43 simulations of magnetic reconnection.
44
45 Examples
46 --------
47 >>> import astropy.units as u
48 >>> harris_sheet = HarrisSheet(delta=3 * u.m, B0=2 * u.T)
49 >>> harris_sheet.magnetic_field(y=5 * u.m)
50 <Quantity 1.8622... T>
51 """
52
53 def __init__(self, B0, delta, P0=0 * u.Pa) -> None:
54 self.B0 = B0
55 self.delta = delta
56 self.P0 = P0
57
58 @validate_quantities
59 def magnetic_field(self, y: u.Quantity[u.m]) -> u.Quantity[u.T]:
60 r"""
61 Compute the magnetic field.
62
63 In this equation, :math:`B_0` is the asymptotic magnitude of the
64 magnetic field for :math:`y → ±∞` and :math:`δ` is the thickness
65 of the sheet.
66
67 .. math::
68
69 B_x(y) = B_0 \tanh \left( \frac{y}{δ} \right)
70
71 Parameters
72 ----------
73 y : `~astropy.units.Quantity`
74 Orthogonal distance from the current sheet center.
75 """
76 return self.B0 * np.tanh(u.rad * y / self.delta)
77
78 @validate_quantities
79 def current_density(self, y: u.Quantity[u.m]) -> u.Quantity[u.A / u.m**2]:
80 r"""
81 Compute the current density.
82
83 .. math::
84
85 J_z(y) = - \frac{B_0}{δ μ_0) \mathrm{sech}^2 \left( \frac{y}{δ} \right)
86
87 Parameters
88 ----------
89 y : `~astropy.units.Quantity`
90 Orthogonal distance from the current sheet center.
91 """
92 return (
93 -self.B0 / (self.delta * const.mu0) * np.cosh(u.rad * y / self.delta) ** -2
94 )
95
96 @validate_quantities
97 def plasma_pressure(self, y: u.Quantity[u.m]) -> u.Quantity[u.Pa]:
98 r"""
99 Compute plasma pressure.
100
101 .. math::
102
103 p(y) = \frac{B_0^2}{2 μ_0} \mathrm{sech}^2 \left( \frac{y}{δ} \right) + p_0
104
105 Parameters
106 ----------
107 y : `~astropy.units.Quantity`
108 Orthogonal distance from the current sheet center.
109 """
110 return (
111 self.B0**2 / (2 * const.mu0) * (np.cosh(u.rad * y / self.delta) ** -2)
112 + self.P0
113 )
114
[end of src/plasmapy/plasma/equilibria1d.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/plasmapy/plasma/equilibria1d.py b/src/plasmapy/plasma/equilibria1d.py
--- a/src/plasmapy/plasma/equilibria1d.py
+++ b/src/plasmapy/plasma/equilibria1d.py
@@ -72,6 +72,18 @@
----------
y : `~astropy.units.Quantity`
Orthogonal distance from the current sheet center.
+
+ Examples
+ --------
+ >>> import astropy.units as u
+ >>> B0 = 1 * u.T
+ >>> delta = 1 * u.m
+ >>> P0 = 0 * u.Pa
+ >>> hs = HarrisSheet(B0, delta, P0)
+ >>> y = [-2, 0, 2] * u.m
+ >>> hs.magnetic_field(y)
+ <Quantity [-0.96402758007, 0, 0.96402758007] T>
+
"""
return self.B0 * np.tanh(u.rad * y / self.delta)
@@ -88,6 +100,17 @@
----------
y : `~astropy.units.Quantity`
Orthogonal distance from the current sheet center.
+
+ Examples
+ --------
+ >>> import astropy.units as u
+ >>> B0 = 1 * u.T
+ >>> delta = 1 * u.m
+ >>> P0 = 0 * u.Pa
+ >>> hs = HarrisSheet(B0, delta, P0)
+ >>> y = [-2, 0, 2] * u.m
+ >>> hs.current_density(y)
+ <Quantity [-56222.1400445, -795774.715459, -56222.1400445] A / m2>
"""
return (
-self.B0 / (self.delta * const.mu0) * np.cosh(u.rad * y / self.delta) ** -2
@@ -106,6 +129,17 @@
----------
y : `~astropy.units.Quantity`
Orthogonal distance from the current sheet center.
+
+ Examples
+ --------
+ >>> import astropy.units as u
+ >>> B0 = 1 * u.T
+ >>> delta = 1 * u.m
+ >>> P0 = 0 * u.Pa
+ >>> hs = HarrisSheet(B0, delta, P0)
+ >>> y = [-2, 0, 2] * u.m
+ >>> hs.plasma_pressure(y)
+ <Quantity [28111.07, 397887.36, 28111.07] Pa>
"""
return (
self.B0**2 / (2 * const.mu0) * (np.cosh(u.rad * y / self.delta) ** -2)
| {"golden_diff": "diff --git a/src/plasmapy/plasma/equilibria1d.py b/src/plasmapy/plasma/equilibria1d.py\n--- a/src/plasmapy/plasma/equilibria1d.py\n+++ b/src/plasmapy/plasma/equilibria1d.py\n@@ -72,6 +72,18 @@\n ----------\n y : `~astropy.units.Quantity`\n Orthogonal distance from the current sheet center.\n+\n+ Examples\n+ --------\n+ >>> import astropy.units as u\n+ >>> B0 = 1 * u.T\n+ >>> delta = 1 * u.m\n+ >>> P0 = 0 * u.Pa\n+ >>> hs = HarrisSheet(B0, delta, P0)\n+ >>> y = [-2, 0, 2] * u.m\n+ >>> hs.magnetic_field(y)\n+ <Quantity [-0.96402758007, 0, 0.96402758007] T>\n+\n \"\"\"\n return self.B0 * np.tanh(u.rad * y / self.delta)\n \n@@ -88,6 +100,17 @@\n ----------\n y : `~astropy.units.Quantity`\n Orthogonal distance from the current sheet center.\n+\n+ Examples\n+ --------\n+ >>> import astropy.units as u\n+ >>> B0 = 1 * u.T\n+ >>> delta = 1 * u.m\n+ >>> P0 = 0 * u.Pa\n+ >>> hs = HarrisSheet(B0, delta, P0)\n+ >>> y = [-2, 0, 2] * u.m\n+ >>> hs.current_density(y)\n+ <Quantity [-56222.1400445, -795774.715459, -56222.1400445] A / m2>\n \"\"\"\n return (\n -self.B0 / (self.delta * const.mu0) * np.cosh(u.rad * y / self.delta) ** -2\n@@ -106,6 +129,17 @@\n ----------\n y : `~astropy.units.Quantity`\n Orthogonal distance from the current sheet center.\n+\n+ Examples\n+ --------\n+ >>> import astropy.units as u\n+ >>> B0 = 1 * u.T\n+ >>> delta = 1 * u.m\n+ >>> P0 = 0 * u.Pa\n+ >>> hs = HarrisSheet(B0, delta, P0)\n+ >>> y = [-2, 0, 2] * u.m\n+ >>> hs.plasma_pressure(y)\n+ <Quantity [28111.07, 397887.36, 28111.07] Pa>\n \"\"\"\n return (\n self.B0**2 / (2 * const.mu0) * (np.cosh(u.rad * y / self.delta) ** -2)\n", "issue": "Expands examples in docstrings of `HarrisSheet` and its methods\nCurrently, the docstring for `HarrisSheet` ([permalink](https://github.com/PlasmaPy/PlasmaPy/blob/2c1ee2e74e86d9519d1a306a6f78413683ca9a02/src/plasmapy/plasma/equilibria1d.py#L12)) doesn't contain any examples. It would be helpful to add a simple working example that shows how to use it.\r\n\r\nOne possibility would be to adapt some of the [tests](https://github.com/PlasmaPy/PlasmaPy/blob/2c1ee2e74e86d9519d1a306a6f78413683ca9a02/tests/plasma/test_equilibria1d.py).\n", "before_files": [{"content": "\"\"\"Functionality for representing one-dimensional equilibria.\"\"\"\n\n__all__ = [\"HarrisSheet\"]\n\nimport astropy.constants as const\nimport astropy.units as u\nimport numpy as np\n\nfrom plasmapy.utils.decorators.validators import validate_quantities\n\n\nclass HarrisSheet:\n r\"\"\"\n Define a Harris Sheet Equilibrium.\n\n Magnetic field will be in the :math:`\u00b1x` direction and the current\n density will be in the :math:`\u00b1z` direction in a :math:`\\hat{x} \u00d7\n \\hat{y} = \\hat{z}` coordinate system.\n\n Parameters\n ----------\n B0 : `~astropy.units.Quantity`\n Magnitude of magnetic field in the limit of :math:`y \u2192 \u221e` in\n units convertible to teslas.\n\n delta : `~astropy.units.Quantity`\n The thickness of the current sheet in units convertible to\n meters.\n\n P0 : `~astropy.units.Quantity`\n The plasma pressure in the limit of :math:`y \u2192 \u221e` in units\n convertible to pascals.\n\n Notes\n -----\n A current sheet is current limited to a surface.\n\n A Harris sheet is a 1D ideal MHD equilibrium. In resistive MHD if\n there is any resistivity, it won't be a true equilibrium since the\n resistivity will gradually smooth the profile out over time.\n\n A Harris sheet is often used as the initial condition for\n simulations of magnetic reconnection.\n\n Examples\n --------\n >>> import astropy.units as u\n >>> harris_sheet = HarrisSheet(delta=3 * u.m, B0=2 * u.T)\n >>> harris_sheet.magnetic_field(y=5 * u.m)\n <Quantity 1.8622... T>\n \"\"\"\n\n def __init__(self, B0, delta, P0=0 * u.Pa) -> None:\n self.B0 = B0\n self.delta = delta\n self.P0 = P0\n\n @validate_quantities\n def magnetic_field(self, y: u.Quantity[u.m]) -> u.Quantity[u.T]:\n r\"\"\"\n Compute the magnetic field.\n\n In this equation, :math:`B_0` is the asymptotic magnitude of the\n magnetic field for :math:`y \u2192 \u00b1\u221e` and :math:`\u03b4` is the thickness\n of the sheet.\n\n .. math::\n\n B_x(y) = B_0 \\tanh \\left( \\frac{y}{\u03b4} \\right)\n\n Parameters\n ----------\n y : `~astropy.units.Quantity`\n Orthogonal distance from the current sheet center.\n \"\"\"\n return self.B0 * np.tanh(u.rad * y / self.delta)\n\n @validate_quantities\n def current_density(self, y: u.Quantity[u.m]) -> u.Quantity[u.A / u.m**2]:\n r\"\"\"\n Compute the current density.\n\n .. math::\n\n J_z(y) = - \\frac{B_0}{\u03b4 \u03bc_0) \\mathrm{sech}^2 \\left( \\frac{y}{\u03b4} \\right)\n\n Parameters\n ----------\n y : `~astropy.units.Quantity`\n Orthogonal distance from the current sheet center.\n \"\"\"\n return (\n -self.B0 / (self.delta * const.mu0) * np.cosh(u.rad * y / self.delta) ** -2\n )\n\n @validate_quantities\n def plasma_pressure(self, y: u.Quantity[u.m]) -> u.Quantity[u.Pa]:\n r\"\"\"\n Compute plasma pressure.\n\n .. math::\n\n p(y) = \\frac{B_0^2}{2 \u03bc_0} \\mathrm{sech}^2 \\left( \\frac{y}{\u03b4} \\right) + p_0\n\n Parameters\n ----------\n y : `~astropy.units.Quantity`\n Orthogonal distance from the current sheet center.\n \"\"\"\n return (\n self.B0**2 / (2 * const.mu0) * (np.cosh(u.rad * y / self.delta) ** -2)\n + self.P0\n )\n", "path": "src/plasmapy/plasma/equilibria1d.py"}]} | 1,893 | 686 |
gh_patches_debug_26647 | rasdani/github-patches | git_diff | doccano__doccano-1989 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Proposal] Warn and/or fail if default admin's password hasn't been changed
Feature description
---------
Proposal: warn and/or fail if default `admin`'s password hasn't been changed.
</issue>
<code>
[start of backend/api/management/commands/create_admin.py]
1 from django.contrib.auth.management.commands import createsuperuser
2 from django.core.management import CommandError
3
4
5 class Command(createsuperuser.Command):
6 help = "Non-interactively create an admin user"
7
8 def add_arguments(self, parser):
9 super().add_arguments(parser)
10 parser.add_argument("--password", default=None, help="The password for the admin.")
11
12 def handle(self, *args, **options):
13 password = options.get("password")
14 username = options.get("username")
15
16 if password and not username:
17 raise CommandError("--username is required if specifying --password")
18
19 try:
20 super().handle(*args, **options)
21 except Exception as err:
22 if "is already taken" in str(err):
23 self.stderr.write(f"User {username} already exists.")
24 else:
25 raise
26
27 if password:
28 database = options.get("database")
29 db = self.UserModel._default_manager.db_manager(database)
30 user = db.get(username=username)
31 user.set_password(password)
32 self.stderr.write(f"Setting password for User {username}.")
33 user.save()
34
[end of backend/api/management/commands/create_admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend/api/management/commands/create_admin.py b/backend/api/management/commands/create_admin.py
--- a/backend/api/management/commands/create_admin.py
+++ b/backend/api/management/commands/create_admin.py
@@ -13,9 +13,17 @@
password = options.get("password")
username = options.get("username")
- if password and not username:
+ if not username:
+ self.stderr.write("Error: Blank username isn't allowed.")
raise CommandError("--username is required if specifying --password")
+ if not password:
+ self.stderr.write("Error: Blank password isn't allowed.")
+ raise CommandError("--password is required")
+
+ if password == "password":
+ self.stdout.write(self.style.WARNING("Warning: You should change the default password."))
+
try:
super().handle(*args, **options)
except Exception as err:
@@ -24,10 +32,10 @@
else:
raise
- if password:
- database = options.get("database")
- db = self.UserModel._default_manager.db_manager(database)
- user = db.get(username=username)
- user.set_password(password)
- self.stderr.write(f"Setting password for User {username}.")
- user.save()
+ database = options.get("database")
+ db = self.UserModel._default_manager.db_manager(database)
+ user = db.get(username=username)
+ user.set_password(password)
+ message = f"Setting password for User {username}."
+ self.stdout.write(self.style.SUCCESS(message))
+ user.save()
| {"golden_diff": "diff --git a/backend/api/management/commands/create_admin.py b/backend/api/management/commands/create_admin.py\n--- a/backend/api/management/commands/create_admin.py\n+++ b/backend/api/management/commands/create_admin.py\n@@ -13,9 +13,17 @@\n password = options.get(\"password\")\n username = options.get(\"username\")\n \n- if password and not username:\n+ if not username:\n+ self.stderr.write(\"Error: Blank username isn't allowed.\")\n raise CommandError(\"--username is required if specifying --password\")\n \n+ if not password:\n+ self.stderr.write(\"Error: Blank password isn't allowed.\")\n+ raise CommandError(\"--password is required\")\n+\n+ if password == \"password\":\n+ self.stdout.write(self.style.WARNING(\"Warning: You should change the default password.\"))\n+\n try:\n super().handle(*args, **options)\n except Exception as err:\n@@ -24,10 +32,10 @@\n else:\n raise\n \n- if password:\n- database = options.get(\"database\")\n- db = self.UserModel._default_manager.db_manager(database)\n- user = db.get(username=username)\n- user.set_password(password)\n- self.stderr.write(f\"Setting password for User {username}.\")\n- user.save()\n+ database = options.get(\"database\")\n+ db = self.UserModel._default_manager.db_manager(database)\n+ user = db.get(username=username)\n+ user.set_password(password)\n+ message = f\"Setting password for User {username}.\"\n+ self.stdout.write(self.style.SUCCESS(message))\n+ user.save()\n", "issue": "[Proposal] Warn and/or fail if default admin's password hasn't been changed\nFeature description\r\n---------\r\nProposal: warn and/or fail if default `admin`'s password hasn't been changed.\n", "before_files": [{"content": "from django.contrib.auth.management.commands import createsuperuser\nfrom django.core.management import CommandError\n\n\nclass Command(createsuperuser.Command):\n help = \"Non-interactively create an admin user\"\n\n def add_arguments(self, parser):\n super().add_arguments(parser)\n parser.add_argument(\"--password\", default=None, help=\"The password for the admin.\")\n\n def handle(self, *args, **options):\n password = options.get(\"password\")\n username = options.get(\"username\")\n\n if password and not username:\n raise CommandError(\"--username is required if specifying --password\")\n\n try:\n super().handle(*args, **options)\n except Exception as err:\n if \"is already taken\" in str(err):\n self.stderr.write(f\"User {username} already exists.\")\n else:\n raise\n\n if password:\n database = options.get(\"database\")\n db = self.UserModel._default_manager.db_manager(database)\n user = db.get(username=username)\n user.set_password(password)\n self.stderr.write(f\"Setting password for User {username}.\")\n user.save()\n", "path": "backend/api/management/commands/create_admin.py"}]} | 876 | 351 |
gh_patches_debug_40653 | rasdani/github-patches | git_diff | crytic__slither-240 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improvements on solc-version detectors
The allowed version is out of date:
https://github.com/crytic/slither/blob/0891f9a8a5e5e096084476e4b2bd292c3685f251/slither/detectors/attributes/incorrect_solc.py#L39
Due to the frequent solc release, we might want to change the logic to allow future releases.
Additionally:
- 0.5.5 should not be used: https://twitter.com/ethchris/status/1105903546602528768
- the wiki link is incorrect
</issue>
<code>
[start of slither/detectors/attributes/incorrect_solc.py]
1 """
2 Check if an incorrect version of solc is used
3 """
4
5 from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification
6 import re
7
8 # group:
9 # 0: ^ > >= < <= (optional)
10 # 1: ' ' (optional)
11 # 2: version number
12 # 3: version number
13 # 4: version number
14 PATTERN = re.compile('(\^|>|>=|<|<=)?([ ]+)?(\d+)\.(\d+)\.(\d+)')
15
16 class IncorrectSolc(AbstractDetector):
17 """
18 Check if an old version of solc is used
19 """
20
21 ARGUMENT = 'solc-version'
22 HELP = 'Incorrect Solidity version (< 0.4.24 or complex pragma)'
23 IMPACT = DetectorClassification.INFORMATIONAL
24 CONFIDENCE = DetectorClassification.HIGH
25
26 WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-version-of-solidity'
27
28 WIKI_TITLE = 'Incorrect versions of Solidity'
29 WIKI_DESCRIPTION = '''
30 Solc frequently releases new compiler versions. Using an old version prevents access to new Solidity security checks.
31 We recommend avoiding complex pragma statement.'''
32 WIKI_RECOMMENDATION = 'Use Solidity 0.4.25 or 0.5.2.'
33
34 COMPLEX_PRAGMA = "is too complex"
35 OLD_VERSION = "allows old versions"
36 LESS_THAN = "uses lesser than"
37
38 # Indicates the allowed versions.
39 ALLOWED_VERSIONS = ["0.4.24", "0.4.25", "0.5.2", "0.5.3"]
40
41 def _check_version(self, version):
42 op = version[0]
43 if op and not op in ['>', '>=', '^']:
44 return self.LESS_THAN
45 version_number = '.'.join(version[2:])
46 if version_number not in self.ALLOWED_VERSIONS:
47 return self.OLD_VERSION
48 return None
49
50 def _check_pragma(self, version):
51 versions = PATTERN.findall(version)
52 if len(versions) == 1:
53 version = versions[0]
54 return self._check_version(version)
55 elif len(versions) == 2:
56 version_left = versions[0]
57 version_right = versions[1]
58 # Only allow two elements if the second one is
59 # <0.5.0 or <0.6.0
60 if version_right not in [('<', '', '0', '5', '0'), ('<', '', '0', '6', '0')]:
61 return self.COMPLEX_PRAGMA
62 return self._check_version(version_left)
63 else:
64 return self.COMPLEX_PRAGMA
65 def _detect(self):
66 """
67 Detects pragma statements that allow for outdated solc versions.
68 :return: Returns the relevant JSON data for the findings.
69 """
70 # Detect all version related pragmas and check if they are disallowed.
71 results = []
72 pragma = self.slither.pragma_directives
73 disallowed_pragmas = []
74 detected_version = False
75 for p in pragma:
76 # Skip any pragma directives which do not refer to version
77 if len(p.directive) < 1 or p.directive[0] != "solidity":
78 continue
79
80 # This is version, so we test if this is disallowed.
81 detected_version = True
82 reason = self._check_pragma(p.version)
83 if reason:
84 disallowed_pragmas.append((reason, p))
85
86 # If we found any disallowed pragmas, we output our findings.
87 if disallowed_pragmas:
88 for (reason, p) in disallowed_pragmas:
89 info = f"Pragma version \"{p.version}\" {reason} ({p.source_mapping_str})\n"
90
91 json = self.generate_json_result(info)
92 self.add_pragma_to_json(p, json)
93 results.append(json)
94
95 return results
96
[end of slither/detectors/attributes/incorrect_solc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/slither/detectors/attributes/incorrect_solc.py b/slither/detectors/attributes/incorrect_solc.py
--- a/slither/detectors/attributes/incorrect_solc.py
+++ b/slither/detectors/attributes/incorrect_solc.py
@@ -23,31 +23,43 @@
IMPACT = DetectorClassification.INFORMATIONAL
CONFIDENCE = DetectorClassification.HIGH
- WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-version-of-solidity'
+ WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-versions-of-solidity'
WIKI_TITLE = 'Incorrect versions of Solidity'
WIKI_DESCRIPTION = '''
Solc frequently releases new compiler versions. Using an old version prevents access to new Solidity security checks.
We recommend avoiding complex pragma statement.'''
- WIKI_RECOMMENDATION = 'Use Solidity 0.4.25 or 0.5.2.'
+ WIKI_RECOMMENDATION = '''
+Use Solidity 0.4.25 or 0.5.3. Consider using the latest version of Solidity for testing the compilation, and a trusted version for deploying.'''
- COMPLEX_PRAGMA = "is too complex"
- OLD_VERSION = "allows old versions"
- LESS_THAN = "uses lesser than"
+ COMPLEX_PRAGMA_TXT = "is too complex"
+ OLD_VERSION_TXT = "allows old versions"
+ LESS_THAN_TXT = "uses lesser than"
+
+ TOO_RECENT_VERSION_TXT = "necessitates versions too recent to be trusted. Consider deploying with 0.5.3"
+ BUGGY_VERSION_TXT = "is known to contain severe issue (https://solidity.readthedocs.io/en/v0.5.8/bugs.html)"
# Indicates the allowed versions.
- ALLOWED_VERSIONS = ["0.4.24", "0.4.25", "0.5.2", "0.5.3"]
+ ALLOWED_VERSIONS = ["0.4.25", "0.4.26", "0.5.3"]
+ # Indicates the versions too recent.
+ TOO_RECENT_VERSIONS = ["0.5.4", "0.5.7", "0.5.8", "0.5.9", "0.5.10"]
+ # Indicates the versions that should not be used.
+ BUGGY_VERSIONS = ["0.4.22", "0.5.5", "0.5.6", "^0.4.22", "^0.5.5", "^0.5.6"]
def _check_version(self, version):
op = version[0]
if op and not op in ['>', '>=', '^']:
- return self.LESS_THAN
+ return self.LESS_THAN_TXT
version_number = '.'.join(version[2:])
if version_number not in self.ALLOWED_VERSIONS:
- return self.OLD_VERSION
+ if version_number in self.TOO_RECENT_VERSIONS:
+ return self.TOO_RECENT_VERSION_TXT
+ return self.OLD_VERSION_TXT
return None
def _check_pragma(self, version):
+ if version in self.BUGGY_VERSIONS:
+ return self.BUGGY_VERSION_TXT
versions = PATTERN.findall(version)
if len(versions) == 1:
version = versions[0]
@@ -58,10 +70,10 @@
# Only allow two elements if the second one is
# <0.5.0 or <0.6.0
if version_right not in [('<', '', '0', '5', '0'), ('<', '', '0', '6', '0')]:
- return self.COMPLEX_PRAGMA
+ return self.COMPLEX_PRAGMA_TXT
return self._check_version(version_left)
else:
- return self.COMPLEX_PRAGMA
+ return self.COMPLEX_PRAGMA_TXT
def _detect(self):
"""
Detects pragma statements that allow for outdated solc versions.
| {"golden_diff": "diff --git a/slither/detectors/attributes/incorrect_solc.py b/slither/detectors/attributes/incorrect_solc.py\n--- a/slither/detectors/attributes/incorrect_solc.py\n+++ b/slither/detectors/attributes/incorrect_solc.py\n@@ -23,31 +23,43 @@\n IMPACT = DetectorClassification.INFORMATIONAL\n CONFIDENCE = DetectorClassification.HIGH\n \n- WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-version-of-solidity'\n+ WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-versions-of-solidity'\n \n WIKI_TITLE = 'Incorrect versions of Solidity'\n WIKI_DESCRIPTION = '''\n Solc frequently releases new compiler versions. Using an old version prevents access to new Solidity security checks.\n We recommend avoiding complex pragma statement.'''\n- WIKI_RECOMMENDATION = 'Use Solidity 0.4.25 or 0.5.2.'\n+ WIKI_RECOMMENDATION = '''\n+Use Solidity 0.4.25 or 0.5.3. Consider using the latest version of Solidity for testing the compilation, and a trusted version for deploying.'''\n \n- COMPLEX_PRAGMA = \"is too complex\"\n- OLD_VERSION = \"allows old versions\"\n- LESS_THAN = \"uses lesser than\"\n+ COMPLEX_PRAGMA_TXT = \"is too complex\"\n+ OLD_VERSION_TXT = \"allows old versions\"\n+ LESS_THAN_TXT = \"uses lesser than\"\n+\n+ TOO_RECENT_VERSION_TXT = \"necessitates versions too recent to be trusted. Consider deploying with 0.5.3\"\n+ BUGGY_VERSION_TXT = \"is known to contain severe issue (https://solidity.readthedocs.io/en/v0.5.8/bugs.html)\"\n \n # Indicates the allowed versions.\n- ALLOWED_VERSIONS = [\"0.4.24\", \"0.4.25\", \"0.5.2\", \"0.5.3\"]\n+ ALLOWED_VERSIONS = [\"0.4.25\", \"0.4.26\", \"0.5.3\"]\n+ # Indicates the versions too recent.\n+ TOO_RECENT_VERSIONS = [\"0.5.4\", \"0.5.7\", \"0.5.8\", \"0.5.9\", \"0.5.10\"]\n+ # Indicates the versions that should not be used.\n+ BUGGY_VERSIONS = [\"0.4.22\", \"0.5.5\", \"0.5.6\", \"^0.4.22\", \"^0.5.5\", \"^0.5.6\"]\n \n def _check_version(self, version):\n op = version[0]\n if op and not op in ['>', '>=', '^']:\n- return self.LESS_THAN\n+ return self.LESS_THAN_TXT\n version_number = '.'.join(version[2:])\n if version_number not in self.ALLOWED_VERSIONS:\n- return self.OLD_VERSION\n+ if version_number in self.TOO_RECENT_VERSIONS:\n+ return self.TOO_RECENT_VERSION_TXT\n+ return self.OLD_VERSION_TXT\n return None\n \n def _check_pragma(self, version):\n+ if version in self.BUGGY_VERSIONS:\n+ return self.BUGGY_VERSION_TXT\n versions = PATTERN.findall(version)\n if len(versions) == 1:\n version = versions[0]\n@@ -58,10 +70,10 @@\n # Only allow two elements if the second one is\n # <0.5.0 or <0.6.0\n if version_right not in [('<', '', '0', '5', '0'), ('<', '', '0', '6', '0')]:\n- return self.COMPLEX_PRAGMA\n+ return self.COMPLEX_PRAGMA_TXT\n return self._check_version(version_left)\n else:\n- return self.COMPLEX_PRAGMA\n+ return self.COMPLEX_PRAGMA_TXT\n def _detect(self):\n \"\"\"\n Detects pragma statements that allow for outdated solc versions.\n", "issue": "Improvements on solc-version detectors\nThe allowed version is out of date:\r\nhttps://github.com/crytic/slither/blob/0891f9a8a5e5e096084476e4b2bd292c3685f251/slither/detectors/attributes/incorrect_solc.py#L39\r\n\r\nDue to the frequent solc release, we might want to change the logic to allow future releases.\r\n\r\nAdditionally:\r\n- 0.5.5 should not be used: https://twitter.com/ethchris/status/1105903546602528768\r\n- the wiki link is incorrect\n", "before_files": [{"content": "\"\"\"\n Check if an incorrect version of solc is used\n\"\"\"\n\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\nimport re\n\n# group:\n# 0: ^ > >= < <= (optional)\n# 1: ' ' (optional)\n# 2: version number\n# 3: version number\n# 4: version number\nPATTERN = re.compile('(\\^|>|>=|<|<=)?([ ]+)?(\\d+)\\.(\\d+)\\.(\\d+)')\n\nclass IncorrectSolc(AbstractDetector):\n \"\"\"\n Check if an old version of solc is used\n \"\"\"\n\n ARGUMENT = 'solc-version'\n HELP = 'Incorrect Solidity version (< 0.4.24 or complex pragma)'\n IMPACT = DetectorClassification.INFORMATIONAL\n CONFIDENCE = DetectorClassification.HIGH\n\n WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-version-of-solidity'\n\n WIKI_TITLE = 'Incorrect versions of Solidity'\n WIKI_DESCRIPTION = '''\nSolc frequently releases new compiler versions. Using an old version prevents access to new Solidity security checks.\nWe recommend avoiding complex pragma statement.'''\n WIKI_RECOMMENDATION = 'Use Solidity 0.4.25 or 0.5.2.'\n\n COMPLEX_PRAGMA = \"is too complex\"\n OLD_VERSION = \"allows old versions\"\n LESS_THAN = \"uses lesser than\"\n\n # Indicates the allowed versions.\n ALLOWED_VERSIONS = [\"0.4.24\", \"0.4.25\", \"0.5.2\", \"0.5.3\"]\n\n def _check_version(self, version):\n op = version[0]\n if op and not op in ['>', '>=', '^']:\n return self.LESS_THAN\n version_number = '.'.join(version[2:])\n if version_number not in self.ALLOWED_VERSIONS:\n return self.OLD_VERSION\n return None\n\n def _check_pragma(self, version):\n versions = PATTERN.findall(version)\n if len(versions) == 1:\n version = versions[0]\n return self._check_version(version)\n elif len(versions) == 2:\n version_left = versions[0]\n version_right = versions[1]\n # Only allow two elements if the second one is\n # <0.5.0 or <0.6.0\n if version_right not in [('<', '', '0', '5', '0'), ('<', '', '0', '6', '0')]:\n return self.COMPLEX_PRAGMA\n return self._check_version(version_left)\n else:\n return self.COMPLEX_PRAGMA\n def _detect(self):\n \"\"\"\n Detects pragma statements that allow for outdated solc versions.\n :return: Returns the relevant JSON data for the findings.\n \"\"\"\n # Detect all version related pragmas and check if they are disallowed.\n results = []\n pragma = self.slither.pragma_directives\n disallowed_pragmas = []\n detected_version = False\n for p in pragma:\n # Skip any pragma directives which do not refer to version\n if len(p.directive) < 1 or p.directive[0] != \"solidity\":\n continue\n\n # This is version, so we test if this is disallowed.\n detected_version = True\n reason = self._check_pragma(p.version)\n if reason:\n disallowed_pragmas.append((reason, p))\n\n # If we found any disallowed pragmas, we output our findings.\n if disallowed_pragmas:\n for (reason, p) in disallowed_pragmas:\n info = f\"Pragma version \\\"{p.version}\\\" {reason} ({p.source_mapping_str})\\n\"\n\n json = self.generate_json_result(info)\n self.add_pragma_to_json(p, json)\n results.append(json)\n\n return results\n", "path": "slither/detectors/attributes/incorrect_solc.py"}]} | 1,752 | 927 |
gh_patches_debug_5974 | rasdani/github-patches | git_diff | google__pytype-20 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
python -m pip install -U . doesn't work.
It ought to be possible to install pytype using pip by running
```
python -m pip install -U .
```
but doing so causes an error message.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 # pylint: disable=bad-indentation
4
5 from distutils.core import setup
6
7 import glob
8 import os
9
10
11 def scan_package_data(path, pattern):
12 result = []
13 for subdir, _, _ in os.walk(path):
14 full_pattern = os.path.join(subdir, pattern)
15 if glob.glob(full_pattern):
16 # Once we know that it matches files, we store the pattern itself.
17 result.append(full_pattern)
18 return result
19
20
21 typeshed = scan_package_data('typeshed', '*.pyi')
22 assert 'typeshed/stdlib/2.7/*.pyi' in typeshed
23
24
25 setup(
26 name='pytype',
27 version='0.2',
28 description='Python type inferencer',
29 maintainer='Google',
30 maintainer_email='[email protected]',
31 url='http://github.com/google/pytype',
32 packages=['pytype',
33 'pytype/pyc',
34 'pytype/pytd',
35 'pytype/pytd/parse',
36 ],
37 scripts=['scripts/pytype', 'scripts/pytd'],
38 package_data={'pytype': ['pytd/builtins/*',
39 'pytd/stdlib/*',
40 ] + typeshed},
41 requires=['ply (>=3.4)'],
42 install_requires=['ply>=3.4'],
43 )
44
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -36,7 +36,8 @@
],
scripts=['scripts/pytype', 'scripts/pytd'],
package_data={'pytype': ['pytd/builtins/*',
- 'pytd/stdlib/*',
+ 'pytd/stdlib/os/*.pytd',
+ 'pytd/stdlib/*.pytd',
] + typeshed},
requires=['ply (>=3.4)'],
install_requires=['ply>=3.4'],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -36,7 +36,8 @@\n ],\n scripts=['scripts/pytype', 'scripts/pytd'],\n package_data={'pytype': ['pytd/builtins/*',\n- 'pytd/stdlib/*',\n+ 'pytd/stdlib/os/*.pytd',\n+ 'pytd/stdlib/*.pytd',\n ] + typeshed},\n requires=['ply (>=3.4)'],\n install_requires=['ply>=3.4'],\n", "issue": "python -m pip install -U . doesn't work.\nIt ought to be possible to install pytype using pip by running\n\n```\npython -m pip install -U .\n```\n\nbut doing so causes an error message.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# pylint: disable=bad-indentation\n\nfrom distutils.core import setup\n\nimport glob\nimport os\n\n\ndef scan_package_data(path, pattern):\n result = []\n for subdir, _, _ in os.walk(path):\n full_pattern = os.path.join(subdir, pattern)\n if glob.glob(full_pattern):\n # Once we know that it matches files, we store the pattern itself.\n result.append(full_pattern)\n return result\n\n\ntypeshed = scan_package_data('typeshed', '*.pyi')\nassert 'typeshed/stdlib/2.7/*.pyi' in typeshed\n\n\nsetup(\n name='pytype',\n version='0.2',\n description='Python type inferencer',\n maintainer='Google',\n maintainer_email='[email protected]',\n url='http://github.com/google/pytype',\n packages=['pytype',\n 'pytype/pyc',\n 'pytype/pytd',\n 'pytype/pytd/parse',\n ],\n scripts=['scripts/pytype', 'scripts/pytd'],\n package_data={'pytype': ['pytd/builtins/*',\n 'pytd/stdlib/*',\n ] + typeshed},\n requires=['ply (>=3.4)'],\n install_requires=['ply>=3.4'],\n)\n", "path": "setup.py"}]} | 942 | 120 |
gh_patches_debug_35793 | rasdani/github-patches | git_diff | CTPUG__wafer-221 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Duplicate page created
On https://wafertest.debconf.org, I created the following page: https://wafertest.debconf.org/debconf-16-bursaries-instructions, when my Wafer pages page loaded, I saw that there existed two new pages with that title.
When I visit https://wafertest.debconf.org/debconf-16-bursaries-instructions, wafer gives me a debug page that says "get() returned more than one Page -- it returned 2!"
Here is the traceback: http://paste.debian.net/415666/
For now I'll just delete the duplicate page, but @stefanor mentioned that a unique index for pages may be required.
</issue>
<code>
[start of wafer/pages/admin.py]
1 from django.contrib import admin
2
3 from wafer.pages.models import File, Page
4
5 from wafer.compare.admin import CompareVersionAdmin, DateModifiedFilter
6
7
8 class PageAdmin(CompareVersionAdmin, admin.ModelAdmin):
9 prepopulated_fields = {"slug": ("name",)}
10 list_display = ('name', 'slug', 'get_people_display_names', 'get_in_schedule')
11
12 list_filter = (DateModifiedFilter,)
13
14
15
16 admin.site.register(Page, PageAdmin)
17 admin.site.register(File)
18
[end of wafer/pages/admin.py]
[start of wafer/pages/models.py]
1 import logging
2 logger = logging.getLogger(__name__)
3
4 from django.utils.translation import ugettext_lazy as _
5 from django.core.urlresolvers import reverse
6 from django.conf import settings
7 from django.db import models
8 from django.db.models.signals import post_save
9 from django.utils.encoding import python_2_unicode_compatible
10
11
12 from markitup.fields import MarkupField
13 from wafer.menu import MenuError, refresh_menu_cache
14
15
16 @python_2_unicode_compatible
17 class File(models.Model):
18 """A file for use in page markup."""
19 name = models.CharField(max_length=255)
20 description = models.TextField()
21 item = models.FileField(upload_to='pages_files')
22
23 def __str__(self):
24 return u'%s' % (self.name,)
25
26
27 @python_2_unicode_compatible
28 class Page(models.Model):
29 """An extra page for the site."""
30 name = models.CharField(max_length=255)
31 slug = models.SlugField(help_text=_("Last component of the page URL"))
32 parent = models.ForeignKey('self', null=True, blank=True)
33 content = MarkupField(
34 help_text=_("Markdown contents for the page."))
35 include_in_menu = models.BooleanField(
36 help_text=_("Whether to include in menus."),
37 default=False)
38 exclude_from_static = models.BooleanField(
39 help_text=_("Whether to exclude this page from the static version of"
40 " the site (Container pages, etc.)"),
41 default=False)
42 files = models.ManyToManyField(
43 File, related_name="pages", blank=True,
44 help_text=_("Images and other files for use in"
45 " the content markdown field."))
46
47 people = models.ManyToManyField(settings.AUTH_USER_MODEL,
48 related_name='pages', blank=True,
49 help_text=_("People associated with this page for display in the"
50 " schedule (Session chairs, panelists, etc.)"))
51
52 def __str__(self):
53 return u'%s' % (self.name,)
54
55 def get_path(self):
56 path, parent = [self.slug], self.parent
57 while parent is not None:
58 path.insert(0, parent.slug)
59 parent = parent.parent
60 return path
61
62 def get_absolute_url(self):
63 url = "/".join(self.get_path())
64 return reverse('wafer_page', args=(url,))
65
66 def get_in_schedule(self):
67 if self.scheduleitem_set.all():
68 return True
69 return False
70
71 def get_people_display_names(self):
72 names = [person.userprofile.display_name()
73 for person in self.people.all()]
74 if len(names) > 2:
75 comma_names = ', '.join(names[:-1])
76 return comma_names + ' and ' + names[-1]
77 else:
78 return ' and '.join(names)
79
80 get_in_schedule.short_description = 'Added to schedule'
81 get_in_schedule.boolean = True
82
83 get_people_display_names.short_description = 'People'
84
85 class Model:
86 unique_together = (('parent', 'slug'),)
87
88
89 def page_menus(root_menu):
90 """Add page menus."""
91 for page in Page.objects.filter(include_in_menu=True):
92 path = page.get_path()
93 menu = path[0] if len(path) > 1 else None
94 try:
95 root_menu.add_item(page.name, page.get_absolute_url(), menu=menu)
96 except MenuError as e:
97 logger.error("Bad menu item %r for page with slug %r."
98 % (e, page.slug))
99
100
101 post_save.connect(refresh_menu_cache, sender=Page)
102
[end of wafer/pages/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wafer/pages/admin.py b/wafer/pages/admin.py
--- a/wafer/pages/admin.py
+++ b/wafer/pages/admin.py
@@ -7,7 +7,8 @@
class PageAdmin(CompareVersionAdmin, admin.ModelAdmin):
prepopulated_fields = {"slug": ("name",)}
- list_display = ('name', 'slug', 'get_people_display_names', 'get_in_schedule')
+ list_display = ('name', 'slug', 'get_absolute_url',
+ 'get_people_display_names', 'get_in_schedule')
list_filter = (DateModifiedFilter,)
diff --git a/wafer/pages/models.py b/wafer/pages/models.py
--- a/wafer/pages/models.py
+++ b/wafer/pages/models.py
@@ -3,6 +3,7 @@
from django.utils.translation import ugettext_lazy as _
from django.core.urlresolvers import reverse
+from django.core.exceptions import ValidationError, NON_FIELD_ERRORS
from django.conf import settings
from django.db import models
from django.db.models.signals import post_save
@@ -63,6 +64,8 @@
url = "/".join(self.get_path())
return reverse('wafer_page', args=(url,))
+ get_absolute_url.short_description = 'page url'
+
def get_in_schedule(self):
if self.scheduleitem_set.all():
return True
@@ -85,6 +88,35 @@
class Model:
unique_together = (('parent', 'slug'),)
+ def clean(self):
+ keys = [self.pk]
+ parent = self.parent
+ while parent is not None:
+ if parent.pk in keys:
+ raise ValidationError(
+ {
+ NON_FIELD_ERRORS: [
+ _("Circular reference in parent."),
+ ],
+ })
+ keys.append(parent.pk)
+ parent = parent.parent
+ return super(Page, self).clean()
+
+ def validate_unique(self, exclude=None):
+ existing = Page.objects.filter(slug=self.slug, parent=self.parent)
+ # We could be updating the page, so don't fail if the existing
+ # entry is this page.
+ if existing.count() > 1 or (existing.count() == 1 and
+ existing.first().pk != self.pk):
+ raise ValidationError(
+ {
+ NON_FIELD_ERRORS: [
+ _("Duplicate parent/slug combination."),
+ ],
+ })
+ return super(Page, self).validate_unique(exclude)
+
def page_menus(root_menu):
"""Add page menus."""
| {"golden_diff": "diff --git a/wafer/pages/admin.py b/wafer/pages/admin.py\n--- a/wafer/pages/admin.py\n+++ b/wafer/pages/admin.py\n@@ -7,7 +7,8 @@\n \n class PageAdmin(CompareVersionAdmin, admin.ModelAdmin):\n prepopulated_fields = {\"slug\": (\"name\",)}\n- list_display = ('name', 'slug', 'get_people_display_names', 'get_in_schedule')\n+ list_display = ('name', 'slug', 'get_absolute_url',\n+ 'get_people_display_names', 'get_in_schedule')\n \n list_filter = (DateModifiedFilter,)\n \ndiff --git a/wafer/pages/models.py b/wafer/pages/models.py\n--- a/wafer/pages/models.py\n+++ b/wafer/pages/models.py\n@@ -3,6 +3,7 @@\n \n from django.utils.translation import ugettext_lazy as _\n from django.core.urlresolvers import reverse\n+from django.core.exceptions import ValidationError, NON_FIELD_ERRORS\n from django.conf import settings\n from django.db import models\n from django.db.models.signals import post_save\n@@ -63,6 +64,8 @@\n url = \"/\".join(self.get_path())\n return reverse('wafer_page', args=(url,))\n \n+ get_absolute_url.short_description = 'page url'\n+\n def get_in_schedule(self):\n if self.scheduleitem_set.all():\n return True\n@@ -85,6 +88,35 @@\n class Model:\n unique_together = (('parent', 'slug'),)\n \n+ def clean(self):\n+ keys = [self.pk]\n+ parent = self.parent\n+ while parent is not None:\n+ if parent.pk in keys:\n+ raise ValidationError(\n+ {\n+ NON_FIELD_ERRORS: [\n+ _(\"Circular reference in parent.\"),\n+ ],\n+ })\n+ keys.append(parent.pk)\n+ parent = parent.parent\n+ return super(Page, self).clean()\n+\n+ def validate_unique(self, exclude=None):\n+ existing = Page.objects.filter(slug=self.slug, parent=self.parent)\n+ # We could be updating the page, so don't fail if the existing\n+ # entry is this page.\n+ if existing.count() > 1 or (existing.count() == 1 and\n+ existing.first().pk != self.pk):\n+ raise ValidationError(\n+ {\n+ NON_FIELD_ERRORS: [\n+ _(\"Duplicate parent/slug combination.\"),\n+ ],\n+ })\n+ return super(Page, self).validate_unique(exclude)\n+\n \n def page_menus(root_menu):\n \"\"\"Add page menus.\"\"\"\n", "issue": "Duplicate page created\nOn https://wafertest.debconf.org, I created the following page: https://wafertest.debconf.org/debconf-16-bursaries-instructions, when my Wafer pages page loaded, I saw that there existed two new pages with that title.\n\nWhen I visit https://wafertest.debconf.org/debconf-16-bursaries-instructions, wafer gives me a debug page that says \"get() returned more than one Page -- it returned 2!\"\n\nHere is the traceback: http://paste.debian.net/415666/\n\nFor now I'll just delete the duplicate page, but @stefanor mentioned that a unique index for pages may be required.\n\n", "before_files": [{"content": "from django.contrib import admin\n\nfrom wafer.pages.models import File, Page\n\nfrom wafer.compare.admin import CompareVersionAdmin, DateModifiedFilter\n\n\nclass PageAdmin(CompareVersionAdmin, admin.ModelAdmin):\n prepopulated_fields = {\"slug\": (\"name\",)}\n list_display = ('name', 'slug', 'get_people_display_names', 'get_in_schedule')\n\n list_filter = (DateModifiedFilter,)\n\n\n\nadmin.site.register(Page, PageAdmin)\nadmin.site.register(File)\n", "path": "wafer/pages/admin.py"}, {"content": "import logging\nlogger = logging.getLogger(__name__)\n\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.core.urlresolvers import reverse\nfrom django.conf import settings\nfrom django.db import models\nfrom django.db.models.signals import post_save\nfrom django.utils.encoding import python_2_unicode_compatible\n\n\nfrom markitup.fields import MarkupField\nfrom wafer.menu import MenuError, refresh_menu_cache\n\n\n@python_2_unicode_compatible\nclass File(models.Model):\n \"\"\"A file for use in page markup.\"\"\"\n name = models.CharField(max_length=255)\n description = models.TextField()\n item = models.FileField(upload_to='pages_files')\n\n def __str__(self):\n return u'%s' % (self.name,)\n\n\n@python_2_unicode_compatible\nclass Page(models.Model):\n \"\"\"An extra page for the site.\"\"\"\n name = models.CharField(max_length=255)\n slug = models.SlugField(help_text=_(\"Last component of the page URL\"))\n parent = models.ForeignKey('self', null=True, blank=True)\n content = MarkupField(\n help_text=_(\"Markdown contents for the page.\"))\n include_in_menu = models.BooleanField(\n help_text=_(\"Whether to include in menus.\"),\n default=False)\n exclude_from_static = models.BooleanField(\n help_text=_(\"Whether to exclude this page from the static version of\"\n \" the site (Container pages, etc.)\"),\n default=False)\n files = models.ManyToManyField(\n File, related_name=\"pages\", blank=True,\n help_text=_(\"Images and other files for use in\"\n \" the content markdown field.\"))\n\n people = models.ManyToManyField(settings.AUTH_USER_MODEL,\n related_name='pages', blank=True,\n help_text=_(\"People associated with this page for display in the\"\n \" schedule (Session chairs, panelists, etc.)\"))\n\n def __str__(self):\n return u'%s' % (self.name,)\n\n def get_path(self):\n path, parent = [self.slug], self.parent\n while parent is not None:\n path.insert(0, parent.slug)\n parent = parent.parent\n return path\n\n def get_absolute_url(self):\n url = \"/\".join(self.get_path())\n return reverse('wafer_page', args=(url,))\n\n def get_in_schedule(self):\n if self.scheduleitem_set.all():\n return True\n return False\n\n def get_people_display_names(self):\n names = [person.userprofile.display_name()\n for person in self.people.all()]\n if len(names) > 2:\n comma_names = ', '.join(names[:-1])\n return comma_names + ' and ' + names[-1]\n else:\n return ' and '.join(names)\n\n get_in_schedule.short_description = 'Added to schedule'\n get_in_schedule.boolean = True\n\n get_people_display_names.short_description = 'People'\n\n class Model:\n unique_together = (('parent', 'slug'),)\n\n\ndef page_menus(root_menu):\n \"\"\"Add page menus.\"\"\"\n for page in Page.objects.filter(include_in_menu=True):\n path = page.get_path()\n menu = path[0] if len(path) > 1 else None\n try:\n root_menu.add_item(page.name, page.get_absolute_url(), menu=menu)\n except MenuError as e:\n logger.error(\"Bad menu item %r for page with slug %r.\"\n % (e, page.slug))\n\n\npost_save.connect(refresh_menu_cache, sender=Page)\n", "path": "wafer/pages/models.py"}]} | 1,777 | 558 |
gh_patches_debug_7804 | rasdani/github-patches | git_diff | apache__airflow-15207 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Specify that exit code -9 is due to RAM
Related to https://github.com/apache/airflow/issues/9655
It would be nice to add a message when you get this error with some info, like 'This probably is because a lack of RAM' or something like that.
I have found the code where the -9 is assigned but have no idea how to add a logging message.
self.process = None
if self._rc is None:
# Something else reaped it before we had a chance, so let's just "guess" at an error code.
self._rc = -9
</issue>
<code>
[start of airflow/task/task_runner/standard_task_runner.py]
1 #
2 # Licensed to the Apache Software Foundation (ASF) under one
3 # or more contributor license agreements. See the NOTICE file
4 # distributed with this work for additional information
5 # regarding copyright ownership. The ASF licenses this file
6 # to you under the Apache License, Version 2.0 (the
7 # "License"); you may not use this file except in compliance
8 # with the License. You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing,
13 # software distributed under the License is distributed on an
14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 # KIND, either express or implied. See the License for the
16 # specific language governing permissions and limitations
17 # under the License.
18 """Standard task runner"""
19 import logging
20 import os
21 from typing import Optional
22
23 import psutil
24 from setproctitle import setproctitle # pylint: disable=no-name-in-module
25
26 from airflow.settings import CAN_FORK
27 from airflow.task.task_runner.base_task_runner import BaseTaskRunner
28 from airflow.utils.process_utils import reap_process_group
29
30
31 class StandardTaskRunner(BaseTaskRunner):
32 """Standard runner for all tasks."""
33
34 def __init__(self, local_task_job):
35 super().__init__(local_task_job)
36 self._rc = None
37 self.dag = local_task_job.task_instance.task.dag
38
39 def start(self):
40 if CAN_FORK and not self.run_as_user:
41 self.process = self._start_by_fork()
42 else:
43 self.process = self._start_by_exec()
44
45 def _start_by_exec(self):
46 subprocess = self.run_command()
47 return psutil.Process(subprocess.pid)
48
49 def _start_by_fork(self): # pylint: disable=inconsistent-return-statements
50 pid = os.fork()
51 if pid:
52 self.log.info("Started process %d to run task", pid)
53 return psutil.Process(pid)
54 else:
55 import signal
56
57 from airflow import settings
58 from airflow.cli.cli_parser import get_parser
59 from airflow.sentry import Sentry
60
61 signal.signal(signal.SIGINT, signal.SIG_DFL)
62 signal.signal(signal.SIGTERM, signal.SIG_DFL)
63 # Start a new process group
64 os.setpgid(0, 0)
65
66 # Force a new SQLAlchemy session. We can't share open DB handles
67 # between process. The cli code will re-create this as part of its
68 # normal startup
69 settings.engine.pool.dispose()
70 settings.engine.dispose()
71
72 parser = get_parser()
73 # [1:] - remove "airflow" from the start of the command
74 args = parser.parse_args(self._command[1:])
75
76 self.log.info('Running: %s', self._command)
77 self.log.info('Job %s: Subtask %s', self._task_instance.job_id, self._task_instance.task_id)
78
79 proc_title = "airflow task runner: {0.dag_id} {0.task_id} {0.execution_date}"
80 if hasattr(args, "job_id"):
81 proc_title += " {0.job_id}"
82 setproctitle(proc_title.format(args))
83
84 try:
85 args.func(args, dag=self.dag)
86 return_code = 0
87 except Exception: # pylint: disable=broad-except
88 return_code = 1
89 finally:
90 # Explicitly flush any pending exception to Sentry if enabled
91 Sentry.flush()
92 logging.shutdown()
93 os._exit(return_code) # pylint: disable=protected-access
94
95 def return_code(self, timeout: int = 0) -> Optional[int]:
96 # We call this multiple times, but we can only wait on the process once
97 if self._rc is not None or not self.process:
98 return self._rc
99
100 try:
101 self._rc = self.process.wait(timeout=timeout)
102 self.process = None
103 except psutil.TimeoutExpired:
104 pass
105
106 return self._rc
107
108 def terminate(self):
109 if self.process is None:
110 return
111
112 # Reap the child process - it may already be finished
113 _ = self.return_code(timeout=0)
114
115 if self.process and self.process.is_running():
116 rcs = reap_process_group(self.process.pid, self.log)
117 self._rc = rcs.get(self.process.pid)
118
119 self.process = None
120
121 if self._rc is None:
122 # Something else reaped it before we had a chance, so let's just "guess" at an error code.
123 self._rc = -9
124
[end of airflow/task/task_runner/standard_task_runner.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/airflow/task/task_runner/standard_task_runner.py b/airflow/task/task_runner/standard_task_runner.py
--- a/airflow/task/task_runner/standard_task_runner.py
+++ b/airflow/task/task_runner/standard_task_runner.py
@@ -121,3 +121,11 @@
if self._rc is None:
# Something else reaped it before we had a chance, so let's just "guess" at an error code.
self._rc = -9
+
+ if self._rc == -9:
+ # If either we or psutil gives out a -9 return code, it likely means
+ # an OOM happened
+ self.log.error(
+ 'Job %s was killed before it finished (likely due to running out of memory)',
+ self._task_instance.job_id,
+ )
| {"golden_diff": "diff --git a/airflow/task/task_runner/standard_task_runner.py b/airflow/task/task_runner/standard_task_runner.py\n--- a/airflow/task/task_runner/standard_task_runner.py\n+++ b/airflow/task/task_runner/standard_task_runner.py\n@@ -121,3 +121,11 @@\n if self._rc is None:\n # Something else reaped it before we had a chance, so let's just \"guess\" at an error code.\n self._rc = -9\n+\n+ if self._rc == -9:\n+ # If either we or psutil gives out a -9 return code, it likely means\n+ # an OOM happened\n+ self.log.error(\n+ 'Job %s was killed before it finished (likely due to running out of memory)',\n+ self._task_instance.job_id,\n+ )\n", "issue": "Specify that exit code -9 is due to RAM\nRelated to https://github.com/apache/airflow/issues/9655\r\n\r\nIt would be nice to add a message when you get this error with some info, like 'This probably is because a lack of RAM' or something like that. \r\n\r\nI have found the code where the -9 is assigned but have no idea how to add a logging message. \r\n\r\n self.process = None\r\n\r\n if self._rc is None:\r\n # Something else reaped it before we had a chance, so let's just \"guess\" at an error code.\r\n self._rc = -9\n", "before_files": [{"content": "#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\"\"\"Standard task runner\"\"\"\nimport logging\nimport os\nfrom typing import Optional\n\nimport psutil\nfrom setproctitle import setproctitle # pylint: disable=no-name-in-module\n\nfrom airflow.settings import CAN_FORK\nfrom airflow.task.task_runner.base_task_runner import BaseTaskRunner\nfrom airflow.utils.process_utils import reap_process_group\n\n\nclass StandardTaskRunner(BaseTaskRunner):\n \"\"\"Standard runner for all tasks.\"\"\"\n\n def __init__(self, local_task_job):\n super().__init__(local_task_job)\n self._rc = None\n self.dag = local_task_job.task_instance.task.dag\n\n def start(self):\n if CAN_FORK and not self.run_as_user:\n self.process = self._start_by_fork()\n else:\n self.process = self._start_by_exec()\n\n def _start_by_exec(self):\n subprocess = self.run_command()\n return psutil.Process(subprocess.pid)\n\n def _start_by_fork(self): # pylint: disable=inconsistent-return-statements\n pid = os.fork()\n if pid:\n self.log.info(\"Started process %d to run task\", pid)\n return psutil.Process(pid)\n else:\n import signal\n\n from airflow import settings\n from airflow.cli.cli_parser import get_parser\n from airflow.sentry import Sentry\n\n signal.signal(signal.SIGINT, signal.SIG_DFL)\n signal.signal(signal.SIGTERM, signal.SIG_DFL)\n # Start a new process group\n os.setpgid(0, 0)\n\n # Force a new SQLAlchemy session. We can't share open DB handles\n # between process. The cli code will re-create this as part of its\n # normal startup\n settings.engine.pool.dispose()\n settings.engine.dispose()\n\n parser = get_parser()\n # [1:] - remove \"airflow\" from the start of the command\n args = parser.parse_args(self._command[1:])\n\n self.log.info('Running: %s', self._command)\n self.log.info('Job %s: Subtask %s', self._task_instance.job_id, self._task_instance.task_id)\n\n proc_title = \"airflow task runner: {0.dag_id} {0.task_id} {0.execution_date}\"\n if hasattr(args, \"job_id\"):\n proc_title += \" {0.job_id}\"\n setproctitle(proc_title.format(args))\n\n try:\n args.func(args, dag=self.dag)\n return_code = 0\n except Exception: # pylint: disable=broad-except\n return_code = 1\n finally:\n # Explicitly flush any pending exception to Sentry if enabled\n Sentry.flush()\n logging.shutdown()\n os._exit(return_code) # pylint: disable=protected-access\n\n def return_code(self, timeout: int = 0) -> Optional[int]:\n # We call this multiple times, but we can only wait on the process once\n if self._rc is not None or not self.process:\n return self._rc\n\n try:\n self._rc = self.process.wait(timeout=timeout)\n self.process = None\n except psutil.TimeoutExpired:\n pass\n\n return self._rc\n\n def terminate(self):\n if self.process is None:\n return\n\n # Reap the child process - it may already be finished\n _ = self.return_code(timeout=0)\n\n if self.process and self.process.is_running():\n rcs = reap_process_group(self.process.pid, self.log)\n self._rc = rcs.get(self.process.pid)\n\n self.process = None\n\n if self._rc is None:\n # Something else reaped it before we had a chance, so let's just \"guess\" at an error code.\n self._rc = -9\n", "path": "airflow/task/task_runner/standard_task_runner.py"}]} | 1,934 | 191 |
gh_patches_debug_16048 | rasdani/github-patches | git_diff | pwndbg__pwndbg-473 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TypeError exception raised on up/down commands.
### Description
When running the command `up` or `down` with an integer argument, the following exception is raised:
```
Traceback (most recent call last):
File "/home/david/.pwndbg/pwndbg/commands/__init__.py", line 109, in __call__
return self.function(*args, **kwargs)
File "/home/david/.pwndbg/pwndbg/commands/__init__.py", line 200, in _OnlyWhenRunning
return function(*a, **kw)
File "/home/david/.pwndbg/pwndbg/commands/ida.py", line 46, in up
for i in range(n):
TypeError: 'str' object cannot be interpreted as an integer
```
### Steps to reproduce
Open any binary and attempt to do `up 2` during debugging.
### My setup
pwndbg> version
Gdb: 7.12.0.20161007-git
Python: 3.6.5rc1 (default, Mar 14 2018, 06:54:23) [GCC 7.3.0]
Pwndbg: 1.0.0 build: f69b81e
Capstone: 4.0.1024
Unicorn: 1.0.1
</issue>
<code>
[start of pwndbg/commands/ida.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from __future__ import absolute_import
4 from __future__ import division
5 from __future__ import print_function
6 from __future__ import unicode_literals
7
8 import bz2
9 import datetime
10 import os
11
12 import gdb
13
14 import pwndbg.commands
15 import pwndbg.commands.context
16 import pwndbg.ida
17 import pwndbg.regs
18 from pwndbg.gdbutils.functions import GdbFunction
19
20
21 @pwndbg.commands.ParsedCommand
22 @pwndbg.commands.OnlyWhenRunning
23 @pwndbg.events.stop
24 @pwndbg.ida.withIDA
25 def j(*args):
26 """
27 Synchronize IDA's cursor with GDB
28 """
29 try:
30 pc = int(gdb.selected_frame().pc())
31 pwndbg.ida.Jump(pc)
32 except Exception:
33 pass
34
35
36
37 @pwndbg.commands.Command
38 @pwndbg.commands.OnlyWhenRunning
39 def up(n=1):
40 """
41 Select and print stack frame that called this one.
42 An argument says how many frames up to go.
43 """
44 f = gdb.selected_frame()
45
46 for i in range(n):
47 o = f.older()
48 if o:
49 o.select()
50
51 bt = pwndbg.commands.context.context_backtrace(with_banner=False)
52 print('\n'.join(bt))
53
54 j()
55
56
57 @pwndbg.commands.Command
58 @pwndbg.commands.OnlyWhenRunning
59 def down(n=1):
60 """
61 Select and print stack frame called by this one.
62 An argument says how many frames down to go.
63 """
64 f = gdb.selected_frame()
65
66 for i in range(n):
67 o = f.newer()
68 if o:
69 o.select()
70
71 bt = pwndbg.commands.context.context_backtrace(with_banner=False)
72 print('\n'.join(bt))
73
74 j()
75
76
77 @pwndbg.commands.Command
78 @pwndbg.ida.withIDA
79 def save_ida():
80 """Save the IDA database"""
81 if not pwndbg.ida.available():
82 return
83
84 path = pwndbg.ida.GetIdbPath()
85
86 # Need to handle emulated paths for Wine
87 if path.startswith('Z:'):
88 path = path[2:].replace('\\', '/')
89 pwndbg.ida.SaveBase(path)
90
91 basename = os.path.basename(path)
92 dirname = os.path.dirname(path)
93 backups = os.path.join(dirname, 'ida-backup')
94
95 if not os.path.isdir(backups):
96 os.mkdir(backups)
97
98 basename, ext = os.path.splitext(basename)
99 basename += '-%s' % datetime.datetime.now().isoformat()
100 basename += ext
101
102 # Windows doesn't like colons in paths
103 basename = basename.replace(':', '_')
104
105 full_path = os.path.join(backups, basename)
106
107 pwndbg.ida.SaveBase(full_path)
108
109 data = open(full_path, 'rb').read()
110
111 # Compress!
112 full_path_compressed = full_path + '.bz2'
113 bz2.BZ2File(full_path_compressed, 'w').write(data)
114
115 # Remove old version
116 os.unlink(full_path)
117
118 save_ida()
119
120
121 @GdbFunction()
122 def ida(name):
123
124 """Evaluate ida.LocByName() on the supplied value."""
125 name = name.string()
126 result = pwndbg.ida.LocByName(name)
127
128 if 0xffffe000 <= result <= 0xffffffff or 0xffffffffffffe000 <= result <= 0xffffffffffffffff:
129 raise ValueError("ida.LocByName(%r) == BADADDR" % name)
130
131 return result
132
[end of pwndbg/commands/ida.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pwndbg/commands/ida.py b/pwndbg/commands/ida.py
--- a/pwndbg/commands/ida.py
+++ b/pwndbg/commands/ida.py
@@ -43,10 +43,10 @@
"""
f = gdb.selected_frame()
- for i in range(n):
- o = f.older()
- if o:
- o.select()
+ for i in range(int(n)):
+ if f.older():
+ f = f.older()
+ f.select()
bt = pwndbg.commands.context.context_backtrace(with_banner=False)
print('\n'.join(bt))
@@ -63,10 +63,10 @@
"""
f = gdb.selected_frame()
- for i in range(n):
- o = f.newer()
- if o:
- o.select()
+ for i in range(int(n)):
+ if f.newer():
+ f = f.newer()
+ f.select()
bt = pwndbg.commands.context.context_backtrace(with_banner=False)
print('\n'.join(bt))
| {"golden_diff": "diff --git a/pwndbg/commands/ida.py b/pwndbg/commands/ida.py\n--- a/pwndbg/commands/ida.py\n+++ b/pwndbg/commands/ida.py\n@@ -43,10 +43,10 @@\n \"\"\"\n f = gdb.selected_frame()\n \n- for i in range(n):\n- o = f.older()\n- if o:\n- o.select()\n+ for i in range(int(n)):\n+ if f.older():\n+ f = f.older()\n+ f.select()\n \n bt = pwndbg.commands.context.context_backtrace(with_banner=False)\n print('\\n'.join(bt))\n@@ -63,10 +63,10 @@\n \"\"\"\n f = gdb.selected_frame()\n \n- for i in range(n):\n- o = f.newer()\n- if o:\n- o.select()\n+ for i in range(int(n)):\n+ if f.newer():\n+ f = f.newer()\n+ f.select()\n \n bt = pwndbg.commands.context.context_backtrace(with_banner=False)\n print('\\n'.join(bt))\n", "issue": "TypeError exception raised on up/down commands.\n### Description\r\n\r\nWhen running the command `up` or `down` with an integer argument, the following exception is raised:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/david/.pwndbg/pwndbg/commands/__init__.py\", line 109, in __call__\r\n return self.function(*args, **kwargs)\r\n File \"/home/david/.pwndbg/pwndbg/commands/__init__.py\", line 200, in _OnlyWhenRunning\r\n return function(*a, **kw)\r\n File \"/home/david/.pwndbg/pwndbg/commands/ida.py\", line 46, in up\r\n for i in range(n):\r\nTypeError: 'str' object cannot be interpreted as an integer\r\n```\r\n\r\n### Steps to reproduce\r\n\r\nOpen any binary and attempt to do `up 2` during debugging.\r\n\r\n### My setup\r\n\r\npwndbg> version\r\nGdb: 7.12.0.20161007-git\r\nPython: 3.6.5rc1 (default, Mar 14 2018, 06:54:23) [GCC 7.3.0]\r\nPwndbg: 1.0.0 build: f69b81e\r\nCapstone: 4.0.1024\r\nUnicorn: 1.0.1\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport bz2\nimport datetime\nimport os\n\nimport gdb\n\nimport pwndbg.commands\nimport pwndbg.commands.context\nimport pwndbg.ida\nimport pwndbg.regs\nfrom pwndbg.gdbutils.functions import GdbFunction\n\n\[email protected]\[email protected]\[email protected]\[email protected]\ndef j(*args):\n \"\"\"\n Synchronize IDA's cursor with GDB\n \"\"\"\n try:\n pc = int(gdb.selected_frame().pc())\n pwndbg.ida.Jump(pc)\n except Exception:\n pass\n\n\n\[email protected]\[email protected]\ndef up(n=1):\n \"\"\"\n Select and print stack frame that called this one.\n An argument says how many frames up to go.\n \"\"\"\n f = gdb.selected_frame()\n\n for i in range(n):\n o = f.older()\n if o:\n o.select()\n\n bt = pwndbg.commands.context.context_backtrace(with_banner=False)\n print('\\n'.join(bt))\n\n j()\n\n\[email protected]\[email protected]\ndef down(n=1):\n \"\"\"\n Select and print stack frame called by this one.\n An argument says how many frames down to go.\n \"\"\"\n f = gdb.selected_frame()\n\n for i in range(n):\n o = f.newer()\n if o:\n o.select()\n\n bt = pwndbg.commands.context.context_backtrace(with_banner=False)\n print('\\n'.join(bt))\n\n j()\n\n\[email protected]\[email protected]\ndef save_ida():\n \"\"\"Save the IDA database\"\"\"\n if not pwndbg.ida.available():\n return\n\n path = pwndbg.ida.GetIdbPath()\n\n # Need to handle emulated paths for Wine\n if path.startswith('Z:'):\n path = path[2:].replace('\\\\', '/')\n pwndbg.ida.SaveBase(path)\n\n basename = os.path.basename(path)\n dirname = os.path.dirname(path)\n backups = os.path.join(dirname, 'ida-backup')\n\n if not os.path.isdir(backups):\n os.mkdir(backups)\n\n basename, ext = os.path.splitext(basename)\n basename += '-%s' % datetime.datetime.now().isoformat()\n basename += ext\n\n # Windows doesn't like colons in paths\n basename = basename.replace(':', '_')\n\n full_path = os.path.join(backups, basename)\n\n pwndbg.ida.SaveBase(full_path)\n\n data = open(full_path, 'rb').read()\n\n # Compress!\n full_path_compressed = full_path + '.bz2'\n bz2.BZ2File(full_path_compressed, 'w').write(data)\n\n # Remove old version\n os.unlink(full_path)\n\nsave_ida()\n\n\n@GdbFunction()\ndef ida(name):\n\n \"\"\"Evaluate ida.LocByName() on the supplied value.\"\"\"\n name = name.string()\n result = pwndbg.ida.LocByName(name)\n\n if 0xffffe000 <= result <= 0xffffffff or 0xffffffffffffe000 <= result <= 0xffffffffffffffff:\n raise ValueError(\"ida.LocByName(%r) == BADADDR\" % name)\n\n return result\n", "path": "pwndbg/commands/ida.py"}]} | 1,946 | 251 |
gh_patches_debug_9390 | rasdani/github-patches | git_diff | pandas-dev__pandas-18844 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TST: make _skip_if into pytest decorators
- [X] _skip_if_32bit (#18693)
- [X] _skip_if_no_mpl (#18427)
- [X] _skip_if_mpl_1_5 (#18682)
- [x] _skip_if_no_scipy (#18794)
- [x] _skip_if_no_lzma (#18820)
- [x] _skip_if_no_xarray (#18814)
- [X] _skip_if_windows_python_3 (#18693)
- [X] _skip_if_windows (#18693)
- [x] _skip_if_no_pathlib (#18765)
- [x] _skip_if_no_localpath (#18765)
- [x] skip_if_no_ne (#18820)
- [x] _skip_if_has_locale (#18745)
- [x] _skip_if_not_us_locale (#18745)
- [ ] _skip_if_no_mock
- [x] _skip_if_no_ipython (#18814)
- [ ] skip_if_no_package
we should move the ``_skip_if_*`` functions out of ``pandas.util.testing`` to another (private module)
then we can add [skipif decorators](http://pytest.readthedocs.io/en/reorganize-docs/new-docs/user/skipping.html)
and use like this
```
@skip_if_windows_py3
def test_.......():
```
rather than calling ``tm._skip_if_windows_py390`` in the body of the function (sometimes you also need to do that, so we leave the functions themselves as well).
this makes much more idiomatic and readable pytest code and removes the need to roll your own when using the decorator.
</issue>
<code>
[start of pandas/conftest.py]
1 import pytest
2
3 from distutils.version import LooseVersion
4 import numpy
5 import pandas
6 import pandas.util.testing as tm
7 import dateutil
8
9
10 def pytest_addoption(parser):
11 parser.addoption("--skip-slow", action="store_true",
12 help="skip slow tests")
13 parser.addoption("--skip-network", action="store_true",
14 help="skip network tests")
15 parser.addoption("--run-high-memory", action="store_true",
16 help="run high memory tests")
17 parser.addoption("--only-slow", action="store_true",
18 help="run only slow tests")
19
20
21 def pytest_runtest_setup(item):
22 if 'slow' in item.keywords and item.config.getoption("--skip-slow"):
23 pytest.skip("skipping due to --skip-slow")
24
25 if 'slow' not in item.keywords and item.config.getoption("--only-slow"):
26 pytest.skip("skipping due to --only-slow")
27
28 if 'network' in item.keywords and item.config.getoption("--skip-network"):
29 pytest.skip("skipping due to --skip-network")
30
31 if 'high_memory' in item.keywords and not item.config.getoption(
32 "--run-high-memory"):
33 pytest.skip(
34 "skipping high memory test since --run-high-memory was not set")
35
36
37 # Configurations for all tests and all test modules
38
39 @pytest.fixture(autouse=True)
40 def configure_tests():
41 pandas.set_option('chained_assignment', 'raise')
42
43
44 # For running doctests: make np and pd names available
45
46 @pytest.fixture(autouse=True)
47 def add_imports(doctest_namespace):
48 doctest_namespace['np'] = numpy
49 doctest_namespace['pd'] = pandas
50
51
52 @pytest.fixture(params=['bsr', 'coo', 'csc', 'csr', 'dia', 'dok', 'lil'])
53 def spmatrix(request):
54 tm._skip_if_no_scipy()
55 from scipy import sparse
56 return getattr(sparse, request.param + '_matrix')
57
58
59 @pytest.fixture
60 def ip():
61 """
62 Get an instance of IPython.InteractiveShell.
63
64 Will raise a skip if IPython is not installed.
65 """
66
67 pytest.importorskip('IPython', minversion="6.0.0")
68 from IPython.core.interactiveshell import InteractiveShell
69 return InteractiveShell()
70
71
72 is_dateutil_le_261 = pytest.mark.skipif(
73 LooseVersion(dateutil.__version__) > LooseVersion('2.6.1'),
74 reason="dateutil api change version")
75 is_dateutil_gt_261 = pytest.mark.skipif(
76 LooseVersion(dateutil.__version__) <= LooseVersion('2.6.1'),
77 reason="dateutil stable version")
78
[end of pandas/conftest.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pandas/conftest.py b/pandas/conftest.py
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -3,7 +3,6 @@
from distutils.version import LooseVersion
import numpy
import pandas
-import pandas.util.testing as tm
import dateutil
@@ -51,7 +50,6 @@
@pytest.fixture(params=['bsr', 'coo', 'csc', 'csr', 'dia', 'dok', 'lil'])
def spmatrix(request):
- tm._skip_if_no_scipy()
from scipy import sparse
return getattr(sparse, request.param + '_matrix')
| {"golden_diff": "diff --git a/pandas/conftest.py b/pandas/conftest.py\n--- a/pandas/conftest.py\n+++ b/pandas/conftest.py\n@@ -3,7 +3,6 @@\n from distutils.version import LooseVersion\n import numpy\n import pandas\n-import pandas.util.testing as tm\n import dateutil\n \n \n@@ -51,7 +50,6 @@\n \n @pytest.fixture(params=['bsr', 'coo', 'csc', 'csr', 'dia', 'dok', 'lil'])\n def spmatrix(request):\n- tm._skip_if_no_scipy()\n from scipy import sparse\n return getattr(sparse, request.param + '_matrix')\n", "issue": "TST: make _skip_if into pytest decorators\n- [X] _skip_if_32bit (#18693)\r\n- [X] _skip_if_no_mpl (#18427)\r\n- [X] _skip_if_mpl_1_5 (#18682)\r\n- [x] _skip_if_no_scipy (#18794)\r\n- [x] _skip_if_no_lzma (#18820)\r\n- [x] _skip_if_no_xarray (#18814)\r\n- [X] _skip_if_windows_python_3 (#18693)\r\n- [X] _skip_if_windows (#18693)\r\n- [x] _skip_if_no_pathlib (#18765) \r\n- [x] _skip_if_no_localpath (#18765)\r\n- [x] skip_if_no_ne (#18820)\r\n- [x] _skip_if_has_locale (#18745) \r\n- [x] _skip_if_not_us_locale (#18745)\r\n- [ ] _skip_if_no_mock\r\n- [x] _skip_if_no_ipython (#18814)\r\n- [ ] skip_if_no_package\r\n\r\nwe should move the ``_skip_if_*`` functions out of ``pandas.util.testing`` to another (private module)\r\n\r\nthen we can add [skipif decorators](http://pytest.readthedocs.io/en/reorganize-docs/new-docs/user/skipping.html)\r\n\r\nand use like this\r\n\r\n```\r\n@skip_if_windows_py3\r\ndef test_.......():\r\n```\r\n\r\nrather than calling ``tm._skip_if_windows_py390`` in the body of the function (sometimes you also need to do that, so we leave the functions themselves as well).\r\n\r\nthis makes much more idiomatic and readable pytest code and removes the need to roll your own when using the decorator.\r\n\n", "before_files": [{"content": "import pytest\n\nfrom distutils.version import LooseVersion\nimport numpy\nimport pandas\nimport pandas.util.testing as tm\nimport dateutil\n\n\ndef pytest_addoption(parser):\n parser.addoption(\"--skip-slow\", action=\"store_true\",\n help=\"skip slow tests\")\n parser.addoption(\"--skip-network\", action=\"store_true\",\n help=\"skip network tests\")\n parser.addoption(\"--run-high-memory\", action=\"store_true\",\n help=\"run high memory tests\")\n parser.addoption(\"--only-slow\", action=\"store_true\",\n help=\"run only slow tests\")\n\n\ndef pytest_runtest_setup(item):\n if 'slow' in item.keywords and item.config.getoption(\"--skip-slow\"):\n pytest.skip(\"skipping due to --skip-slow\")\n\n if 'slow' not in item.keywords and item.config.getoption(\"--only-slow\"):\n pytest.skip(\"skipping due to --only-slow\")\n\n if 'network' in item.keywords and item.config.getoption(\"--skip-network\"):\n pytest.skip(\"skipping due to --skip-network\")\n\n if 'high_memory' in item.keywords and not item.config.getoption(\n \"--run-high-memory\"):\n pytest.skip(\n \"skipping high memory test since --run-high-memory was not set\")\n\n\n# Configurations for all tests and all test modules\n\[email protected](autouse=True)\ndef configure_tests():\n pandas.set_option('chained_assignment', 'raise')\n\n\n# For running doctests: make np and pd names available\n\[email protected](autouse=True)\ndef add_imports(doctest_namespace):\n doctest_namespace['np'] = numpy\n doctest_namespace['pd'] = pandas\n\n\[email protected](params=['bsr', 'coo', 'csc', 'csr', 'dia', 'dok', 'lil'])\ndef spmatrix(request):\n tm._skip_if_no_scipy()\n from scipy import sparse\n return getattr(sparse, request.param + '_matrix')\n\n\[email protected]\ndef ip():\n \"\"\"\n Get an instance of IPython.InteractiveShell.\n\n Will raise a skip if IPython is not installed.\n \"\"\"\n\n pytest.importorskip('IPython', minversion=\"6.0.0\")\n from IPython.core.interactiveshell import InteractiveShell\n return InteractiveShell()\n\n\nis_dateutil_le_261 = pytest.mark.skipif(\n LooseVersion(dateutil.__version__) > LooseVersion('2.6.1'),\n reason=\"dateutil api change version\")\nis_dateutil_gt_261 = pytest.mark.skipif(\n LooseVersion(dateutil.__version__) <= LooseVersion('2.6.1'),\n reason=\"dateutil stable version\")\n", "path": "pandas/conftest.py"}]} | 1,674 | 149 |
gh_patches_debug_28274 | rasdani/github-patches | git_diff | certbot__certbot-427 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
nginxparser does not recognize 'if' statements
E.g., this is unparseable by nginxparser:
```
if ($http_origin ~* ^https://www\.example\.com) {
add_header Access-Control-Allow-Origin "$http_origin";
}
```
</issue>
<code>
[start of letsencrypt_nginx/nginxparser.py]
1 """Very low-level nginx config parser based on pyparsing."""
2 import string
3
4 from pyparsing import (
5 Literal, White, Word, alphanums, CharsNotIn, Forward, Group,
6 Optional, OneOrMore, ZeroOrMore, pythonStyleComment)
7
8
9 class RawNginxParser(object):
10 # pylint: disable=expression-not-assigned
11 """A class that parses nginx configuration with pyparsing."""
12
13 # constants
14 left_bracket = Literal("{").suppress()
15 right_bracket = Literal("}").suppress()
16 semicolon = Literal(";").suppress()
17 space = White().suppress()
18 key = Word(alphanums + "_/")
19 value = CharsNotIn("{};,")
20 location = CharsNotIn("{};," + string.whitespace)
21 # modifier for location uri [ = | ~ | ~* | ^~ ]
22 modifier = Literal("=") | Literal("~*") | Literal("~") | Literal("^~")
23
24 # rules
25 assignment = (key + Optional(space + value) + semicolon)
26 block = Forward()
27
28 block << Group(
29 Group(key + Optional(space + modifier) + Optional(space + location))
30 + left_bracket
31 + Group(ZeroOrMore(Group(assignment) | block))
32 + right_bracket)
33
34 script = OneOrMore(Group(assignment) | block).ignore(pythonStyleComment)
35
36 def __init__(self, source):
37 self.source = source
38
39 def parse(self):
40 """Returns the parsed tree."""
41 return self.script.parseString(self.source)
42
43 def as_list(self):
44 """Returns the parsed tree as a list."""
45 return self.parse().asList()
46
47
48 class RawNginxDumper(object):
49 # pylint: disable=too-few-public-methods
50 """A class that dumps nginx configuration from the provided tree."""
51 def __init__(self, blocks, indentation=4):
52 self.blocks = blocks
53 self.indentation = indentation
54
55 def __iter__(self, blocks=None, current_indent=0, spacer=' '):
56 """Iterates the dumped nginx content."""
57 blocks = blocks or self.blocks
58 for key, values in blocks:
59 if current_indent:
60 yield spacer
61 indentation = spacer * current_indent
62 if isinstance(key, list):
63 yield indentation + spacer.join(key) + ' {'
64 for parameter in values:
65 if isinstance(parameter[0], list):
66 dumped = self.__iter__(
67 [parameter],
68 current_indent + self.indentation)
69 for line in dumped:
70 yield line
71 else:
72 dumped = spacer.join(parameter) + ';'
73 yield spacer * (
74 current_indent + self.indentation) + dumped
75
76 yield indentation + '}'
77 else:
78 yield spacer * current_indent + key + spacer + values + ';'
79
80 def as_string(self):
81 """Return the parsed block as a string."""
82 return '\n'.join(self)
83
84
85 # Shortcut functions to respect Python's serialization interface
86 # (like pyyaml, picker or json)
87
88 def loads(source):
89 """Parses from a string.
90
91 :param str souce: The string to parse
92 :returns: The parsed tree
93 :rtype: list
94
95 """
96 return RawNginxParser(source).as_list()
97
98
99 def load(_file):
100 """Parses from a file.
101
102 :param file _file: The file to parse
103 :returns: The parsed tree
104 :rtype: list
105
106 """
107 return loads(_file.read())
108
109
110 def dumps(blocks, indentation=4):
111 """Dump to a string.
112
113 :param list block: The parsed tree
114 :param int indentation: The number of spaces to indent
115 :rtype: str
116
117 """
118 return RawNginxDumper(blocks, indentation).as_string()
119
120
121 def dump(blocks, _file, indentation=4):
122 """Dump to a file.
123
124 :param list block: The parsed tree
125 :param file _file: The file to dump to
126 :param int indentation: The number of spaces to indent
127 :rtype: NoneType
128
129 """
130 return _file.write(dumps(blocks, indentation))
131
[end of letsencrypt_nginx/nginxparser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/letsencrypt_nginx/nginxparser.py b/letsencrypt_nginx/nginxparser.py
--- a/letsencrypt_nginx/nginxparser.py
+++ b/letsencrypt_nginx/nginxparser.py
@@ -3,7 +3,7 @@
from pyparsing import (
Literal, White, Word, alphanums, CharsNotIn, Forward, Group,
- Optional, OneOrMore, ZeroOrMore, pythonStyleComment)
+ Optional, OneOrMore, Regex, ZeroOrMore, pythonStyleComment)
class RawNginxParser(object):
@@ -16,17 +16,21 @@
semicolon = Literal(";").suppress()
space = White().suppress()
key = Word(alphanums + "_/")
- value = CharsNotIn("{};,")
+ # Matches anything that is not a special character AND any chars in single
+ # or double quotes
+ value = Regex(r"((\".*\")?(\'.*\')?[^\{\};,]?)+")
location = CharsNotIn("{};," + string.whitespace)
# modifier for location uri [ = | ~ | ~* | ^~ ]
modifier = Literal("=") | Literal("~*") | Literal("~") | Literal("^~")
# rules
assignment = (key + Optional(space + value) + semicolon)
+ location_statement = Optional(space + modifier) + Optional(space + location)
+ if_statement = Literal("if") + space + Regex(r"\(.+\)") + space
block = Forward()
block << Group(
- Group(key + Optional(space + modifier) + Optional(space + location))
+ (Group(key + location_statement) ^ Group(if_statement))
+ left_bracket
+ Group(ZeroOrMore(Group(assignment) | block))
+ right_bracket)
| {"golden_diff": "diff --git a/letsencrypt_nginx/nginxparser.py b/letsencrypt_nginx/nginxparser.py\n--- a/letsencrypt_nginx/nginxparser.py\n+++ b/letsencrypt_nginx/nginxparser.py\n@@ -3,7 +3,7 @@\n \n from pyparsing import (\n Literal, White, Word, alphanums, CharsNotIn, Forward, Group,\n- Optional, OneOrMore, ZeroOrMore, pythonStyleComment)\n+ Optional, OneOrMore, Regex, ZeroOrMore, pythonStyleComment)\n \n \n class RawNginxParser(object):\n@@ -16,17 +16,21 @@\n semicolon = Literal(\";\").suppress()\n space = White().suppress()\n key = Word(alphanums + \"_/\")\n- value = CharsNotIn(\"{};,\")\n+ # Matches anything that is not a special character AND any chars in single\n+ # or double quotes\n+ value = Regex(r\"((\\\".*\\\")?(\\'.*\\')?[^\\{\\};,]?)+\")\n location = CharsNotIn(\"{};,\" + string.whitespace)\n # modifier for location uri [ = | ~ | ~* | ^~ ]\n modifier = Literal(\"=\") | Literal(\"~*\") | Literal(\"~\") | Literal(\"^~\")\n \n # rules\n assignment = (key + Optional(space + value) + semicolon)\n+ location_statement = Optional(space + modifier) + Optional(space + location)\n+ if_statement = Literal(\"if\") + space + Regex(r\"\\(.+\\)\") + space\n block = Forward()\n \n block << Group(\n- Group(key + Optional(space + modifier) + Optional(space + location))\n+ (Group(key + location_statement) ^ Group(if_statement))\n + left_bracket\n + Group(ZeroOrMore(Group(assignment) | block))\n + right_bracket)\n", "issue": "nginxparser does not recognize 'if' statements\nE.g., this is unparseable by nginxparser:\n\n```\nif ($http_origin ~* ^https://www\\.example\\.com) {\n add_header Access-Control-Allow-Origin \"$http_origin\";\n}\n```\n\n", "before_files": [{"content": "\"\"\"Very low-level nginx config parser based on pyparsing.\"\"\"\nimport string\n\nfrom pyparsing import (\n Literal, White, Word, alphanums, CharsNotIn, Forward, Group,\n Optional, OneOrMore, ZeroOrMore, pythonStyleComment)\n\n\nclass RawNginxParser(object):\n # pylint: disable=expression-not-assigned\n \"\"\"A class that parses nginx configuration with pyparsing.\"\"\"\n\n # constants\n left_bracket = Literal(\"{\").suppress()\n right_bracket = Literal(\"}\").suppress()\n semicolon = Literal(\";\").suppress()\n space = White().suppress()\n key = Word(alphanums + \"_/\")\n value = CharsNotIn(\"{};,\")\n location = CharsNotIn(\"{};,\" + string.whitespace)\n # modifier for location uri [ = | ~ | ~* | ^~ ]\n modifier = Literal(\"=\") | Literal(\"~*\") | Literal(\"~\") | Literal(\"^~\")\n\n # rules\n assignment = (key + Optional(space + value) + semicolon)\n block = Forward()\n\n block << Group(\n Group(key + Optional(space + modifier) + Optional(space + location))\n + left_bracket\n + Group(ZeroOrMore(Group(assignment) | block))\n + right_bracket)\n\n script = OneOrMore(Group(assignment) | block).ignore(pythonStyleComment)\n\n def __init__(self, source):\n self.source = source\n\n def parse(self):\n \"\"\"Returns the parsed tree.\"\"\"\n return self.script.parseString(self.source)\n\n def as_list(self):\n \"\"\"Returns the parsed tree as a list.\"\"\"\n return self.parse().asList()\n\n\nclass RawNginxDumper(object):\n # pylint: disable=too-few-public-methods\n \"\"\"A class that dumps nginx configuration from the provided tree.\"\"\"\n def __init__(self, blocks, indentation=4):\n self.blocks = blocks\n self.indentation = indentation\n\n def __iter__(self, blocks=None, current_indent=0, spacer=' '):\n \"\"\"Iterates the dumped nginx content.\"\"\"\n blocks = blocks or self.blocks\n for key, values in blocks:\n if current_indent:\n yield spacer\n indentation = spacer * current_indent\n if isinstance(key, list):\n yield indentation + spacer.join(key) + ' {'\n for parameter in values:\n if isinstance(parameter[0], list):\n dumped = self.__iter__(\n [parameter],\n current_indent + self.indentation)\n for line in dumped:\n yield line\n else:\n dumped = spacer.join(parameter) + ';'\n yield spacer * (\n current_indent + self.indentation) + dumped\n\n yield indentation + '}'\n else:\n yield spacer * current_indent + key + spacer + values + ';'\n\n def as_string(self):\n \"\"\"Return the parsed block as a string.\"\"\"\n return '\\n'.join(self)\n\n\n# Shortcut functions to respect Python's serialization interface\n# (like pyyaml, picker or json)\n\ndef loads(source):\n \"\"\"Parses from a string.\n\n :param str souce: The string to parse\n :returns: The parsed tree\n :rtype: list\n\n \"\"\"\n return RawNginxParser(source).as_list()\n\n\ndef load(_file):\n \"\"\"Parses from a file.\n\n :param file _file: The file to parse\n :returns: The parsed tree\n :rtype: list\n\n \"\"\"\n return loads(_file.read())\n\n\ndef dumps(blocks, indentation=4):\n \"\"\"Dump to a string.\n\n :param list block: The parsed tree\n :param int indentation: The number of spaces to indent\n :rtype: str\n\n \"\"\"\n return RawNginxDumper(blocks, indentation).as_string()\n\n\ndef dump(blocks, _file, indentation=4):\n \"\"\"Dump to a file.\n\n :param list block: The parsed tree\n :param file _file: The file to dump to\n :param int indentation: The number of spaces to indent\n :rtype: NoneType\n\n \"\"\"\n return _file.write(dumps(blocks, indentation))\n", "path": "letsencrypt_nginx/nginxparser.py"}]} | 1,780 | 400 |
gh_patches_debug_238 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-6117 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Warn new users about the lazy creation of connections (when requests are expected to be served in the script fully and only)
#### Problem Description
The [example script](https://docs.mitmproxy.org/stable/addons-examples/#http-reply-from-proxy) for not sending any data to the server does not prevent mitmproxy from **establishing a connection** to the server.
For which reason is said connection established when no data has to be sent to this host right away and possibly never in the future?
I trusted mitmproxy to **not send _any_ data, as stated**, but I had to discover (the hard way) that **that's not the case**.
I used mitmproxy in an environment where it required to stay silent, but it wasn't compliant.
Could you please consider warning new users about this behavior?
<strike>Is there an easy way to prevent establishing connections?
Is it planned to do so on default in this case?</strike>
*EDIT*: Trying to prevent connections by rerouting the connection to a closed port killed the flow for the client. Routing to a different host with invalid certificate worked though, warning me in the event log and suggesting setting connection strategy to lazy and it worked.
#### Steps to reproduce the behavior:
1. Load the example script
2. Have the client request examle.com
3. View the event log
#### System Information
Mitmproxy: 9.0.1
Python: 3.10.6
OpenSSL: OpenSSL 3.0.7 1 Nov 2022
Platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.35
</issue>
<code>
[start of examples/addons/http-reply-from-proxy.py]
1 """Send a reply from the proxy without sending any data to the remote server."""
2 from mitmproxy import http
3
4
5 def request(flow: http.HTTPFlow) -> None:
6 if flow.request.pretty_url == "http://example.com/path":
7 flow.response = http.Response.make(
8 200, # (optional) status code
9 b"Hello World", # (optional) content
10 {"Content-Type": "text/html"}, # (optional) headers
11 )
12
[end of examples/addons/http-reply-from-proxy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/addons/http-reply-from-proxy.py b/examples/addons/http-reply-from-proxy.py
--- a/examples/addons/http-reply-from-proxy.py
+++ b/examples/addons/http-reply-from-proxy.py
@@ -1,4 +1,4 @@
-"""Send a reply from the proxy without sending any data to the remote server."""
+"""Send a reply from the proxy without sending the request to the remote server."""
from mitmproxy import http
| {"golden_diff": "diff --git a/examples/addons/http-reply-from-proxy.py b/examples/addons/http-reply-from-proxy.py\n--- a/examples/addons/http-reply-from-proxy.py\n+++ b/examples/addons/http-reply-from-proxy.py\n@@ -1,4 +1,4 @@\n-\"\"\"Send a reply from the proxy without sending any data to the remote server.\"\"\"\n+\"\"\"Send a reply from the proxy without sending the request to the remote server.\"\"\"\n from mitmproxy import http\n", "issue": "Warn new users about the lazy creation of connections (when requests are expected to be served in the script fully and only)\n#### Problem Description\r\nThe [example script](https://docs.mitmproxy.org/stable/addons-examples/#http-reply-from-proxy) for not sending any data to the server does not prevent mitmproxy from **establishing a connection** to the server.\r\nFor which reason is said connection established when no data has to be sent to this host right away and possibly never in the future?\r\nI trusted mitmproxy to **not send _any_ data, as stated**, but I had to discover (the hard way) that **that's not the case**.\r\nI used mitmproxy in an environment where it required to stay silent, but it wasn't compliant.\r\n\r\nCould you please consider warning new users about this behavior?\r\n<strike>Is there an easy way to prevent establishing connections?\r\nIs it planned to do so on default in this case?</strike>\r\n*EDIT*: Trying to prevent connections by rerouting the connection to a closed port killed the flow for the client. Routing to a different host with invalid certificate worked though, warning me in the event log and suggesting setting connection strategy to lazy and it worked.\r\n\r\n#### Steps to reproduce the behavior:\r\n1. Load the example script\r\n2. Have the client request examle.com\r\n3. View the event log\r\n\r\n#### System Information\r\nMitmproxy: 9.0.1\r\nPython: 3.10.6\r\nOpenSSL: OpenSSL 3.0.7 1 Nov 2022\r\nPlatform: Linux-5.15.0-71-generic-x86_64-with-glibc2.35\r\n\r\n\n", "before_files": [{"content": "\"\"\"Send a reply from the proxy without sending any data to the remote server.\"\"\"\nfrom mitmproxy import http\n\n\ndef request(flow: http.HTTPFlow) -> None:\n if flow.request.pretty_url == \"http://example.com/path\":\n flow.response = http.Response.make(\n 200, # (optional) status code\n b\"Hello World\", # (optional) content\n {\"Content-Type\": \"text/html\"}, # (optional) headers\n )\n", "path": "examples/addons/http-reply-from-proxy.py"}]} | 1,018 | 96 |
gh_patches_debug_65366 | rasdani/github-patches | git_diff | PaddlePaddle__models-399 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
使用 generate_sequence_by_rnn_lm 进行train的时候报错
在 generate_sequence_by_rnn_lm 这个模型下运行 train.py 的时候,当测试文件的路径不存在的时候会报错。错误的原因是把conf写成了config。错误行数是train.py 的112行
</issue>
<code>
[start of generate_sequence_by_rnn_lm/train.py]
1 import os
2 import sys
3 import gzip
4
5 import paddle.v2 as paddle
6 import config as conf
7 import reader
8 from network_conf import rnn_lm
9 from utils import logger, build_dict, load_dict
10
11
12 def train(topology,
13 train_reader,
14 test_reader,
15 model_save_dir="models",
16 num_passes=10):
17 """
18 train model.
19
20 :param topology: cost layer of the model to train.
21 :type topology: LayerOuput
22 :param train_reader: train data reader.
23 :type trainer_reader: collections.Iterable
24 :param test_reader: test data reader.
25 :type test_reader: collections.Iterable
26 :param model_save_dir: path to save the trained model
27 :type model_save_dir: str
28 :param num_passes: number of epoch
29 :type num_passes: int
30 """
31 if not os.path.exists(model_save_dir):
32 os.mkdir(model_save_dir)
33
34 # initialize PaddlePaddle
35 paddle.init(use_gpu=conf.use_gpu, trainer_count=conf.trainer_count)
36
37 # create optimizer
38 adam_optimizer = paddle.optimizer.Adam(
39 learning_rate=1e-3,
40 regularization=paddle.optimizer.L2Regularization(rate=1e-3),
41 model_average=paddle.optimizer.ModelAverage(
42 average_window=0.5, max_average_window=10000))
43
44 # create parameters
45 parameters = paddle.parameters.create(topology)
46 # create trainer
47 trainer = paddle.trainer.SGD(
48 cost=topology, parameters=parameters, update_equation=adam_optimizer)
49
50 # define the event_handler callback
51 def event_handler(event):
52 if isinstance(event, paddle.event.EndIteration):
53 if not event.batch_id % conf.log_period:
54 logger.info("Pass %d, Batch %d, Cost %f, %s" % (
55 event.pass_id, event.batch_id, event.cost, event.metrics))
56
57 if (not event.batch_id %
58 conf.save_period_by_batches) and event.batch_id:
59 save_name = os.path.join(model_save_dir,
60 "rnn_lm_pass_%05d_batch_%03d.tar.gz" %
61 (event.pass_id, event.batch_id))
62 with gzip.open(save_name, "w") as f:
63 trainer.save_parameter_to_tar(f)
64
65 if isinstance(event, paddle.event.EndPass):
66 if test_reader is not None:
67 result = trainer.test(reader=test_reader)
68 logger.info("Test with Pass %d, %s" %
69 (event.pass_id, result.metrics))
70 save_name = os.path.join(model_save_dir, "rnn_lm_pass_%05d.tar.gz" %
71 (event.pass_id))
72 with gzip.open(save_name, "w") as f:
73 trainer.save_parameter_to_tar(f)
74
75 logger.info("start training...")
76 trainer.train(
77 reader=train_reader, event_handler=event_handler, num_passes=num_passes)
78
79 logger.info("Training is finished.")
80
81
82 def main():
83 # prepare vocab
84 if not (os.path.exists(conf.vocab_file) and
85 os.path.getsize(conf.vocab_file)):
86 logger.info(("word dictionary does not exist, "
87 "build it from the training data"))
88 build_dict(conf.train_file, conf.vocab_file, conf.max_word_num,
89 conf.cutoff_word_fre)
90 logger.info("load word dictionary.")
91 word_dict = load_dict(conf.vocab_file)
92 logger.info("dictionay size = %d" % (len(word_dict)))
93
94 cost = rnn_lm(
95 len(word_dict), conf.emb_dim, conf.hidden_size, conf.stacked_rnn_num,
96 conf.rnn_type)
97
98 # define reader
99 reader_args = {
100 "file_name": conf.train_file,
101 "word_dict": word_dict,
102 }
103 train_reader = paddle.batch(
104 paddle.reader.shuffle(
105 reader.rnn_reader(**reader_args), buf_size=102400),
106 batch_size=conf.batch_size)
107 test_reader = None
108 if os.path.exists(conf.test_file) and os.path.getsize(conf.test_file):
109 test_reader = paddle.batch(
110 paddle.reader.shuffle(
111 reader.rnn_reader(**reader_args), buf_size=65536),
112 batch_size=config.batch_size)
113
114 train(
115 topology=cost,
116 train_reader=train_reader,
117 test_reader=test_reader,
118 model_save_dir=conf.model_save_dir,
119 num_passes=conf.num_passes)
120
121
122 if __name__ == "__main__":
123 main()
124
[end of generate_sequence_by_rnn_lm/train.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/generate_sequence_by_rnn_lm/train.py b/generate_sequence_by_rnn_lm/train.py
--- a/generate_sequence_by_rnn_lm/train.py
+++ b/generate_sequence_by_rnn_lm/train.py
@@ -109,7 +109,7 @@
test_reader = paddle.batch(
paddle.reader.shuffle(
reader.rnn_reader(**reader_args), buf_size=65536),
- batch_size=config.batch_size)
+ batch_size=conf.batch_size)
train(
topology=cost,
| {"golden_diff": "diff --git a/generate_sequence_by_rnn_lm/train.py b/generate_sequence_by_rnn_lm/train.py\n--- a/generate_sequence_by_rnn_lm/train.py\n+++ b/generate_sequence_by_rnn_lm/train.py\n@@ -109,7 +109,7 @@\n test_reader = paddle.batch(\n paddle.reader.shuffle(\n reader.rnn_reader(**reader_args), buf_size=65536),\n- batch_size=config.batch_size)\n+ batch_size=conf.batch_size)\n \n train(\n topology=cost,\n", "issue": "\u4f7f\u7528 generate_sequence_by_rnn_lm \u8fdb\u884ctrain\u7684\u65f6\u5019\u62a5\u9519\n\u5728 generate_sequence_by_rnn_lm \u8fd9\u4e2a\u6a21\u578b\u4e0b\u8fd0\u884c train.py \u7684\u65f6\u5019\uff0c\u5f53\u6d4b\u8bd5\u6587\u4ef6\u7684\u8def\u5f84\u4e0d\u5b58\u5728\u7684\u65f6\u5019\u4f1a\u62a5\u9519\u3002\u9519\u8bef\u7684\u539f\u56e0\u662f\u628aconf\u5199\u6210\u4e86config\u3002\u9519\u8bef\u884c\u6570\u662ftrain.py \u7684112\u884c\n", "before_files": [{"content": "import os\nimport sys\nimport gzip\n\nimport paddle.v2 as paddle\nimport config as conf\nimport reader\nfrom network_conf import rnn_lm\nfrom utils import logger, build_dict, load_dict\n\n\ndef train(topology,\n train_reader,\n test_reader,\n model_save_dir=\"models\",\n num_passes=10):\n \"\"\"\n train model.\n\n :param topology: cost layer of the model to train.\n :type topology: LayerOuput\n :param train_reader: train data reader.\n :type trainer_reader: collections.Iterable\n :param test_reader: test data reader.\n :type test_reader: collections.Iterable\n :param model_save_dir: path to save the trained model\n :type model_save_dir: str\n :param num_passes: number of epoch\n :type num_passes: int\n \"\"\"\n if not os.path.exists(model_save_dir):\n os.mkdir(model_save_dir)\n\n # initialize PaddlePaddle\n paddle.init(use_gpu=conf.use_gpu, trainer_count=conf.trainer_count)\n\n # create optimizer\n adam_optimizer = paddle.optimizer.Adam(\n learning_rate=1e-3,\n regularization=paddle.optimizer.L2Regularization(rate=1e-3),\n model_average=paddle.optimizer.ModelAverage(\n average_window=0.5, max_average_window=10000))\n\n # create parameters\n parameters = paddle.parameters.create(topology)\n # create trainer\n trainer = paddle.trainer.SGD(\n cost=topology, parameters=parameters, update_equation=adam_optimizer)\n\n # define the event_handler callback\n def event_handler(event):\n if isinstance(event, paddle.event.EndIteration):\n if not event.batch_id % conf.log_period:\n logger.info(\"Pass %d, Batch %d, Cost %f, %s\" % (\n event.pass_id, event.batch_id, event.cost, event.metrics))\n\n if (not event.batch_id %\n conf.save_period_by_batches) and event.batch_id:\n save_name = os.path.join(model_save_dir,\n \"rnn_lm_pass_%05d_batch_%03d.tar.gz\" %\n (event.pass_id, event.batch_id))\n with gzip.open(save_name, \"w\") as f:\n trainer.save_parameter_to_tar(f)\n\n if isinstance(event, paddle.event.EndPass):\n if test_reader is not None:\n result = trainer.test(reader=test_reader)\n logger.info(\"Test with Pass %d, %s\" %\n (event.pass_id, result.metrics))\n save_name = os.path.join(model_save_dir, \"rnn_lm_pass_%05d.tar.gz\" %\n (event.pass_id))\n with gzip.open(save_name, \"w\") as f:\n trainer.save_parameter_to_tar(f)\n\n logger.info(\"start training...\")\n trainer.train(\n reader=train_reader, event_handler=event_handler, num_passes=num_passes)\n\n logger.info(\"Training is finished.\")\n\n\ndef main():\n # prepare vocab\n if not (os.path.exists(conf.vocab_file) and\n os.path.getsize(conf.vocab_file)):\n logger.info((\"word dictionary does not exist, \"\n \"build it from the training data\"))\n build_dict(conf.train_file, conf.vocab_file, conf.max_word_num,\n conf.cutoff_word_fre)\n logger.info(\"load word dictionary.\")\n word_dict = load_dict(conf.vocab_file)\n logger.info(\"dictionay size = %d\" % (len(word_dict)))\n\n cost = rnn_lm(\n len(word_dict), conf.emb_dim, conf.hidden_size, conf.stacked_rnn_num,\n conf.rnn_type)\n\n # define reader\n reader_args = {\n \"file_name\": conf.train_file,\n \"word_dict\": word_dict,\n }\n train_reader = paddle.batch(\n paddle.reader.shuffle(\n reader.rnn_reader(**reader_args), buf_size=102400),\n batch_size=conf.batch_size)\n test_reader = None\n if os.path.exists(conf.test_file) and os.path.getsize(conf.test_file):\n test_reader = paddle.batch(\n paddle.reader.shuffle(\n reader.rnn_reader(**reader_args), buf_size=65536),\n batch_size=config.batch_size)\n\n train(\n topology=cost,\n train_reader=train_reader,\n test_reader=test_reader,\n model_save_dir=conf.model_save_dir,\n num_passes=conf.num_passes)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "generate_sequence_by_rnn_lm/train.py"}]} | 1,829 | 114 |
gh_patches_debug_2394 | rasdani/github-patches | git_diff | pyca__cryptography-1530 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release Automation Fixes for Seventh Release
The release script is not properly waiting for the wheel job it starts to finish before downloading. This causes it to download previous releases and attempt to upload them.
</issue>
<code>
[start of tasks.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import getpass
8 import os
9 import time
10
11 import invoke
12
13 import requests
14
15
16 JENKINS_URL = "https://jenkins.cryptography.io/job/cryptography-wheel-builder"
17
18
19 def wait_for_build_completed(session):
20 while True:
21 response = session.get(
22 "{0}/lastBuild/api/json/".format(JENKINS_URL),
23 headers={
24 "Accept": "application/json",
25 }
26 )
27 response.raise_for_status()
28 if not response.json()["building"]:
29 assert response.json()["result"] == "SUCCESS"
30 break
31 time.sleep(0.1)
32
33
34 def download_artifacts(session):
35 response = session.get(
36 "{0}/lastBuild/api/json/".format(JENKINS_URL),
37 headers={
38 "Accept": "application/json"
39 }
40 )
41 response.raise_for_status()
42 assert not response.json()["building"]
43 assert response.json()["result"] == "SUCCESS"
44
45 paths = []
46
47 for run in response.json()["runs"]:
48 response = session.get(
49 run["url"] + "api/json/",
50 headers={
51 "Accept": "application/json",
52 }
53 )
54 response.raise_for_status()
55 for artifact in response.json()["artifacts"]:
56 response = session.get(
57 "{0}artifact/{1}".format(run["url"], artifact["relativePath"])
58 )
59 out_path = os.path.join(
60 os.path.dirname(__file__),
61 "dist",
62 artifact["fileName"],
63 )
64 with open(out_path, "wb") as f:
65 f.write(response.content)
66 paths.append(out_path)
67 return paths
68
69
70 @invoke.task
71 def release(version):
72 """
73 ``version`` should be a string like '0.4' or '1.0'.
74 """
75 invoke.run("git tag -s {0} -m '{0} release'".format(version))
76 invoke.run("git push --tags")
77
78 invoke.run("python setup.py sdist")
79 invoke.run("cd vectors/ && python setup.py sdist bdist_wheel")
80
81 invoke.run(
82 "twine upload -s dist/cryptography-{0}* "
83 "vectors/dist/cryptography_vectors-{0}*".format(version)
84 )
85
86 session = requests.Session()
87
88 # This tells the CDN to delete the cached response for the URL. We do this
89 # so that the Jenkins builders will see the new sdist immediately when they
90 # go to build the wheels.
91 response = session.request(
92 "PURGE", "https://pypi.python.org/simple/cryptography/"
93 )
94 response.raise_for_status()
95
96 username = getpass.getpass("Input the GitHub/Jenkins username: ")
97 token = getpass.getpass("Input the Jenkins token: ")
98 response = session.post(
99 "{0}/build".format(JENKINS_URL),
100 auth=requests.auth.HTTPBasicAuth(
101 username, token
102 ),
103 params={
104 "cause": "Building wheels for {0}".format(version)
105 }
106 )
107 response.raise_for_status()
108 wait_for_build_completed(session)
109 paths = download_artifacts(session)
110 invoke.run("twine upload {0}".format(" ".join(paths)))
111
[end of tasks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tasks.py b/tasks.py
--- a/tasks.py
+++ b/tasks.py
@@ -17,6 +17,9 @@
def wait_for_build_completed(session):
+ # Wait 3 seconds before actually checking if the build is complete, to
+ # ensure that it had time to really start.
+ time.sleep(3)
while True:
response = session.get(
"{0}/lastBuild/api/json/".format(JENKINS_URL),
| {"golden_diff": "diff --git a/tasks.py b/tasks.py\n--- a/tasks.py\n+++ b/tasks.py\n@@ -17,6 +17,9 @@\n \n \n def wait_for_build_completed(session):\n+ # Wait 3 seconds before actually checking if the build is complete, to\n+ # ensure that it had time to really start.\n+ time.sleep(3)\n while True:\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n", "issue": "Release Automation Fixes for Seventh Release\nThe release script is not properly waiting for the wheel job it starts to finish before downloading. This causes it to download previous releases and attempt to upload them.\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport getpass\nimport os\nimport time\n\nimport invoke\n\nimport requests\n\n\nJENKINS_URL = \"https://jenkins.cryptography.io/job/cryptography-wheel-builder\"\n\n\ndef wait_for_build_completed(session):\n while True:\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\",\n }\n )\n response.raise_for_status()\n if not response.json()[\"building\"]:\n assert response.json()[\"result\"] == \"SUCCESS\"\n break\n time.sleep(0.1)\n\n\ndef download_artifacts(session):\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\"\n }\n )\n response.raise_for_status()\n assert not response.json()[\"building\"]\n assert response.json()[\"result\"] == \"SUCCESS\"\n\n paths = []\n\n for run in response.json()[\"runs\"]:\n response = session.get(\n run[\"url\"] + \"api/json/\",\n headers={\n \"Accept\": \"application/json\",\n }\n )\n response.raise_for_status()\n for artifact in response.json()[\"artifacts\"]:\n response = session.get(\n \"{0}artifact/{1}\".format(run[\"url\"], artifact[\"relativePath\"])\n )\n out_path = os.path.join(\n os.path.dirname(__file__),\n \"dist\",\n artifact[\"fileName\"],\n )\n with open(out_path, \"wb\") as f:\n f.write(response.content)\n paths.append(out_path)\n return paths\n\n\[email protected]\ndef release(version):\n \"\"\"\n ``version`` should be a string like '0.4' or '1.0'.\n \"\"\"\n invoke.run(\"git tag -s {0} -m '{0} release'\".format(version))\n invoke.run(\"git push --tags\")\n\n invoke.run(\"python setup.py sdist\")\n invoke.run(\"cd vectors/ && python setup.py sdist bdist_wheel\")\n\n invoke.run(\n \"twine upload -s dist/cryptography-{0}* \"\n \"vectors/dist/cryptography_vectors-{0}*\".format(version)\n )\n\n session = requests.Session()\n\n # This tells the CDN to delete the cached response for the URL. We do this\n # so that the Jenkins builders will see the new sdist immediately when they\n # go to build the wheels.\n response = session.request(\n \"PURGE\", \"https://pypi.python.org/simple/cryptography/\"\n )\n response.raise_for_status()\n\n username = getpass.getpass(\"Input the GitHub/Jenkins username: \")\n token = getpass.getpass(\"Input the Jenkins token: \")\n response = session.post(\n \"{0}/build\".format(JENKINS_URL),\n auth=requests.auth.HTTPBasicAuth(\n username, token\n ),\n params={\n \"cause\": \"Building wheels for {0}\".format(version)\n }\n )\n response.raise_for_status()\n wait_for_build_completed(session)\n paths = download_artifacts(session)\n invoke.run(\"twine upload {0}\".format(\" \".join(paths)))\n", "path": "tasks.py"}]} | 1,527 | 105 |
gh_patches_debug_5207 | rasdani/github-patches | git_diff | pytorch__ignite-3219 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add python 3.12 to CI
## 🚀 Feature
Add python 3.12 to CI: https://github.com/pytorch/ignite/blob/master/.github/workflows/unit-tests.yml
</issue>
<code>
[start of examples/mnist/mnist.py]
1 from argparse import ArgumentParser
2
3 import torch
4 import torch.nn.functional as F
5 from torch import nn
6 from torch.optim import SGD
7 from torch.utils.data import DataLoader
8 from torchvision.datasets import MNIST
9 from torchvision.transforms import Compose, Normalize, ToTensor
10 from tqdm import tqdm
11
12 from ignite.engine import create_supervised_evaluator, create_supervised_trainer, Events
13 from ignite.metrics import Accuracy, Loss
14 from ignite.utils import setup_logger
15
16
17 class Net(nn.Module):
18 def __init__(self):
19 super(Net, self).__init__()
20 self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
21 self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
22 self.conv2_drop = nn.Dropout2d()
23 self.fc1 = nn.Linear(320, 50)
24 self.fc2 = nn.Linear(50, 10)
25
26 def forward(self, x):
27 x = F.relu(F.max_pool2d(self.conv1(x), 2))
28 x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
29 x = x.view(-1, 320)
30 x = F.relu(self.fc1(x))
31 x = F.dropout(x, training=self.training)
32 x = self.fc2(x)
33 return F.log_softmax(x, dim=-1)
34
35
36 def get_data_loaders(train_batch_size, val_batch_size):
37 data_transform = Compose([ToTensor(), Normalize((0.1307,), (0.3081,))])
38
39 train_loader = DataLoader(
40 MNIST(download=True, root=".", transform=data_transform, train=True), batch_size=train_batch_size, shuffle=True
41 )
42
43 val_loader = DataLoader(
44 MNIST(download=False, root=".", transform=data_transform, train=False), batch_size=val_batch_size, shuffle=False
45 )
46 return train_loader, val_loader
47
48
49 def run(train_batch_size, val_batch_size, epochs, lr, momentum, log_interval):
50 train_loader, val_loader = get_data_loaders(train_batch_size, val_batch_size)
51 model = Net()
52 device = "cpu"
53
54 if torch.cuda.is_available():
55 device = "cuda"
56
57 model.to(device) # Move model before creating optimizer
58 optimizer = SGD(model.parameters(), lr=lr, momentum=momentum)
59 criterion = nn.NLLLoss()
60 trainer = create_supervised_trainer(model, optimizer, criterion, device=device)
61 trainer.logger = setup_logger("trainer")
62
63 val_metrics = {"accuracy": Accuracy(), "nll": Loss(criterion)}
64 evaluator = create_supervised_evaluator(model, metrics=val_metrics, device=device)
65 evaluator.logger = setup_logger("evaluator")
66
67 pbar = tqdm(initial=0, leave=False, total=len(train_loader), desc=f"ITERATION - loss: {0:.2f}")
68
69 @trainer.on(Events.ITERATION_COMPLETED(every=log_interval))
70 def log_training_loss(engine):
71 pbar.desc = f"ITERATION - loss: {engine.state.output:.2f}"
72 pbar.update(log_interval)
73
74 @trainer.on(Events.EPOCH_COMPLETED)
75 def log_training_results(engine):
76 pbar.refresh()
77 evaluator.run(train_loader)
78 metrics = evaluator.state.metrics
79 avg_accuracy = metrics["accuracy"]
80 avg_nll = metrics["nll"]
81 tqdm.write(
82 f"Training Results - Epoch: {engine.state.epoch} Avg accuracy: {avg_accuracy:.2f} Avg loss: {avg_nll:.2f}"
83 )
84
85 @trainer.on(Events.EPOCH_COMPLETED)
86 def log_validation_results(engine):
87 evaluator.run(val_loader)
88 metrics = evaluator.state.metrics
89 avg_accuracy = metrics["accuracy"]
90 avg_nll = metrics["nll"]
91 tqdm.write(
92 f"Validation Results - Epoch: {engine.state.epoch} Avg accuracy: {avg_accuracy:.2f} Avg loss: {avg_nll:.2f}"
93 )
94
95 pbar.n = pbar.last_print_n = 0
96
97 @trainer.on(Events.EPOCH_COMPLETED | Events.COMPLETED)
98 def log_time(engine):
99 tqdm.write(f"{trainer.last_event_name.name} took { trainer.state.times[trainer.last_event_name.name]} seconds")
100
101 trainer.run(train_loader, max_epochs=epochs)
102 pbar.close()
103
104
105 if __name__ == "__main__":
106 parser = ArgumentParser()
107 parser.add_argument("--batch_size", type=int, default=64, help="input batch size for training (default: 64)")
108 parser.add_argument(
109 "--val_batch_size", type=int, default=1000, help="input batch size for validation (default: 1000)"
110 )
111 parser.add_argument("--epochs", type=int, default=10, help="number of epochs to train (default: 10)")
112 parser.add_argument("--lr", type=float, default=0.01, help="learning rate (default: 0.01)")
113 parser.add_argument("--momentum", type=float, default=0.5, help="SGD momentum (default: 0.5)")
114 parser.add_argument(
115 "--log_interval", type=int, default=10, help="how many batches to wait before logging training status"
116 )
117
118 args = parser.parse_args()
119
120 run(args.batch_size, args.val_batch_size, args.epochs, args.lr, args.momentum, args.log_interval)
121
[end of examples/mnist/mnist.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/mnist/mnist.py b/examples/mnist/mnist.py
--- a/examples/mnist/mnist.py
+++ b/examples/mnist/mnist.py
@@ -96,7 +96,7 @@
@trainer.on(Events.EPOCH_COMPLETED | Events.COMPLETED)
def log_time(engine):
- tqdm.write(f"{trainer.last_event_name.name} took { trainer.state.times[trainer.last_event_name.name]} seconds")
+ tqdm.write(f"{trainer.last_event_name.name} took {trainer.state.times[trainer.last_event_name.name]} seconds")
trainer.run(train_loader, max_epochs=epochs)
pbar.close()
| {"golden_diff": "diff --git a/examples/mnist/mnist.py b/examples/mnist/mnist.py\n--- a/examples/mnist/mnist.py\n+++ b/examples/mnist/mnist.py\n@@ -96,7 +96,7 @@\n \n @trainer.on(Events.EPOCH_COMPLETED | Events.COMPLETED)\n def log_time(engine):\n- tqdm.write(f\"{trainer.last_event_name.name} took { trainer.state.times[trainer.last_event_name.name]} seconds\")\n+ tqdm.write(f\"{trainer.last_event_name.name} took {trainer.state.times[trainer.last_event_name.name]} seconds\")\n \n trainer.run(train_loader, max_epochs=epochs)\n pbar.close()\n", "issue": "Add python 3.12 to CI\n## \ud83d\ude80 Feature\r\n\r\nAdd python 3.12 to CI: https://github.com/pytorch/ignite/blob/master/.github/workflows/unit-tests.yml\r\n\n", "before_files": [{"content": "from argparse import ArgumentParser\n\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn\nfrom torch.optim import SGD\nfrom torch.utils.data import DataLoader\nfrom torchvision.datasets import MNIST\nfrom torchvision.transforms import Compose, Normalize, ToTensor\nfrom tqdm import tqdm\n\nfrom ignite.engine import create_supervised_evaluator, create_supervised_trainer, Events\nfrom ignite.metrics import Accuracy, Loss\nfrom ignite.utils import setup_logger\n\n\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n self.conv2_drop = nn.Dropout2d()\n self.fc1 = nn.Linear(320, 50)\n self.fc2 = nn.Linear(50, 10)\n\n def forward(self, x):\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\n x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n x = x.view(-1, 320)\n x = F.relu(self.fc1(x))\n x = F.dropout(x, training=self.training)\n x = self.fc2(x)\n return F.log_softmax(x, dim=-1)\n\n\ndef get_data_loaders(train_batch_size, val_batch_size):\n data_transform = Compose([ToTensor(), Normalize((0.1307,), (0.3081,))])\n\n train_loader = DataLoader(\n MNIST(download=True, root=\".\", transform=data_transform, train=True), batch_size=train_batch_size, shuffle=True\n )\n\n val_loader = DataLoader(\n MNIST(download=False, root=\".\", transform=data_transform, train=False), batch_size=val_batch_size, shuffle=False\n )\n return train_loader, val_loader\n\n\ndef run(train_batch_size, val_batch_size, epochs, lr, momentum, log_interval):\n train_loader, val_loader = get_data_loaders(train_batch_size, val_batch_size)\n model = Net()\n device = \"cpu\"\n\n if torch.cuda.is_available():\n device = \"cuda\"\n\n model.to(device) # Move model before creating optimizer\n optimizer = SGD(model.parameters(), lr=lr, momentum=momentum)\n criterion = nn.NLLLoss()\n trainer = create_supervised_trainer(model, optimizer, criterion, device=device)\n trainer.logger = setup_logger(\"trainer\")\n\n val_metrics = {\"accuracy\": Accuracy(), \"nll\": Loss(criterion)}\n evaluator = create_supervised_evaluator(model, metrics=val_metrics, device=device)\n evaluator.logger = setup_logger(\"evaluator\")\n\n pbar = tqdm(initial=0, leave=False, total=len(train_loader), desc=f\"ITERATION - loss: {0:.2f}\")\n\n @trainer.on(Events.ITERATION_COMPLETED(every=log_interval))\n def log_training_loss(engine):\n pbar.desc = f\"ITERATION - loss: {engine.state.output:.2f}\"\n pbar.update(log_interval)\n\n @trainer.on(Events.EPOCH_COMPLETED)\n def log_training_results(engine):\n pbar.refresh()\n evaluator.run(train_loader)\n metrics = evaluator.state.metrics\n avg_accuracy = metrics[\"accuracy\"]\n avg_nll = metrics[\"nll\"]\n tqdm.write(\n f\"Training Results - Epoch: {engine.state.epoch} Avg accuracy: {avg_accuracy:.2f} Avg loss: {avg_nll:.2f}\"\n )\n\n @trainer.on(Events.EPOCH_COMPLETED)\n def log_validation_results(engine):\n evaluator.run(val_loader)\n metrics = evaluator.state.metrics\n avg_accuracy = metrics[\"accuracy\"]\n avg_nll = metrics[\"nll\"]\n tqdm.write(\n f\"Validation Results - Epoch: {engine.state.epoch} Avg accuracy: {avg_accuracy:.2f} Avg loss: {avg_nll:.2f}\"\n )\n\n pbar.n = pbar.last_print_n = 0\n\n @trainer.on(Events.EPOCH_COMPLETED | Events.COMPLETED)\n def log_time(engine):\n tqdm.write(f\"{trainer.last_event_name.name} took { trainer.state.times[trainer.last_event_name.name]} seconds\")\n\n trainer.run(train_loader, max_epochs=epochs)\n pbar.close()\n\n\nif __name__ == \"__main__\":\n parser = ArgumentParser()\n parser.add_argument(\"--batch_size\", type=int, default=64, help=\"input batch size for training (default: 64)\")\n parser.add_argument(\n \"--val_batch_size\", type=int, default=1000, help=\"input batch size for validation (default: 1000)\"\n )\n parser.add_argument(\"--epochs\", type=int, default=10, help=\"number of epochs to train (default: 10)\")\n parser.add_argument(\"--lr\", type=float, default=0.01, help=\"learning rate (default: 0.01)\")\n parser.add_argument(\"--momentum\", type=float, default=0.5, help=\"SGD momentum (default: 0.5)\")\n parser.add_argument(\n \"--log_interval\", type=int, default=10, help=\"how many batches to wait before logging training status\"\n )\n\n args = parser.parse_args()\n\n run(args.batch_size, args.val_batch_size, args.epochs, args.lr, args.momentum, args.log_interval)\n", "path": "examples/mnist/mnist.py"}]} | 2,017 | 139 |
gh_patches_debug_20678 | rasdani/github-patches | git_diff | freqtrade__freqtrade-7571 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow freqai to pull prediction_models from user_data
Currently, only classes present in `freqai/prediction_models` are available for backtesting/trading.
Allowing the user to define a custom model to be used with `--freqaimodel` would allow more flexibility.
</issue>
<code>
[start of freqtrade/configuration/directory_operations.py]
1 import logging
2 import shutil
3 from pathlib import Path
4 from typing import Optional
5
6 from freqtrade.constants import USER_DATA_FILES, Config
7 from freqtrade.exceptions import OperationalException
8
9
10 logger = logging.getLogger(__name__)
11
12
13 def create_datadir(config: Config, datadir: Optional[str] = None) -> Path:
14
15 folder = Path(datadir) if datadir else Path(f"{config['user_data_dir']}/data")
16 if not datadir:
17 # set datadir
18 exchange_name = config.get('exchange', {}).get('name', '').lower()
19 folder = folder.joinpath(exchange_name)
20
21 if not folder.is_dir():
22 folder.mkdir(parents=True)
23 logger.info(f'Created data directory: {datadir}')
24 return folder
25
26
27 def chown_user_directory(directory: Path) -> None:
28 """
29 Use Sudo to change permissions of the home-directory if necessary
30 Only applies when running in docker!
31 """
32 import os
33 if os.environ.get('FT_APP_ENV') == 'docker':
34 try:
35 import subprocess
36 subprocess.check_output(
37 ['sudo', 'chown', '-R', 'ftuser:', str(directory.resolve())])
38 except Exception:
39 logger.warning(f"Could not chown {directory}")
40
41
42 def create_userdata_dir(directory: str, create_dir: bool = False) -> Path:
43 """
44 Create userdata directory structure.
45 if create_dir is True, then the parent-directory will be created if it does not exist.
46 Sub-directories will always be created if the parent directory exists.
47 Raises OperationalException if given a non-existing directory.
48 :param directory: Directory to check
49 :param create_dir: Create directory if it does not exist.
50 :return: Path object containing the directory
51 """
52 sub_dirs = ["backtest_results", "data", "hyperopts", "hyperopt_results", "logs",
53 "notebooks", "plot", "strategies", ]
54 folder = Path(directory)
55 chown_user_directory(folder)
56 if not folder.is_dir():
57 if create_dir:
58 folder.mkdir(parents=True)
59 logger.info(f'Created user-data directory: {folder}')
60 else:
61 raise OperationalException(
62 f"Directory `{folder}` does not exist. "
63 "Please use `freqtrade create-userdir` to create a user directory")
64
65 # Create required subdirectories
66 for f in sub_dirs:
67 subfolder = folder / f
68 if not subfolder.is_dir():
69 subfolder.mkdir(parents=False)
70 return folder
71
72
73 def copy_sample_files(directory: Path, overwrite: bool = False) -> None:
74 """
75 Copy files from templates to User data directory.
76 :param directory: Directory to copy data to
77 :param overwrite: Overwrite existing sample files
78 """
79 if not directory.is_dir():
80 raise OperationalException(f"Directory `{directory}` does not exist.")
81 sourcedir = Path(__file__).parents[1] / "templates"
82 for source, target in USER_DATA_FILES.items():
83 targetdir = directory / target
84 if not targetdir.is_dir():
85 raise OperationalException(f"Directory `{targetdir}` does not exist.")
86 targetfile = targetdir / source
87 if targetfile.exists():
88 if not overwrite:
89 logger.warning(f"File `{targetfile}` exists already, not deploying sample file.")
90 continue
91 logger.warning(f"File `{targetfile}` exists already, overwriting.")
92 shutil.copy(str(sourcedir / source), str(targetfile))
93
[end of freqtrade/configuration/directory_operations.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/freqtrade/configuration/directory_operations.py b/freqtrade/configuration/directory_operations.py
--- a/freqtrade/configuration/directory_operations.py
+++ b/freqtrade/configuration/directory_operations.py
@@ -3,7 +3,8 @@
from pathlib import Path
from typing import Optional
-from freqtrade.constants import USER_DATA_FILES, Config
+from freqtrade.constants import (USER_DATA_FILES, USERPATH_FREQAIMODELS, USERPATH_HYPEROPTS,
+ USERPATH_NOTEBOOKS, USERPATH_STRATEGIES, Config)
from freqtrade.exceptions import OperationalException
@@ -49,8 +50,8 @@
:param create_dir: Create directory if it does not exist.
:return: Path object containing the directory
"""
- sub_dirs = ["backtest_results", "data", "hyperopts", "hyperopt_results", "logs",
- "notebooks", "plot", "strategies", ]
+ sub_dirs = ["backtest_results", "data", USERPATH_HYPEROPTS, "hyperopt_results", "logs",
+ USERPATH_NOTEBOOKS, "plot", USERPATH_STRATEGIES, USERPATH_FREQAIMODELS]
folder = Path(directory)
chown_user_directory(folder)
if not folder.is_dir():
| {"golden_diff": "diff --git a/freqtrade/configuration/directory_operations.py b/freqtrade/configuration/directory_operations.py\n--- a/freqtrade/configuration/directory_operations.py\n+++ b/freqtrade/configuration/directory_operations.py\n@@ -3,7 +3,8 @@\n from pathlib import Path\n from typing import Optional\n \n-from freqtrade.constants import USER_DATA_FILES, Config\n+from freqtrade.constants import (USER_DATA_FILES, USERPATH_FREQAIMODELS, USERPATH_HYPEROPTS,\n+ USERPATH_NOTEBOOKS, USERPATH_STRATEGIES, Config)\n from freqtrade.exceptions import OperationalException\n \n \n@@ -49,8 +50,8 @@\n :param create_dir: Create directory if it does not exist.\n :return: Path object containing the directory\n \"\"\"\n- sub_dirs = [\"backtest_results\", \"data\", \"hyperopts\", \"hyperopt_results\", \"logs\",\n- \"notebooks\", \"plot\", \"strategies\", ]\n+ sub_dirs = [\"backtest_results\", \"data\", USERPATH_HYPEROPTS, \"hyperopt_results\", \"logs\",\n+ USERPATH_NOTEBOOKS, \"plot\", USERPATH_STRATEGIES, USERPATH_FREQAIMODELS]\n folder = Path(directory)\n chown_user_directory(folder)\n if not folder.is_dir():\n", "issue": "Allow freqai to pull prediction_models from user_data\nCurrently, only classes present in `freqai/prediction_models` are available for backtesting/trading.\r\nAllowing the user to define a custom model to be used with `--freqaimodel` would allow more flexibility.\n", "before_files": [{"content": "import logging\nimport shutil\nfrom pathlib import Path\nfrom typing import Optional\n\nfrom freqtrade.constants import USER_DATA_FILES, Config\nfrom freqtrade.exceptions import OperationalException\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef create_datadir(config: Config, datadir: Optional[str] = None) -> Path:\n\n folder = Path(datadir) if datadir else Path(f\"{config['user_data_dir']}/data\")\n if not datadir:\n # set datadir\n exchange_name = config.get('exchange', {}).get('name', '').lower()\n folder = folder.joinpath(exchange_name)\n\n if not folder.is_dir():\n folder.mkdir(parents=True)\n logger.info(f'Created data directory: {datadir}')\n return folder\n\n\ndef chown_user_directory(directory: Path) -> None:\n \"\"\"\n Use Sudo to change permissions of the home-directory if necessary\n Only applies when running in docker!\n \"\"\"\n import os\n if os.environ.get('FT_APP_ENV') == 'docker':\n try:\n import subprocess\n subprocess.check_output(\n ['sudo', 'chown', '-R', 'ftuser:', str(directory.resolve())])\n except Exception:\n logger.warning(f\"Could not chown {directory}\")\n\n\ndef create_userdata_dir(directory: str, create_dir: bool = False) -> Path:\n \"\"\"\n Create userdata directory structure.\n if create_dir is True, then the parent-directory will be created if it does not exist.\n Sub-directories will always be created if the parent directory exists.\n Raises OperationalException if given a non-existing directory.\n :param directory: Directory to check\n :param create_dir: Create directory if it does not exist.\n :return: Path object containing the directory\n \"\"\"\n sub_dirs = [\"backtest_results\", \"data\", \"hyperopts\", \"hyperopt_results\", \"logs\",\n \"notebooks\", \"plot\", \"strategies\", ]\n folder = Path(directory)\n chown_user_directory(folder)\n if not folder.is_dir():\n if create_dir:\n folder.mkdir(parents=True)\n logger.info(f'Created user-data directory: {folder}')\n else:\n raise OperationalException(\n f\"Directory `{folder}` does not exist. \"\n \"Please use `freqtrade create-userdir` to create a user directory\")\n\n # Create required subdirectories\n for f in sub_dirs:\n subfolder = folder / f\n if not subfolder.is_dir():\n subfolder.mkdir(parents=False)\n return folder\n\n\ndef copy_sample_files(directory: Path, overwrite: bool = False) -> None:\n \"\"\"\n Copy files from templates to User data directory.\n :param directory: Directory to copy data to\n :param overwrite: Overwrite existing sample files\n \"\"\"\n if not directory.is_dir():\n raise OperationalException(f\"Directory `{directory}` does not exist.\")\n sourcedir = Path(__file__).parents[1] / \"templates\"\n for source, target in USER_DATA_FILES.items():\n targetdir = directory / target\n if not targetdir.is_dir():\n raise OperationalException(f\"Directory `{targetdir}` does not exist.\")\n targetfile = targetdir / source\n if targetfile.exists():\n if not overwrite:\n logger.warning(f\"File `{targetfile}` exists already, not deploying sample file.\")\n continue\n logger.warning(f\"File `{targetfile}` exists already, overwriting.\")\n shutil.copy(str(sourcedir / source), str(targetfile))\n", "path": "freqtrade/configuration/directory_operations.py"}]} | 1,517 | 277 |
gh_patches_debug_1797 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-5346 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove all warnings from pytest
When running `tox` we see these warnings in the summary.
We should use `request` fixture and access to `request.config` instead.
Docs: https://docs.pytest.org/en/latest/fixture.html#request-context
Change log: https://docs.pytest.org/en/latest/deprecations.html#pytest-config-global
```
====================================================================================== warnings summary ======================================================================================
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_mkdocs
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_mkdocs_index
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_mkdocs_index_no_directory_urls
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_mkdocs_no_directory_urls
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_and_page
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_and_page_htmldir
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_and_page_signlehtml
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_htmldir
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_singlehtml
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_only
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_only_htmldir
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_only_singlehtml
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_restructured_text
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_restructured_text_invalid
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_and_page
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_and_page_htmldir
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_and_page_singlehtml
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_htmldir
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_singlehtml
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_only
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_only_htmldir
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_only_singlehtml
/home/humitos/rtfd/code/readthedocs-corporate/.tox/py36/readthedocs.org/readthedocs/rtd_tests/tests/test_core_tags.py:19: PytestDeprecationWarning: the `pytest.config` global is deprecated. Please use `request.config` or `pytest_configure` (if you're a pytest plugin) instead.
scheme=pytest.config.option.url_scheme,
-- Docs: https://docs.pytest.org/en/latest/warnings.html
```
</issue>
<code>
[start of conftest.py]
1 # -*- coding: utf-8 -*-
2 import pytest
3 from django.conf import settings
4 from rest_framework.test import APIClient
5
6 try:
7 # TODO: this file is read/executed even when called from ``readthedocsinc``,
8 # so it's overriding the options that we are defining in the ``conftest.py``
9 # from the corporate site. We need to find a better way to avoid this.
10 import readthedocsinc
11 PYTEST_OPTIONS = ()
12 except ImportError:
13 PYTEST_OPTIONS = (
14 # Options to set test environment
15 ('community', True),
16 ('corporate', False),
17 ('environment', 'readthedocs'),
18
19 ('url_scheme', 'http'),
20 )
21
22
23 def pytest_addoption(parser):
24 parser.addoption(
25 '--including-search',
26 action='store_true',
27 dest='searchtests',
28 default=False, help='enable search tests',
29 )
30
31
32 def pytest_configure(config):
33 if not config.option.searchtests:
34 # Include ``not search``` to parameters so search tests do not perform
35 markexpr = getattr(config.option, 'markexpr')
36 if markexpr:
37 markexpr += ' and not search'
38 else:
39 markexpr = 'not search'
40 setattr(config.option, 'markexpr', markexpr.strip())
41
42 for option, value in PYTEST_OPTIONS:
43 setattr(config.option, option, value)
44
45
46 @pytest.fixture(autouse=True)
47 def settings_modification(settings):
48 settings.CELERY_ALWAYS_EAGER = True
49
50 @pytest.fixture
51 def api_client():
52 return APIClient()
53
[end of conftest.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -47,6 +47,12 @@
def settings_modification(settings):
settings.CELERY_ALWAYS_EAGER = True
+
@pytest.fixture
def api_client():
return APIClient()
+
+
[email protected](scope="class")
+def url_scheme(request):
+ request.cls.url_scheme = request.config.option.url_scheme
| {"golden_diff": "diff --git a/conftest.py b/conftest.py\n--- a/conftest.py\n+++ b/conftest.py\n@@ -47,6 +47,12 @@\n def settings_modification(settings):\n settings.CELERY_ALWAYS_EAGER = True\n \n+\n @pytest.fixture\n def api_client():\n return APIClient()\n+\n+\[email protected](scope=\"class\")\n+def url_scheme(request):\n+ request.cls.url_scheme = request.config.option.url_scheme\n", "issue": "Remove all warnings from pytest\nWhen running `tox` we see these warnings in the summary.\r\n\r\nWe should use `request` fixture and access to `request.config` instead.\r\n\r\nDocs: https://docs.pytest.org/en/latest/fixture.html#request-context\r\nChange log: https://docs.pytest.org/en/latest/deprecations.html#pytest-config-global\r\n\r\n\r\n```\r\n====================================================================================== warnings summary ======================================================================================\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_mkdocs\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_mkdocs_index\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_mkdocs_index_no_directory_urls\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_mkdocs_no_directory_urls\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_and_page\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_and_page_htmldir\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_and_page_signlehtml\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_htmldir\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_singlehtml\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_only\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_only_htmldir\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_only_singlehtml\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_restructured_text\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_restructured_text_invalid\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_and_page\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_and_page_htmldir\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_and_page_singlehtml\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_htmldir\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_singlehtml\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_only\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_only_htmldir\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_only_singlehtml\r\n /home/humitos/rtfd/code/readthedocs-corporate/.tox/py36/readthedocs.org/readthedocs/rtd_tests/tests/test_core_tags.py:19: PytestDeprecationWarning: the `pytest.config` global is deprecated. Please use `request.config` or `pytest_configure` (if you're a pytest plugin) instead.\r\n scheme=pytest.config.option.url_scheme,\r\n\r\n-- Docs: https://docs.pytest.org/en/latest/warnings.html\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport pytest\nfrom django.conf import settings\nfrom rest_framework.test import APIClient\n\ntry:\n # TODO: this file is read/executed even when called from ``readthedocsinc``,\n # so it's overriding the options that we are defining in the ``conftest.py``\n # from the corporate site. We need to find a better way to avoid this.\n import readthedocsinc\n PYTEST_OPTIONS = ()\nexcept ImportError:\n PYTEST_OPTIONS = (\n # Options to set test environment\n ('community', True),\n ('corporate', False),\n ('environment', 'readthedocs'),\n\n ('url_scheme', 'http'),\n )\n\n\ndef pytest_addoption(parser):\n parser.addoption(\n '--including-search',\n action='store_true',\n dest='searchtests',\n default=False, help='enable search tests',\n )\n\n\ndef pytest_configure(config):\n if not config.option.searchtests:\n # Include ``not search``` to parameters so search tests do not perform\n markexpr = getattr(config.option, 'markexpr')\n if markexpr:\n markexpr += ' and not search'\n else:\n markexpr = 'not search'\n setattr(config.option, 'markexpr', markexpr.strip())\n\n for option, value in PYTEST_OPTIONS:\n setattr(config.option, option, value)\n\n\[email protected](autouse=True)\ndef settings_modification(settings):\n settings.CELERY_ALWAYS_EAGER = True\n\[email protected]\ndef api_client():\n return APIClient()\n", "path": "conftest.py"}]} | 1,713 | 102 |
gh_patches_debug_22245 | rasdani/github-patches | git_diff | saulpw__visidata-543 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TSV file with column name "length" causes TypeError
**Small description**
Files in TSV format containing a column named `length` cannot be loaded.
**Expected result**
See content of TSV file.
**Actual result with screenshot**
An empty file is shown. In the footer line it says:
```
TypeError: 'property' object is not callable
```
**Steps to reproduce with sample data and a .vd**
Create a file named `test.tsv` with this content:
```
length
1
```
Then, try to open it:
```
vd test.tsv
```
**Additional context**
version 1.5.2
</issue>
<code>
[start of visidata/utils.py]
1 import operator
2
3 'Various helper classes and functions.'
4
5 __all__ = ['AttrDict', 'joinSheetnames', 'moveListItem', 'namedlist', 'classproperty']
6
7
8 class AttrDict(dict):
9 'Augment a dict with more convenient .attr syntax. not-present keys return None.'
10 def __getattr__(self, k):
11 try:
12 return self[k]
13 except KeyError:
14 return None
15
16 def __setattr__(self, k, v):
17 self[k] = v
18
19 def __dir__(self):
20 return self.keys()
21
22
23 class classproperty(property):
24 def __get__(self, cls, obj):
25 return classmethod(self.fget).__get__(None, obj or cls)()
26
27
28 def joinSheetnames(*sheetnames):
29 'Concatenate sheet names in a standard way'
30 return '_'.join(str(x) for x in sheetnames)
31
32
33 def moveListItem(L, fromidx, toidx):
34 "Move element within list `L` and return element's new index."
35 toidx = min(max(toidx, 0), len(L)-1)
36 fromidx = min(max(fromidx, 0), len(L)-1)
37 r = L.pop(fromidx)
38 L.insert(toidx, r)
39 return toidx
40
41
42 class OnExit:
43 '"with OnExit(func, ...):" calls func(...) when the context is exited'
44 def __init__(self, func, *args, **kwargs):
45 self.func = func
46 self.args = args
47 self.kwargs = kwargs
48
49 def __enter__(self):
50 return self
51
52 def __exit__(self, exc_type, exc_value, exc_traceback):
53 try:
54 self.func(*self.args, **self.kwargs)
55 except Exception as e:
56 vd.exceptionCaught(e)
57
58
59 def itemsetter(i):
60 def g(obj, v):
61 obj[i] = v
62 return g
63
64
65 def namedlist(objname, fieldnames):
66 'like namedtuple but editable'
67 class NamedListTemplate(list):
68 __name__ = objname
69 _fields = fieldnames
70
71 def __init__(self, L=None, **kwargs):
72 if L is None:
73 L = [None]*self.length()
74 elif len(L) < self.length():
75 L.extend([None]*(self.length() - len(L)))
76 super().__init__(L)
77 for k, v in kwargs.items():
78 setattr(self, k, v)
79
80 @classmethod
81 def length(cls):
82 return len(cls._fields)
83
84 def __getattr__(self, k):
85 'to enable .fieldname'
86 try:
87 return self[self._fields.index(k)]
88 except ValueError:
89 raise AttributeError
90
91 def __setattr__(self, k, v):
92 'to enable .fieldname ='
93 try:
94 self[self._fields.index(k)] = v
95 except ValueError:
96 super().__setattr__(k, v)
97
98 for i, attrname in enumerate(fieldnames):
99 # create property getter/setter for each field
100 setattr(NamedListTemplate, attrname, property(operator.itemgetter(i), itemsetter(i)))
101
102 return NamedListTemplate
103
[end of visidata/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/visidata/utils.py b/visidata/utils.py
--- a/visidata/utils.py
+++ b/visidata/utils.py
@@ -70,17 +70,13 @@
def __init__(self, L=None, **kwargs):
if L is None:
- L = [None]*self.length()
- elif len(L) < self.length():
- L.extend([None]*(self.length() - len(L)))
+ L = [None]*len(self._fields)
+ elif len(L) < len(self._fields):
+ L.extend([None]*(len(self._fields) - len(L)))
super().__init__(L)
for k, v in kwargs.items():
setattr(self, k, v)
- @classmethod
- def length(cls):
- return len(cls._fields)
-
def __getattr__(self, k):
'to enable .fieldname'
try:
@@ -95,8 +91,4 @@
except ValueError:
super().__setattr__(k, v)
- for i, attrname in enumerate(fieldnames):
- # create property getter/setter for each field
- setattr(NamedListTemplate, attrname, property(operator.itemgetter(i), itemsetter(i)))
-
return NamedListTemplate
| {"golden_diff": "diff --git a/visidata/utils.py b/visidata/utils.py\n--- a/visidata/utils.py\n+++ b/visidata/utils.py\n@@ -70,17 +70,13 @@\n \n def __init__(self, L=None, **kwargs):\n if L is None:\n- L = [None]*self.length()\n- elif len(L) < self.length():\n- L.extend([None]*(self.length() - len(L)))\n+ L = [None]*len(self._fields)\n+ elif len(L) < len(self._fields):\n+ L.extend([None]*(len(self._fields) - len(L)))\n super().__init__(L)\n for k, v in kwargs.items():\n setattr(self, k, v)\n \n- @classmethod\n- def length(cls):\n- return len(cls._fields)\n-\n def __getattr__(self, k):\n 'to enable .fieldname'\n try:\n@@ -95,8 +91,4 @@\n except ValueError:\n super().__setattr__(k, v)\n \n- for i, attrname in enumerate(fieldnames):\n- # create property getter/setter for each field\n- setattr(NamedListTemplate, attrname, property(operator.itemgetter(i), itemsetter(i)))\n-\n return NamedListTemplate\n", "issue": "TSV file with column name \"length\" causes TypeError\n**Small description**\r\nFiles in TSV format containing a column named `length` cannot be loaded.\r\n\r\n**Expected result**\r\nSee content of TSV file.\r\n\r\n**Actual result with screenshot**\r\nAn empty file is shown. In the footer line it says:\r\n```\r\nTypeError: 'property' object is not callable\r\n```\r\n\r\n**Steps to reproduce with sample data and a .vd**\r\nCreate a file named `test.tsv` with this content:\r\n```\r\nlength\r\n1\r\n```\r\nThen, try to open it:\r\n```\r\nvd test.tsv\r\n```\r\n\r\n**Additional context**\r\nversion 1.5.2\r\n\n", "before_files": [{"content": "import operator\n\n'Various helper classes and functions.'\n\n__all__ = ['AttrDict', 'joinSheetnames', 'moveListItem', 'namedlist', 'classproperty']\n\n\nclass AttrDict(dict):\n 'Augment a dict with more convenient .attr syntax. not-present keys return None.'\n def __getattr__(self, k):\n try:\n return self[k]\n except KeyError:\n return None\n\n def __setattr__(self, k, v):\n self[k] = v\n\n def __dir__(self):\n return self.keys()\n\n\nclass classproperty(property):\n def __get__(self, cls, obj):\n return classmethod(self.fget).__get__(None, obj or cls)()\n\n\ndef joinSheetnames(*sheetnames):\n 'Concatenate sheet names in a standard way'\n return '_'.join(str(x) for x in sheetnames)\n\n\ndef moveListItem(L, fromidx, toidx):\n \"Move element within list `L` and return element's new index.\"\n toidx = min(max(toidx, 0), len(L)-1)\n fromidx = min(max(fromidx, 0), len(L)-1)\n r = L.pop(fromidx)\n L.insert(toidx, r)\n return toidx\n\n\nclass OnExit:\n '\"with OnExit(func, ...):\" calls func(...) when the context is exited'\n def __init__(self, func, *args, **kwargs):\n self.func = func\n self.args = args\n self.kwargs = kwargs\n\n def __enter__(self):\n return self\n\n def __exit__(self, exc_type, exc_value, exc_traceback):\n try:\n self.func(*self.args, **self.kwargs)\n except Exception as e:\n vd.exceptionCaught(e)\n\n\ndef itemsetter(i):\n def g(obj, v):\n obj[i] = v\n return g\n\n\ndef namedlist(objname, fieldnames):\n 'like namedtuple but editable'\n class NamedListTemplate(list):\n __name__ = objname\n _fields = fieldnames\n\n def __init__(self, L=None, **kwargs):\n if L is None:\n L = [None]*self.length()\n elif len(L) < self.length():\n L.extend([None]*(self.length() - len(L)))\n super().__init__(L)\n for k, v in kwargs.items():\n setattr(self, k, v)\n\n @classmethod\n def length(cls):\n return len(cls._fields)\n\n def __getattr__(self, k):\n 'to enable .fieldname'\n try:\n return self[self._fields.index(k)]\n except ValueError:\n raise AttributeError\n\n def __setattr__(self, k, v):\n 'to enable .fieldname ='\n try:\n self[self._fields.index(k)] = v\n except ValueError:\n super().__setattr__(k, v)\n\n for i, attrname in enumerate(fieldnames):\n # create property getter/setter for each field\n setattr(NamedListTemplate, attrname, property(operator.itemgetter(i), itemsetter(i)))\n\n return NamedListTemplate\n", "path": "visidata/utils.py"}]} | 1,553 | 283 |
gh_patches_debug_8342 | rasdani/github-patches | git_diff | PaddlePaddle__models-799 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
policy_gradient 原理介绍部分内容格式存在问题
https://github.com/PaddlePaddle/models/tree/develop/fluid/policy_gradient
policy_gradient demo介绍部分,看起来格式存在问题,能辛苦调整下吗?或者以什么样的方式可以看到原始的文档呢? @wanghaoshuang @lcy-seso
</issue>
<code>
[start of fluid/policy_gradient/brain.py]
1 import numpy as np
2 import paddle.v2 as paddle
3 import paddle.fluid as fluid
4 # reproducible
5 np.random.seed(1)
6
7
8 class PolicyGradient:
9 def __init__(
10 self,
11 n_actions,
12 n_features,
13 learning_rate=0.01,
14 reward_decay=0.95,
15 output_graph=False, ):
16 self.n_actions = n_actions
17 self.n_features = n_features
18 self.lr = learning_rate
19 self.gamma = reward_decay
20
21 self.ep_obs, self.ep_as, self.ep_rs = [], [], []
22
23 self.place = fluid.CPUPlace()
24 self.exe = fluid.Executor(self.place)
25
26 def build_net(self):
27
28 obs = fluid.layers.data(
29 name='obs', shape=[self.n_features], dtype='float32')
30 acts = fluid.layers.data(name='acts', shape=[1], dtype='int64')
31 vt = fluid.layers.data(name='vt', shape=[1], dtype='float32')
32 # fc1
33 fc1 = fluid.layers.fc(
34 input=obs,
35 size=10,
36 act="tanh" # tanh activation
37 )
38 # fc2
39 self.all_act_prob = fluid.layers.fc(input=fc1,
40 size=self.n_actions,
41 act="softmax")
42 # to maximize total reward (log_p * R) is to minimize -(log_p * R)
43 neg_log_prob = fluid.layers.cross_entropy(
44 input=self.all_act_prob,
45 label=acts) # this is negative log of chosen action
46 neg_log_prob_weight = fluid.layers.elementwise_mul(x=neg_log_prob, y=vt)
47 loss = fluid.layers.reduce_mean(
48 x=neg_log_prob_weight) # reward guided loss
49
50 sgd_optimizer = fluid.optimizer.SGD(self.lr)
51 sgd_optimizer.minimize(loss)
52 self.exe.run(fluid.default_startup_program())
53
54 def choose_action(self, observation):
55 prob_weights = self.exe.run(
56 fluid.default_main_program().prune(self.all_act_prob),
57 feed={"obs": observation[np.newaxis, :]},
58 fetch_list=[self.all_act_prob])
59 prob_weights = np.array(prob_weights[0])
60 action = np.random.choice(
61 range(prob_weights.shape[1]),
62 p=prob_weights.ravel()) # select action w.r.t the actions prob
63 return action
64
65 def store_transition(self, s, a, r):
66 self.ep_obs.append(s)
67 self.ep_as.append(a)
68 self.ep_rs.append(r)
69
70 def learn(self):
71 # discount and normalize episode reward
72 discounted_ep_rs_norm = self._discount_and_norm_rewards()
73 tensor_obs = np.vstack(self.ep_obs).astype("float32")
74 tensor_as = np.array(self.ep_as).astype("int64")
75 tensor_as = tensor_as.reshape([tensor_as.shape[0], 1])
76 tensor_vt = discounted_ep_rs_norm.astype("float32")[:, np.newaxis]
77 # train on episode
78 self.exe.run(
79 fluid.default_main_program(),
80 feed={
81 "obs": tensor_obs, # shape=[None, n_obs]
82 "acts": tensor_as, # shape=[None, ]
83 "vt": tensor_vt # shape=[None, ]
84 })
85 self.ep_obs, self.ep_as, self.ep_rs = [], [], [] # empty episode data
86 return discounted_ep_rs_norm
87
88 def _discount_and_norm_rewards(self):
89 # discount episode rewards
90 discounted_ep_rs = np.zeros_like(self.ep_rs)
91 running_add = 0
92 for t in reversed(range(0, len(self.ep_rs))):
93 running_add = running_add * self.gamma + self.ep_rs[t]
94 discounted_ep_rs[t] = running_add
95
96 # normalize episode rewards
97 discounted_ep_rs -= np.mean(discounted_ep_rs)
98 discounted_ep_rs /= np.std(discounted_ep_rs)
99 return discounted_ep_rs
100
[end of fluid/policy_gradient/brain.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/fluid/policy_gradient/brain.py b/fluid/policy_gradient/brain.py
--- a/fluid/policy_gradient/brain.py
+++ b/fluid/policy_gradient/brain.py
@@ -45,7 +45,7 @@
label=acts) # this is negative log of chosen action
neg_log_prob_weight = fluid.layers.elementwise_mul(x=neg_log_prob, y=vt)
loss = fluid.layers.reduce_mean(
- x=neg_log_prob_weight) # reward guided loss
+ neg_log_prob_weight) # reward guided loss
sgd_optimizer = fluid.optimizer.SGD(self.lr)
sgd_optimizer.minimize(loss)
| {"golden_diff": "diff --git a/fluid/policy_gradient/brain.py b/fluid/policy_gradient/brain.py\n--- a/fluid/policy_gradient/brain.py\n+++ b/fluid/policy_gradient/brain.py\n@@ -45,7 +45,7 @@\n label=acts) # this is negative log of chosen action\n neg_log_prob_weight = fluid.layers.elementwise_mul(x=neg_log_prob, y=vt)\n loss = fluid.layers.reduce_mean(\n- x=neg_log_prob_weight) # reward guided loss\n+ neg_log_prob_weight) # reward guided loss\n \n sgd_optimizer = fluid.optimizer.SGD(self.lr)\n sgd_optimizer.minimize(loss)\n", "issue": "policy_gradient \u539f\u7406\u4ecb\u7ecd\u90e8\u5206\u5185\u5bb9\u683c\u5f0f\u5b58\u5728\u95ee\u9898\nhttps://github.com/PaddlePaddle/models/tree/develop/fluid/policy_gradient \r\npolicy_gradient demo\u4ecb\u7ecd\u90e8\u5206\uff0c\u770b\u8d77\u6765\u683c\u5f0f\u5b58\u5728\u95ee\u9898\uff0c\u80fd\u8f9b\u82e6\u8c03\u6574\u4e0b\u5417\uff1f\u6216\u8005\u4ee5\u4ec0\u4e48\u6837\u7684\u65b9\u5f0f\u53ef\u4ee5\u770b\u5230\u539f\u59cb\u7684\u6587\u6863\u5462\uff1f @wanghaoshuang @lcy-seso \n", "before_files": [{"content": "import numpy as np\nimport paddle.v2 as paddle\nimport paddle.fluid as fluid\n# reproducible\nnp.random.seed(1)\n\n\nclass PolicyGradient:\n def __init__(\n self,\n n_actions,\n n_features,\n learning_rate=0.01,\n reward_decay=0.95,\n output_graph=False, ):\n self.n_actions = n_actions\n self.n_features = n_features\n self.lr = learning_rate\n self.gamma = reward_decay\n\n self.ep_obs, self.ep_as, self.ep_rs = [], [], []\n\n self.place = fluid.CPUPlace()\n self.exe = fluid.Executor(self.place)\n\n def build_net(self):\n\n obs = fluid.layers.data(\n name='obs', shape=[self.n_features], dtype='float32')\n acts = fluid.layers.data(name='acts', shape=[1], dtype='int64')\n vt = fluid.layers.data(name='vt', shape=[1], dtype='float32')\n # fc1\n fc1 = fluid.layers.fc(\n input=obs,\n size=10,\n act=\"tanh\" # tanh activation\n )\n # fc2\n self.all_act_prob = fluid.layers.fc(input=fc1,\n size=self.n_actions,\n act=\"softmax\")\n # to maximize total reward (log_p * R) is to minimize -(log_p * R)\n neg_log_prob = fluid.layers.cross_entropy(\n input=self.all_act_prob,\n label=acts) # this is negative log of chosen action\n neg_log_prob_weight = fluid.layers.elementwise_mul(x=neg_log_prob, y=vt)\n loss = fluid.layers.reduce_mean(\n x=neg_log_prob_weight) # reward guided loss\n\n sgd_optimizer = fluid.optimizer.SGD(self.lr)\n sgd_optimizer.minimize(loss)\n self.exe.run(fluid.default_startup_program())\n\n def choose_action(self, observation):\n prob_weights = self.exe.run(\n fluid.default_main_program().prune(self.all_act_prob),\n feed={\"obs\": observation[np.newaxis, :]},\n fetch_list=[self.all_act_prob])\n prob_weights = np.array(prob_weights[0])\n action = np.random.choice(\n range(prob_weights.shape[1]),\n p=prob_weights.ravel()) # select action w.r.t the actions prob\n return action\n\n def store_transition(self, s, a, r):\n self.ep_obs.append(s)\n self.ep_as.append(a)\n self.ep_rs.append(r)\n\n def learn(self):\n # discount and normalize episode reward\n discounted_ep_rs_norm = self._discount_and_norm_rewards()\n tensor_obs = np.vstack(self.ep_obs).astype(\"float32\")\n tensor_as = np.array(self.ep_as).astype(\"int64\")\n tensor_as = tensor_as.reshape([tensor_as.shape[0], 1])\n tensor_vt = discounted_ep_rs_norm.astype(\"float32\")[:, np.newaxis]\n # train on episode\n self.exe.run(\n fluid.default_main_program(),\n feed={\n \"obs\": tensor_obs, # shape=[None, n_obs]\n \"acts\": tensor_as, # shape=[None, ]\n \"vt\": tensor_vt # shape=[None, ]\n })\n self.ep_obs, self.ep_as, self.ep_rs = [], [], [] # empty episode data\n return discounted_ep_rs_norm\n\n def _discount_and_norm_rewards(self):\n # discount episode rewards\n discounted_ep_rs = np.zeros_like(self.ep_rs)\n running_add = 0\n for t in reversed(range(0, len(self.ep_rs))):\n running_add = running_add * self.gamma + self.ep_rs[t]\n discounted_ep_rs[t] = running_add\n\n # normalize episode rewards\n discounted_ep_rs -= np.mean(discounted_ep_rs)\n discounted_ep_rs /= np.std(discounted_ep_rs)\n return discounted_ep_rs\n", "path": "fluid/policy_gradient/brain.py"}]} | 1,643 | 149 |
gh_patches_debug_22549 | rasdani/github-patches | git_diff | psf__black-3543 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GitHub Action: Use action version as default Black version, instead of latest
> I'm alright with making the default Black version tied to the action version being used. For context `version` was introduced because the action didn't exist for a long time so tying black version to action version wouldn't work for version 19.10b0 for example. In hidesight, having the default being the action version keeping the `version` configuration option around as an escape hatch is the better solution. This will involve some complexity since commit SHAs aren't supported by the version code (but are by GHA) but there might be some pre-existing logic in scripts/diff_shades_gha_helper.py we could reuse.
_Originally posted by @ichard26 in https://github.com/psf/black/issues/1140#issuecomment-1026379455_
</issue>
<code>
[start of action/main.py]
1 import os
2 import shlex
3 import sys
4 from pathlib import Path
5 from subprocess import PIPE, STDOUT, run
6
7 ACTION_PATH = Path(os.environ["GITHUB_ACTION_PATH"])
8 ENV_PATH = ACTION_PATH / ".black-env"
9 ENV_BIN = ENV_PATH / ("Scripts" if sys.platform == "win32" else "bin")
10 OPTIONS = os.getenv("INPUT_OPTIONS", default="")
11 SRC = os.getenv("INPUT_SRC", default="")
12 JUPYTER = os.getenv("INPUT_JUPYTER") == "true"
13 BLACK_ARGS = os.getenv("INPUT_BLACK_ARGS", default="")
14 VERSION = os.getenv("INPUT_VERSION", default="")
15
16 run([sys.executable, "-m", "venv", str(ENV_PATH)], check=True)
17
18 version_specifier = VERSION
19 if VERSION and VERSION[0] in "0123456789":
20 version_specifier = f"=={VERSION}"
21 if JUPYTER:
22 extra_deps = "[colorama,jupyter]"
23 else:
24 extra_deps = "[colorama]"
25 req = f"black{extra_deps}{version_specifier}"
26 pip_proc = run(
27 [str(ENV_BIN / "python"), "-m", "pip", "install", req],
28 stdout=PIPE,
29 stderr=STDOUT,
30 encoding="utf-8",
31 )
32 if pip_proc.returncode:
33 print(pip_proc.stdout)
34 print("::error::Failed to install Black.", flush=True)
35 sys.exit(pip_proc.returncode)
36
37
38 base_cmd = [str(ENV_BIN / "black")]
39 if BLACK_ARGS:
40 # TODO: remove after a while since this is deprecated in favour of SRC + OPTIONS.
41 proc = run([*base_cmd, *shlex.split(BLACK_ARGS)])
42 else:
43 proc = run([*base_cmd, *shlex.split(OPTIONS), *shlex.split(SRC)])
44
45 sys.exit(proc.returncode)
46
[end of action/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/action/main.py b/action/main.py
--- a/action/main.py
+++ b/action/main.py
@@ -22,12 +22,34 @@
extra_deps = "[colorama,jupyter]"
else:
extra_deps = "[colorama]"
-req = f"black{extra_deps}{version_specifier}"
+if version_specifier:
+ req = f"black{extra_deps}{version_specifier}"
+else:
+ describe_name = ""
+ with open(ACTION_PATH / ".git_archival.txt", encoding="utf-8") as fp:
+ for line in fp:
+ if line.startswith("describe-name: "):
+ describe_name = line[len("describe-name: ") :].rstrip()
+ break
+ if not describe_name:
+ print("::error::Failed to detect action version.", flush=True)
+ sys.exit(1)
+ # expected format is one of:
+ # - 23.1.0
+ # - 23.1.0-51-g448bba7
+ if describe_name.count("-") < 2:
+ # the action's commit matches a tag exactly, install exact version from PyPI
+ req = f"black{extra_deps}=={describe_name}"
+ else:
+ # the action's commit does not match any tag, install from the local git repo
+ req = f".{extra_deps}"
+print(f"Installing {req}...", flush=True)
pip_proc = run(
[str(ENV_BIN / "python"), "-m", "pip", "install", req],
stdout=PIPE,
stderr=STDOUT,
encoding="utf-8",
+ cwd=ACTION_PATH,
)
if pip_proc.returncode:
print(pip_proc.stdout)
| {"golden_diff": "diff --git a/action/main.py b/action/main.py\n--- a/action/main.py\n+++ b/action/main.py\n@@ -22,12 +22,34 @@\n extra_deps = \"[colorama,jupyter]\"\n else:\n extra_deps = \"[colorama]\"\n-req = f\"black{extra_deps}{version_specifier}\"\n+if version_specifier:\n+ req = f\"black{extra_deps}{version_specifier}\"\n+else:\n+ describe_name = \"\"\n+ with open(ACTION_PATH / \".git_archival.txt\", encoding=\"utf-8\") as fp:\n+ for line in fp:\n+ if line.startswith(\"describe-name: \"):\n+ describe_name = line[len(\"describe-name: \") :].rstrip()\n+ break\n+ if not describe_name:\n+ print(\"::error::Failed to detect action version.\", flush=True)\n+ sys.exit(1)\n+ # expected format is one of:\n+ # - 23.1.0\n+ # - 23.1.0-51-g448bba7\n+ if describe_name.count(\"-\") < 2:\n+ # the action's commit matches a tag exactly, install exact version from PyPI\n+ req = f\"black{extra_deps}=={describe_name}\"\n+ else:\n+ # the action's commit does not match any tag, install from the local git repo\n+ req = f\".{extra_deps}\"\n+print(f\"Installing {req}...\", flush=True)\n pip_proc = run(\n [str(ENV_BIN / \"python\"), \"-m\", \"pip\", \"install\", req],\n stdout=PIPE,\n stderr=STDOUT,\n encoding=\"utf-8\",\n+ cwd=ACTION_PATH,\n )\n if pip_proc.returncode:\n print(pip_proc.stdout)\n", "issue": "GitHub Action: Use action version as default Black version, instead of latest\n> I'm alright with making the default Black version tied to the action version being used. For context `version` was introduced because the action didn't exist for a long time so tying black version to action version wouldn't work for version 19.10b0 for example. In hidesight, having the default being the action version keeping the `version` configuration option around as an escape hatch is the better solution. This will involve some complexity since commit SHAs aren't supported by the version code (but are by GHA) but there might be some pre-existing logic in scripts/diff_shades_gha_helper.py we could reuse.\r\n\r\n_Originally posted by @ichard26 in https://github.com/psf/black/issues/1140#issuecomment-1026379455_\r\n \n", "before_files": [{"content": "import os\nimport shlex\nimport sys\nfrom pathlib import Path\nfrom subprocess import PIPE, STDOUT, run\n\nACTION_PATH = Path(os.environ[\"GITHUB_ACTION_PATH\"])\nENV_PATH = ACTION_PATH / \".black-env\"\nENV_BIN = ENV_PATH / (\"Scripts\" if sys.platform == \"win32\" else \"bin\")\nOPTIONS = os.getenv(\"INPUT_OPTIONS\", default=\"\")\nSRC = os.getenv(\"INPUT_SRC\", default=\"\")\nJUPYTER = os.getenv(\"INPUT_JUPYTER\") == \"true\"\nBLACK_ARGS = os.getenv(\"INPUT_BLACK_ARGS\", default=\"\")\nVERSION = os.getenv(\"INPUT_VERSION\", default=\"\")\n\nrun([sys.executable, \"-m\", \"venv\", str(ENV_PATH)], check=True)\n\nversion_specifier = VERSION\nif VERSION and VERSION[0] in \"0123456789\":\n version_specifier = f\"=={VERSION}\"\nif JUPYTER:\n extra_deps = \"[colorama,jupyter]\"\nelse:\n extra_deps = \"[colorama]\"\nreq = f\"black{extra_deps}{version_specifier}\"\npip_proc = run(\n [str(ENV_BIN / \"python\"), \"-m\", \"pip\", \"install\", req],\n stdout=PIPE,\n stderr=STDOUT,\n encoding=\"utf-8\",\n)\nif pip_proc.returncode:\n print(pip_proc.stdout)\n print(\"::error::Failed to install Black.\", flush=True)\n sys.exit(pip_proc.returncode)\n\n\nbase_cmd = [str(ENV_BIN / \"black\")]\nif BLACK_ARGS:\n # TODO: remove after a while since this is deprecated in favour of SRC + OPTIONS.\n proc = run([*base_cmd, *shlex.split(BLACK_ARGS)])\nelse:\n proc = run([*base_cmd, *shlex.split(OPTIONS), *shlex.split(SRC)])\n\nsys.exit(proc.returncode)\n", "path": "action/main.py"}]} | 1,198 | 394 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.