problem_id (stringlengths 18-22) | source (stringclasses 1, value) | task_type (stringclasses 1, value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.71k-18.9k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 465-23.6k) | num_tokens_prompt (int64, 556-4.1k) | num_tokens_diff (int64, 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_1184 | rasdani/github-patches | git_diff | voxel51__fiftyone-3297 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] fiftyone forces starlette=0.16.0 and it breaks integrations with applications that use FastAPI in newer versions.
### Instructions
Thank you for submitting an issue. Please refer to our
[issue policy](https://www.github.com/voxel51/fiftyone/blob/develop/ISSUE_POLICY.md)
for information on what types of issues we address.
**Please fill in this template to ensure a timely and thorough response.**
- Place an "x" between the brackets next to an option if it applies. Example:
- [x] Selected option
- Please delete this section (all content above this line) before submitting
the issue
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: linux 18.04
- **FiftyOne installed from (pip or source)**:pip
- **FiftyOne version (run `fiftyone --version`)**:0.17.2
- **Python version**:3.8
### Commands to reproduce
As thoroughly as possible, please provide the Python and/or shell commands used
to encounter the issue. Application steps can be described in the next section.
requriements.txt:
fastapi==0.79.0
fiftyone==0.17.2
```
pip install -r requirments.txt
```
### Describe the problem
fiftyone cannot be used with the newer versions of fastapi, because it forces starlette to be in the version starlette=0.16.0
Is it possible to add a condition like: starlette>=0.16.0. In this way it would not break apps that use fiftyone
### Code to reproduce issue
fastapi==0.79.0
fiftyone==0.17.2
pip install -r requirments.txt
### Other info / logs
es.
#0 388.1
#0 388.1 The conflict is caused by:
#0 388.1 bentoml 1.0.4 depends on starlette
#0 388.1 fastapi 0.79.0 depends on starlette==0.19.1
#0 388.1 fiftyone 0.17.2 depends on starlette==0.16.0
#0 388.1
#0 388.1 To fix this you could try to:
#0 388.1 1. loosen the range of package versions you've specified
#0 388.1 2. remove package versions to allow pip attempt to solve the dependency conflict
#0 388.1
#0 388.1 ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
#0 388.1 WARNING: You are using pip version 22.0.4; however, version 22.3 is available.
#0 388.1 You should consider upgrading via the '/usr/bin/python -m pip install --upgrade pip' command.
------
### What areas of FiftyOne does this bug affect?
- [X ] `App`: FiftyOne application issue
- [ ] `Core`: Core `fiftyone` Python library issue
- [ ] `Server`: Fiftyone server issue
### Willingness to contribute
The FiftyOne Community encourages bug fix contributions. Would you or another
member of your organization be willing to contribute a fix for this bug to the
FiftyOne codebase?
- [ X] Yes. I can contribute a fix for this bug independently.
- [ ] Yes. I would be willing to contribute a fix for this bug with guidance
from the FiftyOne community.
- [ ] No. I cannot contribute a bug fix at this time.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 """
3 Installs FiftyOne.
4
5 | Copyright 2017-2023, Voxel51, Inc.
6 | `voxel51.com <https://voxel51.com/>`_
7 |
8 """
9 try:
10 from importlib import metadata
11 except ImportError:
12 import importlib_metadata as metadata
13
14 import os
15 import re
16 from setuptools import setup, find_packages
17
18
19 VERSION = "0.21.3"
20
21
22 def get_version():
23 if "RELEASE_VERSION" in os.environ:
24 version = os.environ["RELEASE_VERSION"]
25 if not version.startswith(VERSION):
26 raise ValueError(
27 "Release version does not match version: %s and %s"
28 % (version, VERSION)
29 )
30 return version
31
32 return VERSION
33
34
35 INSTALL_REQUIRES = [
36 # third-party packages
37 "aiofiles",
38 "argcomplete",
39 "boto3",
40 "cachetools",
41 "dacite>=1.6.0,<1.8.0",
42 "Deprecated",
43 "eventlet",
44 "ftfy",
45 "future",
46 "hypercorn>=0.13.2",
47 "importlib-metadata; python_version<'3.8'",
48 "Jinja2>=3",
49 "kaleido",
50 "matplotlib",
51 "mongoengine==0.24.2",
52 "motor>=2.5",
53 "numpy",
54 "packaging",
55 "pandas",
56 "Pillow>=6.2",
57 "plotly>=4.14",
58 "pprintpp",
59 "psutil",
60 "pymongo>=3.12",
61 "pytz",
62 "PyYAML",
63 "regex",
64 "retrying",
65 "scikit-learn",
66 "scikit-image",
67 "setuptools",
68 "sseclient-py>=1.7.2,<2",
69 "sse-starlette>=0.10.3,<1",
70 "starlette>=0.24.0,<0.27",
71 "strawberry-graphql==0.138.1",
72 "tabulate",
73 "xmltodict",
74 "universal-analytics-python3>=1.0.1,<2",
75 # internal packages
76 "fiftyone-brain>=0.13,<0.14",
77 "fiftyone-db>=0.4,<0.5",
78 "voxel51-eta>=0.10,<0.11",
79 ]
80
81
82 CHOOSE_INSTALL_REQUIRES = [
83 (
84 (
85 "opencv-python",
86 "opencv-contrib-python",
87 "opencv-contrib-python-headless",
88 ),
89 "opencv-python-headless",
90 )
91 ]
92
93
94 def choose_requirement(mains, secondary):
95 chosen = secondary
96 for main in mains:
97 try:
98 name = re.split(r"[!<>=]", main)[0]
99 metadata.version(name)
100 chosen = main
101 break
102 except metadata.PackageNotFoundError:
103 pass
104
105 return str(chosen)
106
107
108 def get_install_requirements(install_requires, choose_install_requires):
109 for mains, secondary in choose_install_requires:
110 install_requires.append(choose_requirement(mains, secondary))
111
112 return install_requires
113
114
115 EXTRAS_REQUIREMENTS = {"desktop": ["fiftyone-desktop>=0.28.2,<0.29"]}
116
117
118 with open("README.md", "r") as fh:
119 long_description = fh.read()
120
121
122 setup(
123 name="fiftyone",
124 version=get_version(),
125 description=(
126 "FiftyOne: the open-source tool for building high-quality datasets "
127 "and computer vision models"
128 ),
129 author="Voxel51, Inc.",
130 author_email="[email protected]",
131 url="https://github.com/voxel51/fiftyone",
132 extras_require=EXTRAS_REQUIREMENTS,
133 license="Apache",
134 long_description=long_description,
135 long_description_content_type="text/markdown",
136 packages=find_packages(
137 exclude=["app", "eta", "package", "requirements", "tests", "tools"]
138 )
139 + ["fiftyone.recipes", "fiftyone.tutorials"],
140 package_dir={
141 "fiftyone.recipes": "docs/source/recipes",
142 "fiftyone.tutorials": "docs/source/tutorials",
143 },
144 install_requires=get_install_requirements(
145 INSTALL_REQUIRES, CHOOSE_INSTALL_REQUIRES
146 ),
147 include_package_data=True,
148 classifiers=[
149 "Development Status :: 4 - Beta",
150 "Intended Audience :: Developers",
151 "Intended Audience :: Science/Research",
152 "License :: OSI Approved :: Apache Software License",
153 "Topic :: Scientific/Engineering :: Artificial Intelligence",
154 "Topic :: Scientific/Engineering :: Image Processing",
155 "Topic :: Scientific/Engineering :: Image Recognition",
156 "Topic :: Scientific/Engineering :: Information Analysis",
157 "Topic :: Scientific/Engineering :: Visualization",
158 "Operating System :: MacOS :: MacOS X",
159 "Operating System :: POSIX :: Linux",
160 "Operating System :: Microsoft :: Windows",
161 "Programming Language :: Python :: 3",
162 "Programming Language :: Python :: 3.7",
163 "Programming Language :: Python :: 3.8",
164 "Programming Language :: Python :: 3.9",
165 "Programming Language :: Python :: 3.10",
166 ],
167 entry_points={"console_scripts": ["fiftyone=fiftyone.core.cli:main"]},
168 python_requires=">=3.7",
169 )
170
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -67,7 +67,7 @@
"setuptools",
"sseclient-py>=1.7.2,<2",
"sse-starlette>=0.10.3,<1",
- "starlette>=0.24.0,<0.27",
+ "starlette>=0.24.0",
"strawberry-graphql==0.138.1",
"tabulate",
"xmltodict",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -67,7 +67,7 @@\n \"setuptools\",\n \"sseclient-py>=1.7.2,<2\",\n \"sse-starlette>=0.10.3,<1\",\n- \"starlette>=0.24.0,<0.27\",\n+ \"starlette>=0.24.0\",\n \"strawberry-graphql==0.138.1\",\n \"tabulate\",\n \"xmltodict\",\n", "issue": "[BUG] fiftyone forces starlette=0.16.0 and it breaks integrations with applications that use FastAPI in newer versions.\n### Instructions\r\n\r\nThank you for submitting an issue. Please refer to our\r\n[issue policy](https://www.github.com/voxel51/fiftyone/blob/develop/ISSUE_POLICY.md)\r\nfor information on what types of issues we address.\r\n\r\n**Please fill in this template to ensure a timely and thorough response.**\r\n\r\n- Place an \"x\" between the brackets next to an option if it applies. Example:\r\n - [x] Selected option\r\n- Please delete this section (all content above this line) before submitting\r\n the issue\r\n\r\n### System information\r\n\r\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: linux 18.04\r\n- **FiftyOne installed from (pip or source)**:pip\r\n- **FiftyOne version (run `fiftyone --version`)**:0.17.2\r\n- **Python version**:3.8\r\n\r\n### Commands to reproduce\r\n\r\nAs thoroughly as possible, please provide the Python and/or shell commands used\r\nto encounter the issue. Application steps can be described in the next section.\r\nrequriements.txt:\r\nfastapi==0.79.0\r\nfiftyone==0.17.2\r\n\r\n```\r\npip install -r requirments.txt\r\n```\r\n\r\n### Describe the problem\r\n\r\nfiftyone cannot be used with the newer versions of fastapi, because it forces starlette to be in the version starlette=0.16.0\r\nIs it possible to add a condition like: starlette>=0.16.0. In this way it would not break apps that use fiftyone\r\n\r\n### Code to reproduce issue\r\nfastapi==0.79.0\r\nfiftyone==0.17.2\r\n\r\npip install -r requirments.txt\r\n\r\n### Other info / logs\r\n\r\nes.\r\n#0 388.1 \r\n#0 388.1 The conflict is caused by:\r\n#0 388.1 bentoml 1.0.4 depends on starlette\r\n#0 388.1 fastapi 0.79.0 depends on starlette==0.19.1\r\n#0 388.1 fiftyone 0.17.2 depends on starlette==0.16.0\r\n#0 388.1 \r\n#0 388.1 To fix this you could try to:\r\n#0 388.1 1. loosen the range of package versions you've specified\r\n#0 388.1 2. remove package versions to allow pip attempt to solve the dependency conflict\r\n#0 388.1 \r\n#0 388.1 ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts\r\n#0 388.1 WARNING: You are using pip version 22.0.4; however, version 22.3 is available.\r\n#0 388.1 You should consider upgrading via the '/usr/bin/python -m pip install --upgrade pip' command.\r\n------\r\n\r\n### What areas of FiftyOne does this bug affect?\r\n\r\n- [X ] `App`: FiftyOne application issue\r\n- [ ] `Core`: Core `fiftyone` Python library issue\r\n- [ ] `Server`: Fiftyone server issue\r\n\r\n### Willingness to contribute\r\n\r\nThe FiftyOne Community encourages bug fix contributions. Would you or another\r\nmember of your organization be willing to contribute a fix for this bug to the\r\nFiftyOne codebase?\r\n\r\n- [ X] Yes. I can contribute a fix for this bug independently.\r\n- [ ] Yes. I would be willing to contribute a fix for this bug with guidance\r\n from the FiftyOne community.\r\n- [ ] No. 
I cannot contribute a bug fix at this time.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"\nInstalls FiftyOne.\n\n| Copyright 2017-2023, Voxel51, Inc.\n| `voxel51.com <https://voxel51.com/>`_\n|\n\"\"\"\ntry:\n from importlib import metadata\nexcept ImportError:\n import importlib_metadata as metadata\n\nimport os\nimport re\nfrom setuptools import setup, find_packages\n\n\nVERSION = \"0.21.3\"\n\n\ndef get_version():\n if \"RELEASE_VERSION\" in os.environ:\n version = os.environ[\"RELEASE_VERSION\"]\n if not version.startswith(VERSION):\n raise ValueError(\n \"Release version does not match version: %s and %s\"\n % (version, VERSION)\n )\n return version\n\n return VERSION\n\n\nINSTALL_REQUIRES = [\n # third-party packages\n \"aiofiles\",\n \"argcomplete\",\n \"boto3\",\n \"cachetools\",\n \"dacite>=1.6.0,<1.8.0\",\n \"Deprecated\",\n \"eventlet\",\n \"ftfy\",\n \"future\",\n \"hypercorn>=0.13.2\",\n \"importlib-metadata; python_version<'3.8'\",\n \"Jinja2>=3\",\n \"kaleido\",\n \"matplotlib\",\n \"mongoengine==0.24.2\",\n \"motor>=2.5\",\n \"numpy\",\n \"packaging\",\n \"pandas\",\n \"Pillow>=6.2\",\n \"plotly>=4.14\",\n \"pprintpp\",\n \"psutil\",\n \"pymongo>=3.12\",\n \"pytz\",\n \"PyYAML\",\n \"regex\",\n \"retrying\",\n \"scikit-learn\",\n \"scikit-image\",\n \"setuptools\",\n \"sseclient-py>=1.7.2,<2\",\n \"sse-starlette>=0.10.3,<1\",\n \"starlette>=0.24.0,<0.27\",\n \"strawberry-graphql==0.138.1\",\n \"tabulate\",\n \"xmltodict\",\n \"universal-analytics-python3>=1.0.1,<2\",\n # internal packages\n \"fiftyone-brain>=0.13,<0.14\",\n \"fiftyone-db>=0.4,<0.5\",\n \"voxel51-eta>=0.10,<0.11\",\n]\n\n\nCHOOSE_INSTALL_REQUIRES = [\n (\n (\n \"opencv-python\",\n \"opencv-contrib-python\",\n \"opencv-contrib-python-headless\",\n ),\n \"opencv-python-headless\",\n )\n]\n\n\ndef choose_requirement(mains, secondary):\n chosen = secondary\n for main in mains:\n try:\n name = re.split(r\"[!<>=]\", main)[0]\n metadata.version(name)\n chosen = main\n break\n except metadata.PackageNotFoundError:\n pass\n\n return str(chosen)\n\n\ndef get_install_requirements(install_requires, choose_install_requires):\n for mains, secondary in choose_install_requires:\n install_requires.append(choose_requirement(mains, secondary))\n\n return install_requires\n\n\nEXTRAS_REQUIREMENTS = {\"desktop\": [\"fiftyone-desktop>=0.28.2,<0.29\"]}\n\n\nwith open(\"README.md\", \"r\") as fh:\n long_description = fh.read()\n\n\nsetup(\n name=\"fiftyone\",\n version=get_version(),\n description=(\n \"FiftyOne: the open-source tool for building high-quality datasets \"\n \"and computer vision models\"\n ),\n author=\"Voxel51, Inc.\",\n author_email=\"[email protected]\",\n url=\"https://github.com/voxel51/fiftyone\",\n extras_require=EXTRAS_REQUIREMENTS,\n license=\"Apache\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n packages=find_packages(\n exclude=[\"app\", \"eta\", \"package\", \"requirements\", \"tests\", \"tools\"]\n )\n + [\"fiftyone.recipes\", \"fiftyone.tutorials\"],\n package_dir={\n \"fiftyone.recipes\": \"docs/source/recipes\",\n \"fiftyone.tutorials\": \"docs/source/tutorials\",\n },\n install_requires=get_install_requirements(\n INSTALL_REQUIRES, CHOOSE_INSTALL_REQUIRES\n ),\n include_package_data=True,\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Topic :: Scientific/Engineering :: Artificial 
Intelligence\",\n \"Topic :: Scientific/Engineering :: Image Processing\",\n \"Topic :: Scientific/Engineering :: Image Recognition\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n \"Topic :: Scientific/Engineering :: Visualization\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n ],\n entry_points={\"console_scripts\": [\"fiftyone=fiftyone.core.cli:main\"]},\n python_requires=\">=3.7\",\n)\n", "path": "setup.py"}]} | 2,980 | 125 |
gh_patches_debug_22077 | rasdani/github-patches | git_diff | spack__spack-13605 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pybind11: fix get_include()
The `py-pybind11` package has a python module helper of the form:
```python
import pybind11 as py
print(py.get_include())
```
which helps downstream, if they are not using the CMake config package (which probably everyone is using), to query the install location.
`get_include()` currently points to the spec's Python include location instead of the package's include.
Diff for a new unit test:
```diff
diff --git a/var/spack/repos/builtin/packages/py-pybind11/package.py b/var/spack/repos/builtin/packages/py-pybind11/package.py
index cd482ceb3..0a86afb97 100644
--- a/var/spack/repos/builtin/packages/py-pybind11/package.py
+++ b/var/spack/repos/builtin/packages/py-pybind11/package.py
@@ -60,3 +60,17 @@ class PyPybind11(CMakePackage):
super(PyPybind11, self).install(spec, prefix)
setup_py('install', '--single-version-externally-managed', '--root=/',
'--prefix={0}'.format(prefix))
+
+ @run_after('install')
+ @on_package_attributes(run_tests=True)
+ def test(self):
+ with working_dir('spack-test', create=True):
+ # test include helper points to right location
+ module = "pybind11"
+ python = Executable(self.spec['python'].command.path)
+ python('-c', 'import {0}'.format(module))
+ inc = python('-c', 'import {0} as py; print(py.get_include())'.format(
+ module), output=str)
+ print("INC:")
+ print(inc)
+ assert inc == self.prefix.include
```
Similar to:
- https://github.com/conda-forge/pybind11-feedstock/pull/32
- https://github.com/pybind/pybind11/pull/1877
</issue>
<code>
[start of var/spack/repos/builtin/packages/py-pybind11/package.py]
1 # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack import *
7
8
9 class PyPybind11(CMakePackage):
10 """pybind11 -- Seamless operability between C++11 and Python.
11
12 pybind11 is a lightweight header-only library that exposes C++ types in
13 Python and vice versa, mainly to create Python bindings of existing C++
14 code. Its goals and syntax are similar to the excellent Boost.Python
15 library by David Abrahams: to minimize boilerplate code in traditional
16 extension modules by inferring type information using compile-time
17 introspection."""
18
19 homepage = "https://pybind11.readthedocs.io"
20 url = "https://github.com/pybind/pybind11/archive/v2.1.0.tar.gz"
21 git = "https://github.com/pybind/pybind11.git"
22
23 maintainers = ['ax3l']
24
25 version('master', branch='master')
26 version('2.4.3', sha256='1eed57bc6863190e35637290f97a20c81cfe4d9090ac0a24f3bbf08f265eb71d')
27 version('2.3.0', sha256='0f34838f2c8024a6765168227ba587b3687729ebf03dc912f88ff75c7aa9cfe8')
28 version('2.2.4', sha256='b69e83658513215b8d1443544d0549b7d231b9f201f6fc787a2b2218b408181e')
29 version('2.2.3', sha256='3a3b7b651afab1c5ba557f4c37d785a522b8030dfc765da26adc2ecd1de940ea')
30 version('2.2.2', sha256='b639a2b2cbf1c467849660801c4665ffc1a4d0a9e153ae1996ed6f21c492064e')
31 version('2.2.1', sha256='f8bd1509578b2a1e7407d52e6ee8afe64268909a1bbda620ca407318598927e7')
32 version('2.2.0', sha256='1b0fda17c650c493f5862902e90f426df6751da8c0b58c05983ab009951ed769')
33 version('2.1.1', sha256='f2c6874f1ea5b4ad4ffffe352413f7d2cd1a49f9050940805c2a082348621540')
34 version('2.1.0', sha256='2860f2b8d0c9f65f0698289a161385f59d099b7ead1bf64e8993c486f2b93ee0')
35
36 depends_on('py-pytest', type='test')
37 depends_on('py-setuptools', type='build')
38
39 extends('python')
40
41 # compiler support
42 conflicts('%gcc@:4.7')
43 conflicts('%clang@:3.2')
44 conflicts('%intel@:16')
45
46 def cmake_args(self):
47 args = []
48 args.append('-DPYTHON_EXECUTABLE:FILEPATH=%s'
49 % self.spec['python'].command.path)
50 args += [
51 '-DPYBIND11_TEST:BOOL={0}'.format(
52 'ON' if self.run_tests else 'OFF')
53 ]
54 return args
55
56 def setup_build_environment(self, env):
57 env.set('PYBIND11_USE_CMAKE', 1)
58
59 def install(self, spec, prefix):
60 super(PyPybind11, self).install(spec, prefix)
61 setup_py('install', '--single-version-externally-managed', '--root=/',
62 '--prefix={0}'.format(prefix))
63
[end of var/spack/repos/builtin/packages/py-pybind11/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/var/spack/repos/builtin/packages/py-pybind11/package.py b/var/spack/repos/builtin/packages/py-pybind11/package.py
--- a/var/spack/repos/builtin/packages/py-pybind11/package.py
+++ b/var/spack/repos/builtin/packages/py-pybind11/package.py
@@ -56,7 +56,27 @@
def setup_build_environment(self, env):
env.set('PYBIND11_USE_CMAKE', 1)
+ def patch(self):
+ """ see https://github.com/spack/spack/issues/13559 """
+ filter_file('import sys',
+ 'import sys; return "{0}"'.format(self.prefix.include),
+ 'pybind11/__init__.py',
+ string=True)
+
def install(self, spec, prefix):
super(PyPybind11, self).install(spec, prefix)
setup_py('install', '--single-version-externally-managed', '--root=/',
'--prefix={0}'.format(prefix))
+
+ @run_after('install')
+ @on_package_attributes(run_tests=True)
+ def test(self):
+ with working_dir('spack-test', create=True):
+ # test include helper points to right location
+ python = self.spec['python'].command
+ inc = python(
+ '-c',
+ 'import pybind11 as py; ' +
+ self.spec['python'].package.print_string('py.get_include()'),
+ output=str)
+ assert inc.strip() == str(self.prefix.include)
| {"golden_diff": "diff --git a/var/spack/repos/builtin/packages/py-pybind11/package.py b/var/spack/repos/builtin/packages/py-pybind11/package.py\n--- a/var/spack/repos/builtin/packages/py-pybind11/package.py\n+++ b/var/spack/repos/builtin/packages/py-pybind11/package.py\n@@ -56,7 +56,27 @@\n def setup_build_environment(self, env):\n env.set('PYBIND11_USE_CMAKE', 1)\n \n+ def patch(self):\n+ \"\"\" see https://github.com/spack/spack/issues/13559 \"\"\"\n+ filter_file('import sys',\n+ 'import sys; return \"{0}\"'.format(self.prefix.include),\n+ 'pybind11/__init__.py',\n+ string=True)\n+\n def install(self, spec, prefix):\n super(PyPybind11, self).install(spec, prefix)\n setup_py('install', '--single-version-externally-managed', '--root=/',\n '--prefix={0}'.format(prefix))\n+\n+ @run_after('install')\n+ @on_package_attributes(run_tests=True)\n+ def test(self):\n+ with working_dir('spack-test', create=True):\n+ # test include helper points to right location\n+ python = self.spec['python'].command\n+ inc = python(\n+ '-c',\n+ 'import pybind11 as py; ' +\n+ self.spec['python'].package.print_string('py.get_include()'),\n+ output=str)\n+ assert inc.strip() == str(self.prefix.include)\n", "issue": "pybind11: fix get_include()\nThe `py-pybind11` package has a python module helper of the form:\r\n\r\n```python\r\nimport pybind11 as py\r\nprint(py.get_include())\r\n```\r\n\r\nwhich helps downstream, if they are not using the CMake config package (which probably everyone is using), to query the install location.\r\n\r\n`get_include()` currently points to the spec's Python include location instead of the package's include.\r\n\r\nDiff for a new unit test:\r\n```diff\r\ndiff --git a/var/spack/repos/builtin/packages/py-pybind11/package.py b/var/spack/repos/builtin/packages/py-pybind11/package.py\r\nindex cd482ceb3..0a86afb97 100644\r\n--- a/var/spack/repos/builtin/packages/py-pybind11/package.py\r\n+++ b/var/spack/repos/builtin/packages/py-pybind11/package.py\r\n@@ -60,3 +60,17 @@ class PyPybind11(CMakePackage):\r\n super(PyPybind11, self).install(spec, prefix)\r\n setup_py('install', '--single-version-externally-managed', '--root=/',\r\n '--prefix={0}'.format(prefix))\r\n+\r\n+ @run_after('install')\r\n+ @on_package_attributes(run_tests=True)\r\n+ def test(self):\r\n+ with working_dir('spack-test', create=True):\r\n+ # test include helper points to right location\r\n+ module = \"pybind11\"\r\n+ python = Executable(self.spec['python'].command.path)\r\n+ python('-c', 'import {0}'.format(module))\r\n+ inc = python('-c', 'import {0} as py; print(py.get_include())'.format(\r\n+ module), output=str)\r\n+ print(\"INC:\")\r\n+ print(inc)\r\n+ assert inc == self.prefix.include\r\n```\r\n\r\nSimilar to:\r\n- https://github.com/conda-forge/pybind11-feedstock/pull/32\r\n- https://github.com/pybind/pybind11/pull/1877\n", "before_files": [{"content": "# Copyright 2013-2019 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass PyPybind11(CMakePackage):\n \"\"\"pybind11 -- Seamless operability between C++11 and Python.\n\n pybind11 is a lightweight header-only library that exposes C++ types in\n Python and vice versa, mainly to create Python bindings of existing C++\n code. 
Its goals and syntax are similar to the excellent Boost.Python\n library by David Abrahams: to minimize boilerplate code in traditional\n extension modules by inferring type information using compile-time\n introspection.\"\"\"\n\n homepage = \"https://pybind11.readthedocs.io\"\n url = \"https://github.com/pybind/pybind11/archive/v2.1.0.tar.gz\"\n git = \"https://github.com/pybind/pybind11.git\"\n\n maintainers = ['ax3l']\n\n version('master', branch='master')\n version('2.4.3', sha256='1eed57bc6863190e35637290f97a20c81cfe4d9090ac0a24f3bbf08f265eb71d')\n version('2.3.0', sha256='0f34838f2c8024a6765168227ba587b3687729ebf03dc912f88ff75c7aa9cfe8')\n version('2.2.4', sha256='b69e83658513215b8d1443544d0549b7d231b9f201f6fc787a2b2218b408181e')\n version('2.2.3', sha256='3a3b7b651afab1c5ba557f4c37d785a522b8030dfc765da26adc2ecd1de940ea')\n version('2.2.2', sha256='b639a2b2cbf1c467849660801c4665ffc1a4d0a9e153ae1996ed6f21c492064e')\n version('2.2.1', sha256='f8bd1509578b2a1e7407d52e6ee8afe64268909a1bbda620ca407318598927e7')\n version('2.2.0', sha256='1b0fda17c650c493f5862902e90f426df6751da8c0b58c05983ab009951ed769')\n version('2.1.1', sha256='f2c6874f1ea5b4ad4ffffe352413f7d2cd1a49f9050940805c2a082348621540')\n version('2.1.0', sha256='2860f2b8d0c9f65f0698289a161385f59d099b7ead1bf64e8993c486f2b93ee0')\n\n depends_on('py-pytest', type='test')\n depends_on('py-setuptools', type='build')\n\n extends('python')\n\n # compiler support\n conflicts('%gcc@:4.7')\n conflicts('%clang@:3.2')\n conflicts('%intel@:16')\n\n def cmake_args(self):\n args = []\n args.append('-DPYTHON_EXECUTABLE:FILEPATH=%s'\n % self.spec['python'].command.path)\n args += [\n '-DPYBIND11_TEST:BOOL={0}'.format(\n 'ON' if self.run_tests else 'OFF')\n ]\n return args\n\n def setup_build_environment(self, env):\n env.set('PYBIND11_USE_CMAKE', 1)\n\n def install(self, spec, prefix):\n super(PyPybind11, self).install(spec, prefix)\n setup_py('install', '--single-version-externally-managed', '--root=/',\n '--prefix={0}'.format(prefix))\n", "path": "var/spack/repos/builtin/packages/py-pybind11/package.py"}]} | 2,248 | 347 |
gh_patches_debug_25698 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-3378 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`ddtrace.opentracer.span.Span` method return types not compliant with `opentracing.span.Span`
I'll be happy to submit a PR for this--it should be a small fix--but wanted to make sure this isn't a known issue first.
### Which version of dd-trace-py are you using?
0.59.0
### Which version of pip are you using?
22.0.3
### Explanation
`opentracing.span.Span` defines a handful of methods that return `self` for the instance their invoked on, in order to support call chaining.
`ddtrace.opentracer.span.Span` does not implement this interface for `set_tag` or `set_operation_name`.
https://github.com/DataDog/dd-trace-py/blob/1a6bad3ec2cbfffab0f83ebf5b72fe786b76dbed/ddtrace/opentracer/span.py#L137-L138
https://github.com/DataDog/dd-trace-py/blob/1a6bad3ec2cbfffab0f83ebf5b72fe786b76dbed/ddtrace/opentracer/span.py#L98-L99
See the `opentracing` interfaces for [set_tag](https://github.com/opentracing/opentracing-python/blob/1d495f7b1c89e3577c0ce4911ed2962c8dcf1482/opentracing/span.py#L120-L139) and [set_operation_name](https://github.com/opentracing/opentracing-python/blob/1d495f7b1c89e3577c0ce4911ed2962c8dcf1482/opentracing/span.py#L95-L104).
</issue>
<code>
[start of ddtrace/opentracer/span.py]
1 import threading
2 from typing import Any
3 from typing import Dict
4 from typing import Optional
5 from typing import TYPE_CHECKING
6 from typing import Text
7 from typing import Union
8
9 from opentracing import Span as OpenTracingSpan
10 from opentracing.ext import tags as OTTags
11
12 from ddtrace.constants import ERROR_MSG
13 from ddtrace.constants import ERROR_STACK
14 from ddtrace.constants import ERROR_TYPE
15 from ddtrace.context import Context as DatadogContext
16 from ddtrace.internal.compat import NumericType
17 from ddtrace.span import Span as DatadogSpan
18
19 from .span_context import SpanContext
20 from .tags import Tags
21
22
23 if TYPE_CHECKING:
24 from .tracer import Tracer
25
26
27 _TagNameType = Union[Text, bytes]
28
29
30 class Span(OpenTracingSpan):
31 """Datadog implementation of :class:`opentracing.Span`"""
32
33 def __init__(self, tracer, context, operation_name):
34 # type: (Tracer, Optional[SpanContext], str) -> None
35 if context is not None:
36 context = SpanContext(ddcontext=context._dd_context, baggage=context.baggage)
37 else:
38 context = SpanContext()
39
40 super(Span, self).__init__(tracer, context)
41
42 self.finished = False
43 self._lock = threading.Lock()
44 # use a datadog span
45 self._dd_span = DatadogSpan(operation_name, context=context._dd_context)
46
47 def finish(self, finish_time=None):
48 # type: (Optional[float]) -> None
49 """Finish the span.
50
51 This calls finish on the ddspan.
52
53 :param finish_time: specify a custom finish time with a unix timestamp
54 per time.time()
55 :type timestamp: float
56 """
57 if self.finished:
58 return
59
60 # finish the datadog span
61 self._dd_span.finish(finish_time)
62 self.finished = True
63
64 def set_baggage_item(self, key, value):
65 # type: (str, Any) -> Span
66 """Sets a baggage item in the span context of this span.
67
68 Baggage is used to propagate state between spans.
69
70 :param key: baggage item key
71 :type key: str
72
73 :param value: baggage item value
74 :type value: a type that can be compat.stringify()'d
75
76 :rtype: Span
77 :return: itself for chaining calls
78 """
79 new_ctx = self.context.with_baggage_item(key, value)
80 with self._lock:
81 self._context = new_ctx
82 return self
83
84 def get_baggage_item(self, key):
85 # type: (str) -> Optional[str]
86 """Gets a baggage item from the span context of this span.
87
88 :param key: baggage item key
89 :type key: str
90
91 :rtype: str
92 :return: the baggage value for the given key or ``None``.
93 """
94 return self.context.get_baggage_item(key)
95
96 def set_operation_name(self, operation_name):
97 # type: (str) -> None
98 """Set the operation name."""
99 self._dd_span.name = operation_name
100
101 def log_kv(self, key_values, timestamp=None):
102 # type: (Dict[_TagNameType, Any], Optional[float]) -> Span
103 """Add a log record to this span.
104
105 Passes on relevant opentracing key values onto the datadog span.
106
107 :param key_values: a dict of string keys and values of any type
108 :type key_values: dict
109
110 :param timestamp: a unix timestamp per time.time()
111 :type timestamp: float
112
113 :return: the span itself, for call chaining
114 :rtype: Span
115 """
116
117 # match opentracing defined keys to datadog functionality
118 # opentracing/specification/blob/1be630515dafd4d2a468d083300900f89f28e24d/semantic_conventions.md#log-fields-table
119 for key, val in key_values.items():
120 if key == "event" and val == "error":
121 # TODO: not sure if it's actually necessary to set the error manually
122 self._dd_span.error = 1
123 self.set_tag("error", 1)
124 elif key == "error" or key == "error.object":
125 self.set_tag(ERROR_TYPE, val)
126 elif key == "message":
127 self.set_tag(ERROR_MSG, val)
128 elif key == "stack":
129 self.set_tag(ERROR_STACK, val)
130 else:
131 pass
132
133 return self
134
135 def set_tag(self, key, value):
136 # type: (_TagNameType, Any) -> None
137 """Set a tag on the span.
138
139 This sets the tag on the underlying datadog span.
140 """
141 if key == Tags.SPAN_TYPE:
142 self._dd_span.span_type = value
143 elif key == Tags.SERVICE_NAME:
144 self._dd_span.service = value
145 elif key == Tags.RESOURCE_NAME or key == OTTags.DATABASE_STATEMENT:
146 self._dd_span.resource = value
147 elif key == OTTags.PEER_HOSTNAME:
148 self._dd_span.set_tag(Tags.TARGET_HOST, value)
149 elif key == OTTags.PEER_PORT:
150 self._dd_span.set_tag(Tags.TARGET_PORT, value)
151 elif key == Tags.SAMPLING_PRIORITY:
152 self._dd_span.context.sampling_priority = value
153 else:
154 self._dd_span.set_tag(key, value)
155
156 def _get_tag(self, key):
157 # type: (_TagNameType) -> Optional[Text]
158 """Gets a tag from the span.
159
160 This method retrieves the tag from the underlying datadog span.
161 """
162 return self._dd_span.get_tag(key)
163
164 def _get_metric(self, key):
165 # type: (_TagNameType) -> Optional[NumericType]
166 """Gets a metric from the span.
167
168 This method retrieves the metric from the underlying datadog span.
169 """
170 return self._dd_span.get_metric(key)
171
172 def __enter__(self):
173 return self
174
175 def __exit__(self, exc_type, exc_val, exc_tb):
176 if exc_type:
177 self._dd_span.set_exc_info(exc_type, exc_val, exc_tb)
178
179 # note: self.finish() AND _dd_span.__exit__ will call _span.finish() but
180 # it is idempotent
181 self._dd_span.__exit__(exc_type, exc_val, exc_tb)
182 self.finish()
183
184 def _associate_dd_span(self, ddspan):
185 # type: (DatadogSpan) -> None
186 """Associates a DD span with this span."""
187 # get the datadog span context
188 self._dd_span = ddspan
189 self.context._dd_context = ddspan.context
190
191 @property
192 def _dd_context(self):
193 # type: () -> DatadogContext
194 return self._dd_span.context
195
[end of ddtrace/opentracer/span.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ddtrace/opentracer/span.py b/ddtrace/opentracer/span.py
--- a/ddtrace/opentracer/span.py
+++ b/ddtrace/opentracer/span.py
@@ -94,9 +94,10 @@
return self.context.get_baggage_item(key)
def set_operation_name(self, operation_name):
- # type: (str) -> None
+ # type: (str) -> Span
"""Set the operation name."""
self._dd_span.name = operation_name
+ return self
def log_kv(self, key_values, timestamp=None):
# type: (Dict[_TagNameType, Any], Optional[float]) -> Span
@@ -133,7 +134,7 @@
return self
def set_tag(self, key, value):
- # type: (_TagNameType, Any) -> None
+ # type: (_TagNameType, Any) -> Span
"""Set a tag on the span.
This sets the tag on the underlying datadog span.
@@ -152,6 +153,7 @@
self._dd_span.context.sampling_priority = value
else:
self._dd_span.set_tag(key, value)
+ return self
def _get_tag(self, key):
# type: (_TagNameType) -> Optional[Text]
| {"golden_diff": "diff --git a/ddtrace/opentracer/span.py b/ddtrace/opentracer/span.py\n--- a/ddtrace/opentracer/span.py\n+++ b/ddtrace/opentracer/span.py\n@@ -94,9 +94,10 @@\n return self.context.get_baggage_item(key)\n \n def set_operation_name(self, operation_name):\n- # type: (str) -> None\n+ # type: (str) -> Span\n \"\"\"Set the operation name.\"\"\"\n self._dd_span.name = operation_name\n+ return self\n \n def log_kv(self, key_values, timestamp=None):\n # type: (Dict[_TagNameType, Any], Optional[float]) -> Span\n@@ -133,7 +134,7 @@\n return self\n \n def set_tag(self, key, value):\n- # type: (_TagNameType, Any) -> None\n+ # type: (_TagNameType, Any) -> Span\n \"\"\"Set a tag on the span.\n \n This sets the tag on the underlying datadog span.\n@@ -152,6 +153,7 @@\n self._dd_span.context.sampling_priority = value\n else:\n self._dd_span.set_tag(key, value)\n+ return self\n \n def _get_tag(self, key):\n # type: (_TagNameType) -> Optional[Text]\n", "issue": "`ddtrace.opentracer.span.Span` method return types not compliant with `opentracing.span.Span`\nI'll be happy to submit a PR for this--it should be a small fix--but wanted to make sure this isn't a known issue first.\r\n\r\n### Which version of dd-trace-py are you using?\r\n\r\n0.59.0\r\n\r\n### Which version of pip are you using?\r\n\r\n22.0.3\r\n\r\n### Explanation\r\n\r\n`opentracing.span.Span` defines a handful of methods that return `self` for the instance their invoked on, in order to support call chaining.\r\n\r\n`ddtrace.opentracer.span.Span` does not implement this interface for `set_tag` or `set_operation_name`.\r\n\r\nhttps://github.com/DataDog/dd-trace-py/blob/1a6bad3ec2cbfffab0f83ebf5b72fe786b76dbed/ddtrace/opentracer/span.py#L137-L138\r\nhttps://github.com/DataDog/dd-trace-py/blob/1a6bad3ec2cbfffab0f83ebf5b72fe786b76dbed/ddtrace/opentracer/span.py#L98-L99\r\n\r\nSee the `opentracing` interfaces for [set_tag](https://github.com/opentracing/opentracing-python/blob/1d495f7b1c89e3577c0ce4911ed2962c8dcf1482/opentracing/span.py#L120-L139) and [set_operation_name](https://github.com/opentracing/opentracing-python/blob/1d495f7b1c89e3577c0ce4911ed2962c8dcf1482/opentracing/span.py#L95-L104).\r\n\n", "before_files": [{"content": "import threading\nfrom typing import Any\nfrom typing import Dict\nfrom typing import Optional\nfrom typing import TYPE_CHECKING\nfrom typing import Text\nfrom typing import Union\n\nfrom opentracing import Span as OpenTracingSpan\nfrom opentracing.ext import tags as OTTags\n\nfrom ddtrace.constants import ERROR_MSG\nfrom ddtrace.constants import ERROR_STACK\nfrom ddtrace.constants import ERROR_TYPE\nfrom ddtrace.context import Context as DatadogContext\nfrom ddtrace.internal.compat import NumericType\nfrom ddtrace.span import Span as DatadogSpan\n\nfrom .span_context import SpanContext\nfrom .tags import Tags\n\n\nif TYPE_CHECKING:\n from .tracer import Tracer\n\n\n_TagNameType = Union[Text, bytes]\n\n\nclass Span(OpenTracingSpan):\n \"\"\"Datadog implementation of :class:`opentracing.Span`\"\"\"\n\n def __init__(self, tracer, context, operation_name):\n # type: (Tracer, Optional[SpanContext], str) -> None\n if context is not None:\n context = SpanContext(ddcontext=context._dd_context, baggage=context.baggage)\n else:\n context = SpanContext()\n\n super(Span, self).__init__(tracer, context)\n\n self.finished = False\n self._lock = threading.Lock()\n # use a datadog span\n self._dd_span = DatadogSpan(operation_name, context=context._dd_context)\n\n def finish(self, finish_time=None):\n # type: 
(Optional[float]) -> None\n \"\"\"Finish the span.\n\n This calls finish on the ddspan.\n\n :param finish_time: specify a custom finish time with a unix timestamp\n per time.time()\n :type timestamp: float\n \"\"\"\n if self.finished:\n return\n\n # finish the datadog span\n self._dd_span.finish(finish_time)\n self.finished = True\n\n def set_baggage_item(self, key, value):\n # type: (str, Any) -> Span\n \"\"\"Sets a baggage item in the span context of this span.\n\n Baggage is used to propagate state between spans.\n\n :param key: baggage item key\n :type key: str\n\n :param value: baggage item value\n :type value: a type that can be compat.stringify()'d\n\n :rtype: Span\n :return: itself for chaining calls\n \"\"\"\n new_ctx = self.context.with_baggage_item(key, value)\n with self._lock:\n self._context = new_ctx\n return self\n\n def get_baggage_item(self, key):\n # type: (str) -> Optional[str]\n \"\"\"Gets a baggage item from the span context of this span.\n\n :param key: baggage item key\n :type key: str\n\n :rtype: str\n :return: the baggage value for the given key or ``None``.\n \"\"\"\n return self.context.get_baggage_item(key)\n\n def set_operation_name(self, operation_name):\n # type: (str) -> None\n \"\"\"Set the operation name.\"\"\"\n self._dd_span.name = operation_name\n\n def log_kv(self, key_values, timestamp=None):\n # type: (Dict[_TagNameType, Any], Optional[float]) -> Span\n \"\"\"Add a log record to this span.\n\n Passes on relevant opentracing key values onto the datadog span.\n\n :param key_values: a dict of string keys and values of any type\n :type key_values: dict\n\n :param timestamp: a unix timestamp per time.time()\n :type timestamp: float\n\n :return: the span itself, for call chaining\n :rtype: Span\n \"\"\"\n\n # match opentracing defined keys to datadog functionality\n # opentracing/specification/blob/1be630515dafd4d2a468d083300900f89f28e24d/semantic_conventions.md#log-fields-table\n for key, val in key_values.items():\n if key == \"event\" and val == \"error\":\n # TODO: not sure if it's actually necessary to set the error manually\n self._dd_span.error = 1\n self.set_tag(\"error\", 1)\n elif key == \"error\" or key == \"error.object\":\n self.set_tag(ERROR_TYPE, val)\n elif key == \"message\":\n self.set_tag(ERROR_MSG, val)\n elif key == \"stack\":\n self.set_tag(ERROR_STACK, val)\n else:\n pass\n\n return self\n\n def set_tag(self, key, value):\n # type: (_TagNameType, Any) -> None\n \"\"\"Set a tag on the span.\n\n This sets the tag on the underlying datadog span.\n \"\"\"\n if key == Tags.SPAN_TYPE:\n self._dd_span.span_type = value\n elif key == Tags.SERVICE_NAME:\n self._dd_span.service = value\n elif key == Tags.RESOURCE_NAME or key == OTTags.DATABASE_STATEMENT:\n self._dd_span.resource = value\n elif key == OTTags.PEER_HOSTNAME:\n self._dd_span.set_tag(Tags.TARGET_HOST, value)\n elif key == OTTags.PEER_PORT:\n self._dd_span.set_tag(Tags.TARGET_PORT, value)\n elif key == Tags.SAMPLING_PRIORITY:\n self._dd_span.context.sampling_priority = value\n else:\n self._dd_span.set_tag(key, value)\n\n def _get_tag(self, key):\n # type: (_TagNameType) -> Optional[Text]\n \"\"\"Gets a tag from the span.\n\n This method retrieves the tag from the underlying datadog span.\n \"\"\"\n return self._dd_span.get_tag(key)\n\n def _get_metric(self, key):\n # type: (_TagNameType) -> Optional[NumericType]\n \"\"\"Gets a metric from the span.\n\n This method retrieves the metric from the underlying datadog span.\n \"\"\"\n return self._dd_span.get_metric(key)\n\n def 
__enter__(self):\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n if exc_type:\n self._dd_span.set_exc_info(exc_type, exc_val, exc_tb)\n\n # note: self.finish() AND _dd_span.__exit__ will call _span.finish() but\n # it is idempotent\n self._dd_span.__exit__(exc_type, exc_val, exc_tb)\n self.finish()\n\n def _associate_dd_span(self, ddspan):\n # type: (DatadogSpan) -> None\n \"\"\"Associates a DD span with this span.\"\"\"\n # get the datadog span context\n self._dd_span = ddspan\n self.context._dd_context = ddspan.context\n\n @property\n def _dd_context(self):\n # type: () -> DatadogContext\n return self._dd_span.context\n", "path": "ddtrace/opentracer/span.py"}]} | 2,957 | 294 |
gh_patches_debug_24504 | rasdani/github-patches | git_diff | pyca__cryptography-5438 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make OpenSSL 1.0.2 error (+ env var fallback)
</issue>
<code>
[start of src/cryptography/hazmat/bindings/openssl/binding.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import collections
8 import threading
9 import types
10 import warnings
11
12 import cryptography
13 from cryptography import utils
14 from cryptography.exceptions import InternalError
15 from cryptography.hazmat.bindings._openssl import ffi, lib
16 from cryptography.hazmat.bindings.openssl._conditional import CONDITIONAL_NAMES
17
18 _OpenSSLErrorWithText = collections.namedtuple(
19 "_OpenSSLErrorWithText", ["code", "lib", "func", "reason", "reason_text"]
20 )
21
22
23 class _OpenSSLError(object):
24 def __init__(self, code, lib, func, reason):
25 self._code = code
26 self._lib = lib
27 self._func = func
28 self._reason = reason
29
30 def _lib_reason_match(self, lib, reason):
31 return lib == self.lib and reason == self.reason
32
33 code = utils.read_only_property("_code")
34 lib = utils.read_only_property("_lib")
35 func = utils.read_only_property("_func")
36 reason = utils.read_only_property("_reason")
37
38
39 def _consume_errors(lib):
40 errors = []
41 while True:
42 code = lib.ERR_get_error()
43 if code == 0:
44 break
45
46 err_lib = lib.ERR_GET_LIB(code)
47 err_func = lib.ERR_GET_FUNC(code)
48 err_reason = lib.ERR_GET_REASON(code)
49
50 errors.append(_OpenSSLError(code, err_lib, err_func, err_reason))
51
52 return errors
53
54
55 def _errors_with_text(errors):
56 errors_with_text = []
57 for err in errors:
58 buf = ffi.new("char[]", 256)
59 lib.ERR_error_string_n(err.code, buf, len(buf))
60 err_text_reason = ffi.string(buf)
61
62 errors_with_text.append(
63 _OpenSSLErrorWithText(
64 err.code, err.lib, err.func, err.reason, err_text_reason
65 )
66 )
67
68 return errors_with_text
69
70
71 def _consume_errors_with_text(lib):
72 return _errors_with_text(_consume_errors(lib))
73
74
75 def _openssl_assert(lib, ok, errors=None):
76 if not ok:
77 if errors is None:
78 errors = _consume_errors(lib)
79 errors_with_text = _errors_with_text(errors)
80
81 raise InternalError(
82 "Unknown OpenSSL error. This error is commonly encountered when "
83 "another library is not cleaning up the OpenSSL error stack. If "
84 "you are using cryptography with another library that uses "
85 "OpenSSL try disabling it before reporting a bug. Otherwise "
86 "please file an issue at https://github.com/pyca/cryptography/"
87 "issues with information on how to reproduce "
88 "this. ({0!r})".format(errors_with_text),
89 errors_with_text,
90 )
91
92
93 def build_conditional_library(lib, conditional_names):
94 conditional_lib = types.ModuleType("lib")
95 conditional_lib._original_lib = lib
96 excluded_names = set()
97 for condition, names_cb in conditional_names.items():
98 if not getattr(lib, condition):
99 excluded_names.update(names_cb())
100
101 for attr in dir(lib):
102 if attr not in excluded_names:
103 setattr(conditional_lib, attr, getattr(lib, attr))
104
105 return conditional_lib
106
107
108 class Binding(object):
109 """
110 OpenSSL API wrapper.
111 """
112
113 lib = None
114 ffi = ffi
115 _lib_loaded = False
116 _init_lock = threading.Lock()
117 _lock_init_lock = threading.Lock()
118
119 def __init__(self):
120 self._ensure_ffi_initialized()
121
122 @classmethod
123 def _register_osrandom_engine(cls):
124 # Clear any errors extant in the queue before we start. In many
125 # scenarios other things may be interacting with OpenSSL in the same
126 # process space and it has proven untenable to assume that they will
127 # reliably clear the error queue. Once we clear it here we will
128 # error on any subsequent unexpected item in the stack.
129 cls.lib.ERR_clear_error()
130 if cls.lib.CRYPTOGRAPHY_NEEDS_OSRANDOM_ENGINE:
131 result = cls.lib.Cryptography_add_osrandom_engine()
132 _openssl_assert(cls.lib, result in (1, 2))
133
134 @classmethod
135 def _ensure_ffi_initialized(cls):
136 with cls._init_lock:
137 if not cls._lib_loaded:
138 cls.lib = build_conditional_library(lib, CONDITIONAL_NAMES)
139 cls._lib_loaded = True
140 # initialize the SSL library
141 cls.lib.SSL_library_init()
142 # adds all ciphers/digests for EVP
143 cls.lib.OpenSSL_add_all_algorithms()
144 # loads error strings for libcrypto and libssl functions
145 cls.lib.SSL_load_error_strings()
146 cls._register_osrandom_engine()
147
148 @classmethod
149 def init_static_locks(cls):
150 with cls._lock_init_lock:
151 cls._ensure_ffi_initialized()
152 # Use Python's implementation if available, importing _ssl triggers
153 # the setup for this.
154 __import__("_ssl")
155
156 if (
157 not cls.lib.Cryptography_HAS_LOCKING_CALLBACKS
158 or cls.lib.CRYPTO_get_locking_callback() != cls.ffi.NULL
159 ):
160 return
161
162 # If nothing else has setup a locking callback already, we set up
163 # our own
164 res = lib.Cryptography_setup_ssl_threads()
165 _openssl_assert(cls.lib, res == 1)
166
167
168 def _verify_openssl_version(lib):
169 if (
170 lib.CRYPTOGRAPHY_OPENSSL_LESS_THAN_110
171 and not lib.CRYPTOGRAPHY_IS_LIBRESSL
172 ):
173 warnings.warn(
174 "OpenSSL version 1.0.2 is no longer supported by the OpenSSL "
175 "project, please upgrade. The next version of cryptography will "
176 "drop support for it.",
177 utils.CryptographyDeprecationWarning,
178 )
179
180
181 def _verify_package_version(version):
182 # Occasionally we run into situations where the version of the Python
183 # package does not match the version of the shared object that is loaded.
184 # This may occur in environments where multiple versions of cryptography
185 # are installed and available in the python path. To avoid errors cropping
186 # up later this code checks that the currently imported package and the
187 # shared object that were loaded have the same version and raise an
188 # ImportError if they do not
189 so_package_version = ffi.string(lib.CRYPTOGRAPHY_PACKAGE_VERSION)
190 if version.encode("ascii") != so_package_version:
191 raise ImportError(
192 "The version of cryptography does not match the loaded "
193 "shared object. This can happen if you have multiple copies of "
194 "cryptography installed in your Python path. Please try creating "
195 "a new virtual environment to resolve this issue. "
196 "Loaded python version: {}, shared object version: {}".format(
197 version, so_package_version
198 )
199 )
200
201
202 _verify_package_version(cryptography.__version__)
203
204 # OpenSSL is not thread safe until the locks are initialized. We call this
205 # method in module scope so that it executes with the import lock. On
206 # Pythons < 3.4 this import lock is a global lock, which can prevent a race
207 # condition registering the OpenSSL locks. On Python 3.4+ the import lock
208 # is per module so this approach will not work.
209 Binding.init_static_locks()
210
211 _verify_openssl_version(Binding.lib)
212
[end of src/cryptography/hazmat/bindings/openssl/binding.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cryptography/hazmat/bindings/openssl/binding.py b/src/cryptography/hazmat/bindings/openssl/binding.py
--- a/src/cryptography/hazmat/bindings/openssl/binding.py
+++ b/src/cryptography/hazmat/bindings/openssl/binding.py
@@ -5,6 +5,7 @@
from __future__ import absolute_import, division, print_function
import collections
+import os
import threading
import types
import warnings
@@ -170,12 +171,19 @@
lib.CRYPTOGRAPHY_OPENSSL_LESS_THAN_110
and not lib.CRYPTOGRAPHY_IS_LIBRESSL
):
- warnings.warn(
- "OpenSSL version 1.0.2 is no longer supported by the OpenSSL "
- "project, please upgrade. The next version of cryptography will "
- "drop support for it.",
- utils.CryptographyDeprecationWarning,
- )
+ if os.environ.get("CRYPTOGRAPHY_ALLOW_OPENSSL_102"):
+ warnings.warn(
+ "OpenSSL version 1.0.2 is no longer supported by the OpenSSL "
+ "project, please upgrade. The next version of cryptography "
+ "will completely remove support for it.",
+ utils.CryptographyDeprecationWarning,
+ )
+ else:
+ raise RuntimeError(
+ "You are linking against OpenSSL 1.0.2, which is no longer "
+ "supported by the OpenSSL project. You need to upgrade to a "
+ "newer version of OpenSSL."
+ )
def _verify_package_version(version):
| {"golden_diff": "diff --git a/src/cryptography/hazmat/bindings/openssl/binding.py b/src/cryptography/hazmat/bindings/openssl/binding.py\n--- a/src/cryptography/hazmat/bindings/openssl/binding.py\n+++ b/src/cryptography/hazmat/bindings/openssl/binding.py\n@@ -5,6 +5,7 @@\n from __future__ import absolute_import, division, print_function\n \n import collections\n+import os\n import threading\n import types\n import warnings\n@@ -170,12 +171,19 @@\n lib.CRYPTOGRAPHY_OPENSSL_LESS_THAN_110\n and not lib.CRYPTOGRAPHY_IS_LIBRESSL\n ):\n- warnings.warn(\n- \"OpenSSL version 1.0.2 is no longer supported by the OpenSSL \"\n- \"project, please upgrade. The next version of cryptography will \"\n- \"drop support for it.\",\n- utils.CryptographyDeprecationWarning,\n- )\n+ if os.environ.get(\"CRYPTOGRAPHY_ALLOW_OPENSSL_102\"):\n+ warnings.warn(\n+ \"OpenSSL version 1.0.2 is no longer supported by the OpenSSL \"\n+ \"project, please upgrade. The next version of cryptography \"\n+ \"will completely remove support for it.\",\n+ utils.CryptographyDeprecationWarning,\n+ )\n+ else:\n+ raise RuntimeError(\n+ \"You are linking against OpenSSL 1.0.2, which is no longer \"\n+ \"supported by the OpenSSL project. You need to upgrade to a \"\n+ \"newer version of OpenSSL.\"\n+ )\n \n \n def _verify_package_version(version):\n", "issue": "Make OpenSSL 1.0.2 error (+ env var fallback)\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport collections\nimport threading\nimport types\nimport warnings\n\nimport cryptography\nfrom cryptography import utils\nfrom cryptography.exceptions import InternalError\nfrom cryptography.hazmat.bindings._openssl import ffi, lib\nfrom cryptography.hazmat.bindings.openssl._conditional import CONDITIONAL_NAMES\n\n_OpenSSLErrorWithText = collections.namedtuple(\n \"_OpenSSLErrorWithText\", [\"code\", \"lib\", \"func\", \"reason\", \"reason_text\"]\n)\n\n\nclass _OpenSSLError(object):\n def __init__(self, code, lib, func, reason):\n self._code = code\n self._lib = lib\n self._func = func\n self._reason = reason\n\n def _lib_reason_match(self, lib, reason):\n return lib == self.lib and reason == self.reason\n\n code = utils.read_only_property(\"_code\")\n lib = utils.read_only_property(\"_lib\")\n func = utils.read_only_property(\"_func\")\n reason = utils.read_only_property(\"_reason\")\n\n\ndef _consume_errors(lib):\n errors = []\n while True:\n code = lib.ERR_get_error()\n if code == 0:\n break\n\n err_lib = lib.ERR_GET_LIB(code)\n err_func = lib.ERR_GET_FUNC(code)\n err_reason = lib.ERR_GET_REASON(code)\n\n errors.append(_OpenSSLError(code, err_lib, err_func, err_reason))\n\n return errors\n\n\ndef _errors_with_text(errors):\n errors_with_text = []\n for err in errors:\n buf = ffi.new(\"char[]\", 256)\n lib.ERR_error_string_n(err.code, buf, len(buf))\n err_text_reason = ffi.string(buf)\n\n errors_with_text.append(\n _OpenSSLErrorWithText(\n err.code, err.lib, err.func, err.reason, err_text_reason\n )\n )\n\n return errors_with_text\n\n\ndef _consume_errors_with_text(lib):\n return _errors_with_text(_consume_errors(lib))\n\n\ndef _openssl_assert(lib, ok, errors=None):\n if not ok:\n if errors is None:\n errors = _consume_errors(lib)\n errors_with_text = _errors_with_text(errors)\n\n raise InternalError(\n \"Unknown OpenSSL error. 
This error is commonly encountered when \"\n \"another library is not cleaning up the OpenSSL error stack. If \"\n \"you are using cryptography with another library that uses \"\n \"OpenSSL try disabling it before reporting a bug. Otherwise \"\n \"please file an issue at https://github.com/pyca/cryptography/\"\n \"issues with information on how to reproduce \"\n \"this. ({0!r})\".format(errors_with_text),\n errors_with_text,\n )\n\n\ndef build_conditional_library(lib, conditional_names):\n conditional_lib = types.ModuleType(\"lib\")\n conditional_lib._original_lib = lib\n excluded_names = set()\n for condition, names_cb in conditional_names.items():\n if not getattr(lib, condition):\n excluded_names.update(names_cb())\n\n for attr in dir(lib):\n if attr not in excluded_names:\n setattr(conditional_lib, attr, getattr(lib, attr))\n\n return conditional_lib\n\n\nclass Binding(object):\n \"\"\"\n OpenSSL API wrapper.\n \"\"\"\n\n lib = None\n ffi = ffi\n _lib_loaded = False\n _init_lock = threading.Lock()\n _lock_init_lock = threading.Lock()\n\n def __init__(self):\n self._ensure_ffi_initialized()\n\n @classmethod\n def _register_osrandom_engine(cls):\n # Clear any errors extant in the queue before we start. In many\n # scenarios other things may be interacting with OpenSSL in the same\n # process space and it has proven untenable to assume that they will\n # reliably clear the error queue. Once we clear it here we will\n # error on any subsequent unexpected item in the stack.\n cls.lib.ERR_clear_error()\n if cls.lib.CRYPTOGRAPHY_NEEDS_OSRANDOM_ENGINE:\n result = cls.lib.Cryptography_add_osrandom_engine()\n _openssl_assert(cls.lib, result in (1, 2))\n\n @classmethod\n def _ensure_ffi_initialized(cls):\n with cls._init_lock:\n if not cls._lib_loaded:\n cls.lib = build_conditional_library(lib, CONDITIONAL_NAMES)\n cls._lib_loaded = True\n # initialize the SSL library\n cls.lib.SSL_library_init()\n # adds all ciphers/digests for EVP\n cls.lib.OpenSSL_add_all_algorithms()\n # loads error strings for libcrypto and libssl functions\n cls.lib.SSL_load_error_strings()\n cls._register_osrandom_engine()\n\n @classmethod\n def init_static_locks(cls):\n with cls._lock_init_lock:\n cls._ensure_ffi_initialized()\n # Use Python's implementation if available, importing _ssl triggers\n # the setup for this.\n __import__(\"_ssl\")\n\n if (\n not cls.lib.Cryptography_HAS_LOCKING_CALLBACKS\n or cls.lib.CRYPTO_get_locking_callback() != cls.ffi.NULL\n ):\n return\n\n # If nothing else has setup a locking callback already, we set up\n # our own\n res = lib.Cryptography_setup_ssl_threads()\n _openssl_assert(cls.lib, res == 1)\n\n\ndef _verify_openssl_version(lib):\n if (\n lib.CRYPTOGRAPHY_OPENSSL_LESS_THAN_110\n and not lib.CRYPTOGRAPHY_IS_LIBRESSL\n ):\n warnings.warn(\n \"OpenSSL version 1.0.2 is no longer supported by the OpenSSL \"\n \"project, please upgrade. The next version of cryptography will \"\n \"drop support for it.\",\n utils.CryptographyDeprecationWarning,\n )\n\n\ndef _verify_package_version(version):\n # Occasionally we run into situations where the version of the Python\n # package does not match the version of the shared object that is loaded.\n # This may occur in environments where multiple versions of cryptography\n # are installed and available in the python path. 
To avoid errors cropping\n # up later this code checks that the currently imported package and the\n # shared object that were loaded have the same version and raise an\n # ImportError if they do not\n so_package_version = ffi.string(lib.CRYPTOGRAPHY_PACKAGE_VERSION)\n if version.encode(\"ascii\") != so_package_version:\n raise ImportError(\n \"The version of cryptography does not match the loaded \"\n \"shared object. This can happen if you have multiple copies of \"\n \"cryptography installed in your Python path. Please try creating \"\n \"a new virtual environment to resolve this issue. \"\n \"Loaded python version: {}, shared object version: {}\".format(\n version, so_package_version\n )\n )\n\n\n_verify_package_version(cryptography.__version__)\n\n# OpenSSL is not thread safe until the locks are initialized. We call this\n# method in module scope so that it executes with the import lock. On\n# Pythons < 3.4 this import lock is a global lock, which can prevent a race\n# condition registering the OpenSSL locks. On Python 3.4+ the import lock\n# is per module so this approach will not work.\nBinding.init_static_locks()\n\n_verify_openssl_version(Binding.lib)\n", "path": "src/cryptography/hazmat/bindings/openssl/binding.py"}]} | 2,708 | 353 |
gh_patches_debug_38672 | rasdani/github-patches | git_diff | dask__dask-4160 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use key names rather than function names in dask.visualize
When creating task graph images we currently inspect the task graph to find functions and use those function names in the graph. Now that `atop` produces `subgraph_callables` all the time, this is less informative than it used to be. We could either dig deeper here, or we could use the key names instead, which is what most of the other diagnostics already do. It might be a good time to switch over?
cc @jcrist
</issue>
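For context, the "key names" the issue refers to are what `dask.utils.key_split` extracts from a task key: a stable, human-readable prefix. A minimal illustration (a sketch, assuming `key_split` behaves as in current Dask releases; the keys below are made up):

```
from dask.utils import key_split

key_split('x-1')                           # -> 'x'
key_split(('sum-aggregate-1a2b3c', 0, 1))  # -> 'sum-aggregate'
```

Labelling graph nodes this way would line up with what the other diagnostics already display.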
<code>
[start of dask/dot.py]
1 from __future__ import absolute_import, division, print_function
2
3 import re
4 import os
5 from functools import partial
6
7 from .compatibility import apply
8 from .core import istask, get_dependencies, ishashable
9 from .utils import funcname, import_required
10
11
12 graphviz = import_required("graphviz", "Drawing dask graphs requires the "
13 "`graphviz` python library and the "
14 "`graphviz` system library to be "
15 "installed.")
16
17
18 def task_label(task):
19 """Label for a task on a dot graph.
20
21 Examples
22 --------
23 >>> from operator import add
24 >>> task_label((add, 1, 2))
25 'add'
26 >>> task_label((add, (add, 1, 2), 3))
27 'add(...)'
28 """
29 func = task[0]
30 if func is apply:
31 func = task[1]
32 if hasattr(func, 'funcs'):
33 if len(func.funcs) > 1:
34 return '{0}(...)'.format(funcname(func.funcs[0]))
35 else:
36 head = funcname(func.funcs[0])
37 else:
38 head = funcname(func)
39 if any(has_sub_tasks(i) for i in task[1:]):
40 return '{0}(...)'.format(head)
41 else:
42 return head
43
44
45 def has_sub_tasks(task):
46 """Returns True if the task has sub tasks"""
47 if istask(task):
48 return True
49 elif isinstance(task, list):
50 return any(has_sub_tasks(i) for i in task)
51 else:
52 return False
53
54
55 def name(x):
56 try:
57 return str(hash(x))
58 except TypeError:
59 return str(hash(str(x)))
60
61
62 _HASHPAT = re.compile('([0-9a-z]{32})')
63 _UUIDPAT = re.compile('([0-9a-z]{8}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{12})')
64
65
66 def label(x, cache=None):
67 """
68
69 >>> label('x')
70 'x'
71
72 >>> label(('x', 1))
73 "('x', 1)"
74
75 >>> from hashlib import md5
76 >>> x = 'x-%s-hello' % md5(b'1234').hexdigest()
77 >>> x
78 'x-81dc9bdb52d04dc20036dbd8313ed055-hello'
79
80 >>> label(x)
81 'x-#-hello'
82
83 >>> from uuid import uuid1
84 >>> x = 'x-%s-hello' % uuid1()
85 >>> x # doctest: +SKIP
86 'x-4c1a3d7e-0b45-11e6-8334-54ee75105593-hello'
87
88 >>> label(x)
89 'x-#-hello'
90 """
91 s = str(x)
92 for pattern in (_HASHPAT, _UUIDPAT):
93 m = re.search(pattern, s)
94 if m is not None:
95 for h in m.groups():
96 if cache is not None:
97 n = cache.get(h, len(cache))
98 label = '#{0}'.format(n)
99 # cache will be overwritten destructively
100 cache[h] = n
101 else:
102 label = '#'
103 s = s.replace(h, label)
104 return s
105
106
107 def to_graphviz(dsk, data_attributes=None, function_attributes=None,
108 rankdir='BT', graph_attr={}, node_attr=None, edge_attr=None, **kwargs):
109 if data_attributes is None:
110 data_attributes = {}
111 if function_attributes is None:
112 function_attributes = {}
113
114 graph_attr = graph_attr or {}
115 graph_attr['rankdir'] = rankdir
116 graph_attr.update(kwargs)
117 g = graphviz.Digraph(graph_attr=graph_attr,
118 node_attr=node_attr,
119 edge_attr=edge_attr)
120
121 seen = set()
122 cache = {}
123
124 for k, v in dsk.items():
125 k_name = name(k)
126 if k_name not in seen:
127 seen.add(k_name)
128 attrs = data_attributes.get(k, {})
129 attrs.setdefault('label', label(k, cache=cache))
130 attrs.setdefault('shape', 'box')
131 g.node(k_name, **attrs)
132
133 if istask(v):
134 func_name = name((k, 'function'))
135 if func_name not in seen:
136 seen.add(func_name)
137 attrs = function_attributes.get(k, {})
138 attrs.setdefault('label', task_label(v))
139 attrs.setdefault('shape', 'circle')
140 g.node(func_name, **attrs)
141 g.edge(func_name, k_name)
142
143 for dep in get_dependencies(dsk, k):
144 dep_name = name(dep)
145 if dep_name not in seen:
146 seen.add(dep_name)
147 attrs = data_attributes.get(dep, {})
148 attrs.setdefault('label', label(dep, cache=cache))
149 attrs.setdefault('shape', 'box')
150 g.node(dep_name, **attrs)
151 g.edge(dep_name, func_name)
152 elif ishashable(v) and v in dsk:
153 g.edge(name(v), k_name)
154 return g
155
156
157 IPYTHON_IMAGE_FORMATS = frozenset(['jpeg', 'png'])
158 IPYTHON_NO_DISPLAY_FORMATS = frozenset(['dot', 'pdf'])
159
160
161 def _get_display_cls(format):
162 """
163 Get the appropriate IPython display class for `format`.
164
165 Returns `IPython.display.SVG` if format=='svg', otherwise
166 `IPython.display.Image`.
167
168 If IPython is not importable, return dummy function that swallows its
169 arguments and returns None.
170 """
171 dummy = lambda *args, **kwargs: None
172 try:
173 import IPython.display as display
174 except ImportError:
175 # Can't return a display object if no IPython.
176 return dummy
177
178 if format in IPYTHON_NO_DISPLAY_FORMATS:
179 # IPython can't display this format natively, so just return None.
180 return dummy
181 elif format in IPYTHON_IMAGE_FORMATS:
182 # Partially apply `format` so that `Image` and `SVG` supply a uniform
183 # interface to the caller.
184 return partial(display.Image, format=format)
185 elif format == 'svg':
186 return display.SVG
187 else:
188 raise ValueError("Unknown format '%s' passed to `dot_graph`" % format)
189
190
191 def dot_graph(dsk, filename='mydask', format=None, **kwargs):
192 """
193 Render a task graph using dot.
194
195 If `filename` is not None, write a file to disk with that name in the
196 format specified by `format`. `filename` should not include an extension.
197
198 Parameters
199 ----------
200 dsk : dict
201 The graph to display.
202 filename : str or None, optional
203 The name (without an extension) of the file to write to disk. If
204 `filename` is None, no file will be written, and we communicate with
205 dot using only pipes. Default is 'mydask'.
206 format : {'png', 'pdf', 'dot', 'svg', 'jpeg', 'jpg'}, optional
207 Format in which to write output file. Default is 'png'.
208 **kwargs
209 Additional keyword arguments to forward to `to_graphviz`.
210
211 Returns
212 -------
213 result : None or IPython.display.Image or IPython.display.SVG (See below.)
214
215 Notes
216 -----
217 If IPython is installed, we return an IPython.display object in the
218 requested format. If IPython is not installed, we just return None.
219
220 We always return None if format is 'pdf' or 'dot', because IPython can't
221 display these formats natively. Passing these formats with filename=None
222 will not produce any useful output.
223
224 See Also
225 --------
226 dask.dot.to_graphviz
227 """
228 g = to_graphviz(dsk, **kwargs)
229
230 fmts = ['.png', '.pdf', '.dot', '.svg', '.jpeg', '.jpg']
231 if format is None and any(filename.lower().endswith(fmt) for fmt in fmts):
232 filename, format = os.path.splitext(filename)
233 format = format[1:].lower()
234
235 if format is None:
236 format = 'png'
237
238 data = g.pipe(format=format)
239 if not data:
240 raise RuntimeError("Graphviz failed to properly produce an image. "
241 "This probably means your installation of graphviz "
242 "is missing png support. See: "
243 "https://github.com/ContinuumIO/anaconda-issues/"
244 "issues/485 for more information.")
245
246 display_cls = _get_display_cls(format)
247
248 if not filename:
249 return display_cls(data=data)
250
251 full_filename = '.'.join([filename, format])
252 with open(full_filename, 'wb') as f:
253 f.write(data)
254
255 return display_cls(filename=full_filename)
256
[end of dask/dot.py]
</code>
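In `to_graphviz` above, the node text is chosen by two `setdefault('label', ...)` calls: data boxes get `label(k, cache=cache)` and function circles get `task_label(v)`. Switching to key names essentially means deriving the circle label from the key `k` instead of the task `v`. A one-line sketch of that idea (hypothetical; the actual change may differ):

```
# inside to_graphviz(), when adding the function node:
attrs.setdefault('label', key_split(k))  # key-derived name instead of task_label(v)
```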
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dask/dot.py b/dask/dot.py
--- a/dask/dot.py
+++ b/dask/dot.py
@@ -6,7 +6,7 @@
from .compatibility import apply
from .core import istask, get_dependencies, ishashable
-from .utils import funcname, import_required
+from .utils import funcname, import_required, key_split
graphviz = import_required("graphviz", "Drawing dask graphs requires the "
@@ -104,6 +104,25 @@
return s
+def box_label(key):
+ """ Label boxes in graph by chunk index
+
+ >>> box_label(('x', 1, 2, 3))
+ '(1, 2, 3)'
+ >>> box_label(('x', 123))
+ '123'
+ >>> box_label('x')
+ ''
+ """
+ if isinstance(key, tuple):
+ key = key[1:]
+ if len(key) == 1:
+ [key] = key
+ return str(key)
+ else:
+ return ""
+
+
def to_graphviz(dsk, data_attributes=None, function_attributes=None,
rankdir='BT', graph_attr={}, node_attr=None, edge_attr=None, **kwargs):
if data_attributes is None:
@@ -119,14 +138,13 @@
edge_attr=edge_attr)
seen = set()
- cache = {}
for k, v in dsk.items():
k_name = name(k)
if k_name not in seen:
seen.add(k_name)
attrs = data_attributes.get(k, {})
- attrs.setdefault('label', label(k, cache=cache))
+ attrs.setdefault('label', box_label(k))
attrs.setdefault('shape', 'box')
g.node(k_name, **attrs)
@@ -135,7 +153,7 @@
if func_name not in seen:
seen.add(func_name)
attrs = function_attributes.get(k, {})
- attrs.setdefault('label', task_label(v))
+ attrs.setdefault('label', key_split(k))
attrs.setdefault('shape', 'circle')
g.node(func_name, **attrs)
g.edge(func_name, k_name)
@@ -145,7 +163,7 @@
if dep_name not in seen:
seen.add(dep_name)
attrs = data_attributes.get(dep, {})
- attrs.setdefault('label', label(dep, cache=cache))
+ attrs.setdefault('label', box_label(dep))
attrs.setdefault('shape', 'box')
g.node(dep_name, **attrs)
g.edge(dep_name, func_name)
| {"golden_diff": "diff --git a/dask/dot.py b/dask/dot.py\n--- a/dask/dot.py\n+++ b/dask/dot.py\n@@ -6,7 +6,7 @@\n \n from .compatibility import apply\n from .core import istask, get_dependencies, ishashable\n-from .utils import funcname, import_required\n+from .utils import funcname, import_required, key_split\n \n \n graphviz = import_required(\"graphviz\", \"Drawing dask graphs requires the \"\n@@ -104,6 +104,25 @@\n return s\n \n \n+def box_label(key):\n+ \"\"\" Label boxes in graph by chunk index\n+\n+ >>> box_label(('x', 1, 2, 3))\n+ '(1, 2, 3)'\n+ >>> box_label(('x', 123))\n+ '123'\n+ >>> box_label('x')\n+ ''\n+ \"\"\"\n+ if isinstance(key, tuple):\n+ key = key[1:]\n+ if len(key) == 1:\n+ [key] = key\n+ return str(key)\n+ else:\n+ return \"\"\n+\n+\n def to_graphviz(dsk, data_attributes=None, function_attributes=None,\n rankdir='BT', graph_attr={}, node_attr=None, edge_attr=None, **kwargs):\n if data_attributes is None:\n@@ -119,14 +138,13 @@\n edge_attr=edge_attr)\n \n seen = set()\n- cache = {}\n \n for k, v in dsk.items():\n k_name = name(k)\n if k_name not in seen:\n seen.add(k_name)\n attrs = data_attributes.get(k, {})\n- attrs.setdefault('label', label(k, cache=cache))\n+ attrs.setdefault('label', box_label(k))\n attrs.setdefault('shape', 'box')\n g.node(k_name, **attrs)\n \n@@ -135,7 +153,7 @@\n if func_name not in seen:\n seen.add(func_name)\n attrs = function_attributes.get(k, {})\n- attrs.setdefault('label', task_label(v))\n+ attrs.setdefault('label', key_split(k))\n attrs.setdefault('shape', 'circle')\n g.node(func_name, **attrs)\n g.edge(func_name, k_name)\n@@ -145,7 +163,7 @@\n if dep_name not in seen:\n seen.add(dep_name)\n attrs = data_attributes.get(dep, {})\n- attrs.setdefault('label', label(dep, cache=cache))\n+ attrs.setdefault('label', box_label(dep))\n attrs.setdefault('shape', 'box')\n g.node(dep_name, **attrs)\n g.edge(dep_name, func_name)\n", "issue": "Use key names rather than function names in dask.visualize\nWhen creating task graph images we currently inspect the task graph to find functions and use those function names in the graph. Now that atop produces `subgraph_callables` all the time, this is less informative than it used to be. We could either dig deeper here, or we could use the keynames instead. This is what is done in most of the other diagnostics. 
It might be a good time to switch over?\r\n\r\ncc @jcrist \n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport re\nimport os\nfrom functools import partial\n\nfrom .compatibility import apply\nfrom .core import istask, get_dependencies, ishashable\nfrom .utils import funcname, import_required\n\n\ngraphviz = import_required(\"graphviz\", \"Drawing dask graphs requires the \"\n \"`graphviz` python library and the \"\n \"`graphviz` system library to be \"\n \"installed.\")\n\n\ndef task_label(task):\n \"\"\"Label for a task on a dot graph.\n\n Examples\n --------\n >>> from operator import add\n >>> task_label((add, 1, 2))\n 'add'\n >>> task_label((add, (add, 1, 2), 3))\n 'add(...)'\n \"\"\"\n func = task[0]\n if func is apply:\n func = task[1]\n if hasattr(func, 'funcs'):\n if len(func.funcs) > 1:\n return '{0}(...)'.format(funcname(func.funcs[0]))\n else:\n head = funcname(func.funcs[0])\n else:\n head = funcname(func)\n if any(has_sub_tasks(i) for i in task[1:]):\n return '{0}(...)'.format(head)\n else:\n return head\n\n\ndef has_sub_tasks(task):\n \"\"\"Returns True if the task has sub tasks\"\"\"\n if istask(task):\n return True\n elif isinstance(task, list):\n return any(has_sub_tasks(i) for i in task)\n else:\n return False\n\n\ndef name(x):\n try:\n return str(hash(x))\n except TypeError:\n return str(hash(str(x)))\n\n\n_HASHPAT = re.compile('([0-9a-z]{32})')\n_UUIDPAT = re.compile('([0-9a-z]{8}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{12})')\n\n\ndef label(x, cache=None):\n \"\"\"\n\n >>> label('x')\n 'x'\n\n >>> label(('x', 1))\n \"('x', 1)\"\n\n >>> from hashlib import md5\n >>> x = 'x-%s-hello' % md5(b'1234').hexdigest()\n >>> x\n 'x-81dc9bdb52d04dc20036dbd8313ed055-hello'\n\n >>> label(x)\n 'x-#-hello'\n\n >>> from uuid import uuid1\n >>> x = 'x-%s-hello' % uuid1()\n >>> x # doctest: +SKIP\n 'x-4c1a3d7e-0b45-11e6-8334-54ee75105593-hello'\n\n >>> label(x)\n 'x-#-hello'\n \"\"\"\n s = str(x)\n for pattern in (_HASHPAT, _UUIDPAT):\n m = re.search(pattern, s)\n if m is not None:\n for h in m.groups():\n if cache is not None:\n n = cache.get(h, len(cache))\n label = '#{0}'.format(n)\n # cache will be overwritten destructively\n cache[h] = n\n else:\n label = '#'\n s = s.replace(h, label)\n return s\n\n\ndef to_graphviz(dsk, data_attributes=None, function_attributes=None,\n rankdir='BT', graph_attr={}, node_attr=None, edge_attr=None, **kwargs):\n if data_attributes is None:\n data_attributes = {}\n if function_attributes is None:\n function_attributes = {}\n\n graph_attr = graph_attr or {}\n graph_attr['rankdir'] = rankdir\n graph_attr.update(kwargs)\n g = graphviz.Digraph(graph_attr=graph_attr,\n node_attr=node_attr,\n edge_attr=edge_attr)\n\n seen = set()\n cache = {}\n\n for k, v in dsk.items():\n k_name = name(k)\n if k_name not in seen:\n seen.add(k_name)\n attrs = data_attributes.get(k, {})\n attrs.setdefault('label', label(k, cache=cache))\n attrs.setdefault('shape', 'box')\n g.node(k_name, **attrs)\n\n if istask(v):\n func_name = name((k, 'function'))\n if func_name not in seen:\n seen.add(func_name)\n attrs = function_attributes.get(k, {})\n attrs.setdefault('label', task_label(v))\n attrs.setdefault('shape', 'circle')\n g.node(func_name, **attrs)\n g.edge(func_name, k_name)\n\n for dep in get_dependencies(dsk, k):\n dep_name = name(dep)\n if dep_name not in seen:\n seen.add(dep_name)\n attrs = data_attributes.get(dep, {})\n attrs.setdefault('label', label(dep, cache=cache))\n attrs.setdefault('shape', 'box')\n 
g.node(dep_name, **attrs)\n g.edge(dep_name, func_name)\n elif ishashable(v) and v in dsk:\n g.edge(name(v), k_name)\n return g\n\n\nIPYTHON_IMAGE_FORMATS = frozenset(['jpeg', 'png'])\nIPYTHON_NO_DISPLAY_FORMATS = frozenset(['dot', 'pdf'])\n\n\ndef _get_display_cls(format):\n \"\"\"\n Get the appropriate IPython display class for `format`.\n\n Returns `IPython.display.SVG` if format=='svg', otherwise\n `IPython.display.Image`.\n\n If IPython is not importable, return dummy function that swallows its\n arguments and returns None.\n \"\"\"\n dummy = lambda *args, **kwargs: None\n try:\n import IPython.display as display\n except ImportError:\n # Can't return a display object if no IPython.\n return dummy\n\n if format in IPYTHON_NO_DISPLAY_FORMATS:\n # IPython can't display this format natively, so just return None.\n return dummy\n elif format in IPYTHON_IMAGE_FORMATS:\n # Partially apply `format` so that `Image` and `SVG` supply a uniform\n # interface to the caller.\n return partial(display.Image, format=format)\n elif format == 'svg':\n return display.SVG\n else:\n raise ValueError(\"Unknown format '%s' passed to `dot_graph`\" % format)\n\n\ndef dot_graph(dsk, filename='mydask', format=None, **kwargs):\n \"\"\"\n Render a task graph using dot.\n\n If `filename` is not None, write a file to disk with that name in the\n format specified by `format`. `filename` should not include an extension.\n\n Parameters\n ----------\n dsk : dict\n The graph to display.\n filename : str or None, optional\n The name (without an extension) of the file to write to disk. If\n `filename` is None, no file will be written, and we communicate with\n dot using only pipes. Default is 'mydask'.\n format : {'png', 'pdf', 'dot', 'svg', 'jpeg', 'jpg'}, optional\n Format in which to write output file. Default is 'png'.\n **kwargs\n Additional keyword arguments to forward to `to_graphviz`.\n\n Returns\n -------\n result : None or IPython.display.Image or IPython.display.SVG (See below.)\n\n Notes\n -----\n If IPython is installed, we return an IPython.display object in the\n requested format. If IPython is not installed, we just return None.\n\n We always return None if format is 'pdf' or 'dot', because IPython can't\n display these formats natively. Passing these formats with filename=None\n will not produce any useful output.\n\n See Also\n --------\n dask.dot.to_graphviz\n \"\"\"\n g = to_graphviz(dsk, **kwargs)\n\n fmts = ['.png', '.pdf', '.dot', '.svg', '.jpeg', '.jpg']\n if format is None and any(filename.lower().endswith(fmt) for fmt in fmts):\n filename, format = os.path.splitext(filename)\n format = format[1:].lower()\n\n if format is None:\n format = 'png'\n\n data = g.pipe(format=format)\n if not data:\n raise RuntimeError(\"Graphviz failed to properly produce an image. \"\n \"This probably means your installation of graphviz \"\n \"is missing png support. See: \"\n \"https://github.com/ContinuumIO/anaconda-issues/\"\n \"issues/485 for more information.\")\n\n display_cls = _get_display_cls(format)\n\n if not filename:\n return display_cls(data=data)\n\n full_filename = '.'.join([filename, format])\n with open(full_filename, 'wb') as f:\n f.write(data)\n\n return display_cls(filename=full_filename)\n", "path": "dask/dot.py"}]} | 3,269 | 591 |
gh_patches_debug_9405 | rasdani/github-patches | git_diff | ivy-llc__ivy-17873 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dropout1d
</issue>
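The one-line issue is a frontend to-do item: add a `dropout1d` wrapper to Ivy's Torch frontend, mirroring `torch.nn.functional.dropout1d` and the existing `dropout` wrapper in the file below. A minimal sketch of such a wrapper (assuming an `ivy.dropout1d` backend function that accepts a channels-first `data_format`, which is also what the accepted patch further down relies on):

```
@to_ivy_arrays_and_back
@with_unsupported_dtypes({"2.0.1 and below": ("float16",)}, "torch")
def dropout1d(input, p=0.5, training=True, inplace=False):
    # torch's dropout1d expects (N, C, L) or (C, L) inputs, i.e. channels-first ("NCW")
    return ivy.dropout1d(input, p, training=training, data_format="NCW")
```

The `inplace` flag would still need handling (for example via an `out=input` argument), as the final patch does.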
<code>
[start of ivy/functional/frontends/torch/nn/functional/dropout_functions.py]
1 # local
2 import ivy
3 from ivy.func_wrapper import with_unsupported_dtypes
4
5 from ivy.functional.frontends.torch.func_wrapper import to_ivy_arrays_and_back
6
7
8 @to_ivy_arrays_and_back
9 @with_unsupported_dtypes({"2.0.1 and below": ("float16",)}, "torch")
10 def dropout(input, p=0.5, training=True, inplace=False):
11 return ivy.dropout(input, p, training=training)
12
[end of ivy/functional/frontends/torch/nn/functional/dropout_functions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/torch/nn/functional/dropout_functions.py b/ivy/functional/frontends/torch/nn/functional/dropout_functions.py
--- a/ivy/functional/frontends/torch/nn/functional/dropout_functions.py
+++ b/ivy/functional/frontends/torch/nn/functional/dropout_functions.py
@@ -9,3 +9,11 @@
@with_unsupported_dtypes({"2.0.1 and below": ("float16",)}, "torch")
def dropout(input, p=0.5, training=True, inplace=False):
return ivy.dropout(input, p, training=training)
+
+
+@to_ivy_arrays_and_back
+@with_unsupported_dtypes({"2.0.1 and below": ("float16",)}, "torch")
+def dropout1d(input, p=0.5, training=True, inplace=False):
+ if inplace:
+ return ivy.dropout1d(input, p, training=training, data_format="NCW", out=input)
+ return ivy.dropout1d(input, p, training=training, data_format="NCW")
| {"golden_diff": "diff --git a/ivy/functional/frontends/torch/nn/functional/dropout_functions.py b/ivy/functional/frontends/torch/nn/functional/dropout_functions.py\n--- a/ivy/functional/frontends/torch/nn/functional/dropout_functions.py\n+++ b/ivy/functional/frontends/torch/nn/functional/dropout_functions.py\n@@ -9,3 +9,11 @@\n @with_unsupported_dtypes({\"2.0.1 and below\": (\"float16\",)}, \"torch\")\n def dropout(input, p=0.5, training=True, inplace=False):\n return ivy.dropout(input, p, training=training)\n+\n+\n+@to_ivy_arrays_and_back\n+@with_unsupported_dtypes({\"2.0.1 and below\": (\"float16\",)}, \"torch\")\n+def dropout1d(input, p=0.5, training=True, inplace=False):\n+ if inplace:\n+ return ivy.dropout1d(input, p, training=training, data_format=\"NCW\", out=input)\n+ return ivy.dropout1d(input, p, training=training, data_format=\"NCW\")\n", "issue": "dropout1d\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes\n\nfrom ivy.functional.frontends.torch.func_wrapper import to_ivy_arrays_and_back\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"2.0.1 and below\": (\"float16\",)}, \"torch\")\ndef dropout(input, p=0.5, training=True, inplace=False):\n return ivy.dropout(input, p, training=training)\n", "path": "ivy/functional/frontends/torch/nn/functional/dropout_functions.py"}]} | 679 | 251 |
gh_patches_debug_34828 | rasdani/github-patches | git_diff | PaddlePaddle__models-1879 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Text classification (text_classification): training options are mismatched
1. The `batch_size` parameter of `train()` in train.py is redundant (it is never used inside the function).
2. The command-line option `gru` is wired up incorrectly (it builds `lstm_net`).
3. The default embedding learning rate of the GRU network is too large, which causes model training to fail to converge.
</issue>
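For orientation: points 1 and 2 refer to `train.py` below, where `train()` accepts a `batch_size` argument it never uses and the `gru` branch of `train_net()` dispatches to the wrong network; point 3 refers to the `emb_lr=400.0` default of `gru_net` in `nets.py`. The problematic dispatch branch, quoted from the code below with annotations added:

```
elif sys.argv[1] == "gru":
    train(
        train_reader,
        word_dict,
        lstm_net,          # bug: should be gru_net
        use_cuda=True,
        parallel=False,
        save_dirname="gru_model",
        lr=0.05,
        pass_num=30,
        batch_size=4)      # redundant: train() never reads this argument
```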
<code>
[start of fluid/PaddleNLP/text_classification/train.py]
1 import os
2 import six
3 import sys
4 import time
5 import unittest
6 import contextlib
7
8 import paddle
9 import paddle.fluid as fluid
10
11 import utils
12 from nets import bow_net
13 from nets import cnn_net
14 from nets import lstm_net
15 from nets import gru_net
16
17
18 def train(train_reader,
19 word_dict,
20 network,
21 use_cuda,
22 parallel,
23 save_dirname,
24 lr=0.2,
25 batch_size=128,
26 pass_num=30):
27 """
28 train network
29 """
30 data = fluid.layers.data(
31 name="words", shape=[1], dtype="int64", lod_level=1)
32
33 label = fluid.layers.data(name="label", shape=[1], dtype="int64")
34
35 if not parallel:
36 cost, acc, prediction = network(data, label, len(word_dict))
37 else:
38 places = fluid.layers.device.get_places(device_count=2)
39 pd = fluid.layers.ParallelDo(places)
40 with pd.do():
41 cost, acc, prediction = network(
42 pd.read_input(data), pd.read_input(label), len(word_dict))
43
44 pd.write_output(cost)
45 pd.write_output(acc)
46
47 cost, acc = pd()
48 cost = fluid.layers.mean(cost)
49 acc = fluid.layers.mean(acc)
50
51 sgd_optimizer = fluid.optimizer.Adagrad(learning_rate=lr)
52 sgd_optimizer.minimize(cost)
53
54 place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
55 exe = fluid.Executor(place)
56 feeder = fluid.DataFeeder(feed_list=[data, label], place=place)
57
58 # For internal continuous evaluation
59 if "CE_MODE_X" in os.environ:
60 fluid.default_startup_program().random_seed = 110
61 exe.run(fluid.default_startup_program())
62 for pass_id in six.moves.xrange(pass_num):
63 pass_start = time.time()
64 data_size, data_count, total_acc, total_cost = 0, 0, 0.0, 0.0
65 for data in train_reader():
66 avg_cost_np, avg_acc_np = exe.run(fluid.default_main_program(),
67 feed=feeder.feed(data),
68 fetch_list=[cost, acc])
69 data_size = len(data)
70 total_acc += data_size * avg_acc_np
71 total_cost += data_size * avg_cost_np
72 data_count += data_size
73 avg_cost = total_cost / data_count
74
75 avg_acc = total_acc / data_count
76 print("pass_id: %d, avg_acc: %f, avg_cost: %f, pass_time_cost: %f" %
77 (pass_id, avg_acc, avg_cost, time.time() - pass_start))
78
79 epoch_model = save_dirname + "/" + "epoch" + str(pass_id)
80 fluid.io.save_inference_model(epoch_model, ["words", "label"], acc, exe)
81
82 pass_end = time.time()
83 # For internal continuous evaluation
84 if "CE_MODE_X" in os.environ:
85 print("kpis train_acc %f" % avg_acc)
86 print("kpis train_cost %f" % avg_cost)
87 print("kpis train_duration %f" % (pass_end - pass_start))
88
89
90 def train_net():
91 word_dict, train_reader, test_reader = utils.prepare_data(
92 "imdb", self_dict=False, batch_size=128, buf_size=50000)
93
94 if sys.argv[1] == "bow":
95 train(
96 train_reader,
97 word_dict,
98 bow_net,
99 use_cuda=False,
100 parallel=False,
101 save_dirname="bow_model",
102 lr=0.002,
103 pass_num=30,
104 batch_size=4)
105 elif sys.argv[1] == "cnn":
106 train(
107 train_reader,
108 word_dict,
109 cnn_net,
110 use_cuda=True,
111 parallel=False,
112 save_dirname="cnn_model",
113 lr=0.01,
114 pass_num=30,
115 batch_size=4)
116 elif sys.argv[1] == "lstm":
117 train(
118 train_reader,
119 word_dict,
120 lstm_net,
121 use_cuda=True,
122 parallel=False,
123 save_dirname="lstm_model",
124 lr=0.05,
125 pass_num=30,
126 batch_size=4)
127 elif sys.argv[1] == "gru":
128 train(
129 train_reader,
130 word_dict,
131 lstm_net,
132 use_cuda=True,
133 parallel=False,
134 save_dirname="gru_model",
135 lr=0.05,
136 pass_num=30,
137 batch_size=4)
138 else:
139 print("network name cannot be found!")
140 sys.exit(1)
141
142
143 if __name__ == "__main__":
144 train_net()
145
[end of fluid/PaddleNLP/text_classification/train.py]
[start of fluid/PaddleNLP/text_classification/nets.py]
1 import sys
2 import time
3 import numpy as np
4
5 import paddle
6 import paddle.fluid as fluid
7
8
9 def bow_net(data,
10 label,
11 dict_dim,
12 emb_dim=128,
13 hid_dim=128,
14 hid_dim2=96,
15 class_dim=2):
16 """
17 bow net
18 """
19 emb = fluid.layers.embedding(input=data, size=[dict_dim, emb_dim])
20 bow = fluid.layers.sequence_pool(input=emb, pool_type='sum')
21 bow_tanh = fluid.layers.tanh(bow)
22 fc_1 = fluid.layers.fc(input=bow_tanh, size=hid_dim, act="tanh")
23 fc_2 = fluid.layers.fc(input=fc_1, size=hid_dim2, act="tanh")
24 prediction = fluid.layers.fc(input=[fc_2], size=class_dim, act="softmax")
25 cost = fluid.layers.cross_entropy(input=prediction, label=label)
26 avg_cost = fluid.layers.mean(x=cost)
27 acc = fluid.layers.accuracy(input=prediction, label=label)
28
29 return avg_cost, acc, prediction
30
31
32 def cnn_net(data,
33 label,
34 dict_dim,
35 emb_dim=128,
36 hid_dim=128,
37 hid_dim2=96,
38 class_dim=2,
39 win_size=3):
40 """
41 conv net
42 """
43 emb = fluid.layers.embedding(input=data, size=[dict_dim, emb_dim])
44
45 conv_3 = fluid.nets.sequence_conv_pool(
46 input=emb,
47 num_filters=hid_dim,
48 filter_size=win_size,
49 act="tanh",
50 pool_type="max")
51
52 fc_1 = fluid.layers.fc(input=[conv_3], size=hid_dim2)
53
54 prediction = fluid.layers.fc(input=[fc_1], size=class_dim, act="softmax")
55 cost = fluid.layers.cross_entropy(input=prediction, label=label)
56 avg_cost = fluid.layers.mean(x=cost)
57 acc = fluid.layers.accuracy(input=prediction, label=label)
58
59 return avg_cost, acc, prediction
60
61
62 def lstm_net(data,
63 label,
64 dict_dim,
65 emb_dim=128,
66 hid_dim=128,
67 hid_dim2=96,
68 class_dim=2,
69 emb_lr=30.0):
70 """
71 lstm net
72 """
73 emb = fluid.layers.embedding(
74 input=data,
75 size=[dict_dim, emb_dim],
76 param_attr=fluid.ParamAttr(learning_rate=emb_lr))
77
78 fc0 = fluid.layers.fc(input=emb, size=hid_dim * 4)
79
80 lstm_h, c = fluid.layers.dynamic_lstm(
81 input=fc0, size=hid_dim * 4, is_reverse=False)
82
83 lstm_max = fluid.layers.sequence_pool(input=lstm_h, pool_type='max')
84 lstm_max_tanh = fluid.layers.tanh(lstm_max)
85
86 fc1 = fluid.layers.fc(input=lstm_max_tanh, size=hid_dim2, act='tanh')
87
88 prediction = fluid.layers.fc(input=fc1, size=class_dim, act='softmax')
89
90 cost = fluid.layers.cross_entropy(input=prediction, label=label)
91 avg_cost = fluid.layers.mean(x=cost)
92 acc = fluid.layers.accuracy(input=prediction, label=label)
93
94 return avg_cost, acc, prediction
95
96
97 def gru_net(data,
98 label,
99 dict_dim,
100 emb_dim=128,
101 hid_dim=128,
102 hid_dim2=96,
103 class_dim=2,
104 emb_lr=400.0):
105 """
106 gru net
107 """
108 emb = fluid.layers.embedding(
109 input=data,
110 size=[dict_dim, emb_dim],
111 param_attr=fluid.ParamAttr(learning_rate=emb_lr))
112
113 fc0 = fluid.layers.fc(input=emb, size=hid_dim * 3)
114 gru_h = fluid.layers.dynamic_gru(input=fc0, size=hid_dim, is_reverse=False)
115 gru_max = fluid.layers.sequence_pool(input=gru_h, pool_type='max')
116 gru_max_tanh = fluid.layers.tanh(gru_max)
117 fc1 = fluid.layers.fc(input=gru_max_tanh, size=hid_dim2, act='tanh')
118 prediction = fluid.layers.fc(input=fc1, size=class_dim, act='softmax')
119
120 cost = fluid.layers.cross_entropy(input=prediction, label=label)
121 avg_cost = fluid.layers.mean(x=cost)
122 acc = fluid.layers.accuracy(input=prediction, label=label)
123
124 return avg_cost, acc, prediction
125
[end of fluid/PaddleNLP/text_classification/nets.py]
</code>
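A quick sanity check on point 3 of the issue (a sketch, assuming `ParamAttr(learning_rate=...)` acts as a multiplier on the optimizer's base learning rate, as it does in Fluid):

```
base_lr = 0.05               # lr passed to Adagrad for the gru branch in train.py
emb_lr_multiplier = 400.0    # current default in gru_net above
effective_embedding_lr = base_lr * emb_lr_multiplier  # 20.0 -- far too large
# with emb_lr=30.0 (the lstm_net default) it would be 0.05 * 30.0 = 1.5
```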
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/fluid/PaddleNLP/text_classification/nets.py b/fluid/PaddleNLP/text_classification/nets.py
--- a/fluid/PaddleNLP/text_classification/nets.py
+++ b/fluid/PaddleNLP/text_classification/nets.py
@@ -101,7 +101,7 @@
hid_dim=128,
hid_dim2=96,
class_dim=2,
- emb_lr=400.0):
+ emb_lr=30.0):
"""
gru net
"""
diff --git a/fluid/PaddleNLP/text_classification/train.py b/fluid/PaddleNLP/text_classification/train.py
--- a/fluid/PaddleNLP/text_classification/train.py
+++ b/fluid/PaddleNLP/text_classification/train.py
@@ -22,7 +22,6 @@
parallel,
save_dirname,
lr=0.2,
- batch_size=128,
pass_num=30):
"""
train network
@@ -100,8 +99,7 @@
parallel=False,
save_dirname="bow_model",
lr=0.002,
- pass_num=30,
- batch_size=4)
+ pass_num=30)
elif sys.argv[1] == "cnn":
train(
train_reader,
@@ -111,8 +109,7 @@
parallel=False,
save_dirname="cnn_model",
lr=0.01,
- pass_num=30,
- batch_size=4)
+ pass_num=30)
elif sys.argv[1] == "lstm":
train(
train_reader,
@@ -122,19 +119,17 @@
parallel=False,
save_dirname="lstm_model",
lr=0.05,
- pass_num=30,
- batch_size=4)
+ pass_num=30)
elif sys.argv[1] == "gru":
train(
train_reader,
word_dict,
- lstm_net,
+ gru_net,
use_cuda=True,
parallel=False,
save_dirname="gru_model",
lr=0.05,
- pass_num=30,
- batch_size=4)
+ pass_num=30)
else:
print("network name cannot be found!")
sys.exit(1)
| {"golden_diff": "diff --git a/fluid/PaddleNLP/text_classification/nets.py b/fluid/PaddleNLP/text_classification/nets.py\n--- a/fluid/PaddleNLP/text_classification/nets.py\n+++ b/fluid/PaddleNLP/text_classification/nets.py\n@@ -101,7 +101,7 @@\n hid_dim=128,\n hid_dim2=96,\n class_dim=2,\n- emb_lr=400.0):\n+ emb_lr=30.0):\n \"\"\"\n gru net\n \"\"\"\ndiff --git a/fluid/PaddleNLP/text_classification/train.py b/fluid/PaddleNLP/text_classification/train.py\n--- a/fluid/PaddleNLP/text_classification/train.py\n+++ b/fluid/PaddleNLP/text_classification/train.py\n@@ -22,7 +22,6 @@\n parallel,\n save_dirname,\n lr=0.2,\n- batch_size=128,\n pass_num=30):\n \"\"\"\n train network\n@@ -100,8 +99,7 @@\n parallel=False,\n save_dirname=\"bow_model\",\n lr=0.002,\n- pass_num=30,\n- batch_size=4)\n+ pass_num=30)\n elif sys.argv[1] == \"cnn\":\n train(\n train_reader,\n@@ -111,8 +109,7 @@\n parallel=False,\n save_dirname=\"cnn_model\",\n lr=0.01,\n- pass_num=30,\n- batch_size=4)\n+ pass_num=30)\n elif sys.argv[1] == \"lstm\":\n train(\n train_reader,\n@@ -122,19 +119,17 @@\n parallel=False,\n save_dirname=\"lstm_model\",\n lr=0.05,\n- pass_num=30,\n- batch_size=4)\n+ pass_num=30)\n elif sys.argv[1] == \"gru\":\n train(\n train_reader,\n word_dict,\n- lstm_net,\n+ gru_net,\n use_cuda=True,\n parallel=False,\n save_dirname=\"gru_model\",\n lr=0.05,\n- pass_num=30,\n- batch_size=4)\n+ pass_num=30)\n else:\n print(\"network name cannot be found!\")\n sys.exit(1)\n", "issue": "\u6587\u672c\u5206\u7c7btext_classification\u8bad\u7ec3\u9009\u9879\u4e0d\u5339\u914d\n1. train.py\u4e2dbatch_size\u53c2\u6570\u5197\u4f59\r\n\r\n2. \u547d\u4ee4\u884c\u9009\u9879gru\u914d\u7f6e\u9519\u8bef\r\n\r\n3. GRU Layer embedding\u5b66\u4e60\u7387\u9ed8\u8ba4\u914d\u7f6e\u8fc7\u5927\uff0c\u5bfc\u81f4\u6a21\u578b\u8bad\u7ec3\u8865\u8bfe\u6536\u655b\u3002\n", "before_files": [{"content": "import os\nimport six\nimport sys\nimport time\nimport unittest\nimport contextlib\n\nimport paddle\nimport paddle.fluid as fluid\n\nimport utils\nfrom nets import bow_net\nfrom nets import cnn_net\nfrom nets import lstm_net\nfrom nets import gru_net\n\n\ndef train(train_reader,\n word_dict,\n network,\n use_cuda,\n parallel,\n save_dirname,\n lr=0.2,\n batch_size=128,\n pass_num=30):\n \"\"\"\n train network\n \"\"\"\n data = fluid.layers.data(\n name=\"words\", shape=[1], dtype=\"int64\", lod_level=1)\n\n label = fluid.layers.data(name=\"label\", shape=[1], dtype=\"int64\")\n\n if not parallel:\n cost, acc, prediction = network(data, label, len(word_dict))\n else:\n places = fluid.layers.device.get_places(device_count=2)\n pd = fluid.layers.ParallelDo(places)\n with pd.do():\n cost, acc, prediction = network(\n pd.read_input(data), pd.read_input(label), len(word_dict))\n\n pd.write_output(cost)\n pd.write_output(acc)\n\n cost, acc = pd()\n cost = fluid.layers.mean(cost)\n acc = fluid.layers.mean(acc)\n\n sgd_optimizer = fluid.optimizer.Adagrad(learning_rate=lr)\n sgd_optimizer.minimize(cost)\n\n place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()\n exe = fluid.Executor(place)\n feeder = fluid.DataFeeder(feed_list=[data, label], place=place)\n\n # For internal continuous evaluation\n if \"CE_MODE_X\" in os.environ:\n fluid.default_startup_program().random_seed = 110\n exe.run(fluid.default_startup_program())\n for pass_id in six.moves.xrange(pass_num):\n pass_start = time.time()\n data_size, data_count, total_acc, total_cost = 0, 0, 0.0, 0.0\n for data in train_reader():\n avg_cost_np, avg_acc_np = exe.run(fluid.default_main_program(),\n 
feed=feeder.feed(data),\n fetch_list=[cost, acc])\n data_size = len(data)\n total_acc += data_size * avg_acc_np\n total_cost += data_size * avg_cost_np\n data_count += data_size\n avg_cost = total_cost / data_count\n\n avg_acc = total_acc / data_count\n print(\"pass_id: %d, avg_acc: %f, avg_cost: %f, pass_time_cost: %f\" %\n (pass_id, avg_acc, avg_cost, time.time() - pass_start))\n\n epoch_model = save_dirname + \"/\" + \"epoch\" + str(pass_id)\n fluid.io.save_inference_model(epoch_model, [\"words\", \"label\"], acc, exe)\n\n pass_end = time.time()\n # For internal continuous evaluation\n if \"CE_MODE_X\" in os.environ:\n print(\"kpis\ttrain_acc\t%f\" % avg_acc)\n print(\"kpis\ttrain_cost\t%f\" % avg_cost)\n print(\"kpis\ttrain_duration\t%f\" % (pass_end - pass_start))\n\n\ndef train_net():\n word_dict, train_reader, test_reader = utils.prepare_data(\n \"imdb\", self_dict=False, batch_size=128, buf_size=50000)\n\n if sys.argv[1] == \"bow\":\n train(\n train_reader,\n word_dict,\n bow_net,\n use_cuda=False,\n parallel=False,\n save_dirname=\"bow_model\",\n lr=0.002,\n pass_num=30,\n batch_size=4)\n elif sys.argv[1] == \"cnn\":\n train(\n train_reader,\n word_dict,\n cnn_net,\n use_cuda=True,\n parallel=False,\n save_dirname=\"cnn_model\",\n lr=0.01,\n pass_num=30,\n batch_size=4)\n elif sys.argv[1] == \"lstm\":\n train(\n train_reader,\n word_dict,\n lstm_net,\n use_cuda=True,\n parallel=False,\n save_dirname=\"lstm_model\",\n lr=0.05,\n pass_num=30,\n batch_size=4)\n elif sys.argv[1] == \"gru\":\n train(\n train_reader,\n word_dict,\n lstm_net,\n use_cuda=True,\n parallel=False,\n save_dirname=\"gru_model\",\n lr=0.05,\n pass_num=30,\n batch_size=4)\n else:\n print(\"network name cannot be found!\")\n sys.exit(1)\n\n\nif __name__ == \"__main__\":\n train_net()\n", "path": "fluid/PaddleNLP/text_classification/train.py"}, {"content": "import sys\nimport time\nimport numpy as np\n\nimport paddle\nimport paddle.fluid as fluid\n\n\ndef bow_net(data,\n label,\n dict_dim,\n emb_dim=128,\n hid_dim=128,\n hid_dim2=96,\n class_dim=2):\n \"\"\"\n bow net\n \"\"\"\n emb = fluid.layers.embedding(input=data, size=[dict_dim, emb_dim])\n bow = fluid.layers.sequence_pool(input=emb, pool_type='sum')\n bow_tanh = fluid.layers.tanh(bow)\n fc_1 = fluid.layers.fc(input=bow_tanh, size=hid_dim, act=\"tanh\")\n fc_2 = fluid.layers.fc(input=fc_1, size=hid_dim2, act=\"tanh\")\n prediction = fluid.layers.fc(input=[fc_2], size=class_dim, act=\"softmax\")\n cost = fluid.layers.cross_entropy(input=prediction, label=label)\n avg_cost = fluid.layers.mean(x=cost)\n acc = fluid.layers.accuracy(input=prediction, label=label)\n\n return avg_cost, acc, prediction\n\n\ndef cnn_net(data,\n label,\n dict_dim,\n emb_dim=128,\n hid_dim=128,\n hid_dim2=96,\n class_dim=2,\n win_size=3):\n \"\"\"\n conv net\n \"\"\"\n emb = fluid.layers.embedding(input=data, size=[dict_dim, emb_dim])\n\n conv_3 = fluid.nets.sequence_conv_pool(\n input=emb,\n num_filters=hid_dim,\n filter_size=win_size,\n act=\"tanh\",\n pool_type=\"max\")\n\n fc_1 = fluid.layers.fc(input=[conv_3], size=hid_dim2)\n\n prediction = fluid.layers.fc(input=[fc_1], size=class_dim, act=\"softmax\")\n cost = fluid.layers.cross_entropy(input=prediction, label=label)\n avg_cost = fluid.layers.mean(x=cost)\n acc = fluid.layers.accuracy(input=prediction, label=label)\n\n return avg_cost, acc, prediction\n\n\ndef lstm_net(data,\n label,\n dict_dim,\n emb_dim=128,\n hid_dim=128,\n hid_dim2=96,\n class_dim=2,\n emb_lr=30.0):\n \"\"\"\n lstm net\n \"\"\"\n emb = 
fluid.layers.embedding(\n input=data,\n size=[dict_dim, emb_dim],\n param_attr=fluid.ParamAttr(learning_rate=emb_lr))\n\n fc0 = fluid.layers.fc(input=emb, size=hid_dim * 4)\n\n lstm_h, c = fluid.layers.dynamic_lstm(\n input=fc0, size=hid_dim * 4, is_reverse=False)\n\n lstm_max = fluid.layers.sequence_pool(input=lstm_h, pool_type='max')\n lstm_max_tanh = fluid.layers.tanh(lstm_max)\n\n fc1 = fluid.layers.fc(input=lstm_max_tanh, size=hid_dim2, act='tanh')\n\n prediction = fluid.layers.fc(input=fc1, size=class_dim, act='softmax')\n\n cost = fluid.layers.cross_entropy(input=prediction, label=label)\n avg_cost = fluid.layers.mean(x=cost)\n acc = fluid.layers.accuracy(input=prediction, label=label)\n\n return avg_cost, acc, prediction\n\n\ndef gru_net(data,\n label,\n dict_dim,\n emb_dim=128,\n hid_dim=128,\n hid_dim2=96,\n class_dim=2,\n emb_lr=400.0):\n \"\"\"\n gru net\n \"\"\"\n emb = fluid.layers.embedding(\n input=data,\n size=[dict_dim, emb_dim],\n param_attr=fluid.ParamAttr(learning_rate=emb_lr))\n\n fc0 = fluid.layers.fc(input=emb, size=hid_dim * 3)\n gru_h = fluid.layers.dynamic_gru(input=fc0, size=hid_dim, is_reverse=False)\n gru_max = fluid.layers.sequence_pool(input=gru_h, pool_type='max')\n gru_max_tanh = fluid.layers.tanh(gru_max)\n fc1 = fluid.layers.fc(input=gru_max_tanh, size=hid_dim2, act='tanh')\n prediction = fluid.layers.fc(input=fc1, size=class_dim, act='softmax')\n\n cost = fluid.layers.cross_entropy(input=prediction, label=label)\n avg_cost = fluid.layers.mean(x=cost)\n acc = fluid.layers.accuracy(input=prediction, label=label)\n\n return avg_cost, acc, prediction\n", "path": "fluid/PaddleNLP/text_classification/nets.py"}]} | 3,254 | 537 |
gh_patches_debug_21209 | rasdani/github-patches | git_diff | quantumlib__Cirq-2004 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ControlledOperation objects do not (un)pickle
Python version 3.6.8
Cirq Version = 0.6.0.dev (also in 0.5.0)
Attempting to unpickle any ops.ControlledOperation object, either by pickle or by dill, results in an error.
Code to reproduce:
```
import pickle
import cirq
import sympy
dump=pickle.dumps(cirq.Rx(sympy.Symbol('param')).on(cirq.LineQubit(0)).controlled_by(cirq.LineQubit(1)))
load=pickle.loads(dump)
```
Error message:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/sumneralperin-lea/anaconda3/envs/quantum/lib/python3.6/pickle.py", line 1558, in _loads
encoding=encoding, errors=errors).load()
File "/Users/sumneralperin-lea/anaconda3/envs/quantum/lib/python3.6/pickle.py", line 1050, in load
dispatch[key[0]](self)
File "/Users/sumneralperin-lea/anaconda3/envs/quantum/lib/python3.6/pickle.py", line 1323, in load_newobj
obj = cls.__new__(cls, *args)
TypeError: __new__() missing 2 required positional arguments: 'controls' and 'sub_operation'
```
Extra info/commentary:
Though this issue doesn't reside exclusively in cirq, it does restrict the (full) use of cirq with some standard Python modules. In my particular use case, it prevents using the multiprocessing module to parallelize the simulation of circuits.
</issue>
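The traceback shows the underlying mechanism: with pickle protocol 2+, the default reduction rebuilds an instance as `cls.__new__(cls)` with no extra arguments unless `__getnewargs__` is defined, so any class whose `__new__` has required parameters cannot be loaded. A minimal, Cirq-independent sketch of the same failure mode:

```
import pickle

class Needy:
    def __new__(cls, a, b):        # required args in __new__, like ControlledOperation
        return super().__new__(cls)

    def __init__(self, a, b):
        self.a, self.b = a, b

pickle.loads(pickle.dumps(Needy(1, 2)))
# TypeError: __new__() missing 2 required positional arguments: 'a' and 'b'
```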
<code>
[start of cirq/ops/controlled_operation.py]
1 # Copyright 2019 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Union, Any, Optional, List, Sequence
15
16 import numpy as np
17
18 from cirq import protocols, linalg, value
19 from cirq.ops import raw_types, gate_operation
20 from cirq.type_workarounds import NotImplementedType
21 from cirq.protocols import trace_distance_from_angle_list
22
23
24 @value.value_equality
25 class ControlledOperation(raw_types.Operation):
26 def __new__(cls,
27 controls: Sequence[raw_types.Qid],
28 sub_operation: raw_types.Operation):
29 """Auto-flatten nested controlled operations."""
30 if isinstance(sub_operation, ControlledOperation):
31 return ControlledOperation(
32 tuple(controls) + sub_operation.controls,
33 sub_operation.sub_operation)
34 return super().__new__(cls)
35
36 def __init__(self,
37 controls: Sequence[raw_types.Qid],
38 sub_operation: raw_types.Operation):
39 self.controls = tuple(controls)
40 self.sub_operation = sub_operation
41
42 @property
43 def qubits(self):
44 return self.controls + self.sub_operation.qubits
45
46 def with_qubits(self, *new_qubits):
47 n = len(self.controls)
48 return ControlledOperation(
49 new_qubits[:n],
50 self.sub_operation.with_qubits(*new_qubits[n:]))
51
52 def _decompose_(self):
53 result = protocols.decompose_once(self.sub_operation, NotImplemented)
54 if result is NotImplemented:
55 return NotImplemented
56
57 return [ControlledOperation(self.controls, op) for op in result]
58
59 def _value_equality_values_(self):
60 return frozenset(self.controls), self.sub_operation
61
62 def _apply_unitary_(self, args: protocols.ApplyUnitaryArgs) -> np.ndarray:
63 n = len(self.controls)
64 control_axes = args.axes[:n]
65 sub_axes = args.axes[n:]
66 active = linalg.slice_for_qubits_equal_to(control_axes, -1)
67 view_axes = _positions_after_removals_at(
68 initial_positions=sub_axes,
69 removals=control_axes)
70 target_view = args.target_tensor[active]
71 buffer_view = args.available_buffer[active]
72 result = protocols.apply_unitary(
73 self.sub_operation,
74 protocols.ApplyUnitaryArgs(
75 target_view,
76 buffer_view,
77 view_axes),
78 default=NotImplemented)
79
80 if result is NotImplemented:
81 return NotImplemented
82
83 if result is target_view:
84 return args.target_tensor
85
86 # HACK: assume they didn't somehow escape the slice view and edit the
87 # rest of target_tensor.
88 args.target_tensor[active] = result
89 return args.target_tensor
90
91 def _has_unitary_(self) -> bool:
92 return protocols.has_unitary(self.sub_operation)
93
94 def _unitary_(self) -> Union[np.ndarray, NotImplementedType]:
95 sub_matrix = protocols.unitary(self.sub_operation, None)
96 if sub_matrix is None:
97 return NotImplemented
98 return linalg.block_diag(
99 np.eye(pow(2, len(self.qubits))-sub_matrix.shape[0]),
100 sub_matrix)
101
102 def __str__(self):
103 if isinstance(self.sub_operation, gate_operation.GateOperation):
104 return '{}{}({})'.format(
105 'C' * len(self.controls),
106 self.sub_operation.gate,
107 ', '.join(map(str, self.qubits)))
108 return 'C({}, {})'.format(', '.join(str(q) for q in self.controls),
109 str(self.sub_operation))
110
111 def __repr__(self):
112 return ('cirq.ControlledOperation(controls={!r}, '
113 'sub_operation={!r})'.format(self.controls,
114 self.sub_operation))
115
116 def _is_parameterized_(self) -> bool:
117 return protocols.is_parameterized(self.sub_operation)
118
119 def _resolve_parameters_(self, resolver):
120 new_sub_op = protocols.resolve_parameters(self.sub_operation, resolver)
121 return ControlledOperation(self.controls, new_sub_op)
122
123 def _trace_distance_bound_(self) -> Optional[float]:
124 if self._is_parameterized_():
125 return None
126 u = protocols.unitary(self.sub_operation, default=None)
127 if u is None:
128 return NotImplemented
129 angle_list = np.append(np.angle(np.linalg.eigvals(u)), 0)
130 return trace_distance_from_angle_list(angle_list)
131
132 def __pow__(self, exponent: Any) -> 'ControlledOperation':
133 new_sub_op = protocols.pow(self.sub_operation,
134 exponent,
135 NotImplemented)
136 if new_sub_op is NotImplemented:
137 return NotImplemented
138 return ControlledOperation(self.controls, new_sub_op)
139
140 def _circuit_diagram_info_(self,
141 args: protocols.CircuitDiagramInfoArgs
142 ) -> Optional[protocols.CircuitDiagramInfo]:
143 n = len(self.controls)
144
145 sub_args = protocols.CircuitDiagramInfoArgs(
146 known_qubit_count=(args.known_qubit_count - n
147 if args.known_qubit_count is not None else None),
148 known_qubits=(args.known_qubits[n:]
149 if args.known_qubits is not None else None),
150 use_unicode_characters=args.use_unicode_characters,
151 precision=args.precision,
152 qubit_map=args.qubit_map)
153 sub_info = protocols.circuit_diagram_info(self.sub_operation,
154 sub_args,
155 None)
156 if sub_info is None:
157 return NotImplemented
158
159 return protocols.CircuitDiagramInfo(wire_symbols=('@',) * n +
160 sub_info.wire_symbols,
161 exponent=sub_info.exponent)
162
163
164 def _positions_after_removals_at(initial_positions: Sequence[int],
165 removals: Sequence[int]) -> List[int]:
166 # TODO: O(n lg n) instead of O(n**2)
167 result = []
168 for p in initial_positions:
169 change = len([1 for r in removals if r < p])
170 result.append(p - change)
171 return result
172
[end of cirq/ops/controlled_operation.py]
</code>
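Given the class above, there are two standard ways out: remove the custom `__new__` (folding the auto-flattening into `__init__`), or keep `__new__` and tell pickle which arguments to replay. A sketch of the second option, shown only for contrast (the accepted patch below takes the first route):

```
# hypothetical alternative fix, added to ControlledOperation
def __getnewargs__(self):
    # replayed into __new__ when unpickling with protocol 2+
    return (self.controls, self.sub_operation)
```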
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cirq/ops/controlled_operation.py b/cirq/ops/controlled_operation.py
--- a/cirq/ops/controlled_operation.py
+++ b/cirq/ops/controlled_operation.py
@@ -23,21 +23,17 @@
@value.value_equality
class ControlledOperation(raw_types.Operation):
- def __new__(cls,
- controls: Sequence[raw_types.Qid],
- sub_operation: raw_types.Operation):
- """Auto-flatten nested controlled operations."""
- if isinstance(sub_operation, ControlledOperation):
- return ControlledOperation(
- tuple(controls) + sub_operation.controls,
- sub_operation.sub_operation)
- return super().__new__(cls)
def __init__(self,
controls: Sequence[raw_types.Qid],
sub_operation: raw_types.Operation):
- self.controls = tuple(controls)
- self.sub_operation = sub_operation
+ if not isinstance(sub_operation, ControlledOperation):
+ self.controls = tuple(controls)
+ self.sub_operation = sub_operation
+ else:
+ # Auto-flatten nested controlled operations.
+ self.controls = tuple(controls) + sub_operation.controls
+ self.sub_operation = sub_operation.sub_operation
@property
def qubits(self):
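With `__new__` gone, the default pickling path can reconstruct the object again. A round-trip check along the lines of the original report (a sketch using the same construction as the issue):

```
import pickle

import cirq
import sympy

op = cirq.Rx(sympy.Symbol('param')).on(cirq.LineQubit(0)).controlled_by(cirq.LineQubit(1))
assert pickle.loads(pickle.dumps(op)) == op
```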
| {"golden_diff": "diff --git a/cirq/ops/controlled_operation.py b/cirq/ops/controlled_operation.py\n--- a/cirq/ops/controlled_operation.py\n+++ b/cirq/ops/controlled_operation.py\n@@ -23,21 +23,17 @@\n \n @value.value_equality\n class ControlledOperation(raw_types.Operation):\n- def __new__(cls,\n- controls: Sequence[raw_types.Qid],\n- sub_operation: raw_types.Operation):\n- \"\"\"Auto-flatten nested controlled operations.\"\"\"\n- if isinstance(sub_operation, ControlledOperation):\n- return ControlledOperation(\n- tuple(controls) + sub_operation.controls,\n- sub_operation.sub_operation)\n- return super().__new__(cls)\n \n def __init__(self,\n controls: Sequence[raw_types.Qid],\n sub_operation: raw_types.Operation):\n- self.controls = tuple(controls)\n- self.sub_operation = sub_operation\n+ if not isinstance(sub_operation, ControlledOperation):\n+ self.controls = tuple(controls)\n+ self.sub_operation = sub_operation\n+ else:\n+ # Auto-flatten nested controlled operations.\n+ self.controls = tuple(controls) + sub_operation.controls\n+ self.sub_operation = sub_operation.sub_operation\n \n @property\n def qubits(self):\n", "issue": "ControlledOperation objects do not (un)pickle\nPython version 3.6.8\r\nCirq Version = 0.6.0.dev (also in 0.5.0)\r\n\r\nAttempting to unpickle any ops.ControlledOperation object, either by pickle or by dill, results in an error.\r\n\r\nCode to reproduce:\r\n\r\n```\r\nimport pickle\r\nimport cirq\r\nimport sympy\r\n\r\ndump=pickle.dumps(cirq.Rx(sympy.Symbol('param')).on(cirq.LineQubit(0)).controlled_by(cirq.LineQubit(1)))\r\nload=pickle.loads(dump)\r\n```\r\n\r\nError message:\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/sumneralperin-lea/anaconda3/envs/quantum/lib/python3.6/pickle.py\", line 1558, in _loads\r\n encoding=encoding, errors=errors).load()\r\n File \"/Users/sumneralperin-lea/anaconda3/envs/quantum/lib/python3.6/pickle.py\", line 1050, in load\r\n dispatch[key[0]](self)\r\n File \"/Users/sumneralperin-lea/anaconda3/envs/quantum/lib/python3.6/pickle.py\", line 1323, in load_newobj\r\n obj = cls.__new__(cls, *args)\r\nTypeError: __new__() missing 2 required positional arguments: 'controls' and 'sub_operation'\r\n```\r\nExtra info/commentary:\r\n\r\nThough this issue doesn't reside exclusively in cirq, it does restrict the (full) use of cirq with some standard python modules. 
In my particular use case, it prevents use of the multiprocessing module to parallelize the simulation of circuits.\r\n\r\n\n", "before_files": [{"content": "# Copyright 2019 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Union, Any, Optional, List, Sequence\n\nimport numpy as np\n\nfrom cirq import protocols, linalg, value\nfrom cirq.ops import raw_types, gate_operation\nfrom cirq.type_workarounds import NotImplementedType\nfrom cirq.protocols import trace_distance_from_angle_list\n\n\[email protected]_equality\nclass ControlledOperation(raw_types.Operation):\n def __new__(cls,\n controls: Sequence[raw_types.Qid],\n sub_operation: raw_types.Operation):\n \"\"\"Auto-flatten nested controlled operations.\"\"\"\n if isinstance(sub_operation, ControlledOperation):\n return ControlledOperation(\n tuple(controls) + sub_operation.controls,\n sub_operation.sub_operation)\n return super().__new__(cls)\n\n def __init__(self,\n controls: Sequence[raw_types.Qid],\n sub_operation: raw_types.Operation):\n self.controls = tuple(controls)\n self.sub_operation = sub_operation\n\n @property\n def qubits(self):\n return self.controls + self.sub_operation.qubits\n\n def with_qubits(self, *new_qubits):\n n = len(self.controls)\n return ControlledOperation(\n new_qubits[:n],\n self.sub_operation.with_qubits(*new_qubits[n:]))\n\n def _decompose_(self):\n result = protocols.decompose_once(self.sub_operation, NotImplemented)\n if result is NotImplemented:\n return NotImplemented\n\n return [ControlledOperation(self.controls, op) for op in result]\n\n def _value_equality_values_(self):\n return frozenset(self.controls), self.sub_operation\n\n def _apply_unitary_(self, args: protocols.ApplyUnitaryArgs) -> np.ndarray:\n n = len(self.controls)\n control_axes = args.axes[:n]\n sub_axes = args.axes[n:]\n active = linalg.slice_for_qubits_equal_to(control_axes, -1)\n view_axes = _positions_after_removals_at(\n initial_positions=sub_axes,\n removals=control_axes)\n target_view = args.target_tensor[active]\n buffer_view = args.available_buffer[active]\n result = protocols.apply_unitary(\n self.sub_operation,\n protocols.ApplyUnitaryArgs(\n target_view,\n buffer_view,\n view_axes),\n default=NotImplemented)\n\n if result is NotImplemented:\n return NotImplemented\n\n if result is target_view:\n return args.target_tensor\n\n # HACK: assume they didn't somehow escape the slice view and edit the\n # rest of target_tensor.\n args.target_tensor[active] = result\n return args.target_tensor\n\n def _has_unitary_(self) -> bool:\n return protocols.has_unitary(self.sub_operation)\n\n def _unitary_(self) -> Union[np.ndarray, NotImplementedType]:\n sub_matrix = protocols.unitary(self.sub_operation, None)\n if sub_matrix is None:\n return NotImplemented\n return linalg.block_diag(\n np.eye(pow(2, len(self.qubits))-sub_matrix.shape[0]),\n sub_matrix)\n\n def __str__(self):\n if isinstance(self.sub_operation, gate_operation.GateOperation):\n return '{}{}({})'.format(\n 'C' * len(self.controls),\n 
self.sub_operation.gate,\n ', '.join(map(str, self.qubits)))\n return 'C({}, {})'.format(', '.join(str(q) for q in self.controls),\n str(self.sub_operation))\n\n def __repr__(self):\n return ('cirq.ControlledOperation(controls={!r}, '\n 'sub_operation={!r})'.format(self.controls,\n self.sub_operation))\n\n def _is_parameterized_(self) -> bool:\n return protocols.is_parameterized(self.sub_operation)\n\n def _resolve_parameters_(self, resolver):\n new_sub_op = protocols.resolve_parameters(self.sub_operation, resolver)\n return ControlledOperation(self.controls, new_sub_op)\n\n def _trace_distance_bound_(self) -> Optional[float]:\n if self._is_parameterized_():\n return None\n u = protocols.unitary(self.sub_operation, default=None)\n if u is None:\n return NotImplemented\n angle_list = np.append(np.angle(np.linalg.eigvals(u)), 0)\n return trace_distance_from_angle_list(angle_list)\n\n def __pow__(self, exponent: Any) -> 'ControlledOperation':\n new_sub_op = protocols.pow(self.sub_operation,\n exponent,\n NotImplemented)\n if new_sub_op is NotImplemented:\n return NotImplemented\n return ControlledOperation(self.controls, new_sub_op)\n\n def _circuit_diagram_info_(self,\n args: protocols.CircuitDiagramInfoArgs\n ) -> Optional[protocols.CircuitDiagramInfo]:\n n = len(self.controls)\n\n sub_args = protocols.CircuitDiagramInfoArgs(\n known_qubit_count=(args.known_qubit_count - n\n if args.known_qubit_count is not None else None),\n known_qubits=(args.known_qubits[n:]\n if args.known_qubits is not None else None),\n use_unicode_characters=args.use_unicode_characters,\n precision=args.precision,\n qubit_map=args.qubit_map)\n sub_info = protocols.circuit_diagram_info(self.sub_operation,\n sub_args,\n None)\n if sub_info is None:\n return NotImplemented\n\n return protocols.CircuitDiagramInfo(wire_symbols=('@',) * n +\n sub_info.wire_symbols,\n exponent=sub_info.exponent)\n\n\ndef _positions_after_removals_at(initial_positions: Sequence[int],\n removals: Sequence[int]) -> List[int]:\n # TODO: O(n lg n) instead of O(n**2)\n result = []\n for p in initial_positions:\n change = len([1 for r in removals if r < p])\n result.append(p - change)\n return result\n", "path": "cirq/ops/controlled_operation.py"}]} | 2,696 | 275 |
gh_patches_debug_25242 | rasdani/github-patches | git_diff | fedora-infra__bodhi-5479 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Updates sometimes get stuck in pending state
From a quick look, it seems that the composer does:
1. move from `f*-updates-candidate` to `f*-updates-testing`
2. do stuff
3. untag from `f*-updates-testing-pending`
4. mark update state as testing
If the composer hangs at step 2, the update remains stuck in pending, because the builds are no longer tagged in `f*-updates-candidate`. We should find a solution.
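
One rough direction (sketch only, helper name made up): if a pending update's builds already carry the release's testing tag but no longer the candidate tag, assume the composer died mid-run and resubmit the update to testing:

```python
# Rough sketch, not a tested fix; `rescue_stuck_update` is a made-up name.
from bodhi.server import models


def rescue_stuck_update(session, update, build_tags):
    """Resubmit a pending update whose builds were already moved to testing."""
    release = update.release
    if (release.testing_tag in build_tags
            and release.candidate_tag not in build_tags):
        # The composer probably died after retagging but before flipping the
        # update's status, so push it back through the testing request.
        update.set_request(session, models.UpdateRequest.testing, 'bodhi')
```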
</issue>
<code>
[start of bodhi-server/bodhi/server/tasks/check_signed_builds.py]
1 # Copyright © 2017 Red Hat, Inc.
2 #
3 # This file is part of Bodhi.
4 #
5 # This program is free software; you can redistribute it and/or
6 # modify it under the terms of the GNU General Public License
7 # as published by the Free Software Foundation; either version 2
8 # of the License, or (at your option) any later version.
9 #
10 # This program is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with this program; if not, write to the Free Software
17 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
18
19 """
20 Avoid Updates being stuck in pending.
21
22 It may happen that Bodhi misses fedora-messaging messages announcing builds
23 have been signed.
24 In these cases, the Update remain stuck in pending until a manual intervention.
25
26 This script will cycle through builds of Updates in pending status and update
27 the signed status in the db to match the tags found in Koji.
28 """
29
30 import logging
31 from datetime import datetime, timedelta
32
33 from bodhi.server import buildsys, models
34 from bodhi.server.config import config
35 from bodhi.server.util import transactional_session_maker
36
37
38 log = logging.getLogger(__name__)
39
40
41 def main():
42 """Check build tags and sign those we missed."""
43 db_factory = transactional_session_maker()
44 older_than = datetime.utcnow() - timedelta(days=config.get('check_signed_builds_delay'))
45 with db_factory() as session:
46 updates = models.Update.query.filter(
47 models.Update.status == models.UpdateStatus.pending
48 ).filter(
49 models.Update.release_id == models.Release.id
50 ).filter(
51 models.Release.state.in_([
52 models.ReleaseState.current,
53 models.ReleaseState.pending,
54 models.ReleaseState.frozen,
55 ])
56 ).all()
57
58 if len(updates) == 0:
59 log.debug('No stuck Updates found')
60 return
61
62 kc = buildsys.get_session()
63 stuck_builds = []
64 overlooked_builds = []
65
66 for update in updates:
67 # Let Bodhi have its times
68 if update.date_submitted >= older_than:
69 continue
70 builds = update.builds
71 # Clean Updates with no builds
72 if len(builds) == 0:
73 log.debug(f'Obsoleting empty update {update.alias}')
74 update.obsolete(session)
75 session.flush()
76 continue
77 pending_signing_tag = update.release.pending_signing_tag
78 pending_testing_tag = update.release.pending_testing_tag
79 for build in builds:
80 if build.signed:
81 log.debug(f'{build.nvr} already marked as signed')
82 continue
83 build_tags = [t['name'] for t in kc.listTags(build=build.nvr)]
84 if pending_signing_tag not in build_tags and pending_testing_tag in build_tags:
85 # Our composer missed the message that the build got signed
86 log.debug(f'Changing signed status of {build.nvr}')
87 build.signed = True
88 elif pending_signing_tag in build_tags and pending_testing_tag not in build_tags:
89 # autosign missed the message that the build is waiting to be signed
90 log.debug(f'{build.nvr} is stuck waiting to be signed, let\'s try again')
91 stuck_builds.append((build.nvr, pending_signing_tag))
92 elif (pending_signing_tag not in build_tags
93 and pending_testing_tag not in build_tags):
94 # this means that an update has been created but we never tagged the build
95 # as pending-signing
96 log.debug(f'Oh, no! We\'ve never sent {build.nvr} for signing, let\'s fix it')
97 overlooked_builds.append((build.nvr, pending_signing_tag))
98 session.flush()
99
100 if stuck_builds:
101 kc.multicall = True
102 for b, t in stuck_builds:
103 kc.untagBuild(t, b, force=True)
104 kc.multiCall()
105 for b, t in stuck_builds:
106 kc.tagBuild(t, b, force=True)
107 kc.multiCall()
108
109 if overlooked_builds:
110 kc.multicall = True
111 for b, t in overlooked_builds:
112 kc.tagBuild(t, b, force=True)
113 kc.multiCall()
114
[end of bodhi-server/bodhi/server/tasks/check_signed_builds.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bodhi-server/bodhi/server/tasks/check_signed_builds.py b/bodhi-server/bodhi/server/tasks/check_signed_builds.py
--- a/bodhi-server/bodhi/server/tasks/check_signed_builds.py
+++ b/bodhi-server/bodhi/server/tasks/check_signed_builds.py
@@ -77,10 +77,16 @@
pending_signing_tag = update.release.pending_signing_tag
pending_testing_tag = update.release.pending_testing_tag
for build in builds:
+ build_tags = [t['name'] for t in kc.listTags(build=build.nvr)]
if build.signed:
log.debug(f'{build.nvr} already marked as signed')
+ if (update.release.testing_tag in build_tags
+ and update.release.candidate_tag not in build_tags):
+ # The update was probably ejected from a compose and is stuck
+ log.debug(f'Resubmitting {update.alias} to testing')
+ update.set_request(session, models.UpdateRequest.testing, 'bodhi')
+ break
continue
- build_tags = [t['name'] for t in kc.listTags(build=build.nvr)]
if pending_signing_tag not in build_tags and pending_testing_tag in build_tags:
# Our composer missed the message that the build got signed
log.debug(f'Changing signed status of {build.nvr}')
| {"golden_diff": "diff --git a/bodhi-server/bodhi/server/tasks/check_signed_builds.py b/bodhi-server/bodhi/server/tasks/check_signed_builds.py\n--- a/bodhi-server/bodhi/server/tasks/check_signed_builds.py\n+++ b/bodhi-server/bodhi/server/tasks/check_signed_builds.py\n@@ -77,10 +77,16 @@\n pending_signing_tag = update.release.pending_signing_tag\n pending_testing_tag = update.release.pending_testing_tag\n for build in builds:\n+ build_tags = [t['name'] for t in kc.listTags(build=build.nvr)]\n if build.signed:\n log.debug(f'{build.nvr} already marked as signed')\n+ if (update.release.testing_tag in build_tags\n+ and update.release.candidate_tag not in build_tags):\n+ # The update was probably ejected from a compose and is stuck\n+ log.debug(f'Resubmitting {update.alias} to testing')\n+ update.set_request(session, models.UpdateRequest.testing, 'bodhi')\n+ break\n continue\n- build_tags = [t['name'] for t in kc.listTags(build=build.nvr)]\n if pending_signing_tag not in build_tags and pending_testing_tag in build_tags:\n # Our composer missed the message that the build got signed\n log.debug(f'Changing signed status of {build.nvr}')\n", "issue": "Updates sometimes get stuck in pending state\nFrom quick look, it seems that the composer does:\r\n1. move from `f*-updates-candidate` to `f*-updates-testing`\r\n2. do stuff\r\n3. untag from `f*-updates-testing-pending`\r\n4. mark update state as testing\r\n\r\nIf the composer hang on 2 the update remains stuck in pending as the builds are not tagged anymore in `f*-updates-candidate`. We should find a solution.\n", "before_files": [{"content": "# Copyright \u00a9 2017 Red Hat, Inc.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\n\"\"\"\nAvoid Updates being stuck in pending.\n\nIt may happen that Bodhi misses fedora-messaging messages announcing builds\nhave been signed.\nIn these cases, the Update remain stuck in pending until a manual intervention.\n\nThis script will cycle through builds of Updates in pending status and update\nthe signed status in the db to match the tags found in Koji.\n\"\"\"\n\nimport logging\nfrom datetime import datetime, timedelta\n\nfrom bodhi.server import buildsys, models\nfrom bodhi.server.config import config\nfrom bodhi.server.util import transactional_session_maker\n\n\nlog = logging.getLogger(__name__)\n\n\ndef main():\n \"\"\"Check build tags and sign those we missed.\"\"\"\n db_factory = transactional_session_maker()\n older_than = datetime.utcnow() - timedelta(days=config.get('check_signed_builds_delay'))\n with db_factory() as session:\n updates = models.Update.query.filter(\n models.Update.status == models.UpdateStatus.pending\n ).filter(\n models.Update.release_id == models.Release.id\n ).filter(\n models.Release.state.in_([\n models.ReleaseState.current,\n models.ReleaseState.pending,\n models.ReleaseState.frozen,\n ])\n ).all()\n\n if len(updates) == 0:\n log.debug('No stuck Updates found')\n return\n\n kc = buildsys.get_session()\n stuck_builds = []\n overlooked_builds = []\n\n for update in updates:\n # Let Bodhi have its times\n if update.date_submitted >= older_than:\n continue\n builds = update.builds\n # Clean Updates with no builds\n if len(builds) == 0:\n log.debug(f'Obsoleting empty update {update.alias}')\n update.obsolete(session)\n session.flush()\n continue\n pending_signing_tag = update.release.pending_signing_tag\n pending_testing_tag = update.release.pending_testing_tag\n for build in builds:\n if build.signed:\n log.debug(f'{build.nvr} already marked as signed')\n continue\n build_tags = [t['name'] for t in kc.listTags(build=build.nvr)]\n if pending_signing_tag not in build_tags and pending_testing_tag in build_tags:\n # Our composer missed the message that the build got signed\n log.debug(f'Changing signed status of {build.nvr}')\n build.signed = True\n elif pending_signing_tag in build_tags and pending_testing_tag not in build_tags:\n # autosign missed the message that the build is waiting to be signed\n log.debug(f'{build.nvr} is stuck waiting to be signed, let\\'s try again')\n stuck_builds.append((build.nvr, pending_signing_tag))\n elif (pending_signing_tag not in build_tags\n and pending_testing_tag not in build_tags):\n # this means that an update has been created but we never tagged the build\n # as pending-signing\n log.debug(f'Oh, no! We\\'ve never sent {build.nvr} for signing, let\\'s fix it')\n overlooked_builds.append((build.nvr, pending_signing_tag))\n session.flush()\n\n if stuck_builds:\n kc.multicall = True\n for b, t in stuck_builds:\n kc.untagBuild(t, b, force=True)\n kc.multiCall()\n for b, t in stuck_builds:\n kc.tagBuild(t, b, force=True)\n kc.multiCall()\n\n if overlooked_builds:\n kc.multicall = True\n for b, t in overlooked_builds:\n kc.tagBuild(t, b, force=True)\n kc.multiCall()\n", "path": "bodhi-server/bodhi/server/tasks/check_signed_builds.py"}]} | 1,844 | 300 |
gh_patches_debug_25859 | rasdani/github-patches | git_diff | ansible__ansible-35363 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ios_command module : can not use "configure revert" commands
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- ios_command
##### ANSIBLE VERSION
- 2.x
##### SUMMARY: ios_command module should not filter out all configure commands
ios_command is a wonderful module; however, I found a minor issue:

On line 165 there is a sanity check that verifies that the user is not sending a configuration command:
` elif item['command'].startswith('conf'):`
With this test, it is not possible to send the following commands:
- configure revert timer 5
- configure confirm
- configure replace
- configure network
which ARE valid exec commands and do NOT enter configuration mode.
Fortunately (for me), a basic workaround is to insert a space (maybe it is intended):
- [space] configure revert timer 5
I propose to change the test and use something like (sorry, I am not familiar with Python):
` elif re.match("^\s*conf\S*\s+t", item['command']) `
just to ensure that the user has NOT entered one of the various `configure terminal` flavors.
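
As a rough sketch of that idea (the helper name is made up and the regex may need tuning):

```python
import re

# Only "configure terminal"-style commands should be rejected;
# "configure revert/confirm/replace/network" should stay allowed.
CONFIG_TERMINAL_RE = re.compile(r'^\s*conf\S*\s+t')


def enters_config_mode(command):
    """Return True when the command would enter configuration mode."""
    return bool(CONFIG_TERMINAL_RE.match(command))


# enters_config_mode('configure terminal')         -> True
# enters_config_mode(' configure revert timer 5')  -> False
```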
##### STEPS TO REPRODUCE
Use the following playbook:
<pre>
---
- name: try
hosts: all
connection: local
gather_facts: no
vars:
cli:
host: "{{ inventory_hostname }}"
username: cisco
password: cisco
transport: cli
- name: This work (note the space at the beginning of the command line)
ios_command:
commands:
- clear configuration lock
- " configure revert timer 8"
- name: This will not work (but should)
ios_command:
commands:
- clear configuration lock
- "configure revert timer 8"
</pre>
</issue>
<code>
[start of lib/ansible/modules/network/ios/ios_command.py]
1 #!/usr/bin/python
2 #
3 # This file is part of Ansible
4 #
5 # Ansible is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # Ansible is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
17 #
18
19 ANSIBLE_METADATA = {'metadata_version': '1.1',
20 'status': ['preview'],
21 'supported_by': 'network'}
22
23
24 DOCUMENTATION = """
25 ---
26 module: ios_command
27 version_added: "2.1"
28 author: "Peter Sprygada (@privateip)"
29 short_description: Run commands on remote devices running Cisco IOS
30 description:
31 - Sends arbitrary commands to an ios node and returns the results
32 read from the device. This module includes an
33 argument that will cause the module to wait for a specific condition
34 before returning or timing out if the condition is not met.
35 - This module does not support running commands in configuration mode.
36 Please use M(ios_config) to configure IOS devices.
37 extends_documentation_fragment: ios
38 notes:
39 - Tested against IOS 15.6
40 options:
41 commands:
42 description:
43 - List of commands to send to the remote ios device over the
44 configured provider. The resulting output from the command
45 is returned. If the I(wait_for) argument is provided, the
46 module is not returned until the condition is satisfied or
47 the number of retries has expired.
48 required: true
49 wait_for:
50 description:
51 - List of conditions to evaluate against the output of the
52 command. The task will wait for each condition to be true
53 before moving forward. If the conditional is not true
54 within the configured number of retries, the task fails.
55 See examples.
56 required: false
57 default: null
58 aliases: ['waitfor']
59 version_added: "2.2"
60 match:
61 description:
62 - The I(match) argument is used in conjunction with the
63 I(wait_for) argument to specify the match policy. Valid
64 values are C(all) or C(any). If the value is set to C(all)
65 then all conditionals in the wait_for must be satisfied. If
66 the value is set to C(any) then only one of the values must be
67 satisfied.
68 required: false
69 default: all
70 choices: ['any', 'all']
71 version_added: "2.2"
72 retries:
73 description:
74 - Specifies the number of retries a command should by tried
75 before it is considered failed. The command is run on the
76 target device every retry and evaluated against the
77 I(wait_for) conditions.
78 required: false
79 default: 10
80 interval:
81 description:
82 - Configures the interval in seconds to wait between retries
83 of the command. If the command does not pass the specified
84 conditions, the interval indicates how long to wait before
85 trying the command again.
86 required: false
87 default: 1
88 """
89
90 EXAMPLES = """
91 tasks:
92 - name: run show version on remote devices
93 ios_command:
94 commands: show version
95
96 - name: run show version and check to see if output contains IOS
97 ios_command:
98 commands: show version
99 wait_for: result[0] contains IOS
100
101 - name: run multiple commands on remote nodes
102 ios_command:
103 commands:
104 - show version
105 - show interfaces
106
107 - name: run multiple commands and evaluate the output
108 ios_command:
109 commands:
110 - show version
111 - show interfaces
112 wait_for:
113 - result[0] contains IOS
114 - result[1] contains Loopback0
115 """
116
117 RETURN = """
118 stdout:
119 description: The set of responses from the commands
120 returned: always apart from low level errors (such as action plugin)
121 type: list
122 sample: ['...', '...']
123 stdout_lines:
124 description: The value of stdout split into a list
125 returned: always apart from low level errors (such as action plugin)
126 type: list
127 sample: [['...', '...'], ['...'], ['...']]
128 failed_conditions:
129 description: The list of conditionals that have failed
130 returned: failed
131 type: list
132 sample: ['...', '...']
133 """
134 import time
135
136 from ansible.module_utils.network.ios.ios import run_commands
137 from ansible.module_utils.network.ios.ios import ios_argument_spec, check_args
138 from ansible.module_utils.basic import AnsibleModule
139 from ansible.module_utils.network.common.utils import ComplexList
140 from ansible.module_utils.network.common.parsing import Conditional
141 from ansible.module_utils.six import string_types
142
143
144 def to_lines(stdout):
145 for item in stdout:
146 if isinstance(item, string_types):
147 item = str(item).split('\n')
148 yield item
149
150
151 def parse_commands(module, warnings):
152 command = ComplexList(dict(
153 command=dict(key=True),
154 prompt=dict(),
155 answer=dict()
156 ), module)
157 commands = command(module.params['commands'])
158 for item in list(commands):
159 if module.check_mode and not item['command'].startswith('show'):
160 warnings.append(
161 'only show commands are supported when using check mode, not '
162 'executing `%s`' % item['command']
163 )
164 commands.remove(item)
165 elif item['command'].startswith('conf'):
166 module.fail_json(
167 msg='ios_command does not support running config mode '
168 'commands. Please use ios_config instead'
169 )
170 return commands
171
172
173 def main():
174 """main entry point for module execution
175 """
176 argument_spec = dict(
177 commands=dict(type='list', required=True),
178
179 wait_for=dict(type='list', aliases=['waitfor']),
180 match=dict(default='all', choices=['all', 'any']),
181
182 retries=dict(default=10, type='int'),
183 interval=dict(default=1, type='int')
184 )
185
186 argument_spec.update(ios_argument_spec)
187
188 module = AnsibleModule(argument_spec=argument_spec,
189 supports_check_mode=True)
190
191 result = {'changed': False}
192
193 warnings = list()
194 check_args(module, warnings)
195 commands = parse_commands(module, warnings)
196 result['warnings'] = warnings
197
198 wait_for = module.params['wait_for'] or list()
199 conditionals = [Conditional(c) for c in wait_for]
200
201 retries = module.params['retries']
202 interval = module.params['interval']
203 match = module.params['match']
204
205 while retries > 0:
206 responses = run_commands(module, commands)
207
208 for item in list(conditionals):
209 if item(responses):
210 if match == 'any':
211 conditionals = list()
212 break
213 conditionals.remove(item)
214
215 if not conditionals:
216 break
217
218 time.sleep(interval)
219 retries -= 1
220
221 if conditionals:
222 failed_conditions = [item.raw for item in conditionals]
223 msg = 'One or more conditional statements have not be satisfied'
224 module.fail_json(msg=msg, failed_conditions=failed_conditions)
225
226 result.update({
227 'changed': False,
228 'stdout': responses,
229 'stdout_lines': list(to_lines(responses))
230 })
231
232 module.exit_json(**result)
233
234
235 if __name__ == '__main__':
236 main()
237
[end of lib/ansible/modules/network/ios/ios_command.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/ansible/modules/network/ios/ios_command.py b/lib/ansible/modules/network/ios/ios_command.py
--- a/lib/ansible/modules/network/ios/ios_command.py
+++ b/lib/ansible/modules/network/ios/ios_command.py
@@ -131,6 +131,7 @@
type: list
sample: ['...', '...']
"""
+import re
import time
from ansible.module_utils.network.ios.ios import run_commands
@@ -156,13 +157,14 @@
), module)
commands = command(module.params['commands'])
for item in list(commands):
+ configure_type = re.match(r'conf(?:\w*)(?:\s+(\w+))?', item['command'])
if module.check_mode and not item['command'].startswith('show'):
warnings.append(
'only show commands are supported when using check mode, not '
'executing `%s`' % item['command']
)
commands.remove(item)
- elif item['command'].startswith('conf'):
+ elif configure_type and configure_type.group(1) not in ('confirm', 'replace', 'revert', 'network'):
module.fail_json(
msg='ios_command does not support running config mode '
'commands. Please use ios_config instead'
| {"golden_diff": "diff --git a/lib/ansible/modules/network/ios/ios_command.py b/lib/ansible/modules/network/ios/ios_command.py\n--- a/lib/ansible/modules/network/ios/ios_command.py\n+++ b/lib/ansible/modules/network/ios/ios_command.py\n@@ -131,6 +131,7 @@\n type: list\n sample: ['...', '...']\n \"\"\"\n+import re\n import time\n \n from ansible.module_utils.network.ios.ios import run_commands\n@@ -156,13 +157,14 @@\n ), module)\n commands = command(module.params['commands'])\n for item in list(commands):\n+ configure_type = re.match(r'conf(?:\\w*)(?:\\s+(\\w+))?', item['command'])\n if module.check_mode and not item['command'].startswith('show'):\n warnings.append(\n 'only show commands are supported when using check mode, not '\n 'executing `%s`' % item['command']\n )\n commands.remove(item)\n- elif item['command'].startswith('conf'):\n+ elif configure_type and configure_type.group(1) not in ('confirm', 'replace', 'revert', 'network'):\n module.fail_json(\n msg='ios_command does not support running config mode '\n 'commands. Please use ios_config instead'\n", "issue": "ios_command module : can not use \"configure revert\" commands\n##### ISSUE TYPE \r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\n- ios_command\r\n\r\n##### ANSIBLE VERSION\r\n- 2.x\r\n\r\n##### SUMMARY: ios_command module should not filter out all configure commands\r\n\r\nios_command is a wonderful module, however i found a minor issue :\r\n\r\nOn line 165 there is a sanity check that verify that the user is not sending a configuration command\r\n ` elif item['command'].startswith('conf'):` \r\n\r\nWith this test, it is not possible to send the following commands :\r\n - configure revert timer 5\r\n - configure confirm\r\n - configure replace\r\n - configure network\r\nwhich ARE valid exec commands and do NOT enter in configuration mode.\r\n\r\nHopefully (for me), a basic workaround is to insert a space (may be it is intented) :\r\n - [space] configure revert timer 5\r\n\r\nI propose to change the test and use something like (sorry i am not familiar with python) :\r\n` elif re.match(\"^\\s*conf\\S*\\s+t\", item['command']) `\r\njust to ensure that the user has NOT entered one of various configure terminal flavors.\r\n\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\nUse the following playbook : \r\n\r\n<pre> \r\n---\r\n- name: try\r\n hosts: all\r\n connection: local\r\n gather_facts: no\r\n\r\n vars:\r\n cli:\r\n host: \"{{ inventory_hostname }}\"\r\n username: cisco\r\n password: cisco\r\n transport: cli\r\n\r\n - name: This work (note the space at the beginning of the command line)\r\n ios_command:\r\n commands:\r\n - clear configuration lock\r\n - \" configure revert timer 8\"\r\n\r\n - name: This will not work (but should)\r\n ios_command:\r\n commands:\r\n - clear configuration lock\r\n - \"configure revert timer 8\"\r\n</pre> \r\n\n", "before_files": [{"content": "#!/usr/bin/python\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. 
If not, see <http://www.gnu.org/licenses/>.\n#\n\nANSIBLE_METADATA = {'metadata_version': '1.1',\n 'status': ['preview'],\n 'supported_by': 'network'}\n\n\nDOCUMENTATION = \"\"\"\n---\nmodule: ios_command\nversion_added: \"2.1\"\nauthor: \"Peter Sprygada (@privateip)\"\nshort_description: Run commands on remote devices running Cisco IOS\ndescription:\n - Sends arbitrary commands to an ios node and returns the results\n read from the device. This module includes an\n argument that will cause the module to wait for a specific condition\n before returning or timing out if the condition is not met.\n - This module does not support running commands in configuration mode.\n Please use M(ios_config) to configure IOS devices.\nextends_documentation_fragment: ios\nnotes:\n - Tested against IOS 15.6\noptions:\n commands:\n description:\n - List of commands to send to the remote ios device over the\n configured provider. The resulting output from the command\n is returned. If the I(wait_for) argument is provided, the\n module is not returned until the condition is satisfied or\n the number of retries has expired.\n required: true\n wait_for:\n description:\n - List of conditions to evaluate against the output of the\n command. The task will wait for each condition to be true\n before moving forward. If the conditional is not true\n within the configured number of retries, the task fails.\n See examples.\n required: false\n default: null\n aliases: ['waitfor']\n version_added: \"2.2\"\n match:\n description:\n - The I(match) argument is used in conjunction with the\n I(wait_for) argument to specify the match policy. Valid\n values are C(all) or C(any). If the value is set to C(all)\n then all conditionals in the wait_for must be satisfied. If\n the value is set to C(any) then only one of the values must be\n satisfied.\n required: false\n default: all\n choices: ['any', 'all']\n version_added: \"2.2\"\n retries:\n description:\n - Specifies the number of retries a command should by tried\n before it is considered failed. The command is run on the\n target device every retry and evaluated against the\n I(wait_for) conditions.\n required: false\n default: 10\n interval:\n description:\n - Configures the interval in seconds to wait between retries\n of the command. 
If the command does not pass the specified\n conditions, the interval indicates how long to wait before\n trying the command again.\n required: false\n default: 1\n\"\"\"\n\nEXAMPLES = \"\"\"\ntasks:\n - name: run show version on remote devices\n ios_command:\n commands: show version\n\n - name: run show version and check to see if output contains IOS\n ios_command:\n commands: show version\n wait_for: result[0] contains IOS\n\n - name: run multiple commands on remote nodes\n ios_command:\n commands:\n - show version\n - show interfaces\n\n - name: run multiple commands and evaluate the output\n ios_command:\n commands:\n - show version\n - show interfaces\n wait_for:\n - result[0] contains IOS\n - result[1] contains Loopback0\n\"\"\"\n\nRETURN = \"\"\"\nstdout:\n description: The set of responses from the commands\n returned: always apart from low level errors (such as action plugin)\n type: list\n sample: ['...', '...']\nstdout_lines:\n description: The value of stdout split into a list\n returned: always apart from low level errors (such as action plugin)\n type: list\n sample: [['...', '...'], ['...'], ['...']]\nfailed_conditions:\n description: The list of conditionals that have failed\n returned: failed\n type: list\n sample: ['...', '...']\n\"\"\"\nimport time\n\nfrom ansible.module_utils.network.ios.ios import run_commands\nfrom ansible.module_utils.network.ios.ios import ios_argument_spec, check_args\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible.module_utils.network.common.utils import ComplexList\nfrom ansible.module_utils.network.common.parsing import Conditional\nfrom ansible.module_utils.six import string_types\n\n\ndef to_lines(stdout):\n for item in stdout:\n if isinstance(item, string_types):\n item = str(item).split('\\n')\n yield item\n\n\ndef parse_commands(module, warnings):\n command = ComplexList(dict(\n command=dict(key=True),\n prompt=dict(),\n answer=dict()\n ), module)\n commands = command(module.params['commands'])\n for item in list(commands):\n if module.check_mode and not item['command'].startswith('show'):\n warnings.append(\n 'only show commands are supported when using check mode, not '\n 'executing `%s`' % item['command']\n )\n commands.remove(item)\n elif item['command'].startswith('conf'):\n module.fail_json(\n msg='ios_command does not support running config mode '\n 'commands. 
Please use ios_config instead'\n )\n return commands\n\n\ndef main():\n \"\"\"main entry point for module execution\n \"\"\"\n argument_spec = dict(\n commands=dict(type='list', required=True),\n\n wait_for=dict(type='list', aliases=['waitfor']),\n match=dict(default='all', choices=['all', 'any']),\n\n retries=dict(default=10, type='int'),\n interval=dict(default=1, type='int')\n )\n\n argument_spec.update(ios_argument_spec)\n\n module = AnsibleModule(argument_spec=argument_spec,\n supports_check_mode=True)\n\n result = {'changed': False}\n\n warnings = list()\n check_args(module, warnings)\n commands = parse_commands(module, warnings)\n result['warnings'] = warnings\n\n wait_for = module.params['wait_for'] or list()\n conditionals = [Conditional(c) for c in wait_for]\n\n retries = module.params['retries']\n interval = module.params['interval']\n match = module.params['match']\n\n while retries > 0:\n responses = run_commands(module, commands)\n\n for item in list(conditionals):\n if item(responses):\n if match == 'any':\n conditionals = list()\n break\n conditionals.remove(item)\n\n if not conditionals:\n break\n\n time.sleep(interval)\n retries -= 1\n\n if conditionals:\n failed_conditions = [item.raw for item in conditionals]\n msg = 'One or more conditional statements have not be satisfied'\n module.fail_json(msg=msg, failed_conditions=failed_conditions)\n\n result.update({\n 'changed': False,\n 'stdout': responses,\n 'stdout_lines': list(to_lines(responses))\n })\n\n module.exit_json(**result)\n\n\nif __name__ == '__main__':\n main()\n", "path": "lib/ansible/modules/network/ios/ios_command.py"}]} | 3,204 | 285 |
gh_patches_debug_10905 | rasdani/github-patches | git_diff | saleor__saleor-1775 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improving product variant behaviour
### Steps to reproduce the problem
1. Choose any product in dashboard
2. Remove all variants
3. Try to go to the product page in storefront
### What I expected to happen
Product page in storefront with "UNAVAILABLE" label.
### What happened instead/how it failed
500 code.
It crashes with a `list index out of range` error, because these lines in the `update_field_data` method of `VariantChoiceField`
```
        if self.queryset.count() < 2:
            self.widget = forms.HiddenInput(
                {'value': variants.all()[0].pk})
```
try to index into an empty queryset.
A possible solution would be to skip this block of code when `variants.all()` is empty.
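
A rough sketch of that idea (the helper is made up for illustration; the real fix may just tighten the existing condition):

```python
from django import forms


def pick_variant_widget(field, variants):
    """Collapse to a hidden input only when exactly one variant exists."""
    if variants.count() == 1:
        return forms.HiddenInput({'value': variants.all()[0].pk})
    # With zero (or several) variants keep the normal select widget,
    # so variants.all()[0] is never evaluated on an empty queryset.
    return field.widget
```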
</issue>
<code>
[start of saleor/product/forms.py]
1 import json
2
3 from django import forms
4 from django.utils.encoding import smart_text
5 from django.utils.translation import pgettext_lazy
6 from django_prices.templatetags.prices_i18n import gross
7
8 from ..cart.forms import AddToCartForm
9
10
11 class VariantChoiceField(forms.ModelChoiceField):
12 discounts = None
13
14 def label_from_instance(self, obj):
15 variant_label = smart_text(obj)
16 label = pgettext_lazy(
17 'Variant choice field label',
18 '%(variant_label)s - %(price)s') % {
19 'variant_label': variant_label,
20 'price': gross(
21 obj.get_price_per_item(discounts=self.discounts))}
22 return label
23
24 def update_field_data(self, variants, cart):
25 """Initialize variant picker metadata."""
26 self.queryset = variants
27 self.discounts = cart.discounts
28 self.empty_label = None
29 images_map = {
30 variant.pk: [
31 vi.image.image.url for vi in variant.variant_images.all()]
32 for variant in variants.all()}
33 self.widget.attrs['data-images'] = json.dumps(images_map)
34 # Don't display select input if there are less than two variants
35 if self.queryset.count() < 2:
36 self.widget = forms.HiddenInput(
37 {'value': variants.all()[0].pk})
38
39
40 class ProductForm(AddToCartForm):
41 variant = VariantChoiceField(queryset=None)
42
43 def __init__(self, *args, **kwargs):
44 super().__init__(*args, **kwargs)
45 variant_field = self.fields['variant']
46 variant_field.update_field_data(self.product.variants, self.cart)
47
48 def get_variant(self, cleaned_data):
49 return cleaned_data.get('variant')
50
[end of saleor/product/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/product/forms.py b/saleor/product/forms.py
--- a/saleor/product/forms.py
+++ b/saleor/product/forms.py
@@ -31,8 +31,8 @@
vi.image.image.url for vi in variant.variant_images.all()]
for variant in variants.all()}
self.widget.attrs['data-images'] = json.dumps(images_map)
- # Don't display select input if there are less than two variants
- if self.queryset.count() < 2:
+ # Don't display select input if there is only one variant.
+ if self.queryset.count() == 1:
self.widget = forms.HiddenInput(
{'value': variants.all()[0].pk})
| {"golden_diff": "diff --git a/saleor/product/forms.py b/saleor/product/forms.py\n--- a/saleor/product/forms.py\n+++ b/saleor/product/forms.py\n@@ -31,8 +31,8 @@\n vi.image.image.url for vi in variant.variant_images.all()]\n for variant in variants.all()}\n self.widget.attrs['data-images'] = json.dumps(images_map)\n- # Don't display select input if there are less than two variants\n- if self.queryset.count() < 2:\n+ # Don't display select input if there is only one variant.\n+ if self.queryset.count() == 1:\n self.widget = forms.HiddenInput(\n {'value': variants.all()[0].pk})\n", "issue": "Improving product variant behaviour\n### Steps to reproduce the problem\r\n\r\n1. Choose any product in dashboard\r\n2. Remove all variants\r\n3. Try to go to the product page in storefront\r\n\r\n### What I expected to happen\r\n\r\nProduct page in storefront with \"UNAVAILABLE\" label.\r\n\r\n### What happened instead/how it failed\r\n500 code. \r\n\r\n\r\nIt is getting crashed with `list index out of range` error, because in method `update_field_data` in `VariantChoiceField` lines\r\n```\r\n if self.queryset.count() < 2:\r\n self.widget = forms.HiddenInput(\r\n {'value': variants.all()[0].pk})\r\n```\r\nare trying to get index of empty query set.\r\nPossible solution would be to omit this block of code if `variants.all()` is `False`.\n", "before_files": [{"content": "import json\n\nfrom django import forms\nfrom django.utils.encoding import smart_text\nfrom django.utils.translation import pgettext_lazy\nfrom django_prices.templatetags.prices_i18n import gross\n\nfrom ..cart.forms import AddToCartForm\n\n\nclass VariantChoiceField(forms.ModelChoiceField):\n discounts = None\n\n def label_from_instance(self, obj):\n variant_label = smart_text(obj)\n label = pgettext_lazy(\n 'Variant choice field label',\n '%(variant_label)s - %(price)s') % {\n 'variant_label': variant_label,\n 'price': gross(\n obj.get_price_per_item(discounts=self.discounts))}\n return label\n\n def update_field_data(self, variants, cart):\n \"\"\"Initialize variant picker metadata.\"\"\"\n self.queryset = variants\n self.discounts = cart.discounts\n self.empty_label = None\n images_map = {\n variant.pk: [\n vi.image.image.url for vi in variant.variant_images.all()]\n for variant in variants.all()}\n self.widget.attrs['data-images'] = json.dumps(images_map)\n # Don't display select input if there are less than two variants\n if self.queryset.count() < 2:\n self.widget = forms.HiddenInput(\n {'value': variants.all()[0].pk})\n\n\nclass ProductForm(AddToCartForm):\n variant = VariantChoiceField(queryset=None)\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n variant_field = self.fields['variant']\n variant_field.update_field_data(self.product.variants, self.cart)\n\n def get_variant(self, cleaned_data):\n return cleaned_data.get('variant')\n", "path": "saleor/product/forms.py"}]} | 1,153 | 156 |
gh_patches_debug_34809 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-4915 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Versions aren't deleted if there is one with the same commit
If we create two versions pointing to the same commit (say, master and a tag on master) and then delete the tag, that tag isn't removed from the versions list when syncing the repositories (because master has the same commit).
This is also seen when creating two tags that point to the same commit.
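
One rough direction (sketch only, names made up): match tags by name and branches by identifier when deciding what to delete, instead of matching everything by commit identifier:

```python
from readthedocs.builds.constants import BRANCH, TAG


def versions_to_delete(project, version_data):
    """Versions that no longer exist in the repository."""
    tag_names = [v['verbose_name'] for v in version_data.get('tags', [])]
    branch_ids = [v['identifier'] for v in version_data.get('branches', [])]
    qs = project.versions.all()
    qs = qs.exclude(type=TAG, verbose_name__in=tag_names)
    qs = qs.exclude(type=BRANCH, identifier__in=branch_ids)
    return qs
```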
</issue>
<code>
[start of readthedocs/restapi/utils.py]
1 # -*- coding: utf-8 -*-
2 """Utility functions that are used by both views and celery tasks."""
3
4 from __future__ import (
5 absolute_import, division, print_function, unicode_literals)
6
7 import hashlib
8 import logging
9
10 from rest_framework.pagination import PageNumberPagination
11
12 from readthedocs.builds.constants import (LATEST, LATEST_VERBOSE_NAME,
13 NON_REPOSITORY_VERSIONS, STABLE,
14 STABLE_VERBOSE_NAME)
15 from readthedocs.builds.models import Version
16 from readthedocs.search.indexes import PageIndex, ProjectIndex, SectionIndex
17
18 log = logging.getLogger(__name__)
19
20
21 def sync_versions(project, versions, type): # pylint: disable=redefined-builtin
22 """Update the database with the current versions from the repository."""
23 old_version_values = project.versions.filter(type=type).values_list(
24 'verbose_name', 'identifier'
25 )
26 old_versions = dict(old_version_values)
27
28 # Add new versions
29 added = set()
30 has_user_stable = False
31 has_user_latest = False
32 for version in versions:
33 version_id = version['identifier']
34 version_name = version['verbose_name']
35 if version_name == STABLE_VERBOSE_NAME:
36 has_user_stable = True
37 created_version, created = set_or_create_version(
38 project=project,
39 slug=STABLE,
40 version_id=version_id,
41 verbose_name=version_name,
42 type_=type
43 )
44 if created:
45 added.add(created_version.slug)
46 elif version_name == LATEST_VERBOSE_NAME:
47 has_user_latest = True
48 created_version, created = set_or_create_version(
49 project=project,
50 slug=LATEST,
51 version_id=version_id,
52 verbose_name=version_name,
53 type_=type
54 )
55 if created:
56 added.add(created_version.slug)
57 elif version_name in old_versions:
58 if version_id == old_versions[version_name]:
59 # Version is correct
60 continue
61 else:
62 # Update slug with new identifier
63 Version.objects.filter(
64 project=project, verbose_name=version_name).update(
65 identifier=version_id,
66 type=type,
67 machine=False,
68 ) # noqa
69
70 log.info(
71 '(Sync Versions) Updated Version: [%s=%s] ',
72 version_name,
73 version_id,
74 )
75 else:
76 # New Version
77 created_version = Version.objects.create(
78 project=project,
79 type=type,
80 identifier=version_id,
81 verbose_name=version_name,
82 )
83 added.add(created_version.slug)
84 if not has_user_stable:
85 stable_version = (
86 project.versions
87 .filter(slug=STABLE, type=type)
88 .first()
89 )
90 if stable_version:
91 # Put back the RTD's stable version
92 stable_version.machine = True
93 stable_version.save()
94 if not has_user_latest:
95 latest_version = (
96 project.versions
97 .filter(slug=LATEST, type=type)
98 .first()
99 )
100 if latest_version:
101 # Put back the RTD's latest version
102 latest_version.machine = True
103 latest_version.identifier = project.get_default_branch()
104 latest_version.verbose_name = LATEST_VERBOSE_NAME
105 latest_version.save()
106 if added:
107 log.info('(Sync Versions) Added Versions: [%s] ', ' '.join(added))
108 return added
109
110
111 def set_or_create_version(project, slug, version_id, verbose_name, type_):
112 """Search or create a version and set its machine attribute to false."""
113 version = (
114 project.versions
115 .filter(slug=slug)
116 .first()
117 )
118 if version:
119 version.identifier = version_id
120 version.machine = False
121 version.type = type_
122 version.save()
123 else:
124 created_version = Version.objects.create(
125 project=project,
126 type=type_,
127 identifier=version_id,
128 verbose_name=verbose_name,
129 )
130 return created_version, True
131 return version, False
132
133
134 def delete_versions(project, version_data):
135 """Delete all versions not in the current repo."""
136 current_versions = []
137 if 'tags' in version_data:
138 for version in version_data['tags']:
139 current_versions.append(version['identifier'])
140 if 'branches' in version_data:
141 for version in version_data['branches']:
142 current_versions.append(version['identifier'])
143 to_delete_qs = project.versions.all()
144 to_delete_qs = to_delete_qs.exclude(identifier__in=current_versions)
145 to_delete_qs = to_delete_qs.exclude(uploaded=True)
146 to_delete_qs = to_delete_qs.exclude(active=True)
147 to_delete_qs = to_delete_qs.exclude(slug__in=NON_REPOSITORY_VERSIONS)
148
149 if to_delete_qs.count():
150 ret_val = {obj.slug for obj in to_delete_qs}
151 log.info('(Sync Versions) Deleted Versions: [%s]', ' '.join(ret_val))
152 to_delete_qs.delete()
153 return ret_val
154 return set()
155
156
157 def index_search_request(
158 version, page_list, commit, project_scale, page_scale, section=True,
159 delete=True):
160 """
161 Update search indexes with build output JSON.
162
163 In order to keep sub-projects all indexed on the same shard, indexes will be
164 updated using the parent project's slug as the routing value.
165 """
166 # TODO refactor this function
167 # pylint: disable=too-many-locals
168 project = version.project
169
170 log_msg = ' '.join([page['path'] for page in page_list])
171 log.info(
172 'Updating search index: project=%s pages=[%s]',
173 project.slug,
174 log_msg,
175 )
176
177 project_obj = ProjectIndex()
178 project_obj.index_document(
179 data={
180 'id': project.pk,
181 'name': project.name,
182 'slug': project.slug,
183 'description': project.description,
184 'lang': project.language,
185 'author': [user.username for user in project.users.all()],
186 'url': project.get_absolute_url(),
187 'tags': None,
188 'weight': project_scale,
189 })
190
191 page_obj = PageIndex()
192 section_obj = SectionIndex()
193 index_list = []
194 section_index_list = []
195 routes = [project.slug]
196 routes.extend([p.parent.slug for p in project.superprojects.all()])
197 for page in page_list:
198 log.debug('Indexing page: %s:%s', project.slug, page['path'])
199 to_hash = '-'.join([project.slug, version.slug, page['path']])
200 page_id = hashlib.md5(to_hash.encode('utf-8')).hexdigest()
201 index_list.append({
202 'id': page_id,
203 'project': project.slug,
204 'version': version.slug,
205 'path': page['path'],
206 'title': page['title'],
207 'headers': page['headers'],
208 'content': page['content'],
209 'taxonomy': None,
210 'commit': commit,
211 'weight': page_scale + project_scale,
212 })
213 if section:
214 for sect in page['sections']:
215 id_to_hash = '-'.join([
216 project.slug,
217 version.slug,
218 page['path'],
219 sect['id'],
220 ])
221 section_index_list.append({
222 'id': (hashlib.md5(id_to_hash.encode('utf-8')).hexdigest()),
223 'project': project.slug,
224 'version': version.slug,
225 'path': page['path'],
226 'page_id': sect['id'],
227 'title': sect['title'],
228 'content': sect['content'],
229 'weight': page_scale,
230 })
231 for route in routes:
232 section_obj.bulk_index(
233 section_index_list,
234 parent=page_id,
235 routing=route,
236 )
237
238 for route in routes:
239 page_obj.bulk_index(index_list, parent=project.slug, routing=route)
240
241 if delete:
242 log.info('Deleting files not in commit: %s', commit)
243 # TODO: AK Make sure this works
244 delete_query = {
245 'query': {
246 'bool': {
247 'must': [
248 {
249 'term': {
250 'project': project.slug,
251 },
252 },
253 {
254 'term': {
255 'version': version.slug,
256 },
257 },
258 ],
259 'must_not': {
260 'term': {
261 'commit': commit,
262 },
263 },
264 },
265 },
266 }
267 page_obj.delete_document(body=delete_query)
268
269
270 class RemoteOrganizationPagination(PageNumberPagination):
271 page_size = 25
272
273
274 class RemoteProjectPagination(PageNumberPagination):
275 page_size = 15
276
277
278 class ProjectPagination(PageNumberPagination):
279 page_size = 100
280 max_page_size = 1000
281
[end of readthedocs/restapi/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/readthedocs/restapi/utils.py b/readthedocs/restapi/utils.py
--- a/readthedocs/restapi/utils.py
+++ b/readthedocs/restapi/utils.py
@@ -2,16 +2,26 @@
"""Utility functions that are used by both views and celery tasks."""
from __future__ import (
- absolute_import, division, print_function, unicode_literals)
+ absolute_import,
+ division,
+ print_function,
+ unicode_literals,
+)
import hashlib
import logging
from rest_framework.pagination import PageNumberPagination
-from readthedocs.builds.constants import (LATEST, LATEST_VERBOSE_NAME,
- NON_REPOSITORY_VERSIONS, STABLE,
- STABLE_VERBOSE_NAME)
+from readthedocs.builds.constants import (
+ BRANCH,
+ LATEST,
+ LATEST_VERBOSE_NAME,
+ NON_REPOSITORY_VERSIONS,
+ STABLE,
+ STABLE_VERBOSE_NAME,
+ TAG,
+)
from readthedocs.builds.models import Version
from readthedocs.search.indexes import PageIndex, ProjectIndex, SectionIndex
@@ -133,15 +143,25 @@
def delete_versions(project, version_data):
"""Delete all versions not in the current repo."""
- current_versions = []
- if 'tags' in version_data:
- for version in version_data['tags']:
- current_versions.append(version['identifier'])
- if 'branches' in version_data:
- for version in version_data['branches']:
- current_versions.append(version['identifier'])
+ # We use verbose_name for tags
+ # because several tags can point to the same identifier.
+ versions_tags = [
+ version['verbose_name']
+ for version in version_data.get('tags', [])
+ ]
+ versions_branches = [
+ version['identifier']
+ for version in version_data.get('branches', [])
+ ]
to_delete_qs = project.versions.all()
- to_delete_qs = to_delete_qs.exclude(identifier__in=current_versions)
+ to_delete_qs = to_delete_qs.exclude(
+ type=TAG,
+ verbose_name__in=versions_tags,
+ )
+ to_delete_qs = to_delete_qs.exclude(
+ type=BRANCH,
+ identifier__in=versions_branches,
+ )
to_delete_qs = to_delete_qs.exclude(uploaded=True)
to_delete_qs = to_delete_qs.exclude(active=True)
to_delete_qs = to_delete_qs.exclude(slug__in=NON_REPOSITORY_VERSIONS)
| {"golden_diff": "diff --git a/readthedocs/restapi/utils.py b/readthedocs/restapi/utils.py\n--- a/readthedocs/restapi/utils.py\n+++ b/readthedocs/restapi/utils.py\n@@ -2,16 +2,26 @@\n \"\"\"Utility functions that are used by both views and celery tasks.\"\"\"\n \n from __future__ import (\n- absolute_import, division, print_function, unicode_literals)\n+ absolute_import,\n+ division,\n+ print_function,\n+ unicode_literals,\n+)\n \n import hashlib\n import logging\n \n from rest_framework.pagination import PageNumberPagination\n \n-from readthedocs.builds.constants import (LATEST, LATEST_VERBOSE_NAME,\n- NON_REPOSITORY_VERSIONS, STABLE,\n- STABLE_VERBOSE_NAME)\n+from readthedocs.builds.constants import (\n+ BRANCH,\n+ LATEST,\n+ LATEST_VERBOSE_NAME,\n+ NON_REPOSITORY_VERSIONS,\n+ STABLE,\n+ STABLE_VERBOSE_NAME,\n+ TAG,\n+)\n from readthedocs.builds.models import Version\n from readthedocs.search.indexes import PageIndex, ProjectIndex, SectionIndex\n \n@@ -133,15 +143,25 @@\n \n def delete_versions(project, version_data):\n \"\"\"Delete all versions not in the current repo.\"\"\"\n- current_versions = []\n- if 'tags' in version_data:\n- for version in version_data['tags']:\n- current_versions.append(version['identifier'])\n- if 'branches' in version_data:\n- for version in version_data['branches']:\n- current_versions.append(version['identifier'])\n+ # We use verbose_name for tags\n+ # because several tags can point to the same identifier.\n+ versions_tags = [\n+ version['verbose_name']\n+ for version in version_data.get('tags', [])\n+ ]\n+ versions_branches = [\n+ version['identifier']\n+ for version in version_data.get('branches', [])\n+ ]\n to_delete_qs = project.versions.all()\n- to_delete_qs = to_delete_qs.exclude(identifier__in=current_versions)\n+ to_delete_qs = to_delete_qs.exclude(\n+ type=TAG,\n+ verbose_name__in=versions_tags,\n+ )\n+ to_delete_qs = to_delete_qs.exclude(\n+ type=BRANCH,\n+ identifier__in=versions_branches,\n+ )\n to_delete_qs = to_delete_qs.exclude(uploaded=True)\n to_delete_qs = to_delete_qs.exclude(active=True)\n to_delete_qs = to_delete_qs.exclude(slug__in=NON_REPOSITORY_VERSIONS)\n", "issue": "Versions aren't deleted if there is one with the same commit\nIf we create two versions pointing to the same commit (say, master and a tag on master). 
If we delete the tag, when syncing the repositories that tag isn't deleted from the versions list (because master has the same commit).\r\n\r\nThis is also seeing when creating two tags that point to the same commit.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Utility functions that are used by both views and celery tasks.\"\"\"\n\nfrom __future__ import (\n absolute_import, division, print_function, unicode_literals)\n\nimport hashlib\nimport logging\n\nfrom rest_framework.pagination import PageNumberPagination\n\nfrom readthedocs.builds.constants import (LATEST, LATEST_VERBOSE_NAME,\n NON_REPOSITORY_VERSIONS, STABLE,\n STABLE_VERBOSE_NAME)\nfrom readthedocs.builds.models import Version\nfrom readthedocs.search.indexes import PageIndex, ProjectIndex, SectionIndex\n\nlog = logging.getLogger(__name__)\n\n\ndef sync_versions(project, versions, type): # pylint: disable=redefined-builtin\n \"\"\"Update the database with the current versions from the repository.\"\"\"\n old_version_values = project.versions.filter(type=type).values_list(\n 'verbose_name', 'identifier'\n )\n old_versions = dict(old_version_values)\n\n # Add new versions\n added = set()\n has_user_stable = False\n has_user_latest = False\n for version in versions:\n version_id = version['identifier']\n version_name = version['verbose_name']\n if version_name == STABLE_VERBOSE_NAME:\n has_user_stable = True\n created_version, created = set_or_create_version(\n project=project,\n slug=STABLE,\n version_id=version_id,\n verbose_name=version_name,\n type_=type\n )\n if created:\n added.add(created_version.slug)\n elif version_name == LATEST_VERBOSE_NAME:\n has_user_latest = True\n created_version, created = set_or_create_version(\n project=project,\n slug=LATEST,\n version_id=version_id,\n verbose_name=version_name,\n type_=type\n )\n if created:\n added.add(created_version.slug)\n elif version_name in old_versions:\n if version_id == old_versions[version_name]:\n # Version is correct\n continue\n else:\n # Update slug with new identifier\n Version.objects.filter(\n project=project, verbose_name=version_name).update(\n identifier=version_id,\n type=type,\n machine=False,\n ) # noqa\n\n log.info(\n '(Sync Versions) Updated Version: [%s=%s] ',\n version_name,\n version_id,\n )\n else:\n # New Version\n created_version = Version.objects.create(\n project=project,\n type=type,\n identifier=version_id,\n verbose_name=version_name,\n )\n added.add(created_version.slug)\n if not has_user_stable:\n stable_version = (\n project.versions\n .filter(slug=STABLE, type=type)\n .first()\n )\n if stable_version:\n # Put back the RTD's stable version\n stable_version.machine = True\n stable_version.save()\n if not has_user_latest:\n latest_version = (\n project.versions\n .filter(slug=LATEST, type=type)\n .first()\n )\n if latest_version:\n # Put back the RTD's latest version\n latest_version.machine = True\n latest_version.identifier = project.get_default_branch()\n latest_version.verbose_name = LATEST_VERBOSE_NAME\n latest_version.save()\n if added:\n log.info('(Sync Versions) Added Versions: [%s] ', ' '.join(added))\n return added\n\n\ndef set_or_create_version(project, slug, version_id, verbose_name, type_):\n \"\"\"Search or create a version and set its machine attribute to false.\"\"\"\n version = (\n project.versions\n .filter(slug=slug)\n .first()\n )\n if version:\n version.identifier = version_id\n version.machine = False\n version.type = type_\n version.save()\n else:\n created_version = Version.objects.create(\n 
project=project,\n type=type_,\n identifier=version_id,\n verbose_name=verbose_name,\n )\n return created_version, True\n return version, False\n\n\ndef delete_versions(project, version_data):\n \"\"\"Delete all versions not in the current repo.\"\"\"\n current_versions = []\n if 'tags' in version_data:\n for version in version_data['tags']:\n current_versions.append(version['identifier'])\n if 'branches' in version_data:\n for version in version_data['branches']:\n current_versions.append(version['identifier'])\n to_delete_qs = project.versions.all()\n to_delete_qs = to_delete_qs.exclude(identifier__in=current_versions)\n to_delete_qs = to_delete_qs.exclude(uploaded=True)\n to_delete_qs = to_delete_qs.exclude(active=True)\n to_delete_qs = to_delete_qs.exclude(slug__in=NON_REPOSITORY_VERSIONS)\n\n if to_delete_qs.count():\n ret_val = {obj.slug for obj in to_delete_qs}\n log.info('(Sync Versions) Deleted Versions: [%s]', ' '.join(ret_val))\n to_delete_qs.delete()\n return ret_val\n return set()\n\n\ndef index_search_request(\n version, page_list, commit, project_scale, page_scale, section=True,\n delete=True):\n \"\"\"\n Update search indexes with build output JSON.\n\n In order to keep sub-projects all indexed on the same shard, indexes will be\n updated using the parent project's slug as the routing value.\n \"\"\"\n # TODO refactor this function\n # pylint: disable=too-many-locals\n project = version.project\n\n log_msg = ' '.join([page['path'] for page in page_list])\n log.info(\n 'Updating search index: project=%s pages=[%s]',\n project.slug,\n log_msg,\n )\n\n project_obj = ProjectIndex()\n project_obj.index_document(\n data={\n 'id': project.pk,\n 'name': project.name,\n 'slug': project.slug,\n 'description': project.description,\n 'lang': project.language,\n 'author': [user.username for user in project.users.all()],\n 'url': project.get_absolute_url(),\n 'tags': None,\n 'weight': project_scale,\n })\n\n page_obj = PageIndex()\n section_obj = SectionIndex()\n index_list = []\n section_index_list = []\n routes = [project.slug]\n routes.extend([p.parent.slug for p in project.superprojects.all()])\n for page in page_list:\n log.debug('Indexing page: %s:%s', project.slug, page['path'])\n to_hash = '-'.join([project.slug, version.slug, page['path']])\n page_id = hashlib.md5(to_hash.encode('utf-8')).hexdigest()\n index_list.append({\n 'id': page_id,\n 'project': project.slug,\n 'version': version.slug,\n 'path': page['path'],\n 'title': page['title'],\n 'headers': page['headers'],\n 'content': page['content'],\n 'taxonomy': None,\n 'commit': commit,\n 'weight': page_scale + project_scale,\n })\n if section:\n for sect in page['sections']:\n id_to_hash = '-'.join([\n project.slug,\n version.slug,\n page['path'],\n sect['id'],\n ])\n section_index_list.append({\n 'id': (hashlib.md5(id_to_hash.encode('utf-8')).hexdigest()),\n 'project': project.slug,\n 'version': version.slug,\n 'path': page['path'],\n 'page_id': sect['id'],\n 'title': sect['title'],\n 'content': sect['content'],\n 'weight': page_scale,\n })\n for route in routes:\n section_obj.bulk_index(\n section_index_list,\n parent=page_id,\n routing=route,\n )\n\n for route in routes:\n page_obj.bulk_index(index_list, parent=project.slug, routing=route)\n\n if delete:\n log.info('Deleting files not in commit: %s', commit)\n # TODO: AK Make sure this works\n delete_query = {\n 'query': {\n 'bool': {\n 'must': [\n {\n 'term': {\n 'project': project.slug,\n },\n },\n {\n 'term': {\n 'version': version.slug,\n },\n },\n ],\n 'must_not': {\n 
'term': {\n 'commit': commit,\n },\n },\n },\n },\n }\n page_obj.delete_document(body=delete_query)\n\n\nclass RemoteOrganizationPagination(PageNumberPagination):\n page_size = 25\n\n\nclass RemoteProjectPagination(PageNumberPagination):\n page_size = 15\n\n\nclass ProjectPagination(PageNumberPagination):\n page_size = 100\n max_page_size = 1000\n", "path": "readthedocs/restapi/utils.py"}]} | 3,197 | 547 |
gh_patches_debug_22682 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-1748 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pydantic converted types have no way to deprecate fields
Since we recreate the field object for the values that use simple fields, there isn't a clean way to add a deprecation_reason to a field from a pydantic class. The same is true for permission_classes and directives.
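For context, the kind of usage this would enable looks roughly like the sketch below. The model and field names are made up, and passing `deprecation_reason` (or `permission_classes` / `directives`) through `strawberry.field` on an `auto` field is the desired behaviour, not something the converter currently honours:

```python
import strawberry
from pydantic import BaseModel
from strawberry.experimental import pydantic as strawberry_pydantic


class UserModel(BaseModel):  # hypothetical pydantic model, for illustration only
    age: int


@strawberry_pydantic.type(model=UserModel)
class User:
    # Desired: metadata passed here should end up on the generated field
    # instead of being dropped when the StrawberryField is rebuilt.
    age: strawberry.auto = strawberry.field(deprecation_reason="use birth_date instead")
```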
</issue>
<code>
[start of strawberry/experimental/pydantic/object_type.py]
1 from __future__ import annotations
2
3 import builtins
4 import dataclasses
5 import warnings
6 from functools import partial
7 from typing import (
8 TYPE_CHECKING,
9 Any,
10 Callable,
11 Dict,
12 List,
13 Optional,
14 Sequence,
15 Set,
16 Type,
17 cast,
18 )
19
20 from pydantic import BaseModel
21 from pydantic.fields import ModelField
22 from typing_extensions import Literal
23
24 from graphql import GraphQLResolveInfo
25
26 import strawberry
27 from strawberry.arguments import UNSET
28 from strawberry.experimental.pydantic.conversion import (
29 convert_pydantic_model_to_strawberry_class,
30 convert_strawberry_class_to_pydantic_model,
31 )
32 from strawberry.experimental.pydantic.fields import get_basic_type
33 from strawberry.experimental.pydantic.utils import (
34 DataclassCreationFields,
35 ensure_all_auto_fields_in_pydantic,
36 get_default_factory_for_field,
37 get_private_fields,
38 sort_creation_fields,
39 )
40 from strawberry.field import StrawberryField
41 from strawberry.object_type import _process_type, _wrap_dataclass
42 from strawberry.schema_directive import StrawberrySchemaDirective
43 from strawberry.types.type_resolver import _get_fields
44 from strawberry.types.types import TypeDefinition
45
46 from .exceptions import MissingFieldsListError, UnregisteredTypeException
47
48
49 def replace_pydantic_types(type_: Any, is_input: bool):
50 origin = getattr(type_, "__origin__", None)
51 if origin is Literal:
52 # Literal does not have types in its __args__ so we return early
53 return type_
54 if hasattr(type_, "__args__"):
55 replaced_type = type_.copy_with(
56 tuple(replace_pydantic_types(t, is_input) for t in type_.__args__)
57 )
58
59 if isinstance(replaced_type, TypeDefinition):
60 # TODO: Not sure if this is necessary. No coverage in tests
61 # TODO: Unnecessary with StrawberryObject
62
63 replaced_type = builtins.type(
64 replaced_type.name,
65 (),
66 {"_type_definition": replaced_type},
67 )
68
69 return replaced_type
70
71 if issubclass(type_, BaseModel):
72 attr = "_strawberry_input_type" if is_input else "_strawberry_type"
73 if hasattr(type_, attr):
74 return getattr(type_, attr)
75 else:
76 raise UnregisteredTypeException(type_)
77
78 return type_
79
80
81 def get_type_for_field(field: ModelField, is_input: bool):
82 outer_type = field.outer_type_
83 basic_type = get_basic_type(outer_type)
84 replaced_type = replace_pydantic_types(basic_type, is_input)
85
86 if not field.required:
87 return Optional[replaced_type]
88 else:
89 return replaced_type
90
91
92 def _build_dataclass_creation_fields(
93 field: ModelField,
94 is_input: bool,
95 existing_fields: Dict[str, StrawberryField],
96 auto_fields_set: Set[str],
97 use_pydantic_alias: bool,
98 ) -> DataclassCreationFields:
99 type_annotation = (
100 get_type_for_field(field, is_input)
101 if field.name in auto_fields_set
102 else existing_fields[field.name].type
103 )
104
105 if (
106 field.name in existing_fields
107 and existing_fields[field.name].base_resolver is not None
108 ):
109 # if the user has defined a resolver for this field, always use it
110 strawberry_field = existing_fields[field.name]
111 else:
112 # otherwise we build an appropriate strawberry field that resolves it
113 strawberry_field = StrawberryField(
114 python_name=field.name,
115 graphql_name=field.alias
116 if field.has_alias and use_pydantic_alias
117 else None,
118 # always unset because we use default_factory instead
119 default=UNSET,
120 default_factory=get_default_factory_for_field(field),
121 type_annotation=type_annotation,
122 description=field.field_info.description,
123 )
124
125 return DataclassCreationFields(
126 name=field.name,
127 type_annotation=type_annotation,
128 field=strawberry_field,
129 )
130
131
132 if TYPE_CHECKING:
133 from strawberry.experimental.pydantic.conversion_types import (
134 PydanticModel,
135 StrawberryTypeFromPydantic,
136 )
137
138
139 def type(
140 model: Type[PydanticModel],
141 *,
142 fields: Optional[List[str]] = None,
143 name: Optional[str] = None,
144 is_input: bool = False,
145 is_interface: bool = False,
146 description: Optional[str] = None,
147 directives: Optional[Sequence[StrawberrySchemaDirective]] = (),
148 all_fields: bool = False,
149 use_pydantic_alias: bool = True,
150 ) -> Callable[..., Type[StrawberryTypeFromPydantic[PydanticModel]]]:
151 def wrap(cls: Any) -> Type[StrawberryTypeFromPydantic[PydanticModel]]:
152 model_fields = model.__fields__
153 original_fields_set = set(fields) if fields else set([])
154
155 if fields:
156 warnings.warn(
157 "`fields` is deprecated, use `auto` type annotations instead",
158 DeprecationWarning,
159 )
160
161 existing_fields = getattr(cls, "__annotations__", {})
162 # these are the fields that matched a field name in the pydantic model
163 # and should copy their alias from the pydantic model
164 fields_set = original_fields_set.union(
165 set(name for name, _ in existing_fields.items() if name in model_fields)
166 )
167 # these are the fields that were marked with strawberry.auto and
168 # should copy their type from the pydantic model
169 auto_fields_set = original_fields_set.union(
170 set(name for name, typ in existing_fields.items() if typ == strawberry.auto)
171 )
172
173 if all_fields:
174 if fields_set:
175 warnings.warn(
176 "Using all_fields overrides any explicitly defined fields "
177 "in the model, using both is likely a bug",
178 stacklevel=2,
179 )
180 fields_set = set(model_fields.keys())
181 auto_fields_set = set(model_fields.keys())
182
183 if not fields_set:
184 raise MissingFieldsListError(cls)
185
186 ensure_all_auto_fields_in_pydantic(
187 model=model, auto_fields=auto_fields_set, cls_name=cls.__name__
188 )
189
190 wrapped = _wrap_dataclass(cls)
191 extra_strawberry_fields = _get_fields(wrapped)
192 extra_fields = cast(List[dataclasses.Field], extra_strawberry_fields)
193 private_fields = get_private_fields(wrapped)
194
195 extra_fields_dict = {field.name: field for field in extra_strawberry_fields}
196
197 all_model_fields: List[DataclassCreationFields] = [
198 _build_dataclass_creation_fields(
199 field, is_input, extra_fields_dict, auto_fields_set, use_pydantic_alias
200 )
201 for field_name, field in model_fields.items()
202 if field_name in fields_set
203 ]
204
205 all_model_fields.extend(
206 (
207 DataclassCreationFields(
208 name=field.name,
209 type_annotation=field.type,
210 field=field,
211 )
212 for field in extra_fields + private_fields
213 if field.name not in fields_set
214 )
215 )
216
217 # Sort fields so that fields with missing defaults go first
218 sorted_fields = sort_creation_fields(all_model_fields)
219
220 # Implicitly define `is_type_of` to support interfaces/unions that use
221 # pydantic objects (not the corresponding strawberry type)
222 @classmethod # type: ignore
223 def is_type_of(cls: Type, obj: Any, _info: GraphQLResolveInfo) -> bool:
224 return isinstance(obj, (cls, model))
225
226 namespace = {"is_type_of": is_type_of}
227 # We need to tell the difference between a from_pydantic method that is
228 # inherited from a base class and one that is defined by the user in the
229 # decorated class. We want to override the method only if it is
230 # inherited. To tell the difference, we compare the class name to the
231 # fully qualified name of the method, which will end in <class>.from_pydantic
232 has_custom_from_pydantic = hasattr(
233 cls, "from_pydantic"
234 ) and cls.from_pydantic.__qualname__.endswith(f"{cls.__name__}.from_pydantic")
235 has_custom_to_pydantic = hasattr(
236 cls, "to_pydantic"
237 ) and cls.to_pydantic.__qualname__.endswith(f"{cls.__name__}.to_pydantic")
238
239 if has_custom_from_pydantic:
240 namespace["from_pydantic"] = cls.from_pydantic
241 if has_custom_to_pydantic:
242 namespace["to_pydantic"] = cls.to_pydantic
243
244 cls = dataclasses.make_dataclass(
245 cls.__name__,
246 [field.to_tuple() for field in sorted_fields],
247 bases=cls.__bases__,
248 namespace=namespace,
249 )
250
251 _process_type(
252 cls,
253 name=name,
254 is_input=is_input,
255 is_interface=is_interface,
256 description=description,
257 directives=directives,
258 )
259
260 if is_input:
261 model._strawberry_input_type = cls # type: ignore
262 else:
263 model._strawberry_type = cls # type: ignore
264 cls._pydantic_type = model
265
266 def from_pydantic_default(
267 instance: PydanticModel, extra: Dict[str, Any] = None
268 ) -> StrawberryTypeFromPydantic[PydanticModel]:
269 return convert_pydantic_model_to_strawberry_class(
270 cls=cls, model_instance=instance, extra=extra
271 )
272
273 def to_pydantic_default(self) -> PydanticModel:
274 instance_kwargs = {
275 f.name: convert_strawberry_class_to_pydantic_model(
276 getattr(self, f.name)
277 )
278 for f in dataclasses.fields(self)
279 }
280 return model(**instance_kwargs)
281
282 if not has_custom_from_pydantic:
283 cls.from_pydantic = staticmethod(from_pydantic_default)
284 if not has_custom_to_pydantic:
285 cls.to_pydantic = to_pydantic_default
286
287 return cls
288
289 return wrap
290
291
292 input = partial(type, is_input=True)
293
294 interface = partial(type, is_interface=True)
295
[end of strawberry/experimental/pydantic/object_type.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/experimental/pydantic/object_type.py b/strawberry/experimental/pydantic/object_type.py
--- a/strawberry/experimental/pydantic/object_type.py
+++ b/strawberry/experimental/pydantic/object_type.py
@@ -110,6 +110,7 @@
strawberry_field = existing_fields[field.name]
else:
# otherwise we build an appropriate strawberry field that resolves it
+ existing_field = existing_fields.get(field.name)
strawberry_field = StrawberryField(
python_name=field.name,
graphql_name=field.alias
@@ -120,6 +121,13 @@
default_factory=get_default_factory_for_field(field),
type_annotation=type_annotation,
description=field.field_info.description,
+ deprecation_reason=(
+ existing_field.deprecation_reason if existing_field else None
+ ),
+ permission_classes=(
+ existing_field.permission_classes if existing_field else []
+ ),
+ directives=existing_field.directives if existing_field else (),
)
return DataclassCreationFields(
| {"golden_diff": "diff --git a/strawberry/experimental/pydantic/object_type.py b/strawberry/experimental/pydantic/object_type.py\n--- a/strawberry/experimental/pydantic/object_type.py\n+++ b/strawberry/experimental/pydantic/object_type.py\n@@ -110,6 +110,7 @@\n strawberry_field = existing_fields[field.name]\n else:\n # otherwise we build an appropriate strawberry field that resolves it\n+ existing_field = existing_fields.get(field.name)\n strawberry_field = StrawberryField(\n python_name=field.name,\n graphql_name=field.alias\n@@ -120,6 +121,13 @@\n default_factory=get_default_factory_for_field(field),\n type_annotation=type_annotation,\n description=field.field_info.description,\n+ deprecation_reason=(\n+ existing_field.deprecation_reason if existing_field else None\n+ ),\n+ permission_classes=(\n+ existing_field.permission_classes if existing_field else []\n+ ),\n+ directives=existing_field.directives if existing_field else (),\n )\n \n return DataclassCreationFields(\n", "issue": "Pydantic converted types have no way to deprecate fields\nSince we recreate the field object for the values that use simple fields, there isn't a clean way to add a deprecation_reason to a field from a pydantic class. The same is true for permission_classes and directives.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport builtins\nimport dataclasses\nimport warnings\nfrom functools import partial\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n List,\n Optional,\n Sequence,\n Set,\n Type,\n cast,\n)\n\nfrom pydantic import BaseModel\nfrom pydantic.fields import ModelField\nfrom typing_extensions import Literal\n\nfrom graphql import GraphQLResolveInfo\n\nimport strawberry\nfrom strawberry.arguments import UNSET\nfrom strawberry.experimental.pydantic.conversion import (\n convert_pydantic_model_to_strawberry_class,\n convert_strawberry_class_to_pydantic_model,\n)\nfrom strawberry.experimental.pydantic.fields import get_basic_type\nfrom strawberry.experimental.pydantic.utils import (\n DataclassCreationFields,\n ensure_all_auto_fields_in_pydantic,\n get_default_factory_for_field,\n get_private_fields,\n sort_creation_fields,\n)\nfrom strawberry.field import StrawberryField\nfrom strawberry.object_type import _process_type, _wrap_dataclass\nfrom strawberry.schema_directive import StrawberrySchemaDirective\nfrom strawberry.types.type_resolver import _get_fields\nfrom strawberry.types.types import TypeDefinition\n\nfrom .exceptions import MissingFieldsListError, UnregisteredTypeException\n\n\ndef replace_pydantic_types(type_: Any, is_input: bool):\n origin = getattr(type_, \"__origin__\", None)\n if origin is Literal:\n # Literal does not have types in its __args__ so we return early\n return type_\n if hasattr(type_, \"__args__\"):\n replaced_type = type_.copy_with(\n tuple(replace_pydantic_types(t, is_input) for t in type_.__args__)\n )\n\n if isinstance(replaced_type, TypeDefinition):\n # TODO: Not sure if this is necessary. 
No coverage in tests\n # TODO: Unnecessary with StrawberryObject\n\n replaced_type = builtins.type(\n replaced_type.name,\n (),\n {\"_type_definition\": replaced_type},\n )\n\n return replaced_type\n\n if issubclass(type_, BaseModel):\n attr = \"_strawberry_input_type\" if is_input else \"_strawberry_type\"\n if hasattr(type_, attr):\n return getattr(type_, attr)\n else:\n raise UnregisteredTypeException(type_)\n\n return type_\n\n\ndef get_type_for_field(field: ModelField, is_input: bool):\n outer_type = field.outer_type_\n basic_type = get_basic_type(outer_type)\n replaced_type = replace_pydantic_types(basic_type, is_input)\n\n if not field.required:\n return Optional[replaced_type]\n else:\n return replaced_type\n\n\ndef _build_dataclass_creation_fields(\n field: ModelField,\n is_input: bool,\n existing_fields: Dict[str, StrawberryField],\n auto_fields_set: Set[str],\n use_pydantic_alias: bool,\n) -> DataclassCreationFields:\n type_annotation = (\n get_type_for_field(field, is_input)\n if field.name in auto_fields_set\n else existing_fields[field.name].type\n )\n\n if (\n field.name in existing_fields\n and existing_fields[field.name].base_resolver is not None\n ):\n # if the user has defined a resolver for this field, always use it\n strawberry_field = existing_fields[field.name]\n else:\n # otherwise we build an appropriate strawberry field that resolves it\n strawberry_field = StrawberryField(\n python_name=field.name,\n graphql_name=field.alias\n if field.has_alias and use_pydantic_alias\n else None,\n # always unset because we use default_factory instead\n default=UNSET,\n default_factory=get_default_factory_for_field(field),\n type_annotation=type_annotation,\n description=field.field_info.description,\n )\n\n return DataclassCreationFields(\n name=field.name,\n type_annotation=type_annotation,\n field=strawberry_field,\n )\n\n\nif TYPE_CHECKING:\n from strawberry.experimental.pydantic.conversion_types import (\n PydanticModel,\n StrawberryTypeFromPydantic,\n )\n\n\ndef type(\n model: Type[PydanticModel],\n *,\n fields: Optional[List[str]] = None,\n name: Optional[str] = None,\n is_input: bool = False,\n is_interface: bool = False,\n description: Optional[str] = None,\n directives: Optional[Sequence[StrawberrySchemaDirective]] = (),\n all_fields: bool = False,\n use_pydantic_alias: bool = True,\n) -> Callable[..., Type[StrawberryTypeFromPydantic[PydanticModel]]]:\n def wrap(cls: Any) -> Type[StrawberryTypeFromPydantic[PydanticModel]]:\n model_fields = model.__fields__\n original_fields_set = set(fields) if fields else set([])\n\n if fields:\n warnings.warn(\n \"`fields` is deprecated, use `auto` type annotations instead\",\n DeprecationWarning,\n )\n\n existing_fields = getattr(cls, \"__annotations__\", {})\n # these are the fields that matched a field name in the pydantic model\n # and should copy their alias from the pydantic model\n fields_set = original_fields_set.union(\n set(name for name, _ in existing_fields.items() if name in model_fields)\n )\n # these are the fields that were marked with strawberry.auto and\n # should copy their type from the pydantic model\n auto_fields_set = original_fields_set.union(\n set(name for name, typ in existing_fields.items() if typ == strawberry.auto)\n )\n\n if all_fields:\n if fields_set:\n warnings.warn(\n \"Using all_fields overrides any explicitly defined fields \"\n \"in the model, using both is likely a bug\",\n stacklevel=2,\n )\n fields_set = set(model_fields.keys())\n auto_fields_set = set(model_fields.keys())\n\n if not 
fields_set:\n raise MissingFieldsListError(cls)\n\n ensure_all_auto_fields_in_pydantic(\n model=model, auto_fields=auto_fields_set, cls_name=cls.__name__\n )\n\n wrapped = _wrap_dataclass(cls)\n extra_strawberry_fields = _get_fields(wrapped)\n extra_fields = cast(List[dataclasses.Field], extra_strawberry_fields)\n private_fields = get_private_fields(wrapped)\n\n extra_fields_dict = {field.name: field for field in extra_strawberry_fields}\n\n all_model_fields: List[DataclassCreationFields] = [\n _build_dataclass_creation_fields(\n field, is_input, extra_fields_dict, auto_fields_set, use_pydantic_alias\n )\n for field_name, field in model_fields.items()\n if field_name in fields_set\n ]\n\n all_model_fields.extend(\n (\n DataclassCreationFields(\n name=field.name,\n type_annotation=field.type,\n field=field,\n )\n for field in extra_fields + private_fields\n if field.name not in fields_set\n )\n )\n\n # Sort fields so that fields with missing defaults go first\n sorted_fields = sort_creation_fields(all_model_fields)\n\n # Implicitly define `is_type_of` to support interfaces/unions that use\n # pydantic objects (not the corresponding strawberry type)\n @classmethod # type: ignore\n def is_type_of(cls: Type, obj: Any, _info: GraphQLResolveInfo) -> bool:\n return isinstance(obj, (cls, model))\n\n namespace = {\"is_type_of\": is_type_of}\n # We need to tell the difference between a from_pydantic method that is\n # inherited from a base class and one that is defined by the user in the\n # decorated class. We want to override the method only if it is\n # inherited. To tell the difference, we compare the class name to the\n # fully qualified name of the method, which will end in <class>.from_pydantic\n has_custom_from_pydantic = hasattr(\n cls, \"from_pydantic\"\n ) and cls.from_pydantic.__qualname__.endswith(f\"{cls.__name__}.from_pydantic\")\n has_custom_to_pydantic = hasattr(\n cls, \"to_pydantic\"\n ) and cls.to_pydantic.__qualname__.endswith(f\"{cls.__name__}.to_pydantic\")\n\n if has_custom_from_pydantic:\n namespace[\"from_pydantic\"] = cls.from_pydantic\n if has_custom_to_pydantic:\n namespace[\"to_pydantic\"] = cls.to_pydantic\n\n cls = dataclasses.make_dataclass(\n cls.__name__,\n [field.to_tuple() for field in sorted_fields],\n bases=cls.__bases__,\n namespace=namespace,\n )\n\n _process_type(\n cls,\n name=name,\n is_input=is_input,\n is_interface=is_interface,\n description=description,\n directives=directives,\n )\n\n if is_input:\n model._strawberry_input_type = cls # type: ignore\n else:\n model._strawberry_type = cls # type: ignore\n cls._pydantic_type = model\n\n def from_pydantic_default(\n instance: PydanticModel, extra: Dict[str, Any] = None\n ) -> StrawberryTypeFromPydantic[PydanticModel]:\n return convert_pydantic_model_to_strawberry_class(\n cls=cls, model_instance=instance, extra=extra\n )\n\n def to_pydantic_default(self) -> PydanticModel:\n instance_kwargs = {\n f.name: convert_strawberry_class_to_pydantic_model(\n getattr(self, f.name)\n )\n for f in dataclasses.fields(self)\n }\n return model(**instance_kwargs)\n\n if not has_custom_from_pydantic:\n cls.from_pydantic = staticmethod(from_pydantic_default)\n if not has_custom_to_pydantic:\n cls.to_pydantic = to_pydantic_default\n\n return cls\n\n return wrap\n\n\ninput = partial(type, is_input=True)\n\ninterface = partial(type, is_interface=True)\n", "path": "strawberry/experimental/pydantic/object_type.py"}]} | 3,556 | 235 |
gh_patches_debug_25281 | rasdani/github-patches | git_diff | crytic__slither-1470 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`naming-convention` does not flag `I` or `O` variable
### Describe the issue:
Slither has a detection for usage of
* l - Lowercase letter el
* O - Uppercase letter oh
* I - Uppercase letter eye
https://docs.soliditylang.org/en/v0.8.17/style-guide.html#names-to-avoid
However, this detection fails on `O` and `I`, as the code considers them "uppercase with underscores" even though there are no underscores.
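A quick standalone check with the same patterns quoted below from `naming_convention.py` shows why: `O` and `I` also satisfy the UPPER_CASE_WITH_UNDERSCORES pattern, so the l/O/I warning is suppressed for them, while `l` does not match it and still gets reported.

```python
import re

def should_avoid_name(name):               # same pattern as the detector
    return re.search("^[lOI]$", name) is not None

def is_upper_case_with_underscores(name):  # same pattern as the detector
    return re.search("^[A-Z0-9_]+_?$", name) is not None

for name in ("l", "O", "I"):
    print(name, should_avoid_name(name), is_upper_case_with_underscores(name))
# l True False  -> reported
# O True True   -> warning suppressed
# I True True   -> warning suppressed
```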
Relevant code:
* https://github.com/crytic/slither/blob/fba37f2c0c8196079719432d6324e42a1a974399/slither/detectors/naming_convention/naming_convention.py#L52-L54 (will match just uppercase letters as well)
* https://github.com/crytic/slither/blob/fba37f2c0c8196079719432d6324e42a1a974399/slither/detectors/naming_convention/naming_convention.py#L120-L137 (detection code)
### Code example to reproduce the issue:
```solidity
// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.0;
contract C {
bool constant l = false;
bool constant O = false;
bool constant I = false;
function f() public pure {}
}
```
### Version:
0.9.0
### Relevant log output:
_No response_
</issue>
<code>
[start of slither/detectors/naming_convention/naming_convention.py]
1 import re
2 from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification
3 from slither.formatters.naming_convention.naming_convention import custom_format
4
5
6 class NamingConvention(AbstractDetector):
7 """
8 Check if naming conventions are followed
9 https://solidity.readthedocs.io/en/v0.4.25/style-guide.html?highlight=naming_convention%20convention#naming_convention-conventions
10
11 Exceptions:
12 - Allow constant variables name/symbol/decimals to be lowercase (ERC20)
13 - Allow '_' at the beggining of the mixed_case match for private variables and unused parameters
14 - Ignore echidna properties (functions with names starting 'echidna_' or 'crytic_'
15 """
16
17 ARGUMENT = "naming-convention"
18 HELP = "Conformity to Solidity naming conventions"
19 IMPACT = DetectorClassification.INFORMATIONAL
20 CONFIDENCE = DetectorClassification.HIGH
21
22 WIKI = "https://github.com/crytic/slither/wiki/Detector-Documentation#conformance-to-solidity-naming-conventions"
23
24 WIKI_TITLE = "Conformance to Solidity naming conventions"
25
26 # region wiki_description
27 WIKI_DESCRIPTION = """
28 Solidity defines a [naming convention](https://solidity.readthedocs.io/en/v0.4.25/style-guide.html#naming-conventions) that should be followed.
29 #### Rule exceptions
30 - Allow constant variable name/symbol/decimals to be lowercase (`ERC20`).
31 - Allow `_` at the beginning of the `mixed_case` match for private variables and unused parameters."""
32 # endregion wiki_description
33
34 WIKI_RECOMMENDATION = "Follow the Solidity [naming convention](https://solidity.readthedocs.io/en/v0.4.25/style-guide.html#naming-conventions)."
35
36 STANDARD_JSON = False
37
38 @staticmethod
39 def is_cap_words(name):
40 return re.search("^[A-Z]([A-Za-z0-9]+)?_?$", name) is not None
41
42 @staticmethod
43 def is_mixed_case(name):
44 return re.search("^[a-z]([A-Za-z0-9]+)?_?$", name) is not None
45
46 @staticmethod
47 def is_mixed_case_with_underscore(name):
48 # Allow _ at the beginning to represent private variable
49 # or unused parameters
50 return re.search("^[_]?[a-z]([A-Za-z0-9]+)?_?$", name) is not None
51
52 @staticmethod
53 def is_upper_case_with_underscores(name):
54 return re.search("^[A-Z0-9_]+_?$", name) is not None
55
56 @staticmethod
57 def should_avoid_name(name):
58 return re.search("^[lOI]$", name) is not None
59
60 def _detect(self): # pylint: disable=too-many-branches,too-many-statements
61
62 results = []
63 for contract in self.contracts:
64
65 if not self.is_cap_words(contract.name):
66 info = ["Contract ", contract, " is not in CapWords\n"]
67
68 res = self.generate_result(info)
69 res.add(contract, {"target": "contract", "convention": "CapWords"})
70 results.append(res)
71
72 for struct in contract.structures_declared:
73 if not self.is_cap_words(struct.name):
74 info = ["Struct ", struct, " is not in CapWords\n"]
75
76 res = self.generate_result(info)
77 res.add(struct, {"target": "structure", "convention": "CapWords"})
78 results.append(res)
79
80 for event in contract.events_declared:
81 if not self.is_cap_words(event.name):
82 info = ["Event ", event, " is not in CapWords\n"]
83
84 res = self.generate_result(info)
85 res.add(event, {"target": "event", "convention": "CapWords"})
86 results.append(res)
87
88 for func in contract.functions_declared:
89 if func.is_constructor:
90 continue
91 if not self.is_mixed_case(func.name):
92 if func.visibility in [
93 "internal",
94 "private",
95 ] and self.is_mixed_case_with_underscore(func.name):
96 continue
97 if func.name.startswith(("echidna_", "crytic_")):
98 continue
99 info = ["Function ", func, " is not in mixedCase\n"]
100
101 res = self.generate_result(info)
102 res.add(func, {"target": "function", "convention": "mixedCase"})
103 results.append(res)
104
105 for argument in func.parameters:
106 # Ignore parameter names that are not specified i.e. empty strings
107 if argument.name == "":
108 continue
109 if argument in func.variables_read_or_written:
110 correct_naming = self.is_mixed_case(argument.name)
111 else:
112 correct_naming = self.is_mixed_case_with_underscore(argument.name)
113 if not correct_naming:
114 info = ["Parameter ", argument, " is not in mixedCase\n"]
115
116 res = self.generate_result(info)
117 res.add(argument, {"target": "parameter", "convention": "mixedCase"})
118 results.append(res)
119
120 for var in contract.state_variables_declared:
121 if self.should_avoid_name(var.name):
122 if not self.is_upper_case_with_underscores(var.name):
123 info = [
124 "Variable ",
125 var,
126 " used l, O, I, which should not be used\n",
127 ]
128
129 res = self.generate_result(info)
130 res.add(
131 var,
132 {
133 "target": "variable",
134 "convention": "l_O_I_should_not_be_used",
135 },
136 )
137 results.append(res)
138
139 if var.is_constant is True:
140 # For ERC20 compatibility
141 if var.name in ["symbol", "name", "decimals"]:
142 continue
143
144 if not self.is_upper_case_with_underscores(var.name):
145 info = [
146 "Constant ",
147 var,
148 " is not in UPPER_CASE_WITH_UNDERSCORES\n",
149 ]
150
151 res = self.generate_result(info)
152 res.add(
153 var,
154 {
155 "target": "variable_constant",
156 "convention": "UPPER_CASE_WITH_UNDERSCORES",
157 },
158 )
159 results.append(res)
160
161 else:
162 if var.visibility == "private":
163 correct_naming = self.is_mixed_case_with_underscore(var.name)
164 else:
165 correct_naming = self.is_mixed_case(var.name)
166 if not correct_naming:
167 info = ["Variable ", var, " is not in mixedCase\n"]
168
169 res = self.generate_result(info)
170 res.add(var, {"target": "variable", "convention": "mixedCase"})
171 results.append(res)
172
173 for enum in contract.enums_declared:
174 if not self.is_cap_words(enum.name):
175 info = ["Enum ", enum, " is not in CapWords\n"]
176
177 res = self.generate_result(info)
178 res.add(enum, {"target": "enum", "convention": "CapWords"})
179 results.append(res)
180
181 for modifier in contract.modifiers_declared:
182 if not self.is_mixed_case(modifier.name):
183 info = ["Modifier ", modifier, " is not in mixedCase\n"]
184
185 res = self.generate_result(info)
186 res.add(modifier, {"target": "modifier", "convention": "mixedCase"})
187 results.append(res)
188
189 return results
190
191 @staticmethod
192 def _format(slither, result):
193 custom_format(slither, result)
194
[end of slither/detectors/naming_convention/naming_convention.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/slither/detectors/naming_convention/naming_convention.py b/slither/detectors/naming_convention/naming_convention.py
--- a/slither/detectors/naming_convention/naming_convention.py
+++ b/slither/detectors/naming_convention/naming_convention.py
@@ -119,22 +119,21 @@
for var in contract.state_variables_declared:
if self.should_avoid_name(var.name):
- if not self.is_upper_case_with_underscores(var.name):
- info = [
- "Variable ",
- var,
- " used l, O, I, which should not be used\n",
- ]
+ info = [
+ "Variable ",
+ var,
+ " is single letter l, O, or I, which should not be used\n",
+ ]
- res = self.generate_result(info)
- res.add(
- var,
- {
- "target": "variable",
- "convention": "l_O_I_should_not_be_used",
- },
- )
- results.append(res)
+ res = self.generate_result(info)
+ res.add(
+ var,
+ {
+ "target": "variable",
+ "convention": "l_O_I_should_not_be_used",
+ },
+ )
+ results.append(res)
if var.is_constant is True:
# For ERC20 compatibility
| {"golden_diff": "diff --git a/slither/detectors/naming_convention/naming_convention.py b/slither/detectors/naming_convention/naming_convention.py\n--- a/slither/detectors/naming_convention/naming_convention.py\n+++ b/slither/detectors/naming_convention/naming_convention.py\n@@ -119,22 +119,21 @@\n \n for var in contract.state_variables_declared:\n if self.should_avoid_name(var.name):\n- if not self.is_upper_case_with_underscores(var.name):\n- info = [\n- \"Variable \",\n- var,\n- \" used l, O, I, which should not be used\\n\",\n- ]\n+ info = [\n+ \"Variable \",\n+ var,\n+ \" is single letter l, O, or I, which should not be used\\n\",\n+ ]\n \n- res = self.generate_result(info)\n- res.add(\n- var,\n- {\n- \"target\": \"variable\",\n- \"convention\": \"l_O_I_should_not_be_used\",\n- },\n- )\n- results.append(res)\n+ res = self.generate_result(info)\n+ res.add(\n+ var,\n+ {\n+ \"target\": \"variable\",\n+ \"convention\": \"l_O_I_should_not_be_used\",\n+ },\n+ )\n+ results.append(res)\n \n if var.is_constant is True:\n # For ERC20 compatibility\n", "issue": "`naming-convention` does not flag `I` or `O` variable\n### Describe the issue:\n\nSlither has a detection for usage of\r\n\r\n* l - Lowercase letter el\r\n* O - Uppercase letter oh\r\n* I - Uppercase letter eye\r\n\r\nhttps://docs.soliditylang.org/en/v0.8.17/style-guide.html#names-to-avoid\r\n\r\nHowever this detection fails on O and I, as the code considers them \"uppercase with underscores\" even though there are no underscores.\r\n\r\nRelevant code:\r\n* https://github.com/crytic/slither/blob/fba37f2c0c8196079719432d6324e42a1a974399/slither/detectors/naming_convention/naming_convention.py#L52-L54 (will match just uppercase letters as well)\r\n* https://github.com/crytic/slither/blob/fba37f2c0c8196079719432d6324e42a1a974399/slither/detectors/naming_convention/naming_convention.py#L120-L137 (detection code)\n\n### Code example to reproduce the issue:\n\n```solidity\r\n// SPDX-License-Identifier: UNLICENSED\r\npragma solidity ^0.8.0;\r\n\r\ncontract C {\r\n bool constant l = false;\r\n bool constant O = false;\r\n bool constant I = false;\r\n function f() public pure {}\r\n}\r\n```\n\n### Version:\n\n0.9.0\n\n### Relevant log output:\n\n_No response_\n", "before_files": [{"content": "import re\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\nfrom slither.formatters.naming_convention.naming_convention import custom_format\n\n\nclass NamingConvention(AbstractDetector):\n \"\"\"\n Check if naming conventions are followed\n https://solidity.readthedocs.io/en/v0.4.25/style-guide.html?highlight=naming_convention%20convention#naming_convention-conventions\n\n Exceptions:\n - Allow constant variables name/symbol/decimals to be lowercase (ERC20)\n - Allow '_' at the beggining of the mixed_case match for private variables and unused parameters\n - Ignore echidna properties (functions with names starting 'echidna_' or 'crytic_'\n \"\"\"\n\n ARGUMENT = \"naming-convention\"\n HELP = \"Conformity to Solidity naming conventions\"\n IMPACT = DetectorClassification.INFORMATIONAL\n CONFIDENCE = DetectorClassification.HIGH\n\n WIKI = \"https://github.com/crytic/slither/wiki/Detector-Documentation#conformance-to-solidity-naming-conventions\"\n\n WIKI_TITLE = \"Conformance to Solidity naming conventions\"\n\n # region wiki_description\n WIKI_DESCRIPTION = \"\"\"\nSolidity defines a [naming convention](https://solidity.readthedocs.io/en/v0.4.25/style-guide.html#naming-conventions) that should be followed.\n#### Rule 
exceptions\n- Allow constant variable name/symbol/decimals to be lowercase (`ERC20`).\n- Allow `_` at the beginning of the `mixed_case` match for private variables and unused parameters.\"\"\"\n # endregion wiki_description\n\n WIKI_RECOMMENDATION = \"Follow the Solidity [naming convention](https://solidity.readthedocs.io/en/v0.4.25/style-guide.html#naming-conventions).\"\n\n STANDARD_JSON = False\n\n @staticmethod\n def is_cap_words(name):\n return re.search(\"^[A-Z]([A-Za-z0-9]+)?_?$\", name) is not None\n\n @staticmethod\n def is_mixed_case(name):\n return re.search(\"^[a-z]([A-Za-z0-9]+)?_?$\", name) is not None\n\n @staticmethod\n def is_mixed_case_with_underscore(name):\n # Allow _ at the beginning to represent private variable\n # or unused parameters\n return re.search(\"^[_]?[a-z]([A-Za-z0-9]+)?_?$\", name) is not None\n\n @staticmethod\n def is_upper_case_with_underscores(name):\n return re.search(\"^[A-Z0-9_]+_?$\", name) is not None\n\n @staticmethod\n def should_avoid_name(name):\n return re.search(\"^[lOI]$\", name) is not None\n\n def _detect(self): # pylint: disable=too-many-branches,too-many-statements\n\n results = []\n for contract in self.contracts:\n\n if not self.is_cap_words(contract.name):\n info = [\"Contract \", contract, \" is not in CapWords\\n\"]\n\n res = self.generate_result(info)\n res.add(contract, {\"target\": \"contract\", \"convention\": \"CapWords\"})\n results.append(res)\n\n for struct in contract.structures_declared:\n if not self.is_cap_words(struct.name):\n info = [\"Struct \", struct, \" is not in CapWords\\n\"]\n\n res = self.generate_result(info)\n res.add(struct, {\"target\": \"structure\", \"convention\": \"CapWords\"})\n results.append(res)\n\n for event in contract.events_declared:\n if not self.is_cap_words(event.name):\n info = [\"Event \", event, \" is not in CapWords\\n\"]\n\n res = self.generate_result(info)\n res.add(event, {\"target\": \"event\", \"convention\": \"CapWords\"})\n results.append(res)\n\n for func in contract.functions_declared:\n if func.is_constructor:\n continue\n if not self.is_mixed_case(func.name):\n if func.visibility in [\n \"internal\",\n \"private\",\n ] and self.is_mixed_case_with_underscore(func.name):\n continue\n if func.name.startswith((\"echidna_\", \"crytic_\")):\n continue\n info = [\"Function \", func, \" is not in mixedCase\\n\"]\n\n res = self.generate_result(info)\n res.add(func, {\"target\": \"function\", \"convention\": \"mixedCase\"})\n results.append(res)\n\n for argument in func.parameters:\n # Ignore parameter names that are not specified i.e. 
empty strings\n if argument.name == \"\":\n continue\n if argument in func.variables_read_or_written:\n correct_naming = self.is_mixed_case(argument.name)\n else:\n correct_naming = self.is_mixed_case_with_underscore(argument.name)\n if not correct_naming:\n info = [\"Parameter \", argument, \" is not in mixedCase\\n\"]\n\n res = self.generate_result(info)\n res.add(argument, {\"target\": \"parameter\", \"convention\": \"mixedCase\"})\n results.append(res)\n\n for var in contract.state_variables_declared:\n if self.should_avoid_name(var.name):\n if not self.is_upper_case_with_underscores(var.name):\n info = [\n \"Variable \",\n var,\n \" used l, O, I, which should not be used\\n\",\n ]\n\n res = self.generate_result(info)\n res.add(\n var,\n {\n \"target\": \"variable\",\n \"convention\": \"l_O_I_should_not_be_used\",\n },\n )\n results.append(res)\n\n if var.is_constant is True:\n # For ERC20 compatibility\n if var.name in [\"symbol\", \"name\", \"decimals\"]:\n continue\n\n if not self.is_upper_case_with_underscores(var.name):\n info = [\n \"Constant \",\n var,\n \" is not in UPPER_CASE_WITH_UNDERSCORES\\n\",\n ]\n\n res = self.generate_result(info)\n res.add(\n var,\n {\n \"target\": \"variable_constant\",\n \"convention\": \"UPPER_CASE_WITH_UNDERSCORES\",\n },\n )\n results.append(res)\n\n else:\n if var.visibility == \"private\":\n correct_naming = self.is_mixed_case_with_underscore(var.name)\n else:\n correct_naming = self.is_mixed_case(var.name)\n if not correct_naming:\n info = [\"Variable \", var, \" is not in mixedCase\\n\"]\n\n res = self.generate_result(info)\n res.add(var, {\"target\": \"variable\", \"convention\": \"mixedCase\"})\n results.append(res)\n\n for enum in contract.enums_declared:\n if not self.is_cap_words(enum.name):\n info = [\"Enum \", enum, \" is not in CapWords\\n\"]\n\n res = self.generate_result(info)\n res.add(enum, {\"target\": \"enum\", \"convention\": \"CapWords\"})\n results.append(res)\n\n for modifier in contract.modifiers_declared:\n if not self.is_mixed_case(modifier.name):\n info = [\"Modifier \", modifier, \" is not in mixedCase\\n\"]\n\n res = self.generate_result(info)\n res.add(modifier, {\"target\": \"modifier\", \"convention\": \"mixedCase\"})\n results.append(res)\n\n return results\n\n @staticmethod\n def _format(slither, result):\n custom_format(slither, result)\n", "path": "slither/detectors/naming_convention/naming_convention.py"}]} | 2,999 | 321 |
gh_patches_debug_35598 | rasdani/github-patches | git_diff | beetbox__beets-2917 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
mbsync: Tolerate MusicBrainz recording ID changes
mbsync uses the musicbrainz recording id to match local track information against a certain track in a musicbrainz release. However, there is no fallback matching defined, so once the recording id changes in musicbrainz the local track can no longer be synced unless the recording id is changed manually. But this behavior is also not very visible (I stumbled upon it by accident), so most users would not even be aware that the recording id is outdated.
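One possible fallback is to match on the release track MBID first (it is tied to the release tracklist rather than to the recording) and only then on the recording MBID. A rough sketch of that mapping logic, assuming library items carry `mb_releasetrackid` and the fetched `TrackInfo` objects carry `release_track_id`:

```python
from collections import defaultdict

def build_track_mapping(items, album_info):
    """Sketch: prefer release track MBIDs, fall back to recording MBIDs."""
    releasetrack_index = {t.release_track_id: t for t in album_info.tracks}
    track_index = defaultdict(list)
    for t in album_info.tracks:
        track_index[t.track_id].append(t)

    mapping = {}
    for item in items:
        if item.mb_releasetrackid and item.mb_releasetrackid in releasetrack_index:
            mapping[item] = releasetrack_index[item.mb_releasetrackid]
        else:
            candidates = track_index[item.mb_trackid]
            if len(candidates) == 1:
                mapping[item] = candidates[0]
    return mapping
```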
</issue>
<code>
[start of beetsplug/mbsync.py]
1 # -*- coding: utf-8 -*-
2 # This file is part of beets.
3 # Copyright 2016, Jakob Schnitzer.
4 #
5 # Permission is hereby granted, free of charge, to any person obtaining
6 # a copy of this software and associated documentation files (the
7 # "Software"), to deal in the Software without restriction, including
8 # without limitation the rights to use, copy, modify, merge, publish,
9 # distribute, sublicense, and/or sell copies of the Software, and to
10 # permit persons to whom the Software is furnished to do so, subject to
11 # the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be
14 # included in all copies or substantial portions of the Software.
15
16 """Update library's tags using MusicBrainz.
17 """
18 from __future__ import division, absolute_import, print_function
19
20 from beets.plugins import BeetsPlugin
21 from beets import autotag, library, ui, util
22 from beets.autotag import hooks
23 from collections import defaultdict
24
25
26 def apply_item_changes(lib, item, move, pretend, write):
27 """Store, move and write the item according to the arguments.
28 """
29 if not pretend:
30 # Move the item if it's in the library.
31 if move and lib.directory in util.ancestry(item.path):
32 item.move(with_album=False)
33
34 if write:
35 item.try_write()
36 item.store()
37
38
39 class MBSyncPlugin(BeetsPlugin):
40 def __init__(self):
41 super(MBSyncPlugin, self).__init__()
42
43 def commands(self):
44 cmd = ui.Subcommand('mbsync',
45 help=u'update metadata from musicbrainz')
46 cmd.parser.add_option(
47 u'-p', u'--pretend', action='store_true',
48 help=u'show all changes but do nothing')
49 cmd.parser.add_option(
50 u'-m', u'--move', action='store_true', dest='move',
51 help=u"move files in the library directory")
52 cmd.parser.add_option(
53 u'-M', u'--nomove', action='store_false', dest='move',
54 help=u"don't move files in library")
55 cmd.parser.add_option(
56 u'-W', u'--nowrite', action='store_false',
57 default=None, dest='write',
58 help=u"don't write updated metadata to files")
59 cmd.parser.add_format_option()
60 cmd.func = self.func
61 return [cmd]
62
63 def func(self, lib, opts, args):
64 """Command handler for the mbsync function.
65 """
66 move = ui.should_move(opts.move)
67 pretend = opts.pretend
68 write = ui.should_write(opts.write)
69 query = ui.decargs(args)
70
71 self.singletons(lib, query, move, pretend, write)
72 self.albums(lib, query, move, pretend, write)
73
74 def singletons(self, lib, query, move, pretend, write):
75 """Retrieve and apply info from the autotagger for items matched by
76 query.
77 """
78 for item in lib.items(query + [u'singleton:true']):
79 item_formatted = format(item)
80 if not item.mb_trackid:
81 self._log.info(u'Skipping singleton with no mb_trackid: {0}',
82 item_formatted)
83 continue
84
85 # Get the MusicBrainz recording info.
86 track_info = hooks.track_for_mbid(item.mb_trackid)
87 if not track_info:
88 self._log.info(u'Recording ID not found: {0} for track {0}',
89 item.mb_trackid,
90 item_formatted)
91 continue
92
93 # Apply.
94 with lib.transaction():
95 autotag.apply_item_metadata(item, track_info)
96 apply_item_changes(lib, item, move, pretend, write)
97
98 def albums(self, lib, query, move, pretend, write):
99 """Retrieve and apply info from the autotagger for albums matched by
100 query and their items.
101 """
102 # Process matching albums.
103 for a in lib.albums(query):
104 album_formatted = format(a)
105 if not a.mb_albumid:
106 self._log.info(u'Skipping album with no mb_albumid: {0}',
107 album_formatted)
108 continue
109
110 items = list(a.items())
111
112 # Get the MusicBrainz album information.
113 album_info = hooks.album_for_mbid(a.mb_albumid)
114 if not album_info:
115 self._log.info(u'Release ID {0} not found for album {1}',
116 a.mb_albumid,
117 album_formatted)
118 continue
119
120 # Map recording MBIDs to their information. Recordings can appear
121 # multiple times on a release, so each MBID maps to a list of
122 # TrackInfo objects.
123 track_index = defaultdict(list)
124 for track_info in album_info.tracks:
125 track_index[track_info.track_id].append(track_info)
126
127 # Construct a track mapping according to MBIDs. This should work
128 # for albums that have missing or extra tracks. If there are
129 # multiple copies of a recording, they are disambiguated using
130 # their disc and track number.
131 mapping = {}
132 for item in items:
133 candidates = track_index[item.mb_trackid]
134 if len(candidates) == 1:
135 mapping[item] = candidates[0]
136 else:
137 for c in candidates:
138 if (c.medium_index == item.track and
139 c.medium == item.disc):
140 mapping[item] = c
141 break
142
143 # Apply.
144 self._log.debug(u'applying changes to {}', album_formatted)
145 with lib.transaction():
146 autotag.apply_metadata(album_info, mapping)
147 changed = False
148 for item in items:
149 item_changed = ui.show_model_changes(item)
150 changed |= item_changed
151 if item_changed:
152 apply_item_changes(lib, item, move, pretend, write)
153
154 if not changed:
155 # No change to any item.
156 continue
157
158 if not pretend:
159 # Update album structure to reflect an item in it.
160 for key in library.Album.item_keys:
161 a[key] = items[0][key]
162 a.store()
163
164 # Move album art (and any inconsistent items).
165 if move and lib.directory in util.ancestry(items[0].path):
166 self._log.debug(u'moving album {0}', album_formatted)
167 a.move()
168
[end of beetsplug/mbsync.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/beetsplug/mbsync.py b/beetsplug/mbsync.py
--- a/beetsplug/mbsync.py
+++ b/beetsplug/mbsync.py
@@ -117,28 +117,35 @@
album_formatted)
continue
- # Map recording MBIDs to their information. Recordings can appear
- # multiple times on a release, so each MBID maps to a list of
- # TrackInfo objects.
+ # Map release track and recording MBIDs to their information.
+ # Recordings can appear multiple times on a release, so each MBID
+ # maps to a list of TrackInfo objects.
+ releasetrack_index = dict()
track_index = defaultdict(list)
for track_info in album_info.tracks:
+ releasetrack_index[track_info.release_track_id] = track_info
track_index[track_info.track_id].append(track_info)
- # Construct a track mapping according to MBIDs. This should work
- # for albums that have missing or extra tracks. If there are
- # multiple copies of a recording, they are disambiguated using
- # their disc and track number.
+ # Construct a track mapping according to MBIDs (release track MBIDs
+ # first, if available, and recording MBIDs otherwise). This should
+ # work for albums that have missing or extra tracks.
mapping = {}
for item in items:
- candidates = track_index[item.mb_trackid]
- if len(candidates) == 1:
- mapping[item] = candidates[0]
+ if item.mb_releasetrackid and \
+ item.mb_releasetrackid in releasetrack_index:
+ mapping[item] = releasetrack_index[item.mb_releasetrackid]
else:
- for c in candidates:
- if (c.medium_index == item.track and
- c.medium == item.disc):
- mapping[item] = c
- break
+ candidates = track_index[item.mb_trackid]
+ if len(candidates) == 1:
+ mapping[item] = candidates[0]
+ else:
+ # If there are multiple copies of a recording, they are
+ # disambiguated using their disc and track number.
+ for c in candidates:
+ if (c.medium_index == item.track and
+ c.medium == item.disc):
+ mapping[item] = c
+ break
# Apply.
self._log.debug(u'applying changes to {}', album_formatted)
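For readers skimming the diff, the lookup order it introduces boils down to the following (an illustrative sketch, not a verbatim excerpt — attribute names follow the diff above, and it assumes a release-track MBID stays put when the underlying recording MBID is merged or renamed on MusicBrainz):

```python
def pick_track_info(item, releasetrack_index, track_index):
    # 1) Prefer the release-track MBID, which is meant to survive recording-ID churn.
    if item.mb_releasetrackid and item.mb_releasetrackid in releasetrack_index:
        return releasetrack_index[item.mb_releasetrackid]
    # 2) Fall back to the recording MBID; duplicate recordings on one release are
    #    disambiguated by disc and track number.
    candidates = track_index[item.mb_trackid]
    if len(candidates) == 1:
        return candidates[0]
    for c in candidates:
        if c.medium_index == item.track and c.medium == item.disc:
            return c
    return None
```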
| {"golden_diff": "diff --git a/beetsplug/mbsync.py b/beetsplug/mbsync.py\n--- a/beetsplug/mbsync.py\n+++ b/beetsplug/mbsync.py\n@@ -117,28 +117,35 @@\n album_formatted)\n continue\n \n- # Map recording MBIDs to their information. Recordings can appear\n- # multiple times on a release, so each MBID maps to a list of\n- # TrackInfo objects.\n+ # Map release track and recording MBIDs to their information.\n+ # Recordings can appear multiple times on a release, so each MBID\n+ # maps to a list of TrackInfo objects.\n+ releasetrack_index = dict()\n track_index = defaultdict(list)\n for track_info in album_info.tracks:\n+ releasetrack_index[track_info.release_track_id] = track_info\n track_index[track_info.track_id].append(track_info)\n \n- # Construct a track mapping according to MBIDs. This should work\n- # for albums that have missing or extra tracks. If there are\n- # multiple copies of a recording, they are disambiguated using\n- # their disc and track number.\n+ # Construct a track mapping according to MBIDs (release track MBIDs\n+ # first, if available, and recording MBIDs otherwise). This should\n+ # work for albums that have missing or extra tracks.\n mapping = {}\n for item in items:\n- candidates = track_index[item.mb_trackid]\n- if len(candidates) == 1:\n- mapping[item] = candidates[0]\n+ if item.mb_releasetrackid and \\\n+ item.mb_releasetrackid in releasetrack_index:\n+ mapping[item] = releasetrack_index[item.mb_releasetrackid]\n else:\n- for c in candidates:\n- if (c.medium_index == item.track and\n- c.medium == item.disc):\n- mapping[item] = c\n- break\n+ candidates = track_index[item.mb_trackid]\n+ if len(candidates) == 1:\n+ mapping[item] = candidates[0]\n+ else:\n+ # If there are multiple copies of a recording, they are\n+ # disambiguated using their disc and track number.\n+ for c in candidates:\n+ if (c.medium_index == item.track and\n+ c.medium == item.disc):\n+ mapping[item] = c\n+ break\n \n # Apply.\n self._log.debug(u'applying changes to {}', album_formatted)\n", "issue": "mbsync: Tolerate MusicBrainz recording ID changes\nmbsync uses the musicbrainz recording id to match local track information against a certain track in a musicbrainz release. However, there is no fallback matching defined, so once the recording id changes in musicbrainz the local track can no longer be synced unless the recording id is changed manually. But this behavior is also not very visible (I stumbled upon it by accident), so most users would not even be aware that the recording id is outdated.\n\nmbsync: Tolerate MusicBrainz recording ID changes\nmbsync uses the musicbrainz recording id to match local track information against a certain track in a musicbrainz release. However, there is no fallback matching defined, so once the recording id changes in musicbrainz the local track can no longer be synced unless the recording id is changed manually. 
But this behavior is also not very visible (I stumbled upon it by accident), so most users would not even be aware that the recording id is outdated.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of beets.\n# Copyright 2016, Jakob Schnitzer.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Update library's tags using MusicBrainz.\n\"\"\"\nfrom __future__ import division, absolute_import, print_function\n\nfrom beets.plugins import BeetsPlugin\nfrom beets import autotag, library, ui, util\nfrom beets.autotag import hooks\nfrom collections import defaultdict\n\n\ndef apply_item_changes(lib, item, move, pretend, write):\n \"\"\"Store, move and write the item according to the arguments.\n \"\"\"\n if not pretend:\n # Move the item if it's in the library.\n if move and lib.directory in util.ancestry(item.path):\n item.move(with_album=False)\n\n if write:\n item.try_write()\n item.store()\n\n\nclass MBSyncPlugin(BeetsPlugin):\n def __init__(self):\n super(MBSyncPlugin, self).__init__()\n\n def commands(self):\n cmd = ui.Subcommand('mbsync',\n help=u'update metadata from musicbrainz')\n cmd.parser.add_option(\n u'-p', u'--pretend', action='store_true',\n help=u'show all changes but do nothing')\n cmd.parser.add_option(\n u'-m', u'--move', action='store_true', dest='move',\n help=u\"move files in the library directory\")\n cmd.parser.add_option(\n u'-M', u'--nomove', action='store_false', dest='move',\n help=u\"don't move files in library\")\n cmd.parser.add_option(\n u'-W', u'--nowrite', action='store_false',\n default=None, dest='write',\n help=u\"don't write updated metadata to files\")\n cmd.parser.add_format_option()\n cmd.func = self.func\n return [cmd]\n\n def func(self, lib, opts, args):\n \"\"\"Command handler for the mbsync function.\n \"\"\"\n move = ui.should_move(opts.move)\n pretend = opts.pretend\n write = ui.should_write(opts.write)\n query = ui.decargs(args)\n\n self.singletons(lib, query, move, pretend, write)\n self.albums(lib, query, move, pretend, write)\n\n def singletons(self, lib, query, move, pretend, write):\n \"\"\"Retrieve and apply info from the autotagger for items matched by\n query.\n \"\"\"\n for item in lib.items(query + [u'singleton:true']):\n item_formatted = format(item)\n if not item.mb_trackid:\n self._log.info(u'Skipping singleton with no mb_trackid: {0}',\n item_formatted)\n continue\n\n # Get the MusicBrainz recording info.\n track_info = hooks.track_for_mbid(item.mb_trackid)\n if not track_info:\n self._log.info(u'Recording ID not found: {0} for track {0}',\n item.mb_trackid,\n item_formatted)\n continue\n\n # Apply.\n with lib.transaction():\n autotag.apply_item_metadata(item, track_info)\n apply_item_changes(lib, item, move, pretend, write)\n\n def albums(self, lib, query, move, pretend, write):\n \"\"\"Retrieve and apply info from the autotagger for albums matched by\n query and their items.\n \"\"\"\n # Process matching albums.\n for a in lib.albums(query):\n 
album_formatted = format(a)\n if not a.mb_albumid:\n self._log.info(u'Skipping album with no mb_albumid: {0}',\n album_formatted)\n continue\n\n items = list(a.items())\n\n # Get the MusicBrainz album information.\n album_info = hooks.album_for_mbid(a.mb_albumid)\n if not album_info:\n self._log.info(u'Release ID {0} not found for album {1}',\n a.mb_albumid,\n album_formatted)\n continue\n\n # Map recording MBIDs to their information. Recordings can appear\n # multiple times on a release, so each MBID maps to a list of\n # TrackInfo objects.\n track_index = defaultdict(list)\n for track_info in album_info.tracks:\n track_index[track_info.track_id].append(track_info)\n\n # Construct a track mapping according to MBIDs. This should work\n # for albums that have missing or extra tracks. If there are\n # multiple copies of a recording, they are disambiguated using\n # their disc and track number.\n mapping = {}\n for item in items:\n candidates = track_index[item.mb_trackid]\n if len(candidates) == 1:\n mapping[item] = candidates[0]\n else:\n for c in candidates:\n if (c.medium_index == item.track and\n c.medium == item.disc):\n mapping[item] = c\n break\n\n # Apply.\n self._log.debug(u'applying changes to {}', album_formatted)\n with lib.transaction():\n autotag.apply_metadata(album_info, mapping)\n changed = False\n for item in items:\n item_changed = ui.show_model_changes(item)\n changed |= item_changed\n if item_changed:\n apply_item_changes(lib, item, move, pretend, write)\n\n if not changed:\n # No change to any item.\n continue\n\n if not pretend:\n # Update album structure to reflect an item in it.\n for key in library.Album.item_keys:\n a[key] = items[0][key]\n a.store()\n\n # Move album art (and any inconsistent items).\n if move and lib.directory in util.ancestry(items[0].path):\n self._log.debug(u'moving album {0}', album_formatted)\n a.move()\n", "path": "beetsplug/mbsync.py"}]} | 2,508 | 562 |
gh_patches_debug_28913 | rasdani/github-patches | git_diff | PrefectHQ__prefect-2534 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Behaviour differs from documentation: state.result & flow.get_tasks
Prefect Version: 0.10.4
## Current behavior
Minor issue - The following behaviour doesn't match up to that described by the documentation on [result handling](https://docs.prefect.io/core/concepts/results.html#results):
```python
>>> task_ref = flow.get_tasks[0]
>>> state = flow.run()
>>> state.result[task_ref]._result # a Task State's Result contains the Task's return value
<Result: 1>
```
Instead prefect requires that the task_ref be created via a name or tag call (`flow.get_tasks(name='<name>')` instead of `flow.get_tasks[0]`) and be extracted from its list (`state.result[task_ref[0]]` instead of `state.result[task_ref]`):
```python
>>> task_ref = flow.get_tasks(name='<name>')
>>> state = flow.run()
>>> state.result[task_ref[0]]._result # a Task State's Result contains the Task's return value
<Result: 1>
```
The behaviour also differs from the documentation on [Local Debugging](https://docs.prefect.io/core/advanced_tutorials/local-debugging.html#local-debugging):
```python
from prefect import Flow, task
@task
def gotcha():
tup = ('a', ['b'])
try:
tup[1] += ['c']
except TypeError:
assert len(tup[1]) == 1
flow = Flow(name="tuples", tasks=[gotcha])
state = flow.run()
state.result # {<Task: gotcha>: Failed("Unexpected error: AssertionError()")}
failed_state = state.result[gotcha]
raise failed_state.result
```
## Examples
### `flow.get_tasks`:
```python
from prefect import Flow, task
import pandas as pd
@task(name='load data')
def _load_data() -> pd.DataFrame:
return pd.DataFrame({'a': [1, 2, 3]})
with Flow('preprocess-VO') as flow:
_load_data()
state = flow.run()
task_ref = flow.get_tasks[0]
```
Returns:
```python
>>> TypeError: 'method' object is not subscriptable
```
### `state.result[task_ref]`
```python
from prefect import Flow, task
import pandas as pd
@task(name='load data')
def _load_data() -> pd.DataFrame:
return pd.DataFrame({'a': [1, 2, 3]})
with Flow('preprocess-VO') as flow:
_load_data()
state = flow.run()
task_ref = flow.get_tasks(name='load data')
state.result[task_ref]
```
Returns:
```python
>>> TypeError: unhashable type: 'list'
```
### `state.result[<function name>]`
```python
from prefect import Flow, task
import pandas as pd
@task(name='load data')
def _load_data() -> pd.DataFrame:
return pd.DataFrame({'a': [1, 2, 3]})
with Flow('preprocess-VO') as flow:
_load_data()
state = flow.run()
task_ref = flow.get_tasks(name='load data')
state.result[_load_data]
```
```python
KeyError: <Task: load data>
```
## Proposed behavior
It may be a bit more readable to adapt the functionality here so that Prefect behaves as in the documentation:
- `state.result[task_ref]` can be used instead of `state.result[task_ref[0]]`
- `state.result[<func name>]` works
- `flow.get_tasks[0]` returns a task reference
Additionally, it would be great if function names could be passed to either `state.result[<func name>]` (EDIT: I see now that this functionality did exist) or `flow.get_tasks(<func name>)` as an alternative to adding tags and/or names to each task function, to avoid having to double-name all functions.
---
Thanks for creating and maintaining prefect, really enjoying the library so far
</issue>
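Putting the working pieces together, a minimal end-to-end sketch of the lookup pattern that does behave as expected today (assuming Prefect 0.10.x as reported, where `get_tasks()` returns a list and `state.result` is keyed by `Task` instances — exactly what the snippets above show):

```python
from prefect import Flow, task
import pandas as pd

@task(name="load data")
def _load_data() -> pd.DataFrame:
    return pd.DataFrame({"a": [1, 2, 3]})

with Flow("preprocess-VO") as flow:
    _load_data()

state = flow.run()
task_ref = flow.get_tasks(name="load data")[0]  # get_tasks() returns a list
task_state = state.result[task_ref]             # keyed by the Task instance itself
df = task_state.result                          # the task's return value
```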
<code>
[start of src/prefect/engine/results/azure_result.py]
1 import os
2 from typing import TYPE_CHECKING, Any
3
4 from prefect.client import Secret
5 from prefect.engine.result import Result
6
7 if TYPE_CHECKING:
8 import azure.storage.blob
9
10
11 class AzureResult(Result):
12 """
13 Result for writing to and reading from an Azure Blob storage.
14
15 Note that your flow's runtime environment must be able to authenticate with
16 Azure; there are currently two supported options: provide a connection string
17 either at initialization or at runtime through an environment variable, or
18 set your Azure connection string as a Prefect Secret. Using an environment
19 variable is the recommended approach.
20
21 Args:
22 - container (str): the name of the container to write to / read from
23 - connection_string (str, optional): an Azure connection string for communicating with
24 Blob storage. If not provided the value set in the environment as `AZURE_STORAGE_CONNECTION_STRING`
25 will be used
26 - connection_string_secret (str, optional): the name of a Prefect Secret
27 which stores your Azure connection tring
28 """
29
30 def __init__(
31 self,
32 container: str,
33 connection_string: str = None,
34 connection_string_secret: str = None,
35 **kwargs: Any
36 ) -> None:
37 self.container = container
38 self.connection_string = connection_string or os.getenv(
39 "AZURE_STORAGE_CONNECTION_STRING"
40 )
41 self.connection_string_secret = connection_string_secret
42 super().__init__(**kwargs)
43
44 def initialize_service(self) -> None:
45 """
46 Initialize a Blob service.
47 """
48 import azure.storage.blob
49
50 connection_string = self.connection_string
51 if not connection_string and self.connection_string_secret:
52 connection_string = Secret(self.connection_string_secret).get()
53
54 self._service = azure.storage.blob.BlobServiceClient.from_connection_string(
55 conn_str=connection_string
56 )
57
58 @property
59 def service(self) -> "azure.storage.blob.BlobServiceClient":
60 if not hasattr(self, "_service"):
61 self.initialize_service()
62 return self._service
63
64 @service.setter
65 def service(self, val: Any) -> None:
66 self._service = val
67
68 def __getstate__(self) -> dict:
69 state = self.__dict__.copy()
70 if "_service" in state:
71 del state["_service"]
72 return state
73
74 def __setstate__(self, state: dict) -> None:
75 self.__dict__.update(state)
76
77 def write(self, value: Any, **kwargs: Any) -> Result:
78 """
79 Writes the result value to a blob storage in Azure.
80
81 Args:
82 - value (Any): the value to write; will then be stored as the `value` attribute
83 of the returned `Result` instance
84 - **kwargs (optional): if provided, will be used to format the location template
85 to determine the location to write to
86
87 Returns:
88 - Result: a new Result instance with the appropriately formatted location
89 """
90 new = self.format(**kwargs)
91 new.value = value
92
93 self.logger.debug("Starting to upload result to {}...".format(new.location))
94
95 ## prepare data
96 binary_data = new.serialize_to_bytes(new.value).decode()
97
98 # initialize client and upload
99 client = self.service.get_blob_client(
100 container=self.container, blob=new.location
101 )
102 client.upload_blob(binary_data)
103
104 self.logger.debug("Finished uploading result to {}.".format(new.location))
105
106 return new
107
108 def read(self, location: str) -> Result:
109 """
110 Reads a result from an Azure Blob container and returns a corresponding `Result` instance.
111
112 Args:
113 - location (str): the Azure blob location to read from
114
115 Returns:
116 - Result: the read result
117 """
118 new = self.copy()
119 new.location = location
120
121 try:
122 self.logger.debug("Starting to download result from {}...".format(location))
123
124 # initialize client and download
125 client = self.service.get_blob_client(
126 container=self.container, blob=location
127 )
128 content_string = client.download_blob()
129
130 try:
131 new.value = new.deserialize_from_bytes(content_string)
132 except EOFError:
133 new.value = None
134 self.logger.debug("Finished downloading result from {}.".format(location))
135 except Exception as exc:
136 self.logger.exception(
137 "Unexpected error while reading from result handler: {}".format(
138 repr(exc)
139 )
140 )
141 raise exc
142 return new
143
144 def exists(self, location: str, **kwargs: Any) -> bool:
145 """
146 Checks whether the target result exists.
147
148 Does not validate whether the result is `valid`, only that it is present.
149
150 Args:
151 - location (str): Location of the result in the specific result target.
152 Will check whether the provided location exists
153 - **kwargs (Any): string format arguments for `location`
154
155 Returns:
156 - bool: whether or not the target result exists.
157 """
158 from azure.core.exceptions import ResourceNotFoundError
159
160 # initialize client and download
161 client = self.service.get_blob_client(
162 container=self.container, blob=location.format(**kwargs)
163 )
164
165 # Catch exception because Azure python bindings do not yet have an exists method
166 # https://github.com/Azure/azure-sdk-for-python/issues/9507
167 try:
168 client.get_blob_properties()
169 return True
170 except ResourceNotFoundError:
171 return False
172
[end of src/prefect/engine/results/azure_result.py]
[start of src/prefect/engine/results/constant_result.py]
1 from typing import Any
2 from prefect.engine.result import Result
3
4
5 class ConstantResult(Result):
6 """
7 Hook for storing and retrieving constant Python objects. Only intended to be used
8 internally. The "backend" in this instance is the class instance itself.
9
10 Args:
11 - **kwargs (Any, optional): any additional `Result` initialization options
12 """
13
14 def __init__(self, **kwargs: Any) -> None:
15 super().__init__(**kwargs)
16
17 def read(self, location: str) -> Result:
18 """
19 Returns the underlying value regardless of the argument passed.
20
21 Args:
22 - location (str): an unused argument
23 """
24 return self
25
26 def write(self, value: Any, **kwargs: Any) -> Result:
27 """
28 Returns the repr of the underlying value, purely for convenience.
29
30 Args:
31 - value (Any): unused, for interface compatibility
32 - **kwargs (optional): unused, for interface compatibility
33
34 Raises:
35 ValueError: ConstantResults cannot be written to
36 """
37 raise ValueError("Cannot write values to `ConstantResult` types.")
38
39 def exists(self, location: str, **kwargs: Any) -> bool:
40 """
41 As all Python objects are valid constants, always returns `True`.
42
43 Args:
44 - location (str): for interface compatibility
45 - **kwargs (Any): string format arguments for `location`
46
47 Returns:
48 - bool: True, confirming the constant exists.
49 """
50 return True
51
[end of src/prefect/engine/results/constant_result.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/prefect/engine/results/azure_result.py b/src/prefect/engine/results/azure_result.py
--- a/src/prefect/engine/results/azure_result.py
+++ b/src/prefect/engine/results/azure_result.py
@@ -25,6 +25,7 @@
will be used
- connection_string_secret (str, optional): the name of a Prefect Secret
which stores your Azure connection tring
+ - **kwargs (Any, optional): any additional `Result` initialization options
"""
def __init__(
diff --git a/src/prefect/engine/results/constant_result.py b/src/prefect/engine/results/constant_result.py
--- a/src/prefect/engine/results/constant_result.py
+++ b/src/prefect/engine/results/constant_result.py
@@ -16,7 +16,7 @@
def read(self, location: str) -> Result:
"""
- Returns the underlying value regardless of the argument passed.
+ Will return the underlying value regardless of the argument passed.
Args:
- location (str): an unused argument
@@ -25,14 +25,14 @@
def write(self, value: Any, **kwargs: Any) -> Result:
"""
- Returns the repr of the underlying value, purely for convenience.
+ Will return the repr of the underlying value, purely for convenience.
Args:
- value (Any): unused, for interface compatibility
- **kwargs (optional): unused, for interface compatibility
Raises:
- ValueError: ConstantResults cannot be written to
+ - ValueError: ConstantResults cannot be written to
"""
raise ValueError("Cannot write values to `ConstantResult` types.")
| {"golden_diff": "diff --git a/src/prefect/engine/results/azure_result.py b/src/prefect/engine/results/azure_result.py\n--- a/src/prefect/engine/results/azure_result.py\n+++ b/src/prefect/engine/results/azure_result.py\n@@ -25,6 +25,7 @@\n will be used\n - connection_string_secret (str, optional): the name of a Prefect Secret\n which stores your Azure connection tring\n+ - **kwargs (Any, optional): any additional `Result` initialization options\n \"\"\"\n \n def __init__(\ndiff --git a/src/prefect/engine/results/constant_result.py b/src/prefect/engine/results/constant_result.py\n--- a/src/prefect/engine/results/constant_result.py\n+++ b/src/prefect/engine/results/constant_result.py\n@@ -16,7 +16,7 @@\n \n def read(self, location: str) -> Result:\n \"\"\"\n- Returns the underlying value regardless of the argument passed.\n+ Will return the underlying value regardless of the argument passed.\n \n Args:\n - location (str): an unused argument\n@@ -25,14 +25,14 @@\n \n def write(self, value: Any, **kwargs: Any) -> Result:\n \"\"\"\n- Returns the repr of the underlying value, purely for convenience.\n+ Will return the repr of the underlying value, purely for convenience.\n \n Args:\n - value (Any): unused, for interface compatibility\n - **kwargs (optional): unused, for interface compatibility\n \n Raises:\n- ValueError: ConstantResults cannot be written to\n+ - ValueError: ConstantResults cannot be written to\n \"\"\"\n raise ValueError(\"Cannot write values to `ConstantResult` types.\")\n", "issue": "Behaviour differs from documentation: state.result & flow.get_tasks \nPrefect Version: 0.10.4\r\n\r\n## Current behavior\r\nMinor issue - The following behaviour doesn't match up to that described by the documentation on [result handling](https://docs.prefect.io/core/concepts/results.html#results):\r\n\r\n```python\r\n>>> task_ref = flow.get_tasks[0]\r\n>>> state = flow.run()\r\n>>> state.result[task_ref]._result # a Task State's Result contains the Task's return value\r\n<Result: 1>\r\n```\r\n\r\nInstead prefect requires that the task_ref be created via a name or tag call (`flow.get_tasks(name='<name>')` instead of `flow.get_tasks[0]`) and be extracted from its list (`state.result[task_ref[0]]` instead of state.result[task_ref]):\r\n\r\n```python\r\n>>> task_ref = flow.get_tasks(name='<name>')\r\n>>> state = flow.run()\r\n>>> state.result[task_ref[0]]._result # a Task State's Result contains the Task's return value\r\n<Result: 1>\r\n```\r\n\r\nand also the documentation on [Local Debugging](https://docs.prefect.io/core/advanced_tutorials/local-debugging.html#local-debugging):\r\n\r\n```python\r\nfrom prefect import Flow, task\r\n\r\n@task\r\ndef gotcha():\r\n tup = ('a', ['b'])\r\n try:\r\n tup[1] += ['c']\r\n except TypeError:\r\n assert len(tup[1]) == 1\r\n\r\nflow = Flow(name=\"tuples\", tasks=[gotcha])\r\n\r\nstate = flow.run()\r\nstate.result # {<Task: gotcha>: Failed(\"Unexpected error: AssertionError()\")}\r\n\r\nfailed_state = state.result[gotcha]\r\nraise failed_state.result\r\n```\r\n\r\n## Examples\r\n\r\n### `flow.get_tasks`:\r\n\r\n```python\r\nfrom prefect import Flow, task\r\nimport pandas as pd\r\n\r\n@task(name='load data')\r\ndef _load_data() -> pd.DataFrame:\r\n return pd.DataFrame({'a': [1, 2, 3]})\r\n\r\n\r\nwith Flow('preprocess-VO') as flow:\r\n _load_data()\r\n\r\nstate = flow.run()\r\ntask_ref = flow.get_tasks[0]\r\n```\r\n\r\nReturns:\r\n```python\r\n>>> TypeError: 'method' object is not subscriptable\r\n```\r\n\r\n### `state.result[task_ref]`\r\n\r\n```python\r\nfrom 
prefect import Flow, task\r\nimport pandas as pd\r\n\r\n@task(name='load data')\r\ndef _load_data() -> pd.DataFrame:\r\n return pd.DataFrame({'a': [1, 2, 3]})\r\n\r\n\r\nwith Flow('preprocess-VO') as flow:\r\n _load_data()\r\n\r\nstate = flow.run()\r\ntask_ref = flow.get_tasks(name='load data')\r\nstate.result[task_ref]\r\n```\r\n\r\nReturns:\r\n```python\r\n>>> TypeError: unhashable type: 'list'\r\n```\r\n\r\n### `state.result[<function name>]`\r\n\r\n```python\r\nfrom prefect import Flow, task\r\nimport pandas as pd\r\n\r\n@task(name='load data')\r\ndef _load_data() -> pd.DataFrame:\r\n return pd.DataFrame({'a': [1, 2, 3]})\r\n\r\n\r\nwith Flow('preprocess-VO') as flow:\r\n _load_data()\r\n\r\nstate = flow.run()\r\ntask_ref = flow.get_tasks(name='load data')\r\nstate.result[_load_data]\r\n```\r\n\r\n```python\r\nKeyError: <Task: load data>\r\n```\r\n\r\n## Proposed behavior\r\nIt may be a bit more readable to adapt the functionality here so that Prefect behaves as in the documentation so:\r\n- `state.result[task_ref]` can be used instead of `state.result[task_ref[0]]` \r\n- `state.result[<func name>]` works\r\n- `flow.get_tasks[0]` returns a task reference\r\n\r\nAdditionally, it would be great if function names could be passed to either `state.result[<func name>]` (EDIT: I see now that this was functionality did exist) or `flow.get_tasks(<func name>)` as an alternative to adding tags and/or names to each task function to avoid having to double name all functions.\r\n\r\n---\r\n\r\nThanks for creating and maintaining prefect, really enjoying the library so far \r\n\n", "before_files": [{"content": "import os\nfrom typing import TYPE_CHECKING, Any\n\nfrom prefect.client import Secret\nfrom prefect.engine.result import Result\n\nif TYPE_CHECKING:\n import azure.storage.blob\n\n\nclass AzureResult(Result):\n \"\"\"\n Result for writing to and reading from an Azure Blob storage.\n\n Note that your flow's runtime environment must be able to authenticate with\n Azure; there are currently two supported options: provide a connection string\n either at initialization or at runtime through an environment variable, or\n set your Azure connection string as a Prefect Secret. Using an environment\n variable is the recommended approach.\n\n Args:\n - container (str): the name of the container to write to / read from\n - connection_string (str, optional): an Azure connection string for communicating with\n Blob storage. 
If not provided the value set in the environment as `AZURE_STORAGE_CONNECTION_STRING`\n will be used\n - connection_string_secret (str, optional): the name of a Prefect Secret\n which stores your Azure connection tring\n \"\"\"\n\n def __init__(\n self,\n container: str,\n connection_string: str = None,\n connection_string_secret: str = None,\n **kwargs: Any\n ) -> None:\n self.container = container\n self.connection_string = connection_string or os.getenv(\n \"AZURE_STORAGE_CONNECTION_STRING\"\n )\n self.connection_string_secret = connection_string_secret\n super().__init__(**kwargs)\n\n def initialize_service(self) -> None:\n \"\"\"\n Initialize a Blob service.\n \"\"\"\n import azure.storage.blob\n\n connection_string = self.connection_string\n if not connection_string and self.connection_string_secret:\n connection_string = Secret(self.connection_string_secret).get()\n\n self._service = azure.storage.blob.BlobServiceClient.from_connection_string(\n conn_str=connection_string\n )\n\n @property\n def service(self) -> \"azure.storage.blob.BlobServiceClient\":\n if not hasattr(self, \"_service\"):\n self.initialize_service()\n return self._service\n\n @service.setter\n def service(self, val: Any) -> None:\n self._service = val\n\n def __getstate__(self) -> dict:\n state = self.__dict__.copy()\n if \"_service\" in state:\n del state[\"_service\"]\n return state\n\n def __setstate__(self, state: dict) -> None:\n self.__dict__.update(state)\n\n def write(self, value: Any, **kwargs: Any) -> Result:\n \"\"\"\n Writes the result value to a blob storage in Azure.\n\n Args:\n - value (Any): the value to write; will then be stored as the `value` attribute\n of the returned `Result` instance\n - **kwargs (optional): if provided, will be used to format the location template\n to determine the location to write to\n\n Returns:\n - Result: a new Result instance with the appropriately formatted location\n \"\"\"\n new = self.format(**kwargs)\n new.value = value\n\n self.logger.debug(\"Starting to upload result to {}...\".format(new.location))\n\n ## prepare data\n binary_data = new.serialize_to_bytes(new.value).decode()\n\n # initialize client and upload\n client = self.service.get_blob_client(\n container=self.container, blob=new.location\n )\n client.upload_blob(binary_data)\n\n self.logger.debug(\"Finished uploading result to {}.\".format(new.location))\n\n return new\n\n def read(self, location: str) -> Result:\n \"\"\"\n Reads a result from an Azure Blob container and returns a corresponding `Result` instance.\n\n Args:\n - location (str): the Azure blob location to read from\n\n Returns:\n - Result: the read result\n \"\"\"\n new = self.copy()\n new.location = location\n\n try:\n self.logger.debug(\"Starting to download result from {}...\".format(location))\n\n # initialize client and download\n client = self.service.get_blob_client(\n container=self.container, blob=location\n )\n content_string = client.download_blob()\n\n try:\n new.value = new.deserialize_from_bytes(content_string)\n except EOFError:\n new.value = None\n self.logger.debug(\"Finished downloading result from {}.\".format(location))\n except Exception as exc:\n self.logger.exception(\n \"Unexpected error while reading from result handler: {}\".format(\n repr(exc)\n )\n )\n raise exc\n return new\n\n def exists(self, location: str, **kwargs: Any) -> bool:\n \"\"\"\n Checks whether the target result exists.\n\n Does not validate whether the result is `valid`, only that it is present.\n\n Args:\n - location (str): Location of the 
result in the specific result target.\n Will check whether the provided location exists\n - **kwargs (Any): string format arguments for `location`\n\n Returns:\n - bool: whether or not the target result exists.\n \"\"\"\n from azure.core.exceptions import ResourceNotFoundError\n\n # initialize client and download\n client = self.service.get_blob_client(\n container=self.container, blob=location.format(**kwargs)\n )\n\n # Catch exception because Azure python bindings do not yet have an exists method\n # https://github.com/Azure/azure-sdk-for-python/issues/9507\n try:\n client.get_blob_properties()\n return True\n except ResourceNotFoundError:\n return False\n", "path": "src/prefect/engine/results/azure_result.py"}, {"content": "from typing import Any\nfrom prefect.engine.result import Result\n\n\nclass ConstantResult(Result):\n \"\"\"\n Hook for storing and retrieving constant Python objects. Only intended to be used\n internally. The \"backend\" in this instance is the class instance itself.\n\n Args:\n - **kwargs (Any, optional): any additional `Result` initialization options\n \"\"\"\n\n def __init__(self, **kwargs: Any) -> None:\n super().__init__(**kwargs)\n\n def read(self, location: str) -> Result:\n \"\"\"\n Returns the underlying value regardless of the argument passed.\n\n Args:\n - location (str): an unused argument\n \"\"\"\n return self\n\n def write(self, value: Any, **kwargs: Any) -> Result:\n \"\"\"\n Returns the repr of the underlying value, purely for convenience.\n\n Args:\n - value (Any): unused, for interface compatibility\n - **kwargs (optional): unused, for interface compatibility\n\n Raises:\n ValueError: ConstantResults cannot be written to\n \"\"\"\n raise ValueError(\"Cannot write values to `ConstantResult` types.\")\n\n def exists(self, location: str, **kwargs: Any) -> bool:\n \"\"\"\n As all Python objects are valid constants, always returns `True`.\n\n Args:\n - location (str): for interface compatibility\n - **kwargs (Any): string format arguments for `location`\n\n Returns:\n - bool: True, confirming the constant exists.\n \"\"\"\n return True\n", "path": "src/prefect/engine/results/constant_result.py"}]} | 3,423 | 374 |
gh_patches_debug_6255 | rasdani/github-patches | git_diff | qtile__qtile-180 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
using Qtile.cmd_{info, debug} doesn't switch logging level
It seems that critical, warning, and error do all work, though.
</issue>
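A plausible explanation, sketched with the standard library only (logger name and file path here are made up for the demo): a record must clear both the logger's level and each handler's level, and `init_log()` below pins the file handler at WARNING, so lowering the logger level has no visible effect on the log file — which is why warning, error and critical still "work".

```python
import logging
import sys

log = logging.getLogger("qtile-demo")                       # demo name, not qtile's
file_handler = logging.FileHandler("/tmp/qtile-demo.log")   # demo path
file_handler.setLevel(logging.WARNING)                      # what init_log() does today
log.addHandler(file_handler)
log.addHandler(logging.StreamHandler(sys.stderr))

log.setLevel(logging.DEBUG)                 # the level cmd_debug would ask for
log.debug("dropped by the file handler")    # only reaches stderr
log.warning("reaches both handlers")        # passes both thresholds
```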
<code>
[start of libqtile/log_utils.py]
1 import logging
2 import os
3 import sys
4 from logging import getLogger, StreamHandler
5
6
7 class ColorFormatter(logging.Formatter):
8 """Logging formatter adding console colors to the output.
9 """
10 black, red, green, yellow, blue, magenta, cyan, white = range(8)
11 colors = {
12 'WARNING': yellow,
13 'INFO': green,
14 'DEBUG': blue,
15 'CRITICAL': yellow,
16 'ERROR': red,
17 'RED': red,
18 'GREEN': green,
19 'YELLOW': yellow,
20 'BLUE': blue,
21 'MAGENTA': magenta,
22 'CYAN': cyan,
23 'WHITE': white}
24 reset_seq = '\033[0m'
25 color_seq = '\033[%dm'
26 bold_seq = '\033[1m'
27
28 def format(self, record):
29 """Format the record with colors."""
30 color = self.color_seq % (30 + self.colors[record.levelname])
31 message = logging.Formatter.format(self, record)
32 message = message.replace('$RESET', self.reset_seq)\
33 .replace('$BOLD', self.bold_seq)\
34 .replace('$COLOR', color)
35 for color, value in self.colors.items():
36 message = message.replace(
37 '$' + color, self.color_seq % (value + 30))\
38 .replace('$BG' + color, self.color_seq % (value + 40))\
39 .replace('$BG-' + color, self.color_seq % (value + 40))
40 return message + self.reset_seq
41
42
43 def init_log(log_level=logging.WARNING, logger='qtile'):
44 handler = logging.FileHandler(
45 os.path.expanduser('~/.%s.log' % logger))
46 handler.setLevel(logging.WARNING)
47 handler.setFormatter(
48 logging.Formatter(
49 "%(asctime)s %(levelname)s %(funcName)s:%(lineno)d %(message)s"))
50 log = getLogger(logger)
51 log.setLevel(log_level)
52 log.addHandler(handler)
53 log.warning('Starting %s' % logger.title())
54 handler = StreamHandler(sys.stderr)
55 handler.setFormatter(
56 ColorFormatter(
57 '$RESET$COLOR%(asctime)s $BOLD$COLOR%(name)s'
58 ' %(funcName)s:%(lineno)d $RESET %(message)s'))
59 log.addHandler(handler)
60 return log
61
[end of libqtile/log_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libqtile/log_utils.py b/libqtile/log_utils.py
--- a/libqtile/log_utils.py
+++ b/libqtile/log_utils.py
@@ -43,7 +43,6 @@
def init_log(log_level=logging.WARNING, logger='qtile'):
handler = logging.FileHandler(
os.path.expanduser('~/.%s.log' % logger))
- handler.setLevel(logging.WARNING)
handler.setFormatter(
logging.Formatter(
"%(asctime)s %(levelname)s %(funcName)s:%(lineno)d %(message)s"))
| {"golden_diff": "diff --git a/libqtile/log_utils.py b/libqtile/log_utils.py\n--- a/libqtile/log_utils.py\n+++ b/libqtile/log_utils.py\n@@ -43,7 +43,6 @@\n def init_log(log_level=logging.WARNING, logger='qtile'):\n handler = logging.FileHandler(\n os.path.expanduser('~/.%s.log' % logger))\n- handler.setLevel(logging.WARNING)\n handler.setFormatter(\n logging.Formatter(\n \"%(asctime)s %(levelname)s %(funcName)s:%(lineno)d %(message)s\"))\n", "issue": "using Qtile.cmd_{info, debug} doesn't switch logging level\nIt seems that critical, warning, and error do all work, though.\n\n", "before_files": [{"content": "import logging\nimport os\nimport sys\nfrom logging import getLogger, StreamHandler\n\n\nclass ColorFormatter(logging.Formatter):\n \"\"\"Logging formatter adding console colors to the output.\n \"\"\"\n black, red, green, yellow, blue, magenta, cyan, white = range(8)\n colors = {\n 'WARNING': yellow,\n 'INFO': green,\n 'DEBUG': blue,\n 'CRITICAL': yellow,\n 'ERROR': red,\n 'RED': red,\n 'GREEN': green,\n 'YELLOW': yellow,\n 'BLUE': blue,\n 'MAGENTA': magenta,\n 'CYAN': cyan,\n 'WHITE': white}\n reset_seq = '\\033[0m'\n color_seq = '\\033[%dm'\n bold_seq = '\\033[1m'\n\n def format(self, record):\n \"\"\"Format the record with colors.\"\"\"\n color = self.color_seq % (30 + self.colors[record.levelname])\n message = logging.Formatter.format(self, record)\n message = message.replace('$RESET', self.reset_seq)\\\n .replace('$BOLD', self.bold_seq)\\\n .replace('$COLOR', color)\n for color, value in self.colors.items():\n message = message.replace(\n '$' + color, self.color_seq % (value + 30))\\\n .replace('$BG' + color, self.color_seq % (value + 40))\\\n .replace('$BG-' + color, self.color_seq % (value + 40))\n return message + self.reset_seq\n\n\ndef init_log(log_level=logging.WARNING, logger='qtile'):\n handler = logging.FileHandler(\n os.path.expanduser('~/.%s.log' % logger))\n handler.setLevel(logging.WARNING)\n handler.setFormatter(\n logging.Formatter(\n \"%(asctime)s %(levelname)s %(funcName)s:%(lineno)d %(message)s\"))\n log = getLogger(logger)\n log.setLevel(log_level)\n log.addHandler(handler)\n log.warning('Starting %s' % logger.title())\n handler = StreamHandler(sys.stderr)\n handler.setFormatter(\n ColorFormatter(\n '$RESET$COLOR%(asctime)s $BOLD$COLOR%(name)s'\n ' %(funcName)s:%(lineno)d $RESET %(message)s'))\n log.addHandler(handler)\n return log\n", "path": "libqtile/log_utils.py"}]} | 1,173 | 118 |
gh_patches_debug_38317 | rasdani/github-patches | git_diff | lutris__lutris-2235 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Lutris removes "DXVK" dlls even when they're not from DXVK
**Describe the bug**
It is currently not possible to use Gallium Nine when DXVK is disabled in the settings (which is what I thought would be necessary to run Gallium Nine). When I enable Gallium Nine and then run a game, I get these messages in the console:
```
2019-06-17 10:21:30,859: Removing DXVK dll /home/bjoern/Games/starcraft-ii/drive_c/windows/system32/d3d9
2019-06-17 10:21:30,859: Removing DXVK dll /home/bjoern/Games/starcraft-ii/drive_c/windows/syswow64/d3d9
```
Afterwards the game complains about a missing d3d9.dll
Enabling DXVK and setting the version to manual makes it work.
**Expected behavior**
Game runs with Gallium Nine.
**Current behavior**
Game complains about a missing d3d9.dll
**Steps to reproduce**
* Install Gallium Nine via winetricks.
* Enable Gallium Nine by running drive_c/windows/syswow64/ninewinecfg.exe
* Disable DXVK.
* Run game.
**Workarounds**
Any of these should work:
* Set DXVK-version to manual and enable DXVK.
* Run the game manually from the console and not via lutris.
* It would probably work to run the game exe manually via "Run EXE inside wine prefix".
</issue>
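A plausible reading of what goes wrong, sketched against `is_dxvk_dll()` from the listing below (simplified, not a verbatim excerpt): the check is purely size-based, so the d3d9.dll that winetricks installs for Gallium Nine also qualifies and gets deleted when the DXVK option is switched off.

```python
import os

def is_dxvk_dll(dll_path):
    """Simplified from lutris/util/wine/dxvk.py: file size is the only criterion."""
    try:
        dll_size = os.stat(dll_path).st_size
    except FileNotFoundError:
        dll_size = 0
    return dll_size > 1024 * 256   # any d3d9.dll over 256 KiB counts as "DXVK"

# disable() then walks system32/syswow64 and removes every dll for which this
# returns True, regardless of where the dll actually came from.
```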
<code>
[start of lutris/util/wine/dxvk.py]
1 """DXVK helper module"""
2 import os
3 import json
4 import time
5 import shutil
6 import urllib.request
7
8 from lutris.settings import RUNTIME_DIR
9 from lutris.util.log import logger
10 from lutris.util.extract import extract_archive
11 from lutris.util.downloader import Downloader
12 from lutris.util import system
13
14 CACHE_MAX_AGE = 86400 # Re-download DXVK versions every day
15
16
17 @system.run_once
18 def init_dxvk_versions():
19 def get_dxvk_versions(base_name, tags_url):
20 """Get DXVK versions from GitHub"""
21 logger.info("Updating "+base_name.upper()+" versions")
22 dxvk_path = os.path.join(RUNTIME_DIR, base_name)
23 if not os.path.isdir(dxvk_path):
24 os.mkdir(dxvk_path)
25 versions_path = os.path.join(dxvk_path, base_name+"_versions.json")
26
27 urllib.request.urlretrieve(tags_url, versions_path)
28
29 with open(versions_path, "r") as dxvk_tags:
30 dxvk_json = json.load(dxvk_tags)
31 dxvk_versions = list()
32 for x in dxvk_json:
33 version_name = x["name"].replace("v", "")
34 if version_name.startswith('m'): # ignore master snapshots of d9vk
35 continue
36 dxvk_versions.append(version_name)
37 return dxvk_versions
38
39 def init_versions(manager):
40 try:
41 manager.DXVK_VERSIONS \
42 = get_dxvk_versions(manager.base_name, manager.DXVK_TAGS_URL)
43 except Exception as ex: # pylint: disable= broad-except
44 logger.error(ex)
45 manager.DXVK_LATEST, manager.DXVK_PAST_RELEASES = manager.DXVK_VERSIONS[0], manager.DXVK_VERSIONS[1:9]
46
47 init_versions(DXVKManager)
48 init_versions(D9VKManager)
49
50
51 class UnavailableDXVKVersion(RuntimeError):
52 """Exception raised when a version of DXVK is not found"""
53
54
55 class DXVKManager:
56 """Utility class to install DXVK dlls to a Wine prefix"""
57
58 DXVK_TAGS_URL = "https://api.github.com/repos/doitsujin/dxvk/tags"
59 DXVK_VERSIONS = [
60 "0.94",
61 ]
62 DXVK_LATEST, DXVK_PAST_RELEASES = DXVK_VERSIONS[0], DXVK_VERSIONS[1:9]
63
64 base_url = "https://github.com/doitsujin/dxvk/releases/download/v{}/dxvk-{}.tar.gz"
65 base_name = "dxvk"
66 base_dir = os.path.join(RUNTIME_DIR, base_name)
67 dxvk_dlls = ("dxgi", "d3d11", "d3d10core", "d3d10_1", "d3d10", "d3d9")
68 latest_version = DXVK_LATEST
69
70 def __init__(self, prefix, arch="win64", version=None):
71 self.prefix = prefix
72 if not os.path.isdir(self.base_dir):
73 os.makedirs(self.base_dir)
74 self._version = version
75 self.wine_arch = arch
76
77 @property
78 def version(self):
79 """Return version of DXVK (latest known version if not provided)"""
80 if self._version:
81 return self._version
82 return self.latest_version
83
84 @property
85 def dxvk_path(self):
86 """Return path to DXVK local cache"""
87 return os.path.join(self.base_dir, self.version)
88
89 @staticmethod
90 def is_dxvk_dll(dll_path):
91 """Check if a given DLL path is provided by DXVK
92
93 Very basic check to see if a dll exists and is over 256K. If this is the
94 case, then consider the DLL to be from DXVK
95 """
96 if system.path_exists(dll_path, check_symlinks=True):
97 dll_stats = os.stat(dll_path)
98 dll_size = dll_stats.st_size
99 else:
100 dll_size = 0
101 return dll_size > 1024 * 256
102
103 def is_available(self):
104 """Return whether DXVK is cached locally"""
105 return system.path_exists(self.dxvk_path)
106
107 def dxvk_dll_exists(self, dll_name):
108 """Check if the dll exists as a DXVK variant"""
109 return system.path_exists(os.path.join(self.dxvk_path, "x64", dll_name + ".dll")) \
110 and system.path_exists(os.path.join(self.dxvk_path, "x32", dll_name + ".dll"))
111
112 def download(self):
113 """Download DXVK to the local cache"""
114 dxvk_url = self.base_url.format(self.version, self.version)
115 if self.is_available():
116 logger.warning(self.base_name.upper()+" already available at %s", self.dxvk_path)
117
118 dxvk_archive_path = os.path.join(self.base_dir, os.path.basename(dxvk_url))
119
120 downloader = Downloader(dxvk_url, dxvk_archive_path)
121 downloader.start()
122 while downloader.check_progress() < 1 and downloader.state != downloader.ERROR:
123 time.sleep(0.3)
124 if not system.path_exists(dxvk_archive_path):
125 raise UnavailableDXVKVersion("Failed to download "+self.base_name.upper()+" %s" % self.version)
126 if os.stat(dxvk_archive_path).st_size:
127 extract_archive(dxvk_archive_path, self.dxvk_path, merge_single=True)
128 os.remove(dxvk_archive_path)
129 else:
130 os.remove(dxvk_archive_path)
131 raise UnavailableDXVKVersion("Failed to download "+self.base_name.upper()+" %s" % self.version)
132
133 def enable_dxvk_dll(self, system_dir, dxvk_arch, dll):
134 """Copies DXVK dlls to the appropriate destination"""
135 # Copying DXVK's version
136 dxvk_dll_path = os.path.join(self.dxvk_path, dxvk_arch, "%s.dll" % dll)
137 if system.path_exists(dxvk_dll_path):
138 wine_dll_path = os.path.join(system_dir, "%s.dll" % dll)
139 logger.info("Replacing %s/%s with "+self.base_name.upper()+" version", system_dir, dll)
140 if not self.is_dxvk_dll(wine_dll_path):
141 # Backing up original version (may not be needed)
142 if system.path_exists(wine_dll_path):
143 shutil.move(wine_dll_path, wine_dll_path + ".orig")
144 if system.path_exists(wine_dll_path):
145 os.remove(wine_dll_path)
146 os.symlink(dxvk_dll_path, wine_dll_path)
147 else:
148 self.disable_dxvk_dll(system_dir, dxvk_arch, dll)
149
150 def disable_dxvk_dll(self, system_dir, dxvk_arch, dll):
151 """Remove DXVK DLL from Wine prefix"""
152 wine_dll_path = os.path.join(system_dir, "%s.dll" % dll)
153 if self.is_dxvk_dll(wine_dll_path):
154 logger.info("Removing "+self.base_name.upper()+" dll %s/%s", system_dir, dll)
155 os.remove(wine_dll_path)
156 # Restoring original version (may not be needed)
157 if system.path_exists(wine_dll_path + ".orig"):
158 shutil.move(wine_dll_path + ".orig", wine_dll_path)
159
160 def _iter_dxvk_dlls(self):
161 windows_path = os.path.join(self.prefix, "drive_c/windows")
162 if self.wine_arch == "win64":
163 system_dirs = {
164 "x64": os.path.join(windows_path, "system32"),
165 "x32": os.path.join(windows_path, "syswow64"),
166 }
167 elif self.wine_arch == "win32":
168 system_dirs = {"x32": os.path.join(windows_path, "system32")}
169
170 for dxvk_arch, system_dir in system_dirs.items():
171 for dll in self.dxvk_dlls:
172 yield system_dir, dxvk_arch, dll
173
174 def enable(self):
175 """Enable DXVK for the current prefix"""
176 if not system.path_exists(self.dxvk_path):
177 logger.error(self.base_name.upper()+" %s is not available locally", self.version)
178 return
179 for system_dir, dxvk_arch, dll in self._iter_dxvk_dlls():
180 self.enable_dxvk_dll(system_dir, dxvk_arch, dll)
181
182 def disable(self):
183 """Disable DXVK for the current prefix"""
184 for system_dir, dxvk_arch, dll in self._iter_dxvk_dlls():
185 self.disable_dxvk_dll(system_dir, dxvk_arch, dll)
186
187
188 class D9VKManager(DXVKManager):
189 DXVK_TAGS_URL = "https://api.github.com/repos/Joshua-Ashton/d9vk/tags"
190 DXVK_VERSIONS = [
191 "0.10",
192 ]
193 DXVK_LATEST, DXVK_PAST_RELEASES = DXVK_VERSIONS[0], DXVK_VERSIONS[1:9]
194
195 base_url = "https://github.com/Joshua-Ashton/d9vk/releases/download/{}/d9vk-{}.tar.gz"
196 base_name = "d9vk"
197 base_dir = os.path.join(RUNTIME_DIR, base_name)
198 dxvk_dlls = ("d3d9",)
199 latest_version = DXVK_LATEST
200
[end of lutris/util/wine/dxvk.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lutris/util/wine/dxvk.py b/lutris/util/wine/dxvk.py
--- a/lutris/util/wine/dxvk.py
+++ b/lutris/util/wine/dxvk.py
@@ -23,25 +23,31 @@
if not os.path.isdir(dxvk_path):
os.mkdir(dxvk_path)
versions_path = os.path.join(dxvk_path, base_name+"_versions.json")
-
- urllib.request.urlretrieve(tags_url, versions_path)
-
+ internet_available = True
+ try:
+ urllib.request.urlretrieve(tags_url, versions_path)
+ except Exception as ex: # pylint: disable= broad-except
+ logger.error(ex)
+ internet_available = False
+ dxvk_versions = list()
with open(versions_path, "r") as dxvk_tags:
dxvk_json = json.load(dxvk_tags)
- dxvk_versions = list()
for x in dxvk_json:
version_name = x["name"].replace("v", "")
if version_name.startswith('m'): # ignore master snapshots of d9vk
continue
- dxvk_versions.append(version_name)
- return dxvk_versions
+ if internet_available or version_name in os.listdir(dxvk_path):
+ dxvk_versions.append(version_name)
+ if not dxvk_versions: # We don't want to set manager.DXVK_VERSIONS, if the list is empty
+ raise IndexError
+ return sorted(dxvk_versions, reverse=True)
def init_versions(manager):
try:
manager.DXVK_VERSIONS \
= get_dxvk_versions(manager.base_name, manager.DXVK_TAGS_URL)
- except Exception as ex: # pylint: disable= broad-except
- logger.error(ex)
+ except (IndexError, FileNotFoundError):
+ pass
manager.DXVK_LATEST, manager.DXVK_PAST_RELEASES = manager.DXVK_VERSIONS[0], manager.DXVK_VERSIONS[1:9]
init_versions(DXVKManager)
@@ -150,11 +156,9 @@
def disable_dxvk_dll(self, system_dir, dxvk_arch, dll):
"""Remove DXVK DLL from Wine prefix"""
wine_dll_path = os.path.join(system_dir, "%s.dll" % dll)
- if self.is_dxvk_dll(wine_dll_path):
+ if self.is_dxvk_dll(wine_dll_path) and system.path_exists(wine_dll_path + ".orig"):
logger.info("Removing "+self.base_name.upper()+" dll %s/%s", system_dir, dll)
os.remove(wine_dll_path)
- # Restoring original version (may not be needed)
- if system.path_exists(wine_dll_path + ".orig"):
shutil.move(wine_dll_path + ".orig", wine_dll_path)
def _iter_dxvk_dlls(self):
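Two consequences of the patch above are worth spelling out: a DLL is now removed only when a matching `.orig` backup exists to restore, so a Gallium Nine d3d9.dll that Lutris never replaced is left untouched when DXVK is disabled; and the version-list refresh tolerates a failed GitHub download by falling back to the DXVK/D9VK versions already cached in the runtime directory.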
| {"golden_diff": "diff --git a/lutris/util/wine/dxvk.py b/lutris/util/wine/dxvk.py\n--- a/lutris/util/wine/dxvk.py\n+++ b/lutris/util/wine/dxvk.py\n@@ -23,25 +23,31 @@\n if not os.path.isdir(dxvk_path):\n os.mkdir(dxvk_path)\n versions_path = os.path.join(dxvk_path, base_name+\"_versions.json\")\n-\n- urllib.request.urlretrieve(tags_url, versions_path)\n-\n+ internet_available = True\n+ try:\n+ urllib.request.urlretrieve(tags_url, versions_path)\n+ except Exception as ex: # pylint: disable= broad-except\n+ logger.error(ex)\n+ internet_available = False\n+ dxvk_versions = list()\n with open(versions_path, \"r\") as dxvk_tags:\n dxvk_json = json.load(dxvk_tags)\n- dxvk_versions = list()\n for x in dxvk_json:\n version_name = x[\"name\"].replace(\"v\", \"\")\n if version_name.startswith('m'): # ignore master snapshots of d9vk\n continue\n- dxvk_versions.append(version_name)\n- return dxvk_versions\n+ if internet_available or version_name in os.listdir(dxvk_path):\n+ dxvk_versions.append(version_name)\n+ if not dxvk_versions: # We don't want to set manager.DXVK_VERSIONS, if the list is empty\n+ raise IndexError\n+ return sorted(dxvk_versions, reverse=True)\n \n def init_versions(manager):\n try:\n manager.DXVK_VERSIONS \\\n = get_dxvk_versions(manager.base_name, manager.DXVK_TAGS_URL)\n- except Exception as ex: # pylint: disable= broad-except\n- logger.error(ex)\n+ except (IndexError, FileNotFoundError):\n+ pass\n manager.DXVK_LATEST, manager.DXVK_PAST_RELEASES = manager.DXVK_VERSIONS[0], manager.DXVK_VERSIONS[1:9]\n \n init_versions(DXVKManager)\n@@ -150,11 +156,9 @@\n def disable_dxvk_dll(self, system_dir, dxvk_arch, dll):\n \"\"\"Remove DXVK DLL from Wine prefix\"\"\"\n wine_dll_path = os.path.join(system_dir, \"%s.dll\" % dll)\n- if self.is_dxvk_dll(wine_dll_path):\n+ if self.is_dxvk_dll(wine_dll_path) and system.path_exists(wine_dll_path + \".orig\"):\n logger.info(\"Removing \"+self.base_name.upper()+\" dll %s/%s\", system_dir, dll)\n os.remove(wine_dll_path)\n- # Restoring original version (may not be needed)\n- if system.path_exists(wine_dll_path + \".orig\"):\n shutil.move(wine_dll_path + \".orig\", wine_dll_path)\n \n def _iter_dxvk_dlls(self):\n", "issue": "Lutris removes \"DXVK\" dlls even when they're not from DXVK\n**Describe the bug**\r\n\r\nIt is currently not possible to use Gallium Nine, when DXVK is disabled in the settings (which would be what I thought would be necessary to run Gallium Nine). 
When I enable Gallium Nine and then run a game I get these messages in the console:\r\n\r\n```\r\n2019-06-17 10:21:30,859: Removing DXVK dll /home/bjoern/Games/starcraft-ii/drive_c/windows/system32/d3d9\r\n2019-06-17 10:21:30,859: Removing DXVK dll /home/bjoern/Games/starcraft-ii/drive_c/windows/syswow64/d3d9\r\n```\r\n\r\nAfterwards the game complains about a missiing d3d9.dll\r\nEnabling DXVK and setting the version to manual makes it work.\r\n\r\n**Expected behavior**\r\n\r\nGame runs with Gallium Nine.\r\n\r\n**Current behavior**\r\n\r\nGame complains about a missiing d3d9.dll\r\n\r\n**Steps to reproduce**\r\n\r\n* Install Gallium Nine via winetricks.\r\n* Enable Gallium Nine by running drive_c/windows/syswo64/ninewinecfg.exe\r\n* Disable DXVK.\r\n* Run game.\r\n\r\n**Workarounds**\r\n\r\nAny of these should work:\r\n\r\n* Set DXVK-version to manual and enable DXVK.\r\n* Run the game manually from the console and not via lutris.\r\n* It would probably work to run the game exe manually via \"Run EXE inside wine prefix\".\n", "before_files": [{"content": "\"\"\"DXVK helper module\"\"\"\nimport os\nimport json\nimport time\nimport shutil\nimport urllib.request\n\nfrom lutris.settings import RUNTIME_DIR\nfrom lutris.util.log import logger\nfrom lutris.util.extract import extract_archive\nfrom lutris.util.downloader import Downloader\nfrom lutris.util import system\n\nCACHE_MAX_AGE = 86400 # Re-download DXVK versions every day\n\n\[email protected]_once\ndef init_dxvk_versions():\n def get_dxvk_versions(base_name, tags_url):\n \"\"\"Get DXVK versions from GitHub\"\"\"\n logger.info(\"Updating \"+base_name.upper()+\" versions\")\n dxvk_path = os.path.join(RUNTIME_DIR, base_name)\n if not os.path.isdir(dxvk_path):\n os.mkdir(dxvk_path)\n versions_path = os.path.join(dxvk_path, base_name+\"_versions.json\")\n\n urllib.request.urlretrieve(tags_url, versions_path)\n\n with open(versions_path, \"r\") as dxvk_tags:\n dxvk_json = json.load(dxvk_tags)\n dxvk_versions = list()\n for x in dxvk_json:\n version_name = x[\"name\"].replace(\"v\", \"\")\n if version_name.startswith('m'): # ignore master snapshots of d9vk\n continue\n dxvk_versions.append(version_name)\n return dxvk_versions\n\n def init_versions(manager):\n try:\n manager.DXVK_VERSIONS \\\n = get_dxvk_versions(manager.base_name, manager.DXVK_TAGS_URL)\n except Exception as ex: # pylint: disable= broad-except\n logger.error(ex)\n manager.DXVK_LATEST, manager.DXVK_PAST_RELEASES = manager.DXVK_VERSIONS[0], manager.DXVK_VERSIONS[1:9]\n\n init_versions(DXVKManager)\n init_versions(D9VKManager)\n\n\nclass UnavailableDXVKVersion(RuntimeError):\n \"\"\"Exception raised when a version of DXVK is not found\"\"\"\n\n\nclass DXVKManager:\n \"\"\"Utility class to install DXVK dlls to a Wine prefix\"\"\"\n\n DXVK_TAGS_URL = \"https://api.github.com/repos/doitsujin/dxvk/tags\"\n DXVK_VERSIONS = [\n \"0.94\",\n ]\n DXVK_LATEST, DXVK_PAST_RELEASES = DXVK_VERSIONS[0], DXVK_VERSIONS[1:9]\n\n base_url = \"https://github.com/doitsujin/dxvk/releases/download/v{}/dxvk-{}.tar.gz\"\n base_name = \"dxvk\"\n base_dir = os.path.join(RUNTIME_DIR, base_name)\n dxvk_dlls = (\"dxgi\", \"d3d11\", \"d3d10core\", \"d3d10_1\", \"d3d10\", \"d3d9\")\n latest_version = DXVK_LATEST\n\n def __init__(self, prefix, arch=\"win64\", version=None):\n self.prefix = prefix\n if not os.path.isdir(self.base_dir):\n os.makedirs(self.base_dir)\n self._version = version\n self.wine_arch = arch\n\n @property\n def version(self):\n \"\"\"Return version of DXVK (latest known version if not 
provided)\"\"\"\n if self._version:\n return self._version\n return self.latest_version\n\n @property\n def dxvk_path(self):\n \"\"\"Return path to DXVK local cache\"\"\"\n return os.path.join(self.base_dir, self.version)\n\n @staticmethod\n def is_dxvk_dll(dll_path):\n \"\"\"Check if a given DLL path is provided by DXVK\n\n Very basic check to see if a dll exists and is over 256K. If this is the\n case, then consider the DLL to be from DXVK\n \"\"\"\n if system.path_exists(dll_path, check_symlinks=True):\n dll_stats = os.stat(dll_path)\n dll_size = dll_stats.st_size\n else:\n dll_size = 0\n return dll_size > 1024 * 256\n\n def is_available(self):\n \"\"\"Return whether DXVK is cached locally\"\"\"\n return system.path_exists(self.dxvk_path)\n\n def dxvk_dll_exists(self, dll_name):\n \"\"\"Check if the dll exists as a DXVK variant\"\"\"\n return system.path_exists(os.path.join(self.dxvk_path, \"x64\", dll_name + \".dll\")) \\\n and system.path_exists(os.path.join(self.dxvk_path, \"x32\", dll_name + \".dll\"))\n\n def download(self):\n \"\"\"Download DXVK to the local cache\"\"\"\n dxvk_url = self.base_url.format(self.version, self.version)\n if self.is_available():\n logger.warning(self.base_name.upper()+\" already available at %s\", self.dxvk_path)\n\n dxvk_archive_path = os.path.join(self.base_dir, os.path.basename(dxvk_url))\n\n downloader = Downloader(dxvk_url, dxvk_archive_path)\n downloader.start()\n while downloader.check_progress() < 1 and downloader.state != downloader.ERROR:\n time.sleep(0.3)\n if not system.path_exists(dxvk_archive_path):\n raise UnavailableDXVKVersion(\"Failed to download \"+self.base_name.upper()+\" %s\" % self.version)\n if os.stat(dxvk_archive_path).st_size:\n extract_archive(dxvk_archive_path, self.dxvk_path, merge_single=True)\n os.remove(dxvk_archive_path)\n else:\n os.remove(dxvk_archive_path)\n raise UnavailableDXVKVersion(\"Failed to download \"+self.base_name.upper()+\" %s\" % self.version)\n\n def enable_dxvk_dll(self, system_dir, dxvk_arch, dll):\n \"\"\"Copies DXVK dlls to the appropriate destination\"\"\"\n # Copying DXVK's version\n dxvk_dll_path = os.path.join(self.dxvk_path, dxvk_arch, \"%s.dll\" % dll)\n if system.path_exists(dxvk_dll_path):\n wine_dll_path = os.path.join(system_dir, \"%s.dll\" % dll)\n logger.info(\"Replacing %s/%s with \"+self.base_name.upper()+\" version\", system_dir, dll)\n if not self.is_dxvk_dll(wine_dll_path):\n # Backing up original version (may not be needed)\n if system.path_exists(wine_dll_path):\n shutil.move(wine_dll_path, wine_dll_path + \".orig\")\n if system.path_exists(wine_dll_path):\n os.remove(wine_dll_path)\n os.symlink(dxvk_dll_path, wine_dll_path)\n else:\n self.disable_dxvk_dll(system_dir, dxvk_arch, dll)\n\n def disable_dxvk_dll(self, system_dir, dxvk_arch, dll):\n \"\"\"Remove DXVK DLL from Wine prefix\"\"\"\n wine_dll_path = os.path.join(system_dir, \"%s.dll\" % dll)\n if self.is_dxvk_dll(wine_dll_path):\n logger.info(\"Removing \"+self.base_name.upper()+\" dll %s/%s\", system_dir, dll)\n os.remove(wine_dll_path)\n # Restoring original version (may not be needed)\n if system.path_exists(wine_dll_path + \".orig\"):\n shutil.move(wine_dll_path + \".orig\", wine_dll_path)\n\n def _iter_dxvk_dlls(self):\n windows_path = os.path.join(self.prefix, \"drive_c/windows\")\n if self.wine_arch == \"win64\":\n system_dirs = {\n \"x64\": os.path.join(windows_path, \"system32\"),\n \"x32\": os.path.join(windows_path, \"syswow64\"),\n }\n elif self.wine_arch == \"win32\":\n system_dirs = {\"x32\": 
os.path.join(windows_path, \"system32\")}\n\n for dxvk_arch, system_dir in system_dirs.items():\n for dll in self.dxvk_dlls:\n yield system_dir, dxvk_arch, dll\n\n def enable(self):\n \"\"\"Enable DXVK for the current prefix\"\"\"\n if not system.path_exists(self.dxvk_path):\n logger.error(self.base_name.upper()+\" %s is not available locally\", self.version)\n return\n for system_dir, dxvk_arch, dll in self._iter_dxvk_dlls():\n self.enable_dxvk_dll(system_dir, dxvk_arch, dll)\n\n def disable(self):\n \"\"\"Disable DXVK for the current prefix\"\"\"\n for system_dir, dxvk_arch, dll in self._iter_dxvk_dlls():\n self.disable_dxvk_dll(system_dir, dxvk_arch, dll)\n\n\nclass D9VKManager(DXVKManager):\n DXVK_TAGS_URL = \"https://api.github.com/repos/Joshua-Ashton/d9vk/tags\"\n DXVK_VERSIONS = [\n \"0.10\",\n ]\n DXVK_LATEST, DXVK_PAST_RELEASES = DXVK_VERSIONS[0], DXVK_VERSIONS[1:9]\n\n base_url = \"https://github.com/Joshua-Ashton/d9vk/releases/download/{}/d9vk-{}.tar.gz\"\n base_name = \"d9vk\"\n base_dir = os.path.join(RUNTIME_DIR, base_name)\n dxvk_dlls = (\"d3d9\",)\n latest_version = DXVK_LATEST\n", "path": "lutris/util/wine/dxvk.py"}]} | 3,379 | 634 |
gh_patches_debug_7605 | rasdani/github-patches | git_diff | vacanza__python-holidays-376 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pop_named() sometimes generates an exception
This simple code block generates an exception when I try to pop "Columbus Day" (last line):
```
from datetime import date
import holidays
us_holidays = holidays.CountryHoliday('US', prov=None, state='CA', years=2022)
print(us_holidays.get('2022-10-10').format("%s"))
us_holidays.pop_named("Columbus Day")
```
The exception is `dictionary changed size during iteration`, and it seems to be related to popping elements out of a dictionary while you're iterating through it: https://stackoverflow.com/questions/11941817/how-to-avoid-runtimeerror-dictionary-changed-size-during-iteration-error. The element to be popped isn't actually removed.
I'm far from a competent Python programmer, but iterating over `list(self)` seems to solve the problem for me - no exception, and the item is removed properly:
```
def get_named(self, name):
# find all dates matching provided name (accepting partial
# strings too, case insensitive), returning them in a list
return [key for key in list(self) if name.lower() in self[key].lower()]
```
</issue>
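The failure mode described above is general Python behavior: removing keys from a dict while iterating over that same dict raises `RuntimeError: dictionary changed size during iteration`. The sketch below is a self-contained illustration of the bug and of the proposed `list(self)` workaround; it does not use the `holidays` package, and the dictionary contents are placeholders chosen for illustration.

```python
# Minimal reproduction of the behavior described in the issue above.
# The dates and names here are made up; they stand in for the holiday dict.
holidays_by_date = {"2022-10-10": "Columbus Day", "2022-12-25": "Christmas Day"}


def pop_named_broken(table, name):
    # Iterating directly over the dict and popping inside the loop raises
    # "RuntimeError: dictionary changed size during iteration".
    for key in table:
        if name.lower() in table[key].lower():
            table.pop(key)


def pop_named_fixed(table, name):
    # Materialising the keys first makes the iteration independent of the
    # dict, so popping is safe. This is the same idea as iterating over
    # list(self) in the proposed get_named() change.
    for key in [k for k in list(table) if name.lower() in table[k].lower()]:
        table.pop(key)


pop_named_fixed(holidays_by_date, "columbus")
print(holidays_by_date)  # {'2022-12-25': 'Christmas Day'}
```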
<code>
[start of holidays/holiday_base.py]
1 # -*- coding: utf-8 -*-
2
3 # python-holidays
4 # ---------------
5 # A fast, efficient Python library for generating country, province and state
6 # specific sets of holidays on the fly. It aims to make determining whether a
7 # specific date is a holiday as fast and flexible as possible.
8 #
9 # Author: ryanss <[email protected]> (c) 2014-2017
10 # dr-prodigy <[email protected]> (c) 2017-2020
11 # Website: https://github.com/dr-prodigy/python-holidays
12 # License: MIT (see LICENSE file)
13
14 from datetime import timedelta, datetime, date
15
16 import six
17 from dateutil.parser import parse
18
19
20 class HolidayBase(dict):
21 PROVINCES = []
22
23 def __init__(self, years=[], expand=True, observed=True,
24 prov=None, state=None):
25 self.observed = observed
26 self.expand = expand
27 if isinstance(years, int):
28 years = [years, ]
29 self.years = set(years)
30 if not getattr(self, 'prov', False):
31 self.prov = prov
32 self.state = state
33 for year in list(self.years):
34 self._populate(year)
35
36 def __setattr__(self, key, value):
37 if key == 'observed' and len(self) > 0:
38 dict.__setattr__(self, key, value)
39 if value is True:
40 # Add (Observed) dates
41 years = list(self.years)
42 self.years = set()
43 self.clear()
44 for year in years:
45 self._populate(year)
46 else:
47 # Remove (Observed) dates
48 for k, v in list(self.items()):
49 if v.find("Observed") >= 0:
50 del self[k]
51 else:
52 return dict.__setattr__(self, key, value)
53
54 def __keytransform__(self, key):
55 if isinstance(key, datetime):
56 key = key.date()
57 elif isinstance(key, date):
58 key = key
59 elif isinstance(key, int) or isinstance(key, float):
60 key = datetime.utcfromtimestamp(key).date()
61 elif isinstance(key, six.string_types):
62 try:
63 key = parse(key).date()
64 except (ValueError, OverflowError):
65 raise ValueError("Cannot parse date from string '%s'" % key)
66 else:
67 raise TypeError("Cannot convert type '%s' to date." % type(key))
68
69 if self.expand and key.year not in self.years:
70 self.years.add(key.year)
71 self._populate(key.year)
72 return key
73
74 def __contains__(self, key):
75 return dict.__contains__(self, self.__keytransform__(key))
76
77 def __getitem__(self, key):
78 if isinstance(key, slice):
79 if not key.start or not key.stop:
80 raise ValueError("Both start and stop must be given.")
81
82 start = self.__keytransform__(key.start)
83 stop = self.__keytransform__(key.stop)
84
85 if key.step is None:
86 step = 1
87 elif isinstance(key.step, timedelta):
88 step = key.step.days
89 elif isinstance(key.step, int):
90 step = key.step
91 else:
92 raise TypeError(
93 "Cannot convert type '%s' to int." % type(key.step)
94 )
95
96 if step == 0:
97 raise ValueError('Step value must not be zero.')
98
99 date_diff = stop - start
100 if date_diff.days < 0 <= step or date_diff.days >= 0 > step:
101 step *= -1
102
103 days_in_range = []
104 for delta_days in range(0, date_diff.days, step):
105 day = start + timedelta(days=delta_days)
106 try:
107 dict.__getitem__(
108 self,
109 day
110 )
111 days_in_range.append(day)
112 except KeyError:
113 pass
114 return days_in_range
115 return dict.__getitem__(self, self.__keytransform__(key))
116
117 def __setitem__(self, key, value):
118 if key in self:
119 if self.get(key).find(value) < 0 \
120 and value.find(self.get(key)) < 0:
121 value = "%s, %s" % (value, self.get(key))
122 else:
123 value = self.get(key)
124 return dict.__setitem__(self, self.__keytransform__(key), value)
125
126 def update(self, *args):
127 args = list(args)
128 for arg in args:
129 if isinstance(arg, dict):
130 for key, value in list(arg.items()):
131 self[key] = value
132 elif isinstance(arg, list):
133 for item in arg:
134 self[item] = "Holiday"
135 else:
136 self[arg] = "Holiday"
137
138 def append(self, *args):
139 return self.update(*args)
140
141 def get(self, key, default=None):
142 return dict.get(self, self.__keytransform__(key), default)
143
144 def get_list(self, key):
145 return [h for h in self.get(key, "").split(", ") if h]
146
147 def get_named(self, name):
148 # find all dates matching provided name (accepting partial
149 # strings too, case insensitive), returning them in a list
150 return [key for key in self if name.lower() in self[key].lower()]
151
152 def pop(self, key, default=None):
153 if default is None:
154 return dict.pop(self, self.__keytransform__(key))
155 return dict.pop(self, self.__keytransform__(key), default)
156
157 def pop_named(self, name):
158 to_pop = self.get_named(name)
159 if not to_pop:
160 raise KeyError(name)
161 for key in to_pop:
162 self.pop(key)
163 return to_pop
164
165 def __eq__(self, other):
166 return dict.__eq__(self, other) and self.__dict__ == other.__dict__
167
168 def __ne__(self, other):
169 return dict.__ne__(self, other) or self.__dict__ != other.__dict__
170
171 def __add__(self, other):
172 if isinstance(other, int) and other == 0:
173 # Required to sum() list of holidays
174 # sum([h1, h2]) is equivalent to (0 + h1 + h2)
175 return self
176 elif not isinstance(other, HolidayBase):
177 raise TypeError()
178 HolidaySum = createHolidaySum(self, other)
179 country = (getattr(self, 'country', None) or
180 getattr(other, 'country', None))
181 if self.country and other.country and self.country != other.country:
182 c1 = self.country
183 if not isinstance(c1, list):
184 c1 = [c1]
185 c2 = other.country
186 if not isinstance(c2, list):
187 c2 = [c2]
188 country = c1 + c2
189 prov = getattr(self, 'prov', None) or getattr(other, 'prov', None)
190 if self.prov and other.prov and self.prov != other.prov:
191 p1 = self.prov if isinstance(self.prov, list) else [self.prov]
192 p2 = other.prov if isinstance(other.prov, list) else [other.prov]
193 prov = p1 + p2
194 return HolidaySum(years=(self.years | other.years),
195 expand=(self.expand or other.expand),
196 observed=(self.observed or other.observed),
197 country=country, prov=prov)
198
199 def __radd__(self, other):
200 return self.__add__(other)
201
202 def _populate(self, year):
203 pass
204
205
206 def createHolidaySum(h1, h2):
207 class HolidaySum(HolidayBase):
208
209 def __init__(self, country, **kwargs):
210 self.country = country
211 self.holidays = []
212 if getattr(h1, 'holidays', False):
213 for h in h1.holidays:
214 self.holidays.append(h)
215 else:
216 self.holidays.append(h1)
217 if getattr(h2, 'holidays', False):
218 for h in h2.holidays:
219 self.holidays.append(h)
220 else:
221 self.holidays.append(h2)
222 HolidayBase.__init__(self, **kwargs)
223
224 def _populate(self, year):
225 for h in self.holidays[::-1]:
226 h._populate(year)
227 self.update(h)
228
229 return HolidaySum
230
[end of holidays/holiday_base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/holidays/holiday_base.py b/holidays/holiday_base.py
--- a/holidays/holiday_base.py
+++ b/holidays/holiday_base.py
@@ -147,7 +147,8 @@
def get_named(self, name):
# find all dates matching provided name (accepting partial
# strings too, case insensitive), returning them in a list
- return [key for key in self if name.lower() in self[key].lower()]
+ matches = [key for key in self if name.lower() in self[key].lower()]
+ return matches
def pop(self, key, default=None):
if default is None:
| {"golden_diff": "diff --git a/holidays/holiday_base.py b/holidays/holiday_base.py\n--- a/holidays/holiday_base.py\n+++ b/holidays/holiday_base.py\n@@ -147,7 +147,8 @@\n def get_named(self, name):\n # find all dates matching provided name (accepting partial\n # strings too, case insensitive), returning them in a list\n- return [key for key in self if name.lower() in self[key].lower()]\n+ matches = [key for key in self if name.lower() in self[key].lower()]\n+ return matches\n \n def pop(self, key, default=None):\n if default is None:\n", "issue": "pop_named() sometimes generates an exception\nThis simple code block generates an exception when I try to pop \"Columbus Day\" (last line):\r\n\r\n```\r\nfrom datetime import date\r\n \r\nimport holidays\r\n\r\nus_holidays = holidays.CountryHoliday('US', prov=None, state='CA', years=2022)\r\n\r\nprint(us_holidays.get('2022-10-10').format(\"%s\"))\r\n\r\nus_holidays.pop_named(\"Columbus Day\")\r\n```\r\n\r\nThe exception is `dictionary changed size during iteration`, and it seems to be related to popping elements out of a dictionary while you're iterating through it: https://stackoverflow.com/questions/11941817/how-to-avoid-runtimeerror-dictionary-changed-size-during-iteration-error. The element to be popped isn't actually removed.\r\n\r\nI'm far from a competent Python programmer, but iterating over `list(self)` seems to solve the problem for me - no exception, and the item is removed properly:\r\n\r\n```\r\n def get_named(self, name):\r\n # find all dates matching provided name (accepting partial\r\n # strings too, case insensitive), returning them in a list\r\n return [key for key in list(self) if name.lower() in self[key].lower()]\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# python-holidays\n# ---------------\n# A fast, efficient Python library for generating country, province and state\n# specific sets of holidays on the fly. 
It aims to make determining whether a\n# specific date is a holiday as fast and flexible as possible.\n#\n# Author: ryanss <[email protected]> (c) 2014-2017\n# dr-prodigy <[email protected]> (c) 2017-2020\n# Website: https://github.com/dr-prodigy/python-holidays\n# License: MIT (see LICENSE file)\n\nfrom datetime import timedelta, datetime, date\n\nimport six\nfrom dateutil.parser import parse\n\n\nclass HolidayBase(dict):\n PROVINCES = []\n\n def __init__(self, years=[], expand=True, observed=True,\n prov=None, state=None):\n self.observed = observed\n self.expand = expand\n if isinstance(years, int):\n years = [years, ]\n self.years = set(years)\n if not getattr(self, 'prov', False):\n self.prov = prov\n self.state = state\n for year in list(self.years):\n self._populate(year)\n\n def __setattr__(self, key, value):\n if key == 'observed' and len(self) > 0:\n dict.__setattr__(self, key, value)\n if value is True:\n # Add (Observed) dates\n years = list(self.years)\n self.years = set()\n self.clear()\n for year in years:\n self._populate(year)\n else:\n # Remove (Observed) dates\n for k, v in list(self.items()):\n if v.find(\"Observed\") >= 0:\n del self[k]\n else:\n return dict.__setattr__(self, key, value)\n\n def __keytransform__(self, key):\n if isinstance(key, datetime):\n key = key.date()\n elif isinstance(key, date):\n key = key\n elif isinstance(key, int) or isinstance(key, float):\n key = datetime.utcfromtimestamp(key).date()\n elif isinstance(key, six.string_types):\n try:\n key = parse(key).date()\n except (ValueError, OverflowError):\n raise ValueError(\"Cannot parse date from string '%s'\" % key)\n else:\n raise TypeError(\"Cannot convert type '%s' to date.\" % type(key))\n\n if self.expand and key.year not in self.years:\n self.years.add(key.year)\n self._populate(key.year)\n return key\n\n def __contains__(self, key):\n return dict.__contains__(self, self.__keytransform__(key))\n\n def __getitem__(self, key):\n if isinstance(key, slice):\n if not key.start or not key.stop:\n raise ValueError(\"Both start and stop must be given.\")\n\n start = self.__keytransform__(key.start)\n stop = self.__keytransform__(key.stop)\n\n if key.step is None:\n step = 1\n elif isinstance(key.step, timedelta):\n step = key.step.days\n elif isinstance(key.step, int):\n step = key.step\n else:\n raise TypeError(\n \"Cannot convert type '%s' to int.\" % type(key.step)\n )\n\n if step == 0:\n raise ValueError('Step value must not be zero.')\n\n date_diff = stop - start\n if date_diff.days < 0 <= step or date_diff.days >= 0 > step:\n step *= -1\n\n days_in_range = []\n for delta_days in range(0, date_diff.days, step):\n day = start + timedelta(days=delta_days)\n try:\n dict.__getitem__(\n self,\n day\n )\n days_in_range.append(day)\n except KeyError:\n pass\n return days_in_range\n return dict.__getitem__(self, self.__keytransform__(key))\n\n def __setitem__(self, key, value):\n if key in self:\n if self.get(key).find(value) < 0 \\\n and value.find(self.get(key)) < 0:\n value = \"%s, %s\" % (value, self.get(key))\n else:\n value = self.get(key)\n return dict.__setitem__(self, self.__keytransform__(key), value)\n\n def update(self, *args):\n args = list(args)\n for arg in args:\n if isinstance(arg, dict):\n for key, value in list(arg.items()):\n self[key] = value\n elif isinstance(arg, list):\n for item in arg:\n self[item] = \"Holiday\"\n else:\n self[arg] = \"Holiday\"\n\n def append(self, *args):\n return self.update(*args)\n\n def get(self, key, default=None):\n return dict.get(self, 
self.__keytransform__(key), default)\n\n def get_list(self, key):\n return [h for h in self.get(key, \"\").split(\", \") if h]\n\n def get_named(self, name):\n # find all dates matching provided name (accepting partial\n # strings too, case insensitive), returning them in a list\n return [key for key in self if name.lower() in self[key].lower()]\n\n def pop(self, key, default=None):\n if default is None:\n return dict.pop(self, self.__keytransform__(key))\n return dict.pop(self, self.__keytransform__(key), default)\n\n def pop_named(self, name):\n to_pop = self.get_named(name)\n if not to_pop:\n raise KeyError(name)\n for key in to_pop:\n self.pop(key)\n return to_pop\n\n def __eq__(self, other):\n return dict.__eq__(self, other) and self.__dict__ == other.__dict__\n\n def __ne__(self, other):\n return dict.__ne__(self, other) or self.__dict__ != other.__dict__\n\n def __add__(self, other):\n if isinstance(other, int) and other == 0:\n # Required to sum() list of holidays\n # sum([h1, h2]) is equivalent to (0 + h1 + h2)\n return self\n elif not isinstance(other, HolidayBase):\n raise TypeError()\n HolidaySum = createHolidaySum(self, other)\n country = (getattr(self, 'country', None) or\n getattr(other, 'country', None))\n if self.country and other.country and self.country != other.country:\n c1 = self.country\n if not isinstance(c1, list):\n c1 = [c1]\n c2 = other.country\n if not isinstance(c2, list):\n c2 = [c2]\n country = c1 + c2\n prov = getattr(self, 'prov', None) or getattr(other, 'prov', None)\n if self.prov and other.prov and self.prov != other.prov:\n p1 = self.prov if isinstance(self.prov, list) else [self.prov]\n p2 = other.prov if isinstance(other.prov, list) else [other.prov]\n prov = p1 + p2\n return HolidaySum(years=(self.years | other.years),\n expand=(self.expand or other.expand),\n observed=(self.observed or other.observed),\n country=country, prov=prov)\n\n def __radd__(self, other):\n return self.__add__(other)\n\n def _populate(self, year):\n pass\n\n\ndef createHolidaySum(h1, h2):\n class HolidaySum(HolidayBase):\n\n def __init__(self, country, **kwargs):\n self.country = country\n self.holidays = []\n if getattr(h1, 'holidays', False):\n for h in h1.holidays:\n self.holidays.append(h)\n else:\n self.holidays.append(h1)\n if getattr(h2, 'holidays', False):\n for h in h2.holidays:\n self.holidays.append(h)\n else:\n self.holidays.append(h2)\n HolidayBase.__init__(self, **kwargs)\n\n def _populate(self, year):\n for h in self.holidays[::-1]:\n h._populate(year)\n self.update(h)\n\n return HolidaySum\n", "path": "holidays/holiday_base.py"}]} | 3,212 | 145 |
gh_patches_debug_20514 | rasdani/github-patches | git_diff | liqd__a4-product-149 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
tile images on partner page are not cut to same size

</issue>
<code>
[start of liqd_product/config/urls.py]
1 """Beteiligung.in URL Configuration."""
2
3 from ckeditor_uploader import views as ck_views
4 from django.conf import settings
5 from django.conf.urls import include
6 from django.conf.urls import url
7 from django.contrib import admin
8 from django.views.decorators.cache import never_cache
9 from django.views.i18n import javascript_catalog
10 from rest_framework import routers
11
12 from adhocracy4.api import routers as a4routers
13 from adhocracy4.comments.api import CommentViewSet
14 from adhocracy4.follows.api import FollowViewSet
15 from adhocracy4.ratings.api import RatingViewSet
16 from adhocracy4.reports.api import ReportViewSet
17 from liqd_product.apps.partners.urlresolvers import partner_patterns
18 from liqd_product.apps.users.decorators import user_is_project_admin
19 from meinberlin.apps.documents.api import DocumentViewSet
20 from meinberlin.apps.polls.api import PollViewSet
21 from meinberlin.apps.polls.api import VoteViewSet
22 from meinberlin.apps.polls.routers import QuestionDefaultRouter
23
24 js_info_dict = {
25 'packages': ('adhocracy4.comments',),
26 }
27
28 router = routers.DefaultRouter()
29 router.register(r'follows', FollowViewSet, base_name='follows')
30 router.register(r'reports', ReportViewSet, base_name='reports')
31 router.register(r'polls', PollViewSet, base_name='polls')
32
33 module_router = a4routers.ModuleDefaultRouter()
34 # FIXME: rename to 'chapters'
35 module_router.register(r'documents', DocumentViewSet, base_name='chapters')
36
37 orga_router = a4routers.OrganisationDefaultRouter()
38
39 ct_router = a4routers.ContentTypeDefaultRouter()
40 ct_router.register(r'comments', CommentViewSet, base_name='comments')
41 ct_router.register(r'ratings', RatingViewSet, base_name='ratings')
42
43 question_router = QuestionDefaultRouter()
44 question_router.register(r'vote', VoteViewSet, base_name='vote')
45
46
47 urlpatterns = [
48 # General platform urls
49 url(r'^django-admin/', include(admin.site.urls)),
50 url(r'^admin/', include('wagtail.wagtailadmin.urls')),
51
52 url(r'^accounts/', include('allauth.urls')),
53 url(r'^account/', include('liqd_product.apps.account.urls')),
54 url(r'^embed/', include('meinberlin.apps.embed.urls')),
55 url(r'^dashboard/', include('meinberlin.apps.dashboard2.urls')),
56 url(r'^profile/', include('liqd_product.apps.users.urls')),
57
58 # API urls
59 url(r'^api/', include(ct_router.urls)),
60 url(r'^api/', include(module_router.urls)),
61 url(r'^api/', include(orga_router.urls)),
62 url(r'^api/', include(question_router.urls)),
63 url(r'^api/', include(router.urls)),
64
65 url(r'^upload/', user_is_project_admin(ck_views.upload),
66 name='ckeditor_upload'),
67 url(r'^browse/', never_cache(user_is_project_admin(ck_views.browse)),
68 name='ckeditor_browse'),
69
70 url(r'^jsi18n/$', javascript_catalog,
71 js_info_dict, name='javascript-catalog'),
72
73 # Urls within the context of a partner
74 partner_patterns(
75 url(r'^modules/', include('adhocracy4.modules.urls')),
76 url(r'^projects/', include('adhocracy4.projects.urls')),
77 url(r'^offlineevents/', include('meinberlin.apps.offlineevents.urls',
78 namespace='meinberlin_offlineevents')),
79 url(r'^ideas/', include(r'meinberlin.apps.ideas.urls',
80 namespace='meinberlin_ideas')),
81 url(r'^mapideas/', include('meinberlin.apps.mapideas.urls',
82 namespace='meinberlin_mapideas')),
83 url(r'^text/', include('meinberlin.apps.documents.urls',
84 namespace='meinberlin_documents')),
85 ),
86
87 url(r'', include('liqd_product.apps.partners.urls')),
88 url(r'', include('wagtail.wagtailcore.urls'))
89 ]
90
91
92 if settings.DEBUG:
93 from django.conf.urls.static import static
94 from django.contrib.staticfiles.urls import staticfiles_urlpatterns
95
96 # Serve static and media locally
97 urlpatterns += staticfiles_urlpatterns()
98 urlpatterns += static(settings.MEDIA_URL,
99 document_root=settings.MEDIA_ROOT)
100 try:
101 import debug_toolbar
102 except ImportError:
103 pass
104 else:
105 urlpatterns = [
106 url(r'^__debug__/', include(debug_toolbar.urls)),
107 ] + urlpatterns
108
[end of liqd_product/config/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/liqd_product/config/urls.py b/liqd_product/config/urls.py
--- a/liqd_product/config/urls.py
+++ b/liqd_product/config/urls.py
@@ -16,6 +16,7 @@
from adhocracy4.reports.api import ReportViewSet
from liqd_product.apps.partners.urlresolvers import partner_patterns
from liqd_product.apps.users.decorators import user_is_project_admin
+from meinberlin.apps.contrib import views as contrib_views
from meinberlin.apps.documents.api import DocumentViewSet
from meinberlin.apps.polls.api import PollViewSet
from meinberlin.apps.polls.api import VoteViewSet
@@ -67,6 +68,7 @@
url(r'^browse/', never_cache(user_is_project_admin(ck_views.browse)),
name='ckeditor_browse'),
+ url(r'^components/$', contrib_views.ComponentLibraryView.as_view()),
url(r'^jsi18n/$', javascript_catalog,
js_info_dict, name='javascript-catalog'),
| {"golden_diff": "diff --git a/liqd_product/config/urls.py b/liqd_product/config/urls.py\n--- a/liqd_product/config/urls.py\n+++ b/liqd_product/config/urls.py\n@@ -16,6 +16,7 @@\n from adhocracy4.reports.api import ReportViewSet\n from liqd_product.apps.partners.urlresolvers import partner_patterns\n from liqd_product.apps.users.decorators import user_is_project_admin\n+from meinberlin.apps.contrib import views as contrib_views\n from meinberlin.apps.documents.api import DocumentViewSet\n from meinberlin.apps.polls.api import PollViewSet\n from meinberlin.apps.polls.api import VoteViewSet\n@@ -67,6 +68,7 @@\n url(r'^browse/', never_cache(user_is_project_admin(ck_views.browse)),\n name='ckeditor_browse'),\n \n+ url(r'^components/$', contrib_views.ComponentLibraryView.as_view()),\n url(r'^jsi18n/$', javascript_catalog,\n js_info_dict, name='javascript-catalog'),\n", "issue": "tile images on partner page are not cut to same size\n\r\n\n", "before_files": [{"content": "\"\"\"Beteiligung.in URL Configuration.\"\"\"\n\nfrom ckeditor_uploader import views as ck_views\nfrom django.conf import settings\nfrom django.conf.urls import include\nfrom django.conf.urls import url\nfrom django.contrib import admin\nfrom django.views.decorators.cache import never_cache\nfrom django.views.i18n import javascript_catalog\nfrom rest_framework import routers\n\nfrom adhocracy4.api import routers as a4routers\nfrom adhocracy4.comments.api import CommentViewSet\nfrom adhocracy4.follows.api import FollowViewSet\nfrom adhocracy4.ratings.api import RatingViewSet\nfrom adhocracy4.reports.api import ReportViewSet\nfrom liqd_product.apps.partners.urlresolvers import partner_patterns\nfrom liqd_product.apps.users.decorators import user_is_project_admin\nfrom meinberlin.apps.documents.api import DocumentViewSet\nfrom meinberlin.apps.polls.api import PollViewSet\nfrom meinberlin.apps.polls.api import VoteViewSet\nfrom meinberlin.apps.polls.routers import QuestionDefaultRouter\n\njs_info_dict = {\n 'packages': ('adhocracy4.comments',),\n}\n\nrouter = routers.DefaultRouter()\nrouter.register(r'follows', FollowViewSet, base_name='follows')\nrouter.register(r'reports', ReportViewSet, base_name='reports')\nrouter.register(r'polls', PollViewSet, base_name='polls')\n\nmodule_router = a4routers.ModuleDefaultRouter()\n# FIXME: rename to 'chapters'\nmodule_router.register(r'documents', DocumentViewSet, base_name='chapters')\n\norga_router = a4routers.OrganisationDefaultRouter()\n\nct_router = a4routers.ContentTypeDefaultRouter()\nct_router.register(r'comments', CommentViewSet, base_name='comments')\nct_router.register(r'ratings', RatingViewSet, base_name='ratings')\n\nquestion_router = QuestionDefaultRouter()\nquestion_router.register(r'vote', VoteViewSet, base_name='vote')\n\n\nurlpatterns = [\n # General platform urls\n url(r'^django-admin/', include(admin.site.urls)),\n url(r'^admin/', include('wagtail.wagtailadmin.urls')),\n\n url(r'^accounts/', include('allauth.urls')),\n url(r'^account/', include('liqd_product.apps.account.urls')),\n url(r'^embed/', include('meinberlin.apps.embed.urls')),\n url(r'^dashboard/', include('meinberlin.apps.dashboard2.urls')),\n url(r'^profile/', include('liqd_product.apps.users.urls')),\n\n # API urls\n url(r'^api/', include(ct_router.urls)),\n url(r'^api/', include(module_router.urls)),\n url(r'^api/', include(orga_router.urls)),\n url(r'^api/', include(question_router.urls)),\n url(r'^api/', include(router.urls)),\n\n url(r'^upload/', user_is_project_admin(ck_views.upload),\n 
name='ckeditor_upload'),\n url(r'^browse/', never_cache(user_is_project_admin(ck_views.browse)),\n name='ckeditor_browse'),\n\n url(r'^jsi18n/$', javascript_catalog,\n js_info_dict, name='javascript-catalog'),\n\n # Urls within the context of a partner\n partner_patterns(\n url(r'^modules/', include('adhocracy4.modules.urls')),\n url(r'^projects/', include('adhocracy4.projects.urls')),\n url(r'^offlineevents/', include('meinberlin.apps.offlineevents.urls',\n namespace='meinberlin_offlineevents')),\n url(r'^ideas/', include(r'meinberlin.apps.ideas.urls',\n namespace='meinberlin_ideas')),\n url(r'^mapideas/', include('meinberlin.apps.mapideas.urls',\n namespace='meinberlin_mapideas')),\n url(r'^text/', include('meinberlin.apps.documents.urls',\n namespace='meinberlin_documents')),\n ),\n\n url(r'', include('liqd_product.apps.partners.urls')),\n url(r'', include('wagtail.wagtailcore.urls'))\n]\n\n\nif settings.DEBUG:\n from django.conf.urls.static import static\n from django.contrib.staticfiles.urls import staticfiles_urlpatterns\n\n # Serve static and media locally\n urlpatterns += staticfiles_urlpatterns()\n urlpatterns += static(settings.MEDIA_URL,\n document_root=settings.MEDIA_ROOT)\n try:\n import debug_toolbar\n except ImportError:\n pass\n else:\n urlpatterns = [\n url(r'^__debug__/', include(debug_toolbar.urls)),\n ] + urlpatterns\n", "path": "liqd_product/config/urls.py"}]} | 1,789 | 216 |
gh_patches_debug_30361 | rasdani/github-patches | git_diff | pytorch__ignite-1771 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Patch MNIST dataset downloading to fix CI
Currently, there is an issue with downloading MNIST dataset using torchvision. Let's introduce the following patch to our CI to fix it:
- https://github.com/pytorch/vision/issues/3500#issuecomment-790491487
Where to put that:
- create new step with the patch before [here](https://github.com/pytorch/ignite/blob/700f0e1325efc5dc0dce88d26284e51bc2a7c87c/.github/workflows/unit-tests.yml#L106)
- add patch [here](https://github.com/pytorch/ignite/blob/700f0e1325efc5dc0dce88d26284e51bc2a7c87c/.circleci/config.yml#L147).
</issue>
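For context, the torchvision comment linked above works around the failing MNIST mirror by installing a custom `urllib` opener with a browser-like User-Agent before the dataset is downloaded. The snippet below is a rough sketch of that kind of CI pre-download step, not the actual patch added to the workflows; the helper name, the header value, and the pre-download call are assumptions.

```python
# Hypothetical CI helper: run once before any torchvision MNIST download.
# Loosely based on the workaround discussed in pytorch/vision#3500; the exact
# snippet in that thread may differ.
import urllib.request

from torchvision.datasets import MNIST


def patch_mnist_user_agent():
    # The MNIST host started rejecting the default urllib User-Agent, so
    # install an opener that sends a browser-like one instead.
    opener = urllib.request.build_opener()
    opener.addheaders = [("User-agent", "Mozilla/5.0")]
    urllib.request.install_opener(opener)


if __name__ == "__main__":
    patch_mnist_user_agent()
    MNIST(".", download=True)  # pre-download so test jobs hit the local cache
```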
<code>
[start of examples/mnist/mnist_patch.py]
1 """Patch to fix MNIST download issue as described here:
2 - https://github.com/pytorch/ignite/issues/1737
3 - https://github.com/pytorch/vision/issues/3500
4 """
5
6 import os
7 import subprocess as sp
8
9 import torch
10 from torchvision.datasets.mnist import MNIST, read_image_file, read_label_file
11 from torchvision.datasets.utils import extract_archive
12
13
14 def patched_download(self):
15 """wget patched download method.
16 """
17 if self._check_exists():
18 return
19
20 os.makedirs(self.raw_folder, exist_ok=True)
21 os.makedirs(self.processed_folder, exist_ok=True)
22
23 # download files
24 for url, md5 in self.resources:
25 filename = url.rpartition("/")[2]
26 download_root = os.path.expanduser(self.raw_folder)
27 extract_root = None
28 remove_finished = False
29
30 if extract_root is None:
31 extract_root = download_root
32 if not filename:
33 filename = os.path.basename(url)
34
35 # Use wget to download archives
36 sp.run(["wget", url, "-P", download_root])
37
38 archive = os.path.join(download_root, filename)
39 print("Extracting {} to {}".format(archive, extract_root))
40 extract_archive(archive, extract_root, remove_finished)
41
42 # process and save as torch files
43 print("Processing...")
44
45 training_set = (
46 read_image_file(os.path.join(self.raw_folder, "train-images-idx3-ubyte")),
47 read_label_file(os.path.join(self.raw_folder, "train-labels-idx1-ubyte")),
48 )
49 test_set = (
50 read_image_file(os.path.join(self.raw_folder, "t10k-images-idx3-ubyte")),
51 read_label_file(os.path.join(self.raw_folder, "t10k-labels-idx1-ubyte")),
52 )
53 with open(os.path.join(self.processed_folder, self.training_file), "wb") as f:
54 torch.save(training_set, f)
55 with open(os.path.join(self.processed_folder, self.test_file), "wb") as f:
56 torch.save(test_set, f)
57
58 print("Done!")
59
60
61 def main():
62 # Patch download method
63 MNIST.download = patched_download
64 # Download MNIST
65 MNIST(".", download=True)
66
67
68 if __name__ == "__main__":
69 main()
70
[end of examples/mnist/mnist_patch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/mnist/mnist_patch.py b/examples/mnist/mnist_patch.py
deleted file mode 100644
--- a/examples/mnist/mnist_patch.py
+++ /dev/null
@@ -1,69 +0,0 @@
-"""Patch to fix MNIST download issue as described here:
-- https://github.com/pytorch/ignite/issues/1737
-- https://github.com/pytorch/vision/issues/3500
-"""
-
-import os
-import subprocess as sp
-
-import torch
-from torchvision.datasets.mnist import MNIST, read_image_file, read_label_file
-from torchvision.datasets.utils import extract_archive
-
-
-def patched_download(self):
- """wget patched download method.
- """
- if self._check_exists():
- return
-
- os.makedirs(self.raw_folder, exist_ok=True)
- os.makedirs(self.processed_folder, exist_ok=True)
-
- # download files
- for url, md5 in self.resources:
- filename = url.rpartition("/")[2]
- download_root = os.path.expanduser(self.raw_folder)
- extract_root = None
- remove_finished = False
-
- if extract_root is None:
- extract_root = download_root
- if not filename:
- filename = os.path.basename(url)
-
- # Use wget to download archives
- sp.run(["wget", url, "-P", download_root])
-
- archive = os.path.join(download_root, filename)
- print("Extracting {} to {}".format(archive, extract_root))
- extract_archive(archive, extract_root, remove_finished)
-
- # process and save as torch files
- print("Processing...")
-
- training_set = (
- read_image_file(os.path.join(self.raw_folder, "train-images-idx3-ubyte")),
- read_label_file(os.path.join(self.raw_folder, "train-labels-idx1-ubyte")),
- )
- test_set = (
- read_image_file(os.path.join(self.raw_folder, "t10k-images-idx3-ubyte")),
- read_label_file(os.path.join(self.raw_folder, "t10k-labels-idx1-ubyte")),
- )
- with open(os.path.join(self.processed_folder, self.training_file), "wb") as f:
- torch.save(training_set, f)
- with open(os.path.join(self.processed_folder, self.test_file), "wb") as f:
- torch.save(test_set, f)
-
- print("Done!")
-
-
-def main():
- # Patch download method
- MNIST.download = patched_download
- # Download MNIST
- MNIST(".", download=True)
-
-
-if __name__ == "__main__":
- main()
| {"golden_diff": "diff --git a/examples/mnist/mnist_patch.py b/examples/mnist/mnist_patch.py\ndeleted file mode 100644\n--- a/examples/mnist/mnist_patch.py\n+++ /dev/null\n@@ -1,69 +0,0 @@\n-\"\"\"Patch to fix MNIST download issue as described here:\n-- https://github.com/pytorch/ignite/issues/1737\n-- https://github.com/pytorch/vision/issues/3500\n-\"\"\"\n-\n-import os\n-import subprocess as sp\n-\n-import torch\n-from torchvision.datasets.mnist import MNIST, read_image_file, read_label_file\n-from torchvision.datasets.utils import extract_archive\n-\n-\n-def patched_download(self):\n- \"\"\"wget patched download method.\n- \"\"\"\n- if self._check_exists():\n- return\n-\n- os.makedirs(self.raw_folder, exist_ok=True)\n- os.makedirs(self.processed_folder, exist_ok=True)\n-\n- # download files\n- for url, md5 in self.resources:\n- filename = url.rpartition(\"/\")[2]\n- download_root = os.path.expanduser(self.raw_folder)\n- extract_root = None\n- remove_finished = False\n-\n- if extract_root is None:\n- extract_root = download_root\n- if not filename:\n- filename = os.path.basename(url)\n-\n- # Use wget to download archives\n- sp.run([\"wget\", url, \"-P\", download_root])\n-\n- archive = os.path.join(download_root, filename)\n- print(\"Extracting {} to {}\".format(archive, extract_root))\n- extract_archive(archive, extract_root, remove_finished)\n-\n- # process and save as torch files\n- print(\"Processing...\")\n-\n- training_set = (\n- read_image_file(os.path.join(self.raw_folder, \"train-images-idx3-ubyte\")),\n- read_label_file(os.path.join(self.raw_folder, \"train-labels-idx1-ubyte\")),\n- )\n- test_set = (\n- read_image_file(os.path.join(self.raw_folder, \"t10k-images-idx3-ubyte\")),\n- read_label_file(os.path.join(self.raw_folder, \"t10k-labels-idx1-ubyte\")),\n- )\n- with open(os.path.join(self.processed_folder, self.training_file), \"wb\") as f:\n- torch.save(training_set, f)\n- with open(os.path.join(self.processed_folder, self.test_file), \"wb\") as f:\n- torch.save(test_set, f)\n-\n- print(\"Done!\")\n-\n-\n-def main():\n- # Patch download method\n- MNIST.download = patched_download\n- # Download MNIST\n- MNIST(\".\", download=True)\n-\n-\n-if __name__ == \"__main__\":\n- main()\n", "issue": "Patch MNIST dataset downloading to fix CI\nCurrently, there is an issue with downloading MNIST dataset using torchvision. 
Let's introduce the following patch to our CI to fix it:\r\n- https://github.com/pytorch/vision/issues/3500#issuecomment-790491487\r\n\r\nWhere to put that:\r\n- create new step with the patch before [here](https://github.com/pytorch/ignite/blob/700f0e1325efc5dc0dce88d26284e51bc2a7c87c/.github/workflows/unit-tests.yml#L106)\r\n- add patch [here](https://github.com/pytorch/ignite/blob/700f0e1325efc5dc0dce88d26284e51bc2a7c87c/.circleci/config.yml#L147).\n", "before_files": [{"content": "\"\"\"Patch to fix MNIST download issue as described here:\n- https://github.com/pytorch/ignite/issues/1737\n- https://github.com/pytorch/vision/issues/3500\n\"\"\"\n\nimport os\nimport subprocess as sp\n\nimport torch\nfrom torchvision.datasets.mnist import MNIST, read_image_file, read_label_file\nfrom torchvision.datasets.utils import extract_archive\n\n\ndef patched_download(self):\n \"\"\"wget patched download method.\n \"\"\"\n if self._check_exists():\n return\n\n os.makedirs(self.raw_folder, exist_ok=True)\n os.makedirs(self.processed_folder, exist_ok=True)\n\n # download files\n for url, md5 in self.resources:\n filename = url.rpartition(\"/\")[2]\n download_root = os.path.expanduser(self.raw_folder)\n extract_root = None\n remove_finished = False\n\n if extract_root is None:\n extract_root = download_root\n if not filename:\n filename = os.path.basename(url)\n\n # Use wget to download archives\n sp.run([\"wget\", url, \"-P\", download_root])\n\n archive = os.path.join(download_root, filename)\n print(\"Extracting {} to {}\".format(archive, extract_root))\n extract_archive(archive, extract_root, remove_finished)\n\n # process and save as torch files\n print(\"Processing...\")\n\n training_set = (\n read_image_file(os.path.join(self.raw_folder, \"train-images-idx3-ubyte\")),\n read_label_file(os.path.join(self.raw_folder, \"train-labels-idx1-ubyte\")),\n )\n test_set = (\n read_image_file(os.path.join(self.raw_folder, \"t10k-images-idx3-ubyte\")),\n read_label_file(os.path.join(self.raw_folder, \"t10k-labels-idx1-ubyte\")),\n )\n with open(os.path.join(self.processed_folder, self.training_file), \"wb\") as f:\n torch.save(training_set, f)\n with open(os.path.join(self.processed_folder, self.test_file), \"wb\") as f:\n torch.save(test_set, f)\n\n print(\"Done!\")\n\n\ndef main():\n # Patch download method\n MNIST.download = patched_download\n # Download MNIST\n MNIST(\".\", download=True)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "examples/mnist/mnist_patch.py"}]} | 1,373 | 600 |
gh_patches_debug_18310 | rasdani/github-patches | git_diff | huggingface__accelerate-695 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`notebook_launcher` cannot use MPS device on M1 or M2 Mac
### System Info
```Shell
- `Accelerate` version: 0.13.0.dev0
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.10.4
- Numpy version: 1.23.1
- PyTorch version (GPU?): 1.13.0.dev20220909 (False)
- `Accelerate` default config:
- compute_environment: LOCAL_MACHINE
- distributed_type: MPS
- mixed_precision: no
- use_cpu: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- main_process_ip: None
- main_process_port: None
- rdzv_backend: static
- same_network: False
- main_training_function: main
- deepspeed_config: {}
- fsdp_config: {}
- downcast_bf16: no
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Stable Diffusion Textual Inversion Colab script: https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb
2. Run script on M1 Max, expecting `accelerate.notebook_launcher` to report running the training on *GPU (or MPS)*
3. Observe it runs the training on *CPU* instead
### Expected behavior
```Shell
With a previous PR, `mps` M1 GPU support was introduced to Accelerate: https://github.com/huggingface/accelerate/pull/596
However `notebook_launcher` needs to be adapted to make use of this change as well.
```
</issue>
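As the report notes, the single-process branch of `notebook_launcher` only distinguishes CUDA from CPU. The sketch below illustrates the kind of device check the fix needs; it is an illustration of the idea rather than the actual patch (the real change also has to tell Accelerate to use the MPS device, e.g. through its `use_mps_device` setting), and it assumes a PyTorch build that ships the `torch.backends.mps` module.

```python
import torch


def describe_launch_device() -> str:
    # Sketch of the extra branch notebook_launcher needs for Apple silicon:
    # check MPS first, then fall back to the existing CUDA/CPU distinction.
    mps_backend = getattr(torch.backends, "mps", None)
    if mps_backend is not None and mps_backend.is_available():
        return "Launching training on MPS."
    if torch.cuda.is_available():
        return "Launching training on one GPU."
    return "Launching training on CPU."


print(describe_launch_device())
```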
<code>
[start of src/accelerate/launchers.py]
1 # Copyright 2021 The HuggingFace Team. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import sys
17 import tempfile
18 import warnings
19
20 import torch
21
22 from .state import AcceleratorState
23 from .utils import PrecisionType, PrepareForLaunch, patch_environment
24
25
26 def notebook_launcher(function, args=(), num_processes=None, use_fp16=False, mixed_precision="no", use_port="29500"):
27 """
28 Launches a training function, using several processes if it's possible in the current environment (TPU with
29 multiple cores for instance).
30
31 Args:
32 function (`Callable`):
33 The training function to execute. If it accepts arguments, the first argument should be the index of the
34 process run.
35 args (`Tuple`):
36 Tuple of arguments to pass to the function (it will receive `*args`).
37 num_processes (`int`, *optional*):
38 The number of processes to use for training. Will default to 8 in Colab/Kaggle if a TPU is available, to
39 the number of GPUs available otherwise.
40 mixed_precision (`str`, *optional*, defaults to `"no"`):
41 If `fp16` or `bf16`, will use mixed precision training on multi-GPU.
42 use_port (`str`, *optional*, defaults to `"29500"`):
43 The port to use to communicate between processes when launching a multi-GPU training.
44 """
45 # Are we in a google colab or a Kaggle Kernel?
46 if any(key.startswith("KAGGLE") for key in os.environ.keys()):
47 in_colab_or_kaggle = True
48 elif "IPython" in sys.modules:
49 in_colab_or_kaggle = "google.colab" in str(sys.modules["IPython"].get_ipython())
50 else:
51 in_colab_or_kaggle = False
52
53 try:
54 mixed_precision = PrecisionType(mixed_precision.lower())
55 except ValueError:
56 raise ValueError(
57 f"Unknown mixed_precision mode: {args.mixed_precision.lower()}. Choose between {PrecisionType.list()}."
58 )
59
60 if in_colab_or_kaggle:
61 if os.environ.get("TPU_NAME", None) is not None:
62 # TPU launch
63 import torch_xla.distributed.xla_multiprocessing as xmp
64
65 if len(AcceleratorState._shared_state) > 0:
66 raise ValueError(
67 "To train on TPU in Colab or Kaggle Kernel, the `Accelerator` should only be initialized inside "
68 "your training function. Restart your notebook and make sure no cells initializes an "
69 "`Accelerator`."
70 )
71 if num_processes is None:
72 num_processes = 8
73
74 launcher = PrepareForLaunch(function, distributed_type="TPU")
75 print(f"Launching a training on {num_processes} TPU cores.")
76 xmp.spawn(launcher, args=args, nprocs=num_processes, start_method="fork")
77 else:
78 # No need for a distributed launch otherwise as it's either CPU or one GPU.
79 if torch.cuda.is_available():
80 print("Launching training on one GPU.")
81 else:
82 print("Launching training on one CPU.")
83 function(*args)
84
85 else:
86 if num_processes is None:
87 raise ValueError(
88 "You have to specify the number of GPUs you would like to use, add `num_processes=...` to your call."
89 )
90
91 if num_processes > 1:
92 # Multi-GPU launch
93 from torch.multiprocessing import start_processes
94
95 if len(AcceleratorState._shared_state) > 0:
96 raise ValueError(
97 "To launch a multi-GPU training from your notebook, the `Accelerator` should only be initialized "
98 "inside your training function. Restart your notebook and make sure no cells initializes an "
99 "`Accelerator`."
100 )
101
102 if torch.cuda.is_initialized():
103 raise ValueError(
104 "To launch a multi-GPU training from your notebook, you need to avoid running any instruction "
105 "using `torch.cuda` in any cell. Restart your notebook and make sure no cells use any CUDA "
106 "function."
107 )
108
109 if use_fp16:
110 warnings.warn('use_fp16=True is deprecated. Use mixed_precision="fp16" instead.', DeprecationWarning)
111 mixed_precision = "fp16"
112
113 # torch.distributed will expect a few environment variable to be here. We set the ones common to each
114 # process here (the other ones will be set be the launcher).
115 with patch_environment(
116 world_size=num_processes, master_addr="127.0.01", master_port=use_port, mixed_precision=mixed_precision
117 ):
118 launcher = PrepareForLaunch(function, distributed_type="MULTI_GPU")
119
120 print(f"Launching training on {num_processes} GPUs.")
121 start_processes(launcher, args=args, nprocs=num_processes, start_method="fork")
122
123 else:
124 # No need for a distributed launch otherwise as it's either CPU or one GPU.
125 if torch.cuda.is_available():
126 print("Launching training on one GPU.")
127 else:
128 print("Launching training on CPU.")
129 function(*args)
130
131
132 def debug_launcher(function, args=(), num_processes=2):
133 """
134 Launches a training function using several processes on CPU for debugging purposes.
135
136 <Tip warning={true}>
137
138 This function is provided for internal testing and debugging, but it's not intended for real trainings. It will
139 only use the CPU.
140
141 </Tip>
142
143 Args:
144 function (`Callable`):
145 The training function to execute.
146 args (`Tuple`):
147 Tuple of arguments to pass to the function (it will receive `*args`).
148 num_processes (`int`, *optional*, defaults to 2):
149 The number of processes to use for training.
150 """
151 from torch.multiprocessing import start_processes
152
153 with tempfile.NamedTemporaryFile() as tmp_file:
154 # torch.distributed will expect a few environment variable to be here. We set the ones common to each
155 # process here (the other ones will be set be the launcher).
156 with patch_environment(
157 world_size=num_processes,
158 master_addr="127.0.01",
159 master_port="29500",
160 mixed_precision="no",
161 accelerate_debug_rdv_file=tmp_file.name,
162 use_cpu="yes",
163 ):
164 launcher = PrepareForLaunch(function, debug=True)
165 start_processes(launcher, args=args, nprocs=num_processes, start_method="fork")
166
[end of src/accelerate/launchers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/accelerate/launchers.py b/src/accelerate/launchers.py
--- a/src/accelerate/launchers.py
+++ b/src/accelerate/launchers.py
@@ -121,12 +121,17 @@
start_processes(launcher, args=args, nprocs=num_processes, start_method="fork")
else:
- # No need for a distributed launch otherwise as it's either CPU or one GPU.
- if torch.cuda.is_available():
+ # No need for a distributed launch otherwise as it's either CPU, GPU or MPS.
+ use_mps_device = "false"
+ if torch.backends.mps.is_available():
+ print("Launching training on MPS.")
+ use_mps_device = "true"
+ elif torch.cuda.is_available():
print("Launching training on one GPU.")
else:
print("Launching training on CPU.")
- function(*args)
+ with patch_environment(use_mps_device=use_mps_device):
+ function(*args)
def debug_launcher(function, args=(), num_processes=2):
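For context, a minimal sketch of how this launcher is typically invoked from a notebook once the patch above is in place; the training function body is a placeholder, and the printed message is the one added by the diff, not output I have verified:

```python
from accelerate import notebook_launcher

def training_function():
    # build the Accelerator, model and dataloaders here (placeholder)
    ...

# on an Apple-silicon Mac this should now report "Launching training on MPS."
notebook_launcher(training_function, num_processes=1)
```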
| {"golden_diff": "diff --git a/src/accelerate/launchers.py b/src/accelerate/launchers.py\n--- a/src/accelerate/launchers.py\n+++ b/src/accelerate/launchers.py\n@@ -121,12 +121,17 @@\n start_processes(launcher, args=args, nprocs=num_processes, start_method=\"fork\")\n \n else:\n- # No need for a distributed launch otherwise as it's either CPU or one GPU.\n- if torch.cuda.is_available():\n+ # No need for a distributed launch otherwise as it's either CPU, GPU or MPS.\n+ use_mps_device = \"false\"\n+ if torch.backends.mps.is_available():\n+ print(\"Launching training on MPS.\")\n+ use_mps_device = \"true\"\n+ elif torch.cuda.is_available():\n print(\"Launching training on one GPU.\")\n else:\n print(\"Launching training on CPU.\")\n- function(*args)\n+ with patch_environment(use_mps_device=use_mps_device):\n+ function(*args)\n \n \n def debug_launcher(function, args=(), num_processes=2):\n", "issue": "`notebook_launcher` cannot use MPS device on M1 or M2 Mac\n### System Info\r\n\r\n```Shell\r\n- `Accelerate` version: 0.13.0.dev0\r\n- Platform: macOS-12.5.1-arm64-arm-64bit\r\n- Python version: 3.10.4\r\n- Numpy version: 1.23.1\r\n- PyTorch version (GPU?): 1.13.0.dev20220909 (False)\r\n- `Accelerate` default config:\r\n\t- compute_environment: LOCAL_MACHINE\r\n\t- distributed_type: MPS\r\n\t- mixed_precision: no\r\n\t- use_cpu: False\r\n\t- num_processes: 1\r\n\t- machine_rank: 0\r\n\t- num_machines: 1\r\n\t- main_process_ip: None\r\n\t- main_process_port: None\r\n\t- rdzv_backend: static\r\n\t- same_network: False\r\n\t- main_training_function: main\r\n\t- deepspeed_config: {}\r\n\t- fsdp_config: {}\r\n\t- downcast_bf16: no\r\n```\r\n\r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [X] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\r\n- [X] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\nSteps to reproduce:\r\n1. Stable Diffusion Textual Inversion Colab script: https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb\r\n2. Run script on M1 Max, expecting `accelerate.notebook_launcher` to report running the training on *GPU (or MPS)*\r\n3. Observe it runs the training on *CPU* instead\r\n\r\n### Expected behavior\r\n\r\n```Shell\r\nWith a previous PR, `mps` M1 GPU support was introduced to Accelerate: https://github.com/huggingface/accelerate/pull/596\r\n\r\nHowever `notebook_launcher` needs to be adapted to make use of this change as well.\r\n```\r\n\n", "before_files": [{"content": "# Copyright 2021 The HuggingFace Team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport sys\nimport tempfile\nimport warnings\n\nimport torch\n\nfrom .state import AcceleratorState\nfrom .utils import PrecisionType, PrepareForLaunch, patch_environment\n\n\ndef notebook_launcher(function, args=(), num_processes=None, use_fp16=False, mixed_precision=\"no\", use_port=\"29500\"):\n \"\"\"\n Launches a training function, using several processes if it's possible in the current environment (TPU with\n multiple cores for instance).\n\n Args:\n function (`Callable`):\n The training function to execute. If it accepts arguments, the first argument should be the index of the\n process run.\n args (`Tuple`):\n Tuple of arguments to pass to the function (it will receive `*args`).\n num_processes (`int`, *optional*):\n The number of processes to use for training. Will default to 8 in Colab/Kaggle if a TPU is available, to\n the number of GPUs available otherwise.\n mixed_precision (`str`, *optional*, defaults to `\"no\"`):\n If `fp16` or `bf16`, will use mixed precision training on multi-GPU.\n use_port (`str`, *optional*, defaults to `\"29500\"`):\n The port to use to communicate between processes when launching a multi-GPU training.\n \"\"\"\n # Are we in a google colab or a Kaggle Kernel?\n if any(key.startswith(\"KAGGLE\") for key in os.environ.keys()):\n in_colab_or_kaggle = True\n elif \"IPython\" in sys.modules:\n in_colab_or_kaggle = \"google.colab\" in str(sys.modules[\"IPython\"].get_ipython())\n else:\n in_colab_or_kaggle = False\n\n try:\n mixed_precision = PrecisionType(mixed_precision.lower())\n except ValueError:\n raise ValueError(\n f\"Unknown mixed_precision mode: {args.mixed_precision.lower()}. Choose between {PrecisionType.list()}.\"\n )\n\n if in_colab_or_kaggle:\n if os.environ.get(\"TPU_NAME\", None) is not None:\n # TPU launch\n import torch_xla.distributed.xla_multiprocessing as xmp\n\n if len(AcceleratorState._shared_state) > 0:\n raise ValueError(\n \"To train on TPU in Colab or Kaggle Kernel, the `Accelerator` should only be initialized inside \"\n \"your training function. 
Restart your notebook and make sure no cells initializes an \"\n \"`Accelerator`.\"\n )\n if num_processes is None:\n num_processes = 8\n\n launcher = PrepareForLaunch(function, distributed_type=\"TPU\")\n print(f\"Launching a training on {num_processes} TPU cores.\")\n xmp.spawn(launcher, args=args, nprocs=num_processes, start_method=\"fork\")\n else:\n # No need for a distributed launch otherwise as it's either CPU or one GPU.\n if torch.cuda.is_available():\n print(\"Launching training on one GPU.\")\n else:\n print(\"Launching training on one CPU.\")\n function(*args)\n\n else:\n if num_processes is None:\n raise ValueError(\n \"You have to specify the number of GPUs you would like to use, add `num_processes=...` to your call.\"\n )\n\n if num_processes > 1:\n # Multi-GPU launch\n from torch.multiprocessing import start_processes\n\n if len(AcceleratorState._shared_state) > 0:\n raise ValueError(\n \"To launch a multi-GPU training from your notebook, the `Accelerator` should only be initialized \"\n \"inside your training function. Restart your notebook and make sure no cells initializes an \"\n \"`Accelerator`.\"\n )\n\n if torch.cuda.is_initialized():\n raise ValueError(\n \"To launch a multi-GPU training from your notebook, you need to avoid running any instruction \"\n \"using `torch.cuda` in any cell. Restart your notebook and make sure no cells use any CUDA \"\n \"function.\"\n )\n\n if use_fp16:\n warnings.warn('use_fp16=True is deprecated. Use mixed_precision=\"fp16\" instead.', DeprecationWarning)\n mixed_precision = \"fp16\"\n\n # torch.distributed will expect a few environment variable to be here. We set the ones common to each\n # process here (the other ones will be set be the launcher).\n with patch_environment(\n world_size=num_processes, master_addr=\"127.0.01\", master_port=use_port, mixed_precision=mixed_precision\n ):\n launcher = PrepareForLaunch(function, distributed_type=\"MULTI_GPU\")\n\n print(f\"Launching training on {num_processes} GPUs.\")\n start_processes(launcher, args=args, nprocs=num_processes, start_method=\"fork\")\n\n else:\n # No need for a distributed launch otherwise as it's either CPU or one GPU.\n if torch.cuda.is_available():\n print(\"Launching training on one GPU.\")\n else:\n print(\"Launching training on CPU.\")\n function(*args)\n\n\ndef debug_launcher(function, args=(), num_processes=2):\n \"\"\"\n Launches a training function using several processes on CPU for debugging purposes.\n\n <Tip warning={true}>\n\n This function is provided for internal testing and debugging, but it's not intended for real trainings. It will\n only use the CPU.\n\n </Tip>\n\n Args:\n function (`Callable`):\n The training function to execute.\n args (`Tuple`):\n Tuple of arguments to pass to the function (it will receive `*args`).\n num_processes (`int`, *optional*, defaults to 2):\n The number of processes to use for training.\n \"\"\"\n from torch.multiprocessing import start_processes\n\n with tempfile.NamedTemporaryFile() as tmp_file:\n # torch.distributed will expect a few environment variable to be here. 
We set the ones common to each\n # process here (the other ones will be set be the launcher).\n with patch_environment(\n world_size=num_processes,\n master_addr=\"127.0.01\",\n master_port=\"29500\",\n mixed_precision=\"no\",\n accelerate_debug_rdv_file=tmp_file.name,\n use_cpu=\"yes\",\n ):\n launcher = PrepareForLaunch(function, debug=True)\n start_processes(launcher, args=args, nprocs=num_processes, start_method=\"fork\")\n", "path": "src/accelerate/launchers.py"}]} | 2,947 | 239 |
gh_patches_debug_8885 | rasdani/github-patches | git_diff | certbot__certbot-5861 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[1] Standardize how you’ll run mypy
From http://blog.zulip.org/2016/10/13/static-types-in-python-oh-mypy/:
Write tooling to [install](https://github.com/zulip/zulip/blob/master/tools/install-mypy) and [run](https://github.com/zulip/zulip/blob/master/tools/run-mypy) `mypy` against your codebase, so that everyone using the project can run the type checker the same way. Two features are important in how you run mypy:
- Support for determining which files should be checked (a whitelist/exclude list is useful!).
- Specifying the correct flags for your project at this time. For a Python 2 project, I recommend starting with `mypy --py2 --silent-imports --fast-parser -i <paths>`. You should be able to do this using a [mypy.ini](http://mypy.readthedocs.io/en/latest/config_file.html) file.
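A minimal sketch of the kind of run-mypy wrapper the quoted post describes, assuming a simple package whitelist; the file name, package list and flag set are illustrative only (older mypy releases used slightly different flag names than current ones):

```python
#!/usr/bin/env python
# hypothetical tools/mypy wrapper; packages and flags are placeholders
import subprocess
import sys

PACKAGES = ["certbot", "acme"]  # whitelist of packages to type-check
FLAGS = ["--py2", "--ignore-missing-imports", "--incremental"]

sys.exit(subprocess.call(["mypy"] + FLAGS + PACKAGES))
```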
</issue>
<code>
[start of setup.py]
1 import codecs
2 import os
3 import re
4 import sys
5
6 from setuptools import setup
7 from setuptools import find_packages
8
9 # Workaround for http://bugs.python.org/issue8876, see
10 # http://bugs.python.org/issue8876#msg208792
11 # This can be removed when using Python 2.7.9 or later:
12 # https://hg.python.org/cpython/raw-file/v2.7.9/Misc/NEWS
13 if os.path.abspath(__file__).split(os.path.sep)[1] == 'vagrant':
14 del os.link
15
16
17 def read_file(filename, encoding='utf8'):
18 """Read unicode from given file."""
19 with codecs.open(filename, encoding=encoding) as fd:
20 return fd.read()
21
22
23 here = os.path.abspath(os.path.dirname(__file__))
24
25 # read version number (and other metadata) from package init
26 init_fn = os.path.join(here, 'certbot', '__init__.py')
27 meta = dict(re.findall(r"""__([a-z]+)__ = '([^']+)""", read_file(init_fn)))
28
29 readme = read_file(os.path.join(here, 'README.rst'))
30 changes = read_file(os.path.join(here, 'CHANGES.rst'))
31 version = meta['version']
32
33 # This package relies on PyOpenSSL, requests, and six, however, it isn't
34 # specified here to avoid masking the more specific request requirements in
35 # acme. See https://github.com/pypa/pip/issues/988 for more info.
36 install_requires = [
37 'acme>=0.22.1',
38 # We technically need ConfigArgParse 0.10.0 for Python 2.6 support, but
39 # saying so here causes a runtime error against our temporary fork of 0.9.3
40 # in which we added 2.6 support (see #2243), so we relax the requirement.
41 'ConfigArgParse>=0.9.3',
42 'configobj',
43 'cryptography>=1.2', # load_pem_x509_certificate
44 'josepy',
45 'mock',
46 'parsedatetime>=1.3', # Calendar.parseDT
47 'pyrfc3339',
48 'pytz',
49 'setuptools',
50 'zope.component',
51 'zope.interface',
52 ]
53
54 dev_extras = [
55 # Pin astroid==1.3.5, pylint==1.4.2 as a workaround for #289
56 'astroid==1.3.5',
57 'coverage',
58 'ipdb',
59 'pytest',
60 'pytest-cov',
61 'pytest-xdist',
62 'pylint==1.4.2', # upstream #248
63 'tox',
64 'twine',
65 'wheel',
66 ]
67
68 docs_extras = [
69 'repoze.sphinx.autointerface',
70 # autodoc_member_order = 'bysource', autodoc_default_flags, and #4686
71 'Sphinx >=1.0,<=1.5.6',
72 'sphinx_rtd_theme',
73 ]
74
75 setup(
76 name='certbot',
77 version=version,
78 description="ACME client",
79 long_description=readme, # later: + '\n\n' + changes
80 url='https://github.com/letsencrypt/letsencrypt',
81 author="Certbot Project",
82 author_email='[email protected]',
83 license='Apache License 2.0',
84 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
85 classifiers=[
86 'Development Status :: 3 - Alpha',
87 'Environment :: Console',
88 'Environment :: Console :: Curses',
89 'Intended Audience :: System Administrators',
90 'License :: OSI Approved :: Apache Software License',
91 'Operating System :: POSIX :: Linux',
92 'Programming Language :: Python',
93 'Programming Language :: Python :: 2',
94 'Programming Language :: Python :: 2.7',
95 'Programming Language :: Python :: 3',
96 'Programming Language :: Python :: 3.4',
97 'Programming Language :: Python :: 3.5',
98 'Programming Language :: Python :: 3.6',
99 'Topic :: Internet :: WWW/HTTP',
100 'Topic :: Security',
101 'Topic :: System :: Installation/Setup',
102 'Topic :: System :: Networking',
103 'Topic :: System :: Systems Administration',
104 'Topic :: Utilities',
105 ],
106
107 packages=find_packages(exclude=['docs', 'examples', 'tests', 'venv']),
108 include_package_data=True,
109
110 install_requires=install_requires,
111 extras_require={
112 'dev': dev_extras,
113 'docs': docs_extras,
114 },
115
116 # to test all packages run "python setup.py test -s
117 # {acme,certbot_apache,certbot_nginx}"
118 test_suite='certbot',
119
120 entry_points={
121 'console_scripts': [
122 'certbot = certbot.main:main',
123 ],
124 'certbot.plugins': [
125 'manual = certbot.plugins.manual:Authenticator',
126 'null = certbot.plugins.null:Installer',
127 'standalone = certbot.plugins.standalone:Authenticator',
128 'webroot = certbot.plugins.webroot:Authenticator',
129 ],
130 },
131 )
132
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -65,6 +65,10 @@
'wheel',
]
+dev3_extras = [
+ 'mypy',
+]
+
docs_extras = [
'repoze.sphinx.autointerface',
# autodoc_member_order = 'bysource', autodoc_default_flags, and #4686
@@ -110,6 +114,7 @@
install_requires=install_requires,
extras_require={
'dev': dev_extras,
+ 'dev3': dev3_extras,
'docs': docs_extras,
},
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -65,6 +65,10 @@\n 'wheel',\n ]\n \n+dev3_extras = [\n+ 'mypy',\n+]\n+\n docs_extras = [\n 'repoze.sphinx.autointerface',\n # autodoc_member_order = 'bysource', autodoc_default_flags, and #4686\n@@ -110,6 +114,7 @@\n install_requires=install_requires,\n extras_require={\n 'dev': dev_extras,\n+ 'dev3': dev3_extras,\n 'docs': docs_extras,\n },\n", "issue": "[1] Standardize how you\u2019ll run mypy\nFrom http://blog.zulip.org/2016/10/13/static-types-in-python-oh-mypy/:\r\n\r\nWrite tooling to [install](https://github.com/zulip/zulip/blob/master/tools/install-mypy) and [run](https://github.com/zulip/zulip/blob/master/tools/run-mypy) `mypy` against your codebase, so that everyone using the project can run the type checker the same way. Two features are important in how you run mypy:\r\n\r\n- Support for determining which files should be checked (a whitelist/exclude list is useful!).\r\n- Specifying the correct flags for your project at this time. For a Python 2 project, I recommend starting with `mypy --py2 --silent-imports --fast-parser -i <paths>`. You should be able to do this using a [mypy.ini](http://mypy.readthedocs.io/en/latest/config_file.html) file.\n", "before_files": [{"content": "import codecs\nimport os\nimport re\nimport sys\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\n# Workaround for http://bugs.python.org/issue8876, see\n# http://bugs.python.org/issue8876#msg208792\n# This can be removed when using Python 2.7.9 or later:\n# https://hg.python.org/cpython/raw-file/v2.7.9/Misc/NEWS\nif os.path.abspath(__file__).split(os.path.sep)[1] == 'vagrant':\n del os.link\n\n\ndef read_file(filename, encoding='utf8'):\n \"\"\"Read unicode from given file.\"\"\"\n with codecs.open(filename, encoding=encoding) as fd:\n return fd.read()\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n# read version number (and other metadata) from package init\ninit_fn = os.path.join(here, 'certbot', '__init__.py')\nmeta = dict(re.findall(r\"\"\"__([a-z]+)__ = '([^']+)\"\"\", read_file(init_fn)))\n\nreadme = read_file(os.path.join(here, 'README.rst'))\nchanges = read_file(os.path.join(here, 'CHANGES.rst'))\nversion = meta['version']\n\n# This package relies on PyOpenSSL, requests, and six, however, it isn't\n# specified here to avoid masking the more specific request requirements in\n# acme. 
See https://github.com/pypa/pip/issues/988 for more info.\ninstall_requires = [\n 'acme>=0.22.1',\n # We technically need ConfigArgParse 0.10.0 for Python 2.6 support, but\n # saying so here causes a runtime error against our temporary fork of 0.9.3\n # in which we added 2.6 support (see #2243), so we relax the requirement.\n 'ConfigArgParse>=0.9.3',\n 'configobj',\n 'cryptography>=1.2', # load_pem_x509_certificate\n 'josepy',\n 'mock',\n 'parsedatetime>=1.3', # Calendar.parseDT\n 'pyrfc3339',\n 'pytz',\n 'setuptools',\n 'zope.component',\n 'zope.interface',\n]\n\ndev_extras = [\n # Pin astroid==1.3.5, pylint==1.4.2 as a workaround for #289\n 'astroid==1.3.5',\n 'coverage',\n 'ipdb',\n 'pytest',\n 'pytest-cov',\n 'pytest-xdist',\n 'pylint==1.4.2', # upstream #248\n 'tox',\n 'twine',\n 'wheel',\n]\n\ndocs_extras = [\n 'repoze.sphinx.autointerface',\n # autodoc_member_order = 'bysource', autodoc_default_flags, and #4686\n 'Sphinx >=1.0,<=1.5.6',\n 'sphinx_rtd_theme',\n]\n\nsetup(\n name='certbot',\n version=version,\n description=\"ACME client\",\n long_description=readme, # later: + '\\n\\n' + changes\n url='https://github.com/letsencrypt/letsencrypt',\n author=\"Certbot Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Environment :: Console',\n 'Environment :: Console :: Curses',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n 'Topic :: System :: Installation/Setup',\n 'Topic :: System :: Networking',\n 'Topic :: System :: Systems Administration',\n 'Topic :: Utilities',\n ],\n\n packages=find_packages(exclude=['docs', 'examples', 'tests', 'venv']),\n include_package_data=True,\n\n install_requires=install_requires,\n extras_require={\n 'dev': dev_extras,\n 'docs': docs_extras,\n },\n\n # to test all packages run \"python setup.py test -s\n # {acme,certbot_apache,certbot_nginx}\"\n test_suite='certbot',\n\n entry_points={\n 'console_scripts': [\n 'certbot = certbot.main:main',\n ],\n 'certbot.plugins': [\n 'manual = certbot.plugins.manual:Authenticator',\n 'null = certbot.plugins.null:Installer',\n 'standalone = certbot.plugins.standalone:Authenticator',\n 'webroot = certbot.plugins.webroot:Authenticator',\n ],\n },\n)\n", "path": "setup.py"}]} | 2,190 | 151 |
gh_patches_debug_42707 | rasdani/github-patches | git_diff | secondmind-labs__trieste-441 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Active learning acquisition functions are missing integration tests
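A rough sketch of what such an integration test could look like, reusing the same objective, model and acquisition setup as the notebook in this record; the test name and the exact assertion are assumptions, not an existing test in the repository:

```python
import gpflow
import trieste
from trieste.acquisition.function import PredictiveVariance
from trieste.acquisition.rule import EfficientGlobalOptimization
from trieste.models.gpflow.models import GaussianProcessRegression
from trieste.objectives import scaled_branin
from trieste.objectives.utils import mk_observer
from trieste.space import Box


def test_predictive_variance_runs_end_to_end():
    search_space = Box([0, 0], [1, 1])
    observer = mk_observer(scaled_branin)
    initial_data = observer(search_space.sample_halton(4))

    kernel = gpflow.kernels.Matern52(lengthscales=[0.2, 0.2])
    gpr = gpflow.models.GPR(initial_data.astuple(), kernel, noise_variance=1e-5)
    model = GaussianProcessRegression(gpr)

    rule = EfficientGlobalOptimization(builder=PredictiveVariance())
    bo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space)
    result = bo.optimize(5, initial_data, model, rule)

    # one query point is added per step with the default batch size
    dataset = result.try_get_final_dataset()
    assert dataset.query_points.shape[0] == 4 + 5
```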
</issue>
<code>
[start of docs/notebooks/active_learning.pct.py]
1 # %% [markdown]
2 # # Active Learning
3
4 # %% [markdown]
5 # Sometimes, we may just want to learn a black-box function, rather than optimizing it. This goal is known as active learning and corresponds to choosing query points that reduce our model uncertainty. This notebook demonstrates how to perform Bayesian active learning using Trieste.
6
7 # %%
8 # %matplotlib inline
9 import numpy as np
10 import tensorflow as tf
11
12 np.random.seed(1793)
13 tf.random.set_seed(1793)
14
15 # %% [markdown]
16 # ## Describe the problem
17 #
18 # In this example, we will perform active learning for the scaled Branin function.
19
20
21 # %%
22 from trieste.objectives import scaled_branin
23 from util.plotting_plotly import plot_function_plotly
24 from trieste.space import Box
25
26 search_space = Box([0, 0], [1, 1])
27
28 fig = plot_function_plotly(
29 scaled_branin, search_space.lower, search_space.upper, grid_density=20
30 )
31 fig.update_layout(height=400, width=400)
32 fig.show()
33
34 # %% [markdown]
35 # We begin our Bayesian active learning from a two-point initial design built from a space-filling Halton sequence.
36
37 # %%
38 import trieste
39
40 observer = trieste.objectives.utils.mk_observer(scaled_branin)
41
42 num_initial_points = 4
43 initial_query_points = search_space.sample_halton(num_initial_points)
44 initial_data = observer(initial_query_points)
45
46
47 # %% [markdown]
48 # ## Surrogate model
49 #
50 # Just like in sequential optimization, we fit a surrogate Gaussian process model as implemented in GPflow to the initial data. The GPflow models cannot be used directly in our Bayesian optimization routines, so we build a GPflow's `GPR` model and pass it to the `GaussianProcessRegression` wrapper.
51
52 # %%
53 import gpflow
54 from trieste.models.gpflow.models import GaussianProcessRegression
55
56
57 def build_model(data):
58 variance = tf.math.reduce_variance(data.observations)
59 kernel = gpflow.kernels.RBF(variance=variance, lengthscales=[2, 2])
60 gpr = gpflow.models.GPR(data.astuple(), kernel, noise_variance=1e-5)
61 gpflow.set_trainable(gpr.likelihood, False)
62
63 return GaussianProcessRegression(gpr)
64
65
66 model = build_model(initial_data)
67
68 # %% [markdown]
69 # ## Active learning using predictive variance
70 #
71 # For our first active learning example, we will use a simple acquisition function known as `PredictiveVariance` which chooses points for which we are highly uncertain (i.e. the predictive posterior covariance matrix at these points has large determinant), as discussed in <cite data-cite="MacKay1992"/>. Note that this also implies that our model needs to have `predict_joint` method to be able to return the full covariance, and it's likely to be expensive to compute.
72 #
73 # We will now demonstrate how to choose individual query points using `PredictiveVariance` before moving onto batch active learning. For both cases, we can utilize Trieste's `BayesianOptimizer` to do the active learning steps.
74 #
75
76 # %%
77 from trieste.acquisition.function import PredictiveVariance
78 from trieste.acquisition.optimizer import generate_continuous_optimizer
79 from trieste.acquisition.rule import EfficientGlobalOptimization
80
81 acq = PredictiveVariance()
82 rule = EfficientGlobalOptimization(
83 builder=acq, optimizer=generate_continuous_optimizer()
84 )
85 bo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space)
86
87 # %% [markdown]
88 # To plot the contour of variance of our model at each step, we can set the `track_state` parameter to `True` in `bo.optimize()`, this will make Trieste record our model at each iteration.
89
90 # %%
91 bo_iter = 5
92 result = bo.optimize(bo_iter, initial_data, model, rule, track_state=True)
93
94 # %% [markdown]
95 # Then we can retrieve our final dataset from the active learning steps.
96
97 # %%
98 dataset = result.try_get_final_dataset()
99 query_points = dataset.query_points.numpy()
100 observations = dataset.observations.numpy()
101
102 # %% [markdown]
103 # Finally, we can check the performance of our `PredictiveVariance` active learning acquisition function by plotting the predictive variance landscape of our model. We can see how it samples regions for which our model is highly uncertain.
104
105 # %%
106 from util.plotting import plot_bo_points, plot_function_2d
107
108
109 def plot_active_learning_query(
110 result, bo_iter, num_initial_points, query_points, num_query=1
111 ):
112
113 for i in range(bo_iter):
114
115 def pred_var(x):
116 _, var = result.history[i].models["OBJECTIVE"].model.predict_f(x)
117 return var
118
119 _, ax = plot_function_2d(
120 pred_var,
121 search_space.lower - 0.01,
122 search_space.upper + 0.01,
123 grid_density=100,
124 contour=True,
125 colorbar=True,
126 figsize=(10, 6),
127 title=[
128 "Variance contour with queried points at iter:" + str(i + 1)
129 ],
130 xlabel="$X_1$",
131 ylabel="$X_2$",
132 )
133
134 plot_bo_points(
135 query_points[: num_initial_points + (i * num_query)],
136 ax[0, 0],
137 num_initial_points,
138 )
139
140
141 plot_active_learning_query(result, bo_iter, num_initial_points, query_points)
142
143
144 # %% [markdown]
145 # ## Batch active learning using predictive variance
146 #
147 # For some cases, query several points at a time can be convenient by doing batch active learning. For this case, we must pass a num_query_points input to our `EfficientGlobalOptimization` rule. The drawback of the batch predictive variance is, it tends to query in high variance area less accurately, compared to the sequentially drawing one point at a time.
148
149 # %%
150 bo_iter = 5
151 num_query = 3
152 model = build_model(initial_data)
153 acq = PredictiveVariance()
154 rule = EfficientGlobalOptimization(
155 num_query_points=num_query,
156 builder=acq,
157 optimizer=generate_continuous_optimizer(num_optimization_runs=1),
158 )
159 bo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space)
160
161 result = bo.optimize(bo_iter, initial_data, model, rule, track_state=True)
162
163
164 # %% [markdown]
165 # After that, we can retrieve our final dataset.
166
167 # %%
168 dataset = result.try_get_final_dataset()
169 query_points = dataset.query_points.numpy()
170 observations = dataset.observations.numpy()
171
172
173 # %% [markdown]
174 # Now we can visualize the batch predictive variance using our plotting function.
175
176 # %%
177 from util.plotting import plot_bo_points, plot_function_2d
178
179 plot_active_learning_query(
180 result, bo_iter, num_initial_points, query_points, num_query
181 )
182
183 # %% [markdown]
184 # ## LICENSE
185 #
186 # [Apache License 2.0](https://github.com/secondmind-labs/trieste/blob/develop/LICENSE)
187
[end of docs/notebooks/active_learning.pct.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/notebooks/active_learning.pct.py b/docs/notebooks/active_learning.pct.py
--- a/docs/notebooks/active_learning.pct.py
+++ b/docs/notebooks/active_learning.pct.py
@@ -19,11 +19,11 @@
# %%
-from trieste.objectives import scaled_branin
+from trieste.objectives import BRANIN_SEARCH_SPACE, scaled_branin
from util.plotting_plotly import plot_function_plotly
from trieste.space import Box
-search_space = Box([0, 0], [1, 1])
+search_space = BRANIN_SEARCH_SPACE
fig = plot_function_plotly(
scaled_branin, search_space.lower, search_space.upper, grid_density=20
@@ -32,7 +32,7 @@
fig.show()
# %% [markdown]
-# We begin our Bayesian active learning from a two-point initial design built from a space-filling Halton sequence.
+# We begin our Bayesian active learning from a small initial design built from a space-filling Halton sequence.
# %%
import trieste
@@ -47,16 +47,24 @@
# %% [markdown]
# ## Surrogate model
#
-# Just like in sequential optimization, we fit a surrogate Gaussian process model as implemented in GPflow to the initial data. The GPflow models cannot be used directly in our Bayesian optimization routines, so we build a GPflow's `GPR` model and pass it to the `GaussianProcessRegression` wrapper.
+# Just like in sequential optimization, we fit a surrogate Gaussian process model as implemented in GPflow to the initial data. The GPflow models cannot be used directly in our Bayesian optimization routines, so we build a GPflow's `GPR` model and pass it to the `GaussianProcessRegression` wrapper. As a good practice, we use priors for the kernel hyperparameters.
# %%
import gpflow
from trieste.models.gpflow.models import GaussianProcessRegression
+import tensorflow_probability as tfp
def build_model(data):
variance = tf.math.reduce_variance(data.observations)
- kernel = gpflow.kernels.RBF(variance=variance, lengthscales=[2, 2])
+ kernel = gpflow.kernels.Matern52(variance=variance, lengthscales=[0.2, 0.2])
+ prior_scale = tf.cast(1.0, dtype=tf.float64)
+ kernel.variance.prior = tfp.distributions.LogNormal(
+ tf.cast(-2.0, dtype=tf.float64), prior_scale
+ )
+ kernel.lengthscales.prior = tfp.distributions.LogNormal(
+ tf.math.log(kernel.lengthscales), prior_scale
+ )
gpr = gpflow.models.GPR(data.astuple(), kernel, noise_variance=1e-5)
gpflow.set_trainable(gpr.likelihood, False)
@@ -79,9 +87,7 @@
from trieste.acquisition.rule import EfficientGlobalOptimization
acq = PredictiveVariance()
-rule = EfficientGlobalOptimization(
- builder=acq, optimizer=generate_continuous_optimizer()
-)
+rule = EfficientGlobalOptimization(builder=acq) # type: ignore
bo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space)
# %% [markdown]
@@ -144,12 +150,14 @@
# %% [markdown]
# ## Batch active learning using predictive variance
#
-# For some cases, query several points at a time can be convenient by doing batch active learning. For this case, we must pass a num_query_points input to our `EfficientGlobalOptimization` rule. The drawback of the batch predictive variance is, it tends to query in high variance area less accurately, compared to the sequentially drawing one point at a time.
+# In cases when we can evaluate the black-box function in parallel, it would be useful to produce a batch of points rather than a single point. `PredictiveVariance` acquisition function can also perform batch active learning. We must pass a `num_query_points` input to our `EfficientGlobalOptimization` rule. The drawback of the batch predictive variance is that it tends to query in high variance area less accurately, compared to sequentially drawing one point at a time.
# %%
bo_iter = 5
num_query = 3
+
model = build_model(initial_data)
+
acq = PredictiveVariance()
rule = EfficientGlobalOptimization(
num_query_points=num_query,
| {"golden_diff": "diff --git a/docs/notebooks/active_learning.pct.py b/docs/notebooks/active_learning.pct.py\n--- a/docs/notebooks/active_learning.pct.py\n+++ b/docs/notebooks/active_learning.pct.py\n@@ -19,11 +19,11 @@\n \n \n # %%\n-from trieste.objectives import scaled_branin\n+from trieste.objectives import BRANIN_SEARCH_SPACE, scaled_branin\n from util.plotting_plotly import plot_function_plotly\n from trieste.space import Box\n \n-search_space = Box([0, 0], [1, 1])\n+search_space = BRANIN_SEARCH_SPACE\n \n fig = plot_function_plotly(\n scaled_branin, search_space.lower, search_space.upper, grid_density=20\n@@ -32,7 +32,7 @@\n fig.show()\n \n # %% [markdown]\n-# We begin our Bayesian active learning from a two-point initial design built from a space-filling Halton sequence.\n+# We begin our Bayesian active learning from a small initial design built from a space-filling Halton sequence.\n \n # %%\n import trieste\n@@ -47,16 +47,24 @@\n # %% [markdown]\n # ## Surrogate model\n #\n-# Just like in sequential optimization, we fit a surrogate Gaussian process model as implemented in GPflow to the initial data. The GPflow models cannot be used directly in our Bayesian optimization routines, so we build a GPflow's `GPR` model and pass it to the `GaussianProcessRegression` wrapper.\n+# Just like in sequential optimization, we fit a surrogate Gaussian process model as implemented in GPflow to the initial data. The GPflow models cannot be used directly in our Bayesian optimization routines, so we build a GPflow's `GPR` model and pass it to the `GaussianProcessRegression` wrapper. As a good practice, we use priors for the kernel hyperparameters.\n \n # %%\n import gpflow\n from trieste.models.gpflow.models import GaussianProcessRegression\n+import tensorflow_probability as tfp\n \n \n def build_model(data):\n variance = tf.math.reduce_variance(data.observations)\n- kernel = gpflow.kernels.RBF(variance=variance, lengthscales=[2, 2])\n+ kernel = gpflow.kernels.Matern52(variance=variance, lengthscales=[0.2, 0.2])\n+ prior_scale = tf.cast(1.0, dtype=tf.float64)\n+ kernel.variance.prior = tfp.distributions.LogNormal(\n+ tf.cast(-2.0, dtype=tf.float64), prior_scale\n+ )\n+ kernel.lengthscales.prior = tfp.distributions.LogNormal(\n+ tf.math.log(kernel.lengthscales), prior_scale\n+ )\n gpr = gpflow.models.GPR(data.astuple(), kernel, noise_variance=1e-5)\n gpflow.set_trainable(gpr.likelihood, False)\n \n@@ -79,9 +87,7 @@\n from trieste.acquisition.rule import EfficientGlobalOptimization\n \n acq = PredictiveVariance()\n-rule = EfficientGlobalOptimization(\n- builder=acq, optimizer=generate_continuous_optimizer()\n-)\n+rule = EfficientGlobalOptimization(builder=acq) # type: ignore\n bo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space)\n \n # %% [markdown]\n@@ -144,12 +150,14 @@\n # %% [markdown]\n # ## Batch active learning using predictive variance\n #\n-# For some cases, query several points at a time can be convenient by doing batch active learning. For this case, we must pass a num_query_points input to our `EfficientGlobalOptimization` rule. The drawback of the batch predictive variance is, it tends to query in high variance area less accurately, compared to the sequentially drawing one point at a time.\n+# In cases when we can evaluate the black-box function in parallel, it would be useful to produce a batch of points rather than a single point. `PredictiveVariance` acquisition function can also perform batch active learning. 
We must pass a `num_query_points` input to our `EfficientGlobalOptimization` rule. The drawback of the batch predictive variance is that it tends to query in high variance area less accurately, compared to sequentially drawing one point at a time.\n \n # %%\n bo_iter = 5\n num_query = 3\n+\n model = build_model(initial_data)\n+\n acq = PredictiveVariance()\n rule = EfficientGlobalOptimization(\n num_query_points=num_query,\n", "issue": "Active learning acquisition functions are missing integration tests\n\n", "before_files": [{"content": "# %% [markdown]\n# # Active Learning\n\n# %% [markdown]\n# Sometimes, we may just want to learn a black-box function, rather than optimizing it. This goal is known as active learning and corresponds to choosing query points that reduce our model uncertainty. This notebook demonstrates how to perform Bayesian active learning using Trieste.\n\n# %%\n# %matplotlib inline\nimport numpy as np\nimport tensorflow as tf\n\nnp.random.seed(1793)\ntf.random.set_seed(1793)\n\n# %% [markdown]\n# ## Describe the problem\n#\n# In this example, we will perform active learning for the scaled Branin function.\n\n\n# %%\nfrom trieste.objectives import scaled_branin\nfrom util.plotting_plotly import plot_function_plotly\nfrom trieste.space import Box\n\nsearch_space = Box([0, 0], [1, 1])\n\nfig = plot_function_plotly(\n scaled_branin, search_space.lower, search_space.upper, grid_density=20\n)\nfig.update_layout(height=400, width=400)\nfig.show()\n\n# %% [markdown]\n# We begin our Bayesian active learning from a two-point initial design built from a space-filling Halton sequence.\n\n# %%\nimport trieste\n\nobserver = trieste.objectives.utils.mk_observer(scaled_branin)\n\nnum_initial_points = 4\ninitial_query_points = search_space.sample_halton(num_initial_points)\ninitial_data = observer(initial_query_points)\n\n\n# %% [markdown]\n# ## Surrogate model\n#\n# Just like in sequential optimization, we fit a surrogate Gaussian process model as implemented in GPflow to the initial data. The GPflow models cannot be used directly in our Bayesian optimization routines, so we build a GPflow's `GPR` model and pass it to the `GaussianProcessRegression` wrapper.\n\n# %%\nimport gpflow\nfrom trieste.models.gpflow.models import GaussianProcessRegression\n\n\ndef build_model(data):\n variance = tf.math.reduce_variance(data.observations)\n kernel = gpflow.kernels.RBF(variance=variance, lengthscales=[2, 2])\n gpr = gpflow.models.GPR(data.astuple(), kernel, noise_variance=1e-5)\n gpflow.set_trainable(gpr.likelihood, False)\n\n return GaussianProcessRegression(gpr)\n\n\nmodel = build_model(initial_data)\n\n# %% [markdown]\n# ## Active learning using predictive variance\n#\n# For our first active learning example, we will use a simple acquisition function known as `PredictiveVariance` which chooses points for which we are highly uncertain (i.e. the predictive posterior covariance matrix at these points has large determinant), as discussed in <cite data-cite=\"MacKay1992\"/>. Note that this also implies that our model needs to have `predict_joint` method to be able to return the full covariance, and it's likely to be expensive to compute.\n#\n# We will now demonstrate how to choose individual query points using `PredictiveVariance` before moving onto batch active learning. 
For both cases, we can utilize Trieste's `BayesianOptimizer` to do the active learning steps.\n#\n\n# %%\nfrom trieste.acquisition.function import PredictiveVariance\nfrom trieste.acquisition.optimizer import generate_continuous_optimizer\nfrom trieste.acquisition.rule import EfficientGlobalOptimization\n\nacq = PredictiveVariance()\nrule = EfficientGlobalOptimization(\n builder=acq, optimizer=generate_continuous_optimizer()\n)\nbo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space)\n\n# %% [markdown]\n# To plot the contour of variance of our model at each step, we can set the `track_state` parameter to `True` in `bo.optimize()`, this will make Trieste record our model at each iteration.\n\n# %%\nbo_iter = 5\nresult = bo.optimize(bo_iter, initial_data, model, rule, track_state=True)\n\n# %% [markdown]\n# Then we can retrieve our final dataset from the active learning steps.\n\n# %%\ndataset = result.try_get_final_dataset()\nquery_points = dataset.query_points.numpy()\nobservations = dataset.observations.numpy()\n\n# %% [markdown]\n# Finally, we can check the performance of our `PredictiveVariance` active learning acquisition function by plotting the predictive variance landscape of our model. We can see how it samples regions for which our model is highly uncertain.\n\n# %%\nfrom util.plotting import plot_bo_points, plot_function_2d\n\n\ndef plot_active_learning_query(\n result, bo_iter, num_initial_points, query_points, num_query=1\n):\n\n for i in range(bo_iter):\n\n def pred_var(x):\n _, var = result.history[i].models[\"OBJECTIVE\"].model.predict_f(x)\n return var\n\n _, ax = plot_function_2d(\n pred_var,\n search_space.lower - 0.01,\n search_space.upper + 0.01,\n grid_density=100,\n contour=True,\n colorbar=True,\n figsize=(10, 6),\n title=[\n \"Variance contour with queried points at iter:\" + str(i + 1)\n ],\n xlabel=\"$X_1$\",\n ylabel=\"$X_2$\",\n )\n\n plot_bo_points(\n query_points[: num_initial_points + (i * num_query)],\n ax[0, 0],\n num_initial_points,\n )\n\n\nplot_active_learning_query(result, bo_iter, num_initial_points, query_points)\n\n\n# %% [markdown]\n# ## Batch active learning using predictive variance\n#\n# For some cases, query several points at a time can be convenient by doing batch active learning. For this case, we must pass a num_query_points input to our `EfficientGlobalOptimization` rule. 
The drawback of the batch predictive variance is, it tends to query in high variance area less accurately, compared to the sequentially drawing one point at a time.\n\n# %%\nbo_iter = 5\nnum_query = 3\nmodel = build_model(initial_data)\nacq = PredictiveVariance()\nrule = EfficientGlobalOptimization(\n num_query_points=num_query,\n builder=acq,\n optimizer=generate_continuous_optimizer(num_optimization_runs=1),\n)\nbo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space)\n\nresult = bo.optimize(bo_iter, initial_data, model, rule, track_state=True)\n\n\n# %% [markdown]\n# After that, we can retrieve our final dataset.\n\n# %%\ndataset = result.try_get_final_dataset()\nquery_points = dataset.query_points.numpy()\nobservations = dataset.observations.numpy()\n\n\n# %% [markdown]\n# Now we can visualize the batch predictive variance using our plotting function.\n\n# %%\nfrom util.plotting import plot_bo_points, plot_function_2d\n\nplot_active_learning_query(\n result, bo_iter, num_initial_points, query_points, num_query\n)\n\n# %% [markdown]\n# ## LICENSE\n#\n# [Apache License 2.0](https://github.com/secondmind-labs/trieste/blob/develop/LICENSE)\n", "path": "docs/notebooks/active_learning.pct.py"}]} | 2,513 | 977 |
gh_patches_debug_14621 | rasdani/github-patches | git_diff | talonhub__community-378 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
the command key should probably be made Mac specific
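One possible shape for the fix, sketched below: gate the Mac-only names on talon's `app.platform`, the same way the menu key is already gated later in this file. The exact split of names is a judgement call, not settled API:

```python
# sketch only: build the modifier list per platform
modifier_keys = {
    "alt": "alt",
    "control": "ctrl",
    "shift": "shift",
    "super": "super",
}
if app.platform == "mac":
    # 'command' and 'option' only exist on Apple keyboards
    modifier_keys["command"] = "cmd"
    modifier_keys["option"] = "alt"
ctx.lists["self.modifier_key"] = modifier_keys
```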
</issue>
<code>
[start of code/keys.py]
1 from typing import Set
2
3 from talon import Module, Context, actions, app
4 import sys
5
6 default_alphabet = "air bat cap drum each fine gust harp sit jury crunch look made near odd pit quench red sun trap urge vest whale plex yank zip".split(
7 " "
8 )
9 letters_string = "abcdefghijklmnopqrstuvwxyz"
10
11 default_digits = "zero one two three four five six seven eight nine".split(" ")
12 numbers = [str(i) for i in range(10)]
13 default_f_digits = "one two three four five six seven eight nine ten eleven twelve".split(
14 " "
15 )
16
17 mod = Module()
18 mod.list("letter", desc="The spoken phonetic alphabet")
19 mod.list("symbol_key", desc="All symbols from the keyboard")
20 mod.list("arrow_key", desc="All arrow keys")
21 mod.list("number_key", desc="All number keys")
22 mod.list("modifier_key", desc="All modifier keys")
23 mod.list("function_key", desc="All function keys")
24 mod.list("special_key", desc="All special keys")
25 mod.list("punctuation", desc="words for inserting punctuation into text")
26
27
28 @mod.capture(rule="{self.modifier_key}+")
29 def modifiers(m) -> str:
30 "One or more modifier keys"
31 return "-".join(m.modifier_key_list)
32
33
34 @mod.capture(rule="{self.arrow_key}")
35 def arrow_key(m) -> str:
36 "One directional arrow key"
37 return m.arrow_key
38
39
40 @mod.capture(rule="<self.arrow_key>+")
41 def arrow_keys(m) -> str:
42 "One or more arrow keys separated by a space"
43 return str(m)
44
45
46 @mod.capture(rule="{self.number_key}")
47 def number_key(m) -> str:
48 "One number key"
49 return m.number_key
50
51
52 @mod.capture(rule="{self.letter}")
53 def letter(m) -> str:
54 "One letter key"
55 return m.letter
56
57
58 @mod.capture(rule="{self.special_key}")
59 def special_key(m) -> str:
60 "One special key"
61 return m.special_key
62
63
64 @mod.capture(rule="{self.symbol_key}")
65 def symbol_key(m) -> str:
66 "One symbol key"
67 return m.symbol_key
68
69
70 @mod.capture(rule="{self.function_key}")
71 def function_key(m) -> str:
72 "One function key"
73 return m.function_key
74
75
76 @mod.capture(rule="( <self.letter> | <self.number_key> | <self.symbol_key> )")
77 def any_alphanumeric_key(m) -> str:
78 "any alphanumeric key"
79 return str(m)
80
81
82 @mod.capture(
83 rule="( <self.letter> | <self.number_key> | <self.symbol_key> "
84 "| <self.arrow_key> | <self.function_key> | <self.special_key> )"
85 )
86 def unmodified_key(m) -> str:
87 "A single key with no modifiers"
88 return str(m)
89
90
91 @mod.capture(rule="{self.modifier_key}* <self.unmodified_key>")
92 def key(m) -> str:
93 "A single key with optional modifiers"
94 try:
95 mods = m.modifier_key_list
96 except AttributeError:
97 mods = []
98 return "-".join(mods + [m.unmodified_key])
99
100
101 @mod.capture(rule="<self.key>+")
102 def keys(m) -> str:
103 "A sequence of one or more keys with optional modifiers"
104 return " ".join(m.key_list)
105
106
107 @mod.capture(rule="{self.letter}+")
108 def letters(m) -> str:
109 "Multiple letter keys"
110 return "".join(m.letter_list)
111
112
113 ctx = Context()
114 ctx.lists["self.modifier_key"] = {
115 # If you find 'alt' is often misrecognized, try using 'alter'.
116 "alt": "alt", #'alter': 'alt',
117 "command": "cmd",
118 "control": "ctrl", #'troll': 'ctrl',
119 "option": "alt",
120 "shift": "shift", #'sky': 'shift',
121 "super": "super",
122 }
123 alphabet = dict(zip(default_alphabet, letters_string))
124 ctx.lists["self.letter"] = alphabet
125
126 # `punctuation_words` is for words you want available BOTH in dictation and as
127 # key names in command mode. `symbol_key_words` is for key names that should be
128 # available in command mode, but NOT during dictation.
129 punctuation_words = {
130 # TODO: I'm not sure why we need these, I think it has something to do with
131 # Dragon. Possibly it has been fixed by later improvements to talon? -rntz
132 "`": "`",
133 ",": ",", # <== these things
134 "back tick": "`",
135 "comma": ",",
136 "period": ".",
137 "semicolon": ";",
138 "colon": ":",
139 "forward slash": "/",
140 "question mark": "?",
141 "exclamation mark": "!",
142 "exclamation point": "!",
143 "dollar sign": "$",
144 "asterisk": "*",
145 "hash sign": "#",
146 "number sign": "#",
147 "percent sign": "%",
148 "at sign": "@",
149 "and sign": "&",
150 "ampersand": "&",
151 }
152 symbol_key_words = {
153 "dot": ".",
154 "quote": "'",
155 "L square": "[",
156 "left square": "[",
157 "square": "[",
158 "R square": "]",
159 "right square": "]",
160 "slash": "/",
161 "backslash": "\\",
162 "minus": "-",
163 "dash": "-",
164 "equals": "=",
165 "plus": "+",
166 "tilde": "~",
167 "bang": "!",
168 "dollar": "$",
169 "down score": "_",
170 "under score": "_",
171 "paren": "(",
172 "L paren": "(",
173 "left paren": "(",
174 "R paren": ")",
175 "right paren": ")",
176 "brace": "{",
177 "left brace": "{",
178 "R brace": "}",
179 "right brace": "}",
180 "angle": "<",
181 "left angle": "<",
182 "less than": "<",
183 "rangle": ">",
184 "R angle": ">",
185 "right angle": ">",
186 "greater than": ">",
187 "star": "*",
188 "pound": "#",
189 "hash": "#",
190 "percent": "%",
191 "caret": "^",
192 "amper": "&",
193 "pipe": "|",
194 "dubquote": '"',
195 "double quote": '"',
196 }
197
198 # make punctuation words also included in {user.symbol_keys}
199 symbol_key_words.update(punctuation_words)
200 ctx.lists["self.punctuation"] = punctuation_words
201 ctx.lists["self.symbol_key"] = symbol_key_words
202 ctx.lists["self.number_key"] = dict(zip(default_digits, numbers))
203 ctx.lists["self.arrow_key"] = {
204 "down": "down",
205 "left": "left",
206 "right": "right",
207 "up": "up",
208 }
209
210 simple_keys = [
211 "end",
212 "enter",
213 "escape",
214 "home",
215 "insert",
216 "pagedown",
217 "pageup",
218 "space",
219 "tab",
220 ]
221
222 alternate_keys = {
223 "delete": "backspace",
224 "forward delete": "delete",
225 #'junk': 'backspace',
226 }
227 # mac apparently doesn't have the menu key.
228 if app.platform in ("windows", "linux"):
229 alternate_keys["menu key"] = "menu"
230
231 keys = {k: k for k in simple_keys}
232 keys.update(alternate_keys)
233 ctx.lists["self.special_key"] = keys
234 ctx.lists["self.function_key"] = {
235 f"F {default_f_digits[i]}": f"f{i + 1}" for i in range(12)
236 }
237
238
239 @mod.action_class
240 class Actions:
241 def get_alphabet() -> dict:
242 """Provides the alphabet dictionary"""
243 return alphabet
244
245
[end of code/keys.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/code/keys.py b/code/keys.py
--- a/code/keys.py
+++ b/code/keys.py
@@ -111,15 +111,17 @@
ctx = Context()
-ctx.lists["self.modifier_key"] = {
+modifier_keys = {
# If you find 'alt' is often misrecognized, try using 'alter'.
"alt": "alt", #'alter': 'alt',
- "command": "cmd",
"control": "ctrl", #'troll': 'ctrl',
- "option": "alt",
"shift": "shift", #'sky': 'shift',
"super": "super",
}
+if app.platform == "mac":
+ modifier_keys["command"] = "cmd"
+ modifier_keys["option"] = "alt"
+ctx.lists["self.modifier_key"] = modifier_keys
alphabet = dict(zip(default_alphabet, letters_string))
ctx.lists["self.letter"] = alphabet
| {"golden_diff": "diff --git a/code/keys.py b/code/keys.py\n--- a/code/keys.py\n+++ b/code/keys.py\n@@ -111,15 +111,17 @@\n \n \n ctx = Context()\n-ctx.lists[\"self.modifier_key\"] = {\n+modifier_keys = {\n # If you find 'alt' is often misrecognized, try using 'alter'.\n \"alt\": \"alt\", #'alter': 'alt',\n- \"command\": \"cmd\",\n \"control\": \"ctrl\", #'troll': 'ctrl',\n- \"option\": \"alt\",\n \"shift\": \"shift\", #'sky': 'shift',\n \"super\": \"super\",\n }\n+if app.platform == \"mac\":\n+ modifier_keys[\"command\"] = \"cmd\"\n+ modifier_keys[\"option\"] = \"alt\"\n+ctx.lists[\"self.modifier_key\"] = modifier_keys\n alphabet = dict(zip(default_alphabet, letters_string))\n ctx.lists[\"self.letter\"] = alphabet\n", "issue": "the command key should probably be made Mac specific\n\n", "before_files": [{"content": "from typing import Set\n\nfrom talon import Module, Context, actions, app\nimport sys\n\ndefault_alphabet = \"air bat cap drum each fine gust harp sit jury crunch look made near odd pit quench red sun trap urge vest whale plex yank zip\".split(\n \" \"\n)\nletters_string = \"abcdefghijklmnopqrstuvwxyz\"\n\ndefault_digits = \"zero one two three four five six seven eight nine\".split(\" \")\nnumbers = [str(i) for i in range(10)]\ndefault_f_digits = \"one two three four five six seven eight nine ten eleven twelve\".split(\n \" \"\n)\n\nmod = Module()\nmod.list(\"letter\", desc=\"The spoken phonetic alphabet\")\nmod.list(\"symbol_key\", desc=\"All symbols from the keyboard\")\nmod.list(\"arrow_key\", desc=\"All arrow keys\")\nmod.list(\"number_key\", desc=\"All number keys\")\nmod.list(\"modifier_key\", desc=\"All modifier keys\")\nmod.list(\"function_key\", desc=\"All function keys\")\nmod.list(\"special_key\", desc=\"All special keys\")\nmod.list(\"punctuation\", desc=\"words for inserting punctuation into text\")\n\n\[email protected](rule=\"{self.modifier_key}+\")\ndef modifiers(m) -> str:\n \"One or more modifier keys\"\n return \"-\".join(m.modifier_key_list)\n\n\[email protected](rule=\"{self.arrow_key}\")\ndef arrow_key(m) -> str:\n \"One directional arrow key\"\n return m.arrow_key\n\n\[email protected](rule=\"<self.arrow_key>+\")\ndef arrow_keys(m) -> str:\n \"One or more arrow keys separated by a space\"\n return str(m)\n\n\[email protected](rule=\"{self.number_key}\")\ndef number_key(m) -> str:\n \"One number key\"\n return m.number_key\n\n\[email protected](rule=\"{self.letter}\")\ndef letter(m) -> str:\n \"One letter key\"\n return m.letter\n\n\[email protected](rule=\"{self.special_key}\")\ndef special_key(m) -> str:\n \"One special key\"\n return m.special_key\n\n\[email protected](rule=\"{self.symbol_key}\")\ndef symbol_key(m) -> str:\n \"One symbol key\"\n return m.symbol_key\n\n\[email protected](rule=\"{self.function_key}\")\ndef function_key(m) -> str:\n \"One function key\"\n return m.function_key\n\n\[email protected](rule=\"( <self.letter> | <self.number_key> | <self.symbol_key> )\")\ndef any_alphanumeric_key(m) -> str:\n \"any alphanumeric key\"\n return str(m)\n\n\[email protected](\n rule=\"( <self.letter> | <self.number_key> | <self.symbol_key> \"\n \"| <self.arrow_key> | <self.function_key> | <self.special_key> )\"\n)\ndef unmodified_key(m) -> str:\n \"A single key with no modifiers\"\n return str(m)\n\n\[email protected](rule=\"{self.modifier_key}* <self.unmodified_key>\")\ndef key(m) -> str:\n \"A single key with optional modifiers\"\n try:\n mods = m.modifier_key_list\n except AttributeError:\n mods = []\n return \"-\".join(mods + 
[m.unmodified_key])\n\n\[email protected](rule=\"<self.key>+\")\ndef keys(m) -> str:\n \"A sequence of one or more keys with optional modifiers\"\n return \" \".join(m.key_list)\n\n\[email protected](rule=\"{self.letter}+\")\ndef letters(m) -> str:\n \"Multiple letter keys\"\n return \"\".join(m.letter_list)\n\n\nctx = Context()\nctx.lists[\"self.modifier_key\"] = {\n # If you find 'alt' is often misrecognized, try using 'alter'.\n \"alt\": \"alt\", #'alter': 'alt',\n \"command\": \"cmd\",\n \"control\": \"ctrl\", #'troll': 'ctrl',\n \"option\": \"alt\",\n \"shift\": \"shift\", #'sky': 'shift',\n \"super\": \"super\",\n}\nalphabet = dict(zip(default_alphabet, letters_string))\nctx.lists[\"self.letter\"] = alphabet\n\n# `punctuation_words` is for words you want available BOTH in dictation and as\n# key names in command mode. `symbol_key_words` is for key names that should be\n# available in command mode, but NOT during dictation.\npunctuation_words = {\n # TODO: I'm not sure why we need these, I think it has something to do with\n # Dragon. Possibly it has been fixed by later improvements to talon? -rntz\n \"`\": \"`\",\n \",\": \",\", # <== these things\n \"back tick\": \"`\",\n \"comma\": \",\",\n \"period\": \".\",\n \"semicolon\": \";\",\n \"colon\": \":\",\n \"forward slash\": \"/\",\n \"question mark\": \"?\",\n \"exclamation mark\": \"!\",\n \"exclamation point\": \"!\",\n \"dollar sign\": \"$\",\n \"asterisk\": \"*\",\n \"hash sign\": \"#\",\n \"number sign\": \"#\",\n \"percent sign\": \"%\",\n \"at sign\": \"@\",\n \"and sign\": \"&\",\n \"ampersand\": \"&\",\n}\nsymbol_key_words = {\n \"dot\": \".\",\n \"quote\": \"'\",\n \"L square\": \"[\",\n \"left square\": \"[\",\n \"square\": \"[\",\n \"R square\": \"]\",\n \"right square\": \"]\",\n \"slash\": \"/\",\n \"backslash\": \"\\\\\",\n \"minus\": \"-\",\n \"dash\": \"-\",\n \"equals\": \"=\",\n \"plus\": \"+\",\n \"tilde\": \"~\",\n \"bang\": \"!\",\n \"dollar\": \"$\",\n \"down score\": \"_\",\n \"under score\": \"_\",\n \"paren\": \"(\",\n \"L paren\": \"(\",\n \"left paren\": \"(\",\n \"R paren\": \")\",\n \"right paren\": \")\",\n \"brace\": \"{\",\n \"left brace\": \"{\",\n \"R brace\": \"}\",\n \"right brace\": \"}\",\n \"angle\": \"<\",\n \"left angle\": \"<\",\n \"less than\": \"<\",\n \"rangle\": \">\",\n \"R angle\": \">\",\n \"right angle\": \">\",\n \"greater than\": \">\",\n \"star\": \"*\",\n \"pound\": \"#\",\n \"hash\": \"#\",\n \"percent\": \"%\",\n \"caret\": \"^\",\n \"amper\": \"&\",\n \"pipe\": \"|\",\n \"dubquote\": '\"',\n \"double quote\": '\"',\n}\n\n# make punctuation words also included in {user.symbol_keys}\nsymbol_key_words.update(punctuation_words)\nctx.lists[\"self.punctuation\"] = punctuation_words\nctx.lists[\"self.symbol_key\"] = symbol_key_words\nctx.lists[\"self.number_key\"] = dict(zip(default_digits, numbers))\nctx.lists[\"self.arrow_key\"] = {\n \"down\": \"down\",\n \"left\": \"left\",\n \"right\": \"right\",\n \"up\": \"up\",\n}\n\nsimple_keys = [\n \"end\",\n \"enter\",\n \"escape\",\n \"home\",\n \"insert\",\n \"pagedown\",\n \"pageup\",\n \"space\",\n \"tab\",\n]\n\nalternate_keys = {\n \"delete\": \"backspace\",\n \"forward delete\": \"delete\",\n #'junk': 'backspace',\n}\n# mac apparently doesn't have the menu key.\nif app.platform in (\"windows\", \"linux\"):\n alternate_keys[\"menu key\"] = \"menu\"\n\nkeys = {k: k for k in simple_keys}\nkeys.update(alternate_keys)\nctx.lists[\"self.special_key\"] = keys\nctx.lists[\"self.function_key\"] = {\n f\"F {default_f_digits[i]}\": f\"f{i + 
1}\" for i in range(12)\n}\n\n\[email protected]_class\nclass Actions:\n def get_alphabet() -> dict:\n \"\"\"Provides the alphabet dictionary\"\"\"\n return alphabet\n\n", "path": "code/keys.py"}]} | 2,875 | 217 |
gh_patches_debug_24197 | rasdani/github-patches | git_diff | kserve__kserve-3226 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
KServe lgbserver runtime error input for v1 endpoint
/kind bug
**What steps did you take and what happened:**
lgbserver used to support both of the following input formats prior to the 0.11 release.
```python
request = {'inputs' : [{'sepal_width_(cm)': {0: 3.5},
'petal_length_(cm)': {0: 1.4},
'petal_width_(cm)': {0: 0.2},
'sepal_length_(cm)': {0: 5.1} }]}
```
```python
request2 = {'inputs': [
[{'sepal_width_(cm)': 3.5},
{'petal_length_(cm)': 1.4},
{'petal_width_(cm)': 0.2},
{'sepal_length_(cm)': 5.1}]
] }
```
KServe only documented the first input format https://kserve.github.io/website/0.11/modelserving/v1beta1/lightgbm
and in 0.11 the second input format stopped working with the following error:
```bash
2023-11-03 09:06:02.099 32367 kserve ERROR [inference_error_handler():89] Exception:
Traceback (most recent call last):
File "/Users/dsun20/kserve/python/lgbserver/lgbserver/model.py", line 62, in predict
result = self._booster.predict(instances)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/lightgbm/basic.py", line 4220, in predict
return predictor.predict(
^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/lightgbm/basic.py", line 1047, in predict
preds, nrow = self.__pred_for_np2d(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/lightgbm/basic.py", line 1187, in __pred_for_np2d
return self.__inner_predict_np2d(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/lightgbm/basic.py", line 1127, in __inner_predict_np2d
data = np.array(mat.reshape(mat.size), dtype=np.float32)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: float() argument must be a string or a real number, not 'dict'
```
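The failure follows from how the payload is dispatched in `get_predict_input` (see `utils.py` below): for the second format, `instances[0]` is a list rather than a dict, so the input falls through to `np.array(instances)` and LightGBM receives an object array of dicts instead of numeric columns. For illustration, a minimal sketch of flattening that shape into a pandas DataFrame first (the helper name and sample values are assumptions, not project code):

```python
# Sketch: normalize [[{col: val}, ...], ...] rows into a DataFrame, one row per instance.
from typing import Dict, List

import pandas as pd


def rows_to_dataframe(instances: List[List[Dict[str, float]]]) -> pd.DataFrame:
    data: Dict[str, List[float]] = {}
    for instance in instances:          # each instance is a list of single-key dicts
        for item in instance:
            for key, val in item.items():
                data.setdefault(key, []).append(val)
    return pd.DataFrame(data)


payload = [[{'sepal_width_(cm)': 3.5}, {'petal_length_(cm)': 1.4},
            {'petal_width_(cm)': 0.2}, {'sepal_length_(cm)': 5.1}]]
print(rows_to_dataframe(payload))  # one row, four numeric columns, safe for booster.predict()
```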
**What did you expect to happen:**
**What's the InferenceService yaml:**
[To help us debug please run `kubectl get isvc $name -n $namespace -oyaml` and paste the output]
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
**Environment:**
- Istio Version:
- Knative Version:
- KServe Version:
- Kubeflow version:
- Cloud Environment:[k8s_istio/istio_dex/gcp_basic_auth/gcp_iap/aws/aws_cognito/ibm]
- Minikube/Kind version:
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
</issue>
<code>
[start of python/kserve/kserve/utils/utils.py]
1 # Copyright 2021 The KServe Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import sys
17 import uuid
18 from kserve.protocol.grpc.grpc_predict_v2_pb2 import InferParameter
19 from typing import Dict, Union
20
21 from kserve.utils.numpy_codec import from_np_dtype
22 import pandas as pd
23 import numpy as np
24 import psutil
25 from cloudevents.conversion import to_binary, to_structured
26 from cloudevents.http import CloudEvent
27 from grpc import ServicerContext
28 from kserve.protocol.infer_type import InferOutput, InferRequest, InferResponse
29
30
31 def is_running_in_k8s():
32 return os.path.isdir('/var/run/secrets/kubernetes.io/')
33
34
35 def get_current_k8s_namespace():
36 with open('/var/run/secrets/kubernetes.io/serviceaccount/namespace', 'r') as f:
37 return f.readline()
38
39
40 def get_default_target_namespace():
41 if not is_running_in_k8s():
42 return 'default'
43 return get_current_k8s_namespace()
44
45
46 def get_isvc_namespace(inferenceservice):
47 return inferenceservice.metadata.namespace or get_default_target_namespace()
48
49
50 def get_ig_namespace(inferencegraph):
51 return inferencegraph.metadata.namespace or get_default_target_namespace()
52
53
54 def cpu_count():
55 """Get the available CPU count for this system.
56 Takes the minimum value from the following locations:
57 - Total system cpus available on the host.
58 - CPU Affinity (if set)
59 - Cgroups limit (if set)
60 """
61 count = os.cpu_count()
62
63 # Check CPU affinity if available
64 try:
65 affinity_count = len(psutil.Process().cpu_affinity())
66 if affinity_count > 0:
67 count = min(count, affinity_count)
68 except Exception:
69 pass
70
71 # Check cgroups if available
72 if sys.platform == "linux":
73 try:
74 with open("/sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us") as f:
75 quota = int(f.read())
76 with open("/sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us") as f:
77 period = int(f.read())
78 cgroups_count = int(quota / period)
79 if cgroups_count > 0:
80 count = min(count, cgroups_count)
81 except Exception:
82 pass
83
84 return count
85
86
87 def is_structured_cloudevent(body: Dict) -> bool:
88 """Returns True if the JSON request body resembles a structured CloudEvent"""
89 return "time" in body \
90 and "type" in body \
91 and "source" in body \
92 and "id" in body \
93 and "specversion" in body \
94 and "data" in body
95
96
97 def create_response_cloudevent(model_name: str, response: Dict, req_attributes: Dict,
98 binary_event=False) -> tuple:
99 ce_attributes = {}
100
101 if os.getenv("CE_MERGE", "false").lower() == "true":
102 if binary_event:
103 ce_attributes = req_attributes
104 if "datacontenttype" in ce_attributes: # Optional field so must check
105 del ce_attributes["datacontenttype"]
106 else:
107 ce_attributes = req_attributes
108
109 # Remove these fields so we generate new ones
110 del ce_attributes["id"]
111 del ce_attributes["time"]
112
113 ce_attributes["type"] = os.getenv("CE_TYPE", "io.kserve.inference.response")
114 ce_attributes["source"] = os.getenv("CE_SOURCE", f"io.kserve.inference.{model_name}")
115
116 event = CloudEvent(ce_attributes, response)
117
118 if binary_event:
119 event_headers, event_body = to_binary(event)
120 else:
121 event_headers, event_body = to_structured(event)
122
123 return event_headers, event_body
124
125
126 def generate_uuid() -> str:
127 return str(uuid.uuid4())
128
129
130 def to_headers(context: ServicerContext) -> Dict[str, str]:
131 metadata = context.invocation_metadata()
132 if hasattr(context, "trailing_metadata"):
133 metadata += context.trailing_metadata()
134 headers = {}
135 for metadatum in metadata:
136 headers[metadatum.key] = metadatum.value
137
138 return headers
139
140
141 def get_predict_input(payload: Union[Dict, InferRequest]) -> Union[np.ndarray, pd.DataFrame]:
142 if isinstance(payload, Dict):
143 instances = payload["inputs"] if "inputs" in payload else payload["instances"]
144 if len(instances) == 0:
145 return np.array(instances)
146 if isinstance(instances[0], Dict):
147 dfs = []
148 for input in instances:
149 dfs.append(pd.DataFrame(input))
150 inputs = pd.concat(dfs, axis=0)
151 return inputs
152 else:
153 return np.array(instances)
154
155 elif isinstance(payload, InferRequest):
156 content_type = ''
157 parameters = payload.parameters
158 if parameters:
159 if isinstance(parameters.get("content_type"), InferParameter):
160 # for v2 grpc, we get InferParameter obj eg: {"content_type": string_param: "pd"}
161 content_type = str(parameters.get("content_type").string_param)
162 else:
163 # for v2 http, we get string eg: {"content_type": "pd"}
164 content_type = parameters.get("content_type")
165
166 if content_type == "pd":
167 return payload.as_dataframe()
168 else:
169 input = payload.inputs[0]
170 return input.as_numpy()
171
172
173 def get_predict_response(payload: Union[Dict, InferRequest], result: Union[np.ndarray, pd.DataFrame],
174 model_name: str) -> Union[Dict, InferResponse]:
175 if isinstance(payload, Dict):
176 infer_outputs = result
177 if isinstance(result, pd.DataFrame):
178 infer_outputs = []
179 for label, row in result.iterrows():
180 infer_outputs.append(row.to_dict())
181 elif isinstance(result, np.ndarray):
182 infer_outputs = result.tolist()
183 return {"predictions": infer_outputs}
184 elif isinstance(payload, InferRequest):
185 infer_outputs = []
186 if isinstance(result, pd.DataFrame):
187 for col in result.columns:
188 infer_output = InferOutput(
189 name=col,
190 shape=list(result[col].shape),
191 datatype=from_np_dtype(result[col].dtype),
192 data=result[col].tolist()
193 )
194 infer_outputs.append(infer_output)
195 else:
196 infer_output = InferOutput(
197 name="output-0",
198 shape=list(result.shape),
199 datatype=from_np_dtype(result.dtype),
200 data=result.flatten().tolist()
201 )
202 infer_outputs.append(infer_output)
203 return InferResponse(
204 model_name=model_name,
205 infer_outputs=infer_outputs,
206 response_id=payload.id if payload.id else generate_uuid()
207 )
208
209
210 def strtobool(val: str) -> bool:
211 """Convert a string representation of truth to True or False.
212
213 True values are 'y', 'yes', 't', 'true', 'on', and '1'; false values
214 are 'n', 'no', 'f', 'false', 'off', and '0'. Raises ValueError if
215 'val' is anything else.
216
217 Adapted from deprecated `distutils`
218 https://github.com/python/cpython/blob/3.11/Lib/distutils/util.py
219 """
220 val = val.lower()
221 if val in ('y', 'yes', 't', 'true', 'on', '1'):
222 return True
223 elif val in ('n', 'no', 'f', 'false', 'off', '0'):
224 return False
225 else:
226 raise ValueError("invalid truth value %r" % (val,))
227
[end of python/kserve/kserve/utils/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/kserve/kserve/utils/utils.py b/python/kserve/kserve/utils/utils.py
--- a/python/kserve/kserve/utils/utils.py
+++ b/python/kserve/kserve/utils/utils.py
@@ -16,7 +16,7 @@
import sys
import uuid
from kserve.protocol.grpc.grpc_predict_v2_pb2 import InferParameter
-from typing import Dict, Union
+from typing import Dict, Union, List
from kserve.utils.numpy_codec import from_np_dtype
import pandas as pd
@@ -149,6 +149,24 @@
dfs.append(pd.DataFrame(input))
inputs = pd.concat(dfs, axis=0)
return inputs
+
+ # Handles the following input format
+ # {'inputs': [
+ # [{'sepal_width_(cm)': 3.5},
+ # {'petal_length_(cm)': 1.4},
+ # {'petal_width_(cm)': 0.2},
+ # {'sepal_length_(cm)': 5.1}]
+ # ]}
+ elif isinstance(instances[0], List) and len(instances[0]) != 0 and isinstance(instances[0][0], Dict):
+ data: Dict[str, List] = {}
+ for instance in instances:
+ for item in instance:
+ for key, val in item.items():
+ if key in data:
+ data[key].append(val)
+ else:
+ data[key] = [val]
+ return pd.DataFrame(data)
else:
return np.array(instances)
| {"golden_diff": "diff --git a/python/kserve/kserve/utils/utils.py b/python/kserve/kserve/utils/utils.py\n--- a/python/kserve/kserve/utils/utils.py\n+++ b/python/kserve/kserve/utils/utils.py\n@@ -16,7 +16,7 @@\n import sys\n import uuid\n from kserve.protocol.grpc.grpc_predict_v2_pb2 import InferParameter\n-from typing import Dict, Union\n+from typing import Dict, Union, List\n \n from kserve.utils.numpy_codec import from_np_dtype\n import pandas as pd\n@@ -149,6 +149,24 @@\n dfs.append(pd.DataFrame(input))\n inputs = pd.concat(dfs, axis=0)\n return inputs\n+\n+ # Handles the following input format\n+ # {'inputs': [\n+ # [{'sepal_width_(cm)': 3.5},\n+ # {'petal_length_(cm)': 1.4},\n+ # {'petal_width_(cm)': 0.2},\n+ # {'sepal_length_(cm)': 5.1}]\n+ # ]}\n+ elif isinstance(instances[0], List) and len(instances[0]) != 0 and isinstance(instances[0][0], Dict):\n+ data: Dict[str, List] = {}\n+ for instance in instances:\n+ for item in instance:\n+ for key, val in item.items():\n+ if key in data:\n+ data[key].append(val)\n+ else:\n+ data[key] = [val]\n+ return pd.DataFrame(data)\n else:\n return np.array(instances)\n", "issue": "KServe lgbserver runtime error input for v1 endpoint\n/kind bug\r\n\r\n**What steps did you take and what happened:**\r\nlgbserver used to support both of following inputs formats prior to 0.11 release.\r\n\r\n```python\r\nrequest = {'inputs' : [{'sepal_width_(cm)': {0: 3.5}, \r\n'petal_length_(cm)': {0: 1.4}, \r\n'petal_width_(cm)': {0: 0.2},\r\n'sepal_length_(cm)': {0: 5.1} }]}\r\n```\r\n\r\n```python\r\nrequest2 = {'inputs': [\r\n[{'sepal_width_(cm)': 3.5}, \r\n {'petal_length_(cm)': 1.4}, \r\n {'petal_width_(cm)': 0.2}, \r\n {'sepal_length_(cm)': 5.1}]\r\n ] }\r\n```\r\n\r\nKServe only documented the first input format https://kserve.github.io/website/0.11/modelserving/v1beta1/lightgbm\r\n\r\nand in 0.11 the second input format stop working with following error\r\n\r\n```bash\r\n2023-11-03 09:06:02.099 32367 kserve ERROR [inference_error_handler():89] Exception:\r\nTraceback (most recent call last):\r\n File \"/Users/dsun20/kserve/python/lgbserver/lgbserver/model.py\", line 62, in predict\r\n result = self._booster.predict(instances)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/opt/homebrew/lib/python3.11/site-packages/lightgbm/basic.py\", line 4220, in predict\r\n return predictor.predict(\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/opt/homebrew/lib/python3.11/site-packages/lightgbm/basic.py\", line 1047, in predict\r\n preds, nrow = self.__pred_for_np2d(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/opt/homebrew/lib/python3.11/site-packages/lightgbm/basic.py\", line 1187, in __pred_for_np2d\r\n return self.__inner_predict_np2d(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/opt/homebrew/lib/python3.11/site-packages/lightgbm/basic.py\", line 1127, in __inner_predict_np2d\r\n data = np.array(mat.reshape(mat.size), dtype=np.float32)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nTypeError: float() argument must be a string or a real number, not 'dict'\r\n```\r\n\r\n\r\n**What did you expect to happen:**\r\n\r\n\r\n**What's the InferenceService yaml:**\r\n[To help us debug please run `kubectl get isvc $name -n $namespace -oyaml` and paste the output]\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\r\n\r\n**Environment:**\r\n\r\n- Istio Version:\r\n- Knative Version:\r\n- KServe Version:\r\n- Kubeflow version:\r\n- Cloud Environment:[k8s_istio/istio_dex/gcp_basic_auth/gcp_iap/aws/aws_cognito/ibm]\r\n- 
Minikube/Kind version:\r\n- Kubernetes version: (use `kubectl version`):\r\n- OS (e.g. from `/etc/os-release`):\r\n\n", "before_files": [{"content": "# Copyright 2021 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport sys\nimport uuid\nfrom kserve.protocol.grpc.grpc_predict_v2_pb2 import InferParameter\nfrom typing import Dict, Union\n\nfrom kserve.utils.numpy_codec import from_np_dtype\nimport pandas as pd\nimport numpy as np\nimport psutil\nfrom cloudevents.conversion import to_binary, to_structured\nfrom cloudevents.http import CloudEvent\nfrom grpc import ServicerContext\nfrom kserve.protocol.infer_type import InferOutput, InferRequest, InferResponse\n\n\ndef is_running_in_k8s():\n return os.path.isdir('/var/run/secrets/kubernetes.io/')\n\n\ndef get_current_k8s_namespace():\n with open('/var/run/secrets/kubernetes.io/serviceaccount/namespace', 'r') as f:\n return f.readline()\n\n\ndef get_default_target_namespace():\n if not is_running_in_k8s():\n return 'default'\n return get_current_k8s_namespace()\n\n\ndef get_isvc_namespace(inferenceservice):\n return inferenceservice.metadata.namespace or get_default_target_namespace()\n\n\ndef get_ig_namespace(inferencegraph):\n return inferencegraph.metadata.namespace or get_default_target_namespace()\n\n\ndef cpu_count():\n \"\"\"Get the available CPU count for this system.\n Takes the minimum value from the following locations:\n - Total system cpus available on the host.\n - CPU Affinity (if set)\n - Cgroups limit (if set)\n \"\"\"\n count = os.cpu_count()\n\n # Check CPU affinity if available\n try:\n affinity_count = len(psutil.Process().cpu_affinity())\n if affinity_count > 0:\n count = min(count, affinity_count)\n except Exception:\n pass\n\n # Check cgroups if available\n if sys.platform == \"linux\":\n try:\n with open(\"/sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us\") as f:\n quota = int(f.read())\n with open(\"/sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us\") as f:\n period = int(f.read())\n cgroups_count = int(quota / period)\n if cgroups_count > 0:\n count = min(count, cgroups_count)\n except Exception:\n pass\n\n return count\n\n\ndef is_structured_cloudevent(body: Dict) -> bool:\n \"\"\"Returns True if the JSON request body resembles a structured CloudEvent\"\"\"\n return \"time\" in body \\\n and \"type\" in body \\\n and \"source\" in body \\\n and \"id\" in body \\\n and \"specversion\" in body \\\n and \"data\" in body\n\n\ndef create_response_cloudevent(model_name: str, response: Dict, req_attributes: Dict,\n binary_event=False) -> tuple:\n ce_attributes = {}\n\n if os.getenv(\"CE_MERGE\", \"false\").lower() == \"true\":\n if binary_event:\n ce_attributes = req_attributes\n if \"datacontenttype\" in ce_attributes: # Optional field so must check\n del ce_attributes[\"datacontenttype\"]\n else:\n ce_attributes = req_attributes\n\n # Remove these fields so we generate new ones\n del ce_attributes[\"id\"]\n del ce_attributes[\"time\"]\n\n ce_attributes[\"type\"] = os.getenv(\"CE_TYPE\", 
\"io.kserve.inference.response\")\n ce_attributes[\"source\"] = os.getenv(\"CE_SOURCE\", f\"io.kserve.inference.{model_name}\")\n\n event = CloudEvent(ce_attributes, response)\n\n if binary_event:\n event_headers, event_body = to_binary(event)\n else:\n event_headers, event_body = to_structured(event)\n\n return event_headers, event_body\n\n\ndef generate_uuid() -> str:\n return str(uuid.uuid4())\n\n\ndef to_headers(context: ServicerContext) -> Dict[str, str]:\n metadata = context.invocation_metadata()\n if hasattr(context, \"trailing_metadata\"):\n metadata += context.trailing_metadata()\n headers = {}\n for metadatum in metadata:\n headers[metadatum.key] = metadatum.value\n\n return headers\n\n\ndef get_predict_input(payload: Union[Dict, InferRequest]) -> Union[np.ndarray, pd.DataFrame]:\n if isinstance(payload, Dict):\n instances = payload[\"inputs\"] if \"inputs\" in payload else payload[\"instances\"]\n if len(instances) == 0:\n return np.array(instances)\n if isinstance(instances[0], Dict):\n dfs = []\n for input in instances:\n dfs.append(pd.DataFrame(input))\n inputs = pd.concat(dfs, axis=0)\n return inputs\n else:\n return np.array(instances)\n\n elif isinstance(payload, InferRequest):\n content_type = ''\n parameters = payload.parameters\n if parameters:\n if isinstance(parameters.get(\"content_type\"), InferParameter):\n # for v2 grpc, we get InferParameter obj eg: {\"content_type\": string_param: \"pd\"}\n content_type = str(parameters.get(\"content_type\").string_param)\n else:\n # for v2 http, we get string eg: {\"content_type\": \"pd\"}\n content_type = parameters.get(\"content_type\")\n\n if content_type == \"pd\":\n return payload.as_dataframe()\n else:\n input = payload.inputs[0]\n return input.as_numpy()\n\n\ndef get_predict_response(payload: Union[Dict, InferRequest], result: Union[np.ndarray, pd.DataFrame],\n model_name: str) -> Union[Dict, InferResponse]:\n if isinstance(payload, Dict):\n infer_outputs = result\n if isinstance(result, pd.DataFrame):\n infer_outputs = []\n for label, row in result.iterrows():\n infer_outputs.append(row.to_dict())\n elif isinstance(result, np.ndarray):\n infer_outputs = result.tolist()\n return {\"predictions\": infer_outputs}\n elif isinstance(payload, InferRequest):\n infer_outputs = []\n if isinstance(result, pd.DataFrame):\n for col in result.columns:\n infer_output = InferOutput(\n name=col,\n shape=list(result[col].shape),\n datatype=from_np_dtype(result[col].dtype),\n data=result[col].tolist()\n )\n infer_outputs.append(infer_output)\n else:\n infer_output = InferOutput(\n name=\"output-0\",\n shape=list(result.shape),\n datatype=from_np_dtype(result.dtype),\n data=result.flatten().tolist()\n )\n infer_outputs.append(infer_output)\n return InferResponse(\n model_name=model_name,\n infer_outputs=infer_outputs,\n response_id=payload.id if payload.id else generate_uuid()\n )\n\n\ndef strtobool(val: str) -> bool:\n \"\"\"Convert a string representation of truth to True or False.\n\n True values are 'y', 'yes', 't', 'true', 'on', and '1'; false values\n are 'n', 'no', 'f', 'false', 'off', and '0'. 
Raises ValueError if\n 'val' is anything else.\n\n Adapted from deprecated `distutils`\n https://github.com/python/cpython/blob/3.11/Lib/distutils/util.py\n \"\"\"\n val = val.lower()\n if val in ('y', 'yes', 't', 'true', 'on', '1'):\n return True\n elif val in ('n', 'no', 'f', 'false', 'off', '0'):\n return False\n else:\n raise ValueError(\"invalid truth value %r\" % (val,))\n", "path": "python/kserve/kserve/utils/utils.py"}]} | 3,574 | 345 |
gh_patches_debug_15872 | rasdani/github-patches | git_diff | azavea__raster-vision-427 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fill out model-defaults for tensorflow-od
Currently we only have 1 model in model_defaults.json:
https://github.com/azavea/raster-vision/blob/feature/api-refactor/src/rastervision/backend/model_defaults.json#L2
We need to fill it out to include each of these configurations, matched up with each of the pretrained weights from the model zoo:
#### Configs
https://github.com/azavea/models/tree/master/research/object_detection/samples/configs
#### Weights
https://github.com/azavea/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
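For concreteness, a sketch of the kind of additional keys this implies in `backend/api.py`, following the existing `SSD_MOBILENET_V1_COCO` pattern (the particular names below are illustrative; the full list should be taken from the config and weights pages above):

```python
# Illustrative only: new model keys mirroring config filenames from the object
# detection samples/configs directory, each to be paired with its model zoo
# checkpoint in model_defaults.json.
SSD_MOBILENET_V2_COCO = 'SSD_MOBILENET_V2_COCO'
FASTER_RCNN_RESNET50_COCO = 'FASTER_RCNN_RESNET50_COCO'
FASTER_RCNN_RESNET101_COCO = 'FASTER_RCNN_RESNET101_COCO'
```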
</issue>
<code>
[start of src/rastervision/backend/api.py]
1 # flake8: noqa
2
3 # Registry keys
4
5 BACKEND = 'BACKEND'
6
7 ## Backend Keys
8
9 TF_OBJECT_DETECTION = 'TF_OBJECT_DETECTION'
10 KERAS_CLASSIFICATION = 'KERAS_CLASSIFICATION'
11
12 ## Model keys
13
14 ### TF Object Detection
15 SSD_MOBILENET_V1_COCO = 'SSD_MOBILENET_V1_COCO'
16
17 ## Keras Classificaiton
18 RESNET50_IMAGENET = 'RESNET50_IMAGENET'
19
20 from .backend_config import BackendConfig
21
[end of src/rastervision/backend/api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/rastervision/backend/api.py b/src/rastervision/backend/api.py
--- a/src/rastervision/backend/api.py
+++ b/src/rastervision/backend/api.py
@@ -13,6 +13,20 @@
### TF Object Detection
SSD_MOBILENET_V1_COCO = 'SSD_MOBILENET_V1_COCO'
+SSD_MOBILENET_V2_COCO = 'SSD_MOBILENET_V2_COCO'
+SSDLITE_MOBILENET_V2_COCO = 'SSDLITE_MOBILENET_V2_COCO'
+SSD_INCEPTION_V2_COCO = 'SSD_INCEPTION_V2_COCO'
+FASTER_RCNN_INCEPTION_V2_COCO = 'FASTER_RCNN_INCEPTION_V2_COCO'
+FASTER_RCNN_RESNET50_COCO = 'FASTER_RCNN_RESNET50_COCO'
+RFCN_RESNET101_COCO = 'RFCN_RESNET101_COCO'
+FASTER_RCNN_RESNET101_COCO = 'FASTER_RCNN_RESNET101_COCO'
+FASTER_RCNN_INCEPTION_RESNET_V2_ATROUS_COCO = \
+'FASTER_RCNN_INCEPTION_RESNET_V2_ATROUS_COCO'
+MASK_RCNN_INCEPTION_RESNET_V2_ATROUS_COCO = \
+'MASK_RCNN_INCEPTION_RESNET_V2_ATROUS_COCO'
+MASK_RCNN_INCEPTION_V2_COCO = 'MASK_RCNN_INCEPTION_V2_COCO'
+MASK_RCNN_RESNET101_ATROUS_COCO = 'MASK_RCNN_RESNET101_ATROUS_COCO'
+MASK_RCNN_RESNET50_ATROUS_COCO = 'MASK_RCNN_RESNET50_ATROUS_COCO'
## Keras Classificaiton
RESNET50_IMAGENET = 'RESNET50_IMAGENET'
| {"golden_diff": "diff --git a/src/rastervision/backend/api.py b/src/rastervision/backend/api.py\n--- a/src/rastervision/backend/api.py\n+++ b/src/rastervision/backend/api.py\n@@ -13,6 +13,20 @@\n \n ### TF Object Detection\n SSD_MOBILENET_V1_COCO = 'SSD_MOBILENET_V1_COCO'\n+SSD_MOBILENET_V2_COCO = 'SSD_MOBILENET_V2_COCO'\n+SSDLITE_MOBILENET_V2_COCO = 'SSDLITE_MOBILENET_V2_COCO'\n+SSD_INCEPTION_V2_COCO = 'SSD_INCEPTION_V2_COCO'\n+FASTER_RCNN_INCEPTION_V2_COCO = 'FASTER_RCNN_INCEPTION_V2_COCO'\n+FASTER_RCNN_RESNET50_COCO = 'FASTER_RCNN_RESNET50_COCO'\n+RFCN_RESNET101_COCO = 'RFCN_RESNET101_COCO'\n+FASTER_RCNN_RESNET101_COCO = 'FASTER_RCNN_RESNET101_COCO'\n+FASTER_RCNN_INCEPTION_RESNET_V2_ATROUS_COCO = \\\n+'FASTER_RCNN_INCEPTION_RESNET_V2_ATROUS_COCO'\n+MASK_RCNN_INCEPTION_RESNET_V2_ATROUS_COCO = \\\n+'MASK_RCNN_INCEPTION_RESNET_V2_ATROUS_COCO'\n+MASK_RCNN_INCEPTION_V2_COCO = 'MASK_RCNN_INCEPTION_V2_COCO'\n+MASK_RCNN_RESNET101_ATROUS_COCO = 'MASK_RCNN_RESNET101_ATROUS_COCO'\n+MASK_RCNN_RESNET50_ATROUS_COCO = 'MASK_RCNN_RESNET50_ATROUS_COCO'\n \n ## Keras Classificaiton\n RESNET50_IMAGENET = 'RESNET50_IMAGENET'\n", "issue": "Fill out model-defaults for tensorflow-od\nCurrently we only have 1 model in model_defaults.json: \r\n\r\nhttps://github.com/azavea/raster-vision/blob/feature/api-refactor/src/rastervision/backend/model_defaults.json#L2\r\n\r\nWe need to fill it out to include each of these configurations, matched up with each of the pretrained weights from the model zoo:\r\n\r\n#### Configs\r\nhttps://github.com/azavea/models/tree/master/research/object_detection/samples/configs\r\n\r\n#### Weights\r\nhttps://github.com/azavea/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md\n", "before_files": [{"content": "# flake8: noqa\n\n# Registry keys\n\nBACKEND = 'BACKEND'\n\n## Backend Keys\n\nTF_OBJECT_DETECTION = 'TF_OBJECT_DETECTION'\nKERAS_CLASSIFICATION = 'KERAS_CLASSIFICATION'\n\n## Model keys\n\n### TF Object Detection\nSSD_MOBILENET_V1_COCO = 'SSD_MOBILENET_V1_COCO'\n\n## Keras Classificaiton\nRESNET50_IMAGENET = 'RESNET50_IMAGENET'\n\nfrom .backend_config import BackendConfig\n", "path": "src/rastervision/backend/api.py"}]} | 824 | 441 |
gh_patches_debug_37445 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-4815 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of op_builder/gptq.py]
1 import os
2 import torch
3 import re
4
5 from .builder import Builder
6 from .utils import append_nvcc_threads, get_cuda_cc_flag
7
8 class GPTQBuilder(Builder):
9
10 NAME = "cu_gptq"
11 PREBUILT_IMPORT_PATH = "colossalai._C.cu_gptq"
12
13 def __init__(self):
14 super().__init__(name=GPTQBuilder.NAME,
15 prebuilt_import_path=GPTQBuilder.PREBUILT_IMPORT_PATH)
16
17
18 def include_dirs(self):
19 ret = [self.csrc_abs_path("gptq"), self.get_cuda_home_include()]
20 return ret
21
22 def sources_files(self):
23 ret = [
24 self.csrc_abs_path(fname) for fname in [
25 'gptq/linear_gptq.cpp',
26 'gptq/column_remap.cu',
27 'gptq/cuda_buffers.cu',
28 'gptq/q4_matmul.cu',
29 'gptq/q4_matrix.cu'
30 ]
31 ]
32 return ret
33
34 def cxx_flags(self):
35 return ['-O3'] + self.version_dependent_macros
36
37 def nvcc_flags(self):
38 extra_cuda_flags = ['-v',
39 '-std=c++14', '-U__CUDA_NO_HALF_OPERATORS__', '-U__CUDA_NO_HALF_CONVERSIONS__',
40 '-U__CUDA_NO_HALF2_OPERATORS__', '-DTHRUST_IGNORE_CUB_VERSION_CHECK', "-lcublas", "-std=c++17"
41 ]
42
43
44 for arch in torch.cuda.get_arch_list():
45 res = re.search(r'sm_(\d+)', arch)
46 if res:
47 arch_cap = res[1]
48 if int(arch_cap) >= 80:
49 extra_cuda_flags.extend(['-gencode', f'arch=compute_{arch_cap},code={arch}'])
50
51 ret = ['-O3', '--use_fast_math'] + self.version_dependent_macros + extra_cuda_flags
52 return append_nvcc_threads(ret)
[end of op_builder/gptq.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/op_builder/gptq.py b/op_builder/gptq.py
--- a/op_builder/gptq.py
+++ b/op_builder/gptq.py
@@ -1,19 +1,17 @@
-import os
-import torch
import re
+import torch
+
from .builder import Builder
-from .utils import append_nvcc_threads, get_cuda_cc_flag
+from .utils import append_nvcc_threads
-class GPTQBuilder(Builder):
+class GPTQBuilder(Builder):
NAME = "cu_gptq"
PREBUILT_IMPORT_PATH = "colossalai._C.cu_gptq"
def __init__(self):
- super().__init__(name=GPTQBuilder.NAME,
- prebuilt_import_path=GPTQBuilder.PREBUILT_IMPORT_PATH)
-
+ super().__init__(name=GPTQBuilder.NAME, prebuilt_import_path=GPTQBuilder.PREBUILT_IMPORT_PATH)
def include_dirs(self):
ret = [self.csrc_abs_path("gptq"), self.get_cuda_home_include()]
@@ -21,32 +19,38 @@
def sources_files(self):
ret = [
- self.csrc_abs_path(fname) for fname in [
- 'gptq/linear_gptq.cpp',
- 'gptq/column_remap.cu',
- 'gptq/cuda_buffers.cu',
- 'gptq/q4_matmul.cu',
- 'gptq/q4_matrix.cu'
+ self.csrc_abs_path(fname)
+ for fname in [
+ "gptq/linear_gptq.cpp",
+ "gptq/column_remap.cu",
+ "gptq/cuda_buffers.cu",
+ "gptq/q4_matmul.cu",
+ "gptq/q4_matrix.cu",
]
]
return ret
def cxx_flags(self):
- return ['-O3'] + self.version_dependent_macros
+ return ["-O3"] + self.version_dependent_macros
def nvcc_flags(self):
- extra_cuda_flags = ['-v',
- '-std=c++14', '-U__CUDA_NO_HALF_OPERATORS__', '-U__CUDA_NO_HALF_CONVERSIONS__',
- '-U__CUDA_NO_HALF2_OPERATORS__', '-DTHRUST_IGNORE_CUB_VERSION_CHECK', "-lcublas", "-std=c++17"
+ extra_cuda_flags = [
+ "-v",
+ "-std=c++14",
+ "-U__CUDA_NO_HALF_OPERATORS__",
+ "-U__CUDA_NO_HALF_CONVERSIONS__",
+ "-U__CUDA_NO_HALF2_OPERATORS__",
+ "-DTHRUST_IGNORE_CUB_VERSION_CHECK",
+ "-lcublas",
+ "-std=c++17",
]
-
for arch in torch.cuda.get_arch_list():
- res = re.search(r'sm_(\d+)', arch)
+ res = re.search(r"sm_(\d+)", arch)
if res:
arch_cap = res[1]
if int(arch_cap) >= 80:
- extra_cuda_flags.extend(['-gencode', f'arch=compute_{arch_cap},code={arch}'])
+ extra_cuda_flags.extend(["-gencode", f"arch=compute_{arch_cap},code={arch}"])
- ret = ['-O3', '--use_fast_math'] + self.version_dependent_macros + extra_cuda_flags
- return append_nvcc_threads(ret)
\ No newline at end of file
+ ret = ["-O3", "--use_fast_math"] + self.version_dependent_macros + extra_cuda_flags
+ return append_nvcc_threads(ret)
| {"golden_diff": "diff --git a/op_builder/gptq.py b/op_builder/gptq.py\n--- a/op_builder/gptq.py\n+++ b/op_builder/gptq.py\n@@ -1,19 +1,17 @@\n-import os\n-import torch\n import re\n \n+import torch\n+\n from .builder import Builder\n-from .utils import append_nvcc_threads, get_cuda_cc_flag\n+from .utils import append_nvcc_threads\n \n-class GPTQBuilder(Builder):\n \n+class GPTQBuilder(Builder):\n NAME = \"cu_gptq\"\n PREBUILT_IMPORT_PATH = \"colossalai._C.cu_gptq\"\n \n def __init__(self):\n- super().__init__(name=GPTQBuilder.NAME,\n- prebuilt_import_path=GPTQBuilder.PREBUILT_IMPORT_PATH)\n-\n+ super().__init__(name=GPTQBuilder.NAME, prebuilt_import_path=GPTQBuilder.PREBUILT_IMPORT_PATH)\n \n def include_dirs(self):\n ret = [self.csrc_abs_path(\"gptq\"), self.get_cuda_home_include()]\n@@ -21,32 +19,38 @@\n \n def sources_files(self):\n ret = [\n- self.csrc_abs_path(fname) for fname in [\n- 'gptq/linear_gptq.cpp',\n- 'gptq/column_remap.cu',\n- 'gptq/cuda_buffers.cu',\n- 'gptq/q4_matmul.cu',\n- 'gptq/q4_matrix.cu'\n+ self.csrc_abs_path(fname)\n+ for fname in [\n+ \"gptq/linear_gptq.cpp\",\n+ \"gptq/column_remap.cu\",\n+ \"gptq/cuda_buffers.cu\",\n+ \"gptq/q4_matmul.cu\",\n+ \"gptq/q4_matrix.cu\",\n ]\n ]\n return ret\n \n def cxx_flags(self):\n- return ['-O3'] + self.version_dependent_macros\n+ return [\"-O3\"] + self.version_dependent_macros\n \n def nvcc_flags(self):\n- extra_cuda_flags = ['-v',\n- '-std=c++14', '-U__CUDA_NO_HALF_OPERATORS__', '-U__CUDA_NO_HALF_CONVERSIONS__',\n- '-U__CUDA_NO_HALF2_OPERATORS__', '-DTHRUST_IGNORE_CUB_VERSION_CHECK', \"-lcublas\", \"-std=c++17\"\n+ extra_cuda_flags = [\n+ \"-v\",\n+ \"-std=c++14\",\n+ \"-U__CUDA_NO_HALF_OPERATORS__\",\n+ \"-U__CUDA_NO_HALF_CONVERSIONS__\",\n+ \"-U__CUDA_NO_HALF2_OPERATORS__\",\n+ \"-DTHRUST_IGNORE_CUB_VERSION_CHECK\",\n+ \"-lcublas\",\n+ \"-std=c++17\",\n ]\n \n-\n for arch in torch.cuda.get_arch_list():\n- res = re.search(r'sm_(\\d+)', arch)\n+ res = re.search(r\"sm_(\\d+)\", arch)\n if res:\n arch_cap = res[1]\n if int(arch_cap) >= 80:\n- extra_cuda_flags.extend(['-gencode', f'arch=compute_{arch_cap},code={arch}'])\n+ extra_cuda_flags.extend([\"-gencode\", f\"arch=compute_{arch_cap},code={arch}\"])\n \n- ret = ['-O3', '--use_fast_math'] + self.version_dependent_macros + extra_cuda_flags\n- return append_nvcc_threads(ret)\n\\ No newline at end of file\n+ ret = [\"-O3\", \"--use_fast_math\"] + self.version_dependent_macros + extra_cuda_flags\n+ return append_nvcc_threads(ret)\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "import os\nimport torch\nimport re\n\nfrom .builder import Builder\nfrom .utils import append_nvcc_threads, get_cuda_cc_flag\n\nclass GPTQBuilder(Builder):\n\n NAME = \"cu_gptq\"\n PREBUILT_IMPORT_PATH = \"colossalai._C.cu_gptq\"\n\n def __init__(self):\n super().__init__(name=GPTQBuilder.NAME,\n prebuilt_import_path=GPTQBuilder.PREBUILT_IMPORT_PATH)\n\n\n def include_dirs(self):\n ret = [self.csrc_abs_path(\"gptq\"), self.get_cuda_home_include()]\n return ret\n\n def sources_files(self):\n ret = [\n self.csrc_abs_path(fname) for fname in [\n 'gptq/linear_gptq.cpp',\n 'gptq/column_remap.cu',\n 'gptq/cuda_buffers.cu',\n 'gptq/q4_matmul.cu',\n 'gptq/q4_matrix.cu'\n ]\n ]\n return ret\n\n def cxx_flags(self):\n return ['-O3'] + self.version_dependent_macros\n\n def nvcc_flags(self):\n extra_cuda_flags = ['-v',\n '-std=c++14', '-U__CUDA_NO_HALF_OPERATORS__', '-U__CUDA_NO_HALF_CONVERSIONS__',\n 
'-U__CUDA_NO_HALF2_OPERATORS__', '-DTHRUST_IGNORE_CUB_VERSION_CHECK', \"-lcublas\", \"-std=c++17\"\n ]\n\n\n for arch in torch.cuda.get_arch_list():\n res = re.search(r'sm_(\\d+)', arch)\n if res:\n arch_cap = res[1]\n if int(arch_cap) >= 80:\n extra_cuda_flags.extend(['-gencode', f'arch=compute_{arch_cap},code={arch}'])\n\n ret = ['-O3', '--use_fast_math'] + self.version_dependent_macros + extra_cuda_flags\n return append_nvcc_threads(ret)", "path": "op_builder/gptq.py"}]} | 1,086 | 815 |
gh_patches_debug_5850 | rasdani/github-patches | git_diff | web2py__web2py-544 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
db.export_to_csv_file() error with python 3.7
**Describe the bug**
db.export_to_csv_file() error with python 3.7
```
TypeError: a bytes-like object is required, not 'str'
```
in
```
web2py/gluon/packages/dal/pydal/base.py", line 844, in export_to_csv_file
ofile.write('TABLE %s\r\n' % table)
```
**To Reproduce**
```
Bens-iMac:web2py ben$ pipenv run python web2py.py -M -S pacificW
/Users/ben/.local/share/virtualenvs/web2py-6f1t1lX6/lib/python3.7/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
web2py Web Framework
Created by Massimo Di Pierro, Copyright 2007-2018
Version 2.17.1-stable+timestamp.2018.08.05.17.57.00
Database drivers available: sqlite3, psycopg2, imaplib, pymysql, pyodbc
WARNING:web2py:import IPython error; use default python shell
Python 3.7.0 (default, Jul 23 2018, 20:22:55)
[Clang 9.1.0 (clang-902.0.39.2)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> db.export_to_csv_file(open('somefile.csv', 'wb'))
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/Users/ben/Documents/programs/web2py/gluon/packages/dal/pydal/base.py", line 844, in export_to_csv_file
ofile.write('TABLE %s\r\n' % table)
TypeError: a bytes-like object is required, not 'str'
>>>
```
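The traceback points at a text/bytes mismatch: `export_to_csv_file` writes `str` (`'TABLE %s\r\n' % table`), so a file opened with `'wb'` rejects it under Python 3. A minimal workaround sketch, assuming a text-mode file object is acceptable to the caller:

```python
# Sketch of a workaround (not the repository fix): open the target file in text
# mode so that str writes succeed; `db` is the connected DAL instance from the
# session above.
with open('somefile.csv', 'w', newline='', encoding='utf-8') as ofile:
    db.export_to_csv_file(ofile)
```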
</issue>
<code>
[start of gluon/dal/adapters/oracle.py]
1 # -*- coding: utf-8 -*-
2 import base64
3 import datetime
4 import re
5
6 from .._globals import IDENTITY
7 from .._load import cx_Oracle
8 from .base import BaseAdapter
9
10 class OracleAdapter(BaseAdapter):
11 drivers = ('cx_Oracle',)
12
13 commit_on_alter_table = False
14 types = {
15 'boolean': 'CHAR(1)',
16 'string': 'VARCHAR2(%(length)s)',
17 'text': 'CLOB',
18 'json': 'CLOB',
19 'password': 'VARCHAR2(%(length)s)',
20 'blob': 'CLOB',
21 'upload': 'VARCHAR2(%(length)s)',
22 'integer': 'INT',
23 'bigint': 'NUMBER',
24 'float': 'FLOAT',
25 'double': 'BINARY_DOUBLE',
26 'decimal': 'NUMERIC(%(precision)s,%(scale)s)',
27 'date': 'DATE',
28 'time': 'CHAR(8)',
29 'datetime': 'DATE',
30 'id': 'NUMBER PRIMARY KEY',
31 'reference': 'NUMBER, CONSTRAINT %(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s',
32 'list:integer': 'CLOB',
33 'list:string': 'CLOB',
34 'list:reference': 'CLOB',
35 'big-id': 'NUMBER PRIMARY KEY',
36 'big-reference': 'NUMBER, CONSTRAINT %(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s',
37 'reference FK': ', CONSTRAINT FK_%(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s',
38 'reference TFK': ' CONSTRAINT FK_%(foreign_table)s_PK FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_table)s (%(foreign_key)s) ON DELETE %(on_delete_action)s',
39 }
40
41
42 def trigger_name(self,tablename):
43 return '%s_trigger' % tablename
44
45 def LEFT_JOIN(self):
46 return 'LEFT OUTER JOIN'
47
48 def RANDOM(self):
49 return 'dbms_random.value'
50
51 def NOT_NULL(self,default,field_type):
52 return 'DEFAULT %s NOT NULL' % self.represent(default,field_type)
53
54 def REGEXP(self, first, second):
55 return 'REGEXP_LIKE(%s, %s)' % (self.expand(first),
56 self.expand(second, 'string'))
57
58 def _drop(self,table,mode):
59 sequence_name = table._sequence_name
60 return ['DROP TABLE %s %s;' % (table.sqlsafe, mode), 'DROP SEQUENCE %s;' % sequence_name]
61
62 def select_limitby(self, sql_s, sql_f, sql_t, sql_w, sql_o, limitby):
63 if limitby:
64 (lmin, lmax) = limitby
65 if len(sql_w) > 1:
66 sql_w_row = sql_w + ' AND w_row > %i' % lmin
67 else:
68 sql_w_row = 'WHERE w_row > %i' % lmin
69 return 'SELECT %s %s FROM (SELECT w_tmp.*, ROWNUM w_row FROM (SELECT %s FROM %s%s%s) w_tmp WHERE ROWNUM<=%i) %s %s %s;' % (sql_s, sql_f, sql_f, sql_t, sql_w, sql_o, lmax, sql_t, sql_w_row, sql_o)
70 return 'SELECT %s %s FROM %s%s%s;' % (sql_s, sql_f, sql_t, sql_w, sql_o)
71
72 def constraint_name(self, tablename, fieldname):
73 constraint_name = BaseAdapter.constraint_name(self, tablename, fieldname)
74 if len(constraint_name)>30:
75 constraint_name = '%s_%s__constraint' % (tablename[:10], fieldname[:7])
76 return constraint_name
77
78 def represent_exceptions(self, obj, fieldtype):
79 if fieldtype == 'blob':
80 obj = base64.b64encode(str(obj))
81 return ":CLOB('%s')" % obj
82 elif fieldtype == 'date':
83 if isinstance(obj, (datetime.date, datetime.datetime)):
84 obj = obj.isoformat()[:10]
85 else:
86 obj = str(obj)
87 return "to_date('%s','yyyy-mm-dd')" % obj
88 elif fieldtype == 'datetime':
89 if isinstance(obj, datetime.datetime):
90 obj = obj.isoformat()[:19].replace('T',' ')
91 elif isinstance(obj, datetime.date):
92 obj = obj.isoformat()[:10]+' 00:00:00'
93 else:
94 obj = str(obj)
95 return "to_date('%s','yyyy-mm-dd hh24:mi:ss')" % obj
96 return None
97
98 def __init__(self,db,uri,pool_size=0,folder=None,db_codec ='UTF-8',
99 credential_decoder=IDENTITY, driver_args={},
100 adapter_args={}, do_connect=True, after_connection=None):
101 self.db = db
102 self.dbengine = "oracle"
103 self.uri = uri
104 if do_connect: self.find_driver(adapter_args,uri)
105 self.pool_size = pool_size
106 self.folder = folder
107 self.db_codec = db_codec
108 self._after_connection = after_connection
109 self.find_or_make_work_folder()
110 ruri = uri.split('://',1)[1]
111 if not 'threaded' in driver_args:
112 driver_args['threaded']=True
113 def connector(uri=ruri,driver_args=driver_args):
114 return self.driver.connect(uri,**driver_args)
115 self.connector = connector
116 if do_connect: self.reconnect()
117
118 def after_connection(self):
119 self.execute("ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';")
120 self.execute("ALTER SESSION SET NLS_TIMESTAMP_FORMAT = 'YYYY-MM-DD HH24:MI:SS';")
121
122 oracle_fix = re.compile("[^']*('[^']*'[^']*)*\:(?P<clob>CLOB\('([^']+|'')*'\))")
123
124 def execute(self, command, args=None):
125 args = args or []
126 i = 1
127 while True:
128 m = self.oracle_fix.match(command)
129 if not m:
130 break
131 command = command[:m.start('clob')] + str(i) + command[m.end('clob'):]
132 args.append(m.group('clob')[6:-2].replace("''", "'"))
133 i += 1
134 if command[-1:]==';':
135 command = command[:-1]
136 return self.log_execute(command, args)
137
138 def create_sequence_and_triggers(self, query, table, **args):
139 tablename = table._tablename
140 id_name = table._id.name
141 sequence_name = table._sequence_name
142 trigger_name = table._trigger_name
143 self.execute(query)
144 self.execute('CREATE SEQUENCE %s START WITH 1 INCREMENT BY 1 NOMAXVALUE MINVALUE -1;' % sequence_name)
145 self.execute("""
146 CREATE OR REPLACE TRIGGER %(trigger_name)s BEFORE INSERT ON %(tablename)s FOR EACH ROW
147 DECLARE
148 curr_val NUMBER;
149 diff_val NUMBER;
150 PRAGMA autonomous_transaction;
151 BEGIN
152 IF :NEW.%(id)s IS NOT NULL THEN
153 EXECUTE IMMEDIATE 'SELECT %(sequence_name)s.nextval FROM dual' INTO curr_val;
154 diff_val := :NEW.%(id)s - curr_val - 1;
155 IF diff_val != 0 THEN
156 EXECUTE IMMEDIATE 'alter sequence %(sequence_name)s increment by '|| diff_val;
157 EXECUTE IMMEDIATE 'SELECT %(sequence_name)s.nextval FROM dual' INTO curr_val;
158 EXECUTE IMMEDIATE 'alter sequence %(sequence_name)s increment by 1';
159 END IF;
160 END IF;
161 SELECT %(sequence_name)s.nextval INTO :NEW.%(id)s FROM DUAL;
162 END;
163 """ % dict(trigger_name=trigger_name, tablename=tablename,
164 sequence_name=sequence_name,id=id_name))
165
166 def lastrowid(self,table):
167 sequence_name = table._sequence_name
168 self.execute('SELECT %s.currval FROM dual;' % sequence_name)
169 return long(self.cursor.fetchone()[0])
170
171 #def parse_value(self, value, field_type, blob_decode=True):
172 # if blob_decode and isinstance(value, cx_Oracle.LOB):
173 # try:
174 # value = value.read()
175 # except self.driver.ProgrammingError:
176 # # After a subsequent fetch the LOB value is not valid anymore
177 # pass
178 # return BaseAdapter.parse_value(self, value, field_type, blob_decode)
179
180 def _fetchall(self):
181 if any(x[1]==cx_Oracle.LOB for x in self.cursor.description):
182 return [tuple([(c.read() if type(c) == cx_Oracle.LOB else c) \
183 for c in r]) for r in self.cursor]
184 else:
185 return self.cursor.fetchall()
186
187 def sqlsafe_table(self, tablename, ot=None):
188 if ot is not None:
189 return (self.QUOTE_TEMPLATE + ' ' \
190 + self.QUOTE_TEMPLATE) % (ot, tablename)
191 return self.QUOTE_TEMPLATE % tablename
192
[end of gluon/dal/adapters/oracle.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gluon/dal/adapters/oracle.py b/gluon/dal/adapters/oracle.py
--- a/gluon/dal/adapters/oracle.py
+++ b/gluon/dal/adapters/oracle.py
@@ -136,7 +136,7 @@
return self.log_execute(command, args)
def create_sequence_and_triggers(self, query, table, **args):
- tablename = table._tablename
+ tablename = table._rname or table._tablename
id_name = table._id.name
sequence_name = table._sequence_name
trigger_name = table._trigger_name
| {"golden_diff": "diff --git a/gluon/dal/adapters/oracle.py b/gluon/dal/adapters/oracle.py\n--- a/gluon/dal/adapters/oracle.py\n+++ b/gluon/dal/adapters/oracle.py\n@@ -136,7 +136,7 @@\n return self.log_execute(command, args)\n \n def create_sequence_and_triggers(self, query, table, **args):\n- tablename = table._tablename\n+ tablename = table._rname or table._tablename\n id_name = table._id.name\n sequence_name = table._sequence_name\n trigger_name = table._trigger_name\n", "issue": "db.export_to_csv_file() error with python 3.7\n**Describe the bug**\r\ndb.export_to_csv_file() error with python 3.7\r\n```\r\nTypeError: a bytes-like object is required, not 'str'\r\n```\r\nin\r\n```\r\nweb2py/gluon/packages/dal/pydal/base.py\", line 844, in export_to_csv_file\r\n ofile.write('TABLE %s\\r\\n' % table)\r\n```\r\n\r\n**To Reproduce**\r\n```\r\nBens-iMac:web2py ben$ pipenv run python web2py.py -M -S pacificW\r\n/Users/ben/.local/share/virtualenvs/web2py-6f1t1lX6/lib/python3.7/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use \"pip install psycopg2-binary\" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.\r\n \"\"\")\r\nweb2py Web Framework\r\nCreated by Massimo Di Pierro, Copyright 2007-2018\r\nVersion 2.17.1-stable+timestamp.2018.08.05.17.57.00\r\nDatabase drivers available: sqlite3, psycopg2, imaplib, pymysql, pyodbc\r\nWARNING:web2py:import IPython error; use default python shell\r\nPython 3.7.0 (default, Jul 23 2018, 20:22:55) \r\n[Clang 9.1.0 (clang-902.0.39.2)] on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n(InteractiveConsole)\r\n>>> db.export_to_csv_file(open('somefile.csv', 'wb'))\r\nTraceback (most recent call last):\r\n File \"<console>\", line 1, in <module>\r\n File \"/Users/ben/Documents/programs/web2py/gluon/packages/dal/pydal/base.py\", line 844, in export_to_csv_file\r\n ofile.write('TABLE %s\\r\\n' % table)\r\nTypeError: a bytes-like object is required, not 'str'\r\n>>> \r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport base64\nimport datetime\nimport re\n\nfrom .._globals import IDENTITY\nfrom .._load import cx_Oracle\nfrom .base import BaseAdapter\n\nclass OracleAdapter(BaseAdapter):\n drivers = ('cx_Oracle',)\n\n commit_on_alter_table = False\n types = {\n 'boolean': 'CHAR(1)',\n 'string': 'VARCHAR2(%(length)s)',\n 'text': 'CLOB',\n 'json': 'CLOB',\n 'password': 'VARCHAR2(%(length)s)',\n 'blob': 'CLOB',\n 'upload': 'VARCHAR2(%(length)s)',\n 'integer': 'INT',\n 'bigint': 'NUMBER',\n 'float': 'FLOAT',\n 'double': 'BINARY_DOUBLE',\n 'decimal': 'NUMERIC(%(precision)s,%(scale)s)',\n 'date': 'DATE',\n 'time': 'CHAR(8)',\n 'datetime': 'DATE',\n 'id': 'NUMBER PRIMARY KEY',\n 'reference': 'NUMBER, CONSTRAINT %(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s',\n 'list:integer': 'CLOB',\n 'list:string': 'CLOB',\n 'list:reference': 'CLOB',\n 'big-id': 'NUMBER PRIMARY KEY',\n 'big-reference': 'NUMBER, CONSTRAINT %(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s',\n 'reference FK': ', CONSTRAINT FK_%(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s',\n 'reference TFK': ' CONSTRAINT FK_%(foreign_table)s_PK FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_table)s (%(foreign_key)s) ON DELETE 
%(on_delete_action)s',\n }\n\n\n def trigger_name(self,tablename):\n return '%s_trigger' % tablename\n\n def LEFT_JOIN(self):\n return 'LEFT OUTER JOIN'\n\n def RANDOM(self):\n return 'dbms_random.value'\n\n def NOT_NULL(self,default,field_type):\n return 'DEFAULT %s NOT NULL' % self.represent(default,field_type)\n\n def REGEXP(self, first, second):\n return 'REGEXP_LIKE(%s, %s)' % (self.expand(first),\n self.expand(second, 'string'))\n\n def _drop(self,table,mode):\n sequence_name = table._sequence_name\n return ['DROP TABLE %s %s;' % (table.sqlsafe, mode), 'DROP SEQUENCE %s;' % sequence_name]\n\n def select_limitby(self, sql_s, sql_f, sql_t, sql_w, sql_o, limitby):\n if limitby:\n (lmin, lmax) = limitby\n if len(sql_w) > 1:\n sql_w_row = sql_w + ' AND w_row > %i' % lmin\n else:\n sql_w_row = 'WHERE w_row > %i' % lmin\n return 'SELECT %s %s FROM (SELECT w_tmp.*, ROWNUM w_row FROM (SELECT %s FROM %s%s%s) w_tmp WHERE ROWNUM<=%i) %s %s %s;' % (sql_s, sql_f, sql_f, sql_t, sql_w, sql_o, lmax, sql_t, sql_w_row, sql_o)\n return 'SELECT %s %s FROM %s%s%s;' % (sql_s, sql_f, sql_t, sql_w, sql_o)\n\n def constraint_name(self, tablename, fieldname):\n constraint_name = BaseAdapter.constraint_name(self, tablename, fieldname)\n if len(constraint_name)>30:\n constraint_name = '%s_%s__constraint' % (tablename[:10], fieldname[:7])\n return constraint_name\n\n def represent_exceptions(self, obj, fieldtype):\n if fieldtype == 'blob':\n obj = base64.b64encode(str(obj))\n return \":CLOB('%s')\" % obj\n elif fieldtype == 'date':\n if isinstance(obj, (datetime.date, datetime.datetime)):\n obj = obj.isoformat()[:10]\n else:\n obj = str(obj)\n return \"to_date('%s','yyyy-mm-dd')\" % obj\n elif fieldtype == 'datetime':\n if isinstance(obj, datetime.datetime):\n obj = obj.isoformat()[:19].replace('T',' ')\n elif isinstance(obj, datetime.date):\n obj = obj.isoformat()[:10]+' 00:00:00'\n else:\n obj = str(obj)\n return \"to_date('%s','yyyy-mm-dd hh24:mi:ss')\" % obj\n return None\n\n def __init__(self,db,uri,pool_size=0,folder=None,db_codec ='UTF-8',\n credential_decoder=IDENTITY, driver_args={},\n adapter_args={}, do_connect=True, after_connection=None):\n self.db = db\n self.dbengine = \"oracle\"\n self.uri = uri\n if do_connect: self.find_driver(adapter_args,uri)\n self.pool_size = pool_size\n self.folder = folder\n self.db_codec = db_codec\n self._after_connection = after_connection\n self.find_or_make_work_folder()\n ruri = uri.split('://',1)[1]\n if not 'threaded' in driver_args:\n driver_args['threaded']=True\n def connector(uri=ruri,driver_args=driver_args):\n return self.driver.connect(uri,**driver_args)\n self.connector = connector\n if do_connect: self.reconnect()\n\n def after_connection(self):\n self.execute(\"ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';\")\n self.execute(\"ALTER SESSION SET NLS_TIMESTAMP_FORMAT = 'YYYY-MM-DD HH24:MI:SS';\")\n\n oracle_fix = re.compile(\"[^']*('[^']*'[^']*)*\\:(?P<clob>CLOB\\('([^']+|'')*'\\))\")\n\n def execute(self, command, args=None):\n args = args or []\n i = 1\n while True:\n m = self.oracle_fix.match(command)\n if not m:\n break\n command = command[:m.start('clob')] + str(i) + command[m.end('clob'):]\n args.append(m.group('clob')[6:-2].replace(\"''\", \"'\"))\n i += 1\n if command[-1:]==';':\n command = command[:-1]\n return self.log_execute(command, args)\n\n def create_sequence_and_triggers(self, query, table, **args):\n tablename = table._tablename\n id_name = table._id.name\n sequence_name = table._sequence_name\n trigger_name = 
table._trigger_name\n self.execute(query)\n self.execute('CREATE SEQUENCE %s START WITH 1 INCREMENT BY 1 NOMAXVALUE MINVALUE -1;' % sequence_name)\n self.execute(\"\"\"\n CREATE OR REPLACE TRIGGER %(trigger_name)s BEFORE INSERT ON %(tablename)s FOR EACH ROW\n DECLARE\n curr_val NUMBER;\n diff_val NUMBER;\n PRAGMA autonomous_transaction;\n BEGIN\n IF :NEW.%(id)s IS NOT NULL THEN\n EXECUTE IMMEDIATE 'SELECT %(sequence_name)s.nextval FROM dual' INTO curr_val;\n diff_val := :NEW.%(id)s - curr_val - 1;\n IF diff_val != 0 THEN\n EXECUTE IMMEDIATE 'alter sequence %(sequence_name)s increment by '|| diff_val;\n EXECUTE IMMEDIATE 'SELECT %(sequence_name)s.nextval FROM dual' INTO curr_val;\n EXECUTE IMMEDIATE 'alter sequence %(sequence_name)s increment by 1';\n END IF;\n END IF;\n SELECT %(sequence_name)s.nextval INTO :NEW.%(id)s FROM DUAL;\n END;\n \"\"\" % dict(trigger_name=trigger_name, tablename=tablename,\n sequence_name=sequence_name,id=id_name))\n\n def lastrowid(self,table):\n sequence_name = table._sequence_name\n self.execute('SELECT %s.currval FROM dual;' % sequence_name)\n return long(self.cursor.fetchone()[0])\n\n #def parse_value(self, value, field_type, blob_decode=True):\n # if blob_decode and isinstance(value, cx_Oracle.LOB):\n # try:\n # value = value.read()\n # except self.driver.ProgrammingError:\n # # After a subsequent fetch the LOB value is not valid anymore\n # pass\n # return BaseAdapter.parse_value(self, value, field_type, blob_decode)\n\n def _fetchall(self):\n if any(x[1]==cx_Oracle.LOB for x in self.cursor.description):\n return [tuple([(c.read() if type(c) == cx_Oracle.LOB else c) \\\n for c in r]) for r in self.cursor]\n else:\n return self.cursor.fetchall()\n\n def sqlsafe_table(self, tablename, ot=None):\n if ot is not None:\n return (self.QUOTE_TEMPLATE + ' ' \\\n + self.QUOTE_TEMPLATE) % (ot, tablename)\n return self.QUOTE_TEMPLATE % tablename\n", "path": "gluon/dal/adapters/oracle.py"}]} | 3,519 | 140 |
gh_patches_debug_35667 | rasdani/github-patches | git_diff | scrapy__scrapy-4799 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Usage of HttpCompressionMiddleware needs to be reflected in Scrapy stats
## Summary
Usage of `HttpCompressionMiddleware` needs to be reflected in Scrapy stats.
## Motivation
In order to estimate Scrapy memory usage efficiency and prevent memory leaks like [this](https://stackoverflow.com/q/63936759/10884791), I will need to know:
1. the number of request/response objects that can be active (this can be achieved by using [`trackref`](https://docs.scrapy.org/en/latest/topics/leaks.html#debugging-memory-leaks-with-trackref))
2. the size of memory required to store that number of request/response objects.
A lot of websites use compression to reduce traffic. In this case I would like to calculate the average size of **decompressed** responses to estimate point 2.
The decompression process means that at some point the application will need to allocate memory to hold both the compressed and the decompressed response body, and I will need to know these sizes to get a more complete picture of Scrapy memory usage.
Also, the size of the decompressed body will be several times larger than the compressed response, which will affect Scrapy memory usage.
## Describe alternatives you've considered
The easiest one is to change the priority of the `DownloaderStats` middleware and check the difference in the `downloader/response_bytes` stats parameter.
```
custom_settings = {"DOWNLOAD_DELAY":1,
"DOWNLOADER_MIDDLEWARES":{
'scrapy.downloadermiddlewares.stats.DownloaderStats':50
}
```
Stats from quotes.toscrape.com spider (it uses `gzip` compression) with default settings:
```
{'downloader/request_bytes': 2642,
'downloader/request_count': 10,
'downloader/request_method_count/GET': 10,
'downloader/response_bytes': 24534,
```
And with changed priority of `DownloaderStats` middleware:
```
{'downloader/request_bytes': 912, # size reduced as it didn't count size of request headers populated by downloader middlewares
'downloader/request_count': 10,
'downloader/request_method_count/GET': 10,
'downloader/response_bytes': 110191, # it counted size of decompressed data
```
Average size of compressed response (by default) - 2453 bytes.
Average size of decompressed response - 11019 bytes (~4.5 times more).
## Additional context
A potential solution is to add something like this:
`self.stats.inc_value('decompressed_bytes', spider=spider)`
into the `process_response` method of `HttpCompressionMiddleware`.
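A minimal way to prototype that suggestion is a subclass registered in `DOWNLOADER_MIDDLEWARES`; the stat name and the way `stats` is wired in below are only illustrative assumptions, not an existing Scrapy API or the final fix:
```python
from scrapy.exceptions import NotConfigured
from scrapy.downloadermiddlewares.httpcompression import HttpCompressionMiddleware


class StatsHttpCompressionMiddleware(HttpCompressionMiddleware):
    """Sketch: record how many bytes responses occupy after decompression."""

    def __init__(self, stats=None):
        super().__init__()
        self.stats = stats  # assumed to be crawler.stats

    @classmethod
    def from_crawler(cls, crawler):
        if not crawler.settings.getbool("COMPRESSION_ENABLED"):
            raise NotConfigured
        return cls(stats=crawler.stats)

    def process_response(self, request, response, spider):
        decoded = super().process_response(request, response, spider)
        # The base middleware only returns a new object when a Content-Encoding was decoded.
        if self.stats is not None and decoded is not response:
            # 'decompressed_bytes' is a hypothetical stat name taken from the issue text.
            self.stats.inc_value("decompressed_bytes", len(decoded.body), spider=spider)
        return decoded
```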
</issue>
<code>
[start of scrapy/downloadermiddlewares/httpcompression.py]
1 import io
2 import zlib
3
4 from scrapy.utils.gz import gunzip
5 from scrapy.http import Response, TextResponse
6 from scrapy.responsetypes import responsetypes
7 from scrapy.exceptions import NotConfigured
8
9
10 ACCEPTED_ENCODINGS = [b'gzip', b'deflate']
11
12 try:
13 import brotli
14 ACCEPTED_ENCODINGS.append(b'br')
15 except ImportError:
16 pass
17
18 try:
19 import zstandard
20 ACCEPTED_ENCODINGS.append(b'zstd')
21 except ImportError:
22 pass
23
24
25 class HttpCompressionMiddleware:
26 """This middleware allows compressed (gzip, deflate) traffic to be
27 sent/received from web sites"""
28 @classmethod
29 def from_crawler(cls, crawler):
30 if not crawler.settings.getbool('COMPRESSION_ENABLED'):
31 raise NotConfigured
32 return cls()
33
34 def process_request(self, request, spider):
35 request.headers.setdefault('Accept-Encoding',
36 b", ".join(ACCEPTED_ENCODINGS))
37
38 def process_response(self, request, response, spider):
39
40 if request.method == 'HEAD':
41 return response
42 if isinstance(response, Response):
43 content_encoding = response.headers.getlist('Content-Encoding')
44 if content_encoding:
45 encoding = content_encoding.pop()
46 decoded_body = self._decode(response.body, encoding.lower())
47 respcls = responsetypes.from_args(
48 headers=response.headers, url=response.url, body=decoded_body
49 )
50 kwargs = dict(cls=respcls, body=decoded_body)
51 if issubclass(respcls, TextResponse):
52 # force recalculating the encoding until we make sure the
53 # responsetypes guessing is reliable
54 kwargs['encoding'] = None
55 response = response.replace(**kwargs)
56 if not content_encoding:
57 del response.headers['Content-Encoding']
58
59 return response
60
61 def _decode(self, body, encoding):
62 if encoding == b'gzip' or encoding == b'x-gzip':
63 body = gunzip(body)
64
65 if encoding == b'deflate':
66 try:
67 body = zlib.decompress(body)
68 except zlib.error:
69 # ugly hack to work with raw deflate content that may
70 # be sent by microsoft servers. For more information, see:
71 # http://carsten.codimi.de/gzip.yaws/
72 # http://www.port80software.com/200ok/archive/2005/10/31/868.aspx
73 # http://www.gzip.org/zlib/zlib_faq.html#faq38
74 body = zlib.decompress(body, -15)
75 if encoding == b'br' and b'br' in ACCEPTED_ENCODINGS:
76 body = brotli.decompress(body)
77 if encoding == b'zstd' and b'zstd' in ACCEPTED_ENCODINGS:
78 # Using its streaming API since its simple API could handle only cases
79 # where there is content size data embedded in the frame
80 reader = zstandard.ZstdDecompressor().stream_reader(io.BytesIO(body))
81 body = reader.read()
82 return body
83
[end of scrapy/downloadermiddlewares/httpcompression.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/downloadermiddlewares/httpcompression.py b/scrapy/downloadermiddlewares/httpcompression.py
--- a/scrapy/downloadermiddlewares/httpcompression.py
+++ b/scrapy/downloadermiddlewares/httpcompression.py
@@ -1,10 +1,12 @@
import io
+import warnings
import zlib
-from scrapy.utils.gz import gunzip
+from scrapy.exceptions import NotConfigured
from scrapy.http import Response, TextResponse
from scrapy.responsetypes import responsetypes
-from scrapy.exceptions import NotConfigured
+from scrapy.utils.deprecate import ScrapyDeprecationWarning
+from scrapy.utils.gz import gunzip
ACCEPTED_ENCODINGS = [b'gzip', b'deflate']
@@ -25,11 +27,25 @@
class HttpCompressionMiddleware:
"""This middleware allows compressed (gzip, deflate) traffic to be
sent/received from web sites"""
+ def __init__(self, stats=None):
+ self.stats = stats
+
@classmethod
def from_crawler(cls, crawler):
if not crawler.settings.getbool('COMPRESSION_ENABLED'):
raise NotConfigured
- return cls()
+ try:
+ return cls(stats=crawler.stats)
+ except TypeError:
+ warnings.warn(
+ "HttpCompressionMiddleware subclasses must either modify "
+ "their '__init__' method to support a 'stats' parameter or "
+ "reimplement the 'from_crawler' method.",
+ ScrapyDeprecationWarning,
+ )
+ result = cls()
+ result.stats = crawler.stats
+ return result
def process_request(self, request, spider):
request.headers.setdefault('Accept-Encoding',
@@ -44,6 +60,9 @@
if content_encoding:
encoding = content_encoding.pop()
decoded_body = self._decode(response.body, encoding.lower())
+ if self.stats:
+ self.stats.inc_value('httpcompression/response_bytes', len(decoded_body), spider=spider)
+ self.stats.inc_value('httpcompression/response_count', spider=spider)
respcls = responsetypes.from_args(
headers=response.headers, url=response.url, body=decoded_body
)
| {"golden_diff": "diff --git a/scrapy/downloadermiddlewares/httpcompression.py b/scrapy/downloadermiddlewares/httpcompression.py\n--- a/scrapy/downloadermiddlewares/httpcompression.py\n+++ b/scrapy/downloadermiddlewares/httpcompression.py\n@@ -1,10 +1,12 @@\n import io\n+import warnings\n import zlib\n \n-from scrapy.utils.gz import gunzip\n+from scrapy.exceptions import NotConfigured\n from scrapy.http import Response, TextResponse\n from scrapy.responsetypes import responsetypes\n-from scrapy.exceptions import NotConfigured\n+from scrapy.utils.deprecate import ScrapyDeprecationWarning\n+from scrapy.utils.gz import gunzip\n \n \n ACCEPTED_ENCODINGS = [b'gzip', b'deflate']\n@@ -25,11 +27,25 @@\n class HttpCompressionMiddleware:\n \"\"\"This middleware allows compressed (gzip, deflate) traffic to be\n sent/received from web sites\"\"\"\n+ def __init__(self, stats=None):\n+ self.stats = stats\n+\n @classmethod\n def from_crawler(cls, crawler):\n if not crawler.settings.getbool('COMPRESSION_ENABLED'):\n raise NotConfigured\n- return cls()\n+ try:\n+ return cls(stats=crawler.stats)\n+ except TypeError:\n+ warnings.warn(\n+ \"HttpCompressionMiddleware subclasses must either modify \"\n+ \"their '__init__' method to support a 'stats' parameter or \"\n+ \"reimplement the 'from_crawler' method.\",\n+ ScrapyDeprecationWarning,\n+ )\n+ result = cls()\n+ result.stats = crawler.stats\n+ return result\n \n def process_request(self, request, spider):\n request.headers.setdefault('Accept-Encoding',\n@@ -44,6 +60,9 @@\n if content_encoding:\n encoding = content_encoding.pop()\n decoded_body = self._decode(response.body, encoding.lower())\n+ if self.stats:\n+ self.stats.inc_value('httpcompression/response_bytes', len(decoded_body), spider=spider)\n+ self.stats.inc_value('httpcompression/response_count', spider=spider)\n respcls = responsetypes.from_args(\n headers=response.headers, url=response.url, body=decoded_body\n )\n", "issue": "Usage of HttpCompressionMiddleware needs to be reflected in Scrapy stats\n## Summary\r\nUsage of `HttpCompressionMiddleware` needs to be relfected in Scrapy stats.\r\n## Motivation\r\nIn order to estimate scrapy memory usage efficiency and prevent.. memory leaks like [this](https://stackoverflow.com/q/63936759/10884791).\r\nI will need to know:\r\n1. number of request/response objects that can be active (can be achieved by using [`trackref`](https://docs.scrapy.org/en/latest/topics/leaks.html#debugging-memory-leaks-with-trackref) )\r\n2. size of memory required to store that number of request/response objects. \r\n\r\nA lot of websites use compression to reduce traffic. 
In this case I would like to calculate average size of **decompressed** responses to estimate p.2.\r\n\r\nDecompression process means that at some point application will require to allocate memory to store both compressed and decompressed response body and I will need to know this sizes to have more complete vision of scrapy memory usage.\r\n\r\nAlso size of decompressed body will be several times more than size of compressed response and it will affect scrapy memory usage.\r\n\r\n## Describe alternatives you've considered\r\nThe easiest one - is to change priority of `DownloaderStats` middleware and check difference in `downloader/response_bytes` stats parameter.\r\n```\r\n custom_settings = {\"DOWNLOAD_DELAY\":1,\r\n \"DOWNLOADER_MIDDLEWARES\":{\r\n 'scrapy.downloadermiddlewares.stats.DownloaderStats':50\r\n }\r\n```\r\nStats from quotes.toscrape.com spider (it uses `gzip` compression) with default settings:\r\n\r\n```\r\n{'downloader/request_bytes': 2642,\r\n 'downloader/request_count': 10,\r\n 'downloader/request_method_count/GET': 10,\r\n 'downloader/response_bytes': 24534,\r\n```\r\n \r\nAnd with changed priority of `DownloaderStats` middleware:\r\n\r\n```\r\n{'downloader/request_bytes': 912, # size reduced as it didn't count size of request headers populated by downloader middlewares\r\n 'downloader/request_count': 10,\r\n 'downloader/request_method_count/GET': 10,\r\n 'downloader/response_bytes': 110191, # it counted size of decompressed data \r\n```\r\n\r\nAverage size of compressed response (by default) - 2453 bytes.\r\nAverage size of decompressed response - 11019 bytes (~4.5 times more).\r\n\r\n## Additional context\r\nPotential solution is to add something like this:\r\n` self.stats.inc_value('decompressed_bytes', spider=spider)`\r\ninto `process_response` method of `HttpCompressionMiddleware`\n", "before_files": [{"content": "import io\nimport zlib\n\nfrom scrapy.utils.gz import gunzip\nfrom scrapy.http import Response, TextResponse\nfrom scrapy.responsetypes import responsetypes\nfrom scrapy.exceptions import NotConfigured\n\n\nACCEPTED_ENCODINGS = [b'gzip', b'deflate']\n\ntry:\n import brotli\n ACCEPTED_ENCODINGS.append(b'br')\nexcept ImportError:\n pass\n\ntry:\n import zstandard\n ACCEPTED_ENCODINGS.append(b'zstd')\nexcept ImportError:\n pass\n\n\nclass HttpCompressionMiddleware:\n \"\"\"This middleware allows compressed (gzip, deflate) traffic to be\n sent/received from web sites\"\"\"\n @classmethod\n def from_crawler(cls, crawler):\n if not crawler.settings.getbool('COMPRESSION_ENABLED'):\n raise NotConfigured\n return cls()\n\n def process_request(self, request, spider):\n request.headers.setdefault('Accept-Encoding',\n b\", \".join(ACCEPTED_ENCODINGS))\n\n def process_response(self, request, response, spider):\n\n if request.method == 'HEAD':\n return response\n if isinstance(response, Response):\n content_encoding = response.headers.getlist('Content-Encoding')\n if content_encoding:\n encoding = content_encoding.pop()\n decoded_body = self._decode(response.body, encoding.lower())\n respcls = responsetypes.from_args(\n headers=response.headers, url=response.url, body=decoded_body\n )\n kwargs = dict(cls=respcls, body=decoded_body)\n if issubclass(respcls, TextResponse):\n # force recalculating the encoding until we make sure the\n # responsetypes guessing is reliable\n kwargs['encoding'] = None\n response = response.replace(**kwargs)\n if not content_encoding:\n del response.headers['Content-Encoding']\n\n return response\n\n def _decode(self, body, encoding):\n if 
encoding == b'gzip' or encoding == b'x-gzip':\n body = gunzip(body)\n\n if encoding == b'deflate':\n try:\n body = zlib.decompress(body)\n except zlib.error:\n # ugly hack to work with raw deflate content that may\n # be sent by microsoft servers. For more information, see:\n # http://carsten.codimi.de/gzip.yaws/\n # http://www.port80software.com/200ok/archive/2005/10/31/868.aspx\n # http://www.gzip.org/zlib/zlib_faq.html#faq38\n body = zlib.decompress(body, -15)\n if encoding == b'br' and b'br' in ACCEPTED_ENCODINGS:\n body = brotli.decompress(body)\n if encoding == b'zstd' and b'zstd' in ACCEPTED_ENCODINGS:\n # Using its streaming API since its simple API could handle only cases\n # where there is content size data embedded in the frame\n reader = zstandard.ZstdDecompressor().stream_reader(io.BytesIO(body))\n body = reader.read()\n return body\n", "path": "scrapy/downloadermiddlewares/httpcompression.py"}]} | 1,921 | 471 |
gh_patches_debug_9872 | rasdani/github-patches | git_diff | google__flax-3344 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ResNetV1 implementation is actually ResNetV1.5
[Downsampling is performed by the 3x3 convolution](https://github.com/google/flax/blob/3ea6381139a340fa7b29c74a84205d8bad18b489/examples/imagenet/models.py#L74), which means it is actually [ResNetV1.5](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/resnet_50_v1_5_for_pytorch), not ResNetV1 as claimed. (In the original version the first convolution in the block has stride 2: see [original paper](https://arxiv.org/pdf/1512.03385.pdf), the caption to Table 1.)
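For comparison, a minimal self-contained sketch of what a "true" V1 bottleneck block would look like in Flax (normalization omitted for brevity; this variant is an assumption based on the paper, not code from this repository):
```python
from flax import linen as nn


class BottleneckV1Block(nn.Module):
    """Hypothetical original-V1 bottleneck: downsampling happens in the first 1x1 conv."""

    filters: int
    strides: tuple = (1, 1)

    @nn.compact
    def __call__(self, x):
        y = nn.Conv(self.filters, (1, 1), self.strides, use_bias=False)(x)  # stride 2 here (V1)
        y = nn.relu(y)
        y = nn.Conv(self.filters, (3, 3), use_bias=False)(y)  # stride 1, unlike the V1.5 code above
        y = nn.relu(y)
        y = nn.Conv(self.filters * 4, (1, 1), use_bias=False)(y)
        residual = x
        if residual.shape != y.shape:
            residual = nn.Conv(self.filters * 4, (1, 1), self.strides, use_bias=False)(x)
        return nn.relu(residual + y)
```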
</issue>
<code>
[start of examples/imagenet/models.py]
1 # Copyright 2023 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Flax implementation of ResNet V1."""
16
17 # See issue #620.
18 # pytype: disable=wrong-arg-count
19
20 from functools import partial
21 from typing import Any, Callable, Sequence, Tuple
22
23 from flax import linen as nn
24 import jax.numpy as jnp
25
26 ModuleDef = Any
27
28
29 class ResNetBlock(nn.Module):
30 """ResNet block."""
31
32 filters: int
33 conv: ModuleDef
34 norm: ModuleDef
35 act: Callable
36 strides: Tuple[int, int] = (1, 1)
37
38 @nn.compact
39 def __call__(
40 self,
41 x,
42 ):
43 residual = x
44 y = self.conv(self.filters, (3, 3), self.strides)(x)
45 y = self.norm()(y)
46 y = self.act(y)
47 y = self.conv(self.filters, (3, 3))(y)
48 y = self.norm(scale_init=nn.initializers.zeros_init())(y)
49
50 if residual.shape != y.shape:
51 residual = self.conv(
52 self.filters, (1, 1), self.strides, name='conv_proj'
53 )(residual)
54 residual = self.norm(name='norm_proj')(residual)
55
56 return self.act(residual + y)
57
58
59 class BottleneckResNetBlock(nn.Module):
60 """Bottleneck ResNet block."""
61
62 filters: int
63 conv: ModuleDef
64 norm: ModuleDef
65 act: Callable
66 strides: Tuple[int, int] = (1, 1)
67
68 @nn.compact
69 def __call__(self, x):
70 residual = x
71 y = self.conv(self.filters, (1, 1))(x)
72 y = self.norm()(y)
73 y = self.act(y)
74 y = self.conv(self.filters, (3, 3), self.strides)(y)
75 y = self.norm()(y)
76 y = self.act(y)
77 y = self.conv(self.filters * 4, (1, 1))(y)
78 y = self.norm(scale_init=nn.initializers.zeros_init())(y)
79
80 if residual.shape != y.shape:
81 residual = self.conv(
82 self.filters * 4, (1, 1), self.strides, name='conv_proj'
83 )(residual)
84 residual = self.norm(name='norm_proj')(residual)
85
86 return self.act(residual + y)
87
88
89 class ResNet(nn.Module):
90 """ResNetV1."""
91
92 stage_sizes: Sequence[int]
93 block_cls: ModuleDef
94 num_classes: int
95 num_filters: int = 64
96 dtype: Any = jnp.float32
97 act: Callable = nn.relu
98 conv: ModuleDef = nn.Conv
99
100 @nn.compact
101 def __call__(self, x, train: bool = True):
102 conv = partial(self.conv, use_bias=False, dtype=self.dtype)
103 norm = partial(
104 nn.BatchNorm,
105 use_running_average=not train,
106 momentum=0.9,
107 epsilon=1e-5,
108 dtype=self.dtype,
109 axis_name='batch',
110 )
111
112 x = conv(
113 self.num_filters,
114 (7, 7),
115 (2, 2),
116 padding=[(3, 3), (3, 3)],
117 name='conv_init',
118 )(x)
119 x = norm(name='bn_init')(x)
120 x = nn.relu(x)
121 x = nn.max_pool(x, (3, 3), strides=(2, 2), padding='SAME')
122 for i, block_size in enumerate(self.stage_sizes):
123 for j in range(block_size):
124 strides = (2, 2) if i > 0 and j == 0 else (1, 1)
125 x = self.block_cls(
126 self.num_filters * 2**i,
127 strides=strides,
128 conv=conv,
129 norm=norm,
130 act=self.act,
131 )(x)
132 x = jnp.mean(x, axis=(1, 2))
133 x = nn.Dense(self.num_classes, dtype=self.dtype)(x)
134 x = jnp.asarray(x, self.dtype)
135 return x
136
137
138 ResNet18 = partial(ResNet, stage_sizes=[2, 2, 2, 2], block_cls=ResNetBlock)
139 ResNet34 = partial(ResNet, stage_sizes=[3, 4, 6, 3], block_cls=ResNetBlock)
140 ResNet50 = partial(
141 ResNet, stage_sizes=[3, 4, 6, 3], block_cls=BottleneckResNetBlock
142 )
143 ResNet101 = partial(
144 ResNet, stage_sizes=[3, 4, 23, 3], block_cls=BottleneckResNetBlock
145 )
146 ResNet152 = partial(
147 ResNet, stage_sizes=[3, 8, 36, 3], block_cls=BottleneckResNetBlock
148 )
149 ResNet200 = partial(
150 ResNet, stage_sizes=[3, 24, 36, 3], block_cls=BottleneckResNetBlock
151 )
152
153
154 ResNet18Local = partial(
155 ResNet, stage_sizes=[2, 2, 2, 2], block_cls=ResNetBlock, conv=nn.ConvLocal
156 )
157
158
159 # Used for testing only.
160 _ResNet1 = partial(ResNet, stage_sizes=[1], block_cls=ResNetBlock)
161 _ResNet1Local = partial(
162 ResNet, stage_sizes=[1], block_cls=ResNetBlock, conv=nn.ConvLocal
163 )
164
[end of examples/imagenet/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/imagenet/models.py b/examples/imagenet/models.py
--- a/examples/imagenet/models.py
+++ b/examples/imagenet/models.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-"""Flax implementation of ResNet V1."""
+"""Flax implementation of ResNet V1.5."""
# See issue #620.
# pytype: disable=wrong-arg-count
@@ -87,7 +87,7 @@
class ResNet(nn.Module):
- """ResNetV1."""
+ """ResNetV1.5."""
stage_sizes: Sequence[int]
block_cls: ModuleDef
| {"golden_diff": "diff --git a/examples/imagenet/models.py b/examples/imagenet/models.py\n--- a/examples/imagenet/models.py\n+++ b/examples/imagenet/models.py\n@@ -12,7 +12,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-\"\"\"Flax implementation of ResNet V1.\"\"\"\n+\"\"\"Flax implementation of ResNet V1.5.\"\"\"\n \n # See issue #620.\n # pytype: disable=wrong-arg-count\n@@ -87,7 +87,7 @@\n \n \n class ResNet(nn.Module):\n- \"\"\"ResNetV1.\"\"\"\n+ \"\"\"ResNetV1.5.\"\"\"\n \n stage_sizes: Sequence[int]\n block_cls: ModuleDef\n", "issue": "ResNetV1 implementation is actually ResNetV1.5\n[Downsampling is performed by the 3x3 convolution](https://github.com/google/flax/blob/3ea6381139a340fa7b29c74a84205d8bad18b489/examples/imagenet/models.py#L74), which means it is actually [ResNetV1.5](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/resnet_50_v1_5_for_pytorch), not ResNetV1 as claimed. (In the original version the first convolution in the block has stride 2: see [original paper](https://arxiv.org/pdf/1512.03385.pdf), the caption to Table 1.)\n", "before_files": [{"content": "# Copyright 2023 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Flax implementation of ResNet V1.\"\"\"\n\n# See issue #620.\n# pytype: disable=wrong-arg-count\n\nfrom functools import partial\nfrom typing import Any, Callable, Sequence, Tuple\n\nfrom flax import linen as nn\nimport jax.numpy as jnp\n\nModuleDef = Any\n\n\nclass ResNetBlock(nn.Module):\n \"\"\"ResNet block.\"\"\"\n\n filters: int\n conv: ModuleDef\n norm: ModuleDef\n act: Callable\n strides: Tuple[int, int] = (1, 1)\n\n @nn.compact\n def __call__(\n self,\n x,\n ):\n residual = x\n y = self.conv(self.filters, (3, 3), self.strides)(x)\n y = self.norm()(y)\n y = self.act(y)\n y = self.conv(self.filters, (3, 3))(y)\n y = self.norm(scale_init=nn.initializers.zeros_init())(y)\n\n if residual.shape != y.shape:\n residual = self.conv(\n self.filters, (1, 1), self.strides, name='conv_proj'\n )(residual)\n residual = self.norm(name='norm_proj')(residual)\n\n return self.act(residual + y)\n\n\nclass BottleneckResNetBlock(nn.Module):\n \"\"\"Bottleneck ResNet block.\"\"\"\n\n filters: int\n conv: ModuleDef\n norm: ModuleDef\n act: Callable\n strides: Tuple[int, int] = (1, 1)\n\n @nn.compact\n def __call__(self, x):\n residual = x\n y = self.conv(self.filters, (1, 1))(x)\n y = self.norm()(y)\n y = self.act(y)\n y = self.conv(self.filters, (3, 3), self.strides)(y)\n y = self.norm()(y)\n y = self.act(y)\n y = self.conv(self.filters * 4, (1, 1))(y)\n y = self.norm(scale_init=nn.initializers.zeros_init())(y)\n\n if residual.shape != y.shape:\n residual = self.conv(\n self.filters * 4, (1, 1), self.strides, name='conv_proj'\n )(residual)\n residual = self.norm(name='norm_proj')(residual)\n\n return self.act(residual + y)\n\n\nclass ResNet(nn.Module):\n \"\"\"ResNetV1.\"\"\"\n\n stage_sizes: Sequence[int]\n block_cls: ModuleDef\n num_classes: int\n num_filters: int = 
64\n dtype: Any = jnp.float32\n act: Callable = nn.relu\n conv: ModuleDef = nn.Conv\n\n @nn.compact\n def __call__(self, x, train: bool = True):\n conv = partial(self.conv, use_bias=False, dtype=self.dtype)\n norm = partial(\n nn.BatchNorm,\n use_running_average=not train,\n momentum=0.9,\n epsilon=1e-5,\n dtype=self.dtype,\n axis_name='batch',\n )\n\n x = conv(\n self.num_filters,\n (7, 7),\n (2, 2),\n padding=[(3, 3), (3, 3)],\n name='conv_init',\n )(x)\n x = norm(name='bn_init')(x)\n x = nn.relu(x)\n x = nn.max_pool(x, (3, 3), strides=(2, 2), padding='SAME')\n for i, block_size in enumerate(self.stage_sizes):\n for j in range(block_size):\n strides = (2, 2) if i > 0 and j == 0 else (1, 1)\n x = self.block_cls(\n self.num_filters * 2**i,\n strides=strides,\n conv=conv,\n norm=norm,\n act=self.act,\n )(x)\n x = jnp.mean(x, axis=(1, 2))\n x = nn.Dense(self.num_classes, dtype=self.dtype)(x)\n x = jnp.asarray(x, self.dtype)\n return x\n\n\nResNet18 = partial(ResNet, stage_sizes=[2, 2, 2, 2], block_cls=ResNetBlock)\nResNet34 = partial(ResNet, stage_sizes=[3, 4, 6, 3], block_cls=ResNetBlock)\nResNet50 = partial(\n ResNet, stage_sizes=[3, 4, 6, 3], block_cls=BottleneckResNetBlock\n)\nResNet101 = partial(\n ResNet, stage_sizes=[3, 4, 23, 3], block_cls=BottleneckResNetBlock\n)\nResNet152 = partial(\n ResNet, stage_sizes=[3, 8, 36, 3], block_cls=BottleneckResNetBlock\n)\nResNet200 = partial(\n ResNet, stage_sizes=[3, 24, 36, 3], block_cls=BottleneckResNetBlock\n)\n\n\nResNet18Local = partial(\n ResNet, stage_sizes=[2, 2, 2, 2], block_cls=ResNetBlock, conv=nn.ConvLocal\n)\n\n\n# Used for testing only.\n_ResNet1 = partial(ResNet, stage_sizes=[1], block_cls=ResNetBlock)\n_ResNet1Local = partial(\n ResNet, stage_sizes=[1], block_cls=ResNetBlock, conv=nn.ConvLocal\n)\n", "path": "examples/imagenet/models.py"}]} | 2,450 | 163 |
gh_patches_debug_2301 | rasdani/github-patches | git_diff | encode__httpx-1034 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
aiter_lines() doesn't return full lines that span multiple chunks
<https://gist.github.com/scr-oath/aa76d200222a0409d09a0d6feb1a13e2> shows an example setup using CherryPy as the server that just outputs two lines. The JSON is big enough to be sent in two chunks; httpx `aiter_lines()` gets confused and yields data from the middle of the JSON line. It seems to skip the starting part, which was most likely sent in a chunk without a newline.
### test-httpx.py
```python
import asyncio
import json
import httpx
class TestHttpx:
def __init__(self):
pass
async def __call__(self):
http_client = httpx.AsyncClient()
async with http_client.stream(method="GET", url='http://localhost:8080/lines') as response:
is_message = True
async for line in response.aiter_lines():
is_message = not is_message
if is_message:
message = json.loads(line)
print(message)
def main():
test_httpx = TestHttpx()
asyncio.run(test_httpx())
if __name__ == '__main__':
main()
```
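The same behaviour can be reproduced without a server by driving httpx's `LineDecoder` directly (illustrative snippet; the payload is made up):
```python
from httpx._decoders import LineDecoder

decoder = LineDecoder()
# Two consecutive chunks of one long line, neither of which contains "\n" yet:
print(decoder.decode('{"message": "part one, '))  # [] -> partial line is buffered
print(decoder.decode('part two"}'))               # [] -> but the first chunk is overwritten
print(decoder.flush())                            # ['part two"}'] instead of the full line
```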
</issue>
<code>
[start of httpx/_decoders.py]
1 """
2 Handlers for Content-Encoding.
3
4 See: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding
5 """
6 import codecs
7 import typing
8 import zlib
9
10 import chardet
11
12 from ._exceptions import DecodingError
13
14 try:
15 import brotli
16 except ImportError: # pragma: nocover
17 brotli = None
18
19
20 class Decoder:
21 def decode(self, data: bytes) -> bytes:
22 raise NotImplementedError() # pragma: nocover
23
24 def flush(self) -> bytes:
25 raise NotImplementedError() # pragma: nocover
26
27
28 class IdentityDecoder(Decoder):
29 """
30 Handle unencoded data.
31 """
32
33 def decode(self, data: bytes) -> bytes:
34 return data
35
36 def flush(self) -> bytes:
37 return b""
38
39
40 class DeflateDecoder(Decoder):
41 """
42 Handle 'deflate' decoding.
43
44 See: https://stackoverflow.com/questions/1838699
45 """
46
47 def __init__(self) -> None:
48 self.first_attempt = True
49 self.decompressor = zlib.decompressobj()
50
51 def decode(self, data: bytes) -> bytes:
52 was_first_attempt = self.first_attempt
53 self.first_attempt = False
54 try:
55 return self.decompressor.decompress(data)
56 except zlib.error as exc:
57 if was_first_attempt:
58 self.decompressor = zlib.decompressobj(-zlib.MAX_WBITS)
59 return self.decode(data)
60 raise DecodingError from exc
61
62 def flush(self) -> bytes:
63 try:
64 return self.decompressor.flush()
65 except zlib.error as exc: # pragma: nocover
66 raise DecodingError from exc
67
68
69 class GZipDecoder(Decoder):
70 """
71 Handle 'gzip' decoding.
72
73 See: https://stackoverflow.com/questions/1838699
74 """
75
76 def __init__(self) -> None:
77 self.decompressor = zlib.decompressobj(zlib.MAX_WBITS | 16)
78
79 def decode(self, data: bytes) -> bytes:
80 try:
81 return self.decompressor.decompress(data)
82 except zlib.error as exc:
83 raise DecodingError from exc
84
85 def flush(self) -> bytes:
86 try:
87 return self.decompressor.flush()
88 except zlib.error as exc: # pragma: nocover
89 raise DecodingError from exc
90
91
92 class BrotliDecoder(Decoder):
93 """
94 Handle 'brotli' decoding.
95
96 Requires `pip install brotlipy`. See: https://brotlipy.readthedocs.io/
97 or `pip install brotli`. See https://github.com/google/brotli
98 Supports both 'brotlipy' and 'Brotli' packages since they share an import
99 name. The top branches are for 'brotlipy' and bottom branches for 'Brotli'
100 """
101
102 def __init__(self) -> None:
103 assert (
104 brotli is not None
105 ), "The 'brotlipy' or 'brotli' library must be installed to use 'BrotliDecoder'"
106 self.decompressor = brotli.Decompressor()
107 self.seen_data = False
108 if hasattr(self.decompressor, "decompress"):
109 self._decompress = self.decompressor.decompress
110 else:
111 self._decompress = self.decompressor.process # pragma: nocover
112
113 def decode(self, data: bytes) -> bytes:
114 if not data:
115 return b""
116 self.seen_data = True
117 try:
118 return self._decompress(data)
119 except brotli.error as exc:
120 raise DecodingError from exc
121
122 def flush(self) -> bytes:
123 if not self.seen_data:
124 return b""
125 try:
126 if hasattr(self.decompressor, "finish"):
127 self.decompressor.finish()
128 return b""
129 except brotli.error as exc: # pragma: nocover
130 raise DecodingError from exc
131
132
133 class MultiDecoder(Decoder):
134 """
135 Handle the case where multiple encodings have been applied.
136 """
137
138 def __init__(self, children: typing.Sequence[Decoder]) -> None:
139 """
140 'children' should be a sequence of decoders in the order in which
141 each was applied.
142 """
143 # Note that we reverse the order for decoding.
144 self.children = list(reversed(children))
145
146 def decode(self, data: bytes) -> bytes:
147 for child in self.children:
148 data = child.decode(data)
149 return data
150
151 def flush(self) -> bytes:
152 data = b""
153 for child in self.children:
154 data = child.decode(data) + child.flush()
155 return data
156
157
158 class TextDecoder:
159 """
160 Handles incrementally decoding bytes into text
161 """
162
163 def __init__(self, encoding: typing.Optional[str] = None):
164 self.decoder: typing.Optional[codecs.IncrementalDecoder] = (
165 None if encoding is None else codecs.getincrementaldecoder(encoding)()
166 )
167 self.detector = chardet.universaldetector.UniversalDetector()
168
169 # This buffer is only needed if 'decoder' is 'None'
170 # we want to trigger errors if data is getting added to
171 # our internal buffer for some silly reason while
172 # a decoder is discovered.
173 self.buffer: typing.Optional[bytearray] = None if self.decoder else bytearray()
174
175 def decode(self, data: bytes) -> str:
176 try:
177 if self.decoder is not None:
178 text = self.decoder.decode(data)
179 else:
180 assert self.buffer is not None
181 text = ""
182 self.detector.feed(data)
183 self.buffer += data
184
185 # Should be more than enough data to process, we don't
186 # want to buffer too long as chardet will wait until
187 # detector.close() is used to give back common
188 # encodings like 'utf-8'.
189 if len(self.buffer) >= 4096:
190 self.decoder = codecs.getincrementaldecoder(
191 self._detector_result()
192 )()
193 text = self.decoder.decode(bytes(self.buffer), False)
194 self.buffer = None
195
196 return text
197 except UnicodeDecodeError: # pragma: nocover
198 raise DecodingError() from None
199
200 def flush(self) -> str:
201 try:
202 if self.decoder is None:
203 # Empty string case as chardet is guaranteed to not have a guess.
204 assert self.buffer is not None
205 if len(self.buffer) == 0:
206 return ""
207 return bytes(self.buffer).decode(self._detector_result())
208
209 return self.decoder.decode(b"", True)
210 except UnicodeDecodeError: # pragma: nocover
211 raise DecodingError() from None
212
213 def _detector_result(self) -> str:
214 self.detector.close()
215 result = self.detector.result["encoding"]
216 if not result: # pragma: nocover
217 raise DecodingError("Unable to determine encoding of content")
218
219 return result
220
221
222 class LineDecoder:
223 """
224 Handles incrementally reading lines from text.
225
226 Uses universal line decoding, supporting any of `\n`, `\r`, or `\r\n`
227 as line endings, normalizing to `\n`.
228 """
229
230 def __init__(self) -> None:
231 self.buffer = ""
232
233 def decode(self, text: str) -> typing.List[str]:
234 lines = []
235
236 if text.startswith("\n") and self.buffer and self.buffer[-1] == "\r":
237 # Handle the case where we have an "\r\n" split across
238 # our previous input, and our new chunk.
239 lines.append(self.buffer[:-1] + "\n")
240 self.buffer = ""
241 text = text[1:]
242
243 while text:
244 num_chars = len(text)
245 for idx in range(num_chars):
246 char = text[idx]
247 next_char = None if idx + 1 == num_chars else text[idx + 1]
248 if char == "\n":
249 lines.append(self.buffer + text[: idx + 1])
250 self.buffer = ""
251 text = text[idx + 1 :]
252 break
253 elif char == "\r" and next_char == "\n":
254 lines.append(self.buffer + text[:idx] + "\n")
255 self.buffer = ""
256 text = text[idx + 2 :]
257 break
258 elif char == "\r" and next_char is not None:
259 lines.append(self.buffer + text[:idx] + "\n")
260 self.buffer = ""
261 text = text[idx + 1 :]
262 break
263 elif next_char is None:
264 self.buffer = text
265 text = ""
266 break
267
268 return lines
269
270 def flush(self) -> typing.List[str]:
271 if self.buffer.endswith("\r"):
272 # Handle the case where we had a trailing '\r', which could have
273 # been a '\r\n' pair.
274 lines = [self.buffer[:-1] + "\n"]
275 elif self.buffer:
276 lines = [self.buffer]
277 else:
278 lines = []
279 self.buffer = ""
280 return lines
281
282
283 SUPPORTED_DECODERS = {
284 "identity": IdentityDecoder,
285 "gzip": GZipDecoder,
286 "deflate": DeflateDecoder,
287 "br": BrotliDecoder,
288 }
289
290
291 if brotli is None:
292 SUPPORTED_DECODERS.pop("br") # pragma: nocover
293
[end of httpx/_decoders.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/httpx/_decoders.py b/httpx/_decoders.py
--- a/httpx/_decoders.py
+++ b/httpx/_decoders.py
@@ -261,7 +261,7 @@
text = text[idx + 1 :]
break
elif next_char is None:
- self.buffer = text
+ self.buffer += text
text = ""
break
| {"golden_diff": "diff --git a/httpx/_decoders.py b/httpx/_decoders.py\n--- a/httpx/_decoders.py\n+++ b/httpx/_decoders.py\n@@ -261,7 +261,7 @@\n text = text[idx + 1 :]\n break\n elif next_char is None:\n- self.buffer = text\n+ self.buffer += text\n text = \"\"\n break\n", "issue": "aiter_lines() doesn't return full lines that span multiple chunks\n<https://gist.github.com/scr-oath/aa76d200222a0409d09a0d6feb1a13e2> shows an example setup using cherry.py as server that just outputs two lines - the json is big enough to be sent in two chunks; httpx aiter_lines() gets confused and sends data from the middle of the json line - seems to skip the starting part - which was most likely sent in a chunk without a newline\r\n\r\n### test-httpx.py\r\n```python\r\nimport asyncio\r\nimport json\r\n\r\nimport httpx\r\n\r\n\r\nclass TestHttpx:\r\n def __init__(self):\r\n pass\r\n\r\n async def __call__(self):\r\n http_client = httpx.AsyncClient()\r\n async with http_client.stream(method=\"GET\", url='http://localhost:8080/lines') as response:\r\n is_message = True\r\n async for line in response.aiter_lines():\r\n is_message = not is_message\r\n if is_message:\r\n message = json.loads(line)\r\n print(message)\r\n\r\n\r\ndef main():\r\n test_httpx = TestHttpx()\r\n asyncio.run(test_httpx())\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\n", "before_files": [{"content": "\"\"\"\nHandlers for Content-Encoding.\n\nSee: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding\n\"\"\"\nimport codecs\nimport typing\nimport zlib\n\nimport chardet\n\nfrom ._exceptions import DecodingError\n\ntry:\n import brotli\nexcept ImportError: # pragma: nocover\n brotli = None\n\n\nclass Decoder:\n def decode(self, data: bytes) -> bytes:\n raise NotImplementedError() # pragma: nocover\n\n def flush(self) -> bytes:\n raise NotImplementedError() # pragma: nocover\n\n\nclass IdentityDecoder(Decoder):\n \"\"\"\n Handle unencoded data.\n \"\"\"\n\n def decode(self, data: bytes) -> bytes:\n return data\n\n def flush(self) -> bytes:\n return b\"\"\n\n\nclass DeflateDecoder(Decoder):\n \"\"\"\n Handle 'deflate' decoding.\n\n See: https://stackoverflow.com/questions/1838699\n \"\"\"\n\n def __init__(self) -> None:\n self.first_attempt = True\n self.decompressor = zlib.decompressobj()\n\n def decode(self, data: bytes) -> bytes:\n was_first_attempt = self.first_attempt\n self.first_attempt = False\n try:\n return self.decompressor.decompress(data)\n except zlib.error as exc:\n if was_first_attempt:\n self.decompressor = zlib.decompressobj(-zlib.MAX_WBITS)\n return self.decode(data)\n raise DecodingError from exc\n\n def flush(self) -> bytes:\n try:\n return self.decompressor.flush()\n except zlib.error as exc: # pragma: nocover\n raise DecodingError from exc\n\n\nclass GZipDecoder(Decoder):\n \"\"\"\n Handle 'gzip' decoding.\n\n See: https://stackoverflow.com/questions/1838699\n \"\"\"\n\n def __init__(self) -> None:\n self.decompressor = zlib.decompressobj(zlib.MAX_WBITS | 16)\n\n def decode(self, data: bytes) -> bytes:\n try:\n return self.decompressor.decompress(data)\n except zlib.error as exc:\n raise DecodingError from exc\n\n def flush(self) -> bytes:\n try:\n return self.decompressor.flush()\n except zlib.error as exc: # pragma: nocover\n raise DecodingError from exc\n\n\nclass BrotliDecoder(Decoder):\n \"\"\"\n Handle 'brotli' decoding.\n\n Requires `pip install brotlipy`. See: https://brotlipy.readthedocs.io/\n or `pip install brotli`. 
See https://github.com/google/brotli\n Supports both 'brotlipy' and 'Brotli' packages since they share an import\n name. The top branches are for 'brotlipy' and bottom branches for 'Brotli'\n \"\"\"\n\n def __init__(self) -> None:\n assert (\n brotli is not None\n ), \"The 'brotlipy' or 'brotli' library must be installed to use 'BrotliDecoder'\"\n self.decompressor = brotli.Decompressor()\n self.seen_data = False\n if hasattr(self.decompressor, \"decompress\"):\n self._decompress = self.decompressor.decompress\n else:\n self._decompress = self.decompressor.process # pragma: nocover\n\n def decode(self, data: bytes) -> bytes:\n if not data:\n return b\"\"\n self.seen_data = True\n try:\n return self._decompress(data)\n except brotli.error as exc:\n raise DecodingError from exc\n\n def flush(self) -> bytes:\n if not self.seen_data:\n return b\"\"\n try:\n if hasattr(self.decompressor, \"finish\"):\n self.decompressor.finish()\n return b\"\"\n except brotli.error as exc: # pragma: nocover\n raise DecodingError from exc\n\n\nclass MultiDecoder(Decoder):\n \"\"\"\n Handle the case where multiple encodings have been applied.\n \"\"\"\n\n def __init__(self, children: typing.Sequence[Decoder]) -> None:\n \"\"\"\n 'children' should be a sequence of decoders in the order in which\n each was applied.\n \"\"\"\n # Note that we reverse the order for decoding.\n self.children = list(reversed(children))\n\n def decode(self, data: bytes) -> bytes:\n for child in self.children:\n data = child.decode(data)\n return data\n\n def flush(self) -> bytes:\n data = b\"\"\n for child in self.children:\n data = child.decode(data) + child.flush()\n return data\n\n\nclass TextDecoder:\n \"\"\"\n Handles incrementally decoding bytes into text\n \"\"\"\n\n def __init__(self, encoding: typing.Optional[str] = None):\n self.decoder: typing.Optional[codecs.IncrementalDecoder] = (\n None if encoding is None else codecs.getincrementaldecoder(encoding)()\n )\n self.detector = chardet.universaldetector.UniversalDetector()\n\n # This buffer is only needed if 'decoder' is 'None'\n # we want to trigger errors if data is getting added to\n # our internal buffer for some silly reason while\n # a decoder is discovered.\n self.buffer: typing.Optional[bytearray] = None if self.decoder else bytearray()\n\n def decode(self, data: bytes) -> str:\n try:\n if self.decoder is not None:\n text = self.decoder.decode(data)\n else:\n assert self.buffer is not None\n text = \"\"\n self.detector.feed(data)\n self.buffer += data\n\n # Should be more than enough data to process, we don't\n # want to buffer too long as chardet will wait until\n # detector.close() is used to give back common\n # encodings like 'utf-8'.\n if len(self.buffer) >= 4096:\n self.decoder = codecs.getincrementaldecoder(\n self._detector_result()\n )()\n text = self.decoder.decode(bytes(self.buffer), False)\n self.buffer = None\n\n return text\n except UnicodeDecodeError: # pragma: nocover\n raise DecodingError() from None\n\n def flush(self) -> str:\n try:\n if self.decoder is None:\n # Empty string case as chardet is guaranteed to not have a guess.\n assert self.buffer is not None\n if len(self.buffer) == 0:\n return \"\"\n return bytes(self.buffer).decode(self._detector_result())\n\n return self.decoder.decode(b\"\", True)\n except UnicodeDecodeError: # pragma: nocover\n raise DecodingError() from None\n\n def _detector_result(self) -> str:\n self.detector.close()\n result = self.detector.result[\"encoding\"]\n if not result: # pragma: nocover\n raise DecodingError(\"Unable 
to determine encoding of content\")\n\n return result\n\n\nclass LineDecoder:\n \"\"\"\n Handles incrementally reading lines from text.\n\n Uses universal line decoding, supporting any of `\\n`, `\\r`, or `\\r\\n`\n as line endings, normalizing to `\\n`.\n \"\"\"\n\n def __init__(self) -> None:\n self.buffer = \"\"\n\n def decode(self, text: str) -> typing.List[str]:\n lines = []\n\n if text.startswith(\"\\n\") and self.buffer and self.buffer[-1] == \"\\r\":\n # Handle the case where we have an \"\\r\\n\" split across\n # our previous input, and our new chunk.\n lines.append(self.buffer[:-1] + \"\\n\")\n self.buffer = \"\"\n text = text[1:]\n\n while text:\n num_chars = len(text)\n for idx in range(num_chars):\n char = text[idx]\n next_char = None if idx + 1 == num_chars else text[idx + 1]\n if char == \"\\n\":\n lines.append(self.buffer + text[: idx + 1])\n self.buffer = \"\"\n text = text[idx + 1 :]\n break\n elif char == \"\\r\" and next_char == \"\\n\":\n lines.append(self.buffer + text[:idx] + \"\\n\")\n self.buffer = \"\"\n text = text[idx + 2 :]\n break\n elif char == \"\\r\" and next_char is not None:\n lines.append(self.buffer + text[:idx] + \"\\n\")\n self.buffer = \"\"\n text = text[idx + 1 :]\n break\n elif next_char is None:\n self.buffer = text\n text = \"\"\n break\n\n return lines\n\n def flush(self) -> typing.List[str]:\n if self.buffer.endswith(\"\\r\"):\n # Handle the case where we had a trailing '\\r', which could have\n # been a '\\r\\n' pair.\n lines = [self.buffer[:-1] + \"\\n\"]\n elif self.buffer:\n lines = [self.buffer]\n else:\n lines = []\n self.buffer = \"\"\n return lines\n\n\nSUPPORTED_DECODERS = {\n \"identity\": IdentityDecoder,\n \"gzip\": GZipDecoder,\n \"deflate\": DeflateDecoder,\n \"br\": BrotliDecoder,\n}\n\n\nif brotli is None:\n SUPPORTED_DECODERS.pop(\"br\") # pragma: nocover\n", "path": "httpx/_decoders.py"}]} | 3,622 | 90 |
gh_patches_debug_35213 | rasdani/github-patches | git_diff | saleor__saleor-5590 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
psycopg2.errors.NotNullViolation: column "slug" contains null values
### What I'm trying to achieve
Upgrade from version 2.9.0 to 2.10.0-rc.1 and run the migrate command successfully.
Result:
```
Applying product.0111_auto_20191209_0437... OK
Applying product.0112_auto_20200129_0050...Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.NotNullViolation: column "slug" contains null values
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 395, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 328, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 369, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 83, in wrapped
res = handle_func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/core/management/commands/migrate.py", line 231, in handle
post_migrate_state = executor.migrate(
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py", line 147, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py", line 245, in apply_migration
state = migration.apply(state, schema_editor)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/migration.py", line 124, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/operations/fields.py", line 249, in database_forwards
schema_editor.alter_field(from_model, from_field, to_field)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/schema.py", line 564, in alter_field
self._alter_field(model, old_field, new_field, old_type, new_type,
File "/usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/schema.py", line 152, in _alter_field
super()._alter_field(
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/schema.py", line 710, in _alter_field
self.execute(
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/schema.py", line 142, in execute
cursor.execute(sql, params)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 100, in execute
return super().execute(sql, params)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 68, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
django.db.utils.IntegrityError: column "slug" contains null values
```
### Steps to reproduce the problem
1. Run version 2.9.0 with some data included
2. Upgrade the Docker container and try to run the migrate command
### What I expected to happen
Run the migrate command successfully. The migrations still seem to be very buggy; we already had an issue before, as seen here: #5391
### Screenshots
<!-- If applicable, add screenshots to help explain your problem. -->
**System information**
Operating system:
</issue>
<code>
[start of saleor/warehouse/migrations/0003_warehouse_slug.py]
1 # Generated by Django 2.2.9 on 2020-01-29 06:52
2
3 from django.db import migrations, models
4 from django.db.models.functions import Lower
5 from django.utils.text import slugify
6
7
8 def create_unique_slug_for_warehouses(apps, schema_editor):
9 Warehouse = apps.get_model("warehouse", "Warehouse")
10
11 warehouses = (
12 Warehouse.objects.filter(slug__isnull=True).order_by(Lower("name")).iterator()
13 )
14 previous_char = ""
15 slug_values = []
16 for warehouse in warehouses:
17 first_char = warehouse.name[0].lower()
18 if first_char != previous_char:
19 previous_char = first_char
20 slug_values = Warehouse.objects.filter(
21 slug__istartswith=first_char
22 ).values_list("slug", flat=True)
23
24 slug = generate_unique_slug(warehouse, slug_values)
25 warehouse.slug = slug
26 slug_values.append(slug)
27
28
29 def generate_unique_slug(instance, slug_values):
30 slug = slugify(instance.name)
31 unique_slug = slug
32 extension = 1
33
34 while unique_slug in slug_values:
35 extension += 1
36 unique_slug = f"{slug}-{extension}"
37
38 return unique_slug
39
40
41 class Migration(migrations.Migration):
42
43 dependencies = [
44 ("warehouse", "0002_auto_20200123_0036"),
45 ]
46
47 operations = [
48 migrations.AddField(
49 model_name="warehouse",
50 name="slug",
51 field=models.SlugField(null=True, max_length=255, unique=True),
52 preserve_default=False,
53 ),
54 migrations.RunPython(
55 create_unique_slug_for_warehouses, migrations.RunPython.noop
56 ),
57 migrations.AlterField(
58 model_name="warehouse",
59 name="slug",
60 field=models.SlugField(max_length=255, unique=True),
61 ),
62 ]
63
[end of saleor/warehouse/migrations/0003_warehouse_slug.py]
[start of saleor/product/migrations/0114_auto_20200129_0815.py]
1 # Generated by Django 2.2.9 on 2020-01-29 14:15
2
3 from django.db import migrations, models
4 from django.db.models.functions import Lower
5 from django.utils.text import slugify
6
7
8 def create_unique_slug_for_products(apps, schema_editor):
9 Product = apps.get_model("product", "Product")
10
11 products = (
12 Product.objects.filter(slug__isnull=True).order_by(Lower("name")).iterator()
13 )
14 previous_char = ""
15 slug_values = []
16 for product in products:
17 first_char = product.name[0].lower()
18 if first_char != previous_char:
19 previous_char = first_char
20 slug_values = Product.objects.filter(
21 slug__istartswith=first_char
22 ).values_list("slug", flat=True)
23
24 slug = generate_unique_slug(product, slug_values)
25 product.slug = slug
26 slug_values.append(slug)
27
28
29 def generate_unique_slug(instance, slug_values):
30 slug = slugify(instance.name)
31 unique_slug = slug
32 extension = 1
33
34 while unique_slug in slug_values:
35 extension += 1
36 unique_slug = f"{slug}-{extension}"
37
38 return unique_slug
39
40
41 class Migration(migrations.Migration):
42
43 dependencies = [
44 ("product", "0113_auto_20200129_0717"),
45 ]
46
47 operations = [
48 migrations.AddField(
49 model_name="product",
50 name="slug",
51 field=models.SlugField(null=True, max_length=255, unique=True),
52 preserve_default=False,
53 ),
54 migrations.AlterField(
55 model_name="product", name="name", field=models.CharField(max_length=250),
56 ),
57 migrations.RunPython(
58 create_unique_slug_for_products, migrations.RunPython.noop
59 ),
60 migrations.AlterField(
61 model_name="product",
62 name="slug",
63 field=models.SlugField(max_length=255, unique=True),
64 ),
65 ]
66
[end of saleor/product/migrations/0114_auto_20200129_0815.py]
[start of saleor/product/migrations/0112_auto_20200129_0050.py]
1 # Generated by Django 2.2.9 on 2020-01-29 06:50
2
3 from collections import defaultdict
4
5 from django.db import migrations, models
6 from django.db.models.functions import Lower
7 from django.utils.text import slugify
8
9
10 def create_unique_slugs_for_producttypes(apps, schema_editor):
11 ProductType = apps.get_model("product", "ProductType")
12
13 product_types = (
14 ProductType.objects.filter(slug__isnull=True).order_by(Lower("name")).iterator()
15 )
16 previous_char = ""
17 slug_values = []
18 for product_type in product_types:
19 first_char = product_type.name[0].lower()
20 if first_char != previous_char:
21 previous_char = first_char
22 slug_values = list(
23 ProductType.objects.filter(slug__istartswith=first_char).values_list(
24 "slug", flat=True
25 )
26 )
27
28 slug = generate_unique_slug(product_type, slug_values)
29 product_type.slug = slug
30 slug_values.append(slug)
31
32
33 def generate_unique_slug(instance, slug_values_list):
34 slug = slugify(instance.name)
35 unique_slug = slug
36
37 extension = 1
38
39 while unique_slug in slug_values_list:
40 extension += 1
41 unique_slug = f"{slug}-{extension}"
42
43 return unique_slug
44
45
46 def update_non_unique_slugs_for_models(apps, schema_editor):
47 models_to_update = ["Category", "Collection"]
48
49 for model in models_to_update:
50 Model = apps.get_model("product", model)
51
52 duplicated_slugs = (
53 Model.objects.all()
54 .values("slug")
55 .annotate(duplicated_slug_num=models.Count("slug"))
56 .filter(duplicated_slug_num__gt=1)
57 )
58
59 slugs_counter = defaultdict(int)
60 for data in duplicated_slugs:
61 slugs_counter[data["slug"]] = data["duplicated_slug_num"]
62
63 queryset = Model.objects.filter(slug__in=slugs_counter.keys()).order_by("name")
64
65 for instance in queryset:
66 slugs_counter[instance.slug] -= 1
67 slug = update_slug_to_unique_value(instance.slug, slugs_counter)
68 instance.slug = slug
69 slugs_counter[slug] += 1
70
71
72 def update_slug_to_unique_value(slug_value, slugs_counter):
73 unique_slug = slug_value
74 extension = 1
75
76 while unique_slug in slugs_counter and slugs_counter[unique_slug] > 0:
77 extension += 1
78 unique_slug = f"{slug_value}-{extension}"
79
80 return unique_slug
81
82
83 class Migration(migrations.Migration):
84
85 dependencies = [
86 ("product", "0111_auto_20191209_0437"),
87 ]
88
89 operations = [
90 migrations.RunPython(
91 update_non_unique_slugs_for_models, migrations.RunPython.noop
92 ),
93 migrations.AddField(
94 model_name="producttype",
95 name="slug",
96 field=models.SlugField(null=True, max_length=128, unique=True),
97 preserve_default=False,
98 ),
99 migrations.AlterField(
100 model_name="category",
101 name="slug",
102 field=models.SlugField(max_length=128, unique=True),
103 ),
104 migrations.AlterField(
105 model_name="collection",
106 name="slug",
107 field=models.SlugField(max_length=128, unique=True),
108 ),
109 migrations.RunPython(
110 create_unique_slugs_for_producttypes, migrations.RunPython.noop
111 ),
112 migrations.AlterField(
113 model_name="producttype",
114 name="slug",
115 field=models.SlugField(max_length=128, unique=True),
116 ),
117 ]
118
[end of saleor/product/migrations/0112_auto_20200129_0050.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/product/migrations/0112_auto_20200129_0050.py b/saleor/product/migrations/0112_auto_20200129_0050.py
--- a/saleor/product/migrations/0112_auto_20200129_0050.py
+++ b/saleor/product/migrations/0112_auto_20200129_0050.py
@@ -27,6 +27,7 @@
slug = generate_unique_slug(product_type, slug_values)
product_type.slug = slug
+ product_type.save(update_fields=["slug"])
slug_values.append(slug)
@@ -66,6 +67,7 @@
slugs_counter[instance.slug] -= 1
slug = update_slug_to_unique_value(instance.slug, slugs_counter)
instance.slug = slug
+ instance.save(update_fields=["slug"])
slugs_counter[slug] += 1
diff --git a/saleor/product/migrations/0114_auto_20200129_0815.py b/saleor/product/migrations/0114_auto_20200129_0815.py
--- a/saleor/product/migrations/0114_auto_20200129_0815.py
+++ b/saleor/product/migrations/0114_auto_20200129_0815.py
@@ -17,12 +17,15 @@
first_char = product.name[0].lower()
if first_char != previous_char:
previous_char = first_char
- slug_values = Product.objects.filter(
- slug__istartswith=first_char
- ).values_list("slug", flat=True)
+ slug_values = list(
+ Product.objects.filter(slug__istartswith=first_char).values_list(
+ "slug", flat=True
+ )
+ )
slug = generate_unique_slug(product, slug_values)
product.slug = slug
+ product.save(update_fields=["slug"])
slug_values.append(slug)
diff --git a/saleor/warehouse/migrations/0003_warehouse_slug.py b/saleor/warehouse/migrations/0003_warehouse_slug.py
--- a/saleor/warehouse/migrations/0003_warehouse_slug.py
+++ b/saleor/warehouse/migrations/0003_warehouse_slug.py
@@ -17,12 +17,15 @@
first_char = warehouse.name[0].lower()
if first_char != previous_char:
previous_char = first_char
- slug_values = Warehouse.objects.filter(
- slug__istartswith=first_char
- ).values_list("slug", flat=True)
+ slug_values = list(
+ Warehouse.objects.filter(slug__istartswith=first_char).values_list(
+ "slug", flat=True
+ )
+ )
slug = generate_unique_slug(warehouse, slug_values)
warehouse.slug = slug
+ warehouse.save(update_fields=["slug"])
slug_values.append(slug)
| {"golden_diff": "diff --git a/saleor/product/migrations/0112_auto_20200129_0050.py b/saleor/product/migrations/0112_auto_20200129_0050.py\n--- a/saleor/product/migrations/0112_auto_20200129_0050.py\n+++ b/saleor/product/migrations/0112_auto_20200129_0050.py\n@@ -27,6 +27,7 @@\n \n slug = generate_unique_slug(product_type, slug_values)\n product_type.slug = slug\n+ product_type.save(update_fields=[\"slug\"])\n slug_values.append(slug)\n \n \n@@ -66,6 +67,7 @@\n slugs_counter[instance.slug] -= 1\n slug = update_slug_to_unique_value(instance.slug, slugs_counter)\n instance.slug = slug\n+ instance.save(update_fields=[\"slug\"])\n slugs_counter[slug] += 1\n \n \ndiff --git a/saleor/product/migrations/0114_auto_20200129_0815.py b/saleor/product/migrations/0114_auto_20200129_0815.py\n--- a/saleor/product/migrations/0114_auto_20200129_0815.py\n+++ b/saleor/product/migrations/0114_auto_20200129_0815.py\n@@ -17,12 +17,15 @@\n first_char = product.name[0].lower()\n if first_char != previous_char:\n previous_char = first_char\n- slug_values = Product.objects.filter(\n- slug__istartswith=first_char\n- ).values_list(\"slug\", flat=True)\n+ slug_values = list(\n+ Product.objects.filter(slug__istartswith=first_char).values_list(\n+ \"slug\", flat=True\n+ )\n+ )\n \n slug = generate_unique_slug(product, slug_values)\n product.slug = slug\n+ product.save(update_fields=[\"slug\"])\n slug_values.append(slug)\n \n \ndiff --git a/saleor/warehouse/migrations/0003_warehouse_slug.py b/saleor/warehouse/migrations/0003_warehouse_slug.py\n--- a/saleor/warehouse/migrations/0003_warehouse_slug.py\n+++ b/saleor/warehouse/migrations/0003_warehouse_slug.py\n@@ -17,12 +17,15 @@\n first_char = warehouse.name[0].lower()\n if first_char != previous_char:\n previous_char = first_char\n- slug_values = Warehouse.objects.filter(\n- slug__istartswith=first_char\n- ).values_list(\"slug\", flat=True)\n+ slug_values = list(\n+ Warehouse.objects.filter(slug__istartswith=first_char).values_list(\n+ \"slug\", flat=True\n+ )\n+ )\n \n slug = generate_unique_slug(warehouse, slug_values)\n warehouse.slug = slug\n+ warehouse.save(update_fields=[\"slug\"])\n slug_values.append(slug)\n", "issue": "psycopg2.errors.NotNullViolation: column \"slug\" contains null values\n### What I'm trying to achieve\r\nUpgrade from version 2.9.0 to 2.10.0-rc.1. Running the migrate command successfully.\r\n\r\nResult:\r\n```\r\n Applying product.0111_auto_20191209_0437... 
OK\r\n Applying product.0112_auto_20200129_0050...Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py\", line 86, in _execute\r\n return self.cursor.execute(sql, params)\r\npsycopg2.errors.NotNullViolation: column \"slug\" contains null values\r\n\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"manage.py\", line 10, in <module>\r\n execute_from_command_line(sys.argv)\r\n File \"/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py\", line 401, in execute_from_command_line\r\n utility.execute()\r\n File \"/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py\", line 395, in execute\r\n self.fetch_command(subcommand).run_from_argv(self.argv)\r\n File \"/usr/local/lib/python3.8/site-packages/django/core/management/base.py\", line 328, in run_from_argv\r\n self.execute(*args, **cmd_options)\r\n File \"/usr/local/lib/python3.8/site-packages/django/core/management/base.py\", line 369, in execute\r\n output = self.handle(*args, **options)\r\n File \"/usr/local/lib/python3.8/site-packages/django/core/management/base.py\", line 83, in wrapped\r\n res = handle_func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.8/site-packages/django/core/management/commands/migrate.py\", line 231, in handle\r\n post_migrate_state = executor.migrate(\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py\", line 117, in migrate\r\n state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py\", line 147, in _migrate_all_forwards\r\n state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py\", line 245, in apply_migration\r\n state = migration.apply(state, schema_editor)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/migrations/migration.py\", line 124, in apply\r\n operation.database_forwards(self.app_label, schema_editor, old_state, project_state)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/migrations/operations/fields.py\", line 249, in database_forwards\r\n schema_editor.alter_field(from_model, from_field, to_field)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/base/schema.py\", line 564, in alter_field\r\n self._alter_field(model, old_field, new_field, old_type, new_type,\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/schema.py\", line 152, in _alter_field\r\n super()._alter_field(\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/base/schema.py\", line 710, in _alter_field\r\n self.execute(\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/base/schema.py\", line 142, in execute\r\n cursor.execute(sql, params)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py\", line 100, in execute\r\n return super().execute(sql, params)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py\", line 68, in execute\r\n return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py\", line 77, in _execute_with_wrappers\r\n return executor(sql, params, many, context)\r\n File 
\"/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py\", line 86, in _execute\r\n return self.cursor.execute(sql, params)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/utils.py\", line 90, in __exit__\r\n raise dj_exc_value.with_traceback(traceback) from exc_value\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py\", line 86, in _execute\r\n return self.cursor.execute(sql, params)\r\ndjango.db.utils.IntegrityError: column \"slug\" contains null values\r\n``` \r\n\r\n\r\n\r\n### Steps to reproduce the problem\r\n1. Running 2.9.0 version with some data included\r\n2. Upgrade the docker container, try to run the migrate command\r\n\r\n### What I expected to happen\r\nRun the migrate command successfully. The migration still seems to be very buggy. We already had issue before as seen here: #5391 \r\n\r\n### Screenshots\r\n<!-- If applicable, add screenshots to help explain your problem. -->\r\n\r\n**System information**\r\nOperating system:\r\n\n", "before_files": [{"content": "# Generated by Django 2.2.9 on 2020-01-29 06:52\n\nfrom django.db import migrations, models\nfrom django.db.models.functions import Lower\nfrom django.utils.text import slugify\n\n\ndef create_unique_slug_for_warehouses(apps, schema_editor):\n Warehouse = apps.get_model(\"warehouse\", \"Warehouse\")\n\n warehouses = (\n Warehouse.objects.filter(slug__isnull=True).order_by(Lower(\"name\")).iterator()\n )\n previous_char = \"\"\n slug_values = []\n for warehouse in warehouses:\n first_char = warehouse.name[0].lower()\n if first_char != previous_char:\n previous_char = first_char\n slug_values = Warehouse.objects.filter(\n slug__istartswith=first_char\n ).values_list(\"slug\", flat=True)\n\n slug = generate_unique_slug(warehouse, slug_values)\n warehouse.slug = slug\n slug_values.append(slug)\n\n\ndef generate_unique_slug(instance, slug_values):\n slug = slugify(instance.name)\n unique_slug = slug\n extension = 1\n\n while unique_slug in slug_values:\n extension += 1\n unique_slug = f\"{slug}-{extension}\"\n\n return unique_slug\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"warehouse\", \"0002_auto_20200123_0036\"),\n ]\n\n operations = [\n migrations.AddField(\n model_name=\"warehouse\",\n name=\"slug\",\n field=models.SlugField(null=True, max_length=255, unique=True),\n preserve_default=False,\n ),\n migrations.RunPython(\n create_unique_slug_for_warehouses, migrations.RunPython.noop\n ),\n migrations.AlterField(\n model_name=\"warehouse\",\n name=\"slug\",\n field=models.SlugField(max_length=255, unique=True),\n ),\n ]\n", "path": "saleor/warehouse/migrations/0003_warehouse_slug.py"}, {"content": "# Generated by Django 2.2.9 on 2020-01-29 14:15\n\nfrom django.db import migrations, models\nfrom django.db.models.functions import Lower\nfrom django.utils.text import slugify\n\n\ndef create_unique_slug_for_products(apps, schema_editor):\n Product = apps.get_model(\"product\", \"Product\")\n\n products = (\n Product.objects.filter(slug__isnull=True).order_by(Lower(\"name\")).iterator()\n )\n previous_char = \"\"\n slug_values = []\n for product in products:\n first_char = product.name[0].lower()\n if first_char != previous_char:\n previous_char = first_char\n slug_values = Product.objects.filter(\n slug__istartswith=first_char\n ).values_list(\"slug\", flat=True)\n\n slug = generate_unique_slug(product, slug_values)\n product.slug = slug\n slug_values.append(slug)\n\n\ndef generate_unique_slug(instance, slug_values):\n slug = 
slugify(instance.name)\n unique_slug = slug\n extension = 1\n\n while unique_slug in slug_values:\n extension += 1\n unique_slug = f\"{slug}-{extension}\"\n\n return unique_slug\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"product\", \"0113_auto_20200129_0717\"),\n ]\n\n operations = [\n migrations.AddField(\n model_name=\"product\",\n name=\"slug\",\n field=models.SlugField(null=True, max_length=255, unique=True),\n preserve_default=False,\n ),\n migrations.AlterField(\n model_name=\"product\", name=\"name\", field=models.CharField(max_length=250),\n ),\n migrations.RunPython(\n create_unique_slug_for_products, migrations.RunPython.noop\n ),\n migrations.AlterField(\n model_name=\"product\",\n name=\"slug\",\n field=models.SlugField(max_length=255, unique=True),\n ),\n ]\n", "path": "saleor/product/migrations/0114_auto_20200129_0815.py"}, {"content": "# Generated by Django 2.2.9 on 2020-01-29 06:50\n\nfrom collections import defaultdict\n\nfrom django.db import migrations, models\nfrom django.db.models.functions import Lower\nfrom django.utils.text import slugify\n\n\ndef create_unique_slugs_for_producttypes(apps, schema_editor):\n ProductType = apps.get_model(\"product\", \"ProductType\")\n\n product_types = (\n ProductType.objects.filter(slug__isnull=True).order_by(Lower(\"name\")).iterator()\n )\n previous_char = \"\"\n slug_values = []\n for product_type in product_types:\n first_char = product_type.name[0].lower()\n if first_char != previous_char:\n previous_char = first_char\n slug_values = list(\n ProductType.objects.filter(slug__istartswith=first_char).values_list(\n \"slug\", flat=True\n )\n )\n\n slug = generate_unique_slug(product_type, slug_values)\n product_type.slug = slug\n slug_values.append(slug)\n\n\ndef generate_unique_slug(instance, slug_values_list):\n slug = slugify(instance.name)\n unique_slug = slug\n\n extension = 1\n\n while unique_slug in slug_values_list:\n extension += 1\n unique_slug = f\"{slug}-{extension}\"\n\n return unique_slug\n\n\ndef update_non_unique_slugs_for_models(apps, schema_editor):\n models_to_update = [\"Category\", \"Collection\"]\n\n for model in models_to_update:\n Model = apps.get_model(\"product\", model)\n\n duplicated_slugs = (\n Model.objects.all()\n .values(\"slug\")\n .annotate(duplicated_slug_num=models.Count(\"slug\"))\n .filter(duplicated_slug_num__gt=1)\n )\n\n slugs_counter = defaultdict(int)\n for data in duplicated_slugs:\n slugs_counter[data[\"slug\"]] = data[\"duplicated_slug_num\"]\n\n queryset = Model.objects.filter(slug__in=slugs_counter.keys()).order_by(\"name\")\n\n for instance in queryset:\n slugs_counter[instance.slug] -= 1\n slug = update_slug_to_unique_value(instance.slug, slugs_counter)\n instance.slug = slug\n slugs_counter[slug] += 1\n\n\ndef update_slug_to_unique_value(slug_value, slugs_counter):\n unique_slug = slug_value\n extension = 1\n\n while unique_slug in slugs_counter and slugs_counter[unique_slug] > 0:\n extension += 1\n unique_slug = f\"{slug_value}-{extension}\"\n\n return unique_slug\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"product\", \"0111_auto_20191209_0437\"),\n ]\n\n operations = [\n migrations.RunPython(\n update_non_unique_slugs_for_models, migrations.RunPython.noop\n ),\n migrations.AddField(\n model_name=\"producttype\",\n name=\"slug\",\n field=models.SlugField(null=True, max_length=128, unique=True),\n preserve_default=False,\n ),\n migrations.AlterField(\n model_name=\"category\",\n name=\"slug\",\n 
field=models.SlugField(max_length=128, unique=True),\n ),\n migrations.AlterField(\n model_name=\"collection\",\n name=\"slug\",\n field=models.SlugField(max_length=128, unique=True),\n ),\n migrations.RunPython(\n create_unique_slugs_for_producttypes, migrations.RunPython.noop\n ),\n migrations.AlterField(\n model_name=\"producttype\",\n name=\"slug\",\n field=models.SlugField(max_length=128, unique=True),\n ),\n ]\n", "path": "saleor/product/migrations/0112_auto_20200129_0050.py"}]} | 4,022 | 717 |
gh_patches_debug_35340 | rasdani/github-patches | git_diff | microsoft__playwright-python-86 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update versions in README.md on Playwright roll
</issue>
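For context, `build_driver.py` (shown below) already vendors the upstream `playwright` npm package under `driver/node_modules/playwright`, and the upstream Playwright README appears to wrap its version numbers in `<!-- GEN:... --> ... <!-- GEN:stop -->` comment markers. A minimal sketch of copying those blocks into this repository's README during the roll follows; the exact paths, and the assumption that this README already contains matching markers, are illustrative rather than confirmed by the issue.

```python
import re
from pathlib import Path

# Assumed locations, relative to the repository root.
upstream_readme = Path("driver/node_modules/playwright/README.md").read_text()
our_readme = Path("README.md").read_text()

# Copy every <!-- GEN:key -->value<!-- GEN:stop --> block from upstream.
for key, value in re.findall(r"<!-- GEN:(.*?) -->(.*?)<!-- GEN:stop -->", upstream_readme):
    our_readme = re.sub(
        rf"(<!-- GEN:{key} -->).*?(<!-- GEN:stop -->)",
        f"<!-- GEN:{key} -->{value}<!-- GEN:stop -->",
        our_readme,
    )

Path("README.md").write_text(our_readme)
```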
<code>
[start of build_driver.py]
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import gzip
16 import os
17 import shutil
18 import subprocess
19
20 driver_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "driver")
21 package_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "playwright")
22 drivers_path = os.path.join(package_path, "drivers")
23
24 if os.path.exists(os.path.join(driver_path, "package-lock.json")):
25 os.remove(os.path.join(driver_path, "package-lock.json"))
26 if os.path.exists(os.path.join(driver_path, "node_modules")):
27 shutil.rmtree(os.path.join(driver_path, "node_modules"))
28 if os.path.exists(os.path.join(driver_path, "out")):
29 shutil.rmtree(os.path.join(driver_path, "out"))
30
31 subprocess.run("npm i", cwd=driver_path, shell=True)
32 subprocess.run("npm run bake", cwd=driver_path, shell=True)
33
34 for driver in ["driver-linux", "driver-macos", "driver-win.exe"]:
35 if os.path.exists(os.path.join(package_path, driver)):
36 os.remove(os.path.join(package_path, driver))
37
38 in_path = os.path.join(driver_path, "out", driver)
39 out_path = os.path.join(drivers_path, driver + ".gz")
40 with open(in_path, "rb") as f_in, gzip.open(out_path, "wb") as f_out:
41 shutil.copyfileobj(f_in, f_out)
42
43 shutil.copyfile(
44 os.path.join(driver_path, "node_modules", "playwright", "browsers.json"),
45 os.path.join(drivers_path, "browsers.json"),
46 )
47
[end of build_driver.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/build_driver.py b/build_driver.py
--- a/build_driver.py
+++ b/build_driver.py
@@ -14,33 +14,52 @@
import gzip
import os
+import re
import shutil
import subprocess
+from pathlib import Path
-driver_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "driver")
-package_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "playwright")
-drivers_path = os.path.join(package_path, "drivers")
+_dirname = Path(os.path.dirname(os.path.abspath(__file__)))
-if os.path.exists(os.path.join(driver_path, "package-lock.json")):
- os.remove(os.path.join(driver_path, "package-lock.json"))
-if os.path.exists(os.path.join(driver_path, "node_modules")):
- shutil.rmtree(os.path.join(driver_path, "node_modules"))
-if os.path.exists(os.path.join(driver_path, "out")):
- shutil.rmtree(os.path.join(driver_path, "out"))
+driver_path = _dirname / "driver"
+package_path = _dirname / "playwright"
+drivers_path = package_path / "drivers"
+
+if (driver_path / "package-lock.json").exists():
+ os.remove(driver_path / "package-lock.json")
+if (driver_path / "node_modules").exists():
+ shutil.rmtree(driver_path / "node_modules")
+if (driver_path / "out").exists():
+ shutil.rmtree(driver_path / "out")
subprocess.run("npm i", cwd=driver_path, shell=True)
subprocess.run("npm run bake", cwd=driver_path, shell=True)
for driver in ["driver-linux", "driver-macos", "driver-win.exe"]:
- if os.path.exists(os.path.join(package_path, driver)):
- os.remove(os.path.join(package_path, driver))
+ if (package_path / driver).exists():
+ os.remove((package_path / driver))
- in_path = os.path.join(driver_path, "out", driver)
- out_path = os.path.join(drivers_path, driver + ".gz")
+ in_path = driver_path / "out" / driver
+ out_path = drivers_path / (driver + ".gz")
with open(in_path, "rb") as f_in, gzip.open(out_path, "wb") as f_out:
shutil.copyfileobj(f_in, f_out)
+node_modules_playwright = driver_path / "node_modules" / "playwright"
+
shutil.copyfile(
- os.path.join(driver_path, "node_modules", "playwright", "browsers.json"),
- os.path.join(drivers_path, "browsers.json"),
+ node_modules_playwright / "browsers.json", drivers_path / "browsers.json",
)
+
+upstream_readme = (node_modules_playwright / "README.md").read_text()
+pw_python_readme = (_dirname / "README.md").read_text()
+
+matches = re.findall(r"<!-- GEN:(.*?) -->(.*?)<!-- GEN:stop -->", upstream_readme)
+
+for key, value in matches:
+ pw_python_readme = re.sub(
+ rf"(<!-- GEN:{key} -->).*?(<!-- GEN:stop -->)",
+ f"<!-- GEN:{key} -->{value}<!-- GEN:stop -->",
+ pw_python_readme,
+ )
+
+(_dirname / "README.md").write_text(pw_python_readme)
| {"golden_diff": "diff --git a/build_driver.py b/build_driver.py\n--- a/build_driver.py\n+++ b/build_driver.py\n@@ -14,33 +14,52 @@\n \n import gzip\n import os\n+import re\n import shutil\n import subprocess\n+from pathlib import Path\n \n-driver_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"driver\")\n-package_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"playwright\")\n-drivers_path = os.path.join(package_path, \"drivers\")\n+_dirname = Path(os.path.dirname(os.path.abspath(__file__)))\n \n-if os.path.exists(os.path.join(driver_path, \"package-lock.json\")):\n- os.remove(os.path.join(driver_path, \"package-lock.json\"))\n-if os.path.exists(os.path.join(driver_path, \"node_modules\")):\n- shutil.rmtree(os.path.join(driver_path, \"node_modules\"))\n-if os.path.exists(os.path.join(driver_path, \"out\")):\n- shutil.rmtree(os.path.join(driver_path, \"out\"))\n+driver_path = _dirname / \"driver\"\n+package_path = _dirname / \"playwright\"\n+drivers_path = package_path / \"drivers\"\n+\n+if (driver_path / \"package-lock.json\").exists():\n+ os.remove(driver_path / \"package-lock.json\")\n+if (driver_path / \"node_modules\").exists():\n+ shutil.rmtree(driver_path / \"node_modules\")\n+if (driver_path / \"out\").exists():\n+ shutil.rmtree(driver_path / \"out\")\n \n subprocess.run(\"npm i\", cwd=driver_path, shell=True)\n subprocess.run(\"npm run bake\", cwd=driver_path, shell=True)\n \n for driver in [\"driver-linux\", \"driver-macos\", \"driver-win.exe\"]:\n- if os.path.exists(os.path.join(package_path, driver)):\n- os.remove(os.path.join(package_path, driver))\n+ if (package_path / driver).exists():\n+ os.remove((package_path / driver))\n \n- in_path = os.path.join(driver_path, \"out\", driver)\n- out_path = os.path.join(drivers_path, driver + \".gz\")\n+ in_path = driver_path / \"out\" / driver\n+ out_path = drivers_path / (driver + \".gz\")\n with open(in_path, \"rb\") as f_in, gzip.open(out_path, \"wb\") as f_out:\n shutil.copyfileobj(f_in, f_out)\n \n+node_modules_playwright = driver_path / \"node_modules\" / \"playwright\"\n+\n shutil.copyfile(\n- os.path.join(driver_path, \"node_modules\", \"playwright\", \"browsers.json\"),\n- os.path.join(drivers_path, \"browsers.json\"),\n+ node_modules_playwright / \"browsers.json\", drivers_path / \"browsers.json\",\n )\n+\n+upstream_readme = (node_modules_playwright / \"README.md\").read_text()\n+pw_python_readme = (_dirname / \"README.md\").read_text()\n+\n+matches = re.findall(r\"<!-- GEN:(.*?) 
-->(.*?)<!-- GEN:stop -->\", upstream_readme)\n+\n+for key, value in matches:\n+ pw_python_readme = re.sub(\n+ rf\"(<!-- GEN:{key} -->).*?(<!-- GEN:stop -->)\",\n+ f\"<!-- GEN:{key} -->{value}<!-- GEN:stop -->\",\n+ pw_python_readme,\n+ )\n+\n+(_dirname / \"README.md\").write_text(pw_python_readme)\n", "issue": "Update versions in README.md on Playwright roll\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport gzip\nimport os\nimport shutil\nimport subprocess\n\ndriver_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"driver\")\npackage_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"playwright\")\ndrivers_path = os.path.join(package_path, \"drivers\")\n\nif os.path.exists(os.path.join(driver_path, \"package-lock.json\")):\n os.remove(os.path.join(driver_path, \"package-lock.json\"))\nif os.path.exists(os.path.join(driver_path, \"node_modules\")):\n shutil.rmtree(os.path.join(driver_path, \"node_modules\"))\nif os.path.exists(os.path.join(driver_path, \"out\")):\n shutil.rmtree(os.path.join(driver_path, \"out\"))\n\nsubprocess.run(\"npm i\", cwd=driver_path, shell=True)\nsubprocess.run(\"npm run bake\", cwd=driver_path, shell=True)\n\nfor driver in [\"driver-linux\", \"driver-macos\", \"driver-win.exe\"]:\n if os.path.exists(os.path.join(package_path, driver)):\n os.remove(os.path.join(package_path, driver))\n\n in_path = os.path.join(driver_path, \"out\", driver)\n out_path = os.path.join(drivers_path, driver + \".gz\")\n with open(in_path, \"rb\") as f_in, gzip.open(out_path, \"wb\") as f_out:\n shutil.copyfileobj(f_in, f_out)\n\nshutil.copyfile(\n os.path.join(driver_path, \"node_modules\", \"playwright\", \"browsers.json\"),\n os.path.join(drivers_path, \"browsers.json\"),\n)\n", "path": "build_driver.py"}]} | 1,082 | 733 |
gh_patches_debug_18813 | rasdani/github-patches | git_diff | ibis-project__ibis-2521 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BENCH: cleanup errors around benchmarks
We are currently showing some errors in the benchmark suite: https://github.com/ibis-project/ibis/pull/2451/checks?check_run_id=1220781799
It would be nice to have these run fully without errors.
</issue>
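At least part of the breakage can be reproduced without any benchmark harness by driving the suite's own methods directly, as in the sketch below. It assumes `benchmarks/benchmarks.py` (listed below) is importable from the repository root and that the ibis pandas backend is installed; note that `setup()` builds a DataFrame of several million rows, so it is slow.

```python
# Hypothetical reproduction script: call the timing methods by hand.
from benchmarks.benchmarks import PandasBackend

suite = PandasBackend()
suite.setup()  # builds the large DataFrame and all expressions

# These raise AttributeError: setup() defines low_card_grouped_rolling_udf_mean
# and high_card_grouped_rolling_udf_mean, but the timing methods reference the
# same names without the _mean suffix.
suite.time_low_card_grouped_rolling_udf()
suite.time_high_card_grouped_rolling_udf()
```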
<code>
[start of benchmarks/benchmarks.py]
1 import numpy as np
2 import pandas as pd
3
4 import ibis
5 import ibis.expr.datatypes as dt
6 from ibis.backends.pandas.udf import udf
7
8
9 def make_t(name='t'):
10 return ibis.table(
11 (
12 ('_timestamp', 'int32'),
13 ('dim1', 'int32'),
14 ('dim2', 'int32'),
15 ('valid_seconds', 'int32'),
16 ('meas1', 'int32'),
17 ('meas2', 'int32'),
18 ('year', 'int32'),
19 ('month', 'int32'),
20 ('day', 'int32'),
21 ('hour', 'int32'),
22 ('minute', 'int32'),
23 ),
24 name=name,
25 )
26
27
28 def make_base(t):
29 return (
30 (t.year > 2016)
31 | ((t.year == 2016) & (t.month > 6))
32 | ((t.year == 2016) & (t.month == 6) & (t.day > 6))
33 | ((t.year == 2016) & (t.month == 6) & (t.day == 6) & (t.hour > 6))
34 | (
35 (t.year == 2016)
36 & (t.month == 6)
37 & (t.day == 6)
38 & (t.hour == 6)
39 & (t.minute >= 5)
40 )
41 ) & (
42 (t.year < 2016)
43 | ((t.year == 2016) & (t.month < 6))
44 | ((t.year == 2016) & (t.month == 6) & (t.day < 6))
45 | ((t.year == 2016) & (t.month == 6) & (t.day == 6) & (t.hour < 6))
46 | (
47 (t.year == 2016)
48 & (t.month == 6)
49 & (t.day == 6)
50 & (t.hour == 6)
51 & (t.minute <= 5)
52 )
53 )
54
55
56 def make_large_expr(t, base):
57 src_table = t[base]
58 src_table = src_table.mutate(
59 _timestamp=(src_table['_timestamp'] - src_table['_timestamp'] % 3600)
60 .cast('int32')
61 .name('_timestamp'),
62 valid_seconds=300,
63 )
64
65 aggs = []
66 for meas in ['meas1', 'meas2']:
67 aggs.append(src_table[meas].sum().cast('float').name(meas))
68 src_table = src_table.aggregate(
69 aggs, by=['_timestamp', 'dim1', 'dim2', 'valid_seconds']
70 )
71
72 part_keys = ['year', 'month', 'day', 'hour', 'minute']
73 ts_col = src_table['_timestamp'].cast('timestamp')
74 new_cols = {}
75 for part_key in part_keys:
76 part_col = getattr(ts_col, part_key)()
77 new_cols[part_key] = part_col
78 src_table = src_table.mutate(**new_cols)
79 return src_table[
80 [
81 '_timestamp',
82 'dim1',
83 'dim2',
84 'meas1',
85 'meas2',
86 'year',
87 'month',
88 'day',
89 'hour',
90 'minute',
91 ]
92 ]
93
94
95 class Suite:
96 def setup(self):
97 self.t = t = make_t()
98 self.base = make_base(t)
99 self.expr = self.large_expr
100
101 @property
102 def large_expr(self):
103 t = make_t()
104 return make_large_expr(t, make_base(t))
105
106
107 class Construction(Suite):
108 def time_large_expr_construction(self):
109 self.large_expr
110
111
112 class Hashing(Suite):
113 def time_hash_small_expr(self):
114 hash(make_t())
115
116 def time_hash_medium_expr(self):
117 hash(make_base(make_t()))
118
119 def time_hash_large_expr(self):
120 hash(self.large_expr)
121
122
123 class Formatting(Suite):
124 def time_base_expr_formatting(self):
125 str(self.base)
126
127 def time_large_expr_formatting(self):
128 str(self.expr)
129
130
131 class Compilation(Suite):
132 def time_impala_base_compile(self):
133 ibis.impala.compile(self.base)
134
135 def time_impala_large_expr_compile(self):
136 ibis.impala.compile(self.expr)
137
138
139 class PandasBackend:
140 def setup(self):
141 n = 30 * int(2e5)
142 self.data = pd.DataFrame(
143 {
144 'key': np.random.choice(16000, size=n),
145 'low_card_key': np.random.choice(30, size=n),
146 'value': np.random.rand(n),
147 'timestamps': pd.date_range(
148 start='now', periods=n, freq='s'
149 ).values,
150 'timestamp_strings': pd.date_range(
151 start='now', periods=n, freq='s'
152 ).values.astype(str),
153 'repeated_timestamps': pd.date_range(
154 start='2018-09-01', periods=30
155 ).repeat(int(n / 30)),
156 }
157 )
158
159 t = ibis.pandas.connect({'df': self.data}).table('df')
160
161 self.high_card_group_by = t.groupby(t.key).aggregate(
162 avg_value=t.value.mean()
163 )
164
165 self.cast_to_dates = t.timestamps.cast(dt.date)
166 self.cast_to_dates_from_strings = t.timestamp_strings.cast(dt.date)
167
168 self.multikey_group_by_with_mutate = (
169 t.mutate(dates=t.timestamps.cast('date'))
170 .groupby(['low_card_key', 'dates'])
171 .aggregate(avg_value=lambda t: t.value.mean())
172 )
173
174 self.simple_sort = t.sort_by([t.key])
175
176 self.simple_sort_projection = t[['key', 'value']].sort_by(['key'])
177
178 self.multikey_sort = t.sort_by(['low_card_key', 'key'])
179
180 self.multikey_sort_projection = t[
181 ['low_card_key', 'key', 'value']
182 ].sort_by(['low_card_key', 'key'])
183
184 low_card_rolling_window = ibis.trailing_range_window(
185 ibis.interval(days=2),
186 order_by=t.repeated_timestamps,
187 group_by=t.low_card_key,
188 )
189 self.low_card_grouped_rolling = t.value.mean().over(
190 low_card_rolling_window
191 )
192
193 high_card_rolling_window = ibis.trailing_range_window(
194 ibis.interval(days=2),
195 order_by=t.repeated_timestamps,
196 group_by=t.key,
197 )
198 self.high_card_grouped_rolling = t.value.mean().over(
199 high_card_rolling_window
200 )
201
202 @udf.reduction(['double'], 'double')
203 def my_mean(series):
204 return series.mean()
205
206 self.low_card_grouped_rolling_udf_mean = my_mean(t.value).over(
207 low_card_rolling_window
208 )
209 self.high_card_grouped_rolling_udf_mean = my_mean(t.value).over(
210 high_card_rolling_window
211 )
212
213 @udf.analytic(['double'], 'double')
214 def my_zscore(series):
215 return (series - series.mean()) / series.std()
216
217 low_card_window = ibis.window(group_by=t.low_card_key)
218
219 high_card_window = ibis.window(group_by=t.key)
220
221 self.low_card_window_analytics_udf = my_zscore(t.value).over(
222 low_card_window
223 )
224 self.high_card_window_analytics_udf = my_zscore(t.value).over(
225 high_card_window
226 )
227
228 @udf.reduction(['double', 'double'], 'double')
229 def my_wm(v, w):
230 return np.average(v, weights=w)
231
232 self.low_card_grouped_rolling_udf_wm = my_wm(t.value, t.value).over(
233 low_card_rolling_window
234 )
235
236 self.high_card_grouped_rolling_udf_wm = my_wm(t.value, t.value).over(
237 low_card_rolling_window
238 )
239
240 def time_high_cardinality_group_by(self):
241 self.high_card_group_by.execute()
242
243 def time_cast_to_date(self):
244 self.cast_to_dates.execute()
245
246 def time_cast_to_date_from_string(self):
247 self.cast_to_dates_from_strings.execute()
248
249 def time_multikey_group_by_with_mutate(self):
250 self.multikey_group_by_with_mutate.execute()
251
252 def time_simple_sort(self):
253 self.simple_sort.execute()
254
255 def time_multikey_sort(self):
256 self.multikey_sort.execute()
257
258 def time_simple_sort_projection(self):
259 self.simple_sort_projection.execute()
260
261 def time_multikey_sort_projection(self):
262 self.multikey_sort_projection.execute()
263
264 def time_low_card_grouped_rolling(self):
265 self.low_card_grouped_rolling.execute()
266
267 def time_high_card_grouped_rolling(self):
268 self.high_card_grouped_rolling.execute()
269
270 def time_low_card_grouped_rolling_udf(self):
271 self.low_card_grouped_rolling_udf.execute()
272
273 def time_high_card_grouped_rolling_udf(self):
274 self.high_card_grouped_rolling_udf.execute()
275
276 def time_low_card_window_analytics_udf(self):
277 self.low_card_window_analytics_udf.execute()
278
279 def time_high_card_grouped_rolling_udf_wm(self):
280 self.high_card_grouped_rolling_udf_wm.execute()
281
282 def time_low_card_grouped_rolling_udf_wm(self):
283 self.low_card_grouped_rolling_udf_wm.execute()
284
[end of benchmarks/benchmarks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/benchmarks/benchmarks.py b/benchmarks/benchmarks.py
--- a/benchmarks/benchmarks.py
+++ b/benchmarks/benchmarks.py
@@ -138,7 +138,7 @@
class PandasBackend:
def setup(self):
- n = 30 * int(2e5)
+ n = 30 * int(2e4)
self.data = pd.DataFrame(
{
'key': np.random.choice(16000, size=n),
@@ -268,10 +268,10 @@
self.high_card_grouped_rolling.execute()
def time_low_card_grouped_rolling_udf(self):
- self.low_card_grouped_rolling_udf.execute()
+ self.low_card_grouped_rolling_udf_mean.execute()
def time_high_card_grouped_rolling_udf(self):
- self.high_card_grouped_rolling_udf.execute()
+ self.high_card_grouped_rolling_udf_mean.execute()
def time_low_card_window_analytics_udf(self):
self.low_card_window_analytics_udf.execute()
| {"golden_diff": "diff --git a/benchmarks/benchmarks.py b/benchmarks/benchmarks.py\n--- a/benchmarks/benchmarks.py\n+++ b/benchmarks/benchmarks.py\n@@ -138,7 +138,7 @@\n \n class PandasBackend:\n def setup(self):\n- n = 30 * int(2e5)\n+ n = 30 * int(2e4)\n self.data = pd.DataFrame(\n {\n 'key': np.random.choice(16000, size=n),\n@@ -268,10 +268,10 @@\n self.high_card_grouped_rolling.execute()\n \n def time_low_card_grouped_rolling_udf(self):\n- self.low_card_grouped_rolling_udf.execute()\n+ self.low_card_grouped_rolling_udf_mean.execute()\n \n def time_high_card_grouped_rolling_udf(self):\n- self.high_card_grouped_rolling_udf.execute()\n+ self.high_card_grouped_rolling_udf_mean.execute()\n \n def time_low_card_window_analytics_udf(self):\n self.low_card_window_analytics_udf.execute()\n", "issue": "BENCH: cleanup errors around benchmarks\nwe are showing some errors in the benchmark suite: https://github.com/ibis-project/ibis/pull/2451/checks?check_run_id=1220781799\r\n\r\nwould be nice to have these run fully w/o errors.\n", "before_files": [{"content": "import numpy as np\nimport pandas as pd\n\nimport ibis\nimport ibis.expr.datatypes as dt\nfrom ibis.backends.pandas.udf import udf\n\n\ndef make_t(name='t'):\n return ibis.table(\n (\n ('_timestamp', 'int32'),\n ('dim1', 'int32'),\n ('dim2', 'int32'),\n ('valid_seconds', 'int32'),\n ('meas1', 'int32'),\n ('meas2', 'int32'),\n ('year', 'int32'),\n ('month', 'int32'),\n ('day', 'int32'),\n ('hour', 'int32'),\n ('minute', 'int32'),\n ),\n name=name,\n )\n\n\ndef make_base(t):\n return (\n (t.year > 2016)\n | ((t.year == 2016) & (t.month > 6))\n | ((t.year == 2016) & (t.month == 6) & (t.day > 6))\n | ((t.year == 2016) & (t.month == 6) & (t.day == 6) & (t.hour > 6))\n | (\n (t.year == 2016)\n & (t.month == 6)\n & (t.day == 6)\n & (t.hour == 6)\n & (t.minute >= 5)\n )\n ) & (\n (t.year < 2016)\n | ((t.year == 2016) & (t.month < 6))\n | ((t.year == 2016) & (t.month == 6) & (t.day < 6))\n | ((t.year == 2016) & (t.month == 6) & (t.day == 6) & (t.hour < 6))\n | (\n (t.year == 2016)\n & (t.month == 6)\n & (t.day == 6)\n & (t.hour == 6)\n & (t.minute <= 5)\n )\n )\n\n\ndef make_large_expr(t, base):\n src_table = t[base]\n src_table = src_table.mutate(\n _timestamp=(src_table['_timestamp'] - src_table['_timestamp'] % 3600)\n .cast('int32')\n .name('_timestamp'),\n valid_seconds=300,\n )\n\n aggs = []\n for meas in ['meas1', 'meas2']:\n aggs.append(src_table[meas].sum().cast('float').name(meas))\n src_table = src_table.aggregate(\n aggs, by=['_timestamp', 'dim1', 'dim2', 'valid_seconds']\n )\n\n part_keys = ['year', 'month', 'day', 'hour', 'minute']\n ts_col = src_table['_timestamp'].cast('timestamp')\n new_cols = {}\n for part_key in part_keys:\n part_col = getattr(ts_col, part_key)()\n new_cols[part_key] = part_col\n src_table = src_table.mutate(**new_cols)\n return src_table[\n [\n '_timestamp',\n 'dim1',\n 'dim2',\n 'meas1',\n 'meas2',\n 'year',\n 'month',\n 'day',\n 'hour',\n 'minute',\n ]\n ]\n\n\nclass Suite:\n def setup(self):\n self.t = t = make_t()\n self.base = make_base(t)\n self.expr = self.large_expr\n\n @property\n def large_expr(self):\n t = make_t()\n return make_large_expr(t, make_base(t))\n\n\nclass Construction(Suite):\n def time_large_expr_construction(self):\n self.large_expr\n\n\nclass Hashing(Suite):\n def time_hash_small_expr(self):\n hash(make_t())\n\n def time_hash_medium_expr(self):\n hash(make_base(make_t()))\n\n def time_hash_large_expr(self):\n hash(self.large_expr)\n\n\nclass Formatting(Suite):\n def 
time_base_expr_formatting(self):\n str(self.base)\n\n def time_large_expr_formatting(self):\n str(self.expr)\n\n\nclass Compilation(Suite):\n def time_impala_base_compile(self):\n ibis.impala.compile(self.base)\n\n def time_impala_large_expr_compile(self):\n ibis.impala.compile(self.expr)\n\n\nclass PandasBackend:\n def setup(self):\n n = 30 * int(2e5)\n self.data = pd.DataFrame(\n {\n 'key': np.random.choice(16000, size=n),\n 'low_card_key': np.random.choice(30, size=n),\n 'value': np.random.rand(n),\n 'timestamps': pd.date_range(\n start='now', periods=n, freq='s'\n ).values,\n 'timestamp_strings': pd.date_range(\n start='now', periods=n, freq='s'\n ).values.astype(str),\n 'repeated_timestamps': pd.date_range(\n start='2018-09-01', periods=30\n ).repeat(int(n / 30)),\n }\n )\n\n t = ibis.pandas.connect({'df': self.data}).table('df')\n\n self.high_card_group_by = t.groupby(t.key).aggregate(\n avg_value=t.value.mean()\n )\n\n self.cast_to_dates = t.timestamps.cast(dt.date)\n self.cast_to_dates_from_strings = t.timestamp_strings.cast(dt.date)\n\n self.multikey_group_by_with_mutate = (\n t.mutate(dates=t.timestamps.cast('date'))\n .groupby(['low_card_key', 'dates'])\n .aggregate(avg_value=lambda t: t.value.mean())\n )\n\n self.simple_sort = t.sort_by([t.key])\n\n self.simple_sort_projection = t[['key', 'value']].sort_by(['key'])\n\n self.multikey_sort = t.sort_by(['low_card_key', 'key'])\n\n self.multikey_sort_projection = t[\n ['low_card_key', 'key', 'value']\n ].sort_by(['low_card_key', 'key'])\n\n low_card_rolling_window = ibis.trailing_range_window(\n ibis.interval(days=2),\n order_by=t.repeated_timestamps,\n group_by=t.low_card_key,\n )\n self.low_card_grouped_rolling = t.value.mean().over(\n low_card_rolling_window\n )\n\n high_card_rolling_window = ibis.trailing_range_window(\n ibis.interval(days=2),\n order_by=t.repeated_timestamps,\n group_by=t.key,\n )\n self.high_card_grouped_rolling = t.value.mean().over(\n high_card_rolling_window\n )\n\n @udf.reduction(['double'], 'double')\n def my_mean(series):\n return series.mean()\n\n self.low_card_grouped_rolling_udf_mean = my_mean(t.value).over(\n low_card_rolling_window\n )\n self.high_card_grouped_rolling_udf_mean = my_mean(t.value).over(\n high_card_rolling_window\n )\n\n @udf.analytic(['double'], 'double')\n def my_zscore(series):\n return (series - series.mean()) / series.std()\n\n low_card_window = ibis.window(group_by=t.low_card_key)\n\n high_card_window = ibis.window(group_by=t.key)\n\n self.low_card_window_analytics_udf = my_zscore(t.value).over(\n low_card_window\n )\n self.high_card_window_analytics_udf = my_zscore(t.value).over(\n high_card_window\n )\n\n @udf.reduction(['double', 'double'], 'double')\n def my_wm(v, w):\n return np.average(v, weights=w)\n\n self.low_card_grouped_rolling_udf_wm = my_wm(t.value, t.value).over(\n low_card_rolling_window\n )\n\n self.high_card_grouped_rolling_udf_wm = my_wm(t.value, t.value).over(\n low_card_rolling_window\n )\n\n def time_high_cardinality_group_by(self):\n self.high_card_group_by.execute()\n\n def time_cast_to_date(self):\n self.cast_to_dates.execute()\n\n def time_cast_to_date_from_string(self):\n self.cast_to_dates_from_strings.execute()\n\n def time_multikey_group_by_with_mutate(self):\n self.multikey_group_by_with_mutate.execute()\n\n def time_simple_sort(self):\n self.simple_sort.execute()\n\n def time_multikey_sort(self):\n self.multikey_sort.execute()\n\n def time_simple_sort_projection(self):\n self.simple_sort_projection.execute()\n\n def 
time_multikey_sort_projection(self):\n self.multikey_sort_projection.execute()\n\n def time_low_card_grouped_rolling(self):\n self.low_card_grouped_rolling.execute()\n\n def time_high_card_grouped_rolling(self):\n self.high_card_grouped_rolling.execute()\n\n def time_low_card_grouped_rolling_udf(self):\n self.low_card_grouped_rolling_udf.execute()\n\n def time_high_card_grouped_rolling_udf(self):\n self.high_card_grouped_rolling_udf.execute()\n\n def time_low_card_window_analytics_udf(self):\n self.low_card_window_analytics_udf.execute()\n\n def time_high_card_grouped_rolling_udf_wm(self):\n self.high_card_grouped_rolling_udf_wm.execute()\n\n def time_low_card_grouped_rolling_udf_wm(self):\n self.low_card_grouped_rolling_udf_wm.execute()\n", "path": "benchmarks/benchmarks.py"}]} | 3,449 | 251 |
gh_patches_debug_12201 | rasdani/github-patches | git_diff | PyGithub__PyGithub-1433 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
List pull requests based on commit
There is functionality in the GitHub API that is apparently not supported in PyGithub -- https://developer.github.com/v3/repos/commits/#list-pull-requests-associated-with-commit
## Motivation
I'm doing an auto-versioning type of thing on my repo, and I need to walk through all the pull requests searching for labels. I can get all the commits since the last release using `<repo>.get_commits(since=date_from_last_release)`. It would be incredibly useful if those commits came with their associated pull requests - that way I could simply check for each label on those.
I couldn't find this functionality in the documentation or by reading the code, and I couldn't find another decent way to do this with the current PyGithub implementation.
## Caveats
This feature seems to be in a preview period and the API might change, so it's probably not worth doing this right now (if ever).
</issue>
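To make the request concrete, the loop described above could look roughly like the sketch below once commits expose their associated pull requests. The `commit.get_pulls()` accessor is hypothetical at this point, and the underlying endpoint (`GET /repos/:owner/:repo/commits/:sha/pulls`) is still behind a preview `Accept` header, so the eventual name and behaviour may differ.

```python
from datetime import datetime, timezone
from github import Github

gh = Github("ACCESS_TOKEN")               # assumption: token-based auth
repo = gh.get_repo("someowner/somerepo")  # placeholder repository

date_from_last_release = datetime(2020, 1, 1, tzinfo=timezone.utc)

bump = "patch"
for commit in repo.get_commits(since=date_from_last_release):
    for pull in commit.get_pulls():       # hypothetical accessor, see issue
        labels = {label.name for label in pull.labels}
        if "major" in labels:
            bump = "major"
        elif "minor" in labels and bump != "major":
            bump = "minor"
print(bump)
```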
<code>
[start of github/Commit.py]
1 # -*- coding: utf-8 -*-
2
3 ############################ Copyrights and license ############################
4 # #
5 # Copyright 2012 Vincent Jacques <[email protected]> #
6 # Copyright 2012 Zearin <[email protected]> #
7 # Copyright 2013 AKFish <[email protected]> #
8 # Copyright 2013 Vincent Jacques <[email protected]> #
9 # Copyright 2013 martinqt <[email protected]> #
10 # Copyright 2014 Andy Casey <[email protected]> #
11 # Copyright 2014 Vincent Jacques <[email protected]> #
12 # Copyright 2016 Jannis Gebauer <[email protected]> #
13 # Copyright 2016 John Eskew <[email protected]> #
14 # Copyright 2016 Peter Buckley <[email protected]> #
15 # Copyright 2018 sfdye <[email protected]> #
16 # #
17 # This file is part of PyGithub. #
18 # http://pygithub.readthedocs.io/ #
19 # #
20 # PyGithub is free software: you can redistribute it and/or modify it under #
21 # the terms of the GNU Lesser General Public License as published by the Free #
22 # Software Foundation, either version 3 of the License, or (at your option) #
23 # any later version. #
24 # #
25 # PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #
26 # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
27 # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
28 # details. #
29 # #
30 # You should have received a copy of the GNU Lesser General Public License #
31 # along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #
32 # #
33 ################################################################################
34
35 import github.CommitCombinedStatus
36 import github.CommitComment
37 import github.CommitStats
38 import github.CommitStatus
39 import github.File
40 import github.GitCommit
41 import github.GithubObject
42 import github.NamedUser
43 import github.PaginatedList
44
45
46 class Commit(github.GithubObject.CompletableGithubObject):
47 """
48 This class represents Commits. The reference can be found here http://developer.github.com/v3/git/commits/
49 """
50
51 def __repr__(self):
52 return self.get__repr__({"sha": self._sha.value})
53
54 @property
55 def author(self):
56 """
57 :type: :class:`github.NamedUser.NamedUser`
58 """
59 self._completeIfNotSet(self._author)
60 return self._author.value
61
62 @property
63 def comments_url(self):
64 """
65 :type: string
66 """
67 self._completeIfNotSet(self._comments_url)
68 return self._comments_url.value
69
70 @property
71 def commit(self):
72 """
73 :type: :class:`github.GitCommit.GitCommit`
74 """
75 self._completeIfNotSet(self._commit)
76 return self._commit.value
77
78 @property
79 def committer(self):
80 """
81 :type: :class:`github.NamedUser.NamedUser`
82 """
83 self._completeIfNotSet(self._committer)
84 return self._committer.value
85
86 @property
87 def files(self):
88 """
89 :type: list of :class:`github.File.File`
90 """
91 self._completeIfNotSet(self._files)
92 return self._files.value
93
94 @property
95 def html_url(self):
96 """
97 :type: string
98 """
99 self._completeIfNotSet(self._html_url)
100 return self._html_url.value
101
102 @property
103 def parents(self):
104 """
105 :type: list of :class:`github.Commit.Commit`
106 """
107 self._completeIfNotSet(self._parents)
108 return self._parents.value
109
110 @property
111 def sha(self):
112 """
113 :type: string
114 """
115 self._completeIfNotSet(self._sha)
116 return self._sha.value
117
118 @property
119 def stats(self):
120 """
121 :type: :class:`github.CommitStats.CommitStats`
122 """
123 self._completeIfNotSet(self._stats)
124 return self._stats.value
125
126 @property
127 def url(self):
128 """
129 :type: string
130 """
131 self._completeIfNotSet(self._url)
132 return self._url.value
133
134 def create_comment(
135 self,
136 body,
137 line=github.GithubObject.NotSet,
138 path=github.GithubObject.NotSet,
139 position=github.GithubObject.NotSet,
140 ):
141 """
142 :calls: `POST /repos/:owner/:repo/commits/:sha/comments <http://developer.github.com/v3/repos/comments>`_
143 :param body: string
144 :param line: integer
145 :param path: string
146 :param position: integer
147 :rtype: :class:`github.CommitComment.CommitComment`
148 """
149 assert isinstance(body, str), body
150 assert line is github.GithubObject.NotSet or isinstance(line, int), line
151 assert path is github.GithubObject.NotSet or isinstance(path, str), path
152 assert position is github.GithubObject.NotSet or isinstance(
153 position, int
154 ), position
155 post_parameters = {
156 "body": body,
157 }
158 if line is not github.GithubObject.NotSet:
159 post_parameters["line"] = line
160 if path is not github.GithubObject.NotSet:
161 post_parameters["path"] = path
162 if position is not github.GithubObject.NotSet:
163 post_parameters["position"] = position
164 headers, data = self._requester.requestJsonAndCheck(
165 "POST", self.url + "/comments", input=post_parameters
166 )
167 return github.CommitComment.CommitComment(
168 self._requester, headers, data, completed=True
169 )
170
171 def create_status(
172 self,
173 state,
174 target_url=github.GithubObject.NotSet,
175 description=github.GithubObject.NotSet,
176 context=github.GithubObject.NotSet,
177 ):
178 """
179 :calls: `POST /repos/:owner/:repo/statuses/:sha <http://developer.github.com/v3/repos/statuses>`_
180 :param state: string
181 :param target_url: string
182 :param description: string
183 :param context: string
184 :rtype: :class:`github.CommitStatus.CommitStatus`
185 """
186 assert isinstance(state, str), state
187 assert target_url is github.GithubObject.NotSet or isinstance(
188 target_url, str
189 ), target_url
190 assert description is github.GithubObject.NotSet or isinstance(
191 description, str
192 ), description
193 assert context is github.GithubObject.NotSet or isinstance(
194 context, str
195 ), context
196 post_parameters = {
197 "state": state,
198 }
199 if target_url is not github.GithubObject.NotSet:
200 post_parameters["target_url"] = target_url
201 if description is not github.GithubObject.NotSet:
202 post_parameters["description"] = description
203 if context is not github.GithubObject.NotSet:
204 post_parameters["context"] = context
205 headers, data = self._requester.requestJsonAndCheck(
206 "POST",
207 self._parentUrl(self._parentUrl(self.url)) + "/statuses/" + self.sha,
208 input=post_parameters,
209 )
210 return github.CommitStatus.CommitStatus(
211 self._requester, headers, data, completed=True
212 )
213
214 def get_comments(self):
215 """
216 :calls: `GET /repos/:owner/:repo/commits/:sha/comments <http://developer.github.com/v3/repos/comments>`_
217 :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.CommitComment.CommitComment`
218 """
219 return github.PaginatedList.PaginatedList(
220 github.CommitComment.CommitComment,
221 self._requester,
222 self.url + "/comments",
223 None,
224 )
225
226 def get_statuses(self):
227 """
228 :calls: `GET /repos/:owner/:repo/statuses/:ref <http://developer.github.com/v3/repos/statuses>`_
229 :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.CommitStatus.CommitStatus`
230 """
231 return github.PaginatedList.PaginatedList(
232 github.CommitStatus.CommitStatus,
233 self._requester,
234 self._parentUrl(self._parentUrl(self.url)) + "/statuses/" + self.sha,
235 None,
236 )
237
238 def get_combined_status(self):
239 """
240 :calls: `GET /repos/:owner/:repo/commits/:ref/status/ <http://developer.github.com/v3/repos/statuses>`_
241 :rtype: :class:`github.CommitCombinedStatus.CommitCombinedStatus`
242 """
243 headers, data = self._requester.requestJsonAndCheck("GET", self.url + "/status")
244 return github.CommitCombinedStatus.CommitCombinedStatus(
245 self._requester, headers, data, completed=True
246 )
247
248 @property
249 def _identity(self):
250 return self.sha
251
252 def _initAttributes(self):
253 self._author = github.GithubObject.NotSet
254 self._comments_url = github.GithubObject.NotSet
255 self._commit = github.GithubObject.NotSet
256 self._committer = github.GithubObject.NotSet
257 self._files = github.GithubObject.NotSet
258 self._html_url = github.GithubObject.NotSet
259 self._parents = github.GithubObject.NotSet
260 self._sha = github.GithubObject.NotSet
261 self._stats = github.GithubObject.NotSet
262 self._url = github.GithubObject.NotSet
263
264 def _useAttributes(self, attributes):
265 if "author" in attributes: # pragma no branch
266 self._author = self._makeClassAttribute(
267 github.NamedUser.NamedUser, attributes["author"]
268 )
269 if "comments_url" in attributes: # pragma no branch
270 self._comments_url = self._makeStringAttribute(attributes["comments_url"])
271 if "commit" in attributes: # pragma no branch
272 self._commit = self._makeClassAttribute(
273 github.GitCommit.GitCommit, attributes["commit"]
274 )
275 if "committer" in attributes: # pragma no branch
276 self._committer = self._makeClassAttribute(
277 github.NamedUser.NamedUser, attributes["committer"]
278 )
279 if "files" in attributes: # pragma no branch
280 self._files = self._makeListOfClassesAttribute(
281 github.File.File, attributes["files"]
282 )
283 if "html_url" in attributes: # pragma no branch
284 self._html_url = self._makeStringAttribute(attributes["html_url"])
285 if "parents" in attributes: # pragma no branch
286 self._parents = self._makeListOfClassesAttribute(
287 Commit, attributes["parents"]
288 )
289 if "sha" in attributes: # pragma no branch
290 self._sha = self._makeStringAttribute(attributes["sha"])
291 if "stats" in attributes: # pragma no branch
292 self._stats = self._makeClassAttribute(
293 github.CommitStats.CommitStats, attributes["stats"]
294 )
295 if "url" in attributes: # pragma no branch
296 self._url = self._makeStringAttribute(attributes["url"])
297
[end of github/Commit.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/github/Commit.py b/github/Commit.py
--- a/github/Commit.py
+++ b/github/Commit.py
@@ -245,6 +245,19 @@
self._requester, headers, data, completed=True
)
+ def get_pulls(self):
+ """
+ :calls: `GET /repos/:owner/:repo/commits/:sha/pulls <https://developer.github.com/v3/repos/commits/#list-pull-requests-associated-with-commit>`_
+ :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.PullRequest.PullRequest`
+ """
+ return github.PaginatedList.PaginatedList(
+ github.PullRequest.PullRequest,
+ self._requester,
+ self.url + "/pulls",
+ None,
+ headers={"Accept": "application/vnd.github.groot-preview+json"},
+ )
+
@property
def _identity(self):
return self.sha
| {"golden_diff": "diff --git a/github/Commit.py b/github/Commit.py\n--- a/github/Commit.py\n+++ b/github/Commit.py\n@@ -245,6 +245,19 @@\n self._requester, headers, data, completed=True\n )\n \n+ def get_pulls(self):\n+ \"\"\"\n+ :calls: `GET /repos/:owner/:repo/commits/:sha/pulls <https://developer.github.com/v3/repos/commits/#list-pull-requests-associated-with-commit>`_\n+ :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.PullRequest.PullRequest`\n+ \"\"\"\n+ return github.PaginatedList.PaginatedList(\n+ github.PullRequest.PullRequest,\n+ self._requester,\n+ self.url + \"/pulls\",\n+ None,\n+ headers={\"Accept\": \"application/vnd.github.groot-preview+json\"},\n+ )\n+\n @property\n def _identity(self):\n return self.sha\n", "issue": "List pull requests based on commit\nThere's a functionality on the Github API that is apparently not supported on PyGithub -- https://developer.github.com/v3/repos/commits/#list-pull-requests-associated-with-commit\r\n\r\n## Motivation\r\n\r\nI'm doing an auto-versioning type of thing on my repo, and I need to walk through all the pull requests searching for labels. I can get all the commits since the last release using `<repo>.get_commits(since=date_from_last_release)`. It would be incredibly useful if those commits came with their associated pull requests - that way I could simply check for each label on those.\r\n\r\nI couldn't find this functionality on the documentation or reading the code, and I couldn't find another decent way to do this on the current PyGithub implementation.\r\n\r\n## Caveats\r\n\r\nThis feature seems to be in preview period and the API might change, so it's probably not worth doing this right now (if ever).\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n############################ Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2013 martinqt <[email protected]> #\n# Copyright 2014 Andy Casey <[email protected]> #\n# Copyright 2014 Vincent Jacques <[email protected]> #\n# Copyright 2016 Jannis Gebauer <[email protected]> #\n# Copyright 2016 John Eskew <[email protected]> #\n# Copyright 2016 Peter Buckley <[email protected]> #\n# Copyright 2018 sfdye <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. 
#\n# #\n################################################################################\n\nimport github.CommitCombinedStatus\nimport github.CommitComment\nimport github.CommitStats\nimport github.CommitStatus\nimport github.File\nimport github.GitCommit\nimport github.GithubObject\nimport github.NamedUser\nimport github.PaginatedList\n\n\nclass Commit(github.GithubObject.CompletableGithubObject):\n \"\"\"\n This class represents Commits. The reference can be found here http://developer.github.com/v3/git/commits/\n \"\"\"\n\n def __repr__(self):\n return self.get__repr__({\"sha\": self._sha.value})\n\n @property\n def author(self):\n \"\"\"\n :type: :class:`github.NamedUser.NamedUser`\n \"\"\"\n self._completeIfNotSet(self._author)\n return self._author.value\n\n @property\n def comments_url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._comments_url)\n return self._comments_url.value\n\n @property\n def commit(self):\n \"\"\"\n :type: :class:`github.GitCommit.GitCommit`\n \"\"\"\n self._completeIfNotSet(self._commit)\n return self._commit.value\n\n @property\n def committer(self):\n \"\"\"\n :type: :class:`github.NamedUser.NamedUser`\n \"\"\"\n self._completeIfNotSet(self._committer)\n return self._committer.value\n\n @property\n def files(self):\n \"\"\"\n :type: list of :class:`github.File.File`\n \"\"\"\n self._completeIfNotSet(self._files)\n return self._files.value\n\n @property\n def html_url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._html_url)\n return self._html_url.value\n\n @property\n def parents(self):\n \"\"\"\n :type: list of :class:`github.Commit.Commit`\n \"\"\"\n self._completeIfNotSet(self._parents)\n return self._parents.value\n\n @property\n def sha(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._sha)\n return self._sha.value\n\n @property\n def stats(self):\n \"\"\"\n :type: :class:`github.CommitStats.CommitStats`\n \"\"\"\n self._completeIfNotSet(self._stats)\n return self._stats.value\n\n @property\n def url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._url)\n return self._url.value\n\n def create_comment(\n self,\n body,\n line=github.GithubObject.NotSet,\n path=github.GithubObject.NotSet,\n position=github.GithubObject.NotSet,\n ):\n \"\"\"\n :calls: `POST /repos/:owner/:repo/commits/:sha/comments <http://developer.github.com/v3/repos/comments>`_\n :param body: string\n :param line: integer\n :param path: string\n :param position: integer\n :rtype: :class:`github.CommitComment.CommitComment`\n \"\"\"\n assert isinstance(body, str), body\n assert line is github.GithubObject.NotSet or isinstance(line, int), line\n assert path is github.GithubObject.NotSet or isinstance(path, str), path\n assert position is github.GithubObject.NotSet or isinstance(\n position, int\n ), position\n post_parameters = {\n \"body\": body,\n }\n if line is not github.GithubObject.NotSet:\n post_parameters[\"line\"] = line\n if path is not github.GithubObject.NotSet:\n post_parameters[\"path\"] = path\n if position is not github.GithubObject.NotSet:\n post_parameters[\"position\"] = position\n headers, data = self._requester.requestJsonAndCheck(\n \"POST\", self.url + \"/comments\", input=post_parameters\n )\n return github.CommitComment.CommitComment(\n self._requester, headers, data, completed=True\n )\n\n def create_status(\n self,\n state,\n target_url=github.GithubObject.NotSet,\n description=github.GithubObject.NotSet,\n context=github.GithubObject.NotSet,\n ):\n \"\"\"\n :calls: 
`POST /repos/:owner/:repo/statuses/:sha <http://developer.github.com/v3/repos/statuses>`_\n :param state: string\n :param target_url: string\n :param description: string\n :param context: string\n :rtype: :class:`github.CommitStatus.CommitStatus`\n \"\"\"\n assert isinstance(state, str), state\n assert target_url is github.GithubObject.NotSet or isinstance(\n target_url, str\n ), target_url\n assert description is github.GithubObject.NotSet or isinstance(\n description, str\n ), description\n assert context is github.GithubObject.NotSet or isinstance(\n context, str\n ), context\n post_parameters = {\n \"state\": state,\n }\n if target_url is not github.GithubObject.NotSet:\n post_parameters[\"target_url\"] = target_url\n if description is not github.GithubObject.NotSet:\n post_parameters[\"description\"] = description\n if context is not github.GithubObject.NotSet:\n post_parameters[\"context\"] = context\n headers, data = self._requester.requestJsonAndCheck(\n \"POST\",\n self._parentUrl(self._parentUrl(self.url)) + \"/statuses/\" + self.sha,\n input=post_parameters,\n )\n return github.CommitStatus.CommitStatus(\n self._requester, headers, data, completed=True\n )\n\n def get_comments(self):\n \"\"\"\n :calls: `GET /repos/:owner/:repo/commits/:sha/comments <http://developer.github.com/v3/repos/comments>`_\n :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.CommitComment.CommitComment`\n \"\"\"\n return github.PaginatedList.PaginatedList(\n github.CommitComment.CommitComment,\n self._requester,\n self.url + \"/comments\",\n None,\n )\n\n def get_statuses(self):\n \"\"\"\n :calls: `GET /repos/:owner/:repo/statuses/:ref <http://developer.github.com/v3/repos/statuses>`_\n :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.CommitStatus.CommitStatus`\n \"\"\"\n return github.PaginatedList.PaginatedList(\n github.CommitStatus.CommitStatus,\n self._requester,\n self._parentUrl(self._parentUrl(self.url)) + \"/statuses/\" + self.sha,\n None,\n )\n\n def get_combined_status(self):\n \"\"\"\n :calls: `GET /repos/:owner/:repo/commits/:ref/status/ <http://developer.github.com/v3/repos/statuses>`_\n :rtype: :class:`github.CommitCombinedStatus.CommitCombinedStatus`\n \"\"\"\n headers, data = self._requester.requestJsonAndCheck(\"GET\", self.url + \"/status\")\n return github.CommitCombinedStatus.CommitCombinedStatus(\n self._requester, headers, data, completed=True\n )\n\n @property\n def _identity(self):\n return self.sha\n\n def _initAttributes(self):\n self._author = github.GithubObject.NotSet\n self._comments_url = github.GithubObject.NotSet\n self._commit = github.GithubObject.NotSet\n self._committer = github.GithubObject.NotSet\n self._files = github.GithubObject.NotSet\n self._html_url = github.GithubObject.NotSet\n self._parents = github.GithubObject.NotSet\n self._sha = github.GithubObject.NotSet\n self._stats = github.GithubObject.NotSet\n self._url = github.GithubObject.NotSet\n\n def _useAttributes(self, attributes):\n if \"author\" in attributes: # pragma no branch\n self._author = self._makeClassAttribute(\n github.NamedUser.NamedUser, attributes[\"author\"]\n )\n if \"comments_url\" in attributes: # pragma no branch\n self._comments_url = self._makeStringAttribute(attributes[\"comments_url\"])\n if \"commit\" in attributes: # pragma no branch\n self._commit = self._makeClassAttribute(\n github.GitCommit.GitCommit, attributes[\"commit\"]\n )\n if \"committer\" in attributes: # pragma no branch\n self._committer = self._makeClassAttribute(\n 
github.NamedUser.NamedUser, attributes[\"committer\"]\n )\n if \"files\" in attributes: # pragma no branch\n self._files = self._makeListOfClassesAttribute(\n github.File.File, attributes[\"files\"]\n )\n if \"html_url\" in attributes: # pragma no branch\n self._html_url = self._makeStringAttribute(attributes[\"html_url\"])\n if \"parents\" in attributes: # pragma no branch\n self._parents = self._makeListOfClassesAttribute(\n Commit, attributes[\"parents\"]\n )\n if \"sha\" in attributes: # pragma no branch\n self._sha = self._makeStringAttribute(attributes[\"sha\"])\n if \"stats\" in attributes: # pragma no branch\n self._stats = self._makeClassAttribute(\n github.CommitStats.CommitStats, attributes[\"stats\"]\n )\n if \"url\" in attributes: # pragma no branch\n self._url = self._makeStringAttribute(attributes[\"url\"])\n", "path": "github/Commit.py"}]} | 3,997 | 220 |
gh_patches_debug_17758 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1857 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug]: "Ortsteil" missing for AbfallPlus in Weißenburg/Gunzenhausen
### I Have A Problem With:
A specific source
### What's Your Problem
My part of the village (the "Ortsteil") is not selectable.
You can check it here: https://www.landkreis-wug.de/abfall/abfuhrkalender/ where I have to select:
Stadt/Gemeinde: Haundorf
Straße/Ortsteil: Obererlbach
Straße: Alle Straßen
I tried the app_abfallplus_de.py script, but I can only select the main community/city "Haundorf", not my "Ortsteil".
waste_collection_schedule:
  sources:
    - name: app_abfallplus_de
      args:
        app_id: de.k4systems.abfallappwug
        city: Haundorf
        strasse: alle Straßen
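
For reference, the wizard's YAML template in the code below also supports a `bezirk` key, so the configuration this setup is presumably after would look roughly like the following (a sketch; treating `Obererlbach` as the `bezirk` value is an assumption based on the website's "Straße/Ortsteil" field):

```yaml
waste_collection_schedule:
  sources:
    - name: app_abfallplus_de
      args:
        app_id: de.k4systems.abfallappwug
        city: Haundorf
        bezirk: Obererlbach   # assumed value, taken from the "Straße/Ortsteil" selection
        strasse: alle Straßen
```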
### Source (if relevant)
app_abfallplus_de
### Logs
_No response_
### Relevant Configuration
_No response_
### Checklist Source Error
- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [X] Checked that the website of your service provider is still working
- [X] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
</issue>
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py]
1 #!/usr/bin/env python3
2 import site
3 from pathlib import Path
4 from typing import Tuple
5
6 import inquirer
7
8 package_dir = Path(__file__).resolve().parents[2]
9 site.addsitedir(str(package_dir))
10 import waste_collection_schedule.service.AppAbfallplusDe as AppAbfallplusDe # noqa: E402
11
12 YAML = {
13 "base": """
14 waste_collection_schedule:
15 sources:
16 - name: app_abfallplus_de
17 args:
18 app_id: {app_id}
19 city: {city}""",
20 "bezirk": """
21 bezirk: {bezirk}""",
22 "street": """
23 strasse: {strasse}""",
24 "hnr": """
25 hnr: {hnr}""",
26 "bundesland": """
27 bundesland: {bundesland}""",
28 "landkreis": """
29 landkreis: {landkreis}""",
30 }
31
32
33 def select_bundesland(app: AppAbfallplusDe.AppAbfallplusDe):
34 bundeslaender = app.get_bundeslaender()
35 questions = [
36 inquirer.List(
37 "bundesland",
38 choices=sorted([(s["name"], s["name"]) for s in bundeslaender]),
39 message="Select your Bundesland",
40 )
41 ]
42 bundesland = inquirer.prompt(questions)["bundesland"]
43 app.select_bundesland(bundesland)
44 return bundesland
45
46
47 def select_landkreis(app: AppAbfallplusDe.AppAbfallplusDe):
48 landkreise = app.get_landkreise()
49 questions = [
50 inquirer.List(
51 "landkreis",
52 choices=sorted(
53 [(s["name"], s["name"]) for s in landkreise] + [("BACK", "BACK")]
54 ),
55 message="Select your Landkreis",
56 )
57 ]
58 landkreis = inquirer.prompt(questions)["landkreis"]
59 if landkreis == "BACK":
60 app.clear(0)
61 select_bundesland(app)
62 return select_landkreis(app)
63 app.select_landkreis(landkreis)
64 return landkreis
65
66
67 def select_city(app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool):
68 cities = app.get_kommunen()
69 questions = [
70 inquirer.List(
71 "city",
72 choices=sorted([(s["name"], s["name"]) for s in cities])
73 + ([("BACK", "BACK")] if bund_select else []),
74 message="Select your Kommune",
75 )
76 ]
77 city = inquirer.prompt(questions)["city"]
78 if city == "BACK":
79 app.clear(1)
80 select_landkreis(app)
81 return select_city(app, bund_select)
82
83 app.select_kommune(city)
84 return city
85
86
87 def select_bezirk(
88 app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool
89 ) -> Tuple[str, bool]:
90 bezirke = app.get_bezirke()
91 questions = [
92 inquirer.List(
93 "bezirk",
94 choices=sorted([(s["name"], s["name"]) for s in bezirke])
95 + [("BACK", "BACK")],
96 message="Select your Bezirk",
97 )
98 ]
99 bezirk = inquirer.prompt(questions)["bezirk"]
100 if bezirk == "BACK":
101 app.clear(2)
102 select_city(app, bund_select)
103 return select_bezirk(app, bund_select)
104
105 return bezirk, app.select_bezirk(bezirk)
106
107
108 def select_street(app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool):
109 street = None
110 street_search = ""
111 while street is None:
112 questions = [
113 inquirer.Text(
114 "street_search",
115 message="Search your street you will be given some options to choose from",
116 default=street_search,
117 )
118 ]
119 streets = app.get_streets(inquirer.prompt(questions)["street_search"])
120 questions = [
121 inquirer.List(
122 "street",
123 choices=sorted([(s["name"], s["name"]) for s in streets])
124 + [("BACK", "BACK")],
125 message="Select your Street",
126 )
127 ]
128 street = inquirer.prompt(questions)["street"]
129 if street == "BACK":
130 street = None
131
132 if street == "BACK":
133 app.clear(2)
134 select_city(app, bund_select)
135 return select_street(app, bund_select)
136
137 app.select_street(street)
138 return street
139
140
141 def select_house_number(app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool):
142 house_numbers = app.get_hnrs()
143 questions = [
144 inquirer.List(
145 "house_number",
146 choices=[(s["name"], s["name"]) for s in house_numbers]
147 + [("BACK", "BACK")],
148 message="Select your House Number",
149 )
150 ]
151 house_number = inquirer.prompt(questions)["house_number"]
152 if house_number == "BACK":
153 app.clear(3)
154 select_street(app, bund_select)
155 return select_house_number(app, bund_select)
156 app.select_hnr(house_number)
157 return house_number
158
159
160 def main():
161 questions = [
162 inquirer.List(
163 "app-id",
164 choices=[(s, s) for s in sorted(AppAbfallplusDe.SUPPORTED_APPS)],
165 message="Select your App",
166 )
167 ]
168 app_id = inquirer.prompt(questions)["app-id"]
169
170 app = AppAbfallplusDe.AppAbfallplusDe(app_id, "", "", "")
171 bezirk_needed = "bezirk" in app.init_connection() and app.get_bezirke() != []
172 cities = app.get_kommunen()
173 bund_select = cities == []
174
175 bundesland = landkreis = None
176 if bund_select:
177 bundesland = select_bundesland(app)
178 landkreis = select_landkreis(app)
179 # cities = app.get_kommunen()
180
181 city = select_city(app, bund_select)
182 finished = False
183 house_number = ""
184 street = None
185 if bezirk_needed:
186 bezirk, finished = select_bezirk(app, bund_select)
187 if not finished:
188 street = select_street(app, bund_select)
189 if app.get_hrn_needed():
190 house_number = select_house_number(app, bund_select)
191
192 yaml = YAML["base"].format(
193 app_id=app_id,
194 city=city,
195 )
196 if bezirk_needed:
197 yaml += YAML["bezirk"].format(bezirk=bezirk)
198 if street:
199 yaml += YAML["street"].format(strasse=street)
200 if house_number:
201 yaml += YAML["hnr"].format(hnr=house_number)
202 if bundesland:
203 yaml += YAML["bundesland"].format(bundesland=bundesland)
204 if landkreis:
205 yaml += YAML["landkreis"].format(landkreis=landkreis)
206
207 print(yaml)
208
209
210 if __name__ == "__main__":
211 main()
212
[end of custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py
@@ -168,7 +168,7 @@
app_id = inquirer.prompt(questions)["app-id"]
app = AppAbfallplusDe.AppAbfallplusDe(app_id, "", "", "")
- bezirk_needed = "bezirk" in app.init_connection() and app.get_bezirke() != []
+ bezirk_needed = "bezirk" in app.init_connection()
cities = app.get_kommunen()
bund_select = cities == []
@@ -182,6 +182,7 @@
finished = False
house_number = ""
street = None
+ bezirk_needed = bezirk_needed and app.get_bezirke() != []
if bezirk_needed:
bezirk, finished = select_bezirk(app, bund_select)
if not finished:
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py\n@@ -168,7 +168,7 @@\n app_id = inquirer.prompt(questions)[\"app-id\"]\n \n app = AppAbfallplusDe.AppAbfallplusDe(app_id, \"\", \"\", \"\")\n- bezirk_needed = \"bezirk\" in app.init_connection() and app.get_bezirke() != []\n+ bezirk_needed = \"bezirk\" in app.init_connection()\n cities = app.get_kommunen()\n bund_select = cities == []\n \n@@ -182,6 +182,7 @@\n finished = False\n house_number = \"\"\n street = None\n+ bezirk_needed = bezirk_needed and app.get_bezirke() != []\n if bezirk_needed:\n bezirk, finished = select_bezirk(app, bund_select)\n if not finished:\n", "issue": "[Bug]: \"Ortsteil\" missing for AbfallPlus in Wei\u00dfenburg/Gunzenhausen\n### I Have A Problem With:\n\nA specific source\n\n### What's Your Problem\n\nFor my part of the village (aka \"Ortsteil\") it is not choosable. \r\nYou can check it here: https://www.landkreis-wug.de/abfall/abfuhrkalender/ , I have to select\r\nStadt/Gemeinde:Haundorf\r\nStra\u00dfe/Ortsteil: Obererlbach\r\nStra\u00dfe: Alle Stra\u00dfen\r\n\r\nI tried the app_abfallplus_de.py script, but I can't select my \"Ortsteil\", just the Maincommunity/-city \"Haundorf\".\r\nwaste_collection_schedule:\r\n sources:\r\n - name: app_abfallplus_de\r\n args:\r\n app_id: de.k4systems.abfallappwug\r\n city: Haundorf\r\n strasse: alle Stra\u00dfen\n\n### Source (if relevant)\n\napp_abfallplus_de\n\n### Logs\n\n_No response_\n\n### Relevant Configuration\n\n_No response_\n\n### Checklist Source Error\n\n- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [X] Checked that the website of your service provider is still working\n- [X] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "#!/usr/bin/env python3\nimport site\nfrom pathlib import Path\nfrom typing import Tuple\n\nimport inquirer\n\npackage_dir = Path(__file__).resolve().parents[2]\nsite.addsitedir(str(package_dir))\nimport waste_collection_schedule.service.AppAbfallplusDe as AppAbfallplusDe # noqa: E402\n\nYAML = {\n \"base\": \"\"\"\nwaste_collection_schedule:\n sources:\n - name: app_abfallplus_de\n args:\n app_id: {app_id}\n city: {city}\"\"\",\n \"bezirk\": \"\"\"\n bezirk: {bezirk}\"\"\",\n \"street\": \"\"\"\n strasse: {strasse}\"\"\",\n \"hnr\": \"\"\"\n hnr: {hnr}\"\"\",\n \"bundesland\": \"\"\"\n bundesland: {bundesland}\"\"\",\n \"landkreis\": \"\"\"\n landkreis: {landkreis}\"\"\",\n}\n\n\ndef 
select_bundesland(app: AppAbfallplusDe.AppAbfallplusDe):\n bundeslaender = app.get_bundeslaender()\n questions = [\n inquirer.List(\n \"bundesland\",\n choices=sorted([(s[\"name\"], s[\"name\"]) for s in bundeslaender]),\n message=\"Select your Bundesland\",\n )\n ]\n bundesland = inquirer.prompt(questions)[\"bundesland\"]\n app.select_bundesland(bundesland)\n return bundesland\n\n\ndef select_landkreis(app: AppAbfallplusDe.AppAbfallplusDe):\n landkreise = app.get_landkreise()\n questions = [\n inquirer.List(\n \"landkreis\",\n choices=sorted(\n [(s[\"name\"], s[\"name\"]) for s in landkreise] + [(\"BACK\", \"BACK\")]\n ),\n message=\"Select your Landkreis\",\n )\n ]\n landkreis = inquirer.prompt(questions)[\"landkreis\"]\n if landkreis == \"BACK\":\n app.clear(0)\n select_bundesland(app)\n return select_landkreis(app)\n app.select_landkreis(landkreis)\n return landkreis\n\n\ndef select_city(app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool):\n cities = app.get_kommunen()\n questions = [\n inquirer.List(\n \"city\",\n choices=sorted([(s[\"name\"], s[\"name\"]) for s in cities])\n + ([(\"BACK\", \"BACK\")] if bund_select else []),\n message=\"Select your Kommune\",\n )\n ]\n city = inquirer.prompt(questions)[\"city\"]\n if city == \"BACK\":\n app.clear(1)\n select_landkreis(app)\n return select_city(app, bund_select)\n\n app.select_kommune(city)\n return city\n\n\ndef select_bezirk(\n app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool\n) -> Tuple[str, bool]:\n bezirke = app.get_bezirke()\n questions = [\n inquirer.List(\n \"bezirk\",\n choices=sorted([(s[\"name\"], s[\"name\"]) for s in bezirke])\n + [(\"BACK\", \"BACK\")],\n message=\"Select your Bezirk\",\n )\n ]\n bezirk = inquirer.prompt(questions)[\"bezirk\"]\n if bezirk == \"BACK\":\n app.clear(2)\n select_city(app, bund_select)\n return select_bezirk(app, bund_select)\n\n return bezirk, app.select_bezirk(bezirk)\n\n\ndef select_street(app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool):\n street = None\n street_search = \"\"\n while street is None:\n questions = [\n inquirer.Text(\n \"street_search\",\n message=\"Search your street you will be given some options to choose from\",\n default=street_search,\n )\n ]\n streets = app.get_streets(inquirer.prompt(questions)[\"street_search\"])\n questions = [\n inquirer.List(\n \"street\",\n choices=sorted([(s[\"name\"], s[\"name\"]) for s in streets])\n + [(\"BACK\", \"BACK\")],\n message=\"Select your Street\",\n )\n ]\n street = inquirer.prompt(questions)[\"street\"]\n if street == \"BACK\":\n street = None\n\n if street == \"BACK\":\n app.clear(2)\n select_city(app, bund_select)\n return select_street(app, bund_select)\n\n app.select_street(street)\n return street\n\n\ndef select_house_number(app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool):\n house_numbers = app.get_hnrs()\n questions = [\n inquirer.List(\n \"house_number\",\n choices=[(s[\"name\"], s[\"name\"]) for s in house_numbers]\n + [(\"BACK\", \"BACK\")],\n message=\"Select your House Number\",\n )\n ]\n house_number = inquirer.prompt(questions)[\"house_number\"]\n if house_number == \"BACK\":\n app.clear(3)\n select_street(app, bund_select)\n return select_house_number(app, bund_select)\n app.select_hnr(house_number)\n return house_number\n\n\ndef main():\n questions = [\n inquirer.List(\n \"app-id\",\n choices=[(s, s) for s in sorted(AppAbfallplusDe.SUPPORTED_APPS)],\n message=\"Select your App\",\n )\n ]\n app_id = inquirer.prompt(questions)[\"app-id\"]\n\n app = AppAbfallplusDe.AppAbfallplusDe(app_id, 
\"\", \"\", \"\")\n bezirk_needed = \"bezirk\" in app.init_connection() and app.get_bezirke() != []\n cities = app.get_kommunen()\n bund_select = cities == []\n\n bundesland = landkreis = None\n if bund_select:\n bundesland = select_bundesland(app)\n landkreis = select_landkreis(app)\n # cities = app.get_kommunen()\n\n city = select_city(app, bund_select)\n finished = False\n house_number = \"\"\n street = None\n if bezirk_needed:\n bezirk, finished = select_bezirk(app, bund_select)\n if not finished:\n street = select_street(app, bund_select)\n if app.get_hrn_needed():\n house_number = select_house_number(app, bund_select)\n\n yaml = YAML[\"base\"].format(\n app_id=app_id,\n city=city,\n )\n if bezirk_needed:\n yaml += YAML[\"bezirk\"].format(bezirk=bezirk)\n if street:\n yaml += YAML[\"street\"].format(strasse=street)\n if house_number:\n yaml += YAML[\"hnr\"].format(hnr=house_number)\n if bundesland:\n yaml += YAML[\"bundesland\"].format(bundesland=bundesland)\n if landkreis:\n yaml += YAML[\"landkreis\"].format(landkreis=landkreis)\n\n print(yaml)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py"}]} | 3,075 | 263 |
gh_patches_debug_38367 | rasdani/github-patches | git_diff | napari__napari-5997 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Setting active layer from a plugin on PublicOnlyProxy works partially, but not in GUI
## 🐛 Bug
We’re writing a [classifier plugin](https://github.com/fractal-napari-plugins-collection/napari-feature-classifier/tree/classifier_refactor) that allows users to annotate a label layer and train classifiers based on those annotations. We show user annotations by adding a label layer and coloring the objects based on the user's annotations.
When we add an annotation layer from our plugin, we don’t want to make it the selected layer (because we use layer selection to decide which label layer is currently being annotated).
Changing the active layer by setting `viewer.layers.selection.active` to the layer I want selected does not seem to work from a plugin (for layers that were not created by the plugin). I suspect it has something to do with the plugin not getting the actual viewer object but a napari.utils._proxies.PublicOnlyProxy, so the active state is only set on that proxy.
It’s confusing though, because checking the active layer (via viewer.layers.selection.active) returns what I’d expect, but it’s not shown in the GUI.
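
A minimal sketch of the kind of workaround this suggests from plugin code: `PublicOnlyProxy` is built on `wrapt.ObjectProxy`, so the underlying object is reachable as `__wrapped__`. Whether unwrapping like this is an intended part of the plugin API is an assumption, and `_proxies` is a private module:

```python
# Hypothetical workaround sketch (not official plugin API): unwrap proxied
# layers before assigning them to the selection, so the assigned object is
# the viewer's own layer instance rather than a PublicOnlyProxy around it.
from napari.utils._proxies import PublicOnlyProxy


def set_active_layer(viewer, layer):
    if isinstance(layer, PublicOnlyProxy):
        layer = layer.__wrapped__  # the underlying napari.layers.Labels object
    viewer.layers.selection.active = layer
```

The idea is that the selection then holds the raw layer object, so equality checks in the GUI's data model can succeed.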
## To Reproduce
Here's some sample code to reproduce this behavior:
```python
from pathlib import Path
import imageio
import napari
import napari.layers
import napari.viewer
import numpy as np
import pandas as pd
from magicgui.widgets import Container, Label
from napari.utils.notifications import show_info
def main():
lbls = imageio.v2.imread(Path("sample_data/test_labels.tif"))
lbls2 = np.zeros_like(lbls)
lbls2[:, 3:, 2:] = lbls[:, :-3, :-2]
lbls2 = lbls2 * 20
labels = np.unique(lbls)[1:]
labels_2 = np.unique(lbls2)[1:]
viewer = napari.Viewer()
lbls_layer = viewer.add_labels(lbls)
lbls_layer2 = viewer.add_labels(lbls2)
# Add the widget directly via code:
label_selector_widget = LabelSelector(viewer) # Comment out to reproduce issue
viewer.window.add_dock_widget(label_selector_widget) # Comment out to reproduce issue
viewer.show(block=True)
class LabelSelector(Container):
def __init__(
self,
viewer: napari.viewer.Viewer,
):
self._viewer = viewer
self.label = Label(label='Test')
super().__init__(
widgets=[
self.label
]
)
self._last_selected_label_layer = self._viewer.layers[1]
annotation_layer = self._viewer.add_labels(
self._last_selected_label_layer.data,
scale=self._last_selected_label_layer.scale,
name="Annotations",
)
self._viewer.layers.selection.active = self._viewer.layers[0]
print(f'Selected Layer at the end: {self._viewer.layers.selection.active}')
print(f"Type of annotation layer: {type(annotation_layer)}")
print(f"Type of first label layer: {type(self._viewer.layers[0])}")
if __name__ == "__main__":
main()
```
If I run it as above (i.e. adding the dockwidget from Python), I get the expected behavior and the correct layer (the first one) is selected after the new Annotations layer was added:
<img width="1198" alt="Screenshot 2023-04-25 at 09 16 44" src="https://user-images.githubusercontent.com/18033446/234206865-8bd2fe29-a7c9-4a0b-aa73-659c51acdcbe.png">
The printing output is:
```
Selected Layer at the end: lbls
Type of annotation layer: <class 'napari.layers.labels.labels.Labels'>
Type of first label layer: <class 'napari.layers.labels.labels.Labels'>
```
If the two lines that are adding the widget manually are commented out:
```python
# label_selector_widget = LabelSelector(viewer)
# viewer.window.add_dock_widget(label_selector_widget)
```
and instead the dockwidget is added as a plugin, which is started from the GUI, we get this behavior:

According to viewer.layers.selection.active, the first layer was selected. But the GUI does not show any layer selected.
The GUI still reacts to changes in the layer controls (e.g. changing opacity) and applies them to the layer that is selected behind the scenes. The user just isn't shown which layer they apply to.
This is the print output:
```
Selected Layer at the end: lbls
Type of annotation layer: <class 'napari.layers.labels.labels.Labels'>
Type of first label layer: <class 'napari.utils._proxies.PublicOnlyProxy'>
```
## Expected behavior
I would expect the plugin flow to behave the same as when I manually add a widget: The GUI shows the selected layer.
Especially given that some parts of the GUI react to the layer selection (e.g. the layer controls), the actively selected layer should be shown.
## Environment
- Please copy and paste the information at napari info option in help menubar here:
```
napari: 0.5.0a2.dev42+g9e911040
Platform: macOS-13.2.1-arm64-arm-64bit
System: MacOS 13.2.1
Python: 3.10.9 | packaged by conda-forge | (main, Feb 2 2023, 20:26:08) [Clang 14.0.6 ]
Qt: 5.15.6
PyQt5: 5.15.7
NumPy: 1.24.2
SciPy: 1.10.1
Dask: 2023.3.0
VisPy: 0.12.1
magicgui: 0.7.2
superqt: unknown
in-n-out: 0.1.7
app-model: 0.1.2
npe2: 0.6.2
OpenGL:
- GL version: 2.1 Metal - 83
- MAX_TEXTURE_SIZE: 16384
Screens:
- screen 1: resolution 2560x1440, scale 2.0
- screen 2: resolution 1512x982, scale 2.0
Settings path:
- /Users/joel/Library/Application Support/napari/classifier-dev-napari-main_2751af53b3d3e49c82e2e47937e51f1f537130c2/settings.yaml
```
- Any other relevant information:
I also tested it in napari 0.4.17 and in more recent napari nightly builds (0.5.0a2.dev71+g66df74d5) and always get the same behavior
</issue>
<code>
[start of napari/utils/_proxies.py]
1 import os
2 import re
3 import sys
4 import warnings
5 from typing import Any, Callable, Generic, TypeVar, Union
6
7 import wrapt
8
9 from napari.utils import misc
10 from napari.utils.translations import trans
11
12 _T = TypeVar("_T")
13
14
15 class ReadOnlyWrapper(wrapt.ObjectProxy):
16 """
17 Disable item and attribute setting with the exception of ``__wrapped__``.
18 """
19
20 def __init__(self, wrapped, exceptions=()):
21 super().__init__(wrapped)
22 self._self_exceptions = exceptions
23
24 def __setattr__(self, name, val):
25 if (
26 name not in ('__wrapped__', '_self_exceptions')
27 and name not in self._self_exceptions
28 ):
29 raise TypeError(
30 trans._(
31 'cannot set attribute {name}',
32 deferred=True,
33 name=name,
34 )
35 )
36
37 super().__setattr__(name, val)
38
39 def __setitem__(self, name, val):
40 if name not in self._self_exceptions:
41 raise TypeError(
42 trans._('cannot set item {name}', deferred=True, name=name)
43 )
44 super().__setitem__(name, val)
45
46
47 _SUNDER = re.compile('^_[^_]')
48
49
50 class PublicOnlyProxy(wrapt.ObjectProxy, Generic[_T]):
51 """Proxy to prevent private attribute and item access, recursively."""
52
53 __wrapped__: _T
54
55 @staticmethod
56 def _is_private_attr(name: str) -> bool:
57 return name.startswith("_") and not (
58 name.startswith('__') and name.endswith('__')
59 )
60
61 @staticmethod
62 def _private_attr_warning(name: str, typ: str):
63 warnings.warn(
64 trans._(
65 "Private attribute access ('{typ}.{name}') in this context (e.g. inside a plugin widget or dock widget) is deprecated and will be unavailable in version 0.5.0",
66 deferred=True,
67 name=name,
68 typ=typ,
69 ),
70 category=FutureWarning,
71 stacklevel=3,
72 )
73
74 # This is code prepared for a moment where we want to block access to private attributes
75 # raise AttributeError(
76 # trans._(
77 # "Private attribute set/access ('{typ}.{name}') not allowed in this context.",
78 # deferred=True,
79 # name=name,
80 # typ=typ,
81 # )
82 # )
83
84 @staticmethod
85 def _is_called_from_napari():
86 """
87 Check if the getter or setter is called from inner napari.
88 """
89 if hasattr(sys, "_getframe"):
90 frame = sys._getframe(2)
91 return frame.f_code.co_filename.startswith(misc.ROOT_DIR)
92 return False
93
94 def __getattr__(self, name: str):
95 if self._is_private_attr(name):
96 # allow napari to access private attributes and get an non-proxy
97 if self._is_called_from_napari():
98 return super().__getattr__(name)
99
100 typ = type(self.__wrapped__).__name__
101
102 self._private_attr_warning(name, typ)
103
104 return self.create(super().__getattr__(name))
105
106 def __setattr__(self, name: str, value: Any):
107 if (
108 os.environ.get("NAPARI_ENSURE_PLUGIN_MAIN_THREAD", "0")
109 not in ("0", "False")
110 ) and not in_main_thread():
111 raise RuntimeError(
112 "Setting attributes on a napari object is only allowed from the main Qt thread."
113 )
114
115 if self._is_private_attr(name):
116 if self._is_called_from_napari():
117 return super().__setattr__(name, value)
118
119 typ = type(self.__wrapped__).__name__
120 self._private_attr_warning(name, typ)
121
122 setattr(self.__wrapped__, name, value)
123 return None
124
125 def __getitem__(self, key):
126 return self.create(super().__getitem__(key))
127
128 def __repr__(self):
129 return repr(self.__wrapped__)
130
131 def __dir__(self):
132 return [x for x in dir(self.__wrapped__) if not _SUNDER.match(x)]
133
134 @classmethod
135 def create(cls, obj: Any) -> Union['PublicOnlyProxy', Any]:
136 # restrict the scope of this proxy to napari objects
137 mod = getattr(type(obj), '__module__', None) or ''
138 if not mod.startswith('napari'):
139 return obj
140 if isinstance(obj, PublicOnlyProxy):
141 return obj # don't double-wrap
142 if callable(obj):
143 return CallablePublicOnlyProxy(obj)
144 return PublicOnlyProxy(obj)
145
146
147 class CallablePublicOnlyProxy(PublicOnlyProxy[Callable]):
148 def __call__(self, *args, **kwargs):
149 return self.__wrapped__(*args, **kwargs)
150
151
152 def in_main_thread_py() -> bool:
153 """
154 Check if caller is in main python thread.
155
156 Returns
157 -------
158 thread_flag : bool
159 True if we are in the main thread, False otherwise.
160 """
161 import threading
162
163 return threading.current_thread() == threading.main_thread()
164
165
166 def _in_main_thread() -> bool:
167 """
168 General implementation of checking if we are in a proper thread.
169 If Qt is available and Application is created then assign :py:func:`in_qt_main_thread` to `in_main_thread`.
170 If Qt liba are not available then assign :py:func:`in_main_thread_py` to in_main_thread.
171 IF Qt libs are available but there is no Application ti wil emmit warning and return result of in_main_thread_py.
172
173 Returns
174 -------
175 thread_flag : bool
176 True if we are in the main thread, False otherwise.
177 """
178
179 global in_main_thread
180 try:
181 from napari._qt.utils import in_qt_main_thread
182
183 res = in_qt_main_thread()
184 in_main_thread = in_qt_main_thread
185 except ImportError:
186 in_main_thread = in_main_thread_py
187 return in_main_thread_py()
188 except AttributeError:
189 warnings.warn(
190 "Qt libs are available but no QtApplication instance is created"
191 )
192 return in_main_thread_py()
193 return res
194
195
196 in_main_thread = _in_main_thread
197
[end of napari/utils/_proxies.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/napari/utils/_proxies.py b/napari/utils/_proxies.py
--- a/napari/utils/_proxies.py
+++ b/napari/utils/_proxies.py
@@ -119,6 +119,24 @@
typ = type(self.__wrapped__).__name__
self._private_attr_warning(name, typ)
+ if isinstance(value, PublicOnlyProxy):
+ # if we want to set an attribute on a PublicOnlyProxy *and* the
+ # value that we want to set is itself a PublicOnlyProxy, we unwrap
+ # the value. This has two benefits:
+ #
+ # 1. Checking the attribute later will incur a significant
+ # performance cost, because _is_called_from_napari() will be
+ # checked on each attribute access and it involves inspecting the
+ # calling frame, which is expensive.
+ # 2. Certain equality checks fail when objects are
+ # PublicOnlyProxies. Notably, equality checks fail when such
+ # objects are included in a Qt data model. For example, plugins can
+ # grab a layer from the viewer; this layer will be wrapped by the
+ # PublicOnlyProxy, and then using this object to set the current
+ # layer selection will not propagate the selection to the Viewer.
+ # See https://github.com/napari/napari/issues/5767
+ value = value.__wrapped__
+
setattr(self.__wrapped__, name, value)
return None
@@ -134,7 +152,15 @@
@classmethod
def create(cls, obj: Any) -> Union['PublicOnlyProxy', Any]:
# restrict the scope of this proxy to napari objects
- mod = getattr(type(obj), '__module__', None) or ''
+ if type(obj).__name__ == 'method':
+ # If the given object is a method, we check the module *of the
+ # object to which that method is bound*. Otherwise, the module of a
+ # method is just builtins!
+ mod = getattr(type(obj.__self__), '__module__', None) or ''
+ else:
+ # Otherwise, the module is of an object just given by the
+ # __module__ attribute.
+ mod = getattr(type(obj), '__module__', None) or ''
if not mod.startswith('napari'):
return obj
if isinstance(obj, PublicOnlyProxy):
@@ -146,7 +172,20 @@
class CallablePublicOnlyProxy(PublicOnlyProxy[Callable]):
def __call__(self, *args, **kwargs):
- return self.__wrapped__(*args, **kwargs)
+ # if a PublicOnlyProxy is callable, then when we call it we:
+ # - unwrap the arguments, to avoid performance issues detailed in
+ # PublicOnlyProxy.__setattr__,
+ # - call the unwrapped callable on the unwrapped arguments
+ # - wrap the result in a PublicOnlyProxy
+ args = [
+ arg.__wrapped__ if isinstance(arg, PublicOnlyProxy) else arg
+ for arg in args
+ ]
+ kwargs = {
+ k: v.__wrapped__ if isinstance(v, PublicOnlyProxy) else v
+ for k, v in kwargs.items()
+ }
+ return self.create(self.__wrapped__(*args, **kwargs))
def in_main_thread_py() -> bool:
| {"golden_diff": "diff --git a/napari/utils/_proxies.py b/napari/utils/_proxies.py\n--- a/napari/utils/_proxies.py\n+++ b/napari/utils/_proxies.py\n@@ -119,6 +119,24 @@\n typ = type(self.__wrapped__).__name__\n self._private_attr_warning(name, typ)\n \n+ if isinstance(value, PublicOnlyProxy):\n+ # if we want to set an attribute on a PublicOnlyProxy *and* the\n+ # value that we want to set is itself a PublicOnlyProxy, we unwrap\n+ # the value. This has two benefits:\n+ #\n+ # 1. Checking the attribute later will incur a significant\n+ # performance cost, because _is_called_from_napari() will be\n+ # checked on each attribute access and it involves inspecting the\n+ # calling frame, which is expensive.\n+ # 2. Certain equality checks fail when objects are\n+ # PublicOnlyProxies. Notably, equality checks fail when such\n+ # objects are included in a Qt data model. For example, plugins can\n+ # grab a layer from the viewer; this layer will be wrapped by the\n+ # PublicOnlyProxy, and then using this object to set the current\n+ # layer selection will not propagate the selection to the Viewer.\n+ # See https://github.com/napari/napari/issues/5767\n+ value = value.__wrapped__\n+\n setattr(self.__wrapped__, name, value)\n return None\n \n@@ -134,7 +152,15 @@\n @classmethod\n def create(cls, obj: Any) -> Union['PublicOnlyProxy', Any]:\n # restrict the scope of this proxy to napari objects\n- mod = getattr(type(obj), '__module__', None) or ''\n+ if type(obj).__name__ == 'method':\n+ # If the given object is a method, we check the module *of the\n+ # object to which that method is bound*. Otherwise, the module of a\n+ # method is just builtins!\n+ mod = getattr(type(obj.__self__), '__module__', None) or ''\n+ else:\n+ # Otherwise, the module is of an object just given by the\n+ # __module__ attribute.\n+ mod = getattr(type(obj), '__module__', None) or ''\n if not mod.startswith('napari'):\n return obj\n if isinstance(obj, PublicOnlyProxy):\n@@ -146,7 +172,20 @@\n \n class CallablePublicOnlyProxy(PublicOnlyProxy[Callable]):\n def __call__(self, *args, **kwargs):\n- return self.__wrapped__(*args, **kwargs)\n+ # if a PublicOnlyProxy is callable, then when we call it we:\n+ # - unwrap the arguments, to avoid performance issues detailed in\n+ # PublicOnlyProxy.__setattr__,\n+ # - call the unwrapped callable on the unwrapped arguments\n+ # - wrap the result in a PublicOnlyProxy\n+ args = [\n+ arg.__wrapped__ if isinstance(arg, PublicOnlyProxy) else arg\n+ for arg in args\n+ ]\n+ kwargs = {\n+ k: v.__wrapped__ if isinstance(v, PublicOnlyProxy) else v\n+ for k, v in kwargs.items()\n+ }\n+ return self.create(self.__wrapped__(*args, **kwargs))\n \n \n def in_main_thread_py() -> bool:\n", "issue": "Setting active layer from a plugin on PublicOnlyProxy works partially, but not in GUI\n## \ud83d\udc1b Bug\r\n\r\nWe\u2019re writing a [classifier plugin](https://github.com/fractal-napari-plugins-collection/napari-feature-classifier/tree/classifier_refactor) that allows user to annotate a label layer and train classifiers based on those annotations. 
The way we show user annotations is by adding a label layer and coloring the objects based on user annotation.\r\n\r\nWhen we add an annotation layer from our plugin, we don\u2019t want to make it the selected layer (because we use layer selection to decide which label layer is currently being annotated).\r\n\r\nChanging the active layer by setting `viewer.layers.selection.active` to the layer I want selected does not seem to work from a plugin (for layers that where not created by the plugin). I suspect it has something to do with the plugin not getting the actual viewer object, but a napari.utils._proxies.PublicOnlyProxy \u21d2 only setting active state on that proxy.\r\n\r\nIt\u2019s confusing though, because checking the active layer (via viewer.layers.selection.active) returns what I\u2019d expect, but it\u2019s not shown in the GUI.\r\n\r\n## To Reproduce\r\n\r\nHere's some sample code to reproduce this behavior:\r\n```python\r\nfrom pathlib import Path\r\n\r\nimport imageio\r\nimport napari\r\nimport napari.layers\r\nimport napari.viewer\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom magicgui.widgets import Container, Label\r\nfrom napari.utils.notifications import show_info\r\n\r\ndef main():\r\n lbls = imageio.v2.imread(Path(\"sample_data/test_labels.tif\"))\r\n lbls2 = np.zeros_like(lbls)\r\n lbls2[:, 3:, 2:] = lbls[:, :-3, :-2]\r\n lbls2 = lbls2 * 20\r\n\r\n labels = np.unique(lbls)[1:]\r\n labels_2 = np.unique(lbls2)[1:]\r\n\r\n viewer = napari.Viewer()\r\n lbls_layer = viewer.add_labels(lbls)\r\n lbls_layer2 = viewer.add_labels(lbls2)\r\n\r\n # Add the widget directly via code:\r\n label_selector_widget = LabelSelector(viewer) # Comment out to reproduce issue\r\n viewer.window.add_dock_widget(label_selector_widget) # Comment out to reproduce issue\r\n\r\n viewer.show(block=True)\r\n\r\n\r\nclass LabelSelector(Container):\r\n def __init__(\r\n self,\r\n viewer: napari.viewer.Viewer,\r\n ):\r\n self._viewer = viewer\r\n self.label = Label(label='Test')\r\n super().__init__(\r\n widgets=[\r\n self.label\r\n ]\r\n )\r\n self._last_selected_label_layer = self._viewer.layers[1]\r\n annotation_layer = self._viewer.add_labels(\r\n self._last_selected_label_layer.data,\r\n scale=self._last_selected_label_layer.scale,\r\n name=\"Annotations\",\r\n )\r\n self._viewer.layers.selection.active = self._viewer.layers[0]\r\n print(f'Selected Layer at the end: {self._viewer.layers.selection.active}')\r\n print(f\"Type of annotation layer: {type(annotation_layer)}\")\r\n print(f\"Type of first label layer: {type(self._viewer.layers[0])}\")\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nIf I run it as above (i.e. 
adding the dockwidget from Python), I get the expected behavior and the correct layer (the first one) is selected after the new Annotations layer was added:\r\n<img width=\"1198\" alt=\"Screenshot 2023-04-25 at 09 16 44\" src=\"https://user-images.githubusercontent.com/18033446/234206865-8bd2fe29-a7c9-4a0b-aa73-659c51acdcbe.png\">\r\n\r\nThe printing output is:\r\n```\r\nSelected Layer at the end: lbls\r\nType of annotation layer: <class 'napari.layers.labels.labels.Labels'>\r\nType of first label layer: <class 'napari.layers.labels.labels.Labels'>\r\n```\r\n\r\nIf the two lines that are adding the widget manually are commented out:\r\n```python\r\n # label_selector_widget = LabelSelector(viewer)\r\n # viewer.window.add_dock_widget(label_selector_widget)\r\n```\r\nand instead the dockwidget is added as a plugin, which is started from the GUI, we get this behavior:\r\n\r\nAccording to viewer.layers.selection.active, the first layer was selected. But the GUI does not show any layer selected.\r\nThe GUI still reacts to changes in the layer controls (i.e. changing opacity) and applies it to the layer that is selected behind the scenes. The user just isn't shown that it applies to that layer.\r\n\r\nThis is the print output:\r\n```\r\nSelected Layer at the end: lbls\r\nType of annotation layer: <class 'napari.layers.labels.labels.Labels'>\r\nType of first label layer: <class 'napari.utils._proxies.PublicOnlyProxy'>\r\n```\r\n\r\n## Expected behavior\r\n\r\nI would expect the plugin flow to behave the same as when I manually add a widget: The GUI shows the selected layer.\r\nEspecially given that some parts of the GUI react to the layer selection (e.g. the layer controls), the actively selected layer should be shown.\r\n\r\n## Environment\r\n\r\n - Please copy and paste the information at napari info option in help menubar here:\r\n```\r\nnapari: 0.5.0a2.dev42+g9e911040\r\nPlatform: macOS-13.2.1-arm64-arm-64bit\r\nSystem: MacOS 13.2.1\r\nPython: 3.10.9 | packaged by conda-forge | (main, Feb 2 2023, 20:26:08) [Clang 14.0.6 ]\r\nQt: 5.15.6\r\nPyQt5: 5.15.7\r\nNumPy: 1.24.2\r\nSciPy: 1.10.1\r\nDask: 2023.3.0\r\nVisPy: 0.12.1\r\nmagicgui: 0.7.2\r\nsuperqt: unknown\r\nin-n-out: 0.1.7\r\napp-model: 0.1.2\r\nnpe2: 0.6.2\r\n\r\nOpenGL:\r\n- GL version: 2.1 Metal - 83\r\n- MAX_TEXTURE_SIZE: 16384\r\n\r\nScreens:\r\n- screen 1: resolution 2560x1440, scale 2.0\r\n- screen 2: resolution 1512x982, scale 2.0\r\n\r\nSettings path:\r\n- /Users/joel/Library/Application Support/napari/classifier-dev-napari-main_2751af53b3d3e49c82e2e47937e51f1f537130c2/settings.yaml\r\n```\r\n\r\n - Any other relevant information:\r\n I also tested it in napari 0.4.17 and in more recent napari nightly builds (0.5.0a2.dev71+g66df74d5) and always get the same behavior\r\n\r\n\n", "before_files": [{"content": "import os\nimport re\nimport sys\nimport warnings\nfrom typing import Any, Callable, Generic, TypeVar, Union\n\nimport wrapt\n\nfrom napari.utils import misc\nfrom napari.utils.translations import trans\n\n_T = TypeVar(\"_T\")\n\n\nclass ReadOnlyWrapper(wrapt.ObjectProxy):\n \"\"\"\n Disable item and attribute setting with the exception of ``__wrapped__``.\n \"\"\"\n\n def __init__(self, wrapped, exceptions=()):\n super().__init__(wrapped)\n self._self_exceptions = exceptions\n\n def __setattr__(self, name, val):\n if (\n name not in ('__wrapped__', '_self_exceptions')\n and name not in self._self_exceptions\n ):\n raise TypeError(\n trans._(\n 'cannot set attribute {name}',\n deferred=True,\n name=name,\n )\n )\n\n 
super().__setattr__(name, val)\n\n def __setitem__(self, name, val):\n if name not in self._self_exceptions:\n raise TypeError(\n trans._('cannot set item {name}', deferred=True, name=name)\n )\n super().__setitem__(name, val)\n\n\n_SUNDER = re.compile('^_[^_]')\n\n\nclass PublicOnlyProxy(wrapt.ObjectProxy, Generic[_T]):\n \"\"\"Proxy to prevent private attribute and item access, recursively.\"\"\"\n\n __wrapped__: _T\n\n @staticmethod\n def _is_private_attr(name: str) -> bool:\n return name.startswith(\"_\") and not (\n name.startswith('__') and name.endswith('__')\n )\n\n @staticmethod\n def _private_attr_warning(name: str, typ: str):\n warnings.warn(\n trans._(\n \"Private attribute access ('{typ}.{name}') in this context (e.g. inside a plugin widget or dock widget) is deprecated and will be unavailable in version 0.5.0\",\n deferred=True,\n name=name,\n typ=typ,\n ),\n category=FutureWarning,\n stacklevel=3,\n )\n\n # This is code prepared for a moment where we want to block access to private attributes\n # raise AttributeError(\n # trans._(\n # \"Private attribute set/access ('{typ}.{name}') not allowed in this context.\",\n # deferred=True,\n # name=name,\n # typ=typ,\n # )\n # )\n\n @staticmethod\n def _is_called_from_napari():\n \"\"\"\n Check if the getter or setter is called from inner napari.\n \"\"\"\n if hasattr(sys, \"_getframe\"):\n frame = sys._getframe(2)\n return frame.f_code.co_filename.startswith(misc.ROOT_DIR)\n return False\n\n def __getattr__(self, name: str):\n if self._is_private_attr(name):\n # allow napari to access private attributes and get an non-proxy\n if self._is_called_from_napari():\n return super().__getattr__(name)\n\n typ = type(self.__wrapped__).__name__\n\n self._private_attr_warning(name, typ)\n\n return self.create(super().__getattr__(name))\n\n def __setattr__(self, name: str, value: Any):\n if (\n os.environ.get(\"NAPARI_ENSURE_PLUGIN_MAIN_THREAD\", \"0\")\n not in (\"0\", \"False\")\n ) and not in_main_thread():\n raise RuntimeError(\n \"Setting attributes on a napari object is only allowed from the main Qt thread.\"\n )\n\n if self._is_private_attr(name):\n if self._is_called_from_napari():\n return super().__setattr__(name, value)\n\n typ = type(self.__wrapped__).__name__\n self._private_attr_warning(name, typ)\n\n setattr(self.__wrapped__, name, value)\n return None\n\n def __getitem__(self, key):\n return self.create(super().__getitem__(key))\n\n def __repr__(self):\n return repr(self.__wrapped__)\n\n def __dir__(self):\n return [x for x in dir(self.__wrapped__) if not _SUNDER.match(x)]\n\n @classmethod\n def create(cls, obj: Any) -> Union['PublicOnlyProxy', Any]:\n # restrict the scope of this proxy to napari objects\n mod = getattr(type(obj), '__module__', None) or ''\n if not mod.startswith('napari'):\n return obj\n if isinstance(obj, PublicOnlyProxy):\n return obj # don't double-wrap\n if callable(obj):\n return CallablePublicOnlyProxy(obj)\n return PublicOnlyProxy(obj)\n\n\nclass CallablePublicOnlyProxy(PublicOnlyProxy[Callable]):\n def __call__(self, *args, **kwargs):\n return self.__wrapped__(*args, **kwargs)\n\n\ndef in_main_thread_py() -> bool:\n \"\"\"\n Check if caller is in main python thread.\n\n Returns\n -------\n thread_flag : bool\n True if we are in the main thread, False otherwise.\n \"\"\"\n import threading\n\n return threading.current_thread() == threading.main_thread()\n\n\ndef _in_main_thread() -> bool:\n \"\"\"\n General implementation of checking if we are in a proper thread.\n If Qt is available and Application is 
created then assign :py:func:`in_qt_main_thread` to `in_main_thread`.\n If Qt liba are not available then assign :py:func:`in_main_thread_py` to in_main_thread.\n IF Qt libs are available but there is no Application ti wil emmit warning and return result of in_main_thread_py.\n\n Returns\n -------\n thread_flag : bool\n True if we are in the main thread, False otherwise.\n \"\"\"\n\n global in_main_thread\n try:\n from napari._qt.utils import in_qt_main_thread\n\n res = in_qt_main_thread()\n in_main_thread = in_qt_main_thread\n except ImportError:\n in_main_thread = in_main_thread_py\n return in_main_thread_py()\n except AttributeError:\n warnings.warn(\n \"Qt libs are available but no QtApplication instance is created\"\n )\n return in_main_thread_py()\n return res\n\n\nin_main_thread = _in_main_thread\n", "path": "napari/utils/_proxies.py"}]} | 3,961 | 766 |
gh_patches_debug_47576 | rasdani/github-patches | git_diff | getpelican__pelican-905 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pelican-quickstart: error with accented characters
Hello,
I've got a problem with pelican-quickstart, when I put accented characters in answers.
Here is the output I got :
> Who will be the author of this web site? Guillaume LAMÉ
> Traceback (most recent call last):
> File "/home/lomig/StaticGen/Pelican/bin/pelican-quickstart", line 9, in <module>
> load_entry_point('pelican==3.2.1', 'console_scripts', 'pelican-quickstart')()
> File "/home/lomig/StaticGen/Pelican/local/lib/python2.7/site-packages/pelican/tools/pelican_quickstart.py", line 184, in main
> CONF['author'] = ask('Who will be the author of this web site?', answer=str_compat, default=args.author)
> File "/home/lomig/StaticGen/Pelican/local/lib/python2.7/site-packages/pelican/tools/pelican_quickstart.py", line 57, in wrapper
> return out.decode(sys.stdin.encoding)
> File "/home/lomig/StaticGen/Pelican/lib/python2.7/encodings/utf_8.py", line 16, in decode
> return codecs.utf_8_decode(input, errors, True)
> UnicodeEncodeError: 'ascii' codec can't encode character u'\xc9' in position 13: ordinal not in range(128)
Thanks.
</issue>
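A minimal sketch of the failure mode, assuming Python 2.7 semantics (the traceback above is from a Python 2.7 install); the variable name is illustrative, not taken from the project:

```python
# -*- coding: utf-8 -*-
# Python 2 only: raw_input() returns bytes, which _input_compat() decodes once.
answer = raw_input('> Who will be the author of this web site? ')
answer = answer.decode('utf-8')   # -> u'Guillaume LAM\xc9' (already unicode)

# The @decoding_strings wrapper then calls .decode(sys.stdin.encoding) on the
# *already decoded* value. In Python 2, unicode.decode() first encodes the
# string with the default 'ascii' codec, which cannot represent u'\xc9' --
# hence the UnicodeEncodeError raised inside encodings/utf_8.py above.
answer.decode('utf-8')            # raises UnicodeEncodeError
```

The golden diff for this entry (further down) removes the explicit `.decode('utf-8')` in `_input_compat()`, leaving the decorator as the single place where input is decoded.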
<code>
[start of pelican/tools/pelican_quickstart.py]
1 #!/usr/bin/env python
2
3 # -*- coding: utf-8 -*-
4 from __future__ import unicode_literals, print_function
5 import six
6
7 import os
8 import string
9 import argparse
10 import sys
11 import codecs
12
13 from pelican import __version__
14
15 _TEMPLATES_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)),
16 "templates")
17
18 CONF = {
19 'pelican': 'pelican',
20 'pelicanopts': '',
21 'basedir': os.curdir,
22 'ftp_host': 'localhost',
23 'ftp_user': 'anonymous',
24 'ftp_target_dir': '/',
25 'ssh_host': 'localhost',
26 'ssh_port': 22,
27 'ssh_user': 'root',
28 'ssh_target_dir': '/var/www',
29 's3_bucket': 'my_s3_bucket',
30 'dropbox_dir': '~/Dropbox/Public/',
31 'default_pagination': 10,
32 'siteurl': '',
33 'lang': 'en'
34 }
35
36 def _input_compat(prompt):
37 if six.PY3:
38 r = input(prompt)
39 else:
40 # FIXME: why use this with @decoding_strings?
41 r = raw_input(prompt).decode('utf-8')
42 return r
43
44 if six.PY3:
45 str_compat = str
46 else:
47 str_compat = unicode
48
49 def decoding_strings(f):
50 def wrapper(*args, **kwargs):
51 out = f(*args, **kwargs)
52 if isinstance(out, six.string_types) and not six.PY3:
53 # todo: make encoding configurable?
54 if six.PY3:
55 return out
56 else:
57 return out.decode(sys.stdin.encoding)
58 return out
59 return wrapper
60
61
62 def get_template(name, as_encoding='utf-8'):
63 template = os.path.join(_TEMPLATES_DIR, "{0}.in".format(name))
64
65 if not os.path.isfile(template):
66 raise RuntimeError("Cannot open {0}".format(template))
67
68 with codecs.open(template, 'r', as_encoding) as fd:
69 line = fd.readline()
70 while line:
71 yield line
72 line = fd.readline()
73 fd.close()
74
75
76 @decoding_strings
77 def ask(question, answer=str_compat, default=None, l=None):
78 if answer == str_compat:
79 r = ''
80 while True:
81 if default:
82 r = _input_compat('> {0} [{1}] '.format(question, default))
83 else:
84 r = _input_compat('> {0} '.format(question, default))
85
86 r = r.strip()
87
88 if len(r) <= 0:
89 if default:
90 r = default
91 break
92 else:
93 print('You must enter something')
94 else:
95 if l and len(r) != l:
96 print('You must enter a {0} letters long string'.format(l))
97 else:
98 break
99
100 return r
101
102 elif answer == bool:
103 r = None
104 while True:
105 if default is True:
106 r = _input_compat('> {0} (Y/n) '.format(question))
107 elif default is False:
108 r = _input_compat('> {0} (y/N) '.format(question))
109 else:
110 r = _input_compat('> {0} (y/n) '.format(question))
111
112 r = r.strip().lower()
113
114 if r in ('y', 'yes'):
115 r = True
116 break
117 elif r in ('n', 'no'):
118 r = False
119 break
120 elif not r:
121 r = default
122 break
123 else:
124 print("You must answer 'yes' or 'no'")
125 return r
126 elif answer == int:
127 r = None
128 while True:
129 if default:
130 r = _input_compat('> {0} [{1}] '.format(question, default))
131 else:
132 r = _input_compat('> {0} '.format(question))
133
134 r = r.strip()
135
136 if not r:
137 r = default
138 break
139
140 try:
141 r = int(r)
142 break
143 except:
144 print('You must enter an integer')
145 return r
146 else:
147 raise NotImplemented('Argument `answer` must be str_compat, bool, or integer')
148
149
150 def main():
151 parser = argparse.ArgumentParser(
152 description="A kickstarter for Pelican",
153 formatter_class=argparse.ArgumentDefaultsHelpFormatter)
154 parser.add_argument('-p', '--path', default=os.curdir,
155 help="The path to generate the blog into")
156 parser.add_argument('-t', '--title', metavar="title",
157 help='Set the title of the website')
158 parser.add_argument('-a', '--author', metavar="author",
159 help='Set the author name of the website')
160 parser.add_argument('-l', '--lang', metavar="lang",
161 help='Set the default web site language')
162
163 args = parser.parse_args()
164
165 print('''Welcome to pelican-quickstart v{v}.
166
167 This script will help you create a new Pelican-based website.
168
169 Please answer the following questions so this script can generate the files
170 needed by Pelican.
171
172 '''.format(v=__version__))
173
174 project = os.path.join(
175 os.environ.get('VIRTUAL_ENV', os.curdir), '.project')
176 if os.path.isfile(project):
177 CONF['basedir'] = open(project, 'r').read().rstrip("\n")
178 print('Using project associated with current virtual environment.'
179 'Will save to:\n%s\n' % CONF['basedir'])
180 else:
181 CONF['basedir'] = os.path.abspath(ask('Where do you want to create your new web site?', answer=str_compat, default=args.path))
182
183 CONF['sitename'] = ask('What will be the title of this web site?', answer=str_compat, default=args.title)
184 CONF['author'] = ask('Who will be the author of this web site?', answer=str_compat, default=args.author)
185 CONF['lang'] = ask('What will be the default language of this web site?', str_compat, args.lang or CONF['lang'], 2)
186
187 if ask('Do you want to specify a URL prefix? e.g., http://example.com ', answer=bool, default=True):
188 CONF['siteurl'] = ask('What is your URL prefix? (see above example; no trailing slash)', str_compat, CONF['siteurl'])
189
190 CONF['with_pagination'] = ask('Do you want to enable article pagination?', bool, bool(CONF['default_pagination']))
191
192 if CONF['with_pagination']:
193 CONF['default_pagination'] = ask('How many articles per page do you want?', int, CONF['default_pagination'])
194 else:
195 CONF['default_pagination'] = False
196
197 mkfile = ask('Do you want to generate a Makefile to easily manage your website?', bool, True)
198 develop = ask('Do you want an auto-reload & simpleHTTP script to assist with theme and site development?', bool, True)
199
200 if mkfile:
201 if ask('Do you want to upload your website using FTP?', answer=bool, default=False):
202 CONF['ftp_host'] = ask('What is the hostname of your FTP server?', str_compat, CONF['ftp_host'])
203 CONF['ftp_user'] = ask('What is your username on that server?', str_compat, CONF['ftp_user'])
204 CONF['ftp_target_dir'] = ask('Where do you want to put your web site on that server?', str_compat, CONF['ftp_target_dir'])
205 if ask('Do you want to upload your website using SSH?', answer=bool, default=False):
206 CONF['ssh_host'] = ask('What is the hostname of your SSH server?', str_compat, CONF['ssh_host'])
207 CONF['ssh_port'] = ask('What is the port of your SSH server?', int, CONF['ssh_port'])
208 CONF['ssh_user'] = ask('What is your username on that server?', str_compat, CONF['ssh_user'])
209 CONF['ssh_target_dir'] = ask('Where do you want to put your web site on that server?', str_compat, CONF['ssh_target_dir'])
210 if ask('Do you want to upload your website using Dropbox?', answer=bool, default=False):
211 CONF['dropbox_dir'] = ask('Where is your Dropbox directory?', str_compat, CONF['dropbox_dir'])
212 if ask('Do you want to upload your website using S3?', answer=bool, default=False):
213 CONF['s3_bucket'] = ask('What is the name of your S3 bucket?', str_compat, CONF['s3_bucket'])
214
215 try:
216 os.makedirs(os.path.join(CONF['basedir'], 'content'))
217 except OSError as e:
218 print('Error: {0}'.format(e))
219
220 try:
221 os.makedirs(os.path.join(CONF['basedir'], 'output'))
222 except OSError as e:
223 print('Error: {0}'.format(e))
224
225 try:
226 with codecs.open(os.path.join(CONF['basedir'], 'pelicanconf.py'), 'w', 'utf-8') as fd:
227 conf_python = dict()
228 for key, value in CONF.items():
229 conf_python[key] = repr(value)
230
231 for line in get_template('pelicanconf.py'):
232 template = string.Template(line)
233 fd.write(template.safe_substitute(conf_python))
234 fd.close()
235 except OSError as e:
236 print('Error: {0}'.format(e))
237
238 try:
239 with codecs.open(os.path.join(CONF['basedir'], 'publishconf.py'), 'w', 'utf-8') as fd:
240 for line in get_template('publishconf.py'):
241 template = string.Template(line)
242 fd.write(template.safe_substitute(CONF))
243 fd.close()
244 except OSError as e:
245 print('Error: {0}'.format(e))
246
247 if mkfile:
248 try:
249 with codecs.open(os.path.join(CONF['basedir'], 'Makefile'), 'w', 'utf-8') as fd:
250 mkfile_template_name = 'Makefile'
251 py_v = 'PY=python'
252 if six.PY3:
253 py_v = 'PY=python3'
254 template = string.Template(py_v)
255 fd.write(template.safe_substitute(CONF))
256 fd.write('\n')
257 for line in get_template(mkfile_template_name):
258 template = string.Template(line)
259 fd.write(template.safe_substitute(CONF))
260 fd.close()
261 except OSError as e:
262 print('Error: {0}'.format(e))
263
264 if develop:
265 conf_shell = dict()
266 for key, value in CONF.items():
267 if isinstance(value, six.string_types) and ' ' in value:
268 value = '"' + value.replace('"', '\\"') + '"'
269 conf_shell[key] = value
270 try:
271 with codecs.open(os.path.join(CONF['basedir'], 'develop_server.sh'), 'w', 'utf-8') as fd:
272 lines = list(get_template('develop_server.sh'))
273 py_v = 'PY=python\n'
274 if six.PY3:
275 py_v = 'PY=python3\n'
276 lines = lines[:4] + [py_v] + lines[4:]
277 for line in lines:
278 template = string.Template(line)
279 fd.write(template.safe_substitute(conf_shell))
280 fd.close()
281 os.chmod((os.path.join(CONF['basedir'], 'develop_server.sh')), 493) # mode 0o755
282 except OSError as e:
283 print('Error: {0}'.format(e))
284
285 print('Done. Your new project is available at %s' % CONF['basedir'])
286
[end of pelican/tools/pelican_quickstart.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pelican/tools/pelican_quickstart.py b/pelican/tools/pelican_quickstart.py
--- a/pelican/tools/pelican_quickstart.py
+++ b/pelican/tools/pelican_quickstart.py
@@ -37,8 +37,7 @@
if six.PY3:
r = input(prompt)
else:
- # FIXME: why use this with @decoding_strings?
- r = raw_input(prompt).decode('utf-8')
+ r = raw_input(prompt)
return r
if six.PY3:
| {"golden_diff": "diff --git a/pelican/tools/pelican_quickstart.py b/pelican/tools/pelican_quickstart.py\n--- a/pelican/tools/pelican_quickstart.py\n+++ b/pelican/tools/pelican_quickstart.py\n@@ -37,8 +37,7 @@\n if six.PY3:\n r = input(prompt)\n else:\n- # FIXME: why use this with @decoding_strings?\n- r = raw_input(prompt).decode('utf-8')\n+ r = raw_input(prompt)\n return r\n \n if six.PY3:\n", "issue": "pelican-quickstart: error with accented characters\nHello,\n\nI've got a problem with pelican-quickstart, when I put accented characters in answers.\n\nHere is the output I got : \n\n> Who will be the author of this web site? Guillaume LAM\u00c9\n> Traceback (most recent call last):\n> File \"/home/lomig/StaticGen/Pelican/bin/pelican-quickstart\", line 9, in <module>\n> load_entry_point('pelican==3.2.1', 'console_scripts', 'pelican-quickstart')()\n> File \"/home/lomig/StaticGen/Pelican/local/lib/python2.7/site-packages/pelican/tools/pelican_quickstart.py\", line 184, in main\n> CONF['author'] = ask('Who will be the author of this web site?', answer=str_compat, default=args.author)\n> File \"/home/lomig/StaticGen/Pelican/local/lib/python2.7/site-packages/pelican/tools/pelican_quickstart.py\", line 57, in wrapper\n> return out.decode(sys.stdin.encoding)\n> File \"/home/lomig/StaticGen/Pelican/lib/python2.7/encodings/utf_8.py\", line 16, in decode\n> return codecs.utf_8_decode(input, errors, True)\n> UnicodeEncodeError: 'ascii' codec can't encode character u'\\xc9' in position 13: ordinal not in range(128)\n\nThanks.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals, print_function\nimport six\n\nimport os\nimport string\nimport argparse\nimport sys\nimport codecs\n\nfrom pelican import __version__\n\n_TEMPLATES_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)),\n \"templates\")\n\nCONF = {\n 'pelican': 'pelican',\n 'pelicanopts': '',\n 'basedir': os.curdir,\n 'ftp_host': 'localhost',\n 'ftp_user': 'anonymous',\n 'ftp_target_dir': '/',\n 'ssh_host': 'localhost',\n 'ssh_port': 22,\n 'ssh_user': 'root',\n 'ssh_target_dir': '/var/www',\n 's3_bucket': 'my_s3_bucket',\n 'dropbox_dir': '~/Dropbox/Public/',\n 'default_pagination': 10,\n 'siteurl': '',\n 'lang': 'en'\n}\n\ndef _input_compat(prompt):\n if six.PY3:\n r = input(prompt)\n else:\n # FIXME: why use this with @decoding_strings?\n r = raw_input(prompt).decode('utf-8')\n return r\n\nif six.PY3:\n str_compat = str\nelse:\n str_compat = unicode\n\ndef decoding_strings(f):\n def wrapper(*args, **kwargs):\n out = f(*args, **kwargs)\n if isinstance(out, six.string_types) and not six.PY3:\n # todo: make encoding configurable?\n if six.PY3:\n return out\n else:\n return out.decode(sys.stdin.encoding)\n return out\n return wrapper\n\n\ndef get_template(name, as_encoding='utf-8'):\n template = os.path.join(_TEMPLATES_DIR, \"{0}.in\".format(name))\n\n if not os.path.isfile(template):\n raise RuntimeError(\"Cannot open {0}\".format(template))\n\n with codecs.open(template, 'r', as_encoding) as fd:\n line = fd.readline()\n while line:\n yield line\n line = fd.readline()\n fd.close()\n\n\n@decoding_strings\ndef ask(question, answer=str_compat, default=None, l=None):\n if answer == str_compat:\n r = ''\n while True:\n if default:\n r = _input_compat('> {0} [{1}] '.format(question, default))\n else:\n r = _input_compat('> {0} '.format(question, default))\n\n r = r.strip()\n\n if len(r) <= 0:\n if default:\n r = default\n break\n else:\n print('You must enter 
something')\n else:\n if l and len(r) != l:\n print('You must enter a {0} letters long string'.format(l))\n else:\n break\n\n return r\n\n elif answer == bool:\n r = None\n while True:\n if default is True:\n r = _input_compat('> {0} (Y/n) '.format(question))\n elif default is False:\n r = _input_compat('> {0} (y/N) '.format(question))\n else:\n r = _input_compat('> {0} (y/n) '.format(question))\n\n r = r.strip().lower()\n\n if r in ('y', 'yes'):\n r = True\n break\n elif r in ('n', 'no'):\n r = False\n break\n elif not r:\n r = default\n break\n else:\n print(\"You must answer 'yes' or 'no'\")\n return r\n elif answer == int:\n r = None\n while True:\n if default:\n r = _input_compat('> {0} [{1}] '.format(question, default))\n else:\n r = _input_compat('> {0} '.format(question))\n\n r = r.strip()\n\n if not r:\n r = default\n break\n\n try:\n r = int(r)\n break\n except:\n print('You must enter an integer')\n return r\n else:\n raise NotImplemented('Argument `answer` must be str_compat, bool, or integer')\n\n\ndef main():\n parser = argparse.ArgumentParser(\n description=\"A kickstarter for Pelican\",\n formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n parser.add_argument('-p', '--path', default=os.curdir,\n help=\"The path to generate the blog into\")\n parser.add_argument('-t', '--title', metavar=\"title\",\n help='Set the title of the website')\n parser.add_argument('-a', '--author', metavar=\"author\",\n help='Set the author name of the website')\n parser.add_argument('-l', '--lang', metavar=\"lang\",\n help='Set the default web site language')\n\n args = parser.parse_args()\n\n print('''Welcome to pelican-quickstart v{v}.\n\nThis script will help you create a new Pelican-based website.\n\nPlease answer the following questions so this script can generate the files\nneeded by Pelican.\n\n '''.format(v=__version__))\n\n project = os.path.join(\n os.environ.get('VIRTUAL_ENV', os.curdir), '.project')\n if os.path.isfile(project):\n CONF['basedir'] = open(project, 'r').read().rstrip(\"\\n\")\n print('Using project associated with current virtual environment.'\n 'Will save to:\\n%s\\n' % CONF['basedir'])\n else:\n CONF['basedir'] = os.path.abspath(ask('Where do you want to create your new web site?', answer=str_compat, default=args.path))\n\n CONF['sitename'] = ask('What will be the title of this web site?', answer=str_compat, default=args.title)\n CONF['author'] = ask('Who will be the author of this web site?', answer=str_compat, default=args.author)\n CONF['lang'] = ask('What will be the default language of this web site?', str_compat, args.lang or CONF['lang'], 2)\n\n if ask('Do you want to specify a URL prefix? e.g., http://example.com ', answer=bool, default=True):\n CONF['siteurl'] = ask('What is your URL prefix? 
(see above example; no trailing slash)', str_compat, CONF['siteurl'])\n\n CONF['with_pagination'] = ask('Do you want to enable article pagination?', bool, bool(CONF['default_pagination']))\n\n if CONF['with_pagination']:\n CONF['default_pagination'] = ask('How many articles per page do you want?', int, CONF['default_pagination'])\n else:\n CONF['default_pagination'] = False\n\n mkfile = ask('Do you want to generate a Makefile to easily manage your website?', bool, True)\n develop = ask('Do you want an auto-reload & simpleHTTP script to assist with theme and site development?', bool, True)\n\n if mkfile:\n if ask('Do you want to upload your website using FTP?', answer=bool, default=False):\n CONF['ftp_host'] = ask('What is the hostname of your FTP server?', str_compat, CONF['ftp_host'])\n CONF['ftp_user'] = ask('What is your username on that server?', str_compat, CONF['ftp_user'])\n CONF['ftp_target_dir'] = ask('Where do you want to put your web site on that server?', str_compat, CONF['ftp_target_dir'])\n if ask('Do you want to upload your website using SSH?', answer=bool, default=False):\n CONF['ssh_host'] = ask('What is the hostname of your SSH server?', str_compat, CONF['ssh_host'])\n CONF['ssh_port'] = ask('What is the port of your SSH server?', int, CONF['ssh_port'])\n CONF['ssh_user'] = ask('What is your username on that server?', str_compat, CONF['ssh_user'])\n CONF['ssh_target_dir'] = ask('Where do you want to put your web site on that server?', str_compat, CONF['ssh_target_dir'])\n if ask('Do you want to upload your website using Dropbox?', answer=bool, default=False):\n CONF['dropbox_dir'] = ask('Where is your Dropbox directory?', str_compat, CONF['dropbox_dir'])\n if ask('Do you want to upload your website using S3?', answer=bool, default=False):\n CONF['s3_bucket'] = ask('What is the name of your S3 bucket?', str_compat, CONF['s3_bucket'])\n\n try:\n os.makedirs(os.path.join(CONF['basedir'], 'content'))\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n os.makedirs(os.path.join(CONF['basedir'], 'output'))\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'pelicanconf.py'), 'w', 'utf-8') as fd:\n conf_python = dict()\n for key, value in CONF.items():\n conf_python[key] = repr(value)\n\n for line in get_template('pelicanconf.py'):\n template = string.Template(line)\n fd.write(template.safe_substitute(conf_python))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'publishconf.py'), 'w', 'utf-8') as fd:\n for line in get_template('publishconf.py'):\n template = string.Template(line)\n fd.write(template.safe_substitute(CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n if mkfile:\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'Makefile'), 'w', 'utf-8') as fd:\n mkfile_template_name = 'Makefile'\n py_v = 'PY=python'\n if six.PY3:\n py_v = 'PY=python3'\n template = string.Template(py_v)\n fd.write(template.safe_substitute(CONF))\n fd.write('\\n')\n for line in get_template(mkfile_template_name):\n template = string.Template(line)\n fd.write(template.safe_substitute(CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n if develop:\n conf_shell = dict()\n for key, value in CONF.items():\n if isinstance(value, six.string_types) and ' ' in value:\n value = '\"' + value.replace('\"', '\\\\\"') + '\"'\n conf_shell[key] = value\n try:\n with 
codecs.open(os.path.join(CONF['basedir'], 'develop_server.sh'), 'w', 'utf-8') as fd:\n lines = list(get_template('develop_server.sh'))\n py_v = 'PY=python\\n'\n if six.PY3:\n py_v = 'PY=python3\\n'\n lines = lines[:4] + [py_v] + lines[4:]\n for line in lines:\n template = string.Template(line)\n fd.write(template.safe_substitute(conf_shell))\n fd.close()\n os.chmod((os.path.join(CONF['basedir'], 'develop_server.sh')), 493) # mode 0o755\n except OSError as e:\n print('Error: {0}'.format(e))\n\n print('Done. Your new project is available at %s' % CONF['basedir'])\n", "path": "pelican/tools/pelican_quickstart.py"}]} | 4,094 | 126 |
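With the pelican-quickstart patch above applied, `_input_compat()` passes the raw Python 2 byte string through unchanged, so the `@decoding_strings` wrapper's `out.decode(sys.stdin.encoding)` runs exactly once and on bytes, which is the case it handles correctly. A quick check of that single-decode path (illustrative values, not project code):

```python
# Python 2: what the decorator receives once the first decode is removed.
out = 'Guillaume LAM\xc3\x89'        # bytes, as raw_input() returns them
decoded = out.decode('utf-8')        # -> u'Guillaume LAM\xc9', no error
assert decoded == u'Guillaume LAM\xc9'
```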
gh_patches_debug_21314 | rasdani/github-patches | git_diff | matrix-org__synapse-4330 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
500s calling /messages on matrix.org
as seen by @schiessle on riot-android at https://github.com/vector-im/riot-android/issues/2802
</issue>
<code>
[start of synapse/handlers/pagination.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2014 - 2016 OpenMarket Ltd
3 # Copyright 2017 - 2018 New Vector Ltd
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 import logging
17
18 from twisted.internet import defer
19 from twisted.python.failure import Failure
20
21 from synapse.api.constants import EventTypes, Membership
22 from synapse.api.errors import SynapseError
23 from synapse.events.utils import serialize_event
24 from synapse.storage.state import StateFilter
25 from synapse.types import RoomStreamToken
26 from synapse.util.async_helpers import ReadWriteLock
27 from synapse.util.logcontext import run_in_background
28 from synapse.util.stringutils import random_string
29 from synapse.visibility import filter_events_for_client
30
31 logger = logging.getLogger(__name__)
32
33
34 class PurgeStatus(object):
35 """Object tracking the status of a purge request
36
37 This class contains information on the progress of a purge request, for
38 return by get_purge_status.
39
40 Attributes:
41 status (int): Tracks whether this request has completed. One of
42 STATUS_{ACTIVE,COMPLETE,FAILED}
43 """
44
45 STATUS_ACTIVE = 0
46 STATUS_COMPLETE = 1
47 STATUS_FAILED = 2
48
49 STATUS_TEXT = {
50 STATUS_ACTIVE: "active",
51 STATUS_COMPLETE: "complete",
52 STATUS_FAILED: "failed",
53 }
54
55 def __init__(self):
56 self.status = PurgeStatus.STATUS_ACTIVE
57
58 def asdict(self):
59 return {
60 "status": PurgeStatus.STATUS_TEXT[self.status]
61 }
62
63
64 class PaginationHandler(object):
65 """Handles pagination and purge history requests.
66
67 These are in the same handler due to the fact we need to block clients
68 paginating during a purge.
69 """
70
71 def __init__(self, hs):
72 self.hs = hs
73 self.auth = hs.get_auth()
74 self.store = hs.get_datastore()
75 self.clock = hs.get_clock()
76
77 self.pagination_lock = ReadWriteLock()
78 self._purges_in_progress_by_room = set()
79 # map from purge id to PurgeStatus
80 self._purges_by_id = {}
81
82 def start_purge_history(self, room_id, token,
83 delete_local_events=False):
84 """Start off a history purge on a room.
85
86 Args:
87 room_id (str): The room to purge from
88
89 token (str): topological token to delete events before
90 delete_local_events (bool): True to delete local events as well as
91 remote ones
92
93 Returns:
94 str: unique ID for this purge transaction.
95 """
96 if room_id in self._purges_in_progress_by_room:
97 raise SynapseError(
98 400,
99 "History purge already in progress for %s" % (room_id, ),
100 )
101
102 purge_id = random_string(16)
103
104 # we log the purge_id here so that it can be tied back to the
105 # request id in the log lines.
106 logger.info("[purge] starting purge_id %s", purge_id)
107
108 self._purges_by_id[purge_id] = PurgeStatus()
109 run_in_background(
110 self._purge_history,
111 purge_id, room_id, token, delete_local_events,
112 )
113 return purge_id
114
115 @defer.inlineCallbacks
116 def _purge_history(self, purge_id, room_id, token,
117 delete_local_events):
118 """Carry out a history purge on a room.
119
120 Args:
121 purge_id (str): The id for this purge
122 room_id (str): The room to purge from
123 token (str): topological token to delete events before
124 delete_local_events (bool): True to delete local events as well as
125 remote ones
126
127 Returns:
128 Deferred
129 """
130 self._purges_in_progress_by_room.add(room_id)
131 try:
132 with (yield self.pagination_lock.write(room_id)):
133 yield self.store.purge_history(
134 room_id, token, delete_local_events,
135 )
136 logger.info("[purge] complete")
137 self._purges_by_id[purge_id].status = PurgeStatus.STATUS_COMPLETE
138 except Exception:
139 logger.error("[purge] failed: %s", Failure().getTraceback().rstrip())
140 self._purges_by_id[purge_id].status = PurgeStatus.STATUS_FAILED
141 finally:
142 self._purges_in_progress_by_room.discard(room_id)
143
144 # remove the purge from the list 24 hours after it completes
145 def clear_purge():
146 del self._purges_by_id[purge_id]
147 self.hs.get_reactor().callLater(24 * 3600, clear_purge)
148
149 def get_purge_status(self, purge_id):
150 """Get the current status of an active purge
151
152 Args:
153 purge_id (str): purge_id returned by start_purge_history
154
155 Returns:
156 PurgeStatus|None
157 """
158 return self._purges_by_id.get(purge_id)
159
160 @defer.inlineCallbacks
161 def get_messages(self, requester, room_id=None, pagin_config=None,
162 as_client_event=True, event_filter=None):
163 """Get messages in a room.
164
165 Args:
166 requester (Requester): The user requesting messages.
167 room_id (str): The room they want messages from.
168 pagin_config (synapse.api.streams.PaginationConfig): The pagination
169 config rules to apply, if any.
170 as_client_event (bool): True to get events in client-server format.
171 event_filter (Filter): Filter to apply to results or None
172 Returns:
173 dict: Pagination API results
174 """
175 user_id = requester.user.to_string()
176
177 if pagin_config.from_token:
178 room_token = pagin_config.from_token.room_key
179 else:
180 pagin_config.from_token = (
181 yield self.hs.get_event_sources().get_current_token_for_room(
182 room_id=room_id
183 )
184 )
185 room_token = pagin_config.from_token.room_key
186
187 room_token = RoomStreamToken.parse(room_token)
188
189 pagin_config.from_token = pagin_config.from_token.copy_and_replace(
190 "room_key", str(room_token)
191 )
192
193 source_config = pagin_config.get_source_config("room")
194
195 with (yield self.pagination_lock.read(room_id)):
196 membership, member_event_id = yield self.auth.check_in_room_or_world_readable(
197 room_id, user_id
198 )
199
200 if source_config.direction == 'b':
201 # if we're going backwards, we might need to backfill. This
202 # requires that we have a topo token.
203 if room_token.topological:
204 max_topo = room_token.topological
205 else:
206 max_topo = yield self.store.get_max_topological_token(
207 room_id, room_token.stream
208 )
209
210 if membership == Membership.LEAVE:
211 # If they have left the room then clamp the token to be before
212 # they left the room, to save the effort of loading from the
213 # database.
214 leave_token = yield self.store.get_topological_token_for_event(
215 member_event_id
216 )
217 leave_token = RoomStreamToken.parse(leave_token)
218 if leave_token.topological < max_topo:
219 source_config.from_key = str(leave_token)
220
221 yield self.hs.get_handlers().federation_handler.maybe_backfill(
222 room_id, max_topo
223 )
224
225 events, next_key = yield self.store.paginate_room_events(
226 room_id=room_id,
227 from_key=source_config.from_key,
228 to_key=source_config.to_key,
229 direction=source_config.direction,
230 limit=source_config.limit,
231 event_filter=event_filter,
232 )
233
234 next_token = pagin_config.from_token.copy_and_replace(
235 "room_key", next_key
236 )
237
238 if not events:
239 defer.returnValue({
240 "chunk": [],
241 "start": pagin_config.from_token.to_string(),
242 "end": next_token.to_string(),
243 })
244
245 if event_filter:
246 events = event_filter.filter(events)
247
248 events = yield filter_events_for_client(
249 self.store,
250 user_id,
251 events,
252 is_peeking=(member_event_id is None),
253 )
254
255 state = None
256 if event_filter and event_filter.lazy_load_members():
257 # TODO: remove redundant members
258
259 # FIXME: we also care about invite targets etc.
260 state_filter = StateFilter.from_types(
261 (EventTypes.Member, event.sender)
262 for event in events
263 )
264
265 state_ids = yield self.store.get_state_ids_for_event(
266 events[0].event_id, state_filter=state_filter,
267 )
268
269 if state_ids:
270 state = yield self.store.get_events(list(state_ids.values()))
271 state = state.values()
272
273 time_now = self.clock.time_msec()
274
275 chunk = {
276 "chunk": [
277 serialize_event(e, time_now, as_client_event)
278 for e in events
279 ],
280 "start": pagin_config.from_token.to_string(),
281 "end": next_token.to_string(),
282 }
283
284 if state:
285 chunk["state"] = [
286 serialize_event(e, time_now, as_client_event)
287 for e in state
288 ]
289
290 defer.returnValue(chunk)
291
[end of synapse/handlers/pagination.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/synapse/handlers/pagination.py b/synapse/handlers/pagination.py
--- a/synapse/handlers/pagination.py
+++ b/synapse/handlers/pagination.py
@@ -235,6 +235,17 @@
"room_key", next_key
)
+ if events:
+ if event_filter:
+ events = event_filter.filter(events)
+
+ events = yield filter_events_for_client(
+ self.store,
+ user_id,
+ events,
+ is_peeking=(member_event_id is None),
+ )
+
if not events:
defer.returnValue({
"chunk": [],
@@ -242,16 +253,6 @@
"end": next_token.to_string(),
})
- if event_filter:
- events = event_filter.filter(events)
-
- events = yield filter_events_for_client(
- self.store,
- user_id,
- events,
- is_peeking=(member_event_id is None),
- )
-
state = None
if event_filter and event_filter.lazy_load_members():
# TODO: remove redundant members
| {"golden_diff": "diff --git a/synapse/handlers/pagination.py b/synapse/handlers/pagination.py\n--- a/synapse/handlers/pagination.py\n+++ b/synapse/handlers/pagination.py\n@@ -235,6 +235,17 @@\n \"room_key\", next_key\n )\n \n+ if events:\n+ if event_filter:\n+ events = event_filter.filter(events)\n+\n+ events = yield filter_events_for_client(\n+ self.store,\n+ user_id,\n+ events,\n+ is_peeking=(member_event_id is None),\n+ )\n+\n if not events:\n defer.returnValue({\n \"chunk\": [],\n@@ -242,16 +253,6 @@\n \"end\": next_token.to_string(),\n })\n \n- if event_filter:\n- events = event_filter.filter(events)\n-\n- events = yield filter_events_for_client(\n- self.store,\n- user_id,\n- events,\n- is_peeking=(member_event_id is None),\n- )\n-\n state = None\n if event_filter and event_filter.lazy_load_members():\n # TODO: remove redundant members\n", "issue": "500s calling /messages on matrix.org\nas seen by @schiessle on riot-android at https://github.com/vector-im/riot-android/issues/2802\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2014 - 2016 OpenMarket Ltd\n# Copyright 2017 - 2018 New Vector Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport logging\n\nfrom twisted.internet import defer\nfrom twisted.python.failure import Failure\n\nfrom synapse.api.constants import EventTypes, Membership\nfrom synapse.api.errors import SynapseError\nfrom synapse.events.utils import serialize_event\nfrom synapse.storage.state import StateFilter\nfrom synapse.types import RoomStreamToken\nfrom synapse.util.async_helpers import ReadWriteLock\nfrom synapse.util.logcontext import run_in_background\nfrom synapse.util.stringutils import random_string\nfrom synapse.visibility import filter_events_for_client\n\nlogger = logging.getLogger(__name__)\n\n\nclass PurgeStatus(object):\n \"\"\"Object tracking the status of a purge request\n\n This class contains information on the progress of a purge request, for\n return by get_purge_status.\n\n Attributes:\n status (int): Tracks whether this request has completed. 
One of\n STATUS_{ACTIVE,COMPLETE,FAILED}\n \"\"\"\n\n STATUS_ACTIVE = 0\n STATUS_COMPLETE = 1\n STATUS_FAILED = 2\n\n STATUS_TEXT = {\n STATUS_ACTIVE: \"active\",\n STATUS_COMPLETE: \"complete\",\n STATUS_FAILED: \"failed\",\n }\n\n def __init__(self):\n self.status = PurgeStatus.STATUS_ACTIVE\n\n def asdict(self):\n return {\n \"status\": PurgeStatus.STATUS_TEXT[self.status]\n }\n\n\nclass PaginationHandler(object):\n \"\"\"Handles pagination and purge history requests.\n\n These are in the same handler due to the fact we need to block clients\n paginating during a purge.\n \"\"\"\n\n def __init__(self, hs):\n self.hs = hs\n self.auth = hs.get_auth()\n self.store = hs.get_datastore()\n self.clock = hs.get_clock()\n\n self.pagination_lock = ReadWriteLock()\n self._purges_in_progress_by_room = set()\n # map from purge id to PurgeStatus\n self._purges_by_id = {}\n\n def start_purge_history(self, room_id, token,\n delete_local_events=False):\n \"\"\"Start off a history purge on a room.\n\n Args:\n room_id (str): The room to purge from\n\n token (str): topological token to delete events before\n delete_local_events (bool): True to delete local events as well as\n remote ones\n\n Returns:\n str: unique ID for this purge transaction.\n \"\"\"\n if room_id in self._purges_in_progress_by_room:\n raise SynapseError(\n 400,\n \"History purge already in progress for %s\" % (room_id, ),\n )\n\n purge_id = random_string(16)\n\n # we log the purge_id here so that it can be tied back to the\n # request id in the log lines.\n logger.info(\"[purge] starting purge_id %s\", purge_id)\n\n self._purges_by_id[purge_id] = PurgeStatus()\n run_in_background(\n self._purge_history,\n purge_id, room_id, token, delete_local_events,\n )\n return purge_id\n\n @defer.inlineCallbacks\n def _purge_history(self, purge_id, room_id, token,\n delete_local_events):\n \"\"\"Carry out a history purge on a room.\n\n Args:\n purge_id (str): The id for this purge\n room_id (str): The room to purge from\n token (str): topological token to delete events before\n delete_local_events (bool): True to delete local events as well as\n remote ones\n\n Returns:\n Deferred\n \"\"\"\n self._purges_in_progress_by_room.add(room_id)\n try:\n with (yield self.pagination_lock.write(room_id)):\n yield self.store.purge_history(\n room_id, token, delete_local_events,\n )\n logger.info(\"[purge] complete\")\n self._purges_by_id[purge_id].status = PurgeStatus.STATUS_COMPLETE\n except Exception:\n logger.error(\"[purge] failed: %s\", Failure().getTraceback().rstrip())\n self._purges_by_id[purge_id].status = PurgeStatus.STATUS_FAILED\n finally:\n self._purges_in_progress_by_room.discard(room_id)\n\n # remove the purge from the list 24 hours after it completes\n def clear_purge():\n del self._purges_by_id[purge_id]\n self.hs.get_reactor().callLater(24 * 3600, clear_purge)\n\n def get_purge_status(self, purge_id):\n \"\"\"Get the current status of an active purge\n\n Args:\n purge_id (str): purge_id returned by start_purge_history\n\n Returns:\n PurgeStatus|None\n \"\"\"\n return self._purges_by_id.get(purge_id)\n\n @defer.inlineCallbacks\n def get_messages(self, requester, room_id=None, pagin_config=None,\n as_client_event=True, event_filter=None):\n \"\"\"Get messages in a room.\n\n Args:\n requester (Requester): The user requesting messages.\n room_id (str): The room they want messages from.\n pagin_config (synapse.api.streams.PaginationConfig): The pagination\n config rules to apply, if any.\n as_client_event (bool): True to get events in client-server 
format.\n event_filter (Filter): Filter to apply to results or None\n Returns:\n dict: Pagination API results\n \"\"\"\n user_id = requester.user.to_string()\n\n if pagin_config.from_token:\n room_token = pagin_config.from_token.room_key\n else:\n pagin_config.from_token = (\n yield self.hs.get_event_sources().get_current_token_for_room(\n room_id=room_id\n )\n )\n room_token = pagin_config.from_token.room_key\n\n room_token = RoomStreamToken.parse(room_token)\n\n pagin_config.from_token = pagin_config.from_token.copy_and_replace(\n \"room_key\", str(room_token)\n )\n\n source_config = pagin_config.get_source_config(\"room\")\n\n with (yield self.pagination_lock.read(room_id)):\n membership, member_event_id = yield self.auth.check_in_room_or_world_readable(\n room_id, user_id\n )\n\n if source_config.direction == 'b':\n # if we're going backwards, we might need to backfill. This\n # requires that we have a topo token.\n if room_token.topological:\n max_topo = room_token.topological\n else:\n max_topo = yield self.store.get_max_topological_token(\n room_id, room_token.stream\n )\n\n if membership == Membership.LEAVE:\n # If they have left the room then clamp the token to be before\n # they left the room, to save the effort of loading from the\n # database.\n leave_token = yield self.store.get_topological_token_for_event(\n member_event_id\n )\n leave_token = RoomStreamToken.parse(leave_token)\n if leave_token.topological < max_topo:\n source_config.from_key = str(leave_token)\n\n yield self.hs.get_handlers().federation_handler.maybe_backfill(\n room_id, max_topo\n )\n\n events, next_key = yield self.store.paginate_room_events(\n room_id=room_id,\n from_key=source_config.from_key,\n to_key=source_config.to_key,\n direction=source_config.direction,\n limit=source_config.limit,\n event_filter=event_filter,\n )\n\n next_token = pagin_config.from_token.copy_and_replace(\n \"room_key\", next_key\n )\n\n if not events:\n defer.returnValue({\n \"chunk\": [],\n \"start\": pagin_config.from_token.to_string(),\n \"end\": next_token.to_string(),\n })\n\n if event_filter:\n events = event_filter.filter(events)\n\n events = yield filter_events_for_client(\n self.store,\n user_id,\n events,\n is_peeking=(member_event_id is None),\n )\n\n state = None\n if event_filter and event_filter.lazy_load_members():\n # TODO: remove redundant members\n\n # FIXME: we also care about invite targets etc.\n state_filter = StateFilter.from_types(\n (EventTypes.Member, event.sender)\n for event in events\n )\n\n state_ids = yield self.store.get_state_ids_for_event(\n events[0].event_id, state_filter=state_filter,\n )\n\n if state_ids:\n state = yield self.store.get_events(list(state_ids.values()))\n state = state.values()\n\n time_now = self.clock.time_msec()\n\n chunk = {\n \"chunk\": [\n serialize_event(e, time_now, as_client_event)\n for e in events\n ],\n \"start\": pagin_config.from_token.to_string(),\n \"end\": next_token.to_string(),\n }\n\n if state:\n chunk[\"state\"] = [\n serialize_event(e, time_now, as_client_event)\n for e in state\n ]\n\n defer.returnValue(chunk)\n", "path": "synapse/handlers/pagination.py"}]} | 3,441 | 258 |
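The synapse patch above reorders `get_messages` so that `event_filter.filter(...)` and `filter_events_for_client(...)` run before the `if not events:` early return. One plausible reading of the 500s — an inference from the diff and the handler code, not something stated in the issue — is that a page whose events were all filtered away previously slipped past the emptiness check and then reached `events[0]` in the lazy-loaded-members branch with an empty list. A schematic of the change in ordering, using hypothetical helper names rather than the real handler:

```python
def paginate(events, keep):
    # Patched order: filter first, so the emptiness check below also covers
    # the "every event was filtered out" case.
    events = [e for e in events if keep(e)]

    if not events:
        return {"chunk": []}

    # Safe only because the guard above now sees the filtered list.
    return {"chunk": events, "anchor": events[0]}
```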
gh_patches_debug_23812 | rasdani/github-patches | git_diff | interlegis__sapl-2091 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Auto-numbering of legal norms (Norma Jurídica)
Taken from support ticket no. 132457
"Good morning,
I found the automatic numbering feature for legal norms in SAPL 3.1 very useful.
Another new feature I liked a lot is that laws with a letter at the end, such as "lei 2133A", are now accepted.
However, when I insert a law with a letter, the auto-numbering of the following laws stops working.
I therefore kindly ask you to review this problem.
Best regards,
Marcos F. Scher."
</issue>
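The breakage shows up in `recuperar_numero_norma` in the listing below: the queryset adds a `CAST(numero as INTEGER)` column via `.extra()` and the view then computes `int(norma[0]) + 1`, and neither step survives a stored number such as `'2133A'` (on a backend like PostgreSQL the cast itself errors; in Python the `int()` call raises). A minimal sketch of the failing increment and of the fix applied in this entry's patch, which keeps only the leading digits:

```python
import re

numero = '2133A'   # a law number with a letter suffix, as in the report

# int(numero) + 1 is what breaks the auto-numbering:
# ValueError: invalid literal for int() with base 10: '2133A'

# The patch strips everything from the first non-digit onward first:
proximo = int(re.sub("[^0-9].*", '', numero)) + 1   # -> 2134
```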
<code>
[start of sapl/norma/views.py]
1
2 import weasyprint
3 from django.contrib.auth.mixins import PermissionRequiredMixin
4 from django.core.exceptions import ObjectDoesNotExist
5 from django.core.urlresolvers import reverse
6 from django.http import HttpResponse, JsonResponse
7 from django.template import RequestContext, loader
8 from django.utils import timezone
9 from django.utils.translation import ugettext_lazy as _
10 from django.views.generic import TemplateView, UpdateView
11 from django.views.generic.base import RedirectView
12 from django.views.generic.edit import FormView
13 from django_filters.views import FilterView
14
15 from sapl.base.models import AppConfig
16 from sapl.compilacao.views import IntegracaoTaView
17 from sapl.crud.base import (RP_DETAIL, RP_LIST, Crud, CrudAux,
18 MasterDetailCrud, make_pagination)
19 from sapl.utils import show_results_filter_set
20
21 from .forms import (NormaFilterSet, NormaJuridicaForm,
22 NormaPesquisaSimplesForm, NormaRelacionadaForm)
23 from .models import (AssuntoNorma, NormaJuridica, NormaRelacionada,
24 TipoNormaJuridica, TipoVinculoNormaJuridica)
25
26 # LegislacaoCitadaCrud = Crud.build(LegislacaoCitada, '')
27 AssuntoNormaCrud = CrudAux.build(AssuntoNorma, 'assunto_norma_juridica',
28 list_field_names=['assunto', 'descricao'])
29
30
31 TipoNormaCrud = CrudAux.build(
32 TipoNormaJuridica, 'tipo_norma_juridica',
33 list_field_names=['sigla', 'descricao', 'equivalente_lexml'])
34 TipoVinculoNormaJuridicaCrud = CrudAux.build(
35 TipoVinculoNormaJuridica, '',
36 list_field_names=['sigla', 'descricao_ativa', 'descricao_passiva'])
37
38
39 class NormaRelacionadaCrud(MasterDetailCrud):
40 model = NormaRelacionada
41 parent_field = 'norma_principal'
42 help_topic = 'norma_juridica'
43
44 class BaseMixin(MasterDetailCrud.BaseMixin):
45 list_field_names = ['norma_relacionada', 'tipo_vinculo']
46
47 class CreateView(MasterDetailCrud.CreateView):
48 form_class = NormaRelacionadaForm
49
50 class UpdateView(MasterDetailCrud.UpdateView):
51 form_class = NormaRelacionadaForm
52
53 def get_initial(self):
54 initial = super(UpdateView, self).get_initial()
55 initial['tipo'] = self.object.norma_relacionada.tipo.id
56 initial['numero'] = self.object.norma_relacionada.numero
57 initial['ano'] = self.object.norma_relacionada.ano
58 initial['ementa'] = self.object.norma_relacionada.ementa
59 return initial
60
61 class DetailView(MasterDetailCrud.DetailView):
62
63 layout_key = 'NormaRelacionadaDetail'
64
65
66 class NormaPesquisaView(FilterView):
67 model = NormaJuridica
68 filterset_class = NormaFilterSet
69 paginate_by = 10
70
71 def get_queryset(self):
72 qs = super().get_queryset()
73
74 qs.select_related('tipo', 'materia')
75
76 return qs
77
78 def get_context_data(self, **kwargs):
79 context = super(NormaPesquisaView, self).get_context_data(**kwargs)
80
81 context['title'] = _('Pesquisar Norma Jurídica')
82
83 qr = self.request.GET.copy()
84
85 if 'page' in qr:
86 del qr['page']
87
88 paginator = context['paginator']
89 page_obj = context['page_obj']
90
91 context['page_range'] = make_pagination(
92 page_obj.number, paginator.num_pages)
93
94 context['filter_url'] = ('&' + qr.urlencode()) if len(qr) > 0 else ''
95
96 context['show_results'] = show_results_filter_set(qr)
97
98 return context
99
100
101 class NormaTaView(IntegracaoTaView):
102 model = NormaJuridica
103 model_type_foreignkey = TipoNormaJuridica
104 map_fields = {
105 'data': 'data',
106 'ementa': 'ementa',
107 'observacao': 'observacao',
108 'numero': 'numero',
109 'ano': 'ano',
110 }
111
112 map_funcs = {
113 'publicacao_func': True
114 }
115
116 def get(self, request, *args, **kwargs):
117 """
118 Para manter a app compilacao isolada das outras aplicações,
119 este get foi implementado para tratar uma prerrogativa externa
120 de usuário.
121 """
122 if AppConfig.attr('texto_articulado_norma'):
123 return IntegracaoTaView.get(self, request, *args, **kwargs)
124 else:
125 return self.get_redirect_deactivated()
126
127
128 class NormaCrud(Crud):
129 model = NormaJuridica
130 help_topic = 'norma_juridica'
131 public = [RP_LIST, RP_DETAIL]
132
133 class BaseMixin(Crud.BaseMixin):
134 list_field_names = ['tipo', 'numero', 'ano', 'ementa']
135
136 list_url = ''
137
138 @property
139 def search_url(self):
140 namespace = self.model._meta.app_config.name
141 return reverse('%s:%s' % (namespace, 'norma_pesquisa'))
142
143 class DetailView(Crud.DetailView):
144 pass
145
146 class DeleteView(Crud.DeleteView):
147
148 def get_success_url(self):
149 return self.search_url
150
151 class CreateView(Crud.CreateView):
152 form_class = NormaJuridicaForm
153
154 @property
155 def cancel_url(self):
156 return self.search_url
157
158 layout_key = 'NormaJuridicaCreate'
159
160 class ListView(Crud.ListView, RedirectView):
161
162 def get_redirect_url(self, *args, **kwargs):
163 namespace = self.model._meta.app_config.name
164 return reverse('%s:%s' % (namespace, 'norma_pesquisa'))
165
166 def get(self, request, *args, **kwargs):
167 return RedirectView.get(self, request, *args, **kwargs)
168
169 class UpdateView(Crud.UpdateView):
170 form_class = NormaJuridicaForm
171
172 layout_key = 'NormaJuridicaCreate'
173
174 def get_initial(self):
175 initial = super(UpdateView, self).get_initial()
176 norma = NormaJuridica.objects.get(id=self.kwargs['pk'])
177 if norma.materia:
178 initial['tipo_materia'] = norma.materia.tipo
179 initial['ano_materia'] = norma.materia.ano
180 initial['numero_materia'] = norma.materia.numero
181 return initial
182
183
184 def recuperar_norma(request):
185 tipo = TipoNormaJuridica.objects.get(pk=request.GET['tipo'])
186 numero = request.GET['numero']
187 ano = request.GET['ano']
188
189 try:
190 norma = NormaJuridica.objects.get(tipo=tipo,
191 ano=ano,
192 numero=numero)
193 response = JsonResponse({'ementa': norma.ementa,
194 'id': norma.id})
195 except ObjectDoesNotExist:
196 response = JsonResponse({'ementa': '', 'id': 0})
197
198 return response
199
200
201 def recuperar_numero_norma(request):
202 tipo = TipoNormaJuridica.objects.get(pk=request.GET['tipo'])
203 ano = request.GET.get('ano', '')
204
205 param = {'tipo': tipo}
206 param['ano'] = ano if ano else timezone.now().year
207 norma = NormaJuridica.objects.filter(**param).extra(
208 {'numero_id': "CAST(numero as INTEGER)"}).order_by(
209 'tipo', 'ano','numero_id').values_list('numero', 'ano').last()
210 if norma:
211 response = JsonResponse({'numero': int(norma[0]) + 1,
212 'ano': norma[1]})
213 else:
214 response = JsonResponse(
215 {'numero': 1, 'ano': ano})
216
217 return response
218
219
220 class ImpressosView(PermissionRequiredMixin, TemplateView):
221 template_name = 'materia/impressos/impressos.html'
222 permission_required = ('materia.can_access_impressos', )
223
224
225 def gerar_pdf_impressos(request, context, template_name):
226 template = loader.get_template(template_name)
227 html = template.render(RequestContext(request, context))
228 pdf = weasyprint.HTML(string=html, base_url=request.build_absolute_uri()
229 ).write_pdf()
230
231 response = HttpResponse(pdf, content_type='application/pdf')
232 response['Content-Disposition'] = (
233 'inline; filename="relatorio_impressos.pdf"')
234 response['Content-Transfer-Encoding'] = 'binary'
235
236 return response
237
238
239 class NormaPesquisaSimplesView(PermissionRequiredMixin, FormView):
240 form_class = NormaPesquisaSimplesForm
241 template_name = 'materia/impressos/norma.html'
242 permission_required = ('materia.can_access_impressos', )
243
244 def form_valid(self, form):
245 normas = NormaJuridica.objects.all().order_by(
246 'numero')
247 template_norma = 'materia/impressos/normas_pdf.html'
248
249 titulo = form.cleaned_data['titulo']
250
251 if form.cleaned_data['tipo_norma']:
252 normas = normas.filter(tipo=form.cleaned_data['tipo_norma'])
253
254 if form.cleaned_data['data_inicial']:
255 normas = normas.filter(
256 data__gte=form.cleaned_data['data_inicial'],
257 data__lte=form.cleaned_data['data_final'])
258
259 qtd_resultados = len(normas)
260 if qtd_resultados > 2000:
261 normas = normas[:2000]
262
263 context = {'quantidade': qtd_resultados,
264 'titulo': titulo,
265 'normas': normas}
266
267 return gerar_pdf_impressos(self.request, context, template_norma)
268
[end of sapl/norma/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sapl/norma/views.py b/sapl/norma/views.py
--- a/sapl/norma/views.py
+++ b/sapl/norma/views.py
@@ -1,4 +1,5 @@
+import re
import weasyprint
from django.contrib.auth.mixins import PermissionRequiredMixin
from django.core.exceptions import ObjectDoesNotExist
@@ -201,14 +202,12 @@
def recuperar_numero_norma(request):
tipo = TipoNormaJuridica.objects.get(pk=request.GET['tipo'])
ano = request.GET.get('ano', '')
-
param = {'tipo': tipo}
param['ano'] = ano if ano else timezone.now().year
- norma = NormaJuridica.objects.filter(**param).extra(
- {'numero_id': "CAST(numero as INTEGER)"}).order_by(
- 'tipo', 'ano','numero_id').values_list('numero', 'ano').last()
+ norma = NormaJuridica.objects.filter(**param).order_by(
+ 'tipo', 'ano').values_list('numero', 'ano').last()
if norma:
- response = JsonResponse({'numero': int(norma[0]) + 1,
+ response = JsonResponse({'numero': int(re.sub("[^0-9].*", '', norma[0])) + 1,
'ano': norma[1]})
else:
response = JsonResponse(
| {"golden_diff": "diff --git a/sapl/norma/views.py b/sapl/norma/views.py\n--- a/sapl/norma/views.py\n+++ b/sapl/norma/views.py\n@@ -1,4 +1,5 @@\n \n+import re\n import weasyprint\n from django.contrib.auth.mixins import PermissionRequiredMixin\n from django.core.exceptions import ObjectDoesNotExist\n@@ -201,14 +202,12 @@\n def recuperar_numero_norma(request):\n tipo = TipoNormaJuridica.objects.get(pk=request.GET['tipo'])\n ano = request.GET.get('ano', '')\n-\n param = {'tipo': tipo}\n param['ano'] = ano if ano else timezone.now().year\n- norma = NormaJuridica.objects.filter(**param).extra(\n- {'numero_id': \"CAST(numero as INTEGER)\"}).order_by(\n- 'tipo', 'ano','numero_id').values_list('numero', 'ano').last()\n+ norma = NormaJuridica.objects.filter(**param).order_by(\n+ 'tipo', 'ano').values_list('numero', 'ano').last()\n if norma:\n- response = JsonResponse({'numero': int(norma[0]) + 1,\n+ response = JsonResponse({'numero': int(re.sub(\"[^0-9].*\", '', norma[0])) + 1,\n 'ano': norma[1]})\n else:\n response = JsonResponse(\n", "issue": "Auto-numera\u00e7\u00e3o de Norma Jur\u00eddica\nRetirado do t\u00edquete n\u00ba 132457\r\n\"Bom dia,\r\nAchei muito \u00fatil a funcionalidade de numera\u00e7\u00e3o autom\u00e1tica das normas jur\u00eddicas no SAPL 3.1\r\nOutra novidade que gostei muito \u00e9 a aceita\u00e7\u00e3o de leis com letra no final, do tipo \"lei 2133A\"\r\nPor\u00e9m, quando insiro alguma lei com letra, a auto-numera\u00e7\u00e3o das leis seguintes deixa de funcionar. \r\nPe\u00e7o ent\u00e3o que, por gentileza, revisem esse problema. \r\nAtenciosamente,\r\nMarcos F. Scher.\"\n", "before_files": [{"content": "\nimport weasyprint\nfrom django.contrib.auth.mixins import PermissionRequiredMixin\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.core.urlresolvers import reverse\nfrom django.http import HttpResponse, JsonResponse\nfrom django.template import RequestContext, loader\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.views.generic import TemplateView, UpdateView\nfrom django.views.generic.base import RedirectView\nfrom django.views.generic.edit import FormView\nfrom django_filters.views import FilterView\n\nfrom sapl.base.models import AppConfig\nfrom sapl.compilacao.views import IntegracaoTaView\nfrom sapl.crud.base import (RP_DETAIL, RP_LIST, Crud, CrudAux,\n MasterDetailCrud, make_pagination)\nfrom sapl.utils import show_results_filter_set\n\nfrom .forms import (NormaFilterSet, NormaJuridicaForm,\n NormaPesquisaSimplesForm, NormaRelacionadaForm)\nfrom .models import (AssuntoNorma, NormaJuridica, NormaRelacionada,\n TipoNormaJuridica, TipoVinculoNormaJuridica)\n\n# LegislacaoCitadaCrud = Crud.build(LegislacaoCitada, '')\nAssuntoNormaCrud = CrudAux.build(AssuntoNorma, 'assunto_norma_juridica',\n list_field_names=['assunto', 'descricao'])\n\n\nTipoNormaCrud = CrudAux.build(\n TipoNormaJuridica, 'tipo_norma_juridica',\n list_field_names=['sigla', 'descricao', 'equivalente_lexml'])\nTipoVinculoNormaJuridicaCrud = CrudAux.build(\n TipoVinculoNormaJuridica, '',\n list_field_names=['sigla', 'descricao_ativa', 'descricao_passiva'])\n\n\nclass NormaRelacionadaCrud(MasterDetailCrud):\n model = NormaRelacionada\n parent_field = 'norma_principal'\n help_topic = 'norma_juridica'\n\n class BaseMixin(MasterDetailCrud.BaseMixin):\n list_field_names = ['norma_relacionada', 'tipo_vinculo']\n\n class CreateView(MasterDetailCrud.CreateView):\n form_class = NormaRelacionadaForm\n\n class 
UpdateView(MasterDetailCrud.UpdateView):\n form_class = NormaRelacionadaForm\n\n def get_initial(self):\n initial = super(UpdateView, self).get_initial()\n initial['tipo'] = self.object.norma_relacionada.tipo.id\n initial['numero'] = self.object.norma_relacionada.numero\n initial['ano'] = self.object.norma_relacionada.ano\n initial['ementa'] = self.object.norma_relacionada.ementa\n return initial\n\n class DetailView(MasterDetailCrud.DetailView):\n\n layout_key = 'NormaRelacionadaDetail'\n\n\nclass NormaPesquisaView(FilterView):\n model = NormaJuridica\n filterset_class = NormaFilterSet\n paginate_by = 10\n\n def get_queryset(self):\n qs = super().get_queryset()\n\n qs.select_related('tipo', 'materia')\n\n return qs\n\n def get_context_data(self, **kwargs):\n context = super(NormaPesquisaView, self).get_context_data(**kwargs)\n\n context['title'] = _('Pesquisar Norma Jur\u00eddica')\n\n qr = self.request.GET.copy()\n\n if 'page' in qr:\n del qr['page']\n\n paginator = context['paginator']\n page_obj = context['page_obj']\n\n context['page_range'] = make_pagination(\n page_obj.number, paginator.num_pages)\n\n context['filter_url'] = ('&' + qr.urlencode()) if len(qr) > 0 else ''\n\n context['show_results'] = show_results_filter_set(qr)\n\n return context\n\n\nclass NormaTaView(IntegracaoTaView):\n model = NormaJuridica\n model_type_foreignkey = TipoNormaJuridica\n map_fields = {\n 'data': 'data',\n 'ementa': 'ementa',\n 'observacao': 'observacao',\n 'numero': 'numero',\n 'ano': 'ano',\n }\n\n map_funcs = {\n 'publicacao_func': True\n }\n\n def get(self, request, *args, **kwargs):\n \"\"\"\n Para manter a app compilacao isolada das outras aplica\u00e7\u00f5es,\n este get foi implementado para tratar uma prerrogativa externa\n de usu\u00e1rio.\n \"\"\"\n if AppConfig.attr('texto_articulado_norma'):\n return IntegracaoTaView.get(self, request, *args, **kwargs)\n else:\n return self.get_redirect_deactivated()\n\n\nclass NormaCrud(Crud):\n model = NormaJuridica\n help_topic = 'norma_juridica'\n public = [RP_LIST, RP_DETAIL]\n\n class BaseMixin(Crud.BaseMixin):\n list_field_names = ['tipo', 'numero', 'ano', 'ementa']\n\n list_url = ''\n\n @property\n def search_url(self):\n namespace = self.model._meta.app_config.name\n return reverse('%s:%s' % (namespace, 'norma_pesquisa'))\n\n class DetailView(Crud.DetailView):\n pass\n\n class DeleteView(Crud.DeleteView):\n\n def get_success_url(self):\n return self.search_url\n\n class CreateView(Crud.CreateView):\n form_class = NormaJuridicaForm\n\n @property\n def cancel_url(self):\n return self.search_url\n\n layout_key = 'NormaJuridicaCreate'\n\n class ListView(Crud.ListView, RedirectView):\n\n def get_redirect_url(self, *args, **kwargs):\n namespace = self.model._meta.app_config.name\n return reverse('%s:%s' % (namespace, 'norma_pesquisa'))\n\n def get(self, request, *args, **kwargs):\n return RedirectView.get(self, request, *args, **kwargs)\n\n class UpdateView(Crud.UpdateView):\n form_class = NormaJuridicaForm\n\n layout_key = 'NormaJuridicaCreate'\n\n def get_initial(self):\n initial = super(UpdateView, self).get_initial()\n norma = NormaJuridica.objects.get(id=self.kwargs['pk'])\n if norma.materia:\n initial['tipo_materia'] = norma.materia.tipo\n initial['ano_materia'] = norma.materia.ano\n initial['numero_materia'] = norma.materia.numero\n return initial\n\n\ndef recuperar_norma(request):\n tipo = TipoNormaJuridica.objects.get(pk=request.GET['tipo'])\n numero = request.GET['numero']\n ano = request.GET['ano']\n\n try:\n norma = 
NormaJuridica.objects.get(tipo=tipo,\n ano=ano,\n numero=numero)\n response = JsonResponse({'ementa': norma.ementa,\n 'id': norma.id})\n except ObjectDoesNotExist:\n response = JsonResponse({'ementa': '', 'id': 0})\n\n return response\n\n\ndef recuperar_numero_norma(request):\n tipo = TipoNormaJuridica.objects.get(pk=request.GET['tipo'])\n ano = request.GET.get('ano', '')\n\n param = {'tipo': tipo}\n param['ano'] = ano if ano else timezone.now().year\n norma = NormaJuridica.objects.filter(**param).extra(\n {'numero_id': \"CAST(numero as INTEGER)\"}).order_by(\n 'tipo', 'ano','numero_id').values_list('numero', 'ano').last()\n if norma:\n response = JsonResponse({'numero': int(norma[0]) + 1,\n 'ano': norma[1]})\n else:\n response = JsonResponse(\n {'numero': 1, 'ano': ano})\n\n return response\n\n\nclass ImpressosView(PermissionRequiredMixin, TemplateView):\n template_name = 'materia/impressos/impressos.html'\n permission_required = ('materia.can_access_impressos', )\n\n\ndef gerar_pdf_impressos(request, context, template_name):\n template = loader.get_template(template_name)\n html = template.render(RequestContext(request, context))\n pdf = weasyprint.HTML(string=html, base_url=request.build_absolute_uri()\n ).write_pdf()\n\n response = HttpResponse(pdf, content_type='application/pdf')\n response['Content-Disposition'] = (\n 'inline; filename=\"relatorio_impressos.pdf\"')\n response['Content-Transfer-Encoding'] = 'binary'\n\n return response\n\n\nclass NormaPesquisaSimplesView(PermissionRequiredMixin, FormView):\n form_class = NormaPesquisaSimplesForm\n template_name = 'materia/impressos/norma.html'\n permission_required = ('materia.can_access_impressos', )\n\n def form_valid(self, form):\n normas = NormaJuridica.objects.all().order_by(\n 'numero')\n template_norma = 'materia/impressos/normas_pdf.html'\n\n titulo = form.cleaned_data['titulo']\n\n if form.cleaned_data['tipo_norma']:\n normas = normas.filter(tipo=form.cleaned_data['tipo_norma'])\n\n if form.cleaned_data['data_inicial']:\n normas = normas.filter(\n data__gte=form.cleaned_data['data_inicial'],\n data__lte=form.cleaned_data['data_final'])\n\n qtd_resultados = len(normas)\n if qtd_resultados > 2000:\n normas = normas[:2000]\n\n context = {'quantidade': qtd_resultados,\n 'titulo': titulo,\n 'normas': normas}\n\n return gerar_pdf_impressos(self.request, context, template_norma)\n", "path": "sapl/norma/views.py"}]} | 3,528 | 313 |
gh_patches_debug_14947 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-751 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
RegexHandler error with stickers, voices, images...
RegexHandler does not check if update.effective_message.text exists.
### Steps to reproduce
1. Add a RegexHandler
2. Run the bot
3. Send a sticker
### Expected behaviour
The handler should not capture the sticker
### Actual behaviour
The handler captures the sticker and raises an error
### Configuration
Does not matter
**Version of Python, python-telegram-bot & dependencies:**
``python-telegram-bot 7.0.0``
### Logs
2017-07-26 14:02:47,301 - telegram.ext.dispatcher - ERROR - An uncaught error was raised while processing the update
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/telegram/ext/dispatcher.py", line 269, in process_update
if handler.check_update(update):
File "/usr/local/lib/python3.5/dist-packages/telegram/ext/regexhandler.py", line 150, in check_update
match = re.match(self.pattern, update.effective_message.text)
File "/usr/lib/python3.5/re.py", line 163, in match
return _compile(pattern, flags).match(string)
TypeError: expected string or bytes-like object
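
For illustration only (a minimal sketch, not the project's actual patch): the crash comes from calling `re.match` on an update whose effective message carries no `text`, which is the case for stickers, voices and images. The helper name `pattern_matches_update_text` and the sample pattern below are made up for this example.

```python
import re

pattern = re.compile(r"hello")

def pattern_matches_update_text(text):
    """Return True only when the update actually carries text that matches."""
    if text is None:  # stickers, voices, photos etc. have no .text
        return False
    return bool(pattern.match(text))

# A sticker update (text is None) is skipped instead of crashing re.match():
assert pattern_matches_update_text(None) is False
assert pattern_matches_update_text("hello world") is True
```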
</issue>
<code>
[start of telegram/ext/regexhandler.py]
1 #!/usr/bin/env python
2 #
3 # A library that provides a Python interface to the Telegram Bot API
4 # Copyright (C) 2015-2017
5 # Leandro Toledo de Souza <[email protected]>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Lesser Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Lesser Public License for more details.
16 #
17 # You should have received a copy of the GNU Lesser Public License
18 # along with this program. If not, see [http://www.gnu.org/licenses/].
19 # TODO: Remove allow_edited
20 """ This module contains the RegexHandler class """
21
22 import re
23 import warnings
24
25 from future.utils import string_types
26
27 from telegram import Update
28 from .handler import Handler
29
30
31 class RegexHandler(Handler):
32 """
33 Handler class to handle Telegram updates based on a regex. It uses a
34 regular expression to check text messages. Read the documentation of the
35 ``re`` module for more information. The ``re.match`` function is used to
36 determine if an update should be handled by this handler.
37
38 Attributes:
39 pattern (:obj:`str` | :obj:`Pattern`): The regex pattern.
40 callback (:obj:`callable`): The callback function for this handler.
41 pass_groups (:obj:`bool`): Optional. Determines whether ``groups`` will be passed to the
42 callback function.
43 pass_groupdict (:obj:`bool`): Optional. Determines whether ``groupdict``. will be passed to
44 the callback function.
45 pass_update_queue (:obj:`bool`): Optional. Determines whether ``update_queue`` will be
46 passed to the callback function.
47 pass_job_queue (:obj:`bool`): Optional. Determines whether ``job_queue`` will be passed to
48 the callback function.
49 pass_user_data (:obj:`bool`): Optional. Determines whether ``user_data`` will be passed to
50 the callback function.
51 pass_chat_data (:obj:`bool`): Optional. Determines whether ``chat_data`` will be passed to
52 the callback function.
53
54 Note:
55 :attr:`pass_user_data` and :attr:`pass_chat_data` determine whether a ``dict`` you
56 can use to keep any data in will be sent to the :attr:`callback` function.. Related to
57 either the user or the chat that the update was sent in. For each update from the same user
58 or in the same chat, it will be the same ``dict``.
59
60 Args:
61 pattern (:obj:`str` | :obj:`Pattern`): The regex pattern.
62 callback (:obj:`callable`): A function that takes ``bot, update`` as positional arguments.
63 It will be called when the :attr:`check_update` has determined that an update should be
64 processed by this handler.
65 pass_groups (:obj:`bool`, optional): If the callback should be passed the result of
66 ``re.match(pattern, data).groups()`` as a keyword argument called ``groups``.
67 Default is ``False``
68 pass_groupdict (:obj:`bool`, optional): If the callback should be passed the result of
69 ``re.match(pattern, data).groupdict()`` as a keyword argument called ``groupdict``.
70 Default is ``False``
71 pass_update_queue (:obj:`bool`, optional): If set to ``True``, a keyword argument called
72 ``update_queue`` will be passed to the callback function. It will be the ``Queue``
73 instance used by the :class:`telegram.ext.Updater` and :class:`telegram.ext.Dispatcher`
74 that contains new updates which can be used to insert updates. Default is ``False``.
75 pass_job_queue (:obj:`bool`, optional): If set to ``True``, a keyword argument called
76 ``job_queue`` will be passed to the callback function. It will be a
77 :class:`telegram.ext.JobQueue` instance created by the :class:`telegram.ext.Updater`
78 which can be used to schedule new jobs. Default is ``False``.
79 pass_user_data (:obj:`bool`, optional): If set to ``True``, a keyword argument called
80 ``user_data`` will be passed to the callback function. Default is ``False``.
81 pass_chat_data (:obj:`bool`, optional): If set to ``True``, a keyword argument called
82 ``chat_data`` will be passed to the callback function. Default is ``False``.
83 message_updates (:obj:`bool`, optional): Should "normal" message updates be handled?
84 Default is ``True``.
85 channel_post_updates (:obj:`bool`, optional): Should channel posts updates be handled?
86 Default is ``True``.
87 edited_updates (:obj:`bool`, optional): Should "edited" message updates be handled? Default
88 is ``False``.
89 allow_edited (:obj:`bool`, optional): If the handler should also accept edited messages.
90 Default is ``False`` - Deprecated. use edited_updates instead.
91
92 Raises:
93 ValueError
94 """
95
96 def __init__(self,
97 pattern,
98 callback,
99 pass_groups=False,
100 pass_groupdict=False,
101 pass_update_queue=False,
102 pass_job_queue=False,
103 pass_user_data=False,
104 pass_chat_data=False,
105 allow_edited=False,
106 message_updates=True,
107 channel_post_updates=False,
108 edited_updates=False
109 ):
110 if not message_updates and not channel_post_updates and not edited_updates:
111 raise ValueError(
112 'message_updates, channel_post_updates and edited_updates are all False')
113 if allow_edited:
114 warnings.warn('allow_edited is getting deprecated, please use edited_updates instead')
115 edited_updates = allow_edited
116
117 super(RegexHandler, self).__init__(
118 callback,
119 pass_update_queue=pass_update_queue,
120 pass_job_queue=pass_job_queue,
121 pass_user_data=pass_user_data,
122 pass_chat_data=pass_chat_data)
123
124 if isinstance(pattern, string_types):
125 pattern = re.compile(pattern)
126
127 self.pattern = pattern
128 self.pass_groups = pass_groups
129 self.pass_groupdict = pass_groupdict
130 self.allow_edited = allow_edited
131 self.message_updates = message_updates
132 self.channel_post_updates = channel_post_updates
133 self.edited_updates = edited_updates
134
135 def check_update(self, update):
136 """
137 Determines whether an update should be passed to this handlers :attr:`callback`.
138
139 Args:
140 update (:class:`telegram.Update`): Incoming telegram update.
141
142 Returns:
143 :obj:`bool`
144 """
145
146 if any([(self.message_updates and update.message),
147 (self.edited_updates and update.edited_message),
148 (self.channel_post_updates and update.channel_post)]) and (
149 isinstance(update, Update)):
150 match = re.match(self.pattern, update.effective_message.text)
151 return bool(match)
152 return False
153
154 def handle_update(self, update, dispatcher):
155 """
156 Send the update to the :attr:`callback`.
157
158 Args:
159 update (:class:`telegram.Update`): Incoming telegram update.
160 dispatcher (:class:`telegram.ext.Dispatcher`): Dispatcher that originated the Update.
161 """
162
163 optional_args = self.collect_optional_args(dispatcher, update)
164 match = re.match(self.pattern, update.effective_message.text)
165
166 if self.pass_groups:
167 optional_args['groups'] = match.groups()
168 if self.pass_groupdict:
169 optional_args['groupdict'] = match.groupdict()
170
171 return self.callback(dispatcher.bot, update, **optional_args)
172
[end of telegram/ext/regexhandler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/telegram/ext/regexhandler.py b/telegram/ext/regexhandler.py
--- a/telegram/ext/regexhandler.py
+++ b/telegram/ext/regexhandler.py
@@ -142,11 +142,12 @@
Returns:
:obj:`bool`
"""
-
+ if not isinstance(update, Update) and not update.effective_message:
+ return False
if any([(self.message_updates and update.message),
(self.edited_updates and update.edited_message),
- (self.channel_post_updates and update.channel_post)]) and (
- isinstance(update, Update)):
+ (self.channel_post_updates and update.channel_post)]) and \
+ update.effective_message.text:
match = re.match(self.pattern, update.effective_message.text)
return bool(match)
return False
| {"golden_diff": "diff --git a/telegram/ext/regexhandler.py b/telegram/ext/regexhandler.py\n--- a/telegram/ext/regexhandler.py\n+++ b/telegram/ext/regexhandler.py\n@@ -142,11 +142,12 @@\n Returns:\n :obj:`bool`\n \"\"\"\n-\n+ if not isinstance(update, Update) and not update.effective_message:\n+ return False\n if any([(self.message_updates and update.message),\n (self.edited_updates and update.edited_message),\n- (self.channel_post_updates and update.channel_post)]) and (\n- isinstance(update, Update)):\n+ (self.channel_post_updates and update.channel_post)]) and \\\n+ update.effective_message.text:\n match = re.match(self.pattern, update.effective_message.text)\n return bool(match)\n return False\n", "issue": "RegexHandler error with stickers, voices, images...\nRegexHandler does not check if update.effective_message.text exists.\r\n\r\n### Steps to reproduce\r\n1. Add a RegexHandler\r\n2. Run the bot\r\n3. Send a sticker\r\n\r\n### Expected behaviour\r\nThe handler should not capture the sticker\r\n\r\n### Actual behaviour\r\nThe handler capture the sticker and gives an error\r\n\r\n### Configuration\r\nDoes not matter\r\n\r\n\r\n**Version of Python, python-telegram-bot & dependencies:**\r\n\r\n``python-telegram-bot 7.0.0``\r\n\r\n### Logs\r\n2017-07-26 14:02:47,301 - telegram.ext.dispatcher - ERROR - An uncaught error was raised while processing the update\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.5/dist-packages/telegram/ext/dispatcher.py\", line 269, in process_update\r\n if handler.check_update(update):\r\n File \"/usr/local/lib/python3.5/dist-packages/telegram/ext/regexhandler.py\", line 150, in check_update\r\n match = re.match(self.pattern, update.effective_message.text)\r\n File \"/usr/lib/python3.5/re.py\", line 163, in match\r\n return _compile(pattern, flags).match(string)\r\nTypeError: expected string or bytes-like object\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2017\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n# TODO: Remove allow_edited\n\"\"\" This module contains the RegexHandler class \"\"\"\n\nimport re\nimport warnings\n\nfrom future.utils import string_types\n\nfrom telegram import Update\nfrom .handler import Handler\n\n\nclass RegexHandler(Handler):\n \"\"\"\n Handler class to handle Telegram updates based on a regex. It uses a\n regular expression to check text messages. Read the documentation of the\n ``re`` module for more information. The ``re.match`` function is used to\n determine if an update should be handled by this handler.\n\n Attributes:\n pattern (:obj:`str` | :obj:`Pattern`): The regex pattern.\n callback (:obj:`callable`): The callback function for this handler.\n pass_groups (:obj:`bool`): Optional. 
Determines whether ``groups`` will be passed to the\n callback function.\n pass_groupdict (:obj:`bool`): Optional. Determines whether ``groupdict``. will be passed to\n the callback function.\n pass_update_queue (:obj:`bool`): Optional. Determines whether ``update_queue`` will be\n passed to the callback function.\n pass_job_queue (:obj:`bool`): Optional. Determines whether ``job_queue`` will be passed to\n the callback function.\n pass_user_data (:obj:`bool`): Optional. Determines whether ``user_data`` will be passed to\n the callback function.\n pass_chat_data (:obj:`bool`): Optional. Determines whether ``chat_data`` will be passed to\n the callback function.\n\n Note:\n :attr:`pass_user_data` and :attr:`pass_chat_data` determine whether a ``dict`` you\n can use to keep any data in will be sent to the :attr:`callback` function.. Related to\n either the user or the chat that the update was sent in. For each update from the same user\n or in the same chat, it will be the same ``dict``.\n\n Args:\n pattern (:obj:`str` | :obj:`Pattern`): The regex pattern.\n callback (:obj:`callable`): A function that takes ``bot, update`` as positional arguments.\n It will be called when the :attr:`check_update` has determined that an update should be\n processed by this handler.\n pass_groups (:obj:`bool`, optional): If the callback should be passed the result of\n ``re.match(pattern, data).groups()`` as a keyword argument called ``groups``.\n Default is ``False``\n pass_groupdict (:obj:`bool`, optional): If the callback should be passed the result of\n ``re.match(pattern, data).groupdict()`` as a keyword argument called ``groupdict``.\n Default is ``False``\n pass_update_queue (:obj:`bool`, optional): If set to ``True``, a keyword argument called\n ``update_queue`` will be passed to the callback function. It will be the ``Queue``\n instance used by the :class:`telegram.ext.Updater` and :class:`telegram.ext.Dispatcher`\n that contains new updates which can be used to insert updates. Default is ``False``.\n pass_job_queue (:obj:`bool`, optional): If set to ``True``, a keyword argument called\n ``job_queue`` will be passed to the callback function. It will be a\n :class:`telegram.ext.JobQueue` instance created by the :class:`telegram.ext.Updater`\n which can be used to schedule new jobs. Default is ``False``.\n pass_user_data (:obj:`bool`, optional): If set to ``True``, a keyword argument called\n ``user_data`` will be passed to the callback function. Default is ``False``.\n pass_chat_data (:obj:`bool`, optional): If set to ``True``, a keyword argument called\n ``chat_data`` will be passed to the callback function. Default is ``False``.\n message_updates (:obj:`bool`, optional): Should \"normal\" message updates be handled?\n Default is ``True``.\n channel_post_updates (:obj:`bool`, optional): Should channel posts updates be handled?\n Default is ``True``.\n edited_updates (:obj:`bool`, optional): Should \"edited\" message updates be handled? Default\n is ``False``.\n allow_edited (:obj:`bool`, optional): If the handler should also accept edited messages.\n Default is ``False`` - Deprecated. 
use edited_updates instead.\n\n Raises:\n ValueError\n \"\"\"\n\n def __init__(self,\n pattern,\n callback,\n pass_groups=False,\n pass_groupdict=False,\n pass_update_queue=False,\n pass_job_queue=False,\n pass_user_data=False,\n pass_chat_data=False,\n allow_edited=False,\n message_updates=True,\n channel_post_updates=False,\n edited_updates=False\n ):\n if not message_updates and not channel_post_updates and not edited_updates:\n raise ValueError(\n 'message_updates, channel_post_updates and edited_updates are all False')\n if allow_edited:\n warnings.warn('allow_edited is getting deprecated, please use edited_updates instead')\n edited_updates = allow_edited\n\n super(RegexHandler, self).__init__(\n callback,\n pass_update_queue=pass_update_queue,\n pass_job_queue=pass_job_queue,\n pass_user_data=pass_user_data,\n pass_chat_data=pass_chat_data)\n\n if isinstance(pattern, string_types):\n pattern = re.compile(pattern)\n\n self.pattern = pattern\n self.pass_groups = pass_groups\n self.pass_groupdict = pass_groupdict\n self.allow_edited = allow_edited\n self.message_updates = message_updates\n self.channel_post_updates = channel_post_updates\n self.edited_updates = edited_updates\n\n def check_update(self, update):\n \"\"\"\n Determines whether an update should be passed to this handlers :attr:`callback`.\n\n Args:\n update (:class:`telegram.Update`): Incoming telegram update.\n\n Returns:\n :obj:`bool`\n \"\"\"\n\n if any([(self.message_updates and update.message),\n (self.edited_updates and update.edited_message),\n (self.channel_post_updates and update.channel_post)]) and (\n isinstance(update, Update)):\n match = re.match(self.pattern, update.effective_message.text)\n return bool(match)\n return False\n\n def handle_update(self, update, dispatcher):\n \"\"\"\n Send the update to the :attr:`callback`.\n\n Args:\n update (:class:`telegram.Update`): Incoming telegram update.\n dispatcher (:class:`telegram.ext.Dispatcher`): Dispatcher that originated the Update.\n \"\"\"\n\n optional_args = self.collect_optional_args(dispatcher, update)\n match = re.match(self.pattern, update.effective_message.text)\n\n if self.pass_groups:\n optional_args['groups'] = match.groups()\n if self.pass_groupdict:\n optional_args['groupdict'] = match.groupdict()\n\n return self.callback(dispatcher.bot, update, **optional_args)\n", "path": "telegram/ext/regexhandler.py"}]} | 2,884 | 178 |
gh_patches_debug_34803 | rasdani/github-patches | git_diff | learningequality__kolibri-5393 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Auto sign-out from tablets during normal class
**Observed behavior**
Kids are signed out of their tabs automatically while watching videos or doing quizzes
**Expected behavior**
The user should not be signed out automatically.
**User-facing consequences**
While the kids are in the middle of any exercise or video, they are automatically signed out. They then need to log in again to continue the session
**Steps to reproduce**
This happens in some of the tabs during a normal classroom session.
**Context**
Kolibri version : Kolibri 0.12.2
Operating system : ubuntu 14.04
Browser : Chrome

</issue>
<code>
[start of kolibri/core/logger/api.py]
1 from django.core.exceptions import ObjectDoesNotExist
2 from django.db.models.query import F
3 from django.http import Http404
4 from django_filters import ModelChoiceFilter
5 from django_filters.rest_framework import CharFilter
6 from django_filters.rest_framework import DjangoFilterBackend
7 from django_filters.rest_framework import FilterSet
8 from rest_framework import filters
9 from rest_framework import viewsets
10 from rest_framework.response import Response
11
12 from .models import AttemptLog
13 from .models import ContentSessionLog
14 from .models import ContentSummaryLog
15 from .models import ExamAttemptLog
16 from .models import ExamLog
17 from .models import MasteryLog
18 from .models import UserSessionLog
19 from .permissions import ExamActivePermissions
20 from .serializers import AttemptLogSerializer
21 from .serializers import ContentSessionLogSerializer
22 from .serializers import ContentSummaryLogSerializer
23 from .serializers import ExamAttemptLogSerializer
24 from .serializers import ExamLogSerializer
25 from .serializers import MasteryLogSerializer
26 from .serializers import TotalContentProgressSerializer
27 from .serializers import UserSessionLogSerializer
28 from kolibri.core.auth.api import KolibriAuthPermissions
29 from kolibri.core.auth.api import KolibriAuthPermissionsFilter
30 from kolibri.core.auth.filters import HierarchyRelationsFilter
31 from kolibri.core.auth.models import Classroom
32 from kolibri.core.auth.models import Collection
33 from kolibri.core.auth.models import Facility
34 from kolibri.core.auth.models import FacilityUser
35 from kolibri.core.auth.models import LearnerGroup
36 from kolibri.core.content.api import OptionalPageNumberPagination
37 from kolibri.core.exams.models import Exam
38
39
40 class BaseLogFilter(FilterSet):
41 facility = ModelChoiceFilter(method="filter_facility", queryset=Facility.objects.all())
42 classroom = ModelChoiceFilter(method="filter_classroom", queryset=Classroom.objects.all())
43 learner_group = ModelChoiceFilter(method="filter_learner_group", queryset=LearnerGroup.objects.all())
44
45 # Only a superuser can filter by facilities
46 def filter_facility(self, queryset, name, value):
47 return queryset.filter(user__facility=value)
48
49 def filter_classroom(self, queryset, name, value):
50 return HierarchyRelationsFilter(queryset).filter_by_hierarchy(
51 ancestor_collection=value,
52 target_user=F("user"),
53 )
54
55 def filter_learner_group(self, queryset, name, value):
56 return HierarchyRelationsFilter(queryset).filter_by_hierarchy(
57 ancestor_collection=value,
58 target_user=F("user"),
59 )
60
61
62 class LoggerViewSet(viewsets.ModelViewSet):
63 def update(self, request, *args, **kwargs):
64 partial = kwargs.pop('partial', False)
65 model = self.queryset.model
66 lookup_url_kwarg = self.lookup_url_kwarg or self.lookup_field
67 try:
68 instance = model.objects.get(id=self.kwargs[lookup_url_kwarg])
69 self.check_object_permissions(request, instance)
70 except (ValueError, ObjectDoesNotExist):
71 raise Http404
72 serializer = self.get_serializer(instance, data=request.data, partial=partial)
73 serializer.is_valid(raise_exception=True)
74 self.perform_update(serializer)
75
76 if getattr(instance, '_prefetched_objects_cache', None):
77 # If 'prefetch_related' has been applied to a queryset, we need to
78 # forcibly invalidate the prefetch cache on the instance.
79 instance._prefetched_objects_cache = {}
80 default_response = dict(request.data)
81 # First look if the computed fields to be updated are listed:
82 updating_fields = getattr(serializer.root, 'update_fields', None)
83 # If not, fetch all the fields that are computed methods:
84 if updating_fields is None:
85 updating_fields = [field for field in serializer.fields if getattr(serializer.fields[field], 'method_name', None)]
86 for field in updating_fields:
87 method_name = getattr(serializer.fields[field], 'method_name', None)
88 if method_name:
89 method = getattr(serializer.root, method_name)
90 default_response[field] = method(instance)
91 return Response(default_response)
92
93
94 class ContentSessionLogFilter(BaseLogFilter):
95
96 class Meta:
97 model = ContentSessionLog
98 fields = ['user_id', 'content_id']
99
100
101 class ContentSessionLogViewSet(LoggerViewSet):
102 permission_classes = (KolibriAuthPermissions,)
103 filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)
104 queryset = ContentSessionLog.objects.all()
105 serializer_class = ContentSessionLogSerializer
106 pagination_class = OptionalPageNumberPagination
107 filter_class = ContentSessionLogFilter
108
109
110 class ContentSummaryLogFilter(BaseLogFilter):
111
112 class Meta:
113 model = ContentSummaryLog
114 fields = ['user_id', 'content_id']
115
116
117 class ContentSummaryLogViewSet(LoggerViewSet):
118 permission_classes = (KolibriAuthPermissions,)
119 filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)
120 queryset = ContentSummaryLog.objects.all()
121 serializer_class = ContentSummaryLogSerializer
122 pagination_class = OptionalPageNumberPagination
123 filter_class = ContentSummaryLogFilter
124
125
126 class TotalContentProgressViewSet(viewsets.ModelViewSet):
127 permission_classes = (KolibriAuthPermissions,)
128 filter_backends = (KolibriAuthPermissionsFilter,)
129 queryset = FacilityUser.objects.all()
130 serializer_class = TotalContentProgressSerializer
131
132
133 class UserSessionLogFilter(BaseLogFilter):
134
135 class Meta:
136 model = UserSessionLog
137 fields = ['user_id']
138
139
140 class UserSessionLogViewSet(LoggerViewSet):
141 permission_classes = (KolibriAuthPermissions,)
142 filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)
143 queryset = UserSessionLog.objects.all()
144 serializer_class = UserSessionLogSerializer
145 pagination_class = OptionalPageNumberPagination
146 filter_class = UserSessionLogFilter
147
148
149 class MasteryFilter(FilterSet):
150
151 class Meta:
152 model = MasteryLog
153 fields = ['summarylog']
154
155
156 class MasteryLogViewSet(LoggerViewSet):
157 permission_classes = (KolibriAuthPermissions,)
158 filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)
159 queryset = MasteryLog.objects.all()
160 serializer_class = MasteryLogSerializer
161 pagination_class = OptionalPageNumberPagination
162 filter_class = MasteryFilter
163
164
165 class AttemptFilter(BaseLogFilter):
166 content = CharFilter(method="filter_content")
167
168 def filter_content(self, queryset, name, value):
169 return queryset.filter(masterylog__summarylog__content_id=value)
170
171 class Meta:
172 model = AttemptLog
173 fields = ['masterylog', 'complete', 'user', 'content', 'item']
174
175
176 class AttemptLogViewSet(LoggerViewSet):
177 permission_classes = (KolibriAuthPermissions,)
178 filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend, filters.OrderingFilter)
179 queryset = AttemptLog.objects.all()
180 serializer_class = AttemptLogSerializer
181 pagination_class = OptionalPageNumberPagination
182 filter_class = AttemptFilter
183 ordering_fields = ('end_timestamp',)
184 ordering = ('end_timestamp',)
185
186
187 class ExamAttemptFilter(BaseLogFilter):
188 exam = ModelChoiceFilter(method="filter_exam", queryset=Exam.objects.all())
189 user = ModelChoiceFilter(method="filter_user", queryset=FacilityUser.objects.all())
190 content = CharFilter(field_name="content_id")
191
192 def filter_exam(self, queryset, name, value):
193 return queryset.filter(examlog__exam=value)
194
195 def filter_user(self, queryset, name, value):
196 return queryset.filter(examlog__user=value)
197
198 class Meta:
199 model = ExamAttemptLog
200 fields = ['examlog', 'exam', 'user', 'content', 'item']
201
202
203 class ExamAttemptLogViewSet(LoggerViewSet):
204 permission_classes = (ExamActivePermissions, KolibriAuthPermissions, )
205 filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend, filters.OrderingFilter)
206 queryset = ExamAttemptLog.objects.all()
207 serializer_class = ExamAttemptLogSerializer
208 pagination_class = OptionalPageNumberPagination
209 filter_class = ExamAttemptFilter
210
211
212 class ExamLogFilter(BaseLogFilter):
213
214 collection = ModelChoiceFilter(method="filter_collection", queryset=Collection.objects.all())
215
216 def filter_collection(self, queryset, name, collection):
217 return HierarchyRelationsFilter(queryset).filter_by_hierarchy(
218 target_user=F('user'),
219 ancestor_collection=collection,
220 )
221
222 class Meta:
223 model = ExamLog
224 fields = ['user', 'exam']
225
226
227 class ExamLogViewSet(viewsets.ModelViewSet):
228 permission_classes = (KolibriAuthPermissions,)
229 filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)
230 queryset = ExamLog.objects.all()
231 serializer_class = ExamLogSerializer
232 pagination_class = OptionalPageNumberPagination
233 filter_class = ExamLogFilter
234
[end of kolibri/core/logger/api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kolibri/core/logger/api.py b/kolibri/core/logger/api.py
--- a/kolibri/core/logger/api.py
+++ b/kolibri/core/logger/api.py
@@ -1,12 +1,17 @@
+import logging
+
from django.core.exceptions import ObjectDoesNotExist
from django.db.models.query import F
+from django.db.utils import IntegrityError
from django.http import Http404
from django_filters import ModelChoiceFilter
from django_filters.rest_framework import CharFilter
from django_filters.rest_framework import DjangoFilterBackend
from django_filters.rest_framework import FilterSet
from rest_framework import filters
+from rest_framework import status
from rest_framework import viewsets
+from rest_framework.exceptions import ValidationError
from rest_framework.response import Response
from .models import AttemptLog
@@ -36,6 +41,8 @@
from kolibri.core.content.api import OptionalPageNumberPagination
from kolibri.core.exams.models import Exam
+logger = logging.getLogger(__name__)
+
class BaseLogFilter(FilterSet):
facility = ModelChoiceFilter(method="filter_facility", queryset=Facility.objects.all())
@@ -90,6 +97,21 @@
default_response[field] = method(instance)
return Response(default_response)
+ def create(self, request, *args, **kwargs):
+ try:
+ return super(LoggerViewSet, self).create(request, *args, **kwargs)
+ except IntegrityError:
+ # The object has been created previously: let's calculate its id and return it
+ serializer = self.get_serializer(data=request.data)
+ serializer.is_valid(raise_exception=True)
+ obj = serializer.Meta.model(**serializer.validated_data)
+ obj.id = obj.calculate_uuid()
+ final_obj = self.get_serializer(obj)
+ return Response(final_obj.data)
+ except ValidationError as e:
+ logger.error("Failed to validate data: {}".format(e))
+ return Response(request.data, status.HTTP_400_BAD_REQUEST)
+
class ContentSessionLogFilter(BaseLogFilter):
| {"golden_diff": "diff --git a/kolibri/core/logger/api.py b/kolibri/core/logger/api.py\n--- a/kolibri/core/logger/api.py\n+++ b/kolibri/core/logger/api.py\n@@ -1,12 +1,17 @@\n+import logging\n+\n from django.core.exceptions import ObjectDoesNotExist\n from django.db.models.query import F\n+from django.db.utils import IntegrityError\n from django.http import Http404\n from django_filters import ModelChoiceFilter\n from django_filters.rest_framework import CharFilter\n from django_filters.rest_framework import DjangoFilterBackend\n from django_filters.rest_framework import FilterSet\n from rest_framework import filters\n+from rest_framework import status\n from rest_framework import viewsets\n+from rest_framework.exceptions import ValidationError\n from rest_framework.response import Response\n \n from .models import AttemptLog\n@@ -36,6 +41,8 @@\n from kolibri.core.content.api import OptionalPageNumberPagination\n from kolibri.core.exams.models import Exam\n \n+logger = logging.getLogger(__name__)\n+\n \n class BaseLogFilter(FilterSet):\n facility = ModelChoiceFilter(method=\"filter_facility\", queryset=Facility.objects.all())\n@@ -90,6 +97,21 @@\n default_response[field] = method(instance)\n return Response(default_response)\n \n+ def create(self, request, *args, **kwargs):\n+ try:\n+ return super(LoggerViewSet, self).create(request, *args, **kwargs)\n+ except IntegrityError:\n+ # The object has been created previously: let's calculate its id and return it\n+ serializer = self.get_serializer(data=request.data)\n+ serializer.is_valid(raise_exception=True)\n+ obj = serializer.Meta.model(**serializer.validated_data)\n+ obj.id = obj.calculate_uuid()\n+ final_obj = self.get_serializer(obj)\n+ return Response(final_obj.data)\n+ except ValidationError as e:\n+ logger.error(\"Failed to validate data: {}\".format(e))\n+ return Response(request.data, status.HTTP_400_BAD_REQUEST)\n+\n \n class ContentSessionLogFilter(BaseLogFilter):\n", "issue": "Auto sign-out from tablets during normal class\n**Observed behavior**\r\nKids are signout out of their tabs automatically while watching videos or doing quizzes\r\n\r\n**Expected behavior**\r\nThe user should not be signed out automatically.\r\n\r\n\r\n**User-facing consequences**\r\nWhile the kids are in the middle of any exercise or video, they are being automatically signed out. 
They need to again re-login to continue the session\r\n\r\n**Steps to reproduce**\r\nThis happens in some of the tabs during a normal classroom session.\r\n\r\n\r\n**Context**\r\nKolibri version : Kolibri 0.12.2\r\nOperating system : ubuntu 14.04\r\nBrowser : Chrome\r\n\r\n\r\n\n", "before_files": [{"content": "from django.core.exceptions import ObjectDoesNotExist\nfrom django.db.models.query import F\nfrom django.http import Http404\nfrom django_filters import ModelChoiceFilter\nfrom django_filters.rest_framework import CharFilter\nfrom django_filters.rest_framework import DjangoFilterBackend\nfrom django_filters.rest_framework import FilterSet\nfrom rest_framework import filters\nfrom rest_framework import viewsets\nfrom rest_framework.response import Response\n\nfrom .models import AttemptLog\nfrom .models import ContentSessionLog\nfrom .models import ContentSummaryLog\nfrom .models import ExamAttemptLog\nfrom .models import ExamLog\nfrom .models import MasteryLog\nfrom .models import UserSessionLog\nfrom .permissions import ExamActivePermissions\nfrom .serializers import AttemptLogSerializer\nfrom .serializers import ContentSessionLogSerializer\nfrom .serializers import ContentSummaryLogSerializer\nfrom .serializers import ExamAttemptLogSerializer\nfrom .serializers import ExamLogSerializer\nfrom .serializers import MasteryLogSerializer\nfrom .serializers import TotalContentProgressSerializer\nfrom .serializers import UserSessionLogSerializer\nfrom kolibri.core.auth.api import KolibriAuthPermissions\nfrom kolibri.core.auth.api import KolibriAuthPermissionsFilter\nfrom kolibri.core.auth.filters import HierarchyRelationsFilter\nfrom kolibri.core.auth.models import Classroom\nfrom kolibri.core.auth.models import Collection\nfrom kolibri.core.auth.models import Facility\nfrom kolibri.core.auth.models import FacilityUser\nfrom kolibri.core.auth.models import LearnerGroup\nfrom kolibri.core.content.api import OptionalPageNumberPagination\nfrom kolibri.core.exams.models import Exam\n\n\nclass BaseLogFilter(FilterSet):\n facility = ModelChoiceFilter(method=\"filter_facility\", queryset=Facility.objects.all())\n classroom = ModelChoiceFilter(method=\"filter_classroom\", queryset=Classroom.objects.all())\n learner_group = ModelChoiceFilter(method=\"filter_learner_group\", queryset=LearnerGroup.objects.all())\n\n # Only a superuser can filter by facilities\n def filter_facility(self, queryset, name, value):\n return queryset.filter(user__facility=value)\n\n def filter_classroom(self, queryset, name, value):\n return HierarchyRelationsFilter(queryset).filter_by_hierarchy(\n ancestor_collection=value,\n target_user=F(\"user\"),\n )\n\n def filter_learner_group(self, queryset, name, value):\n return HierarchyRelationsFilter(queryset).filter_by_hierarchy(\n ancestor_collection=value,\n target_user=F(\"user\"),\n )\n\n\nclass LoggerViewSet(viewsets.ModelViewSet):\n def update(self, request, *args, **kwargs):\n partial = kwargs.pop('partial', False)\n model = self.queryset.model\n lookup_url_kwarg = self.lookup_url_kwarg or self.lookup_field\n try:\n instance = model.objects.get(id=self.kwargs[lookup_url_kwarg])\n self.check_object_permissions(request, instance)\n except (ValueError, ObjectDoesNotExist):\n raise Http404\n serializer = self.get_serializer(instance, data=request.data, partial=partial)\n serializer.is_valid(raise_exception=True)\n self.perform_update(serializer)\n\n if getattr(instance, '_prefetched_objects_cache', None):\n # If 'prefetch_related' has been applied to a queryset, we need 
to\n # forcibly invalidate the prefetch cache on the instance.\n instance._prefetched_objects_cache = {}\n default_response = dict(request.data)\n # First look if the computed fields to be updated are listed:\n updating_fields = getattr(serializer.root, 'update_fields', None)\n # If not, fetch all the fields that are computed methods:\n if updating_fields is None:\n updating_fields = [field for field in serializer.fields if getattr(serializer.fields[field], 'method_name', None)]\n for field in updating_fields:\n method_name = getattr(serializer.fields[field], 'method_name', None)\n if method_name:\n method = getattr(serializer.root, method_name)\n default_response[field] = method(instance)\n return Response(default_response)\n\n\nclass ContentSessionLogFilter(BaseLogFilter):\n\n class Meta:\n model = ContentSessionLog\n fields = ['user_id', 'content_id']\n\n\nclass ContentSessionLogViewSet(LoggerViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)\n queryset = ContentSessionLog.objects.all()\n serializer_class = ContentSessionLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = ContentSessionLogFilter\n\n\nclass ContentSummaryLogFilter(BaseLogFilter):\n\n class Meta:\n model = ContentSummaryLog\n fields = ['user_id', 'content_id']\n\n\nclass ContentSummaryLogViewSet(LoggerViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)\n queryset = ContentSummaryLog.objects.all()\n serializer_class = ContentSummaryLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = ContentSummaryLogFilter\n\n\nclass TotalContentProgressViewSet(viewsets.ModelViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter,)\n queryset = FacilityUser.objects.all()\n serializer_class = TotalContentProgressSerializer\n\n\nclass UserSessionLogFilter(BaseLogFilter):\n\n class Meta:\n model = UserSessionLog\n fields = ['user_id']\n\n\nclass UserSessionLogViewSet(LoggerViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)\n queryset = UserSessionLog.objects.all()\n serializer_class = UserSessionLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = UserSessionLogFilter\n\n\nclass MasteryFilter(FilterSet):\n\n class Meta:\n model = MasteryLog\n fields = ['summarylog']\n\n\nclass MasteryLogViewSet(LoggerViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)\n queryset = MasteryLog.objects.all()\n serializer_class = MasteryLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = MasteryFilter\n\n\nclass AttemptFilter(BaseLogFilter):\n content = CharFilter(method=\"filter_content\")\n\n def filter_content(self, queryset, name, value):\n return queryset.filter(masterylog__summarylog__content_id=value)\n\n class Meta:\n model = AttemptLog\n fields = ['masterylog', 'complete', 'user', 'content', 'item']\n\n\nclass AttemptLogViewSet(LoggerViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend, filters.OrderingFilter)\n queryset = AttemptLog.objects.all()\n serializer_class = AttemptLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = AttemptFilter\n ordering_fields = 
('end_timestamp',)\n ordering = ('end_timestamp',)\n\n\nclass ExamAttemptFilter(BaseLogFilter):\n exam = ModelChoiceFilter(method=\"filter_exam\", queryset=Exam.objects.all())\n user = ModelChoiceFilter(method=\"filter_user\", queryset=FacilityUser.objects.all())\n content = CharFilter(field_name=\"content_id\")\n\n def filter_exam(self, queryset, name, value):\n return queryset.filter(examlog__exam=value)\n\n def filter_user(self, queryset, name, value):\n return queryset.filter(examlog__user=value)\n\n class Meta:\n model = ExamAttemptLog\n fields = ['examlog', 'exam', 'user', 'content', 'item']\n\n\nclass ExamAttemptLogViewSet(LoggerViewSet):\n permission_classes = (ExamActivePermissions, KolibriAuthPermissions, )\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend, filters.OrderingFilter)\n queryset = ExamAttemptLog.objects.all()\n serializer_class = ExamAttemptLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = ExamAttemptFilter\n\n\nclass ExamLogFilter(BaseLogFilter):\n\n collection = ModelChoiceFilter(method=\"filter_collection\", queryset=Collection.objects.all())\n\n def filter_collection(self, queryset, name, collection):\n return HierarchyRelationsFilter(queryset).filter_by_hierarchy(\n target_user=F('user'),\n ancestor_collection=collection,\n )\n\n class Meta:\n model = ExamLog\n fields = ['user', 'exam']\n\n\nclass ExamLogViewSet(viewsets.ModelViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)\n queryset = ExamLog.objects.all()\n serializer_class = ExamLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = ExamLogFilter\n", "path": "kolibri/core/logger/api.py"}]} | 3,203 | 439 |
gh_patches_debug_33690 | rasdani/github-patches | git_diff | bridgecrewio__checkov-2027 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Delimiter in wrong position when using multi-output with empty error list
**Describe the bug**
The "--- OUTPUT DELIMITER ---" message appears below two outputs when running on a project without any errors to output. This causes problems when trying to use parsers to split the output.
**To Reproduce**
Steps to reproduce the behavior:
1. mkdir -p /tmp/checkov-bug
2. cd /tmp/checkov-bug
3. docker run -it -v $PWD:/app -w /app bridgecrew/checkov:2.0.591 -o cli -o junitxml -d .
Output is:
_ _
___| |__ ___ ___| | _______ __
/ __| '_ \ / _ \/ __| |/ / _ \ \ / /
| (__| | | | __/ (__| < (_) \ V /
\___|_| |_|\___|\___|_|\_\___/ \_/
By bridgecrew.io | version: 2.0.590
Update available 2.0.590 -> 2.0.591
Run pip3 install -U checkov to update
\<?xml version="1.0" ?\>
\<testsuites/\>
--- OUTPUT DELIMITER ---
**Expected behavior**
The expected behaviour would be for the XML snippet to be below "--- OUTPUT DELIMITER ---", since it was the second output option passed to checkov.
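
To make the breakage concrete, here is a hedged sketch of the kind of downstream parser that stops working when the delimiter lands after both outputs instead of between them. The `split_outputs` helper is hypothetical; only the delimiter string comes from checkov's `runner_registry.py`.

```python
OUTPUT_DELIMITER = "\n--- OUTPUT DELIMITER ---\n"

def split_outputs(combined):
    """Split combined multi-format checkov output into one chunk per -o format."""
    return [chunk.strip() for chunk in combined.split(OUTPUT_DELIMITER)]

# With `-o cli -o junitxml` the second chunk is expected to be the JUnit XML;
# when the delimiter is printed after both outputs, that chunk is empty instead.
chunks = split_outputs("cli banner...\n--- OUTPUT DELIMITER ---\n<testsuites/>")
assert chunks[1] == "<testsuites/>"
```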
**Screenshots**
**Desktop (please complete the following information):**
- OS: Docker image bridgecrew/checkov
- Checkov Version 2.0.591
**Additional context**
</issue>
<code>
[start of checkov/common/runners/runner_registry.py]
1 import argparse
2 import itertools
3 from json import dumps, JSONEncoder
4 from lark import Tree
5 import datetime
6 import logging
7 import os
8 from abc import abstractmethod
9 from typing import List, Union, Dict, Any, Tuple, Optional
10
11 from typing_extensions import Literal
12
13 from checkov.common.bridgecrew.integration_features.integration_feature_registry import integration_feature_registry
14 from checkov.common.output.baseline import Baseline
15 from checkov.common.output.report import Report
16 from checkov.common.runners.base_runner import BaseRunner
17 from checkov.common.util import data_structures_utils
18 from checkov.runner_filter import RunnerFilter
19 from checkov.terraform.context_parsers.registry import parser_registry
20 from checkov.terraform.runner import Runner as tf_runner
21 from checkov.terraform.parser import Parser
22 from checkov.common.parallelizer.parallel_runner import parallel_runner
23 from checkov.common.util.ext_cyclonedx_xml import ExtXml
24 from checkov.common.util.banner import tool as tool_name
25
26 CHECK_BLOCK_TYPES = frozenset(["resource", "data", "provider", "module"])
27 OUTPUT_CHOICES = ["cli", "cyclonedx", "json", "junitxml", "github_failed_only", "sarif"]
28 OUTPUT_DELIMITER = "\n--- OUTPUT DELIMITER ---\n"
29
30 class OutputEncoder(JSONEncoder):
31 def default(self, obj):
32 if isinstance(obj, set):
33 return list(obj)
34 elif isinstance(obj, Tree):
35 return str(obj)
36 elif isinstance(obj, datetime.date):
37 return str(obj)
38 return super().default(obj)
39
40 class RunnerRegistry:
41 runners: List[BaseRunner] = []
42 scan_reports: List[Report] = []
43 banner = ""
44
45 def __init__(self, banner: str, runner_filter: RunnerFilter, *runners: BaseRunner) -> None:
46 self.logger = logging.getLogger(__name__)
47 self.runner_filter = runner_filter
48 self.runners = list(runners)
49 self.banner = banner
50 self.scan_reports = []
51 self.filter_runner_framework()
52 self.tool = tool_name
53
54 @abstractmethod
55 def extract_entity_details(self, entity: Dict[str, Any]) -> Tuple[str, str, Dict[str, Any]]:
56 raise NotImplementedError()
57
58 def run(
59 self,
60 root_folder: Optional[str] = None,
61 external_checks_dir: Optional[List[str]] = None,
62 files: Optional[List[str]] = None,
63 guidelines: Optional[Dict[str, str]] = None,
64 collect_skip_comments: bool = True,
65 repo_root_for_plan_enrichment: Optional[List[Union[str, os.PathLike]]] = None,
66 ) -> List[Report]:
67 integration_feature_registry.run_pre_runner()
68 if len(self.runners) == 1:
69 reports = [self.runners[0].run(root_folder, external_checks_dir=external_checks_dir, files=files,
70 runner_filter=self.runner_filter, collect_skip_comments=collect_skip_comments)]
71 else:
72 reports = parallel_runner.run_function(
73 lambda runner: runner.run(root_folder, external_checks_dir=external_checks_dir, files=files,
74 runner_filter=self.runner_filter, collect_skip_comments=collect_skip_comments),
75 self.runners, 1)
76
77 for scan_report in reports:
78 self._handle_report(scan_report, guidelines, repo_root_for_plan_enrichment)
79 return self.scan_reports
80
81 def _handle_report(self, scan_report, guidelines, repo_root_for_plan_enrichment):
82 integration_feature_registry.run_post_runner(scan_report)
83 if guidelines:
84 RunnerRegistry.enrich_report_with_guidelines(scan_report, guidelines)
85 if repo_root_for_plan_enrichment:
86 enriched_resources = RunnerRegistry.get_enriched_resources(repo_root_for_plan_enrichment)
87 scan_report = Report("terraform_plan").enrich_plan_report(scan_report, enriched_resources)
88 scan_report = Report("terraform_plan").handle_skipped_checks(scan_report, enriched_resources)
89 self.scan_reports.append(scan_report)
90
91 def print_reports(
92 self,
93 scan_reports: List[Report],
94 config: argparse.Namespace,
95 url: Optional[str] = None,
96 created_baseline_path: Optional[str] = None,
97 baseline: Optional[Baseline] = None,
98 ) -> Literal[0, 1]:
99 output_formats = set(config.output)
100
101 if "cli" in config.output and not config.quiet:
102 print(f"{self.banner}\n")
103 exit_codes = []
104 report_jsons = []
105 sarif_reports = []
106 junit_reports = []
107 cyclonedx_reports = []
108 for report in scan_reports:
109 if not report.is_empty():
110 if "json" in config.output:
111 report_jsons.append(report.get_dict(is_quiet=config.quiet, url=url))
112 if "junitxml" in config.output:
113 junit_reports.append(report)
114 # report.print_junit_xml()
115 if "github_failed_only" in config.output:
116 report.print_failed_github_md(use_bc_ids=config.output_bc_ids)
117 if "sarif" in config.output:
118 sarif_reports.append(report)
119 if "cli" in config.output:
120 report.print_console(
121 is_quiet=config.quiet,
122 is_compact=config.compact,
123 created_baseline_path=created_baseline_path,
124 baseline=baseline,
125 use_bc_ids=config.output_bc_ids,
126 )
127 if url:
128 print("More details: {}".format(url))
129 output_formats.discard("cli")
130 if output_formats:
131 print(OUTPUT_DELIMITER)
132 if "cyclonedx" in config.output:
133 cyclonedx_reports.append(report)
134 exit_codes.append(report.get_exit_code(config.soft_fail, config.soft_fail_on, config.hard_fail_on))
135
136 if "sarif" in config.output:
137 master_report = Report("merged")
138 print(self.banner)
139 for report in sarif_reports:
140 report.print_console(
141 is_quiet=config.quiet,
142 is_compact=config.compact,
143 created_baseline_path=created_baseline_path,
144 baseline=baseline,
145 use_bc_ids=config.output_bc_ids,
146 )
147 master_report.failed_checks += report.failed_checks
148 master_report.skipped_checks += report.skipped_checks
149 if url:
150 print("More details: {}".format(url))
151 master_report.write_sarif_output(self.tool)
152 output_formats.remove("sarif")
153 if output_formats:
154 print(OUTPUT_DELIMITER)
155 if "json" in config.output:
156 if not report_jsons:
157 print(dumps(Report(None).get_summary(), indent=4))
158 elif len(report_jsons) == 1:
159 print(dumps(report_jsons[0], indent=4, cls=OutputEncoder))
160 else:
161 print(dumps(report_jsons, indent=4, cls=OutputEncoder))
162 output_formats.remove("json")
163 if output_formats:
164 print(OUTPUT_DELIMITER)
165 if "junitxml" in config.output:
166 if len(junit_reports) == 1:
167 junit_reports[0].print_junit_xml(use_bc_ids=config.output_bc_ids)
168 else:
169 master_report = Report(None)
170 for report in junit_reports:
171 master_report.skipped_checks += report.skipped_checks
172 master_report.passed_checks += report.passed_checks
173 master_report.failed_checks += report.failed_checks
174 master_report.print_junit_xml(use_bc_ids=config.output_bc_ids)
175 output_formats.remove("junitxml")
176 if output_formats:
177 print(OUTPUT_DELIMITER)
178
179 if "cyclonedx" in config.output:
180 if cyclonedx_reports:
181 # More than one Report - combine Reports first
182 report = Report(None)
183 for r in cyclonedx_reports:
184 report.passed_checks += r.passed_checks
185 report.skipped_checks += r.skipped_checks
186 report.failed_checks += r.failed_checks
187 else:
188 report = cyclonedx_reports[0]
189 cyclonedx_output = ExtXml(bom=report.get_cyclonedx_bom())
190 print(cyclonedx_output.output_as_string())
191 output_formats.remove("cyclonedx")
192 if output_formats:
193 print(OUTPUT_DELIMITER)
194
195 exit_code = 1 if 1 in exit_codes else 0
196 return exit_code
197
198 def filter_runner_framework(self) -> None:
199 if not self.runner_filter:
200 return
201 if self.runner_filter.framework is None:
202 return
203 if self.runner_filter.framework == "all":
204 return
205 self.runners = [runner for runner in self.runners if runner.check_type == self.runner_filter.framework]
206
207 def remove_runner(self, runner: BaseRunner) -> None:
208 if runner in self.runners:
209 self.runners.remove(runner)
210
211 @staticmethod
212 def enrich_report_with_guidelines(scan_report: Report, guidelines: Dict[str, str]) -> None:
213 for record in itertools.chain(scan_report.failed_checks, scan_report.passed_checks, scan_report.skipped_checks):
214 if record.check_id in guidelines:
215 record.set_guideline(guidelines[record.check_id])
216
217 @staticmethod
218 def get_enriched_resources(repo_roots: List[Union[str, os.PathLike]]) -> Dict[str, Dict[str, Any]]:
219 repo_definitions = {}
220 for repo_root in repo_roots:
221 tf_definitions = {}
222 parsing_errors = {}
223 Parser().parse_directory(
224 directory=repo_root, # assume plan file is in the repo-root
225 out_definitions=tf_definitions,
226 out_parsing_errors=parsing_errors,
227 )
228 repo_definitions[repo_root] = { 'tf_definitions': tf_definitions, 'parsing_errors': parsing_errors }
229
230 enriched_resources = {}
231 for repo_root, parse_results in repo_definitions.items():
232 for full_file_path, definition in parse_results['tf_definitions'].items():
233 definitions_context = parser_registry.enrich_definitions_context((full_file_path, definition))
234 abs_scanned_file, _ = tf_runner._strip_module_referrer(full_file_path)
235 scanned_file = os.path.relpath(abs_scanned_file, repo_root)
236 for block_type, block_value in definition.items():
237 if block_type in CHECK_BLOCK_TYPES:
238 for entity in block_value:
239 context_parser = parser_registry.context_parsers[block_type]
240 definition_path = context_parser.get_entity_context_path(entity)
241 entity_id = ".".join(definition_path)
242 entity_context_path = [block_type] + definition_path
243 entity_context = data_structures_utils.get_inner_dict(
244 definitions_context[full_file_path], entity_context_path
245 )
246 entity_lines_range = [
247 entity_context.get("start_line"),
248 entity_context.get("end_line"),
249 ]
250 entity_code_lines = entity_context.get("code_lines")
251 skipped_checks = entity_context.get("skipped_checks")
252 enriched_resources[entity_id] = {
253 "entity_code_lines": entity_code_lines,
254 "entity_lines_range": entity_lines_range,
255 "scanned_file": scanned_file,
256 "skipped_checks": skipped_checks,
257 }
258 return enriched_resources
259
[end of checkov/common/runners/runner_registry.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/common/runners/runner_registry.py b/checkov/common/runners/runner_registry.py
--- a/checkov/common/runners/runner_registry.py
+++ b/checkov/common/runners/runner_registry.py
@@ -101,6 +101,7 @@
if "cli" in config.output and not config.quiet:
print(f"{self.banner}\n")
exit_codes = []
+ cli_reports = []
report_jsons = []
sarif_reports = []
junit_reports = []
@@ -117,22 +118,25 @@
if "sarif" in config.output:
sarif_reports.append(report)
if "cli" in config.output:
- report.print_console(
- is_quiet=config.quiet,
- is_compact=config.compact,
- created_baseline_path=created_baseline_path,
- baseline=baseline,
- use_bc_ids=config.output_bc_ids,
- )
- if url:
- print("More details: {}".format(url))
- output_formats.discard("cli")
- if output_formats:
- print(OUTPUT_DELIMITER)
+ cli_reports.append(report)
if "cyclonedx" in config.output:
cyclonedx_reports.append(report)
exit_codes.append(report.get_exit_code(config.soft_fail, config.soft_fail_on, config.hard_fail_on))
+ if "cli" in config.output:
+ for report in cli_reports:
+ report.print_console(
+ is_quiet=config.quiet,
+ is_compact=config.compact,
+ created_baseline_path=created_baseline_path,
+ baseline=baseline,
+ use_bc_ids=config.output_bc_ids,
+ )
+ if url:
+ print("More details: {}".format(url))
+ output_formats.remove("cli")
+ if output_formats:
+ print(OUTPUT_DELIMITER)
if "sarif" in config.output:
master_report = Report("merged")
print(self.banner)
| {"golden_diff": "diff --git a/checkov/common/runners/runner_registry.py b/checkov/common/runners/runner_registry.py\n--- a/checkov/common/runners/runner_registry.py\n+++ b/checkov/common/runners/runner_registry.py\n@@ -101,6 +101,7 @@\n if \"cli\" in config.output and not config.quiet:\n print(f\"{self.banner}\\n\")\n exit_codes = []\n+ cli_reports = []\n report_jsons = []\n sarif_reports = []\n junit_reports = []\n@@ -117,22 +118,25 @@\n if \"sarif\" in config.output:\n sarif_reports.append(report)\n if \"cli\" in config.output:\n- report.print_console(\n- is_quiet=config.quiet,\n- is_compact=config.compact,\n- created_baseline_path=created_baseline_path,\n- baseline=baseline,\n- use_bc_ids=config.output_bc_ids,\n- )\n- if url:\n- print(\"More details: {}\".format(url))\n- output_formats.discard(\"cli\")\n- if output_formats:\n- print(OUTPUT_DELIMITER)\n+ cli_reports.append(report)\n if \"cyclonedx\" in config.output:\n cyclonedx_reports.append(report)\n exit_codes.append(report.get_exit_code(config.soft_fail, config.soft_fail_on, config.hard_fail_on))\n \n+ if \"cli\" in config.output:\n+ for report in cli_reports:\n+ report.print_console(\n+ is_quiet=config.quiet,\n+ is_compact=config.compact,\n+ created_baseline_path=created_baseline_path,\n+ baseline=baseline,\n+ use_bc_ids=config.output_bc_ids,\n+ )\n+ if url:\n+ print(\"More details: {}\".format(url))\n+ output_formats.remove(\"cli\")\n+ if output_formats:\n+ print(OUTPUT_DELIMITER)\n if \"sarif\" in config.output:\n master_report = Report(\"merged\")\n print(self.banner)\n", "issue": "Delimiter in wrong position when using multi-output with empty error list\n**Describe the bug**\r\nThe \"--- OUTPUT DELIMITER ---\" message appears below two outputs when running on a project without any errors to output. This causes problems when trying to use parsers to split the output.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. mkdir -p /tmp/checkov-bug\r\n2. cd /tmp/checkov-bug\r\n3. 
docker run -it -v $PWD:/app -w /app bridgecrew/checkov:2.0.591 -o cli -o junitxml -d .\r\n\r\nOutput is:\r\n _ _ \r\n ___| |__ ___ ___| | _______ __\r\n / __| '_ \\ / _ \\/ __| |/ / _ \\ \\ / /\r\n | (__| | | | __/ (__| < (_) \\ V / \r\n \\___|_| |_|\\___|\\___|_|\\_\\___/ \\_/ \r\n \r\nBy bridgecrew.io | version: 2.0.590 \r\nUpdate available 2.0.590 -> 2.0.591\r\nRun pip3 install -U checkov to update \r\n\r\n\\<?xml version=\"1.0\" ?\\>\r\n\\<testsuites/\\>\r\n\r\n--- OUTPUT DELIMITER ---\r\n\r\n\r\n**Expected behavior**\r\n\r\nThe expected behaviour would be for the XML snippet to be below \"--- OUTPUT DELIMITER ---\", since it was the second output option passed to checkov.\r\n\r\n**Screenshots**\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Docker image bridgecrew/checkov\r\n - Checkov Version 2.0.591\r\n\r\n**Additional context**\r\n\n", "before_files": [{"content": "import argparse\nimport itertools\nfrom json import dumps, JSONEncoder\nfrom lark import Tree\nimport datetime\nimport logging\nimport os\nfrom abc import abstractmethod\nfrom typing import List, Union, Dict, Any, Tuple, Optional\n\nfrom typing_extensions import Literal\n\nfrom checkov.common.bridgecrew.integration_features.integration_feature_registry import integration_feature_registry\nfrom checkov.common.output.baseline import Baseline\nfrom checkov.common.output.report import Report\nfrom checkov.common.runners.base_runner import BaseRunner\nfrom checkov.common.util import data_structures_utils\nfrom checkov.runner_filter import RunnerFilter\nfrom checkov.terraform.context_parsers.registry import parser_registry\nfrom checkov.terraform.runner import Runner as tf_runner\nfrom checkov.terraform.parser import Parser\nfrom checkov.common.parallelizer.parallel_runner import parallel_runner\nfrom checkov.common.util.ext_cyclonedx_xml import ExtXml\nfrom checkov.common.util.banner import tool as tool_name\n\nCHECK_BLOCK_TYPES = frozenset([\"resource\", \"data\", \"provider\", \"module\"])\nOUTPUT_CHOICES = [\"cli\", \"cyclonedx\", \"json\", \"junitxml\", \"github_failed_only\", \"sarif\"]\nOUTPUT_DELIMITER = \"\\n--- OUTPUT DELIMITER ---\\n\"\n\nclass OutputEncoder(JSONEncoder):\n def default(self, obj):\n if isinstance(obj, set):\n return list(obj)\n elif isinstance(obj, Tree):\n return str(obj)\n elif isinstance(obj, datetime.date):\n return str(obj)\n return super().default(obj)\n\nclass RunnerRegistry:\n runners: List[BaseRunner] = []\n scan_reports: List[Report] = []\n banner = \"\"\n\n def __init__(self, banner: str, runner_filter: RunnerFilter, *runners: BaseRunner) -> None:\n self.logger = logging.getLogger(__name__)\n self.runner_filter = runner_filter\n self.runners = list(runners)\n self.banner = banner\n self.scan_reports = []\n self.filter_runner_framework()\n self.tool = tool_name\n\n @abstractmethod\n def extract_entity_details(self, entity: Dict[str, Any]) -> Tuple[str, str, Dict[str, Any]]:\n raise NotImplementedError()\n\n def run(\n self,\n root_folder: Optional[str] = None,\n external_checks_dir: Optional[List[str]] = None,\n files: Optional[List[str]] = None,\n guidelines: Optional[Dict[str, str]] = None,\n collect_skip_comments: bool = True,\n repo_root_for_plan_enrichment: Optional[List[Union[str, os.PathLike]]] = None,\n ) -> List[Report]:\n integration_feature_registry.run_pre_runner()\n if len(self.runners) == 1:\n reports = [self.runners[0].run(root_folder, external_checks_dir=external_checks_dir, files=files,\n runner_filter=self.runner_filter, 
collect_skip_comments=collect_skip_comments)]\n else:\n reports = parallel_runner.run_function(\n lambda runner: runner.run(root_folder, external_checks_dir=external_checks_dir, files=files,\n runner_filter=self.runner_filter, collect_skip_comments=collect_skip_comments),\n self.runners, 1)\n\n for scan_report in reports:\n self._handle_report(scan_report, guidelines, repo_root_for_plan_enrichment)\n return self.scan_reports\n\n def _handle_report(self, scan_report, guidelines, repo_root_for_plan_enrichment):\n integration_feature_registry.run_post_runner(scan_report)\n if guidelines:\n RunnerRegistry.enrich_report_with_guidelines(scan_report, guidelines)\n if repo_root_for_plan_enrichment:\n enriched_resources = RunnerRegistry.get_enriched_resources(repo_root_for_plan_enrichment)\n scan_report = Report(\"terraform_plan\").enrich_plan_report(scan_report, enriched_resources)\n scan_report = Report(\"terraform_plan\").handle_skipped_checks(scan_report, enriched_resources)\n self.scan_reports.append(scan_report)\n\n def print_reports(\n self,\n scan_reports: List[Report],\n config: argparse.Namespace,\n url: Optional[str] = None,\n created_baseline_path: Optional[str] = None,\n baseline: Optional[Baseline] = None,\n ) -> Literal[0, 1]:\n output_formats = set(config.output)\n\n if \"cli\" in config.output and not config.quiet:\n print(f\"{self.banner}\\n\")\n exit_codes = []\n report_jsons = []\n sarif_reports = []\n junit_reports = []\n cyclonedx_reports = []\n for report in scan_reports:\n if not report.is_empty():\n if \"json\" in config.output:\n report_jsons.append(report.get_dict(is_quiet=config.quiet, url=url))\n if \"junitxml\" in config.output:\n junit_reports.append(report)\n # report.print_junit_xml()\n if \"github_failed_only\" in config.output:\n report.print_failed_github_md(use_bc_ids=config.output_bc_ids)\n if \"sarif\" in config.output:\n sarif_reports.append(report)\n if \"cli\" in config.output:\n report.print_console(\n is_quiet=config.quiet,\n is_compact=config.compact,\n created_baseline_path=created_baseline_path,\n baseline=baseline,\n use_bc_ids=config.output_bc_ids,\n )\n if url:\n print(\"More details: {}\".format(url))\n output_formats.discard(\"cli\")\n if output_formats:\n print(OUTPUT_DELIMITER)\n if \"cyclonedx\" in config.output:\n cyclonedx_reports.append(report)\n exit_codes.append(report.get_exit_code(config.soft_fail, config.soft_fail_on, config.hard_fail_on))\n\n if \"sarif\" in config.output:\n master_report = Report(\"merged\")\n print(self.banner)\n for report in sarif_reports:\n report.print_console(\n is_quiet=config.quiet,\n is_compact=config.compact,\n created_baseline_path=created_baseline_path,\n baseline=baseline,\n use_bc_ids=config.output_bc_ids,\n )\n master_report.failed_checks += report.failed_checks\n master_report.skipped_checks += report.skipped_checks\n if url:\n print(\"More details: {}\".format(url))\n master_report.write_sarif_output(self.tool)\n output_formats.remove(\"sarif\")\n if output_formats:\n print(OUTPUT_DELIMITER)\n if \"json\" in config.output:\n if not report_jsons:\n print(dumps(Report(None).get_summary(), indent=4))\n elif len(report_jsons) == 1:\n print(dumps(report_jsons[0], indent=4, cls=OutputEncoder))\n else:\n print(dumps(report_jsons, indent=4, cls=OutputEncoder))\n output_formats.remove(\"json\")\n if output_formats:\n print(OUTPUT_DELIMITER)\n if \"junitxml\" in config.output:\n if len(junit_reports) == 1:\n junit_reports[0].print_junit_xml(use_bc_ids=config.output_bc_ids)\n else:\n master_report = 
Report(None)\n for report in junit_reports:\n master_report.skipped_checks += report.skipped_checks\n master_report.passed_checks += report.passed_checks\n master_report.failed_checks += report.failed_checks\n master_report.print_junit_xml(use_bc_ids=config.output_bc_ids)\n output_formats.remove(\"junitxml\")\n if output_formats:\n print(OUTPUT_DELIMITER)\n\n if \"cyclonedx\" in config.output:\n if cyclonedx_reports:\n # More than one Report - combine Reports first\n report = Report(None)\n for r in cyclonedx_reports:\n report.passed_checks += r.passed_checks\n report.skipped_checks += r.skipped_checks\n report.failed_checks += r.failed_checks\n else:\n report = cyclonedx_reports[0]\n cyclonedx_output = ExtXml(bom=report.get_cyclonedx_bom())\n print(cyclonedx_output.output_as_string())\n output_formats.remove(\"cyclonedx\")\n if output_formats:\n print(OUTPUT_DELIMITER)\n\n exit_code = 1 if 1 in exit_codes else 0\n return exit_code\n\n def filter_runner_framework(self) -> None:\n if not self.runner_filter:\n return\n if self.runner_filter.framework is None:\n return\n if self.runner_filter.framework == \"all\":\n return\n self.runners = [runner for runner in self.runners if runner.check_type == self.runner_filter.framework]\n\n def remove_runner(self, runner: BaseRunner) -> None:\n if runner in self.runners:\n self.runners.remove(runner)\n\n @staticmethod\n def enrich_report_with_guidelines(scan_report: Report, guidelines: Dict[str, str]) -> None:\n for record in itertools.chain(scan_report.failed_checks, scan_report.passed_checks, scan_report.skipped_checks):\n if record.check_id in guidelines:\n record.set_guideline(guidelines[record.check_id])\n\n @staticmethod\n def get_enriched_resources(repo_roots: List[Union[str, os.PathLike]]) -> Dict[str, Dict[str, Any]]:\n repo_definitions = {}\n for repo_root in repo_roots:\n tf_definitions = {}\n parsing_errors = {}\n Parser().parse_directory(\n directory=repo_root, # assume plan file is in the repo-root\n out_definitions=tf_definitions,\n out_parsing_errors=parsing_errors,\n )\n repo_definitions[repo_root] = { 'tf_definitions': tf_definitions, 'parsing_errors': parsing_errors }\n\n enriched_resources = {}\n for repo_root, parse_results in repo_definitions.items():\n for full_file_path, definition in parse_results['tf_definitions'].items():\n definitions_context = parser_registry.enrich_definitions_context((full_file_path, definition))\n abs_scanned_file, _ = tf_runner._strip_module_referrer(full_file_path)\n scanned_file = os.path.relpath(abs_scanned_file, repo_root)\n for block_type, block_value in definition.items():\n if block_type in CHECK_BLOCK_TYPES:\n for entity in block_value:\n context_parser = parser_registry.context_parsers[block_type]\n definition_path = context_parser.get_entity_context_path(entity)\n entity_id = \".\".join(definition_path)\n entity_context_path = [block_type] + definition_path\n entity_context = data_structures_utils.get_inner_dict(\n definitions_context[full_file_path], entity_context_path\n )\n entity_lines_range = [\n entity_context.get(\"start_line\"),\n entity_context.get(\"end_line\"),\n ]\n entity_code_lines = entity_context.get(\"code_lines\")\n skipped_checks = entity_context.get(\"skipped_checks\")\n enriched_resources[entity_id] = {\n \"entity_code_lines\": entity_code_lines,\n \"entity_lines_range\": entity_lines_range,\n \"scanned_file\": scanned_file,\n \"skipped_checks\": skipped_checks,\n }\n return enriched_resources\n", "path": "checkov/common/runners/runner_registry.py"}]} | 3,865 | 429 |
gh_patches_debug_37315 | rasdani/github-patches | git_diff | nltk__nltk-900 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
HunposTagger error in Python3
Bug report from Tülin Erçelebi Ayyıldız:
> Dear hunpos authors,
>
> I am currently trying to use the hunpos tagger to tag a text file; however, I get an error when hunpos.py runs.
>
> My configuration:
> OS : Windows 7-64bit
> python 3.4.1
> nltk 3.0.1
>
> All of "english.model", "hunpos-tag.exe" and "hunpos-train.exe" are located in the "C:/Users" folder. My Python code is as follows:
> ---
``` python
import nltk
from nltk.tag.hunpos import HunposTagger
from nltk.tokenize import word_tokenize
corpus = "so how do i hunpos tag my ntuen ? i can't get the following code to work."
ht = HunposTagger('C:/Users/english.model','C:/Users/hunpos-tag.exe')
x=word_tokenize(corpus)
ht.tag(x)
```
> ---
>
> When I run this module I get the following error:
```
Traceback (most recent call last):
File "C:\Users\Tülin\Desktop\hunpos_deneme.py", line 12, in <module>
ht.tag(x)
File "C:\Python34\lib\site-packages\nltk\tag\hunpos.py", line 109, in tag
self._hunpos.stdin.write(token + "\n")
TypeError: can't concat bytes to str
```
> I tried several things, but I could not successfully eliminate the problem and get a correct result.
</issue>
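The root cause visible in the traceback is a bytes/str mix-up: `tag()` encodes each token to bytes, but the literals it writes to the binary pipe (`"\n"`) and later splits on (`"\t"`) are still `str`. Below is a minimal standalone sketch of the same failure and of the obvious fix. It uses `cat` as a stand-in for the hunpos-tag binary; the stand-in and the sample token are illustrative assumptions, not NLTK code, and the exact TypeError wording varies across Python versions.

```python
from subprocess import Popen, PIPE

# A binary-mode pipe, just like HunposTagger opens to hunpos-tag.
# "cat" is only a stand-in so the sketch runs without hunpos installed.
proc = Popen(["cat"], stdin=PIPE, stdout=PIPE)

token = "swallow".encode("ISO-8859-1")  # tokens are encoded to bytes first

# proc.stdin.write(token + "\n")   # TypeError: cannot mix str and bytes
proc.stdin.write(token + b"\n")    # bytes + bytes works on a binary pipe

proc.stdin.flush()
proc.stdin.close()
print(proc.stdout.read().strip().split(b"\t"))  # reading back also yields bytes
proc.wait()
```

The same reasoning applies to the `"\t"` split on the tagger's output, since `stdout.readline()` returns bytes as well.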
<code>
[start of nltk/tag/hunpos.py]
1 # -*- coding: utf-8 -*-
2 # Natural Language Toolkit: Interface to the HunPos POS-tagger
3 #
4 # Copyright (C) 2001-2015 NLTK Project
5 # Author: Peter Ljunglöf <[email protected]>
6 # David Nemeskey <[email protected]> (modifications)
7 # Attila Zseder <[email protected]> (modifications)
8 # URL: <http://nltk.org/>
9 # For license information, see LICENSE.TXT
10
11 """
12 A module for interfacing with the HunPos open-source POS-tagger.
13 """
14
15 import os
16 from subprocess import Popen, PIPE
17
18 from nltk.internals import find_binary, find_file
19 from nltk.tag.api import TaggerI
20 from nltk import compat
21
22 _hunpos_url = 'http://code.google.com/p/hunpos/'
23
24 _hunpos_charset = 'ISO-8859-1'
25 """The default encoding used by hunpos: ISO-8859-1."""
26
27 class HunposTagger(TaggerI):
28 """
29 A class for pos tagging with HunPos. The input is the paths to:
30 - a model trained on training data
31 - (optionally) the path to the hunpos-tag binary
32 - (optionally) the encoding of the training data (default: ISO-8859-1)
33
34 Example:
35
36 >>> from nltk.tag.hunpos import HunposTagger
37 >>> ht = HunposTagger('en_wsj.model')
38 >>> ht.tag('What is the airspeed of an unladen swallow ?'.split())
39 [('What', 'WP'), ('is', 'VBZ'), ('the', 'DT'), ('airspeed', 'NN'), ('of', 'IN'), ('an', 'DT'), ('unladen', 'NN'), ('swallow', 'VB'), ('?', '.')]
40 >>> ht.close()
41
42 This class communicates with the hunpos-tag binary via pipes. When the
43 tagger object is no longer needed, the close() method should be called to
44 free system resources. The class supports the context manager interface; if
45 used in a with statement, the close() method is invoked automatically:
46
47 >>> with HunposTagger('en_wsj.model') as ht:
48 ... ht.tag('What is the airspeed of an unladen swallow ?'.split())
49 ...
50 [('What', 'WP'), ('is', 'VBZ'), ('the', 'DT'), ('airspeed', 'NN'), ('of', 'IN'), ('an', 'DT'), ('unladen', 'NN'), ('swallow', 'VB'), ('?', '.')]
51 """
52
53 def __init__(self, path_to_model, path_to_bin=None,
54 encoding=_hunpos_charset, verbose=False):
55 """
56 Starts the hunpos-tag executable and establishes a connection with it.
57
58 :param path_to_model: The model file.
59 :param path_to_bin: The hunpos-tag binary.
60 :param encoding: The encoding used by the model. Unicode tokens
61 passed to the tag() and tag_sents() methods are converted to
62 this charset when they are sent to hunpos-tag.
63 The default is ISO-8859-1 (Latin-1).
64
65 This parameter is ignored for str tokens, which are sent as-is.
66 The caller must ensure that tokens are encoded in the right charset.
67 """
68 self._closed = True
69 hunpos_paths = ['.', '/usr/bin', '/usr/local/bin', '/opt/local/bin',
70 '/Applications/bin', '~/bin', '~/Applications/bin']
71 hunpos_paths = list(map(os.path.expanduser, hunpos_paths))
72
73 self._hunpos_bin = find_binary(
74 'hunpos-tag', path_to_bin,
75 env_vars=('HUNPOS_TAGGER',),
76 searchpath=hunpos_paths,
77 url=_hunpos_url,
78 verbose=verbose)
79
80 self._hunpos_model = find_file(path_to_model,
81 env_vars=('HUNPOS_TAGGER',), verbose=verbose)
82 self._encoding = encoding
83 self._hunpos = Popen([self._hunpos_bin, self._hunpos_model],
84 shell=False, stdin=PIPE, stdout=PIPE, stderr=PIPE)
85 self._closed = False
86
87 def __del__(self):
88 self.close()
89
90 def close(self):
91 """Closes the pipe to the hunpos executable."""
92 if not self._closed:
93 self._hunpos.communicate()
94 self._closed = True
95
96 def __enter__(self):
97 return self
98 def __exit__(self, exc_type, exc_value, traceback):
99 self.close()
100
101 def tag(self, tokens):
102 """Tags a single sentence: a list of words.
103 The tokens should not contain any newline characters.
104 """
105 for token in tokens:
106 assert "\n" not in token, "Tokens should not contain newlines"
107 if isinstance(token, compat.text_type):
108 token = token.encode(self._encoding)
109 self._hunpos.stdin.write(token + "\n")
110 # We write a final empty line to tell hunpos that the sentence is finished:
111 self._hunpos.stdin.write("\n")
112 self._hunpos.stdin.flush()
113
114 tagged_tokens = []
115 for token in tokens:
116 tagged = self._hunpos.stdout.readline().strip().split("\t")
117 tag = (tagged[1] if len(tagged) > 1 else None)
118 tagged_tokens.append((token, tag))
119 # We have to read (and dismiss) the final empty line:
120 self._hunpos.stdout.readline()
121
122 return tagged_tokens
123
124 # skip doctests if Hunpos tagger is not installed
125 def setup_module(module):
126 from nose import SkipTest
127 try:
128 HunposTagger('en_wsj.model')
129 except LookupError:
130 raise SkipTest("HunposTagger is not available")
131
132 if __name__ == "__main__":
133 import doctest
134 doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE)
135
[end of nltk/tag/hunpos.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nltk/tag/hunpos.py b/nltk/tag/hunpos.py
--- a/nltk/tag/hunpos.py
+++ b/nltk/tag/hunpos.py
@@ -3,8 +3,8 @@
#
# Copyright (C) 2001-2015 NLTK Project
# Author: Peter Ljunglöf <[email protected]>
-# David Nemeskey <[email protected]> (modifications)
-# Attila Zseder <[email protected]> (modifications)
+# Dávid Márk Nemeskey <[email protected]> (modifications)
+# Attila Zséder <[email protected]> (modifications)
# URL: <http://nltk.org/>
# For license information, see LICENSE.TXT
@@ -71,14 +71,15 @@
hunpos_paths = list(map(os.path.expanduser, hunpos_paths))
self._hunpos_bin = find_binary(
- 'hunpos-tag', path_to_bin,
- env_vars=('HUNPOS_TAGGER',),
- searchpath=hunpos_paths,
- url=_hunpos_url,
- verbose=verbose)
-
- self._hunpos_model = find_file(path_to_model,
- env_vars=('HUNPOS_TAGGER',), verbose=verbose)
+ 'hunpos-tag', path_to_bin,
+ env_vars=('HUNPOS_TAGGER',),
+ searchpath=hunpos_paths,
+ url=_hunpos_url,
+ verbose=verbose
+ )
+
+ self._hunpos_model = find_file(
+ path_to_model, env_vars=('HUNPOS_TAGGER',), verbose=verbose)
self._encoding = encoding
self._hunpos = Popen([self._hunpos_bin, self._hunpos_model],
shell=False, stdin=PIPE, stdout=PIPE, stderr=PIPE)
@@ -106,14 +107,14 @@
assert "\n" not in token, "Tokens should not contain newlines"
if isinstance(token, compat.text_type):
token = token.encode(self._encoding)
- self._hunpos.stdin.write(token + "\n")
+ self._hunpos.stdin.write(token + b"\n")
# We write a final empty line to tell hunpos that the sentence is finished:
- self._hunpos.stdin.write("\n")
+ self._hunpos.stdin.write(b"\n")
self._hunpos.stdin.flush()
tagged_tokens = []
for token in tokens:
- tagged = self._hunpos.stdout.readline().strip().split("\t")
+ tagged = self._hunpos.stdout.readline().strip().split(b"\t")
tag = (tagged[1] if len(tagged) > 1 else None)
tagged_tokens.append((token, tag))
# We have to read (and dismiss) the final empty line:
| {"golden_diff": "diff --git a/nltk/tag/hunpos.py b/nltk/tag/hunpos.py\n--- a/nltk/tag/hunpos.py\n+++ b/nltk/tag/hunpos.py\n@@ -3,8 +3,8 @@\n #\n # Copyright (C) 2001-2015 NLTK Project\n # Author: Peter Ljungl\u00f6f <[email protected]>\n-# David Nemeskey <[email protected]> (modifications)\n-# Attila Zseder <[email protected]> (modifications)\n+# D\u00e1vid M\u00e1rk Nemeskey <[email protected]> (modifications)\n+# Attila Zs\u00e9der <[email protected]> (modifications)\n # URL: <http://nltk.org/>\n # For license information, see LICENSE.TXT\n \n@@ -71,14 +71,15 @@\n hunpos_paths = list(map(os.path.expanduser, hunpos_paths))\n \n self._hunpos_bin = find_binary(\n- 'hunpos-tag', path_to_bin,\n- env_vars=('HUNPOS_TAGGER',),\n- searchpath=hunpos_paths,\n- url=_hunpos_url,\n- verbose=verbose)\n-\n- self._hunpos_model = find_file(path_to_model,\n- env_vars=('HUNPOS_TAGGER',), verbose=verbose)\n+ 'hunpos-tag', path_to_bin,\n+ env_vars=('HUNPOS_TAGGER',),\n+ searchpath=hunpos_paths,\n+ url=_hunpos_url,\n+ verbose=verbose\n+ )\n+\n+ self._hunpos_model = find_file(\n+ path_to_model, env_vars=('HUNPOS_TAGGER',), verbose=verbose)\n self._encoding = encoding\n self._hunpos = Popen([self._hunpos_bin, self._hunpos_model],\n shell=False, stdin=PIPE, stdout=PIPE, stderr=PIPE)\n@@ -106,14 +107,14 @@\n assert \"\\n\" not in token, \"Tokens should not contain newlines\"\n if isinstance(token, compat.text_type):\n token = token.encode(self._encoding)\n- self._hunpos.stdin.write(token + \"\\n\")\n+ self._hunpos.stdin.write(token + b\"\\n\")\n # We write a final empty line to tell hunpos that the sentence is finished:\n- self._hunpos.stdin.write(\"\\n\")\n+ self._hunpos.stdin.write(b\"\\n\")\n self._hunpos.stdin.flush()\n \n tagged_tokens = []\n for token in tokens:\n- tagged = self._hunpos.stdout.readline().strip().split(\"\\t\")\n+ tagged = self._hunpos.stdout.readline().strip().split(b\"\\t\")\n tag = (tagged[1] if len(tagged) > 1 else None)\n tagged_tokens.append((token, tag))\n # We have to read (and dismiss) the final empty line:\n", "issue": "HunposTagger error in Python3\nBug report from T\u00fclin Er\u00e7elebi Ayy\u0131ld\u0131z:\n\n> Dear hunpos authors,\n> \n> I am currently trying to use hunpos tagger to tag a text file, however I get an error at the stage of hunpos.py run.\n> \n> My configuration:\n> OS : Windows 7-64bit\n> python 3.4.1\n> nltk 3.0.1\n> \n> All \"english.model\", \"hunpos-tag.exe\" and \"hunpos-train.exe\" are located in \"C:/Users\" folder. My python code is as follows:\n> ---\n\n``` python\nimport nltk \nfrom nltk.tag.hunpos import HunposTagger\nfrom nltk.tokenize import word_tokenize\ncorpus = \"so how do i hunpos tag my ntuen ? 
i can't get the following code to work.\"\n\nht = HunposTagger('C:/Users/english.model','C:/Users/hunpos-tag.exe')\nx=word_tokenize(corpus)\nht.tag(x)\n```\n\n> ---\n> \n> When I run this module I get the following error:\n\n```\nTraceback (most recent call last):\n File \"C:\\Users\\T\u00fclin\\Desktop\\hunpos_deneme.py\", line 12, in <module>\n ht.tag(x)\n File \"C:\\Python34\\lib\\site-packages\\nltk\\tag\\hunpos.py\", line 109, in tag\n self._hunpos.stdin.write(token + \"\\n\")\nTypeError: can't concat bytes to str\n```\n\n> I tried several things, but I could not successfully eliminate the problem and get a correct result.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Natural Language Toolkit: Interface to the HunPos POS-tagger\n#\n# Copyright (C) 2001-2015 NLTK Project\n# Author: Peter Ljungl\u00f6f <[email protected]>\n# David Nemeskey <[email protected]> (modifications)\n# Attila Zseder <[email protected]> (modifications)\n# URL: <http://nltk.org/>\n# For license information, see LICENSE.TXT\n\n\"\"\"\nA module for interfacing with the HunPos open-source POS-tagger.\n\"\"\"\n\nimport os\nfrom subprocess import Popen, PIPE\n\nfrom nltk.internals import find_binary, find_file\nfrom nltk.tag.api import TaggerI\nfrom nltk import compat\n\n_hunpos_url = 'http://code.google.com/p/hunpos/'\n\n_hunpos_charset = 'ISO-8859-1'\n\"\"\"The default encoding used by hunpos: ISO-8859-1.\"\"\"\n\nclass HunposTagger(TaggerI):\n \"\"\"\n A class for pos tagging with HunPos. The input is the paths to:\n - a model trained on training data\n - (optionally) the path to the hunpos-tag binary\n - (optionally) the encoding of the training data (default: ISO-8859-1)\n\n Example:\n\n >>> from nltk.tag.hunpos import HunposTagger\n >>> ht = HunposTagger('en_wsj.model')\n >>> ht.tag('What is the airspeed of an unladen swallow ?'.split())\n [('What', 'WP'), ('is', 'VBZ'), ('the', 'DT'), ('airspeed', 'NN'), ('of', 'IN'), ('an', 'DT'), ('unladen', 'NN'), ('swallow', 'VB'), ('?', '.')]\n >>> ht.close()\n\n This class communicates with the hunpos-tag binary via pipes. When the\n tagger object is no longer needed, the close() method should be called to\n free system resources. The class supports the context manager interface; if\n used in a with statement, the close() method is invoked automatically:\n\n >>> with HunposTagger('en_wsj.model') as ht:\n ... ht.tag('What is the airspeed of an unladen swallow ?'.split())\n ...\n [('What', 'WP'), ('is', 'VBZ'), ('the', 'DT'), ('airspeed', 'NN'), ('of', 'IN'), ('an', 'DT'), ('unladen', 'NN'), ('swallow', 'VB'), ('?', '.')]\n \"\"\"\n\n def __init__(self, path_to_model, path_to_bin=None,\n encoding=_hunpos_charset, verbose=False):\n \"\"\"\n Starts the hunpos-tag executable and establishes a connection with it.\n\n :param path_to_model: The model file.\n :param path_to_bin: The hunpos-tag binary.\n :param encoding: The encoding used by the model. 
Unicode tokens\n passed to the tag() and tag_sents() methods are converted to\n this charset when they are sent to hunpos-tag.\n The default is ISO-8859-1 (Latin-1).\n\n This parameter is ignored for str tokens, which are sent as-is.\n The caller must ensure that tokens are encoded in the right charset.\n \"\"\"\n self._closed = True\n hunpos_paths = ['.', '/usr/bin', '/usr/local/bin', '/opt/local/bin',\n '/Applications/bin', '~/bin', '~/Applications/bin']\n hunpos_paths = list(map(os.path.expanduser, hunpos_paths))\n\n self._hunpos_bin = find_binary(\n 'hunpos-tag', path_to_bin,\n env_vars=('HUNPOS_TAGGER',),\n searchpath=hunpos_paths,\n url=_hunpos_url,\n verbose=verbose)\n\n self._hunpos_model = find_file(path_to_model,\n env_vars=('HUNPOS_TAGGER',), verbose=verbose)\n self._encoding = encoding\n self._hunpos = Popen([self._hunpos_bin, self._hunpos_model],\n shell=False, stdin=PIPE, stdout=PIPE, stderr=PIPE)\n self._closed = False\n\n def __del__(self):\n self.close()\n\n def close(self):\n \"\"\"Closes the pipe to the hunpos executable.\"\"\"\n if not self._closed:\n self._hunpos.communicate()\n self._closed = True\n\n def __enter__(self):\n return self\n def __exit__(self, exc_type, exc_value, traceback):\n self.close()\n\n def tag(self, tokens):\n \"\"\"Tags a single sentence: a list of words.\n The tokens should not contain any newline characters.\n \"\"\"\n for token in tokens:\n assert \"\\n\" not in token, \"Tokens should not contain newlines\"\n if isinstance(token, compat.text_type):\n token = token.encode(self._encoding)\n self._hunpos.stdin.write(token + \"\\n\")\n # We write a final empty line to tell hunpos that the sentence is finished:\n self._hunpos.stdin.write(\"\\n\")\n self._hunpos.stdin.flush()\n\n tagged_tokens = []\n for token in tokens:\n tagged = self._hunpos.stdout.readline().strip().split(\"\\t\")\n tag = (tagged[1] if len(tagged) > 1 else None)\n tagged_tokens.append((token, tag))\n # We have to read (and dismiss) the final empty line:\n self._hunpos.stdout.readline()\n\n return tagged_tokens\n\n# skip doctests if Hunpos tagger is not installed\ndef setup_module(module):\n from nose import SkipTest\n try:\n HunposTagger('en_wsj.model')\n except LookupError:\n raise SkipTest(\"HunposTagger is not available\")\n\nif __name__ == \"__main__\":\n import doctest\n doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE)\n", "path": "nltk/tag/hunpos.py"}]} | 2,522 | 670 |
gh_patches_debug_34326 | rasdani/github-patches | git_diff | encode__starlette-1350 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
StaticFiles support for directories other than "statics"
### Checklist
<!-- Please make sure you check all these items before submitting your feature request. -->
- [/] There are no similar issues or pull requests for this yet.
- [X - tried to get feedback but no replies] I discussed this idea on the [community chat](https://gitter.im/encode/community) and feedback is positive.
### Is your feature related to a problem? Please describe.
I want to be able to serve static files from arbitrary locations within a Python package, but I can't, because `StaticFiles` currently looks in a hardcoded `statics` folder.
## Describe the solution you would like.
I'd like to be able to pass, as a parameter, the path within the Python package to look in for the static files.
## Describe alternatives you considered
<!-- Please describe any alternative solutions or features you've considered to solve
your problem and why they wouldn't solve it. -->
I've considered changing the location of my packaged files to `statics`, but this will have knock-on effects across other systems, and `statics` itself is non-standard as far as I can tell.
## Additional context
<!-- Provide any additional context, screenshots, tracebacks, etc. about the feature here. -->
</issue>
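One way to express the request, and the shape the accepted change later in this entry takes, is to let each `packages` entry optionally name the directory to serve, for example as a `(package, directory)` tuple instead of a bare package name. A rough sketch of the intended usage follows; the package and directory names are made up for illustration.

```python
from starlette.applications import Starlette
from starlette.routing import Mount
from starlette.staticfiles import StaticFiles

routes = [
    Mount(
        "/static",
        app=StaticFiles(
            directory="static",           # plain filesystem directory, unchanged
            packages=[
                "bootstrap4",             # keeps the default "statics" folder
                ("mypackage", "assets"),  # hypothetical: serve mypackage/assets instead
            ],
        ),
        name="static",
    ),
]

app = Starlette(routes=routes)
```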
<code>
[start of starlette/staticfiles.py]
1 import importlib.util
2 import os
3 import stat
4 import typing
5 from email.utils import parsedate
6
7 import anyio
8
9 from starlette.datastructures import URL, Headers
10 from starlette.exceptions import HTTPException
11 from starlette.responses import FileResponse, RedirectResponse, Response
12 from starlette.types import Receive, Scope, Send
13
14 PathLike = typing.Union[str, "os.PathLike[str]"]
15
16
17 class NotModifiedResponse(Response):
18 NOT_MODIFIED_HEADERS = (
19 "cache-control",
20 "content-location",
21 "date",
22 "etag",
23 "expires",
24 "vary",
25 )
26
27 def __init__(self, headers: Headers):
28 super().__init__(
29 status_code=304,
30 headers={
31 name: value
32 for name, value in headers.items()
33 if name in self.NOT_MODIFIED_HEADERS
34 },
35 )
36
37
38 class StaticFiles:
39 def __init__(
40 self,
41 *,
42 directory: PathLike = None,
43 packages: typing.List[str] = None,
44 html: bool = False,
45 check_dir: bool = True,
46 ) -> None:
47 self.directory = directory
48 self.packages = packages
49 self.all_directories = self.get_directories(directory, packages)
50 self.html = html
51 self.config_checked = False
52 if check_dir and directory is not None and not os.path.isdir(directory):
53 raise RuntimeError(f"Directory '{directory}' does not exist")
54
55 def get_directories(
56 self, directory: PathLike = None, packages: typing.List[str] = None
57 ) -> typing.List[PathLike]:
58 """
59 Given `directory` and `packages` arguments, return a list of all the
60 directories that should be used for serving static files from.
61 """
62 directories = []
63 if directory is not None:
64 directories.append(directory)
65
66 for package in packages or []:
67 spec = importlib.util.find_spec(package)
68 assert spec is not None, f"Package {package!r} could not be found."
69 assert (
70 spec.origin is not None
71 ), f"Directory 'statics' in package {package!r} could not be found."
72 package_directory = os.path.normpath(
73 os.path.join(spec.origin, "..", "statics")
74 )
75 assert os.path.isdir(
76 package_directory
77 ), f"Directory 'statics' in package {package!r} could not be found."
78 directories.append(package_directory)
79
80 return directories
81
82 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
83 """
84 The ASGI entry point.
85 """
86 assert scope["type"] == "http"
87
88 if not self.config_checked:
89 await self.check_config()
90 self.config_checked = True
91
92 path = self.get_path(scope)
93 response = await self.get_response(path, scope)
94 await response(scope, receive, send)
95
96 def get_path(self, scope: Scope) -> str:
97 """
98 Given the ASGI scope, return the `path` string to serve up,
99 with OS specific path seperators, and any '..', '.' components removed.
100 """
101 return os.path.normpath(os.path.join(*scope["path"].split("/")))
102
103 async def get_response(self, path: str, scope: Scope) -> Response:
104 """
105 Returns an HTTP response, given the incoming path, method and request headers.
106 """
107 if scope["method"] not in ("GET", "HEAD"):
108 raise HTTPException(status_code=405)
109
110 try:
111 full_path, stat_result = await anyio.to_thread.run_sync(
112 self.lookup_path, path
113 )
114 except PermissionError:
115 raise HTTPException(status_code=401)
116 except OSError:
117 raise
118
119 if stat_result and stat.S_ISREG(stat_result.st_mode):
120 # We have a static file to serve.
121 return self.file_response(full_path, stat_result, scope)
122
123 elif stat_result and stat.S_ISDIR(stat_result.st_mode) and self.html:
124 # We're in HTML mode, and have got a directory URL.
125 # Check if we have 'index.html' file to serve.
126 index_path = os.path.join(path, "index.html")
127 full_path, stat_result = await anyio.to_thread.run_sync(
128 self.lookup_path, index_path
129 )
130 if stat_result is not None and stat.S_ISREG(stat_result.st_mode):
131 if not scope["path"].endswith("/"):
132 # Directory URLs should redirect to always end in "/".
133 url = URL(scope=scope)
134 url = url.replace(path=url.path + "/")
135 return RedirectResponse(url=url)
136 return self.file_response(full_path, stat_result, scope)
137
138 if self.html:
139 # Check for '404.html' if we're in HTML mode.
140 full_path, stat_result = await anyio.to_thread.run_sync(
141 self.lookup_path, "404.html"
142 )
143 if stat_result and stat.S_ISREG(stat_result.st_mode):
144 return FileResponse(
145 full_path,
146 stat_result=stat_result,
147 method=scope["method"],
148 status_code=404,
149 )
150 raise HTTPException(status_code=404)
151
152 def lookup_path(
153 self, path: str
154 ) -> typing.Tuple[str, typing.Optional[os.stat_result]]:
155 for directory in self.all_directories:
156 full_path = os.path.realpath(os.path.join(directory, path))
157 directory = os.path.realpath(directory)
158 if os.path.commonprefix([full_path, directory]) != directory:
159 # Don't allow misbehaving clients to break out of the static files
160 # directory.
161 continue
162 try:
163 return full_path, os.stat(full_path)
164 except (FileNotFoundError, NotADirectoryError):
165 continue
166 return "", None
167
168 def file_response(
169 self,
170 full_path: PathLike,
171 stat_result: os.stat_result,
172 scope: Scope,
173 status_code: int = 200,
174 ) -> Response:
175 method = scope["method"]
176 request_headers = Headers(scope=scope)
177
178 response = FileResponse(
179 full_path, status_code=status_code, stat_result=stat_result, method=method
180 )
181 if self.is_not_modified(response.headers, request_headers):
182 return NotModifiedResponse(response.headers)
183 return response
184
185 async def check_config(self) -> None:
186 """
187 Perform a one-off configuration check that StaticFiles is actually
188 pointed at a directory, so that we can raise loud errors rather than
189 just returning 404 responses.
190 """
191 if self.directory is None:
192 return
193
194 try:
195 stat_result = await anyio.to_thread.run_sync(os.stat, self.directory)
196 except FileNotFoundError:
197 raise RuntimeError(
198 f"StaticFiles directory '{self.directory}' does not exist."
199 )
200 if not (stat.S_ISDIR(stat_result.st_mode) or stat.S_ISLNK(stat_result.st_mode)):
201 raise RuntimeError(
202 f"StaticFiles path '{self.directory}' is not a directory."
203 )
204
205 def is_not_modified(
206 self, response_headers: Headers, request_headers: Headers
207 ) -> bool:
208 """
209 Given the request and response headers, return `True` if an HTTP
210 "Not Modified" response could be returned instead.
211 """
212 try:
213 if_none_match = request_headers["if-none-match"]
214 etag = response_headers["etag"]
215 if if_none_match == etag:
216 return True
217 except KeyError:
218 pass
219
220 try:
221 if_modified_since = parsedate(request_headers["if-modified-since"])
222 last_modified = parsedate(response_headers["last-modified"])
223 if (
224 if_modified_since is not None
225 and last_modified is not None
226 and if_modified_since >= last_modified
227 ):
228 return True
229 except KeyError:
230 pass
231
232 return False
233
[end of starlette/staticfiles.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/starlette/staticfiles.py b/starlette/staticfiles.py
--- a/starlette/staticfiles.py
+++ b/starlette/staticfiles.py
@@ -40,7 +40,7 @@
self,
*,
directory: PathLike = None,
- packages: typing.List[str] = None,
+ packages: typing.List[typing.Union[str, typing.Tuple[str, str]]] = None,
html: bool = False,
check_dir: bool = True,
) -> None:
@@ -53,7 +53,9 @@
raise RuntimeError(f"Directory '{directory}' does not exist")
def get_directories(
- self, directory: PathLike = None, packages: typing.List[str] = None
+ self,
+ directory: PathLike = None,
+ packages: typing.List[typing.Union[str, typing.Tuple[str, str]]] = None,
) -> typing.List[PathLike]:
"""
Given `directory` and `packages` arguments, return a list of all the
@@ -64,17 +66,19 @@
directories.append(directory)
for package in packages or []:
+ if isinstance(package, tuple):
+ package, statics_dir = package
+ else:
+ statics_dir = "statics"
spec = importlib.util.find_spec(package)
assert spec is not None, f"Package {package!r} could not be found."
- assert (
- spec.origin is not None
- ), f"Directory 'statics' in package {package!r} could not be found."
+ assert spec.origin is not None, f"Package {package!r} could not be found."
package_directory = os.path.normpath(
- os.path.join(spec.origin, "..", "statics")
+ os.path.join(spec.origin, "..", statics_dir)
)
assert os.path.isdir(
package_directory
- ), f"Directory 'statics' in package {package!r} could not be found."
+ ), f"Directory '{statics_dir!r}' in package {package!r} could not be found."
directories.append(package_directory)
return directories
| {"golden_diff": "diff --git a/starlette/staticfiles.py b/starlette/staticfiles.py\n--- a/starlette/staticfiles.py\n+++ b/starlette/staticfiles.py\n@@ -40,7 +40,7 @@\n self,\n *,\n directory: PathLike = None,\n- packages: typing.List[str] = None,\n+ packages: typing.List[typing.Union[str, typing.Tuple[str, str]]] = None,\n html: bool = False,\n check_dir: bool = True,\n ) -> None:\n@@ -53,7 +53,9 @@\n raise RuntimeError(f\"Directory '{directory}' does not exist\")\n \n def get_directories(\n- self, directory: PathLike = None, packages: typing.List[str] = None\n+ self,\n+ directory: PathLike = None,\n+ packages: typing.List[typing.Union[str, typing.Tuple[str, str]]] = None,\n ) -> typing.List[PathLike]:\n \"\"\"\n Given `directory` and `packages` arguments, return a list of all the\n@@ -64,17 +66,19 @@\n directories.append(directory)\n \n for package in packages or []:\n+ if isinstance(package, tuple):\n+ package, statics_dir = package\n+ else:\n+ statics_dir = \"statics\"\n spec = importlib.util.find_spec(package)\n assert spec is not None, f\"Package {package!r} could not be found.\"\n- assert (\n- spec.origin is not None\n- ), f\"Directory 'statics' in package {package!r} could not be found.\"\n+ assert spec.origin is not None, f\"Package {package!r} could not be found.\"\n package_directory = os.path.normpath(\n- os.path.join(spec.origin, \"..\", \"statics\")\n+ os.path.join(spec.origin, \"..\", statics_dir)\n )\n assert os.path.isdir(\n package_directory\n- ), f\"Directory 'statics' in package {package!r} could not be found.\"\n+ ), f\"Directory '{statics_dir!r}' in package {package!r} could not be found.\"\n directories.append(package_directory)\n \n return directories\n", "issue": "StaticFiles support for directories other than \"statics\"\n### Checklist\r\n\r\n<!-- Please make sure you check all these items before submitting your feature request. -->\r\n\r\n- [/] There are no similar issues or pull requests for this yet.\r\n- [X - tried to get feedback but no replies] I discussed this idea on the [community chat](https://gitter.im/encode/community) and feedback is positive.\r\n\r\n### Is your feature related to a problem? Please describe.\r\n\r\nI want to be able to serve static files from arbitrary locations within a python package but I can't because currently it looks in a `statics` folder which is hardcoded.\r\n\r\n## Describe the solution you would like.\r\n\r\nI'd like to be able to pass as a parameter the path within the python package to look in for the static files.\r\n\r\n## Describe alternatives you considered\r\n\r\n<!-- Please describe any alternative solutions or features you've considered to solve\r\nyour problem and why they wouldn't solve it. -->\r\n\r\nI've considered changing the location of my packaged files to `statics` but this will have knock on effects across other systems, and `statics` itself is non-standard as far as I can tell.\r\n\r\n## Additional context\r\n\r\n<!-- Provide any additional context, screenshots, tracebacks, etc. about the feature here. 
-->\r\n\n", "before_files": [{"content": "import importlib.util\nimport os\nimport stat\nimport typing\nfrom email.utils import parsedate\n\nimport anyio\n\nfrom starlette.datastructures import URL, Headers\nfrom starlette.exceptions import HTTPException\nfrom starlette.responses import FileResponse, RedirectResponse, Response\nfrom starlette.types import Receive, Scope, Send\n\nPathLike = typing.Union[str, \"os.PathLike[str]\"]\n\n\nclass NotModifiedResponse(Response):\n NOT_MODIFIED_HEADERS = (\n \"cache-control\",\n \"content-location\",\n \"date\",\n \"etag\",\n \"expires\",\n \"vary\",\n )\n\n def __init__(self, headers: Headers):\n super().__init__(\n status_code=304,\n headers={\n name: value\n for name, value in headers.items()\n if name in self.NOT_MODIFIED_HEADERS\n },\n )\n\n\nclass StaticFiles:\n def __init__(\n self,\n *,\n directory: PathLike = None,\n packages: typing.List[str] = None,\n html: bool = False,\n check_dir: bool = True,\n ) -> None:\n self.directory = directory\n self.packages = packages\n self.all_directories = self.get_directories(directory, packages)\n self.html = html\n self.config_checked = False\n if check_dir and directory is not None and not os.path.isdir(directory):\n raise RuntimeError(f\"Directory '{directory}' does not exist\")\n\n def get_directories(\n self, directory: PathLike = None, packages: typing.List[str] = None\n ) -> typing.List[PathLike]:\n \"\"\"\n Given `directory` and `packages` arguments, return a list of all the\n directories that should be used for serving static files from.\n \"\"\"\n directories = []\n if directory is not None:\n directories.append(directory)\n\n for package in packages or []:\n spec = importlib.util.find_spec(package)\n assert spec is not None, f\"Package {package!r} could not be found.\"\n assert (\n spec.origin is not None\n ), f\"Directory 'statics' in package {package!r} could not be found.\"\n package_directory = os.path.normpath(\n os.path.join(spec.origin, \"..\", \"statics\")\n )\n assert os.path.isdir(\n package_directory\n ), f\"Directory 'statics' in package {package!r} could not be found.\"\n directories.append(package_directory)\n\n return directories\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n \"\"\"\n The ASGI entry point.\n \"\"\"\n assert scope[\"type\"] == \"http\"\n\n if not self.config_checked:\n await self.check_config()\n self.config_checked = True\n\n path = self.get_path(scope)\n response = await self.get_response(path, scope)\n await response(scope, receive, send)\n\n def get_path(self, scope: Scope) -> str:\n \"\"\"\n Given the ASGI scope, return the `path` string to serve up,\n with OS specific path seperators, and any '..', '.' 
components removed.\n \"\"\"\n return os.path.normpath(os.path.join(*scope[\"path\"].split(\"/\")))\n\n async def get_response(self, path: str, scope: Scope) -> Response:\n \"\"\"\n Returns an HTTP response, given the incoming path, method and request headers.\n \"\"\"\n if scope[\"method\"] not in (\"GET\", \"HEAD\"):\n raise HTTPException(status_code=405)\n\n try:\n full_path, stat_result = await anyio.to_thread.run_sync(\n self.lookup_path, path\n )\n except PermissionError:\n raise HTTPException(status_code=401)\n except OSError:\n raise\n\n if stat_result and stat.S_ISREG(stat_result.st_mode):\n # We have a static file to serve.\n return self.file_response(full_path, stat_result, scope)\n\n elif stat_result and stat.S_ISDIR(stat_result.st_mode) and self.html:\n # We're in HTML mode, and have got a directory URL.\n # Check if we have 'index.html' file to serve.\n index_path = os.path.join(path, \"index.html\")\n full_path, stat_result = await anyio.to_thread.run_sync(\n self.lookup_path, index_path\n )\n if stat_result is not None and stat.S_ISREG(stat_result.st_mode):\n if not scope[\"path\"].endswith(\"/\"):\n # Directory URLs should redirect to always end in \"/\".\n url = URL(scope=scope)\n url = url.replace(path=url.path + \"/\")\n return RedirectResponse(url=url)\n return self.file_response(full_path, stat_result, scope)\n\n if self.html:\n # Check for '404.html' if we're in HTML mode.\n full_path, stat_result = await anyio.to_thread.run_sync(\n self.lookup_path, \"404.html\"\n )\n if stat_result and stat.S_ISREG(stat_result.st_mode):\n return FileResponse(\n full_path,\n stat_result=stat_result,\n method=scope[\"method\"],\n status_code=404,\n )\n raise HTTPException(status_code=404)\n\n def lookup_path(\n self, path: str\n ) -> typing.Tuple[str, typing.Optional[os.stat_result]]:\n for directory in self.all_directories:\n full_path = os.path.realpath(os.path.join(directory, path))\n directory = os.path.realpath(directory)\n if os.path.commonprefix([full_path, directory]) != directory:\n # Don't allow misbehaving clients to break out of the static files\n # directory.\n continue\n try:\n return full_path, os.stat(full_path)\n except (FileNotFoundError, NotADirectoryError):\n continue\n return \"\", None\n\n def file_response(\n self,\n full_path: PathLike,\n stat_result: os.stat_result,\n scope: Scope,\n status_code: int = 200,\n ) -> Response:\n method = scope[\"method\"]\n request_headers = Headers(scope=scope)\n\n response = FileResponse(\n full_path, status_code=status_code, stat_result=stat_result, method=method\n )\n if self.is_not_modified(response.headers, request_headers):\n return NotModifiedResponse(response.headers)\n return response\n\n async def check_config(self) -> None:\n \"\"\"\n Perform a one-off configuration check that StaticFiles is actually\n pointed at a directory, so that we can raise loud errors rather than\n just returning 404 responses.\n \"\"\"\n if self.directory is None:\n return\n\n try:\n stat_result = await anyio.to_thread.run_sync(os.stat, self.directory)\n except FileNotFoundError:\n raise RuntimeError(\n f\"StaticFiles directory '{self.directory}' does not exist.\"\n )\n if not (stat.S_ISDIR(stat_result.st_mode) or stat.S_ISLNK(stat_result.st_mode)):\n raise RuntimeError(\n f\"StaticFiles path '{self.directory}' is not a directory.\"\n )\n\n def is_not_modified(\n self, response_headers: Headers, request_headers: Headers\n ) -> bool:\n \"\"\"\n Given the request and response headers, return `True` if an HTTP\n \"Not Modified\" response could 
be returned instead.\n \"\"\"\n try:\n if_none_match = request_headers[\"if-none-match\"]\n etag = response_headers[\"etag\"]\n if if_none_match == etag:\n return True\n except KeyError:\n pass\n\n try:\n if_modified_since = parsedate(request_headers[\"if-modified-since\"])\n last_modified = parsedate(response_headers[\"last-modified\"])\n if (\n if_modified_since is not None\n and last_modified is not None\n and if_modified_since >= last_modified\n ):\n return True\n except KeyError:\n pass\n\n return False\n", "path": "starlette/staticfiles.py"}]} | 3,074 | 480 |
gh_patches_debug_7039 | rasdani/github-patches | git_diff | encode__httpx-545 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Instances of `tempfile.TemporaryFile` fail when used as an upload file.
When using `tempfile.TemporaryFile` the `file.name` attribute returns an integer, rather than the usual path string, which causes a breakage for us further down the line...
```shell
venv/lib/python3.7/site-packages/httpx/client.py:484: in post
trust_env=trust_env,
venv/lib/python3.7/site-packages/httpx/client.py:616: in request
cookies=cookies,
venv/lib/python3.7/site-packages/httpx/client.py:356: in build_request
cookies=cookies,
venv/lib/python3.7/site-packages/httpx/models.py:696: in __init__
content, content_type = self.encode_data(data, files, json)
venv/lib/python3.7/site-packages/httpx/models.py:619: in encode_data
content, content_type = multipart_encode(data or {}, files)
venv/lib/python3.7/site-packages/httpx/multipart.py:100: in multipart_encode
for field in iter_fields(data, files):
venv/lib/python3.7/site-packages/httpx/multipart.py:93: in iter_fields
yield FileField(name=name, value=value)
venv/lib/python3.7/site-packages/httpx/multipart.py:51: in __init__
self.filename = Path(getattr(value, "name", "upload")).name
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py:994: in __new__
self = cls._from_parts(args, init=False)
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py:649: in _from_parts
drv, root, parts = self._parse_args(args)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'pathlib.PosixPath'>, args = (29,)
@classmethod
def _parse_args(cls, args):
# This is useful when you don't want to create an instance, just
# canonicalize some constructor arguments.
parts = []
for a in args:
if isinstance(a, PurePath):
parts += a._parts
else:
> a = os.fspath(a)
E TypeError: expected str, bytes or os.PathLike object, not int
```
Have also confirmed that the issue *doesn't* occur with `tempfile.NamedTemporaryFile`.
I believe the resolution will be on this line...
https://github.com/encode/httpx/blob/1a32cf036a825f6eb35395af5388a3b23180a82e/httpx/multipart.py#L51
I assume that this would be sufficient...
```python
self.filename = Path(str(getattr(value, "name", "upload"))).name
```
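
For reference, a minimal sketch of the underlying difference (on POSIX systems an anonymous temporary file has no path, so `name` is the raw file descriptor; the `29` in the traceback is just whichever descriptor happened to be allocated):

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryFile() as f:
    print(type(f.name))            # <class 'int'> -- a file descriptor, e.g. 29
    # Path(f.name) raises TypeError: expected str, bytes or os.PathLike object, not int
    print(Path(str(f.name)).name)  # works once coerced to str, e.g. "29"

with tempfile.NamedTemporaryFile() as f:
    print(type(f.name))            # <class 'str'> -- an actual filesystem path
    print(Path(f.name).name)       # e.g. "tmp3a1b2c4d"
```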
</issue>
<code>
[start of httpx/multipart.py]
1 import binascii
2 import mimetypes
3 import os
4 import re
5 import typing
6 from io import BytesIO
7 from pathlib import Path
8
9 _HTML5_FORM_ENCODING_REPLACEMENTS = {'"': "%22", "\\": "\\\\"}
10 _HTML5_FORM_ENCODING_REPLACEMENTS.update(
11 {chr(c): "%{:02X}".format(c) for c in range(0x00, 0x1F + 1) if c != 0x1B}
12 )
13 _HTML5_FORM_ENCODING_RE = re.compile(
14 r"|".join([re.escape(c) for c in _HTML5_FORM_ENCODING_REPLACEMENTS.keys()])
15 )
16
17
18 class Field:
19 def render_headers(self) -> bytes:
20 raise NotImplementedError() # pragma: nocover
21
22 def render_data(self) -> bytes:
23 raise NotImplementedError() # pragma: nocover
24
25
26 class DataField(Field):
27 def __init__(self, name: str, value: typing.Union[str, bytes]) -> None:
28 if not isinstance(name, str):
29 raise TypeError("Invalid type for name. Expected str.")
30 if not isinstance(value, (str, bytes)):
31 raise TypeError("Invalid type for value. Expected str or bytes.")
32 self.name = name
33 self.value = value
34
35 def render_headers(self) -> bytes:
36 name = _format_param("name", self.name)
37 return b"".join([b"Content-Disposition: form-data; ", name, b"\r\n\r\n"])
38
39 def render_data(self) -> bytes:
40 return (
41 self.value if isinstance(self.value, bytes) else self.value.encode("utf-8")
42 )
43
44
45 class FileField(Field):
46 def __init__(
47 self, name: str, value: typing.Union[typing.IO[typing.AnyStr], tuple]
48 ) -> None:
49 self.name = name
50 if not isinstance(value, tuple):
51 self.filename = Path(getattr(value, "name", "upload")).name
52 self.file = value # type: typing.Union[typing.IO[str], typing.IO[bytes]]
53 self.content_type = self.guess_content_type()
54 else:
55 self.filename = value[0]
56 self.file = value[1]
57 self.content_type = (
58 value[2] if len(value) > 2 else self.guess_content_type()
59 )
60
61 def guess_content_type(self) -> str:
62 if self.filename:
63 return mimetypes.guess_type(self.filename)[0] or "application/octet-stream"
64 else:
65 return "application/octet-stream"
66
67 def render_headers(self) -> bytes:
68 parts = [b"Content-Disposition: form-data; ", _format_param("name", self.name)]
69 if self.filename:
70 filename = _format_param("filename", self.filename)
71 parts.extend([b"; ", filename])
72 content_type = self.content_type.encode()
73 parts.extend([b"\r\nContent-Type: ", content_type, b"\r\n\r\n"])
74 return b"".join(parts)
75
76 def render_data(self) -> bytes:
77 if isinstance(self.file, str):
78 content = self.file
79 else:
80 content = self.file.read()
81 return content.encode("utf-8") if isinstance(content, str) else content
82
83
84 def iter_fields(data: dict, files: dict) -> typing.Iterator[Field]:
85 for name, value in data.items():
86 if isinstance(value, (list, dict)):
87 for item in value:
88 yield DataField(name=name, value=item)
89 else:
90 yield DataField(name=name, value=value)
91
92 for name, value in files.items():
93 yield FileField(name=name, value=value)
94
95
96 def multipart_encode(data: dict, files: dict) -> typing.Tuple[bytes, str]:
97 body = BytesIO()
98 boundary = binascii.hexlify(os.urandom(16))
99
100 for field in iter_fields(data, files):
101 body.write(b"--%s\r\n" % boundary)
102 body.write(field.render_headers())
103 body.write(field.render_data())
104 body.write(b"\r\n")
105
106 body.write(b"--%s--\r\n" % boundary)
107
108 content_type = "multipart/form-data; boundary=%s" % boundary.decode("ascii")
109
110 return body.getvalue(), content_type
111
112
113 def _format_param(name: str, value: typing.Union[str, bytes]) -> bytes:
114 if isinstance(value, bytes):
115 value = value.decode()
116
117 def replacer(match: typing.Match[str]) -> str:
118 return _HTML5_FORM_ENCODING_REPLACEMENTS[match.group(0)]
119
120 value = _HTML5_FORM_ENCODING_RE.sub(replacer, value)
121 return f'{name}="{value}"'.encode()
122
[end of httpx/multipart.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/httpx/multipart.py b/httpx/multipart.py
--- a/httpx/multipart.py
+++ b/httpx/multipart.py
@@ -48,7 +48,7 @@
) -> None:
self.name = name
if not isinstance(value, tuple):
- self.filename = Path(getattr(value, "name", "upload")).name
+ self.filename = Path(str(getattr(value, "name", "upload"))).name
self.file = value # type: typing.Union[typing.IO[str], typing.IO[bytes]]
self.content_type = self.guess_content_type()
else:
| {"golden_diff": "diff --git a/httpx/multipart.py b/httpx/multipart.py\n--- a/httpx/multipart.py\n+++ b/httpx/multipart.py\n@@ -48,7 +48,7 @@\n ) -> None:\n self.name = name\n if not isinstance(value, tuple):\n- self.filename = Path(getattr(value, \"name\", \"upload\")).name\n+ self.filename = Path(str(getattr(value, \"name\", \"upload\"))).name\n self.file = value # type: typing.Union[typing.IO[str], typing.IO[bytes]]\n self.content_type = self.guess_content_type()\n else:\n", "issue": "Instances of `tempfile.TemporaryFile` fail when used as an upload file.\nWhen using `tempfile.TemporaryFile` the `file.name` attribute returns an integer, rather than the usual path string, which causes a breakage for us further down the line...\r\n\r\n```shell\r\nvenv/lib/python3.7/site-packages/httpx/client.py:484: in post\r\n trust_env=trust_env,\r\nvenv/lib/python3.7/site-packages/httpx/client.py:616: in request\r\n cookies=cookies,\r\nvenv/lib/python3.7/site-packages/httpx/client.py:356: in build_request\r\n cookies=cookies,\r\nvenv/lib/python3.7/site-packages/httpx/models.py:696: in __init__\r\n content, content_type = self.encode_data(data, files, json)\r\nvenv/lib/python3.7/site-packages/httpx/models.py:619: in encode_data\r\n content, content_type = multipart_encode(data or {}, files)\r\nvenv/lib/python3.7/site-packages/httpx/multipart.py:100: in multipart_encode\r\n for field in iter_fields(data, files):\r\nvenv/lib/python3.7/site-packages/httpx/multipart.py:93: in iter_fields\r\n yield FileField(name=name, value=value)\r\nvenv/lib/python3.7/site-packages/httpx/multipart.py:51: in __init__\r\n self.filename = Path(getattr(value, \"name\", \"upload\")).name\r\n/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py:994: in __new__\r\n self = cls._from_parts(args, init=False)\r\n/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py:649: in _from_parts\r\n drv, root, parts = self._parse_args(args)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\ncls = <class 'pathlib.PosixPath'>, args = (29,)\r\n\r\n @classmethod\r\n def _parse_args(cls, args):\r\n # This is useful when you don't want to create an instance, just\r\n # canonicalize some constructor arguments.\r\n parts = []\r\n for a in args:\r\n if isinstance(a, PurePath):\r\n parts += a._parts\r\n else:\r\n> a = os.fspath(a)\r\nE TypeError: expected str, bytes or os.PathLike object, not int\r\n```\r\n\r\nHave also confirmed that the issue *doesn't* occur with `tempfile.NamedTemporaryFile`.\r\n\r\nI believe the resolution will be on this line...\r\n\r\nhttps://github.com/encode/httpx/blob/1a32cf036a825f6eb35395af5388a3b23180a82e/httpx/multipart.py#L51\r\n\r\nI assume that this would be sufficient...\r\n\r\n```python\r\nself.filename = Path(str(getattr(value, \"name\", \"upload\")).name \r\n```\r\n\n", "before_files": [{"content": "import binascii\nimport mimetypes\nimport os\nimport re\nimport typing\nfrom io import BytesIO\nfrom pathlib import Path\n\n_HTML5_FORM_ENCODING_REPLACEMENTS = {'\"': \"%22\", \"\\\\\": \"\\\\\\\\\"}\n_HTML5_FORM_ENCODING_REPLACEMENTS.update(\n {chr(c): \"%{:02X}\".format(c) for c in range(0x00, 0x1F + 1) if c != 0x1B}\n)\n_HTML5_FORM_ENCODING_RE = re.compile(\n r\"|\".join([re.escape(c) for c in _HTML5_FORM_ENCODING_REPLACEMENTS.keys()])\n)\n\n\nclass Field:\n def 
render_headers(self) -> bytes:\n raise NotImplementedError() # pragma: nocover\n\n def render_data(self) -> bytes:\n raise NotImplementedError() # pragma: nocover\n\n\nclass DataField(Field):\n def __init__(self, name: str, value: typing.Union[str, bytes]) -> None:\n if not isinstance(name, str):\n raise TypeError(\"Invalid type for name. Expected str.\")\n if not isinstance(value, (str, bytes)):\n raise TypeError(\"Invalid type for value. Expected str or bytes.\")\n self.name = name\n self.value = value\n\n def render_headers(self) -> bytes:\n name = _format_param(\"name\", self.name)\n return b\"\".join([b\"Content-Disposition: form-data; \", name, b\"\\r\\n\\r\\n\"])\n\n def render_data(self) -> bytes:\n return (\n self.value if isinstance(self.value, bytes) else self.value.encode(\"utf-8\")\n )\n\n\nclass FileField(Field):\n def __init__(\n self, name: str, value: typing.Union[typing.IO[typing.AnyStr], tuple]\n ) -> None:\n self.name = name\n if not isinstance(value, tuple):\n self.filename = Path(getattr(value, \"name\", \"upload\")).name\n self.file = value # type: typing.Union[typing.IO[str], typing.IO[bytes]]\n self.content_type = self.guess_content_type()\n else:\n self.filename = value[0]\n self.file = value[1]\n self.content_type = (\n value[2] if len(value) > 2 else self.guess_content_type()\n )\n\n def guess_content_type(self) -> str:\n if self.filename:\n return mimetypes.guess_type(self.filename)[0] or \"application/octet-stream\"\n else:\n return \"application/octet-stream\"\n\n def render_headers(self) -> bytes:\n parts = [b\"Content-Disposition: form-data; \", _format_param(\"name\", self.name)]\n if self.filename:\n filename = _format_param(\"filename\", self.filename)\n parts.extend([b\"; \", filename])\n content_type = self.content_type.encode()\n parts.extend([b\"\\r\\nContent-Type: \", content_type, b\"\\r\\n\\r\\n\"])\n return b\"\".join(parts)\n\n def render_data(self) -> bytes:\n if isinstance(self.file, str):\n content = self.file\n else:\n content = self.file.read()\n return content.encode(\"utf-8\") if isinstance(content, str) else content\n\n\ndef iter_fields(data: dict, files: dict) -> typing.Iterator[Field]:\n for name, value in data.items():\n if isinstance(value, (list, dict)):\n for item in value:\n yield DataField(name=name, value=item)\n else:\n yield DataField(name=name, value=value)\n\n for name, value in files.items():\n yield FileField(name=name, value=value)\n\n\ndef multipart_encode(data: dict, files: dict) -> typing.Tuple[bytes, str]:\n body = BytesIO()\n boundary = binascii.hexlify(os.urandom(16))\n\n for field in iter_fields(data, files):\n body.write(b\"--%s\\r\\n\" % boundary)\n body.write(field.render_headers())\n body.write(field.render_data())\n body.write(b\"\\r\\n\")\n\n body.write(b\"--%s--\\r\\n\" % boundary)\n\n content_type = \"multipart/form-data; boundary=%s\" % boundary.decode(\"ascii\")\n\n return body.getvalue(), content_type\n\n\ndef _format_param(name: str, value: typing.Union[str, bytes]) -> bytes:\n if isinstance(value, bytes):\n value = value.decode()\n\n def replacer(match: typing.Match[str]) -> str:\n return _HTML5_FORM_ENCODING_REPLACEMENTS[match.group(0)]\n\n value = _HTML5_FORM_ENCODING_RE.sub(replacer, value)\n return f'{name}=\"{value}\"'.encode()\n", "path": "httpx/multipart.py"}]} | 2,566 | 137 |
gh_patches_debug_1213 | rasdani/github-patches | git_diff | scalableminds__webknossos-libs-312 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Convenience for wkcuber.api
To open/create a dataset with the cool new high-level API the following code is required:
```python
from wkcuber.api.Dataset import WKDataset
from pathlib import Path
ds1 = WKDataset.create(Path("path") / "to" / "dataset1", scale=(128,128,128))
ds2 = WKDataset.open(Path("path") / "to" / "dataset2")
```
For one-off scripts, I think it could be a bit more convenient if we had an API like this:
```python
from wkcuber import WKDataset
ds1 = WKDataset.create("path/to/dataset1", scale=(128, 128, 128))
ds2 = WKDataset.open("path/to/dataset2")
```
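
A sketch of what that could take, assuming nothing more than a package-level re-export plus a `Path` coercion (names and placement are illustrative, not a final design):

```python
# wkcuber/__init__.py -- re-export so `from wkcuber import WKDataset` works
from .api.Dataset import WKDataset  # noqa: F401

# and inside WKDataset.create()/open(), plain strings could be normalised up front:
#     dataset_path = Path(dataset_path)   # accepts both str and pathlib.Path
```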
Any thoughts? @rschwanhold @jstriebel @philippotto
</issue>
<code>
[start of wkcuber/__init__.py]
1 from .cubing import cubing
2 from .downsampling import downsample_mags
3 from .compress import compress_mag
4 from .metadata import write_webknossos_metadata
5
[end of wkcuber/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wkcuber/__init__.py b/wkcuber/__init__.py
--- a/wkcuber/__init__.py
+++ b/wkcuber/__init__.py
@@ -1,4 +1,6 @@
+from .api.Dataset import WKDataset
from .cubing import cubing
from .downsampling import downsample_mags
from .compress import compress_mag
+from .mag import Mag
from .metadata import write_webknossos_metadata
| {"golden_diff": "diff --git a/wkcuber/__init__.py b/wkcuber/__init__.py\n--- a/wkcuber/__init__.py\n+++ b/wkcuber/__init__.py\n@@ -1,4 +1,6 @@\n+from .api.Dataset import WKDataset\n from .cubing import cubing\n from .downsampling import downsample_mags\n from .compress import compress_mag\n+from .mag import Mag\n from .metadata import write_webknossos_metadata\n", "issue": "Convenience for wkcuber.api\nTo open/create a dataset with the cool new high-level API the following code is required:\r\n\r\n```python\r\nfrom wkcuber.api.Dataset import WKDataset\r\nfrom pathlib import Path\r\n\r\nds1 = WKDataset.create(Path(\"path\") / \"to\" / \"dataset1\", scale=(128,128,128))\r\nds2 = WKDataset.open(Path(\"path\") / \"to\" / \"dataset2\")\r\n\r\n```\r\n\r\nFor one-off scripts, I think that could be a bit more convenient, if we had an API like this\r\n\r\n```python\r\nfrom wkcuber import WKDataset\r\n\r\nds1 = WKDataset.create(\"path/to/dataset1\", scale=(128, 128, 128))\r\nds2 = WKDataset.open(\"path/to/dataset2\")\r\n```\r\n\r\nAny thoughts? @rschwanhold @jstriebel @philippotto \r\n\n", "before_files": [{"content": "from .cubing import cubing\nfrom .downsampling import downsample_mags\nfrom .compress import compress_mag\nfrom .metadata import write_webknossos_metadata\n", "path": "wkcuber/__init__.py"}]} | 777 | 103 |
gh_patches_debug_15531 | rasdani/github-patches | git_diff | tensorflow__addons-2299 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Enable SSE4.2 and AVX support during build
So the pip-installed TF does not support these instruction sets by default, but modern-ish CPUs do (roughly CPUs released after 2012).
We could try this and see if there are any improvements in test times and weigh the benefits. If nothing else, we can add it as a flag for building from source. Currently TF-IO does this by default:
https://github.com/tensorflow/io/blob/master/.github/workflows/build.yml#L13
@perfinion do we know if this is on the roadmap for default TF installations?
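
For context, a minimal sketch of where such flags could be emitted in this repo's `configure.py` (shown below); `is_linux()` is a hypothetical helper alongside the existing `is_macos()`/`is_windows()`, and the exact instruction-set target is up for discussion:

```python
if is_macos() or is_linux():
    write("build --copt=-msse4.2")
    write("build --copt=-mavx")
if is_windows():
    write("build:windows --copt=/arch:AVX")  # MSVC exposes AVX but has no separate SSE4.2 switch
```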
</issue>
<code>
[start of configure.py]
1 # Copyright 2020 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 # Usage: python configure.py
16 #
17
18
19 import os
20 import pathlib
21 import platform
22 import logging
23
24 import tensorflow as tf
25
26 _TFA_BAZELRC = ".bazelrc"
27
28
29 # Writes variables to bazelrc file
30 def write(line):
31 with open(_TFA_BAZELRC, "a") as f:
32 f.write(line + "\n")
33
34
35 def write_action_env(var_name, var):
36 write('build --action_env {}="{}"'.format(var_name, var))
37
38
39 def is_macos():
40 return platform.system() == "Darwin"
41
42
43 def is_windows():
44 return platform.system() == "Windows"
45
46
47 def is_raspi_arm():
48 return os.uname()[4] == "armv7l"
49
50
51 def get_tf_header_dir():
52 import tensorflow as tf
53
54 tf_header_dir = tf.sysconfig.get_compile_flags()[0][2:]
55 if is_windows():
56 tf_header_dir = tf_header_dir.replace("\\", "/")
57 return tf_header_dir
58
59
60 def get_tf_shared_lib_dir():
61 import tensorflow as tf
62
63 # OS Specific parsing
64 if is_windows():
65 tf_shared_lib_dir = tf.sysconfig.get_compile_flags()[0][2:-7] + "python"
66 return tf_shared_lib_dir.replace("\\", "/")
67 elif is_raspi_arm():
68 return tf.sysconfig.get_compile_flags()[0][2:-7] + "python"
69 else:
70 return tf.sysconfig.get_link_flags()[0][2:]
71
72
73 # Converts the linkflag namespec to the full shared library name
74 def get_shared_lib_name():
75 import tensorflow as tf
76
77 namespec = tf.sysconfig.get_link_flags()
78 if is_macos():
79 # MacOS
80 return "lib" + namespec[1][2:] + ".dylib"
81 elif is_windows():
82 # Windows
83 return "_pywrap_tensorflow_internal.lib"
84 elif is_raspi_arm():
85 # The below command for linux would return an empty list
86 return "_pywrap_tensorflow_internal.so"
87 else:
88 # Linux
89 return namespec[1][3:]
90
91
92 def create_build_configuration():
93 print()
94 print("Configuring TensorFlow Addons to be built from source...")
95
96 if os.path.isfile(_TFA_BAZELRC):
97 os.remove(_TFA_BAZELRC)
98
99 logging.disable(logging.WARNING)
100
101 write_action_env("TF_HEADER_DIR", get_tf_header_dir())
102 write_action_env("TF_SHARED_LIBRARY_DIR", get_tf_shared_lib_dir())
103 write_action_env("TF_SHARED_LIBRARY_NAME", get_shared_lib_name())
104 write_action_env("TF_CXX11_ABI_FLAG", tf.sysconfig.CXX11_ABI_FLAG)
105
106 write("build --spawn_strategy=standalone")
107 write("build --strategy=Genrule=standalone")
108 write("build -c opt")
109
110 if is_windows():
111 write("build --config=windows")
112 write("build:windows --copt=/experimental:preprocessor")
113 write("build:windows --host_copt=/experimental:preprocessor")
114
115 if os.getenv("TF_NEED_CUDA", "0") == "1":
116 print("> Building GPU & CPU ops")
117 configure_cuda()
118 else:
119 print("> Building only CPU ops")
120
121 print()
122 print("Build configurations successfully written to", _TFA_BAZELRC, ":\n")
123 print(pathlib.Path(_TFA_BAZELRC).read_text())
124
125
126 def configure_cuda():
127 write_action_env("TF_NEED_CUDA", "1")
128 write_action_env(
129 "CUDA_TOOLKIT_PATH", os.getenv("CUDA_TOOLKIT_PATH", "/usr/local/cuda")
130 )
131 write_action_env(
132 "CUDNN_INSTALL_PATH",
133 os.getenv("CUDNN_INSTALL_PATH", "/usr/lib/x86_64-linux-gnu"),
134 )
135 write_action_env("TF_CUDA_VERSION", os.getenv("TF_CUDA_VERSION", "11"))
136 write_action_env("TF_CUDNN_VERSION", os.getenv("TF_CUDNN_VERSION", "8"))
137
138 write("test --config=cuda")
139 write("build --config=cuda")
140 write("build:cuda --define=using_cuda=true --define=using_cuda_nvcc=true")
141 write("build:cuda --crosstool_top=@local_config_cuda//crosstool:toolchain")
142
143
144 if __name__ == "__main__":
145 create_build_configuration()
146
[end of configure.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/configure.py b/configure.py
--- a/configure.py
+++ b/configure.py
@@ -44,6 +44,10 @@
return platform.system() == "Windows"
+def is_linux():
+ return platform.system() == "Linux"
+
+
def is_raspi_arm():
return os.uname()[4] == "armv7l"
@@ -111,6 +115,10 @@
write("build --config=windows")
write("build:windows --copt=/experimental:preprocessor")
write("build:windows --host_copt=/experimental:preprocessor")
+ write("build:windows --copt=/arch=AVX2")
+
+ if is_macos() or is_linux():
+ write("build --copt=-mavx2")
if os.getenv("TF_NEED_CUDA", "0") == "1":
print("> Building GPU & CPU ops")
| {"golden_diff": "diff --git a/configure.py b/configure.py\n--- a/configure.py\n+++ b/configure.py\n@@ -44,6 +44,10 @@\n return platform.system() == \"Windows\"\n \n \n+def is_linux():\n+ return platform.system() == \"Linux\"\n+\n+\n def is_raspi_arm():\n return os.uname()[4] == \"armv7l\"\n \n@@ -111,6 +115,10 @@\n write(\"build --config=windows\")\n write(\"build:windows --copt=/experimental:preprocessor\")\n write(\"build:windows --host_copt=/experimental:preprocessor\")\n+ write(\"build:windows --copt=/arch=AVX2\")\n+\n+ if is_macos() or is_linux():\n+ write(\"build --copt=-mavx2\")\n \n if os.getenv(\"TF_NEED_CUDA\", \"0\") == \"1\":\n print(\"> Building GPU & CPU ops\")\n", "issue": "Enable SSE4.2 and AVX support during build\nSo the pip installed TF does not support these instruction sets by default, but modern-ish CPUs do. (Roughly CPUs after 2012).\r\n\r\nWe could try this and see if there are any improvements in test times and weight the benefits. If nothing else we can add it as a flag for building from source. Currently TF-IO does this by default:\r\nhttps://github.com/tensorflow/io/blob/master/.github/workflows/build.yml#L13\r\n\r\n@perfinion do we know if this is on the roadmap for default TF installations?\n", "before_files": [{"content": "# Copyright 2020 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n# Usage: python configure.py\n#\n\n\nimport os\nimport pathlib\nimport platform\nimport logging\n\nimport tensorflow as tf\n\n_TFA_BAZELRC = \".bazelrc\"\n\n\n# Writes variables to bazelrc file\ndef write(line):\n with open(_TFA_BAZELRC, \"a\") as f:\n f.write(line + \"\\n\")\n\n\ndef write_action_env(var_name, var):\n write('build --action_env {}=\"{}\"'.format(var_name, var))\n\n\ndef is_macos():\n return platform.system() == \"Darwin\"\n\n\ndef is_windows():\n return platform.system() == \"Windows\"\n\n\ndef is_raspi_arm():\n return os.uname()[4] == \"armv7l\"\n\n\ndef get_tf_header_dir():\n import tensorflow as tf\n\n tf_header_dir = tf.sysconfig.get_compile_flags()[0][2:]\n if is_windows():\n tf_header_dir = tf_header_dir.replace(\"\\\\\", \"/\")\n return tf_header_dir\n\n\ndef get_tf_shared_lib_dir():\n import tensorflow as tf\n\n # OS Specific parsing\n if is_windows():\n tf_shared_lib_dir = tf.sysconfig.get_compile_flags()[0][2:-7] + \"python\"\n return tf_shared_lib_dir.replace(\"\\\\\", \"/\")\n elif is_raspi_arm():\n return tf.sysconfig.get_compile_flags()[0][2:-7] + \"python\"\n else:\n return tf.sysconfig.get_link_flags()[0][2:]\n\n\n# Converts the linkflag namespec to the full shared library name\ndef get_shared_lib_name():\n import tensorflow as tf\n\n namespec = tf.sysconfig.get_link_flags()\n if is_macos():\n # MacOS\n return \"lib\" + namespec[1][2:] + \".dylib\"\n elif is_windows():\n # Windows\n return \"_pywrap_tensorflow_internal.lib\"\n elif is_raspi_arm():\n # The below command for linux would return an empty list\n return 
\"_pywrap_tensorflow_internal.so\"\n else:\n # Linux\n return namespec[1][3:]\n\n\ndef create_build_configuration():\n print()\n print(\"Configuring TensorFlow Addons to be built from source...\")\n\n if os.path.isfile(_TFA_BAZELRC):\n os.remove(_TFA_BAZELRC)\n\n logging.disable(logging.WARNING)\n\n write_action_env(\"TF_HEADER_DIR\", get_tf_header_dir())\n write_action_env(\"TF_SHARED_LIBRARY_DIR\", get_tf_shared_lib_dir())\n write_action_env(\"TF_SHARED_LIBRARY_NAME\", get_shared_lib_name())\n write_action_env(\"TF_CXX11_ABI_FLAG\", tf.sysconfig.CXX11_ABI_FLAG)\n\n write(\"build --spawn_strategy=standalone\")\n write(\"build --strategy=Genrule=standalone\")\n write(\"build -c opt\")\n\n if is_windows():\n write(\"build --config=windows\")\n write(\"build:windows --copt=/experimental:preprocessor\")\n write(\"build:windows --host_copt=/experimental:preprocessor\")\n\n if os.getenv(\"TF_NEED_CUDA\", \"0\") == \"1\":\n print(\"> Building GPU & CPU ops\")\n configure_cuda()\n else:\n print(\"> Building only CPU ops\")\n\n print()\n print(\"Build configurations successfully written to\", _TFA_BAZELRC, \":\\n\")\n print(pathlib.Path(_TFA_BAZELRC).read_text())\n\n\ndef configure_cuda():\n write_action_env(\"TF_NEED_CUDA\", \"1\")\n write_action_env(\n \"CUDA_TOOLKIT_PATH\", os.getenv(\"CUDA_TOOLKIT_PATH\", \"/usr/local/cuda\")\n )\n write_action_env(\n \"CUDNN_INSTALL_PATH\",\n os.getenv(\"CUDNN_INSTALL_PATH\", \"/usr/lib/x86_64-linux-gnu\"),\n )\n write_action_env(\"TF_CUDA_VERSION\", os.getenv(\"TF_CUDA_VERSION\", \"11\"))\n write_action_env(\"TF_CUDNN_VERSION\", os.getenv(\"TF_CUDNN_VERSION\", \"8\"))\n\n write(\"test --config=cuda\")\n write(\"build --config=cuda\")\n write(\"build:cuda --define=using_cuda=true --define=using_cuda_nvcc=true\")\n write(\"build:cuda --crosstool_top=@local_config_cuda//crosstool:toolchain\")\n\n\nif __name__ == \"__main__\":\n create_build_configuration()\n", "path": "configure.py"}]} | 2,077 | 211 |
gh_patches_debug_16647 | rasdani/github-patches | git_diff | ietf-tools__datatracker-3832 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
person_link error in charter document view
### What happened?
A 500 error occurs when retrieving some group charter documents. E.g., when getting `/doc/charter-ietf-cat/`:
```
ERROR: django.request:228: Internal Server Error: /doc/charter-ietf-cat/
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/workspace/ietf/doc/views_doc.py", line 548, in document_main
can_manage=can_manage,
File "/usr/local/lib/python3.6/site-packages/django/shortcuts.py", line 36, in render
content = loader.render_to_string(template_name, context, request, using=using)
File "/usr/local/lib/python3.6/site-packages/django/template/loader.py", line 62, in render_to_string
return template.render(context, request)
File "/usr/local/lib/python3.6/site-packages/django/template/backends/django.py", line 61, in render
return self.template.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 171, in render
return self._render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 163, in _render
return self.nodelist.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 937, in render
bit = node.render_annotated(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 904, in render_annotated
return self.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/loader_tags.py", line 150, in render
return compiled_parent._render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 163, in _render
return self.nodelist.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 937, in render
bit = node.render_annotated(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 904, in render_annotated
return self.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/loader_tags.py", line 62, in render
result = block.nodelist.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 937, in render
bit = node.render_annotated(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 904, in render_annotated
return self.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/defaulttags.py", line 312, in render
return nodelist.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 937, in render
bit = node.render_annotated(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 904, in render_annotated
return self.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/library.py", line 214, in render
_dict = self.func(*resolved_args, **resolved_kwargs)
File "/workspace/ietf/person/templatetags/person_filters.py", line 44, in person_link
plain_name = person.plain_name()
AttributeError: 'str' object has no attribute 'plain_name'
```
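
The crash can be reproduced without rendering the whole page: Django resolves an invalid template variable to a string, and any non-empty string reaching `person_link` fails on `plain_name()`. A sketch (run in a configured Django shell; values are illustrative, and the guard shown is one defensive option, not necessarily the final fix):

```python
from ietf.person.templatetags.person_filters import person_link

person_link(None)         # returns {} and renders as "(None)"
person_link("")           # "" is falsy, so today this also returns {}
person_link("Some Name")  # AttributeError: 'str' object has no attribute 'plain_name'

# One defensive option: normalise strings to None before touching attributes,
# so a bad template variable degrades to "(None)" instead of a 500.
def person_link_guarded(person, **kwargs):
    if isinstance(person, str):
        person = None
    if person is None:
        return {}
    ...  # original body unchanged
```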
### What browser(s) are you seeing the problem on?
Not Applicable
### Code of Conduct
- [X] I agree to follow the IETF's Code of Conduct
</issue>
<code>
[start of ietf/person/templatetags/person_filters.py]
1 # Copyright The IETF Trust 2017-2020, All Rights Reserved
2
3 import datetime
4
5 from django import template
6
7 import debug # pyflakes:ignore
8
9 from ietf.nomcom.utils import is_eligible
10 from ietf.person.models import Alias
11
12 register = template.Library()
13
14
15 @register.filter
16 def is_nomcom_eligible(person, date=datetime.date.today()):
17 return is_eligible(person=person, date=date)
18
19
20 @register.filter
21 def person_by_name(name):
22 "Look up a person record from name"
23 if not isinstance(name, (type(b""), type(""))):
24 return None
25 alias = Alias.objects.filter(name=name).first()
26 return alias.person if alias else None
27
28
29 # CLEANUP: There are several hundred Person objects with no Alias object,
30 # violating the expectations of the code. The check for the existence of an
31 # alias object below matching the person's name avoids presenting a link that
32 # we know will 404. When the database is corrected and we can expect that the
33 # Alias for the person's name to always be there, we can remove this extra
34 # database query (or leave it as a safeguard until it becomes a performance
35 # issue.)
36
37
38 @register.inclusion_tag("person/person_link.html")
39 def person_link(person, **kwargs):
40 title = kwargs.get("title", "")
41 cls = kwargs.get("class", "")
42 with_email = kwargs.get("with_email", True)
43 if person:
44 plain_name = person.plain_name()
45 name = (
46 person.name
47 if person.alias_set.filter(name=person.name).exists()
48 else plain_name
49 )
50 email = person.email_address()
51 return {
52 "name": name,
53 "plain_name": plain_name,
54 "email": email,
55 "title": title,
56 "class": cls,
57 "with_email": with_email,
58 }
59 else:
60 return {}
61
62
63 @register.inclusion_tag("person/person_link.html")
64 def email_person_link(email, **kwargs):
65 title = kwargs.get("title", "")
66 cls = kwargs.get("class", "")
67 with_email = kwargs.get("with_email", True)
68 plain_name = email.person.plain_name()
69 name = (
70 email.person.name
71 if email.person.alias_set.filter(name=email.person.name).exists()
72 else plain_name
73 )
74 email = email.address
75 return {
76 "name": name,
77 "plain_name": plain_name,
78 "email": email,
79 "title": title,
80 "class": cls,
81 "with_email": with_email,
82 }
[end of ietf/person/templatetags/person_filters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ietf/person/templatetags/person_filters.py b/ietf/person/templatetags/person_filters.py
--- a/ietf/person/templatetags/person_filters.py
+++ b/ietf/person/templatetags/person_filters.py
@@ -37,10 +37,19 @@
@register.inclusion_tag("person/person_link.html")
def person_link(person, **kwargs):
+ """Render a link to a Person
+
+ If person is None or a string, renders as a span containing '(None)'.
+ """
+ if isinstance(person, str):
+ # If person is a string, most likely an invalid template variable was referenced.
+ # That normally comes in as an empty string, but may be non-empty if string_if_invalid
+ # is set. Translate strings into None to try to get consistent behavior.
+ person = None
title = kwargs.get("title", "")
cls = kwargs.get("class", "")
with_email = kwargs.get("with_email", True)
- if person:
+ if person is not None:
plain_name = person.plain_name()
name = (
person.name
| {"golden_diff": "diff --git a/ietf/person/templatetags/person_filters.py b/ietf/person/templatetags/person_filters.py\n--- a/ietf/person/templatetags/person_filters.py\n+++ b/ietf/person/templatetags/person_filters.py\n@@ -37,10 +37,19 @@\n \n @register.inclusion_tag(\"person/person_link.html\")\n def person_link(person, **kwargs):\n+ \"\"\"Render a link to a Person\n+\n+ If person is None or a string, renders as a span containing '(None)'.\n+ \"\"\"\n+ if isinstance(person, str):\n+ # If person is a string, most likely an invalid template variable was referenced.\n+ # That normally comes in as an empty string, but may be non-empty if string_if_invalid\n+ # is set. Translate strings into None to try to get consistent behavior.\n+ person = None\n title = kwargs.get(\"title\", \"\")\n cls = kwargs.get(\"class\", \"\")\n with_email = kwargs.get(\"with_email\", True)\n- if person:\n+ if person is not None:\n plain_name = person.plain_name()\n name = (\n person.name\n", "issue": "person_link error in charter document view\n### What happened?\n\nA 500 error occurs when retrieving some group charter documents. E.g., when getting `/doc/charter-ietf-cat/`:\r\n```\r\nERROR: django.request:228: Internal Server Error: /doc/charter-ietf-cat/\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/site-packages/django/core/handlers/exception.py\", line 34, in inner\r\n response = get_response(request)\r\n File \"/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py\", line 115, in _get_response\r\n response = self.process_exception_by_middleware(e, request)\r\n File \"/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py\", line 113, in _get_response\r\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\r\n File \"/workspace/ietf/doc/views_doc.py\", line 548, in document_main\r\n can_manage=can_manage,\r\n File \"/usr/local/lib/python3.6/site-packages/django/shortcuts.py\", line 36, in render\r\n content = loader.render_to_string(template_name, context, request, using=using)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/loader.py\", line 62, in render_to_string\r\n return template.render(context, request)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/backends/django.py\", line 61, in render\r\n return self.template.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 171, in render\r\n return self._render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 163, in _render\r\n return self.nodelist.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 937, in render\r\n bit = node.render_annotated(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 904, in render_annotated\r\n return self.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/loader_tags.py\", line 150, in render\r\n return compiled_parent._render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 163, in _render\r\n return self.nodelist.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 937, in render\r\n bit = node.render_annotated(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 904, in render_annotated\r\n return self.render(context)\r\n File 
\"/usr/local/lib/python3.6/site-packages/django/template/loader_tags.py\", line 62, in render\r\n result = block.nodelist.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 937, in render\r\n bit = node.render_annotated(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 904, in render_annotated\r\n return self.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/defaulttags.py\", line 312, in render\r\n return nodelist.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 937, in render\r\n bit = node.render_annotated(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 904, in render_annotated\r\n return self.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/library.py\", line 214, in render\r\n _dict = self.func(*resolved_args, **resolved_kwargs)\r\n File \"/workspace/ietf/person/templatetags/person_filters.py\", line 44, in person_link\r\n plain_name = person.plain_name()\r\nAttributeError: 'str' object has no attribute 'plain_name'\r\nERROR: django.request:228: Internal Server Error: /doc/charter-ietf-cat/\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/site-packages/django/core/handlers/exception.py\", line 34, in inner\r\n response = get_response(request)\r\n File \"/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py\", line 115, in _get_response\r\n response = self.process_exception_by_middleware(e, request)\r\n File \"/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py\", line 113, in _get_response\r\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\r\n File \"/workspace/ietf/doc/views_doc.py\", line 548, in document_main\r\n can_manage=can_manage,\r\n File \"/usr/local/lib/python3.6/site-packages/django/shortcuts.py\", line 36, in render\r\n content = loader.render_to_string(template_name, context, request, using=using)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/loader.py\", line 62, in render_to_string\r\n return template.render(context, request)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/backends/django.py\", line 61, in render\r\n return self.template.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 171, in render\r\n return self._render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 163, in _render\r\n return self.nodelist.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 937, in render\r\n bit = node.render_annotated(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 904, in render_annotated\r\n return self.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/loader_tags.py\", line 150, in render\r\n return compiled_parent._render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 163, in _render\r\n return self.nodelist.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 937, in render\r\n bit = node.render_annotated(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 904, in render_annotated\r\n return self.render(context)\r\n File 
\"/usr/local/lib/python3.6/site-packages/django/template/loader_tags.py\", line 62, in render\r\n result = block.nodelist.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 937, in render\r\n bit = node.render_annotated(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 904, in render_annotated\r\n return self.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/defaulttags.py\", line 312, in render\r\n return nodelist.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 937, in render\r\n bit = node.render_annotated(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 904, in render_annotated\r\n return self.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/library.py\", line 214, in render\r\n _dict = self.func(*resolved_args, **resolved_kwargs)\r\n File \"/workspace/ietf/person/templatetags/person_filters.py\", line 44, in person_link\r\n plain_name = person.plain_name()\r\nAttributeError: 'str' object has no attribute 'plain_name'\r\n```\n\n### What browser(s) are you seeing the problem on?\n\nNot Applicable\n\n### Code of Conduct\n\n- [X] I agree to follow the IETF's Code of Conduct\n", "before_files": [{"content": "# Copyright The IETF Trust 2017-2020, All Rights Reserved\n\nimport datetime\n\nfrom django import template\n\nimport debug # pyflakes:ignore\n\nfrom ietf.nomcom.utils import is_eligible\nfrom ietf.person.models import Alias\n\nregister = template.Library()\n\n\[email protected]\ndef is_nomcom_eligible(person, date=datetime.date.today()):\n return is_eligible(person=person, date=date)\n\n\[email protected]\ndef person_by_name(name):\n \"Look up a person record from name\"\n if not isinstance(name, (type(b\"\"), type(\"\"))):\n return None\n alias = Alias.objects.filter(name=name).first()\n return alias.person if alias else None\n\n\n# CLEANUP: There are several hundred Person objects with no Alias object,\n# violating the expectations of the code. The check for the existence of an\n# alias object below matching the person's name avoids presenting a link that\n# we know will 404. 
When the database is corrected and we can expect that the\n# Alias for the person's name to always be there, we can remove this extra\n# database query (or leave it as a safeguard until it becomes a performance\n# issue.)\n\n\[email protected]_tag(\"person/person_link.html\")\ndef person_link(person, **kwargs):\n title = kwargs.get(\"title\", \"\")\n cls = kwargs.get(\"class\", \"\")\n with_email = kwargs.get(\"with_email\", True)\n if person:\n plain_name = person.plain_name()\n name = (\n person.name\n if person.alias_set.filter(name=person.name).exists()\n else plain_name\n )\n email = person.email_address()\n return {\n \"name\": name,\n \"plain_name\": plain_name,\n \"email\": email,\n \"title\": title,\n \"class\": cls,\n \"with_email\": with_email,\n }\n else:\n return {}\n\n\[email protected]_tag(\"person/person_link.html\")\ndef email_person_link(email, **kwargs):\n title = kwargs.get(\"title\", \"\")\n cls = kwargs.get(\"class\", \"\")\n with_email = kwargs.get(\"with_email\", True)\n plain_name = email.person.plain_name()\n name = (\n email.person.name\n if email.person.alias_set.filter(name=email.person.name).exists()\n else plain_name\n )\n email = email.address\n return {\n \"name\": name,\n \"plain_name\": plain_name,\n \"email\": email,\n \"title\": title,\n \"class\": cls,\n \"with_email\": with_email,\n }", "path": "ietf/person/templatetags/person_filters.py"}]} | 3,145 | 254 |
gh_patches_debug_29909 | rasdani/github-patches | git_diff | nf-core__tools-2031 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bump minimum required Nextflow version
### Description of feature
The latest stable Nextflow release brings lots of new features that we will probably want to use at the module level (e.g. `bin` directories).
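
For context, the pin lives in the pipeline template's `nextflow.config` (`manifest.nextflowVersion`) and is mirrored in two README spots that the lint test below cross-checks; a sketch of those README lines, assuming the new floor ends up being `22.10.1`:

```md
[](https://www.nextflow.io/)

1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=22.10.1`)
```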
</issue>
<code>
[start of nf_core/lint/readme.py]
1 import os
2 import re
3
4
5 def readme(self):
6 """Repository ``README.md`` tests
7
8 The ``README.md`` files for a project are very important and must meet some requirements:
9
10 * Nextflow badge
11
12 * If no Nextflow badge is found, a warning is given
13 * If a badge is found but the version doesn't match the minimum version in the config file, the test fails
14 * Example badge code:
15
16 .. code-block:: md
17
18 [](https://www.nextflow.io/)
19
20 * Bioconda badge
21
22 * If your pipeline contains a file called ``environment.yml`` in the root directory, a bioconda badge is required
23 * Required badge code:
24
25 .. code-block:: md
26
27 [](https://bioconda.github.io/)
28
29 .. note:: These badges are a markdown image ```` *inside* a markdown link ``[markdown image](<link URL>)``, so a bit fiddly to write.
30 """
31 passed = []
32 warned = []
33 failed = []
34
35 # Remove field that should be ignored according to the linting config
36 ignore_configs = self.lint_config.get("readme", [])
37
38 with open(os.path.join(self.wf_path, "README.md"), "r") as fh:
39 content = fh.read()
40
41 if "nextflow_badge" not in ignore_configs:
42 # Check that there is a readme badge showing the minimum required version of Nextflow
43 # [](https://www.nextflow.io/)
44 # and that it has the correct version
45 nf_badge_re = r"\[!\[Nextflow\]\(https://img\.shields\.io/badge/nextflow%20DSL2-!?(?:%E2%89%A5|%3E%3D)([\d\.]+)-23aa62\.svg\)\]\(https://www\.nextflow\.io/\)"
46 match = re.search(nf_badge_re, content)
47 if match:
48 nf_badge_version = match.group(1).strip("'\"")
49 try:
50 if nf_badge_version != self.minNextflowVersion:
51 raise AssertionError()
52 except (AssertionError, KeyError):
53 failed.append(
54 f"README Nextflow minimum version badge does not match config. Badge: `{nf_badge_version}`, "
55 f"Config: `{self.minNextflowVersion}`"
56 )
57 else:
58 passed.append(
59 f"README Nextflow minimum version badge matched config. Badge: `{nf_badge_version}`, "
60 f"Config: `{self.minNextflowVersion}`"
61 )
62 else:
63 warned.append("README did not have a Nextflow minimum version badge.")
64
65 # Check that the minimum version mentioned in the quick start section is consistent
66 # Looking for: "1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=21.10.3`)"
67 nf_version_re = r"1\.\s*Install\s*\[`Nextflow`\]\(https://www.nextflow.io/docs/latest/getstarted.html#installation\)\s*\(`>=(\d*\.\d*\.\d*)`\)"
68 match = re.search(nf_version_re, content)
69 if match:
70 nf_quickstart_version = match.group(1)
71 try:
72 if nf_quickstart_version != self.minNextflowVersion:
73 raise AssertionError()
74 except (AssertionError, KeyError):
75 failed.append(
76 f"README Nextflow minimium version in Quick Start section does not match config. README: `{nf_quickstart_version}`, Config `{self.minNextflowVersion}`"
77 )
78 else:
79 passed.append(
80 f"README Nextflow minimum version in Quick Start section matched config. README: `{nf_quickstart_version}`, Config: `{self.minNextflowVersion}`"
81 )
82 else:
83 warned.append("README did not have a Nextflow minimum version mentioned in Quick Start section.")
84
85 return {"passed": passed, "warned": warned, "failed": failed}
86
[end of nf_core/lint/readme.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nf_core/lint/readme.py b/nf_core/lint/readme.py
--- a/nf_core/lint/readme.py
+++ b/nf_core/lint/readme.py
@@ -40,7 +40,7 @@
if "nextflow_badge" not in ignore_configs:
# Check that there is a readme badge showing the minimum required version of Nextflow
- # [](https://www.nextflow.io/)
+ # [](https://www.nextflow.io/)
# and that it has the correct version
nf_badge_re = r"\[!\[Nextflow\]\(https://img\.shields\.io/badge/nextflow%20DSL2-!?(?:%E2%89%A5|%3E%3D)([\d\.]+)-23aa62\.svg\)\]\(https://www\.nextflow\.io/\)"
match = re.search(nf_badge_re, content)
@@ -63,7 +63,7 @@
warned.append("README did not have a Nextflow minimum version badge.")
# Check that the minimum version mentioned in the quick start section is consistent
- # Looking for: "1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=21.10.3`)"
+ # Looking for: "1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=22.10.1`)"
nf_version_re = r"1\.\s*Install\s*\[`Nextflow`\]\(https://www.nextflow.io/docs/latest/getstarted.html#installation\)\s*\(`>=(\d*\.\d*\.\d*)`\)"
match = re.search(nf_version_re, content)
if match:
| {"golden_diff": "diff --git a/nf_core/lint/readme.py b/nf_core/lint/readme.py\n--- a/nf_core/lint/readme.py\n+++ b/nf_core/lint/readme.py\n@@ -40,7 +40,7 @@\n \n if \"nextflow_badge\" not in ignore_configs:\n # Check that there is a readme badge showing the minimum required version of Nextflow\n- # [](https://www.nextflow.io/)\n+ # [](https://www.nextflow.io/)\n # and that it has the correct version\n nf_badge_re = r\"\\[!\\[Nextflow\\]\\(https://img\\.shields\\.io/badge/nextflow%20DSL2-!?(?:%E2%89%A5|%3E%3D)([\\d\\.]+)-23aa62\\.svg\\)\\]\\(https://www\\.nextflow\\.io/\\)\"\n match = re.search(nf_badge_re, content)\n@@ -63,7 +63,7 @@\n warned.append(\"README did not have a Nextflow minimum version badge.\")\n \n # Check that the minimum version mentioned in the quick start section is consistent\n- # Looking for: \"1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=21.10.3`)\"\n+ # Looking for: \"1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=22.10.1`)\"\n nf_version_re = r\"1\\.\\s*Install\\s*\\[`Nextflow`\\]\\(https://www.nextflow.io/docs/latest/getstarted.html#installation\\)\\s*\\(`>=(\\d*\\.\\d*\\.\\d*)`\\)\"\n match = re.search(nf_version_re, content)\n if match:\n", "issue": "Bump minimum required Nextflow version\n### Description of feature\n\nLatest stable release brings lots of new features that we probably want to use at module level (eg. `bin` directories).\n", "before_files": [{"content": "import os\nimport re\n\n\ndef readme(self):\n \"\"\"Repository ``README.md`` tests\n\n The ``README.md`` files for a project are very important and must meet some requirements:\n\n * Nextflow badge\n\n * If no Nextflow badge is found, a warning is given\n * If a badge is found but the version doesn't match the minimum version in the config file, the test fails\n * Example badge code:\n\n .. code-block:: md\n\n [](https://www.nextflow.io/)\n\n * Bioconda badge\n\n * If your pipeline contains a file called ``environment.yml`` in the root directory, a bioconda badge is required\n * Required badge code:\n\n .. code-block:: md\n\n [](https://bioconda.github.io/)\n\n .. note:: These badges are a markdown image ```` *inside* a markdown link ``[markdown image](<link URL>)``, so a bit fiddly to write.\n \"\"\"\n passed = []\n warned = []\n failed = []\n\n # Remove field that should be ignored according to the linting config\n ignore_configs = self.lint_config.get(\"readme\", [])\n\n with open(os.path.join(self.wf_path, \"README.md\"), \"r\") as fh:\n content = fh.read()\n\n if \"nextflow_badge\" not in ignore_configs:\n # Check that there is a readme badge showing the minimum required version of Nextflow\n # [](https://www.nextflow.io/)\n # and that it has the correct version\n nf_badge_re = r\"\\[!\\[Nextflow\\]\\(https://img\\.shields\\.io/badge/nextflow%20DSL2-!?(?:%E2%89%A5|%3E%3D)([\\d\\.]+)-23aa62\\.svg\\)\\]\\(https://www\\.nextflow\\.io/\\)\"\n match = re.search(nf_badge_re, content)\n if match:\n nf_badge_version = match.group(1).strip(\"'\\\"\")\n try:\n if nf_badge_version != self.minNextflowVersion:\n raise AssertionError()\n except (AssertionError, KeyError):\n failed.append(\n f\"README Nextflow minimum version badge does not match config. Badge: `{nf_badge_version}`, \"\n f\"Config: `{self.minNextflowVersion}`\"\n )\n else:\n passed.append(\n f\"README Nextflow minimum version badge matched config. 
Badge: `{nf_badge_version}`, \"\n f\"Config: `{self.minNextflowVersion}`\"\n )\n else:\n warned.append(\"README did not have a Nextflow minimum version badge.\")\n\n # Check that the minimum version mentioned in the quick start section is consistent\n # Looking for: \"1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=21.10.3`)\"\n nf_version_re = r\"1\\.\\s*Install\\s*\\[`Nextflow`\\]\\(https://www.nextflow.io/docs/latest/getstarted.html#installation\\)\\s*\\(`>=(\\d*\\.\\d*\\.\\d*)`\\)\"\n match = re.search(nf_version_re, content)\n if match:\n nf_quickstart_version = match.group(1)\n try:\n if nf_quickstart_version != self.minNextflowVersion:\n raise AssertionError()\n except (AssertionError, KeyError):\n failed.append(\n f\"README Nextflow minimium version in Quick Start section does not match config. README: `{nf_quickstart_version}`, Config `{self.minNextflowVersion}`\"\n )\n else:\n passed.append(\n f\"README Nextflow minimum version in Quick Start section matched config. README: `{nf_quickstart_version}`, Config: `{self.minNextflowVersion}`\"\n )\n else:\n warned.append(\"README did not have a Nextflow minimum version mentioned in Quick Start section.\")\n\n return {\"passed\": passed, \"warned\": warned, \"failed\": failed}\n", "path": "nf_core/lint/readme.py"}]} | 1,715 | 490 |
gh_patches_debug_937 | rasdani/github-patches | git_diff | boto__boto-2166 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Invalid path check in euca-bundle-image
The -i option uses convert_file in boto/roboto/param.py to verify that the path passed is, indeed, a file. This fails unless the path specified is a boring old file which is not necessary. Indeed it not being necessary is sort of the whole point in unix having a /dev in the first place. Everything is a file.
The code calls os.path.isfile(value) in convert_file(). It should call os.path.exists(value) and not os.path.isdir(value). Directories are the only types of files which need to be considered special in the normal course of events.
</issue>
<code>
[start of boto/roboto/param.py]
1 # Copyright (c) 2010 Mitch Garnaat http://garnaat.org/
2 # Copyright (c) 2010, Eucalyptus Systems, Inc.
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining a
5 # copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish, dis-
8 # tribute, sublicense, and/or sell copies of the Software, and to permit
9 # persons to whom the Software is furnished to do so, subject to the fol-
10 # lowing conditions:
11 #
12 # The above copyright notice and this permission notice shall be included
13 # in all copies or substantial portions of the Software.
14 #
15 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
16 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
17 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
18 # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
19 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
21 # IN THE SOFTWARE.
22
23 import os
24
25 class Converter(object):
26
27 @classmethod
28 def convert_string(cls, param, value):
29 # TODO: could do length validation, etc. here
30 if not isinstance(value, basestring):
31 raise ValueError
32 return value
33
34 @classmethod
35 def convert_integer(cls, param, value):
36 # TODO: could do range checking here
37 return int(value)
38
39 @classmethod
40 def convert_boolean(cls, param, value):
41 """
42 For command line arguments, just the presence
43 of the option means True so just return True
44 """
45 return True
46
47 @classmethod
48 def convert_file(cls, param, value):
49 if os.path.isfile(value):
50 return value
51 raise ValueError
52
53 @classmethod
54 def convert_dir(cls, param, value):
55 if os.path.isdir(value):
56 return value
57 raise ValueError
58
59 @classmethod
60 def convert(cls, param, value):
61 try:
62 if hasattr(cls, 'convert_'+param.ptype):
63 mthd = getattr(cls, 'convert_'+param.ptype)
64 else:
65 mthd = cls.convert_string
66 return mthd(param, value)
67 except:
68 raise ValidationException(param, '')
69
70 class Param(Converter):
71
72 def __init__(self, name=None, ptype='string', optional=True,
73 short_name=None, long_name=None, doc='',
74 metavar=None, cardinality=1, default=None,
75 choices=None, encoder=None, request_param=True):
76 self.name = name
77 self.ptype = ptype
78 self.optional = optional
79 self.short_name = short_name
80 self.long_name = long_name
81 self.doc = doc
82 self.metavar = metavar
83 self.cardinality = cardinality
84 self.default = default
85 self.choices = choices
86 self.encoder = encoder
87 self.request_param = request_param
88
89 @property
90 def optparse_long_name(self):
91 ln = None
92 if self.long_name:
93 ln = '--%s' % self.long_name
94 return ln
95
96 @property
97 def synopsis_long_name(self):
98 ln = None
99 if self.long_name:
100 ln = '--%s' % self.long_name
101 return ln
102
103 @property
104 def getopt_long_name(self):
105 ln = None
106 if self.long_name:
107 ln = '%s' % self.long_name
108 if self.ptype != 'boolean':
109 ln += '='
110 return ln
111
112 @property
113 def optparse_short_name(self):
114 sn = None
115 if self.short_name:
116 sn = '-%s' % self.short_name
117 return sn
118
119 @property
120 def synopsis_short_name(self):
121 sn = None
122 if self.short_name:
123 sn = '-%s' % self.short_name
124 return sn
125
126 @property
127 def getopt_short_name(self):
128 sn = None
129 if self.short_name:
130 sn = '%s' % self.short_name
131 if self.ptype != 'boolean':
132 sn += ':'
133 return sn
134
135 def convert(self, value):
136 """
137 Convert a string value as received in the command line
138 tools and convert to the appropriate type of value.
139 Raise a ValidationError if the value can't be converted.
140
141 :type value: str
142 :param value: The value to convert. This should always
143 be a string.
144 """
145 return super(Param, self).convert(self,value)
146
147
148
[end of boto/roboto/param.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/boto/roboto/param.py b/boto/roboto/param.py
--- a/boto/roboto/param.py
+++ b/boto/roboto/param.py
@@ -46,7 +46,7 @@
@classmethod
def convert_file(cls, param, value):
- if os.path.isfile(value):
+ if os.path.exists(value) and not os.path.isdir(value):
return value
raise ValueError
| {"golden_diff": "diff --git a/boto/roboto/param.py b/boto/roboto/param.py\n--- a/boto/roboto/param.py\n+++ b/boto/roboto/param.py\n@@ -46,7 +46,7 @@\n \n @classmethod\n def convert_file(cls, param, value):\n- if os.path.isfile(value):\n+ if os.path.exists(value) and not os.path.isdir(value):\n return value\n raise ValueError\n", "issue": "Invalid path check in euca-bundle-image\nThe -i option uses convert_file in boto/roboto/param.py to verify that the path passed is, indeed, a file. This fails unless the path specified is a boring old file which is not necessary. Indeed it not being necessary is sort of the whole point in unix having a /dev in the first place. Everything is a file.\n\nThe code calls os.path.isfile(value) in convert_file(). It should call os.path.exists(value) and not os.path.isdir(value). Directories are the only types of files which need to be considered special in the normal course of events.\n\n", "before_files": [{"content": "# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/\n# Copyright (c) 2010, Eucalyptus Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n\nimport os\n\nclass Converter(object):\n\n @classmethod\n def convert_string(cls, param, value):\n # TODO: could do length validation, etc. 
here\n if not isinstance(value, basestring):\n raise ValueError\n return value\n\n @classmethod\n def convert_integer(cls, param, value):\n # TODO: could do range checking here\n return int(value)\n\n @classmethod\n def convert_boolean(cls, param, value):\n \"\"\"\n For command line arguments, just the presence\n of the option means True so just return True\n \"\"\"\n return True\n\n @classmethod\n def convert_file(cls, param, value):\n if os.path.isfile(value):\n return value\n raise ValueError\n\n @classmethod\n def convert_dir(cls, param, value):\n if os.path.isdir(value):\n return value\n raise ValueError\n\n @classmethod\n def convert(cls, param, value):\n try:\n if hasattr(cls, 'convert_'+param.ptype):\n mthd = getattr(cls, 'convert_'+param.ptype)\n else:\n mthd = cls.convert_string\n return mthd(param, value)\n except:\n raise ValidationException(param, '')\n\nclass Param(Converter):\n\n def __init__(self, name=None, ptype='string', optional=True,\n short_name=None, long_name=None, doc='',\n metavar=None, cardinality=1, default=None,\n choices=None, encoder=None, request_param=True):\n self.name = name\n self.ptype = ptype\n self.optional = optional\n self.short_name = short_name\n self.long_name = long_name\n self.doc = doc\n self.metavar = metavar\n self.cardinality = cardinality\n self.default = default\n self.choices = choices\n self.encoder = encoder\n self.request_param = request_param\n\n @property\n def optparse_long_name(self):\n ln = None\n if self.long_name:\n ln = '--%s' % self.long_name\n return ln\n\n @property\n def synopsis_long_name(self):\n ln = None\n if self.long_name:\n ln = '--%s' % self.long_name\n return ln\n\n @property\n def getopt_long_name(self):\n ln = None\n if self.long_name:\n ln = '%s' % self.long_name\n if self.ptype != 'boolean':\n ln += '='\n return ln\n\n @property\n def optparse_short_name(self):\n sn = None\n if self.short_name:\n sn = '-%s' % self.short_name\n return sn\n\n @property\n def synopsis_short_name(self):\n sn = None\n if self.short_name:\n sn = '-%s' % self.short_name\n return sn\n\n @property\n def getopt_short_name(self):\n sn = None\n if self.short_name:\n sn = '%s' % self.short_name\n if self.ptype != 'boolean':\n sn += ':'\n return sn\n\n def convert(self, value):\n \"\"\"\n Convert a string value as received in the command line\n tools and convert to the appropriate type of value.\n Raise a ValidationError if the value can't be converted.\n\n :type value: str\n :param value: The value to convert. This should always\n be a string.\n \"\"\"\n return super(Param, self).convert(self,value)\n\n\n", "path": "boto/roboto/param.py"}]} | 2,027 | 102 |
gh_patches_debug_22109 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-259 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Warn about wrong values for SCOUT_AGENT_TRIPLE
Follow-on to #239 - it's easy to override `SCOUT_AGENT_TRIPLE` to a wrong value. We should add some validation here with a log message if it looks wrong.
</issue>
<code>
[start of src/scout_apm/core/platform_detection.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import platform
5 import subprocess
6
7
8 def get_triple():
9 return "{arch}-{platform}".format(arch=get_arch(), platform=get_platform())
10
11
12 def get_arch():
13 """
14 What CPU are we on?
15 """
16 arch = platform.machine()
17 if arch == "i686":
18 return "i686"
19 elif arch == "x86_64":
20 return "x86_64"
21 else:
22 return "unknown"
23
24
25 def get_platform():
26 """
27 What Operating System (and sub-system like glibc / musl)
28 """
29 system_name = platform.system()
30 if system_name == "Linux":
31 libc = get_libc()
32 return "unknown-linux-{libc}".format(libc=libc)
33 elif system_name == "Darwin":
34 return "apple-darwin"
35 else:
36 return "unknown"
37
38
39 _libc = None
40
41
42 def get_libc():
43 """
44 Alpine linux uses a non glibc version of the standard library, it uses
45 the stripped down musl instead. The core agent can be built against it,
46 but which one is running must be detected. Shelling out to `ldd`
47 appears to be the most reliable way to do this.
48 """
49 global _libc
50 if _libc is None:
51 try:
52 output = subprocess.check_output(
53 ["ldd", "--version"], stderr=subprocess.STDOUT, close_fds=True
54 )
55 except (OSError, subprocess.CalledProcessError):
56 _libc = "gnu"
57 else:
58 if b"musl" in output:
59 _libc = "musl"
60 else:
61 _libc = "gnu"
62 return _libc
63
[end of src/scout_apm/core/platform_detection.py]
[start of src/scout_apm/core/config.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import logging
5 import os
6
7 from scout_apm.compat import string_type
8 from scout_apm.core import platform_detection
9 from scout_apm.core.util import octal
10
11 logger = logging.getLogger(__name__)
12
13
14 class ScoutConfig(object):
15 """
16 Configuration object for the ScoutApm agent.
17
18 Contains a list of configuration "layers". When a configuration key is
19 looked up, each layer is asked in turn if it knows the value. The first one
20 to answer affirmatively returns the value.
21 """
22
23 def __init__(self):
24 self.layers = [
25 ScoutConfigEnv(),
26 ScoutConfigPython(),
27 ScoutConfigDerived(self),
28 ScoutConfigDefaults(),
29 ScoutConfigNull(),
30 ]
31
32 def value(self, key):
33 value = self.locate_layer_for_key(key).value(key)
34 if key in CONVERSIONS:
35 return CONVERSIONS[key](value)
36 return value
37
38 def locate_layer_for_key(self, key):
39 for layer in self.layers:
40 if layer.has_config(key):
41 return layer
42
43 # Should be unreachable because ScoutConfigNull returns None for all
44 # keys.
45 raise ValueError("key {!r} not found in any layer".format(key))
46
47 def log(self):
48 logger.debug("Configuration Loaded:")
49 for key in self.known_keys():
50 layer = self.locate_layer_for_key(key)
51 logger.debug("%-9s: %s = %s", layer.name(), key, layer.value(key))
52
53 def known_keys(self):
54 return [
55 "app_server",
56 "application_root",
57 "core_agent_dir",
58 "core_agent_download",
59 "core_agent_launch",
60 "core_agent_permissions",
61 "core_agent_version",
62 "disabled_instruments",
63 "download_url",
64 "framework",
65 "framework_version",
66 "hostname",
67 "ignore",
68 "key",
69 "log_level",
70 "monitor",
71 "name",
72 "revision_sha",
73 "scm_subdirectory",
74 "socket_path",
75 ]
76
77 def core_agent_permissions(self):
78 try:
79 return octal(self.value("core_agent_permissions"))
80 except ValueError:
81 logger.exception(
82 "Invalid core_agent_permissions value, using default of 0o700"
83 )
84 return 0o700
85
86 @classmethod
87 def set(cls, **kwargs):
88 """
89 Sets a configuration value for the Scout agent. Values set here will
90 not override values set in ENV.
91 """
92 global SCOUT_PYTHON_VALUES
93 for key, value in kwargs.items():
94 SCOUT_PYTHON_VALUES[key] = value
95
96 @classmethod
97 def unset(cls, *keys):
98 """
99 Removes a configuration value for the Scout agent.
100 """
101 global SCOUT_PYTHON_VALUES
102 for key in keys:
103 SCOUT_PYTHON_VALUES.pop(key, None)
104
105 @classmethod
106 def reset_all(cls):
107 """
108 Remove all configuration settings set via `ScoutConfig.set(...)`.
109
110 This is meant for use in testing.
111 """
112 global SCOUT_PYTHON_VALUES
113 SCOUT_PYTHON_VALUES.clear()
114
115
116 # Module-level data, the ScoutConfig.set(key="value") adds to this
117 SCOUT_PYTHON_VALUES = {}
118
119
120 class ScoutConfigPython(object):
121 """
122 A configuration overlay that lets other parts of python set values.
123 """
124
125 def name(self):
126 return "Python"
127
128 def has_config(self, key):
129 return key in SCOUT_PYTHON_VALUES
130
131 def value(self, key):
132 return SCOUT_PYTHON_VALUES[key]
133
134
135 class ScoutConfigEnv(object):
136 """
137 Reads configuration from environment by prefixing the key
138 requested with "SCOUT_"
139
140 Example: the `log_level` config looks for SCOUT_LOG_LEVEL
141 environment variable
142 """
143
144 def name(self):
145 return "ENV"
146
147 def has_config(self, key):
148 env_key = self.modify_key(key)
149 return env_key in os.environ
150
151 def value(self, key):
152 env_key = self.modify_key(key)
153 return os.environ[env_key]
154
155 def modify_key(self, key):
156 env_key = ("SCOUT_" + key).upper()
157 return env_key
158
159
160 class ScoutConfigDerived(object):
161 """
162 A configuration overlay that calculates from other values.
163 """
164
165 def __init__(self, config):
166 """
167 config argument is the overall ScoutConfig var, so we can lookup the
168 components of the derived info.
169 """
170 self.config = config
171
172 def name(self):
173 return "Derived"
174
175 def has_config(self, key):
176 return self.lookup_func(key) is not None
177
178 def value(self, key):
179 return self.lookup_func(key)()
180
181 def lookup_func(self, key):
182 """
183 Returns the derive_#{key} function, or None if it isn't defined
184 """
185 func_name = "derive_" + key
186 return getattr(self, func_name, None)
187
188 def derive_socket_path(self):
189 return "{}/{}/scout-agent.sock".format(
190 self.config.value("core_agent_dir"),
191 self.config.value("core_agent_full_name"),
192 )
193
194 def derive_core_agent_full_name(self):
195 return "{name}-{version}-{triple}".format(
196 name="scout_apm_core",
197 version=self.config.value("core_agent_version"),
198 triple=self.config.value("core_agent_triple"),
199 )
200
201 def derive_core_agent_triple(self):
202 return platform_detection.get_triple()
203
204
205 class ScoutConfigDefaults(object):
206 """
207 Provides default values for important configurations
208 """
209
210 def name(self):
211 return "Defaults"
212
213 def __init__(self):
214 self.defaults = {
215 "app_server": "",
216 "application_root": "",
217 "core_agent_dir": "/tmp/scout_apm_core",
218 "core_agent_download": True,
219 "core_agent_launch": True,
220 "core_agent_permissions": 700,
221 "core_agent_version": "v1.2.0", # can be an exact tag name, or 'latest'
222 "disabled_instruments": [],
223 "download_url": "https://s3-us-west-1.amazonaws.com/scout-public-downloads/apm_core_agent/release", # noqa: E501
224 "framework": "",
225 "framework_version": "",
226 "hostname": None,
227 "key": "",
228 "log_level": "info",
229 "monitor": False,
230 "name": "",
231 "revision_sha": self._git_revision_sha(),
232 "scm_subdirectory": "",
233 }
234
235 def _git_revision_sha(self):
236 # N.B. The environment variable SCOUT_REVISION_SHA may also be used,
237 # but that will be picked up by ScoutConfigEnv
238 return os.environ.get("HEROKU_SLUG_COMMIT", "")
239
240 def has_config(self, key):
241 return key in self.defaults
242
243 def value(self, key):
244 return self.defaults[key]
245
246
247 # Always returns None to any key
248 class ScoutConfigNull(object):
249 """
250 Always answers that a key is present, but the value is None
251
252 Used as the last step of the layered configuration.
253 """
254
255 def name(self):
256 return "Null"
257
258 def has_config(self, key):
259 return True
260
261 def value(self, key):
262 return None
263
264
265 def convert_to_bool(value):
266 if isinstance(value, bool):
267 return value
268 if isinstance(value, string_type):
269 return value.lower() in ("yes", "true", "t", "1")
270 # Unknown type - default to false?
271 return False
272
273
274 def convert_to_list(value):
275 if isinstance(value, list):
276 return value
277 if isinstance(value, tuple):
278 return list(value)
279 if isinstance(value, string_type):
280 # Split on commas
281 return [item.strip() for item in value.split(",") if item]
282 # Unknown type - default to empty?
283 return []
284
285
286 CONVERSIONS = {
287 "core_agent_download": convert_to_bool,
288 "core_agent_launch": convert_to_bool,
289 "monitor": convert_to_bool,
290 "disabled_instruments": convert_to_list,
291 "ignore": convert_to_list,
292 }
293
[end of src/scout_apm/core/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/scout_apm/core/config.py b/src/scout_apm/core/config.py
--- a/src/scout_apm/core/config.py
+++ b/src/scout_apm/core/config.py
@@ -192,10 +192,13 @@
)
def derive_core_agent_full_name(self):
+ triple = self.config.value("core_agent_triple")
+ if not platform_detection.is_valid_triple(triple):
+ logger.warning("Invalid value for core_agent_triple: %s", triple)
return "{name}-{version}-{triple}".format(
name="scout_apm_core",
version=self.config.value("core_agent_version"),
- triple=self.config.value("core_agent_triple"),
+ triple=triple,
)
def derive_core_agent_triple(self):
diff --git a/src/scout_apm/core/platform_detection.py b/src/scout_apm/core/platform_detection.py
--- a/src/scout_apm/core/platform_detection.py
+++ b/src/scout_apm/core/platform_detection.py
@@ -5,6 +5,16 @@
import subprocess
+def is_valid_triple(triple):
+ values = triple.split("-", 1)
+ return (
+ len(values) == 2
+ and values[0] in ("i686", "x86_64", "unknown")
+ and values[1]
+ in ("unknown-linux-gnu", "unknown-linux-musl", "apple-darwin", "unknown")
+ )
+
+
def get_triple():
return "{arch}-{platform}".format(arch=get_arch(), platform=get_platform())
| {"golden_diff": "diff --git a/src/scout_apm/core/config.py b/src/scout_apm/core/config.py\n--- a/src/scout_apm/core/config.py\n+++ b/src/scout_apm/core/config.py\n@@ -192,10 +192,13 @@\n )\n \n def derive_core_agent_full_name(self):\n+ triple = self.config.value(\"core_agent_triple\")\n+ if not platform_detection.is_valid_triple(triple):\n+ logger.warning(\"Invalid value for core_agent_triple: %s\", triple)\n return \"{name}-{version}-{triple}\".format(\n name=\"scout_apm_core\",\n version=self.config.value(\"core_agent_version\"),\n- triple=self.config.value(\"core_agent_triple\"),\n+ triple=triple,\n )\n \n def derive_core_agent_triple(self):\ndiff --git a/src/scout_apm/core/platform_detection.py b/src/scout_apm/core/platform_detection.py\n--- a/src/scout_apm/core/platform_detection.py\n+++ b/src/scout_apm/core/platform_detection.py\n@@ -5,6 +5,16 @@\n import subprocess\n \n \n+def is_valid_triple(triple):\n+ values = triple.split(\"-\", 1)\n+ return (\n+ len(values) == 2\n+ and values[0] in (\"i686\", \"x86_64\", \"unknown\")\n+ and values[1]\n+ in (\"unknown-linux-gnu\", \"unknown-linux-musl\", \"apple-darwin\", \"unknown\")\n+ )\n+\n+\n def get_triple():\n return \"{arch}-{platform}\".format(arch=get_arch(), platform=get_platform())\n", "issue": "Warn about wrong values for SCOUT_AGENT_TRIPLE\nFollow-on to #239 - it's easy to override `SCOUT_AGENT_TRIPLE` to a wrong value. We should add some validation here with a log message if it looks wrong.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport platform\nimport subprocess\n\n\ndef get_triple():\n return \"{arch}-{platform}\".format(arch=get_arch(), platform=get_platform())\n\n\ndef get_arch():\n \"\"\"\n What CPU are we on?\n \"\"\"\n arch = platform.machine()\n if arch == \"i686\":\n return \"i686\"\n elif arch == \"x86_64\":\n return \"x86_64\"\n else:\n return \"unknown\"\n\n\ndef get_platform():\n \"\"\"\n What Operating System (and sub-system like glibc / musl)\n \"\"\"\n system_name = platform.system()\n if system_name == \"Linux\":\n libc = get_libc()\n return \"unknown-linux-{libc}\".format(libc=libc)\n elif system_name == \"Darwin\":\n return \"apple-darwin\"\n else:\n return \"unknown\"\n\n\n_libc = None\n\n\ndef get_libc():\n \"\"\"\n Alpine linux uses a non glibc version of the standard library, it uses\n the stripped down musl instead. The core agent can be built against it,\n but which one is running must be detected. Shelling out to `ldd`\n appears to be the most reliable way to do this.\n \"\"\"\n global _libc\n if _libc is None:\n try:\n output = subprocess.check_output(\n [\"ldd\", \"--version\"], stderr=subprocess.STDOUT, close_fds=True\n )\n except (OSError, subprocess.CalledProcessError):\n _libc = \"gnu\"\n else:\n if b\"musl\" in output:\n _libc = \"musl\"\n else:\n _libc = \"gnu\"\n return _libc\n", "path": "src/scout_apm/core/platform_detection.py"}, {"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nimport os\n\nfrom scout_apm.compat import string_type\nfrom scout_apm.core import platform_detection\nfrom scout_apm.core.util import octal\n\nlogger = logging.getLogger(__name__)\n\n\nclass ScoutConfig(object):\n \"\"\"\n Configuration object for the ScoutApm agent.\n\n Contains a list of configuration \"layers\". When a configuration key is\n looked up, each layer is asked in turn if it knows the value. 
The first one\n to answer affirmatively returns the value.\n \"\"\"\n\n def __init__(self):\n self.layers = [\n ScoutConfigEnv(),\n ScoutConfigPython(),\n ScoutConfigDerived(self),\n ScoutConfigDefaults(),\n ScoutConfigNull(),\n ]\n\n def value(self, key):\n value = self.locate_layer_for_key(key).value(key)\n if key in CONVERSIONS:\n return CONVERSIONS[key](value)\n return value\n\n def locate_layer_for_key(self, key):\n for layer in self.layers:\n if layer.has_config(key):\n return layer\n\n # Should be unreachable because ScoutConfigNull returns None for all\n # keys.\n raise ValueError(\"key {!r} not found in any layer\".format(key))\n\n def log(self):\n logger.debug(\"Configuration Loaded:\")\n for key in self.known_keys():\n layer = self.locate_layer_for_key(key)\n logger.debug(\"%-9s: %s = %s\", layer.name(), key, layer.value(key))\n\n def known_keys(self):\n return [\n \"app_server\",\n \"application_root\",\n \"core_agent_dir\",\n \"core_agent_download\",\n \"core_agent_launch\",\n \"core_agent_permissions\",\n \"core_agent_version\",\n \"disabled_instruments\",\n \"download_url\",\n \"framework\",\n \"framework_version\",\n \"hostname\",\n \"ignore\",\n \"key\",\n \"log_level\",\n \"monitor\",\n \"name\",\n \"revision_sha\",\n \"scm_subdirectory\",\n \"socket_path\",\n ]\n\n def core_agent_permissions(self):\n try:\n return octal(self.value(\"core_agent_permissions\"))\n except ValueError:\n logger.exception(\n \"Invalid core_agent_permissions value, using default of 0o700\"\n )\n return 0o700\n\n @classmethod\n def set(cls, **kwargs):\n \"\"\"\n Sets a configuration value for the Scout agent. Values set here will\n not override values set in ENV.\n \"\"\"\n global SCOUT_PYTHON_VALUES\n for key, value in kwargs.items():\n SCOUT_PYTHON_VALUES[key] = value\n\n @classmethod\n def unset(cls, *keys):\n \"\"\"\n Removes a configuration value for the Scout agent.\n \"\"\"\n global SCOUT_PYTHON_VALUES\n for key in keys:\n SCOUT_PYTHON_VALUES.pop(key, None)\n\n @classmethod\n def reset_all(cls):\n \"\"\"\n Remove all configuration settings set via `ScoutConfig.set(...)`.\n\n This is meant for use in testing.\n \"\"\"\n global SCOUT_PYTHON_VALUES\n SCOUT_PYTHON_VALUES.clear()\n\n\n# Module-level data, the ScoutConfig.set(key=\"value\") adds to this\nSCOUT_PYTHON_VALUES = {}\n\n\nclass ScoutConfigPython(object):\n \"\"\"\n A configuration overlay that lets other parts of python set values.\n \"\"\"\n\n def name(self):\n return \"Python\"\n\n def has_config(self, key):\n return key in SCOUT_PYTHON_VALUES\n\n def value(self, key):\n return SCOUT_PYTHON_VALUES[key]\n\n\nclass ScoutConfigEnv(object):\n \"\"\"\n Reads configuration from environment by prefixing the key\n requested with \"SCOUT_\"\n\n Example: the `log_level` config looks for SCOUT_LOG_LEVEL\n environment variable\n \"\"\"\n\n def name(self):\n return \"ENV\"\n\n def has_config(self, key):\n env_key = self.modify_key(key)\n return env_key in os.environ\n\n def value(self, key):\n env_key = self.modify_key(key)\n return os.environ[env_key]\n\n def modify_key(self, key):\n env_key = (\"SCOUT_\" + key).upper()\n return env_key\n\n\nclass ScoutConfigDerived(object):\n \"\"\"\n A configuration overlay that calculates from other values.\n \"\"\"\n\n def __init__(self, config):\n \"\"\"\n config argument is the overall ScoutConfig var, so we can lookup the\n components of the derived info.\n \"\"\"\n self.config = config\n\n def name(self):\n return \"Derived\"\n\n def has_config(self, key):\n return self.lookup_func(key) is not None\n\n 
def value(self, key):\n return self.lookup_func(key)()\n\n def lookup_func(self, key):\n \"\"\"\n Returns the derive_#{key} function, or None if it isn't defined\n \"\"\"\n func_name = \"derive_\" + key\n return getattr(self, func_name, None)\n\n def derive_socket_path(self):\n return \"{}/{}/scout-agent.sock\".format(\n self.config.value(\"core_agent_dir\"),\n self.config.value(\"core_agent_full_name\"),\n )\n\n def derive_core_agent_full_name(self):\n return \"{name}-{version}-{triple}\".format(\n name=\"scout_apm_core\",\n version=self.config.value(\"core_agent_version\"),\n triple=self.config.value(\"core_agent_triple\"),\n )\n\n def derive_core_agent_triple(self):\n return platform_detection.get_triple()\n\n\nclass ScoutConfigDefaults(object):\n \"\"\"\n Provides default values for important configurations\n \"\"\"\n\n def name(self):\n return \"Defaults\"\n\n def __init__(self):\n self.defaults = {\n \"app_server\": \"\",\n \"application_root\": \"\",\n \"core_agent_dir\": \"/tmp/scout_apm_core\",\n \"core_agent_download\": True,\n \"core_agent_launch\": True,\n \"core_agent_permissions\": 700,\n \"core_agent_version\": \"v1.2.0\", # can be an exact tag name, or 'latest'\n \"disabled_instruments\": [],\n \"download_url\": \"https://s3-us-west-1.amazonaws.com/scout-public-downloads/apm_core_agent/release\", # noqa: E501\n \"framework\": \"\",\n \"framework_version\": \"\",\n \"hostname\": None,\n \"key\": \"\",\n \"log_level\": \"info\",\n \"monitor\": False,\n \"name\": \"\",\n \"revision_sha\": self._git_revision_sha(),\n \"scm_subdirectory\": \"\",\n }\n\n def _git_revision_sha(self):\n # N.B. The environment variable SCOUT_REVISION_SHA may also be used,\n # but that will be picked up by ScoutConfigEnv\n return os.environ.get(\"HEROKU_SLUG_COMMIT\", \"\")\n\n def has_config(self, key):\n return key in self.defaults\n\n def value(self, key):\n return self.defaults[key]\n\n\n# Always returns None to any key\nclass ScoutConfigNull(object):\n \"\"\"\n Always answers that a key is present, but the value is None\n\n Used as the last step of the layered configuration.\n \"\"\"\n\n def name(self):\n return \"Null\"\n\n def has_config(self, key):\n return True\n\n def value(self, key):\n return None\n\n\ndef convert_to_bool(value):\n if isinstance(value, bool):\n return value\n if isinstance(value, string_type):\n return value.lower() in (\"yes\", \"true\", \"t\", \"1\")\n # Unknown type - default to false?\n return False\n\n\ndef convert_to_list(value):\n if isinstance(value, list):\n return value\n if isinstance(value, tuple):\n return list(value)\n if isinstance(value, string_type):\n # Split on commas\n return [item.strip() for item in value.split(\",\") if item]\n # Unknown type - default to empty?\n return []\n\n\nCONVERSIONS = {\n \"core_agent_download\": convert_to_bool,\n \"core_agent_launch\": convert_to_bool,\n \"monitor\": convert_to_bool,\n \"disabled_instruments\": convert_to_list,\n \"ignore\": convert_to_list,\n}\n", "path": "src/scout_apm/core/config.py"}]} | 3,674 | 356 |
gh_patches_debug_51416 | rasdani/github-patches | git_diff | pallets__click-1496 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ClickException message goes to stdout instead of stderr with version 7.1
Thanks a lot for ``click`` - absolutely fantastic project.
I've noticed the following change when upgrading to 7.1, and am not sure if it's intentional or not:
### Expected Behavior
When raising ``click.ClickException``, the corresponding ``Error: <message>`` goes to ``stderr`` using click version 7.0.
Minimal reproducing code:
```python
import click
@click.command()
def run():
raise click.ClickException('exception message')
if __name__ == '__main__':
run()
```
```bash
python <filename>.py 1> stdout 2> stderr
```
### Actual Behavior
With version 7.1, the ``Error: exception message`` ends up in ``stdout`` instead of ``stderr``.
### Environment
* Python version: Python 3.7.5
* Click version: 7.1
* OS: Ubuntu 18.04 [through WSL1]
* Shell: GNU bash, version 4.4.20
### Additional comments
As mentioned above I'm not sure if this is an intended change, but I couldn't find any mention on the [Changelog](https://click.palletsprojects.com/en/7.x/changelog/#version-7-1), and [this part](https://click.palletsprojects.com/en/7.x/exceptions/#which-exceptions-exist) of the docs still refers to ``show`` being printed to ``stderr``.
Happy to do some more digging if this happens only on my system.
</issue>
<code>
[start of src/click/exceptions.py]
1 from ._compat import filename_to_ui
2 from ._compat import get_text_stderr
3 from ._compat import PY2
4 from .utils import echo
5
6
7 def _join_param_hints(param_hint):
8 if isinstance(param_hint, (tuple, list)):
9 return " / ".join(repr(x) for x in param_hint)
10 return param_hint
11
12
13 class ClickException(Exception):
14 """An exception that Click can handle and show to the user."""
15
16 #: The exit code for this exception
17 exit_code = 1
18
19 def __init__(self, message):
20 ctor_msg = message
21 if PY2:
22 if ctor_msg is not None:
23 ctor_msg = ctor_msg.encode("utf-8")
24 Exception.__init__(self, ctor_msg)
25 self.message = message
26
27 def format_message(self):
28 return self.message
29
30 def __str__(self):
31 return self.message
32
33 if PY2:
34 __unicode__ = __str__
35
36 def __str__(self):
37 return self.message.encode("utf-8")
38
39 def show(self, file=None):
40 if file is None:
41 file = get_text_stderr()
42 echo("Error: {}".format(self.format_message(), file=file))
43
44
45 class UsageError(ClickException):
46 """An internal exception that signals a usage error. This typically
47 aborts any further handling.
48
49 :param message: the error message to display.
50 :param ctx: optionally the context that caused this error. Click will
51 fill in the context automatically in some situations.
52 """
53
54 exit_code = 2
55
56 def __init__(self, message, ctx=None):
57 ClickException.__init__(self, message)
58 self.ctx = ctx
59 self.cmd = self.ctx.command if self.ctx else None
60
61 def show(self, file=None):
62 if file is None:
63 file = get_text_stderr()
64 color = None
65 hint = ""
66 if self.cmd is not None and self.cmd.get_help_option(self.ctx) is not None:
67 hint = "Try '{} {}' for help.\n".format(
68 self.ctx.command_path, self.ctx.help_option_names[0]
69 )
70 if self.ctx is not None:
71 color = self.ctx.color
72 echo("{}\n{}".format(self.ctx.get_usage(), hint), file=file, color=color)
73 echo("Error: {}".format(self.format_message()), file=file, color=color)
74
75
76 class BadParameter(UsageError):
77 """An exception that formats out a standardized error message for a
78 bad parameter. This is useful when thrown from a callback or type as
79 Click will attach contextual information to it (for instance, which
80 parameter it is).
81
82 .. versionadded:: 2.0
83
84 :param param: the parameter object that caused this error. This can
85 be left out, and Click will attach this info itself
86 if possible.
87 :param param_hint: a string that shows up as parameter name. This
88 can be used as alternative to `param` in cases
89 where custom validation should happen. If it is
90 a string it's used as such, if it's a list then
91 each item is quoted and separated.
92 """
93
94 def __init__(self, message, ctx=None, param=None, param_hint=None):
95 UsageError.__init__(self, message, ctx)
96 self.param = param
97 self.param_hint = param_hint
98
99 def format_message(self):
100 if self.param_hint is not None:
101 param_hint = self.param_hint
102 elif self.param is not None:
103 param_hint = self.param.get_error_hint(self.ctx)
104 else:
105 return "Invalid value: {}".format(self.message)
106 param_hint = _join_param_hints(param_hint)
107
108 return "Invalid value for {}: {}".format(param_hint, self.message)
109
110
111 class MissingParameter(BadParameter):
112 """Raised if click required an option or argument but it was not
113 provided when invoking the script.
114
115 .. versionadded:: 4.0
116
117 :param param_type: a string that indicates the type of the parameter.
118 The default is to inherit the parameter type from
119 the given `param`. Valid values are ``'parameter'``,
120 ``'option'`` or ``'argument'``.
121 """
122
123 def __init__(
124 self, message=None, ctx=None, param=None, param_hint=None, param_type=None
125 ):
126 BadParameter.__init__(self, message, ctx, param, param_hint)
127 self.param_type = param_type
128
129 def format_message(self):
130 if self.param_hint is not None:
131 param_hint = self.param_hint
132 elif self.param is not None:
133 param_hint = self.param.get_error_hint(self.ctx)
134 else:
135 param_hint = None
136 param_hint = _join_param_hints(param_hint)
137
138 param_type = self.param_type
139 if param_type is None and self.param is not None:
140 param_type = self.param.param_type_name
141
142 msg = self.message
143 if self.param is not None:
144 msg_extra = self.param.type.get_missing_message(self.param)
145 if msg_extra:
146 if msg:
147 msg += ". {}".format(msg_extra)
148 else:
149 msg = msg_extra
150
151 return "Missing {}{}{}{}".format(
152 param_type,
153 " {}".format(param_hint) if param_hint else "",
154 ". " if msg else ".",
155 msg or "",
156 )
157
158 def __str__(self):
159 if self.message is None:
160 param_name = self.param.name if self.param else None
161 return "missing parameter: {}".format(param_name)
162 else:
163 return self.message
164
165 if PY2:
166 __unicode__ = __str__
167
168 def __str__(self):
169 return self.__unicode__().encode("utf-8")
170
171
172 class NoSuchOption(UsageError):
173 """Raised if click attempted to handle an option that does not
174 exist.
175
176 .. versionadded:: 4.0
177 """
178
179 def __init__(self, option_name, message=None, possibilities=None, ctx=None):
180 if message is None:
181 message = "no such option: {}".format(option_name)
182 UsageError.__init__(self, message, ctx)
183 self.option_name = option_name
184 self.possibilities = possibilities
185
186 def format_message(self):
187 bits = [self.message]
188 if self.possibilities:
189 if len(self.possibilities) == 1:
190 bits.append("Did you mean {}?".format(self.possibilities[0]))
191 else:
192 possibilities = sorted(self.possibilities)
193 bits.append("(Possible options: {})".format(", ".join(possibilities)))
194 return " ".join(bits)
195
196
197 class BadOptionUsage(UsageError):
198 """Raised if an option is generally supplied but the use of the option
199 was incorrect. This is for instance raised if the number of arguments
200 for an option is not correct.
201
202 .. versionadded:: 4.0
203
204 :param option_name: the name of the option being used incorrectly.
205 """
206
207 def __init__(self, option_name, message, ctx=None):
208 UsageError.__init__(self, message, ctx)
209 self.option_name = option_name
210
211
212 class BadArgumentUsage(UsageError):
213 """Raised if an argument is generally supplied but the use of the argument
214 was incorrect. This is for instance raised if the number of values
215 for an argument is not correct.
216
217 .. versionadded:: 6.0
218 """
219
220 def __init__(self, message, ctx=None):
221 UsageError.__init__(self, message, ctx)
222
223
224 class FileError(ClickException):
225 """Raised if a file cannot be opened."""
226
227 def __init__(self, filename, hint=None):
228 ui_filename = filename_to_ui(filename)
229 if hint is None:
230 hint = "unknown error"
231 ClickException.__init__(self, hint)
232 self.ui_filename = ui_filename
233 self.filename = filename
234
235 def format_message(self):
236 return "Could not open file {}: {}".format(self.ui_filename, self.message)
237
238
239 class Abort(RuntimeError):
240 """An internal signalling exception that signals Click to abort."""
241
242
243 class Exit(RuntimeError):
244 """An exception that indicates that the application should exit with some
245 status code.
246
247 :param code: the status code to exit with.
248 """
249
250 __slots__ = ("exit_code",)
251
252 def __init__(self, code=0):
253 self.exit_code = code
254
[end of src/click/exceptions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/click/exceptions.py b/src/click/exceptions.py
--- a/src/click/exceptions.py
+++ b/src/click/exceptions.py
@@ -39,7 +39,7 @@
def show(self, file=None):
if file is None:
file = get_text_stderr()
- echo("Error: {}".format(self.format_message(), file=file))
+ echo("Error: {}".format(self.format_message()), file=file)
class UsageError(ClickException):
| {"golden_diff": "diff --git a/src/click/exceptions.py b/src/click/exceptions.py\n--- a/src/click/exceptions.py\n+++ b/src/click/exceptions.py\n@@ -39,7 +39,7 @@\n def show(self, file=None):\n if file is None:\n file = get_text_stderr()\n- echo(\"Error: {}\".format(self.format_message(), file=file))\n+ echo(\"Error: {}\".format(self.format_message()), file=file)\n \n \n class UsageError(ClickException):\n", "issue": "ClickException message goes to stdout instead of stderr with version 7.1\nThanks a lot for ``click`` - absolutely fantastic project. \r\n\r\nI've noticed the following change when upgrading to 7.1, and am not sure if it's intentional or not:\r\n\r\n### Expected Behavior\r\n\r\nWhen raising ``click.ClickException``, the corresponding ``Error: <message>`` goes to ``stderr`` using click version 7.0.\r\n\r\nMinimal reproducing code:\r\n\r\n```python\r\nimport click\r\n\r\[email protected]()\r\ndef run():\r\n raise click.ClickException('exception message')\r\n\r\nif __name__ == '__main__':\r\n run()\r\n```\r\n```bash\r\npython <filename>.py 1> stdout 2> stderr\r\n```\r\n\r\n### Actual Behavior\r\n\r\nWith version 7.1, the ``Error: exception message`` ends up in ``stdout`` instead of ``stderr``.\r\n\r\n### Environment\r\n\r\n* Python version: Python 3.7.5\r\n* Click version: 7.1\r\n* OS: Ubuntu 18.04 [through WSL1]\r\n* Shell: GNU bash, version 4.4.20\r\n\r\n### Additional comments\r\n\r\nAs mentioned above I'm not sure if this is an intended change, but I couldn't find any mention on the [Changelog](https://click.palletsprojects.com/en/7.x/changelog/#version-7-1), and [this part](https://click.palletsprojects.com/en/7.x/exceptions/#which-exceptions-exist) of the docs still referes to ``show`` being printed to ``stderr``.\r\n\r\nHappy to do some more digging if this happens only on my system.\r\n\n", "before_files": [{"content": "from ._compat import filename_to_ui\nfrom ._compat import get_text_stderr\nfrom ._compat import PY2\nfrom .utils import echo\n\n\ndef _join_param_hints(param_hint):\n if isinstance(param_hint, (tuple, list)):\n return \" / \".join(repr(x) for x in param_hint)\n return param_hint\n\n\nclass ClickException(Exception):\n \"\"\"An exception that Click can handle and show to the user.\"\"\"\n\n #: The exit code for this exception\n exit_code = 1\n\n def __init__(self, message):\n ctor_msg = message\n if PY2:\n if ctor_msg is not None:\n ctor_msg = ctor_msg.encode(\"utf-8\")\n Exception.__init__(self, ctor_msg)\n self.message = message\n\n def format_message(self):\n return self.message\n\n def __str__(self):\n return self.message\n\n if PY2:\n __unicode__ = __str__\n\n def __str__(self):\n return self.message.encode(\"utf-8\")\n\n def show(self, file=None):\n if file is None:\n file = get_text_stderr()\n echo(\"Error: {}\".format(self.format_message(), file=file))\n\n\nclass UsageError(ClickException):\n \"\"\"An internal exception that signals a usage error. This typically\n aborts any further handling.\n\n :param message: the error message to display.\n :param ctx: optionally the context that caused this error. 
Click will\n fill in the context automatically in some situations.\n \"\"\"\n\n exit_code = 2\n\n def __init__(self, message, ctx=None):\n ClickException.__init__(self, message)\n self.ctx = ctx\n self.cmd = self.ctx.command if self.ctx else None\n\n def show(self, file=None):\n if file is None:\n file = get_text_stderr()\n color = None\n hint = \"\"\n if self.cmd is not None and self.cmd.get_help_option(self.ctx) is not None:\n hint = \"Try '{} {}' for help.\\n\".format(\n self.ctx.command_path, self.ctx.help_option_names[0]\n )\n if self.ctx is not None:\n color = self.ctx.color\n echo(\"{}\\n{}\".format(self.ctx.get_usage(), hint), file=file, color=color)\n echo(\"Error: {}\".format(self.format_message()), file=file, color=color)\n\n\nclass BadParameter(UsageError):\n \"\"\"An exception that formats out a standardized error message for a\n bad parameter. This is useful when thrown from a callback or type as\n Click will attach contextual information to it (for instance, which\n parameter it is).\n\n .. versionadded:: 2.0\n\n :param param: the parameter object that caused this error. This can\n be left out, and Click will attach this info itself\n if possible.\n :param param_hint: a string that shows up as parameter name. This\n can be used as alternative to `param` in cases\n where custom validation should happen. If it is\n a string it's used as such, if it's a list then\n each item is quoted and separated.\n \"\"\"\n\n def __init__(self, message, ctx=None, param=None, param_hint=None):\n UsageError.__init__(self, message, ctx)\n self.param = param\n self.param_hint = param_hint\n\n def format_message(self):\n if self.param_hint is not None:\n param_hint = self.param_hint\n elif self.param is not None:\n param_hint = self.param.get_error_hint(self.ctx)\n else:\n return \"Invalid value: {}\".format(self.message)\n param_hint = _join_param_hints(param_hint)\n\n return \"Invalid value for {}: {}\".format(param_hint, self.message)\n\n\nclass MissingParameter(BadParameter):\n \"\"\"Raised if click required an option or argument but it was not\n provided when invoking the script.\n\n .. versionadded:: 4.0\n\n :param param_type: a string that indicates the type of the parameter.\n The default is to inherit the parameter type from\n the given `param`. Valid values are ``'parameter'``,\n ``'option'`` or ``'argument'``.\n \"\"\"\n\n def __init__(\n self, message=None, ctx=None, param=None, param_hint=None, param_type=None\n ):\n BadParameter.__init__(self, message, ctx, param, param_hint)\n self.param_type = param_type\n\n def format_message(self):\n if self.param_hint is not None:\n param_hint = self.param_hint\n elif self.param is not None:\n param_hint = self.param.get_error_hint(self.ctx)\n else:\n param_hint = None\n param_hint = _join_param_hints(param_hint)\n\n param_type = self.param_type\n if param_type is None and self.param is not None:\n param_type = self.param.param_type_name\n\n msg = self.message\n if self.param is not None:\n msg_extra = self.param.type.get_missing_message(self.param)\n if msg_extra:\n if msg:\n msg += \". {}\".format(msg_extra)\n else:\n msg = msg_extra\n\n return \"Missing {}{}{}{}\".format(\n param_type,\n \" {}\".format(param_hint) if param_hint else \"\",\n \". 
\" if msg else \".\",\n msg or \"\",\n )\n\n def __str__(self):\n if self.message is None:\n param_name = self.param.name if self.param else None\n return \"missing parameter: {}\".format(param_name)\n else:\n return self.message\n\n if PY2:\n __unicode__ = __str__\n\n def __str__(self):\n return self.__unicode__().encode(\"utf-8\")\n\n\nclass NoSuchOption(UsageError):\n \"\"\"Raised if click attempted to handle an option that does not\n exist.\n\n .. versionadded:: 4.0\n \"\"\"\n\n def __init__(self, option_name, message=None, possibilities=None, ctx=None):\n if message is None:\n message = \"no such option: {}\".format(option_name)\n UsageError.__init__(self, message, ctx)\n self.option_name = option_name\n self.possibilities = possibilities\n\n def format_message(self):\n bits = [self.message]\n if self.possibilities:\n if len(self.possibilities) == 1:\n bits.append(\"Did you mean {}?\".format(self.possibilities[0]))\n else:\n possibilities = sorted(self.possibilities)\n bits.append(\"(Possible options: {})\".format(\", \".join(possibilities)))\n return \" \".join(bits)\n\n\nclass BadOptionUsage(UsageError):\n \"\"\"Raised if an option is generally supplied but the use of the option\n was incorrect. This is for instance raised if the number of arguments\n for an option is not correct.\n\n .. versionadded:: 4.0\n\n :param option_name: the name of the option being used incorrectly.\n \"\"\"\n\n def __init__(self, option_name, message, ctx=None):\n UsageError.__init__(self, message, ctx)\n self.option_name = option_name\n\n\nclass BadArgumentUsage(UsageError):\n \"\"\"Raised if an argument is generally supplied but the use of the argument\n was incorrect. This is for instance raised if the number of values\n for an argument is not correct.\n\n .. versionadded:: 6.0\n \"\"\"\n\n def __init__(self, message, ctx=None):\n UsageError.__init__(self, message, ctx)\n\n\nclass FileError(ClickException):\n \"\"\"Raised if a file cannot be opened.\"\"\"\n\n def __init__(self, filename, hint=None):\n ui_filename = filename_to_ui(filename)\n if hint is None:\n hint = \"unknown error\"\n ClickException.__init__(self, hint)\n self.ui_filename = ui_filename\n self.filename = filename\n\n def format_message(self):\n return \"Could not open file {}: {}\".format(self.ui_filename, self.message)\n\n\nclass Abort(RuntimeError):\n \"\"\"An internal signalling exception that signals Click to abort.\"\"\"\n\n\nclass Exit(RuntimeError):\n \"\"\"An exception that indicates that the application should exit with some\n status code.\n\n :param code: the status code to exit with.\n \"\"\"\n\n __slots__ = (\"exit_code\",)\n\n def __init__(self, code=0):\n self.exit_code = code\n", "path": "src/click/exceptions.py"}]} | 3,373 | 107 |
gh_patches_debug_35737 | rasdani/github-patches | git_diff | CTFd__CTFd-899 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot unset country field
**Environment**:
- CTFd Version/Commit: tag 2.0.4
- Operating System: Ubuntu 16.04
- Web Browser and Version: Chrome latest
**What happened?**
I changed my country to the blank option (`<option></option>`) in settings and hit update. It reported success, but refreshing the page still showed the old country.
**What did you expect to happen?**
My country to be blank upon reload.
**How to reproduce your issue**
Set your country to anything (except blank). Try to change back to blank.
**Any associated stack traces or error logs**
N/A
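
**Possible direction (editor's sketch, not from the original report)**

One way this could be handled, assuming the submitted value is checked by `validate_country_code` in `CTFd/utils/validators/__init__.py` (shown in the code below), is to treat an empty submission as clearing the field rather than as an invalid country:

```python
from marshmallow import ValidationError

from CTFd.utils.countries import lookup_country_code


def validate_country_code(country_code):
    # Sketch only: an empty or whitespace-only value means "no country",
    # so skip validation instead of rejecting it.
    if not country_code or not country_code.strip():
        return
    if lookup_country_code(country_code) is None:
        raise ValidationError('Invalid Country')
```

With a check along these lines, picking the blank `<option></option>` would pass validation and the stored value could be emptied; the actual fix may differ.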
</issue>
<code>
[start of CTFd/schemas/users.py]
1 from flask import session
2 from sqlalchemy.sql.expression import union_all
3 from marshmallow import fields, post_load
4 from marshmallow import validate, ValidationError, pre_load
5 from marshmallow.decorators import validates_schema
6 from marshmallow_sqlalchemy import field_for
7 from CTFd.models import ma, Users
8 from CTFd.utils import get_config
9 from CTFd.utils.validators import unique_email, validate_country_code
10 from CTFd.utils.user import is_admin, get_current_user
11 from CTFd.utils.countries import lookup_country_code
12 from CTFd.utils.crypto import verify_password, hash_password
13 from CTFd.utils.email import check_email_is_whitelisted
14
15
16 class UserSchema(ma.ModelSchema):
17 class Meta:
18 model = Users
19 include_fk = True
20 dump_only = ('id', 'oauth_id', 'created')
21 load_only = ('password',)
22
23 name = field_for(
24 Users,
25 'name',
26 required=True,
27 validate=[
28 validate.Length(min=1, max=128, error='User names must not be empty')
29 ]
30 )
31 email = field_for(
32 Users,
33 'email',
34 validate=[
35 validate.Email('Emails must be a properly formatted email address'),
36 validate.Length(min=1, max=128, error='Emails must not be empty'),
37 ]
38 )
39 website = field_for(
40 Users,
41 'website',
42 validate=validate.URL(
43 error='Websites must be a proper URL starting with http or https',
44 schemes={'http', 'https'}
45 )
46 )
47 country = field_for(
48 Users,
49 'country',
50 validate=[
51 validate_country_code
52 ]
53 )
54 password = field_for(
55 Users,
56 'password',
57 validate=[
58 validate.Length(min=1, error='Passwords must not be empty'),
59 ]
60 )
61
62 @pre_load
63 def validate_name(self, data):
64 name = data.get('name')
65 if name is None:
66 return
67
68 existing_user = Users.query.filter_by(name=name).first()
69 if is_admin():
70 user_id = data.get('id')
71 if user_id:
72 if existing_user and existing_user.id != user_id:
73 raise ValidationError('User name has already been taken', field_names=['name'])
74 else:
75 if existing_user:
76 raise ValidationError('User name has already been taken', field_names=['name'])
77 else:
78 current_user = get_current_user()
79 if name == current_user.name:
80 return data
81 else:
82 name_changes = get_config('name_changes', default=True)
83 if bool(name_changes) is False:
84 raise ValidationError('Name changes are disabled', field_names=['name'])
85 if existing_user:
86 raise ValidationError('User name has already been taken', field_names=['name'])
87
88 @pre_load
89 def validate_email(self, data):
90 email = data.get('email')
91 if email is None:
92 return
93
94 existing_user = Users.query.filter_by(email=email).first()
95
96 if is_admin():
97 user_id = data.get('id')
98 if user_id:
99 if existing_user and existing_user.id != user_id:
100 raise ValidationError('Email address has already been used', field_names=['email'])
101 else:
102 if existing_user:
103 raise ValidationError('Email address has already been used', field_names=['email'])
104 else:
105 current_user = get_current_user()
106 if email == current_user.email:
107 return data
108 else:
109 if existing_user:
110 raise ValidationError('Email address has already been used', field_names=['email'])
111 if check_email_is_whitelisted(email) is False:
112 raise ValidationError(
113 "Only email addresses under {domains} may register".format(
114 domains=get_config('domain_whitelist')
115 ),
116 field_names=['email']
117 )
118 if get_config('verify_emails'):
119 current_user.verified = False
120
121 @pre_load
122 def validate_password_confirmation(self, data):
123 password = data.get('password')
124 confirm = data.get('confirm')
125 target_user = get_current_user()
126 user_id = data.get('id')
127
128 if is_admin():
129 pass
130 else:
131 if password and (confirm is None):
132 raise ValidationError('Please confirm your current password', field_names=['confirm'])
133
134 if password and confirm:
135 test = verify_password(plaintext=confirm, ciphertext=target_user.password)
136 if test is True:
137 return data
138 else:
139 raise ValidationError('Your previous password is incorrect', field_names=['confirm'])
140
141 views = {
142 'user': [
143 'website',
144 'name',
145 'country',
146 'affiliation',
147 'bracket',
148 'id',
149 'oauth_id',
150 ],
151 'self': [
152 'website',
153 'name',
154 'email',
155 'country',
156 'affiliation',
157 'bracket',
158 'id',
159 'oauth_id',
160 'password'
161 ],
162 'admin': [
163 'website',
164 'name',
165 'created',
166 'country',
167 'banned',
168 'email',
169 'affiliation',
170 'secret',
171 'bracket',
172 'hidden',
173 'id',
174 'oauth_id',
175 'password',
176 'type',
177 'verified'
178 ]
179 }
180
181 def __init__(self, view=None, *args, **kwargs):
182 if view:
183 if type(view) == str:
184 kwargs['only'] = self.views[view]
185 elif type(view) == list:
186 kwargs['only'] = view
187
188 super(UserSchema, self).__init__(*args, **kwargs)
189
[end of CTFd/schemas/users.py]
[start of CTFd/utils/validators/__init__.py]
1 from flask import session
2 from CTFd.utils.user import is_admin, get_current_team, get_current_user
3 from CTFd.models import Teams, Users
4 from CTFd.utils.countries import lookup_country_code
5 from six.moves.urllib.parse import urlparse, urljoin, quote, unquote
6 from flask import request
7 from marshmallow import ValidationError
8 import re
9
10
11 def is_safe_url(target):
12 ref_url = urlparse(request.host_url)
13 test_url = urlparse(urljoin(request.host_url, target))
14 return test_url.scheme in ('http', 'https') and ref_url.netloc == test_url.netloc
15
16
17 def validate_url(url):
18 return urlparse(url).scheme.startswith('http')
19
20
21 def validate_email(email):
22 return bool(re.match(r"(^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$)", email))
23
24
25 def unique_email(email, model=Users):
26 obj = model.query.filter_by(email=email).first()
27 if is_admin():
28 if obj:
29 raise ValidationError('Email address has already been used')
30 if obj and obj.id != get_current_user().id:
31 raise ValidationError('Email address has already been used')
32
33
34 def validate_country_code(country_code):
35 if lookup_country_code(country_code) is None:
36 raise ValidationError('Invalid Country')
37
[end of CTFd/utils/validators/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/CTFd/schemas/users.py b/CTFd/schemas/users.py
--- a/CTFd/schemas/users.py
+++ b/CTFd/schemas/users.py
@@ -39,10 +39,13 @@
website = field_for(
Users,
'website',
- validate=validate.URL(
- error='Websites must be a proper URL starting with http or https',
- schemes={'http', 'https'}
- )
+ validate=[
+ # This is a dirty hack to let website accept empty strings so you can remove your website
+ lambda website: validate.URL(
+ error='Websites must be a proper URL starting with http or https',
+ schemes={'http', 'https'}
+ )(website) if website else True
+ ]
)
country = field_for(
Users,
@@ -54,9 +57,6 @@
password = field_for(
Users,
'password',
- validate=[
- validate.Length(min=1, error='Passwords must not be empty'),
- ]
)
@pre_load
@@ -123,12 +123,11 @@
password = data.get('password')
confirm = data.get('confirm')
target_user = get_current_user()
- user_id = data.get('id')
if is_admin():
pass
else:
- if password and (confirm is None):
+ if password and (bool(confirm) is False):
raise ValidationError('Please confirm your current password', field_names=['confirm'])
if password and confirm:
@@ -137,6 +136,9 @@
return data
else:
raise ValidationError('Your previous password is incorrect', field_names=['confirm'])
+ else:
+ data.pop('password', None)
+ data.pop('confirm', None)
views = {
'user': [
diff --git a/CTFd/utils/validators/__init__.py b/CTFd/utils/validators/__init__.py
--- a/CTFd/utils/validators/__init__.py
+++ b/CTFd/utils/validators/__init__.py
@@ -32,5 +32,7 @@
def validate_country_code(country_code):
+ if country_code.strip() == "":
+ return
if lookup_country_code(country_code) is None:
raise ValidationError('Invalid Country')
| {"golden_diff": "diff --git a/CTFd/schemas/users.py b/CTFd/schemas/users.py\n--- a/CTFd/schemas/users.py\n+++ b/CTFd/schemas/users.py\n@@ -39,10 +39,13 @@\n website = field_for(\n Users,\n 'website',\n- validate=validate.URL(\n- error='Websites must be a proper URL starting with http or https',\n- schemes={'http', 'https'}\n- )\n+ validate=[\n+ # This is a dirty hack to let website accept empty strings so you can remove your website\n+ lambda website: validate.URL(\n+ error='Websites must be a proper URL starting with http or https',\n+ schemes={'http', 'https'}\n+ )(website) if website else True\n+ ]\n )\n country = field_for(\n Users,\n@@ -54,9 +57,6 @@\n password = field_for(\n Users,\n 'password',\n- validate=[\n- validate.Length(min=1, error='Passwords must not be empty'),\n- ]\n )\n \n @pre_load\n@@ -123,12 +123,11 @@\n password = data.get('password')\n confirm = data.get('confirm')\n target_user = get_current_user()\n- user_id = data.get('id')\n \n if is_admin():\n pass\n else:\n- if password and (confirm is None):\n+ if password and (bool(confirm) is False):\n raise ValidationError('Please confirm your current password', field_names=['confirm'])\n \n if password and confirm:\n@@ -137,6 +136,9 @@\n return data\n else:\n raise ValidationError('Your previous password is incorrect', field_names=['confirm'])\n+ else:\n+ data.pop('password', None)\n+ data.pop('confirm', None)\n \n views = {\n 'user': [\ndiff --git a/CTFd/utils/validators/__init__.py b/CTFd/utils/validators/__init__.py\n--- a/CTFd/utils/validators/__init__.py\n+++ b/CTFd/utils/validators/__init__.py\n@@ -32,5 +32,7 @@\n \n \n def validate_country_code(country_code):\n+ if country_code.strip() == \"\":\n+ return\n if lookup_country_code(country_code) is None:\n raise ValidationError('Invalid Country')\n", "issue": "Cannot unset country field\n**Environment**:\r\n\r\n - CTFd Version/Commit: tag 2.0.4\r\n - Operating System: Ubuntu 16.04\r\n - Web Browser and Version: Chrome latest\r\n\r\n**What happened?**\r\nI changed my country to \"Blank\" (`<option></option>`) in settings, hit update, it said success, but refresh showed old country.\r\n\r\n**What did you expect to happen?**\r\nMy country to be blank upon reload.\r\n\r\n**How to reproduce your issue**\r\nSet your country to anything (except blank). 
Try to change back to blank.\r\n\r\n**Any associated stack traces or error logs**\r\nN/A\r\n\n", "before_files": [{"content": "from flask import session\nfrom sqlalchemy.sql.expression import union_all\nfrom marshmallow import fields, post_load\nfrom marshmallow import validate, ValidationError, pre_load\nfrom marshmallow.decorators import validates_schema\nfrom marshmallow_sqlalchemy import field_for\nfrom CTFd.models import ma, Users\nfrom CTFd.utils import get_config\nfrom CTFd.utils.validators import unique_email, validate_country_code\nfrom CTFd.utils.user import is_admin, get_current_user\nfrom CTFd.utils.countries import lookup_country_code\nfrom CTFd.utils.crypto import verify_password, hash_password\nfrom CTFd.utils.email import check_email_is_whitelisted\n\n\nclass UserSchema(ma.ModelSchema):\n class Meta:\n model = Users\n include_fk = True\n dump_only = ('id', 'oauth_id', 'created')\n load_only = ('password',)\n\n name = field_for(\n Users,\n 'name',\n required=True,\n validate=[\n validate.Length(min=1, max=128, error='User names must not be empty')\n ]\n )\n email = field_for(\n Users,\n 'email',\n validate=[\n validate.Email('Emails must be a properly formatted email address'),\n validate.Length(min=1, max=128, error='Emails must not be empty'),\n ]\n )\n website = field_for(\n Users,\n 'website',\n validate=validate.URL(\n error='Websites must be a proper URL starting with http or https',\n schemes={'http', 'https'}\n )\n )\n country = field_for(\n Users,\n 'country',\n validate=[\n validate_country_code\n ]\n )\n password = field_for(\n Users,\n 'password',\n validate=[\n validate.Length(min=1, error='Passwords must not be empty'),\n ]\n )\n\n @pre_load\n def validate_name(self, data):\n name = data.get('name')\n if name is None:\n return\n\n existing_user = Users.query.filter_by(name=name).first()\n if is_admin():\n user_id = data.get('id')\n if user_id:\n if existing_user and existing_user.id != user_id:\n raise ValidationError('User name has already been taken', field_names=['name'])\n else:\n if existing_user:\n raise ValidationError('User name has already been taken', field_names=['name'])\n else:\n current_user = get_current_user()\n if name == current_user.name:\n return data\n else:\n name_changes = get_config('name_changes', default=True)\n if bool(name_changes) is False:\n raise ValidationError('Name changes are disabled', field_names=['name'])\n if existing_user:\n raise ValidationError('User name has already been taken', field_names=['name'])\n\n @pre_load\n def validate_email(self, data):\n email = data.get('email')\n if email is None:\n return\n\n existing_user = Users.query.filter_by(email=email).first()\n\n if is_admin():\n user_id = data.get('id')\n if user_id:\n if existing_user and existing_user.id != user_id:\n raise ValidationError('Email address has already been used', field_names=['email'])\n else:\n if existing_user:\n raise ValidationError('Email address has already been used', field_names=['email'])\n else:\n current_user = get_current_user()\n if email == current_user.email:\n return data\n else:\n if existing_user:\n raise ValidationError('Email address has already been used', field_names=['email'])\n if check_email_is_whitelisted(email) is False:\n raise ValidationError(\n \"Only email addresses under {domains} may register\".format(\n domains=get_config('domain_whitelist')\n ),\n field_names=['email']\n )\n if get_config('verify_emails'):\n current_user.verified = False\n\n @pre_load\n def validate_password_confirmation(self, data):\n password 
= data.get('password')\n confirm = data.get('confirm')\n target_user = get_current_user()\n user_id = data.get('id')\n\n if is_admin():\n pass\n else:\n if password and (confirm is None):\n raise ValidationError('Please confirm your current password', field_names=['confirm'])\n\n if password and confirm:\n test = verify_password(plaintext=confirm, ciphertext=target_user.password)\n if test is True:\n return data\n else:\n raise ValidationError('Your previous password is incorrect', field_names=['confirm'])\n\n views = {\n 'user': [\n 'website',\n 'name',\n 'country',\n 'affiliation',\n 'bracket',\n 'id',\n 'oauth_id',\n ],\n 'self': [\n 'website',\n 'name',\n 'email',\n 'country',\n 'affiliation',\n 'bracket',\n 'id',\n 'oauth_id',\n 'password'\n ],\n 'admin': [\n 'website',\n 'name',\n 'created',\n 'country',\n 'banned',\n 'email',\n 'affiliation',\n 'secret',\n 'bracket',\n 'hidden',\n 'id',\n 'oauth_id',\n 'password',\n 'type',\n 'verified'\n ]\n }\n\n def __init__(self, view=None, *args, **kwargs):\n if view:\n if type(view) == str:\n kwargs['only'] = self.views[view]\n elif type(view) == list:\n kwargs['only'] = view\n\n super(UserSchema, self).__init__(*args, **kwargs)\n", "path": "CTFd/schemas/users.py"}, {"content": "from flask import session\nfrom CTFd.utils.user import is_admin, get_current_team, get_current_user\nfrom CTFd.models import Teams, Users\nfrom CTFd.utils.countries import lookup_country_code\nfrom six.moves.urllib.parse import urlparse, urljoin, quote, unquote\nfrom flask import request\nfrom marshmallow import ValidationError\nimport re\n\n\ndef is_safe_url(target):\n ref_url = urlparse(request.host_url)\n test_url = urlparse(urljoin(request.host_url, target))\n return test_url.scheme in ('http', 'https') and ref_url.netloc == test_url.netloc\n\n\ndef validate_url(url):\n return urlparse(url).scheme.startswith('http')\n\n\ndef validate_email(email):\n return bool(re.match(r\"(^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\\.[a-zA-Z0-9-.]+$)\", email))\n\n\ndef unique_email(email, model=Users):\n obj = model.query.filter_by(email=email).first()\n if is_admin():\n if obj:\n raise ValidationError('Email address has already been used')\n if obj and obj.id != get_current_user().id:\n raise ValidationError('Email address has already been used')\n\n\ndef validate_country_code(country_code):\n if lookup_country_code(country_code) is None:\n raise ValidationError('Invalid Country')\n", "path": "CTFd/utils/validators/__init__.py"}]} | 2,701 | 519 |
gh_patches_debug_526 | rasdani/github-patches | git_diff | Parsl__parsl-2302 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove parsl container bits
This issue is to remind us to remove Parsl container support and update the docs as soon as the funcX executor is integrated; at that point we should switch to recommending container support through it.
</issue>
<code>
[start of docker/app1/app1.py]
1
2 def predict(list_items):
3 """Returns the double of the items"""
4 return [i*2 for i in list_items]
5
[end of docker/app1/app1.py]
[start of docker/app2/app2.py]
1
2 def predict(list_items):
3 """Returns items+10"""
4 return [i+10 for i in list_items]
5
[end of docker/app2/app2.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docker/app1/app1.py b/docker/app1/app1.py
deleted file mode 100644
--- a/docker/app1/app1.py
+++ /dev/null
@@ -1,4 +0,0 @@
-
-def predict(list_items):
- """Returns the double of the items"""
- return [i*2 for i in list_items]
diff --git a/docker/app2/app2.py b/docker/app2/app2.py
deleted file mode 100644
--- a/docker/app2/app2.py
+++ /dev/null
@@ -1,4 +0,0 @@
-
-def predict(list_items):
- """Returns items+10"""
- return [i+10 for i in list_items]
| {"golden_diff": "diff --git a/docker/app1/app1.py b/docker/app1/app1.py\ndeleted file mode 100644\n--- a/docker/app1/app1.py\n+++ /dev/null\n@@ -1,4 +0,0 @@\n-\n-def predict(list_items):\n- \"\"\"Returns the double of the items\"\"\"\n- return [i*2 for i in list_items]\ndiff --git a/docker/app2/app2.py b/docker/app2/app2.py\ndeleted file mode 100644\n--- a/docker/app2/app2.py\n+++ /dev/null\n@@ -1,4 +0,0 @@\n-\n-def predict(list_items):\n- \"\"\"Returns items+10\"\"\"\n- return [i+10 for i in list_items]\n", "issue": "Remove parsl container bits\nThis issue is to remind us to remove Parsl container support and update the docs as soon as the funcX executor is integrated-- we should switch to recommending container support through it.\n", "before_files": [{"content": "\ndef predict(list_items):\n \"\"\"Returns the double of the items\"\"\"\n return [i*2 for i in list_items]\n", "path": "docker/app1/app1.py"}, {"content": "\ndef predict(list_items):\n \"\"\"Returns items+10\"\"\"\n return [i+10 for i in list_items]\n", "path": "docker/app2/app2.py"}]} | 662 | 164 |
gh_patches_debug_38871 | rasdani/github-patches | git_diff | ansible-collections__community.general-4360 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ldap_entry: recursive deletion
### Summary
I would like to delete a whole branch of my ldap tree with `ldap_entry` but it seems recursive deletions are not supported.
I suggest handling recursive deletions.
### Issue Type
Feature Idea
### Component Name
ldap_entry
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: delete all the users
ldap_entry:
server_uri: "{{ ldap_api_url }}"
bind_dn: "{{ ldap_bind_id }}"
bind_pw: "{{ ldap_admin_password }}"
dn: "{{ ldap_users_base }}"
state: absent
```
```python
The full traceback is:
Traceback (most recent call last):
File "master:~/.ansible/collections/ansible_collections/community/general/plugins/modules/ldap_entry.py", line 229, in main
File "master:~/.ansible/collections/ansible_collections/community/general/plugins/modules/ldap_entry.py", line 166, in _delete
File "/usr/lib/python3/dist-packages/ldap/ldapobject.py", line 560, in delete_s
return self.delete_ext_s(dn,None,None)
File "/usr/lib/python3/dist-packages/ldap/ldapobject.py", line 553, in delete_ext_s
resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout)
File "/usr/lib/python3/dist-packages/ldap/ldapobject.py", line 748, in result3
resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4(
File "/usr/lib/python3/dist-packages/ldap/ldapobject.py", line 758, in result4
ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
File "/usr/lib/python3/dist-packages/ldap/ldapobject.py", line 331, in _ldap_call
reraise(exc_type, exc_value, exc_traceback)
File "/usr/lib/python3/dist-packages/ldap/compat.py", line 44, in reraise
raise exc_value
File "/usr/lib/python3/dist-packages/ldap/ldapobject.py", line 315, in _ldap_call
result = func(*args,**kwargs)
ldap.NOT_ALLOWED_ON_NONLEAF: {'desc': 'Operation not allowed on non-leaf', 'info': 'subordinate objects must be deleted first'}
fatal: [brumaire.yaal.coop]: FAILED! => {
"changed": false,
"details": "{'desc': 'Operation not allowed on non-leaf', 'info': 'subordinate objects must be deleted first'}",
"invocation": {
"module_args": {
"attributes": {
"objectClass": null
},
"bind_dn": "cn=admin,dc=mydomain,dc=tld",
"bind_pw": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"dn": "ou=Users,dc=mydomain,dc=tld",
"objectClass": null,
"params": null,
"server_uri": "ldapi:///",
"start_tls": false,
"state": "absent",
"validate_certs": true
}
},
"msg": "Entry action failed."
```
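
A possible approach (editor's sketch, not part of the original report): delete the deepest entries first so that no delete ever hits a non-leaf entry. A rough illustration with plain python-ldap calls, assuming `connection` is an already-bound `LDAPObject` like the one the module holds:

```python
import ldap


def delete_subtree(connection, base_dn):
    # Sketch only: collect every entry under base_dn (including base_dn itself).
    # Only the DNs are needed, so ask for a minimal attribute list.
    results = connection.search_s(base_dn, ldap.SCOPE_SUBTREE, attrlist=['dn'])
    # Comma count is a rough depth proxy; DNs containing escaped commas would
    # need real DN parsing, but this is enough to delete children before parents.
    for dn in sorted((entry_dn for entry_dn, _attrs in results),
                     key=lambda d: d.count(','), reverse=True):
        connection.delete_s(dn)
```

Exposing this behind something like a `recursive: true` option (the option name is only a suggestion) would keep the default single-entry behaviour unchanged.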
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
</issue>
<code>
[start of plugins/modules/net_tools/ldap/ldap_entry.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright: (c) 2016, Peter Sagerson <[email protected]>
5 # Copyright: (c) 2016, Jiri Tyr <[email protected]>
6 #
7 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
8
9 from __future__ import absolute_import, division, print_function
10 __metaclass__ = type
11
12
13 DOCUMENTATION = '''
14 ---
15 module: ldap_entry
16 short_description: Add or remove LDAP entries.
17 description:
18 - Add or remove LDAP entries. This module only asserts the existence or
19 non-existence of an LDAP entry, not its attributes. To assert the
20 attribute values of an entry, see M(community.general.ldap_attrs).
21 notes:
22 - The default authentication settings will attempt to use a SASL EXTERNAL
23 bind over a UNIX domain socket. This works well with the default Ubuntu
24 install for example, which includes a cn=peercred,cn=external,cn=auth ACL
25 rule allowing root to modify the server configuration. If you need to use
26 a simple bind to access your server, pass the credentials in I(bind_dn)
27 and I(bind_pw).
28 author:
29 - Jiri Tyr (@jtyr)
30 requirements:
31 - python-ldap
32 options:
33 attributes:
34 description:
35 - If I(state=present), attributes necessary to create an entry. Existing
36 entries are never modified. To assert specific attribute values on an
37 existing entry, use M(community.general.ldap_attrs) module instead.
38 type: dict
39 objectClass:
40 description:
41 - If I(state=present), value or list of values to use when creating
42 the entry. It can either be a string or an actual list of
43 strings.
44 type: list
45 elements: str
46 state:
47 description:
48 - The target state of the entry.
49 choices: [present, absent]
50 default: present
51 type: str
52 extends_documentation_fragment:
53 - community.general.ldap.documentation
54
55 '''
56
57
58 EXAMPLES = """
59 - name: Make sure we have a parent entry for users
60 community.general.ldap_entry:
61 dn: ou=users,dc=example,dc=com
62 objectClass: organizationalUnit
63
64 - name: Make sure we have an admin user
65 community.general.ldap_entry:
66 dn: cn=admin,dc=example,dc=com
67 objectClass:
68 - simpleSecurityObject
69 - organizationalRole
70 attributes:
71 description: An LDAP administrator
72 userPassword: "{SSHA}tabyipcHzhwESzRaGA7oQ/SDoBZQOGND"
73
74 - name: Get rid of an old entry
75 community.general.ldap_entry:
76 dn: ou=stuff,dc=example,dc=com
77 state: absent
78 server_uri: ldap://localhost/
79 bind_dn: cn=admin,dc=example,dc=com
80 bind_pw: password
81
82 #
83 # The same as in the previous example but with the authentication details
84 # stored in the ldap_auth variable:
85 #
86 # ldap_auth:
87 # server_uri: ldap://localhost/
88 # bind_dn: cn=admin,dc=example,dc=com
89 # bind_pw: password
90 #
91 # In the example below, 'args' is a task keyword, passed at the same level as the module
92 - name: Get rid of an old entry
93 community.general.ldap_entry:
94 dn: ou=stuff,dc=example,dc=com
95 state: absent
96 args: "{{ ldap_auth }}"
97 """
98
99
100 RETURN = """
101 # Default return values
102 """
103
104 import traceback
105
106 from ansible.module_utils.basic import AnsibleModule, missing_required_lib
107 from ansible.module_utils.common.text.converters import to_native, to_bytes
108 from ansible_collections.community.general.plugins.module_utils.ldap import LdapGeneric, gen_specs
109
110 LDAP_IMP_ERR = None
111 try:
112 import ldap.modlist
113
114 HAS_LDAP = True
115 except ImportError:
116 LDAP_IMP_ERR = traceback.format_exc()
117 HAS_LDAP = False
118
119
120 class LdapEntry(LdapGeneric):
121 def __init__(self, module):
122 LdapGeneric.__init__(self, module)
123
124 # Shortcuts
125 self.state = self.module.params['state']
126
127 # Add the objectClass into the list of attributes
128 self.module.params['attributes']['objectClass'] = (
129 self.module.params['objectClass'])
130
131 # Load attributes
132 if self.state == 'present':
133 self.attrs = self._load_attrs()
134
135 def _load_attrs(self):
136 """ Turn attribute's value to array. """
137 attrs = {}
138
139 for name, value in self.module.params['attributes'].items():
140 if isinstance(value, list):
141 attrs[name] = list(map(to_bytes, value))
142 else:
143 attrs[name] = [to_bytes(value)]
144
145 return attrs
146
147 def add(self):
148 """ If self.dn does not exist, returns a callable that will add it. """
149 def _add():
150 self.connection.add_s(self.dn, modlist)
151
152 if not self._is_entry_present():
153 modlist = ldap.modlist.addModlist(self.attrs)
154 action = _add
155 else:
156 action = None
157
158 return action
159
160 def delete(self):
161 """ If self.dn exists, returns a callable that will delete it. """
162 def _delete():
163 self.connection.delete_s(self.dn)
164
165 if self._is_entry_present():
166 action = _delete
167 else:
168 action = None
169
170 return action
171
172 def _is_entry_present(self):
173 try:
174 self.connection.search_s(self.dn, ldap.SCOPE_BASE)
175 except ldap.NO_SUCH_OBJECT:
176 is_present = False
177 else:
178 is_present = True
179
180 return is_present
181
182
183 def main():
184 module = AnsibleModule(
185 argument_spec=gen_specs(
186 attributes=dict(default={}, type='dict'),
187 objectClass=dict(type='list', elements='str'),
188 state=dict(default='present', choices=['present', 'absent']),
189 ),
190 required_if=[('state', 'present', ['objectClass'])],
191 supports_check_mode=True,
192 )
193
194 if not HAS_LDAP:
195 module.fail_json(msg=missing_required_lib('python-ldap'),
196 exception=LDAP_IMP_ERR)
197
198 state = module.params['state']
199
200 # Instantiate the LdapEntry object
201 ldap = LdapEntry(module)
202
203 # Get the action function
204 if state == 'present':
205 action = ldap.add()
206 elif state == 'absent':
207 action = ldap.delete()
208
209 # Perform the action
210 if action is not None and not module.check_mode:
211 try:
212 action()
213 except Exception as e:
214 module.fail_json(msg="Entry action failed.", details=to_native(e), exception=traceback.format_exc())
215
216 module.exit_json(changed=(action is not None))
217
218
219 if __name__ == '__main__':
220 main()
221
[end of plugins/modules/net_tools/ldap/ldap_entry.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugins/modules/net_tools/ldap/ldap_entry.py b/plugins/modules/net_tools/ldap/ldap_entry.py
--- a/plugins/modules/net_tools/ldap/ldap_entry.py
+++ b/plugins/modules/net_tools/ldap/ldap_entry.py
@@ -49,6 +49,13 @@
choices: [present, absent]
default: present
type: str
+ recursive:
+ description:
+ - If I(state=delete), a flag indicating whether a single entry or the
+ whole branch must be deleted.
+ type: bool
+ default: false
+ version_added: 4.6.0
extends_documentation_fragment:
- community.general.ldap.documentation
@@ -110,6 +117,7 @@
LDAP_IMP_ERR = None
try:
import ldap.modlist
+ import ldap.controls
HAS_LDAP = True
except ImportError:
@@ -123,6 +131,7 @@
# Shortcuts
self.state = self.module.params['state']
+ self.recursive = self.module.params['recursive']
# Add the objectClass into the list of attributes
self.module.params['attributes']['objectClass'] = (
@@ -158,12 +167,29 @@
return action
def delete(self):
- """ If self.dn exists, returns a callable that will delete it. """
+ """ If self.dn exists, returns a callable that will delete either
+ the item itself if the recursive option is not set or the whole branch
+ if it is. """
def _delete():
self.connection.delete_s(self.dn)
+ def _delete_recursive():
+ """ Attempt recurive deletion using the subtree-delete control.
+ If that fails, do it manually. """
+ try:
+ subtree_delete = ldap.controls.ValueLessRequestControl('1.2.840.113556.1.4.805')
+ self.connection.delete_ext_s(self.dn, serverctrls=[subtree_delete])
+ except ldap.NOT_ALLOWED_ON_NONLEAF:
+ search = self.connection.search_s(self.dn, ldap.SCOPE_SUBTREE, attrlist=('dn',))
+ search.reverse()
+ for entry in search:
+ self.connection.delete_s(entry[0])
+
if self._is_entry_present():
- action = _delete
+ if self.recursive:
+ action = _delete_recursive
+ else:
+ action = _delete
else:
action = None
@@ -186,6 +212,7 @@
attributes=dict(default={}, type='dict'),
objectClass=dict(type='list', elements='str'),
state=dict(default='present', choices=['present', 'absent']),
+ recursive=dict(default=False, type='bool'),
),
required_if=[('state', 'present', ['objectClass'])],
supports_check_mode=True,
| {"golden_diff": "diff --git a/plugins/modules/net_tools/ldap/ldap_entry.py b/plugins/modules/net_tools/ldap/ldap_entry.py\n--- a/plugins/modules/net_tools/ldap/ldap_entry.py\n+++ b/plugins/modules/net_tools/ldap/ldap_entry.py\n@@ -49,6 +49,13 @@\n choices: [present, absent]\n default: present\n type: str\n+ recursive:\n+ description:\n+ - If I(state=delete), a flag indicating whether a single entry or the\n+ whole branch must be deleted.\n+ type: bool\n+ default: false\n+ version_added: 4.6.0\n extends_documentation_fragment:\n - community.general.ldap.documentation\n \n@@ -110,6 +117,7 @@\n LDAP_IMP_ERR = None\n try:\n import ldap.modlist\n+ import ldap.controls\n \n HAS_LDAP = True\n except ImportError:\n@@ -123,6 +131,7 @@\n \n # Shortcuts\n self.state = self.module.params['state']\n+ self.recursive = self.module.params['recursive']\n \n # Add the objectClass into the list of attributes\n self.module.params['attributes']['objectClass'] = (\n@@ -158,12 +167,29 @@\n return action\n \n def delete(self):\n- \"\"\" If self.dn exists, returns a callable that will delete it. \"\"\"\n+ \"\"\" If self.dn exists, returns a callable that will delete either\n+ the item itself if the recursive option is not set or the whole branch\n+ if it is. \"\"\"\n def _delete():\n self.connection.delete_s(self.dn)\n \n+ def _delete_recursive():\n+ \"\"\" Attempt recurive deletion using the subtree-delete control.\n+ If that fails, do it manually. \"\"\"\n+ try:\n+ subtree_delete = ldap.controls.ValueLessRequestControl('1.2.840.113556.1.4.805')\n+ self.connection.delete_ext_s(self.dn, serverctrls=[subtree_delete])\n+ except ldap.NOT_ALLOWED_ON_NONLEAF:\n+ search = self.connection.search_s(self.dn, ldap.SCOPE_SUBTREE, attrlist=('dn',))\n+ search.reverse()\n+ for entry in search:\n+ self.connection.delete_s(entry[0])\n+\n if self._is_entry_present():\n- action = _delete\n+ if self.recursive:\n+ action = _delete_recursive\n+ else:\n+ action = _delete\n else:\n action = None\n \n@@ -186,6 +212,7 @@\n attributes=dict(default={}, type='dict'),\n objectClass=dict(type='list', elements='str'),\n state=dict(default='present', choices=['present', 'absent']),\n+ recursive=dict(default=False, type='bool'),\n ),\n required_if=[('state', 'present', ['objectClass'])],\n supports_check_mode=True,\n", "issue": "ldap_entry: recursive deletion\n### Summary\n\nI would like to delete a whole branch of my ldap tree with `ldap_entry` but it seems recursive deletions are not supported.\r\n\r\nI suggest handling recursive deletions.\n\n### Issue Type\n\nFeature Idea\n\n### Component Name\n\nldap_entry\n\n### Additional Information\n\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml (paste below)\r\n- name: delete all the users\r\n ldap_entry:\r\n server_uri: \"{{ ldap_api_url }}\"\r\n bind_dn: \"{{ ldap_bind_id }}\"\r\n bind_pw: \"{{ ldap_admin_password }}\"\r\n dn: \"{{ ldap_users_base }}\"\r\n state: absent\r\n```\r\n```python\r\nThe full traceback is:\r\nTraceback (most recent call last):\r\n File \"master:~/.ansible/collections/ansible_collections/community/general/plugins/modules/ldap_entry.py\", line 229, in main\r\n File \"master:~/.ansible/collections/ansible_collections/community/general/plugins/modules/ldap_entry.py\", line 166, in _delete\r\n File \"/usr/lib/python3/dist-packages/ldap/ldapobject.py\", line 560, in delete_s\r\n return self.delete_ext_s(dn,None,None)\r\n File \"/usr/lib/python3/dist-packages/ldap/ldapobject.py\", line 553, in delete_ext_s\r\n resp_type, resp_data, resp_msgid, 
resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout)\r\n File \"/usr/lib/python3/dist-packages/ldap/ldapobject.py\", line 748, in result3\r\n resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4(\r\n File \"/usr/lib/python3/dist-packages/ldap/ldapobject.py\", line 758, in result4\r\n ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)\r\n File \"/usr/lib/python3/dist-packages/ldap/ldapobject.py\", line 331, in _ldap_call\r\n reraise(exc_type, exc_value, exc_traceback)\r\n File \"/usr/lib/python3/dist-packages/ldap/compat.py\", line 44, in reraise\r\n raise exc_value\r\n File \"/usr/lib/python3/dist-packages/ldap/ldapobject.py\", line 315, in _ldap_call\r\n result = func(*args,**kwargs)\r\nldap.NOT_ALLOWED_ON_NONLEAF: {'desc': 'Operation not allowed on non-leaf', 'info': 'subordinate objects must be deleted first'}\r\nfatal: [brumaire.yaal.coop]: FAILED! => {\r\n \"changed\": false,\r\n \"details\": \"{'desc': 'Operation not allowed on non-leaf', 'info': 'subordinate objects must be deleted first'}\",\r\n \"invocation\": {\r\n \"module_args\": {\r\n \"attributes\": {\r\n \"objectClass\": null\r\n },\r\n \"bind_dn\": \"cn=admin,dc=mydomain,dc=tld\",\r\n \"bind_pw\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\",\r\n \"dn\": \"ou=Users,dc=mydomain,dc=tld\",\r\n \"objectClass\": null,\r\n \"params\": null,\r\n \"server_uri\": \"ldapi:///\",\r\n \"start_tls\": false,\r\n \"state\": \"absent\",\r\n \"validate_certs\": true\r\n }\r\n },\r\n \"msg\": \"Entry action failed.\"\r\n```\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright: (c) 2016, Peter Sagerson <[email protected]>\n# Copyright: (c) 2016, Jiri Tyr <[email protected]>\n#\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = '''\n---\nmodule: ldap_entry\nshort_description: Add or remove LDAP entries.\ndescription:\n - Add or remove LDAP entries. This module only asserts the existence or\n non-existence of an LDAP entry, not its attributes. To assert the\n attribute values of an entry, see M(community.general.ldap_attrs).\nnotes:\n - The default authentication settings will attempt to use a SASL EXTERNAL\n bind over a UNIX domain socket. This works well with the default Ubuntu\n install for example, which includes a cn=peercred,cn=external,cn=auth ACL\n rule allowing root to modify the server configuration. If you need to use\n a simple bind to access your server, pass the credentials in I(bind_dn)\n and I(bind_pw).\nauthor:\n - Jiri Tyr (@jtyr)\nrequirements:\n - python-ldap\noptions:\n attributes:\n description:\n - If I(state=present), attributes necessary to create an entry. Existing\n entries are never modified. To assert specific attribute values on an\n existing entry, use M(community.general.ldap_attrs) module instead.\n type: dict\n objectClass:\n description:\n - If I(state=present), value or list of values to use when creating\n the entry. 
It can either be a string or an actual list of\n strings.\n type: list\n elements: str\n state:\n description:\n - The target state of the entry.\n choices: [present, absent]\n default: present\n type: str\nextends_documentation_fragment:\n- community.general.ldap.documentation\n\n'''\n\n\nEXAMPLES = \"\"\"\n- name: Make sure we have a parent entry for users\n community.general.ldap_entry:\n dn: ou=users,dc=example,dc=com\n objectClass: organizationalUnit\n\n- name: Make sure we have an admin user\n community.general.ldap_entry:\n dn: cn=admin,dc=example,dc=com\n objectClass:\n - simpleSecurityObject\n - organizationalRole\n attributes:\n description: An LDAP administrator\n userPassword: \"{SSHA}tabyipcHzhwESzRaGA7oQ/SDoBZQOGND\"\n\n- name: Get rid of an old entry\n community.general.ldap_entry:\n dn: ou=stuff,dc=example,dc=com\n state: absent\n server_uri: ldap://localhost/\n bind_dn: cn=admin,dc=example,dc=com\n bind_pw: password\n\n#\n# The same as in the previous example but with the authentication details\n# stored in the ldap_auth variable:\n#\n# ldap_auth:\n# server_uri: ldap://localhost/\n# bind_dn: cn=admin,dc=example,dc=com\n# bind_pw: password\n#\n# In the example below, 'args' is a task keyword, passed at the same level as the module\n- name: Get rid of an old entry\n community.general.ldap_entry:\n dn: ou=stuff,dc=example,dc=com\n state: absent\n args: \"{{ ldap_auth }}\"\n\"\"\"\n\n\nRETURN = \"\"\"\n# Default return values\n\"\"\"\n\nimport traceback\n\nfrom ansible.module_utils.basic import AnsibleModule, missing_required_lib\nfrom ansible.module_utils.common.text.converters import to_native, to_bytes\nfrom ansible_collections.community.general.plugins.module_utils.ldap import LdapGeneric, gen_specs\n\nLDAP_IMP_ERR = None\ntry:\n import ldap.modlist\n\n HAS_LDAP = True\nexcept ImportError:\n LDAP_IMP_ERR = traceback.format_exc()\n HAS_LDAP = False\n\n\nclass LdapEntry(LdapGeneric):\n def __init__(self, module):\n LdapGeneric.__init__(self, module)\n\n # Shortcuts\n self.state = self.module.params['state']\n\n # Add the objectClass into the list of attributes\n self.module.params['attributes']['objectClass'] = (\n self.module.params['objectClass'])\n\n # Load attributes\n if self.state == 'present':\n self.attrs = self._load_attrs()\n\n def _load_attrs(self):\n \"\"\" Turn attribute's value to array. \"\"\"\n attrs = {}\n\n for name, value in self.module.params['attributes'].items():\n if isinstance(value, list):\n attrs[name] = list(map(to_bytes, value))\n else:\n attrs[name] = [to_bytes(value)]\n\n return attrs\n\n def add(self):\n \"\"\" If self.dn does not exist, returns a callable that will add it. \"\"\"\n def _add():\n self.connection.add_s(self.dn, modlist)\n\n if not self._is_entry_present():\n modlist = ldap.modlist.addModlist(self.attrs)\n action = _add\n else:\n action = None\n\n return action\n\n def delete(self):\n \"\"\" If self.dn exists, returns a callable that will delete it. 
\"\"\"\n def _delete():\n self.connection.delete_s(self.dn)\n\n if self._is_entry_present():\n action = _delete\n else:\n action = None\n\n return action\n\n def _is_entry_present(self):\n try:\n self.connection.search_s(self.dn, ldap.SCOPE_BASE)\n except ldap.NO_SUCH_OBJECT:\n is_present = False\n else:\n is_present = True\n\n return is_present\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=gen_specs(\n attributes=dict(default={}, type='dict'),\n objectClass=dict(type='list', elements='str'),\n state=dict(default='present', choices=['present', 'absent']),\n ),\n required_if=[('state', 'present', ['objectClass'])],\n supports_check_mode=True,\n )\n\n if not HAS_LDAP:\n module.fail_json(msg=missing_required_lib('python-ldap'),\n exception=LDAP_IMP_ERR)\n\n state = module.params['state']\n\n # Instantiate the LdapEntry object\n ldap = LdapEntry(module)\n\n # Get the action function\n if state == 'present':\n action = ldap.add()\n elif state == 'absent':\n action = ldap.delete()\n\n # Perform the action\n if action is not None and not module.check_mode:\n try:\n action()\n except Exception as e:\n module.fail_json(msg=\"Entry action failed.\", details=to_native(e), exception=traceback.format_exc())\n\n module.exit_json(changed=(action is not None))\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/net_tools/ldap/ldap_entry.py"}]} | 3,385 | 644 |
gh_patches_debug_26453 | rasdani/github-patches | git_diff | StackStorm__st2-4666 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
st2 python3 interpreter is falling back to importing a python2 lib that is incompatible with python3
##### SUMMARY
 One of my libraries is trying to import `cassandra.cluster`. That library in turn does `from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait as wait_futures`; as I understand it, `concurrent.futures` is part of the Python 3 standard library and only exists as a separate backport package for Python 2 compatibility.
##### ISSUE TYPE
- Bug Report
##### STACKSTORM VERSION
```
# st2 --version
st2 3.0.0, on Python 2.7.6
```
##### OS / ENVIRONMENT / INSTALL METHOD
Docker container running Ubuntu 14.04
##### STEPS TO REPRODUCE
Create a python runner in your python3-specific pack. Inside the runner import cassandra libs and just create an object.
```
from cassandra.cluster import Cluster
cluster = Cluster()
```
##### EXPECTED RESULTS
I expect the library to import and the object to initialize
##### ACTUAL RESULTS
st2's Python 3 action falls back to the Python 2 copy of the library when importing it, and it throws an exception similar to:
```
File \"/opt/stackstorm/packs/ostk_common/actions/lib/ostkdbs.py\", line 2, in <module>
from cassandra.cluster import Cluster
File \"/opt/stackstorm/virtualenvs/ostk_common/lib/python3.5/site-packages/cassandra/cluster.py\", line 23, in <module>
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait as wait_futures
File \"/opt/stackstorm/st2/lib/python2.7/site-packages/concurrent/futures/__init__.py\", line 8, in <module>
from concurrent.futures._base import (FIRST_COMPLETED,
File \"/opt/stackstorm/st2/lib/python2.7/site-packages/concurrent/futures/_base.py\", line 414
raise exception_type, self._exception, self._traceback
^
SyntaxError: invalid syntax
```
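
A quick diagnostic (editor's addition, not part of the original report) that can be dropped into the failing Python 3 action to confirm which copy of `concurrent.futures` wins on the import path:

```python
import sys
import concurrent.futures

# Sketch only: shows whether the module resolves to the Python 3 stdlib
# or to the Python 2 backport under /opt/stackstorm/st2/lib/python2.7/site-packages.
print(sys.version)
print(concurrent.futures.__file__)
print([p for p in sys.path if 'python2.7' in p])
```

If the second line points at the `python2.7/site-packages` backport, the pack's Python 3 lib directories are being searched after the Python 2 entries on `PYTHONPATH`.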
</issue>
<code>
[start of st2common/st2common/util/sandboxing.py]
1 # Copyright 2019 Extreme Networks, Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Utility functions for our sandboxing model which is implemented on top of separate processes and
17 virtual environments.
18 """
19
20 from __future__ import absolute_import
21
22 import os
23 import sys
24 import fnmatch
25 from distutils.sysconfig import get_python_lib
26
27 from oslo_config import cfg
28
29 from st2common.constants.pack import SYSTEM_PACK_NAMES
30 from st2common.content.utils import get_pack_base_path
31
32 __all__ = [
33 'get_sandbox_python_binary_path',
34 'get_sandbox_python_path',
35 'get_sandbox_python_path_for_python_action',
36 'get_sandbox_path',
37 'get_sandbox_virtualenv_path',
38
39 'is_pack_virtualenv_using_python3'
40 ]
41
42
43 def get_sandbox_python_binary_path(pack=None):
44 """
45 Return path to the Python binary for the provided pack.
46
47 :param pack: Pack name.
48 :type pack: ``str``
49 """
50 system_base_path = cfg.CONF.system.base_path
51 virtualenv_path = os.path.join(system_base_path, 'virtualenvs', pack)
52
53 if pack in SYSTEM_PACK_NAMES:
54 # Use system python for "packs" and "core" actions
55 python_path = sys.executable
56 else:
57 python_path = os.path.join(virtualenv_path, 'bin/python')
58
59 return python_path
60
61
62 def get_sandbox_path(virtualenv_path):
63 """
64 Return PATH environment variable value for the sandboxed environment.
65
66 This function makes sure that virtualenv/bin directory is in the path and has precedence over
67 the global PATH values.
68
69 Note: This function needs to be called from the parent process (one which is spawning a
70 sandboxed process).
71 """
72 sandbox_path = []
73
74 parent_path = os.environ.get('PATH', '')
75 if not virtualenv_path:
76 return parent_path
77
78 parent_path = parent_path.split(':')
79 parent_path = [path for path in parent_path if path]
80
81 # Add virtualenv bin directory
82 virtualenv_bin_path = os.path.join(virtualenv_path, 'bin/')
83 sandbox_path.append(virtualenv_bin_path)
84 sandbox_path.extend(parent_path)
85
86 sandbox_path = ':'.join(sandbox_path)
87 return sandbox_path
88
89
90 def get_sandbox_python_path(inherit_from_parent=True, inherit_parent_virtualenv=True):
91 """
92 Return PYTHONPATH environment variable value for the new sandboxed environment.
93
94 This function takes into account if the current (parent) process is running under virtualenv
95 and other things like that.
96
97 Note: This function needs to be called from the parent process (one which is spawning a
98 sandboxed process).
99
100 :param inherit_from_parent: True to inheir PYTHONPATH from the current process.
101 :type inherit_from_parent: ``str``
102
103 :param inherit_parent_virtualenv: True to inherit virtualenv path if the current process is
104 running inside virtual environment.
105 :type inherit_parent_virtualenv: ``str``
106 """
107 sandbox_python_path = []
108 parent_python_path = os.environ.get('PYTHONPATH', '')
109
110 parent_python_path = parent_python_path.split(':')
111 parent_python_path = [path for path in parent_python_path if path]
112
113 if inherit_from_parent:
114 sandbox_python_path.extend(parent_python_path)
115
116 if inherit_parent_virtualenv and hasattr(sys, 'real_prefix'):
117 # We are running inside virtualenv
118 site_packages_dir = get_python_lib()
119
120 sys_prefix = os.path.abspath(sys.prefix)
121 assert sys_prefix in site_packages_dir
122
123 sandbox_python_path.append(site_packages_dir)
124
125 sandbox_python_path = ':'.join(sandbox_python_path)
126 sandbox_python_path = ':' + sandbox_python_path
127 return sandbox_python_path
128
129
130 def get_sandbox_python_path_for_python_action(pack, inherit_from_parent=True,
131 inherit_parent_virtualenv=True):
132 """
133 Return sandbox PYTHONPATH for a particular Python runner action.
134
135 Same as get_sandbox_python_path() function, but it's intended to be used for Python runner
136 actions and also takes into account if a pack virtual environment uses Python 3.
137 """
138 sandbox_python_path = get_sandbox_python_path(
139 inherit_from_parent=inherit_from_parent,
140 inherit_parent_virtualenv=inherit_parent_virtualenv)
141
142 pack_base_path = get_pack_base_path(pack_name=pack)
143 virtualenv_path = get_sandbox_virtualenv_path(pack=pack)
144
145 if not virtualenv_path:
146 return sandbox_python_path
147
148 uses_python3, virtualenv_directories = is_pack_virtualenv_using_python3(pack=pack)
149 if uses_python3:
150 # Add Python 3 lib directory (lib/python3.x) in front of the PYTHONPATH. This way we avoid
151 # issues with scripts trying to use packages / modules from Python 2.7 site-packages
152 # directory instead of the versions from Python 3 stdlib.
153 pack_actions_lib_paths = os.path.join(pack_base_path, 'actions/lib/')
154 pack_virtualenv_lib_path = os.path.join(virtualenv_path, 'lib')
155 python3_lib_directory = os.path.join(pack_virtualenv_lib_path, virtualenv_directories[0])
156
157 # Add Python 3 site-packages directory (lib/python3.x/site-packages) in front of the Python
158 # 2.7 system site-packages This is important because we want Python 3 compatible libraries
159 # to be used from the pack virtual environment and not system ones.
160 python3_site_packages_directory = os.path.join(pack_virtualenv_lib_path,
161 virtualenv_directories[0],
162 'site-packages')
163 sandbox_python_path = (python3_lib_directory + ':' + python3_site_packages_directory + ':' +
164 pack_actions_lib_paths + ':' + sandbox_python_path)
165
166 return sandbox_python_path
167
168
169 def is_pack_virtualenv_using_python3(pack):
170 """
171 Return True if a particular pack virtual environment is using Python 3.
172
173 :return: (uses_python3_bool, virtualenv_lib_directories)
174 :rtype: ``tuple``
175 """
176 # If python3.? directory exists in pack virtualenv lib/ path it means Python 3 is used by
177 # that virtual environment and we take that in to account when constructing PYTHONPATH
178 virtualenv_path = get_sandbox_virtualenv_path(pack=pack)
179
180 if virtualenv_path and os.path.isdir(virtualenv_path):
181 pack_virtualenv_lib_path = os.path.join(virtualenv_path, 'lib')
182
183 if not os.path.exists(pack_virtualenv_lib_path):
184 return False, None
185
186 virtualenv_directories = os.listdir(pack_virtualenv_lib_path)
187 virtualenv_directories = [dir_name for dir_name in virtualenv_directories if
188 fnmatch.fnmatch(dir_name, 'python3*')]
189 uses_python3 = bool(virtualenv_directories)
190 else:
191 uses_python3 = False
192 virtualenv_directories = None
193
194 return uses_python3, virtualenv_directories
195
196
197 def get_sandbox_virtualenv_path(pack):
198 """
199 Return a path to the virtual environment for the provided pack.
200 """
201
202 if pack in SYSTEM_PACK_NAMES:
203 virtualenv_path = None
204 else:
205 system_base_path = cfg.CONF.system.base_path
206 virtualenv_path = os.path.join(system_base_path, 'virtualenvs', pack)
207
208 return virtualenv_path
209
[end of st2common/st2common/util/sandboxing.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/st2common/st2common/util/sandboxing.py b/st2common/st2common/util/sandboxing.py
--- a/st2common/st2common/util/sandboxing.py
+++ b/st2common/st2common/util/sandboxing.py
@@ -160,8 +160,30 @@
python3_site_packages_directory = os.path.join(pack_virtualenv_lib_path,
virtualenv_directories[0],
'site-packages')
- sandbox_python_path = (python3_lib_directory + ':' + python3_site_packages_directory + ':' +
- pack_actions_lib_paths + ':' + sandbox_python_path)
+
+ # Work around to make sure we also add system lib dir to PYTHONPATH and not just virtualenv
+ # one
+ # NOTE: abc.py is always available in base lib directory which is symlinked to virtualenv
+ # lib directory
+ abc_module_path = os.path.join(python3_lib_directory, 'abc.py')
+ link_path = os.path.realpath(abc_module_path)
+ python3_system_lib_directory = os.path.dirname(link_path)
+
+ if not os.path.exists(python3_system_lib_directory):
+ python3_system_lib_directory = None
+
+ full_sandbox_python_path = []
+
+ # NOTE: Order here is very important for imports to function correctly
+ if python3_lib_directory:
+ full_sandbox_python_path.append(python3_system_lib_directory)
+
+ full_sandbox_python_path.append(python3_lib_directory)
+ full_sandbox_python_path.append(python3_site_packages_directory)
+ full_sandbox_python_path.append(pack_actions_lib_paths)
+ full_sandbox_python_path.append(sandbox_python_path)
+
+ sandbox_python_path = ':'.join(full_sandbox_python_path)
return sandbox_python_path
| {"golden_diff": "diff --git a/st2common/st2common/util/sandboxing.py b/st2common/st2common/util/sandboxing.py\n--- a/st2common/st2common/util/sandboxing.py\n+++ b/st2common/st2common/util/sandboxing.py\n@@ -160,8 +160,30 @@\n python3_site_packages_directory = os.path.join(pack_virtualenv_lib_path,\n virtualenv_directories[0],\n 'site-packages')\n- sandbox_python_path = (python3_lib_directory + ':' + python3_site_packages_directory + ':' +\n- pack_actions_lib_paths + ':' + sandbox_python_path)\n+\n+ # Work around to make sure we also add system lib dir to PYTHONPATH and not just virtualenv\n+ # one\n+ # NOTE: abc.py is always available in base lib directory which is symlinked to virtualenv\n+ # lib directory\n+ abc_module_path = os.path.join(python3_lib_directory, 'abc.py')\n+ link_path = os.path.realpath(abc_module_path)\n+ python3_system_lib_directory = os.path.dirname(link_path)\n+\n+ if not os.path.exists(python3_system_lib_directory):\n+ python3_system_lib_directory = None\n+\n+ full_sandbox_python_path = []\n+\n+ # NOTE: Order here is very important for imports to function correctly\n+ if python3_lib_directory:\n+ full_sandbox_python_path.append(python3_system_lib_directory)\n+\n+ full_sandbox_python_path.append(python3_lib_directory)\n+ full_sandbox_python_path.append(python3_site_packages_directory)\n+ full_sandbox_python_path.append(pack_actions_lib_paths)\n+ full_sandbox_python_path.append(sandbox_python_path)\n+\n+ sandbox_python_path = ':'.join(full_sandbox_python_path)\n \n return sandbox_python_path\n", "issue": "st2 python3 interpreter is falling back to importing a python2 lib that is incompatible with python3\n##### SUMMARY\r\n\r\n One of my libraries is trying to import `cassandra.cluster`. That library is trying to import `from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait as wait_futures` - which, as I understand, is built into python3 (and a separate library for compatibility with python3)\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### STACKSTORM VERSION\r\n```\r\n# st2 --version\r\nst2 3.0.0, on Python 2.7.6\r\n```\r\n##### OS / ENVIRONMENT / INSTALL METHOD\r\nDocker container running Ubuntu 14.04\r\n\r\n##### STEPS TO REPRODUCE\r\nCreate a python runner in your python3-specific pack. 
Inside the runner import cassandra libs and just create an object.\r\n```\r\nfrom cassandra.cluster import Cluster\r\n cluster = Cluster()\r\n```\r\n\r\n##### EXPECTED RESULTS\r\nI expect the library to import and the object to initialize\r\n\r\n##### ACTUAL RESULTS\r\nst2 python3 falls back to python2 to import the lib and it throws an exception similar to\r\n```\r\n File \\\"/opt/stackstorm/packs/ostk_common/actions/lib/ostkdbs.py\\\", line 2, in <module>\r\n from cassandra.cluster import Cluster\r\n File \\\"/opt/stackstorm/virtualenvs/ostk_common/lib/python3.5/site-packages/cassandra/cluster.py\\\", line 23, in <module>\r\n from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait as wait_futures\r\n File \\\"/opt/stackstorm/st2/lib/python2.7/site-packages/concurrent/futures/__init__.py\\\", line 8, in <module>\r\n from concurrent.futures._base import (FIRST_COMPLETED,\r\n File \\\"/opt/stackstorm/st2/lib/python2.7/site-packages/concurrent/futures/_base.py\\\", line 414\r\n raise exception_type, self._exception, self._traceback\r\n ^\r\nSyntaxError: invalid syntax\r\n```\n", "before_files": [{"content": "# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nUtility functions for our sandboxing model which is implemented on top of separate processes and\nvirtual environments.\n\"\"\"\n\nfrom __future__ import absolute_import\n\nimport os\nimport sys\nimport fnmatch\nfrom distutils.sysconfig import get_python_lib\n\nfrom oslo_config import cfg\n\nfrom st2common.constants.pack import SYSTEM_PACK_NAMES\nfrom st2common.content.utils import get_pack_base_path\n\n__all__ = [\n 'get_sandbox_python_binary_path',\n 'get_sandbox_python_path',\n 'get_sandbox_python_path_for_python_action',\n 'get_sandbox_path',\n 'get_sandbox_virtualenv_path',\n\n 'is_pack_virtualenv_using_python3'\n]\n\n\ndef get_sandbox_python_binary_path(pack=None):\n \"\"\"\n Return path to the Python binary for the provided pack.\n\n :param pack: Pack name.\n :type pack: ``str``\n \"\"\"\n system_base_path = cfg.CONF.system.base_path\n virtualenv_path = os.path.join(system_base_path, 'virtualenvs', pack)\n\n if pack in SYSTEM_PACK_NAMES:\n # Use system python for \"packs\" and \"core\" actions\n python_path = sys.executable\n else:\n python_path = os.path.join(virtualenv_path, 'bin/python')\n\n return python_path\n\n\ndef get_sandbox_path(virtualenv_path):\n \"\"\"\n Return PATH environment variable value for the sandboxed environment.\n\n This function makes sure that virtualenv/bin directory is in the path and has precedence over\n the global PATH values.\n\n Note: This function needs to be called from the parent process (one which is spawning a\n sandboxed process).\n \"\"\"\n sandbox_path = []\n\n parent_path = os.environ.get('PATH', '')\n if not virtualenv_path:\n return parent_path\n\n parent_path = parent_path.split(':')\n parent_path = [path for path in parent_path if path]\n\n # Add virtualenv bin directory\n virtualenv_bin_path = 
os.path.join(virtualenv_path, 'bin/')\n sandbox_path.append(virtualenv_bin_path)\n sandbox_path.extend(parent_path)\n\n sandbox_path = ':'.join(sandbox_path)\n return sandbox_path\n\n\ndef get_sandbox_python_path(inherit_from_parent=True, inherit_parent_virtualenv=True):\n \"\"\"\n Return PYTHONPATH environment variable value for the new sandboxed environment.\n\n This function takes into account if the current (parent) process is running under virtualenv\n and other things like that.\n\n Note: This function needs to be called from the parent process (one which is spawning a\n sandboxed process).\n\n :param inherit_from_parent: True to inheir PYTHONPATH from the current process.\n :type inherit_from_parent: ``str``\n\n :param inherit_parent_virtualenv: True to inherit virtualenv path if the current process is\n running inside virtual environment.\n :type inherit_parent_virtualenv: ``str``\n \"\"\"\n sandbox_python_path = []\n parent_python_path = os.environ.get('PYTHONPATH', '')\n\n parent_python_path = parent_python_path.split(':')\n parent_python_path = [path for path in parent_python_path if path]\n\n if inherit_from_parent:\n sandbox_python_path.extend(parent_python_path)\n\n if inherit_parent_virtualenv and hasattr(sys, 'real_prefix'):\n # We are running inside virtualenv\n site_packages_dir = get_python_lib()\n\n sys_prefix = os.path.abspath(sys.prefix)\n assert sys_prefix in site_packages_dir\n\n sandbox_python_path.append(site_packages_dir)\n\n sandbox_python_path = ':'.join(sandbox_python_path)\n sandbox_python_path = ':' + sandbox_python_path\n return sandbox_python_path\n\n\ndef get_sandbox_python_path_for_python_action(pack, inherit_from_parent=True,\n inherit_parent_virtualenv=True):\n \"\"\"\n Return sandbox PYTHONPATH for a particular Python runner action.\n\n Same as get_sandbox_python_path() function, but it's intended to be used for Python runner\n actions and also takes into account if a pack virtual environment uses Python 3.\n \"\"\"\n sandbox_python_path = get_sandbox_python_path(\n inherit_from_parent=inherit_from_parent,\n inherit_parent_virtualenv=inherit_parent_virtualenv)\n\n pack_base_path = get_pack_base_path(pack_name=pack)\n virtualenv_path = get_sandbox_virtualenv_path(pack=pack)\n\n if not virtualenv_path:\n return sandbox_python_path\n\n uses_python3, virtualenv_directories = is_pack_virtualenv_using_python3(pack=pack)\n if uses_python3:\n # Add Python 3 lib directory (lib/python3.x) in front of the PYTHONPATH. 
This way we avoid\n # issues with scripts trying to use packages / modules from Python 2.7 site-packages\n # directory instead of the versions from Python 3 stdlib.\n pack_actions_lib_paths = os.path.join(pack_base_path, 'actions/lib/')\n pack_virtualenv_lib_path = os.path.join(virtualenv_path, 'lib')\n python3_lib_directory = os.path.join(pack_virtualenv_lib_path, virtualenv_directories[0])\n\n # Add Python 3 site-packages directory (lib/python3.x/site-packages) in front of the Python\n # 2.7 system site-packages This is important because we want Python 3 compatible libraries\n # to be used from the pack virtual environment and not system ones.\n python3_site_packages_directory = os.path.join(pack_virtualenv_lib_path,\n virtualenv_directories[0],\n 'site-packages')\n sandbox_python_path = (python3_lib_directory + ':' + python3_site_packages_directory + ':' +\n pack_actions_lib_paths + ':' + sandbox_python_path)\n\n return sandbox_python_path\n\n\ndef is_pack_virtualenv_using_python3(pack):\n \"\"\"\n Return True if a particular pack virtual environment is using Python 3.\n\n :return: (uses_python3_bool, virtualenv_lib_directories)\n :rtype: ``tuple``\n \"\"\"\n # If python3.? directory exists in pack virtualenv lib/ path it means Python 3 is used by\n # that virtual environment and we take that in to account when constructing PYTHONPATH\n virtualenv_path = get_sandbox_virtualenv_path(pack=pack)\n\n if virtualenv_path and os.path.isdir(virtualenv_path):\n pack_virtualenv_lib_path = os.path.join(virtualenv_path, 'lib')\n\n if not os.path.exists(pack_virtualenv_lib_path):\n return False, None\n\n virtualenv_directories = os.listdir(pack_virtualenv_lib_path)\n virtualenv_directories = [dir_name for dir_name in virtualenv_directories if\n fnmatch.fnmatch(dir_name, 'python3*')]\n uses_python3 = bool(virtualenv_directories)\n else:\n uses_python3 = False\n virtualenv_directories = None\n\n return uses_python3, virtualenv_directories\n\n\ndef get_sandbox_virtualenv_path(pack):\n \"\"\"\n Return a path to the virtual environment for the provided pack.\n \"\"\"\n\n if pack in SYSTEM_PACK_NAMES:\n virtualenv_path = None\n else:\n system_base_path = cfg.CONF.system.base_path\n virtualenv_path = os.path.join(system_base_path, 'virtualenvs', pack)\n\n return virtualenv_path\n", "path": "st2common/st2common/util/sandboxing.py"}]} | 3,193 | 392 |
gh_patches_debug_12266 | rasdani/github-patches | git_diff | mindsdb__mindsdb-1799 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Connection to CockroachDB is not possible
</issue>
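For background, CockroachDB Cloud endpoints (`*.cockroachlabs.cloud`) only accept TLS connections, so a plain `pg8000.connect()` call with no SSL context is refused. Below is a hedged sketch of a connection that should work; the host and credentials are made up, and whether `ssl_context=True` or an explicit `ssl.SSLContext` is required depends on the pg8000 version in use.

```python
# Hedged sketch (not MindsDB code); host and credentials are made up.
import pg8000

conn = pg8000.connect(
    database="defaultdb",
    user="demo_user",
    password="demo_password",
    host="free-tier.gcp-us-central1.cockroachlabs.cloud",
    port=26257,
    ssl_context=True,  # ask pg8000 to negotiate TLS with default settings
)
```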
<code>
[start of mindsdb/integrations/postgres/postgres.py]
1 from contextlib import closing
2 import pg8000
3
4 from lightwood.api import dtype
5 from mindsdb.integrations.base import Integration
6 from mindsdb.utilities.log import log
7
8
9 class PostgreSQLConnectionChecker:
10 def __init__(self, **kwargs):
11 self.host = kwargs.get('host')
12 self.port = kwargs.get('port')
13 self.user = kwargs.get('user')
14 self.password = kwargs.get('password')
15 self.database = kwargs.get('database', 'postgres')
16
17 def _get_connection(self):
18 return pg8000.connect(
19 database=self.database,
20 user=self.user,
21 password=self.password,
22 host=self.host,
23 port=self.port
24 )
25
26 def check_connection(self):
27 try:
28 con = self._get_connection()
29 with closing(con) as con:
30 con.run('select 1;')
31 connected = True
32 except Exception:
33 connected = False
34 return connected
35
36
37 class PostgreSQL(Integration, PostgreSQLConnectionChecker):
38 def __init__(self, config, name, db_info):
39 super().__init__(config, name)
40 self.user = db_info.get('user')
41 self.password = db_info.get('password')
42 self.host = db_info.get('host')
43 self.port = db_info.get('port')
44 self.database = db_info.get('database', 'postgres')
45
46 def _to_postgres_table(self, dtype_dict, predicted_cols, columns):
47 subtype_map = {
48 dtype.integer: ' int8',
49 dtype.float: 'float8',
50 dtype.binary: 'bool',
51 dtype.date: 'date',
52 dtype.datetime: 'timestamp',
53 dtype.binary: 'text',
54 dtype.categorical: 'text',
55 dtype.tags: 'text',
56 dtype.image: 'text',
57 dtype.video: 'text',
58 dtype.audio: 'text',
59 dtype.short_text: 'text',
60 dtype.rich_text: 'text',
61 dtype.array: 'text',
62 dtype.quantity: 'text',
63 dtype.tsarray: 'text',
64 'default': 'text'
65 }
66
67 column_declaration = []
68 for name in columns:
69 try:
70 col_subtype = dtype_dict[name]
71 new_type = subtype_map.get(col_subtype, subtype_map.get('default'))
72 column_declaration.append(f' "{name}" {new_type} ')
73 if name in predicted_cols:
74 column_declaration.append(f' "{name}_original" {new_type} ')
75 except Exception as e:
76 log.error(f'Error: can not determine postgres data type for column {name}: {e}')
77
78 return column_declaration
79
80 def _escape_table_name(self, name):
81 return '"' + name.replace('"', '""') + '"'
82
83 def _query(self, query):
84 con = self._get_connection()
85 with closing(con) as con:
86
87 cur = con.cursor()
88 res = True
89 cur.execute(query)
90
91 try:
92 rows = cur.fetchall()
93 keys = [k[0] if isinstance(k[0], str) else k[0].decode('ascii') for k in cur.description]
94 res = [dict(zip(keys, row)) for row in rows]
95 except Exception:
96 pass
97
98 con.commit()
99
100 return res
101
102 def setup(self):
103 user = f"{self.config['api']['mysql']['user']}_{self.name}"
104 password = self.config['api']['mysql']['password']
105 host = self.config['api']['mysql']['host']
106 port = self.config['api']['mysql']['port']
107
108 try:
109 self._query('''
110 DO $$
111 begin
112 if not exists (SELECT 1 FROM pg_extension where extname = 'mysql_fdw') then
113 CREATE EXTENSION mysql_fdw;
114 end if;
115 END
116 $$;
117 ''')
118 except Exception:
119 print('Error: cant find or activate mysql_fdw extension for PostgreSQL.')
120
121 self._query(f'DROP SCHEMA IF EXISTS {self.mindsdb_database} CASCADE')
122
123 self._query(f"DROP USER MAPPING IF EXISTS FOR {self.user} SERVER server_{self.mindsdb_database}")
124
125 self._query(f'DROP SERVER IF EXISTS server_{self.mindsdb_database} CASCADE')
126
127 self._query(f'''
128 CREATE SERVER server_{self.mindsdb_database}
129 FOREIGN DATA WRAPPER mysql_fdw
130 OPTIONS (host '{host}', port '{port}');
131 ''')
132
133 self._query(f'''
134 CREATE USER MAPPING FOR {self.user}
135 SERVER server_{self.mindsdb_database}
136 OPTIONS (username '{user}', password '{password}');
137 ''')
138
139 self._query(f'CREATE SCHEMA {self.mindsdb_database}')
140
141 q = f"""
142 CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.predictors (
143 name text,
144 status text,
145 accuracy text,
146 predict text,
147 select_data_query text,
148 training_options text
149 )
150 SERVER server_{self.mindsdb_database}
151 OPTIONS (dbname 'mindsdb', table_name 'predictors');
152 """
153 self._query(q)
154
155 q = f"""
156 CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.commands (
157 command text
158 ) SERVER server_{self.mindsdb_database}
159 OPTIONS (dbname 'mindsdb', table_name 'commands');
160 """
161 self._query(q)
162
163 def register_predictors(self, model_data_arr):
164 for model_meta in model_data_arr:
165 name = model_meta['name']
166 predict = model_meta['predict']
167 if not isinstance(predict, list):
168 predict = [predict]
169 columns_sql = ','.join(self._to_postgres_table(
170 model_meta['dtype_dict'],
171 predict,
172 list(model_meta['dtype_dict'].keys())
173 ))
174 columns_sql += ',"select_data_query" text'
175 for col in predict:
176 columns_sql += f',"{col}_confidence" float8'
177 if model_meta['dtype_dict'][col] in (dtype.integer, dtype.float):
178 columns_sql += f',"{col}_min" float8'
179 columns_sql += f',"{col}_max" float8'
180 columns_sql += f',"{col}_explain" text'
181
182 self.unregister_predictor(name)
183 q = f"""
184 CREATE FOREIGN TABLE {self.mindsdb_database}.{self._escape_table_name(name)} (
185 {columns_sql}
186 ) SERVER server_{self.mindsdb_database}
187 OPTIONS (dbname 'mindsdb', table_name '{name}');
188 """
189 self._query(q)
190
191 def unregister_predictor(self, name):
192 q = f"""
193 DROP FOREIGN TABLE IF EXISTS {self.mindsdb_database}.{self._escape_table_name(name)};
194 """
195 self._query(q)
196
197 def get_row_count(self, query):
198 q = f"""
199 SELECT COUNT(*) as count
200 FROM ({query}) as query;
201 """
202 result = self._query(q)
203 return result[0]['count']
204
205 def get_tables_list(self):
206 q = """
207 SELECT table_schema, table_name
208 FROM information_schema.tables
209 WHERE table_schema != 'pg_catalog'
210 AND table_schema != 'information_schema'
211 ORDER BY table_schema, table_name
212 """
213 tables_list = self._query(q)
214 tables = [f"{table['table_schema']}.{table['table_name']}" for table in tables_list]
215 return tables
216
217 def get_columns(self, query):
218 q = f"""SELECT * from ({query}) LIMIT 1;"""
219 query_response = self._query(q)
220 if len(query_response) > 0:
221 columns = list(query_response[0].keys())
222 return columns
223 else:
224 return []
225
[end of mindsdb/integrations/postgres/postgres.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mindsdb/integrations/postgres/postgres.py b/mindsdb/integrations/postgres/postgres.py
--- a/mindsdb/integrations/postgres/postgres.py
+++ b/mindsdb/integrations/postgres/postgres.py
@@ -15,12 +15,16 @@
self.database = kwargs.get('database', 'postgres')
def _get_connection(self):
+ additional_args = {}
+ if 'cockroachlabs.cloud' in self.host:
+ additional_args['ssl_context'] = True
return pg8000.connect(
database=self.database,
user=self.user,
password=self.password,
host=self.host,
- port=self.port
+ port=self.port,
+ **additional_args
)
def check_connection(self):
| {"golden_diff": "diff --git a/mindsdb/integrations/postgres/postgres.py b/mindsdb/integrations/postgres/postgres.py\n--- a/mindsdb/integrations/postgres/postgres.py\n+++ b/mindsdb/integrations/postgres/postgres.py\n@@ -15,12 +15,16 @@\n self.database = kwargs.get('database', 'postgres')\n \n def _get_connection(self):\n+ additional_args = {}\n+ if 'cockroachlabs.cloud' in self.host:\n+ additional_args['ssl_context'] = True\n return pg8000.connect(\n database=self.database,\n user=self.user,\n password=self.password,\n host=self.host,\n- port=self.port\n+ port=self.port,\n+ **additional_args\n )\n \n def check_connection(self):\n", "issue": "Connection to CocroachDB is not possible\n\n", "before_files": [{"content": "from contextlib import closing\nimport pg8000\n\nfrom lightwood.api import dtype\nfrom mindsdb.integrations.base import Integration\nfrom mindsdb.utilities.log import log\n\n\nclass PostgreSQLConnectionChecker:\n def __init__(self, **kwargs):\n self.host = kwargs.get('host')\n self.port = kwargs.get('port')\n self.user = kwargs.get('user')\n self.password = kwargs.get('password')\n self.database = kwargs.get('database', 'postgres')\n\n def _get_connection(self):\n return pg8000.connect(\n database=self.database,\n user=self.user,\n password=self.password,\n host=self.host,\n port=self.port\n )\n\n def check_connection(self):\n try:\n con = self._get_connection()\n with closing(con) as con:\n con.run('select 1;')\n connected = True\n except Exception:\n connected = False\n return connected\n\n\nclass PostgreSQL(Integration, PostgreSQLConnectionChecker):\n def __init__(self, config, name, db_info):\n super().__init__(config, name)\n self.user = db_info.get('user')\n self.password = db_info.get('password')\n self.host = db_info.get('host')\n self.port = db_info.get('port')\n self.database = db_info.get('database', 'postgres')\n\n def _to_postgres_table(self, dtype_dict, predicted_cols, columns):\n subtype_map = {\n dtype.integer: ' int8',\n dtype.float: 'float8',\n dtype.binary: 'bool',\n dtype.date: 'date',\n dtype.datetime: 'timestamp',\n dtype.binary: 'text',\n dtype.categorical: 'text',\n dtype.tags: 'text',\n dtype.image: 'text',\n dtype.video: 'text',\n dtype.audio: 'text',\n dtype.short_text: 'text',\n dtype.rich_text: 'text',\n dtype.array: 'text',\n dtype.quantity: 'text',\n dtype.tsarray: 'text',\n 'default': 'text'\n }\n\n column_declaration = []\n for name in columns:\n try:\n col_subtype = dtype_dict[name]\n new_type = subtype_map.get(col_subtype, subtype_map.get('default'))\n column_declaration.append(f' \"{name}\" {new_type} ')\n if name in predicted_cols:\n column_declaration.append(f' \"{name}_original\" {new_type} ')\n except Exception as e:\n log.error(f'Error: can not determine postgres data type for column {name}: {e}')\n\n return column_declaration\n\n def _escape_table_name(self, name):\n return '\"' + name.replace('\"', '\"\"') + '\"'\n\n def _query(self, query):\n con = self._get_connection()\n with closing(con) as con:\n\n cur = con.cursor()\n res = True\n cur.execute(query)\n\n try:\n rows = cur.fetchall()\n keys = [k[0] if isinstance(k[0], str) else k[0].decode('ascii') for k in cur.description]\n res = [dict(zip(keys, row)) for row in rows]\n except Exception:\n pass\n\n con.commit()\n\n return res\n\n def setup(self):\n user = f\"{self.config['api']['mysql']['user']}_{self.name}\"\n password = self.config['api']['mysql']['password']\n host = self.config['api']['mysql']['host']\n port = self.config['api']['mysql']['port']\n\n try:\n self._query('''\n 
DO $$\n begin\n if not exists (SELECT 1 FROM pg_extension where extname = 'mysql_fdw') then\n CREATE EXTENSION mysql_fdw;\n end if;\n END\n $$;\n ''')\n except Exception:\n print('Error: cant find or activate mysql_fdw extension for PostgreSQL.')\n\n self._query(f'DROP SCHEMA IF EXISTS {self.mindsdb_database} CASCADE')\n\n self._query(f\"DROP USER MAPPING IF EXISTS FOR {self.user} SERVER server_{self.mindsdb_database}\")\n\n self._query(f'DROP SERVER IF EXISTS server_{self.mindsdb_database} CASCADE')\n\n self._query(f'''\n CREATE SERVER server_{self.mindsdb_database}\n FOREIGN DATA WRAPPER mysql_fdw\n OPTIONS (host '{host}', port '{port}');\n ''')\n\n self._query(f'''\n CREATE USER MAPPING FOR {self.user}\n SERVER server_{self.mindsdb_database}\n OPTIONS (username '{user}', password '{password}');\n ''')\n\n self._query(f'CREATE SCHEMA {self.mindsdb_database}')\n\n q = f\"\"\"\n CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.predictors (\n name text,\n status text,\n accuracy text,\n predict text,\n select_data_query text,\n training_options text\n )\n SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name 'predictors');\n \"\"\"\n self._query(q)\n\n q = f\"\"\"\n CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.commands (\n command text\n ) SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name 'commands');\n \"\"\"\n self._query(q)\n\n def register_predictors(self, model_data_arr):\n for model_meta in model_data_arr:\n name = model_meta['name']\n predict = model_meta['predict']\n if not isinstance(predict, list):\n predict = [predict]\n columns_sql = ','.join(self._to_postgres_table(\n model_meta['dtype_dict'],\n predict,\n list(model_meta['dtype_dict'].keys())\n ))\n columns_sql += ',\"select_data_query\" text'\n for col in predict:\n columns_sql += f',\"{col}_confidence\" float8'\n if model_meta['dtype_dict'][col] in (dtype.integer, dtype.float):\n columns_sql += f',\"{col}_min\" float8'\n columns_sql += f',\"{col}_max\" float8'\n columns_sql += f',\"{col}_explain\" text'\n\n self.unregister_predictor(name)\n q = f\"\"\"\n CREATE FOREIGN TABLE {self.mindsdb_database}.{self._escape_table_name(name)} (\n {columns_sql}\n ) SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name '{name}');\n \"\"\"\n self._query(q)\n\n def unregister_predictor(self, name):\n q = f\"\"\"\n DROP FOREIGN TABLE IF EXISTS {self.mindsdb_database}.{self._escape_table_name(name)};\n \"\"\"\n self._query(q)\n\n def get_row_count(self, query):\n q = f\"\"\"\n SELECT COUNT(*) as count\n FROM ({query}) as query;\n \"\"\"\n result = self._query(q)\n return result[0]['count']\n\n def get_tables_list(self):\n q = \"\"\"\n SELECT table_schema, table_name\n FROM information_schema.tables\n WHERE table_schema != 'pg_catalog'\n AND table_schema != 'information_schema'\n ORDER BY table_schema, table_name\n \"\"\"\n tables_list = self._query(q)\n tables = [f\"{table['table_schema']}.{table['table_name']}\" for table in tables_list]\n return tables\n\n def get_columns(self, query):\n q = f\"\"\"SELECT * from ({query}) LIMIT 1;\"\"\"\n query_response = self._query(q)\n if len(query_response) > 0:\n columns = list(query_response[0].keys())\n return columns\n else:\n return []\n", "path": "mindsdb/integrations/postgres/postgres.py"}]} | 2,756 | 176 |
gh_patches_debug_8280 | rasdani/github-patches | git_diff | spyder-ide__spyder-11533 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Crash when trying to run a Python file in a project from the explorer pane - Spyder 4.0.1
<!--- **PLEASE READ:** When submitting here, please ensure you've completed the following checklist and checked the boxes to confirm. Issue reports without it may be closed. Thanks! --->
### Issue Report Checklist
* [y] Searched the [issues page](https://github.com/spyder-ide/spyder/issues?q=is%3Aissue) for similar reports
* [ y] Read the relevant sections of the [Spyder Troubleshooting Guide](https://github.com/spyder-ide/spyder/wiki/Troubleshooting-Guide-and-FAQ) and followed its advice
* [ y] Reproduced the issue after updating with ``conda update spyder`` (or ``pip``, if not using Anaconda)
* [n/a ] Could not reproduce inside ``jupyter qtconsole`` (if console-related)
* [ y] Tried basic troubleshooting (if a bug/error)
* [y ] Restarted Spyder
* [ n] Reset preferences with ``spyder --reset`` I reinstalled everything from scratch using `Anaconda3-2019.10-Windows-x86_64`
* [y ] Reinstalled the latest version of [Anaconda](https://www.anaconda.com/download/) updated following: https://docs.anaconda.com/anaconda/install/update-version/
* [n/a] Tried the other applicable steps from the Troubleshooting Guide
* [y ] Completed the **Problem Description**, **Steps to Reproduce** and **Version** sections below
## Problem Description
Crash when trying to run a Python file in a project
Same as in: https://github.com/spyder-ide/spyder/issues/10590
### What steps reproduce the problem?
1. Right-click on a Python file in the left pane
2. Choose "run" from the context menu
### What is the expected output? What do you see instead?
The file should execute, but instead I get an error message.
### Paste Traceback/Error Below (if applicable)
<!--- Copy from error dialog or View > Panes > Internal Console --->
```python-traceback
File "C:\Users\...\AppData\Local\Continuum\anaconda3\envs\py37\lib\site-packages\spyder\plugins\explorer\plugin.py", line 100, in <lambda>
False, True))
TypeError: run_script() missing 1 required positional argument: 'console_namespace'
```
## Versions
<!--- You can get this information from Help > About Spyder...
or (if Spyder won't launch) the "conda list" command
from the Anaconda Prompt/Terminal/command line. --->
* Spyder version: 4.0.1
* Python version: 3.7.6
* Qt version: 5.12.5
* PyQt version: 5.12.3
* Operating System name/version: Windows 10
__N.B. In practice I am using a py37 (python 3.7.6) virtual environment selected from the Anaconda Navigator, and with Spyder launched from there.__


### Dependencies
<!--- Please go to the menu entry Help > Dependencies,
press the Copy to clipboard button and paste below --->
```
atomicwrites >=1.2.0 : 1.3.0 (OK)
chardet >=2.0.0 : 3.0.4 (OK)
cloudpickle >=0.5.0 : 1.2.2 (OK)
diff_match_patch >=20181111 : 20181111 (OK)
intervaltree : None (OK)
IPython >=4.0 : 7.12.0 (OK)
jedi =0.14.1 : 0.14.1 (OK)
nbconvert >=4.0 : 5.6.1 (OK)
numpydoc >=0.6.0 : 0.9.2 (OK)
pexpect >=4.4.0 : 4.8.0 (OK)
pickleshare >=0.4 : 0.7.5 (OK)
psutil >=0.3 : 5.6.7 (OK)
pygments >=2.0 : 2.5.2 (OK)
pylint >=0.25 : 2.4.4 (OK)
pyls >=0.31.2;<0.32.0 : 0.31.7 (OK)
zmq >=17 : 18.1.1 (OK)
qdarkstyle >=2.7 : 2.8 (OK)
qtawesome >=0.5.7 : 0.6.1 (OK)
qtconsole >=4.6.0 : 4.6.0 (OK)
qtpy >=1.5.0 : 1.9.0 (OK)
rtree >=0.8.3 : 0.9.4 (OK)
sphinx >=0.6.6 : 2.4.0 (OK)
spyder_kernels >=1.8.1;<2.0.0: 1.8.1 (OK)
watchdog : None (OK)
cython >=0.21 : None (NOK)
matplotlib >=2.0.0 : 3.1.3 (OK)
numpy >=1.7 : 1.18.1 (OK)
pandas >=0.13.1 : 1.0.1 (OK)
scipy >=0.17.0 : 1.4.1 (OK)
sympy >=0.7.3 : None (NOK)
```
</issue>
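The traceback reduces to an arity mismatch: the explorer plugin's signal handler calls `run_script` with one positional argument fewer than the console plugin now requires. A minimal sketch of that mechanism follows; the parameter names are illustrative guesses, not Spyder's exact signature.

```python
# Minimal sketch of the failure mode; parameter names are guesses.
def run_script(filename, wdir, args, debug, post_mortem,
               current_client, clear_variables, console_namespace):
    ...

# The explorer plugin's lambda supplies only seven values:
run_script("script.py", ".", "", False, False, False, True)
# TypeError: run_script() missing 1 required positional argument: 'console_namespace'
```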
<code>
[start of spyder/plugins/explorer/plugin.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright © Spyder Project Contributors
4 # Licensed under the terms of the MIT License
5 # (see spyder/__init__.py for details)
6
7 """Files and Directories Explorer Plugin"""
8
9 # pylint: disable=C0103
10 # pylint: disable=R0903
11 # pylint: disable=R0911
12 # pylint: disable=R0201
13
14 # Standard library imports
15 import os.path as osp
16
17 # Third party imports
18 from qtpy.QtWidgets import QVBoxLayout
19
20 # Local imports
21 from spyder.config.base import _
22 from spyder.api.plugins import SpyderPluginWidget
23 from spyder.plugins.explorer.widgets.explorer import ExplorerWidget
24 from spyder.plugins.explorer.confpage import ExplorerConfigPage
25
26
27 class Explorer(SpyderPluginWidget):
28 """File and Directories Explorer DockWidget."""
29
30 CONF_SECTION = 'explorer'
31 CONFIGWIDGET_CLASS = ExplorerConfigPage
32 CONF_FILE = False
33
34 def __init__(self, parent=None):
35 """Initialization."""
36 SpyderPluginWidget.__init__(self, parent)
37
38 visible_columns = self.get_option('visible_columns',
39 default=[0, 3]) # Name & Last Mod
40 self.fileexplorer = ExplorerWidget(
41 self,
42 name_filters=self.get_option('name_filters'),
43 show_all=self.get_option('show_all'),
44 show_icontext=self.get_option('show_icontext'),
45 options_button=self.options_button,
46 single_click_to_open=self.get_option('single_click_to_open'),
47 file_associations=self.get_option('file_associations',
48 default={}),
49 visible_columns=visible_columns,
50 )
51 layout = QVBoxLayout()
52 layout.addWidget(self.fileexplorer)
53 self.setLayout(layout)
54 self.fileexplorer.sig_option_changed.connect(
55 self._update_config_options)
56
57 def _update_config_options(self, option, value):
58 """Update the config options of the explorer to make them permanent."""
59 self.set_option(option, value)
60
61 #------ SpyderPluginWidget API ---------------------------------------------
62 def get_plugin_title(self):
63 """Return widget title"""
64 return _("Files")
65
66 def get_focus_widget(self):
67 """
68 Return the widget to give focus to when
69 this plugin's dockwidget is raised on top-level
70 """
71 return self.fileexplorer.treewidget
72
73 def get_plugin_actions(self):
74 """Return a list of actions related to plugin"""
75 return self.fileexplorer.treewidget.common_actions
76
77 def register_plugin(self):
78 """Register plugin in Spyder's main window"""
79 ipyconsole = self.main.ipyconsole
80 treewidget = self.fileexplorer.treewidget
81
82 self.add_dockwidget()
83 self.fileexplorer.sig_open_file.connect(self.main.open_file)
84 self.register_widget_shortcuts(treewidget)
85
86 treewidget.sig_edit.connect(self.main.editor.load)
87 treewidget.sig_removed.connect(self.main.editor.removed)
88 treewidget.sig_removed_tree.connect(self.main.editor.removed_tree)
89 treewidget.sig_renamed.connect(self.main.editor.renamed)
90 treewidget.sig_renamed_tree.connect(self.main.editor.renamed_tree)
91 treewidget.sig_create_module.connect(self.main.editor.new)
92 treewidget.sig_new_file.connect(lambda t: self.main.editor.new(text=t))
93 treewidget.sig_open_interpreter.connect(
94 ipyconsole.create_client_from_path)
95 treewidget.redirect_stdio.connect(
96 self.main.redirect_internalshell_stdio)
97 treewidget.sig_run.connect(
98 lambda fname:
99 ipyconsole.run_script(fname, osp.dirname(fname), '', False, False,
100 False, True))
101 treewidget.sig_open_dir.connect(
102 lambda dirname:
103 self.main.workingdirectory.chdir(dirname,
104 refresh_explorer=False,
105 refresh_console=True))
106
107 self.main.editor.open_dir.connect(self.chdir)
108
109 # Signal "set_explorer_cwd(QString)" will refresh only the
110 # contents of path passed by the signal in explorer:
111 self.main.workingdirectory.set_explorer_cwd.connect(
112 lambda directory: self.refresh_plugin(new_path=directory,
113 force_current=True))
114
115 def refresh_plugin(self, new_path=None, force_current=True):
116 """Refresh explorer widget"""
117 self.fileexplorer.treewidget.update_history(new_path)
118 self.fileexplorer.treewidget.refresh(new_path,
119 force_current=force_current)
120
121 def on_first_registration(self):
122 """Action to be performed on first plugin registration."""
123 # TODO: Remove this for spyder 5
124 # self.tabify(self.main.projects)
125 self.tabify(self.main.variableexplorer)
126
127 def apply_plugin_settings(self, options):
128 """Handle preference options update."""
129 method_map = {
130 'file_associations':
131 self.fileexplorer.treewidget.set_file_associations,
132 'single_click_to_open':
133 self.fileexplorer.treewidget.set_single_click_to_open,
134 'name_filters':
135 self.fileexplorer.treewidget.set_name_filters,
136 'show_all':
137 self.fileexplorer.treewidget.toggle_all,
138 'show_icontext':
139 self.fileexplorer.toggle_icontext,
140 }
141 for option in options:
142 if option in method_map:
143 value = self.get_option(option)
144 method = method_map.get(option)
145 method(value)
146 self.fileexplorer.treewidget.update_common_actions()
147
148 #------ Public API ---------------------------------------------------------
149 def chdir(self, directory):
150 """Set working directory"""
151 self.fileexplorer.treewidget.chdir(directory)
152
[end of spyder/plugins/explorer/plugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/spyder/plugins/explorer/plugin.py b/spyder/plugins/explorer/plugin.py
--- a/spyder/plugins/explorer/plugin.py
+++ b/spyder/plugins/explorer/plugin.py
@@ -97,7 +97,7 @@
treewidget.sig_run.connect(
lambda fname:
ipyconsole.run_script(fname, osp.dirname(fname), '', False, False,
- False, True))
+ False, True, False))
treewidget.sig_open_dir.connect(
lambda dirname:
self.main.workingdirectory.chdir(dirname,
| {"golden_diff": "diff --git a/spyder/plugins/explorer/plugin.py b/spyder/plugins/explorer/plugin.py\n--- a/spyder/plugins/explorer/plugin.py\n+++ b/spyder/plugins/explorer/plugin.py\n@@ -97,7 +97,7 @@\n treewidget.sig_run.connect(\r\n lambda fname:\r\n ipyconsole.run_script(fname, osp.dirname(fname), '', False, False,\r\n- False, True))\r\n+ False, True, False))\r\n treewidget.sig_open_dir.connect(\r\n lambda dirname:\r\n self.main.workingdirectory.chdir(dirname,\n", "issue": "Crash when trying to run a Python file in a project from the explorer pane - Spyder 4.0.1\n<!--- **PLEASE READ:** When submitting here, please ensure you've completed the following checklist and checked the boxes to confirm. Issue reports without it may be closed. Thanks! --->\r\n\r\n### Issue Report Checklist\r\n\r\n* [y] Searched the [issues page](https://github.com/spyder-ide/spyder/issues?q=is%3Aissue) for similar reports\r\n* [ y] Read the relevant sections of the [Spyder Troubleshooting Guide](https://github.com/spyder-ide/spyder/wiki/Troubleshooting-Guide-and-FAQ) and followed its advice\r\n* [ y] Reproduced the issue after updating with ``conda update spyder`` (or ``pip``, if not using Anaconda)\r\n* [n/a ] Could not reproduce inside ``jupyter qtconsole`` (if console-related)\r\n* [ y] Tried basic troubleshooting (if a bug/error)\r\n * [y ] Restarted Spyder\r\n * [ n] Reset preferences with ``spyder --reset`` I reinstalled everything from scratch using `Anaconda3-2019.10-Windows-x86_64`\r\n * [y ] Reinstalled the latest version of [Anaconda](https://www.anaconda.com/download/) updated following: https://docs.anaconda.com/anaconda/install/update-version/ \r\n * [n/a] Tried the other applicable steps from the Troubleshooting Guide\r\n* [y ] Completed the **Problem Description**, **Steps to Reproduce** and **Version** sections below\r\n\r\n\r\n## Problem Description\r\nCrash when trying to run a Python file in a project\r\nSame as in: https://github.com/spyder-ide/spyder/issues/10590\r\n\r\n\r\n### What steps reproduce the problem?\r\n1. right click on a python file in the left pane\r\n2. choose \"run\" from the context menu\r\n\r\n### What is the expected output? What do you see instead?\r\nfile should execute but instead I get an error message\r\n\r\n\r\n### Paste Traceback/Error Below (if applicable)\r\n<!--- Copy from error dialog or View > Panes > Internal Console --->\r\n\r\n```python-traceback\r\n\r\n File \"C:\\Users\\...\\AppData\\Local\\Continuum\\anaconda3\\envs\\py37\\lib\\site-packages\\spyder\\plugins\\explorer\\plugin.py\", line 100, in <lambda>\r\n False, True))\r\nTypeError: run_script() missing 1 required positional argument: 'console_namespace'\r\n\r\n```\r\n\r\n## Versions\r\n<!--- You can get this information from Help > About Spyder...\r\nor (if Spyder won't launch) the \"conda list\" command\r\nfrom the Anaconda Prompt/Terminal/command line. --->\r\n\r\n* Spyder version: 4.0.1\r\n* Python version: 3.7.6 \r\n* Qt version: 5.12.5\r\n* PyQt version: 5.12.3 \r\n* Operating System name/version: Windows 10\r\n\r\n__N.B. 
In practice I am using a py37 (python 3.7.6) virtual environment selected from the Anaconda Navigator, and with Spyder launched from there.__\r\n\r\n\r\n\r\n\r\n\r\n### Dependencies\r\n<!--- Please go to the menu entry Help > Dependencies,\r\npress the Copy to clipboard button and paste below --->\r\n\r\n```\r\natomicwrites >=1.2.0 : 1.3.0 (OK)\r\nchardet >=2.0.0 : 3.0.4 (OK)\r\ncloudpickle >=0.5.0 : 1.2.2 (OK)\r\ndiff_match_patch >=20181111 : 20181111 (OK)\r\nintervaltree : None (OK)\r\nIPython >=4.0 : 7.12.0 (OK)\r\njedi =0.14.1 : 0.14.1 (OK)\r\nnbconvert >=4.0 : 5.6.1 (OK)\r\nnumpydoc >=0.6.0 : 0.9.2 (OK)\r\npexpect >=4.4.0 : 4.8.0 (OK)\r\npickleshare >=0.4 : 0.7.5 (OK)\r\npsutil >=0.3 : 5.6.7 (OK)\r\npygments >=2.0 : 2.5.2 (OK)\r\npylint >=0.25 : 2.4.4 (OK)\r\npyls >=0.31.2;<0.32.0 : 0.31.7 (OK)\r\nzmq >=17 : 18.1.1 (OK)\r\nqdarkstyle >=2.7 : 2.8 (OK)\r\nqtawesome >=0.5.7 : 0.6.1 (OK)\r\nqtconsole >=4.6.0 : 4.6.0 (OK)\r\nqtpy >=1.5.0 : 1.9.0 (OK)\r\nrtree >=0.8.3 : 0.9.4 (OK)\r\nsphinx >=0.6.6 : 2.4.0 (OK)\r\nspyder_kernels >=1.8.1;<2.0.0: 1.8.1 (OK)\r\nwatchdog : None (OK)\r\ncython >=0.21 : None (NOK)\r\nmatplotlib >=2.0.0 : 3.1.3 (OK)\r\nnumpy >=1.7 : 1.18.1 (OK)\r\npandas >=0.13.1 : 1.0.1 (OK)\r\nscipy >=0.17.0 : 1.4.1 (OK)\r\nsympy >=0.7.3 : None (NOK)\r\n\r\n```\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\r\n#\r\n# Copyright \u00a9 Spyder Project Contributors\r\n# Licensed under the terms of the MIT License\r\n# (see spyder/__init__.py for details)\r\n\r\n\"\"\"Files and Directories Explorer Plugin\"\"\"\r\n\r\n# pylint: disable=C0103\r\n# pylint: disable=R0903\r\n# pylint: disable=R0911\r\n# pylint: disable=R0201\r\n\r\n# Standard library imports\r\nimport os.path as osp\r\n\r\n# Third party imports\r\nfrom qtpy.QtWidgets import QVBoxLayout\r\n\r\n# Local imports\r\nfrom spyder.config.base import _\r\nfrom spyder.api.plugins import SpyderPluginWidget\r\nfrom spyder.plugins.explorer.widgets.explorer import ExplorerWidget\r\nfrom spyder.plugins.explorer.confpage import ExplorerConfigPage\r\n\r\n\r\nclass Explorer(SpyderPluginWidget):\r\n \"\"\"File and Directories Explorer DockWidget.\"\"\"\r\n\r\n CONF_SECTION = 'explorer'\r\n CONFIGWIDGET_CLASS = ExplorerConfigPage\r\n CONF_FILE = False\r\n\r\n def __init__(self, parent=None):\r\n \"\"\"Initialization.\"\"\"\r\n SpyderPluginWidget.__init__(self, parent)\r\n\r\n visible_columns = self.get_option('visible_columns',\r\n default=[0, 3]) # Name & Last Mod\r\n self.fileexplorer = ExplorerWidget(\r\n self,\r\n name_filters=self.get_option('name_filters'),\r\n show_all=self.get_option('show_all'),\r\n show_icontext=self.get_option('show_icontext'),\r\n options_button=self.options_button,\r\n single_click_to_open=self.get_option('single_click_to_open'),\r\n file_associations=self.get_option('file_associations',\r\n default={}),\r\n visible_columns=visible_columns,\r\n )\r\n layout = QVBoxLayout()\r\n layout.addWidget(self.fileexplorer)\r\n self.setLayout(layout)\r\n self.fileexplorer.sig_option_changed.connect(\r\n self._update_config_options)\r\n\r\n def _update_config_options(self, option, value):\r\n \"\"\"Update the config options of the explorer to make them permanent.\"\"\"\r\n self.set_option(option, value)\r\n\r\n #------ SpyderPluginWidget API ---------------------------------------------\r\n def get_plugin_title(self):\r\n \"\"\"Return widget title\"\"\"\r\n return _(\"Files\")\r\n \r\n def get_focus_widget(self):\r\n \"\"\"\r\n Return the widget to give focus to when\r\n this plugin's dockwidget is raised on 
top-level\r\n \"\"\"\r\n return self.fileexplorer.treewidget\r\n \r\n def get_plugin_actions(self):\r\n \"\"\"Return a list of actions related to plugin\"\"\"\r\n return self.fileexplorer.treewidget.common_actions\r\n \r\n def register_plugin(self):\r\n \"\"\"Register plugin in Spyder's main window\"\"\"\r\n ipyconsole = self.main.ipyconsole\r\n treewidget = self.fileexplorer.treewidget\r\n\r\n self.add_dockwidget()\r\n self.fileexplorer.sig_open_file.connect(self.main.open_file)\r\n self.register_widget_shortcuts(treewidget)\r\n\r\n treewidget.sig_edit.connect(self.main.editor.load)\r\n treewidget.sig_removed.connect(self.main.editor.removed)\r\n treewidget.sig_removed_tree.connect(self.main.editor.removed_tree)\r\n treewidget.sig_renamed.connect(self.main.editor.renamed)\r\n treewidget.sig_renamed_tree.connect(self.main.editor.renamed_tree)\r\n treewidget.sig_create_module.connect(self.main.editor.new)\r\n treewidget.sig_new_file.connect(lambda t: self.main.editor.new(text=t))\r\n treewidget.sig_open_interpreter.connect(\r\n ipyconsole.create_client_from_path)\r\n treewidget.redirect_stdio.connect(\r\n self.main.redirect_internalshell_stdio)\r\n treewidget.sig_run.connect(\r\n lambda fname:\r\n ipyconsole.run_script(fname, osp.dirname(fname), '', False, False,\r\n False, True))\r\n treewidget.sig_open_dir.connect(\r\n lambda dirname:\r\n self.main.workingdirectory.chdir(dirname,\r\n refresh_explorer=False,\r\n refresh_console=True))\r\n\r\n self.main.editor.open_dir.connect(self.chdir)\r\n\r\n # Signal \"set_explorer_cwd(QString)\" will refresh only the\r\n # contents of path passed by the signal in explorer:\r\n self.main.workingdirectory.set_explorer_cwd.connect(\r\n lambda directory: self.refresh_plugin(new_path=directory,\r\n force_current=True))\r\n\r\n def refresh_plugin(self, new_path=None, force_current=True):\r\n \"\"\"Refresh explorer widget\"\"\"\r\n self.fileexplorer.treewidget.update_history(new_path)\r\n self.fileexplorer.treewidget.refresh(new_path,\r\n force_current=force_current)\r\n\r\n def on_first_registration(self):\r\n \"\"\"Action to be performed on first plugin registration.\"\"\"\r\n # TODO: Remove this for spyder 5\r\n # self.tabify(self.main.projects)\r\n self.tabify(self.main.variableexplorer)\r\n\r\n def apply_plugin_settings(self, options):\r\n \"\"\"Handle preference options update.\"\"\"\r\n method_map = {\r\n 'file_associations':\r\n self.fileexplorer.treewidget.set_file_associations,\r\n 'single_click_to_open':\r\n self.fileexplorer.treewidget.set_single_click_to_open,\r\n 'name_filters':\r\n self.fileexplorer.treewidget.set_name_filters,\r\n 'show_all':\r\n self.fileexplorer.treewidget.toggle_all,\r\n 'show_icontext':\r\n self.fileexplorer.toggle_icontext,\r\n }\r\n for option in options:\r\n if option in method_map:\r\n value = self.get_option(option)\r\n method = method_map.get(option)\r\n method(value)\r\n self.fileexplorer.treewidget.update_common_actions()\r\n\r\n #------ Public API ---------------------------------------------------------\r\n def chdir(self, directory):\r\n \"\"\"Set working directory\"\"\"\r\n self.fileexplorer.treewidget.chdir(directory)\r\n", "path": "spyder/plugins/explorer/plugin.py"}]} | 3,510 | 121 |
gh_patches_debug_1401 | rasdani/github-patches | git_diff | ktbyers__netmiko-1073 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Huawei vrpv8 commit func issue
After committing changes on Huawei VRPv8, the CLI on the device looks like this:
```
[~HUAWEI]dot1x enable
[*HUAWEI]snmp-agent sys-info version all
Warning: SNMPv1/SNMPv2c is not secure, and SNMPv3 in either authentication or privacy mode is recommended.
[*HUAWEI]commit
[~HUAWEI]
```
with following code:
```
from netmiko import Netmiko
device = {
"host": "10.0.0.3",
"username": "yyy",
"password": "xxx",
"device_type": "huawei_vrpv8",
"session_log": "log_file2.txt"
}
config_commands = ['dot1x enable','snmp-agent sys-info version all']
net_connect = Netmiko(**device)
output = net_connect.send_config_set(config_commands,exit_config_mode=False)
output += net_connect.commit()
print(output)
```
I got this error:
```
Traceback (most recent call last):
File "/home/kafooo/PycharmProjects/nornir_scripts/venv/huawei_netmiko_test.py", line 18, in <module>
output2 = net_connect.commit()
File "/home/kafooo/PycharmProjects/nornir_scripts/venv/lib/python3.6/site-packages/netmiko/huawei/huawei_ssh.py", line 114, in commit
strip_command=False, delay_factor=delay_factor)
File "/home/kafooo/PycharmProjects/nornir_scripts/venv/lib/python3.6/site-packages/netmiko/base_connection.py", line 1206, in send_command_expect
return self.send_command(*args, **kwargs)
File "/home/kafooo/PycharmProjects/nornir_scripts/venv/lib/python3.6/site-packages/netmiko/base_connection.py", line 1188, in send_command
search_pattern))
OSError: Search pattern never detected in send_command_expect: \[\*HUAWEI\]
```
It looks like Netmiko expects the `[*hostname]` prompt after `commit`, but the device actually returns `[~hostname]`.
</issue>
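In other words, after `commit` the device settles back into the `[~HUAWEI]` state, so a search pattern pinned to the pre-command `[*HUAWEI]` prompt never matches. A hedged workaround sketch, continuing the reproduction script above and using Netmiko's `expect_string` argument to accept any `]`-terminated prompt:

```python
# Hedged workaround sketch: wait for any ']'-terminated prompt rather than the
# exact [*HUAWEI] prompt captured before the command was sent.
output = net_connect.send_command(
    "commit",
    strip_prompt=False,
    strip_command=False,
    expect_string=r"]",  # matches both [*HUAWEI] and [~HUAWEI]
)
```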
<code>
[start of netmiko/huawei/huawei_ssh.py]
1 from __future__ import print_function
2 from __future__ import unicode_literals
3 import time
4 import re
5 from netmiko.cisco_base_connection import CiscoSSHConnection
6 from netmiko import log
7
8
9 class HuaweiSSH(CiscoSSHConnection):
10 def session_preparation(self):
11 """Prepare the session after the connection has been established."""
12 self._test_channel_read()
13 self.set_base_prompt()
14 self.disable_paging(command="screen-length 0 temporary")
15 # Clear the read buffer
16 time.sleep(0.3 * self.global_delay_factor)
17 self.clear_buffer()
18
19 def config_mode(self, config_command="system-view"):
20 """Enter configuration mode."""
21 return super(HuaweiSSH, self).config_mode(config_command=config_command)
22
23 def exit_config_mode(self, exit_config="return", pattern=r">"):
24 """Exit configuration mode."""
25 return super(HuaweiSSH, self).exit_config_mode(
26 exit_config=exit_config, pattern=pattern
27 )
28
29 def check_config_mode(self, check_string="]"):
30 """Checks whether in configuration mode. Returns a boolean."""
31 return super(HuaweiSSH, self).check_config_mode(check_string=check_string)
32
33 def check_enable_mode(self, *args, **kwargs):
34 """Huawei has no enable mode."""
35 pass
36
37 def enable(self, *args, **kwargs):
38 """Huawei has no enable mode."""
39 return ""
40
41 def exit_enable_mode(self, *args, **kwargs):
42 """Huawei has no enable mode."""
43 return ""
44
45 def set_base_prompt(
46 self, pri_prompt_terminator=">", alt_prompt_terminator="]", delay_factor=1
47 ):
48 """
49 Sets self.base_prompt
50
51 Used as delimiter for stripping of trailing prompt in output.
52
53 Should be set to something that is general and applies in multiple contexts. For Comware
54 this will be the router prompt with < > or [ ] stripped off.
55
56 This will be set on logging in, but not when entering system-view
57 """
58 log.debug("In set_base_prompt")
59 delay_factor = self.select_delay_factor(delay_factor)
60 self.clear_buffer()
61 self.write_channel(self.RETURN)
62 time.sleep(0.5 * delay_factor)
63
64 prompt = self.read_channel()
65 prompt = self.normalize_linefeeds(prompt)
66
67 # If multiple lines in the output take the last line
68 prompt = prompt.split(self.RESPONSE_RETURN)[-1]
69 prompt = prompt.strip()
70
71 # Check that ends with a valid terminator character
72 if not prompt[-1] in (pri_prompt_terminator, alt_prompt_terminator):
73 raise ValueError("Router prompt not found: {0}".format(prompt))
74
75 # Strip off any leading HRP_. characters for USGv5 HA
76 prompt = re.sub(r"^HRP_.", "", prompt, flags=re.M)
77
78 # Strip off leading and trailing terminator
79 prompt = prompt[1:-1]
80 prompt = prompt.strip()
81 self.base_prompt = prompt
82 log.debug("prompt: {0}".format(self.base_prompt))
83
84 return self.base_prompt
85
86 def save_config(self, cmd="save", confirm=False, confirm_response=""):
87 """ Save Config for HuaweiSSH"""
88 return super(HuaweiSSH, self).save_config(cmd=cmd, confirm=confirm)
89
90
91 class HuaweiVrpv8SSH(HuaweiSSH):
92 def commit(self, comment="", delay_factor=1):
93 """
94 Commit the candidate configuration.
95
96 Commit the entered configuration. Raise an error and return the failure
97 if the commit fails.
98
99 default:
100 command_string = commit
101 comment:
102 command_string = commit comment <comment>
103
104 """
105 delay_factor = self.select_delay_factor(delay_factor)
106 error_marker = "Failed to generate committed config"
107 command_string = "commit"
108
109 if comment:
110 command_string += ' comment "{}"'.format(comment)
111
112 output = self.config_mode()
113 output += self.send_command_expect(
114 command_string,
115 strip_prompt=False,
116 strip_command=False,
117 delay_factor=delay_factor,
118 )
119 output += self.exit_config_mode()
120
121 if error_marker in output:
122 raise ValueError(
123 "Commit failed with following errors:\n\n{}".format(output)
124 )
125 return output
126
127 def save_config(self, cmd="", confirm=True, confirm_response=""):
128 """Not Implemented"""
129 raise NotImplementedError
130
[end of netmiko/huawei/huawei_ssh.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/netmiko/huawei/huawei_ssh.py b/netmiko/huawei/huawei_ssh.py
--- a/netmiko/huawei/huawei_ssh.py
+++ b/netmiko/huawei/huawei_ssh.py
@@ -115,6 +115,7 @@
strip_prompt=False,
strip_command=False,
delay_factor=delay_factor,
+ expect_string=r"]",
)
output += self.exit_config_mode()
| {"golden_diff": "diff --git a/netmiko/huawei/huawei_ssh.py b/netmiko/huawei/huawei_ssh.py\n--- a/netmiko/huawei/huawei_ssh.py\n+++ b/netmiko/huawei/huawei_ssh.py\n@@ -115,6 +115,7 @@\n strip_prompt=False,\n strip_command=False,\n delay_factor=delay_factor,\n+ expect_string=r\"]\",\n )\n output += self.exit_config_mode()\n", "issue": "Huawei vrpv8 commit func issue\nAfter commiting changes on huawei vrpv8, cli on devices look like this: \r\n```\r\n[~HUAWEI]dot1x enable\r\n[*HUAWEI]snmp-agent sys-info version all\r\nWarning: SNMPv1/SNMPv2c is not secure, and SNMPv3 in either authentication or privacy mode is recommended.\r\n[*HUAWEI]commit\r\n[~HUAWEI]\r\n```\r\n\r\n\r\nwith following code: \r\n\r\n```\r\nfrom netmiko import Netmiko\r\n\r\ndevice = {\r\n \"host\": \"10.0.0.3\",\r\n \"username\": \"yyy\",\r\n \"password\": \"xxx\",\r\n \"device_type\": \"huawei_vrpv8\",\r\n \"session_log\": \"log_file2.txt\"\r\n}\r\nconfig_commands = ['dot1x enable','snmp-agent sys-info version all']\r\nnet_connect = Netmiko(**device)\r\n\r\noutput = net_connect.send_config_set(config_commands,exit_config_mode=False)\r\noutput += net_connect.commit()\r\nprint(output)\r\n```\r\n\r\ni got this error: \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/kafooo/PycharmProjects/nornir_scripts/venv/huawei_netmiko_test.py\", line 18, in <module>\r\n output2 = net_connect.commit()\r\n File \"/home/kafooo/PycharmProjects/nornir_scripts/venv/lib/python3.6/site-packages/netmiko/huawei/huawei_ssh.py\", line 114, in commit\r\n strip_command=False, delay_factor=delay_factor)\r\n File \"/home/kafooo/PycharmProjects/nornir_scripts/venv/lib/python3.6/site-packages/netmiko/base_connection.py\", line 1206, in send_command_expect\r\n return self.send_command(*args, **kwargs)\r\n File \"/home/kafooo/PycharmProjects/nornir_scripts/venv/lib/python3.6/site-packages/netmiko/base_connection.py\", line 1188, in send_command\r\n search_pattern))\r\nOSError: Search pattern never detected in send_command_expect: \\[\\*HUAWEI\\]\r\n```\r\n\r\n\r\nlooks like netmiko is expecting [*hostname] after commit, but in reality there is [~hostname] after commit\n", "before_files": [{"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\nimport time\nimport re\nfrom netmiko.cisco_base_connection import CiscoSSHConnection\nfrom netmiko import log\n\n\nclass HuaweiSSH(CiscoSSHConnection):\n def session_preparation(self):\n \"\"\"Prepare the session after the connection has been established.\"\"\"\n self._test_channel_read()\n self.set_base_prompt()\n self.disable_paging(command=\"screen-length 0 temporary\")\n # Clear the read buffer\n time.sleep(0.3 * self.global_delay_factor)\n self.clear_buffer()\n\n def config_mode(self, config_command=\"system-view\"):\n \"\"\"Enter configuration mode.\"\"\"\n return super(HuaweiSSH, self).config_mode(config_command=config_command)\n\n def exit_config_mode(self, exit_config=\"return\", pattern=r\">\"):\n \"\"\"Exit configuration mode.\"\"\"\n return super(HuaweiSSH, self).exit_config_mode(\n exit_config=exit_config, pattern=pattern\n )\n\n def check_config_mode(self, check_string=\"]\"):\n \"\"\"Checks whether in configuration mode. 
Returns a boolean.\"\"\"\n return super(HuaweiSSH, self).check_config_mode(check_string=check_string)\n\n def check_enable_mode(self, *args, **kwargs):\n \"\"\"Huawei has no enable mode.\"\"\"\n pass\n\n def enable(self, *args, **kwargs):\n \"\"\"Huawei has no enable mode.\"\"\"\n return \"\"\n\n def exit_enable_mode(self, *args, **kwargs):\n \"\"\"Huawei has no enable mode.\"\"\"\n return \"\"\n\n def set_base_prompt(\n self, pri_prompt_terminator=\">\", alt_prompt_terminator=\"]\", delay_factor=1\n ):\n \"\"\"\n Sets self.base_prompt\n\n Used as delimiter for stripping of trailing prompt in output.\n\n Should be set to something that is general and applies in multiple contexts. For Comware\n this will be the router prompt with < > or [ ] stripped off.\n\n This will be set on logging in, but not when entering system-view\n \"\"\"\n log.debug(\"In set_base_prompt\")\n delay_factor = self.select_delay_factor(delay_factor)\n self.clear_buffer()\n self.write_channel(self.RETURN)\n time.sleep(0.5 * delay_factor)\n\n prompt = self.read_channel()\n prompt = self.normalize_linefeeds(prompt)\n\n # If multiple lines in the output take the last line\n prompt = prompt.split(self.RESPONSE_RETURN)[-1]\n prompt = prompt.strip()\n\n # Check that ends with a valid terminator character\n if not prompt[-1] in (pri_prompt_terminator, alt_prompt_terminator):\n raise ValueError(\"Router prompt not found: {0}\".format(prompt))\n\n # Strip off any leading HRP_. characters for USGv5 HA\n prompt = re.sub(r\"^HRP_.\", \"\", prompt, flags=re.M)\n\n # Strip off leading and trailing terminator\n prompt = prompt[1:-1]\n prompt = prompt.strip()\n self.base_prompt = prompt\n log.debug(\"prompt: {0}\".format(self.base_prompt))\n\n return self.base_prompt\n\n def save_config(self, cmd=\"save\", confirm=False, confirm_response=\"\"):\n \"\"\" Save Config for HuaweiSSH\"\"\"\n return super(HuaweiSSH, self).save_config(cmd=cmd, confirm=confirm)\n\n\nclass HuaweiVrpv8SSH(HuaweiSSH):\n def commit(self, comment=\"\", delay_factor=1):\n \"\"\"\n Commit the candidate configuration.\n\n Commit the entered configuration. Raise an error and return the failure\n if the commit fails.\n\n default:\n command_string = commit\n comment:\n command_string = commit comment <comment>\n\n \"\"\"\n delay_factor = self.select_delay_factor(delay_factor)\n error_marker = \"Failed to generate committed config\"\n command_string = \"commit\"\n\n if comment:\n command_string += ' comment \"{}\"'.format(comment)\n\n output = self.config_mode()\n output += self.send_command_expect(\n command_string,\n strip_prompt=False,\n strip_command=False,\n delay_factor=delay_factor,\n )\n output += self.exit_config_mode()\n\n if error_marker in output:\n raise ValueError(\n \"Commit failed with following errors:\\n\\n{}\".format(output)\n )\n return output\n\n def save_config(self, cmd=\"\", confirm=True, confirm_response=\"\"):\n \"\"\"Not Implemented\"\"\"\n raise NotImplementedError\n", "path": "netmiko/huawei/huawei_ssh.py"}]} | 2,269 | 104 |
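A note on the patch shown in the record above: per the issue embedded in this record, a Huawei VRPv8 device changes its prompt from `[*HUAWEI]` to `[~HUAWEI]` once `commit` succeeds, so waiting for the escaped pre-commit prompt never matches and netmiko times out. Anchoring on the closing `]`, as the added `expect_string=r"]"` does, matches both prompt forms. The check below is a small self-contained demonstration of that reasoning with Python's `re`; the prompt strings are taken from the issue text.

```python
import re

pre_commit_prompt = "[*HUAWEI]"
post_commit_prompt = "[~HUAWEI]"

# Searching for the escaped pre-commit prompt fails once the prompt flips...
assert re.search(re.escape(pre_commit_prompt), post_commit_prompt) is None
# ...while the relaxed pattern from the patch matches both prompt forms.
assert re.search(r"]", pre_commit_prompt)
assert re.search(r"]", post_commit_prompt)
```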
gh_patches_debug_17030 | rasdani/github-patches | git_diff | apache__tvm-6499 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[uTVM] Use an alternative CRC Library
The 3rdparty crc library introduced in https://github.com/apache/incubator-tvm/pull/6334 has a license problem.
We will need to replace it with a new implementation or an alternative library.
</issue>
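For context, the functionality being swapped out appears to be small: the CRT RPC code needs only a 16-bit CRC over the framed message bytes. The sketch below is a plain-Python CRC-16/CCITT-FALSE routine, included only to illustrate how little surface area a replacement implementation or alternative library has to cover. The exact variant the CRT runtime expects (polynomial, initial value, reflection) is an assumption here and would need to be confirmed against the existing callers.

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT-FALSE: polynomial 0x1021, initial value 0xFFFF."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc


# Standard check value for this CRC variant.
assert crc16_ccitt(b"123456789") == 0x29B1
```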
<code>
[start of python/tvm/micro/build.py]
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17
18 """Defines top-level glue functions for building microTVM artifacts."""
19
20 import copy
21 import logging
22 import os
23 import re
24 from tvm.contrib import util
25
26
27 _LOG = logging.getLogger(__name__)
28
29
30 class Workspace:
31 """Defines helper functions for manipulating temporary compilation workspaces."""
32
33 def __init__(self, root=None, debug=False):
34 if debug or root is not None:
35 with util.TempDirectory.set_keep_for_debug():
36 self.tempdir = util.tempdir(custom_path=root)
37 _LOG.info("Created debug mode workspace at: %s", self.tempdir.temp_dir)
38 else:
39 self.tempdir = util.tempdir()
40
41 def relpath(self, path):
42 return self.tempdir.relpath(path)
43
44 def listdir(self):
45 return self.tempdir.listdir()
46
47 @property
48 def path(self):
49 return self.tempdir.temp_dir
50
51
52 # Required C runtime libraries, in link order.
53 CRT_RUNTIME_LIB_NAMES = ["utvm_rpc_server", "utvm_rpc_common", "common"]
54
55
56 TVM_ROOT_DIR = os.path.realpath(os.path.join(os.path.dirname(__file__), "..", "..", ".."))
57
58
59 CRT_ROOT_DIR = os.path.join(TVM_ROOT_DIR, "src", "runtime", "crt")
60
61
62 RUNTIME_LIB_SRC_DIRS = [os.path.join(CRT_ROOT_DIR, n) for n in CRT_RUNTIME_LIB_NAMES] + [
63 os.path.join(
64 TVM_ROOT_DIR,
65 "3rdparty/mbed-os/targets/TARGET_NORDIC/TARGET_NRF5x/TARGET_SDK_11/" "libraries/crc16",
66 )
67 ]
68
69
70 RUNTIME_SRC_REGEX = re.compile(r"^.*\.cc?$", re.IGNORECASE)
71
72
73 _CRT_DEFAULT_OPTIONS = {
74 "ccflags": ["-std=c++11"],
75 "ldflags": ["-std=gnu++14"],
76 "include_dirs": [
77 f"{TVM_ROOT_DIR}/include",
78 f"{TVM_ROOT_DIR}/3rdparty/dlpack/include",
79 f"{TVM_ROOT_DIR}/3rdparty/mbed-os/targets/TARGET_NORDIC/TARGET_NRF5x/"
80 "TARGET_SDK_11/libraries/crc16/",
81 f"{TVM_ROOT_DIR}/3rdparty/dmlc-core/include",
82 f"{CRT_ROOT_DIR}/include",
83 ],
84 "profile": {"common": ["-Wno-unused-variable"]},
85 }
86
87
88 def default_options(target_include_dir):
89 """Return default opts passed to Compile commands."""
90 bin_opts = copy.deepcopy(_CRT_DEFAULT_OPTIONS)
91 bin_opts["include_dirs"].append(target_include_dir)
92 lib_opts = copy.deepcopy(bin_opts)
93 lib_opts["profile"]["common"].append("-Werror")
94 lib_opts["cflags"] = ["-Wno-error=incompatible-pointer-types"]
95 return {"bin_opts": bin_opts, "lib_opts": lib_opts}
96
97
98 def build_static_runtime(workspace, compiler, module, lib_opts=None, bin_opts=None):
99 """Build the on-device runtime, statically linking the given modules.
100
101 Parameters
102 ----------
103 compiler : tvm.micro.Compiler
104 Compiler instance used to build the runtime.
105
106 module : IRModule
107 Module to statically link.
108
109 lib_opts : dict
110 Extra kwargs passed to library(),
111
112 bin_opts : dict
113 Extra kwargs passed to binary(),
114
115 Returns
116 -------
117 MicroBinary :
118 The compiled runtime.
119 """
120 lib_opts = _CRT_DEFAULT_OPTIONS if lib_opts is None else lib_opts
121 bin_opts = _CRT_DEFAULT_OPTIONS if bin_opts is None else bin_opts
122
123 mod_build_dir = workspace.relpath(os.path.join("build", "module"))
124 os.makedirs(mod_build_dir)
125 mod_src_dir = workspace.relpath(os.path.join("src", "module"))
126 os.makedirs(mod_src_dir)
127 mod_src_path = os.path.join(mod_src_dir, "module.c")
128 module.save(mod_src_path, "cc")
129
130 libs = []
131 for lib_src_dir in RUNTIME_LIB_SRC_DIRS:
132 lib_name = os.path.basename(lib_src_dir)
133 lib_build_dir = workspace.relpath(f"build/{lib_name}")
134 os.makedirs(lib_build_dir)
135
136 lib_srcs = []
137 for p in os.listdir(lib_src_dir):
138 if RUNTIME_SRC_REGEX.match(p):
139 lib_srcs.append(os.path.join(lib_src_dir, p))
140
141 libs.append(compiler.library(lib_build_dir, lib_srcs, lib_opts))
142
143 libs.append(compiler.library(mod_build_dir, [mod_src_path], lib_opts))
144
145 runtime_build_dir = workspace.relpath(f"build/runtime")
146 os.makedirs(runtime_build_dir)
147 return compiler.binary(runtime_build_dir, libs, bin_opts)
148
[end of python/tvm/micro/build.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/tvm/micro/build.py b/python/tvm/micro/build.py
--- a/python/tvm/micro/build.py
+++ b/python/tvm/micro/build.py
@@ -60,10 +60,7 @@
RUNTIME_LIB_SRC_DIRS = [os.path.join(CRT_ROOT_DIR, n) for n in CRT_RUNTIME_LIB_NAMES] + [
- os.path.join(
- TVM_ROOT_DIR,
- "3rdparty/mbed-os/targets/TARGET_NORDIC/TARGET_NRF5x/TARGET_SDK_11/" "libraries/crc16",
- )
+ os.path.join(TVM_ROOT_DIR, "3rdparty/libcrc/src")
]
@@ -76,8 +73,7 @@
"include_dirs": [
f"{TVM_ROOT_DIR}/include",
f"{TVM_ROOT_DIR}/3rdparty/dlpack/include",
- f"{TVM_ROOT_DIR}/3rdparty/mbed-os/targets/TARGET_NORDIC/TARGET_NRF5x/"
- "TARGET_SDK_11/libraries/crc16/",
+ f"{TVM_ROOT_DIR}/3rdparty/libcrc/include",
f"{TVM_ROOT_DIR}/3rdparty/dmlc-core/include",
f"{CRT_ROOT_DIR}/include",
],
| {"golden_diff": "diff --git a/python/tvm/micro/build.py b/python/tvm/micro/build.py\n--- a/python/tvm/micro/build.py\n+++ b/python/tvm/micro/build.py\n@@ -60,10 +60,7 @@\n \n \n RUNTIME_LIB_SRC_DIRS = [os.path.join(CRT_ROOT_DIR, n) for n in CRT_RUNTIME_LIB_NAMES] + [\n- os.path.join(\n- TVM_ROOT_DIR,\n- \"3rdparty/mbed-os/targets/TARGET_NORDIC/TARGET_NRF5x/TARGET_SDK_11/\" \"libraries/crc16\",\n- )\n+ os.path.join(TVM_ROOT_DIR, \"3rdparty/libcrc/src\")\n ]\n \n \n@@ -76,8 +73,7 @@\n \"include_dirs\": [\n f\"{TVM_ROOT_DIR}/include\",\n f\"{TVM_ROOT_DIR}/3rdparty/dlpack/include\",\n- f\"{TVM_ROOT_DIR}/3rdparty/mbed-os/targets/TARGET_NORDIC/TARGET_NRF5x/\"\n- \"TARGET_SDK_11/libraries/crc16/\",\n+ f\"{TVM_ROOT_DIR}/3rdparty/libcrc/include\",\n f\"{TVM_ROOT_DIR}/3rdparty/dmlc-core/include\",\n f\"{CRT_ROOT_DIR}/include\",\n ],\n", "issue": "[uTVM] Use an alternative CRC Library\nThe 3rdparty crc library introduced in https://github.com/apache/incubator-tvm/pull/6334 has a license problem.\r\n\r\n\r\nWe will need to replace it with a new impl or an alternative library\r\n\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n\"\"\"Defines top-level glue functions for building microTVM artifacts.\"\"\"\n\nimport copy\nimport logging\nimport os\nimport re\nfrom tvm.contrib import util\n\n\n_LOG = logging.getLogger(__name__)\n\n\nclass Workspace:\n \"\"\"Defines helper functions for manipulating temporary compilation workspaces.\"\"\"\n\n def __init__(self, root=None, debug=False):\n if debug or root is not None:\n with util.TempDirectory.set_keep_for_debug():\n self.tempdir = util.tempdir(custom_path=root)\n _LOG.info(\"Created debug mode workspace at: %s\", self.tempdir.temp_dir)\n else:\n self.tempdir = util.tempdir()\n\n def relpath(self, path):\n return self.tempdir.relpath(path)\n\n def listdir(self):\n return self.tempdir.listdir()\n\n @property\n def path(self):\n return self.tempdir.temp_dir\n\n\n# Required C runtime libraries, in link order.\nCRT_RUNTIME_LIB_NAMES = [\"utvm_rpc_server\", \"utvm_rpc_common\", \"common\"]\n\n\nTVM_ROOT_DIR = os.path.realpath(os.path.join(os.path.dirname(__file__), \"..\", \"..\", \"..\"))\n\n\nCRT_ROOT_DIR = os.path.join(TVM_ROOT_DIR, \"src\", \"runtime\", \"crt\")\n\n\nRUNTIME_LIB_SRC_DIRS = [os.path.join(CRT_ROOT_DIR, n) for n in CRT_RUNTIME_LIB_NAMES] + [\n os.path.join(\n TVM_ROOT_DIR,\n \"3rdparty/mbed-os/targets/TARGET_NORDIC/TARGET_NRF5x/TARGET_SDK_11/\" \"libraries/crc16\",\n )\n]\n\n\nRUNTIME_SRC_REGEX = re.compile(r\"^.*\\.cc?$\", re.IGNORECASE)\n\n\n_CRT_DEFAULT_OPTIONS = {\n \"ccflags\": [\"-std=c++11\"],\n \"ldflags\": [\"-std=gnu++14\"],\n \"include_dirs\": [\n f\"{TVM_ROOT_DIR}/include\",\n f\"{TVM_ROOT_DIR}/3rdparty/dlpack/include\",\n f\"{TVM_ROOT_DIR}/3rdparty/mbed-os/targets/TARGET_NORDIC/TARGET_NRF5x/\"\n \"TARGET_SDK_11/libraries/crc16/\",\n f\"{TVM_ROOT_DIR}/3rdparty/dmlc-core/include\",\n f\"{CRT_ROOT_DIR}/include\",\n ],\n \"profile\": {\"common\": [\"-Wno-unused-variable\"]},\n}\n\n\ndef default_options(target_include_dir):\n \"\"\"Return default opts passed to Compile commands.\"\"\"\n bin_opts = copy.deepcopy(_CRT_DEFAULT_OPTIONS)\n bin_opts[\"include_dirs\"].append(target_include_dir)\n lib_opts = copy.deepcopy(bin_opts)\n lib_opts[\"profile\"][\"common\"].append(\"-Werror\")\n lib_opts[\"cflags\"] = [\"-Wno-error=incompatible-pointer-types\"]\n return {\"bin_opts\": bin_opts, \"lib_opts\": lib_opts}\n\n\ndef build_static_runtime(workspace, compiler, module, lib_opts=None, bin_opts=None):\n \"\"\"Build the on-device runtime, statically linking the given modules.\n\n Parameters\n ----------\n compiler : tvm.micro.Compiler\n Compiler instance used to build the runtime.\n\n module : IRModule\n Module to statically link.\n\n lib_opts : dict\n Extra kwargs passed to library(),\n\n bin_opts : dict\n Extra kwargs passed to binary(),\n\n Returns\n -------\n MicroBinary :\n The compiled runtime.\n \"\"\"\n lib_opts = _CRT_DEFAULT_OPTIONS if lib_opts is None else lib_opts\n bin_opts = _CRT_DEFAULT_OPTIONS if bin_opts is None else bin_opts\n\n mod_build_dir = workspace.relpath(os.path.join(\"build\", \"module\"))\n os.makedirs(mod_build_dir)\n mod_src_dir = workspace.relpath(os.path.join(\"src\", \"module\"))\n os.makedirs(mod_src_dir)\n mod_src_path = os.path.join(mod_src_dir, \"module.c\")\n module.save(mod_src_path, \"cc\")\n\n libs = []\n for lib_src_dir in RUNTIME_LIB_SRC_DIRS:\n lib_name = os.path.basename(lib_src_dir)\n lib_build_dir = workspace.relpath(f\"build/{lib_name}\")\n os.makedirs(lib_build_dir)\n\n lib_srcs = []\n for p in 
os.listdir(lib_src_dir):\n if RUNTIME_SRC_REGEX.match(p):\n lib_srcs.append(os.path.join(lib_src_dir, p))\n\n libs.append(compiler.library(lib_build_dir, lib_srcs, lib_opts))\n\n libs.append(compiler.library(mod_build_dir, [mod_src_path], lib_opts))\n\n runtime_build_dir = workspace.relpath(f\"build/runtime\")\n os.makedirs(runtime_build_dir)\n return compiler.binary(runtime_build_dir, libs, bin_opts)\n", "path": "python/tvm/micro/build.py"}]} | 2,110 | 285 |
gh_patches_debug_28035 | rasdani/github-patches | git_diff | scikit-image__scikit-image-6293 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Build image pyramids not always working with other images
## Description
Using the *[Build image pyramids](https://scikit-image.org/docs/dev/auto_examples/transform/plot_pyramid.html)* example with a random image does not always work.
## Way to reproduce
### hand.jpg

```python
import numpy as np
import matplotlib.pyplot as plt
from skimage import data
from skimage.transform import pyramid_gaussian
import imageio as io
image = io.imread('hand.jpg') # data.astronaut()
rows, cols, dim = image.shape
pyramid = tuple(pyramid_gaussian(image, downscale=2, multichannel=True))
composite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)
composite_image[:rows, :cols, :] = pyramid[0]
i_row = 0
for p in pyramid[1:]:
n_rows, n_cols = p.shape[:2]
composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p
i_row += n_rows
fig, ax = plt.subplots()
ax.imshow(composite_image)
plt.show()
```
## Version information
```python
3.7.4 (tags/v3.7.4:e09359112e, Jul 8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
Windows-10-10.0.18362-SP0
scikit-image version: 0.16.1
numpy version: 1.17.2
```
```python
Traceback (most recent call last):
File "D:\Vincent\Bureau\Patern recongnition and image analysis\Patern recognition and patern analysis\LAB_1\plot_pyramid.py", line 44, in <module>
composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p
ValueError: could not broadcast input array from shape (2,2,3) into shape (1,2,3)
```
## Possible solution
I was able to make it work for the same RGB image, but this code is not adapted for BW and RGBA.
```python
import numpy as np
import matplotlib.pyplot as plt
from skimage import data
from skimage.transform import pyramid_gaussian
import imageio as io
image = io.imread('hand.jpg') # data.astronaut()
rows, cols, dim = image.shape
pyramid = tuple(pyramid_gaussian(image, downscale=2, multichannel=True))
composite_image = np.zeros((rows, cols + cols // 2, dim), dtype=np.double)
composite_image[:rows, :cols, :] = pyramid[0]
i_row = 0
for p in pyramid[1:]:
n_rows, n_cols = p.shape[:2]
# Check the dimension before assignement
if(composite_image[i_row:i_row + n_rows, cols:cols + n_cols].shape==p.shape):
composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p
i_row += n_rows
else:
break
fig, ax = plt.subplots()
ax.imshow(composite_image)
plt.show()
```
### Result

</issue>
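The broadcast failure happens because the composite canvas is allocated with exactly `rows` rows, while the downsampled levels stacked on the right can need a few rows more whenever the image height is not a power of two (each level rounds its size up). One way to avoid the error is to size the canvas from the pyramid itself, sketched below with the same variable names as the example above; it assumes an RGB input and the current `channel_axis` API.

```python
import numpy as np
from skimage import data
from skimage.transform import pyramid_gaussian

image = data.astronaut()
rows, cols, _ = image.shape
pyramid = tuple(pyramid_gaussian(image, downscale=2, channel_axis=-1))

# Size the canvas from the pyramid itself so every level fits,
# even when rows is not a power of two or downscale < 2.
composite_rows = max(rows, sum(p.shape[0] for p in pyramid[1:]))
composite_cols = cols + pyramid[1].shape[1]
composite = np.zeros((composite_rows, composite_cols, 3), dtype=np.double)

composite[:rows, :cols] = pyramid[0]
i_row = 0
for p in pyramid[1:]:
    n_rows, n_cols = p.shape[:2]
    composite[i_row:i_row + n_rows, cols:cols + n_cols] = p
    i_row += n_rows
```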
<code>
[start of doc/examples/transform/plot_pyramid.py]
1 """
2 ====================
3 Build image pyramids
4 ====================
5
6 The ``pyramid_gaussian`` function takes an image and yields successive images
7 shrunk by a constant scale factor. Image pyramids are often used, e.g., to
8 implement algorithms for denoising, texture discrimination, and scale-invariant
9 detection.
10
11 """
12 import numpy as np
13 import matplotlib.pyplot as plt
14
15 from skimage import data
16 from skimage.transform import pyramid_gaussian
17
18
19 image = data.astronaut()
20 rows, cols, dim = image.shape
21 pyramid = tuple(pyramid_gaussian(image, downscale=2, channel_axis=-1))
22
23 composite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)
24
25 composite_image[:rows, :cols, :] = pyramid[0]
26
27 i_row = 0
28 for p in pyramid[1:]:
29 n_rows, n_cols = p.shape[:2]
30 composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p
31 i_row += n_rows
32
33 fig, ax = plt.subplots()
34 ax.imshow(composite_image)
35 plt.show()
36
[end of doc/examples/transform/plot_pyramid.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/doc/examples/transform/plot_pyramid.py b/doc/examples/transform/plot_pyramid.py
--- a/doc/examples/transform/plot_pyramid.py
+++ b/doc/examples/transform/plot_pyramid.py
@@ -9,6 +9,8 @@
detection.
"""
+import math
+
import numpy as np
import matplotlib.pyplot as plt
@@ -20,10 +22,31 @@
rows, cols, dim = image.shape
pyramid = tuple(pyramid_gaussian(image, downscale=2, channel_axis=-1))
-composite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)
-
+#####################################################################
+# Generate a composite image for visualization
+# ============================================
+#
+# For visualization, we generate a composite image with the same number of rows
+# as the source image but with ``cols + pyramid[1].shape[1]`` columns. We then
+# have space to stack all of the dowsampled images to the right of the
+# original.
+#
+# Note: The sum of the number of rows in all dowsampled images in the pyramid
+# may sometimes exceed the original image size in cases when image.shape[0] is
+# not a power of two. We expand the number of rows in the composite slightly as
+# necessary to account for this. Expansion beyond the number of rows in the
+# original will also be necessary to cover cases where downscale < 2.
+
+# determine the total number of rows and columns for the composite
+composite_rows = max(rows, sum(p.shape[0] for p in pyramid[1:]))
+composite_cols = cols + pyramid[1].shape[1]
+composite_image = np.zeros((composite_rows, composite_cols, 3),
+ dtype=np.double)
+
+# store the original to the left
composite_image[:rows, :cols, :] = pyramid[0]
+# stack all downsampled images in a column to the right of the original
i_row = 0
for p in pyramid[1:]:
n_rows, n_cols = p.shape[:2]
| {"golden_diff": "diff --git a/doc/examples/transform/plot_pyramid.py b/doc/examples/transform/plot_pyramid.py\n--- a/doc/examples/transform/plot_pyramid.py\n+++ b/doc/examples/transform/plot_pyramid.py\n@@ -9,6 +9,8 @@\n detection.\n \n \"\"\"\n+import math\n+\n import numpy as np\n import matplotlib.pyplot as plt\n \n@@ -20,10 +22,31 @@\n rows, cols, dim = image.shape\n pyramid = tuple(pyramid_gaussian(image, downscale=2, channel_axis=-1))\n \n-composite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)\n-\n+#####################################################################\n+# Generate a composite image for visualization\n+# ============================================\n+#\n+# For visualization, we generate a composite image with the same number of rows\n+# as the source image but with ``cols + pyramid[1].shape[1]`` columns. We then\n+# have space to stack all of the dowsampled images to the right of the\n+# original.\n+#\n+# Note: The sum of the number of rows in all dowsampled images in the pyramid\n+# may sometimes exceed the original image size in cases when image.shape[0] is\n+# not a power of two. We expand the number of rows in the composite slightly as\n+# necessary to account for this. Expansion beyond the number of rows in the\n+# original will also be necessary to cover cases where downscale < 2.\n+\n+# determine the total number of rows and columns for the composite\n+composite_rows = max(rows, sum(p.shape[0] for p in pyramid[1:]))\n+composite_cols = cols + pyramid[1].shape[1]\n+composite_image = np.zeros((composite_rows, composite_cols, 3),\n+ dtype=np.double)\n+\n+# store the original to the left\n composite_image[:rows, :cols, :] = pyramid[0]\n \n+# stack all downsampled images in a column to the right of the original\n i_row = 0\n for p in pyramid[1:]:\n n_rows, n_cols = p.shape[:2]\n", "issue": "Build image pyramids not always working with other images\n## Description\r\nUsing the *[Build image pyramids](https://scikit-image.org/docs/dev/auto_examples/transform/plot_pyramid.html)* example with a random image is not always working.\r\n\r\n## Way to reproduce\r\n### hand.jpg\r\n\r\n```python\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\n\r\nfrom skimage import data\r\nfrom skimage.transform import pyramid_gaussian\r\n\r\nimport imageio as io\r\n\r\nimage = io.imread('hand.jpg') # data.astronaut()\r\nrows, cols, dim = image.shape\r\npyramid = tuple(pyramid_gaussian(image, downscale=2, multichannel=True))\r\n\r\ncomposite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)\r\n\r\ncomposite_image[:rows, :cols, :] = pyramid[0]\r\n\r\ni_row = 0\r\nfor p in pyramid[1:]:\r\n n_rows, n_cols = p.shape[:2]\r\n composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p\r\n i_row += n_rows\r\n\r\nfig, ax = plt.subplots()\r\nax.imshow(composite_image)\r\nplt.show()\r\n```\r\n\r\n\r\n## Version information\r\n```python\r\n3.7.4 (tags/v3.7.4:e09359112e, Jul 8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]\r\nWindows-10-10.0.18362-SP0\r\nscikit-image version: 0.16.1\r\nnumpy version: 1.17.2\r\n```\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"D:\\Vincent\\Bureau\\Patern recongnition and image analysis\\Patern recognition and patern analysis\\LAB_1\\plot_pyramid.py\", line 44, in <module>\r\n composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p\r\nValueError: could not broadcast input array from shape (2,2,3) into shape (1,2,3)\r\n```\r\n## Possible solution\r\nI was able to make it works for the same RGB image but 
this code is not adapted for BW and RGBA.\r\n\r\n```python\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\n\r\nfrom skimage import data\r\nfrom skimage.transform import pyramid_gaussian\r\nimport imageio as io\r\n\r\nimage = io.imread('hand.jpg') # data.astronaut()\r\n\r\nrows, cols, dim = image.shape\r\npyramid = tuple(pyramid_gaussian(image, downscale=2, multichannel=True))\r\n\r\ncomposite_image = np.zeros((rows, cols + cols // 2, dim), dtype=np.double)\r\n\r\ncomposite_image[:rows, :cols, :] = pyramid[0]\r\n\r\ni_row = 0\r\nfor p in pyramid[1:]:\r\n n_rows, n_cols = p.shape[:2]\r\n # Check the dimension before assignement\r\n if(composite_image[i_row:i_row + n_rows, cols:cols + n_cols].shape==p.shape):\r\n composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p\r\n i_row += n_rows\r\n else:\r\n break\r\n \r\nfig, ax = plt.subplots()\r\nax.imshow(composite_image)\r\nplt.show()\r\n```\r\n### Result\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\n====================\nBuild image pyramids\n====================\n\nThe ``pyramid_gaussian`` function takes an image and yields successive images\nshrunk by a constant scale factor. Image pyramids are often used, e.g., to\nimplement algorithms for denoising, texture discrimination, and scale-invariant\ndetection.\n\n\"\"\"\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom skimage import data\nfrom skimage.transform import pyramid_gaussian\n\n\nimage = data.astronaut()\nrows, cols, dim = image.shape\npyramid = tuple(pyramid_gaussian(image, downscale=2, channel_axis=-1))\n\ncomposite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)\n\ncomposite_image[:rows, :cols, :] = pyramid[0]\n\ni_row = 0\nfor p in pyramid[1:]:\n n_rows, n_cols = p.shape[:2]\n composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p\n i_row += n_rows\n\nfig, ax = plt.subplots()\nax.imshow(composite_image)\nplt.show()\n", "path": "doc/examples/transform/plot_pyramid.py"}]} | 1,684 | 450 |
gh_patches_debug_21479 | rasdani/github-patches | git_diff | ansible__molecule-2063 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Extra vars are not passed to the playbook anymore with converge.
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Please also check https://molecule.readthedocs.io/en/latest/faq.html --->
<!--- Please use https://groups.google.com/forum/#!forum/molecule-users for usage questions -->
# Issue Type
- Bug report
# Molecule and Ansible details
```
ansible 2.8.0
config file = None
configured module search path = ['/home/olcla/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]
molecule (downgraded to 2.20 to fix but problem related to master branch)
```
Molecule installation method (one of):
- pip from git master
Ansible installation method (one of):
- pip
# Desired Behavior
When running `molecule converge -s my_scenario -- -e somevar=value`, I expect molecule to pass the extra vars when running `playbook.yml`.
# Actual Behaviour
I was trying to fix deprecations related to ansible 2.8 and had to install molecule from git master to overcome problems with testinfra. When running the above command with the latest molecule master, the extra vars are not passed to the playbook anymore. This works as expected when downgrading to molecule 2.20.1.
</issue>
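The symptom is consistent with the trailing `ansible_args` being captured by the `converge` CLI entry point but never handed to the helper that builds the scenario configs, so the provisioner ends up invoking `ansible-playbook` without the extra `-e` flags (see `converge.py` and `base.get_configs()` below). The snippet that follows is a minimal, self-contained illustration of that pattern, not molecule's actual code; names such as `run_playbook` are made up for the demo.

```python
import sys


def run_playbook(ansible_args=()):
    # Stand-in for the provisioner: just show the command it would run.
    cmd = ["ansible-playbook", "playbook.yml", *ansible_args]
    print(" ".join(cmd))


def converge(ansible_args):
    # Regression pattern: calling run_playbook() with no arguments here
    # silently drops "-e somevar=value" captured on the command line.
    run_playbook(ansible_args)  # the fix is simply to forward what was captured


if __name__ == "__main__":
    converge(tuple(sys.argv[1:]))  # e.g. python demo.py -e somevar=value
```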
<code>
[start of molecule/command/converge.py]
1 # Copyright (c) 2015-2018 Cisco Systems, Inc.
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to
5 # deal in the Software without restriction, including without limitation the
6 # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
7 # sell copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
19 # DEALINGS IN THE SOFTWARE.
20
21 import click
22
23 from molecule import logger
24 from molecule.command import base
25
26 LOG = logger.get_logger(__name__)
27
28
29 class Converge(base.Base):
30 """
31 .. program:: molecule converge
32
33 .. option:: molecule converge
34
35 Target the default scenario.
36
37 .. program:: molecule converge --scenario-name foo
38
39 .. option:: molecule converge --scenario-name foo
40
41 Targeting a specific scenario.
42
43 .. program:: molecule converge -- -vvv --tags foo,bar
44
45 .. option:: molecule converge -- -vvv --tags foo,bar
46
47 Providing additional command line arguments to the `ansible-playbook`
48 command. Use this option with care, as there is no sanitation or
49 validation of input. Options passed on the CLI override options
50 provided in provisioner's `options` section of `molecule.yml`.
51
52 .. program:: molecule --debug converge
53
54 .. option:: molecule --debug converge
55
56 Executing with `debug`.
57
58 .. program:: molecule --base-config base.yml converge
59
60 .. option:: molecule --base-config base.yml converge
61
62 Executing with a `base-config`.
63
64 .. program:: molecule --env-file foo.yml converge
65
66 .. option:: molecule --env-file foo.yml converge
67
68 Load an env file to read variables from when rendering
69 molecule.yml.
70 """
71
72 def execute(self):
73 """
74 Execute the actions necessary to perform a `molecule converge` and
75 returns None.
76
77 :return: None
78 """
79 self.print_info()
80 self._config.provisioner.converge()
81 self._config.state.change_state('converged', True)
82
83
84 @click.command()
85 @click.pass_context
86 @click.option(
87 '--scenario-name',
88 '-s',
89 default=base.MOLECULE_DEFAULT_SCENARIO_NAME,
90 help='Name of the scenario to target. ({})'.format(
91 base.MOLECULE_DEFAULT_SCENARIO_NAME))
92 @click.argument('ansible_args', nargs=-1, type=click.UNPROCESSED)
93 def converge(ctx, scenario_name, ansible_args): # pragma: no cover
94 """
95 Use the provisioner to configure instances (dependency, create, prepare
96 converge).
97 """
98
99 args = ctx.obj.get('args')
100 subcommand = base._get_subcommand(__name__)
101 command_args = {
102 'subcommand': subcommand,
103 }
104
105 base.execute_cmdline_scenarios(scenario_name, args, command_args)
106
[end of molecule/command/converge.py]
[start of molecule/command/base.py]
1 # Copyright (c) 2015-2018 Cisco Systems, Inc.
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to
5 # deal in the Software without restriction, including without limitation the
6 # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
7 # sell copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
19 # DEALINGS IN THE SOFTWARE.
20
21 import abc
22 import collections
23 import glob
24 import os
25
26 import six
27
28 import molecule.command
29 import molecule.scenarios
30 from molecule import config
31 from molecule import logger
32 from molecule import util
33
34 LOG = logger.get_logger(__name__)
35 MOLECULE_GLOB = os.environ.get('MOLECULE_GLOB', 'molecule/*/molecule.yml')
36 MOLECULE_DEFAULT_SCENARIO_NAME = 'default'
37
38
39 @six.add_metaclass(abc.ABCMeta)
40 class Base(object):
41 """
42 An abstract base class used to define the command interface.
43 """
44
45 def __init__(self, c):
46 """
47 Base initializer for all :ref:`Command` classes.
48
49 :param c: An instance of a Molecule config.
50 :returns: None
51 """
52 self._config = c
53 self._setup()
54
55 @abc.abstractmethod
56 def execute(self): # pragma: no cover
57 pass
58
59 def print_info(self):
60 msg = "Scenario: '{}'".format(self._config.scenario.name)
61 LOG.info(msg)
62 msg = "Action: '{}'".format(util.underscore(self.__class__.__name__))
63 LOG.info(msg)
64
65 def _setup(self):
66 """
67 Prepare Molecule's provisioner and returns None.
68
69 :return: None
70 """
71 self._config.provisioner.write_config()
72 self._config.provisioner.manage_inventory()
73
74
75 def execute_cmdline_scenarios(scenario_name, args, command_args):
76 """
77 Execute scenario sequences based on parsed command-line arguments.
78
79 This is useful for subcommands that run scenario sequences, which
80 excludes subcommands such as ``list``, ``login``, and ``matrix``.
81
82 ``args`` and ``command_args`` are combined using :func:`get_configs`
83 to generate the scenario(s) configuration.
84
85 :param scenario_name: Name of scenario to run, or ``None`` to run all.
86 :param args: ``args`` dict from ``click`` command context
87 :param command_args: dict of command argumentss, including the target
88 subcommand to execute
89 :returns: None
90
91 """
92 scenarios = molecule.scenarios.Scenarios(
93 get_configs(args, command_args), scenario_name)
94 scenarios.print_matrix()
95 for scenario in scenarios:
96 try:
97 execute_scenario(scenario)
98 except SystemExit:
99 # if the command has a 'destroy' arg, like test does,
100 # handle that behavior here.
101 if command_args.get('destroy') == 'always':
102 msg = ('An error occurred during the {} sequence action: '
103 "'{}'. Cleaning up.").format(scenario.config.subcommand,
104 scenario.config.action)
105 LOG.warning(msg)
106 execute_subcommand(scenario.config, 'cleanup')
107 execute_subcommand(scenario.config, 'destroy')
108 # always prune ephemeral dir if destroying on failure
109 scenario.prune()
110 util.sysexit()
111 else:
112 raise
113
114
115 def execute_subcommand(config, subcommand):
116 command_module = getattr(molecule.command, subcommand)
117 command = getattr(command_module, util.camelize(subcommand))
118 # knowledge of the current action is used by some provisioners
119 # to ensure they behave correctly during certain sequence steps,
120 # particulary the setting of ansible options in create/destroy,
121 # and is also used for reporting in execute_cmdline_scenarios
122 config.action = subcommand
123
124 return command(config).execute()
125
126
127 def execute_scenario(scenario):
128 """
129 Execute each command in the given scenario's configured sequence.
130
131 :param scenario: The scenario to execute.
132 :returns: None
133
134 """
135
136 for action in scenario.sequence:
137 execute_subcommand(scenario.config, action)
138
139 # pruning only if a 'destroy' step was in the sequence allows for normal
140 # debugging by manually stepping through a scenario sequence
141 if 'destroy' in scenario.sequence:
142 scenario.prune()
143
144
145 def get_configs(args, command_args, ansible_args=()):
146 """
147 Glob the current directory for Molecule config files, instantiate config
148 objects, and returns a list.
149
150 :param args: A dict of options, arguments and commands from the CLI.
151 :param command_args: A dict of options passed to the subcommand from
152 the CLI.
153 :param ansible_args: An optional tuple of arguments provided to the
154 `ansible-playbook` command.
155 :return: list
156 """
157 configs = [
158 config.Config(
159 molecule_file=util.abs_path(c),
160 args=args,
161 command_args=command_args,
162 ansible_args=ansible_args,
163 ) for c in glob.glob(MOLECULE_GLOB)
164 ]
165 _verify_configs(configs)
166
167 return configs
168
169
170 def _verify_configs(configs):
171 """
172 Verify a Molecule config was found and returns None.
173
174 :param configs: A list containing absolute paths to Molecule config files.
175 :return: None
176 """
177 if configs:
178 scenario_names = [c.scenario.name for c in configs]
179 for scenario_name, n in collections.Counter(scenario_names).items():
180 if n > 1:
181 msg = ("Duplicate scenario name '{}' found. "
182 'Exiting.').format(scenario_name)
183 util.sysexit_with_message(msg)
184
185 else:
186 msg = "'{}' glob failed. Exiting.".format(MOLECULE_GLOB)
187 util.sysexit_with_message(msg)
188
189
190 def _get_subcommand(string):
191 return string.split('.')[-1]
192
[end of molecule/command/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/molecule/command/base.py b/molecule/command/base.py
--- a/molecule/command/base.py
+++ b/molecule/command/base.py
@@ -72,7 +72,10 @@
self._config.provisioner.manage_inventory()
-def execute_cmdline_scenarios(scenario_name, args, command_args):
+def execute_cmdline_scenarios(scenario_name,
+ args,
+ command_args,
+ ansible_args=()):
"""
Execute scenario sequences based on parsed command-line arguments.
@@ -90,7 +93,7 @@
"""
scenarios = molecule.scenarios.Scenarios(
- get_configs(args, command_args), scenario_name)
+ get_configs(args, command_args, ansible_args), scenario_name)
scenarios.print_matrix()
for scenario in scenarios:
try:
diff --git a/molecule/command/converge.py b/molecule/command/converge.py
--- a/molecule/command/converge.py
+++ b/molecule/command/converge.py
@@ -102,4 +102,5 @@
'subcommand': subcommand,
}
- base.execute_cmdline_scenarios(scenario_name, args, command_args)
+ base.execute_cmdline_scenarios(scenario_name, args, command_args,
+ ansible_args)
| {"golden_diff": "diff --git a/molecule/command/base.py b/molecule/command/base.py\n--- a/molecule/command/base.py\n+++ b/molecule/command/base.py\n@@ -72,7 +72,10 @@\n self._config.provisioner.manage_inventory()\n \n \n-def execute_cmdline_scenarios(scenario_name, args, command_args):\n+def execute_cmdline_scenarios(scenario_name,\n+ args,\n+ command_args,\n+ ansible_args=()):\n \"\"\"\n Execute scenario sequences based on parsed command-line arguments.\n \n@@ -90,7 +93,7 @@\n \n \"\"\"\n scenarios = molecule.scenarios.Scenarios(\n- get_configs(args, command_args), scenario_name)\n+ get_configs(args, command_args, ansible_args), scenario_name)\n scenarios.print_matrix()\n for scenario in scenarios:\n try:\ndiff --git a/molecule/command/converge.py b/molecule/command/converge.py\n--- a/molecule/command/converge.py\n+++ b/molecule/command/converge.py\n@@ -102,4 +102,5 @@\n 'subcommand': subcommand,\n }\n \n- base.execute_cmdline_scenarios(scenario_name, args, command_args)\n+ base.execute_cmdline_scenarios(scenario_name, args, command_args,\n+ ansible_args)\n", "issue": "Extra vars are not passed to the playbook anymore with converge.\n<!--- Verify first that your issue is not already reported on GitHub -->\r\n<!--- Please also check https://molecule.readthedocs.io/en/latest/faq.html --->\r\n<!--- Please use https://groups.google.com/forum/#!forum/molecule-users for usage questions -->\r\n\r\n# Issue Type\r\n\r\n- Bug report\r\n\r\n# Molecule and Ansible details\r\n\r\n```\r\nansible 2.8.0\r\n config file = None\r\n configured module search path = ['/home/olcla/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible\r\n executable location = /usr/local/bin/ansible\r\n python version = 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]\r\n\r\nmolecule (downgraded to 2.20 to fix but problem related to master branch)\r\n```\r\n\r\nMolecule installation method (one of):\r\n\r\n- pip from git master\r\n\r\nAnsible installation method (one of):\r\n\r\n- pip\r\n\r\n# Desired Behavior\r\n\r\nWhen running `molecule converge -s my_scenario -- -e somevar=value` I expect molecule to pass the extra vars when running playbook.yml\r\n\r\n# Actual Behaviour\r\n\r\nI was trying to fix deprecations related to ansible 2.8 and had to install molecule from git master to overcome problems with testinfra. When running the above command with the latest molecule master, the extra vars are not passed to the playbook anymore. This is working as expected when downgrading to molecule 2.20.1 \r\n\r\n\n", "before_files": [{"content": "# Copyright (c) 2015-2018 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport click\n\nfrom molecule import logger\nfrom molecule.command import base\n\nLOG = logger.get_logger(__name__)\n\n\nclass Converge(base.Base):\n \"\"\"\n .. program:: molecule converge\n\n .. option:: molecule converge\n\n Target the default scenario.\n\n .. program:: molecule converge --scenario-name foo\n\n .. option:: molecule converge --scenario-name foo\n\n Targeting a specific scenario.\n\n .. program:: molecule converge -- -vvv --tags foo,bar\n\n .. option:: molecule converge -- -vvv --tags foo,bar\n\n Providing additional command line arguments to the `ansible-playbook`\n command. Use this option with care, as there is no sanitation or\n validation of input. Options passed on the CLI override options\n provided in provisioner's `options` section of `molecule.yml`.\n\n .. program:: molecule --debug converge\n\n .. option:: molecule --debug converge\n\n Executing with `debug`.\n\n .. program:: molecule --base-config base.yml converge\n\n .. option:: molecule --base-config base.yml converge\n\n Executing with a `base-config`.\n\n .. program:: molecule --env-file foo.yml converge\n\n .. option:: molecule --env-file foo.yml converge\n\n Load an env file to read variables from when rendering\n molecule.yml.\n \"\"\"\n\n def execute(self):\n \"\"\"\n Execute the actions necessary to perform a `molecule converge` and\n returns None.\n\n :return: None\n \"\"\"\n self.print_info()\n self._config.provisioner.converge()\n self._config.state.change_state('converged', True)\n\n\[email protected]()\[email protected]_context\[email protected](\n '--scenario-name',\n '-s',\n default=base.MOLECULE_DEFAULT_SCENARIO_NAME,\n help='Name of the scenario to target. ({})'.format(\n base.MOLECULE_DEFAULT_SCENARIO_NAME))\[email protected]('ansible_args', nargs=-1, type=click.UNPROCESSED)\ndef converge(ctx, scenario_name, ansible_args): # pragma: no cover\n \"\"\"\n Use the provisioner to configure instances (dependency, create, prepare\n converge).\n \"\"\"\n\n args = ctx.obj.get('args')\n subcommand = base._get_subcommand(__name__)\n command_args = {\n 'subcommand': subcommand,\n }\n\n base.execute_cmdline_scenarios(scenario_name, args, command_args)\n", "path": "molecule/command/converge.py"}, {"content": "# Copyright (c) 2015-2018 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport abc\nimport collections\nimport glob\nimport os\n\nimport six\n\nimport molecule.command\nimport molecule.scenarios\nfrom molecule import config\nfrom molecule import logger\nfrom molecule import util\n\nLOG = logger.get_logger(__name__)\nMOLECULE_GLOB = os.environ.get('MOLECULE_GLOB', 'molecule/*/molecule.yml')\nMOLECULE_DEFAULT_SCENARIO_NAME = 'default'\n\n\[email protected]_metaclass(abc.ABCMeta)\nclass Base(object):\n \"\"\"\n An abstract base class used to define the command interface.\n \"\"\"\n\n def __init__(self, c):\n \"\"\"\n Base initializer for all :ref:`Command` classes.\n\n :param c: An instance of a Molecule config.\n :returns: None\n \"\"\"\n self._config = c\n self._setup()\n\n @abc.abstractmethod\n def execute(self): # pragma: no cover\n pass\n\n def print_info(self):\n msg = \"Scenario: '{}'\".format(self._config.scenario.name)\n LOG.info(msg)\n msg = \"Action: '{}'\".format(util.underscore(self.__class__.__name__))\n LOG.info(msg)\n\n def _setup(self):\n \"\"\"\n Prepare Molecule's provisioner and returns None.\n\n :return: None\n \"\"\"\n self._config.provisioner.write_config()\n self._config.provisioner.manage_inventory()\n\n\ndef execute_cmdline_scenarios(scenario_name, args, command_args):\n \"\"\"\n Execute scenario sequences based on parsed command-line arguments.\n\n This is useful for subcommands that run scenario sequences, which\n excludes subcommands such as ``list``, ``login``, and ``matrix``.\n\n ``args`` and ``command_args`` are combined using :func:`get_configs`\n to generate the scenario(s) configuration.\n\n :param scenario_name: Name of scenario to run, or ``None`` to run all.\n :param args: ``args`` dict from ``click`` command context\n :param command_args: dict of command argumentss, including the target\n subcommand to execute\n :returns: None\n\n \"\"\"\n scenarios = molecule.scenarios.Scenarios(\n get_configs(args, command_args), scenario_name)\n scenarios.print_matrix()\n for scenario in scenarios:\n try:\n execute_scenario(scenario)\n except SystemExit:\n # if the command has a 'destroy' arg, like test does,\n # handle that behavior here.\n if command_args.get('destroy') == 'always':\n msg = ('An error occurred during the {} sequence action: '\n \"'{}'. 
Cleaning up.\").format(scenario.config.subcommand,\n scenario.config.action)\n LOG.warning(msg)\n execute_subcommand(scenario.config, 'cleanup')\n execute_subcommand(scenario.config, 'destroy')\n # always prune ephemeral dir if destroying on failure\n scenario.prune()\n util.sysexit()\n else:\n raise\n\n\ndef execute_subcommand(config, subcommand):\n command_module = getattr(molecule.command, subcommand)\n command = getattr(command_module, util.camelize(subcommand))\n # knowledge of the current action is used by some provisioners\n # to ensure they behave correctly during certain sequence steps,\n # particulary the setting of ansible options in create/destroy,\n # and is also used for reporting in execute_cmdline_scenarios\n config.action = subcommand\n\n return command(config).execute()\n\n\ndef execute_scenario(scenario):\n \"\"\"\n Execute each command in the given scenario's configured sequence.\n\n :param scenario: The scenario to execute.\n :returns: None\n\n \"\"\"\n\n for action in scenario.sequence:\n execute_subcommand(scenario.config, action)\n\n # pruning only if a 'destroy' step was in the sequence allows for normal\n # debugging by manually stepping through a scenario sequence\n if 'destroy' in scenario.sequence:\n scenario.prune()\n\n\ndef get_configs(args, command_args, ansible_args=()):\n \"\"\"\n Glob the current directory for Molecule config files, instantiate config\n objects, and returns a list.\n\n :param args: A dict of options, arguments and commands from the CLI.\n :param command_args: A dict of options passed to the subcommand from\n the CLI.\n :param ansible_args: An optional tuple of arguments provided to the\n `ansible-playbook` command.\n :return: list\n \"\"\"\n configs = [\n config.Config(\n molecule_file=util.abs_path(c),\n args=args,\n command_args=command_args,\n ansible_args=ansible_args,\n ) for c in glob.glob(MOLECULE_GLOB)\n ]\n _verify_configs(configs)\n\n return configs\n\n\ndef _verify_configs(configs):\n \"\"\"\n Verify a Molecule config was found and returns None.\n\n :param configs: A list containing absolute paths to Molecule config files.\n :return: None\n \"\"\"\n if configs:\n scenario_names = [c.scenario.name for c in configs]\n for scenario_name, n in collections.Counter(scenario_names).items():\n if n > 1:\n msg = (\"Duplicate scenario name '{}' found. \"\n 'Exiting.').format(scenario_name)\n util.sysexit_with_message(msg)\n\n else:\n msg = \"'{}' glob failed. Exiting.\".format(MOLECULE_GLOB)\n util.sysexit_with_message(msg)\n\n\ndef _get_subcommand(string):\n return string.split('.')[-1]\n", "path": "molecule/command/base.py"}]} | 3,805 | 280 |
gh_patches_debug_25326 | rasdani/github-patches | git_diff | mlflow__mlflow-12224 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] uc_volume_dataset_source only validates file paths, not folder paths
### Issues Policy acknowledgement
- [X] I have read and agree to submit bug reports in accordance with the [issues policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md)
### Where did you encounter this bug?
Local machine
### Willingness to contribute
Yes. I would be willing to contribute a fix for this bug with guidance from the MLflow community.
### MLflow version
mlflow-2.12.2
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**:
- **Python version**:
- **yarn version, if running the dev UI**:
### Describe the problem
https://github.com/mlflow/mlflow/blob/72df4a2a0f44c52179dfbdc7d47ad10f58ceec39/mlflow/data/uc_volume_dataset_source.py#L28 doesn't verify folder paths, only file paths
### Tracking information
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```shell
REPLACE_ME
```
### Code to reproduce issue
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```
REPLACE_ME
```
### Stack trace
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```
REPLACE_ME
```
### Other info / logs
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```
REPLACE_ME
```
### What component(s) does this bug affect?
- [ ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [ ] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- [ ] `area/server-infra`: MLflow Tracking server backend
- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging
### What interface(s) does this bug affect?
- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
### What language(s) does this bug affect?
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
### What integration(s) does this bug affect?
- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations
</issue>
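The validation below relies on `WorkspaceClient().files.get_metadata(path)`, which targets the Files endpoint and therefore raises for a path that points at a directory inside the volume. One way to accept both is to fall back to a directory-metadata lookup, sketched here; it assumes the installed `databricks-sdk` exposes `get_directory_metadata` on `w.files`, which depends on the SDK version.

```python
from databricks.sdk import WorkspaceClient

from mlflow.exceptions import MlflowException


def verify_uc_path(path: str) -> None:
    """Sketch: accept both file and folder paths under /Volumes/...."""
    w = WorkspaceClient()
    try:
        w.files.get_metadata(path)  # succeeds for files
    except Exception:
        try:
            w.files.get_directory_metadata(path)  # succeeds for folders
        except Exception:
            raise MlflowException(
                f"{path} does not exist in Databricks Unified Catalog."
            )
```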
<code>
[start of mlflow/data/uc_volume_dataset_source.py]
1 import logging
2 from typing import Any, Dict
3
4 from mlflow.data.dataset_source import DatasetSource
5 from mlflow.exceptions import MlflowException
6
7 _logger = logging.getLogger(__name__)
8
9
10 class UCVolumeDatasetSource(DatasetSource):
11 """Represents the source of a dataset stored in Databricks Unified Catalog Volume.
12
13 If you are using a delta table, please use `mlflow.data.delta_dataset_source.DeltaDatasetSource`
14 instead. This `UCVolumeDatasetSource` does not provide loading function, and is mostly useful
15 when you are logging a `mlflow.data.meta_dataset.MetaDataset` to MLflow, i.e., you want
16 to log the source of dataset to MLflow without loading the dataset.
17
18 Args:
19 path: the UC path of your data. It should be a valid UC path following the pattern
20 "/Volumes/{catalog}/{schema}/{volume}/{file_path}". For example,
21 "/Volumes/MyCatalog/MySchema/MyVolume/MyFile.json".
22 """
23
24 def __init__(self, path: str):
25 self._verify_uc_path_is_valid(path)
26 self.path = path
27
28 def _verify_uc_path_is_valid(self, path):
29 """Verify if the path exists in Databricks Unified Catalog."""
30 try:
31 from databricks.sdk import WorkspaceClient
32
33 w = WorkspaceClient()
34 except ImportError:
35 _logger.warning(
36 "Cannot verify the path of `UCVolumeDatasetSource` because of missing"
37 "`databricks-sdk`. Please install `databricks-sdk` via "
38 "`pip install -U databricks-sdk`. This does not block creating "
39 "`UCVolumeDatasetSource`, but your `UCVolumeDatasetSource` might be invalid."
40 )
41 return
42 except Exception:
43 _logger.warning(
44 "Cannot verify the path of `UCVolumeDatasetSource` due to a connection failure "
45 "with Databricks workspace. Please run `mlflow.login()` to log in to Databricks. "
46 "This does not block creating `UCVolumeDatasetSource`, but your "
47 "`UCVolumeDatasetSource` might be invalid."
48 )
49 return
50
51 try:
52 w.files.get_metadata(path)
53 except Exception:
54 raise MlflowException(f"{path} does not exist in Databricks Unified Catalog.")
55
56 @staticmethod
57 def _get_source_type() -> str:
58 return "uc_volume"
59
60 @staticmethod
61 def _can_resolve(raw_source: Any):
62 raise NotImplementedError
63
64 @classmethod
65 def _resolve(cls, raw_source: str):
66 raise NotImplementedError
67
68 def to_dict(self) -> Dict[Any, Any]:
69 return {"path": self.path}
70
71 @classmethod
72 def from_dict(cls, source_dict: Dict[Any, Any]) -> "UCVolumeDatasetSource":
73 return cls(**source_dict)
74
[end of mlflow/data/uc_volume_dataset_source.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mlflow/data/uc_volume_dataset_source.py b/mlflow/data/uc_volume_dataset_source.py
--- a/mlflow/data/uc_volume_dataset_source.py
+++ b/mlflow/data/uc_volume_dataset_source.py
@@ -22,10 +22,10 @@
"""
def __init__(self, path: str):
- self._verify_uc_path_is_valid(path)
self.path = path
+ self._verify_uc_path_is_valid()
- def _verify_uc_path_is_valid(self, path):
+ def _verify_uc_path_is_valid(self):
"""Verify if the path exists in Databricks Unified Catalog."""
try:
from databricks.sdk import WorkspaceClient
@@ -49,9 +49,17 @@
return
try:
- w.files.get_metadata(path)
+ # Check if `self.path` points to a valid UC file.
+ w.files.get_metadata(self.path)
except Exception:
- raise MlflowException(f"{path} does not exist in Databricks Unified Catalog.")
+ try:
+ # Check if `self.path` points to a valid UC directory.
+ w.files.get_directory_metadata(self.path)
+ # Append a slash to `self.path` to indicate it's a directory.
+ self.path += "/" if not self.path.endswith("/") else ""
+ except Exception:
+ # Neither file nor directory exists, we throw an exception.
+ raise MlflowException(f"{self.path} does not exist in Databricks Unified Catalog.")
@staticmethod
def _get_source_type() -> str:
| {"golden_diff": "diff --git a/mlflow/data/uc_volume_dataset_source.py b/mlflow/data/uc_volume_dataset_source.py\n--- a/mlflow/data/uc_volume_dataset_source.py\n+++ b/mlflow/data/uc_volume_dataset_source.py\n@@ -22,10 +22,10 @@\n \"\"\"\n \n def __init__(self, path: str):\n- self._verify_uc_path_is_valid(path)\n self.path = path\n+ self._verify_uc_path_is_valid()\n \n- def _verify_uc_path_is_valid(self, path):\n+ def _verify_uc_path_is_valid(self):\n \"\"\"Verify if the path exists in Databricks Unified Catalog.\"\"\"\n try:\n from databricks.sdk import WorkspaceClient\n@@ -49,9 +49,17 @@\n return\n \n try:\n- w.files.get_metadata(path)\n+ # Check if `self.path` points to a valid UC file.\n+ w.files.get_metadata(self.path)\n except Exception:\n- raise MlflowException(f\"{path} does not exist in Databricks Unified Catalog.\")\n+ try:\n+ # Check if `self.path` points to a valid UC directory.\n+ w.files.get_directory_metadata(self.path)\n+ # Append a slash to `self.path` to indicate it's a directory.\n+ self.path += \"/\" if not self.path.endswith(\"/\") else \"\"\n+ except Exception:\n+ # Neither file nor directory exists, we throw an exception.\n+ raise MlflowException(f\"{self.path} does not exist in Databricks Unified Catalog.\")\n \n @staticmethod\n def _get_source_type() -> str:\n", "issue": "[BUG] uc_volume_dataset_source only validates file paths, not folder paths\n### Issues Policy acknowledgement\n\n- [X] I have read and agree to submit bug reports in accordance with the [issues policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md)\n\n### Where did you encounter this bug?\n\nLocal machine\n\n### Willingness to contribute\n\nYes. I would be willing to contribute a fix for this bug with guidance from the MLflow community.\n\n### MLflow version\n\nmlflow-2.12.2\n\n### System information\n\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**:\r\n- **Python version**:\r\n- **yarn version, if running the dev UI**:\r\n\n\n### Describe the problem\n\nhttps://github.com/mlflow/mlflow/blob/72df4a2a0f44c52179dfbdc7d47ad10f58ceec39/mlflow/data/uc_volume_dataset_source.py#L28 doesn't verify folder paths, only file paths\n\n### Tracking information\n\n<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->\r\n```shell\r\nREPLACE_ME\r\n```\r\n\n\n### Code to reproduce issue\n\n<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->\r\n```\r\nREPLACE_ME\r\n```\r\n\n\n### Stack trace\n\n<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->\r\n```\r\nREPLACE_ME\r\n```\r\n\n\n### Other info / logs\n\n<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->\r\n```\r\nREPLACE_ME\r\n```\r\n\n\n### What component(s) does this bug affect?\n\n- [ ] `area/artifacts`: Artifact stores and artifact logging\n- [ ] `area/build`: Build and test infrastructure for MLflow\n- [ ] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations\n- [ ] `area/docs`: MLflow documentation pages\n- [ ] `area/examples`: Example code\n- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry\n- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors\n- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates\n- [ ] `area/projects`: MLproject format, project running backends\n- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs\n- [ ] `area/server-infra`: MLflow Tracking server backend\n- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging\n\n### What 
interface(s) does this bug affect?\n\n- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server\n- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models\n- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry\n- [ ] `area/windows`: Windows support\n\n### What language(s) does this bug affect?\n\n- [ ] `language/r`: R APIs and clients\n- [ ] `language/java`: Java APIs and clients\n- [ ] `language/new`: Proposals for new client languages\n\n### What integration(s) does this bug affect?\n\n- [ ] `integrations/azure`: Azure and Azure ML integrations\n- [ ] `integrations/sagemaker`: SageMaker integrations\n- [ ] `integrations/databricks`: Databricks integrations\n", "before_files": [{"content": "import logging\nfrom typing import Any, Dict\n\nfrom mlflow.data.dataset_source import DatasetSource\nfrom mlflow.exceptions import MlflowException\n\n_logger = logging.getLogger(__name__)\n\n\nclass UCVolumeDatasetSource(DatasetSource):\n \"\"\"Represents the source of a dataset stored in Databricks Unified Catalog Volume.\n\n If you are using a delta table, please use `mlflow.data.delta_dataset_source.DeltaDatasetSource`\n instead. This `UCVolumeDatasetSource` does not provide loading function, and is mostly useful\n when you are logging a `mlflow.data.meta_dataset.MetaDataset` to MLflow, i.e., you want\n to log the source of dataset to MLflow without loading the dataset.\n\n Args:\n path: the UC path of your data. It should be a valid UC path following the pattern\n \"/Volumes/{catalog}/{schema}/{volume}/{file_path}\". For example,\n \"/Volumes/MyCatalog/MySchema/MyVolume/MyFile.json\".\n \"\"\"\n\n def __init__(self, path: str):\n self._verify_uc_path_is_valid(path)\n self.path = path\n\n def _verify_uc_path_is_valid(self, path):\n \"\"\"Verify if the path exists in Databricks Unified Catalog.\"\"\"\n try:\n from databricks.sdk import WorkspaceClient\n\n w = WorkspaceClient()\n except ImportError:\n _logger.warning(\n \"Cannot verify the path of `UCVolumeDatasetSource` because of missing\"\n \"`databricks-sdk`. Please install `databricks-sdk` via \"\n \"`pip install -U databricks-sdk`. This does not block creating \"\n \"`UCVolumeDatasetSource`, but your `UCVolumeDatasetSource` might be invalid.\"\n )\n return\n except Exception:\n _logger.warning(\n \"Cannot verify the path of `UCVolumeDatasetSource` due to a connection failure \"\n \"with Databricks workspace. Please run `mlflow.login()` to log in to Databricks. \"\n \"This does not block creating `UCVolumeDatasetSource`, but your \"\n \"`UCVolumeDatasetSource` might be invalid.\"\n )\n return\n\n try:\n w.files.get_metadata(path)\n except Exception:\n raise MlflowException(f\"{path} does not exist in Databricks Unified Catalog.\")\n\n @staticmethod\n def _get_source_type() -> str:\n return \"uc_volume\"\n\n @staticmethod\n def _can_resolve(raw_source: Any):\n raise NotImplementedError\n\n @classmethod\n def _resolve(cls, raw_source: str):\n raise NotImplementedError\n\n def to_dict(self) -> Dict[Any, Any]:\n return {\"path\": self.path}\n\n @classmethod\n def from_dict(cls, source_dict: Dict[Any, Any]) -> \"UCVolumeDatasetSource\":\n return cls(**source_dict)\n", "path": "mlflow/data/uc_volume_dataset_source.py"}]} | 2,032 | 349 |
gh_patches_debug_24966 | rasdani/github-patches | git_diff | chainer__chainer-2721 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
resuming issue of LinearShift
Same issue as #2680
```
import chainer
from chainer import iterators
from chainer import optimizers
from chainer import training
from chainer.training import extensions
from chainer import serializers
class DummyModel(chainer.Chain):
def __call__(self, x):
return x
def setup_trainer(iteration):
model = DummyModel()
optimizer = optimizers.SGD()
optimizer.setup(model)
iterator = iterators.SerialIterator([1, 2, 3], 1)
updater = training.StandardUpdater(iterator, optimizer)
trainer = training.Trainer(updater, (iteration, 'iteration'), out='.')
trainer.extend(extensions.LogReport(trigger=(1, 'iteration')))
trainer.extend(extensions.observe_lr(), trigger=(1, 'iteration'))
trainer.extend(
extensions.PrintReport(['iteration', 'lr']),
trigger=(1, 'iteration'))
trainer.extend(
extensions.LinearShift('lr', (2, 1), (5, 15)),
trigger=(1, 'iteration'))
return trainer
trainer = setup_trainer(10)
trainer.run()
serializers.save_npz('tmp', trainer)
# iteration lr
# 1 2
# 2 2
# 3 2
# 4 2
# 5 2
# 6 2
# 7 1.9
# 8 1.8
# 9 1.7
# 10 1.6
resumed_trainer = setup_trainer(20)
serializers.load_npz('tmp', resumed_trainer)
resumed_trainer.run()
# iteration lr
# 1 2
# 2 2
# 3 2
# 4 2
# 5 2
# 6 2
# 7 1.9
# 8 1.8
# 9 1.7
# 10 1.6
# 11 1.4 (lr = 1.5 is skipped)
# 12 1.3
# 13 1.2
# 14 1.1
# 15 1
# 16 1
# 17 1
# 18 1
# 19 1
# 20 1
```
</issue>
<code>
[start of chainer/training/extensions/linear_shift.py]
1 from __future__ import division
2
3 from chainer.training import extension
4
5
6 class LinearShift(extension.Extension):
7
8 """Trainer extension to change an optimizer attribute linearly.
9
10 This extension changes an optimizer attribute from the first value to the
11 last value linearly within a specified duration. The typical use case is
12 warming up of the momentum coefficient.
13
14 For example, suppose that this extension is called at every iteration, and
15 ``value_range == (x, y)`` and ``time_range == (i, j)``. Then, this
16 extension keeps the attribute to be ``x`` up to the ``i``-th iteration,
17 linearly shifts the value to ``y`` by the ``j``-th iteration, and then
18 keeps the value to be ``y`` after the ``j``-th iteration.
19
20 This extension is also called before the training loop starts by default.
21
22 Args:
23 attr (str): Name of the optimizer attribute to adjust.
24 value_range (tuple of float): The first and the last values of the
25 attribute.
26 time_range (tuple of ints): The first and last counts of calls in which
27 the attribute is adjusted.
28 optimizer (~chainer.Optimizer): Target optimizer object. If it is None,
29 the main optimizer of the trainer is used.
30
31 """
32 invoke_before_training = True
33
34 def __init__(self, attr, value_range, time_range, optimizer=None):
35 self._attr = attr
36 self._value_range = value_range
37 self._time_range = time_range
38 self._optimizer = optimizer
39 self._t = 0
40
41 def __call__(self, trainer):
42 optimizer = self._optimizer or trainer.updater.get_optimizer('main')
43 t1, t2 = self._time_range
44 v1, v2 = self._value_range
45
46 if self._t <= t1:
47 value = v1
48 elif self._t >= t2:
49 value = v2
50 else:
51 rate = (self._t - t1) / (t2 - t1)
52 value = v1 + rate * (v2 - v1)
53 setattr(optimizer, self._attr, value)
54
55 self._t += 1
56
57 def serialize(self, serializer):
58 self._t = serializer('_t', self._t)
59
[end of chainer/training/extensions/linear_shift.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/training/extensions/linear_shift.py b/chainer/training/extensions/linear_shift.py
--- a/chainer/training/extensions/linear_shift.py
+++ b/chainer/training/extensions/linear_shift.py
@@ -36,23 +36,34 @@
self._value_range = value_range
self._time_range = time_range
self._optimizer = optimizer
- self._t = 0
+ self._t = 1
+ self._before_training = True
def __call__(self, trainer):
optimizer = self._optimizer or trainer.updater.get_optimizer('main')
+
+ if self._before_training:
+ self._before_training = False
+ value = self._compute_value(self._t - 1)
+ else:
+ value = self._compute_value(self._t)
+ self._t += 1
+
+ setattr(optimizer, self._attr, value)
+
+ def serialize(self, serializer):
+ self._t = serializer('_t', self._t)
+
+ def _compute_value(self, t):
t1, t2 = self._time_range
v1, v2 = self._value_range
- if self._t <= t1:
+ if t <= t1:
value = v1
- elif self._t >= t2:
+ elif t >= t2:
value = v2
else:
- rate = (self._t - t1) / (t2 - t1)
+ rate = (t - t1) / (t2 - t1)
value = v1 + rate * (v2 - v1)
- setattr(optimizer, self._attr, value)
- self._t += 1
-
- def serialize(self, serializer):
- self._t = serializer('_t', self._t)
+ return value
| {"golden_diff": "diff --git a/chainer/training/extensions/linear_shift.py b/chainer/training/extensions/linear_shift.py\n--- a/chainer/training/extensions/linear_shift.py\n+++ b/chainer/training/extensions/linear_shift.py\n@@ -36,23 +36,34 @@\n self._value_range = value_range\n self._time_range = time_range\n self._optimizer = optimizer\n- self._t = 0\n+ self._t = 1\n+ self._before_training = True\n \n def __call__(self, trainer):\n optimizer = self._optimizer or trainer.updater.get_optimizer('main')\n+\n+ if self._before_training:\n+ self._before_training = False\n+ value = self._compute_value(self._t - 1)\n+ else:\n+ value = self._compute_value(self._t)\n+ self._t += 1\n+\n+ setattr(optimizer, self._attr, value)\n+\n+ def serialize(self, serializer):\n+ self._t = serializer('_t', self._t)\n+\n+ def _compute_value(self, t):\n t1, t2 = self._time_range\n v1, v2 = self._value_range\n \n- if self._t <= t1:\n+ if t <= t1:\n value = v1\n- elif self._t >= t2:\n+ elif t >= t2:\n value = v2\n else:\n- rate = (self._t - t1) / (t2 - t1)\n+ rate = (t - t1) / (t2 - t1)\n value = v1 + rate * (v2 - v1)\n- setattr(optimizer, self._attr, value)\n \n- self._t += 1\n-\n- def serialize(self, serializer):\n- self._t = serializer('_t', self._t)\n+ return value\n", "issue": "resuming issue of LinearShift\nSame issue as #2680\r\n\r\n```\r\nimport chainer\r\nfrom chainer import iterators\r\nfrom chainer import optimizers\r\nfrom chainer import training\r\nfrom chainer.training import extensions\r\nfrom chainer import serializers\r\n\r\n\r\nclass DummyModel(chainer.Chain):\r\n\r\n def __call__(self, x):\r\n return x\r\n\r\n\r\ndef setup_trainer(iteration):\r\n model = DummyModel()\r\n optimizer = optimizers.SGD()\r\n optimizer.setup(model)\r\n\r\n iterator = iterators.SerialIterator([1, 2, 3], 1)\r\n\r\n updater = training.StandardUpdater(iterator, optimizer)\r\n trainer = training.Trainer(updater, (iteration, 'iteration'), out='.')\r\n\r\n trainer.extend(extensions.LogReport(trigger=(1, 'iteration')))\r\n trainer.extend(extensions.observe_lr(), trigger=(1, 'iteration'))\r\n trainer.extend(\r\n extensions.PrintReport(['iteration', 'lr']),\r\n trigger=(1, 'iteration'))\r\n\r\n trainer.extend(\r\n extensions.LinearShift('lr', (2, 1), (5, 15)),\r\n trigger=(1, 'iteration'))\r\n\r\n return trainer\r\n\r\n\r\ntrainer = setup_trainer(10)\r\ntrainer.run()\r\nserializers.save_npz('tmp', trainer)\r\n# iteration lr\r\n# 1 2\r\n# 2 2\r\n# 3 2\r\n# 4 2\r\n# 5 2\r\n# 6 2\r\n# 7 1.9\r\n# 8 1.8\r\n# 9 1.7\r\n# 10 1.6\r\n\r\nresumed_trainer = setup_trainer(20)\r\nserializers.load_npz('tmp', resumed_trainer)\r\nresumed_trainer.run()\r\n# iteration lr\r\n# 1 2\r\n# 2 2\r\n# 3 2\r\n# 4 2\r\n# 5 2\r\n# 6 2\r\n# 7 1.9\r\n# 8 1.8\r\n# 9 1.7\r\n# 10 1.6\r\n# 11 1.4 (lr = 1.5 is skipped)\r\n# 12 1.3\r\n# 13 1.2\r\n# 14 1.1\r\n# 15 1\r\n# 16 1\r\n# 17 1\r\n# 18 1\r\n# 19 1\r\n# 20 1\r\n```\n", "before_files": [{"content": "from __future__ import division\n\nfrom chainer.training import extension\n\n\nclass LinearShift(extension.Extension):\n\n \"\"\"Trainer extension to change an optimizer attribute linearly.\n\n This extension changes an optimizer attribute from the first value to the\n last value linearly within a specified duration. The typical use case is\n warming up of the momentum coefficient.\n\n For example, suppose that this extension is called at every iteration, and\n ``value_range == (x, y)`` and ``time_range == (i, j)``. 
Then, this\n extension keeps the attribute to be ``x`` up to the ``i``-th iteration,\n linearly shifts the value to ``y`` by the ``j``-th iteration, and then\n keeps the value to be ``y`` after the ``j``-th iteration.\n\n This extension is also called before the training loop starts by default.\n\n Args:\n attr (str): Name of the optimizer attribute to adjust.\n value_range (tuple of float): The first and the last values of the\n attribute.\n time_range (tuple of ints): The first and last counts of calls in which\n the attribute is adjusted.\n optimizer (~chainer.Optimizer): Target optimizer object. If it is None,\n the main optimizer of the trainer is used.\n\n \"\"\"\n invoke_before_training = True\n\n def __init__(self, attr, value_range, time_range, optimizer=None):\n self._attr = attr\n self._value_range = value_range\n self._time_range = time_range\n self._optimizer = optimizer\n self._t = 0\n\n def __call__(self, trainer):\n optimizer = self._optimizer or trainer.updater.get_optimizer('main')\n t1, t2 = self._time_range\n v1, v2 = self._value_range\n\n if self._t <= t1:\n value = v1\n elif self._t >= t2:\n value = v2\n else:\n rate = (self._t - t1) / (t2 - t1)\n value = v1 + rate * (v2 - v1)\n setattr(optimizer, self._attr, value)\n\n self._t += 1\n\n def serialize(self, serializer):\n self._t = serializer('_t', self._t)\n", "path": "chainer/training/extensions/linear_shift.py"}]} | 1,718 | 419 |
gh_patches_debug_33052 | rasdani/github-patches | git_diff | ansible__ansible-modules-extras-2818 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
vmware_local_user_manager error: 'module' not defined
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_local_user_manager
##### ANSIBLE VERSION
```
ansible 2.2.0 (devel 321d2e8cee) last updated 2016/08/21 14:29:27 (GMT +200)
lib/ansible/modules/core: (detached HEAD 91a839f1e3) last updated 2016/08/21 14:32:29 (GMT +200)
lib/ansible/modules/extras: (detached HEAD 1aeb9f8a8c) last updated 2016/08/21 14:32:44 (GMT +200)
```
##### CONFIGURATION
None
##### OS / ENVIRONMENT
Execution from: Ubuntu 14.04
Execution to: VMware ESXi 6.0 U2
##### SUMMARY
Execution of module vmware_local_user_manager fails with an error
##### STEPS TO REPRODUCE
```
- name: "vSphere ESXi: Add users"
vmware_local_user_manager: validate_certs=False hostname={{ inventory_hostname }} username=root password=mypassword local_user_name=foo local_user_password=bar
```
##### EXPECTED RESULTS
Create user on target system
##### ACTUAL RESULTS
Error:
```
TASK [vSphere ESXi: Add users] *************************************************
task path: /home/devel/ansible-configuration/vmware.yml:15
Using module file /home/devel/ansible/lib/ansible/modules/extras/cloud/vmware/vmware_local_user_manager.py
<esxi> ESTABLISH LOCAL CONNECTION FOR USER: devel
<esxi> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382 `" && echo ansible-tmp-1471786926.92-121489380863382="` echo $HOME/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382 `" ) && sleep 0'
<esxi> PUT /tmp/tmpHVuXHh TO /home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/vmware_local_user_manager.py
<esxi> EXEC /bin/sh -c 'chmod u+x /home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/ /home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/vmware_local_user_manager.py && sleep 0'
<esxi> EXEC /bin/sh -c '/usr/bin/python /home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/vmware_local_user_manager.py; rm -rf "/home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/" > /dev/null 2>&1 && sleep 0'
fatal: [esxi]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"hostname": "esxi",
"local_user_description": null,
"local_user_name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"local_user_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"state": "present",
"username": "devel",
"validate_certs": false
},
"module_name": "vmware_local_user_manager"
},
"msg": "global name 'module' is not defined"
}
```
</issue>
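The error message is a plain Python scoping problem rather than anything VMware-specific: inside the `VMwareLocalUserManager` methods, the fault handlers call `module.fail_json(...)`, but `module` only exists as a local variable in `main()`; the instance keeps its handle as `self.module`. As soon as a `vmodl` fault is raised, the handler itself fails with `NameError: global name 'module' is not defined`, which the outer `except Exception` in `process_state` then reports. A stripped-down sketch of the pattern (hypothetical class and names, not the Ansible module itself):

```python
class Manager(object):
    def __init__(self, module):
        self.module = module  # the AnsibleModule handle lives on the instance

    def create(self):
        try:
            raise RuntimeError("simulated vmodl fault")
        except RuntimeError:
            module.fail_json(msg="boom")  # bug: bare `module` is looked up as a global
            # self.module.fail_json(msg="boom")  # correct form, as in the fix below


try:
    Manager(object()).create()
except NameError as exc:
    print(exc)  # "name 'module' is not defined" (Python 2 reports "global name ...")
```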
<code>
[start of cloud/vmware/vmware_local_user_manager.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright IBM Corp. 2016
5 # Author(s): Andreas Nafpliotis <[email protected]>
6
7 # This file is part of Ansible
8 #
9 # Ansible is free software: you can redistribute it and/or modify
10 # it under the terms of the GNU General Public License as published by
11 # the Free Software Foundation, either version 3 of the License, or
12 # (at your option) any later version.
13 #
14 # Ansible is distributed in the hope that it will be useful,
15 # but WITHOUT ANY WARRANTY; without even the implied warranty of
16 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17 # GNU General Public License for more details.
18 #
19 # You should have received a copy of the GNU General Public License
20 # along with Ansible. If not, see <http://www.gnu.org/licenses/
21
22 DOCUMENTATION = '''
23 ---
24 module: vmware_local_user_manager
25 short_description: Manage local users on an ESXi host
26 description:
27 - Manage local users on an ESXi host
28 version_added: "2.2"
29 author: Andreas Nafpliotis
30 notes:
31 - Tested on ESXi 6.0
32 - Be sure that the ESXi user used for login, has the appropriate rights to create / delete / edit users
33 requirements:
34 - "python >= 2.6"
35 - PyVmomi installed
36 options:
37 local_user_name:
38 description:
39 - The local user name to be changed
40 required: True
41 local_user_password:
42 description:
43 - The password to be set
44 required: False
45 local_user_description:
46 description:
47 - Description for the user
48 required: False
49 state:
50 description:
51 - Indicate desired state of the user. If the user already exists when C(state=present), the user info is updated
52 choices: ['present', 'absent']
53 default: present
54 extends_documentation_fragment: vmware.documentation
55 '''
56
57 EXAMPLES = '''
58 # Example vmware_local_user_manager command from Ansible Playbooks
59 - name: Add local user to ESXi
60 local_action:
61 module: vmware_local_user_manager
62 hostname: esxi_hostname
63 username: root
64 password: vmware
65 local_user_name: foo
66 '''
67
68 RETURN = '''# '''
69
70 try:
71 from pyVmomi import vim, vmodl
72 HAS_PYVMOMI = True
73 except ImportError:
74 HAS_PYVMOMI = False
75
76
77 class VMwareLocalUserManager(object):
78 def __init__(self, module):
79 self.module = module
80 self.content = connect_to_api(self.module)
81 self.local_user_name = self.module.params['local_user_name']
82 self.local_user_password = self.module.params['local_user_password']
83 self.local_user_description = self.module.params['local_user_description']
84 self.state = self.module.params['state']
85
86 def process_state(self):
87 try:
88 local_account_manager_states = {
89 'absent': {
90 'present': self.state_remove_user,
91 'absent': self.state_exit_unchanged,
92 },
93 'present': {
94 'present': self.state_update_user,
95 'absent': self.state_create_user,
96 }
97 }
98
99 local_account_manager_states[self.state][self.check_local_user_manager_state()]()
100 except vmodl.RuntimeFault as runtime_fault:
101 self.module.fail_json(msg=runtime_fault.msg)
102 except vmodl.MethodFault as method_fault:
103 self.module.fail_json(msg=method_fault.msg)
104 except Exception as e:
105 self.module.fail_json(msg=str(e))
106
107
108 def check_local_user_manager_state(self):
109 user_account = self.find_user_account()
110 if not user_account:
111 return 'absent'
112 else:
113 return 'present'
114
115
116 def find_user_account(self):
117 searchStr = self.local_user_name
118 exactMatch = True
119 findUsers = True
120 findGroups = False
121 user_account = self.content.userDirectory.RetrieveUserGroups(None, searchStr, None, None, exactMatch, findUsers, findGroups)
122 return user_account
123
124
125 def create_account_spec(self):
126 account_spec = vim.host.LocalAccountManager.AccountSpecification()
127 account_spec.id = self.local_user_name
128 account_spec.password = self.local_user_password
129 account_spec.description = self.local_user_description
130 return account_spec
131
132
133 def state_create_user(self):
134 account_spec = self.create_account_spec()
135
136 try:
137 task = self.content.accountManager.CreateUser(account_spec)
138 self.module.exit_json(changed=True)
139 except vmodl.RuntimeFault as runtime_fault:
140 module.fail_json(msg=runtime_fault.msg)
141 except vmodl.MethodFault as method_fault:
142 module.fail_json(msg=method_fault.msg)
143
144 def state_update_user(self):
145 account_spec = self.create_account_spec()
146
147 try:
148 task = self.content.accountManager.UpdateUser(account_spec)
149 self.module.exit_json(changed=True)
150 except vmodl.RuntimeFault as runtime_fault:
151 module.fail_json(msg=runtime_fault.msg)
152 except vmodl.MethodFault as method_fault:
153 module.fail_json(msg=method_fault.msg)
154
155
156 def state_remove_user(self):
157 try:
158 task = self.content.accountManager.RemoveUser(self.local_user_name)
159 self.module.exit_json(changed=True)
160 except vmodl.RuntimeFault as runtime_fault:
161 module.fail_json(msg=runtime_fault.msg)
162 except vmodl.MethodFault as method_fault:
163 module.fail_json(msg=method_fault.msg)
164
165
166 def state_exit_unchanged(self):
167 self.module.exit_json(changed=False)
168
169
170
171 def main():
172
173 argument_spec = vmware_argument_spec()
174 argument_spec.update(dict(local_user_name=dict(required=True, type='str'),
175 local_user_password=dict(required=False, type='str', no_log=True),
176 local_user_description=dict(required=False, type='str'),
177 state=dict(default='present', choices=['present', 'absent'], type='str')))
178
179 module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=False)
180
181 if not HAS_PYVMOMI:
182 module.fail_json(msg='pyvmomi is required for this module')
183
184 vmware_local_user_manager = VMwareLocalUserManager(module)
185 vmware_local_user_manager.process_state()
186
187 from ansible.module_utils.vmware import *
188 from ansible.module_utils.basic import *
189
190 if __name__ == '__main__':
191 main()
192
[end of cloud/vmware/vmware_local_user_manager.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cloud/vmware/vmware_local_user_manager.py b/cloud/vmware/vmware_local_user_manager.py
--- a/cloud/vmware/vmware_local_user_manager.py
+++ b/cloud/vmware/vmware_local_user_manager.py
@@ -137,9 +137,9 @@
task = self.content.accountManager.CreateUser(account_spec)
self.module.exit_json(changed=True)
except vmodl.RuntimeFault as runtime_fault:
- module.fail_json(msg=runtime_fault.msg)
+ self.module.fail_json(msg=runtime_fault.msg)
except vmodl.MethodFault as method_fault:
- module.fail_json(msg=method_fault.msg)
+ self.module.fail_json(msg=method_fault.msg)
def state_update_user(self):
account_spec = self.create_account_spec()
@@ -148,9 +148,9 @@
task = self.content.accountManager.UpdateUser(account_spec)
self.module.exit_json(changed=True)
except vmodl.RuntimeFault as runtime_fault:
- module.fail_json(msg=runtime_fault.msg)
+ self.module.fail_json(msg=runtime_fault.msg)
except vmodl.MethodFault as method_fault:
- module.fail_json(msg=method_fault.msg)
+ self.module.fail_json(msg=method_fault.msg)
def state_remove_user(self):
@@ -158,9 +158,9 @@
task = self.content.accountManager.RemoveUser(self.local_user_name)
self.module.exit_json(changed=True)
except vmodl.RuntimeFault as runtime_fault:
- module.fail_json(msg=runtime_fault.msg)
+ self.module.fail_json(msg=runtime_fault.msg)
except vmodl.MethodFault as method_fault:
- module.fail_json(msg=method_fault.msg)
+ self.module.fail_json(msg=method_fault.msg)
def state_exit_unchanged(self):
| {"golden_diff": "diff --git a/cloud/vmware/vmware_local_user_manager.py b/cloud/vmware/vmware_local_user_manager.py\n--- a/cloud/vmware/vmware_local_user_manager.py\n+++ b/cloud/vmware/vmware_local_user_manager.py\n@@ -137,9 +137,9 @@\n task = self.content.accountManager.CreateUser(account_spec)\n self.module.exit_json(changed=True)\n except vmodl.RuntimeFault as runtime_fault:\n- module.fail_json(msg=runtime_fault.msg)\n+ self.module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n- module.fail_json(msg=method_fault.msg)\n+ self.module.fail_json(msg=method_fault.msg)\n \n def state_update_user(self):\n account_spec = self.create_account_spec()\n@@ -148,9 +148,9 @@\n task = self.content.accountManager.UpdateUser(account_spec)\n self.module.exit_json(changed=True)\n except vmodl.RuntimeFault as runtime_fault:\n- module.fail_json(msg=runtime_fault.msg)\n+ self.module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n- module.fail_json(msg=method_fault.msg)\n+ self.module.fail_json(msg=method_fault.msg)\n \n \n def state_remove_user(self):\n@@ -158,9 +158,9 @@\n task = self.content.accountManager.RemoveUser(self.local_user_name)\n self.module.exit_json(changed=True)\n except vmodl.RuntimeFault as runtime_fault:\n- module.fail_json(msg=runtime_fault.msg)\n+ self.module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n- module.fail_json(msg=method_fault.msg)\n+ self.module.fail_json(msg=method_fault.msg)\n \n \n def state_exit_unchanged(self):\n", "issue": "vmware_local_user_manager error: 'module' not defined\n##### ISSUE TYPE\n- Bug Report\n##### COMPONENT NAME\n\nvmware_local_user_manager\n##### ANSIBLE VERSION\n\n```\nansible 2.2.0 (devel 321d2e8cee) last updated 2016/08/21 14:29:27 (GMT +200)\n lib/ansible/modules/core: (detached HEAD 91a839f1e3) last updated 2016/08/21 14:32:29 (GMT +200)\n lib/ansible/modules/extras: (detached HEAD 1aeb9f8a8c) last updated 2016/08/21 14:32:44 (GMT +200)\n```\n##### CONFIGURATION\n\nNone\n##### OS / ENVIRONMENT\n\nExecution from: Ubuntu 14.04\nExecution to: VMware ESXi 6.0 U2\n##### SUMMARY\n\nExecution of module vmware_local_user_manager fails with error\n##### STEPS TO REPRODUCE\n\n```\n- name: \"vSphere ESXi: Add users\"\n vmware_local_user_manager: validate_certs=False hostname={{ inventory_hostname }} username=root password=mypassword local_user_name=foo local_user_password=bar\n```\n##### EXPECTED RESULTS\n\nCreate user on target system\n##### ACTUAL RESULTS\n\nError:\n\n```\nTASK [vSphere ESXi: Add users] *************************************************\ntask path: /home/devel/ansible-configuration/vmware.yml:15\nUsing module file /home/devel/ansible/lib/ansible/modules/extras/cloud/vmware/vmware_local_user_manager.py\n<esxi> ESTABLISH LOCAL CONNECTION FOR USER: devel\n<esxi> EXEC /bin/sh -c '( umask 77 && mkdir -p \"` echo $HOME/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382 `\" && echo ansible-tmp-1471786926.92-121489380863382=\"` echo $HOME/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382 `\" ) && sleep 0'\n<esxi> PUT /tmp/tmpHVuXHh TO /home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/vmware_local_user_manager.py\n<esxi> EXEC /bin/sh -c 'chmod u+x /home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/ /home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/vmware_local_user_manager.py && sleep 0'\n<esxi> EXEC /bin/sh -c '/usr/bin/python 
/home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/vmware_local_user_manager.py; rm -rf \"/home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/\" > /dev/null 2>&1 && sleep 0'\nfatal: [esxi]: FAILED! => {\n \"changed\": false, \n \"failed\": true, \n \"invocation\": {\n \"module_args\": {\n \"hostname\": \"esxi\", \n \"local_user_description\": null, \n \"local_user_name\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\", \n \"local_user_password\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\", \n \"password\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\", \n \"state\": \"present\", \n \"username\": \"devel\", \n \"validate_certs\": false\n }, \n \"module_name\": \"vmware_local_user_manager\"\n }, \n \"msg\": \"global name 'module' is not defined\"\n}\n```\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright IBM Corp. 2016\n# Author(s): Andreas Nafpliotis <[email protected]>\n\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/\n\nDOCUMENTATION = '''\n---\nmodule: vmware_local_user_manager\nshort_description: Manage local users on an ESXi host\ndescription:\n - Manage local users on an ESXi host\nversion_added: \"2.2\"\nauthor: Andreas Nafpliotis\nnotes:\n - Tested on ESXi 6.0\n - Be sure that the ESXi user used for login, has the appropriate rights to create / delete / edit users\nrequirements:\n - \"python >= 2.6\"\n - PyVmomi installed\noptions:\n local_user_name:\n description:\n - The local user name to be changed\n required: True\n local_user_password:\n description:\n - The password to be set\n required: False\n local_user_description:\n description:\n - Description for the user\n required: False\n state:\n description:\n - Indicate desired state of the user. 
If the user already exists when C(state=present), the user info is updated\n choices: ['present', 'absent']\n default: present\nextends_documentation_fragment: vmware.documentation\n'''\n\nEXAMPLES = '''\n# Example vmware_local_user_manager command from Ansible Playbooks\n- name: Add local user to ESXi\n local_action:\n module: vmware_local_user_manager\n hostname: esxi_hostname\n username: root\n password: vmware\n local_user_name: foo\n'''\n\nRETURN = '''# '''\n\ntry:\n from pyVmomi import vim, vmodl\n HAS_PYVMOMI = True\nexcept ImportError:\n HAS_PYVMOMI = False\n\n\nclass VMwareLocalUserManager(object):\n def __init__(self, module):\n self.module = module\n self.content = connect_to_api(self.module)\n self.local_user_name = self.module.params['local_user_name']\n self.local_user_password = self.module.params['local_user_password']\n self.local_user_description = self.module.params['local_user_description']\n self.state = self.module.params['state']\n\n def process_state(self):\n try:\n local_account_manager_states = {\n 'absent': {\n 'present': self.state_remove_user,\n 'absent': self.state_exit_unchanged,\n },\n 'present': {\n 'present': self.state_update_user,\n 'absent': self.state_create_user,\n }\n }\n\n local_account_manager_states[self.state][self.check_local_user_manager_state()]()\n except vmodl.RuntimeFault as runtime_fault:\n self.module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n self.module.fail_json(msg=method_fault.msg)\n except Exception as e:\n self.module.fail_json(msg=str(e))\n\n\n def check_local_user_manager_state(self):\n user_account = self.find_user_account()\n if not user_account:\n return 'absent'\n else:\n return 'present'\n\n\n def find_user_account(self):\n searchStr = self.local_user_name\n exactMatch = True\n findUsers = True\n findGroups = False\n user_account = self.content.userDirectory.RetrieveUserGroups(None, searchStr, None, None, exactMatch, findUsers, findGroups)\n return user_account\n\n\n def create_account_spec(self):\n account_spec = vim.host.LocalAccountManager.AccountSpecification()\n account_spec.id = self.local_user_name\n account_spec.password = self.local_user_password\n account_spec.description = self.local_user_description\n return account_spec\n\n\n def state_create_user(self):\n account_spec = self.create_account_spec()\n\n try:\n task = self.content.accountManager.CreateUser(account_spec)\n self.module.exit_json(changed=True)\n except vmodl.RuntimeFault as runtime_fault:\n module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n module.fail_json(msg=method_fault.msg)\n\n def state_update_user(self):\n account_spec = self.create_account_spec()\n\n try:\n task = self.content.accountManager.UpdateUser(account_spec)\n self.module.exit_json(changed=True)\n except vmodl.RuntimeFault as runtime_fault:\n module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n module.fail_json(msg=method_fault.msg)\n\n\n def state_remove_user(self):\n try:\n task = self.content.accountManager.RemoveUser(self.local_user_name)\n self.module.exit_json(changed=True)\n except vmodl.RuntimeFault as runtime_fault:\n module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n module.fail_json(msg=method_fault.msg)\n\n\n def state_exit_unchanged(self):\n self.module.exit_json(changed=False)\n\n\n\ndef main():\n\n argument_spec = vmware_argument_spec()\n argument_spec.update(dict(local_user_name=dict(required=True, type='str'),\n 
local_user_password=dict(required=False, type='str', no_log=True),\n local_user_description=dict(required=False, type='str'),\n state=dict(default='present', choices=['present', 'absent'], type='str')))\n\n module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=False)\n\n if not HAS_PYVMOMI:\n module.fail_json(msg='pyvmomi is required for this module')\n\n vmware_local_user_manager = VMwareLocalUserManager(module)\n vmware_local_user_manager.process_state()\n\nfrom ansible.module_utils.vmware import *\nfrom ansible.module_utils.basic import *\n\nif __name__ == '__main__':\n main()\n", "path": "cloud/vmware/vmware_local_user_manager.py"}]} | 3,395 | 402 |
gh_patches_debug_28704 | rasdani/github-patches | git_diff | biolab__orange3-1205 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
There is a desktop file missing in a Linux version
##### Orange version
3.3.1.dev0
##### Expected behavior
/usr/share/applications directory is populated with an orange.desktop file providing a desktop icon on Linux machines.
##### Actual behavior
That doesn't happen - the /usr/share/applications/orange.desktop file is missing.
##### Steps to reproduce the behavior
Install Orange on a Linux machine.
</issue>
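For context on the fix that follows: setuptools' `data_files` argument takes `(target_dir, [source_files])` pairs, and relative target directories are resolved against the installation prefix (typically `sys.prefix`), so `share/applications` lands in `/usr/share/applications` for a system-wide install. A minimal, illustrative sketch of that mechanism; the file names match those referenced in the patch below, but the desktop file's contents are not shown in this report, and the actual patch computes absolute targets from `sys.prefix` (falling back to `XDG_DATA_HOME` when the prefix is not writable):

```python
from setuptools import setup

setup(
    name="example-app",  # illustrative only
    data_files=[
        ("share/applications", ["distribute/orange-canvas.desktop"]),
        ("share/icons/hicolor/scalable/apps", ["distribute/orange-canvas.svg"]),
    ],
)
```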
<code>
[start of setup.py]
1 #! /usr/bin/env python3
2
3 import os
4 import sys
5 import subprocess
6 from setuptools import find_packages, Command
7
8 if sys.version_info < (3, 4):
9 sys.exit('Orange requires Python >= 3.4')
10 try:
11 from numpy.distutils.core import setup
12 except ImportError:
13 sys.exit('setup requires numpy; install numpy first')
14
15 NAME = 'Orange'
16
17 VERSION = '3.3.2'
18 ISRELEASED = False
19
20 DESCRIPTION = 'Orange, a component-based data mining framework.'
21 README_FILE = os.path.join(os.path.dirname(__file__), 'README.md')
22 LONG_DESCRIPTION = open(README_FILE).read()
23 AUTHOR = 'Bioinformatics Laboratory, FRI UL'
24 AUTHOR_EMAIL = '[email protected]'
25 URL = 'http://orange.biolab.si/'
26 LICENSE = 'GPLv3+'
27
28 KEYWORDS = (
29 'data mining',
30 'machine learning',
31 'artificial intelligence',
32 )
33
34 CLASSIFIERS = (
35 'Development Status :: 4 - Beta',
36 'Environment :: X11 Applications :: Qt',
37 'Environment :: Console',
38 'Environment :: Plugins',
39 'Programming Language :: Python',
40 'Framework :: Orange',
41 'License :: OSI Approved :: '
42 'GNU General Public License v3 or later (GPLv3+)',
43 'Operating System :: POSIX',
44 'Operating System :: Microsoft :: Windows',
45 'Topic :: Scientific/Engineering :: Artificial Intelligence',
46 'Topic :: Scientific/Engineering :: Visualization',
47 'Topic :: Software Development :: Libraries :: Python Modules',
48 'Intended Audience :: Education',
49 'Intended Audience :: Science/Research',
50 'Intended Audience :: Developers',
51 )
52
53 requirements = ['requirements-core.txt', 'requirements-gui.txt']
54
55 INSTALL_REQUIRES = sorted(set(
56 line.partition('#')[0].strip()
57 for file in (os.path.join(os.path.dirname(__file__), file)
58 for file in requirements)
59 for line in open(file)
60 ) - {''})
61
62 ENTRY_POINTS = {
63 "orange.canvas.help": (
64 "html-index = Orange.widgets:WIDGET_HELP_PATH",
65 ),
66 "gui_scripts": (
67 "orange-canvas = Orange.canvas.__main__:main",
68 ),
69 }
70
71
72 # Return the git revision as a string
73 def git_version():
74 """Return the git revision as a string.
75
76 Copied from numpy setup.py
77 """
78 def _minimal_ext_cmd(cmd):
79 # construct minimal environment
80 env = {}
81 for k in ['SYSTEMROOT', 'PATH']:
82 v = os.environ.get(k)
83 if v is not None:
84 env[k] = v
85 # LANGUAGE is used on win32
86 env['LANGUAGE'] = 'C'
87 env['LANG'] = 'C'
88 env['LC_ALL'] = 'C'
89 out = subprocess.Popen(cmd, stdout = subprocess.PIPE, env=env).communicate()[0]
90 return out
91
92 try:
93 out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])
94 GIT_REVISION = out.strip().decode('ascii')
95 except OSError:
96 GIT_REVISION = "Unknown"
97 return GIT_REVISION
98
99
100 def write_version_py(filename='Orange/version.py'):
101 # Copied from numpy setup.py
102 cnt = """
103 # THIS FILE IS GENERATED FROM ORANGE SETUP.PY
104 short_version = '%(version)s'
105 version = '%(version)s'
106 full_version = '%(full_version)s'
107 git_revision = '%(git_revision)s'
108 release = %(isrelease)s
109
110 if not release:
111 version = full_version
112 short_version += ".dev"
113 """
114 FULLVERSION = VERSION
115 if os.path.exists('.git'):
116 GIT_REVISION = git_version()
117 elif os.path.exists('Orange/version.py'):
118 # must be a source distribution, use existing version file
119 import imp
120 version = imp.load_source("Orange.version", "Orange/version.py")
121 GIT_REVISION = version.git_revision
122 else:
123 GIT_REVISION = "Unknown"
124
125 if not ISRELEASED:
126 FULLVERSION += '.dev0+' + GIT_REVISION[:7]
127
128 a = open(filename, 'w')
129 try:
130 a.write(cnt % {'version': VERSION,
131 'full_version': FULLVERSION,
132 'git_revision': GIT_REVISION,
133 'isrelease': str(ISRELEASED)})
134 finally:
135 a.close()
136
137
138 def configuration(parent_package='', top_path=None):
139 if os.path.exists('MANIFEST'):
140 os.remove('MANIFEST')
141
142 from numpy.distutils.misc_util import Configuration
143 config = Configuration(None, parent_package, top_path)
144
145 # Avoid non-useful msg:
146 # "Ignoring attempt to set 'name' (from ... "
147 config.set_options(ignore_setup_xxx_py=True,
148 assume_default_configuration=True,
149 delegate_options_to_subpackages=True,
150 quiet=True)
151
152 config.add_subpackage('Orange')
153
154 config.get_version('Orange/version.py') # sets config.version
155
156 return config
157
158
159 PACKAGES = find_packages()
160
161 # Extra non .py, .{so,pyd} files that are installed within the package dir
162 # hierarchy
163 PACKAGE_DATA = {
164 "Orange": ["datasets/*.{}".format(ext)
165 for ext in ["tab", "csv", "basket", "info"]],
166 "Orange.canvas": ["icons/*.png", "icons/*.svg"],
167 "Orange.canvas.styles": ["*.qss", "orange/*.svg"],
168 "Orange.canvas.application.tutorials": ["*.ows"],
169 "Orange.canvas.report": ["icons/*.svg", "*.html"],
170 "Orange.widgets": ["icons/*.png", "icons/*.svg"],
171 "Orange.widgets.classify": ["icons/*.svg"],
172 "Orange.widgets.data": ["icons/*.svg",
173 "icons/paintdata/*.png",
174 "icons/paintdata/*.svg"],
175 "Orange.widgets.evaluate": ["icons/*.svg"],
176 "Orange.widgets.visualize": ["icons/*.svg"],
177 "Orange.widgets.regression": ["icons/*.svg"],
178 "Orange.widgets.unsupervised": ["icons/*.svg"],
179 "Orange.widgets.utils.plot": ["*.fs", "*.gs", "*.vs"],
180 "Orange.widgets.utils.plot.primitives": ["*.obj"],
181 "Orange.tests": ["xlsx_files/*.xlsx", "*.tab", "*.basket", "*.csv"]
182 }
183
184
185 class LintCommand(Command):
186 """A setup.py lint subcommand developers can run locally."""
187 description = "run code linter(s)"
188 user_options = []
189 initialize_options = finalize_options = lambda self: None
190
191 def run(self):
192 """Lint current branch compared to a reasonable master branch"""
193 sys.exit(subprocess.call(r'''
194 set -eu
195 upstream="$(git remote -v |
196 awk '/[@\/]github.com[:\/]biolab\/orange3[\. ]/{ print $1; exit }')"
197 git fetch -q $upstream master
198 best_ancestor=$(git merge-base HEAD refs/remotes/$upstream/master)
199 .travis/check_pylint_diff $best_ancestor
200 ''', shell=True, cwd=os.path.dirname(os.path.abspath(__file__))))
201
202 class CoverageCommand(Command):
203 """A setup.py coverage subcommand developers can run locally."""
204 description = "run code coverage"
205 user_options = []
206 initialize_options = finalize_options = lambda self: None
207
208 def run(self):
209 """Check coverage on current workdir"""
210 sys.exit(subprocess.call(r'''
211 coverage run --source=Orange -m unittest -v Orange.tests
212 echo; echo
213 coverage report
214 coverage html &&
215 { echo; echo "See also: file://$(pwd)/htmlcov/index.html"; echo; }
216 ''', shell=True, cwd=os.path.dirname(os.path.abspath(__file__))))
217
218
219
220
221 def setup_package():
222 write_version_py()
223 setup(
224 configuration=configuration,
225 name=NAME,
226 description=DESCRIPTION,
227 long_description=LONG_DESCRIPTION,
228 author=AUTHOR,
229 author_email=AUTHOR_EMAIL,
230 url=URL,
231 license=LICENSE,
232 keywords=KEYWORDS,
233 classifiers=CLASSIFIERS,
234 packages=PACKAGES,
235 package_data=PACKAGE_DATA,
236 install_requires=INSTALL_REQUIRES,
237 entry_points=ENTRY_POINTS,
238 zip_safe=False,
239 test_suite='Orange.tests.test_suite',
240 cmdclass={
241 'lint': LintCommand,
242 'coverage': CoverageCommand,
243 },
244 )
245
246 if __name__ == '__main__':
247 setup_package()
248
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -216,6 +216,38 @@
''', shell=True, cwd=os.path.dirname(os.path.abspath(__file__))))
+# Install desktop file and icon on GNU/Linux/BSD
+DATA_FILES = []
+if any(sys.platform.startswith(platform)
+ for platform in ('linux', 'freebsd')):
+ # Patch desktop file executable to work with virtualenv
+ try:
+ sys.real_prefix
+ except AttributeError:
+ pass # Not in virtualenv
+ else:
+ with open(os.path.join(os.path.dirname(__file__),
+ 'distribute',
+ 'orange-canvas.desktop'), 'r+') as desktop:
+ spec = []
+ for line in desktop:
+ if line.startswith('Exec='):
+ line = 'Exec="{}" -m Orange.canvas\n'.format(sys.executable)
+ spec.append(line)
+ desktop.seek(0)
+ desktop.truncate(0)
+ desktop.writelines(spec)
+
+ usr_share = os.path.join(sys.prefix, "share")
+ if not usr_share.startswith('/usr/') or not os.access(usr_share, os.W_OK):
+ usr_share = os.environ.get('XDG_DATA_HOME',
+ os.path.expanduser('~/.local/share'))
+ DATA_FILES += [
+ (os.path.join(usr_share, 'applications'),
+ ['distribute/orange-canvas.desktop']),
+ (os.path.join(usr_share, 'icons', 'hicolor', 'scalable', 'apps'),
+ ['distribute/orange-canvas.svg'])
+ ]
def setup_package():
@@ -235,6 +267,7 @@
package_data=PACKAGE_DATA,
install_requires=INSTALL_REQUIRES,
entry_points=ENTRY_POINTS,
+ data_files=DATA_FILES,
zip_safe=False,
test_suite='Orange.tests.test_suite',
cmdclass={
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -216,6 +216,38 @@\n ''', shell=True, cwd=os.path.dirname(os.path.abspath(__file__))))\n \n \n+# Install desktop file and icon on GNU/Linux/BSD\n+DATA_FILES = []\n+if any(sys.platform.startswith(platform)\n+ for platform in ('linux', 'freebsd')):\n+ # Patch desktop file executable to work with virtualenv\n+ try:\n+ sys.real_prefix\n+ except AttributeError:\n+ pass # Not in virtualenv\n+ else:\n+ with open(os.path.join(os.path.dirname(__file__),\n+ 'distribute',\n+ 'orange-canvas.desktop'), 'r+') as desktop:\n+ spec = []\n+ for line in desktop:\n+ if line.startswith('Exec='):\n+ line = 'Exec=\"{}\" -m Orange.canvas\\n'.format(sys.executable)\n+ spec.append(line)\n+ desktop.seek(0)\n+ desktop.truncate(0)\n+ desktop.writelines(spec)\n+\n+ usr_share = os.path.join(sys.prefix, \"share\")\n+ if not usr_share.startswith('/usr/') or not os.access(usr_share, os.W_OK):\n+ usr_share = os.environ.get('XDG_DATA_HOME',\n+ os.path.expanduser('~/.local/share'))\n+ DATA_FILES += [\n+ (os.path.join(usr_share, 'applications'),\n+ ['distribute/orange-canvas.desktop']),\n+ (os.path.join(usr_share, 'icons', 'hicolor', 'scalable', 'apps'),\n+ ['distribute/orange-canvas.svg'])\n+ ]\n \n \n def setup_package():\n@@ -235,6 +267,7 @@\n package_data=PACKAGE_DATA,\n install_requires=INSTALL_REQUIRES,\n entry_points=ENTRY_POINTS,\n+ data_files=DATA_FILES,\n zip_safe=False,\n test_suite='Orange.tests.test_suite',\n cmdclass={\n", "issue": "There is a desktop file missing in a Linux version\n##### Orange version\n\n3.3.1.dev0\n##### Expected bahavior\n\n/usr/share/applications directory is populated with a orange.desktop file providing a desktop icon on Linux machines.\n##### Actual behavior\n\nThat doesn't happen - the /usr/share/applications/orange.desktop file is missing.\n##### Steps to reproduce the behavior\n\nInstall Orange on a Linux machine.\n\n", "before_files": [{"content": "#! 
/usr/bin/env python3\n\nimport os\nimport sys\nimport subprocess\nfrom setuptools import find_packages, Command\n\nif sys.version_info < (3, 4):\n sys.exit('Orange requires Python >= 3.4')\ntry:\n from numpy.distutils.core import setup\nexcept ImportError:\n sys.exit('setup requires numpy; install numpy first')\n\nNAME = 'Orange'\n\nVERSION = '3.3.2'\nISRELEASED = False\n\nDESCRIPTION = 'Orange, a component-based data mining framework.'\nREADME_FILE = os.path.join(os.path.dirname(__file__), 'README.md')\nLONG_DESCRIPTION = open(README_FILE).read()\nAUTHOR = 'Bioinformatics Laboratory, FRI UL'\nAUTHOR_EMAIL = '[email protected]'\nURL = 'http://orange.biolab.si/'\nLICENSE = 'GPLv3+'\n\nKEYWORDS = (\n 'data mining',\n 'machine learning',\n 'artificial intelligence',\n)\n\nCLASSIFIERS = (\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: Qt',\n 'Environment :: Console',\n 'Environment :: Plugins',\n 'Programming Language :: Python',\n 'Framework :: Orange',\n 'License :: OSI Approved :: '\n 'GNU General Public License v3 or later (GPLv3+)',\n 'Operating System :: POSIX',\n 'Operating System :: Microsoft :: Windows',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Scientific/Engineering :: Visualization',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n)\n\nrequirements = ['requirements-core.txt', 'requirements-gui.txt']\n\nINSTALL_REQUIRES = sorted(set(\n line.partition('#')[0].strip()\n for file in (os.path.join(os.path.dirname(__file__), file)\n for file in requirements)\n for line in open(file)\n) - {''})\n\nENTRY_POINTS = {\n \"orange.canvas.help\": (\n \"html-index = Orange.widgets:WIDGET_HELP_PATH\",\n ),\n \"gui_scripts\": (\n \"orange-canvas = Orange.canvas.__main__:main\",\n ),\n}\n\n\n# Return the git revision as a string\ndef git_version():\n \"\"\"Return the git revision as a string.\n\n Copied from numpy setup.py\n \"\"\"\n def _minimal_ext_cmd(cmd):\n # construct minimal environment\n env = {}\n for k in ['SYSTEMROOT', 'PATH']:\n v = os.environ.get(k)\n if v is not None:\n env[k] = v\n # LANGUAGE is used on win32\n env['LANGUAGE'] = 'C'\n env['LANG'] = 'C'\n env['LC_ALL'] = 'C'\n out = subprocess.Popen(cmd, stdout = subprocess.PIPE, env=env).communicate()[0]\n return out\n\n try:\n out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])\n GIT_REVISION = out.strip().decode('ascii')\n except OSError:\n GIT_REVISION = \"Unknown\"\n return GIT_REVISION\n\n\ndef write_version_py(filename='Orange/version.py'):\n # Copied from numpy setup.py\n cnt = \"\"\"\n# THIS FILE IS GENERATED FROM ORANGE SETUP.PY\nshort_version = '%(version)s'\nversion = '%(version)s'\nfull_version = '%(full_version)s'\ngit_revision = '%(git_revision)s'\nrelease = %(isrelease)s\n\nif not release:\n version = full_version\n short_version += \".dev\"\n\"\"\"\n FULLVERSION = VERSION\n if os.path.exists('.git'):\n GIT_REVISION = git_version()\n elif os.path.exists('Orange/version.py'):\n # must be a source distribution, use existing version file\n import imp\n version = imp.load_source(\"Orange.version\", \"Orange/version.py\")\n GIT_REVISION = version.git_revision\n else:\n GIT_REVISION = \"Unknown\"\n\n if not ISRELEASED:\n FULLVERSION += '.dev0+' + GIT_REVISION[:7]\n\n a = open(filename, 'w')\n try:\n a.write(cnt % {'version': VERSION,\n 'full_version': FULLVERSION,\n 'git_revision': GIT_REVISION,\n 'isrelease': str(ISRELEASED)})\n 
finally:\n a.close()\n\n\ndef configuration(parent_package='', top_path=None):\n if os.path.exists('MANIFEST'):\n os.remove('MANIFEST')\n\n from numpy.distutils.misc_util import Configuration\n config = Configuration(None, parent_package, top_path)\n\n # Avoid non-useful msg:\n # \"Ignoring attempt to set 'name' (from ... \"\n config.set_options(ignore_setup_xxx_py=True,\n assume_default_configuration=True,\n delegate_options_to_subpackages=True,\n quiet=True)\n\n config.add_subpackage('Orange')\n\n config.get_version('Orange/version.py') # sets config.version\n\n return config\n\n\nPACKAGES = find_packages()\n\n# Extra non .py, .{so,pyd} files that are installed within the package dir\n# hierarchy\nPACKAGE_DATA = {\n \"Orange\": [\"datasets/*.{}\".format(ext)\n for ext in [\"tab\", \"csv\", \"basket\", \"info\"]],\n \"Orange.canvas\": [\"icons/*.png\", \"icons/*.svg\"],\n \"Orange.canvas.styles\": [\"*.qss\", \"orange/*.svg\"],\n \"Orange.canvas.application.tutorials\": [\"*.ows\"],\n \"Orange.canvas.report\": [\"icons/*.svg\", \"*.html\"],\n \"Orange.widgets\": [\"icons/*.png\", \"icons/*.svg\"],\n \"Orange.widgets.classify\": [\"icons/*.svg\"],\n \"Orange.widgets.data\": [\"icons/*.svg\",\n \"icons/paintdata/*.png\",\n \"icons/paintdata/*.svg\"],\n \"Orange.widgets.evaluate\": [\"icons/*.svg\"],\n \"Orange.widgets.visualize\": [\"icons/*.svg\"],\n \"Orange.widgets.regression\": [\"icons/*.svg\"],\n \"Orange.widgets.unsupervised\": [\"icons/*.svg\"],\n \"Orange.widgets.utils.plot\": [\"*.fs\", \"*.gs\", \"*.vs\"],\n \"Orange.widgets.utils.plot.primitives\": [\"*.obj\"],\n \"Orange.tests\": [\"xlsx_files/*.xlsx\", \"*.tab\", \"*.basket\", \"*.csv\"]\n}\n\n\nclass LintCommand(Command):\n \"\"\"A setup.py lint subcommand developers can run locally.\"\"\"\n description = \"run code linter(s)\"\n user_options = []\n initialize_options = finalize_options = lambda self: None\n\n def run(self):\n \"\"\"Lint current branch compared to a reasonable master branch\"\"\"\n sys.exit(subprocess.call(r'''\n set -eu\n upstream=\"$(git remote -v |\n awk '/[@\\/]github.com[:\\/]biolab\\/orange3[\\. 
]/{ print $1; exit }')\"\n git fetch -q $upstream master\n best_ancestor=$(git merge-base HEAD refs/remotes/$upstream/master)\n .travis/check_pylint_diff $best_ancestor\n ''', shell=True, cwd=os.path.dirname(os.path.abspath(__file__))))\n\nclass CoverageCommand(Command):\n \"\"\"A setup.py coverage subcommand developers can run locally.\"\"\"\n description = \"run code coverage\"\n user_options = []\n initialize_options = finalize_options = lambda self: None\n\n def run(self):\n \"\"\"Check coverage on current workdir\"\"\"\n sys.exit(subprocess.call(r'''\n coverage run --source=Orange -m unittest -v Orange.tests\n echo; echo\n coverage report\n coverage html &&\n { echo; echo \"See also: file://$(pwd)/htmlcov/index.html\"; echo; }\n ''', shell=True, cwd=os.path.dirname(os.path.abspath(__file__))))\n\n\n\n\ndef setup_package():\n write_version_py()\n setup(\n configuration=configuration,\n name=NAME,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n url=URL,\n license=LICENSE,\n keywords=KEYWORDS,\n classifiers=CLASSIFIERS,\n packages=PACKAGES,\n package_data=PACKAGE_DATA,\n install_requires=INSTALL_REQUIRES,\n entry_points=ENTRY_POINTS,\n zip_safe=False,\n test_suite='Orange.tests.test_suite',\n cmdclass={\n 'lint': LintCommand,\n 'coverage': CoverageCommand,\n },\n )\n\nif __name__ == '__main__':\n setup_package()\n", "path": "setup.py"}]} | 3,047 | 432 |
gh_patches_debug_502 | rasdani/github-patches | git_diff | google__flax-2827 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot import flax.training.checkpoints in 0.6.4
### System information
- OS Platform and Distribution: Ubuntu 22.04.1 LTS, also in Colab environment
- Flax, jax, jaxlib versions:
* flax 0.6.4
* jax 0.3.25
* jaxlib 0.3.25
- Python version: 3.10.6
- GPU/TPU model and memory: No Accelerator / 16GB
### Problem you have encountered:
With Flax v0.6.4 I can't import the `flax.training.checkpoints` module due to the following error:
```
ImportError: cannot import name 'monitoring' from 'jax' (/usr/local/lib/python3.8/dist-packages/jax/__init__.py)
```
This does not happen in v0.6.3.
### What you expected to happen:
The module should be imported.
### Logs, error messages, etc:
Error message from jupyter notebook:
```
ImportError Traceback (most recent call last)
[<ipython-input-3-9a234296e658>](https://localhost:8080/#) in <module>
1 import flax
----> 2 from flax.training import checkpoints
[/usr/local/lib/python3.8/dist-packages/flax/training/checkpoints.py](https://localhost:8080/#) in <module>
36 from flax import traverse_util
37 import jax
---> 38 from jax import monitoring
39 from jax import process_index
40 from jax import sharding
ImportError: cannot import name 'monitoring' from 'jax' (/usr/local/lib/python3.8/dist-packages/jax/__init__.py)
```
### Steps to reproduce:
[Colab notebook](https://colab.research.google.com/drive/1ZLR1JSJPfaaoTmL7bow8oebqyhhxrqSo?usp=sharing)
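
As an editorial illustration (not part of the report), a minimal diagnostic sketch for checking whether the installed jax exposes the `monitoring` submodule that flax 0.6.4 imports at load time; the helper name is invented, and the assumption that a newer jax release provides `jax.monitoring` is mine:

```python
# Hypothetical diagnostic; assumes only that some newer jax releases ship jax.monitoring.
import importlib.util


def jax_has_monitoring() -> bool:
    """Return True if the installed jax provides the jax.monitoring submodule."""
    try:
        return importlib.util.find_spec("jax.monitoring") is not None
    except ModuleNotFoundError:
        # jax itself is not installed.
        return False


if __name__ == "__main__":
    print("jax.monitoring available:", jax_has_monitoring())
```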
</issue>
<code>
[start of setup.py]
1 # Copyright 2022 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """setup.py for Flax."""
16
17 import os
18 from setuptools import find_packages
19 from setuptools import setup
20
21 here = os.path.abspath(os.path.dirname(__file__))
22 try:
23 README = open(os.path.join(here, "README.md"), encoding="utf-8").read()
24 except OSError:
25 README = ""
26
27 install_requires = [
28 "numpy>=1.12",
29 "jax>=0.3.16",
30 "matplotlib", # only needed for tensorboard export
31 "msgpack",
32 "optax",
33 "orbax",
34 "tensorstore",
35 "rich>=11.1",
36 "typing_extensions>=4.1.1",
37 "PyYAML>=5.4.1",
38 ]
39
40 tests_require = [
41 "atari-py==0.2.5", # Last version does not have the ROMs we test on pre-packaged
42 "clu", # All examples.
43 "gym==0.18.3",
44 "jaxlib",
45 "jraph>=0.0.6dev0",
46 "ml-collections",
47 "mypy",
48 "opencv-python",
49 "pytest",
50 "pytest-cov",
51 "pytest-custom_exit_code",
52 "pytest-xdist==1.34.0", # upgrading to 2.0 broke tests, need to investigate
53 "pytype",
54 "sentencepiece", # WMT example.
55 "tensorflow_text>=2.4.0", # WMT example.
56 "tensorflow_datasets",
57 "tensorflow",
58 "torch",
59 ]
60
61 __version__ = None
62
63 with open("flax/version.py") as f:
64 exec(f.read(), globals())
65
66 setup(
67 name="flax",
68 version=__version__,
69 description="Flax: A neural network library for JAX designed for flexibility",
70 long_description="\n\n".join([README]),
71 long_description_content_type="text/markdown",
72 classifiers=[
73 "Development Status :: 3 - Alpha",
74 "Intended Audience :: Developers",
75 "Intended Audience :: Science/Research",
76 "License :: OSI Approved :: Apache Software License",
77 "Programming Language :: Python :: 3.7",
78 "Topic :: Scientific/Engineering :: Artificial Intelligence",
79 ],
80 keywords="",
81 author="Flax team",
82 author_email="[email protected]",
83 url="https://github.com/google/flax",
84 packages=find_packages(),
85 package_data={"flax": ["py.typed"]},
86 zip_safe=False,
87 install_requires=install_requires,
88 extras_require={
89 "testing": tests_require,
90 },
91 )
92
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,7 +26,7 @@
install_requires = [
"numpy>=1.12",
- "jax>=0.3.16",
+ "jax>=0.4.2",
"matplotlib", # only needed for tensorboard export
"msgpack",
"optax",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -26,7 +26,7 @@\n \n install_requires = [\n \"numpy>=1.12\",\n- \"jax>=0.3.16\",\n+ \"jax>=0.4.2\",\n \"matplotlib\", # only needed for tensorboard export\n \"msgpack\",\n \"optax\",\n", "issue": "Cannot import flax.training.checkpoints in 0.6.4\n### System information\r\n- OS Platform and Distribution: Ubuntu 22.04.1 LTS, also in Colab environment\r\n- Flax, jax, jaxlib versions:\r\n * flax 0.6.4\r\n * jax 0.3.25\r\n * jaxlib 0.3.25\r\n- Python version: 3.10.6\r\n- GPU/TPU model and memory: No Accelerator / 16GB\r\n\r\n### Problem you have encountered:\r\nWith FLAX v0.6.4 I can't import `flax.training.checkpoints` module due to following error:\r\n```\r\nImportError: cannot import name 'monitoring' from 'jax' (/usr/local/lib/python3.8/dist-packages/jax/__init__.py)\r\n```\r\nThis does not happen in v0.6.3.\r\n\r\n### What you expected to happen:\r\nThe module should be imported.\r\n\r\n### Logs, error messages, etc:\r\nError message from jupyter notebook:\r\n```\r\nImportError Traceback (most recent call last)\r\n\r\n[<ipython-input-3-9a234296e658>](https://localhost:8080/#) in <module>\r\n 1 import flax\r\n----> 2 from flax.training import checkpoints\r\n\r\n[/usr/local/lib/python3.8/dist-packages/flax/training/checkpoints.py](https://localhost:8080/#) in <module>\r\n 36 from flax import traverse_util\r\n 37 import jax\r\n---> 38 from jax import monitoring\r\n 39 from jax import process_index\r\n 40 from jax import sharding\r\n\r\nImportError: cannot import name 'monitoring' from 'jax' (/usr/local/lib/python3.8/dist-packages/jax/__init__.py)\r\n```\r\n\r\n### Steps to reproduce:\r\n[Colab notebook](https://colab.research.google.com/drive/1ZLR1JSJPfaaoTmL7bow8oebqyhhxrqSo?usp=sharing)\r\n\n", "before_files": [{"content": "# Copyright 2022 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"setup.py for Flax.\"\"\"\n\nimport os\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n README = open(os.path.join(here, \"README.md\"), encoding=\"utf-8\").read()\nexcept OSError:\n README = \"\"\n\ninstall_requires = [\n \"numpy>=1.12\",\n \"jax>=0.3.16\",\n \"matplotlib\", # only needed for tensorboard export\n \"msgpack\",\n \"optax\",\n \"orbax\",\n \"tensorstore\",\n \"rich>=11.1\",\n \"typing_extensions>=4.1.1\",\n \"PyYAML>=5.4.1\",\n]\n\ntests_require = [\n \"atari-py==0.2.5\", # Last version does not have the ROMs we test on pre-packaged\n \"clu\", # All examples.\n \"gym==0.18.3\",\n \"jaxlib\",\n \"jraph>=0.0.6dev0\",\n \"ml-collections\",\n \"mypy\",\n \"opencv-python\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-custom_exit_code\",\n \"pytest-xdist==1.34.0\", # upgrading to 2.0 broke tests, need to investigate\n \"pytype\",\n \"sentencepiece\", # WMT example.\n \"tensorflow_text>=2.4.0\", # WMT example.\n \"tensorflow_datasets\",\n \"tensorflow\",\n \"torch\",\n]\n\n__version__ = None\n\nwith 
open(\"flax/version.py\") as f:\n exec(f.read(), globals())\n\nsetup(\n name=\"flax\",\n version=__version__,\n description=\"Flax: A neural network library for JAX designed for flexibility\",\n long_description=\"\\n\\n\".join([README]),\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n keywords=\"\",\n author=\"Flax team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/google/flax\",\n packages=find_packages(),\n package_data={\"flax\": [\"py.typed\"]},\n zip_safe=False,\n install_requires=install_requires,\n extras_require={\n \"testing\": tests_require,\n },\n )\n", "path": "setup.py"}]} | 1,860 | 92 |
gh_patches_debug_26532 | rasdani/github-patches | git_diff | jazzband__pip-tools-733 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Command in autogenerated requirements.txt can be shortened
When I run `pip-compile`, my requirements.txt has
```
#
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile --output-file requirements.txt requirements.in
#
```
But I think the `--output-file requirements.txt` can just be dropped (for brevity) when the written file itself is named `requirements.txt`.
I'm recommending this because `pip-compile` already goes ahead and modifies `requirements.txt` when no options are specified. Thoughts?
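
As an editorial illustration only (this is not pip-tools code, and the helper name is invented), a sketch of the kind of check that could decide whether the autogenerated header needs an explicit `--output-file`, assuming the conventional default output for `<name>.in` is `<name>.txt`:

```python
# Illustrative sketch, not pip-tools internals: the flag is redundant when the
# destination matches the conventional default derived from the first source file.
import os


def needs_output_file_flag(dst_file, src_files):
    if not src_files:
        return True
    default = os.path.splitext(os.path.basename(src_files[0]))[0] + ".txt"
    return os.path.basename(dst_file) != default


print(needs_output_file_flag("requirements.txt", ["requirements.in"]))  # -> False
print(needs_output_file_flag("constraints.txt", ["requirements.in"]))   # -> True
```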
</issue>
<code>
[start of piptools/writer.py]
1 import os
2 from itertools import chain
3
4 from ._compat import ExitStack
5 from .click import unstyle
6 from .io import AtomicSaver
7 from .logging import log
8 from .utils import comment, dedup, format_requirement, key_from_req, UNSAFE_PACKAGES
9
10
11 class OutputWriter(object):
12 def __init__(self, src_files, dst_file, dry_run, emit_header, emit_index,
13 emit_trusted_host, annotate, generate_hashes,
14 default_index_url, index_urls, trusted_hosts, format_control,
15 allow_unsafe):
16 self.src_files = src_files
17 self.dst_file = dst_file
18 self.dry_run = dry_run
19 self.emit_header = emit_header
20 self.emit_index = emit_index
21 self.emit_trusted_host = emit_trusted_host
22 self.annotate = annotate
23 self.generate_hashes = generate_hashes
24 self.default_index_url = default_index_url
25 self.index_urls = index_urls
26 self.trusted_hosts = trusted_hosts
27 self.format_control = format_control
28 self.allow_unsafe = allow_unsafe
29
30 def _sort_key(self, ireq):
31 return (not ireq.editable, str(ireq.req).lower())
32
33 def write_header(self):
34 if self.emit_header:
35 yield comment('#')
36 yield comment('# This file is autogenerated by pip-compile')
37 yield comment('# To update, run:')
38 yield comment('#')
39 custom_cmd = os.environ.get('CUSTOM_COMPILE_COMMAND')
40 if custom_cmd:
41 yield comment('# {}'.format(custom_cmd))
42 else:
43 params = []
44 if not self.emit_index:
45 params += ['--no-index']
46 if not self.emit_trusted_host:
47 params += ['--no-emit-trusted-host']
48 if not self.annotate:
49 params += ['--no-annotate']
50 if self.generate_hashes:
51 params += ["--generate-hashes"]
52 if self.allow_unsafe:
53 params += ["--allow-unsafe"]
54 params += ['--output-file', self.dst_file]
55 params += self.src_files
56 yield comment('# pip-compile {}'.format(' '.join(params)))
57 yield comment('#')
58
59 def write_index_options(self):
60 if self.emit_index:
61 for index, index_url in enumerate(dedup(self.index_urls)):
62 if index_url.rstrip('/') == self.default_index_url:
63 continue
64 flag = '--index-url' if index == 0 else '--extra-index-url'
65 yield '{} {}'.format(flag, index_url)
66
67 def write_trusted_hosts(self):
68 if self.emit_trusted_host:
69 for trusted_host in dedup(self.trusted_hosts):
70 yield '--trusted-host {}'.format(trusted_host)
71
72 def write_format_controls(self):
73 for nb in dedup(self.format_control.no_binary):
74 yield '--no-binary {}'.format(nb)
75 for ob in dedup(self.format_control.only_binary):
76 yield '--only-binary {}'.format(ob)
77
78 def write_flags(self):
79 emitted = False
80 for line in chain(self.write_index_options(),
81 self.write_trusted_hosts(),
82 self.write_format_controls()):
83 emitted = True
84 yield line
85 if emitted:
86 yield ''
87
88 def _iter_lines(self, results, unsafe_requirements, reverse_dependencies,
89 primary_packages, markers, hashes):
90 for line in self.write_header():
91 yield line
92 for line in self.write_flags():
93 yield line
94
95 unsafe_requirements = {r for r in results if r.name in UNSAFE_PACKAGES} if not unsafe_requirements else unsafe_requirements # noqa
96 packages = {r for r in results if r.name not in UNSAFE_PACKAGES}
97
98 packages = sorted(packages, key=self._sort_key)
99
100 for ireq in packages:
101 line = self._format_requirement(
102 ireq, reverse_dependencies, primary_packages,
103 markers.get(key_from_req(ireq.req)), hashes=hashes)
104 yield line
105
106 if unsafe_requirements:
107 unsafe_requirements = sorted(unsafe_requirements, key=self._sort_key)
108 yield ''
109 yield comment('# The following packages are considered to be unsafe in a requirements file:')
110
111 for ireq in unsafe_requirements:
112 req = self._format_requirement(ireq,
113 reverse_dependencies,
114 primary_packages,
115 marker=markers.get(key_from_req(ireq.req)),
116 hashes=hashes)
117 if not self.allow_unsafe:
118 yield comment('# {}'.format(req))
119 else:
120 yield req
121
122 def write(self, results, unsafe_requirements, reverse_dependencies,
123 primary_packages, markers, hashes):
124 with ExitStack() as stack:
125 f = None
126 if not self.dry_run:
127 f = stack.enter_context(AtomicSaver(self.dst_file))
128
129 for line in self._iter_lines(results, unsafe_requirements, reverse_dependencies,
130 primary_packages, markers, hashes):
131 log.info(line)
132 if f:
133 f.write(unstyle(line).encode('utf-8'))
134 f.write(os.linesep.encode('utf-8'))
135
136 def _format_requirement(self, ireq, reverse_dependencies, primary_packages, marker=None, hashes=None):
137 ireq_hashes = (hashes if hashes is not None else {}).get(ireq)
138
139 line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)
140
141 if not self.annotate or key_from_req(ireq.req) in primary_packages:
142 return line
143
144 # Annotate what packages this package is required by
145 required_by = reverse_dependencies.get(ireq.name.lower(), [])
146 if required_by:
147 annotation = ", ".join(sorted(required_by))
148 line = "{:24}{}{}".format(
149 line,
150 " \\\n " if ireq_hashes else " ",
151 comment("# via " + annotation))
152 return line
153
[end of piptools/writer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/piptools/writer.py b/piptools/writer.py
--- a/piptools/writer.py
+++ b/piptools/writer.py
@@ -1,8 +1,9 @@
import os
+import sys
from itertools import chain
from ._compat import ExitStack
-from .click import unstyle
+from .click import unstyle, get_os_args
from .io import AtomicSaver
from .logging import log
from .utils import comment, dedup, format_requirement, key_from_req, UNSAFE_PACKAGES
@@ -40,20 +41,9 @@
if custom_cmd:
yield comment('# {}'.format(custom_cmd))
else:
- params = []
- if not self.emit_index:
- params += ['--no-index']
- if not self.emit_trusted_host:
- params += ['--no-emit-trusted-host']
- if not self.annotate:
- params += ['--no-annotate']
- if self.generate_hashes:
- params += ["--generate-hashes"]
- if self.allow_unsafe:
- params += ["--allow-unsafe"]
- params += ['--output-file', self.dst_file]
- params += self.src_files
- yield comment('# pip-compile {}'.format(' '.join(params)))
+ prog = os.path.basename(sys.argv[0])
+ args = ' '.join(get_os_args())
+ yield comment('# {prog} {args}'.format(prog=prog, args=args))
yield comment('#')
def write_index_options(self):
| {"golden_diff": "diff --git a/piptools/writer.py b/piptools/writer.py\n--- a/piptools/writer.py\n+++ b/piptools/writer.py\n@@ -1,8 +1,9 @@\n import os\n+import sys\n from itertools import chain\n \n from ._compat import ExitStack\n-from .click import unstyle\n+from .click import unstyle, get_os_args\n from .io import AtomicSaver\n from .logging import log\n from .utils import comment, dedup, format_requirement, key_from_req, UNSAFE_PACKAGES\n@@ -40,20 +41,9 @@\n if custom_cmd:\n yield comment('# {}'.format(custom_cmd))\n else:\n- params = []\n- if not self.emit_index:\n- params += ['--no-index']\n- if not self.emit_trusted_host:\n- params += ['--no-emit-trusted-host']\n- if not self.annotate:\n- params += ['--no-annotate']\n- if self.generate_hashes:\n- params += [\"--generate-hashes\"]\n- if self.allow_unsafe:\n- params += [\"--allow-unsafe\"]\n- params += ['--output-file', self.dst_file]\n- params += self.src_files\n- yield comment('# pip-compile {}'.format(' '.join(params)))\n+ prog = os.path.basename(sys.argv[0])\n+ args = ' '.join(get_os_args())\n+ yield comment('# {prog} {args}'.format(prog=prog, args=args))\n yield comment('#')\n \n def write_index_options(self):\n", "issue": "Command in autogenerated requirements.txt can be shortened\nWhen I run `pip-compile`, my requirements.txt has\r\n\r\n```\r\n#\r\n# This file is autogenerated by pip-compile\r\n# To update, run:\r\n#\r\n# pip-compile --output-file requirements.txt requirements.in\r\n#\r\n```\r\n\r\nBut I think the `--output-file requirements.txt` can just be dropped (for brevity) when the written file itself is named `requirements.txt`.\r\n\r\nI'm recommending this because `pip-compile` already goes ahead and modifies `requirements.txt` when no options are specified. Thoughts?\n", "before_files": [{"content": "import os\nfrom itertools import chain\n\nfrom ._compat import ExitStack\nfrom .click import unstyle\nfrom .io import AtomicSaver\nfrom .logging import log\nfrom .utils import comment, dedup, format_requirement, key_from_req, UNSAFE_PACKAGES\n\n\nclass OutputWriter(object):\n def __init__(self, src_files, dst_file, dry_run, emit_header, emit_index,\n emit_trusted_host, annotate, generate_hashes,\n default_index_url, index_urls, trusted_hosts, format_control,\n allow_unsafe):\n self.src_files = src_files\n self.dst_file = dst_file\n self.dry_run = dry_run\n self.emit_header = emit_header\n self.emit_index = emit_index\n self.emit_trusted_host = emit_trusted_host\n self.annotate = annotate\n self.generate_hashes = generate_hashes\n self.default_index_url = default_index_url\n self.index_urls = index_urls\n self.trusted_hosts = trusted_hosts\n self.format_control = format_control\n self.allow_unsafe = allow_unsafe\n\n def _sort_key(self, ireq):\n return (not ireq.editable, str(ireq.req).lower())\n\n def write_header(self):\n if self.emit_header:\n yield comment('#')\n yield comment('# This file is autogenerated by pip-compile')\n yield comment('# To update, run:')\n yield comment('#')\n custom_cmd = os.environ.get('CUSTOM_COMPILE_COMMAND')\n if custom_cmd:\n yield comment('# {}'.format(custom_cmd))\n else:\n params = []\n if not self.emit_index:\n params += ['--no-index']\n if not self.emit_trusted_host:\n params += ['--no-emit-trusted-host']\n if not self.annotate:\n params += ['--no-annotate']\n if self.generate_hashes:\n params += [\"--generate-hashes\"]\n if self.allow_unsafe:\n params += [\"--allow-unsafe\"]\n params += ['--output-file', self.dst_file]\n params += self.src_files\n yield comment('# pip-compile 
{}'.format(' '.join(params)))\n yield comment('#')\n\n def write_index_options(self):\n if self.emit_index:\n for index, index_url in enumerate(dedup(self.index_urls)):\n if index_url.rstrip('/') == self.default_index_url:\n continue\n flag = '--index-url' if index == 0 else '--extra-index-url'\n yield '{} {}'.format(flag, index_url)\n\n def write_trusted_hosts(self):\n if self.emit_trusted_host:\n for trusted_host in dedup(self.trusted_hosts):\n yield '--trusted-host {}'.format(trusted_host)\n\n def write_format_controls(self):\n for nb in dedup(self.format_control.no_binary):\n yield '--no-binary {}'.format(nb)\n for ob in dedup(self.format_control.only_binary):\n yield '--only-binary {}'.format(ob)\n\n def write_flags(self):\n emitted = False\n for line in chain(self.write_index_options(),\n self.write_trusted_hosts(),\n self.write_format_controls()):\n emitted = True\n yield line\n if emitted:\n yield ''\n\n def _iter_lines(self, results, unsafe_requirements, reverse_dependencies,\n primary_packages, markers, hashes):\n for line in self.write_header():\n yield line\n for line in self.write_flags():\n yield line\n\n unsafe_requirements = {r for r in results if r.name in UNSAFE_PACKAGES} if not unsafe_requirements else unsafe_requirements # noqa\n packages = {r for r in results if r.name not in UNSAFE_PACKAGES}\n\n packages = sorted(packages, key=self._sort_key)\n\n for ireq in packages:\n line = self._format_requirement(\n ireq, reverse_dependencies, primary_packages,\n markers.get(key_from_req(ireq.req)), hashes=hashes)\n yield line\n\n if unsafe_requirements:\n unsafe_requirements = sorted(unsafe_requirements, key=self._sort_key)\n yield ''\n yield comment('# The following packages are considered to be unsafe in a requirements file:')\n\n for ireq in unsafe_requirements:\n req = self._format_requirement(ireq,\n reverse_dependencies,\n primary_packages,\n marker=markers.get(key_from_req(ireq.req)),\n hashes=hashes)\n if not self.allow_unsafe:\n yield comment('# {}'.format(req))\n else:\n yield req\n\n def write(self, results, unsafe_requirements, reverse_dependencies,\n primary_packages, markers, hashes):\n with ExitStack() as stack:\n f = None\n if not self.dry_run:\n f = stack.enter_context(AtomicSaver(self.dst_file))\n\n for line in self._iter_lines(results, unsafe_requirements, reverse_dependencies,\n primary_packages, markers, hashes):\n log.info(line)\n if f:\n f.write(unstyle(line).encode('utf-8'))\n f.write(os.linesep.encode('utf-8'))\n\n def _format_requirement(self, ireq, reverse_dependencies, primary_packages, marker=None, hashes=None):\n ireq_hashes = (hashes if hashes is not None else {}).get(ireq)\n\n line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)\n\n if not self.annotate or key_from_req(ireq.req) in primary_packages:\n return line\n\n # Annotate what packages this package is required by\n required_by = reverse_dependencies.get(ireq.name.lower(), [])\n if required_by:\n annotation = \", \".join(sorted(required_by))\n line = \"{:24}{}{}\".format(\n line,\n \" \\\\\\n \" if ireq_hashes else \" \",\n comment(\"# via \" + annotation))\n return line\n", "path": "piptools/writer.py"}]} | 2,237 | 341 |
gh_patches_debug_14738 | rasdani/github-patches | git_diff | crytic__slither-530 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Suicidal detector fails on external functions
If the [example](https://github.com/crytic/slither/wiki/Detector-Documentation#suicidal) function for the suicidal detector is changed from `public` to `external`, the issue is no longer flagged.
```
pragma solidity ^0.5.0;
contract Suicidal{
function kill() external{
selfdestruct(msg.sender);
}
}
```
`slither --version`: 0.6.12
`solc --version`: 0.5.15
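
As an editorial illustration (not slither code; the function below is invented for this note), the underlying point is that a selfdestruct reachable through any externally callable visibility should be flagged, not only `public`:

```python
# Standalone sketch: visibilities callable from outside a contract.
def is_externally_reachable(visibility: str) -> bool:
    return visibility in ("public", "external")


assert is_externally_reachable("public")
assert is_externally_reachable("external")
assert not is_externally_reachable("internal")
assert not is_externally_reachable("private")
```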
</issue>
<code>
[start of slither/detectors/functions/suicidal.py]
1 """
2 Module detecting suicidal contract
3
4 A suicidal contract is an unprotected function that calls selfdestruct
5 """
6
7 from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification
8
9
10 class Suicidal(AbstractDetector):
11 """
12 Unprotected function detector
13 """
14
15 ARGUMENT = 'suicidal'
16 HELP = 'Functions allowing anyone to destruct the contract'
17 IMPACT = DetectorClassification.HIGH
18 CONFIDENCE = DetectorClassification.HIGH
19
20 WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#suicidal'
21
22
23 WIKI_TITLE = 'Suicidal'
24 WIKI_DESCRIPTION = 'Unprotected call to a function executing `selfdestruct`/`suicide`.'
25 WIKI_EXPLOIT_SCENARIO = '''
26 ```solidity
27 contract Suicidal{
28 function kill() public{
29 selfdestruct(msg.sender);
30 }
31 }
32 ```
33 Bob calls `kill` and destructs the contract.'''
34
35 WIKI_RECOMMENDATION = 'Protect access to all sensitive functions.'
36
37 @staticmethod
38 def detect_suicidal_func(func):
39 """ Detect if the function is suicidal
40
41 Detect the public functions calling suicide/selfdestruct without protection
42 Returns:
43 (bool): True if the function is suicidal
44 """
45
46 if func.is_constructor:
47 return False
48
49 if func.visibility != 'public':
50 return False
51
52 calls = [c.name for c in func.internal_calls]
53 if not ('suicide(address)' in calls or 'selfdestruct(address)' in calls):
54 return False
55
56 if func.is_protected():
57 return False
58
59 return True
60
61 def detect_suicidal(self, contract):
62 ret = []
63 for f in [f for f in contract.functions if f.contract_declarer == contract]:
64 if self.detect_suicidal_func(f):
65 ret.append(f)
66 return ret
67
68 def _detect(self):
69 """ Detect the suicidal functions
70 """
71 results = []
72 for c in self.contracts:
73 functions = self.detect_suicidal(c)
74 for func in functions:
75
76 info = [func, " allows anyone to destruct the contract\n"]
77
78 res = self.generate_result(info)
79
80 results.append(res)
81
82 return results
83
[end of slither/detectors/functions/suicidal.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/slither/detectors/functions/suicidal.py b/slither/detectors/functions/suicidal.py
--- a/slither/detectors/functions/suicidal.py
+++ b/slither/detectors/functions/suicidal.py
@@ -46,7 +46,7 @@
if func.is_constructor:
return False
- if func.visibility != 'public':
+ if func.visibility not in ['public', 'external']:
return False
calls = [c.name for c in func.internal_calls]
@@ -60,7 +60,7 @@
def detect_suicidal(self, contract):
ret = []
- for f in [f for f in contract.functions if f.contract_declarer == contract]:
+ for f in contract.functions_declared:
if self.detect_suicidal_func(f):
ret.append(f)
return ret
| {"golden_diff": "diff --git a/slither/detectors/functions/suicidal.py b/slither/detectors/functions/suicidal.py\n--- a/slither/detectors/functions/suicidal.py\n+++ b/slither/detectors/functions/suicidal.py\n@@ -46,7 +46,7 @@\n if func.is_constructor:\n return False\n \n- if func.visibility != 'public':\n+ if func.visibility not in ['public', 'external']:\n return False\n \n calls = [c.name for c in func.internal_calls]\n@@ -60,7 +60,7 @@\n \n def detect_suicidal(self, contract):\n ret = []\n- for f in [f for f in contract.functions if f.contract_declarer == contract]:\n+ for f in contract.functions_declared:\n if self.detect_suicidal_func(f):\n ret.append(f)\n return ret\n", "issue": "Suicidal detector fails on external functions\nIf the [example](https://github.com/crytic/slither/wiki/Detector-Documentation#suicidal) function for the suicidal detector is changed from `public` to `external` the issue is no longer flagged.\r\n\r\n```\r\npragma solidity ^0.5.0;\r\ncontract Suicidal{\r\n function kill() external{\r\n selfdestruct(msg.sender);\r\n }\r\n}\r\n```\r\n\r\n`slither --version`: 0.6.12\r\n`solc --version`: 0.5.15\nSuicidal detector fails on external functions\nIf the [example](https://github.com/crytic/slither/wiki/Detector-Documentation#suicidal) function for the suicidal detector is changed from `public` to `external` the issue is no longer flagged.\r\n\r\n```\r\npragma solidity ^0.5.0;\r\ncontract Suicidal{\r\n function kill() external{\r\n selfdestruct(msg.sender);\r\n }\r\n}\r\n```\r\n\r\n`slither --version`: 0.6.12\r\n`solc --version`: 0.5.15\n", "before_files": [{"content": "\"\"\"\nModule detecting suicidal contract\n\nA suicidal contract is an unprotected function that calls selfdestruct\n\"\"\"\n\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\n\n\nclass Suicidal(AbstractDetector):\n \"\"\"\n Unprotected function detector\n \"\"\"\n\n ARGUMENT = 'suicidal'\n HELP = 'Functions allowing anyone to destruct the contract'\n IMPACT = DetectorClassification.HIGH\n CONFIDENCE = DetectorClassification.HIGH\n\n WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#suicidal'\n\n\n WIKI_TITLE = 'Suicidal'\n WIKI_DESCRIPTION = 'Unprotected call to a function executing `selfdestruct`/`suicide`.'\n WIKI_EXPLOIT_SCENARIO = '''\n```solidity\ncontract Suicidal{\n function kill() public{\n selfdestruct(msg.sender);\n }\n}\n```\nBob calls `kill` and destructs the contract.'''\n\n WIKI_RECOMMENDATION = 'Protect access to all sensitive functions.'\n\n @staticmethod\n def detect_suicidal_func(func):\n \"\"\" Detect if the function is suicidal\n\n Detect the public functions calling suicide/selfdestruct without protection\n Returns:\n (bool): True if the function is suicidal\n \"\"\"\n\n if func.is_constructor:\n return False\n\n if func.visibility != 'public':\n return False\n\n calls = [c.name for c in func.internal_calls]\n if not ('suicide(address)' in calls or 'selfdestruct(address)' in calls):\n return False\n\n if func.is_protected():\n return False\n\n return True\n\n def detect_suicidal(self, contract):\n ret = []\n for f in [f for f in contract.functions if f.contract_declarer == contract]:\n if self.detect_suicidal_func(f):\n ret.append(f)\n return ret\n\n def _detect(self):\n \"\"\" Detect the suicidal functions\n \"\"\"\n results = []\n for c in self.contracts:\n functions = self.detect_suicidal(c)\n for func in functions:\n\n info = [func, \" allows anyone to destruct the contract\\n\"]\n\n res = 
self.generate_result(info)\n\n results.append(res)\n\n return results\n", "path": "slither/detectors/functions/suicidal.py"}]} | 1,426 | 194 |
gh_patches_debug_18153 | rasdani/github-patches | git_diff | openmc-dev__openmc-2906 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement contains for BoundingBox
## Description
Implement `__contains__` for `BoundingBox` containing either a point or another `BoundingBox`. This means that users could then write:
`if point in box:` or `if little_box in big_box`.
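
For illustration, a rough standalone sketch of the requested behaviour (not the project's implementation; the strict-inequality edge handling and the corner-based test for boxes are my assumptions), written against the `lower_left`/`upper_right` attributes shown in the code below:

```python
# Sketch only: mirrors what a BoundingBox.__contains__ supporting both points
# and boxes could look like, written as a plain function so it runs on its own.
import numpy as np


def box_contains(box, other):
    # Another box is "in" this box when both of its corners are inside it.
    if hasattr(other, "lower_left") and hasattr(other, "upper_right"):
        return box_contains(box, other.lower_left) and box_contains(box, other.upper_right)
    point = np.asarray(other, dtype=float)
    return bool(np.all(point > box.lower_left) and np.all(point < box.upper_right))
```

Whether boundary points count as inside (strict versus inclusive comparisons) is a design choice the issue leaves open.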
## Alternatives
It is possible for users to accomplish this currently, but it requires some clever coding to avoid becoming difficult to read:
``` python
def in_box(point, box):
for min_p, p, max_p in zip(box.lower_left, point, box.upper_right):
        if p < min_p or p > max_p:
return False
return True
```
## Compatibility
This would be an enhancement, and would not alter the behavior of the existing API.
There is a risk though that users will misinterpret the results. A point in the bounding box of a volume *may* be in the volume, but not necessarily. A user could misuse this information and create problems for themselves. Also a small volume's bounding box can be completely contained in another volume's bounding box, and be completely outside that other volume.
</issue>
<code>
[start of openmc/bounding_box.py]
1 from __future__ import annotations
2 from typing import Iterable
3
4 import numpy as np
5
6 from .checkvalue import check_length
7
8
9 class BoundingBox:
10 """Axis-aligned bounding box.
11
12 .. versionadded:: 0.14.0
13
14 Parameters
15 ----------
16 lower_left : iterable of float
17 The x, y, z coordinates of the lower left corner of the bounding box in [cm]
18 upper_right : iterable of float
19 The x, y, z coordinates of the upper right corner of the bounding box in [cm]
20
21 Attributes
22 ----------
23 center : numpy.ndarray
24 x, y, z coordinates of the center of the bounding box in [cm]
25 lower_left : numpy.ndarray
26 The x, y, z coordinates of the lower left corner of the bounding box in [cm]
27 upper_right : numpy.ndarray
28 The x, y, z coordinates of the upper right corner of the bounding box in [cm]
29 volume : float
30 The volume of the bounding box in [cm^3]
31 extent : dict
32 A dictionary of basis as keys and the extent (left, right, bottom, top)
33 as values. Intended use in Matplotlib plots when setting extent
34 width : iterable of float
35 The width of the x, y and z axis in [cm]
36 """
37
38 def __init__(self, lower_left: Iterable[float], upper_right: Iterable[float]):
39 check_length("lower_left", lower_left, 3, 3)
40 check_length("upper_right", upper_right, 3, 3)
41 self._bounds = np.asarray([lower_left, upper_right], dtype=float)
42
43 def __repr__(self) -> str:
44 return "BoundingBox(lower_left={}, upper_right={})".format(
45 tuple(self.lower_left), tuple(self.upper_right))
46
47 def __getitem__(self, key) -> np.ndarray:
48 return self._bounds[key]
49
50 def __len__(self):
51 return 2
52
53 def __setitem__(self, key, val):
54 self._bounds[key] = val
55
56 def __iand__(self, other: BoundingBox) -> BoundingBox:
57 """Updates the box be the intersection of itself and another box
58
59 Parameters
60 ----------
61 other : BoundingBox
62 The box used to resize this box
63
64 Returns
65 -------
66 An updated bounding box
67 """
68 self.lower_left = np.maximum(self.lower_left, other.lower_left)
69 self.upper_right = np.minimum(self.upper_right, other.upper_right)
70 return self
71
72 def __and__(self, other: BoundingBox) -> BoundingBox:
73 new = BoundingBox(*self)
74 new &= other
75 return new
76
77 def __ior__(self, other: BoundingBox) -> BoundingBox:
78 """Updates the box be the union of itself and another box
79
80 Parameters
81 ----------
82 other : BoundingBox
83 The box used to resize this box
84
85 Returns
86 -------
87 An updated bounding box
88 """
89 self.lower_left = np.minimum(self.lower_left, other.lower_left)
90 self.upper_right = np.maximum(self.upper_right, other.upper_right)
91 return self
92
93 def __or__(self, other: BoundingBox) -> BoundingBox:
94 new = BoundingBox(*self)
95 new |= other
96 return new
97
98 def __contains__(self, point):
99 """Check whether or not a point is in the bounding box"""
100 return all(point > self.lower_left) and all(point < self.upper_right)
101
102 @property
103 def center(self) -> np.ndarray:
104 return (self[0] + self[1]) / 2
105
106 @property
107 def lower_left(self) -> np.ndarray:
108 return self[0]
109
110 @lower_left.setter
111 def lower_left(self, llc):
112 check_length('lower_left', llc, 3, 3)
113 self[0] = llc
114
115 @property
116 def upper_right(self) -> np.ndarray:
117 return self[1]
118
119 @upper_right.setter
120 def upper_right(self, urc):
121 check_length('upper_right', urc, 3, 3)
122 self[1] = urc
123
124 @property
125 def volume(self) -> float:
126 return np.abs(np.prod(self[1] - self[0]))
127
128 @property
129 def extent(self):
130 return {
131 "xy": (
132 self.lower_left[0],
133 self.upper_right[0],
134 self.lower_left[1],
135 self.upper_right[1],
136 ),
137 "xz": (
138 self.lower_left[0],
139 self.upper_right[0],
140 self.lower_left[2],
141 self.upper_right[2],
142 ),
143 "yz": (
144 self.lower_left[1],
145 self.upper_right[1],
146 self.lower_left[2],
147 self.upper_right[2],
148 ),
149 }
150
151 @property
152 def width(self):
153 return self.upper_right - self.lower_left
154
155 def expand(self, padding_distance: float, inplace: bool = False) -> BoundingBox:
156 """Returns an expanded bounding box
157
158 Parameters
159 ----------
160 padding_distance : float
161 The distance to enlarge the bounding box by
162 inplace : bool
163 Whether or not to return a new BoundingBox instance or to modify the
164 current BoundingBox object.
165
166 Returns
167 -------
168 An expanded bounding box
169 """
170 if inplace:
171 self[0] -= padding_distance
172 self[1] += padding_distance
173 return self
174 else:
175 return BoundingBox(self[0] - padding_distance, self[1] + padding_distance)
176
177 @classmethod
178 def infinite(cls) -> BoundingBox:
179 """Create an infinite box. Useful as a starting point for determining
180 geometry bounds.
181
182 Returns
183 -------
184 An infinitely large bounding box.
185 """
186 infs = np.full((3,), np.inf)
187 return cls(-infs, infs)
188
[end of openmc/bounding_box.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/openmc/bounding_box.py b/openmc/bounding_box.py
--- a/openmc/bounding_box.py
+++ b/openmc/bounding_box.py
@@ -95,9 +95,23 @@
new |= other
return new
- def __contains__(self, point):
- """Check whether or not a point is in the bounding box"""
- return all(point > self.lower_left) and all(point < self.upper_right)
+ def __contains__(self, other):
+ """Check whether or not a point or another bounding box is in the bounding box.
+
+ For another bounding box to be in the parent it must lie fully inside of it.
+ """
+ # test for a single point
+ if isinstance(other, (tuple, list, np.ndarray)):
+ point = other
+ check_length("Point", point, 3, 3)
+ return all(point > self.lower_left) and all(point < self.upper_right)
+ elif isinstance(other, BoundingBox):
+ return all([p in self for p in [other.lower_left, other.upper_right]])
+ else:
+ raise TypeError(
+ f"Unable to determine if {other} is in the bounding box."
+ f" Expected a tuple or a bounding box, but {type(other)} given"
+ )
@property
def center(self) -> np.ndarray:
| {"golden_diff": "diff --git a/openmc/bounding_box.py b/openmc/bounding_box.py\n--- a/openmc/bounding_box.py\n+++ b/openmc/bounding_box.py\n@@ -95,9 +95,23 @@\n new |= other\n return new\n \n- def __contains__(self, point):\n- \"\"\"Check whether or not a point is in the bounding box\"\"\"\n- return all(point > self.lower_left) and all(point < self.upper_right)\n+ def __contains__(self, other):\n+ \"\"\"Check whether or not a point or another bounding box is in the bounding box.\n+\n+ For another bounding box to be in the parent it must lie fully inside of it.\n+ \"\"\"\n+ # test for a single point\n+ if isinstance(other, (tuple, list, np.ndarray)):\n+ point = other\n+ check_length(\"Point\", point, 3, 3)\n+ return all(point > self.lower_left) and all(point < self.upper_right)\n+ elif isinstance(other, BoundingBox):\n+ return all([p in self for p in [other.lower_left, other.upper_right]])\n+ else:\n+ raise TypeError(\n+ f\"Unable to determine if {other} is in the bounding box.\"\n+ f\" Expected a tuple or a bounding box, but {type(other)} given\"\n+ )\n \n @property\n def center(self) -> np.ndarray:\n", "issue": "Implement contains for BoundingBox\n## Description\r\nImplement `__contains__` for `BoundingBox` containing either a point, or another `BoundingBox`. This means that users could then write:\r\n\r\n`if point in box:` or `if little_box in big_box`.\r\n\r\n\r\n## Alternatives\r\nIt is possible for users to accomplish this currently but requires some clever coding to avoid becoming difficult to read:\r\n\r\n``` python\r\ndef in_box(point, box):\r\n for min_p, p, max_p in zip(box.lower_left, point, box.upper_right):\r\n if p < min_p or > max_p:\r\n return False \r\n return True\r\n```\r\n\r\n\r\n## Compatibility\r\nThis would be an enhancement, and would not alter the behavior of the existing API. \r\n\r\nThere is a risk though that users will misinterpret the results. A point in the bounding box of a volume *may* be in the volume, but not necessarily. A user could misuse this information and create problems for themselves. Also a small volume's bounding box can be completely contained in another volume's bounding box, and be completely outside that other volume. \n", "before_files": [{"content": "from __future__ import annotations\nfrom typing import Iterable\n\nimport numpy as np\n\nfrom .checkvalue import check_length\n\n\nclass BoundingBox:\n \"\"\"Axis-aligned bounding box.\n\n .. versionadded:: 0.14.0\n\n Parameters\n ----------\n lower_left : iterable of float\n The x, y, z coordinates of the lower left corner of the bounding box in [cm]\n upper_right : iterable of float\n The x, y, z coordinates of the upper right corner of the bounding box in [cm]\n\n Attributes\n ----------\n center : numpy.ndarray\n x, y, z coordinates of the center of the bounding box in [cm]\n lower_left : numpy.ndarray\n The x, y, z coordinates of the lower left corner of the bounding box in [cm]\n upper_right : numpy.ndarray\n The x, y, z coordinates of the upper right corner of the bounding box in [cm]\n volume : float\n The volume of the bounding box in [cm^3]\n extent : dict\n A dictionary of basis as keys and the extent (left, right, bottom, top)\n as values. 
Intended use in Matplotlib plots when setting extent\n width : iterable of float\n The width of the x, y and z axis in [cm]\n \"\"\"\n\n def __init__(self, lower_left: Iterable[float], upper_right: Iterable[float]):\n check_length(\"lower_left\", lower_left, 3, 3)\n check_length(\"upper_right\", upper_right, 3, 3)\n self._bounds = np.asarray([lower_left, upper_right], dtype=float)\n\n def __repr__(self) -> str:\n return \"BoundingBox(lower_left={}, upper_right={})\".format(\n tuple(self.lower_left), tuple(self.upper_right))\n\n def __getitem__(self, key) -> np.ndarray:\n return self._bounds[key]\n\n def __len__(self):\n return 2\n\n def __setitem__(self, key, val):\n self._bounds[key] = val\n\n def __iand__(self, other: BoundingBox) -> BoundingBox:\n \"\"\"Updates the box be the intersection of itself and another box\n\n Parameters\n ----------\n other : BoundingBox\n The box used to resize this box\n\n Returns\n -------\n An updated bounding box\n \"\"\"\n self.lower_left = np.maximum(self.lower_left, other.lower_left)\n self.upper_right = np.minimum(self.upper_right, other.upper_right)\n return self\n\n def __and__(self, other: BoundingBox) -> BoundingBox:\n new = BoundingBox(*self)\n new &= other\n return new\n\n def __ior__(self, other: BoundingBox) -> BoundingBox:\n \"\"\"Updates the box be the union of itself and another box\n\n Parameters\n ----------\n other : BoundingBox\n The box used to resize this box\n\n Returns\n -------\n An updated bounding box\n \"\"\"\n self.lower_left = np.minimum(self.lower_left, other.lower_left)\n self.upper_right = np.maximum(self.upper_right, other.upper_right)\n return self\n\n def __or__(self, other: BoundingBox) -> BoundingBox:\n new = BoundingBox(*self)\n new |= other\n return new\n\n def __contains__(self, point):\n \"\"\"Check whether or not a point is in the bounding box\"\"\"\n return all(point > self.lower_left) and all(point < self.upper_right)\n\n @property\n def center(self) -> np.ndarray:\n return (self[0] + self[1]) / 2\n\n @property\n def lower_left(self) -> np.ndarray:\n return self[0]\n\n @lower_left.setter\n def lower_left(self, llc):\n check_length('lower_left', llc, 3, 3)\n self[0] = llc\n\n @property\n def upper_right(self) -> np.ndarray:\n return self[1]\n\n @upper_right.setter\n def upper_right(self, urc):\n check_length('upper_right', urc, 3, 3)\n self[1] = urc\n\n @property\n def volume(self) -> float:\n return np.abs(np.prod(self[1] - self[0]))\n\n @property\n def extent(self):\n return {\n \"xy\": (\n self.lower_left[0],\n self.upper_right[0],\n self.lower_left[1],\n self.upper_right[1],\n ),\n \"xz\": (\n self.lower_left[0],\n self.upper_right[0],\n self.lower_left[2],\n self.upper_right[2],\n ),\n \"yz\": (\n self.lower_left[1],\n self.upper_right[1],\n self.lower_left[2],\n self.upper_right[2],\n ),\n }\n\n @property\n def width(self):\n return self.upper_right - self.lower_left\n\n def expand(self, padding_distance: float, inplace: bool = False) -> BoundingBox:\n \"\"\"Returns an expanded bounding box\n\n Parameters\n ----------\n padding_distance : float\n The distance to enlarge the bounding box by\n inplace : bool\n Whether or not to return a new BoundingBox instance or to modify the\n current BoundingBox object.\n\n Returns\n -------\n An expanded bounding box\n \"\"\"\n if inplace:\n self[0] -= padding_distance\n self[1] += padding_distance\n return self\n else:\n return BoundingBox(self[0] - padding_distance, self[1] + padding_distance)\n\n @classmethod\n def infinite(cls) -> BoundingBox:\n \"\"\"Create an 
infinite box. Useful as a starting point for determining\n geometry bounds.\n\n Returns\n -------\n An infinitely large bounding box.\n \"\"\"\n infs = np.full((3,), np.inf)\n return cls(-infs, infs)\n", "path": "openmc/bounding_box.py"}]} | 2,506 | 306 |
gh_patches_debug_945 | rasdani/github-patches | git_diff | magenta__magenta-1793 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Retraining Onsets and Frames Drums model with E-GMD dataset
Hello,
I am trying to retrain the OaF model with the E-GMD dataset for drum transcription. I first downloaded the E-GMD dataset, which has its corresponding CSV file and a directory for each drummer, with subdirectories for the sessions.
I am trying to do the first step following the code in ```onsets_frames_transcription_create_tfrecords```, which I found to be:
```
onsets_frames_transcription_create_tfrecords \
--csv=".../e-gmd-v1.0.0/e-gmd-v1.0.0.csv" \
--output_directory=".../e-gmd-v1.0.0" \
--num_shards="0" \
--wav_dir=".../e-gmd-v1.0.0" \
--midi_dir=".../e-gmd-v1.0.0" \
--expected_splits="train,validation,test"
```
But I got the following error, which I don't know where it comes from:
```
2020-08-05 17:23:45.289023: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2020-08-05 17:23:45.289348: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
WARNING:tensorflow:From c:\users\carlos\anaconda3\lib\site-packages\tensorflow\python\compat\v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
WARNING:tensorflow:From c:\users\carlos\anaconda3\lib\site-packages\tensorflow\python\compat\v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
Traceback (most recent call last):
File "c:\users\carlos\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\carlos\anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Carlos\Anaconda3\Scripts\onsets_frames_transcription_create_tfrecords.exe\__main__.py", line 4, in <module>
ImportError: cannot import name 'console_entry_point'
```
I don't know if I have to change the paths of the wav and MIDI files so that the wav files are in one directory and the MIDI files in another directory, or whether the error comes from installation issues, versions, etc.
I am using Windows 10.
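
As an editorial aside, a hedged workaround sketch (not from the report): if the installed console script cannot import its entry point, running the module directly with `python -m` may sidestep the broken wrapper. The module path is taken from the source below; the argument values are the report's own placeholders, left unexpanded:

```python
# Hypothetical workaround: bypass the generated console script and invoke the
# module itself. The "..." path prefixes are placeholders from the report.
import subprocess
import sys

cmd = [
    sys.executable, "-m",
    "magenta.models.onsets_frames_transcription.onsets_frames_transcription_create_tfrecords",
    "--csv=.../e-gmd-v1.0.0/e-gmd-v1.0.0.csv",
    "--output_directory=.../e-gmd-v1.0.0",
    "--num_shards=0",
    "--wav_dir=.../e-gmd-v1.0.0",
    "--midi_dir=.../e-gmd-v1.0.0",
    "--expected_splits=train,validation,test",
]
subprocess.run(cmd, check=True)
```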
</issue>
<code>
[start of magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py]
1 # Copyright 2020 The Magenta Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # Lint as: python3
16 r"""Beam job for creating tfrecord files from datasets.
17
18 Expects a CSV with the following fields: audio_filename, midi_filename, split
19
20 Usage:
21 onsets_frames_transcription_create_tfrecords \
22 --csv="/path/to/dataset.csv" \
23 --output_directory="/path/to/output" \
24 --num_shards="0" \
25 --wav_dir="/path/to/dataset/audio" \
26 --midi_dir="/path/to/dataset/midi" \
27 --expected_splits="train,validation,test"
28
29 """
30
31 import collections
32 import copy
33 import csv
34 import os
35
36 from absl import app
37 from absl import flags
38 from absl import logging
39
40 import apache_beam as beam
41 from apache_beam.metrics import Metrics
42 from magenta.models.onsets_frames_transcription import audio_label_data_utils
43 from note_seq import midi_io
44 from note_seq.protobuf import music_pb2
45 import tensorflow.compat.v1 as tf
46
47 tf.disable_v2_behavior()
48
49 FLAGS = flags.FLAGS
50
51 flags.DEFINE_string('csv', None, 'Path to dataset CSV')
52 flags.DEFINE_string('output_directory', None, 'Path to output_directory')
53 flags.DEFINE_string('wav_dir', None, 'Directory for wav files.')
54 flags.DEFINE_string('midi_dir', None, 'Directory for midi files.')
55 flags.DEFINE_integer('num_shards', 0, 'number of output shards')
56 flags.DEFINE_string('expected_splits', 'train,validation,test',
57 'Comma separated list of expected splits.')
58 flags.DEFINE_boolean(
59 'add_wav_glob', False,
60 'If true, will add * to end of wav paths and use all matching files.')
61 flags.DEFINE_list(
62 'pipeline_options', '--runner=DirectRunner',
63 'A comma-separated list of command line arguments to be used as options '
64 'for the Beam Pipeline.')
65
66
67 class CreateExampleDoFn(beam.DoFn):
68 """Splits wav and midi files for the dataset."""
69
70 def __init__(self, wav_dir, midi_dir, add_wav_glob,
71 *unused_args, **unused_kwargs):
72 self._wav_dir = wav_dir
73 self._midi_dir = midi_dir
74 self._add_wav_glob = add_wav_glob
75 super(CreateExampleDoFn, self).__init__(*unused_args, **unused_kwargs)
76
77 def process(self, paths):
78 midi_path, wav_path_base = paths
79
80 if self._add_wav_glob:
81 wav_paths = tf.io.gfile.glob(wav_path_base + '*')
82 else:
83 wav_paths = [wav_path_base]
84
85 if midi_path:
86 base_ns = midi_io.midi_file_to_note_sequence(midi_path)
87 base_ns.filename = midi_path
88 else:
89 base_ns = music_pb2.NoteSequence()
90
91 for wav_path in wav_paths:
92 logging.info('Creating Example %s:%s', midi_path, wav_path)
93 wav_data = tf.io.gfile.GFile(wav_path, 'rb').read()
94
95 ns = copy.deepcopy(base_ns)
96
97 # Use base names.
98 ns.id = '%s:%s' % (wav_path.replace(self._wav_dir, ''),
99 midi_path.replace(self._midi_dir, ''))
100
101 Metrics.counter('create_example', 'read_midi_wav').inc()
102
103 example = audio_label_data_utils.create_example(ns.id, ns, wav_data)
104
105 Metrics.counter('create_example', 'created_example').inc()
106 yield example
107
108
109 def main(argv):
110 del argv
111
112
113 flags.mark_flags_as_required(['csv', 'output_directory'])
114
115 tf.io.gfile.makedirs(FLAGS.output_directory)
116
117 with tf.io.gfile.GFile(FLAGS.csv) as f:
118 reader = csv.DictReader(f)
119
120 splits = collections.defaultdict(list)
121 for row in reader:
122 splits[row['split']].append(
123 (os.path.join(FLAGS.midi_dir, row['midi_filename']),
124 os.path.join(FLAGS.wav_dir, row['audio_filename'])))
125
126 if sorted(splits.keys()) != sorted(FLAGS.expected_splits.split(',')):
127 raise ValueError('Got unexpected set of splits: %s' % list(splits.keys()))
128
129 pipeline_options = beam.options.pipeline_options.PipelineOptions(
130 FLAGS.pipeline_options)
131 with beam.Pipeline(options=pipeline_options) as p:
132 for split in splits:
133 split_p = p | 'prepare_split_%s' % split >> beam.Create(splits[split])
134 split_p |= 'create_examples_%s' % split >> beam.ParDo(
135 CreateExampleDoFn(FLAGS.wav_dir, FLAGS.midi_dir, FLAGS.add_wav_glob))
136 split_p |= 'write_%s' % split >> beam.io.WriteToTFRecord(
137 os.path.join(FLAGS.output_directory, '%s.tfrecord' % split),
138 coder=beam.coders.ProtoCoder(tf.train.Example),
139 num_shards=FLAGS.num_shards)
140
141
142 if __name__ == '__main__':
143 app.run(main)
144
[end of magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py b/magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py
--- a/magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py
+++ b/magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py
@@ -139,5 +139,10 @@
num_shards=FLAGS.num_shards)
-if __name__ == '__main__':
+def console_entry_point():
+ tf.disable_v2_behavior()
app.run(main)
+
+
+if __name__ == '__main__':
+ console_entry_point()
| {"golden_diff": "diff --git a/magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py b/magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py\n--- a/magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py\n+++ b/magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py\n@@ -139,5 +139,10 @@\n num_shards=FLAGS.num_shards)\n \n \n-if __name__ == '__main__':\n+def console_entry_point():\n+ tf.disable_v2_behavior()\n app.run(main)\n+\n+\n+if __name__ == '__main__':\n+ console_entry_point()\n", "issue": "Retraining Onsets and Frames Drums model with E-GMD dataset\nHello,\r\n\r\nI am trying to retrain OaF model with the E-GMD dataset for drums transcription. I first downloaded the E-GMD dataset which has its corresponding csv file and a directoy for each drummer and subdirectories with the sessions.\r\n\r\nI am trying to do the first step following the code in ```onsets_frames_transcription_create_tfrecords``` which I found that it is:\r\n\r\n```\r\nonsets_frames_transcription_create_tfrecords \\\r\n --csv=\".../e-gmd-v1.0.0/e-gmd-v1.0.0.csv\" \\\r\n --output_directory=\".../e-gmd-v1.0.0\" \\\r\n --num_shards=\"0\" \\\r\n --wav_dir=\".../e-gmd-v1.0.0\" \\\r\n --midi_dir=\".../e-gmd-v1.0.0\" \\\r\n --expected_splits=\"train,validation,test\"\r\n```\r\n\r\nBut i got the following error which I don't know where does it come from:\r\n\r\n```\r\n2020-08-05 17:23:45.289023: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found\r\n2020-08-05 17:23:45.289348: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\nWARNING:tensorflow:From c:\\users\\carlos\\anaconda3\\lib\\site-packages\\tensorflow\\python\\compat\\v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nnon-resource variables are not supported in the long term\r\nWARNING:tensorflow:From c:\\users\\carlos\\anaconda3\\lib\\site-packages\\tensorflow\\python\\compat\\v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nnon-resource variables are not supported in the long term\r\nTraceback (most recent call last):\r\n File \"c:\\users\\carlos\\anaconda3\\lib\\runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"c:\\users\\carlos\\anaconda3\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\Carlos\\Anaconda3\\Scripts\\onsets_frames_transcription_create_tfrecords.exe\\__main__.py\", line 4, in <module>\r\nImportError: cannot import name 'console_entry_point'\r\n```\r\nI don't know if I have to change the paths of the wav and MIDI files in order to have the wav files in a directory and the MIDI files in other directory or the error comes from installation issues, versions, etc.\r\n\r\nI am using Winows 10.\n", "before_files": [{"content": "# Copyright 2020 The Magenta Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by 
applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\nr\"\"\"Beam job for creating tfrecord files from datasets.\n\nExpects a CSV with the following fields: audio_filename, midi_filename, split\n\nUsage:\nonsets_frames_transcription_create_tfrecords \\\n --csv=\"/path/to/dataset.csv\" \\\n --output_directory=\"/path/to/output\" \\\n --num_shards=\"0\" \\\n --wav_dir=\"/path/to/dataset/audio\" \\\n --midi_dir=\"/path/to/dataset/midi\" \\\n --expected_splits=\"train,validation,test\"\n\n\"\"\"\n\nimport collections\nimport copy\nimport csv\nimport os\n\nfrom absl import app\nfrom absl import flags\nfrom absl import logging\n\nimport apache_beam as beam\nfrom apache_beam.metrics import Metrics\nfrom magenta.models.onsets_frames_transcription import audio_label_data_utils\nfrom note_seq import midi_io\nfrom note_seq.protobuf import music_pb2\nimport tensorflow.compat.v1 as tf\n\ntf.disable_v2_behavior()\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string('csv', None, 'Path to dataset CSV')\nflags.DEFINE_string('output_directory', None, 'Path to output_directory')\nflags.DEFINE_string('wav_dir', None, 'Directory for wav files.')\nflags.DEFINE_string('midi_dir', None, 'Directory for midi files.')\nflags.DEFINE_integer('num_shards', 0, 'number of output shards')\nflags.DEFINE_string('expected_splits', 'train,validation,test',\n 'Comma separated list of expected splits.')\nflags.DEFINE_boolean(\n 'add_wav_glob', False,\n 'If true, will add * to end of wav paths and use all matching files.')\nflags.DEFINE_list(\n 'pipeline_options', '--runner=DirectRunner',\n 'A comma-separated list of command line arguments to be used as options '\n 'for the Beam Pipeline.')\n\n\nclass CreateExampleDoFn(beam.DoFn):\n \"\"\"Splits wav and midi files for the dataset.\"\"\"\n\n def __init__(self, wav_dir, midi_dir, add_wav_glob,\n *unused_args, **unused_kwargs):\n self._wav_dir = wav_dir\n self._midi_dir = midi_dir\n self._add_wav_glob = add_wav_glob\n super(CreateExampleDoFn, self).__init__(*unused_args, **unused_kwargs)\n\n def process(self, paths):\n midi_path, wav_path_base = paths\n\n if self._add_wav_glob:\n wav_paths = tf.io.gfile.glob(wav_path_base + '*')\n else:\n wav_paths = [wav_path_base]\n\n if midi_path:\n base_ns = midi_io.midi_file_to_note_sequence(midi_path)\n base_ns.filename = midi_path\n else:\n base_ns = music_pb2.NoteSequence()\n\n for wav_path in wav_paths:\n logging.info('Creating Example %s:%s', midi_path, wav_path)\n wav_data = tf.io.gfile.GFile(wav_path, 'rb').read()\n\n ns = copy.deepcopy(base_ns)\n\n # Use base names.\n ns.id = '%s:%s' % (wav_path.replace(self._wav_dir, ''),\n midi_path.replace(self._midi_dir, ''))\n\n Metrics.counter('create_example', 'read_midi_wav').inc()\n\n example = audio_label_data_utils.create_example(ns.id, ns, wav_data)\n\n Metrics.counter('create_example', 'created_example').inc()\n yield example\n\n\ndef main(argv):\n del argv\n\n\n flags.mark_flags_as_required(['csv', 'output_directory'])\n\n tf.io.gfile.makedirs(FLAGS.output_directory)\n\n with tf.io.gfile.GFile(FLAGS.csv) as f:\n reader = csv.DictReader(f)\n\n splits = collections.defaultdict(list)\n for row in reader:\n splits[row['split']].append(\n (os.path.join(FLAGS.midi_dir, row['midi_filename']),\n os.path.join(FLAGS.wav_dir, 
row['audio_filename'])))\n\n if sorted(splits.keys()) != sorted(FLAGS.expected_splits.split(',')):\n raise ValueError('Got unexpected set of splits: %s' % list(splits.keys()))\n\n pipeline_options = beam.options.pipeline_options.PipelineOptions(\n FLAGS.pipeline_options)\n with beam.Pipeline(options=pipeline_options) as p:\n for split in splits:\n split_p = p | 'prepare_split_%s' % split >> beam.Create(splits[split])\n split_p |= 'create_examples_%s' % split >> beam.ParDo(\n CreateExampleDoFn(FLAGS.wav_dir, FLAGS.midi_dir, FLAGS.add_wav_glob))\n split_p |= 'write_%s' % split >> beam.io.WriteToTFRecord(\n os.path.join(FLAGS.output_directory, '%s.tfrecord' % split),\n coder=beam.coders.ProtoCoder(tf.train.Example),\n num_shards=FLAGS.num_shards)\n\n\nif __name__ == '__main__':\n app.run(main)\n", "path": "magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py"}]} | 2,757 | 150 |
gh_patches_debug_57650 | rasdani/github-patches | git_diff | facebookresearch__ParlAI-1956 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Quickstart AttributeError: 'HogwildWorld' object has no attribute 'acts'
**Bug description**
When going through the ParlAI [quickstart](https://parl.ai/docs/tutorial_quick.html#install), I got the following error:
``` python
Traceback (most recent call last):
File "examples/interactive.py", line 18, in <module>
interactive(opt, print_parser=parser)
File "/root/ParlAI/parlai/scripts/interactive.py", line 68, in interactive
agent = create_agent(opt, requireModelExists=True)
File "/root/ParlAI/parlai/core/agents.py", line 683, in create_agent
model = load_agent_module(opt)
File "/root/ParlAI/parlai/core/agents.py", line 548, in load_agent_module
return model_class(new_opt)
File "/root/ParlAI/parlai/agents/memnn/memnn.py", line 86, in __init__
super().__init__(opt, shared)
File "/root/ParlAI/parlai/core/torch_ranker_agent.py", line 135, in __init__
super().__init__(opt, shared)
File "/root/ParlAI/parlai/core/torch_agent.py", line 737, in __init__
self.set_interactive_mode(opt['interactive_mode'], shared)
File "/root/ParlAI/parlai/core/torch_ranker_agent.py", line 206, in set_interactive_mode
path = self.get_task_candidates_path()
File "/root/ParlAI/parlai/core/torch_ranker_agent.py", line 230, in get_task_candidates_path
build_cands(opt)
File "/root/ParlAI/parlai/scripts/build_candidates.py", line 47, in build_cands
acts = world.get_acts()[0]
File "/root/ParlAI/parlai/core/worlds.py", line 162, in get_acts
return self.acts
AttributeError: 'HogwildWorld' object has no attribute 'acts'
```
**While running**
```python
python examples/interactive.py -mf /tmp/babi_memnn -ecands vocab
```
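
(Editor's note: the traceback shows `build_cands` creating a task world while the saved options still request multiple worker threads, which produces a `HogwildWorld` that has no `acts` attribute. Below is a minimal sketch of the kind of guard that avoids this — forcing single-process mode before building candidates. The option key `numthreads` is taken from ParlAI's option dictionary; treat the rest as an assumption, not a verified fix.)

```python
# Hypothetical guard in front of candidate building. Assumes `opt` is ParlAI's
# option dict and that 'numthreads' > 1 is what selects the HogwildWorld path.
def build_cands_safely(opt, build_cands):
    if opt.get('numthreads', 1) > 1:
        # HogwildWorld has no `acts`, so fall back to a single process.
        opt = dict(opt, numthreads=1)
    return build_cands(opt)


if __name__ == '__main__':
    # Tiny smoke test with a stub in place of the real build_cands.
    captured = {}
    build_cands_safely({'numthreads': 4}, captured.update)
    assert captured['numthreads'] == 1
```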
</issue>
<code>
[start of parlai/scripts/build_candidates.py]
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6 """Build the candidate responses for a retrieval model.
7
8 Examples
9 --------
10
11 .. code-block:: shell
12
13 python build_candidates.py -t convai2 --outfile /tmp/cands.txt
14 """
15
16 from parlai.core.params import ParlaiParser
17 from parlai.agents.repeat_label.repeat_label import RepeatLabelAgent
18 from parlai.core.worlds import create_task
19 from parlai.core.utils import TimeLogger
20 import random
21 import tempfile
22
23
24 def build_cands(opt):
25 # create repeat label agent and assign it to the specified task
26 agent = RepeatLabelAgent(opt)
27 world = create_task(opt, agent)
28 if opt['outfile'] is None:
29 outfile = tempfile.mkstemp(
30 prefix='{}_{}_'.format(opt['task'], opt['datatype']), suffix='.txt'
31 )[1]
32 else:
33 outfile = opt['outfile']
34
35 if opt.get('num_examples', -1) == -1:
36 num_examples = world.num_examples()
37 else:
38 num_examples = opt['num_examples']
39 log_timer = TimeLogger()
40
41 print('[ starting to build candidates from task.. (ex:' + str(num_examples) + ')]')
42 print('[ saving output to {} ]'.format(outfile))
43 cands = []
44 for _ in range(num_examples):
45 world.parley()
46 # We get the acts of the first agent, which is the teacher.
47 acts = world.get_acts()[0]
48 if isinstance(acts, dict):
49 # We turn into a batch of 1 example, in case batching is being used.
50 acts = [acts]
51 for a in acts:
52 candidate = a.get('labels', a.get('eval_labels', None))
53 if candidate is not None:
54 candidate = candidate[0]
55 cands.append(candidate)
56 if log_timer.time() > opt['log_every_n_secs']:
57 text, _log = log_timer.log(world.total_parleys, world.num_examples())
58 print(text)
59 if world.epoch_done():
60 print('EPOCH DONE')
61 break
62 fw = open(outfile, 'w')
63 fw.write('\n'.join(cands))
64 fw.close()
65
66
67 def main():
68 random.seed(42)
69 # Get command line arguments
70 parser = ParlaiParser()
71 parser.add_argument(
72 '-n',
73 '--num-examples',
74 default=-1,
75 type=int,
76 help='Total number of exs to convert, -1 to convert all examples',
77 )
78 parser.add_argument(
79 '-of',
80 '--outfile',
81 default=None,
82 type=str,
83 help='Output file where to save, by default will be created in /tmp',
84 )
85 parser.add_argument('-ltim', '--log-every-n-secs', type=float, default=2)
86 parser.set_defaults(datatype='train:evalmode')
87 opt = parser.parse_args()
88 build_cands(opt)
89
90
91 if __name__ == '__main__':
92 main()
93
[end of parlai/scripts/build_candidates.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/parlai/scripts/build_candidates.py b/parlai/scripts/build_candidates.py
--- a/parlai/scripts/build_candidates.py
+++ b/parlai/scripts/build_candidates.py
@@ -23,6 +23,9 @@
def build_cands(opt):
# create repeat label agent and assign it to the specified task
+ if opt['numthreads'] > 1:
+ # Broken in hogwild mode. Just fall back to single processing mode
+ opt['numthreads'] = 1
agent = RepeatLabelAgent(opt)
world = create_task(opt, agent)
if opt['outfile'] is None:
| {"golden_diff": "diff --git a/parlai/scripts/build_candidates.py b/parlai/scripts/build_candidates.py\n--- a/parlai/scripts/build_candidates.py\n+++ b/parlai/scripts/build_candidates.py\n@@ -23,6 +23,9 @@\n \n def build_cands(opt):\n # create repeat label agent and assign it to the specified task\n+ if opt['numthreads'] > 1:\n+ # Broken in hogwild mode. Just fall back to single processing mode\n+ opt['numthreads'] = 1\n agent = RepeatLabelAgent(opt)\n world = create_task(opt, agent)\n if opt['outfile'] is None:\n", "issue": "Quickstart AttributeError: 'HogwildWorld' object has no attribute 'acts'\n**Bug description**\r\nWhen going through the ParlAI [quickstart](https://parl.ai/docs/tutorial_quick.html#install), I got the following error:\r\n\r\n``` python\r\nTraceback (most recent call last):\r\n File \"examples/interactive.py\", line 18, in <module>\r\n interactive(opt, print_parser=parser)\r\n File \"/root/ParlAI/parlai/scripts/interactive.py\", line 68, in interactive\r\n agent = create_agent(opt, requireModelExists=True)\r\n File \"/root/ParlAI/parlai/core/agents.py\", line 683, in create_agent\r\n model = load_agent_module(opt)\r\n File \"/root/ParlAI/parlai/core/agents.py\", line 548, in load_agent_module\r\n return model_class(new_opt)\r\n File \"/root/ParlAI/parlai/agents/memnn/memnn.py\", line 86, in __init__\r\n super().__init__(opt, shared)\r\n File \"/root/ParlAI/parlai/core/torch_ranker_agent.py\", line 135, in __init__\r\n super().__init__(opt, shared)\r\n File \"/root/ParlAI/parlai/core/torch_agent.py\", line 737, in __init__\r\n self.set_interactive_mode(opt['interactive_mode'], shared)\r\n File \"/root/ParlAI/parlai/core/torch_ranker_agent.py\", line 206, in set_interactive_mode\r\n path = self.get_task_candidates_path()\r\n File \"/root/ParlAI/parlai/core/torch_ranker_agent.py\", line 230, in get_task_candidates_path\r\n build_cands(opt)\r\n File \"/root/ParlAI/parlai/scripts/build_candidates.py\", line 47, in build_cands\r\n acts = world.get_acts()[0]\r\n File \"/root/ParlAI/parlai/core/worlds.py\", line 162, in get_acts\r\n return self.acts\r\nAttributeError: 'HogwildWorld' object has no attribute 'acts'\r\n```\r\n\r\n**While running**\r\n```python\r\npython examples/interactive.py -mf /tmp/babi_memnn -ecands vocab\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\"\"\"Build the candidate responses for a retrieval model.\n\nExamples\n--------\n\n.. code-block:: shell\n\n python build_candidates.py -t convai2 --outfile /tmp/cands.txt\n\"\"\"\n\nfrom parlai.core.params import ParlaiParser\nfrom parlai.agents.repeat_label.repeat_label import RepeatLabelAgent\nfrom parlai.core.worlds import create_task\nfrom parlai.core.utils import TimeLogger\nimport random\nimport tempfile\n\n\ndef build_cands(opt):\n # create repeat label agent and assign it to the specified task\n agent = RepeatLabelAgent(opt)\n world = create_task(opt, agent)\n if opt['outfile'] is None:\n outfile = tempfile.mkstemp(\n prefix='{}_{}_'.format(opt['task'], opt['datatype']), suffix='.txt'\n )[1]\n else:\n outfile = opt['outfile']\n\n if opt.get('num_examples', -1) == -1:\n num_examples = world.num_examples()\n else:\n num_examples = opt['num_examples']\n log_timer = TimeLogger()\n\n print('[ starting to build candidates from task.. 
(ex:' + str(num_examples) + ')]')\n print('[ saving output to {} ]'.format(outfile))\n cands = []\n for _ in range(num_examples):\n world.parley()\n # We get the acts of the first agent, which is the teacher.\n acts = world.get_acts()[0]\n if isinstance(acts, dict):\n # We turn into a batch of 1 example, in case batching is being used.\n acts = [acts]\n for a in acts:\n candidate = a.get('labels', a.get('eval_labels', None))\n if candidate is not None:\n candidate = candidate[0]\n cands.append(candidate)\n if log_timer.time() > opt['log_every_n_secs']:\n text, _log = log_timer.log(world.total_parleys, world.num_examples())\n print(text)\n if world.epoch_done():\n print('EPOCH DONE')\n break\n fw = open(outfile, 'w')\n fw.write('\\n'.join(cands))\n fw.close()\n\n\ndef main():\n random.seed(42)\n # Get command line arguments\n parser = ParlaiParser()\n parser.add_argument(\n '-n',\n '--num-examples',\n default=-1,\n type=int,\n help='Total number of exs to convert, -1 to convert all examples',\n )\n parser.add_argument(\n '-of',\n '--outfile',\n default=None,\n type=str,\n help='Output file where to save, by default will be created in /tmp',\n )\n parser.add_argument('-ltim', '--log-every-n-secs', type=float, default=2)\n parser.set_defaults(datatype='train:evalmode')\n opt = parser.parse_args()\n build_cands(opt)\n\n\nif __name__ == '__main__':\n main()\n", "path": "parlai/scripts/build_candidates.py"}]} | 1,895 | 143 |
gh_patches_debug_12054 | rasdani/github-patches | git_diff | quantumlib__Cirq-4816 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Substates are separated after measurements, when measurements are ignored
**Description of the issue**
The density matrix simulator separates qubit states after measurement when split_untangled_states=True. However, when ignore_measurement_results=True, this should not happen, as this setting changes the measurement into a dephasing operation and does not make the state separable.
For instance, when measuring a Bell state and ignoring the result, the resulting density matrix should be 0.5 |00><00| + 0.5 |11><11|. However, separating those states (partial-tracing each qubit) and re-kronning them gives 0.25 on each of the four basis states; i.e. it causes each qubit to be 0.5 |0> and 0.5 |1> independently, with no correlation. Therefore we need to avoid separating states after measurements if ignore_measurement_results=True.
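
(Editor's note: a small numpy sketch — independent of Cirq, purely to illustrate the arithmetic in the previous paragraph. Partial-tracing the dephased Bell state and re-kronning the reduced states turns the two-outcome mixture into a uniform mixture over all four basis states.)

```python
import numpy as np

# Density matrix after measuring a Bell state and discarding the result:
# a 50/50 mixture of |00><00| and |11><11|.
ket00 = np.zeros(4); ket00[0] = 1.0
ket11 = np.zeros(4); ket11[3] = 1.0
rho = 0.5 * np.outer(ket00, ket00) + 0.5 * np.outer(ket11, ket11)

rho4 = rho.reshape(2, 2, 2, 2)           # indices: (q0, q1 | q0', q1')
rho_q0 = np.einsum('ijkj->ik', rho4)     # trace out qubit 1
rho_q1 = np.einsum('ijil->jl', rho4)     # trace out qubit 0

separated = np.kron(rho_q0, rho_q1)      # re-kron the factored states

print(np.diag(rho))        # [0.5 0.  0.  0.5]      -- correlated mixture
print(np.diag(separated))  # [0.25 0.25 0.25 0.25]  -- correlation lost
```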
**How to reproduce the issue**
```python
def test_ignore_measurements_remains_entangled():
q0, q1 = cirq.LineQubit.range(2)
simulator1 = cirq.DensityMatrixSimulator(
ignore_measurement_results=True, split_untangled_states=False
)
simulator2 = cirq.DensityMatrixSimulator(
ignore_measurement_results=True, split_untangled_states=True
)
circuit = cirq.Circuit(
cirq.H(q0),
cirq.CX(q0, q1),
cirq.measure(q0),
)
result1 = simulator1.simulate(circuit)
result2 = simulator2.simulate(circuit)
np.testing.assert_almost_equal(result2.final_density_matrix, result1.final_density_matrix)
```
</issue>
<code>
[start of cirq-core/cirq/sim/act_on_args_container.py]
1 # Copyright 2021 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from collections import abc
16 from typing import (
17 Dict,
18 TYPE_CHECKING,
19 Generic,
20 Sequence,
21 Optional,
22 Iterator,
23 Any,
24 Tuple,
25 List,
26 Union,
27 )
28
29 import numpy as np
30
31 from cirq import ops, protocols
32 from cirq.sim.operation_target import OperationTarget
33 from cirq.sim.simulator import (
34 TActOnArgs,
35 )
36
37 if TYPE_CHECKING:
38 import cirq
39
40
41 class ActOnArgsContainer(
42 Generic[TActOnArgs],
43 OperationTarget[TActOnArgs],
44 abc.Mapping,
45 ):
46 """A container for a `Qid`-to-`ActOnArgs` dictionary."""
47
48 def __init__(
49 self,
50 args: Dict[Optional['cirq.Qid'], TActOnArgs],
51 qubits: Sequence['cirq.Qid'],
52 split_untangled_states: bool,
53 log_of_measurement_results: Dict[str, Any],
54 ):
55 """Initializes the class.
56
57 Args:
58 args: The `ActOnArgs` dictionary. This will not be copied; the
59 original reference will be kept here.
60 qubits: The canonical ordering of qubits.
61 split_untangled_states: If True, optimizes operations by running
62 unentangled qubit sets independently and merging those states
63 at the end.
64 log_of_measurement_results: A mutable object that measurements are
65 being recorded into.
66 """
67 self.args = args
68 self._qubits = tuple(qubits)
69 self.split_untangled_states = split_untangled_states
70 self._log_of_measurement_results = log_of_measurement_results
71
72 def create_merged_state(self) -> TActOnArgs:
73 if not self.split_untangled_states:
74 return self.args[None]
75 final_args = self.args[None]
76 for args in set([self.args[k] for k in self.args.keys() if k is not None]):
77 final_args = final_args.kronecker_product(args)
78 return final_args.transpose_to_qubit_order(self.qubits)
79
80 def _act_on_fallback_(
81 self,
82 action: Union['cirq.Operation', 'cirq.Gate'],
83 qubits: Sequence['cirq.Qid'],
84 allow_decompose: bool = True,
85 ) -> bool:
86 gate = action.gate if isinstance(action, ops.Operation) else action
87
88 if isinstance(gate, ops.IdentityGate):
89 return True
90
91 if isinstance(gate, ops.SwapPowGate) and gate.exponent % 2 == 1 and gate.global_shift == 0:
92 q0, q1 = qubits
93 args0 = self.args[q0]
94 args1 = self.args[q1]
95 if args0 is args1:
96 args0.swap(q0, q1, inplace=True)
97 else:
98 self.args[q0] = args1.rename(q1, q0, inplace=True)
99 self.args[q1] = args0.rename(q0, q1, inplace=True)
100 return True
101
102 # Go through the op's qubits and join any disparate ActOnArgs states
103 # into a new combined state.
104 op_args_opt: Optional[TActOnArgs] = None
105 for q in qubits:
106 if op_args_opt is None:
107 op_args_opt = self.args[q]
108 elif q not in op_args_opt.qubits:
109 op_args_opt = op_args_opt.kronecker_product(self.args[q])
110 op_args = op_args_opt or self.args[None]
111
112 # (Backfill the args map with the new value)
113 for q in op_args.qubits:
114 self.args[q] = op_args
115
116 # Act on the args with the operation
117 act_on_qubits = qubits if isinstance(action, ops.Gate) else None
118 protocols.act_on(action, op_args, act_on_qubits, allow_decompose=allow_decompose)
119
120 # Decouple any measurements or resets
121 if self.split_untangled_states and isinstance(
122 gate, (ops.MeasurementGate, ops.ResetChannel)
123 ):
124 for q in qubits:
125 q_args, op_args = op_args.factor((q,), validate=False)
126 self.args[q] = q_args
127
128 # (Backfill the args map with the new value)
129 for q in op_args.qubits:
130 self.args[q] = op_args
131 return True
132
133 def copy(self) -> 'cirq.ActOnArgsContainer[TActOnArgs]':
134 logs = self.log_of_measurement_results.copy()
135 copies = {a: a.copy() for a in set(self.args.values())}
136 for copy in copies.values():
137 copy._log_of_measurement_results = logs
138 args = {q: copies[a] for q, a in self.args.items()}
139 return ActOnArgsContainer(args, self.qubits, self.split_untangled_states, logs)
140
141 @property
142 def qubits(self) -> Tuple['cirq.Qid', ...]:
143 return self._qubits
144
145 @property
146 def log_of_measurement_results(self) -> Dict[str, Any]:
147 return self._log_of_measurement_results
148
149 def sample(
150 self,
151 qubits: List['cirq.Qid'],
152 repetitions: int = 1,
153 seed: 'cirq.RANDOM_STATE_OR_SEED_LIKE' = None,
154 ) -> np.ndarray:
155 columns = []
156 selected_order: List[ops.Qid] = []
157 q_set = set(qubits)
158 for v in dict.fromkeys(self.args.values()):
159 qs = [q for q in v.qubits if q in q_set]
160 if any(qs):
161 column = v.sample(qs, repetitions, seed)
162 columns.append(column)
163 selected_order += qs
164 stacked = np.column_stack(columns)
165 qubit_map = {q: i for i, q in enumerate(selected_order)}
166 index_order = [qubit_map[q] for q in qubits]
167 return stacked[:, index_order]
168
169 def __getitem__(self, item: Optional['cirq.Qid']) -> TActOnArgs:
170 return self.args[item]
171
172 def __len__(self) -> int:
173 return len(self.args)
174
175 def __iter__(self) -> Iterator[Optional['cirq.Qid']]:
176 return iter(self.args)
177
[end of cirq-core/cirq/sim/act_on_args_container.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cirq-core/cirq/sim/act_on_args_container.py b/cirq-core/cirq/sim/act_on_args_container.py
--- a/cirq-core/cirq/sim/act_on_args_container.py
+++ b/cirq-core/cirq/sim/act_on_args_container.py
@@ -118,8 +118,9 @@
protocols.act_on(action, op_args, act_on_qubits, allow_decompose=allow_decompose)
# Decouple any measurements or resets
- if self.split_untangled_states and isinstance(
- gate, (ops.MeasurementGate, ops.ResetChannel)
+ if self.split_untangled_states and (
+ isinstance(gate, ops.ResetChannel)
+ or (isinstance(gate, ops.MeasurementGate) and not op_args.ignore_measurement_results)
):
for q in qubits:
q_args, op_args = op_args.factor((q,), validate=False)
| {"golden_diff": "diff --git a/cirq-core/cirq/sim/act_on_args_container.py b/cirq-core/cirq/sim/act_on_args_container.py\n--- a/cirq-core/cirq/sim/act_on_args_container.py\n+++ b/cirq-core/cirq/sim/act_on_args_container.py\n@@ -118,8 +118,9 @@\n protocols.act_on(action, op_args, act_on_qubits, allow_decompose=allow_decompose)\n \n # Decouple any measurements or resets\n- if self.split_untangled_states and isinstance(\n- gate, (ops.MeasurementGate, ops.ResetChannel)\n+ if self.split_untangled_states and (\n+ isinstance(gate, ops.ResetChannel)\n+ or (isinstance(gate, ops.MeasurementGate) and not op_args.ignore_measurement_results)\n ):\n for q in qubits:\n q_args, op_args = op_args.factor((q,), validate=False)\n", "issue": "Substates are separated after measurements, when measurements are ignored\n**Description of the issue**\r\n\r\nDensity matrix separates qubit states after measurement when split_untanged_states=True. However when ignore_measurement_results=True, this should not happen, as the this changes the measurement into a dephase and does not make the state separable.\r\n\r\nFor instance, when measuring a Bell state, the resulting DM should be 0.5 |00> + 0.5 |11>. However, separating those states (partial tracing each qubit) and re-kronning them gives 0.25 of each; i.e. it causes each qubit to be 0.5 |0> and 0.5 |1> independently. Therefore we need to avoid separating states after measurements if ignore_measurement_results=True.\r\n\r\n\r\n**How to reproduce the issue**\r\n\r\n```python\r\ndef test_ignore_measurements_remains_entangled():\r\n q0, q1 = cirq.LineQubit.range(2)\r\n simulator1 = cirq.DensityMatrixSimulator(\r\n ignore_measurement_results=True, split_untangled_states=False\r\n )\r\n simulator2 = cirq.DensityMatrixSimulator(\r\n ignore_measurement_results=True, split_untangled_states=True\r\n )\r\n circuit = cirq.Circuit(\r\n cirq.H(q0),\r\n cirq.CX(q0, q1),\r\n cirq.measure(q0),\r\n )\r\n result1 = simulator1.simulate(circuit)\r\n result2 = simulator2.simulate(circuit)\r\n np.testing.assert_almost_equal(result2.final_density_matrix, result1.final_density_matrix)\r\n```\r\n\r\n\n", "before_files": [{"content": "# Copyright 2021 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom collections import abc\nfrom typing import (\n Dict,\n TYPE_CHECKING,\n Generic,\n Sequence,\n Optional,\n Iterator,\n Any,\n Tuple,\n List,\n Union,\n)\n\nimport numpy as np\n\nfrom cirq import ops, protocols\nfrom cirq.sim.operation_target import OperationTarget\nfrom cirq.sim.simulator import (\n TActOnArgs,\n)\n\nif TYPE_CHECKING:\n import cirq\n\n\nclass ActOnArgsContainer(\n Generic[TActOnArgs],\n OperationTarget[TActOnArgs],\n abc.Mapping,\n):\n \"\"\"A container for a `Qid`-to-`ActOnArgs` dictionary.\"\"\"\n\n def __init__(\n self,\n args: Dict[Optional['cirq.Qid'], TActOnArgs],\n qubits: Sequence['cirq.Qid'],\n split_untangled_states: bool,\n log_of_measurement_results: Dict[str, Any],\n ):\n \"\"\"Initializes the class.\n\n Args:\n args: The `ActOnArgs` 
dictionary. This will not be copied; the\n original reference will be kept here.\n qubits: The canonical ordering of qubits.\n split_untangled_states: If True, optimizes operations by running\n unentangled qubit sets independently and merging those states\n at the end.\n log_of_measurement_results: A mutable object that measurements are\n being recorded into.\n \"\"\"\n self.args = args\n self._qubits = tuple(qubits)\n self.split_untangled_states = split_untangled_states\n self._log_of_measurement_results = log_of_measurement_results\n\n def create_merged_state(self) -> TActOnArgs:\n if not self.split_untangled_states:\n return self.args[None]\n final_args = self.args[None]\n for args in set([self.args[k] for k in self.args.keys() if k is not None]):\n final_args = final_args.kronecker_product(args)\n return final_args.transpose_to_qubit_order(self.qubits)\n\n def _act_on_fallback_(\n self,\n action: Union['cirq.Operation', 'cirq.Gate'],\n qubits: Sequence['cirq.Qid'],\n allow_decompose: bool = True,\n ) -> bool:\n gate = action.gate if isinstance(action, ops.Operation) else action\n\n if isinstance(gate, ops.IdentityGate):\n return True\n\n if isinstance(gate, ops.SwapPowGate) and gate.exponent % 2 == 1 and gate.global_shift == 0:\n q0, q1 = qubits\n args0 = self.args[q0]\n args1 = self.args[q1]\n if args0 is args1:\n args0.swap(q0, q1, inplace=True)\n else:\n self.args[q0] = args1.rename(q1, q0, inplace=True)\n self.args[q1] = args0.rename(q0, q1, inplace=True)\n return True\n\n # Go through the op's qubits and join any disparate ActOnArgs states\n # into a new combined state.\n op_args_opt: Optional[TActOnArgs] = None\n for q in qubits:\n if op_args_opt is None:\n op_args_opt = self.args[q]\n elif q not in op_args_opt.qubits:\n op_args_opt = op_args_opt.kronecker_product(self.args[q])\n op_args = op_args_opt or self.args[None]\n\n # (Backfill the args map with the new value)\n for q in op_args.qubits:\n self.args[q] = op_args\n\n # Act on the args with the operation\n act_on_qubits = qubits if isinstance(action, ops.Gate) else None\n protocols.act_on(action, op_args, act_on_qubits, allow_decompose=allow_decompose)\n\n # Decouple any measurements or resets\n if self.split_untangled_states and isinstance(\n gate, (ops.MeasurementGate, ops.ResetChannel)\n ):\n for q in qubits:\n q_args, op_args = op_args.factor((q,), validate=False)\n self.args[q] = q_args\n\n # (Backfill the args map with the new value)\n for q in op_args.qubits:\n self.args[q] = op_args\n return True\n\n def copy(self) -> 'cirq.ActOnArgsContainer[TActOnArgs]':\n logs = self.log_of_measurement_results.copy()\n copies = {a: a.copy() for a in set(self.args.values())}\n for copy in copies.values():\n copy._log_of_measurement_results = logs\n args = {q: copies[a] for q, a in self.args.items()}\n return ActOnArgsContainer(args, self.qubits, self.split_untangled_states, logs)\n\n @property\n def qubits(self) -> Tuple['cirq.Qid', ...]:\n return self._qubits\n\n @property\n def log_of_measurement_results(self) -> Dict[str, Any]:\n return self._log_of_measurement_results\n\n def sample(\n self,\n qubits: List['cirq.Qid'],\n repetitions: int = 1,\n seed: 'cirq.RANDOM_STATE_OR_SEED_LIKE' = None,\n ) -> np.ndarray:\n columns = []\n selected_order: List[ops.Qid] = []\n q_set = set(qubits)\n for v in dict.fromkeys(self.args.values()):\n qs = [q for q in v.qubits if q in q_set]\n if any(qs):\n column = v.sample(qs, repetitions, seed)\n columns.append(column)\n selected_order += qs\n stacked = np.column_stack(columns)\n qubit_map = {q: i 
for i, q in enumerate(selected_order)}\n index_order = [qubit_map[q] for q in qubits]\n return stacked[:, index_order]\n\n def __getitem__(self, item: Optional['cirq.Qid']) -> TActOnArgs:\n return self.args[item]\n\n def __len__(self) -> int:\n return len(self.args)\n\n def __iter__(self) -> Iterator[Optional['cirq.Qid']]:\n return iter(self.args)\n", "path": "cirq-core/cirq/sim/act_on_args_container.py"}]} | 2,792 | 204 |
gh_patches_debug_1156 | rasdani/github-patches | git_diff | facebookresearch__hydra-1531 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add `env` to Hydra's config group
This is a follow-up to #1441.
The `env` config group will allow users to manually change the env default values (such as providing default callbacks or updating `run.dir`).
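
(Editor's note: a rough sketch of how a user-supplied default for such a group might be registered, mirroring the `ConfigStore.store` calls already used in `hydra/conf/__init__.py` below. The group name `hydra/env`, the node layout, and the values are illustrative assumptions, not Hydra's actual API for this feature.)

```python
# Illustrative only: register a user-defined default under a hypothetical
# "hydra/env" config group via ConfigStore, the same mechanism the hydra
# config node itself is registered with.
from dataclasses import dataclass, field
from typing import Any, Dict

from hydra.core.config_store import ConfigStore


@dataclass
class MyEnvDefaults:
    # Assumed shape: values a user might want pre-seeded for every run.
    callbacks: Dict[str, Any] = field(default_factory=dict)
    run_dir: str = "outputs/${now:%Y-%m-%d}/${now:%H-%M-%S}"


cs = ConfigStore.instance()
cs.store(group="hydra/env", name="my_defaults", node=MyEnvDefaults, provider="example")
```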
</issue>
<code>
[start of hydra/conf/__init__.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 from dataclasses import dataclass, field
3 from typing import Any, Dict, List, Optional
4
5 from omegaconf import MISSING
6
7 from hydra.core.config_store import ConfigStore
8
9
10 @dataclass
11 class HelpConf:
12 app_name: str = MISSING
13 header: str = MISSING
14 footer: str = MISSING
15 template: str = MISSING
16
17
18 @dataclass
19 class HydraHelpConf:
20 hydra_help: str = MISSING
21 template: str = MISSING
22
23
24 @dataclass
25 class RunDir:
26 dir: str = MISSING
27
28
29 @dataclass
30 class SweepDir:
31 dir: str = MISSING
32 subdir: str = MISSING
33
34
35 @dataclass
36 class OverridesConf:
37 # Overrides for the hydra configuration
38 hydra: List[str] = field(default_factory=lambda: [])
39 # Overrides for the task configuration
40 task: List[str] = field(default_factory=lambda: [])
41
42
43 # job runtime information will be populated here
44 @dataclass
45 class JobConf:
46 # Job name, populated automatically unless specified by the user (in config or cli)
47 name: str = MISSING
48
49 # Populated automatically by Hydra.
50 # Concatenation of job overrides that can be used as a part
51 # of the directory name.
52 # This can be configured via hydra.job.config.override_dirname
53 override_dirname: str = MISSING
54
55 # Job ID in underlying scheduling system
56 id: str = MISSING
57
58 # Job number if job is a part of a sweep
59 num: int = MISSING
60
61 # The config name used by the job
62 config_name: Optional[str] = MISSING
63
64 # Environment variables to set remotely
65 env_set: Dict[str, str] = field(default_factory=dict)
66 # Environment variables to copy from the launching machine
67 env_copy: List[str] = field(default_factory=list)
68
69 # Job config
70 @dataclass
71 class JobConfig:
72 @dataclass
73 # configuration for the ${hydra.job.override_dirname} runtime variable
74 class OverrideDirname:
75 kv_sep: str = "="
76 item_sep: str = ","
77 exclude_keys: List[str] = field(default_factory=list)
78
79 override_dirname: OverrideDirname = OverrideDirname()
80
81 config: JobConfig = JobConfig()
82
83
84 @dataclass
85 class RuntimeConf:
86 version: str = MISSING
87 cwd: str = MISSING
88
89
90 @dataclass
91 class HydraConf:
92 defaults: List[Any] = field(
93 default_factory=lambda: [
94 {"output": "default"},
95 {"launcher": "basic"},
96 {"sweeper": "basic"},
97 {"help": "default"},
98 {"hydra_help": "default"},
99 {"hydra_logging": "default"},
100 {"job_logging": "default"},
101 {"callbacks": None},
102 ]
103 )
104
105 # Elements to append to the config search path.
106 # Note: This can only be configured in the primary config.
107 searchpath: List[str] = field(default_factory=list)
108
109 # Normal run output configuration
110 run: RunDir = RunDir()
111 # Multi-run output configuration
112 sweep: SweepDir = SweepDir()
113 # Logging configuration for Hydra
114 hydra_logging: Any = MISSING
115 # Logging configuration for the job
116 job_logging: Any = MISSING
117
118 # Sweeper configuration
119 sweeper: Any = MISSING
120 # Launcher configuration
121 launcher: Any = MISSING
122 # Callbacks configuration
123 callbacks: Dict[str, Any] = field(default_factory=dict)
124
125 # Program Help template
126 help: HelpConf = HelpConf()
127 # Hydra's Help template
128 hydra_help: HydraHelpConf = HydraHelpConf()
129
130 # Output directory for produced configuration files and overrides.
131 # E.g., hydra.yaml, overrides.yaml will go here. Useful for debugging
132 # and extra context when looking at past runs.
133 # Setting to None will prevent the creation of the output subdir.
134 output_subdir: Optional[str] = ".hydra"
135
136 # Those lists will contain runtime overrides
137 overrides: OverridesConf = OverridesConf()
138
139 job: JobConf = JobConf()
140
141 # populated at runtime
142 runtime: RuntimeConf = RuntimeConf()
143
144 # Can be a boolean, string or a list of strings
145 # If a boolean, setting to true will set the log level for the root logger to debug
146 # If a string, it's interpreted as a the list [string]
147 # If a list, each element is interpreted as a logger to have logging level set to debug.
148 # Typical command lines to manipulate hydra.verbose:
149 # hydra.verbose=true
150 # hydra.verbose=[hydra,__main__]
151 # TODO: good use case for Union support in OmegaConf
152 verbose: Any = False
153
154 # Composition choices dictionary
155 choices: Dict[str, str] = field(default_factory=lambda: {})
156
157
158 cs = ConfigStore.instance()
159
160 cs.store(
161 group="hydra",
162 name="config",
163 node=HydraConf(),
164 provider="hydra",
165 )
166
[end of hydra/conf/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hydra/conf/__init__.py b/hydra/conf/__init__.py
--- a/hydra/conf/__init__.py
+++ b/hydra/conf/__init__.py
@@ -99,6 +99,8 @@
{"hydra_logging": "default"},
{"job_logging": "default"},
{"callbacks": None},
+ # env specific overrides
+ {"env": "default"},
]
)
| {"golden_diff": "diff --git a/hydra/conf/__init__.py b/hydra/conf/__init__.py\n--- a/hydra/conf/__init__.py\n+++ b/hydra/conf/__init__.py\n@@ -99,6 +99,8 @@\n {\"hydra_logging\": \"default\"},\n {\"job_logging\": \"default\"},\n {\"callbacks\": None},\n+ # env specific overrides\n+ {\"env\": \"default\"},\n ]\n )\n", "issue": "Add `env` to Hydra's config group\nThis is a follow up to #1441\r\n\r\nthe `env` config group will allows users to manually change the env defaults value. (such as provides default callbacks or update run.dir )\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, List, Optional\n\nfrom omegaconf import MISSING\n\nfrom hydra.core.config_store import ConfigStore\n\n\n@dataclass\nclass HelpConf:\n app_name: str = MISSING\n header: str = MISSING\n footer: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass HydraHelpConf:\n hydra_help: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass RunDir:\n dir: str = MISSING\n\n\n@dataclass\nclass SweepDir:\n dir: str = MISSING\n subdir: str = MISSING\n\n\n@dataclass\nclass OverridesConf:\n # Overrides for the hydra configuration\n hydra: List[str] = field(default_factory=lambda: [])\n # Overrides for the task configuration\n task: List[str] = field(default_factory=lambda: [])\n\n\n# job runtime information will be populated here\n@dataclass\nclass JobConf:\n # Job name, populated automatically unless specified by the user (in config or cli)\n name: str = MISSING\n\n # Populated automatically by Hydra.\n # Concatenation of job overrides that can be used as a part\n # of the directory name.\n # This can be configured via hydra.job.config.override_dirname\n override_dirname: str = MISSING\n\n # Job ID in underlying scheduling system\n id: str = MISSING\n\n # Job number if job is a part of a sweep\n num: int = MISSING\n\n # The config name used by the job\n config_name: Optional[str] = MISSING\n\n # Environment variables to set remotely\n env_set: Dict[str, str] = field(default_factory=dict)\n # Environment variables to copy from the launching machine\n env_copy: List[str] = field(default_factory=list)\n\n # Job config\n @dataclass\n class JobConfig:\n @dataclass\n # configuration for the ${hydra.job.override_dirname} runtime variable\n class OverrideDirname:\n kv_sep: str = \"=\"\n item_sep: str = \",\"\n exclude_keys: List[str] = field(default_factory=list)\n\n override_dirname: OverrideDirname = OverrideDirname()\n\n config: JobConfig = JobConfig()\n\n\n@dataclass\nclass RuntimeConf:\n version: str = MISSING\n cwd: str = MISSING\n\n\n@dataclass\nclass HydraConf:\n defaults: List[Any] = field(\n default_factory=lambda: [\n {\"output\": \"default\"},\n {\"launcher\": \"basic\"},\n {\"sweeper\": \"basic\"},\n {\"help\": \"default\"},\n {\"hydra_help\": \"default\"},\n {\"hydra_logging\": \"default\"},\n {\"job_logging\": \"default\"},\n {\"callbacks\": None},\n ]\n )\n\n # Elements to append to the config search path.\n # Note: This can only be configured in the primary config.\n searchpath: List[str] = field(default_factory=list)\n\n # Normal run output configuration\n run: RunDir = RunDir()\n # Multi-run output configuration\n sweep: SweepDir = SweepDir()\n # Logging configuration for Hydra\n hydra_logging: Any = MISSING\n # Logging configuration for the job\n job_logging: Any = MISSING\n\n # Sweeper configuration\n sweeper: Any = MISSING\n # Launcher configuration\n launcher: Any = 
MISSING\n # Callbacks configuration\n callbacks: Dict[str, Any] = field(default_factory=dict)\n\n # Program Help template\n help: HelpConf = HelpConf()\n # Hydra's Help template\n hydra_help: HydraHelpConf = HydraHelpConf()\n\n # Output directory for produced configuration files and overrides.\n # E.g., hydra.yaml, overrides.yaml will go here. Useful for debugging\n # and extra context when looking at past runs.\n # Setting to None will prevent the creation of the output subdir.\n output_subdir: Optional[str] = \".hydra\"\n\n # Those lists will contain runtime overrides\n overrides: OverridesConf = OverridesConf()\n\n job: JobConf = JobConf()\n\n # populated at runtime\n runtime: RuntimeConf = RuntimeConf()\n\n # Can be a boolean, string or a list of strings\n # If a boolean, setting to true will set the log level for the root logger to debug\n # If a string, it's interpreted as a the list [string]\n # If a list, each element is interpreted as a logger to have logging level set to debug.\n # Typical command lines to manipulate hydra.verbose:\n # hydra.verbose=true\n # hydra.verbose=[hydra,__main__]\n # TODO: good use case for Union support in OmegaConf\n verbose: Any = False\n\n # Composition choices dictionary\n choices: Dict[str, str] = field(default_factory=lambda: {})\n\n\ncs = ConfigStore.instance()\n\ncs.store(\n group=\"hydra\",\n name=\"config\",\n node=HydraConf(),\n provider=\"hydra\",\n)\n", "path": "hydra/conf/__init__.py"}]} | 2,098 | 98 |
gh_patches_debug_9241 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1066 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IoT Topic Rule fails with E1029
*cfn-lint version: 0.22.4*
*Description of issue.*
When using AWS IoT substitution templates (in my case, for IoT SQL functions) within CloudFormation, it is necessary to use the dollar sign and curly braces (for example, `${topic()}`). This gets misinterpreted as an `Fn::Sub` parameter, which throws an E1029 error.
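
(Editor's note: a small self-contained check — using the same regex that appears in `SubNeeded.py` further down — showing why an IoT substitution template trips the rule even though it is not a CloudFormation `Fn::Sub` variable.)

```python
import re

# Copied from the rule below: a value is flagged when it contains ${...}.
parameter_search = re.compile(r'^.*(\$\{.*\}.*(\$\{.*\}.*)*)$')

iot_value = "${topic()}"          # AWS IoT substitution template
sub_value = "${topicRole.Arn}"    # genuine Fn::Sub-style reference

for value in (iot_value, sub_value):
    print(value, '->', bool(parameter_search.match(value)))
# Both print True, so the IoT template is reported as E1029 just like a real
# un-substituted ${...} parameter would be.
```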
Please provide as much information as possible:
* Template linting issues:
* Please provide a CloudFormation sample that generated the issue.
```yaml
IotTopicRule:
Type: AWS::IoT::TopicRule
Properties:
RuleName: IotTopicRule
TopicRulePayload:
RuleDisabled: false
Sql: !Sub "SELECT * FROM 'some-topic'"
Actions:
-
Kinesis:
RoleArn: !Sub '${topicRole.Arn}'
StreamName: !Ref MyKinesisStream
PartitionKey: "${topic()}" # error happens here
```
* If present, please add links to the (official) documentation for clarification.
AWS IoT substitution templates are explained here: https://docs.aws.amazon.com/iot/latest/developerguide/iot-substitution-templates.html
How !Sub uses variables (which `cfn-lint` looks for) is found here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html#w2ab1c21c24c59b7
* Validate if the issue still exists with the latest version of `cfn-lint` and/or the latest Spec files
Yes
</issue>
<code>
[start of src/cfnlint/rules/functions/SubNeeded.py]
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 import re
18 from cfnlint import CloudFormationLintRule
19 from cfnlint import RuleMatch
20
21 class SubNeeded(CloudFormationLintRule):
22 """Check if a substitution string exists without a substitution function"""
23 id = 'E1029'
24 shortdesc = 'Sub is required if a variable is used in a string'
25 description = 'If a substitution variable exists in a string but isn\'t wrapped with the Fn::Sub function the deployment will fail.'
26 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'
27 tags = ['functions', 'sub']
28
29 # Free-form text properties to exclude from this rule
30 # content is part of AWS::CloudFormation::Init
31 excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init', 'CloudWatchAlarmDefinition']
32 api_excludes = ['Uri', 'Body']
33
34 # IAM Policy has special variables that don't require !Sub, Check for these
35 # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html
36 # https://docs.aws.amazon.com/iot/latest/developerguide/basic-policy-variables.html
37 # https://docs.aws.amazon.com/iot/latest/developerguide/thing-policy-variables.html
38 # https://docs.aws.amazon.com/transfer/latest/userguide/users.html#users-policies-scope-down
39 resource_excludes = ['${aws:CurrentTime}', '${aws:EpochTime}', '${aws:TokenIssueTime}', '${aws:principaltype}',
40 '${aws:SecureTransport}', '${aws:SourceIp}', '${aws:UserAgent}', '${aws:userid}',
41 '${aws:username}', '${ec2:SourceInstanceARN}',
42 '${iot:Connection.Thing.ThingName}', '${iot:Connection.Thing.ThingTypeName}',
43 '${iot:Connection.Thing.IsAttached}', '${iot:ClientId}', '${transfer:HomeBucket}',
44 '${transfer:HomeDirectory}', '${transfer:HomeFolder}', '${transfer:UserName}']
45
46 def _match_values(self, searchRegex, cfnelem, path):
47 """Recursively search for values matching the searchRegex"""
48 values = []
49 if isinstance(cfnelem, dict):
50 for key in cfnelem:
51 pathprop = path[:]
52 pathprop.append(key)
53 values.extend(self._match_values(searchRegex, cfnelem[key], pathprop))
54 elif isinstance(cfnelem, list):
55 for index, item in enumerate(cfnelem):
56 pathprop = path[:]
57 pathprop.append(index)
58 values.extend(self._match_values(searchRegex, item, pathprop))
59 else:
60 # Leaf node
61 if isinstance(cfnelem, str) and re.match(searchRegex, cfnelem):
62 # Get all variables as seperate paths
63 regex = re.compile(r'(\$\{.*?\.?.*?})')
64 for variable in re.findall(regex, cfnelem):
65 values.append(path + [variable])
66
67 return values
68
69 def match_values(self, searchRegex, cfn):
70 """
71 Search for values in all parts of the templates that match the searchRegex
72 """
73 results = []
74 results.extend(self._match_values(searchRegex, cfn.template, []))
75 # Globals are removed during a transform. They need to be checked manually
76 results.extend(self._match_values(searchRegex, cfn.template.get('Globals', {}), []))
77 return results
78
79 def _api_exceptions(self, value):
80 """ Key value exceptions """
81 parameter_search = re.compile(r'^\$\{stageVariables\..*\}$')
82 return re.match(parameter_search, value)
83
84 def match(self, cfn):
85 """Basic Rule Matching"""
86
87 matches = []
88
89 # Generic regex to match a string containing at least one ${parameter}
90 parameter_search = re.compile(r'^.*(\$\{.*\}.*(\$\{.*\}.*)*)$')
91
92 # Get a list of paths to every leaf node string containing at least one ${parameter}
93 parameter_string_paths = self.match_values(parameter_search, cfn)
94
95 # We want to search all of the paths to check if each one contains an 'Fn::Sub'
96 for parameter_string_path in parameter_string_paths:
97
98 # Exxclude the special IAM variables
99 variable = parameter_string_path[-1]
100
101 if 'Resource' in parameter_string_path:
102 if variable in self.resource_excludes:
103 continue
104
105 # Exclude literals (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html)
106 if variable.startswith('${!'):
107 continue
108
109 found_sub = False
110 # Does the path contain an 'Fn::Sub'?
111 for step in parameter_string_path:
112 if step in self.api_excludes:
113 if self._api_exceptions(parameter_string_path[-1]):
114 found_sub = True
115 elif step == 'Fn::Sub' or step in self.excludes:
116 found_sub = True
117
118 # If we didn't find an 'Fn::Sub' it means a string containing a ${parameter} may not be evaluated correctly
119 if not found_sub:
120 # Remove the last item (the variable) to prevent multiple errors on 1 line errors
121 path = parameter_string_path[:-1]
122 message = 'Found an embedded parameter outside of an "Fn::Sub" at {}'.format('/'.join(map(str, path)))
123 matches.append(RuleMatch(path, message))
124
125 return matches
126
[end of src/cfnlint/rules/functions/SubNeeded.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/functions/SubNeeded.py b/src/cfnlint/rules/functions/SubNeeded.py
--- a/src/cfnlint/rules/functions/SubNeeded.py
+++ b/src/cfnlint/rules/functions/SubNeeded.py
@@ -28,7 +28,7 @@
# Free-form text properties to exclude from this rule
# content is part of AWS::CloudFormation::Init
- excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init', 'CloudWatchAlarmDefinition']
+ excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init', 'CloudWatchAlarmDefinition', 'TopicRulePayload']
api_excludes = ['Uri', 'Body']
# IAM Policy has special variables that don't require !Sub, Check for these
| {"golden_diff": "diff --git a/src/cfnlint/rules/functions/SubNeeded.py b/src/cfnlint/rules/functions/SubNeeded.py\n--- a/src/cfnlint/rules/functions/SubNeeded.py\n+++ b/src/cfnlint/rules/functions/SubNeeded.py\n@@ -28,7 +28,7 @@\n \n # Free-form text properties to exclude from this rule\n # content is part of AWS::CloudFormation::Init\n- excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init', 'CloudWatchAlarmDefinition']\n+ excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init', 'CloudWatchAlarmDefinition', 'TopicRulePayload']\n api_excludes = ['Uri', 'Body']\n \n # IAM Policy has special variables that don't require !Sub, Check for these\n", "issue": "IoT Topic Rule fails with E1029\n*cfn-lint version: 0.22.4*\r\n\r\n*Description of issue.*\r\n\r\nWhen using AWS IoT substitution templates (in my case, for IoT SQL functions) within Cloud Formation, it is necessary to use the dollar sign and curly braces (For example `${topic()}`). This gets misinterpreted as a Fn::Sub Parameter which throws an E1029 error.\r\n\r\nPlease provide as much information as possible:\r\n* Template linting issues:\r\n * Please provide a CloudFormation sample that generated the issue.\r\n```yaml\r\n IotTopicRule: \r\n Type: AWS::IoT::TopicRule\r\n Properties: \r\n RuleName: IotTopicRule\r\n TopicRulePayload:\r\n RuleDisabled: false\r\n Sql: !Sub \"SELECT * FROM 'some-topic'\"\r\n Actions: \r\n - \r\n Kinesis: \r\n RoleArn: !Sub '${topicRole.Arn}'\r\n StreamName: !Ref MyKinesisStream\r\n PartitionKey: \"${topic()}\" # error happens here\r\n```\r\n * If present, please add links to the (official) documentation for clarification.\r\n\r\nAWS IoT substitution templates are explained here: https://docs.aws.amazon.com/iot/latest/developerguide/iot-substitution-templates.html\r\n\r\nHow !Sub uses variables (which `cfn-lint` looks for) is found here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html#w2ab1c21c24c59b7\r\n\r\n * Validate if the issue still exists with the latest version of `cfn-lint` and/or the latest Spec files\r\n\r\nYes\n", "before_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport re\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\nclass SubNeeded(CloudFormationLintRule):\n \"\"\"Check if a substitution string exists without a substitution function\"\"\"\n id = 'E1029'\n shortdesc = 'Sub is required if a variable is used in a string'\n description = 'If a substitution variable exists in a string but isn\\'t wrapped with the Fn::Sub function the deployment will fail.'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'\n tags = ['functions', 'sub']\n\n # Free-form text properties to exclude from this rule\n # content is part of AWS::CloudFormation::Init\n excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init', 'CloudWatchAlarmDefinition']\n api_excludes = ['Uri', 'Body']\n\n # IAM Policy has special variables that don't require !Sub, Check for these\n # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html\n # https://docs.aws.amazon.com/iot/latest/developerguide/basic-policy-variables.html\n # https://docs.aws.amazon.com/iot/latest/developerguide/thing-policy-variables.html\n # https://docs.aws.amazon.com/transfer/latest/userguide/users.html#users-policies-scope-down\n resource_excludes = ['${aws:CurrentTime}', '${aws:EpochTime}', '${aws:TokenIssueTime}', '${aws:principaltype}',\n '${aws:SecureTransport}', '${aws:SourceIp}', '${aws:UserAgent}', '${aws:userid}',\n '${aws:username}', '${ec2:SourceInstanceARN}',\n '${iot:Connection.Thing.ThingName}', '${iot:Connection.Thing.ThingTypeName}',\n '${iot:Connection.Thing.IsAttached}', '${iot:ClientId}', '${transfer:HomeBucket}',\n '${transfer:HomeDirectory}', '${transfer:HomeFolder}', '${transfer:UserName}']\n\n def _match_values(self, searchRegex, cfnelem, path):\n \"\"\"Recursively search for values matching the searchRegex\"\"\"\n values = []\n if isinstance(cfnelem, dict):\n for key in cfnelem:\n pathprop = path[:]\n pathprop.append(key)\n values.extend(self._match_values(searchRegex, cfnelem[key], pathprop))\n elif isinstance(cfnelem, list):\n for index, item in enumerate(cfnelem):\n pathprop = path[:]\n pathprop.append(index)\n values.extend(self._match_values(searchRegex, item, pathprop))\n else:\n # Leaf node\n if isinstance(cfnelem, str) and re.match(searchRegex, cfnelem):\n # Get all variables as seperate paths\n regex = re.compile(r'(\\$\\{.*?\\.?.*?})')\n for variable in re.findall(regex, cfnelem):\n values.append(path + [variable])\n\n return values\n\n def match_values(self, searchRegex, cfn):\n \"\"\"\n Search for values in all parts of the templates that match the searchRegex\n \"\"\"\n results = []\n results.extend(self._match_values(searchRegex, cfn.template, []))\n # Globals are removed during a transform. 
They need to be checked manually\n results.extend(self._match_values(searchRegex, cfn.template.get('Globals', {}), []))\n return results\n\n def _api_exceptions(self, value):\n \"\"\" Key value exceptions \"\"\"\n parameter_search = re.compile(r'^\\$\\{stageVariables\\..*\\}$')\n return re.match(parameter_search, value)\n\n def match(self, cfn):\n \"\"\"Basic Rule Matching\"\"\"\n\n matches = []\n\n # Generic regex to match a string containing at least one ${parameter}\n parameter_search = re.compile(r'^.*(\\$\\{.*\\}.*(\\$\\{.*\\}.*)*)$')\n\n # Get a list of paths to every leaf node string containing at least one ${parameter}\n parameter_string_paths = self.match_values(parameter_search, cfn)\n\n # We want to search all of the paths to check if each one contains an 'Fn::Sub'\n for parameter_string_path in parameter_string_paths:\n\n # Exxclude the special IAM variables\n variable = parameter_string_path[-1]\n\n if 'Resource' in parameter_string_path:\n if variable in self.resource_excludes:\n continue\n\n # Exclude literals (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html)\n if variable.startswith('${!'):\n continue\n\n found_sub = False\n # Does the path contain an 'Fn::Sub'?\n for step in parameter_string_path:\n if step in self.api_excludes:\n if self._api_exceptions(parameter_string_path[-1]):\n found_sub = True\n elif step == 'Fn::Sub' or step in self.excludes:\n found_sub = True\n\n # If we didn't find an 'Fn::Sub' it means a string containing a ${parameter} may not be evaluated correctly\n if not found_sub:\n # Remove the last item (the variable) to prevent multiple errors on 1 line errors\n path = parameter_string_path[:-1]\n message = 'Found an embedded parameter outside of an \"Fn::Sub\" at {}'.format('/'.join(map(str, path)))\n matches.append(RuleMatch(path, message))\n\n return matches\n", "path": "src/cfnlint/rules/functions/SubNeeded.py"}]} | 2,543 | 177 |
gh_patches_debug_20846 | rasdani/github-patches | git_diff | wagtail__wagtail-1147 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wagtail doesn't gracefully support session invalidation on password change
According to [Django's documentation](https://docs.djangoproject.com/en/1.7/topics/auth/default/#session-invalidation-on-password-change), SessionAuthenticationMiddleware is new in Django 1.7, enabled by default, and will be mandatory in Django 2.0.
Currently, when the middleware is loaded and the user changes their password, they are immediately kicked out to the sign in screen. The user's session is most likely invalidated. This is very obtrusive and the user is not informed if their password was successfully updated. I believe the offending code is in
[account.py](https://github.com/torchbox/wagtail/blob/master/wagtail/wagtailadmin/views/account.py#L26). I attempted to modify the code from the example to make it work, but the outcome was the same:
``` python
# ...
from django.contrib.auth import update_session_auth_hash # new code
# ...
def change_password(request):
can_change_password = request.user.has_usable_password()
if can_change_password:
if request.POST:
form = SetPasswordForm(request.user, request.POST)
if form.is_valid():
form.save()
update_session_auth_hash(request, form.user) # new code
messages.success(request, _("Your password has been changed successfully!"))
return redirect('wagtailadmin_account')
else:
form = SetPasswordForm(request.user)
else:
form = None
return render(request, 'wagtailadmin/account/change_password.html', {
'form': form,
'can_change_password': can_change_password,
})
```
I am, currently, a Django novice, so that's as far as I was able to get. Hope this is an easy fix!
</issue>
<code>
[start of wagtail/wagtailadmin/views/account.py]
1 from django.conf import settings
2 from django.shortcuts import render, redirect
3 from django.contrib import messages
4 from django.contrib.auth.forms import SetPasswordForm
5 from django.contrib.auth.views import logout as auth_logout, login as auth_login
6 from django.utils.translation import ugettext as _
7 from django.views.decorators.debug import sensitive_post_parameters
8 from django.views.decorators.cache import never_cache
9
10 from wagtail.wagtailadmin import forms
11 from wagtail.wagtailusers.forms import NotificationPreferencesForm
12 from wagtail.wagtailusers.models import UserProfile
13 from wagtail.wagtailcore.models import UserPagePermissionsProxy
14
15
16 def account(request):
17 user_perms = UserPagePermissionsProxy(request.user)
18 show_notification_preferences = user_perms.can_edit_pages() or user_perms.can_publish_pages()
19
20 return render(request, 'wagtailadmin/account/account.html', {
21 'show_change_password': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True) and request.user.has_usable_password(),
22 'show_notification_preferences': show_notification_preferences
23 })
24
25
26 def change_password(request):
27 can_change_password = request.user.has_usable_password()
28
29 if can_change_password:
30 if request.POST:
31 form = SetPasswordForm(request.user, request.POST)
32
33 if form.is_valid():
34 form.save()
35
36 messages.success(request, _("Your password has been changed successfully!"))
37 return redirect('wagtailadmin_account')
38 else:
39 form = SetPasswordForm(request.user)
40 else:
41 form = None
42
43 return render(request, 'wagtailadmin/account/change_password.html', {
44 'form': form,
45 'can_change_password': can_change_password,
46 })
47
48
49 def notification_preferences(request):
50
51 if request.POST:
52 form = NotificationPreferencesForm(request.POST, instance=UserProfile.get_for_user(request.user))
53
54 if form.is_valid():
55 form.save()
56 messages.success(request, _("Your preferences have been updated successfully!"))
57 return redirect('wagtailadmin_account')
58 else:
59 form = NotificationPreferencesForm(instance=UserProfile.get_for_user(request.user))
60
61 # quick-and-dirty catch-all in case the form has been rendered with no
62 # fields, as the user has no customisable permissions
63 if not form.fields:
64 return redirect('wagtailadmin_account')
65
66 return render(request, 'wagtailadmin/account/notification_preferences.html', {
67 'form': form,
68 })
69
70
71 @sensitive_post_parameters()
72 @never_cache
73 def login(request):
74 if request.user.is_authenticated() and request.user.has_perm('wagtailadmin.access_admin'):
75 return redirect('wagtailadmin_home')
76 else:
77 from django.contrib.auth import get_user_model
78 return auth_login(request,
79 template_name='wagtailadmin/login.html',
80 authentication_form=forms.LoginForm,
81 extra_context={
82 'show_password_reset': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True),
83 'username_field': get_user_model().USERNAME_FIELD,
84 },
85 )
86
87
88 def logout(request):
89 response = auth_logout(request, next_page='wagtailadmin_login')
90
91 # By default, logging out will generate a fresh sessionid cookie. We want to use the
92 # absence of sessionid as an indication that front-end pages are being viewed by a
93 # non-logged-in user and are therefore cacheable, so we forcibly delete the cookie here.
94 response.delete_cookie(settings.SESSION_COOKIE_NAME,
95 domain=settings.SESSION_COOKIE_DOMAIN,
96 path=settings.SESSION_COOKIE_PATH)
97
98 # HACK: pretend that the session hasn't been modified, so that SessionMiddleware
99 # won't override the above and write a new cookie.
100 request.session.modified = False
101
102 return response
103
[end of wagtail/wagtailadmin/views/account.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wagtail/wagtailadmin/views/account.py b/wagtail/wagtailadmin/views/account.py
--- a/wagtail/wagtailadmin/views/account.py
+++ b/wagtail/wagtailadmin/views/account.py
@@ -3,6 +3,7 @@
from django.contrib import messages
from django.contrib.auth.forms import SetPasswordForm
from django.contrib.auth.views import logout as auth_logout, login as auth_login
+from django.contrib.auth import update_session_auth_hash
from django.utils.translation import ugettext as _
from django.views.decorators.debug import sensitive_post_parameters
from django.views.decorators.cache import never_cache
@@ -32,6 +33,7 @@
if form.is_valid():
form.save()
+ update_session_auth_hash(request, form.user)
messages.success(request, _("Your password has been changed successfully!"))
return redirect('wagtailadmin_account')
| {"golden_diff": "diff --git a/wagtail/wagtailadmin/views/account.py b/wagtail/wagtailadmin/views/account.py\n--- a/wagtail/wagtailadmin/views/account.py\n+++ b/wagtail/wagtailadmin/views/account.py\n@@ -3,6 +3,7 @@\n from django.contrib import messages\n from django.contrib.auth.forms import SetPasswordForm\n from django.contrib.auth.views import logout as auth_logout, login as auth_login\n+from django.contrib.auth import update_session_auth_hash\n from django.utils.translation import ugettext as _ \n from django.views.decorators.debug import sensitive_post_parameters\n from django.views.decorators.cache import never_cache\n@@ -32,6 +33,7 @@\n \n if form.is_valid():\n form.save()\n+ update_session_auth_hash(request, form.user)\n \n messages.success(request, _(\"Your password has been changed successfully!\"))\n return redirect('wagtailadmin_account')\n", "issue": "Wagtail doesn't gracefully support session invalidation on password change\nAccording to [Django's documentation](https://docs.djangoproject.com/en/1.7/topics/auth/default/#session-invalidation-on-password-change), SessionAuthenticationMiddleware is new in Django 1.7, enabled by default, and will be mandatory in Django 2.0.\n\nCurrently, when the middleware is loaded and the user changes their password, they are immediately kicked out to the sign in screen. The user's session is most likely invalidated. This is very obtrusive and the user is not informed if their password was successfully updated. I believe the offending code is in\n[account.py](https://github.com/torchbox/wagtail/blob/master/wagtail/wagtailadmin/views/account.py#L26) and attempted to modify the code from the example to make it work, but the outcome was the same:\n\n``` python\n# ...\nfrom django.contrib.auth import update_session_auth_hash # new code\n# ...\ndef change_password(request):\n can_change_password = request.user.has_usable_password()\n\n if can_change_password:\n if request.POST:\n form = SetPasswordForm(request.user, request.POST)\n\n if form.is_valid():\n form.save()\n update_session_auth_hash(request, form.user) # new code\n\n messages.success(request, _(\"Your password has been changed successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = SetPasswordForm(request.user)\n else:\n form = None\n\n return render(request, 'wagtailadmin/account/change_password.html', {\n 'form': form,\n 'can_change_password': can_change_password,\n })\n```\n\nI am, currently, a Django novice, so that's as far as I was able to get. 
Hope this is an easy fix!\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.shortcuts import render, redirect\nfrom django.contrib import messages\nfrom django.contrib.auth.forms import SetPasswordForm\nfrom django.contrib.auth.views import logout as auth_logout, login as auth_login\nfrom django.utils.translation import ugettext as _ \nfrom django.views.decorators.debug import sensitive_post_parameters\nfrom django.views.decorators.cache import never_cache\n\nfrom wagtail.wagtailadmin import forms\nfrom wagtail.wagtailusers.forms import NotificationPreferencesForm\nfrom wagtail.wagtailusers.models import UserProfile\nfrom wagtail.wagtailcore.models import UserPagePermissionsProxy\n\n\ndef account(request):\n user_perms = UserPagePermissionsProxy(request.user)\n show_notification_preferences = user_perms.can_edit_pages() or user_perms.can_publish_pages()\n\n return render(request, 'wagtailadmin/account/account.html', {\n 'show_change_password': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True) and request.user.has_usable_password(),\n 'show_notification_preferences': show_notification_preferences\n })\n\n\ndef change_password(request):\n can_change_password = request.user.has_usable_password()\n\n if can_change_password:\n if request.POST:\n form = SetPasswordForm(request.user, request.POST)\n\n if form.is_valid():\n form.save()\n\n messages.success(request, _(\"Your password has been changed successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = SetPasswordForm(request.user)\n else:\n form = None\n\n return render(request, 'wagtailadmin/account/change_password.html', {\n 'form': form,\n 'can_change_password': can_change_password,\n })\n\n\ndef notification_preferences(request):\n\n if request.POST:\n form = NotificationPreferencesForm(request.POST, instance=UserProfile.get_for_user(request.user))\n\n if form.is_valid():\n form.save()\n messages.success(request, _(\"Your preferences have been updated successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = NotificationPreferencesForm(instance=UserProfile.get_for_user(request.user))\n\n # quick-and-dirty catch-all in case the form has been rendered with no\n # fields, as the user has no customisable permissions\n if not form.fields:\n return redirect('wagtailadmin_account')\n\n return render(request, 'wagtailadmin/account/notification_preferences.html', {\n 'form': form,\n })\n\n\n@sensitive_post_parameters()\n@never_cache\ndef login(request):\n if request.user.is_authenticated() and request.user.has_perm('wagtailadmin.access_admin'):\n return redirect('wagtailadmin_home')\n else:\n from django.contrib.auth import get_user_model\n return auth_login(request,\n template_name='wagtailadmin/login.html',\n authentication_form=forms.LoginForm,\n extra_context={\n 'show_password_reset': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True),\n 'username_field': get_user_model().USERNAME_FIELD,\n },\n )\n\n\ndef logout(request):\n response = auth_logout(request, next_page='wagtailadmin_login')\n\n # By default, logging out will generate a fresh sessionid cookie. 
We want to use the\n # absence of sessionid as an indication that front-end pages are being viewed by a\n # non-logged-in user and are therefore cacheable, so we forcibly delete the cookie here.\n response.delete_cookie(settings.SESSION_COOKIE_NAME,\n domain=settings.SESSION_COOKIE_DOMAIN,\n path=settings.SESSION_COOKIE_PATH)\n\n # HACK: pretend that the session hasn't been modified, so that SessionMiddleware\n # won't override the above and write a new cookie.\n request.session.modified = False\n\n return response\n", "path": "wagtail/wagtailadmin/views/account.py"}]} | 1,902 | 193 |
gh_patches_debug_21663 | rasdani/github-patches | git_diff | boto__boto-2489 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
boto.glacier.utils.compute_hashes_from_fileobj No Longer Works with Binary Files
Commit a4c9a781f47a61ddde2b3a3802b93c1ed29cdf16 added a `.encode('utf-8')` to the result of the first read in `boto.glacier.utils.compute_hashes_from_fileobj` (although it doesn't attempt to encode the results of the reads in the loop below that).
This breaks for binary files for me with python 2.6.6:
```
(glacier)[cperl@localhost ~]$ dd if=/dev/urandom of=/tmp/foo.bin bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.110299 s, 9.5 MB/s
(glacier)[cperl@localhost ~]$ python
Python 2.6.6 (r266:84292, Nov 22 2013, 12:16:22)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import boto.glacier.utils
>>> with open("/tmp/foo.bin", 'r') as f:
... boto.glacier.utils.compute_hashes_from_fileobj(f)
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/home/cperl/virtualenv/glacier/lib/python2.6/site-packages/boto/glacier/utils.py", line 127, in compute_hashes_from_fileobj
chunk = fileobj.read(chunk_size).encode('utf-8')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xf4 in position 0: ordinal not in range(128)
```
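
A minimal illustration of the failure mode, assuming Python 2 string semantics as in the report (the byte values are arbitrary examples chosen to mirror the traceback above):

```python
# Python 2: str.encode('utf-8') first decodes the byte string with the default
# ASCII codec, so any non-ASCII byte from a binary read fails before hashing.
chunk = '\xf4\x8a\x01'   # what fileobj.read() returns for binary data (a Python 2 str)
chunk.encode('utf-8')    # raises UnicodeDecodeError: 'ascii' codec can't decode byte 0xf4
```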
</issue>
<code>
[start of boto/glacier/utils.py]
1 # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a
4 # copy of this software and associated documentation files (the
5 # "Software"), to deal in the Software without restriction, including
6 # without limitation the rights to use, copy, modify, merge, publish, dis-
7 # tribute, sublicense, and/or sell copies of the Software, and to permit
8 # persons to whom the Software is furnished to do so, subject to the fol-
9 # lowing conditions:
10 #
11 # The above copyright notice and this permission notice shall be included
12 # in all copies or substantial portions of the Software.
13 #
14 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
15 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
16 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
17 # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
18 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
19 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
20 # IN THE SOFTWARE.
21 #
22 import hashlib
23 import math
24 import binascii
25
26
27 _MEGABYTE = 1024 * 1024
28 DEFAULT_PART_SIZE = 4 * _MEGABYTE
29 MAXIMUM_NUMBER_OF_PARTS = 10000
30
31
32 def minimum_part_size(size_in_bytes, default_part_size=DEFAULT_PART_SIZE):
33 """Calculate the minimum part size needed for a multipart upload.
34
35 Glacier allows a maximum of 10,000 parts per upload. It also
36 states that the maximum archive size is 10,000 * 4 GB, which means
37 the part size can range from 1MB to 4GB (provided it is one 1MB
38 multiplied by a power of 2).
39
40 This function will compute what the minimum part size must be in
41 order to upload a file of size ``size_in_bytes``.
42
43 It will first check if ``default_part_size`` is sufficient for
44 a part size given the ``size_in_bytes``. If this is not the case,
45 then the smallest part size than can accomodate a file of size
46 ``size_in_bytes`` will be returned.
47
48 If the file size is greater than the maximum allowed archive
49 size of 10,000 * 4GB, a ``ValueError`` will be raised.
50
51 """
52 # The default part size (4 MB) will be too small for a very large
53 # archive, as there is a limit of 10,000 parts in a multipart upload.
54 # This puts the maximum allowed archive size with the default part size
55 # at 40,000 MB. We need to do a sanity check on the part size, and find
56 # one that works if the default is too small.
57 part_size = _MEGABYTE
58 if (default_part_size * MAXIMUM_NUMBER_OF_PARTS) < size_in_bytes:
59 if size_in_bytes > (4096 * _MEGABYTE * 10000):
60 raise ValueError("File size too large: %s" % size_in_bytes)
61 min_part_size = size_in_bytes / 10000
62 power = 3
63 while part_size < min_part_size:
64 part_size = math.ldexp(_MEGABYTE, power)
65 power += 1
66 part_size = int(part_size)
67 else:
68 part_size = default_part_size
69 return part_size
70
71
72 def chunk_hashes(bytestring, chunk_size=_MEGABYTE):
73 chunk_count = int(math.ceil(len(bytestring) / float(chunk_size)))
74 hashes = []
75 for i in range(chunk_count):
76 start = i * chunk_size
77 end = (i + 1) * chunk_size
78 hashes.append(hashlib.sha256(bytestring[start:end]).digest())
79 if not hashes:
80 return [hashlib.sha256(b'').digest()]
81 return hashes
82
83
84 def tree_hash(fo):
85 """
86 Given a hash of each 1MB chunk (from chunk_hashes) this will hash
87 together adjacent hashes until it ends up with one big one. So a
88 tree of hashes.
89 """
90 hashes = []
91 hashes.extend(fo)
92 while len(hashes) > 1:
93 new_hashes = []
94 while True:
95 if len(hashes) > 1:
96 first = hashes.pop(0)
97 second = hashes.pop(0)
98 new_hashes.append(hashlib.sha256(first + second).digest())
99 elif len(hashes) == 1:
100 only = hashes.pop(0)
101 new_hashes.append(only)
102 else:
103 break
104 hashes.extend(new_hashes)
105 return hashes[0]
106
107
108 def compute_hashes_from_fileobj(fileobj, chunk_size=1024 * 1024):
109 """Compute the linear and tree hash from a fileobj.
110
111 This function will compute the linear/tree hash of a fileobj
112 in a single pass through the fileobj.
113
114 :param fileobj: A file like object.
115
116 :param chunk_size: The size of the chunks to use for the tree
117 hash. This is also the buffer size used to read from
118 `fileobj`.
119
120 :rtype: tuple
121 :return: A tuple of (linear_hash, tree_hash). Both hashes
122 are returned in hex.
123
124 """
125 linear_hash = hashlib.sha256()
126 chunks = []
127 chunk = fileobj.read(chunk_size).encode('utf-8')
128 while chunk:
129 linear_hash.update(chunk)
130 chunks.append(hashlib.sha256(chunk).digest())
131 chunk = fileobj.read(chunk_size)
132 if not chunks:
133 chunks = [hashlib.sha256(b'').digest()]
134 return linear_hash.hexdigest(), bytes_to_hex(tree_hash(chunks))
135
136
137 def bytes_to_hex(str_as_bytes):
138 return binascii.hexlify(str_as_bytes)
139
140
141 def tree_hash_from_str(str_as_bytes):
142 """
143
144 :type str_as_bytes: str
145 :param str_as_bytes: The string for which to compute the tree hash.
146
147 :rtype: str
148 :return: The computed tree hash, returned as hex.
149
150 """
151 return bytes_to_hex(tree_hash(chunk_hashes(str_as_bytes)))
152
153
154 class ResettingFileSender(object):
155 def __init__(self, archive):
156 self._archive = archive
157 self._starting_offset = archive.tell()
158
159 def __call__(self, connection, method, path, body, headers):
160 try:
161 connection.request(method, path, self._archive, headers)
162 return connection.getresponse()
163 finally:
164 self._archive.seek(self._starting_offset)
165
[end of boto/glacier/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/boto/glacier/utils.py b/boto/glacier/utils.py
--- a/boto/glacier/utils.py
+++ b/boto/glacier/utils.py
@@ -23,6 +23,8 @@
import math
import binascii
+from boto.compat import six
+
_MEGABYTE = 1024 * 1024
DEFAULT_PART_SIZE = 4 * _MEGABYTE
@@ -122,10 +124,19 @@
are returned in hex.
"""
+ # Python 3+, not binary
+ if six.PY3 and hasattr(fileobj, 'mode') and 'b' not in fileobj.mode:
+ raise ValueError('File-like object must be opened in binary mode!')
+
linear_hash = hashlib.sha256()
chunks = []
- chunk = fileobj.read(chunk_size).encode('utf-8')
+ chunk = fileobj.read(chunk_size)
while chunk:
+ # It's possible to get a file-like object that has no mode (checked
+ # above) and returns something other than bytes (e.g. str). So here
+ # we try to catch that and encode to bytes.
+ if not isinstance(chunk, bytes):
+ chunk = chunk.encode(getattr(fileobj, 'encoding', '') or 'utf-8')
linear_hash.update(chunk)
chunks.append(hashlib.sha256(chunk).digest())
chunk = fileobj.read(chunk_size)
| {"golden_diff": "diff --git a/boto/glacier/utils.py b/boto/glacier/utils.py\n--- a/boto/glacier/utils.py\n+++ b/boto/glacier/utils.py\n@@ -23,6 +23,8 @@\n import math\n import binascii\n \n+from boto.compat import six\n+\n \n _MEGABYTE = 1024 * 1024\n DEFAULT_PART_SIZE = 4 * _MEGABYTE\n@@ -122,10 +124,19 @@\n are returned in hex.\n \n \"\"\"\n+ # Python 3+, not binary\n+ if six.PY3 and hasattr(fileobj, 'mode') and 'b' not in fileobj.mode:\n+ raise ValueError('File-like object must be opened in binary mode!')\n+\n linear_hash = hashlib.sha256()\n chunks = []\n- chunk = fileobj.read(chunk_size).encode('utf-8')\n+ chunk = fileobj.read(chunk_size)\n while chunk:\n+ # It's possible to get a file-like object that has no mode (checked\n+ # above) and returns something other than bytes (e.g. str). So here\n+ # we try to catch that and encode to bytes.\n+ if not isinstance(chunk, bytes):\n+ chunk = chunk.encode(getattr(fileobj, 'encoding', '') or 'utf-8')\n linear_hash.update(chunk)\n chunks.append(hashlib.sha256(chunk).digest())\n chunk = fileobj.read(chunk_size)\n", "issue": "boto.glacier.utils.compute_hashes_from_fileobj No Longer Works with Binary Files\nCommit a4c9a781f47a61ddde2b3a3802b93c1ed29cdf16 added a `.encode('utf-8')` to the result of the first read in `boto.glacier.utils.compute_hashes_from_fileobj` (although it doesn't attempt to encode the results of the reads in the loop below that).\n\nThis breaks for binary files for me with python 2.6.6:\n\n```\n(glacier)[cperl@localhost ~]$ dd if=/dev/urandom of=/tmp/foo.bin bs=1M count=1\n1+0 records in\n1+0 records out\n1048576 bytes (1.0 MB) copied, 0.110299 s, 9.5 MB/s\n\n(glacier)[cperl@localhost ~]$ python\nPython 2.6.6 (r266:84292, Nov 22 2013, 12:16:22) \n[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import boto.glacier.utils\n>>> with open(\"/tmp/foo.bin\", 'r') as f:\n... boto.glacier.utils.compute_hashes_from_fileobj(f)\n... \nTraceback (most recent call last):\n File \"<stdin>\", line 2, in <module>\n File \"/home/cperl/virtualenv/glacier/lib/python2.6/site-packages/boto/glacier/utils.py\", line 127, in compute_hashes_from_fileobj\n chunk = fileobj.read(chunk_size).encode('utf-8')\nUnicodeDecodeError: 'ascii' codec can't decode byte 0xf4 in position 0: ordinal not in range(128)\n```\n\n", "before_files": [{"content": "# Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n#\nimport hashlib\nimport math\nimport binascii\n\n\n_MEGABYTE = 1024 * 1024\nDEFAULT_PART_SIZE = 4 * _MEGABYTE\nMAXIMUM_NUMBER_OF_PARTS = 10000\n\n\ndef minimum_part_size(size_in_bytes, default_part_size=DEFAULT_PART_SIZE):\n \"\"\"Calculate the minimum part size needed for a multipart upload.\n\n Glacier allows a maximum of 10,000 parts per upload. It also\n states that the maximum archive size is 10,000 * 4 GB, which means\n the part size can range from 1MB to 4GB (provided it is one 1MB\n multiplied by a power of 2).\n\n This function will compute what the minimum part size must be in\n order to upload a file of size ``size_in_bytes``.\n\n It will first check if ``default_part_size`` is sufficient for\n a part size given the ``size_in_bytes``. If this is not the case,\n then the smallest part size than can accomodate a file of size\n ``size_in_bytes`` will be returned.\n\n If the file size is greater than the maximum allowed archive\n size of 10,000 * 4GB, a ``ValueError`` will be raised.\n\n \"\"\"\n # The default part size (4 MB) will be too small for a very large\n # archive, as there is a limit of 10,000 parts in a multipart upload.\n # This puts the maximum allowed archive size with the default part size\n # at 40,000 MB. We need to do a sanity check on the part size, and find\n # one that works if the default is too small.\n part_size = _MEGABYTE\n if (default_part_size * MAXIMUM_NUMBER_OF_PARTS) < size_in_bytes:\n if size_in_bytes > (4096 * _MEGABYTE * 10000):\n raise ValueError(\"File size too large: %s\" % size_in_bytes)\n min_part_size = size_in_bytes / 10000\n power = 3\n while part_size < min_part_size:\n part_size = math.ldexp(_MEGABYTE, power)\n power += 1\n part_size = int(part_size)\n else:\n part_size = default_part_size\n return part_size\n\n\ndef chunk_hashes(bytestring, chunk_size=_MEGABYTE):\n chunk_count = int(math.ceil(len(bytestring) / float(chunk_size)))\n hashes = []\n for i in range(chunk_count):\n start = i * chunk_size\n end = (i + 1) * chunk_size\n hashes.append(hashlib.sha256(bytestring[start:end]).digest())\n if not hashes:\n return [hashlib.sha256(b'').digest()]\n return hashes\n\n\ndef tree_hash(fo):\n \"\"\"\n Given a hash of each 1MB chunk (from chunk_hashes) this will hash\n together adjacent hashes until it ends up with one big one. So a\n tree of hashes.\n \"\"\"\n hashes = []\n hashes.extend(fo)\n while len(hashes) > 1:\n new_hashes = []\n while True:\n if len(hashes) > 1:\n first = hashes.pop(0)\n second = hashes.pop(0)\n new_hashes.append(hashlib.sha256(first + second).digest())\n elif len(hashes) == 1:\n only = hashes.pop(0)\n new_hashes.append(only)\n else:\n break\n hashes.extend(new_hashes)\n return hashes[0]\n\n\ndef compute_hashes_from_fileobj(fileobj, chunk_size=1024 * 1024):\n \"\"\"Compute the linear and tree hash from a fileobj.\n\n This function will compute the linear/tree hash of a fileobj\n in a single pass through the fileobj.\n\n :param fileobj: A file like object.\n\n :param chunk_size: The size of the chunks to use for the tree\n hash. This is also the buffer size used to read from\n `fileobj`.\n\n :rtype: tuple\n :return: A tuple of (linear_hash, tree_hash). 
Both hashes\n are returned in hex.\n\n \"\"\"\n linear_hash = hashlib.sha256()\n chunks = []\n chunk = fileobj.read(chunk_size).encode('utf-8')\n while chunk:\n linear_hash.update(chunk)\n chunks.append(hashlib.sha256(chunk).digest())\n chunk = fileobj.read(chunk_size)\n if not chunks:\n chunks = [hashlib.sha256(b'').digest()]\n return linear_hash.hexdigest(), bytes_to_hex(tree_hash(chunks))\n\n\ndef bytes_to_hex(str_as_bytes):\n return binascii.hexlify(str_as_bytes)\n\n\ndef tree_hash_from_str(str_as_bytes):\n \"\"\"\n\n :type str_as_bytes: str\n :param str_as_bytes: The string for which to compute the tree hash.\n\n :rtype: str\n :return: The computed tree hash, returned as hex.\n\n \"\"\"\n return bytes_to_hex(tree_hash(chunk_hashes(str_as_bytes)))\n\n\nclass ResettingFileSender(object):\n def __init__(self, archive):\n self._archive = archive\n self._starting_offset = archive.tell()\n\n def __call__(self, connection, method, path, body, headers):\n try:\n connection.request(method, path, self._archive, headers)\n return connection.getresponse()\n finally:\n self._archive.seek(self._starting_offset)\n", "path": "boto/glacier/utils.py"}]} | 2,851 | 325 |
gh_patches_debug_25198 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-contrib-890 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
error with logging instrumentation - AttributeError: 'ProxyTracerProvider' object has no attribute 'resource'
**Describe your environment**
LoggingInstrumentor().instrument() is throwing an error
```
Traceback (most recent call last):
File "manage.py", line 30, in <module>
main()
File "manage.py", line 14, in main
LoggingInstrumentor().instrument(set_logging_format=True)
File "/home/vamsikrishnam/otel/lib/python3.8/site-packages/opentelemetry/instrumentation/instrumentor.py", line 109, in instrument
result = self._instrument( # pylint: disable=assignment-from-no-return
File "/home/vamsikrishnam/otel/lib/python3.8/site-packages/opentelemetry/instrumentation/logging/__init__.py", line 81, in _instrument
resource = provider.resource if provider else None
AttributeError: 'ProxyTracerProvider' object has no attribute 'resource'
```
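
The error can be reproduced without Django: until an SDK `TracerProvider` has been installed, the API returns a `ProxyTracerProvider`, which has no `resource` attribute. A small check, assuming only `opentelemetry-api` is configured at this point:

```python
# No SDK TracerProvider has been set yet, so the API hands back its proxy object.
from opentelemetry import trace

provider = trace.get_tracer_provider()
print(type(provider).__name__)        # ProxyTracerProvider
print(hasattr(provider, "resource"))  # False -> provider.resource raises AttributeError
```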
**Steps to reproduce**
The packages below are installed, and I am trying to instrument with the two lines below:
> LoggingInstrumentor().instrument(set_logging_format=True)
> DjangoInstrumentor().instrument()
```
(otel) vamsikrishnam@NHHYDL-00217:~/django$ pip list | grep opentele
opentelemetry-api 1.7.1
opentelemetry-exporter-otlp 1.7.1
opentelemetry-exporter-otlp-proto-grpc 1.7.1
opentelemetry-exporter-otlp-proto-http 1.7.1
opentelemetry-instrumentation 0.26b1
opentelemetry-instrumentation-django 0.26b1
opentelemetry-instrumentation-logging 0.26b1
opentelemetry-instrumentation-wsgi 0.26b1
opentelemetry-propagator-b3 1.7.1
opentelemetry-proto 1.7.1
opentelemetry-sdk 1.7.1
opentelemetry-semantic-conventions 0.26b1
opentelemetry-util-http 0.26b1
```
**What is the expected behavior?**
What did you expect to see?
logging should be instrumented properly.
**What is the actual behavior?**
What did you see instead?
logging should be instrumented properly and populate the otelTraceID and otelSpanID in the logs.
**Additional context**
Add any other context about the problem here.
$ python3 --version
Python 3.8.10
manage.py:
```
#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys
import logging
from opentelemetry.instrumentation.django import DjangoInstrumentor
from opentelemetry.instrumentation.logging import LoggingInstrumentor
def main():
"""Run administrative tasks."""
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'django_project.settings')
logging.basicConfig(level = logging.DEBUG)
LoggingInstrumentor().instrument(set_logging_format=True)
DjangoInstrumentor().instrument()
# LoggingInstrumentor().instrument(set_logging_format=True,log_level=logging.DEBUG)
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)
if __name__ == '__main__':
main()
```
</issue>
<code>
[start of instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # pylint: disable=empty-docstring,no-value-for-parameter,no-member,no-name-in-module
16
17 import logging # pylint: disable=import-self
18 from os import environ
19 from typing import Collection
20
21 from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
22 from opentelemetry.instrumentation.logging.constants import (
23 _MODULE_DOC,
24 DEFAULT_LOGGING_FORMAT,
25 )
26 from opentelemetry.instrumentation.logging.environment_variables import (
27 OTEL_PYTHON_LOG_CORRELATION,
28 OTEL_PYTHON_LOG_FORMAT,
29 OTEL_PYTHON_LOG_LEVEL,
30 )
31 from opentelemetry.instrumentation.logging.package import _instruments
32 from opentelemetry.trace import (
33 INVALID_SPAN,
34 INVALID_SPAN_CONTEXT,
35 get_current_span,
36 get_tracer_provider,
37 )
38
39 __doc__ = _MODULE_DOC
40
41 LEVELS = {
42 "debug": logging.DEBUG,
43 "info": logging.INFO,
44 "warning": logging.WARNING,
45 "error": logging.ERROR,
46 }
47
48
49 class LoggingInstrumentor(BaseInstrumentor): # pylint: disable=empty-docstring
50 __doc__ = f"""An instrumentor for stdlib logging module.
51
52 This instrumentor injects tracing context into logging records and optionally sets the global logging format to the following:
53
54 .. code-block::
55
56 {DEFAULT_LOGGING_FORMAT}
57
58 Args:
59 tracer_provider: Tracer provider instance that can be used to fetch a tracer.
60 set_logging_format: When set to True, it calls logging.basicConfig() and sets a logging format.
61 logging_format: Accepts a string and sets it as the logging format when set_logging_format
62 is set to True.
63 log_level: Accepts one of the following values and sets the logging level to it.
64 logging.INFO
65 logging.DEBUG
66 logging.WARN
67 logging.ERROR
68 logging.FATAL
69
70 See `BaseInstrumentor`
71 """
72
73 _old_factory = None
74
75 def instrumentation_dependencies(self) -> Collection[str]:
76 return _instruments
77
78 def _instrument(self, **kwargs):
79 service_name = ""
80 provider = kwargs.get("tracer_provider", None) or get_tracer_provider()
81 resource = provider.resource if provider else None
82 if resource:
83 service_name = resource.attributes.get("service.name")
84
85 old_factory = logging.getLogRecordFactory()
86 LoggingInstrumentor._old_factory = old_factory
87
88 def record_factory(*args, **kwargs):
89 record = old_factory(*args, **kwargs)
90
91 record.otelSpanID = "0"
92 record.otelTraceID = "0"
93 record.otelServiceName = service_name
94
95 span = get_current_span()
96 if span != INVALID_SPAN:
97 ctx = span.get_span_context()
98 if ctx != INVALID_SPAN_CONTEXT:
99 record.otelSpanID = format(ctx.span_id, "016x")
100 record.otelTraceID = format(ctx.trace_id, "032x")
101 return record
102
103 logging.setLogRecordFactory(record_factory)
104
105 set_logging_format = kwargs.get(
106 "set_logging_format",
107 environ.get(OTEL_PYTHON_LOG_CORRELATION, "false").lower()
108 == "true",
109 )
110
111 if set_logging_format:
112 log_format = kwargs.get(
113 "logging_format", environ.get(OTEL_PYTHON_LOG_FORMAT, None)
114 )
115 log_format = log_format or DEFAULT_LOGGING_FORMAT
116
117 log_level = kwargs.get(
118 "log_level", LEVELS.get(environ.get(OTEL_PYTHON_LOG_LEVEL))
119 )
120 log_level = log_level or logging.INFO
121
122 logging.basicConfig(format=log_format, level=log_level)
123
124 def _uninstrument(self, **kwargs):
125 if LoggingInstrumentor._old_factory:
126 logging.setLogRecordFactory(LoggingInstrumentor._old_factory)
127 LoggingInstrumentor._old_factory = None
128
[end of instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py b/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py
@@ -76,20 +76,29 @@
return _instruments
def _instrument(self, **kwargs):
- service_name = ""
- provider = kwargs.get("tracer_provider", None) or get_tracer_provider()
- resource = provider.resource if provider else None
- if resource:
- service_name = resource.attributes.get("service.name")
+ provider = kwargs.get("tracer_provider", None) or get_tracer_provider()
old_factory = logging.getLogRecordFactory()
LoggingInstrumentor._old_factory = old_factory
+ service_name = None
+
def record_factory(*args, **kwargs):
record = old_factory(*args, **kwargs)
record.otelSpanID = "0"
record.otelTraceID = "0"
+
+ nonlocal service_name
+ if service_name is None:
+ resource = getattr(provider, "resource", None)
+ if resource:
+ service_name = (
+ resource.attributes.get("service.name") or ""
+ )
+ else:
+ service_name = ""
+
record.otelServiceName = service_name
span = get_current_span()
| {"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py b/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py\n@@ -76,20 +76,29 @@\n return _instruments\n \n def _instrument(self, **kwargs):\n- service_name = \"\"\n- provider = kwargs.get(\"tracer_provider\", None) or get_tracer_provider()\n- resource = provider.resource if provider else None\n- if resource:\n- service_name = resource.attributes.get(\"service.name\")\n \n+ provider = kwargs.get(\"tracer_provider\", None) or get_tracer_provider()\n old_factory = logging.getLogRecordFactory()\n LoggingInstrumentor._old_factory = old_factory\n \n+ service_name = None\n+\n def record_factory(*args, **kwargs):\n record = old_factory(*args, **kwargs)\n \n record.otelSpanID = \"0\"\n record.otelTraceID = \"0\"\n+\n+ nonlocal service_name\n+ if service_name is None:\n+ resource = getattr(provider, \"resource\", None)\n+ if resource:\n+ service_name = (\n+ resource.attributes.get(\"service.name\") or \"\"\n+ )\n+ else:\n+ service_name = \"\"\n+\n record.otelServiceName = service_name\n \n span = get_current_span()\n", "issue": "error with logging instrumentation - AttributeError: 'ProxyTracerProvider' object has no attribute 'resource'\n**Describe your environment** \r\n\r\nLoggingInstrumentor().instrument() is throwing an error\r\n```\r\nTraceback (most recent call last):\r\n File \"manage.py\", line 30, in <module>\r\n main()\r\n File \"manage.py\", line 14, in main\r\n LoggingInstrumentor().instrument(set_logging_format=True)\r\n File \"/home/vamsikrishnam/otel/lib/python3.8/site-packages/opentelemetry/instrumentation/instrumentor.py\", line 109, in instrument\r\n result = self._instrument( # pylint: disable=assignment-from-no-return\r\n File \"/home/vamsikrishnam/otel/lib/python3.8/site-packages/opentelemetry/instrumentation/logging/__init__.py\", line 81, in _instrument\r\n resource = provider.resource if provider else None\r\nAttributeError: 'ProxyTracerProvider' object has no attribute 'resource'\r\n```\r\n\r\n**Steps to reproduce**\r\nBelow packages installed and trying to instrument with below two lines:\r\n\r\n> LoggingInstrumentor().instrument(set_logging_format=True)\r\n> DjangoInstrumentor().instrument()\r\n\r\n```\r\n(otel) vamsikrishnam@NHHYDL-00217:~/django$ pip list | grep opentele\r\nopentelemetry-api 1.7.1\r\nopentelemetry-exporter-otlp 1.7.1\r\nopentelemetry-exporter-otlp-proto-grpc 1.7.1\r\nopentelemetry-exporter-otlp-proto-http 1.7.1\r\nopentelemetry-instrumentation 0.26b1\r\nopentelemetry-instrumentation-django 0.26b1\r\nopentelemetry-instrumentation-logging 0.26b1\r\nopentelemetry-instrumentation-wsgi 0.26b1\r\nopentelemetry-propagator-b3 1.7.1\r\nopentelemetry-proto 1.7.1\r\nopentelemetry-sdk 1.7.1\r\nopentelemetry-semantic-conventions 0.26b1\r\nopentelemetry-util-http 0.26b1\r\n```\r\n\r\n**What is the expected behavior?**\r\nWhat did you expect to see?\r\nlogging should be instrumented properly.\r\n\r\n**What is the actual behavior?**\r\nWhat did you see instead?\r\nlogging should be instrumented properly and populate the otelTraceID and otelSpanID in the logs.\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\r\n$ python3 
--version\r\nPython 3.8.10\r\n\r\nmanage.py:\r\n\r\n```\r\n#!/usr/bin/env python\r\n\"\"\"Django's command-line utility for administrative tasks.\"\"\"\r\nimport os\r\nimport sys\r\nimport logging\r\nfrom opentelemetry.instrumentation.django import DjangoInstrumentor\r\nfrom opentelemetry.instrumentation.logging import LoggingInstrumentor\r\n\r\n\r\ndef main():\r\n \"\"\"Run administrative tasks.\"\"\"\r\n os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'django_project.settings')\r\n logging.basicConfig(level = logging.DEBUG)\r\n LoggingInstrumentor().instrument(set_logging_format=True)\r\n DjangoInstrumentor().instrument()\r\n # LoggingInstrumentor().instrument(set_logging_format=True,log_level=logging.DEBUG)\r\n\r\n try:\r\n from django.core.management import execute_from_command_line\r\n except ImportError as exc:\r\n raise ImportError(\r\n \"Couldn't import Django. Are you sure it's installed and \"\r\n \"available on your PYTHONPATH environment variable? Did you \"\r\n \"forget to activate a virtual environment?\"\r\n ) from exc\r\n execute_from_command_line(sys.argv)\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint: disable=empty-docstring,no-value-for-parameter,no-member,no-name-in-module\n\nimport logging # pylint: disable=import-self\nfrom os import environ\nfrom typing import Collection\n\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.logging.constants import (\n _MODULE_DOC,\n DEFAULT_LOGGING_FORMAT,\n)\nfrom opentelemetry.instrumentation.logging.environment_variables import (\n OTEL_PYTHON_LOG_CORRELATION,\n OTEL_PYTHON_LOG_FORMAT,\n OTEL_PYTHON_LOG_LEVEL,\n)\nfrom opentelemetry.instrumentation.logging.package import _instruments\nfrom opentelemetry.trace import (\n INVALID_SPAN,\n INVALID_SPAN_CONTEXT,\n get_current_span,\n get_tracer_provider,\n)\n\n__doc__ = _MODULE_DOC\n\nLEVELS = {\n \"debug\": logging.DEBUG,\n \"info\": logging.INFO,\n \"warning\": logging.WARNING,\n \"error\": logging.ERROR,\n}\n\n\nclass LoggingInstrumentor(BaseInstrumentor): # pylint: disable=empty-docstring\n __doc__ = f\"\"\"An instrumentor for stdlib logging module.\n\n This instrumentor injects tracing context into logging records and optionally sets the global logging format to the following:\n\n .. 
code-block::\n\n {DEFAULT_LOGGING_FORMAT}\n\n Args:\n tracer_provider: Tracer provider instance that can be used to fetch a tracer.\n set_logging_format: When set to True, it calls logging.basicConfig() and sets a logging format.\n logging_format: Accepts a string and sets it as the logging format when set_logging_format\n is set to True.\n log_level: Accepts one of the following values and sets the logging level to it.\n logging.INFO\n logging.DEBUG\n logging.WARN\n logging.ERROR\n logging.FATAL\n\n See `BaseInstrumentor`\n \"\"\"\n\n _old_factory = None\n\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n\n def _instrument(self, **kwargs):\n service_name = \"\"\n provider = kwargs.get(\"tracer_provider\", None) or get_tracer_provider()\n resource = provider.resource if provider else None\n if resource:\n service_name = resource.attributes.get(\"service.name\")\n\n old_factory = logging.getLogRecordFactory()\n LoggingInstrumentor._old_factory = old_factory\n\n def record_factory(*args, **kwargs):\n record = old_factory(*args, **kwargs)\n\n record.otelSpanID = \"0\"\n record.otelTraceID = \"0\"\n record.otelServiceName = service_name\n\n span = get_current_span()\n if span != INVALID_SPAN:\n ctx = span.get_span_context()\n if ctx != INVALID_SPAN_CONTEXT:\n record.otelSpanID = format(ctx.span_id, \"016x\")\n record.otelTraceID = format(ctx.trace_id, \"032x\")\n return record\n\n logging.setLogRecordFactory(record_factory)\n\n set_logging_format = kwargs.get(\n \"set_logging_format\",\n environ.get(OTEL_PYTHON_LOG_CORRELATION, \"false\").lower()\n == \"true\",\n )\n\n if set_logging_format:\n log_format = kwargs.get(\n \"logging_format\", environ.get(OTEL_PYTHON_LOG_FORMAT, None)\n )\n log_format = log_format or DEFAULT_LOGGING_FORMAT\n\n log_level = kwargs.get(\n \"log_level\", LEVELS.get(environ.get(OTEL_PYTHON_LOG_LEVEL))\n )\n log_level = log_level or logging.INFO\n\n logging.basicConfig(format=log_format, level=log_level)\n\n def _uninstrument(self, **kwargs):\n if LoggingInstrumentor._old_factory:\n logging.setLogRecordFactory(LoggingInstrumentor._old_factory)\n LoggingInstrumentor._old_factory = None\n", "path": "instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py"}]} | 2,572 | 355 |
gh_patches_debug_28162 | rasdani/github-patches | git_diff | Qiskit__qiskit-12069 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Documentation of RVGate is incorrect
### Environment
N/A
### What is happening?
Received this in an email:
>Hi, I think I found some errors in the Qiskit documentation at
<https://docs.quantum.ibm.com/api/qiskit/qiskit.circuit.library.RVGate>
and I'm contacting you because you look like the two people who most recently edited the source file at
<https://github.com/Qiskit/qiskit/blob/stable/0.46/qiskit/circuit/library/generalized_gates/rv.py>
The matrix representation given in the documentation seems to be wrong. I compared it to the definition given in
<https://arxiv.org/pdf/2104.14875.pdf>
on page 4, equation 1, we see the definition of the rotation matrix. It almost matches the definition given in the documentation at
<https://docs.quantum.ibm.com/api/qiskit/qiskit.circuit.library.RVGate>
except for two mistakes: the "sinc" function should be "sin", and the angle should be divided by two. This can be compared to the source code at
<https://github.com/Qiskit/qiskit/blob/stable/0.46/qiskit/circuit/library/generalized_gates/rv.py>
at lines 86 and 87, where we see the angle divided by two, and we see the use of the sin and cos functions.
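
Put differently, the corrected matrix (consistent with eq. (1) of the cited paper and with the `angle / 2`, `sin` and `cos` usage at lines 86 and 87 of the source) would read roughly as follows, writing $\theta = |\vec{v}|$; this is a paraphrase of the report, not the exact wording of any fix:

$$
R(\vec{v}) = e^{-i\,\vec{v}\cdot\vec{\sigma}/2} =
\begin{pmatrix}
\cos\frac{\theta}{2} - i\,\frac{v_z}{\theta}\sin\frac{\theta}{2} &
-\left(i\,\frac{v_x}{\theta} + \frac{v_y}{\theta}\right)\sin\frac{\theta}{2} \\
-\left(i\,\frac{v_x}{\theta} - \frac{v_y}{\theta}\right)\sin\frac{\theta}{2} &
\cos\frac{\theta}{2} + i\,\frac{v_z}{\theta}\sin\frac{\theta}{2}
\end{pmatrix}
$$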
### How can we reproduce the issue?
N/A
### What should happen?
N/A
### Any suggestions?
_No response_
</issue>
<code>
[start of qiskit/circuit/library/generalized_gates/rv.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2020
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """Rotation around an arbitrary axis on the Bloch sphere."""
14
15 import numpy
16 from qiskit.circuit.gate import Gate
17 from qiskit.circuit.exceptions import CircuitError
18
19
20 class RVGate(Gate):
21 r"""Rotation around arbitrary rotation axis :math:`v` where :math:`|v|` is
22 angle of rotation in radians.
23
24 Can be applied to a :class:`~qiskit.circuit.QuantumCircuit`
25 with the :meth:`~qiskit.circuit.QuantumCircuit.rv` method.
26
27 **Circuit symbol:**
28
29 .. parsed-literal::
30
31 ┌─────────────────┐
32 q_0: ┤ RV(v_x,v_y,v_z) ├
33 └─────────────────┘
34
35 **Matrix Representation:**
36
37 .. math::
38
39 \newcommand{\rotationangle}{|\vec{v}|}
40 \newcommand{\sinc}{\text{sinc}}
41 R(\vec{v}) = e^{-i \vec{v}\cdot\vec{\sigma}} =
42 \begin{pmatrix}
43 \cos\left(\rotationangle\right) -i v_z \sinc\left(\rotationangle\right)
44 & -(i v_x + v_y) \sinc\left(\rotationangle\right) \\
45 -(i v_x - v_y) \sinc\left(\rotationangle\right)
46 & \cos\left(\rotationangle\right) + i v_z \sinc\left(\rotationangle\right)
47 \end{pmatrix}
48 """
49
50 def __init__(self, v_x, v_y, v_z, basis="U"):
51 """Create new rv single-qubit gate.
52
53 Args:
54 v_x (float): x-component
55 v_y (float): y-component
56 v_z (float): z-component
57 basis (str, optional): basis (see
58 :class:`~qiskit.synthesis.one_qubit.one_qubit_decompose.OneQubitEulerDecomposer`)
59 """
60 # pylint: disable=cyclic-import
61 from qiskit.synthesis.one_qubit.one_qubit_decompose import OneQubitEulerDecomposer
62
63 super().__init__("rv", 1, [v_x, v_y, v_z])
64 self._decomposer = OneQubitEulerDecomposer(basis=basis)
65
66 def _define(self):
67 try:
68 self.definition = self._decomposer(self.to_matrix())
69 except TypeError as ex:
70 raise CircuitError(
71 f"The {self.name} gate cannot be decomposed with unbound parameters"
72 ) from ex
73
74 def inverse(self):
75 """Invert this gate."""
76 vx, vy, vz = self.params
77 return RVGate(-vx, -vy, -vz)
78
79 def to_matrix(self):
80 """Return a numpy.array for the R(v) gate."""
81 v = numpy.asarray(self.params, dtype=float)
82 angle = numpy.sqrt(v.dot(v))
83 if angle == 0:
84 return numpy.array([[1, 0], [0, 1]])
85 nx, ny, nz = v / angle
86 sin = numpy.sin(angle / 2)
87 cos = numpy.cos(angle / 2)
88 return numpy.array(
89 [
90 [cos - 1j * nz * sin, (-ny - 1j * nx) * sin],
91 [(ny - 1j * nx) * sin, cos + 1j * nz * sin],
92 ]
93 )
94
[end of qiskit/circuit/library/generalized_gates/rv.py]
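
As a side note, the hand-written matrix in `to_matrix()` above can be checked numerically against the exponential form the issue refers to. A small, hypothetical sanity-check script (not part of the repository; the function and test values are made up for illustration) might look like:

```python
# Hypothetical check: compare an RVGate-style matrix with expm(-i * (v . sigma) / 2).
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rv_matrix(vx, vy, vz):
    v = np.array([vx, vy, vz], dtype=float)
    angle = np.linalg.norm(v)
    if angle == 0:
        return np.eye(2, dtype=complex)
    nx, ny, nz = v / angle
    s, c = np.sin(angle / 2), np.cos(angle / 2)
    return np.array([[c - 1j * nz * s, (-ny - 1j * nx) * s],
                     [(ny - 1j * nx) * s, c + 1j * nz * s]])

vx, vy, vz = 0.3, -0.7, 1.1
exact = expm(-0.5j * (vx * X + vy * Y + vz * Z))
assert np.allclose(rv_matrix(vx, vy, vz), exact)  # passes: sin of angle/2, not sinc
```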
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qiskit/circuit/library/generalized_gates/rv.py b/qiskit/circuit/library/generalized_gates/rv.py
--- a/qiskit/circuit/library/generalized_gates/rv.py
+++ b/qiskit/circuit/library/generalized_gates/rv.py
@@ -18,7 +18,7 @@
class RVGate(Gate):
- r"""Rotation around arbitrary rotation axis :math:`v` where :math:`|v|` is
+ r"""Rotation around arbitrary rotation axis :math:`\vec{v}` where :math:`\|\vec{v}\|_2` is
angle of rotation in radians.
Can be applied to a :class:`~qiskit.circuit.QuantumCircuit`
@@ -36,14 +36,17 @@
.. math::
- \newcommand{\rotationangle}{|\vec{v}|}
- \newcommand{\sinc}{\text{sinc}}
- R(\vec{v}) = e^{-i \vec{v}\cdot\vec{\sigma}} =
+ \newcommand{\rotationangle}{\frac{\|\vec{v}\|_2}{2}}
+ R(\vec{v}) = e^{-i \vec{v}\cdot\vec{\sigma} / 2} =
\begin{pmatrix}
- \cos\left(\rotationangle\right) -i v_z \sinc\left(\rotationangle\right)
- & -(i v_x + v_y) \sinc\left(\rotationangle\right) \\
- -(i v_x - v_y) \sinc\left(\rotationangle\right)
- & \cos\left(\rotationangle\right) + i v_z \sinc\left(\rotationangle\right)
+ \cos\left(\rotationangle\right)
+ -i \frac{v_z}{\|\vec{v}\|_2} \sin\left(\rotationangle\right)
+ & -(i \frac{v_x}{\|\vec{v}\|_2}
+ + \frac{v_y}{\|\vec{v}\|_2}) \sin\left(\rotationangle\right) \\
+ -(i \frac{v_x}{\|\vec{v}\|_2}
+ - \frac{v_y}{\|\vec{v}\|_2}) \sin\left(\rotationangle\right)
+ & \cos\left(\rotationangle\right)
+ + i \frac{v_z}{\|\vec{v}\|_2} \sin\left(\rotationangle\right)
\end{pmatrix}
"""
| {"golden_diff": "diff --git a/qiskit/circuit/library/generalized_gates/rv.py b/qiskit/circuit/library/generalized_gates/rv.py\n--- a/qiskit/circuit/library/generalized_gates/rv.py\n+++ b/qiskit/circuit/library/generalized_gates/rv.py\n@@ -18,7 +18,7 @@\n \n \n class RVGate(Gate):\n- r\"\"\"Rotation around arbitrary rotation axis :math:`v` where :math:`|v|` is\n+ r\"\"\"Rotation around arbitrary rotation axis :math:`\\vec{v}` where :math:`\\|\\vec{v}\\|_2` is\n angle of rotation in radians.\n \n Can be applied to a :class:`~qiskit.circuit.QuantumCircuit`\n@@ -36,14 +36,17 @@\n \n .. math::\n \n- \\newcommand{\\rotationangle}{|\\vec{v}|}\n- \\newcommand{\\sinc}{\\text{sinc}}\n- R(\\vec{v}) = e^{-i \\vec{v}\\cdot\\vec{\\sigma}} =\n+ \\newcommand{\\rotationangle}{\\frac{\\|\\vec{v}\\|_2}{2}}\n+ R(\\vec{v}) = e^{-i \\vec{v}\\cdot\\vec{\\sigma} / 2} =\n \\begin{pmatrix}\n- \\cos\\left(\\rotationangle\\right) -i v_z \\sinc\\left(\\rotationangle\\right)\n- & -(i v_x + v_y) \\sinc\\left(\\rotationangle\\right) \\\\\n- -(i v_x - v_y) \\sinc\\left(\\rotationangle\\right)\n- & \\cos\\left(\\rotationangle\\right) + i v_z \\sinc\\left(\\rotationangle\\right)\n+ \\cos\\left(\\rotationangle\\right)\n+ -i \\frac{v_z}{\\|\\vec{v}\\|_2} \\sin\\left(\\rotationangle\\right)\n+ & -(i \\frac{v_x}{\\|\\vec{v}\\|_2}\n+ + \\frac{v_y}{\\|\\vec{v}\\|_2}) \\sin\\left(\\rotationangle\\right) \\\\\n+ -(i \\frac{v_x}{\\|\\vec{v}\\|_2}\n+ - \\frac{v_y}{\\|\\vec{v}\\|_2}) \\sin\\left(\\rotationangle\\right)\n+ & \\cos\\left(\\rotationangle\\right)\n+ + i \\frac{v_z}{\\|\\vec{v}\\|_2} \\sin\\left(\\rotationangle\\right)\n \\end{pmatrix}\n \"\"\"\n", "issue": "Documentation of RVGate is incorrect\n### Environment\n\nN/A\n\n### What is happening?\n\nReceived this in an email:\r\n>Hi, I think I found some errors in the Qiskit documentation at\r\n<https://docs.quantum.ibm.com/api/qiskit/qiskit.circuit.library.RVGate>\r\nand I'm contacting you because you look like the two people who most recently edited the source file at\r\n<https://github.com/Qiskit/qiskit/blob/stable/0.46/qiskit/circuit/library/generalized_gates/rv.py>\r\nThe matrix representation given in the documentation seems to be wrong. I compared it to the definition given in\r\n<https://arxiv.org/pdf/2104.14875.pdf>\r\non page 4, equation 1, we see the definition of the rotation matrix. It almost matches the definition given in the documentation at\r\n<https://docs.quantum.ibm.com/api/qiskit/qiskit.circuit.library.RVGate>\r\nexcept for two mistakes: the \"sinc\" function should be \"sin\", and the angle should be divided by two. This can be compared to the source code at\r\n<https://github.com/Qiskit/qiskit/blob/stable/0.46/qiskit/circuit/library/generalized_gates/rv.py>\r\nat lines 86 and 87, where we see the angle divided by two, and we see the use of the sin and cos functions.\n\n### How can we reproduce the issue?\n\nN/A\n\n### What should happen?\n\nN/A\n\n### Any suggestions?\n\n_No response_\n", "before_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2020\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"Rotation around an arbitrary axis on the Bloch sphere.\"\"\"\n\nimport numpy\nfrom qiskit.circuit.gate import Gate\nfrom qiskit.circuit.exceptions import CircuitError\n\n\nclass RVGate(Gate):\n r\"\"\"Rotation around arbitrary rotation axis :math:`v` where :math:`|v|` is\n angle of rotation in radians.\n\n Can be applied to a :class:`~qiskit.circuit.QuantumCircuit`\n with the :meth:`~qiskit.circuit.QuantumCircuit.rv` method.\n\n **Circuit symbol:**\n\n .. parsed-literal::\n\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n q_0: \u2524 RV(v_x,v_y,v_z) \u251c\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n **Matrix Representation:**\n\n .. math::\n\n \\newcommand{\\rotationangle}{|\\vec{v}|}\n \\newcommand{\\sinc}{\\text{sinc}}\n R(\\vec{v}) = e^{-i \\vec{v}\\cdot\\vec{\\sigma}} =\n \\begin{pmatrix}\n \\cos\\left(\\rotationangle\\right) -i v_z \\sinc\\left(\\rotationangle\\right)\n & -(i v_x + v_y) \\sinc\\left(\\rotationangle\\right) \\\\\n -(i v_x - v_y) \\sinc\\left(\\rotationangle\\right)\n & \\cos\\left(\\rotationangle\\right) + i v_z \\sinc\\left(\\rotationangle\\right)\n \\end{pmatrix}\n \"\"\"\n\n def __init__(self, v_x, v_y, v_z, basis=\"U\"):\n \"\"\"Create new rv single-qubit gate.\n\n Args:\n v_x (float): x-component\n v_y (float): y-component\n v_z (float): z-component\n basis (str, optional): basis (see\n :class:`~qiskit.synthesis.one_qubit.one_qubit_decompose.OneQubitEulerDecomposer`)\n \"\"\"\n # pylint: disable=cyclic-import\n from qiskit.synthesis.one_qubit.one_qubit_decompose import OneQubitEulerDecomposer\n\n super().__init__(\"rv\", 1, [v_x, v_y, v_z])\n self._decomposer = OneQubitEulerDecomposer(basis=basis)\n\n def _define(self):\n try:\n self.definition = self._decomposer(self.to_matrix())\n except TypeError as ex:\n raise CircuitError(\n f\"The {self.name} gate cannot be decomposed with unbound parameters\"\n ) from ex\n\n def inverse(self):\n \"\"\"Invert this gate.\"\"\"\n vx, vy, vz = self.params\n return RVGate(-vx, -vy, -vz)\n\n def to_matrix(self):\n \"\"\"Return a numpy.array for the R(v) gate.\"\"\"\n v = numpy.asarray(self.params, dtype=float)\n angle = numpy.sqrt(v.dot(v))\n if angle == 0:\n return numpy.array([[1, 0], [0, 1]])\n nx, ny, nz = v / angle\n sin = numpy.sin(angle / 2)\n cos = numpy.cos(angle / 2)\n return numpy.array(\n [\n [cos - 1j * nz * sin, (-ny - 1j * nx) * sin],\n [(ny - 1j * nx) * sin, cos + 1j * nz * sin],\n ]\n )\n", "path": "qiskit/circuit/library/generalized_gates/rv.py"}]} | 1,951 | 592 |
gh_patches_debug_8362 | rasdani/github-patches | git_diff | getnikola__nikola-3036 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
RSS_PATH doesn't work as advertised (is path and filename, excluding .xml)
* Python Version: 3.5.3
* Nikola Version: v7.8.14
* Operating System: Debian
A fresh config says:
```
# Final location for the blog main RSS feed is:
# output / TRANSLATION[lang] / RSS_PATH / rss.xml
```
which is in line with other `_PATH` variables.
But it seems `RSS_PATH` is actually path+filename (and `.xml` is appended).
With `RSS_PATH = "blog/"` I get `render_taxonomies:output/blog/.xml` (instead of `blog/rss.xml`)
With `RSS_PATH = blog/index.xml` I get `render_taxonomies:output/blog/index.xml.xml`
</issue>
<code>
[start of nikola/plugins/task/indexes.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2018 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Render the blog's main index."""
28
29
30 from nikola.plugin_categories import Taxonomy
31
32
33 class Indexes(Taxonomy):
34 """Classify for the blog's main index."""
35
36 name = "classify_indexes"
37
38 classification_name = "index"
39 overview_page_variable_name = None
40 more_than_one_classifications_per_post = False
41 has_hierarchy = False
42 show_list_as_index = True
43 template_for_single_list = "index.tmpl"
44 template_for_classification_overview = None
45 apply_to_posts = True
46 apply_to_pages = False
47 omit_empty_classifications = False
48 path_handler_docstrings = {
49 'index_index': False,
50 'index': """Link to a numbered index.
51
52 Example:
53
54 link://index/3 => /index-3.html""",
55 'index_atom': """Link to a numbered Atom index.
56
57 Example:
58
59 link://index_atom/3 => /index-3.atom""",
60 'index_rss': """A link to the RSS feed path.
61
62 Example:
63
64 link://rss => /blog/rss.xml""",
65 }
66
67 def set_site(self, site):
68 """Set Nikola site."""
69 # Redirect automatically generated 'index_rss' path handler to 'rss' for compatibility with old rss plugin
70 site.register_path_handler('rss', lambda name, lang: site.path_handlers['index_rss'](name, lang))
71 site.path_handlers['rss'].__doc__ = """A link to the RSS feed path.
72
73 Example:
74
75 link://rss => /blog/rss.xml
76 """.strip()
77 return super(Indexes, self).set_site(site)
78
79 def get_implicit_classifications(self, lang):
80 """Return a list of classification strings which should always appear in posts_per_classification."""
81 return [""]
82
83 def classify(self, post, lang):
84 """Classify the given post for the given language."""
85 return [""]
86
87 def get_classification_friendly_name(self, classification, lang, only_last_component=False):
88 """Extract a friendly name from the classification."""
89 return self.site.config["BLOG_TITLE"](lang)
90
91 def get_path(self, classification, lang, dest_type='page'):
92 """Return a path for the given classification."""
93 if dest_type == 'rss':
94 return [self.site.config['RSS_PATH'](lang)], True
95 # 'page' (index) or 'feed' (Atom)
96 page_number = None
97 if dest_type == 'page':
98 # Interpret argument as page number
99 try:
100 page_number = int(classification)
101 except (ValueError, TypeError):
102 pass
103 return [self.site.config['INDEX_PATH'](lang)], 'always', page_number
104
105 def provide_context_and_uptodate(self, classification, lang, node=None):
106 """Provide data for the context and the uptodate list for the list of the given classifiation."""
107 kw = {
108 }
109 context = {
110 "title": self.site.config["INDEXES_TITLE"](lang) or self.site.config["BLOG_TITLE"](lang),
111 "description": self.site.config["BLOG_DESCRIPTION"](lang),
112 "pagekind": ["main_index", "index"],
113 }
114 kw.update(context)
115 return context, kw
116
117 def should_generate_classification_page(self, classification, post_list, lang):
118 """Only generates list of posts for classification if this function returns True."""
119 return not self.site.config["DISABLE_INDEXES_PLUGIN_INDEX_AND_ATOM_FEED"]
120
121 def should_generate_rss_for_classification_page(self, classification, post_list, lang):
122 """Only generates RSS feed for list of posts for classification if this function returns True."""
123 return not self.site.config["DISABLE_INDEXES_PLUGIN_RSS_FEED"]
124
[end of nikola/plugins/task/indexes.py]
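
For orientation, the `rss` branch of `get_path` above returns `[RSS_PATH]` as the only path component, which matches the reported behaviour if the framework appends the feed extension to the last component it receives. A rough illustration of documented versus observed path construction (hypothetical sketch, not Nikola's actual join logic; `build_feed_path` is invented for this example):

```python
import os

def build_feed_path(components, extension=".xml"):
    # Assumption: the last returned component becomes the file name stem.
    *dirs, stem = components
    return os.path.join("output", *dirs, stem + extension)

# Documented behaviour: RSS_PATH is a directory and "rss" is the stem.
build_feed_path(["blog", "rss"])   # -> "output/blog/rss.xml"

# Observed behaviour: RSS_PATH itself is used as the stem.
build_feed_path(["blog/"])         # -> "output/blog/.xml", as reported above
```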
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nikola/plugins/task/indexes.py b/nikola/plugins/task/indexes.py
--- a/nikola/plugins/task/indexes.py
+++ b/nikola/plugins/task/indexes.py
@@ -91,7 +91,7 @@
def get_path(self, classification, lang, dest_type='page'):
"""Return a path for the given classification."""
if dest_type == 'rss':
- return [self.site.config['RSS_PATH'](lang)], True
+ return [self.site.config['RSS_PATH'](lang), 'rss'], 'auto'
# 'page' (index) or 'feed' (Atom)
page_number = None
if dest_type == 'page':
| {"golden_diff": "diff --git a/nikola/plugins/task/indexes.py b/nikola/plugins/task/indexes.py\n--- a/nikola/plugins/task/indexes.py\n+++ b/nikola/plugins/task/indexes.py\n@@ -91,7 +91,7 @@\n def get_path(self, classification, lang, dest_type='page'):\n \"\"\"Return a path for the given classification.\"\"\"\n if dest_type == 'rss':\n- return [self.site.config['RSS_PATH'](lang)], True\n+ return [self.site.config['RSS_PATH'](lang), 'rss'], 'auto'\n # 'page' (index) or 'feed' (Atom)\n page_number = None\n if dest_type == 'page':\n", "issue": "RSS_PATH doesn't work as advertised (is path and filename, excluding .xml)\n* Python Version: 3.5.3\r\n* Nikola Version: v7.8.14\r\n* Operating System: Debian\r\n\r\nA fresh config says:\r\n\r\n```\r\n# Final location for the blog main RSS feed is:\r\n# output / TRANSLATION[lang] / RSS_PATH / rss.xml\r\n```\r\n\r\nwhich is in line with other `_PATH` variables.\r\n\r\nBut it seems `RSS_PATH` is actually path+filename (and `.xml` is appended).\r\n\r\nWith `RSS_PATH = \"blog/`I get `render_taxonomies:output/blog/.xml` (instead of `blog/rss.xml`)\r\n\r\nWith `RSS_PATH = blog/index.xml` I get `render_taxonomies:output/blog/index.xml.xml`\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2018 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Render the blog's main index.\"\"\"\n\n\nfrom nikola.plugin_categories import Taxonomy\n\n\nclass Indexes(Taxonomy):\n \"\"\"Classify for the blog's main index.\"\"\"\n\n name = \"classify_indexes\"\n\n classification_name = \"index\"\n overview_page_variable_name = None\n more_than_one_classifications_per_post = False\n has_hierarchy = False\n show_list_as_index = True\n template_for_single_list = \"index.tmpl\"\n template_for_classification_overview = None\n apply_to_posts = True\n apply_to_pages = False\n omit_empty_classifications = False\n path_handler_docstrings = {\n 'index_index': False,\n 'index': \"\"\"Link to a numbered index.\n\nExample:\n\nlink://index/3 => /index-3.html\"\"\",\n 'index_atom': \"\"\"Link to a numbered Atom index.\n\nExample:\n\nlink://index_atom/3 => /index-3.atom\"\"\",\n 'index_rss': \"\"\"A link to the RSS feed path.\n\nExample:\n\nlink://rss => /blog/rss.xml\"\"\",\n }\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n # Redirect automatically generated 'index_rss' path handler to 'rss' for compatibility with old rss plugin\n site.register_path_handler('rss', lambda name, lang: site.path_handlers['index_rss'](name, lang))\n site.path_handlers['rss'].__doc__ = \"\"\"A link to the RSS feed path.\n\nExample:\n\n link://rss => /blog/rss.xml\n \"\"\".strip()\n return super(Indexes, self).set_site(site)\n\n def get_implicit_classifications(self, lang):\n \"\"\"Return a list of classification strings which should always appear in posts_per_classification.\"\"\"\n return [\"\"]\n\n def classify(self, post, lang):\n \"\"\"Classify the given post for the given language.\"\"\"\n return [\"\"]\n\n def get_classification_friendly_name(self, classification, lang, only_last_component=False):\n \"\"\"Extract a friendly name from the classification.\"\"\"\n return self.site.config[\"BLOG_TITLE\"](lang)\n\n def get_path(self, classification, lang, dest_type='page'):\n \"\"\"Return a path for the given classification.\"\"\"\n if dest_type == 'rss':\n return [self.site.config['RSS_PATH'](lang)], True\n # 'page' (index) or 'feed' (Atom)\n page_number = None\n if dest_type == 'page':\n # Interpret argument as page number\n try:\n page_number = int(classification)\n except (ValueError, TypeError):\n pass\n return [self.site.config['INDEX_PATH'](lang)], 'always', page_number\n\n def provide_context_and_uptodate(self, classification, lang, node=None):\n \"\"\"Provide data for the context and the uptodate list for the list of the given classifiation.\"\"\"\n kw = {\n }\n context = {\n \"title\": self.site.config[\"INDEXES_TITLE\"](lang) or self.site.config[\"BLOG_TITLE\"](lang),\n \"description\": self.site.config[\"BLOG_DESCRIPTION\"](lang),\n \"pagekind\": [\"main_index\", \"index\"],\n }\n kw.update(context)\n return context, kw\n\n def should_generate_classification_page(self, classification, post_list, lang):\n \"\"\"Only generates list of posts for classification if this function returns True.\"\"\"\n return not self.site.config[\"DISABLE_INDEXES_PLUGIN_INDEX_AND_ATOM_FEED\"]\n\n def should_generate_rss_for_classification_page(self, classification, post_list, lang):\n \"\"\"Only generates RSS feed for list of posts for classification if this function returns True.\"\"\"\n return not 
self.site.config[\"DISABLE_INDEXES_PLUGIN_RSS_FEED\"]\n", "path": "nikola/plugins/task/indexes.py"}]} | 2,013 | 152 |
gh_patches_debug_33474 | rasdani/github-patches | git_diff | privacyidea__privacyidea-3231 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support for SSH Token based on ed25519-sk or ecdsa-sk
**Is your feature request related to a problem? Please describe.**
What are you trying to achieve?
I am trying to add an SSH public key token based on ed25519-sk (associated with a YubiKey).
[2021-06-21 11:46:36,715][1177539][139641674618752][ERROR][privacyidea.app:1891] Exception on /token/init [POST]
Traceback (most recent call last):
File "/opt/privacyidea/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/opt/privacyidea/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/opt/privacyidea/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/opt/privacyidea/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/opt/privacyidea/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/opt/privacyidea/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/lib/prepolicy.py", line 154, in policy_wrapper
return wrapped_function(*args, **kwds)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/lib/prepolicy.py", line 154, in policy_wrapper
return wrapped_function(*args, **kwds)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/lib/prepolicy.py", line 154, in policy_wrapper
return wrapped_function(*args, **kwds)
[Previous line repeated 21 more times]
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/lib/postpolicy.py", line 108, in policy_wrapper
response = wrapped_function(*args, **kwds)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/subscriptions.py", line 333, in check_subscription_wrapper
f_result = func(*args, **kwds)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/event.py", line 99, in event_wrapper
f_result = func(*args, **kwds)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/log.py", line 155, in log_wrapper
return func(*args, **kwds)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/token.py", line 270, in init
tokenobject = init_token(param,
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/log.py", line 155, in log_wrapper
return func(*args, **kwds)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/token.py", line 1085, in init_token
tokenobject.update(upd_params)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/tokens/sshkeytoken.py", line 131, in update
raise Exception("The keytype you specified is not supported.")
Exception: The keytype you specified is not supported.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
I want to be able to register this kind of key type.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
Using the GPG slot and generating an RSA public key instead.
**Additional context**
Add any other context or screenshots, that might help us to better understand your idea, your need and your circumstances.
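
For illustration: OpenSSH writes FIDO2-backed public keys with the type strings `[email protected]` and `[email protected]`, and the traceback above shows enrolment failing because the token's key-type whitelist does not include them. A minimal, hypothetical sketch of an extended check (not the actual privacyIDEA code; names are made up) could be:

```python
# Hypothetical sketch of parsing "<type> <base64-key> <comment>" with sk types allowed.
ALLOWED_KEY_TYPES = [
    "ssh-rsa",
    "ssh-ed25519",
    "ecdsa-sha2-nistp256",
    "[email protected]",        # FIDO2-backed ed25519 key
    "[email protected]",   # FIDO2-backed ecdsa key
]

def parse_ssh_pubkey(line):
    parts = line.split(" ", 2)
    if len(parts) < 2 or parts[0] not in ALLOWED_KEY_TYPES:
        raise ValueError("The keytype you specified is not supported.")
    key_type, key = parts[0], parts[1]
    comment = parts[2] if len(parts) > 2 else ""
    return key_type, key, comment
```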
</issue>
<code>
[start of privacyidea/lib/tokens/sshkeytoken.py]
1 # -*- coding: utf-8 -*-
2 #
3 # privacyIDEA
4 # Jul 18, 2014 Cornelius Kölbel
5 # License: AGPLv3
6 # contact: http://www.privacyidea.org
7 #
8 # This code is free software; you can redistribute it and/or
9 # modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE
10 # License as published by the Free Software Foundation; either
11 # version 3 of the License, or any later version.
12 #
13 # This code is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU AFFERO GENERAL PUBLIC LICENSE for more details.
17 #
18 # You should have received a copy of the GNU Affero General Public
19 # License along with this program. If not, see <http://www.gnu.org/licenses/>.
20 #
21 __doc__="""The SSHKeyTokenClass provides a TokenClass that stores the public
22 SSH key. This can be used to manage SSH keys and retrieve the public ssh key
23 to import it to authorized keys files.
24
25 The code is tested in tests/test_lib_tokens_ssh
26 """
27
28 import logging
29 from privacyidea.lib import _
30 from privacyidea.api.lib.utils import getParam
31 from privacyidea.lib.log import log_with
32 from privacyidea.lib.tokenclass import TokenClass, ROLLOUTSTATE
33 from privacyidea.lib.policy import SCOPE, ACTION, GROUP
34
35 log = logging.getLogger(__name__)
36
37
38 optional = True
39 required = False
40
41
42 ##TODO: We should save a fingerprint of the SSH Key in the encrypted OTP
43 # field, so that we can be sure, that the public ssh key was not changed in
44 # the database!
45
46
47 class SSHkeyTokenClass(TokenClass):
48 """
49 The SSHKeyTokenClass provides a TokenClass that stores the public
50 SSH key. This can be used to manage SSH keys and retrieve the public ssh key
51 to import it to authorized keys files.
52 """
53 mode = ['authenticate']
54 using_pin = False
55
56 def __init__(self, db_token):
57 TokenClass.__init__(self, db_token)
58 self.set_type(u"sshkey")
59
60 @staticmethod
61 def get_class_type():
62 return "sshkey"
63
64 @staticmethod
65 def get_class_prefix():
66 return "SSHK"
67
68 @staticmethod
69 @log_with(log)
70 def get_class_info(key=None, ret='all'):
71 """
72 returns a subtree of the token definition
73
74 :param key: subsection identifier
75 :type key: string
76 :param ret: default return value, if nothing is found
77 :type ret: user defined
78 :return: subsection if key exists or user defined
79 :rtype: dictionary
80 """
81 res = {'type': 'sshkey',
82 'title': 'SSHkey Token',
83 'description': _('SSH Public Key: The public SSH key.'),
84 'config': {},
85 'user': ['enroll'],
86 # This tokentype is enrollable in the UI for...
87 'ui_enroll': ["admin", "user"],
88 'policy': {
89 SCOPE.ENROLL: {
90 ACTION.MAXTOKENUSER: {
91 'type': 'int',
92 'desc': _("The user may only have this maximum number of SSH keys assigned."),
93 'group': GROUP.TOKEN
94 },
95 ACTION.MAXACTIVETOKENUSER: {
96 'type': 'int',
97 'desc': _(
98 "The user may only have this maximum number of active SSH keys assigned."),
99 'group': GROUP.TOKEN
100 }
101 }
102 },
103 }
104 if key:
105 ret = res.get(key, {})
106 else:
107 if ret == 'all':
108 ret = res
109
110 return ret
111
112 def update(self, param):
113 """
114 The key holds the public ssh key and this is required
115
116 The key probably is of the form "ssh-rsa BASE64 comment"
117 """
118 # We need to save the token, so that we can later add the tokeninfo
119 # Otherwise we might not have created the DB entry, yet and we would
120 # be missing the token.id
121 self.token.save()
122
123 getParam(param, "sshkey", required)
124
125 key_elem = param.get("sshkey").split(" ", 2)
126 if key_elem[0] not in ["ssh-rsa", "ssh-ed25519", "ecdsa-sha2-nistp256"]:
127 self.token.rollout_state = ROLLOUTSTATE.BROKEN
128 self.token.save()
129 raise Exception("The keytype you specified is not supported.")
130
131 if len(key_elem) < 2:
132 self.token.rollout_state = ROLLOUTSTATE.BROKEN
133 self.token.save()
134 raise Exception("Missing key.")
135
136 key_type = key_elem[0]
137 key = key_elem[1]
138 if len(key_elem) > 2:
139 key_comment = key_elem[2]
140 else:
141 key_comment = ""
142
143 # convert key to hex
144 self.add_tokeninfo("ssh_key", key, value_type="password")
145 self.add_tokeninfo("ssh_type", key_type)
146 self.add_tokeninfo("ssh_comment", key_comment)
147
148 # call the parents function
149 TokenClass.update(self, param)
150
151 @log_with(log)
152 def get_sshkey(self):
153 """
154 returns the public SSH key
155
156 :return: SSH pub key
157 :rtype: string
158 """
159 ti = self.get_tokeninfo()
160 key_type = ti.get("ssh_type")
161 key_comment = ti.get("ssh_comment")
162 # get the ssh key directly, otherwise it will not be decrypted
163 sshkey = self.get_tokeninfo("ssh_key")
164 return u"{0!s} {1!s} {2!s}".format(key_type, sshkey, key_comment)
165
[end of privacyidea/lib/tokens/sshkeytoken.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/privacyidea/lib/tokens/sshkeytoken.py b/privacyidea/lib/tokens/sshkeytoken.py
--- a/privacyidea/lib/tokens/sshkeytoken.py
+++ b/privacyidea/lib/tokens/sshkeytoken.py
@@ -28,6 +28,7 @@
import logging
from privacyidea.lib import _
from privacyidea.api.lib.utils import getParam
+from privacyidea.lib.error import TokenAdminError
from privacyidea.lib.log import log_with
from privacyidea.lib.tokenclass import TokenClass, ROLLOUTSTATE
from privacyidea.lib.policy import SCOPE, ACTION, GROUP
@@ -121,17 +122,18 @@
self.token.save()
getParam(param, "sshkey", required)
-
+
key_elem = param.get("sshkey").split(" ", 2)
- if key_elem[0] not in ["ssh-rsa", "ssh-ed25519", "ecdsa-sha2-nistp256"]:
+ if key_elem[0] not in ["ssh-rsa", "ssh-ed25519", "ecdsa-sha2-nistp256",
+ "[email protected]", "[email protected]"]:
self.token.rollout_state = ROLLOUTSTATE.BROKEN
self.token.save()
- raise Exception("The keytype you specified is not supported.")
+ raise TokenAdminError("The keytype you specified is not supported.")
if len(key_elem) < 2:
self.token.rollout_state = ROLLOUTSTATE.BROKEN
self.token.save()
- raise Exception("Missing key.")
+ raise TokenAdminError("Missing key.")
key_type = key_elem[0]
key = key_elem[1]
@@ -161,4 +163,7 @@
key_comment = ti.get("ssh_comment")
# get the ssh key directly, otherwise it will not be decrypted
sshkey = self.get_tokeninfo("ssh_key")
- return u"{0!s} {1!s} {2!s}".format(key_type, sshkey, key_comment)
+ r = u"{0!s} {1!s}".format(key_type, sshkey)
+ if key_comment:
+ r += " " + key_comment
+ return r
| {"golden_diff": "diff --git a/privacyidea/lib/tokens/sshkeytoken.py b/privacyidea/lib/tokens/sshkeytoken.py\n--- a/privacyidea/lib/tokens/sshkeytoken.py\n+++ b/privacyidea/lib/tokens/sshkeytoken.py\n@@ -28,6 +28,7 @@\n import logging\n from privacyidea.lib import _\n from privacyidea.api.lib.utils import getParam\n+from privacyidea.lib.error import TokenAdminError\n from privacyidea.lib.log import log_with\n from privacyidea.lib.tokenclass import TokenClass, ROLLOUTSTATE\n from privacyidea.lib.policy import SCOPE, ACTION, GROUP\n@@ -121,17 +122,18 @@\n self.token.save()\n \n getParam(param, \"sshkey\", required)\n- \n+\n key_elem = param.get(\"sshkey\").split(\" \", 2)\n- if key_elem[0] not in [\"ssh-rsa\", \"ssh-ed25519\", \"ecdsa-sha2-nistp256\"]:\n+ if key_elem[0] not in [\"ssh-rsa\", \"ssh-ed25519\", \"ecdsa-sha2-nistp256\",\n+ \"[email protected]\", \"[email protected]\"]:\n self.token.rollout_state = ROLLOUTSTATE.BROKEN\n self.token.save()\n- raise Exception(\"The keytype you specified is not supported.\")\n+ raise TokenAdminError(\"The keytype you specified is not supported.\")\n \n if len(key_elem) < 2:\n self.token.rollout_state = ROLLOUTSTATE.BROKEN\n self.token.save()\n- raise Exception(\"Missing key.\")\n+ raise TokenAdminError(\"Missing key.\")\n \n key_type = key_elem[0]\n key = key_elem[1]\n@@ -161,4 +163,7 @@\n key_comment = ti.get(\"ssh_comment\")\n # get the ssh key directly, otherwise it will not be decrypted\n sshkey = self.get_tokeninfo(\"ssh_key\")\n- return u\"{0!s} {1!s} {2!s}\".format(key_type, sshkey, key_comment)\n+ r = u\"{0!s} {1!s}\".format(key_type, sshkey)\n+ if key_comment:\n+ r += \" \" + key_comment\n+ return r\n", "issue": "Support for SSH Token based on ed25519-sk or ecdsa-sk\n**Is your feature request related to a problem? 
Please describe.**\r\nWhat are you trying to achieve?\r\n\r\nI try to add a ssh public key token based on ed25519-sk (associated with a Yubikey)\r\n\r\n[2021-06-21 11:46:36,715][1177539][139641674618752][ERROR][privacyidea.app:1891] Exception on /token/init [POST]\r\nTraceback (most recent call last):\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/flask/app.py\", line 2447, in wsgi_app\r\n response = self.full_dispatch_request()\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/flask/app.py\", line 1952, in full_dispatch_request\r\n rv = self.handle_user_exception(e)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/flask/app.py\", line 1821, in handle_user_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/flask/_compat.py\", line 39, in reraise\r\n raise value\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/flask/app.py\", line 1950, in full_dispatch_request\r\n rv = self.dispatch_request()\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/flask/app.py\", line 1936, in dispatch_request\r\n return self.view_functions[rule.endpoint](**req.view_args)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/lib/prepolicy.py\", line 154, in policy_wrapper\r\n return wrapped_function(*args, **kwds)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/lib/prepolicy.py\", line 154, in policy_wrapper\r\n return wrapped_function(*args, **kwds)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/lib/prepolicy.py\", line 154, in policy_wrapper\r\n return wrapped_function(*args, **kwds)\r\n [Previous line repeated 21 more times]\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/lib/postpolicy.py\", line 108, in policy_wrapper\r\n response = wrapped_function(*args, **kwds)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/subscriptions.py\", line 333, in check_subscription_wrapper\r\n f_result = func(*args, **kwds)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/event.py\", line 99, in event_wrapper\r\n f_result = func(*args, **kwds)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/log.py\", line 155, in log_wrapper\r\n return func(*args, **kwds)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/token.py\", line 270, in init\r\n tokenobject = init_token(param,\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/log.py\", line 155, in log_wrapper\r\n return func(*args, **kwds)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/token.py\", line 1085, in init_token\r\n tokenobject.update(upd_params)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/tokens/sshkeytoken.py\", line 131, in update\r\n raise Exception(\"The keytype you specified is not supported.\")\r\nException: The keytype you specified is not supported.\r\n\r\n**Describe the solution you'd like**\r\nA clear and concise description of what you want to happen.\r\n\r\nI want to register this kind of keytype\r\n\r\n**Describe alternatives you've considered**\r\nA clear and concise description of any alternative solutions or features you've considered.\r\n\r\nUsing gpg slot and generate rsa pubkey\r\n\r\n**Additional context**\r\nAdd any other context or screenshots, that might help us to better understand your idea, your need and your circumstances.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# privacyIDEA\n# 
Jul 18, 2014 Cornelius K\u00f6lbel\n# License: AGPLv3\n# contact: http://www.privacyidea.org\n#\n# This code is free software; you can redistribute it and/or\n# modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE\n# License as published by the Free Software Foundation; either\n# version 3 of the License, or any later version.\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU AFFERO GENERAL PUBLIC LICENSE for more details.\n#\n# You should have received a copy of the GNU Affero General Public\n# License along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n__doc__=\"\"\"The SSHKeyTokenClass provides a TokenClass that stores the public\nSSH key. This can be used to manage SSH keys and retrieve the public ssh key\nto import it to authorized keys files.\n\nThe code is tested in tests/test_lib_tokens_ssh\n\"\"\"\n\nimport logging\nfrom privacyidea.lib import _\nfrom privacyidea.api.lib.utils import getParam\nfrom privacyidea.lib.log import log_with\nfrom privacyidea.lib.tokenclass import TokenClass, ROLLOUTSTATE\nfrom privacyidea.lib.policy import SCOPE, ACTION, GROUP\n\nlog = logging.getLogger(__name__)\n\n\noptional = True\nrequired = False\n\n\n##TODO: We should save a fingerprint of the SSH Key in the encrypted OTP\n# field, so that we can be sure, that the public ssh key was not changed in\n# the database!\n\n\nclass SSHkeyTokenClass(TokenClass):\n \"\"\"\n The SSHKeyTokenClass provides a TokenClass that stores the public\n SSH key. This can be used to manage SSH keys and retrieve the public ssh key\n to import it to authorized keys files.\n \"\"\"\n mode = ['authenticate']\n using_pin = False\n\n def __init__(self, db_token):\n TokenClass.__init__(self, db_token)\n self.set_type(u\"sshkey\")\n\n @staticmethod\n def get_class_type():\n return \"sshkey\"\n\n @staticmethod\n def get_class_prefix():\n return \"SSHK\"\n\n @staticmethod\n @log_with(log)\n def get_class_info(key=None, ret='all'):\n \"\"\"\n returns a subtree of the token definition\n\n :param key: subsection identifier\n :type key: string\n :param ret: default return value, if nothing is found\n :type ret: user defined\n :return: subsection if key exists or user defined\n :rtype: dictionary\n \"\"\"\n res = {'type': 'sshkey',\n 'title': 'SSHkey Token',\n 'description': _('SSH Public Key: The public SSH key.'),\n 'config': {},\n 'user': ['enroll'],\n # This tokentype is enrollable in the UI for...\n 'ui_enroll': [\"admin\", \"user\"],\n 'policy': {\n SCOPE.ENROLL: {\n ACTION.MAXTOKENUSER: {\n 'type': 'int',\n 'desc': _(\"The user may only have this maximum number of SSH keys assigned.\"),\n 'group': GROUP.TOKEN\n },\n ACTION.MAXACTIVETOKENUSER: {\n 'type': 'int',\n 'desc': _(\n \"The user may only have this maximum number of active SSH keys assigned.\"),\n 'group': GROUP.TOKEN\n }\n }\n },\n }\n if key:\n ret = res.get(key, {})\n else:\n if ret == 'all':\n ret = res\n\n return ret\n\n def update(self, param):\n \"\"\"\n The key holds the public ssh key and this is required\n \n The key probably is of the form \"ssh-rsa BASE64 comment\"\n \"\"\"\n # We need to save the token, so that we can later add the tokeninfo\n # Otherwise we might not have created the DB entry, yet and we would\n # be missing the token.id\n self.token.save()\n\n getParam(param, \"sshkey\", required)\n \n key_elem = param.get(\"sshkey\").split(\" \", 2)\n if key_elem[0] not 
in [\"ssh-rsa\", \"ssh-ed25519\", \"ecdsa-sha2-nistp256\"]:\n self.token.rollout_state = ROLLOUTSTATE.BROKEN\n self.token.save()\n raise Exception(\"The keytype you specified is not supported.\")\n\n if len(key_elem) < 2:\n self.token.rollout_state = ROLLOUTSTATE.BROKEN\n self.token.save()\n raise Exception(\"Missing key.\")\n\n key_type = key_elem[0]\n key = key_elem[1]\n if len(key_elem) > 2:\n key_comment = key_elem[2]\n else:\n key_comment = \"\"\n \n # convert key to hex\n self.add_tokeninfo(\"ssh_key\", key, value_type=\"password\")\n self.add_tokeninfo(\"ssh_type\", key_type)\n self.add_tokeninfo(\"ssh_comment\", key_comment)\n\n # call the parents function\n TokenClass.update(self, param)\n \n @log_with(log)\n def get_sshkey(self):\n \"\"\"\n returns the public SSH key\n \n :return: SSH pub key\n :rtype: string\n \"\"\"\n ti = self.get_tokeninfo()\n key_type = ti.get(\"ssh_type\")\n key_comment = ti.get(\"ssh_comment\")\n # get the ssh key directly, otherwise it will not be decrypted\n sshkey = self.get_tokeninfo(\"ssh_key\")\n return u\"{0!s} {1!s} {2!s}\".format(key_type, sshkey, key_comment)\n", "path": "privacyidea/lib/tokens/sshkeytoken.py"}]} | 3,176 | 542 |
gh_patches_debug_246 | rasdani/github-patches | git_diff | numpy__numpy-3245 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
2to3 run `standarderror` fixer
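
For context, lib2to3's `standarderror` fixer rewrites uses of the Python 2-only `StandardError` base class to `Exception`. A minimal before/after illustration (hypothetical snippet, not taken from the numpy sources):

```python
# Before (Python 2):
try:
    risky()
except StandardError:
    recover()

# After running the "standarderror" fixer:
try:
    risky()
except Exception:
    recover()
```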
</issue>
<code>
[start of tools/py3tool.py]
1 #!/usr/bin/env python3
2 # -*- python -*-
3 """
4 %prog SUBMODULE...
5
6 Hack to pipe submodules of Numpy through 2to3 and build them in-place
7 one-by-one.
8
9 Example usage:
10
11 python3 tools/py3tool.py testing distutils core
12
13 This will copy files to _py3k/numpy, add a dummy __init__.py and
14 version.py on the top level, and copy and 2to3 the files of the three
15 submodules.
16
17 When running py3tool again, only changed files are re-processed, which
18 makes the test-bugfix cycle faster.
19
20 """
21 from __future__ import division, absolute_import, print_function
22
23 from optparse import OptionParser
24 import shutil
25 import os
26 import sys
27 import re
28 import subprocess
29 import fnmatch
30
31 if os.environ.get('USE_2TO3CACHE'):
32 import lib2to3cache
33
34 BASE = os.path.normpath(os.path.join(os.path.dirname(__file__), '..'))
35 TEMP = os.path.normpath(os.path.join(BASE, '_py3k'))
36
37 SCRIPT_2TO3 = os.path.join(BASE, 'tools', '2to3.py')
38
39 EXTRA_2TO3_FLAGS = {
40 'numpy/core/defchararray.py': '-x unicode',
41 'numpy/compat/py3k.py': '-x unicode',
42 'numpy/ma/timer_comparison.py': 'skip',
43 }
44
45 # Names of fixers to skip when running 2to3. This is a complete list of
46 # available fixers, with fixers not currently skipped commented out.
47 FIXES_TO_SKIP = [
48 'apply',
49 'basestring',
50 'buffer',
51 'callable',
52 'dict',
53 'exec',
54 'execfile',
55 'exitfunc',
56 'filter',
57 'funcattrs',
58 'future',
59 'getcwdu',
60 'has_key',
61 # 'idioms',
62 'import',
63 'imports',
64 'imports2',
65 'input',
66 'intern',
67 # 'isinstance',
68 'itertools',
69 'itertools_imports',
70 'long',
71 'map',
72 'metaclass',
73 'methodattrs',
74 'ne',
75 # 'next',
76 'nonzero',
77 'numliterals',
78 'operator',
79 'paren',
80 'print',
81 'raise',
82 'raw_input',
83 'reduce',
84 'renames',
85 'repr',
86 'setliteral',
87 'standarderror',
88 'sys_exc',
89 'throw',
90 'tuple_params',
91 # 'types',
92 # 'unicode',
93 # 'urllib',
94 # 'ws_comma',
95 'xrange',
96 'xreadlines',
97 'zip',
98 ]
99
100 skip_fixes= []
101 for _t in FIXES_TO_SKIP:
102 skip_fixes.append('-x')
103 skip_fixes.append(_t)
104
105
106 def main():
107 p = OptionParser(usage=__doc__.strip())
108 p.add_option("--clean", "-c", action="store_true",
109 help="clean source directory")
110 options, args = p.parse_args()
111
112 if not args:
113 p.error('no submodules given')
114 else:
115 dirs = ['numpy/%s' % x for x in map(os.path.basename, args)]
116
117 # Prepare
118 if not os.path.isdir(TEMP):
119 os.makedirs(TEMP)
120
121 # Set up dummy files (for building only submodules)
122 dummy_files = {
123 '__init__.py': 'from numpy.version import version as __version__',
124 'version.py': 'version = "1.4.0.dev"'
125 }
126
127 for fn, content in dummy_files.items():
128 fn = os.path.join(TEMP, 'numpy', fn)
129 if not os.path.isfile(fn):
130 try:
131 os.makedirs(os.path.dirname(fn))
132 except OSError:
133 pass
134 f = open(fn, 'wb+')
135 f.write(content.encode('ascii'))
136 f.close()
137
138 # Environment
139 pp = [os.path.abspath(TEMP)]
140 def getenv():
141 env = dict(os.environ)
142 env.update({'PYTHONPATH': ':'.join(pp)})
143 return env
144
145 # Copy
146 for d in dirs:
147 src = os.path.join(BASE, d)
148 dst = os.path.join(TEMP, d)
149
150 # Run 2to3
151 sync_2to3(dst=dst,
152 src=src,
153 patchfile=os.path.join(TEMP, os.path.basename(d) + '.patch'),
154 clean=options.clean)
155
156 # Run setup.py, falling back to Pdb post-mortem on exceptions
157 setup_py = os.path.join(dst, 'setup.py')
158 if os.path.isfile(setup_py):
159 code = """\
160 import pdb, sys, traceback
161 p = pdb.Pdb()
162 try:
163 import __main__
164 __main__.__dict__.update({
165 "__name__": "__main__", "__file__": "setup.py",
166 "__builtins__": __builtins__})
167 fp = open("setup.py", "rb")
168 try:
169 exec(compile(fp.read(), "setup.py", 'exec'))
170 finally:
171 fp.close()
172 except SystemExit:
173 raise
174 except:
175 traceback.print_exc()
176 t = sys.exc_info()[2]
177 p.interaction(None, t)
178 """
179 ret = subprocess.call([sys.executable, '-c', code,
180 'build_ext', '-i'],
181 cwd=dst,
182 env=getenv())
183 if ret != 0:
184 raise RuntimeError("Build failed.")
185
186 # Run nosetests
187 subprocess.call(['nosetests3', '-v', d], cwd=TEMP)
188
189
190 def walk_sync(dir1, dir2, _seen=None):
191 if _seen is None:
192 seen = {}
193 else:
194 seen = _seen
195
196 if not dir1.endswith(os.path.sep):
197 dir1 = dir1 + os.path.sep
198
199 # Walk through stuff (which we haven't yet gone through) in dir1
200 for root, dirs, files in os.walk(dir1):
201 sub = root[len(dir1):]
202 if sub in seen:
203 dirs = [x for x in dirs if x not in seen[sub][0]]
204 files = [x for x in files if x not in seen[sub][1]]
205 seen[sub][0].extend(dirs)
206 seen[sub][1].extend(files)
207 else:
208 seen[sub] = (dirs, files)
209 if not dirs and not files:
210 continue
211 yield os.path.join(dir1, sub), os.path.join(dir2, sub), dirs, files
212
213 if _seen is None:
214 # Walk through stuff (which we haven't yet gone through) in dir2
215 for root2, root1, dirs, files in walk_sync(dir2, dir1, _seen=seen):
216 yield root1, root2, dirs, files
217
218 def sync_2to3(src, dst, patchfile=None, clean=False):
219 import lib2to3.main
220 from io import StringIO
221
222 to_convert = []
223
224 for src_dir, dst_dir, dirs, files in walk_sync(src, dst):
225 for fn in dirs + files:
226 src_fn = os.path.join(src_dir, fn)
227 dst_fn = os.path.join(dst_dir, fn)
228
229 # skip temporary etc. files
230 if fn.startswith('.#') or fn.endswith('~'):
231 continue
232
233 # remove non-existing
234 if os.path.exists(dst_fn) and not os.path.exists(src_fn):
235 if clean:
236 if os.path.isdir(dst_fn):
237 shutil.rmtree(dst_fn)
238 else:
239 os.unlink(dst_fn)
240 continue
241
242 # make directories
243 if os.path.isdir(src_fn):
244 if not os.path.isdir(dst_fn):
245 os.makedirs(dst_fn)
246 continue
247
248 dst_dir = os.path.dirname(dst_fn)
249 if os.path.isfile(dst_fn) and not os.path.isdir(dst_dir):
250 os.makedirs(dst_dir)
251
252 # don't replace up-to-date files
253 try:
254 if os.path.isfile(dst_fn) and \
255 os.stat(dst_fn).st_mtime >= os.stat(src_fn).st_mtime:
256 continue
257 except OSError:
258 pass
259
260 # copy file
261 shutil.copyfile(src_fn, dst_fn)
262
263 # add .py files to 2to3 list
264 if dst_fn.endswith('.py'):
265 to_convert.append((src_fn, dst_fn))
266
267 # run 2to3
268 flag_sets = {}
269 for fn, dst_fn in to_convert:
270 flag = ''
271 for pat, opt in EXTRA_2TO3_FLAGS.items():
272 if fnmatch.fnmatch(fn, pat):
273 flag = opt
274 break
275 flag_sets.setdefault(flag, []).append(dst_fn)
276
277 if patchfile:
278 p = open(patchfile, 'wb+')
279 else:
280 p = open(os.devnull, 'wb')
281
282 for flags, filenames in flag_sets.items():
283 if flags == 'skip':
284 continue
285
286 _old_stdout = sys.stdout
287 try:
288 sys.stdout = StringIO()
289 opt = []
290 opt.extend(['-w', '-n'])
291 opt.extend(skip_fixes)
292 opt.extend(flags.split())
293 opt.extend(filenames)
294 lib2to3.main.main("lib2to3.fixes", opt)
295 finally:
296 sys.stdout = _old_stdout
297
298 p.close()
299
300 if __name__ == "__main__":
301 main()
302
[end of tools/py3tool.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tools/py3tool.py b/tools/py3tool.py
--- a/tools/py3tool.py
+++ b/tools/py3tool.py
@@ -64,7 +64,7 @@
'imports2',
'input',
'intern',
-# 'isinstance',
+ 'isinstance',
'itertools',
'itertools_imports',
'long',
| {"golden_diff": "diff --git a/tools/py3tool.py b/tools/py3tool.py\n--- a/tools/py3tool.py\n+++ b/tools/py3tool.py\n@@ -64,7 +64,7 @@\n 'imports2',\n 'input',\n 'intern',\n-# 'isinstance',\n+ 'isinstance',\n 'itertools',\n 'itertools_imports',\n 'long',\n", "issue": "2to3 run `standarderror` fixer\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- python -*-\n\"\"\"\n%prog SUBMODULE...\n\nHack to pipe submodules of Numpy through 2to3 and build them in-place\none-by-one.\n\nExample usage:\n\n python3 tools/py3tool.py testing distutils core\n\nThis will copy files to _py3k/numpy, add a dummy __init__.py and\nversion.py on the top level, and copy and 2to3 the files of the three\nsubmodules.\n\nWhen running py3tool again, only changed files are re-processed, which\nmakes the test-bugfix cycle faster.\n\n\"\"\"\nfrom __future__ import division, absolute_import, print_function\n\nfrom optparse import OptionParser\nimport shutil\nimport os\nimport sys\nimport re\nimport subprocess\nimport fnmatch\n\nif os.environ.get('USE_2TO3CACHE'):\n import lib2to3cache\n\nBASE = os.path.normpath(os.path.join(os.path.dirname(__file__), '..'))\nTEMP = os.path.normpath(os.path.join(BASE, '_py3k'))\n\nSCRIPT_2TO3 = os.path.join(BASE, 'tools', '2to3.py')\n\nEXTRA_2TO3_FLAGS = {\n 'numpy/core/defchararray.py': '-x unicode',\n 'numpy/compat/py3k.py': '-x unicode',\n 'numpy/ma/timer_comparison.py': 'skip',\n}\n\n# Names of fixers to skip when running 2to3. This is a complete list of\n# available fixers, with fixers not currently skipped commented out.\nFIXES_TO_SKIP = [\n 'apply',\n 'basestring',\n 'buffer',\n 'callable',\n 'dict',\n 'exec',\n 'execfile',\n 'exitfunc',\n 'filter',\n 'funcattrs',\n 'future',\n 'getcwdu',\n 'has_key',\n# 'idioms',\n 'import',\n 'imports',\n 'imports2',\n 'input',\n 'intern',\n# 'isinstance',\n 'itertools',\n 'itertools_imports',\n 'long',\n 'map',\n 'metaclass',\n 'methodattrs',\n 'ne',\n# 'next',\n 'nonzero',\n 'numliterals',\n 'operator',\n 'paren',\n 'print',\n 'raise',\n 'raw_input',\n 'reduce',\n 'renames',\n 'repr',\n 'setliteral',\n 'standarderror',\n 'sys_exc',\n 'throw',\n 'tuple_params',\n# 'types',\n# 'unicode',\n# 'urllib',\n# 'ws_comma',\n 'xrange',\n 'xreadlines',\n 'zip',\n]\n\nskip_fixes= []\nfor _t in FIXES_TO_SKIP:\n skip_fixes.append('-x')\n skip_fixes.append(_t)\n\n\ndef main():\n p = OptionParser(usage=__doc__.strip())\n p.add_option(\"--clean\", \"-c\", action=\"store_true\",\n help=\"clean source directory\")\n options, args = p.parse_args()\n\n if not args:\n p.error('no submodules given')\n else:\n dirs = ['numpy/%s' % x for x in map(os.path.basename, args)]\n\n # Prepare\n if not os.path.isdir(TEMP):\n os.makedirs(TEMP)\n\n # Set up dummy files (for building only submodules)\n dummy_files = {\n '__init__.py': 'from numpy.version import version as __version__',\n 'version.py': 'version = \"1.4.0.dev\"'\n }\n\n for fn, content in dummy_files.items():\n fn = os.path.join(TEMP, 'numpy', fn)\n if not os.path.isfile(fn):\n try:\n os.makedirs(os.path.dirname(fn))\n except OSError:\n pass\n f = open(fn, 'wb+')\n f.write(content.encode('ascii'))\n f.close()\n\n # Environment\n pp = [os.path.abspath(TEMP)]\n def getenv():\n env = dict(os.environ)\n env.update({'PYTHONPATH': ':'.join(pp)})\n return env\n\n # Copy\n for d in dirs:\n src = os.path.join(BASE, d)\n dst = os.path.join(TEMP, d)\n\n # Run 2to3\n sync_2to3(dst=dst,\n src=src,\n patchfile=os.path.join(TEMP, os.path.basename(d) + '.patch'),\n clean=options.clean)\n\n # Run setup.py, falling back to 
Pdb post-mortem on exceptions\n setup_py = os.path.join(dst, 'setup.py')\n if os.path.isfile(setup_py):\n code = \"\"\"\\\nimport pdb, sys, traceback\np = pdb.Pdb()\ntry:\n import __main__\n __main__.__dict__.update({\n \"__name__\": \"__main__\", \"__file__\": \"setup.py\",\n \"__builtins__\": __builtins__})\n fp = open(\"setup.py\", \"rb\")\n try:\n exec(compile(fp.read(), \"setup.py\", 'exec'))\n finally:\n fp.close()\nexcept SystemExit:\n raise\nexcept:\n traceback.print_exc()\n t = sys.exc_info()[2]\n p.interaction(None, t)\n\"\"\"\n ret = subprocess.call([sys.executable, '-c', code,\n 'build_ext', '-i'],\n cwd=dst,\n env=getenv())\n if ret != 0:\n raise RuntimeError(\"Build failed.\")\n\n # Run nosetests\n subprocess.call(['nosetests3', '-v', d], cwd=TEMP)\n\n\ndef walk_sync(dir1, dir2, _seen=None):\n if _seen is None:\n seen = {}\n else:\n seen = _seen\n\n if not dir1.endswith(os.path.sep):\n dir1 = dir1 + os.path.sep\n\n # Walk through stuff (which we haven't yet gone through) in dir1\n for root, dirs, files in os.walk(dir1):\n sub = root[len(dir1):]\n if sub in seen:\n dirs = [x for x in dirs if x not in seen[sub][0]]\n files = [x for x in files if x not in seen[sub][1]]\n seen[sub][0].extend(dirs)\n seen[sub][1].extend(files)\n else:\n seen[sub] = (dirs, files)\n if not dirs and not files:\n continue\n yield os.path.join(dir1, sub), os.path.join(dir2, sub), dirs, files\n\n if _seen is None:\n # Walk through stuff (which we haven't yet gone through) in dir2\n for root2, root1, dirs, files in walk_sync(dir2, dir1, _seen=seen):\n yield root1, root2, dirs, files\n\ndef sync_2to3(src, dst, patchfile=None, clean=False):\n import lib2to3.main\n from io import StringIO\n\n to_convert = []\n\n for src_dir, dst_dir, dirs, files in walk_sync(src, dst):\n for fn in dirs + files:\n src_fn = os.path.join(src_dir, fn)\n dst_fn = os.path.join(dst_dir, fn)\n\n # skip temporary etc. files\n if fn.startswith('.#') or fn.endswith('~'):\n continue\n\n # remove non-existing\n if os.path.exists(dst_fn) and not os.path.exists(src_fn):\n if clean:\n if os.path.isdir(dst_fn):\n shutil.rmtree(dst_fn)\n else:\n os.unlink(dst_fn)\n continue\n\n # make directories\n if os.path.isdir(src_fn):\n if not os.path.isdir(dst_fn):\n os.makedirs(dst_fn)\n continue\n\n dst_dir = os.path.dirname(dst_fn)\n if os.path.isfile(dst_fn) and not os.path.isdir(dst_dir):\n os.makedirs(dst_dir)\n\n # don't replace up-to-date files\n try:\n if os.path.isfile(dst_fn) and \\\n os.stat(dst_fn).st_mtime >= os.stat(src_fn).st_mtime:\n continue\n except OSError:\n pass\n\n # copy file\n shutil.copyfile(src_fn, dst_fn)\n\n # add .py files to 2to3 list\n if dst_fn.endswith('.py'):\n to_convert.append((src_fn, dst_fn))\n\n # run 2to3\n flag_sets = {}\n for fn, dst_fn in to_convert:\n flag = ''\n for pat, opt in EXTRA_2TO3_FLAGS.items():\n if fnmatch.fnmatch(fn, pat):\n flag = opt\n break\n flag_sets.setdefault(flag, []).append(dst_fn)\n\n if patchfile:\n p = open(patchfile, 'wb+')\n else:\n p = open(os.devnull, 'wb')\n\n for flags, filenames in flag_sets.items():\n if flags == 'skip':\n continue\n\n _old_stdout = sys.stdout\n try:\n sys.stdout = StringIO()\n opt = []\n opt.extend(['-w', '-n'])\n opt.extend(skip_fixes)\n opt.extend(flags.split())\n opt.extend(filenames)\n lib2to3.main.main(\"lib2to3.fixes\", opt)\n finally:\n sys.stdout = _old_stdout\n\n p.close()\n\nif __name__ == \"__main__\":\n main()\n", "path": "tools/py3tool.py"}]} | 3,371 | 86 |
gh_patches_debug_4808 | rasdani/github-patches | git_diff | buildbot__buildbot-5301 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cache-control header is filled incorrectly
When a Cache-Control header is formed, a ';' character is used as the delimiter:
https://github.com/buildbot/buildbot/blob/144eb7e82dc261e6506f1f68493446bcb24d77a0/master/buildbot/www/config.py#L120
This is not allowed by [RFC 7234](https://tools.ietf.org/html/rfc7234). The RFC states the following format of the header:
```
Cache-Control = *( "," OWS ) cache-directive *( OWS "," [ OWS
cache-directive ] )
cache-directive = token [ "=" ( token / quoted-string ) ]
```
Thus, replacing `;` with `, ` is required.
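For illustration, a minimal sketch of the corrected call inside `renderIndex` in `master/buildbot/www/config.py` (this reuses the `request.setHeader` call already present there; keeping exactly `public` and `max-age=0` is an assumption, not a confirmed decision):
```python
# RFC 7234 cache directives are separated by commas, not semicolons.
request.setHeader(b"Cache-Control", b"public,max-age=0")
```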
</issue>
<code>
[start of master/buildbot/www/config.py]
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16
17 import json
18 import os
19 import posixpath
20
21 import jinja2
22
23 from twisted.internet import defer
24 from twisted.python import log
25 from twisted.web.error import Error
26
27 from buildbot.interfaces import IConfigured
28 from buildbot.util import unicode2bytes
29 from buildbot.www import resource
30
31
32 class IndexResource(resource.Resource):
33 # enable reconfigResource calls
34 needsReconfig = True
35
36 def __init__(self, master, staticdir):
37 super().__init__(master)
38 loader = jinja2.FileSystemLoader(staticdir)
39 self.jinja = jinja2.Environment(
40 loader=loader, undefined=jinja2.StrictUndefined)
41
42 def reconfigResource(self, new_config):
43 self.config = new_config.www
44
45 versions = self.getEnvironmentVersions()
46 vs = self.config.get('versions')
47 if isinstance(vs, list):
48 versions += vs
49 self.config['versions'] = versions
50
51 self.custom_templates = {}
52 template_dir = self.config.pop('custom_templates_dir', None)
53 if template_dir is not None:
54 template_dir = os.path.join(self.master.basedir, template_dir)
55 self.custom_templates = self.parseCustomTemplateDir(template_dir)
56
57 def render_GET(self, request):
58 return self.asyncRenderHelper(request, self.renderIndex)
59
60 def parseCustomTemplateDir(self, template_dir):
61 res = {}
62 allowed_ext = [".html"]
63 try:
64 import pyjade # pylint: disable=import-outside-toplevel
65 allowed_ext.append(".jade")
66 except ImportError: # pragma: no cover
67 log.msg("pyjade not installed. Ignoring .jade files from {}".format(template_dir))
68 pyjade = None
69 for root, dirs, files in os.walk(template_dir):
70 if root == template_dir:
71 template_name = posixpath.join("views", "%s.html")
72 else:
73 # template_name is a url, so we really want '/'
74 # root is a os.path, though
75 template_name = posixpath.join(
76 os.path.basename(root), "views", "%s.html")
77 for f in files:
78 fn = os.path.join(root, f)
79 basename, ext = os.path.splitext(f)
80 if ext not in allowed_ext:
81 continue
82 if ext == ".html":
83 with open(fn) as f:
84 html = f.read().strip()
85 elif ext == ".jade":
86 with open(fn) as f:
87 jade = f.read()
88 parser = pyjade.parser.Parser(jade)
89 block = parser.parse()
90 compiler = pyjade.ext.html.Compiler(
91 block, pretty=False)
92 html = compiler.compile()
93 res[template_name % (basename,)] = html
94
95 return res
96
97 @staticmethod
98 def getEnvironmentVersions():
99 import sys # pylint: disable=import-outside-toplevel
100 import twisted # pylint: disable=import-outside-toplevel
101 from buildbot import version as bbversion # pylint: disable=import-outside-toplevel
102
103 pyversion = '.'.join(map(str, sys.version_info[:3]))
104
105 tx_version_info = (twisted.version.major,
106 twisted.version.minor,
107 twisted.version.micro)
108 txversion = '.'.join(map(str, tx_version_info))
109
110 return [
111 ('Python', pyversion),
112 ('Buildbot', bbversion),
113 ('Twisted', txversion),
114 ]
115
116 @defer.inlineCallbacks
117 def renderIndex(self, request):
118 config = {}
119 request.setHeader(b"content-type", b'text/html')
120 request.setHeader(b"Cache-Control", b"public;max-age=0")
121
122 try:
123 yield self.config['auth'].maybeAutoLogin(request)
124 except Error as e:
125 config["on_load_warning"] = e.message
126
127 user_info = self.master.www.getUserInfos(request)
128 config.update({"user": user_info})
129
130 config.update(self.config)
131 config['buildbotURL'] = self.master.config.buildbotURL
132 config['title'] = self.master.config.title
133 config['titleURL'] = self.master.config.titleURL
134 config['multiMaster'] = self.master.config.multiMaster
135
136 # delete things that may contain secrets
137 if 'change_hook_dialects' in config:
138 del config['change_hook_dialects']
139
140 def toJson(obj):
141 try:
142 obj = IConfigured(obj).getConfigDict()
143 except TypeError:
144 # this happens for old style classes (not deriving objects)
145 pass
146 if isinstance(obj, dict):
147 return obj
148 # don't leak object memory address
149 obj = obj.__class__.__module__ + "." + obj.__class__.__name__
150 return repr(obj) + " not yet IConfigured"
151
152 tpl = self.jinja.get_template('index.html')
153 # we use Jinja in order to render some server side dynamic stuff
154 # For example, custom_templates javascript is generated by the
155 # layout.jade jinja template
156 tpl = tpl.render(configjson=json.dumps(config, default=toJson),
157 custom_templates=self.custom_templates,
158 config=self.config)
159 return unicode2bytes(tpl, encoding='ascii')
160
[end of master/buildbot/www/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/master/buildbot/www/config.py b/master/buildbot/www/config.py
--- a/master/buildbot/www/config.py
+++ b/master/buildbot/www/config.py
@@ -117,7 +117,7 @@
def renderIndex(self, request):
config = {}
request.setHeader(b"content-type", b'text/html')
- request.setHeader(b"Cache-Control", b"public;max-age=0")
+ request.setHeader(b"Cache-Control", b"public,max-age=0")
try:
yield self.config['auth'].maybeAutoLogin(request)
| {"golden_diff": "diff --git a/master/buildbot/www/config.py b/master/buildbot/www/config.py\n--- a/master/buildbot/www/config.py\n+++ b/master/buildbot/www/config.py\n@@ -117,7 +117,7 @@\n def renderIndex(self, request):\n config = {}\n request.setHeader(b\"content-type\", b'text/html')\n- request.setHeader(b\"Cache-Control\", b\"public;max-age=0\")\n+ request.setHeader(b\"Cache-Control\", b\"public,max-age=0\")\n \n try:\n yield self.config['auth'].maybeAutoLogin(request)\n", "issue": "Cache-control header is filled incorrectly\nWhen a cache-control header is formed a ';' character is used as a delimiter:\r\n\r\nhttps://github.com/buildbot/buildbot/blob/144eb7e82dc261e6506f1f68493446bcb24d77a0/master/buildbot/www/config.py#L120\r\n\r\nThis is not allowed by [RFC 7234](https://tools.ietf.org/html/rfc7234). The RFC states the following format of the header:\r\n```\r\nCache-Control = *( \",\" OWS ) cache-directive *( OWS \",\" [ OWS\r\n cache-directive ] )\r\n cache-directive = token [ \"=\" ( token / quoted-string ) ]\r\n```\r\n\r\nThus a replace `;` -> `, ` is required.\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\n\nimport json\nimport os\nimport posixpath\n\nimport jinja2\n\nfrom twisted.internet import defer\nfrom twisted.python import log\nfrom twisted.web.error import Error\n\nfrom buildbot.interfaces import IConfigured\nfrom buildbot.util import unicode2bytes\nfrom buildbot.www import resource\n\n\nclass IndexResource(resource.Resource):\n # enable reconfigResource calls\n needsReconfig = True\n\n def __init__(self, master, staticdir):\n super().__init__(master)\n loader = jinja2.FileSystemLoader(staticdir)\n self.jinja = jinja2.Environment(\n loader=loader, undefined=jinja2.StrictUndefined)\n\n def reconfigResource(self, new_config):\n self.config = new_config.www\n\n versions = self.getEnvironmentVersions()\n vs = self.config.get('versions')\n if isinstance(vs, list):\n versions += vs\n self.config['versions'] = versions\n\n self.custom_templates = {}\n template_dir = self.config.pop('custom_templates_dir', None)\n if template_dir is not None:\n template_dir = os.path.join(self.master.basedir, template_dir)\n self.custom_templates = self.parseCustomTemplateDir(template_dir)\n\n def render_GET(self, request):\n return self.asyncRenderHelper(request, self.renderIndex)\n\n def parseCustomTemplateDir(self, template_dir):\n res = {}\n allowed_ext = [\".html\"]\n try:\n import pyjade # pylint: disable=import-outside-toplevel\n allowed_ext.append(\".jade\")\n except ImportError: # pragma: no cover\n log.msg(\"pyjade not installed. 
Ignoring .jade files from {}\".format(template_dir))\n pyjade = None\n for root, dirs, files in os.walk(template_dir):\n if root == template_dir:\n template_name = posixpath.join(\"views\", \"%s.html\")\n else:\n # template_name is a url, so we really want '/'\n # root is a os.path, though\n template_name = posixpath.join(\n os.path.basename(root), \"views\", \"%s.html\")\n for f in files:\n fn = os.path.join(root, f)\n basename, ext = os.path.splitext(f)\n if ext not in allowed_ext:\n continue\n if ext == \".html\":\n with open(fn) as f:\n html = f.read().strip()\n elif ext == \".jade\":\n with open(fn) as f:\n jade = f.read()\n parser = pyjade.parser.Parser(jade)\n block = parser.parse()\n compiler = pyjade.ext.html.Compiler(\n block, pretty=False)\n html = compiler.compile()\n res[template_name % (basename,)] = html\n\n return res\n\n @staticmethod\n def getEnvironmentVersions():\n import sys # pylint: disable=import-outside-toplevel\n import twisted # pylint: disable=import-outside-toplevel\n from buildbot import version as bbversion # pylint: disable=import-outside-toplevel\n\n pyversion = '.'.join(map(str, sys.version_info[:3]))\n\n tx_version_info = (twisted.version.major,\n twisted.version.minor,\n twisted.version.micro)\n txversion = '.'.join(map(str, tx_version_info))\n\n return [\n ('Python', pyversion),\n ('Buildbot', bbversion),\n ('Twisted', txversion),\n ]\n\n @defer.inlineCallbacks\n def renderIndex(self, request):\n config = {}\n request.setHeader(b\"content-type\", b'text/html')\n request.setHeader(b\"Cache-Control\", b\"public;max-age=0\")\n\n try:\n yield self.config['auth'].maybeAutoLogin(request)\n except Error as e:\n config[\"on_load_warning\"] = e.message\n\n user_info = self.master.www.getUserInfos(request)\n config.update({\"user\": user_info})\n\n config.update(self.config)\n config['buildbotURL'] = self.master.config.buildbotURL\n config['title'] = self.master.config.title\n config['titleURL'] = self.master.config.titleURL\n config['multiMaster'] = self.master.config.multiMaster\n\n # delete things that may contain secrets\n if 'change_hook_dialects' in config:\n del config['change_hook_dialects']\n\n def toJson(obj):\n try:\n obj = IConfigured(obj).getConfigDict()\n except TypeError:\n # this happens for old style classes (not deriving objects)\n pass\n if isinstance(obj, dict):\n return obj\n # don't leak object memory address\n obj = obj.__class__.__module__ + \".\" + obj.__class__.__name__\n return repr(obj) + \" not yet IConfigured\"\n\n tpl = self.jinja.get_template('index.html')\n # we use Jinja in order to render some server side dynamic stuff\n # For example, custom_templates javascript is generated by the\n # layout.jade jinja template\n tpl = tpl.render(configjson=json.dumps(config, default=toJson),\n custom_templates=self.custom_templates,\n config=self.config)\n return unicode2bytes(tpl, encoding='ascii')\n", "path": "master/buildbot/www/config.py"}]} | 2,359 | 126 |
gh_patches_debug_41965 | rasdani/github-patches | git_diff | gratipay__gratipay.com-3163 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
login requires Referer
This just started recently. When I try to log in I get a blank page. Inspecting the response in my console shows I'm getting a 500 from the server.
My browser (ABrowser): Mozilla/5.0 (X11; Linux i686; rv:33.0) Gecko/20100101 Firefox/33.0
</issue>
<code>
[start of gratipay/security/csrf.py]
1 """Cross Site Request Forgery middleware, borrowed from Django.
2
3 See also:
4
5 https://github.com/django/django/blob/master/django/middleware/csrf.py
6 https://docs.djangoproject.com/en/dev/ref/contrib/csrf/
7 https://github.com/gratipay/gratipay.com/issues/88
8
9 """
10
11 from datetime import timedelta
12 import re
13 import urlparse
14 from aspen import log_dammit
15
16
17 #from django.utils.cache import patch_vary_headers
18 cc_delim_re = re.compile(r'\s*,\s*')
19 def patch_vary_headers(response, newheaders):
20 """
21 Adds (or updates) the "Vary" header in the given HttpResponse object.
22 newheaders is a list of header names that should be in "Vary". Existing
23 headers in "Vary" aren't removed.
24 """
25 # Note that we need to keep the original order intact, because cache
26 # implementations may rely on the order of the Vary contents in, say,
27 # computing an MD5 hash.
28 if 'Vary' in response.headers:
29 vary_headers = cc_delim_re.split(response.headers['Vary'])
30 else:
31 vary_headers = []
32 # Use .lower() here so we treat headers as case-insensitive.
33 existing_headers = set([header.lower() for header in vary_headers])
34 additional_headers = [newheader for newheader in newheaders
35 if newheader.lower() not in existing_headers]
36 response.headers['Vary'] = ', '.join(vary_headers + additional_headers)
37
38
39 #from django.utils.http import same_origin
40 def same_origin(url1, url2):
41 """
42 Checks if two URLs are 'same-origin'
43 """
44 p1, p2 = urlparse.urlparse(url1), urlparse.urlparse(url2)
45 return (p1.scheme, p1.hostname, p1.port) == (p2.scheme, p2.hostname, p2.port)
46
47
48 from aspen import Response
49 from crypto import constant_time_compare, get_random_string
50
51 REASON_NO_REFERER = "Referer checking failed - no Referer."
52 REASON_BAD_REFERER = "Referer checking failed - %s does not match %s."
53 REASON_NO_CSRF_COOKIE = "CSRF cookie not set."
54 REASON_BAD_TOKEN = "CSRF token missing or incorrect."
55
56 TOKEN_LENGTH = 32
57 CSRF_TIMEOUT = timedelta(days=7)
58
59
60 def _get_new_csrf_key():
61 return get_random_string(TOKEN_LENGTH)
62
63
64 def _sanitize_token(token):
65 # Allow only alphanum, and ensure we return a 'str' for the sake
66 # of the post processing middleware.
67 if len(token) > TOKEN_LENGTH:
68 return _get_new_csrf_key()
69 token = re.sub('[^a-zA-Z0-9]+', '', str(token.decode('ascii', 'ignore')))
70 if token == "":
71 # In case the cookie has been truncated to nothing at some point.
72 return _get_new_csrf_key()
73 return token
74
75 def _is_secure(request):
76 import gratipay
77 return gratipay.canonical_scheme == 'https'
78
79 def _get_host(request):
80 """Returns the HTTP host using the request headers.
81 """
82 return request.headers.get('X-Forwarded-Host', request.headers['Host'])
83
84
85
86 def get_csrf_token_from_request(request):
87 """Given a Request object, reject it if it's a forgery.
88 """
89 if request.line.uri.startswith('/assets/'): return
90 if request.line.uri.startswith('/callbacks/'): return
91
92 try:
93 csrf_token = _sanitize_token(request.headers.cookie['csrf_token'].value)
94 except KeyError:
95 csrf_token = None
96
97 request.context['csrf_token'] = csrf_token or _get_new_csrf_key()
98
99 # Assume that anything not defined as 'safe' by RC2616 needs protection
100 if request.line.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):
101
102 if _is_secure(request):
103 # Suppose user visits http://example.com/
104 # An active network attacker (man-in-the-middle, MITM) sends a
105 # POST form that targets https://example.com/detonate-bomb/ and
106 # submits it via JavaScript.
107 #
108 # The attacker will need to provide a CSRF cookie and token, but
109 # that's no problem for a MITM and the session-independent
110 # nonce we're using. So the MITM can circumvent the CSRF
111 # protection. This is true for any HTTP connection, but anyone
112 # using HTTPS expects better! For this reason, for
113 # https://example.com/ we need additional protection that treats
114 # http://example.com/ as completely untrusted. Under HTTPS,
115 # Barth et al. found that the Referer header is missing for
116 # same-domain requests in only about 0.2% of cases or less, so
117 # we can use strict Referer checking.
118 referer = request.headers.get('Referer')
119 if referer is None:
120 raise Response(403, REASON_NO_REFERER)
121
122 good_referer = 'https://%s/' % _get_host(request)
123 if not same_origin(referer, good_referer):
124 reason = REASON_BAD_REFERER % (referer, good_referer)
125 log_dammit(reason)
126 raise Response(403, reason)
127
128 if csrf_token is None:
129 raise Response(403, REASON_NO_CSRF_COOKIE)
130
131 # Check non-cookie token for match.
132 request_csrf_token = ""
133 if request.line.method == "POST":
134 if isinstance(request.body, dict):
135 request_csrf_token = request.body.get('csrf_token', '')
136
137 if request_csrf_token == "":
138 # Fall back to X-CSRF-TOKEN, to make things easier for AJAX,
139 # and possible for PUT/DELETE.
140 request_csrf_token = request.headers.get('X-CSRF-TOKEN', '')
141
142 if not constant_time_compare(request_csrf_token, csrf_token):
143 raise Response(403, REASON_BAD_TOKEN)
144
145
146 def add_csrf_token_to_response(response, request=None):
147 """Store the latest CSRF token as a cookie.
148 """
149 if request is None:
150 return # early parsing must've failed
151 csrf_token = request.context.get('csrf_token')
152 if csrf_token:
153 # Don't set httponly so that we can POST using XHR.
154 # https://github.com/gratipay/gratipay.com/issues/3030
155 response.set_cookie('csrf_token', csrf_token, expires=CSRF_TIMEOUT, httponly=False)
156
157 # Content varies with the CSRF cookie, so set the Vary header.
158 patch_vary_headers(response, ('Cookie',))
159
[end of gratipay/security/csrf.py]
[start of gratipay/security/authentication.py]
1 """Defines website authentication helpers.
2 """
3 import binascii
4 from datetime import date, datetime
5
6 from aspen import Response
7 from aspen.utils import to_rfc822
8 from gratipay.models.participant import Participant
9 from gratipay.security import csrf
10 from gratipay.security.crypto import constant_time_compare
11 from gratipay.security.user import User, SESSION
12
13
14 ANON = User()
15 BEGINNING_OF_EPOCH = to_rfc822(datetime(1970, 1, 1))
16
17 def _get_user_via_api_key(api_key):
18 """Given an api_key, return a User. This auth method is deprecated.
19 """
20 user = User()
21 user.participant = Participant._from_thing('api_key', api_key)
22 if user.participant:
23 p = user.participant
24 today = date.today()
25 if p.old_auth_usage != today:
26 Participant.db.run("""
27 UPDATE participants
28 SET old_auth_usage = %s
29 WHERE id = %s
30 """, (today, p.id))
31 return user
32
33 def _get_user_via_basic_auth(auth_header):
34 """Given a basic auth header, return a User object.
35 """
36 try:
37 creds = binascii.a2b_base64(auth_header[len('Basic '):]).split(':', 1)
38 except binascii.Error:
39 raise Response(400, 'Malformed "Authorization" header')
40 if len(creds) != 2:
41 raise Response(401)
42 userid, api_key = creds
43 if len(userid) == 36 and '-' in userid:
44 user = _get_user_via_api_key(userid) # For backward-compatibility
45 else:
46 try:
47 userid = int(userid)
48 except ValueError:
49 raise Response(401)
50 user = User.from_id(userid)
51 if user.ANON or not constant_time_compare(user.participant.api_key, api_key):
52 raise Response(401)
53 return user
54
55 def _turn_off_csrf(request):
56 """Given a request, short-circuit CSRF.
57 """
58 csrf_token = csrf._get_new_csrf_key()
59 request.headers.cookie['csrf_token'] = csrf_token
60 request.headers['X-CSRF-TOKEN'] = csrf_token
61 if 'Referer' not in request.headers:
62 request.headers['Referer'] = 'https://%s/' % csrf._get_host(request)
63
64 def set_request_context_user(request):
65 """Set request.context['user']. This signs the user in.
66 """
67
68 request.context['user'] = user = ANON # Make sure we always have a user object, even if
69 # there's an exception in the rest of this function.
70
71 if request.line.uri.startswith('/assets/'):
72 pass
73 elif 'Authorization' in request.headers:
74 header = request.headers['authorization']
75 if header.startswith('Basic '):
76 user = _get_user_via_basic_auth(header)
77 if not user.ANON:
78 _turn_off_csrf(request)
79 elif SESSION in request.headers.cookie:
80 token = request.headers.cookie[SESSION].value
81 user = User.from_session_token(token)
82
83 request.context['user'] = user
84
85 def add_auth_to_response(response, request=None):
86 if request is None:
87 return # early parsing must've failed
88 if request.line.uri.startswith('/assets/'):
89 return # assets never get auth headers and have their own caching done elsewhere
90
91 response.headers['Expires'] = BEGINNING_OF_EPOCH # don't cache
92
93 if SESSION in request.headers.cookie:
94 user = request.context.get('user') or ANON
95 if not user.ANON:
96 user.keep_signed_in(response.headers.cookie)
97
[end of gratipay/security/authentication.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gratipay/security/authentication.py b/gratipay/security/authentication.py
--- a/gratipay/security/authentication.py
+++ b/gratipay/security/authentication.py
@@ -58,8 +58,6 @@
csrf_token = csrf._get_new_csrf_key()
request.headers.cookie['csrf_token'] = csrf_token
request.headers['X-CSRF-TOKEN'] = csrf_token
- if 'Referer' not in request.headers:
- request.headers['Referer'] = 'https://%s/' % csrf._get_host(request)
def set_request_context_user(request):
"""Set request.context['user']. This signs the user in.
diff --git a/gratipay/security/csrf.py b/gratipay/security/csrf.py
--- a/gratipay/security/csrf.py
+++ b/gratipay/security/csrf.py
@@ -10,8 +10,6 @@
from datetime import timedelta
import re
-import urlparse
-from aspen import log_dammit
#from django.utils.cache import patch_vary_headers
@@ -36,20 +34,9 @@
response.headers['Vary'] = ', '.join(vary_headers + additional_headers)
-#from django.utils.http import same_origin
-def same_origin(url1, url2):
- """
- Checks if two URLs are 'same-origin'
- """
- p1, p2 = urlparse.urlparse(url1), urlparse.urlparse(url2)
- return (p1.scheme, p1.hostname, p1.port) == (p2.scheme, p2.hostname, p2.port)
-
-
from aspen import Response
from crypto import constant_time_compare, get_random_string
-REASON_NO_REFERER = "Referer checking failed - no Referer."
-REASON_BAD_REFERER = "Referer checking failed - %s does not match %s."
REASON_NO_CSRF_COOKIE = "CSRF cookie not set."
REASON_BAD_TOKEN = "CSRF token missing or incorrect."
@@ -72,15 +59,6 @@
return _get_new_csrf_key()
return token
-def _is_secure(request):
- import gratipay
- return gratipay.canonical_scheme == 'https'
-
-def _get_host(request):
- """Returns the HTTP host using the request headers.
- """
- return request.headers.get('X-Forwarded-Host', request.headers['Host'])
-
def get_csrf_token_from_request(request):
@@ -99,32 +77,6 @@
# Assume that anything not defined as 'safe' by RC2616 needs protection
if request.line.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):
- if _is_secure(request):
- # Suppose user visits http://example.com/
- # An active network attacker (man-in-the-middle, MITM) sends a
- # POST form that targets https://example.com/detonate-bomb/ and
- # submits it via JavaScript.
- #
- # The attacker will need to provide a CSRF cookie and token, but
- # that's no problem for a MITM and the session-independent
- # nonce we're using. So the MITM can circumvent the CSRF
- # protection. This is true for any HTTP connection, but anyone
- # using HTTPS expects better! For this reason, for
- # https://example.com/ we need additional protection that treats
- # http://example.com/ as completely untrusted. Under HTTPS,
- # Barth et al. found that the Referer header is missing for
- # same-domain requests in only about 0.2% of cases or less, so
- # we can use strict Referer checking.
- referer = request.headers.get('Referer')
- if referer is None:
- raise Response(403, REASON_NO_REFERER)
-
- good_referer = 'https://%s/' % _get_host(request)
- if not same_origin(referer, good_referer):
- reason = REASON_BAD_REFERER % (referer, good_referer)
- log_dammit(reason)
- raise Response(403, reason)
-
if csrf_token is None:
raise Response(403, REASON_NO_CSRF_COOKIE)
| {"golden_diff": "diff --git a/gratipay/security/authentication.py b/gratipay/security/authentication.py\n--- a/gratipay/security/authentication.py\n+++ b/gratipay/security/authentication.py\n@@ -58,8 +58,6 @@\n csrf_token = csrf._get_new_csrf_key()\n request.headers.cookie['csrf_token'] = csrf_token\n request.headers['X-CSRF-TOKEN'] = csrf_token\n- if 'Referer' not in request.headers:\n- request.headers['Referer'] = 'https://%s/' % csrf._get_host(request)\n \n def set_request_context_user(request):\n \"\"\"Set request.context['user']. This signs the user in.\ndiff --git a/gratipay/security/csrf.py b/gratipay/security/csrf.py\n--- a/gratipay/security/csrf.py\n+++ b/gratipay/security/csrf.py\n@@ -10,8 +10,6 @@\n \n from datetime import timedelta\n import re\n-import urlparse\n-from aspen import log_dammit\n \n \n #from django.utils.cache import patch_vary_headers\n@@ -36,20 +34,9 @@\n response.headers['Vary'] = ', '.join(vary_headers + additional_headers)\n \n \n-#from django.utils.http import same_origin\n-def same_origin(url1, url2):\n- \"\"\"\n- Checks if two URLs are 'same-origin'\n- \"\"\"\n- p1, p2 = urlparse.urlparse(url1), urlparse.urlparse(url2)\n- return (p1.scheme, p1.hostname, p1.port) == (p2.scheme, p2.hostname, p2.port)\n-\n-\n from aspen import Response\n from crypto import constant_time_compare, get_random_string\n \n-REASON_NO_REFERER = \"Referer checking failed - no Referer.\"\n-REASON_BAD_REFERER = \"Referer checking failed - %s does not match %s.\"\n REASON_NO_CSRF_COOKIE = \"CSRF cookie not set.\"\n REASON_BAD_TOKEN = \"CSRF token missing or incorrect.\"\n \n@@ -72,15 +59,6 @@\n return _get_new_csrf_key()\n return token\n \n-def _is_secure(request):\n- import gratipay\n- return gratipay.canonical_scheme == 'https'\n-\n-def _get_host(request):\n- \"\"\"Returns the HTTP host using the request headers.\n- \"\"\"\n- return request.headers.get('X-Forwarded-Host', request.headers['Host'])\n-\n \n \n def get_csrf_token_from_request(request):\n@@ -99,32 +77,6 @@\n # Assume that anything not defined as 'safe' by RC2616 needs protection\n if request.line.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):\n \n- if _is_secure(request):\n- # Suppose user visits http://example.com/\n- # An active network attacker (man-in-the-middle, MITM) sends a\n- # POST form that targets https://example.com/detonate-bomb/ and\n- # submits it via JavaScript.\n- #\n- # The attacker will need to provide a CSRF cookie and token, but\n- # that's no problem for a MITM and the session-independent\n- # nonce we're using. So the MITM can circumvent the CSRF\n- # protection. This is true for any HTTP connection, but anyone\n- # using HTTPS expects better! For this reason, for\n- # https://example.com/ we need additional protection that treats\n- # http://example.com/ as completely untrusted. Under HTTPS,\n- # Barth et al. found that the Referer header is missing for\n- # same-domain requests in only about 0.2% of cases or less, so\n- # we can use strict Referer checking.\n- referer = request.headers.get('Referer')\n- if referer is None:\n- raise Response(403, REASON_NO_REFERER)\n-\n- good_referer = 'https://%s/' % _get_host(request)\n- if not same_origin(referer, good_referer):\n- reason = REASON_BAD_REFERER % (referer, good_referer)\n- log_dammit(reason)\n- raise Response(403, reason)\n-\n if csrf_token is None:\n raise Response(403, REASON_NO_CSRF_COOKIE)\n", "issue": "login requires Referer\nThis just started recently. When I try to log in I get a blank page. 
Inspecting the response in my console shows I'm getting a 500 from the server.\n\nMy browser (ABrowser): Mozilla/5.0 (X11; Linux i686; rv:33.0) Gecko/20100101 Firefox/33.0\n\nlogin requires Referer\nThis just started recently. When I try to log in I get a blank page. Inspecting the response in my console shows I'm getting a 500 from the server.\n\nMy browser (ABrowser): Mozilla/5.0 (X11; Linux i686; rv:33.0) Gecko/20100101 Firefox/33.0\n\n", "before_files": [{"content": "\"\"\"Cross Site Request Forgery middleware, borrowed from Django.\n\nSee also:\n\n https://github.com/django/django/blob/master/django/middleware/csrf.py\n https://docs.djangoproject.com/en/dev/ref/contrib/csrf/\n https://github.com/gratipay/gratipay.com/issues/88\n\n\"\"\"\n\nfrom datetime import timedelta\nimport re\nimport urlparse\nfrom aspen import log_dammit\n\n\n#from django.utils.cache import patch_vary_headers\ncc_delim_re = re.compile(r'\\s*,\\s*')\ndef patch_vary_headers(response, newheaders):\n \"\"\"\n Adds (or updates) the \"Vary\" header in the given HttpResponse object.\n newheaders is a list of header names that should be in \"Vary\". Existing\n headers in \"Vary\" aren't removed.\n \"\"\"\n # Note that we need to keep the original order intact, because cache\n # implementations may rely on the order of the Vary contents in, say,\n # computing an MD5 hash.\n if 'Vary' in response.headers:\n vary_headers = cc_delim_re.split(response.headers['Vary'])\n else:\n vary_headers = []\n # Use .lower() here so we treat headers as case-insensitive.\n existing_headers = set([header.lower() for header in vary_headers])\n additional_headers = [newheader for newheader in newheaders\n if newheader.lower() not in existing_headers]\n response.headers['Vary'] = ', '.join(vary_headers + additional_headers)\n\n\n#from django.utils.http import same_origin\ndef same_origin(url1, url2):\n \"\"\"\n Checks if two URLs are 'same-origin'\n \"\"\"\n p1, p2 = urlparse.urlparse(url1), urlparse.urlparse(url2)\n return (p1.scheme, p1.hostname, p1.port) == (p2.scheme, p2.hostname, p2.port)\n\n\nfrom aspen import Response\nfrom crypto import constant_time_compare, get_random_string\n\nREASON_NO_REFERER = \"Referer checking failed - no Referer.\"\nREASON_BAD_REFERER = \"Referer checking failed - %s does not match %s.\"\nREASON_NO_CSRF_COOKIE = \"CSRF cookie not set.\"\nREASON_BAD_TOKEN = \"CSRF token missing or incorrect.\"\n\nTOKEN_LENGTH = 32\nCSRF_TIMEOUT = timedelta(days=7)\n\n\ndef _get_new_csrf_key():\n return get_random_string(TOKEN_LENGTH)\n\n\ndef _sanitize_token(token):\n # Allow only alphanum, and ensure we return a 'str' for the sake\n # of the post processing middleware.\n if len(token) > TOKEN_LENGTH:\n return _get_new_csrf_key()\n token = re.sub('[^a-zA-Z0-9]+', '', str(token.decode('ascii', 'ignore')))\n if token == \"\":\n # In case the cookie has been truncated to nothing at some point.\n return _get_new_csrf_key()\n return token\n\ndef _is_secure(request):\n import gratipay\n return gratipay.canonical_scheme == 'https'\n\ndef _get_host(request):\n \"\"\"Returns the HTTP host using the request headers.\n \"\"\"\n return request.headers.get('X-Forwarded-Host', request.headers['Host'])\n\n\n\ndef get_csrf_token_from_request(request):\n \"\"\"Given a Request object, reject it if it's a forgery.\n \"\"\"\n if request.line.uri.startswith('/assets/'): return\n if request.line.uri.startswith('/callbacks/'): return\n\n try:\n csrf_token = _sanitize_token(request.headers.cookie['csrf_token'].value)\n except KeyError:\n csrf_token = 
None\n\n request.context['csrf_token'] = csrf_token or _get_new_csrf_key()\n\n # Assume that anything not defined as 'safe' by RC2616 needs protection\n if request.line.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):\n\n if _is_secure(request):\n # Suppose user visits http://example.com/\n # An active network attacker (man-in-the-middle, MITM) sends a\n # POST form that targets https://example.com/detonate-bomb/ and\n # submits it via JavaScript.\n #\n # The attacker will need to provide a CSRF cookie and token, but\n # that's no problem for a MITM and the session-independent\n # nonce we're using. So the MITM can circumvent the CSRF\n # protection. This is true for any HTTP connection, but anyone\n # using HTTPS expects better! For this reason, for\n # https://example.com/ we need additional protection that treats\n # http://example.com/ as completely untrusted. Under HTTPS,\n # Barth et al. found that the Referer header is missing for\n # same-domain requests in only about 0.2% of cases or less, so\n # we can use strict Referer checking.\n referer = request.headers.get('Referer')\n if referer is None:\n raise Response(403, REASON_NO_REFERER)\n\n good_referer = 'https://%s/' % _get_host(request)\n if not same_origin(referer, good_referer):\n reason = REASON_BAD_REFERER % (referer, good_referer)\n log_dammit(reason)\n raise Response(403, reason)\n\n if csrf_token is None:\n raise Response(403, REASON_NO_CSRF_COOKIE)\n\n # Check non-cookie token for match.\n request_csrf_token = \"\"\n if request.line.method == \"POST\":\n if isinstance(request.body, dict):\n request_csrf_token = request.body.get('csrf_token', '')\n\n if request_csrf_token == \"\":\n # Fall back to X-CSRF-TOKEN, to make things easier for AJAX,\n # and possible for PUT/DELETE.\n request_csrf_token = request.headers.get('X-CSRF-TOKEN', '')\n\n if not constant_time_compare(request_csrf_token, csrf_token):\n raise Response(403, REASON_BAD_TOKEN)\n\n\ndef add_csrf_token_to_response(response, request=None):\n \"\"\"Store the latest CSRF token as a cookie.\n \"\"\"\n if request is None:\n return # early parsing must've failed\n csrf_token = request.context.get('csrf_token')\n if csrf_token:\n # Don't set httponly so that we can POST using XHR.\n # https://github.com/gratipay/gratipay.com/issues/3030\n response.set_cookie('csrf_token', csrf_token, expires=CSRF_TIMEOUT, httponly=False)\n\n # Content varies with the CSRF cookie, so set the Vary header.\n patch_vary_headers(response, ('Cookie',))\n", "path": "gratipay/security/csrf.py"}, {"content": "\"\"\"Defines website authentication helpers.\n\"\"\"\nimport binascii\nfrom datetime import date, datetime\n\nfrom aspen import Response\nfrom aspen.utils import to_rfc822\nfrom gratipay.models.participant import Participant\nfrom gratipay.security import csrf\nfrom gratipay.security.crypto import constant_time_compare\nfrom gratipay.security.user import User, SESSION\n\n\nANON = User()\nBEGINNING_OF_EPOCH = to_rfc822(datetime(1970, 1, 1))\n\ndef _get_user_via_api_key(api_key):\n \"\"\"Given an api_key, return a User. 
This auth method is deprecated.\n \"\"\"\n user = User()\n user.participant = Participant._from_thing('api_key', api_key)\n if user.participant:\n p = user.participant\n today = date.today()\n if p.old_auth_usage != today:\n Participant.db.run(\"\"\"\n UPDATE participants\n SET old_auth_usage = %s\n WHERE id = %s\n \"\"\", (today, p.id))\n return user\n\ndef _get_user_via_basic_auth(auth_header):\n \"\"\"Given a basic auth header, return a User object.\n \"\"\"\n try:\n creds = binascii.a2b_base64(auth_header[len('Basic '):]).split(':', 1)\n except binascii.Error:\n raise Response(400, 'Malformed \"Authorization\" header')\n if len(creds) != 2:\n raise Response(401)\n userid, api_key = creds\n if len(userid) == 36 and '-' in userid:\n user = _get_user_via_api_key(userid) # For backward-compatibility\n else:\n try:\n userid = int(userid)\n except ValueError:\n raise Response(401)\n user = User.from_id(userid)\n if user.ANON or not constant_time_compare(user.participant.api_key, api_key):\n raise Response(401)\n return user\n\ndef _turn_off_csrf(request):\n \"\"\"Given a request, short-circuit CSRF.\n \"\"\"\n csrf_token = csrf._get_new_csrf_key()\n request.headers.cookie['csrf_token'] = csrf_token\n request.headers['X-CSRF-TOKEN'] = csrf_token\n if 'Referer' not in request.headers:\n request.headers['Referer'] = 'https://%s/' % csrf._get_host(request)\n\ndef set_request_context_user(request):\n \"\"\"Set request.context['user']. This signs the user in.\n \"\"\"\n\n request.context['user'] = user = ANON # Make sure we always have a user object, even if\n # there's an exception in the rest of this function.\n\n if request.line.uri.startswith('/assets/'):\n pass\n elif 'Authorization' in request.headers:\n header = request.headers['authorization']\n if header.startswith('Basic '):\n user = _get_user_via_basic_auth(header)\n if not user.ANON:\n _turn_off_csrf(request)\n elif SESSION in request.headers.cookie:\n token = request.headers.cookie[SESSION].value\n user = User.from_session_token(token)\n\n request.context['user'] = user\n\ndef add_auth_to_response(response, request=None):\n if request is None:\n return # early parsing must've failed\n if request.line.uri.startswith('/assets/'):\n return # assets never get auth headers and have their own caching done elsewhere\n\n response.headers['Expires'] = BEGINNING_OF_EPOCH # don't cache\n\n if SESSION in request.headers.cookie:\n user = request.context.get('user') or ANON\n if not user.ANON:\n user.keep_signed_in(response.headers.cookie)\n", "path": "gratipay/security/authentication.py"}]} | 3,521 | 935 |
gh_patches_debug_12695 | rasdani/github-patches | git_diff | lutris__lutris-1031 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
“Restore default gamma...” doesn’t restore gamma after the game quits
For some reason, with the "Restore default gamma..." option enabled, the default screen gamma is not restored after a game that changes it quits (even after Lutris kills the processes in the prefix).
When running Lutris in debug mode (-d), I get this warning:
"WARNING 2018-08-08 18:46:47,323 [display.restore_gamma:168]:xgamma is not available on your system"
It would be nice to fix this, as changing the screen gamma back manually is a bit annoying.
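In case it helps triage, here is a small diagnostic sketch (hypothetical, not part of Lutris) to confirm whether the binary is actually on the PATH. Note that `restore_gamma()` only looks up `xgamma` when `XGAMMA_FOUND is None`, while the module initializes it to `False`, so that lookup may never run:
```python
import shutil

# Prints the full path if xgamma is installed, or None if it is truly missing.
# A valid path here, combined with the warning above, would point at the cached
# XGAMMA_FOUND sentinel in lutris/util/display.py rather than at the system.
print(shutil.which("xgamma"))
```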
</issue>
<code>
[start of lutris/util/display.py]
1 import re
2 import subprocess
3 import time
4
5 import gi
6 gi.require_version('GnomeDesktop', '3.0')
7
8 from gi.repository import Gdk, GnomeDesktop, GLib
9
10 from lutris.util import system
11 from lutris.util.log import logger
12
13 XRANDR_CACHE = None
14 XRANDR_CACHE_SET_AT = None
15 XGAMMA_FOUND = False
16
17
18 def cached(function):
19 def wrapper():
20 global XRANDR_CACHE
21 global XRANDR_CACHE_SET_AT
22
23 if XRANDR_CACHE and time.time() - XRANDR_CACHE_SET_AT < 60:
24 return XRANDR_CACHE
25 XRANDR_CACHE = function()
26 XRANDR_CACHE_SET_AT = time.time()
27 return XRANDR_CACHE
28 return wrapper
29
30
31 @cached
32 def get_vidmodes():
33 """Return video modes from XrandR"""
34 logger.debug("Retrieving video modes from XrandR")
35 xrandr_output = subprocess.Popen(["xrandr"],
36 stdout=subprocess.PIPE).communicate()[0]
37 return list([line for line in xrandr_output.decode().split("\n")])
38
39
40 def get_outputs():
41 """Return list of tuples containing output name and geometry."""
42 outputs = []
43 vid_modes = get_vidmodes()
44 if not vid_modes:
45 logger.error("xrandr didn't return anything")
46 return []
47 for line in vid_modes:
48 parts = line.split()
49 if len(parts) < 2:
50 continue
51 if parts[1] == 'connected':
52 if len(parts) == 2:
53 continue
54 if parts[2] != 'primary':
55 geom = parts[2]
56 rotate = parts[3]
57 else:
58 geom = parts[3]
59 rotate = parts[4]
60 if geom.startswith('('): # Screen turned off, no geometry
61 continue
62 if rotate.startswith('('): # Screen not rotated, no need to include
63 outputs.append((parts[0], geom, "normal"))
64 else:
65 if rotate in ("left", "right"):
66 geom_parts = geom.split('+')
67 x_y = geom_parts[0].split('x')
68 geom = "{}x{}+{}+{}".format(x_y[1], x_y[0], geom_parts[1], geom_parts[2])
69 outputs.append((parts[0], geom, rotate))
70 return outputs
71
72
73 def get_output_names():
74 """Return output names from XrandR"""
75 return [output[0] for output in get_outputs()]
76
77
78 def turn_off_except(display):
79 """Use XrandR to turn off displays except the one referenced by `display`"""
80 if not display:
81 logger.error("No active display given, no turning off every display")
82 return
83 for output in get_outputs():
84 if output[0] != display:
85 logger.info("Turning off %s", output[0])
86 subprocess.Popen(["xrandr", "--output", output[0], "--off"])
87
88
89 def get_resolutions():
90 """Return the list of supported screen resolutions."""
91 resolution_list = []
92 for line in get_vidmodes():
93 if line.startswith(" "):
94 resolution_match = re.match(r'.*?(\d+x\d+).*', line)
95 if resolution_match:
96 resolution_list.append(resolution_match.groups()[0])
97 return resolution_list
98
99
100 def get_unique_resolutions():
101 """Return available resolutions, without duplicates and ordered with highest resolution first"""
102 return sorted(set(get_resolutions()), key=lambda x: int(x.split('x')[0]), reverse=True)
103
104
105 def get_current_resolution(monitor=0):
106 """Return the current resolution for the desktop."""
107 resolution = list()
108 for line in get_vidmodes():
109 if line.startswith(" ") and "*" in line:
110 resolution_match = re.match(r'.*?(\d+x\d+).*', line)
111 if resolution_match:
112 resolution.append(resolution_match.groups()[0])
113 if monitor == 'all':
114 return resolution
115 return resolution[monitor]
116
117
118 def change_resolution(resolution):
119 """Change display resolution.
120
121 Takes a string for single monitors or a list of displays as returned
122 by get_outputs().
123 """
124 if not resolution:
125 logger.warning("No resolution provided")
126 return
127 if isinstance(resolution, str):
128 logger.debug("Switching resolution to %s", resolution)
129
130 if resolution not in get_resolutions():
131 logger.warning("Resolution %s doesn't exist.", resolution)
132 else:
133 logger.info("Changing resolution to %s", resolution)
134 subprocess.Popen(["xrandr", "-s", resolution])
135 else:
136 for display in resolution:
137 display_name = display[0]
138 logger.debug("Switching to %s on %s", display[1], display[0])
139 display_geom = display[1].split('+')
140 display_resolution = display_geom[0]
141 position = (display_geom[1], display_geom[2])
142
143 if (
144 len(display) > 2 and
145 display[2] in ('normal', 'left', 'right', 'inverted')
146 ):
147 rotation = display[2]
148 else:
149 rotation = "normal"
150 logger.info("Switching resolution of %s to %s", display_name, display_resolution)
151 subprocess.Popen([
152 "xrandr",
153 "--output", display_name,
154 "--mode", display_resolution,
155 "--pos", "{}x{}".format(position[0], position[1]),
156 "--rotate", rotation
157 ]).communicate()
158
159
160 def restore_gamma():
161 """Restores gamma to a normal level."""
162 global XGAMMA_FOUND
163 if XGAMMA_FOUND is None:
164 XGAMMA_FOUND = system.find_executable('xgamma')
165 if XGAMMA_FOUND is True:
166 subprocess.Popen(["xgamma", "-gamma", "1.0"])
167 else:
168 logger.warning('xgamma is not available on your system')
169
170
171 def get_xrandr_version():
172 """Return the major and minor version of XRandR utility"""
173 pattern = "version"
174 xrandr_output = subprocess.Popen(["xrandr", "--version"],
175 stdout=subprocess.PIPE).communicate()[0].decode()
176 position = xrandr_output.find(pattern) + len(pattern)
177 version_str = xrandr_output[position:].strip().split(".")
178 logger.debug("Found XrandR version %s", version_str)
179 try:
180 return {"major": int(version_str[0]), "minor": int(version_str[1])}
181 except ValueError:
182 logger.error("Can't find version in: %s", xrandr_output)
183 return {"major": 0, "minor": 0}
184
185
186 def get_graphics_adapaters():
187 """Return the list of graphics cards available on a system
188
189 Returns:
190 list: list of tuples containing PCI ID and description of the VGA adapter
191 """
192
193 if not system.find_executable('lspci'):
194 logger.warning('lspci is not available. List of graphics cards not available')
195 return []
196 return [
197 (pci_id, vga_desc.split(': ')[1]) for pci_id, vga_desc in [
198 line.split(maxsplit=1)
199 for line in system.execute('lspci').split('\n')
200 if 'VGA' in line
201 ]
202 ]
203
204
205 def get_providers():
206 """Return the list of available graphic cards"""
207 pattern = "name:"
208 providers = []
209 version = get_xrandr_version()
210
211 if version["major"] == 1 and version["minor"] >= 4:
212 logger.debug("Retrieving providers from XrandR")
213 xrandr_output = subprocess.Popen(["xrandr", "--listproviders"],
214 stdout=subprocess.PIPE).communicate()[0].decode()
215 for line in xrandr_output.split("\n"):
216 if line.find("Provider ") != 0:
217 continue
218 position = line.find(pattern) + len(pattern)
219 providers.append(line[position:].strip())
220
221 return providers
222
223
224 class LegacyDisplayManager:
225 @staticmethod
226 def get_resolutions():
227 return get_resolutions()
228
229 @staticmethod
230 def get_display_names():
231 return get_output_names()
232
233
234 class DisplayManager(object):
235 def __init__(self):
236 self.screen = Gdk.Screen.get_default()
237 self.rr_screen = GnomeDesktop.RRScreen.new(self.screen)
238 self.rr_config = GnomeDesktop.RRConfig.new_current(self.rr_screen)
239 self.rr_config.load_current()
240
241 @property
242 def outputs(self):
243 return self.rr_screen.list_outputs()
244
245 def get_display_names(self):
246 return [output_info.get_display_name() for output_info in self.rr_config.get_outputs()]
247
248 def get_output_modes(self, output):
249 logger.debug("Retrieving modes for %s", output)
250 resolutions = []
251 for mode in output.list_modes():
252 resolution = "%sx%s" % (mode.get_width(), mode.get_height())
253 if resolution not in resolutions:
254 resolutions.append(resolution)
255 return resolutions
256
257 def get_resolutions(self):
258 resolutions = []
259 for mode in self.rr_screen.list_modes():
260 resolutions.append("%sx%s" % (mode.get_width(), mode.get_height()))
261 return sorted(set(resolutions), key=lambda x: int(x.split('x')[0]), reverse=True)
262
263
264 try:
265 DISPLAY_MANAGER = DisplayManager()
266 except GLib.Error:
267 DISPLAY_MANAGER = LegacyDisplayManager()
268
269 USE_DRI_PRIME = len(get_graphics_adapaters()) > 1
270
271
272 def get_resolution_choices():
273 """Return list of available resolutions as label, value tuples
274 suitable for inclusion in drop-downs.
275 """
276 resolutions = DISPLAY_MANAGER.get_resolutions()
277 resolution_choices = list(zip(resolutions, resolutions))
278 resolution_choices.insert(0, ("Keep current", 'off'))
279 return resolution_choices
280
281
282 def get_output_choices():
283 """Return list of outputs for drop-downs"""
284 displays = DISPLAY_MANAGER.get_display_names()
285 output_choices = list(zip(displays, displays))
286 output_choices.insert(0, ("Off", 'off'))
287 return output_choices
288
289
290 def get_output_list():
291 """Return a list of output with their index.
292 This is used to indicate to SDL 1.2 which monitor to use.
293 """
294 choices = [
295 ('Off', 'off'),
296 ]
297 displays = DISPLAY_MANAGER.get_display_names()
298 for index, output in enumerate(displays):
299 # Display name can't be used because they might not be in the right order
300 # Using DISPLAYS to get the number of connected monitors
301 choices.append((output, str(index)))
302 return choices
303
[end of lutris/util/display.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lutris/util/display.py b/lutris/util/display.py
--- a/lutris/util/display.py
+++ b/lutris/util/display.py
@@ -12,8 +12,7 @@
XRANDR_CACHE = None
XRANDR_CACHE_SET_AT = None
-XGAMMA_FOUND = False
-
+XGAMMA_FOUND = None
def cached(function):
def wrapper():
@@ -161,7 +160,7 @@
"""Restores gamma to a normal level."""
global XGAMMA_FOUND
if XGAMMA_FOUND is None:
- XGAMMA_FOUND = system.find_executable('xgamma')
+ XGAMMA_FOUND = bool(system.find_executable('xgamma'))
if XGAMMA_FOUND is True:
subprocess.Popen(["xgamma", "-gamma", "1.0"])
else:
| {"golden_diff": "diff --git a/lutris/util/display.py b/lutris/util/display.py\n--- a/lutris/util/display.py\n+++ b/lutris/util/display.py\n@@ -12,8 +12,7 @@\n \n XRANDR_CACHE = None\n XRANDR_CACHE_SET_AT = None\n-XGAMMA_FOUND = False\n-\n+XGAMMA_FOUND = None\n \n def cached(function):\n def wrapper():\n@@ -161,7 +160,7 @@\n \"\"\"Restores gamma to a normal level.\"\"\"\n global XGAMMA_FOUND\n if XGAMMA_FOUND is None:\n- XGAMMA_FOUND = system.find_executable('xgamma')\n+ XGAMMA_FOUND = bool(system.find_executable('xgamma'))\n if XGAMMA_FOUND is True:\n subprocess.Popen([\"xgamma\", \"-gamma\", \"1.0\"])\n else:\n", "issue": "\u201cRestore default gamma....\u201d doesn\u2019t restore gamma after game quits\nFor some reason, with \"Restore default gamma...\" option enabled, default screen gamma is not restored after a game that changes it quits (even after Lutris kills processes in the prefix)\r\nWhen running Lutris in debug mode (-d) I get this Warning:\r\n\"WARNING 2018-08-08 18:46:47,323 [display.restore_gamma:168]:xgamma is not available on your system\"\r\nWould be nice to fix this, as changing screen gamma back manually is a bit annoying.\n", "before_files": [{"content": "import re\nimport subprocess\nimport time\n\nimport gi\ngi.require_version('GnomeDesktop', '3.0')\n\nfrom gi.repository import Gdk, GnomeDesktop, GLib\n\nfrom lutris.util import system\nfrom lutris.util.log import logger\n\nXRANDR_CACHE = None\nXRANDR_CACHE_SET_AT = None\nXGAMMA_FOUND = False\n\n\ndef cached(function):\n def wrapper():\n global XRANDR_CACHE\n global XRANDR_CACHE_SET_AT\n\n if XRANDR_CACHE and time.time() - XRANDR_CACHE_SET_AT < 60:\n return XRANDR_CACHE\n XRANDR_CACHE = function()\n XRANDR_CACHE_SET_AT = time.time()\n return XRANDR_CACHE\n return wrapper\n\n\n@cached\ndef get_vidmodes():\n \"\"\"Return video modes from XrandR\"\"\"\n logger.debug(\"Retrieving video modes from XrandR\")\n xrandr_output = subprocess.Popen([\"xrandr\"],\n stdout=subprocess.PIPE).communicate()[0]\n return list([line for line in xrandr_output.decode().split(\"\\n\")])\n\n\ndef get_outputs():\n \"\"\"Return list of tuples containing output name and geometry.\"\"\"\n outputs = []\n vid_modes = get_vidmodes()\n if not vid_modes:\n logger.error(\"xrandr didn't return anything\")\n return []\n for line in vid_modes:\n parts = line.split()\n if len(parts) < 2:\n continue\n if parts[1] == 'connected':\n if len(parts) == 2:\n continue\n if parts[2] != 'primary':\n geom = parts[2]\n rotate = parts[3]\n else:\n geom = parts[3]\n rotate = parts[4]\n if geom.startswith('('): # Screen turned off, no geometry\n continue\n if rotate.startswith('('): # Screen not rotated, no need to include\n outputs.append((parts[0], geom, \"normal\"))\n else:\n if rotate in (\"left\", \"right\"):\n geom_parts = geom.split('+')\n x_y = geom_parts[0].split('x')\n geom = \"{}x{}+{}+{}\".format(x_y[1], x_y[0], geom_parts[1], geom_parts[2])\n outputs.append((parts[0], geom, rotate))\n return outputs\n\n\ndef get_output_names():\n \"\"\"Return output names from XrandR\"\"\"\n return [output[0] for output in get_outputs()]\n\n\ndef turn_off_except(display):\n \"\"\"Use XrandR to turn off displays except the one referenced by `display`\"\"\"\n if not display:\n logger.error(\"No active display given, no turning off every display\")\n return\n for output in get_outputs():\n if output[0] != display:\n logger.info(\"Turning off %s\", output[0])\n subprocess.Popen([\"xrandr\", \"--output\", output[0], \"--off\"])\n\n\ndef get_resolutions():\n \"\"\"Return the list 
of supported screen resolutions.\"\"\"\n resolution_list = []\n for line in get_vidmodes():\n if line.startswith(\" \"):\n resolution_match = re.match(r'.*?(\\d+x\\d+).*', line)\n if resolution_match:\n resolution_list.append(resolution_match.groups()[0])\n return resolution_list\n\n\ndef get_unique_resolutions():\n \"\"\"Return available resolutions, without duplicates and ordered with highest resolution first\"\"\"\n return sorted(set(get_resolutions()), key=lambda x: int(x.split('x')[0]), reverse=True)\n\n\ndef get_current_resolution(monitor=0):\n \"\"\"Return the current resolution for the desktop.\"\"\"\n resolution = list()\n for line in get_vidmodes():\n if line.startswith(\" \") and \"*\" in line:\n resolution_match = re.match(r'.*?(\\d+x\\d+).*', line)\n if resolution_match:\n resolution.append(resolution_match.groups()[0])\n if monitor == 'all':\n return resolution\n return resolution[monitor]\n\n\ndef change_resolution(resolution):\n \"\"\"Change display resolution.\n\n Takes a string for single monitors or a list of displays as returned\n by get_outputs().\n \"\"\"\n if not resolution:\n logger.warning(\"No resolution provided\")\n return\n if isinstance(resolution, str):\n logger.debug(\"Switching resolution to %s\", resolution)\n\n if resolution not in get_resolutions():\n logger.warning(\"Resolution %s doesn't exist.\", resolution)\n else:\n logger.info(\"Changing resolution to %s\", resolution)\n subprocess.Popen([\"xrandr\", \"-s\", resolution])\n else:\n for display in resolution:\n display_name = display[0]\n logger.debug(\"Switching to %s on %s\", display[1], display[0])\n display_geom = display[1].split('+')\n display_resolution = display_geom[0]\n position = (display_geom[1], display_geom[2])\n\n if (\n len(display) > 2 and\n display[2] in ('normal', 'left', 'right', 'inverted')\n ):\n rotation = display[2]\n else:\n rotation = \"normal\"\n logger.info(\"Switching resolution of %s to %s\", display_name, display_resolution)\n subprocess.Popen([\n \"xrandr\",\n \"--output\", display_name,\n \"--mode\", display_resolution,\n \"--pos\", \"{}x{}\".format(position[0], position[1]),\n \"--rotate\", rotation\n ]).communicate()\n\n\ndef restore_gamma():\n \"\"\"Restores gamma to a normal level.\"\"\"\n global XGAMMA_FOUND\n if XGAMMA_FOUND is None:\n XGAMMA_FOUND = system.find_executable('xgamma')\n if XGAMMA_FOUND is True:\n subprocess.Popen([\"xgamma\", \"-gamma\", \"1.0\"])\n else:\n logger.warning('xgamma is not available on your system')\n\n\ndef get_xrandr_version():\n \"\"\"Return the major and minor version of XRandR utility\"\"\"\n pattern = \"version\"\n xrandr_output = subprocess.Popen([\"xrandr\", \"--version\"],\n stdout=subprocess.PIPE).communicate()[0].decode()\n position = xrandr_output.find(pattern) + len(pattern)\n version_str = xrandr_output[position:].strip().split(\".\")\n logger.debug(\"Found XrandR version %s\", version_str)\n try:\n return {\"major\": int(version_str[0]), \"minor\": int(version_str[1])}\n except ValueError:\n logger.error(\"Can't find version in: %s\", xrandr_output)\n return {\"major\": 0, \"minor\": 0}\n\n\ndef get_graphics_adapaters():\n \"\"\"Return the list of graphics cards available on a system\n\n Returns:\n list: list of tuples containing PCI ID and description of the VGA adapter\n \"\"\"\n\n if not system.find_executable('lspci'):\n logger.warning('lspci is not available. 
List of graphics cards not available')\n return []\n return [\n (pci_id, vga_desc.split(': ')[1]) for pci_id, vga_desc in [\n line.split(maxsplit=1)\n for line in system.execute('lspci').split('\\n')\n if 'VGA' in line\n ]\n ]\n\n\ndef get_providers():\n \"\"\"Return the list of available graphic cards\"\"\"\n pattern = \"name:\"\n providers = []\n version = get_xrandr_version()\n\n if version[\"major\"] == 1 and version[\"minor\"] >= 4:\n logger.debug(\"Retrieving providers from XrandR\")\n xrandr_output = subprocess.Popen([\"xrandr\", \"--listproviders\"],\n stdout=subprocess.PIPE).communicate()[0].decode()\n for line in xrandr_output.split(\"\\n\"):\n if line.find(\"Provider \") != 0:\n continue\n position = line.find(pattern) + len(pattern)\n providers.append(line[position:].strip())\n\n return providers\n\n\nclass LegacyDisplayManager:\n @staticmethod\n def get_resolutions():\n return get_resolutions()\n\n @staticmethod\n def get_display_names():\n return get_output_names()\n\n\nclass DisplayManager(object):\n def __init__(self):\n self.screen = Gdk.Screen.get_default()\n self.rr_screen = GnomeDesktop.RRScreen.new(self.screen)\n self.rr_config = GnomeDesktop.RRConfig.new_current(self.rr_screen)\n self.rr_config.load_current()\n\n @property\n def outputs(self):\n return self.rr_screen.list_outputs()\n\n def get_display_names(self):\n return [output_info.get_display_name() for output_info in self.rr_config.get_outputs()]\n\n def get_output_modes(self, output):\n logger.debug(\"Retrieving modes for %s\", output)\n resolutions = []\n for mode in output.list_modes():\n resolution = \"%sx%s\" % (mode.get_width(), mode.get_height())\n if resolution not in resolutions:\n resolutions.append(resolution)\n return resolutions\n\n def get_resolutions(self):\n resolutions = []\n for mode in self.rr_screen.list_modes():\n resolutions.append(\"%sx%s\" % (mode.get_width(), mode.get_height()))\n return sorted(set(resolutions), key=lambda x: int(x.split('x')[0]), reverse=True)\n\n\ntry:\n DISPLAY_MANAGER = DisplayManager()\nexcept GLib.Error:\n DISPLAY_MANAGER = LegacyDisplayManager()\n\nUSE_DRI_PRIME = len(get_graphics_adapaters()) > 1\n\n\ndef get_resolution_choices():\n \"\"\"Return list of available resolutions as label, value tuples\n suitable for inclusion in drop-downs.\n \"\"\"\n resolutions = DISPLAY_MANAGER.get_resolutions()\n resolution_choices = list(zip(resolutions, resolutions))\n resolution_choices.insert(0, (\"Keep current\", 'off'))\n return resolution_choices\n\n\ndef get_output_choices():\n \"\"\"Return list of outputs for drop-downs\"\"\"\n displays = DISPLAY_MANAGER.get_display_names()\n output_choices = list(zip(displays, displays))\n output_choices.insert(0, (\"Off\", 'off'))\n return output_choices\n\n\ndef get_output_list():\n \"\"\"Return a list of output with their index.\n This is used to indicate to SDL 1.2 which monitor to use.\n \"\"\"\n choices = [\n ('Off', 'off'),\n ]\n displays = DISPLAY_MANAGER.get_display_names()\n for index, output in enumerate(displays):\n # Display name can't be used because they might not be in the right order\n # Using DISPLAYS to get the number of connected monitors\n choices.append((output, str(index)))\n return choices\n", "path": "lutris/util/display.py"}]} | 3,744 | 194 |
gh_patches_debug_26910 | rasdani/github-patches | git_diff | frappe__frappe-24662 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ImportError: cannot import name 'LazyTranslate' from partially initialized module 'frappe.translate' (most likely due to a circular import)
```
> bench get-untranslated --app erpnext de untranslated.csv
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "apps/frappe/frappe/utils/bench_helper.py", line 114, in <module>
main()
File "apps/frappe/frappe/utils/bench_helper.py", line 20, in main
click.Group(commands=commands)(prog_name="bench")
File "env/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "env/lib/python3.11/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "env/lib/python3.11/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "env/lib/python3.11/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "env/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "env/lib/python3.11/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "env/lib/python3.11/site-packages/click/decorators.py", line 33, in new_func
return f(get_current_context(), *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/commands/__init__.py", line 29, in _func
ret = f(frappe._dict(ctx.obj), *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/commands/translate.py", line 59, in get_untranslated
import frappe.translate
File "apps/frappe/frappe/translate.py", line 23, in <module>
from frappe.gettext.extractors.utils import extract_messages_from_code, is_translatable
File "apps/frappe/frappe/gettext/extractors/utils.py", line 4, in <module>
from frappe.model.utils import InvalidIncludePath, render_include
File "apps/frappe/frappe/model/__init__.py", line 137, in <module>
{"fieldname": "name", "fieldtype": "Link", "label": _lt("ID")},
^^^^^^^^^
File "apps/frappe/frappe/__init__.py", line 133, in _lt
from frappe.translate import LazyTranslate
ImportError: cannot import name 'LazyTranslate' from partially initialized module 'frappe.translate' (most likely due to a circular import) (apps/frappe/frappe/translate.py)
```
</issue>
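The traceback above is the classic import-time cycle: `frappe/model/__init__.py` calls `_lt("ID")` while it is still being imported, `_lt` then tries `from frappe.translate import LazyTranslate`, but `frappe.translate` is itself only partway through its own import (it pulled in `frappe.model` via the gettext extractors), so the name does not exist yet. A minimal two-file sketch of the same failure, using hypothetical module names rather than frappe's actual layout, looks like this:

```
# translate.py  (hypothetical stand-in for frappe/translate.py)
import model                      # runs model's module-level code immediately

class LazyTranslate(str):         # only defined after `import model` has finished
    pass


# model.py  (hypothetical stand-in for frappe/model/__init__.py)
def _lt(text):
    # Deferred import: fine when called later at runtime, fatal while translate is still loading.
    from translate import LazyTranslate
    return LazyTranslate(text)

std_fields = [{"label": _lt("ID")}]   # module-level call triggers the ImportError
```

Running `import translate` in that sketch raises the same "cannot import name ... from partially initialized module" error; the patch further down in this record breaks the cycle by replacing the module-level `_lt(...)` labels with plain strings, so nothing translation-related has to run at import time.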
<code>
[start of frappe/model/__init__.py]
1 # Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
2 # License: MIT. See LICENSE
3
4 # model __init__.py
5 import frappe
6 from frappe import _, _lt
7
8 data_fieldtypes = (
9 "Currency",
10 "Int",
11 "Long Int",
12 "Float",
13 "Percent",
14 "Check",
15 "Small Text",
16 "Long Text",
17 "Code",
18 "Text Editor",
19 "Markdown Editor",
20 "HTML Editor",
21 "Date",
22 "Datetime",
23 "Time",
24 "Text",
25 "Data",
26 "Link",
27 "Dynamic Link",
28 "Password",
29 "Select",
30 "Rating",
31 "Read Only",
32 "Attach",
33 "Attach Image",
34 "Signature",
35 "Color",
36 "Barcode",
37 "Geolocation",
38 "Duration",
39 "Icon",
40 "Phone",
41 "Autocomplete",
42 "JSON",
43 )
44
45 float_like_fields = {"Float", "Currency", "Percent"}
46 datetime_fields = {"Datetime", "Date", "Time"}
47
48 attachment_fieldtypes = (
49 "Attach",
50 "Attach Image",
51 )
52
53 no_value_fields = (
54 "Section Break",
55 "Column Break",
56 "Tab Break",
57 "HTML",
58 "Table",
59 "Table MultiSelect",
60 "Button",
61 "Image",
62 "Fold",
63 "Heading",
64 )
65
66 display_fieldtypes = (
67 "Section Break",
68 "Column Break",
69 "Tab Break",
70 "HTML",
71 "Button",
72 "Image",
73 "Fold",
74 "Heading",
75 )
76
77 numeric_fieldtypes = ("Currency", "Int", "Long Int", "Float", "Percent", "Check")
78
79 data_field_options = ("Email", "Name", "Phone", "URL", "Barcode")
80
81 default_fields = (
82 "doctype",
83 "name",
84 "owner",
85 "creation",
86 "modified",
87 "modified_by",
88 "docstatus",
89 "idx",
90 )
91
92 child_table_fields = ("parent", "parentfield", "parenttype")
93
94 optional_fields = ("_user_tags", "_comments", "_assign", "_liked_by", "_seen")
95
96 table_fields = ("Table", "Table MultiSelect")
97
98 core_doctypes_list = (
99 "DefaultValue",
100 "DocType",
101 "DocField",
102 "DocPerm",
103 "DocType Action",
104 "DocType Link",
105 "User",
106 "Role",
107 "Has Role",
108 "Page",
109 "Module Def",
110 "Print Format",
111 "Report",
112 "Customize Form",
113 "Customize Form Field",
114 "Property Setter",
115 "Custom Field",
116 "Client Script",
117 )
118
119 log_types = (
120 "Version",
121 "Error Log",
122 "Scheduled Job Log",
123 "Event Sync Log",
124 "Event Update Log",
125 "Access Log",
126 "View Log",
127 "Activity Log",
128 "Energy Point Log",
129 "Notification Log",
130 "Email Queue",
131 "DocShare",
132 "Document Follow",
133 "Console Log",
134 )
135
136 std_fields = [
137 {"fieldname": "name", "fieldtype": "Link", "label": _lt("ID")},
138 {"fieldname": "owner", "fieldtype": "Link", "label": _lt("Created By"), "options": "User"},
139 {"fieldname": "idx", "fieldtype": "Int", "label": _lt("Index")},
140 {"fieldname": "creation", "fieldtype": "Datetime", "label": _lt("Created On")},
141 {"fieldname": "modified", "fieldtype": "Datetime", "label": _lt("Last Updated On")},
142 {
143 "fieldname": "modified_by",
144 "fieldtype": "Link",
145 "label": _lt("Last Updated By"),
146 "options": "User",
147 },
148 {"fieldname": "_user_tags", "fieldtype": "Data", "label": _lt("Tags")},
149 {"fieldname": "_liked_by", "fieldtype": "Data", "label": _lt("Liked By")},
150 {"fieldname": "_comments", "fieldtype": "Text", "label": _lt("Comments")},
151 {"fieldname": "_assign", "fieldtype": "Text", "label": _lt("Assigned To")},
152 {"fieldname": "docstatus", "fieldtype": "Int", "label": _lt("Document Status")},
153 ]
154
155
156 def delete_fields(args_dict, delete=0):
157 """
158 Delete a field.
159 * Deletes record from `tabDocField`
160 * If not single doctype: Drops column from table
161 * If single, deletes record from `tabSingles`
162 args_dict = { dt: [field names] }
163 """
164 import frappe.utils
165
166 for dt in args_dict:
167 fields = args_dict[dt]
168 if not fields:
169 continue
170
171 frappe.db.delete(
172 "DocField",
173 {
174 "parent": dt,
175 "fieldname": ("in", fields),
176 },
177 )
178
179 # Delete the data/column only if delete is specified
180 if not delete:
181 continue
182
183 if frappe.db.get_value("DocType", dt, "issingle"):
184 frappe.db.delete(
185 "Singles",
186 {
187 "doctype": dt,
188 "field": ("in", fields),
189 },
190 )
191 else:
192 existing_fields = frappe.db.describe(dt)
193 existing_fields = existing_fields and [e[0] for e in existing_fields] or []
194 fields_need_to_delete = set(fields) & set(existing_fields)
195 if not fields_need_to_delete:
196 continue
197
198 if frappe.db.db_type == "mariadb":
199 # mariadb implicitly commits before DDL, make it explicit
200 frappe.db.commit()
201
202 query = "ALTER TABLE `tab%s` " % dt + ", ".join(
203 "DROP COLUMN `%s`" % f for f in fields_need_to_delete
204 )
205 frappe.db.sql(query)
206
207 if frappe.db.db_type == "postgres":
208 # commit the results to db
209 frappe.db.commit()
210
211
212 def get_permitted_fields(
213 doctype: str,
214 parenttype: str | None = None,
215 user: str | None = None,
216 permission_type: str | None = None,
217 *,
218 ignore_virtual=False,
219 ) -> list[str]:
220 meta = frappe.get_meta(doctype)
221 valid_columns = meta.get_valid_columns()
222
223 if doctype in core_doctypes_list:
224 return valid_columns
225
226 # DocType has only fields of type Table (Table, Table MultiSelect)
227 if set(valid_columns).issubset(default_fields):
228 return valid_columns
229
230 if permission_type is None:
231 permission_type = "select" if frappe.only_has_select_perm(doctype, user=user) else "read"
232
233 meta_fields = meta.default_fields.copy()
234 optional_meta_fields = [x for x in optional_fields if x in valid_columns]
235
236 if permitted_fields := meta.get_permitted_fieldnames(
237 parenttype=parenttype,
238 user=user,
239 permission_type=permission_type,
240 with_virtual_fields=not ignore_virtual,
241 ):
242 if permission_type == "select":
243 return permitted_fields
244
245 if meta.istable:
246 meta_fields.extend(child_table_fields)
247
248 return meta_fields + permitted_fields + optional_meta_fields
249
250 return meta_fields + optional_meta_fields
251
252
253 def is_default_field(fieldname: str) -> bool:
254 return fieldname in default_fields
255
[end of frappe/model/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/frappe/model/__init__.py b/frappe/model/__init__.py
--- a/frappe/model/__init__.py
+++ b/frappe/model/__init__.py
@@ -134,22 +134,22 @@
)
std_fields = [
- {"fieldname": "name", "fieldtype": "Link", "label": _lt("ID")},
- {"fieldname": "owner", "fieldtype": "Link", "label": _lt("Created By"), "options": "User"},
- {"fieldname": "idx", "fieldtype": "Int", "label": _lt("Index")},
- {"fieldname": "creation", "fieldtype": "Datetime", "label": _lt("Created On")},
- {"fieldname": "modified", "fieldtype": "Datetime", "label": _lt("Last Updated On")},
+ {"fieldname": "name", "fieldtype": "Link", "label": "ID"},
+ {"fieldname": "owner", "fieldtype": "Link", "label": "Created By", "options": "User"},
+ {"fieldname": "idx", "fieldtype": "Int", "label": "Index"},
+ {"fieldname": "creation", "fieldtype": "Datetime", "label": "Created On"},
+ {"fieldname": "modified", "fieldtype": "Datetime", "label": "Last Updated On"},
{
"fieldname": "modified_by",
"fieldtype": "Link",
- "label": _lt("Last Updated By"),
+ "label": "Last Updated By",
"options": "User",
},
- {"fieldname": "_user_tags", "fieldtype": "Data", "label": _lt("Tags")},
- {"fieldname": "_liked_by", "fieldtype": "Data", "label": _lt("Liked By")},
- {"fieldname": "_comments", "fieldtype": "Text", "label": _lt("Comments")},
- {"fieldname": "_assign", "fieldtype": "Text", "label": _lt("Assigned To")},
- {"fieldname": "docstatus", "fieldtype": "Int", "label": _lt("Document Status")},
+ {"fieldname": "_user_tags", "fieldtype": "Data", "label": "Tags"},
+ {"fieldname": "_liked_by", "fieldtype": "Data", "label": "Liked By"},
+ {"fieldname": "_comments", "fieldtype": "Text", "label": "Comments"},
+ {"fieldname": "_assign", "fieldtype": "Text", "label": "Assigned To"},
+ {"fieldname": "docstatus", "fieldtype": "Int", "label": "Document Status"},
]
| {"golden_diff": "diff --git a/frappe/model/__init__.py b/frappe/model/__init__.py\n--- a/frappe/model/__init__.py\n+++ b/frappe/model/__init__.py\n@@ -134,22 +134,22 @@\n )\n \n std_fields = [\n-\t{\"fieldname\": \"name\", \"fieldtype\": \"Link\", \"label\": _lt(\"ID\")},\n-\t{\"fieldname\": \"owner\", \"fieldtype\": \"Link\", \"label\": _lt(\"Created By\"), \"options\": \"User\"},\n-\t{\"fieldname\": \"idx\", \"fieldtype\": \"Int\", \"label\": _lt(\"Index\")},\n-\t{\"fieldname\": \"creation\", \"fieldtype\": \"Datetime\", \"label\": _lt(\"Created On\")},\n-\t{\"fieldname\": \"modified\", \"fieldtype\": \"Datetime\", \"label\": _lt(\"Last Updated On\")},\n+\t{\"fieldname\": \"name\", \"fieldtype\": \"Link\", \"label\": \"ID\"},\n+\t{\"fieldname\": \"owner\", \"fieldtype\": \"Link\", \"label\": \"Created By\", \"options\": \"User\"},\n+\t{\"fieldname\": \"idx\", \"fieldtype\": \"Int\", \"label\": \"Index\"},\n+\t{\"fieldname\": \"creation\", \"fieldtype\": \"Datetime\", \"label\": \"Created On\"},\n+\t{\"fieldname\": \"modified\", \"fieldtype\": \"Datetime\", \"label\": \"Last Updated On\"},\n \t{\n \t\t\"fieldname\": \"modified_by\",\n \t\t\"fieldtype\": \"Link\",\n-\t\t\"label\": _lt(\"Last Updated By\"),\n+\t\t\"label\": \"Last Updated By\",\n \t\t\"options\": \"User\",\n \t},\n-\t{\"fieldname\": \"_user_tags\", \"fieldtype\": \"Data\", \"label\": _lt(\"Tags\")},\n-\t{\"fieldname\": \"_liked_by\", \"fieldtype\": \"Data\", \"label\": _lt(\"Liked By\")},\n-\t{\"fieldname\": \"_comments\", \"fieldtype\": \"Text\", \"label\": _lt(\"Comments\")},\n-\t{\"fieldname\": \"_assign\", \"fieldtype\": \"Text\", \"label\": _lt(\"Assigned To\")},\n-\t{\"fieldname\": \"docstatus\", \"fieldtype\": \"Int\", \"label\": _lt(\"Document Status\")},\n+\t{\"fieldname\": \"_user_tags\", \"fieldtype\": \"Data\", \"label\": \"Tags\"},\n+\t{\"fieldname\": \"_liked_by\", \"fieldtype\": \"Data\", \"label\": \"Liked By\"},\n+\t{\"fieldname\": \"_comments\", \"fieldtype\": \"Text\", \"label\": \"Comments\"},\n+\t{\"fieldname\": \"_assign\", \"fieldtype\": \"Text\", \"label\": \"Assigned To\"},\n+\t{\"fieldname\": \"docstatus\", \"fieldtype\": \"Int\", \"label\": \"Document Status\"},\n ]\n", "issue": "ImportError: cannot import name 'LazyTranslate' from partially initialized module 'frappe.translate' (most likely due to a circular import)\n```\r\n> bench get-untranslated --app erpnext de untranslated.csv\r\n\r\nTraceback (most recent call last):\r\n File \"<frozen runpy>\", line 198, in _run_module_as_main\r\n File \"<frozen runpy>\", line 88, in _run_code\r\n File \"apps/frappe/frappe/utils/bench_helper.py\", line 114, in <module>\r\n main()\r\n File \"apps/frappe/frappe/utils/bench_helper.py\", line 20, in main\r\n click.Group(commands=commands)(prog_name=\"bench\")\r\n File \"env/lib/python3.11/site-packages/click/core.py\", line 1157, in __call__\r\n return self.main(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"env/lib/python3.11/site-packages/click/core.py\", line 1078, in main\r\n rv = self.invoke(ctx)\r\n ^^^^^^^^^^^^^^^^\r\n File \"env/lib/python3.11/site-packages/click/core.py\", line 1688, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"env/lib/python3.11/site-packages/click/core.py\", line 1688, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"env/lib/python3.11/site-packages/click/core.py\", line 1434, in invoke\r\n return ctx.invoke(self.callback, 
**ctx.params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"env/lib/python3.11/site-packages/click/core.py\", line 783, in invoke\r\n return __callback(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"env/lib/python3.11/site-packages/click/decorators.py\", line 33, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"apps/frappe/frappe/commands/__init__.py\", line 29, in _func\r\n ret = f(frappe._dict(ctx.obj), *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"apps/frappe/frappe/commands/translate.py\", line 59, in get_untranslated\r\n import frappe.translate\r\n File \"apps/frappe/frappe/translate.py\", line 23, in <module>\r\n from frappe.gettext.extractors.utils import extract_messages_from_code, is_translatable\r\n File \"apps/frappe/frappe/gettext/extractors/utils.py\", line 4, in <module>\r\n from frappe.model.utils import InvalidIncludePath, render_include\r\n File \"apps/frappe/frappe/model/__init__.py\", line 137, in <module>\r\n {\"fieldname\": \"name\", \"fieldtype\": \"Link\", \"label\": _lt(\"ID\")},\r\n ^^^^^^^^^\r\n File \"apps/frappe/frappe/__init__.py\", line 133, in _lt\r\n from frappe.translate import LazyTranslate\r\nImportError: cannot import name 'LazyTranslate' from partially initialized module 'frappe.translate' (most likely due to a circular import) (apps/frappe/frappe/translate.py)\r\n```\n", "before_files": [{"content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# License: MIT. See LICENSE\n\n# model __init__.py\nimport frappe\nfrom frappe import _, _lt\n\ndata_fieldtypes = (\n\t\"Currency\",\n\t\"Int\",\n\t\"Long Int\",\n\t\"Float\",\n\t\"Percent\",\n\t\"Check\",\n\t\"Small Text\",\n\t\"Long Text\",\n\t\"Code\",\n\t\"Text Editor\",\n\t\"Markdown Editor\",\n\t\"HTML Editor\",\n\t\"Date\",\n\t\"Datetime\",\n\t\"Time\",\n\t\"Text\",\n\t\"Data\",\n\t\"Link\",\n\t\"Dynamic Link\",\n\t\"Password\",\n\t\"Select\",\n\t\"Rating\",\n\t\"Read Only\",\n\t\"Attach\",\n\t\"Attach Image\",\n\t\"Signature\",\n\t\"Color\",\n\t\"Barcode\",\n\t\"Geolocation\",\n\t\"Duration\",\n\t\"Icon\",\n\t\"Phone\",\n\t\"Autocomplete\",\n\t\"JSON\",\n)\n\nfloat_like_fields = {\"Float\", \"Currency\", \"Percent\"}\ndatetime_fields = {\"Datetime\", \"Date\", \"Time\"}\n\nattachment_fieldtypes = (\n\t\"Attach\",\n\t\"Attach Image\",\n)\n\nno_value_fields = (\n\t\"Section Break\",\n\t\"Column Break\",\n\t\"Tab Break\",\n\t\"HTML\",\n\t\"Table\",\n\t\"Table MultiSelect\",\n\t\"Button\",\n\t\"Image\",\n\t\"Fold\",\n\t\"Heading\",\n)\n\ndisplay_fieldtypes = (\n\t\"Section Break\",\n\t\"Column Break\",\n\t\"Tab Break\",\n\t\"HTML\",\n\t\"Button\",\n\t\"Image\",\n\t\"Fold\",\n\t\"Heading\",\n)\n\nnumeric_fieldtypes = (\"Currency\", \"Int\", \"Long Int\", \"Float\", \"Percent\", \"Check\")\n\ndata_field_options = (\"Email\", \"Name\", \"Phone\", \"URL\", \"Barcode\")\n\ndefault_fields = (\n\t\"doctype\",\n\t\"name\",\n\t\"owner\",\n\t\"creation\",\n\t\"modified\",\n\t\"modified_by\",\n\t\"docstatus\",\n\t\"idx\",\n)\n\nchild_table_fields = (\"parent\", \"parentfield\", \"parenttype\")\n\noptional_fields = (\"_user_tags\", \"_comments\", \"_assign\", \"_liked_by\", \"_seen\")\n\ntable_fields = (\"Table\", \"Table MultiSelect\")\n\ncore_doctypes_list = (\n\t\"DefaultValue\",\n\t\"DocType\",\n\t\"DocField\",\n\t\"DocPerm\",\n\t\"DocType Action\",\n\t\"DocType Link\",\n\t\"User\",\n\t\"Role\",\n\t\"Has Role\",\n\t\"Page\",\n\t\"Module Def\",\n\t\"Print 
Format\",\n\t\"Report\",\n\t\"Customize Form\",\n\t\"Customize Form Field\",\n\t\"Property Setter\",\n\t\"Custom Field\",\n\t\"Client Script\",\n)\n\nlog_types = (\n\t\"Version\",\n\t\"Error Log\",\n\t\"Scheduled Job Log\",\n\t\"Event Sync Log\",\n\t\"Event Update Log\",\n\t\"Access Log\",\n\t\"View Log\",\n\t\"Activity Log\",\n\t\"Energy Point Log\",\n\t\"Notification Log\",\n\t\"Email Queue\",\n\t\"DocShare\",\n\t\"Document Follow\",\n\t\"Console Log\",\n)\n\nstd_fields = [\n\t{\"fieldname\": \"name\", \"fieldtype\": \"Link\", \"label\": _lt(\"ID\")},\n\t{\"fieldname\": \"owner\", \"fieldtype\": \"Link\", \"label\": _lt(\"Created By\"), \"options\": \"User\"},\n\t{\"fieldname\": \"idx\", \"fieldtype\": \"Int\", \"label\": _lt(\"Index\")},\n\t{\"fieldname\": \"creation\", \"fieldtype\": \"Datetime\", \"label\": _lt(\"Created On\")},\n\t{\"fieldname\": \"modified\", \"fieldtype\": \"Datetime\", \"label\": _lt(\"Last Updated On\")},\n\t{\n\t\t\"fieldname\": \"modified_by\",\n\t\t\"fieldtype\": \"Link\",\n\t\t\"label\": _lt(\"Last Updated By\"),\n\t\t\"options\": \"User\",\n\t},\n\t{\"fieldname\": \"_user_tags\", \"fieldtype\": \"Data\", \"label\": _lt(\"Tags\")},\n\t{\"fieldname\": \"_liked_by\", \"fieldtype\": \"Data\", \"label\": _lt(\"Liked By\")},\n\t{\"fieldname\": \"_comments\", \"fieldtype\": \"Text\", \"label\": _lt(\"Comments\")},\n\t{\"fieldname\": \"_assign\", \"fieldtype\": \"Text\", \"label\": _lt(\"Assigned To\")},\n\t{\"fieldname\": \"docstatus\", \"fieldtype\": \"Int\", \"label\": _lt(\"Document Status\")},\n]\n\n\ndef delete_fields(args_dict, delete=0):\n\t\"\"\"\n\tDelete a field.\n\t* Deletes record from `tabDocField`\n\t* If not single doctype: Drops column from table\n\t* If single, deletes record from `tabSingles`\n\targs_dict = { dt: [field names] }\n\t\"\"\"\n\timport frappe.utils\n\n\tfor dt in args_dict:\n\t\tfields = args_dict[dt]\n\t\tif not fields:\n\t\t\tcontinue\n\n\t\tfrappe.db.delete(\n\t\t\t\"DocField\",\n\t\t\t{\n\t\t\t\t\"parent\": dt,\n\t\t\t\t\"fieldname\": (\"in\", fields),\n\t\t\t},\n\t\t)\n\n\t\t# Delete the data/column only if delete is specified\n\t\tif not delete:\n\t\t\tcontinue\n\n\t\tif frappe.db.get_value(\"DocType\", dt, \"issingle\"):\n\t\t\tfrappe.db.delete(\n\t\t\t\t\"Singles\",\n\t\t\t\t{\n\t\t\t\t\t\"doctype\": dt,\n\t\t\t\t\t\"field\": (\"in\", fields),\n\t\t\t\t},\n\t\t\t)\n\t\telse:\n\t\t\texisting_fields = frappe.db.describe(dt)\n\t\t\texisting_fields = existing_fields and [e[0] for e in existing_fields] or []\n\t\t\tfields_need_to_delete = set(fields) & set(existing_fields)\n\t\t\tif not fields_need_to_delete:\n\t\t\t\tcontinue\n\n\t\t\tif frappe.db.db_type == \"mariadb\":\n\t\t\t\t# mariadb implicitly commits before DDL, make it explicit\n\t\t\t\tfrappe.db.commit()\n\n\t\t\tquery = \"ALTER TABLE `tab%s` \" % dt + \", \".join(\n\t\t\t\t\"DROP COLUMN `%s`\" % f for f in fields_need_to_delete\n\t\t\t)\n\t\t\tfrappe.db.sql(query)\n\n\t\tif frappe.db.db_type == \"postgres\":\n\t\t\t# commit the results to db\n\t\t\tfrappe.db.commit()\n\n\ndef get_permitted_fields(\n\tdoctype: str,\n\tparenttype: str | None = None,\n\tuser: str | None = None,\n\tpermission_type: str | None = None,\n\t*,\n\tignore_virtual=False,\n) -> list[str]:\n\tmeta = frappe.get_meta(doctype)\n\tvalid_columns = meta.get_valid_columns()\n\n\tif doctype in core_doctypes_list:\n\t\treturn valid_columns\n\n\t# DocType has only fields of type Table (Table, Table MultiSelect)\n\tif set(valid_columns).issubset(default_fields):\n\t\treturn valid_columns\n\n\tif 
permission_type is None:\n\t\tpermission_type = \"select\" if frappe.only_has_select_perm(doctype, user=user) else \"read\"\n\n\tmeta_fields = meta.default_fields.copy()\n\toptional_meta_fields = [x for x in optional_fields if x in valid_columns]\n\n\tif permitted_fields := meta.get_permitted_fieldnames(\n\t\tparenttype=parenttype,\n\t\tuser=user,\n\t\tpermission_type=permission_type,\n\t\twith_virtual_fields=not ignore_virtual,\n\t):\n\t\tif permission_type == \"select\":\n\t\t\treturn permitted_fields\n\n\t\tif meta.istable:\n\t\t\tmeta_fields.extend(child_table_fields)\n\n\t\treturn meta_fields + permitted_fields + optional_meta_fields\n\n\treturn meta_fields + optional_meta_fields\n\n\ndef is_default_field(fieldname: str) -> bool:\n\treturn fieldname in default_fields\n", "path": "frappe/model/__init__.py"}]} | 3,721 | 589 |
gh_patches_debug_2715 | rasdani/github-patches | git_diff | dotkom__onlineweb4-810 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Active feedbacks bug
Minor bug where feedbacks for which everyone has answered do not get set to inactive.
</issue>
<code>
[start of apps/feedback/mommy.py]
1 # -*- coding: utf-8 -*-
2 import datetime
3 import socket
4 import locale
5 import logging
6
7 from django.utils import timezone
8 from django.contrib.contenttypes.models import ContentType
9 from django.conf import settings
10 from django.core.mail import EmailMessage
11
12 from apps.events.models import Event, AttendanceEvent, Attendee
13 from apps.feedback.models import FeedbackRelation
14 from apps.marks.models import Mark, UserEntry
15 from apps.mommy import Task, schedule
16
17 class FeedbackMail(Task):
18
19 @staticmethod
20 def run():
21 logger = logging.getLogger("feedback")
22 logger.info("Feedback job started")
23 locale.setlocale(locale.LC_ALL, "nb_NO.UTF-8")
24 active_feedbacks = FeedbackRelation.objects.filter(active=True)
25
26 for feedback in active_feedbacks:
27 message = FeedbackMail.generate_message(feedback, logger)
28
29 if message.send:
30 EmailMessage(message.subject, unicode(message), message.committee_mail, [], message.attended_mails).send()
31 logger.info('Emails sent to: ' + str(message.attended_mails))
32
33 if message.results_message:
34 EmailMessage("Feedback resultat", message.results_message,"[email protected]", [message.committee_mail]).send()
35 logger.info('Results mail sent to :' + message.committee_mail)
36
37 @staticmethod
38 def generate_message(feedback, logger):
39 logger.info('Processing: "' + feedback.get_title() + '"')
40 today = timezone.now().date()
41 yesterday = today + datetime.timedelta(days=-1)
42 not_responded = FeedbackMail.get_users(feedback)
43 logger.info('Not responded: ' + str(not_responded))
44 message = Message()
45
46 #return if everyone has answered
47 if not not_responded:
48 logger.info('Everyone has answered')
49 return message
50
51
52 message.attended_mails = FeedbackMail.get_user_mails(not_responded)
53
54 message.committee_mail = FeedbackMail.get_committee_email(feedback)
55 deadline = feedback.deadline.strftime("%d. %B").encode("utf-8")
56 title = str(FeedbackMail.get_title(feedback)).encode("utf-8")
57 message.link = str(u"\n\n" + FeedbackMail.get_link(feedback)).encode("utf-8")
58 results_link = str(FeedbackMail.get_link(feedback) + "results").encode("utf-8")
59
60 start_date = feedback.get_start_date()
61 deadline_diff = (feedback.deadline - today).days
62
63 message.subject = u"Feedback: %s" % (title)
64 message.intro = u"Hei, vi ønsker tilbakemelding på \"%s\"" % (title)
65 message.mark = FeedbackMail.mark_message(feedback)
66 message.contact = u"\n\nEventuelle spørsmål sendes til %s " % (message.committee_mail)
67 message.start_date = FeedbackMail.start_date_message(start_date)
68
69 if deadline_diff < 0: #Deadline passed
70 feedback.active = False
71 feedback.save()
72 logger.info("Deadline passed feedback set to inactive")
73
74 if feedback.gives_mark:
75 FeedbackMail.set_marks(title, not_responded)
76
77 message.intro = u"Fristen for å svare på \"%s\" har gått ut og du har fått en prikk." % (title)
78 message.mark = ""
79 message.start_date = ""
80 message.link = ""
81 message.send = True
82
83 logger.info("Marks given to: " + str(not_responded))
84
85 elif deadline_diff < 1: #Last warning
86 message.deadline = u"\n\nI dag innen 23:59 er siste frist til å svare på skjemaet."
87
88 message.results_message = u"Hei, siste purremail på feedback skjema har blitt sendt til alle " \
89 u"gjenværende deltagere på \"%s\".\nDere kan se feedback-resultatene på:\n%s\n" % \
90 (title, results_link)
91 message.send = True
92 logger.info("Last warning message generated")
93 elif deadline_diff < 3 and feedback.gives_mark: # 3 days from the deadline
94 message.deadline = u"\n\nFristen for å svare på skjema er %s innen kl 23:59." % (deadline)
95 message.send = True
96 logger.info("Warning message generated")
97 elif FeedbackMail.send_first_notification(feedback): #Day after the event or feedback creation
98 message.deadline = u"\n\nFristen for å svare på skjema er %s innen kl 23:59." % (deadline)
99
100 message.results_message = u"Hei, nå har feedbackmail blitt sendt til alle " \
101 u"deltagere på \"%s\".\nDere kan se feedback-resultatene på:\n%s\n" % \
102 (title, results_link)
103 message.send = True
104 logger.info("First message generated")
105 else:
106 logger.info("No message generated")
107
108 return message
109
110 @staticmethod
111 def send_first_notification(feedback):
112 start_date = FeedbackMail.start_date(feedback)
113
114 #The object that requires feedback doesnt have a start date
115 if not start_date:
116 yesterday = timezone.now().date() - datetime.timedelta(days=1)
117 if feedback.created_date == yesterday.date():
118 #Send the first notification the day after the feedback relation was created
119 return True
120 else:
121 day_after_event = start_date + datetime.timedelta(1)
122 if day_after_event == datetime.datetime.date(timezone.now()):
123 #Send the first notification the day after the event
124 return True
125 return False
126
127 @staticmethod
128 def start_date(feedback):
129 start_date = feedback.get_start_date()
130
131 if start_date:
132 return start_date.date()
133 else:
134 return False
135
136 @staticmethod
137 def start_date_message(start_date):
138 #If the object(event) doesnt have start date it will send
139 #the first notification the day after the feedbackrelation is made
140 if start_date:
141 start_date_string = start_date.strftime("%d. %B").encode("utf-8")
142 message_start_date = u"som du var med på den %s:" % (start_date_string)
143 else:
144 message_start_date = ""
145
146 return message_start_date
147
148 @staticmethod
149 def get_users(feedback):
150 return feedback.get_slackers()
151
152 @staticmethod
153 def get_user_mails(not_responded):
154 return [user.email for user in not_responded]
155
156 @staticmethod
157 def get_link(feedback):
158 return str(settings.BASE_URL + feedback.get_absolute_url())
159
160 @staticmethod
161 def get_title(feedback):
162 return feedback.get_title()
163
164 @staticmethod
165 def get_committee_email(feedback):
166 return feedback.get_email()
167
168 @staticmethod
169 def mark_message(feedback):
170 if feedback.gives_mark:
171 return u"\nVær oppmerksom på at du får prikk dersom du ikke svarer " \
172 u"på disse spørsmålene innen fristen."
173 else:
174 return ""
175
176 @staticmethod
177 def set_marks(title, not_responded):
178 mark = Mark()
179 mark.title = u"Manglende tilbakemelding på %s" % (title)
180 mark.category = 4 #Missed feedback
181 mark.description = u"Du har fått en prikk fordi du ikke har levert tilbakemelding."
182 mark.save()
183
184 for user in not_responded:
185 user_entry = UserEntry()
186 user_entry.user = user
187 user_entry.mark = mark
188 user_entry.save()
189
190 class Message():
191 subject = ""
192 intro = ""
193 start_date = ""
194 deadline = ""
195 mark = ""
196 contact = ""
197 link = ""
198 send = False
199 end = u"\n\nMvh\nLinjeforeningen Online"
200 results_message = False
201
202 committee_mail = ""
203 attended_mails = False
204
205
206 def __unicode__(self):
207 message = "%s %s %s %s %s %s %s" % (
208 self.intro,
209 self.start_date,
210 self.link,
211 self.deadline,
212 self.mark,
213 self.contact,
214 self.end)
215 return message
216
217 schedule.register(FeedbackMail, day_of_week='mon-sun', hour=8, minute=00)
218
[end of apps/feedback/mommy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/apps/feedback/mommy.py b/apps/feedback/mommy.py
--- a/apps/feedback/mommy.py
+++ b/apps/feedback/mommy.py
@@ -45,7 +45,10 @@
#return if everyone has answered
if not not_responded:
+ feedback.active = False
+ feedback.save()
logger.info('Everyone has answered')
+ logger.info('Feedback set to innactive')
return message
| {"golden_diff": "diff --git a/apps/feedback/mommy.py b/apps/feedback/mommy.py\n--- a/apps/feedback/mommy.py\n+++ b/apps/feedback/mommy.py\n@@ -45,7 +45,10 @@\n \n #return if everyone has answered\n if not not_responded:\n+ feedback.active = False\n+ feedback.save()\n logger.info('Everyone has answered')\n+ logger.info('Feedback set to innactive')\n return message\n", "issue": "Active feedbacks bug\nMinor bug where feedbacks where everyone answers does not get set to inactive.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport datetime\nimport socket\nimport locale\nimport logging\n\nfrom django.utils import timezone\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.conf import settings\nfrom django.core.mail import EmailMessage\n\nfrom apps.events.models import Event, AttendanceEvent, Attendee\nfrom apps.feedback.models import FeedbackRelation\nfrom apps.marks.models import Mark, UserEntry\nfrom apps.mommy import Task, schedule\n\nclass FeedbackMail(Task):\n\n @staticmethod\n def run():\n logger = logging.getLogger(\"feedback\")\n logger.info(\"Feedback job started\")\n locale.setlocale(locale.LC_ALL, \"nb_NO.UTF-8\")\n active_feedbacks = FeedbackRelation.objects.filter(active=True)\n \n for feedback in active_feedbacks:\n message = FeedbackMail.generate_message(feedback, logger)\n\n if message.send:\n EmailMessage(message.subject, unicode(message), message.committee_mail, [], message.attended_mails).send()\n logger.info('Emails sent to: ' + str(message.attended_mails))\n\n if message.results_message:\n EmailMessage(\"Feedback resultat\", message.results_message,\"[email protected]\", [message.committee_mail]).send() \n logger.info('Results mail sent to :' + message.committee_mail)\n\n @staticmethod\n def generate_message(feedback, logger):\n logger.info('Processing: \"' + feedback.get_title() + '\"')\n today = timezone.now().date()\n yesterday = today + datetime.timedelta(days=-1)\n not_responded = FeedbackMail.get_users(feedback)\n logger.info('Not responded: ' + str(not_responded))\n message = Message()\n\n #return if everyone has answered\n if not not_responded:\n logger.info('Everyone has answered')\n return message\n\n \n message.attended_mails = FeedbackMail.get_user_mails(not_responded)\n\n message.committee_mail = FeedbackMail.get_committee_email(feedback)\n deadline = feedback.deadline.strftime(\"%d. 
%B\").encode(\"utf-8\")\n title = str(FeedbackMail.get_title(feedback)).encode(\"utf-8\")\n message.link = str(u\"\\n\\n\" + FeedbackMail.get_link(feedback)).encode(\"utf-8\")\n results_link = str(FeedbackMail.get_link(feedback) + \"results\").encode(\"utf-8\")\n \n start_date = feedback.get_start_date()\n deadline_diff = (feedback.deadline - today).days\n\n message.subject = u\"Feedback: %s\" % (title)\n message.intro = u\"Hei, vi \u00f8nsker tilbakemelding p\u00e5 \\\"%s\\\"\" % (title)\n message.mark = FeedbackMail.mark_message(feedback)\n message.contact = u\"\\n\\nEventuelle sp\u00f8rsm\u00e5l sendes til %s \" % (message.committee_mail)\n message.start_date = FeedbackMail.start_date_message(start_date)\n\n if deadline_diff < 0: #Deadline passed\n feedback.active = False\n feedback.save()\n logger.info(\"Deadline passed feedback set to inactive\")\n\n if feedback.gives_mark:\n FeedbackMail.set_marks(title, not_responded) \n \n message.intro = u\"Fristen for \u00e5 svare p\u00e5 \\\"%s\\\" har g\u00e5tt ut og du har f\u00e5tt en prikk.\" % (title)\n message.mark = \"\"\n message.start_date = \"\"\n message.link = \"\"\n message.send = True\n \n logger.info(\"Marks given to: \" + str(not_responded))\n\n elif deadline_diff < 1: #Last warning\n message.deadline = u\"\\n\\nI dag innen 23:59 er siste frist til \u00e5 svare p\u00e5 skjemaet.\"\n \n message.results_message = u\"Hei, siste purremail p\u00e5 feedback skjema har blitt sendt til alle \" \\\n u\"gjenv\u00e6rende deltagere p\u00e5 \\\"%s\\\".\\nDere kan se feedback-resultatene p\u00e5:\\n%s\\n\" % \\\n (title, results_link)\n message.send = True\n logger.info(\"Last warning message generated\")\n elif deadline_diff < 3 and feedback.gives_mark: # 3 days from the deadline\n message.deadline = u\"\\n\\nFristen for \u00e5 svare p\u00e5 skjema er %s innen kl 23:59.\" % (deadline)\n message.send = True\n logger.info(\"Warning message generated\")\n elif FeedbackMail.send_first_notification(feedback): #Day after the event or feedback creation \n message.deadline = u\"\\n\\nFristen for \u00e5 svare p\u00e5 skjema er %s innen kl 23:59.\" % (deadline)\n \n message.results_message = u\"Hei, n\u00e5 har feedbackmail blitt sendt til alle \" \\\n u\"deltagere p\u00e5 \\\"%s\\\".\\nDere kan se feedback-resultatene p\u00e5:\\n%s\\n\" % \\\n (title, results_link)\n message.send = True\n logger.info(\"First message generated\")\n else:\n logger.info(\"No message generated\")\n\n return message\n \n @staticmethod\n def send_first_notification(feedback):\n start_date = FeedbackMail.start_date(feedback)\n\n #The object that requires feedback doesnt have a start date\n if not start_date:\n yesterday = timezone.now().date() - datetime.timedelta(days=1)\n if feedback.created_date == yesterday.date():\n #Send the first notification the day after the feedback relation was created\n return True\n else:\n day_after_event = start_date + datetime.timedelta(1)\n if day_after_event == datetime.datetime.date(timezone.now()):\n #Send the first notification the day after the event\n return True\n return False\n\n @staticmethod\n def start_date(feedback):\n start_date = feedback.get_start_date()\n \n if start_date:\n return start_date.date()\n else:\n return False\n\n @staticmethod\n def start_date_message(start_date):\n #If the object(event) doesnt have start date it will send \n #the first notification the day after the feedbackrelation is made\n if start_date:\n start_date_string = start_date.strftime(\"%d. 
%B\").encode(\"utf-8\")\n message_start_date = u\"som du var med p\u00e5 den %s:\" % (start_date_string)\n else:\n message_start_date = \"\"\n \n return message_start_date \n\n @staticmethod\n def get_users(feedback):\n return feedback.get_slackers()\n\n @staticmethod\n def get_user_mails(not_responded):\n return [user.email for user in not_responded]\n\n @staticmethod\n def get_link(feedback):\n return str(settings.BASE_URL + feedback.get_absolute_url())\n\n @staticmethod\n def get_title(feedback):\n return feedback.get_title()\n\n @staticmethod\n def get_committee_email(feedback):\n return feedback.get_email()\n\n @staticmethod\n def mark_message(feedback):\n if feedback.gives_mark:\n return u\"\\nV\u00e6r oppmerksom p\u00e5 at du f\u00e5r prikk dersom du ikke svarer \" \\\n u\"p\u00e5 disse sp\u00f8rsm\u00e5lene innen fristen.\"\n else:\n return \"\"\n\n @staticmethod\n def set_marks(title, not_responded):\n mark = Mark()\n mark.title = u\"Manglende tilbakemelding p\u00e5 %s\" % (title)\n mark.category = 4 #Missed feedback\n mark.description = u\"Du har f\u00e5tt en prikk fordi du ikke har levert tilbakemelding.\"\n mark.save()\n \n for user in not_responded:\n user_entry = UserEntry()\n user_entry.user = user\n user_entry.mark = mark\n user_entry.save()\n \nclass Message():\n subject = \"\"\n intro = \"\"\n start_date = \"\"\n deadline = \"\"\n mark = \"\"\n contact = \"\"\n link = \"\"\n send = False\n end = u\"\\n\\nMvh\\nLinjeforeningen Online\"\n results_message = False\n\n committee_mail = \"\"\n attended_mails = False\n\n\n def __unicode__(self):\n message = \"%s %s %s %s %s %s %s\" % (\n self.intro, \n self.start_date, \n self.link, \n self.deadline, \n self.mark, \n self.contact, \n self.end)\n return message\n\nschedule.register(FeedbackMail, day_of_week='mon-sun', hour=8, minute=00)\n", "path": "apps/feedback/mommy.py"}]} | 2,937 | 105 |
gh_patches_debug_31710 | rasdani/github-patches | git_diff | pypa__pip-8474 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Way to clear items from pip cache of specified age.
I use pip a lot and had never considered anything about caching, and find I have a 1.7 GB pip cache.
It would be useful if there was a command that could clear it of items beyond a specified age.
That way I could create a script to run every day to delete anything in the pip cache that is older than a month (and do the same for unrelated things like yarn etc).
</issue>
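What the report is asking for is age-based pruning of the cache. A minimal standalone sketch of that kind of cleanup, working on the filesystem directly rather than through pip's internals (the cache path used below is an assumption for illustration; `pip cache dir` prints the real location), could look like this:

```
import time
from pathlib import Path

def prune_cache(cache_dir: str, max_age_days: int = 30) -> int:
    """Delete cached files older than max_age_days; return how many were removed.

    Sketch only: selects files by modification time, not pip's own implementation.
    """
    cutoff = time.time() - max_age_days * 86400
    removed = 0
    for path in Path(cache_dir).expanduser().rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed += 1
    return removed

if __name__ == "__main__":
    # The path is an assumption; check `pip cache dir` for the actual cache location.
    print(prune_cache("~/.cache/pip/wheels", max_age_days=30))
```

In the `cache.py` listing below, an age filter of this sort would slot in naturally next to the glob matching in `remove_cache_items`, since `_find_wheels` already returns concrete file paths whose modification times can be checked before unlinking.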
<code>
[start of src/pip/_internal/commands/cache.py]
1 from __future__ import absolute_import
2
3 import logging
4 import os
5 import textwrap
6
7 import pip._internal.utils.filesystem as filesystem
8 from pip._internal.cli.base_command import Command
9 from pip._internal.cli.status_codes import ERROR, SUCCESS
10 from pip._internal.exceptions import CommandError, PipError
11 from pip._internal.utils.typing import MYPY_CHECK_RUNNING
12
13 if MYPY_CHECK_RUNNING:
14 from optparse import Values
15 from typing import Any, List
16
17
18 logger = logging.getLogger(__name__)
19
20
21 class CacheCommand(Command):
22 """
23 Inspect and manage pip's wheel cache.
24
25 Subcommands:
26
27 - dir: Show the cache directory.
28 - info: Show information about the cache.
29 - list: List filenames of packages stored in the cache.
30 - remove: Remove one or more package from the cache.
31 - purge: Remove all items from the cache.
32
33 ``<pattern>`` can be a glob expression or a package name.
34 """
35
36 ignore_require_venv = True
37 usage = """
38 %prog dir
39 %prog info
40 %prog list [<pattern>]
41 %prog remove <pattern>
42 %prog purge
43 """
44
45 def run(self, options, args):
46 # type: (Values, List[Any]) -> int
47 handlers = {
48 "dir": self.get_cache_dir,
49 "info": self.get_cache_info,
50 "list": self.list_cache_items,
51 "remove": self.remove_cache_items,
52 "purge": self.purge_cache,
53 }
54
55 if not options.cache_dir:
56 logger.error("pip cache commands can not "
57 "function since cache is disabled.")
58 return ERROR
59
60 # Determine action
61 if not args or args[0] not in handlers:
62 logger.error(
63 "Need an action (%s) to perform.",
64 ", ".join(sorted(handlers)),
65 )
66 return ERROR
67
68 action = args[0]
69
70 # Error handling happens here, not in the action-handlers.
71 try:
72 handlers[action](options, args[1:])
73 except PipError as e:
74 logger.error(e.args[0])
75 return ERROR
76
77 return SUCCESS
78
79 def get_cache_dir(self, options, args):
80 # type: (Values, List[Any]) -> None
81 if args:
82 raise CommandError('Too many arguments')
83
84 logger.info(options.cache_dir)
85
86 def get_cache_info(self, options, args):
87 # type: (Values, List[Any]) -> None
88 if args:
89 raise CommandError('Too many arguments')
90
91 num_packages = len(self._find_wheels(options, '*'))
92
93 cache_location = self._wheels_cache_dir(options)
94 cache_size = filesystem.format_directory_size(cache_location)
95
96 message = textwrap.dedent("""
97 Location: {location}
98 Size: {size}
99 Number of wheels: {package_count}
100 """).format(
101 location=cache_location,
102 package_count=num_packages,
103 size=cache_size,
104 ).strip()
105
106 logger.info(message)
107
108 def list_cache_items(self, options, args):
109 # type: (Values, List[Any]) -> None
110 if len(args) > 1:
111 raise CommandError('Too many arguments')
112
113 if args:
114 pattern = args[0]
115 else:
116 pattern = '*'
117
118 files = self._find_wheels(options, pattern)
119
120 if not files:
121 logger.info('Nothing cached.')
122 return
123
124 results = []
125 for filename in files:
126 wheel = os.path.basename(filename)
127 size = filesystem.format_file_size(filename)
128 results.append(' - {} ({})'.format(wheel, size))
129 logger.info('Cache contents:\n')
130 logger.info('\n'.join(sorted(results)))
131
132 def remove_cache_items(self, options, args):
133 # type: (Values, List[Any]) -> None
134 if len(args) > 1:
135 raise CommandError('Too many arguments')
136
137 if not args:
138 raise CommandError('Please provide a pattern')
139
140 files = self._find_wheels(options, args[0])
141 if not files:
142 raise CommandError('No matching packages')
143
144 for filename in files:
145 os.unlink(filename)
146 logger.debug('Removed %s', filename)
147 logger.info('Files removed: %s', len(files))
148
149 def purge_cache(self, options, args):
150 # type: (Values, List[Any]) -> None
151 if args:
152 raise CommandError('Too many arguments')
153
154 return self.remove_cache_items(options, ['*'])
155
156 def _wheels_cache_dir(self, options):
157 # type: (Values) -> str
158 return os.path.join(options.cache_dir, 'wheels')
159
160 def _find_wheels(self, options, pattern):
161 # type: (Values, str) -> List[str]
162 wheel_dir = self._wheels_cache_dir(options)
163
164 # The wheel filename format, as specified in PEP 427, is:
165 # {distribution}-{version}(-{build})?-{python}-{abi}-{platform}.whl
166 #
167 # Additionally, non-alphanumeric values in the distribution are
168 # normalized to underscores (_), meaning hyphens can never occur
169 # before `-{version}`.
170 #
171 # Given that information:
172 # - If the pattern we're given contains a hyphen (-), the user is
173 # providing at least the version. Thus, we can just append `*.whl`
174 # to match the rest of it.
175 # - If the pattern we're given doesn't contain a hyphen (-), the
176 # user is only providing the name. Thus, we append `-*.whl` to
177 # match the hyphen before the version, followed by anything else.
178 #
179 # PEP 427: https://www.python.org/dev/peps/pep-0427/
180 pattern = pattern + ("*.whl" if "-" in pattern else "-*.whl")
181
182 return filesystem.find_files(wheel_dir, pattern)
183
[end of src/pip/_internal/commands/cache.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/pip/_internal/commands/cache.py b/src/pip/_internal/commands/cache.py
--- a/src/pip/_internal/commands/cache.py
+++ b/src/pip/_internal/commands/cache.py
@@ -37,11 +37,25 @@
usage = """
%prog dir
%prog info
- %prog list [<pattern>]
+ %prog list [<pattern>] [--format=[human, abspath]]
%prog remove <pattern>
%prog purge
"""
+ def add_options(self):
+ # type: () -> None
+
+ self.cmd_opts.add_option(
+ '--format',
+ action='store',
+ dest='list_format',
+ default="human",
+ choices=('human', 'abspath'),
+ help="Select the output format among: human (default) or abspath"
+ )
+
+ self.parser.insert_option_group(0, self.cmd_opts)
+
def run(self, options, args):
# type: (Values, List[Any]) -> int
handlers = {
@@ -116,7 +130,13 @@
pattern = '*'
files = self._find_wheels(options, pattern)
+ if options.list_format == 'human':
+ self.format_for_human(files)
+ else:
+ self.format_for_abspath(files)
+ def format_for_human(self, files):
+ # type: (List[str]) -> None
if not files:
logger.info('Nothing cached.')
return
@@ -129,6 +149,17 @@
logger.info('Cache contents:\n')
logger.info('\n'.join(sorted(results)))
+ def format_for_abspath(self, files):
+ # type: (List[str]) -> None
+ if not files:
+ return
+
+ results = []
+ for filename in files:
+ results.append(filename)
+
+ logger.info('\n'.join(sorted(results)))
+
def remove_cache_items(self, options, args):
# type: (Values, List[Any]) -> None
if len(args) > 1:
| {"golden_diff": "diff --git a/src/pip/_internal/commands/cache.py b/src/pip/_internal/commands/cache.py\n--- a/src/pip/_internal/commands/cache.py\n+++ b/src/pip/_internal/commands/cache.py\n@@ -37,11 +37,25 @@\n usage = \"\"\"\n %prog dir\n %prog info\n- %prog list [<pattern>]\n+ %prog list [<pattern>] [--format=[human, abspath]]\n %prog remove <pattern>\n %prog purge\n \"\"\"\n \n+ def add_options(self):\n+ # type: () -> None\n+\n+ self.cmd_opts.add_option(\n+ '--format',\n+ action='store',\n+ dest='list_format',\n+ default=\"human\",\n+ choices=('human', 'abspath'),\n+ help=\"Select the output format among: human (default) or abspath\"\n+ )\n+\n+ self.parser.insert_option_group(0, self.cmd_opts)\n+\n def run(self, options, args):\n # type: (Values, List[Any]) -> int\n handlers = {\n@@ -116,7 +130,13 @@\n pattern = '*'\n \n files = self._find_wheels(options, pattern)\n+ if options.list_format == 'human':\n+ self.format_for_human(files)\n+ else:\n+ self.format_for_abspath(files)\n \n+ def format_for_human(self, files):\n+ # type: (List[str]) -> None\n if not files:\n logger.info('Nothing cached.')\n return\n@@ -129,6 +149,17 @@\n logger.info('Cache contents:\\n')\n logger.info('\\n'.join(sorted(results)))\n \n+ def format_for_abspath(self, files):\n+ # type: (List[str]) -> None\n+ if not files:\n+ return\n+\n+ results = []\n+ for filename in files:\n+ results.append(filename)\n+\n+ logger.info('\\n'.join(sorted(results)))\n+\n def remove_cache_items(self, options, args):\n # type: (Values, List[Any]) -> None\n if len(args) > 1:\n", "issue": "Way to clear items from pip cache of specified age.\nI use pip a lot and had never considered anything about caching, and find I have a 1.7gb pip cache.\r\n\r\nIt would be useful if there was a command that could clear it of items beyond a specified age.\r\n\r\nThat way I could could create a script to run every day to delete anything in pip that is older than a month (and to do the same for unrelated things like yarn etc).\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport logging\nimport os\nimport textwrap\n\nimport pip._internal.utils.filesystem as filesystem\nfrom pip._internal.cli.base_command import Command\nfrom pip._internal.cli.status_codes import ERROR, SUCCESS\nfrom pip._internal.exceptions import CommandError, PipError\nfrom pip._internal.utils.typing import MYPY_CHECK_RUNNING\n\nif MYPY_CHECK_RUNNING:\n from optparse import Values\n from typing import Any, List\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CacheCommand(Command):\n \"\"\"\n Inspect and manage pip's wheel cache.\n\n Subcommands:\n\n - dir: Show the cache directory.\n - info: Show information about the cache.\n - list: List filenames of packages stored in the cache.\n - remove: Remove one or more package from the cache.\n - purge: Remove all items from the cache.\n\n ``<pattern>`` can be a glob expression or a package name.\n \"\"\"\n\n ignore_require_venv = True\n usage = \"\"\"\n %prog dir\n %prog info\n %prog list [<pattern>]\n %prog remove <pattern>\n %prog purge\n \"\"\"\n\n def run(self, options, args):\n # type: (Values, List[Any]) -> int\n handlers = {\n \"dir\": self.get_cache_dir,\n \"info\": self.get_cache_info,\n \"list\": self.list_cache_items,\n \"remove\": self.remove_cache_items,\n \"purge\": self.purge_cache,\n }\n\n if not options.cache_dir:\n logger.error(\"pip cache commands can not \"\n \"function since cache is disabled.\")\n return ERROR\n\n # Determine action\n if not args or args[0] not in handlers:\n 
logger.error(\n \"Need an action (%s) to perform.\",\n \", \".join(sorted(handlers)),\n )\n return ERROR\n\n action = args[0]\n\n # Error handling happens here, not in the action-handlers.\n try:\n handlers[action](options, args[1:])\n except PipError as e:\n logger.error(e.args[0])\n return ERROR\n\n return SUCCESS\n\n def get_cache_dir(self, options, args):\n # type: (Values, List[Any]) -> None\n if args:\n raise CommandError('Too many arguments')\n\n logger.info(options.cache_dir)\n\n def get_cache_info(self, options, args):\n # type: (Values, List[Any]) -> None\n if args:\n raise CommandError('Too many arguments')\n\n num_packages = len(self._find_wheels(options, '*'))\n\n cache_location = self._wheels_cache_dir(options)\n cache_size = filesystem.format_directory_size(cache_location)\n\n message = textwrap.dedent(\"\"\"\n Location: {location}\n Size: {size}\n Number of wheels: {package_count}\n \"\"\").format(\n location=cache_location,\n package_count=num_packages,\n size=cache_size,\n ).strip()\n\n logger.info(message)\n\n def list_cache_items(self, options, args):\n # type: (Values, List[Any]) -> None\n if len(args) > 1:\n raise CommandError('Too many arguments')\n\n if args:\n pattern = args[0]\n else:\n pattern = '*'\n\n files = self._find_wheels(options, pattern)\n\n if not files:\n logger.info('Nothing cached.')\n return\n\n results = []\n for filename in files:\n wheel = os.path.basename(filename)\n size = filesystem.format_file_size(filename)\n results.append(' - {} ({})'.format(wheel, size))\n logger.info('Cache contents:\\n')\n logger.info('\\n'.join(sorted(results)))\n\n def remove_cache_items(self, options, args):\n # type: (Values, List[Any]) -> None\n if len(args) > 1:\n raise CommandError('Too many arguments')\n\n if not args:\n raise CommandError('Please provide a pattern')\n\n files = self._find_wheels(options, args[0])\n if not files:\n raise CommandError('No matching packages')\n\n for filename in files:\n os.unlink(filename)\n logger.debug('Removed %s', filename)\n logger.info('Files removed: %s', len(files))\n\n def purge_cache(self, options, args):\n # type: (Values, List[Any]) -> None\n if args:\n raise CommandError('Too many arguments')\n\n return self.remove_cache_items(options, ['*'])\n\n def _wheels_cache_dir(self, options):\n # type: (Values) -> str\n return os.path.join(options.cache_dir, 'wheels')\n\n def _find_wheels(self, options, pattern):\n # type: (Values, str) -> List[str]\n wheel_dir = self._wheels_cache_dir(options)\n\n # The wheel filename format, as specified in PEP 427, is:\n # {distribution}-{version}(-{build})?-{python}-{abi}-{platform}.whl\n #\n # Additionally, non-alphanumeric values in the distribution are\n # normalized to underscores (_), meaning hyphens can never occur\n # before `-{version}`.\n #\n # Given that information:\n # - If the pattern we're given contains a hyphen (-), the user is\n # providing at least the version. Thus, we can just append `*.whl`\n # to match the rest of it.\n # - If the pattern we're given doesn't contain a hyphen (-), the\n # user is only providing the name. Thus, we append `-*.whl` to\n # match the hyphen before the version, followed by anything else.\n #\n # PEP 427: https://www.python.org/dev/peps/pep-0427/\n pattern = pattern + (\"*.whl\" if \"-\" in pattern else \"-*.whl\")\n\n return filesystem.find_files(wheel_dir, pattern)\n", "path": "src/pip/_internal/commands/cache.py"}]} | 2,387 | 473 |
gh_patches_debug_416 | rasdani/github-patches | git_diff | automl__auto-sklearn-1361 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Check if test requirement `flaky` can be removed
We currently have a test dependency [flaky](https://pypi.org/project/flaky/) used to annotate a test `KernelPCAComponentTest::test_default_configuration_classify()`. This is the only place it's used.
</issue>
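
For readers who have not used the package, `flaky` is a decorator that, under a test runner that supports it (such as pytest), reruns a failing test a few times before reporting it as failed. The snippet below shows what such an annotation typically looks like; the test body is a made-up placeholder, not the real auto-sklearn KernelPCA check.

```python
# Illustration of how the flaky test dependency is typically used; the
# assertion here is a placeholder standing in for the real KernelPCA check.
import random
import unittest

from flaky import flaky


class KernelPCAComponentTest(unittest.TestCase):
    @flaky(max_runs=3, min_passes=1)  # retry up to three times before failing
    def test_default_configuration_classify(self):
        self.assertGreater(random.random(), 0.05)
```

Removing the dependency therefore also means either deleting this kind of decorator or replacing it with another retry mechanism.
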
<code>
[start of setup.py]
1 # -*- encoding: utf-8 -*-
2 import os
3 import sys
4 from setuptools import setup, find_packages
5
6
7 # Check if Auto-sklearn *could* run on the given system
8 if os.name != 'posix':
9 raise ValueError(
10 'Detected unsupported operating system: %s. Please check '
11 'the compability information of auto-sklearn: https://automl.github.io'
12 '/auto-sklearn/master/installation.html#windows-osx-compatibility' %
13 sys.platform
14 )
15
16 if sys.version_info < (3, 7):
17 raise ValueError(
18 'Unsupported Python version %d.%d.%d found. Auto-sklearn requires Python '
19 '3.7 or higher.' % (sys.version_info.major, sys.version_info.minor, sys.version_info.micro)
20 )
21
22 HERE = os.path.abspath(os.path.dirname(__file__))
23 with open(os.path.join(HERE, 'requirements.txt')) as fp:
24 install_reqs = [r.rstrip() for r in fp.readlines()
25 if not r.startswith('#') and not r.startswith('git+')]
26
27 extras_reqs={
28 "test": [
29 "pytest>=4.6",
30 "mypy",
31 "pytest-xdist",
32 "pytest-timeout",
33 "flaky",
34 "openml",
35 "pre-commit",
36 "pytest-cov",
37 ],
38 "examples": [
39 "matplotlib",
40 "jupyter",
41 "notebook",
42 "seaborn",
43 ],
44 "docs": [
45 "sphinx<4.3",
46 "sphinx-gallery",
47 "sphinx_bootstrap_theme",
48 "numpydoc",
49 "sphinx_toolbox",
50 "docutils==0.16"
51 ],
52 }
53
54 with open(os.path.join(HERE, 'autosklearn', '__version__.py')) as fh:
55 version = fh.readlines()[-1].split()[-1].strip("\"'")
56
57
58 with open(os.path.join(HERE, 'README.md')) as fh:
59 long_description = fh.read()
60
61
62 setup(
63 name='auto-sklearn',
64 author='Matthias Feurer',
65 author_email='[email protected]',
66 description='Automated machine learning.',
67 long_description=long_description,
68 long_description_content_type='text/markdown',
69 version=version,
70 packages=find_packages(exclude=['test', 'scripts', 'examples']),
71 extras_require=extras_reqs,
72 install_requires=install_reqs,
73 include_package_data=True,
74 license='BSD3',
75 platforms=['Linux'],
76 classifiers=[
77 "Environment :: Console",
78 "Intended Audience :: Developers",
79 "Intended Audience :: Education",
80 "Intended Audience :: Science/Research",
81 "Intended Audience :: Information Technology",
82 "License :: OSI Approved :: BSD License",
83 "Natural Language :: English",
84 "Operating System :: OS Independent",
85 "Topic :: Scientific/Engineering :: Artificial Intelligence",
86 "Topic :: Scientific/Engineering :: Information Analysis",
87 'Programming Language :: Python :: 3.7',
88 'Programming Language :: Python :: 3.8',
89 'Programming Language :: Python :: 3.9',
90 ],
91 python_requires='>=3.7',
92 url='https://automl.github.io/auto-sklearn',
93 )
94
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -30,7 +30,6 @@
"mypy",
"pytest-xdist",
"pytest-timeout",
- "flaky",
"openml",
"pre-commit",
"pytest-cov",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -30,7 +30,6 @@\n \"mypy\",\n \"pytest-xdist\",\n \"pytest-timeout\",\n- \"flaky\",\n \"openml\",\n \"pre-commit\",\n \"pytest-cov\",\n", "issue": "Check if test requirement `flaky` can be removed\nWe currently have a test dependancy [flaky](https://pypi.org/project/flaky/) used to annotate a test `KernelPCAComponentTest::test_default_configuration_classify()`. This is the only place it's used.\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\nimport os\nimport sys\nfrom setuptools import setup, find_packages\n\n\n# Check if Auto-sklearn *could* run on the given system\nif os.name != 'posix':\n raise ValueError(\n 'Detected unsupported operating system: %s. Please check '\n 'the compability information of auto-sklearn: https://automl.github.io'\n '/auto-sklearn/master/installation.html#windows-osx-compatibility' %\n sys.platform\n )\n\nif sys.version_info < (3, 7):\n raise ValueError(\n 'Unsupported Python version %d.%d.%d found. Auto-sklearn requires Python '\n '3.7 or higher.' % (sys.version_info.major, sys.version_info.minor, sys.version_info.micro)\n )\n\nHERE = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(HERE, 'requirements.txt')) as fp:\n install_reqs = [r.rstrip() for r in fp.readlines()\n if not r.startswith('#') and not r.startswith('git+')]\n\nextras_reqs={\n \"test\": [\n \"pytest>=4.6\",\n \"mypy\",\n \"pytest-xdist\",\n \"pytest-timeout\",\n \"flaky\",\n \"openml\",\n \"pre-commit\",\n \"pytest-cov\",\n ],\n \"examples\": [\n \"matplotlib\",\n \"jupyter\",\n \"notebook\",\n \"seaborn\",\n ],\n \"docs\": [\n \"sphinx<4.3\",\n \"sphinx-gallery\",\n \"sphinx_bootstrap_theme\",\n \"numpydoc\",\n \"sphinx_toolbox\",\n \"docutils==0.16\"\n ],\n}\n\nwith open(os.path.join(HERE, 'autosklearn', '__version__.py')) as fh:\n version = fh.readlines()[-1].split()[-1].strip(\"\\\"'\")\n\n\nwith open(os.path.join(HERE, 'README.md')) as fh:\n long_description = fh.read()\n\n\nsetup(\n name='auto-sklearn',\n author='Matthias Feurer',\n author_email='[email protected]',\n description='Automated machine learning.',\n long_description=long_description,\n long_description_content_type='text/markdown',\n version=version,\n packages=find_packages(exclude=['test', 'scripts', 'examples']),\n extras_require=extras_reqs,\n install_requires=install_reqs,\n include_package_data=True,\n license='BSD3',\n platforms=['Linux'],\n classifiers=[\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Information Technology\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n python_requires='>=3.7',\n url='https://automl.github.io/auto-sklearn',\n)\n", "path": "setup.py"}]} | 1,459 | 71 |
gh_patches_debug_57395 | rasdani/github-patches | git_diff | translate__pootle-3380 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Core: drop MySQL dependence on MyISAM
Core depends on MyISAM at the moment because of low-level features used for changeid tracking. We need to migrate that to a more general approach that works on InnoDB and other supported DB engines.
- [x] Make resources list work in all DB backends (#3539)
- [x] Switch revision counter to Redis (#3364)
- [x] Ensure tests run on InnoDB (#3777)
</issue>
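
The second checklist item, switching the revision counter to Redis, is the part that removes the need for MyISAM-specific behaviour. A minimal sketch of a backend-agnostic counter of that kind is shown below; it relies on Redis's atomic `INCR` and is an illustration of the approach rather than Pootle's actual implementation (the key name and connection settings are made up).

```python
# Minimal sketch of a Redis-backed revision counter. The key name, host and
# port are assumptions for the example; the point is that INCR is atomic, so
# any number of workers can bump the counter without MyISAM-style tricks.
import redis

REVISION_KEY = "pootle:revision"
client = redis.StrictRedis(host="localhost", port=6379, db=0)


def get_revision():
    """Return the current global revision, or 0 if it has never been set."""
    value = client.get(REVISION_KEY)
    return int(value) if value is not None else 0


def incr_revision():
    """Atomically bump and return the new global revision."""
    return client.incr(REVISION_KEY)
```
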
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright 2008-2013 Zuza Software Foundation
5 # Copyright 2014 Evernote Corporation
6 #
7 # This file is part of Pootle.
8 #
9 # This program is free software; you can redistribute it and/or modify
10 # it under the terms of the GNU General Public License as published by
11 # the Free Software Foundation; either version 2 of the License, or
12 # (at your option) any later version.
13 #
14 # This program is distributed in the hope that it will be useful,
15 # but WITHOUT ANY WARRANTY; without even the implied warranty of
16 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17 # GNU General Public License for more details.
18 #
19 # You should have received a copy of the GNU General Public License
20 # along with this program; if not, see <http://www.gnu.org/licenses/>.
21
22 import glob
23 import os
24 import re
25 import sys
26
27 from distutils import log
28 from distutils.command.build import build as DistutilsBuild
29 from distutils.errors import DistutilsOptionError
30
31 from setuptools import find_packages, setup
32 from setuptools.command.test import test as TestCommand
33
34 from pootle.__version__ import sver as pootle_version
35
36
37 def parse_requirements(file_name):
38 """Parses a pip requirements file and returns a list of packages.
39
40 Use the result of this function in the ``install_requires`` field.
41 Copied from cburgmer/pdfserver.
42 """
43 requirements = []
44 for line in open(file_name, 'r').read().split('\n'):
45 # Ignore comments, blank lines and included requirements files
46 if re.match(r'(\s*#)|(\s*$)|(-r .*$)', line):
47 continue
48
49 if re.match(r'\s*-e\s+', line):
50 requirements.append(re.sub(r'\s*-e\s+.*#egg=(.*)$', r'\1', line))
51 elif re.match(r'\s*-f\s+', line):
52 pass
53 else:
54 requirements.append(line)
55
56 return requirements
57
58
59 class PyTest(TestCommand):
60
61 def finalize_options(self):
62 TestCommand.finalize_options(self)
63 self.test_args = ['--tb=short', 'tests/']
64 self.test_suite = True
65
66 def run_tests(self):
67 #import here, cause outside the eggs aren't loaded
68 import pytest
69 errno = pytest.main(self.test_args)
70 sys.exit(errno)
71
72
73 class PootleBuildMo(DistutilsBuild):
74
75 description = "compile Gettext PO files into MO"
76 user_options = [
77 ('all', None,
78 "compile all language (don't use LINGUAS file)"),
79 ('lang=', 'l',
80 "specify a language to compile"),
81 ]
82 boolean_options = ['all']
83
84 po_path_base = os.path.join('pootle', 'locale')
85 _langs = []
86
87 def initialize_options(self):
88 self.all = False
89 self.lang = None
90
91 def finalize_options(self):
92 if self.all and self.lang is not None:
93 raise DistutilsOptionError(
94 "Can't use --all and --lang together"
95 )
96 if self.lang is not None:
97 self._langs = [self.lang]
98 elif self.all:
99 for lang in os.listdir(self.po_path_base):
100 if (os.path.isdir(os.path.join(self.po_path_base, lang)) and
101 lang != "templates"):
102 self._langs.append(lang)
103 else:
104 for lang in open(os.path.join('pootle', 'locale', 'LINGUAS')):
105 self._langs.append(lang.rstrip())
106
107 def build_mo(self):
108 """Compile .mo files from available .po files"""
109 import subprocess
110 import gettext
111 from translate.storage import factory
112
113 for lang in self._langs:
114 lang = lang.rstrip()
115
116 po_path = os.path.join('pootle', 'locale', lang)
117 mo_path = os.path.join('pootle', 'locale', lang, 'LC_MESSAGES')
118
119 if not os.path.exists(mo_path):
120 os.makedirs(mo_path)
121
122 for po, mo in (('pootle.po', 'django.mo'),
123 ('pootle_js.po', 'djangojs.mo')):
124 po_filename = os.path.join(po_path, po)
125 mo_filename = os.path.join(mo_path, mo)
126
127 if not os.path.exists(po_filename):
128 log.warn("%s: missing file %s", lang, po_filename)
129 continue
130
131 if not os.path.exists(mo_path):
132 os.makedirs(mo_path)
133
134 log.info("compiling %s", lang)
135 try:
136 subprocess.call([
137 'msgfmt', '--strict', '-o', mo_filename, po_filename],
138 stderr=subprocess.STDOUT)
139 except Exception as e:
140 log.warn("%s: skipping, running msgfmt failed: %s",
141 lang, e)
142
143 try:
144 store = factory.getobject(po_filename)
145 gettext.c2py(store.getheaderplural()[1])
146 except Exception:
147 log.warn("%s: invalid plural header in %s",
148 lang, po_filename)
149
150 def run(self):
151 self.build_mo()
152
153
154 setup(
155 name="Pootle",
156 version=pootle_version,
157
158 description="An online collaborative localization tool.",
159 long_description=open(
160 os.path.join(os.path.dirname(__file__), 'README.rst')
161 ).read(),
162
163 author="Translate",
164 author_email="[email protected]",
165 license="GNU General Public License (GPL)",
166 url="http://pootle.translatehouse.org",
167 download_url="http://sourceforge.net/projects/translate/files/Pootle/" + pootle_version,
168
169 install_requires=parse_requirements('requirements/base.txt'),
170 tests_require=parse_requirements('requirements/tests.txt'),
171
172 platforms=["any"],
173 classifiers=[
174 "Development Status :: 5 - Production/Stable",
175 "Environment :: Web Environment",
176 "Framework :: Django",
177 "Intended Audience :: Developers",
178 "Intended Audience :: End Users/Desktop",
179 "Intended Audience :: Information Technology",
180 "License :: OSI Approved :: GNU General Public License (GPL)",
181 "Operating System :: OS Independent",
182 "Operating System :: Microsoft :: Windows",
183 "Operating System :: Unix",
184 "Programming Language :: JavaScript",
185 "Programming Language :: Python",
186 "Topic :: Software Development :: Localization",
187 "Topic :: Text Processing :: Linguistic"
188 ],
189 zip_safe=False,
190 packages=find_packages(exclude=['deploy*']),
191 include_package_data=True,
192
193 entry_points={
194 'console_scripts': [
195 'pootle = pootle.runner:main',
196 ],
197 },
198 cmdclass={
199 'build_mo': PootleBuildMo,
200 'test': PyTest,
201 },
202 )
203
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -43,7 +43,7 @@
requirements = []
for line in open(file_name, 'r').read().split('\n'):
# Ignore comments, blank lines and included requirements files
- if re.match(r'(\s*#)|(\s*$)|(-r .*$)', line):
+ if re.match(r'(\s*#)|(\s*$)|((-r|--allow-external|--allow-unverified) .*$)', line):
continue
if re.match(r'\s*-e\s+', line):
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -43,7 +43,7 @@\n requirements = []\n for line in open(file_name, 'r').read().split('\\n'):\n # Ignore comments, blank lines and included requirements files\n- if re.match(r'(\\s*#)|(\\s*$)|(-r .*$)', line):\n+ if re.match(r'(\\s*#)|(\\s*$)|((-r|--allow-external|--allow-unverified) .*$)', line):\n continue\n \n if re.match(r'\\s*-e\\s+', line):\n", "issue": "Core: drop MySQL dependence on MyISAM\nCore depends on MyISAM at the moment because of low level features used for changeid tracking. We need to migrate that to a more general approach that works on InnoDB and other supported DB engines.\n- [x] Make resources list work in all DB backends (#3539)\n- [x] Switch revision counter to Redis (#3364)\n- [x] Ensure tests run on InnoDB (#3777)\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright 2008-2013 Zuza Software Foundation\n# Copyright 2014 Evernote Corporation\n#\n# This file is part of Pootle.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\nimport glob\nimport os\nimport re\nimport sys\n\nfrom distutils import log\nfrom distutils.command.build import build as DistutilsBuild\nfrom distutils.errors import DistutilsOptionError\n\nfrom setuptools import find_packages, setup\nfrom setuptools.command.test import test as TestCommand\n\nfrom pootle.__version__ import sver as pootle_version\n\n\ndef parse_requirements(file_name):\n \"\"\"Parses a pip requirements file and returns a list of packages.\n\n Use the result of this function in the ``install_requires`` field.\n Copied from cburgmer/pdfserver.\n \"\"\"\n requirements = []\n for line in open(file_name, 'r').read().split('\\n'):\n # Ignore comments, blank lines and included requirements files\n if re.match(r'(\\s*#)|(\\s*$)|(-r .*$)', line):\n continue\n\n if re.match(r'\\s*-e\\s+', line):\n requirements.append(re.sub(r'\\s*-e\\s+.*#egg=(.*)$', r'\\1', line))\n elif re.match(r'\\s*-f\\s+', line):\n pass\n else:\n requirements.append(line)\n\n return requirements\n\n\nclass PyTest(TestCommand):\n\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = ['--tb=short', 'tests/']\n self.test_suite = True\n\n def run_tests(self):\n #import here, cause outside the eggs aren't loaded\n import pytest\n errno = pytest.main(self.test_args)\n sys.exit(errno)\n\n\nclass PootleBuildMo(DistutilsBuild):\n\n description = \"compile Gettext PO files into MO\"\n user_options = [\n ('all', None,\n \"compile all language (don't use LINGUAS file)\"),\n ('lang=', 'l',\n \"specify a language to compile\"),\n ]\n boolean_options = ['all']\n\n po_path_base = os.path.join('pootle', 'locale')\n _langs = []\n\n def initialize_options(self):\n self.all = False\n self.lang = None\n\n def finalize_options(self):\n if self.all and self.lang is not None:\n raise DistutilsOptionError(\n \"Can't use --all and --lang 
together\"\n )\n if self.lang is not None:\n self._langs = [self.lang]\n elif self.all:\n for lang in os.listdir(self.po_path_base):\n if (os.path.isdir(os.path.join(self.po_path_base, lang)) and\n lang != \"templates\"):\n self._langs.append(lang)\n else:\n for lang in open(os.path.join('pootle', 'locale', 'LINGUAS')):\n self._langs.append(lang.rstrip())\n\n def build_mo(self):\n \"\"\"Compile .mo files from available .po files\"\"\"\n import subprocess\n import gettext\n from translate.storage import factory\n\n for lang in self._langs:\n lang = lang.rstrip()\n\n po_path = os.path.join('pootle', 'locale', lang)\n mo_path = os.path.join('pootle', 'locale', lang, 'LC_MESSAGES')\n\n if not os.path.exists(mo_path):\n os.makedirs(mo_path)\n\n for po, mo in (('pootle.po', 'django.mo'),\n ('pootle_js.po', 'djangojs.mo')):\n po_filename = os.path.join(po_path, po)\n mo_filename = os.path.join(mo_path, mo)\n\n if not os.path.exists(po_filename):\n log.warn(\"%s: missing file %s\", lang, po_filename)\n continue\n\n if not os.path.exists(mo_path):\n os.makedirs(mo_path)\n\n log.info(\"compiling %s\", lang)\n try:\n subprocess.call([\n 'msgfmt', '--strict', '-o', mo_filename, po_filename],\n stderr=subprocess.STDOUT)\n except Exception as e:\n log.warn(\"%s: skipping, running msgfmt failed: %s\",\n lang, e)\n\n try:\n store = factory.getobject(po_filename)\n gettext.c2py(store.getheaderplural()[1])\n except Exception:\n log.warn(\"%s: invalid plural header in %s\",\n lang, po_filename)\n\n def run(self):\n self.build_mo()\n\n\nsetup(\n name=\"Pootle\",\n version=pootle_version,\n\n description=\"An online collaborative localization tool.\",\n long_description=open(\n os.path.join(os.path.dirname(__file__), 'README.rst')\n ).read(),\n\n author=\"Translate\",\n author_email=\"[email protected]\",\n license=\"GNU General Public License (GPL)\",\n url=\"http://pootle.translatehouse.org\",\n download_url=\"http://sourceforge.net/projects/translate/files/Pootle/\" + pootle_version,\n\n install_requires=parse_requirements('requirements/base.txt'),\n tests_require=parse_requirements('requirements/tests.txt'),\n\n platforms=[\"any\"],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Framework :: Django\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: End Users/Desktop\",\n \"Intended Audience :: Information Technology\",\n \"License :: OSI Approved :: GNU General Public License (GPL)\",\n \"Operating System :: OS Independent\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: Unix\",\n \"Programming Language :: JavaScript\",\n \"Programming Language :: Python\",\n \"Topic :: Software Development :: Localization\",\n \"Topic :: Text Processing :: Linguistic\"\n ],\n zip_safe=False,\n packages=find_packages(exclude=['deploy*']),\n include_package_data=True,\n\n entry_points={\n 'console_scripts': [\n 'pootle = pootle.runner:main',\n ],\n },\n cmdclass={\n 'build_mo': PootleBuildMo,\n 'test': PyTest,\n },\n)\n", "path": "setup.py"}]} | 2,600 | 136 |
gh_patches_debug_5141 | rasdani/github-patches | git_diff | scrapy__scrapy-2503 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
scrapy view <url> raises an exception in v1.3.0
````
(py35) wingyiu@mbp101:~$scrapy view http://www.scrapy.org
2017-01-19 22:13:54 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: scrapybot)
2017-01-19 22:13:54 [scrapy.utils.log] INFO: Overridden settings: {}
Traceback (most recent call last):
File "/Users/user/venv/py35/bin/scrapy", line 11, in <module>
sys.exit(execute())
File "/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py", line 142, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py", line 88, in _run_print_help
func(*a, **kw)
File "/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py", line 149, in _run_command
cmd.run(args, opts)
File "/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/commands/fetch.py", line 58, in run
if not opts.no_redirect:
AttributeError: 'Values' object has no attribute 'no_redirect'
````
</issue>
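
The traceback ends in `AttributeError: 'Values' object has no attribute 'no_redirect'` because `fetch.Command.run` reads `opts.no_redirect`, while the `view` command's `add_options` (shown in the code below) registers only `--spider` on top of the base options and never the redirect flag that the fetch command's own option setup presumably provides. A tiny standalone optparse example, unrelated to Scrapy itself, shows the failure mode:

```python
# Standalone illustration of the failure mode: optparse only creates attributes
# on the parsed Values object for options that were actually registered.
import optparse

parser = optparse.OptionParser()
parser.add_option("--spider", dest="spider")
opts, args = parser.parse_args(["http://www.scrapy.org"])

print(opts.spider)  # None: registered but not passed on the command line
try:
    print(opts.no_redirect)
except AttributeError as exc:
    print(exc)  # no_redirect was never registered by add_options
```

This is why the patch later in this entry delegates to the parent command's option setup instead of rebuilding it from scratch.
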
<code>
[start of scrapy/commands/view.py]
1 from scrapy.commands import fetch, ScrapyCommand
2 from scrapy.utils.response import open_in_browser
3
4 class Command(fetch.Command):
5
6 def short_desc(self):
7 return "Open URL in browser, as seen by Scrapy"
8
9 def long_desc(self):
10 return "Fetch a URL using the Scrapy downloader and show its " \
11 "contents in a browser"
12
13 def add_options(self, parser):
14 ScrapyCommand.add_options(self, parser)
15 parser.add_option("--spider", dest="spider",
16 help="use this spider")
17
18 def _print_response(self, response, opts):
19 open_in_browser(response)
20
[end of scrapy/commands/view.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/commands/view.py b/scrapy/commands/view.py
--- a/scrapy/commands/view.py
+++ b/scrapy/commands/view.py
@@ -11,9 +11,8 @@
"contents in a browser"
def add_options(self, parser):
- ScrapyCommand.add_options(self, parser)
- parser.add_option("--spider", dest="spider",
- help="use this spider")
+ super(Command, self).add_options(parser)
+ parser.remove_option("--headers")
def _print_response(self, response, opts):
open_in_browser(response)
| {"golden_diff": "diff --git a/scrapy/commands/view.py b/scrapy/commands/view.py\n--- a/scrapy/commands/view.py\n+++ b/scrapy/commands/view.py\n@@ -11,9 +11,8 @@\n \"contents in a browser\"\n \n def add_options(self, parser):\n- ScrapyCommand.add_options(self, parser)\n- parser.add_option(\"--spider\", dest=\"spider\",\n- help=\"use this spider\")\n+ super(Command, self).add_options(parser)\n+ parser.remove_option(\"--headers\")\n \n def _print_response(self, response, opts):\n open_in_browser(response)\n", "issue": "scrapy view <url> raise exc in v1.3.0\n````\r\n(py35) wingyiu@mbp101:~$scrapy view http://www.scrapy.org\r\n2017-01-19 22:13:54 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: scrapybot)\r\n2017-01-19 22:13:54 [scrapy.utils.log] INFO: Overridden settings: {}\r\nTraceback (most recent call last):\r\n File \"/Users/user/venv/py35/bin/scrapy\", line 11, in <module>\r\n sys.exit(execute())\r\n File \"/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py\", line 142, in execute\r\n _run_print_help(parser, _run_command, cmd, args, opts)\r\n File \"/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py\", line 88, in _run_print_help\r\n func(*a, **kw)\r\n File \"/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py\", line 149, in _run_command\r\n cmd.run(args, opts)\r\n File \"/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/commands/fetch.py\", line 58, in run\r\n if not opts.no_redirect:\r\nAttributeError: 'Values' object has no attribute 'no_redirect'\r\n````\r\n\n", "before_files": [{"content": "from scrapy.commands import fetch, ScrapyCommand\nfrom scrapy.utils.response import open_in_browser\n\nclass Command(fetch.Command):\n\n def short_desc(self):\n return \"Open URL in browser, as seen by Scrapy\"\n\n def long_desc(self):\n return \"Fetch a URL using the Scrapy downloader and show its \" \\\n \"contents in a browser\"\n\n def add_options(self, parser):\n ScrapyCommand.add_options(self, parser)\n parser.add_option(\"--spider\", dest=\"spider\",\n help=\"use this spider\")\n\n def _print_response(self, response, opts):\n open_in_browser(response)\n", "path": "scrapy/commands/view.py"}]} | 1,050 | 134 |
gh_patches_debug_15924 | rasdani/github-patches | git_diff | Kinto__kinto-119 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Using the _since querystring filter has no effect
I've tried using the `_since` querystring filter as explained in the tutorial, but it seems to have no effect.
`GET`ting any of those URLs returns the exact same list (the full list of records).
```
http GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=1436094288171 -v --auth 'user:password'
http GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=foobar -v --auth 'user:password'
http GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=`date +%s` -v --auth 'user:password'
```
The last one uses the current timestamp as the value, which means it should return an empty list.
</issue>
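
One detail worth keeping in mind when reading the code below: the `default_bucket` view builds its subrequest from `request.path` alone, and the path component of a URL never carries the query string, so any `?_since=...` filter is dropped before the records endpoint sees it. A plain-Python illustration of that split (Python 3 stdlib names shown):

```python
# Illustration only: the path component of a URL does not include the query
# string, so forwarding just the path loses the _since filter.
from urllib.parse import urlsplit

url = ("http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records"
       "?_since=1436094288171")
parts = urlsplit(url)

print(parts.path)   # /v1/buckets/default/collections/tasks/records
print(parts.query)  # _since=1436094288171
```

The patch later in this entry re-attaches the query string when the subrequest path is built.
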
<code>
[start of kinto/views/buckets.py]
1 from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed
2 from pyramid.security import NO_PERMISSION_REQUIRED
3 from pyramid.view import view_config
4
5 from cliquet import resource
6 from cliquet.utils import hmac_digest, build_request
7
8 from kinto.views import NameGenerator
9
10
11 def create_bucket(request, bucket_id):
12 """Create a bucket if it doesn't exists."""
13 bucket_put = (request.method.lower() == 'put' and
14 request.path.endswith('buckets/default'))
15
16 if not bucket_put:
17 subrequest = build_request(request, {
18 'method': 'PUT',
19 'path': '/buckets/%s' % bucket_id,
20 'body': {"data": {}},
21 'headers': {'If-None-Match': '*'.encode('utf-8')}
22 })
23
24 try:
25 request.invoke_subrequest(subrequest)
26 except HTTPPreconditionFailed:
27 # The bucket already exists
28 pass
29
30
31 def create_collection(request, bucket_id):
32 subpath = request.matchdict['subpath']
33 if subpath.startswith('/collections/'):
34 collection_id = subpath.split('/')[2]
35 collection_put = (request.method.lower() == 'put' and
36 request.path.endswith(collection_id))
37 if not collection_put:
38 subrequest = build_request(request, {
39 'method': 'PUT',
40 'path': '/buckets/%s/collections/%s' % (
41 bucket_id, collection_id),
42 'body': {"data": {}},
43 'headers': {'If-None-Match': '*'.encode('utf-8')}
44 })
45 try:
46 request.invoke_subrequest(subrequest)
47 except HTTPPreconditionFailed:
48 # The collection already exists
49 pass
50
51
52 @view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)
53 def default_bucket(request):
54 if getattr(request, 'prefixed_userid', None) is None:
55 raise HTTPForbidden # Pass through the forbidden_view_config
56
57 settings = request.registry.settings
58 hmac_secret = settings['cliquet.userid_hmac_secret']
59 # Build the user unguessable bucket_id UUID from its user_id
60 bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]
61 path = request.path.replace('default', bucket_id)
62
63 # Make sure bucket exists
64 create_bucket(request, bucket_id)
65
66 # Make sure the collection exists
67 create_collection(request, bucket_id)
68
69 subrequest = build_request(request, {
70 'method': request.method,
71 'path': path,
72 'body': request.body
73 })
74
75 return request.invoke_subrequest(subrequest)
76
77
78 @resource.register(name='bucket',
79 collection_methods=('GET',),
80 collection_path='/buckets',
81 record_path='/buckets/{{id}}')
82 class Bucket(resource.ProtectedResource):
83 permissions = ('read', 'write', 'collection:create', 'group:create')
84
85 def __init__(self, *args, **kwargs):
86 super(Bucket, self).__init__(*args, **kwargs)
87 self.collection.id_generator = NameGenerator()
88
89 def get_parent_id(self, request):
90 # Buckets are not isolated by user, unlike Cliquet resources.
91 return ''
92
93 def delete(self):
94 result = super(Bucket, self).delete()
95
96 # Delete groups.
97 storage = self.collection.storage
98 parent_id = '/buckets/%s' % self.record_id
99 storage.delete_all(collection_id='group', parent_id=parent_id)
100
101 # Delete collections.
102 deleted = storage.delete_all(collection_id='collection',
103 parent_id=parent_id)
104
105 # Delete records.
106 id_field = self.collection.id_field
107 for collection in deleted:
108 parent_id = '/buckets/%s/collections/%s' % (self.record_id,
109 collection[id_field])
110 storage.delete_all(collection_id='record', parent_id=parent_id)
111
112 return result
113
[end of kinto/views/buckets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py
--- a/kinto/views/buckets.py
+++ b/kinto/views/buckets.py
@@ -59,6 +59,8 @@
# Build the user unguessable bucket_id UUID from its user_id
bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]
path = request.path.replace('default', bucket_id)
+ querystring = request.url[(request.url.index(request.path) +
+ len(request.path)):]
# Make sure bucket exists
create_bucket(request, bucket_id)
@@ -68,7 +70,7 @@
subrequest = build_request(request, {
'method': request.method,
- 'path': path,
+ 'path': path + querystring,
'body': request.body
})
| {"golden_diff": "diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py\n--- a/kinto/views/buckets.py\n+++ b/kinto/views/buckets.py\n@@ -59,6 +59,8 @@\n # Build the user unguessable bucket_id UUID from its user_id\n bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]\n path = request.path.replace('default', bucket_id)\n+ querystring = request.url[(request.url.index(request.path) +\n+ len(request.path)):]\n \n # Make sure bucket exists\n create_bucket(request, bucket_id)\n@@ -68,7 +70,7 @@\n \n subrequest = build_request(request, {\n 'method': request.method,\n- 'path': path,\n+ 'path': path + querystring,\n 'body': request.body\n })\n", "issue": "Using the _since querystring filter has no effect\nI've tried using the `_since` querystring filter as explained in the tutorial, but it seems to have no effect.\n\n`GET`ing any of those urls returns the exact same list (the full list of records)\n\n```\nhttp GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=1436094288171 -v --auth 'user:password'\nhttp GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=foobar -v --auth 'user:password'\nhttp GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=`date +%s` -v --auth 'user:password'\n```\n\nThe last one uses the current timestamp as the value, which means it should return an empty list.\n\n", "before_files": [{"content": "from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.view import view_config\n\nfrom cliquet import resource\nfrom cliquet.utils import hmac_digest, build_request\n\nfrom kinto.views import NameGenerator\n\n\ndef create_bucket(request, bucket_id):\n \"\"\"Create a bucket if it doesn't exists.\"\"\"\n bucket_put = (request.method.lower() == 'put' and\n request.path.endswith('buckets/default'))\n\n if not bucket_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s' % bucket_id,\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The bucket already exists\n pass\n\n\ndef create_collection(request, bucket_id):\n subpath = request.matchdict['subpath']\n if subpath.startswith('/collections/'):\n collection_id = subpath.split('/')[2]\n collection_put = (request.method.lower() == 'put' and\n request.path.endswith(collection_id))\n if not collection_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s/collections/%s' % (\n bucket_id, collection_id),\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The collection already exists\n pass\n\n\n@view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)\ndef default_bucket(request):\n if getattr(request, 'prefixed_userid', None) is None:\n raise HTTPForbidden # Pass through the forbidden_view_config\n\n settings = request.registry.settings\n hmac_secret = settings['cliquet.userid_hmac_secret']\n # Build the user unguessable bucket_id UUID from its user_id\n bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]\n path = request.path.replace('default', bucket_id)\n\n # Make sure bucket exists\n create_bucket(request, bucket_id)\n\n # Make sure the collection exists\n create_collection(request, bucket_id)\n\n subrequest = 
build_request(request, {\n 'method': request.method,\n 'path': path,\n 'body': request.body\n })\n\n return request.invoke_subrequest(subrequest)\n\n\[email protected](name='bucket',\n collection_methods=('GET',),\n collection_path='/buckets',\n record_path='/buckets/{{id}}')\nclass Bucket(resource.ProtectedResource):\n permissions = ('read', 'write', 'collection:create', 'group:create')\n\n def __init__(self, *args, **kwargs):\n super(Bucket, self).__init__(*args, **kwargs)\n self.collection.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n # Buckets are not isolated by user, unlike Cliquet resources.\n return ''\n\n def delete(self):\n result = super(Bucket, self).delete()\n\n # Delete groups.\n storage = self.collection.storage\n parent_id = '/buckets/%s' % self.record_id\n storage.delete_all(collection_id='group', parent_id=parent_id)\n\n # Delete collections.\n deleted = storage.delete_all(collection_id='collection',\n parent_id=parent_id)\n\n # Delete records.\n id_field = self.collection.id_field\n for collection in deleted:\n parent_id = '/buckets/%s/collections/%s' % (self.record_id,\n collection[id_field])\n storage.delete_all(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/buckets.py"}]} | 1,805 | 187 |
gh_patches_debug_33036 | rasdani/github-patches | git_diff | jazzband__pip-tools-493 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip-sync appears to get confused about editable packages
When a requirements.txt file includes editable local packages, pip-sync seems to want to uninstall them and reinstall them.
Also, the pip install command that pip-sync attempts to perform is missing the `-e` editable flag, resulting in an error.
##### Steps to replicate
- Create a new virtualenv for testing
``` bash
virtualenv --no-site-packages testvenv
source ./testvenv/bin/activate
pip list
```
returns:
```
pip (8.1.2)
setuptools (28.3.0)
wheel (0.30.0a0)
```
- Generate a requirements.txt file (obfuscated from our actual names):
```
-e file:///vagrant/projects/someproject
```
- Install the editable project into the virtualenv
``` bash
pip install -r requirements.txt --no-deps
```
- Install pip-tools into the venv (because the globally installed pip-tools doesn't seem to deal with virtual environments well (another bug?))
``` bash
pip install pip-tools
deactivate
source ./testvenv/bin/activate
```
- See what pip-sync thinks
``` bash
pip-sync -n requirements.txt
```
returns _unexpected results_:
```
Would uninstall:
our.package.name
Would install:
file:///vagrant/projects/packagename
```
- Try to do the sync
```
pip-sync requirements.txt
```
returns
```
Uninstalling our.package.name-1.2.3+dirty:
Successfully uninstalled our.package.name-1.2.3+dirty
Processing /vagrant/projects/packagename
Complete output from command python setup.py egg_info:
*** Bunch of errors we generate related to our package's assumptions about being a git repo ***
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-ls2yuR-build/
Traceback (most recent call last):
File "/home/vagrant/testvenv/bin/pip-sync", line 11, in <module>
sys.exit(cli())
File "/home/vagrant/testvenv/local/lib/python2.7/site-packages/click/core.py", line 716, in __call__
return self.main(*args, **kwargs)
File "/home/vagrant/testvenv/local/lib/python2.7/site-packages/click/core.py", line 696, in main
rv = self.invoke(ctx)
File "/home/vagrant/testvenv/local/lib/python2.7/site-packages/click/core.py", line 889, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/vagrant/testvenv/local/lib/python2.7/site-packages/click/core.py", line 534, in invoke
return callback(*args, **kwargs)
File "/home/vagrant/testvenv/local/lib/python2.7/site-packages/piptools/scripts/sync.py", line 72, in cli
install_flags=install_flags))
File "/home/vagrant/testvenv/local/lib/python2.7/site-packages/piptools/sync.py", line 157, in sync
check_call([pip, 'install'] + pip_flags + install_flags + sorted(to_install))
File "/usr/lib/python2.7/subprocess.py", line 511, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['pip', 'install', 'file:///vagrant/projects/packagename']' returned non-zero exit status 1
```
##### Expected result
A) The tool shouldn't be attempting to uninstall the editable package and then reinstall it
B) Installing editable packages should use the `-e` flag
</issue>
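
The report describes two distinct problems visible in `sync.py` below: `diff()` keys editable requirements by their link while installed distributions are keyed by name, so the two can never match and the editable package is scheduled for uninstall and reinstall, and the resulting `pip install` call drops the `-e` flag. The sketch below shows roughly what emitting the flag could look like; it is an illustration of the requested behaviour, not the patch pip-tools actually applied, and it assumes the pip `InstallRequirement` attributes (`editable`, `link`, `req`) that the surrounding code already uses.

```python
# Rough sketch (not the actual pip-tools change) of emitting the -e flag for
# editable requirements and keeping it as a separate argv element so that
# subprocess passes it through intact.
def as_pip_args(ireq):
    """Return the argv fragment for `pip install` for one requirement."""
    if ireq.editable:
        # e.g. ['-e', 'file:///vagrant/projects/someproject']
        return ["-e", str(ireq.link)]
    return [str(ireq.req)]


def install_command(ireqs, pip="pip"):
    args = [pip, "install"]
    for ireq in ireqs:
        args.extend(as_pip_args(ireq))
    return args
```
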
<code>
[start of piptools/sync.py]
1 import collections
2 import os
3 import sys
4 from subprocess import check_call
5
6 from . import click
7 from .exceptions import IncompatibleRequirements, UnsupportedConstraint
8 from .utils import flat_map, key_from_req
9
10 PACKAGES_TO_IGNORE = [
11 'pip',
12 'pip-tools',
13 'pip-review',
14 'setuptools',
15 'wheel',
16 ]
17
18
19 def dependency_tree(installed_keys, root_key):
20 """
21 Calculate the dependency tree for the package `root_key` and return
22 a collection of all its dependencies. Uses a DFS traversal algorithm.
23
24 `installed_keys` should be a {key: requirement} mapping, e.g.
25 {'django': from_line('django==1.8')}
26 `root_key` should be the key to return the dependency tree for.
27 """
28 dependencies = set()
29 queue = collections.deque()
30
31 if root_key in installed_keys:
32 dep = installed_keys[root_key]
33 queue.append(dep)
34
35 while queue:
36 v = queue.popleft()
37 key = key_from_req(v)
38 if key in dependencies:
39 continue
40
41 dependencies.add(key)
42
43 for dep_specifier in v.requires():
44 dep_name = key_from_req(dep_specifier)
45 if dep_name in installed_keys:
46 dep = installed_keys[dep_name]
47
48 if dep_specifier.specifier.contains(dep.version):
49 queue.append(dep)
50
51 return dependencies
52
53
54 def get_dists_to_ignore(installed):
55 """
56 Returns a collection of package names to ignore when performing pip-sync,
57 based on the currently installed environment. For example, when pip-tools
58 is installed in the local environment, it should be ignored, including all
59 of its dependencies (e.g. click). When pip-tools is not installed
60 locally, click should also be installed/uninstalled depending on the given
61 requirements.
62 """
63 installed_keys = {key_from_req(r): r for r in installed}
64 return list(flat_map(lambda req: dependency_tree(installed_keys, req), PACKAGES_TO_IGNORE))
65
66
67 def merge(requirements, ignore_conflicts):
68 by_key = {}
69
70 for ireq in requirements:
71 if ireq.link is not None and not ireq.editable:
72 msg = ('pip-compile does not support URLs as packages, unless they are editable. '
73 'Perhaps add -e option?')
74 raise UnsupportedConstraint(msg, ireq)
75
76 key = ireq.link or key_from_req(ireq.req)
77
78 if not ignore_conflicts:
79 existing_ireq = by_key.get(key)
80 if existing_ireq:
81 # NOTE: We check equality here since we can assume that the
82 # requirements are all pinned
83 if ireq.specifier != existing_ireq.specifier:
84 raise IncompatibleRequirements(ireq, existing_ireq)
85
86 # TODO: Always pick the largest specifier in case of a conflict
87 by_key[key] = ireq
88
89 return by_key.values()
90
91
92 def diff(compiled_requirements, installed_dists):
93 """
94 Calculate which packages should be installed or uninstalled, given a set
95 of compiled requirements and a list of currently installed modules.
96 """
97 requirements_lut = {r.link or key_from_req(r.req): r for r in compiled_requirements}
98
99 satisfied = set() # holds keys
100 to_install = set() # holds keys-and-versions
101 to_uninstall = set() # holds keys
102
103 pkgs_to_ignore = get_dists_to_ignore(installed_dists)
104 for dist in installed_dists:
105 key = key_from_req(dist)
106 if key not in requirements_lut:
107 to_uninstall.add(key)
108 elif requirements_lut[key].specifier.contains(dist.version):
109 satisfied.add(key)
110
111 for key, requirement in requirements_lut.items():
112 if key not in satisfied:
113 to_install.add(str(requirement.link or requirement.req))
114
115 # Make sure to not uninstall any packages that should be ignored
116 to_uninstall -= set(pkgs_to_ignore)
117
118 return (to_install, to_uninstall)
119
120
121 def sync(to_install, to_uninstall, verbose=False, dry_run=False, pip_flags=None, install_flags=None):
122 """
123 Install and uninstalls the given sets of modules.
124 """
125 if not to_uninstall and not to_install:
126 click.echo("Everything up-to-date")
127
128 if pip_flags is None:
129 pip_flags = []
130
131 if not verbose:
132 pip_flags += ['-q']
133
134 if os.environ.get('VIRTUAL_ENV'):
135 # find pip via PATH
136 pip = 'pip'
137 else:
138 # find pip in same directory as pip-sync entry-point script
139 pip = os.path.join(os.path.dirname(os.path.abspath(sys.argv[0])), 'pip')
140
141 if to_uninstall:
142 if dry_run:
143 click.echo("Would uninstall:")
144 for pkg in to_uninstall:
145 click.echo(" {}".format(pkg))
146 else:
147 check_call([pip, 'uninstall', '-y'] + pip_flags + sorted(to_uninstall))
148
149 if to_install:
150 if install_flags is None:
151 install_flags = []
152 if dry_run:
153 click.echo("Would install:")
154 for pkg in to_install:
155 click.echo(" {}".format(pkg))
156 else:
157 check_call([pip, 'install'] + pip_flags + install_flags + sorted(to_install))
158 return 0
159
[end of piptools/sync.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/piptools/sync.py b/piptools/sync.py
--- a/piptools/sync.py
+++ b/piptools/sync.py
@@ -5,7 +5,7 @@
from . import click
from .exceptions import IncompatibleRequirements, UnsupportedConstraint
-from .utils import flat_map, key_from_req
+from .utils import flat_map, format_requirement, key_from_req
PACKAGES_TO_IGNORE = [
'pip',
@@ -97,7 +97,7 @@
requirements_lut = {r.link or key_from_req(r.req): r for r in compiled_requirements}
satisfied = set() # holds keys
- to_install = set() # holds keys-and-versions
+ to_install = set() # holds InstallRequirement objects
to_uninstall = set() # holds keys
pkgs_to_ignore = get_dists_to_ignore(installed_dists)
@@ -110,7 +110,7 @@
for key, requirement in requirements_lut.items():
if key not in satisfied:
- to_install.add(str(requirement.link or requirement.req))
+ to_install.add(requirement)
# Make sure to not uninstall any packages that should be ignored
to_uninstall -= set(pkgs_to_ignore)
@@ -151,8 +151,14 @@
install_flags = []
if dry_run:
click.echo("Would install:")
- for pkg in to_install:
- click.echo(" {}".format(pkg))
+ for ireq in to_install:
+ click.echo(" {}".format(format_requirement(ireq)))
else:
- check_call([pip, 'install'] + pip_flags + install_flags + sorted(to_install))
+ package_args = []
+ for ireq in sorted(to_install):
+ if ireq.editable:
+ package_args.extend(['-e', str(ireq.link or ireq.req)])
+ else:
+ package_args.append(str(ireq.req))
+ check_call([pip, 'install'] + pip_flags + install_flags + package_args)
return 0
| {"golden_diff": "diff --git a/piptools/sync.py b/piptools/sync.py\n--- a/piptools/sync.py\n+++ b/piptools/sync.py\n@@ -5,7 +5,7 @@\n \n from . import click\n from .exceptions import IncompatibleRequirements, UnsupportedConstraint\n-from .utils import flat_map, key_from_req\n+from .utils import flat_map, format_requirement, key_from_req\n \n PACKAGES_TO_IGNORE = [\n 'pip',\n@@ -97,7 +97,7 @@\n requirements_lut = {r.link or key_from_req(r.req): r for r in compiled_requirements}\n \n satisfied = set() # holds keys\n- to_install = set() # holds keys-and-versions\n+ to_install = set() # holds InstallRequirement objects\n to_uninstall = set() # holds keys\n \n pkgs_to_ignore = get_dists_to_ignore(installed_dists)\n@@ -110,7 +110,7 @@\n \n for key, requirement in requirements_lut.items():\n if key not in satisfied:\n- to_install.add(str(requirement.link or requirement.req))\n+ to_install.add(requirement)\n \n # Make sure to not uninstall any packages that should be ignored\n to_uninstall -= set(pkgs_to_ignore)\n@@ -151,8 +151,14 @@\n install_flags = []\n if dry_run:\n click.echo(\"Would install:\")\n- for pkg in to_install:\n- click.echo(\" {}\".format(pkg))\n+ for ireq in to_install:\n+ click.echo(\" {}\".format(format_requirement(ireq)))\n else:\n- check_call([pip, 'install'] + pip_flags + install_flags + sorted(to_install))\n+ package_args = []\n+ for ireq in sorted(to_install):\n+ if ireq.editable:\n+ package_args.extend(['-e', str(ireq.link or ireq.req)])\n+ else:\n+ package_args.append(str(ireq.req))\n+ check_call([pip, 'install'] + pip_flags + install_flags + package_args)\n return 0\n", "issue": "pip-sync appears to get confused about editable packages\nWhen a requirements.txt file includes editable local packages, pip-sync seems to want to uninstall them and reinstall them.\n\nAlso, the pip install command that pip-sync attempts to perform is missing the `-e` editable flags, resulting in an error.\n##### Steps to replicate\n- Create a new virtualenv for testing\n\n``` bash\nvirtualenv --no-site-packages testvenv\nsource ./testvenv/bin/activate\npip list\n```\n\nreturns:\n\n```\npip (8.1.2)\nsetuptools (28.3.0)\nwheel (0.30.0a0)\n```\n- Generate a requirements.txt file (obfuscated from our actual names):\n\n```\n-e file:///vagrant/projects/someproject\n```\n- Install the editable project into the virtualenv\n\n``` bash\npip install -r requirements.txt --no-deps\n```\n- Install pip-tools into the venv (because the globally installed pip-tools doesn't seem to deal with virtualenvironents well (another bug?) 
)\n\n``` bash\npip install pip-tools\ndeactivate\nsource ./testvenv/bin/activate\n```\n- See what pip-sync thinks\n\n``` bash\npip-sync -n requirements.txt\n```\n\nreturns _unexpected results_:\n\n```\nWould uninstall:\n our.package.name\nWould install:\n file:///vagrant/projects/packagename\n```\n- Try to do the sync\n\n```\npip-sync requirements.txt\n```\n\nreturns\n\n```\nUninstalling our.package.name-1.2.3+dirty:\n Successfully uninstalled our.package.name-1.2.3+dirty\nProcessing /vagrant/projects/packagename\n Complete output from command python setup.py egg_info:\n\n*** Bunch of errors we generate related to our package's assumptions about being a git repo ***\n\n ----------------------------------------\nCommand \"python setup.py egg_info\" failed with error code 1 in /tmp/pip-ls2yuR-build/\nTraceback (most recent call last):\n File \"/home/vagrant/testvenv/bin/pip-sync\", line 11, in <module>\n sys.exit(cli())\n File \"/home/vagrant/testvenv/local/lib/python2.7/site-packages/click/core.py\", line 716, in __call__\n return self.main(*args, **kwargs)\n File \"/home/vagrant/testvenv/local/lib/python2.7/site-packages/click/core.py\", line 696, in main\n rv = self.invoke(ctx)\n File \"/home/vagrant/testvenv/local/lib/python2.7/site-packages/click/core.py\", line 889, in invoke\n return ctx.invoke(self.callback, **ctx.params)\n File \"/home/vagrant/testvenv/local/lib/python2.7/site-packages/click/core.py\", line 534, in invoke\n return callback(*args, **kwargs)\n File \"/home/vagrant/testvenv/local/lib/python2.7/site-packages/piptools/scripts/sync.py\", line 72, in cli\n install_flags=install_flags))\n File \"/home/vagrant/testvenv/local/lib/python2.7/site-packages/piptools/sync.py\", line 157, in sync\n check_call([pip, 'install'] + pip_flags + install_flags + sorted(to_install))\n File \"/usr/lib/python2.7/subprocess.py\", line 511, in check_call\n raise CalledProcessError(retcode, cmd)\nsubprocess.CalledProcessError: Command '['pip', 'install', 'file:///vagrant/projects/packagename']' returned non-zero exit status 1\n```\n##### Expected result\n\nA) The tool shouldn't be attempting to uninstall the editable package and then reinstall it\nB) Installing editable packages should have a `-e` option\n\n", "before_files": [{"content": "import collections\nimport os\nimport sys\nfrom subprocess import check_call\n\nfrom . import click\nfrom .exceptions import IncompatibleRequirements, UnsupportedConstraint\nfrom .utils import flat_map, key_from_req\n\nPACKAGES_TO_IGNORE = [\n 'pip',\n 'pip-tools',\n 'pip-review',\n 'setuptools',\n 'wheel',\n]\n\n\ndef dependency_tree(installed_keys, root_key):\n \"\"\"\n Calculate the dependency tree for the package `root_key` and return\n a collection of all its dependencies. 
Uses a DFS traversal algorithm.\n\n `installed_keys` should be a {key: requirement} mapping, e.g.\n {'django': from_line('django==1.8')}\n `root_key` should be the key to return the dependency tree for.\n \"\"\"\n dependencies = set()\n queue = collections.deque()\n\n if root_key in installed_keys:\n dep = installed_keys[root_key]\n queue.append(dep)\n\n while queue:\n v = queue.popleft()\n key = key_from_req(v)\n if key in dependencies:\n continue\n\n dependencies.add(key)\n\n for dep_specifier in v.requires():\n dep_name = key_from_req(dep_specifier)\n if dep_name in installed_keys:\n dep = installed_keys[dep_name]\n\n if dep_specifier.specifier.contains(dep.version):\n queue.append(dep)\n\n return dependencies\n\n\ndef get_dists_to_ignore(installed):\n \"\"\"\n Returns a collection of package names to ignore when performing pip-sync,\n based on the currently installed environment. For example, when pip-tools\n is installed in the local environment, it should be ignored, including all\n of its dependencies (e.g. click). When pip-tools is not installed\n locally, click should also be installed/uninstalled depending on the given\n requirements.\n \"\"\"\n installed_keys = {key_from_req(r): r for r in installed}\n return list(flat_map(lambda req: dependency_tree(installed_keys, req), PACKAGES_TO_IGNORE))\n\n\ndef merge(requirements, ignore_conflicts):\n by_key = {}\n\n for ireq in requirements:\n if ireq.link is not None and not ireq.editable:\n msg = ('pip-compile does not support URLs as packages, unless they are editable. '\n 'Perhaps add -e option?')\n raise UnsupportedConstraint(msg, ireq)\n\n key = ireq.link or key_from_req(ireq.req)\n\n if not ignore_conflicts:\n existing_ireq = by_key.get(key)\n if existing_ireq:\n # NOTE: We check equality here since we can assume that the\n # requirements are all pinned\n if ireq.specifier != existing_ireq.specifier:\n raise IncompatibleRequirements(ireq, existing_ireq)\n\n # TODO: Always pick the largest specifier in case of a conflict\n by_key[key] = ireq\n\n return by_key.values()\n\n\ndef diff(compiled_requirements, installed_dists):\n \"\"\"\n Calculate which packages should be installed or uninstalled, given a set\n of compiled requirements and a list of currently installed modules.\n \"\"\"\n requirements_lut = {r.link or key_from_req(r.req): r for r in compiled_requirements}\n\n satisfied = set() # holds keys\n to_install = set() # holds keys-and-versions\n to_uninstall = set() # holds keys\n\n pkgs_to_ignore = get_dists_to_ignore(installed_dists)\n for dist in installed_dists:\n key = key_from_req(dist)\n if key not in requirements_lut:\n to_uninstall.add(key)\n elif requirements_lut[key].specifier.contains(dist.version):\n satisfied.add(key)\n\n for key, requirement in requirements_lut.items():\n if key not in satisfied:\n to_install.add(str(requirement.link or requirement.req))\n\n # Make sure to not uninstall any packages that should be ignored\n to_uninstall -= set(pkgs_to_ignore)\n\n return (to_install, to_uninstall)\n\n\ndef sync(to_install, to_uninstall, verbose=False, dry_run=False, pip_flags=None, install_flags=None):\n \"\"\"\n Install and uninstalls the given sets of modules.\n \"\"\"\n if not to_uninstall and not to_install:\n click.echo(\"Everything up-to-date\")\n\n if pip_flags is None:\n pip_flags = []\n\n if not verbose:\n pip_flags += ['-q']\n\n if os.environ.get('VIRTUAL_ENV'):\n # find pip via PATH\n pip = 'pip'\n else:\n # find pip in same directory as pip-sync entry-point script\n pip = 
os.path.join(os.path.dirname(os.path.abspath(sys.argv[0])), 'pip')\n\n if to_uninstall:\n if dry_run:\n click.echo(\"Would uninstall:\")\n for pkg in to_uninstall:\n click.echo(\" {}\".format(pkg))\n else:\n check_call([pip, 'uninstall', '-y'] + pip_flags + sorted(to_uninstall))\n\n if to_install:\n if install_flags is None:\n install_flags = []\n if dry_run:\n click.echo(\"Would install:\")\n for pkg in to_install:\n click.echo(\" {}\".format(pkg))\n else:\n check_call([pip, 'install'] + pip_flags + install_flags + sorted(to_install))\n return 0\n", "path": "piptools/sync.py"}]} | 2,866 | 463 |
gh_patches_debug_49039 | rasdani/github-patches | git_diff | facebookresearch__hydra-279 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug] Documentation inconsistency for `utils.get_original_cwd`
# 🐛 Bug
The tutorial for working directories has a few commands for setting the working directory [see here](https://cli.dev/docs/tutorial/working_directory), but the version of hydra on pip does not have these functions. Additionally, the install instructions do not include instructions on how to install from source (even if that's fairly trivial). The simple solution is to update the wheels on pip. Another alternative would be to put on the installation page that hydra is rapidly developing and suggest that one can install from source directly.
## System information
- 0.10.0 from pip
- python 3.7
- arch linux
## One more thing...
This is very minor but the pip version is `0.10.0` and the github master version is also `0.10.0`, but they are not the same, as evidenced by this issue. You should probably bump the version of git master. Keep up the good work, I think this is a great idea.
</issue>
<code>
[start of hydra/__init__.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 from . import utils
3 from .errors import MissingConfigException
4 from .main import main
5
6 # Source of truth for Hydra's version
7 __version__ = "0.10.0"
8
9 __all__ = ["__version__", "MissingConfigException", "main", "utils"]
10
[end of hydra/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hydra/__init__.py b/hydra/__init__.py
--- a/hydra/__init__.py
+++ b/hydra/__init__.py
@@ -4,6 +4,6 @@
from .main import main
# Source of truth for Hydra's version
-__version__ = "0.10.0"
+__version__ = "0.11.0-pre1"
__all__ = ["__version__", "MissingConfigException", "main", "utils"]
| {"golden_diff": "diff --git a/hydra/__init__.py b/hydra/__init__.py\n--- a/hydra/__init__.py\n+++ b/hydra/__init__.py\n@@ -4,6 +4,6 @@\n from .main import main\n \n # Source of truth for Hydra's version\n-__version__ = \"0.10.0\"\n+__version__ = \"0.11.0-pre1\"\n \n __all__ = [\"__version__\", \"MissingConfigException\", \"main\", \"utils\"]\n", "issue": "[Bug] Documentation inconsistency for `utils.get_original_cwd`\n# \ud83d\udc1b Bug\r\n\r\nThe tutorial for working directories has a few commands for setting the working directory [see here](https://cli.dev/docs/tutorial/working_directory), but the version of hydra on pip does not have these functions. Additionally, the install instructions do not include instructions on how to install from source (even if that's fairly trivial). The simple solution is to update the wheels on pip. Another alternative would be to put on the installation page that hydra is rapidly developing and suggest that one can install from source directly.\r\n\r\n## System information\r\n- 0.10.0 from pip\r\n- python 3.7\r\n- arch linux\r\n\r\n## One more thing...\r\nThis is very minor but the pip version is `0.10.0` and the github master version is also `0.10.0`, but they not the same as evidenced by this issue. You should probably bump the version of git master. Keep up the good work, I think this is a great idea.\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom . import utils\nfrom .errors import MissingConfigException\nfrom .main import main\n\n# Source of truth for Hydra's version\n__version__ = \"0.10.0\"\n\n__all__ = [\"__version__\", \"MissingConfigException\", \"main\", \"utils\"]\n", "path": "hydra/__init__.py"}]} | 851 | 114 |
gh_patches_debug_37809 | rasdani/github-patches | git_diff | Pylons__pyramid-2760 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
make pyramid.paster.bootstrap into a context manager
This would just improve the API such that users could automatically call the closer in a reliable way.
``` python
from pyramid.paster import bootstrap
with bootstrap('development.ini') as env:
req = env['request']
```
This change would also affect `pyramid.scripting.prepare`, which is what `bootstrap` uses under the hood to construct the `env`.
</issue>
<code>
[start of pyramid/paster.py]
1 import os
2
3 from paste.deploy import (
4 loadapp,
5 appconfig,
6 )
7
8 from pyramid.compat import configparser
9 from logging.config import fileConfig
10 from pyramid.scripting import prepare
11
12 def get_app(config_uri, name=None, options=None, loadapp=loadapp):
13 """ Return the WSGI application named ``name`` in the PasteDeploy
14 config file specified by ``config_uri``.
15
16 ``options``, if passed, should be a dictionary used as variable assignments
17 like ``{'http_port': 8080}``. This is useful if e.g. ``%(http_port)s`` is
18 used in the config file.
19
20 If the ``name`` is None, this will attempt to parse the name from
21 the ``config_uri`` string expecting the format ``inifile#name``.
22 If no name is found, the name will default to "main"."""
23 path, section = _getpathsec(config_uri, name)
24 config_name = 'config:%s' % path
25 here_dir = os.getcwd()
26
27 app = loadapp(
28 config_name,
29 name=section,
30 relative_to=here_dir,
31 global_conf=options)
32
33 return app
34
35 def get_appsettings(config_uri, name=None, options=None, appconfig=appconfig):
36 """ Return a dictionary representing the key/value pairs in an ``app``
37 section within the file represented by ``config_uri``.
38
39 ``options``, if passed, should be a dictionary used as variable assignments
40 like ``{'http_port': 8080}``. This is useful if e.g. ``%(http_port)s`` is
41 used in the config file.
42
43 If the ``name`` is None, this will attempt to parse the name from
44 the ``config_uri`` string expecting the format ``inifile#name``.
45 If no name is found, the name will default to "main"."""
46 path, section = _getpathsec(config_uri, name)
47 config_name = 'config:%s' % path
48 here_dir = os.getcwd()
49 return appconfig(
50 config_name,
51 name=section,
52 relative_to=here_dir,
53 global_conf=options)
54
55 def setup_logging(config_uri, global_conf=None,
56 fileConfig=fileConfig,
57 configparser=configparser):
58 """
59 Set up logging via :func:`logging.config.fileConfig` with the filename
60 specified via ``config_uri`` (a string in the form
61 ``filename#sectionname``).
62
63 ConfigParser defaults are specified for the special ``__file__``
64 and ``here`` variables, similar to PasteDeploy config loading.
65 Extra defaults can optionally be specified as a dict in ``global_conf``.
66 """
67 path, _ = _getpathsec(config_uri, None)
68 parser = configparser.ConfigParser()
69 parser.read([path])
70 if parser.has_section('loggers'):
71 config_file = os.path.abspath(path)
72 full_global_conf = dict(
73 __file__=config_file,
74 here=os.path.dirname(config_file))
75 if global_conf:
76 full_global_conf.update(global_conf)
77 return fileConfig(config_file, full_global_conf)
78
79 def _getpathsec(config_uri, name):
80 if '#' in config_uri:
81 path, section = config_uri.split('#', 1)
82 else:
83 path, section = config_uri, 'main'
84 if name:
85 section = name
86 return path, section
87
88 def bootstrap(config_uri, request=None, options=None):
89 """ Load a WSGI application from the PasteDeploy config file specified
90 by ``config_uri``. The environment will be configured as if it is
91 currently serving ``request``, leaving a natural environment in place
92 to write scripts that can generate URLs and utilize renderers.
93
94 This function returns a dictionary with ``app``, ``root``, ``closer``,
95 ``request``, and ``registry`` keys. ``app`` is the WSGI app loaded
96 (based on the ``config_uri``), ``root`` is the traversal root resource
97 of the Pyramid application, and ``closer`` is a parameterless callback
98 that may be called when your script is complete (it pops a threadlocal
99 stack).
100
101 .. note::
102
103 Most operations within :app:`Pyramid` expect to be invoked within the
104 context of a WSGI request, thus it's important when loading your
105 application to anchor it when executing scripts and other code that is
106 not normally invoked during active WSGI requests.
107
108 .. note::
109
110 For a complex config file containing multiple :app:`Pyramid`
111 applications, this function will setup the environment under the context
112 of the last-loaded :app:`Pyramid` application. You may load a specific
113 application yourself by using the lower-level functions
114 :meth:`pyramid.paster.get_app` and :meth:`pyramid.scripting.prepare` in
115 conjunction with :attr:`pyramid.config.global_registries`.
116
117 ``config_uri`` -- specifies the PasteDeploy config file to use for the
118 interactive shell. The format is ``inifile#name``. If the name is left
119 off, ``main`` will be assumed.
120
121 ``request`` -- specified to anchor the script to a given set of WSGI
122 parameters. For example, most people would want to specify the host,
123 scheme and port such that their script will generate URLs in relation
124 to those parameters. A request with default parameters is constructed
125 for you if none is provided. You can mutate the request's ``environ``
126 later to setup a specific host/port/scheme/etc.
127
128 ``options`` Is passed to get_app for use as variable assignments like
129 {'http_port': 8080} and then use %(http_port)s in the
130 config file.
131
132 See :ref:`writing_a_script` for more information about how to use this
133 function.
134 """
135 app = get_app(config_uri, options=options)
136 env = prepare(request)
137 env['app'] = app
138 return env
139
140
[end of pyramid/paster.py]
[start of pyramid/scripting.py]
1 from pyramid.config import global_registries
2 from pyramid.exceptions import ConfigurationError
3
4 from pyramid.interfaces import (
5 IRequestFactory,
6 IRootFactory,
7 )
8 from pyramid.request import Request
9 from pyramid.request import apply_request_extensions
10
11 from pyramid.threadlocal import manager as threadlocal_manager
12 from pyramid.traversal import DefaultRootFactory
13
14 def get_root(app, request=None):
15 """ Return a tuple composed of ``(root, closer)`` when provided a
16 :term:`router` instance as the ``app`` argument. The ``root``
17 returned is the application root object. The ``closer`` returned
18 is a callable (accepting no arguments) that should be called when
19 your scripting application is finished using the root.
20
21 ``request`` is passed to the :app:`Pyramid` application root
22 factory to compute the root. If ``request`` is None, a default
23 will be constructed using the registry's :term:`Request Factory`
24 via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.
25 """
26 registry = app.registry
27 if request is None:
28 request = _make_request('/', registry)
29 threadlocals = {'registry':registry, 'request':request}
30 app.threadlocal_manager.push(threadlocals)
31 def closer(request=request): # keep request alive via this function default
32 app.threadlocal_manager.pop()
33 root = app.root_factory(request)
34 return root, closer
35
36 def prepare(request=None, registry=None):
37 """ This function pushes data onto the Pyramid threadlocal stack
38 (request and registry), making those objects 'current'. It
39 returns a dictionary useful for bootstrapping a Pyramid
40 application in a scripting environment.
41
42 ``request`` is passed to the :app:`Pyramid` application root
43 factory to compute the root. If ``request`` is None, a default
44 will be constructed using the registry's :term:`Request Factory`
45 via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.
46
47 If ``registry`` is not supplied, the last registry loaded from
48 :attr:`pyramid.config.global_registries` will be used. If you
49 have loaded more than one :app:`Pyramid` application in the
50 current process, you may not want to use the last registry
51 loaded, thus you can search the ``global_registries`` and supply
52 the appropriate one based on your own criteria.
53
54 The function returns a dictionary composed of ``root``,
55 ``closer``, ``registry``, ``request`` and ``root_factory``. The
56 ``root`` returned is the application's root resource object. The
57 ``closer`` returned is a callable (accepting no arguments) that
58 should be called when your scripting application is finished
59 using the root. ``registry`` is the registry object passed or
60 the last registry loaded into
61 :attr:`pyramid.config.global_registries` if no registry is passed.
62 ``request`` is the request object passed or the constructed request
63 if no request is passed. ``root_factory`` is the root factory used
64 to construct the root.
65 """
66 if registry is None:
67 registry = getattr(request, 'registry', global_registries.last)
68 if registry is None:
69 raise ConfigurationError('No valid Pyramid applications could be '
70 'found, make sure one has been created '
71 'before trying to activate it.')
72 if request is None:
73 request = _make_request('/', registry)
74 # NB: even though _make_request might have already set registry on
75 # request, we reset it in case someone has passed in their own
76 # request.
77 request.registry = registry
78 threadlocals = {'registry':registry, 'request':request}
79 threadlocal_manager.push(threadlocals)
80 apply_request_extensions(request)
81 def closer():
82 threadlocal_manager.pop()
83 root_factory = registry.queryUtility(IRootFactory,
84 default=DefaultRootFactory)
85 root = root_factory(request)
86 if getattr(request, 'context', None) is None:
87 request.context = root
88 return {'root':root, 'closer':closer, 'registry':registry,
89 'request':request, 'root_factory':root_factory}
90
91 def _make_request(path, registry=None):
92 """ Return a :meth:`pyramid.request.Request` object anchored at a
93 given path. The object returned will be generated from the supplied
94 registry's :term:`Request Factory` using the
95 :meth:`pyramid.interfaces.IRequestFactory.blank` method.
96
97 This request object can be passed to :meth:`pyramid.scripting.get_root`
98 or :meth:`pyramid.scripting.prepare` to initialize an application in
99 preparation for executing a script with a proper environment setup.
100 URLs can then be generated with the object, as well as rendering
101 templates.
102
103 If ``registry`` is not supplied, the last registry loaded from
104 :attr:`pyramid.config.global_registries` will be used. If you have
105 loaded more than one :app:`Pyramid` application in the current
106 process, you may not want to use the last registry loaded, thus
107 you can search the ``global_registries`` and supply the appropriate
108 one based on your own criteria.
109 """
110 if registry is None:
111 registry = global_registries.last
112 request_factory = registry.queryUtility(IRequestFactory, default=Request)
113 request = request_factory.blank(path)
114 request.registry = registry
115 return request
116
[end of pyramid/scripting.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyramid/paster.py b/pyramid/paster.py
--- a/pyramid/paster.py
+++ b/pyramid/paster.py
@@ -129,8 +129,22 @@
{'http_port': 8080} and then use %(http_port)s in the
config file.
+ This function may be used as a context manager to call the ``closer``
+ automatically:
+
+ .. code-block:: python
+
+ with bootstrap('development.ini') as env:
+ request = env['request']
+ # ...
+
See :ref:`writing_a_script` for more information about how to use this
function.
+
+ .. versionchanged:: 1.8
+
+ Added the ability to use the return value as a context manager.
+
"""
app = get_app(config_uri, options=options)
env = prepare(request)
diff --git a/pyramid/scripting.py b/pyramid/scripting.py
--- a/pyramid/scripting.py
+++ b/pyramid/scripting.py
@@ -56,12 +56,25 @@
``root`` returned is the application's root resource object. The
``closer`` returned is a callable (accepting no arguments) that
should be called when your scripting application is finished
- using the root. ``registry`` is the registry object passed or
- the last registry loaded into
- :attr:`pyramid.config.global_registries` if no registry is passed.
+ using the root. ``registry`` is the resolved registry object.
``request`` is the request object passed or the constructed request
if no request is passed. ``root_factory`` is the root factory used
to construct the root.
+
+ This function may be used as a context manager to call the ``closer``
+ automatically:
+
+ .. code-block:: python
+
+ registry = config.registry
+ with prepare(registry) as env:
+ request = env['request']
+ # ...
+
+ .. versionchanged:: 1.8
+
+ Added the ability to use the return value as a context manager.
+
"""
if registry is None:
registry = getattr(request, 'registry', global_registries.last)
@@ -85,8 +98,20 @@
root = root_factory(request)
if getattr(request, 'context', None) is None:
request.context = root
- return {'root':root, 'closer':closer, 'registry':registry,
- 'request':request, 'root_factory':root_factory}
+ return AppEnvironment(
+ root=root,
+ closer=closer,
+ registry=registry,
+ request=request,
+ root_factory=root_factory,
+ )
+
+class AppEnvironment(dict):
+ def __enter__(self):
+ return self
+
+ def __exit__(self, type, value, traceback):
+ self['closer']()
def _make_request(path, registry=None):
""" Return a :meth:`pyramid.request.Request` object anchored at a
| {"golden_diff": "diff --git a/pyramid/paster.py b/pyramid/paster.py\n--- a/pyramid/paster.py\n+++ b/pyramid/paster.py\n@@ -129,8 +129,22 @@\n {'http_port': 8080} and then use %(http_port)s in the\n config file.\n \n+ This function may be used as a context manager to call the ``closer``\n+ automatically:\n+\n+ .. code-block:: python\n+\n+ with bootstrap('development.ini') as env:\n+ request = env['request']\n+ # ...\n+\n See :ref:`writing_a_script` for more information about how to use this\n function.\n+\n+ .. versionchanged:: 1.8\n+\n+ Added the ability to use the return value as a context manager.\n+\n \"\"\"\n app = get_app(config_uri, options=options)\n env = prepare(request)\ndiff --git a/pyramid/scripting.py b/pyramid/scripting.py\n--- a/pyramid/scripting.py\n+++ b/pyramid/scripting.py\n@@ -56,12 +56,25 @@\n ``root`` returned is the application's root resource object. The\n ``closer`` returned is a callable (accepting no arguments) that\n should be called when your scripting application is finished\n- using the root. ``registry`` is the registry object passed or\n- the last registry loaded into\n- :attr:`pyramid.config.global_registries` if no registry is passed.\n+ using the root. ``registry`` is the resolved registry object.\n ``request`` is the request object passed or the constructed request\n if no request is passed. ``root_factory`` is the root factory used\n to construct the root.\n+\n+ This function may be used as a context manager to call the ``closer``\n+ automatically:\n+\n+ .. code-block:: python\n+\n+ registry = config.registry\n+ with prepare(registry) as env:\n+ request = env['request']\n+ # ...\n+\n+ .. versionchanged:: 1.8\n+\n+ Added the ability to use the return value as a context manager.\n+\n \"\"\"\n if registry is None:\n registry = getattr(request, 'registry', global_registries.last)\n@@ -85,8 +98,20 @@\n root = root_factory(request)\n if getattr(request, 'context', None) is None:\n request.context = root\n- return {'root':root, 'closer':closer, 'registry':registry,\n- 'request':request, 'root_factory':root_factory}\n+ return AppEnvironment(\n+ root=root,\n+ closer=closer,\n+ registry=registry,\n+ request=request,\n+ root_factory=root_factory,\n+ )\n+\n+class AppEnvironment(dict):\n+ def __enter__(self):\n+ return self\n+\n+ def __exit__(self, type, value, traceback):\n+ self['closer']()\n \n def _make_request(path, registry=None):\n \"\"\" Return a :meth:`pyramid.request.Request` object anchored at a\n", "issue": "make pyramid.paster.bootstrap into a context manager\nThis would just improve the API such that users could automatically call the closer in a reliable way.\n\n``` python\nfrom pyramid.paster import bootstrap\n\nwith bootstrap('development.ini') as env:\n req = env['request']\n```\n\nThis change would also affect `pyramid.scripting.prepare` which is what `bootstrap` uses under the hood to construct the `env`.\n\n", "before_files": [{"content": "import os\n\nfrom paste.deploy import (\n loadapp,\n appconfig,\n )\n\nfrom pyramid.compat import configparser\nfrom logging.config import fileConfig\nfrom pyramid.scripting import prepare\n\ndef get_app(config_uri, name=None, options=None, loadapp=loadapp):\n \"\"\" Return the WSGI application named ``name`` in the PasteDeploy\n config file specified by ``config_uri``.\n\n ``options``, if passed, should be a dictionary used as variable assignments\n like ``{'http_port': 8080}``. This is useful if e.g. 
``%(http_port)s`` is\n used in the config file.\n\n If the ``name`` is None, this will attempt to parse the name from\n the ``config_uri`` string expecting the format ``inifile#name``.\n If no name is found, the name will default to \"main\".\"\"\"\n path, section = _getpathsec(config_uri, name)\n config_name = 'config:%s' % path\n here_dir = os.getcwd()\n\n app = loadapp(\n config_name,\n name=section,\n relative_to=here_dir,\n global_conf=options)\n\n return app\n\ndef get_appsettings(config_uri, name=None, options=None, appconfig=appconfig):\n \"\"\" Return a dictionary representing the key/value pairs in an ``app``\n section within the file represented by ``config_uri``.\n\n ``options``, if passed, should be a dictionary used as variable assignments\n like ``{'http_port': 8080}``. This is useful if e.g. ``%(http_port)s`` is\n used in the config file.\n\n If the ``name`` is None, this will attempt to parse the name from\n the ``config_uri`` string expecting the format ``inifile#name``.\n If no name is found, the name will default to \"main\".\"\"\"\n path, section = _getpathsec(config_uri, name)\n config_name = 'config:%s' % path\n here_dir = os.getcwd()\n return appconfig(\n config_name,\n name=section,\n relative_to=here_dir,\n global_conf=options)\n\ndef setup_logging(config_uri, global_conf=None,\n fileConfig=fileConfig,\n configparser=configparser):\n \"\"\"\n Set up logging via :func:`logging.config.fileConfig` with the filename\n specified via ``config_uri`` (a string in the form\n ``filename#sectionname``).\n\n ConfigParser defaults are specified for the special ``__file__``\n and ``here`` variables, similar to PasteDeploy config loading.\n Extra defaults can optionally be specified as a dict in ``global_conf``.\n \"\"\"\n path, _ = _getpathsec(config_uri, None)\n parser = configparser.ConfigParser()\n parser.read([path])\n if parser.has_section('loggers'):\n config_file = os.path.abspath(path)\n full_global_conf = dict(\n __file__=config_file,\n here=os.path.dirname(config_file))\n if global_conf:\n full_global_conf.update(global_conf)\n return fileConfig(config_file, full_global_conf)\n\ndef _getpathsec(config_uri, name):\n if '#' in config_uri:\n path, section = config_uri.split('#', 1)\n else:\n path, section = config_uri, 'main'\n if name:\n section = name\n return path, section\n\ndef bootstrap(config_uri, request=None, options=None):\n \"\"\" Load a WSGI application from the PasteDeploy config file specified\n by ``config_uri``. The environment will be configured as if it is\n currently serving ``request``, leaving a natural environment in place\n to write scripts that can generate URLs and utilize renderers.\n\n This function returns a dictionary with ``app``, ``root``, ``closer``,\n ``request``, and ``registry`` keys. ``app`` is the WSGI app loaded\n (based on the ``config_uri``), ``root`` is the traversal root resource\n of the Pyramid application, and ``closer`` is a parameterless callback\n that may be called when your script is complete (it pops a threadlocal\n stack).\n\n .. note::\n\n Most operations within :app:`Pyramid` expect to be invoked within the\n context of a WSGI request, thus it's important when loading your\n application to anchor it when executing scripts and other code that is\n not normally invoked during active WSGI requests.\n\n .. note::\n\n For a complex config file containing multiple :app:`Pyramid`\n applications, this function will setup the environment under the context\n of the last-loaded :app:`Pyramid` application. 
You may load a specific\n application yourself by using the lower-level functions\n :meth:`pyramid.paster.get_app` and :meth:`pyramid.scripting.prepare` in\n conjunction with :attr:`pyramid.config.global_registries`.\n\n ``config_uri`` -- specifies the PasteDeploy config file to use for the\n interactive shell. The format is ``inifile#name``. If the name is left\n off, ``main`` will be assumed.\n\n ``request`` -- specified to anchor the script to a given set of WSGI\n parameters. For example, most people would want to specify the host,\n scheme and port such that their script will generate URLs in relation\n to those parameters. A request with default parameters is constructed\n for you if none is provided. You can mutate the request's ``environ``\n later to setup a specific host/port/scheme/etc.\n\n ``options`` Is passed to get_app for use as variable assignments like \n {'http_port': 8080} and then use %(http_port)s in the\n config file.\n\n See :ref:`writing_a_script` for more information about how to use this\n function.\n \"\"\"\n app = get_app(config_uri, options=options)\n env = prepare(request)\n env['app'] = app\n return env\n\n", "path": "pyramid/paster.py"}, {"content": "from pyramid.config import global_registries\nfrom pyramid.exceptions import ConfigurationError\n\nfrom pyramid.interfaces import (\n IRequestFactory,\n IRootFactory,\n )\nfrom pyramid.request import Request\nfrom pyramid.request import apply_request_extensions\n\nfrom pyramid.threadlocal import manager as threadlocal_manager\nfrom pyramid.traversal import DefaultRootFactory\n\ndef get_root(app, request=None):\n \"\"\" Return a tuple composed of ``(root, closer)`` when provided a\n :term:`router` instance as the ``app`` argument. The ``root``\n returned is the application root object. The ``closer`` returned\n is a callable (accepting no arguments) that should be called when\n your scripting application is finished using the root.\n\n ``request`` is passed to the :app:`Pyramid` application root\n factory to compute the root. If ``request`` is None, a default\n will be constructed using the registry's :term:`Request Factory`\n via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n \"\"\"\n registry = app.registry\n if request is None:\n request = _make_request('/', registry)\n threadlocals = {'registry':registry, 'request':request}\n app.threadlocal_manager.push(threadlocals)\n def closer(request=request): # keep request alive via this function default\n app.threadlocal_manager.pop()\n root = app.root_factory(request)\n return root, closer\n\ndef prepare(request=None, registry=None):\n \"\"\" This function pushes data onto the Pyramid threadlocal stack\n (request and registry), making those objects 'current'. It\n returns a dictionary useful for bootstrapping a Pyramid\n application in a scripting environment.\n\n ``request`` is passed to the :app:`Pyramid` application root\n factory to compute the root. If ``request`` is None, a default\n will be constructed using the registry's :term:`Request Factory`\n via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n\n If ``registry`` is not supplied, the last registry loaded from\n :attr:`pyramid.config.global_registries` will be used. 
If you\n have loaded more than one :app:`Pyramid` application in the\n current process, you may not want to use the last registry\n loaded, thus you can search the ``global_registries`` and supply\n the appropriate one based on your own criteria.\n\n The function returns a dictionary composed of ``root``,\n ``closer``, ``registry``, ``request`` and ``root_factory``. The\n ``root`` returned is the application's root resource object. The\n ``closer`` returned is a callable (accepting no arguments) that\n should be called when your scripting application is finished\n using the root. ``registry`` is the registry object passed or\n the last registry loaded into\n :attr:`pyramid.config.global_registries` if no registry is passed.\n ``request`` is the request object passed or the constructed request\n if no request is passed. ``root_factory`` is the root factory used\n to construct the root.\n \"\"\"\n if registry is None:\n registry = getattr(request, 'registry', global_registries.last)\n if registry is None:\n raise ConfigurationError('No valid Pyramid applications could be '\n 'found, make sure one has been created '\n 'before trying to activate it.')\n if request is None:\n request = _make_request('/', registry)\n # NB: even though _make_request might have already set registry on\n # request, we reset it in case someone has passed in their own\n # request.\n request.registry = registry \n threadlocals = {'registry':registry, 'request':request}\n threadlocal_manager.push(threadlocals)\n apply_request_extensions(request)\n def closer():\n threadlocal_manager.pop()\n root_factory = registry.queryUtility(IRootFactory,\n default=DefaultRootFactory)\n root = root_factory(request)\n if getattr(request, 'context', None) is None:\n request.context = root\n return {'root':root, 'closer':closer, 'registry':registry,\n 'request':request, 'root_factory':root_factory}\n\ndef _make_request(path, registry=None):\n \"\"\" Return a :meth:`pyramid.request.Request` object anchored at a\n given path. The object returned will be generated from the supplied\n registry's :term:`Request Factory` using the\n :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n\n This request object can be passed to :meth:`pyramid.scripting.get_root`\n or :meth:`pyramid.scripting.prepare` to initialize an application in\n preparation for executing a script with a proper environment setup.\n URLs can then be generated with the object, as well as rendering\n templates.\n\n If ``registry`` is not supplied, the last registry loaded from\n :attr:`pyramid.config.global_registries` will be used. If you have\n loaded more than one :app:`Pyramid` application in the current\n process, you may not want to use the last registry loaded, thus\n you can search the ``global_registries`` and supply the appropriate\n one based on your own criteria.\n \"\"\"\n if registry is None:\n registry = global_registries.last\n request_factory = registry.queryUtility(IRequestFactory, default=Request)\n request = request_factory.blank(path)\n request.registry = registry\n return request\n", "path": "pyramid/scripting.py"}]} | 3,666 | 681 |
gh_patches_debug_34543 | rasdani/github-patches | git_diff | UTNkar__moore-154 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Menu Translations
<!-- Do you want to ask a question? Are you looking for support? The system administrator can help you: [email protected] -->
### Description
Not all menu pages are using `translated_title` when being added to the menu.
<!-- Please select the appropriate "topic category"/blue and "issue type"/yellow label -->
</issue>
<code>
[start of website/website/templatetags/site_tags.py]
1 from django import template
2
3 register = template.Library()
4
5
6 @register.simple_tag(takes_context=True)
7 def get_site_root(context):
8 # NB this returns a core.Page, not the implementation-specific model used
9 # so object-comparison to self will return false as objects would differ
10 return context['request'].site.root_page
11
12
13 def has_menu_children(page):
14 return page.get_children().live().in_menu().exists()
15
16
17 # Retrieves the top menu items - the immediate children of the parent page
18 # The has_menu_children method is necessary because the bootstrap menu requires
19 # a dropdown class to be applied to a parent
20 @register.inclusion_tag('tags/menu.html', takes_context=True)
21 def menu_items(context, parent, calling_page=None, sidenav=False):
22 menuitems = parent.get_children().live().in_menu()
23 for menuitem in menuitems:
24 menuitem.show_dropdown = has_menu_children(menuitem)
25 # TODO: There has to be a better alternative!
26 if hasattr(menuitem, 'googleformindex'):
27 menuitem.translated_title = menuitem.googleformindex\
28 .translated_title
29 elif hasattr(menuitem, 'googleformpage'):
30 menuitem.translated_title = menuitem.googleformpage\
31 .translated_title
32 elif hasattr(menuitem, 'homepage'):
33 menuitem.translated_title = menuitem.homepage.translated_title
34 elif hasattr(menuitem, 'recruitmentpage'):
35 menuitem.translated_title = menuitem.recruitmentpage\
36 .translated_title
37 elif hasattr(menuitem, 'newsindexpage'):
38 menuitem.translated_title = menuitem.newsindexpage.translated_title
39 elif hasattr(menuitem, 'newspage'):
40 menuitem.translated_title = menuitem.newspage.translated_title
41 elif hasattr(menuitem, 'webpage'):
42 menuitem.translated_title = menuitem.webpage.translated_title
43 # We don't directly check if calling_page is None since the template
44 # engine can pass an empty string to calling_page
45 # if the variable passed as calling_page does not exist.
46 menuitem.active = (calling_page.url.startswith(menuitem.url)
47 if calling_page else False)
48 return {
49 'calling_page': calling_page,
50 'menuitems': menuitems,
51 'sidenav': sidenav,
52 # required by the pageurl tag that we want to use within this template
53 'request': context['request'],
54 }
55
56
57 # Retrieves the children of the top menu items for the drop downs
58 @register.inclusion_tag('tags/menu_children.html', takes_context=True)
59 def menu_children(context, parent, sidenav=False):
60 children = parent.get_children()
61 children = children.live().in_menu()
62 return {
63 'parent': parent,
64 'children': children,
65 'sidenav': sidenav,
66 # required by the pageurl tag that we want to use within this template
67 'request': context['request'],
68 }
69
[end of website/website/templatetags/site_tags.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/website/templatetags/site_tags.py b/website/website/templatetags/site_tags.py
--- a/website/website/templatetags/site_tags.py
+++ b/website/website/templatetags/site_tags.py
@@ -20,26 +20,9 @@
@register.inclusion_tag('tags/menu.html', takes_context=True)
def menu_items(context, parent, calling_page=None, sidenav=False):
menuitems = parent.get_children().live().in_menu()
+ menuitems = [m.specific for m in menuitems]
for menuitem in menuitems:
menuitem.show_dropdown = has_menu_children(menuitem)
- # TODO: There has to be a better alternative!
- if hasattr(menuitem, 'googleformindex'):
- menuitem.translated_title = menuitem.googleformindex\
- .translated_title
- elif hasattr(menuitem, 'googleformpage'):
- menuitem.translated_title = menuitem.googleformpage\
- .translated_title
- elif hasattr(menuitem, 'homepage'):
- menuitem.translated_title = menuitem.homepage.translated_title
- elif hasattr(menuitem, 'recruitmentpage'):
- menuitem.translated_title = menuitem.recruitmentpage\
- .translated_title
- elif hasattr(menuitem, 'newsindexpage'):
- menuitem.translated_title = menuitem.newsindexpage.translated_title
- elif hasattr(menuitem, 'newspage'):
- menuitem.translated_title = menuitem.newspage.translated_title
- elif hasattr(menuitem, 'webpage'):
- menuitem.translated_title = menuitem.webpage.translated_title
# We don't directly check if calling_page is None since the template
# engine can pass an empty string to calling_page
# if the variable passed as calling_page does not exist.
@@ -57,8 +40,8 @@
# Retrieves the children of the top menu items for the drop downs
@register.inclusion_tag('tags/menu_children.html', takes_context=True)
def menu_children(context, parent, sidenav=False):
- children = parent.get_children()
- children = children.live().in_menu()
+ children = parent.get_children().live().in_menu()
+ children = [c.specific for c in children]
return {
'parent': parent,
'children': children,
| {"golden_diff": "diff --git a/website/website/templatetags/site_tags.py b/website/website/templatetags/site_tags.py\n--- a/website/website/templatetags/site_tags.py\n+++ b/website/website/templatetags/site_tags.py\n@@ -20,26 +20,9 @@\n @register.inclusion_tag('tags/menu.html', takes_context=True)\n def menu_items(context, parent, calling_page=None, sidenav=False):\n menuitems = parent.get_children().live().in_menu()\n+ menuitems = [m.specific for m in menuitems]\n for menuitem in menuitems:\n menuitem.show_dropdown = has_menu_children(menuitem)\n- # TODO: There has to be a better alternative!\n- if hasattr(menuitem, 'googleformindex'):\n- menuitem.translated_title = menuitem.googleformindex\\\n- .translated_title\n- elif hasattr(menuitem, 'googleformpage'):\n- menuitem.translated_title = menuitem.googleformpage\\\n- .translated_title\n- elif hasattr(menuitem, 'homepage'):\n- menuitem.translated_title = menuitem.homepage.translated_title\n- elif hasattr(menuitem, 'recruitmentpage'):\n- menuitem.translated_title = menuitem.recruitmentpage\\\n- .translated_title\n- elif hasattr(menuitem, 'newsindexpage'):\n- menuitem.translated_title = menuitem.newsindexpage.translated_title\n- elif hasattr(menuitem, 'newspage'):\n- menuitem.translated_title = menuitem.newspage.translated_title\n- elif hasattr(menuitem, 'webpage'):\n- menuitem.translated_title = menuitem.webpage.translated_title\n # We don't directly check if calling_page is None since the template\n # engine can pass an empty string to calling_page\n # if the variable passed as calling_page does not exist.\n@@ -57,8 +40,8 @@\n # Retrieves the children of the top menu items for the drop downs\n @register.inclusion_tag('tags/menu_children.html', takes_context=True)\n def menu_children(context, parent, sidenav=False):\n- children = parent.get_children()\n- children = children.live().in_menu()\n+ children = parent.get_children().live().in_menu()\n+ children = [c.specific for c in children]\n return {\n 'parent': parent,\n 'children': children,\n", "issue": "Menu Translations\n<!-- Do you want to ask a question? Are you looking for support? 
The system administrator can help you: [email protected] -->\r\n\r\n### Description\r\n\r\nNot all menu pages are using `translated_title` when being added to the menu.\r\n\r\n<!-- Please select the appropriate \"topic category\"/blue and \"issue type\"/yellow label -->\r\n\n", "before_files": [{"content": "from django import template\n\nregister = template.Library()\n\n\[email protected]_tag(takes_context=True)\ndef get_site_root(context):\n # NB this returns a core.Page, not the implementation-specific model used\n # so object-comparison to self will return false as objects would differ\n return context['request'].site.root_page\n\n\ndef has_menu_children(page):\n return page.get_children().live().in_menu().exists()\n\n\n# Retrieves the top menu items - the immediate children of the parent page\n# The has_menu_children method is necessary because the bootstrap menu requires\n# a dropdown class to be applied to a parent\[email protected]_tag('tags/menu.html', takes_context=True)\ndef menu_items(context, parent, calling_page=None, sidenav=False):\n menuitems = parent.get_children().live().in_menu()\n for menuitem in menuitems:\n menuitem.show_dropdown = has_menu_children(menuitem)\n # TODO: There has to be a better alternative!\n if hasattr(menuitem, 'googleformindex'):\n menuitem.translated_title = menuitem.googleformindex\\\n .translated_title\n elif hasattr(menuitem, 'googleformpage'):\n menuitem.translated_title = menuitem.googleformpage\\\n .translated_title\n elif hasattr(menuitem, 'homepage'):\n menuitem.translated_title = menuitem.homepage.translated_title\n elif hasattr(menuitem, 'recruitmentpage'):\n menuitem.translated_title = menuitem.recruitmentpage\\\n .translated_title\n elif hasattr(menuitem, 'newsindexpage'):\n menuitem.translated_title = menuitem.newsindexpage.translated_title\n elif hasattr(menuitem, 'newspage'):\n menuitem.translated_title = menuitem.newspage.translated_title\n elif hasattr(menuitem, 'webpage'):\n menuitem.translated_title = menuitem.webpage.translated_title\n # We don't directly check if calling_page is None since the template\n # engine can pass an empty string to calling_page\n # if the variable passed as calling_page does not exist.\n menuitem.active = (calling_page.url.startswith(menuitem.url)\n if calling_page else False)\n return {\n 'calling_page': calling_page,\n 'menuitems': menuitems,\n 'sidenav': sidenav,\n # required by the pageurl tag that we want to use within this template\n 'request': context['request'],\n }\n\n\n# Retrieves the children of the top menu items for the drop downs\[email protected]_tag('tags/menu_children.html', takes_context=True)\ndef menu_children(context, parent, sidenav=False):\n children = parent.get_children()\n children = children.live().in_menu()\n return {\n 'parent': parent,\n 'children': children,\n 'sidenav': sidenav,\n # required by the pageurl tag that we want to use within this template\n 'request': context['request'],\n }\n", "path": "website/website/templatetags/site_tags.py"}]} | 1,369 | 522 |
gh_patches_debug_10281 | rasdani/github-patches | git_diff | scikit-hep__awkward-2410 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
String broadcasting is not producing sensible results
### Version of Awkward Array
main
### Description and code to reproduce
As Jim and I discussed today, string broadcasting is currently not producing reliable results for string-string, or even string-non string cases. In particular many broadcasts violate the appearance that strings are atoms:
```python3
>>> import awkward as ak
>>> x = ak._v2.Array([["one", "two"], ["three", "four"]])
>>> ak._v2.broadcast_arrays(x[1:], x[:-1])
ValueError: while calling (from <ipython-input-42-6e4db831c1ff>, line 1)
ak._v2.broadcast_arrays(
arrays = (<Array [['one', 'two']] type='1 * var * string'>, <Array [[...
kwargs = {}
)
Error details: cannot broadcast nested list (in compiled code: https://github.com/scikit-hep/awkward-1.0/blob/1.10.0rc1/src/cpu-kernels/awkward_ListArray_broadcast_tooffsets.cpp#L27)
```
In this case, broadcasting an Array of strings against an Array of integers produces arrays that have the same structure as though the outer `__array__ = "string"` were missing (i.e. broadcasting happens against the underlying characters array):
```python3
>>> import awkward as ak
>>> x = ak._v2.Array([["one", "two"], ["three", "four"]])
>>> y = ak._v2.Array([[1,2],[3, 4]])
>>> ak._v2.broadcast_arrays(x, y)
[<Array [['one', 'two'], ['three', 'four']] type='2 * var * string'>,
<Array [[[1, 1, 1], [2, 2, 2]], [[...], ...]] type='2 * var * var * int64'>]
```
</issue>
<code>
[start of src/awkward/operations/ak_broadcast_arrays.py]
1 # BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE
2 __all__ = ("broadcast_arrays",)
3 import awkward as ak
4 from awkward._backends.dispatch import backend_of
5 from awkward._backends.numpy import NumpyBackend
6 from awkward._behavior import behavior_of
7 from awkward._connect.numpy import unsupported
8 from awkward._layout import wrap_layout
9 from awkward._nplikes.numpylike import NumpyMetadata
10
11 np = NumpyMetadata.instance()
12 cpu = NumpyBackend.instance()
13
14
15 def broadcast_arrays(
16 *arrays,
17 depth_limit=None,
18 broadcast_parameters_rule="one_to_one",
19 left_broadcast=True,
20 right_broadcast=True,
21 highlevel=True,
22 behavior=None,
23 ):
24 """
25 Args:
26 arrays: Array-like data (anything #ak.to_layout recognizes).
27 depth_limit (None or int, default is None): If None, attempt to fully
28 broadcast the `arrays` to all levels. If an int, limit the number
29 of dimensions that get broadcasted. The minimum value is `1`,
30 for no broadcasting.
31 broadcast_parameters_rule (str): Rule for broadcasting parameters, one of:
32 - `"intersect"`
33 - `"all_or_nothing"`
34 - `"one_to_one"`
35 - `"none"`
36 left_broadcast (bool): If True, follow rules for implicit
37 left-broadcasting, as described below.
38 right_broadcast (bool): If True, follow rules for implicit
39 right-broadcasting, as described below.
40 highlevel (bool, default is True): If True, return an #ak.Array;
41 otherwise, return a low-level #ak.contents.Content subclass.
42 behavior (None or dict): Custom #ak.behavior for the output array, if
43 high-level.
44
45 Like NumPy's
46 [broadcast_arrays](https://docs.scipy.org/doc/numpy/reference/generated/numpy.broadcast_arrays.html)
47 function, this function returns the input `arrays` with enough elements
48 duplicated that they can be combined element-by-element.
49
50 For NumPy arrays, this means that scalars are replaced with arrays with
51 the same scalar value repeated at every element of the array, and regular
52 dimensions are created to increase low-dimensional data into
53 high-dimensional data.
54
55 For example,
56
57 >>> ak.broadcast_arrays(5,
58 ... [1, 2, 3, 4, 5])
59 [<Array [5, 5, 5, 5, 5] type='5 * int64'>,
60 <Array [1, 2, 3, 4, 5] type='5 * int64'>]
61
62 and
63
64 >>> ak.broadcast_arrays(np.array([1, 2, 3]),
65 ... np.array([[0.1, 0.2, 0.3], [10, 20, 30]]))
66 [<Array [[ 1, 2, 3], [ 1, 2, 3]] type='2 * 3 * int64'>,
67 <Array [[0.1, 0.2, 0.3], [10, 20, 30]] type='2 * 3 * float64'>]
68
69 Note that in the second example, when the `3 * int64` array is expanded
70 to match the `2 * 3 * float64` array, it is the deepest dimension that
71 is aligned. If we try to match a `2 * int64` with the `2 * 3 * float64`,
72
73 >>> ak.broadcast_arrays(np.array([1, 2]),
74 ... np.array([[0.1, 0.2, 0.3], [10, 20, 30]]))
75 ValueError: while calling
76 ak.broadcast_arrays(
77 arrays = (array([1, 2]), array([[ 0.1, 0.2, 0.3],
78 [10. , 20....
79 depth_limit = None
80 broadcast_parameters_rule = 'one_to_one'
81 left_broadcast = True
82 right_broadcast = True
83 highlevel = True
84 behavior = None
85 )
86 Error details: cannot broadcast RegularArray of size 2 with RegularArray of size 3
87
88 NumPy has the same behavior: arrays with different numbers of dimensions
89 are aligned to the right before expansion. One can control this by
90 explicitly adding a new axis (reshape to add a dimension of length 1)
91 where the expansion is supposed to take place because a dimension of
92 length 1 can be expanded like a scalar.
93
94 >>> ak.broadcast_arrays(np.array([1, 2])[:, np.newaxis],
95 ... np.array([[0.1, 0.2, 0.3], [10, 20, 30]]))
96 [<Array [[ 1, 1, 1], [ 2, 2, 2]] type='2 * 3 * int64'>,
97 <Array [[0.1, 0.2, 0.3], [10, 20, 30]] type='2 * 3 * float64'>]
98
99 Again, NumPy does the same thing (`np.newaxis` is equal to None, so this
100 trick is often shown with None in the slice-tuple). Where the broadcasting
101 happens can be controlled, but numbers of dimensions that don't match are
102 implicitly aligned to the right (fitting innermost structure, not
103 outermost).
104
105 While that might be an arbitrary decision for rectilinear arrays, it is
106 much more natural for implicit broadcasting to align left for tree-like
107 structures. That is, the root of each data structure should agree and
108 leaves may be duplicated to match. For example,
109
110 >>> ak.broadcast_arrays([ 100, 200, 300],
111 ... [[1.1, 2.2, 3.3], [], [4.4, 5.5]])
112 [<Array [[100, 100, 100], [], [300, 300]] type='3 * var * int64'>,
113 <Array [[1.1, 2.2, 3.3], [], [4.4, 5.5]] type='3 * var * float64'>]
114
115 One typically wants single-item-per-element data to be duplicated to
116 match multiple-items-per-element data. Operations on the broadcasted
117 arrays like
118
119 one_dimensional + nested_lists
120
121 would then have the same effect as the procedural code
122
123 for x, outer in zip(one_dimensional, nested_lists):
124 output = []
125 for inner in outer:
126 output.append(x + inner)
127 yield output
128
129 where `x` has the same value for each `inner` in the inner loop.
130
131 Awkward Array's broadcasting manages to have it both ways by applying the
132 following rules:
133
134 * If all dimensions are regular (i.e. #ak.types.RegularType), like NumPy,
135 implicit broadcasting aligns to the right, like NumPy.
136 * If any dimension is variable (i.e. #ak.types.ListType), which can
137 never be true of NumPy, implicit broadcasting aligns to the left.
138 * Explicit broadcasting with a length-1 regular dimension always
139 broadcasts, like NumPy.
140
141 Thus, it is important to be aware of the distinction between a dimension
142 that is declared to be regular in the type specification and a dimension
143 that is allowed to be variable (even if it happens to have the same length
144 for all elements). This distinction is can be accessed through the
145 #ak.Array.type, but it is lost when converting an array into JSON or
146 Python objects.
147
148 If arrays have the same depth but different lengths of nested
149 lists, attempting to broadcast them together is a broadcasting error.
150
151 >>> one = ak.Array([[[1, 2, 3], [], [4, 5], [6]], [], [[7, 8]]])
152 >>> two = ak.Array([[[1.1, 2.2], [3.3], [4.4], [5.5]], [], [[6.6]]])
153 >>> ak.broadcast_arrays(one, two)
154 ValueError: while calling
155 ak.broadcast_arrays(
156 arrays = (<Array [[[1, 2, 3], [], [4, ...], [6]], ...] type='3 * var ...
157 depth_limit = None
158 broadcast_parameters_rule = 'one_to_one'
159 left_broadcast = True
160 right_broadcast = True
161 highlevel = True
162 behavior = None
163 )
164 Error details: cannot broadcast nested list
165
166 For this, one can set the `depth_limit` to prevent the operation from
167 attempting to broadcast what can't be broadcasted.
168
169 >>> this, that = ak.broadcast_arrays(one, two, depth_limit=1)
170 >>> this.show()
171 [[[1, 2, 3], [], [4, 5], [6]],
172 [],
173 [[7, 8]]]
174 >>> that.show()
175 [[[1.1, 2.2], [3.3], [4.4], [5.5]],
176 [],
177 [[6.6]]]
178 """
179 with ak._errors.OperationErrorContext(
180 "ak.broadcast_arrays",
181 {
182 "arrays": arrays,
183 "depth_limit": depth_limit,
184 "broadcast_parameters_rule": broadcast_parameters_rule,
185 "left_broadcast": left_broadcast,
186 "right_broadcast": right_broadcast,
187 "highlevel": highlevel,
188 "behavior": behavior,
189 },
190 ):
191 return _impl(
192 arrays,
193 depth_limit,
194 broadcast_parameters_rule,
195 left_broadcast,
196 right_broadcast,
197 highlevel,
198 behavior,
199 )
200
201
202 def _impl(
203 arrays,
204 depth_limit,
205 broadcast_parameters_rule,
206 left_broadcast,
207 right_broadcast,
208 highlevel,
209 behavior,
210 ):
211 # Need at least one array!
212 if len(arrays) == 0:
213 return []
214
215 backend = backend_of(*arrays, default=cpu)
216
217 inputs = []
218 for x in arrays:
219 y = ak.operations.to_layout(x, allow_record=True, allow_other=True)
220 if not isinstance(y, (ak.contents.Content, ak.Record)):
221 y = ak.contents.NumpyArray(backend.nplike.asarray([y]))
222 inputs.append(y.to_backend(backend))
223
224 def action(inputs, depth, **kwargs):
225 if depth == depth_limit or all(x.is_numpy for x in inputs):
226 return tuple(inputs)
227 else:
228 return None
229
230 behavior = behavior_of(*arrays, behavior=behavior)
231 out = ak._broadcasting.broadcast_and_apply(
232 inputs,
233 action,
234 behavior,
235 left_broadcast=left_broadcast,
236 right_broadcast=right_broadcast,
237 broadcast_parameters_rule=broadcast_parameters_rule,
238 numpy_to_regular=True,
239 )
240 assert isinstance(out, tuple)
241 return [wrap_layout(x, behavior, highlevel) for x in out]
242
243
244 @ak._connect.numpy.implements("broadcast_arrays")
245 def _nep_18_impl(*args, subok=unsupported):
246 return broadcast_arrays(*args)
247
[end of src/awkward/operations/ak_broadcast_arrays.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/awkward/operations/ak_broadcast_arrays.py b/src/awkward/operations/ak_broadcast_arrays.py
--- a/src/awkward/operations/ak_broadcast_arrays.py
+++ b/src/awkward/operations/ak_broadcast_arrays.py
@@ -222,7 +222,14 @@
inputs.append(y.to_backend(backend))
def action(inputs, depth, **kwargs):
- if depth == depth_limit or all(x.is_numpy for x in inputs):
+ # The depth limit is the depth at which we must return, i.e.
+ # the _first_ layout at that depth
+ if depth == depth_limit:
+ return tuple(inputs)
+ # Walk through non-leaf nodes
+ elif all(
+ x.purelist_depth == 1 and not (x.is_option or x.is_indexed) for x in inputs
+ ):
return tuple(inputs)
else:
return None
| {"golden_diff": "diff --git a/src/awkward/operations/ak_broadcast_arrays.py b/src/awkward/operations/ak_broadcast_arrays.py\n--- a/src/awkward/operations/ak_broadcast_arrays.py\n+++ b/src/awkward/operations/ak_broadcast_arrays.py\n@@ -222,7 +222,14 @@\n inputs.append(y.to_backend(backend))\n \n def action(inputs, depth, **kwargs):\n- if depth == depth_limit or all(x.is_numpy for x in inputs):\n+ # The depth limit is the depth at which we must return, i.e.\n+ # the _first_ layout at that depth\n+ if depth == depth_limit:\n+ return tuple(inputs)\n+ # Walk through non-leaf nodes\n+ elif all(\n+ x.purelist_depth == 1 and not (x.is_option or x.is_indexed) for x in inputs\n+ ):\n return tuple(inputs)\n else:\n return None\n", "issue": "String broadcasting is not producing sensible results\n### Version of Awkward Array\r\n\r\nmain\r\n\r\n### Description and code to reproduce\r\n\r\nAs Jim and I discussed today, string broadcasting is currently not producing reliable results for string-string, or even string-non string cases. In particular many broadcasts violate the appearance that strings are atoms:\r\n\r\n```python3\r\n>>> import awkward as ak\r\n>>> x = ak._v2.Array([[\"one\", \"two\"], [\"three\", \"four\"]])\r\n>>> ak._v2.broadcast_arrays(x[1:], x[:-1])\r\nValueError: while calling (from <ipython-input-42-6e4db831c1ff>, line 1)\r\n ak._v2.broadcast_arrays(\r\n arrays = (<Array [['one', 'two']] type='1 * var * string'>, <Array [[...\r\n kwargs = {}\r\n )\r\nError details: cannot broadcast nested list (in compiled code: https://github.com/scikit-hep/awkward-1.0/blob/1.10.0rc1/src/cpu-kernels/awkward_ListArray_broadcast_tooffsets.cpp#L27)\r\n```\r\n\r\nIn this case, broadcasting an Array of strings against an Array of integers produces arrays that have the same structure as though the outer `__array__ = \"string\"` were missing (i.e. broadcasting happens against the underlying characters array):\r\n```python3\r\n>>> import awkward as ak\r\n>>> x = ak._v2.Array([[\"one\", \"two\"], [\"three\", \"four\"]])\r\n>>> y = ak._v2.Array([[1,2],[3, 4]])\r\n>>> ak._v2.broadcast_arrays(x, y)\r\n[<Array [['one', 'two'], ['three', 'four']] type='2 * var * string'>,\r\n <Array [[[1, 1, 1], [2, 2, 2]], [[...], ...]] type='2 * var * var * int64'>]\r\n```\n", "before_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n__all__ = (\"broadcast_arrays\",)\nimport awkward as ak\nfrom awkward._backends.dispatch import backend_of\nfrom awkward._backends.numpy import NumpyBackend\nfrom awkward._behavior import behavior_of\nfrom awkward._connect.numpy import unsupported\nfrom awkward._layout import wrap_layout\nfrom awkward._nplikes.numpylike import NumpyMetadata\n\nnp = NumpyMetadata.instance()\ncpu = NumpyBackend.instance()\n\n\ndef broadcast_arrays(\n *arrays,\n depth_limit=None,\n broadcast_parameters_rule=\"one_to_one\",\n left_broadcast=True,\n right_broadcast=True,\n highlevel=True,\n behavior=None,\n):\n \"\"\"\n Args:\n arrays: Array-like data (anything #ak.to_layout recognizes).\n depth_limit (None or int, default is None): If None, attempt to fully\n broadcast the `arrays` to all levels. If an int, limit the number\n of dimensions that get broadcasted. 
The minimum value is `1`,\n for no broadcasting.\n broadcast_parameters_rule (str): Rule for broadcasting parameters, one of:\n - `\"intersect\"`\n - `\"all_or_nothing\"`\n - `\"one_to_one\"`\n - `\"none\"`\n left_broadcast (bool): If True, follow rules for implicit\n left-broadcasting, as described below.\n right_broadcast (bool): If True, follow rules for implicit\n right-broadcasting, as described below.\n highlevel (bool, default is True): If True, return an #ak.Array;\n otherwise, return a low-level #ak.contents.Content subclass.\n behavior (None or dict): Custom #ak.behavior for the output array, if\n high-level.\n\n Like NumPy's\n [broadcast_arrays](https://docs.scipy.org/doc/numpy/reference/generated/numpy.broadcast_arrays.html)\n function, this function returns the input `arrays` with enough elements\n duplicated that they can be combined element-by-element.\n\n For NumPy arrays, this means that scalars are replaced with arrays with\n the same scalar value repeated at every element of the array, and regular\n dimensions are created to increase low-dimensional data into\n high-dimensional data.\n\n For example,\n\n >>> ak.broadcast_arrays(5,\n ... [1, 2, 3, 4, 5])\n [<Array [5, 5, 5, 5, 5] type='5 * int64'>,\n <Array [1, 2, 3, 4, 5] type='5 * int64'>]\n\n and\n\n >>> ak.broadcast_arrays(np.array([1, 2, 3]),\n ... np.array([[0.1, 0.2, 0.3], [10, 20, 30]]))\n [<Array [[ 1, 2, 3], [ 1, 2, 3]] type='2 * 3 * int64'>,\n <Array [[0.1, 0.2, 0.3], [10, 20, 30]] type='2 * 3 * float64'>]\n\n Note that in the second example, when the `3 * int64` array is expanded\n to match the `2 * 3 * float64` array, it is the deepest dimension that\n is aligned. If we try to match a `2 * int64` with the `2 * 3 * float64`,\n\n >>> ak.broadcast_arrays(np.array([1, 2]),\n ... np.array([[0.1, 0.2, 0.3], [10, 20, 30]]))\n ValueError: while calling\n ak.broadcast_arrays(\n arrays = (array([1, 2]), array([[ 0.1, 0.2, 0.3],\n [10. , 20....\n depth_limit = None\n broadcast_parameters_rule = 'one_to_one'\n left_broadcast = True\n right_broadcast = True\n highlevel = True\n behavior = None\n )\n Error details: cannot broadcast RegularArray of size 2 with RegularArray of size 3\n\n NumPy has the same behavior: arrays with different numbers of dimensions\n are aligned to the right before expansion. One can control this by\n explicitly adding a new axis (reshape to add a dimension of length 1)\n where the expansion is supposed to take place because a dimension of\n length 1 can be expanded like a scalar.\n\n >>> ak.broadcast_arrays(np.array([1, 2])[:, np.newaxis],\n ... np.array([[0.1, 0.2, 0.3], [10, 20, 30]]))\n [<Array [[ 1, 1, 1], [ 2, 2, 2]] type='2 * 3 * int64'>,\n <Array [[0.1, 0.2, 0.3], [10, 20, 30]] type='2 * 3 * float64'>]\n\n Again, NumPy does the same thing (`np.newaxis` is equal to None, so this\n trick is often shown with None in the slice-tuple). Where the broadcasting\n happens can be controlled, but numbers of dimensions that don't match are\n implicitly aligned to the right (fitting innermost structure, not\n outermost).\n\n While that might be an arbitrary decision for rectilinear arrays, it is\n much more natural for implicit broadcasting to align left for tree-like\n structures. That is, the root of each data structure should agree and\n leaves may be duplicated to match. For example,\n\n >>> ak.broadcast_arrays([ 100, 200, 300],\n ... 
[[1.1, 2.2, 3.3], [], [4.4, 5.5]])\n [<Array [[100, 100, 100], [], [300, 300]] type='3 * var * int64'>,\n <Array [[1.1, 2.2, 3.3], [], [4.4, 5.5]] type='3 * var * float64'>]\n\n One typically wants single-item-per-element data to be duplicated to\n match multiple-items-per-element data. Operations on the broadcasted\n arrays like\n\n one_dimensional + nested_lists\n\n would then have the same effect as the procedural code\n\n for x, outer in zip(one_dimensional, nested_lists):\n output = []\n for inner in outer:\n output.append(x + inner)\n yield output\n\n where `x` has the same value for each `inner` in the inner loop.\n\n Awkward Array's broadcasting manages to have it both ways by applying the\n following rules:\n\n * If all dimensions are regular (i.e. #ak.types.RegularType), like NumPy,\n implicit broadcasting aligns to the right, like NumPy.\n * If any dimension is variable (i.e. #ak.types.ListType), which can\n never be true of NumPy, implicit broadcasting aligns to the left.\n * Explicit broadcasting with a length-1 regular dimension always\n broadcasts, like NumPy.\n\n Thus, it is important to be aware of the distinction between a dimension\n that is declared to be regular in the type specification and a dimension\n that is allowed to be variable (even if it happens to have the same length\n for all elements). This distinction is can be accessed through the\n #ak.Array.type, but it is lost when converting an array into JSON or\n Python objects.\n\n If arrays have the same depth but different lengths of nested\n lists, attempting to broadcast them together is a broadcasting error.\n\n >>> one = ak.Array([[[1, 2, 3], [], [4, 5], [6]], [], [[7, 8]]])\n >>> two = ak.Array([[[1.1, 2.2], [3.3], [4.4], [5.5]], [], [[6.6]]])\n >>> ak.broadcast_arrays(one, two)\n ValueError: while calling\n ak.broadcast_arrays(\n arrays = (<Array [[[1, 2, 3], [], [4, ...], [6]], ...] 
type='3 * var ...\n depth_limit = None\n broadcast_parameters_rule = 'one_to_one'\n left_broadcast = True\n right_broadcast = True\n highlevel = True\n behavior = None\n )\n Error details: cannot broadcast nested list\n\n For this, one can set the `depth_limit` to prevent the operation from\n attempting to broadcast what can't be broadcasted.\n\n >>> this, that = ak.broadcast_arrays(one, two, depth_limit=1)\n >>> this.show()\n [[[1, 2, 3], [], [4, 5], [6]],\n [],\n [[7, 8]]]\n >>> that.show()\n [[[1.1, 2.2], [3.3], [4.4], [5.5]],\n [],\n [[6.6]]]\n \"\"\"\n with ak._errors.OperationErrorContext(\n \"ak.broadcast_arrays\",\n {\n \"arrays\": arrays,\n \"depth_limit\": depth_limit,\n \"broadcast_parameters_rule\": broadcast_parameters_rule,\n \"left_broadcast\": left_broadcast,\n \"right_broadcast\": right_broadcast,\n \"highlevel\": highlevel,\n \"behavior\": behavior,\n },\n ):\n return _impl(\n arrays,\n depth_limit,\n broadcast_parameters_rule,\n left_broadcast,\n right_broadcast,\n highlevel,\n behavior,\n )\n\n\ndef _impl(\n arrays,\n depth_limit,\n broadcast_parameters_rule,\n left_broadcast,\n right_broadcast,\n highlevel,\n behavior,\n):\n # Need at least one array!\n if len(arrays) == 0:\n return []\n\n backend = backend_of(*arrays, default=cpu)\n\n inputs = []\n for x in arrays:\n y = ak.operations.to_layout(x, allow_record=True, allow_other=True)\n if not isinstance(y, (ak.contents.Content, ak.Record)):\n y = ak.contents.NumpyArray(backend.nplike.asarray([y]))\n inputs.append(y.to_backend(backend))\n\n def action(inputs, depth, **kwargs):\n if depth == depth_limit or all(x.is_numpy for x in inputs):\n return tuple(inputs)\n else:\n return None\n\n behavior = behavior_of(*arrays, behavior=behavior)\n out = ak._broadcasting.broadcast_and_apply(\n inputs,\n action,\n behavior,\n left_broadcast=left_broadcast,\n right_broadcast=right_broadcast,\n broadcast_parameters_rule=broadcast_parameters_rule,\n numpy_to_regular=True,\n )\n assert isinstance(out, tuple)\n return [wrap_layout(x, behavior, highlevel) for x in out]\n\n\n@ak._connect.numpy.implements(\"broadcast_arrays\")\ndef _nep_18_impl(*args, subok=unsupported):\n return broadcast_arrays(*args)\n", "path": "src/awkward/operations/ak_broadcast_arrays.py"}]} | 4,045 | 209 |
gh_patches_debug_3740 | rasdani/github-patches | git_diff | napari__napari-553 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Menu bar focus on Mac
## 🐛 Bug
We've now added a menubar, but you need to toggle focus in and out of napari before it becomes active on the mac. This bug has been encountered in other Qt apps, but we still need to look into fixing.
See here - https://github.com/robotology/yarp/issues/457
</issue>
<code>
[start of napari/_qt/qt_main_window.py]
1 """
2 Custom Qt widgets that serve as native objects that the public-facing elements
3 wrap.
4 """
5 # set vispy to use same backend as qtpy
6 from qtpy import API_NAME
7 from vispy import app
8
9 app.use_app(API_NAME)
10 del app
11
12 from qtpy.QtWidgets import (
13 QMainWindow,
14 QWidget,
15 QHBoxLayout,
16 QLabel,
17 QAction,
18 QShortcut,
19 )
20 from qtpy.QtGui import QKeySequence
21
22 from ..util.theme import template
23
24
25 class Window:
26 """Application window that contains the menu bar and viewer.
27
28 Parameters
29 ----------
30 qt_viewer : QtViewer
31 Contained viewer widget.
32
33 Attributes
34 ----------
35 qt_viewer : QtViewer
36 Contained viewer widget.
37 """
38
39 def __init__(self, qt_viewer, *, show=True):
40
41 self.qt_viewer = qt_viewer
42
43 self._qt_window = QMainWindow()
44 self._qt_window.setUnifiedTitleAndToolBarOnMac(True)
45 self._qt_center = QWidget()
46 self._qt_window.setCentralWidget(self._qt_center)
47 self._qt_window.setWindowTitle(self.qt_viewer.viewer.title)
48 self._qt_center.setLayout(QHBoxLayout())
49 self._status_bar = self._qt_window.statusBar()
50 self._qt_window.closeEvent = self.closeEvent
51 self.close = self._qt_window.close
52
53 self._add_menubar()
54
55 self._add_file_menu()
56 self._add_view_menu()
57 self._add_window_menu()
58
59 self._status_bar.showMessage('Ready')
60 self._help = QLabel('')
61 self._status_bar.addPermanentWidget(self._help)
62
63 self._qt_center.layout().addWidget(self.qt_viewer)
64 self._qt_center.layout().setContentsMargins(4, 0, 4, 0)
65
66 self._update_palette(qt_viewer.viewer.palette)
67
68 self.qt_viewer.viewer.events.status.connect(self._status_changed)
69 self.qt_viewer.viewer.events.help.connect(self._help_changed)
70 self.qt_viewer.viewer.events.title.connect(self._title_changed)
71 self.qt_viewer.viewer.events.palette.connect(
72 lambda event: self._update_palette(event.palette)
73 )
74
75 if show:
76 self.show()
77
78 def _add_menubar(self):
79 self.main_menu = self._qt_window.menuBar()
80 # Menubar shortcuts are only active when the menubar is visible.
81 # Therefore, we set a global shortcut not associated with the menubar
82 # to toggle visibility, *but*, in order to not shadow the menubar
83 # shortcut, we disable it, and only enable it when the menubar is
84 # hidden. See this stackoverflow link for details:
85 # https://stackoverflow.com/questions/50537642/how-to-keep-the-shortcuts-of-a-hidden-widget-in-pyqt5
86 self._main_menu_shortcut = QShortcut(
87 QKeySequence('Ctrl+M'), self._qt_window
88 )
89 self._main_menu_shortcut.activated.connect(
90 self._toggle_menubar_visible
91 )
92 self._main_menu_shortcut.setEnabled(False)
93
94 def _toggle_menubar_visible(self):
95 """Toggle visibility of app menubar.
96
97 This function also disables or enables a global keyboard shortcut to
98 show the menubar, since menubar shortcuts are only available while the
99 menubar is visible.
100 """
101 if self.main_menu.isVisible():
102 self.main_menu.setVisible(False)
103 self._main_menu_shortcut.setEnabled(True)
104 else:
105 self.main_menu.setVisible(True)
106 self._main_menu_shortcut.setEnabled(False)
107
108 def _add_file_menu(self):
109 open_images = QAction('Open', self._qt_window)
110 open_images.setShortcut('Ctrl+O')
111 open_images.setStatusTip('Open image file(s)')
112 open_images.triggered.connect(self.qt_viewer._open_images)
113 self.file_menu = self.main_menu.addMenu('&File')
114 self.file_menu.addAction(open_images)
115
116 def _add_view_menu(self):
117 toggle_visible = QAction('Toggle menubar visibility', self._qt_window)
118 toggle_visible.setShortcut('Ctrl+M')
119 toggle_visible.setStatusTip('Hide Menubar')
120 toggle_visible.triggered.connect(self._toggle_menubar_visible)
121 self.view_menu = self.main_menu.addMenu('&View')
122 self.view_menu.addAction(toggle_visible)
123
124 def _add_window_menu(self):
125 exit_action = QAction("Close window", self._qt_window)
126 exit_action.setShortcut("Ctrl+W")
127 exit_action.setStatusTip('Close napari window')
128 exit_action.triggered.connect(self._qt_window.close)
129 self.window_menu = self.main_menu.addMenu('&Window')
130 self.window_menu.addAction(exit_action)
131
132 def resize(self, width, height):
133 """Resize the window.
134
135 Parameters
136 ----------
137 width : int
138 Width in logical pixels.
139 height : int
140 Height in logical pixels.
141 """
142 self._qt_window.resize(width, height)
143
144 def show(self):
145 """Resize, show, and bring forward the window.
146 """
147 self._qt_window.resize(self._qt_window.layout().sizeHint())
148 self._qt_window.show()
149 self._qt_window.raise_()
150
151 def _update_palette(self, palette):
152 # set window styles which don't use the primary stylesheet
153 # FIXME: this is a problem with the stylesheet not using properties
154 self._status_bar.setStyleSheet(
155 template(
156 'QStatusBar { background: {{ background }}; '
157 'color: {{ text }}; }',
158 **palette,
159 )
160 )
161 self._qt_center.setStyleSheet(
162 template('QWidget { background: {{ background }}; }', **palette)
163 )
164
165 def _status_changed(self, event):
166 """Update status bar.
167 """
168 self._status_bar.showMessage(event.text)
169
170 def _title_changed(self, event):
171 """Update window title.
172 """
173 self._qt_window.setWindowTitle(event.text)
174
175 def _help_changed(self, event):
176 """Update help message on status bar.
177 """
178 self._help.setText(event.text)
179
180 def closeEvent(self, event):
181 # Forward close event to the console to trigger proper shutdown
182 self.qt_viewer.console.shutdown()
183 event.accept()
184
[end of napari/_qt/qt_main_window.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/napari/_qt/qt_main_window.py b/napari/_qt/qt_main_window.py
--- a/napari/_qt/qt_main_window.py
+++ b/napari/_qt/qt_main_window.py
@@ -146,7 +146,6 @@
"""
self._qt_window.resize(self._qt_window.layout().sizeHint())
self._qt_window.show()
- self._qt_window.raise_()
def _update_palette(self, palette):
# set window styles which don't use the primary stylesheet
| {"golden_diff": "diff --git a/napari/_qt/qt_main_window.py b/napari/_qt/qt_main_window.py\n--- a/napari/_qt/qt_main_window.py\n+++ b/napari/_qt/qt_main_window.py\n@@ -146,7 +146,6 @@\n \"\"\"\n self._qt_window.resize(self._qt_window.layout().sizeHint())\n self._qt_window.show()\n- self._qt_window.raise_()\n \n def _update_palette(self, palette):\n # set window styles which don't use the primary stylesheet\n", "issue": "Menu bar focus on Mac\n## \ud83d\udc1b Bug\r\nWe've now added a menubar, but you need to toggle focus in and out of napari before it becomes active on the mac. This bug has been encountered in other Qt apps, but we still need to look into fixing.\r\n\r\nSee here - https://github.com/robotology/yarp/issues/457\n", "before_files": [{"content": "\"\"\"\nCustom Qt widgets that serve as native objects that the public-facing elements\nwrap.\n\"\"\"\n# set vispy to use same backend as qtpy\nfrom qtpy import API_NAME\nfrom vispy import app\n\napp.use_app(API_NAME)\ndel app\n\nfrom qtpy.QtWidgets import (\n QMainWindow,\n QWidget,\n QHBoxLayout,\n QLabel,\n QAction,\n QShortcut,\n)\nfrom qtpy.QtGui import QKeySequence\n\nfrom ..util.theme import template\n\n\nclass Window:\n \"\"\"Application window that contains the menu bar and viewer.\n\n Parameters\n ----------\n qt_viewer : QtViewer\n Contained viewer widget.\n\n Attributes\n ----------\n qt_viewer : QtViewer\n Contained viewer widget.\n \"\"\"\n\n def __init__(self, qt_viewer, *, show=True):\n\n self.qt_viewer = qt_viewer\n\n self._qt_window = QMainWindow()\n self._qt_window.setUnifiedTitleAndToolBarOnMac(True)\n self._qt_center = QWidget()\n self._qt_window.setCentralWidget(self._qt_center)\n self._qt_window.setWindowTitle(self.qt_viewer.viewer.title)\n self._qt_center.setLayout(QHBoxLayout())\n self._status_bar = self._qt_window.statusBar()\n self._qt_window.closeEvent = self.closeEvent\n self.close = self._qt_window.close\n\n self._add_menubar()\n\n self._add_file_menu()\n self._add_view_menu()\n self._add_window_menu()\n\n self._status_bar.showMessage('Ready')\n self._help = QLabel('')\n self._status_bar.addPermanentWidget(self._help)\n\n self._qt_center.layout().addWidget(self.qt_viewer)\n self._qt_center.layout().setContentsMargins(4, 0, 4, 0)\n\n self._update_palette(qt_viewer.viewer.palette)\n\n self.qt_viewer.viewer.events.status.connect(self._status_changed)\n self.qt_viewer.viewer.events.help.connect(self._help_changed)\n self.qt_viewer.viewer.events.title.connect(self._title_changed)\n self.qt_viewer.viewer.events.palette.connect(\n lambda event: self._update_palette(event.palette)\n )\n\n if show:\n self.show()\n\n def _add_menubar(self):\n self.main_menu = self._qt_window.menuBar()\n # Menubar shortcuts are only active when the menubar is visible.\n # Therefore, we set a global shortcut not associated with the menubar\n # to toggle visibility, *but*, in order to not shadow the menubar\n # shortcut, we disable it, and only enable it when the menubar is\n # hidden. 
See this stackoverflow link for details:\n # https://stackoverflow.com/questions/50537642/how-to-keep-the-shortcuts-of-a-hidden-widget-in-pyqt5\n self._main_menu_shortcut = QShortcut(\n QKeySequence('Ctrl+M'), self._qt_window\n )\n self._main_menu_shortcut.activated.connect(\n self._toggle_menubar_visible\n )\n self._main_menu_shortcut.setEnabled(False)\n\n def _toggle_menubar_visible(self):\n \"\"\"Toggle visibility of app menubar.\n\n This function also disables or enables a global keyboard shortcut to\n show the menubar, since menubar shortcuts are only available while the\n menubar is visible.\n \"\"\"\n if self.main_menu.isVisible():\n self.main_menu.setVisible(False)\n self._main_menu_shortcut.setEnabled(True)\n else:\n self.main_menu.setVisible(True)\n self._main_menu_shortcut.setEnabled(False)\n\n def _add_file_menu(self):\n open_images = QAction('Open', self._qt_window)\n open_images.setShortcut('Ctrl+O')\n open_images.setStatusTip('Open image file(s)')\n open_images.triggered.connect(self.qt_viewer._open_images)\n self.file_menu = self.main_menu.addMenu('&File')\n self.file_menu.addAction(open_images)\n\n def _add_view_menu(self):\n toggle_visible = QAction('Toggle menubar visibility', self._qt_window)\n toggle_visible.setShortcut('Ctrl+M')\n toggle_visible.setStatusTip('Hide Menubar')\n toggle_visible.triggered.connect(self._toggle_menubar_visible)\n self.view_menu = self.main_menu.addMenu('&View')\n self.view_menu.addAction(toggle_visible)\n\n def _add_window_menu(self):\n exit_action = QAction(\"Close window\", self._qt_window)\n exit_action.setShortcut(\"Ctrl+W\")\n exit_action.setStatusTip('Close napari window')\n exit_action.triggered.connect(self._qt_window.close)\n self.window_menu = self.main_menu.addMenu('&Window')\n self.window_menu.addAction(exit_action)\n\n def resize(self, width, height):\n \"\"\"Resize the window.\n\n Parameters\n ----------\n width : int\n Width in logical pixels.\n height : int\n Height in logical pixels.\n \"\"\"\n self._qt_window.resize(width, height)\n\n def show(self):\n \"\"\"Resize, show, and bring forward the window.\n \"\"\"\n self._qt_window.resize(self._qt_window.layout().sizeHint())\n self._qt_window.show()\n self._qt_window.raise_()\n\n def _update_palette(self, palette):\n # set window styles which don't use the primary stylesheet\n # FIXME: this is a problem with the stylesheet not using properties\n self._status_bar.setStyleSheet(\n template(\n 'QStatusBar { background: {{ background }}; '\n 'color: {{ text }}; }',\n **palette,\n )\n )\n self._qt_center.setStyleSheet(\n template('QWidget { background: {{ background }}; }', **palette)\n )\n\n def _status_changed(self, event):\n \"\"\"Update status bar.\n \"\"\"\n self._status_bar.showMessage(event.text)\n\n def _title_changed(self, event):\n \"\"\"Update window title.\n \"\"\"\n self._qt_window.setWindowTitle(event.text)\n\n def _help_changed(self, event):\n \"\"\"Update help message on status bar.\n \"\"\"\n self._help.setText(event.text)\n\n def closeEvent(self, event):\n # Forward close event to the console to trigger proper shutdown\n self.qt_viewer.console.shutdown()\n event.accept()\n", "path": "napari/_qt/qt_main_window.py"}]} | 2,377 | 117 |