| problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens_prompt | num_tokens_diff |
|---|---|---|---|---|---|---|---|---|
| stringlengths 18-22 | stringclasses (1 value) | stringclasses (1 value) | stringlengths 13-58 | stringlengths 1.71k-18.9k | stringlengths 145-5.13k | stringlengths 465-23.6k | int64 556-4.1k | int64 47-1.02k |
gh_patches_debug_8815 | rasdani/github-patches | git_diff | CTFd__CTFd-2458 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upload to S3 Failing
- CTFd Version/Commit: 3.6.1
- Operating System: Linux (Docker container)
- Web Browser and Version: Chrome
**What happened?**
Upgrading CTFd resulted in S3 file uploads beginning to return 400 (bad request) status codes. I see one of the fixes for 3.6.1 was for S3, so perhaps a new bug was introduced.
Here are some additional facts which may be helpful:
 - The files are successfully making their way into S3, despite the error
- The timezone I have configured for my server is CST
I can also confirm that my deployment had working file upload before upgrade to version 3.6.1 (file upload was working for 3.6.0).
**What did you expect to happen?**
File upload to continue working.
**How to reproduce your issue**
Deploy CTFd free version using version 3.6.1 with S3 file upload configured.
**Any associated stack traces or error logs**
The browser request returns error (400 status code):
```
{
"success": false,
"errors": {
"location": [
"I/O operation on closed file."
]
}
}
```
The backend error is:
```
[ERROR] Error handling request
Traceback (most recent call last):
File "/opt/venv/lib/python3.9/site-packages/gunicorn/workers/base_async.py", line 113, in handle_request
resp.write_file(respiter)
File "/opt/venv/lib/python3.9/site-packages/gunicorn/http/wsgi.py", line 385, in write_file
if not self.sendfile(respiter):
File "/opt/venv/lib/python3.9/site-packages/gunicorn/http/wsgi.py", line 375, in sendfile
self.sock.sendfile(respiter.filelike, count=nbytes)
File "/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py", line 486, in sendfile
return self._sendfile_use_send(file, offset, count)
File "/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py", line 416, in _sendfile_use_send
self._check_sendfile_params(file, offset, count)
File "/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py", line 461, in _check_sendfile_params
raise ValueError(
ValueError: count must be a positive integer (got 0)
```
</issue>
<code>
[start of CTFd/utils/uploads/__init__.py]
1 import hashlib
2 import shutil
3 from pathlib import Path
4
5 from CTFd.models import ChallengeFiles, Files, PageFiles, db
6 from CTFd.utils import get_app_config
7 from CTFd.utils.uploads.uploaders import FilesystemUploader, S3Uploader
8
9 UPLOADERS = {"filesystem": FilesystemUploader, "s3": S3Uploader}
10
11
12 def get_uploader():
13 return UPLOADERS.get(get_app_config("UPLOAD_PROVIDER") or "filesystem")()
14
15
16 def upload_file(*args, **kwargs):
17 file_obj = kwargs.get("file")
18 challenge_id = kwargs.get("challenge_id") or kwargs.get("challenge")
19 page_id = kwargs.get("page_id") or kwargs.get("page")
20 file_type = kwargs.get("type", "standard")
21 location = kwargs.get("location")
22
23 # Validate location and default filename to uploaded file's name
24 parent = None
25 filename = file_obj.filename
26 if location:
27 path = Path(location)
28 if len(path.parts) != 2:
29 raise ValueError(
30 "Location must contain two parts, a directory and a filename"
31 )
32 # Allow location to override the directory and filename
33 parent = path.parts[0]
34 filename = path.parts[1]
35 location = parent + "/" + filename
36
37 model_args = {"type": file_type, "location": location}
38
39 model = Files
40 if file_type == "challenge":
41 model = ChallengeFiles
42 model_args["challenge_id"] = challenge_id
43 if file_type == "page":
44 model = PageFiles
45 model_args["page_id"] = page_id
46
47 uploader = get_uploader()
48 location = uploader.upload(file_obj=file_obj, filename=filename, path=parent)
49
50 sha1sum = hash_file(fp=file_obj)
51
52 model_args["location"] = location
53 model_args["sha1sum"] = sha1sum
54
55 existing_file = Files.query.filter_by(location=location).first()
56 if existing_file:
57 for k, v in model_args.items():
58 setattr(existing_file, k, v)
59 db.session.commit()
60 file_row = existing_file
61 else:
62 file_row = model(**model_args)
63 db.session.add(file_row)
64 db.session.commit()
65 return file_row
66
67
68 def hash_file(fp, algo="sha1"):
69 fp.seek(0)
70 if algo == "sha1":
71 h = hashlib.sha1() # nosec
72 # https://stackoverflow.com/a/64730457
73 while chunk := fp.read(1024):
74 h.update(chunk)
75 fp.seek(0)
76 return h.hexdigest()
77 else:
78 raise NotImplementedError
79
80
81 def delete_file(file_id):
82 f = Files.query.filter_by(id=file_id).first_or_404()
83
84 uploader = get_uploader()
85 uploader.delete(filename=f.location)
86
87 db.session.delete(f)
88 db.session.commit()
89 return True
90
91
92 def rmdir(directory):
93 shutil.rmtree(directory, ignore_errors=True)
94
[end of CTFd/utils/uploads/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/CTFd/utils/uploads/__init__.py b/CTFd/utils/uploads/__init__.py
--- a/CTFd/utils/uploads/__init__.py
+++ b/CTFd/utils/uploads/__init__.py
@@ -44,11 +44,12 @@
model = PageFiles
model_args["page_id"] = page_id
+ # Hash is calculated before upload since S3 file upload closes file object
+ sha1sum = hash_file(fp=file_obj)
+
uploader = get_uploader()
location = uploader.upload(file_obj=file_obj, filename=filename, path=parent)
- sha1sum = hash_file(fp=file_obj)
-
model_args["location"] = location
model_args["sha1sum"] = sha1sum
| {"golden_diff": "diff --git a/CTFd/utils/uploads/__init__.py b/CTFd/utils/uploads/__init__.py\n--- a/CTFd/utils/uploads/__init__.py\n+++ b/CTFd/utils/uploads/__init__.py\n@@ -44,11 +44,12 @@\n model = PageFiles\n model_args[\"page_id\"] = page_id\n \n+ # Hash is calculated before upload since S3 file upload closes file object\n+ sha1sum = hash_file(fp=file_obj)\n+\n uploader = get_uploader()\n location = uploader.upload(file_obj=file_obj, filename=filename, path=parent)\n \n- sha1sum = hash_file(fp=file_obj)\n-\n model_args[\"location\"] = location\n model_args[\"sha1sum\"] = sha1sum\n", "issue": "Upload to S3 Failing\n- CTFd Version/Commit: 3.6.1\r\n- Operating System: Linux (Docker container)\r\n- Web Browser and Version: Chrome\r\n\r\n**What happened?**\r\n\r\nUpgrading CTFd resulting in S3 file uploads beginning to return 400 (bad request) status codes. I see one of the fixes for 3.6.1 was for S3, so perhaps a new bug was introduced.\r\n\r\nHere are some additional facts which may be helpful:\r\n\r\n - The files are successfully making there way into S3, despite the error\r\n - The timezone I have configured for my server is CST\r\n\r\nI can also confirm that my deployment had working file upload before upgrade to version 3.6.1 (file upload was working for 3.6.0).\r\n\r\n**What did you expect to happen?**\r\n\r\nFile upload to continue working.\r\n\r\n**How to reproduce your issue**\r\n\r\nDeploy CTFd free version using version 3.6.1 with S3 file upload configured.\r\n\r\n**Any associated stack traces or error logs**\r\n\r\nThe browser request returns error (400 status code):\r\n\r\n```\r\n{\r\n \"success\": false,\r\n \"errors\": {\r\n \"location\": [\r\n \"I/O operation on closed file.\"\r\n ]\r\n }\r\n}\r\n```\r\n\r\nThe backend error is:\r\n\r\n```\r\n[ERROR] Error handling request\r\nTraceback (most recent call last):\r\nFile \"/opt/venv/lib/python3.9/site-packages/gunicorn/workers/base_async.py\", line 113, in handle_request\r\nresp.write_file(respiter)\r\nFile \"/opt/venv/lib/python3.9/site-packages/gunicorn/http/wsgi.py\", line 385, in write_file\r\nif not self.sendfile(respiter):\r\nFile \"/opt/venv/lib/python3.9/site-packages/gunicorn/http/wsgi.py\", line 375, in sendfile\r\nself.sock.sendfile(respiter.filelike, count=nbytes)\r\nFile \"/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py\", line 486, in sendfile\r\nreturn self._sendfile_use_send(file, offset, count)\r\nFile \"/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py\", line 416, in _sendfile_use_send\r\nself._check_sendfile_params(file, offset, count)\r\nFile \"/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py\", line 461, in _check_sendfile_params\r\nraise ValueError(\r\nValueError: count must be a positive integer (got 0)\r\n```\n", "before_files": [{"content": "import hashlib\nimport shutil\nfrom pathlib import Path\n\nfrom CTFd.models import ChallengeFiles, Files, PageFiles, db\nfrom CTFd.utils import get_app_config\nfrom CTFd.utils.uploads.uploaders import FilesystemUploader, S3Uploader\n\nUPLOADERS = {\"filesystem\": FilesystemUploader, \"s3\": S3Uploader}\n\n\ndef get_uploader():\n return UPLOADERS.get(get_app_config(\"UPLOAD_PROVIDER\") or \"filesystem\")()\n\n\ndef upload_file(*args, **kwargs):\n file_obj = kwargs.get(\"file\")\n challenge_id = kwargs.get(\"challenge_id\") or kwargs.get(\"challenge\")\n page_id = kwargs.get(\"page_id\") or kwargs.get(\"page\")\n file_type = kwargs.get(\"type\", \"standard\")\n location = kwargs.get(\"location\")\n\n # Validate location and default 
filename to uploaded file's name\n parent = None\n filename = file_obj.filename\n if location:\n path = Path(location)\n if len(path.parts) != 2:\n raise ValueError(\n \"Location must contain two parts, a directory and a filename\"\n )\n # Allow location to override the directory and filename\n parent = path.parts[0]\n filename = path.parts[1]\n location = parent + \"/\" + filename\n\n model_args = {\"type\": file_type, \"location\": location}\n\n model = Files\n if file_type == \"challenge\":\n model = ChallengeFiles\n model_args[\"challenge_id\"] = challenge_id\n if file_type == \"page\":\n model = PageFiles\n model_args[\"page_id\"] = page_id\n\n uploader = get_uploader()\n location = uploader.upload(file_obj=file_obj, filename=filename, path=parent)\n\n sha1sum = hash_file(fp=file_obj)\n\n model_args[\"location\"] = location\n model_args[\"sha1sum\"] = sha1sum\n\n existing_file = Files.query.filter_by(location=location).first()\n if existing_file:\n for k, v in model_args.items():\n setattr(existing_file, k, v)\n db.session.commit()\n file_row = existing_file\n else:\n file_row = model(**model_args)\n db.session.add(file_row)\n db.session.commit()\n return file_row\n\n\ndef hash_file(fp, algo=\"sha1\"):\n fp.seek(0)\n if algo == \"sha1\":\n h = hashlib.sha1() # nosec\n # https://stackoverflow.com/a/64730457\n while chunk := fp.read(1024):\n h.update(chunk)\n fp.seek(0)\n return h.hexdigest()\n else:\n raise NotImplementedError\n\n\ndef delete_file(file_id):\n f = Files.query.filter_by(id=file_id).first_or_404()\n\n uploader = get_uploader()\n uploader.delete(filename=f.location)\n\n db.session.delete(f)\n db.session.commit()\n return True\n\n\ndef rmdir(directory):\n shutil.rmtree(directory, ignore_errors=True)\n", "path": "CTFd/utils/uploads/__init__.py"}]} | 1,926 | 170 |
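The fix above works because `hash_file()` has to read the upload stream while it is still open, and (per the comment added in the diff) the S3 upload closes the file object. A minimal, self-contained sketch of that ordering is below; `upload_then_close()` is a hypothetical stand-in for `S3Uploader.upload()`, while `hash_file()` mirrors the helper in `CTFd/utils/uploads/__init__.py`.

```
import hashlib
import io


def hash_file(fp, chunk_size=1024):
    """Hash a file-like object from the start and rewind it for the caller."""
    fp.seek(0)
    digest = hashlib.sha1()  # same algorithm as the CTFd helper above
    while chunk := fp.read(chunk_size):
        digest.update(chunk)
    fp.seek(0)
    return digest.hexdigest()


def upload_then_close(fp):
    """Hypothetical stand-in for an uploader that closes the stream when done."""
    data = fp.read()
    fp.close()
    return len(data)


file_obj = io.BytesIO(b"challenge attachment")
sha1sum = hash_file(fp=file_obj)       # must run while file_obj is still open
size = upload_then_close(file_obj)     # closing afterwards is now harmless
print(sha1sum, size)
```

Reversing the two calls reproduces the reported `I/O operation on closed file.` error.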
gh_patches_debug_29333 | rasdani/github-patches | git_diff | pex-tool__pex-322 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove pkg_resources.build_zipmanifest monkeypatching
This may involve increasing the minimum setuptools version. Another alternative is vendoring setuptools.
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = '1.1.15'
5
6 SETUPTOOLS_REQUIREMENT = 'setuptools>=2.2,<20.11'
7 WHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.30.0'
8
[end of pex/version.py]
[start of pex/pex_bootstrapper.py]
1 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 import contextlib
5 import os
6 import sys
7 import zipfile
8
9 __all__ = ('bootstrap_pex',)
10
11
12 def pex_info_name(entry_point):
13 """Return the PEX-INFO for an entry_point"""
14 return os.path.join(entry_point, 'PEX-INFO')
15
16
17 def is_compressed(entry_point):
18 return os.path.exists(entry_point) and not os.path.exists(pex_info_name(entry_point))
19
20
21 def read_pexinfo_from_directory(entry_point):
22 with open(pex_info_name(entry_point), 'rb') as fp:
23 return fp.read()
24
25
26 def read_pexinfo_from_zip(entry_point):
27 with contextlib.closing(zipfile.ZipFile(entry_point)) as zf:
28 return zf.read('PEX-INFO')
29
30
31 def read_pex_info_content(entry_point):
32 """Return the raw content of a PEX-INFO."""
33 if is_compressed(entry_point):
34 return read_pexinfo_from_zip(entry_point)
35 else:
36 return read_pexinfo_from_directory(entry_point)
37
38
39 def get_pex_info(entry_point):
40 """Return the PexInfo object for an entry point."""
41 from . import pex_info
42
43 pex_info_content = read_pex_info_content(entry_point)
44 if pex_info_content:
45 return pex_info.PexInfo.from_json(pex_info_content)
46 raise ValueError('Invalid entry_point: %s' % entry_point)
47
48
49 # TODO(wickman) Remove once resolved (#91):
50 # https://bitbucket.org/pypa/setuptools/issue/154/build_zipmanifest-results-should-be
51 def monkeypatch_build_zipmanifest():
52 import pkg_resources
53 if not hasattr(pkg_resources, 'build_zipmanifest'):
54 return
55 old_build_zipmanifest = pkg_resources.build_zipmanifest
56 def memoized_build_zipmanifest(archive, memo={}):
57 if archive not in memo:
58 memo[archive] = old_build_zipmanifest(archive)
59 return memo[archive]
60 pkg_resources.build_zipmanifest = memoized_build_zipmanifest
61
62
63 def find_in_path(target_interpreter):
64 if os.path.exists(target_interpreter):
65 return target_interpreter
66
67 for directory in os.getenv('PATH', '').split(os.pathsep):
68 try_path = os.path.join(directory, target_interpreter)
69 if os.path.exists(try_path):
70 return try_path
71
72
73 def maybe_reexec_pex():
74 from .variables import ENV
75 if not ENV.PEX_PYTHON:
76 return
77
78 from .common import die
79 from .tracer import TRACER
80
81 target_python = ENV.PEX_PYTHON
82 target = find_in_path(target_python)
83 if not target:
84 die('Failed to find interpreter specified by PEX_PYTHON: %s' % target)
85 if os.path.exists(target) and os.path.realpath(target) != os.path.realpath(sys.executable):
86 TRACER.log('Detected PEX_PYTHON, re-exec to %s' % target)
87 ENV.delete('PEX_PYTHON')
88 os.execve(target, [target_python] + sys.argv, ENV.copy())
89
90
91 def bootstrap_pex(entry_point):
92 from .finders import register_finders
93 monkeypatch_build_zipmanifest()
94 register_finders()
95 maybe_reexec_pex()
96
97 from . import pex
98 pex.PEX(entry_point).execute()
99
100
101 def bootstrap_pex_env(entry_point):
102 """Bootstrap the current runtime environment using a given pex."""
103 from .environment import PEXEnvironment
104 from .finders import register_finders
105 from .pex_info import PexInfo
106
107 monkeypatch_build_zipmanifest()
108 register_finders()
109
110 PEXEnvironment(entry_point, PexInfo.from_pex(entry_point)).activate()
111
[end of pex/pex_bootstrapper.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/pex_bootstrapper.py b/pex/pex_bootstrapper.py
--- a/pex/pex_bootstrapper.py
+++ b/pex/pex_bootstrapper.py
@@ -46,20 +46,6 @@
raise ValueError('Invalid entry_point: %s' % entry_point)
-# TODO(wickman) Remove once resolved (#91):
-# https://bitbucket.org/pypa/setuptools/issue/154/build_zipmanifest-results-should-be
-def monkeypatch_build_zipmanifest():
- import pkg_resources
- if not hasattr(pkg_resources, 'build_zipmanifest'):
- return
- old_build_zipmanifest = pkg_resources.build_zipmanifest
- def memoized_build_zipmanifest(archive, memo={}):
- if archive not in memo:
- memo[archive] = old_build_zipmanifest(archive)
- return memo[archive]
- pkg_resources.build_zipmanifest = memoized_build_zipmanifest
-
-
def find_in_path(target_interpreter):
if os.path.exists(target_interpreter):
return target_interpreter
@@ -90,7 +76,6 @@
def bootstrap_pex(entry_point):
from .finders import register_finders
- monkeypatch_build_zipmanifest()
register_finders()
maybe_reexec_pex()
@@ -104,7 +89,6 @@
from .finders import register_finders
from .pex_info import PexInfo
- monkeypatch_build_zipmanifest()
register_finders()
PEXEnvironment(entry_point, PexInfo.from_pex(entry_point)).activate()
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -3,5 +3,5 @@
__version__ = '1.1.15'
-SETUPTOOLS_REQUIREMENT = 'setuptools>=2.2,<20.11'
+SETUPTOOLS_REQUIREMENT = 'setuptools>=5.7,<20.11'
WHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.30.0'
| {"golden_diff": "diff --git a/pex/pex_bootstrapper.py b/pex/pex_bootstrapper.py\n--- a/pex/pex_bootstrapper.py\n+++ b/pex/pex_bootstrapper.py\n@@ -46,20 +46,6 @@\n raise ValueError('Invalid entry_point: %s' % entry_point)\n \n \n-# TODO(wickman) Remove once resolved (#91):\n-# https://bitbucket.org/pypa/setuptools/issue/154/build_zipmanifest-results-should-be\n-def monkeypatch_build_zipmanifest():\n- import pkg_resources\n- if not hasattr(pkg_resources, 'build_zipmanifest'):\n- return\n- old_build_zipmanifest = pkg_resources.build_zipmanifest\n- def memoized_build_zipmanifest(archive, memo={}):\n- if archive not in memo:\n- memo[archive] = old_build_zipmanifest(archive)\n- return memo[archive]\n- pkg_resources.build_zipmanifest = memoized_build_zipmanifest\n-\n-\n def find_in_path(target_interpreter):\n if os.path.exists(target_interpreter):\n return target_interpreter\n@@ -90,7 +76,6 @@\n \n def bootstrap_pex(entry_point):\n from .finders import register_finders\n- monkeypatch_build_zipmanifest()\n register_finders()\n maybe_reexec_pex()\n \n@@ -104,7 +89,6 @@\n from .finders import register_finders\n from .pex_info import PexInfo\n \n- monkeypatch_build_zipmanifest()\n register_finders()\n \n PEXEnvironment(entry_point, PexInfo.from_pex(entry_point)).activate()\ndiff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -3,5 +3,5 @@\n \n __version__ = '1.1.15'\n \n-SETUPTOOLS_REQUIREMENT = 'setuptools>=2.2,<20.11'\n+SETUPTOOLS_REQUIREMENT = 'setuptools>=5.7,<20.11'\n WHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.30.0'\n", "issue": "Remove pkg_resources.build_zipmanifest monkeypatching\nThis may involve increasing the minimum setuptools version. Another alternative is vendoring setuptools.\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.1.15'\n\nSETUPTOOLS_REQUIREMENT = 'setuptools>=2.2,<20.11'\nWHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.30.0'\n", "path": "pex/version.py"}, {"content": "# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nimport contextlib\nimport os\nimport sys\nimport zipfile\n\n__all__ = ('bootstrap_pex',)\n\n\ndef pex_info_name(entry_point):\n \"\"\"Return the PEX-INFO for an entry_point\"\"\"\n return os.path.join(entry_point, 'PEX-INFO')\n\n\ndef is_compressed(entry_point):\n return os.path.exists(entry_point) and not os.path.exists(pex_info_name(entry_point))\n\n\ndef read_pexinfo_from_directory(entry_point):\n with open(pex_info_name(entry_point), 'rb') as fp:\n return fp.read()\n\n\ndef read_pexinfo_from_zip(entry_point):\n with contextlib.closing(zipfile.ZipFile(entry_point)) as zf:\n return zf.read('PEX-INFO')\n\n\ndef read_pex_info_content(entry_point):\n \"\"\"Return the raw content of a PEX-INFO.\"\"\"\n if is_compressed(entry_point):\n return read_pexinfo_from_zip(entry_point)\n else:\n return read_pexinfo_from_directory(entry_point)\n\n\ndef get_pex_info(entry_point):\n \"\"\"Return the PexInfo object for an entry point.\"\"\"\n from . 
import pex_info\n\n pex_info_content = read_pex_info_content(entry_point)\n if pex_info_content:\n return pex_info.PexInfo.from_json(pex_info_content)\n raise ValueError('Invalid entry_point: %s' % entry_point)\n\n\n# TODO(wickman) Remove once resolved (#91):\n# https://bitbucket.org/pypa/setuptools/issue/154/build_zipmanifest-results-should-be\ndef monkeypatch_build_zipmanifest():\n import pkg_resources\n if not hasattr(pkg_resources, 'build_zipmanifest'):\n return\n old_build_zipmanifest = pkg_resources.build_zipmanifest\n def memoized_build_zipmanifest(archive, memo={}):\n if archive not in memo:\n memo[archive] = old_build_zipmanifest(archive)\n return memo[archive]\n pkg_resources.build_zipmanifest = memoized_build_zipmanifest\n\n\ndef find_in_path(target_interpreter):\n if os.path.exists(target_interpreter):\n return target_interpreter\n\n for directory in os.getenv('PATH', '').split(os.pathsep):\n try_path = os.path.join(directory, target_interpreter)\n if os.path.exists(try_path):\n return try_path\n\n\ndef maybe_reexec_pex():\n from .variables import ENV\n if not ENV.PEX_PYTHON:\n return\n\n from .common import die\n from .tracer import TRACER\n\n target_python = ENV.PEX_PYTHON\n target = find_in_path(target_python)\n if not target:\n die('Failed to find interpreter specified by PEX_PYTHON: %s' % target)\n if os.path.exists(target) and os.path.realpath(target) != os.path.realpath(sys.executable):\n TRACER.log('Detected PEX_PYTHON, re-exec to %s' % target)\n ENV.delete('PEX_PYTHON')\n os.execve(target, [target_python] + sys.argv, ENV.copy())\n\n\ndef bootstrap_pex(entry_point):\n from .finders import register_finders\n monkeypatch_build_zipmanifest()\n register_finders()\n maybe_reexec_pex()\n\n from . import pex\n pex.PEX(entry_point).execute()\n\n\ndef bootstrap_pex_env(entry_point):\n \"\"\"Bootstrap the current runtime environment using a given pex.\"\"\"\n from .environment import PEXEnvironment\n from .finders import register_finders\n from .pex_info import PexInfo\n\n monkeypatch_build_zipmanifest()\n register_finders()\n\n PEXEnvironment(entry_point, PexInfo.from_pex(entry_point)).activate()\n", "path": "pex/pex_bootstrapper.py"}]} | 1,737 | 476 |
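For context on the record above: the removed `monkeypatch_build_zipmanifest` helper only memoized `pkg_resources.build_zipmanifest` per archive path, and the diff pairs its removal with raising the minimum setuptools to 5.7, presumably because newer setuptools no longer needs the workaround. A standalone sketch of that memoization pattern, with a hypothetical `build_manifest()` standing in for the setuptools function:

```
def memoize_by_archive(func):
    """Cache results keyed on the archive argument, as the old monkeypatch did."""
    cache = {}

    def wrapper(archive):
        if archive not in cache:
            cache[archive] = func(archive)
        return cache[archive]

    return wrapper


@memoize_by_archive
def build_manifest(archive):
    # Hypothetical stand-in for pkg_resources.build_zipmanifest.
    print("scanning", archive)
    return {"archive": archive}


build_manifest("app.pex")  # prints "scanning app.pex"
build_manifest("app.pex")  # served from the cache, no second scan
```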
gh_patches_debug_18001 | rasdani/github-patches | git_diff | mozilla__telemetry-analysis-service-258 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ATMO should pre-click my single SSH key
Would save me thousands of milliseconds every time I launch a cluster ;)
</issue>
<code>
[start of atmo/clusters/views.py]
1 # This Source Code Form is subject to the terms of the Mozilla Public
2 # License, v. 2.0. If a copy of the MPL was not distributed with this
3 # file, you can obtain one at http://mozilla.org/MPL/2.0/.
4 from django.contrib import messages
5 from django.contrib.auth.decorators import login_required
6 from django.shortcuts import redirect, render
7 from django.utils.safestring import mark_safe
8
9 from allauth.account.utils import user_display
10
11 from .forms import NewClusterForm
12 from .models import Cluster
13 from ..decorators import view_permission_required, delete_permission_required
14
15
16 @login_required
17 def new_cluster(request):
18 if request.user.created_sshkeys.count() == 0:
19 messages.error(
20 request,
21 mark_safe(
22 '<h4>No SSH keys associated to you.</h4>'
23 'Please upload one below to be able to launch a cluster.'
24 'This is one-time step.'
25 )
26 )
27 return redirect('keys-new')
28 initial = {
29 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),
30 'size': 1,
31 }
32 form = NewClusterForm(
33 request.user,
34 initial=initial,
35 )
36 if request.method == 'POST':
37 form = NewClusterForm(
38 request.user,
39 data=request.POST,
40 files=request.FILES,
41 initial=initial,
42 )
43 if form.is_valid():
44 cluster = form.save() # this will also magically spawn the cluster for us
45 return redirect(cluster)
46 context = {
47 'form': form,
48 }
49 return render(request, 'atmo/clusters/new.html', context)
50
51
52 @login_required
53 @delete_permission_required(Cluster)
54 def terminate_cluster(request, id):
55 cluster = Cluster.objects.get(id=id)
56 if not cluster.is_active:
57 return redirect(cluster)
58
59 if request.method == 'POST':
60 cluster.deactivate()
61 return redirect(cluster)
62
63 context = {
64 'cluster': cluster,
65 }
66 return render(request, 'atmo/clusters/terminate.html', context=context)
67
68
69 @login_required
70 @view_permission_required(Cluster)
71 def detail_cluster(request, id):
72 cluster = Cluster.objects.get(id=id)
73 context = {
74 'cluster': cluster,
75 }
76 return render(request, 'atmo/clusters/detail.html', context=context)
77
[end of atmo/clusters/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/atmo/clusters/views.py b/atmo/clusters/views.py
--- a/atmo/clusters/views.py
+++ b/atmo/clusters/views.py
@@ -15,7 +15,13 @@
@login_required
def new_cluster(request):
- if request.user.created_sshkeys.count() == 0:
+ initial = {
+ 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),
+ 'size': 1,
+ }
+ ssh_key_count = request.user.created_sshkeys.count()
+
+ if ssh_key_count == 0:
messages.error(
request,
mark_safe(
@@ -25,10 +31,10 @@
)
)
return redirect('keys-new')
- initial = {
- 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),
- 'size': 1,
- }
+ elif ssh_key_count == 1:
+ # If only 1 ssh key, make it pre-selected.
+ initial['ssh_key'] = request.user.created_sshkeys.values('pk')[0]['pk']
+
form = NewClusterForm(
request.user,
initial=initial,
| {"golden_diff": "diff --git a/atmo/clusters/views.py b/atmo/clusters/views.py\n--- a/atmo/clusters/views.py\n+++ b/atmo/clusters/views.py\n@@ -15,7 +15,13 @@\n \n @login_required\n def new_cluster(request):\n- if request.user.created_sshkeys.count() == 0:\n+ initial = {\n+ 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),\n+ 'size': 1,\n+ }\n+ ssh_key_count = request.user.created_sshkeys.count()\n+\n+ if ssh_key_count == 0:\n messages.error(\n request,\n mark_safe(\n@@ -25,10 +31,10 @@\n )\n )\n return redirect('keys-new')\n- initial = {\n- 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),\n- 'size': 1,\n- }\n+ elif ssh_key_count == 1:\n+ # If only 1 ssh key, make it pre-selected.\n+ initial['ssh_key'] = request.user.created_sshkeys.values('pk')[0]['pk']\n+\n form = NewClusterForm(\n request.user,\n initial=initial,\n", "issue": "ATMO should pre-click my single SSH key\nWould save me thousands of milliseconds every time I launch a cluster ;)\n", "before_files": [{"content": "# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this\n# file, you can obtain one at http://mozilla.org/MPL/2.0/.\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.shortcuts import redirect, render\nfrom django.utils.safestring import mark_safe\n\nfrom allauth.account.utils import user_display\n\nfrom .forms import NewClusterForm\nfrom .models import Cluster\nfrom ..decorators import view_permission_required, delete_permission_required\n\n\n@login_required\ndef new_cluster(request):\n if request.user.created_sshkeys.count() == 0:\n messages.error(\n request,\n mark_safe(\n '<h4>No SSH keys associated to you.</h4>'\n 'Please upload one below to be able to launch a cluster.'\n 'This is one-time step.'\n )\n )\n return redirect('keys-new')\n initial = {\n 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),\n 'size': 1,\n }\n form = NewClusterForm(\n request.user,\n initial=initial,\n )\n if request.method == 'POST':\n form = NewClusterForm(\n request.user,\n data=request.POST,\n files=request.FILES,\n initial=initial,\n )\n if form.is_valid():\n cluster = form.save() # this will also magically spawn the cluster for us\n return redirect(cluster)\n context = {\n 'form': form,\n }\n return render(request, 'atmo/clusters/new.html', context)\n\n\n@login_required\n@delete_permission_required(Cluster)\ndef terminate_cluster(request, id):\n cluster = Cluster.objects.get(id=id)\n if not cluster.is_active:\n return redirect(cluster)\n\n if request.method == 'POST':\n cluster.deactivate()\n return redirect(cluster)\n\n context = {\n 'cluster': cluster,\n }\n return render(request, 'atmo/clusters/terminate.html', context=context)\n\n\n@login_required\n@view_permission_required(Cluster)\ndef detail_cluster(request, id):\n cluster = Cluster.objects.get(id=id)\n context = {\n 'cluster': cluster,\n }\n return render(request, 'atmo/clusters/detail.html', context=context)\n", "path": "atmo/clusters/views.py"}]} | 1,198 | 262 |
gh_patches_debug_40147 | rasdani/github-patches | git_diff | OpenNMT__OpenNMT-tf-671 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PRF evaluator: list index out of range
Hi!
I'm getting `list index out of range` when the PRF evaluator is used.
**Config:**
Model: TransformerRelative
params:
beam_width: 1
train:
maximum_features_length: 50
maximum_labels_length: 50
save_summary_steps: 100
sample_buffer_size: 1000000
keep_checkpoint_max: 20
save_checkpoints_steps: 5000
max_step: 2000000
eval:
batch_size: 32
steps: 5000
export_on_best: bleu
external_evaluators: [ "bleu", "prf", "wer" ]
infer:
batch_size: 1024
**Full stack:**
W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Cancelled: Operation was cancelled
Traceback (most recent call last):
File "/home/dima/anaconda3/envs/tf/bin/onmt-main", line 8, in <module>
sys.exit(main())
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/bin/main.py", line 224, in main
hvd=hvd)
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/runner.py", line 217, in train
moving_average_decay=train_config.get("moving_average_decay"))
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/training.py", line 118, in __call__
early_stop = self._evaluate(evaluator, step, moving_average=moving_average)
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/training.py", line 140, in _evaluate
evaluator(step)
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/evaluation.py", line 299, in __call__
score = scorer(self._labels_file, output_path)
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/utils/scorers.py", line 132, in __call__
precision_score, recall_score, fmeasure_score = fmeasure(ref_path, hyp_path)
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/utils/fmeasure.py", line 49, in fmeasure
if tag == classref[linecpt][tagcpt]:
IndexError: list index out of range
Can I help you with the issue? I'm not familiar with the code base, but I can try to reproduce it locally and extract the context if necessary.
</issue>
<code>
[start of opennmt/utils/fmeasure.py]
1 """Hypotheses file scoring for Precision Recall and F-Measure."""
2
3 def fmeasure(ref_path,
4 hyp_path,
5 return_precision_only=False,
6 return_recall_only=False,
7 return_fmeasure_only=False):
8 """Compute Precision Recall and F-Measure between two files"""
9 with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:
10 list_null_tags = ["X", "null", "NULL", "Null", "O"]
11 listtags = []
12 linecpt = 0
13 classref = []
14 classrandom = []
15 classhyp = []
16 nbrtagref = {}
17 nbrtaghyp = {}
18 nbrtagok = {}
19 for tag in listtags:
20 nbrtagref[tag] = 0
21 nbrtaghyp[tag] = 0
22 nbrtagok[tag] = 0
23 for line in ref_fp:
24 line = line.strip()
25 tabline = line.split(' ')
26 tagcpt = 0
27 lineref = []
28 for tag in tabline:
29 lineref.append(tag)
30 if tag in nbrtagref.keys() and tag not in list_null_tags:
31 nbrtagref[tag] = nbrtagref[tag]+1
32 else:
33 nbrtagref[tag] = 1
34 tagcpt = tagcpt+1
35 classref.append(lineref)
36 linecpt = linecpt+1
37 linecpt = 0
38 for line in hyp_fp:
39 line = line.strip()
40 tabline = line.split(' ')
41 tagcpt = 0
42 linehyp = []
43 linerandom = []
44 for tag in tabline:
45 linehyp.append(tag)
46 if tag not in listtags:
47 listtags.append(tag)
48 linerandom.append(tag)
49 if tag == classref[linecpt][tagcpt]:
50 if tag in nbrtagok.keys():
51 nbrtagok[tag] = nbrtagok[tag]+1
52 else:
53 nbrtagok[tag] = 1
54 tagcpt = tagcpt+1
55 if tag in nbrtaghyp.keys():
56 nbrtaghyp[tag] = nbrtaghyp[tag]+1
57 else:
58 nbrtaghyp[tag] = 1
59 classhyp.append(linehyp)
60 classrandom.append(linerandom)
61 linecpt = linecpt+1
62
63 tagcpt = 0
64 fullprecision = 0
65 fullrecall = 0
66 precision = {}
67 recall = {}
68 fulltagok = 0.00
69 fulltaghyp = 0.00
70 fulltagref = 0.00
71 for tag in listtags:
72 if tag not in nbrtagok:
73 nbrtagok[tag] = 0
74 if tag not in nbrtaghyp:
75 nbrtaghyp[tag] = 0
76 if tag not in nbrtagref:
77 nbrtagref[tag] = 0
78 if nbrtaghyp[tag] != 0:
79 precision[tag] = nbrtagok[tag]/nbrtaghyp[tag]
80 else:
81 precision[tag] = 0
82 if nbrtagref[tag] != 0:
83 recall[tag] = nbrtagok[tag]/nbrtagref[tag]
84 else:
85 recall[tag] = 0
86 if tag not in list_null_tags:
87 fulltagok = fulltagok+nbrtagok[tag]
88 fulltaghyp = fulltaghyp+nbrtaghyp[tag]
89 fulltagref = fulltagref+nbrtagref[tag]
90 # fullprecision = fullprecision+precision[tag]
91 # fullrecall = fullrecall+recall[tag]
92 tagcpt = tagcpt+1
93 fullprecision = round(100*fulltagok/fulltaghyp, 2)/100
94 fullrecall = round(100*fulltagok/fulltagref, 2)/100
95 fullfmeasure = (round((200*fullprecision*fullrecall)/(fullprecision+fullrecall), 2))/100
96 if return_precision_only:
97 return fullprecision
98 if return_recall_only:
99 return fullrecall
100 if return_fmeasure_only:
101 return fullfmeasure
102 return fullprecision, fullrecall, fullfmeasure
103
[end of opennmt/utils/fmeasure.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opennmt/utils/fmeasure.py b/opennmt/utils/fmeasure.py
--- a/opennmt/utils/fmeasure.py
+++ b/opennmt/utils/fmeasure.py
@@ -9,21 +9,15 @@
with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:
list_null_tags = ["X", "null", "NULL", "Null", "O"]
listtags = []
- linecpt = 0
classref = []
classrandom = []
classhyp = []
nbrtagref = {}
nbrtaghyp = {}
nbrtagok = {}
- for tag in listtags:
- nbrtagref[tag] = 0
- nbrtaghyp[tag] = 0
- nbrtagok[tag] = 0
for line in ref_fp:
line = line.strip()
tabline = line.split(' ')
- tagcpt = 0
lineref = []
for tag in tabline:
lineref.append(tag)
@@ -31,36 +25,29 @@
nbrtagref[tag] = nbrtagref[tag]+1
else:
nbrtagref[tag] = 1
- tagcpt = tagcpt+1
classref.append(lineref)
- linecpt = linecpt+1
- linecpt = 0
- for line in hyp_fp:
+ for line, lineref in zip(hyp_fp, classref):
line = line.strip()
tabline = line.split(' ')
- tagcpt = 0
linehyp = []
linerandom = []
- for tag in tabline:
+ for tagcpt, tag in enumerate(tabline):
linehyp.append(tag)
if tag not in listtags:
listtags.append(tag)
linerandom.append(tag)
- if tag == classref[linecpt][tagcpt]:
+ if tagcpt < len(lineref) and tag == lineref[tagcpt]:
if tag in nbrtagok.keys():
nbrtagok[tag] = nbrtagok[tag]+1
else:
nbrtagok[tag] = 1
- tagcpt = tagcpt+1
if tag in nbrtaghyp.keys():
nbrtaghyp[tag] = nbrtaghyp[tag]+1
else:
nbrtaghyp[tag] = 1
classhyp.append(linehyp)
classrandom.append(linerandom)
- linecpt = linecpt+1
- tagcpt = 0
fullprecision = 0
fullrecall = 0
precision = {}
@@ -87,12 +74,11 @@
fulltagok = fulltagok+nbrtagok[tag]
fulltaghyp = fulltaghyp+nbrtaghyp[tag]
fulltagref = fulltagref+nbrtagref[tag]
-# fullprecision = fullprecision+precision[tag]
-# fullrecall = fullrecall+recall[tag]
- tagcpt = tagcpt+1
- fullprecision = round(100*fulltagok/fulltaghyp, 2)/100
- fullrecall = round(100*fulltagok/fulltagref, 2)/100
- fullfmeasure = (round((200*fullprecision*fullrecall)/(fullprecision+fullrecall), 2))/100
+ fullprecision = fulltagok / fulltaghyp if fulltaghyp != 0 else 0
+ fullrecall = fulltagok / fulltagref if fulltagref != 0 else 0
+ fullfmeasure = (
+ (2 * fullprecision * fullrecall) / (fullprecision + fullrecall)
+ if (fullprecision + fullrecall) != 0 else 0)
if return_precision_only:
return fullprecision
if return_recall_only:
| {"golden_diff": "diff --git a/opennmt/utils/fmeasure.py b/opennmt/utils/fmeasure.py\n--- a/opennmt/utils/fmeasure.py\n+++ b/opennmt/utils/fmeasure.py\n@@ -9,21 +9,15 @@\n with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:\n list_null_tags = [\"X\", \"null\", \"NULL\", \"Null\", \"O\"]\n listtags = []\n- linecpt = 0\n classref = []\n classrandom = []\n classhyp = []\n nbrtagref = {}\n nbrtaghyp = {}\n nbrtagok = {}\n- for tag in listtags:\n- nbrtagref[tag] = 0\n- nbrtaghyp[tag] = 0\n- nbrtagok[tag] = 0\n for line in ref_fp:\n line = line.strip()\n tabline = line.split(' ')\n- tagcpt = 0\n lineref = []\n for tag in tabline:\n lineref.append(tag)\n@@ -31,36 +25,29 @@\n nbrtagref[tag] = nbrtagref[tag]+1\n else:\n nbrtagref[tag] = 1\n- tagcpt = tagcpt+1\n classref.append(lineref)\n- linecpt = linecpt+1\n- linecpt = 0\n- for line in hyp_fp:\n+ for line, lineref in zip(hyp_fp, classref):\n line = line.strip()\n tabline = line.split(' ')\n- tagcpt = 0\n linehyp = []\n linerandom = []\n- for tag in tabline:\n+ for tagcpt, tag in enumerate(tabline):\n linehyp.append(tag)\n if tag not in listtags:\n listtags.append(tag)\n linerandom.append(tag)\n- if tag == classref[linecpt][tagcpt]:\n+ if tagcpt < len(lineref) and tag == lineref[tagcpt]:\n if tag in nbrtagok.keys():\n nbrtagok[tag] = nbrtagok[tag]+1\n else:\n nbrtagok[tag] = 1\n- tagcpt = tagcpt+1\n if tag in nbrtaghyp.keys():\n nbrtaghyp[tag] = nbrtaghyp[tag]+1\n else:\n nbrtaghyp[tag] = 1\n classhyp.append(linehyp)\n classrandom.append(linerandom)\n- linecpt = linecpt+1\n \n- tagcpt = 0\n fullprecision = 0\n fullrecall = 0\n precision = {}\n@@ -87,12 +74,11 @@\n fulltagok = fulltagok+nbrtagok[tag]\n fulltaghyp = fulltaghyp+nbrtaghyp[tag]\n fulltagref = fulltagref+nbrtagref[tag]\n-# fullprecision = fullprecision+precision[tag]\n-# fullrecall = fullrecall+recall[tag]\n- tagcpt = tagcpt+1\n- fullprecision = round(100*fulltagok/fulltaghyp, 2)/100\n- fullrecall = round(100*fulltagok/fulltagref, 2)/100\n- fullfmeasure = (round((200*fullprecision*fullrecall)/(fullprecision+fullrecall), 2))/100\n+ fullprecision = fulltagok / fulltaghyp if fulltaghyp != 0 else 0\n+ fullrecall = fulltagok / fulltagref if fulltagref != 0 else 0\n+ fullfmeasure = (\n+ (2 * fullprecision * fullrecall) / (fullprecision + fullrecall)\n+ if (fullprecision + fullrecall) != 0 else 0)\n if return_precision_only:\n return fullprecision\n if return_recall_only:\n", "issue": "PRF evaluator: list index out of range\nHi! 
\r\nI'm getting `list index out of range` when prf evaluator is used.\r\n\r\n**Config:**\r\nModel: TransformerRelative\r\nparams:\r\n beam_width: 1\r\n\r\ntrain:\r\n maximum_features_length: 50\r\n maximum_labels_length: 50\r\n save_summary_steps: 100\r\n sample_buffer_size: 1000000\r\n keep_checkpoint_max: 20\r\n save_checkpoints_steps: 5000\r\n max_step: 2000000\r\n\r\neval:\r\n batch_size: 32\r\n steps: 5000\r\n export_on_best: bleu\r\n external_evaluators: [ \"bleu\", \"prf\", \"wer\" ]\r\n\r\ninfer:\r\n batch_size: 1024\r\n\r\n**Full stack:**\r\nW tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Cancelled: Operation was cancelled\r\nTraceback (most recent call last):\r\n File \"/home/dima/anaconda3/envs/tf/bin/onmt-main\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/bin/main.py\", line 224, in main\r\n hvd=hvd)\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/runner.py\", line 217, in train\r\n moving_average_decay=train_config.get(\"moving_average_decay\"))\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/training.py\", line 118, in __call__\r\n early_stop = self._evaluate(evaluator, step, moving_average=moving_average)\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/training.py\", line 140, in _evaluate\r\n evaluator(step)\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/evaluation.py\", line 299, in __call__\r\n score = scorer(self._labels_file, output_path)\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/utils/scorers.py\", line 132, in __call__\r\n precision_score, recall_score, fmeasure_score = fmeasure(ref_path, hyp_path)\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/utils/fmeasure.py\", line 49, in fmeasure\r\n if tag == classref[linecpt][tagcpt]:\r\nIndexError: list index out of range\r\n\r\nCan I help you with the issue? 
I'm not familiar with the code base, but I can try to reproduce it locally and extract the context if necessary.\r\n\n", "before_files": [{"content": "\"\"\"Hypotheses file scoring for Precision Recall and F-Measure.\"\"\"\n\ndef fmeasure(ref_path,\n hyp_path,\n return_precision_only=False,\n return_recall_only=False,\n return_fmeasure_only=False):\n \"\"\"Compute Precision Recall and F-Measure between two files\"\"\"\n with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:\n list_null_tags = [\"X\", \"null\", \"NULL\", \"Null\", \"O\"]\n listtags = []\n linecpt = 0\n classref = []\n classrandom = []\n classhyp = []\n nbrtagref = {}\n nbrtaghyp = {}\n nbrtagok = {}\n for tag in listtags:\n nbrtagref[tag] = 0\n nbrtaghyp[tag] = 0\n nbrtagok[tag] = 0\n for line in ref_fp:\n line = line.strip()\n tabline = line.split(' ')\n tagcpt = 0\n lineref = []\n for tag in tabline:\n lineref.append(tag)\n if tag in nbrtagref.keys() and tag not in list_null_tags:\n nbrtagref[tag] = nbrtagref[tag]+1\n else:\n nbrtagref[tag] = 1\n tagcpt = tagcpt+1\n classref.append(lineref)\n linecpt = linecpt+1\n linecpt = 0\n for line in hyp_fp:\n line = line.strip()\n tabline = line.split(' ')\n tagcpt = 0\n linehyp = []\n linerandom = []\n for tag in tabline:\n linehyp.append(tag)\n if tag not in listtags:\n listtags.append(tag)\n linerandom.append(tag)\n if tag == classref[linecpt][tagcpt]:\n if tag in nbrtagok.keys():\n nbrtagok[tag] = nbrtagok[tag]+1\n else:\n nbrtagok[tag] = 1\n tagcpt = tagcpt+1\n if tag in nbrtaghyp.keys():\n nbrtaghyp[tag] = nbrtaghyp[tag]+1\n else:\n nbrtaghyp[tag] = 1\n classhyp.append(linehyp)\n classrandom.append(linerandom)\n linecpt = linecpt+1\n\n tagcpt = 0\n fullprecision = 0\n fullrecall = 0\n precision = {}\n recall = {}\n fulltagok = 0.00\n fulltaghyp = 0.00\n fulltagref = 0.00\n for tag in listtags:\n if tag not in nbrtagok:\n nbrtagok[tag] = 0\n if tag not in nbrtaghyp:\n nbrtaghyp[tag] = 0\n if tag not in nbrtagref:\n nbrtagref[tag] = 0\n if nbrtaghyp[tag] != 0:\n precision[tag] = nbrtagok[tag]/nbrtaghyp[tag]\n else:\n precision[tag] = 0\n if nbrtagref[tag] != 0:\n recall[tag] = nbrtagok[tag]/nbrtagref[tag]\n else:\n recall[tag] = 0\n if tag not in list_null_tags:\n fulltagok = fulltagok+nbrtagok[tag]\n fulltaghyp = fulltaghyp+nbrtaghyp[tag]\n fulltagref = fulltagref+nbrtagref[tag]\n# fullprecision = fullprecision+precision[tag]\n# fullrecall = fullrecall+recall[tag]\n tagcpt = tagcpt+1\n fullprecision = round(100*fulltagok/fulltaghyp, 2)/100\n fullrecall = round(100*fulltagok/fulltagref, 2)/100\n fullfmeasure = (round((200*fullprecision*fullrecall)/(fullprecision+fullrecall), 2))/100\n if return_precision_only:\n return fullprecision\n if return_recall_only:\n return fullrecall\n if return_fmeasure_only:\n return fullfmeasure\n return fullprecision, fullrecall, fullfmeasure\n", "path": "opennmt/utils/fmeasure.py"}]} | 2,295 | 847 |
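The traceback in this record comes from indexing `classref[linecpt][tagcpt]` when the hypothesis has a token (or a whole line) with no counterpart in the reference; the golden diff pairs the two files with `zip` and bounds the per-token index. A small standalone sketch of that safer pairing (a simplified match counter, not the full precision/recall computation):

```
def count_matches(ref_lines, hyp_lines):
    """Count per-position tag matches without running past either sequence."""
    matches = 0
    for ref_line, hyp_line in zip(ref_lines, hyp_lines):  # unpaired lines are skipped
        ref_tags = ref_line.split()
        for position, tag in enumerate(hyp_line.split()):
            if position < len(ref_tags) and tag == ref_tags[position]:
                matches += 1
    return matches


refs = ["B-LOC O O", "O B-PER"]
hyps = ["B-LOC O O O", "O B-PER", "O"]  # longer line plus an extra line
print(count_matches(refs, hyps))  # 5, and no IndexError
```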
gh_patches_debug_35704 | rasdani/github-patches | git_diff | vllm-project__vllm-2992 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Benchmarking script for the OpenAI chat completions API is not supported
When running vLLM with the OpenAI chat API, the benchmarking script will fail because it asserts that the backend URL ends with `v1/completions` (i.e. `assert api_url.endswith("v1/completions")`).
```
python benchmark_serving.py --backend openai --model mistralai/Mistral-7B-v0.1 --dataset ShareGPT_V3_unfiltered_cleaned_split.json --save-result
```
The logs are as follows:
```
Namespace(backend='openai', version='N/A', base_url=None, host='localhost', port=8000, endpoint='/generate', dataset='ShareGPT_V3_unfiltered_cleaned_split.json', model='mistralai/Mistral-7B-v0.1', tokenizer=None, best_of=1, use_beam_search=False, num_prompts=1000, request_rate=inf, seed=0, trust_remote_code=False, disable_tqdm=False, save_result=True)
0%| | 0/1000 [00:00<?, ?it/s]Traffic request rate: inf
Traceback (most recent call last):
File "/home/chenw/vllm/benchmarks/benchmark_serving.py", line 387, in <module>
main(args)
File "/home/chenw/vllm/benchmarks/benchmark_serving.py", line 259, in main
benchmark_result = asyncio.run(
File "/home/chenw/miniconda3/envs/myenv/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/home/chenw/miniconda3/envs/myenv/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
return future.result()
File "/home/chenw/vllm/benchmarks/benchmark_serving.py", line 195, in benchmark
outputs = await asyncio.gather(*tasks)
File "/home/chenw/vllm/benchmarks/backend_request_func.py", line 223, in async_request_openai_completions
assert api_url.endswith("v1/completions")
AssertionError
0%| | 0/1000 [00:00<?, ?it/s]
```
The `backend_request_func.py` should also allow chat API endpoints, i.e. `assert api_url.endswith("v1/chat/completions")`.
</issue>
<code>
[start of benchmarks/backend_request_func.py]
1 import json
2 import os
3 import time
4 from dataclasses import dataclass
5 from typing import Optional
6
7 import aiohttp
8 from tqdm.asyncio import tqdm
9
10 AIOHTTP_TIMEOUT = aiohttp.ClientTimeout(total=6 * 60 * 60)
11
12
13 @dataclass
14 class RequestFuncInput:
15 prompt: str
16 api_url: str
17 prompt_len: int
18 output_len: int
19 model: str
20 best_of: int = 1
21 use_beam_search: bool = False
22
23
24 @dataclass
25 class RequestFuncOutput:
26 generated_text: str = ""
27 success: bool = False
28 latency: float = 0
29 ttft: float = 0
30 prompt_len: int = 0
31
32
33 async def async_request_tgi(
34 request_func_input: RequestFuncInput,
35 pbar: Optional[tqdm] = None,
36 ) -> RequestFuncOutput:
37 api_url = request_func_input.api_url
38 assert api_url.endswith("generate_stream")
39
40 async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:
41 assert not request_func_input.use_beam_search
42 params = {
43 "best_of": request_func_input.best_of,
44 "max_new_tokens": request_func_input.output_len,
45 "do_sample": True,
46 "temperature": 0.01, # TGI does not accept 0.0 temperature.
47 "top_p": 0.99, # TGI does not accept 1.0 top_p.
48 }
49 payload = {
50 "inputs": request_func_input.prompt,
51 "parameters": params,
52 }
53 output = RequestFuncOutput()
54 output.prompt_len = request_func_input.prompt_len
55
56 ttft = 0
57 st = time.perf_counter()
58 try:
59 async with session.post(url=api_url, json=payload) as response:
60 if response.status == 200:
61 async for data in response.content.iter_any():
62 if ttft == 0:
63 ttft = time.perf_counter() - st
64 output.ttft = ttft
65 output.latency = time.perf_counter() - st
66
67 body = data.decode("utf-8").lstrip("data:")
68 output.generated_text = json.loads(body)["generated_text"]
69 output.success = True
70 else:
71 output.success = False
72 except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):
73 output.success = False
74
75 if pbar:
76 pbar.update(1)
77 return output
78
79
80 async def async_request_vllm(
81 request_func_input: RequestFuncInput,
82 pbar: Optional[tqdm] = None,
83 ) -> RequestFuncOutput:
84 api_url = request_func_input.api_url
85 assert api_url.endswith("generate")
86
87 async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:
88 payload = {
89 "prompt": request_func_input.prompt,
90 "n": 1,
91 "best_of": request_func_input.best_of,
92 "use_beam_search": request_func_input.use_beam_search,
93 "temperature": 0.0 if request_func_input.use_beam_search else 1.0,
94 "top_p": 1.0,
95 "max_tokens": request_func_input.output_len,
96 "ignore_eos": True,
97 "stream": True,
98 }
99 output = RequestFuncOutput()
100 output.prompt_len = request_func_input.prompt_len
101
102 ttft = 0
103 st = time.perf_counter()
104 try:
105 async with session.post(url=api_url, json=payload) as response:
106 if response.status == 200:
107 async for data in response.content.iter_any():
108 if ttft == 0:
109 ttft = time.perf_counter() - st
110 output.ttft = ttft
111 output.latency = time.perf_counter() - st
112
113 # When streaming, '\0' is appended to the end of the response.
114 body = data.decode("utf-8").strip("\0")
115 output.generated_text = json.loads(
116 body)["text"][0][len(request_func_input.prompt):]
117 output.success = True
118
119 else:
120 output.success = False
121 except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):
122 output.success = False
123
124 if pbar:
125 pbar.update(1)
126 return output
127
128
129 async def async_request_trt_llm(
130 request_func_input: RequestFuncInput,
131 pbar: Optional[tqdm] = None,
132 ) -> RequestFuncOutput:
133 api_url = request_func_input.api_url
134 assert api_url.endswith("generate_stream")
135
136 async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:
137 assert not request_func_input.use_beam_search
138 assert request_func_input.best_of == 1
139 payload = {
140 "accumulate_tokens": True,
141 "text_input": request_func_input.prompt,
142 "temperature": 0.0,
143 "top_p": 1.0,
144 "max_tokens": request_func_input.output_len,
145 "stream": True,
146 }
147 output = RequestFuncOutput()
148 output.prompt_len = request_func_input.prompt_len
149 ttft = 0
150
151 st = time.perf_counter()
152 try:
153 async with session.post(url=api_url, json=payload) as resp:
154 if resp.status == 200:
155 async for data in resp.content.iter_any():
156 if ttft == 0:
157 ttft = time.perf_counter() - st
158 output.ttft = ttft
159 output.latency = time.perf_counter() - st
160
161 body = data.decode("utf-8").lstrip("data:")
162 output.generated_text = json.loads(body)["text_output"]
163 output.success = True
164
165 else:
166 output.success = False
167 except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):
168 output.success = False
169
170 if pbar:
171 pbar.update(1)
172 return output
173
174
175 async def async_request_deepspeed_mii(
176 request_func_input: RequestFuncInput,
177 pbar: Optional[tqdm] = None,
178 ) -> RequestFuncOutput:
179 async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:
180 assert request_func_input.best_of == 1
181 assert not request_func_input.use_beam_search
182
183 payload = {
184 "prompts": request_func_input.prompt,
185 "max_new_tokens": request_func_input.output_len,
186 "ignore_eos": True,
187 "do_sample": True,
188 "temperature":
189 0.01, # deepspeed-mii does not accept 0.0 temperature.
190 "top_p": 1.0,
191 }
192 output = RequestFuncOutput()
193 output.prompt_len = request_func_input.prompt_len
194
195 # DeepSpeed-MII doesn't support streaming as of Jan 28 2024, will use 0 as placeholder.
196 # https://github.com/microsoft/DeepSpeed-MII/pull/311
197 output.ttft = 0
198
199 st = time.perf_counter()
200 try:
201 async with session.post(url=request_func_input.api_url,
202 json=payload) as resp:
203 if resp.status == 200:
204 parsed_resp = await resp.json()
205 output.latency = time.perf_counter() - st
206 output.generated_text = parsed_resp[0]["generated_text"]
207 output.success = True
208 else:
209 output.success = False
210 except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):
211 output.success = False
212
213 if pbar:
214 pbar.update(1)
215 return output
216
217
218 async def async_request_openai_completions(
219 request_func_input: RequestFuncInput,
220 pbar: Optional[tqdm] = None,
221 ) -> RequestFuncOutput:
222 api_url = request_func_input.api_url
223 assert api_url.endswith("v1/completions")
224
225 async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:
226 assert not request_func_input.use_beam_search
227 payload = {
228 "model": request_func_input.model,
229 "prompt": request_func_input.prompt,
230 "temperature": 0.0,
231 "best_of": request_func_input.best_of,
232 "max_tokens": request_func_input.output_len,
233 "stream": True,
234 }
235 headers = {
236 "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY')}"
237 }
238
239 output = RequestFuncOutput()
240 output.prompt_len = request_func_input.prompt_len
241
242 generated_text = ""
243 ttft = 0
244 st = time.perf_counter()
245 try:
246 async with session.post(url=api_url, json=payload,
247 headers=headers) as response:
248 if response.status == 200:
249 async for chunk in response.content:
250 if ttft == 0:
251 ttft = time.perf_counter() - st
252 output.ttft = ttft
253
254 chunk = chunk.strip()
255 if not chunk:
256 continue
257
258 chunk = chunk.decode("utf-8").lstrip("data: ")
259 if chunk == "[DONE]":
260 latency = time.perf_counter() - st
261 else:
262 body = json.loads(chunk)
263 generated_text += body["choices"][0]["text"]
264
265 output.generated_text = generated_text
266 output.success = True
267 output.latency = latency
268 else:
269 output.success = False
270 except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):
271 output.success = False
272
273 if pbar:
274 pbar.update(1)
275 return output
276
277
278 ASYNC_REQUEST_FUNCS = {
279 "tgi": async_request_tgi,
280 "vllm": async_request_vllm,
281 "deepspeed-mii": async_request_deepspeed_mii,
282 "openai": async_request_openai_completions,
283 "tensorrt-llm": async_request_trt_llm,
284 }
285
[end of benchmarks/backend_request_func.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/benchmarks/backend_request_func.py b/benchmarks/backend_request_func.py
--- a/benchmarks/backend_request_func.py
+++ b/benchmarks/backend_request_func.py
@@ -275,10 +275,80 @@
return output
+async def async_request_openai_chat_completions(
+ request_func_input: RequestFuncInput,
+ pbar: Optional[tqdm] = None,
+) -> RequestFuncOutput:
+ api_url = request_func_input.api_url
+ assert api_url.endswith(
+ "v1/chat/completions"
+ ), "OpenAI Chat API URL must end with 'v1/chat/completions'."
+
+ async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:
+ assert not request_func_input.use_beam_search
+ payload = {
+ "model": request_func_input.model,
+ "messages": [
+ {
+ "role": "user",
+ "content": request_func_input.prompt,
+ },
+ ],
+ "temperature": 0.0,
+ "max_tokens": request_func_input.output_len,
+ "stream": True,
+ }
+ headers = {
+ "Content-Type": "application/json",
+ "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY')}"
+ }
+
+ output = RequestFuncOutput()
+ output.prompt_len = request_func_input.prompt_len
+
+ generated_text = ""
+ ttft = 0
+ st = time.perf_counter()
+ try:
+ async with session.post(url=api_url, json=payload,
+ headers=headers) as response:
+ if response.status == 200:
+ async for chunk in response.content:
+ if ttft == 0:
+ ttft = time.perf_counter() - st
+ output.ttft = ttft
+
+ chunk = chunk.strip()
+ if not chunk:
+ continue
+
+ chunk = chunk.decode("utf-8").lstrip("data: ")
+ if chunk == "[DONE]":
+ latency = time.perf_counter() - st
+ else:
+ body = json.loads(chunk)
+ if "content" in body["choices"][0]["delta"]:
+ generated_text += body["choices"][0]["delta"][
+ "content"]
+
+ output.generated_text = generated_text
+ output.success = True
+ output.latency = latency
+ else:
+ output.success = False
+ except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):
+ output.success = False
+
+ if pbar:
+ pbar.update(1)
+ return output
+
+
ASYNC_REQUEST_FUNCS = {
"tgi": async_request_tgi,
"vllm": async_request_vllm,
"deepspeed-mii": async_request_deepspeed_mii,
"openai": async_request_openai_completions,
+ "openai-chat": async_request_openai_chat_completions,
"tensorrt-llm": async_request_trt_llm,
}
| {"golden_diff": "diff --git a/benchmarks/backend_request_func.py b/benchmarks/backend_request_func.py\n--- a/benchmarks/backend_request_func.py\n+++ b/benchmarks/backend_request_func.py\n@@ -275,10 +275,80 @@\n return output\n \n \n+async def async_request_openai_chat_completions(\n+ request_func_input: RequestFuncInput,\n+ pbar: Optional[tqdm] = None,\n+) -> RequestFuncOutput:\n+ api_url = request_func_input.api_url\n+ assert api_url.endswith(\n+ \"v1/chat/completions\"\n+ ), \"OpenAI Chat API URL must end with 'v1/chat/completions'.\"\n+\n+ async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n+ assert not request_func_input.use_beam_search\n+ payload = {\n+ \"model\": request_func_input.model,\n+ \"messages\": [\n+ {\n+ \"role\": \"user\",\n+ \"content\": request_func_input.prompt,\n+ },\n+ ],\n+ \"temperature\": 0.0,\n+ \"max_tokens\": request_func_input.output_len,\n+ \"stream\": True,\n+ }\n+ headers = {\n+ \"Content-Type\": \"application/json\",\n+ \"Authorization\": f\"Bearer {os.environ.get('OPENAI_API_KEY')}\"\n+ }\n+\n+ output = RequestFuncOutput()\n+ output.prompt_len = request_func_input.prompt_len\n+\n+ generated_text = \"\"\n+ ttft = 0\n+ st = time.perf_counter()\n+ try:\n+ async with session.post(url=api_url, json=payload,\n+ headers=headers) as response:\n+ if response.status == 200:\n+ async for chunk in response.content:\n+ if ttft == 0:\n+ ttft = time.perf_counter() - st\n+ output.ttft = ttft\n+\n+ chunk = chunk.strip()\n+ if not chunk:\n+ continue\n+\n+ chunk = chunk.decode(\"utf-8\").lstrip(\"data: \")\n+ if chunk == \"[DONE]\":\n+ latency = time.perf_counter() - st\n+ else:\n+ body = json.loads(chunk)\n+ if \"content\" in body[\"choices\"][0][\"delta\"]:\n+ generated_text += body[\"choices\"][0][\"delta\"][\n+ \"content\"]\n+\n+ output.generated_text = generated_text\n+ output.success = True\n+ output.latency = latency\n+ else:\n+ output.success = False\n+ except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):\n+ output.success = False\n+\n+ if pbar:\n+ pbar.update(1)\n+ return output\n+\n+\n ASYNC_REQUEST_FUNCS = {\n \"tgi\": async_request_tgi,\n \"vllm\": async_request_vllm,\n \"deepspeed-mii\": async_request_deepspeed_mii,\n \"openai\": async_request_openai_completions,\n+ \"openai-chat\": async_request_openai_chat_completions,\n \"tensorrt-llm\": async_request_trt_llm,\n }\n", "issue": "Benchmarking script for openai chat completion api are not supported\nWhen running vllm with openai chat apis, the benchmarking script will fail as it asserts the backend API of `assert api_url.endswith(\"v1/completions\")`.\r\n\r\n```\r\npython benchmark_serving.py --backend openai --model mistralai/Mistral-7B-v0.1 --dataset ShareGPT_V3_unfiltered_cleaned_split.json --save-result\r\n```\r\n\r\nThe logs are as follows:\r\n```\r\nNamespace(backend='openai', version='N/A', base_url=None, host='localhost', port=8000, endpoint='/generate', dataset='ShareGPT_V3_unfiltered_cleaned_split.json', model='mistralai/Mistral-7B-v0.1', tokenizer=None, best_of=1, use_beam_search=False, num_prompts=1000, request_rate=inf, seed=0, trust_remote_code=False, disable_tqdm=False, save_result=True)\r\n 0%| | 0/1000 [00:00<?, ?it/s]Traffic request rate: inf\r\nTraceback (most recent call last):\r\n File \"/home/chenw/vllm/benchmarks/benchmark_serving.py\", line 387, in <module>\r\n main(args)\r\n File \"/home/chenw/vllm/benchmarks/benchmark_serving.py\", line 259, in main\r\n benchmark_result = asyncio.run(\r\n File 
\"/home/chenw/miniconda3/envs/myenv/lib/python3.9/asyncio/runners.py\", line 44, in run\r\n return loop.run_until_complete(main)\r\n File \"/home/chenw/miniconda3/envs/myenv/lib/python3.9/asyncio/base_events.py\", line 647, in run_until_complete\r\n return future.result()\r\n File \"/home/chenw/vllm/benchmarks/benchmark_serving.py\", line 195, in benchmark\r\n outputs = await asyncio.gather(*tasks)\r\n File \"/home/chenw/vllm/benchmarks/backend_request_func.py\", line 223, in async_request_openai_completions\r\n assert api_url.endswith(\"v1/completions\")\r\nAssertionError\r\n 0%| | 0/1000 [00:00<?, ?it/s]\r\n```\r\n\r\nThe `backend_request_func.py` should not only allow chat apis like: `assert api_url.endswith(\"v1/chat/completions\")`.\r\n\n", "before_files": [{"content": "import json\nimport os\nimport time\nfrom dataclasses import dataclass\nfrom typing import Optional\n\nimport aiohttp\nfrom tqdm.asyncio import tqdm\n\nAIOHTTP_TIMEOUT = aiohttp.ClientTimeout(total=6 * 60 * 60)\n\n\n@dataclass\nclass RequestFuncInput:\n prompt: str\n api_url: str\n prompt_len: int\n output_len: int\n model: str\n best_of: int = 1\n use_beam_search: bool = False\n\n\n@dataclass\nclass RequestFuncOutput:\n generated_text: str = \"\"\n success: bool = False\n latency: float = 0\n ttft: float = 0\n prompt_len: int = 0\n\n\nasync def async_request_tgi(\n request_func_input: RequestFuncInput,\n pbar: Optional[tqdm] = None,\n) -> RequestFuncOutput:\n api_url = request_func_input.api_url\n assert api_url.endswith(\"generate_stream\")\n\n async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n assert not request_func_input.use_beam_search\n params = {\n \"best_of\": request_func_input.best_of,\n \"max_new_tokens\": request_func_input.output_len,\n \"do_sample\": True,\n \"temperature\": 0.01, # TGI does not accept 0.0 temperature.\n \"top_p\": 0.99, # TGI does not accept 1.0 top_p.\n }\n payload = {\n \"inputs\": request_func_input.prompt,\n \"parameters\": params,\n }\n output = RequestFuncOutput()\n output.prompt_len = request_func_input.prompt_len\n\n ttft = 0\n st = time.perf_counter()\n try:\n async with session.post(url=api_url, json=payload) as response:\n if response.status == 200:\n async for data in response.content.iter_any():\n if ttft == 0:\n ttft = time.perf_counter() - st\n output.ttft = ttft\n output.latency = time.perf_counter() - st\n\n body = data.decode(\"utf-8\").lstrip(\"data:\")\n output.generated_text = json.loads(body)[\"generated_text\"]\n output.success = True\n else:\n output.success = False\n except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):\n output.success = False\n\n if pbar:\n pbar.update(1)\n return output\n\n\nasync def async_request_vllm(\n request_func_input: RequestFuncInput,\n pbar: Optional[tqdm] = None,\n) -> RequestFuncOutput:\n api_url = request_func_input.api_url\n assert api_url.endswith(\"generate\")\n\n async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n payload = {\n \"prompt\": request_func_input.prompt,\n \"n\": 1,\n \"best_of\": request_func_input.best_of,\n \"use_beam_search\": request_func_input.use_beam_search,\n \"temperature\": 0.0 if request_func_input.use_beam_search else 1.0,\n \"top_p\": 1.0,\n \"max_tokens\": request_func_input.output_len,\n \"ignore_eos\": True,\n \"stream\": True,\n }\n output = RequestFuncOutput()\n output.prompt_len = request_func_input.prompt_len\n\n ttft = 0\n st = time.perf_counter()\n try:\n async with session.post(url=api_url, json=payload) as response:\n if response.status 
== 200:\n async for data in response.content.iter_any():\n if ttft == 0:\n ttft = time.perf_counter() - st\n output.ttft = ttft\n output.latency = time.perf_counter() - st\n\n # When streaming, '\\0' is appended to the end of the response.\n body = data.decode(\"utf-8\").strip(\"\\0\")\n output.generated_text = json.loads(\n body)[\"text\"][0][len(request_func_input.prompt):]\n output.success = True\n\n else:\n output.success = False\n except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):\n output.success = False\n\n if pbar:\n pbar.update(1)\n return output\n\n\nasync def async_request_trt_llm(\n request_func_input: RequestFuncInput,\n pbar: Optional[tqdm] = None,\n) -> RequestFuncOutput:\n api_url = request_func_input.api_url\n assert api_url.endswith(\"generate_stream\")\n\n async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n assert not request_func_input.use_beam_search\n assert request_func_input.best_of == 1\n payload = {\n \"accumulate_tokens\": True,\n \"text_input\": request_func_input.prompt,\n \"temperature\": 0.0,\n \"top_p\": 1.0,\n \"max_tokens\": request_func_input.output_len,\n \"stream\": True,\n }\n output = RequestFuncOutput()\n output.prompt_len = request_func_input.prompt_len\n ttft = 0\n\n st = time.perf_counter()\n try:\n async with session.post(url=api_url, json=payload) as resp:\n if resp.status == 200:\n async for data in resp.content.iter_any():\n if ttft == 0:\n ttft = time.perf_counter() - st\n output.ttft = ttft\n output.latency = time.perf_counter() - st\n\n body = data.decode(\"utf-8\").lstrip(\"data:\")\n output.generated_text = json.loads(body)[\"text_output\"]\n output.success = True\n\n else:\n output.success = False\n except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):\n output.success = False\n\n if pbar:\n pbar.update(1)\n return output\n\n\nasync def async_request_deepspeed_mii(\n request_func_input: RequestFuncInput,\n pbar: Optional[tqdm] = None,\n) -> RequestFuncOutput:\n async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n assert request_func_input.best_of == 1\n assert not request_func_input.use_beam_search\n\n payload = {\n \"prompts\": request_func_input.prompt,\n \"max_new_tokens\": request_func_input.output_len,\n \"ignore_eos\": True,\n \"do_sample\": True,\n \"temperature\":\n 0.01, # deepspeed-mii does not accept 0.0 temperature.\n \"top_p\": 1.0,\n }\n output = RequestFuncOutput()\n output.prompt_len = request_func_input.prompt_len\n\n # DeepSpeed-MII doesn't support streaming as of Jan 28 2024, will use 0 as placeholder.\n # https://github.com/microsoft/DeepSpeed-MII/pull/311\n output.ttft = 0\n\n st = time.perf_counter()\n try:\n async with session.post(url=request_func_input.api_url,\n json=payload) as resp:\n if resp.status == 200:\n parsed_resp = await resp.json()\n output.latency = time.perf_counter() - st\n output.generated_text = parsed_resp[0][\"generated_text\"]\n output.success = True\n else:\n output.success = False\n except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):\n output.success = False\n\n if pbar:\n pbar.update(1)\n return output\n\n\nasync def async_request_openai_completions(\n request_func_input: RequestFuncInput,\n pbar: Optional[tqdm] = None,\n) -> RequestFuncOutput:\n api_url = request_func_input.api_url\n assert api_url.endswith(\"v1/completions\")\n\n async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n assert not request_func_input.use_beam_search\n payload = {\n \"model\": request_func_input.model,\n \"prompt\": 
request_func_input.prompt,\n \"temperature\": 0.0,\n \"best_of\": request_func_input.best_of,\n \"max_tokens\": request_func_input.output_len,\n \"stream\": True,\n }\n headers = {\n \"Authorization\": f\"Bearer {os.environ.get('OPENAI_API_KEY')}\"\n }\n\n output = RequestFuncOutput()\n output.prompt_len = request_func_input.prompt_len\n\n generated_text = \"\"\n ttft = 0\n st = time.perf_counter()\n try:\n async with session.post(url=api_url, json=payload,\n headers=headers) as response:\n if response.status == 200:\n async for chunk in response.content:\n if ttft == 0:\n ttft = time.perf_counter() - st\n output.ttft = ttft\n\n chunk = chunk.strip()\n if not chunk:\n continue\n\n chunk = chunk.decode(\"utf-8\").lstrip(\"data: \")\n if chunk == \"[DONE]\":\n latency = time.perf_counter() - st\n else:\n body = json.loads(chunk)\n generated_text += body[\"choices\"][0][\"text\"]\n\n output.generated_text = generated_text\n output.success = True\n output.latency = latency\n else:\n output.success = False\n except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):\n output.success = False\n\n if pbar:\n pbar.update(1)\n return output\n\n\nASYNC_REQUEST_FUNCS = {\n \"tgi\": async_request_tgi,\n \"vllm\": async_request_vllm,\n \"deepspeed-mii\": async_request_deepspeed_mii,\n \"openai\": async_request_openai_completions,\n \"tensorrt-llm\": async_request_trt_llm,\n}\n", "path": "benchmarks/backend_request_func.py"}]} | 3,991 | 695 |
gh_patches_debug_39804 | rasdani/github-patches | git_diff | conan-io__conan-center-index-3024 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[package] proj/7.1.1: Fails to build on iOS
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **proj/7.1.1**
* Operating System+version: **iOS 11.0**
* Compiler+version: **apple-clang 11.0**
* Conan version: **conan 1.29.2**
* Python version: **Python 3.8.5**
### Conan profile
```
[settings]
arch=armv8
arch_build=x86_64
build_type=Release
compiler=apple-clang
compiler.libcxx=libc++
compiler.version=11.0
os=iOS
os.version=11.0
os_build=Macos
[options]
*:build_executable=False
proj:with_curl=False
proj:with_tiff=False
[build_requires]
*: darwin-toolchain/1.0.8@theodelrieu/stable
[env]
```
### Steps to reproduce (Include if Applicable)
`conan install proj/7.1.0@ --profile ios11-arm64 -o '*:build_executable=False' -o 'proj:with_tiff=False' -o 'proj:with_curl=False' --build=missing`
### Logs (Include/Attach if Applicable)
<details><summary>Click to expand log</summary>
```
CMake Error at source_subfolder/src/bin_cct.cmake:14 (install):
install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable
target "cct".
Call Stack (most recent call first):
source_subfolder/src/CMakeLists.txt:45 (include)
CMake Error at source_subfolder/src/bin_cs2cs.cmake:13 (install):
install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable
target "cs2cs".
Call Stack (most recent call first):
source_subfolder/src/CMakeLists.txt:50 (include)
CMake Error at source_subfolder/src/bin_geod.cmake:15 (install):
install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable
target "geod".
Call Stack (most recent call first):
source_subfolder/src/CMakeLists.txt:55 (include)
CMake Error at source_subfolder/src/bin_proj.cmake:16 (install):
install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable
target "binproj".
Call Stack (most recent call first):
source_subfolder/src/CMakeLists.txt:63 (include)
CMake Error at source_subfolder/src/bin_projinfo.cmake:12 (install):
install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable
target "binprojinfo".
Call Stack (most recent call first):
source_subfolder/src/CMakeLists.txt:68 (include)
CMake Error at source_subfolder/src/bin_gie.cmake:14 (install):
install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable
target "gie".
Call Stack (most recent call first):
source_subfolder/src/CMakeLists.txt:73 (include)
```
</details>
I would suggest adding an option that disables all the executable targets from being generated and built, like `build_executables`. Alternatively, the recipe could define individual `build_` options for each executable, but I don't know how worthwhile that level of granularity is since these are not large applications. (I personally prefer the single `build_executables` option.)
For reference, `glslang` provides this `build_executables` option to enable/disable its binaries, while `sqlite3`, `bzip2`, and `spirv-cross` provide the similarly named `build_executable` option for their (single) binary executable.
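A rough sketch of what the single `build_executables` option could look like in the recipe (hypothetical illustration only — the option name comes from the suggestion above and the CMake flag names from the current recipe; the exact wiring is an assumption, not a tested change):

```
# Hypothetical sketch -- just the shape of a single build_executables option.
from conans import ConanFile, CMake


class ProjConan(ConanFile):
    name = "proj"
    settings = "os", "arch", "compiler", "build_type"
    options = {"shared": [True, False], "build_executables": [True, False]}
    default_options = {"shared": False, "build_executables": True}

    def _configure_cmake(self):
        cmake = CMake(self)
        # Every executable target follows the one option instead of a hard-coded True,
        # so cross-builds (e.g. iOS) can skip the executable install() rules entirely.
        for flag in ("BUILD_CCT", "BUILD_CS2CS", "BUILD_GEOD",
                     "BUILD_GIE", "BUILD_PROJ", "BUILD_PROJINFO"):
            cmake.definitions[flag] = self.options.build_executables
        cmake.configure()
        return cmake
```

With something like this in place, the failing configure step above could presumably be skipped with `-o proj:build_executables=False`.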
</issue>
<code>
[start of recipes/proj/all/conanfile.py]
1 import os
2
3 from conans import ConanFile, CMake, tools, RunEnvironment
4
5 required_conan_version = ">=1.28.0"
6
7 class ProjConan(ConanFile):
8 name = "proj"
9 description = "Cartographic Projections and Coordinate Transformations Library."
10 license = "MIT"
11 topics = ("conan", "dsp", "proj", "proj4", "projections", "gis", "geospatial")
12 homepage = "https://proj.org"
13 url = "https://github.com/conan-io/conan-center-index"
14 exports_sources = ["CMakeLists.txt", "patches/**"]
15 generators = "cmake", "cmake_find_package"
16 settings = "os", "arch", "compiler", "build_type"
17 options = {
18 "shared": [True, False],
19 "fPIC": [True, False],
20 "threadsafe": [True, False],
21 "with_tiff": [True, False],
22 "with_curl": [True, False]
23 }
24 default_options = {
25 "shared": False,
26 "fPIC": True,
27 "threadsafe": True,
28 "with_tiff": True,
29 "with_curl": True
30 }
31
32 _cmake = None
33
34 @property
35 def _source_subfolder(self):
36 return "source_subfolder"
37
38 def config_options(self):
39 if self.settings.os == "Windows":
40 del self.options.fPIC
41 if tools.Version(self.version) < "7.0.0":
42 del self.options.with_tiff
43 del self.options.with_curl
44
45 def configure(self):
46 if self.options.shared:
47 del self.options.fPIC
48
49 def requirements(self):
50 self.requires("sqlite3/3.32.3")
51 if self.options.get_safe("with_tiff"):
52 self.requires("libtiff/4.1.0")
53 if self.options.get_safe("with_curl"):
54 self.requires("libcurl/7.72.0")
55
56 def build_requirements(self):
57 self.build_requires("sqlite3/3.32.3")
58
59 def source(self):
60 tools.get(**self.conan_data["sources"][self.version])
61 os.rename(self.name + "-" + self.version, self._source_subfolder)
62
63 def build(self):
64 self._patch_sources()
65 with tools.environment_append(RunEnvironment(self).vars):
66 cmake = self._configure_cmake()
67 cmake.build()
68
69 def _patch_sources(self):
70 for patch in self.conan_data.get("patches", {}).get(self.version, []):
71 tools.patch(**patch)
72 tools.replace_in_file(os.path.join(self._source_subfolder, "CMakeLists.txt"), "/W4", "")
73
74 def _configure_cmake(self):
75 if self._cmake:
76 return self._cmake
77 self._cmake = CMake(self)
78 self._cmake.definitions["USE_THREAD"] = self.options.threadsafe
79 self._cmake.definitions["BUILD_CCT"] = True
80 self._cmake.definitions["BUILD_CS2CS"] = True
81 self._cmake.definitions["BUILD_GEOD"] = True
82 self._cmake.definitions["BUILD_GIE"] = True
83 self._cmake.definitions["BUILD_PROJ"] = True
84 self._cmake.definitions["BUILD_PROJINFO"] = True
85 self._cmake.definitions["PROJ_DATA_SUBDIR"] = "res"
86 if tools.Version(self.version) < "7.0.0":
87 self._cmake.definitions["PROJ_TESTS"] = False
88 self._cmake.definitions["BUILD_LIBPROJ_SHARED"] = self.options.shared
89 self._cmake.definitions["ENABLE_LTO"] = False
90 self._cmake.definitions["JNI_SUPPORT"] = False
91 else:
92 self._cmake.definitions["ENABLE_TIFF"] = self.options.with_tiff
93 self._cmake.definitions["ENABLE_CURL"] = self.options.with_curl
94 self._cmake.definitions["BUILD_TESTING"] = False
95 self._cmake.definitions["ENABLE_IPO"] = False
96 self._cmake.definitions["BUILD_PROJSYNC"] = self.options.with_curl
97 self._cmake.configure()
98 return self._cmake
99
100 def package(self):
101 self.copy("COPYING", dst="licenses", src=self._source_subfolder)
102 cmake = self._configure_cmake()
103 cmake.install()
104 tools.rmdir(os.path.join(self.package_folder, "share"))
105 tools.rmdir(os.path.join(self.package_folder, "lib", "cmake"))
106
107 def package_info(self):
108 proj_version = tools.Version(self.version)
109 cmake_config_filename = "proj" if proj_version >= "7.0.0" else "proj4"
110 cmake_namespace = "PROJ" if proj_version >= "7.0.0" else "PROJ4"
111 self.cpp_info.filenames["cmake_find_package"] = cmake_config_filename
112 self.cpp_info.filenames["cmake_find_package_multi"] = cmake_config_filename
113 self.cpp_info.names["cmake_find_package"] = cmake_namespace
114 self.cpp_info.names["cmake_find_package_multi"] = cmake_namespace
115 self.cpp_info.components["projlib"].names["cmake_find_package"] = "proj"
116 self.cpp_info.components["projlib"].names["cmake_find_package_multi"] = "proj"
117 self.cpp_info.components["projlib"].libs = tools.collect_libs(self)
118 if self.settings.os == "Linux":
119 self.cpp_info.components["projlib"].system_libs.append("m")
120 if self.options.threadsafe:
121 self.cpp_info.components["projlib"].system_libs.append("pthread")
122 elif self.settings.os == "Windows":
123 if proj_version >= "7.0.0":
124 self.cpp_info.components["projlib"].system_libs.append("shell32")
125 if proj_version >= "7.1.0":
126 self.cpp_info.components["projlib"].system_libs.append("Ole32")
127 if not self.options.shared and tools.stdcpp_library(self):
128 self.cpp_info.components["projlib"].system_libs.append(tools.stdcpp_library(self))
129 self.cpp_info.components["projlib"].requires.append("sqlite3::sqlite3")
130 if self.options.get_safe("with_tiff"):
131 self.cpp_info.components["projlib"].requires.append("libtiff::libtiff")
132 if self.options.get_safe("with_curl"):
133 self.cpp_info.components["projlib"].requires.append("libcurl::libcurl")
134 if self.options.shared and self.settings.compiler == "Visual Studio":
135 self.cpp_info.components["projlib"].defines.append("PROJ_MSVC_DLL_IMPORT")
136
137 res_path = os.path.join(self.package_folder, "res")
138 self.output.info("Appending PROJ_LIB environment variable: {}".format(res_path))
139 self.env_info.PROJ_LIB.append(res_path)
140 bin_path = os.path.join(self.package_folder, "bin")
141 self.output.info("Appending PATH environment variable: {}".format(bin_path))
142 self.env_info.PATH.append(bin_path)
143
[end of recipes/proj/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/proj/all/conanfile.py b/recipes/proj/all/conanfile.py
--- a/recipes/proj/all/conanfile.py
+++ b/recipes/proj/all/conanfile.py
@@ -19,14 +19,16 @@
"fPIC": [True, False],
"threadsafe": [True, False],
"with_tiff": [True, False],
- "with_curl": [True, False]
+ "with_curl": [True, False],
+ "build_executables": [True, False]
}
default_options = {
"shared": False,
"fPIC": True,
"threadsafe": True,
"with_tiff": True,
- "with_curl": True
+ "with_curl": True,
+ "build_executables": True
}
_cmake = None
@@ -76,12 +78,12 @@
return self._cmake
self._cmake = CMake(self)
self._cmake.definitions["USE_THREAD"] = self.options.threadsafe
- self._cmake.definitions["BUILD_CCT"] = True
- self._cmake.definitions["BUILD_CS2CS"] = True
- self._cmake.definitions["BUILD_GEOD"] = True
- self._cmake.definitions["BUILD_GIE"] = True
- self._cmake.definitions["BUILD_PROJ"] = True
- self._cmake.definitions["BUILD_PROJINFO"] = True
+ self._cmake.definitions["BUILD_CCT"] = self.options.build_executables
+ self._cmake.definitions["BUILD_CS2CS"] = self.options.build_executables
+ self._cmake.definitions["BUILD_GEOD"] = self.options.build_executables
+ self._cmake.definitions["BUILD_GIE"] = self.options.build_executables
+ self._cmake.definitions["BUILD_PROJ"] = self.options.build_executables
+ self._cmake.definitions["BUILD_PROJINFO"] = self.options.build_executables
self._cmake.definitions["PROJ_DATA_SUBDIR"] = "res"
if tools.Version(self.version) < "7.0.0":
self._cmake.definitions["PROJ_TESTS"] = False
@@ -93,7 +95,7 @@
self._cmake.definitions["ENABLE_CURL"] = self.options.with_curl
self._cmake.definitions["BUILD_TESTING"] = False
self._cmake.definitions["ENABLE_IPO"] = False
- self._cmake.definitions["BUILD_PROJSYNC"] = self.options.with_curl
+ self._cmake.definitions["BUILD_PROJSYNC"] = self.options.build_executables and self.options.with_curl
self._cmake.configure()
return self._cmake
@@ -137,6 +139,7 @@
res_path = os.path.join(self.package_folder, "res")
self.output.info("Appending PROJ_LIB environment variable: {}".format(res_path))
self.env_info.PROJ_LIB.append(res_path)
- bin_path = os.path.join(self.package_folder, "bin")
- self.output.info("Appending PATH environment variable: {}".format(bin_path))
- self.env_info.PATH.append(bin_path)
+ if self.options.build_executables:
+ bin_path = os.path.join(self.package_folder, "bin")
+ self.output.info("Appending PATH environment variable: {}".format(bin_path))
+ self.env_info.PATH.append(bin_path)
| {"golden_diff": "diff --git a/recipes/proj/all/conanfile.py b/recipes/proj/all/conanfile.py\n--- a/recipes/proj/all/conanfile.py\n+++ b/recipes/proj/all/conanfile.py\n@@ -19,14 +19,16 @@\n \"fPIC\": [True, False],\n \"threadsafe\": [True, False],\n \"with_tiff\": [True, False],\n- \"with_curl\": [True, False]\n+ \"with_curl\": [True, False],\n+ \"build_executables\": [True, False]\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"threadsafe\": True,\n \"with_tiff\": True,\n- \"with_curl\": True\n+ \"with_curl\": True,\n+ \"build_executables\": True\n }\n \n _cmake = None\n@@ -76,12 +78,12 @@\n return self._cmake\n self._cmake = CMake(self)\n self._cmake.definitions[\"USE_THREAD\"] = self.options.threadsafe\n- self._cmake.definitions[\"BUILD_CCT\"] = True\n- self._cmake.definitions[\"BUILD_CS2CS\"] = True\n- self._cmake.definitions[\"BUILD_GEOD\"] = True\n- self._cmake.definitions[\"BUILD_GIE\"] = True\n- self._cmake.definitions[\"BUILD_PROJ\"] = True\n- self._cmake.definitions[\"BUILD_PROJINFO\"] = True\n+ self._cmake.definitions[\"BUILD_CCT\"] = self.options.build_executables\n+ self._cmake.definitions[\"BUILD_CS2CS\"] = self.options.build_executables\n+ self._cmake.definitions[\"BUILD_GEOD\"] = self.options.build_executables\n+ self._cmake.definitions[\"BUILD_GIE\"] = self.options.build_executables\n+ self._cmake.definitions[\"BUILD_PROJ\"] = self.options.build_executables\n+ self._cmake.definitions[\"BUILD_PROJINFO\"] = self.options.build_executables\n self._cmake.definitions[\"PROJ_DATA_SUBDIR\"] = \"res\"\n if tools.Version(self.version) < \"7.0.0\":\n self._cmake.definitions[\"PROJ_TESTS\"] = False\n@@ -93,7 +95,7 @@\n self._cmake.definitions[\"ENABLE_CURL\"] = self.options.with_curl\n self._cmake.definitions[\"BUILD_TESTING\"] = False\n self._cmake.definitions[\"ENABLE_IPO\"] = False\n- self._cmake.definitions[\"BUILD_PROJSYNC\"] = self.options.with_curl\n+ self._cmake.definitions[\"BUILD_PROJSYNC\"] = self.options.build_executables and self.options.with_curl\n self._cmake.configure()\n return self._cmake\n \n@@ -137,6 +139,7 @@\n res_path = os.path.join(self.package_folder, \"res\")\n self.output.info(\"Appending PROJ_LIB environment variable: {}\".format(res_path))\n self.env_info.PROJ_LIB.append(res_path)\n- bin_path = os.path.join(self.package_folder, \"bin\")\n- self.output.info(\"Appending PATH environment variable: {}\".format(bin_path))\n- self.env_info.PATH.append(bin_path)\n+ if self.options.build_executables:\n+ bin_path = os.path.join(self.package_folder, \"bin\")\n+ self.output.info(\"Appending PATH environment variable: {}\".format(bin_path))\n+ self.env_info.PATH.append(bin_path)\n", "issue": "[package] proj/7.1.1: Fails to build on iOS\n<!-- \r\n Please don't forget to update the issue title.\r\n Include all applicable information to help us reproduce your problem.\r\n-->\r\n\r\n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **proj/7.1.1**\r\n * Operating System+version: **iOS 11.0**\r\n * Compiler+version: **apple-clang 11.0**\r\n * Conan version: **conan 1.29.2**\r\n * Python version: **Python 3.8.5**\r\n\r\n### Conan profile\r\n```\r\n[settings]\r\narch=armv8\r\narch_build=x86_64\r\nbuild_type=Release\r\ncompiler=apple-clang\r\ncompiler.libcxx=libc++\r\ncompiler.version=11.0\r\nos=iOS\r\nos.version=11.0\r\nos_build=Macos\r\n[options]\r\n*:build_executable=False\r\nproj:with_curl=False\r\nproj:with_tiff=False\r\n[build_requires]\r\n*: 
darwin-toolchain/1.0.8@theodelrieu/stable\r\n[env]\r\n```\r\n\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n\r\n`conan install proj/7.1.0@ --profile ios11-arm64 -o '*:build_executable=False' -o 'proj:with_tiff=False' -o 'proj:with_curl=False' --build=missing`\r\n\r\n### Logs (Include/Attach if Applicable)\r\n<details><summary>Click to expand log</summary>\r\n\r\n```\r\nCMake Error at source_subfolder/src/bin_cct.cmake:14 (install):\r\n install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable\r\n target \"cct\".\r\nCall Stack (most recent call first):\r\n source_subfolder/src/CMakeLists.txt:45 (include)\r\n\r\n\r\nCMake Error at source_subfolder/src/bin_cs2cs.cmake:13 (install):\r\n install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable\r\n target \"cs2cs\".\r\nCall Stack (most recent call first):\r\n source_subfolder/src/CMakeLists.txt:50 (include)\r\n\r\n\r\nCMake Error at source_subfolder/src/bin_geod.cmake:15 (install):\r\n install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable\r\n target \"geod\".\r\nCall Stack (most recent call first):\r\n source_subfolder/src/CMakeLists.txt:55 (include)\r\n\r\n\r\nCMake Error at source_subfolder/src/bin_proj.cmake:16 (install):\r\n install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable\r\n target \"binproj\".\r\nCall Stack (most recent call first):\r\n source_subfolder/src/CMakeLists.txt:63 (include)\r\n\r\n\r\nCMake Error at source_subfolder/src/bin_projinfo.cmake:12 (install):\r\n install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable\r\n target \"binprojinfo\".\r\nCall Stack (most recent call first):\r\n source_subfolder/src/CMakeLists.txt:68 (include)\r\n\r\n\r\nCMake Error at source_subfolder/src/bin_gie.cmake:14 (install):\r\n install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable\r\n target \"gie\".\r\nCall Stack (most recent call first):\r\n source_subfolder/src/CMakeLists.txt:73 (include)\r\n```\r\n\r\n</details>\r\n\r\nI would suggest adding an option that disables all the executable targets from being generated and built, like `build_executables`. Alternatively, the recipe could define individual `build_` options for each executable, but I don't know how worthwhile that level of granularity is since these are not large applications. 
(I personally prefer the single `build_executables` option.)\r\n\r\nFor reference, `glslang` provides this `build_executables` option to enable/disable its binaries, while `sqlite3`, `bzip2`, and `spirv-cross` provide the similarly named `build_executable` option for their (single) binary executable.\n", "before_files": [{"content": "import os\n\nfrom conans import ConanFile, CMake, tools, RunEnvironment\n\nrequired_conan_version = \">=1.28.0\"\n\nclass ProjConan(ConanFile):\n name = \"proj\"\n description = \"Cartographic Projections and Coordinate Transformations Library.\"\n license = \"MIT\"\n topics = (\"conan\", \"dsp\", \"proj\", \"proj4\", \"projections\", \"gis\", \"geospatial\")\n homepage = \"https://proj.org\"\n url = \"https://github.com/conan-io/conan-center-index\"\n exports_sources = [\"CMakeLists.txt\", \"patches/**\"]\n generators = \"cmake\", \"cmake_find_package\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"threadsafe\": [True, False],\n \"with_tiff\": [True, False],\n \"with_curl\": [True, False]\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"threadsafe\": True,\n \"with_tiff\": True,\n \"with_curl\": True\n }\n\n _cmake = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n if tools.Version(self.version) < \"7.0.0\":\n del self.options.with_tiff\n del self.options.with_curl\n\n def configure(self):\n if self.options.shared:\n del self.options.fPIC\n\n def requirements(self):\n self.requires(\"sqlite3/3.32.3\")\n if self.options.get_safe(\"with_tiff\"):\n self.requires(\"libtiff/4.1.0\")\n if self.options.get_safe(\"with_curl\"):\n self.requires(\"libcurl/7.72.0\")\n\n def build_requirements(self):\n self.build_requires(\"sqlite3/3.32.3\")\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n os.rename(self.name + \"-\" + self.version, self._source_subfolder)\n\n def build(self):\n self._patch_sources()\n with tools.environment_append(RunEnvironment(self).vars):\n cmake = self._configure_cmake()\n cmake.build()\n\n def _patch_sources(self):\n for patch in self.conan_data.get(\"patches\", {}).get(self.version, []):\n tools.patch(**patch)\n tools.replace_in_file(os.path.join(self._source_subfolder, \"CMakeLists.txt\"), \"/W4\", \"\")\n\n def _configure_cmake(self):\n if self._cmake:\n return self._cmake\n self._cmake = CMake(self)\n self._cmake.definitions[\"USE_THREAD\"] = self.options.threadsafe\n self._cmake.definitions[\"BUILD_CCT\"] = True\n self._cmake.definitions[\"BUILD_CS2CS\"] = True\n self._cmake.definitions[\"BUILD_GEOD\"] = True\n self._cmake.definitions[\"BUILD_GIE\"] = True\n self._cmake.definitions[\"BUILD_PROJ\"] = True\n self._cmake.definitions[\"BUILD_PROJINFO\"] = True\n self._cmake.definitions[\"PROJ_DATA_SUBDIR\"] = \"res\"\n if tools.Version(self.version) < \"7.0.0\":\n self._cmake.definitions[\"PROJ_TESTS\"] = False\n self._cmake.definitions[\"BUILD_LIBPROJ_SHARED\"] = self.options.shared\n self._cmake.definitions[\"ENABLE_LTO\"] = False\n self._cmake.definitions[\"JNI_SUPPORT\"] = False\n else:\n self._cmake.definitions[\"ENABLE_TIFF\"] = self.options.with_tiff\n self._cmake.definitions[\"ENABLE_CURL\"] = self.options.with_curl\n self._cmake.definitions[\"BUILD_TESTING\"] = False\n self._cmake.definitions[\"ENABLE_IPO\"] = False\n self._cmake.definitions[\"BUILD_PROJSYNC\"] = 
self.options.with_curl\n self._cmake.configure()\n return self._cmake\n\n def package(self):\n self.copy(\"COPYING\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n\n def package_info(self):\n proj_version = tools.Version(self.version)\n cmake_config_filename = \"proj\" if proj_version >= \"7.0.0\" else \"proj4\"\n cmake_namespace = \"PROJ\" if proj_version >= \"7.0.0\" else \"PROJ4\"\n self.cpp_info.filenames[\"cmake_find_package\"] = cmake_config_filename\n self.cpp_info.filenames[\"cmake_find_package_multi\"] = cmake_config_filename\n self.cpp_info.names[\"cmake_find_package\"] = cmake_namespace\n self.cpp_info.names[\"cmake_find_package_multi\"] = cmake_namespace\n self.cpp_info.components[\"projlib\"].names[\"cmake_find_package\"] = \"proj\"\n self.cpp_info.components[\"projlib\"].names[\"cmake_find_package_multi\"] = \"proj\"\n self.cpp_info.components[\"projlib\"].libs = tools.collect_libs(self)\n if self.settings.os == \"Linux\":\n self.cpp_info.components[\"projlib\"].system_libs.append(\"m\")\n if self.options.threadsafe:\n self.cpp_info.components[\"projlib\"].system_libs.append(\"pthread\")\n elif self.settings.os == \"Windows\":\n if proj_version >= \"7.0.0\":\n self.cpp_info.components[\"projlib\"].system_libs.append(\"shell32\")\n if proj_version >= \"7.1.0\":\n self.cpp_info.components[\"projlib\"].system_libs.append(\"Ole32\")\n if not self.options.shared and tools.stdcpp_library(self):\n self.cpp_info.components[\"projlib\"].system_libs.append(tools.stdcpp_library(self))\n self.cpp_info.components[\"projlib\"].requires.append(\"sqlite3::sqlite3\")\n if self.options.get_safe(\"with_tiff\"):\n self.cpp_info.components[\"projlib\"].requires.append(\"libtiff::libtiff\")\n if self.options.get_safe(\"with_curl\"):\n self.cpp_info.components[\"projlib\"].requires.append(\"libcurl::libcurl\")\n if self.options.shared and self.settings.compiler == \"Visual Studio\":\n self.cpp_info.components[\"projlib\"].defines.append(\"PROJ_MSVC_DLL_IMPORT\")\n\n res_path = os.path.join(self.package_folder, \"res\")\n self.output.info(\"Appending PROJ_LIB environment variable: {}\".format(res_path))\n self.env_info.PROJ_LIB.append(res_path)\n bin_path = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\".format(bin_path))\n self.env_info.PATH.append(bin_path)\n", "path": "recipes/proj/all/conanfile.py"}]} | 3,283 | 799 |
gh_patches_debug_38914 | rasdani/github-patches | git_diff | akvo__akvo-rsr-1783 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Can't add locations to update through REST API
## Test plan
GIVEN the Up app
WHEN the user tries to add an update with a location
THEN this should not give a 400 error
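A minimal illustration of the failing case (hypothetical payload — the serializer name and fields are taken from the code included below; the REST endpoint itself is not shown):

```
# Hypothetical illustration: validating an update payload that nests a location.
from akvo.rest.serializers import ProjectUpdateSerializer

data = {
    "project": 1,
    "user": 1,
    "title": "Update from the Up app",
    "text": "Posted together with a location",
    # The nested location serializer requires 'location_target' (see
    # ProjectUpdateLocationSerializer below), but the update does not exist yet at
    # validation time, which is the likely source of the 400 response.
    "locations": [{"latitude": 52.37, "longitude": 4.89}],
}

serializer = ProjectUpdateSerializer(data=data)
print(serializer.is_valid(), serializer.errors)
```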
</issue>
<code>
[start of akvo/rest/serializers/__init__.py]
1 # -*- coding: utf-8 -*-
2
3 """Akvo RSR is covered by the GNU Affero General Public License.
4 See more details in the license.txt file located at the root folder of the
5 Akvo RSR module. For additional details on the GNU license please
6 see < http://www.gnu.org/licenses/agpl.html >.
7 """
8
9
10 from .benchmark import BenchmarkSerializer
11 from .benchmark_name import BenchmarknameSerializer
12 from .budget_item import BudgetItemSerializer, CountryBudgetItemSerializer
13 from .budget_item_label import BudgetItemLabelSerializer
14 from .category import CategorySerializer
15 from .country import CountrySerializer
16 from .custom_field import OrganisationCustomFieldSerializer, ProjectCustomFieldSerializer
17 from .employment import EmploymentSerializer
18 from .focus_area import FocusAreaSerializer
19 from .goal import GoalSerializer
20 from .indicator import IndicatorPeriodSerializer, IndicatorSerializer
21 from .internal_organisation_id import InternalOrganisationIDSerializer
22 from .invoice import InvoiceSerializer
23 from .keyword import KeywordSerializer
24 from .legacy_data import LegacyDataSerializer
25 from .link import LinkSerializer
26 from .organisation import OrganisationSerializer
27 from .organisation_location import (OrganisationLocationSerializer,
28 MapOrganisationLocationSerializer)
29 from .partner_site import PartnerSiteSerializer
30 from .partnership import PartnershipSerializer
31 from .planned_disbursement import PlannedDisbursementSerializer
32 from .policy_marker import PolicyMarkerSerializer
33 from .project import ProjectSerializer, ProjectExtraSerializer, ProjectUpSerializer
34 from .project_comment import ProjectCommentSerializer
35 from .project_condition import ProjectConditionSerializer
36 from .project_contact import ProjectContactSerializer
37 from .project_document import ProjectDocumentSerializer
38 from .project_location import (ProjectLocationSerializer, AdministrativeLocationSerializer,
39 MapProjectLocationSerializer)
40 from .project_update import (ProjectUpdateSerializer,
41 ProjectUpdateExtraSerializer)
42 from .project_update_location import (ProjectUpdateLocationSerializer,
43 MapProjectUpdateLocationSerializer)
44 from .publishing_status import PublishingStatusSerializer
45 from .recipient_country import RecipientCountrySerializer
46 from .region import RecipientRegionSerializer
47 from .related_project import RelatedProjectSerializer
48 from .result import ResultSerializer
49 from .sector import SectorSerializer
50 from .transaction import TransactionSerializer, TransactionSectorSerializer
51 from .typeahead import (TypeaheadCountrySerializer,
52 TypeaheadOrganisationSerializer,
53 TypeaheadProjectSerializer,
54 TypeaheadProjectUpdateSerializer)
55 from .user import UserSerializer, UserDetailsSerializer, UserPasswordSerializer
56
57 __all__ = [
58 'AdministrativeLocationSerializer',
59 'BenchmarknameSerializer',
60 'BenchmarkSerializer',
61 'BudgetItemLabelSerializer',
62 'BudgetItemSerializer',
63 'CategorySerializer',
64 'CountrySerializer',
65 'CountryBudgetItemSerializer',
66 'EmploymentSerializer',
67 'FocusAreaSerializer',
68 'GoalSerializer',
69 'IndicatorPeriodSerializer',
70 'IndicatorSerializer',
71 'InternalOrganisationIDSerializer',
72 'InvoiceSerializer',
73 'KeywordSerializer',
74 'LegacyDataSerializer',
75 'LinkSerializer',
76 'MapOrganisationLocationSerializer',
77 'MapProjectLocationSerializer',
78 'MapProjectUpdateLocationSerializer',
79 'OrganisationSerializer',
80 'OrganisationCustomFieldSerializer',
81 'OrganisationLocationSerializer',
82 'PartnershipSerializer',
83 'PartnerSiteSerializer',
84 'PlannedDisbursementSerializer',
85 'PolicyMarkerSerializer',
86 'ProjectCommentSerializer',
87 'ProjectConditionSerializer',
88 'ProjectContactSerializer',
89 'ProjectCustomFieldSerializer',
90 'ProjectDocumentSerializer',
91 'ProjectExtraSerializer',
92 'ProjectLocationSerializer',
93 'ProjectSerializer',
94 'ProjectUpdateExtraSerializer',
95 'ProjectUpdateLocationSerializer',
96 'ProjectUpdateSerializer',
97 'ProjectUpSerializer',
98 'PublishingStatusSerializer',
99 'RecipientCountrySerializer',
100 'RecipientRegionSerializer',
101 'RelatedProjectSerializer',
102 'ResultSerializer',
103 'SectorSerializer',
104 'TransactionSerializer',
105 'TransactionSectorSerializer',
106 'TypeaheadCountrySerializer',
107 'TypeaheadOrganisationSerializer',
108 'TypeaheadProjectSerializer',
109 'TypeaheadProjectUpdateSerializer',
110 'UserDetailsSerializer',
111 'UserPasswordSerializer',
112 'UserSerializer',
113 ]
114
[end of akvo/rest/serializers/__init__.py]
[start of akvo/rest/serializers/project_update.py]
1 # -*- coding: utf-8 -*-
2 """Akvo RSR is covered by the GNU Affero General Public License.
3
4 See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6 """
7
8 from rest_framework import serializers
9 from akvo.rsr.models import ProjectUpdate
10 from ..fields import Base64ImageField
11 from .project_update_location import (ProjectUpdateLocationSerializer,
12 ProjectUpdateLocationExtraSerializer)
13 from .rsr_serializer import BaseRSRSerializer
14 from .user import UserSerializer
15
16
17 class ProjectUpdateSerializer(BaseRSRSerializer):
18
19 """Serializer for project updates."""
20
21 locations = ProjectUpdateLocationSerializer(source='locations', many=True, required=False,
22 allow_add_remove=True)
23 photo = Base64ImageField(required=False, allow_empty_file=True)
24
25 class Meta:
26 model = ProjectUpdate
27
28
29 class ProjectUpdateExtraSerializer(BaseRSRSerializer):
30
31 """This serializer includes data about user and connected organisation."""
32
33 photo = Base64ImageField(required=False, allow_empty_file=True)
34 primary_location = ProjectUpdateLocationExtraSerializer()
35 # Limit project data to its PK, this is needed because of Meta.depth = 2
36 project = serializers.Field(source='project.pk')
37 user = UserSerializer()
38
39 class Meta:
40 model = ProjectUpdate
41 depth = 2
42
[end of akvo/rest/serializers/project_update.py]
[start of akvo/rest/serializers/project_update_location.py]
1 # -*- coding: utf-8 -*-
2 """Akvo RSR is covered by the GNU Affero General Public License.
3 See more details in the license.txt file located at the root folder of the Akvo RSR module.
4 For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
5 """
6
7 from rest_framework import serializers
8 from akvo.rsr.models import ProjectUpdateLocation
9 from ..fields import Base64ImageField
10 from .rsr_serializer import BaseRSRSerializer
11
12
13 class ProjectUpdateLocationSerializer(BaseRSRSerializer):
14
15 class Meta:
16 model = ProjectUpdateLocation
17
18
19 class ProjectUpdateLocationExtraSerializer(ProjectUpdateLocationSerializer):
20
21 # Limit update data to its PK, this is needed because of Meta.depth = 2
22 location_target = serializers.Field(source='location_target.pk')
23
24 class Meta(ProjectUpdateLocationSerializer.Meta):
25 depth = 2
26
27
28 class MapProjectUpdateSerializer(serializers.Serializer):
29
30 """To serialize the update field of the update map resource."""
31
32 id = serializers.IntegerField()
33 title = serializers.CharField()
34 url = serializers.URLField(source='get_absolute_url')
35 photo = Base64ImageField(required=False, allow_empty_file=True)
36 video = serializers.CharField(required=False)
37
38
39 class MapProjectUpdateLocationSerializer(serializers.Serializer):
40
41 """To serialize the update map resource."""
42
43 id = serializers.IntegerField()
44 latitude = serializers.FloatField()
45 longitude = serializers.FloatField()
46 update = MapProjectUpdateSerializer(source='location_target')
47
[end of akvo/rest/serializers/project_update_location.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/rest/serializers/__init__.py b/akvo/rest/serializers/__init__.py
--- a/akvo/rest/serializers/__init__.py
+++ b/akvo/rest/serializers/__init__.py
@@ -40,6 +40,7 @@
from .project_update import (ProjectUpdateSerializer,
ProjectUpdateExtraSerializer)
from .project_update_location import (ProjectUpdateLocationSerializer,
+ ProjectUpdateLocationNestedSerializer,
MapProjectUpdateLocationSerializer)
from .publishing_status import PublishingStatusSerializer
from .recipient_country import RecipientCountrySerializer
@@ -93,6 +94,7 @@
'ProjectSerializer',
'ProjectUpdateExtraSerializer',
'ProjectUpdateLocationSerializer',
+ 'ProjectUpdateLocationNestedSerializer',
'ProjectUpdateSerializer',
'ProjectUpSerializer',
'PublishingStatusSerializer',
diff --git a/akvo/rest/serializers/project_update.py b/akvo/rest/serializers/project_update.py
--- a/akvo/rest/serializers/project_update.py
+++ b/akvo/rest/serializers/project_update.py
@@ -8,7 +8,7 @@
from rest_framework import serializers
from akvo.rsr.models import ProjectUpdate
from ..fields import Base64ImageField
-from .project_update_location import (ProjectUpdateLocationSerializer,
+from .project_update_location import (ProjectUpdateLocationNestedSerializer,
ProjectUpdateLocationExtraSerializer)
from .rsr_serializer import BaseRSRSerializer
from .user import UserSerializer
@@ -18,8 +18,8 @@
"""Serializer for project updates."""
- locations = ProjectUpdateLocationSerializer(source='locations', many=True, required=False,
- allow_add_remove=True)
+ locations = ProjectUpdateLocationNestedSerializer(source='locations', many=True, required=False,
+ allow_add_remove=True)
photo = Base64ImageField(required=False, allow_empty_file=True)
class Meta:
diff --git a/akvo/rest/serializers/project_update_location.py b/akvo/rest/serializers/project_update_location.py
--- a/akvo/rest/serializers/project_update_location.py
+++ b/akvo/rest/serializers/project_update_location.py
@@ -16,6 +16,14 @@
model = ProjectUpdateLocation
+class ProjectUpdateLocationNestedSerializer(ProjectUpdateLocationSerializer):
+
+ class Meta(ProjectUpdateLocationSerializer.Meta):
+ # Exclude the mandatory 'location_target' field, so that it is possible to create a
+ # project update location at the same time as the project update.
+ exclude = ('location_target',)
+
+
class ProjectUpdateLocationExtraSerializer(ProjectUpdateLocationSerializer):
# Limit update data to its PK, this is needed because of Meta.depth = 2
| {"golden_diff": "diff --git a/akvo/rest/serializers/__init__.py b/akvo/rest/serializers/__init__.py\n--- a/akvo/rest/serializers/__init__.py\n+++ b/akvo/rest/serializers/__init__.py\n@@ -40,6 +40,7 @@\n from .project_update import (ProjectUpdateSerializer,\n ProjectUpdateExtraSerializer)\n from .project_update_location import (ProjectUpdateLocationSerializer,\n+ ProjectUpdateLocationNestedSerializer,\n MapProjectUpdateLocationSerializer)\n from .publishing_status import PublishingStatusSerializer\n from .recipient_country import RecipientCountrySerializer\n@@ -93,6 +94,7 @@\n 'ProjectSerializer',\n 'ProjectUpdateExtraSerializer',\n 'ProjectUpdateLocationSerializer',\n+ 'ProjectUpdateLocationNestedSerializer',\n 'ProjectUpdateSerializer',\n 'ProjectUpSerializer',\n 'PublishingStatusSerializer',\ndiff --git a/akvo/rest/serializers/project_update.py b/akvo/rest/serializers/project_update.py\n--- a/akvo/rest/serializers/project_update.py\n+++ b/akvo/rest/serializers/project_update.py\n@@ -8,7 +8,7 @@\n from rest_framework import serializers\n from akvo.rsr.models import ProjectUpdate\n from ..fields import Base64ImageField\n-from .project_update_location import (ProjectUpdateLocationSerializer,\n+from .project_update_location import (ProjectUpdateLocationNestedSerializer,\n ProjectUpdateLocationExtraSerializer)\n from .rsr_serializer import BaseRSRSerializer\n from .user import UserSerializer\n@@ -18,8 +18,8 @@\n \n \"\"\"Serializer for project updates.\"\"\"\n \n- locations = ProjectUpdateLocationSerializer(source='locations', many=True, required=False,\n- allow_add_remove=True)\n+ locations = ProjectUpdateLocationNestedSerializer(source='locations', many=True, required=False,\n+ allow_add_remove=True)\n photo = Base64ImageField(required=False, allow_empty_file=True)\n \n class Meta:\ndiff --git a/akvo/rest/serializers/project_update_location.py b/akvo/rest/serializers/project_update_location.py\n--- a/akvo/rest/serializers/project_update_location.py\n+++ b/akvo/rest/serializers/project_update_location.py\n@@ -16,6 +16,14 @@\n model = ProjectUpdateLocation\n \n \n+class ProjectUpdateLocationNestedSerializer(ProjectUpdateLocationSerializer):\n+\n+ class Meta(ProjectUpdateLocationSerializer.Meta):\n+ # Exclude the mandatory 'location_target' field, so that it is possible to create a\n+ # project update location at the same time as the project update.\n+ exclude = ('location_target',)\n+\n+\n class ProjectUpdateLocationExtraSerializer(ProjectUpdateLocationSerializer):\n \n # Limit update data to its PK, this is needed because of Meta.depth = 2\n", "issue": "Can't add locations to update through REST API\n## Test plan\n\nGIVEN the Up app\nWHEN the user tries to add an update\nTHEN this should not give a 400 error\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. 
For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\n\nfrom .benchmark import BenchmarkSerializer\nfrom .benchmark_name import BenchmarknameSerializer\nfrom .budget_item import BudgetItemSerializer, CountryBudgetItemSerializer\nfrom .budget_item_label import BudgetItemLabelSerializer\nfrom .category import CategorySerializer\nfrom .country import CountrySerializer\nfrom .custom_field import OrganisationCustomFieldSerializer, ProjectCustomFieldSerializer\nfrom .employment import EmploymentSerializer\nfrom .focus_area import FocusAreaSerializer\nfrom .goal import GoalSerializer\nfrom .indicator import IndicatorPeriodSerializer, IndicatorSerializer\nfrom .internal_organisation_id import InternalOrganisationIDSerializer\nfrom .invoice import InvoiceSerializer\nfrom .keyword import KeywordSerializer\nfrom .legacy_data import LegacyDataSerializer\nfrom .link import LinkSerializer\nfrom .organisation import OrganisationSerializer\nfrom .organisation_location import (OrganisationLocationSerializer,\n MapOrganisationLocationSerializer)\nfrom .partner_site import PartnerSiteSerializer\nfrom .partnership import PartnershipSerializer\nfrom .planned_disbursement import PlannedDisbursementSerializer\nfrom .policy_marker import PolicyMarkerSerializer\nfrom .project import ProjectSerializer, ProjectExtraSerializer, ProjectUpSerializer\nfrom .project_comment import ProjectCommentSerializer\nfrom .project_condition import ProjectConditionSerializer\nfrom .project_contact import ProjectContactSerializer\nfrom .project_document import ProjectDocumentSerializer\nfrom .project_location import (ProjectLocationSerializer, AdministrativeLocationSerializer,\n MapProjectLocationSerializer)\nfrom .project_update import (ProjectUpdateSerializer,\n ProjectUpdateExtraSerializer)\nfrom .project_update_location import (ProjectUpdateLocationSerializer,\n MapProjectUpdateLocationSerializer)\nfrom .publishing_status import PublishingStatusSerializer\nfrom .recipient_country import RecipientCountrySerializer\nfrom .region import RecipientRegionSerializer\nfrom .related_project import RelatedProjectSerializer\nfrom .result import ResultSerializer\nfrom .sector import SectorSerializer\nfrom .transaction import TransactionSerializer, TransactionSectorSerializer\nfrom .typeahead import (TypeaheadCountrySerializer,\n TypeaheadOrganisationSerializer,\n TypeaheadProjectSerializer,\n TypeaheadProjectUpdateSerializer)\nfrom .user import UserSerializer, UserDetailsSerializer, UserPasswordSerializer\n\n__all__ = [\n 'AdministrativeLocationSerializer',\n 'BenchmarknameSerializer',\n 'BenchmarkSerializer',\n 'BudgetItemLabelSerializer',\n 'BudgetItemSerializer',\n 'CategorySerializer',\n 'CountrySerializer',\n 'CountryBudgetItemSerializer',\n 'EmploymentSerializer',\n 'FocusAreaSerializer',\n 'GoalSerializer',\n 'IndicatorPeriodSerializer',\n 'IndicatorSerializer',\n 'InternalOrganisationIDSerializer',\n 'InvoiceSerializer',\n 'KeywordSerializer',\n 'LegacyDataSerializer',\n 'LinkSerializer',\n 'MapOrganisationLocationSerializer',\n 'MapProjectLocationSerializer',\n 'MapProjectUpdateLocationSerializer',\n 'OrganisationSerializer',\n 'OrganisationCustomFieldSerializer',\n 'OrganisationLocationSerializer',\n 'PartnershipSerializer',\n 'PartnerSiteSerializer',\n 'PlannedDisbursementSerializer',\n 'PolicyMarkerSerializer',\n 'ProjectCommentSerializer',\n 'ProjectConditionSerializer',\n 'ProjectContactSerializer',\n 'ProjectCustomFieldSerializer',\n 'ProjectDocumentSerializer',\n 
'ProjectExtraSerializer',\n 'ProjectLocationSerializer',\n 'ProjectSerializer',\n 'ProjectUpdateExtraSerializer',\n 'ProjectUpdateLocationSerializer',\n 'ProjectUpdateSerializer',\n 'ProjectUpSerializer',\n 'PublishingStatusSerializer',\n 'RecipientCountrySerializer',\n 'RecipientRegionSerializer',\n 'RelatedProjectSerializer',\n 'ResultSerializer',\n 'SectorSerializer',\n 'TransactionSerializer',\n 'TransactionSectorSerializer',\n 'TypeaheadCountrySerializer',\n 'TypeaheadOrganisationSerializer',\n 'TypeaheadProjectSerializer',\n 'TypeaheadProjectUpdateSerializer',\n 'UserDetailsSerializer',\n 'UserPasswordSerializer',\n 'UserSerializer',\n]\n", "path": "akvo/rest/serializers/__init__.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom rest_framework import serializers\nfrom akvo.rsr.models import ProjectUpdate\nfrom ..fields import Base64ImageField\nfrom .project_update_location import (ProjectUpdateLocationSerializer,\n ProjectUpdateLocationExtraSerializer)\nfrom .rsr_serializer import BaseRSRSerializer\nfrom .user import UserSerializer\n\n\nclass ProjectUpdateSerializer(BaseRSRSerializer):\n\n \"\"\"Serializer for project updates.\"\"\"\n\n locations = ProjectUpdateLocationSerializer(source='locations', many=True, required=False,\n allow_add_remove=True)\n photo = Base64ImageField(required=False, allow_empty_file=True)\n\n class Meta:\n model = ProjectUpdate\n\n\nclass ProjectUpdateExtraSerializer(BaseRSRSerializer):\n\n \"\"\"This serializer includes data about user and connected organisation.\"\"\"\n\n photo = Base64ImageField(required=False, allow_empty_file=True)\n primary_location = ProjectUpdateLocationExtraSerializer()\n # Limit project data to its PK, this is needed because of Meta.depth = 2\n project = serializers.Field(source='project.pk')\n user = UserSerializer()\n\n class Meta:\n model = ProjectUpdate\n depth = 2\n", "path": "akvo/rest/serializers/project_update.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom rest_framework import serializers\nfrom akvo.rsr.models import ProjectUpdateLocation\nfrom ..fields import Base64ImageField\nfrom .rsr_serializer import BaseRSRSerializer\n\n\nclass ProjectUpdateLocationSerializer(BaseRSRSerializer):\n\n class Meta:\n model = ProjectUpdateLocation\n\n\nclass ProjectUpdateLocationExtraSerializer(ProjectUpdateLocationSerializer):\n\n # Limit update data to its PK, this is needed because of Meta.depth = 2\n location_target = serializers.Field(source='location_target.pk')\n\n class Meta(ProjectUpdateLocationSerializer.Meta):\n depth = 2\n\n\nclass MapProjectUpdateSerializer(serializers.Serializer):\n\n \"\"\"To serialize the update field of the update map resource.\"\"\"\n\n id = serializers.IntegerField()\n title = serializers.CharField()\n url = serializers.URLField(source='get_absolute_url')\n photo = Base64ImageField(required=False, allow_empty_file=True)\n video = serializers.CharField(required=False)\n\n\nclass MapProjectUpdateLocationSerializer(serializers.Serializer):\n\n \"\"\"To serialize the update 
map resource.\"\"\"\n\n id = serializers.IntegerField()\n latitude = serializers.FloatField()\n longitude = serializers.FloatField()\n update = MapProjectUpdateSerializer(source='location_target')\n", "path": "akvo/rest/serializers/project_update_location.py"}]} | 2,500 | 600 |
gh_patches_debug_2290 | rasdani/github-patches | git_diff | TheAlgorithms__Python-4779 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug with union in disjoint_set
https://github.com/TheAlgorithms/Python/blob/master/data_structures/disjoint_set/disjoint_set.py
```python
def union_set(x, y):
"""
union two sets.
set with bigger rank should be parent, so that the
disjoint set tree will be more flat.
"""
x, y = find_set(x), find_set(y)
if x.rank > y.rank:
y.parent = x
else:
x.parent = y
if x.rank == y.rank:
y.rank += 1
```
A check for `x == y` is needed here: when the two elements already share the same root, the current code reparents that root onto itself and, because the ranks are equal, spuriously increments the rank.
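As an illustration, a sketch of the guard being requested (it matches the shape of the patch shown later in this record and reuses `find_set` from the same module; the union-by-rank logic is otherwise unchanged):
```python
def union_set(x, y):
    """Union the sets containing x and y, doing nothing if they already share a root."""
    x, y = find_set(x), find_set(y)
    if x == y:
        # Already in the same set: reparenting the root onto itself and bumping
        # the rank would only inflate the rank heuristic, so bail out early.
        return
    if x.rank > y.rank:
        y.parent = x
    else:
        x.parent = y
        if x.rank == y.rank:
            y.rank += 1
```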
</issue>
<code>
[start of data_structures/disjoint_set/disjoint_set.py]
1 """
2 disjoint set
3 Reference: https://en.wikipedia.org/wiki/Disjoint-set_data_structure
4 """
5
6
7 class Node:
8 def __init__(self, data):
9 self.data = data
10
11
12 def make_set(x):
13 """
14 make x as a set.
15 """
16 # rank is the distance from x to its' parent
17 # root's rank is 0
18 x.rank = 0
19 x.parent = x
20
21
22 def union_set(x, y):
23 """
24 union two sets.
25 set with bigger rank should be parent, so that the
26 disjoint set tree will be more flat.
27 """
28 x, y = find_set(x), find_set(y)
29 if x.rank > y.rank:
30 y.parent = x
31 else:
32 x.parent = y
33 if x.rank == y.rank:
34 y.rank += 1
35
36
37 def find_set(x):
38 """
39 return the parent of x
40 """
41 if x != x.parent:
42 x.parent = find_set(x.parent)
43 return x.parent
44
45
46 def find_python_set(node: Node) -> set:
47 """
48 Return a Python Standard Library set that contains i.
49 """
50 sets = ({0, 1, 2}, {3, 4, 5})
51 for s in sets:
52 if node.data in s:
53 return s
54 raise ValueError(f"{node.data} is not in {sets}")
55
56
57 def test_disjoint_set():
58 """
59 >>> test_disjoint_set()
60 """
61 vertex = [Node(i) for i in range(6)]
62 for v in vertex:
63 make_set(v)
64
65 union_set(vertex[0], vertex[1])
66 union_set(vertex[1], vertex[2])
67 union_set(vertex[3], vertex[4])
68 union_set(vertex[3], vertex[5])
69
70 for node0 in vertex:
71 for node1 in vertex:
72 if find_python_set(node0).isdisjoint(find_python_set(node1)):
73 assert find_set(node0) != find_set(node1)
74 else:
75 assert find_set(node0) == find_set(node1)
76
77
78 if __name__ == "__main__":
79 test_disjoint_set()
80
[end of data_structures/disjoint_set/disjoint_set.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/data_structures/disjoint_set/disjoint_set.py b/data_structures/disjoint_set/disjoint_set.py
--- a/data_structures/disjoint_set/disjoint_set.py
+++ b/data_structures/disjoint_set/disjoint_set.py
@@ -26,7 +26,10 @@
disjoint set tree will be more flat.
"""
x, y = find_set(x), find_set(y)
- if x.rank > y.rank:
+ if x == y:
+ return
+
+ elif x.rank > y.rank:
y.parent = x
else:
x.parent = y
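A quick usage sketch of the patched function, assuming the `Node`, `make_set`, `find_set` and `union_set` helpers from `disjoint_set.py` above; the second call now returns early instead of inflating the root's rank:
```python
a, b = Node(1), Node(2)
make_set(a)
make_set(b)

union_set(a, b)   # merges the two singleton sets (b becomes the root with rank 1)
union_set(a, b)   # both roots are now the same node, so the new x == y guard returns immediately
assert find_set(a) is find_set(b)
```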
| {"golden_diff": "diff --git a/data_structures/disjoint_set/disjoint_set.py b/data_structures/disjoint_set/disjoint_set.py\n--- a/data_structures/disjoint_set/disjoint_set.py\n+++ b/data_structures/disjoint_set/disjoint_set.py\n@@ -26,7 +26,10 @@\n disjoint set tree will be more flat.\r\n \"\"\"\r\n x, y = find_set(x), find_set(y)\r\n- if x.rank > y.rank:\r\n+ if x == y:\r\n+ return\r\n+\r\n+ elif x.rank > y.rank:\r\n y.parent = x\r\n else:\r\n x.parent = y\n", "issue": "Bug with union in disjoint_set\nhttps://github.com/TheAlgorithms/Python/blob/master/data_structures/disjoint_set/disjoint_set.py\r\n```python\r\ndef union_set(x, y):\r\n \"\"\"\r\n union two sets.\r\n set with bigger rank should be parent, so that the\r\n disjoint set tree will be more flat.\r\n \"\"\"\r\n x, y = find_set(x), find_set(y)\r\n if x.rank > y.rank:\r\n y.parent = x\r\n else:\r\n x.parent = y\r\n if x.rank == y.rank:\r\n y.rank += 1\r\n```\r\n\r\nhere need check if x==y\r\n\nBug with union in disjoint_set\nhttps://github.com/TheAlgorithms/Python/blob/master/data_structures/disjoint_set/disjoint_set.py\r\n```python\r\ndef union_set(x, y):\r\n \"\"\"\r\n union two sets.\r\n set with bigger rank should be parent, so that the\r\n disjoint set tree will be more flat.\r\n \"\"\"\r\n x, y = find_set(x), find_set(y)\r\n if x.rank > y.rank:\r\n y.parent = x\r\n else:\r\n x.parent = y\r\n if x.rank == y.rank:\r\n y.rank += 1\r\n```\r\n\r\nhere need check if x==y\r\n\n", "before_files": [{"content": "\"\"\"\r\n disjoint set\r\n Reference: https://en.wikipedia.org/wiki/Disjoint-set_data_structure\r\n\"\"\"\r\n\r\n\r\nclass Node:\r\n def __init__(self, data):\r\n self.data = data\r\n\r\n\r\ndef make_set(x):\r\n \"\"\"\r\n make x as a set.\r\n \"\"\"\r\n # rank is the distance from x to its' parent\r\n # root's rank is 0\r\n x.rank = 0\r\n x.parent = x\r\n\r\n\r\ndef union_set(x, y):\r\n \"\"\"\r\n union two sets.\r\n set with bigger rank should be parent, so that the\r\n disjoint set tree will be more flat.\r\n \"\"\"\r\n x, y = find_set(x), find_set(y)\r\n if x.rank > y.rank:\r\n y.parent = x\r\n else:\r\n x.parent = y\r\n if x.rank == y.rank:\r\n y.rank += 1\r\n\r\n\r\ndef find_set(x):\r\n \"\"\"\r\n return the parent of x\r\n \"\"\"\r\n if x != x.parent:\r\n x.parent = find_set(x.parent)\r\n return x.parent\r\n\r\n\r\ndef find_python_set(node: Node) -> set:\r\n \"\"\"\r\n Return a Python Standard Library set that contains i.\r\n \"\"\"\r\n sets = ({0, 1, 2}, {3, 4, 5})\r\n for s in sets:\r\n if node.data in s:\r\n return s\r\n raise ValueError(f\"{node.data} is not in {sets}\")\r\n\r\n\r\ndef test_disjoint_set():\r\n \"\"\"\r\n >>> test_disjoint_set()\r\n \"\"\"\r\n vertex = [Node(i) for i in range(6)]\r\n for v in vertex:\r\n make_set(v)\r\n\r\n union_set(vertex[0], vertex[1])\r\n union_set(vertex[1], vertex[2])\r\n union_set(vertex[3], vertex[4])\r\n union_set(vertex[3], vertex[5])\r\n\r\n for node0 in vertex:\r\n for node1 in vertex:\r\n if find_python_set(node0).isdisjoint(find_python_set(node1)):\r\n assert find_set(node0) != find_set(node1)\r\n else:\r\n assert find_set(node0) == find_set(node1)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n test_disjoint_set()\r\n", "path": "data_structures/disjoint_set/disjoint_set.py"}]} | 1,431 | 135 |
gh_patches_debug_25461 | rasdani/github-patches | git_diff | cornellius-gp__gpytorch-785 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug] get_fantasy_model does not work for SGPR with InducingPointKernel
# 🐛 Bug
Not sure if this should be considered a bug or a feature request, but gpytorch's implementation of SGPR using the InducingPointKernel does not seem to support get_fantasy_model.
## To reproduce
I am including the smallest mwe (or should I say mnwe) here. Note that I get the same behaviour by taking the [example tutorial for SGPR](https://gpytorch.readthedocs.io/en/latest/examples/05_Scalable_GP_Regression_Multidimensional/SGPR_Example_CUDA.html) and adding a get_fantasy_model call at the end. I can post that too if required, but it is longer and might clutter the ticket.
**Code snippet to reproduce**
```python
import gpytorch
import torch
from gpytorch.kernels import ScaleKernel, RBFKernel, InducingPointKernel
from gpytorch.distributions import MultivariateNormal
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.means import ConstantMean
class GPRegressionModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(GPRegressionModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = ConstantMean()
self.base_covar_module = ScaleKernel(RBFKernel())
self.covar_module = InducingPointKernel(self.base_covar_module, inducing_points=train_x[:500, :], likelihood=likelihood)
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return MultivariateNormal(mean_x, covar_x)
train_X = torch.randn((100,5)).to("cpu")
train_y = torch.randn((100)).to("cpu")
likelihood = GaussianLikelihood()
model = GPRegressionModel(train_X, train_y, likelihood)
model.train()
model.eval()
test_pred = model(torch.randn((1,5)).to("cpu"))
model = model.get_fantasy_model(torch.randn((1,5)).to("cpu"), torch.randn((1)).to("cpu"))
```
**Stack trace/error message**
```
Traceback (most recent call last):
File "mwe_sgpr_fantasy.py", line 31, in <module>
model = model.get_fantasy_model(torch.randn((1,5)).to("cpu"), torch.randn((1)).to("cpu"))
File "/home/user/miniconda3/lib/python3.7/site-packages/gpytorch/models/exact_gp.py", line 173, in get_fantasy_model
new_model = deepcopy(self)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 306, in _reconstruct
value = deepcopy(value, memo)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 161, in deepcopy
y = copier(memo)
File "/home/user/miniconda3/lib/python3.7/site-packages/torch/tensor.py", line 23, in __deepcopy__
raise RuntimeError("Only Tensors created explicitly by the user "
RuntimeError: Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment
```
## Expected Behavior
I would expect a fantasized model to be returned efficiently.
## System information
**Please complete the following information:**
- GPyTorch Version 0.3.3
- PyTorch Version 1.1.0
- Ubuntu 18.04
## Additional context
It seems that during the update, the `new_model = deepcopy(self)` tries to copy `self._inducing_inv_root` but detects that it is trainable by autograd and balks. I guess gpytorch made this design choice because of the goal of optimizing the inducing points as a hyperparameter, but as a tradeoff it does not allow for efficient updates.
So far I have tried to replace the inducing points with a non-trainable version by setting `requires_grad` to `False`, but it does not seem to help. I would guess that [any of these tensor multiplications](https://github.com/cornellius-gp/gpytorch/blob/master/gpytorch/kernels/inducing_point_kernel.py#L45-L47) in the implementation of `_inducing_inv_root` could end up reactivating autograd, and I am afraid that without more knowledge of gpytorch's internals, patching them one by one might turn into a long game of whack-a-mole.
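As a minimal standalone illustration of the failure mode described above (plain PyTorch, nothing gpytorch-specific): deepcopy refuses non-leaf tensors that still carry autograd history, while a detached copy goes through. The accepted patch shown later in this record takes a related route: it temporarily drops the cached tensors while the module is deep-copied and restores them afterwards.
```python
import copy
import torch

w = torch.randn(3, 3, requires_grad=True)
cached = w @ w.t()                        # non-leaf result, still attached to the graph

# copy.deepcopy(cached)                   # raises: "Only Tensors created explicitly by the user ..."
clone = copy.deepcopy(cached.detach())    # works once the autograd history is dropped
```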
</issue>
<code>
[start of gpytorch/kernels/inducing_point_kernel.py]
1 #!/usr/bin/env python3
2
3 import math
4 import torch
5 from .kernel import Kernel
6 from ..lazy import delazify, DiagLazyTensor, MatmulLazyTensor, RootLazyTensor, PsdSumLazyTensor
7 from ..distributions import MultivariateNormal
8 from ..mlls import InducingPointKernelAddedLossTerm
9 from ..utils.cholesky import psd_safe_cholesky
10
11
12 class InducingPointKernel(Kernel):
13 def __init__(self, base_kernel, inducing_points, likelihood, active_dims=None):
14 super(InducingPointKernel, self).__init__(active_dims=active_dims)
15 self.base_kernel = base_kernel
16 self.likelihood = likelihood
17
18 if inducing_points.ndimension() == 1:
19 inducing_points = inducing_points.unsqueeze(-1)
20 if inducing_points.ndimension() != 2:
21 raise RuntimeError("Inducing points should be 2 dimensional")
22 self.register_parameter(name="inducing_points", parameter=torch.nn.Parameter(inducing_points))
23 self.register_added_loss_term("inducing_point_loss_term")
24
25 def train(self, mode=True):
26 if hasattr(self, "_cached_kernel_mat"):
27 del self._cached_kernel_mat
28 return super(InducingPointKernel, self).train(mode)
29
30 @property
31 def _inducing_mat(self):
32 if not self.training and hasattr(self, "_cached_kernel_mat"):
33 return self._cached_kernel_mat
34 else:
35 res = delazify(self.base_kernel(self.inducing_points, self.inducing_points))
36 if not self.training:
37 self._cached_kernel_mat = res
38 return res
39
40 @property
41 def _inducing_inv_root(self):
42 if not self.training and hasattr(self, "_cached_kernel_inv_root"):
43 return self._cached_kernel_inv_root
44 else:
45 chol = psd_safe_cholesky(self._inducing_mat, upper=True)
46 eye = torch.eye(chol.size(-1), device=chol.device, dtype=chol.dtype)
47 inv_root = torch.triangular_solve(eye, chol)[0]
48
49 res = inv_root
50 if not self.training:
51 self._cached_kernel_inv_root = res
52 return res
53
54 def _get_covariance(self, x1, x2):
55 k_ux1 = delazify(self.base_kernel(x1, self.inducing_points))
56 if torch.equal(x1, x2):
57 covar = RootLazyTensor(k_ux1.matmul(self._inducing_inv_root))
58
59 # Diagonal correction for predictive posterior
60 correction = (self.base_kernel(x1, x2, diag=True) - covar.diag()).clamp(0, math.inf)
61 covar = PsdSumLazyTensor(covar, DiagLazyTensor(correction))
62 else:
63 k_ux2 = delazify(self.base_kernel(x2, self.inducing_points))
64 covar = MatmulLazyTensor(
65 k_ux1.matmul(self._inducing_inv_root), k_ux2.matmul(self._inducing_inv_root).transpose(-1, -2)
66 )
67
68 return covar
69
70 def _covar_diag(self, inputs):
71 if inputs.ndimension() == 1:
72 inputs = inputs.unsqueeze(1)
73
74 # Get diagonal of covar
75 covar_diag = delazify(self.base_kernel(inputs, diag=True))
76 return DiagLazyTensor(covar_diag)
77
78 def forward(self, x1, x2, diag=False, **kwargs):
79 covar = self._get_covariance(x1, x2)
80
81 if self.training:
82 if not torch.equal(x1, x2):
83 raise RuntimeError("x1 should equal x2 in training mode")
84 zero_mean = torch.zeros_like(x1.select(-1, 0))
85 new_added_loss_term = InducingPointKernelAddedLossTerm(
86 MultivariateNormal(zero_mean, self._covar_diag(x1)),
87 MultivariateNormal(zero_mean, covar),
88 self.likelihood,
89 )
90 self.update_added_loss_term("inducing_point_loss_term", new_added_loss_term)
91
92 if diag:
93 return covar.diag()
94 else:
95 return covar
96
97 def num_outputs_per_input(self, x1, x2):
98 return self.base_kernel.num_outputs_per_input(x1, x2)
99
[end of gpytorch/kernels/inducing_point_kernel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gpytorch/kernels/inducing_point_kernel.py b/gpytorch/kernels/inducing_point_kernel.py
--- a/gpytorch/kernels/inducing_point_kernel.py
+++ b/gpytorch/kernels/inducing_point_kernel.py
@@ -2,6 +2,7 @@
import math
import torch
+import copy
from .kernel import Kernel
from ..lazy import delazify, DiagLazyTensor, MatmulLazyTensor, RootLazyTensor, PsdSumLazyTensor
from ..distributions import MultivariateNormal
@@ -96,3 +97,33 @@
def num_outputs_per_input(self, x1, x2):
return self.base_kernel.num_outputs_per_input(x1, x2)
+
+ def __deepcopy__(self, memo):
+ replace_inv_root = False
+ replace_kernel_mat = False
+
+ if hasattr(self, "_cached_kernel_inv_root"):
+ replace_inv_root = True
+ kernel_inv_root = self._cached_kernel_inv_root
+ self._cached_kernel_inv_root = None
+ if hasattr(self, "_cached_kernel_mat"):
+ replace_kernel_mat = True
+ kernel_mat = self._cached_kernel_mat
+ self._cached_kernel_mat = None
+
+ deepcopy_method = self.__deepcopy__
+ self.__deepcopy__ = None
+ cp = copy.deepcopy(self, memo)
+
+ self.__deepcopy__ = deepcopy_method
+ cp.__deepcopy__ = deepcopy_method
+
+ if replace_inv_root:
+ self._cached_kernel_inv_root = kernel_inv_root
+ cp._cached_kernel_inv_root = kernel_inv_root
+
+ if replace_kernel_mat:
+ self._cached_kernel_mat = kernel_mat
+ cp._cached_kernel_mat = kernel_mat
+
+ return cp
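With the cached tensors swapped out for the duration of the copy, the deepcopy inside `get_fantasy_model` no longer trips over non-leaf tensors, so the repro from the issue should now run to completion (sketch, reusing the model built in that repro):
```python
model = model.get_fantasy_model(torch.randn(1, 5), torch.randn(1))
```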
| {"golden_diff": "diff --git a/gpytorch/kernels/inducing_point_kernel.py b/gpytorch/kernels/inducing_point_kernel.py\n--- a/gpytorch/kernels/inducing_point_kernel.py\n+++ b/gpytorch/kernels/inducing_point_kernel.py\n@@ -2,6 +2,7 @@\n \n import math\n import torch\n+import copy\n from .kernel import Kernel\n from ..lazy import delazify, DiagLazyTensor, MatmulLazyTensor, RootLazyTensor, PsdSumLazyTensor\n from ..distributions import MultivariateNormal\n@@ -96,3 +97,33 @@\n \n def num_outputs_per_input(self, x1, x2):\n return self.base_kernel.num_outputs_per_input(x1, x2)\n+\n+ def __deepcopy__(self, memo):\n+ replace_inv_root = False\n+ replace_kernel_mat = False\n+\n+ if hasattr(self, \"_cached_kernel_inv_root\"):\n+ replace_inv_root = True\n+ kernel_inv_root = self._cached_kernel_inv_root\n+ self._cached_kernel_inv_root = None\n+ if hasattr(self, \"_cached_kernel_mat\"):\n+ replace_kernel_mat = True\n+ kernel_mat = self._cached_kernel_mat\n+ self._cached_kernel_mat = None\n+\n+ deepcopy_method = self.__deepcopy__\n+ self.__deepcopy__ = None\n+ cp = copy.deepcopy(self, memo)\n+\n+ self.__deepcopy__ = deepcopy_method\n+ cp.__deepcopy__ = deepcopy_method\n+\n+ if replace_inv_root:\n+ self._cached_kernel_inv_root = kernel_inv_root\n+ cp._cached_kernel_inv_root = kernel_inv_root\n+\n+ if replace_kernel_mat:\n+ self._cached_kernel_mat = kernel_mat\n+ cp._cached_kernel_mat = kernel_mat\n+\n+ return cp\n", "issue": "[Bug] get_fantasy_model does not work for SGPR with InducingPointKernel\n# \ud83d\udc1b Bug\r\n\r\nNot sure if this should be considered a bug or a feature request, but gpytorch's implementation of SGPR using the InducingPointKernel kernel seems to not support get_fantasy_model.\r\n\r\n## To reproduce\r\nI am including the smallest mwe (or should I say mnwe) here. Note that I get the same behaviour by taking the [example tutorial for SGPR](https://gpytorch.readthedocs.io/en/latest/examples/05_Scalable_GP_Regression_Multidimensional/SGPR_Example_CUDA.html) and add a get_fantasy_model added at the end. 
I can post that too if required, but it is longer and might clutter the ticket.\r\n\r\n**Code snippet to reproduce**\r\n```python\r\nimport gpytorch\r\nimport torch\r\nfrom gpytorch.kernels import ScaleKernel, RBFKernel, InducingPointKernel\r\nfrom gpytorch.distributions import MultivariateNormal\r\nfrom gpytorch.likelihoods import GaussianLikelihood\r\nfrom gpytorch.means import ConstantMean\r\n\r\nclass GPRegressionModel(gpytorch.models.ExactGP):\r\n def __init__(self, train_x, train_y, likelihood):\r\n super(GPRegressionModel, self).__init__(train_x, train_y, likelihood)\r\n self.mean_module = ConstantMean()\r\n self.base_covar_module = ScaleKernel(RBFKernel())\r\n self.covar_module = InducingPointKernel(self.base_covar_module, inducing_points=train_x[:500, :], likelihood=likelihood)\r\n\r\n def forward(self, x):\r\n mean_x = self.mean_module(x)\r\n covar_x = self.covar_module(x)\r\n return MultivariateNormal(mean_x, covar_x)\r\n\r\ntrain_X = torch.randn((100,5)).to(\"cpu\")\r\ntrain_y = torch.randn((100)).to(\"cpu\")\r\n\r\nlikelihood = GaussianLikelihood()\r\n\r\nmodel = GPRegressionModel(train_X, train_y, likelihood)\r\nmodel.train()\r\nmodel.eval()\r\n\r\ntest_pred = model(torch.randn((1,5)).to(\"cpu\"))\r\n\r\nmodel = model.get_fantasy_model(torch.randn((1,5)).to(\"cpu\"), torch.randn((1)).to(\"cpu\"))\r\n```\r\n\r\n**Stack trace/error message**\r\n```\r\nTraceback (most recent call last):\r\n File \"mwe_sgpr_fantasy.py\", line 31, in <module>\r\n model = model.get_fantasy_model(torch.randn((1,5)).to(\"cpu\"), torch.randn((1)).to(\"cpu\"))\r\n File \"/home/user/miniconda3/lib/python3.7/site-packages/gpytorch/models/exact_gp.py\", line 173, in get_fantasy_model\r\n new_model = deepcopy(self)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 180, in deepcopy\r\n y = _reconstruct(x, memo, *rv)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 280, in _reconstruct\r\n state = deepcopy(state, memo)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 150, in deepcopy\r\n y = copier(x, memo)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 240, in _deepcopy_dict\r\n y[deepcopy(key, memo)] = deepcopy(value, memo)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 180, in deepcopy\r\n y = _reconstruct(x, memo, *rv)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 306, in _reconstruct\r\n value = deepcopy(value, memo)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 180, in deepcopy\r\n y = _reconstruct(x, memo, *rv)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 280, in _reconstruct\r\n state = deepcopy(state, memo)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 150, in deepcopy\r\n y = copier(x, memo)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 240, in _deepcopy_dict\r\n y[deepcopy(key, memo)] = deepcopy(value, memo)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 161, in deepcopy\r\n y = copier(memo)\r\n File \"/home/user/miniconda3/lib/python3.7/site-packages/torch/tensor.py\", line 23, in __deepcopy__\r\n raise RuntimeError(\"Only Tensors created explicitly by the user \"\r\nRuntimeError: Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment\r\n```\r\n\r\n## Expected Behavior\r\n\r\nI would expect a fantasized model to be returned efficiently.\r\n\r\n## System information\r\n\r\n**Please complete the following information:**\r\n- GPyTorch Version 0.3.3\r\n\r\n\r\n- PyTorch Version 
1.1.0\r\n- Ubuntu 18.04\r\n\r\n## Additional context\r\nIt seems that during the update, the `new_model = deepcopy(self)` tries to copy `self._inducing_inv_root` but detects that it is trainable by autograd and balks. I guess gpytorch made this design choice because of the goal of optimizing the inducing points as a hyperparameter, but as a tradeoff it does not allow for efficient updates.\r\n\r\nSo far I tried to replace the inducing points with a non-trainable version by setting `requires_grad` to `False`, but it seems to not help. I would guess that [any of these tensors multiplications](https://github.com/cornellius-gp/gpytorch/blob/master/gpytorch/kernels/inducing_point_kernel.py#L45-L47) in the implementation of `_inducing_inv_root` could end up reactivating autograd, and I am afraid that without more knowledge of gpytorch's internals patching them one-by-one might end up in a long whack-a-mole.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport math\nimport torch\nfrom .kernel import Kernel\nfrom ..lazy import delazify, DiagLazyTensor, MatmulLazyTensor, RootLazyTensor, PsdSumLazyTensor\nfrom ..distributions import MultivariateNormal\nfrom ..mlls import InducingPointKernelAddedLossTerm\nfrom ..utils.cholesky import psd_safe_cholesky\n\n\nclass InducingPointKernel(Kernel):\n def __init__(self, base_kernel, inducing_points, likelihood, active_dims=None):\n super(InducingPointKernel, self).__init__(active_dims=active_dims)\n self.base_kernel = base_kernel\n self.likelihood = likelihood\n\n if inducing_points.ndimension() == 1:\n inducing_points = inducing_points.unsqueeze(-1)\n if inducing_points.ndimension() != 2:\n raise RuntimeError(\"Inducing points should be 2 dimensional\")\n self.register_parameter(name=\"inducing_points\", parameter=torch.nn.Parameter(inducing_points))\n self.register_added_loss_term(\"inducing_point_loss_term\")\n\n def train(self, mode=True):\n if hasattr(self, \"_cached_kernel_mat\"):\n del self._cached_kernel_mat\n return super(InducingPointKernel, self).train(mode)\n\n @property\n def _inducing_mat(self):\n if not self.training and hasattr(self, \"_cached_kernel_mat\"):\n return self._cached_kernel_mat\n else:\n res = delazify(self.base_kernel(self.inducing_points, self.inducing_points))\n if not self.training:\n self._cached_kernel_mat = res\n return res\n\n @property\n def _inducing_inv_root(self):\n if not self.training and hasattr(self, \"_cached_kernel_inv_root\"):\n return self._cached_kernel_inv_root\n else:\n chol = psd_safe_cholesky(self._inducing_mat, upper=True)\n eye = torch.eye(chol.size(-1), device=chol.device, dtype=chol.dtype)\n inv_root = torch.triangular_solve(eye, chol)[0]\n\n res = inv_root\n if not self.training:\n self._cached_kernel_inv_root = res\n return res\n\n def _get_covariance(self, x1, x2):\n k_ux1 = delazify(self.base_kernel(x1, self.inducing_points))\n if torch.equal(x1, x2):\n covar = RootLazyTensor(k_ux1.matmul(self._inducing_inv_root))\n\n # Diagonal correction for predictive posterior\n correction = (self.base_kernel(x1, x2, diag=True) - covar.diag()).clamp(0, math.inf)\n covar = PsdSumLazyTensor(covar, DiagLazyTensor(correction))\n else:\n k_ux2 = delazify(self.base_kernel(x2, self.inducing_points))\n covar = MatmulLazyTensor(\n k_ux1.matmul(self._inducing_inv_root), k_ux2.matmul(self._inducing_inv_root).transpose(-1, -2)\n )\n\n return covar\n\n def _covar_diag(self, inputs):\n if inputs.ndimension() == 1:\n inputs = inputs.unsqueeze(1)\n\n # Get diagonal of covar\n covar_diag = 
delazify(self.base_kernel(inputs, diag=True))\n return DiagLazyTensor(covar_diag)\n\n def forward(self, x1, x2, diag=False, **kwargs):\n covar = self._get_covariance(x1, x2)\n\n if self.training:\n if not torch.equal(x1, x2):\n raise RuntimeError(\"x1 should equal x2 in training mode\")\n zero_mean = torch.zeros_like(x1.select(-1, 0))\n new_added_loss_term = InducingPointKernelAddedLossTerm(\n MultivariateNormal(zero_mean, self._covar_diag(x1)),\n MultivariateNormal(zero_mean, covar),\n self.likelihood,\n )\n self.update_added_loss_term(\"inducing_point_loss_term\", new_added_loss_term)\n\n if diag:\n return covar.diag()\n else:\n return covar\n\n def num_outputs_per_input(self, x1, x2):\n return self.base_kernel.num_outputs_per_input(x1, x2)\n", "path": "gpytorch/kernels/inducing_point_kernel.py"}]} | 3,011 | 400 |
gh_patches_debug_36193 | rasdani/github-patches | git_diff | microsoft__ptvsd-552 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Args passed to the user script on 'start without debugging' contain ptvsd args
## Environment data
- PTVSD version: Master
- OS and version: Windows 10
- Python version (& distribution if applicable, e.g. Anaconda): Any
- Using VS Code or Visual Studio: VSC
## Actual behavior
```
['c:\\Users\\kanadig\\.vscode\\extensions\\ms-python.python-2018.6.0\\pythonFiles\\experimental\\ptvsd\\ptvsd\\__main__.py', '--nodebug', '--host', 'localhost', '--port', '51225', 'c:\\scratch\\test.py', '--one', '--two', '--three']
```
## Expected behavior
```
['c:\\scratch\\test.py', '--one', '--two', '--three']
```
## Steps to reproduce:
1. Create a script file with this content:
```python
import sys
print(sys.argv)
```
2. Add `args` to python experimental launch configuration:
```json
{
"name": "PyExp: Current File",
"type": "pythonExperimental",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"args": ["--one", "--two", "--three"]
}
```
3. Run using **F5** and **Ctrl+F5**; the output should be the same in both cases.
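For illustration, a hypothetical helper mirroring the direction the accepted patch (shown further down in this record) takes for the no-debug path: everything up to the `'--'` separator belongs to pydevd and should be stripped before the script sees `sys.argv`.
```python
def _script_argv(filename, extra):
    """Hypothetical sketch: return the argv the user script should observe."""
    if '--' in extra:
        extra = list(extra[extra.index('--') + 1:])   # drop the pydevd portion
    else:
        extra = list(extra)
    return [filename] + extra

# _script_argv('c:\\scratch\\test.py', ['--port', '51225', '--', '--one', '--two', '--three'])
# -> ['c:\\scratch\\test.py', '--one', '--two', '--three']
```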
</issue>
<code>
[start of ptvsd/__main__.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License. See LICENSE in the project root
3 # for license information.
4
5 import argparse
6 import os.path
7 import sys
8
9 from ptvsd._local import debug_main, run_main
10 from ptvsd.socket import Address
11 from ptvsd.version import __version__, __author__ # noqa
12
13
14 ##################################
15 # the script
16
17 """
18 For the PyDevd CLI handling see:
19
20 https://github.com/fabioz/PyDev.Debugger/blob/master/_pydevd_bundle/pydevd_command_line_handling.py
21 https://github.com/fabioz/PyDev.Debugger/blob/master/pydevd.py#L1450 (main func)
22 """ # noqa
23
24 PYDEVD_OPTS = {
25 '--file',
26 '--client',
27 #'--port',
28 '--vm_type',
29 }
30
31 PYDEVD_FLAGS = {
32 '--DEBUG',
33 '--DEBUG_RECORD_SOCKET_READS',
34 '--cmd-line',
35 '--module',
36 '--multiproc',
37 '--multiprocess',
38 '--print-in-debugger-startup',
39 '--save-signatures',
40 '--save-threading',
41 '--save-asyncio',
42 '--server',
43 '--qt-support=auto',
44 }
45
46 USAGE = """
47 {0} [-h] [-V] [--nodebug] [--host HOST | --server-host HOST] --port PORT -m MODULE [arg ...]
48 {0} [-h] [-V] [--nodebug] [--host HOST | --server-host HOST] --port PORT FILENAME [arg ...]
49 """ # noqa
50
51
52 PYDEVD_DEFAULTS = {
53 '--qt-support=auto',
54 }
55
56
57 def _set_pydevd_defaults(pydevd_args):
58 args_to_append = []
59 for arg in PYDEVD_DEFAULTS:
60 if arg not in pydevd_args:
61 args_to_append.append(arg)
62 return pydevd_args + args_to_append
63
64
65 def parse_args(argv=None):
66 """Return the parsed args to use in main()."""
67 if argv is None:
68 argv = sys.argv
69 prog = argv[0]
70 if prog == __file__:
71 prog = '{} -m ptvsd'.format(os.path.basename(sys.executable))
72 else:
73 prog = argv[0]
74 argv = argv[1:]
75
76 supported, pydevd, script = _group_args(argv)
77 args = _parse_args(prog, supported)
78 pydevd = _set_pydevd_defaults(pydevd)
79 extra = pydevd + ['--']
80 if script:
81 extra += script
82 return args, extra
83
84
85 def _group_args(argv):
86 supported = []
87 pydevd = []
88 script = []
89
90 try:
91 pos = argv.index('--')
92 except ValueError:
93 script = []
94 else:
95 script = argv[pos + 1:]
96 argv = argv[:pos]
97
98 for arg in argv:
99 if arg == '-h' or arg == '--help':
100 return argv, [], script
101
102 gottarget = False
103 skip = 0
104 for i in range(len(argv)):
105 if skip:
106 skip -= 1
107 continue
108
109 arg = argv[i]
110 try:
111 nextarg = argv[i + 1]
112 except IndexError:
113 nextarg = None
114
115 # TODO: Deprecate the PyDevd arg support.
116 # PyDevd support
117 if gottarget:
118 script = argv[i:] + script
119 break
120 if arg == '--client':
121 arg = '--host'
122 elif arg == '--file':
123 if nextarg is None: # The filename is missing...
124 pydevd.append(arg)
125 continue # This will get handled later.
126 if nextarg.endswith(':') and '--module' in pydevd:
127 pydevd.remove('--module')
128 arg = '-m'
129 argv[i + 1] = nextarg = nextarg[:-1]
130 else:
131 arg = nextarg
132 skip += 1
133
134 if arg in PYDEVD_OPTS:
135 pydevd.append(arg)
136 if nextarg is not None:
137 pydevd.append(nextarg)
138 skip += 1
139 elif arg in PYDEVD_FLAGS:
140 pydevd.append(arg)
141 elif arg == '--nodebug':
142 supported.append(arg)
143
144 # ptvsd support
145 elif arg in ('--host', '--server-host', '--port', '-m'):
146 if arg == '-m':
147 gottarget = True
148 supported.append(arg)
149 if nextarg is not None:
150 supported.append(nextarg)
151 skip += 1
152 elif arg in ('--single-session',):
153 supported.append(arg)
154 elif not arg.startswith('-'):
155 supported.append(arg)
156 gottarget = True
157
158 # unsupported arg
159 else:
160 supported.append(arg)
161 break
162
163 return supported, pydevd, script
164
165
166 def _parse_args(prog, argv):
167 parser = argparse.ArgumentParser(
168 prog=prog,
169 usage=USAGE.format(prog),
170 )
171 parser.add_argument('--nodebug', action='store_true')
172 host = parser.add_mutually_exclusive_group()
173 host.add_argument('--host')
174 host.add_argument('--server-host')
175 parser.add_argument('--port', type=int, required=True)
176
177 target = parser.add_mutually_exclusive_group(required=True)
178 target.add_argument('-m', dest='module')
179 target.add_argument('filename', nargs='?')
180
181 parser.add_argument('--single-session', action='store_true')
182 parser.add_argument('-V', '--version', action='version')
183 parser.version = __version__
184
185 args = parser.parse_args(argv)
186 ns = vars(args)
187
188 serverhost = ns.pop('server_host', None)
189 clienthost = ns.pop('host', None)
190 if serverhost:
191 args.address = Address.as_server(serverhost, ns.pop('port'))
192 elif not clienthost:
193 if args.nodebug:
194 args.address = Address.as_client(clienthost, ns.pop('port'))
195 else:
196 args.address = Address.as_server(clienthost, ns.pop('port'))
197 else:
198 args.address = Address.as_client(clienthost, ns.pop('port'))
199
200 module = ns.pop('module')
201 filename = ns.pop('filename')
202 if module is None:
203 args.name = filename
204 args.kind = 'script'
205 else:
206 args.name = module
207 args.kind = 'module'
208 #if argv[-1] != args.name or (module and argv[-1] != '-m'):
209 # parser.error('script/module must be last arg')
210
211 return args
212
213
214 def main(addr, name, kind, extra=(), nodebug=False, **kwargs):
215 if nodebug:
216 run_main(addr, name, kind, *extra, **kwargs)
217 else:
218 debug_main(addr, name, kind, *extra, **kwargs)
219
220
221 if __name__ == '__main__':
222 args, extra = parse_args()
223 main(args.address, args.name, args.kind, extra, nodebug=args.nodebug,
224 singlesession=args.single_session)
225
[end of ptvsd/__main__.py]
[start of ptvsd/_local.py]
1 import sys
2
3 import pydevd
4
5 from ptvsd.pydevd_hooks import install
6 from ptvsd.runner import run as no_debug_runner
7 from ptvsd.socket import Address
8
9
10 ########################
11 # high-level functions
12
13 def debug_main(address, name, kind, *extra, **kwargs):
14 if kind == 'module':
15 run_module(address, name, *extra, **kwargs)
16 else:
17 run_file(address, name, *extra, **kwargs)
18
19
20 def run_main(address, name, kind, *extra, **kwargs):
21 no_debug_runner(address, name, kind == 'module', *extra, **kwargs)
22
23
24 ########################
25 # low-level functions
26
27 def run_module(address, modname, *extra, **kwargs):
28 """Run pydevd for the given module."""
29 addr = Address.from_raw(address)
30 if not addr.isserver:
31 kwargs['singlesession'] = True
32 run = kwargs.pop('_run', _run)
33 prog = kwargs.pop('_prog', sys.argv[0])
34 filename = modname + ':'
35 argv = _run_argv(addr, filename, extra, _prog=prog)
36 argv.insert(argv.index('--file'), '--module')
37 run(argv, addr, **kwargs)
38
39
40 def run_file(address, filename, *extra, **kwargs):
41 """Run pydevd for the given Python file."""
42 addr = Address.from_raw(address)
43 if not addr.isserver:
44 kwargs['singlesession'] = True
45 run = kwargs.pop('_run', _run)
46 prog = kwargs.pop('_prog', sys.argv[0])
47 argv = _run_argv(addr, filename, extra, _prog=prog)
48 run(argv, addr, **kwargs)
49
50
51 def _run_argv(address, filename, extra, _prog=sys.argv[0]):
52 """Convert the given values to an argv that pydevd.main() supports."""
53 if '--' in extra:
54 pydevd = list(extra[:extra.index('--')])
55 extra = list(extra[len(pydevd) + 1:])
56 else:
57 pydevd = []
58 extra = list(extra)
59
60 host, port = address
61 argv = [
62 _prog,
63 '--port', str(port),
64 ]
65 if not address.isserver:
66 argv.extend([
67 '--client', host or 'localhost',
68 ])
69 return argv + pydevd + [
70 '--file', filename,
71 ] + extra
72
73
74 def _run(argv, addr, _pydevd=pydevd, _install=install, **kwargs):
75 """Start pydevd with the given commandline args."""
76 #print(' '.join(argv))
77
78 # Pydevd assumes that the "__main__" module is the "pydevd" module
79 # and does some tricky stuff under that assumption. For example,
80 # when the debugger starts up it calls save_main_module()
81 # (in pydevd_bundle/pydevd_utils.py). That function explicitly sets
82 # sys.modules["pydevd"] to sys.modules["__main__"] and then sets
83 # the __main__ module to a new one. This makes some sense since
84 # it gives the debugged script a fresh __main__ module.
85 #
86 # This complicates things for us since we are running a different
87 # file (i.e. this one) as the __main__ module. Consequently,
88 # sys.modules["pydevd"] gets set to ptvsd/__main__.py. Subsequent
89 # imports of the "pydevd" module then return the wrong module. We
90 # work around this by avoiding lazy imports of the "pydevd" module.
91 # We also replace the __main__ module with the "pydevd" module here.
92 if sys.modules['__main__'].__file__ != _pydevd.__file__:
93 sys.modules['__main___orig'] = sys.modules['__main__']
94 sys.modules['__main__'] = _pydevd
95
96 daemon = _install(_pydevd, addr, **kwargs)
97 sys.argv[:] = argv
98 try:
99 _pydevd.main()
100 except SystemExit as ex:
101 daemon.exitcode = int(ex.code)
102 raise
103
[end of ptvsd/_local.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ptvsd/__main__.py b/ptvsd/__main__.py
--- a/ptvsd/__main__.py
+++ b/ptvsd/__main__.py
@@ -49,19 +49,6 @@
""" # noqa
-PYDEVD_DEFAULTS = {
- '--qt-support=auto',
-}
-
-
-def _set_pydevd_defaults(pydevd_args):
- args_to_append = []
- for arg in PYDEVD_DEFAULTS:
- if arg not in pydevd_args:
- args_to_append.append(arg)
- return pydevd_args + args_to_append
-
-
def parse_args(argv=None):
"""Return the parsed args to use in main()."""
if argv is None:
@@ -75,7 +62,7 @@
supported, pydevd, script = _group_args(argv)
args = _parse_args(prog, supported)
- pydevd = _set_pydevd_defaults(pydevd)
+ # '--' is used in _run_args to extract pydevd specific args
extra = pydevd + ['--']
if script:
extra += script
diff --git a/ptvsd/_local.py b/ptvsd/_local.py
--- a/ptvsd/_local.py
+++ b/ptvsd/_local.py
@@ -7,6 +7,19 @@
from ptvsd.socket import Address
+PYDEVD_DEFAULTS = {
+ '--qt-support=auto',
+}
+
+
+def _set_pydevd_defaults(pydevd_args):
+ args_to_append = []
+ for arg in PYDEVD_DEFAULTS:
+ if arg not in pydevd_args:
+ args_to_append.append(arg)
+ return pydevd_args + args_to_append
+
+
########################
# high-level functions
@@ -18,7 +31,10 @@
def run_main(address, name, kind, *extra, **kwargs):
- no_debug_runner(address, name, kind == 'module', *extra, **kwargs)
+ addr = Address.from_raw(address)
+ sys.argv[:] = _run_main_argv(name, extra)
+ runner = kwargs.pop('_runner', no_debug_runner)
+ runner(addr, name, kind == 'module', *extra, **kwargs)
########################
@@ -57,6 +73,7 @@
pydevd = []
extra = list(extra)
+ pydevd = _set_pydevd_defaults(pydevd)
host, port = address
argv = [
_prog,
@@ -71,6 +88,15 @@
] + extra
+def _run_main_argv(filename, extra):
+ if '--' in extra:
+ pydevd = list(extra[:extra.index('--')])
+ extra = list(extra[len(pydevd) + 1:])
+ else:
+ extra = list(extra)
+ return [filename] + extra
+
+
def _run(argv, addr, _pydevd=pydevd, _install=install, **kwargs):
"""Start pydevd with the given commandline args."""
#print(' '.join(argv))
| {"golden_diff": "diff --git a/ptvsd/__main__.py b/ptvsd/__main__.py\n--- a/ptvsd/__main__.py\n+++ b/ptvsd/__main__.py\n@@ -49,19 +49,6 @@\n \"\"\" # noqa\n \n \n-PYDEVD_DEFAULTS = {\n- '--qt-support=auto',\n-}\n-\n-\n-def _set_pydevd_defaults(pydevd_args):\n- args_to_append = []\n- for arg in PYDEVD_DEFAULTS:\n- if arg not in pydevd_args:\n- args_to_append.append(arg)\n- return pydevd_args + args_to_append\n-\n-\n def parse_args(argv=None):\n \"\"\"Return the parsed args to use in main().\"\"\"\n if argv is None:\n@@ -75,7 +62,7 @@\n \n supported, pydevd, script = _group_args(argv)\n args = _parse_args(prog, supported)\n- pydevd = _set_pydevd_defaults(pydevd)\n+ # '--' is used in _run_args to extract pydevd specific args\n extra = pydevd + ['--']\n if script:\n extra += script\ndiff --git a/ptvsd/_local.py b/ptvsd/_local.py\n--- a/ptvsd/_local.py\n+++ b/ptvsd/_local.py\n@@ -7,6 +7,19 @@\n from ptvsd.socket import Address\n \n \n+PYDEVD_DEFAULTS = {\n+ '--qt-support=auto',\n+}\n+\n+\n+def _set_pydevd_defaults(pydevd_args):\n+ args_to_append = []\n+ for arg in PYDEVD_DEFAULTS:\n+ if arg not in pydevd_args:\n+ args_to_append.append(arg)\n+ return pydevd_args + args_to_append\n+\n+\n ########################\n # high-level functions\n \n@@ -18,7 +31,10 @@\n \n \n def run_main(address, name, kind, *extra, **kwargs):\n- no_debug_runner(address, name, kind == 'module', *extra, **kwargs)\n+ addr = Address.from_raw(address)\n+ sys.argv[:] = _run_main_argv(name, extra)\n+ runner = kwargs.pop('_runner', no_debug_runner)\n+ runner(addr, name, kind == 'module', *extra, **kwargs)\n \n \n ########################\n@@ -57,6 +73,7 @@\n pydevd = []\n extra = list(extra)\n \n+ pydevd = _set_pydevd_defaults(pydevd)\n host, port = address\n argv = [\n _prog,\n@@ -71,6 +88,15 @@\n ] + extra\n \n \n+def _run_main_argv(filename, extra):\n+ if '--' in extra:\n+ pydevd = list(extra[:extra.index('--')])\n+ extra = list(extra[len(pydevd) + 1:])\n+ else:\n+ extra = list(extra)\n+ return [filename] + extra\n+\n+\n def _run(argv, addr, _pydevd=pydevd, _install=install, **kwargs):\n \"\"\"Start pydevd with the given commandline args.\"\"\"\n #print(' '.join(argv))\n", "issue": "Args passed to user script on 'start without debugging' contains ptvsd args\n## Environment data\r\n\r\n- PTVSD version: Master\r\n- OS and version: Windows 10\r\n- Python version (& distribution if applicable, e.g. Anaconda): Any\r\n- Using VS Code or Visual Studio: VSC\r\n\r\n## Actual behavior\r\n\r\n```\r\n['c:\\\\Users\\\\kanadig\\\\.vscode\\\\extensions\\\\ms-python.python-2018.6.0\\\\pythonFiles\\\\experimental\\\\ptvsd\\\\ptvsd\\\\__main__.py', '--nodebug', '--host', 'localhost', '--port', '51225', 'c:\\\\scratch\\\\test.py', '--one', '--two', '--three']\r\n```\r\n\r\n## Expected behavior\r\n\r\n```\r\n['c:\\\\scratch\\\\test.py', '--one', '--two', '--three']\r\n```\r\n\r\n## Steps to reproduce:\r\n1. Create a script file with this content:\r\n```python\r\nimport sys\r\nprint(sys.argv)\r\n```\r\n2. Add `args` to python experimental launch configuration:\r\n```json\r\n{\r\n \"name\": \"PyExp: Current File\",\r\n \"type\": \"pythonExperimental\",\r\n \"request\": \"launch\",\r\n \"program\": \"${file}\",\r\n \"console\": \"integratedTerminal\",\r\n \"args\": [\"--one\", \"--two\", \"--three\"]\r\n}\r\n```\r\n2. Run using **F5** and **Ctrl+F5**, the output should be same.\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See LICENSE in the project root\n# for license information.\n\nimport argparse\nimport os.path\nimport sys\n\nfrom ptvsd._local import debug_main, run_main\nfrom ptvsd.socket import Address\nfrom ptvsd.version import __version__, __author__ # noqa\n\n\n##################################\n# the script\n\n\"\"\"\nFor the PyDevd CLI handling see:\n\n https://github.com/fabioz/PyDev.Debugger/blob/master/_pydevd_bundle/pydevd_command_line_handling.py\n https://github.com/fabioz/PyDev.Debugger/blob/master/pydevd.py#L1450 (main func)\n\"\"\" # noqa\n\nPYDEVD_OPTS = {\n '--file',\n '--client',\n #'--port',\n '--vm_type',\n}\n\nPYDEVD_FLAGS = {\n '--DEBUG',\n '--DEBUG_RECORD_SOCKET_READS',\n '--cmd-line',\n '--module',\n '--multiproc',\n '--multiprocess',\n '--print-in-debugger-startup',\n '--save-signatures',\n '--save-threading',\n '--save-asyncio',\n '--server',\n '--qt-support=auto',\n}\n\nUSAGE = \"\"\"\n {0} [-h] [-V] [--nodebug] [--host HOST | --server-host HOST] --port PORT -m MODULE [arg ...]\n {0} [-h] [-V] [--nodebug] [--host HOST | --server-host HOST] --port PORT FILENAME [arg ...]\n\"\"\" # noqa\n\n\nPYDEVD_DEFAULTS = {\n '--qt-support=auto',\n}\n\n\ndef _set_pydevd_defaults(pydevd_args):\n args_to_append = []\n for arg in PYDEVD_DEFAULTS:\n if arg not in pydevd_args:\n args_to_append.append(arg)\n return pydevd_args + args_to_append\n\n\ndef parse_args(argv=None):\n \"\"\"Return the parsed args to use in main().\"\"\"\n if argv is None:\n argv = sys.argv\n prog = argv[0]\n if prog == __file__:\n prog = '{} -m ptvsd'.format(os.path.basename(sys.executable))\n else:\n prog = argv[0]\n argv = argv[1:]\n\n supported, pydevd, script = _group_args(argv)\n args = _parse_args(prog, supported)\n pydevd = _set_pydevd_defaults(pydevd)\n extra = pydevd + ['--']\n if script:\n extra += script\n return args, extra\n\n\ndef _group_args(argv):\n supported = []\n pydevd = []\n script = []\n\n try:\n pos = argv.index('--')\n except ValueError:\n script = []\n else:\n script = argv[pos + 1:]\n argv = argv[:pos]\n\n for arg in argv:\n if arg == '-h' or arg == '--help':\n return argv, [], script\n\n gottarget = False\n skip = 0\n for i in range(len(argv)):\n if skip:\n skip -= 1\n continue\n\n arg = argv[i]\n try:\n nextarg = argv[i + 1]\n except IndexError:\n nextarg = None\n\n # TODO: Deprecate the PyDevd arg support.\n # PyDevd support\n if gottarget:\n script = argv[i:] + script\n break\n if arg == '--client':\n arg = '--host'\n elif arg == '--file':\n if nextarg is None: # The filename is missing...\n pydevd.append(arg)\n continue # This will get handled later.\n if nextarg.endswith(':') and '--module' in pydevd:\n pydevd.remove('--module')\n arg = '-m'\n argv[i + 1] = nextarg = nextarg[:-1]\n else:\n arg = nextarg\n skip += 1\n\n if arg in PYDEVD_OPTS:\n pydevd.append(arg)\n if nextarg is not None:\n pydevd.append(nextarg)\n skip += 1\n elif arg in PYDEVD_FLAGS:\n pydevd.append(arg)\n elif arg == '--nodebug':\n supported.append(arg)\n\n # ptvsd support\n elif arg in ('--host', '--server-host', '--port', '-m'):\n if arg == '-m':\n gottarget = True\n supported.append(arg)\n if nextarg is not None:\n supported.append(nextarg)\n skip += 1\n elif arg in ('--single-session',):\n supported.append(arg)\n elif not arg.startswith('-'):\n supported.append(arg)\n gottarget = True\n\n # unsupported arg\n else:\n supported.append(arg)\n break\n\n return supported, pydevd, script\n\n\ndef _parse_args(prog, argv):\n parser = argparse.ArgumentParser(\n prog=prog,\n usage=USAGE.format(prog),\n )\n 
parser.add_argument('--nodebug', action='store_true')\n host = parser.add_mutually_exclusive_group()\n host.add_argument('--host')\n host.add_argument('--server-host')\n parser.add_argument('--port', type=int, required=True)\n\n target = parser.add_mutually_exclusive_group(required=True)\n target.add_argument('-m', dest='module')\n target.add_argument('filename', nargs='?')\n\n parser.add_argument('--single-session', action='store_true')\n parser.add_argument('-V', '--version', action='version')\n parser.version = __version__\n\n args = parser.parse_args(argv)\n ns = vars(args)\n\n serverhost = ns.pop('server_host', None)\n clienthost = ns.pop('host', None)\n if serverhost:\n args.address = Address.as_server(serverhost, ns.pop('port'))\n elif not clienthost:\n if args.nodebug:\n args.address = Address.as_client(clienthost, ns.pop('port'))\n else:\n args.address = Address.as_server(clienthost, ns.pop('port'))\n else:\n args.address = Address.as_client(clienthost, ns.pop('port'))\n\n module = ns.pop('module')\n filename = ns.pop('filename')\n if module is None:\n args.name = filename\n args.kind = 'script'\n else:\n args.name = module\n args.kind = 'module'\n #if argv[-1] != args.name or (module and argv[-1] != '-m'):\n # parser.error('script/module must be last arg')\n\n return args\n\n\ndef main(addr, name, kind, extra=(), nodebug=False, **kwargs):\n if nodebug:\n run_main(addr, name, kind, *extra, **kwargs)\n else:\n debug_main(addr, name, kind, *extra, **kwargs)\n\n\nif __name__ == '__main__':\n args, extra = parse_args()\n main(args.address, args.name, args.kind, extra, nodebug=args.nodebug,\n singlesession=args.single_session)\n", "path": "ptvsd/__main__.py"}, {"content": "import sys\n\nimport pydevd\n\nfrom ptvsd.pydevd_hooks import install\nfrom ptvsd.runner import run as no_debug_runner\nfrom ptvsd.socket import Address\n\n\n########################\n# high-level functions\n\ndef debug_main(address, name, kind, *extra, **kwargs):\n if kind == 'module':\n run_module(address, name, *extra, **kwargs)\n else:\n run_file(address, name, *extra, **kwargs)\n\n\ndef run_main(address, name, kind, *extra, **kwargs):\n no_debug_runner(address, name, kind == 'module', *extra, **kwargs)\n\n\n########################\n# low-level functions\n\ndef run_module(address, modname, *extra, **kwargs):\n \"\"\"Run pydevd for the given module.\"\"\"\n addr = Address.from_raw(address)\n if not addr.isserver:\n kwargs['singlesession'] = True\n run = kwargs.pop('_run', _run)\n prog = kwargs.pop('_prog', sys.argv[0])\n filename = modname + ':'\n argv = _run_argv(addr, filename, extra, _prog=prog)\n argv.insert(argv.index('--file'), '--module')\n run(argv, addr, **kwargs)\n\n\ndef run_file(address, filename, *extra, **kwargs):\n \"\"\"Run pydevd for the given Python file.\"\"\"\n addr = Address.from_raw(address)\n if not addr.isserver:\n kwargs['singlesession'] = True\n run = kwargs.pop('_run', _run)\n prog = kwargs.pop('_prog', sys.argv[0])\n argv = _run_argv(addr, filename, extra, _prog=prog)\n run(argv, addr, **kwargs)\n\n\ndef _run_argv(address, filename, extra, _prog=sys.argv[0]):\n \"\"\"Convert the given values to an argv that pydevd.main() supports.\"\"\"\n if '--' in extra:\n pydevd = list(extra[:extra.index('--')])\n extra = list(extra[len(pydevd) + 1:])\n else:\n pydevd = []\n extra = list(extra)\n\n host, port = address\n argv = [\n _prog,\n '--port', str(port),\n ]\n if not address.isserver:\n argv.extend([\n '--client', host or 'localhost',\n ])\n return argv + pydevd + [\n '--file', filename,\n ] 
+ extra\n\n\ndef _run(argv, addr, _pydevd=pydevd, _install=install, **kwargs):\n \"\"\"Start pydevd with the given commandline args.\"\"\"\n #print(' '.join(argv))\n\n # Pydevd assumes that the \"__main__\" module is the \"pydevd\" module\n # and does some tricky stuff under that assumption. For example,\n # when the debugger starts up it calls save_main_module()\n # (in pydevd_bundle/pydevd_utils.py). That function explicitly sets\n # sys.modules[\"pydevd\"] to sys.modules[\"__main__\"] and then sets\n # the __main__ module to a new one. This makes some sense since\n # it gives the debugged script a fresh __main__ module.\n #\n # This complicates things for us since we are running a different\n # file (i.e. this one) as the __main__ module. Consequently,\n # sys.modules[\"pydevd\"] gets set to ptvsd/__main__.py. Subsequent\n # imports of the \"pydevd\" module then return the wrong module. We\n # work around this by avoiding lazy imports of the \"pydevd\" module.\n # We also replace the __main__ module with the \"pydevd\" module here.\n if sys.modules['__main__'].__file__ != _pydevd.__file__:\n sys.modules['__main___orig'] = sys.modules['__main__']\n sys.modules['__main__'] = _pydevd\n\n daemon = _install(_pydevd, addr, **kwargs)\n sys.argv[:] = argv\n try:\n _pydevd.main()\n except SystemExit as ex:\n daemon.exitcode = int(ex.code)\n raise\n", "path": "ptvsd/_local.py"}]} | 4,076 | 706 |
gh_patches_debug_15343 | rasdani/github-patches | git_diff | Pylons__pyramid-1131 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No way to add query parameters without a value
I occasionally need to put a hint in the query string for a URL, which is essentially a parameter without a value. This can be important for passing information to JavaScript or as a hint to GA. For example, I may need to use `http://localhost/dashboard?new-user` as the URL when I redirect a new user to the dashboard after completing registration.
Intuitively I expected this to work:
``` python
return HTTPFound(request.route_url('dashboard', _query={'new-user': None}))
```
but that returns `/dashboard?new-user=None` which is not very pretty.
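For reference, a minimal standalone sketch of this behaviour (purely illustrative; the helper below is not part of Pyramid's API):
```python
from urllib.parse import quote_plus

def encode_query(params):
    # Keys whose value is None are emitted without "=None".
    parts = []
    for key, value in params.items():
        if value is None:
            parts.append(quote_plus(str(key)))                          # -> "new-user"
        else:
            parts.append("%s=%s" % (quote_plus(str(key)), quote_plus(str(value))))
    return "&".join(parts)

print(encode_query({"new-user": None, "page": 2}))                      # new-user&page=2
```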
</issue>
<code>
[start of pyramid/encode.py]
1 from pyramid.compat import (
2 text_type,
3 binary_type,
4 is_nonstr_iter,
5 url_quote as _url_quote,
6 url_quote_plus as quote_plus, # bw compat api (dnr)
7 )
8
9 def url_quote(s, safe=''): # bw compat api
10 return _url_quote(s, safe=safe)
11
12 def urlencode(query, doseq=True):
13 """
14 An alternate implementation of Python's stdlib `urllib.urlencode
15 function <http://docs.python.org/library/urllib.html>`_ which
16 accepts unicode keys and values within the ``query``
17 dict/sequence; all Unicode keys and values are first converted to
18 UTF-8 before being used to compose the query string.
19
20 The value of ``query`` must be a sequence of two-tuples
21 representing key/value pairs *or* an object (often a dictionary)
22 with an ``.items()`` method that returns a sequence of two-tuples
23 representing key/value pairs.
24
25 For minimal calling convention backwards compatibility, this
26 version of urlencode accepts *but ignores* a second argument
27 conventionally named ``doseq``. The Python stdlib version behaves
28 differently when ``doseq`` is False and when a sequence is
29 presented as one of the values. This version always behaves in
30 the ``doseq=True`` mode, no matter what the value of the second
31 argument.
32
33 See the Python stdlib documentation for ``urllib.urlencode`` for
34 more information.
35 """
36 try:
37 # presumed to be a dictionary
38 query = query.items()
39 except AttributeError:
40 pass
41
42 result = ''
43 prefix = ''
44
45 for (k, v) in query:
46 k = _enc(k)
47
48 if is_nonstr_iter(v):
49 for x in v:
50 x = _enc(x)
51 result += '%s%s=%s' % (prefix, k, x)
52 prefix = '&'
53 else:
54 v = _enc(v)
55 result += '%s%s=%s' % (prefix, k, v)
56
57 prefix = '&'
58
59 return result
60
61 def _enc(val):
62 cls = val.__class__
63 if cls is text_type:
64 val = val.encode('utf-8')
65 elif cls is not binary_type:
66 val = str(val).encode('utf-8')
67 return quote_plus(val)
68
69
[end of pyramid/encode.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyramid/encode.py b/pyramid/encode.py
--- a/pyramid/encode.py
+++ b/pyramid/encode.py
@@ -32,6 +32,10 @@
See the Python stdlib documentation for ``urllib.urlencode`` for
more information.
+
+ .. versionchanged:: 1.5
+ In a key/value pair, if the value is ``None`` then it will be
+ dropped from the resulting output.
"""
try:
# presumed to be a dictionary
@@ -50,6 +54,8 @@
x = _enc(x)
result += '%s%s=%s' % (prefix, k, x)
prefix = '&'
+ elif v is None:
+ result += '%s%s=' % (prefix, k)
else:
v = _enc(v)
result += '%s%s=%s' % (prefix, k, v)
| {"golden_diff": "diff --git a/pyramid/encode.py b/pyramid/encode.py\n--- a/pyramid/encode.py\n+++ b/pyramid/encode.py\n@@ -32,6 +32,10 @@\n \n See the Python stdlib documentation for ``urllib.urlencode`` for\n more information.\n+\n+ .. versionchanged:: 1.5\n+ In a key/value pair, if the value is ``None`` then it will be\n+ dropped from the resulting output.\n \"\"\"\n try:\n # presumed to be a dictionary\n@@ -50,6 +54,8 @@\n x = _enc(x)\n result += '%s%s=%s' % (prefix, k, x)\n prefix = '&'\n+ elif v is None:\n+ result += '%s%s=' % (prefix, k)\n else:\n v = _enc(v)\n result += '%s%s=%s' % (prefix, k, v)\n", "issue": "No way to add query parameters without a value\nI occasionally need to put a hint in the query string for a URL, which is essentially a parameter without a value. This can be important to provide information to javascript or as a hint to GA. For example I may need to use `http://localhost/dashboard?new-user` as URL when I redirect a new user to the dashboard after completing registration.\n\nIntuitively I expected this to work:\n\n``` python\nreturn HTTPFound(request.route_url('dashboard', _query={'new-user': None}))\n```\n\nbut that returns `/dashboard?new-user=None` which is not very pretty.\n\n", "before_files": [{"content": "from pyramid.compat import (\n text_type,\n binary_type,\n is_nonstr_iter,\n url_quote as _url_quote,\n url_quote_plus as quote_plus, # bw compat api (dnr)\n )\n\ndef url_quote(s, safe=''): # bw compat api\n return _url_quote(s, safe=safe)\n\ndef urlencode(query, doseq=True):\n \"\"\"\n An alternate implementation of Python's stdlib `urllib.urlencode\n function <http://docs.python.org/library/urllib.html>`_ which\n accepts unicode keys and values within the ``query``\n dict/sequence; all Unicode keys and values are first converted to\n UTF-8 before being used to compose the query string.\n\n The value of ``query`` must be a sequence of two-tuples\n representing key/value pairs *or* an object (often a dictionary)\n with an ``.items()`` method that returns a sequence of two-tuples\n representing key/value pairs.\n\n For minimal calling convention backwards compatibility, this\n version of urlencode accepts *but ignores* a second argument\n conventionally named ``doseq``. The Python stdlib version behaves\n differently when ``doseq`` is False and when a sequence is\n presented as one of the values. This version always behaves in\n the ``doseq=True`` mode, no matter what the value of the second\n argument.\n\n See the Python stdlib documentation for ``urllib.urlencode`` for\n more information.\n \"\"\"\n try:\n # presumed to be a dictionary\n query = query.items()\n except AttributeError:\n pass\n\n result = ''\n prefix = ''\n\n for (k, v) in query:\n k = _enc(k)\n\n if is_nonstr_iter(v):\n for x in v:\n x = _enc(x)\n result += '%s%s=%s' % (prefix, k, x)\n prefix = '&'\n else:\n v = _enc(v)\n result += '%s%s=%s' % (prefix, k, v)\n\n prefix = '&'\n\n return result\n\ndef _enc(val):\n cls = val.__class__\n if cls is text_type:\n val = val.encode('utf-8')\n elif cls is not binary_type:\n val = str(val).encode('utf-8')\n return quote_plus(val)\n\n", "path": "pyramid/encode.py"}]} | 1,306 | 208 |
gh_patches_debug_37438 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-934 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
B3 trace_id and span_id not handled correctly
These fields are not being handled correctly when an invalid value is passed for one or both of them. Fix that.
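For illustration, with the extraction logic shown in the code below a malformed header value reaches `int(value, 16)` directly, so it raises `ValueError` instead of being treated as an invalid id (the carrier dict and getter here are stand-ins, not part of the SDK):
```python
carrier = {
    "x-b3-traceid": ["not-a-hex-value"],      # invalid: not 16/32 hex characters
    "x-b3-spanid": ["0123456789abcdef"],
}

def getter(carrier, key):
    return carrier.get(key, [])

trace_id = getter(carrier, "x-b3-traceid")[0]
try:
    int(trace_id, 16)                         # what extract() effectively does
except ValueError as exc:
    print("malformed x-b3-traceid is not handled:", exc)
```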
</issue>
<code>
[start of opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import typing
16
17 import opentelemetry.trace as trace
18 from opentelemetry.context import Context
19 from opentelemetry.trace.propagation.httptextformat import (
20 Getter,
21 HTTPTextFormat,
22 HTTPTextFormatT,
23 Setter,
24 )
25
26
27 class B3Format(HTTPTextFormat):
28 """Propagator for the B3 HTTP header format.
29
30 See: https://github.com/openzipkin/b3-propagation
31 """
32
33 SINGLE_HEADER_KEY = "b3"
34 TRACE_ID_KEY = "x-b3-traceid"
35 SPAN_ID_KEY = "x-b3-spanid"
36 PARENT_SPAN_ID_KEY = "x-b3-parentspanid"
37 SAMPLED_KEY = "x-b3-sampled"
38 FLAGS_KEY = "x-b3-flags"
39 _SAMPLE_PROPAGATE_VALUES = set(["1", "True", "true", "d"])
40
41 def extract(
42 self,
43 get_from_carrier: Getter[HTTPTextFormatT],
44 carrier: HTTPTextFormatT,
45 context: typing.Optional[Context] = None,
46 ) -> Context:
47 trace_id = format_trace_id(trace.INVALID_TRACE_ID)
48 span_id = format_span_id(trace.INVALID_SPAN_ID)
49 sampled = "0"
50 flags = None
51
52 single_header = _extract_first_element(
53 get_from_carrier(carrier, self.SINGLE_HEADER_KEY)
54 )
55 if single_header:
56 # The b3 spec calls for the sampling state to be
57 # "deferred", which is unspecified. This concept does not
58 # translate to SpanContext, so we set it as recorded.
59 sampled = "1"
60 fields = single_header.split("-", 4)
61
62 if len(fields) == 1:
63 sampled = fields[0]
64 elif len(fields) == 2:
65 trace_id, span_id = fields
66 elif len(fields) == 3:
67 trace_id, span_id, sampled = fields
68 elif len(fields) == 4:
69 trace_id, span_id, sampled, _ = fields
70 else:
71 return trace.set_span_in_context(trace.INVALID_SPAN)
72 else:
73 trace_id = (
74 _extract_first_element(
75 get_from_carrier(carrier, self.TRACE_ID_KEY)
76 )
77 or trace_id
78 )
79 span_id = (
80 _extract_first_element(
81 get_from_carrier(carrier, self.SPAN_ID_KEY)
82 )
83 or span_id
84 )
85 sampled = (
86 _extract_first_element(
87 get_from_carrier(carrier, self.SAMPLED_KEY)
88 )
89 or sampled
90 )
91 flags = (
92 _extract_first_element(
93 get_from_carrier(carrier, self.FLAGS_KEY)
94 )
95 or flags
96 )
97
98 options = 0
99 # The b3 spec provides no defined behavior for both sample and
100 # flag values set. Since the setting of at least one implies
101 # the desire for some form of sampling, propagate if either
102 # header is set to allow.
103 if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == "1":
104 options |= trace.TraceFlags.SAMPLED
105 return trace.set_span_in_context(
106 trace.DefaultSpan(
107 trace.SpanContext(
108 # trace an span ids are encoded in hex, so must be converted
109 trace_id=int(trace_id, 16),
110 span_id=int(span_id, 16),
111 is_remote=True,
112 trace_flags=trace.TraceFlags(options),
113 trace_state=trace.TraceState(),
114 )
115 )
116 )
117
118 def inject(
119 self,
120 set_in_carrier: Setter[HTTPTextFormatT],
121 carrier: HTTPTextFormatT,
122 context: typing.Optional[Context] = None,
123 ) -> None:
124 span = trace.get_current_span(context=context)
125
126 if span.get_context() == trace.INVALID_SPAN_CONTEXT:
127 return
128
129 sampled = (trace.TraceFlags.SAMPLED & span.context.trace_flags) != 0
130 set_in_carrier(
131 carrier, self.TRACE_ID_KEY, format_trace_id(span.context.trace_id),
132 )
133 set_in_carrier(
134 carrier, self.SPAN_ID_KEY, format_span_id(span.context.span_id)
135 )
136 if span.parent is not None:
137 set_in_carrier(
138 carrier,
139 self.PARENT_SPAN_ID_KEY,
140 format_span_id(span.parent.span_id),
141 )
142 set_in_carrier(carrier, self.SAMPLED_KEY, "1" if sampled else "0")
143
144
145 def format_trace_id(trace_id: int) -> str:
146 """Format the trace id according to b3 specification."""
147 return format(trace_id, "032x")
148
149
150 def format_span_id(span_id: int) -> str:
151 """Format the span id according to b3 specification."""
152 return format(span_id, "016x")
153
154
155 def _extract_first_element(
156 items: typing.Iterable[HTTPTextFormatT],
157 ) -> typing.Optional[HTTPTextFormatT]:
158 if items is None:
159 return None
160 return next(iter(items), None)
161
[end of opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py b/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py
--- a/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py
@@ -13,9 +13,11 @@
# limitations under the License.
import typing
+from re import compile as re_compile
import opentelemetry.trace as trace
from opentelemetry.context import Context
+from opentelemetry.sdk.trace import generate_span_id, generate_trace_id
from opentelemetry.trace.propagation.httptextformat import (
Getter,
HTTPTextFormat,
@@ -37,6 +39,8 @@
SAMPLED_KEY = "x-b3-sampled"
FLAGS_KEY = "x-b3-flags"
_SAMPLE_PROPAGATE_VALUES = set(["1", "True", "true", "d"])
+ _trace_id_regex = re_compile(r"[\da-fA-F]{16}|[\da-fA-F]{32}")
+ _span_id_regex = re_compile(r"[\da-fA-F]{16}")
def extract(
self,
@@ -95,6 +99,18 @@
or flags
)
+ if (
+ self._trace_id_regex.fullmatch(trace_id) is None
+ or self._span_id_regex.fullmatch(span_id) is None
+ ):
+ trace_id = generate_trace_id()
+ span_id = generate_span_id()
+ sampled = "0"
+
+ else:
+ trace_id = int(trace_id, 16)
+ span_id = int(span_id, 16)
+
options = 0
# The b3 spec provides no defined behavior for both sample and
# flag values set. Since the setting of at least one implies
@@ -102,12 +118,13 @@
# header is set to allow.
if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == "1":
options |= trace.TraceFlags.SAMPLED
+
return trace.set_span_in_context(
trace.DefaultSpan(
trace.SpanContext(
# trace an span ids are encoded in hex, so must be converted
- trace_id=int(trace_id, 16),
- span_id=int(span_id, 16),
+ trace_id=trace_id,
+ span_id=span_id,
is_remote=True,
trace_flags=trace.TraceFlags(options),
trace_state=trace.TraceState(),
| {"golden_diff": "diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py b/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py\n--- a/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py\n+++ b/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py\n@@ -13,9 +13,11 @@\n # limitations under the License.\n \n import typing\n+from re import compile as re_compile\n \n import opentelemetry.trace as trace\n from opentelemetry.context import Context\n+from opentelemetry.sdk.trace import generate_span_id, generate_trace_id\n from opentelemetry.trace.propagation.httptextformat import (\n Getter,\n HTTPTextFormat,\n@@ -37,6 +39,8 @@\n SAMPLED_KEY = \"x-b3-sampled\"\n FLAGS_KEY = \"x-b3-flags\"\n _SAMPLE_PROPAGATE_VALUES = set([\"1\", \"True\", \"true\", \"d\"])\n+ _trace_id_regex = re_compile(r\"[\\da-fA-F]{16}|[\\da-fA-F]{32}\")\n+ _span_id_regex = re_compile(r\"[\\da-fA-F]{16}\")\n \n def extract(\n self,\n@@ -95,6 +99,18 @@\n or flags\n )\n \n+ if (\n+ self._trace_id_regex.fullmatch(trace_id) is None\n+ or self._span_id_regex.fullmatch(span_id) is None\n+ ):\n+ trace_id = generate_trace_id()\n+ span_id = generate_span_id()\n+ sampled = \"0\"\n+\n+ else:\n+ trace_id = int(trace_id, 16)\n+ span_id = int(span_id, 16)\n+\n options = 0\n # The b3 spec provides no defined behavior for both sample and\n # flag values set. Since the setting of at least one implies\n@@ -102,12 +118,13 @@\n # header is set to allow.\n if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == \"1\":\n options |= trace.TraceFlags.SAMPLED\n+\n return trace.set_span_in_context(\n trace.DefaultSpan(\n trace.SpanContext(\n # trace an span ids are encoded in hex, so must be converted\n- trace_id=int(trace_id, 16),\n- span_id=int(span_id, 16),\n+ trace_id=trace_id,\n+ span_id=span_id,\n is_remote=True,\n trace_flags=trace.TraceFlags(options),\n trace_state=trace.TraceState(),\n", "issue": "B3 trace_id and span_id not handled correctly\nThese fields are not being handled correctly when an invalid value is passed for one or both of them. 
Fix that.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport typing\n\nimport opentelemetry.trace as trace\nfrom opentelemetry.context import Context\nfrom opentelemetry.trace.propagation.httptextformat import (\n Getter,\n HTTPTextFormat,\n HTTPTextFormatT,\n Setter,\n)\n\n\nclass B3Format(HTTPTextFormat):\n \"\"\"Propagator for the B3 HTTP header format.\n\n See: https://github.com/openzipkin/b3-propagation\n \"\"\"\n\n SINGLE_HEADER_KEY = \"b3\"\n TRACE_ID_KEY = \"x-b3-traceid\"\n SPAN_ID_KEY = \"x-b3-spanid\"\n PARENT_SPAN_ID_KEY = \"x-b3-parentspanid\"\n SAMPLED_KEY = \"x-b3-sampled\"\n FLAGS_KEY = \"x-b3-flags\"\n _SAMPLE_PROPAGATE_VALUES = set([\"1\", \"True\", \"true\", \"d\"])\n\n def extract(\n self,\n get_from_carrier: Getter[HTTPTextFormatT],\n carrier: HTTPTextFormatT,\n context: typing.Optional[Context] = None,\n ) -> Context:\n trace_id = format_trace_id(trace.INVALID_TRACE_ID)\n span_id = format_span_id(trace.INVALID_SPAN_ID)\n sampled = \"0\"\n flags = None\n\n single_header = _extract_first_element(\n get_from_carrier(carrier, self.SINGLE_HEADER_KEY)\n )\n if single_header:\n # The b3 spec calls for the sampling state to be\n # \"deferred\", which is unspecified. This concept does not\n # translate to SpanContext, so we set it as recorded.\n sampled = \"1\"\n fields = single_header.split(\"-\", 4)\n\n if len(fields) == 1:\n sampled = fields[0]\n elif len(fields) == 2:\n trace_id, span_id = fields\n elif len(fields) == 3:\n trace_id, span_id, sampled = fields\n elif len(fields) == 4:\n trace_id, span_id, sampled, _ = fields\n else:\n return trace.set_span_in_context(trace.INVALID_SPAN)\n else:\n trace_id = (\n _extract_first_element(\n get_from_carrier(carrier, self.TRACE_ID_KEY)\n )\n or trace_id\n )\n span_id = (\n _extract_first_element(\n get_from_carrier(carrier, self.SPAN_ID_KEY)\n )\n or span_id\n )\n sampled = (\n _extract_first_element(\n get_from_carrier(carrier, self.SAMPLED_KEY)\n )\n or sampled\n )\n flags = (\n _extract_first_element(\n get_from_carrier(carrier, self.FLAGS_KEY)\n )\n or flags\n )\n\n options = 0\n # The b3 spec provides no defined behavior for both sample and\n # flag values set. 
Since the setting of at least one implies\n # the desire for some form of sampling, propagate if either\n # header is set to allow.\n if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == \"1\":\n options |= trace.TraceFlags.SAMPLED\n return trace.set_span_in_context(\n trace.DefaultSpan(\n trace.SpanContext(\n # trace an span ids are encoded in hex, so must be converted\n trace_id=int(trace_id, 16),\n span_id=int(span_id, 16),\n is_remote=True,\n trace_flags=trace.TraceFlags(options),\n trace_state=trace.TraceState(),\n )\n )\n )\n\n def inject(\n self,\n set_in_carrier: Setter[HTTPTextFormatT],\n carrier: HTTPTextFormatT,\n context: typing.Optional[Context] = None,\n ) -> None:\n span = trace.get_current_span(context=context)\n\n if span.get_context() == trace.INVALID_SPAN_CONTEXT:\n return\n\n sampled = (trace.TraceFlags.SAMPLED & span.context.trace_flags) != 0\n set_in_carrier(\n carrier, self.TRACE_ID_KEY, format_trace_id(span.context.trace_id),\n )\n set_in_carrier(\n carrier, self.SPAN_ID_KEY, format_span_id(span.context.span_id)\n )\n if span.parent is not None:\n set_in_carrier(\n carrier,\n self.PARENT_SPAN_ID_KEY,\n format_span_id(span.parent.span_id),\n )\n set_in_carrier(carrier, self.SAMPLED_KEY, \"1\" if sampled else \"0\")\n\n\ndef format_trace_id(trace_id: int) -> str:\n \"\"\"Format the trace id according to b3 specification.\"\"\"\n return format(trace_id, \"032x\")\n\n\ndef format_span_id(span_id: int) -> str:\n \"\"\"Format the span id according to b3 specification.\"\"\"\n return format(span_id, \"016x\")\n\n\ndef _extract_first_element(\n items: typing.Iterable[HTTPTextFormatT],\n) -> typing.Optional[HTTPTextFormatT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py"}]} | 2,182 | 582 |
gh_patches_debug_25471 | rasdani/github-patches | git_diff | StackStorm__st2-5383 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Trigger name collision workaround
This addresses the Jinja trigger name collision noted in issue #4641.
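The accompanying patch later in this entry lets callers pass `trigger_name` as an alias for `trigger`; a minimal sketch of that aliasing logic as a plain function (illustrative only, not the actual action class):
```python
def resolve_trigger(trigger=None, trigger_name=None):
    if trigger and trigger_name:
        raise ValueError("`trigger` and `trigger_name` are mutually exclusive.")
    if not trigger and not trigger_name:
        raise ValueError("You must include the `trigger_name` parameter.")
    return trigger or trigger_name

print(resolve_trigger(trigger_name="examples.event"))   # examples.event
```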
</issue>
<code>
[start of contrib/core/actions/inject_trigger.py]
1 # Copyright 2020 The StackStorm Authors.
2 # Copyright 2019 Extreme Networks, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 from __future__ import absolute_import
17
18 from st2common.runners.base_action import Action
19
20 __all__ = ["InjectTriggerAction"]
21
22
23 class InjectTriggerAction(Action):
24 def run(self, trigger, payload=None, trace_tag=None):
25 payload = payload or {}
26
27 datastore_service = self.action_service.datastore_service
28 client = datastore_service.get_api_client()
29
30 # Dispatch the trigger using the /webhooks/st2 API endpoint
31 # NOTE: Webhooks API endpoint is asynchronous so we don't know if the actual injection
32 # results in a TriggerInstanceDB database object creation or not. The object is created
33 # inside rulesengine service and could fail due to the user providing an invalid trigger
34 # reference or similar.
35 self.logger.debug(
36 'Injecting trigger "%s" with payload="%s"' % (trigger, str(payload))
37 )
38 result = client.webhooks.post_generic_webhook(
39 trigger=trigger, payload=payload, trace_tag=trace_tag
40 )
41
42 return result
43
[end of contrib/core/actions/inject_trigger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/contrib/core/actions/inject_trigger.py b/contrib/core/actions/inject_trigger.py
--- a/contrib/core/actions/inject_trigger.py
+++ b/contrib/core/actions/inject_trigger.py
@@ -21,7 +21,7 @@
class InjectTriggerAction(Action):
- def run(self, trigger, payload=None, trace_tag=None):
+ def run(self, trigger=None, trigger_name=None, payload=None, trace_tag=None):
payload = payload or {}
datastore_service = self.action_service.datastore_service
@@ -32,6 +32,18 @@
# results in a TriggerInstanceDB database object creation or not. The object is created
# inside rulesengine service and could fail due to the user providing an invalid trigger
# reference or similar.
+
+ # Raise an error if both trigger and trigger_name are specified
+ if trigger and trigger_name:
+ raise ValueError(
+ "Parameters `trigger` and `trigger_name` are mutually exclusive."
+ )
+
+ # Raise an error if neither trigger nor trigger_name are specified
+ if not trigger and not trigger_name:
+ raise ValueError("You must include the `trigger_name` parameter.")
+
+ trigger = trigger if trigger else trigger_name
self.logger.debug(
'Injecting trigger "%s" with payload="%s"' % (trigger, str(payload))
)
| {"golden_diff": "diff --git a/contrib/core/actions/inject_trigger.py b/contrib/core/actions/inject_trigger.py\n--- a/contrib/core/actions/inject_trigger.py\n+++ b/contrib/core/actions/inject_trigger.py\n@@ -21,7 +21,7 @@\n \n \n class InjectTriggerAction(Action):\n- def run(self, trigger, payload=None, trace_tag=None):\n+ def run(self, trigger=None, trigger_name=None, payload=None, trace_tag=None):\n payload = payload or {}\n \n datastore_service = self.action_service.datastore_service\n@@ -32,6 +32,18 @@\n # results in a TriggerInstanceDB database object creation or not. The object is created\n # inside rulesengine service and could fail due to the user providing an invalid trigger\n # reference or similar.\n+\n+ # Raise an error if both trigger and trigger_name are specified\n+ if trigger and trigger_name:\n+ raise ValueError(\n+ \"Parameters `trigger` and `trigger_name` are mutually exclusive.\"\n+ )\n+\n+ # Raise an error if neither trigger nor trigger_name are specified\n+ if not trigger and not trigger_name:\n+ raise ValueError(\"You must include the `trigger_name` parameter.\")\n+\n+ trigger = trigger if trigger else trigger_name\n self.logger.debug(\n 'Injecting trigger \"%s\" with payload=\"%s\"' % (trigger, str(payload))\n )\n", "issue": "Trigger name collision workaround\nThis addresses the jinja trigger name collision noted in issue #4641\n", "before_files": [{"content": "# Copyright 2020 The StackStorm Authors.\n# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nfrom st2common.runners.base_action import Action\n\n__all__ = [\"InjectTriggerAction\"]\n\n\nclass InjectTriggerAction(Action):\n def run(self, trigger, payload=None, trace_tag=None):\n payload = payload or {}\n\n datastore_service = self.action_service.datastore_service\n client = datastore_service.get_api_client()\n\n # Dispatch the trigger using the /webhooks/st2 API endpoint\n # NOTE: Webhooks API endpoint is asynchronous so we don't know if the actual injection\n # results in a TriggerInstanceDB database object creation or not. The object is created\n # inside rulesengine service and could fail due to the user providing an invalid trigger\n # reference or similar.\n self.logger.debug(\n 'Injecting trigger \"%s\" with payload=\"%s\"' % (trigger, str(payload))\n )\n result = client.webhooks.post_generic_webhook(\n trigger=trigger, payload=payload, trace_tag=trace_tag\n )\n\n return result\n", "path": "contrib/core/actions/inject_trigger.py"}]} | 1,000 | 300 |
gh_patches_debug_40558 | rasdani/github-patches | git_diff | docker__docker-py-3112 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Timeouts don't work on windows
Currently the Windows npipe implementation doesn't honour timeouts. Regardless of which API endpoint you use (or pretty much anything else), this leads to bugs where the Docker API call waits until the Docker daemon finishes instead of timing out properly.
For example, if there is a Dockerfile at `timeout/` containing
```
FROM alpine
RUN sleep 1000
```
and you run
```python
from docker import DockerClient
DockerClient.from_env().images.build(path="timeout/", timeout=3)
```
python will hang for the full 1000 seconds instead of raising an error after 3.
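One way to make a named-pipe read respect a timeout on Windows is overlapped (asynchronous) I/O plus an event wait; a rough sketch of that pattern is below (illustrative only — requires `pywin32`, and the pipe handle would have to be opened with `win32file.FILE_FLAG_OVERLAPPED`):
```python
import pywintypes
import win32api
import win32event
import win32file

def read_with_timeout(handle, nbytes, timeout_ms):
    # Issue the read asynchronously, then wait at most timeout_ms for it.
    buf = bytearray(nbytes)
    event = win32event.CreateEvent(None, True, True, None)
    try:
        overlapped = pywintypes.OVERLAPPED()
        overlapped.hEvent = event
        win32file.ReadFile(handle, memoryview(buf), overlapped)
        if win32event.WaitForSingleObject(event, timeout_ms) == win32event.WAIT_TIMEOUT:
            win32file.CancelIo(handle)
            raise TimeoutError("read on named pipe timed out")
        n = win32file.GetOverlappedResult(handle, overlapped, 0)
        return bytes(buf[:n])
    finally:
        win32api.CloseHandle(event)
```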
Version info:
docker-py: 6.0.1
python: 3.11.3
docker:
Client:
Cloud integration: v1.0.24
Version: 20.10.14
API version: 1.41
Go version: go1.16.15
Git commit: a224086
Built: Thu Mar 24 01:53:11 2022
OS/Arch: windows/amd64
Context: default
Experimental: true
Server: Docker Desktop 4.8.1 (78998)
Engine:
Version: 20.10.14
API version: 1.41 (minimum version 1.12)
Go version: go1.16.15
Git commit: 87a90dc
Built: Thu Mar 24 01:46:14 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.5.11
GitCommit: 3df54a852345ae127d1fa3092b95168e4a88e2f8
runc:
Version: 1.0.3
GitCommit: v1.0.3-0-gf46b6ba
docker-init:
Version: 0.19.0
GitCommit: de40ad0
</issue>
<code>
[start of docker/transport/npipesocket.py]
1 import functools
2 import time
3 import io
4
5 import win32file
6 import win32pipe
7
8 cERROR_PIPE_BUSY = 0xe7
9 cSECURITY_SQOS_PRESENT = 0x100000
10 cSECURITY_ANONYMOUS = 0
11
12 MAXIMUM_RETRY_COUNT = 10
13
14
15 def check_closed(f):
16 @functools.wraps(f)
17 def wrapped(self, *args, **kwargs):
18 if self._closed:
19 raise RuntimeError(
20 'Can not reuse socket after connection was closed.'
21 )
22 return f(self, *args, **kwargs)
23 return wrapped
24
25
26 class NpipeSocket:
27 """ Partial implementation of the socket API over windows named pipes.
28 This implementation is only designed to be used as a client socket,
29 and server-specific methods (bind, listen, accept...) are not
30 implemented.
31 """
32
33 def __init__(self, handle=None):
34 self._timeout = win32pipe.NMPWAIT_USE_DEFAULT_WAIT
35 self._handle = handle
36 self._closed = False
37
38 def accept(self):
39 raise NotImplementedError()
40
41 def bind(self, address):
42 raise NotImplementedError()
43
44 def close(self):
45 self._handle.Close()
46 self._closed = True
47
48 @check_closed
49 def connect(self, address, retry_count=0):
50 try:
51 handle = win32file.CreateFile(
52 address,
53 win32file.GENERIC_READ | win32file.GENERIC_WRITE,
54 0,
55 None,
56 win32file.OPEN_EXISTING,
57 cSECURITY_ANONYMOUS | cSECURITY_SQOS_PRESENT,
58 0
59 )
60 except win32pipe.error as e:
61 # See Remarks:
62 # https://msdn.microsoft.com/en-us/library/aa365800.aspx
63 if e.winerror == cERROR_PIPE_BUSY:
64 # Another program or thread has grabbed our pipe instance
65 # before we got to it. Wait for availability and attempt to
66 # connect again.
67 retry_count = retry_count + 1
68 if (retry_count < MAXIMUM_RETRY_COUNT):
69 time.sleep(1)
70 return self.connect(address, retry_count)
71 raise e
72
73 self.flags = win32pipe.GetNamedPipeInfo(handle)[0]
74
75 self._handle = handle
76 self._address = address
77
78 @check_closed
79 def connect_ex(self, address):
80 return self.connect(address)
81
82 @check_closed
83 def detach(self):
84 self._closed = True
85 return self._handle
86
87 @check_closed
88 def dup(self):
89 return NpipeSocket(self._handle)
90
91 def getpeername(self):
92 return self._address
93
94 def getsockname(self):
95 return self._address
96
97 def getsockopt(self, level, optname, buflen=None):
98 raise NotImplementedError()
99
100 def ioctl(self, control, option):
101 raise NotImplementedError()
102
103 def listen(self, backlog):
104 raise NotImplementedError()
105
106 def makefile(self, mode=None, bufsize=None):
107 if mode.strip('b') != 'r':
108 raise NotImplementedError()
109 rawio = NpipeFileIOBase(self)
110 if bufsize is None or bufsize <= 0:
111 bufsize = io.DEFAULT_BUFFER_SIZE
112 return io.BufferedReader(rawio, buffer_size=bufsize)
113
114 @check_closed
115 def recv(self, bufsize, flags=0):
116 err, data = win32file.ReadFile(self._handle, bufsize)
117 return data
118
119 @check_closed
120 def recvfrom(self, bufsize, flags=0):
121 data = self.recv(bufsize, flags)
122 return (data, self._address)
123
124 @check_closed
125 def recvfrom_into(self, buf, nbytes=0, flags=0):
126 return self.recv_into(buf, nbytes, flags), self._address
127
128 @check_closed
129 def recv_into(self, buf, nbytes=0):
130 readbuf = buf
131 if not isinstance(buf, memoryview):
132 readbuf = memoryview(buf)
133
134 err, data = win32file.ReadFile(
135 self._handle,
136 readbuf[:nbytes] if nbytes else readbuf
137 )
138 return len(data)
139
140 def _recv_into_py2(self, buf, nbytes):
141 err, data = win32file.ReadFile(self._handle, nbytes or len(buf))
142 n = len(data)
143 buf[:n] = data
144 return n
145
146 @check_closed
147 def send(self, string, flags=0):
148 err, nbytes = win32file.WriteFile(self._handle, string)
149 return nbytes
150
151 @check_closed
152 def sendall(self, string, flags=0):
153 return self.send(string, flags)
154
155 @check_closed
156 def sendto(self, string, address):
157 self.connect(address)
158 return self.send(string)
159
160 def setblocking(self, flag):
161 if flag:
162 return self.settimeout(None)
163 return self.settimeout(0)
164
165 def settimeout(self, value):
166 if value is None:
167 # Blocking mode
168 self._timeout = win32pipe.NMPWAIT_WAIT_FOREVER
169 elif not isinstance(value, (float, int)) or value < 0:
170 raise ValueError('Timeout value out of range')
171 elif value == 0:
172 # Non-blocking mode
173 self._timeout = win32pipe.NMPWAIT_NO_WAIT
174 else:
175 # Timeout mode - Value converted to milliseconds
176 self._timeout = value * 1000
177
178 def gettimeout(self):
179 return self._timeout
180
181 def setsockopt(self, level, optname, value):
182 raise NotImplementedError()
183
184 @check_closed
185 def shutdown(self, how):
186 return self.close()
187
188
189 class NpipeFileIOBase(io.RawIOBase):
190 def __init__(self, npipe_socket):
191 self.sock = npipe_socket
192
193 def close(self):
194 super().close()
195 self.sock = None
196
197 def fileno(self):
198 return self.sock.fileno()
199
200 def isatty(self):
201 return False
202
203 def readable(self):
204 return True
205
206 def readinto(self, buf):
207 return self.sock.recv_into(buf)
208
209 def seekable(self):
210 return False
211
212 def writable(self):
213 return False
214
[end of docker/transport/npipesocket.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docker/transport/npipesocket.py b/docker/transport/npipesocket.py
--- a/docker/transport/npipesocket.py
+++ b/docker/transport/npipesocket.py
@@ -4,6 +4,9 @@
import win32file
import win32pipe
+import pywintypes
+import win32event
+import win32api
cERROR_PIPE_BUSY = 0xe7
cSECURITY_SQOS_PRESENT = 0x100000
@@ -54,7 +57,9 @@
0,
None,
win32file.OPEN_EXISTING,
- cSECURITY_ANONYMOUS | cSECURITY_SQOS_PRESENT,
+ (cSECURITY_ANONYMOUS
+ | cSECURITY_SQOS_PRESENT
+ | win32file.FILE_FLAG_OVERLAPPED),
0
)
except win32pipe.error as e:
@@ -131,22 +136,37 @@
if not isinstance(buf, memoryview):
readbuf = memoryview(buf)
- err, data = win32file.ReadFile(
- self._handle,
- readbuf[:nbytes] if nbytes else readbuf
- )
- return len(data)
-
- def _recv_into_py2(self, buf, nbytes):
- err, data = win32file.ReadFile(self._handle, nbytes or len(buf))
- n = len(data)
- buf[:n] = data
- return n
+ event = win32event.CreateEvent(None, True, True, None)
+ try:
+ overlapped = pywintypes.OVERLAPPED()
+ overlapped.hEvent = event
+ err, data = win32file.ReadFile(
+ self._handle,
+ readbuf[:nbytes] if nbytes else readbuf,
+ overlapped
+ )
+ wait_result = win32event.WaitForSingleObject(event, self._timeout)
+ if wait_result == win32event.WAIT_TIMEOUT:
+ win32file.CancelIo(self._handle)
+ raise TimeoutError
+ return win32file.GetOverlappedResult(self._handle, overlapped, 0)
+ finally:
+ win32api.CloseHandle(event)
@check_closed
def send(self, string, flags=0):
- err, nbytes = win32file.WriteFile(self._handle, string)
- return nbytes
+ event = win32event.CreateEvent(None, True, True, None)
+ try:
+ overlapped = pywintypes.OVERLAPPED()
+ overlapped.hEvent = event
+ win32file.WriteFile(self._handle, string, overlapped)
+ wait_result = win32event.WaitForSingleObject(event, self._timeout)
+ if wait_result == win32event.WAIT_TIMEOUT:
+ win32file.CancelIo(self._handle)
+ raise TimeoutError
+ return win32file.GetOverlappedResult(self._handle, overlapped, 0)
+ finally:
+ win32api.CloseHandle(event)
@check_closed
def sendall(self, string, flags=0):
@@ -165,15 +185,12 @@
def settimeout(self, value):
if value is None:
# Blocking mode
- self._timeout = win32pipe.NMPWAIT_WAIT_FOREVER
+ self._timeout = win32event.INFINITE
elif not isinstance(value, (float, int)) or value < 0:
raise ValueError('Timeout value out of range')
- elif value == 0:
- # Non-blocking mode
- self._timeout = win32pipe.NMPWAIT_NO_WAIT
else:
# Timeout mode - Value converted to milliseconds
- self._timeout = value * 1000
+ self._timeout = int(value * 1000)
def gettimeout(self):
return self._timeout
| {"golden_diff": "diff --git a/docker/transport/npipesocket.py b/docker/transport/npipesocket.py\n--- a/docker/transport/npipesocket.py\n+++ b/docker/transport/npipesocket.py\n@@ -4,6 +4,9 @@\n \n import win32file\n import win32pipe\n+import pywintypes\n+import win32event\n+import win32api\n \n cERROR_PIPE_BUSY = 0xe7\n cSECURITY_SQOS_PRESENT = 0x100000\n@@ -54,7 +57,9 @@\n 0,\n None,\n win32file.OPEN_EXISTING,\n- cSECURITY_ANONYMOUS | cSECURITY_SQOS_PRESENT,\n+ (cSECURITY_ANONYMOUS\n+ | cSECURITY_SQOS_PRESENT\n+ | win32file.FILE_FLAG_OVERLAPPED),\n 0\n )\n except win32pipe.error as e:\n@@ -131,22 +136,37 @@\n if not isinstance(buf, memoryview):\n readbuf = memoryview(buf)\n \n- err, data = win32file.ReadFile(\n- self._handle,\n- readbuf[:nbytes] if nbytes else readbuf\n- )\n- return len(data)\n-\n- def _recv_into_py2(self, buf, nbytes):\n- err, data = win32file.ReadFile(self._handle, nbytes or len(buf))\n- n = len(data)\n- buf[:n] = data\n- return n\n+ event = win32event.CreateEvent(None, True, True, None)\n+ try:\n+ overlapped = pywintypes.OVERLAPPED()\n+ overlapped.hEvent = event\n+ err, data = win32file.ReadFile(\n+ self._handle,\n+ readbuf[:nbytes] if nbytes else readbuf,\n+ overlapped\n+ )\n+ wait_result = win32event.WaitForSingleObject(event, self._timeout)\n+ if wait_result == win32event.WAIT_TIMEOUT:\n+ win32file.CancelIo(self._handle)\n+ raise TimeoutError\n+ return win32file.GetOverlappedResult(self._handle, overlapped, 0)\n+ finally:\n+ win32api.CloseHandle(event)\n \n @check_closed\n def send(self, string, flags=0):\n- err, nbytes = win32file.WriteFile(self._handle, string)\n- return nbytes\n+ event = win32event.CreateEvent(None, True, True, None)\n+ try:\n+ overlapped = pywintypes.OVERLAPPED()\n+ overlapped.hEvent = event\n+ win32file.WriteFile(self._handle, string, overlapped)\n+ wait_result = win32event.WaitForSingleObject(event, self._timeout)\n+ if wait_result == win32event.WAIT_TIMEOUT:\n+ win32file.CancelIo(self._handle)\n+ raise TimeoutError\n+ return win32file.GetOverlappedResult(self._handle, overlapped, 0)\n+ finally:\n+ win32api.CloseHandle(event)\n \n @check_closed\n def sendall(self, string, flags=0):\n@@ -165,15 +185,12 @@\n def settimeout(self, value):\n if value is None:\n # Blocking mode\n- self._timeout = win32pipe.NMPWAIT_WAIT_FOREVER\n+ self._timeout = win32event.INFINITE\n elif not isinstance(value, (float, int)) or value < 0:\n raise ValueError('Timeout value out of range')\n- elif value == 0:\n- # Non-blocking mode\n- self._timeout = win32pipe.NMPWAIT_NO_WAIT\n else:\n # Timeout mode - Value converted to milliseconds\n- self._timeout = value * 1000\n+ self._timeout = int(value * 1000)\n \n def gettimeout(self):\n return self._timeout\n", "issue": "Timeouts don't work on windows\nCurrently the windows npipe implementation doesn't honour timeouts. 
Regardless of which api endpoint you use or pretty much anything else this leads to bugs where the docker api waits until the docker daemon finishes instead of timing out properly.\r\n\r\nFor example, if there is a dockerfile containing at `timeout/`\r\n```\r\nFROM alpine\r\n\r\nRUN sleep 1000\r\n```\r\nand you run\r\n```python\r\nfrom docker import DockerClient\r\n\r\nDockerClient.from_env().images.build(path=\"timeout/\", timeout=3)\r\n```\r\npython will hang for the full 1000 seconds instead of raising an error after 3.\r\n\r\nVersion info: \r\ndocker-py: 6.0.1 \r\npython: 3.11.3 \r\ndocker:\r\nClient:\r\n Cloud integration: v1.0.24\r\n Version: 20.10.14\r\n API version: 1.41\r\n Go version: go1.16.15\r\n Git commit: a224086\r\n Built: Thu Mar 24 01:53:11 2022 \r\n OS/Arch: windows/amd64\r\n Context: default\r\n Experimental: true\r\n\r\nServer: Docker Desktop 4.8.1 (78998)\r\n Engine:\r\n Version: 20.10.14\r\n API version: 1.41 (minimum version 1.12) \r\n Go version: go1.16.15\r\n Git commit: 87a90dc\r\n Built: Thu Mar 24 01:46:14 2022 \r\n OS/Arch: linux/amd64\r\n Experimental: false\r\n containerd:\r\n Version: 1.5.11\r\n GitCommit: 3df54a852345ae127d1fa3092b95168e4a88e2f8\r\n runc:\r\n Version: 1.0.3\r\n GitCommit: v1.0.3-0-gf46b6ba\r\n docker-init:\r\n Version: 0.19.0\r\n GitCommit: de40ad0\n", "before_files": [{"content": "import functools\nimport time\nimport io\n\nimport win32file\nimport win32pipe\n\ncERROR_PIPE_BUSY = 0xe7\ncSECURITY_SQOS_PRESENT = 0x100000\ncSECURITY_ANONYMOUS = 0\n\nMAXIMUM_RETRY_COUNT = 10\n\n\ndef check_closed(f):\n @functools.wraps(f)\n def wrapped(self, *args, **kwargs):\n if self._closed:\n raise RuntimeError(\n 'Can not reuse socket after connection was closed.'\n )\n return f(self, *args, **kwargs)\n return wrapped\n\n\nclass NpipeSocket:\n \"\"\" Partial implementation of the socket API over windows named pipes.\n This implementation is only designed to be used as a client socket,\n and server-specific methods (bind, listen, accept...) are not\n implemented.\n \"\"\"\n\n def __init__(self, handle=None):\n self._timeout = win32pipe.NMPWAIT_USE_DEFAULT_WAIT\n self._handle = handle\n self._closed = False\n\n def accept(self):\n raise NotImplementedError()\n\n def bind(self, address):\n raise NotImplementedError()\n\n def close(self):\n self._handle.Close()\n self._closed = True\n\n @check_closed\n def connect(self, address, retry_count=0):\n try:\n handle = win32file.CreateFile(\n address,\n win32file.GENERIC_READ | win32file.GENERIC_WRITE,\n 0,\n None,\n win32file.OPEN_EXISTING,\n cSECURITY_ANONYMOUS | cSECURITY_SQOS_PRESENT,\n 0\n )\n except win32pipe.error as e:\n # See Remarks:\n # https://msdn.microsoft.com/en-us/library/aa365800.aspx\n if e.winerror == cERROR_PIPE_BUSY:\n # Another program or thread has grabbed our pipe instance\n # before we got to it. 
Wait for availability and attempt to\n # connect again.\n retry_count = retry_count + 1\n if (retry_count < MAXIMUM_RETRY_COUNT):\n time.sleep(1)\n return self.connect(address, retry_count)\n raise e\n\n self.flags = win32pipe.GetNamedPipeInfo(handle)[0]\n\n self._handle = handle\n self._address = address\n\n @check_closed\n def connect_ex(self, address):\n return self.connect(address)\n\n @check_closed\n def detach(self):\n self._closed = True\n return self._handle\n\n @check_closed\n def dup(self):\n return NpipeSocket(self._handle)\n\n def getpeername(self):\n return self._address\n\n def getsockname(self):\n return self._address\n\n def getsockopt(self, level, optname, buflen=None):\n raise NotImplementedError()\n\n def ioctl(self, control, option):\n raise NotImplementedError()\n\n def listen(self, backlog):\n raise NotImplementedError()\n\n def makefile(self, mode=None, bufsize=None):\n if mode.strip('b') != 'r':\n raise NotImplementedError()\n rawio = NpipeFileIOBase(self)\n if bufsize is None or bufsize <= 0:\n bufsize = io.DEFAULT_BUFFER_SIZE\n return io.BufferedReader(rawio, buffer_size=bufsize)\n\n @check_closed\n def recv(self, bufsize, flags=0):\n err, data = win32file.ReadFile(self._handle, bufsize)\n return data\n\n @check_closed\n def recvfrom(self, bufsize, flags=0):\n data = self.recv(bufsize, flags)\n return (data, self._address)\n\n @check_closed\n def recvfrom_into(self, buf, nbytes=0, flags=0):\n return self.recv_into(buf, nbytes, flags), self._address\n\n @check_closed\n def recv_into(self, buf, nbytes=0):\n readbuf = buf\n if not isinstance(buf, memoryview):\n readbuf = memoryview(buf)\n\n err, data = win32file.ReadFile(\n self._handle,\n readbuf[:nbytes] if nbytes else readbuf\n )\n return len(data)\n\n def _recv_into_py2(self, buf, nbytes):\n err, data = win32file.ReadFile(self._handle, nbytes or len(buf))\n n = len(data)\n buf[:n] = data\n return n\n\n @check_closed\n def send(self, string, flags=0):\n err, nbytes = win32file.WriteFile(self._handle, string)\n return nbytes\n\n @check_closed\n def sendall(self, string, flags=0):\n return self.send(string, flags)\n\n @check_closed\n def sendto(self, string, address):\n self.connect(address)\n return self.send(string)\n\n def setblocking(self, flag):\n if flag:\n return self.settimeout(None)\n return self.settimeout(0)\n\n def settimeout(self, value):\n if value is None:\n # Blocking mode\n self._timeout = win32pipe.NMPWAIT_WAIT_FOREVER\n elif not isinstance(value, (float, int)) or value < 0:\n raise ValueError('Timeout value out of range')\n elif value == 0:\n # Non-blocking mode\n self._timeout = win32pipe.NMPWAIT_NO_WAIT\n else:\n # Timeout mode - Value converted to milliseconds\n self._timeout = value * 1000\n\n def gettimeout(self):\n return self._timeout\n\n def setsockopt(self, level, optname, value):\n raise NotImplementedError()\n\n @check_closed\n def shutdown(self, how):\n return self.close()\n\n\nclass NpipeFileIOBase(io.RawIOBase):\n def __init__(self, npipe_socket):\n self.sock = npipe_socket\n\n def close(self):\n super().close()\n self.sock = None\n\n def fileno(self):\n return self.sock.fileno()\n\n def isatty(self):\n return False\n\n def readable(self):\n return True\n\n def readinto(self, buf):\n return self.sock.recv_into(buf)\n\n def seekable(self):\n return False\n\n def writable(self):\n return False\n", "path": "docker/transport/npipesocket.py"}]} | 2,955 | 899 |
gh_patches_debug_38189 | rasdani/github-patches | git_diff | MycroftAI__mycroft-core-2881 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
mycroft.conf silently overwritten
**Describe the bug**
When there's an error in mycroft.conf, it is silently overwritten. This is bad because user settings should not be permanently deleted without consent. Instead, logs and/or the output of mycroft-start should show the error.
**To Reproduce**
Try the following mycroft.conf:
```
{
"max_allowed_core_version": 20.8,
"listener": {
"wake_word": "Lazarus",
"device_name": "default"
"energy_ratio": 1.5
},
"hotwords": {
"Lazarus": {
"module": "pocketsphinx",
"phonemes": "L AE Z ER AH S .",
}
}
}
```
Note the missing comma after "default" and incorrect use of the energy ratio parameter.
After running `mycroft-start restart all`, it is overwritten with the following:
```
{
"max_allowed_core_version": 20.8
}
```
**Expected behavior**
One of the following:
"Mycroft failed to start because of an error in mycroft.conf."
or
The config file is copied to `mycroft.conf.old` (or `mycroft.conf.old.1`, etc.) and `mycroft.conf` is overwritten with the following:
```
# The previous mycroft.conf contained errors and was moved to mycroft.conf.old.
{
"max_allowed_core_version": 20.8
}
```
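A minimal sketch of the backup step described above (the helper name and the `shutil`-based copy are illustrative assumptions, not Mycroft's implementation):
```python
import shutil
from os.path import exists

def backup_config(path):
    # Copy the broken config to mycroft.conf.old (or .old.1, .old.2, ...) first.
    backup = path + ".old"
    n = 0
    while exists(backup):
        n += 1
        backup = "%s.old.%d" % (path, n)
    shutil.copy2(path, backup)
    return backup
```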
</issue>
<code>
[start of mycroft/configuration/config.py]
1
2 # Copyright 2017 Mycroft AI Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 #
16
17 import json
18 import os
19 import re
20 from os.path import exists, isfile, join, dirname
21
22 import xdg.BaseDirectory
23 from requests import RequestException
24
25 from mycroft.util.combo_lock import ComboLock
26 from mycroft.util.file_utils import get_temp_path
27 from mycroft.util import camel_case_split
28 from mycroft.util.json_helper import load_commented_json, merge_dict
29 from mycroft.util.log import LOG
30
31 from .locations import (
32 DEFAULT_CONFIG,
33 OLD_USER_CONFIG,
34 SYSTEM_CONFIG,
35 USER_CONFIG
36 )
37
38
39 def is_remote_list(values):
40 """Check if list corresponds to a backend formatted collection of dicts
41 """
42 for v in values:
43 if not isinstance(v, dict):
44 return False
45 if "@type" not in v.keys():
46 return False
47 return True
48
49
50 def translate_remote(config, setting):
51 """Translate config names from server to equivalents for mycroft-core.
52
53 Args:
54 config: base config to populate
55 settings: remote settings to be translated
56 """
57 IGNORED_SETTINGS = ["uuid", "@type", "active", "user", "device"]
58
59 for k, v in setting.items():
60 if k not in IGNORED_SETTINGS:
61 # Translate the CamelCase values stored remotely into the
62 # Python-style names used within mycroft-core.
63 key = re.sub(r"Setting(s)?", "", k)
64 key = camel_case_split(key).replace(" ", "_").lower()
65 if isinstance(v, dict):
66 config[key] = config.get(key, {})
67 translate_remote(config[key], v)
68 elif isinstance(v, list):
69 if is_remote_list(v):
70 if key not in config:
71 config[key] = {}
72 translate_list(config[key], v)
73 else:
74 config[key] = v
75 else:
76 config[key] = v
77
78
79 def translate_list(config, values):
80 """Translate list formated by mycroft server.
81
82 Args:
83 config (dict): target config
84 values (list): list from mycroft server config
85 """
86 for v in values:
87 module = v["@type"]
88 if v.get("active"):
89 config["module"] = module
90 config[module] = config.get(module, {})
91 translate_remote(config[module], v)
92
93
94 class LocalConf(dict):
95 """Config dictionary from file."""
96 _lock = ComboLock(get_temp_path('local-conf.lock'))
97
98 def __init__(self, path):
99 super(LocalConf, self).__init__()
100 if path:
101 self.path = path
102 self.load_local(path)
103
104 def load_local(self, path):
105 """Load local json file into self.
106
107 Args:
108 path (str): file to load
109 """
110 if exists(path) and isfile(path):
111 try:
112 config = load_commented_json(path)
113 for key in config:
114 self.__setitem__(key, config[key])
115
116 LOG.debug("Configuration {} loaded".format(path))
117 except Exception as e:
118 LOG.error("Error loading configuration '{}'".format(path))
119 LOG.error(repr(e))
120 else:
121 LOG.debug("Configuration '{}' not defined, skipping".format(path))
122
123 def store(self, path=None):
124 """Cache the received settings locally.
125
126 The cache will be used if the remote is unreachable to load settings
127 that are as close to the user's as possible.
128 """
129 with self._lock:
130 path = path or self.path
131 config_dir = dirname(path)
132 if not exists(config_dir):
133 os.makedirs(config_dir)
134
135 with open(path, 'w') as f:
136 json.dump(self, f, indent=2)
137
138 def merge(self, conf):
139 merge_dict(self, conf)
140
141
142 class RemoteConf(LocalConf):
143 _lock = ComboLock(get_temp_path('remote-conf.lock'))
144 """Config dictionary fetched from mycroft.ai."""
145
146 def __init__(self, cache=None):
147 super(RemoteConf, self).__init__(None)
148
149 cache = cache or join(xdg.BaseDirectory.xdg_cache_home, 'mycroft',
150 'web_cache.json')
151 from mycroft.api import is_paired
152 if not is_paired():
153 self.load_local(cache)
154 return
155
156 try:
157 # Here to avoid cyclic import
158 from mycroft.api import DeviceApi
159 api = DeviceApi()
160 setting = api.get_settings()
161
162 location = None
163 try:
164 location = api.get_location()
165 except RequestException as e:
166 LOG.error("RequestException fetching remote location: {}"
167 .format(str(e)))
168 if exists(cache) and isfile(cache):
169 location = load_commented_json(cache).get('location')
170
171 if location:
172 setting["location"] = location
173 # Remove server specific entries
174 config = {}
175 translate_remote(config, setting)
176 for key in config:
177 self.__setitem__(key, config[key])
178 self.store(cache)
179
180 except RequestException as e:
181 LOG.error("RequestException fetching remote configuration: {}"
182 .format(str(e)))
183 self.load_local(cache)
184
185 except Exception as e:
186 LOG.error("Failed to fetch remote configuration: %s" % repr(e),
187 exc_info=True)
188 self.load_local(cache)
189
190
191 def _log_old_location_deprecation():
192 LOG.warning("\n ===============================================\n"
193 " == DEPRECATION WARNING ==\n"
194 " ===============================================\n"
195 f" You still have a config file at {OLD_USER_CONFIG}\n"
196 " Note that this location is deprecated and will"
197 " not be used in the future\n"
198 " Please move it to "
199 f"{join(xdg.BaseDirectory.xdg_config_home, 'mycroft')}")
200
201
202 class Configuration:
203 """Namespace for operations on the configuration singleton."""
204 __config = {} # Cached config
205 __patch = {} # Patch config that skills can update to override config
206
207 @staticmethod
208 def get(configs=None, cache=True, remote=True):
209 """Get configuration
210
211 Returns cached instance if available otherwise builds a new
212 configuration dict.
213
214 Args:
215 configs (list): List of configuration dicts
216 cache (boolean): True if the result should be cached
217 remote (boolean): False if the Remote settings shouldn't be loaded
218
219 Returns:
220 (dict) configuration dictionary.
221 """
222 if Configuration.__config:
223 return Configuration.__config
224 else:
225 return Configuration.load_config_stack(configs, cache, remote)
226
227 @staticmethod
228 def load_config_stack(configs=None, cache=False, remote=True):
229 """Load a stack of config dicts into a single dict
230
231 Args:
232 configs (list): list of dicts to load
233 cache (boolean): True if result should be cached
234 remote (boolean): False if the Mycroft Home settings shouldn't
235 be loaded
236 Returns:
237 (dict) merged dict of all configuration files
238 """
239 if not configs:
240 configs = []
241
242 # First use the patched config
243 configs.append(Configuration.__patch)
244
245 # Then use XDG config
246 # This includes both the user config and
247 # /etc/xdg/mycroft/mycroft.conf
248 for conf_dir in xdg.BaseDirectory.load_config_paths('mycroft'):
249 configs.append(LocalConf(join(conf_dir, 'mycroft.conf')))
250
251 # Then check the old user config
252 if isfile(OLD_USER_CONFIG):
253 _log_old_location_deprecation()
254 configs.append(LocalConf(OLD_USER_CONFIG))
255
256 # Then use the system config (/etc/mycroft/mycroft.conf)
257 configs.append(LocalConf(SYSTEM_CONFIG))
258
259 # Then use remote config
260 if remote:
261 configs.append(RemoteConf())
262
263 # Then use the config that comes with the package
264 configs.append(LocalConf(DEFAULT_CONFIG))
265
266 # Make sure we reverse the array, as merge_dict will put every new
267 # file on top of the previous one
268 configs = reversed(configs)
269 else:
270 # Handle strings in stack
271 for index, item in enumerate(configs):
272 if isinstance(item, str):
273 configs[index] = LocalConf(item)
274
275 # Merge all configs into one
276 base = {}
277 for c in configs:
278 merge_dict(base, c)
279
280 # copy into cache
281 if cache:
282 Configuration.__config.clear()
283 for key in base:
284 Configuration.__config[key] = base[key]
285 return Configuration.__config
286 else:
287 return base
288
289 @staticmethod
290 def set_config_update_handlers(bus):
291 """Setup websocket handlers to update config.
292
293 Args:
294 bus: Message bus client instance
295 """
296 bus.on("configuration.updated", Configuration.updated)
297 bus.on("configuration.patch", Configuration.patch)
298 bus.on("configuration.patch.clear", Configuration.patch_clear)
299
300 @staticmethod
301 def updated(message):
302 """Handler for configuration.updated,
303
304 Triggers an update of cached config.
305 """
306 Configuration.load_config_stack(cache=True)
307
308 @staticmethod
309 def patch(message):
310 """Patch the volatile dict usable by skills
311
312 Args:
313 message: Messagebus message should contain a config
314 in the data payload.
315 """
316 config = message.data.get("config", {})
317 merge_dict(Configuration.__patch, config)
318 Configuration.load_config_stack(cache=True)
319
320 @staticmethod
321 def patch_clear(message):
322 """Clear the config patch space.
323
324 Args:
325 message: Messagebus message should contain a config
326 in the data payload.
327 """
328 Configuration.__patch = {}
329 Configuration.load_config_stack(cache=True)
330
[end of mycroft/configuration/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mycroft/configuration/config.py b/mycroft/configuration/config.py
--- a/mycroft/configuration/config.py
+++ b/mycroft/configuration/config.py
@@ -97,6 +97,7 @@
def __init__(self, path):
super(LocalConf, self).__init__()
+ self.is_valid = True # is loaded json valid, updated when load occurs
if path:
self.path = path
self.load_local(path)
@@ -117,23 +118,41 @@
except Exception as e:
LOG.error("Error loading configuration '{}'".format(path))
LOG.error(repr(e))
+ self.is_valid = False
else:
LOG.debug("Configuration '{}' not defined, skipping".format(path))
- def store(self, path=None):
- """Cache the received settings locally.
+ def store(self, path=None, force=False):
+ """Save config to disk.
The cache will be used if the remote is unreachable to load settings
that are as close to the user's as possible.
+
+ path (str): path to store file to, if missing will use the path from
+ where the config was loaded.
+ force (bool): Set to True if writing should occur despite the original
+ was malformed.
+
+ Returns:
+ (bool) True if save was successful, else False.
"""
+ result = False
with self._lock:
path = path or self.path
config_dir = dirname(path)
if not exists(config_dir):
os.makedirs(config_dir)
- with open(path, 'w') as f:
- json.dump(self, f, indent=2)
+ if self.is_valid or force:
+ with open(path, 'w') as f:
+ json.dump(self, f, indent=2)
+ result = True
+ else:
+ LOG.warning((f'"{path}" was not a valid config file when '
+ 'loaded, will not save config. Please correct '
+ 'the json or remove it to allow updates.'))
+ result = False
+ return result
def merge(self, conf):
merge_dict(self, conf)
@@ -175,7 +194,7 @@
translate_remote(config, setting)
for key in config:
self.__setitem__(key, config[key])
- self.store(cache)
+ self.store(cache, force=True)
except RequestException as e:
LOG.error("RequestException fetching remote configuration: {}"
| {"golden_diff": "diff --git a/mycroft/configuration/config.py b/mycroft/configuration/config.py\n--- a/mycroft/configuration/config.py\n+++ b/mycroft/configuration/config.py\n@@ -97,6 +97,7 @@\n \n def __init__(self, path):\n super(LocalConf, self).__init__()\n+ self.is_valid = True # is loaded json valid, updated when load occurs\n if path:\n self.path = path\n self.load_local(path)\n@@ -117,23 +118,41 @@\n except Exception as e:\n LOG.error(\"Error loading configuration '{}'\".format(path))\n LOG.error(repr(e))\n+ self.is_valid = False\n else:\n LOG.debug(\"Configuration '{}' not defined, skipping\".format(path))\n \n- def store(self, path=None):\n- \"\"\"Cache the received settings locally.\n+ def store(self, path=None, force=False):\n+ \"\"\"Save config to disk.\n \n The cache will be used if the remote is unreachable to load settings\n that are as close to the user's as possible.\n+\n+ path (str): path to store file to, if missing will use the path from\n+ where the config was loaded.\n+ force (bool): Set to True if writing should occur despite the original\n+ was malformed.\n+\n+ Returns:\n+ (bool) True if save was successful, else False.\n \"\"\"\n+ result = False\n with self._lock:\n path = path or self.path\n config_dir = dirname(path)\n if not exists(config_dir):\n os.makedirs(config_dir)\n \n- with open(path, 'w') as f:\n- json.dump(self, f, indent=2)\n+ if self.is_valid or force:\n+ with open(path, 'w') as f:\n+ json.dump(self, f, indent=2)\n+ result = True\n+ else:\n+ LOG.warning((f'\"{path}\" was not a valid config file when '\n+ 'loaded, will not save config. Please correct '\n+ 'the json or remove it to allow updates.'))\n+ result = False\n+ return result\n \n def merge(self, conf):\n merge_dict(self, conf)\n@@ -175,7 +194,7 @@\n translate_remote(config, setting)\n for key in config:\n self.__setitem__(key, config[key])\n- self.store(cache)\n+ self.store(cache, force=True)\n \n except RequestException as e:\n LOG.error(\"RequestException fetching remote configuration: {}\"\n", "issue": "mycroft.conf silently overwritten\n**Describe the bug**\r\nWhen there's an error in mycroft.conf, it is silently overwritten. This is bad because user settings should not be permanently deleted without consent. Instead, logs and/or the output of mycroft-start should show the error.\r\n\r\n**To Reproduce**\r\nTry the following mycroft.conf:\r\n```\r\n{\r\n \"max_allowed_core_version\": 20.8,\r\n \"listener\": {\r\n \"wake_word\": \"Lazarus\",\r\n \"device_name\": \"default\"\r\n \"energy_ratio\": 1.5\r\n },\r\n \"hotwords\": {\r\n \"Lazarus\": {\r\n \"module\": \"pocketsphinx\",\r\n \"phonemes\": \"L AE Z ER AH S .\",\r\n }\r\n }\r\n}\r\n```\r\n\r\nNote the missing comma after \"default\" and incorrect use of the energy ratio parameter.\r\n\r\nAfter running mycroft-start restart all, it is overwritten with the following:\r\n\r\n```\r\n{\r\n \"max_allowed_core_version\": 20.8\r\n}\r\n```\r\n\r\n**Expected behavior**\r\nOne of the following:\r\n\"Mycroft failed to start because of an error in mycroft.conf.\"\r\n\r\nor\r\n\r\nThe config file is copied to `mycroft.conf.old` (or `mycroft.conf.old.1`, etc.) 
and `mycroft.conf` is overwritten with the following:\r\n```\r\n# The previous mycroft.conf contained errors and was moved to mycroft.conf.old.\r\n{\r\n \"max_allowed_core_version\": 20.8\r\n}\r\n```\n", "before_files": [{"content": "\n# Copyright 2017 Mycroft AI Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nimport json\nimport os\nimport re\nfrom os.path import exists, isfile, join, dirname\n\nimport xdg.BaseDirectory\nfrom requests import RequestException\n\nfrom mycroft.util.combo_lock import ComboLock\nfrom mycroft.util.file_utils import get_temp_path\nfrom mycroft.util import camel_case_split\nfrom mycroft.util.json_helper import load_commented_json, merge_dict\nfrom mycroft.util.log import LOG\n\nfrom .locations import (\n DEFAULT_CONFIG,\n OLD_USER_CONFIG,\n SYSTEM_CONFIG,\n USER_CONFIG\n)\n\n\ndef is_remote_list(values):\n \"\"\"Check if list corresponds to a backend formatted collection of dicts\n \"\"\"\n for v in values:\n if not isinstance(v, dict):\n return False\n if \"@type\" not in v.keys():\n return False\n return True\n\n\ndef translate_remote(config, setting):\n \"\"\"Translate config names from server to equivalents for mycroft-core.\n\n Args:\n config: base config to populate\n settings: remote settings to be translated\n \"\"\"\n IGNORED_SETTINGS = [\"uuid\", \"@type\", \"active\", \"user\", \"device\"]\n\n for k, v in setting.items():\n if k not in IGNORED_SETTINGS:\n # Translate the CamelCase values stored remotely into the\n # Python-style names used within mycroft-core.\n key = re.sub(r\"Setting(s)?\", \"\", k)\n key = camel_case_split(key).replace(\" \", \"_\").lower()\n if isinstance(v, dict):\n config[key] = config.get(key, {})\n translate_remote(config[key], v)\n elif isinstance(v, list):\n if is_remote_list(v):\n if key not in config:\n config[key] = {}\n translate_list(config[key], v)\n else:\n config[key] = v\n else:\n config[key] = v\n\n\ndef translate_list(config, values):\n \"\"\"Translate list formated by mycroft server.\n\n Args:\n config (dict): target config\n values (list): list from mycroft server config\n \"\"\"\n for v in values:\n module = v[\"@type\"]\n if v.get(\"active\"):\n config[\"module\"] = module\n config[module] = config.get(module, {})\n translate_remote(config[module], v)\n\n\nclass LocalConf(dict):\n \"\"\"Config dictionary from file.\"\"\"\n _lock = ComboLock(get_temp_path('local-conf.lock'))\n\n def __init__(self, path):\n super(LocalConf, self).__init__()\n if path:\n self.path = path\n self.load_local(path)\n\n def load_local(self, path):\n \"\"\"Load local json file into self.\n\n Args:\n path (str): file to load\n \"\"\"\n if exists(path) and isfile(path):\n try:\n config = load_commented_json(path)\n for key in config:\n self.__setitem__(key, config[key])\n\n LOG.debug(\"Configuration {} loaded\".format(path))\n except Exception as e:\n LOG.error(\"Error loading configuration '{}'\".format(path))\n LOG.error(repr(e))\n else:\n LOG.debug(\"Configuration '{}' not defined, skipping\".format(path))\n\n def 
store(self, path=None):\n \"\"\"Cache the received settings locally.\n\n The cache will be used if the remote is unreachable to load settings\n that are as close to the user's as possible.\n \"\"\"\n with self._lock:\n path = path or self.path\n config_dir = dirname(path)\n if not exists(config_dir):\n os.makedirs(config_dir)\n\n with open(path, 'w') as f:\n json.dump(self, f, indent=2)\n\n def merge(self, conf):\n merge_dict(self, conf)\n\n\nclass RemoteConf(LocalConf):\n _lock = ComboLock(get_temp_path('remote-conf.lock'))\n \"\"\"Config dictionary fetched from mycroft.ai.\"\"\"\n\n def __init__(self, cache=None):\n super(RemoteConf, self).__init__(None)\n\n cache = cache or join(xdg.BaseDirectory.xdg_cache_home, 'mycroft',\n 'web_cache.json')\n from mycroft.api import is_paired\n if not is_paired():\n self.load_local(cache)\n return\n\n try:\n # Here to avoid cyclic import\n from mycroft.api import DeviceApi\n api = DeviceApi()\n setting = api.get_settings()\n\n location = None\n try:\n location = api.get_location()\n except RequestException as e:\n LOG.error(\"RequestException fetching remote location: {}\"\n .format(str(e)))\n if exists(cache) and isfile(cache):\n location = load_commented_json(cache).get('location')\n\n if location:\n setting[\"location\"] = location\n # Remove server specific entries\n config = {}\n translate_remote(config, setting)\n for key in config:\n self.__setitem__(key, config[key])\n self.store(cache)\n\n except RequestException as e:\n LOG.error(\"RequestException fetching remote configuration: {}\"\n .format(str(e)))\n self.load_local(cache)\n\n except Exception as e:\n LOG.error(\"Failed to fetch remote configuration: %s\" % repr(e),\n exc_info=True)\n self.load_local(cache)\n\n\ndef _log_old_location_deprecation():\n LOG.warning(\"\\n ===============================================\\n\"\n \" == DEPRECATION WARNING ==\\n\"\n \" ===============================================\\n\"\n f\" You still have a config file at {OLD_USER_CONFIG}\\n\"\n \" Note that this location is deprecated and will\"\n \" not be used in the future\\n\"\n \" Please move it to \"\n f\"{join(xdg.BaseDirectory.xdg_config_home, 'mycroft')}\")\n\n\nclass Configuration:\n \"\"\"Namespace for operations on the configuration singleton.\"\"\"\n __config = {} # Cached config\n __patch = {} # Patch config that skills can update to override config\n\n @staticmethod\n def get(configs=None, cache=True, remote=True):\n \"\"\"Get configuration\n\n Returns cached instance if available otherwise builds a new\n configuration dict.\n\n Args:\n configs (list): List of configuration dicts\n cache (boolean): True if the result should be cached\n remote (boolean): False if the Remote settings shouldn't be loaded\n\n Returns:\n (dict) configuration dictionary.\n \"\"\"\n if Configuration.__config:\n return Configuration.__config\n else:\n return Configuration.load_config_stack(configs, cache, remote)\n\n @staticmethod\n def load_config_stack(configs=None, cache=False, remote=True):\n \"\"\"Load a stack of config dicts into a single dict\n\n Args:\n configs (list): list of dicts to load\n cache (boolean): True if result should be cached\n remote (boolean): False if the Mycroft Home settings shouldn't\n be loaded\n Returns:\n (dict) merged dict of all configuration files\n \"\"\"\n if not configs:\n configs = []\n\n # First use the patched config\n configs.append(Configuration.__patch)\n\n # Then use XDG config\n # This includes both the user config and\n # /etc/xdg/mycroft/mycroft.conf\n for conf_dir in 
xdg.BaseDirectory.load_config_paths('mycroft'):\n configs.append(LocalConf(join(conf_dir, 'mycroft.conf')))\n\n # Then check the old user config\n if isfile(OLD_USER_CONFIG):\n _log_old_location_deprecation()\n configs.append(LocalConf(OLD_USER_CONFIG))\n\n # Then use the system config (/etc/mycroft/mycroft.conf)\n configs.append(LocalConf(SYSTEM_CONFIG))\n\n # Then use remote config\n if remote:\n configs.append(RemoteConf())\n\n # Then use the config that comes with the package\n configs.append(LocalConf(DEFAULT_CONFIG))\n\n # Make sure we reverse the array, as merge_dict will put every new\n # file on top of the previous one\n configs = reversed(configs)\n else:\n # Handle strings in stack\n for index, item in enumerate(configs):\n if isinstance(item, str):\n configs[index] = LocalConf(item)\n\n # Merge all configs into one\n base = {}\n for c in configs:\n merge_dict(base, c)\n\n # copy into cache\n if cache:\n Configuration.__config.clear()\n for key in base:\n Configuration.__config[key] = base[key]\n return Configuration.__config\n else:\n return base\n\n @staticmethod\n def set_config_update_handlers(bus):\n \"\"\"Setup websocket handlers to update config.\n\n Args:\n bus: Message bus client instance\n \"\"\"\n bus.on(\"configuration.updated\", Configuration.updated)\n bus.on(\"configuration.patch\", Configuration.patch)\n bus.on(\"configuration.patch.clear\", Configuration.patch_clear)\n\n @staticmethod\n def updated(message):\n \"\"\"Handler for configuration.updated,\n\n Triggers an update of cached config.\n \"\"\"\n Configuration.load_config_stack(cache=True)\n\n @staticmethod\n def patch(message):\n \"\"\"Patch the volatile dict usable by skills\n\n Args:\n message: Messagebus message should contain a config\n in the data payload.\n \"\"\"\n config = message.data.get(\"config\", {})\n merge_dict(Configuration.__patch, config)\n Configuration.load_config_stack(cache=True)\n\n @staticmethod\n def patch_clear(message):\n \"\"\"Clear the config patch space.\n\n Args:\n message: Messagebus message should contain a config\n in the data payload.\n \"\"\"\n Configuration.__patch = {}\n Configuration.load_config_stack(cache=True)\n", "path": "mycroft/configuration/config.py"}]} | 3,913 | 553 |
gh_patches_debug_22621 | rasdani/github-patches | git_diff | aws__serverless-application-model-1582 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cognito User Pool SMS configuration problem
**Description:**
When trying to create a Cognito user pool using SAM templates, SAM throws the error
> Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Number of errors found: 1. Resource with id [CognitoUserPool] is invalid. Type of property 'SmsConfiguration' is invalid.
when specifying the [SmsConfiguration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpool.html#cfn-cognito-userpool-smsconfiguration) property.
In the template, there is also a Lambda trigger that has Cognito configured as an event source.
After looking through the project and doing some tests, I believe the error could appear in the samtranslator module:
`'SmsConfiguration': PropertyType(False, list_of(dict)),`
From the CloudFormation docs, [SmsConfiguration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpool.html#cfn-cognito-userpool-smsconfiguration) seems to be a simple dict, but in the code snippet above, it is validated as a list of dicts.
Indeed, if I modify the corresponding part of the template from a mapping to a YAML list consisting of a single object, validation passes, but when the stack is created by CloudFormation, it fails with
> Property validation failure: [Value of property {/SmsConfiguration} does not match type {Object}]
which is consistent with the type of the property specified in the CloudFormation docs.
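A minimal sketch of the change I would expect in `samtranslator/model/cognito.py` (assuming the fix is simply to validate the property as a single object, matching the CloudFormation spec; the exact upstream change may differ):
```python
# sketch: validate SmsConfiguration as a dict, not a list of dicts
property_types = {
    # ... other properties unchanged ...
    "SmsConfiguration": PropertyType(False, is_type(dict)),
}
```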
**Steps to reproduce the issue:**
1. Create a SAM template with a Cognito user pool configured to use SMS MFA and an associated Lambda trigger.
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
Example YAML.
Globals:
Function:
Timeout: 3
Handler: lambda_function.lambda_handler
Runtime: python3.6
MemorySize: 128
Resources:
PreSignupValidationLambda:
Type: AWS::Serverless::Function
Properties:
CodeUri: src/pre_signup_validation/
Events:
CognitoTrigger:
Type: Cognito
Properties:
UserPool: !Ref CognitoUserPool
Trigger: PreSignUp
CognitoUserPool:
Type: 'AWS::Cognito::UserPool'
Properties:
AutoVerifiedAttributes:
- phone_number
MfaConfiguration: OPTIONAL
Schema:
- AttributeDataType: String
DeveloperOnlyAttribute: false
Mutable: false
Name: sub
Required: true
StringAttributeConstraints:
MaxLength: 2048
MinLength: 0
- AttributeDataType: String
DeveloperOnlyAttribute: false
Mutable: true
Name: email
Required: true
StringAttributeConstraints:
MaxLength: 2048
MinLength: 0
- AttributeDataType: String
DeveloperOnlyAttribute: false
Mutable: true
Name: phone_number
Required: true
StringAttributeConstraints:
MaxLength: 2048
MinLength: 0
SmsConfiguration:
ExternalId: 'xxx-xxx-xxx'
SnsCallerArn: !GetAtt CognitoSMSRole.Arn
UsernameAttributes:
- email
- phone_number
UserPoolName: Customers
CognitoSMSRole:
Type: 'AWS::IAM::Role'
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service: 'cognito-idp.amazonaws.com'
Action:
- 'sts:AssumeRole'
Condition:
StringEquals:
'sts:ExternalId': 'xxx-xxx-xxx'
Policies:
- PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- 'sns:Publish'
Resource:
- '*'
PolicyName: CognitoSendSMS
RoleName: CognitoSMSRole
```
2. Write a basic Lambda function in ```<template_location>/src/pre_signup_validation/lambda_function.py```
```python
def lambda_handler(event: dict, context: dict):
return event
```
3. Run (Commands from the AWS Toolkit for PyCharm when trying to deploy the application)
```bash
sam build --template template.yaml --build-dir build --use-container
```
```bash
sam package --template-file build/template.yaml --output-template-file build/packaged-template.yaml --s3-bucket <your_s3_bucket>
```
```bash
sam deploy --template-file build/packaged-template.yaml --stack-name test --no-execute-changeset
```
**Observed result:**
SAM validates the SmsConfiguration parameter of Cognito user pools as a list of type dict.
**Expected result:**
Validation should be consistent with CloudFormation specification.
</issue>
<code>
[start of samtranslator/parser/parser.py]
1 import logging
2
3 from samtranslator.model.exceptions import InvalidDocumentException, InvalidTemplateException, InvalidResourceException
4 from samtranslator.validator.validator import SamTemplateValidator
5 from samtranslator.plugins import LifeCycleEvents
6 from samtranslator.public.sdk.template import SamTemplate
7
8 LOG = logging.getLogger(__name__)
9
10
11 class Parser:
12 def __init__(self):
13 pass
14
15 def parse(self, sam_template, parameter_values, sam_plugins):
16 self._validate(sam_template, parameter_values)
17 sam_plugins.act(LifeCycleEvents.before_transform_template, sam_template)
18
19 @staticmethod
20 def validate_datatypes(sam_template):
21 """Validates the datatype within the template """
22 if (
23 "Resources" not in sam_template
24 or not isinstance(sam_template["Resources"], dict)
25 or not sam_template["Resources"]
26 ):
27 raise InvalidDocumentException([InvalidTemplateException("'Resources' section is required")])
28
29 if not all(isinstance(sam_resource, dict) for sam_resource in sam_template["Resources"].values()):
30 raise InvalidDocumentException(
31 [
32 InvalidTemplateException(
33 "All 'Resources' must be Objects. If you're using YAML, this may be an " "indentation issue."
34 )
35 ]
36 )
37
38 sam_template_instance = SamTemplate(sam_template)
39
40 for resource_logical_id, sam_resource in sam_template_instance.iterate():
41 # NOTE: Properties isn't required for SimpleTable, so we can't check
42 # `not isinstance(sam_resources.get("Properties"), dict)` as this would be a breaking change.
43 # sam_resource.properties defaults to {} in SamTemplate init
44 if not isinstance(sam_resource.properties, dict):
45 raise InvalidDocumentException(
46 [
47 InvalidResourceException(
48 resource_logical_id,
49 "All 'Resources' must be Objects and have a 'Properties' Object. If "
50 "you're using YAML, this may be an indentation issue.",
51 )
52 ]
53 )
54
55 # private methods
56 def _validate(self, sam_template, parameter_values):
57 """Validates the template and parameter values and raises exceptions if there's an issue
58
59 :param dict sam_template: SAM template
60 :param dict parameter_values: Dictionary of parameter values provided by the user
61 """
62 if parameter_values is None:
63 raise ValueError("`parameter_values` argument is required")
64
65 Parser.validate_datatypes(sam_template)
66
67 try:
68 validator = SamTemplateValidator()
69 validation_errors = validator.validate(sam_template)
70 if validation_errors:
71 LOG.warning("Template schema validation reported the following errors: %s", validation_errors)
72 except Exception as e:
73 # Catching any exception and not re-raising to make sure any validation process won't break transform
74 LOG.exception("Exception from SamTemplateValidator: %s", e)
75
[end of samtranslator/parser/parser.py]
[start of samtranslator/model/cognito.py]
1 from samtranslator.model import PropertyType, Resource
2 from samtranslator.model.types import is_type, list_of, is_str
3 from samtranslator.model.intrinsics import fnGetAtt, ref
4
5
6 class CognitoUserPool(Resource):
7 resource_type = "AWS::Cognito::UserPool"
8 property_types = {
9 "AccountRecoverySetting": PropertyType(False, is_type(dict)),
10 "AdminCreateUserConfig": PropertyType(False, is_type(dict)),
11 "AliasAttributes": PropertyType(False, list_of(is_str())),
12 "AutoVerifiedAttributes": PropertyType(False, list_of(is_str())),
13 "DeviceConfiguration": PropertyType(False, is_type(dict)),
14 "EmailConfiguration": PropertyType(False, is_type(dict)),
15 "EmailVerificationMessage": PropertyType(False, is_str()),
16 "EmailVerificationSubject": PropertyType(False, is_str()),
17 "EnabledMfas": PropertyType(False, list_of(is_str())),
18 "LambdaConfig": PropertyType(False, is_type(dict)),
19 "MfaConfiguration": PropertyType(False, is_str()),
20 "Policies": PropertyType(False, is_type(dict)),
21 "Schema": PropertyType(False, list_of(dict)),
22 "SmsAuthenticationMessage": PropertyType(False, is_str()),
23 "SmsConfiguration": PropertyType(False, list_of(dict)),
24 "SmsVerificationMessage": PropertyType(False, is_str()),
25 "UsernameAttributes": PropertyType(False, list_of(is_str())),
26 "UsernameConfiguration": PropertyType(False, is_type(dict)),
27 "UserPoolAddOns": PropertyType(False, list_of(dict)),
28 "UserPoolName": PropertyType(False, is_str()),
29 "UserPoolTags": PropertyType(False, is_type(dict)),
30 "VerificationMessageTemplate": PropertyType(False, is_type(dict)),
31 }
32
33 runtime_attrs = {
34 "name": lambda self: ref(self.logical_id),
35 "arn": lambda self: fnGetAtt(self.logical_id, "Arn"),
36 "provider_name": lambda self: fnGetAtt(self.logical_id, "ProviderName"),
37 "provider_url": lambda self: fnGetAtt(self.logical_id, "ProviderURL"),
38 }
39
[end of samtranslator/model/cognito.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/samtranslator/model/cognito.py b/samtranslator/model/cognito.py
--- a/samtranslator/model/cognito.py
+++ b/samtranslator/model/cognito.py
@@ -20,7 +20,7 @@
"Policies": PropertyType(False, is_type(dict)),
"Schema": PropertyType(False, list_of(dict)),
"SmsAuthenticationMessage": PropertyType(False, is_str()),
- "SmsConfiguration": PropertyType(False, list_of(dict)),
+ "SmsConfiguration": PropertyType(False, is_type(dict)),
"SmsVerificationMessage": PropertyType(False, is_str()),
"UsernameAttributes": PropertyType(False, list_of(is_str())),
"UsernameConfiguration": PropertyType(False, is_type(dict)),
diff --git a/samtranslator/parser/parser.py b/samtranslator/parser/parser.py
--- a/samtranslator/parser/parser.py
+++ b/samtranslator/parser/parser.py
@@ -18,7 +18,7 @@
@staticmethod
def validate_datatypes(sam_template):
- """Validates the datatype within the template """
+ """Validates the datatype within the template"""
if (
"Resources" not in sam_template
or not isinstance(sam_template["Resources"], dict)
| {"golden_diff": "diff --git a/samtranslator/model/cognito.py b/samtranslator/model/cognito.py\n--- a/samtranslator/model/cognito.py\n+++ b/samtranslator/model/cognito.py\n@@ -20,7 +20,7 @@\n \"Policies\": PropertyType(False, is_type(dict)),\n \"Schema\": PropertyType(False, list_of(dict)),\n \"SmsAuthenticationMessage\": PropertyType(False, is_str()),\n- \"SmsConfiguration\": PropertyType(False, list_of(dict)),\n+ \"SmsConfiguration\": PropertyType(False, is_type(dict)),\n \"SmsVerificationMessage\": PropertyType(False, is_str()),\n \"UsernameAttributes\": PropertyType(False, list_of(is_str())),\n \"UsernameConfiguration\": PropertyType(False, is_type(dict)),\ndiff --git a/samtranslator/parser/parser.py b/samtranslator/parser/parser.py\n--- a/samtranslator/parser/parser.py\n+++ b/samtranslator/parser/parser.py\n@@ -18,7 +18,7 @@\n \n @staticmethod\n def validate_datatypes(sam_template):\n- \"\"\"Validates the datatype within the template \"\"\"\n+ \"\"\"Validates the datatype within the template\"\"\"\n if (\n \"Resources\" not in sam_template\n or not isinstance(sam_template[\"Resources\"], dict)\n", "issue": "Cognito User Pool SMS configuration problem\n**Description:**\r\nWhen trying to create a Cognito user pool using SAM templates, SAM throws the error\r\n\r\n> Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Number of errors found: 1. Resource with id [CognitoUserPool] is invalid. Type of property 'SmsConfiguration' is invalid.\r\n\r\nwhen specifying [SmsConfiguration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpool.html#cfn-cognito-userpool-smsconfiguration) property.\r\nIn the template, there is also a Lambda trigger that has Cognito configured as an event source.\r\nAfter looking through the project and doing some tests, I believe the error could appear in the samtranslator module:\r\n`'SmsConfiguration': PropertyType(False, list_of(dict)),`\r\nFrom the CloudFormation docs, [SmsConfiguration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpool.html#cfn-cognito-userpool-smsconfiguration) seems to be a simple dict, but in the code snippet above, it is validated as a list of dicts.\r\nIndeed, if I modify the corresponding part of the template from a mapping to a YAML list consisting of a single object, validation passes, but when the stack is created by CloudFormation, it fails with \r\n> Property validation failure: [Value of property {/SmsConfiguration} does not match type {Object}]\r\n\r\nwhich is consistent with the type of the property specified in the CloudFormation docs.\r\n\r\n**Steps to reproduce the issue:**\r\n1. 
Create a SAM template with a Cognito user pool configured to use SMS MFA and a Lambda trigger associated.\r\n```yaml\r\nAWSTemplateFormatVersion: '2010-09-09'\r\nTransform: AWS::Serverless-2016-10-31\r\nDescription: >\r\n Example YAML.\r\nGlobals:\r\n Function:\r\n Timeout: 3\r\n Handler: lambda_function.lambda_handler\r\n Runtime: python3.6\r\n MemorySize: 128\r\nResources:\r\n PreSignupValidationLambda:\r\n Type: AWS::Serverless::Function\r\n Properties:\r\n CodeUri: src/pre_signup_validation/\r\n Events:\r\n CognitoTrigger:\r\n Type: Cognito\r\n Properties:\r\n UserPool: !Ref CognitoUserPool\r\n Trigger: PreSignUp\r\n CognitoUserPool:\r\n Type: 'AWS::Cognito::UserPool'\r\n Properties:\r\n AutoVerifiedAttributes:\r\n - phone_number\r\n MfaConfiguration: OPTIONAL\r\n Schema:\r\n - AttributeDataType: String\r\n DeveloperOnlyAttribute: false\r\n Mutable: false\r\n Name: sub\r\n Required: true\r\n StringAttributeConstraints:\r\n MaxLength: 2048\r\n MinLength: 0\r\n - AttributeDataType: String\r\n DeveloperOnlyAttribute: false\r\n Mutable: true\r\n Name: email\r\n Required: true\r\n StringAttributeConstraints:\r\n MaxLength: 2048\r\n MinLength: 0\r\n - AttributeDataType: String\r\n DeveloperOnlyAttribute: false\r\n Mutable: true\r\n Name: phone_number\r\n Required: true\r\n StringAttributeConstraints:\r\n MaxLength: 2048\r\n MinLength: 0\r\n SmsConfiguration:\r\n ExternalId: 'xxx-xxx-xxx'\r\n SnsCallerArn: !GetAtt CognitoSMSRole.Arn\r\n UsernameAttributes:\r\n - email\r\n - phone_number\r\n UserPoolName: Customers\r\n CognitoSMSRole:\r\n Type: 'AWS::IAM::Role'\r\n Properties:\r\n AssumeRolePolicyDocument:\r\n Version: 2012-10-17\r\n Statement:\r\n - Effect: Allow\r\n Principal:\r\n Service: 'cognito-idp.amazonaws.com'\r\n Action:\r\n - 'sts:AssumeRole'\r\n Condition:\r\n StringEquals:\r\n 'sts:ExternalId': 'xxx-xxx-xxx'\r\n Policies:\r\n - PolicyDocument:\r\n Version: 2012-10-17\r\n Statement:\r\n - Effect: Allow\r\n Action:\r\n - 'sns:Publish'\r\n Resource:\r\n - '*'\r\n PolicyName: CognitoSendSMS\r\n RoleName: CognitoSMSRole\r\n```\r\n2. Write a basic Lambda function in ```<template_location>/src/pre_signup_validation/lambda_function.py```\r\n```python\r\ndef lambda_handler(event: dict, context: dict):\r\n return event\r\n```\r\n3. 
Run (Commands from the AWS Toolkit for PyCharm when trying to deploy application)\r\n```bash\r\nsam build --template template.yaml --build-dir build --use-container\r\n```\r\n```bash\r\nsam package --template-file build/template.yaml --output-template-file build/packaged-template.yaml --s3-bucket <your_s3_bucket>\r\n```\r\n```bash\r\nsam deploy --template-file build/packaged-template.yaml --stack-name test --no-execute-changeset\r\n```\r\n\r\n**Observed result:**\r\nSAM validates the SmsConfiguration parameter of Cognito user pools as a list of type dict.\r\n**Expected result:**\r\nValidation should be consistent with CloudFormation specification.\n", "before_files": [{"content": "import logging\n\nfrom samtranslator.model.exceptions import InvalidDocumentException, InvalidTemplateException, InvalidResourceException\nfrom samtranslator.validator.validator import SamTemplateValidator\nfrom samtranslator.plugins import LifeCycleEvents\nfrom samtranslator.public.sdk.template import SamTemplate\n\nLOG = logging.getLogger(__name__)\n\n\nclass Parser:\n def __init__(self):\n pass\n\n def parse(self, sam_template, parameter_values, sam_plugins):\n self._validate(sam_template, parameter_values)\n sam_plugins.act(LifeCycleEvents.before_transform_template, sam_template)\n\n @staticmethod\n def validate_datatypes(sam_template):\n \"\"\"Validates the datatype within the template \"\"\"\n if (\n \"Resources\" not in sam_template\n or not isinstance(sam_template[\"Resources\"], dict)\n or not sam_template[\"Resources\"]\n ):\n raise InvalidDocumentException([InvalidTemplateException(\"'Resources' section is required\")])\n\n if not all(isinstance(sam_resource, dict) for sam_resource in sam_template[\"Resources\"].values()):\n raise InvalidDocumentException(\n [\n InvalidTemplateException(\n \"All 'Resources' must be Objects. If you're using YAML, this may be an \" \"indentation issue.\"\n )\n ]\n )\n\n sam_template_instance = SamTemplate(sam_template)\n\n for resource_logical_id, sam_resource in sam_template_instance.iterate():\n # NOTE: Properties isn't required for SimpleTable, so we can't check\n # `not isinstance(sam_resources.get(\"Properties\"), dict)` as this would be a breaking change.\n # sam_resource.properties defaults to {} in SamTemplate init\n if not isinstance(sam_resource.properties, dict):\n raise InvalidDocumentException(\n [\n InvalidResourceException(\n resource_logical_id,\n \"All 'Resources' must be Objects and have a 'Properties' Object. 
If \"\n \"you're using YAML, this may be an indentation issue.\",\n )\n ]\n )\n\n # private methods\n def _validate(self, sam_template, parameter_values):\n \"\"\"Validates the template and parameter values and raises exceptions if there's an issue\n\n :param dict sam_template: SAM template\n :param dict parameter_values: Dictionary of parameter values provided by the user\n \"\"\"\n if parameter_values is None:\n raise ValueError(\"`parameter_values` argument is required\")\n\n Parser.validate_datatypes(sam_template)\n\n try:\n validator = SamTemplateValidator()\n validation_errors = validator.validate(sam_template)\n if validation_errors:\n LOG.warning(\"Template schema validation reported the following errors: %s\", validation_errors)\n except Exception as e:\n # Catching any exception and not re-raising to make sure any validation process won't break transform\n LOG.exception(\"Exception from SamTemplateValidator: %s\", e)\n", "path": "samtranslator/parser/parser.py"}, {"content": "from samtranslator.model import PropertyType, Resource\nfrom samtranslator.model.types import is_type, list_of, is_str\nfrom samtranslator.model.intrinsics import fnGetAtt, ref\n\n\nclass CognitoUserPool(Resource):\n resource_type = \"AWS::Cognito::UserPool\"\n property_types = {\n \"AccountRecoverySetting\": PropertyType(False, is_type(dict)),\n \"AdminCreateUserConfig\": PropertyType(False, is_type(dict)),\n \"AliasAttributes\": PropertyType(False, list_of(is_str())),\n \"AutoVerifiedAttributes\": PropertyType(False, list_of(is_str())),\n \"DeviceConfiguration\": PropertyType(False, is_type(dict)),\n \"EmailConfiguration\": PropertyType(False, is_type(dict)),\n \"EmailVerificationMessage\": PropertyType(False, is_str()),\n \"EmailVerificationSubject\": PropertyType(False, is_str()),\n \"EnabledMfas\": PropertyType(False, list_of(is_str())),\n \"LambdaConfig\": PropertyType(False, is_type(dict)),\n \"MfaConfiguration\": PropertyType(False, is_str()),\n \"Policies\": PropertyType(False, is_type(dict)),\n \"Schema\": PropertyType(False, list_of(dict)),\n \"SmsAuthenticationMessage\": PropertyType(False, is_str()),\n \"SmsConfiguration\": PropertyType(False, list_of(dict)),\n \"SmsVerificationMessage\": PropertyType(False, is_str()),\n \"UsernameAttributes\": PropertyType(False, list_of(is_str())),\n \"UsernameConfiguration\": PropertyType(False, is_type(dict)),\n \"UserPoolAddOns\": PropertyType(False, list_of(dict)),\n \"UserPoolName\": PropertyType(False, is_str()),\n \"UserPoolTags\": PropertyType(False, is_type(dict)),\n \"VerificationMessageTemplate\": PropertyType(False, is_type(dict)),\n }\n\n runtime_attrs = {\n \"name\": lambda self: ref(self.logical_id),\n \"arn\": lambda self: fnGetAtt(self.logical_id, \"Arn\"),\n \"provider_name\": lambda self: fnGetAtt(self.logical_id, \"ProviderName\"),\n \"provider_url\": lambda self: fnGetAtt(self.logical_id, \"ProviderURL\"),\n }\n", "path": "samtranslator/model/cognito.py"}]} | 2,923 | 270 |
gh_patches_debug_38651 | rasdani/github-patches | git_diff | conan-io__conan-center-index-505 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[package] msys2/20190524: PKG_CONFIG_PATH environment variable is not passed
The `PKG_CONFIG_PATH` environment variable is not passed to the msys2 environment.
This causes the `pkg_config` generator not to work.
The `PKG_CONFIG_PATH` environment variable is always `/mingw64/lib/pkgconfig:/mingw64/share/pkgconfig`
Is this a bug or am I missing something?
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **msys2/20190524**
* Operating System+version: **Windows 10**
* Conan version: **conan 1.21.0**
* Python version: **Python 3.8.0**
### Steps to reproduce (Include if Applicable)
In Windows 10, build the following recipe:
```
from conans import ConanFile, tools
import os
class DummyConan(ConanFile):
name = "dummy"
version = "0.1"
requires = ""
def build_requirements(self):
if tools.os_info.is_windows and not "CONAN_BASH_PATH" in os.environ:
self.build_requires("msys2/20190524")
# self.build_requires("msys2/20161025")
def build(self):
env = {
"PKG_CONFIG_PATH": "PKG_CONFIG_PATH from conan",
"DUMMY_ENV": "DUMMY_ENV from conan",
}
with tools.environment_append(env):
self.run("echo $PKG_CONFIG_PATH", win_bash=tools.os_info.is_windows)
self.run("echo $DUMMY_ENV", win_bash=tools.os_info.is_windows)
```
(the behavior is the same for `msys2/20161025`)
This prints ` /mingw64/lib/pkgconfig:/mingw64/share/pkgconfig` for `PKG_CONFIG_PATH`.
And `DUMMY_ENV from conan` for `DUMMY_ENV`.
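The value suggests that the msys2 login shell re-exports `PKG_CONFIG_PATH` from its `etc/profile`, clobbering whatever was set in the Conan environment. A sketch of a possible workaround in the msys2 recipe's `build()` (assuming the profile contains a line of the form `PKG_CONFIG_PATH="..."` that can be made to prepend the inherited value instead of replacing it):
```python
# sketch: patch etc/profile so it prepends an already-set PKG_CONFIG_PATH
tools.replace_in_file(os.path.join(msys_dir, "etc", "profile"),
                      'PKG_CONFIG_PATH="', 'PKG_CONFIG_PATH="$PKG_CONFIG_PATH:')
```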
### Logs (Include/Attach if Applicable)
<details><summary>Click to expand log</summary>
```
dummy/0.1: Calling build()
dummy/0.1: run_in_windows_bash: C:\.conan\1982c6\1\bin\usr\bin\bash.exe --login -c ^"cd \^"/c/users/maarten/.conan/data/dummy/0.1/_/_/build/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\^" ^&^& PATH=\^"/c/.conan/1982c6/1/bin/usr/bin:$PATH\^" ^&^& echo $PKG_CONFIG_PATH ^"
dummy/0.1:
----Running------
> C:\.conan\1982c6\1\bin\usr\bin\bash.exe --login -c ^"cd \^"/c/users/maarten/.conan/data/dummy/0.1/_/_/build/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\^" ^&^& PATH=\^"/c/.conan/1982c6/1/bin/usr/bin:$PATH\^" ^&^& echo $PKG_CONFIG_PATH ^"
-----------------
dummy/0.1: /mingw64/lib/pkgconfig:/mingw64/share/pkgconfig
dummy/0.1: run_in_windows_bash: C:\.conan\1982c6\1\bin\usr\bin\bash.exe --login -c ^"cd \^"/c/users/maarten/.conan/data/dummy/0.1/_/_/build/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\^" ^&^& PATH=\^"/c/.conan/1982c6/1/bin/usr/bin:$PATH\^" ^&^& echo $DUMMY_ENV ^"
dummy/0.1:
----Running------
> C:\.conan\1982c6\1\bin\usr\bin\bash.exe --login -c ^"cd \^"/c/users/maarten/.conan/data/dummy/0.1/_/_/build/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\^" ^&^& PATH=\^"/c/.conan/1982c6/1/bin/usr/bin:$PATH\^" ^&^& echo $DUMMY_ENV ^"
-----------------
dummy/0.1: DUMMY_ENV from conan
dummy/0.1: Package '5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9' built
dummy/0.1: Build folder C:\Users\maarten\.conan\data\dummy\0.1\_\_\build\5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9
dummy/0.1: Generated conaninfo.txt
dummy/0.1: Generated conanbuildinfo.txt
dummy/0.1: Generating the package
dummy/0.1: Package folder C:\Users\maarten\.conan\data\dummy\0.1\_\_\package\5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9
```
</details>
</issue>
<code>
[start of recipes/msys2/all/conanfile.py]
1 from conans import ConanFile, tools
2 from conans.errors import ConanInvalidConfiguration
3 import os
4 import shutil
5
6
7 class MSYS2Conan(ConanFile):
8 name = "msys2"
9 description = "MSYS2 is a software distro and building platform for Windows"
10 url = "https://github.com/conan-io/conan-center-index"
11 homepage = "http://www.msys2.org"
12 license = "MSYS license"
13 topics = ("conan", "msys", "unix", "subsystem")
14 build_requires = "7zip/19.00"
15 short_paths = True
16 options = {"exclude_files": "ANY", # Comma separated list of file patterns to exclude from the package
17 "packages": "ANY", # Comma separated
18 "additional_packages": "ANY"} # Comma separated
19 default_options = {"exclude_files": "*/link.exe",
20 "packages": "base-devel,binutils,gcc",
21 "additional_packages": None}
22 settings = "os_build", "arch_build"
23
24 def configure(self):
25 if self.settings.os_build != "Windows":
26 raise ConanInvalidConfiguration("Only Windows supported")
27
28 def source(self):
29 # build tools have to download files in build method when the
30 # source files downloaded will be different based on architecture or OS
31 pass
32
33 def _download(self, url, sha256):
34 from six.moves.urllib.parse import urlparse
35 filename = os.path.basename(urlparse(url).path)
36 tools.download(url, filename)
37 tools.check_sha256(filename, sha256)
38 return filename
39
40 def build(self):
41 arch = 0 if self.settings.arch_build == "x86" else 1 # index in the sources list
42 url = self.conan_data["sources"][self.version][arch]["url"]
43 sha256 = self.conan_data["sources"][self.version][arch]["sha256"]
44 filename = self._download(**self.conan_data["sources"][self.version][arch])
45 tar_name = filename.replace(".xz", "")
46 self.run("7z.exe x {0}".format(filename))
47 self.run("7z.exe x {0}".format(tar_name))
48 os.unlink(filename)
49 os.unlink(tar_name)
50
51 msys_dir = "msys64" if self.settings.arch_build == "x86_64" else "msys32"
52
53 packages = []
54 if self.options.packages:
55 packages.extend(str(self.options.packages).split(","))
56 if self.options.additional_packages:
57 packages.extend(str(self.options.additional_packages).split(","))
58
59 with tools.chdir(os.path.join(msys_dir, "usr", "bin")):
60 for package in packages:
61 self.run('bash -l -c "pacman -S %s --noconfirm"' % package)
62
63 # create /tmp dir in order to avoid
64 # bash.exe: warning: could not find /tmp, please create!
65 tmp_dir = os.path.join(msys_dir, 'tmp')
66 if not os.path.isdir(tmp_dir):
67 os.makedirs(tmp_dir)
68 tmp_name = os.path.join(tmp_dir, 'dummy')
69 with open(tmp_name, 'a'):
70 os.utime(tmp_name, None)
71
72 def package(self):
73 msys_dir = "msys64" if self.settings.arch_build == "x86_64" else "msys32"
74 excludes = None
75 if self.options.exclude_files:
76 excludes = tuple(str(self.options.exclude_files).split(","))
77 self.copy("*", dst="bin", src=msys_dir, excludes=excludes)
78 shutil.copytree(os.path.join(self.package_folder, "bin", "usr", "share", "licenses"),
79 os.path.join(self.package_folder, "licenses"))
80
81
82 def package_info(self):
83 msys_root = os.path.join(self.package_folder, "bin")
84 msys_bin = os.path.join(msys_root, "usr", "bin")
85
86 self.output.info("Creating MSYS_ROOT env var : %s" % msys_root)
87 self.env_info.MSYS_ROOT = msys_root
88
89 self.output.info("Creating MSYS_BIN env var : %s" % msys_bin)
90 self.env_info.MSYS_BIN = msys_bin
91
92 self.output.info("Appending PATH env var with : " + msys_bin)
93 self.env_info.path.append(msys_bin)
94
[end of recipes/msys2/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/msys2/all/conanfile.py b/recipes/msys2/all/conanfile.py
--- a/recipes/msys2/all/conanfile.py
+++ b/recipes/msys2/all/conanfile.py
@@ -37,6 +37,10 @@
tools.check_sha256(filename, sha256)
return filename
+ @property
+ def _msys_dir(self):
+ return "msys64" if self.settings.arch_build == "x86_64" else "msys32"
+
def build(self):
arch = 0 if self.settings.arch_build == "x86" else 1 # index in the sources list
url = self.conan_data["sources"][self.version][arch]["url"]
@@ -48,33 +52,34 @@
os.unlink(filename)
os.unlink(tar_name)
- msys_dir = "msys64" if self.settings.arch_build == "x86_64" else "msys32"
-
packages = []
if self.options.packages:
packages.extend(str(self.options.packages).split(","))
if self.options.additional_packages:
packages.extend(str(self.options.additional_packages).split(","))
- with tools.chdir(os.path.join(msys_dir, "usr", "bin")):
+ with tools.chdir(os.path.join(self._msys_dir, "usr", "bin")):
for package in packages:
self.run('bash -l -c "pacman -S %s --noconfirm"' % package)
# create /tmp dir in order to avoid
# bash.exe: warning: could not find /tmp, please create!
- tmp_dir = os.path.join(msys_dir, 'tmp')
+ tmp_dir = os.path.join(self._msys_dir, 'tmp')
if not os.path.isdir(tmp_dir):
os.makedirs(tmp_dir)
tmp_name = os.path.join(tmp_dir, 'dummy')
with open(tmp_name, 'a'):
os.utime(tmp_name, None)
+ # Prepend the PKG_CONFIG_PATH environment variable with an eventual PKG_CONFIG_PATH environment variable
+ tools.replace_in_file(os.path.join(self._msys_dir, "etc", "profile"),
+ 'PKG_CONFIG_PATH="', 'PKG_CONFIG_PATH="$PKG_CONFIG_PATH:')
+
def package(self):
- msys_dir = "msys64" if self.settings.arch_build == "x86_64" else "msys32"
excludes = None
if self.options.exclude_files:
excludes = tuple(str(self.options.exclude_files).split(","))
- self.copy("*", dst="bin", src=msys_dir, excludes=excludes)
+ self.copy("*", dst="bin", src=self._msys_dir, excludes=excludes)
shutil.copytree(os.path.join(self.package_folder, "bin", "usr", "share", "licenses"),
os.path.join(self.package_folder, "licenses"))
| {"golden_diff": "diff --git a/recipes/msys2/all/conanfile.py b/recipes/msys2/all/conanfile.py\n--- a/recipes/msys2/all/conanfile.py\n+++ b/recipes/msys2/all/conanfile.py\n@@ -37,6 +37,10 @@\n tools.check_sha256(filename, sha256)\n return filename\n \n+ @property\n+ def _msys_dir(self):\n+ return \"msys64\" if self.settings.arch_build == \"x86_64\" else \"msys32\"\n+\n def build(self):\n arch = 0 if self.settings.arch_build == \"x86\" else 1 # index in the sources list\n url = self.conan_data[\"sources\"][self.version][arch][\"url\"]\n@@ -48,33 +52,34 @@\n os.unlink(filename)\n os.unlink(tar_name)\n \n- msys_dir = \"msys64\" if self.settings.arch_build == \"x86_64\" else \"msys32\"\n-\n packages = []\n if self.options.packages:\n packages.extend(str(self.options.packages).split(\",\"))\n if self.options.additional_packages:\n packages.extend(str(self.options.additional_packages).split(\",\"))\n \n- with tools.chdir(os.path.join(msys_dir, \"usr\", \"bin\")):\n+ with tools.chdir(os.path.join(self._msys_dir, \"usr\", \"bin\")):\n for package in packages:\n self.run('bash -l -c \"pacman -S %s --noconfirm\"' % package)\n \n # create /tmp dir in order to avoid\n # bash.exe: warning: could not find /tmp, please create!\n- tmp_dir = os.path.join(msys_dir, 'tmp')\n+ tmp_dir = os.path.join(self._msys_dir, 'tmp')\n if not os.path.isdir(tmp_dir):\n os.makedirs(tmp_dir)\n tmp_name = os.path.join(tmp_dir, 'dummy')\n with open(tmp_name, 'a'):\n os.utime(tmp_name, None)\n \n+ # Prepend the PKG_CONFIG_PATH environment variable with an eventual PKG_CONFIG_PATH environment variable\n+ tools.replace_in_file(os.path.join(self._msys_dir, \"etc\", \"profile\"),\n+ 'PKG_CONFIG_PATH=\"', 'PKG_CONFIG_PATH=\"$PKG_CONFIG_PATH:')\n+\n def package(self):\n- msys_dir = \"msys64\" if self.settings.arch_build == \"x86_64\" else \"msys32\"\n excludes = None\n if self.options.exclude_files:\n excludes = tuple(str(self.options.exclude_files).split(\",\"))\n- self.copy(\"*\", dst=\"bin\", src=msys_dir, excludes=excludes)\n+ self.copy(\"*\", dst=\"bin\", src=self._msys_dir, excludes=excludes)\n shutil.copytree(os.path.join(self.package_folder, \"bin\", \"usr\", \"share\", \"licenses\"),\n os.path.join(self.package_folder, \"licenses\"))\n", "issue": "[package] msys2/20190524: PKG_CONFIG_PATH environment variable is not passed\nThe `PKG_CONFIG_PATH` environment variable is not passed tot the msys2 environment.\r\nThis causes the `pkg_config` generator not to work.\r\n\r\nThe `PKG_CONFIG_PATH` environment variable is always `/mingw64/lib/pkgconfig:/mingw64/share/pkgconfig`\r\n\r\nIs this a bug or am I missing something?\r\n\r\n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **msys2/20190524**\r\n * Operating System+version: **Windows 10**\r\n * Conan version: **conan 1.21.0**\r\n * Python version: **Python 3.8.0**\r\n\r\n### Steps to reproduce (Include if Applicable)\r\nIn Windows 10, build the following recipe:\r\n```\r\nfrom conans import ConanFile, tools\r\nimport os\r\n\r\n\r\nclass DummyConan(ConanFile):\r\n name = \"dummy\"\r\n version = \"0.1\"\r\n\r\n requires = \"\"\r\n\r\n def build_requirements(self):\r\n if tools.os_info.is_windows and not \"CONAN_BASH_PATH\" in os.environ:\r\n self.build_requires(\"msys2/20190524\")\r\n # self.build_requires(\"msys2/20161025\")\r\n\r\n def build(self):\r\n env = {\r\n \"PKG_CONFIG_PATH\": \"PKG_CONFIG_PATH from conan\",\r\n \"DUMMY_ENV\": \"DUMMY_ENV from conan\",\r\n }\r\n with tools.environment_append(env):\r\n self.run(\"echo 
$PKG_CONFIG_PATH\", win_bash=tools.os_info.is_windows)\r\n self.run(\"echo $DUMMY_ENV\", win_bash=tools.os_info.is_windows)\r\n```\r\n(the behavior is the same for `msys2/20161025`)\r\n\r\nThis prints ` /mingw64/lib/pkgconfig:/mingw64/share/pkgconfig` for `PKG_CONFIG_PATH`.\r\nAnd `DUMMY_ENV from conan` for `DUMMY_ENV`.\r\n\r\n\r\n### Logs (Include/Attach if Applicable)\r\n<details><summary>Click to expand log</summary>\r\n\r\n```\r\ndummy/0.1: Calling build()\r\ndummy/0.1: run_in_windows_bash: C:\\.conan\\1982c6\\1\\bin\\usr\\bin\\bash.exe --login -c ^\"cd \\^\"/c/users/maarten/.conan/data/dummy/0.1/_/_/build/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\\^\" ^&^& PATH=\\^\"/c/.conan/1982c6/1/bin/usr/bin:$PATH\\^\" ^&^& echo $PKG_CONFIG_PATH ^\"\r\ndummy/0.1:\r\n----Running------\r\n> C:\\.conan\\1982c6\\1\\bin\\usr\\bin\\bash.exe --login -c ^\"cd \\^\"/c/users/maarten/.conan/data/dummy/0.1/_/_/build/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\\^\" ^&^& PATH=\\^\"/c/.conan/1982c6/1/bin/usr/bin:$PATH\\^\" ^&^& echo $PKG_CONFIG_PATH ^\"\r\n-----------------\r\ndummy/0.1: /mingw64/lib/pkgconfig:/mingw64/share/pkgconfig\r\ndummy/0.1: run_in_windows_bash: C:\\.conan\\1982c6\\1\\bin\\usr\\bin\\bash.exe --login -c ^\"cd \\^\"/c/users/maarten/.conan/data/dummy/0.1/_/_/build/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\\^\" ^&^& PATH=\\^\"/c/.conan/1982c6/1/bin/usr/bin:$PATH\\^\" ^&^& echo $DUMMY_ENV ^\"\r\ndummy/0.1:\r\n----Running------\r\n> C:\\.conan\\1982c6\\1\\bin\\usr\\bin\\bash.exe --login -c ^\"cd \\^\"/c/users/maarten/.conan/data/dummy/0.1/_/_/build/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\\^\" ^&^& PATH=\\^\"/c/.conan/1982c6/1/bin/usr/bin:$PATH\\^\" ^&^& echo $DUMMY_ENV ^\"\r\n-----------------\r\ndummy/0.1: DUMMY_ENV from conan\r\ndummy/0.1: Package '5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9' built\r\ndummy/0.1: Build folder C:\\Users\\maarten\\.conan\\data\\dummy\\0.1\\_\\_\\build\\5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\r\ndummy/0.1: Generated conaninfo.txt\r\ndummy/0.1: Generated conanbuildinfo.txt\r\ndummy/0.1: Generating the package\r\ndummy/0.1: Package folder C:\\Users\\maarten\\.conan\\data\\dummy\\0.1\\_\\_\\package\\5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\r\n```\r\n\r\n</details>\r\n\n", "before_files": [{"content": "from conans import ConanFile, tools\nfrom conans.errors import ConanInvalidConfiguration\nimport os\nimport shutil\n\n\nclass MSYS2Conan(ConanFile):\n name = \"msys2\"\n description = \"MSYS2 is a software distro and building platform for Windows\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"http://www.msys2.org\"\n license = \"MSYS license\"\n topics = (\"conan\", \"msys\", \"unix\", \"subsystem\")\n build_requires = \"7zip/19.00\"\n short_paths = True\n options = {\"exclude_files\": \"ANY\", # Comma separated list of file patterns to exclude from the package\n \"packages\": \"ANY\", # Comma separated\n \"additional_packages\": \"ANY\"} # Comma separated\n default_options = {\"exclude_files\": \"*/link.exe\",\n \"packages\": \"base-devel,binutils,gcc\",\n \"additional_packages\": None}\n settings = \"os_build\", \"arch_build\"\n\n def configure(self):\n if self.settings.os_build != \"Windows\":\n raise ConanInvalidConfiguration(\"Only Windows supported\")\n\n def source(self):\n # build tools have to download files in build method when the\n # source files downloaded will be different based on architecture or OS\n pass\n\n def _download(self, url, sha256):\n from six.moves.urllib.parse import urlparse\n filename = 
os.path.basename(urlparse(url).path)\n tools.download(url, filename)\n tools.check_sha256(filename, sha256)\n return filename\n\n def build(self):\n arch = 0 if self.settings.arch_build == \"x86\" else 1 # index in the sources list\n url = self.conan_data[\"sources\"][self.version][arch][\"url\"]\n sha256 = self.conan_data[\"sources\"][self.version][arch][\"sha256\"]\n filename = self._download(**self.conan_data[\"sources\"][self.version][arch])\n tar_name = filename.replace(\".xz\", \"\")\n self.run(\"7z.exe x {0}\".format(filename))\n self.run(\"7z.exe x {0}\".format(tar_name))\n os.unlink(filename)\n os.unlink(tar_name)\n\n msys_dir = \"msys64\" if self.settings.arch_build == \"x86_64\" else \"msys32\"\n\n packages = []\n if self.options.packages:\n packages.extend(str(self.options.packages).split(\",\"))\n if self.options.additional_packages:\n packages.extend(str(self.options.additional_packages).split(\",\"))\n\n with tools.chdir(os.path.join(msys_dir, \"usr\", \"bin\")):\n for package in packages:\n self.run('bash -l -c \"pacman -S %s --noconfirm\"' % package)\n\n # create /tmp dir in order to avoid\n # bash.exe: warning: could not find /tmp, please create!\n tmp_dir = os.path.join(msys_dir, 'tmp')\n if not os.path.isdir(tmp_dir):\n os.makedirs(tmp_dir)\n tmp_name = os.path.join(tmp_dir, 'dummy')\n with open(tmp_name, 'a'):\n os.utime(tmp_name, None)\n\n def package(self):\n msys_dir = \"msys64\" if self.settings.arch_build == \"x86_64\" else \"msys32\"\n excludes = None\n if self.options.exclude_files:\n excludes = tuple(str(self.options.exclude_files).split(\",\"))\n self.copy(\"*\", dst=\"bin\", src=msys_dir, excludes=excludes)\n shutil.copytree(os.path.join(self.package_folder, \"bin\", \"usr\", \"share\", \"licenses\"),\n os.path.join(self.package_folder, \"licenses\"))\n\n\n def package_info(self):\n msys_root = os.path.join(self.package_folder, \"bin\")\n msys_bin = os.path.join(msys_root, \"usr\", \"bin\")\n\n self.output.info(\"Creating MSYS_ROOT env var : %s\" % msys_root)\n self.env_info.MSYS_ROOT = msys_root\n\n self.output.info(\"Creating MSYS_BIN env var : %s\" % msys_bin)\n self.env_info.MSYS_BIN = msys_bin\n\n self.output.info(\"Appending PATH env var with : \" + msys_bin)\n self.env_info.path.append(msys_bin)\n", "path": "recipes/msys2/all/conanfile.py"}]} | 2,991 | 658 |
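The essential change in the golden diff above is a one-line rewrite of msys2's `etc/profile`: the hard-coded `PKG_CONFIG_PATH` assignment gets `$PKG_CONFIG_PATH:` prepended so a value set by the caller (for example by Conan's `pkg_config` generator) survives into the bash session. A minimal stand-alone sketch of that string rewrite follows; the sample profile line is a made-up stand-in, and plain `str.replace` is used here instead of Conan's `tools.replace_in_file`.

```python
# Sketch of the profile rewrite applied in the golden diff above.
# The sample text is a hypothetical stand-in for msys2's /etc/profile.
def prepend_external_pkg_config_path(profile_text: str) -> str:
    # Before: PKG_CONFIG_PATH="/mingw64/lib/pkgconfig:..."
    # After:  PKG_CONFIG_PATH="$PKG_CONFIG_PATH:/mingw64/lib/pkgconfig:..."
    return profile_text.replace('PKG_CONFIG_PATH="', 'PKG_CONFIG_PATH="$PKG_CONFIG_PATH:')


if __name__ == "__main__":
    sample = 'PKG_CONFIG_PATH="/mingw64/lib/pkgconfig:/mingw64/share/pkgconfig"'
    print(prepend_external_pkg_config_path(sample))
```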
gh_patches_debug_26953 | rasdani/github-patches | git_diff | mdn__kuma-2072 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
restore django-debug-toolbar
We disabled django-debug-toolbar before we upgraded to django 1.4. Now that we're on it we should be able to restore it in `settings_local.py`.
restore django-debug-toolbar
We disabled django-debug-toolbar before we upgraded to django 1.4. Now that we're on it we should be able to restore it in `settings_local.py`.
</issue>
<code>
[start of puppet/files/vagrant/settings_local.py]
1 from settings import *
2 import logging
3
4 INTERNAL_IPS = ('127.0.0.1', '192.168.10.1',)
5
6 DEBUG = True
7 DEV = True
8 TEMPLATE_DEBUG = DEBUG
9 SERVE_MEDIA = DEBUG
10
11 SESSION_COOKIE_SECURE = True
12
13 DEMO_UPLOADS_ROOT = '/home/vagrant/uploads/demos'
14 DEMO_UPLOADS_URL = '/media/uploads/demos/'
15
16 PROD_DETAILS_DIR = '/home/vagrant/product_details_json'
17 MDC_PAGES_DIR = '/home/vagrant/mdc_pages'
18
19 GOOGLE_MAPS_API_KEY = "ABQIAAAANRj9BHQi5ireVluCwVy0yRSrufPN8BjQWjkoRva24PCQEXS2OhSXu2BEgUH5PmGOmW71r2-tEuOVuQ"
20
21 RECAPTCHA_USE_SSL = True
22 RECAPTCHA_PUBLIC_KEY = '6LdX8cISAAAAAA9HRXmzrcRSFsUoIK9u0nWpvGS_'
23 RECAPTCHA_PRIVATE_KEY = '6LdX8cISAAAAACkC1kqYmpeSf-1geTmLzrLnq0t6'
24
25 BITLY_USERNAME = 'lmorchard'
26 BITLY_API_KEY = "R_2653e6351e31d02988b3da31dac6e2c0"
27
28 EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
29 #EMAIL_BACKEND = 'django.core.mail.backends.filebased.EmailBackend'
30 #EMAIL_FILE_PATH = '/home/vagrant/logs/kuma-email.log'
31
32 # Uncomment to enable a real celery queue
33 CELERY_ALWAYS_EAGER = False
34
35 INSTALLED_APPS = INSTALLED_APPS + (
36 "django_extensions",
37 # TODO: re-enable after django 1.4
38 # "debug_toolbar",
39 "devserver",
40 )
41
42 MIDDLEWARE_CLASSES = (
43 # TODO: re-enable after django 1.4
44 # "debug_toolbar.middleware.DebugToolbarMiddleware",
45 ) + MIDDLEWARE_CLASSES
46
47 DEBUG_TOOLBAR_CONFIG = {
48 "INTERCEPT_REDIRECTS": False,
49 }
50
51 DEBUG_TOOLBAR_PANELS = (
52 'debug_toolbar.panels.version.VersionDebugPanel',
53 'debug_toolbar.panels.timer.TimerDebugPanel',
54 'debug_toolbar.panels.settings_vars.SettingsVarsDebugPanel',
55 'debug_toolbar.panels.headers.HeaderDebugPanel',
56 'debug_toolbar.panels.request_vars.RequestVarsDebugPanel',
57 'debug_toolbar.panels.template.TemplateDebugPanel',
58 #'cache_panel.CachePanel',
59 'debug_toolbar.panels.sql.SQLDebugPanel',
60 'debug_toolbar.panels.signals.SignalDebugPanel',
61 'debug_toolbar.panels.logger.LoggingPanel',
62 )
63
64 DEVSERVER_MODULES = (
65 # sql modules interfere with saving some KumaScript templates
66 #'devserver.modules.sql.SQLRealTimeModule',
67 #'devserver.modules.sql.SQLSummaryModule',
68 'devserver.modules.profile.ProfileSummaryModule',
69
70 # Modules not enabled by default
71 #'devserver.modules.ajax.AjaxDumpModule',
72 #'devserver.modules.profile.MemoryUseModule',
73 #'devserver.modules.cache.CacheSummaryModule',
74 #'devserver.modules.profile.LineProfilerModule',
75 )
76
77 # The default database should point to the master.
78 DATABASES = {
79 'default': {
80 'NAME': 'kuma',
81 'ENGINE': 'django.db.backends.mysql',
82 'HOST': 'localhost',
83 'USER': 'kuma',
84 'PASSWORD': 'kuma',
85 'OPTIONS': {'init_command': 'SET storage_engine=InnoDB'},
86 },
87 }
88
89 MIGRATION_DATABASES = {
90 'wikidb': {
91 'NAME': 'wikidb',
92 'ENGINE': 'django.db.backends.mysql',
93 'HOST': 'localhost',
94 'USER': 'wikiuser',
95 'PASSWORD': '2yeOr7ByBUMBiB4z',
96 },
97 }
98
99 CACHES = {
100 'default': {
101 # HACK: We currently have 'default' memcache disabled in production.
102 # This reflects that in local dev.
103 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
104 #'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
105 #'LOCATION': [
106 # '127.0.0.1:11211',
107 #],
108 'TIMEOUT': 3600,
109 'KEY_PREFIX': 'kuma',
110 },
111 'secondary': {
112 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
113 'LOCATION': [
114 '127.0.0.1:11211',
115 ],
116 'TIMEOUT': 3600,
117 'KEY_PREFIX': 'kuma',
118 }
119 }
120
121 # TODO: Switch this to 'default' when main cache issues are resolved
122 SECONDARY_CACHE_ALIAS = 'secondary'
123
124 # Use IP:PORT pairs separated by semicolons.
125 CACHE_BACKEND = 'memcached://localhost:11211?timeout=60'
126 CONSTANCE_DATABASE_CACHE_BACKEND = CACHE_BACKEND
127
128 # This is used to hash some things in Django.
129 SECRET_KEY = 'jenny8675309'
130
131 DEBUG_PROPAGATE_EXCEPTIONS = DEBUG
132
133 LOG_LEVEL = logging.DEBUG
134
135 SITE_URL = 'https://developer-local.allizom.org'
136 PROTOCOL = 'https://'
137 DOMAIN = 'developer-local.allizom.org'
138
139 # See: https://github.com/mozilla/django-browserid/issues/8 (TODO)
140 BROWSERID_DISABLE_CERT_CHECK = True
141 BROWSERID_CACERT_FILE = None
142
143 LOGIN_REDIRECT_URL = '/'
144 LOGIN_REDIRECT_URL_FAILURE = '/'
145
146 KUMASCRIPT_URL_TEMPLATE = 'http://localhost:9080/docs/{path}'
147
148 ATTACHMENT_HOST = 'mdn-local.mozillademos.org'
149
150 ES_DISABLED = False
151 ES_URLS = ['http://127.0.0.1:9200']
152 ES_INDEXES = {'default': 'main_index'}
153 ES_INDEX_PREFIX = 'mdn'
154 ES_LIVE_INDEX = True
155 ES_INDEXING_TIMEOUT = 30
156
157 # See https://mana.mozilla.org/wiki/display/websites/Developer+Cluster#DeveloperCluster-Sentry
158 SENTRY_DSN = ''
159
160 if SENTRY_DSN:
161 INSTALLED_APPS = INSTALLED_APPS + (
162 'raven.contrib.django.raven_compat',
163 )
164
[end of puppet/files/vagrant/settings_local.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/puppet/files/vagrant/settings_local.py b/puppet/files/vagrant/settings_local.py
--- a/puppet/files/vagrant/settings_local.py
+++ b/puppet/files/vagrant/settings_local.py
@@ -34,31 +34,30 @@
INSTALLED_APPS = INSTALLED_APPS + (
"django_extensions",
- # TODO: re-enable after django 1.4
- # "debug_toolbar",
+ "debug_toolbar",
"devserver",
)
-MIDDLEWARE_CLASSES = (
- # TODO: re-enable after django 1.4
- # "debug_toolbar.middleware.DebugToolbarMiddleware",
-) + MIDDLEWARE_CLASSES
+JINGO_EXCLUDE_APPS = JINGO_EXCLUDE_APPS + (
+ 'debug_toolbar',
+)
DEBUG_TOOLBAR_CONFIG = {
"INTERCEPT_REDIRECTS": False,
}
DEBUG_TOOLBAR_PANELS = (
- 'debug_toolbar.panels.version.VersionDebugPanel',
- 'debug_toolbar.panels.timer.TimerDebugPanel',
- 'debug_toolbar.panels.settings_vars.SettingsVarsDebugPanel',
- 'debug_toolbar.panels.headers.HeaderDebugPanel',
- 'debug_toolbar.panels.request_vars.RequestVarsDebugPanel',
- 'debug_toolbar.panels.template.TemplateDebugPanel',
- #'cache_panel.CachePanel',
- 'debug_toolbar.panels.sql.SQLDebugPanel',
- 'debug_toolbar.panels.signals.SignalDebugPanel',
- 'debug_toolbar.panels.logger.LoggingPanel',
+ 'debug_toolbar.panels.versions.VersionsPanel',
+ 'debug_toolbar.panels.timer.TimerPanel',
+ 'debug_toolbar.panels.settings.SettingsPanel',
+ 'debug_toolbar.panels.headers.HeadersPanel',
+ 'debug_toolbar.panels.request.RequestPanel',
+ 'debug_toolbar.panels.templates.TemplatesPanel',
+ 'debug_toolbar.panels.cache.CachePanel',
+ 'debug_toolbar.panels.sql.SQLPanel',
+ 'debug_toolbar.panels.signals.SignalsPanel',
+ 'debug_toolbar.panels.logging.LoggingPanel',
+ 'debug_toolbar.panels.redirects.RedirectsPanel',
)
DEVSERVER_MODULES = (
| {"golden_diff": "diff --git a/puppet/files/vagrant/settings_local.py b/puppet/files/vagrant/settings_local.py\n--- a/puppet/files/vagrant/settings_local.py\n+++ b/puppet/files/vagrant/settings_local.py\n@@ -34,31 +34,30 @@\n \n INSTALLED_APPS = INSTALLED_APPS + (\n \"django_extensions\",\n- # TODO: re-enable after django 1.4\n- # \"debug_toolbar\",\n+ \"debug_toolbar\",\n \"devserver\",\n )\n \n-MIDDLEWARE_CLASSES = (\n- # TODO: re-enable after django 1.4\n- # \"debug_toolbar.middleware.DebugToolbarMiddleware\",\n-) + MIDDLEWARE_CLASSES\n+JINGO_EXCLUDE_APPS = JINGO_EXCLUDE_APPS + (\n+ 'debug_toolbar',\n+)\n \n DEBUG_TOOLBAR_CONFIG = {\n \"INTERCEPT_REDIRECTS\": False,\n }\n \n DEBUG_TOOLBAR_PANELS = (\n- 'debug_toolbar.panels.version.VersionDebugPanel',\n- 'debug_toolbar.panels.timer.TimerDebugPanel',\n- 'debug_toolbar.panels.settings_vars.SettingsVarsDebugPanel',\n- 'debug_toolbar.panels.headers.HeaderDebugPanel',\n- 'debug_toolbar.panels.request_vars.RequestVarsDebugPanel',\n- 'debug_toolbar.panels.template.TemplateDebugPanel',\n- #'cache_panel.CachePanel',\n- 'debug_toolbar.panels.sql.SQLDebugPanel',\n- 'debug_toolbar.panels.signals.SignalDebugPanel',\n- 'debug_toolbar.panels.logger.LoggingPanel',\n+ 'debug_toolbar.panels.versions.VersionsPanel',\n+ 'debug_toolbar.panels.timer.TimerPanel',\n+ 'debug_toolbar.panels.settings.SettingsPanel',\n+ 'debug_toolbar.panels.headers.HeadersPanel',\n+ 'debug_toolbar.panels.request.RequestPanel',\n+ 'debug_toolbar.panels.templates.TemplatesPanel',\n+ 'debug_toolbar.panels.cache.CachePanel',\n+ 'debug_toolbar.panels.sql.SQLPanel',\n+ 'debug_toolbar.panels.signals.SignalsPanel',\n+ 'debug_toolbar.panels.logging.LoggingPanel',\n+ 'debug_toolbar.panels.redirects.RedirectsPanel',\n )\n \n DEVSERVER_MODULES = (\n", "issue": "restore django-debug-toolbar\nWe disabled django-debug-toolbar before we upgraded to django 1.4. Now that we're on it we should be able to restore it in `settings_local.py`.\n\nrestore django-debug-toolbar\nWe disabled django-debug-toolbar before we upgraded to django 1.4. 
Now that we're on it we should be able to restore it in `settings_local.py`.\n\n", "before_files": [{"content": "from settings import *\nimport logging\n\nINTERNAL_IPS = ('127.0.0.1', '192.168.10.1',)\n\nDEBUG = True\nDEV = True\nTEMPLATE_DEBUG = DEBUG\nSERVE_MEDIA = DEBUG\n\nSESSION_COOKIE_SECURE = True\n\nDEMO_UPLOADS_ROOT = '/home/vagrant/uploads/demos'\nDEMO_UPLOADS_URL = '/media/uploads/demos/'\n\nPROD_DETAILS_DIR = '/home/vagrant/product_details_json'\nMDC_PAGES_DIR = '/home/vagrant/mdc_pages'\n\nGOOGLE_MAPS_API_KEY = \"ABQIAAAANRj9BHQi5ireVluCwVy0yRSrufPN8BjQWjkoRva24PCQEXS2OhSXu2BEgUH5PmGOmW71r2-tEuOVuQ\"\n\nRECAPTCHA_USE_SSL = True\nRECAPTCHA_PUBLIC_KEY = '6LdX8cISAAAAAA9HRXmzrcRSFsUoIK9u0nWpvGS_'\nRECAPTCHA_PRIVATE_KEY = '6LdX8cISAAAAACkC1kqYmpeSf-1geTmLzrLnq0t6'\n\nBITLY_USERNAME = 'lmorchard'\nBITLY_API_KEY = \"R_2653e6351e31d02988b3da31dac6e2c0\"\n\nEMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'\n#EMAIL_BACKEND = 'django.core.mail.backends.filebased.EmailBackend'\n#EMAIL_FILE_PATH = '/home/vagrant/logs/kuma-email.log'\n\n# Uncomment to enable a real celery queue\nCELERY_ALWAYS_EAGER = False\n\nINSTALLED_APPS = INSTALLED_APPS + (\n \"django_extensions\",\n # TODO: re-enable after django 1.4\n # \"debug_toolbar\",\n \"devserver\",\n)\n\nMIDDLEWARE_CLASSES = (\n # TODO: re-enable after django 1.4\n # \"debug_toolbar.middleware.DebugToolbarMiddleware\",\n) + MIDDLEWARE_CLASSES\n\nDEBUG_TOOLBAR_CONFIG = {\n \"INTERCEPT_REDIRECTS\": False,\n}\n\nDEBUG_TOOLBAR_PANELS = (\n 'debug_toolbar.panels.version.VersionDebugPanel',\n 'debug_toolbar.panels.timer.TimerDebugPanel',\n 'debug_toolbar.panels.settings_vars.SettingsVarsDebugPanel',\n 'debug_toolbar.panels.headers.HeaderDebugPanel',\n 'debug_toolbar.panels.request_vars.RequestVarsDebugPanel',\n 'debug_toolbar.panels.template.TemplateDebugPanel',\n #'cache_panel.CachePanel',\n 'debug_toolbar.panels.sql.SQLDebugPanel',\n 'debug_toolbar.panels.signals.SignalDebugPanel',\n 'debug_toolbar.panels.logger.LoggingPanel',\n)\n\nDEVSERVER_MODULES = (\n # sql modules interfere with saving some KumaScript templates\n #'devserver.modules.sql.SQLRealTimeModule',\n #'devserver.modules.sql.SQLSummaryModule',\n 'devserver.modules.profile.ProfileSummaryModule',\n\n # Modules not enabled by default\n #'devserver.modules.ajax.AjaxDumpModule',\n #'devserver.modules.profile.MemoryUseModule',\n #'devserver.modules.cache.CacheSummaryModule',\n #'devserver.modules.profile.LineProfilerModule',\n)\n\n# The default database should point to the master.\nDATABASES = {\n 'default': {\n 'NAME': 'kuma',\n 'ENGINE': 'django.db.backends.mysql',\n 'HOST': 'localhost',\n 'USER': 'kuma',\n 'PASSWORD': 'kuma',\n 'OPTIONS': {'init_command': 'SET storage_engine=InnoDB'},\n },\n}\n\nMIGRATION_DATABASES = {\n 'wikidb': {\n 'NAME': 'wikidb',\n 'ENGINE': 'django.db.backends.mysql',\n 'HOST': 'localhost',\n 'USER': 'wikiuser',\n 'PASSWORD': '2yeOr7ByBUMBiB4z',\n },\n}\n\nCACHES = {\n 'default': {\n # HACK: We currently have 'default' memcache disabled in production.\n # This reflects that in local dev.\n 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',\n #'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',\n #'LOCATION': [\n # '127.0.0.1:11211',\n #],\n 'TIMEOUT': 3600,\n 'KEY_PREFIX': 'kuma',\n },\n 'secondary': {\n 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',\n 'LOCATION': [\n '127.0.0.1:11211',\n ],\n 'TIMEOUT': 3600,\n 'KEY_PREFIX': 'kuma',\n }\n}\n\n# TODO: Switch this to 'default' when main cache issues are 
resolved\nSECONDARY_CACHE_ALIAS = 'secondary'\n\n# Use IP:PORT pairs separated by semicolons.\nCACHE_BACKEND = 'memcached://localhost:11211?timeout=60'\nCONSTANCE_DATABASE_CACHE_BACKEND = CACHE_BACKEND\n\n# This is used to hash some things in Django.\nSECRET_KEY = 'jenny8675309'\n\nDEBUG_PROPAGATE_EXCEPTIONS = DEBUG\n\nLOG_LEVEL = logging.DEBUG\n\nSITE_URL = 'https://developer-local.allizom.org'\nPROTOCOL = 'https://'\nDOMAIN = 'developer-local.allizom.org'\n\n# See: https://github.com/mozilla/django-browserid/issues/8 (TODO)\nBROWSERID_DISABLE_CERT_CHECK = True\nBROWSERID_CACERT_FILE = None\n\nLOGIN_REDIRECT_URL = '/'\nLOGIN_REDIRECT_URL_FAILURE = '/'\n\nKUMASCRIPT_URL_TEMPLATE = 'http://localhost:9080/docs/{path}'\n\nATTACHMENT_HOST = 'mdn-local.mozillademos.org'\n\nES_DISABLED = False\nES_URLS = ['http://127.0.0.1:9200']\nES_INDEXES = {'default': 'main_index'}\nES_INDEX_PREFIX = 'mdn'\nES_LIVE_INDEX = True\nES_INDEXING_TIMEOUT = 30\n\n# See https://mana.mozilla.org/wiki/display/websites/Developer+Cluster#DeveloperCluster-Sentry\nSENTRY_DSN = ''\n\nif SENTRY_DSN:\n INSTALLED_APPS = INSTALLED_APPS + (\n 'raven.contrib.django.raven_compat',\n )\n", "path": "puppet/files/vagrant/settings_local.py"}]} | 2,388 | 447 |
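The golden diff above re-enables django-debug-toolbar by dropping the pre-1.4 middleware hook, registering the app with jingo's exclude list, and renaming the panel paths to the post-1.x module layout (`panels.versions.VersionsPanel`, `panels.templates.TemplatesPanel`, and so on). A distilled settings fragment mirroring that change is sketched below; the empty base tuples are placeholders so the module imports on its own, whereas the real file gets them from `from settings import *`.

```python
# Distilled settings_local.py fragment based on the golden diff above.
INSTALLED_APPS = ()        # placeholder for the tuple imported from settings
JINGO_EXCLUDE_APPS = ()    # placeholder for the tuple imported from settings

INSTALLED_APPS += ("django_extensions", "debug_toolbar", "devserver")
# debug_toolbar templates should be rendered by Django's engine, not jingo
JINGO_EXCLUDE_APPS += ("debug_toolbar",)

DEBUG_TOOLBAR_CONFIG = {"INTERCEPT_REDIRECTS": False}

DEBUG_TOOLBAR_PANELS = (
    "debug_toolbar.panels.versions.VersionsPanel",
    "debug_toolbar.panels.timer.TimerPanel",
    "debug_toolbar.panels.settings.SettingsPanel",
    "debug_toolbar.panels.headers.HeadersPanel",
    "debug_toolbar.panels.request.RequestPanel",
    "debug_toolbar.panels.templates.TemplatesPanel",
    "debug_toolbar.panels.cache.CachePanel",
    "debug_toolbar.panels.sql.SQLPanel",
    "debug_toolbar.panels.signals.SignalsPanel",
    "debug_toolbar.panels.logging.LoggingPanel",
    "debug_toolbar.panels.redirects.RedirectsPanel",
)
```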
gh_patches_debug_22363 | rasdani/github-patches | git_diff | Mailu__Mailu-2791 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
mailu front fails with KeyError: 'LD_PRELOAD'
<!--
Thank you for opening an issue with Mailu. Please understand that issues are meant for bugs and enhancement-requests.
For **user-support questions**, reach out to us on [matrix](https://matrix.to/#/#mailu:tedomum.net).
To be able to help you best, we need some more information.
Before you open your issue
- Check if no issue or pull-request for this already exists.
- Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)
- You understand `Mailu` is made by volunteers in their **free time** — be concise, civil and accept that delays can occur.
- The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.
Please put your text outside of the comment blocks to be visible. You can use the button "Preview" above to check.
-->
## Environment & Version
### Environment
- [X] docker compose
- [ ] kubernetes
- [ ] docker swarm
### Version
- Version: `2.0`
<!--
To find your version, get the image name of a mailu container and read the version from the tag (example for version 1.7).
$> docker ps -a | grep mailu
140b09d4b09c mailu/roundcube:1.7 "docker-php-entrypoi…" 2 weeks ago Up 2 days (healthy) 80/tcp
$> grep MAILU_VERSION docker-compose.yml mailu.env
-->
## Description
<!--
Further explain the bug in a few words. It should be clear what the unexpected behaviour is. Share it in an easy-to-understand language.
-->
Pulled the image today to create a new server. The nginx fails with the following error.
## Replication Steps
<!--
Steps for replicating your issue
-->
* docker-compose up -d
* docker shows unhealthy front container
* docker logs mailu_front_1
## Observed behaviour
<!--
Explain or paste the result you received.
-->
## Expected behaviour
<!--
Explain what results you expected - be as specific as possible.
Just saying "it doesn’t work as expected" is not useful. It's also helpful to describe what you actually experienced.
-->
## Logs
<!--
Often it is very useful to include log fragments of the involved component.
You can get the logs via `docker logs <container name> --tail 1000`.
For example for the admin container: `docker logs mailu_admin_1 --tail 1000`
or using docker compose `docker compose -f /mailu/docker-compose.yml logs --tail 1000 admin`
If you can find the relevant section, please share only the parts that seem relevant. If you have any logs, please enclose them in code tags, like so:
-->
```
# docker logs mailu_front_1
Traceback (most recent call last):
File "/config.py", line 8, in <module>
args = system.set_env()
File "/app/venv/lib/python3.10/site-packages/socrate/system.py", line 80, in set_env
del os.environ['LD_PRELOAD']
File "/usr/lib/python3.10/os.py", line 696, in __delitem__
raise KeyError(key) from None
KeyError: 'LD_PRELOAD'
```
</issue>
<code>
[start of core/base/libs/socrate/socrate/system.py]
1 import hmac
2 import logging as log
3 import os
4 import sys
5 import re
6 from pwd import getpwnam
7 import socket
8 import tenacity
9
10 @tenacity.retry(stop=tenacity.stop_after_attempt(100),
11 wait=tenacity.wait_random(min=2, max=5))
12 def resolve_hostname(hostname):
13 """ This function uses system DNS to resolve a hostname.
14 It is capable of retrying in case the host is not immediately available
15 """
16 try:
17 return sorted(socket.getaddrinfo(hostname, None, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE), key=lambda s:s[0])[0][4][0]
18 except Exception as e:
19 log.warn("Unable to lookup '%s': %s",hostname,e)
20 raise e
21
22 def _coerce_value(value):
23 if isinstance(value, str) and value.lower() in ('true','yes'):
24 return True
25 elif isinstance(value, str) and value.lower() in ('false', 'no'):
26 return False
27 return value
28
29 class LogFilter(object):
30 def __init__(self, stream, re_patterns, log_file):
31 self.stream = stream
32 if isinstance(re_patterns, list):
33 self.pattern = re.compile('|'.join([f'(?:{pattern})' for pattern in re_patterns]))
34 elif isinstance(re_patterns, str):
35 self.pattern = re.compile(re_patterns)
36 else:
37 self.pattern = re_patterns
38 self.found = False
39 self.log_file = log_file
40
41 def __getattr__(self, attr_name):
42 return getattr(self.stream, attr_name)
43
44 def write(self, data):
45 if data == '\n' and self.found:
46 self.found = False
47 else:
48 if not self.pattern.search(data):
49 self.stream.write(data)
50 self.stream.flush()
51 if self.log_file:
52 try:
53 with open(self.log_file, 'a', encoding='utf-8') as l:
54 l.write(data)
55 except:
56 pass
57 else:
58 # caught bad pattern
59 self.found = True
60
61 def flush(self):
62 self.stream.flush()
63
64 def _is_compatible_with_hardened_malloc():
65 with open('/proc/cpuinfo', 'r') as f:
66 lines = f.readlines()
67 for line in lines:
68 # See #2764, we need vmovdqu
69 if line.startswith('flags') and ' avx ' not in line:
70 return False
71 return True
72
73 def set_env(required_secrets=[], log_filters=[], log_file=None):
74 if log_filters:
75 sys.stdout = LogFilter(sys.stdout, log_filters, log_file)
76 sys.stderr = LogFilter(sys.stderr, log_filters, log_file)
77 log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", 'WARNING'))
78
79 if not _is_compatible_with_hardened_malloc():
80 del os.environ['LD_PRELOAD']
81
82 """ This will set all the environment variables and retains only the secrets we need """
83 if 'SECRET_KEY_FILE' in os.environ:
84 try:
85 secret_key = open(os.environ.get("SECRET_KEY_FILE"), "r").read().strip()
86 except Exception as exc:
87 log.error(f"Can't read SECRET_KEY from file: {exc}")
88 raise exc
89 else:
90 secret_key = os.environ.get('SECRET_KEY')
91 clean_env()
92 # derive the keys we need
93 for secret in required_secrets:
94 os.environ[f'{secret}_KEY'] = hmac.new(bytearray(secret_key, 'utf-8'), bytearray(secret, 'utf-8'), 'sha256').hexdigest()
95
96 return {
97 key: _coerce_value(os.environ.get(key, value))
98 for key, value in os.environ.items()
99 }
100
101 def clean_env():
102 """ remove all secret keys """
103 [os.environ.pop(key, None) for key in os.environ.keys() if key.endswith("_KEY")]
104
105 def drop_privs_to(username='mailu'):
106 pwnam = getpwnam(username)
107 os.setgroups([])
108 os.setgid(pwnam.pw_gid)
109 os.setuid(pwnam.pw_uid)
110 os.environ['HOME'] = pwnam.pw_dir
111
[end of core/base/libs/socrate/socrate/system.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/base/libs/socrate/socrate/system.py b/core/base/libs/socrate/socrate/system.py
--- a/core/base/libs/socrate/socrate/system.py
+++ b/core/base/libs/socrate/socrate/system.py
@@ -68,6 +68,9 @@
# See #2764, we need vmovdqu
if line.startswith('flags') and ' avx ' not in line:
return False
+ # See #2541
+ if line.startswith('Features') and ' lrcpc ' not in line:
+ return False
return True
def set_env(required_secrets=[], log_filters=[], log_file=None):
@@ -76,7 +79,8 @@
sys.stderr = LogFilter(sys.stderr, log_filters, log_file)
log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", 'WARNING'))
- if not _is_compatible_with_hardened_malloc():
+ if 'LD_PRELOAD' in os.environ and not _is_compatible_with_hardened_malloc():
+ log.warning('Disabling hardened-malloc on this CPU')
del os.environ['LD_PRELOAD']
""" This will set all the environment variables and retains only the secrets we need """
| {"golden_diff": "diff --git a/core/base/libs/socrate/socrate/system.py b/core/base/libs/socrate/socrate/system.py\n--- a/core/base/libs/socrate/socrate/system.py\n+++ b/core/base/libs/socrate/socrate/system.py\n@@ -68,6 +68,9 @@\n # See #2764, we need vmovdqu\n if line.startswith('flags') and ' avx ' not in line:\n return False\n+ # See #2541\n+ if line.startswith('Features') and ' lrcpc ' not in line:\n+ return False\n return True\n \n def set_env(required_secrets=[], log_filters=[], log_file=None):\n@@ -76,7 +79,8 @@\n sys.stderr = LogFilter(sys.stderr, log_filters, log_file)\n log.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", 'WARNING'))\n \n- if not _is_compatible_with_hardened_malloc():\n+ if 'LD_PRELOAD' in os.environ and not _is_compatible_with_hardened_malloc():\n+ log.warning('Disabling hardened-malloc on this CPU')\n del os.environ['LD_PRELOAD']\n \n \"\"\" This will set all the environment variables and retains only the secrets we need \"\"\"\n", "issue": "mailu front fails with KeyError: 'LD_PRELOAD'\n<!--\r\n\r\nThank you for opening an issue with Mailu. Please understand that issues are meant for bugs and enhancement-requests.\r\nFor **user-support questions**, reach out to us on [matrix](https://matrix.to/#/#mailu:tedomum.net).\r\n\r\nTo be able to help you best, we need some more information.\r\n\r\nBefore you open your issue\r\n- Check if no issue or pull-request for this already exists.\r\n- Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)\r\n- You understand `Mailu` is made by volunteers in their **free time** \u2014 be concise, civil and accept that delays can occur.\r\n- The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.\r\n\r\nPlease put your text outside of the comment blocks to be visible. You can use the button \"Preview\" above to check.\r\n\r\n-->\r\n\r\n## Environment & Version\r\n\r\n### Environment\r\n\r\n- [X] docker compose\r\n- [ ] kubernetes\r\n- [ ] docker swarm\r\n\r\n### Version\r\n\r\n- Version: `2.0`\r\n\r\n<!--\r\nTo find your version, get the image name of a mailu container and read the version from the tag (example for version 1.7).\r\n\r\n$> docker ps -a | grep mailu\r\n140b09d4b09c mailu/roundcube:1.7 \"docker-php-entrypoi\u2026\" 2 weeks ago Up 2 days (healthy) 80/tcp\r\n$> grep MAILU_VERSION docker-compose.yml mailu.env\r\n-->\r\n\r\n## Description\r\n<!--\r\nFurther explain the bug in a few words. It should be clear what the unexpected behaviour is. Share it in an easy-to-understand language.\r\n-->\r\nPulled the image today to create a new server. The nginx fails with the following error.\r\n\r\n## Replication Steps\r\n<!--\r\nSteps for replicating your issue\r\n-->\r\n* docker-compose up -d\r\n* docker shows unhealthy front container\r\n* docker logs mailu_front_1\r\n\r\n## Observed behaviour\r\n<!--\r\nExplain or paste the result you received.\r\n-->\r\n\r\n## Expected behaviour\r\n<!--\r\nExplain what results you expected - be as specific as possible.\r\nJust saying \"it doesn\u2019t work as expected\" is not useful. 
It's also helpful to describe what you actually experienced.\r\n-->\r\n\r\n## Logs\r\n<!--\r\nOften it is very useful to include log fragments of the involved component.\r\nYou can get the logs via `docker logs <container name> --tail 1000`.\r\nFor example for the admin container: `docker logs mailu_admin_1 --tail 1000`\r\nor using docker compose `docker compose -f /mailu/docker-compose.yml logs --tail 1000 admin`\r\n\r\nIf you can find the relevant section, please share only the parts that seem relevant. If you have any logs, please enclose them in code tags, like so:\r\n\r\n-->\r\n\r\n\r\n```\r\n# docker logs mailu_front_1\r\nTraceback (most recent call last):\r\n File \"/config.py\", line 8, in <module>\r\n args = system.set_env()\r\n File \"/app/venv/lib/python3.10/site-packages/socrate/system.py\", line 80, in set_env\r\n del os.environ['LD_PRELOAD']\r\n File \"/usr/lib/python3.10/os.py\", line 696, in __delitem__\r\n raise KeyError(key) from None\r\nKeyError: 'LD_PRELOAD'\r\n```\r\n\n", "before_files": [{"content": "import hmac\nimport logging as log\nimport os\nimport sys\nimport re\nfrom pwd import getpwnam\nimport socket\nimport tenacity\n\[email protected](stop=tenacity.stop_after_attempt(100),\n wait=tenacity.wait_random(min=2, max=5))\ndef resolve_hostname(hostname):\n \"\"\" This function uses system DNS to resolve a hostname.\n It is capable of retrying in case the host is not immediately available\n \"\"\"\n try:\n return sorted(socket.getaddrinfo(hostname, None, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE), key=lambda s:s[0])[0][4][0]\n except Exception as e:\n log.warn(\"Unable to lookup '%s': %s\",hostname,e)\n raise e\n\ndef _coerce_value(value):\n if isinstance(value, str) and value.lower() in ('true','yes'):\n return True\n elif isinstance(value, str) and value.lower() in ('false', 'no'):\n return False\n return value\n\nclass LogFilter(object):\n def __init__(self, stream, re_patterns, log_file):\n self.stream = stream\n if isinstance(re_patterns, list):\n self.pattern = re.compile('|'.join([f'(?:{pattern})' for pattern in re_patterns]))\n elif isinstance(re_patterns, str):\n self.pattern = re.compile(re_patterns)\n else:\n self.pattern = re_patterns\n self.found = False\n self.log_file = log_file\n\n def __getattr__(self, attr_name):\n return getattr(self.stream, attr_name)\n\n def write(self, data):\n if data == '\\n' and self.found:\n self.found = False\n else:\n if not self.pattern.search(data):\n self.stream.write(data)\n self.stream.flush()\n if self.log_file:\n try:\n with open(self.log_file, 'a', encoding='utf-8') as l:\n l.write(data)\n except:\n pass\n else:\n # caught bad pattern\n self.found = True\n\n def flush(self):\n self.stream.flush()\n\ndef _is_compatible_with_hardened_malloc():\n with open('/proc/cpuinfo', 'r') as f:\n lines = f.readlines()\n for line in lines:\n # See #2764, we need vmovdqu\n if line.startswith('flags') and ' avx ' not in line:\n return False\n return True\n\ndef set_env(required_secrets=[], log_filters=[], log_file=None):\n if log_filters:\n sys.stdout = LogFilter(sys.stdout, log_filters, log_file)\n sys.stderr = LogFilter(sys.stderr, log_filters, log_file)\n log.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", 'WARNING'))\n\n if not _is_compatible_with_hardened_malloc():\n del os.environ['LD_PRELOAD']\n\n \"\"\" This will set all the environment variables and retains only the secrets we need \"\"\"\n if 'SECRET_KEY_FILE' in os.environ:\n try:\n secret_key = 
open(os.environ.get(\"SECRET_KEY_FILE\"), \"r\").read().strip()\n except Exception as exc:\n log.error(f\"Can't read SECRET_KEY from file: {exc}\")\n raise exc\n else:\n secret_key = os.environ.get('SECRET_KEY')\n clean_env()\n # derive the keys we need\n for secret in required_secrets:\n os.environ[f'{secret}_KEY'] = hmac.new(bytearray(secret_key, 'utf-8'), bytearray(secret, 'utf-8'), 'sha256').hexdigest()\n\n return {\n key: _coerce_value(os.environ.get(key, value))\n for key, value in os.environ.items()\n }\n\ndef clean_env():\n \"\"\" remove all secret keys \"\"\"\n [os.environ.pop(key, None) for key in os.environ.keys() if key.endswith(\"_KEY\")]\n\ndef drop_privs_to(username='mailu'):\n pwnam = getpwnam(username)\n os.setgroups([])\n os.setgid(pwnam.pw_gid)\n os.setuid(pwnam.pw_uid)\n os.environ['HOME'] = pwnam.pw_dir\n", "path": "core/base/libs/socrate/socrate/system.py"}]} | 2,451 | 277 |
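The crash reported in the row above comes from deleting `LD_PRELOAD` unconditionally, and the golden diff fixes it by guarding the deletion with a membership check (plus a second CPU-feature test). A minimal stand-alone illustration of that guard pattern is sketched below; the compatibility check is stubbed to always return `False` purely so the snippet runs anywhere, which is an assumption, not Mailu's real `/proc/cpuinfo` logic.

```python
# Stand-alone sketch of the guard added in the golden diff above: only unset
# LD_PRELOAD when it is actually present, so hosts that never preload
# hardened_malloc no longer raise KeyError.
import logging
import os

log = logging.getLogger(__name__)


def _is_compatible_with_hardened_malloc() -> bool:
    return False  # stub for the real /proc/cpuinfo flag checks


def maybe_disable_hardened_malloc() -> None:
    if "LD_PRELOAD" in os.environ and not _is_compatible_with_hardened_malloc():
        log.warning("Disabling hardened-malloc on this CPU")
        del os.environ["LD_PRELOAD"]


if __name__ == "__main__":
    maybe_disable_hardened_malloc()  # safe whether or not LD_PRELOAD is set
```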
gh_patches_debug_32252 | rasdani/github-patches | git_diff | fal-ai__dbt-fal-569 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
fal run --scripts selector with global scripts not working
<!-- *** Make sure you have searched for an existing bug report for this issue *** -->
**Describe the bug**
Fal CLI `run` command using `--scripts` flag selector does not execute any script when the script passed is under the `before` key in the `schema.yml` configuration.
**Your environment**
- OS: macOS Monterrey 12.6
- Paste the following commands output:
```
fal --0.6.0
dbt --1.2.1
```
- Adapter being used: bigquery
**How to reproduce**
Add scripts to run under `--before` key in the `schema.yml`:
```
version: 2
fal:
scripts:
before:
- fal_scripts/delete_bq_datasets.py
- fal_scripts/download_prod_artifacts.py
```
File structure:
```
dbt_project
├── analysis
├── dbt_packages
├── dbt_project.yml
├── fal_scripts
│ ├── delete_bq_datasets.py
│ └── download_prod_artifacts.py
├── logs
├── macros
├── models
│ ├── exposures
│ └── schema.yml
├── packages.yml
├── seeds
├── snapshots
├── target
└── tests
```
Then run:
```sh
fal run --before --scripts download_prod_artifacts.py
```
Or:
```sh
fal run --scripts download_prod_artifacts.py
```
**Expected behavior**
Run only the script passed to the `--scripts` flag: `download_prod_artifacts.py`.
**Actual behavior**
Does nothing for neither case.
```sh
fal run --scripts download_prod_artifacts.py
```
```
13:25:19 Found 370 models, 544 tests, 0 snapshots, 18 analyses, 583 macros, 0 operations, 42 seed files, 129 sources, 2 exposures, 0 metrics
13:25:19 Could not read dbt sources artifact
```
```sh
fal run --before --scripts download_prod_artifacts.py
```
```
13:27:34 Found 370 models, 544 tests, 0 snapshots, 18 analyses, 583 macros, 0 operations, 42 seed files, 129 sources, 2 exposures, 0 metrics
13:27:34 Could not read dbt sources artifact
```
**Additional context**
The issue might be [this](https://github.com/fal-ai/fal/blob/771ad3dc8946dbda57e91b188719f8a20c6eb353/src/fal/cli/fal_runner.py#L47) section of code (related with `scripts` and `global_scripts` variables).
</issue>
<code>
[start of src/fal/cli/fal_runner.py]
1 import argparse
2 from pathlib import Path
3 from typing import Dict, List
4
5 from dbt.flags import PROFILES_DIR
6 from fal.planner.executor import parallel_executor
7 from fal.planner.schedule import Scheduler
8 from fal.planner.tasks import FalLocalHookTask, Status, TaskGroup
9
10 from fal.fal_script import FalScript
11 from faldbt.project import DbtModel, FalDbt, FalGeneralException
12
13
14 def create_fal_dbt(
15 args: argparse.Namespace, generated_models: Dict[str, Path] = {}
16 ) -> FalDbt:
17 profiles_dir = PROFILES_DIR
18 if args.profiles_dir is not None:
19 profiles_dir = args.profiles_dir
20
21 real_state = None
22 if hasattr(args, "state") and args.state is not None:
23 real_state = args.state
24
25 return FalDbt(
26 args.project_dir,
27 profiles_dir,
28 args.select,
29 args.exclude,
30 args.selector,
31 args.keyword,
32 args.threads,
33 real_state,
34 args.target,
35 getattr(args, "vars", "{}"),
36 generated_models,
37 )
38
39
40 def fal_run(args: argparse.Namespace):
41 "Runs the fal run command in a subprocess"
42
43 selector_flags = args.select or args.exclude or args.selector
44 if args.all and selector_flags:
45 raise FalGeneralException(
46 "Cannot pass --all flag alongside selection flags (--select/--models, --exclude, --selector)"
47 )
48
49 faldbt = create_fal_dbt(args)
50 models = _get_filtered_models(faldbt, args.all, selector_flags, args.before)
51
52 scripts = _select_scripts(args, models, faldbt)
53
54 global_scripts = _get_global_scripts(faldbt, args.before)
55
56 if args.before:
57 if not _scripts_flag(args):
58 # run globals when no --script is passed
59 _run_scripts(args, global_scripts, faldbt)
60
61 pre_hook_scripts = _get_hooks_for_model(models, faldbt, "pre-hook")
62 _run_scripts(args, pre_hook_scripts, faldbt)
63
64 _run_scripts(args, scripts, faldbt)
65
66 else:
67 _run_scripts(args, scripts, faldbt)
68
69 post_hook_scripts = _get_hooks_for_model(models, faldbt, "post-hook")
70 _run_scripts(args, post_hook_scripts, faldbt)
71
72 if not _scripts_flag(args):
73 # run globals when no --script is passed
74 _run_scripts(args, global_scripts, faldbt)
75
76
77 def _run_scripts(args: argparse.Namespace, scripts: List[FalScript], faldbt: FalDbt):
78 scheduler = Scheduler(
79 [TaskGroup(FalLocalHookTask.from_fal_script(script)) for script in scripts]
80 )
81 parallel_executor(args, faldbt, scheduler)
82
83 failed_tasks: List[FalLocalHookTask] = [
84 group.task for group in scheduler.filter_groups(Status.FAILURE)
85 ] # type: ignore
86 failed_script_ids = [task.build_fal_script(faldbt).id for task in failed_tasks]
87 if failed_script_ids:
88 raise RuntimeError(f"Error in scripts {str.join(', ',failed_script_ids)}")
89
90
91 def _scripts_flag(args: argparse.Namespace) -> bool:
92 return bool(args.scripts)
93
94
95 def _get_hooks_for_model(
96 models: List[DbtModel], faldbt: FalDbt, hook_type: str
97 ) -> List[FalScript]:
98 return [
99 FalScript.from_hook(faldbt, model, hook)
100 for model in models
101 for hook in model._get_hooks(hook_type=hook_type)
102 ]
103
104
105 def _select_scripts(
106 args: argparse.Namespace, models: List[DbtModel], faldbt: FalDbt
107 ) -> List[FalScript]:
108 scripts = []
109 scripts_flag = _scripts_flag(args)
110
111 for model in models:
112 model_scripts = model.get_scripts(args.keyword, before=bool(args.before))
113 for path in model_scripts:
114 if not scripts_flag:
115 # run all scripts when no --script is passed
116 scripts.append(FalScript(faldbt, model, path))
117 elif path in args.scripts:
118 # if --script selector is there only run selected scripts
119 scripts.append(FalScript(faldbt, model, path))
120
121 return scripts
122
123
124 def _get_global_scripts(faldbt: FalDbt, is_before: bool):
125 return [
126 FalScript(faldbt, None, path)
127 for path in faldbt._global_script_paths["before" if is_before else "after"]
128 ]
129
130
131 def _get_models_with_keyword(faldbt: FalDbt) -> List[DbtModel]:
132 return list(
133 filter(lambda model: faldbt.keyword in model.meta, faldbt.list_models())
134 )
135
136
137 def _get_filtered_models(faldbt: FalDbt, all, selected, before) -> List[DbtModel]:
138 selected_ids = _models_ids(faldbt._compile_task._flattened_nodes)
139 filtered_models: List[DbtModel] = []
140
141 if (
142 not all
143 and not selected
144 and not before
145 and faldbt._run_results.nativeRunResult is None
146 ):
147 from faldbt.parse import FalParseError
148
149 raise FalParseError(
150 "Cannot define models to run without selection flags or dbt run_results artifact or --before flag"
151 )
152
153 models = _get_models_with_keyword(faldbt)
154
155 for node in models:
156 if selected:
157 if node.unique_id in selected_ids:
158 filtered_models.append(node)
159 elif before:
160 if node.get_scripts(faldbt.keyword, before=before) != []:
161 filtered_models.append(node)
162 elif all:
163 filtered_models.append(node)
164 elif node.status != "skipped":
165 filtered_models.append(node)
166
167 return filtered_models
168
169
170 def _models_ids(models):
171 return list(map(lambda r: r.unique_id, models))
172
[end of src/fal/cli/fal_runner.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/fal/cli/fal_runner.py b/src/fal/cli/fal_runner.py
--- a/src/fal/cli/fal_runner.py
+++ b/src/fal/cli/fal_runner.py
@@ -50,15 +50,15 @@
models = _get_filtered_models(faldbt, args.all, selector_flags, args.before)
scripts = _select_scripts(args, models, faldbt)
-
- global_scripts = _get_global_scripts(faldbt, args.before)
+ global_scripts = _get_global_scripts(faldbt, args)
if args.before:
- if not _scripts_flag(args):
- # run globals when no --script is passed
+ if not _scripts_flag(args) or not selector_flags:
+ # run globals when no --script is passed or no selector is passed
_run_scripts(args, global_scripts, faldbt)
pre_hook_scripts = _get_hooks_for_model(models, faldbt, "pre-hook")
+
_run_scripts(args, pre_hook_scripts, faldbt)
_run_scripts(args, scripts, faldbt)
@@ -69,7 +69,7 @@
post_hook_scripts = _get_hooks_for_model(models, faldbt, "post-hook")
_run_scripts(args, post_hook_scripts, faldbt)
- if not _scripts_flag(args):
+ if not _scripts_flag(args) or not selector_flags:
# run globals when no --script is passed
_run_scripts(args, global_scripts, faldbt)
@@ -121,10 +121,12 @@
return scripts
-def _get_global_scripts(faldbt: FalDbt, is_before: bool):
+def _get_global_scripts(faldbt: FalDbt, args: argparse.Namespace):
+ scripts_flag = _scripts_flag(args)
return [
FalScript(faldbt, None, path)
- for path in faldbt._global_script_paths["before" if is_before else "after"]
+ for path in faldbt._global_script_paths["before" if args.before else "after"]
+ if not scripts_flag or path in args.scripts
]
| {"golden_diff": "diff --git a/src/fal/cli/fal_runner.py b/src/fal/cli/fal_runner.py\n--- a/src/fal/cli/fal_runner.py\n+++ b/src/fal/cli/fal_runner.py\n@@ -50,15 +50,15 @@\n models = _get_filtered_models(faldbt, args.all, selector_flags, args.before)\n \n scripts = _select_scripts(args, models, faldbt)\n-\n- global_scripts = _get_global_scripts(faldbt, args.before)\n+ global_scripts = _get_global_scripts(faldbt, args)\n \n if args.before:\n- if not _scripts_flag(args):\n- # run globals when no --script is passed\n+ if not _scripts_flag(args) or not selector_flags:\n+ # run globals when no --script is passed or no selector is passed\n _run_scripts(args, global_scripts, faldbt)\n \n pre_hook_scripts = _get_hooks_for_model(models, faldbt, \"pre-hook\")\n+\n _run_scripts(args, pre_hook_scripts, faldbt)\n \n _run_scripts(args, scripts, faldbt)\n@@ -69,7 +69,7 @@\n post_hook_scripts = _get_hooks_for_model(models, faldbt, \"post-hook\")\n _run_scripts(args, post_hook_scripts, faldbt)\n \n- if not _scripts_flag(args):\n+ if not _scripts_flag(args) or not selector_flags:\n # run globals when no --script is passed\n _run_scripts(args, global_scripts, faldbt)\n \n@@ -121,10 +121,12 @@\n return scripts\n \n \n-def _get_global_scripts(faldbt: FalDbt, is_before: bool):\n+def _get_global_scripts(faldbt: FalDbt, args: argparse.Namespace):\n+ scripts_flag = _scripts_flag(args)\n return [\n FalScript(faldbt, None, path)\n- for path in faldbt._global_script_paths[\"before\" if is_before else \"after\"]\n+ for path in faldbt._global_script_paths[\"before\" if args.before else \"after\"]\n+ if not scripts_flag or path in args.scripts\n ]\n", "issue": "fal run --scripts selector with global scripts not working\n<!-- *** Make sure you have searched for an existing bug report for this issue *** -->\r\n\r\n**Describe the bug**\r\nFal CLI `run` command using `--scripts` flag selector does not execute any script when the script passed is under the `before` key in the `schema.yml` configuration.\r\n\r\n**Your environment**\r\n- OS: macOS Monterrey 12.6\r\n- Paste the following commands output:\r\n```\r\nfal --0.6.0\r\ndbt --1.2.1\r\n```\r\n- Adapter being used: bigquery\r\n\r\n**How to reproduce**\r\nAdd scripts to run under `--before` key in the `schema.yml`:\r\n```\r\nversion: 2\r\n\r\nfal:\r\n scripts:\r\n before:\r\n - fal_scripts/delete_bq_datasets.py\r\n - fal_scripts/download_prod_artifacts.py\r\n```\r\nFile structure:\r\n```\r\ndbt_project\r\n\u251c\u2500\u2500 analysis\r\n\u251c\u2500\u2500 dbt_packages\r\n\u251c\u2500\u2500 dbt_project.yml\r\n\u251c\u2500\u2500 fal_scripts\r\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 delete_bq_datasets.py\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 download_prod_artifacts.py\r\n\u251c\u2500\u2500 logs\r\n\u251c\u2500\u2500 macros\r\n\u251c\u2500\u2500 models\r\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 exposures\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 schema.yml\r\n\u251c\u2500\u2500 packages.yml\r\n\u251c\u2500\u2500 seeds\r\n\u251c\u2500\u2500 snapshots\r\n\u251c\u2500\u2500 target\r\n\u2514\u2500\u2500 tests\r\n\r\n```\r\nThen run:\r\n```sh\r\nfal run --before --scripts download_prod_artifacts.py\r\n```\r\nOr:\r\n```sh\r\nfal run --scripts download_prod_artifacts.py\r\n```\r\n\r\n**Expected behavior**\r\nRun only the script passed to the `--scripts` flag: `download_prod_artifacts.py`. 
\r\n\r\n**Actual behavior**\r\nDoes nothing for neither case.\r\n\r\n```sh\r\nfal run --scripts download_prod_artifacts.py\r\n```\r\n```\r\n13:25:19 Found 370 models, 544 tests, 0 snapshots, 18 analyses, 583 macros, 0 operations, 42 seed files, 129 sources, 2 exposures, 0 metrics\r\n13:25:19 Could not read dbt sources artifact\r\n```\r\n```sh\r\nfal run --before --scripts download_prod_artifacts.py\r\n```\r\n```\r\n13:27:34 Found 370 models, 544 tests, 0 snapshots, 18 analyses, 583 macros, 0 operations, 42 seed files, 129 sources, 2 exposures, 0 metrics\r\n13:27:34 Could not read dbt sources artifact\r\n```\r\n**Additional context**\r\nThe issue might be [this](https://github.com/fal-ai/fal/blob/771ad3dc8946dbda57e91b188719f8a20c6eb353/src/fal/cli/fal_runner.py#L47) section of code (related with `scripts` and `global_scripts` variables).\r\n\n", "before_files": [{"content": "import argparse\nfrom pathlib import Path\nfrom typing import Dict, List\n\nfrom dbt.flags import PROFILES_DIR\nfrom fal.planner.executor import parallel_executor\nfrom fal.planner.schedule import Scheduler\nfrom fal.planner.tasks import FalLocalHookTask, Status, TaskGroup\n\nfrom fal.fal_script import FalScript\nfrom faldbt.project import DbtModel, FalDbt, FalGeneralException\n\n\ndef create_fal_dbt(\n args: argparse.Namespace, generated_models: Dict[str, Path] = {}\n) -> FalDbt:\n profiles_dir = PROFILES_DIR\n if args.profiles_dir is not None:\n profiles_dir = args.profiles_dir\n\n real_state = None\n if hasattr(args, \"state\") and args.state is not None:\n real_state = args.state\n\n return FalDbt(\n args.project_dir,\n profiles_dir,\n args.select,\n args.exclude,\n args.selector,\n args.keyword,\n args.threads,\n real_state,\n args.target,\n getattr(args, \"vars\", \"{}\"),\n generated_models,\n )\n\n\ndef fal_run(args: argparse.Namespace):\n \"Runs the fal run command in a subprocess\"\n\n selector_flags = args.select or args.exclude or args.selector\n if args.all and selector_flags:\n raise FalGeneralException(\n \"Cannot pass --all flag alongside selection flags (--select/--models, --exclude, --selector)\"\n )\n\n faldbt = create_fal_dbt(args)\n models = _get_filtered_models(faldbt, args.all, selector_flags, args.before)\n\n scripts = _select_scripts(args, models, faldbt)\n\n global_scripts = _get_global_scripts(faldbt, args.before)\n\n if args.before:\n if not _scripts_flag(args):\n # run globals when no --script is passed\n _run_scripts(args, global_scripts, faldbt)\n\n pre_hook_scripts = _get_hooks_for_model(models, faldbt, \"pre-hook\")\n _run_scripts(args, pre_hook_scripts, faldbt)\n\n _run_scripts(args, scripts, faldbt)\n\n else:\n _run_scripts(args, scripts, faldbt)\n\n post_hook_scripts = _get_hooks_for_model(models, faldbt, \"post-hook\")\n _run_scripts(args, post_hook_scripts, faldbt)\n\n if not _scripts_flag(args):\n # run globals when no --script is passed\n _run_scripts(args, global_scripts, faldbt)\n\n\ndef _run_scripts(args: argparse.Namespace, scripts: List[FalScript], faldbt: FalDbt):\n scheduler = Scheduler(\n [TaskGroup(FalLocalHookTask.from_fal_script(script)) for script in scripts]\n )\n parallel_executor(args, faldbt, scheduler)\n\n failed_tasks: List[FalLocalHookTask] = [\n group.task for group in scheduler.filter_groups(Status.FAILURE)\n ] # type: ignore\n failed_script_ids = [task.build_fal_script(faldbt).id for task in failed_tasks]\n if failed_script_ids:\n raise RuntimeError(f\"Error in scripts {str.join(', ',failed_script_ids)}\")\n\n\ndef _scripts_flag(args: argparse.Namespace) -> 
bool:\n return bool(args.scripts)\n\n\ndef _get_hooks_for_model(\n models: List[DbtModel], faldbt: FalDbt, hook_type: str\n) -> List[FalScript]:\n return [\n FalScript.from_hook(faldbt, model, hook)\n for model in models\n for hook in model._get_hooks(hook_type=hook_type)\n ]\n\n\ndef _select_scripts(\n args: argparse.Namespace, models: List[DbtModel], faldbt: FalDbt\n) -> List[FalScript]:\n scripts = []\n scripts_flag = _scripts_flag(args)\n\n for model in models:\n model_scripts = model.get_scripts(args.keyword, before=bool(args.before))\n for path in model_scripts:\n if not scripts_flag:\n # run all scripts when no --script is passed\n scripts.append(FalScript(faldbt, model, path))\n elif path in args.scripts:\n # if --script selector is there only run selected scripts\n scripts.append(FalScript(faldbt, model, path))\n\n return scripts\n\n\ndef _get_global_scripts(faldbt: FalDbt, is_before: bool):\n return [\n FalScript(faldbt, None, path)\n for path in faldbt._global_script_paths[\"before\" if is_before else \"after\"]\n ]\n\n\ndef _get_models_with_keyword(faldbt: FalDbt) -> List[DbtModel]:\n return list(\n filter(lambda model: faldbt.keyword in model.meta, faldbt.list_models())\n )\n\n\ndef _get_filtered_models(faldbt: FalDbt, all, selected, before) -> List[DbtModel]:\n selected_ids = _models_ids(faldbt._compile_task._flattened_nodes)\n filtered_models: List[DbtModel] = []\n\n if (\n not all\n and not selected\n and not before\n and faldbt._run_results.nativeRunResult is None\n ):\n from faldbt.parse import FalParseError\n\n raise FalParseError(\n \"Cannot define models to run without selection flags or dbt run_results artifact or --before flag\"\n )\n\n models = _get_models_with_keyword(faldbt)\n\n for node in models:\n if selected:\n if node.unique_id in selected_ids:\n filtered_models.append(node)\n elif before:\n if node.get_scripts(faldbt.keyword, before=before) != []:\n filtered_models.append(node)\n elif all:\n filtered_models.append(node)\n elif node.status != \"skipped\":\n filtered_models.append(node)\n\n return filtered_models\n\n\ndef _models_ids(models):\n return list(map(lambda r: r.unique_id, models))\n", "path": "src/fal/cli/fal_runner.py"}]} | 2,887 | 487 |
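The golden diff above makes global before/after scripts honor the `--scripts` selector by passing the whole argparse namespace into `_get_global_scripts` and filtering with `if not scripts_flag or path in args.scripts`, the same rule `_select_scripts` already applies to model scripts. The selection rule in isolation is sketched below; the function name and the example paths are illustrative, not fal's actual API.

```python
# Stand-alone sketch of the selection rule introduced in the golden diff above:
# a global script runs when no --scripts filter was given, or when its path is
# among the selected scripts.
from typing import List, Optional


def select_global_scripts(global_paths: List[str],
                          scripts_flag: Optional[List[str]]) -> List[str]:
    return [path for path in global_paths if not scripts_flag or path in scripts_flag]


if __name__ == "__main__":
    paths = ["fal_scripts/delete_bq_datasets.py", "fal_scripts/download_prod_artifacts.py"]
    print(select_global_scripts(paths, None))                                        # no --scripts: run all
    print(select_global_scripts(paths, ["fal_scripts/download_prod_artifacts.py"]))  # filtered run
```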
gh_patches_debug_40529 | rasdani/github-patches | git_diff | nautobot__nautobot-1148 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove Custom Fields from Admin UI
### Proposed Changes
Remove custom fields from Admin UI. This should be as simple as deleting a bunch of code from `nautobot/extras/admin.py` that's no longer needed.
### Justification
Now that we have custom field management in the regular UI (#735, #997), the admin UI for custom field management is redundant.
</issue>
<code>
[start of nautobot/extras/admin.py]
1 from db_file_storage.form_widgets import DBAdminClearableFileInput
2 from django import forms
3 from django.contrib import admin, messages
4 from django.db import transaction
5 from django.db.models import ProtectedError
6
7 from .models import CustomField, CustomFieldChoice, FileProxy, JobResult
8
9
10 def order_content_types(field):
11 """
12 Order the list of available ContentTypes by application
13 """
14 queryset = field.queryset.order_by("app_label", "model")
15 field.choices = [(ct.pk, "{} > {}".format(ct.app_label, ct.name)) for ct in queryset]
16
17
18 #
19 # Custom fields
20 #
21
22
23 class CustomFieldForm(forms.ModelForm):
24 class Meta:
25 model = CustomField
26 exclude = []
27 widgets = {
28 "default": forms.TextInput(),
29 "validation_regex": forms.Textarea(
30 attrs={
31 "cols": 80,
32 "rows": 3,
33 }
34 ),
35 }
36
37 def __init__(self, *args, **kwargs):
38 super().__init__(*args, **kwargs)
39
40 order_content_types(self.fields["content_types"])
41
42
43 class CustomFieldChoiceAdmin(admin.TabularInline):
44 """
45 Defines the inline formset factory that handles choices for selection type custom fields.
46 The `extra` defines the default number of inline rows that appear in the UI.
47 """
48
49 model = CustomFieldChoice
50 extra = 5
51
52
53 @admin.register(CustomField)
54 class CustomFieldAdmin(admin.ModelAdmin):
55 """
56 Define the structure and composition of the custom field form in the admin panel.
57 """
58
59 actions = None
60 form = CustomFieldForm
61 inlines = [CustomFieldChoiceAdmin]
62 list_display = [
63 "name",
64 "models",
65 "type",
66 "required",
67 "filter_logic",
68 "default",
69 "weight",
70 "description",
71 ]
72 list_filter = [
73 "type",
74 "required",
75 "content_types",
76 ]
77 fieldsets = (
78 (
79 "Custom Field",
80 {
81 "fields": (
82 "type",
83 "name",
84 "weight",
85 "label",
86 "description",
87 "required",
88 "default",
89 "filter_logic",
90 )
91 },
92 ),
93 (
94 "Assignment",
95 {
96 "description": "A custom field must be assigned to one or more object types.",
97 "fields": ("content_types",),
98 },
99 ),
100 (
101 "Validation Rules",
102 {
103 "fields": (
104 "validation_minimum",
105 "validation_maximum",
106 "validation_regex",
107 ),
108 "classes": ("monospace",),
109 },
110 ),
111 )
112
113 def models(self, obj):
114 return ", ".join([ct.name for ct in obj.content_types.all()])
115
116 @transaction.atomic
117 def save_formset(self, request, form, formset, change):
118 # TODO(John): revisit this when custom fields are moved out of admin... there is a better way...
119 if formset.model != CustomFieldChoice:
120 return super().save_formset(request, form, formset, change)
121 instances = formset.save(commit=False)
122 for instance in instances:
123 instance.save()
124 formset.save_m2m()
125 for obj in formset.deleted_objects:
126 try:
127 obj.delete()
128 except ProtectedError as e:
129 self.message_user(request, e, level=messages.ERROR)
130 raise e
131
132
133 #
134 # File attachments
135 #
136
137
138 class FileProxyForm(forms.ModelForm):
139 class Meta:
140 model = FileProxy
141 exclude = []
142 widgets = {
143 "file": DBAdminClearableFileInput,
144 }
145
146
147 @admin.register(FileProxy)
148 class FileProxyAdmin(admin.ModelAdmin):
149 form = FileProxyForm
150 list_display = ["name", "uploaded_at"]
151 list_filter = ["uploaded_at"]
152
153
154 #
155 # Job results (jobs, scripts, reports, Git repository sync, etc.)
156 #
157
158
159 @admin.register(JobResult)
160 class JobResultAdmin(admin.ModelAdmin):
161 list_display = [
162 "obj_type",
163 "name",
164 "created",
165 "completed",
166 "user",
167 "status",
168 ]
169 fields = [
170 "obj_type",
171 "name",
172 "created",
173 "completed",
174 "user",
175 "status",
176 "data",
177 "job_id",
178 ]
179 list_filter = [
180 "status",
181 ]
182 readonly_fields = fields
183
184 def has_add_permission(self, request):
185 return False
186
[end of nautobot/extras/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nautobot/extras/admin.py b/nautobot/extras/admin.py
--- a/nautobot/extras/admin.py
+++ b/nautobot/extras/admin.py
@@ -1,10 +1,8 @@
from db_file_storage.form_widgets import DBAdminClearableFileInput
from django import forms
-from django.contrib import admin, messages
-from django.db import transaction
-from django.db.models import ProtectedError
+from django.contrib import admin
-from .models import CustomField, CustomFieldChoice, FileProxy, JobResult
+from .models import FileProxy, JobResult
def order_content_types(field):
@@ -15,121 +13,6 @@
field.choices = [(ct.pk, "{} > {}".format(ct.app_label, ct.name)) for ct in queryset]
-#
-# Custom fields
-#
-
-
-class CustomFieldForm(forms.ModelForm):
- class Meta:
- model = CustomField
- exclude = []
- widgets = {
- "default": forms.TextInput(),
- "validation_regex": forms.Textarea(
- attrs={
- "cols": 80,
- "rows": 3,
- }
- ),
- }
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
-
- order_content_types(self.fields["content_types"])
-
-
-class CustomFieldChoiceAdmin(admin.TabularInline):
- """
- Defines the inline formset factory that handles choices for selection type custom fields.
- The `extra` defines the default number of inline rows that appear in the UI.
- """
-
- model = CustomFieldChoice
- extra = 5
-
-
[email protected](CustomField)
-class CustomFieldAdmin(admin.ModelAdmin):
- """
- Define the structure and composition of the custom field form in the admin panel.
- """
-
- actions = None
- form = CustomFieldForm
- inlines = [CustomFieldChoiceAdmin]
- list_display = [
- "name",
- "models",
- "type",
- "required",
- "filter_logic",
- "default",
- "weight",
- "description",
- ]
- list_filter = [
- "type",
- "required",
- "content_types",
- ]
- fieldsets = (
- (
- "Custom Field",
- {
- "fields": (
- "type",
- "name",
- "weight",
- "label",
- "description",
- "required",
- "default",
- "filter_logic",
- )
- },
- ),
- (
- "Assignment",
- {
- "description": "A custom field must be assigned to one or more object types.",
- "fields": ("content_types",),
- },
- ),
- (
- "Validation Rules",
- {
- "fields": (
- "validation_minimum",
- "validation_maximum",
- "validation_regex",
- ),
- "classes": ("monospace",),
- },
- ),
- )
-
- def models(self, obj):
- return ", ".join([ct.name for ct in obj.content_types.all()])
-
- @transaction.atomic
- def save_formset(self, request, form, formset, change):
- # TODO(John): revisit this when custom fields are moved out of admin... there is a better way...
- if formset.model != CustomFieldChoice:
- return super().save_formset(request, form, formset, change)
- instances = formset.save(commit=False)
- for instance in instances:
- instance.save()
- formset.save_m2m()
- for obj in formset.deleted_objects:
- try:
- obj.delete()
- except ProtectedError as e:
- self.message_user(request, e, level=messages.ERROR)
- raise e
-
-
#
# File attachments
#
| {"golden_diff": "diff --git a/nautobot/extras/admin.py b/nautobot/extras/admin.py\n--- a/nautobot/extras/admin.py\n+++ b/nautobot/extras/admin.py\n@@ -1,10 +1,8 @@\n from db_file_storage.form_widgets import DBAdminClearableFileInput\n from django import forms\n-from django.contrib import admin, messages\n-from django.db import transaction\n-from django.db.models import ProtectedError\n+from django.contrib import admin\n \n-from .models import CustomField, CustomFieldChoice, FileProxy, JobResult\n+from .models import FileProxy, JobResult\n \n \n def order_content_types(field):\n@@ -15,121 +13,6 @@\n field.choices = [(ct.pk, \"{} > {}\".format(ct.app_label, ct.name)) for ct in queryset]\n \n \n-#\n-# Custom fields\n-#\n-\n-\n-class CustomFieldForm(forms.ModelForm):\n- class Meta:\n- model = CustomField\n- exclude = []\n- widgets = {\n- \"default\": forms.TextInput(),\n- \"validation_regex\": forms.Textarea(\n- attrs={\n- \"cols\": 80,\n- \"rows\": 3,\n- }\n- ),\n- }\n-\n- def __init__(self, *args, **kwargs):\n- super().__init__(*args, **kwargs)\n-\n- order_content_types(self.fields[\"content_types\"])\n-\n-\n-class CustomFieldChoiceAdmin(admin.TabularInline):\n- \"\"\"\n- Defines the inline formset factory that handles choices for selection type custom fields.\n- The `extra` defines the default number of inline rows that appear in the UI.\n- \"\"\"\n-\n- model = CustomFieldChoice\n- extra = 5\n-\n-\[email protected](CustomField)\n-class CustomFieldAdmin(admin.ModelAdmin):\n- \"\"\"\n- Define the structure and composition of the custom field form in the admin panel.\n- \"\"\"\n-\n- actions = None\n- form = CustomFieldForm\n- inlines = [CustomFieldChoiceAdmin]\n- list_display = [\n- \"name\",\n- \"models\",\n- \"type\",\n- \"required\",\n- \"filter_logic\",\n- \"default\",\n- \"weight\",\n- \"description\",\n- ]\n- list_filter = [\n- \"type\",\n- \"required\",\n- \"content_types\",\n- ]\n- fieldsets = (\n- (\n- \"Custom Field\",\n- {\n- \"fields\": (\n- \"type\",\n- \"name\",\n- \"weight\",\n- \"label\",\n- \"description\",\n- \"required\",\n- \"default\",\n- \"filter_logic\",\n- )\n- },\n- ),\n- (\n- \"Assignment\",\n- {\n- \"description\": \"A custom field must be assigned to one or more object types.\",\n- \"fields\": (\"content_types\",),\n- },\n- ),\n- (\n- \"Validation Rules\",\n- {\n- \"fields\": (\n- \"validation_minimum\",\n- \"validation_maximum\",\n- \"validation_regex\",\n- ),\n- \"classes\": (\"monospace\",),\n- },\n- ),\n- )\n-\n- def models(self, obj):\n- return \", \".join([ct.name for ct in obj.content_types.all()])\n-\n- @transaction.atomic\n- def save_formset(self, request, form, formset, change):\n- # TODO(John): revisit this when custom fields are moved out of admin... there is a better way...\n- if formset.model != CustomFieldChoice:\n- return super().save_formset(request, form, formset, change)\n- instances = formset.save(commit=False)\n- for instance in instances:\n- instance.save()\n- formset.save_m2m()\n- for obj in formset.deleted_objects:\n- try:\n- obj.delete()\n- except ProtectedError as e:\n- self.message_user(request, e, level=messages.ERROR)\n- raise e\n-\n-\n #\n # File attachments\n #\n", "issue": "Remove Custom Fields from Admin UI\n### Proposed Changes\r\n\r\nRemove custom fields from Admin UI. 
This should be as simple as deleting a bunch of code from `nautobot/extras/admin.py` that's no longer needed.\r\n\r\n### Justification\r\n\r\nNow that we have custom field management in the regular UI (#735, #997), the admin UI for custom field management is redundant.\n", "before_files": [{"content": "from db_file_storage.form_widgets import DBAdminClearableFileInput\nfrom django import forms\nfrom django.contrib import admin, messages\nfrom django.db import transaction\nfrom django.db.models import ProtectedError\n\nfrom .models import CustomField, CustomFieldChoice, FileProxy, JobResult\n\n\ndef order_content_types(field):\n \"\"\"\n Order the list of available ContentTypes by application\n \"\"\"\n queryset = field.queryset.order_by(\"app_label\", \"model\")\n field.choices = [(ct.pk, \"{} > {}\".format(ct.app_label, ct.name)) for ct in queryset]\n\n\n#\n# Custom fields\n#\n\n\nclass CustomFieldForm(forms.ModelForm):\n class Meta:\n model = CustomField\n exclude = []\n widgets = {\n \"default\": forms.TextInput(),\n \"validation_regex\": forms.Textarea(\n attrs={\n \"cols\": 80,\n \"rows\": 3,\n }\n ),\n }\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n order_content_types(self.fields[\"content_types\"])\n\n\nclass CustomFieldChoiceAdmin(admin.TabularInline):\n \"\"\"\n Defines the inline formset factory that handles choices for selection type custom fields.\n The `extra` defines the default number of inline rows that appear in the UI.\n \"\"\"\n\n model = CustomFieldChoice\n extra = 5\n\n\[email protected](CustomField)\nclass CustomFieldAdmin(admin.ModelAdmin):\n \"\"\"\n Define the structure and composition of the custom field form in the admin panel.\n \"\"\"\n\n actions = None\n form = CustomFieldForm\n inlines = [CustomFieldChoiceAdmin]\n list_display = [\n \"name\",\n \"models\",\n \"type\",\n \"required\",\n \"filter_logic\",\n \"default\",\n \"weight\",\n \"description\",\n ]\n list_filter = [\n \"type\",\n \"required\",\n \"content_types\",\n ]\n fieldsets = (\n (\n \"Custom Field\",\n {\n \"fields\": (\n \"type\",\n \"name\",\n \"weight\",\n \"label\",\n \"description\",\n \"required\",\n \"default\",\n \"filter_logic\",\n )\n },\n ),\n (\n \"Assignment\",\n {\n \"description\": \"A custom field must be assigned to one or more object types.\",\n \"fields\": (\"content_types\",),\n },\n ),\n (\n \"Validation Rules\",\n {\n \"fields\": (\n \"validation_minimum\",\n \"validation_maximum\",\n \"validation_regex\",\n ),\n \"classes\": (\"monospace\",),\n },\n ),\n )\n\n def models(self, obj):\n return \", \".join([ct.name for ct in obj.content_types.all()])\n\n @transaction.atomic\n def save_formset(self, request, form, formset, change):\n # TODO(John): revisit this when custom fields are moved out of admin... 
there is a better way...\n if formset.model != CustomFieldChoice:\n return super().save_formset(request, form, formset, change)\n instances = formset.save(commit=False)\n for instance in instances:\n instance.save()\n formset.save_m2m()\n for obj in formset.deleted_objects:\n try:\n obj.delete()\n except ProtectedError as e:\n self.message_user(request, e, level=messages.ERROR)\n raise e\n\n\n#\n# File attachments\n#\n\n\nclass FileProxyForm(forms.ModelForm):\n class Meta:\n model = FileProxy\n exclude = []\n widgets = {\n \"file\": DBAdminClearableFileInput,\n }\n\n\[email protected](FileProxy)\nclass FileProxyAdmin(admin.ModelAdmin):\n form = FileProxyForm\n list_display = [\"name\", \"uploaded_at\"]\n list_filter = [\"uploaded_at\"]\n\n\n#\n# Job results (jobs, scripts, reports, Git repository sync, etc.)\n#\n\n\[email protected](JobResult)\nclass JobResultAdmin(admin.ModelAdmin):\n list_display = [\n \"obj_type\",\n \"name\",\n \"created\",\n \"completed\",\n \"user\",\n \"status\",\n ]\n fields = [\n \"obj_type\",\n \"name\",\n \"created\",\n \"completed\",\n \"user\",\n \"status\",\n \"data\",\n \"job_id\",\n ]\n list_filter = [\n \"status\",\n ]\n readonly_fields = fields\n\n def has_add_permission(self, request):\n return False\n", "path": "nautobot/extras/admin.py"}]} | 2,015 | 875 |
gh_patches_debug_4375 | rasdani/github-patches | git_diff | ansible__ansible-modules-extras-3020 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
#446 broke npm state=latest for missing packages
##### Issue Type:
- Bug Report (`npm` module)
##### Ansible Version:
Running against devel:
``` console
$ ansible --version
ansible 2.1.0 (devel be5488cb60) last updated 2015/12/15 09:36:59 (GMT -400)
lib/ansible/modules/core: (devel 6b13da738b) last updated 2015/12/15 09:38:18 (GMT -400)
lib/ansible/modules/extras: (devel f3251de29c) last updated 2015/12/15 09:38:42 (GMT -400)
config file = /home/tomxtobin/.ansible.cfg
configured module search path = Default w/o overrides
```
##### Ansible Configuration:
``` ini
[defaults]
hostfile = ~/ansible/hosts
nocows = 1
```
##### Environment:
N/A (but it's Arch Linux)
##### Summary:
It looks like PR #446 broke `npm: name=foo state=latest` for a missing package `foo` (i.e., `foo` isn't present on the system yet).
Suggested fix: for `state == 'latest'`, actually differentiate between the result of checking `len(missing)` and `len(outdated)` to see whether the package is installed or not, and run either `npm.install()` or `npm.update()` as appropriate.
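A sketch of that branching (the helper name is made up for illustration; in `npm.py` this logic would sit in the `state == 'latest'` branch of `main()`):

``` python
def ensure_latest(npm):
    """Install the package if it is missing; update it if it is only outdated."""
    changed = False
    installed, missing = npm.list()   # list() reports both installed and missing deps
    outdated = npm.list_outdated()
    if missing:
        changed = True
        npm.install()   # package not present at all -> install it
    if outdated:
        changed = True
        npm.update()    # package present but stale -> update it
    return changed
```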
##### Steps To Reproduce:
Let's use the `gulp` package as an example.
On a system that doesn't already have `gulp` installed globally:
``` console
$ ansible example-host -m command -a 'npm ls -g -depth 0 gulp'
example-host | FAILED | rc=1 >>
/usr/lib
└── (empty)npm ERR! code 1
```
``` console
$ ansible example-host -m command -a 'gulp --version'
example-host | FAILED | rc=2 >>
[Errno 2] No such file or directory
```
Run a task against such a system to install `gulp` globally:
``` console
$ ansible example-host -m npm -a 'name=gulp state=latest global=yes'
example-host | SUCCESS => {
"changed": true
}
```
##### Expected Results:
The module (`gulp`, above) actually gets installed on the system(s) I'm running that task against.
Against such a system, I can run something like:
``` console
$ ansible example-host -m command -a 'npm ls -g -depth 0 gulp'
example-host | SUCCESS | rc=0 >>
/usr/lib
└── gulp@3.9.0
```
``` console
$ ansible example-host -m command -a 'gulp --version'
example-host | SUCCESS | rc=0 >>
[15:24:28] CLI version 3.9.0
```
(Assuming the latest version of `gulp` happened to be `3.9.0`.)
##### Actual Results:
Ansible claims it succeeds in running the task, but it doesn't actually install `gulp` on the system(s) in question.
On such a system:
``` console
$ ansible example-host -m command -a 'npm ls -g -depth 0 gulp'
example-host | FAILED | rc=1 >>
/usr/lib
└── (empty)npm ERR! code 1
```
``` console
$ ansible example-host -m command -a 'gulp --version'
example-host | FAILED | rc=2 >>
[Errno 2] No such file or directory
```
You can actually keep re-running the task over and over, and Ansible will keep claiming to successfully install `gulp`:
``` console
$ ansible example-host -m npm -a 'name=gulp state=latest global=yes'
example-host | SUCCESS => {
"changed": true
}
$ ansible example-host -m npm -a 'name=gulp state=latest global=yes'
example-host | SUCCESS => {
"changed": true
}
```
</issue>
<code>
[start of packaging/language/npm.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # (c) 2013, Chris Hoffman <[email protected]>
5 #
6 # This file is part of Ansible
7 #
8 # Ansible is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU General Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # Ansible is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU General Public License for more details.
17 #
18 # You should have received a copy of the GNU General Public License
19 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
20
21 DOCUMENTATION = '''
22 ---
23 module: npm
24 short_description: Manage node.js packages with npm
25 description:
26 - Manage node.js packages with Node Package Manager (npm)
27 version_added: 1.2
28 author: "Chris Hoffman (@chrishoffman)"
29 options:
30 name:
31 description:
32 - The name of a node.js library to install
33 required: false
34 path:
35 description:
36 - The base path where to install the node.js libraries
37 required: false
38 version:
39 description:
40 - The version to be installed
41 required: false
42 global:
43 description:
44 - Install the node.js library globally
45 required: false
46 default: no
47 choices: [ "yes", "no" ]
48 executable:
49 description:
50 - The executable location for npm.
51 - This is useful if you are using a version manager, such as nvm
52 required: false
53 ignore_scripts:
54 description:
55 - Use the --ignore-scripts flag when installing.
56 required: false
57 choices: [ "yes", "no" ]
58 default: no
59 version_added: "1.8"
60 production:
61 description:
62 - Install dependencies in production mode, excluding devDependencies
63 required: false
64 choices: [ "yes", "no" ]
65 default: no
66 registry:
67 description:
68 - The registry to install modules from.
69 required: false
70 version_added: "1.6"
71 state:
72 description:
73 - The state of the node.js library
74 required: false
75 default: present
76 choices: [ "present", "absent", "latest" ]
77 '''
78
79 EXAMPLES = '''
80 description: Install "coffee-script" node.js package.
81 - npm: name=coffee-script path=/app/location
82
83 description: Install "coffee-script" node.js package on version 1.6.1.
84 - npm: name=coffee-script version=1.6.1 path=/app/location
85
86 description: Install "coffee-script" node.js package globally.
87 - npm: name=coffee-script global=yes
88
89 description: Remove the globally package "coffee-script".
90 - npm: name=coffee-script global=yes state=absent
91
92 description: Install "coffee-script" node.js package from custom registry.
93 - npm: name=coffee-script registry=http://registry.mysite.com
94
95 description: Install packages based on package.json.
96 - npm: path=/app/location
97
98 description: Update packages based on package.json to their latest version.
99 - npm: path=/app/location state=latest
100
101 description: Install packages based on package.json using the npm installed with nvm v0.10.1.
102 - npm: path=/app/location executable=/opt/nvm/v0.10.1/bin/npm state=present
103 '''
104
105 import os
106
107 try:
108 import json
109 except ImportError:
110 try:
111 import simplejson as json
112 except ImportError:
113 # Let snippet from module_utils/basic.py return a proper error in this case
114 pass
115
116
117 class Npm(object):
118 def __init__(self, module, **kwargs):
119 self.module = module
120 self.glbl = kwargs['glbl']
121 self.name = kwargs['name']
122 self.version = kwargs['version']
123 self.path = kwargs['path']
124 self.registry = kwargs['registry']
125 self.production = kwargs['production']
126 self.ignore_scripts = kwargs['ignore_scripts']
127
128 if kwargs['executable']:
129 self.executable = kwargs['executable'].split(' ')
130 else:
131 self.executable = [module.get_bin_path('npm', True)]
132
133 if kwargs['version']:
134 self.name_version = self.name + '@' + str(self.version)
135 else:
136 self.name_version = self.name
137
138 def _exec(self, args, run_in_check_mode=False, check_rc=True):
139 if not self.module.check_mode or (self.module.check_mode and run_in_check_mode):
140 cmd = self.executable + args
141
142 if self.glbl:
143 cmd.append('--global')
144 if self.production:
145 cmd.append('--production')
146 if self.ignore_scripts:
147 cmd.append('--ignore-scripts')
148 if self.name:
149 cmd.append(self.name_version)
150 if self.registry:
151 cmd.append('--registry')
152 cmd.append(self.registry)
153
154 #If path is specified, cd into that path and run the command.
155 cwd = None
156 if self.path:
157 if not os.path.exists(self.path):
158 os.makedirs(self.path)
159 if not os.path.isdir(self.path):
160 self.module.fail_json(msg="path %s is not a directory" % self.path)
161 cwd = self.path
162
163 rc, out, err = self.module.run_command(cmd, check_rc=check_rc, cwd=cwd)
164 return out
165 return ''
166
167 def list(self):
168 cmd = ['list', '--json']
169
170 installed = list()
171 missing = list()
172 data = json.loads(self._exec(cmd, True, False))
173 if 'dependencies' in data:
174 for dep in data['dependencies']:
175 if 'missing' in data['dependencies'][dep] and data['dependencies'][dep]['missing']:
176 missing.append(dep)
177 elif 'invalid' in data['dependencies'][dep] and data['dependencies'][dep]['invalid']:
178 missing.append(dep)
179 else:
180 installed.append(dep)
181 if self.name and self.name not in installed:
182 missing.append(self.name)
183 #Named dependency not installed
184 else:
185 missing.append(self.name)
186
187 return installed, missing
188
189 def install(self):
190 return self._exec(['install'])
191
192 def update(self):
193 return self._exec(['update'])
194
195 def uninstall(self):
196 return self._exec(['uninstall'])
197
198 def list_outdated(self):
199 outdated = list()
200 data = self._exec(['outdated'], True, False)
201 for dep in data.splitlines():
202 if dep:
203 # node.js v0.10.22 changed the `npm outdated` module separator
204 # from "@" to " ". Split on both for backwards compatibility.
205 pkg, other = re.split('\s|@', dep, 1)
206 outdated.append(pkg)
207
208 return outdated
209
210
211 def main():
212 arg_spec = dict(
213 name=dict(default=None),
214 path=dict(default=None, type='path'),
215 version=dict(default=None),
216 production=dict(default='no', type='bool'),
217 executable=dict(default=None, type='path'),
218 registry=dict(default=None),
219 state=dict(default='present', choices=['present', 'absent', 'latest']),
220 ignore_scripts=dict(default=False, type='bool'),
221 )
222 arg_spec['global'] = dict(default='no', type='bool')
223 module = AnsibleModule(
224 argument_spec=arg_spec,
225 supports_check_mode=True
226 )
227
228 name = module.params['name']
229 path = module.params['path']
230 version = module.params['version']
231 glbl = module.params['global']
232 production = module.params['production']
233 executable = module.params['executable']
234 registry = module.params['registry']
235 state = module.params['state']
236 ignore_scripts = module.params['ignore_scripts']
237
238 if not path and not glbl:
239 module.fail_json(msg='path must be specified when not using global')
240 if state == 'absent' and not name:
241 module.fail_json(msg='uninstalling a package is only available for named packages')
242
243 npm = Npm(module, name=name, path=path, version=version, glbl=glbl, production=production, \
244 executable=executable, registry=registry, ignore_scripts=ignore_scripts)
245
246 changed = False
247 if state == 'present':
248 installed, missing = npm.list()
249 if len(missing):
250 changed = True
251 npm.install()
252 elif state == 'latest':
253 installed, missing = npm.list()
254 outdated = npm.list_outdated()
255 if len(missing) or len(outdated):
256 changed = True
257 npm.update()
258 else: #absent
259 installed, missing = npm.list()
260 if name in installed:
261 changed = True
262 npm.uninstall()
263
264 module.exit_json(changed=changed)
265
266 # import module snippets
267 from ansible.module_utils.basic import *
268 main()
269
[end of packaging/language/npm.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/packaging/language/npm.py b/packaging/language/npm.py
--- a/packaging/language/npm.py
+++ b/packaging/language/npm.py
@@ -252,7 +252,10 @@
elif state == 'latest':
installed, missing = npm.list()
outdated = npm.list_outdated()
- if len(missing) or len(outdated):
+ if len(missing):
+ changed = True
+ npm.install()
+ if len(outdated):
changed = True
npm.update()
else: #absent
| {"golden_diff": "diff --git a/packaging/language/npm.py b/packaging/language/npm.py\n--- a/packaging/language/npm.py\n+++ b/packaging/language/npm.py\n@@ -252,7 +252,10 @@\n elif state == 'latest':\n installed, missing = npm.list()\n outdated = npm.list_outdated()\n- if len(missing) or len(outdated):\n+ if len(missing):\n+ changed = True\n+ npm.install()\n+ if len(outdated):\n changed = True\n npm.update()\n else: #absent\n", "issue": "#446 broke npm state=latest for missing packages\n##### Issue Type:\n- Bug Report (`npm` module)\n##### Ansible Version:\n\nRunning against devel:\n\n``` console\n$ ansible --version\nansible 2.1.0 (devel be5488cb60) last updated 2015/12/15 09:36:59 (GMT -400)\n lib/ansible/modules/core: (devel 6b13da738b) last updated 2015/12/15 09:38:18 (GMT -400)\n lib/ansible/modules/extras: (devel f3251de29c) last updated 2015/12/15 09:38:42 (GMT -400)\n config file = /home/tomxtobin/.ansible.cfg\n configured module search path = Default w/o overrides\n```\n##### Ansible Configuration:\n\n``` ini\n[defaults]\nhostfile = ~/ansible/hosts\nnocows = 1\n```\n##### Environment:\n\nN/A (but it's Arch Linux)\n##### Summary:\n\nIt looks like PR #446 broke `npm: name=foo state=latest` for a missing package `foo` (i.e., `foo` isn't present on the system yet).\n\nSuggested fix: for `state == 'latest'`, actually differentiate between the result of checking `len(missing)` and `len(outdated)` to see whether the package is installed or not, and run either `npm.install()` or `npm.update()` as appropriate.\n##### Steps To Reproduce:\n\nLet's use the `gulp` package as an example.\n\nOn a system that doesn't already have `gulp` installed globally:\n\n``` console\n$ ansible example-host -m command -a 'npm ls -g -depth 0 gulp' \nexample-host | FAILED | rc=1 >>\n/usr/lib\n\u2514\u2500\u2500 (empty)npm ERR! code 1\n```\n\n``` console\n$ ansible example-host -m command -a 'gulp --version' \nexample-host | FAILED | rc=2 >>\n[Errno 2] No such file or directory\n```\n\nRun a task against such system to install `gulp` globally:\n\n``` console\n$ ansible example-host -m npm -a 'name=gulp state=latest global=yes'\nexample-host | SUCCESS => {\n \"changed\": true\n}\n```\n##### Expected Results:\n\nThe module (`gulp`, above) actually gets installed on the system(s) I'm running that task against.\n\nAgainst such a system, I can run something like:\n\n``` console\n$ ansible example-host -m command -a 'npm ls -g -depth 0 gulp' \nexample-host | SUCCESS | rc=0 >>\n/usr/lib\n\u2514\u2500\u2500 [email protected] \n```\n\n``` console\n$ ansible example-host -m command -a 'gulp --version'\nexample-host | SUCCESS | rc=0 >>\n[15:24:28] CLI version 3.9.0\n```\n\n(Assuming the latest version of `gulp` happened to be `3.9.0`.)\n##### Actual Results:\n\nAnsible claims it succeeds in running the task, but it doesn't actually install `gulp` on the system(s) in question.\n\nOn such a system:\n\n``` console\n$ ansible example-host -m command -a 'npm ls -g -depth 0 gulp' \nexample-host | FAILED | rc=1 >>\n/usr/lib\n\u2514\u2500\u2500 (empty)npm ERR! 
code 1\n```\n\n``` console\n$ ansible example-host -m command -a 'gulp --version' \nexample-host | FAILED | rc=2 >>\n[Errno 2] No such file or directory\n```\n\nYou can actually keep re-running the task over and over, and Ansible will keep claiming to successfully install `gulp`:\n\n``` console\n$ ansible example-host -m npm -a 'name=gulp state=latest global=yes'\nexample-host | SUCCESS => {\n \"changed\": true\n}\n$ ansible example-host -m npm -a 'name=gulp state=latest global=yes'\nexample-host | SUCCESS => {\n \"changed\": true\n}\n```\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2013, Chris Hoffman <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\nDOCUMENTATION = '''\n---\nmodule: npm\nshort_description: Manage node.js packages with npm\ndescription:\n - Manage node.js packages with Node Package Manager (npm)\nversion_added: 1.2\nauthor: \"Chris Hoffman (@chrishoffman)\"\noptions:\n name:\n description:\n - The name of a node.js library to install\n required: false\n path:\n description:\n - The base path where to install the node.js libraries\n required: false\n version:\n description:\n - The version to be installed\n required: false\n global:\n description:\n - Install the node.js library globally\n required: false\n default: no\n choices: [ \"yes\", \"no\" ]\n executable:\n description:\n - The executable location for npm.\n - This is useful if you are using a version manager, such as nvm\n required: false\n ignore_scripts:\n description:\n - Use the --ignore-scripts flag when installing.\n required: false\n choices: [ \"yes\", \"no\" ]\n default: no\n version_added: \"1.8\"\n production:\n description:\n - Install dependencies in production mode, excluding devDependencies\n required: false\n choices: [ \"yes\", \"no\" ]\n default: no\n registry:\n description:\n - The registry to install modules from.\n required: false\n version_added: \"1.6\"\n state:\n description:\n - The state of the node.js library\n required: false\n default: present\n choices: [ \"present\", \"absent\", \"latest\" ]\n'''\n\nEXAMPLES = '''\ndescription: Install \"coffee-script\" node.js package.\n- npm: name=coffee-script path=/app/location\n\ndescription: Install \"coffee-script\" node.js package on version 1.6.1.\n- npm: name=coffee-script version=1.6.1 path=/app/location\n\ndescription: Install \"coffee-script\" node.js package globally.\n- npm: name=coffee-script global=yes\n\ndescription: Remove the globally package \"coffee-script\".\n- npm: name=coffee-script global=yes state=absent\n\ndescription: Install \"coffee-script\" node.js package from custom registry.\n- npm: name=coffee-script registry=http://registry.mysite.com\n\ndescription: Install packages based on package.json.\n- npm: path=/app/location\n\ndescription: Update packages based on package.json to their latest version.\n- npm: path=/app/location 
state=latest\n\ndescription: Install packages based on package.json using the npm installed with nvm v0.10.1.\n- npm: path=/app/location executable=/opt/nvm/v0.10.1/bin/npm state=present\n'''\n\nimport os\n\ntry:\n import json\nexcept ImportError:\n try:\n import simplejson as json\n except ImportError:\n # Let snippet from module_utils/basic.py return a proper error in this case\n pass\n\n\nclass Npm(object):\n def __init__(self, module, **kwargs):\n self.module = module\n self.glbl = kwargs['glbl']\n self.name = kwargs['name']\n self.version = kwargs['version']\n self.path = kwargs['path']\n self.registry = kwargs['registry']\n self.production = kwargs['production']\n self.ignore_scripts = kwargs['ignore_scripts']\n\n if kwargs['executable']:\n self.executable = kwargs['executable'].split(' ')\n else:\n self.executable = [module.get_bin_path('npm', True)]\n\n if kwargs['version']:\n self.name_version = self.name + '@' + str(self.version)\n else:\n self.name_version = self.name\n\n def _exec(self, args, run_in_check_mode=False, check_rc=True):\n if not self.module.check_mode or (self.module.check_mode and run_in_check_mode):\n cmd = self.executable + args\n\n if self.glbl:\n cmd.append('--global')\n if self.production:\n cmd.append('--production')\n if self.ignore_scripts:\n cmd.append('--ignore-scripts')\n if self.name:\n cmd.append(self.name_version)\n if self.registry:\n cmd.append('--registry')\n cmd.append(self.registry)\n\n #If path is specified, cd into that path and run the command.\n cwd = None\n if self.path:\n if not os.path.exists(self.path):\n os.makedirs(self.path)\n if not os.path.isdir(self.path):\n self.module.fail_json(msg=\"path %s is not a directory\" % self.path)\n cwd = self.path\n\n rc, out, err = self.module.run_command(cmd, check_rc=check_rc, cwd=cwd)\n return out\n return ''\n\n def list(self):\n cmd = ['list', '--json']\n\n installed = list()\n missing = list()\n data = json.loads(self._exec(cmd, True, False))\n if 'dependencies' in data:\n for dep in data['dependencies']:\n if 'missing' in data['dependencies'][dep] and data['dependencies'][dep]['missing']:\n missing.append(dep)\n elif 'invalid' in data['dependencies'][dep] and data['dependencies'][dep]['invalid']:\n missing.append(dep)\n else:\n installed.append(dep)\n if self.name and self.name not in installed:\n missing.append(self.name)\n #Named dependency not installed\n else:\n missing.append(self.name)\n\n return installed, missing\n\n def install(self):\n return self._exec(['install'])\n\n def update(self):\n return self._exec(['update'])\n\n def uninstall(self):\n return self._exec(['uninstall'])\n\n def list_outdated(self):\n outdated = list()\n data = self._exec(['outdated'], True, False)\n for dep in data.splitlines():\n if dep:\n # node.js v0.10.22 changed the `npm outdated` module separator\n # from \"@\" to \" \". 
Split on both for backwards compatibility.\n pkg, other = re.split('\\s|@', dep, 1)\n outdated.append(pkg)\n\n return outdated\n\n\ndef main():\n arg_spec = dict(\n name=dict(default=None),\n path=dict(default=None, type='path'),\n version=dict(default=None),\n production=dict(default='no', type='bool'),\n executable=dict(default=None, type='path'),\n registry=dict(default=None),\n state=dict(default='present', choices=['present', 'absent', 'latest']),\n ignore_scripts=dict(default=False, type='bool'),\n )\n arg_spec['global'] = dict(default='no', type='bool')\n module = AnsibleModule(\n argument_spec=arg_spec,\n supports_check_mode=True\n )\n\n name = module.params['name']\n path = module.params['path']\n version = module.params['version']\n glbl = module.params['global']\n production = module.params['production']\n executable = module.params['executable']\n registry = module.params['registry']\n state = module.params['state']\n ignore_scripts = module.params['ignore_scripts']\n\n if not path and not glbl:\n module.fail_json(msg='path must be specified when not using global')\n if state == 'absent' and not name:\n module.fail_json(msg='uninstalling a package is only available for named packages')\n\n npm = Npm(module, name=name, path=path, version=version, glbl=glbl, production=production, \\\n executable=executable, registry=registry, ignore_scripts=ignore_scripts)\n\n changed = False\n if state == 'present':\n installed, missing = npm.list()\n if len(missing):\n changed = True\n npm.install()\n elif state == 'latest':\n installed, missing = npm.list()\n outdated = npm.list_outdated()\n if len(missing) or len(outdated):\n changed = True\n npm.update()\n else: #absent\n installed, missing = npm.list()\n if name in installed:\n changed = True\n npm.uninstall()\n\n module.exit_json(changed=changed)\n\n# import module snippets\nfrom ansible.module_utils.basic import *\nmain()\n", "path": "packaging/language/npm.py"}]} | 4,082 | 127 |
gh_patches_debug_14993 | rasdani/github-patches | git_diff | PrefectHQ__prefect-1583 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add example code block to `switch` docstring
I recently realized I hadn't touched the `switch` code in a long time, and I would've really appreciated an example to work off of. Instead, I ended up looking at our tests which most users won't want to do. Relevant doc: https://docs.prefect.io/api/unreleased/tasks/control_flow.html#prefect-tasks-control-flow-conditional-switch
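For concreteness, a minimal sketch of the kind of example being asked for (assuming the usual `task`/`Flow` imports from `prefect`; the branch names are placeholders):

```python
from prefect import Flow, task
from prefect.tasks.control_flow import switch

@task
def condition():
    return "b"  # returning "b" selects the b_branch case below

@task
def a_branch():
    return "A Branch"

@task
def b_branch():
    return "B Branch"

with Flow("switch-flow") as flow:
    # only the task whose key matches condition() runs; the rest are skipped
    switch(condition, dict(a=a_branch, b=b_branch))
```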
</issue>
<code>
[start of src/prefect/tasks/control_flow/conditional.py]
1 from typing import Any, Dict
2
3 import prefect
4 from prefect import Task
5 from prefect.engine import signals
6 from prefect.engine.result import NoResult
7
8 __all__ = ["switch", "ifelse"]
9
10
11 class Merge(Task):
12 def __init__(self, **kwargs) -> None:
13 if kwargs.setdefault("skip_on_upstream_skip", False):
14 raise ValueError("Merge tasks must have `skip_on_upstream_skip=False`.")
15 super().__init__(**kwargs)
16
17 def run(self, **task_results: Any) -> Any:
18 return next((v for v in task_results.values() if v != NoResult), None)
19
20
21 class CompareValue(Task):
22 """
23 This task stores a `value` at initialization and compares it to a `value` received at runtime.
24 If the values don't match, it raises a SKIP exception.
25
26 Args:
27 - value (Any): the value this task will attempt to match when it runs
28 - **kwargs: keyword arguments for the Task
29 """
30
31 def __init__(self, value: Any, **kwargs: Any):
32 self.value = value
33 kwargs.setdefault("name", 'CompareValue: "{}"'.format(value))
34 super().__init__(**kwargs)
35
36 def run(self, value: Any) -> None:
37 """
38 Raises a SKIP signal if the passed value does not match the task's match value;
39 succeeds silently otherwise.
40
41 Args:
42 - value (Any): the value that will be matched against the task's value.
43 """
44 if value != self.value:
45 raise signals.SKIP(
46 'Provided value "{}" did not match "{}"'.format(value, self.value)
47 )
48
49
50 def switch(condition: Task, cases: Dict[Any, Task]) -> None:
51 """
52 Adds a SWITCH to a workflow.
53
54 The condition task is evaluated and the result is compared to the keys of the cases
55 dictionary. The task corresponding to the matching key is run; all other tasks are
56 skipped. Any tasks downstream of the skipped tasks are also skipped unless they set
57 `skip_on_upstream_skip=False`.
58
59 Args:
60 - condition (Task): a task whose result forms the condition for the switch
61 - cases (Dict[Any, Task]): a dict representing the "case" statements of the switch.
62 The value of the `condition` task will be compared to the keys of this dict, and
63 the matching task will be executed.
64
65 Raises:
66 - PrefectWarning: if any of the tasks in "cases" have upstream dependencies,
67 then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. The most common cause of this
68 is passing a list of tasks as one of the cases, which adds the `List` task
69 to the switch condition but leaves the tasks themselves upstream.
70 """
71
72 with prefect.tags("switch"):
73 for value, task in cases.items():
74 task = prefect.utilities.tasks.as_task(task)
75 match_condition = CompareValue(value=value).bind(value=condition)
76 task.set_dependencies(upstream_tasks=[match_condition])
77
78
79 def ifelse(condition: Task, true_task: Task, false_task: Task) -> None:
80 """
81 Builds a conditional branch into a workflow.
82
83 If the condition evaluates True(ish), the true_task will run. If it
84 evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are
85 all downstream tasks that don't set `skip_on_upstream_skip=False`.
86
87 Args:
88 - condition (Task): a task whose boolean result forms the condition for the ifelse
89 - true_task (Task): a task that will be executed if the condition is True
90 - false_task (Task): a task that will be executed if the condition is False
91 """
92
93 switch(condition=condition, cases={True: true_task, False: false_task})
94
95
96 def merge(*tasks: Task) -> Task:
97 """
98 Merges conditional branches back together.
99
100 A conditional branch in a flow results in one or more tasks proceeding and one or
101 more tasks skipping. It is often convenient to merge those branches back into a
102 single result. This function is a simple way to achieve that goal.
103
104 The merge will return the first real result it encounters, or `None`. If multiple
105 tasks might return a result, group them with a list.
106
107 Example:
108 ```python
109 with Flow("My Flow"):
110 true_branch = ActionIfTrue()
111 false_branch = ActionIfFalse()
112 ifelse(CheckCondition(), true_branch, false_branch)
113
114 merged_result = merge(true_branch, false_branch)
115 ```
116
117 Args:
118 - *tasks (Task): tasks whose results should be merged into a single result. The tasks are
119 assumed to all sit downstream of different `switch` branches, such that only
120 one of them will contain a result and the others will all be skipped.
121
122 Returns:
123 - Task: a Task representing the merged result.
124
125 """
126 return Merge().bind(**{"task_{}".format(i + 1): t for i, t in enumerate(tasks)})
127
[end of src/prefect/tasks/control_flow/conditional.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/prefect/tasks/control_flow/conditional.py b/src/prefect/tasks/control_flow/conditional.py
--- a/src/prefect/tasks/control_flow/conditional.py
+++ b/src/prefect/tasks/control_flow/conditional.py
@@ -56,6 +56,24 @@
skipped. Any tasks downstream of the skipped tasks are also skipped unless they set
`skip_on_upstream_skip=False`.
+ Example:
+ ```python
+ @task
+ def condition():
+ return "b" # returning 'b' will take the b_branch
+
+ @task
+ def a_branch():
+ return "A Branch"
+
+ @task
+ def b_branch():
+ return "B Branch"
+
+ with Flow("switch-flow") as flow:
+ switch(condition, dict(a=a_branch, b=b_branch))
+ ```
+
Args:
- condition (Task): a task whose result forms the condition for the switch
- cases (Dict[Any, Task]): a dict representing the "case" statements of the switch.
| {"golden_diff": "diff --git a/src/prefect/tasks/control_flow/conditional.py b/src/prefect/tasks/control_flow/conditional.py\n--- a/src/prefect/tasks/control_flow/conditional.py\n+++ b/src/prefect/tasks/control_flow/conditional.py\n@@ -56,6 +56,24 @@\n skipped. Any tasks downstream of the skipped tasks are also skipped unless they set\n `skip_on_upstream_skip=False`.\n \n+ Example:\n+ ```python\n+ @task\n+ def condition():\n+ return \"b\" # returning 'b' will take the b_branch\n+\n+ @task\n+ def a_branch():\n+ return \"A Branch\"\n+\n+ @task\n+ def b_branch():\n+ return \"B Branch\"\n+\n+ with Flow(\"switch-flow\") as flow:\n+ switch(condition, dict(a=a_branch, b=b_branch))\n+ ```\n+\n Args:\n - condition (Task): a task whose result forms the condition for the switch\n - cases (Dict[Any, Task]): a dict representing the \"case\" statements of the switch.\n", "issue": "Add example code block to `switch` docstring\nI recently realized I hadn't touched the `switch` code in a long time, and I would've really appreciated an example to work off of. Instead, I ended up looking at our tests which most users won't want to do. Relevant doc: https://docs.prefect.io/api/unreleased/tasks/control_flow.html#prefect-tasks-control-flow-conditional-switch\n", "before_files": [{"content": "from typing import Any, Dict\n\nimport prefect\nfrom prefect import Task\nfrom prefect.engine import signals\nfrom prefect.engine.result import NoResult\n\n__all__ = [\"switch\", \"ifelse\"]\n\n\nclass Merge(Task):\n def __init__(self, **kwargs) -> None:\n if kwargs.setdefault(\"skip_on_upstream_skip\", False):\n raise ValueError(\"Merge tasks must have `skip_on_upstream_skip=False`.\")\n super().__init__(**kwargs)\n\n def run(self, **task_results: Any) -> Any:\n return next((v for v in task_results.values() if v != NoResult), None)\n\n\nclass CompareValue(Task):\n \"\"\"\n This task stores a `value` at initialization and compares it to a `value` received at runtime.\n If the values don't match, it raises a SKIP exception.\n\n Args:\n - value (Any): the value this task will attempt to match when it runs\n - **kwargs: keyword arguments for the Task\n \"\"\"\n\n def __init__(self, value: Any, **kwargs: Any):\n self.value = value\n kwargs.setdefault(\"name\", 'CompareValue: \"{}\"'.format(value))\n super().__init__(**kwargs)\n\n def run(self, value: Any) -> None:\n \"\"\"\n Raises a SKIP signal if the passed value does not match the task's match value;\n succeeds silently otherwise.\n\n Args:\n - value (Any): the value that will be matched against the task's value.\n \"\"\"\n if value != self.value:\n raise signals.SKIP(\n 'Provided value \"{}\" did not match \"{}\"'.format(value, self.value)\n )\n\n\ndef switch(condition: Task, cases: Dict[Any, Task]) -> None:\n \"\"\"\n Adds a SWITCH to a workflow.\n\n The condition task is evaluated and the result is compared to the keys of the cases\n dictionary. The task corresponding to the matching key is run; all other tasks are\n skipped. 
Any tasks downstream of the skipped tasks are also skipped unless they set\n `skip_on_upstream_skip=False`.\n\n Args:\n - condition (Task): a task whose result forms the condition for the switch\n - cases (Dict[Any, Task]): a dict representing the \"case\" statements of the switch.\n The value of the `condition` task will be compared to the keys of this dict, and\n the matching task will be executed.\n\n Raises:\n - PrefectWarning: if any of the tasks in \"cases\" have upstream dependencies,\n then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. The most common cause of this\n is passing a list of tasks as one of the cases, which adds the `List` task\n to the switch condition but leaves the tasks themselves upstream.\n \"\"\"\n\n with prefect.tags(\"switch\"):\n for value, task in cases.items():\n task = prefect.utilities.tasks.as_task(task)\n match_condition = CompareValue(value=value).bind(value=condition)\n task.set_dependencies(upstream_tasks=[match_condition])\n\n\ndef ifelse(condition: Task, true_task: Task, false_task: Task) -> None:\n \"\"\"\n Builds a conditional branch into a workflow.\n\n If the condition evaluates True(ish), the true_task will run. If it\n evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are\n all downstream tasks that don't set `skip_on_upstream_skip=False`.\n\n Args:\n - condition (Task): a task whose boolean result forms the condition for the ifelse\n - true_task (Task): a task that will be executed if the condition is True\n - false_task (Task): a task that will be executed if the condition is False\n \"\"\"\n\n switch(condition=condition, cases={True: true_task, False: false_task})\n\n\ndef merge(*tasks: Task) -> Task:\n \"\"\"\n Merges conditional branches back together.\n\n A conditional branch in a flow results in one or more tasks proceeding and one or\n more tasks skipping. It is often convenient to merge those branches back into a\n single result. This function is a simple way to achieve that goal.\n\n The merge will return the first real result it encounters, or `None`. If multiple\n tasks might return a result, group them with a list.\n\n Example:\n ```python\n with Flow(\"My Flow\"):\n true_branch = ActionIfTrue()\n false_branch = ActionIfFalse()\n ifelse(CheckCondition(), true_branch, false_branch)\n\n merged_result = merge(true_branch, false_branch)\n ```\n\n Args:\n - *tasks (Task): tasks whose results should be merged into a single result. The tasks are\n assumed to all sit downstream of different `switch` branches, such that only\n one of them will contain a result and the others will all be skipped.\n\n Returns:\n - Task: a Task representing the merged result.\n\n \"\"\"\n return Merge().bind(**{\"task_{}\".format(i + 1): t for i, t in enumerate(tasks)})\n", "path": "src/prefect/tasks/control_flow/conditional.py"}]} | 2,005 | 238 |
gh_patches_debug_30157 | rasdani/github-patches | git_diff | xonsh__xonsh-3796 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bad documentation or bug: _.rtn does not work
[In the Documentation](https://xon.sh/bash_to_xsh.html) you write that `_.rtn` is the equivalent of the shell `$?` and that it `Returns the exit code, or status, of the previous command.`. Either I'm misunderstanding the documentation or there is a bug:
```
#!/usr/bin/env xonsh
echo "abc"
print(_.rtn)
```
Outputs
```
abc
Traceback (most recent call last):
File "/home/volker/.local/bin/xonsh", line 8, in <module>
sys.exit(main())
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py", line 426, in main
_failback_to_other_shells(args, err)
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py", line 373, in _failback_to_other_shells
raise err
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py", line 424, in main
sys.exit(main_xonsh(args))
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py", line 471, in main_xonsh
run_script_with_cache(
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/codecache.py", line 162, in run_script_with_cache
run_compiled_code(ccode, glb, loc, mode)
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/codecache.py", line 67, in run_compiled_code
func(code, glb, loc)
File "./generateIso.xonsh", line 24, in <module>
print(_.rtn)
NameError: name '_' is not defined
```
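
A possible workaround in scripts — assuming the captured-subprocess operator `!()` exposes the exit status on the object it returns, which is an inference from the cited docs rather than something verified here — is to capture the command explicitly:

```
#!/usr/bin/env xonsh
p = !(echo "abc")   # captured subprocess; the result object carries the exit status
print(p.rtn)        # intended as the scriptable equivalent of $?
```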
</issue>
<code>
[start of xontrib/bashisms.py]
1 """Bash-like interface extensions for xonsh."""
2 import shlex
3 import sys
4 import re
5 import builtins
6
7
8 __all__ = ()
9
10
11 @events.on_transform_command
12 def bash_preproc(cmd, **kw):
13 bang_previous = {
14 "!": lambda x: x,
15 "$": lambda x: shlex.split(x)[-1],
16 "^": lambda x: shlex.split(x)[0],
17 "*": lambda x: " ".join(shlex.split(x)[1:]),
18 }
19
20 def replace_bang(m):
21 arg = m.group(1)
22 inputs = __xonsh__.history.inps
23
24 # Dissect the previous command.
25 if arg in bang_previous:
26 try:
27 return bang_previous[arg](inputs[-1])
28 except IndexError:
29 print("xonsh: no history for '!{}'".format(arg))
30 return ""
31
32 # Look back in history for a matching command.
33 else:
34 try:
35 return next((x for x in reversed(inputs) if x.startswith(arg)))
36 except StopIteration:
37 print("xonsh: no previous commands match '!{}'".format(arg))
38 return ""
39
40 return re.sub(r"!([!$^*]|[\w]+)", replace_bang, cmd)
41
42
43 def alias(args, stdin=None):
44 ret = 0
45
46 if args:
47 for arg in args:
48 if "=" in arg:
49 # shlex.split to remove quotes, e.g. "foo='echo hey'" into
50 # "foo=echo hey"
51 name, cmd = shlex.split(arg)[0].split("=", 1)
52 aliases[name] = shlex.split(cmd)
53 elif arg in aliases:
54 print("{}={}".format(arg, aliases[arg]))
55 else:
56 print("alias: {}: not found".format(arg), file=sys.stderr)
57 ret = 1
58 else:
59 for alias, cmd in aliases.items():
60 print("{}={}".format(alias, cmd))
61
62 return ret
63
64
65 aliases["alias"] = alias
66 builtins.__xonsh__.env["THREAD_SUBPROCS"] = False
67
[end of xontrib/bashisms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/xontrib/bashisms.py b/xontrib/bashisms.py
--- a/xontrib/bashisms.py
+++ b/xontrib/bashisms.py
@@ -64,3 +64,86 @@
aliases["alias"] = alias
builtins.__xonsh__.env["THREAD_SUBPROCS"] = False
+
+
+def _unset(args):
+ if not args:
+ print("Usage: unset ENV_VARIABLE", file=sys.stderr)
+
+ for v in args:
+ try:
+ __xonsh__.env.pop(v)
+ except KeyError:
+ print(f"{v} not found", file=sys.stderr)
+
+
+aliases["unset"] = _unset
+
+
+def _export(args):
+ if not args:
+ print("Usage: export ENV_VARIABLE=VALUE", file=sys.stderr)
+
+ for eq in args:
+ if "=" in eq:
+ name, val = shlex.split(eq)[0].split("=", 1)
+ __xonsh__.env[name] = val
+ else:
+ print(f"{eq} equal sign not found", file=sys.stderr)
+
+
+aliases["export"] = _export
+
+
+def _set(args):
+ arg = args[0]
+ if arg == "-e":
+ __xonsh__.env["RAISE_SUBPROC_ERROR"] = True
+ elif arg == "+e":
+ __xonsh__.env["RAISE_SUBPROC_ERROR"] = False
+ elif arg == "-x":
+ __xonsh__.env["XONSH_TRACE_SUBPROC"] = True
+ elif arg == "+x":
+ __xonsh__.env["XONSH_TRACE_SUBPROC"] = False
+ else:
+ print(
+ "Not supported in xontrib bashisms.\nPRs are welcome - https://github.com/xonsh/xonsh/blob/master/xontrib/bashisms.py",
+ file=sys.stderr,
+ )
+
+
+aliases["set"] = _set
+
+
+def _shopt(args):
+
+ supported_shopt = ["DOTGLOB"]
+
+ args_len = len(args)
+ if args_len == 0:
+ for so in supported_shopt:
+ onoff = "on" if so in __xonsh__.env and __xonsh__.env[so] else "off"
+ print(f"dotglob\t{onoff}")
+ return
+ elif args_len < 2 or args[0] in ["-h", "--help"]:
+ print(f'Usage: shopt <-s|-u> <{"|".join(supported_shopt).lower()}>')
+ return
+
+ opt = args[0]
+ optname = args[1]
+
+ if opt == "-s" and optname == "dotglob":
+ __xonsh__.env["DOTGLOB"] = True
+ elif opt == "-u" and optname == "dotglob":
+ __xonsh__.env["DOTGLOB"] = False
+ else:
+ print(
+ "Not supported in xontrib bashisms.\nPRs are welcome - https://github.com/xonsh/xonsh/blob/master/xontrib/bashisms.py",
+ file=sys.stderr,
+ )
+
+
+aliases["shopt"] = _shopt
+
+
+aliases["complete"] = "completer list"
| {"golden_diff": "diff --git a/xontrib/bashisms.py b/xontrib/bashisms.py\n--- a/xontrib/bashisms.py\n+++ b/xontrib/bashisms.py\n@@ -64,3 +64,86 @@\n \n aliases[\"alias\"] = alias\n builtins.__xonsh__.env[\"THREAD_SUBPROCS\"] = False\n+\n+\n+def _unset(args):\n+ if not args:\n+ print(\"Usage: unset ENV_VARIABLE\", file=sys.stderr)\n+\n+ for v in args:\n+ try:\n+ __xonsh__.env.pop(v)\n+ except KeyError:\n+ print(f\"{v} not found\", file=sys.stderr)\n+\n+\n+aliases[\"unset\"] = _unset\n+\n+\n+def _export(args):\n+ if not args:\n+ print(\"Usage: export ENV_VARIABLE=VALUE\", file=sys.stderr)\n+\n+ for eq in args:\n+ if \"=\" in eq:\n+ name, val = shlex.split(eq)[0].split(\"=\", 1)\n+ __xonsh__.env[name] = val\n+ else:\n+ print(f\"{eq} equal sign not found\", file=sys.stderr)\n+\n+\n+aliases[\"export\"] = _export\n+\n+\n+def _set(args):\n+ arg = args[0]\n+ if arg == \"-e\":\n+ __xonsh__.env[\"RAISE_SUBPROC_ERROR\"] = True\n+ elif arg == \"+e\":\n+ __xonsh__.env[\"RAISE_SUBPROC_ERROR\"] = False\n+ elif arg == \"-x\":\n+ __xonsh__.env[\"XONSH_TRACE_SUBPROC\"] = True\n+ elif arg == \"+x\":\n+ __xonsh__.env[\"XONSH_TRACE_SUBPROC\"] = False\n+ else:\n+ print(\n+ \"Not supported in xontrib bashisms.\\nPRs are welcome - https://github.com/xonsh/xonsh/blob/master/xontrib/bashisms.py\",\n+ file=sys.stderr,\n+ )\n+\n+\n+aliases[\"set\"] = _set\n+\n+\n+def _shopt(args):\n+\n+ supported_shopt = [\"DOTGLOB\"]\n+\n+ args_len = len(args)\n+ if args_len == 0:\n+ for so in supported_shopt:\n+ onoff = \"on\" if so in __xonsh__.env and __xonsh__.env[so] else \"off\"\n+ print(f\"dotglob\\t{onoff}\")\n+ return\n+ elif args_len < 2 or args[0] in [\"-h\", \"--help\"]:\n+ print(f'Usage: shopt <-s|-u> <{\"|\".join(supported_shopt).lower()}>')\n+ return\n+\n+ opt = args[0]\n+ optname = args[1]\n+\n+ if opt == \"-s\" and optname == \"dotglob\":\n+ __xonsh__.env[\"DOTGLOB\"] = True\n+ elif opt == \"-u\" and optname == \"dotglob\":\n+ __xonsh__.env[\"DOTGLOB\"] = False\n+ else:\n+ print(\n+ \"Not supported in xontrib bashisms.\\nPRs are welcome - https://github.com/xonsh/xonsh/blob/master/xontrib/bashisms.py\",\n+ file=sys.stderr,\n+ )\n+\n+\n+aliases[\"shopt\"] = _shopt\n+\n+\n+aliases[\"complete\"] = \"completer list\"\n", "issue": "Bad documentation or bug: _.rtn does not work\n[In the Documentation](https://xon.sh/bash_to_xsh.html) you write that `_.rtn` is the equivalent of the shell `$?` and that it `Returns the exit code, or status, of the previous command.`. 
Either I understand the documentation wrong or there is a bug:\r\n```\r\n#!/usr/bin/env xonsh\r\necho \"abc\"\r\nprint(_.rtn)\r\n```\r\nOutputs\r\n```\r\nabc\r\nTraceback (most recent call last):\r\n File \"/home/volker/.local/bin/xonsh\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py\", line 426, in main\r\n _failback_to_other_shells(args, err)\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py\", line 373, in _failback_to_other_shells\r\n raise err\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py\", line 424, in main\r\n sys.exit(main_xonsh(args))\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py\", line 471, in main_xonsh\r\n run_script_with_cache(\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/codecache.py\", line 162, in run_script_with_cache\r\n run_compiled_code(ccode, glb, loc, mode)\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/codecache.py\", line 67, in run_compiled_code\r\n func(code, glb, loc)\r\n File \"./generateIso.xonsh\", line 24, in <module>\r\n print(_.rtn)\r\nNameError: name '_' is not defined\r\n```\n", "before_files": [{"content": "\"\"\"Bash-like interface extensions for xonsh.\"\"\"\nimport shlex\nimport sys\nimport re\nimport builtins\n\n\n__all__ = ()\n\n\[email protected]_transform_command\ndef bash_preproc(cmd, **kw):\n bang_previous = {\n \"!\": lambda x: x,\n \"$\": lambda x: shlex.split(x)[-1],\n \"^\": lambda x: shlex.split(x)[0],\n \"*\": lambda x: \" \".join(shlex.split(x)[1:]),\n }\n\n def replace_bang(m):\n arg = m.group(1)\n inputs = __xonsh__.history.inps\n\n # Dissect the previous command.\n if arg in bang_previous:\n try:\n return bang_previous[arg](inputs[-1])\n except IndexError:\n print(\"xonsh: no history for '!{}'\".format(arg))\n return \"\"\n\n # Look back in history for a matching command.\n else:\n try:\n return next((x for x in reversed(inputs) if x.startswith(arg)))\n except StopIteration:\n print(\"xonsh: no previous commands match '!{}'\".format(arg))\n return \"\"\n\n return re.sub(r\"!([!$^*]|[\\w]+)\", replace_bang, cmd)\n\n\ndef alias(args, stdin=None):\n ret = 0\n\n if args:\n for arg in args:\n if \"=\" in arg:\n # shlex.split to remove quotes, e.g. \"foo='echo hey'\" into\n # \"foo=echo hey\"\n name, cmd = shlex.split(arg)[0].split(\"=\", 1)\n aliases[name] = shlex.split(cmd)\n elif arg in aliases:\n print(\"{}={}\".format(arg, aliases[arg]))\n else:\n print(\"alias: {}: not found\".format(arg), file=sys.stderr)\n ret = 1\n else:\n for alias, cmd in aliases.items():\n print(\"{}={}\".format(alias, cmd))\n\n return ret\n\n\naliases[\"alias\"] = alias\nbuiltins.__xonsh__.env[\"THREAD_SUBPROCS\"] = False\n", "path": "xontrib/bashisms.py"}]} | 1,541 | 751 |
gh_patches_debug_13527 | rasdani/github-patches | git_diff | jupyterhub__jupyterhub-443 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spawner custom form validation
Are there ideas for allowing form validation for spawners that have a custom form?
I was thinking of raising an exception in `options_from_form()` and moving the `try` up by one line in [SpawnHandler](https://github.com/jupyter/jupyterhub/blob/master/jupyterhub/handlers/pages.py#L97).
</issue>
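A minimal, self-contained sketch of the idea from the issue (hypothetical `DemoSpawner`/`handle_post` names, not JupyterHub's real classes): the point is simply that `options_from_form()` runs inside the same `try`/`except` that guards the spawn, so a validation error raised there is rendered back on the form rather than escaping as a server error.

```python
# Hypothetical sketch (not JupyterHub's actual classes): the key point is that
# options_from_form() runs inside the same try/except that guards the spawn,
# so a validation error is shown on the form instead of becoming a 500.

class DemoSpawner:
    def options_from_form(self, form_data):
        if "cpus" not in form_data:
            # Custom form validation: reject bad input early.
            raise ValueError("cpus is required")
        return {"cpus": int(form_data["cpus"][0])}

    def spawn(self, options):
        return "started with %r" % (options,)


def handle_post(spawner, form_data):
    try:
        options = spawner.options_from_form(form_data)  # may raise
        return spawner.spawn(options)
    except Exception as e:
        # Re-render the form with the error message instead of failing hard.
        return "form error: %s" % e


if __name__ == "__main__":
    s = DemoSpawner()
    print(handle_post(s, {"cpus": ["2"]}))   # started with {'cpus': 2}
    print(handle_post(s, {}))                # form error: cpus is required
```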
<code>
[start of jupyterhub/handlers/pages.py]
1 """Basic html-rendering handlers."""
2
3 # Copyright (c) Jupyter Development Team.
4 # Distributed under the terms of the Modified BSD License.
5
6 from tornado import web, gen
7
8 from .. import orm
9 from ..utils import admin_only, url_path_join
10 from .base import BaseHandler
11 from .login import LoginHandler
12
13
14 class RootHandler(BaseHandler):
15 """Render the Hub root page.
16
17 If logged in, redirects to:
18
19 - single-user server if running
20 - hub home, otherwise
21
22 Otherwise, renders login page.
23 """
24 def get(self):
25 user = self.get_current_user()
26 if user:
27 if user.running:
28 url = user.server.base_url
29 self.log.debug("User is running: %s", url)
30 else:
31 url = url_path_join(self.hub.server.base_url, 'home')
32 self.log.debug("User is not running: %s", url)
33 self.redirect(url)
34 return
35 url = url_path_join(self.hub.server.base_url, 'login')
36 self.redirect(url)
37
38
39 class HomeHandler(BaseHandler):
40 """Render the user's home page."""
41
42 @web.authenticated
43 def get(self):
44 html = self.render_template('home.html',
45 user=self.get_current_user(),
46 )
47 self.finish(html)
48
49
50 class SpawnHandler(BaseHandler):
51 """Handle spawning of single-user servers via form.
52
53 GET renders the form, POST handles form submission.
54
55 Only enabled when Spawner.options_form is defined.
56 """
57 def _render_form(self, message=''):
58 user = self.get_current_user()
59 return self.render_template('spawn.html',
60 user=user,
61 spawner_options_form=user.spawner.options_form,
62 error_message=message,
63 )
64
65 @web.authenticated
66 def get(self):
67 """GET renders form for spawning with user-specified options"""
68 user = self.get_current_user()
69 if user.running:
70 url = user.server.base_url
71 self.log.debug("User is running: %s", url)
72 self.redirect(url)
73 return
74 if user.spawner.options_form:
75 self.finish(self._render_form())
76 else:
77 # not running, no form. Trigger spawn.
78 url = url_path_join(self.base_url, 'user', user.name)
79 self.redirect(url)
80
81 @web.authenticated
82 @gen.coroutine
83 def post(self):
84 """POST spawns with user-specified options"""
85 user = self.get_current_user()
86 if user.running:
87 url = user.server.base_url
88 self.log.warning("User is already running: %s", url)
89 self.redirect(url)
90 return
91 form_options = {}
92 for key, byte_list in self.request.body_arguments.items():
93 form_options[key] = [ bs.decode('utf8') for bs in byte_list ]
94 for key, byte_list in self.request.files.items():
95 form_options["%s_file"%key] = byte_list
96 options = user.spawner.options_from_form(form_options)
97 try:
98 yield self.spawn_single_user(user, options=options)
99 except Exception as e:
100 self.log.error("Failed to spawn single-user server with form", exc_info=True)
101 self.finish(self._render_form(str(e)))
102 return
103 self.set_login_cookie(user)
104 url = user.server.base_url
105 self.redirect(url)
106
107 class AdminHandler(BaseHandler):
108 """Render the admin page."""
109
110 @admin_only
111 def get(self):
112 available = {'name', 'admin', 'running', 'last_activity'}
113 default_sort = ['admin', 'name']
114 mapping = {
115 'running': '_server_id'
116 }
117 default_order = {
118 'name': 'asc',
119 'last_activity': 'desc',
120 'admin': 'desc',
121 'running': 'desc',
122 }
123 sorts = self.get_arguments('sort') or default_sort
124 orders = self.get_arguments('order')
125
126 for bad in set(sorts).difference(available):
127 self.log.warn("ignoring invalid sort: %r", bad)
128 sorts.remove(bad)
129 for bad in set(orders).difference({'asc', 'desc'}):
130 self.log.warn("ignoring invalid order: %r", bad)
131 orders.remove(bad)
132
133 # add default sort as secondary
134 for s in default_sort:
135 if s not in sorts:
136 sorts.append(s)
137 if len(orders) < len(sorts):
138 for col in sorts[len(orders):]:
139 orders.append(default_order[col])
140 else:
141 orders = orders[:len(sorts)]
142
143 # this could be one incomprehensible nested list comprehension
144 # get User columns
145 cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]
146 # get User.col.desc() order objects
147 ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]
148
149 users = self.db.query(orm.User).order_by(*ordered)
150 users = [ self._user_from_orm(u) for u in users ]
151 running = [ u for u in users if u.running ]
152
153 html = self.render_template('admin.html',
154 user=self.get_current_user(),
155 admin_access=self.settings.get('admin_access', False),
156 users=users,
157 running=running,
158 sort={s:o for s,o in zip(sorts, orders)},
159 )
160 self.finish(html)
161
162
163 default_handlers = [
164 (r'/', RootHandler),
165 (r'/home', HomeHandler),
166 (r'/admin', AdminHandler),
167 (r'/spawn', SpawnHandler),
168 ]
169
[end of jupyterhub/handlers/pages.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/jupyterhub/handlers/pages.py b/jupyterhub/handlers/pages.py
--- a/jupyterhub/handlers/pages.py
+++ b/jupyterhub/handlers/pages.py
@@ -93,8 +93,8 @@
form_options[key] = [ bs.decode('utf8') for bs in byte_list ]
for key, byte_list in self.request.files.items():
form_options["%s_file"%key] = byte_list
- options = user.spawner.options_from_form(form_options)
try:
+ options = user.spawner.options_from_form(form_options)
yield self.spawn_single_user(user, options=options)
except Exception as e:
self.log.error("Failed to spawn single-user server with form", exc_info=True)
| {"golden_diff": "diff --git a/jupyterhub/handlers/pages.py b/jupyterhub/handlers/pages.py\n--- a/jupyterhub/handlers/pages.py\n+++ b/jupyterhub/handlers/pages.py\n@@ -93,8 +93,8 @@\n form_options[key] = [ bs.decode('utf8') for bs in byte_list ]\n for key, byte_list in self.request.files.items():\n form_options[\"%s_file\"%key] = byte_list\n- options = user.spawner.options_from_form(form_options)\n try:\n+ options = user.spawner.options_from_form(form_options)\n yield self.spawn_single_user(user, options=options)\n except Exception as e:\n self.log.error(\"Failed to spawn single-user server with form\", exc_info=True)\n", "issue": "Spawner custom form validation\nAre there ideas for allowing form validation for spawners that have a custom form?\n\nI was thinking of raising an exception in `options_from_form()` and moving the `try` up by one line in [SpawnHandler](https://github.com/jupyter/jupyterhub/blob/master/jupyterhub/handlers/pages.py#L97).\n\n", "before_files": [{"content": "\"\"\"Basic html-rendering handlers.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom tornado import web, gen\n\nfrom .. import orm\nfrom ..utils import admin_only, url_path_join\nfrom .base import BaseHandler\nfrom .login import LoginHandler\n\n\nclass RootHandler(BaseHandler):\n \"\"\"Render the Hub root page.\n \n If logged in, redirects to:\n \n - single-user server if running\n - hub home, otherwise\n \n Otherwise, renders login page.\n \"\"\"\n def get(self):\n user = self.get_current_user()\n if user:\n if user.running:\n url = user.server.base_url\n self.log.debug(\"User is running: %s\", url)\n else:\n url = url_path_join(self.hub.server.base_url, 'home')\n self.log.debug(\"User is not running: %s\", url)\n self.redirect(url)\n return\n url = url_path_join(self.hub.server.base_url, 'login')\n self.redirect(url)\n\n\nclass HomeHandler(BaseHandler):\n \"\"\"Render the user's home page.\"\"\"\n\n @web.authenticated\n def get(self):\n html = self.render_template('home.html',\n user=self.get_current_user(),\n )\n self.finish(html)\n\n\nclass SpawnHandler(BaseHandler):\n \"\"\"Handle spawning of single-user servers via form.\n \n GET renders the form, POST handles form submission.\n \n Only enabled when Spawner.options_form is defined.\n \"\"\"\n def _render_form(self, message=''):\n user = self.get_current_user()\n return self.render_template('spawn.html',\n user=user,\n spawner_options_form=user.spawner.options_form,\n error_message=message,\n )\n\n @web.authenticated\n def get(self):\n \"\"\"GET renders form for spawning with user-specified options\"\"\"\n user = self.get_current_user()\n if user.running:\n url = user.server.base_url\n self.log.debug(\"User is running: %s\", url)\n self.redirect(url)\n return\n if user.spawner.options_form:\n self.finish(self._render_form())\n else:\n # not running, no form. 
Trigger spawn.\n url = url_path_join(self.base_url, 'user', user.name)\n self.redirect(url)\n \n @web.authenticated\n @gen.coroutine\n def post(self):\n \"\"\"POST spawns with user-specified options\"\"\"\n user = self.get_current_user()\n if user.running:\n url = user.server.base_url\n self.log.warning(\"User is already running: %s\", url)\n self.redirect(url)\n return\n form_options = {}\n for key, byte_list in self.request.body_arguments.items():\n form_options[key] = [ bs.decode('utf8') for bs in byte_list ]\n for key, byte_list in self.request.files.items():\n form_options[\"%s_file\"%key] = byte_list\n options = user.spawner.options_from_form(form_options)\n try:\n yield self.spawn_single_user(user, options=options)\n except Exception as e:\n self.log.error(\"Failed to spawn single-user server with form\", exc_info=True)\n self.finish(self._render_form(str(e)))\n return\n self.set_login_cookie(user)\n url = user.server.base_url\n self.redirect(url)\n\nclass AdminHandler(BaseHandler):\n \"\"\"Render the admin page.\"\"\"\n\n @admin_only\n def get(self):\n available = {'name', 'admin', 'running', 'last_activity'}\n default_sort = ['admin', 'name']\n mapping = {\n 'running': '_server_id'\n }\n default_order = {\n 'name': 'asc',\n 'last_activity': 'desc',\n 'admin': 'desc',\n 'running': 'desc',\n }\n sorts = self.get_arguments('sort') or default_sort\n orders = self.get_arguments('order')\n \n for bad in set(sorts).difference(available):\n self.log.warn(\"ignoring invalid sort: %r\", bad)\n sorts.remove(bad)\n for bad in set(orders).difference({'asc', 'desc'}):\n self.log.warn(\"ignoring invalid order: %r\", bad)\n orders.remove(bad)\n \n # add default sort as secondary\n for s in default_sort:\n if s not in sorts:\n sorts.append(s)\n if len(orders) < len(sorts):\n for col in sorts[len(orders):]:\n orders.append(default_order[col])\n else:\n orders = orders[:len(sorts)]\n \n # this could be one incomprehensible nested list comprehension\n # get User columns\n cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]\n # get User.col.desc() order objects\n ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]\n \n users = self.db.query(orm.User).order_by(*ordered)\n users = [ self._user_from_orm(u) for u in users ]\n running = [ u for u in users if u.running ]\n \n html = self.render_template('admin.html',\n user=self.get_current_user(),\n admin_access=self.settings.get('admin_access', False),\n users=users,\n running=running,\n sort={s:o for s,o in zip(sorts, orders)},\n )\n self.finish(html)\n\n\ndefault_handlers = [\n (r'/', RootHandler),\n (r'/home', HomeHandler),\n (r'/admin', AdminHandler),\n (r'/spawn', SpawnHandler),\n]\n", "path": "jupyterhub/handlers/pages.py"}]} | 2,207 | 165 |
gh_patches_debug_34164 | rasdani/github-patches | git_diff | tensorflow__tensor2tensor-1281 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MRPC dev data is being used for training
### Description
I expected that the dev dataset would be different from the training dataset. However, all dev examples of MRPC are actually included in the training dataset.
### Environment information
```
OS: macOS 10.13.4
$ pip freeze | grep tensor
mesh-tensorflow==0.0.4
-e [email protected]:tensorflow/tensor2tensor.git@7de63449a98375011e2a8715482dfeea946e6de7#egg=tensor2tensor
tensorboard==1.12.0
tensorflow==1.12.0
tensorflow-metadata==0.9.0
tensorflow-probability==0.5.0
$ python -V
Python 3.6.4
```
### For bugs: reproduction and error logs
```python
import tensorflow as tf
from tensor2tensor.data_generators import problem
from tensor2tensor.data_generators.mrpc import MSRParaphraseCorpus
data_dir = "/tmp/t2t_mrpc"
mrpc = MSRParaphraseCorpus()
tf.gfile.MakeDirs(data_dir)
mrpc.generate_data(data_dir, "/tmp")
encoder = mrpc.feature_encoders(data_dir).get("inputs")
tfe = tf.contrib.eager
tfe.enable_eager_execution()
train_dataset = set(
encoder.decode(example["inputs"])
for example in tfe.Iterator(mrpc.dataset(problem.DatasetSplit.TRAIN, data_dir)))
eval_dataset = set(
encoder.decode(example["inputs"])
for example in tfe.Iterator(mrpc.dataset(problem.DatasetSplit.EVAL, data_dir)))
print("TRAIN Dataset: {}".format(len(train_dataset)))
print("EVAL Dataset: {}".format(len(eval_dataset)))
print("Duplication: {}".format(len(train_dataset & eval_dataset)))
```
Output:
```
TRAIN Dataset: 8152
EVAL Dataset: 816
Duplication: 816
```
</issue>
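As a hedged illustration of the invariant the reporter expects (plain Python, not the actual `tensor2tensor` generator API): rows whose ID pair appears in `dev_ids` should land only in the dev split and be excluded from training, so the two sets never overlap.

```python
# Hypothetical sketch of the expected split behaviour: rows whose id pair is in
# dev_ids go only to the dev split and are excluded from the train split.

def split_examples(rows, dev_ids):
    train, dev = [], []
    for id_pair, example in rows:
        (dev if id_pair in dev_ids else train).append(example)
    return train, dev


if __name__ == "__main__":
    rows = [(("1", "2"), "a"), (("3", "4"), "b"), (("5", "6"), "c")]
    dev_ids = {("3", "4")}
    train, dev = split_examples(rows, dev_ids)
    assert set(train).isdisjoint(dev)   # what the issue expects: no overlap
    print(train, dev)                   # ['a', 'c'] ['b']
```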
<code>
[start of tensor2tensor/data_generators/mrpc.py]
1 # coding=utf-8
2 # Copyright 2018 The Tensor2Tensor Authors.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 """Data generators for the MSR Paraphrase Corpus."""
17
18 from __future__ import absolute_import
19 from __future__ import division
20 from __future__ import print_function
21
22 import os
23 import six
24 from tensor2tensor.data_generators import generator_utils
25 from tensor2tensor.data_generators import problem
26 from tensor2tensor.data_generators import text_encoder
27 from tensor2tensor.data_generators import text_problems
28 from tensor2tensor.utils import registry
29 import tensorflow as tf
30
31 EOS = text_encoder.EOS
32
33
34 @registry.register_problem
35 class MSRParaphraseCorpus(text_problems.TextConcat2ClassProblem):
36 """MSR Paraphrase Identification problems."""
37
38 # Link to data from GLUE: https://gluebenchmark.com/tasks
39 DEV_IDS = ("https://firebasestorage.googleapis.com/v0/b/"
40 "mtl-sentence-representations.appspot.com/o/"
41 "data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-"
42 "48f4-b431-7480817f1adc")
43 MRPC_TRAIN = ("https://s3.amazonaws.com/senteval/senteval_data/"
44 "msr_paraphrase_train.txt")
45 MRPC_TEST = ("https://s3.amazonaws.com/senteval/senteval_data/"
46 "msr_paraphrase_test.txt")
47 DATA_DIR = "MRPC"
48
49 @property
50 def is_generate_per_split(self):
51 return True
52
53 @property
54 def dataset_splits(self):
55 return [{
56 "split": problem.DatasetSplit.TRAIN,
57 "shards": 10,
58 }, {
59 "split": problem.DatasetSplit.EVAL,
60 "shards": 1,
61 }]
62
63 @property
64 def approx_vocab_size(self):
65 return 2**13 # 8k vocab suffices for this small dataset.
66
67 @property
68 def num_classes(self):
69 return 2
70
71 def class_labels(self, data_dir):
72 del data_dir
73 return ["not_paraphrase", "paraphrase"]
74
75 def _maybe_download_corpora(self, tmp_dir):
76 mrpc_dir = os.path.join(tmp_dir, self.DATA_DIR)
77 tf.gfile.MakeDirs(mrpc_dir)
78 mrpc_train_finalpath = os.path.join(mrpc_dir, "msr_paraphrase_train.txt")
79 mrpc_test_finalpath = os.path.join(mrpc_dir, "msr_paraphrase_test.txt")
80 mrpc_dev_ids_finalpath = os.path.join(mrpc_dir, "dev_ids.tsv")
81
82 def download_file(tdir, filepath, url):
83 if not tf.gfile.Exists(filepath):
84 generator_utils.maybe_download(tdir, filepath, url)
85
86 download_file(mrpc_dir, mrpc_train_finalpath, self.MRPC_TRAIN)
87 download_file(mrpc_dir, mrpc_test_finalpath, self.MRPC_TEST)
88 download_file(mrpc_dir, mrpc_dev_ids_finalpath, self.DEV_IDS)
89
90 return mrpc_dir
91
92 def example_generator(self, filename, dev_ids):
93 for idx, line in enumerate(tf.gfile.Open(filename, "rb")):
94 if idx == 0: continue # skip header
95 if six.PY2:
96 line = unicode(line.strip(), "utf-8")
97 else:
98 line = line.strip().decode("utf-8")
99 l, id1, id2, s1, s2 = line.split("\t")
100 if dev_ids and [id1, id2] not in dev_ids:
101 continue
102 inputs = [[s1, s2], [s2, s1]]
103 for inp in inputs:
104 yield {
105 "inputs": inp,
106 "label": int(l)
107 }
108
109 def generate_samples(self, data_dir, tmp_dir, dataset_split):
110 mrpc_dir = self._maybe_download_corpora(tmp_dir)
111 filesplit = "msr_paraphrase_train.txt"
112 dev_ids = []
113 if dataset_split != problem.DatasetSplit.TRAIN:
114 for row in tf.gfile.Open(os.path.join(mrpc_dir, "dev_ids.tsv")):
115 dev_ids.append(row.strip().split("\t"))
116
117 filename = os.path.join(mrpc_dir, filesplit)
118 for example in self.example_generator(filename, dev_ids):
119 yield example
120
121
122 @registry.register_problem
123 class MSRParaphraseCorpusCharacters(MSRParaphraseCorpus):
124 """MSR Paraphrase Identification problems, character level"""
125
126 @property
127 def vocab_type(self):
128 return text_problems.VocabType.CHARACTER
129
130 def global_task_id(self):
131 return problem.TaskID.EN_SIM
132
[end of tensor2tensor/data_generators/mrpc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tensor2tensor/data_generators/mrpc.py b/tensor2tensor/data_generators/mrpc.py
--- a/tensor2tensor/data_generators/mrpc.py
+++ b/tensor2tensor/data_generators/mrpc.py
@@ -58,6 +58,9 @@
}, {
"split": problem.DatasetSplit.EVAL,
"shards": 1,
+ }, {
+ "split": problem.DatasetSplit.TEST,
+ "shards": 1,
}]
@property
@@ -89,7 +92,7 @@
return mrpc_dir
- def example_generator(self, filename, dev_ids):
+ def example_generator(self, filename, dev_ids, dataset_split):
for idx, line in enumerate(tf.gfile.Open(filename, "rb")):
if idx == 0: continue # skip header
if six.PY2:
@@ -97,7 +100,10 @@
else:
line = line.strip().decode("utf-8")
l, id1, id2, s1, s2 = line.split("\t")
- if dev_ids and [id1, id2] not in dev_ids:
+ is_dev = [id1, id2] in dev_ids
+ if dataset_split == problem.DatasetSplit.TRAIN and is_dev:
+ continue
+ if dataset_split == problem.DatasetSplit.EVAL and not is_dev:
continue
inputs = [[s1, s2], [s2, s1]]
for inp in inputs:
@@ -108,14 +114,17 @@
def generate_samples(self, data_dir, tmp_dir, dataset_split):
mrpc_dir = self._maybe_download_corpora(tmp_dir)
- filesplit = "msr_paraphrase_train.txt"
+ if dataset_split != problem.DatasetSplit.TEST:
+ filesplit = "msr_paraphrase_train.txt"
+ else:
+ filesplit = "msr_paraphrase_test.txt"
dev_ids = []
- if dataset_split != problem.DatasetSplit.TRAIN:
+ if dataset_split != problem.DatasetSplit.TEST:
for row in tf.gfile.Open(os.path.join(mrpc_dir, "dev_ids.tsv")):
dev_ids.append(row.strip().split("\t"))
filename = os.path.join(mrpc_dir, filesplit)
- for example in self.example_generator(filename, dev_ids):
+ for example in self.example_generator(filename, dev_ids, dataset_split):
yield example
| {"golden_diff": "diff --git a/tensor2tensor/data_generators/mrpc.py b/tensor2tensor/data_generators/mrpc.py\n--- a/tensor2tensor/data_generators/mrpc.py\n+++ b/tensor2tensor/data_generators/mrpc.py\n@@ -58,6 +58,9 @@\n }, {\n \"split\": problem.DatasetSplit.EVAL,\n \"shards\": 1,\n+ }, {\n+ \"split\": problem.DatasetSplit.TEST,\n+ \"shards\": 1,\n }]\n \n @property\n@@ -89,7 +92,7 @@\n \n return mrpc_dir\n \n- def example_generator(self, filename, dev_ids):\n+ def example_generator(self, filename, dev_ids, dataset_split):\n for idx, line in enumerate(tf.gfile.Open(filename, \"rb\")):\n if idx == 0: continue # skip header\n if six.PY2:\n@@ -97,7 +100,10 @@\n else:\n line = line.strip().decode(\"utf-8\")\n l, id1, id2, s1, s2 = line.split(\"\\t\")\n- if dev_ids and [id1, id2] not in dev_ids:\n+ is_dev = [id1, id2] in dev_ids\n+ if dataset_split == problem.DatasetSplit.TRAIN and is_dev:\n+ continue\n+ if dataset_split == problem.DatasetSplit.EVAL and not is_dev:\n continue\n inputs = [[s1, s2], [s2, s1]]\n for inp in inputs:\n@@ -108,14 +114,17 @@\n \n def generate_samples(self, data_dir, tmp_dir, dataset_split):\n mrpc_dir = self._maybe_download_corpora(tmp_dir)\n- filesplit = \"msr_paraphrase_train.txt\"\n+ if dataset_split != problem.DatasetSplit.TEST:\n+ filesplit = \"msr_paraphrase_train.txt\"\n+ else:\n+ filesplit = \"msr_paraphrase_test.txt\"\n dev_ids = []\n- if dataset_split != problem.DatasetSplit.TRAIN:\n+ if dataset_split != problem.DatasetSplit.TEST:\n for row in tf.gfile.Open(os.path.join(mrpc_dir, \"dev_ids.tsv\")):\n dev_ids.append(row.strip().split(\"\\t\"))\n \n filename = os.path.join(mrpc_dir, filesplit)\n- for example in self.example_generator(filename, dev_ids):\n+ for example in self.example_generator(filename, dev_ids, dataset_split):\n yield example\n", "issue": "MRPC dev data is being used for training\n### Description\r\n\r\nI expected that the dev dataset would be different from the training dataset. 
However, all dev examples of MRPC are actually included in the training dataset.\r\n\r\n### Environment information\r\n\r\n```\r\nOS: macOS 10.13.4\r\n\r\n$ pip freeze | grep tensor\r\nmesh-tensorflow==0.0.4\r\n-e [email protected]:tensorflow/tensor2tensor.git@7de63449a98375011e2a8715482dfeea946e6de7#egg=tensor2tensor\r\ntensorboard==1.12.0\r\ntensorflow==1.12.0\r\ntensorflow-metadata==0.9.0\r\ntensorflow-probability==0.5.0\r\n\r\n$ python -V\r\nPython 3.6.4\r\n```\r\n\r\n### For bugs: reproduction and error logs\r\n\r\n```python\r\nimport tensorflow as tf\r\nfrom tensor2tensor.data_generators import problem\r\nfrom tensor2tensor.data_generators.mrpc import MSRParaphraseCorpus\r\n\r\ndata_dir = \"/tmp/t2t_mrpc\"\r\nmrpc = MSRParaphraseCorpus()\r\ntf.gfile.MakeDirs(data_dir)\r\nmrpc.generate_data(data_dir, \"/tmp\")\r\nencoder = mrpc.feature_encoders(data_dir).get(\"inputs\")\r\n\r\ntfe = tf.contrib.eager\r\ntfe.enable_eager_execution()\r\ntrain_dataset = set(\r\n encoder.decode(example[\"inputs\"])\r\n for example in tfe.Iterator(mrpc.dataset(problem.DatasetSplit.TRAIN, data_dir)))\r\neval_dataset = set(\r\n encoder.decode(example[\"inputs\"])\r\n for example in tfe.Iterator(mrpc.dataset(problem.DatasetSplit.EVAL, data_dir)))\r\n\r\nprint(\"TRAIN Dataset: {}\".format(len(train_dataset)))\r\nprint(\"EVAL Dataset: {}\".format(len(eval_dataset)))\r\nprint(\"Duplication: {}\".format(len(train_dataset & eval_dataset)))\r\n```\r\n\r\nOutput:\r\n```\r\nTRAIN Dataset: 8152\r\nEVAL Dataset: 816\r\nDuplication: 816\r\n```\r\n\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2018 The Tensor2Tensor Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Data generators for the MSR Paraphrase Corpus.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport six\nfrom tensor2tensor.data_generators import generator_utils\nfrom tensor2tensor.data_generators import problem\nfrom tensor2tensor.data_generators import text_encoder\nfrom tensor2tensor.data_generators import text_problems\nfrom tensor2tensor.utils import registry\nimport tensorflow as tf\n\nEOS = text_encoder.EOS\n\n\[email protected]_problem\nclass MSRParaphraseCorpus(text_problems.TextConcat2ClassProblem):\n \"\"\"MSR Paraphrase Identification problems.\"\"\"\n\n # Link to data from GLUE: https://gluebenchmark.com/tasks\n DEV_IDS = (\"https://firebasestorage.googleapis.com/v0/b/\"\n \"mtl-sentence-representations.appspot.com/o/\"\n \"data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-\"\n \"48f4-b431-7480817f1adc\")\n MRPC_TRAIN = (\"https://s3.amazonaws.com/senteval/senteval_data/\"\n \"msr_paraphrase_train.txt\")\n MRPC_TEST = (\"https://s3.amazonaws.com/senteval/senteval_data/\"\n \"msr_paraphrase_test.txt\")\n DATA_DIR = \"MRPC\"\n\n @property\n def is_generate_per_split(self):\n return True\n\n @property\n def dataset_splits(self):\n return [{\n \"split\": problem.DatasetSplit.TRAIN,\n \"shards\": 10,\n }, {\n \"split\": 
problem.DatasetSplit.EVAL,\n \"shards\": 1,\n }]\n\n @property\n def approx_vocab_size(self):\n return 2**13 # 8k vocab suffices for this small dataset.\n\n @property\n def num_classes(self):\n return 2\n\n def class_labels(self, data_dir):\n del data_dir\n return [\"not_paraphrase\", \"paraphrase\"]\n\n def _maybe_download_corpora(self, tmp_dir):\n mrpc_dir = os.path.join(tmp_dir, self.DATA_DIR)\n tf.gfile.MakeDirs(mrpc_dir)\n mrpc_train_finalpath = os.path.join(mrpc_dir, \"msr_paraphrase_train.txt\")\n mrpc_test_finalpath = os.path.join(mrpc_dir, \"msr_paraphrase_test.txt\")\n mrpc_dev_ids_finalpath = os.path.join(mrpc_dir, \"dev_ids.tsv\")\n\n def download_file(tdir, filepath, url):\n if not tf.gfile.Exists(filepath):\n generator_utils.maybe_download(tdir, filepath, url)\n\n download_file(mrpc_dir, mrpc_train_finalpath, self.MRPC_TRAIN)\n download_file(mrpc_dir, mrpc_test_finalpath, self.MRPC_TEST)\n download_file(mrpc_dir, mrpc_dev_ids_finalpath, self.DEV_IDS)\n\n return mrpc_dir\n\n def example_generator(self, filename, dev_ids):\n for idx, line in enumerate(tf.gfile.Open(filename, \"rb\")):\n if idx == 0: continue # skip header\n if six.PY2:\n line = unicode(line.strip(), \"utf-8\")\n else:\n line = line.strip().decode(\"utf-8\")\n l, id1, id2, s1, s2 = line.split(\"\\t\")\n if dev_ids and [id1, id2] not in dev_ids:\n continue\n inputs = [[s1, s2], [s2, s1]]\n for inp in inputs:\n yield {\n \"inputs\": inp,\n \"label\": int(l)\n }\n\n def generate_samples(self, data_dir, tmp_dir, dataset_split):\n mrpc_dir = self._maybe_download_corpora(tmp_dir)\n filesplit = \"msr_paraphrase_train.txt\"\n dev_ids = []\n if dataset_split != problem.DatasetSplit.TRAIN:\n for row in tf.gfile.Open(os.path.join(mrpc_dir, \"dev_ids.tsv\")):\n dev_ids.append(row.strip().split(\"\\t\"))\n\n filename = os.path.join(mrpc_dir, filesplit)\n for example in self.example_generator(filename, dev_ids):\n yield example\n\n\[email protected]_problem\nclass MSRParaphraseCorpusCharacters(MSRParaphraseCorpus):\n \"\"\"MSR Paraphrase Identification problems, character level\"\"\"\n\n @property\n def vocab_type(self):\n return text_problems.VocabType.CHARACTER\n\n def global_task_id(self):\n return problem.TaskID.EN_SIM\n", "path": "tensor2tensor/data_generators/mrpc.py"}]} | 2,422 | 552 |
gh_patches_debug_30656 | rasdani/github-patches | git_diff | rucio__rucio-2150 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Test reaper console script
Motivation
----------
The reaper console script `rucio-reaper` is not tested in the testsuite.
Modification
------------
- Add a test for the reaper console script.
- Install the environment with `python setup.py develop` in the Docker env to have the generated console scripts available in the Docker container.
- Extend the reaper argparse method and the reaper tests to validate the argparse main method and console script.
</issue>
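A sketch of what such a console-script test could look like, assuming `main()` gains an `argv` parameter as proposed; `run` is mocked out, and the assertions are illustrative rather than taken from Rucio's actual test suite.

```python
# Hypothetical test sketch: once main() accepts an argv parameter, the console
# entry point can be exercised directly without touching sys.argv.
from unittest import mock


def test_reaper_main_parses_argv():
    from rucio.clis.daemons.reaper.reaper import main

    # run() is imported into the reaper CLI module, so patch it there.
    with mock.patch("rucio.clis.daemons.reaper.reaper.run") as run_mock:
        main(argv=["--run-once", "--chunk-size", "5"])

    assert run_mock.call_count == 1
    assert run_mock.call_args[1]["once"] is True
    assert run_mock.call_args[1]["chunk_size"] == 5
```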
<code>
[start of lib/rucio/clis/daemons/reaper/reaper.py]
1 # Copyright 2012-2018 CERN for the benefit of the ATLAS collaboration.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 # Authors:
16 # - Vincent Garonne, <[email protected]>, 2012-2018
17 # - Wen Guan, <[email protected]>, 2014
18 # - Hannes Hansen, <[email protected]>, 2018
19
20 """
21 Reaper is a daemon to manage file deletion
22 """
23
24 import argparse
25 import signal
26
27 from rucio.daemons.reaper.reaper import run, stop
28
29
30 def get_parser():
31 """
32 Returns the argparse parser.
33 """
34 parser = argparse.ArgumentParser(description="The Reaper daemon is responsible for replica deletion. It deletes them by checking if there are replicas that are not locked and have a tombstone to indicate that they can be deleted.", epilog='''
35 Upload a file and prepare the rules and replicas for deletion by using the judge-cleaner daemon::
36
37 $ rucio upload --rse MOCK --scope mock --name file filename.txt
38 $ rucio add-rule mock:file 1 MOCK2 --lifetime 1
39 $ rucio-judge-cleaner --run-once
40
41 Check if the replica was created::
42
43 $ rucio list-file-replica mock:file
44 +---------+--------+------------+-----------+---------------------------------------------------------+
45 | SCOPE | NAME | FILESIZE | ADLER32 | RSE: REPLICA |
46 |---------+--------+------------+-----------+---------------------------------------------------------|
47 | mock | file | 1.542 kB | 1268ee71 | MOCK: file://localhost:0/tmp/rucio_rse/mock/15/58/file |
48 +---------+--------+------------+-----------+---------------------------------------------------------+
49
50 Run the daemon::
51
52 $ rucio-reaper --run-once
53
54 Check if the replica exists::
55
56 $ rucio list-file-replica mock:file
57 +---------+--------+------------+-----------+---------------------------------------------------------+
58 | SCOPE | NAME | FILESIZE | ADLER32 | RSE: REPLICA |
59 |---------+--------+------------+-----------+---------------------------------------------------------|
60 +---------+--------+------------+-----------+---------------------------------------------------------+
61 ''')
62 parser.add_argument("--run-once", action="store_true", default=False, help='One iteration only')
63 parser.add_argument("--total-workers", action="store", default=1, type=int, help='Total number of workers per process')
64 parser.add_argument("--threads-per-worker", action="store", default=None, type=int, help='Total number of threads created by each worker')
65 parser.add_argument("--chunk-size", action="store", default=10, type=int, help='Chunk size')
66 parser.add_argument("--scheme", action="store", default=None, type=str, help='Force the reaper to use a particular protocol, e.g., mock.')
67 parser.add_argument('--greedy', action='store_true', default=False, help='Greedy mode')
68 parser.add_argument('--exclude-rses', action="store", default=None, type=str, help='RSEs expression to exclude RSEs')
69 parser.add_argument('--include-rses', action="store", default=None, type=str, help='RSEs expression to include RSEs')
70 parser.add_argument('--rses', nargs='+', type=str, help='List of RSEs')
71 parser.add_argument('--delay-seconds', action="store", default=3600, type=int, help='Delay to retry failed deletion')
72 return parser
73
74
75 def main():
76
77 signal.signal(signal.SIGTERM, stop)
78 parser = get_parser()
79 args = parser.parse_args()
80 try:
81 run(total_workers=args.total_workers, chunk_size=args.chunk_size, greedy=args.greedy,
82 once=args.run_once, scheme=args.scheme, rses=args.rses, threads_per_worker=args.threads_per_worker,
83 exclude_rses=args.exclude_rses, include_rses=args.include_rses, delay_seconds=args.delay_seconds)
84 except KeyboardInterrupt:
85 stop()
86
[end of lib/rucio/clis/daemons/reaper/reaper.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/rucio/clis/daemons/reaper/reaper.py b/lib/rucio/clis/daemons/reaper/reaper.py
--- a/lib/rucio/clis/daemons/reaper/reaper.py
+++ b/lib/rucio/clis/daemons/reaper/reaper.py
@@ -1,4 +1,4 @@
-# Copyright 2012-2018 CERN for the benefit of the ATLAS collaboration.
+# Copyright 2012-2019 CERN for the benefit of the ATLAS collaboration.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,7 +13,7 @@
# limitations under the License.
#
# Authors:
-# - Vincent Garonne, <[email protected]>, 2012-2018
+# - Vincent Garonne, <[email protected]>, 2012-2019
# - Wen Guan, <[email protected]>, 2014
# - Hannes Hansen, <[email protected]>, 2018
@@ -23,6 +23,7 @@
import argparse
import signal
+import sys
from rucio.daemons.reaper.reaper import run, stop
@@ -72,11 +73,19 @@
return parser
-def main():
+def main(argv=None):
+ """
+ The main reaper method called by the command.
+ :param argv: Command-line arguments. Default to sys.argv if not set.
+ """
signal.signal(signal.SIGTERM, stop)
+
+ if argv is None:
+ argv = sys.argv[1:]
+
parser = get_parser()
- args = parser.parse_args()
+ args = parser.parse_args(argv)
try:
run(total_workers=args.total_workers, chunk_size=args.chunk_size, greedy=args.greedy,
once=args.run_once, scheme=args.scheme, rses=args.rses, threads_per_worker=args.threads_per_worker,
| {"golden_diff": "diff --git a/lib/rucio/clis/daemons/reaper/reaper.py b/lib/rucio/clis/daemons/reaper/reaper.py\n--- a/lib/rucio/clis/daemons/reaper/reaper.py\n+++ b/lib/rucio/clis/daemons/reaper/reaper.py\n@@ -1,4 +1,4 @@\n-# Copyright 2012-2018 CERN for the benefit of the ATLAS collaboration.\n+# Copyright 2012-2019 CERN for the benefit of the ATLAS collaboration.\n #\n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n@@ -13,7 +13,7 @@\n # limitations under the License.\n #\n # Authors:\n-# - Vincent Garonne, <[email protected]>, 2012-2018\n+# - Vincent Garonne, <[email protected]>, 2012-2019\n # - Wen Guan, <[email protected]>, 2014\n # - Hannes Hansen, <[email protected]>, 2018\n \n@@ -23,6 +23,7 @@\n \n import argparse\n import signal\n+import sys\n \n from rucio.daemons.reaper.reaper import run, stop\n \n@@ -72,11 +73,19 @@\n return parser\n \n \n-def main():\n+def main(argv=None):\n+ \"\"\"\n+ The main reaper method called by the command.\n \n+ :param argv: Command-line arguments. Default to sys.argv if not set.\n+ \"\"\"\n signal.signal(signal.SIGTERM, stop)\n+\n+ if argv is None:\n+ argv = sys.argv[1:]\n+\n parser = get_parser()\n- args = parser.parse_args()\n+ args = parser.parse_args(argv)\n try:\n run(total_workers=args.total_workers, chunk_size=args.chunk_size, greedy=args.greedy,\n once=args.run_once, scheme=args.scheme, rses=args.rses, threads_per_worker=args.threads_per_worker,\n", "issue": "Test reaper console script\nMotivation\r\n----------\r\n\r\nThe reaper console script `rucio-reaper` is not tested in the testsuite.\r\n\r\nModification\r\n------------\r\n- Add test for the reaper console script.\r\n- Install the environnement with `python setup.py develop` in the docker env to have the generated console scripts available in the docker.\r\n- Extend the reaper argparse method and the reaper tests to validate the argparse main method and console script.\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2012-2018 CERN for the benefit of the ATLAS collaboration.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# Authors:\n# - Vincent Garonne, <[email protected]>, 2012-2018\n# - Wen Guan, <[email protected]>, 2014\n# - Hannes Hansen, <[email protected]>, 2018\n\n\"\"\"\nReaper is a daemon to manage file deletion\n\"\"\"\n\nimport argparse\nimport signal\n\nfrom rucio.daemons.reaper.reaper import run, stop\n\n\ndef get_parser():\n \"\"\"\n Returns the argparse parser.\n \"\"\"\n parser = argparse.ArgumentParser(description=\"The Reaper daemon is responsible for replica deletion. 
It deletes them by checking if there are replicas that are not locked and have a tombstone to indicate that they can be deleted.\", epilog='''\nUpload a file and prepare the rules and replicas for deletion by using the judge-cleaner daemon::\n\n $ rucio upload --rse MOCK --scope mock --name file filename.txt\n $ rucio add-rule mock:file 1 MOCK2 --lifetime 1\n $ rucio-judge-cleaner --run-once\n\nCheck if the replica was created::\n\n $ rucio list-file-replica mock:file\n +---------+--------+------------+-----------+---------------------------------------------------------+\n | SCOPE | NAME | FILESIZE | ADLER32 | RSE: REPLICA |\n |---------+--------+------------+-----------+---------------------------------------------------------|\n | mock | file | 1.542 kB | 1268ee71 | MOCK: file://localhost:0/tmp/rucio_rse/mock/15/58/file |\n +---------+--------+------------+-----------+---------------------------------------------------------+\n\nRun the daemon::\n\n $ rucio-reaper --run-once\n\nCheck if the replica exists::\n\n $ rucio list-file-replica mock:file\n +---------+--------+------------+-----------+---------------------------------------------------------+\n | SCOPE | NAME | FILESIZE | ADLER32 | RSE: REPLICA |\n |---------+--------+------------+-----------+---------------------------------------------------------|\n +---------+--------+------------+-----------+---------------------------------------------------------+\n ''')\n parser.add_argument(\"--run-once\", action=\"store_true\", default=False, help='One iteration only')\n parser.add_argument(\"--total-workers\", action=\"store\", default=1, type=int, help='Total number of workers per process')\n parser.add_argument(\"--threads-per-worker\", action=\"store\", default=None, type=int, help='Total number of threads created by each worker')\n parser.add_argument(\"--chunk-size\", action=\"store\", default=10, type=int, help='Chunk size')\n parser.add_argument(\"--scheme\", action=\"store\", default=None, type=str, help='Force the reaper to use a particular protocol, e.g., mock.')\n parser.add_argument('--greedy', action='store_true', default=False, help='Greedy mode')\n parser.add_argument('--exclude-rses', action=\"store\", default=None, type=str, help='RSEs expression to exclude RSEs')\n parser.add_argument('--include-rses', action=\"store\", default=None, type=str, help='RSEs expression to include RSEs')\n parser.add_argument('--rses', nargs='+', type=str, help='List of RSEs')\n parser.add_argument('--delay-seconds', action=\"store\", default=3600, type=int, help='Delay to retry failed deletion')\n return parser\n\n\ndef main():\n\n signal.signal(signal.SIGTERM, stop)\n parser = get_parser()\n args = parser.parse_args()\n try:\n run(total_workers=args.total_workers, chunk_size=args.chunk_size, greedy=args.greedy,\n once=args.run_once, scheme=args.scheme, rses=args.rses, threads_per_worker=args.threads_per_worker,\n exclude_rses=args.exclude_rses, include_rses=args.include_rses, delay_seconds=args.delay_seconds)\n except KeyboardInterrupt:\n stop()\n", "path": "lib/rucio/clis/daemons/reaper/reaper.py"}]} | 1,831 | 476 |
gh_patches_debug_4942 | rasdani/github-patches | git_diff | saleor__saleor-11327 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: Lack of validation in cleaning manifest data
### What are you trying to achieve?
The `KeyError` is raised when `manifest_data` doesn't have `tokenTargetUrl`.
### Steps to reproduce the problem
1. Run `AppFetchManifest` with a URL that contains JSON data without `tokenTargetUrl`.
2. You will get the `KeyError`.
### What did you expect to happen?
The `ValidationError` should be raised.
### Logs
https://sentry.io/organizations/saleor/issues/3749157627/?project=6417854
### Environment
Saleor version: 3.9 (to check if it also affects other versions)
</issue>
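A minimal standalone sketch of the missing guard (Django's `ValidationError` swapped for `ValueError` and the URL check simplified so the snippet runs on its own): the URL is only cleaned when `tokenTargetUrl` is present, and the missing field surfaces through the required-fields check instead of a `KeyError`.

```python
# Standalone sketch of the guard: only validate the URL when the key exists;
# a missing tokenTargetUrl is reported as "Field required." rather than
# crashing with a KeyError.
from collections import defaultdict


def clean_url(url):
    if not str(url).startswith(("http://", "https://")):
        raise ValueError("Incorrect format.")


def clean_manifest(manifest_data):
    errors = defaultdict(list)
    for field in ("id", "version", "name", "tokenTargetUrl"):
        if field not in manifest_data:
            errors[field].append("Field required.")
    try:
        if "tokenTargetUrl" in manifest_data:   # the guard that avoids KeyError
            clean_url(manifest_data["tokenTargetUrl"])
    except (ValueError, AttributeError):
        errors["tokenTargetUrl"].append("Incorrect format.")
    return dict(errors)


if __name__ == "__main__":
    print(clean_manifest({"id": "app", "version": "1.0", "name": "demo"}))
    # {'tokenTargetUrl': ['Field required.']}  -- no KeyError raised
```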
<code>
[start of saleor/app/manifest_validations.py]
1 import logging
2 from collections import defaultdict
3 from typing import Dict, Iterable, List
4
5 from django.contrib.auth.models import Permission
6 from django.core.exceptions import ValidationError
7 from django.db.models import Value
8 from django.db.models.functions import Concat
9
10 from ..core.permissions import (
11 get_permissions,
12 get_permissions_enum_list,
13 split_permission_codename,
14 )
15 from ..graphql.core.utils import str_to_enum
16 from ..graphql.webhook.subscription_payload import validate_subscription_query
17 from ..webhook.event_types import WebhookEventAsyncType, WebhookEventSyncType
18 from .error_codes import AppErrorCode
19 from .types import AppExtensionMount, AppExtensionTarget
20 from .validators import AppURLValidator
21
22 logger = logging.getLogger(__name__)
23
24 T_ERRORS = Dict[str, List[ValidationError]]
25
26
27 def _clean_app_url(url):
28 url_validator = AppURLValidator()
29 url_validator(url)
30
31
32 def _clean_extension_url_with_only_path(
33 manifest_data: dict, target: str, extension_url: str
34 ):
35 if target == AppExtensionTarget.APP_PAGE:
36 return
37 elif manifest_data["appUrl"]:
38 _clean_app_url(manifest_data["appUrl"])
39 else:
40 msg = (
41 "Incorrect relation between extension's target and URL fields. "
42 "APP_PAGE can be used only with relative URL path."
43 )
44 logger.warning(msg, extra={"target": target, "url": extension_url})
45 raise ValidationError(msg)
46
47
48 def clean_extension_url(extension: dict, manifest_data: dict):
49 """Clean assigned extension url.
50
51 Make sure that format of url is correct based on the rest of manifest fields.
52 - url can start with '/' when one of these conditions is true:
53 a) extension.target == APP_PAGE
54 b) appUrl is provided
55 - url cannot start with protocol when target == "APP_PAGE"
56 """
57 extension_url = extension["url"]
58 target = extension.get("target") or AppExtensionTarget.POPUP
59 if extension_url.startswith("/"):
60 _clean_extension_url_with_only_path(manifest_data, target, extension_url)
61 elif target == AppExtensionTarget.APP_PAGE:
62 msg = "Url cannot start with protocol when target == APP_PAGE"
63 logger.warning(msg)
64 raise ValidationError(msg)
65 else:
66 _clean_app_url(extension_url)
67
68
69 def clean_manifest_url(manifest_url):
70 try:
71 _clean_app_url(manifest_url)
72 except (ValidationError, AttributeError):
73 msg = "Enter a valid URL."
74 code = AppErrorCode.INVALID_URL_FORMAT.value
75 raise ValidationError({"manifest_url": ValidationError(msg, code=code)})
76
77
78 def clean_permissions(
79 required_permissions: List[str], saleor_permissions: Iterable[Permission]
80 ) -> List[Permission]:
81 missing_permissions = []
82 all_permissions = {perm[0]: perm[1] for perm in get_permissions_enum_list()}
83 for perm in required_permissions:
84 if not all_permissions.get(perm):
85 missing_permissions.append(perm)
86 if missing_permissions:
87 error_msg = "Given permissions don't exist."
88 code = AppErrorCode.INVALID_PERMISSION.value
89 params = {"permissions": missing_permissions}
90 raise ValidationError(error_msg, code=code, params=params)
91
92 permissions = [all_permissions[perm] for perm in required_permissions]
93 permissions = split_permission_codename(permissions)
94 return [p for p in saleor_permissions if p.codename in permissions]
95
96
97 def clean_manifest_data(manifest_data):
98 errors: T_ERRORS = defaultdict(list)
99
100 validate_required_fields(manifest_data, errors)
101 try:
102 _clean_app_url(manifest_data["tokenTargetUrl"])
103 except (ValidationError, AttributeError):
104 errors["tokenTargetUrl"].append(
105 ValidationError(
106 "Incorrect format.",
107 code=AppErrorCode.INVALID_URL_FORMAT.value,
108 )
109 )
110
111 saleor_permissions = get_permissions().annotate(
112 formated_codename=Concat("content_type__app_label", Value("."), "codename")
113 )
114 try:
115 app_permissions = clean_permissions(
116 manifest_data.get("permissions", []), saleor_permissions
117 )
118 except ValidationError as e:
119 errors["permissions"].append(e)
120 app_permissions = []
121
122 manifest_data["permissions"] = app_permissions
123
124 if not errors:
125 clean_extensions(manifest_data, app_permissions, errors)
126 clean_webhooks(manifest_data, errors)
127
128 if errors:
129 raise ValidationError(errors)
130
131
132 def _clean_extension_permissions(extension, app_permissions, errors):
133 permissions_data = extension.get("permissions", [])
134 try:
135 extension_permissions = clean_permissions(permissions_data, app_permissions)
136 except ValidationError as e:
137 e.params["label"] = extension.get("label")
138 errors["extensions"].append(e)
139 return
140
141 if len(extension_permissions) != len(permissions_data):
142 errors["extensions"].append(
143 ValidationError(
144 "Extension permission must be listed in App's permissions.",
145 code=AppErrorCode.OUT_OF_SCOPE_PERMISSION.value,
146 )
147 )
148
149 extension["permissions"] = extension_permissions
150
151
152 def clean_extension_enum_field(enum, field_name, extension, errors):
153 if extension[field_name] in [code.upper() for code, _ in enum.CHOICES]:
154 extension[field_name] = getattr(enum, extension[field_name])
155 else:
156 errors["extensions"].append(
157 ValidationError(
158 f"Incorrect value for field: {field_name}",
159 code=AppErrorCode.INVALID.value,
160 )
161 )
162
163
164 def clean_extensions(manifest_data, app_permissions, errors):
165 extensions = manifest_data.get("extensions", [])
166 for extension in extensions:
167 if "target" not in extension:
168 extension["target"] = AppExtensionTarget.POPUP
169 else:
170 clean_extension_enum_field(AppExtensionTarget, "target", extension, errors)
171 clean_extension_enum_field(AppExtensionMount, "mount", extension, errors)
172
173 try:
174 clean_extension_url(extension, manifest_data)
175 except (ValidationError, AttributeError):
176 errors["extensions"].append(
177 ValidationError(
178 "Incorrect value for field: url.",
179 code=AppErrorCode.INVALID_URL_FORMAT.value,
180 )
181 )
182 _clean_extension_permissions(extension, app_permissions, errors)
183
184
185 def clean_webhooks(manifest_data, errors):
186 webhooks = manifest_data.get("webhooks", [])
187
188 async_types = {
189 str_to_enum(e_type[0]): e_type[0] for e_type in WebhookEventAsyncType.CHOICES
190 }
191 sync_types = {
192 str_to_enum(e_type[0]): e_type[0] for e_type in WebhookEventSyncType.CHOICES
193 }
194
195 target_url_validator = AppURLValidator(
196 schemes=["http", "https", "awssqs", "gcpubsub"]
197 )
198
199 for webhook in webhooks:
200 if not validate_subscription_query(webhook["query"]):
201 errors["webhooks"].append(
202 ValidationError(
203 "Subscription query is not valid.",
204 code=AppErrorCode.INVALID.value,
205 )
206 )
207
208 webhook["events"] = []
209 for e_type in webhook.get("asyncEvents", []):
210 try:
211 webhook["events"].append(async_types[e_type])
212 except KeyError:
213 errors["webhooks"].append(
214 ValidationError(
215 "Invalid asynchronous event.",
216 code=AppErrorCode.INVALID.value,
217 )
218 )
219 for e_type in webhook.get("syncEvents", []):
220 try:
221 webhook["events"].append(sync_types[e_type])
222 except KeyError:
223 errors["webhooks"].append(
224 ValidationError(
225 "Invalid synchronous event.",
226 code=AppErrorCode.INVALID.value,
227 )
228 )
229
230 try:
231 target_url_validator(webhook["targetUrl"])
232 except ValidationError:
233 errors["webhooks"].append(
234 ValidationError(
235 "Invalid target url.",
236 code=AppErrorCode.INVALID_URL_FORMAT.value,
237 )
238 )
239
240
241 def validate_required_fields(manifest_data, errors):
242 manifest_required_fields = {"id", "version", "name", "tokenTargetUrl"}
243 extension_required_fields = {"label", "url", "mount"}
244 webhook_required_fields = {"name", "targetUrl", "query"}
245
246 if manifest_missing_fields := manifest_required_fields.difference(manifest_data):
247 for missing_field in manifest_missing_fields:
248 errors[missing_field].append(
249 ValidationError("Field required.", code=AppErrorCode.REQUIRED.value)
250 )
251
252 app_extensions_data = manifest_data.get("extensions", [])
253 for extension in app_extensions_data:
254 extension_fields = set(extension.keys())
255 if missing_fields := extension_required_fields.difference(extension_fields):
256 errors["extensions"].append(
257 ValidationError(
258 "Missing required fields for app extension: "
259 f'{", ".join(missing_fields)}.',
260 code=AppErrorCode.REQUIRED.value,
261 )
262 )
263
264 webhooks = manifest_data.get("webhooks", [])
265 for webhook in webhooks:
266 webhook_fields = set(webhook.keys())
267 if missing_fields := webhook_required_fields.difference(webhook_fields):
268 errors["webhooks"].append(
269 ValidationError(
270 f"Missing required fields for webhook: "
271 f'{", ".join(missing_fields)}.',
272 code=AppErrorCode.REQUIRED.value,
273 )
274 )
275
[end of saleor/app/manifest_validations.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/app/manifest_validations.py b/saleor/app/manifest_validations.py
--- a/saleor/app/manifest_validations.py
+++ b/saleor/app/manifest_validations.py
@@ -99,7 +99,8 @@
validate_required_fields(manifest_data, errors)
try:
- _clean_app_url(manifest_data["tokenTargetUrl"])
+ if "tokenTargetUrl" in manifest_data:
+ _clean_app_url(manifest_data["tokenTargetUrl"])
except (ValidationError, AttributeError):
errors["tokenTargetUrl"].append(
ValidationError(
| {"golden_diff": "diff --git a/saleor/app/manifest_validations.py b/saleor/app/manifest_validations.py\n--- a/saleor/app/manifest_validations.py\n+++ b/saleor/app/manifest_validations.py\n@@ -99,7 +99,8 @@\n \n validate_required_fields(manifest_data, errors)\n try:\n- _clean_app_url(manifest_data[\"tokenTargetUrl\"])\n+ if \"tokenTargetUrl\" in manifest_data:\n+ _clean_app_url(manifest_data[\"tokenTargetUrl\"])\n except (ValidationError, AttributeError):\n errors[\"tokenTargetUrl\"].append(\n ValidationError(\n", "issue": "Bug: Lack of validation in cleaning manifest data\n### What are you trying to achieve?\n\nThe `KeyError` is raised when `manifest_data` doesn't have `tokenTargetUrl`.\n\n### Steps to reproduce the problem\n\n1. Run `AppFetchManifest` with URL that contains JSON data without `tokenTargetUrl`.\r\n2. You will get the `KeyError`.\n\n### What did you expect to happen?\n\nThe `ValidationError` should be raised.\n\n### Logs\n\nhttps://sentry.io/organizations/saleor/issues/3749157627/?project=6417854\n\n### Environment\n\nSaleor version: 3.9 (to check if it also affect other versions)\n", "before_files": [{"content": "import logging\nfrom collections import defaultdict\nfrom typing import Dict, Iterable, List\n\nfrom django.contrib.auth.models import Permission\nfrom django.core.exceptions import ValidationError\nfrom django.db.models import Value\nfrom django.db.models.functions import Concat\n\nfrom ..core.permissions import (\n get_permissions,\n get_permissions_enum_list,\n split_permission_codename,\n)\nfrom ..graphql.core.utils import str_to_enum\nfrom ..graphql.webhook.subscription_payload import validate_subscription_query\nfrom ..webhook.event_types import WebhookEventAsyncType, WebhookEventSyncType\nfrom .error_codes import AppErrorCode\nfrom .types import AppExtensionMount, AppExtensionTarget\nfrom .validators import AppURLValidator\n\nlogger = logging.getLogger(__name__)\n\nT_ERRORS = Dict[str, List[ValidationError]]\n\n\ndef _clean_app_url(url):\n url_validator = AppURLValidator()\n url_validator(url)\n\n\ndef _clean_extension_url_with_only_path(\n manifest_data: dict, target: str, extension_url: str\n):\n if target == AppExtensionTarget.APP_PAGE:\n return\n elif manifest_data[\"appUrl\"]:\n _clean_app_url(manifest_data[\"appUrl\"])\n else:\n msg = (\n \"Incorrect relation between extension's target and URL fields. 
\"\n \"APP_PAGE can be used only with relative URL path.\"\n )\n logger.warning(msg, extra={\"target\": target, \"url\": extension_url})\n raise ValidationError(msg)\n\n\ndef clean_extension_url(extension: dict, manifest_data: dict):\n \"\"\"Clean assigned extension url.\n\n Make sure that format of url is correct based on the rest of manifest fields.\n - url can start with '/' when one of these conditions is true:\n a) extension.target == APP_PAGE\n b) appUrl is provided\n - url cannot start with protocol when target == \"APP_PAGE\"\n \"\"\"\n extension_url = extension[\"url\"]\n target = extension.get(\"target\") or AppExtensionTarget.POPUP\n if extension_url.startswith(\"/\"):\n _clean_extension_url_with_only_path(manifest_data, target, extension_url)\n elif target == AppExtensionTarget.APP_PAGE:\n msg = \"Url cannot start with protocol when target == APP_PAGE\"\n logger.warning(msg)\n raise ValidationError(msg)\n else:\n _clean_app_url(extension_url)\n\n\ndef clean_manifest_url(manifest_url):\n try:\n _clean_app_url(manifest_url)\n except (ValidationError, AttributeError):\n msg = \"Enter a valid URL.\"\n code = AppErrorCode.INVALID_URL_FORMAT.value\n raise ValidationError({\"manifest_url\": ValidationError(msg, code=code)})\n\n\ndef clean_permissions(\n required_permissions: List[str], saleor_permissions: Iterable[Permission]\n) -> List[Permission]:\n missing_permissions = []\n all_permissions = {perm[0]: perm[1] for perm in get_permissions_enum_list()}\n for perm in required_permissions:\n if not all_permissions.get(perm):\n missing_permissions.append(perm)\n if missing_permissions:\n error_msg = \"Given permissions don't exist.\"\n code = AppErrorCode.INVALID_PERMISSION.value\n params = {\"permissions\": missing_permissions}\n raise ValidationError(error_msg, code=code, params=params)\n\n permissions = [all_permissions[perm] for perm in required_permissions]\n permissions = split_permission_codename(permissions)\n return [p for p in saleor_permissions if p.codename in permissions]\n\n\ndef clean_manifest_data(manifest_data):\n errors: T_ERRORS = defaultdict(list)\n\n validate_required_fields(manifest_data, errors)\n try:\n _clean_app_url(manifest_data[\"tokenTargetUrl\"])\n except (ValidationError, AttributeError):\n errors[\"tokenTargetUrl\"].append(\n ValidationError(\n \"Incorrect format.\",\n code=AppErrorCode.INVALID_URL_FORMAT.value,\n )\n )\n\n saleor_permissions = get_permissions().annotate(\n formated_codename=Concat(\"content_type__app_label\", Value(\".\"), \"codename\")\n )\n try:\n app_permissions = clean_permissions(\n manifest_data.get(\"permissions\", []), saleor_permissions\n )\n except ValidationError as e:\n errors[\"permissions\"].append(e)\n app_permissions = []\n\n manifest_data[\"permissions\"] = app_permissions\n\n if not errors:\n clean_extensions(manifest_data, app_permissions, errors)\n clean_webhooks(manifest_data, errors)\n\n if errors:\n raise ValidationError(errors)\n\n\ndef _clean_extension_permissions(extension, app_permissions, errors):\n permissions_data = extension.get(\"permissions\", [])\n try:\n extension_permissions = clean_permissions(permissions_data, app_permissions)\n except ValidationError as e:\n e.params[\"label\"] = extension.get(\"label\")\n errors[\"extensions\"].append(e)\n return\n\n if len(extension_permissions) != len(permissions_data):\n errors[\"extensions\"].append(\n ValidationError(\n \"Extension permission must be listed in App's permissions.\",\n code=AppErrorCode.OUT_OF_SCOPE_PERMISSION.value,\n )\n )\n\n 
extension[\"permissions\"] = extension_permissions\n\n\ndef clean_extension_enum_field(enum, field_name, extension, errors):\n if extension[field_name] in [code.upper() for code, _ in enum.CHOICES]:\n extension[field_name] = getattr(enum, extension[field_name])\n else:\n errors[\"extensions\"].append(\n ValidationError(\n f\"Incorrect value for field: {field_name}\",\n code=AppErrorCode.INVALID.value,\n )\n )\n\n\ndef clean_extensions(manifest_data, app_permissions, errors):\n extensions = manifest_data.get(\"extensions\", [])\n for extension in extensions:\n if \"target\" not in extension:\n extension[\"target\"] = AppExtensionTarget.POPUP\n else:\n clean_extension_enum_field(AppExtensionTarget, \"target\", extension, errors)\n clean_extension_enum_field(AppExtensionMount, \"mount\", extension, errors)\n\n try:\n clean_extension_url(extension, manifest_data)\n except (ValidationError, AttributeError):\n errors[\"extensions\"].append(\n ValidationError(\n \"Incorrect value for field: url.\",\n code=AppErrorCode.INVALID_URL_FORMAT.value,\n )\n )\n _clean_extension_permissions(extension, app_permissions, errors)\n\n\ndef clean_webhooks(manifest_data, errors):\n webhooks = manifest_data.get(\"webhooks\", [])\n\n async_types = {\n str_to_enum(e_type[0]): e_type[0] for e_type in WebhookEventAsyncType.CHOICES\n }\n sync_types = {\n str_to_enum(e_type[0]): e_type[0] for e_type in WebhookEventSyncType.CHOICES\n }\n\n target_url_validator = AppURLValidator(\n schemes=[\"http\", \"https\", \"awssqs\", \"gcpubsub\"]\n )\n\n for webhook in webhooks:\n if not validate_subscription_query(webhook[\"query\"]):\n errors[\"webhooks\"].append(\n ValidationError(\n \"Subscription query is not valid.\",\n code=AppErrorCode.INVALID.value,\n )\n )\n\n webhook[\"events\"] = []\n for e_type in webhook.get(\"asyncEvents\", []):\n try:\n webhook[\"events\"].append(async_types[e_type])\n except KeyError:\n errors[\"webhooks\"].append(\n ValidationError(\n \"Invalid asynchronous event.\",\n code=AppErrorCode.INVALID.value,\n )\n )\n for e_type in webhook.get(\"syncEvents\", []):\n try:\n webhook[\"events\"].append(sync_types[e_type])\n except KeyError:\n errors[\"webhooks\"].append(\n ValidationError(\n \"Invalid synchronous event.\",\n code=AppErrorCode.INVALID.value,\n )\n )\n\n try:\n target_url_validator(webhook[\"targetUrl\"])\n except ValidationError:\n errors[\"webhooks\"].append(\n ValidationError(\n \"Invalid target url.\",\n code=AppErrorCode.INVALID_URL_FORMAT.value,\n )\n )\n\n\ndef validate_required_fields(manifest_data, errors):\n manifest_required_fields = {\"id\", \"version\", \"name\", \"tokenTargetUrl\"}\n extension_required_fields = {\"label\", \"url\", \"mount\"}\n webhook_required_fields = {\"name\", \"targetUrl\", \"query\"}\n\n if manifest_missing_fields := manifest_required_fields.difference(manifest_data):\n for missing_field in manifest_missing_fields:\n errors[missing_field].append(\n ValidationError(\"Field required.\", code=AppErrorCode.REQUIRED.value)\n )\n\n app_extensions_data = manifest_data.get(\"extensions\", [])\n for extension in app_extensions_data:\n extension_fields = set(extension.keys())\n if missing_fields := extension_required_fields.difference(extension_fields):\n errors[\"extensions\"].append(\n ValidationError(\n \"Missing required fields for app extension: \"\n f'{\", \".join(missing_fields)}.',\n code=AppErrorCode.REQUIRED.value,\n )\n )\n\n webhooks = manifest_data.get(\"webhooks\", [])\n for webhook in webhooks:\n webhook_fields = set(webhook.keys())\n if 
missing_fields := webhook_required_fields.difference(webhook_fields):\n errors[\"webhooks\"].append(\n ValidationError(\n f\"Missing required fields for webhook: \"\n f'{\", \".join(missing_fields)}.',\n code=AppErrorCode.REQUIRED.value,\n )\n )\n", "path": "saleor/app/manifest_validations.py"}]} | 3,330 | 133 |
gh_patches_debug_30904 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-1591 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Cannot Download a File if it doesn't have a `file_path` or if `custom_path` is not provided to `download()`
### Steps to reproduce
1. Create an operation with the above mentioned aspects
Ex:
```
def run_bot(token):
def make_pasta(update, context):
msg = update.message
if msg.reply_to_message is None:
msg.reply_text(responses.NOT_A_REPLY)
return
if msg.reply_to_message.document is None:
msg.reply_text(responses.NOT_A_DOC)
return
telegram_file = File(msg.reply_to_message.document.file_id)
telegram_file.download()
updater = Updater(token, use_context=True)
dp = updater.dispatcher
dp.add_handler(CommandHandler('hello', hello))
dp.add_handler(CommandHandler('make_pasta', make_pasta))
print('Log: Seu bot iniciou! (:')
updater.start_polling()
updater.idle()
```
(I know this is not exactly a MWE, sorry)
### Expected behaviour
According to the documentation, it should download the file directly to my current working directory: `Download this file. By default, the file is saved in the current working directory with its original filename as reported by Telegram.`
### Actual behaviour
In `telegram/files/file.py`, inside the `download()` function, we get a `TypeError` because `self.file_path` is `None` (NoneType), not a `str` or `os.PathLike` object.
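A minimal sketch of that failure path, outside the library: a `File` built directly from a bare `file_id` keeps `file_path = None`, and `download()` without a `custom_path` then runs `filename = basename(self.file_path)` (see the code below), so the call reduces to `basename(None)`. The snippet is illustrative only; it just mirrors that one line.

```python
from os.path import basename

file_path = None      # what File(file_id).file_path holds when only an id was supplied
basename(file_path)   # TypeError: expected str, bytes or os.PathLike object, not NoneType
```

(A `File` obtained from `bot.get_file()` normally carries a `file_path`, which is why the direct construction in the snippet above hits this case.)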
### Configuration
Ubuntu 18.04
python-telegram-bot 12.2.0
certifi 2019.09.11
future 0.18.1
Python 3.7.3 (default, Apr 3 2019, 19:16:38) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
</issue>
<code>
[start of telegram/files/file.py]
1 #!/usr/bin/env python
2 #
3 # A library that provides a Python interface to the Telegram Bot API
4 # Copyright (C) 2015-2018
5 # Leandro Toledo de Souza <[email protected]>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Lesser Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Lesser Public License for more details.
16 #
17 # You should have received a copy of the GNU Lesser Public License
18 # along with this program. If not, see [http://www.gnu.org/licenses/].
19 """This module contains an object that represents a Telegram File."""
20 from base64 import b64decode
21 from os.path import basename
22
23 from future.backports.urllib import parse as urllib_parse
24
25 from telegram import TelegramObject
26 from telegram.passport.credentials import decrypt
27
28
29 class File(TelegramObject):
30 """
31 This object represents a file ready to be downloaded. The file can be downloaded with
32 :attr:`download`. It is guaranteed that the link will be valid for at least 1 hour. When the
33 link expires, a new one can be requested by calling getFile.
34
35 Note:
36 Maximum file size to download is 20 MB
37
38 Attributes:
39 file_id (:obj:`str`): Unique identifier for this file.
40 file_size (:obj:`str`): Optional. File size.
41 file_path (:obj:`str`): Optional. File path. Use :attr:`download` to get the file.
42
43 Args:
44 file_id (:obj:`str`): Unique identifier for this file.
45 file_size (:obj:`int`, optional): Optional. File size, if known.
46 file_path (:obj:`str`, optional): File path. Use :attr:`download` to get the file.
47 bot (:obj:`telegram.Bot`, optional): Bot to use with shortcut method.
48 **kwargs (:obj:`dict`): Arbitrary keyword arguments.
49
50 Note:
51 If you obtain an instance of this class from :attr:`telegram.PassportFile.get_file`,
52 then it will automatically be decrypted as it downloads when you call :attr:`download()`.
53
54 """
55
56 def __init__(self, file_id, bot=None, file_size=None, file_path=None, **kwargs):
57 # Required
58 self.file_id = str(file_id)
59
60 # Optionals
61 self.file_size = file_size
62 self.file_path = file_path
63
64 self.bot = bot
65 self._credentials = None
66
67 self._id_attrs = (self.file_id,)
68
69 @classmethod
70 def de_json(cls, data, bot):
71 if not data:
72 return None
73
74 return cls(bot=bot, **data)
75
76 def download(self, custom_path=None, out=None, timeout=None):
77 """
78 Download this file. By default, the file is saved in the current working directory with its
79 original filename as reported by Telegram. If a :attr:`custom_path` is supplied, it will be
80 saved to that path instead. If :attr:`out` is defined, the file contents will be saved to
81 that object using the ``out.write`` method.
82
83 Note:
84 :attr:`custom_path` and :attr:`out` are mutually exclusive.
85
86 Args:
87 custom_path (:obj:`str`, optional): Custom path.
88 out (:obj:`io.BufferedWriter`, optional): A file-like object. Must be opened for
89 writing in binary mode, if applicable.
90 timeout (:obj:`int` | :obj:`float`, optional): If this value is specified, use it as
91 the read timeout from the server (instead of the one specified during creation of
92 the connection pool).
93
94 Returns:
95 :obj:`str` | :obj:`io.BufferedWriter`: The same object as :attr:`out` if specified.
96 Otherwise, returns the filename downloaded to.
97
98 Raises:
99 ValueError: If both :attr:`custom_path` and :attr:`out` are passed.
100
101 """
102 if custom_path is not None and out is not None:
103 raise ValueError('custom_path and out are mutually exclusive')
104
105 # Convert any UTF-8 char into a url encoded ASCII string.
106 url = self._get_encoded_url()
107
108 if out:
109 buf = self.bot.request.retrieve(url)
110 if self._credentials:
111 buf = decrypt(b64decode(self._credentials.secret),
112 b64decode(self._credentials.hash),
113 buf)
114 out.write(buf)
115 return out
116 else:
117 if custom_path:
118 filename = custom_path
119 else:
120 filename = basename(self.file_path)
121
122 buf = self.bot.request.retrieve(url, timeout=timeout)
123 if self._credentials:
124 buf = decrypt(b64decode(self._credentials.secret),
125 b64decode(self._credentials.hash),
126 buf)
127 with open(filename, 'wb') as fobj:
128 fobj.write(buf)
129 return filename
130
131 def _get_encoded_url(self):
132 """Convert any UTF-8 char in :obj:`File.file_path` into a url encoded ASCII string."""
133 sres = urllib_parse.urlsplit(self.file_path)
134 return urllib_parse.urlunsplit(urllib_parse.SplitResult(
135 sres.scheme, sres.netloc, urllib_parse.quote(sres.path), sres.query, sres.fragment))
136
137 def download_as_bytearray(self, buf=None):
138 """Download this file and return it as a bytearray.
139
140 Args:
141 buf (:obj:`bytearray`, optional): Extend the given bytearray with the downloaded data.
142
143 Returns:
144 :obj:`bytearray`: The same object as :attr:`buf` if it was specified. Otherwise a newly
145 allocated :obj:`bytearray`.
146
147 """
148 if buf is None:
149 buf = bytearray()
150
151 buf.extend(self.bot.request.retrieve(self._get_encoded_url()))
152 return buf
153
154 def set_credentials(self, credentials):
155 self._credentials = credentials
156
[end of telegram/files/file.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/telegram/files/file.py b/telegram/files/file.py
--- a/telegram/files/file.py
+++ b/telegram/files/file.py
@@ -19,6 +19,7 @@
"""This module contains an object that represents a Telegram File."""
from base64 import b64decode
from os.path import basename
+import os
from future.backports.urllib import parse as urllib_parse
@@ -76,9 +77,10 @@
def download(self, custom_path=None, out=None, timeout=None):
"""
Download this file. By default, the file is saved in the current working directory with its
- original filename as reported by Telegram. If a :attr:`custom_path` is supplied, it will be
- saved to that path instead. If :attr:`out` is defined, the file contents will be saved to
- that object using the ``out.write`` method.
+ original filename as reported by Telegram. If the file has no filename, it the file ID will
+ be used as filename. If a :attr:`custom_path` is supplied, it will be saved to that path
+ instead. If :attr:`out` is defined, the file contents will be saved to that object using
+ the ``out.write`` method.
Note:
:attr:`custom_path` and :attr:`out` are mutually exclusive.
@@ -116,8 +118,10 @@
else:
if custom_path:
filename = custom_path
- else:
+ elif self.file_path:
filename = basename(self.file_path)
+ else:
+ filename = os.path.join(os.getcwd(), self.file_id)
buf = self.bot.request.retrieve(url, timeout=timeout)
if self._credentials:
| {"golden_diff": "diff --git a/telegram/files/file.py b/telegram/files/file.py\n--- a/telegram/files/file.py\n+++ b/telegram/files/file.py\n@@ -19,6 +19,7 @@\n \"\"\"This module contains an object that represents a Telegram File.\"\"\"\n from base64 import b64decode\n from os.path import basename\n+import os\n \n from future.backports.urllib import parse as urllib_parse\n \n@@ -76,9 +77,10 @@\n def download(self, custom_path=None, out=None, timeout=None):\n \"\"\"\n Download this file. By default, the file is saved in the current working directory with its\n- original filename as reported by Telegram. If a :attr:`custom_path` is supplied, it will be\n- saved to that path instead. If :attr:`out` is defined, the file contents will be saved to\n- that object using the ``out.write`` method.\n+ original filename as reported by Telegram. If the file has no filename, it the file ID will\n+ be used as filename. If a :attr:`custom_path` is supplied, it will be saved to that path\n+ instead. If :attr:`out` is defined, the file contents will be saved to that object using\n+ the ``out.write`` method.\n \n Note:\n :attr:`custom_path` and :attr:`out` are mutually exclusive.\n@@ -116,8 +118,10 @@\n else:\n if custom_path:\n filename = custom_path\n- else:\n+ elif self.file_path:\n filename = basename(self.file_path)\n+ else:\n+ filename = os.path.join(os.getcwd(), self.file_id)\n \n buf = self.bot.request.retrieve(url, timeout=timeout)\n if self._credentials:\n", "issue": "[BUG] Cannot Download a File if it doesnt have a `file_path` or if `custom_path` is not provided to `download()`\n<!--\r\nThanks for reporting issues of python-telegram-bot!\r\n\r\nUse this template to notify us if you found a bug.\r\n\r\nTo make it easier for us to help you please enter detailed information below.\r\n\r\nPlease note, we only support the latest version of python-telegram-bot and\r\nmaster branch. Please make sure to upgrade & recreate the issue on the latest\r\nversion prior to opening an issue.\r\n-->\r\n### Steps to reproduce\r\n1. Create an operation with the above mentioned aspects\r\n\r\nEx:\r\n```\r\ndef run_bot(token):\r\n def make_pasta(update, context):\r\n msg = update.message\r\n\r\n if msg.reply_to_message is None:\r\n msg.reply_text(responses.NOT_A_REPLY)\r\n return\r\n if msg.reply_to_message.document is None:\r\n msg.reply_text(responses.NOT_A_DOC)\r\n return\r\n telegram_file = File(msg.reply_to_message.document.file_id)\r\n telegram_file.download() \r\n\r\n updater = Updater(token, use_context=True)\r\n dp = updater.dispatcher\r\n\r\n dp.add_handler(CommandHandler('hello', hello))\r\n dp.add_handler(CommandHandler('make_pasta', make_pasta))\r\n\r\n print('Log: Seu bot iniciou! (:')\r\n updater.start_polling()\r\n updater.idle() \r\n```\r\n(I know this is not exactly a MWE, sorry)\r\n\r\n### Expected behaviour\r\nAccording to the documentation, it should download the file directly to my current working directory `Download this file. 
By default, the file is saved in the current working directory with its original filename as reported by Telegram.`\r\n\r\n### Actual behaviour\r\non `telegram/files/file.py`, on the download function, we get a type error since `self.file_path` is NoneType, not str or os.PathLike object\r\n\r\n### Configuration\r\nUbuntu 18.04\r\n\r\npython-telegram-bot 12.2.0\r\ncertifi 2019.09.11\r\nfuture 0.18.1\r\nPython 3.7.3 (default, Apr 3 2019, 19:16:38) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2018\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains an object that represents a Telegram File.\"\"\"\nfrom base64 import b64decode\nfrom os.path import basename\n\nfrom future.backports.urllib import parse as urllib_parse\n\nfrom telegram import TelegramObject\nfrom telegram.passport.credentials import decrypt\n\n\nclass File(TelegramObject):\n \"\"\"\n This object represents a file ready to be downloaded. The file can be downloaded with\n :attr:`download`. It is guaranteed that the link will be valid for at least 1 hour. When the\n link expires, a new one can be requested by calling getFile.\n\n Note:\n Maximum file size to download is 20 MB\n\n Attributes:\n file_id (:obj:`str`): Unique identifier for this file.\n file_size (:obj:`str`): Optional. File size.\n file_path (:obj:`str`): Optional. File path. Use :attr:`download` to get the file.\n\n Args:\n file_id (:obj:`str`): Unique identifier for this file.\n file_size (:obj:`int`, optional): Optional. File size, if known.\n file_path (:obj:`str`, optional): File path. Use :attr:`download` to get the file.\n bot (:obj:`telegram.Bot`, optional): Bot to use with shortcut method.\n **kwargs (:obj:`dict`): Arbitrary keyword arguments.\n\n Note:\n If you obtain an instance of this class from :attr:`telegram.PassportFile.get_file`,\n then it will automatically be decrypted as it downloads when you call :attr:`download()`.\n\n \"\"\"\n\n def __init__(self, file_id, bot=None, file_size=None, file_path=None, **kwargs):\n # Required\n self.file_id = str(file_id)\n\n # Optionals\n self.file_size = file_size\n self.file_path = file_path\n\n self.bot = bot\n self._credentials = None\n\n self._id_attrs = (self.file_id,)\n\n @classmethod\n def de_json(cls, data, bot):\n if not data:\n return None\n\n return cls(bot=bot, **data)\n\n def download(self, custom_path=None, out=None, timeout=None):\n \"\"\"\n Download this file. By default, the file is saved in the current working directory with its\n original filename as reported by Telegram. If a :attr:`custom_path` is supplied, it will be\n saved to that path instead. 
If :attr:`out` is defined, the file contents will be saved to\n that object using the ``out.write`` method.\n\n Note:\n :attr:`custom_path` and :attr:`out` are mutually exclusive.\n\n Args:\n custom_path (:obj:`str`, optional): Custom path.\n out (:obj:`io.BufferedWriter`, optional): A file-like object. Must be opened for\n writing in binary mode, if applicable.\n timeout (:obj:`int` | :obj:`float`, optional): If this value is specified, use it as\n the read timeout from the server (instead of the one specified during creation of\n the connection pool).\n\n Returns:\n :obj:`str` | :obj:`io.BufferedWriter`: The same object as :attr:`out` if specified.\n Otherwise, returns the filename downloaded to.\n\n Raises:\n ValueError: If both :attr:`custom_path` and :attr:`out` are passed.\n\n \"\"\"\n if custom_path is not None and out is not None:\n raise ValueError('custom_path and out are mutually exclusive')\n\n # Convert any UTF-8 char into a url encoded ASCII string.\n url = self._get_encoded_url()\n\n if out:\n buf = self.bot.request.retrieve(url)\n if self._credentials:\n buf = decrypt(b64decode(self._credentials.secret),\n b64decode(self._credentials.hash),\n buf)\n out.write(buf)\n return out\n else:\n if custom_path:\n filename = custom_path\n else:\n filename = basename(self.file_path)\n\n buf = self.bot.request.retrieve(url, timeout=timeout)\n if self._credentials:\n buf = decrypt(b64decode(self._credentials.secret),\n b64decode(self._credentials.hash),\n buf)\n with open(filename, 'wb') as fobj:\n fobj.write(buf)\n return filename\n\n def _get_encoded_url(self):\n \"\"\"Convert any UTF-8 char in :obj:`File.file_path` into a url encoded ASCII string.\"\"\"\n sres = urllib_parse.urlsplit(self.file_path)\n return urllib_parse.urlunsplit(urllib_parse.SplitResult(\n sres.scheme, sres.netloc, urllib_parse.quote(sres.path), sres.query, sres.fragment))\n\n def download_as_bytearray(self, buf=None):\n \"\"\"Download this file and return it as a bytearray.\n\n Args:\n buf (:obj:`bytearray`, optional): Extend the given bytearray with the downloaded data.\n\n Returns:\n :obj:`bytearray`: The same object as :attr:`buf` if it was specified. Otherwise a newly\n allocated :obj:`bytearray`.\n\n \"\"\"\n if buf is None:\n buf = bytearray()\n\n buf.extend(self.bot.request.retrieve(self._get_encoded_url()))\n return buf\n\n def set_credentials(self, credentials):\n self._credentials = credentials\n", "path": "telegram/files/file.py"}]} | 2,726 | 387 |
gh_patches_debug_32170 | rasdani/github-patches | git_diff | pwndbg__pwndbg-1855 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add `telescope --frame`
It'd be nice to have a `telescope` command that prints out the stack frame, from `rsp` to `rbp`. Obviously this only works if the program actually uses a frame pointer.
It would be equivalent to this:
```
pwndbg> p ($rbp-$rsp)/8 + 1
$5 = 9
pwndbg> telescope $rsp $$
00:0000│ rsp 0x7ffe5f4951a0 ◂— 0x300000001
01:0008│ 0x7ffe5f4951a8 —▸ 0x7ffe5f495220 ◂— 0x170c94ca0
02:0010│ 0x7ffe5f4951b0 —▸ 0x7f6870c96168 —▸ 0x563e09600000 ◂— jg 0x563e09600047
03:0018│ 0x7ffe5f4951b8 ◂— 0xf0
04:0020│ rsi 0x7ffe5f4951c0 ◂— 0xb1ed2074ada5ce5
05:0028│ 0x7ffe5f4951c8 ◂— 0xda37756c736484c1
06:0030│ 0x7ffe5f4951d0 —▸ 0x7ffe5f4951fe ◂— 0x563e096013e00000
07:0038│ 0x7ffe5f4951d8 ◂— 0x56657596c3d91600
08:0040│ rbp 0x7ffe5f4951e0 —▸ 0x7ffe5f495200 —▸ 0x563e096013e0 ◂— push r15
```
</issue>
<code>
[start of pwndbg/commands/telescope.py]
1 """
2 Prints out pointer chains starting at some address in memory.
3
4 Generally used to print out the stack or register values.
5 """
6
7 from __future__ import annotations
8
9 import argparse
10 import collections
11 import math
12
13 import pwndbg.chain
14 import pwndbg.color.telescope as T
15 import pwndbg.commands
16 import pwndbg.gdblib.arch
17 import pwndbg.gdblib.config
18 import pwndbg.gdblib.memory
19 import pwndbg.gdblib.regs
20 import pwndbg.gdblib.typeinfo
21 from pwndbg.color import theme
22 from pwndbg.commands import CommandCategory
23
24 telescope_lines = pwndbg.gdblib.config.add_param(
25 "telescope-lines", 8, "number of lines to printed by the telescope command"
26 )
27 skip_repeating_values = pwndbg.gdblib.config.add_param(
28 "telescope-skip-repeating-val",
29 True,
30 "whether to skip repeating values of the telescope command",
31 )
32 skip_repeating_values_minimum = pwndbg.gdblib.config.add_param(
33 "telescope-skip-repeating-val-minimum",
34 3,
35 "minimum amount of repeated values before skipping lines",
36 )
37
38 offset_separator = theme.add_param(
39 "telescope-offset-separator", "│", "offset separator of the telescope command"
40 )
41 offset_delimiter = theme.add_param(
42 "telescope-offset-delimiter", ":", "offset delimiter of the telescope command"
43 )
44 repeating_marker = theme.add_param(
45 "telescope-repeating-marker", "... ↓", "repeating values marker of the telescope command"
46 )
47
48
49 parser = argparse.ArgumentParser(
50 description="Recursively dereferences pointers starting at the specified address."
51 )
52 parser.add_argument(
53 "-r",
54 "--reverse",
55 dest="reverse",
56 action="store_true",
57 default=False,
58 help="Show <count> previous addresses instead of next ones",
59 )
60
61 parser.add_argument(
62 "address", nargs="?", default="$sp", type=int, help="The address to telescope at."
63 )
64
65 parser.add_argument(
66 "count", nargs="?", default=telescope_lines, type=int, help="The number of lines to show."
67 )
68
69
70 @pwndbg.commands.ArgparsedCommand(parser, category=CommandCategory.MEMORY)
71 @pwndbg.commands.OnlyWhenRunning
72 def telescope(address=None, count=telescope_lines, to_string=False, reverse=False):
73 """
74 Recursively dereferences pointers starting at the specified address
75 ($sp by default)
76 """
77 ptrsize = pwndbg.gdblib.typeinfo.ptrsize
78 if telescope.repeat:
79 address = telescope.last_address + ptrsize
80 telescope.offset += 1
81 else:
82 telescope.offset = 0
83
84 address = int(address if address else pwndbg.gdblib.regs.sp) & pwndbg.gdblib.arch.ptrmask
85 input_address = address
86 count = max(int(count), 1) & pwndbg.gdblib.arch.ptrmask
87 delimiter = T.delimiter(offset_delimiter)
88 separator = T.separator(offset_separator)
89
90 # Allow invocation of "telescope 20" to dump 20 bytes at the stack pointer
91 if address < pwndbg.gdblib.memory.MMAP_MIN_ADDR and not pwndbg.gdblib.memory.peek(address):
92 count = address
93 address = pwndbg.gdblib.regs.sp
94
95 # Allow invocation of telescope -r to dump previous addresses
96 if reverse:
97 address -= (count - 1) * ptrsize
98
99 # Allow invocation of "telescope a b" to dump all bytes from A to B
100 if int(address) <= int(count):
101 # adjust count if it is an address. use ceil division as count is number of
102 # ptrsize values and we don't want to strip out a value if dest is unaligned
103 count -= address
104 count = max(math.ceil(count / ptrsize), 1)
105
106 reg_values = collections.defaultdict(lambda: [])
107 for reg in pwndbg.gdblib.regs.common:
108 reg_values[pwndbg.gdblib.regs[reg]].append(reg)
109
110 start = address
111 stop = address + (count * ptrsize)
112 step = ptrsize
113
114 # Find all registers which show up in the trace
115 regs = {}
116 for i in range(start, stop, step):
117 values = list(reg_values[i])
118
119 for width in range(1, pwndbg.gdblib.arch.ptrsize):
120 values.extend("%s-%i" % (r, width) for r in reg_values[i + width])
121
122 regs[i] = " ".join(values)
123
124 # Find the longest set of register information
125 if regs:
126 longest_regs = max(map(len, regs.values()))
127 else:
128 longest_regs = 0
129
130 # Print everything out
131 result = []
132 last = None
133 collapse_buffer: list[str] = []
134 skipped_padding = (
135 2
136 + len(offset_delimiter)
137 + 4
138 + len(offset_separator)
139 + 1
140 + longest_regs
141 + 1
142 - len(repeating_marker)
143 )
144
145 # Collapse repeating values exceeding minimum delta.
146 def collapse_repeating_values() -> None:
147 # The first line was already printed, hence increment by 1
148 if collapse_buffer and len(collapse_buffer) + 1 >= skip_repeating_values_minimum:
149 result.append(
150 T.repeating_marker(
151 "%s%s%i skipped"
152 % (repeating_marker, " " * skipped_padding, len(collapse_buffer))
153 )
154 )
155 else:
156 result.extend(collapse_buffer)
157 collapse_buffer.clear()
158
159 for i, addr in enumerate(range(start, stop, step)):
160 if not pwndbg.gdblib.memory.peek(addr):
161 collapse_repeating_values()
162 result.append("<Could not read memory at %#x>" % addr)
163 break
164
165 line = " ".join(
166 (
167 T.offset(
168 "%02x%s%04x%s"
169 % (
170 i + telescope.offset,
171 delimiter,
172 addr - start + (telescope.offset * ptrsize),
173 separator,
174 )
175 ),
176 T.register(regs[addr].ljust(longest_regs)),
177 pwndbg.chain.format(addr),
178 )
179 )
180
181 # Buffer repeating values.
182 if skip_repeating_values:
183 value = pwndbg.gdblib.memory.pvoid(addr)
184 if last == value and addr != input_address:
185 collapse_buffer.append(line)
186 continue
187 collapse_repeating_values()
188 last = value
189
190 result.append(line)
191
192 collapse_repeating_values()
193 telescope.offset += i
194 telescope.last_address = addr
195
196 if not to_string:
197 print("\n".join(result))
198
199 return result
200
201
202 parser = argparse.ArgumentParser(
203 description="Dereferences on stack data with specified count and offset."
204 )
205 parser.add_argument("count", nargs="?", default=8, type=int, help="number of element to dump")
206 parser.add_argument(
207 "offset",
208 nargs="?",
209 default=0,
210 type=int,
211 help="Element offset from $sp (support negative offset)",
212 )
213
214
215 @pwndbg.commands.ArgparsedCommand(parser, category=CommandCategory.STACK)
216 @pwndbg.commands.OnlyWhenRunning
217 def stack(count, offset) -> None:
218 ptrsize = pwndbg.gdblib.typeinfo.ptrsize
219 telescope.repeat = stack.repeat
220 telescope(address=pwndbg.gdblib.regs.sp + offset * ptrsize, count=count)
221
222
223 telescope.last_address = 0
224 telescope.offset = 0
225
[end of pwndbg/commands/telescope.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pwndbg/commands/telescope.py b/pwndbg/commands/telescope.py
--- a/pwndbg/commands/telescope.py
+++ b/pwndbg/commands/telescope.py
@@ -58,6 +58,15 @@
help="Show <count> previous addresses instead of next ones",
)
+parser.add_argument(
+ "-f",
+ "--frame",
+ dest="frame",
+ action="store_true",
+ default=False,
+ help="Show the stack frame, from rsp to rbp",
+)
+
parser.add_argument(
"address", nargs="?", default="$sp", type=int, help="The address to telescope at."
)
@@ -69,7 +78,7 @@
@pwndbg.commands.ArgparsedCommand(parser, category=CommandCategory.MEMORY)
@pwndbg.commands.OnlyWhenRunning
-def telescope(address=None, count=telescope_lines, to_string=False, reverse=False):
+def telescope(address=None, count=telescope_lines, to_string=False, reverse=False, frame=False):
"""
Recursively dereferences pointers starting at the specified address
($sp by default)
@@ -96,6 +105,24 @@
if reverse:
address -= (count - 1) * ptrsize
+ # Allow invocation of telescope -f (--frame) to dump frame addresses
+ if frame:
+ sp = pwndbg.gdblib.regs.sp
+ bp = pwndbg.gdblib.regs[pwndbg.gdblib.regs.frame]
+ if sp > bp:
+ print("Cannot display stack frame because base pointer is below stack pointer")
+ return
+
+ for page in pwndbg.gdblib.vmmap.get():
+ if sp in page and bp not in page:
+ print(
+ "Cannot display stack frame because base pointer is not on the same page with stack pointer"
+ )
+ return
+
+ address = sp
+ count = int((bp - sp) / ptrsize) + 1
+
# Allow invocation of "telescope a b" to dump all bytes from A to B
if int(address) <= int(count):
# adjust count if it is an address. use ceil division as count is number of
| {"golden_diff": "diff --git a/pwndbg/commands/telescope.py b/pwndbg/commands/telescope.py\n--- a/pwndbg/commands/telescope.py\n+++ b/pwndbg/commands/telescope.py\n@@ -58,6 +58,15 @@\n help=\"Show <count> previous addresses instead of next ones\",\n )\n \n+parser.add_argument(\n+ \"-f\",\n+ \"--frame\",\n+ dest=\"frame\",\n+ action=\"store_true\",\n+ default=False,\n+ help=\"Show the stack frame, from rsp to rbp\",\n+)\n+\n parser.add_argument(\n \"address\", nargs=\"?\", default=\"$sp\", type=int, help=\"The address to telescope at.\"\n )\n@@ -69,7 +78,7 @@\n \n @pwndbg.commands.ArgparsedCommand(parser, category=CommandCategory.MEMORY)\n @pwndbg.commands.OnlyWhenRunning\n-def telescope(address=None, count=telescope_lines, to_string=False, reverse=False):\n+def telescope(address=None, count=telescope_lines, to_string=False, reverse=False, frame=False):\n \"\"\"\n Recursively dereferences pointers starting at the specified address\n ($sp by default)\n@@ -96,6 +105,24 @@\n if reverse:\n address -= (count - 1) * ptrsize\n \n+ # Allow invocation of telescope -f (--frame) to dump frame addresses\n+ if frame:\n+ sp = pwndbg.gdblib.regs.sp\n+ bp = pwndbg.gdblib.regs[pwndbg.gdblib.regs.frame]\n+ if sp > bp:\n+ print(\"Cannot display stack frame because base pointer is below stack pointer\")\n+ return\n+\n+ for page in pwndbg.gdblib.vmmap.get():\n+ if sp in page and bp not in page:\n+ print(\n+ \"Cannot display stack frame because base pointer is not on the same page with stack pointer\"\n+ )\n+ return\n+\n+ address = sp\n+ count = int((bp - sp) / ptrsize) + 1\n+\n # Allow invocation of \"telescope a b\" to dump all bytes from A to B\n if int(address) <= int(count):\n # adjust count if it is an address. use ceil division as count is number of\n", "issue": "Add `telescope --frame`\nIt'd be nice to have a `telescope` command that prints out the stack frame, from `rsp` to `rbp`. 
Obviously this only works if the program actually uses a frame pointer.\r\n\r\nIt would be equivalent to this:\r\n```\r\npwndbg> p ($rbp-$rsp)/8 + 1\r\n$5 = 9\r\npwndbg> telescope $rsp $$\r\n00:0000\u2502 rsp 0x7ffe5f4951a0 \u25c2\u2014 0x300000001\r\n01:0008\u2502 0x7ffe5f4951a8 \u2014\u25b8 0x7ffe5f495220 \u25c2\u2014 0x170c94ca0\r\n02:0010\u2502 0x7ffe5f4951b0 \u2014\u25b8 0x7f6870c96168 \u2014\u25b8 0x563e09600000 \u25c2\u2014 jg 0x563e09600047\r\n03:0018\u2502 0x7ffe5f4951b8 \u25c2\u2014 0xf0\r\n04:0020\u2502 rsi 0x7ffe5f4951c0 \u25c2\u2014 0xb1ed2074ada5ce5\r\n05:0028\u2502 0x7ffe5f4951c8 \u25c2\u2014 0xda37756c736484c1\r\n06:0030\u2502 0x7ffe5f4951d0 \u2014\u25b8 0x7ffe5f4951fe \u25c2\u2014 0x563e096013e00000\r\n07:0038\u2502 0x7ffe5f4951d8 \u25c2\u2014 0x56657596c3d91600\r\n08:0040\u2502 rbp 0x7ffe5f4951e0 \u2014\u25b8 0x7ffe5f495200 \u2014\u25b8 0x563e096013e0 \u25c2\u2014 push r15\r\n```\n", "before_files": [{"content": "\"\"\"\nPrints out pointer chains starting at some address in memory.\n\nGenerally used to print out the stack or register values.\n\"\"\"\n\nfrom __future__ import annotations\n\nimport argparse\nimport collections\nimport math\n\nimport pwndbg.chain\nimport pwndbg.color.telescope as T\nimport pwndbg.commands\nimport pwndbg.gdblib.arch\nimport pwndbg.gdblib.config\nimport pwndbg.gdblib.memory\nimport pwndbg.gdblib.regs\nimport pwndbg.gdblib.typeinfo\nfrom pwndbg.color import theme\nfrom pwndbg.commands import CommandCategory\n\ntelescope_lines = pwndbg.gdblib.config.add_param(\n \"telescope-lines\", 8, \"number of lines to printed by the telescope command\"\n)\nskip_repeating_values = pwndbg.gdblib.config.add_param(\n \"telescope-skip-repeating-val\",\n True,\n \"whether to skip repeating values of the telescope command\",\n)\nskip_repeating_values_minimum = pwndbg.gdblib.config.add_param(\n \"telescope-skip-repeating-val-minimum\",\n 3,\n \"minimum amount of repeated values before skipping lines\",\n)\n\noffset_separator = theme.add_param(\n \"telescope-offset-separator\", \"\u2502\", \"offset separator of the telescope command\"\n)\noffset_delimiter = theme.add_param(\n \"telescope-offset-delimiter\", \":\", \"offset delimiter of the telescope command\"\n)\nrepeating_marker = theme.add_param(\n \"telescope-repeating-marker\", \"... 
\u2193\", \"repeating values marker of the telescope command\"\n)\n\n\nparser = argparse.ArgumentParser(\n description=\"Recursively dereferences pointers starting at the specified address.\"\n)\nparser.add_argument(\n \"-r\",\n \"--reverse\",\n dest=\"reverse\",\n action=\"store_true\",\n default=False,\n help=\"Show <count> previous addresses instead of next ones\",\n)\n\nparser.add_argument(\n \"address\", nargs=\"?\", default=\"$sp\", type=int, help=\"The address to telescope at.\"\n)\n\nparser.add_argument(\n \"count\", nargs=\"?\", default=telescope_lines, type=int, help=\"The number of lines to show.\"\n)\n\n\[email protected](parser, category=CommandCategory.MEMORY)\[email protected]\ndef telescope(address=None, count=telescope_lines, to_string=False, reverse=False):\n \"\"\"\n Recursively dereferences pointers starting at the specified address\n ($sp by default)\n \"\"\"\n ptrsize = pwndbg.gdblib.typeinfo.ptrsize\n if telescope.repeat:\n address = telescope.last_address + ptrsize\n telescope.offset += 1\n else:\n telescope.offset = 0\n\n address = int(address if address else pwndbg.gdblib.regs.sp) & pwndbg.gdblib.arch.ptrmask\n input_address = address\n count = max(int(count), 1) & pwndbg.gdblib.arch.ptrmask\n delimiter = T.delimiter(offset_delimiter)\n separator = T.separator(offset_separator)\n\n # Allow invocation of \"telescope 20\" to dump 20 bytes at the stack pointer\n if address < pwndbg.gdblib.memory.MMAP_MIN_ADDR and not pwndbg.gdblib.memory.peek(address):\n count = address\n address = pwndbg.gdblib.regs.sp\n\n # Allow invocation of telescope -r to dump previous addresses\n if reverse:\n address -= (count - 1) * ptrsize\n\n # Allow invocation of \"telescope a b\" to dump all bytes from A to B\n if int(address) <= int(count):\n # adjust count if it is an address. 
use ceil division as count is number of\n # ptrsize values and we don't want to strip out a value if dest is unaligned\n count -= address\n count = max(math.ceil(count / ptrsize), 1)\n\n reg_values = collections.defaultdict(lambda: [])\n for reg in pwndbg.gdblib.regs.common:\n reg_values[pwndbg.gdblib.regs[reg]].append(reg)\n\n start = address\n stop = address + (count * ptrsize)\n step = ptrsize\n\n # Find all registers which show up in the trace\n regs = {}\n for i in range(start, stop, step):\n values = list(reg_values[i])\n\n for width in range(1, pwndbg.gdblib.arch.ptrsize):\n values.extend(\"%s-%i\" % (r, width) for r in reg_values[i + width])\n\n regs[i] = \" \".join(values)\n\n # Find the longest set of register information\n if regs:\n longest_regs = max(map(len, regs.values()))\n else:\n longest_regs = 0\n\n # Print everything out\n result = []\n last = None\n collapse_buffer: list[str] = []\n skipped_padding = (\n 2\n + len(offset_delimiter)\n + 4\n + len(offset_separator)\n + 1\n + longest_regs\n + 1\n - len(repeating_marker)\n )\n\n # Collapse repeating values exceeding minimum delta.\n def collapse_repeating_values() -> None:\n # The first line was already printed, hence increment by 1\n if collapse_buffer and len(collapse_buffer) + 1 >= skip_repeating_values_minimum:\n result.append(\n T.repeating_marker(\n \"%s%s%i skipped\"\n % (repeating_marker, \" \" * skipped_padding, len(collapse_buffer))\n )\n )\n else:\n result.extend(collapse_buffer)\n collapse_buffer.clear()\n\n for i, addr in enumerate(range(start, stop, step)):\n if not pwndbg.gdblib.memory.peek(addr):\n collapse_repeating_values()\n result.append(\"<Could not read memory at %#x>\" % addr)\n break\n\n line = \" \".join(\n (\n T.offset(\n \"%02x%s%04x%s\"\n % (\n i + telescope.offset,\n delimiter,\n addr - start + (telescope.offset * ptrsize),\n separator,\n )\n ),\n T.register(regs[addr].ljust(longest_regs)),\n pwndbg.chain.format(addr),\n )\n )\n\n # Buffer repeating values.\n if skip_repeating_values:\n value = pwndbg.gdblib.memory.pvoid(addr)\n if last == value and addr != input_address:\n collapse_buffer.append(line)\n continue\n collapse_repeating_values()\n last = value\n\n result.append(line)\n\n collapse_repeating_values()\n telescope.offset += i\n telescope.last_address = addr\n\n if not to_string:\n print(\"\\n\".join(result))\n\n return result\n\n\nparser = argparse.ArgumentParser(\n description=\"Dereferences on stack data with specified count and offset.\"\n)\nparser.add_argument(\"count\", nargs=\"?\", default=8, type=int, help=\"number of element to dump\")\nparser.add_argument(\n \"offset\",\n nargs=\"?\",\n default=0,\n type=int,\n help=\"Element offset from $sp (support negative offset)\",\n)\n\n\[email protected](parser, category=CommandCategory.STACK)\[email protected]\ndef stack(count, offset) -> None:\n ptrsize = pwndbg.gdblib.typeinfo.ptrsize\n telescope.repeat = stack.repeat\n telescope(address=pwndbg.gdblib.regs.sp + offset * ptrsize, count=count)\n\n\ntelescope.last_address = 0\ntelescope.offset = 0\n", "path": "pwndbg/commands/telescope.py"}]} | 3,281 | 505 |
gh_patches_debug_57588 | rasdani/github-patches | git_diff | joke2k__faker-1043 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BBAN for en_GB too short
* Faker version: v2.0.3
* OS: linux
The numeric part of the en_GB BBAN needs to be 14 digits long, but it is currently only 13 digits, so the result fails further validation.
### Steps to reproduce
Invoke `fake.iban()` or `fake.bban()` with the en_GB locale; an IBAN or BBAN with one digit missing is returned.
### Expected behavior
GB IBANs should be 22 characters long: https://www.xe.com/ibancalculator/sample/?ibancountry=united kingdom
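The length arithmetic behind that, as a small sketch (it assumes faker's usual placeholder semantics for `bban_format`, where `?` becomes a letter and `#` a digit):

```python
bban_format = "????" + "#" * 14                  # 4 bank-code letters + 14 digits = 18 chars
iban_length = len("GB") + 2 + len(bban_format)   # country code + 2 check digits + BBAN
assert iban_length == 22
```

With only 13 `#` placeholders, as in the current provider, the same sum comes out to 21.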
</issue>
<code>
[start of faker/providers/bank/en_GB/__init__.py]
1 from .. import Provider as BankProvider
2
3
4 class Provider(BankProvider):
5 bban_format = '????#############'
6 country_code = 'GB'
7
[end of faker/providers/bank/en_GB/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/faker/providers/bank/en_GB/__init__.py b/faker/providers/bank/en_GB/__init__.py
--- a/faker/providers/bank/en_GB/__init__.py
+++ b/faker/providers/bank/en_GB/__init__.py
@@ -2,5 +2,5 @@
class Provider(BankProvider):
- bban_format = '????#############'
+ bban_format = '????##############'
country_code = 'GB'
| {"golden_diff": "diff --git a/faker/providers/bank/en_GB/__init__.py b/faker/providers/bank/en_GB/__init__.py\n--- a/faker/providers/bank/en_GB/__init__.py\n+++ b/faker/providers/bank/en_GB/__init__.py\n@@ -2,5 +2,5 @@\n \n \n class Provider(BankProvider):\n- bban_format = '????#############'\n+ bban_format = '????##############'\n country_code = 'GB'\n", "issue": "BBAN for en_GB too short\n* Faker version: v2.0.3\r\n* OS: linux\r\n\r\nNumeric part of the en_GB BBAN needs to be 14 digits long, it currently only returns 13, failing further validation.\r\n\r\n### Steps to reproduce\r\n\r\nInvoke `fake.iban()` or `fake.bban()` with the en_GB locale, an IBAN or BBAN with 1 digit missing is returned.\r\n\r\n### Expected behavior\r\n\r\nGB ibans should be 22 chars long: https://www.xe.com/ibancalculator/sample/?ibancountry=united kingdom\r\n\r\n\n", "before_files": [{"content": "from .. import Provider as BankProvider\n\n\nclass Provider(BankProvider):\n bban_format = '????#############'\n country_code = 'GB'\n", "path": "faker/providers/bank/en_GB/__init__.py"}]} | 713 | 102 |
gh_patches_debug_10673 | rasdani/github-patches | git_diff | ultralytics__yolov5-3973 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
model ensembling isn't working
## 🐛 Bug
When I run detection on images using model ensembling (passing multiple weight files to `--weights`), it doesn't work.
## To Reproduce (REQUIRED)
Input:
```
!git clone https://github.com/ultralytics/yolov5 # clone repo
%cd yolov5
%pip install -qr requirements.txt # install dependencies
import torch
from IPython.display import Image, clear_output # to display images
import urllib.request, urllib.error
clear_output()
print(f"Setup complete. Using torch {torch.__version__} ({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})")
def download_file(url, dst_path):
try:
with urllib.request.urlopen(url) as web_file:
data = web_file.read()
with open(dst_path, mode='wb') as local_file:
local_file.write(data)
except urllib.error.URLError as e:
print(e)
download_file('https://user-images.githubusercontent.com/26833433/124489091-ea4f9a00-ddb0-11eb-8ef1-d6f335c97f6f.jpg', "zidane.jpg")
!python detect.py --weights yolov5x.pt yolov5s.pt --img 640 --conf 0.25 --source data/images/
Image(filename='zidane.jpg', width=600)
```
Output:
```
image 1/2 /content/yolov5/yolov5/yolov5/yolov5/data/images/bus.jpg: Traceback (most recent call last):
File "detect.py", line 228, in <module>
main(opt)
File "detect.py", line 223, in main
run(**vars(opt))
File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "detect.py", line 106, in run
visualize=increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False)[0]
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'visualize'
```
## Expected behavior
Detection should run correctly on the images when using model ensembling.
## Environment
google colab
https://colab.research.google.com/drive/1rXRjuFTiHdJwbxhSIY8EywwQMrg3zCbV?usp=sharing
- OS: [e.g. Ubuntu]
- GPU Tesla P100
## Additional context
I'm trying to fix it now; I will probably open a pull request within a day or so.
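The mismatch is visible in `models/experimental.py` below: with two weight files, `attempt_load()` returns an `Ensemble`, and `detect.py` calls the model with `augment=` and `visualize=` keyword arguments, but `Ensemble.forward()` only declares `x` and `augment`. One possible direction (a hedged sketch, not necessarily the change that will land upstream) is to let the ensemble accept and pass through the same arguments a single `Model` accepts, assuming `Model.forward()` takes `augment`, `profile` and `visualize` as `detect.py` implies:

```python
import torch
import torch.nn as nn


class Ensemble(nn.ModuleList):
    # Ensemble of models
    def __init__(self):
        super(Ensemble, self).__init__()

    def forward(self, x, augment=False, profile=False, visualize=False):
        # Forward the extra inference flags to every member model instead of dropping them.
        y = [module(x, augment, profile, visualize)[0] for module in self]
        y = torch.cat(y, 1)  # nms ensemble
        return y, None  # inference, train output
```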
</issue>
<code>
[start of models/experimental.py]
1 # YOLOv5 experimental modules
2
3 import numpy as np
4 import torch
5 import torch.nn as nn
6
7 from models.common import Conv, DWConv
8 from utils.google_utils import attempt_download
9
10
11 class CrossConv(nn.Module):
12 # Cross Convolution Downsample
13 def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
14 # ch_in, ch_out, kernel, stride, groups, expansion, shortcut
15 super(CrossConv, self).__init__()
16 c_ = int(c2 * e) # hidden channels
17 self.cv1 = Conv(c1, c_, (1, k), (1, s))
18 self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
19 self.add = shortcut and c1 == c2
20
21 def forward(self, x):
22 return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
23
24
25 class Sum(nn.Module):
26 # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070
27 def __init__(self, n, weight=False): # n: number of inputs
28 super(Sum, self).__init__()
29 self.weight = weight # apply weights boolean
30 self.iter = range(n - 1) # iter object
31 if weight:
32 self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True) # layer weights
33
34 def forward(self, x):
35 y = x[0] # no weight
36 if self.weight:
37 w = torch.sigmoid(self.w) * 2
38 for i in self.iter:
39 y = y + x[i + 1] * w[i]
40 else:
41 for i in self.iter:
42 y = y + x[i + 1]
43 return y
44
45
46 class GhostConv(nn.Module):
47 # Ghost Convolution https://github.com/huawei-noah/ghostnet
48 def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups
49 super(GhostConv, self).__init__()
50 c_ = c2 // 2 # hidden channels
51 self.cv1 = Conv(c1, c_, k, s, None, g, act)
52 self.cv2 = Conv(c_, c_, 5, 1, None, c_, act)
53
54 def forward(self, x):
55 y = self.cv1(x)
56 return torch.cat([y, self.cv2(y)], 1)
57
58
59 class GhostBottleneck(nn.Module):
60 # Ghost Bottleneck https://github.com/huawei-noah/ghostnet
61 def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride
62 super(GhostBottleneck, self).__init__()
63 c_ = c2 // 2
64 self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw
65 DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw
66 GhostConv(c_, c2, 1, 1, act=False)) # pw-linear
67 self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False),
68 Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity()
69
70 def forward(self, x):
71 return self.conv(x) + self.shortcut(x)
72
73
74 class MixConv2d(nn.Module):
75 # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595
76 def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True):
77 super(MixConv2d, self).__init__()
78 groups = len(k)
79 if equal_ch: # equal c_ per group
80 i = torch.linspace(0, groups - 1E-6, c2).floor() # c2 indices
81 c_ = [(i == g).sum() for g in range(groups)] # intermediate channels
82 else: # equal weight.numel() per group
83 b = [c2] + [0] * groups
84 a = np.eye(groups + 1, groups, k=-1)
85 a -= np.roll(a, 1, axis=1)
86 a *= np.array(k) ** 2
87 a[0] = 1
88 c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b
89
90 self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)])
91 self.bn = nn.BatchNorm2d(c2)
92 self.act = nn.LeakyReLU(0.1, inplace=True)
93
94 def forward(self, x):
95 return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))
96
97
98 class Ensemble(nn.ModuleList):
99 # Ensemble of models
100 def __init__(self):
101 super(Ensemble, self).__init__()
102
103 def forward(self, x, augment=False):
104 y = []
105 for module in self:
106 y.append(module(x, augment)[0])
107 # y = torch.stack(y).max(0)[0] # max ensemble
108 # y = torch.stack(y).mean(0) # mean ensemble
109 y = torch.cat(y, 1) # nms ensemble
110 return y, None # inference, train output
111
112
113 def attempt_load(weights, map_location=None, inplace=True):
114 from models.yolo import Detect, Model
115
116 # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
117 model = Ensemble()
118 for w in weights if isinstance(weights, list) else [weights]:
119 ckpt = torch.load(attempt_download(w), map_location=map_location) # load
120 model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval()) # FP32 model
121
122 # Compatibility updates
123 for m in model.modules():
124 if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model]:
125 m.inplace = inplace # pytorch 1.7.0 compatibility
126 elif type(m) is Conv:
127 m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility
128
129 if len(model) == 1:
130 return model[-1] # return model
131 else:
132 print(f'Ensemble created with {weights}\n')
133 for k in ['names']:
134 setattr(model, k, getattr(model[-1], k))
135 model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride # max stride
136 return model # return ensemble
137
[end of models/experimental.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/models/experimental.py b/models/experimental.py
--- a/models/experimental.py
+++ b/models/experimental.py
@@ -100,10 +100,10 @@
def __init__(self):
super(Ensemble, self).__init__()
- def forward(self, x, augment=False):
+ def forward(self, x, augment=False, profile=False, visualize=False):
y = []
for module in self:
- y.append(module(x, augment)[0])
+ y.append(module(x, augment, profile, visualize)[0])
# y = torch.stack(y).max(0)[0] # max ensemble
# y = torch.stack(y).mean(0) # mean ensemble
y = torch.cat(y, 1) # nms ensemble
| {"golden_diff": "diff --git a/models/experimental.py b/models/experimental.py\n--- a/models/experimental.py\n+++ b/models/experimental.py\n@@ -100,10 +100,10 @@\n def __init__(self):\n super(Ensemble, self).__init__()\n \n- def forward(self, x, augment=False):\n+ def forward(self, x, augment=False, profile=False, visualize=False):\n y = []\n for module in self:\n- y.append(module(x, augment)[0])\n+ y.append(module(x, augment, profile, visualize)[0])\n # y = torch.stack(y).max(0)[0] # max ensemble\n # y = torch.stack(y).mean(0) # mean ensemble\n y = torch.cat(y, 1) # nms ensemble\n", "issue": "model ensembling isn't working\n## \ud83d\udc1b Bug\r\nWhen I detect some image by using ensembling, it doesn't work.\r\n\r\n## To Reproduce (REQUIRED)\r\nInput:\r\n```\r\n!git clone https://github.com/ultralytics/yolov5 # clone repo\r\n%cd yolov5\r\n%pip install -qr requirements.txt # install dependencies\r\n\r\nimport torch\r\nfrom IPython.display import Image, clear_output # to display images\r\nimport urllib\r\n\r\nclear_output()\r\nprint(f\"Setup complete. Using torch {torch.__version__} ({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})\")\r\n\r\ndef download_file(url, dst_path):\r\n try:\r\n with urllib.request.urlopen(url) as web_file:\r\n data = web_file.read()\r\n with open(dst_path, mode='wb') as local_file:\r\n local_file.write(data)\r\n except urllib.error.URLError as e:\r\n print(e)\r\n\r\ndownload_file('https://user-images.githubusercontent.com/26833433/124489091-ea4f9a00-ddb0-11eb-8ef1-d6f335c97f6f.jpg', \"zidane.jpg\")\r\n\r\n!python detect.py --weights yolov5x.pt yolov5s.pt --img 640 --conf 0.25 --source data/images/\r\nImage(filename='zidane.jpg', width=600)\r\n```\r\n\r\nOutput:\r\n```\r\nimage 1/2 /content/yolov5/yolov5/yolov5/yolov5/data/images/bus.jpg: Traceback (most recent call last):\r\n File \"detect.py\", line 228, in <module>\r\n main(opt)\r\n File \"detect.py\", line 223, in main\r\n run(**vars(opt))\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py\", line 28, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"detect.py\", line 106, in run\r\n visualize=increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False)[0]\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\nTypeError: forward() got an unexpected keyword argument 'visualize'\r\n```\r\n<img width=\"1093\" alt=\"\u30b9\u30af\u30ea\u30fc\u30f3\u30b7\u30e7\u30c3\u30c8 2021-07-12 1 31 10\" src=\"https://user-images.githubusercontent.com/33506506/125202974-31f28c00-e2b1-11eb-8d50-ff518011c32e.png\">\r\n\r\n\r\n## Expected behavior\r\ndetect image with ensembling correctly.\r\n\r\n## Environment\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\ngoogle colab\r\n\r\nhttps://colab.research.google.com/drive/1rXRjuFTiHdJwbxhSIY8EywwQMrg3zCbV?usp=sharing\r\n\r\n - OS: [e.g. Ubuntu]\r\n - GPU Tesla P100\r\n\r\n## Additional context\r\nI'm trying to fix it now. 
might be one day from now I will make pull request\r\n\n", "before_files": [{"content": "# YOLOv5 experimental modules\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\n\nfrom models.common import Conv, DWConv\nfrom utils.google_utils import attempt_download\n\n\nclass CrossConv(nn.Module):\n # Cross Convolution Downsample\n def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):\n # ch_in, ch_out, kernel, stride, groups, expansion, shortcut\n super(CrossConv, self).__init__()\n c_ = int(c2 * e) # hidden channels\n self.cv1 = Conv(c1, c_, (1, k), (1, s))\n self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)\n self.add = shortcut and c1 == c2\n\n def forward(self, x):\n return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))\n\n\nclass Sum(nn.Module):\n # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070\n def __init__(self, n, weight=False): # n: number of inputs\n super(Sum, self).__init__()\n self.weight = weight # apply weights boolean\n self.iter = range(n - 1) # iter object\n if weight:\n self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True) # layer weights\n\n def forward(self, x):\n y = x[0] # no weight\n if self.weight:\n w = torch.sigmoid(self.w) * 2\n for i in self.iter:\n y = y + x[i + 1] * w[i]\n else:\n for i in self.iter:\n y = y + x[i + 1]\n return y\n\n\nclass GhostConv(nn.Module):\n # Ghost Convolution https://github.com/huawei-noah/ghostnet\n def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups\n super(GhostConv, self).__init__()\n c_ = c2 // 2 # hidden channels\n self.cv1 = Conv(c1, c_, k, s, None, g, act)\n self.cv2 = Conv(c_, c_, 5, 1, None, c_, act)\n\n def forward(self, x):\n y = self.cv1(x)\n return torch.cat([y, self.cv2(y)], 1)\n\n\nclass GhostBottleneck(nn.Module):\n # Ghost Bottleneck https://github.com/huawei-noah/ghostnet\n def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride\n super(GhostBottleneck, self).__init__()\n c_ = c2 // 2\n self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw\n DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw\n GhostConv(c_, c2, 1, 1, act=False)) # pw-linear\n self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False),\n Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity()\n\n def forward(self, x):\n return self.conv(x) + self.shortcut(x)\n\n\nclass MixConv2d(nn.Module):\n # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595\n def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True):\n super(MixConv2d, self).__init__()\n groups = len(k)\n if equal_ch: # equal c_ per group\n i = torch.linspace(0, groups - 1E-6, c2).floor() # c2 indices\n c_ = [(i == g).sum() for g in range(groups)] # intermediate channels\n else: # equal weight.numel() per group\n b = [c2] + [0] * groups\n a = np.eye(groups + 1, groups, k=-1)\n a -= np.roll(a, 1, axis=1)\n a *= np.array(k) ** 2\n a[0] = 1\n c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b\n\n self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)])\n self.bn = nn.BatchNorm2d(c2)\n self.act = nn.LeakyReLU(0.1, inplace=True)\n\n def forward(self, x):\n return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))\n\n\nclass Ensemble(nn.ModuleList):\n # Ensemble of models\n def __init__(self):\n super(Ensemble, self).__init__()\n\n def forward(self, x, augment=False):\n y = []\n for module in self:\n y.append(module(x, augment)[0])\n # y = 
torch.stack(y).max(0)[0] # max ensemble\n # y = torch.stack(y).mean(0) # mean ensemble\n y = torch.cat(y, 1) # nms ensemble\n return y, None # inference, train output\n\n\ndef attempt_load(weights, map_location=None, inplace=True):\n from models.yolo import Detect, Model\n\n # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a\n model = Ensemble()\n for w in weights if isinstance(weights, list) else [weights]:\n ckpt = torch.load(attempt_download(w), map_location=map_location) # load\n model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval()) # FP32 model\n\n # Compatibility updates\n for m in model.modules():\n if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model]:\n m.inplace = inplace # pytorch 1.7.0 compatibility\n elif type(m) is Conv:\n m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility\n\n if len(model) == 1:\n return model[-1] # return model\n else:\n print(f'Ensemble created with {weights}\\n')\n for k in ['names']:\n setattr(model, k, getattr(model[-1], k))\n model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride # max stride\n return model # return ensemble\n", "path": "models/experimental.py"}]} | 3,190 | 178 |
gh_patches_debug_60487 | rasdani/github-patches | git_diff | mars-project__mars-284 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Fuse operand's sparse value is wrong
**Describe the bug**
A fuse operand's sparseness should be the same as the tail node's, but it is not set correctly now.
**To Reproduce**
```python
In [1]: import scipy.sparse as sps
In [2]: import mars.tensor as mt
In [3]: data = sps.rand(10, 10, density=0.05)
In [4]: a = mt.tensor(data, chunk_size=3)
In [5]: b = (a * 2) * 2
In [6]: g = b.build_graph(tiled=True, compose=True)
In [7]: list(g)[0].op.sparse
Out[7]: False
In [8]: list(g)[0].op
Out[8]: <mars.tensor.expressions.fuse.core.TensorFuseChunk at 0xa208b7048>
In [9]: list(g)[0].composed[-1].op.sparse
Out[9]: True
```
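To state the expectation as code: a hypothetical helper (not taken from the Mars code base) that derives a fused chunk's sparseness from its composed chunks would simply read the tail operand's flag.

```python
def fused_sparse(composed_chunks):
    # A fused chunk produces the tail chunk's output, so it should report
    # the tail operand's sparseness instead of silently defaulting to False.
    return composed_chunks[-1].op.sparse
```

With the graph built above, `fused_sparse(list(g)[0].composed)` would return `True`, matching `Out[9]` rather than the `False` seen in `Out[7]`.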
</issue>
<code>
[start of mars/tensor/expressions/fuse/core.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 # Copyright 1999-2018 Alibaba Group Holding Ltd.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 from .... import operands
18 from ....tiles import NotSupportTile
19 from ..core import TensorOperandMixin
20
21
22 class TensorFuseChunk(operands.Fuse, TensorOperandMixin):
23 def __init__(self, dtype=None, **kw):
24 super(TensorFuseChunk, self).__init__(_dtype=dtype, **kw)
25
26 def calc_shape(self, *inputs_shape):
27 in_shapes = inputs_shape
28 out_shape = None
29
30 # TODO: the logic will be changed when fusion is not only straight line
31 for c in self.outputs[0].composed:
32 out_shape = c.op.calc_shape(*in_shapes)
33 in_shapes = [out_shape]
34 return out_shape
35
36 @classmethod
37 def tile(cls, op):
38 raise NotSupportTile('TensorFuseChunk is a chunk operand which does not support tile')
39
40
41 class TensorFuseChunkMixin(TensorOperandMixin):
42 __slots__ = ()
43
44 @classmethod
45 def tile(cls, op):
46 raise NotSupportTile('TensorFuseChunk is a chunk operand which does not support tile')
47
48 def __call__(self, fuse_chunks):
49 head_chunk = fuse_chunks[0]
50 tail_chunk = fuse_chunks[-1]
51 setattr(self, '_operands', [c.op for c in fuse_chunks])
52 return self.new_chunk(head_chunk.inputs, tail_chunk.shape,
53 _composed=fuse_chunks, _key=tail_chunk.key)
54
[end of mars/tensor/expressions/fuse/core.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mars/tensor/expressions/fuse/core.py b/mars/tensor/expressions/fuse/core.py
--- a/mars/tensor/expressions/fuse/core.py
+++ b/mars/tensor/expressions/fuse/core.py
@@ -20,8 +20,8 @@
class TensorFuseChunk(operands.Fuse, TensorOperandMixin):
- def __init__(self, dtype=None, **kw):
- super(TensorFuseChunk, self).__init__(_dtype=dtype, **kw)
+ def __init__(self, dtype=None, sparse=False, **kw):
+ super(TensorFuseChunk, self).__init__(_dtype=dtype, _sparse=sparse, **kw)
def calc_shape(self, *inputs_shape):
in_shapes = inputs_shape
| {"golden_diff": "diff --git a/mars/tensor/expressions/fuse/core.py b/mars/tensor/expressions/fuse/core.py\n--- a/mars/tensor/expressions/fuse/core.py\n+++ b/mars/tensor/expressions/fuse/core.py\n@@ -20,8 +20,8 @@\n \n \n class TensorFuseChunk(operands.Fuse, TensorOperandMixin):\n- def __init__(self, dtype=None, **kw):\n- super(TensorFuseChunk, self).__init__(_dtype=dtype, **kw)\n+ def __init__(self, dtype=None, sparse=False, **kw):\n+ super(TensorFuseChunk, self).__init__(_dtype=dtype, _sparse=sparse, **kw)\n \n def calc_shape(self, *inputs_shape):\n in_shapes = inputs_shape\n", "issue": "[BUG] Fuse operand's sparse value is wrong\n<!--\r\nThank you for your contribution!\r\n\r\nPlease review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.\r\n-->\r\n\r\n**Describe the bug**\r\nA fuse operand's sparseness should be the same as tail node's, it is not set correctly now.\r\n\r\n**To Reproduce**\r\n``` Python\r\nIn [1]: import scipy.sparse as sps \r\n\r\nIn [2]: import mars.tensor as mt \r\n\r\nIn [3]: data = sps.rand(10, 10, density=0.05) \r\n\r\nIn [4]: a = mt.tensor(data, chunk_size=3) \r\n\r\nIn [5]: b = (a * 2) * 2 \r\n\r\nIn [6]: g = b.build_graph(tiled=True, compose=True) \r\n\r\nIn [7]: list(g)[0].op.sparse \r\nOut[7]: False\r\n\r\nIn [8]: list(g)[0].op \r\nOut[8]: <mars.tensor.expressions.fuse.core.TensorFuseChunk at 0xa208b7048>\r\n\r\nIn [9]: list(g)[0].composed[-1].op.sparse \r\nOut[9]: True\r\n```\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .... import operands\nfrom ....tiles import NotSupportTile\nfrom ..core import TensorOperandMixin\n\n\nclass TensorFuseChunk(operands.Fuse, TensorOperandMixin):\n def __init__(self, dtype=None, **kw):\n super(TensorFuseChunk, self).__init__(_dtype=dtype, **kw)\n\n def calc_shape(self, *inputs_shape):\n in_shapes = inputs_shape\n out_shape = None\n\n # TODO: the logic will be changed when fusion is not only straight line\n for c in self.outputs[0].composed:\n out_shape = c.op.calc_shape(*in_shapes)\n in_shapes = [out_shape]\n return out_shape\n\n @classmethod\n def tile(cls, op):\n raise NotSupportTile('TensorFuseChunk is a chunk operand which does not support tile')\n\n\nclass TensorFuseChunkMixin(TensorOperandMixin):\n __slots__ = ()\n\n @classmethod\n def tile(cls, op):\n raise NotSupportTile('TensorFuseChunk is a chunk operand which does not support tile')\n\n def __call__(self, fuse_chunks):\n head_chunk = fuse_chunks[0]\n tail_chunk = fuse_chunks[-1]\n setattr(self, '_operands', [c.op for c in fuse_chunks])\n return self.new_chunk(head_chunk.inputs, tail_chunk.shape,\n _composed=fuse_chunks, _key=tail_chunk.key)\n", "path": "mars/tensor/expressions/fuse/core.py"}]} | 1,373 | 176 |
gh_patches_debug_11091 | rasdani/github-patches | git_diff | chainer__chainer-7202 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`eps` is doubly added to variance in inference of `L.BatchRenormalization`
- `running_var` accumulates variances that already include `eps`
- `train=False` mode uses `running_var + eps`
### Conditions
I tested with Chainer versions: `3.0.0`, `4.5.0`, `5.4.0`, `7.0.0a1`
### Code to reproduce
```python
import chainer
import numpy as np
np.random.seed(0)
brn = chainer.links.BatchRenormalization(3, eps=1.)
for _ in range(1000):
x = np.random.randn(1000, 3).astype('f')
brn(x)
x = np.random.randn(1000, 3).astype('f')
y = brn(x)
print(y.array.var(axis=0))
with chainer.using_config('train', False):
y = brn(x)
print(y.array.var(axis=0))
```
### Error messages, stack traces, or logs
```
[0.51281106 0.49953052 0.48790243]
[0.3506052 0.33283928 0.31892547]
```
Here, the "normalized" variance is around 1/(1+eps) if train, whereas it's around 1/(1+2 eps) otherwise.
</issue>
<code>
[start of chainer/functions/normalization/batch_renormalization.py]
1 import warnings
2
3 import numpy
4
5 from chainer import backend
6 from chainer.backends import cuda
7 from chainer import configuration
8 from chainer import function
9 from chainer.functions.normalization import batch_normalization
10 from chainer.utils import type_check
11
12
13 def _xhat(x, mean, std, expander):
14 x_mu = x - mean[expander]
15 x_mu /= std[expander]
16 return x_mu
17
18
19 class BatchRenormalizationFunction(function.Function):
20
21 def __init__(self, eps=2e-5, mean=None, var=None, decay=0.9,
22 rmax=1, dmax=0, update_statistics=True):
23 self._running_mean = mean
24 self._running_var = var
25 self.rmax = rmax
26 self.dmax = dmax
27 self.r = None
28 self.update_statistics = update_statistics
29
30 self.eps = eps
31 self.decay = decay
32
33 def _warn_accessing_property(self):
34 warnings.warn(
35 'The attributes of BatchRenormalizationFunction '
36 'are deprecated. '
37 'Consider setting update_statistics=True to '
38 'batch_renormalization to update running statistics.',
39 DeprecationWarning)
40
41 @property
42 def running_mean(self):
43 self._warn_accessing_property()
44 return self._running_mean
45
46 @property
47 def running_var(self):
48 self._warn_accessing_property()
49 return self._running_var
50
51 def check_type_forward(self, in_types):
52 type_check.expect(in_types.size() == 3)
53 x_type, gamma_type, beta_type = in_types
54 M = type_check.eval(gamma_type.ndim)
55 type_check.expect(
56 x_type.dtype.kind == 'f',
57 x_type.ndim >= gamma_type.ndim + 1,
58 x_type.shape[1:1 + M] == gamma_type.shape,
59 # TODO(tkerola): Check shape
60 gamma_type.dtype.kind == 'f',
61 gamma_type.dtype == beta_type.dtype,
62 gamma_type.shape == beta_type.shape,
63 )
64
65 def forward(self, inputs):
66 xp = backend.get_array_module(*inputs)
67 x, gamma, beta = inputs
68
69 # Note: we must be in train mode.
70 assert configuration.config.train
71
72 head_ndim = gamma.ndim + 1
73 expander = (None, Ellipsis) + (None,) * (x.ndim - head_ndim)
74
75 # NOTE(tommi): cuDNN is not used since it does not support
76 # batch renormalization
77 axis = (0,) + tuple(range(head_ndim, x.ndim))
78 mean = x.mean(axis=axis, dtype=gamma.dtype)
79 var = x.var(axis=axis, dtype=gamma.dtype) + self.eps
80 self.std = xp.sqrt(var, dtype=var.dtype)
81
82 running_sigma = xp.sqrt(self._running_var + self.eps,
83 dtype=self._running_mean.dtype)
84 self.r = xp.clip(self.std / running_sigma,
85 1.0 / self.rmax, self.rmax)
86 d = xp.clip(
87 (mean - self._running_mean) / running_sigma,
88 -self.dmax, self.dmax)
89
90 gamma = gamma[expander]
91 beta = beta[expander]
92
93 if xp is numpy:
94 self.x_hat = _xhat(x, mean, self.std, expander)
95 self.x_hat_renorm = self.x_hat * self.r[expander] + d[expander]
96 y = gamma * self.x_hat_renorm
97 y += beta
98 y = y.astype(dtype=x.dtype)
99 else:
100 self.x_hat, self.x_hat_renorm, y = cuda.elementwise(
101 'T x, U mean, U std, U gamma, U beta, U r, U d',
102 'U x_hat, U x_hat_renorm, T y',
103 '''
104 x_hat = (x - mean) / std;
105 x_hat_renorm = x_hat * r + d;
106 y = gamma * x_hat_renorm + beta;
107 ''',
108 'brn_fwd')(
109 x, mean[expander], self.std[expander], gamma, beta,
110 self.r[expander], d[expander])
111
112 if self.update_statistics:
113 m = x.size // gamma[expander].size
114 self._running_mean *= self.decay
115 adjust = m / max(m - 1., 1.) # unbiased estimation
116 temp_ar = xp.array(mean)
117 temp_ar *= (1 - self.decay)
118 self._running_mean += temp_ar
119 del temp_ar
120 self._running_var *= self.decay
121 temp_ar = xp.array(var)
122 temp_ar *= (1 - self.decay) * adjust
123 self._running_var += temp_ar
124 del temp_ar
125
126 return y,
127
128 def backward(self, inputs, grad_outputs):
129 x, gamma, _ = inputs
130 gy = grad_outputs[0]
131 head_ndim = gamma.ndim + 1
132 expander = (None, Ellipsis) + (None,) * (x.ndim - head_ndim)
133 m = gamma.dtype.type(x.size // gamma.size)
134 axis = (0,) + tuple(range(head_ndim, x.ndim))
135 xp = backend.get_array_module(x)
136
137 # Note: we must be in train mode.
138 assert configuration.config.train
139 # NOTE(tommi): cuDNN is not used since it does not support
140 # batch renormalization
141 gbeta = gy.sum(axis=axis, dtype=gamma.dtype)
142 ggamma = (gy * self.x_hat_renorm).sum(axis=axis)
143 gsigma_batch = (gy * self.x_hat).sum(axis=axis)
144 if xp is numpy:
145 scale = (self.r * gamma / self.std)[expander]
146 gx = scale * (gy - (self.x_hat * gsigma_batch[expander] +
147 gbeta[expander]) / m)
148 gx = gx.astype(dtype=x.dtype)
149 else:
150 inv_m = numpy.float32(1) / m
151 gx = cuda.elementwise(
152 'T gy, U x_hat, U gamma, U std, U gsigma_batch, U gbeta, \
153 U inv_m, U r',
154 'T gx',
155 'gx = (r * gamma / std) * (gy - (x_hat * gsigma_batch + gbeta) * \
156 inv_m)',
157 'brn_bwd')(
158 gy, self.x_hat, gamma[expander],
159 self.std[expander], gsigma_batch[expander],
160 gbeta[expander], inv_m, self.r[expander])
161 return gx, ggamma, gbeta
162
163
164 def batch_renormalization(x, gamma, beta, rmax, dmax, eps=2e-5,
165 running_mean=None, running_var=None, decay=0.9,
166 update_statistics=False):
167 """Batch renormalization function.
168
169 This is an extension of batch normalization, which ensures that the
170 training and inference models generate the same outputs that depend on
171 individual examples rather than the entire minibatch.
172
173 .. note::
174
175 This function does not perform in-place update to
176 ``running_mean`` and ``running_var`` by default, contrary to
177 :func:`~chainer.functions.batch_normalization`.
178 If the function is called, it will not be possible to access the
179 updated running mean and variance statistics, because they are members
180 of the function object, which cannot be accessed by the caller.
181 If it is desired to update the running statistics, call the function
182 with ``update_statistics=True`` option.
183
184 .. note::
185
186 For the consistency with Batch Normalization, this function
187 intentionally ignores some of the theoretical flaws in Algorithm 1 of
188 the Batch Renormalization paper:
189
190 - ``F.batch_renormalization`` maintains the moving average of variances
191 :math:`\\sigma^2`, while the original paper maintains the moving
192 average of standard deviations :math:`\\sigma`.
193 - ``F.batch_renormalization`` applies Bessel's correction to update the
194 moving average of variances.
195
196 See: `Batch Renormalization: Towards Reducing Minibatch Dependence in
197 Batch-Normalized Models <https://arxiv.org/abs/1702.03275>`_
198
199 .. seealso:: :class:`~chainer.links.BatchRenormalization`
200
201 """
202 if running_mean is None:
203 raise TypeError('running_mean is required')
204 if running_var is None:
205 raise TypeError('running_var is required')
206 return BatchRenormalizationFunction(
207 eps, running_mean, running_var, decay, rmax, dmax, update_statistics
208 )(x, gamma, beta)
209
210
211 def fixed_batch_renormalization(x, gamma, beta, mean, var, eps=2e-5):
212 warnings.warn(
213 'fixed_batch_renormalization is deprecated. '
214 'Use fixed_batch_normalization instead.',
215 DeprecationWarning)
216 with configuration.using_config('train', False):
217 return batch_normalization.fixed_batch_normalization(
218 x, gamma, beta, mean, var, eps
219 )
220
[end of chainer/functions/normalization/batch_renormalization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/functions/normalization/batch_renormalization.py b/chainer/functions/normalization/batch_renormalization.py
--- a/chainer/functions/normalization/batch_renormalization.py
+++ b/chainer/functions/normalization/batch_renormalization.py
@@ -76,8 +76,8 @@
# batch renormalization
axis = (0,) + tuple(range(head_ndim, x.ndim))
mean = x.mean(axis=axis, dtype=gamma.dtype)
- var = x.var(axis=axis, dtype=gamma.dtype) + self.eps
- self.std = xp.sqrt(var, dtype=var.dtype)
+ var = x.var(axis=axis, dtype=gamma.dtype)
+ self.std = xp.sqrt(var + self.eps, dtype=var.dtype)
running_sigma = xp.sqrt(self._running_var + self.eps,
dtype=self._running_mean.dtype)
| {"golden_diff": "diff --git a/chainer/functions/normalization/batch_renormalization.py b/chainer/functions/normalization/batch_renormalization.py\n--- a/chainer/functions/normalization/batch_renormalization.py\n+++ b/chainer/functions/normalization/batch_renormalization.py\n@@ -76,8 +76,8 @@\n # batch renormalization\n axis = (0,) + tuple(range(head_ndim, x.ndim))\n mean = x.mean(axis=axis, dtype=gamma.dtype)\n- var = x.var(axis=axis, dtype=gamma.dtype) + self.eps\n- self.std = xp.sqrt(var, dtype=var.dtype)\n+ var = x.var(axis=axis, dtype=gamma.dtype)\n+ self.std = xp.sqrt(var + self.eps, dtype=var.dtype)\n \n running_sigma = xp.sqrt(self._running_var + self.eps,\n dtype=self._running_mean.dtype)\n", "issue": "`eps` is doubly added to variance in inference of `L.BatchRenormalization`\n- `runninng_var` learns variances with `eps`\r\n- `train=False` mode uses `running_var + eps`\r\n\r\n### Conditions\r\nI tested with Chainer versions: `3.0.0`, `4.5.0`, `5.4.0`, `7.0.0a1`\r\n\r\n### Code to reproduce\r\n```python\r\nimport chainer\r\nimport numpy as np\r\nnp.random.seed(0)\r\n\r\nbrn = chainer.links.BatchRenormalization(3, eps=1.)\r\nfor _ in range(1000):\r\n x = np.random.randn(1000, 3).astype('f')\r\n brn(x)\r\n\r\nx = np.random.randn(1000, 3).astype('f')\r\n\r\ny = brn(x)\r\nprint(y.array.var(axis=0))\r\n\r\nwith chainer.using_config('train', False):\r\n y = brn(x)\r\nprint(y.array.var(axis=0))\r\n```\r\n\r\n### Error messages, stack traces, or logs\r\n```\r\n[0.51281106 0.49953052 0.48790243]\r\n[0.3506052 0.33283928 0.31892547]\r\n```\r\n\r\nHere, the \"normalized\" variance is around 1/(1+eps) if train, whereas it's around 1/(1+2 eps) otherwise.\n", "before_files": [{"content": "import warnings\n\nimport numpy\n\nfrom chainer import backend\nfrom chainer.backends import cuda\nfrom chainer import configuration\nfrom chainer import function\nfrom chainer.functions.normalization import batch_normalization\nfrom chainer.utils import type_check\n\n\ndef _xhat(x, mean, std, expander):\n x_mu = x - mean[expander]\n x_mu /= std[expander]\n return x_mu\n\n\nclass BatchRenormalizationFunction(function.Function):\n\n def __init__(self, eps=2e-5, mean=None, var=None, decay=0.9,\n rmax=1, dmax=0, update_statistics=True):\n self._running_mean = mean\n self._running_var = var\n self.rmax = rmax\n self.dmax = dmax\n self.r = None\n self.update_statistics = update_statistics\n\n self.eps = eps\n self.decay = decay\n\n def _warn_accessing_property(self):\n warnings.warn(\n 'The attributes of BatchRenormalizationFunction '\n 'are deprecated. 
'\n 'Consider setting update_statistics=True to '\n 'batch_renormalization to update running statistics.',\n DeprecationWarning)\n\n @property\n def running_mean(self):\n self._warn_accessing_property()\n return self._running_mean\n\n @property\n def running_var(self):\n self._warn_accessing_property()\n return self._running_var\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 3)\n x_type, gamma_type, beta_type = in_types\n M = type_check.eval(gamma_type.ndim)\n type_check.expect(\n x_type.dtype.kind == 'f',\n x_type.ndim >= gamma_type.ndim + 1,\n x_type.shape[1:1 + M] == gamma_type.shape,\n # TODO(tkerola): Check shape\n gamma_type.dtype.kind == 'f',\n gamma_type.dtype == beta_type.dtype,\n gamma_type.shape == beta_type.shape,\n )\n\n def forward(self, inputs):\n xp = backend.get_array_module(*inputs)\n x, gamma, beta = inputs\n\n # Note: we must be in train mode.\n assert configuration.config.train\n\n head_ndim = gamma.ndim + 1\n expander = (None, Ellipsis) + (None,) * (x.ndim - head_ndim)\n\n # NOTE(tommi): cuDNN is not used since it does not support\n # batch renormalization\n axis = (0,) + tuple(range(head_ndim, x.ndim))\n mean = x.mean(axis=axis, dtype=gamma.dtype)\n var = x.var(axis=axis, dtype=gamma.dtype) + self.eps\n self.std = xp.sqrt(var, dtype=var.dtype)\n\n running_sigma = xp.sqrt(self._running_var + self.eps,\n dtype=self._running_mean.dtype)\n self.r = xp.clip(self.std / running_sigma,\n 1.0 / self.rmax, self.rmax)\n d = xp.clip(\n (mean - self._running_mean) / running_sigma,\n -self.dmax, self.dmax)\n\n gamma = gamma[expander]\n beta = beta[expander]\n\n if xp is numpy:\n self.x_hat = _xhat(x, mean, self.std, expander)\n self.x_hat_renorm = self.x_hat * self.r[expander] + d[expander]\n y = gamma * self.x_hat_renorm\n y += beta\n y = y.astype(dtype=x.dtype)\n else:\n self.x_hat, self.x_hat_renorm, y = cuda.elementwise(\n 'T x, U mean, U std, U gamma, U beta, U r, U d',\n 'U x_hat, U x_hat_renorm, T y',\n '''\n x_hat = (x - mean) / std;\n x_hat_renorm = x_hat * r + d;\n y = gamma * x_hat_renorm + beta;\n ''',\n 'brn_fwd')(\n x, mean[expander], self.std[expander], gamma, beta,\n self.r[expander], d[expander])\n\n if self.update_statistics:\n m = x.size // gamma[expander].size\n self._running_mean *= self.decay\n adjust = m / max(m - 1., 1.) 
# unbiased estimation\n temp_ar = xp.array(mean)\n temp_ar *= (1 - self.decay)\n self._running_mean += temp_ar\n del temp_ar\n self._running_var *= self.decay\n temp_ar = xp.array(var)\n temp_ar *= (1 - self.decay) * adjust\n self._running_var += temp_ar\n del temp_ar\n\n return y,\n\n def backward(self, inputs, grad_outputs):\n x, gamma, _ = inputs\n gy = grad_outputs[0]\n head_ndim = gamma.ndim + 1\n expander = (None, Ellipsis) + (None,) * (x.ndim - head_ndim)\n m = gamma.dtype.type(x.size // gamma.size)\n axis = (0,) + tuple(range(head_ndim, x.ndim))\n xp = backend.get_array_module(x)\n\n # Note: we must be in train mode.\n assert configuration.config.train\n # NOTE(tommi): cuDNN is not used since it does not support\n # batch renormalization\n gbeta = gy.sum(axis=axis, dtype=gamma.dtype)\n ggamma = (gy * self.x_hat_renorm).sum(axis=axis)\n gsigma_batch = (gy * self.x_hat).sum(axis=axis)\n if xp is numpy:\n scale = (self.r * gamma / self.std)[expander]\n gx = scale * (gy - (self.x_hat * gsigma_batch[expander] +\n gbeta[expander]) / m)\n gx = gx.astype(dtype=x.dtype)\n else:\n inv_m = numpy.float32(1) / m\n gx = cuda.elementwise(\n 'T gy, U x_hat, U gamma, U std, U gsigma_batch, U gbeta, \\\n U inv_m, U r',\n 'T gx',\n 'gx = (r * gamma / std) * (gy - (x_hat * gsigma_batch + gbeta) * \\\n inv_m)',\n 'brn_bwd')(\n gy, self.x_hat, gamma[expander],\n self.std[expander], gsigma_batch[expander],\n gbeta[expander], inv_m, self.r[expander])\n return gx, ggamma, gbeta\n\n\ndef batch_renormalization(x, gamma, beta, rmax, dmax, eps=2e-5,\n running_mean=None, running_var=None, decay=0.9,\n update_statistics=False):\n \"\"\"Batch renormalization function.\n\n This is an extension of batch normalization, which ensures that the\n training and inference models generate the same outputs that depend on\n individual examples rather than the entire minibatch.\n\n .. note::\n\n This function does not perform in-place update to\n ``running_mean`` and ``running_var`` by default, contrary to\n :func:`~chainer.functions.batch_normalization`.\n If the function is called, it will not be possible to access the\n updated running mean and variance statistics, because they are members\n of the function object, which cannot be accessed by the caller.\n If it is desired to update the running statistics, call the function\n with ``update_statistics=True`` option.\n\n .. note::\n\n For the consistency with Batch Normalization, this function\n intentionally ignores some of the theoretical flaws in Algorithm 1 of\n the Batch Renormalization paper:\n\n - ``F.batch_renormalization`` maintains the moving average of variances\n :math:`\\\\sigma^2`, while the original paper maintains the moving\n average of standard deviations :math:`\\\\sigma`.\n - ``F.batch_renormalization`` applies Bessel's correction to update the\n moving average of variances.\n\n See: `Batch Renormalization: Towards Reducing Minibatch Dependence in\n Batch-Normalized Models <https://arxiv.org/abs/1702.03275>`_\n\n .. seealso:: :class:`~chainer.links.BatchRenormalization`\n\n \"\"\"\n if running_mean is None:\n raise TypeError('running_mean is required')\n if running_var is None:\n raise TypeError('running_var is required')\n return BatchRenormalizationFunction(\n eps, running_mean, running_var, decay, rmax, dmax, update_statistics\n )(x, gamma, beta)\n\n\ndef fixed_batch_renormalization(x, gamma, beta, mean, var, eps=2e-5):\n warnings.warn(\n 'fixed_batch_renormalization is deprecated. 
'\n 'Use fixed_batch_normalization instead.',\n DeprecationWarning)\n with configuration.using_config('train', False):\n return batch_normalization.fixed_batch_normalization(\n x, gamma, beta, mean, var, eps\n )\n", "path": "chainer/functions/normalization/batch_renormalization.py"}]} | 3,434 | 197 |
gh_patches_debug_33075 | rasdani/github-patches | git_diff | sanic-org__sanic-2858 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Websocket invalid upgrade exception handling b0rkage
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
A client apparently sent no Upgrade header to a websocket endpoint, which leads to an error as it should. However, an ugly traceback is printed on the terminal, even though the error does eventually seem to get handled correctly.
It would appear that the websockets module attempts to attach its exception to the `request._exception` field, which Sanic's Request doesn't have a slot for. This could be hidden if Sanic later used `raise BadRequest(...) from None` rather than `raise SanicException(...)`, suppressing the chain and giving a non-500 error for what really is not a server error. I'm not sure whether that would ever reach the client from this context anyway, but at least it could avoid a traceback in the server log.
If anyone wants to investigate and make a PR, feel free to do so (I am currently busy and unfortunately cannot do it myself).
```python
Traceback (most recent call last):
File "/home/user/.local/lib/python3.10/site-packages/websockets/server.py", line 111, in accept
) = self.process_request(request)
File "/home/user/.local/lib/python3.10/site-packages/websockets/server.py", line 218, in process_request
raise InvalidUpgrade("Upgrade", ", ".join(upgrade) if upgrade else None)
websockets.exceptions.InvalidUpgrade: missing Upgrade header
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/sanic/sanic/server/protocols/websocket_protocol.py", line 120, in websocket_handshake
resp: "http11.Response" = ws_proto.accept(request)
File "/home/user/.local/lib/python3.10/site-packages/websockets/server.py", line 122, in accept
request._exception = exc
AttributeError: 'Request' object has no attribute '_exception'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "handle_request", line 97, in handle_request
File "/home/user/sanic/sanic/app.py", line 1047, in _websocket_handler
ws = await protocol.websocket_handshake(request, subprotocols)
File "/home/user/sanic/sanic/server/protocols/websocket_protocol.py", line 126, in websocket_handshake
raise SanicException(msg, status_code=500)
sanic.exceptions.SanicException: Failed to open a WebSocket connection.
See server log for more information.
```
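To make the suggestion above concrete, here is a hedged sketch of the suppression idea (it assumes `sanic.exceptions.BadRequest`, which the description references, and is not the project's actual handshake code):

```python
from sanic.exceptions import BadRequest
from websockets.exceptions import InvalidUpgrade

def reject_handshake(exc: InvalidUpgrade) -> None:
    # `from None` drops the chained traceback, and a 400-class error replaces
    # the generic 500 SanicException currently raised.
    raise BadRequest(f"Failed to open a WebSocket connection: {exc}") from None
```

Whether that message ever reaches the client from this context is a separate question, but the server log would at least stay clean.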
### Code snippet
_No response_
### Expected Behavior
A 400 Bad Request error should reach the client, with less noise on the server side. Including the **missing Upgrade header** message would be helpful for debugging (e.g. in case an Nginx proxy config forgot to forward that header).
### How do you run Sanic?
Sanic CLI
### Operating System
Linux
### Sanic Version
Almost 23.03.0 (a git version slightly before release)
### Additional context
_No response_
</issue>
<code>
[start of sanic/server/protocols/websocket_protocol.py]
1 from typing import TYPE_CHECKING, Optional, Sequence, cast
2
3
4 try: # websockets < 11.0
5 from websockets.connection import State
6 from websockets.server import ServerConnection as ServerProtocol
7 except ImportError: # websockets >= 11.0
8 from websockets.protocol import State # type: ignore
9 from websockets.server import ServerProtocol # type: ignore
10
11 from websockets.typing import Subprotocol
12
13 from sanic.exceptions import SanicException
14 from sanic.log import logger
15 from sanic.server import HttpProtocol
16
17 from ..websockets.impl import WebsocketImplProtocol
18
19
20 if TYPE_CHECKING:
21 from websockets import http11
22
23
24 OPEN = State.OPEN
25 CLOSING = State.CLOSING
26 CLOSED = State.CLOSED
27
28
29 class WebSocketProtocol(HttpProtocol):
30 __slots__ = (
31 "websocket",
32 "websocket_timeout",
33 "websocket_max_size",
34 "websocket_ping_interval",
35 "websocket_ping_timeout",
36 )
37
38 def __init__(
39 self,
40 *args,
41 websocket_timeout: float = 10.0,
42 websocket_max_size: Optional[int] = None,
43 websocket_ping_interval: Optional[float] = 20.0,
44 websocket_ping_timeout: Optional[float] = 20.0,
45 **kwargs,
46 ):
47 super().__init__(*args, **kwargs)
48 self.websocket: Optional[WebsocketImplProtocol] = None
49 self.websocket_timeout = websocket_timeout
50 self.websocket_max_size = websocket_max_size
51 self.websocket_ping_interval = websocket_ping_interval
52 self.websocket_ping_timeout = websocket_ping_timeout
53
54 def connection_lost(self, exc):
55 if self.websocket is not None:
56 self.websocket.connection_lost(exc)
57 super().connection_lost(exc)
58
59 def data_received(self, data):
60 if self.websocket is not None:
61 self.websocket.data_received(data)
62 else:
63 # Pass it to HttpProtocol handler first
64 # That will (hopefully) upgrade it to a websocket.
65 super().data_received(data)
66
67 def eof_received(self) -> Optional[bool]:
68 if self.websocket is not None:
69 return self.websocket.eof_received()
70 else:
71 return False
72
73 def close(self, timeout: Optional[float] = None):
74 # Called by HttpProtocol at the end of connection_task
75 # If we've upgraded to websocket, we do our own closing
76 if self.websocket is not None:
77 # Note, we don't want to use websocket.close()
78 # That is used for user's application code to send a
79 # websocket close packet. This is different.
80 self.websocket.end_connection(1001)
81 else:
82 super().close()
83
84 def close_if_idle(self):
85 # Called by Sanic Server when shutting down
86 # If we've upgraded to websocket, shut it down
87 if self.websocket is not None:
88 if self.websocket.ws_proto.state in (CLOSING, CLOSED):
89 return True
90 elif self.websocket.loop is not None:
91 self.websocket.loop.create_task(self.websocket.close(1001))
92 else:
93 self.websocket.end_connection(1001)
94 else:
95 return super().close_if_idle()
96
97 async def websocket_handshake(
98 self, request, subprotocols: Optional[Sequence[str]] = None
99 ):
100 # let the websockets package do the handshake with the client
101 try:
102 if subprotocols is not None:
103 # subprotocols can be a set or frozenset,
104 # but ServerProtocol needs a list
105 subprotocols = cast(
106 Optional[Sequence[Subprotocol]],
107 list(
108 [
109 Subprotocol(subprotocol)
110 for subprotocol in subprotocols
111 ]
112 ),
113 )
114 ws_proto = ServerProtocol(
115 max_size=self.websocket_max_size,
116 subprotocols=subprotocols,
117 state=OPEN,
118 logger=logger,
119 )
120 resp: "http11.Response" = ws_proto.accept(request)
121 except Exception:
122 msg = (
123 "Failed to open a WebSocket connection.\n"
124 "See server log for more information.\n"
125 )
126 raise SanicException(msg, status_code=500)
127 if 100 <= resp.status_code <= 299:
128 first_line = (
129 f"HTTP/1.1 {resp.status_code} {resp.reason_phrase}\r\n"
130 ).encode()
131 rbody = bytearray(first_line)
132 rbody += (
133 "".join([f"{k}: {v}\r\n" for k, v in resp.headers.items()])
134 ).encode()
135 rbody += b"\r\n"
136 if resp.body is not None:
137 rbody += resp.body
138 rbody += b"\r\n\r\n"
139 await super().send(rbody)
140 else:
141 raise SanicException(resp.body, resp.status_code)
142 self.websocket = WebsocketImplProtocol(
143 ws_proto,
144 ping_interval=self.websocket_ping_interval,
145 ping_timeout=self.websocket_ping_timeout,
146 close_timeout=self.websocket_timeout,
147 )
148 loop = (
149 request.transport.loop
150 if hasattr(request, "transport")
151 and hasattr(request.transport, "loop")
152 else None
153 )
154 await self.websocket.connection_made(self, loop=loop)
155 return self.websocket
156
[end of sanic/server/protocols/websocket_protocol.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sanic/server/protocols/websocket_protocol.py b/sanic/server/protocols/websocket_protocol.py
--- a/sanic/server/protocols/websocket_protocol.py
+++ b/sanic/server/protocols/websocket_protocol.py
@@ -1,4 +1,4 @@
-from typing import TYPE_CHECKING, Optional, Sequence, cast
+from typing import Optional, Sequence, cast
try: # websockets < 11.0
@@ -8,19 +8,18 @@
from websockets.protocol import State # type: ignore
from websockets.server import ServerProtocol # type: ignore
+from websockets import http11
+from websockets.datastructures import Headers as WSHeaders
from websockets.typing import Subprotocol
from sanic.exceptions import SanicException
from sanic.log import logger
+from sanic.request import Request
from sanic.server import HttpProtocol
from ..websockets.impl import WebsocketImplProtocol
-if TYPE_CHECKING:
- from websockets import http11
-
-
OPEN = State.OPEN
CLOSING = State.CLOSING
CLOSED = State.CLOSED
@@ -94,6 +93,13 @@
else:
return super().close_if_idle()
+ @staticmethod
+ def sanic_request_to_ws_request(request: Request):
+ return http11.Request(
+ path=request.path,
+ headers=WSHeaders(request.headers),
+ )
+
async def websocket_handshake(
self, request, subprotocols: Optional[Sequence[str]] = None
):
@@ -117,7 +123,7 @@
state=OPEN,
logger=logger,
)
- resp: "http11.Response" = ws_proto.accept(request)
+ resp = ws_proto.accept(self.sanic_request_to_ws_request(request))
except Exception:
msg = (
"Failed to open a WebSocket connection.\n"
| {"golden_diff": "diff --git a/sanic/server/protocols/websocket_protocol.py b/sanic/server/protocols/websocket_protocol.py\n--- a/sanic/server/protocols/websocket_protocol.py\n+++ b/sanic/server/protocols/websocket_protocol.py\n@@ -1,4 +1,4 @@\n-from typing import TYPE_CHECKING, Optional, Sequence, cast\n+from typing import Optional, Sequence, cast\n \n \n try: # websockets < 11.0\n@@ -8,19 +8,18 @@\n from websockets.protocol import State # type: ignore\n from websockets.server import ServerProtocol # type: ignore\n \n+from websockets import http11\n+from websockets.datastructures import Headers as WSHeaders\n from websockets.typing import Subprotocol\n \n from sanic.exceptions import SanicException\n from sanic.log import logger\n+from sanic.request import Request\n from sanic.server import HttpProtocol\n \n from ..websockets.impl import WebsocketImplProtocol\n \n \n-if TYPE_CHECKING:\n- from websockets import http11\n-\n-\n OPEN = State.OPEN\n CLOSING = State.CLOSING\n CLOSED = State.CLOSED\n@@ -94,6 +93,13 @@\n else:\n return super().close_if_idle()\n \n+ @staticmethod\n+ def sanic_request_to_ws_request(request: Request):\n+ return http11.Request(\n+ path=request.path,\n+ headers=WSHeaders(request.headers),\n+ )\n+\n async def websocket_handshake(\n self, request, subprotocols: Optional[Sequence[str]] = None\n ):\n@@ -117,7 +123,7 @@\n state=OPEN,\n logger=logger,\n )\n- resp: \"http11.Response\" = ws_proto.accept(request)\n+ resp = ws_proto.accept(self.sanic_request_to_ws_request(request))\n except Exception:\n msg = (\n \"Failed to open a WebSocket connection.\\n\"\n", "issue": "Websocket invalid upgrade exception handling b0rkage\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Describe the bug\n\nA client apparently sent no Upgrade header to a websocket endpoint, leading to an error as it should. An ugly traceback is printed on terminal even though the error eventually gets handled correctly it would seem.\r\n\r\nIt would appear that the websockets module attempts to attach its exception on `request._exception` field which Sanic's Request doesn't have a slot for. This could be hidden if Sanic later used `raise BadRequest(...) from None` rather than `raise SanicException(...)`, suppressing the chain and giving a non-500 error for what really is no server error. 
Not sure though if that would from this context ever reach the client anyway but at least it could avoid a traceback in server log.\r\n\r\nIf anyone wants to investigate and make a PR, feel free to (I am currently busy and cannot do that unfortunately).\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"/home/user/.local/lib/python3.10/site-packages/websockets/server.py\", line 111, in accept\r\n ) = self.process_request(request)\r\n File \"/home/user/.local/lib/python3.10/site-packages/websockets/server.py\", line 218, in process_request\r\n raise InvalidUpgrade(\"Upgrade\", \", \".join(upgrade) if upgrade else None)\r\nwebsockets.exceptions.InvalidUpgrade: missing Upgrade header\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/user/sanic/sanic/server/protocols/websocket_protocol.py\", line 120, in websocket_handshake\r\n resp: \"http11.Response\" = ws_proto.accept(request)\r\n File \"/home/user/.local/lib/python3.10/site-packages/websockets/server.py\", line 122, in accept\r\n request._exception = exc\r\nAttributeError: 'Request' object has no attribute '_exception'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"handle_request\", line 97, in handle_request\r\n File \"/home/user/sanic/sanic/app.py\", line 1047, in _websocket_handler\r\n ws = await protocol.websocket_handshake(request, subprotocols)\r\n File \"/home/user/sanic/sanic/server/protocols/websocket_protocol.py\", line 126, in websocket_handshake\r\n raise SanicException(msg, status_code=500)\r\nsanic.exceptions.SanicException: Failed to open a WebSocket connection.\r\nSee server log for more information.\r\n```\r\n\n\n### Code snippet\n\n_No response_\n\n### Expected Behavior\n\n400 Bad Request error reaching the client and being more silent on server side. Including the message of **missing Upgrade header** would be helpful for debugging (e.g. 
in case Nginx proxy config forgot to forward that header).\n\n### How do you run Sanic?\n\nSanic CLI\n\n### Operating System\n\nLinux\n\n### Sanic Version\n\nAlmost 23.03.0 (a git version slightly before release)\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "from typing import TYPE_CHECKING, Optional, Sequence, cast\n\n\ntry: # websockets < 11.0\n from websockets.connection import State\n from websockets.server import ServerConnection as ServerProtocol\nexcept ImportError: # websockets >= 11.0\n from websockets.protocol import State # type: ignore\n from websockets.server import ServerProtocol # type: ignore\n\nfrom websockets.typing import Subprotocol\n\nfrom sanic.exceptions import SanicException\nfrom sanic.log import logger\nfrom sanic.server import HttpProtocol\n\nfrom ..websockets.impl import WebsocketImplProtocol\n\n\nif TYPE_CHECKING:\n from websockets import http11\n\n\nOPEN = State.OPEN\nCLOSING = State.CLOSING\nCLOSED = State.CLOSED\n\n\nclass WebSocketProtocol(HttpProtocol):\n __slots__ = (\n \"websocket\",\n \"websocket_timeout\",\n \"websocket_max_size\",\n \"websocket_ping_interval\",\n \"websocket_ping_timeout\",\n )\n\n def __init__(\n self,\n *args,\n websocket_timeout: float = 10.0,\n websocket_max_size: Optional[int] = None,\n websocket_ping_interval: Optional[float] = 20.0,\n websocket_ping_timeout: Optional[float] = 20.0,\n **kwargs,\n ):\n super().__init__(*args, **kwargs)\n self.websocket: Optional[WebsocketImplProtocol] = None\n self.websocket_timeout = websocket_timeout\n self.websocket_max_size = websocket_max_size\n self.websocket_ping_interval = websocket_ping_interval\n self.websocket_ping_timeout = websocket_ping_timeout\n\n def connection_lost(self, exc):\n if self.websocket is not None:\n self.websocket.connection_lost(exc)\n super().connection_lost(exc)\n\n def data_received(self, data):\n if self.websocket is not None:\n self.websocket.data_received(data)\n else:\n # Pass it to HttpProtocol handler first\n # That will (hopefully) upgrade it to a websocket.\n super().data_received(data)\n\n def eof_received(self) -> Optional[bool]:\n if self.websocket is not None:\n return self.websocket.eof_received()\n else:\n return False\n\n def close(self, timeout: Optional[float] = None):\n # Called by HttpProtocol at the end of connection_task\n # If we've upgraded to websocket, we do our own closing\n if self.websocket is not None:\n # Note, we don't want to use websocket.close()\n # That is used for user's application code to send a\n # websocket close packet. 
This is different.\n self.websocket.end_connection(1001)\n else:\n super().close()\n\n def close_if_idle(self):\n # Called by Sanic Server when shutting down\n # If we've upgraded to websocket, shut it down\n if self.websocket is not None:\n if self.websocket.ws_proto.state in (CLOSING, CLOSED):\n return True\n elif self.websocket.loop is not None:\n self.websocket.loop.create_task(self.websocket.close(1001))\n else:\n self.websocket.end_connection(1001)\n else:\n return super().close_if_idle()\n\n async def websocket_handshake(\n self, request, subprotocols: Optional[Sequence[str]] = None\n ):\n # let the websockets package do the handshake with the client\n try:\n if subprotocols is not None:\n # subprotocols can be a set or frozenset,\n # but ServerProtocol needs a list\n subprotocols = cast(\n Optional[Sequence[Subprotocol]],\n list(\n [\n Subprotocol(subprotocol)\n for subprotocol in subprotocols\n ]\n ),\n )\n ws_proto = ServerProtocol(\n max_size=self.websocket_max_size,\n subprotocols=subprotocols,\n state=OPEN,\n logger=logger,\n )\n resp: \"http11.Response\" = ws_proto.accept(request)\n except Exception:\n msg = (\n \"Failed to open a WebSocket connection.\\n\"\n \"See server log for more information.\\n\"\n )\n raise SanicException(msg, status_code=500)\n if 100 <= resp.status_code <= 299:\n first_line = (\n f\"HTTP/1.1 {resp.status_code} {resp.reason_phrase}\\r\\n\"\n ).encode()\n rbody = bytearray(first_line)\n rbody += (\n \"\".join([f\"{k}: {v}\\r\\n\" for k, v in resp.headers.items()])\n ).encode()\n rbody += b\"\\r\\n\"\n if resp.body is not None:\n rbody += resp.body\n rbody += b\"\\r\\n\\r\\n\"\n await super().send(rbody)\n else:\n raise SanicException(resp.body, resp.status_code)\n self.websocket = WebsocketImplProtocol(\n ws_proto,\n ping_interval=self.websocket_ping_interval,\n ping_timeout=self.websocket_ping_timeout,\n close_timeout=self.websocket_timeout,\n )\n loop = (\n request.transport.loop\n if hasattr(request, \"transport\")\n and hasattr(request.transport, \"loop\")\n else None\n )\n await self.websocket.connection_made(self, loop=loop)\n return self.websocket\n", "path": "sanic/server/protocols/websocket_protocol.py"}]} | 2,692 | 418 |
gh_patches_debug_16846 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-1691 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Import intensity could fallback on yearly averages when missing/unknown
When a country, or area, is importing electricity from another country and the exporting country's production sources are unknown, it seems as if the intensity of the imported electricity is set to be equal to the intensity of the importing country. But this is hardly meaningful. Would it be possible to set the unknown intensity of imported electricity to an average or mean value from a historical period? E.g. the last month or the same month last year. Or to the last available dataset (depending on how old that is).
I can see that it happens quite often for Norway, that "Data [is] temporarily unavailable". The intensity of the electricity exported to Sweden is low, while it is medium high when exported to West Denmark.
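A minimal sketch of how such a fallback could look, assuming a config file that maps each zone to an annual-average carbon intensity (the file and key names mirror the config referenced later in this record and are illustrative, not a definitive implementation):

```python
import json

# Assumed config shape: {"fallbackZoneMixes": {"<zone>": {"carbonIntensity": <gCO2eq per kWh>}}}
with open("config/co2eq_parameters.json") as f:
    CO2EQ_PARAMETERS = json.load(f)


def import_intensity(zone_key, live_intensity=None):
    """Prefer live data for the exporting zone; otherwise fall back to its annual average."""
    if live_intensity is not None:
        return live_intensity
    fallback = CO2EQ_PARAMETERS.get("fallbackZoneMixes", {}).get(zone_key, {})
    return fallback.get("carbonIntensity")  # may still be None if no average is known
```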
</issue>
<code>
[start of utils/config.py]
1 import json
2 import os
3
4 def relative_path(script_reference_path, rel_path):
5 # __file__ should be passed as script_reference_path
6 script_path = os.path.abspath(
7 script_reference_path) # i.e. /path/to/dir/foobar.py
8 script_dir = os.path.split(script_path)[0] # i.e. /path/to/dir/
9 return os.path.join(script_dir, rel_path)
10
11
12 # Prepare zone bounding boxes
13 ZONE_BOUNDING_BOXES = {}
14
15 # Read parser import list from config jsons
16 ZONES_CONFIG = json.load(open(relative_path(
17 __file__, '../config/zones.json')))
18
19 # Read all zones
20 for zone_id, zone_config in ZONES_CONFIG.items():
21 if 'bounding_box' in zone_config:
22 ZONE_BOUNDING_BOXES[zone_id] = zone_config['bounding_box']
23
24 # Read parser import list from config jsons
25 ZONES_CONFIG = json.load(open(relative_path(
26 __file__, '../config/zones.json')))
27 EXCHANGES_CONFIG = json.load(open(relative_path(
28 __file__, '../config/exchanges.json')))
29 ZONE_NEIGHBOURS = {}
30 for k, v in EXCHANGES_CONFIG.items():
31 zone_names = k.split('->')
32 pairs = [
33 (zone_names[0], zone_names[1]),
34 (zone_names[1], zone_names[0])
35 ]
36 for zone_name_1, zone_name_2 in pairs:
37 if zone_name_1 not in ZONE_NEIGHBOURS:
38 ZONE_NEIGHBOURS[zone_name_1] = set()
39 ZONE_NEIGHBOURS[zone_name_1].add(zone_name_2)
40 # we want neighbors to always be in the same order
41 for zone, neighbors in ZONE_NEIGHBOURS.items():
42 ZONE_NEIGHBOURS[zone] = sorted(neighbors)
43
[end of utils/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/utils/config.py b/utils/config.py
--- a/utils/config.py
+++ b/utils/config.py
@@ -40,3 +40,22 @@
# we want neighbors to always be in the same order
for zone, neighbors in ZONE_NEIGHBOURS.items():
ZONE_NEIGHBOURS[zone] = sorted(neighbors)
+
+CO2EQ_PARAMETERS = json.load(open(relative_path(
+ __file__, '../config/co2eq_parameters.json')))
+
+def emission_factors(zone_key):
+ fallback_carbon_intensity = CO2EQ_PARAMETERS['fallbackZoneMixes'].get(zone_key, {}).get('carbonIntensity');
+ override = CO2EQ_PARAMETERS['emissionFactors']['zoneOverrides'].get(zone_key, {})
+ defaults = CO2EQ_PARAMETERS['emissionFactors']['defaults']
+ merged = {**defaults, **override}
+ if fallback_carbon_intensity:
+ merged['battery storage'] = {
+ 'value': fallback_carbon_intensity,
+ 'source': 'Annual carbon intensity'
+ }
+ merged['hydro storage'] = {
+ 'value': fallback_carbon_intensity,
+ 'source': 'Annual carbon intensity'
+ }
+ return dict([(k, (v or {}).get('value')) for (k, v) in merged.items()])
| {"golden_diff": "diff --git a/utils/config.py b/utils/config.py\n--- a/utils/config.py\n+++ b/utils/config.py\n@@ -40,3 +40,22 @@\n # we want neighbors to always be in the same order\n for zone, neighbors in ZONE_NEIGHBOURS.items():\n ZONE_NEIGHBOURS[zone] = sorted(neighbors)\n+\n+CO2EQ_PARAMETERS = json.load(open(relative_path(\n+ __file__, '../config/co2eq_parameters.json')))\n+\n+def emission_factors(zone_key):\n+ fallback_carbon_intensity = CO2EQ_PARAMETERS['fallbackZoneMixes'].get(zone_key, {}).get('carbonIntensity');\n+ override = CO2EQ_PARAMETERS['emissionFactors']['zoneOverrides'].get(zone_key, {})\n+ defaults = CO2EQ_PARAMETERS['emissionFactors']['defaults']\n+ merged = {**defaults, **override}\n+ if fallback_carbon_intensity:\n+ merged['battery storage'] = {\n+ 'value': fallback_carbon_intensity,\n+ 'source': 'Annual carbon intensity'\n+ }\n+ merged['hydro storage'] = {\n+ 'value': fallback_carbon_intensity,\n+ 'source': 'Annual carbon intensity'\n+ }\n+ return dict([(k, (v or {}).get('value')) for (k, v) in merged.items()])\n", "issue": "Import intensity could fallback on yearly averages when missing/unknown\nWhen a country, or area, is importing electricity from another country and the exporting country's production sources are unknown, it seems as if the intensity of the imported electricity is set to be equal to the intensity of the importing country. But this is hardly meaningful. Would it be possible to set the unknown intensity of imported electricity to an average or mean value from a historical period? E.g. the last month or the same month last year. Or to the last available dataset (depending on how old that is).\r\n\r\nI can see that it happens quite often for Norway, that \"Data [is] temporarily unavailable\". The intensity of the electricity exported to Sweden is low, while it is medium high when exported to West Denmark.\n", "before_files": [{"content": "import json\nimport os\n\ndef relative_path(script_reference_path, rel_path):\n # __file__ should be passed as script_reference_path\n script_path = os.path.abspath(\n script_reference_path) # i.e. /path/to/dir/foobar.py\n script_dir = os.path.split(script_path)[0] # i.e. /path/to/dir/\n return os.path.join(script_dir, rel_path)\n\n\n# Prepare zone bounding boxes\nZONE_BOUNDING_BOXES = {}\n\n# Read parser import list from config jsons\nZONES_CONFIG = json.load(open(relative_path(\n __file__, '../config/zones.json')))\n\n# Read all zones\nfor zone_id, zone_config in ZONES_CONFIG.items():\n if 'bounding_box' in zone_config:\n ZONE_BOUNDING_BOXES[zone_id] = zone_config['bounding_box']\n\n# Read parser import list from config jsons\nZONES_CONFIG = json.load(open(relative_path(\n __file__, '../config/zones.json')))\nEXCHANGES_CONFIG = json.load(open(relative_path(\n __file__, '../config/exchanges.json')))\nZONE_NEIGHBOURS = {}\nfor k, v in EXCHANGES_CONFIG.items():\n zone_names = k.split('->')\n pairs = [\n (zone_names[0], zone_names[1]),\n (zone_names[1], zone_names[0])\n ]\n for zone_name_1, zone_name_2 in pairs:\n if zone_name_1 not in ZONE_NEIGHBOURS:\n ZONE_NEIGHBOURS[zone_name_1] = set()\n ZONE_NEIGHBOURS[zone_name_1].add(zone_name_2)\n# we want neighbors to always be in the same order\nfor zone, neighbors in ZONE_NEIGHBOURS.items():\n ZONE_NEIGHBOURS[zone] = sorted(neighbors)\n", "path": "utils/config.py"}]} | 1,167 | 284 |
gh_patches_debug_43085 | rasdani/github-patches | git_diff | SeldonIO__MLServer-1337 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[huggingface] Merge predictive unit parameters from env with model settings
# Background
## What are PREDICTIVE_UNIT_PARAMETERS
This is a collection of parameters passed through the environment to a HuggingFace model. This is opposed to declaring them in a `model-settings.json` file, as is the case in Seldon Core v2. As of this moment, those parameters are injected via Seldon Core v1 only, whereas Seldon Core v2 uses `model-settings.json` to provide the metadata for the model.
PREDICTIVE_UNIT_PARAMETERS are used only by the HuggingFace runtime and injected only by SCv1.
You can find the code for creating `HuggingFaceSettings` in the `./runtimes/huggingface/ml-server-huggingface/settings.py` file, along with functions for parsing those params from env vars or from `model-settings.json`.
# What is the problem
Currently, `HuggingFaceSettings` are created either from parsing the PREDICTIVE_UNIT_PARAMETERS from the environment OR from the `model-settings.json`. This means that if there is at least one parameter set in the env var, the `model-settings.json` extra parameters will be ignored. This makes it cumbersome when a deployment is created from the UI, because additional params such as `task`, `pretrained_model`, `pretrained_tokenizer`, `framework`, etc. have to be added one by one in the Wizard. Why do they have to be added from the wizard and not just specified in `model-settings.json`? Because currently SCv1 always injects the `model_uri` param into the PREDICTIVE_UNIT_PARAMETERS env var, so it's not empty. Because this var is not empty, the HF settings are initialised from it and the `model-settings.json` is ignored.
# What needs to be done
When creating HuggingFace settings, env vars need to be merged with params from `model-settings.json`, giving priority to env vars. For example:
If such an env var exists:
```
PREDICTIVE_UNIT_PARAMETERS = [{"name":"model_uri","value":"/mnt/models","type":"STRING"}, {"name":"task_suffix","value":"else","type":"STRING"}]
```
and such a `model-settings.json` file exists:
```
{
"name": "transformer",
"implementation": "mlserver_huggingface.HuggingFaceRuntime",
"parameters": {
"extra": {
"task": "text-generation",
"task_suffix": "something",
"framework": "pt"
}
}
}
```
The outcome should be that the `task` parameter doesn't need to be specified in the wizard, and
the HuggingFace settings should contain the values: task = text-generation, task_suffix = else, framework = pt.
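A minimal sketch of the merge semantics described above (plain Python, independent of MLServer's actual code; the helper name is illustrative):

```python
from typing import Dict


def merge_hf_extra(env_params: Dict, file_extra: Dict) -> Dict:
    """Merge `model-settings.json` extra params with env-injected params.

    Env vars win on conflicting keys, so SCv1-injected values such as
    `task_suffix` override the file, while keys that only exist in
    `model-settings.json` (e.g. `task`) are preserved.
    """
    return {**(file_extra or {}), **(env_params or {})}


# Using the example values above:
env_params = {"model_uri": "/mnt/models", "task_suffix": "else"}
file_extra = {"task": "text-generation", "task_suffix": "something", "framework": "pt"}
assert merge_hf_extra(env_params, file_extra) == {
    "model_uri": "/mnt/models",
    "task": "text-generation",
    "task_suffix": "else",
    "framework": "pt",
}
```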
# Scope
This relates only to the `HuggingFace` runtime when it's used from SCv1, and is only valid as long as SCv1 is still operational and the related code is present in MLServer.
</issue>
<code>
[start of runtimes/huggingface/mlserver_huggingface/settings.py]
1 import os
2 import orjson
3
4 from typing import Optional, Dict
5 from pydantic import BaseSettings
6 from distutils.util import strtobool
7 from transformers.pipelines import SUPPORTED_TASKS
8
9 try:
10 # Optimum 1.7 changed the import name from `SUPPORTED_TASKS` to
11 # `ORT_SUPPORTED_TASKS`.
12 # We'll try to import the more recent one, falling back to the previous
13 # import name if not present.
14 # https://github.com/huggingface/optimum/blob/987b02e4f6e2a1c9325b364ff764da2e57e89902/optimum/pipelines/__init__.py#L18
15 from optimum.pipelines import ORT_SUPPORTED_TASKS as SUPPORTED_OPTIMUM_TASKS
16 except ImportError:
17 from optimum.pipelines import SUPPORTED_TASKS as SUPPORTED_OPTIMUM_TASKS
18
19 from mlserver.settings import ModelSettings
20
21 from .errors import (
22 MissingHuggingFaceSettings,
23 InvalidTransformersTask,
24 InvalidOptimumTask,
25 InvalidModelParameter,
26 InvalidModelParameterType,
27 )
28
29 ENV_PREFIX_HUGGINGFACE_SETTINGS = "MLSERVER_MODEL_HUGGINGFACE_"
30 PARAMETERS_ENV_NAME = "PREDICTIVE_UNIT_PARAMETERS"
31
32
33 class HuggingFaceSettings(BaseSettings):
34 """
35 Parameters that apply only to HuggingFace models
36 """
37
38 class Config:
39 env_prefix = ENV_PREFIX_HUGGINGFACE_SETTINGS
40
41 # TODO: Document fields
42 task: str = ""
43 """
44 Pipeline task to load.
45 You can see the available Optimum and Transformers tasks available in the
46 links below:
47
48 - `Optimum Tasks <https://huggingface.co/docs/optimum/onnxruntime/usage_guides/pipelines#inference-pipelines-with-the-onnx-runtime-accelerator>`_
49 - `Transformer Tasks <https://huggingface.co/docs/transformers/task_summary>`_
50 """ # noqa: E501
51
52 task_suffix: str = ""
53 """
54 Suffix to append to the base task name.
55 Useful for, e.g. translation tasks which require a suffix on the task name
56 to specify source and target.
57 """
58
59 pretrained_model: Optional[str] = None
60 """
61 Name of the model that should be loaded in the pipeline.
62 """
63
64 pretrained_tokenizer: Optional[str] = None
65 """
66 Name of the tokenizer that should be loaded in the pipeline.
67 """
68
69 framework: Optional[str] = None
70 """
71 The framework to use, either "pt" for PyTorch or "tf" for TensorFlow.
72 """
73
74 optimum_model: bool = False
75 """
76 Flag to decide whether the pipeline should use a Optimum-optimised model or
77 the standard Transformers model.
78 Under the hood, this will enable the model to use the optimised ONNX
79 runtime.
80 """
81
82 device: int = -1
83 """
84 Device in which this pipeline will be loaded (e.g., "cpu", "cuda:1", "mps",
85 or a GPU ordinal rank like 1).
86 """
87
88 inter_op_threads: Optional[int] = None
89 """
90 Threads used for parallelism between independent operations.
91 PyTorch:
92 https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html
93 Tensorflow:
94 https://www.tensorflow.org/api_docs/python/tf/config/threading/set_inter_op_parallelism_threads
95 """
96
97 intra_op_threads: Optional[int] = None
98 """
99 Threads used within an individual op for parallelism.
100 PyTorch:
101 https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html
102 Tensorflow:
103 https://www.tensorflow.org/api_docs/python/tf/config/threading/set_intra_op_parallelism_threads
104 """
105
106 @property
107 def task_name(self):
108 if self.task == "translation":
109 return f"{self.task}{self.task_suffix}"
110 return self.task
111
112
113 def parse_parameters_from_env() -> Dict:
114 """
115 This method parses the environment variables injected via SCv1.
116 """
117 # TODO: Once support for SCv1 is deprecated, we should remove this method and rely
118 # purely on settings coming via the `model-settings.json` file.
119 parameters = orjson.loads(os.environ.get(PARAMETERS_ENV_NAME, "[]"))
120
121 type_dict = {
122 "INT": int,
123 "FLOAT": float,
124 "DOUBLE": float,
125 "STRING": str,
126 "BOOL": bool,
127 }
128
129 parsed_parameters = {}
130 for param in parameters:
131 name = param.get("name")
132 value = param.get("value")
133 type_ = param.get("type")
134 if type_ == "BOOL":
135 parsed_parameters[name] = bool(strtobool(value))
136 else:
137 try:
138 parsed_parameters[name] = type_dict[type_](value)
139 except ValueError:
140 raise InvalidModelParameter(name, value, type_)
141 except KeyError:
142 raise InvalidModelParameterType(type_)
143 return parsed_parameters
144
145
146 def get_huggingface_settings(model_settings: ModelSettings) -> HuggingFaceSettings:
147 env_params = parse_parameters_from_env()
148 if not env_params and (
149 not model_settings.parameters or not model_settings.parameters.extra
150 ):
151 raise MissingHuggingFaceSettings()
152
153 extra = env_params or model_settings.parameters.extra # type: ignore
154 hf_settings = HuggingFaceSettings(**extra) # type: ignore
155
156 if hf_settings.task not in SUPPORTED_TASKS:
157 raise InvalidTransformersTask(hf_settings.task, SUPPORTED_TASKS.keys())
158
159 if hf_settings.optimum_model:
160 if hf_settings.task not in SUPPORTED_OPTIMUM_TASKS:
161 raise InvalidOptimumTask(hf_settings.task, SUPPORTED_OPTIMUM_TASKS.keys())
162
163 return hf_settings
164
[end of runtimes/huggingface/mlserver_huggingface/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/runtimes/huggingface/mlserver_huggingface/settings.py b/runtimes/huggingface/mlserver_huggingface/settings.py
--- a/runtimes/huggingface/mlserver_huggingface/settings.py
+++ b/runtimes/huggingface/mlserver_huggingface/settings.py
@@ -1,7 +1,7 @@
import os
import orjson
-from typing import Optional, Dict
+from typing import Optional, Dict, Union, NewType
from pydantic import BaseSettings
from distutils.util import strtobool
from transformers.pipelines import SUPPORTED_TASKS
@@ -110,23 +110,33 @@
return self.task
-def parse_parameters_from_env() -> Dict:
+EXTRA_TYPE_DICT = {
+ "INT": int,
+ "FLOAT": float,
+ "DOUBLE": float,
+ "STRING": str,
+ "BOOL": bool,
+}
+
+ExtraDict = NewType("ExtraDict", Dict[str, Union[str, bool, float, int]])
+
+
+def parse_parameters_from_env() -> ExtraDict:
"""
This method parses the environment variables injected via SCv1.
+
+ At least an empty dict is always returned.
"""
# TODO: Once support for SCv1 is deprecated, we should remove this method and rely
# purely on settings coming via the `model-settings.json` file.
parameters = orjson.loads(os.environ.get(PARAMETERS_ENV_NAME, "[]"))
- type_dict = {
- "INT": int,
- "FLOAT": float,
- "DOUBLE": float,
- "STRING": str,
- "BOOL": bool,
- }
+ parsed_parameters: ExtraDict = ExtraDict({})
+
+ # Guard: Exit early if there's no parameters
+ if len(parameters) == 0:
+ return parsed_parameters
- parsed_parameters = {}
for param in parameters:
name = param.get("name")
value = param.get("value")
@@ -135,22 +145,20 @@
parsed_parameters[name] = bool(strtobool(value))
else:
try:
- parsed_parameters[name] = type_dict[type_](value)
+ parsed_parameters[name] = EXTRA_TYPE_DICT[type_](value)
except ValueError:
raise InvalidModelParameter(name, value, type_)
except KeyError:
raise InvalidModelParameterType(type_)
+
return parsed_parameters
def get_huggingface_settings(model_settings: ModelSettings) -> HuggingFaceSettings:
- env_params = parse_parameters_from_env()
- if not env_params and (
- not model_settings.parameters or not model_settings.parameters.extra
- ):
- raise MissingHuggingFaceSettings()
+ """Get the HuggingFace settings provided to the runtime"""
- extra = env_params or model_settings.parameters.extra # type: ignore
+ env_params = parse_parameters_from_env()
+ extra = merge_huggingface_settings_extra(model_settings, env_params)
hf_settings = HuggingFaceSettings(**extra) # type: ignore
if hf_settings.task not in SUPPORTED_TASKS:
@@ -161,3 +169,35 @@
raise InvalidOptimumTask(hf_settings.task, SUPPORTED_OPTIMUM_TASKS.keys())
return hf_settings
+
+
+def merge_huggingface_settings_extra(
+ model_settings: ModelSettings, env_params: ExtraDict
+) -> ExtraDict:
+ """
+ This function returns the Extra field of the Settings.
+
+ It merges them, iff they're both present, from the
+ environment AND model settings file. Precedence is
+ giving to the environment.
+ """
+
+ # Both `parameters` and `extra` are Optional, so we
+ # need to get the value, or nothing.
+ settings_params = (
+ model_settings.parameters.extra
+ if model_settings.parameters is not None
+ else None
+ )
+
+ if settings_params is None and env_params == {}:
+ # There must be settings provided by at least the environment OR model settings
+ raise MissingHuggingFaceSettings()
+
+ # Set the default value
+ settings_params = settings_params or {}
+
+ # Overwrite any conflicting keys, giving precedence to the environment
+ settings_params.update(env_params)
+
+ return ExtraDict(settings_params)
| {"golden_diff": "diff --git a/runtimes/huggingface/mlserver_huggingface/settings.py b/runtimes/huggingface/mlserver_huggingface/settings.py\n--- a/runtimes/huggingface/mlserver_huggingface/settings.py\n+++ b/runtimes/huggingface/mlserver_huggingface/settings.py\n@@ -1,7 +1,7 @@\n import os\n import orjson\n \n-from typing import Optional, Dict\n+from typing import Optional, Dict, Union, NewType\n from pydantic import BaseSettings\n from distutils.util import strtobool\n from transformers.pipelines import SUPPORTED_TASKS\n@@ -110,23 +110,33 @@\n return self.task\n \n \n-def parse_parameters_from_env() -> Dict:\n+EXTRA_TYPE_DICT = {\n+ \"INT\": int,\n+ \"FLOAT\": float,\n+ \"DOUBLE\": float,\n+ \"STRING\": str,\n+ \"BOOL\": bool,\n+}\n+\n+ExtraDict = NewType(\"ExtraDict\", Dict[str, Union[str, bool, float, int]])\n+\n+\n+def parse_parameters_from_env() -> ExtraDict:\n \"\"\"\n This method parses the environment variables injected via SCv1.\n+\n+ At least an empty dict is always returned.\n \"\"\"\n # TODO: Once support for SCv1 is deprecated, we should remove this method and rely\n # purely on settings coming via the `model-settings.json` file.\n parameters = orjson.loads(os.environ.get(PARAMETERS_ENV_NAME, \"[]\"))\n \n- type_dict = {\n- \"INT\": int,\n- \"FLOAT\": float,\n- \"DOUBLE\": float,\n- \"STRING\": str,\n- \"BOOL\": bool,\n- }\n+ parsed_parameters: ExtraDict = ExtraDict({})\n+\n+ # Guard: Exit early if there's no parameters\n+ if len(parameters) == 0:\n+ return parsed_parameters\n \n- parsed_parameters = {}\n for param in parameters:\n name = param.get(\"name\")\n value = param.get(\"value\")\n@@ -135,22 +145,20 @@\n parsed_parameters[name] = bool(strtobool(value))\n else:\n try:\n- parsed_parameters[name] = type_dict[type_](value)\n+ parsed_parameters[name] = EXTRA_TYPE_DICT[type_](value)\n except ValueError:\n raise InvalidModelParameter(name, value, type_)\n except KeyError:\n raise InvalidModelParameterType(type_)\n+\n return parsed_parameters\n \n \n def get_huggingface_settings(model_settings: ModelSettings) -> HuggingFaceSettings:\n- env_params = parse_parameters_from_env()\n- if not env_params and (\n- not model_settings.parameters or not model_settings.parameters.extra\n- ):\n- raise MissingHuggingFaceSettings()\n+ \"\"\"Get the HuggingFace settings provided to the runtime\"\"\"\n \n- extra = env_params or model_settings.parameters.extra # type: ignore\n+ env_params = parse_parameters_from_env()\n+ extra = merge_huggingface_settings_extra(model_settings, env_params)\n hf_settings = HuggingFaceSettings(**extra) # type: ignore\n \n if hf_settings.task not in SUPPORTED_TASKS:\n@@ -161,3 +169,35 @@\n raise InvalidOptimumTask(hf_settings.task, SUPPORTED_OPTIMUM_TASKS.keys())\n \n return hf_settings\n+\n+\n+def merge_huggingface_settings_extra(\n+ model_settings: ModelSettings, env_params: ExtraDict\n+) -> ExtraDict:\n+ \"\"\"\n+ This function returns the Extra field of the Settings.\n+\n+ It merges them, iff they're both present, from the\n+ environment AND model settings file. 
Precedence is\n+ giving to the environment.\n+ \"\"\"\n+\n+ # Both `parameters` and `extra` are Optional, so we\n+ # need to get the value, or nothing.\n+ settings_params = (\n+ model_settings.parameters.extra\n+ if model_settings.parameters is not None\n+ else None\n+ )\n+\n+ if settings_params is None and env_params == {}:\n+ # There must be settings provided by at least the environment OR model settings\n+ raise MissingHuggingFaceSettings()\n+\n+ # Set the default value\n+ settings_params = settings_params or {}\n+\n+ # Overwrite any conflicting keys, giving precedence to the environment\n+ settings_params.update(env_params)\n+\n+ return ExtraDict(settings_params)\n", "issue": "[huggingface] Merge predictive unit parameters from env with model settings\n# Background\r\n## What are PREDICTIVE_UNIT_PARAMETERS\r\nThis is a collection of parameters passed though the environment to a HuggingFace model. This is opposed to declaring them in a `model-settings.json` file as it is the case in Seldon Core v2. As of this moment, those parameters are injected via Seldon Core v1 only, where as Seldon Core v2 uses `model-settings.json` to provide the metadata for the model.\r\n\r\nPREDICTIVE_UNIT_PARAMETERS are used only by the HuggingFace runtime and injected only by SCv1.\r\n\r\nYou can find the code for creating `HuggingFaceSettings` in `./runtimes/huggingface/ml-server-huggingface/settings.py` fle along with functions for parsing those params from env vars or from `model-settings.json` \r\n\r\n# What is the problem\r\nCurrently, `HuggingFaceSettings` are created either from parsing the PREDICTIVE_UNIT_PARAMETERS from the environment OR from the `model-settings.json`. Meaning that if there is at least one parameter set in the env var, the `model-settings.json` extra parameters will be ignored. This makes it cumbersome when a deployment is created from the UI because additional params such as `task`, `pretrained_model`, `pretrained_tokenizer`, `framework`, etc. will have to be added one by one in the Wizard. Why they have to be added from the wizard and not just specified in `model-settings.json` - because currently SCv1 always injects `model_uri` param so the PREDICTIVE_UNIT_PARAMETERS env var so it's not empty. Because this var is not empty, the HF settings are initialised from it and the `model-settings.json` is ignored.\r\n\r\n# What needs to be done\r\nWhen creating HuggingFace settings, env vars needs to be merged with params from `model-settings.json`, giving priority to env vars. 
For example:\r\nIf such env var exists:\r\n```\r\nPREDICTIVE_UNIT_PARAMETERS = [{\"name\":\"model_uri\",\"value\":\"/mnt/models\",\"type\":\"STRING\"}, {\"name\":\"task_suffix\",\"value\":\"else\",\"type\":\"STRING\"}]\r\n```\r\nand such `model-settings.json` file exists:\r\n```\r\n{\r\n \"name\": \"transformer\",\r\n \"implementation\": \"mlserver_huggingface.HuggingFaceRuntime\",\r\n \"parameters\": {\r\n \"extra\": {\r\n \"task\": \"text-generation\",\r\n \"task_suffix\": \"something\",\r\n \"framework\": \"pt\"\r\n }\r\n }\r\n}\r\n```\r\nThe outcome should be that the `task` parameter doesn't need to be specified in the wizard and\r\nThe HuggingFace settings should contain values: task = text-generation, task_suffix = else, framework = pt\r\n\r\n# Scope\r\nThis only relating to `HuggingFace` runtime and when it's used from SCv1 and only valid as long as SCv1 is still operational and related code is present in MLServer.\n", "before_files": [{"content": "import os\nimport orjson\n\nfrom typing import Optional, Dict\nfrom pydantic import BaseSettings\nfrom distutils.util import strtobool\nfrom transformers.pipelines import SUPPORTED_TASKS\n\ntry:\n # Optimum 1.7 changed the import name from `SUPPORTED_TASKS` to\n # `ORT_SUPPORTED_TASKS`.\n # We'll try to import the more recent one, falling back to the previous\n # import name if not present.\n # https://github.com/huggingface/optimum/blob/987b02e4f6e2a1c9325b364ff764da2e57e89902/optimum/pipelines/__init__.py#L18\n from optimum.pipelines import ORT_SUPPORTED_TASKS as SUPPORTED_OPTIMUM_TASKS\nexcept ImportError:\n from optimum.pipelines import SUPPORTED_TASKS as SUPPORTED_OPTIMUM_TASKS\n\nfrom mlserver.settings import ModelSettings\n\nfrom .errors import (\n MissingHuggingFaceSettings,\n InvalidTransformersTask,\n InvalidOptimumTask,\n InvalidModelParameter,\n InvalidModelParameterType,\n)\n\nENV_PREFIX_HUGGINGFACE_SETTINGS = \"MLSERVER_MODEL_HUGGINGFACE_\"\nPARAMETERS_ENV_NAME = \"PREDICTIVE_UNIT_PARAMETERS\"\n\n\nclass HuggingFaceSettings(BaseSettings):\n \"\"\"\n Parameters that apply only to HuggingFace models\n \"\"\"\n\n class Config:\n env_prefix = ENV_PREFIX_HUGGINGFACE_SETTINGS\n\n # TODO: Document fields\n task: str = \"\"\n \"\"\"\n Pipeline task to load.\n You can see the available Optimum and Transformers tasks available in the\n links below:\n\n - `Optimum Tasks <https://huggingface.co/docs/optimum/onnxruntime/usage_guides/pipelines#inference-pipelines-with-the-onnx-runtime-accelerator>`_\n - `Transformer Tasks <https://huggingface.co/docs/transformers/task_summary>`_\n \"\"\" # noqa: E501\n\n task_suffix: str = \"\"\n \"\"\"\n Suffix to append to the base task name.\n Useful for, e.g. 
translation tasks which require a suffix on the task name\n to specify source and target.\n \"\"\"\n\n pretrained_model: Optional[str] = None\n \"\"\"\n Name of the model that should be loaded in the pipeline.\n \"\"\"\n\n pretrained_tokenizer: Optional[str] = None\n \"\"\"\n Name of the tokenizer that should be loaded in the pipeline.\n \"\"\"\n\n framework: Optional[str] = None\n \"\"\"\n The framework to use, either \"pt\" for PyTorch or \"tf\" for TensorFlow.\n \"\"\"\n\n optimum_model: bool = False\n \"\"\"\n Flag to decide whether the pipeline should use a Optimum-optimised model or\n the standard Transformers model.\n Under the hood, this will enable the model to use the optimised ONNX\n runtime.\n \"\"\"\n\n device: int = -1\n \"\"\"\n Device in which this pipeline will be loaded (e.g., \"cpu\", \"cuda:1\", \"mps\",\n or a GPU ordinal rank like 1).\n \"\"\"\n\n inter_op_threads: Optional[int] = None\n \"\"\"\n Threads used for parallelism between independent operations.\n PyTorch:\n https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html\n Tensorflow:\n https://www.tensorflow.org/api_docs/python/tf/config/threading/set_inter_op_parallelism_threads\n \"\"\"\n\n intra_op_threads: Optional[int] = None\n \"\"\"\n Threads used within an individual op for parallelism.\n PyTorch:\n https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html\n Tensorflow:\n https://www.tensorflow.org/api_docs/python/tf/config/threading/set_intra_op_parallelism_threads\n \"\"\"\n\n @property\n def task_name(self):\n if self.task == \"translation\":\n return f\"{self.task}{self.task_suffix}\"\n return self.task\n\n\ndef parse_parameters_from_env() -> Dict:\n \"\"\"\n This method parses the environment variables injected via SCv1.\n \"\"\"\n # TODO: Once support for SCv1 is deprecated, we should remove this method and rely\n # purely on settings coming via the `model-settings.json` file.\n parameters = orjson.loads(os.environ.get(PARAMETERS_ENV_NAME, \"[]\"))\n\n type_dict = {\n \"INT\": int,\n \"FLOAT\": float,\n \"DOUBLE\": float,\n \"STRING\": str,\n \"BOOL\": bool,\n }\n\n parsed_parameters = {}\n for param in parameters:\n name = param.get(\"name\")\n value = param.get(\"value\")\n type_ = param.get(\"type\")\n if type_ == \"BOOL\":\n parsed_parameters[name] = bool(strtobool(value))\n else:\n try:\n parsed_parameters[name] = type_dict[type_](value)\n except ValueError:\n raise InvalidModelParameter(name, value, type_)\n except KeyError:\n raise InvalidModelParameterType(type_)\n return parsed_parameters\n\n\ndef get_huggingface_settings(model_settings: ModelSettings) -> HuggingFaceSettings:\n env_params = parse_parameters_from_env()\n if not env_params and (\n not model_settings.parameters or not model_settings.parameters.extra\n ):\n raise MissingHuggingFaceSettings()\n\n extra = env_params or model_settings.parameters.extra # type: ignore\n hf_settings = HuggingFaceSettings(**extra) # type: ignore\n\n if hf_settings.task not in SUPPORTED_TASKS:\n raise InvalidTransformersTask(hf_settings.task, SUPPORTED_TASKS.keys())\n\n if hf_settings.optimum_model:\n if hf_settings.task not in SUPPORTED_OPTIMUM_TASKS:\n raise InvalidOptimumTask(hf_settings.task, SUPPORTED_OPTIMUM_TASKS.keys())\n\n return hf_settings\n", "path": "runtimes/huggingface/mlserver_huggingface/settings.py"}]} | 2,818 | 957 |
gh_patches_debug_16532 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-1516 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update default port for OTLP exporter to 4317
With this change https://github.com/open-telemetry/opentelemetry-specification/pull/1221 the default port for the OTLP exporter is 4317, while the current default port in the OTLP exporter is 55680. This should be updated.
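As a stop-gap while defaults differ across versions, the endpoint can be pinned explicitly (constructor argument as shown in the exporter's own docstring below; the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable should work as well):

```python
from opentelemetry.exporter.otlp.trace_exporter import OTLPSpanExporter

# Pin the spec-defined OTLP/gRPC port instead of relying on the library default.
otlp_exporter = OTLPSpanExporter(endpoint="localhost:4317", insecure=True)
```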
</issue>
<code>
[start of exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 """
17 This library allows to export tracing data to an OTLP collector.
18
19 Usage
20 -----
21
22 The **OTLP Span Exporter** allows to export `OpenTelemetry`_ traces to the
23 `OTLP`_ collector.
24
25
26 .. _OTLP: https://github.com/open-telemetry/opentelemetry-collector/
27 .. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
28
29 .. envvar:: OTEL_EXPORTER_OTLP_COMPRESSION
30
31 The :envvar:`OTEL_EXPORTER_OTLP_COMPRESSION` environment variable allows a
32 compression algorithm to be passed to the OTLP exporter. The compression
33 algorithms that are supported include gzip and no compression. The value should
34 be in the format of a string "gzip" for gzip compression, and no value specified
35 if no compression is the desired choice.
36 Additional details are available `in the specification
37 <https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/protocol/exporter.md#opentelemetry-protocol-exporter>`_.
38
39 .. code:: python
40
41 from opentelemetry import trace
42 from opentelemetry.exporter.otlp.trace_exporter import OTLPSpanExporter
43 from opentelemetry.sdk.resources import Resource
44 from opentelemetry.sdk.trace import TracerProvider
45 from opentelemetry.sdk.trace.export import BatchExportSpanProcessor
46
47 # Resource can be required for some backends, e.g. Jaeger
48 # If resource wouldn't be set - traces wouldn't appears in Jaeger
49 resource = Resource(attributes={
50 "service.name": "service"
51 })
52
53 trace.set_tracer_provider(TracerProvider(resource=resource))
54 tracer = trace.get_tracer(__name__)
55
56 otlp_exporter = OTLPSpanExporter(endpoint="localhost:55680", insecure=True)
57
58 span_processor = BatchExportSpanProcessor(otlp_exporter)
59
60 trace.get_tracer_provider().add_span_processor(span_processor)
61
62 with tracer.start_as_current_span("foo"):
63 print("Hello world!")
64
65 API
66 ---
67 """
68
[end of exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py]
[start of exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """OTLP Exporter"""
16
17 import enum
18 import logging
19 from abc import ABC, abstractmethod
20 from collections.abc import Mapping, Sequence
21 from time import sleep
22 from typing import Any, Callable, Dict, Generic, List, Optional
23 from typing import Sequence as TypingSequence
24 from typing import Text, TypeVar
25
26 from backoff import expo
27 from google.rpc.error_details_pb2 import RetryInfo
28 from grpc import (
29 ChannelCredentials,
30 Compression,
31 RpcError,
32 StatusCode,
33 insecure_channel,
34 secure_channel,
35 ssl_channel_credentials,
36 )
37
38 from opentelemetry.configuration import Configuration
39 from opentelemetry.proto.common.v1.common_pb2 import AnyValue, KeyValue
40 from opentelemetry.proto.resource.v1.resource_pb2 import Resource
41 from opentelemetry.sdk.resources import Resource as SDKResource
42
43 logger = logging.getLogger(__name__)
44 SDKDataT = TypeVar("SDKDataT")
45 ResourceDataT = TypeVar("ResourceDataT")
46 TypingResourceT = TypeVar("TypingResourceT")
47 ExportServiceRequestT = TypeVar("ExportServiceRequestT")
48 ExportResultT = TypeVar("ExportResultT")
49
50
51 class OTLPCompression(enum.Enum):
52 gzip = "gzip"
53
54
55 def _translate_key_values(key: Text, value: Any) -> KeyValue:
56
57 if isinstance(value, bool):
58 any_value = AnyValue(bool_value=value)
59
60 elif isinstance(value, str):
61 any_value = AnyValue(string_value=value)
62
63 elif isinstance(value, int):
64 any_value = AnyValue(int_value=value)
65
66 elif isinstance(value, float):
67 any_value = AnyValue(double_value=value)
68
69 elif isinstance(value, Sequence):
70 any_value = AnyValue(array_value=value)
71
72 elif isinstance(value, Mapping):
73 any_value = AnyValue(kvlist_value=value)
74
75 else:
76 raise Exception(
77 "Invalid type {} of value {}".format(type(value), value)
78 )
79
80 return KeyValue(key=key, value=any_value)
81
82
83 def _get_resource_data(
84 sdk_resource_instrumentation_library_data: Dict[
85 SDKResource, ResourceDataT
86 ],
87 resource_class: Callable[..., TypingResourceT],
88 name: str,
89 ) -> List[TypingResourceT]:
90
91 resource_data = []
92
93 for (
94 sdk_resource,
95 instrumentation_library_data,
96 ) in sdk_resource_instrumentation_library_data.items():
97
98 collector_resource = Resource()
99
100 for key, value in sdk_resource.attributes.items():
101
102 try:
103 # pylint: disable=no-member
104 collector_resource.attributes.append(
105 _translate_key_values(key, value)
106 )
107 except Exception as error: # pylint: disable=broad-except
108 logger.exception(error)
109
110 resource_data.append(
111 resource_class(
112 **{
113 "resource": collector_resource,
114 "instrumentation_library_{}".format(name): [
115 instrumentation_library_data
116 ],
117 }
118 )
119 )
120
121 return resource_data
122
123
124 def _load_credential_from_file(filepath) -> ChannelCredentials:
125 try:
126 with open(filepath, "rb") as creds_file:
127 credential = creds_file.read()
128 return ssl_channel_credentials(credential)
129 except FileNotFoundError:
130 logger.exception("Failed to read credential file")
131 return None
132
133
134 # pylint: disable=no-member
135 class OTLPExporterMixin(
136 ABC, Generic[SDKDataT, ExportServiceRequestT, ExportResultT]
137 ):
138 """OTLP span/metric exporter
139
140 Args:
141 endpoint: OpenTelemetry Collector receiver endpoint
142 insecure: Connection type
143 credentials: ChannelCredentials object for server authentication
144 headers: Headers to send when exporting
145 compression: Compression algorithm to be used in channel
146 timeout: Backend request timeout in seconds
147 """
148
149 def __init__(
150 self,
151 endpoint: Optional[str] = None,
152 insecure: Optional[bool] = None,
153 credentials: Optional[ChannelCredentials] = None,
154 headers: Optional[Sequence] = None,
155 timeout: Optional[int] = None,
156 compression: str = None,
157 ):
158 super().__init__()
159
160 endpoint = (
161 endpoint
162 or Configuration().EXPORTER_OTLP_ENDPOINT
163 or "localhost:55680"
164 )
165
166 if insecure is None:
167 insecure = Configuration().EXPORTER_OTLP_INSECURE
168 if insecure is None:
169 insecure = False
170
171 self._headers = headers or Configuration().EXPORTER_OTLP_HEADERS
172 if isinstance(self._headers, str):
173 self._headers = tuple(
174 tuple(item.split("=")) for item in self._headers.split(",")
175 )
176 self._timeout = (
177 timeout
178 or Configuration().EXPORTER_OTLP_TIMEOUT
179 or 10 # default: 10 seconds
180 )
181 self._collector_span_kwargs = None
182
183 if compression is None:
184 compression_algorithm = Compression.NoCompression
185 elif (
186 compression in OTLPCompression._value2member_map_
187 and OTLPCompression(compression) is OTLPCompression.gzip
188 ):
189 compression_algorithm = Compression.Gzip
190 else:
191 compression_str = Configuration().EXPORTER_OTLP_INSECURE or None
192 if compression_str is None:
193 compression_algorithm = Compression.NoCompression
194 elif (
195 compression_str in OTLPCompression._value2member_map_
196 and OTLPCompression(compression_str) is OTLPCompression.gzip
197 ):
198 compression_algorithm = Compression.Gzip
199 else:
200 raise ValueError(
201 "OTEL_EXPORTER_OTLP_COMPRESSION environment variable does not match gzip."
202 )
203
204 if insecure:
205 self._client = self._stub(
206 insecure_channel(endpoint, compression=compression_algorithm)
207 )
208 return
209
210 # secure mode
211 if (
212 credentials is None
213 and Configuration().EXPORTER_OTLP_CERTIFICATE is None
214 ):
215 # use the default location chosen by gRPC runtime
216 credentials = ssl_channel_credentials()
217 else:
218 credentials = credentials or _load_credential_from_file(
219 Configuration().EXPORTER_OTLP_CERTIFICATE
220 )
221 self._client = self._stub(
222 secure_channel(
223 endpoint, credentials, compression=compression_algorithm
224 )
225 )
226
227 @abstractmethod
228 def _translate_data(
229 self, data: TypingSequence[SDKDataT]
230 ) -> ExportServiceRequestT:
231 pass
232
233 def _export(self, data: TypingSequence[SDKDataT]) -> ExportResultT:
234 # expo returns a generator that yields delay values which grow
235 # exponentially. Once delay is greater than max_value, the yielded
236 # value will remain constant.
237 # max_value is set to 900 (900 seconds is 15 minutes) to use the same
238 # value as used in the Go implementation.
239
240 max_value = 900
241
242 for delay in expo(max_value=max_value):
243
244 if delay == max_value:
245 return self._result.FAILURE
246
247 try:
248 self._client.Export(
249 request=self._translate_data(data),
250 metadata=self._headers,
251 timeout=self._timeout,
252 )
253
254 return self._result.SUCCESS
255
256 except RpcError as error:
257
258 if error.code() in [
259 StatusCode.CANCELLED,
260 StatusCode.DEADLINE_EXCEEDED,
261 StatusCode.PERMISSION_DENIED,
262 StatusCode.UNAUTHENTICATED,
263 StatusCode.RESOURCE_EXHAUSTED,
264 StatusCode.ABORTED,
265 StatusCode.OUT_OF_RANGE,
266 StatusCode.UNAVAILABLE,
267 StatusCode.DATA_LOSS,
268 ]:
269
270 retry_info_bin = dict(error.trailing_metadata()).get(
271 "google.rpc.retryinfo-bin"
272 )
273 if retry_info_bin is not None:
274 retry_info = RetryInfo()
275 retry_info.ParseFromString(retry_info_bin)
276 delay = (
277 retry_info.retry_delay.seconds
278 + retry_info.retry_delay.nanos / 1.0e9
279 )
280
281 logger.debug(
282 "Waiting %ss before retrying export of span", delay
283 )
284 sleep(delay)
285 continue
286
287 if error.code() == StatusCode.OK:
288 return self._result.SUCCESS
289
290 return self._result.FAILURE
291
292 return self._result.FAILURE
293
294 def shutdown(self) -> None:
295 pass
296
[end of exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py
--- a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py
+++ b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py
@@ -53,7 +53,7 @@
trace.set_tracer_provider(TracerProvider(resource=resource))
tracer = trace.get_tracer(__name__)
- otlp_exporter = OTLPSpanExporter(endpoint="localhost:55680", insecure=True)
+ otlp_exporter = OTLPSpanExporter(endpoint="localhost:4317", insecure=True)
span_processor = BatchExportSpanProcessor(otlp_exporter)
diff --git a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py
--- a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py
+++ b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py
@@ -160,7 +160,7 @@
endpoint = (
endpoint
or Configuration().EXPORTER_OTLP_ENDPOINT
- or "localhost:55680"
+ or "localhost:4317"
)
if insecure is None:
| {"golden_diff": "diff --git a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py\n--- a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py\n+++ b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py\n@@ -53,7 +53,7 @@\n trace.set_tracer_provider(TracerProvider(resource=resource))\n tracer = trace.get_tracer(__name__)\n \n- otlp_exporter = OTLPSpanExporter(endpoint=\"localhost:55680\", insecure=True)\n+ otlp_exporter = OTLPSpanExporter(endpoint=\"localhost:4317\", insecure=True)\n \n span_processor = BatchExportSpanProcessor(otlp_exporter)\n \ndiff --git a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py\n--- a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py\n+++ b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py\n@@ -160,7 +160,7 @@\n endpoint = (\n endpoint\n or Configuration().EXPORTER_OTLP_ENDPOINT\n- or \"localhost:55680\"\n+ or \"localhost:4317\"\n )\n \n if insecure is None:\n", "issue": "Update default port for OTLP exporter to 4317\nWith this change https://github.com/open-telemetry/opentelemetry-specification/pull/1221 default port for OTLP exporter is 4317, the current default port in the OTLP exporter is 55680. This should be updated.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n\"\"\"\nThis library allows to export tracing data to an OTLP collector.\n\nUsage\n-----\n\nThe **OTLP Span Exporter** allows to export `OpenTelemetry`_ traces to the\n`OTLP`_ collector.\n\n\n.. _OTLP: https://github.com/open-telemetry/opentelemetry-collector/\n.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/\n\n.. envvar:: OTEL_EXPORTER_OTLP_COMPRESSION\n\nThe :envvar:`OTEL_EXPORTER_OTLP_COMPRESSION` environment variable allows a\ncompression algorithm to be passed to the OTLP exporter. The compression\nalgorithms that are supported include gzip and no compression. The value should\nbe in the format of a string \"gzip\" for gzip compression, and no value specified\nif no compression is the desired choice.\nAdditional details are available `in the specification\n<https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/protocol/exporter.md#opentelemetry-protocol-exporter>`_.\n\n.. code:: python\n\n from opentelemetry import trace\n from opentelemetry.exporter.otlp.trace_exporter import OTLPSpanExporter\n from opentelemetry.sdk.resources import Resource\n from opentelemetry.sdk.trace import TracerProvider\n from opentelemetry.sdk.trace.export import BatchExportSpanProcessor\n\n # Resource can be required for some backends, e.g. 
Jaeger\n # If resource wouldn't be set - traces wouldn't appears in Jaeger\n resource = Resource(attributes={\n \"service.name\": \"service\"\n })\n\n trace.set_tracer_provider(TracerProvider(resource=resource))\n tracer = trace.get_tracer(__name__)\n\n otlp_exporter = OTLPSpanExporter(endpoint=\"localhost:55680\", insecure=True)\n\n span_processor = BatchExportSpanProcessor(otlp_exporter)\n\n trace.get_tracer_provider().add_span_processor(span_processor)\n\n with tracer.start_as_current_span(\"foo\"):\n print(\"Hello world!\")\n\nAPI\n---\n\"\"\"\n", "path": "exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py"}, {"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"OTLP Exporter\"\"\"\n\nimport enum\nimport logging\nfrom abc import ABC, abstractmethod\nfrom collections.abc import Mapping, Sequence\nfrom time import sleep\nfrom typing import Any, Callable, Dict, Generic, List, Optional\nfrom typing import Sequence as TypingSequence\nfrom typing import Text, TypeVar\n\nfrom backoff import expo\nfrom google.rpc.error_details_pb2 import RetryInfo\nfrom grpc import (\n ChannelCredentials,\n Compression,\n RpcError,\n StatusCode,\n insecure_channel,\n secure_channel,\n ssl_channel_credentials,\n)\n\nfrom opentelemetry.configuration import Configuration\nfrom opentelemetry.proto.common.v1.common_pb2 import AnyValue, KeyValue\nfrom opentelemetry.proto.resource.v1.resource_pb2 import Resource\nfrom opentelemetry.sdk.resources import Resource as SDKResource\n\nlogger = logging.getLogger(__name__)\nSDKDataT = TypeVar(\"SDKDataT\")\nResourceDataT = TypeVar(\"ResourceDataT\")\nTypingResourceT = TypeVar(\"TypingResourceT\")\nExportServiceRequestT = TypeVar(\"ExportServiceRequestT\")\nExportResultT = TypeVar(\"ExportResultT\")\n\n\nclass OTLPCompression(enum.Enum):\n gzip = \"gzip\"\n\n\ndef _translate_key_values(key: Text, value: Any) -> KeyValue:\n\n if isinstance(value, bool):\n any_value = AnyValue(bool_value=value)\n\n elif isinstance(value, str):\n any_value = AnyValue(string_value=value)\n\n elif isinstance(value, int):\n any_value = AnyValue(int_value=value)\n\n elif isinstance(value, float):\n any_value = AnyValue(double_value=value)\n\n elif isinstance(value, Sequence):\n any_value = AnyValue(array_value=value)\n\n elif isinstance(value, Mapping):\n any_value = AnyValue(kvlist_value=value)\n\n else:\n raise Exception(\n \"Invalid type {} of value {}\".format(type(value), value)\n )\n\n return KeyValue(key=key, value=any_value)\n\n\ndef _get_resource_data(\n sdk_resource_instrumentation_library_data: Dict[\n SDKResource, ResourceDataT\n ],\n resource_class: Callable[..., TypingResourceT],\n name: str,\n) -> List[TypingResourceT]:\n\n resource_data = []\n\n for (\n sdk_resource,\n instrumentation_library_data,\n ) in sdk_resource_instrumentation_library_data.items():\n\n collector_resource = Resource()\n\n for key, value in sdk_resource.attributes.items():\n\n try:\n # pylint: disable=no-member\n 
collector_resource.attributes.append(\n _translate_key_values(key, value)\n )\n except Exception as error: # pylint: disable=broad-except\n logger.exception(error)\n\n resource_data.append(\n resource_class(\n **{\n \"resource\": collector_resource,\n \"instrumentation_library_{}\".format(name): [\n instrumentation_library_data\n ],\n }\n )\n )\n\n return resource_data\n\n\ndef _load_credential_from_file(filepath) -> ChannelCredentials:\n try:\n with open(filepath, \"rb\") as creds_file:\n credential = creds_file.read()\n return ssl_channel_credentials(credential)\n except FileNotFoundError:\n logger.exception(\"Failed to read credential file\")\n return None\n\n\n# pylint: disable=no-member\nclass OTLPExporterMixin(\n ABC, Generic[SDKDataT, ExportServiceRequestT, ExportResultT]\n):\n \"\"\"OTLP span/metric exporter\n\n Args:\n endpoint: OpenTelemetry Collector receiver endpoint\n insecure: Connection type\n credentials: ChannelCredentials object for server authentication\n headers: Headers to send when exporting\n compression: Compression algorithm to be used in channel\n timeout: Backend request timeout in seconds\n \"\"\"\n\n def __init__(\n self,\n endpoint: Optional[str] = None,\n insecure: Optional[bool] = None,\n credentials: Optional[ChannelCredentials] = None,\n headers: Optional[Sequence] = None,\n timeout: Optional[int] = None,\n compression: str = None,\n ):\n super().__init__()\n\n endpoint = (\n endpoint\n or Configuration().EXPORTER_OTLP_ENDPOINT\n or \"localhost:55680\"\n )\n\n if insecure is None:\n insecure = Configuration().EXPORTER_OTLP_INSECURE\n if insecure is None:\n insecure = False\n\n self._headers = headers or Configuration().EXPORTER_OTLP_HEADERS\n if isinstance(self._headers, str):\n self._headers = tuple(\n tuple(item.split(\"=\")) for item in self._headers.split(\",\")\n )\n self._timeout = (\n timeout\n or Configuration().EXPORTER_OTLP_TIMEOUT\n or 10 # default: 10 seconds\n )\n self._collector_span_kwargs = None\n\n if compression is None:\n compression_algorithm = Compression.NoCompression\n elif (\n compression in OTLPCompression._value2member_map_\n and OTLPCompression(compression) is OTLPCompression.gzip\n ):\n compression_algorithm = Compression.Gzip\n else:\n compression_str = Configuration().EXPORTER_OTLP_INSECURE or None\n if compression_str is None:\n compression_algorithm = Compression.NoCompression\n elif (\n compression_str in OTLPCompression._value2member_map_\n and OTLPCompression(compression_str) is OTLPCompression.gzip\n ):\n compression_algorithm = Compression.Gzip\n else:\n raise ValueError(\n \"OTEL_EXPORTER_OTLP_COMPRESSION environment variable does not match gzip.\"\n )\n\n if insecure:\n self._client = self._stub(\n insecure_channel(endpoint, compression=compression_algorithm)\n )\n return\n\n # secure mode\n if (\n credentials is None\n and Configuration().EXPORTER_OTLP_CERTIFICATE is None\n ):\n # use the default location chosen by gRPC runtime\n credentials = ssl_channel_credentials()\n else:\n credentials = credentials or _load_credential_from_file(\n Configuration().EXPORTER_OTLP_CERTIFICATE\n )\n self._client = self._stub(\n secure_channel(\n endpoint, credentials, compression=compression_algorithm\n )\n )\n\n @abstractmethod\n def _translate_data(\n self, data: TypingSequence[SDKDataT]\n ) -> ExportServiceRequestT:\n pass\n\n def _export(self, data: TypingSequence[SDKDataT]) -> ExportResultT:\n # expo returns a generator that yields delay values which grow\n # exponentially. 
Once delay is greater than max_value, the yielded\n # value will remain constant.\n # max_value is set to 900 (900 seconds is 15 minutes) to use the same\n # value as used in the Go implementation.\n\n max_value = 900\n\n for delay in expo(max_value=max_value):\n\n if delay == max_value:\n return self._result.FAILURE\n\n try:\n self._client.Export(\n request=self._translate_data(data),\n metadata=self._headers,\n timeout=self._timeout,\n )\n\n return self._result.SUCCESS\n\n except RpcError as error:\n\n if error.code() in [\n StatusCode.CANCELLED,\n StatusCode.DEADLINE_EXCEEDED,\n StatusCode.PERMISSION_DENIED,\n StatusCode.UNAUTHENTICATED,\n StatusCode.RESOURCE_EXHAUSTED,\n StatusCode.ABORTED,\n StatusCode.OUT_OF_RANGE,\n StatusCode.UNAVAILABLE,\n StatusCode.DATA_LOSS,\n ]:\n\n retry_info_bin = dict(error.trailing_metadata()).get(\n \"google.rpc.retryinfo-bin\"\n )\n if retry_info_bin is not None:\n retry_info = RetryInfo()\n retry_info.ParseFromString(retry_info_bin)\n delay = (\n retry_info.retry_delay.seconds\n + retry_info.retry_delay.nanos / 1.0e9\n )\n\n logger.debug(\n \"Waiting %ss before retrying export of span\", delay\n )\n sleep(delay)\n continue\n\n if error.code() == StatusCode.OK:\n return self._result.SUCCESS\n\n return self._result.FAILURE\n\n return self._result.FAILURE\n\n def shutdown(self) -> None:\n pass\n", "path": "exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py"}]} | 4,030 | 371 |
gh_patches_debug_34020 | rasdani/github-patches | git_diff | pre-commit__pre-commit-1888 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Path is not mounted correctly when running Docker hooks from Docker
**Situation**:
- In our CI we want to run `pre-commit` inside Docker.
- Some of our hooks are `docker_image`
**Problem**
This line mostly https://github.com/pre-commit/pre-commit/blob/528c7afd18dafa6e47ce73add2c8e1550d105674/pre_commit/languages/docker.py#L94
Currently `pre-commit` mounts the current directory to `/src` and uses the current directory name as the mount base.
However this does not work when `pre-commit` is run inside the container on some mounted path already, because mount points are relative to the host, not to the container.
Example:
```
/opt/my_code <- host, mounts /opt/my_code:/project
/project <- in Docker running pre-commit, pre-commit is doing mount /project:/src
/src <- (in Dockerized hook)
```
Currently pre-commit will try to mount it as `-v /project:/src,rw,Z`. Expected: to mount it as `-v /opt/my_code:/src`
**Possible solution**:
When I replaced `os.getcwd()` from the code above to `translate_path(os.getcwd())` where `translate_path` is taken from https://gist.github.com/dpfoose/f96d4e4b76c2e01265619d545b77987a, it worked perfectly. It does add extra `docker` pip-dependency though.
**See also**: https://forums.docker.com/t/mounting-a-volume-not-working-with-running-docker-in-docker/25775/2
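
For illustration, a minimal sketch of the path-translation idea described above, assuming the Docker CLI can reach the daemon from inside the container; the helper name and fallback behaviour are illustrative, not pre-commit's actual implementation:

```python
import json
import os
import socket
import subprocess


def get_host_path(path: str) -> str:
    """Remap a path inside this container to the corresponding host path."""
    # The container's hostname defaults to its ID, so we can inspect ourselves.
    out = subprocess.check_output(("docker", "inspect", socket.gethostname()))
    (container,) = json.loads(out)
    for mount in container["Mounts"]:
        src, dest = mount["Source"], mount["Destination"]
        # If `path` lives under a mounted destination, swap in the host source.
        if os.path.commonpath((path, dest)) == dest:
            return src + path[len(dest):]
    return path  # not under any mount; fall back to the original path
```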
</issue>
<code>
[start of pre_commit/languages/docker.py]
1 import hashlib
2 import os
3 from typing import Sequence
4 from typing import Tuple
5
6 import pre_commit.constants as C
7 from pre_commit.hook import Hook
8 from pre_commit.languages import helpers
9 from pre_commit.prefix import Prefix
10 from pre_commit.util import clean_path_on_failure
11
12 ENVIRONMENT_DIR = 'docker'
13 PRE_COMMIT_LABEL = 'PRE_COMMIT'
14 get_default_version = helpers.basic_get_default_version
15 healthy = helpers.basic_healthy
16
17
18 def md5(s: str) -> str: # pragma: win32 no cover
19 return hashlib.md5(s.encode()).hexdigest()
20
21
22 def docker_tag(prefix: Prefix) -> str: # pragma: win32 no cover
23 md5sum = md5(os.path.basename(prefix.prefix_dir)).lower()
24 return f'pre-commit-{md5sum}'
25
26
27 def build_docker_image(
28 prefix: Prefix,
29 *,
30 pull: bool,
31 ) -> None: # pragma: win32 no cover
32 cmd: Tuple[str, ...] = (
33 'docker', 'build',
34 '--tag', docker_tag(prefix),
35 '--label', PRE_COMMIT_LABEL,
36 )
37 if pull:
38 cmd += ('--pull',)
39 # This must come last for old versions of docker. See #477
40 cmd += ('.',)
41 helpers.run_setup_cmd(prefix, cmd)
42
43
44 def install_environment(
45 prefix: Prefix, version: str, additional_dependencies: Sequence[str],
46 ) -> None: # pragma: win32 no cover
47 helpers.assert_version_default('docker', version)
48 helpers.assert_no_additional_deps('docker', additional_dependencies)
49
50 directory = prefix.path(
51 helpers.environment_dir(ENVIRONMENT_DIR, C.DEFAULT),
52 )
53
54 # Docker doesn't really have relevant disk environment, but pre-commit
55 # still needs to cleanup its state files on failure
56 with clean_path_on_failure(directory):
57 build_docker_image(prefix, pull=True)
58 os.mkdir(directory)
59
60
61 def get_docker_user() -> Tuple[str, ...]: # pragma: win32 no cover
62 try:
63 return ('-u', f'{os.getuid()}:{os.getgid()}')
64 except AttributeError:
65 return ()
66
67
68 def docker_cmd() -> Tuple[str, ...]: # pragma: win32 no cover
69 return (
70 'docker', 'run',
71 '--rm',
72 *get_docker_user(),
73 # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from
74 # The `Z` option tells Docker to label the content with a private
75 # unshared label. Only the current container can use a private volume.
76 '-v', f'{os.getcwd()}:/src:rw,Z',
77 '--workdir', '/src',
78 )
79
80
81 def run_hook(
82 hook: Hook,
83 file_args: Sequence[str],
84 color: bool,
85 ) -> Tuple[int, bytes]: # pragma: win32 no cover
86 # Rebuild the docker image in case it has gone missing, as many people do
87 # automated cleanup of docker images.
88 build_docker_image(hook.prefix, pull=False)
89
90 entry_exe, *cmd_rest = hook.cmd
91
92 entry_tag = ('--entrypoint', entry_exe, docker_tag(hook.prefix))
93 cmd = (*docker_cmd(), *entry_tag, *cmd_rest)
94 return helpers.run_xargs(hook, cmd, file_args, color=color)
95
[end of pre_commit/languages/docker.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/languages/docker.py b/pre_commit/languages/docker.py
--- a/pre_commit/languages/docker.py
+++ b/pre_commit/languages/docker.py
@@ -1,5 +1,7 @@
import hashlib
+import json
import os
+import socket
from typing import Sequence
from typing import Tuple
@@ -8,6 +10,7 @@
from pre_commit.languages import helpers
from pre_commit.prefix import Prefix
from pre_commit.util import clean_path_on_failure
+from pre_commit.util import cmd_output_b
ENVIRONMENT_DIR = 'docker'
PRE_COMMIT_LABEL = 'PRE_COMMIT'
@@ -15,6 +18,34 @@
healthy = helpers.basic_healthy
+def _is_in_docker() -> bool:
+ try:
+ with open('/proc/1/cgroup', 'rb') as f:
+ return b'docker' in f.read()
+ except FileNotFoundError:
+ return False
+
+
+def _get_docker_path(path: str) -> str:
+ if not _is_in_docker():
+ return path
+ hostname = socket.gethostname()
+
+ _, out, _ = cmd_output_b('docker', 'inspect', hostname)
+
+ container, = json.loads(out)
+ for mount in container['Mounts']:
+ src_path = mount['Source']
+ to_path = mount['Destination']
+ if os.path.commonpath((path, to_path)) == to_path:
+ # So there is something in common,
+ # and we can proceed remapping it
+ return path.replace(to_path, src_path)
+ # we're in Docker, but the path is not mounted, cannot really do anything,
+ # so fall back to original path
+ return path
+
+
def md5(s: str) -> str: # pragma: win32 no cover
return hashlib.md5(s.encode()).hexdigest()
@@ -73,7 +104,7 @@
# https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from
# The `Z` option tells Docker to label the content with a private
# unshared label. Only the current container can use a private volume.
- '-v', f'{os.getcwd()}:/src:rw,Z',
+ '-v', f'{_get_docker_path(os.getcwd())}:/src:rw,Z',
'--workdir', '/src',
)
| {"golden_diff": "diff --git a/pre_commit/languages/docker.py b/pre_commit/languages/docker.py\n--- a/pre_commit/languages/docker.py\n+++ b/pre_commit/languages/docker.py\n@@ -1,5 +1,7 @@\n import hashlib\n+import json\n import os\n+import socket\n from typing import Sequence\n from typing import Tuple\n \n@@ -8,6 +10,7 @@\n from pre_commit.languages import helpers\n from pre_commit.prefix import Prefix\n from pre_commit.util import clean_path_on_failure\n+from pre_commit.util import cmd_output_b\n \n ENVIRONMENT_DIR = 'docker'\n PRE_COMMIT_LABEL = 'PRE_COMMIT'\n@@ -15,6 +18,34 @@\n healthy = helpers.basic_healthy\n \n \n+def _is_in_docker() -> bool:\n+ try:\n+ with open('/proc/1/cgroup', 'rb') as f:\n+ return b'docker' in f.read()\n+ except FileNotFoundError:\n+ return False\n+\n+\n+def _get_docker_path(path: str) -> str:\n+ if not _is_in_docker():\n+ return path\n+ hostname = socket.gethostname()\n+\n+ _, out, _ = cmd_output_b('docker', 'inspect', hostname)\n+\n+ container, = json.loads(out)\n+ for mount in container['Mounts']:\n+ src_path = mount['Source']\n+ to_path = mount['Destination']\n+ if os.path.commonpath((path, to_path)) == to_path:\n+ # So there is something in common,\n+ # and we can proceed remapping it\n+ return path.replace(to_path, src_path)\n+ # we're in Docker, but the path is not mounted, cannot really do anything,\n+ # so fall back to original path\n+ return path\n+\n+\n def md5(s: str) -> str: # pragma: win32 no cover\n return hashlib.md5(s.encode()).hexdigest()\n \n@@ -73,7 +104,7 @@\n # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from\n # The `Z` option tells Docker to label the content with a private\n # unshared label. Only the current container can use a private volume.\n- '-v', f'{os.getcwd()}:/src:rw,Z',\n+ '-v', f'{_get_docker_path(os.getcwd())}:/src:rw,Z',\n '--workdir', '/src',\n )\n", "issue": "Path is not mounted correctly when running Docker hooks from Docker\n**Situation**:\r\n\r\n- In our CI we want to run `pre-commit` inside Docker.\r\n- Some of our hooks are `docker_image`\r\n\r\n**Problem**\r\nThis line mostly https://github.com/pre-commit/pre-commit/blob/528c7afd18dafa6e47ce73add2c8e1550d105674/pre_commit/languages/docker.py#L94\r\n\r\nCurrently `pre-commit` mounts the current directory to `/src` and uses current directory name as mount base.\r\nHowever this does not work when `pre-commit` is run inside the container on some mounted path already, because mount points are relative to the host, not to the container.\r\n\r\n Example: \r\n```\r\n/opt/my_code <- host, mounts /opt/my_code:/project\r\n/project <- in Docker running pre-commit, pre-commit is doing mount /project:/src\r\n/src <- (in Dockerized hook)\r\n```\r\n\r\nCurrently pre-commit will try to mount it as `-v /project:/src,rw,Z`. Expected - to mount it as `-v /opt/my_code:/src`\r\n\r\n**Possible solution**:\r\n\r\nWhen I replaced `os.getcwd()` from the code above to `translate_path(os.getcwd())` where `translate_path` is taken from https://gist.github.com/dpfoose/f96d4e4b76c2e01265619d545b77987a, it worked perfectly. 
It does add extra `docker` pip-dependency though.\r\n\r\n**See also**: https://forums.docker.com/t/mounting-a-volume-not-working-with-running-docker-in-docker/25775/2\n", "before_files": [{"content": "import hashlib\nimport os\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport pre_commit.constants as C\nfrom pre_commit.hook import Hook\nfrom pre_commit.languages import helpers\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import clean_path_on_failure\n\nENVIRONMENT_DIR = 'docker'\nPRE_COMMIT_LABEL = 'PRE_COMMIT'\nget_default_version = helpers.basic_get_default_version\nhealthy = helpers.basic_healthy\n\n\ndef md5(s: str) -> str: # pragma: win32 no cover\n return hashlib.md5(s.encode()).hexdigest()\n\n\ndef docker_tag(prefix: Prefix) -> str: # pragma: win32 no cover\n md5sum = md5(os.path.basename(prefix.prefix_dir)).lower()\n return f'pre-commit-{md5sum}'\n\n\ndef build_docker_image(\n prefix: Prefix,\n *,\n pull: bool,\n) -> None: # pragma: win32 no cover\n cmd: Tuple[str, ...] = (\n 'docker', 'build',\n '--tag', docker_tag(prefix),\n '--label', PRE_COMMIT_LABEL,\n )\n if pull:\n cmd += ('--pull',)\n # This must come last for old versions of docker. See #477\n cmd += ('.',)\n helpers.run_setup_cmd(prefix, cmd)\n\n\ndef install_environment(\n prefix: Prefix, version: str, additional_dependencies: Sequence[str],\n) -> None: # pragma: win32 no cover\n helpers.assert_version_default('docker', version)\n helpers.assert_no_additional_deps('docker', additional_dependencies)\n\n directory = prefix.path(\n helpers.environment_dir(ENVIRONMENT_DIR, C.DEFAULT),\n )\n\n # Docker doesn't really have relevant disk environment, but pre-commit\n # still needs to cleanup its state files on failure\n with clean_path_on_failure(directory):\n build_docker_image(prefix, pull=True)\n os.mkdir(directory)\n\n\ndef get_docker_user() -> Tuple[str, ...]: # pragma: win32 no cover\n try:\n return ('-u', f'{os.getuid()}:{os.getgid()}')\n except AttributeError:\n return ()\n\n\ndef docker_cmd() -> Tuple[str, ...]: # pragma: win32 no cover\n return (\n 'docker', 'run',\n '--rm',\n *get_docker_user(),\n # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from\n # The `Z` option tells Docker to label the content with a private\n # unshared label. Only the current container can use a private volume.\n '-v', f'{os.getcwd()}:/src:rw,Z',\n '--workdir', '/src',\n )\n\n\ndef run_hook(\n hook: Hook,\n file_args: Sequence[str],\n color: bool,\n) -> Tuple[int, bytes]: # pragma: win32 no cover\n # Rebuild the docker image in case it has gone missing, as many people do\n # automated cleanup of docker images.\n build_docker_image(hook.prefix, pull=False)\n\n entry_exe, *cmd_rest = hook.cmd\n\n entry_tag = ('--entrypoint', entry_exe, docker_tag(hook.prefix))\n cmd = (*docker_cmd(), *entry_tag, *cmd_rest)\n return helpers.run_xargs(hook, cmd, file_args, color=color)\n", "path": "pre_commit/languages/docker.py"}]} | 1,817 | 533 |
gh_patches_debug_41877 | rasdani/github-patches | git_diff | litestar-org__litestar-1794 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem.
This is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
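
For context, a minimal sketch of reading package data through `importlib.resources` (Python 3.9+), which works for zipped packages as well; the package name and layout here are hypothetical:

```python
from importlib.resources import files


def read_packaged_file(relative_path: str) -> bytes:
    # files() returns a Traversable, so this works even when the package
    # is inside a zip archive rather than a real directory on disk.
    resource = files("my_package") / "static" / relative_path
    with resource.open("rb") as fh:
        return fh.read()
```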
</issue>
<code>
[start of litestar/contrib/sqlalchemy/base.py]
1 """Application ORM configuration."""
2 from __future__ import annotations
3
4 import re
5 from datetime import date, datetime
6 from typing import TYPE_CHECKING, Any, ClassVar, Protocol, TypeVar, runtime_checkable
7 from uuid import UUID, uuid4
8
9 from pydantic import AnyHttpUrl, AnyUrl, EmailStr
10 from sqlalchemy import Date, DateTime, MetaData, Sequence, String
11 from sqlalchemy.event import listens_for
12 from sqlalchemy.orm import (
13 DeclarativeBase,
14 Mapped,
15 Session,
16 declared_attr,
17 mapped_column,
18 orm_insert_sentinel,
19 registry,
20 )
21
22 from .types import GUID, JSON, BigIntIdentity
23
24 if TYPE_CHECKING:
25 from sqlalchemy.sql import FromClause
26
27 __all__ = (
28 "AuditColumns",
29 "BigIntAuditBase",
30 "BigIntBase",
31 "BigIntPrimaryKey",
32 "CommonTableAttributes",
33 "create_registry",
34 "ModelProtocol",
35 "touch_updated_timestamp",
36 "UUIDAuditBase",
37 "UUIDBase",
38 "UUIDPrimaryKey",
39 )
40
41
42 UUIDBaseT = TypeVar("UUIDBaseT", bound="UUIDBase")
43 BigIntBaseT = TypeVar("BigIntBaseT", bound="BigIntBase")
44
45 convention = {
46 "ix": "ix_%(column_0_label)s",
47 "uq": "uq_%(table_name)s_%(column_0_name)s",
48 "ck": "ck_%(table_name)s_%(constraint_name)s",
49 "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
50 "pk": "pk_%(table_name)s",
51 }
52 """Templates for automated constraint name generation."""
53
54
55 @listens_for(Session, "before_flush")
56 def touch_updated_timestamp(session: Session, *_: Any) -> None:
57 """Set timestamp on update.
58
59 Called from SQLAlchemy's
60 :meth:`before_flush <sqlalchemy.orm.SessionEvents.before_flush>` event to bump the ``updated``
61 timestamp on modified instances.
62
63 Args:
64 session: The sync :class:`Session <sqlalchemy.orm.Session>` instance that underlies the async
65 session.
66 """
67 for instance in session.dirty:
68 if hasattr(instance, "updated"):
69 instance.updated = datetime.now() # noqa: DTZ005
70
71
72 @runtime_checkable
73 class ModelProtocol(Protocol):
74 """The base SQLAlchemy model protocol."""
75
76 __table__: FromClause
77 __name__: ClassVar[str]
78
79 def to_dict(self, exclude: set[str] | None = None) -> dict[str, Any]:
80 """Convert model to dictionary.
81
82 Returns:
83 dict[str, Any]: A dict representation of the model
84 """
85 ...
86
87
88 class UUIDPrimaryKey:
89 """UUID Primary Key Field Mixin."""
90
91 id: Mapped[UUID] = mapped_column(default=uuid4, primary_key=True) # pyright: ignore
92 """UUID Primary key column."""
93
94 @declared_attr
95 def _sentinel(cls) -> Mapped[int]:
96 return orm_insert_sentinel()
97
98
99 class BigIntPrimaryKey:
100 """BigInt Primary Key Field Mixin."""
101
102 @declared_attr
103 def id(cls) -> Mapped[int]:
104 """BigInt Primary key column."""
105 return mapped_column(
106 BigIntIdentity,
107 Sequence(f"{cls.__tablename__}_id_seq", optional=False), # type: ignore[attr-defined] # pyright: ignore
108 primary_key=True,
109 )
110
111
112 class AuditColumns:
113 """Created/Updated At Fields Mixin."""
114
115 created: Mapped[datetime] = mapped_column(default=datetime.now) # pyright: ignore
116 """Date/time of instance creation."""
117 updated: Mapped[datetime] = mapped_column(default=datetime.now) # pyright: ignore
118 """Date/time of instance last update."""
119
120
121 class CommonTableAttributes:
122 """Common attributes for SQLALchemy tables."""
123
124 __name__: ClassVar[str]
125 __table__: FromClause
126
127 # noinspection PyMethodParameters
128 @declared_attr.directive
129 def __tablename__(cls) -> str: # pylint: disable=no-self-argument
130 """Infer table name from class name."""
131 regexp = re.compile("((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))")
132 return regexp.sub(r"_\1", cls.__name__).lower()
133
134 def to_dict(self, exclude: set[str] | None = None) -> dict[str, Any]:
135 """Convert model to dictionary.
136
137 Returns:
138 dict[str, Any]: A dict representation of the model
139 """
140 exclude = exclude.union("_sentinel") if exclude else {"_sentinel"}
141 return {field.name: getattr(self, field.name) for field in self.__table__.columns if field.name not in exclude}
142
143
144 def create_registry() -> registry:
145 """Create a new SQLAlchemy registry."""
146 meta = MetaData(naming_convention=convention)
147 return registry(
148 metadata=meta,
149 type_annotation_map={
150 UUID: GUID,
151 EmailStr: String,
152 AnyUrl: String,
153 AnyHttpUrl: String,
154 dict: JSON,
155 datetime: DateTime,
156 date: Date,
157 },
158 )
159
160
161 orm_registry = create_registry()
162
163
164 class UUIDBase(UUIDPrimaryKey, CommonTableAttributes, DeclarativeBase):
165 """Base for all SQLAlchemy declarative models with UUID primary keys."""
166
167 registry = orm_registry
168
169
170 class UUIDAuditBase(CommonTableAttributes, UUIDPrimaryKey, AuditColumns, DeclarativeBase):
171 """Base for declarative models with UUID primary keys and audit columns."""
172
173 registry = orm_registry
174
175
176 class BigIntBase(BigIntPrimaryKey, CommonTableAttributes, DeclarativeBase):
177 """Base for all SQLAlchemy declarative models with BigInt primary keys."""
178
179 registry = orm_registry
180
181
182 class BigIntAuditBase(CommonTableAttributes, BigIntPrimaryKey, AuditColumns, DeclarativeBase):
183 """Base for declarative models with BigInt primary keys and audit columns."""
184
185 registry = orm_registry
186
[end of litestar/contrib/sqlalchemy/base.py]
[start of litestar/contrib/sqlalchemy/types.py]
1 from __future__ import annotations
2
3 import uuid
4 from base64 import b64decode
5 from typing import TYPE_CHECKING, Any, cast
6
7 from sqlalchemy import text, util
8 from sqlalchemy.dialects.oracle import BLOB as ORA_BLOB
9 from sqlalchemy.dialects.oracle import RAW as ORA_RAW
10 from sqlalchemy.dialects.postgresql import JSONB as PG_JSONB
11 from sqlalchemy.dialects.postgresql import UUID as PG_UUID
12 from sqlalchemy.types import BINARY, CHAR, BigInteger, Integer, SchemaType, TypeDecorator
13 from sqlalchemy.types import JSON as _JSON
14
15 if TYPE_CHECKING:
16 from sqlalchemy.engine import Dialect
17
18 BigIntIdentity = BigInteger().with_variant(Integer, "sqlite")
19
20
21 class GUID(TypeDecorator):
22 """Platform-independent GUID type.
23
24 Uses PostgreSQL's UUID type, Oracle's RAW(16) type, otherwise uses
25 BINARY(16) or CHAR(32), storing as stringified hex values.
26
27 Will accept stringified UUIDs as a hexstring or an actual UUID
28
29 """
30
31 impl = BINARY(16)
32 cache_ok = True
33
34 @property
35 def python_type(self) -> type[uuid.UUID]:
36 return uuid.UUID
37
38 def __init__(self, *args: Any, binary: bool = True, **kwargs: Any) -> None:
39 self.binary = binary
40
41 def load_dialect_impl(self, dialect: Dialect) -> Any:
42 if dialect.name in {"postgresql", "duckdb"}:
43 return dialect.type_descriptor(PG_UUID())
44 if dialect.name == "oracle":
45 return dialect.type_descriptor(ORA_RAW(16))
46 if self.binary:
47 return dialect.type_descriptor(BINARY(16))
48 return dialect.type_descriptor(CHAR(32))
49
50 def process_bind_param(self, value: bytes | str | uuid.UUID | None, dialect: Dialect) -> bytes | str | None:
51 if value is None:
52 return value
53 if dialect.name in {"postgresql", "duckdb"}:
54 return str(value)
55 value = self.to_uuid(value)
56 if value is None:
57 return value
58 if dialect.name in {"oracle", "spanner+spanner"}:
59 return value.bytes
60 return value.bytes if self.binary else value.hex
61
62 def process_result_value(self, value: bytes | str | uuid.UUID | None, dialect: Dialect) -> uuid.UUID | None:
63 if value is None:
64 return value
65 if isinstance(value, uuid.UUID):
66 return value
67 if dialect.name == "spanner+spanner":
68 return uuid.UUID(bytes=b64decode(value))
69 if self.binary:
70 return uuid.UUID(bytes=cast("bytes", value))
71 return uuid.UUID(hex=cast("str", value))
72
73 @staticmethod
74 def to_uuid(value: Any) -> uuid.UUID | None:
75 if isinstance(value, uuid.UUID) or value is None:
76 return value
77 try:
78 value = uuid.UUID(hex=value)
79 except (TypeError, ValueError):
80 value = uuid.UUID(bytes=value)
81 return cast("uuid.UUID | None", value)
82
83
84 class JSON(TypeDecorator, SchemaType): # type: ignore
85 """Platform-independent JSON type.
86
87 Uses JSONB type for postgres, BLOB for Oracle, otherwise uses the generic JSON data type.
88
89 JSON = _JSON().with_variant(PG_JSONB, "postgresql").with_variant(ORA_BLOB, "oracle")
90
91 """
92
93 impl = _JSON
94 cache_ok = True
95
96 @property
97 def python_type(self) -> type[dict]:
98 return dict
99
100 def __init__(self, *args: Any, **kwargs: Any) -> None:
101 """Initialize JSON type"""
102 self.name = kwargs.pop("name", None)
103 self.oracle_strict = kwargs.pop("oracle_strict", True)
104
105 def load_dialect_impl(self, dialect: Dialect) -> Any:
106 if dialect.name == "postgresql":
107 return dialect.type_descriptor(PG_JSONB()) # type: ignore
108 if dialect.name == "oracle":
109 return dialect.type_descriptor(ORA_BLOB())
110 return dialect.type_descriptor(_JSON())
111
112 def _should_create_constraint(self, compiler: Any, **kw: Any) -> bool:
113 return bool(compiler.dialect.name == "oracle")
114
115 def _variant_mapping_for_set_table(self, column: Any) -> dict | None:
116 if column.type._variant_mapping:
117 variant_mapping = dict(column.type._variant_mapping)
118 variant_mapping["_default"] = column.type
119 else:
120 variant_mapping = None
121 return variant_mapping
122
123 @util.preload_module("sqlalchemy.sql.schema")
124 def _set_table(self, column: Any, table: Any) -> None:
125 schema = util.preloaded.sql_schema
126 variant_mapping = self._variant_mapping_for_set_table(column)
127 constraint_options = "(strict)" if self.oracle_strict else ""
128 sqltext = text(f"{column.name} is json {constraint_options}")
129 e = schema.CheckConstraint(
130 sqltext,
131 name=f"{column.name}_is_json",
132 _create_rule=util.portable_instancemethod( # type: ignore[no-untyped-call]
133 self._should_create_constraint,
134 {"variant_mapping": variant_mapping},
135 ),
136 _type_bound=True,
137 )
138 table.append_constraint(e)
139
[end of litestar/contrib/sqlalchemy/types.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/litestar/contrib/sqlalchemy/base.py b/litestar/contrib/sqlalchemy/base.py
--- a/litestar/contrib/sqlalchemy/base.py
+++ b/litestar/contrib/sqlalchemy/base.py
@@ -19,7 +19,7 @@
registry,
)
-from .types import GUID, JSON, BigIntIdentity
+from .types import GUID, BigIntIdentity, JsonB
if TYPE_CHECKING:
from sqlalchemy.sql import FromClause
@@ -151,7 +151,7 @@
EmailStr: String,
AnyUrl: String,
AnyHttpUrl: String,
- dict: JSON,
+ dict: JsonB,
datetime: DateTime,
date: Date,
},
diff --git a/litestar/contrib/sqlalchemy/types.py b/litestar/contrib/sqlalchemy/types.py
--- a/litestar/contrib/sqlalchemy/types.py
+++ b/litestar/contrib/sqlalchemy/types.py
@@ -9,14 +9,14 @@
from sqlalchemy.dialects.oracle import RAW as ORA_RAW
from sqlalchemy.dialects.postgresql import JSONB as PG_JSONB
from sqlalchemy.dialects.postgresql import UUID as PG_UUID
-from sqlalchemy.types import BINARY, CHAR, BigInteger, Integer, SchemaType, TypeDecorator
+from sqlalchemy.types import BINARY, CHAR, BigInteger, Integer, SchemaType, TypeDecorator, TypeEngine
from sqlalchemy.types import JSON as _JSON
+from litestar.serialization import decode_json, encode_json
+
if TYPE_CHECKING:
from sqlalchemy.engine import Dialect
-BigIntIdentity = BigInteger().with_variant(Integer, "sqlite")
-
class GUID(TypeDecorator):
"""Platform-independent GUID type.
@@ -81,16 +81,14 @@
return cast("uuid.UUID | None", value)
-class JSON(TypeDecorator, SchemaType): # type: ignore
- """Platform-independent JSON type.
-
- Uses JSONB type for postgres, BLOB for Oracle, otherwise uses the generic JSON data type.
+class ORA_JSONB(TypeDecorator, SchemaType): # type: ignore # noqa: N801
+ """Oracle Binary JSON type.
- JSON = _JSON().with_variant(PG_JSONB, "postgresql").with_variant(ORA_BLOB, "oracle")
+ JsonB = _JSON().with_variant(PG_JSONB, "postgresql").with_variant(ORA_JSONB, "oracle")
"""
- impl = _JSON
+ impl = ORA_BLOB
cache_ok = True
@property
@@ -102,12 +100,21 @@
self.name = kwargs.pop("name", None)
self.oracle_strict = kwargs.pop("oracle_strict", True)
- def load_dialect_impl(self, dialect: Dialect) -> Any:
- if dialect.name == "postgresql":
- return dialect.type_descriptor(PG_JSONB()) # type: ignore
- if dialect.name == "oracle":
- return dialect.type_descriptor(ORA_BLOB())
- return dialect.type_descriptor(_JSON())
+ def coerce_compared_value(self, op: Any, value: Any) -> Any:
+ return self.impl.coerce_compared_value(op=op, value=value) # type: ignore
+
+ def load_dialect_impl(self, dialect: Dialect) -> TypeEngine[Any]:
+ return dialect.type_descriptor(ORA_BLOB())
+
+ def process_bind_param(self, value: Any, dialect: Dialect) -> Any | None:
+ if value is None:
+ return value
+ return encode_json(value)
+
+ def process_result_value(self, value: bytes | None, dialect: Dialect) -> Any | None:
+ if value is None:
+ return value
+ return decode_json(value)
def _should_create_constraint(self, compiler: Any, **kw: Any) -> bool:
return bool(compiler.dialect.name == "oracle")
@@ -136,3 +143,7 @@
_type_bound=True,
)
table.append_constraint(e)
+
+
+BigIntIdentity = BigInteger().with_variant(Integer, "sqlite")
+JsonB = _JSON().with_variant(PG_JSONB, "postgresql").with_variant(ORA_JSONB, "oracle")
| {"golden_diff": "diff --git a/litestar/contrib/sqlalchemy/base.py b/litestar/contrib/sqlalchemy/base.py\n--- a/litestar/contrib/sqlalchemy/base.py\n+++ b/litestar/contrib/sqlalchemy/base.py\n@@ -19,7 +19,7 @@\n registry,\n )\n \n-from .types import GUID, JSON, BigIntIdentity\n+from .types import GUID, BigIntIdentity, JsonB\n \n if TYPE_CHECKING:\n from sqlalchemy.sql import FromClause\n@@ -151,7 +151,7 @@\n EmailStr: String,\n AnyUrl: String,\n AnyHttpUrl: String,\n- dict: JSON,\n+ dict: JsonB,\n datetime: DateTime,\n date: Date,\n },\ndiff --git a/litestar/contrib/sqlalchemy/types.py b/litestar/contrib/sqlalchemy/types.py\n--- a/litestar/contrib/sqlalchemy/types.py\n+++ b/litestar/contrib/sqlalchemy/types.py\n@@ -9,14 +9,14 @@\n from sqlalchemy.dialects.oracle import RAW as ORA_RAW\n from sqlalchemy.dialects.postgresql import JSONB as PG_JSONB\n from sqlalchemy.dialects.postgresql import UUID as PG_UUID\n-from sqlalchemy.types import BINARY, CHAR, BigInteger, Integer, SchemaType, TypeDecorator\n+from sqlalchemy.types import BINARY, CHAR, BigInteger, Integer, SchemaType, TypeDecorator, TypeEngine\n from sqlalchemy.types import JSON as _JSON\n \n+from litestar.serialization import decode_json, encode_json\n+\n if TYPE_CHECKING:\n from sqlalchemy.engine import Dialect\n \n-BigIntIdentity = BigInteger().with_variant(Integer, \"sqlite\")\n-\n \n class GUID(TypeDecorator):\n \"\"\"Platform-independent GUID type.\n@@ -81,16 +81,14 @@\n return cast(\"uuid.UUID | None\", value)\n \n \n-class JSON(TypeDecorator, SchemaType): # type: ignore\n- \"\"\"Platform-independent JSON type.\n-\n- Uses JSONB type for postgres, BLOB for Oracle, otherwise uses the generic JSON data type.\n+class ORA_JSONB(TypeDecorator, SchemaType): # type: ignore # noqa: N801\n+ \"\"\"Oracle Binary JSON type.\n \n- JSON = _JSON().with_variant(PG_JSONB, \"postgresql\").with_variant(ORA_BLOB, \"oracle\")\n+ JsonB = _JSON().with_variant(PG_JSONB, \"postgresql\").with_variant(ORA_JSONB, \"oracle\")\n \n \"\"\"\n \n- impl = _JSON\n+ impl = ORA_BLOB\n cache_ok = True\n \n @property\n@@ -102,12 +100,21 @@\n self.name = kwargs.pop(\"name\", None)\n self.oracle_strict = kwargs.pop(\"oracle_strict\", True)\n \n- def load_dialect_impl(self, dialect: Dialect) -> Any:\n- if dialect.name == \"postgresql\":\n- return dialect.type_descriptor(PG_JSONB()) # type: ignore\n- if dialect.name == \"oracle\":\n- return dialect.type_descriptor(ORA_BLOB())\n- return dialect.type_descriptor(_JSON())\n+ def coerce_compared_value(self, op: Any, value: Any) -> Any:\n+ return self.impl.coerce_compared_value(op=op, value=value) # type: ignore\n+\n+ def load_dialect_impl(self, dialect: Dialect) -> TypeEngine[Any]:\n+ return dialect.type_descriptor(ORA_BLOB())\n+\n+ def process_bind_param(self, value: Any, dialect: Dialect) -> Any | None:\n+ if value is None:\n+ return value\n+ return encode_json(value)\n+\n+ def process_result_value(self, value: bytes | None, dialect: Dialect) -> Any | None:\n+ if value is None:\n+ return value\n+ return decode_json(value)\n \n def _should_create_constraint(self, compiler: Any, **kw: Any) -> bool:\n return bool(compiler.dialect.name == \"oracle\")\n@@ -136,3 +143,7 @@\n _type_bound=True,\n )\n table.append_constraint(e)\n+\n+\n+BigIntIdentity = BigInteger().with_variant(Integer, \"sqlite\")\n+JsonB = _JSON().with_variant(PG_JSONB, \"postgresql\").with_variant(ORA_JSONB, \"oracle\")\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using 
[importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "\"\"\"Application ORM configuration.\"\"\"\nfrom __future__ import annotations\n\nimport re\nfrom datetime import date, datetime\nfrom typing import TYPE_CHECKING, Any, ClassVar, Protocol, TypeVar, runtime_checkable\nfrom uuid import UUID, uuid4\n\nfrom pydantic import AnyHttpUrl, AnyUrl, EmailStr\nfrom sqlalchemy import Date, DateTime, MetaData, Sequence, String\nfrom sqlalchemy.event import listens_for\nfrom sqlalchemy.orm import (\n DeclarativeBase,\n Mapped,\n Session,\n declared_attr,\n mapped_column,\n orm_insert_sentinel,\n registry,\n)\n\nfrom .types import GUID, JSON, BigIntIdentity\n\nif TYPE_CHECKING:\n from sqlalchemy.sql import FromClause\n\n__all__ = (\n \"AuditColumns\",\n \"BigIntAuditBase\",\n \"BigIntBase\",\n \"BigIntPrimaryKey\",\n \"CommonTableAttributes\",\n \"create_registry\",\n \"ModelProtocol\",\n \"touch_updated_timestamp\",\n \"UUIDAuditBase\",\n \"UUIDBase\",\n \"UUIDPrimaryKey\",\n)\n\n\nUUIDBaseT = TypeVar(\"UUIDBaseT\", bound=\"UUIDBase\")\nBigIntBaseT = TypeVar(\"BigIntBaseT\", bound=\"BigIntBase\")\n\nconvention = {\n \"ix\": \"ix_%(column_0_label)s\",\n \"uq\": \"uq_%(table_name)s_%(column_0_name)s\",\n \"ck\": \"ck_%(table_name)s_%(constraint_name)s\",\n \"fk\": \"fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s\",\n \"pk\": \"pk_%(table_name)s\",\n}\n\"\"\"Templates for automated constraint name generation.\"\"\"\n\n\n@listens_for(Session, \"before_flush\")\ndef touch_updated_timestamp(session: Session, *_: Any) -> None:\n \"\"\"Set timestamp on update.\n\n Called from SQLAlchemy's\n :meth:`before_flush <sqlalchemy.orm.SessionEvents.before_flush>` event to bump the ``updated``\n timestamp on modified instances.\n\n Args:\n session: The sync :class:`Session <sqlalchemy.orm.Session>` instance that underlies the async\n session.\n \"\"\"\n for instance in session.dirty:\n if hasattr(instance, \"updated\"):\n instance.updated = datetime.now() # noqa: DTZ005\n\n\n@runtime_checkable\nclass ModelProtocol(Protocol):\n \"\"\"The base SQLAlchemy model protocol.\"\"\"\n\n __table__: FromClause\n __name__: ClassVar[str]\n\n def to_dict(self, exclude: set[str] | None = None) -> dict[str, Any]:\n \"\"\"Convert model to dictionary.\n\n Returns:\n dict[str, Any]: A dict representation of the model\n \"\"\"\n ...\n\n\nclass UUIDPrimaryKey:\n \"\"\"UUID Primary Key Field Mixin.\"\"\"\n\n id: Mapped[UUID] = mapped_column(default=uuid4, primary_key=True) # pyright: ignore\n \"\"\"UUID Primary key column.\"\"\"\n\n @declared_attr\n def _sentinel(cls) -> Mapped[int]:\n return orm_insert_sentinel()\n\n\nclass BigIntPrimaryKey:\n \"\"\"BigInt Primary Key Field Mixin.\"\"\"\n\n @declared_attr\n def id(cls) -> Mapped[int]:\n \"\"\"BigInt Primary key column.\"\"\"\n return mapped_column(\n BigIntIdentity,\n Sequence(f\"{cls.__tablename__}_id_seq\", optional=False), # type: ignore[attr-defined] # pyright: ignore\n primary_key=True,\n )\n\n\nclass AuditColumns:\n \"\"\"Created/Updated At Fields Mixin.\"\"\"\n\n 
created: Mapped[datetime] = mapped_column(default=datetime.now) # pyright: ignore\n \"\"\"Date/time of instance creation.\"\"\"\n updated: Mapped[datetime] = mapped_column(default=datetime.now) # pyright: ignore\n \"\"\"Date/time of instance last update.\"\"\"\n\n\nclass CommonTableAttributes:\n \"\"\"Common attributes for SQLALchemy tables.\"\"\"\n\n __name__: ClassVar[str]\n __table__: FromClause\n\n # noinspection PyMethodParameters\n @declared_attr.directive\n def __tablename__(cls) -> str: # pylint: disable=no-self-argument\n \"\"\"Infer table name from class name.\"\"\"\n regexp = re.compile(\"((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))\")\n return regexp.sub(r\"_\\1\", cls.__name__).lower()\n\n def to_dict(self, exclude: set[str] | None = None) -> dict[str, Any]:\n \"\"\"Convert model to dictionary.\n\n Returns:\n dict[str, Any]: A dict representation of the model\n \"\"\"\n exclude = exclude.union(\"_sentinel\") if exclude else {\"_sentinel\"}\n return {field.name: getattr(self, field.name) for field in self.__table__.columns if field.name not in exclude}\n\n\ndef create_registry() -> registry:\n \"\"\"Create a new SQLAlchemy registry.\"\"\"\n meta = MetaData(naming_convention=convention)\n return registry(\n metadata=meta,\n type_annotation_map={\n UUID: GUID,\n EmailStr: String,\n AnyUrl: String,\n AnyHttpUrl: String,\n dict: JSON,\n datetime: DateTime,\n date: Date,\n },\n )\n\n\norm_registry = create_registry()\n\n\nclass UUIDBase(UUIDPrimaryKey, CommonTableAttributes, DeclarativeBase):\n \"\"\"Base for all SQLAlchemy declarative models with UUID primary keys.\"\"\"\n\n registry = orm_registry\n\n\nclass UUIDAuditBase(CommonTableAttributes, UUIDPrimaryKey, AuditColumns, DeclarativeBase):\n \"\"\"Base for declarative models with UUID primary keys and audit columns.\"\"\"\n\n registry = orm_registry\n\n\nclass BigIntBase(BigIntPrimaryKey, CommonTableAttributes, DeclarativeBase):\n \"\"\"Base for all SQLAlchemy declarative models with BigInt primary keys.\"\"\"\n\n registry = orm_registry\n\n\nclass BigIntAuditBase(CommonTableAttributes, BigIntPrimaryKey, AuditColumns, DeclarativeBase):\n \"\"\"Base for declarative models with BigInt primary keys and audit columns.\"\"\"\n\n registry = orm_registry\n", "path": "litestar/contrib/sqlalchemy/base.py"}, {"content": "from __future__ import annotations\n\nimport uuid\nfrom base64 import b64decode\nfrom typing import TYPE_CHECKING, Any, cast\n\nfrom sqlalchemy import text, util\nfrom sqlalchemy.dialects.oracle import BLOB as ORA_BLOB\nfrom sqlalchemy.dialects.oracle import RAW as ORA_RAW\nfrom sqlalchemy.dialects.postgresql import JSONB as PG_JSONB\nfrom sqlalchemy.dialects.postgresql import UUID as PG_UUID\nfrom sqlalchemy.types import BINARY, CHAR, BigInteger, Integer, SchemaType, TypeDecorator\nfrom sqlalchemy.types import JSON as _JSON\n\nif TYPE_CHECKING:\n from sqlalchemy.engine import Dialect\n\nBigIntIdentity = BigInteger().with_variant(Integer, \"sqlite\")\n\n\nclass GUID(TypeDecorator):\n \"\"\"Platform-independent GUID type.\n\n Uses PostgreSQL's UUID type, Oracle's RAW(16) type, otherwise uses\n BINARY(16) or CHAR(32), storing as stringified hex values.\n\n Will accept stringified UUIDs as a hexstring or an actual UUID\n\n \"\"\"\n\n impl = BINARY(16)\n cache_ok = True\n\n @property\n def python_type(self) -> type[uuid.UUID]:\n return uuid.UUID\n\n def __init__(self, *args: Any, binary: bool = True, **kwargs: Any) -> None:\n self.binary = binary\n\n def load_dialect_impl(self, dialect: Dialect) -> Any:\n if dialect.name in 
{\"postgresql\", \"duckdb\"}:\n return dialect.type_descriptor(PG_UUID())\n if dialect.name == \"oracle\":\n return dialect.type_descriptor(ORA_RAW(16))\n if self.binary:\n return dialect.type_descriptor(BINARY(16))\n return dialect.type_descriptor(CHAR(32))\n\n def process_bind_param(self, value: bytes | str | uuid.UUID | None, dialect: Dialect) -> bytes | str | None:\n if value is None:\n return value\n if dialect.name in {\"postgresql\", \"duckdb\"}:\n return str(value)\n value = self.to_uuid(value)\n if value is None:\n return value\n if dialect.name in {\"oracle\", \"spanner+spanner\"}:\n return value.bytes\n return value.bytes if self.binary else value.hex\n\n def process_result_value(self, value: bytes | str | uuid.UUID | None, dialect: Dialect) -> uuid.UUID | None:\n if value is None:\n return value\n if isinstance(value, uuid.UUID):\n return value\n if dialect.name == \"spanner+spanner\":\n return uuid.UUID(bytes=b64decode(value))\n if self.binary:\n return uuid.UUID(bytes=cast(\"bytes\", value))\n return uuid.UUID(hex=cast(\"str\", value))\n\n @staticmethod\n def to_uuid(value: Any) -> uuid.UUID | None:\n if isinstance(value, uuid.UUID) or value is None:\n return value\n try:\n value = uuid.UUID(hex=value)\n except (TypeError, ValueError):\n value = uuid.UUID(bytes=value)\n return cast(\"uuid.UUID | None\", value)\n\n\nclass JSON(TypeDecorator, SchemaType): # type: ignore\n \"\"\"Platform-independent JSON type.\n\n Uses JSONB type for postgres, BLOB for Oracle, otherwise uses the generic JSON data type.\n\n JSON = _JSON().with_variant(PG_JSONB, \"postgresql\").with_variant(ORA_BLOB, \"oracle\")\n\n \"\"\"\n\n impl = _JSON\n cache_ok = True\n\n @property\n def python_type(self) -> type[dict]:\n return dict\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n \"\"\"Initialize JSON type\"\"\"\n self.name = kwargs.pop(\"name\", None)\n self.oracle_strict = kwargs.pop(\"oracle_strict\", True)\n\n def load_dialect_impl(self, dialect: Dialect) -> Any:\n if dialect.name == \"postgresql\":\n return dialect.type_descriptor(PG_JSONB()) # type: ignore\n if dialect.name == \"oracle\":\n return dialect.type_descriptor(ORA_BLOB())\n return dialect.type_descriptor(_JSON())\n\n def _should_create_constraint(self, compiler: Any, **kw: Any) -> bool:\n return bool(compiler.dialect.name == \"oracle\")\n\n def _variant_mapping_for_set_table(self, column: Any) -> dict | None:\n if column.type._variant_mapping:\n variant_mapping = dict(column.type._variant_mapping)\n variant_mapping[\"_default\"] = column.type\n else:\n variant_mapping = None\n return variant_mapping\n\n @util.preload_module(\"sqlalchemy.sql.schema\")\n def _set_table(self, column: Any, table: Any) -> None:\n schema = util.preloaded.sql_schema\n variant_mapping = self._variant_mapping_for_set_table(column)\n constraint_options = \"(strict)\" if self.oracle_strict else \"\"\n sqltext = text(f\"{column.name} is json {constraint_options}\")\n e = schema.CheckConstraint(\n sqltext,\n name=f\"{column.name}_is_json\",\n _create_rule=util.portable_instancemethod( # type: ignore[no-untyped-call]\n self._should_create_constraint,\n {\"variant_mapping\": variant_mapping},\n ),\n _type_bound=True,\n )\n table.append_constraint(e)\n", "path": "litestar/contrib/sqlalchemy/types.py"}]} | 3,930 | 935 |
gh_patches_debug_33085 | rasdani/github-patches | git_diff | goauthentik__authentik-7028 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Date field can't be serialized to JSON on user write stage
**Describe the bug**
`Date` fields can't be serialised to JSON and thus are not saved to the user.
**To Reproduce**
I've added a field of type `Date` to my user settings by adding it in the `default-user-settings` prompt stage.
When I go into my user settings, set a value and save it, my user's fields are no longer displayed; instead they are replaced by an `Open settings` button.

When I click the button I get this error message:
```
builtins.TypeError: Object of type date is not JSON serializable
```

- authentik version: 2023.6.1
- Deployment: docker-compose
**Additional context**
https://discord.com/channels/809154715984199690/1129892642080161913
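
As a generic illustration of the underlying problem (not authentik's actual code), date values can be made JSON-serializable by converting them to ISO 8601 strings, e.g. via a `default` hook:

```python
import json
from datetime import date, datetime


def json_default(value):
    # Convert date/datetime objects to ISO 8601 strings; anything else is
    # still rejected, mirroring json.dumps' normal behaviour.
    if isinstance(value, (datetime, date)):
        return value.isoformat()
    raise TypeError(f"Object of type {type(value).__name__} is not JSON serializable")


print(json.dumps({"birthday": date(1990, 5, 17)}, default=json_default))
# -> {"birthday": "1990-05-17"}
```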
</issue>
<code>
[start of authentik/events/utils.py]
1 """event utilities"""
2 import re
3 from copy import copy
4 from dataclasses import asdict, is_dataclass
5 from enum import Enum
6 from pathlib import Path
7 from types import GeneratorType
8 from typing import Any, Optional
9 from uuid import UUID
10
11 from django.contrib.auth.models import AnonymousUser
12 from django.core.handlers.wsgi import WSGIRequest
13 from django.db import models
14 from django.db.models.base import Model
15 from django.http.request import HttpRequest
16 from django.views.debug import SafeExceptionReporterFilter
17 from geoip2.models import City
18 from guardian.utils import get_anonymous_user
19
20 from authentik.blueprints.v1.common import YAMLTag
21 from authentik.core.models import User
22 from authentik.events.geo import GEOIP_READER
23 from authentik.policies.types import PolicyRequest
24
25 # Special keys which are *not* cleaned, even when the default filter
26 # is matched
27 ALLOWED_SPECIAL_KEYS = re.compile("passing", flags=re.I)
28
29
30 def cleanse_item(key: str, value: Any) -> Any:
31 """Cleanse a single item"""
32 if isinstance(value, dict):
33 return cleanse_dict(value)
34 if isinstance(value, (list, tuple, set)):
35 for idx, item in enumerate(value):
36 value[idx] = cleanse_item(key, item)
37 return value
38 try:
39 if SafeExceptionReporterFilter.hidden_settings.search(
40 key
41 ) and not ALLOWED_SPECIAL_KEYS.search(key):
42 return SafeExceptionReporterFilter.cleansed_substitute
43 except TypeError: # pragma: no cover
44 return value
45 return value
46
47
48 def cleanse_dict(source: dict[Any, Any]) -> dict[Any, Any]:
49 """Cleanse a dictionary, recursively"""
50 final_dict = {}
51 for key, value in source.items():
52 new_value = cleanse_item(key, value)
53 if new_value is not ...:
54 final_dict[key] = new_value
55 return final_dict
56
57
58 def model_to_dict(model: Model) -> dict[str, Any]:
59 """Convert model to dict"""
60 name = str(model)
61 if hasattr(model, "name"):
62 name = model.name
63 return {
64 "app": model._meta.app_label,
65 "model_name": model._meta.model_name,
66 "pk": model.pk,
67 "name": name,
68 }
69
70
71 def get_user(user: User, original_user: Optional[User] = None) -> dict[str, Any]:
72 """Convert user object to dictionary, optionally including the original user"""
73 if isinstance(user, AnonymousUser):
74 user = get_anonymous_user()
75 user_data = {
76 "username": user.username,
77 "pk": user.pk,
78 "email": user.email,
79 }
80 if original_user:
81 original_data = get_user(original_user)
82 original_data["on_behalf_of"] = user_data
83 return original_data
84 return user_data
85
86
87 # pylint: disable=too-many-return-statements
88 def sanitize_item(value: Any) -> Any:
89 """Sanitize a single item, ensure it is JSON parsable"""
90 if is_dataclass(value):
91 # Because asdict calls `copy.deepcopy(obj)` on everything that's not tuple/dict,
92 # and deepcopy doesn't work with HttpRequest (neither django nor rest_framework).
93 # (more specifically doesn't work with ResolverMatch)
94 # rest_framework's custom Request class makes this more complicated as it also holds a
95 # thread lock.
96 # Since this class is mainly used for Events which already hold the http request context
97 # we just remove the http_request from the shallow policy request
98 # Currently, the only dataclass that actually holds an http request is a PolicyRequest
99 if isinstance(value, PolicyRequest) and value.http_request is not None:
100 value: PolicyRequest = copy(value)
101 value.http_request = None
102 value = asdict(value)
103 if isinstance(value, dict):
104 return sanitize_dict(value)
105 if isinstance(value, GeneratorType):
106 return sanitize_item(list(value))
107 if isinstance(value, (list, tuple, set)):
108 new_values = []
109 for item in value:
110 new_value = sanitize_item(item)
111 if new_value:
112 new_values.append(new_value)
113 return new_values
114 if isinstance(value, (User, AnonymousUser)):
115 return sanitize_dict(get_user(value))
116 if isinstance(value, models.Model):
117 return sanitize_dict(model_to_dict(value))
118 if isinstance(value, UUID):
119 return value.hex
120 if isinstance(value, (HttpRequest, WSGIRequest)):
121 return ...
122 if isinstance(value, City):
123 return GEOIP_READER.city_to_dict(value)
124 if isinstance(value, Path):
125 return str(value)
126 if isinstance(value, Exception):
127 return str(value)
128 if isinstance(value, YAMLTag):
129 return str(value)
130 if isinstance(value, Enum):
131 return value.value
132 if isinstance(value, type):
133 return {
134 "type": value.__name__,
135 "module": value.__module__,
136 }
137 return value
138
139
140 def sanitize_dict(source: dict[Any, Any]) -> dict[Any, Any]:
141 """clean source of all Models that would interfere with the JSONField.
142 Models are replaced with a dictionary of {
143 app: str,
144 name: str,
145 pk: Any
146 }"""
147 final_dict = {}
148 for key, value in source.items():
149 new_value = sanitize_item(value)
150 if new_value is not ...:
151 final_dict[key] = new_value
152 return final_dict
153
[end of authentik/events/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/authentik/events/utils.py b/authentik/events/utils.py
--- a/authentik/events/utils.py
+++ b/authentik/events/utils.py
@@ -2,6 +2,7 @@
import re
from copy import copy
from dataclasses import asdict, is_dataclass
+from datetime import date, datetime, time, timedelta
from enum import Enum
from pathlib import Path
from types import GeneratorType
@@ -13,6 +14,7 @@
from django.db import models
from django.db.models.base import Model
from django.http.request import HttpRequest
+from django.utils import timezone
from django.views.debug import SafeExceptionReporterFilter
from geoip2.models import City
from guardian.utils import get_anonymous_user
@@ -84,7 +86,7 @@
return user_data
-# pylint: disable=too-many-return-statements
+# pylint: disable=too-many-return-statements,too-many-branches
def sanitize_item(value: Any) -> Any:
"""Sanitize a single item, ensure it is JSON parsable"""
if is_dataclass(value):
@@ -134,6 +136,23 @@
"type": value.__name__,
"module": value.__module__,
}
+ # See
+ # https://github.com/encode/django-rest-framework/blob/master/rest_framework/utils/encoders.py
+ # For Date Time string spec, see ECMA 262
+ # https://ecma-international.org/ecma-262/5.1/#sec-15.9.1.15
+ if isinstance(value, datetime):
+ representation = value.isoformat()
+ if representation.endswith("+00:00"):
+ representation = representation[:-6] + "Z"
+ return representation
+ if isinstance(value, date):
+ return value.isoformat()
+ if isinstance(value, time):
+ if timezone and timezone.is_aware(value):
+ raise ValueError("JSON can't represent timezone-aware times.")
+ return value.isoformat()
+ if isinstance(value, timedelta):
+ return str(value.total_seconds())
return value
| {"golden_diff": "diff --git a/authentik/events/utils.py b/authentik/events/utils.py\n--- a/authentik/events/utils.py\n+++ b/authentik/events/utils.py\n@@ -2,6 +2,7 @@\n import re\n from copy import copy\n from dataclasses import asdict, is_dataclass\n+from datetime import date, datetime, time, timedelta\n from enum import Enum\n from pathlib import Path\n from types import GeneratorType\n@@ -13,6 +14,7 @@\n from django.db import models\n from django.db.models.base import Model\n from django.http.request import HttpRequest\n+from django.utils import timezone\n from django.views.debug import SafeExceptionReporterFilter\n from geoip2.models import City\n from guardian.utils import get_anonymous_user\n@@ -84,7 +86,7 @@\n return user_data\n \n \n-# pylint: disable=too-many-return-statements\n+# pylint: disable=too-many-return-statements,too-many-branches\n def sanitize_item(value: Any) -> Any:\n \"\"\"Sanitize a single item, ensure it is JSON parsable\"\"\"\n if is_dataclass(value):\n@@ -134,6 +136,23 @@\n \"type\": value.__name__,\n \"module\": value.__module__,\n }\n+ # See\n+ # https://github.com/encode/django-rest-framework/blob/master/rest_framework/utils/encoders.py\n+ # For Date Time string spec, see ECMA 262\n+ # https://ecma-international.org/ecma-262/5.1/#sec-15.9.1.15\n+ if isinstance(value, datetime):\n+ representation = value.isoformat()\n+ if representation.endswith(\"+00:00\"):\n+ representation = representation[:-6] + \"Z\"\n+ return representation\n+ if isinstance(value, date):\n+ return value.isoformat()\n+ if isinstance(value, time):\n+ if timezone and timezone.is_aware(value):\n+ raise ValueError(\"JSON can't represent timezone-aware times.\")\n+ return value.isoformat()\n+ if isinstance(value, timedelta):\n+ return str(value.total_seconds())\n return value\n", "issue": "Date field can't be serialized to JSON on user write stage\n**Describe the bug**\r\n\r\n`Date` fields can't be serialised to JSON and thus are not saved to the user.\r\n\r\n**To Reproduce**\r\n\r\nI've added a field of type `Date` to my user settings by adding it in the `default-user-settings` prompt stage.\r\nWhen I go into my user settings, set a value and save it, my user's fields are no longer displayed, instead replaced by a `Open settings` button.\r\n\r\n\r\n\r\nWhen I click the button I get this error message:\r\n\r\n```\r\nbuiltins.TypeError: Object of type date is not JSON serializable\r\n```\r\n\r\n\r\n\r\n- authentik version: 2023.6.1\r\n- Deployment: docker-compose\r\n\r\n**Additional context**\r\nhttps://discord.com/channels/809154715984199690/1129892642080161913\r\n\n", "before_files": [{"content": "\"\"\"event utilities\"\"\"\nimport re\nfrom copy import copy\nfrom dataclasses import asdict, is_dataclass\nfrom enum import Enum\nfrom pathlib import Path\nfrom types import GeneratorType\nfrom typing import Any, Optional\nfrom uuid import UUID\n\nfrom django.contrib.auth.models import AnonymousUser\nfrom django.core.handlers.wsgi import WSGIRequest\nfrom django.db import models\nfrom django.db.models.base import Model\nfrom django.http.request import HttpRequest\nfrom django.views.debug import SafeExceptionReporterFilter\nfrom geoip2.models import City\nfrom guardian.utils import get_anonymous_user\n\nfrom authentik.blueprints.v1.common import YAMLTag\nfrom authentik.core.models import User\nfrom authentik.events.geo import GEOIP_READER\nfrom authentik.policies.types import PolicyRequest\n\n# Special keys which are *not* cleaned, even when the default filter\n# is matched\nALLOWED_SPECIAL_KEYS 
= re.compile(\"passing\", flags=re.I)\n\n\ndef cleanse_item(key: str, value: Any) -> Any:\n \"\"\"Cleanse a single item\"\"\"\n if isinstance(value, dict):\n return cleanse_dict(value)\n if isinstance(value, (list, tuple, set)):\n for idx, item in enumerate(value):\n value[idx] = cleanse_item(key, item)\n return value\n try:\n if SafeExceptionReporterFilter.hidden_settings.search(\n key\n ) and not ALLOWED_SPECIAL_KEYS.search(key):\n return SafeExceptionReporterFilter.cleansed_substitute\n except TypeError: # pragma: no cover\n return value\n return value\n\n\ndef cleanse_dict(source: dict[Any, Any]) -> dict[Any, Any]:\n \"\"\"Cleanse a dictionary, recursively\"\"\"\n final_dict = {}\n for key, value in source.items():\n new_value = cleanse_item(key, value)\n if new_value is not ...:\n final_dict[key] = new_value\n return final_dict\n\n\ndef model_to_dict(model: Model) -> dict[str, Any]:\n \"\"\"Convert model to dict\"\"\"\n name = str(model)\n if hasattr(model, \"name\"):\n name = model.name\n return {\n \"app\": model._meta.app_label,\n \"model_name\": model._meta.model_name,\n \"pk\": model.pk,\n \"name\": name,\n }\n\n\ndef get_user(user: User, original_user: Optional[User] = None) -> dict[str, Any]:\n \"\"\"Convert user object to dictionary, optionally including the original user\"\"\"\n if isinstance(user, AnonymousUser):\n user = get_anonymous_user()\n user_data = {\n \"username\": user.username,\n \"pk\": user.pk,\n \"email\": user.email,\n }\n if original_user:\n original_data = get_user(original_user)\n original_data[\"on_behalf_of\"] = user_data\n return original_data\n return user_data\n\n\n# pylint: disable=too-many-return-statements\ndef sanitize_item(value: Any) -> Any:\n \"\"\"Sanitize a single item, ensure it is JSON parsable\"\"\"\n if is_dataclass(value):\n # Because asdict calls `copy.deepcopy(obj)` on everything that's not tuple/dict,\n # and deepcopy doesn't work with HttpRequest (neither django nor rest_framework).\n # (more specifically doesn't work with ResolverMatch)\n # rest_framework's custom Request class makes this more complicated as it also holds a\n # thread lock.\n # Since this class is mainly used for Events which already hold the http request context\n # we just remove the http_request from the shallow policy request\n # Currently, the only dataclass that actually holds an http request is a PolicyRequest\n if isinstance(value, PolicyRequest) and value.http_request is not None:\n value: PolicyRequest = copy(value)\n value.http_request = None\n value = asdict(value)\n if isinstance(value, dict):\n return sanitize_dict(value)\n if isinstance(value, GeneratorType):\n return sanitize_item(list(value))\n if isinstance(value, (list, tuple, set)):\n new_values = []\n for item in value:\n new_value = sanitize_item(item)\n if new_value:\n new_values.append(new_value)\n return new_values\n if isinstance(value, (User, AnonymousUser)):\n return sanitize_dict(get_user(value))\n if isinstance(value, models.Model):\n return sanitize_dict(model_to_dict(value))\n if isinstance(value, UUID):\n return value.hex\n if isinstance(value, (HttpRequest, WSGIRequest)):\n return ...\n if isinstance(value, City):\n return GEOIP_READER.city_to_dict(value)\n if isinstance(value, Path):\n return str(value)\n if isinstance(value, Exception):\n return str(value)\n if isinstance(value, YAMLTag):\n return str(value)\n if isinstance(value, Enum):\n return value.value\n if isinstance(value, type):\n return {\n \"type\": value.__name__,\n \"module\": value.__module__,\n }\n return 
value\n\n\ndef sanitize_dict(source: dict[Any, Any]) -> dict[Any, Any]:\n \"\"\"clean source of all Models that would interfere with the JSONField.\n Models are replaced with a dictionary of {\n app: str,\n name: str,\n pk: Any\n }\"\"\"\n final_dict = {}\n for key, value in source.items():\n new_value = sanitize_item(value)\n if new_value is not ...:\n final_dict[key] = new_value\n return final_dict\n", "path": "authentik/events/utils.py"}]} | 2,378 | 468 |
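The fix recorded above boils down to serializing Python date/time objects the way Django REST Framework's JSON encoder does: ISO 8601 strings, with `timedelta` rendered as total seconds. Below is a minimal standalone sketch of that idea, not the verbatim patch; the function name `sanitize_temporal` is made up for illustration.

```python
from datetime import date, datetime, time, timedelta


def sanitize_temporal(value):
    """Render date/time values as JSON-safe strings, ISO 8601 style."""
    if isinstance(value, datetime):
        text = value.isoformat()
        # Normalise a "+00:00" UTC offset to the "Z" suffix used by ECMA-262.
        return text[:-6] + "Z" if text.endswith("+00:00") else text
    if isinstance(value, (date, time)):
        return value.isoformat()
    if isinstance(value, timedelta):
        return str(value.total_seconds())
    return value


print(sanitize_temporal(datetime(2023, 7, 15, 12, 30)))  # 2023-07-15T12:30:00
print(sanitize_temporal(timedelta(minutes=5)))           # 300.0
```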
gh_patches_debug_1619 | rasdani/github-patches | git_diff | getredash__redash-3008 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GA Data Source throws an error when no rows returned
### Issue Summary
Google Analytics Data Source throws `Error running query: 'rows'` when the query result is empty.
I have a pretty simple query with dimensions and filters, like:
```json
{
"ids": "ga:177xxxxxx",
"start_date": "2018-10-08",
"end_date": "2018-10-12",
"metrics": "ga:uniqueEvents",
"dimensions": "ga:dimension1,ga:dimension3",
"filters": "ga:dimension2==userrole;ga:eventCategory==eventcategory;ga:eventAction==enentaction;ga:dimension1!=demo"
}
```
Sometimes it returns an empty result because there is no data. This results in an error in Redash.
### Steps to Reproduce
1. Create the Google Analytics Data Source
2. Make a query that returns zero rows
3. Execute it in the query editor
`Error running query: 'rows'` will be thrown. While this might not be considered a bug, I'd expect just an empty result with no errors.
### Technical details:
* Redash Version: 5.0.1
* Browser/OS: Chrome/macOS
* How did you install Redash: docker-compose
</issue>
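The failure described in this issue comes from the parser indexing the `rows` key unconditionally, while a Core Reporting v3 response can omit `rows` entirely when nothing matches the query (which is consistent with the error above). Here is a small sketch of the failure mode and the defensive access pattern; the dictionary is a hand-made stand-in, not real Analytics output. The one-line change recorded further down in this entry takes exactly this route.

```python
# A response for a query with no matching data can omit the "rows" key
# instead of returning an empty list (hand-made stand-in).
empty_response = {
    "columnHeaders": [
        {"name": "ga:uniqueEvents", "columnType": "METRIC", "dataType": "INTEGER"},
    ],
    "totalResults": 0,
}

try:
    rows = empty_response["rows"]         # what the parser currently does
except KeyError as exc:
    print(f"Error running query: {exc}")  # -> Error running query: 'rows'

rows = empty_response.get("rows", [])     # defensive access: empty result, no error
print(rows)                               # -> []
```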
<code>
[start of redash/query_runner/google_analytics.py]
1 # -*- coding: utf-8 -*-
2
3 import logging
4 from base64 import b64decode
5 from datetime import datetime
6 from urlparse import parse_qs, urlparse
7
8 from redash.query_runner import *
9 from redash.utils import json_dumps, json_loads
10
11 logger = logging.getLogger(__name__)
12
13 try:
14 from oauth2client.service_account import ServiceAccountCredentials
15 from apiclient.discovery import build
16 from apiclient.errors import HttpError
17 import httplib2
18 enabled = True
19 except ImportError as e:
20 enabled = False
21
22
23 types_conv = dict(
24 STRING=TYPE_STRING,
25 INTEGER=TYPE_INTEGER,
26 FLOAT=TYPE_FLOAT,
27 DATE=TYPE_DATE,
28 DATETIME=TYPE_DATETIME
29 )
30
31
32 def parse_ga_response(response):
33 columns = []
34 for h in response['columnHeaders']:
35 if h['name'] in ('ga:date', 'mcf:conversionDate'):
36 h['dataType'] = 'DATE'
37 elif h['name'] == 'ga:dateHour':
38 h['dataType'] = 'DATETIME'
39 columns.append({
40 'name': h['name'],
41 'friendly_name': h['name'].split(':', 1)[1],
42 'type': types_conv.get(h['dataType'], 'string')
43 })
44
45 rows = []
46 for r in response['rows']:
47 d = {}
48 for c, value in enumerate(r):
49 column_name = response['columnHeaders'][c]['name']
50 column_type = filter(lambda col: col['name'] == column_name, columns)[0]['type']
51
52 # mcf results come a bit different than ga results:
53 if isinstance(value, dict):
54 if 'primitiveValue' in value:
55 value = value['primitiveValue']
56 elif 'conversionPathValue' in value:
57 steps = []
58 for step in value['conversionPathValue']:
59 steps.append('{}:{}'.format(step['interactionType'], step['nodeValue']))
60 value = ', '.join(steps)
61 else:
62 raise Exception("Results format not supported")
63
64 if column_type == TYPE_DATE:
65 value = datetime.strptime(value, '%Y%m%d')
66 elif column_type == TYPE_DATETIME:
67 if len(value) == 10:
68 value = datetime.strptime(value, '%Y%m%d%H')
69 elif len(value) == 12:
70 value = datetime.strptime(value, '%Y%m%d%H%M')
71 else:
72 raise Exception("Unknown date/time format in results: '{}'".format(value))
73
74 d[column_name] = value
75 rows.append(d)
76
77 return {'columns': columns, 'rows': rows}
78
79
80 class GoogleAnalytics(BaseSQLQueryRunner):
81 @classmethod
82 def annotate_query(cls):
83 return False
84
85 @classmethod
86 def type(cls):
87 return "google_analytics"
88
89 @classmethod
90 def name(cls):
91 return "Google Analytics"
92
93 @classmethod
94 def enabled(cls):
95 return enabled
96
97 @classmethod
98 def configuration_schema(cls):
99 return {
100 'type': 'object',
101 'properties': {
102 'jsonKeyFile': {
103 "type": "string",
104 'title': 'JSON Key File'
105 }
106 },
107 'required': ['jsonKeyFile'],
108 'secret': ['jsonKeyFile']
109 }
110
111 def __init__(self, configuration):
112 super(GoogleAnalytics, self).__init__(configuration)
113 self.syntax = 'json'
114
115 def _get_analytics_service(self):
116 scope = ['https://www.googleapis.com/auth/analytics.readonly']
117 key = json_loads(b64decode(self.configuration['jsonKeyFile']))
118 creds = ServiceAccountCredentials.from_json_keyfile_dict(key, scope)
119 return build('analytics', 'v3', http=creds.authorize(httplib2.Http()))
120
121 def _get_tables(self, schema):
122 accounts = self._get_analytics_service().management().accounts().list().execute().get('items')
123 if accounts is None:
124 raise Exception("Failed getting accounts.")
125 else:
126 for account in accounts:
127 schema[account['name']] = {'name': account['name'], 'columns': []}
128 properties = self._get_analytics_service().management().webproperties().list(
129 accountId=account['id']).execute().get('items', [])
130 for property_ in properties:
131 if 'defaultProfileId' in property_ and 'name' in property_:
132 schema[account['name']]['columns'].append(
133 u'{0} (ga:{1})'.format(property_['name'], property_['defaultProfileId'])
134 )
135
136 return schema.values()
137
138 def test_connection(self):
139 try:
140 service = self._get_analytics_service()
141 service.management().accounts().list().execute()
142 except HttpError as e:
143 # Make sure we return a more readable error to the end user
144 raise Exception(e._get_reason())
145
146 def run_query(self, query, user):
147 logger.debug("Analytics is about to execute query: %s", query)
148 try:
149 params = json_loads(query)
150 except:
151 params = parse_qs(urlparse(query).query, keep_blank_values=True)
152 for key in params.keys():
153 params[key] = ','.join(params[key])
154 if '-' in key:
155 params[key.replace('-', '_')] = params.pop(key)
156
157 if 'mcf:' in params['metrics'] and 'ga:' in params['metrics']:
158 raise Exception("Can't mix mcf: and ga: metrics.")
159
160 if 'mcf:' in params.get('dimensions', '') and 'ga:' in params.get('dimensions', ''):
161 raise Exception("Can't mix mcf: and ga: dimensions.")
162
163 if 'mcf:' in params['metrics']:
164 api = self._get_analytics_service().data().mcf()
165 else:
166 api = self._get_analytics_service().data().ga()
167
168 if len(params) > 0:
169 try:
170 response = api.get(**params).execute()
171 data = parse_ga_response(response)
172 error = None
173 json_data = json_dumps(data)
174 except HttpError as e:
175 # Make sure we return a more readable error to the end user
176 error = e._get_reason()
177 json_data = None
178 else:
179 error = 'Wrong query format.'
180 json_data = None
181 return json_data, error
182
183
184 register(GoogleAnalytics)
185
[end of redash/query_runner/google_analytics.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/redash/query_runner/google_analytics.py b/redash/query_runner/google_analytics.py
--- a/redash/query_runner/google_analytics.py
+++ b/redash/query_runner/google_analytics.py
@@ -43,7 +43,7 @@
})
rows = []
- for r in response['rows']:
+ for r in response.get('rows', []):
d = {}
for c, value in enumerate(r):
column_name = response['columnHeaders'][c]['name']
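A quick way to sanity-check the patched parser against the empty-result case, assuming the module is importable in your Redash environment (this is a hypothetical snippet, not a test that ships with the repository):

```python
from redash.query_runner.google_analytics import parse_ga_response

# Minimal fabricated response with headers but no "rows" key.
response = {
    "columnHeaders": [
        {"name": "ga:uniqueEvents", "columnType": "METRIC", "dataType": "INTEGER"},
    ],
}

result = parse_ga_response(response)
assert result["rows"] == []
assert result["columns"][0]["name"] == "ga:uniqueEvents"
```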
| {"golden_diff": "diff --git a/redash/query_runner/google_analytics.py b/redash/query_runner/google_analytics.py\n--- a/redash/query_runner/google_analytics.py\n+++ b/redash/query_runner/google_analytics.py\n@@ -43,7 +43,7 @@\n })\n \n rows = []\n- for r in response['rows']:\n+ for r in response.get('rows', []):\n d = {}\n for c, value in enumerate(r):\n column_name = response['columnHeaders'][c]['name']\n", "issue": "GA Data Source throws an error when no rows returned\n### Issue Summary\r\n\r\nGoogle Analytics Data Source throws `Error running query: 'rows'` when the query result is empty.\r\nI have a pretty simple query with dimensions and filters, like:\r\n\r\n```json\r\n{\r\n \"ids\": \"ga:177xxxxxx\",\r\n \"start_date\": \"2018-10-08\",\r\n \"end_date\": \"2018-10-12\",\r\n \"metrics\": \"ga:uniqueEvents\",\r\n \"dimensions\": \"ga:dimension1,ga:dimension3\",\r\n \"filters\": \"ga:dimension2==userrole;ga:eventCategory==eventcategory;ga:eventAction==enentaction;ga:dimension1!=demo\"\r\n}\r\n```\r\n\r\nSometimes it returns empty result as there is no data. This results in error in redash.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Create the Google Analytics Data Source\r\n2. Make some query returning zero rows\r\n3. Execute it in query editor\r\n\r\n`Error running query: 'rows'` will be thrown. While this might be considered not a bug, I'd expect just an empty result with no errors.\r\n\r\n### Technical details:\r\n\r\n* Redash Version: 5.0.1\r\n* Browser/OS: Chrome/macOS\r\n* How did you install Redash: docker-compose\r\n\nGA Data Source throws an error when no rows returned\n### Issue Summary\r\n\r\nGoogle Analytics Data Source throws `Error running query: 'rows'` when the query result is empty.\r\nI have a pretty simple query with dimensions and filters, like:\r\n\r\n```json\r\n{\r\n \"ids\": \"ga:177xxxxxx\",\r\n \"start_date\": \"2018-10-08\",\r\n \"end_date\": \"2018-10-12\",\r\n \"metrics\": \"ga:uniqueEvents\",\r\n \"dimensions\": \"ga:dimension1,ga:dimension3\",\r\n \"filters\": \"ga:dimension2==userrole;ga:eventCategory==eventcategory;ga:eventAction==enentaction;ga:dimension1!=demo\"\r\n}\r\n```\r\n\r\nSometimes it returns empty result as there is no data. This results in error in redash.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Create the Google Analytics Data Source\r\n2. Make some query returning zero rows\r\n3. Execute it in query editor\r\n\r\n`Error running query: 'rows'` will be thrown. 
While this might be considered not a bug, I'd expect just an empty result with no errors.\r\n\r\n### Technical details:\r\n\r\n* Redash Version: 5.0.1\r\n* Browser/OS: Chrome/macOS\r\n* How did you install Redash: docker-compose\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport logging\nfrom base64 import b64decode\nfrom datetime import datetime\nfrom urlparse import parse_qs, urlparse\n\nfrom redash.query_runner import *\nfrom redash.utils import json_dumps, json_loads\n\nlogger = logging.getLogger(__name__)\n\ntry:\n from oauth2client.service_account import ServiceAccountCredentials\n from apiclient.discovery import build\n from apiclient.errors import HttpError\n import httplib2\n enabled = True\nexcept ImportError as e:\n enabled = False\n\n\ntypes_conv = dict(\n STRING=TYPE_STRING,\n INTEGER=TYPE_INTEGER,\n FLOAT=TYPE_FLOAT,\n DATE=TYPE_DATE,\n DATETIME=TYPE_DATETIME\n)\n\n\ndef parse_ga_response(response):\n columns = []\n for h in response['columnHeaders']:\n if h['name'] in ('ga:date', 'mcf:conversionDate'):\n h['dataType'] = 'DATE'\n elif h['name'] == 'ga:dateHour':\n h['dataType'] = 'DATETIME'\n columns.append({\n 'name': h['name'],\n 'friendly_name': h['name'].split(':', 1)[1],\n 'type': types_conv.get(h['dataType'], 'string')\n })\n\n rows = []\n for r in response['rows']:\n d = {}\n for c, value in enumerate(r):\n column_name = response['columnHeaders'][c]['name']\n column_type = filter(lambda col: col['name'] == column_name, columns)[0]['type']\n\n # mcf results come a bit different than ga results:\n if isinstance(value, dict):\n if 'primitiveValue' in value:\n value = value['primitiveValue']\n elif 'conversionPathValue' in value:\n steps = []\n for step in value['conversionPathValue']:\n steps.append('{}:{}'.format(step['interactionType'], step['nodeValue']))\n value = ', '.join(steps)\n else:\n raise Exception(\"Results format not supported\")\n\n if column_type == TYPE_DATE:\n value = datetime.strptime(value, '%Y%m%d')\n elif column_type == TYPE_DATETIME:\n if len(value) == 10:\n value = datetime.strptime(value, '%Y%m%d%H')\n elif len(value) == 12:\n value = datetime.strptime(value, '%Y%m%d%H%M')\n else:\n raise Exception(\"Unknown date/time format in results: '{}'\".format(value))\n\n d[column_name] = value\n rows.append(d)\n\n return {'columns': columns, 'rows': rows}\n\n\nclass GoogleAnalytics(BaseSQLQueryRunner):\n @classmethod\n def annotate_query(cls):\n return False\n\n @classmethod\n def type(cls):\n return \"google_analytics\"\n\n @classmethod\n def name(cls):\n return \"Google Analytics\"\n\n @classmethod\n def enabled(cls):\n return enabled\n\n @classmethod\n def configuration_schema(cls):\n return {\n 'type': 'object',\n 'properties': {\n 'jsonKeyFile': {\n \"type\": \"string\",\n 'title': 'JSON Key File'\n }\n },\n 'required': ['jsonKeyFile'],\n 'secret': ['jsonKeyFile']\n }\n\n def __init__(self, configuration):\n super(GoogleAnalytics, self).__init__(configuration)\n self.syntax = 'json'\n\n def _get_analytics_service(self):\n scope = ['https://www.googleapis.com/auth/analytics.readonly']\n key = json_loads(b64decode(self.configuration['jsonKeyFile']))\n creds = ServiceAccountCredentials.from_json_keyfile_dict(key, scope)\n return build('analytics', 'v3', http=creds.authorize(httplib2.Http()))\n\n def _get_tables(self, schema):\n accounts = self._get_analytics_service().management().accounts().list().execute().get('items')\n if accounts is None:\n raise Exception(\"Failed getting accounts.\")\n else:\n for account in accounts:\n 
schema[account['name']] = {'name': account['name'], 'columns': []}\n properties = self._get_analytics_service().management().webproperties().list(\n accountId=account['id']).execute().get('items', [])\n for property_ in properties:\n if 'defaultProfileId' in property_ and 'name' in property_:\n schema[account['name']]['columns'].append(\n u'{0} (ga:{1})'.format(property_['name'], property_['defaultProfileId'])\n )\n\n return schema.values()\n\n def test_connection(self):\n try:\n service = self._get_analytics_service()\n service.management().accounts().list().execute()\n except HttpError as e:\n # Make sure we return a more readable error to the end user\n raise Exception(e._get_reason())\n\n def run_query(self, query, user):\n logger.debug(\"Analytics is about to execute query: %s\", query)\n try:\n params = json_loads(query)\n except:\n params = parse_qs(urlparse(query).query, keep_blank_values=True)\n for key in params.keys():\n params[key] = ','.join(params[key])\n if '-' in key:\n params[key.replace('-', '_')] = params.pop(key)\n\n if 'mcf:' in params['metrics'] and 'ga:' in params['metrics']:\n raise Exception(\"Can't mix mcf: and ga: metrics.\")\n\n if 'mcf:' in params.get('dimensions', '') and 'ga:' in params.get('dimensions', ''):\n raise Exception(\"Can't mix mcf: and ga: dimensions.\")\n\n if 'mcf:' in params['metrics']:\n api = self._get_analytics_service().data().mcf()\n else:\n api = self._get_analytics_service().data().ga()\n\n if len(params) > 0:\n try:\n response = api.get(**params).execute()\n data = parse_ga_response(response)\n error = None\n json_data = json_dumps(data)\n except HttpError as e:\n # Make sure we return a more readable error to the end user\n error = e._get_reason()\n json_data = None\n else:\n error = 'Wrong query format.'\n json_data = None\n return json_data, error\n\n\nregister(GoogleAnalytics)\n", "path": "redash/query_runner/google_analytics.py"}]} | 2,922 | 110 |
gh_patches_debug_1849 | rasdani/github-patches | git_diff | WordPress__openverse-api-556 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sound category mismatch
## Description
<!-- Concisely describe the bug. -->
The `sound` category for audio doesn't work on the front-end.
There seems to be a mismatch between the `sound_effect` value accepted by the audio `categories` query parameter and the `sound` value stored on individual results:
If you go to `https://api.openverse.engineering/v1/audio/?q=cat&categories=sound`, you will get a 400 response:
```
HTTP 400 Bad Request
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept
{
"detail": {
"categories": [
"Invalid category: sound. Available options: {'music', 'audiobook', 'podcast', 'news', 'sound_effect'}"
]
}
}
```
However, if you access a single audio result, you will see that it returns `sound` for the category:
https://api.openverse.engineering/v1/audio/1bb94f50-009c-4371-a605-dd289562a9f5/
## Expectation
<!-- Concisely describe what you expected to happen. -->
Both the query category parameter and the result category property for sound effect should have the same name.
## Additional context
The catalog sets the category as `sound`, so that is the value we get from the database:
https://github.com/WordPress/openverse-catalog/blob/cb19f839e96de7ae1a55e8b7dc82a7d2bf5588e8/openverse_catalog/dags/providers/provider_api_scripts/freesound.py#L33-L34
## Resolution
<!-- Replace the [ ] with [x] to check the box. -->
- [ ] 🙋 I would be interested in resolving this bug.
</issue>
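Until the two names are reconciled, one low-risk option is to normalise aliases before validation on the API side. The snippet below is only a sketch of that idea: `CATEGORY_ALIASES` and `normalize_categories` are hypothetical names, and the project may well prefer a different fix (for example, renaming the stored value in the catalog instead).

```python
# Hypothetical alias table: value reported on results -> value the search
# endpoint validates against.
CATEGORY_ALIASES = {"sound": "sound_effect"}

VALID_CATEGORIES = {"music", "sound_effect", "podcast", "news", "audiobook"}


def normalize_categories(raw: str) -> str:
    """Map aliased category names onto the canonical set, preserving order."""
    parts = [CATEGORY_ALIASES.get(p.strip().lower(), p.strip().lower())
             for p in raw.split(",") if p.strip()]
    unknown = [p for p in parts if p not in VALID_CATEGORIES]
    if unknown:
        raise ValueError(f"Invalid categories: {', '.join(unknown)}")
    return ",".join(parts)


print(normalize_categories("sound"))         # -> sound_effect
print(normalize_categories("music,sound"))   # -> music,sound_effect
```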
<code>
[start of api/catalog/api/serializers/audio_serializers.py]
1 from catalog.api.controllers.search_controller import get_sources
2 from catalog.api.docs.media_docs import fields_to_md
3 from catalog.api.models import AudioReport
4 from catalog.api.models.audio import Audio
5 from catalog.api.serializers.media_serializers import (
6 MediaSearchRequestSerializer,
7 MediaSearchSerializer,
8 MediaSerializer,
9 _validate_enum,
10 )
11 from elasticsearch_dsl.response import Hit
12 from rest_framework import serializers
13
14
15 class AudioSetSerializer(serializers.Serializer):
16 """An audio set, rendered as a part of the ``AudioSerializer`` output."""
17
18 title = serializers.CharField(help_text="The name of the media.", required=False)
19 foreign_landing_url = serializers.URLField(
20 required=False, help_text="A foreign landing link for the image."
21 )
22
23 creator = serializers.CharField(
24 help_text="The name of the media creator.", required=False, allow_blank=True
25 )
26 creator_url = serializers.URLField(
27 required=False, help_text="A direct link to the media creator."
28 )
29
30 url = serializers.URLField(help_text="The actual URL to the media file.")
31 filesize = serializers.CharField(
32 required=False, help_text="Number in bytes, e.g. 1024."
33 )
34 filetype = serializers.CharField(
35 required=False,
36 help_text="The type of the file, related to the file extension.",
37 )
38
39
40 class AudioSearchRequestSerializer(MediaSearchRequestSerializer):
41 """Parse and validate search query string parameters."""
42
43 fields_names = [
44 *MediaSearchRequestSerializer.fields_names,
45 "source",
46 "categories",
47 "duration",
48 ]
49 """
50 Keep the fields names in sync with the actual fields below as this list is
51 used to generate Swagger documentation.
52 """
53
54 source = serializers.CharField(
55 label="provider",
56 help_text="A comma separated list of data sources to search. Valid "
57 "inputs: "
58 f"`{list(get_sources('audio').keys())}`",
59 required=False,
60 )
61 categories = serializers.CharField(
62 label="categories",
63 help_text="A comma separated list of categories; available categories "
64 "include `music`, `sound_effect`, `podcast`, `audiobook`, "
65 "and `news`.",
66 required=False,
67 )
68 duration = serializers.CharField(
69 label="duration",
70 help_text="A comma separated list of audio lengths; available lengths "
71 "include `short`, and `long`.",
72 required=False,
73 )
74
75 @staticmethod
76 def validate_source(input_sources):
77 allowed_sources = list(get_sources("audio").keys())
78 input_sources = input_sources.split(",")
79 input_sources = [x for x in input_sources if x in allowed_sources]
80 input_sources = ",".join(input_sources)
81 return input_sources.lower()
82
83 @staticmethod
84 def validate_categories(value):
85 valid_categories = {
86 "music",
87 "sound_effect",
88 "podcast",
89 "news",
90 "audiobook",
91 }
92 _validate_enum("category", valid_categories, value)
93 return value.lower()
94
95 @staticmethod
96 def validate_duration(value):
97 valid_durations = {"short", "long"} # TODO: Finalise duration filters
98 _validate_enum("duration", valid_durations, value)
99 return value.lower()
100
101
102 class AudioSerializer(MediaSerializer):
103 """A single audio file. Used in search results."""
104
105 fields_names = [
106 *MediaSerializer.fields_names,
107 "audio_set",
108 "genre",
109 "duration",
110 "bit_rate",
111 "sample_rate",
112 "alt_files",
113 "detail_url",
114 "related_url",
115 "category",
116 ]
117 """
118 Keep the fields names in sync with the actual fields below as this list is
119 used to generate Swagger documentation.
120 """
121
122 audio_set = AudioSetSerializer(
123 required=False,
124 help_text="Reference to set of which this track is a part.",
125 read_only=True,
126 )
127
128 genres = serializers.ListField(
129 child=serializers.CharField(),
130 required=False,
131 help_text="An array of audio genres such as "
132 "`rock`, `electronic` for `music` category, or "
133 "`politics`, `sport`, `education` for `podcast` category",
134 )
135
136 duration = serializers.IntegerField(
137 required=False, help_text="The time length of the audio file in milliseconds."
138 )
139 bit_rate = serializers.IntegerField(
140 required=False, help_text="Number in bits per second, eg. 128000."
141 )
142 sample_rate = serializers.IntegerField(
143 required=False, help_text="Number in hertz, eg. 44100."
144 )
145
146 alt_files = serializers.JSONField(
147 required=False, help_text="JSON describing alternative files for this audio."
148 )
149
150 # Hyperlinks
151 thumbnail = serializers.HyperlinkedIdentityField(
152 read_only=True,
153 view_name="audio-thumb",
154 lookup_field="identifier",
155 help_text="A direct link to the miniature artwork.",
156 )
157 waveform = serializers.HyperlinkedIdentityField(
158 read_only=True,
159 view_name="audio-waveform",
160 lookup_field="identifier",
161 help_text="A direct link to the waveform peaks.",
162 )
163 detail_url = serializers.HyperlinkedIdentityField(
164 read_only=True,
165 view_name="audio-detail",
166 lookup_field="identifier",
167 help_text="A direct link to the detail view of this audio file.",
168 )
169 related_url = serializers.HyperlinkedIdentityField(
170 read_only=True,
171 view_name="audio-related",
172 lookup_field="identifier",
173 help_text="A link to an endpoint that provides similar audio files.",
174 )
175
176 # Add-on data
177 peaks = serializers.SerializerMethodField()
178
179 @staticmethod
180 def get_peaks(obj):
181 if isinstance(obj, Hit):
182 obj = Audio.objects.get(identifier=obj.identifier)
183 return obj.get_waveform()
184
185
186 class AudioSearchSerializer(MediaSearchSerializer):
187 """
188 The full audio search response.
189 This serializer is purely representational and not actually used to
190 serialize the response.
191 """
192
193 results = AudioSerializer(
194 many=True,
195 help_text=(
196 "An array of audios and their details such as "
197 f"{fields_to_md(AudioSerializer.fields_names)}."
198 ),
199 )
200
201
202 class AudioReportSerializer(serializers.ModelSerializer):
203 class Meta:
204 model = AudioReport
205 fields = ("identifier", "reason", "description")
206 read_only_fields = ("identifier",)
207
208 def create(self, validated_data):
209 if (
210 validated_data["reason"] == "other"
211 and (
212 "description" not in validated_data
213 or len(validated_data["description"])
214 )
215 < 20
216 ):
217 raise serializers.ValidationError(
218 "Description must be at least be 20 characters long"
219 )
220 return AudioReport.objects.create(**validated_data)
221
222
223 class AudioWaveformSerializer(serializers.Serializer):
224 len = serializers.SerializerMethodField()
225 points = serializers.ListField(
226 child=serializers.FloatField(min_value=0, max_value=1)
227 )
228
229 @staticmethod
230 def get_len(obj) -> int:
231 return len(obj.get("points", []))
232
[end of api/catalog/api/serializers/audio_serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/api/catalog/api/serializers/audio_serializers.py b/api/catalog/api/serializers/audio_serializers.py
--- a/api/catalog/api/serializers/audio_serializers.py
+++ b/api/catalog/api/serializers/audio_serializers.py
@@ -88,6 +88,7 @@
"podcast",
"news",
"audiobook",
+ "pronunciation",
}
_validate_enum("category", valid_categories, value)
return value.lower()
| {"golden_diff": "diff --git a/api/catalog/api/serializers/audio_serializers.py b/api/catalog/api/serializers/audio_serializers.py\n--- a/api/catalog/api/serializers/audio_serializers.py\n+++ b/api/catalog/api/serializers/audio_serializers.py\n@@ -88,6 +88,7 @@\n \"podcast\",\n \"news\",\n \"audiobook\",\n+ \"pronunciation\",\n }\n _validate_enum(\"category\", valid_categories, value)\n return value.lower()\n", "issue": "Sound category mismatch\n## Description\r\n<!-- Concisely describe the bug. -->\r\nThe `sound` category for audio doesn't work on the front-end.\r\nThere seems to be a mismatch between the `audio` category of `sound_effect`:\r\nIf you go to `https://api.openverse.engineering/v1/audio/?q=cat&categories=sound`, you will get a 400 response:\r\n```\r\nHTTP 400 Bad Request\r\nAllow: GET, HEAD, OPTIONS\r\nContent-Type: application/json\r\nVary: Accept\r\n\r\n{\r\n \"detail\": {\r\n \"categories\": [\r\n \"Invalid category: sound. Available options: {'music', 'audiobook', 'podcast', 'news', 'sound_effect'}\"\r\n ]\r\n }\r\n}\r\n```\r\n\r\nHowever, if you access a single audio result, you will see that it returns `sound` for the category:\r\nhttps://api.openverse.engineering/v1/audio/1bb94f50-009c-4371-a605-dd289562a9f5/\r\n\r\n## Expectation\r\n<!-- Concisely describe what you expected to happen. -->\r\nBoth the query category parameter and the result category property for sound effect should have the same name.\r\n\r\n## Additional context\r\nThe catalog sets the category as `sound`, so that is the value we get from the database:\r\nhttps://github.com/WordPress/openverse-catalog/blob/cb19f839e96de7ae1a55e8b7dc82a7d2bf5588e8/openverse_catalog/dags/providers/provider_api_scripts/freesound.py#L33-L34\r\n\r\n## Resolution\r\n<!-- Replace the [ ] with [x] to check the box. -->\r\n- [ ] \ud83d\ude4b I would be interested in resolving this bug.\r\n\n", "before_files": [{"content": "from catalog.api.controllers.search_controller import get_sources\nfrom catalog.api.docs.media_docs import fields_to_md\nfrom catalog.api.models import AudioReport\nfrom catalog.api.models.audio import Audio\nfrom catalog.api.serializers.media_serializers import (\n MediaSearchRequestSerializer,\n MediaSearchSerializer,\n MediaSerializer,\n _validate_enum,\n)\nfrom elasticsearch_dsl.response import Hit\nfrom rest_framework import serializers\n\n\nclass AudioSetSerializer(serializers.Serializer):\n \"\"\"An audio set, rendered as a part of the ``AudioSerializer`` output.\"\"\"\n\n title = serializers.CharField(help_text=\"The name of the media.\", required=False)\n foreign_landing_url = serializers.URLField(\n required=False, help_text=\"A foreign landing link for the image.\"\n )\n\n creator = serializers.CharField(\n help_text=\"The name of the media creator.\", required=False, allow_blank=True\n )\n creator_url = serializers.URLField(\n required=False, help_text=\"A direct link to the media creator.\"\n )\n\n url = serializers.URLField(help_text=\"The actual URL to the media file.\")\n filesize = serializers.CharField(\n required=False, help_text=\"Number in bytes, e.g. 
1024.\"\n )\n filetype = serializers.CharField(\n required=False,\n help_text=\"The type of the file, related to the file extension.\",\n )\n\n\nclass AudioSearchRequestSerializer(MediaSearchRequestSerializer):\n \"\"\"Parse and validate search query string parameters.\"\"\"\n\n fields_names = [\n *MediaSearchRequestSerializer.fields_names,\n \"source\",\n \"categories\",\n \"duration\",\n ]\n \"\"\"\n Keep the fields names in sync with the actual fields below as this list is\n used to generate Swagger documentation.\n \"\"\"\n\n source = serializers.CharField(\n label=\"provider\",\n help_text=\"A comma separated list of data sources to search. Valid \"\n \"inputs: \"\n f\"`{list(get_sources('audio').keys())}`\",\n required=False,\n )\n categories = serializers.CharField(\n label=\"categories\",\n help_text=\"A comma separated list of categories; available categories \"\n \"include `music`, `sound_effect`, `podcast`, `audiobook`, \"\n \"and `news`.\",\n required=False,\n )\n duration = serializers.CharField(\n label=\"duration\",\n help_text=\"A comma separated list of audio lengths; available lengths \"\n \"include `short`, and `long`.\",\n required=False,\n )\n\n @staticmethod\n def validate_source(input_sources):\n allowed_sources = list(get_sources(\"audio\").keys())\n input_sources = input_sources.split(\",\")\n input_sources = [x for x in input_sources if x in allowed_sources]\n input_sources = \",\".join(input_sources)\n return input_sources.lower()\n\n @staticmethod\n def validate_categories(value):\n valid_categories = {\n \"music\",\n \"sound_effect\",\n \"podcast\",\n \"news\",\n \"audiobook\",\n }\n _validate_enum(\"category\", valid_categories, value)\n return value.lower()\n\n @staticmethod\n def validate_duration(value):\n valid_durations = {\"short\", \"long\"} # TODO: Finalise duration filters\n _validate_enum(\"duration\", valid_durations, value)\n return value.lower()\n\n\nclass AudioSerializer(MediaSerializer):\n \"\"\"A single audio file. Used in search results.\"\"\"\n\n fields_names = [\n *MediaSerializer.fields_names,\n \"audio_set\",\n \"genre\",\n \"duration\",\n \"bit_rate\",\n \"sample_rate\",\n \"alt_files\",\n \"detail_url\",\n \"related_url\",\n \"category\",\n ]\n \"\"\"\n Keep the fields names in sync with the actual fields below as this list is\n used to generate Swagger documentation.\n \"\"\"\n\n audio_set = AudioSetSerializer(\n required=False,\n help_text=\"Reference to set of which this track is a part.\",\n read_only=True,\n )\n\n genres = serializers.ListField(\n child=serializers.CharField(),\n required=False,\n help_text=\"An array of audio genres such as \"\n \"`rock`, `electronic` for `music` category, or \"\n \"`politics`, `sport`, `education` for `podcast` category\",\n )\n\n duration = serializers.IntegerField(\n required=False, help_text=\"The time length of the audio file in milliseconds.\"\n )\n bit_rate = serializers.IntegerField(\n required=False, help_text=\"Number in bits per second, eg. 128000.\"\n )\n sample_rate = serializers.IntegerField(\n required=False, help_text=\"Number in hertz, eg. 
44100.\"\n )\n\n alt_files = serializers.JSONField(\n required=False, help_text=\"JSON describing alternative files for this audio.\"\n )\n\n # Hyperlinks\n thumbnail = serializers.HyperlinkedIdentityField(\n read_only=True,\n view_name=\"audio-thumb\",\n lookup_field=\"identifier\",\n help_text=\"A direct link to the miniature artwork.\",\n )\n waveform = serializers.HyperlinkedIdentityField(\n read_only=True,\n view_name=\"audio-waveform\",\n lookup_field=\"identifier\",\n help_text=\"A direct link to the waveform peaks.\",\n )\n detail_url = serializers.HyperlinkedIdentityField(\n read_only=True,\n view_name=\"audio-detail\",\n lookup_field=\"identifier\",\n help_text=\"A direct link to the detail view of this audio file.\",\n )\n related_url = serializers.HyperlinkedIdentityField(\n read_only=True,\n view_name=\"audio-related\",\n lookup_field=\"identifier\",\n help_text=\"A link to an endpoint that provides similar audio files.\",\n )\n\n # Add-on data\n peaks = serializers.SerializerMethodField()\n\n @staticmethod\n def get_peaks(obj):\n if isinstance(obj, Hit):\n obj = Audio.objects.get(identifier=obj.identifier)\n return obj.get_waveform()\n\n\nclass AudioSearchSerializer(MediaSearchSerializer):\n \"\"\"\n The full audio search response.\n This serializer is purely representational and not actually used to\n serialize the response.\n \"\"\"\n\n results = AudioSerializer(\n many=True,\n help_text=(\n \"An array of audios and their details such as \"\n f\"{fields_to_md(AudioSerializer.fields_names)}.\"\n ),\n )\n\n\nclass AudioReportSerializer(serializers.ModelSerializer):\n class Meta:\n model = AudioReport\n fields = (\"identifier\", \"reason\", \"description\")\n read_only_fields = (\"identifier\",)\n\n def create(self, validated_data):\n if (\n validated_data[\"reason\"] == \"other\"\n and (\n \"description\" not in validated_data\n or len(validated_data[\"description\"])\n )\n < 20\n ):\n raise serializers.ValidationError(\n \"Description must be at least be 20 characters long\"\n )\n return AudioReport.objects.create(**validated_data)\n\n\nclass AudioWaveformSerializer(serializers.Serializer):\n len = serializers.SerializerMethodField()\n points = serializers.ListField(\n child=serializers.FloatField(min_value=0, max_value=1)\n )\n\n @staticmethod\n def get_len(obj) -> int:\n return len(obj.get(\"points\", []))\n", "path": "api/catalog/api/serializers/audio_serializers.py"}]} | 3,036 | 103 |
gh_patches_debug_1052 | rasdani/github-patches | git_diff | mindee__doctr-404 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
WeasyPrint import error Python 3.7
## 🐛 Bug
When importing WeasyPrint with Python 3.7 I get an error: `AttributeError: 'OutStream' object has no attribute 'buffer'`
## To Reproduce
Steps to reproduce the behavior:
`from doctr.models import ocr_predictor`
leads to:
```
AttributeError Traceback (most recent call last)
<ipython-input-4-19f78ebc9b57> in <module>()
----> 1 from doctr.models import ocr_predictor
2
3 # Load predictor
4 model = ocr_predictor(pretrained=True)
7 frames
/usr/local/lib/python3.7/dist-packages/doctr/__init__.py in <module>()
1 from .file_utils import is_tf_available, is_torch_available
2 from .version import __version__ # noqa: F401
----> 3 from . import documents
4 from . import transforms
5 from . import utils
/usr/local/lib/python3.7/dist-packages/doctr/documents/__init__.py in <module>()
1 from .elements import *
----> 2 from .reader import *
/usr/local/lib/python3.7/dist-packages/doctr/documents/reader.py in <module>()
8 from pathlib import Path
9 import fitz
---> 10 from weasyprint import HTML
11 from typing import List, Tuple, Optional, Any, Union, Sequence, Dict
12
/usr/local/lib/python3.7/dist-packages/weasyprint/__init__.py in <module>()
321 # Work around circular imports.
322 from .css import preprocess_stylesheet # noqa isort:skip
--> 323 from .html import ( # noqa isort:skip
324 HTML5_UA_COUNTER_STYLE, HTML5_UA_STYLESHEET, HTML5_PH_STYLESHEET,
325 find_base_url)
/usr/local/lib/python3.7/dist-packages/weasyprint/html.py in <module>()
21 from .css.counters import CounterStyle
22 from .formatting_structure import boxes
---> 23 from .images import SVGImage
24 from .logger import LOGGER
25 from .urls import get_url_attribute
/usr/local/lib/python3.7/dist-packages/weasyprint/images.py in <module>()
11 from itertools import cycle
12
---> 13 import pydyf
14 from PIL import Image
15
/usr/local/lib/python3.7/dist-packages/pydyf/__init__.py in <module>()
402
403
--> 404 class PDF:
405 """PDF document."""
406 def __init__(self):
/usr/local/lib/python3.7/dist-packages/pydyf/__init__.py in PDF()
506 self.write_line(b'%%EOF', output)
507
--> 508 def write(self, output=sys.stdout.buffer):
509 """Write PDF to output.
510
AttributeError: 'OutStream' object has no attribute 'buffer'
```
## Expected behavior
Nothing special.
## Environment
```
DocTR version: 0.3.0
TensorFlow version: 2.5.0
PyTorch version: 1.9.0+cu102 (torchvision 0.10.0+cu102)
OpenCV version: 4.5.3
OS: Ubuntu 18.04.5 LTS
Python version: 3.7
Is CUDA available (TensorFlow): No
Is CUDA available (PyTorch): No
CUDA runtime version: 11.0.221
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
```
</issue>
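The traceback bottoms out in `pydyf`, which evaluates `sys.stdout.buffer` once, at import time, as a default argument; IPython/Jupyter kernels replace `sys.stdout` with an `OutStream` wrapper that (at least in the reported environment) has no `buffer` attribute, so the import itself fails. The snippet below is a small illustration of that pattern and the usual deferred-lookup workaround. It is illustrative code only, not the pydyf or doctr source; the change that actually lands in this repository is a dependency pin, shown in the diff further down.

```python
import sys


class OutStream:
    """Minimal stand-in for IPython's stdout wrapper: it can write, but has no .buffer."""

    def write(self, text):
        return len(text)


fake_stdout = OutStream()

try:
    # pydyf-style definition: the default argument is evaluated right here, once.
    def write(output=fake_stdout.buffer):
        ...
except AttributeError as exc:
    print(exc)  # 'OutStream' object has no attribute 'buffer'


def write_deferred(output=None):
    """Defer the lookup: only touch sys.stdout.buffer when the function runs."""
    if output is None:
        output = sys.stdout.buffer
    return output
```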
<code>
[start of setup.py]
1 # Copyright (C) 2021, Mindee.
2
3 # This program is licensed under the Apache License version 2.
4 # See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.
5
6 """
7 Package installation setup
8 """
9
10 import os
11 import re
12 from pathlib import Path
13 import subprocess
14
15 from setuptools import find_packages, setup
16
17
18 version = "0.3.1a0"
19 sha = 'Unknown'
20 package_name = 'doctr'
21
22 cwd = Path(__file__).parent.absolute()
23
24 if os.getenv('BUILD_VERSION'):
25 version = os.getenv('BUILD_VERSION')
26 elif sha != 'Unknown':
27 try:
28 sha = subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=cwd).decode('ascii').strip()
29 except Exception:
30 pass
31 version += '+' + sha[:7]
32 print(f"Building wheel {package_name}-{version}")
33
34 with open(cwd.joinpath(package_name, 'version.py'), 'w') as f:
35 f.write(f"__version__ = '{version}'\n")
36
37 with open('README.md', 'r') as f:
38 readme = f.read()
39
40 # Borrowed from https://github.com/huggingface/transformers/blob/master/setup.py
41 _deps = [
42 "importlib_metadata",
43 "numpy>=1.16.0",
44 "scipy>=1.4.0",
45 "opencv-python>=4.2",
46 "tensorflow>=2.4.0",
47 "PyMuPDF>=1.16.0,<1.18.11",
48 "pyclipper>=1.2.0",
49 "shapely>=1.6.0",
50 "matplotlib>=3.1.0",
51 "mplcursors>=0.3",
52 "weasyprint>=52.2",
53 "unidecode>=1.0.0",
54 "tensorflow-cpu>=2.4.0",
55 "torch>=1.8.0",
56 "torchvision>=0.9.0",
57 "Pillow>=8.0.0,<8.3.0", # cf. https://github.com/python-pillow/Pillow/issues/5571
58 "tqdm>=4.30.0",
59 "tensorflow-addons>=0.13.0"
60 ]
61
62 deps = {b: a for a, b in (re.findall(r"^(([^!=<>]+)(?:[!=<>].*)?$)", x)[0] for x in _deps)}
63
64
65 def deps_list(*pkgs):
66 return [deps[pkg] for pkg in pkgs]
67
68
69 install_requires = [
70 deps["importlib_metadata"] + ";python_version<'3.8'", # importlib_metadata for Python versions that don't have it
71 deps["numpy"],
72 deps["scipy"],
73 deps["opencv-python"],
74 deps["PyMuPDF"],
75 deps["pyclipper"],
76 deps["shapely"],
77 deps["matplotlib"],
78 deps["mplcursors"],
79 deps["weasyprint"],
80 deps["unidecode"],
81 deps["Pillow"],
82 deps["tqdm"],
83 ]
84
85 extras = {}
86 extras["tf"] = deps_list("tensorflow", "tensorflow-addons")
87 extras["tf-cpu"] = deps_list("tensorflow-cpu", "tensorflow-addons")
88 extras["torch"] = deps_list("torch", "torchvision")
89 extras["all"] = (
90 extras["tf"]
91 + extras["torch"]
92 )
93
94 setup(
95 # Metadata
96 name=os.getenv('PKG_INDEX') if os.getenv('PKG_INDEX') else package_name,
97 version=version,
98 author='François-Guillaume Fernandez, Charles Gaillard',
99 author_email='[email protected]',
100 description='Extract valuable text information from your documents',
101 long_description=readme,
102 long_description_content_type="text/markdown",
103 url='https://github.com/mindee/doctr',
104 download_url='https://github.com/mindee/doctr/tags',
105 license='Apache',
106 classifiers=[
107 'Development Status :: 4 - Beta',
108 'Intended Audience :: Developers',
109 "Intended Audience :: Education",
110 'Intended Audience :: Science/Research',
111 'License :: OSI Approved :: Apache Software License',
112 'Natural Language :: English',
113 'Operating System :: OS Independent',
114 'Programming Language :: Python :: 3',
115 'Programming Language :: Python :: 3.6',
116 'Programming Language :: Python :: 3.7',
117 'Topic :: Scientific/Engineering :: Artificial Intelligence',
118 ],
119 keywords=['OCR', 'deep learning', 'computer vision', 'tensorflow', 'pytorch', 'text detection', 'text recognition'],
120
121 # Package info
122 packages=find_packages(exclude=('test',)),
123 zip_safe=True,
124 python_requires='>=3.6.0',
125 include_package_data=True,
126 install_requires=install_requires,
127 extras_require=extras,
128 package_data={'': ['LICENSE']}
129 )
130
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -49,7 +49,7 @@
"shapely>=1.6.0",
"matplotlib>=3.1.0",
"mplcursors>=0.3",
- "weasyprint>=52.2",
+ "weasyprint>=52.2,<53.0",
"unidecode>=1.0.0",
"tensorflow-cpu>=2.4.0",
"torch>=1.8.0",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -49,7 +49,7 @@\n \"shapely>=1.6.0\",\n \"matplotlib>=3.1.0\",\n \"mplcursors>=0.3\",\n- \"weasyprint>=52.2\",\n+ \"weasyprint>=52.2,<53.0\",\n \"unidecode>=1.0.0\",\n \"tensorflow-cpu>=2.4.0\",\n \"torch>=1.8.0\",\n", "issue": "WeasyPrint import error Python 3.7\n## \ud83d\udc1b Bug\r\n\r\nWhen importing weasyprint with python 3.7 I have an error: `AttributeError: 'OutStream' object has no attribute 'buffer'`*\r\n\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n`from doctr.models import ocr_predictor`\r\n\r\nleads to:\r\n\r\n```\r\nAttributeError Traceback (most recent call last)\r\n\r\n<ipython-input-4-19f78ebc9b57> in <module>()\r\n----> 1 from doctr.models import ocr_predictor\r\n 2 \r\n 3 # Load predictor\r\n 4 model = ocr_predictor(pretrained=True)\r\n\r\n7 frames\r\n\r\n/usr/local/lib/python3.7/dist-packages/doctr/__init__.py in <module>()\r\n 1 from .file_utils import is_tf_available, is_torch_available\r\n 2 from .version import __version__ # noqa: F401\r\n----> 3 from . import documents\r\n 4 from . import transforms\r\n 5 from . import utils\r\n\r\n/usr/local/lib/python3.7/dist-packages/doctr/documents/__init__.py in <module>()\r\n 1 from .elements import *\r\n----> 2 from .reader import *\r\n\r\n/usr/local/lib/python3.7/dist-packages/doctr/documents/reader.py in <module>()\r\n 8 from pathlib import Path\r\n 9 import fitz\r\n---> 10 from weasyprint import HTML\r\n 11 from typing import List, Tuple, Optional, Any, Union, Sequence, Dict\r\n 12 \r\n\r\n/usr/local/lib/python3.7/dist-packages/weasyprint/__init__.py in <module>()\r\n 321 # Work around circular imports.\r\n 322 from .css import preprocess_stylesheet # noqa isort:skip\r\n--> 323 from .html import ( # noqa isort:skip\r\n 324 HTML5_UA_COUNTER_STYLE, HTML5_UA_STYLESHEET, HTML5_PH_STYLESHEET,\r\n 325 find_base_url)\r\n\r\n/usr/local/lib/python3.7/dist-packages/weasyprint/html.py in <module>()\r\n 21 from .css.counters import CounterStyle\r\n 22 from .formatting_structure import boxes\r\n---> 23 from .images import SVGImage\r\n 24 from .logger import LOGGER\r\n 25 from .urls import get_url_attribute\r\n\r\n/usr/local/lib/python3.7/dist-packages/weasyprint/images.py in <module>()\r\n 11 from itertools import cycle\r\n 12 \r\n---> 13 import pydyf\r\n 14 from PIL import Image\r\n 15 \r\n\r\n/usr/local/lib/python3.7/dist-packages/pydyf/__init__.py in <module>()\r\n 402 \r\n 403 \r\n--> 404 class PDF:\r\n 405 \"\"\"PDF document.\"\"\"\r\n 406 def __init__(self):\r\n\r\n/usr/local/lib/python3.7/dist-packages/pydyf/__init__.py in PDF()\r\n 506 self.write_line(b'%%EOF', output)\r\n 507 \r\n--> 508 def write(self, output=sys.stdout.buffer):\r\n 509 \"\"\"Write PDF to output.\r\n 510 \r\n\r\nAttributeError: 'OutStream' object has no attribute 'buffer'\r\n\r\n```\r\n\r\n\r\n## Expected behavior\r\n\r\nNothing, special\r\n\r\n## Environment\r\n```\r\nDocTR version: 0.3.0\r\nTensorFlow version: 2.5.0\r\nPyTorch version: 1.9.0+cu102 (torchvision 0.10.0+cu102)\r\nOpenCV version: 4.5.3\r\nOS: Ubuntu 18.04.5 LTS\r\nPython version: 3.7\r\nIs CUDA available (TensorFlow): No\r\nIs CUDA available (PyTorch): No\r\nCUDA runtime version: 11.0.221\r\nGPU models and configuration: Could not collect\r\nNvidia driver version: Could not collect\r\n```\r\n\n", "before_files": [{"content": "# Copyright (C) 2021, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to 
<https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\n\"\"\"\nPackage installation setup\n\"\"\"\n\nimport os\nimport re\nfrom pathlib import Path\nimport subprocess\n\nfrom setuptools import find_packages, setup\n\n\nversion = \"0.3.1a0\"\nsha = 'Unknown'\npackage_name = 'doctr'\n\ncwd = Path(__file__).parent.absolute()\n\nif os.getenv('BUILD_VERSION'):\n version = os.getenv('BUILD_VERSION')\nelif sha != 'Unknown':\n try:\n sha = subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=cwd).decode('ascii').strip()\n except Exception:\n pass\n version += '+' + sha[:7]\nprint(f\"Building wheel {package_name}-{version}\")\n\nwith open(cwd.joinpath(package_name, 'version.py'), 'w') as f:\n f.write(f\"__version__ = '{version}'\\n\")\n\nwith open('README.md', 'r') as f:\n readme = f.read()\n\n# Borrowed from https://github.com/huggingface/transformers/blob/master/setup.py\n_deps = [\n \"importlib_metadata\",\n \"numpy>=1.16.0\",\n \"scipy>=1.4.0\",\n \"opencv-python>=4.2\",\n \"tensorflow>=2.4.0\",\n \"PyMuPDF>=1.16.0,<1.18.11\",\n \"pyclipper>=1.2.0\",\n \"shapely>=1.6.0\",\n \"matplotlib>=3.1.0\",\n \"mplcursors>=0.3\",\n \"weasyprint>=52.2\",\n \"unidecode>=1.0.0\",\n \"tensorflow-cpu>=2.4.0\",\n \"torch>=1.8.0\",\n \"torchvision>=0.9.0\",\n \"Pillow>=8.0.0,<8.3.0\", # cf. https://github.com/python-pillow/Pillow/issues/5571\n \"tqdm>=4.30.0\",\n \"tensorflow-addons>=0.13.0\"\n]\n\ndeps = {b: a for a, b in (re.findall(r\"^(([^!=<>]+)(?:[!=<>].*)?$)\", x)[0] for x in _deps)}\n\n\ndef deps_list(*pkgs):\n return [deps[pkg] for pkg in pkgs]\n\n\ninstall_requires = [\n deps[\"importlib_metadata\"] + \";python_version<'3.8'\", # importlib_metadata for Python versions that don't have it\n deps[\"numpy\"],\n deps[\"scipy\"],\n deps[\"opencv-python\"],\n deps[\"PyMuPDF\"],\n deps[\"pyclipper\"],\n deps[\"shapely\"],\n deps[\"matplotlib\"],\n deps[\"mplcursors\"],\n deps[\"weasyprint\"],\n deps[\"unidecode\"],\n deps[\"Pillow\"],\n deps[\"tqdm\"],\n]\n\nextras = {}\nextras[\"tf\"] = deps_list(\"tensorflow\", \"tensorflow-addons\")\nextras[\"tf-cpu\"] = deps_list(\"tensorflow-cpu\", \"tensorflow-addons\")\nextras[\"torch\"] = deps_list(\"torch\", \"torchvision\")\nextras[\"all\"] = (\n extras[\"tf\"]\n + extras[\"torch\"]\n)\n\nsetup(\n # Metadata\n name=os.getenv('PKG_INDEX') if os.getenv('PKG_INDEX') else package_name,\n version=version,\n author='Fran\u00e7ois-Guillaume Fernandez, Charles Gaillard',\n author_email='[email protected]',\n description='Extract valuable text information from your documents',\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n url='https://github.com/mindee/doctr',\n download_url='https://github.com/mindee/doctr/tags',\n license='Apache',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n \"Intended Audience :: Education\",\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Natural Language :: English',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n ],\n keywords=['OCR', 'deep learning', 'computer vision', 'tensorflow', 'pytorch', 'text detection', 'text recognition'],\n\n # Package info\n packages=find_packages(exclude=('test',)),\n zip_safe=True,\n python_requires='>=3.6.0',\n include_package_data=True,\n install_requires=install_requires,\n 
extras_require=extras,\n package_data={'': ['LICENSE']}\n)\n", "path": "setup.py"}]} | 2,770 | 127 |
gh_patches_debug_37126 | rasdani/github-patches | git_diff | open-mmlab__mmdetection3d-69 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
iou3d failed when inference with gpu:1
Thanks for your error report and we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
**Describe the bug**
When training on a single GPU with the default device (gpu:0), everything is OK.
Switching to gpu:1 reports `an illegal memory access was encountered` (mmdet3d/ops/iou3d/src/iou3d.cpp:121) during inference; training itself, however, is OK.
**Reproduction**
1. What command or script did you run?
```
python tools/train.py CONFIG_PATH --gpu-ids 1
```
2. Did you make any modifications to the code or config? Do you understand what you have modified?
3. What dataset did you use?
- kitti
**Environment**
1. Please run `python mmdet3d/utils/collect_env.py` to collect the necessary environment information and paste it here.
2. You may add additional information that may be helpful for locating the problem, such as
- How you installed PyTorch [e.g., pip, conda, source]
- Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)
**Error traceback**
If applicable, paste the error traceback here.
```
A placeholder for the traceback.
```
**Bug fix**
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
</issue>
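Constructors such as `torch.cuda.FloatTensor(...)` and `torch.LongTensor(...)` in the op wrappers below allocate on the current default device rather than on the device the input boxes live on, which is a common way to hit illegal memory accesses once `--gpu-ids 1` is used. Here is a device-agnostic sketch of the allocation pattern (a standalone illustration; the repository's actual fix also passes the device index through to the CUDA extension):

```python
import torch


def boxes_iou_bev_placeholder(boxes_a: torch.Tensor, boxes_b: torch.Tensor) -> torch.Tensor:
    """Allocate the IoU output on the same device and dtype as the inputs.

    Stand-in for the real wrapper: only the allocation pattern is shown, and
    the CUDA kernel launch is left as a comment.
    """
    ans_iou = boxes_a.new_zeros((boxes_a.shape[0], boxes_b.shape[0]))
    # iou3d_cuda.boxes_iou_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(), ans_iou)
    return ans_iou


device = torch.device("cuda:1" if torch.cuda.device_count() > 1 else "cpu")
boxes_a = torch.rand(4, 5, device=device)
boxes_b = torch.rand(6, 5, device=device)
print(boxes_iou_bev_placeholder(boxes_a, boxes_b).device)  # follows the inputs
```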
<code>
[start of mmdet3d/ops/iou3d/iou3d_utils.py]
1 import torch
2
3 from . import iou3d_cuda
4
5
6 def boxes_iou_bev(boxes_a, boxes_b):
7 """
8 :param boxes_a: (M, 5)
9 :param boxes_b: (N, 5)
10 :return:
11 ans_iou: (M, N)
12 """
13
14 ans_iou = torch.cuda.FloatTensor(
15 torch.Size((boxes_a.shape[0], boxes_b.shape[0]))).zero_()
16
17 iou3d_cuda.boxes_iou_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(),
18 ans_iou)
19
20 return ans_iou
21
22
23 def nms_gpu(boxes, scores, thresh):
24 """
25 :param boxes: (N, 5) [x1, y1, x2, y2, ry]
26 :param scores: (N)
27 :param thresh:
28 :return:
29 """
30 # areas = (x2 - x1) * (y2 - y1)
31 order = scores.sort(0, descending=True)[1]
32
33 boxes = boxes[order].contiguous()
34
35 keep = torch.LongTensor(boxes.size(0))
36 num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh)
37 return order[keep[:num_out].cuda()].contiguous()
38
39
40 def nms_normal_gpu(boxes, scores, thresh):
41 """
42 :param boxes: (N, 5) [x1, y1, x2, y2, ry]
43 :param scores: (N)
44 :param thresh:
45 :return:
46 """
47 # areas = (x2 - x1) * (y2 - y1)
48 order = scores.sort(0, descending=True)[1]
49
50 boxes = boxes[order].contiguous()
51
52 keep = torch.LongTensor(boxes.size(0))
53 num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh)
54 return order[keep[:num_out].cuda()].contiguous()
55
[end of mmdet3d/ops/iou3d/iou3d_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mmdet3d/ops/iou3d/iou3d_utils.py b/mmdet3d/ops/iou3d/iou3d_utils.py
--- a/mmdet3d/ops/iou3d/iou3d_utils.py
+++ b/mmdet3d/ops/iou3d/iou3d_utils.py
@@ -4,15 +4,17 @@
def boxes_iou_bev(boxes_a, boxes_b):
- """
- :param boxes_a: (M, 5)
- :param boxes_b: (N, 5)
- :return:
- ans_iou: (M, N)
- """
+ """Calculate boxes IoU in the bird view.
- ans_iou = torch.cuda.FloatTensor(
- torch.Size((boxes_a.shape[0], boxes_b.shape[0]))).zero_()
+ Args:
+ boxes_a (torch.Tensor): Input boxes a with shape (M, 5).
+ boxes_b (torch.Tensor): Input boxes b with shape (N, 5).
+
+ Returns:
+ ans_iou (torch.Tensor): IoU result with shape (M, N).
+ """
+ ans_iou = boxes_a.new_zeros(
+ torch.Size((boxes_a.shape[0], boxes_b.shape[0])))
iou3d_cuda.boxes_iou_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(),
ans_iou)
@@ -21,34 +23,41 @@
def nms_gpu(boxes, scores, thresh):
+ """Non maximum suppression on GPU.
+
+ Args:
+ boxes (torch.Tensor): Input boxes with shape (N, 5).
+ scores (torch.Tensor): Scores of predicted boxes with shape (N).
+ thresh (torch.Tensor): Threshold of non maximum suppression.
+
+ Returns:
+ torch.Tensor: Remaining indices with scores in descending order.
"""
- :param boxes: (N, 5) [x1, y1, x2, y2, ry]
- :param scores: (N)
- :param thresh:
- :return:
- """
- # areas = (x2 - x1) * (y2 - y1)
order = scores.sort(0, descending=True)[1]
boxes = boxes[order].contiguous()
- keep = torch.LongTensor(boxes.size(0))
- num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh)
- return order[keep[:num_out].cuda()].contiguous()
+ keep = boxes.new_zeros(boxes.size(0))
+ num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh, boxes.device.index)
+ return order[keep[:num_out].cuda(boxes.device)].contiguous()
def nms_normal_gpu(boxes, scores, thresh):
+ """Normal non maximum suppression on GPU.
+
+ Args:
+ boxes (torch.Tensor): Input boxes with shape (N, 5).
+ scores (torch.Tensor): Scores of predicted boxes with shape (N).
+ thresh (torch.Tensor): Threshold of non maximum suppression.
+
+ Returns:
+ torch.Tensor: Remaining indices with scores in descending order.
"""
- :param boxes: (N, 5) [x1, y1, x2, y2, ry]
- :param scores: (N)
- :param thresh:
- :return:
- """
- # areas = (x2 - x1) * (y2 - y1)
order = scores.sort(0, descending=True)[1]
boxes = boxes[order].contiguous()
- keep = torch.LongTensor(boxes.size(0))
- num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh)
- return order[keep[:num_out].cuda()].contiguous()
+ keep = boxes.new_zeros(boxes.size(0))
+ num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh,
+ boxes.device.index)
+ return order[keep[:num_out].cuda(boxes.device)].contiguous()
| {"golden_diff": "diff --git a/mmdet3d/ops/iou3d/iou3d_utils.py b/mmdet3d/ops/iou3d/iou3d_utils.py\n--- a/mmdet3d/ops/iou3d/iou3d_utils.py\n+++ b/mmdet3d/ops/iou3d/iou3d_utils.py\n@@ -4,15 +4,17 @@\n \n \n def boxes_iou_bev(boxes_a, boxes_b):\n- \"\"\"\n- :param boxes_a: (M, 5)\n- :param boxes_b: (N, 5)\n- :return:\n- ans_iou: (M, N)\n- \"\"\"\n+ \"\"\"Calculate boxes IoU in the bird view.\n \n- ans_iou = torch.cuda.FloatTensor(\n- torch.Size((boxes_a.shape[0], boxes_b.shape[0]))).zero_()\n+ Args:\n+ boxes_a (torch.Tensor): Input boxes a with shape (M, 5).\n+ boxes_b (torch.Tensor): Input boxes b with shape (N, 5).\n+\n+ Returns:\n+ ans_iou (torch.Tensor): IoU result with shape (M, N).\n+ \"\"\"\n+ ans_iou = boxes_a.new_zeros(\n+ torch.Size((boxes_a.shape[0], boxes_b.shape[0])))\n \n iou3d_cuda.boxes_iou_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(),\n ans_iou)\n@@ -21,34 +23,41 @@\n \n \n def nms_gpu(boxes, scores, thresh):\n+ \"\"\"Non maximum suppression on GPU.\n+\n+ Args:\n+ boxes (torch.Tensor): Input boxes with shape (N, 5).\n+ scores (torch.Tensor): Scores of predicted boxes with shape (N).\n+ thresh (torch.Tensor): Threshold of non maximum suppression.\n+\n+ Returns:\n+ torch.Tensor: Remaining indices with scores in descending order.\n \"\"\"\n- :param boxes: (N, 5) [x1, y1, x2, y2, ry]\n- :param scores: (N)\n- :param thresh:\n- :return:\n- \"\"\"\n- # areas = (x2 - x1) * (y2 - y1)\n order = scores.sort(0, descending=True)[1]\n \n boxes = boxes[order].contiguous()\n \n- keep = torch.LongTensor(boxes.size(0))\n- num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh)\n- return order[keep[:num_out].cuda()].contiguous()\n+ keep = boxes.new_zeros(boxes.size(0))\n+ num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh, boxes.device.index)\n+ return order[keep[:num_out].cuda(boxes.device)].contiguous()\n \n \n def nms_normal_gpu(boxes, scores, thresh):\n+ \"\"\"Normal non maximum suppression on GPU.\n+\n+ Args:\n+ boxes (torch.Tensor): Input boxes with shape (N, 5).\n+ scores (torch.Tensor): Scores of predicted boxes with shape (N).\n+ thresh (torch.Tensor): Threshold of non maximum suppression.\n+\n+ Returns:\n+ torch.Tensor: Remaining indices with scores in descending order.\n \"\"\"\n- :param boxes: (N, 5) [x1, y1, x2, y2, ry]\n- :param scores: (N)\n- :param thresh:\n- :return:\n- \"\"\"\n- # areas = (x2 - x1) * (y2 - y1)\n order = scores.sort(0, descending=True)[1]\n \n boxes = boxes[order].contiguous()\n \n- keep = torch.LongTensor(boxes.size(0))\n- num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh)\n- return order[keep[:num_out].cuda()].contiguous()\n+ keep = boxes.new_zeros(boxes.size(0))\n+ num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh,\n+ boxes.device.index)\n+ return order[keep[:num_out].cuda(boxes.device)].contiguous()\n", "issue": "iou3d failed when inference with gpu:1\nThanks for your error report and we appreciate it a lot.\r\n\r\n**Checklist**\r\n1. I have searched related issues but cannot get the expected help.\r\n2. The bug has not been fixed in the latest version.\r\n\r\n**Describe the bug**\r\nTraining on single GPU, when using default gpu (gpu:0) , everything is ok. \r\nSwitch to gpu:1, report `an illegal memory access was encountered mmdet3d/ops/iou3d/src/iou3d.cpp 121` during inference, however training is ok.\r\n\r\n**Reproduction**\r\n1. What command or script did you run?\r\n```\r\npython tools/train.py CONFIG_PATH --gpu-ids 1\r\n```\r\n2. Did you make any modifications on the code or config? Did you understand what you have modified?\r\n3. 
What dataset did you use?\r\n- kitti\r\n\r\n**Environment**\r\n\r\n1. Please run `python mmdet3d/utils/collect_env.py` to collect necessary environment infomation and paste it here.\r\n2. You may add addition that may be helpful for locating the problem, such as\r\n - How you installed PyTorch [e.g., pip, conda, source]\r\n - Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)\r\n\r\n**Error traceback**\r\nIf applicable, paste the error trackback here.\r\n```\r\nA placeholder for trackback.\r\n```\r\n\r\n**Bug fix**\r\nIf you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!\r\n\n", "before_files": [{"content": "import torch\n\nfrom . import iou3d_cuda\n\n\ndef boxes_iou_bev(boxes_a, boxes_b):\n \"\"\"\n :param boxes_a: (M, 5)\n :param boxes_b: (N, 5)\n :return:\n ans_iou: (M, N)\n \"\"\"\n\n ans_iou = torch.cuda.FloatTensor(\n torch.Size((boxes_a.shape[0], boxes_b.shape[0]))).zero_()\n\n iou3d_cuda.boxes_iou_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(),\n ans_iou)\n\n return ans_iou\n\n\ndef nms_gpu(boxes, scores, thresh):\n \"\"\"\n :param boxes: (N, 5) [x1, y1, x2, y2, ry]\n :param scores: (N)\n :param thresh:\n :return:\n \"\"\"\n # areas = (x2 - x1) * (y2 - y1)\n order = scores.sort(0, descending=True)[1]\n\n boxes = boxes[order].contiguous()\n\n keep = torch.LongTensor(boxes.size(0))\n num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh)\n return order[keep[:num_out].cuda()].contiguous()\n\n\ndef nms_normal_gpu(boxes, scores, thresh):\n \"\"\"\n :param boxes: (N, 5) [x1, y1, x2, y2, ry]\n :param scores: (N)\n :param thresh:\n :return:\n \"\"\"\n # areas = (x2 - x1) * (y2 - y1)\n order = scores.sort(0, descending=True)[1]\n\n boxes = boxes[order].contiguous()\n\n keep = torch.LongTensor(boxes.size(0))\n num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh)\n return order[keep[:num_out].cuda()].contiguous()\n", "path": "mmdet3d/ops/iou3d/iou3d_utils.py"}]} | 1,450 | 924 |
gh_patches_debug_5988 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-4916 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
#6963 Too many codes in 1 package
URL: https://meinberlin-dev.liqd.net/dashboard/modules/burgerinnenhaushalt-3-phasen-21/download-codes/
user: admin, initiator
expected behaviour: Each code-package should contain a max. of 1.000.000 codes. ~~The wording of the helptext should have also the right number of 1.000.000 codes per package as each package should contain a maximum of 1.000.000 codes per excel-file.~~
behaviour: ~~the number in the wording of the helptext is "10.000.000" and~~ the packages can contain more than 1.000.000 codes.
important screensize: -
device & browser: mac ff
Comment/Question: I tried it with generating two mill codes and the codes were put in only one code-package. I also couldn't download the package probably because it was too big.
Linked: https://github.com/liqd/a4-meinberlin/issues/4907
</issue>
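Editor's note: to make the intended behaviour concrete, a hard cap of 1,000,000 codes per package means that generating 2,000,000 codes should yield two packages. Below is a minimal, self-contained sketch of that chunking logic; it is only an illustration of the behaviour the issue asks for (the merged change further down only corrects the two size constants), and the 0-based `package_number` is an assumption — the real module starts from `VotingToken.next_package_number(module)`.

```python
# Illustration of the expected packaging behaviour, independent of Django/background tasks.
PACKAGE_SIZE = 1_000_000  # assumed maximum number of codes per downloadable package


def split_into_packages(number_of_tokens, package_size=PACKAGE_SIZE):
    """Return a list of (package_number, count) pairs, each holding at most package_size codes."""
    packages = []
    package_number = 0  # assumption: real code derives this from the module's existing packages
    remaining = number_of_tokens
    while remaining > 0:
        count = min(remaining, package_size)
        packages.append((package_number, count))
        remaining -= count
        package_number += 1
    return packages


assert split_into_packages(2_000_000) == [(0, 1_000_000), (1, 1_000_000)]
assert split_into_packages(1_500_000) == [(0, 1_000_000), (1, 500_000)]
```

With this behaviour, two million generated codes land in two packages instead of one oversized file.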
<code>
[start of meinberlin/apps/votes/tasks.py]
1 from background_task import background
2
3 from adhocracy4.modules.models import Module
4 from meinberlin.apps.votes.models import VotingToken
5 from meinberlin.apps.votes.models import get_token_12
6
7 # Number of tokens to insert into database per bulk_create
8 BATCH_SIZE = 1000000
9 # Max number of tokens in one download / package
10 PACKAGE_SIZE = 10000000
11
12
13 def generate_voting_tokens(module_id, number_of_tokens, existing_tokens):
14 module = Module.objects.get(pk=module_id)
15 package_number = VotingToken.next_package_number(module)
16 module_name = module.name
17 project_id = module.project.id
18 project_name = module.project.name
19
20 number_to_generate = number_of_tokens
21 package_number_limit = 0
22 if number_of_tokens > PACKAGE_SIZE:
23 package_number_limit = number_of_tokens - PACKAGE_SIZE
24 while number_to_generate > 0:
25 if number_to_generate >= BATCH_SIZE:
26 generate_voting_tokens_batch(
27 module_id,
28 BATCH_SIZE,
29 package_number,
30 number_of_tokens,
31 module_name,
32 project_id,
33 project_name,
34 existing_tokens,
35 )
36 number_to_generate = number_to_generate - BATCH_SIZE
37 else:
38 generate_voting_tokens_batch(
39 module_id,
40 number_to_generate,
41 package_number,
42 number_of_tokens,
43 module_name,
44 project_id,
45 project_name,
46 existing_tokens,
47 )
48 number_to_generate = 0
49 if package_number_limit >= number_to_generate:
50 package_number += 1
51 package_number_limit - PACKAGE_SIZE
52
53
54 @background(schedule=1)
55 def generate_voting_tokens_batch(
56 module_id,
57 batch_size,
58 package_number,
59 number_of_tokens,
60 module_name,
61 project_id,
62 project_name,
63 existing_tokens,
64 ):
65 module = Module.objects.get(pk=module_id)
66 VotingToken.objects.bulk_create(
67 [get_token_and_hash(module, package_number) for i in range(batch_size)]
68 )
69
70
71 def get_token_and_hash(module, package_number):
72 token = get_token_12()
73 token_hash = VotingToken.hash_token(token, module)
74 return VotingToken(
75 token=token, token_hash=token_hash, module=module, package_number=package_number
76 )
77
[end of meinberlin/apps/votes/tasks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/meinberlin/apps/votes/tasks.py b/meinberlin/apps/votes/tasks.py
--- a/meinberlin/apps/votes/tasks.py
+++ b/meinberlin/apps/votes/tasks.py
@@ -5,9 +5,9 @@
from meinberlin.apps.votes.models import get_token_12
# Number of tokens to insert into database per bulk_create
-BATCH_SIZE = 1000000
+BATCH_SIZE = 100000
# Max number of tokens in one download / package
-PACKAGE_SIZE = 10000000
+PACKAGE_SIZE = 1000000
def generate_voting_tokens(module_id, number_of_tokens, existing_tokens):
| {"golden_diff": "diff --git a/meinberlin/apps/votes/tasks.py b/meinberlin/apps/votes/tasks.py\n--- a/meinberlin/apps/votes/tasks.py\n+++ b/meinberlin/apps/votes/tasks.py\n@@ -5,9 +5,9 @@\n from meinberlin.apps.votes.models import get_token_12\n \n # Number of tokens to insert into database per bulk_create\n-BATCH_SIZE = 1000000\n+BATCH_SIZE = 100000\n # Max number of tokens in one download / package\n-PACKAGE_SIZE = 10000000\n+PACKAGE_SIZE = 1000000\n \n \n def generate_voting_tokens(module_id, number_of_tokens, existing_tokens):\n", "issue": "#6963 Too many codes in 1 package\nURL: https://meinberlin-dev.liqd.net/dashboard/modules/burgerinnenhaushalt-3-phasen-21/download-codes/\r\nuser: admin, initiator\r\nexpected behaviour: Each code-package should contain a max. of 1.000.000 codes. ~~The wording of the helptext should have also the right number of 1.000.000 codes per package as each package should contain a maximum of 1.000.000 codes per excel-file.~~\r\nbehaviour: ~~the number in the wording of the helptext is \"10.000.000\" and~~ the packages can contain more than 1.000.000 codes.\r\nimportant screensize: -\r\ndevice & browser: mac ff\r\nComment/Question: I tried it with generating two mill codes and the codes were put in only one code-package. I also couldn't download the package probably because it was too big.\r\n\r\nLinked: https://github.com/liqd/a4-meinberlin/issues/4907\r\n\n", "before_files": [{"content": "from background_task import background\n\nfrom adhocracy4.modules.models import Module\nfrom meinberlin.apps.votes.models import VotingToken\nfrom meinberlin.apps.votes.models import get_token_12\n\n# Number of tokens to insert into database per bulk_create\nBATCH_SIZE = 1000000\n# Max number of tokens in one download / package\nPACKAGE_SIZE = 10000000\n\n\ndef generate_voting_tokens(module_id, number_of_tokens, existing_tokens):\n module = Module.objects.get(pk=module_id)\n package_number = VotingToken.next_package_number(module)\n module_name = module.name\n project_id = module.project.id\n project_name = module.project.name\n\n number_to_generate = number_of_tokens\n package_number_limit = 0\n if number_of_tokens > PACKAGE_SIZE:\n package_number_limit = number_of_tokens - PACKAGE_SIZE\n while number_to_generate > 0:\n if number_to_generate >= BATCH_SIZE:\n generate_voting_tokens_batch(\n module_id,\n BATCH_SIZE,\n package_number,\n number_of_tokens,\n module_name,\n project_id,\n project_name,\n existing_tokens,\n )\n number_to_generate = number_to_generate - BATCH_SIZE\n else:\n generate_voting_tokens_batch(\n module_id,\n number_to_generate,\n package_number,\n number_of_tokens,\n module_name,\n project_id,\n project_name,\n existing_tokens,\n )\n number_to_generate = 0\n if package_number_limit >= number_to_generate:\n package_number += 1\n package_number_limit - PACKAGE_SIZE\n\n\n@background(schedule=1)\ndef generate_voting_tokens_batch(\n module_id,\n batch_size,\n package_number,\n number_of_tokens,\n module_name,\n project_id,\n project_name,\n existing_tokens,\n):\n module = Module.objects.get(pk=module_id)\n VotingToken.objects.bulk_create(\n [get_token_and_hash(module, package_number) for i in range(batch_size)]\n )\n\n\ndef get_token_and_hash(module, package_number):\n token = get_token_12()\n token_hash = VotingToken.hash_token(token, module)\n return VotingToken(\n token=token, token_hash=token_hash, module=module, package_number=package_number\n )\n", "path": "meinberlin/apps/votes/tasks.py"}]} | 1,426 | 165 |
gh_patches_debug_17265 | rasdani/github-patches | git_diff | netbox-community__netbox-2694 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add "White" as a cable color
### Environment
* Python version: 3.6
* NetBox version: 2.5.1
### Proposed Functionality
Add color white to the cable colors.
Optionally add:
* ~~slate~~(Dark Grey works, almost identical color)
* rose
* ~~violet~~ (Fuschia works, almost identical color)
* aqua
### Use Case
These fiber strand colors are missing
### Database Changes
None
### External Dependencies
None
</issue>
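Editor's note: the accepted change (shown in the diff at the end of this entry) simply extends `COLOR_CHOICES` with the missing entries. For quick reference, these are the three tuples the patch adds:

```python
# Hex value / label pairs taken from the patch shown further down.
NEW_COLOR_CHOICES = (
    ('ffe4e1', 'Rose'),
    ('00ffff', 'Aqua'),
    ('ffffff', 'White'),
)
```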
<code>
[start of netbox/utilities/constants.py]
1 COLOR_CHOICES = (
2 ('aa1409', 'Dark red'),
3 ('f44336', 'Red'),
4 ('e91e63', 'Pink'),
5 ('ff66ff', 'Fuschia'),
6 ('9c27b0', 'Purple'),
7 ('673ab7', 'Dark purple'),
8 ('3f51b5', 'Indigo'),
9 ('2196f3', 'Blue'),
10 ('03a9f4', 'Light blue'),
11 ('00bcd4', 'Cyan'),
12 ('009688', 'Teal'),
13 ('2f6a31', 'Dark green'),
14 ('4caf50', 'Green'),
15 ('8bc34a', 'Light green'),
16 ('cddc39', 'Lime'),
17 ('ffeb3b', 'Yellow'),
18 ('ffc107', 'Amber'),
19 ('ff9800', 'Orange'),
20 ('ff5722', 'Dark orange'),
21 ('795548', 'Brown'),
22 ('c0c0c0', 'Light grey'),
23 ('9e9e9e', 'Grey'),
24 ('607d8b', 'Dark grey'),
25 ('111111', 'Black'),
26 )
27
[end of netbox/utilities/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/netbox/utilities/constants.py b/netbox/utilities/constants.py
--- a/netbox/utilities/constants.py
+++ b/netbox/utilities/constants.py
@@ -2,6 +2,7 @@
('aa1409', 'Dark red'),
('f44336', 'Red'),
('e91e63', 'Pink'),
+ ('ffe4e1', 'Rose'),
('ff66ff', 'Fuschia'),
('9c27b0', 'Purple'),
('673ab7', 'Dark purple'),
@@ -10,6 +11,7 @@
('03a9f4', 'Light blue'),
('00bcd4', 'Cyan'),
('009688', 'Teal'),
+ ('00ffff', 'Aqua'),
('2f6a31', 'Dark green'),
('4caf50', 'Green'),
('8bc34a', 'Light green'),
@@ -23,4 +25,5 @@
('9e9e9e', 'Grey'),
('607d8b', 'Dark grey'),
('111111', 'Black'),
+ ('ffffff', 'White'),
)
| {"golden_diff": "diff --git a/netbox/utilities/constants.py b/netbox/utilities/constants.py\n--- a/netbox/utilities/constants.py\n+++ b/netbox/utilities/constants.py\n@@ -2,6 +2,7 @@\n ('aa1409', 'Dark red'),\n ('f44336', 'Red'),\n ('e91e63', 'Pink'),\n+ ('ffe4e1', 'Rose'),\n ('ff66ff', 'Fuschia'),\n ('9c27b0', 'Purple'),\n ('673ab7', 'Dark purple'),\n@@ -10,6 +11,7 @@\n ('03a9f4', 'Light blue'),\n ('00bcd4', 'Cyan'),\n ('009688', 'Teal'),\n+ ('00ffff', 'Aqua'),\n ('2f6a31', 'Dark green'),\n ('4caf50', 'Green'),\n ('8bc34a', 'Light green'),\n@@ -23,4 +25,5 @@\n ('9e9e9e', 'Grey'),\n ('607d8b', 'Dark grey'),\n ('111111', 'Black'),\n+ ('ffffff', 'White'),\n )\n", "issue": "Add \"White\" as a cable color\n### Environment\r\n* Python version: 3.6\r\n* NetBox version: 2.5.1\r\n\r\n### Proposed Functionality\r\n\r\nAdd color white to the cable colors.\r\n\r\nOptionally add:\r\n\r\n* ~~slate~~(Dark Grey works, almost identical color)\r\n* rose\r\n* ~~violet~~ (Fuschia works, almost identical color)\r\n* aqua\r\n\r\n### Use Case\r\n\r\nThese fiber strand colors are missing\r\n\r\n### Database Changes\r\n\r\nNone\r\n\r\n### External Dependencies\r\n\r\nNone\n", "before_files": [{"content": "COLOR_CHOICES = (\n ('aa1409', 'Dark red'),\n ('f44336', 'Red'),\n ('e91e63', 'Pink'),\n ('ff66ff', 'Fuschia'),\n ('9c27b0', 'Purple'),\n ('673ab7', 'Dark purple'),\n ('3f51b5', 'Indigo'),\n ('2196f3', 'Blue'),\n ('03a9f4', 'Light blue'),\n ('00bcd4', 'Cyan'),\n ('009688', 'Teal'),\n ('2f6a31', 'Dark green'),\n ('4caf50', 'Green'),\n ('8bc34a', 'Light green'),\n ('cddc39', 'Lime'),\n ('ffeb3b', 'Yellow'),\n ('ffc107', 'Amber'),\n ('ff9800', 'Orange'),\n ('ff5722', 'Dark orange'),\n ('795548', 'Brown'),\n ('c0c0c0', 'Light grey'),\n ('9e9e9e', 'Grey'),\n ('607d8b', 'Dark grey'),\n ('111111', 'Black'),\n)\n", "path": "netbox/utilities/constants.py"}]} | 985 | 282 |
gh_patches_debug_4763 | rasdani/github-patches | git_diff | pytorch__ignite-3199 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Mean Absolute Percentage Error (MAPE)
## 🚀 Feature
I'd like to implement the mean absolute percentage error [(MAPE)](https://en.wikipedia.org/wiki/Mean_absolute_percentage_error) in `ignite/metrics`.
It is a commonly used metric for regression problems and it would be really convenient to be able to use it directly with ignite evaluators.
For that, I would write a custom Metric class in a new file `mean_absolute_percentage_error.py` inheriting from the base `Metric` class in `ignite/metrics/metric.py`.
</issue>
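Editor's note: as an illustration of the proposal — not ignite's eventual API, since the diff further down instead documents that the existing `MeanAbsoluteRelativeError` already computes MAPE — a standalone metric could mirror the structure of the regression metric listed next: a `reset`/`_update`/`compute` triple guarded by the usual decorators. A sketch using the same imports as the file below (values are reported as a fraction; multiply by 100 for a percentage):

```python
from typing import Tuple

import torch

from ignite.contrib.metrics.regression._base import _BaseRegression
from ignite.exceptions import NotComputableError
from ignite.metrics.metric import reinit__is_reduced, sync_all_reduce


class MeanAbsolutePercentageError(_BaseRegression):
    """Sketch of a MAPE metric: mean of |A - P| / |A| over all samples."""

    @reinit__is_reduced
    def reset(self) -> None:
        self._sum_of_errors = torch.tensor(0.0, device=self._device)
        self._num_samples = 0

    def _update(self, output: Tuple[torch.Tensor, torch.Tensor]) -> None:
        y_pred, y = output[0].detach(), output[1].detach()
        if (y == 0).any():
            raise NotComputableError("The ground truth has 0.")
        errors = torch.abs(y_pred - y.view_as(y_pred)) / torch.abs(y.view_as(y_pred))
        self._sum_of_errors += torch.sum(errors).to(self._device)
        self._num_samples += y.size()[0]

    @sync_all_reduce("_sum_of_errors", "_num_samples")
    def compute(self) -> float:
        if self._num_samples == 0:
            raise NotComputableError("MAPE must have at least one example before it can be computed.")
        return self._sum_of_errors.item() / self._num_samples
```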
<code>
[start of ignite/contrib/metrics/regression/mean_absolute_relative_error.py]
1 from typing import Tuple
2
3 import torch
4
5 from ignite.contrib.metrics.regression._base import _BaseRegression
6 from ignite.exceptions import NotComputableError
7 from ignite.metrics.metric import reinit__is_reduced, sync_all_reduce
8
9
10 class MeanAbsoluteRelativeError(_BaseRegression):
11 r"""Calculate Mean Absolute Relative Error.
12
13 .. math::
14 \text{MARE} = \frac{1}{n}\sum_{j=1}^n\frac{\left|A_j-P_j\right|}{\left|A_j\right|}
15
16 where :math:`A_j` is the ground truth and :math:`P_j` is the predicted value.
17
18 More details can be found in the reference `Botchkarev 2018`__.
19
20 - ``update`` must receive output of the form ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.
21 - `y` and `y_pred` must be of same shape `(N, )` or `(N, 1)`.
22
23 __ https://arxiv.org/ftp/arxiv/papers/1809/1809.03006.pdf
24
25 Parameters are inherited from ``Metric.__init__``.
26
27 Args:
28 output_transform: a callable that is used to transform the
29 :class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the
30 form expected by the metric. This can be useful if, for example, you have a multi-output model and
31 you want to compute the metric with respect to one of the outputs.
32 By default, metrics require the output as ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.
33 device: specifies which device updates are accumulated on. Setting the
34 metric's device to be the same as your ``update`` arguments ensures the ``update`` method is
35 non-blocking. By default, CPU.
36
37 Examples:
38 To use with ``Engine`` and ``process_function``, simply attach the metric instance to the engine.
39 The output of the engine's ``process_function`` needs to be in format of
40 ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y, ...}``.
41
42 .. include:: defaults.rst
43 :start-after: :orphan:
44
45 .. testcode::
46
47 metric = MeanAbsoluteRelativeError()
48 metric.attach(default_evaluator, 'mare')
49 y_true = torch.tensor([1., 2., 3., 4., 5.])
50 y_pred = y_true * 0.75
51 state = default_evaluator.run([[y_pred, y_true]])
52 print(state.metrics['mare'])
53
54 .. testoutput::
55
56 0.25...
57
58 .. versionchanged:: 0.4.5
59 - Works with DDP.
60 """
61 _state_dict_all_req_keys = ("_sum_of_absolute_relative_errors", "_num_samples")
62
63 @reinit__is_reduced
64 def reset(self) -> None:
65 self._sum_of_absolute_relative_errors = torch.tensor(0.0, device=self._device)
66 self._num_samples = 0
67
68 def _update(self, output: Tuple[torch.Tensor, torch.Tensor]) -> None:
69 y_pred, y = output[0].detach(), output[1].detach()
70 if (y == 0).any():
71 raise NotComputableError("The ground truth has 0.")
72 absolute_error = torch.abs(y_pred - y.view_as(y_pred)) / torch.abs(y.view_as(y_pred))
73 self._sum_of_absolute_relative_errors += torch.sum(absolute_error).to(self._device)
74 self._num_samples += y.size()[0]
75
76 @sync_all_reduce("_sum_of_absolute_relative_errors", "_num_samples")
77 def compute(self) -> float:
78 if self._num_samples == 0:
79 raise NotComputableError(
80 "MeanAbsoluteRelativeError must have at least one sample before it can be computed."
81 )
82 return self._sum_of_absolute_relative_errors.item() / self._num_samples
83
[end of ignite/contrib/metrics/regression/mean_absolute_relative_error.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ignite/contrib/metrics/regression/mean_absolute_relative_error.py b/ignite/contrib/metrics/regression/mean_absolute_relative_error.py
--- a/ignite/contrib/metrics/regression/mean_absolute_relative_error.py
+++ b/ignite/contrib/metrics/regression/mean_absolute_relative_error.py
@@ -8,7 +8,7 @@
class MeanAbsoluteRelativeError(_BaseRegression):
- r"""Calculate Mean Absolute Relative Error.
+ r"""Calculate Mean Absolute Relative Error (MARE), also known as Mean Absolute Percentage Error (MAPE).
.. math::
\text{MARE} = \frac{1}{n}\sum_{j=1}^n\frac{\left|A_j-P_j\right|}{\left|A_j\right|}
| {"golden_diff": "diff --git a/ignite/contrib/metrics/regression/mean_absolute_relative_error.py b/ignite/contrib/metrics/regression/mean_absolute_relative_error.py\n--- a/ignite/contrib/metrics/regression/mean_absolute_relative_error.py\n+++ b/ignite/contrib/metrics/regression/mean_absolute_relative_error.py\n@@ -8,7 +8,7 @@\n \n \n class MeanAbsoluteRelativeError(_BaseRegression):\n- r\"\"\"Calculate Mean Absolute Relative Error.\n+ r\"\"\"Calculate Mean Absolute Relative Error (MARE), also known as Mean Absolute Percentage Error (MAPE).\n \n .. math::\n \\text{MARE} = \\frac{1}{n}\\sum_{j=1}^n\\frac{\\left|A_j-P_j\\right|}{\\left|A_j\\right|}\n", "issue": "Mean Absolute Percentage Error (MAPE)\n## \ud83d\ude80 Feature\r\n\r\nI'd like to implement the mean absolute percentage error [(MAPE)](https://en.wikipedia.org/wiki/Mean_absolute_percentage_error) in `ignite/metrics`.\r\n\r\nIt is a commonly used metric for regression problems and it would be really convenient to be able to use it directly with ignite evaluators.\r\n\r\nFor that, I would write a custom Metric class in a new file `mean_absolute_percentage_error.py` inheriting from the base `Metric` class in `ignite/metrics/metric.py`.\r\n\n", "before_files": [{"content": "from typing import Tuple\n\nimport torch\n\nfrom ignite.contrib.metrics.regression._base import _BaseRegression\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import reinit__is_reduced, sync_all_reduce\n\n\nclass MeanAbsoluteRelativeError(_BaseRegression):\n r\"\"\"Calculate Mean Absolute Relative Error.\n\n .. math::\n \\text{MARE} = \\frac{1}{n}\\sum_{j=1}^n\\frac{\\left|A_j-P_j\\right|}{\\left|A_j\\right|}\n\n where :math:`A_j` is the ground truth and :math:`P_j` is the predicted value.\n\n More details can be found in the reference `Botchkarev 2018`__.\n\n - ``update`` must receive output of the form ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.\n - `y` and `y_pred` must be of same shape `(N, )` or `(N, 1)`.\n\n __ https://arxiv.org/ftp/arxiv/papers/1809/1809.03006.pdf\n\n Parameters are inherited from ``Metric.__init__``.\n\n Args:\n output_transform: a callable that is used to transform the\n :class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n By default, metrics require the output as ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.\n device: specifies which device updates are accumulated on. Setting the\n metric's device to be the same as your ``update`` arguments ensures the ``update`` method is\n non-blocking. By default, CPU.\n\n Examples:\n To use with ``Engine`` and ``process_function``, simply attach the metric instance to the engine.\n The output of the engine's ``process_function`` needs to be in format of\n ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y, ...}``.\n\n .. include:: defaults.rst\n :start-after: :orphan:\n\n .. testcode::\n\n metric = MeanAbsoluteRelativeError()\n metric.attach(default_evaluator, 'mare')\n y_true = torch.tensor([1., 2., 3., 4., 5.])\n y_pred = y_true * 0.75\n state = default_evaluator.run([[y_pred, y_true]])\n print(state.metrics['mare'])\n\n .. testoutput::\n\n 0.25...\n\n .. 
versionchanged:: 0.4.5\n - Works with DDP.\n \"\"\"\n _state_dict_all_req_keys = (\"_sum_of_absolute_relative_errors\", \"_num_samples\")\n\n @reinit__is_reduced\n def reset(self) -> None:\n self._sum_of_absolute_relative_errors = torch.tensor(0.0, device=self._device)\n self._num_samples = 0\n\n def _update(self, output: Tuple[torch.Tensor, torch.Tensor]) -> None:\n y_pred, y = output[0].detach(), output[1].detach()\n if (y == 0).any():\n raise NotComputableError(\"The ground truth has 0.\")\n absolute_error = torch.abs(y_pred - y.view_as(y_pred)) / torch.abs(y.view_as(y_pred))\n self._sum_of_absolute_relative_errors += torch.sum(absolute_error).to(self._device)\n self._num_samples += y.size()[0]\n\n @sync_all_reduce(\"_sum_of_absolute_relative_errors\", \"_num_samples\")\n def compute(self) -> float:\n if self._num_samples == 0:\n raise NotComputableError(\n \"MeanAbsoluteRelativeError must have at least one sample before it can be computed.\"\n )\n return self._sum_of_absolute_relative_errors.item() / self._num_samples\n", "path": "ignite/contrib/metrics/regression/mean_absolute_relative_error.py"}]} | 1,716 | 171 |
gh_patches_debug_26538 | rasdani/github-patches | git_diff | speechbrain__speechbrain-304 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Stats precision of FileTrainLogger
Now, all the stats logged by a FileTrainLogger have a precision of 2 after the decimal point. In some training scenarios, precision 2 is not enough for some stats. I suggest allowing users to choose the precision for each stat, or raising the precision to 4 or 5 uniformly.
</issue>
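Editor's note: the formatting decision lives in `FileTrainLogger._item_to_string` (see the file below), so a configurable precision amounts to threading one integer through that method. A minimal standalone sketch of the idea — the merged patch at the end of this entry does essentially this inside the class, and the 1.0–100.0 fixed-point window below follows that patch:

```python
def item_to_string(key, value, precision=2, dataset=None):
    """Format one stat: fixed-point for mid-range floats, scientific notation otherwise."""
    if isinstance(value, float) and 1.0 < value < 100.0:
        value = f"{value:.{precision}f}"
    elif isinstance(value, float):
        value = f"{value:.{precision}e}"
    if dataset is not None:
        key = f"{dataset} {key}"
    return f"{key}: {value}"


print(item_to_string("loss", 1.2345678e-5, precision=5))          # loss: 1.23457e-05
print(item_to_string("acc", 93.4567, precision=4, dataset="valid"))  # valid acc: 93.4567
```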
<code>
[start of speechbrain/utils/train_logger.py]
1 """
2 Loggers for experiment monitoring
3
4 Authors
5 * Peter Plantinga 2020
6 """
7 import logging
8 from speechbrain.utils.edit_distance import wer_summary
9
10 logger = logging.getLogger(__name__)
11
12
13 class TrainLogger:
14 """Abstract class defining an interface for training loggers."""
15
16 def log_stats(
17 self,
18 stats_meta,
19 train_stats=None,
20 valid_stats=None,
21 test_stats=None,
22 verbose=False,
23 ):
24 """Log the stats for one epoch.
25
26 Arguments
27 ---------
28 stats_meta : dict of str:scalar pairs
29 Meta information about the stats (e.g. epoch, learning-rate, etc.)
30 train_stats : dict of str:list pairs
31 Each loss type is represented with a str : list pair including
32 all the values for the training pass.
33 valid_stats : dict of str:list pairs
34 Each loss type is represented with a str : list pair including
35 all the values for the validation pass.
36 test_stats : dict of str:list pairs
37 Each loss type is represented with a str : list pair including
38 all the values for the test pass.
39 verbose : bool
40 Whether to also put logging information to the standard logger.
41 """
42 raise NotImplementedError
43
44
45 class FileTrainLogger(TrainLogger):
46 """Text logger of training information
47
48 Arguments
49 ---------
50 save_file : str
51 The file to use for logging train information.
52 summary_fns : dict of str:function pairs
53 Each summary function should take a list produced as output
54 from a training/validation pass and summarize it to a single scalar.
55 """
56
57 def __init__(self, save_file, summary_fns=None):
58 self.save_file = save_file
59 self.summary_fns = summary_fns or {}
60
61 def _item_to_string(self, key, value, dataset=None):
62 """Convert one item to string, handling floats"""
63 if isinstance(value, float) and 0.01 < value < 100.0:
64 value = f"{value:.2f}"
65 elif isinstance(value, float):
66 value = f"{value:.2e}"
67 if dataset is not None:
68 key = f"{dataset} {key}"
69 return f"{key}: {value}"
70
71 def _stats_to_string(self, stats, dataset=None):
72 """Convert all stats to a single string summary"""
73 return ", ".join(
74 [self._item_to_string(k, v, dataset) for k, v in stats.items()]
75 )
76
77 def log_stats(
78 self,
79 stats_meta,
80 train_stats=None,
81 valid_stats=None,
82 test_stats=None,
83 verbose=True,
84 ):
85 """See TrainLogger.log_stats()"""
86 string_summary = self._stats_to_string(stats_meta)
87 for dataset, stats in [
88 ("train", train_stats),
89 ("valid", valid_stats),
90 ("test", test_stats),
91 ]:
92 if stats is None:
93 continue
94 summary = {}
95 for stat, value_list in stats.items():
96 if stat in self.summary_fns:
97 summary[stat] = self.summary_fns[stat](value_list)
98 else:
99 summary[stat] = summarize_average(value_list)
100 string_summary += " - " + self._stats_to_string(summary, dataset)
101
102 with open(self.save_file, "a") as fout:
103 print(string_summary, file=fout)
104 if verbose:
105 logger.info(string_summary)
106
107
108 class TensorboardLogger(TrainLogger):
109 """Logs training information in the format required by Tensorboard.
110
111 Arguments
112 ---------
113 save_dir : str
114 A directory for storing all the relevant logs
115
116 Raises
117 ------
118 ImportError if Tensorboard is not installed.
119 """
120
121 def __init__(self, save_dir):
122 self.save_dir = save_dir
123
124 # Raises ImportError if TensorBoard is not installed
125 from torch.utils.tensorboard import SummaryWriter
126
127 self.writer = SummaryWriter(self.save_dir)
128 self.global_step = {"train": {}, "valid": {}, "meta": 0}
129
130 def log_stats(
131 self,
132 stats_meta,
133 train_stats=None,
134 valid_stats=None,
135 test_stats=None,
136 verbose=False,
137 ):
138 """See TrainLogger.log_stats()"""
139 self.global_step["meta"] += 1
140 for name, value in stats_meta.items():
141 self.writer.add_scalar(name, value, self.global_step["meta"])
142
143 for dataset, stats in [
144 ("train", train_stats),
145 ("valid", valid_stats),
146 ("test", test_stats),
147 ]:
148 if stats is None:
149 continue
150 for stat, value_list in stats.items():
151 if stat not in self.global_step[dataset]:
152 self.global_step[dataset][stat] = 0
153 tag = f"{stat}/{dataset}"
154 for value in value_list:
155 new_global_step = self.global_step[dataset][stat] + 1
156 self.writer.add_scalar(tag, value, new_global_step)
157 self.global_step[dataset][stat] = new_global_step
158
159
160 def summarize_average(stat_list):
161 return float(sum(stat_list) / len(stat_list))
162
163
164 def summarize_error_rate(stat_list):
165 summary = wer_summary(stat_list)
166 return summary["WER"]
167
[end of speechbrain/utils/train_logger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/speechbrain/utils/train_logger.py b/speechbrain/utils/train_logger.py
--- a/speechbrain/utils/train_logger.py
+++ b/speechbrain/utils/train_logger.py
@@ -49,21 +49,24 @@
---------
save_file : str
The file to use for logging train information.
+ precision : int
+ Number of decimal places to display. Default 2, example: 1.35e-5
summary_fns : dict of str:function pairs
Each summary function should take a list produced as output
from a training/validation pass and summarize it to a single scalar.
"""
- def __init__(self, save_file, summary_fns=None):
+ def __init__(self, save_file, precision=2, summary_fns=None):
self.save_file = save_file
+ self.precision = precision
self.summary_fns = summary_fns or {}
def _item_to_string(self, key, value, dataset=None):
"""Convert one item to string, handling floats"""
- if isinstance(value, float) and 0.01 < value < 100.0:
- value = f"{value:.2f}"
+ if isinstance(value, float) and 1.0 < value < 100.0:
+ value = f"{value:.{self.precision}f}"
elif isinstance(value, float):
- value = f"{value:.2e}"
+ value = f"{value:.{self.precision}e}"
if dataset is not None:
key = f"{dataset} {key}"
return f"{key}: {value}"
| {"golden_diff": "diff --git a/speechbrain/utils/train_logger.py b/speechbrain/utils/train_logger.py\n--- a/speechbrain/utils/train_logger.py\n+++ b/speechbrain/utils/train_logger.py\n@@ -49,21 +49,24 @@\n ---------\n save_file : str\n The file to use for logging train information.\n+ precision : int\n+ Number of decimal places to display. Default 2, example: 1.35e-5\n summary_fns : dict of str:function pairs\n Each summary function should take a list produced as output\n from a training/validation pass and summarize it to a single scalar.\n \"\"\"\n \n- def __init__(self, save_file, summary_fns=None):\n+ def __init__(self, save_file, precision=2, summary_fns=None):\n self.save_file = save_file\n+ self.precision = precision\n self.summary_fns = summary_fns or {}\n \n def _item_to_string(self, key, value, dataset=None):\n \"\"\"Convert one item to string, handling floats\"\"\"\n- if isinstance(value, float) and 0.01 < value < 100.0:\n- value = f\"{value:.2f}\"\n+ if isinstance(value, float) and 1.0 < value < 100.0:\n+ value = f\"{value:.{self.precision}f}\"\n elif isinstance(value, float):\n- value = f\"{value:.2e}\"\n+ value = f\"{value:.{self.precision}e}\"\n if dataset is not None:\n key = f\"{dataset} {key}\"\n return f\"{key}: {value}\"\n", "issue": "Stats precision of FileTrainLogger\nNow, all the stats logged by a FileTrainLogger have the precision 2 after their decimal points. In some training scenarios, precision 2 is not enough for some stats. I suggest allowing users to decide precision for each stats or adding precision number to 4 or 5 uniformly.\n", "before_files": [{"content": "\"\"\"\nLoggers for experiment monitoring\n\nAuthors\n * Peter Plantinga 2020\n\"\"\"\nimport logging\nfrom speechbrain.utils.edit_distance import wer_summary\n\nlogger = logging.getLogger(__name__)\n\n\nclass TrainLogger:\n \"\"\"Abstract class defining an interface for training loggers.\"\"\"\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"Log the stats for one epoch.\n\n Arguments\n ---------\n stats_meta : dict of str:scalar pairs\n Meta information about the stats (e.g. 
epoch, learning-rate, etc.)\n train_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the training pass.\n valid_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the validation pass.\n test_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the test pass.\n verbose : bool\n Whether to also put logging information to the standard logger.\n \"\"\"\n raise NotImplementedError\n\n\nclass FileTrainLogger(TrainLogger):\n \"\"\"Text logger of training information\n\n Arguments\n ---------\n save_file : str\n The file to use for logging train information.\n summary_fns : dict of str:function pairs\n Each summary function should take a list produced as output\n from a training/validation pass and summarize it to a single scalar.\n \"\"\"\n\n def __init__(self, save_file, summary_fns=None):\n self.save_file = save_file\n self.summary_fns = summary_fns or {}\n\n def _item_to_string(self, key, value, dataset=None):\n \"\"\"Convert one item to string, handling floats\"\"\"\n if isinstance(value, float) and 0.01 < value < 100.0:\n value = f\"{value:.2f}\"\n elif isinstance(value, float):\n value = f\"{value:.2e}\"\n if dataset is not None:\n key = f\"{dataset} {key}\"\n return f\"{key}: {value}\"\n\n def _stats_to_string(self, stats, dataset=None):\n \"\"\"Convert all stats to a single string summary\"\"\"\n return \", \".join(\n [self._item_to_string(k, v, dataset) for k, v in stats.items()]\n )\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=True,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n string_summary = self._stats_to_string(stats_meta)\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n summary = {}\n for stat, value_list in stats.items():\n if stat in self.summary_fns:\n summary[stat] = self.summary_fns[stat](value_list)\n else:\n summary[stat] = summarize_average(value_list)\n string_summary += \" - \" + self._stats_to_string(summary, dataset)\n\n with open(self.save_file, \"a\") as fout:\n print(string_summary, file=fout)\n if verbose:\n logger.info(string_summary)\n\n\nclass TensorboardLogger(TrainLogger):\n \"\"\"Logs training information in the format required by Tensorboard.\n\n Arguments\n ---------\n save_dir : str\n A directory for storing all the relevant logs\n\n Raises\n ------\n ImportError if Tensorboard is not installed.\n \"\"\"\n\n def __init__(self, save_dir):\n self.save_dir = save_dir\n\n # Raises ImportError if TensorBoard is not installed\n from torch.utils.tensorboard import SummaryWriter\n\n self.writer = SummaryWriter(self.save_dir)\n self.global_step = {\"train\": {}, \"valid\": {}, \"meta\": 0}\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n self.global_step[\"meta\"] += 1\n for name, value in stats_meta.items():\n self.writer.add_scalar(name, value, self.global_step[\"meta\"])\n\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n for stat, value_list in stats.items():\n if stat not in self.global_step[dataset]:\n self.global_step[dataset][stat] = 0\n tag = f\"{stat}/{dataset}\"\n for value in 
value_list:\n new_global_step = self.global_step[dataset][stat] + 1\n self.writer.add_scalar(tag, value, new_global_step)\n self.global_step[dataset][stat] = new_global_step\n\n\ndef summarize_average(stat_list):\n return float(sum(stat_list) / len(stat_list))\n\n\ndef summarize_error_rate(stat_list):\n summary = wer_summary(stat_list)\n return summary[\"WER\"]\n", "path": "speechbrain/utils/train_logger.py"}]} | 2,119 | 362 |
gh_patches_debug_6949 | rasdani/github-patches | git_diff | mkdocs__mkdocs-409 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
problem with config in command line
If I run the following command with the current development version:
mkdocs serve --config=/home/lf/git/mywork/bb/bog/mkdocs.yml
it raises this error:
```
Config file 'mkdocs.yml' does not exist.
```
But if I run the same command using version 0.11.1, everything is OK.
Is there anything wrong with the code below in [config](https://github.com/tomchristie/mkdocs/blob/master/mkdocs/config.py#L79)?
```
if 'config' in options:
filename = options.pop('config')
```
Should it be:
```
if 'config' in options:
filename = options.get('config')
```
Because when we run `mkdocs serve`, this block of code is executed twice; the second time through, `filename` falls back to the default `mkdocs.yml`, which may not exist.
</issue>
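Editor's note: the report boils down to `options.pop('config')` mutating the shared options dict, so the second call to `load_config` no longer sees the custom path. A tiny self-contained reproduction of that behaviour (plain dicts, no MkDocs required); the merged fix, shown in the diff below, keeps the key by indexing instead of popping:

```python
options = {"config": "/home/user/project/mkdocs.yml"}  # hypothetical path for illustration


def resolve_filename(options, default="mkdocs.yml", destructive=True):
    """Mimic load_config's filename resolution; destructive=True models options.pop."""
    if "config" in options:
        return options.pop("config") if destructive else options["config"]
    return default


print(resolve_filename(options))  # /home/user/project/mkdocs.yml  (first call)
print(resolve_filename(options))  # mkdocs.yml                     (second call: key was popped)
```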
<code>
[start of mkdocs/config.py]
1 # coding: utf-8
2
3 from mkdocs import utils
4 from mkdocs.compat import urlparse
5 from mkdocs.exceptions import ConfigurationError
6
7 import logging
8 import os
9 import yaml
10
11 log = logging.getLogger(__name__)
12
13 DEFAULT_CONFIG = {
14 'site_name': None,
15 'pages': None,
16
17 'site_url': None,
18 'site_description': None,
19 'site_author': None,
20 'site_favicon': None,
21
22 'theme': 'mkdocs',
23 'docs_dir': 'docs',
24 'site_dir': 'site',
25 'theme_dir': None,
26
27 'copyright': None,
28 'google_analytics': None,
29
30 # The address on which to serve the livereloading docs server.
31 'dev_addr': '127.0.0.1:8000',
32
33 # If `True`, use `<page_name>/index.hmtl` style files with hyperlinks to the directory.
34 # If `False`, use `<page_name>.html style file with hyperlinks to the file.
35 # True generates nicer URLs, but False is useful if browsing the output on a filesystem.
36 'use_directory_urls': True,
37
38 # Specify a link to the project source repo to be included
39 # in the documentation pages.
40 'repo_url': None,
41
42 # A name to use for the link to the project source repo.
43 # Default: If repo_url is unset then None, otherwise
44 # "GitHub" or "Bitbucket" for known url or Hostname for unknown urls.
45 'repo_name': None,
46
47 # Specify which css or javascript files from the docs
48 # directionary should be additionally included in the site.
49 # Default: List of all .css and .js files in the docs dir.
50 'extra_css': None,
51 'extra_javascript': None,
52
53 # Determine if the site should include the nav and next/prev elements.
54 # Default: True if the site has more than one page, False otherwise.
55 'include_nav': None,
56 'include_next_prev': None,
57
58 # PyMarkdown extension names.
59 'markdown_extensions': (),
60
61 # Determine if the site should generate a json search index and include
62 # search elements in the theme. - TODO
63 'include_search': False,
64
65 # Determine if the site should include a 404.html page.
66 # TODO: Implment this. Make this None, have it True if a 404.html
67 # template exists in the theme or docs dir.
68 'include_404': False,
69
70 # enabling strict mode causes MkDocs to stop the build when a problem is
71 # encountered rather than display an error.
72 'strict': False,
73 }
74
75
76 def load_config(filename='mkdocs.yml', options=None):
77 options = options or {}
78 if 'config' in options:
79 filename = options.pop('config')
80 if not os.path.exists(filename):
81 raise ConfigurationError("Config file '%s' does not exist." % filename)
82 with open(filename, 'r') as fp:
83 user_config = yaml.load(fp)
84 if not isinstance(user_config, dict):
85 raise ConfigurationError("The mkdocs.yml file is invalid. See http://www.mkdocs.org/user-guide/configuration/ for more information.")
86 user_config.update(options)
87 return validate_config(user_config)
88
89
90 def validate_config(user_config):
91 config = DEFAULT_CONFIG.copy()
92
93 theme_in_config = 'theme' in user_config
94
95 config.update(user_config)
96
97 if not config['site_name']:
98 raise ConfigurationError("Config must contain 'site_name' setting.")
99
100 # Validate that the docs_dir and site_dir don't contain the
101 # other as this will lead to copying back and forth on each
102 # and eventually make a deep nested mess.
103 abs_site_dir = os.path.abspath(config['site_dir'])
104 abs_docs_dir = os.path.abspath(config['docs_dir'])
105 if abs_docs_dir.startswith(abs_site_dir):
106 raise ConfigurationError(
107 "The 'docs_dir' can't be within the 'site_dir'.")
108 elif abs_site_dir.startswith(abs_docs_dir):
109 raise ConfigurationError(
110 "The 'site_dir' can't be within the 'docs_dir'.")
111
112 # If not specified, then the 'pages' config simply includes all
113 # markdown files in the docs dir, without generating any header items
114 # for them.
115 pages = []
116 extra_css = []
117 extra_javascript = []
118 for (dirpath, dirnames, filenames) in os.walk(config['docs_dir']):
119 for filename in sorted(filenames):
120 fullpath = os.path.join(dirpath, filename)
121 relpath = os.path.relpath(fullpath, config['docs_dir'])
122
123 if utils.is_markdown_file(filename):
124 # index pages should always be the first listed page.
125 if os.path.splitext(relpath)[0] == 'index':
126 pages.insert(0, relpath)
127 else:
128 pages.append(relpath)
129 elif utils.is_css_file(filename):
130 extra_css.append(relpath)
131 elif utils.is_javascript_file(filename):
132 extra_javascript.append(relpath)
133
134 if config['pages'] is None:
135 config['pages'] = pages
136
137 if config['extra_css'] is None:
138 config['extra_css'] = extra_css
139
140 if config['extra_javascript'] is None:
141 config['extra_javascript'] = extra_javascript
142
143 package_dir = os.path.dirname(__file__)
144 theme_dir = [os.path.join(package_dir, 'themes', config['theme'])]
145
146 if config['theme_dir'] is not None:
147 # If the user has given us a custom theme but not a
148 # builtin theme name then we don't want to merge them.
149 if not theme_in_config:
150 theme_dir = []
151 theme_dir.insert(0, config['theme_dir'])
152
153 config['theme_dir'] = theme_dir
154
155 if config['repo_url'] is not None and config['repo_name'] is None:
156 repo_host = urlparse(config['repo_url']).netloc.lower()
157 if repo_host == 'github.com':
158 config['repo_name'] = 'GitHub'
159 elif repo_host == 'bitbucket.org':
160 config['repo_name'] = 'Bitbucket'
161 else:
162 config['repo_name'] = repo_host.split('.')[0].title()
163
164 if config['include_next_prev'] is None:
165 config['include_next_prev'] = len(config['pages']) > 1
166
167 if config['include_nav'] is None:
168 config['include_nav'] = len(config['pages']) > 1
169
170 # To Do:
171
172 # The docs dir must exist.
173 # The theme dir must exist.
174 # Ensure 'theme' is one of 'mkdocs', 'readthedocs', 'custom'
175 # A homepage 'index' must exist.
176 # The theme 'base.html' file must exist.
177 # Cannot set repo_name without setting repo_url.
178 # Cannot set 'include_next_prev: true' when only one page exists.
179 # Cannot set 'include_nav: true' when only one page exists.
180 # Error if any config keys provided that are not in the DEFAULT_CONFIG.
181
182 return config
183
[end of mkdocs/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mkdocs/config.py b/mkdocs/config.py
--- a/mkdocs/config.py
+++ b/mkdocs/config.py
@@ -76,7 +76,7 @@
def load_config(filename='mkdocs.yml', options=None):
options = options or {}
if 'config' in options:
- filename = options.pop('config')
+ filename = options['config']
if not os.path.exists(filename):
raise ConfigurationError("Config file '%s' does not exist." % filename)
with open(filename, 'r') as fp:
| {"golden_diff": "diff --git a/mkdocs/config.py b/mkdocs/config.py\n--- a/mkdocs/config.py\n+++ b/mkdocs/config.py\n@@ -76,7 +76,7 @@\n def load_config(filename='mkdocs.yml', options=None):\n options = options or {}\n if 'config' in options:\n- filename = options.pop('config')\n+ filename = options['config']\n if not os.path.exists(filename):\n raise ConfigurationError(\"Config file '%s' does not exist.\" % filename)\n with open(filename, 'r') as fp:\n", "issue": "problem with config in command line \nIf I run the follow command in current development version:\n\nmkdocs serve --config=/home/lf/git/mywork/bb/bog/mkdocs.yml\n\nit will raise error:\n\n```\nConfig file 'mkdocs.yml' does not exist.\n```\n\nBut if I run the same command use version 0.11.1\n\nEverything is OK\n\nIs there any thing wrong with code below in [config](https://github.com/tomchristie/mkdocs/blob/master/mkdocs/config.py#L79)\n\n```\nif 'config' in options:\n filename = options.pop('config')\n```\n\nShould it be:\n\n```\nif 'config' in options:\n filename = options.get('config')\n```\n\nBecause when we run `mkdocs serve` , we will execute this block of code two times, filename will use the default `mkdocs.yml` in the second time, this file may not exist.\n\n", "before_files": [{"content": "# coding: utf-8\n\nfrom mkdocs import utils\nfrom mkdocs.compat import urlparse\nfrom mkdocs.exceptions import ConfigurationError\n\nimport logging\nimport os\nimport yaml\n\nlog = logging.getLogger(__name__)\n\nDEFAULT_CONFIG = {\n 'site_name': None,\n 'pages': None,\n\n 'site_url': None,\n 'site_description': None,\n 'site_author': None,\n 'site_favicon': None,\n\n 'theme': 'mkdocs',\n 'docs_dir': 'docs',\n 'site_dir': 'site',\n 'theme_dir': None,\n\n 'copyright': None,\n 'google_analytics': None,\n\n # The address on which to serve the livereloading docs server.\n 'dev_addr': '127.0.0.1:8000',\n\n # If `True`, use `<page_name>/index.hmtl` style files with hyperlinks to the directory.\n # If `False`, use `<page_name>.html style file with hyperlinks to the file.\n # True generates nicer URLs, but False is useful if browsing the output on a filesystem.\n 'use_directory_urls': True,\n\n # Specify a link to the project source repo to be included\n # in the documentation pages.\n 'repo_url': None,\n\n # A name to use for the link to the project source repo.\n # Default: If repo_url is unset then None, otherwise\n # \"GitHub\" or \"Bitbucket\" for known url or Hostname for unknown urls.\n 'repo_name': None,\n\n # Specify which css or javascript files from the docs\n # directionary should be additionally included in the site.\n # Default: List of all .css and .js files in the docs dir.\n 'extra_css': None,\n 'extra_javascript': None,\n\n # Determine if the site should include the nav and next/prev elements.\n # Default: True if the site has more than one page, False otherwise.\n 'include_nav': None,\n 'include_next_prev': None,\n\n # PyMarkdown extension names.\n 'markdown_extensions': (),\n\n # Determine if the site should generate a json search index and include\n # search elements in the theme. - TODO\n 'include_search': False,\n\n # Determine if the site should include a 404.html page.\n # TODO: Implment this. 
Make this None, have it True if a 404.html\n # template exists in the theme or docs dir.\n 'include_404': False,\n\n # enabling strict mode causes MkDocs to stop the build when a problem is\n # encountered rather than display an error.\n 'strict': False,\n}\n\n\ndef load_config(filename='mkdocs.yml', options=None):\n options = options or {}\n if 'config' in options:\n filename = options.pop('config')\n if not os.path.exists(filename):\n raise ConfigurationError(\"Config file '%s' does not exist.\" % filename)\n with open(filename, 'r') as fp:\n user_config = yaml.load(fp)\n if not isinstance(user_config, dict):\n raise ConfigurationError(\"The mkdocs.yml file is invalid. See http://www.mkdocs.org/user-guide/configuration/ for more information.\")\n user_config.update(options)\n return validate_config(user_config)\n\n\ndef validate_config(user_config):\n config = DEFAULT_CONFIG.copy()\n\n theme_in_config = 'theme' in user_config\n\n config.update(user_config)\n\n if not config['site_name']:\n raise ConfigurationError(\"Config must contain 'site_name' setting.\")\n\n # Validate that the docs_dir and site_dir don't contain the\n # other as this will lead to copying back and forth on each\n # and eventually make a deep nested mess.\n abs_site_dir = os.path.abspath(config['site_dir'])\n abs_docs_dir = os.path.abspath(config['docs_dir'])\n if abs_docs_dir.startswith(abs_site_dir):\n raise ConfigurationError(\n \"The 'docs_dir' can't be within the 'site_dir'.\")\n elif abs_site_dir.startswith(abs_docs_dir):\n raise ConfigurationError(\n \"The 'site_dir' can't be within the 'docs_dir'.\")\n\n # If not specified, then the 'pages' config simply includes all\n # markdown files in the docs dir, without generating any header items\n # for them.\n pages = []\n extra_css = []\n extra_javascript = []\n for (dirpath, dirnames, filenames) in os.walk(config['docs_dir']):\n for filename in sorted(filenames):\n fullpath = os.path.join(dirpath, filename)\n relpath = os.path.relpath(fullpath, config['docs_dir'])\n\n if utils.is_markdown_file(filename):\n # index pages should always be the first listed page.\n if os.path.splitext(relpath)[0] == 'index':\n pages.insert(0, relpath)\n else:\n pages.append(relpath)\n elif utils.is_css_file(filename):\n extra_css.append(relpath)\n elif utils.is_javascript_file(filename):\n extra_javascript.append(relpath)\n\n if config['pages'] is None:\n config['pages'] = pages\n\n if config['extra_css'] is None:\n config['extra_css'] = extra_css\n\n if config['extra_javascript'] is None:\n config['extra_javascript'] = extra_javascript\n\n package_dir = os.path.dirname(__file__)\n theme_dir = [os.path.join(package_dir, 'themes', config['theme'])]\n\n if config['theme_dir'] is not None:\n # If the user has given us a custom theme but not a\n # builtin theme name then we don't want to merge them.\n if not theme_in_config:\n theme_dir = []\n theme_dir.insert(0, config['theme_dir'])\n\n config['theme_dir'] = theme_dir\n\n if config['repo_url'] is not None and config['repo_name'] is None:\n repo_host = urlparse(config['repo_url']).netloc.lower()\n if repo_host == 'github.com':\n config['repo_name'] = 'GitHub'\n elif repo_host == 'bitbucket.org':\n config['repo_name'] = 'Bitbucket'\n else:\n config['repo_name'] = repo_host.split('.')[0].title()\n\n if config['include_next_prev'] is None:\n config['include_next_prev'] = len(config['pages']) > 1\n\n if config['include_nav'] is None:\n config['include_nav'] = len(config['pages']) > 1\n\n # To Do:\n\n # The docs dir must exist.\n # The theme 
dir must exist.\n # Ensure 'theme' is one of 'mkdocs', 'readthedocs', 'custom'\n # A homepage 'index' must exist.\n # The theme 'base.html' file must exist.\n # Cannot set repo_name without setting repo_url.\n # Cannot set 'include_next_prev: true' when only one page exists.\n # Cannot set 'include_nav: true' when only one page exists.\n # Error if any config keys provided that are not in the DEFAULT_CONFIG.\n\n return config\n", "path": "mkdocs/config.py"}]} | 2,724 | 123 |
gh_patches_debug_30298 | rasdani/github-patches | git_diff | pulp__pulpcore-3755 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
There's a race when creating the same content in multiple processes
`Task 90df818f-3645-4b7c-8aaa-fae400a10ae0 failed (duplicate key value violates unique constraint "file_filecontent_relative_path_digest__pu_b4bae2c2_uniq"
DETAIL: Key (relative_path, digest, _pulp_domain_id)=(B, df7e70e5021544f4834bbee64a9e3789febc4be81470df629cad6ddb03320a5c, be9b7087-f318-48ef-81ce-f3141524c659) already exists.
)`
</issue>
<code>
[start of pulpcore/app/serializers/content.py]
1 from gettext import gettext as _
2
3 from django.db import transaction
4 from rest_framework import serializers
5 from rest_framework.validators import UniqueValidator
6
7 from pulpcore.app import models
8 from pulpcore.app.serializers import base, fields
9 from pulpcore.app.util import get_domain
10
11
12 class BaseContentSerializer(base.ModelSerializer):
13 pulp_href = base.DetailIdentityField(view_name_pattern=r"contents(-.*/.*)-detail")
14
15 class Meta:
16 model = models.Content
17 fields = base.ModelSerializer.Meta.fields
18
19
20 class NoArtifactContentSerializer(BaseContentSerializer):
21 class Meta:
22 model = models.Content
23 fields = BaseContentSerializer.Meta.fields
24
25
26 class SingleArtifactContentSerializer(BaseContentSerializer):
27 artifact = fields.SingleContentArtifactField(
28 help_text=_("Artifact file representing the physical content"),
29 )
30
31 relative_path = serializers.CharField(
32 help_text=_("Path where the artifact is located relative to distributions base_path"),
33 validators=[fields.relative_path_validator],
34 write_only=True,
35 )
36
37 def __init__(self, *args, **kwargs):
38 """
39 Initializer for SingleArtifactContentSerializer
40 """
41 super().__init__(*args, **kwargs)
42
43 # If the content model has its own database field 'relative_path',
44 # we should not mark the field write_only
45 if hasattr(self.Meta.model, "relative_path") and "relative_path" in self.fields:
46 self.fields["relative_path"].write_only = False
47
48 @transaction.atomic
49 def create(self, validated_data):
50 """
51 Create the content and associate it with its Artifact, or retrieve the existing content.
52
53 Args:
54 validated_data (dict): Data to save to the database
55 """
56 content = self.retrieve(validated_data)
57
58 if content is not None:
59 content.touch()
60 else:
61 artifact = validated_data.pop("artifact")
62 if "relative_path" not in self.fields or self.fields["relative_path"].write_only:
63 relative_path = validated_data.pop("relative_path")
64 else:
65 relative_path = validated_data.get("relative_path")
66 content = self.Meta.model.objects.create(**validated_data)
67 models.ContentArtifact.objects.create(
68 artifact=artifact, content=content, relative_path=relative_path
69 )
70
71 return content
72
73 def retrieve(self, validated_data):
74 """
75 Retrieve existing content unit if it exists, else return None.
76
77 This method is plugin-specific and implementing it for a specific content type
78 allows for uploading already existing content units of that type.
79 """
80 return None
81
82 class Meta:
83 model = models.Content
84 fields = BaseContentSerializer.Meta.fields + ("artifact", "relative_path")
85
86
87 class MultipleArtifactContentSerializer(BaseContentSerializer):
88 artifacts = fields.ContentArtifactsField(
89 help_text=_(
90 "A dict mapping relative paths inside the Content to the corresponding"
91 "Artifact URLs. E.g.: {'relative/path': "
92 "'/artifacts/1/'"
93 ),
94 )
95
96 @transaction.atomic
97 def create(self, validated_data):
98 """
99 Create the content and associate it with all its Artifacts.
100
101 Args:
102 validated_data (dict): Data to save to the database
103 """
104 artifacts = validated_data.pop("artifacts")
105 content = self.Meta.model.objects.create(**validated_data)
106 for relative_path, artifact in artifacts.items():
107 models.ContentArtifact.objects.create(
108 artifact=artifact, content=content, relative_path=relative_path
109 )
110 return content
111
112 class Meta:
113 model = models.Content
114 fields = BaseContentSerializer.Meta.fields + ("artifacts",)
115
116
117 class ContentChecksumSerializer(serializers.Serializer):
118 """
119 Provide a serializer with artifact checksum fields for single artifact content.
120
121 If you use this serializer, it's recommended that you prefetch artifacts:
122
123 Content.objects.prefetch_related("_artifacts").all()
124 """
125
126 md5 = fields.ContentArtifactChecksumField(
127 help_text=_("The MD5 checksum if available."),
128 checksum="md5",
129 )
130
131 sha1 = fields.ContentArtifactChecksumField(
132 help_text=_("The SHA-1 checksum if available."),
133 checksum="sha1",
134 )
135
136 sha224 = fields.ContentArtifactChecksumField(
137 help_text=_("The SHA-224 checksum if available."),
138 checksum="sha224",
139 )
140
141 sha256 = fields.ContentArtifactChecksumField(
142 help_text=_("The SHA-256 checksum if available."),
143 checksum="sha256",
144 )
145
146 sha384 = fields.ContentArtifactChecksumField(
147 help_text=_("The SHA-384 checksum if available."),
148 checksum="sha384",
149 )
150
151 sha512 = fields.ContentArtifactChecksumField(
152 help_text=_("The SHA-512 checksum if available."),
153 checksum="sha512",
154 )
155
156 class Meta:
157 model = models.Content
158 fields = base.ModelSerializer.Meta.fields + (
159 "md5",
160 "sha1",
161 "sha224",
162 "sha256",
163 "sha384",
164 "sha512",
165 )
166
167
168 class ArtifactSerializer(base.ModelSerializer):
169 pulp_href = base.IdentityField(view_name="artifacts-detail")
170
171 file = serializers.FileField(help_text=_("The stored file."), allow_empty_file=True)
172
173 size = serializers.IntegerField(help_text=_("The size of the file in bytes."), required=False)
174
175 md5 = serializers.CharField(
176 help_text=_("The MD5 checksum of the file if available."), required=False, allow_null=True
177 )
178
179 sha1 = serializers.CharField(
180 help_text=_("The SHA-1 checksum of the file if available."),
181 required=False,
182 allow_null=True,
183 )
184
185 sha224 = serializers.CharField(
186 help_text=_("The SHA-224 checksum of the file if available."),
187 required=False,
188 allow_null=True,
189 )
190
191 sha256 = serializers.CharField(
192 help_text=_("The SHA-256 checksum of the file if available."),
193 required=False,
194 allow_null=True,
195 )
196
197 sha384 = serializers.CharField(
198 help_text=_("The SHA-384 checksum of the file if available."),
199 required=False,
200 allow_null=True,
201 )
202
203 sha512 = serializers.CharField(
204 help_text=_("The SHA-512 checksum of the file if available."),
205 required=False,
206 allow_null=True,
207 )
208
209 def validate(self, data):
210 """
211 Validate file by size and by all checksums provided.
212
213 Args:
214 data (:class:`django.http.QueryDict`): QueryDict mapping Artifact model fields to their
215 values
216
217 Raises:
218 :class:`rest_framework.exceptions.ValidationError`: When the expected file size or any
219 of the checksums don't match their actual values.
220 """
221 super().validate(data)
222 if "size" in data:
223 if data["file"].size != int(data["size"]):
224 raise serializers.ValidationError(_("The size did not match actual size of file."))
225 else:
226 data["size"] = data["file"].size
227
228 bad_algs = []
229 for algorithm in models.Artifact.FORBIDDEN_DIGESTS:
230 if algorithm in data:
231 bad_algs.append(algorithm)
232 if bad_algs:
233 raise serializers.ValidationError(
234 _("Checksum algorithms {} forbidden for this Pulp instance.").format(bad_algs)
235 )
236
237 for algorithm in reversed(models.Artifact.DIGEST_FIELDS):
238 digest = data["file"].hashers[algorithm].hexdigest()
239
240 if algorithm in data and digest != data[algorithm]:
241 raise serializers.ValidationError(_("The %s checksum did not match.") % algorithm)
242 else:
243 data[algorithm] = digest
244
245 if algorithm in models.Artifact.RELIABLE_DIGEST_FIELDS:
246 validator = UniqueValidator(
247 models.Artifact.objects.filter(pulp_domain=get_domain()),
248 message=_("Artifact with {0} checksum of '{1}' already exists.").format(
249 algorithm, digest
250 ),
251 )
252 validator.instance = None
253 validator(digest, self.fields[algorithm])
254
255 return data
256
257 class Meta:
258 model = models.Artifact
259 fields = base.ModelSerializer.Meta.fields + (
260 "file",
261 "size",
262 "md5",
263 "sha1",
264 "sha224",
265 "sha256",
266 "sha384",
267 "sha512",
268 )
269
270
271 class SigningServiceSerializer(base.ModelSerializer):
272 """
273 A serializer for the model declaring a signing service.
274 """
275
276 pulp_href = base.IdentityField(view_name="signing-services-detail")
277 name = serializers.CharField(help_text=_("A unique name used to recognize a script."))
278 public_key = serializers.CharField(
279 help_text=_("The value of a public key used for the repository verification.")
280 )
281 pubkey_fingerprint = serializers.CharField(help_text=_("The fingerprint of the public key."))
282 script = serializers.CharField(
283 help_text=_("An absolute path to a script which is going to be used for the signing.")
284 )
285
286 class Meta:
287 model = models.SigningService
288 fields = BaseContentSerializer.Meta.fields + (
289 "name",
290 "public_key",
291 "pubkey_fingerprint",
292 "script",
293 )
294
[end of pulpcore/app/serializers/content.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pulpcore/app/serializers/content.py b/pulpcore/app/serializers/content.py
--- a/pulpcore/app/serializers/content.py
+++ b/pulpcore/app/serializers/content.py
@@ -1,6 +1,6 @@
from gettext import gettext as _
-from django.db import transaction
+from django.db import transaction, IntegrityError
from rest_framework import serializers
from rest_framework.validators import UniqueValidator
@@ -45,7 +45,6 @@
if hasattr(self.Meta.model, "relative_path") and "relative_path" in self.fields:
self.fields["relative_path"].write_only = False
- @transaction.atomic
def create(self, validated_data):
"""
Create the content and associate it with its Artifact, or retrieve the existing content.
@@ -63,10 +62,16 @@
relative_path = validated_data.pop("relative_path")
else:
relative_path = validated_data.get("relative_path")
- content = self.Meta.model.objects.create(**validated_data)
- models.ContentArtifact.objects.create(
- artifact=artifact, content=content, relative_path=relative_path
- )
+ try:
+ with transaction.atomic():
+ content = self.Meta.model.objects.create(**validated_data)
+ models.ContentArtifact.objects.create(
+ artifact=artifact, content=content, relative_path=relative_path
+ )
+ except IntegrityError:
+ content = self.retrieve(validated_data)
+ if content is None:
+ raise
return content
| {"golden_diff": "diff --git a/pulpcore/app/serializers/content.py b/pulpcore/app/serializers/content.py\n--- a/pulpcore/app/serializers/content.py\n+++ b/pulpcore/app/serializers/content.py\n@@ -1,6 +1,6 @@\n from gettext import gettext as _\n \n-from django.db import transaction\n+from django.db import transaction, IntegrityError\n from rest_framework import serializers\n from rest_framework.validators import UniqueValidator\n \n@@ -45,7 +45,6 @@\n if hasattr(self.Meta.model, \"relative_path\") and \"relative_path\" in self.fields:\n self.fields[\"relative_path\"].write_only = False\n \n- @transaction.atomic\n def create(self, validated_data):\n \"\"\"\n Create the content and associate it with its Artifact, or retrieve the existing content.\n@@ -63,10 +62,16 @@\n relative_path = validated_data.pop(\"relative_path\")\n else:\n relative_path = validated_data.get(\"relative_path\")\n- content = self.Meta.model.objects.create(**validated_data)\n- models.ContentArtifact.objects.create(\n- artifact=artifact, content=content, relative_path=relative_path\n- )\n+ try:\n+ with transaction.atomic():\n+ content = self.Meta.model.objects.create(**validated_data)\n+ models.ContentArtifact.objects.create(\n+ artifact=artifact, content=content, relative_path=relative_path\n+ )\n+ except IntegrityError:\n+ content = self.retrieve(validated_data)\n+ if content is None:\n+ raise\n \n return content\n", "issue": "There's a race when creating the same content in multiple processes\n`Task 90df818f-3645-4b7c-8aaa-fae400a10ae0 failed (duplicate key value violates unique constraint \"file_filecontent_relative_path_digest__pu_b4bae2c2_uniq\"\r\nDETAIL: Key (relative_path, digest, _pulp_domain_id)=(B, df7e70e5021544f4834bbee64a9e3789febc4be81470df629cad6ddb03320a5c, be9b7087-f318-48ef-81ce-f3141524c659) already exists.\r\n)`\n", "before_files": [{"content": "from gettext import gettext as _\n\nfrom django.db import transaction\nfrom rest_framework import serializers\nfrom rest_framework.validators import UniqueValidator\n\nfrom pulpcore.app import models\nfrom pulpcore.app.serializers import base, fields\nfrom pulpcore.app.util import get_domain\n\n\nclass BaseContentSerializer(base.ModelSerializer):\n pulp_href = base.DetailIdentityField(view_name_pattern=r\"contents(-.*/.*)-detail\")\n\n class Meta:\n model = models.Content\n fields = base.ModelSerializer.Meta.fields\n\n\nclass NoArtifactContentSerializer(BaseContentSerializer):\n class Meta:\n model = models.Content\n fields = BaseContentSerializer.Meta.fields\n\n\nclass SingleArtifactContentSerializer(BaseContentSerializer):\n artifact = fields.SingleContentArtifactField(\n help_text=_(\"Artifact file representing the physical content\"),\n )\n\n relative_path = serializers.CharField(\n help_text=_(\"Path where the artifact is located relative to distributions base_path\"),\n validators=[fields.relative_path_validator],\n write_only=True,\n )\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Initializer for SingleArtifactContentSerializer\n \"\"\"\n super().__init__(*args, **kwargs)\n\n # If the content model has its own database field 'relative_path',\n # we should not mark the field write_only\n if hasattr(self.Meta.model, \"relative_path\") and \"relative_path\" in self.fields:\n self.fields[\"relative_path\"].write_only = False\n\n @transaction.atomic\n def create(self, validated_data):\n \"\"\"\n Create the content and associate it with its Artifact, or retrieve the existing content.\n\n Args:\n validated_data (dict): Data to save to the database\n 
\"\"\"\n content = self.retrieve(validated_data)\n\n if content is not None:\n content.touch()\n else:\n artifact = validated_data.pop(\"artifact\")\n if \"relative_path\" not in self.fields or self.fields[\"relative_path\"].write_only:\n relative_path = validated_data.pop(\"relative_path\")\n else:\n relative_path = validated_data.get(\"relative_path\")\n content = self.Meta.model.objects.create(**validated_data)\n models.ContentArtifact.objects.create(\n artifact=artifact, content=content, relative_path=relative_path\n )\n\n return content\n\n def retrieve(self, validated_data):\n \"\"\"\n Retrieve existing content unit if it exists, else return None.\n\n This method is plugin-specific and implementing it for a specific content type\n allows for uploading already existing content units of that type.\n \"\"\"\n return None\n\n class Meta:\n model = models.Content\n fields = BaseContentSerializer.Meta.fields + (\"artifact\", \"relative_path\")\n\n\nclass MultipleArtifactContentSerializer(BaseContentSerializer):\n artifacts = fields.ContentArtifactsField(\n help_text=_(\n \"A dict mapping relative paths inside the Content to the corresponding\"\n \"Artifact URLs. E.g.: {'relative/path': \"\n \"'/artifacts/1/'\"\n ),\n )\n\n @transaction.atomic\n def create(self, validated_data):\n \"\"\"\n Create the content and associate it with all its Artifacts.\n\n Args:\n validated_data (dict): Data to save to the database\n \"\"\"\n artifacts = validated_data.pop(\"artifacts\")\n content = self.Meta.model.objects.create(**validated_data)\n for relative_path, artifact in artifacts.items():\n models.ContentArtifact.objects.create(\n artifact=artifact, content=content, relative_path=relative_path\n )\n return content\n\n class Meta:\n model = models.Content\n fields = BaseContentSerializer.Meta.fields + (\"artifacts\",)\n\n\nclass ContentChecksumSerializer(serializers.Serializer):\n \"\"\"\n Provide a serializer with artifact checksum fields for single artifact content.\n\n If you use this serializer, it's recommended that you prefetch artifacts:\n\n Content.objects.prefetch_related(\"_artifacts\").all()\n \"\"\"\n\n md5 = fields.ContentArtifactChecksumField(\n help_text=_(\"The MD5 checksum if available.\"),\n checksum=\"md5\",\n )\n\n sha1 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-1 checksum if available.\"),\n checksum=\"sha1\",\n )\n\n sha224 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-224 checksum if available.\"),\n checksum=\"sha224\",\n )\n\n sha256 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-256 checksum if available.\"),\n checksum=\"sha256\",\n )\n\n sha384 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-384 checksum if available.\"),\n checksum=\"sha384\",\n )\n\n sha512 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-512 checksum if available.\"),\n checksum=\"sha512\",\n )\n\n class Meta:\n model = models.Content\n fields = base.ModelSerializer.Meta.fields + (\n \"md5\",\n \"sha1\",\n \"sha224\",\n \"sha256\",\n \"sha384\",\n \"sha512\",\n )\n\n\nclass ArtifactSerializer(base.ModelSerializer):\n pulp_href = base.IdentityField(view_name=\"artifacts-detail\")\n\n file = serializers.FileField(help_text=_(\"The stored file.\"), allow_empty_file=True)\n\n size = serializers.IntegerField(help_text=_(\"The size of the file in bytes.\"), required=False)\n\n md5 = serializers.CharField(\n help_text=_(\"The MD5 checksum of the file if available.\"), required=False, allow_null=True\n )\n\n sha1 = 
serializers.CharField(\n help_text=_(\"The SHA-1 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n sha224 = serializers.CharField(\n help_text=_(\"The SHA-224 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n sha256 = serializers.CharField(\n help_text=_(\"The SHA-256 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n sha384 = serializers.CharField(\n help_text=_(\"The SHA-384 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n sha512 = serializers.CharField(\n help_text=_(\"The SHA-512 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n def validate(self, data):\n \"\"\"\n Validate file by size and by all checksums provided.\n\n Args:\n data (:class:`django.http.QueryDict`): QueryDict mapping Artifact model fields to their\n values\n\n Raises:\n :class:`rest_framework.exceptions.ValidationError`: When the expected file size or any\n of the checksums don't match their actual values.\n \"\"\"\n super().validate(data)\n if \"size\" in data:\n if data[\"file\"].size != int(data[\"size\"]):\n raise serializers.ValidationError(_(\"The size did not match actual size of file.\"))\n else:\n data[\"size\"] = data[\"file\"].size\n\n bad_algs = []\n for algorithm in models.Artifact.FORBIDDEN_DIGESTS:\n if algorithm in data:\n bad_algs.append(algorithm)\n if bad_algs:\n raise serializers.ValidationError(\n _(\"Checksum algorithms {} forbidden for this Pulp instance.\").format(bad_algs)\n )\n\n for algorithm in reversed(models.Artifact.DIGEST_FIELDS):\n digest = data[\"file\"].hashers[algorithm].hexdigest()\n\n if algorithm in data and digest != data[algorithm]:\n raise serializers.ValidationError(_(\"The %s checksum did not match.\") % algorithm)\n else:\n data[algorithm] = digest\n\n if algorithm in models.Artifact.RELIABLE_DIGEST_FIELDS:\n validator = UniqueValidator(\n models.Artifact.objects.filter(pulp_domain=get_domain()),\n message=_(\"Artifact with {0} checksum of '{1}' already exists.\").format(\n algorithm, digest\n ),\n )\n validator.instance = None\n validator(digest, self.fields[algorithm])\n\n return data\n\n class Meta:\n model = models.Artifact\n fields = base.ModelSerializer.Meta.fields + (\n \"file\",\n \"size\",\n \"md5\",\n \"sha1\",\n \"sha224\",\n \"sha256\",\n \"sha384\",\n \"sha512\",\n )\n\n\nclass SigningServiceSerializer(base.ModelSerializer):\n \"\"\"\n A serializer for the model declaring a signing service.\n \"\"\"\n\n pulp_href = base.IdentityField(view_name=\"signing-services-detail\")\n name = serializers.CharField(help_text=_(\"A unique name used to recognize a script.\"))\n public_key = serializers.CharField(\n help_text=_(\"The value of a public key used for the repository verification.\")\n )\n pubkey_fingerprint = serializers.CharField(help_text=_(\"The fingerprint of the public key.\"))\n script = serializers.CharField(\n help_text=_(\"An absolute path to a script which is going to be used for the signing.\")\n )\n\n class Meta:\n model = models.SigningService\n fields = BaseContentSerializer.Meta.fields + (\n \"name\",\n \"public_key\",\n \"pubkey_fingerprint\",\n \"script\",\n )\n", "path": "pulpcore/app/serializers/content.py"}]} | 3,454 | 332 |
gh_patches_debug_1757 | rasdani/github-patches | git_diff | mne-tools__mne-bids-1156 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MNE-BIDS 0.13 release
A release of MNE-BIDS has been requested: https://mne.discourse.group/t/mne-bids-0-13-release-date/7291/2
Our last release has been in December 2022, so I feel like cutting a release now is reasonable.
I'll migrate issues from the [0.13 milestone](https://github.com/mne-tools/mne-bids/milestone/14) to a new 0.14 milestone.
Please comment here if you need some particular thing to be fixed before the release.
cc @agramfort @hoechenberger @larsoner
</issue>
<code>
[start of mne_bids/__init__.py]
1 """MNE software for easily interacting with BIDS compatible datasets."""
2
3 __version__ = "0.13.dev0"
4 from mne_bids import commands
5 from mne_bids.report import make_report
6 from mne_bids.path import (
7 BIDSPath,
8 get_datatypes,
9 get_entity_vals,
10 print_dir_tree,
11 get_entities_from_fname,
12 search_folder_for_text,
13 get_bids_path_from_fname,
14 find_matching_paths,
15 )
16 from mne_bids.read import get_head_mri_trans, read_raw_bids
17 from mne_bids.utils import get_anonymization_daysback
18 from mne_bids.write import (
19 make_dataset_description,
20 write_anat,
21 write_raw_bids,
22 mark_channels,
23 write_meg_calibration,
24 write_meg_crosstalk,
25 get_anat_landmarks,
26 anonymize_dataset,
27 )
28 from mne_bids.sidecar_updates import update_sidecar_json, update_anat_landmarks
29 from mne_bids.inspect import inspect_dataset
30 from mne_bids.dig import (
31 template_to_head,
32 convert_montage_to_ras,
33 convert_montage_to_mri,
34 )
35
[end of mne_bids/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mne_bids/__init__.py b/mne_bids/__init__.py
--- a/mne_bids/__init__.py
+++ b/mne_bids/__init__.py
@@ -1,6 +1,6 @@
"""MNE software for easily interacting with BIDS compatible datasets."""
-__version__ = "0.13.dev0"
+__version__ = "0.13"
from mne_bids import commands
from mne_bids.report import make_report
from mne_bids.path import (
| {"golden_diff": "diff --git a/mne_bids/__init__.py b/mne_bids/__init__.py\n--- a/mne_bids/__init__.py\n+++ b/mne_bids/__init__.py\n@@ -1,6 +1,6 @@\n \"\"\"MNE software for easily interacting with BIDS compatible datasets.\"\"\"\n \n-__version__ = \"0.13.dev0\"\n+__version__ = \"0.13\"\n from mne_bids import commands\n from mne_bids.report import make_report\n from mne_bids.path import (\n", "issue": "MNE-BIDS 0.13 release\nA release of MNE-BIDS has been requested: https://mne.discourse.group/t/mne-bids-0-13-release-date/7291/2\r\n\r\nOur last release has been in December 2022, so I feel like cutting a release now is reasonable.\r\n\r\nI'll migrate issues from the [0.13 milestone](https://github.com/mne-tools/mne-bids/milestone/14) to a new 0.14 milestone.\r\n\r\nPlease comment here if you need some particular thing to be fixed before the release.\r\n\r\ncc @agramfort @hoechenberger @larsoner \n", "before_files": [{"content": "\"\"\"MNE software for easily interacting with BIDS compatible datasets.\"\"\"\n\n__version__ = \"0.13.dev0\"\nfrom mne_bids import commands\nfrom mne_bids.report import make_report\nfrom mne_bids.path import (\n BIDSPath,\n get_datatypes,\n get_entity_vals,\n print_dir_tree,\n get_entities_from_fname,\n search_folder_for_text,\n get_bids_path_from_fname,\n find_matching_paths,\n)\nfrom mne_bids.read import get_head_mri_trans, read_raw_bids\nfrom mne_bids.utils import get_anonymization_daysback\nfrom mne_bids.write import (\n make_dataset_description,\n write_anat,\n write_raw_bids,\n mark_channels,\n write_meg_calibration,\n write_meg_crosstalk,\n get_anat_landmarks,\n anonymize_dataset,\n)\nfrom mne_bids.sidecar_updates import update_sidecar_json, update_anat_landmarks\nfrom mne_bids.inspect import inspect_dataset\nfrom mne_bids.dig import (\n template_to_head,\n convert_montage_to_ras,\n convert_montage_to_mri,\n)\n", "path": "mne_bids/__init__.py"}]} | 992 | 118 |
gh_patches_debug_36782 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-2006 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cfn-lint 0.49.1 does not catch `/` as an invalid character in a Mapping element name
*cfn-lint version: cfn-lint 0.49.1*
*cfn-lint did not catch `/` as an invalid character in a Mapping element name*
cfn-lint passed successfully with this mapping included in the template:
```yaml
Mappings:
NameServers:
10.90.0.0/16:
NameServer1: 10.90.0.10
NameServer2: 10.90.4.10
10.91.0.0/16:
NameServer1: 10.91.0.10
NameServer2: 10.91.4.10
```
However AWS rejected it:
> Template format error: Mappings element name '10.93.0.0/16' must be non-empty and can contain only alphanumerics, '-' or '.'
</issue>
<code>
[start of src/cfnlint/rules/mappings/KeyName.py]
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import re
6 import six
7 from cfnlint.rules import CloudFormationLintRule
8 from cfnlint.rules import RuleMatch
9 from cfnlint.helpers import REGEX_ALPHANUMERIC
10
11
12 class KeyName(CloudFormationLintRule):
13 """Check if Mapping Keys are type string"""
14 id = 'E7003'
15 shortdesc = 'Mapping keys are strings and alphanumeric'
16 description = 'Check if Mappings keys are properly typed as strings and alphanumeric'
17 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html'
18 tags = ['mappings']
19
20 def check_key(self, key, path, check_alphanumeric=True):
21 """ Check the key name for string and alphanumeric"""
22 matches = []
23 if not isinstance(key, six.string_types):
24 message = 'Mapping key ({0}) has to be a string.'
25 matches.append(RuleMatch(path[:], message.format(key)))
26 elif not re.match(REGEX_ALPHANUMERIC, key) and check_alphanumeric:
27 message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric.'
28 matches.append(RuleMatch(path[:], message.format(key)))
29
30 return matches
31
32 def match(self, cfn):
33 matches = []
34
35 mappings = cfn.template.get('Mappings', {})
36 for mapping_name, mapping_value in mappings.items():
37 if isinstance(mapping_value, dict):
38 for key_name, key_value in mapping_value.items():
39 matches.extend(self.check_key(
40 key_name, ['Mappings', mapping_name, key_name], False))
41 if isinstance(key_value, dict):
42 for sub_key_name, _ in key_value.items():
43 matches.extend(
44 self.check_key(
45 sub_key_name, ['Mappings', mapping_name, key_name, sub_key_name]))
46
47 return matches
48
[end of src/cfnlint/rules/mappings/KeyName.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/mappings/KeyName.py b/src/cfnlint/rules/mappings/KeyName.py
--- a/src/cfnlint/rules/mappings/KeyName.py
+++ b/src/cfnlint/rules/mappings/KeyName.py
@@ -17,14 +17,26 @@
source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html'
tags = ['mappings']
- def check_key(self, key, path, check_alphanumeric=True):
+ def check_attribute(self, key, path):
+ """ Check the key name for string and alphanumeric"""
+ matches = []
+ if not isinstance(key, six.string_types):
+ message = 'Mapping attribute ({0}) has to be a string.'
+ matches.append(RuleMatch(path[:], message.format(key)))
+ elif not re.match(REGEX_ALPHANUMERIC, key):
+ message = 'Mapping attribute ({0}) has invalid name. Name has to be alphanumeric.'
+ matches.append(RuleMatch(path[:], message.format(key)))
+
+ return matches
+
+ def check_key(self, key, path):
""" Check the key name for string and alphanumeric"""
matches = []
if not isinstance(key, six.string_types):
message = 'Mapping key ({0}) has to be a string.'
matches.append(RuleMatch(path[:], message.format(key)))
- elif not re.match(REGEX_ALPHANUMERIC, key) and check_alphanumeric:
- message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric.'
+ elif not re.match('^[a-zA-Z0-9.-]{1,255}$', key):
+ message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric, \'-\' or \'.\''
matches.append(RuleMatch(path[:], message.format(key)))
return matches
@@ -37,11 +49,11 @@
if isinstance(mapping_value, dict):
for key_name, key_value in mapping_value.items():
matches.extend(self.check_key(
- key_name, ['Mappings', mapping_name, key_name], False))
+ key_name, ['Mappings', mapping_name, key_name]))
if isinstance(key_value, dict):
for sub_key_name, _ in key_value.items():
matches.extend(
- self.check_key(
+ self.check_attribute(
sub_key_name, ['Mappings', mapping_name, key_name, sub_key_name]))
return matches
| {"golden_diff": "diff --git a/src/cfnlint/rules/mappings/KeyName.py b/src/cfnlint/rules/mappings/KeyName.py\n--- a/src/cfnlint/rules/mappings/KeyName.py\n+++ b/src/cfnlint/rules/mappings/KeyName.py\n@@ -17,14 +17,26 @@\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html'\n tags = ['mappings']\n \n- def check_key(self, key, path, check_alphanumeric=True):\n+ def check_attribute(self, key, path):\n+ \"\"\" Check the key name for string and alphanumeric\"\"\"\n+ matches = []\n+ if not isinstance(key, six.string_types):\n+ message = 'Mapping attribute ({0}) has to be a string.'\n+ matches.append(RuleMatch(path[:], message.format(key)))\n+ elif not re.match(REGEX_ALPHANUMERIC, key):\n+ message = 'Mapping attribute ({0}) has invalid name. Name has to be alphanumeric.'\n+ matches.append(RuleMatch(path[:], message.format(key)))\n+\n+ return matches\n+\n+ def check_key(self, key, path):\n \"\"\" Check the key name for string and alphanumeric\"\"\"\n matches = []\n if not isinstance(key, six.string_types):\n message = 'Mapping key ({0}) has to be a string.'\n matches.append(RuleMatch(path[:], message.format(key)))\n- elif not re.match(REGEX_ALPHANUMERIC, key) and check_alphanumeric:\n- message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric.'\n+ elif not re.match('^[a-zA-Z0-9.-]{1,255}$', key):\n+ message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric, \\'-\\' or \\'.\\''\n matches.append(RuleMatch(path[:], message.format(key)))\n \n return matches\n@@ -37,11 +49,11 @@\n if isinstance(mapping_value, dict):\n for key_name, key_value in mapping_value.items():\n matches.extend(self.check_key(\n- key_name, ['Mappings', mapping_name, key_name], False))\n+ key_name, ['Mappings', mapping_name, key_name]))\n if isinstance(key_value, dict):\n for sub_key_name, _ in key_value.items():\n matches.extend(\n- self.check_key(\n+ self.check_attribute(\n sub_key_name, ['Mappings', mapping_name, key_name, sub_key_name]))\n \n return matches\n", "issue": "cfn-lint 0.49.1 does not catch `/` as an invalid character in a Mapping element name\n*cfn-lint version: cfn-lint 0.49.1*\r\n\r\n*cfn-lint did not catch `/` as an invalid character in a Mapping element name*\r\n\r\ncfn-lint passed successfully with this mapping included in the template:\r\n```yaml\r\nMappings:\r\n NameServers:\r\n 10.90.0.0/16:\r\n NameServer1: 10.90.0.10\r\n NameServer2: 10.90.4.10\r\n 10.91.0.0/16:\r\n NameServer1: 10.91.0.10\r\n NameServer2: 10.91.4.10\r\n```\r\n\r\nHowever AWS rejected it:\r\n> Template format error: Mappings element name '10.93.0.0/16' must be non-empty and can contain only alphanumerics, '-' or '.'\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport re\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\nfrom cfnlint.helpers import REGEX_ALPHANUMERIC\n\n\nclass KeyName(CloudFormationLintRule):\n \"\"\"Check if Mapping Keys are type string\"\"\"\n id = 'E7003'\n shortdesc = 'Mapping keys are strings and alphanumeric'\n description = 'Check if Mappings keys are properly typed as strings and alphanumeric'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html'\n tags = ['mappings']\n\n def check_key(self, key, path, check_alphanumeric=True):\n \"\"\" Check the key name for string and alphanumeric\"\"\"\n matches = []\n if not isinstance(key, six.string_types):\n message = 'Mapping key ({0}) has to be a string.'\n matches.append(RuleMatch(path[:], message.format(key)))\n elif not re.match(REGEX_ALPHANUMERIC, key) and check_alphanumeric:\n message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric.'\n matches.append(RuleMatch(path[:], message.format(key)))\n\n return matches\n\n def match(self, cfn):\n matches = []\n\n mappings = cfn.template.get('Mappings', {})\n for mapping_name, mapping_value in mappings.items():\n if isinstance(mapping_value, dict):\n for key_name, key_value in mapping_value.items():\n matches.extend(self.check_key(\n key_name, ['Mappings', mapping_name, key_name], False))\n if isinstance(key_value, dict):\n for sub_key_name, _ in key_value.items():\n matches.extend(\n self.check_key(\n sub_key_name, ['Mappings', mapping_name, key_name, sub_key_name]))\n\n return matches\n", "path": "src/cfnlint/rules/mappings/KeyName.py"}]} | 1,363 | 547 |
gh_patches_debug_8668 | rasdani/github-patches | git_diff | wright-group__WrightTools-1132 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
shift supported Python 3 versions
Since users are increasingly relying on 3.10 and 3.11, I propose we move testing from 3.7-9 to 3.8-11.
</issue>
<code>
[start of setup.py]
1 #! /usr/bin/env python3
2
3 import os
4 from setuptools import setup, find_packages
5
6
7 here = os.path.abspath(os.path.dirname(__file__))
8
9
10 def read(fname):
11 with open(os.path.join(here, fname)) as f:
12 return f.read()
13
14
15 extra_files = {
16 "WrightTools": [
17 "datasets",
18 "datasets/*",
19 "datasets/*/*",
20 "datasets/*/*/*",
21 "datasets/*/*/*/*",
22 "CITATION",
23 "VERSION",
24 "WT5_VERSION",
25 ]
26 }
27
28 with open(os.path.join(here, "WrightTools", "VERSION")) as version_file:
29 version = version_file.read().strip()
30
31 docs_require = ["sphinx", "sphinx-gallery==0.8.2", "sphinx-rtd-theme"]
32
33 setup(
34 name="WrightTools",
35 packages=find_packages(exclude=("tests", "tests.*")),
36 package_data=extra_files,
37 python_requires=">=3.7",
38 install_requires=[
39 "h5py",
40 "imageio",
41 "matplotlib>=3.4.0",
42 "numexpr",
43 "numpy>=1.15.0",
44 "pint",
45 "python-dateutil",
46 "scipy",
47 "tidy_headers>=1.0.0",
48 ],
49 extras_require={
50 "docs": docs_require,
51 "dev": [
52 "black",
53 "pre-commit",
54 "pydocstyle",
55 "pytest",
56 "pytest-cov",
57 "databroker>=1.2",
58 "msgpack",
59 ]
60 + docs_require,
61 },
62 version=version,
63 description="Tools for loading, processing, and plotting multidimensional spectroscopy data.",
64 long_description=read("README.rst"),
65 author="WrightTools Developers",
66 license="MIT",
67 url="http://wright.tools",
68 keywords="spectroscopy science multidimensional visualization",
69 entry_points={"console_scripts": ["wt-tree=WrightTools.__main__:wt_tree"]},
70 classifiers=[
71 "Development Status :: 5 - Production/Stable",
72 "Intended Audience :: Science/Research",
73 "License :: OSI Approved :: MIT License",
74 "Framework :: Matplotlib",
75 "Natural Language :: English",
76 "Programming Language :: Python :: 3",
77 "Programming Language :: Python :: 3.7",
78 "Programming Language :: Python :: 3.8",
79 "Programming Language :: Python :: 3.9",
80 "Topic :: Scientific/Engineering",
81 ],
82 )
83
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -74,9 +74,10 @@
"Framework :: Matplotlib",
"Natural Language :: English",
"Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
"Topic :: Scientific/Engineering",
],
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -74,9 +74,10 @@\n \"Framework :: Matplotlib\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n- \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n+ \"Programming Language :: Python :: 3.10\",\n+ \"Programming Language :: Python :: 3.11\",\n \"Topic :: Scientific/Engineering\",\n ],\n )\n", "issue": "shift supported Python 3 versions\nSince users are increasingly relying on 3.10 and 3.11, I propose we move testing from 3.7-9 to 3.8-11.\r\n\n", "before_files": [{"content": "#! /usr/bin/env python3\n\nimport os\nfrom setuptools import setup, find_packages\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(fname):\n with open(os.path.join(here, fname)) as f:\n return f.read()\n\n\nextra_files = {\n \"WrightTools\": [\n \"datasets\",\n \"datasets/*\",\n \"datasets/*/*\",\n \"datasets/*/*/*\",\n \"datasets/*/*/*/*\",\n \"CITATION\",\n \"VERSION\",\n \"WT5_VERSION\",\n ]\n}\n\nwith open(os.path.join(here, \"WrightTools\", \"VERSION\")) as version_file:\n version = version_file.read().strip()\n\ndocs_require = [\"sphinx\", \"sphinx-gallery==0.8.2\", \"sphinx-rtd-theme\"]\n\nsetup(\n name=\"WrightTools\",\n packages=find_packages(exclude=(\"tests\", \"tests.*\")),\n package_data=extra_files,\n python_requires=\">=3.7\",\n install_requires=[\n \"h5py\",\n \"imageio\",\n \"matplotlib>=3.4.0\",\n \"numexpr\",\n \"numpy>=1.15.0\",\n \"pint\",\n \"python-dateutil\",\n \"scipy\",\n \"tidy_headers>=1.0.0\",\n ],\n extras_require={\n \"docs\": docs_require,\n \"dev\": [\n \"black\",\n \"pre-commit\",\n \"pydocstyle\",\n \"pytest\",\n \"pytest-cov\",\n \"databroker>=1.2\",\n \"msgpack\",\n ]\n + docs_require,\n },\n version=version,\n description=\"Tools for loading, processing, and plotting multidimensional spectroscopy data.\",\n long_description=read(\"README.rst\"),\n author=\"WrightTools Developers\",\n license=\"MIT\",\n url=\"http://wright.tools\",\n keywords=\"spectroscopy science multidimensional visualization\",\n entry_points={\"console_scripts\": [\"wt-tree=WrightTools.__main__:wt_tree\"]},\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Framework :: Matplotlib\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Scientific/Engineering\",\n ],\n)\n", "path": "setup.py"}]} | 1,275 | 133 |
gh_patches_debug_44771 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1140 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
W2030 Default value required on conditionally included property
*cfn-lint version: 0.21.3*
CloudFormation provides the AWS::NoValue pseudo-parameter, which allows for a property to be included based on a given Condition. However, cfn-lint will still validate the potential value provided for the property, even if it will not actually be used in the deployment.
Example template:
```yaml
Parameters:
Retention:
Type: Number
Description: Retention in days for the log group (-1 for no retention)
Default: -1
Conditions:
IsRetention:
!Not [!Equals [!Ref 'Retention', '-1']]
Resources:
LogGroup:
Type: AWS::Logs::LogGroup
Properties:
LogGroupName: 'some-log-group'
RetentionInDays: !If [IsRetention, !Ref Retention, !Ref 'AWS::NoValue']
```
This template allows the user to specify the retention on a log group, or use the number -1 if they wish to have unlimited retention. This is achieved via a Condition as well as an If block that conditionally includes the property.
This leads to the following linter output:
```
cfn-lint --template template.yaml
W2030 You must specify a valid Default value for Retention (-1).
Valid values are ['1', '3', '5', '7', '14', '30', '60', '90', '120', '150', '180', '365', '400', '545', '731', '1827', '3653']
cloudformation/template.yaml:5:5
```
This can of course be avoided by disabling this specific check in the template Metadata block. Unfortunately it cannot be disabled in the resource Metadata, as the validation error happens on the Parameter:
```yaml
Metadata:
cfn-lint:
config:
ignore_checks:
- W2030
```
This might be a difficult situation to account for, since it would require the Condition to be evaluated to determine whether the property itself should even be checked.
</issue>
<code>
[start of src/cfnlint/rules/parameters/AllowedValue.py]
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 import six
18 from cfnlint.rules import CloudFormationLintRule
19 from cfnlint.rules import RuleMatch
20
21 from cfnlint.helpers import RESOURCE_SPECS
22
23
24 class AllowedValue(CloudFormationLintRule):
25 """Check if parameters have a valid value"""
26 id = 'W2030'
27 shortdesc = 'Check if parameters have a valid value'
28 description = 'Check if parameters have a valid value in case of an enumator. The Parameter''s allowed values is based on the usages in property (Ref)'
29 source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'
30 tags = ['resources', 'property', 'allowed value']
31
32 def initialize(self, cfn):
33 """Initialize the rule"""
34 for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):
35 self.resource_property_types.append(resource_type_spec)
36 for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):
37 self.resource_sub_property_types.append(property_type_spec)
38
39 def check_value_ref(self, value, path, **kwargs):
40 """Check Ref"""
41 matches = []
42
43 if 'Fn::If' in path:
44 self.logger.debug('Not able to guarentee that the default value hasn\'t been conditioned out')
45 return matches
46
47 allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})
48 cfn = kwargs.get('cfn')
49
50 if allowed_value_specs:
51 if value in cfn.template.get('Parameters', {}):
52 param = cfn.template.get('Parameters').get(value, {})
53 parameter_values = param.get('AllowedValues')
54 default_value = param.get('Default')
55 parameter_type = param.get('Type')
56 if isinstance(parameter_type, six.string_types):
57 if ((not parameter_type.startswith('List<')) and
58 (not parameter_type.startswith('AWS::SSM::Parameter::Value<')) and
59 parameter_type not in ['CommaDelimitedList', 'List<String>']):
60 # Check Allowed Values
61 if parameter_values:
62 for index, allowed_value in enumerate(parameter_values):
63 if str(allowed_value) not in allowed_value_specs:
64 param_path = ['Parameters', value, 'AllowedValues', index]
65 message = 'You must specify a valid allowed value for {0} ({1}).\nValid values are {2}'
66 matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))
67 if default_value:
68 # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)
69 if str(default_value) not in allowed_value_specs:
70 param_path = ['Parameters', value, 'Default']
71 message = 'You must specify a valid Default value for {0} ({1}).\nValid values are {2}'
72 matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))
73
74 return matches
75
76 def check(self, cfn, properties, value_specs, property_specs, path):
77 """Check itself"""
78 matches = list()
79 for p_value, p_path in properties.items_safe(path[:]):
80 for prop in p_value:
81 if prop in value_specs:
82 value = value_specs.get(prop).get('Value', {})
83 if value:
84 value_type = value.get('ValueType', '')
85 property_type = property_specs.get('Properties').get(prop).get('Type')
86 matches.extend(
87 cfn.check_value(
88 p_value, prop, p_path,
89 check_ref=self.check_value_ref,
90 value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get('ValueTypes').get(value_type, {}),
91 cfn=cfn, property_type=property_type, property_name=prop
92 )
93 )
94
95 return matches
96
97 def match_resource_sub_properties(self, properties, property_type, path, cfn):
98 """Match for sub properties"""
99 matches = list()
100
101 specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type, {}).get('Properties', {})
102 property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)
103 matches.extend(self.check(cfn, properties, specs, property_specs, path))
104
105 return matches
106
107 def match_resource_properties(self, properties, resource_type, path, cfn):
108 """Check CloudFormation Properties"""
109 matches = list()
110
111 specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type, {}).get('Properties', {})
112 resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)
113 matches.extend(self.check(cfn, properties, specs, resource_specs, path))
114
115 return matches
116
[end of src/cfnlint/rules/parameters/AllowedValue.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/parameters/AllowedValue.py b/src/cfnlint/rules/parameters/AllowedValue.py
--- a/src/cfnlint/rules/parameters/AllowedValue.py
+++ b/src/cfnlint/rules/parameters/AllowedValue.py
@@ -40,12 +40,19 @@
"""Check Ref"""
matches = []
+ cfn = kwargs.get('cfn')
if 'Fn::If' in path:
- self.logger.debug('Not able to guarentee that the default value hasn\'t been conditioned out')
+ self.logger.debug(
+ 'Not able to guarentee that the default value hasn\'t been conditioned out')
+ return matches
+ if path[0] == 'Resources' and 'Condition' in cfn.template.get(
+ path[0], {}).get(path[1]):
+ self.logger.debug(
+ 'Not able to guarentee that the default value '
+ 'hasn\'t been conditioned out')
return matches
allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})
- cfn = kwargs.get('cfn')
if allowed_value_specs:
if value in cfn.template.get('Parameters', {}):
@@ -63,13 +70,15 @@
if str(allowed_value) not in allowed_value_specs:
param_path = ['Parameters', value, 'AllowedValues', index]
message = 'You must specify a valid allowed value for {0} ({1}).\nValid values are {2}'
- matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))
+ matches.append(RuleMatch(param_path, message.format(
+ value, allowed_value, allowed_value_specs)))
if default_value:
# Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)
if str(default_value) not in allowed_value_specs:
param_path = ['Parameters', value, 'Default']
message = 'You must specify a valid Default value for {0} ({1}).\nValid values are {2}'
- matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))
+ matches.append(RuleMatch(param_path, message.format(
+ value, default_value, allowed_value_specs)))
return matches
@@ -87,7 +96,8 @@
cfn.check_value(
p_value, prop, p_path,
check_ref=self.check_value_ref,
- value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get('ValueTypes').get(value_type, {}),
+ value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get(
+ 'ValueTypes').get(value_type, {}),
cfn=cfn, property_type=property_type, property_name=prop
)
)
@@ -98,7 +108,8 @@
"""Match for sub properties"""
matches = list()
- specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type, {}).get('Properties', {})
+ specs = RESOURCE_SPECS.get(cfn.regions[0]).get(
+ 'PropertyTypes').get(property_type, {}).get('Properties', {})
property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)
matches.extend(self.check(cfn, properties, specs, property_specs, path))
@@ -108,7 +119,8 @@
"""Check CloudFormation Properties"""
matches = list()
- specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type, {}).get('Properties', {})
+ specs = RESOURCE_SPECS.get(cfn.regions[0]).get(
+ 'ResourceTypes').get(resource_type, {}).get('Properties', {})
resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)
matches.extend(self.check(cfn, properties, specs, resource_specs, path))
| {"golden_diff": "diff --git a/src/cfnlint/rules/parameters/AllowedValue.py b/src/cfnlint/rules/parameters/AllowedValue.py\n--- a/src/cfnlint/rules/parameters/AllowedValue.py\n+++ b/src/cfnlint/rules/parameters/AllowedValue.py\n@@ -40,12 +40,19 @@\n \"\"\"Check Ref\"\"\"\n matches = []\n \n+ cfn = kwargs.get('cfn')\n if 'Fn::If' in path:\n- self.logger.debug('Not able to guarentee that the default value hasn\\'t been conditioned out')\n+ self.logger.debug(\n+ 'Not able to guarentee that the default value hasn\\'t been conditioned out')\n+ return matches\n+ if path[0] == 'Resources' and 'Condition' in cfn.template.get(\n+ path[0], {}).get(path[1]):\n+ self.logger.debug(\n+ 'Not able to guarentee that the default value '\n+ 'hasn\\'t been conditioned out')\n return matches\n \n allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})\n- cfn = kwargs.get('cfn')\n \n if allowed_value_specs:\n if value in cfn.template.get('Parameters', {}):\n@@ -63,13 +70,15 @@\n if str(allowed_value) not in allowed_value_specs:\n param_path = ['Parameters', value, 'AllowedValues', index]\n message = 'You must specify a valid allowed value for {0} ({1}).\\nValid values are {2}'\n- matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))\n+ matches.append(RuleMatch(param_path, message.format(\n+ value, allowed_value, allowed_value_specs)))\n if default_value:\n # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)\n if str(default_value) not in allowed_value_specs:\n param_path = ['Parameters', value, 'Default']\n message = 'You must specify a valid Default value for {0} ({1}).\\nValid values are {2}'\n- matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))\n+ matches.append(RuleMatch(param_path, message.format(\n+ value, default_value, allowed_value_specs)))\n \n return matches\n \n@@ -87,7 +96,8 @@\n cfn.check_value(\n p_value, prop, p_path,\n check_ref=self.check_value_ref,\n- value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get('ValueTypes').get(value_type, {}),\n+ value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get(\n+ 'ValueTypes').get(value_type, {}),\n cfn=cfn, property_type=property_type, property_name=prop\n )\n )\n@@ -98,7 +108,8 @@\n \"\"\"Match for sub properties\"\"\"\n matches = list()\n \n- specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type, {}).get('Properties', {})\n+ specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n+ 'PropertyTypes').get(property_type, {}).get('Properties', {})\n property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)\n matches.extend(self.check(cfn, properties, specs, property_specs, path))\n \n@@ -108,7 +119,8 @@\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = list()\n \n- specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type, {}).get('Properties', {})\n+ specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n+ 'ResourceTypes').get(resource_type, {}).get('Properties', {})\n resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)\n matches.extend(self.check(cfn, properties, specs, resource_specs, path))\n", "issue": "W2030 Default value required on conditionally included property\n*cfn-lint version: 0.21.3*\r\n\r\nCloudFormation provides the AWS::NoValue pseudo-parameter, which allows for a property to be included based on a given Condition. 
However, cfn-lint will still validate the potential value provided for the property, even if it will not actually be used in the deployment.\r\n\r\nExample template:\r\n\r\n```yaml\r\nParameters:\r\n Retention:\r\n Type: Number\r\n Description: Retention in days for the log group (-1 for no retention)\r\n Default: -1\r\nConditions:\r\n IsRetention: \r\n !Not [!Equals [!Ref 'Retention', '-1']]\r\nResources:\r\n LogGroup:\r\n Type: AWS::Logs::LogGroup\r\n Properties:\r\n LogGroupName: 'some-log-group'\r\n RetentionInDays: !If [IsRetention, !Ref Retention, !Ref 'AWS::NoValue']\r\n```\r\n\r\nThis template allows the user to specify the retention on a log group, or use the number -1 if they wish to have unlimited retention. This is achieved via a Condition as well as an If block that conditionally includes the property.\r\n\r\nThis leads to the following linter output:\r\n\r\n```\r\ncfn-lint --template template.yaml\r\nW2030 You must specify a valid Default value for Retention (-1). \r\nValid values are ['1', '3', '5', '7', '14', '30', '60', '90', '120', '150', '180', '365', '400', '545', '731', '1827', '3653']\r\ncloudformation/template.yaml:5:5\r\n```\r\n\r\nThis can of course be avoided by disabling this specific check in the template Metadata block. Unfortunately it cannot be disabled in the resource Metadata, as the validation error happens on the Parameter:\r\n\r\n```yaml\r\nMetadata:\r\n cfn-lint:\r\n config:\r\n ignore_checks:\r\n - W2030\r\n```\r\n\r\nThis might be a difficult situation to account for, since it would require the Condition to be evaluated to determine whether the property itself should even be checked.\n", "before_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\nfrom cfnlint.helpers import RESOURCE_SPECS\n\n\nclass AllowedValue(CloudFormationLintRule):\n \"\"\"Check if parameters have a valid value\"\"\"\n id = 'W2030'\n shortdesc = 'Check if parameters have a valid value'\n description = 'Check if parameters have a valid value in case of an enumator. 
The Parameter''s allowed values is based on the usages in property (Ref)'\n source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'\n tags = ['resources', 'property', 'allowed value']\n\n def initialize(self, cfn):\n \"\"\"Initialize the rule\"\"\"\n for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):\n self.resource_sub_property_types.append(property_type_spec)\n\n def check_value_ref(self, value, path, **kwargs):\n \"\"\"Check Ref\"\"\"\n matches = []\n\n if 'Fn::If' in path:\n self.logger.debug('Not able to guarentee that the default value hasn\\'t been conditioned out')\n return matches\n\n allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})\n cfn = kwargs.get('cfn')\n\n if allowed_value_specs:\n if value in cfn.template.get('Parameters', {}):\n param = cfn.template.get('Parameters').get(value, {})\n parameter_values = param.get('AllowedValues')\n default_value = param.get('Default')\n parameter_type = param.get('Type')\n if isinstance(parameter_type, six.string_types):\n if ((not parameter_type.startswith('List<')) and\n (not parameter_type.startswith('AWS::SSM::Parameter::Value<')) and\n parameter_type not in ['CommaDelimitedList', 'List<String>']):\n # Check Allowed Values\n if parameter_values:\n for index, allowed_value in enumerate(parameter_values):\n if str(allowed_value) not in allowed_value_specs:\n param_path = ['Parameters', value, 'AllowedValues', index]\n message = 'You must specify a valid allowed value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))\n if default_value:\n # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)\n if str(default_value) not in allowed_value_specs:\n param_path = ['Parameters', value, 'Default']\n message = 'You must specify a valid Default value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))\n\n return matches\n\n def check(self, cfn, properties, value_specs, property_specs, path):\n \"\"\"Check itself\"\"\"\n matches = list()\n for p_value, p_path in properties.items_safe(path[:]):\n for prop in p_value:\n if prop in value_specs:\n value = value_specs.get(prop).get('Value', {})\n if value:\n value_type = value.get('ValueType', '')\n property_type = property_specs.get('Properties').get(prop).get('Type')\n matches.extend(\n cfn.check_value(\n p_value, prop, p_path,\n check_ref=self.check_value_ref,\n value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get('ValueTypes').get(value_type, {}),\n cfn=cfn, property_type=property_type, property_name=prop\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type, {}).get('Properties', {})\n property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)\n matches.extend(self.check(cfn, properties, specs, property_specs, path))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = list()\n\n specs 
= RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type, {}).get('Properties', {})\n resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)\n matches.extend(self.check(cfn, properties, specs, resource_specs, path))\n\n return matches\n", "path": "src/cfnlint/rules/parameters/AllowedValue.py"}]} | 2,505 | 889 |
gh_patches_debug_63372 | rasdani/github-patches | git_diff | mkdocs__mkdocs-2481 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
gh_deploy doesn't work when a config file is supplied
```
$ mkdocs gh-deploy --force --config-file mkdocs-editable.yml
...
Traceback (most recent call last):
File "/usr/local/bin/mkdocs", line 8, in <module>
sys.exit(cli())
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1137, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1062, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1668, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/mkdocs/__main__.py", line 205, in gh_deploy_command
gh_deploy.gh_deploy(cfg, message=message, force=force, ignore_version=ignore_version, shell=shell)
File "/usr/local/lib/python3.9/site-packages/mkdocs/commands/gh_deploy.py", line 102, in gh_deploy
sha = _get_current_sha(os.path.dirname(config.config_file_path))
File "/usr/local/lib/python3.9/site-packages/mkdocs/commands/gh_deploy.py", line 32, in _get_current_sha
proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], cwd=repo_path,
File "/usr/local/Cellar/[email protected]/3.9.4/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/local/Cellar/[email protected]/3.9.4/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py", line 1821, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: ''
```
The issue is that in `gh_deploy.py`, the call `sha = _get_current_sha(os.path.dirname(config.config_file_path))` ends up receiving an empty string, because `os.path.dirname` returns `''` when a relative config file path is passed in; `subprocess.Popen` then fails on `cwd=''`.
Workaround: `--config-file $(pwd)/mkdocs-editable.yml`
</issue>
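Editorial note: as a quick illustration of the failure mode described above (the file name is taken from the issue; any relative path with no directory component behaves the same), the sketch below shows why an empty string reaches `subprocess.Popen`, and why the `or None` guard used in the patch further down in this record avoids it:

```python
import os
import subprocess

# A bare relative config-file name has no directory component,
# so dirname() yields '' rather than '.'
repo_path = os.path.dirname("mkdocs-editable.yml")
print(repr(repo_path))  # ''

# Popen tries to change into cwd='' and fails, matching the traceback above
try:
    subprocess.Popen(
        ["git", "rev-parse", "--short", "HEAD"],
        cwd=repo_path,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
except FileNotFoundError as exc:
    print(exc)  # [Errno 2] No such file or directory: ''

# cwd=None means "inherit the current working directory", so mapping any
# falsy path to None degrades gracefully (this mirrors the fix shown below)
proc = subprocess.Popen(
    ["git", "rev-parse", "--short", "HEAD"],
    cwd=repo_path or None,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
proc.communicate()
```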
<code>
[start of mkdocs/commands/gh_deploy.py]
1 import logging
2 import subprocess
3 import os
4 import re
5 from packaging import version
6
7 import mkdocs
8 import ghp_import
9 from mkdocs.exceptions import Abort
10
11 log = logging.getLogger(__name__)
12
13 default_message = """Deployed {sha} with MkDocs version: {version}"""
14
15
16 def _is_cwd_git_repo():
17 try:
18 proc = subprocess.Popen(
19 ['git', 'rev-parse', '--is-inside-work-tree'],
20 stdout=subprocess.PIPE,
21 stderr=subprocess.PIPE
22 )
23 except FileNotFoundError:
24 log.error("Could not find git - is it installed and on your path?")
25 raise Abort('Deployment Aborted!')
26 proc.communicate()
27 return proc.wait() == 0
28
29
30 def _get_current_sha(repo_path):
31
32 proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], cwd=repo_path,
33 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
34
35 stdout, _ = proc.communicate()
36 sha = stdout.decode('utf-8').strip()
37 return sha
38
39
40 def _get_remote_url(remote_name):
41
42 # No CNAME found. We will use the origin URL to determine the GitHub
43 # pages location.
44 remote = f"remote.{remote_name}.url"
45 proc = subprocess.Popen(["git", "config", "--get", remote],
46 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
47
48 stdout, _ = proc.communicate()
49 url = stdout.decode('utf-8').strip()
50
51 host = None
52 path = None
53 if 'github.com/' in url:
54 host, path = url.split('github.com/', 1)
55 elif 'github.com:' in url:
56 host, path = url.split('github.com:', 1)
57
58 return host, path
59
60
61 def _check_version(branch):
62
63 proc = subprocess.Popen(['git', 'show', '-s', '--format=%s', f'refs/heads/{branch}'],
64 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
65
66 stdout, _ = proc.communicate()
67 msg = stdout.decode('utf-8').strip()
68 m = re.search(r'\d+(\.\d+)+((a|b|rc)\d+)?(\.post\d+)?(\.dev\d+)?', msg, re.X | re.I)
69 previousv = version.parse(m.group()) if m else None
70 currentv = version.parse(mkdocs.__version__)
71 if not previousv:
72 log.warning('Version check skipped: No version specified in previous deployment.')
73 elif currentv > previousv:
74 log.info(
75 f'Previous deployment was done with MkDocs version {previousv}; '
76 f'you are deploying with a newer version ({currentv})'
77 )
78 elif currentv < previousv:
79 log.error(
80 f'Deployment terminated: Previous deployment was made with MkDocs version {previousv}; '
81 f'you are attempting to deploy with an older version ({currentv}). Use --ignore-version '
82 'to deploy anyway.'
83 )
84 raise Abort('Deployment Aborted!')
85
86
87 def gh_deploy(config, message=None, force=False, ignore_version=False, shell=False):
88
89 if not _is_cwd_git_repo():
90 log.error('Cannot deploy - this directory does not appear to be a git '
91 'repository')
92
93 remote_branch = config['remote_branch']
94 remote_name = config['remote_name']
95
96 if not ignore_version:
97 _check_version(remote_branch)
98
99 if message is None:
100 message = default_message
101 sha = _get_current_sha(os.path.dirname(config.config_file_path))
102 message = message.format(version=mkdocs.__version__, sha=sha)
103
104 log.info("Copying '%s' to '%s' branch and pushing to GitHub.",
105 config['site_dir'], config['remote_branch'])
106
107 try:
108 ghp_import.ghp_import(
109 config['site_dir'],
110 mesg=message,
111 remote=remote_name,
112 branch=remote_branch,
113 push=True,
114 force=force,
115 use_shell=shell,
116 nojekyll=True
117 )
118 except ghp_import.GhpError as e:
119 log.error("Failed to deploy to GitHub with error: \n{}".format(e.message))
120 raise Abort('Deployment Aborted!')
121
122 cname_file = os.path.join(config['site_dir'], 'CNAME')
123 # Does this repository have a CNAME set for GitHub pages?
124 if os.path.isfile(cname_file):
125 # This GitHub pages repository has a CNAME configured.
126 with(open(cname_file, 'r')) as f:
127 cname_host = f.read().strip()
128 log.info(f'Based on your CNAME file, your documentation should be '
129 f'available shortly at: http://{cname_host}')
130 log.info('NOTE: Your DNS records must be configured appropriately for '
131 'your CNAME URL to work.')
132 return
133
134 host, path = _get_remote_url(remote_name)
135
136 if host is None:
137 # This could be a GitHub Enterprise deployment.
138 log.info('Your documentation should be available shortly.')
139 else:
140 username, repo = path.split('/', 1)
141 if repo.endswith('.git'):
142 repo = repo[:-len('.git')]
143 url = f'https://{username}.github.io/{repo}/'
144 log.info(f"Your documentation should shortly be available at: {url}")
145
[end of mkdocs/commands/gh_deploy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mkdocs/commands/gh_deploy.py b/mkdocs/commands/gh_deploy.py
--- a/mkdocs/commands/gh_deploy.py
+++ b/mkdocs/commands/gh_deploy.py
@@ -29,7 +29,7 @@
def _get_current_sha(repo_path):
- proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], cwd=repo_path,
+ proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], cwd=repo_path or None,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, _ = proc.communicate()
| {"golden_diff": "diff --git a/mkdocs/commands/gh_deploy.py b/mkdocs/commands/gh_deploy.py\n--- a/mkdocs/commands/gh_deploy.py\n+++ b/mkdocs/commands/gh_deploy.py\n@@ -29,7 +29,7 @@\n \n def _get_current_sha(repo_path):\n \n- proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], cwd=repo_path,\n+ proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], cwd=repo_path or None,\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n \n stdout, _ = proc.communicate()\n", "issue": "gh_deploy doesn't work when a config file is supplied\n```\r\n$ mkdocs gh-deploy --force --config-file mkdocs-editable.yml\r\n...\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/mkdocs\", line 8, in <module>\r\n sys.exit(cli())\r\n File \"/usr/local/lib/python3.9/site-packages/click/core.py\", line 1137, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/click/core.py\", line 1062, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python3.9/site-packages/click/core.py\", line 1668, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python3.9/site-packages/click/core.py\", line 1404, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python3.9/site-packages/click/core.py\", line 763, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/mkdocs/__main__.py\", line 205, in gh_deploy_command\r\n gh_deploy.gh_deploy(cfg, message=message, force=force, ignore_version=ignore_version, shell=shell)\r\n File \"/usr/local/lib/python3.9/site-packages/mkdocs/commands/gh_deploy.py\", line 102, in gh_deploy\r\n sha = _get_current_sha(os.path.dirname(config.config_file_path))\r\n File \"/usr/local/lib/python3.9/site-packages/mkdocs/commands/gh_deploy.py\", line 32, in _get_current_sha\r\n proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], cwd=repo_path,\r\n File \"/usr/local/Cellar/[email protected]/3.9.4/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py\", line 951, in __init__\r\n self._execute_child(args, executable, preexec_fn, close_fds,\r\n File \"/usr/local/Cellar/[email protected]/3.9.4/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py\", line 1821, in _execute_child\r\n raise child_exception_type(errno_num, err_msg, err_filename)\r\nFileNotFoundError: [Errno 2] No such file or directory: ''\r\n```\r\n\r\nThe issue is that `sha = _get_current_sha(os.path.dirname(config.config_file_path))` from `gh_deploy.py` returns an empty string for `dirname` if a relative config file path is passed in.\r\n\r\nWorkaround: `--config-file $(pwd)/mkdocs-editable.yml`\r\n\n", "before_files": [{"content": "import logging\nimport subprocess\nimport os\nimport re\nfrom packaging import version\n\nimport mkdocs\nimport ghp_import\nfrom mkdocs.exceptions import Abort\n\nlog = logging.getLogger(__name__)\n\ndefault_message = \"\"\"Deployed {sha} with MkDocs version: {version}\"\"\"\n\n\ndef _is_cwd_git_repo():\n try:\n proc = subprocess.Popen(\n ['git', 'rev-parse', '--is-inside-work-tree'],\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE\n )\n except FileNotFoundError:\n log.error(\"Could not find git - is it installed and on your path?\")\n raise Abort('Deployment Aborted!')\n proc.communicate()\n return proc.wait() == 0\n\n\ndef _get_current_sha(repo_path):\n\n proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], cwd=repo_path,\n stdout=subprocess.PIPE, 
stderr=subprocess.PIPE)\n\n stdout, _ = proc.communicate()\n sha = stdout.decode('utf-8').strip()\n return sha\n\n\ndef _get_remote_url(remote_name):\n\n # No CNAME found. We will use the origin URL to determine the GitHub\n # pages location.\n remote = f\"remote.{remote_name}.url\"\n proc = subprocess.Popen([\"git\", \"config\", \"--get\", remote],\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\n stdout, _ = proc.communicate()\n url = stdout.decode('utf-8').strip()\n\n host = None\n path = None\n if 'github.com/' in url:\n host, path = url.split('github.com/', 1)\n elif 'github.com:' in url:\n host, path = url.split('github.com:', 1)\n\n return host, path\n\n\ndef _check_version(branch):\n\n proc = subprocess.Popen(['git', 'show', '-s', '--format=%s', f'refs/heads/{branch}'],\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\n stdout, _ = proc.communicate()\n msg = stdout.decode('utf-8').strip()\n m = re.search(r'\\d+(\\.\\d+)+((a|b|rc)\\d+)?(\\.post\\d+)?(\\.dev\\d+)?', msg, re.X | re.I)\n previousv = version.parse(m.group()) if m else None\n currentv = version.parse(mkdocs.__version__)\n if not previousv:\n log.warning('Version check skipped: No version specified in previous deployment.')\n elif currentv > previousv:\n log.info(\n f'Previous deployment was done with MkDocs version {previousv}; '\n f'you are deploying with a newer version ({currentv})'\n )\n elif currentv < previousv:\n log.error(\n f'Deployment terminated: Previous deployment was made with MkDocs version {previousv}; '\n f'you are attempting to deploy with an older version ({currentv}). Use --ignore-version '\n 'to deploy anyway.'\n )\n raise Abort('Deployment Aborted!')\n\n\ndef gh_deploy(config, message=None, force=False, ignore_version=False, shell=False):\n\n if not _is_cwd_git_repo():\n log.error('Cannot deploy - this directory does not appear to be a git '\n 'repository')\n\n remote_branch = config['remote_branch']\n remote_name = config['remote_name']\n\n if not ignore_version:\n _check_version(remote_branch)\n\n if message is None:\n message = default_message\n sha = _get_current_sha(os.path.dirname(config.config_file_path))\n message = message.format(version=mkdocs.__version__, sha=sha)\n\n log.info(\"Copying '%s' to '%s' branch and pushing to GitHub.\",\n config['site_dir'], config['remote_branch'])\n\n try:\n ghp_import.ghp_import(\n config['site_dir'],\n mesg=message,\n remote=remote_name,\n branch=remote_branch,\n push=True,\n force=force,\n use_shell=shell,\n nojekyll=True\n )\n except ghp_import.GhpError as e:\n log.error(\"Failed to deploy to GitHub with error: \\n{}\".format(e.message))\n raise Abort('Deployment Aborted!')\n\n cname_file = os.path.join(config['site_dir'], 'CNAME')\n # Does this repository have a CNAME set for GitHub pages?\n if os.path.isfile(cname_file):\n # This GitHub pages repository has a CNAME configured.\n with(open(cname_file, 'r')) as f:\n cname_host = f.read().strip()\n log.info(f'Based on your CNAME file, your documentation should be '\n f'available shortly at: http://{cname_host}')\n log.info('NOTE: Your DNS records must be configured appropriately for '\n 'your CNAME URL to work.')\n return\n\n host, path = _get_remote_url(remote_name)\n\n if host is None:\n # This could be a GitHub Enterprise deployment.\n log.info('Your documentation should be available shortly.')\n else:\n username, repo = path.split('/', 1)\n if repo.endswith('.git'):\n repo = repo[:-len('.git')]\n url = f'https://{username}.github.io/{repo}/'\n log.info(f\"Your documentation should shortly be 
available at: {url}\")\n", "path": "mkdocs/commands/gh_deploy.py"}]} | 2,622 | 139 |
gh_patches_debug_35416 | rasdani/github-patches | git_diff | zigpy__zha-device-handlers-1664 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] New yooksmart D10110 inverted with quirk
**Describe the bug**
I purchased a new yooksmart D10110 cover and paired it with Home Assistant. The controls
seemed inverted and I had to move the bar twice in order to get it to move. I read reports
in the past with the suggestion to unpair and pair again, tried multiple times with no luck.
So I disabled the quirk (apologies for the brute force: moved the file to a different directory
and reloaded) and it works now. For completeness:
Before:
- buttons up and down wouldn't work
- available button would be inverted (e.g.: cover was all the way down and the down button was enabled)
- in order to control the cover I'd move the progress bar all the way to 0 or to 100 then the opposite in order to work
After:
- buttons up and down work
- enabled button matches the direction of the cover: if open, it shows down button enabled
**To Reproduce**
Behavior is consistent across multiple pair/unpair cycles and full home assistant instance restarts
**Additional context**
Something that is possible, since the cover is new, is that they corrected the behavior in their firmware
and the quirk isn't needed anymore.
This device has: Firmware: 0x10013001
I can provide any debugging necessary. I'm using homeassistant official virtual machine image and keeping
it up to date.
Edited: formatting
</issue>
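Editorial note: the inversion reported here comes from the quirk listed in the code section below. A minimal, stand-alone sketch of its attribute handling (the real zigpy cluster plumbing is omitted) shows why a device whose firmware already reports the lift percentage correctly ends up looking reversed once the quirk flips it:

```python
# Stand-alone sketch -- not the real zigpy CustomCluster, just its arithmetic
CURRENT_POSITION_LIFT_PERCENTAGE = 0x0008


def quirk_update_attribute(attrid: int, value: int) -> int:
    """Mirror of InvertedWindowCoveringCluster._update_attribute's maths."""
    if attrid == CURRENT_POSITION_LIFT_PERCENTAGE:
        value = 100 - value  # the quirk inverts the reported position
    return value


# A cover that already reports spec-compliant values gets flipped,
# which matches the inverted buttons/positions described in the report.
for reported in (0, 25, 100):
    print(reported, "->", quirk_update_attribute(CURRENT_POSITION_LIFT_PERCENTAGE, reported))
```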
<code>
[start of zhaquirks/yooksmart/D10110blinds.py]
1 """Device handler for Yooksmart D10110 roller blinds."""
2 from zigpy.profiles import zha
3 from zigpy.quirks import CustomCluster, CustomDevice
4 from zigpy.zcl.clusters.closures import WindowCovering
5 from zigpy.zcl.clusters.general import (
6 Basic,
7 Groups,
8 Identify,
9 Ota,
10 PollControl,
11 PowerConfiguration,
12 Scenes,
13 )
14
15 from zhaquirks.const import (
16 DEVICE_TYPE,
17 ENDPOINTS,
18 INPUT_CLUSTERS,
19 MODELS_INFO,
20 OUTPUT_CLUSTERS,
21 PROFILE_ID,
22 )
23
24
25 class InvertedWindowCoveringCluster(CustomCluster, WindowCovering):
26 """WindowCovering cluster implementation.
27
28 This implementation inverts the reported covering percent for non standard
29 devices that don't follow the reporting spec.
30 """
31
32 cluster_id = WindowCovering.cluster_id
33 CURRENT_POSITION_LIFT_PERCENTAGE = 0x0008
34
35 def _update_attribute(self, attrid, value):
36 if attrid == self.CURRENT_POSITION_LIFT_PERCENTAGE:
37 value = 100 - value
38 super()._update_attribute(attrid, value)
39
40
41 class D10110Blinds(CustomDevice):
42 """Custom device representing Yooksmart D10110 roller blinds."""
43
44 signature = {
45 # <SimpleDescriptor endpoint=1 profile=260 device_type=514
46 # device_version=1
47 # input_clusters=[0, 1, 3, 4, 5, 32, 258]
48 # output_clusters=[3, 25]>
49 MODELS_INFO: [
50 ("yooksmart", "D10110"),
51 ],
52 ENDPOINTS: {
53 1: {
54 PROFILE_ID: zha.PROFILE_ID,
55 DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,
56 INPUT_CLUSTERS: [
57 Basic.cluster_id,
58 PowerConfiguration.cluster_id,
59 Identify.cluster_id,
60 Groups.cluster_id,
61 Scenes.cluster_id,
62 PollControl.cluster_id,
63 WindowCovering.cluster_id,
64 ],
65 OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],
66 }
67 },
68 }
69
70 replacement = {
71 ENDPOINTS: {
72 1: {
73 PROFILE_ID: zha.PROFILE_ID,
74 DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,
75 INPUT_CLUSTERS: [
76 Basic.cluster_id,
77 PowerConfiguration.cluster_id,
78 Identify.cluster_id,
79 Groups.cluster_id,
80 Scenes.cluster_id,
81 PollControl.cluster_id,
82 InvertedWindowCoveringCluster,
83 ],
84 OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],
85 }
86 }
87 }
88
[end of zhaquirks/yooksmart/D10110blinds.py]
[start of zhaquirks/yooksmart/__init__.py]
1 """Yooksmart module for custom device handlers."""
2
[end of zhaquirks/yooksmart/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zhaquirks/yooksmart/D10110blinds.py b/zhaquirks/yooksmart/D10110blinds.py
deleted file mode 100644
--- a/zhaquirks/yooksmart/D10110blinds.py
+++ /dev/null
@@ -1,87 +0,0 @@
-"""Device handler for Yooksmart D10110 roller blinds."""
-from zigpy.profiles import zha
-from zigpy.quirks import CustomCluster, CustomDevice
-from zigpy.zcl.clusters.closures import WindowCovering
-from zigpy.zcl.clusters.general import (
- Basic,
- Groups,
- Identify,
- Ota,
- PollControl,
- PowerConfiguration,
- Scenes,
-)
-
-from zhaquirks.const import (
- DEVICE_TYPE,
- ENDPOINTS,
- INPUT_CLUSTERS,
- MODELS_INFO,
- OUTPUT_CLUSTERS,
- PROFILE_ID,
-)
-
-
-class InvertedWindowCoveringCluster(CustomCluster, WindowCovering):
- """WindowCovering cluster implementation.
-
- This implementation inverts the reported covering percent for non standard
- devices that don't follow the reporting spec.
- """
-
- cluster_id = WindowCovering.cluster_id
- CURRENT_POSITION_LIFT_PERCENTAGE = 0x0008
-
- def _update_attribute(self, attrid, value):
- if attrid == self.CURRENT_POSITION_LIFT_PERCENTAGE:
- value = 100 - value
- super()._update_attribute(attrid, value)
-
-
-class D10110Blinds(CustomDevice):
- """Custom device representing Yooksmart D10110 roller blinds."""
-
- signature = {
- # <SimpleDescriptor endpoint=1 profile=260 device_type=514
- # device_version=1
- # input_clusters=[0, 1, 3, 4, 5, 32, 258]
- # output_clusters=[3, 25]>
- MODELS_INFO: [
- ("yooksmart", "D10110"),
- ],
- ENDPOINTS: {
- 1: {
- PROFILE_ID: zha.PROFILE_ID,
- DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,
- INPUT_CLUSTERS: [
- Basic.cluster_id,
- PowerConfiguration.cluster_id,
- Identify.cluster_id,
- Groups.cluster_id,
- Scenes.cluster_id,
- PollControl.cluster_id,
- WindowCovering.cluster_id,
- ],
- OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],
- }
- },
- }
-
- replacement = {
- ENDPOINTS: {
- 1: {
- PROFILE_ID: zha.PROFILE_ID,
- DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,
- INPUT_CLUSTERS: [
- Basic.cluster_id,
- PowerConfiguration.cluster_id,
- Identify.cluster_id,
- Groups.cluster_id,
- Scenes.cluster_id,
- PollControl.cluster_id,
- InvertedWindowCoveringCluster,
- ],
- OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],
- }
- }
- }
diff --git a/zhaquirks/yooksmart/__init__.py b/zhaquirks/yooksmart/__init__.py
deleted file mode 100644
--- a/zhaquirks/yooksmart/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-"""Yooksmart module for custom device handlers."""
| {"golden_diff": "diff --git a/zhaquirks/yooksmart/D10110blinds.py b/zhaquirks/yooksmart/D10110blinds.py\ndeleted file mode 100644\n--- a/zhaquirks/yooksmart/D10110blinds.py\n+++ /dev/null\n@@ -1,87 +0,0 @@\n-\"\"\"Device handler for Yooksmart D10110 roller blinds.\"\"\"\n-from zigpy.profiles import zha\n-from zigpy.quirks import CustomCluster, CustomDevice\n-from zigpy.zcl.clusters.closures import WindowCovering\n-from zigpy.zcl.clusters.general import (\n- Basic,\n- Groups,\n- Identify,\n- Ota,\n- PollControl,\n- PowerConfiguration,\n- Scenes,\n-)\n-\n-from zhaquirks.const import (\n- DEVICE_TYPE,\n- ENDPOINTS,\n- INPUT_CLUSTERS,\n- MODELS_INFO,\n- OUTPUT_CLUSTERS,\n- PROFILE_ID,\n-)\n-\n-\n-class InvertedWindowCoveringCluster(CustomCluster, WindowCovering):\n- \"\"\"WindowCovering cluster implementation.\n-\n- This implementation inverts the reported covering percent for non standard\n- devices that don't follow the reporting spec.\n- \"\"\"\n-\n- cluster_id = WindowCovering.cluster_id\n- CURRENT_POSITION_LIFT_PERCENTAGE = 0x0008\n-\n- def _update_attribute(self, attrid, value):\n- if attrid == self.CURRENT_POSITION_LIFT_PERCENTAGE:\n- value = 100 - value\n- super()._update_attribute(attrid, value)\n-\n-\n-class D10110Blinds(CustomDevice):\n- \"\"\"Custom device representing Yooksmart D10110 roller blinds.\"\"\"\n-\n- signature = {\n- # <SimpleDescriptor endpoint=1 profile=260 device_type=514\n- # device_version=1\n- # input_clusters=[0, 1, 3, 4, 5, 32, 258]\n- # output_clusters=[3, 25]>\n- MODELS_INFO: [\n- (\"yooksmart\", \"D10110\"),\n- ],\n- ENDPOINTS: {\n- 1: {\n- PROFILE_ID: zha.PROFILE_ID,\n- DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,\n- INPUT_CLUSTERS: [\n- Basic.cluster_id,\n- PowerConfiguration.cluster_id,\n- Identify.cluster_id,\n- Groups.cluster_id,\n- Scenes.cluster_id,\n- PollControl.cluster_id,\n- WindowCovering.cluster_id,\n- ],\n- OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],\n- }\n- },\n- }\n-\n- replacement = {\n- ENDPOINTS: {\n- 1: {\n- PROFILE_ID: zha.PROFILE_ID,\n- DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,\n- INPUT_CLUSTERS: [\n- Basic.cluster_id,\n- PowerConfiguration.cluster_id,\n- Identify.cluster_id,\n- Groups.cluster_id,\n- Scenes.cluster_id,\n- PollControl.cluster_id,\n- InvertedWindowCoveringCluster,\n- ],\n- OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],\n- }\n- }\n- }\ndiff --git a/zhaquirks/yooksmart/__init__.py b/zhaquirks/yooksmart/__init__.py\ndeleted file mode 100644\n--- a/zhaquirks/yooksmart/__init__.py\n+++ /dev/null\n@@ -1 +0,0 @@\n-\"\"\"Yooksmart module for custom device handlers.\"\"\"\n", "issue": "[BUG] New yooksmart D10110 inverted with quirk\n**Describe the bug**\r\nI purchased a new yooksmart D10110 cover and paired with home assistant. The controls\r\nseemed inverted and I had to move the bar twice in order to get it to move. I read reports\r\nin the past with the suggestion to unpair and pair again, tried multiple times with no luck.\r\nSo I disabled the quirk (apologies for the brute force: moved the file to a different directory\r\nand reloaded) and it works now. 
For completeness:\r\nBefore:\r\n- buttons up and down wouldn't work\r\n- available button would be inverted (e.g.: cover was all the way down and the down button was enabled)\r\n- in order to control the cover I'd move the progress bar all the way to 0 or to 100 then the opposite in order to work\r\nAfter:\r\n- buttons up and down work\r\n- enabled button matches the direction of the cover: if open, it shows down button enabled\r\n\r\n**To Reproduce**\r\nBehavior is consistent across multiple pair/unpair cycles and full home assistant instance restarts\r\n\r\n**Additional context**\r\nSomething that is possible, since the cover is new, is that they corrected the behavior in their firmware\r\nand the quirk isn't needed anymore.\r\nThis device has: Firmware: 0x10013001\r\n\r\nI can provide any debugging necessary. I'm using homeassistant official virtual machine image and keeping\r\nit up to date.\r\n\r\nEditted: formatting\n", "before_files": [{"content": "\"\"\"Device handler for Yooksmart D10110 roller blinds.\"\"\"\nfrom zigpy.profiles import zha\nfrom zigpy.quirks import CustomCluster, CustomDevice\nfrom zigpy.zcl.clusters.closures import WindowCovering\nfrom zigpy.zcl.clusters.general import (\n Basic,\n Groups,\n Identify,\n Ota,\n PollControl,\n PowerConfiguration,\n Scenes,\n)\n\nfrom zhaquirks.const import (\n DEVICE_TYPE,\n ENDPOINTS,\n INPUT_CLUSTERS,\n MODELS_INFO,\n OUTPUT_CLUSTERS,\n PROFILE_ID,\n)\n\n\nclass InvertedWindowCoveringCluster(CustomCluster, WindowCovering):\n \"\"\"WindowCovering cluster implementation.\n\n This implementation inverts the reported covering percent for non standard\n devices that don't follow the reporting spec.\n \"\"\"\n\n cluster_id = WindowCovering.cluster_id\n CURRENT_POSITION_LIFT_PERCENTAGE = 0x0008\n\n def _update_attribute(self, attrid, value):\n if attrid == self.CURRENT_POSITION_LIFT_PERCENTAGE:\n value = 100 - value\n super()._update_attribute(attrid, value)\n\n\nclass D10110Blinds(CustomDevice):\n \"\"\"Custom device representing Yooksmart D10110 roller blinds.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=514\n # device_version=1\n # input_clusters=[0, 1, 3, 4, 5, 32, 258]\n # output_clusters=[3, 25]>\n MODELS_INFO: [\n (\"yooksmart\", \"D10110\"),\n ],\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n PollControl.cluster_id,\n WindowCovering.cluster_id,\n ],\n OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],\n }\n },\n }\n\n replacement = {\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n PollControl.cluster_id,\n InvertedWindowCoveringCluster,\n ],\n OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],\n }\n }\n }\n", "path": "zhaquirks/yooksmart/D10110blinds.py"}, {"content": "\"\"\"Yooksmart module for custom device handlers.\"\"\"\n", "path": "zhaquirks/yooksmart/__init__.py"}]} | 1,681 | 825 |
gh_patches_debug_10567 | rasdani/github-patches | git_diff | internetarchive__openlibrary-8102 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Author dropdown not working as expected
<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->
Since the last deployment on July 13, the author dropdown on the edit form behaves differently.
### Evidence / Screenshot (if possible)
<img width="1012" alt="Screenshot 2023-07-13 at 08 35 17" src="https://github.com/internetarchive/openlibrary/assets/17739465/389b1544-9d04-4de1-b218-0145867ec284">
### Relevant url?
<!-- `https://openlibrary.org/...` -->
### Steps to Reproduce
<!-- What steps caused you to find the bug? -->
1. Go to ...add book or edit book form
2. Do ... try to add Plato as an author
<!-- What actually happened after these steps? What did you expect to happen? -->
* Actual: the most obvious choice is missing (Plato, the philosopher). Instead, there are authors that have plato as part of the spelling of their names or less prolific authors with the last name Plato.
* Expected: The most likely choice, probably determined by spelling and number of works, should appear on the list.
### Details
- **Logged in (Y/N)?**
- **Browser type/version?**
- **Operating system?**
- **Environment (prod/dev/local)?** prod
<!-- If not sure, put prod -->
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
<!-- @ tag stakeholders of this bug -->
@cdrini
</issue>
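Editorial note: one plausible reading of the symptom is that the authors autocomplete only does prefix matching (`name:({q}*) OR alternate_names:({q}*)` in the code below), so an exact single-token name such as "Plato" gets no ranking boost over any author whose name merely starts with the token. The sketch below assembles a boosted query of the kind the works endpoint already uses (an exact quoted match weighted with `^2`); the field names are taken from the surrounding class attributes, not from a verified Open Library Solr schema:

```python
# Illustrative only: a Solr query string that boosts exact name matches.
def author_autocomplete_query(q: str) -> str:
    return (
        f'name:({q}*) OR alternate_names:({q}*) '
        f'OR name:"{q}"^2 OR alternate_names:"{q}"^2'
    )


print(author_autocomplete_query("plato"))
# name:(plato*) OR alternate_names:(plato*) OR name:"plato"^2 OR alternate_names:"plato"^2
```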
<code>
[start of openlibrary/plugins/worksearch/autocomplete.py]
1 import itertools
2 import web
3 import json
4
5
6 from infogami.utils import delegate
7 from infogami.utils.view import safeint
8 from openlibrary.core.models import Thing
9 from openlibrary.plugins.upstream import utils
10 from openlibrary.plugins.worksearch.search import get_solr
11 from openlibrary.utils import (
12 find_olid_in_string,
13 olid_to_key,
14 )
15
16
17 def to_json(d):
18 web.header('Content-Type', 'application/json')
19 return delegate.RawText(json.dumps(d))
20
21
22 class autocomplete(delegate.page):
23 path = "/_autocomplete"
24 fq = ['-type:edition']
25 fl = 'key,type,name,title,score'
26 olid_suffix: str | None = None
27 query = 'title:"{q}"^2 OR title:({q}*) OR name:"{q}"^2 OR name:({q}*)'
28
29 def db_fetch(self, key: str) -> Thing | None:
30 if thing := web.ctx.site.get(key):
31 return thing.as_fake_solr_record()
32 else:
33 return None
34
35 def doc_wrap(self, doc: dict):
36 """Modify the returned solr document in place."""
37 if 'name' not in doc:
38 doc['name'] = doc.get('title')
39
40 def GET(self):
41 return self.direct_get()
42
43 def direct_get(self, fq: list[str] | None = None):
44 i = web.input(q="", limit=5)
45 i.limit = safeint(i.limit, 5)
46
47 solr = get_solr()
48
49 # look for ID in query string here
50 q = solr.escape(i.q).strip()
51 embedded_olid = None
52 if self.olid_suffix:
53 embedded_olid = find_olid_in_string(q, self.olid_suffix)
54
55 if embedded_olid:
56 solr_q = f'key:"{olid_to_key(embedded_olid)}"'
57 else:
58 solr_q = self.query.format(q=q)
59
60 fq = fq or self.fq
61 params = {
62 'q_op': 'AND',
63 'rows': i.limit,
64 **({'fq': fq} if fq else {}),
65 # limit the fields returned for better performance
66 'fl': self.fl,
67 }
68
69 data = solr.select(solr_q, **params)
70 docs = data['docs']
71
72 if embedded_olid and not docs:
73 # Grumble! Work not in solr yet. Create a dummy.
74 fake_doc = self.db_fetch(olid_to_key(embedded_olid))
75 if fake_doc:
76 docs = [fake_doc]
77
78 for d in docs:
79 self.doc_wrap(d)
80
81 return to_json(docs)
82
83
84 class languages_autocomplete(delegate.page):
85 path = "/languages/_autocomplete"
86
87 def GET(self):
88 i = web.input(q="", limit=5)
89 i.limit = safeint(i.limit, 5)
90 return to_json(
91 list(itertools.islice(utils.autocomplete_languages(i.q), i.limit))
92 )
93
94
95 class works_autocomplete(autocomplete):
96 path = "/works/_autocomplete"
97 fq = [
98 'type:work',
99 # Exclude orphaned editions from search results
100 'key:*W',
101 ]
102 fl = 'key,title,subtitle,cover_i,first_publish_year,author_name,edition_count'
103 olid_suffix = 'W'
104 query = 'title:"{q}"^2 OR title:({q}*)'
105
106 def doc_wrap(self, doc: dict):
107 doc['full_title'] = doc['title']
108 if 'subtitle' in doc:
109 doc['full_title'] += ": " + doc['subtitle']
110 doc['name'] = doc.get('title')
111
112
113 class authors_autocomplete(autocomplete):
114 path = "/authors/_autocomplete"
115 fq = ['type:author']
116 fl = 'key,name,alternate_names,birth_date,death_date,work_count,works,subjects'
117 olid_suffix = 'A'
118 query = 'name:({q}*) OR alternate_names:({q}*)'
119
120 def doc_wrap(self, doc: dict):
121 if 'top_work' in doc:
122 doc['works'] = [doc.pop('top_work')]
123 else:
124 doc['works'] = []
125 doc['subjects'] = doc.pop('top_subjects', [])
126
127
128 class subjects_autocomplete(autocomplete):
129 # can't use /subjects/_autocomplete because the subjects endpoint = /subjects/[^/]+
130 path = "/subjects_autocomplete"
131 fq = ['type:subject']
132 fl = 'key,name'
133 query = 'name:({q}*)'
134
135 def GET(self):
136 i = web.input(type="")
137 fq = self.fq
138 if i.type:
139 fq = fq + [f'subject_type:{i.type}']
140
141 return super().direct_get(fq=fq)
142
143
144 def setup():
145 """Do required setup."""
146 pass
147
[end of openlibrary/plugins/worksearch/autocomplete.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/openlibrary/plugins/worksearch/autocomplete.py b/openlibrary/plugins/worksearch/autocomplete.py
--- a/openlibrary/plugins/worksearch/autocomplete.py
+++ b/openlibrary/plugins/worksearch/autocomplete.py
@@ -113,9 +113,9 @@
class authors_autocomplete(autocomplete):
path = "/authors/_autocomplete"
fq = ['type:author']
- fl = 'key,name,alternate_names,birth_date,death_date,work_count,works,subjects'
+ fl = 'key,name,alternate_names,birth_date,death_date,work_count,top_work,top_subjects'
olid_suffix = 'A'
- query = 'name:({q}*) OR alternate_names:({q}*)'
+ query = 'name:({q}*) OR alternate_names:({q}*) OR name:"{q}"^2 OR alternate_names:"{q}"^2'
def doc_wrap(self, doc: dict):
if 'top_work' in doc:
| {"golden_diff": "diff --git a/openlibrary/plugins/worksearch/autocomplete.py b/openlibrary/plugins/worksearch/autocomplete.py\n--- a/openlibrary/plugins/worksearch/autocomplete.py\n+++ b/openlibrary/plugins/worksearch/autocomplete.py\n@@ -113,9 +113,9 @@\n class authors_autocomplete(autocomplete):\n path = \"/authors/_autocomplete\"\n fq = ['type:author']\n- fl = 'key,name,alternate_names,birth_date,death_date,work_count,works,subjects'\n+ fl = 'key,name,alternate_names,birth_date,death_date,work_count,top_work,top_subjects'\n olid_suffix = 'A'\n- query = 'name:({q}*) OR alternate_names:({q}*)'\n+ query = 'name:({q}*) OR alternate_names:({q}*) OR name:\"{q}\"^2 OR alternate_names:\"{q}\"^2'\n \n def doc_wrap(self, doc: dict):\n if 'top_work' in doc:\n", "issue": "Author dropdown not working as expected\n<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->\r\nSince the last deployment on July 13, the author dropdown on the edit form behaves differently. \r\n### Evidence / Screenshot (if possible)\r\n<img width=\"1012\" alt=\"Screenshot 2023-07-13 at 08 35 17\" src=\"https://github.com/internetarchive/openlibrary/assets/17739465/389b1544-9d04-4de1-b218-0145867ec284\">\r\n\r\n\r\n### Relevant url?\r\n<!-- `https://openlibrary.org/...` -->\r\n\r\n### Steps to Reproduce\r\n<!-- What steps caused you to find the bug? -->\r\n1. Go to ...add book or edit book form\r\n2. Do ... try to add Plato as an author\r\n\r\n<!-- What actually happened after these steps? What did you expect to happen? -->\r\n* Actual: the most obvious choice is missing (Plato, the philosopher). Instead, there are authors that have plato as part of the spelling of their names or less prolific authors with the last name Plato.\r\n* Expected: The most likely choice, probably determined by spelling and number of works, should appear on the list.\r\n\r\n### Details\r\n\r\n- **Logged in (Y/N)?**\r\n- **Browser type/version?**\r\n- **Operating system?**\r\n- **Environment (prod/dev/local)?** prod\r\n<!-- If not sure, put prod -->\r\n\r\n### Proposal & Constraints\r\n<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->\r\n\r\n### Related files\r\n<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. 
-->\r\n\r\n### Stakeholders\r\n<!-- @ tag stakeholders of this bug -->\r\n@cdrini \n", "before_files": [{"content": "import itertools\nimport web\nimport json\n\n\nfrom infogami.utils import delegate\nfrom infogami.utils.view import safeint\nfrom openlibrary.core.models import Thing\nfrom openlibrary.plugins.upstream import utils\nfrom openlibrary.plugins.worksearch.search import get_solr\nfrom openlibrary.utils import (\n find_olid_in_string,\n olid_to_key,\n)\n\n\ndef to_json(d):\n web.header('Content-Type', 'application/json')\n return delegate.RawText(json.dumps(d))\n\n\nclass autocomplete(delegate.page):\n path = \"/_autocomplete\"\n fq = ['-type:edition']\n fl = 'key,type,name,title,score'\n olid_suffix: str | None = None\n query = 'title:\"{q}\"^2 OR title:({q}*) OR name:\"{q}\"^2 OR name:({q}*)'\n\n def db_fetch(self, key: str) -> Thing | None:\n if thing := web.ctx.site.get(key):\n return thing.as_fake_solr_record()\n else:\n return None\n\n def doc_wrap(self, doc: dict):\n \"\"\"Modify the returned solr document in place.\"\"\"\n if 'name' not in doc:\n doc['name'] = doc.get('title')\n\n def GET(self):\n return self.direct_get()\n\n def direct_get(self, fq: list[str] | None = None):\n i = web.input(q=\"\", limit=5)\n i.limit = safeint(i.limit, 5)\n\n solr = get_solr()\n\n # look for ID in query string here\n q = solr.escape(i.q).strip()\n embedded_olid = None\n if self.olid_suffix:\n embedded_olid = find_olid_in_string(q, self.olid_suffix)\n\n if embedded_olid:\n solr_q = f'key:\"{olid_to_key(embedded_olid)}\"'\n else:\n solr_q = self.query.format(q=q)\n\n fq = fq or self.fq\n params = {\n 'q_op': 'AND',\n 'rows': i.limit,\n **({'fq': fq} if fq else {}),\n # limit the fields returned for better performance\n 'fl': self.fl,\n }\n\n data = solr.select(solr_q, **params)\n docs = data['docs']\n\n if embedded_olid and not docs:\n # Grumble! Work not in solr yet. 
Create a dummy.\n fake_doc = self.db_fetch(olid_to_key(embedded_olid))\n if fake_doc:\n docs = [fake_doc]\n\n for d in docs:\n self.doc_wrap(d)\n\n return to_json(docs)\n\n\nclass languages_autocomplete(delegate.page):\n path = \"/languages/_autocomplete\"\n\n def GET(self):\n i = web.input(q=\"\", limit=5)\n i.limit = safeint(i.limit, 5)\n return to_json(\n list(itertools.islice(utils.autocomplete_languages(i.q), i.limit))\n )\n\n\nclass works_autocomplete(autocomplete):\n path = \"/works/_autocomplete\"\n fq = [\n 'type:work',\n # Exclude orphaned editions from search results\n 'key:*W',\n ]\n fl = 'key,title,subtitle,cover_i,first_publish_year,author_name,edition_count'\n olid_suffix = 'W'\n query = 'title:\"{q}\"^2 OR title:({q}*)'\n\n def doc_wrap(self, doc: dict):\n doc['full_title'] = doc['title']\n if 'subtitle' in doc:\n doc['full_title'] += \": \" + doc['subtitle']\n doc['name'] = doc.get('title')\n\n\nclass authors_autocomplete(autocomplete):\n path = \"/authors/_autocomplete\"\n fq = ['type:author']\n fl = 'key,name,alternate_names,birth_date,death_date,work_count,works,subjects'\n olid_suffix = 'A'\n query = 'name:({q}*) OR alternate_names:({q}*)'\n\n def doc_wrap(self, doc: dict):\n if 'top_work' in doc:\n doc['works'] = [doc.pop('top_work')]\n else:\n doc['works'] = []\n doc['subjects'] = doc.pop('top_subjects', [])\n\n\nclass subjects_autocomplete(autocomplete):\n # can't use /subjects/_autocomplete because the subjects endpoint = /subjects/[^/]+\n path = \"/subjects_autocomplete\"\n fq = ['type:subject']\n fl = 'key,name'\n query = 'name:({q}*)'\n\n def GET(self):\n i = web.input(type=\"\")\n fq = self.fq\n if i.type:\n fq = fq + [f'subject_type:{i.type}']\n\n return super().direct_get(fq=fq)\n\n\ndef setup():\n \"\"\"Do required setup.\"\"\"\n pass\n", "path": "openlibrary/plugins/worksearch/autocomplete.py"}]} | 2,343 | 215 |
gh_patches_debug_36709 | rasdani/github-patches | git_diff | gammapy__gammapy-3306 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow to specify spectral model in ExcessMapEstimator
Currently the `ExcessMapEstimator` does not allow defining the spectral model that is used for the flux computation. It is easy to support and should be done...
</issue>
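Editorial note: a minimal sketch of what "specifying the spectral model" could look like from the user's side, assuming the estimator grew a `spectral_model` keyword that defaults to a power law of index 2. The class and model names below are real gammapy objects, but the exact signature is an assumption made for illustration:

```python
from gammapy.estimators import ExcessMapEstimator
from gammapy.modeling.models import PowerLawSpectralModel

# Hypothetical usage: pass the spectral shape used to convert excess to flux.
# If omitted, a power law with index 2 would be assumed as the default.
estimator = ExcessMapEstimator(
    correlation_radius="0.1 deg",
    spectral_model=PowerLawSpectralModel(index=2),  # assumed keyword
)
# result = estimator.run(dataset)  # `dataset` would be a MapDataset
```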
<code>
[start of gammapy/estimators/excess_map.py]
1 # Licensed under a 3-clause BSD style license - see LICENSE.rst
2 import copy
3 import logging
4 import numpy as np
5 import astropy.units as u
6 from astropy.convolution import Tophat2DKernel
7 from astropy.coordinates import Angle
8 from gammapy.datasets import MapDataset, MapDatasetOnOff
9 from gammapy.maps import Map, MapAxis
10 from gammapy.stats import CashCountsStatistic, WStatCountsStatistic
11 from .core import Estimator
12 from .utils import estimate_exposure_reco_energy
13
14 __all__ = [
15 "ExcessMapEstimator",
16 ]
17
18 log = logging.getLogger(__name__)
19
20
21 def convolved_map_dataset_counts_statistics(dataset, kernel, mask, correlate_off):
22 """Return CountsDataset objects containing smoothed maps from the MapDataset"""
23 # Kernel is modified later make a copy here
24 kernel = copy.deepcopy(kernel)
25 kernel.normalize("peak")
26
27 # fft convolution adds numerical noise, to ensure integer results we call
28 # np.rint
29 n_on = dataset.counts * mask
30 n_on_conv = np.rint(n_on.convolve(kernel.array).data)
31
32 if isinstance(dataset, MapDatasetOnOff):
33 n_off = dataset.counts_off * mask
34 npred_sig = dataset.npred_signal() * mask
35 acceptance_on = dataset.acceptance * mask
36 acceptance_off = dataset.acceptance_off * mask
37
38 npred_sig_convolve = npred_sig.convolve(kernel.array)
39 acceptance_on_convolve = acceptance_on.convolve(kernel.array)
40 if correlate_off:
41 n_off = n_off.convolve(kernel.array)
42 acceptance_off = acceptance_off.convolve(kernel.array)
43
44 with np.errstate(invalid="ignore", divide="ignore"):
45 alpha = acceptance_on_convolve / acceptance_off
46
47 return WStatCountsStatistic(
48 n_on_conv.data, n_off.data, alpha.data, npred_sig_convolve.data
49 )
50 else:
51
52 npred = dataset.npred() * mask
53 background_conv = npred.convolve(kernel.array)
54 return CashCountsStatistic(n_on_conv.data, background_conv.data)
55
56
57 class ExcessMapEstimator(Estimator):
58 """Computes correlated excess, sqrt TS (i.e. Li-Ma significance) and errors for MapDatasets.
59
60 If a model is set on the dataset the excess map estimator will compute the excess taking into account
61 the predicted counts of the model.
62
63 Some background estimation techniques like ring background or adaptive ring background will provide already
64 correlated data for OFF. In the case of already correlated OFF data, the OFF data should not be correlated again,
65 and so the option correlate_off should set to False (default).
66
67 Parameters
68 ----------
69 correlation_radius : ~astropy.coordinate.Angle
70 correlation radius to use
71 n_sigma : float
72 Confidence level for the asymmetric errors expressed in number of sigma.
73 Default is 1.
74 n_sigma_ul : float
75 Confidence level for the upper limits expressed in number of sigma.
76 Default is 3.
77 selection_optional : list of str
78 Which additional maps to estimate besides delta TS, significance and symmetric error.
79 Available options are:
80
81 * "errn-errp": estimate asymmetric errors.
82 * "ul": estimate upper limits.
83
84 By default all additional quantities are estimated.
85 energy_edges : `~astropy.units.Quantity`
86 Energy edges of the target excess maps bins.
87 apply_mask_fit : Bool
88 Apply a mask for the computation.
89 A `~gammapy.datasets.MapDataset.mask_fit` must be present on the input dataset
90 correlate_off : Bool
91 Correlate OFF events in the case of a MapDatasetOnOff
92 """
93
94 tag = "ExcessMapEstimator"
95 _available_selection_optional = ["errn-errp", "ul"]
96
97 def __init__(
98 self,
99 correlation_radius="0.1 deg",
100 n_sigma=1,
101 n_sigma_ul=3,
102 selection_optional=None,
103 energy_edges=None,
104 apply_mask_fit=False,
105 correlate_off=False
106 ):
107 self.correlation_radius = correlation_radius
108 self.n_sigma = n_sigma
109 self.n_sigma_ul = n_sigma_ul
110 self.apply_mask_fit = apply_mask_fit
111 self.selection_optional = selection_optional
112 self.energy_edges = energy_edges
113 self.correlate_off = correlate_off
114
115 @property
116 def correlation_radius(self):
117 return self._correlation_radius
118
119 @correlation_radius.setter
120 def correlation_radius(self, correlation_radius):
121 """Sets radius"""
122 self._correlation_radius = Angle(correlation_radius)
123
124 def run(self, dataset):
125 """Compute correlated excess, Li & Ma significance and flux maps
126
127 If a model is set on the dataset the excess map estimator will compute the excess taking into account
128 the predicted counts of the model.
129
130 Parameters
131 ----------
132 dataset : `~gammapy.datasets.MapDataset` or `~gammapy.datasets.MapDatasetOnOff`
133 input dataset
134
135 Returns
136 -------
137 images : dict
138 Dictionary containing result correlated maps. Keys are:
139
140 * counts : correlated counts map
141 * background : correlated background map
142 * excess : correlated excess map
143 * ts : TS map
144 * sqrt_ts : sqrt(delta TS), or Li-Ma significance map
145 * err : symmetric error map (from covariance)
146 * flux : flux map. An exposure map must be present in the dataset to compute flux map
147 * errn : negative error map
148 * errp : positive error map
149 * ul : upper limit map
150
151 """
152 if not isinstance(dataset, MapDataset):
153 raise ValueError("Unsupported dataset type")
154
155 if self.energy_edges is None:
156 energy_axis = dataset.counts.geom.axes["energy"]
157 energy_edges = u.Quantity([energy_axis.edges[0], energy_axis.edges[-1]])
158 else:
159 energy_edges = self.energy_edges
160
161 axis = MapAxis.from_energy_edges(energy_edges)
162
163 resampled_dataset = dataset.resample_energy_axis(energy_axis=axis)
164
165 # Beware we rely here on the correct npred background in MapDataset.resample_energy_axis
166 resampled_dataset.models = dataset.models
167
168 result = self.estimate_excess_map(resampled_dataset)
169
170 return result
171
172 def estimate_excess_map(self, dataset):
173 """Estimate excess and ts maps for single dataset.
174
175 If exposure is defined, a flux map is also computed.
176
177 Parameters
178 ----------
179 dataset : `MapDataset`
180 Map dataset
181 """
182
183 pixel_size = np.mean(np.abs(dataset.counts.geom.wcs.wcs.cdelt))
184 size = self.correlation_radius.deg / pixel_size
185 kernel = Tophat2DKernel(size)
186
187 geom = dataset.counts.geom
188
189 if self.apply_mask_fit:
190 mask = dataset.mask
191 elif dataset.mask_safe:
192 mask = dataset.mask_safe
193 else:
194 mask = np.ones(dataset.data_shape, dtype=bool)
195
196 counts_stat = convolved_map_dataset_counts_statistics(dataset, kernel, mask, self.correlate_off)
197
198 n_on = Map.from_geom(geom, data=counts_stat.n_on)
199 bkg = Map.from_geom(geom, data=counts_stat.n_on - counts_stat.n_sig)
200 excess = Map.from_geom(geom, data=counts_stat.n_sig)
201
202 result = {"counts": n_on, "background": bkg, "excess": excess}
203
204 tsmap = Map.from_geom(geom, data=counts_stat.ts)
205 sqrt_ts = Map.from_geom(geom, data=counts_stat.sqrt_ts)
206 result.update({"ts": tsmap, "sqrt_ts": sqrt_ts})
207
208 err = Map.from_geom(geom, data=counts_stat.error * self.n_sigma)
209 result.update({"err": err})
210
211 if dataset.exposure:
212 reco_exposure = estimate_exposure_reco_energy(dataset)
213 with np.errstate(invalid="ignore", divide="ignore"):
214 flux = excess / reco_exposure
215 flux.quantity = flux.quantity.to("1 / (cm2 s)")
216 else:
217 flux = Map.from_geom(
218 geom=dataset.counts.geom, data=np.nan * np.ones(dataset.data_shape)
219 )
220 result.update({"flux": flux})
221
222 if "errn-errp" in self.selection_optional:
223 errn = Map.from_geom(geom, data=counts_stat.compute_errn(self.n_sigma))
224 errp = Map.from_geom(geom, data=counts_stat.compute_errp(self.n_sigma))
225 result.update({"errn": errn, "errp": errp})
226
227 if "ul" in self.selection_optional:
228 ul = Map.from_geom(
229 geom, data=counts_stat.compute_upper_limit(self.n_sigma_ul)
230 )
231 result.update({"ul": ul})
232
233 # return nan values outside mask
234 for key in result:
235 result[key].data[~mask] = np.nan
236
237 return result
238
[end of gammapy/estimators/excess_map.py]
</code>
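For orientation, the estimator listed above is normally driven through its `run` method. The snippet below is a minimal usage sketch, assuming an existing `MapDataset` called `dataset` and that `ExcessMapEstimator` is importable from `gammapy.estimators`; the map keys follow the `run` docstring above.

```python
# Minimal sketch, not part of the repository: `dataset` is assumed to be
# an existing MapDataset with counts (and optionally exposure) defined.
from gammapy.estimators import ExcessMapEstimator

estimator = ExcessMapEstimator(
    correlation_radius="0.1 deg",   # top-hat correlation radius
    selection_optional=["ul"],      # also compute the upper-limit map
)
maps = estimator.run(dataset)       # dict of correlated maps

significance = maps["sqrt_ts"]      # Li & Ma significance map
excess = maps["excess"]             # correlated excess map
```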
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gammapy/estimators/excess_map.py b/gammapy/estimators/excess_map.py
--- a/gammapy/estimators/excess_map.py
+++ b/gammapy/estimators/excess_map.py
@@ -89,6 +89,9 @@
A `~gammapy.datasets.MapDataset.mask_fit` must be present on the input dataset
correlate_off : Bool
Correlate OFF events in the case of a MapDatasetOnOff
+ spectral_model : `~gammapy.modeling.models.SpectralModel`
+ Spectral model used for the computation of the flux map.
+ If None, a Power Law of index 2 is assumed (default).
"""
tag = "ExcessMapEstimator"
@@ -102,7 +105,8 @@
selection_optional=None,
energy_edges=None,
apply_mask_fit=False,
- correlate_off=False
+ correlate_off=False,
+ spectral_model=None,
):
self.correlation_radius = correlation_radius
self.n_sigma = n_sigma
@@ -111,6 +115,7 @@
self.selection_optional = selection_optional
self.energy_edges = energy_edges
self.correlate_off = correlate_off
+ self.spectral_model = spectral_model
@property
def correlation_radius(self):
@@ -193,7 +198,9 @@
else:
mask = np.ones(dataset.data_shape, dtype=bool)
- counts_stat = convolved_map_dataset_counts_statistics(dataset, kernel, mask, self.correlate_off)
+ counts_stat = convolved_map_dataset_counts_statistics(
+ dataset, kernel, mask, self.correlate_off
+ )
n_on = Map.from_geom(geom, data=counts_stat.n_on)
bkg = Map.from_geom(geom, data=counts_stat.n_on - counts_stat.n_sig)
@@ -209,7 +216,7 @@
result.update({"err": err})
if dataset.exposure:
- reco_exposure = estimate_exposure_reco_energy(dataset)
+ reco_exposure = estimate_exposure_reco_energy(dataset, self.spectral_model)
with np.errstate(invalid="ignore", divide="ignore"):
flux = excess / reco_exposure
flux.quantity = flux.quantity.to("1 / (cm2 s)")
| {"golden_diff": "diff --git a/gammapy/estimators/excess_map.py b/gammapy/estimators/excess_map.py\n--- a/gammapy/estimators/excess_map.py\n+++ b/gammapy/estimators/excess_map.py\n@@ -89,6 +89,9 @@\n A `~gammapy.datasets.MapDataset.mask_fit` must be present on the input dataset\n correlate_off : Bool\n Correlate OFF events in the case of a MapDatasetOnOff\n+ spectral_model : `~gammapy.modeling.models.SpectralModel`\n+ Spectral model used for the computation of the flux map. \n+ If None, a Power Law of index 2 is assumed (default). \n \"\"\"\n \n tag = \"ExcessMapEstimator\"\n@@ -102,7 +105,8 @@\n selection_optional=None,\n energy_edges=None,\n apply_mask_fit=False,\n- correlate_off=False\n+ correlate_off=False,\n+ spectral_model=None,\n ):\n self.correlation_radius = correlation_radius\n self.n_sigma = n_sigma\n@@ -111,6 +115,7 @@\n self.selection_optional = selection_optional\n self.energy_edges = energy_edges\n self.correlate_off = correlate_off\n+ self.spectral_model = spectral_model\n \n @property\n def correlation_radius(self):\n@@ -193,7 +198,9 @@\n else:\n mask = np.ones(dataset.data_shape, dtype=bool)\n \n- counts_stat = convolved_map_dataset_counts_statistics(dataset, kernel, mask, self.correlate_off)\n+ counts_stat = convolved_map_dataset_counts_statistics(\n+ dataset, kernel, mask, self.correlate_off\n+ )\n \n n_on = Map.from_geom(geom, data=counts_stat.n_on)\n bkg = Map.from_geom(geom, data=counts_stat.n_on - counts_stat.n_sig)\n@@ -209,7 +216,7 @@\n result.update({\"err\": err})\n \n if dataset.exposure:\n- reco_exposure = estimate_exposure_reco_energy(dataset)\n+ reco_exposure = estimate_exposure_reco_energy(dataset, self.spectral_model)\n with np.errstate(invalid=\"ignore\", divide=\"ignore\"):\n flux = excess / reco_exposure\n flux.quantity = flux.quantity.to(\"1 / (cm2 s)\")\n", "issue": "Allow to specify spectral model in ExcessMapEstimator\nCurrently the `ExcessMapEstimator` does not allow to define the spectral model, that is used for the flux computation. 
It is easy to support and should be done...\r\n\n", "before_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\nimport copy\nimport logging\nimport numpy as np\nimport astropy.units as u\nfrom astropy.convolution import Tophat2DKernel\nfrom astropy.coordinates import Angle\nfrom gammapy.datasets import MapDataset, MapDatasetOnOff\nfrom gammapy.maps import Map, MapAxis\nfrom gammapy.stats import CashCountsStatistic, WStatCountsStatistic\nfrom .core import Estimator\nfrom .utils import estimate_exposure_reco_energy\n\n__all__ = [\n \"ExcessMapEstimator\",\n]\n\nlog = logging.getLogger(__name__)\n\n\ndef convolved_map_dataset_counts_statistics(dataset, kernel, mask, correlate_off):\n \"\"\"Return CountsDataset objects containing smoothed maps from the MapDataset\"\"\"\n # Kernel is modified later make a copy here\n kernel = copy.deepcopy(kernel)\n kernel.normalize(\"peak\")\n\n # fft convolution adds numerical noise, to ensure integer results we call\n # np.rint\n n_on = dataset.counts * mask\n n_on_conv = np.rint(n_on.convolve(kernel.array).data)\n\n if isinstance(dataset, MapDatasetOnOff):\n n_off = dataset.counts_off * mask\n npred_sig = dataset.npred_signal() * mask\n acceptance_on = dataset.acceptance * mask\n acceptance_off = dataset.acceptance_off * mask\n\n npred_sig_convolve = npred_sig.convolve(kernel.array)\n acceptance_on_convolve = acceptance_on.convolve(kernel.array)\n if correlate_off:\n n_off = n_off.convolve(kernel.array)\n acceptance_off = acceptance_off.convolve(kernel.array)\n\n with np.errstate(invalid=\"ignore\", divide=\"ignore\"):\n alpha = acceptance_on_convolve / acceptance_off\n\n return WStatCountsStatistic(\n n_on_conv.data, n_off.data, alpha.data, npred_sig_convolve.data\n )\n else:\n\n npred = dataset.npred() * mask\n background_conv = npred.convolve(kernel.array)\n return CashCountsStatistic(n_on_conv.data, background_conv.data)\n\n\nclass ExcessMapEstimator(Estimator):\n \"\"\"Computes correlated excess, sqrt TS (i.e. Li-Ma significance) and errors for MapDatasets.\n\n If a model is set on the dataset the excess map estimator will compute the excess taking into account\n the predicted counts of the model.\n\n Some background estimation techniques like ring background or adaptive ring background will provide already\n correlated data for OFF. 
In the case of already correlated OFF data, the OFF data should not be correlated again,\n and so the option correlate_off should set to False (default).\n\n Parameters\n ----------\n correlation_radius : ~astropy.coordinate.Angle\n correlation radius to use\n n_sigma : float\n Confidence level for the asymmetric errors expressed in number of sigma.\n Default is 1.\n n_sigma_ul : float\n Confidence level for the upper limits expressed in number of sigma.\n Default is 3.\n selection_optional : list of str\n Which additional maps to estimate besides delta TS, significance and symmetric error.\n Available options are:\n\n * \"errn-errp\": estimate asymmetric errors.\n * \"ul\": estimate upper limits.\n\n By default all additional quantities are estimated.\n energy_edges : `~astropy.units.Quantity`\n Energy edges of the target excess maps bins.\n apply_mask_fit : Bool\n Apply a mask for the computation.\n A `~gammapy.datasets.MapDataset.mask_fit` must be present on the input dataset\n correlate_off : Bool\n Correlate OFF events in the case of a MapDatasetOnOff\n \"\"\"\n\n tag = \"ExcessMapEstimator\"\n _available_selection_optional = [\"errn-errp\", \"ul\"]\n\n def __init__(\n self,\n correlation_radius=\"0.1 deg\",\n n_sigma=1,\n n_sigma_ul=3,\n selection_optional=None,\n energy_edges=None,\n apply_mask_fit=False,\n correlate_off=False\n ):\n self.correlation_radius = correlation_radius\n self.n_sigma = n_sigma\n self.n_sigma_ul = n_sigma_ul\n self.apply_mask_fit = apply_mask_fit\n self.selection_optional = selection_optional\n self.energy_edges = energy_edges\n self.correlate_off = correlate_off\n\n @property\n def correlation_radius(self):\n return self._correlation_radius\n\n @correlation_radius.setter\n def correlation_radius(self, correlation_radius):\n \"\"\"Sets radius\"\"\"\n self._correlation_radius = Angle(correlation_radius)\n\n def run(self, dataset):\n \"\"\"Compute correlated excess, Li & Ma significance and flux maps\n\n If a model is set on the dataset the excess map estimator will compute the excess taking into account\n the predicted counts of the model.\n\n Parameters\n ----------\n dataset : `~gammapy.datasets.MapDataset` or `~gammapy.datasets.MapDatasetOnOff`\n input dataset\n\n Returns\n -------\n images : dict\n Dictionary containing result correlated maps. Keys are:\n\n * counts : correlated counts map\n * background : correlated background map\n * excess : correlated excess map\n * ts : TS map\n * sqrt_ts : sqrt(delta TS), or Li-Ma significance map\n * err : symmetric error map (from covariance)\n * flux : flux map. 
An exposure map must be present in the dataset to compute flux map\n * errn : negative error map\n * errp : positive error map\n * ul : upper limit map\n\n \"\"\"\n if not isinstance(dataset, MapDataset):\n raise ValueError(\"Unsupported dataset type\")\n\n if self.energy_edges is None:\n energy_axis = dataset.counts.geom.axes[\"energy\"]\n energy_edges = u.Quantity([energy_axis.edges[0], energy_axis.edges[-1]])\n else:\n energy_edges = self.energy_edges\n\n axis = MapAxis.from_energy_edges(energy_edges)\n\n resampled_dataset = dataset.resample_energy_axis(energy_axis=axis)\n\n # Beware we rely here on the correct npred background in MapDataset.resample_energy_axis\n resampled_dataset.models = dataset.models\n\n result = self.estimate_excess_map(resampled_dataset)\n\n return result\n\n def estimate_excess_map(self, dataset):\n \"\"\"Estimate excess and ts maps for single dataset.\n\n If exposure is defined, a flux map is also computed.\n\n Parameters\n ----------\n dataset : `MapDataset`\n Map dataset\n \"\"\"\n\n pixel_size = np.mean(np.abs(dataset.counts.geom.wcs.wcs.cdelt))\n size = self.correlation_radius.deg / pixel_size\n kernel = Tophat2DKernel(size)\n\n geom = dataset.counts.geom\n\n if self.apply_mask_fit:\n mask = dataset.mask\n elif dataset.mask_safe:\n mask = dataset.mask_safe\n else:\n mask = np.ones(dataset.data_shape, dtype=bool)\n\n counts_stat = convolved_map_dataset_counts_statistics(dataset, kernel, mask, self.correlate_off)\n\n n_on = Map.from_geom(geom, data=counts_stat.n_on)\n bkg = Map.from_geom(geom, data=counts_stat.n_on - counts_stat.n_sig)\n excess = Map.from_geom(geom, data=counts_stat.n_sig)\n\n result = {\"counts\": n_on, \"background\": bkg, \"excess\": excess}\n\n tsmap = Map.from_geom(geom, data=counts_stat.ts)\n sqrt_ts = Map.from_geom(geom, data=counts_stat.sqrt_ts)\n result.update({\"ts\": tsmap, \"sqrt_ts\": sqrt_ts})\n\n err = Map.from_geom(geom, data=counts_stat.error * self.n_sigma)\n result.update({\"err\": err})\n\n if dataset.exposure:\n reco_exposure = estimate_exposure_reco_energy(dataset)\n with np.errstate(invalid=\"ignore\", divide=\"ignore\"):\n flux = excess / reco_exposure\n flux.quantity = flux.quantity.to(\"1 / (cm2 s)\")\n else:\n flux = Map.from_geom(\n geom=dataset.counts.geom, data=np.nan * np.ones(dataset.data_shape)\n )\n result.update({\"flux\": flux})\n\n if \"errn-errp\" in self.selection_optional:\n errn = Map.from_geom(geom, data=counts_stat.compute_errn(self.n_sigma))\n errp = Map.from_geom(geom, data=counts_stat.compute_errp(self.n_sigma))\n result.update({\"errn\": errn, \"errp\": errp})\n\n if \"ul\" in self.selection_optional:\n ul = Map.from_geom(\n geom, data=counts_stat.compute_upper_limit(self.n_sigma_ul)\n )\n result.update({\"ul\": ul})\n\n # return nan values outside mask\n for key in result:\n result[key].data[~mask] = np.nan\n\n return result\n", "path": "gammapy/estimators/excess_map.py"}]} | 3,123 | 523 |
gh_patches_debug_3138 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-1231 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[PORT] [Authentication] updates to support Arlington
> Port this change from botbuilder-dotnet/master branch:
https://github.com/microsoft/botbuilder-dotnet/pull/3734
# Changed projects
* Microsoft.Bot.Connector
* Microsoft.Bot.Connector.Tests
[R9]
</issue>
<code>
[start of libraries/botframework-connector/botframework/connector/auth/government_constants.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3 from abc import ABC
4
5
6 class GovernmentConstants(ABC):
7
8 """
9 Government Channel Service property value
10 """
11
12 CHANNEL_SERVICE = "https://botframework.azure.us"
13
14 """
15 TO CHANNEL FROM BOT: Login URL
16 """
17 TO_CHANNEL_FROM_BOT_LOGIN_URL = (
18 "https://login.microsoftonline.us/"
19 "cab8a31a-1906-4287-a0d8-4eef66b95f6e/"
20 "oauth2/v2.0/token"
21 )
22
23 """
24 TO CHANNEL FROM BOT: OAuth scope to request
25 """
26 TO_CHANNEL_FROM_BOT_OAUTH_SCOPE = "https://api.botframework.us/.default"
27
28 """
29 TO BOT FROM CHANNEL: Token issuer
30 """
31 TO_BOT_FROM_CHANNEL_TOKEN_ISSUER = "https://api.botframework.us"
32
33 """
34 TO BOT FROM CHANNEL: OpenID metadata document for tokens coming from MSA
35 """
36 TO_BOT_FROM_CHANNEL_OPEN_ID_METADATA_URL = (
37 "https://login.botframework.azure.us/v1/.well-known/openidconfiguration"
38 )
39
40 """
41 TO BOT FROM GOV EMULATOR: OpenID metadata document for tokens coming from MSA
42 """
43 TO_BOT_FROM_EMULATOR_OPEN_ID_METADATA_URL = (
44 "https://login.microsoftonline.us/"
45 "cab8a31a-1906-4287-a0d8-4eef66b95f6e/v2.0/"
46 ".well-known/openid-configuration"
47 )
48
[end of libraries/botframework-connector/botframework/connector/auth/government_constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libraries/botframework-connector/botframework/connector/auth/government_constants.py b/libraries/botframework-connector/botframework/connector/auth/government_constants.py
--- a/libraries/botframework-connector/botframework/connector/auth/government_constants.py
+++ b/libraries/botframework-connector/botframework/connector/auth/government_constants.py
@@ -15,9 +15,7 @@
TO CHANNEL FROM BOT: Login URL
"""
TO_CHANNEL_FROM_BOT_LOGIN_URL = (
- "https://login.microsoftonline.us/"
- "cab8a31a-1906-4287-a0d8-4eef66b95f6e/"
- "oauth2/v2.0/token"
+ "https://login.microsoftonline.us/MicrosoftServices.onmicrosoft.us"
)
"""
| {"golden_diff": "diff --git a/libraries/botframework-connector/botframework/connector/auth/government_constants.py b/libraries/botframework-connector/botframework/connector/auth/government_constants.py\n--- a/libraries/botframework-connector/botframework/connector/auth/government_constants.py\n+++ b/libraries/botframework-connector/botframework/connector/auth/government_constants.py\n@@ -15,9 +15,7 @@\n TO CHANNEL FROM BOT: Login URL\n \"\"\"\n TO_CHANNEL_FROM_BOT_LOGIN_URL = (\n- \"https://login.microsoftonline.us/\"\n- \"cab8a31a-1906-4287-a0d8-4eef66b95f6e/\"\n- \"oauth2/v2.0/token\"\n+ \"https://login.microsoftonline.us/MicrosoftServices.onmicrosoft.us\"\n )\n \n \"\"\"\n", "issue": "[PORT] [Authentication] updates to support Arlington\n> Port this change from botbuilder-dotnet/master branch:\nhttps://github.com/microsoft/botbuilder-dotnet/pull/3734\n\n\n\n\r\n# Changed projects\r\n* Microsoft.Bot.Connector\r\n* Microsoft.Bot.Connector.Tests\r\n\r\n[R9]\r\n\r\n\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\nfrom abc import ABC\n\n\nclass GovernmentConstants(ABC):\n\n \"\"\"\n Government Channel Service property value\n \"\"\"\n\n CHANNEL_SERVICE = \"https://botframework.azure.us\"\n\n \"\"\"\n TO CHANNEL FROM BOT: Login URL\n \"\"\"\n TO_CHANNEL_FROM_BOT_LOGIN_URL = (\n \"https://login.microsoftonline.us/\"\n \"cab8a31a-1906-4287-a0d8-4eef66b95f6e/\"\n \"oauth2/v2.0/token\"\n )\n\n \"\"\"\n TO CHANNEL FROM BOT: OAuth scope to request\n \"\"\"\n TO_CHANNEL_FROM_BOT_OAUTH_SCOPE = \"https://api.botframework.us/.default\"\n\n \"\"\"\n TO BOT FROM CHANNEL: Token issuer\n \"\"\"\n TO_BOT_FROM_CHANNEL_TOKEN_ISSUER = \"https://api.botframework.us\"\n\n \"\"\"\n TO BOT FROM CHANNEL: OpenID metadata document for tokens coming from MSA\n \"\"\"\n TO_BOT_FROM_CHANNEL_OPEN_ID_METADATA_URL = (\n \"https://login.botframework.azure.us/v1/.well-known/openidconfiguration\"\n )\n\n \"\"\"\n TO BOT FROM GOV EMULATOR: OpenID metadata document for tokens coming from MSA\n \"\"\"\n TO_BOT_FROM_EMULATOR_OPEN_ID_METADATA_URL = (\n \"https://login.microsoftonline.us/\"\n \"cab8a31a-1906-4287-a0d8-4eef66b95f6e/v2.0/\"\n \".well-known/openid-configuration\"\n )\n", "path": "libraries/botframework-connector/botframework/connector/auth/government_constants.py"}]} | 1,066 | 192 |
gh_patches_debug_272 | rasdani/github-patches | git_diff | cupy__cupy-1028 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cupy.copyto behaves differently from numpy.copyto when src is a python scalar
Code:
```python
import numpy
import cupy
def copyto_check(xp):
x = xp.zeros(3, dtype=numpy.float32)
# replace first and third items with 1.0
xp.copyto(x, 1.0, where=xp.asarray([True, False, True]))
print(x)
print('numpy', numpy.__version__)
copyto_check(numpy)
print('cupy', cupy.__version__)
copyto_check(cupy)
```
Output:
```
numpy 1.14.0
[1. 0. 1.]
cupy 2.2.0
[1. 1. 1.]
```
</issue>
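For reference, the expected behaviour can be checked with NumPy alone: when `src` is a Python scalar and `where` is given, `numpy.copyto` writes only the masked positions. The snippet below is illustrative and separate from the reproduction above.

```python
import numpy as np

x = np.zeros(3, dtype=np.float32)
mask = np.array([True, False, True])

# With a scalar source and a boolean mask, numpy.copyto overwrites
# only the positions where the mask is True.
np.copyto(x, 1.0, where=mask)
print(x)  # expected output: [1. 0. 1.]
```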
<code>
[start of cupy/manipulation/basic.py]
1 import numpy
2 import six
3
4 from cupy import core
5
6
7 def copyto(dst, src, casting='same_kind', where=None):
8 """Copies values from one array to another with broadcasting.
9
10 This function can be called for arrays on different devices. In this case,
11     casting, ``where``, and broadcasting are not supported, and an exception is
12 raised if these are used.
13
14 Args:
15 dst (cupy.ndarray): Target array.
16 src (cupy.ndarray): Source array.
17 casting (str): Casting rule. See :func:`numpy.can_cast` for detail.
18 where (cupy.ndarray of bool): If specified, this array acts as a mask,
19 and an element is copied only if the corresponding element of
20 ``where`` is True.
21
22 .. seealso:: :func:`numpy.copyto`
23
24 """
25
26 src_type = type(src)
27 src_is_python_scalar = (src_type in six.integer_types or
28 src_type in (bool, float, complex))
29 if src_is_python_scalar:
30 src_dtype = numpy.dtype(type(src))
31 can_cast = numpy.can_cast(src, dst.dtype, casting)
32 else:
33 src_dtype = src.dtype
34 can_cast = numpy.can_cast(src_dtype, dst.dtype, casting)
35
36 if not can_cast:
37 raise TypeError('Cannot cast %s to %s in %s casting mode' %
38 (src_dtype, dst.dtype, casting))
39 if dst.size == 0:
40 return
41
42 if src_is_python_scalar:
43 dst.fill(src)
44 return
45
46 if where is None:
47 if _can_memcpy(dst, src):
48 dst.data.copy_from(src.data, src.nbytes)
49 else:
50 device = dst.device
51 with device:
52 if src.device != device:
53 src = src.copy()
54 core.elementwise_copy(src, dst)
55 else:
56 core.elementwise_copy_where(src, where, dst)
57
58
59 def _can_memcpy(dst, src):
60 c_contiguous = dst.flags.c_contiguous and src.flags.c_contiguous
61 f_contiguous = dst.flags.f_contiguous and src.flags.f_contiguous
62 return (c_contiguous or f_contiguous) and dst.dtype == src.dtype and \
63 dst.size == src.size
64
[end of cupy/manipulation/basic.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cupy/manipulation/basic.py b/cupy/manipulation/basic.py
--- a/cupy/manipulation/basic.py
+++ b/cupy/manipulation/basic.py
@@ -39,7 +39,7 @@
if dst.size == 0:
return
- if src_is_python_scalar:
+ if src_is_python_scalar and where is None:
dst.fill(src)
return
| {"golden_diff": "diff --git a/cupy/manipulation/basic.py b/cupy/manipulation/basic.py\n--- a/cupy/manipulation/basic.py\n+++ b/cupy/manipulation/basic.py\n@@ -39,7 +39,7 @@\n if dst.size == 0:\n return\n \n- if src_is_python_scalar:\n+ if src_is_python_scalar and where is None:\n dst.fill(src)\n return\n", "issue": "cupy.copyto behaves differently from numpy.copyto when src is a python scalar\nCode:\r\n```python\r\nimport numpy\r\nimport cupy\r\n\r\ndef copyto_check(xp):\r\n x = xp.zeros(3, dtype=numpy.float32)\r\n # replace first and third items with 1.0\r\n xp.copyto(x, 1.0, where=xp.asarray([True, False, True]))\r\n print(x)\r\n\r\nprint('numpy', numpy.__version__)\r\ncopyto_check(numpy)\r\nprint('cupy', cupy.__version__)\r\ncopyto_check(cupy)\r\n```\r\nOutput:\r\n```\r\nnumpy 1.14.0\r\n[1. 0. 1.]\r\ncupy 2.2.0\r\n[1. 1. 1.]\r\n```\n", "before_files": [{"content": "import numpy\nimport six\n\nfrom cupy import core\n\n\ndef copyto(dst, src, casting='same_kind', where=None):\n \"\"\"Copies values from one array to another with broadcasting.\n\n This function can be called for arrays on different devices. In this case,\n casting, ``where``, and broadcasting is not supported, and an exception is\n raised if these are used.\n\n Args:\n dst (cupy.ndarray): Target array.\n src (cupy.ndarray): Source array.\n casting (str): Casting rule. See :func:`numpy.can_cast` for detail.\n where (cupy.ndarray of bool): If specified, this array acts as a mask,\n and an element is copied only if the corresponding element of\n ``where`` is True.\n\n .. seealso:: :func:`numpy.copyto`\n\n \"\"\"\n\n src_type = type(src)\n src_is_python_scalar = (src_type in six.integer_types or\n src_type in (bool, float, complex))\n if src_is_python_scalar:\n src_dtype = numpy.dtype(type(src))\n can_cast = numpy.can_cast(src, dst.dtype, casting)\n else:\n src_dtype = src.dtype\n can_cast = numpy.can_cast(src_dtype, dst.dtype, casting)\n\n if not can_cast:\n raise TypeError('Cannot cast %s to %s in %s casting mode' %\n (src_dtype, dst.dtype, casting))\n if dst.size == 0:\n return\n\n if src_is_python_scalar:\n dst.fill(src)\n return\n\n if where is None:\n if _can_memcpy(dst, src):\n dst.data.copy_from(src.data, src.nbytes)\n else:\n device = dst.device\n with device:\n if src.device != device:\n src = src.copy()\n core.elementwise_copy(src, dst)\n else:\n core.elementwise_copy_where(src, where, dst)\n\n\ndef _can_memcpy(dst, src):\n c_contiguous = dst.flags.c_contiguous and src.flags.c_contiguous\n f_contiguous = dst.flags.f_contiguous and src.flags.f_contiguous\n return (c_contiguous or f_contiguous) and dst.dtype == src.dtype and \\\n dst.size == src.size\n", "path": "cupy/manipulation/basic.py"}]} | 1,306 | 91 |
gh_patches_debug_28700 | rasdani/github-patches | git_diff | meltano__meltano-6552 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Feature]: Collect telemetry data about how `send_anonymous_usage_stats` was configured
The project context (and its schema) should be updated to include the key `send_anonymous_usage_stats_source` with the value `ProjectSettingService.get_with_metadata('send_anonymous_usage_stats')[1]['source'].value`, which can be one of the following strings:
- `auto`
- `config_override`
- `db`
- `default`
- `dotenv`
- `env`
- `inherited`
- `meltano_env`
- `meltano_yml`
CC @pnadolny13 @aaronsteers
</issue>
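A rough sketch of the lookup described above is given below. It assumes an existing meltano `Project` instance bound to `project` and simply mirrors the `get_with_metadata` call quoted in the issue; anything beyond that quote is illustrative.

```python
# Illustrative sketch only; `project` is assumed to be an existing
# meltano Project instance.
from meltano.core.project_settings_service import ProjectSettingsService

settings = ProjectSettingsService(project)
value, metadata = settings.get_with_metadata("send_anonymous_usage_stats")

# Expected to be one of: auto, config_override, db, default, dotenv,
# env, inherited, meltano_env, meltano_yml
source_name = metadata["source"].value
```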
<code>
[start of src/meltano/core/tracking/contexts/project.py]
1 """Project context for the Snowplow tracker."""
2
3 from __future__ import annotations
4
5 import uuid
6 from enum import Enum, auto
7
8 from cached_property import cached_property
9 from snowplow_tracker import SelfDescribingJson
10 from structlog.stdlib import get_logger
11
12 from meltano.core.project import Project
13 from meltano.core.project_settings_service import ProjectSettingsService
14 from meltano.core.tracking.schemas import ProjectContextSchema
15 from meltano.core.utils import hash_sha256
16
17 logger = get_logger(__name__)
18
19
20 class ProjectUUIDSource(Enum):
21 """The source of the `project_uuid` used for telemetry."""
22
23 # The UUID was explicitly provided in the config as the `project_id`.
24 explicit = auto()
25
26 # The UUID was derived by hashing the `project_id` in the config.
27 derived = auto()
28
29 # The UUID was randomly generated (UUID v4) since no `project_id` was configured.
30 random = auto()
31
32
33 class ProjectContext(SelfDescribingJson):
34 """Tracking context for the Meltano project."""
35
36 def __init__(self, project: Project, client_id: uuid.UUID):
37 """Initialize a meltano tracking "project" context.
38
39 Args:
40 project: The Meltano project.
41 client_id: The client ID from `analytics.json`.
42 """
43 self.project = project
44 self.settings_service = ProjectSettingsService(project)
45 self.send_anonymous_usage_stats = self.settings_service.get(
46 "send_anonymous_usage_stats", True
47 )
48
49 super().__init__(
50 ProjectContextSchema.url,
51 {
52 "context_uuid": str(uuid.uuid4()),
53 "project_uuid": str(self.project_uuid),
54 "project_uuid_source": self.project_uuid_source.name,
55 "client_uuid": str(client_id),
56 "environment_name_hash": (
57 hash_sha256(self.project.active_environment.name)
58 if self.project.active_environment
59 else None
60 ),
61 },
62 )
63
64 @property
65 def project_uuid_source(self) -> ProjectUUIDSource:
66 """Obtain the source of the `project_uuid` used for telemetry.
67
68 Returns:
69 ProjectUUIDSource: The source of the `project_uuid` used for telemetry.
70 """
71 # Ensure the `project_uuid` has been generated
72 self.project_uuid # noqa: WPS428
73 return self._project_uuid_source
74
75 @cached_property
76 def project_uuid(self) -> uuid.UUID:
77 """Obtain the `project_id` from the project config file.
78
79         If it is not found (e.g. first time run), generate a valid v4 UUID, and store it in the
80 project config file.
81
82 Returns:
83 The project UUID.
84 """
85 project_id_str = self.settings_service.get("project_id")
86
87 if project_id_str:
88 try:
89 # Project ID might already be a UUID
90 project_id = uuid.UUID(project_id_str)
91 except ValueError:
92 # If the project ID is not a UUID, then we hash it, and use the hash to make a UUID
93 project_id = uuid.UUID(hash_sha256(project_id_str)[::2])
94 self._project_uuid_source = ProjectUUIDSource.derived
95 else:
96 self._project_uuid_source = ProjectUUIDSource.explicit
97 else:
98 project_id = uuid.uuid4()
99 self._project_uuid_source = ProjectUUIDSource.random
100
101 return project_id
102
[end of src/meltano/core/tracking/contexts/project.py]
[start of src/meltano/core/tracking/schemas.py]
1 """Meltano Iglu schemas metadata & utilities."""
2
3 from __future__ import annotations
4
5 from dataclasses import dataclass
6
7 DEFAULT_VENDOR = "com.meltano"
8
9
10 @dataclass
11 class IgluSchema:
12 """Dataclass to store the name, version, vendor, and URL for an Iglu schema."""
13
14 name: str
15 version: str
16 vendor: str = DEFAULT_VENDOR
17
18 @property
19 def url(self) -> str:
20 """Construct an iglu schema URL.
21
22 Returns:
23 The URL to the schema.
24 """
25 return f"iglu:{self.vendor}/{self.name}/jsonschema/{self.version}"
26
27
28 CliContextSchema = IgluSchema("cli_context", "1-1-0")
29 CliEventSchema = IgluSchema("cli_event", "1-0-1")
30 BlockEventSchema = IgluSchema("block_event", "1-0-0")
31 EnvironmentContextSchema = IgluSchema("environment_context", "1-0-0")
32 ExceptionContextSchema = IgluSchema("exception_context", "1-0-0")
33 ExitEventSchema = IgluSchema("exit_event", "1-0-0")
34 PluginsContextSchema = IgluSchema("plugins_context", "1-0-0")
35 ProjectContextSchema = IgluSchema("project_context", "1-0-0")
36 TelemetryStateChangeEventSchema = IgluSchema("telemetry_state_change_event", "1-0-0")
37
[end of src/meltano/core/tracking/schemas.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/meltano/core/tracking/contexts/project.py b/src/meltano/core/tracking/contexts/project.py
--- a/src/meltano/core/tracking/contexts/project.py
+++ b/src/meltano/core/tracking/contexts/project.py
@@ -42,9 +42,10 @@
"""
self.project = project
self.settings_service = ProjectSettingsService(project)
- self.send_anonymous_usage_stats = self.settings_service.get(
- "send_anonymous_usage_stats", True
- )
+ (
+ send_anonymous_usage_stats,
+ send_anonymous_usage_stats_metadata,
+ ) = self.settings_service.get_with_metadata("send_anonymous_usage_stats")
super().__init__(
ProjectContextSchema.url,
@@ -58,6 +59,10 @@
if self.project.active_environment
else None
),
+ "send_anonymous_usage_stats": send_anonymous_usage_stats,
+ "send_anonymous_usage_stats_source": (
+ send_anonymous_usage_stats_metadata["source"].value
+ ),
},
)
diff --git a/src/meltano/core/tracking/schemas.py b/src/meltano/core/tracking/schemas.py
--- a/src/meltano/core/tracking/schemas.py
+++ b/src/meltano/core/tracking/schemas.py
@@ -32,5 +32,5 @@
ExceptionContextSchema = IgluSchema("exception_context", "1-0-0")
ExitEventSchema = IgluSchema("exit_event", "1-0-0")
PluginsContextSchema = IgluSchema("plugins_context", "1-0-0")
-ProjectContextSchema = IgluSchema("project_context", "1-0-0")
+ProjectContextSchema = IgluSchema("project_context", "1-1-0")
TelemetryStateChangeEventSchema = IgluSchema("telemetry_state_change_event", "1-0-0")
| {"golden_diff": "diff --git a/src/meltano/core/tracking/contexts/project.py b/src/meltano/core/tracking/contexts/project.py\n--- a/src/meltano/core/tracking/contexts/project.py\n+++ b/src/meltano/core/tracking/contexts/project.py\n@@ -42,9 +42,10 @@\n \"\"\"\n self.project = project\n self.settings_service = ProjectSettingsService(project)\n- self.send_anonymous_usage_stats = self.settings_service.get(\n- \"send_anonymous_usage_stats\", True\n- )\n+ (\n+ send_anonymous_usage_stats,\n+ send_anonymous_usage_stats_metadata,\n+ ) = self.settings_service.get_with_metadata(\"send_anonymous_usage_stats\")\n \n super().__init__(\n ProjectContextSchema.url,\n@@ -58,6 +59,10 @@\n if self.project.active_environment\n else None\n ),\n+ \"send_anonymous_usage_stats\": send_anonymous_usage_stats,\n+ \"send_anonymous_usage_stats_source\": (\n+ send_anonymous_usage_stats_metadata[\"source\"].value\n+ ),\n },\n )\n \ndiff --git a/src/meltano/core/tracking/schemas.py b/src/meltano/core/tracking/schemas.py\n--- a/src/meltano/core/tracking/schemas.py\n+++ b/src/meltano/core/tracking/schemas.py\n@@ -32,5 +32,5 @@\n ExceptionContextSchema = IgluSchema(\"exception_context\", \"1-0-0\")\n ExitEventSchema = IgluSchema(\"exit_event\", \"1-0-0\")\n PluginsContextSchema = IgluSchema(\"plugins_context\", \"1-0-0\")\n-ProjectContextSchema = IgluSchema(\"project_context\", \"1-0-0\")\n+ProjectContextSchema = IgluSchema(\"project_context\", \"1-1-0\")\n TelemetryStateChangeEventSchema = IgluSchema(\"telemetry_state_change_event\", \"1-0-0\")\n", "issue": "[Feature]: Collect telemetry data about how `send_anonymous_usage_stats` was configured\nThe project context (and its schema) should be updated to include the key `send_anonymous_usage_stats_source` with the value `ProjectSettingService.get_with_metadata('send_anonymous_usage_stats')[1]['source'].value`, which can be one of the following strings:\r\n- `auto`\r\n- `config_override`\r\n- `db`\r\n- `default`\r\n- `dotenv`\r\n- `env`\r\n- `inherited`\r\n- `meltano_env`\r\n- `meltano_yml`\r\n\r\nCC @pnadolny13 @aaronsteers \n", "before_files": [{"content": "\"\"\"Project context for the Snowplow tracker.\"\"\"\n\nfrom __future__ import annotations\n\nimport uuid\nfrom enum import Enum, auto\n\nfrom cached_property import cached_property\nfrom snowplow_tracker import SelfDescribingJson\nfrom structlog.stdlib import get_logger\n\nfrom meltano.core.project import Project\nfrom meltano.core.project_settings_service import ProjectSettingsService\nfrom meltano.core.tracking.schemas import ProjectContextSchema\nfrom meltano.core.utils import hash_sha256\n\nlogger = get_logger(__name__)\n\n\nclass ProjectUUIDSource(Enum):\n \"\"\"The source of the `project_uuid` used for telemetry.\"\"\"\n\n # The UUID was explicitly provided in the config as the `project_id`.\n explicit = auto()\n\n # The UUID was derived by hashing the `project_id` in the config.\n derived = auto()\n\n # The UUID was randomly generated (UUID v4) since no `project_id` was configured.\n random = auto()\n\n\nclass ProjectContext(SelfDescribingJson):\n \"\"\"Tracking context for the Meltano project.\"\"\"\n\n def __init__(self, project: Project, client_id: uuid.UUID):\n \"\"\"Initialize a meltano tracking \"project\" context.\n\n Args:\n project: The Meltano project.\n client_id: The client ID from `analytics.json`.\n \"\"\"\n self.project = project\n self.settings_service = ProjectSettingsService(project)\n self.send_anonymous_usage_stats = self.settings_service.get(\n \"send_anonymous_usage_stats\", True\n )\n\n 
super().__init__(\n ProjectContextSchema.url,\n {\n \"context_uuid\": str(uuid.uuid4()),\n \"project_uuid\": str(self.project_uuid),\n \"project_uuid_source\": self.project_uuid_source.name,\n \"client_uuid\": str(client_id),\n \"environment_name_hash\": (\n hash_sha256(self.project.active_environment.name)\n if self.project.active_environment\n else None\n ),\n },\n )\n\n @property\n def project_uuid_source(self) -> ProjectUUIDSource:\n \"\"\"Obtain the source of the `project_uuid` used for telemetry.\n\n Returns:\n ProjectUUIDSource: The source of the `project_uuid` used for telemetry.\n \"\"\"\n # Ensure the `project_uuid` has been generated\n self.project_uuid # noqa: WPS428\n return self._project_uuid_source\n\n @cached_property\n def project_uuid(self) -> uuid.UUID:\n \"\"\"Obtain the `project_id` from the project config file.\n\n If it is not found (e.g. first time run), generate a valid v4 UUID, and and store it in the\n project config file.\n\n Returns:\n The project UUID.\n \"\"\"\n project_id_str = self.settings_service.get(\"project_id\")\n\n if project_id_str:\n try:\n # Project ID might already be a UUID\n project_id = uuid.UUID(project_id_str)\n except ValueError:\n # If the project ID is not a UUID, then we hash it, and use the hash to make a UUID\n project_id = uuid.UUID(hash_sha256(project_id_str)[::2])\n self._project_uuid_source = ProjectUUIDSource.derived\n else:\n self._project_uuid_source = ProjectUUIDSource.explicit\n else:\n project_id = uuid.uuid4()\n self._project_uuid_source = ProjectUUIDSource.random\n\n return project_id\n", "path": "src/meltano/core/tracking/contexts/project.py"}, {"content": "\"\"\"Meltano Iglu schemas metadata & utilities.\"\"\"\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\n\nDEFAULT_VENDOR = \"com.meltano\"\n\n\n@dataclass\nclass IgluSchema:\n \"\"\"Dataclass to store the name, version, vendor, and URL for an Iglu schema.\"\"\"\n\n name: str\n version: str\n vendor: str = DEFAULT_VENDOR\n\n @property\n def url(self) -> str:\n \"\"\"Construct an iglu schema URL.\n\n Returns:\n The URL to the schema.\n \"\"\"\n return f\"iglu:{self.vendor}/{self.name}/jsonschema/{self.version}\"\n\n\nCliContextSchema = IgluSchema(\"cli_context\", \"1-1-0\")\nCliEventSchema = IgluSchema(\"cli_event\", \"1-0-1\")\nBlockEventSchema = IgluSchema(\"block_event\", \"1-0-0\")\nEnvironmentContextSchema = IgluSchema(\"environment_context\", \"1-0-0\")\nExceptionContextSchema = IgluSchema(\"exception_context\", \"1-0-0\")\nExitEventSchema = IgluSchema(\"exit_event\", \"1-0-0\")\nPluginsContextSchema = IgluSchema(\"plugins_context\", \"1-0-0\")\nProjectContextSchema = IgluSchema(\"project_context\", \"1-0-0\")\nTelemetryStateChangeEventSchema = IgluSchema(\"telemetry_state_change_event\", \"1-0-0\")\n", "path": "src/meltano/core/tracking/schemas.py"}]} | 2,021 | 422 |
gh_patches_debug_26978 | rasdani/github-patches | git_diff | PrefectHQ__prefect-2727 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Write More Idioms
We should write some more idioms:
- [x] how to define conditional logic using the [new conditional api](https://github.com/PrefectHQ/prefect/pull/2443) and the "old" way
- [x] how to use `target`s (0.11.0+)
- [x] how to configure notifications (three options: write a downstream task, state handler, cloud hook)
</issue>
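For the first item above, the conditional API can be exercised roughly as in the sketch below. It follows the usage shown in the docstrings of the module listed next and assumes Prefect's 0.x/1.x task and flow API; it is a sketch, not finished documentation.

```python
# Sketch following the docstring examples in the module below
# (Prefect 0.x/1.x style tasks and flows).
from prefect import Flow, task
from prefect.tasks.control_flow import ifelse, merge

@task
def check_condition():
    return True

@task
def action_if_true():
    return "true branch"

@task
def action_if_false():
    return "false branch"

with Flow("conditional-idiom") as flow:
    true_branch = action_if_true()
    false_branch = action_if_false()
    ifelse(check_condition(), true_branch, false_branch)

    # Merge the branches back together; only one will carry a result.
    result = merge(true_branch, false_branch)
```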
<code>
[start of src/prefect/tasks/control_flow/conditional.py]
1 from typing import Any, Dict
2
3 import prefect
4 from prefect import Task
5 from prefect.engine import signals
6
7 __all__ = ["switch", "ifelse"]
8
9
10 class Merge(Task):
11 def __init__(self, **kwargs) -> None:
12 if kwargs.setdefault("skip_on_upstream_skip", False):
13 raise ValueError("Merge tasks must have `skip_on_upstream_skip=False`.")
14 kwargs.setdefault("trigger", prefect.triggers.not_all_skipped)
15 super().__init__(**kwargs)
16
17 def run(self, **task_results: Any) -> Any:
18 return next(
19 (v for k, v in sorted(task_results.items()) if v is not None), None,
20 )
21
22
23 class CompareValue(Task):
24 """
25 This task stores a `value` at initialization and compares it to a `value` received at runtime.
26 If the values don't match, it raises a SKIP exception.
27
28 Args:
29 - value (Any): the value this task will attempt to match when it runs
30 - **kwargs: keyword arguments for the Task
31 """
32
33 def __init__(self, value: Any, **kwargs: Any):
34 self.value = value
35 kwargs.setdefault("name", 'CompareValue: "{}"'.format(value))
36 super().__init__(**kwargs)
37
38 def run(self, value: Any) -> None:
39 """
40 Raises a SKIP signal if the passed value does not match the task's match value;
41 succeeds silently otherwise.
42
43 Args:
44 - value (Any): the value that will be matched against the task's value.
45 """
46 if value != self.value:
47 raise signals.SKIP(
48 'Provided value "{}" did not match "{}"'.format(value, self.value)
49 )
50
51
52 def switch(condition: Task, cases: Dict[Any, Task]) -> None:
53 """
54 Adds a SWITCH to a workflow.
55
56 The condition task is evaluated and the result is compared to the keys of the cases
57 dictionary. The task corresponding to the matching key is run; all other tasks are
58 skipped. Any tasks downstream of the skipped tasks are also skipped unless they set
59 `skip_on_upstream_skip=False`.
60
61 Example:
62 ```python
63 @task
64 def condition():
65 return "b" # returning 'b' will take the b_branch
66
67 @task
68 def a_branch():
69 return "A Branch"
70
71 @task
72 def b_branch():
73 return "B Branch"
74
75 with Flow("switch-flow") as flow:
76 switch(condition, dict(a=a_branch, b=b_branch))
77 ```
78
79 Args:
80 - condition (Task): a task whose result forms the condition for the switch
81 - cases (Dict[Any, Task]): a dict representing the "case" statements of the switch.
82 The value of the `condition` task will be compared to the keys of this dict, and
83 the matching task will be executed.
84
85 Raises:
86 - PrefectWarning: if any of the tasks in "cases" have upstream dependencies,
87 then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. The most common cause of this
88 is passing a list of tasks as one of the cases, which adds the `List` task
89 to the switch condition but leaves the tasks themselves upstream.
90 """
91
92 with prefect.tags("switch"):
93 for value, task in cases.items():
94 task = prefect.utilities.tasks.as_task(task)
95 match_condition = CompareValue(value=value).bind(value=condition)
96 task.set_dependencies(upstream_tasks=[match_condition])
97
98
99 def ifelse(condition: Task, true_task: Task, false_task: Task) -> None:
100 """
101 Builds a conditional branch into a workflow.
102
103 If the condition evaluates True(ish), the true_task will run. If it
104 evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are
105 all downstream tasks that don't set `skip_on_upstream_skip=False`.
106
107 Args:
108 - condition (Task): a task whose boolean result forms the condition for the ifelse
109 - true_task (Task): a task that will be executed if the condition is True
110 - false_task (Task): a task that will be executed if the condition is False
111 """
112
113 @prefect.task
114 def as_bool(x):
115 return bool(x)
116
117 cases = {c: t for c, t in [(True, true_task), (False, false_task)] if t is not None}
118 if cases:
119 switch(condition=as_bool(condition), cases=cases)
120
121
122 def merge(*tasks: Task) -> Task:
123 """
124 Merges conditional branches back together.
125
126 A conditional branch in a flow results in one or more tasks proceeding and one or
127 more tasks skipping. It is often convenient to merge those branches back into a
128 single result. This function is a simple way to achieve that goal. By default this
129 task will skip if all its upstream dependencies are also skipped.
130
131 The merge will return the first real result it encounters, or `None`. If multiple
132 tasks might return a result, group them with a list.
133
134 Example:
135 ```python
136 with Flow("My Flow"):
137 true_branch = ActionIfTrue()
138 false_branch = ActionIfFalse()
139 ifelse(CheckCondition(), true_branch, false_branch)
140
141 merged_result = merge(true_branch, false_branch)
142 ```
143
144 Args:
145 - *tasks (Task): tasks whose results should be merged into a single result. The tasks are
146 assumed to all sit downstream of different `switch` branches, such that only
147 one of them will contain a result and the others will all be skipped.
148
149 Returns:
150 - Task: a Task representing the merged result.
151
152 """
153 return Merge().bind(**{"task_{}".format(i + 1): t for i, t in enumerate(tasks)})
154
[end of src/prefect/tasks/control_flow/conditional.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/prefect/tasks/control_flow/conditional.py b/src/prefect/tasks/control_flow/conditional.py
--- a/src/prefect/tasks/control_flow/conditional.py
+++ b/src/prefect/tasks/control_flow/conditional.py
@@ -4,7 +4,7 @@
from prefect import Task
from prefect.engine import signals
-__all__ = ["switch", "ifelse"]
+__all__ = ["switch", "ifelse", "merge"]
class Merge(Task):
@@ -119,7 +119,7 @@
switch(condition=as_bool(condition), cases=cases)
-def merge(*tasks: Task) -> Task:
+def merge(*tasks: Task, flow=None) -> Task:
"""
Merges conditional branches back together.
@@ -145,9 +145,13 @@
- *tasks (Task): tasks whose results should be merged into a single result. The tasks are
assumed to all sit downstream of different `switch` branches, such that only
one of them will contain a result and the others will all be skipped.
+ - flow (Flow, optional): The flow to use, defaults to the current flow
+ in context if no flow is specified
Returns:
- Task: a Task representing the merged result.
"""
- return Merge().bind(**{"task_{}".format(i + 1): t for i, t in enumerate(tasks)})
+ return Merge().bind(
+ **{"task_{}".format(i + 1): t for i, t in enumerate(tasks)}, flow=flow
+ )
| {"golden_diff": "diff --git a/src/prefect/tasks/control_flow/conditional.py b/src/prefect/tasks/control_flow/conditional.py\n--- a/src/prefect/tasks/control_flow/conditional.py\n+++ b/src/prefect/tasks/control_flow/conditional.py\n@@ -4,7 +4,7 @@\n from prefect import Task\n from prefect.engine import signals\n \n-__all__ = [\"switch\", \"ifelse\"]\n+__all__ = [\"switch\", \"ifelse\", \"merge\"]\n \n \n class Merge(Task):\n@@ -119,7 +119,7 @@\n switch(condition=as_bool(condition), cases=cases)\n \n \n-def merge(*tasks: Task) -> Task:\n+def merge(*tasks: Task, flow=None) -> Task:\n \"\"\"\n Merges conditional branches back together.\n \n@@ -145,9 +145,13 @@\n - *tasks (Task): tasks whose results should be merged into a single result. The tasks are\n assumed to all sit downstream of different `switch` branches, such that only\n one of them will contain a result and the others will all be skipped.\n+ - flow (Flow, optional): The flow to use, defaults to the current flow\n+ in context if no flow is specified\n \n Returns:\n - Task: a Task representing the merged result.\n \n \"\"\"\n- return Merge().bind(**{\"task_{}\".format(i + 1): t for i, t in enumerate(tasks)})\n+ return Merge().bind(\n+ **{\"task_{}\".format(i + 1): t for i, t in enumerate(tasks)}, flow=flow\n+ )\n", "issue": "Write More Idioms\nWe should write some more idioms:\r\n\r\n- [x] how to define conditional logic using the [new conditional api](https://github.com/PrefectHQ/prefect/pull/2443) and the \"old\" way\r\n- [x] how to use `target`s (0.11.0+)\r\n- [x] how to configure notifications (three options: write a downstream task, state handler, cloud hook)\n", "before_files": [{"content": "from typing import Any, Dict\n\nimport prefect\nfrom prefect import Task\nfrom prefect.engine import signals\n\n__all__ = [\"switch\", \"ifelse\"]\n\n\nclass Merge(Task):\n def __init__(self, **kwargs) -> None:\n if kwargs.setdefault(\"skip_on_upstream_skip\", False):\n raise ValueError(\"Merge tasks must have `skip_on_upstream_skip=False`.\")\n kwargs.setdefault(\"trigger\", prefect.triggers.not_all_skipped)\n super().__init__(**kwargs)\n\n def run(self, **task_results: Any) -> Any:\n return next(\n (v for k, v in sorted(task_results.items()) if v is not None), None,\n )\n\n\nclass CompareValue(Task):\n \"\"\"\n This task stores a `value` at initialization and compares it to a `value` received at runtime.\n If the values don't match, it raises a SKIP exception.\n\n Args:\n - value (Any): the value this task will attempt to match when it runs\n - **kwargs: keyword arguments for the Task\n \"\"\"\n\n def __init__(self, value: Any, **kwargs: Any):\n self.value = value\n kwargs.setdefault(\"name\", 'CompareValue: \"{}\"'.format(value))\n super().__init__(**kwargs)\n\n def run(self, value: Any) -> None:\n \"\"\"\n Raises a SKIP signal if the passed value does not match the task's match value;\n succeeds silently otherwise.\n\n Args:\n - value (Any): the value that will be matched against the task's value.\n \"\"\"\n if value != self.value:\n raise signals.SKIP(\n 'Provided value \"{}\" did not match \"{}\"'.format(value, self.value)\n )\n\n\ndef switch(condition: Task, cases: Dict[Any, Task]) -> None:\n \"\"\"\n Adds a SWITCH to a workflow.\n\n The condition task is evaluated and the result is compared to the keys of the cases\n dictionary. The task corresponding to the matching key is run; all other tasks are\n skipped. 
Any tasks downstream of the skipped tasks are also skipped unless they set\n `skip_on_upstream_skip=False`.\n\n Example:\n ```python\n @task\n def condition():\n return \"b\" # returning 'b' will take the b_branch\n\n @task\n def a_branch():\n return \"A Branch\"\n\n @task\n def b_branch():\n return \"B Branch\"\n\n with Flow(\"switch-flow\") as flow:\n switch(condition, dict(a=a_branch, b=b_branch))\n ```\n\n Args:\n - condition (Task): a task whose result forms the condition for the switch\n - cases (Dict[Any, Task]): a dict representing the \"case\" statements of the switch.\n The value of the `condition` task will be compared to the keys of this dict, and\n the matching task will be executed.\n\n Raises:\n - PrefectWarning: if any of the tasks in \"cases\" have upstream dependencies,\n then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. The most common cause of this\n is passing a list of tasks as one of the cases, which adds the `List` task\n to the switch condition but leaves the tasks themselves upstream.\n \"\"\"\n\n with prefect.tags(\"switch\"):\n for value, task in cases.items():\n task = prefect.utilities.tasks.as_task(task)\n match_condition = CompareValue(value=value).bind(value=condition)\n task.set_dependencies(upstream_tasks=[match_condition])\n\n\ndef ifelse(condition: Task, true_task: Task, false_task: Task) -> None:\n \"\"\"\n Builds a conditional branch into a workflow.\n\n If the condition evaluates True(ish), the true_task will run. If it\n evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are\n all downstream tasks that don't set `skip_on_upstream_skip=False`.\n\n Args:\n - condition (Task): a task whose boolean result forms the condition for the ifelse\n - true_task (Task): a task that will be executed if the condition is True\n - false_task (Task): a task that will be executed if the condition is False\n \"\"\"\n\n @prefect.task\n def as_bool(x):\n return bool(x)\n\n cases = {c: t for c, t in [(True, true_task), (False, false_task)] if t is not None}\n if cases:\n switch(condition=as_bool(condition), cases=cases)\n\n\ndef merge(*tasks: Task) -> Task:\n \"\"\"\n Merges conditional branches back together.\n\n A conditional branch in a flow results in one or more tasks proceeding and one or\n more tasks skipping. It is often convenient to merge those branches back into a\n single result. This function is a simple way to achieve that goal. By default this\n task will skip if all its upstream dependencies are also skipped.\n\n The merge will return the first real result it encounters, or `None`. If multiple\n tasks might return a result, group them with a list.\n\n Example:\n ```python\n with Flow(\"My Flow\"):\n true_branch = ActionIfTrue()\n false_branch = ActionIfFalse()\n ifelse(CheckCondition(), true_branch, false_branch)\n\n merged_result = merge(true_branch, false_branch)\n ```\n\n Args:\n - *tasks (Task): tasks whose results should be merged into a single result. The tasks are\n assumed to all sit downstream of different `switch` branches, such that only\n one of them will contain a result and the others will all be skipped.\n\n Returns:\n - Task: a Task representing the merged result.\n\n \"\"\"\n return Merge().bind(**{\"task_{}\".format(i + 1): t for i, t in enumerate(tasks)})\n", "path": "src/prefect/tasks/control_flow/conditional.py"}]} | 2,262 | 347 |
gh_patches_debug_8710 | rasdani/github-patches | git_diff | ietf-tools__datatracker-4911 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrapping issues in ballot modal on narrow screens
### Describe the issue
<img width="582" alt="Screenshot 2022-12-15 at 18 02 42" src="https://user-images.githubusercontent.com/200328/207908976-51568fb5-a3b4-4ccc-8026-8065d13da38d.png">
### Code of Conduct
- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)
</issue>
<code>
[start of ietf/doc/templatetags/ballot_icon.py]
1 # Copyright The IETF Trust 2012-2021, All Rights Reserved
2 # Copyright (C) 2009-2010 Nokia Corporation and/or its subsidiary(-ies).
3 # All rights reserved. Contact: Pasi Eronen <[email protected]>
4 #
5 # Redistribution and use in source and binary forms, with or without
6 # modification, are permitted provided that the following conditions
7 # are met:
8 #
9 # * Redistributions of source code must retain the above copyright
10 # notice, this list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above
13 # copyright notice, this list of conditions and the following
14 # disclaimer in the documentation and/or other materials provided
15 # with the distribution.
16 #
17 # * Neither the name of the Nokia Corporation and/or its
18 # subsidiary(-ies) nor the names of its contributors may be used
19 # to endorse or promote products derived from this software
20 # without specific prior written permission.
21 #
22 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
23 # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
24 # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
25 # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
26 # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
27 # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
28 # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
29 # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
30 # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
31 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
32 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
33
34 import datetime
35
36 import debug # pyflakes:ignore
37
38 from django import template
39 from django.urls import reverse as urlreverse
40 from django.db.models import Q
41 from django.utils import timezone
42 from django.utils.safestring import mark_safe
43
44 from ietf.ietfauth.utils import user_is_person, has_role
45 from ietf.doc.models import BallotPositionDocEvent, IESG_BALLOT_ACTIVE_STATES
46 from ietf.name.models import BallotPositionName
47
48
49 register = template.Library()
50
51 @register.filter
52 def showballoticon(doc):
53 if doc.type_id == "draft":
54 if doc.stream_id == 'ietf' and doc.get_state_slug("draft-iesg") not in IESG_BALLOT_ACTIVE_STATES:
55 return False
56 elif doc.stream_id == 'irtf' and doc.get_state_slug("draft-stream-irtf") not in ['irsgpoll']:
57 return False
58 elif doc.type_id == "charter":
59 if doc.get_state_slug() not in ("intrev", "extrev", "iesgrev"):
60 return False
61 elif doc.type_id == "conflrev":
62 if doc.get_state_slug() not in ("iesgeval","defer"):
63 return False
64 elif doc.type_id == "statchg":
65 if doc.get_state_slug() not in ("iesgeval","defer", "in-lc"):
66 return False
67
68 return True
69
70 @register.simple_tag(takes_context=True)
71 def ballot_icon(context, doc):
72 user = context.get("user")
73
74 if not doc:
75 return ""
76
77 if not showballoticon(doc):
78 return ""
79
80 ballot = doc.ballot if hasattr(doc, 'ballot') else doc.active_ballot()
81
82 if not ballot:
83 return ""
84
85 def sort_key(t):
86 _, pos = t
87 if not pos:
88 return (2, 0)
89 elif pos.pos.blocking:
90 return (0, pos.pos.order)
91 else:
92 return (1, pos.pos.order)
93
94 positions = list(ballot.active_balloter_positions().items())
95 positions.sort(key=sort_key)
96
97 right_click_string = ''
98 if has_role(user, "Area Director"):
99 right_click_string = 'oncontextmenu="window.location.href=\'%s\';return false;"' % urlreverse('ietf.doc.views_ballot.edit_position', kwargs=dict(name=doc.name, ballot_id=ballot.pk))
100
101 my_blocking = False
102 for i, (balloter, pos) in enumerate(positions):
103 if user_is_person(user,balloter) and pos and pos.pos.blocking:
104 my_blocking = True
105 break
106
107 typename = "Unknown"
108 if ballot.ballot_type.slug=='irsg-approve':
109 typename = "IRSG"
110 else:
111 typename = "IESG"
112
113 res = ['<a %s href="%s" data-bs-toggle="modal" data-bs-target="#modal-%d" aria-label="%s positions" title="%s positions (click to show more)" class="ballot-icon"><table' % (
114 right_click_string,
115 urlreverse("ietf.doc.views_doc.ballot_popup", kwargs=dict(name=doc.name, ballot_id=ballot.pk)),
116 ballot.pk,
117 typename,
118 typename,)]
119 if my_blocking:
120 res.append(' class="is-blocking" ')
121 res.append('><tbody>')
122
123 res.append("<tr>")
124
125 for i, (ad, pos) in enumerate(positions):
126 # The IRSG has many more members than the IESG, so make the table wider
127 if i > 0 and i % (5 if len(positions) <= 15 else 10) == 0:
128 res.append("</tr><tr>")
129
130 c = "position-%s" % (pos.pos.slug if pos else "norecord")
131
132 if user_is_person(user, ad):
133 c += " my"
134
135 res.append('<td class="%s"></td>' % c)
136
137 # add sufficient table calls to last row to avoid HTML validation warning
138 while (i + 1) % 5 != 0:
139 res.append('<td class="position-empty"></td>')
140 i = i + 1
141
142 res.append("</tr></tbody></table></a>")
143 res.append('<div id="modal-%d" class="modal fade" tabindex="-1" role="dialog" aria-hidden="true"><div class="modal-dialog modal-dialog-scrollable modal-xl"><div class="modal-content"></div></div></div>' % ballot.pk)
144
145 return mark_safe("".join(res))
146
147 @register.filter
148 def ballotposition(doc, user):
149 if not showballoticon(doc) or not has_role(user, "Area Director"):
150 return None
151
152 ballot = doc.active_ballot()
153 if not ballot:
154 return None
155
156 changed_pos = doc.latest_event(BallotPositionDocEvent, type="changed_ballot_position", balloter__user=user, ballot=ballot)
157 if changed_pos:
158 pos = changed_pos.pos
159 else:
160 pos = BallotPositionName.objects.get(slug="norecord")
161 return pos
162
163
164 @register.filter
165 def state_age_colored(doc):
166 if doc.type_id == "draft":
167 if not doc.get_state_slug() in ["active", "rfc"]:
168 # Don't show anything for expired/withdrawn/replaced drafts
169 return ""
170 iesg_state = doc.get_state_slug("draft-iesg")
171 if not iesg_state:
172 return ""
173
174 if iesg_state in ["dead", "watching", "pub", "idexists"]:
175 return ""
176 try:
177 state_datetime = (
178 doc.docevent_set.filter(
179 Q(type="started_iesg_process")
180 | Q(type="changed_state", statedocevent__state_type="draft-iesg")
181 )
182 .order_by("-time")[0]
183 .time
184 )
185 except IndexError:
186 state_datetime = datetime.datetime(1990, 1, 1, tzinfo=datetime.timezone.utc)
187 days = (timezone.now() - state_datetime).days
188 # loosely based on
189 # https://trac.ietf.org/trac/iesg/wiki/PublishPath
190 if iesg_state == "lc":
191 goal1 = 30
192 goal2 = 30
193 elif iesg_state == "rfcqueue":
194 goal1 = 60
195 goal2 = 120
196 elif iesg_state in ["lc-req", "ann"]:
197 goal1 = 4
198 goal2 = 7
199 elif "need-rev" in [x.slug for x in doc.tags.all()]:
200 goal1 = 14
201 goal2 = 28
202 elif iesg_state == "pub-req":
203 goal1 = 7
204 goal2 = 14
205 elif iesg_state == "ad-eval":
206 goal1 = 14
207 goal2 = 28
208 else:
209 goal1 = 14
210 goal2 = 28
211 if days > goal2:
212 class_name = "bg-danger"
213 elif days > goal1:
214 class_name = "bg-warning"
215 else:
216 # don't show a badge when things are in the green; clutters display
217 # class_name = "text-success"
218 return ""
219 if days > goal1:
220 title = ' title="In state for %d day%s; goal is <%d days."' % (
221 days,
222 "s" if days != 1 else "",
223 goal1,
224 )
225 else:
226 title = ""
227 return mark_safe(
228 '<span class="badge rounded-pill %s" %s><i class="bi bi-clock-fill"></i> %d</span>'
229 % (class_name, title, days)
230 )
231 else:
232 return ""
233
234
235 @register.filter
236 def auth48_alert_badge(doc):
237 """Return alert badge, if any, for a document"""
238 if doc.type_id != 'draft':
239 return ''
240
241 iesg_state = doc.get_state_slug('draft-iesg')
242 if iesg_state != 'rfcqueue':
243 return ''
244
245 rfced_state = doc.get_state_slug('draft-rfceditor')
246 if rfced_state == 'auth48':
247 return mark_safe('<span class="badge rounded-pill bg-info" title="AUTH48">AUTH48</span>')
248
249 return ''
250
[end of ietf/doc/templatetags/ballot_icon.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ietf/doc/templatetags/ballot_icon.py b/ietf/doc/templatetags/ballot_icon.py
--- a/ietf/doc/templatetags/ballot_icon.py
+++ b/ietf/doc/templatetags/ballot_icon.py
@@ -140,7 +140,7 @@
i = i + 1
res.append("</tr></tbody></table></a>")
- res.append('<div id="modal-%d" class="modal fade" tabindex="-1" role="dialog" aria-hidden="true"><div class="modal-dialog modal-dialog-scrollable modal-xl"><div class="modal-content"></div></div></div>' % ballot.pk)
+ res.append('<div id="modal-%d" class="modal fade" tabindex="-1" role="dialog" aria-hidden="true"><div class="modal-dialog modal-dialog-scrollable modal-xl modal-fullscreen-lg-down"><div class="modal-content"></div></div></div>' % ballot.pk)
return mark_safe("".join(res))
| {"golden_diff": "diff --git a/ietf/doc/templatetags/ballot_icon.py b/ietf/doc/templatetags/ballot_icon.py\n--- a/ietf/doc/templatetags/ballot_icon.py\n+++ b/ietf/doc/templatetags/ballot_icon.py\n@@ -140,7 +140,7 @@\n i = i + 1\n \n res.append(\"</tr></tbody></table></a>\")\n- res.append('<div id=\"modal-%d\" class=\"modal fade\" tabindex=\"-1\" role=\"dialog\" aria-hidden=\"true\"><div class=\"modal-dialog modal-dialog-scrollable modal-xl\"><div class=\"modal-content\"></div></div></div>' % ballot.pk)\n+ res.append('<div id=\"modal-%d\" class=\"modal fade\" tabindex=\"-1\" role=\"dialog\" aria-hidden=\"true\"><div class=\"modal-dialog modal-dialog-scrollable modal-xl modal-fullscreen-lg-down\"><div class=\"modal-content\"></div></div></div>' % ballot.pk)\n \n return mark_safe(\"\".join(res))\n", "issue": "Wrapping issues in ballot modal on narrow screens\n### Describe the issue\n\n<img width=\"582\" alt=\"Screenshot 2022-12-15 at 18 02 42\" src=\"https://user-images.githubusercontent.com/200328/207908976-51568fb5-a3b4-4ccc-8026-8065d13da38d.png\">\r\n\n\n### Code of Conduct\n\n- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)\n", "before_files": [{"content": "# Copyright The IETF Trust 2012-2021, All Rights Reserved\n# Copyright (C) 2009-2010 Nokia Corporation and/or its subsidiary(-ies).\n# All rights reserved. Contact: Pasi Eronen <[email protected]>\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above\n# copyright notice, this list of conditions and the following\n# disclaimer in the documentation and/or other materials provided\n# with the distribution.\n#\n# * Neither the name of the Nokia Corporation and/or its\n# subsidiary(-ies) nor the names of its contributors may be used\n# to endorse or promote products derived from this software\n# without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nimport datetime\n\nimport debug # pyflakes:ignore\n\nfrom django import template\nfrom django.urls import reverse as urlreverse\nfrom django.db.models import Q\nfrom django.utils import timezone\nfrom django.utils.safestring import mark_safe\n\nfrom ietf.ietfauth.utils import user_is_person, has_role\nfrom ietf.doc.models import BallotPositionDocEvent, IESG_BALLOT_ACTIVE_STATES\nfrom ietf.name.models import BallotPositionName\n\n\nregister = template.Library()\n\[email protected]\ndef showballoticon(doc):\n if doc.type_id == \"draft\":\n if doc.stream_id == 'ietf' and doc.get_state_slug(\"draft-iesg\") not in IESG_BALLOT_ACTIVE_STATES:\n return False\n elif doc.stream_id == 'irtf' and doc.get_state_slug(\"draft-stream-irtf\") not in ['irsgpoll']:\n return False\n elif doc.type_id == \"charter\":\n if doc.get_state_slug() not in (\"intrev\", \"extrev\", \"iesgrev\"):\n return False\n elif doc.type_id == \"conflrev\":\n if doc.get_state_slug() not in (\"iesgeval\",\"defer\"):\n return False\n elif doc.type_id == \"statchg\":\n if doc.get_state_slug() not in (\"iesgeval\",\"defer\", \"in-lc\"):\n return False\n\n return True\n\[email protected]_tag(takes_context=True)\ndef ballot_icon(context, doc):\n user = context.get(\"user\")\n\n if not doc:\n return \"\"\n\n if not showballoticon(doc):\n return \"\"\n\n ballot = doc.ballot if hasattr(doc, 'ballot') else doc.active_ballot()\n\n if not ballot:\n return \"\"\n\n def sort_key(t):\n _, pos = t\n if not pos:\n return (2, 0)\n elif pos.pos.blocking:\n return (0, pos.pos.order)\n else:\n return (1, pos.pos.order)\n\n positions = list(ballot.active_balloter_positions().items())\n positions.sort(key=sort_key)\n\n right_click_string = ''\n if has_role(user, \"Area Director\"):\n right_click_string = 'oncontextmenu=\"window.location.href=\\'%s\\';return false;\"' % urlreverse('ietf.doc.views_ballot.edit_position', kwargs=dict(name=doc.name, ballot_id=ballot.pk))\n\n my_blocking = False\n for i, (balloter, pos) in enumerate(positions):\n if user_is_person(user,balloter) and pos and pos.pos.blocking:\n my_blocking = True\n break\n\n typename = \"Unknown\"\n if ballot.ballot_type.slug=='irsg-approve':\n typename = \"IRSG\"\n else:\n typename = \"IESG\"\n\n res = ['<a %s href=\"%s\" data-bs-toggle=\"modal\" data-bs-target=\"#modal-%d\" aria-label=\"%s positions\" title=\"%s positions (click to show more)\" class=\"ballot-icon\"><table' % (\n right_click_string,\n urlreverse(\"ietf.doc.views_doc.ballot_popup\", kwargs=dict(name=doc.name, ballot_id=ballot.pk)),\n ballot.pk,\n typename,\n typename,)]\n if my_blocking:\n res.append(' class=\"is-blocking\" ')\n res.append('><tbody>')\n\n res.append(\"<tr>\")\n\n for i, (ad, pos) in enumerate(positions):\n # The IRSG has many more members than the IESG, so make the table wider\n if i > 0 and i % (5 if len(positions) <= 15 else 10) == 0:\n res.append(\"</tr><tr>\")\n\n c = \"position-%s\" % (pos.pos.slug if pos else \"norecord\")\n\n if user_is_person(user, ad):\n c += \" my\"\n\n 
res.append('<td class=\"%s\"></td>' % c)\n\n # add sufficient table calls to last row to avoid HTML validation warning\n while (i + 1) % 5 != 0:\n res.append('<td class=\"position-empty\"></td>')\n i = i + 1\n\n res.append(\"</tr></tbody></table></a>\")\n res.append('<div id=\"modal-%d\" class=\"modal fade\" tabindex=\"-1\" role=\"dialog\" aria-hidden=\"true\"><div class=\"modal-dialog modal-dialog-scrollable modal-xl\"><div class=\"modal-content\"></div></div></div>' % ballot.pk)\n\n return mark_safe(\"\".join(res))\n\[email protected]\ndef ballotposition(doc, user):\n if not showballoticon(doc) or not has_role(user, \"Area Director\"):\n return None\n\n ballot = doc.active_ballot()\n if not ballot:\n return None\n\n changed_pos = doc.latest_event(BallotPositionDocEvent, type=\"changed_ballot_position\", balloter__user=user, ballot=ballot)\n if changed_pos:\n pos = changed_pos.pos\n else:\n pos = BallotPositionName.objects.get(slug=\"norecord\")\n return pos\n\n\[email protected]\ndef state_age_colored(doc):\n if doc.type_id == \"draft\":\n if not doc.get_state_slug() in [\"active\", \"rfc\"]:\n # Don't show anything for expired/withdrawn/replaced drafts\n return \"\"\n iesg_state = doc.get_state_slug(\"draft-iesg\")\n if not iesg_state:\n return \"\"\n\n if iesg_state in [\"dead\", \"watching\", \"pub\", \"idexists\"]:\n return \"\"\n try:\n state_datetime = (\n doc.docevent_set.filter(\n Q(type=\"started_iesg_process\")\n | Q(type=\"changed_state\", statedocevent__state_type=\"draft-iesg\")\n )\n .order_by(\"-time\")[0]\n .time\n )\n except IndexError:\n state_datetime = datetime.datetime(1990, 1, 1, tzinfo=datetime.timezone.utc)\n days = (timezone.now() - state_datetime).days\n # loosely based on\n # https://trac.ietf.org/trac/iesg/wiki/PublishPath\n if iesg_state == \"lc\":\n goal1 = 30\n goal2 = 30\n elif iesg_state == \"rfcqueue\":\n goal1 = 60\n goal2 = 120\n elif iesg_state in [\"lc-req\", \"ann\"]:\n goal1 = 4\n goal2 = 7\n elif \"need-rev\" in [x.slug for x in doc.tags.all()]:\n goal1 = 14\n goal2 = 28\n elif iesg_state == \"pub-req\":\n goal1 = 7\n goal2 = 14\n elif iesg_state == \"ad-eval\":\n goal1 = 14\n goal2 = 28\n else:\n goal1 = 14\n goal2 = 28\n if days > goal2:\n class_name = \"bg-danger\"\n elif days > goal1:\n class_name = \"bg-warning\"\n else:\n # don't show a badge when things are in the green; clutters display\n # class_name = \"text-success\"\n return \"\"\n if days > goal1:\n title = ' title=\"In state for %d day%s; goal is <%d days.\"' % (\n days,\n \"s\" if days != 1 else \"\",\n goal1,\n )\n else:\n title = \"\"\n return mark_safe(\n '<span class=\"badge rounded-pill %s\" %s><i class=\"bi bi-clock-fill\"></i> %d</span>'\n % (class_name, title, days)\n )\n else:\n return \"\"\n\n\[email protected]\ndef auth48_alert_badge(doc):\n \"\"\"Return alert badge, if any, for a document\"\"\"\n if doc.type_id != 'draft':\n return ''\n\n iesg_state = doc.get_state_slug('draft-iesg')\n if iesg_state != 'rfcqueue':\n return ''\n\n rfced_state = doc.get_state_slug('draft-rfceditor')\n if rfced_state == 'auth48':\n return mark_safe('<span class=\"badge rounded-pill bg-info\" title=\"AUTH48\">AUTH48</span>')\n\n return ''\n", "path": "ietf/doc/templatetags/ballot_icon.py"}]} | 3,568 | 229 |
gh_patches_debug_12227 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-2041 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add PyInstaller indicator to `mitmproxy --version`
We currently cannot distinguish if users use our precompiled binaries or if they installed mitmproxy using pip/brew/$packagemanager. It would be very useful to output if we are running the precompiled PyInstaller binary.
https://pythonhosted.org/PyInstaller/runtime-information.html
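As an illustrative aside (an editor's sketch, not part of the original report): the PyInstaller runtime docs linked above say a frozen executable sets `sys.frozen`, so a minimal check could look like the following — the function name and returned label are assumptions, not mitmproxy API.
```python
import sys


def runtime_flavor() -> str:
    # PyInstaller bundles set sys.frozen (and sys._MEIPASS); a regular
    # pip/brew install leaves the attribute unset, so getattr defaults to False.
    if getattr(sys, "frozen", False):
        return "Precompiled Binary"
    return "Source/package install"
```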
</issue>
<code>
[start of mitmproxy/utils/debug.py]
1 import gc
2 import os
3 import sys
4 import threading
5 import signal
6 import platform
7 import traceback
8 import subprocess
9
10 from mitmproxy import version
11 from mitmproxy import utils
12
13 from OpenSSL import SSL
14
15
16 def dump_system_info():
17 git_describe = 'release version'
18 with utils.chdir(os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))):
19 try:
20 c = ['git', 'describe', '--tags', '--long']
21 git_describe = subprocess.check_output(c, stderr=subprocess.STDOUT)
22 last_tag, tag_dist, commit = git_describe.decode().strip().rsplit("-", 2)
23
24 if last_tag.startswith('v'):
25 # remove the 'v' prefix
26 last_tag = last_tag[1:]
27 if commit.startswith('g'):
28 # remove the 'g' prefix added by recent git versions
29 commit = commit[1:]
30
31 # build the same version specifier as used for snapshots by rtool
32 git_describe = "{version}dev{tag:04}-0x{commit}".format(
33 version=last_tag,
34 tag=int(tag_dist),
35 commit=commit,
36 )
37 except:
38 pass
39
40 data = [
41 "Mitmproxy version: {} ({})".format(version.VERSION, git_describe),
42 "Python version: {}".format(platform.python_version()),
43 "Platform: {}".format(platform.platform()),
44 "SSL version: {}".format(SSL.SSLeay_version(SSL.SSLEAY_VERSION).decode()),
45 ]
46 d = platform.linux_distribution()
47 t = "Linux distro: %s %s %s" % d
48 if d[0]: # pragma: no cover
49 data.append(t)
50
51 d = platform.mac_ver()
52 t = "Mac version: %s %s %s" % d
53 if d[0]: # pragma: no cover
54 data.append(t)
55
56 d = platform.win32_ver()
57 t = "Windows version: %s %s %s %s" % d
58 if d[0]: # pragma: no cover
59 data.append(t)
60
61 return "\n".join(data)
62
63
64 def dump_info(signal=None, frame=None, file=sys.stdout, testing=False): # pragma: no cover
65 print("****************************************************", file=file)
66 print("Summary", file=file)
67 print("=======", file=file)
68
69 try:
70 import psutil
71 except:
72 print("(psutil not installed, skipping some debug info)", file=file)
73 else:
74 p = psutil.Process()
75 print("num threads: ", p.num_threads(), file=file)
76 if hasattr(p, "num_fds"):
77 print("num fds: ", p.num_fds(), file=file)
78 print("memory: ", p.memory_info(), file=file)
79
80 print(file=file)
81 print("Files", file=file)
82 print("=====", file=file)
83 for i in p.open_files():
84 print(i, file=file)
85
86 print(file=file)
87 print("Connections", file=file)
88 print("===========", file=file)
89 for i in p.connections():
90 print(i, file=file)
91
92 print(file=file)
93 print("Threads", file=file)
94 print("=======", file=file)
95 bthreads = []
96 for i in threading.enumerate():
97 if hasattr(i, "_threadinfo"):
98 bthreads.append(i)
99 else:
100 print(i.name, file=file)
101 bthreads.sort(key=lambda x: x._thread_started)
102 for i in bthreads:
103 print(i._threadinfo(), file=file)
104
105 print(file=file)
106 print("Memory", file=file)
107 print("=======", file=file)
108 gc.collect()
109 d = {}
110 for i in gc.get_objects():
111 t = str(type(i))
112 if "mitmproxy" in t:
113 d[t] = d.setdefault(t, 0) + 1
114 itms = list(d.items())
115 itms.sort(key=lambda x: x[1])
116 for i in itms[-20:]:
117 print(i[1], i[0], file=file)
118 print("****************************************************", file=file)
119
120 if not testing:
121 sys.exit(1)
122
123
124 def dump_stacks(signal=None, frame=None, file=sys.stdout, testing=False):
125 id2name = dict([(th.ident, th.name) for th in threading.enumerate()])
126 code = []
127 for threadId, stack in sys._current_frames().items():
128 code.append(
129 "\n# Thread: %s(%d)" % (
130 id2name.get(threadId, ""), threadId
131 )
132 )
133 for filename, lineno, name, line in traceback.extract_stack(stack):
134 code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
135 if line:
136 code.append(" %s" % (line.strip()))
137 print("\n".join(code), file=file)
138 if not testing: # pragma: no cover
139 sys.exit(1)
140
141
142 def register_info_dumpers():
143 if os.name != "nt": # pragma: windows no cover
144 signal.signal(signal.SIGUSR1, dump_info)
145 signal.signal(signal.SIGUSR2, dump_stacks)
146
[end of mitmproxy/utils/debug.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mitmproxy/utils/debug.py b/mitmproxy/utils/debug.py
--- a/mitmproxy/utils/debug.py
+++ b/mitmproxy/utils/debug.py
@@ -37,8 +37,12 @@
except:
pass
+ bin_indicator = "" # PyInstaller builds indicator, if using precompiled binary
+ if getattr(sys, 'frozen', False):
+ bin_indicator = "Precompiled Binary"
+
data = [
- "Mitmproxy version: {} ({})".format(version.VERSION, git_describe),
+ "Mitmproxy version: {} ({}) {}".format(version.VERSION, git_describe, bin_indicator),
"Python version: {}".format(platform.python_version()),
"Platform: {}".format(platform.platform()),
"SSL version: {}".format(SSL.SSLeay_version(SSL.SSLEAY_VERSION).decode()),
| {"golden_diff": "diff --git a/mitmproxy/utils/debug.py b/mitmproxy/utils/debug.py\n--- a/mitmproxy/utils/debug.py\n+++ b/mitmproxy/utils/debug.py\n@@ -37,8 +37,12 @@\n except:\n pass\n \n+ bin_indicator = \"\" # PyInstaller builds indicator, if using precompiled binary\n+ if getattr(sys, 'frozen', False):\n+ bin_indicator = \"Precompiled Binary\"\n+\n data = [\n- \"Mitmproxy version: {} ({})\".format(version.VERSION, git_describe),\n+ \"Mitmproxy version: {} ({}) {}\".format(version.VERSION, git_describe, bin_indicator),\n \"Python version: {}\".format(platform.python_version()),\n \"Platform: {}\".format(platform.platform()),\n \"SSL version: {}\".format(SSL.SSLeay_version(SSL.SSLEAY_VERSION).decode()),\n", "issue": "Add PyInstaller indicator to `mitmproxy --version`\nWe currently cannot distinguish if users use our precompiled binaries or if they installed mitmproxy using pip/brew/$packagemanager. It would be very useful to output if we are running the precompiled PyInstaller binary. \r\n\r\nhttps://pythonhosted.org/PyInstaller/runtime-information.html\n", "before_files": [{"content": "import gc\nimport os\nimport sys\nimport threading\nimport signal\nimport platform\nimport traceback\nimport subprocess\n\nfrom mitmproxy import version\nfrom mitmproxy import utils\n\nfrom OpenSSL import SSL\n\n\ndef dump_system_info():\n git_describe = 'release version'\n with utils.chdir(os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\"))):\n try:\n c = ['git', 'describe', '--tags', '--long']\n git_describe = subprocess.check_output(c, stderr=subprocess.STDOUT)\n last_tag, tag_dist, commit = git_describe.decode().strip().rsplit(\"-\", 2)\n\n if last_tag.startswith('v'):\n # remove the 'v' prefix\n last_tag = last_tag[1:]\n if commit.startswith('g'):\n # remove the 'g' prefix added by recent git versions\n commit = commit[1:]\n\n # build the same version specifier as used for snapshots by rtool\n git_describe = \"{version}dev{tag:04}-0x{commit}\".format(\n version=last_tag,\n tag=int(tag_dist),\n commit=commit,\n )\n except:\n pass\n\n data = [\n \"Mitmproxy version: {} ({})\".format(version.VERSION, git_describe),\n \"Python version: {}\".format(platform.python_version()),\n \"Platform: {}\".format(platform.platform()),\n \"SSL version: {}\".format(SSL.SSLeay_version(SSL.SSLEAY_VERSION).decode()),\n ]\n d = platform.linux_distribution()\n t = \"Linux distro: %s %s %s\" % d\n if d[0]: # pragma: no cover\n data.append(t)\n\n d = platform.mac_ver()\n t = \"Mac version: %s %s %s\" % d\n if d[0]: # pragma: no cover\n data.append(t)\n\n d = platform.win32_ver()\n t = \"Windows version: %s %s %s %s\" % d\n if d[0]: # pragma: no cover\n data.append(t)\n\n return \"\\n\".join(data)\n\n\ndef dump_info(signal=None, frame=None, file=sys.stdout, testing=False): # pragma: no cover\n print(\"****************************************************\", file=file)\n print(\"Summary\", file=file)\n print(\"=======\", file=file)\n\n try:\n import psutil\n except:\n print(\"(psutil not installed, skipping some debug info)\", file=file)\n else:\n p = psutil.Process()\n print(\"num threads: \", p.num_threads(), file=file)\n if hasattr(p, \"num_fds\"):\n print(\"num fds: \", p.num_fds(), file=file)\n print(\"memory: \", p.memory_info(), file=file)\n\n print(file=file)\n print(\"Files\", file=file)\n print(\"=====\", file=file)\n for i in p.open_files():\n print(i, file=file)\n\n print(file=file)\n print(\"Connections\", file=file)\n print(\"===========\", file=file)\n for i in p.connections():\n print(i, 
file=file)\n\n print(file=file)\n print(\"Threads\", file=file)\n print(\"=======\", file=file)\n bthreads = []\n for i in threading.enumerate():\n if hasattr(i, \"_threadinfo\"):\n bthreads.append(i)\n else:\n print(i.name, file=file)\n bthreads.sort(key=lambda x: x._thread_started)\n for i in bthreads:\n print(i._threadinfo(), file=file)\n\n print(file=file)\n print(\"Memory\", file=file)\n print(\"=======\", file=file)\n gc.collect()\n d = {}\n for i in gc.get_objects():\n t = str(type(i))\n if \"mitmproxy\" in t:\n d[t] = d.setdefault(t, 0) + 1\n itms = list(d.items())\n itms.sort(key=lambda x: x[1])\n for i in itms[-20:]:\n print(i[1], i[0], file=file)\n print(\"****************************************************\", file=file)\n\n if not testing:\n sys.exit(1)\n\n\ndef dump_stacks(signal=None, frame=None, file=sys.stdout, testing=False):\n id2name = dict([(th.ident, th.name) for th in threading.enumerate()])\n code = []\n for threadId, stack in sys._current_frames().items():\n code.append(\n \"\\n# Thread: %s(%d)\" % (\n id2name.get(threadId, \"\"), threadId\n )\n )\n for filename, lineno, name, line in traceback.extract_stack(stack):\n code.append('File: \"%s\", line %d, in %s' % (filename, lineno, name))\n if line:\n code.append(\" %s\" % (line.strip()))\n print(\"\\n\".join(code), file=file)\n if not testing: # pragma: no cover\n sys.exit(1)\n\n\ndef register_info_dumpers():\n if os.name != \"nt\": # pragma: windows no cover\n signal.signal(signal.SIGUSR1, dump_info)\n signal.signal(signal.SIGUSR2, dump_stacks)\n", "path": "mitmproxy/utils/debug.py"}]} | 2,065 | 188 |
gh_patches_debug_28571 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-5219 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Poundland spider address parsing issue
The addr:street_address field returned by the poundland.py spider is sometimes broken, giving results such as:
`"addr:street_address": "5, 6, -, 5, 8, , T, a, f, f, , S, t, r, e, e, t"`
The problem is caused by line 20 in the code:
` item["street_address"] = ", ".join(filter(None, store["address"].get("line")))`
where it is assumed that "line" from the scraped JSON will be an array of values. But sometimes "line" is just a single string. When this happens, the string itself is split into individual characters, giving results like the one above.
I guess that before applying that code we should test whether "line" is a single string. I don't think I know enough python to know the best way to fix this, and a quick Google suggests there may be a difference between Python 2 and Python 3 (which would make it difficult for me to test any solutions).
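As an illustrative sketch of that guard (an editor's note, not from the original report; the helper name is made up and the real spider may normalise addresses differently):
```python
def join_address_lines(line) -> str:
    # "line" may be a single string or a list of strings; joining a bare
    # string with ", ".join() would iterate over its individual characters.
    if not line:
        return ""
    if isinstance(line, str):
        return line.strip()
    return ", ".join(part.strip() for part in line if part)


# item["street_address"] = join_address_lines(store["address"].get("line"))
```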
</issue>
<code>
[start of locations/spiders/poundland.py]
1 import scrapy
2
3 from locations.dict_parser import DictParser
4 from locations.hours import OpeningHours
5
6
7 class PoundlandSpider(scrapy.Spider):
8 name = "poundland"
9 item_attributes = {"brand": "Poundland", "brand_wikidata": "Q1434528"}
10 start_urls = [
11 "https://www.poundland.co.uk/rest/poundland/V1/locator/?searchCriteria[scope]=store-locator&searchCriteria[current_page]=1&searchCriteria[page_size]=10000"
12 ]
13 custom_settings = {"DEFAULT_REQUEST_HEADERS": {"Accept": "application/json"}}
14
15 def parse(self, response):
16 # We may have to handle pagination at some point
17 for store in response.json()["locations"]:
18 item = DictParser.parse(store)
19
20 item["street_address"] = ", ".join(filter(None, store["address"].get("line")))
21
22 # "store_id" seems to be a better ref than "id"
23 item["ref"] = store.get("store_id")
24 item["website"] = "https://www.poundland.co.uk/store-finder/store_page/view/id/" + item["ref"] + "/"
25
26 oh = OpeningHours()
27 for rule in store["opening_hours"]:
28 if rule["hours"] == "Closed":
29 continue
30 open_time, close_time = rule["hours"].split(" - ")
31 oh.add_range(rule["day"][:2], open_time, close_time)
32
33 item["opening_hours"] = oh.as_opening_hours()
34
35 item["extras"] = {}
36 item["extras"]["atm"] = "yes" if store.get("atm") == "1" else "no"
37 item["extras"]["icestore"] = "yes" if store.get("icestore") == "1" else "no"
38
39 if store["is_pep_co_only"] == "1":
40 item["brand"] = "Pep&Co"
41 item["brand_wikidata"] = "Q24908166"
42 else:
43 if store.get("pepshopinshop") == "1":
44 # Pep and Poundland at this location
45 pep = item.copy()
46
47 pep["ref"] = pep["ref"] + "_pep"
48
49 pep["brand"] = "Pep&Co"
50 pep["brand_wikidata"] = "Q24908166"
51
52 pep["located_in"] = self.item_attributes["brand"]
53 pep["located_in_wikidata"] = self.item_attributes["brand_wikidata"]
54
55 yield pep
56
57 yield item
58
[end of locations/spiders/poundland.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/poundland.py b/locations/spiders/poundland.py
--- a/locations/spiders/poundland.py
+++ b/locations/spiders/poundland.py
@@ -1,7 +1,9 @@
import scrapy
+from locations.categories import Extras, apply_yes_no
from locations.dict_parser import DictParser
from locations.hours import OpeningHours
+from locations.spiders.vapestore_gb import clean_address
class PoundlandSpider(scrapy.Spider):
@@ -17,7 +19,7 @@
for store in response.json()["locations"]:
item = DictParser.parse(store)
- item["street_address"] = ", ".join(filter(None, store["address"].get("line")))
+ item["street_address"] = clean_address(store["address"].get("line"))
# "store_id" seems to be a better ref than "id"
item["ref"] = store.get("store_id")
@@ -30,10 +32,9 @@
open_time, close_time = rule["hours"].split(" - ")
oh.add_range(rule["day"][:2], open_time, close_time)
- item["opening_hours"] = oh.as_opening_hours()
+ item["opening_hours"] = oh
- item["extras"] = {}
- item["extras"]["atm"] = "yes" if store.get("atm") == "1" else "no"
+ apply_yes_no(Extras.ATM, item, store.get("atm") == "1")
item["extras"]["icestore"] = "yes" if store.get("icestore") == "1" else "no"
if store["is_pep_co_only"] == "1":
| {"golden_diff": "diff --git a/locations/spiders/poundland.py b/locations/spiders/poundland.py\n--- a/locations/spiders/poundland.py\n+++ b/locations/spiders/poundland.py\n@@ -1,7 +1,9 @@\n import scrapy\n \n+from locations.categories import Extras, apply_yes_no\n from locations.dict_parser import DictParser\n from locations.hours import OpeningHours\n+from locations.spiders.vapestore_gb import clean_address\n \n \n class PoundlandSpider(scrapy.Spider):\n@@ -17,7 +19,7 @@\n for store in response.json()[\"locations\"]:\n item = DictParser.parse(store)\n \n- item[\"street_address\"] = \", \".join(filter(None, store[\"address\"].get(\"line\")))\n+ item[\"street_address\"] = clean_address(store[\"address\"].get(\"line\"))\n \n # \"store_id\" seems to be a better ref than \"id\"\n item[\"ref\"] = store.get(\"store_id\")\n@@ -30,10 +32,9 @@\n open_time, close_time = rule[\"hours\"].split(\" - \")\n oh.add_range(rule[\"day\"][:2], open_time, close_time)\n \n- item[\"opening_hours\"] = oh.as_opening_hours()\n+ item[\"opening_hours\"] = oh\n \n- item[\"extras\"] = {}\n- item[\"extras\"][\"atm\"] = \"yes\" if store.get(\"atm\") == \"1\" else \"no\"\n+ apply_yes_no(Extras.ATM, item, store.get(\"atm\") == \"1\")\n item[\"extras\"][\"icestore\"] = \"yes\" if store.get(\"icestore\") == \"1\" else \"no\"\n \n if store[\"is_pep_co_only\"] == \"1\":\n", "issue": "Poundland spider address parsing issue\nThe addr:street_address field returned by the poundland.py spider is sometimes broken, giving results such as:\r\n`\"addr:street_address\": \"5, 6, -, 5, 8, , T, a, f, f, , S, t, r, e, e, t\"`\r\nThe problem is caused by line 20 in the code:\r\n` item[\"street_address\"] = \", \".join(filter(None, store[\"address\"].get(\"line\")))`\r\nwhere is is assumed that \"line\" from the scraped JSON will be an array of values. But it is sometimes \"line\" is just a single string. When this happens, the string itself is split into individual characters, giving results like the one above.\r\n\r\nI guess that before applying that code we should test whether \"line\" is a single string. 
I don't think I know enough python to know the best way to fix this, and a quick Google suggests there may be a difference between Python 2 and Python 3 (which would make it difficult for me to test any solutions).\n", "before_files": [{"content": "import scrapy\n\nfrom locations.dict_parser import DictParser\nfrom locations.hours import OpeningHours\n\n\nclass PoundlandSpider(scrapy.Spider):\n name = \"poundland\"\n item_attributes = {\"brand\": \"Poundland\", \"brand_wikidata\": \"Q1434528\"}\n start_urls = [\n \"https://www.poundland.co.uk/rest/poundland/V1/locator/?searchCriteria[scope]=store-locator&searchCriteria[current_page]=1&searchCriteria[page_size]=10000\"\n ]\n custom_settings = {\"DEFAULT_REQUEST_HEADERS\": {\"Accept\": \"application/json\"}}\n\n def parse(self, response):\n # We may have to handle pagination at some point\n for store in response.json()[\"locations\"]:\n item = DictParser.parse(store)\n\n item[\"street_address\"] = \", \".join(filter(None, store[\"address\"].get(\"line\")))\n\n # \"store_id\" seems to be a better ref than \"id\"\n item[\"ref\"] = store.get(\"store_id\")\n item[\"website\"] = \"https://www.poundland.co.uk/store-finder/store_page/view/id/\" + item[\"ref\"] + \"/\"\n\n oh = OpeningHours()\n for rule in store[\"opening_hours\"]:\n if rule[\"hours\"] == \"Closed\":\n continue\n open_time, close_time = rule[\"hours\"].split(\" - \")\n oh.add_range(rule[\"day\"][:2], open_time, close_time)\n\n item[\"opening_hours\"] = oh.as_opening_hours()\n\n item[\"extras\"] = {}\n item[\"extras\"][\"atm\"] = \"yes\" if store.get(\"atm\") == \"1\" else \"no\"\n item[\"extras\"][\"icestore\"] = \"yes\" if store.get(\"icestore\") == \"1\" else \"no\"\n\n if store[\"is_pep_co_only\"] == \"1\":\n item[\"brand\"] = \"Pep&Co\"\n item[\"brand_wikidata\"] = \"Q24908166\"\n else:\n if store.get(\"pepshopinshop\") == \"1\":\n # Pep and Poundland at this location\n pep = item.copy()\n\n pep[\"ref\"] = pep[\"ref\"] + \"_pep\"\n\n pep[\"brand\"] = \"Pep&Co\"\n pep[\"brand_wikidata\"] = \"Q24908166\"\n\n pep[\"located_in\"] = self.item_attributes[\"brand\"]\n pep[\"located_in_wikidata\"] = self.item_attributes[\"brand_wikidata\"]\n\n yield pep\n\n yield item\n", "path": "locations/spiders/poundland.py"}]} | 1,442 | 378 |
gh_patches_debug_5530 | rasdani/github-patches | git_diff | urllib3__urllib3-2204 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
is_connection_dropped checks against None but uses False as default value for getattr
I happened to read this line and the code looks fishy. I did not otherwise verify the potential bug.
See implementation of `is_connection_dropped(conn: socket.socket) -> bool`:
https://github.com/urllib3/urllib3/blob/287052a16a59bcaba5772387de36fa9a49eb8378/src/urllib3/util/connection.py#L19-L23
If there is no `sock` attribute on `conn`, then we will call `wait_for_read(False, timeout=0.0)`, which may, for example, end up putting `False` into the iterable passed to `select`.
Since this seems never to have caused problems, the `sock = getattr(conn, "sock", False)` line can probably be replaced with just `sock = conn.sock`.
Alternatives would be to replace the default (last argument of `getattr`) of `False` with `None` or replace the `if sock is None` with `if not sock`.
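A minimal, hedged sketch of the smallest of those alternatives (changing the `getattr` default to `None`); this is an editor's illustration, not a verified patch, and it assumes urllib3's existing `wait_for_read` helper is importable from `urllib3.util.wait`:
```python
import socket

from urllib3.util.wait import wait_for_read


def is_connection_dropped(conn: socket.socket) -> bool:
    # With None as the default, "no sock attribute" and "connection already
    # closed" both take the early return instead of passing False onward.
    sock = getattr(conn, "sock", None)
    if sock is None:
        return True
    return wait_for_read(sock, timeout=0.0)
```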
</issue>
<code>
[start of src/urllib3/util/connection.py]
1 import socket
2 from typing import List, Optional, Tuple, Union
3
4 from urllib3.exceptions import LocationParseError
5
6 from .wait import wait_for_read
7
8 SOCKET_GLOBAL_DEFAULT_TIMEOUT = socket._GLOBAL_DEFAULT_TIMEOUT # type: ignore
9 SocketOptions = List[Tuple[int, int, Union[int, bytes]]]
10
11
12 def is_connection_dropped(conn: socket.socket) -> bool: # Platform-specific
13 """
14 Returns True if the connection is dropped and should be closed.
15
16 :param conn:
17 :class:`http.client.HTTPConnection` object.
18 """
19 sock = getattr(conn, "sock", False)
20 if sock is None: # Connection already closed (such as by httplib).
21 return True
22 # Returns True if readable, which here means it's been dropped
23 return wait_for_read(sock, timeout=0.0)
24
25
26 # This function is copied from socket.py in the Python 2.7 standard
27 # library test suite. Added to its signature is only `socket_options`.
28 # One additional modification is that we avoid binding to IPv6 servers
29 # discovered in DNS if the system doesn't have IPv6 functionality.
30 def create_connection(
31 address: Tuple[str, int],
32 timeout: Optional[float] = SOCKET_GLOBAL_DEFAULT_TIMEOUT,
33 source_address: Optional[Tuple[str, int]] = None,
34 socket_options: Optional[SocketOptions] = None,
35 ) -> socket.socket:
36 """Connect to *address* and return the socket object.
37
38 Convenience function. Connect to *address* (a 2-tuple ``(host,
39 port)``) and return the socket object. Passing the optional
40 *timeout* parameter will set the timeout on the socket instance
41 before attempting to connect. If no *timeout* is supplied, the
42 global default timeout setting returned by :func:`socket.getdefaulttimeout`
43 is used. If *source_address* is set it must be a tuple of (host, port)
44 for the socket to bind as a source address before making the connection.
45 An host of '' or port 0 tells the OS to use the default.
46 """
47
48 host, port = address
49 if host.startswith("["):
50 host = host.strip("[]")
51 err = None
52
53 # Using the value from allowed_gai_family() in the context of getaddrinfo lets
54 # us select whether to work with IPv4 DNS records, IPv6 records, or both.
55 # The original create_connection function always returns all records.
56 family = allowed_gai_family()
57
58 try:
59 host.encode("idna")
60 except UnicodeError:
61 raise LocationParseError(f"'{host}', label empty or too long") from None
62
63 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
64 af, socktype, proto, canonname, sa = res
65 sock = None
66 try:
67 sock = socket.socket(af, socktype, proto)
68
69 # If provided, set socket level options before connecting.
70 _set_socket_options(sock, socket_options)
71
72 if timeout is not SOCKET_GLOBAL_DEFAULT_TIMEOUT:
73 sock.settimeout(timeout)
74 if source_address:
75 sock.bind(source_address)
76 sock.connect(sa)
77 return sock
78
79 except OSError as e:
80 err = e
81 if sock is not None:
82 sock.close()
83 sock = None
84
85 if err is not None:
86 raise err
87
88 raise OSError("getaddrinfo returns an empty list")
89
90
91 def _set_socket_options(sock: socket.socket, options: Optional[SocketOptions]) -> None:
92 if options is None:
93 return
94
95 for opt in options:
96 sock.setsockopt(*opt)
97
98
99 def allowed_gai_family() -> socket.AddressFamily:
100 """This function is designed to work in the context of
101 getaddrinfo, where family=socket.AF_UNSPEC is the default and
102 will perform a DNS search for both IPv6 and IPv4 records."""
103
104 family = socket.AF_INET
105 if HAS_IPV6:
106 family = socket.AF_UNSPEC
107 return family
108
109
110 def _has_ipv6(host: str) -> bool:
111 """ Returns True if the system can bind an IPv6 address. """
112 sock = None
113 has_ipv6 = False
114
115 if socket.has_ipv6:
116 # has_ipv6 returns true if cPython was compiled with IPv6 support.
117 # It does not tell us if the system has IPv6 support enabled. To
118 # determine that we must bind to an IPv6 address.
119 # https://github.com/urllib3/urllib3/pull/611
120 # https://bugs.python.org/issue658327
121 try:
122 sock = socket.socket(socket.AF_INET6)
123 sock.bind((host, 0))
124 has_ipv6 = True
125 except Exception:
126 pass
127
128 if sock:
129 sock.close()
130 return has_ipv6
131
132
133 HAS_IPV6 = _has_ipv6("::1")
134
[end of src/urllib3/util/connection.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/urllib3/util/connection.py b/src/urllib3/util/connection.py
--- a/src/urllib3/util/connection.py
+++ b/src/urllib3/util/connection.py
@@ -16,7 +16,7 @@
:param conn:
:class:`http.client.HTTPConnection` object.
"""
- sock = getattr(conn, "sock", False)
+ sock = getattr(conn, "sock", None)
if sock is None: # Connection already closed (such as by httplib).
return True
# Returns True if readable, which here means it's been dropped
| {"golden_diff": "diff --git a/src/urllib3/util/connection.py b/src/urllib3/util/connection.py\n--- a/src/urllib3/util/connection.py\n+++ b/src/urllib3/util/connection.py\n@@ -16,7 +16,7 @@\n :param conn:\n :class:`http.client.HTTPConnection` object.\n \"\"\"\n- sock = getattr(conn, \"sock\", False)\n+ sock = getattr(conn, \"sock\", None)\n if sock is None: # Connection already closed (such as by httplib).\n return True\n # Returns True if readable, which here means it's been dropped\n", "issue": "is_connection_dropped checks against None but uses False as default value for getattr\nI happened to read this line and the code looks fishy. I did not otherwise verify the potential bug.\r\n\r\nSee implementation of `is_connection_dropped(conn: socket.socket) -> bool`:\r\n\r\nhttps://github.com/urllib3/urllib3/blob/287052a16a59bcaba5772387de36fa9a49eb8378/src/urllib3/util/connection.py#L19-L23\r\n\r\nIf there is no property `sock` on `conn`, then we will call `wait_for_read(False, timeout=0.0)`, which e.g. may end up putting the `False` into the iterable passed to `select`.\r\n\r\nSince this seemed to never have caused problems, the `sock = getattr(conn, \"sock\", False)` can probably be replaced with just `sock = conn.sock`.\r\n\r\nAlternatives would be to replace the default (last argument of `getattr`) of `False` with `None` or replace the `if sock is None` with `if not sock`.\n", "before_files": [{"content": "import socket\nfrom typing import List, Optional, Tuple, Union\n\nfrom urllib3.exceptions import LocationParseError\n\nfrom .wait import wait_for_read\n\nSOCKET_GLOBAL_DEFAULT_TIMEOUT = socket._GLOBAL_DEFAULT_TIMEOUT # type: ignore\nSocketOptions = List[Tuple[int, int, Union[int, bytes]]]\n\n\ndef is_connection_dropped(conn: socket.socket) -> bool: # Platform-specific\n \"\"\"\n Returns True if the connection is dropped and should be closed.\n\n :param conn:\n :class:`http.client.HTTPConnection` object.\n \"\"\"\n sock = getattr(conn, \"sock\", False)\n if sock is None: # Connection already closed (such as by httplib).\n return True\n # Returns True if readable, which here means it's been dropped\n return wait_for_read(sock, timeout=0.0)\n\n\n# This function is copied from socket.py in the Python 2.7 standard\n# library test suite. Added to its signature is only `socket_options`.\n# One additional modification is that we avoid binding to IPv6 servers\n# discovered in DNS if the system doesn't have IPv6 functionality.\ndef create_connection(\n address: Tuple[str, int],\n timeout: Optional[float] = SOCKET_GLOBAL_DEFAULT_TIMEOUT,\n source_address: Optional[Tuple[str, int]] = None,\n socket_options: Optional[SocketOptions] = None,\n) -> socket.socket:\n \"\"\"Connect to *address* and return the socket object.\n\n Convenience function. Connect to *address* (a 2-tuple ``(host,\n port)``) and return the socket object. Passing the optional\n *timeout* parameter will set the timeout on the socket instance\n before attempting to connect. If no *timeout* is supplied, the\n global default timeout setting returned by :func:`socket.getdefaulttimeout`\n is used. 
If *source_address* is set it must be a tuple of (host, port)\n for the socket to bind as a source address before making the connection.\n An host of '' or port 0 tells the OS to use the default.\n \"\"\"\n\n host, port = address\n if host.startswith(\"[\"):\n host = host.strip(\"[]\")\n err = None\n\n # Using the value from allowed_gai_family() in the context of getaddrinfo lets\n # us select whether to work with IPv4 DNS records, IPv6 records, or both.\n # The original create_connection function always returns all records.\n family = allowed_gai_family()\n\n try:\n host.encode(\"idna\")\n except UnicodeError:\n raise LocationParseError(f\"'{host}', label empty or too long\") from None\n\n for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):\n af, socktype, proto, canonname, sa = res\n sock = None\n try:\n sock = socket.socket(af, socktype, proto)\n\n # If provided, set socket level options before connecting.\n _set_socket_options(sock, socket_options)\n\n if timeout is not SOCKET_GLOBAL_DEFAULT_TIMEOUT:\n sock.settimeout(timeout)\n if source_address:\n sock.bind(source_address)\n sock.connect(sa)\n return sock\n\n except OSError as e:\n err = e\n if sock is not None:\n sock.close()\n sock = None\n\n if err is not None:\n raise err\n\n raise OSError(\"getaddrinfo returns an empty list\")\n\n\ndef _set_socket_options(sock: socket.socket, options: Optional[SocketOptions]) -> None:\n if options is None:\n return\n\n for opt in options:\n sock.setsockopt(*opt)\n\n\ndef allowed_gai_family() -> socket.AddressFamily:\n \"\"\"This function is designed to work in the context of\n getaddrinfo, where family=socket.AF_UNSPEC is the default and\n will perform a DNS search for both IPv6 and IPv4 records.\"\"\"\n\n family = socket.AF_INET\n if HAS_IPV6:\n family = socket.AF_UNSPEC\n return family\n\n\ndef _has_ipv6(host: str) -> bool:\n \"\"\" Returns True if the system can bind an IPv6 address. \"\"\"\n sock = None\n has_ipv6 = False\n\n if socket.has_ipv6:\n # has_ipv6 returns true if cPython was compiled with IPv6 support.\n # It does not tell us if the system has IPv6 support enabled. To\n # determine that we must bind to an IPv6 address.\n # https://github.com/urllib3/urllib3/pull/611\n # https://bugs.python.org/issue658327\n try:\n sock = socket.socket(socket.AF_INET6)\n sock.bind((host, 0))\n has_ipv6 = True\n except Exception:\n pass\n\n if sock:\n sock.close()\n return has_ipv6\n\n\nHAS_IPV6 = _has_ipv6(\"::1\")\n", "path": "src/urllib3/util/connection.py"}]} | 2,145 | 135 |
gh_patches_debug_9785 | rasdani/github-patches | git_diff | freedomofpress__securedrop-603 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Database error when trying to delete replies in the journalist interface
An error is thrown deleting replies in the journalist interface. An attempt is made to remove a record for the reply from the database but replies are only recorded on the filesystem.
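A hedged sketch of one way to avoid that, based only on the code shown below (reply files end in `reply.gpg` and exist only on disk, while submissions have `Submission` rows); the helper name is illustrative, the actual delete handler is not shown in this excerpt, and the imports assume the sketch sits next to journalist.py's existing ones:
```python
import os

import store
from db import db_session, Submission


def delete_doc(sid, filename):
    # Replies exist only as "*-reply.gpg" files on disk, so there is no
    # Submission row to remove for them; only submissions live in the db.
    if not filename.endswith('reply.gpg'):
        submission = Submission.query.filter(Submission.filename == filename).first()
        if submission is not None:
            db_session.delete(submission)
            db_session.commit()
    os.remove(store.path(sid, filename))
```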
</issue>
<code>
[start of securedrop/journalist.py]
1 # -*- coding: utf-8 -*-
2 import config
3 import version
4 import crypto_util
5 import store
6 import template_filters
7 from db import db_session, Source, Submission, SourceStar, get_one_or_else
8
9 import os
10 from datetime import datetime
11 from flask import (Flask, request, render_template, send_file, redirect, flash, url_for, g, abort)
12 from flask_wtf.csrf import CsrfProtect
13 from sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound
14
15 import background
16
17 app = Flask(__name__, template_folder=config.JOURNALIST_TEMPLATES_DIR)
18 app.config.from_object(config.JournalistInterfaceFlaskConfig)
19 CsrfProtect(app)
20
21 app.jinja_env.globals['version'] = version.__version__
22 if getattr(config, 'CUSTOM_HEADER_IMAGE', None):
23 app.jinja_env.globals['header_image'] = config.CUSTOM_HEADER_IMAGE
24 app.jinja_env.globals['use_custom_header_image'] = True
25 else:
26 app.jinja_env.globals['header_image'] = 'logo.png'
27 app.jinja_env.globals['use_custom_header_image'] = False
28
29 app.jinja_env.filters['datetimeformat'] = template_filters.datetimeformat
30
31
32 @app.teardown_appcontext
33 def shutdown_session(exception=None):
34 """Automatically remove database sessions at the end of the request, or
35 when the application shuts down"""
36 db_session.remove()
37
38
39 def get_source(sid):
40 """Return a Source object, representing the database row, for the source
41 with id `sid`"""
42 source = None
43 query = Source.query.filter(Source.filesystem_id == sid)
44 source = get_one_or_else(query, app.logger, abort)
45
46 return source
47
48
49 @app.before_request
50 def setup_g():
51 """Store commonly used values in Flask's special g object"""
52 if request.method == 'POST':
53 sid = request.form.get('sid')
54 if sid:
55 g.sid = sid
56 g.source = get_source(sid)
57
58
59 def get_docs(sid):
60 """Get docs associated with source id `sid`, sorted by submission date"""
61 docs = []
62 for filename in os.listdir(store.path(sid)):
63 os_stat = os.stat(store.path(sid, filename))
64 docs.append(dict(
65 name=filename,
66 date=datetime.fromtimestamp(os_stat.st_mtime),
67 size=os_stat.st_size,
68 ))
69 # sort in chronological order
70 docs.sort(key=lambda x: int(x['name'].split('-')[0]))
71 return docs
72
73
74 def make_star_true(sid):
75 source = get_source(sid)
76 if source.star:
77 source.star.starred = True
78 else:
79 source_star = SourceStar(source)
80 db_session.add(source_star)
81
82
83 def make_star_false(sid):
84 source = get_source(sid)
85 source.star.starred = False
86
87
88 @app.route('/col/add_star/<sid>', methods=('POST',))
89 def add_star(sid):
90 make_star_true(sid)
91 db_session.commit()
92 return redirect(url_for('index'))
93
94
95 @app.route("/col/remove_star/<sid>", methods=('POST',))
96 def remove_star(sid):
97 make_star_false(sid)
98 db_session.commit()
99 return redirect(url_for('index'))
100
101
102 @app.route('/')
103 def index():
104 unstarred = []
105 starred = []
106 for source in Source.query.filter_by(pending=False).order_by(Source.last_updated.desc()).all():
107 star = SourceStar.query.filter(SourceStar.source_id == source.id).first()
108 if star and star.starred:
109 starred.append(source)
110 else:
111 unstarred.append(source)
112 source.num_unread = len(
113 Submission.query.filter(Submission.source_id == source.id, Submission.downloaded == False).all())
114
115 return render_template('index.html', unstarred=unstarred, starred=starred)
116
117
118 @app.route('/col/<sid>')
119 def col(sid):
120 source = get_source(sid)
121 docs = get_docs(sid)
122 submissions = [submission.filename for submission in Submission.query.filter(Submission.source_id == source.id).all()]
123 # Only include documents loaded from the filesystem which are replies or which are also listed in the
124 # submissions table to avoid displaying partially uploaded files (#561).
125 docs = [doc for doc in docs if doc['name'] in submissions or doc['name'].endswith('reply.gpg')]
126
127 haskey = crypto_util.getkey(sid)
128 return render_template("col.html", sid=sid,
129 codename=source.journalist_designation, docs=docs, haskey=haskey,
130 flagged=source.flagged)
131
132
133 def delete_collection(source_id):
134 # Delete the source's collection of submissions
135 store.delete_source_directory(source_id)
136
137 # Delete the source's reply keypair
138 crypto_util.delete_reply_keypair(source_id)
139
140 # Delete their entry in the db
141 source = get_source(source_id)
142 db_session.delete(source)
143 db_session.commit()
144
145
146 @app.route('/col/process', methods=('POST',))
147 def col_process():
148 actions = {'delete': col_delete, 'star': col_star, 'un-star': col_un_star}
149 if 'cols_selected' not in request.form:
150 return redirect(url_for('index'))
151
152 cols_selected = request.form.getlist('cols_selected') # getlist is cgi.FieldStorage.getlist
153 action = request.form['action']
154
155 if action not in actions:
156 return abort(500)
157
158 method = actions[action]
159 return method(cols_selected)
160
161
162 def col_star(cols_selected):
163 for sid in cols_selected:
164 make_star_true(sid)
165
166 db_session.commit()
167 return redirect(url_for('index'))
168
169
170 def col_un_star(cols_selected):
171 for source_id in cols_selected:
172 make_star_false(source_id)
173
174 db_session.commit()
175 return redirect(url_for('index'))
176
177
178 @app.route('/col/delete/<sid>', methods=('POST',))
179 def col_delete_single(sid):
180 """deleting a single collection from its /col page"""
181 source = get_source(sid)
182 delete_collection(sid)
183 flash("%s's collection deleted" % (source.journalist_designation,), "notification")
184 return redirect(url_for('index'))
185
186
187 def col_delete(cols_selected):
188 """deleting multiple collections from the index"""
189 if len(cols_selected) < 1:
190 flash("No collections selected to delete!", "error")
191 else:
192 for source_id in cols_selected:
193 delete_collection(source_id)
194 flash("%s %s deleted" % (
195 len(cols_selected),
196 "collection" if len(cols_selected) == 1 else "collections"
197 ), "notification")
198
199 return redirect(url_for('index'))
200
201
202 @app.route('/col/<sid>/<fn>')
203 def doc(sid, fn):
204 if '..' in fn or fn.startswith('/'):
205 abort(404)
206 try:
207 Submission.query.filter(Submission.filename == fn).one().downloaded = True
208 except NoResultFound as e:
209 app.logger.error("Could not mark " + fn + " as downloaded: %s" % (e,))
210 db_session.commit()
211 return send_file(store.path(sid, fn), mimetype="application/pgp-encrypted")
212
213
214 @app.route('/reply', methods=('POST',))
215 def reply():
216 msg = request.form['msg']
217 g.source.interaction_count += 1
218 filename = "{0}-reply.gpg".format(g.source.interaction_count)
219
220 crypto_util.encrypt(crypto_util.getkey(g.sid), msg, output=
221 store.path(g.sid, filename))
222
223 db_session.commit()
224 return render_template('reply.html', sid=g.sid,
225 codename=g.source.journalist_designation)
226
227
228 @app.route('/regenerate-code', methods=('POST',))
229 def generate_code():
230 original_journalist_designation = g.source.journalist_designation
231 g.source.journalist_designation = crypto_util.display_id()
232
233 for doc in Submission.query.filter(Submission.source_id == g.source.id).all():
234 doc.filename = store.rename_submission(g.sid, doc.filename, g.source.journalist_filename())
235 db_session.commit()
236
237 flash("The source '%s' has been renamed to '%s'" % (original_journalist_designation, g.source.journalist_designation), "notification")
238 return redirect('/col/' + g.sid)
239
240
241 @app.route('/download_unread/<sid>')
242 def download_unread(sid):
243 id = Source.query.filter(Source.filesystem_id == sid).one().id
244 docs = [doc.filename for doc in
245 Submission.query.filter(Submission.source_id == id, Submission.downloaded == False).all()]
246 return bulk_download(sid, docs)
247
248
249 @app.route('/bulk', methods=('POST',))
250 def bulk():
251 action = request.form['action']
252
253 doc_names_selected = request.form.getlist('doc_names_selected')
254 docs_selected = [
255 doc for doc in get_docs(g.sid) if doc['name'] in doc_names_selected]
256 filenames_selected = [
257 doc['name'] for doc in docs_selected]
258
259 if not docs_selected:
260 if action == 'download':
261 flash("No collections selected to download!", "error")
262 elif action == 'delete':
263 flash("No collections selected to delete!", "error")
264 return redirect(url_for('col', sid=g.sid))
265
266 if action == 'download':
267 return bulk_download(g.sid, filenames_selected)
268 elif action == 'delete':
269 return bulk_delete(g.sid, docs_selected)
270 else:
271 abort(400)
272
273
274 def bulk_delete(sid, docs_selected):
275 source = get_source(sid)
276 confirm_delete = bool(request.form.get('confirm_delete', False))
277 if confirm_delete:
278 for doc in docs_selected:
279 db_session.delete(Submission.query.filter(Submission.filename == doc['name']).one())
280 fn = store.path(sid, doc['name'])
281 store.secure_unlink(fn)
282 db_session.commit()
283 return render_template('delete.html', sid=sid,
284 codename=source.journalist_designation,
285 docs_selected=docs_selected, confirm_delete=confirm_delete)
286
287
288 def bulk_download(sid, docs_selected):
289 source = get_source(sid)
290 filenames = []
291 for doc in docs_selected:
292 filenames.append(store.path(sid, doc))
293 try:
294 Submission.query.filter(Submission.filename == doc).one().downloaded = True
295 except NoResultFound as e:
296 app.logger.error("Could not mark " + doc + " as downloaded: %s" % (e,))
297 db_session.commit()
298 zip = store.get_bulk_archive(filenames, zip_directory=source.journalist_filename())
299 return send_file(zip.name, mimetype="application/zip",
300 attachment_filename=source.journalist_filename() + ".zip",
301 as_attachment=True)
302
303
304 @app.route('/flag', methods=('POST',))
305 def flag():
306 g.source.flagged = True
307 db_session.commit()
308 return render_template('flag.html', sid=g.sid,
309 codename=g.source.journalist_designation)
310 def write_pidfile():
311 pid = str(os.getpid())
312 with open(config.JOURNALIST_PIDFILE, 'w') as fp:
313 fp.write(pid)
314
315 if __name__ == "__main__":
316 write_pidfile()
317 # TODO make sure debug=False in production
318 app.run(debug=True, host='0.0.0.0', port=8081)
319
[end of securedrop/journalist.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/securedrop/journalist.py b/securedrop/journalist.py
--- a/securedrop/journalist.py
+++ b/securedrop/journalist.py
@@ -276,7 +276,8 @@
confirm_delete = bool(request.form.get('confirm_delete', False))
if confirm_delete:
for doc in docs_selected:
- db_session.delete(Submission.query.filter(Submission.filename == doc['name']).one())
+ if not doc['name'].endswith('reply.gpg'):
+ db_session.delete(Submission.query.filter(Submission.filename == doc['name']).one())
fn = store.path(sid, doc['name'])
store.secure_unlink(fn)
db_session.commit()
| {"golden_diff": "diff --git a/securedrop/journalist.py b/securedrop/journalist.py\n--- a/securedrop/journalist.py\n+++ b/securedrop/journalist.py\n@@ -276,7 +276,8 @@\n confirm_delete = bool(request.form.get('confirm_delete', False))\n if confirm_delete:\n for doc in docs_selected:\n- db_session.delete(Submission.query.filter(Submission.filename == doc['name']).one())\n+ if not doc['name'].endswith('reply.gpg'):\n+ db_session.delete(Submission.query.filter(Submission.filename == doc['name']).one())\n fn = store.path(sid, doc['name'])\n store.secure_unlink(fn)\n db_session.commit()\n", "issue": "Database error when trying to delete replies in the journalist interface\nAn error is thrown deleting replies in the journalist interface. An attempt is made to remove a record for the reply from the database but replies are only recorded on the filesystem.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport config\nimport version\nimport crypto_util\nimport store\nimport template_filters\nfrom db import db_session, Source, Submission, SourceStar, get_one_or_else\n\nimport os\nfrom datetime import datetime\nfrom flask import (Flask, request, render_template, send_file, redirect, flash, url_for, g, abort)\nfrom flask_wtf.csrf import CsrfProtect\nfrom sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound\n\nimport background\n\napp = Flask(__name__, template_folder=config.JOURNALIST_TEMPLATES_DIR)\napp.config.from_object(config.JournalistInterfaceFlaskConfig)\nCsrfProtect(app)\n\napp.jinja_env.globals['version'] = version.__version__\nif getattr(config, 'CUSTOM_HEADER_IMAGE', None):\n app.jinja_env.globals['header_image'] = config.CUSTOM_HEADER_IMAGE\n app.jinja_env.globals['use_custom_header_image'] = True\nelse:\n app.jinja_env.globals['header_image'] = 'logo.png'\n app.jinja_env.globals['use_custom_header_image'] = False\n\napp.jinja_env.filters['datetimeformat'] = template_filters.datetimeformat\n\n\[email protected]_appcontext\ndef shutdown_session(exception=None):\n \"\"\"Automatically remove database sessions at the end of the request, or\n when the application shuts down\"\"\"\n db_session.remove()\n\n\ndef get_source(sid):\n \"\"\"Return a Source object, representing the database row, for the source\n with id `sid`\"\"\"\n source = None\n query = Source.query.filter(Source.filesystem_id == sid)\n source = get_one_or_else(query, app.logger, abort)\n\n return source\n\n\[email protected]_request\ndef setup_g():\n \"\"\"Store commonly used values in Flask's special g object\"\"\"\n if request.method == 'POST':\n sid = request.form.get('sid')\n if sid:\n g.sid = sid\n g.source = get_source(sid)\n\n\ndef get_docs(sid):\n \"\"\"Get docs associated with source id `sid`, sorted by submission date\"\"\"\n docs = []\n for filename in os.listdir(store.path(sid)):\n os_stat = os.stat(store.path(sid, filename))\n docs.append(dict(\n name=filename,\n date=datetime.fromtimestamp(os_stat.st_mtime),\n size=os_stat.st_size,\n ))\n # sort in chronological order\n docs.sort(key=lambda x: int(x['name'].split('-')[0]))\n return docs\n\n\ndef make_star_true(sid):\n source = get_source(sid)\n if source.star:\n source.star.starred = True\n else:\n source_star = SourceStar(source)\n db_session.add(source_star)\n\n\ndef make_star_false(sid):\n source = get_source(sid)\n source.star.starred = False\n\n\[email protected]('/col/add_star/<sid>', methods=('POST',))\ndef add_star(sid):\n make_star_true(sid)\n db_session.commit()\n return redirect(url_for('index'))\n\n\[email 
protected](\"/col/remove_star/<sid>\", methods=('POST',))\ndef remove_star(sid):\n make_star_false(sid)\n db_session.commit()\n return redirect(url_for('index'))\n\n\[email protected]('/')\ndef index():\n unstarred = []\n starred = []\n for source in Source.query.filter_by(pending=False).order_by(Source.last_updated.desc()).all():\n star = SourceStar.query.filter(SourceStar.source_id == source.id).first()\n if star and star.starred:\n starred.append(source)\n else:\n unstarred.append(source)\n source.num_unread = len(\n Submission.query.filter(Submission.source_id == source.id, Submission.downloaded == False).all())\n\n return render_template('index.html', unstarred=unstarred, starred=starred)\n\n\[email protected]('/col/<sid>')\ndef col(sid):\n source = get_source(sid)\n docs = get_docs(sid)\n submissions = [submission.filename for submission in Submission.query.filter(Submission.source_id == source.id).all()]\n # Only include documents loaded from the filesystem which are replies or which are also listed in the\n # submissions table to avoid displaying partially uploaded files (#561).\n docs = [doc for doc in docs if doc['name'] in submissions or doc['name'].endswith('reply.gpg')]\n\n haskey = crypto_util.getkey(sid)\n return render_template(\"col.html\", sid=sid,\n codename=source.journalist_designation, docs=docs, haskey=haskey,\n flagged=source.flagged)\n\n\ndef delete_collection(source_id):\n # Delete the source's collection of submissions\n store.delete_source_directory(source_id)\n\n # Delete the source's reply keypair\n crypto_util.delete_reply_keypair(source_id)\n\n # Delete their entry in the db\n source = get_source(source_id)\n db_session.delete(source)\n db_session.commit()\n\n\[email protected]('/col/process', methods=('POST',))\ndef col_process():\n actions = {'delete': col_delete, 'star': col_star, 'un-star': col_un_star}\n if 'cols_selected' not in request.form:\n return redirect(url_for('index'))\n\n cols_selected = request.form.getlist('cols_selected') # getlist is cgi.FieldStorage.getlist\n action = request.form['action']\n\n if action not in actions:\n return abort(500)\n\n method = actions[action]\n return method(cols_selected)\n\n\ndef col_star(cols_selected):\n for sid in cols_selected:\n make_star_true(sid)\n\n db_session.commit()\n return redirect(url_for('index'))\n\n\ndef col_un_star(cols_selected):\n for source_id in cols_selected:\n make_star_false(source_id)\n\n db_session.commit()\n return redirect(url_for('index'))\n\n\[email protected]('/col/delete/<sid>', methods=('POST',))\ndef col_delete_single(sid):\n \"\"\"deleting a single collection from its /col page\"\"\"\n source = get_source(sid)\n delete_collection(sid)\n flash(\"%s's collection deleted\" % (source.journalist_designation,), \"notification\")\n return redirect(url_for('index'))\n\n\ndef col_delete(cols_selected):\n \"\"\"deleting multiple collections from the index\"\"\"\n if len(cols_selected) < 1:\n flash(\"No collections selected to delete!\", \"error\")\n else:\n for source_id in cols_selected:\n delete_collection(source_id)\n flash(\"%s %s deleted\" % (\n len(cols_selected),\n \"collection\" if len(cols_selected) == 1 else \"collections\"\n ), \"notification\")\n\n return redirect(url_for('index'))\n\n\[email protected]('/col/<sid>/<fn>')\ndef doc(sid, fn):\n if '..' 
in fn or fn.startswith('/'):\n abort(404)\n try:\n Submission.query.filter(Submission.filename == fn).one().downloaded = True\n except NoResultFound as e:\n app.logger.error(\"Could not mark \" + fn + \" as downloaded: %s\" % (e,))\n db_session.commit()\n return send_file(store.path(sid, fn), mimetype=\"application/pgp-encrypted\")\n\n\[email protected]('/reply', methods=('POST',))\ndef reply():\n msg = request.form['msg']\n g.source.interaction_count += 1\n filename = \"{0}-reply.gpg\".format(g.source.interaction_count)\n\n crypto_util.encrypt(crypto_util.getkey(g.sid), msg, output=\n store.path(g.sid, filename))\n\n db_session.commit()\n return render_template('reply.html', sid=g.sid,\n codename=g.source.journalist_designation)\n\n\[email protected]('/regenerate-code', methods=('POST',))\ndef generate_code():\n original_journalist_designation = g.source.journalist_designation\n g.source.journalist_designation = crypto_util.display_id()\n \n for doc in Submission.query.filter(Submission.source_id == g.source.id).all():\n doc.filename = store.rename_submission(g.sid, doc.filename, g.source.journalist_filename())\n db_session.commit()\n\n flash(\"The source '%s' has been renamed to '%s'\" % (original_journalist_designation, g.source.journalist_designation), \"notification\")\n return redirect('/col/' + g.sid)\n\n\[email protected]('/download_unread/<sid>')\ndef download_unread(sid):\n id = Source.query.filter(Source.filesystem_id == sid).one().id\n docs = [doc.filename for doc in\n Submission.query.filter(Submission.source_id == id, Submission.downloaded == False).all()]\n return bulk_download(sid, docs)\n\n\[email protected]('/bulk', methods=('POST',))\ndef bulk():\n action = request.form['action']\n\n doc_names_selected = request.form.getlist('doc_names_selected')\n docs_selected = [\n doc for doc in get_docs(g.sid) if doc['name'] in doc_names_selected]\n filenames_selected = [\n doc['name'] for doc in docs_selected]\n\n if not docs_selected:\n if action == 'download':\n flash(\"No collections selected to download!\", \"error\")\n elif action == 'delete':\n flash(\"No collections selected to delete!\", \"error\")\n return redirect(url_for('col', sid=g.sid))\n\n if action == 'download':\n return bulk_download(g.sid, filenames_selected)\n elif action == 'delete':\n return bulk_delete(g.sid, docs_selected)\n else:\n abort(400)\n\n\ndef bulk_delete(sid, docs_selected):\n source = get_source(sid)\n confirm_delete = bool(request.form.get('confirm_delete', False))\n if confirm_delete:\n for doc in docs_selected:\n db_session.delete(Submission.query.filter(Submission.filename == doc['name']).one())\n fn = store.path(sid, doc['name'])\n store.secure_unlink(fn)\n db_session.commit()\n return render_template('delete.html', sid=sid,\n codename=source.journalist_designation,\n docs_selected=docs_selected, confirm_delete=confirm_delete)\n\n\ndef bulk_download(sid, docs_selected):\n source = get_source(sid)\n filenames = []\n for doc in docs_selected:\n filenames.append(store.path(sid, doc))\n try:\n Submission.query.filter(Submission.filename == doc).one().downloaded = True\n except NoResultFound as e:\n app.logger.error(\"Could not mark \" + doc + \" as downloaded: %s\" % (e,))\n db_session.commit()\n zip = store.get_bulk_archive(filenames, zip_directory=source.journalist_filename())\n return send_file(zip.name, mimetype=\"application/zip\",\n attachment_filename=source.journalist_filename() + \".zip\",\n as_attachment=True)\n\n\[email protected]('/flag', methods=('POST',))\ndef flag():\n 
g.source.flagged = True\n db_session.commit()\n return render_template('flag.html', sid=g.sid,\n codename=g.source.journalist_designation)\ndef write_pidfile():\n pid = str(os.getpid())\n with open(config.JOURNALIST_PIDFILE, 'w') as fp:\n fp.write(pid)\n\nif __name__ == \"__main__\":\n write_pidfile()\n # TODO make sure debug=False in production\n app.run(debug=True, host='0.0.0.0', port=8081)\n", "path": "securedrop/journalist.py"}]} | 3,853 | 157 |
gh_patches_debug_30382 | rasdani/github-patches | git_diff | pytorch__audio-3 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Need API for saving to file.
Currently we only have a load function, but after training the network it would be great if we could save the generated tensor to a file.
@soumith I think we can reuse a lot of code from this repo https://github.com/MattVitelli/GRUV
</issue>
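A rough sketch of the kind of API being requested, mirroring the existing `load` helper shown below (hypothetical call signature; the eventual interface may differ):

```python
import torch
import torchaudio

# Existing behaviour: read a file into a tensor plus its sample rate.
waveform, sample_rate = torchaudio.load("input.wav")

# Proposed behaviour: write a (possibly generated) tensor back out.
generated = torch.randn(waveform.size())  # stand-in for network output
torchaudio.save("output.wav", generated, sample_rate)  # hypothetical API
```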
<code>
[start of torchaudio/__init__.py]
1 import torch
2
3 from cffi import FFI
4 ffi = FFI()
5 from ._ext import th_sox
6
7 def load(filename, out=None):
8 if out is not None:
9 assert torch.is_tensor(out)
10 assert not out.is_cuda
11 else:
12 out = torch.FloatTensor()
13
14 if isinstance(out, torch.FloatTensor):
15 func = th_sox.libthsox_Float_read_audio_file
16 elif isinstance(out, torch.DoubleTensor):
17 func = th_sox.libthsox_Double_read_audio_file
18 elif isinstance(out, torch.ByteTensor):
19 func = th_sox.libthsox_Byte_read_audio_file
20 elif isinstance(out, torch.CharTensor):
21 func = th_sox.libthsox_Char_read_audio_file
22 elif isinstance(out, torch.ShortTensor):
23 func = th_sox.libthsox_Short_read_audio_file
24 elif isinstance(out, torch.IntTensor):
25 func = th_sox.libthsox_Int_read_audio_file
26 elif isinstance(out, torch.LongTensor):
27 func = th_sox.libthsox_Long_read_audio_file
28
29 sample_rate_p = ffi.new('int*')
30 func(bytes(filename), out, sample_rate_p)
31 sample_rate = sample_rate_p[0]
32 return out, sample_rate
33
[end of torchaudio/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchaudio/__init__.py b/torchaudio/__init__.py
--- a/torchaudio/__init__.py
+++ b/torchaudio/__init__.py
@@ -1,32 +1,40 @@
+import os
+
import torch
from cffi import FFI
+
ffi = FFI()
from ._ext import th_sox
+
+def check_input(src):
+ if not torch.is_tensor(src):
+ raise TypeError('Expected a tensor, got %s' % type(src))
+ if not src.__module__ == 'torch':
+ raise TypeError('Expected a CPU based tensor, got %s' % type(src))
+
+
def load(filename, out=None):
if out is not None:
- assert torch.is_tensor(out)
- assert not out.is_cuda
+ check_input(out)
else:
out = torch.FloatTensor()
-
- if isinstance(out, torch.FloatTensor):
- func = th_sox.libthsox_Float_read_audio_file
- elif isinstance(out, torch.DoubleTensor):
- func = th_sox.libthsox_Double_read_audio_file
- elif isinstance(out, torch.ByteTensor):
- func = th_sox.libthsox_Byte_read_audio_file
- elif isinstance(out, torch.CharTensor):
- func = th_sox.libthsox_Char_read_audio_file
- elif isinstance(out, torch.ShortTensor):
- func = th_sox.libthsox_Short_read_audio_file
- elif isinstance(out, torch.IntTensor):
- func = th_sox.libthsox_Int_read_audio_file
- elif isinstance(out, torch.LongTensor):
- func = th_sox.libthsox_Long_read_audio_file
-
- sample_rate_p = ffi.new('int*')
+ typename = type(out).__name__.replace('Tensor', '')
+ func = getattr(th_sox, 'libthsox_{}_read_audio_file'.format(typename))
+ sample_rate_p = ffi.new('int*')
func(bytes(filename), out, sample_rate_p)
sample_rate = sample_rate_p[0]
return out, sample_rate
+
+
+def save(filepath, src, sample_rate):
+ filename, extension = os.path.splitext(filepath)
+ if type(sample_rate) != int:
+ raise TypeError('Sample rate should be a integer')
+
+ check_input(src)
+ typename = type(src).__name__.replace('Tensor', '')
+ func = getattr(th_sox, 'libthsox_{}_write_audio_file'.format(typename))
+
+ func(bytes(filepath), src, extension[1:], sample_rate)
| {"golden_diff": "diff --git a/torchaudio/__init__.py b/torchaudio/__init__.py\n--- a/torchaudio/__init__.py\n+++ b/torchaudio/__init__.py\n@@ -1,32 +1,40 @@\n+import os\n+\n import torch\n \n from cffi import FFI\n+\n ffi = FFI()\n from ._ext import th_sox\n \n+\n+def check_input(src):\n+ if not torch.is_tensor(src):\n+ raise TypeError('Expected a tensor, got %s' % type(src))\n+ if not src.__module__ == 'torch':\n+ raise TypeError('Expected a CPU based tensor, got %s' % type(src))\n+\n+\n def load(filename, out=None):\n if out is not None:\n- assert torch.is_tensor(out)\n- assert not out.is_cuda\n+ check_input(out)\n else:\n out = torch.FloatTensor()\n-\n- if isinstance(out, torch.FloatTensor):\n- func = th_sox.libthsox_Float_read_audio_file\n- elif isinstance(out, torch.DoubleTensor):\n- func = th_sox.libthsox_Double_read_audio_file\n- elif isinstance(out, torch.ByteTensor):\n- func = th_sox.libthsox_Byte_read_audio_file\n- elif isinstance(out, torch.CharTensor):\n- func = th_sox.libthsox_Char_read_audio_file\n- elif isinstance(out, torch.ShortTensor):\n- func = th_sox.libthsox_Short_read_audio_file\n- elif isinstance(out, torch.IntTensor):\n- func = th_sox.libthsox_Int_read_audio_file\n- elif isinstance(out, torch.LongTensor):\n- func = th_sox.libthsox_Long_read_audio_file\n- \n- sample_rate_p = ffi.new('int*') \n+ typename = type(out).__name__.replace('Tensor', '')\n+ func = getattr(th_sox, 'libthsox_{}_read_audio_file'.format(typename))\n+ sample_rate_p = ffi.new('int*')\n func(bytes(filename), out, sample_rate_p)\n sample_rate = sample_rate_p[0]\n return out, sample_rate\n+\n+\n+def save(filepath, src, sample_rate):\n+ filename, extension = os.path.splitext(filepath)\n+ if type(sample_rate) != int:\n+ raise TypeError('Sample rate should be a integer')\n+\n+ check_input(src)\n+ typename = type(src).__name__.replace('Tensor', '')\n+ func = getattr(th_sox, 'libthsox_{}_write_audio_file'.format(typename))\n+\n+ func(bytes(filepath), src, extension[1:], sample_rate)\n", "issue": "Need API for saving to file.\nCurrently we only have a load function. But after training the network it would be great if we can save the generated tensor to a file.\r\n\r\n@soumith I think we can reuse a lot of code from this repo https://github.com/MattVitelli/GRUV\n", "before_files": [{"content": "import torch\n\nfrom cffi import FFI\nffi = FFI()\nfrom ._ext import th_sox\n\ndef load(filename, out=None):\n if out is not None:\n assert torch.is_tensor(out)\n assert not out.is_cuda\n else:\n out = torch.FloatTensor()\n\n if isinstance(out, torch.FloatTensor):\n func = th_sox.libthsox_Float_read_audio_file\n elif isinstance(out, torch.DoubleTensor):\n func = th_sox.libthsox_Double_read_audio_file\n elif isinstance(out, torch.ByteTensor):\n func = th_sox.libthsox_Byte_read_audio_file\n elif isinstance(out, torch.CharTensor):\n func = th_sox.libthsox_Char_read_audio_file\n elif isinstance(out, torch.ShortTensor):\n func = th_sox.libthsox_Short_read_audio_file\n elif isinstance(out, torch.IntTensor):\n func = th_sox.libthsox_Int_read_audio_file\n elif isinstance(out, torch.LongTensor):\n func = th_sox.libthsox_Long_read_audio_file\n \n sample_rate_p = ffi.new('int*') \n func(bytes(filename), out, sample_rate_p)\n sample_rate = sample_rate_p[0]\n return out, sample_rate\n", "path": "torchaudio/__init__.py"}]} | 932 | 579 |
gh_patches_debug_10609 | rasdani/github-patches | git_diff | kubeflow__pipelines-6240 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[v2 sample test] kaniko build times out / OOM
We've observed a significant number of build timeouts and OOMs with Kaniko recently.
I've tried several combinations:
1. 1.3.0-debug with/without --snapshotMode=redo + 4GB memory
2. 1.6.0-debug with/without --snapshotMode=redo + 8GB memory https://github.com/kubeflow/pipelines/pull/6226
but none of them run stably in a reasonable amount of time.
The memory and timeout issues can be found upstream:
* https://github.com/GoogleContainerTools/kaniko/issues/1680
* https://github.com/GoogleContainerTools/kaniko/issues/1333
but they are both long-standing, and the repo is no longer actively maintained.
</issue>
<code>
[start of sdk/python/kfp/compiler/v2_compat.py]
1 # Copyright 2021 The Kubeflow Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Utility functions for enabling v2-compatible pipelines in v1."""
15 import collections
16 import json
17 from typing import Optional
18
19 from kfp import dsl
20 from kfp.compiler import _default_transformers
21 from kfp.pipeline_spec import pipeline_spec_pb2
22 from kfp.v2 import compiler
23
24 from kubernetes import client as k8s_client
25
26 _DEFAULT_LAUNCHER_IMAGE = "gcr.io/ml-pipeline/kfp-launcher:1.6.6"
27
28
29 def update_op(op: dsl.ContainerOp,
30 pipeline_name: dsl.PipelineParam,
31 pipeline_root: dsl.PipelineParam,
32 launcher_image: Optional[str] = None) -> None:
33 """Updates the passed in Op for running in v2-compatible mode.
34
35 Args:
36 op: The Op to update.
37 pipeline_spec: The PipelineSpec for the pipeline under which `op`
38 runs.
39 pipeline_root: The root output directory for pipeline artifacts.
40 launcher_image: An optional launcher image. Useful for tests.
41 """
42 op.is_v2 = True
43 # Inject the launcher binary and overwrite the entrypoint.
44 image_name = launcher_image or _DEFAULT_LAUNCHER_IMAGE
45 launcher_container = dsl.UserContainer(name="kfp-launcher",
46 image=image_name,
47 command="/bin/mount_launcher.sh",
48 mirror_volume_mounts=True)
49
50 op.add_init_container(launcher_container)
51 op.add_volume(k8s_client.V1Volume(name='kfp-launcher'))
52 op.add_volume_mount(
53 k8s_client.V1VolumeMount(name='kfp-launcher', mount_path='/kfp-launcher'))
54
55 # op.command + op.args will have the following sections:
56 # 1. args passed to kfp-launcher
57 # 2. a separator "--"
58 # 3. parameters in format "key1=value1", "key2=value2", ...
59 # 4. a separator "--" as end of arguments passed to launcher
60 # 5. (start of op.args) arguments of the original user program command + args
61 #
62 # example:
63 # - command:
64 # - /kfp-launcher/launch
65 # - '--mlmd_server_address'
66 # - $(METADATA_GRPC_SERVICE_HOST)
67 # - '--mlmd_server_port'
68 # - $(METADATA_GRPC_SERVICE_PORT)
69 # - ... # more launcher params
70 # - '--pipeline_task_id'
71 # - $(KFP_POD_NAME)
72 # - '--pipeline_root'
73 # - ''
74 # - '--' # start of parameter values
75 # - first=first
76 # - second=second
77 # - '--' # start of user command and args
78 # args:
79 # - sh
80 # - '-ec'
81 # - |
82 # program_path=$(mktemp)
83 # printf "%s" "$0" > "$program_path"
84 # python3 -u "$program_path" "$@"
85 # - >
86 # import json
87 # import xxx
88 # ...
89 op.command = [
90 "/kfp-launcher/launch",
91 "--mlmd_server_address",
92 "$(METADATA_GRPC_SERVICE_HOST)",
93 "--mlmd_server_port",
94 "$(METADATA_GRPC_SERVICE_PORT)",
95 "--runtime_info_json",
96 "$(KFP_V2_RUNTIME_INFO)",
97 "--container_image",
98 "$(KFP_V2_IMAGE)",
99 "--task_name",
100 op.name,
101 "--pipeline_name",
102 pipeline_name,
103 "--run_id",
104 "$(KFP_RUN_ID)",
105 "--run_resource",
106 "workflows.argoproj.io/$(WORKFLOW_ID)",
107 "--namespace",
108 "$(KFP_NAMESPACE)",
109 "--pod_name",
110 "$(KFP_POD_NAME)",
111 "--pod_uid",
112 "$(KFP_POD_UID)",
113 "--pipeline_root",
114 pipeline_root,
115 "--enable_caching",
116 "$(ENABLE_CACHING)",
117 ]
118
119 # Mount necessary environment variables.
120 op.apply(_default_transformers.add_kfp_pod_env)
121 op.container.add_env_variable(
122 k8s_client.V1EnvVar(name="KFP_V2_IMAGE", value=op.container.image))
123
124 config_map_ref = k8s_client.V1ConfigMapEnvSource(
125 name='metadata-grpc-configmap', optional=True)
126 op.container.add_env_from(
127 k8s_client.V1EnvFromSource(config_map_ref=config_map_ref))
128
129 op.arguments = list(op.container_spec.command) + list(op.container_spec.args)
130
131 runtime_info = {
132 "inputParameters": collections.OrderedDict(),
133 "inputArtifacts": collections.OrderedDict(),
134 "outputParameters": collections.OrderedDict(),
135 "outputArtifacts": collections.OrderedDict(),
136 }
137
138 op.command += ["--"]
139 component_spec = op.component_spec
140 for parameter, spec in sorted(
141 component_spec.input_definitions.parameters.items()):
142 parameter_info = {
143 "type":
144 pipeline_spec_pb2.PrimitiveType.PrimitiveTypeEnum.Name(spec.type),
145 }
146 op.command += [f"{parameter}={op._parameter_arguments[parameter]}"]
147 runtime_info["inputParameters"][parameter] = parameter_info
148 op.command += ["--"]
149
150 for artifact_name, spec in sorted(
151 component_spec.input_definitions.artifacts.items()):
152 artifact_info = {
153 "metadataPath": op.input_artifact_paths[artifact_name],
154 "schemaTitle": spec.artifact_type.schema_title,
155 "instanceSchema": spec.artifact_type.instance_schema,
156 }
157 runtime_info["inputArtifacts"][artifact_name] = artifact_info
158
159 for parameter, spec in sorted(
160 component_spec.output_definitions.parameters.items()):
161 parameter_info = {
162 "type":
163 pipeline_spec_pb2.PrimitiveType.PrimitiveTypeEnum.Name(spec.type),
164 "path":
165 op.file_outputs[parameter],
166 }
167 runtime_info["outputParameters"][parameter] = parameter_info
168
169 for artifact_name, spec in sorted(
170 component_spec.output_definitions.artifacts.items()):
171 # TODO: Assert instance_schema.
172 artifact_info = {
173 # Type used to register output artifacts.
174 "schemaTitle": spec.artifact_type.schema_title,
175 "instanceSchema": spec.artifact_type.instance_schema,
176 # File used to write out the registered artifact ID.
177 "metadataPath": op.file_outputs[artifact_name],
178 }
179 runtime_info["outputArtifacts"][artifact_name] = artifact_info
180
181 op.container.add_env_variable(
182 k8s_client.V1EnvVar(name="KFP_V2_RUNTIME_INFO",
183 value=json.dumps(runtime_info)))
184
185 op.pod_annotations['pipelines.kubeflow.org/v2_component'] = "true"
186 op.pod_labels['pipelines.kubeflow.org/v2_component']= "true"
187
[end of sdk/python/kfp/compiler/v2_compat.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sdk/python/kfp/compiler/v2_compat.py b/sdk/python/kfp/compiler/v2_compat.py
--- a/sdk/python/kfp/compiler/v2_compat.py
+++ b/sdk/python/kfp/compiler/v2_compat.py
@@ -44,7 +44,7 @@
image_name = launcher_image or _DEFAULT_LAUNCHER_IMAGE
launcher_container = dsl.UserContainer(name="kfp-launcher",
image=image_name,
- command="/bin/mount_launcher.sh",
+ command=["launcher", "--copy", "/kfp-launcher/launch"],
mirror_volume_mounts=True)
op.add_init_container(launcher_container)
| {"golden_diff": "diff --git a/sdk/python/kfp/compiler/v2_compat.py b/sdk/python/kfp/compiler/v2_compat.py\n--- a/sdk/python/kfp/compiler/v2_compat.py\n+++ b/sdk/python/kfp/compiler/v2_compat.py\n@@ -44,7 +44,7 @@\n image_name = launcher_image or _DEFAULT_LAUNCHER_IMAGE\n launcher_container = dsl.UserContainer(name=\"kfp-launcher\",\n image=image_name,\n- command=\"/bin/mount_launcher.sh\",\n+ command=[\"launcher\", \"--copy\", \"/kfp-launcher/launch\"],\n mirror_volume_mounts=True)\n \n op.add_init_container(launcher_container)\n", "issue": "[v2 sample test] kaniko build times out / OOM\nWe've observed significantly high build time outs and OOMs with Kaniko recently.\r\nI've tried several combinations:\r\n1. 1.3.0-debug with/without --snapshotMode=redo + 4GB memory\r\n2. 1.6.0-debug with/without --snapshotMode=redo + 8GB memory https://github.com/kubeflow/pipelines/pull/6226\r\n\r\nbut none of them run stably in reasonable amount of time.\r\n\r\nThe memory and timeout issues can be found upstream:\r\n* https://github.com/GoogleContainerTools/kaniko/issues/1680\r\n* https://github.com/GoogleContainerTools/kaniko/issues/1333\r\n\r\nbut they are both long standing, and no one still maintains the repo actively.\n", "before_files": [{"content": "# Copyright 2021 The Kubeflow Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Utility functions for enabling v2-compatible pipelines in v1.\"\"\"\nimport collections\nimport json\nfrom typing import Optional\n\nfrom kfp import dsl\nfrom kfp.compiler import _default_transformers\nfrom kfp.pipeline_spec import pipeline_spec_pb2\nfrom kfp.v2 import compiler\n\nfrom kubernetes import client as k8s_client\n\n_DEFAULT_LAUNCHER_IMAGE = \"gcr.io/ml-pipeline/kfp-launcher:1.6.6\"\n\n\ndef update_op(op: dsl.ContainerOp,\n pipeline_name: dsl.PipelineParam,\n pipeline_root: dsl.PipelineParam,\n launcher_image: Optional[str] = None) -> None:\n \"\"\"Updates the passed in Op for running in v2-compatible mode.\n\n Args:\n op: The Op to update.\n pipeline_spec: The PipelineSpec for the pipeline under which `op`\n runs.\n pipeline_root: The root output directory for pipeline artifacts.\n launcher_image: An optional launcher image. Useful for tests.\n \"\"\"\n op.is_v2 = True\n # Inject the launcher binary and overwrite the entrypoint.\n image_name = launcher_image or _DEFAULT_LAUNCHER_IMAGE\n launcher_container = dsl.UserContainer(name=\"kfp-launcher\",\n image=image_name,\n command=\"/bin/mount_launcher.sh\",\n mirror_volume_mounts=True)\n\n op.add_init_container(launcher_container)\n op.add_volume(k8s_client.V1Volume(name='kfp-launcher'))\n op.add_volume_mount(\n k8s_client.V1VolumeMount(name='kfp-launcher', mount_path='/kfp-launcher'))\n\n # op.command + op.args will have the following sections:\n # 1. args passed to kfp-launcher\n # 2. a separator \"--\"\n # 3. parameters in format \"key1=value1\", \"key2=value2\", ...\n # 4. a separator \"--\" as end of arguments passed to launcher\n # 5. 
(start of op.args) arguments of the original user program command + args\n #\n # example:\n # - command:\n # - /kfp-launcher/launch\n # - '--mlmd_server_address'\n # - $(METADATA_GRPC_SERVICE_HOST)\n # - '--mlmd_server_port'\n # - $(METADATA_GRPC_SERVICE_PORT)\n # - ... # more launcher params\n # - '--pipeline_task_id'\n # - $(KFP_POD_NAME)\n # - '--pipeline_root'\n # - ''\n # - '--' # start of parameter values\n # - first=first\n # - second=second\n # - '--' # start of user command and args\n # args:\n # - sh\n # - '-ec'\n # - |\n # program_path=$(mktemp)\n # printf \"%s\" \"$0\" > \"$program_path\"\n # python3 -u \"$program_path\" \"$@\"\n # - >\n # import json\n # import xxx\n # ...\n op.command = [\n \"/kfp-launcher/launch\",\n \"--mlmd_server_address\",\n \"$(METADATA_GRPC_SERVICE_HOST)\",\n \"--mlmd_server_port\",\n \"$(METADATA_GRPC_SERVICE_PORT)\",\n \"--runtime_info_json\",\n \"$(KFP_V2_RUNTIME_INFO)\",\n \"--container_image\",\n \"$(KFP_V2_IMAGE)\",\n \"--task_name\",\n op.name,\n \"--pipeline_name\",\n pipeline_name,\n \"--run_id\",\n \"$(KFP_RUN_ID)\",\n \"--run_resource\",\n \"workflows.argoproj.io/$(WORKFLOW_ID)\",\n \"--namespace\",\n \"$(KFP_NAMESPACE)\",\n \"--pod_name\",\n \"$(KFP_POD_NAME)\",\n \"--pod_uid\",\n \"$(KFP_POD_UID)\",\n \"--pipeline_root\",\n pipeline_root,\n \"--enable_caching\",\n \"$(ENABLE_CACHING)\",\n ]\n\n # Mount necessary environment variables.\n op.apply(_default_transformers.add_kfp_pod_env)\n op.container.add_env_variable(\n k8s_client.V1EnvVar(name=\"KFP_V2_IMAGE\", value=op.container.image))\n\n config_map_ref = k8s_client.V1ConfigMapEnvSource(\n name='metadata-grpc-configmap', optional=True)\n op.container.add_env_from(\n k8s_client.V1EnvFromSource(config_map_ref=config_map_ref))\n\n op.arguments = list(op.container_spec.command) + list(op.container_spec.args)\n\n runtime_info = {\n \"inputParameters\": collections.OrderedDict(),\n \"inputArtifacts\": collections.OrderedDict(),\n \"outputParameters\": collections.OrderedDict(),\n \"outputArtifacts\": collections.OrderedDict(),\n }\n\n op.command += [\"--\"]\n component_spec = op.component_spec\n for parameter, spec in sorted(\n component_spec.input_definitions.parameters.items()):\n parameter_info = {\n \"type\":\n pipeline_spec_pb2.PrimitiveType.PrimitiveTypeEnum.Name(spec.type),\n }\n op.command += [f\"{parameter}={op._parameter_arguments[parameter]}\"]\n runtime_info[\"inputParameters\"][parameter] = parameter_info\n op.command += [\"--\"]\n\n for artifact_name, spec in sorted(\n component_spec.input_definitions.artifacts.items()):\n artifact_info = {\n \"metadataPath\": op.input_artifact_paths[artifact_name],\n \"schemaTitle\": spec.artifact_type.schema_title,\n \"instanceSchema\": spec.artifact_type.instance_schema,\n }\n runtime_info[\"inputArtifacts\"][artifact_name] = artifact_info\n\n for parameter, spec in sorted(\n component_spec.output_definitions.parameters.items()):\n parameter_info = {\n \"type\":\n pipeline_spec_pb2.PrimitiveType.PrimitiveTypeEnum.Name(spec.type),\n \"path\":\n op.file_outputs[parameter],\n }\n runtime_info[\"outputParameters\"][parameter] = parameter_info\n\n for artifact_name, spec in sorted(\n component_spec.output_definitions.artifacts.items()):\n # TODO: Assert instance_schema.\n artifact_info = {\n # Type used to register output artifacts.\n \"schemaTitle\": spec.artifact_type.schema_title,\n \"instanceSchema\": spec.artifact_type.instance_schema,\n # File used to write out the registered artifact ID.\n \"metadataPath\": op.file_outputs[artifact_name],\n 
}\n runtime_info[\"outputArtifacts\"][artifact_name] = artifact_info\n\n op.container.add_env_variable(\n k8s_client.V1EnvVar(name=\"KFP_V2_RUNTIME_INFO\",\n value=json.dumps(runtime_info)))\n\n op.pod_annotations['pipelines.kubeflow.org/v2_component'] = \"true\"\n op.pod_labels['pipelines.kubeflow.org/v2_component']= \"true\"\n", "path": "sdk/python/kfp/compiler/v2_compat.py"}]} | 2,738 | 139 |
gh_patches_debug_14248 | rasdani/github-patches | git_diff | ivy-llc__ivy-21042 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
gumbel_softmax
add mindspore gumbel_softmax
- [gumbel_softmax] #21042
</issue>
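For reference, the request is for the standard Gumbel-softmax (concrete) relaxation: given logits $\log \pi_i$ and temperature $\tau$, the soft sample is

$$y_i = \frac{\exp\big((\log \pi_i + g_i)/\tau\big)}{\sum_j \exp\big((\log \pi_j + g_j)/\tau\big)}, \qquad g_i = -\log(-\log u_i),\quad u_i \sim \mathrm{Uniform}(0, 1),$$

and the `hard=True` variant additionally discretizes the result with a straight-through estimator.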
<code>
[start of ivy/functional/frontends/mindspore/ops/function/nn_func.py]
1 """Includes Mindspore Frontend functions listed in the TODO list
2 https://github.com/unifyai/ivy/issues/14951."""
3
4 # local
5 import ivy
6 from ivy.func_wrapper import with_supported_dtypes
7 from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back
8
9
10 def _broadcast_pooling_helper(x, pool_dims: str = "2d", name: str = "padding"):
11 dims = {"1d": 1, "2d": 2, "3d": 3}
12
13 if isinstance(x, int):
14 return tuple([x for _ in range(dims[pool_dims])])
15
16 if len(x) == 1:
17 return tuple([x[0] for _ in range(dims[pool_dims])])
18 elif len(x) == dims[pool_dims]:
19 return tuple(x)
20 elif len(x) != dims[pool_dims]:
21 raise ValueError(
22 f"`{name}` must either be a single int, "
23 f"or a tuple of {dims[pool_dims]} ints. "
24 )
25
26
27 @with_supported_dtypes(
28 {
29 "2.0.0 and below": (
30 "int8",
31 "int16",
32 "int32",
33 "int64",
34 "float16",
35 "float32",
36 "float64",
37 )
38 },
39 "mindspore",
40 )
41 @to_ivy_arrays_and_back
42 def dropout2d(input, p=0.5, training=True):
43 return ivy.dropout2d(input, p, training=training, data_format="NCHW")
44
45
46 @with_supported_dtypes({"2.0.0 and below": ("float16", "float32")}, "mindspore")
47 @to_ivy_arrays_and_back
48 def selu(input_x):
49 return ivy.selu(input_x)
50
51
52 @with_supported_dtypes({"2.0 and below": ("float16", "float32")}, "mindspore")
53 @to_ivy_arrays_and_back
54 def softsign(x):
55 return ivy.divide(x, ivy.add(1, ivy.abs(x)))
56
57
58 @with_supported_dtypes({"2.0.0 and below": ("float16", "float32")}, "mindspore")
59 @to_ivy_arrays_and_back
60 def log_softmax(input, axis=-1):
61 return ivy.log_softmax(input)
62
63
64 @with_supported_dtypes({"2.0 and below": ("float16", "float32")}, "mindspore")
65 @to_ivy_arrays_and_back
66 def kl_div(logits, labels, reduction="mean"):
67 """
68 Computes the Kullback-Leibler (KL) Divergence between the logits and the labels.
69
70 Parameters:
71 logits (numpy array): The input logits array.
72 labels (numpy array): The label array which has the same shape as logits.
73 reduction (str): Specifies the reduction to be applied to the output.
74 Its value must be one of 'none', 'mean', 'batchmean',
75 or 'sum'. Default: 'mean'.
76
77 Returns:
78 float or numpy array: If reduction is 'none', then output is
79 a numpy array and has the same shape as logits.
80 Otherwise, it is a scalar (float).
81 """
82 assert ivy.shape(logits) == ivy.shape(
83 labels
84 ), "logits and labels must have the same shape."
85 L = labels * (ivy.log(labels) - logits)
86 if reduction == "none":
87 return L
88 elif reduction == "mean":
89 return ivy.mean(L)
90 elif reduction == "batchmean":
91 return ivy.mean(L, axis=0)
92 elif reduction == "sum":
93 return ivy.sum(L)
94 else:
95 raise ValueError(
96 "Invalid reduction mode. Supported values are 'none', 'mean', 'batchmean',"
97 " or 'sum'."
98 )
99
100
101 @with_supported_dtypes(
102 {
103 "2.0.0 and below": (
104 "int8",
105 "int16",
106 "int32",
107 "int64",
108 "float16",
109 "float32",
110 "float64",
111 )
112 },
113 "mindspore",
114 )
115 @to_ivy_arrays_and_back
116 def dropout3d(input, p=0.5, training=True):
117 return ivy.dropout3d(input, p, training=training, data_format="NCDHW")
118
119
120 @with_supported_dtypes(
121 {
122 "2.0.0 and below": (
123 "int8",
124 "int16",
125 "int32",
126 "int64",
127 "float16",
128 "float32",
129 "float64",
130 )
131 },
132 "mindspore",
133 )
134 @to_ivy_arrays_and_back
135 def interpolate(
136 input,
137 size=None,
138 scale_factor=None,
139 mode="nearest",
140 align_corners=False,
141 recompute_scale_factor=False,
142 ):
143 return ivy.interpolate(
144 input,
145 size=size,
146 scale_factor=scale_factor,
147 mode=mode,
148 align_corners=align_corners,
149 recompute_scale_factor=recompute_scale_factor,
150 )
151
152
153 @with_supported_dtypes(
154 {
155 "2.0 and below": (
156 "int8",
157 "int16",
158 "int32",
159 "int64",
160 "float16",
161 "float32",
162 "float64",
163 )
164 },
165 "mindspore",
166 )
167 @to_ivy_arrays_and_back
168 def pad(input, pad_width, mode="constant", constant_values=0):
169 return ivy.pad(input, pad_width, mode=mode, constant_values=constant_values)
170
171
172 @with_supported_dtypes(
173 {"2.0.0 and below": ("float16", "float32", "float64")}, "mindspore"
174 )
175 @to_ivy_arrays_and_back
176 def adaptive_avg_pool2d(input, output_size):
177 return ivy.adaptive_avg_pool2d(input, output_size)
178
179
180 @to_ivy_arrays_and_back
181 def avg_pool2d(
182 input,
183 kernel_size,
184 stride=None,
185 padding=0,
186 pad_mode=False,
187 count_include_pad=True,
188 divisor_override=None,
189 ):
190 # Figure out input dims N
191 input_rank = input.ndim
192
193 if input_rank == 4:
194 # NCHW
195 data_format = "NCHW"
196
197 kernel_size = _broadcast_pooling_helper(kernel_size, "2d", name="kernel_size")
198 stride = _broadcast_pooling_helper(stride, "2d", name="stride")
199 padding = _broadcast_pooling_helper(padding, "2d", name="padding")
200 kernel_pads = list(zip(kernel_size, padding))
201
202 # Padding should be less than or equal to half of kernel size
203 if not all([pad <= kernel / 2 for kernel, pad in kernel_pads]):
204 raise ValueError(
205 "pad should be smaller than or equal to half of kernel size, "
206 f"but got padding={padding}, kernel_size={kernel_size}. "
207 )
208
209 # Figure out padding string
210 if all([pad == ivy.ceil((kernel - 1) / 2) for kernel, pad in kernel_pads]):
211 padding_str = "SAME"
212 else:
213 padding_str = "VALID"
214
215 return ivy.avg_pool2d(
216 input,
217 kernel_size,
218 stride,
219 padding_str,
220 data_format=data_format,
221 pad_mode=pad_mode,
222 count_include_pad=count_include_pad,
223 divisor_override=divisor_override,
224 )
225
226
227 @to_ivy_arrays_and_back
228 def flatten(input, order="C", *, start_dim=1, end_dim=-1):
229 return ivy.flatten(input, order=order, start_dim=start_dim, end_dim=end_dim)
230
231
232 @with_supported_dtypes(
233 {"2.0.0 and below": ("float16", "float32", "float64")},
234 "mindspore",
235 )
236 @to_ivy_arrays_and_back
237 def fast_gelu(input_x):
238 return (input_x / (1 + ivy.exp(-1.702 * ivy.abs(input_x)))) * ivy.exp(
239 0.851 * (input_x - ivy.abs(input_x))
240 )
241
[end of ivy/functional/frontends/mindspore/ops/function/nn_func.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/mindspore/ops/function/nn_func.py b/ivy/functional/frontends/mindspore/ops/function/nn_func.py
--- a/ivy/functional/frontends/mindspore/ops/function/nn_func.py
+++ b/ivy/functional/frontends/mindspore/ops/function/nn_func.py
@@ -238,3 +238,23 @@
return (input_x / (1 + ivy.exp(-1.702 * ivy.abs(input_x)))) * ivy.exp(
0.851 * (input_x - ivy.abs(input_x))
)
+
+
+@with_supported_dtypes({"2.0.0 and below": ("float16", "float32")}, "mindspore")
+@to_ivy_arrays_and_back
+def gumbel_softmax(logits, tau=1, hard=False, dim=-1):
+ gumbels = -ivy.empty_like(logits).exponential().log()
+ gumbels = (logits + gumbels) / tau
+ y_soft = ivy.softmax(gumbels, axis=dim)
+
+ if hard:
+ indices = y_soft.max(axis=dim, keepdims=True)[1]
+ y_hard = ivy.zeros_like(logits)
+ updates = ivy.ones_like(indices)
+ y_hard = ivy.scatter_nd(indices, updates, reduction="replace", out=y_hard)
+
+ ret = y_hard - y_soft.stop_gradient(preserve_type=True) + y_soft
+ else:
+ ret = y_soft
+
+ return ret
| {"golden_diff": "diff --git a/ivy/functional/frontends/mindspore/ops/function/nn_func.py b/ivy/functional/frontends/mindspore/ops/function/nn_func.py\n--- a/ivy/functional/frontends/mindspore/ops/function/nn_func.py\n+++ b/ivy/functional/frontends/mindspore/ops/function/nn_func.py\n@@ -238,3 +238,23 @@\n return (input_x / (1 + ivy.exp(-1.702 * ivy.abs(input_x)))) * ivy.exp(\n 0.851 * (input_x - ivy.abs(input_x))\n )\n+\n+\n+@with_supported_dtypes({\"2.0.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n+@to_ivy_arrays_and_back\n+def gumbel_softmax(logits, tau=1, hard=False, dim=-1):\n+ gumbels = -ivy.empty_like(logits).exponential().log()\n+ gumbels = (logits + gumbels) / tau\n+ y_soft = ivy.softmax(gumbels, axis=dim)\n+\n+ if hard:\n+ indices = y_soft.max(axis=dim, keepdims=True)[1]\n+ y_hard = ivy.zeros_like(logits)\n+ updates = ivy.ones_like(indices)\n+ y_hard = ivy.scatter_nd(indices, updates, reduction=\"replace\", out=y_hard)\n+\n+ ret = y_hard - y_soft.stop_gradient(preserve_type=True) + y_soft\n+ else:\n+ ret = y_soft\n+\n+ return ret\n", "issue": "gumble_softmax\nadd mindspore gumble_softmax\r\n\r\n- [gumble_softmax] #21042\n", "before_files": [{"content": "\"\"\"Includes Mindspore Frontend functions listed in the TODO list\nhttps://github.com/unifyai/ivy/issues/14951.\"\"\"\n\n# local\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\n\n\ndef _broadcast_pooling_helper(x, pool_dims: str = \"2d\", name: str = \"padding\"):\n dims = {\"1d\": 1, \"2d\": 2, \"3d\": 3}\n\n if isinstance(x, int):\n return tuple([x for _ in range(dims[pool_dims])])\n\n if len(x) == 1:\n return tuple([x[0] for _ in range(dims[pool_dims])])\n elif len(x) == dims[pool_dims]:\n return tuple(x)\n elif len(x) != dims[pool_dims]:\n raise ValueError(\n f\"`{name}` must either be a single int, \"\n f\"or a tuple of {dims[pool_dims]} ints. \"\n )\n\n\n@with_supported_dtypes(\n {\n \"2.0.0 and below\": (\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float16\",\n \"float32\",\n \"float64\",\n )\n },\n \"mindspore\",\n)\n@to_ivy_arrays_and_back\ndef dropout2d(input, p=0.5, training=True):\n return ivy.dropout2d(input, p, training=training, data_format=\"NCHW\")\n\n\n@with_supported_dtypes({\"2.0.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef selu(input_x):\n return ivy.selu(input_x)\n\n\n@with_supported_dtypes({\"2.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef softsign(x):\n return ivy.divide(x, ivy.add(1, ivy.abs(x)))\n\n\n@with_supported_dtypes({\"2.0.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef log_softmax(input, axis=-1):\n return ivy.log_softmax(input)\n\n\n@with_supported_dtypes({\"2.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef kl_div(logits, labels, reduction=\"mean\"):\n \"\"\"\n Computes the Kullback-Leibler (KL) Divergence between the logits and the labels.\n\n Parameters:\n logits (numpy array): The input logits array.\n labels (numpy array): The label array which has the same shape as logits.\n reduction (str): Specifies the reduction to be applied to the output.\n Its value must be one of 'none', 'mean', 'batchmean',\n or 'sum'. 
Default: 'mean'.\n\n Returns:\n float or numpy array: If reduction is 'none', then output is\n a numpy array and has the same shape as logits.\n Otherwise, it is a scalar (float).\n \"\"\"\n assert ivy.shape(logits) == ivy.shape(\n labels\n ), \"logits and labels must have the same shape.\"\n L = labels * (ivy.log(labels) - logits)\n if reduction == \"none\":\n return L\n elif reduction == \"mean\":\n return ivy.mean(L)\n elif reduction == \"batchmean\":\n return ivy.mean(L, axis=0)\n elif reduction == \"sum\":\n return ivy.sum(L)\n else:\n raise ValueError(\n \"Invalid reduction mode. Supported values are 'none', 'mean', 'batchmean',\"\n \" or 'sum'.\"\n )\n\n\n@with_supported_dtypes(\n {\n \"2.0.0 and below\": (\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float16\",\n \"float32\",\n \"float64\",\n )\n },\n \"mindspore\",\n)\n@to_ivy_arrays_and_back\ndef dropout3d(input, p=0.5, training=True):\n return ivy.dropout3d(input, p, training=training, data_format=\"NCDHW\")\n\n\n@with_supported_dtypes(\n {\n \"2.0.0 and below\": (\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float16\",\n \"float32\",\n \"float64\",\n )\n },\n \"mindspore\",\n)\n@to_ivy_arrays_and_back\ndef interpolate(\n input,\n size=None,\n scale_factor=None,\n mode=\"nearest\",\n align_corners=False,\n recompute_scale_factor=False,\n):\n return ivy.interpolate(\n input,\n size=size,\n scale_factor=scale_factor,\n mode=mode,\n align_corners=align_corners,\n recompute_scale_factor=recompute_scale_factor,\n )\n\n\n@with_supported_dtypes(\n {\n \"2.0 and below\": (\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float16\",\n \"float32\",\n \"float64\",\n )\n },\n \"mindspore\",\n)\n@to_ivy_arrays_and_back\ndef pad(input, pad_width, mode=\"constant\", constant_values=0):\n return ivy.pad(input, pad_width, mode=mode, constant_values=constant_values)\n\n\n@with_supported_dtypes(\n {\"2.0.0 and below\": (\"float16\", \"float32\", \"float64\")}, \"mindspore\"\n)\n@to_ivy_arrays_and_back\ndef adaptive_avg_pool2d(input, output_size):\n return ivy.adaptive_avg_pool2d(input, output_size)\n\n\n@to_ivy_arrays_and_back\ndef avg_pool2d(\n input,\n kernel_size,\n stride=None,\n padding=0,\n pad_mode=False,\n count_include_pad=True,\n divisor_override=None,\n):\n # Figure out input dims N\n input_rank = input.ndim\n\n if input_rank == 4:\n # NCHW\n data_format = \"NCHW\"\n\n kernel_size = _broadcast_pooling_helper(kernel_size, \"2d\", name=\"kernel_size\")\n stride = _broadcast_pooling_helper(stride, \"2d\", name=\"stride\")\n padding = _broadcast_pooling_helper(padding, \"2d\", name=\"padding\")\n kernel_pads = list(zip(kernel_size, padding))\n\n # Padding should be less than or equal to half of kernel size\n if not all([pad <= kernel / 2 for kernel, pad in kernel_pads]):\n raise ValueError(\n \"pad should be smaller than or equal to half of kernel size, \"\n f\"but got padding={padding}, kernel_size={kernel_size}. 
\"\n )\n\n # Figure out padding string\n if all([pad == ivy.ceil((kernel - 1) / 2) for kernel, pad in kernel_pads]):\n padding_str = \"SAME\"\n else:\n padding_str = \"VALID\"\n\n return ivy.avg_pool2d(\n input,\n kernel_size,\n stride,\n padding_str,\n data_format=data_format,\n pad_mode=pad_mode,\n count_include_pad=count_include_pad,\n divisor_override=divisor_override,\n )\n\n\n@to_ivy_arrays_and_back\ndef flatten(input, order=\"C\", *, start_dim=1, end_dim=-1):\n return ivy.flatten(input, order=order, start_dim=start_dim, end_dim=end_dim)\n\n\n@with_supported_dtypes(\n {\"2.0.0 and below\": (\"float16\", \"float32\", \"float64\")},\n \"mindspore\",\n)\n@to_ivy_arrays_and_back\ndef fast_gelu(input_x):\n return (input_x / (1 + ivy.exp(-1.702 * ivy.abs(input_x)))) * ivy.exp(\n 0.851 * (input_x - ivy.abs(input_x))\n )\n", "path": "ivy/functional/frontends/mindspore/ops/function/nn_func.py"}]} | 3,011 | 359 |
gh_patches_debug_4602 | rasdani/github-patches | git_diff | inventree__InvenTree-6300 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Plugin with git, but no commits causes all plugins to fail to load
### Please verify that this bug has NOT been raised before.
- [X] I checked and didn't find a similar issue
### Describe the bug*
Creating a plugin in `InvenTree/plugins/your_plugin/` with the proper files required, and then initializing a git repo in this directory, will cause all plugins to fail on the next reload of the server
### Steps to Reproduce
1. `mkdir InvenTree/plugins/someplugin && touch InvenTree/plugins/someplugin/__init__.py && cd InvenTree/plugins/someplugin && git init`
2. Start up the InvenTree instance
3. All plugins will be listed with "?" icons, and even installed plugins will be unavailable
### Expected behaviour
Ignore that no commits have been issued
### Deployment Method
- [X] Docker
- [X] Bare metal
### Version Information
# Version Information:
InvenTree-Version: 0.14.0 dev
Django Version: 3.2.23
Commit Hash: 5d018e8
Commit Date: 2024-01-15
Commit Branch: details-panel
Database: sqlite3
Debug-Mode: True
Deployed using Docker: True
Platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with
Installer: DOC
Active plugins: []
### Please verify if you can reproduce this bug on the demo site.
- [ ] I can reproduce this bug on the demo site.
### Relevant log output
```shell
a
```
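For what it's worth, the failure goes away if the HEAD lookup in `get_git_log()` tolerates a repo that has no commits yet. A rough sketch of the idea (the helper name below is just for illustration; dulwich raises `KeyError` from `repo.head()` in this situation, so that is the error to swallow alongside `NotGitRepository`):

```python
from dulwich.repo import NotGitRepository, Repo


def head_commit_or_none(path):
    """Return the HEAD commit, or None for a repo with no commits (or no repo at all)."""
    try:
        repo = Repo(path)
        # repo.head() raises KeyError when HEAD does not resolve to a commit yet
        return repo[repo.head()]
    except (NotGitRepository, KeyError):
        return None
```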
</issue>
<code>
[start of InvenTree/plugin/helpers.py]
1 """Helpers for plugin app."""
2
3 import inspect
4 import logging
5 import os
6 import pathlib
7 import pkgutil
8 import sysconfig
9 import traceback
10 from importlib.metadata import entry_points
11
12 from django import template
13 from django.conf import settings
14 from django.core.exceptions import AppRegistryNotReady
15 from django.db.utils import IntegrityError
16
17 logger = logging.getLogger('inventree')
18
19
20 # region logging / errors
21 class IntegrationPluginError(Exception):
22 """Error that encapsulates another error and adds the path / reference of the raising plugin."""
23
24 def __init__(self, path, message):
25 """Init a plugin error.
26
27 Args:
28 path: Path on which the error occurred - used to find out which plugin it was
29 message: The original error message
30 """
31 self.path = path
32 self.message = message
33
34 def __str__(self):
35 """Returns the error message."""
36 return self.message # pragma: no cover
37
38
39 class MixinImplementationError(ValueError):
40 """Error if mixin was implemented wrong in plugin.
41
42 Mostly raised if constant is missing
43 """
44
45 pass
46
47
48 class MixinNotImplementedError(NotImplementedError):
49 """Error if necessary mixin function was not overwritten."""
50
51 pass
52
53
54 def log_error(error, reference: str = 'general'):
55 """Log an plugin error."""
56 from plugin import registry
57
58 # make sure the registry is set up
59 if reference not in registry.errors:
60 registry.errors[reference] = []
61
62 # add error to stack
63 registry.errors[reference].append(error)
64
65
66 def handle_error(error, do_raise: bool = True, do_log: bool = True, log_name: str = ''):
67 """Handles an error and casts it as an IntegrationPluginError."""
68 package_path = traceback.extract_tb(error.__traceback__)[-1].filename
69 install_path = sysconfig.get_paths()['purelib']
70
71 try:
72 package_name = pathlib.Path(package_path).relative_to(install_path).parts[0]
73 except ValueError:
74 # is file - loaded -> form a name for that
75 try:
76 path_obj = pathlib.Path(package_path).relative_to(settings.BASE_DIR)
77 path_parts = [*path_obj.parts]
78 path_parts[-1] = path_parts[-1].replace(
79 path_obj.suffix, ''
80 ) # remove suffix
81
82 # remove path prefixes
83 if path_parts[0] == 'plugin':
84 path_parts.remove('plugin')
85 path_parts.pop(0)
86 else:
87 path_parts.remove('plugins') # pragma: no cover
88
89 package_name = '.'.join(path_parts)
90 except Exception:
91 package_name = package_path
92
93 if do_log:
94 log_kwargs = {}
95 if log_name:
96 log_kwargs['reference'] = log_name
97 log_error({package_name: str(error)}, **log_kwargs)
98
99 if do_raise:
100 # do a straight raise if we are playing with environment variables at execution time, ignore the broken sample
101 if (
102 settings.TESTING_ENV
103 and package_name != 'integration.broken_sample'
104 and isinstance(error, IntegrityError)
105 ):
106 raise error # pragma: no cover
107
108 raise IntegrationPluginError(package_name, str(error))
109
110
111 def get_entrypoints():
112 """Returns list for entrypoints for InvenTree plugins."""
113 return entry_points().get('inventree_plugins', [])
114
115
116 # endregion
117
118
119 # region git-helpers
120 def get_git_log(path):
121 """Get dict with info of the last commit to file named in path."""
122 import datetime
123
124 from dulwich.repo import NotGitRepository, Repo
125
126 from InvenTree.ready import isInTestMode
127
128 output = None
129 path = os.path.abspath(path)
130
131 if os.path.exists(path) and os.path.isfile(path):
132 path = os.path.dirname(path)
133
134 # only do this if we are not in test mode
135 if not isInTestMode(): # pragma: no cover
136 try:
137 repo = Repo(path)
138 head = repo.head()
139 commit = repo[head]
140
141 output = [
142 head.decode(),
143 commit.author.decode().split('<')[0][:-1],
144 commit.author.decode().split('<')[1][:-1],
145 datetime.datetime.fromtimestamp(commit.author_time).isoformat(),
146 commit.message.decode().split('\n')[0],
147 ]
148 except NotGitRepository:
149 pass
150
151 if not output:
152 output = 5 * [''] # pragma: no cover
153
154 return {
155 'hash': output[0],
156 'author': output[1],
157 'mail': output[2],
158 'date': output[3],
159 'message': output[4],
160 }
161
162
163 # endregion
164
165
166 # region plugin finders
167 def get_modules(pkg, path=None):
168 """Get all modules in a package."""
169 context = {}
170
171 if path is None:
172 path = pkg.__path__
173 elif type(path) is not list:
174 path = [path]
175
176 for loader, name, _ in pkgutil.walk_packages(path):
177 try:
178 module = loader.find_module(name).load_module(name)
179 pkg_names = getattr(module, '__all__', None)
180 for k, v in vars(module).items():
181 if not k.startswith('_') and (pkg_names is None or k in pkg_names):
182 context[k] = v
183 context[name] = module
184 except AppRegistryNotReady: # pragma: no cover
185 pass
186 except Exception as error:
187 # this 'protects' against malformed plugin modules by more or less silently failing
188
189 # log to stack
190 log_error({name: str(error)}, 'discovery')
191
192 return [v for k, v in context.items()]
193
194
195 def get_classes(module):
196 """Get all classes in a given module."""
197 return inspect.getmembers(module, inspect.isclass)
198
199
200 def get_plugins(pkg, baseclass, path=None):
201 """Return a list of all modules under a given package.
202
203 - Modules must be a subclass of the provided 'baseclass'
204 - Modules must have a non-empty NAME parameter
205 """
206 plugins = []
207
208 modules = get_modules(pkg, path=path)
209
210 # Iterate through each module in the package
211 for mod in modules:
212 # Iterate through each class in the module
213 for item in get_classes(mod):
214 plugin = item[1]
215 if issubclass(plugin, baseclass) and plugin.NAME:
216 plugins.append(plugin)
217
218 return plugins
219
220
221 # endregion
222
223
224 # region templates
225 def render_template(plugin, template_file, context=None):
226 """Locate and render a template file, available in the global template context."""
227 try:
228 tmp = template.loader.get_template(template_file)
229 except template.TemplateDoesNotExist:
230 logger.exception(
231 "Plugin %s could not locate template '%s'", plugin.slug, template_file
232 )
233
234 return f"""
235 <div class='alert alert-block alert-danger'>
236 Template file <em>{template_file}</em> does not exist.
237 </div>
238 """
239
240 # Render with the provided context
241 html = tmp.render(context)
242
243 return html
244
245
246 def render_text(text, context=None):
247 """Locate a raw string with provided context."""
248 ctx = template.Context(context)
249
250 return template.Template(text).render(ctx)
251
252
253 # endregion
254
[end of InvenTree/plugin/helpers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/InvenTree/plugin/helpers.py b/InvenTree/plugin/helpers.py
--- a/InvenTree/plugin/helpers.py
+++ b/InvenTree/plugin/helpers.py
@@ -145,6 +145,8 @@
datetime.datetime.fromtimestamp(commit.author_time).isoformat(),
commit.message.decode().split('\n')[0],
]
+ except KeyError as err:
+ logger.debug('No HEAD tag found in git repo at path %s', path)
except NotGitRepository:
pass
| {"golden_diff": "diff --git a/InvenTree/plugin/helpers.py b/InvenTree/plugin/helpers.py\n--- a/InvenTree/plugin/helpers.py\n+++ b/InvenTree/plugin/helpers.py\n@@ -145,6 +145,8 @@\n datetime.datetime.fromtimestamp(commit.author_time).isoformat(),\n commit.message.decode().split('\\n')[0],\n ]\n+ except KeyError as err:\n+ logger.debug('No HEAD tag found in git repo at path %s', path)\n except NotGitRepository:\n pass\n", "issue": "Plugin with git, but no commits causes all plugins to fail to load\n### Please verify that this bug has NOT been raised before.\n\n- [X] I checked and didn't find a similar issue\n\n### Describe the bug*\n\nCreating a plugin in `InvenTree/plugins/your_plugin/` with the proper files required, and then initiating a git repo in this directory will cause all plugins to fail on the next reload of the server\n\n### Steps to Reproduce\n\n1. `mkdir InvenTree/plugins/someplugin && touch InvenTree/plugins/someplugin/__init__.py && cd InvenTree/plugins/someplugin && git init`\r\n2. 2. Start up the InvenTree instance\r\n3. All plugins will be listed with \"?\" icons, and even installed plugins will be unavailable\n\n### Expected behaviour\n\nIgnore that no commits have been issues\n\n### Deployment Method\n\n- [X] Docker\n- [X] Bare metal\n\n### Version Information\n\n# Version Information:\r\nInvenTree-Version: 0.14.0 dev\r\nDjango Version: 3.2.23\r\nCommit Hash: 5d018e8\r\nCommit Date: 2024-01-15\r\nCommit Branch: details-panel\r\nDatabase: sqlite3\r\nDebug-Mode: True\r\nDeployed using Docker: True\r\nPlatform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with\r\nInstaller: DOC\r\n\r\nActive plugins: []\r\n\n\n### Please verify if you can reproduce this bug on the demo site.\n\n- [ ] I can reproduce this bug on the demo site.\n\n### Relevant log output\n\n```shell\na\n```\n\n", "before_files": [{"content": "\"\"\"Helpers for plugin app.\"\"\"\n\nimport inspect\nimport logging\nimport os\nimport pathlib\nimport pkgutil\nimport sysconfig\nimport traceback\nfrom importlib.metadata import entry_points\n\nfrom django import template\nfrom django.conf import settings\nfrom django.core.exceptions import AppRegistryNotReady\nfrom django.db.utils import IntegrityError\n\nlogger = logging.getLogger('inventree')\n\n\n# region logging / errors\nclass IntegrationPluginError(Exception):\n \"\"\"Error that encapsulates another error and adds the path / reference of the raising plugin.\"\"\"\n\n def __init__(self, path, message):\n \"\"\"Init a plugin error.\n\n Args:\n path: Path on which the error occurred - used to find out which plugin it was\n message: The original error message\n \"\"\"\n self.path = path\n self.message = message\n\n def __str__(self):\n \"\"\"Returns the error message.\"\"\"\n return self.message # pragma: no cover\n\n\nclass MixinImplementationError(ValueError):\n \"\"\"Error if mixin was implemented wrong in plugin.\n\n Mostly raised if constant is missing\n \"\"\"\n\n pass\n\n\nclass MixinNotImplementedError(NotImplementedError):\n \"\"\"Error if necessary mixin function was not overwritten.\"\"\"\n\n pass\n\n\ndef log_error(error, reference: str = 'general'):\n \"\"\"Log an plugin error.\"\"\"\n from plugin import registry\n\n # make sure the registry is set up\n if reference not in registry.errors:\n registry.errors[reference] = []\n\n # add error to stack\n registry.errors[reference].append(error)\n\n\ndef handle_error(error, do_raise: bool = True, do_log: bool = True, log_name: str = ''):\n \"\"\"Handles an error and casts it as an 
IntegrationPluginError.\"\"\"\n package_path = traceback.extract_tb(error.__traceback__)[-1].filename\n install_path = sysconfig.get_paths()['purelib']\n\n try:\n package_name = pathlib.Path(package_path).relative_to(install_path).parts[0]\n except ValueError:\n # is file - loaded -> form a name for that\n try:\n path_obj = pathlib.Path(package_path).relative_to(settings.BASE_DIR)\n path_parts = [*path_obj.parts]\n path_parts[-1] = path_parts[-1].replace(\n path_obj.suffix, ''\n ) # remove suffix\n\n # remove path prefixes\n if path_parts[0] == 'plugin':\n path_parts.remove('plugin')\n path_parts.pop(0)\n else:\n path_parts.remove('plugins') # pragma: no cover\n\n package_name = '.'.join(path_parts)\n except Exception:\n package_name = package_path\n\n if do_log:\n log_kwargs = {}\n if log_name:\n log_kwargs['reference'] = log_name\n log_error({package_name: str(error)}, **log_kwargs)\n\n if do_raise:\n # do a straight raise if we are playing with environment variables at execution time, ignore the broken sample\n if (\n settings.TESTING_ENV\n and package_name != 'integration.broken_sample'\n and isinstance(error, IntegrityError)\n ):\n raise error # pragma: no cover\n\n raise IntegrationPluginError(package_name, str(error))\n\n\ndef get_entrypoints():\n \"\"\"Returns list for entrypoints for InvenTree plugins.\"\"\"\n return entry_points().get('inventree_plugins', [])\n\n\n# endregion\n\n\n# region git-helpers\ndef get_git_log(path):\n \"\"\"Get dict with info of the last commit to file named in path.\"\"\"\n import datetime\n\n from dulwich.repo import NotGitRepository, Repo\n\n from InvenTree.ready import isInTestMode\n\n output = None\n path = os.path.abspath(path)\n\n if os.path.exists(path) and os.path.isfile(path):\n path = os.path.dirname(path)\n\n # only do this if we are not in test mode\n if not isInTestMode(): # pragma: no cover\n try:\n repo = Repo(path)\n head = repo.head()\n commit = repo[head]\n\n output = [\n head.decode(),\n commit.author.decode().split('<')[0][:-1],\n commit.author.decode().split('<')[1][:-1],\n datetime.datetime.fromtimestamp(commit.author_time).isoformat(),\n commit.message.decode().split('\\n')[0],\n ]\n except NotGitRepository:\n pass\n\n if not output:\n output = 5 * [''] # pragma: no cover\n\n return {\n 'hash': output[0],\n 'author': output[1],\n 'mail': output[2],\n 'date': output[3],\n 'message': output[4],\n }\n\n\n# endregion\n\n\n# region plugin finders\ndef get_modules(pkg, path=None):\n \"\"\"Get all modules in a package.\"\"\"\n context = {}\n\n if path is None:\n path = pkg.__path__\n elif type(path) is not list:\n path = [path]\n\n for loader, name, _ in pkgutil.walk_packages(path):\n try:\n module = loader.find_module(name).load_module(name)\n pkg_names = getattr(module, '__all__', None)\n for k, v in vars(module).items():\n if not k.startswith('_') and (pkg_names is None or k in pkg_names):\n context[k] = v\n context[name] = module\n except AppRegistryNotReady: # pragma: no cover\n pass\n except Exception as error:\n # this 'protects' against malformed plugin modules by more or less silently failing\n\n # log to stack\n log_error({name: str(error)}, 'discovery')\n\n return [v for k, v in context.items()]\n\n\ndef get_classes(module):\n \"\"\"Get all classes in a given module.\"\"\"\n return inspect.getmembers(module, inspect.isclass)\n\n\ndef get_plugins(pkg, baseclass, path=None):\n \"\"\"Return a list of all modules under a given package.\n\n - Modules must be a subclass of the provided 'baseclass'\n - Modules must have a non-empty 
NAME parameter\n \"\"\"\n plugins = []\n\n modules = get_modules(pkg, path=path)\n\n # Iterate through each module in the package\n for mod in modules:\n # Iterate through each class in the module\n for item in get_classes(mod):\n plugin = item[1]\n if issubclass(plugin, baseclass) and plugin.NAME:\n plugins.append(plugin)\n\n return plugins\n\n\n# endregion\n\n\n# region templates\ndef render_template(plugin, template_file, context=None):\n \"\"\"Locate and render a template file, available in the global template context.\"\"\"\n try:\n tmp = template.loader.get_template(template_file)\n except template.TemplateDoesNotExist:\n logger.exception(\n \"Plugin %s could not locate template '%s'\", plugin.slug, template_file\n )\n\n return f\"\"\"\n <div class='alert alert-block alert-danger'>\n Template file <em>{template_file}</em> does not exist.\n </div>\n \"\"\"\n\n # Render with the provided context\n html = tmp.render(context)\n\n return html\n\n\ndef render_text(text, context=None):\n \"\"\"Locate a raw string with provided context.\"\"\"\n ctx = template.Context(context)\n\n return template.Template(text).render(ctx)\n\n\n# endregion\n", "path": "InvenTree/plugin/helpers.py"}]} | 3,127 | 114 |
gh_patches_debug_163 | rasdani/github-patches | git_diff | fedora-infra__bodhi-1935 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The CI yaml file is invalid yaml
I noticed today that our CentOS CI service jobs have been failing for a week or two due to the yaml being invalid:
```
>>> with open('devel/ci/githubprb-project.yml') as yml:
... a = yaml.load(yml.read())
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/usr/lib64/python2.7/site-packages/yaml/__init__.py", line 71, in load
return loader.get_single_data()
File "/usr/lib64/python2.7/site-packages/yaml/constructor.py", line 37, in get_single_data
node = self.get_single_node()
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 36, in get_single_node
document = self.compose_document()
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 55, in compose_document
node = self.compose_node(None, None)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 82, in compose_node
node = self.compose_sequence_node(anchor)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 111, in compose_sequence_node
node.value.append(self.compose_node(node, index))
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 84, in compose_node
node = self.compose_mapping_node(anchor)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 133, in compose_mapping_node
item_value = self.compose_node(node, item_key)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 84, in compose_node
node = self.compose_mapping_node(anchor)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 133, in compose_mapping_node
item_value = self.compose_node(node, item_key)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 82, in compose_node
node = self.compose_sequence_node(anchor)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 111, in compose_sequence_node
node.value.append(self.compose_node(node, index))
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 84, in compose_node
node = self.compose_mapping_node(anchor)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 133, in compose_mapping_node
item_value = self.compose_node(node, item_key)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 84, in compose_node
node = self.compose_mapping_node(anchor)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 127, in compose_mapping_node
while not self.check_event(MappingEndEvent):
File "/usr/lib64/python2.7/site-packages/yaml/parser.py", line 98, in check_event
self.current_event = self.state()
File "/usr/lib64/python2.7/site-packages/yaml/parser.py", line 428, in parse_block_mapping_key
if self.check_token(KeyToken):
File "/usr/lib64/python2.7/site-packages/yaml/scanner.py", line 116, in check_token
self.fetch_more_tokens()
File "/usr/lib64/python2.7/site-packages/yaml/scanner.py", line 220, in fetch_more_tokens
return self.fetch_value()
File "/usr/lib64/python2.7/site-packages/yaml/scanner.py", line 576, in fetch_value
self.get_mark())
yaml.scanner.ScannerError: mapping values are not allowed here
in "<string>", line 20, column 99:
... ase review the Jenkins job. Hint: You can search for "JENKIES FA ...
^
```
I personally am responsible, since I made https://github.com/fedora-infra/bodhi/commit/791d4e3ea98d252daa6fb4856cb394eb8b07d0b3. Shame!
Anyways, it's easy to fix and we should add a test that ensures the YAML is at least parseable.
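Something along these lines would be enough to catch it in CI (just a sketch; the test name and the glob over `devel/ci/` are my suggestion, not existing code):

```python
import glob

import yaml


def test_ci_yaml_files_are_parseable():
    """The yaml files under devel/ci/ should at least load without errors."""
    for path in glob.glob('devel/ci/*.yml'):
        with open(path) as yml:
            yaml.safe_load(yml.read())
```

That would also mean adding pyyaml to the test requirements.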
</issue>
<code>
[start of setup.py]
1 import __main__
2 __requires__ = __main__.__requires__ = 'WebOb>=1.4.1'
3 import pkg_resources # noqa
4
5 # The following two imports are required to shut up an
6 # atexit error when running tests with python 2.7
7 from setuptools import setup, find_packages # noqa
8 import logging # noqa
9 import multiprocessing # noqa
10 import os # noqa
11 import setuptools.command.egg_info # noqa
12 import sys # noqa
13
14
15 def get_requirements(requirements_file='requirements.txt'):
16 """
17 Get the contents of a file listing the requirements.
18
19 Args:
20 requirements_file (str): path to a requirements file
21
22 Returns:
23 list: the list of requirements, or an empty list if
24 `requirements_file` could not be opened or read
25 """
26 lines = open(requirements_file).readlines()
27 dependencies = []
28 for line in lines:
29 maybe_dep = line.strip()
30 if maybe_dep.startswith('#'):
31 # Skip pure comment lines
32 continue
33 if maybe_dep.startswith('git+'):
34 # VCS reference for dev purposes, expect a trailing comment
35 # with the normal requirement
36 __, __, maybe_dep = maybe_dep.rpartition('#')
37 else:
38 # Ignore any trailing comment
39 maybe_dep, __, __ = maybe_dep.partition('#')
40 # Remove any whitespace and assume non-empty results are dependencies
41 maybe_dep = maybe_dep.strip()
42 if maybe_dep:
43 dependencies.append(maybe_dep)
44 return dependencies
45
46
47 here = os.path.abspath(os.path.dirname(__file__))
48 README = open(os.path.join(here, 'README.rst')).read()
49 VERSION = '3.0.0'
50 # Possible options are at https://pypi.python.org/pypi?%3Aaction=list_classifiers
51 CLASSIFIERS = [
52 'Development Status :: 5 - Production/Stable',
53 'Intended Audience :: Developers',
54 'Intended Audience :: System Administrators',
55 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',
56 'Operating System :: POSIX :: Linux',
57 'Programming Language :: Python :: 2.7',
58 'Topic :: System :: Software Distribution']
59 LICENSE = 'GPLv2+'
60 MAINTAINER = 'Fedora Infrastructure Team'
61 MAINTAINER_EMAIL = '[email protected]'
62 PLATFORMS = ['Fedora', 'GNU/Linux']
63 URL = 'https://github.com/fedora-infra/bodhi'
64
65
66 setuptools.command.egg_info.manifest_maker.template = 'BODHI_MANIFEST.in'
67
68
69 setup(
70 name='bodhi',
71 version=VERSION,
72 description='bodhi common package',
73 long_description=README,
74 classifiers=CLASSIFIERS,
75 license=LICENSE,
76 maintainer=MAINTAINER,
77 maintainer_email=MAINTAINER_EMAIL,
78 platforms=PLATFORMS,
79 url=URL,
80 keywords='fedora',
81 packages=['bodhi'],
82 include_package_data=True,
83 zip_safe=False,
84 install_requires=[],
85 tests_require=[
86 'flake8',
87 'pytest',
88 'pytest-cov',
89 'webtest',
90 'mock',
91 ],
92 )
93
94
95 setuptools.command.egg_info.manifest_maker.template = 'CLIENT_MANIFEST.in'
96
97
98 setup(
99 name='bodhi-client',
100 version=VERSION,
101 description='bodhi client',
102 long_description=README,
103 classifiers=CLASSIFIERS,
104 license=LICENSE,
105 maintainer=MAINTAINER,
106 maintainer_email=MAINTAINER_EMAIL,
107 platforms=PLATFORMS,
108 url=URL,
109 keywords='fedora',
110 packages=['bodhi.client'],
111 include_package_data=False,
112 zip_safe=False,
113 install_requires=['click', 'iniparse', 'python-fedora >= 0.9.0', 'six'],
114 entry_points="""\
115 [console_scripts]
116 bodhi = bodhi.client:cli
117 """)
118
119
120 setuptools.command.egg_info.manifest_maker.template = 'SERVER_MANIFEST.in'
121 # Due to https://github.com/pypa/setuptools/issues/808, we need to include the bodhi superpackage
122 # and then remove it if we want find_packages() to find the bodhi.server package and its
123 # subpackages without including the bodhi top level package.
124 server_packages = find_packages(
125 exclude=['bodhi.client', 'bodhi.client.*', 'bodhi.tests', 'bodhi.tests.*'])
126 server_packages.remove('bodhi')
127
128
129 setup(
130 name='bodhi-server',
131 version=VERSION,
132 description='bodhi server',
133 long_description=README,
134 classifiers=CLASSIFIERS + [
135 'Framework :: Pyramid',
136 'Programming Language :: JavaScript',
137 "Topic :: Internet :: WWW/HTTP",
138 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application"],
139 license=LICENSE,
140 maintainer=MAINTAINER,
141 maintainer_email=MAINTAINER_EMAIL,
142 platforms=PLATFORMS,
143 url=URL,
144 keywords='web fedora pyramid',
145 packages=server_packages,
146 include_package_data=True,
147 zip_safe=False,
148 install_requires=get_requirements(),
149 message_extractors={'.': []},
150 entry_points="""\
151 [paste.app_factory]
152 main = bodhi.server:main
153 [console_scripts]
154 initialize_bodhi_db = bodhi.server.scripts.initializedb:main
155 bodhi-clean-old-mashes = bodhi.server.scripts.clean_old_mashes:clean_up
156 bodhi-dequeue-stable = bodhi.server.scripts.dequeue_stable:dequeue_stable
157 bodhi-push = bodhi.server.push:push
158 bodhi-expire-overrides = bodhi.server.scripts.expire_overrides:main
159 bodhi-untag-branched = bodhi.server.scripts.untag_branched:main
160 bodhi-approve-testing = bodhi.server.scripts.approve_testing:main
161 bodhi-manage-releases = bodhi.server.scripts.manage_releases:main
162 bodhi-check-policies = bodhi.server.scripts.check_policies:check
163 [moksha.consumer]
164 masher = bodhi.server.consumers.masher:Masher
165 updates = bodhi.server.consumers.updates:UpdatesHandler
166 signed = bodhi.server.consumers.signed:SignedHandler
167 """,
168 paster_plugins=['pyramid'])
169
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -86,6 +86,7 @@
'flake8',
'pytest',
'pytest-cov',
+ 'pyyaml',
'webtest',
'mock',
],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -86,6 +86,7 @@\n 'flake8',\n 'pytest',\n 'pytest-cov',\n+ 'pyyaml',\n 'webtest',\n 'mock',\n ],\n", "issue": "The CI yaml file is invalid yaml\nI noticed today that our CentOS CI service jobs have been failing for a week or two due to the yaml being invalid:\r\n\r\n```\r\n>>> with open('devel/ci/githubprb-project.yml') as yml:\r\n... a = yaml.load(yml.read()) \r\n... \r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 2, in <module>\r\n File \"/usr/lib64/python2.7/site-packages/yaml/__init__.py\", line 71, in load\r\n return loader.get_single_data()\r\n File \"/usr/lib64/python2.7/site-packages/yaml/constructor.py\", line 37, in get_single_data\r\n node = self.get_single_node()\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 36, in get_single_node\r\n document = self.compose_document()\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 55, in compose_document\r\n node = self.compose_node(None, None)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 82, in compose_node\r\n node = self.compose_sequence_node(anchor)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 111, in compose_sequence_node\r\n node.value.append(self.compose_node(node, index))\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 84, in compose_node\r\n node = self.compose_mapping_node(anchor)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 133, in compose_mapping_node\r\n item_value = self.compose_node(node, item_key)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 84, in compose_node\r\n node = self.compose_mapping_node(anchor)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 133, in compose_mapping_node\r\n item_value = self.compose_node(node, item_key)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 82, in compose_node\r\n node = self.compose_sequence_node(anchor)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 111, in compose_sequence_node\r\n node.value.append(self.compose_node(node, index))\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 84, in compose_node\r\n node = self.compose_mapping_node(anchor)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 133, in compose_mapping_node\r\n item_value = self.compose_node(node, item_key)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 84, in compose_node\r\n node = self.compose_mapping_node(anchor)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 127, in compose_mapping_node\r\n while not self.check_event(MappingEndEvent):\r\n File \"/usr/lib64/python2.7/site-packages/yaml/parser.py\", line 98, in check_event\r\n self.current_event = self.state()\r\n File \"/usr/lib64/python2.7/site-packages/yaml/parser.py\", line 428, in parse_block_mapping_key\r\n if self.check_token(KeyToken):\r\n File \"/usr/lib64/python2.7/site-packages/yaml/scanner.py\", line 116, in check_token\r\n self.fetch_more_tokens()\r\n File \"/usr/lib64/python2.7/site-packages/yaml/scanner.py\", line 220, in fetch_more_tokens\r\n return self.fetch_value()\r\n File \"/usr/lib64/python2.7/site-packages/yaml/scanner.py\", line 576, in fetch_value\r\n self.get_mark())\r\nyaml.scanner.ScannerError: mapping values are not allowed here\r\n in \"<string>\", line 20, column 99:\r\n ... ase review the Jenkins job. 
Hint: You can search for \"JENKIES FA ... \r\n ^\r\n```\r\n\r\nI personally am responsible, when I made https://github.com/fedora-infra/bodhi/commit/791d4e3ea98d252daa6fb4856cb394eb8b07d0b3. Shame!\r\n\r\nAnywyays, it's easy to fix and we should add a test that ensures the YAML is at least parseable.\n", "before_files": [{"content": "import __main__\n__requires__ = __main__.__requires__ = 'WebOb>=1.4.1'\nimport pkg_resources # noqa\n\n# The following two imports are required to shut up an\n# atexit error when running tests with python 2.7\nfrom setuptools import setup, find_packages # noqa\nimport logging # noqa\nimport multiprocessing # noqa\nimport os # noqa\nimport setuptools.command.egg_info # noqa\nimport sys # noqa\n\n\ndef get_requirements(requirements_file='requirements.txt'):\n \"\"\"\n Get the contents of a file listing the requirements.\n\n Args:\n requirements_file (str): path to a requirements file\n\n Returns:\n list: the list of requirements, or an empty list if\n `requirements_file` could not be opened or read\n \"\"\"\n lines = open(requirements_file).readlines()\n dependencies = []\n for line in lines:\n maybe_dep = line.strip()\n if maybe_dep.startswith('#'):\n # Skip pure comment lines\n continue\n if maybe_dep.startswith('git+'):\n # VCS reference for dev purposes, expect a trailing comment\n # with the normal requirement\n __, __, maybe_dep = maybe_dep.rpartition('#')\n else:\n # Ignore any trailing comment\n maybe_dep, __, __ = maybe_dep.partition('#')\n # Remove any whitespace and assume non-empty results are dependencies\n maybe_dep = maybe_dep.strip()\n if maybe_dep:\n dependencies.append(maybe_dep)\n return dependencies\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\nREADME = open(os.path.join(here, 'README.rst')).read()\nVERSION = '3.0.0'\n# Possible options are at https://pypi.python.org/pypi?%3Aaction=list_classifiers\nCLASSIFIERS = [\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: System :: Software Distribution']\nLICENSE = 'GPLv2+'\nMAINTAINER = 'Fedora Infrastructure Team'\nMAINTAINER_EMAIL = '[email protected]'\nPLATFORMS = ['Fedora', 'GNU/Linux']\nURL = 'https://github.com/fedora-infra/bodhi'\n\n\nsetuptools.command.egg_info.manifest_maker.template = 'BODHI_MANIFEST.in'\n\n\nsetup(\n name='bodhi',\n version=VERSION,\n description='bodhi common package',\n long_description=README,\n classifiers=CLASSIFIERS,\n license=LICENSE,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n platforms=PLATFORMS,\n url=URL,\n keywords='fedora',\n packages=['bodhi'],\n include_package_data=True,\n zip_safe=False,\n install_requires=[],\n tests_require=[\n 'flake8',\n 'pytest',\n 'pytest-cov',\n 'webtest',\n 'mock',\n ],\n)\n\n\nsetuptools.command.egg_info.manifest_maker.template = 'CLIENT_MANIFEST.in'\n\n\nsetup(\n name='bodhi-client',\n version=VERSION,\n description='bodhi client',\n long_description=README,\n classifiers=CLASSIFIERS,\n license=LICENSE,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n platforms=PLATFORMS,\n url=URL,\n keywords='fedora',\n packages=['bodhi.client'],\n include_package_data=False,\n zip_safe=False,\n install_requires=['click', 'iniparse', 'python-fedora >= 0.9.0', 'six'],\n entry_points=\"\"\"\\\n [console_scripts]\n bodhi = bodhi.client:cli\n 
\"\"\")\n\n\nsetuptools.command.egg_info.manifest_maker.template = 'SERVER_MANIFEST.in'\n# Due to https://github.com/pypa/setuptools/issues/808, we need to include the bodhi superpackage\n# and then remove it if we want find_packages() to find the bodhi.server package and its\n# subpackages without including the bodhi top level package.\nserver_packages = find_packages(\n exclude=['bodhi.client', 'bodhi.client.*', 'bodhi.tests', 'bodhi.tests.*'])\nserver_packages.remove('bodhi')\n\n\nsetup(\n name='bodhi-server',\n version=VERSION,\n description='bodhi server',\n long_description=README,\n classifiers=CLASSIFIERS + [\n 'Framework :: Pyramid',\n 'Programming Language :: JavaScript',\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\"],\n license=LICENSE,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n platforms=PLATFORMS,\n url=URL,\n keywords='web fedora pyramid',\n packages=server_packages,\n include_package_data=True,\n zip_safe=False,\n install_requires=get_requirements(),\n message_extractors={'.': []},\n entry_points=\"\"\"\\\n [paste.app_factory]\n main = bodhi.server:main\n [console_scripts]\n initialize_bodhi_db = bodhi.server.scripts.initializedb:main\n bodhi-clean-old-mashes = bodhi.server.scripts.clean_old_mashes:clean_up\n bodhi-dequeue-stable = bodhi.server.scripts.dequeue_stable:dequeue_stable\n bodhi-push = bodhi.server.push:push\n bodhi-expire-overrides = bodhi.server.scripts.expire_overrides:main\n bodhi-untag-branched = bodhi.server.scripts.untag_branched:main\n bodhi-approve-testing = bodhi.server.scripts.approve_testing:main\n bodhi-manage-releases = bodhi.server.scripts.manage_releases:main\n bodhi-check-policies = bodhi.server.scripts.check_policies:check\n [moksha.consumer]\n masher = bodhi.server.consumers.masher:Masher\n updates = bodhi.server.consumers.updates:UpdatesHandler\n signed = bodhi.server.consumers.signed:SignedHandler\n \"\"\",\n paster_plugins=['pyramid'])\n", "path": "setup.py"}]} | 3,335 | 64 |
gh_patches_debug_37125 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-546 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
scout breaks caching for django
We gave scout a try and it looked promising, but we quickly had to disable it in production.
The issue is that scout is adding a `Vary: Cookie` header, which breaks caching.
This normally happens when some code in django accesses the request.user object. I'm assuming scout is trying to add some metadata and accesses it, causing this issue.
We've run into this problem ourselves in the past, and the way around it is to look for the internal cached user on the request object. Like this:
```
# going request.user will generate cookie vary headers, but since
# we aren't changing the output based on this we want to see the user
# without adding the header, so look for the lazy user
if request and hasattr(request, '_cached_user'):
# noinspection PyProtectedMember
user = request._cached_user
```
I think this would be fixed if the check for the user were replaced here:
https://github.com/scoutapp/scout_apm_python/blob/master/src/scout_apm/django/middleware.py#L139
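Roughly like this, reusing the names from that function (just a sketch of the suggestion, not a tested patch):

```python
# Only look at the user object Django has already cached on the request;
# touching request.user directly triggers session access and adds
# "Cookie" to the Vary header.
user = getattr(request, "_cached_user", None)
if user is not None:
    try:
        tracked_request.tag("username", user.get_username())
    except Exception:
        pass
```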
</issue>
<code>
[start of src/scout_apm/django/middleware.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import sys
5
6 import django
7 from django.conf import settings
8
9 from scout_apm.compat import string_types
10 from scout_apm.core.config import scout_config
11 from scout_apm.core.tracked_request import TrackedRequest
12 from scout_apm.core.web_requests import (
13 create_filtered_path,
14 ignore_path,
15 track_amazon_request_queue_time,
16 track_request_queue_time,
17 )
18
19 if django.VERSION >= (1, 11):
20 from django.urls import get_urlconf
21 else:
22 from django.core.urlresolvers import get_urlconf
23
24
25 def get_operation_name(request):
26 view_func = request.resolver_match.func
27 view_name = request.resolver_match._func_path
28
29 if hasattr(view_func, "model_admin"):
30 # Seems to comes from Django admin (attribute only set on Django 1.9+)
31 admin_class = view_func.model_admin.__class__
32 view_name = (
33 admin_class.__module__
34 + "."
35 + admin_class.__name__
36 + "."
37 + view_func.__name__
38 )
39
40 django_rest_framework_name = _get_django_rest_framework_name(
41 request, view_func, view_name
42 )
43 if django_rest_framework_name is not None:
44 return django_rest_framework_name
45
46 # Seems to be a Tastypie Resource. Need to resort to some stack inspection
47 # to find a better name since its decorators don't wrap very well
48 if view_name == "tastypie.resources.wrapper":
49 tastypie_name = _get_tastypie_operation_name(request, view_func)
50 if tastypie_name is not None:
51 return tastypie_name
52
53 return "Controller/" + view_name
54
55
56 def _get_django_rest_framework_name(request, view_func, view_name):
57 try:
58 from rest_framework.viewsets import ViewSetMixin
59 except ImportError:
60 return None
61
62 kls = getattr(view_func, "cls", None)
63 if isinstance(kls, type) and not issubclass(kls, ViewSetMixin):
64 return None
65
66 # Get 'actions' set in ViewSetMixin.as_view
67 actions = getattr(view_func, "actions", None)
68 if not actions or not isinstance(actions, dict):
69 return None
70
71 method_lower = request.method.lower()
72 if method_lower not in actions:
73 return None
74
75 return "Controller/{}.{}".format(view_name, actions[method_lower])
76
77
78 def _get_tastypie_operation_name(request, view_func):
79 try:
80 from tastypie.resources import Resource
81 except ImportError:
82 return None
83
84 if sys.version_info[0] == 2: # pragma: no cover
85 try:
86 wrapper = view_func.__closure__[0].cell_contents
87 except (AttributeError, IndexError):
88 return None
89 elif sys.version_info[0] == 3:
90 try:
91 wrapper = view_func.__wrapped__
92 except AttributeError:
93 return None
94
95 if not hasattr(wrapper, "__closure__") or len(wrapper.__closure__) != 2:
96 return None
97
98 instance = wrapper.__closure__[0].cell_contents
99 if not isinstance(instance, Resource): # pragma: no cover
100 return None
101
102 method_name = wrapper.__closure__[1].cell_contents
103 if not isinstance(method_name, string_types): # pragma: no cover
104 return None
105
106 if method_name.startswith("dispatch_"): # pragma: no cover
107 method_name = request.method.lower() + method_name.split("dispatch", 1)[1]
108
109 return "Controller/{}.{}.{}".format(
110 instance.__module__, instance.__class__.__name__, method_name
111 )
112
113
114 def track_request_view_data(request, tracked_request):
115 path = request.path
116 tracked_request.tag(
117 "path",
118 create_filtered_path(
119 path, [(k, v) for k, vs in request.GET.lists() for v in vs]
120 ),
121 )
122 if ignore_path(path):
123 tracked_request.tag("ignore_transaction", True)
124
125 if scout_config.value("collect_remote_ip"):
126 try:
127 # Determine a remote IP to associate with the request. The value is
128 # spoofable by the requester so this is not suitable to use in any
129 # security sensitive context.
130 user_ip = (
131 request.META.get("HTTP_X_FORWARDED_FOR", "").split(",")[0]
132 or request.META.get("HTTP_CLIENT_IP", "").split(",")[0]
133 or request.META.get("REMOTE_ADDR", None)
134 )
135 tracked_request.tag("user_ip", user_ip)
136 except Exception:
137 pass
138
139 user = getattr(request, "user", None)
140 if user is not None:
141 try:
142 tracked_request.tag("username", user.get_username())
143 except Exception:
144 pass
145
146 tracked_request.tag("urlconf", get_urlconf(settings.ROOT_URLCONF))
147
148
149 class MiddlewareTimingMiddleware(object):
150 """
151 Insert as early into the Middleware stack as possible (outermost layers),
152 so that other middlewares called after can be timed.
153 """
154
155 def __init__(self, get_response):
156 self.get_response = get_response
157
158 def __call__(self, request):
159 if not scout_config.value("monitor"):
160 return self.get_response(request)
161
162 tracked_request = TrackedRequest.instance()
163
164 tracked_request.start_span(
165 operation="Middleware", should_capture_backtrace=False
166 )
167 queue_time = request.META.get("HTTP_X_QUEUE_START") or request.META.get(
168 "HTTP_X_REQUEST_START", ""
169 )
170 queue_time_tracked = track_request_queue_time(queue_time, tracked_request)
171 if not queue_time_tracked:
172 track_amazon_request_queue_time(
173 request.META.get("HTTP_X_AMZN_TRACE_ID", ""), tracked_request
174 )
175
176 try:
177 return self.get_response(request)
178 finally:
179 tracked_request.stop_span()
180
181
182 class ViewTimingMiddleware(object):
183 """
184 Insert as deep into the middleware stack as possible, ideally wrapping no
185 other middleware. Designed to time the View itself
186 """
187
188 def __init__(self, get_response):
189 self.get_response = get_response
190
191 def __call__(self, request):
192 """
193 Wrap a single incoming request with start and stop calls.
194 This will start timing, but relies on the process_view callback to
195 capture more details about what view was really called, and other
196 similar info.
197
198 If process_view isn't called, then the request will not
199 be recorded. This can happen if a middleware further along the stack
200 doesn't call onward, and instead returns a response directly.
201 """
202 if not scout_config.value("monitor"):
203 return self.get_response(request)
204
205 tracked_request = TrackedRequest.instance()
206
207 # This operation name won't be recorded unless changed later in
208 # process_view
209 tracked_request.start_span(operation="Unknown", should_capture_backtrace=False)
210 try:
211 response = self.get_response(request)
212 if 500 <= response.status_code <= 599:
213 tracked_request.tag("error", "true")
214 return response
215 finally:
216 tracked_request.stop_span()
217
218 def process_view(self, request, view_func, view_args, view_kwargs):
219 """
220 Capture details about the view_func that is about to execute
221 """
222 if not scout_config.value("monitor"):
223 return
224 tracked_request = TrackedRequest.instance()
225 tracked_request.is_real_request = True
226
227 track_request_view_data(request, tracked_request)
228
229 span = tracked_request.current_span()
230 if span is not None:
231 span.operation = get_operation_name(request)
232
233 def process_exception(self, request, exception):
234 """
235 Mark this request as having errored out
236
237 Does not modify or catch or otherwise change the exception thrown
238 """
239 if not scout_config.value("monitor"):
240 return
241 TrackedRequest.instance().tag("error", "true")
242
243
244 class OldStyleMiddlewareTimingMiddleware(object):
245 """
246 Insert as early into the Middleware stack as possible (outermost layers),
247 so that other middlewares called after can be timed.
248 """
249
250 def process_request(self, request):
251 if not scout_config.value("monitor"):
252 return
253 tracked_request = TrackedRequest.instance()
254 request._scout_tracked_request = tracked_request
255
256 queue_time = request.META.get("HTTP_X_QUEUE_START") or request.META.get(
257 "HTTP_X_REQUEST_START", ""
258 )
259 queue_time_tracked = track_request_queue_time(queue_time, tracked_request)
260 if not queue_time_tracked:
261 track_amazon_request_queue_time(
262 request.META.get("HTTP_X_AMZN_TRACE_ID", ""), tracked_request
263 )
264
265 tracked_request.start_span(
266 operation="Middleware", should_capture_backtrace=False
267 )
268
269 def process_response(self, request, response):
270 # Only stop span if there's a request, but presume we are balanced,
271 # i.e. that custom instrumentation within the application is not
272 # causing errors
273 tracked_request = getattr(request, "_scout_tracked_request", None)
274 if 500 <= response.status_code <= 599:
275 tracked_request.tag("error", "true")
276 if tracked_request is not None:
277 tracked_request.stop_span()
278 return response
279
280
281 class OldStyleViewMiddleware(object):
282 def process_view(self, request, view_func, view_func_args, view_func_kwargs):
283 tracked_request = getattr(request, "_scout_tracked_request", None)
284 if tracked_request is None:
285 # Looks like OldStyleMiddlewareTimingMiddleware didn't run, so
286 # don't do anything
287 return
288
289 tracked_request.is_real_request = True
290
291 track_request_view_data(request, tracked_request)
292
293 span = tracked_request.start_span(
294 operation=get_operation_name(request), should_capture_backtrace=False
295 )
296 # Save the span into the request, so we can check
297 # if we're matched up when stopping
298 request._scout_view_span = span
299
300 def process_response(self, request, response):
301 tracked_request = getattr(request, "_scout_tracked_request", None)
302 if tracked_request is None:
303 # Looks like OldStyleMiddlewareTimingMiddleware didn't run, so
304 # don't do anything
305 return response
306
307 # Only stop span if we started, but presume we are balanced, i.e. that
308 # custom instrumentation within the application is not causing errors
309 span = getattr(request, "_scout_view_span", None)
310 if span is not None:
311 tracked_request.stop_span()
312 return response
313
314 def process_exception(self, request, exception):
315 tracked_request = getattr(request, "_scout_tracked_request", None)
316 if tracked_request is None:
317 # Looks like OldStyleMiddlewareTimingMiddleware didn't run, so
318 # don't do anything
319 return
320
321 tracked_request.tag("error", "true")
322
[end of src/scout_apm/django/middleware.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/scout_apm/django/middleware.py b/src/scout_apm/django/middleware.py
--- a/src/scout_apm/django/middleware.py
+++ b/src/scout_apm/django/middleware.py
@@ -136,7 +136,11 @@
except Exception:
pass
- user = getattr(request, "user", None)
+ # Django's request.user caches in this attribute on first access. We only
+ # want to track the user if the application code has touched request.user
+ # because touching it causes session access, which adds "Cookie" to the
+ # "Vary" header.
+ user = getattr(request, "_cached_user", None)
if user is not None:
try:
tracked_request.tag("username", user.get_username())
@@ -209,6 +213,7 @@
tracked_request.start_span(operation="Unknown", should_capture_backtrace=False)
try:
response = self.get_response(request)
+ track_request_view_data(request, tracked_request)
if 500 <= response.status_code <= 599:
tracked_request.tag("error", "true")
return response
@@ -224,8 +229,6 @@
tracked_request = TrackedRequest.instance()
tracked_request.is_real_request = True
- track_request_view_data(request, tracked_request)
-
span = tracked_request.current_span()
if span is not None:
span.operation = get_operation_name(request)
@@ -288,8 +291,6 @@
tracked_request.is_real_request = True
- track_request_view_data(request, tracked_request)
-
span = tracked_request.start_span(
operation=get_operation_name(request), should_capture_backtrace=False
)
@@ -304,6 +305,8 @@
# don't do anything
return response
+ track_request_view_data(request, tracked_request)
+
# Only stop span if we started, but presume we are balanced, i.e. that
# custom instrumentation within the application is not causing errors
span = getattr(request, "_scout_view_span", None)
| {"golden_diff": "diff --git a/src/scout_apm/django/middleware.py b/src/scout_apm/django/middleware.py\n--- a/src/scout_apm/django/middleware.py\n+++ b/src/scout_apm/django/middleware.py\n@@ -136,7 +136,11 @@\n except Exception:\n pass\n \n- user = getattr(request, \"user\", None)\n+ # Django's request.user caches in this attribute on first access. We only\n+ # want to track the user if the application code has touched request.user\n+ # because touching it causes session access, which adds \"Cookie\" to the\n+ # \"Vary\" header.\n+ user = getattr(request, \"_cached_user\", None)\n if user is not None:\n try:\n tracked_request.tag(\"username\", user.get_username())\n@@ -209,6 +213,7 @@\n tracked_request.start_span(operation=\"Unknown\", should_capture_backtrace=False)\n try:\n response = self.get_response(request)\n+ track_request_view_data(request, tracked_request)\n if 500 <= response.status_code <= 599:\n tracked_request.tag(\"error\", \"true\")\n return response\n@@ -224,8 +229,6 @@\n tracked_request = TrackedRequest.instance()\n tracked_request.is_real_request = True\n \n- track_request_view_data(request, tracked_request)\n-\n span = tracked_request.current_span()\n if span is not None:\n span.operation = get_operation_name(request)\n@@ -288,8 +291,6 @@\n \n tracked_request.is_real_request = True\n \n- track_request_view_data(request, tracked_request)\n-\n span = tracked_request.start_span(\n operation=get_operation_name(request), should_capture_backtrace=False\n )\n@@ -304,6 +305,8 @@\n # don't do anything\n return response\n \n+ track_request_view_data(request, tracked_request)\n+\n # Only stop span if we started, but presume we are balanced, i.e. that\n # custom instrumentation within the application is not causing errors\n span = getattr(request, \"_scout_view_span\", None)\n", "issue": "scout breaks caching for django\nWe gave scout a try and looked promising but then we quickly had to disable it on production.\r\n\r\nIssue is that scout is adding a `Vary: Cookie` header which breaks caching\r\n\r\n\r\n\r\nThis normally happens when some code in django accesses the request.user object. I'm assuming scout is trying to add some meta data and accesses it causing this issue.\r\n\r\nWe've run into this problem ourselves in the past and the way around is it to look for the internal cached user on the request object. 
Like this:\r\n\r\n```\r\n # going request.user will generate cookie vary headers, but since\r\n # we aren't changing the output based on this we want to see the user\r\n # without adding the header, so look for the lazy user\r\n if request and hasattr(request, '_cached_user'):\r\n # noinspection PyProtectedMember\r\n user = request._cached_user\r\n```\r\n\r\nI think if replaced the check for user here\r\n\r\nhttps://github.com/scoutapp/scout_apm_python/blob/master/src/scout_apm/django/middleware.py#L139\r\n\r\n\r\n\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport sys\n\nimport django\nfrom django.conf import settings\n\nfrom scout_apm.compat import string_types\nfrom scout_apm.core.config import scout_config\nfrom scout_apm.core.tracked_request import TrackedRequest\nfrom scout_apm.core.web_requests import (\n create_filtered_path,\n ignore_path,\n track_amazon_request_queue_time,\n track_request_queue_time,\n)\n\nif django.VERSION >= (1, 11):\n from django.urls import get_urlconf\nelse:\n from django.core.urlresolvers import get_urlconf\n\n\ndef get_operation_name(request):\n view_func = request.resolver_match.func\n view_name = request.resolver_match._func_path\n\n if hasattr(view_func, \"model_admin\"):\n # Seems to comes from Django admin (attribute only set on Django 1.9+)\n admin_class = view_func.model_admin.__class__\n view_name = (\n admin_class.__module__\n + \".\"\n + admin_class.__name__\n + \".\"\n + view_func.__name__\n )\n\n django_rest_framework_name = _get_django_rest_framework_name(\n request, view_func, view_name\n )\n if django_rest_framework_name is not None:\n return django_rest_framework_name\n\n # Seems to be a Tastypie Resource. Need to resort to some stack inspection\n # to find a better name since its decorators don't wrap very well\n if view_name == \"tastypie.resources.wrapper\":\n tastypie_name = _get_tastypie_operation_name(request, view_func)\n if tastypie_name is not None:\n return tastypie_name\n\n return \"Controller/\" + view_name\n\n\ndef _get_django_rest_framework_name(request, view_func, view_name):\n try:\n from rest_framework.viewsets import ViewSetMixin\n except ImportError:\n return None\n\n kls = getattr(view_func, \"cls\", None)\n if isinstance(kls, type) and not issubclass(kls, ViewSetMixin):\n return None\n\n # Get 'actions' set in ViewSetMixin.as_view\n actions = getattr(view_func, \"actions\", None)\n if not actions or not isinstance(actions, dict):\n return None\n\n method_lower = request.method.lower()\n if method_lower not in actions:\n return None\n\n return \"Controller/{}.{}\".format(view_name, actions[method_lower])\n\n\ndef _get_tastypie_operation_name(request, view_func):\n try:\n from tastypie.resources import Resource\n except ImportError:\n return None\n\n if sys.version_info[0] == 2: # pragma: no cover\n try:\n wrapper = view_func.__closure__[0].cell_contents\n except (AttributeError, IndexError):\n return None\n elif sys.version_info[0] == 3:\n try:\n wrapper = view_func.__wrapped__\n except AttributeError:\n return None\n\n if not hasattr(wrapper, \"__closure__\") or len(wrapper.__closure__) != 2:\n return None\n\n instance = wrapper.__closure__[0].cell_contents\n if not isinstance(instance, Resource): # pragma: no cover\n return None\n\n method_name = wrapper.__closure__[1].cell_contents\n if not isinstance(method_name, string_types): # pragma: no cover\n return None\n\n if method_name.startswith(\"dispatch_\"): # pragma: no cover\n 
method_name = request.method.lower() + method_name.split(\"dispatch\", 1)[1]\n\n return \"Controller/{}.{}.{}\".format(\n instance.__module__, instance.__class__.__name__, method_name\n )\n\n\ndef track_request_view_data(request, tracked_request):\n path = request.path\n tracked_request.tag(\n \"path\",\n create_filtered_path(\n path, [(k, v) for k, vs in request.GET.lists() for v in vs]\n ),\n )\n if ignore_path(path):\n tracked_request.tag(\"ignore_transaction\", True)\n\n if scout_config.value(\"collect_remote_ip\"):\n try:\n # Determine a remote IP to associate with the request. The value is\n # spoofable by the requester so this is not suitable to use in any\n # security sensitive context.\n user_ip = (\n request.META.get(\"HTTP_X_FORWARDED_FOR\", \"\").split(\",\")[0]\n or request.META.get(\"HTTP_CLIENT_IP\", \"\").split(\",\")[0]\n or request.META.get(\"REMOTE_ADDR\", None)\n )\n tracked_request.tag(\"user_ip\", user_ip)\n except Exception:\n pass\n\n user = getattr(request, \"user\", None)\n if user is not None:\n try:\n tracked_request.tag(\"username\", user.get_username())\n except Exception:\n pass\n\n tracked_request.tag(\"urlconf\", get_urlconf(settings.ROOT_URLCONF))\n\n\nclass MiddlewareTimingMiddleware(object):\n \"\"\"\n Insert as early into the Middleware stack as possible (outermost layers),\n so that other middlewares called after can be timed.\n \"\"\"\n\n def __init__(self, get_response):\n self.get_response = get_response\n\n def __call__(self, request):\n if not scout_config.value(\"monitor\"):\n return self.get_response(request)\n\n tracked_request = TrackedRequest.instance()\n\n tracked_request.start_span(\n operation=\"Middleware\", should_capture_backtrace=False\n )\n queue_time = request.META.get(\"HTTP_X_QUEUE_START\") or request.META.get(\n \"HTTP_X_REQUEST_START\", \"\"\n )\n queue_time_tracked = track_request_queue_time(queue_time, tracked_request)\n if not queue_time_tracked:\n track_amazon_request_queue_time(\n request.META.get(\"HTTP_X_AMZN_TRACE_ID\", \"\"), tracked_request\n )\n\n try:\n return self.get_response(request)\n finally:\n tracked_request.stop_span()\n\n\nclass ViewTimingMiddleware(object):\n \"\"\"\n Insert as deep into the middleware stack as possible, ideally wrapping no\n other middleware. Designed to time the View itself\n \"\"\"\n\n def __init__(self, get_response):\n self.get_response = get_response\n\n def __call__(self, request):\n \"\"\"\n Wrap a single incoming request with start and stop calls.\n This will start timing, but relies on the process_view callback to\n capture more details about what view was really called, and other\n similar info.\n\n If process_view isn't called, then the request will not\n be recorded. 
This can happen if a middleware further along the stack\n doesn't call onward, and instead returns a response directly.\n \"\"\"\n if not scout_config.value(\"monitor\"):\n return self.get_response(request)\n\n tracked_request = TrackedRequest.instance()\n\n # This operation name won't be recorded unless changed later in\n # process_view\n tracked_request.start_span(operation=\"Unknown\", should_capture_backtrace=False)\n try:\n response = self.get_response(request)\n if 500 <= response.status_code <= 599:\n tracked_request.tag(\"error\", \"true\")\n return response\n finally:\n tracked_request.stop_span()\n\n def process_view(self, request, view_func, view_args, view_kwargs):\n \"\"\"\n Capture details about the view_func that is about to execute\n \"\"\"\n if not scout_config.value(\"monitor\"):\n return\n tracked_request = TrackedRequest.instance()\n tracked_request.is_real_request = True\n\n track_request_view_data(request, tracked_request)\n\n span = tracked_request.current_span()\n if span is not None:\n span.operation = get_operation_name(request)\n\n def process_exception(self, request, exception):\n \"\"\"\n Mark this request as having errored out\n\n Does not modify or catch or otherwise change the exception thrown\n \"\"\"\n if not scout_config.value(\"monitor\"):\n return\n TrackedRequest.instance().tag(\"error\", \"true\")\n\n\nclass OldStyleMiddlewareTimingMiddleware(object):\n \"\"\"\n Insert as early into the Middleware stack as possible (outermost layers),\n so that other middlewares called after can be timed.\n \"\"\"\n\n def process_request(self, request):\n if not scout_config.value(\"monitor\"):\n return\n tracked_request = TrackedRequest.instance()\n request._scout_tracked_request = tracked_request\n\n queue_time = request.META.get(\"HTTP_X_QUEUE_START\") or request.META.get(\n \"HTTP_X_REQUEST_START\", \"\"\n )\n queue_time_tracked = track_request_queue_time(queue_time, tracked_request)\n if not queue_time_tracked:\n track_amazon_request_queue_time(\n request.META.get(\"HTTP_X_AMZN_TRACE_ID\", \"\"), tracked_request\n )\n\n tracked_request.start_span(\n operation=\"Middleware\", should_capture_backtrace=False\n )\n\n def process_response(self, request, response):\n # Only stop span if there's a request, but presume we are balanced,\n # i.e. 
that custom instrumentation within the application is not\n # causing errors\n tracked_request = getattr(request, \"_scout_tracked_request\", None)\n if 500 <= response.status_code <= 599:\n tracked_request.tag(\"error\", \"true\")\n if tracked_request is not None:\n tracked_request.stop_span()\n return response\n\n\nclass OldStyleViewMiddleware(object):\n def process_view(self, request, view_func, view_func_args, view_func_kwargs):\n tracked_request = getattr(request, \"_scout_tracked_request\", None)\n if tracked_request is None:\n # Looks like OldStyleMiddlewareTimingMiddleware didn't run, so\n # don't do anything\n return\n\n tracked_request.is_real_request = True\n\n track_request_view_data(request, tracked_request)\n\n span = tracked_request.start_span(\n operation=get_operation_name(request), should_capture_backtrace=False\n )\n # Save the span into the request, so we can check\n # if we're matched up when stopping\n request._scout_view_span = span\n\n def process_response(self, request, response):\n tracked_request = getattr(request, \"_scout_tracked_request\", None)\n if tracked_request is None:\n # Looks like OldStyleMiddlewareTimingMiddleware didn't run, so\n # don't do anything\n return response\n\n # Only stop span if we started, but presume we are balanced, i.e. that\n # custom instrumentation within the application is not causing errors\n span = getattr(request, \"_scout_view_span\", None)\n if span is not None:\n tracked_request.stop_span()\n return response\n\n def process_exception(self, request, exception):\n tracked_request = getattr(request, \"_scout_tracked_request\", None)\n if tracked_request is None:\n # Looks like OldStyleMiddlewareTimingMiddleware didn't run, so\n # don't do anything\n return\n\n tracked_request.tag(\"error\", \"true\")\n", "path": "src/scout_apm/django/middleware.py"}]} | 3,988 | 473 |
gh_patches_debug_21704 | rasdani/github-patches | git_diff | qutebrowser__qutebrowser-3407 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Different colors per completion column
I think it would present a lot of visual clarity to be able to style the completion list's “sections” separately.
For example in pentadactyl, I can very easily visually distinguish between URLs and URL titles based on color alone:

I'm lacking this visual feedback in qutebrowser. I think as a rough first approximation it would be fine to give explicit colors to the Nth sections (regardless of what they are), but I think in principle for this to work as well as possible the section order would have to be somewhat consistent between commands (so the URL is always in the same place, the title is always in the same place, etc.)
</issue>
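A minimal sketch of the per-column color idea described in the issue, and of the shape the golden diff below takes: keep a list of foreground colors and cycle through it by column index. The function and option names here are illustrative assumptions, not actual qutebrowser API.

```python
# Sketch only: "completion_fg" stands in for a configured color or list of
# colors (e.g. something like colors.completion.fg); names are assumptions.
def pick_column_color(completion_fg, column):
    """Return the foreground color for a given completion column.

    A single configured color is used for every column; a list of colors is
    cycled by column index, so URLs and titles can be styled differently.
    """
    if isinstance(completion_fg, (list, tuple)):
        return completion_fg[column % len(completion_fg)]
    return completion_fg
```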
<code>
[start of qutebrowser/completion/completiondelegate.py]
1 # vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
2
3 # Copyright 2014-2017 Florian Bruhin (The Compiler) <[email protected]>
4 #
5 # This file is part of qutebrowser.
6 #
7 # qutebrowser is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # qutebrowser is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
19
20 """Completion item delegate for CompletionView.
21
22 We use this to be able to highlight parts of the text.
23 """
24
25 import re
26 import html
27
28 from PyQt5.QtWidgets import QStyle, QStyleOptionViewItem, QStyledItemDelegate
29 from PyQt5.QtCore import QRectF, QSize, Qt
30 from PyQt5.QtGui import (QIcon, QPalette, QTextDocument, QTextOption,
31 QAbstractTextDocumentLayout)
32
33 from qutebrowser.config import config
34 from qutebrowser.utils import qtutils, jinja
35
36
37 _cached_stylesheet = None
38
39
40 class CompletionItemDelegate(QStyledItemDelegate):
41
42 """Delegate used by CompletionView to draw individual items.
43
44 Mainly a cleaned up port of Qt's way to draw a TreeView item, except it
45 uses a QTextDocument to draw the text and add marking.
46
47 Original implementation:
48 qt/src/gui/styles/qcommonstyle.cpp:drawControl:2153
49
50 Attributes:
51 _opt: The QStyleOptionViewItem which is used.
52 _style: The style to be used.
53 _painter: The QPainter to be used.
54 _doc: The QTextDocument to be used.
55 """
56
57 # FIXME this is horribly slow when resizing.
58 # We should probably cache something in _get_textdoc or so, but as soon as
59 # we implement eliding that cache probably isn't worth much anymore...
60 # https://github.com/qutebrowser/qutebrowser/issues/121
61
62 def __init__(self, parent=None):
63 self._painter = None
64 self._opt = None
65 self._doc = None
66 self._style = None
67 super().__init__(parent)
68
69 def _draw_background(self):
70 """Draw the background of an ItemViewItem."""
71 self._style.drawPrimitive(self._style.PE_PanelItemViewItem, self._opt,
72 self._painter, self._opt.widget)
73
74 def _draw_icon(self):
75 """Draw the icon of an ItemViewItem."""
76 icon_rect = self._style.subElementRect(
77 self._style.SE_ItemViewItemDecoration, self._opt, self._opt.widget)
78 if not icon_rect.isValid():
79 # The rect seems to be wrong in all kind of ways if no icon should
80 # be displayed.
81 return
82
83 mode = QIcon.Normal
84 if not self._opt.state & QStyle.State_Enabled:
85 mode = QIcon.Disabled
86 elif self._opt.state & QStyle.State_Selected:
87 mode = QIcon.Selected
88 state = QIcon.On if self._opt.state & QStyle.State_Open else QIcon.Off
89 self._opt.icon.paint(self._painter, icon_rect,
90 self._opt.decorationAlignment, mode, state)
91
92 def _draw_text(self, index):
93 """Draw the text of an ItemViewItem.
94
95 This is the main part where we differ from the original implementation
96 in Qt: We use a QTextDocument to draw text.
97
98 Args:
99 index: The QModelIndex of the item to draw.
100 """
101 if not self._opt.text:
102 return
103
104 text_rect_ = self._style.subElementRect(
105 self._style.SE_ItemViewItemText, self._opt, self._opt.widget)
106 qtutils.ensure_valid(text_rect_)
107 margin = self._style.pixelMetric(QStyle.PM_FocusFrameHMargin,
108 self._opt, self._opt.widget) + 1
109 # remove width padding
110 text_rect = text_rect_.adjusted(margin, 0, -margin, 0)
111 qtutils.ensure_valid(text_rect)
112 # move text upwards a bit
113 if index.parent().isValid():
114 text_rect.adjust(0, -1, 0, -1)
115 else:
116 text_rect.adjust(0, -2, 0, -2)
117 self._painter.save()
118 state = self._opt.state
119 if state & QStyle.State_Enabled and state & QStyle.State_Active:
120 cg = QPalette.Normal
121 elif state & QStyle.State_Enabled:
122 cg = QPalette.Inactive
123 else:
124 cg = QPalette.Disabled
125
126 if state & QStyle.State_Selected:
127 self._painter.setPen(self._opt.palette.color(
128 cg, QPalette.HighlightedText))
129 # This is a dirty fix for the text jumping by one pixel for
130 # whatever reason.
131 text_rect.adjust(0, -1, 0, 0)
132 else:
133 self._painter.setPen(self._opt.palette.color(cg, QPalette.Text))
134
135 if state & QStyle.State_Editing:
136 self._painter.setPen(self._opt.palette.color(cg, QPalette.Text))
137 self._painter.drawRect(text_rect_.adjusted(0, 0, -1, -1))
138
139 self._painter.translate(text_rect.left(), text_rect.top())
140 self._get_textdoc(index)
141 self._draw_textdoc(text_rect)
142 self._painter.restore()
143
144 def _draw_textdoc(self, rect):
145 """Draw the QTextDocument of an item.
146
147 Args:
148 rect: The QRect to clip the drawing to.
149 """
150 # We can't use drawContents because then the color would be ignored.
151 clip = QRectF(0, 0, rect.width(), rect.height())
152 self._painter.save()
153
154 if self._opt.state & QStyle.State_Selected:
155 color = config.val.colors.completion.item.selected.fg
156 elif not self._opt.state & QStyle.State_Enabled:
157 color = config.val.colors.completion.category.fg
158 else:
159 color = config.val.colors.completion.fg
160 self._painter.setPen(color)
161
162 ctx = QAbstractTextDocumentLayout.PaintContext()
163 ctx.palette.setColor(QPalette.Text, self._painter.pen().color())
164 if clip.isValid():
165 self._painter.setClipRect(clip)
166 ctx.clip = clip
167 self._doc.documentLayout().draw(self._painter, ctx)
168 self._painter.restore()
169
170 def _get_textdoc(self, index):
171 """Create the QTextDocument of an item.
172
173 Args:
174 index: The QModelIndex of the item to draw.
175 """
176 # FIXME we probably should do eliding here. See
177 # qcommonstyle.cpp:viewItemDrawText
178 # https://github.com/qutebrowser/qutebrowser/issues/118
179 text_option = QTextOption()
180 if self._opt.features & QStyleOptionViewItem.WrapText:
181 text_option.setWrapMode(QTextOption.WordWrap)
182 else:
183 text_option.setWrapMode(QTextOption.ManualWrap)
184 text_option.setTextDirection(self._opt.direction)
185 text_option.setAlignment(QStyle.visualAlignment(
186 self._opt.direction, self._opt.displayAlignment))
187
188 if self._doc is not None:
189 self._doc.deleteLater()
190 self._doc = QTextDocument(self)
191 self._doc.setDefaultFont(self._opt.font)
192 self._doc.setDefaultTextOption(text_option)
193 self._doc.setDocumentMargin(2)
194
195 assert _cached_stylesheet is not None
196 self._doc.setDefaultStyleSheet(_cached_stylesheet)
197
198 if index.parent().isValid():
199 view = self.parent()
200 pattern = view.pattern
201 columns_to_filter = index.model().columns_to_filter(index)
202 if index.column() in columns_to_filter and pattern:
203 repl = r'<span class="highlight">\g<0></span>'
204 text = re.sub(re.escape(pattern).replace(r'\ ', r'|'),
205 repl, html.escape(self._opt.text),
206 flags=re.IGNORECASE)
207 self._doc.setHtml(text)
208 else:
209 self._doc.setPlainText(self._opt.text)
210 else:
211 self._doc.setHtml(
212 '<span style="font: {};">{}</span>'.format(
213 html.escape(config.val.fonts.completion.category),
214 html.escape(self._opt.text)))
215
216 def _draw_focus_rect(self):
217 """Draw the focus rectangle of an ItemViewItem."""
218 state = self._opt.state
219 if not state & QStyle.State_HasFocus:
220 return
221 o = self._opt
222 o.rect = self._style.subElementRect(
223 self._style.SE_ItemViewItemFocusRect, self._opt, self._opt.widget)
224 o.state |= QStyle.State_KeyboardFocusChange | QStyle.State_Item
225 qtutils.ensure_valid(o.rect)
226 if state & QStyle.State_Enabled:
227 cg = QPalette.Normal
228 else:
229 cg = QPalette.Disabled
230 if state & QStyle.State_Selected:
231 role = QPalette.Highlight
232 else:
233 role = QPalette.Window
234 o.backgroundColor = self._opt.palette.color(cg, role)
235 self._style.drawPrimitive(QStyle.PE_FrameFocusRect, o, self._painter,
236 self._opt.widget)
237
238 def sizeHint(self, option, index):
239 """Override sizeHint of QStyledItemDelegate.
240
241 Return the cell size based on the QTextDocument size, but might not
242 work correctly yet.
243
244 Args:
245 option: const QStyleOptionViewItem & option
246 index: const QModelIndex & index
247
248 Return:
249 A QSize with the recommended size.
250 """
251 value = index.data(Qt.SizeHintRole)
252 if value is not None:
253 return value
254 self._opt = QStyleOptionViewItem(option)
255 self.initStyleOption(self._opt, index)
256 self._style = self._opt.widget.style()
257 self._get_textdoc(index)
258 docsize = self._doc.size().toSize()
259 size = self._style.sizeFromContents(QStyle.CT_ItemViewItem, self._opt,
260 docsize, self._opt.widget)
261 qtutils.ensure_valid(size)
262 return size + QSize(10, 3)
263
264 def paint(self, painter, option, index):
265 """Override the QStyledItemDelegate paint function.
266
267 Args:
268 painter: QPainter * painter
269 option: const QStyleOptionViewItem & option
270 index: const QModelIndex & index
271 """
272 self._painter = painter
273 self._painter.save()
274 self._opt = QStyleOptionViewItem(option)
275 self.initStyleOption(self._opt, index)
276 self._style = self._opt.widget.style()
277
278 self._draw_background()
279 self._draw_icon()
280 self._draw_text(index)
281 self._draw_focus_rect()
282
283 self._painter.restore()
284
285
286 @config.change_filter('colors.completion.match.fg', function=True)
287 def _update_stylesheet():
288 """Update the cached stylesheet."""
289 stylesheet = """
290 .highlight {
291 color: {{ conf.colors.completion.match.fg }};
292 }
293 """
294 with jinja.environment.no_autoescape():
295 template = jinja.environment.from_string(stylesheet)
296
297 global _cached_stylesheet
298 _cached_stylesheet = template.render(conf=config.val)
299
300
301 def init():
302 """Initialize the cached stylesheet."""
303 _update_stylesheet()
304 config.instance.changed.connect(_update_stylesheet)
305
[end of qutebrowser/completion/completiondelegate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qutebrowser/completion/completiondelegate.py b/qutebrowser/completion/completiondelegate.py
--- a/qutebrowser/completion/completiondelegate.py
+++ b/qutebrowser/completion/completiondelegate.py
@@ -138,10 +138,10 @@
self._painter.translate(text_rect.left(), text_rect.top())
self._get_textdoc(index)
- self._draw_textdoc(text_rect)
+ self._draw_textdoc(text_rect, index.column())
self._painter.restore()
- def _draw_textdoc(self, rect):
+ def _draw_textdoc(self, rect, col):
"""Draw the QTextDocument of an item.
Args:
@@ -156,7 +156,9 @@
elif not self._opt.state & QStyle.State_Enabled:
color = config.val.colors.completion.category.fg
else:
- color = config.val.colors.completion.fg
+ colors = config.val.colors.completion.fg
+ # if multiple colors are set, use different colors per column
+ color = colors[col % len(colors)]
self._painter.setPen(color)
ctx = QAbstractTextDocumentLayout.PaintContext()
| {"golden_diff": "diff --git a/qutebrowser/completion/completiondelegate.py b/qutebrowser/completion/completiondelegate.py\n--- a/qutebrowser/completion/completiondelegate.py\n+++ b/qutebrowser/completion/completiondelegate.py\n@@ -138,10 +138,10 @@\n \n self._painter.translate(text_rect.left(), text_rect.top())\n self._get_textdoc(index)\n- self._draw_textdoc(text_rect)\n+ self._draw_textdoc(text_rect, index.column())\n self._painter.restore()\n \n- def _draw_textdoc(self, rect):\n+ def _draw_textdoc(self, rect, col):\n \"\"\"Draw the QTextDocument of an item.\n \n Args:\n@@ -156,7 +156,9 @@\n elif not self._opt.state & QStyle.State_Enabled:\n color = config.val.colors.completion.category.fg\n else:\n- color = config.val.colors.completion.fg\n+ colors = config.val.colors.completion.fg\n+ # if multiple colors are set, use different colors per column\n+ color = colors[col % len(colors)]\n self._painter.setPen(color)\n \n ctx = QAbstractTextDocumentLayout.PaintContext()\n", "issue": "Different colors per completion column\nI think it would present a lot of visual clarity to be able to style the completion list's \u201csections\u201d separately.\n\nFor example in pentadactyl, I can very easily visually distinguish between URLs and URL titles based on color alone:\n\n\n\nI'm lacking this visual feedback in qutebrowser. I think as a rough first approximation it would be fine to give explicit colors to the Nth sections (regardless of what they are), but I think in principle for this to work as well as possible the section order would have to be somewhat consistent between commands (so the URL is always in the same place, the title is always in the same place, etc.)\n\n", "before_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2017 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. 
If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Completion item delegate for CompletionView.\n\nWe use this to be able to highlight parts of the text.\n\"\"\"\n\nimport re\nimport html\n\nfrom PyQt5.QtWidgets import QStyle, QStyleOptionViewItem, QStyledItemDelegate\nfrom PyQt5.QtCore import QRectF, QSize, Qt\nfrom PyQt5.QtGui import (QIcon, QPalette, QTextDocument, QTextOption,\n QAbstractTextDocumentLayout)\n\nfrom qutebrowser.config import config\nfrom qutebrowser.utils import qtutils, jinja\n\n\n_cached_stylesheet = None\n\n\nclass CompletionItemDelegate(QStyledItemDelegate):\n\n \"\"\"Delegate used by CompletionView to draw individual items.\n\n Mainly a cleaned up port of Qt's way to draw a TreeView item, except it\n uses a QTextDocument to draw the text and add marking.\n\n Original implementation:\n qt/src/gui/styles/qcommonstyle.cpp:drawControl:2153\n\n Attributes:\n _opt: The QStyleOptionViewItem which is used.\n _style: The style to be used.\n _painter: The QPainter to be used.\n _doc: The QTextDocument to be used.\n \"\"\"\n\n # FIXME this is horribly slow when resizing.\n # We should probably cache something in _get_textdoc or so, but as soon as\n # we implement eliding that cache probably isn't worth much anymore...\n # https://github.com/qutebrowser/qutebrowser/issues/121\n\n def __init__(self, parent=None):\n self._painter = None\n self._opt = None\n self._doc = None\n self._style = None\n super().__init__(parent)\n\n def _draw_background(self):\n \"\"\"Draw the background of an ItemViewItem.\"\"\"\n self._style.drawPrimitive(self._style.PE_PanelItemViewItem, self._opt,\n self._painter, self._opt.widget)\n\n def _draw_icon(self):\n \"\"\"Draw the icon of an ItemViewItem.\"\"\"\n icon_rect = self._style.subElementRect(\n self._style.SE_ItemViewItemDecoration, self._opt, self._opt.widget)\n if not icon_rect.isValid():\n # The rect seems to be wrong in all kind of ways if no icon should\n # be displayed.\n return\n\n mode = QIcon.Normal\n if not self._opt.state & QStyle.State_Enabled:\n mode = QIcon.Disabled\n elif self._opt.state & QStyle.State_Selected:\n mode = QIcon.Selected\n state = QIcon.On if self._opt.state & QStyle.State_Open else QIcon.Off\n self._opt.icon.paint(self._painter, icon_rect,\n self._opt.decorationAlignment, mode, state)\n\n def _draw_text(self, index):\n \"\"\"Draw the text of an ItemViewItem.\n\n This is the main part where we differ from the original implementation\n in Qt: We use a QTextDocument to draw text.\n\n Args:\n index: The QModelIndex of the item to draw.\n \"\"\"\n if not self._opt.text:\n return\n\n text_rect_ = self._style.subElementRect(\n self._style.SE_ItemViewItemText, self._opt, self._opt.widget)\n qtutils.ensure_valid(text_rect_)\n margin = self._style.pixelMetric(QStyle.PM_FocusFrameHMargin,\n self._opt, self._opt.widget) + 1\n # remove width padding\n text_rect = text_rect_.adjusted(margin, 0, -margin, 0)\n qtutils.ensure_valid(text_rect)\n # move text upwards a bit\n if index.parent().isValid():\n text_rect.adjust(0, -1, 0, -1)\n else:\n text_rect.adjust(0, -2, 0, -2)\n self._painter.save()\n state = self._opt.state\n if state & QStyle.State_Enabled and state & QStyle.State_Active:\n cg = QPalette.Normal\n elif state & QStyle.State_Enabled:\n cg = QPalette.Inactive\n else:\n cg = QPalette.Disabled\n\n if state & QStyle.State_Selected:\n self._painter.setPen(self._opt.palette.color(\n cg, QPalette.HighlightedText))\n # This is a dirty fix for the text jumping by one pixel for\n # whatever reason.\n text_rect.adjust(0, -1, 0, 
0)\n else:\n self._painter.setPen(self._opt.palette.color(cg, QPalette.Text))\n\n if state & QStyle.State_Editing:\n self._painter.setPen(self._opt.palette.color(cg, QPalette.Text))\n self._painter.drawRect(text_rect_.adjusted(0, 0, -1, -1))\n\n self._painter.translate(text_rect.left(), text_rect.top())\n self._get_textdoc(index)\n self._draw_textdoc(text_rect)\n self._painter.restore()\n\n def _draw_textdoc(self, rect):\n \"\"\"Draw the QTextDocument of an item.\n\n Args:\n rect: The QRect to clip the drawing to.\n \"\"\"\n # We can't use drawContents because then the color would be ignored.\n clip = QRectF(0, 0, rect.width(), rect.height())\n self._painter.save()\n\n if self._opt.state & QStyle.State_Selected:\n color = config.val.colors.completion.item.selected.fg\n elif not self._opt.state & QStyle.State_Enabled:\n color = config.val.colors.completion.category.fg\n else:\n color = config.val.colors.completion.fg\n self._painter.setPen(color)\n\n ctx = QAbstractTextDocumentLayout.PaintContext()\n ctx.palette.setColor(QPalette.Text, self._painter.pen().color())\n if clip.isValid():\n self._painter.setClipRect(clip)\n ctx.clip = clip\n self._doc.documentLayout().draw(self._painter, ctx)\n self._painter.restore()\n\n def _get_textdoc(self, index):\n \"\"\"Create the QTextDocument of an item.\n\n Args:\n index: The QModelIndex of the item to draw.\n \"\"\"\n # FIXME we probably should do eliding here. See\n # qcommonstyle.cpp:viewItemDrawText\n # https://github.com/qutebrowser/qutebrowser/issues/118\n text_option = QTextOption()\n if self._opt.features & QStyleOptionViewItem.WrapText:\n text_option.setWrapMode(QTextOption.WordWrap)\n else:\n text_option.setWrapMode(QTextOption.ManualWrap)\n text_option.setTextDirection(self._opt.direction)\n text_option.setAlignment(QStyle.visualAlignment(\n self._opt.direction, self._opt.displayAlignment))\n\n if self._doc is not None:\n self._doc.deleteLater()\n self._doc = QTextDocument(self)\n self._doc.setDefaultFont(self._opt.font)\n self._doc.setDefaultTextOption(text_option)\n self._doc.setDocumentMargin(2)\n\n assert _cached_stylesheet is not None\n self._doc.setDefaultStyleSheet(_cached_stylesheet)\n\n if index.parent().isValid():\n view = self.parent()\n pattern = view.pattern\n columns_to_filter = index.model().columns_to_filter(index)\n if index.column() in columns_to_filter and pattern:\n repl = r'<span class=\"highlight\">\\g<0></span>'\n text = re.sub(re.escape(pattern).replace(r'\\ ', r'|'),\n repl, html.escape(self._opt.text),\n flags=re.IGNORECASE)\n self._doc.setHtml(text)\n else:\n self._doc.setPlainText(self._opt.text)\n else:\n self._doc.setHtml(\n '<span style=\"font: {};\">{}</span>'.format(\n html.escape(config.val.fonts.completion.category),\n html.escape(self._opt.text)))\n\n def _draw_focus_rect(self):\n \"\"\"Draw the focus rectangle of an ItemViewItem.\"\"\"\n state = self._opt.state\n if not state & QStyle.State_HasFocus:\n return\n o = self._opt\n o.rect = self._style.subElementRect(\n self._style.SE_ItemViewItemFocusRect, self._opt, self._opt.widget)\n o.state |= QStyle.State_KeyboardFocusChange | QStyle.State_Item\n qtutils.ensure_valid(o.rect)\n if state & QStyle.State_Enabled:\n cg = QPalette.Normal\n else:\n cg = QPalette.Disabled\n if state & QStyle.State_Selected:\n role = QPalette.Highlight\n else:\n role = QPalette.Window\n o.backgroundColor = self._opt.palette.color(cg, role)\n self._style.drawPrimitive(QStyle.PE_FrameFocusRect, o, self._painter,\n self._opt.widget)\n\n def sizeHint(self, option, index):\n 
\"\"\"Override sizeHint of QStyledItemDelegate.\n\n Return the cell size based on the QTextDocument size, but might not\n work correctly yet.\n\n Args:\n option: const QStyleOptionViewItem & option\n index: const QModelIndex & index\n\n Return:\n A QSize with the recommended size.\n \"\"\"\n value = index.data(Qt.SizeHintRole)\n if value is not None:\n return value\n self._opt = QStyleOptionViewItem(option)\n self.initStyleOption(self._opt, index)\n self._style = self._opt.widget.style()\n self._get_textdoc(index)\n docsize = self._doc.size().toSize()\n size = self._style.sizeFromContents(QStyle.CT_ItemViewItem, self._opt,\n docsize, self._opt.widget)\n qtutils.ensure_valid(size)\n return size + QSize(10, 3)\n\n def paint(self, painter, option, index):\n \"\"\"Override the QStyledItemDelegate paint function.\n\n Args:\n painter: QPainter * painter\n option: const QStyleOptionViewItem & option\n index: const QModelIndex & index\n \"\"\"\n self._painter = painter\n self._painter.save()\n self._opt = QStyleOptionViewItem(option)\n self.initStyleOption(self._opt, index)\n self._style = self._opt.widget.style()\n\n self._draw_background()\n self._draw_icon()\n self._draw_text(index)\n self._draw_focus_rect()\n\n self._painter.restore()\n\n\[email protected]_filter('colors.completion.match.fg', function=True)\ndef _update_stylesheet():\n \"\"\"Update the cached stylesheet.\"\"\"\n stylesheet = \"\"\"\n .highlight {\n color: {{ conf.colors.completion.match.fg }};\n }\n \"\"\"\n with jinja.environment.no_autoescape():\n template = jinja.environment.from_string(stylesheet)\n\n global _cached_stylesheet\n _cached_stylesheet = template.render(conf=config.val)\n\n\ndef init():\n \"\"\"Initialize the cached stylesheet.\"\"\"\n _update_stylesheet()\n config.instance.changed.connect(_update_stylesheet)\n", "path": "qutebrowser/completion/completiondelegate.py"}]} | 4,051 | 269 |
gh_patches_debug_26586 | rasdani/github-patches | git_diff | getsentry__sentry-python-2086 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DjangoCache `IndexError` raised when using keywords approach
### How do you use Sentry?
Sentry Saas (sentry.io)
### Version
1.22.1
### Steps to Reproduce
1. Install the latest version
2. Run the code with django cache and get the key using keyword approach
3. Observe `IndexError` issue
Snippet:
```python
from django.core.cache import cache
cache.get(key="my_key")  # <-- `IndexError`, as there will be no `args[0]`, which is used for spans
```
### Expected Result
No exception raised and value retrieved
### Actual Result
`IndexError` raised:
```python
IndexError
tuple index out of range
```
</issue>
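The traceback comes from the span description being built from `args[0]` even when the key arrives as a keyword argument. Below is a small sketch of a description helper that tolerates both calling styles; it mirrors the `_get_span_description` helper introduced in the golden diff further down, but the function name here is illustrative.

```python
def get_cache_span_description(method_name, args, kwargs):
    # Build a "get my_key" style description without assuming a positional key.
    description = "{} ".format(method_name)
    if args:
        description += str(args[0])
    elif "key" in kwargs:
        description += str(kwargs["key"])
    return description

# Both call styles now yield a description instead of raising IndexError:
assert get_cache_span_description("get", ("my_key",), {}) == "get my_key"
assert get_cache_span_description("get", (), {"key": "my_key"}) == "get my_key"
```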
<code>
[start of sentry_sdk/integrations/django/caching.py]
1 import functools
2 from typing import TYPE_CHECKING
3
4 from django import VERSION as DJANGO_VERSION
5 from django.core.cache import CacheHandler
6
7 from sentry_sdk import Hub
8 from sentry_sdk.consts import OP, SPANDATA
9 from sentry_sdk._compat import text_type
10
11
12 if TYPE_CHECKING:
13 from typing import Any
14 from typing import Callable
15
16
17 METHODS_TO_INSTRUMENT = [
18 "get",
19 "get_many",
20 ]
21
22
23 def _patch_cache_method(cache, method_name):
24 # type: (CacheHandler, str) -> None
25 from sentry_sdk.integrations.django import DjangoIntegration
26
27 def _instrument_call(cache, method_name, original_method, args, kwargs):
28 # type: (CacheHandler, str, Callable[..., Any], Any, Any) -> Any
29 hub = Hub.current
30 integration = hub.get_integration(DjangoIntegration)
31 if integration is None or not integration.cache_spans:
32 return original_method(*args, **kwargs)
33
34 description = "{} {}".format(method_name, args[0])
35
36 with hub.start_span(op=OP.CACHE, description=description) as span:
37 value = original_method(*args, **kwargs)
38
39 if value:
40 span.set_data(SPANDATA.CACHE_HIT, True)
41
42 size = len(text_type(value).encode("utf-8"))
43 span.set_data(SPANDATA.CACHE_ITEM_SIZE, size)
44
45 else:
46 span.set_data(SPANDATA.CACHE_HIT, False)
47
48 return value
49
50 original_method = getattr(cache, method_name)
51
52 @functools.wraps(original_method)
53 def sentry_method(*args, **kwargs):
54 # type: (*Any, **Any) -> Any
55 return _instrument_call(cache, method_name, original_method, args, kwargs)
56
57 setattr(cache, method_name, sentry_method)
58
59
60 def _patch_cache(cache):
61 # type: (CacheHandler) -> None
62 if not hasattr(cache, "_sentry_patched"):
63 for method_name in METHODS_TO_INSTRUMENT:
64 _patch_cache_method(cache, method_name)
65 cache._sentry_patched = True
66
67
68 def patch_caching():
69 # type: () -> None
70 from sentry_sdk.integrations.django import DjangoIntegration
71
72 if not hasattr(CacheHandler, "_sentry_patched"):
73 if DJANGO_VERSION < (3, 2):
74 original_get_item = CacheHandler.__getitem__
75
76 @functools.wraps(original_get_item)
77 def sentry_get_item(self, alias):
78 # type: (CacheHandler, str) -> Any
79 cache = original_get_item(self, alias)
80
81 integration = Hub.current.get_integration(DjangoIntegration)
82 if integration and integration.cache_spans:
83 _patch_cache(cache)
84
85 return cache
86
87 CacheHandler.__getitem__ = sentry_get_item
88 CacheHandler._sentry_patched = True
89
90 else:
91 original_create_connection = CacheHandler.create_connection
92
93 @functools.wraps(original_create_connection)
94 def sentry_create_connection(self, alias):
95 # type: (CacheHandler, str) -> Any
96 cache = original_create_connection(self, alias)
97
98 integration = Hub.current.get_integration(DjangoIntegration)
99 if integration and integration.cache_spans:
100 _patch_cache(cache)
101
102 return cache
103
104 CacheHandler.create_connection = sentry_create_connection
105 CacheHandler._sentry_patched = True
106
[end of sentry_sdk/integrations/django/caching.py]
[start of sentry_sdk/consts.py]
1 from sentry_sdk._types import TYPE_CHECKING
2
3 if TYPE_CHECKING:
4 import sentry_sdk
5
6 from typing import Optional
7 from typing import Callable
8 from typing import Union
9 from typing import List
10 from typing import Type
11 from typing import Dict
12 from typing import Any
13 from typing import Sequence
14 from typing_extensions import TypedDict
15
16 from sentry_sdk.integrations import Integration
17
18 from sentry_sdk._types import (
19 BreadcrumbProcessor,
20 Event,
21 EventProcessor,
22 ProfilerMode,
23 TracesSampler,
24 TransactionProcessor,
25 )
26
27 # Experiments are feature flags to enable and disable certain unstable SDK
28 # functionality. Changing them from the defaults (`None`) in production
29 # code is highly discouraged. They are not subject to any stability
30 # guarantees such as the ones from semantic versioning.
31 Experiments = TypedDict(
32 "Experiments",
33 {
34 "max_spans": Optional[int],
35 "record_sql_params": Optional[bool],
36 # TODO: Remove these 2 profiling related experiments
37 "profiles_sample_rate": Optional[float],
38 "profiler_mode": Optional[ProfilerMode],
39 },
40 total=False,
41 )
42
43 DEFAULT_QUEUE_SIZE = 100
44 DEFAULT_MAX_BREADCRUMBS = 100
45
46 MATCH_ALL = r".*"
47
48
49 class INSTRUMENTER:
50 SENTRY = "sentry"
51 OTEL = "otel"
52
53
54 class SPANDATA:
55 """
56 Additional information describing the type of the span.
57 See: https://develop.sentry.dev/sdk/performance/span-data-conventions/
58 """
59
60 DB_SYSTEM = "db.system"
61 """
62 An identifier for the database management system (DBMS) product being used.
63 See: https://github.com/open-telemetry/opentelemetry-specification/blob/24de67b3827a4e3ab2515cd8ab62d5bcf837c586/specification/trace/semantic_conventions/database.md
64 Example: postgresql
65 """
66
67 CACHE_HIT = "cache.hit"
68 """
69 A boolean indicating whether the requested data was found in the cache.
70 Example: true
71 """
72
73 CACHE_ITEM_SIZE = "cache.item_size"
74 """
75 The size of the requested data in bytes.
76 Example: 58
77 """
78
79 HTTP_QUERY = "http.query"
80 """
81 The Query string present in the URL.
82 Example: ?foo=bar&bar=baz
83 """
84
85 HTTP_FRAGMENT = "http.fragment"
86 """
87 The Fragments present in the URL.
88 Example: #foo=bar
89 """
90
91 HTTP_METHOD = "http.method"
92 """
93 The HTTP method used.
94 Example: GET
95 """
96
97
98 class OP:
99 CACHE = "cache"
100 DB = "db"
101 DB_REDIS = "db.redis"
102 EVENT_DJANGO = "event.django"
103 FUNCTION = "function"
104 FUNCTION_AWS = "function.aws"
105 FUNCTION_GCP = "function.gcp"
106 GRPC_CLIENT = "grpc.client"
107 GRPC_SERVER = "grpc.server"
108 HTTP_CLIENT = "http.client"
109 HTTP_CLIENT_STREAM = "http.client.stream"
110 HTTP_SERVER = "http.server"
111 MIDDLEWARE_DJANGO = "middleware.django"
112 MIDDLEWARE_STARLETTE = "middleware.starlette"
113 MIDDLEWARE_STARLETTE_RECEIVE = "middleware.starlette.receive"
114 MIDDLEWARE_STARLETTE_SEND = "middleware.starlette.send"
115 MIDDLEWARE_STARLITE = "middleware.starlite"
116 MIDDLEWARE_STARLITE_RECEIVE = "middleware.starlite.receive"
117 MIDDLEWARE_STARLITE_SEND = "middleware.starlite.send"
118 QUEUE_SUBMIT_ARQ = "queue.submit.arq"
119 QUEUE_TASK_ARQ = "queue.task.arq"
120 QUEUE_SUBMIT_CELERY = "queue.submit.celery"
121 QUEUE_TASK_CELERY = "queue.task.celery"
122 QUEUE_TASK_RQ = "queue.task.rq"
123 QUEUE_SUBMIT_HUEY = "queue.submit.huey"
124 QUEUE_TASK_HUEY = "queue.task.huey"
125 SUBPROCESS = "subprocess"
126 SUBPROCESS_WAIT = "subprocess.wait"
127 SUBPROCESS_COMMUNICATE = "subprocess.communicate"
128 TEMPLATE_RENDER = "template.render"
129 VIEW_RENDER = "view.render"
130 VIEW_RESPONSE_RENDER = "view.response.render"
131 WEBSOCKET_SERVER = "websocket.server"
132 SOCKET_CONNECTION = "socket.connection"
133 SOCKET_DNS = "socket.dns"
134
135
136 # This type exists to trick mypy and PyCharm into thinking `init` and `Client`
137 # take these arguments (even though they take opaque **kwargs)
138 class ClientConstructor(object):
139 def __init__(
140 self,
141 dsn=None, # type: Optional[str]
142 max_breadcrumbs=DEFAULT_MAX_BREADCRUMBS, # type: int
143 release=None, # type: Optional[str]
144 environment=None, # type: Optional[str]
145 server_name=None, # type: Optional[str]
146 shutdown_timeout=2, # type: float
147 integrations=[], # type: Sequence[Integration] # noqa: B006
148 in_app_include=[], # type: List[str] # noqa: B006
149 in_app_exclude=[], # type: List[str] # noqa: B006
150 default_integrations=True, # type: bool
151 dist=None, # type: Optional[str]
152 transport=None, # type: Optional[Union[sentry_sdk.transport.Transport, Type[sentry_sdk.transport.Transport], Callable[[Event], None]]]
153 transport_queue_size=DEFAULT_QUEUE_SIZE, # type: int
154 sample_rate=1.0, # type: float
155 send_default_pii=False, # type: bool
156 http_proxy=None, # type: Optional[str]
157 https_proxy=None, # type: Optional[str]
158 ignore_errors=[], # type: Sequence[Union[type, str]] # noqa: B006
159 request_bodies="medium", # type: str
160 before_send=None, # type: Optional[EventProcessor]
161 before_breadcrumb=None, # type: Optional[BreadcrumbProcessor]
162 debug=False, # type: bool
163 attach_stacktrace=False, # type: bool
164 ca_certs=None, # type: Optional[str]
165 propagate_traces=True, # type: bool
166 traces_sample_rate=None, # type: Optional[float]
167 traces_sampler=None, # type: Optional[TracesSampler]
168 profiles_sample_rate=None, # type: Optional[float]
169 profiles_sampler=None, # type: Optional[TracesSampler]
170 profiler_mode=None, # type: Optional[ProfilerMode]
171 auto_enabling_integrations=True, # type: bool
172 auto_session_tracking=True, # type: bool
173 send_client_reports=True, # type: bool
174 _experiments={}, # type: Experiments # noqa: B006
175 proxy_headers=None, # type: Optional[Dict[str, str]]
176 instrumenter=INSTRUMENTER.SENTRY, # type: Optional[str]
177 before_send_transaction=None, # type: Optional[TransactionProcessor]
178 project_root=None, # type: Optional[str]
179 enable_tracing=None, # type: Optional[bool]
180 include_local_variables=True, # type: Optional[bool]
181 trace_propagation_targets=[ # noqa: B006
182 MATCH_ALL
183 ], # type: Optional[Sequence[str]]
184 functions_to_trace=[], # type: Sequence[str] # noqa: B006
185 event_scrubber=None, # type: Optional[sentry_sdk.scrubber.EventScrubber]
186 ):
187 # type: (...) -> None
188 pass
189
190
191 def _get_default_options():
192 # type: () -> Dict[str, Any]
193 import inspect
194
195 if hasattr(inspect, "getfullargspec"):
196 getargspec = inspect.getfullargspec
197 else:
198 getargspec = inspect.getargspec # type: ignore
199
200 a = getargspec(ClientConstructor.__init__)
201 defaults = a.defaults or ()
202 return dict(zip(a.args[-len(defaults) :], defaults))
203
204
205 DEFAULT_OPTIONS = _get_default_options()
206 del _get_default_options
207
208
209 VERSION = "1.22.1"
210
[end of sentry_sdk/consts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sentry_sdk/consts.py b/sentry_sdk/consts.py
--- a/sentry_sdk/consts.py
+++ b/sentry_sdk/consts.py
@@ -96,7 +96,7 @@
class OP:
- CACHE = "cache"
+ CACHE_GET_ITEM = "cache.get_item"
DB = "db"
DB_REDIS = "db.redis"
EVENT_DJANGO = "event.django"
diff --git a/sentry_sdk/integrations/django/caching.py b/sentry_sdk/integrations/django/caching.py
--- a/sentry_sdk/integrations/django/caching.py
+++ b/sentry_sdk/integrations/django/caching.py
@@ -20,6 +20,18 @@
]
+def _get_span_description(method_name, args, kwargs):
+ # type: (str, Any, Any) -> str
+ description = "{} ".format(method_name)
+
+ if args is not None and len(args) >= 1:
+ description += text_type(args[0])
+ elif kwargs is not None and "key" in kwargs:
+ description += text_type(kwargs["key"])
+
+ return description
+
+
def _patch_cache_method(cache, method_name):
# type: (CacheHandler, str) -> None
from sentry_sdk.integrations.django import DjangoIntegration
@@ -31,9 +43,9 @@
if integration is None or not integration.cache_spans:
return original_method(*args, **kwargs)
- description = "{} {}".format(method_name, args[0])
+ description = _get_span_description(method_name, args, kwargs)
- with hub.start_span(op=OP.CACHE, description=description) as span:
+ with hub.start_span(op=OP.CACHE_GET_ITEM, description=description) as span:
value = original_method(*args, **kwargs)
if value:
| {"golden_diff": "diff --git a/sentry_sdk/consts.py b/sentry_sdk/consts.py\n--- a/sentry_sdk/consts.py\n+++ b/sentry_sdk/consts.py\n@@ -96,7 +96,7 @@\n \n \n class OP:\n- CACHE = \"cache\"\n+ CACHE_GET_ITEM = \"cache.get_item\"\n DB = \"db\"\n DB_REDIS = \"db.redis\"\n EVENT_DJANGO = \"event.django\"\ndiff --git a/sentry_sdk/integrations/django/caching.py b/sentry_sdk/integrations/django/caching.py\n--- a/sentry_sdk/integrations/django/caching.py\n+++ b/sentry_sdk/integrations/django/caching.py\n@@ -20,6 +20,18 @@\n ]\n \n \n+def _get_span_description(method_name, args, kwargs):\n+ # type: (str, Any, Any) -> str\n+ description = \"{} \".format(method_name)\n+\n+ if args is not None and len(args) >= 1:\n+ description += text_type(args[0])\n+ elif kwargs is not None and \"key\" in kwargs:\n+ description += text_type(kwargs[\"key\"])\n+\n+ return description\n+\n+\n def _patch_cache_method(cache, method_name):\n # type: (CacheHandler, str) -> None\n from sentry_sdk.integrations.django import DjangoIntegration\n@@ -31,9 +43,9 @@\n if integration is None or not integration.cache_spans:\n return original_method(*args, **kwargs)\n \n- description = \"{} {}\".format(method_name, args[0])\n+ description = _get_span_description(method_name, args, kwargs)\n \n- with hub.start_span(op=OP.CACHE, description=description) as span:\n+ with hub.start_span(op=OP.CACHE_GET_ITEM, description=description) as span:\n value = original_method(*args, **kwargs)\n \n if value:\n", "issue": "DjangoCache `IndexError` raised when using keywords approach\n### How do you use Sentry?\r\n\r\nSentry Saas (sentry.io)\r\n\r\n### Version\r\n\r\n1.22.1\r\n\r\n### Steps to Reproduce\r\n\r\n1. Install the latest version\r\n2. Run the code with django cache and get the key using keyword approach\r\n3. 
Observe `IndexError` issue\r\n\r\nSnippet:\r\n```python\r\nfrom djang.core.cache import cache\r\n\r\ncache.get(key=\"my_key\") # <-- `IndexError` as there will no `args[0]` which is used for spans\r\n```\r\n\r\n### Expected Result\r\n\r\nNo exception raised and value retrieved\r\n\r\n### Actual Result\r\n`IndexError` raised:\r\n```python\r\nIndexError\r\ntuple index out of range\r\n```\n", "before_files": [{"content": "import functools\nfrom typing import TYPE_CHECKING\n\nfrom django import VERSION as DJANGO_VERSION\nfrom django.core.cache import CacheHandler\n\nfrom sentry_sdk import Hub\nfrom sentry_sdk.consts import OP, SPANDATA\nfrom sentry_sdk._compat import text_type\n\n\nif TYPE_CHECKING:\n from typing import Any\n from typing import Callable\n\n\nMETHODS_TO_INSTRUMENT = [\n \"get\",\n \"get_many\",\n]\n\n\ndef _patch_cache_method(cache, method_name):\n # type: (CacheHandler, str) -> None\n from sentry_sdk.integrations.django import DjangoIntegration\n\n def _instrument_call(cache, method_name, original_method, args, kwargs):\n # type: (CacheHandler, str, Callable[..., Any], Any, Any) -> Any\n hub = Hub.current\n integration = hub.get_integration(DjangoIntegration)\n if integration is None or not integration.cache_spans:\n return original_method(*args, **kwargs)\n\n description = \"{} {}\".format(method_name, args[0])\n\n with hub.start_span(op=OP.CACHE, description=description) as span:\n value = original_method(*args, **kwargs)\n\n if value:\n span.set_data(SPANDATA.CACHE_HIT, True)\n\n size = len(text_type(value).encode(\"utf-8\"))\n span.set_data(SPANDATA.CACHE_ITEM_SIZE, size)\n\n else:\n span.set_data(SPANDATA.CACHE_HIT, False)\n\n return value\n\n original_method = getattr(cache, method_name)\n\n @functools.wraps(original_method)\n def sentry_method(*args, **kwargs):\n # type: (*Any, **Any) -> Any\n return _instrument_call(cache, method_name, original_method, args, kwargs)\n\n setattr(cache, method_name, sentry_method)\n\n\ndef _patch_cache(cache):\n # type: (CacheHandler) -> None\n if not hasattr(cache, \"_sentry_patched\"):\n for method_name in METHODS_TO_INSTRUMENT:\n _patch_cache_method(cache, method_name)\n cache._sentry_patched = True\n\n\ndef patch_caching():\n # type: () -> None\n from sentry_sdk.integrations.django import DjangoIntegration\n\n if not hasattr(CacheHandler, \"_sentry_patched\"):\n if DJANGO_VERSION < (3, 2):\n original_get_item = CacheHandler.__getitem__\n\n @functools.wraps(original_get_item)\n def sentry_get_item(self, alias):\n # type: (CacheHandler, str) -> Any\n cache = original_get_item(self, alias)\n\n integration = Hub.current.get_integration(DjangoIntegration)\n if integration and integration.cache_spans:\n _patch_cache(cache)\n\n return cache\n\n CacheHandler.__getitem__ = sentry_get_item\n CacheHandler._sentry_patched = True\n\n else:\n original_create_connection = CacheHandler.create_connection\n\n @functools.wraps(original_create_connection)\n def sentry_create_connection(self, alias):\n # type: (CacheHandler, str) -> Any\n cache = original_create_connection(self, alias)\n\n integration = Hub.current.get_integration(DjangoIntegration)\n if integration and integration.cache_spans:\n _patch_cache(cache)\n\n return cache\n\n CacheHandler.create_connection = sentry_create_connection\n CacheHandler._sentry_patched = True\n", "path": "sentry_sdk/integrations/django/caching.py"}, {"content": "from sentry_sdk._types import TYPE_CHECKING\n\nif TYPE_CHECKING:\n import sentry_sdk\n\n from typing import Optional\n from typing import Callable\n from 
typing import Union\n from typing import List\n from typing import Type\n from typing import Dict\n from typing import Any\n from typing import Sequence\n from typing_extensions import TypedDict\n\n from sentry_sdk.integrations import Integration\n\n from sentry_sdk._types import (\n BreadcrumbProcessor,\n Event,\n EventProcessor,\n ProfilerMode,\n TracesSampler,\n TransactionProcessor,\n )\n\n # Experiments are feature flags to enable and disable certain unstable SDK\n # functionality. Changing them from the defaults (`None`) in production\n # code is highly discouraged. They are not subject to any stability\n # guarantees such as the ones from semantic versioning.\n Experiments = TypedDict(\n \"Experiments\",\n {\n \"max_spans\": Optional[int],\n \"record_sql_params\": Optional[bool],\n # TODO: Remove these 2 profiling related experiments\n \"profiles_sample_rate\": Optional[float],\n \"profiler_mode\": Optional[ProfilerMode],\n },\n total=False,\n )\n\nDEFAULT_QUEUE_SIZE = 100\nDEFAULT_MAX_BREADCRUMBS = 100\n\nMATCH_ALL = r\".*\"\n\n\nclass INSTRUMENTER:\n SENTRY = \"sentry\"\n OTEL = \"otel\"\n\n\nclass SPANDATA:\n \"\"\"\n Additional information describing the type of the span.\n See: https://develop.sentry.dev/sdk/performance/span-data-conventions/\n \"\"\"\n\n DB_SYSTEM = \"db.system\"\n \"\"\"\n An identifier for the database management system (DBMS) product being used.\n See: https://github.com/open-telemetry/opentelemetry-specification/blob/24de67b3827a4e3ab2515cd8ab62d5bcf837c586/specification/trace/semantic_conventions/database.md\n Example: postgresql\n \"\"\"\n\n CACHE_HIT = \"cache.hit\"\n \"\"\"\n A boolean indicating whether the requested data was found in the cache.\n Example: true\n \"\"\"\n\n CACHE_ITEM_SIZE = \"cache.item_size\"\n \"\"\"\n The size of the requested data in bytes.\n Example: 58\n \"\"\"\n\n HTTP_QUERY = \"http.query\"\n \"\"\"\n The Query string present in the URL.\n Example: ?foo=bar&bar=baz\n \"\"\"\n\n HTTP_FRAGMENT = \"http.fragment\"\n \"\"\"\n The Fragments present in the URL.\n Example: #foo=bar\n \"\"\"\n\n HTTP_METHOD = \"http.method\"\n \"\"\"\n The HTTP method used.\n Example: GET\n \"\"\"\n\n\nclass OP:\n CACHE = \"cache\"\n DB = \"db\"\n DB_REDIS = \"db.redis\"\n EVENT_DJANGO = \"event.django\"\n FUNCTION = \"function\"\n FUNCTION_AWS = \"function.aws\"\n FUNCTION_GCP = \"function.gcp\"\n GRPC_CLIENT = \"grpc.client\"\n GRPC_SERVER = \"grpc.server\"\n HTTP_CLIENT = \"http.client\"\n HTTP_CLIENT_STREAM = \"http.client.stream\"\n HTTP_SERVER = \"http.server\"\n MIDDLEWARE_DJANGO = \"middleware.django\"\n MIDDLEWARE_STARLETTE = \"middleware.starlette\"\n MIDDLEWARE_STARLETTE_RECEIVE = \"middleware.starlette.receive\"\n MIDDLEWARE_STARLETTE_SEND = \"middleware.starlette.send\"\n MIDDLEWARE_STARLITE = \"middleware.starlite\"\n MIDDLEWARE_STARLITE_RECEIVE = \"middleware.starlite.receive\"\n MIDDLEWARE_STARLITE_SEND = \"middleware.starlite.send\"\n QUEUE_SUBMIT_ARQ = \"queue.submit.arq\"\n QUEUE_TASK_ARQ = \"queue.task.arq\"\n QUEUE_SUBMIT_CELERY = \"queue.submit.celery\"\n QUEUE_TASK_CELERY = \"queue.task.celery\"\n QUEUE_TASK_RQ = \"queue.task.rq\"\n QUEUE_SUBMIT_HUEY = \"queue.submit.huey\"\n QUEUE_TASK_HUEY = \"queue.task.huey\"\n SUBPROCESS = \"subprocess\"\n SUBPROCESS_WAIT = \"subprocess.wait\"\n SUBPROCESS_COMMUNICATE = \"subprocess.communicate\"\n TEMPLATE_RENDER = \"template.render\"\n VIEW_RENDER = \"view.render\"\n VIEW_RESPONSE_RENDER = \"view.response.render\"\n WEBSOCKET_SERVER = \"websocket.server\"\n SOCKET_CONNECTION = 
\"socket.connection\"\n SOCKET_DNS = \"socket.dns\"\n\n\n# This type exists to trick mypy and PyCharm into thinking `init` and `Client`\n# take these arguments (even though they take opaque **kwargs)\nclass ClientConstructor(object):\n def __init__(\n self,\n dsn=None, # type: Optional[str]\n max_breadcrumbs=DEFAULT_MAX_BREADCRUMBS, # type: int\n release=None, # type: Optional[str]\n environment=None, # type: Optional[str]\n server_name=None, # type: Optional[str]\n shutdown_timeout=2, # type: float\n integrations=[], # type: Sequence[Integration] # noqa: B006\n in_app_include=[], # type: List[str] # noqa: B006\n in_app_exclude=[], # type: List[str] # noqa: B006\n default_integrations=True, # type: bool\n dist=None, # type: Optional[str]\n transport=None, # type: Optional[Union[sentry_sdk.transport.Transport, Type[sentry_sdk.transport.Transport], Callable[[Event], None]]]\n transport_queue_size=DEFAULT_QUEUE_SIZE, # type: int\n sample_rate=1.0, # type: float\n send_default_pii=False, # type: bool\n http_proxy=None, # type: Optional[str]\n https_proxy=None, # type: Optional[str]\n ignore_errors=[], # type: Sequence[Union[type, str]] # noqa: B006\n request_bodies=\"medium\", # type: str\n before_send=None, # type: Optional[EventProcessor]\n before_breadcrumb=None, # type: Optional[BreadcrumbProcessor]\n debug=False, # type: bool\n attach_stacktrace=False, # type: bool\n ca_certs=None, # type: Optional[str]\n propagate_traces=True, # type: bool\n traces_sample_rate=None, # type: Optional[float]\n traces_sampler=None, # type: Optional[TracesSampler]\n profiles_sample_rate=None, # type: Optional[float]\n profiles_sampler=None, # type: Optional[TracesSampler]\n profiler_mode=None, # type: Optional[ProfilerMode]\n auto_enabling_integrations=True, # type: bool\n auto_session_tracking=True, # type: bool\n send_client_reports=True, # type: bool\n _experiments={}, # type: Experiments # noqa: B006\n proxy_headers=None, # type: Optional[Dict[str, str]]\n instrumenter=INSTRUMENTER.SENTRY, # type: Optional[str]\n before_send_transaction=None, # type: Optional[TransactionProcessor]\n project_root=None, # type: Optional[str]\n enable_tracing=None, # type: Optional[bool]\n include_local_variables=True, # type: Optional[bool]\n trace_propagation_targets=[ # noqa: B006\n MATCH_ALL\n ], # type: Optional[Sequence[str]]\n functions_to_trace=[], # type: Sequence[str] # noqa: B006\n event_scrubber=None, # type: Optional[sentry_sdk.scrubber.EventScrubber]\n ):\n # type: (...) -> None\n pass\n\n\ndef _get_default_options():\n # type: () -> Dict[str, Any]\n import inspect\n\n if hasattr(inspect, \"getfullargspec\"):\n getargspec = inspect.getfullargspec\n else:\n getargspec = inspect.getargspec # type: ignore\n\n a = getargspec(ClientConstructor.__init__)\n defaults = a.defaults or ()\n return dict(zip(a.args[-len(defaults) :], defaults))\n\n\nDEFAULT_OPTIONS = _get_default_options()\ndel _get_default_options\n\n\nVERSION = \"1.22.1\"\n", "path": "sentry_sdk/consts.py"}]} | 4,046 | 421 |
gh_patches_debug_24978 | rasdani/github-patches | git_diff | chainer__chainer-310 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
split_axis.backward fails on incomplete gradients
When there is a None in the grad_outputs, split_axis fails to backprop the incomplete gradients.
</issue>
<code>
[start of chainer/functions/split_axis.py]
1 import collections
2
3 import numpy
4
5 from chainer import cuda
6 from chainer import function
7 from chainer.utils import type_check
8
9
10 _args = 'float* y, float* x, int cdimy, int cdimx, int rdim, int coffset'
11 _preamble = '''
12 #define COPY(statement) \
13 int l = i / (rdim * cdimy); \
14 int c = i / rdim % cdimy + coffset; \
15 int r = i % rdim; \
16 int idx = r + rdim * (c + cdimx * l); \
17 statement;
18 '''
19
20
21 class SplitAxis(function.Function):
22
23 """Function that splits multiple arrays towards the specified axis."""
24
25 def __init__(self, indices_or_sections, axis):
26 if not isinstance(indices_or_sections, (int, collections.Iterable)):
27 raise TypeError('indices_or_sections must be integer or 1-D array')
28 self.indices_or_sections = indices_or_sections
29 self.axis = axis
30
31 def check_type_forward(self, in_types):
32 type_check.expect(in_types.size() == 1)
33 type_check.expect(in_types[0].ndim >= self.axis)
34
35 if isinstance(self.indices_or_sections, collections.Iterable):
36 max_index = type_check.Variable(
37 self.indices_or_sections[-1], 'max_index')
38 type_check.expect(in_types[0].shape[self.axis] > max_index)
39 else:
40 sections = type_check.Variable(
41 self.indices_or_sections, 'sections')
42 type_check.expect(in_types[0].shape[self.axis] % sections == 0)
43
44 def forward_cpu(self, x):
45 if isinstance(self.indices_or_sections, collections.Iterable):
46 cdimx = x[0].shape[self.axis]
47 ind = list(self.indices_or_sections)
48 ind.append(cdimx)
49 prev_i = 0
50 for i in ind:
51 cdimy = max(0, min(i, cdimx) - prev_i)
52 if cdimy == 0:
53 raise ValueError('Not support if shape contains 0')
54 prev_i = i
55 return tuple(numpy.split(x[0], self.indices_or_sections, self.axis))
56
57 def forward_gpu(self, x):
58 xshape = x[0].shape
59 self.cdimx = xshape[self.axis]
60 self.rdim = numpy.prod(xshape[self.axis + 1:], dtype=int)
61
62 if isinstance(self.indices_or_sections, collections.Iterable):
63 ind = list(self.indices_or_sections)
64 ind.append(self.cdimx)
65 else:
66 sec = self.indices_or_sections
67 if self.cdimx % sec:
68 raise ValueError(
69 'array split does not result in an equal division')
70 ind = numpy.arange(1, sec + 1) * (self.cdimx // sec)
71 ys = []
72 kernel = cuda.elementwise(
73 _args, 'COPY(y[i] = x[idx])', 'split_fwd', preamble=_preamble)
74 prev_i = 0
75 for i in ind:
76 cdimy = max(0, min(i, self.cdimx) - prev_i)
77 s = list(xshape)
78 s[self.axis] = cdimy
79 y = cuda.empty(tuple(s), dtype=x[0].dtype)
80 if cdimy == 0:
81 raise ValueError('Not support if shape contains 0')
82 kernel(y, x[0], cdimy, self.cdimx, self.rdim, prev_i)
83 prev_i = i
84 ys.append(y)
85 return tuple(ys)
86
87 def backward_cpu(self, x, gys):
88 return numpy.concatenate(gys, axis=self.axis),
89
90 def backward_gpu(self, x, gys):
91 gx = cuda.empty_like(x[0])
92 coffset = 0
93 kernel = cuda.elementwise(
94 _args, 'COPY(x[idx] = y[i])', 'split_bwd', preamble=_preamble)
95 for gy in gys:
96 cdimy = gy.shape[self.axis]
97 if cdimy != 0:
98 kernel(gy, gx, cdimy, self.cdimx, self.rdim, coffset)
99 coffset += cdimy
100 return gx,
101
102
103 def split_axis(x, indices_or_sections, axis):
104 """Splits given variables along an axis.
105
106 Args:
107 x (tuple of Variables): Variables to be split.
108 indices_or_sections (int or 1-D array): If this argument is an integer,
109 N, the array will be divided into N equal arrays along axis.
110 If it is a 1-D array of sorted integers, it
111 indicates the positions where the array is split.
112 axis (int): Axis that the input array is split along.
113
114 Returns:
115 ``tuple`` or ``Variable``: Tuple of :class:`~chainer.Variable` objects
116 if the number of outputs is more than 1 or
117 :class:`~chainer.Variable` otherwise.
118
119 .. note::
120 This function raises ``ValueError`` if at least
121 one of the outputs is splitted to zero-size
122 (i.e. `axis`-th value of its shape is zero).
123
124 """
125 return SplitAxis(indices_or_sections, axis)(x)
126
[end of chainer/functions/split_axis.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/functions/split_axis.py b/chainer/functions/split_axis.py
--- a/chainer/functions/split_axis.py
+++ b/chainer/functions/split_axis.py
@@ -1,6 +1,7 @@
import collections
import numpy
+import six
from chainer import cuda
from chainer import function
@@ -85,14 +86,25 @@
return tuple(ys)
def backward_cpu(self, x, gys):
- return numpy.concatenate(gys, axis=self.axis),
+ if any(gy is None for gy in gys):
+ gx = numpy.zeros_like(x[0])
+ gxs = numpy.split(gx, self.indices_or_sections, self.axis)
+ for gxi, gy in six.moves.zip(gxs, gys):
+ if gy is None:
+ continue
+ gxi[:] = gy
+ return gx,
+ else:
+ return numpy.concatenate(gys, axis=self.axis),
def backward_gpu(self, x, gys):
- gx = cuda.empty_like(x[0])
+ gx = cuda.zeros_like(x[0])
coffset = 0
kernel = cuda.elementwise(
_args, 'COPY(x[idx] = y[i])', 'split_bwd', preamble=_preamble)
for gy in gys:
+ if gy is None:
+ continue
cdimy = gy.shape[self.axis]
if cdimy != 0:
kernel(gy, gx, cdimy, self.cdimx, self.rdim, coffset)
| {"golden_diff": "diff --git a/chainer/functions/split_axis.py b/chainer/functions/split_axis.py\n--- a/chainer/functions/split_axis.py\n+++ b/chainer/functions/split_axis.py\n@@ -1,6 +1,7 @@\n import collections\n \n import numpy\n+import six\n \n from chainer import cuda\n from chainer import function\n@@ -85,14 +86,25 @@\n return tuple(ys)\n \n def backward_cpu(self, x, gys):\n- return numpy.concatenate(gys, axis=self.axis),\n+ if any(gy is None for gy in gys):\n+ gx = numpy.zeros_like(x[0])\n+ gxs = numpy.split(gx, self.indices_or_sections, self.axis)\n+ for gxi, gy in six.moves.zip(gxs, gys):\n+ if gy is None:\n+ continue\n+ gxi[:] = gy\n+ return gx,\n+ else:\n+ return numpy.concatenate(gys, axis=self.axis),\n \n def backward_gpu(self, x, gys):\n- gx = cuda.empty_like(x[0])\n+ gx = cuda.zeros_like(x[0])\n coffset = 0\n kernel = cuda.elementwise(\n _args, 'COPY(x[idx] = y[i])', 'split_bwd', preamble=_preamble)\n for gy in gys:\n+ if gy is None:\n+ continue\n cdimy = gy.shape[self.axis]\n if cdimy != 0:\n kernel(gy, gx, cdimy, self.cdimx, self.rdim, coffset)\n", "issue": "split_axis.backward fails on incomplete gradients\nWhen there is a None in the grad_outputs, split_axis fails to backprop the incomplete gradients.\n\n", "before_files": [{"content": "import collections\n\nimport numpy\n\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.utils import type_check\n\n\n_args = 'float* y, float* x, int cdimy, int cdimx, int rdim, int coffset'\n_preamble = '''\n#define COPY(statement) \\\n int l = i / (rdim * cdimy); \\\n int c = i / rdim % cdimy + coffset; \\\n int r = i % rdim; \\\n int idx = r + rdim * (c + cdimx * l); \\\n statement;\n'''\n\n\nclass SplitAxis(function.Function):\n\n \"\"\"Function that splits multiple arrays towards the specified axis.\"\"\"\n\n def __init__(self, indices_or_sections, axis):\n if not isinstance(indices_or_sections, (int, collections.Iterable)):\n raise TypeError('indices_or_sections must be integer or 1-D array')\n self.indices_or_sections = indices_or_sections\n self.axis = axis\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 1)\n type_check.expect(in_types[0].ndim >= self.axis)\n\n if isinstance(self.indices_or_sections, collections.Iterable):\n max_index = type_check.Variable(\n self.indices_or_sections[-1], 'max_index')\n type_check.expect(in_types[0].shape[self.axis] > max_index)\n else:\n sections = type_check.Variable(\n self.indices_or_sections, 'sections')\n type_check.expect(in_types[0].shape[self.axis] % sections == 0)\n\n def forward_cpu(self, x):\n if isinstance(self.indices_or_sections, collections.Iterable):\n cdimx = x[0].shape[self.axis]\n ind = list(self.indices_or_sections)\n ind.append(cdimx)\n prev_i = 0\n for i in ind:\n cdimy = max(0, min(i, cdimx) - prev_i)\n if cdimy == 0:\n raise ValueError('Not support if shape contains 0')\n prev_i = i\n return tuple(numpy.split(x[0], self.indices_or_sections, self.axis))\n\n def forward_gpu(self, x):\n xshape = x[0].shape\n self.cdimx = xshape[self.axis]\n self.rdim = numpy.prod(xshape[self.axis + 1:], dtype=int)\n\n if isinstance(self.indices_or_sections, collections.Iterable):\n ind = list(self.indices_or_sections)\n ind.append(self.cdimx)\n else:\n sec = self.indices_or_sections\n if self.cdimx % sec:\n raise ValueError(\n 'array split does not result in an equal division')\n ind = numpy.arange(1, sec + 1) * (self.cdimx // sec)\n ys = []\n kernel = cuda.elementwise(\n _args, 'COPY(y[i] = x[idx])', 'split_fwd', preamble=_preamble)\n 
prev_i = 0\n for i in ind:\n cdimy = max(0, min(i, self.cdimx) - prev_i)\n s = list(xshape)\n s[self.axis] = cdimy\n y = cuda.empty(tuple(s), dtype=x[0].dtype)\n if cdimy == 0:\n raise ValueError('Not support if shape contains 0')\n kernel(y, x[0], cdimy, self.cdimx, self.rdim, prev_i)\n prev_i = i\n ys.append(y)\n return tuple(ys)\n\n def backward_cpu(self, x, gys):\n return numpy.concatenate(gys, axis=self.axis),\n\n def backward_gpu(self, x, gys):\n gx = cuda.empty_like(x[0])\n coffset = 0\n kernel = cuda.elementwise(\n _args, 'COPY(x[idx] = y[i])', 'split_bwd', preamble=_preamble)\n for gy in gys:\n cdimy = gy.shape[self.axis]\n if cdimy != 0:\n kernel(gy, gx, cdimy, self.cdimx, self.rdim, coffset)\n coffset += cdimy\n return gx,\n\n\ndef split_axis(x, indices_or_sections, axis):\n \"\"\"Splits given variables along an axis.\n\n Args:\n x (tuple of Variables): Variables to be split.\n indices_or_sections (int or 1-D array): If this argument is an integer,\n N, the array will be divided into N equal arrays along axis.\n If it is a 1-D array of sorted integers, it\n indicates the positions where the array is split.\n axis (int): Axis that the input array is split along.\n\n Returns:\n ``tuple`` or ``Variable``: Tuple of :class:`~chainer.Variable` objects\n if the number of outputs is more than 1 or\n :class:`~chainer.Variable` otherwise.\n\n .. note::\n This function raises ``ValueError`` if at least\n one of the outputs is splitted to zero-size\n (i.e. `axis`-th value of its shape is zero).\n\n \"\"\"\n return SplitAxis(indices_or_sections, axis)(x)\n", "path": "chainer/functions/split_axis.py"}]} | 1,976 | 348 |
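The chainer fix above zero-fills the input gradient and skips `None` entries on both the CPU and GPU paths. A minimal NumPy-only sketch of the CPU-side idea follows; the shapes, the two-way split, and the function name are illustrative rather than Chainer API:

```python
import numpy as np

def concat_grads_with_none(x, gys, indices_or_sections, axis):
    """Rebuild the gradient w.r.t. a split input, treating None chunks as zeros."""
    if any(gy is None for gy in gys):
        gx = np.zeros_like(x)                      # chunks with no gradient contribute nothing
        for gxi, gy in zip(np.split(gx, indices_or_sections, axis), gys):
            if gy is not None:
                gxi[:] = gy                        # writes land in gx through the split views
        return gx
    return np.concatenate(gys, axis=axis)          # fast path: every gradient is present

x = np.arange(12.0).reshape(3, 4)
gys = [np.ones((3, 2)), None]                      # gradient of the second chunk never arrived
print(concat_grads_with_none(x, gys, 2, axis=1))
```

Writing through the views returned by `np.split` is the same trick the patched `backward_cpu` uses to fill a single output buffer chunk by chunk.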
gh_patches_debug_61068 | rasdani/github-patches | git_diff | Mailu__Mailu-719 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Alternatives useless after podop
After updating to master to get all the up-to-date fixes, postfix now goes through podop, and it seems to no longer accept external mail for alternative domains 😢
Sending internal mail between alternatives works as expected, but external mail does not: a "relay denied" message is shown in the logs, and the postfix podop views never seem to mention alternatives.
</issue>
<code>
[start of core/admin/mailu/internal/views/postfix.py]
1 from mailu import db, models
2 from mailu.internal import internal
3
4 import flask
5
6
7 @internal.route("/postfix/domain/<domain_name>")
8 def postfix_mailbox_domain(domain_name):
9 domain = models.Domain.query.get(domain_name) or flask.abort(404)
10 return flask.jsonify(domain.name)
11
12
13 @internal.route("/postfix/mailbox/<email>")
14 def postfix_mailbox_map(email):
15 user = models.User.query.get(email) or flask.abort(404)
16 return flask.jsonify(user.email)
17
18
19 @internal.route("/postfix/alias/<alias>")
20 def postfix_alias_map(alias):
21 localpart, domain = alias.split('@', 1) if '@' in alias else (None, alias)
22 alternative = models.Alternative.query.get(domain)
23 if alternative:
24 domain = alternative.domain_name
25 email = '{}@{}'.format(localpart, domain)
26 if localpart is None:
27 return flask.jsonify(domain)
28 else:
29 alias_obj = models.Alias.resolve(localpart, domain)
30 if alias_obj:
31 return flask.jsonify(",".join(alias_obj.destination))
32 user_obj = models.User.query.get(email)
33 if user_obj:
34 return flask.jsonify(user_obj.destination)
35 return flask.abort(404)
36
37
38 @internal.route("/postfix/transport/<email>")
39 def postfix_transport(email):
40 localpart, domain = email.split('@', 1) if '@' in email else (None, email)
41 relay = models.Relay.query.get(domain) or flask.abort(404)
42 return flask.jsonify("smtp:[{}]".format(relay.smtp))
43
44
45 @internal.route("/postfix/sender/<sender>")
46 def postfix_sender(sender):
47 """ Simply reject any sender that pretends to be from a local domain
48 """
49 localpart, domain_name = sender.split('@', 1) if '@' in sender else (None, sender)
50 domain = models.Domain.query.get(domain_name)
51 alternative = models.Alternative.query.get(domain_name)
52 if domain or alternative:
53 return flask.jsonify("REJECT")
54 return flask.abort(404)
55
[end of core/admin/mailu/internal/views/postfix.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/admin/mailu/internal/views/postfix.py b/core/admin/mailu/internal/views/postfix.py
--- a/core/admin/mailu/internal/views/postfix.py
+++ b/core/admin/mailu/internal/views/postfix.py
@@ -6,7 +6,9 @@
@internal.route("/postfix/domain/<domain_name>")
def postfix_mailbox_domain(domain_name):
- domain = models.Domain.query.get(domain_name) or flask.abort(404)
+ domain = models.Domain.query.get(domain_name) or \
+ models.Alternative.query.get(domain_name) or \
+ flask.abort(404)
return flask.jsonify(domain.name)
| {"golden_diff": "diff --git a/core/admin/mailu/internal/views/postfix.py b/core/admin/mailu/internal/views/postfix.py\n--- a/core/admin/mailu/internal/views/postfix.py\n+++ b/core/admin/mailu/internal/views/postfix.py\n@@ -6,7 +6,9 @@\n \n @internal.route(\"/postfix/domain/<domain_name>\")\n def postfix_mailbox_domain(domain_name):\n- domain = models.Domain.query.get(domain_name) or flask.abort(404)\n+ domain = models.Domain.query.get(domain_name) or \\\n+ models.Alternative.query.get(domain_name) or \\\n+ flask.abort(404)\n return flask.jsonify(domain.name)\n", "issue": "Alternatives useless after podop\nAfter updating to master to get all the up-to-date fixes it also moves postfix to use podop and it seems to no longer support receiving external mail from alternative domains \ud83d\ude22 \r\n\r\nSending internal mail between alternatives works as expected but not with external mail, a \"relay denied\" message is shown in the logs and when checking the postfix podop views it looks like alternative is never mentioned.\n", "before_files": [{"content": "from mailu import db, models\nfrom mailu.internal import internal\n\nimport flask\n\n\[email protected](\"/postfix/domain/<domain_name>\")\ndef postfix_mailbox_domain(domain_name):\n domain = models.Domain.query.get(domain_name) or flask.abort(404)\n return flask.jsonify(domain.name)\n\n\[email protected](\"/postfix/mailbox/<email>\")\ndef postfix_mailbox_map(email):\n user = models.User.query.get(email) or flask.abort(404)\n return flask.jsonify(user.email)\n\n\[email protected](\"/postfix/alias/<alias>\")\ndef postfix_alias_map(alias):\n localpart, domain = alias.split('@', 1) if '@' in alias else (None, alias)\n alternative = models.Alternative.query.get(domain)\n if alternative:\n domain = alternative.domain_name\n email = '{}@{}'.format(localpart, domain)\n if localpart is None:\n return flask.jsonify(domain)\n else:\n alias_obj = models.Alias.resolve(localpart, domain)\n if alias_obj:\n return flask.jsonify(\",\".join(alias_obj.destination))\n user_obj = models.User.query.get(email)\n if user_obj:\n return flask.jsonify(user_obj.destination)\n return flask.abort(404)\n\n\[email protected](\"/postfix/transport/<email>\")\ndef postfix_transport(email):\n localpart, domain = email.split('@', 1) if '@' in email else (None, email)\n relay = models.Relay.query.get(domain) or flask.abort(404)\n return flask.jsonify(\"smtp:[{}]\".format(relay.smtp))\n\n\[email protected](\"/postfix/sender/<sender>\")\ndef postfix_sender(sender):\n \"\"\" Simply reject any sender that pretends to be from a local domain\n \"\"\"\n localpart, domain_name = sender.split('@', 1) if '@' in sender else (None, sender)\n domain = models.Domain.query.get(domain_name)\n alternative = models.Alternative.query.get(domain_name)\n if domain or alternative:\n return flask.jsonify(\"REJECT\")\n return flask.abort(404)\n", "path": "core/admin/mailu/internal/views/postfix.py"}]} | 1,178 | 140 |
gh_patches_debug_16390 | rasdani/github-patches | git_diff | open-mmlab__mmdetection3d-398 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Question about 3d NMS
As I can see [iou3d_utils](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/ops/iou3d/iou3d_utils.py) is based on [iou3d_nms_utils](https://github.com/open-mmlab/OpenPCDet/blob/master/pcdet/ops/iou3d_nms/iou3d_nms_utils.py) of `OpenPCDet`. The implementation in `OpenPCDet` supports fair `iou` and `nms` with full 3d parametrization: `[x, y, z, dx, dy, dz, heading]`. However the implementation in `mmdetection3d` supports only `[x1, y1, x2, y2, ry]`. This design choice brings a couple of disadvantages. For example, for `VoteNet` on `SUNRGBD` we first predict boxes with angles and then [apply](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/dense_heads/vote_head.py#L627) `nms` to the aligned boxes without angle.
So, my question is: why not use `nms` from `OpenPCDet` instead of resorting to workarounds with `aligned_nms` or `bev_nms`?
Thanks in advance.
</issue>
<code>
[start of mmdet3d/core/post_processing/box3d_nms.py]
1 import numba
2 import numpy as np
3 import torch
4
5 from mmdet3d.ops.iou3d.iou3d_utils import nms_gpu, nms_normal_gpu
6
7
8 def box3d_multiclass_nms(mlvl_bboxes,
9 mlvl_bboxes_for_nms,
10 mlvl_scores,
11 score_thr,
12 max_num,
13 cfg,
14 mlvl_dir_scores=None):
15 """Multi-class nms for 3D boxes.
16
17 Args:
18 mlvl_bboxes (torch.Tensor): Multi-level boxes with shape (N, M).
19 M is the dimensions of boxes.
20 mlvl_bboxes_for_nms (torch.Tensor): Multi-level boxes with shape
21 (N, 4). N is the number of boxes.
22 mlvl_scores (torch.Tensor): Multi-level boxes with shape
23 (N, ). N is the number of boxes.
24 score_thr (float): Score thredhold to filter boxes with low
25 confidence.
26 max_num (int): Maximum number of boxes will be kept.
27 cfg (dict): Configuration dict of NMS.
28 mlvl_dir_scores (torch.Tensor, optional): Multi-level scores
29 of direction classifier. Defaults to None.
30
31 Returns:
32 tuple[torch.Tensor]: Return results after nms, including 3D \
33 bounding boxes, scores, labels and direction scores.
34 """
35 # do multi class nms
36 # the fg class id range: [0, num_classes-1]
37 num_classes = mlvl_scores.shape[1] - 1
38 bboxes = []
39 scores = []
40 labels = []
41 dir_scores = []
42 for i in range(0, num_classes):
43 # get bboxes and scores of this class
44 cls_inds = mlvl_scores[:, i] > score_thr
45 if not cls_inds.any():
46 continue
47
48 _scores = mlvl_scores[cls_inds, i]
49 _bboxes_for_nms = mlvl_bboxes_for_nms[cls_inds, :]
50
51 if cfg.use_rotate_nms:
52 nms_func = nms_gpu
53 else:
54 nms_func = nms_normal_gpu
55
56 selected = nms_func(_bboxes_for_nms, _scores, cfg.nms_thr)
57 _mlvl_bboxes = mlvl_bboxes[cls_inds, :]
58 bboxes.append(_mlvl_bboxes[selected])
59 scores.append(_scores[selected])
60 cls_label = mlvl_bboxes.new_full((len(selected), ),
61 i,
62 dtype=torch.long)
63 labels.append(cls_label)
64
65 if mlvl_dir_scores is not None:
66 _mlvl_dir_scores = mlvl_dir_scores[cls_inds]
67 dir_scores.append(_mlvl_dir_scores[selected])
68
69 if bboxes:
70 bboxes = torch.cat(bboxes, dim=0)
71 scores = torch.cat(scores, dim=0)
72 labels = torch.cat(labels, dim=0)
73 if mlvl_dir_scores is not None:
74 dir_scores = torch.cat(dir_scores, dim=0)
75 if bboxes.shape[0] > max_num:
76 _, inds = scores.sort(descending=True)
77 inds = inds[:max_num]
78 bboxes = bboxes[inds, :]
79 labels = labels[inds]
80 scores = scores[inds]
81 if mlvl_dir_scores is not None:
82 dir_scores = dir_scores[inds]
83 else:
84 bboxes = mlvl_scores.new_zeros((0, mlvl_bboxes.size(-1)))
85 scores = mlvl_scores.new_zeros((0, ))
86 labels = mlvl_scores.new_zeros((0, ), dtype=torch.long)
87 dir_scores = mlvl_scores.new_zeros((0, ))
88 return bboxes, scores, labels, dir_scores
89
90
91 def aligned_3d_nms(boxes, scores, classes, thresh):
92 """3d nms for aligned boxes.
93
94 Args:
95 boxes (torch.Tensor): Aligned box with shape [n, 6].
96 scores (torch.Tensor): Scores of each box.
97 classes (torch.Tensor): Class of each box.
98 thresh (float): Iou threshold for nms.
99
100 Returns:
101 torch.Tensor: Indices of selected boxes.
102 """
103 x1 = boxes[:, 0]
104 y1 = boxes[:, 1]
105 z1 = boxes[:, 2]
106 x2 = boxes[:, 3]
107 y2 = boxes[:, 4]
108 z2 = boxes[:, 5]
109 area = (x2 - x1) * (y2 - y1) * (z2 - z1)
110 zero = boxes.new_zeros(1, )
111
112 score_sorted = torch.argsort(scores)
113 pick = []
114 while (score_sorted.shape[0] != 0):
115 last = score_sorted.shape[0]
116 i = score_sorted[-1]
117 pick.append(i)
118
119 xx1 = torch.max(x1[i], x1[score_sorted[:last - 1]])
120 yy1 = torch.max(y1[i], y1[score_sorted[:last - 1]])
121 zz1 = torch.max(z1[i], z1[score_sorted[:last - 1]])
122 xx2 = torch.min(x2[i], x2[score_sorted[:last - 1]])
123 yy2 = torch.min(y2[i], y2[score_sorted[:last - 1]])
124 zz2 = torch.min(z2[i], z2[score_sorted[:last - 1]])
125 classes1 = classes[i]
126 classes2 = classes[score_sorted[:last - 1]]
127 inter_l = torch.max(zero, xx2 - xx1)
128 inter_w = torch.max(zero, yy2 - yy1)
129 inter_h = torch.max(zero, zz2 - zz1)
130
131 inter = inter_l * inter_w * inter_h
132 iou = inter / (area[i] + area[score_sorted[:last - 1]] - inter)
133 iou = iou * (classes1 == classes2).float()
134 score_sorted = score_sorted[torch.nonzero(
135 iou <= thresh, as_tuple=False).flatten()]
136
137 indices = boxes.new_tensor(pick, dtype=torch.long)
138 return indices
139
140
141 @numba.jit(nopython=True)
142 def circle_nms(dets, thresh, post_max_size=83):
143 """Circular NMS.
144
145 An object is only counted as positive if no other center
146 with a higher confidence exists within a radius r using a
147 bird-eye view distance metric.
148
149 Args:
150 dets (torch.Tensor): Detection results with the shape of [N, 3].
151 thresh (float): Value of threshold.
152 post_max_size (int): Max number of prediction to be kept. Defaults
153 to 83
154
155 Returns:
156 torch.Tensor: Indexes of the detections to be kept.
157 """
158 x1 = dets[:, 0]
159 y1 = dets[:, 1]
160 scores = dets[:, 2]
161 order = scores.argsort()[::-1].astype(np.int32) # highest->lowest
162 ndets = dets.shape[0]
163 suppressed = np.zeros((ndets), dtype=np.int32)
164 keep = []
165 for _i in range(ndets):
166 i = order[_i] # start with highest score box
167 if suppressed[
168 i] == 1: # if any box have enough iou with this, remove it
169 continue
170 keep.append(i)
171 for _j in range(_i + 1, ndets):
172 j = order[_j]
173 if suppressed[j] == 1:
174 continue
175 # calculate center distance between i and j box
176 dist = (x1[i] - x1[j])**2 + (y1[i] - y1[j])**2
177
178 # ovr = inter / areas[j]
179 if dist <= thresh:
180 suppressed[j] = 1
181 return keep[:post_max_size]
182
[end of mmdet3d/core/post_processing/box3d_nms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mmdet3d/core/post_processing/box3d_nms.py b/mmdet3d/core/post_processing/box3d_nms.py
--- a/mmdet3d/core/post_processing/box3d_nms.py
+++ b/mmdet3d/core/post_processing/box3d_nms.py
@@ -18,9 +18,9 @@
mlvl_bboxes (torch.Tensor): Multi-level boxes with shape (N, M).
M is the dimensions of boxes.
mlvl_bboxes_for_nms (torch.Tensor): Multi-level boxes with shape
- (N, 4). N is the number of boxes.
+ (N, 5) ([x1, y1, x2, y2, ry]). N is the number of boxes.
mlvl_scores (torch.Tensor): Multi-level boxes with shape
- (N, ). N is the number of boxes.
+ (N, C + 1). N is the number of boxes. C is the number of classes.
score_thr (float): Score thredhold to filter boxes with low
confidence.
max_num (int): Maximum number of boxes will be kept.
| {"golden_diff": "diff --git a/mmdet3d/core/post_processing/box3d_nms.py b/mmdet3d/core/post_processing/box3d_nms.py\n--- a/mmdet3d/core/post_processing/box3d_nms.py\n+++ b/mmdet3d/core/post_processing/box3d_nms.py\n@@ -18,9 +18,9 @@\n mlvl_bboxes (torch.Tensor): Multi-level boxes with shape (N, M).\n M is the dimensions of boxes.\n mlvl_bboxes_for_nms (torch.Tensor): Multi-level boxes with shape\n- (N, 4). N is the number of boxes.\n+ (N, 5) ([x1, y1, x2, y2, ry]). N is the number of boxes.\n mlvl_scores (torch.Tensor): Multi-level boxes with shape\n- (N, ). N is the number of boxes.\n+ (N, C + 1). N is the number of boxes. C is the number of classes.\n score_thr (float): Score thredhold to filter boxes with low\n confidence.\n max_num (int): Maximum number of boxes will be kept.\n", "issue": "Question about 3d NMS\nAs I can see [iou3d_utils](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/ops/iou3d/iou3d_utils.py) is based on [iou3d_nms_utils](https://github.com/open-mmlab/OpenPCDet/blob/master/pcdet/ops/iou3d_nms/iou3d_nms_utils.py) of `OpenPCDet`. The implementation in `OpenPCDet` supports fair `iou` and `nms` with full 3d parametrization: `[x, y, z, dx, dy, dz, heading]`. However the implementation in `mmdetection3d` supports only `[x1, y1, x2, y2, ry]`. This design choice brings a couple of disadvantages. For example, for `VoteNet` on `SUNRGBD` we first predict boxes with angles and then [apply](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/dense_heads/vote_head.py#L627) `nms` to the aligned boxes without angle.\r\n\r\nSo, my question is, why not to use `nms` from `OpenPCDet` instead of applying lifehacks with `aligned_nms` or `bev_nms`?\r\nThanks in advance.\n", "before_files": [{"content": "import numba\nimport numpy as np\nimport torch\n\nfrom mmdet3d.ops.iou3d.iou3d_utils import nms_gpu, nms_normal_gpu\n\n\ndef box3d_multiclass_nms(mlvl_bboxes,\n mlvl_bboxes_for_nms,\n mlvl_scores,\n score_thr,\n max_num,\n cfg,\n mlvl_dir_scores=None):\n \"\"\"Multi-class nms for 3D boxes.\n\n Args:\n mlvl_bboxes (torch.Tensor): Multi-level boxes with shape (N, M).\n M is the dimensions of boxes.\n mlvl_bboxes_for_nms (torch.Tensor): Multi-level boxes with shape\n (N, 4). N is the number of boxes.\n mlvl_scores (torch.Tensor): Multi-level boxes with shape\n (N, ). N is the number of boxes.\n score_thr (float): Score thredhold to filter boxes with low\n confidence.\n max_num (int): Maximum number of boxes will be kept.\n cfg (dict): Configuration dict of NMS.\n mlvl_dir_scores (torch.Tensor, optional): Multi-level scores\n of direction classifier. 
Defaults to None.\n\n Returns:\n tuple[torch.Tensor]: Return results after nms, including 3D \\\n bounding boxes, scores, labels and direction scores.\n \"\"\"\n # do multi class nms\n # the fg class id range: [0, num_classes-1]\n num_classes = mlvl_scores.shape[1] - 1\n bboxes = []\n scores = []\n labels = []\n dir_scores = []\n for i in range(0, num_classes):\n # get bboxes and scores of this class\n cls_inds = mlvl_scores[:, i] > score_thr\n if not cls_inds.any():\n continue\n\n _scores = mlvl_scores[cls_inds, i]\n _bboxes_for_nms = mlvl_bboxes_for_nms[cls_inds, :]\n\n if cfg.use_rotate_nms:\n nms_func = nms_gpu\n else:\n nms_func = nms_normal_gpu\n\n selected = nms_func(_bboxes_for_nms, _scores, cfg.nms_thr)\n _mlvl_bboxes = mlvl_bboxes[cls_inds, :]\n bboxes.append(_mlvl_bboxes[selected])\n scores.append(_scores[selected])\n cls_label = mlvl_bboxes.new_full((len(selected), ),\n i,\n dtype=torch.long)\n labels.append(cls_label)\n\n if mlvl_dir_scores is not None:\n _mlvl_dir_scores = mlvl_dir_scores[cls_inds]\n dir_scores.append(_mlvl_dir_scores[selected])\n\n if bboxes:\n bboxes = torch.cat(bboxes, dim=0)\n scores = torch.cat(scores, dim=0)\n labels = torch.cat(labels, dim=0)\n if mlvl_dir_scores is not None:\n dir_scores = torch.cat(dir_scores, dim=0)\n if bboxes.shape[0] > max_num:\n _, inds = scores.sort(descending=True)\n inds = inds[:max_num]\n bboxes = bboxes[inds, :]\n labels = labels[inds]\n scores = scores[inds]\n if mlvl_dir_scores is not None:\n dir_scores = dir_scores[inds]\n else:\n bboxes = mlvl_scores.new_zeros((0, mlvl_bboxes.size(-1)))\n scores = mlvl_scores.new_zeros((0, ))\n labels = mlvl_scores.new_zeros((0, ), dtype=torch.long)\n dir_scores = mlvl_scores.new_zeros((0, ))\n return bboxes, scores, labels, dir_scores\n\n\ndef aligned_3d_nms(boxes, scores, classes, thresh):\n \"\"\"3d nms for aligned boxes.\n\n Args:\n boxes (torch.Tensor): Aligned box with shape [n, 6].\n scores (torch.Tensor): Scores of each box.\n classes (torch.Tensor): Class of each box.\n thresh (float): Iou threshold for nms.\n\n Returns:\n torch.Tensor: Indices of selected boxes.\n \"\"\"\n x1 = boxes[:, 0]\n y1 = boxes[:, 1]\n z1 = boxes[:, 2]\n x2 = boxes[:, 3]\n y2 = boxes[:, 4]\n z2 = boxes[:, 5]\n area = (x2 - x1) * (y2 - y1) * (z2 - z1)\n zero = boxes.new_zeros(1, )\n\n score_sorted = torch.argsort(scores)\n pick = []\n while (score_sorted.shape[0] != 0):\n last = score_sorted.shape[0]\n i = score_sorted[-1]\n pick.append(i)\n\n xx1 = torch.max(x1[i], x1[score_sorted[:last - 1]])\n yy1 = torch.max(y1[i], y1[score_sorted[:last - 1]])\n zz1 = torch.max(z1[i], z1[score_sorted[:last - 1]])\n xx2 = torch.min(x2[i], x2[score_sorted[:last - 1]])\n yy2 = torch.min(y2[i], y2[score_sorted[:last - 1]])\n zz2 = torch.min(z2[i], z2[score_sorted[:last - 1]])\n classes1 = classes[i]\n classes2 = classes[score_sorted[:last - 1]]\n inter_l = torch.max(zero, xx2 - xx1)\n inter_w = torch.max(zero, yy2 - yy1)\n inter_h = torch.max(zero, zz2 - zz1)\n\n inter = inter_l * inter_w * inter_h\n iou = inter / (area[i] + area[score_sorted[:last - 1]] - inter)\n iou = iou * (classes1 == classes2).float()\n score_sorted = score_sorted[torch.nonzero(\n iou <= thresh, as_tuple=False).flatten()]\n\n indices = boxes.new_tensor(pick, dtype=torch.long)\n return indices\n\n\[email protected](nopython=True)\ndef circle_nms(dets, thresh, post_max_size=83):\n \"\"\"Circular NMS.\n\n An object is only counted as positive if no other center\n with a higher confidence exists within a radius r using a\n bird-eye view distance 
metric.\n\n Args:\n dets (torch.Tensor): Detection results with the shape of [N, 3].\n thresh (float): Value of threshold.\n post_max_size (int): Max number of prediction to be kept. Defaults\n to 83\n\n Returns:\n torch.Tensor: Indexes of the detections to be kept.\n \"\"\"\n x1 = dets[:, 0]\n y1 = dets[:, 1]\n scores = dets[:, 2]\n order = scores.argsort()[::-1].astype(np.int32) # highest->lowest\n ndets = dets.shape[0]\n suppressed = np.zeros((ndets), dtype=np.int32)\n keep = []\n for _i in range(ndets):\n i = order[_i] # start with highest score box\n if suppressed[\n i] == 1: # if any box have enough iou with this, remove it\n continue\n keep.append(i)\n for _j in range(_i + 1, ndets):\n j = order[_j]\n if suppressed[j] == 1:\n continue\n # calculate center distance between i and j box\n dist = (x1[i] - x1[j])**2 + (y1[i] - y1[j])**2\n\n # ovr = inter / areas[j]\n if dist <= thresh:\n suppressed[j] = 1\n return keep[:post_max_size]\n", "path": "mmdet3d/core/post_processing/box3d_nms.py"}]} | 2,989 | 258 |
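The accepted mmdetection3d change is documentation-only: it pins down that `box3d_multiclass_nms` consumes BEV boxes of shape `(N, 5)` in `[x1, y1, x2, y2, ry]` form and scores of shape `(N, C + 1)`, which is exactly the parametrization gap the question raises, while `aligned_3d_nms` ignores heading altogether. As a small illustration of what the axis-aligned path measures, here is the kind of pairwise IoU computed inside `aligned_3d_nms` (box values are made up):

```python
import torch

def aligned_iou_3d(a, b):
    """IoU of axis-aligned 3D boxes given as [x1, y1, z1, x2, y2, z2]; heading is ignored."""
    lower = torch.max(a[:3], b[:3])                # lower corner of the overlap region
    upper = torch.min(a[3:], b[3:])                # upper corner of the overlap region
    inter = (upper - lower).clamp(min=0).prod()
    vol_a = (a[3:] - a[:3]).prod()
    vol_b = (b[3:] - b[:3]).prod()
    return inter / (vol_a + vol_b - inter)

box_a = torch.tensor([0.0, 0.0, 0.0, 2.0, 2.0, 2.0])
box_b = torch.tensor([1.0, 1.0, 1.0, 3.0, 3.0, 3.0])
print(aligned_iou_3d(box_a, box_b))                # 1 / (8 + 8 - 1) ≈ 0.067
```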
gh_patches_debug_34169 | rasdani/github-patches | git_diff | conan-io__conan-center-index-253 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[package] catch2/2.9.2: Expected CMake scripts to be included in the package
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **catch2/2.9.2**
I expected to have access to cmake scripts that are installed with Catch2.
The helper scripts are set to be installed.
https://github.com/conan-io/conan-center-index/blob/6a7ff72be4e6fa6362112459f7319f6e6e565a99/recipes/catch2/2.x.x/conanfile.py#L33
Then they are deleted during packaging.
https://github.com/conan-io/conan-center-index/blob/6a7ff72be4e6fa6362112459f7319f6e6e565a99/recipes/catch2/2.x.x/conanfile.py#L51
Currently, I am using the older bincrafters package (catch2/2.5.0@bincrafters/stable) which still includes the CMake scripts. I would need to maintain my own conan package to use the newer version of Catch2.
</issue>
<code>
[start of recipes/catch2/2.x.x/conanfile.py]
1 #!/usr/bin/env python
2
3 import os
4
5 from conans import ConanFile, CMake, tools
6
7
8 class ConanRecipe(ConanFile):
9 name = "catch2"
10 description = "A modern, C++-native, header-only, framework for unit-tests, TDD and BDD"
11 topics = ("conan", "catch2", "header-only", "unit-test", "tdd", "bdd")
12 homepage = "https://github.com/catchorg/Catch2"
13 url = "https://github.com/conan-io/conan-center-index"
14 license = "BSL-1.0"
15
16 settings = "os", "compiler", "build_type", "arch"
17
18 generators = "cmake"
19
20 _source_subfolder = "source_subfolder"
21
22 def source(self):
23 tools.get(**self.conan_data["sources"][self.version])
24 extracted_dir = "Catch2-" + self.version
25 os.rename(extracted_dir, self._source_subfolder)
26
27 _build_subfolder = "build_subfolder"
28
29 def _configure_cmake(self):
30 cmake = CMake(self)
31 cmake.definitions["BUILD_TESTING"] = "OFF"
32 cmake.definitions["CATCH_INSTALL_DOCS"] = "OFF"
33 cmake.definitions["CATCH_INSTALL_HELPERS"] = "ON"
34 cmake.configure(
35 source_folder=self._source_subfolder,
36 build_folder=self._build_subfolder
37 )
38 return cmake
39
40 def build(self):
41 cmake = self._configure_cmake()
42 cmake.build()
43
44 def package(self):
45 self.copy(pattern="LICENSE.txt", dst="licenses",
46 src=self._source_subfolder)
47
48 cmake = self._configure_cmake()
49 cmake.install()
50
51 tools.rmdir(os.path.join(self.package_folder, "lib", "cmake"))
52 tools.rmdir(os.path.join(self.package_folder, "share"))
53
54 def package_id(self):
55 self.info.header_only()
56
[end of recipes/catch2/2.x.x/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/catch2/2.x.x/conanfile.py b/recipes/catch2/2.x.x/conanfile.py
--- a/recipes/catch2/2.x.x/conanfile.py
+++ b/recipes/catch2/2.x.x/conanfile.py
@@ -1,5 +1,3 @@
-#!/usr/bin/env python
-
import os
from conans import ConanFile, CMake, tools
@@ -12,20 +10,16 @@
homepage = "https://github.com/catchorg/Catch2"
url = "https://github.com/conan-io/conan-center-index"
license = "BSL-1.0"
-
settings = "os", "compiler", "build_type", "arch"
-
generators = "cmake"
-
_source_subfolder = "source_subfolder"
+ _build_subfolder = "build_subfolder"
def source(self):
tools.get(**self.conan_data["sources"][self.version])
extracted_dir = "Catch2-" + self.version
os.rename(extracted_dir, self._source_subfolder)
- _build_subfolder = "build_subfolder"
-
def _configure_cmake(self):
cmake = CMake(self)
cmake.definitions["BUILD_TESTING"] = "OFF"
@@ -42,14 +36,18 @@
cmake.build()
def package(self):
- self.copy(pattern="LICENSE.txt", dst="licenses",
- src=self._source_subfolder)
-
+ self.copy(pattern="LICENSE.txt", dst="licenses", src=self._source_subfolder)
cmake = self._configure_cmake()
cmake.install()
-
tools.rmdir(os.path.join(self.package_folder, "lib", "cmake"))
tools.rmdir(os.path.join(self.package_folder, "share"))
+ for cmake_file in ["ParseAndAddCatchTests.cmake", "Catch.cmake"]:
+ self.copy(cmake_file,
+ src=os.path.join(self._source_subfolder, "contrib"),
+ dst=os.path.join("lib", "cmake", "Catch2"))
def package_id(self):
self.info.header_only()
+
+ def package_info(self):
+ self.cpp_info.builddirs = [os.path.join("lib", "cmake", "Catch2")]
| {"golden_diff": "diff --git a/recipes/catch2/2.x.x/conanfile.py b/recipes/catch2/2.x.x/conanfile.py\n--- a/recipes/catch2/2.x.x/conanfile.py\n+++ b/recipes/catch2/2.x.x/conanfile.py\n@@ -1,5 +1,3 @@\n-#!/usr/bin/env python\n-\n import os\n \n from conans import ConanFile, CMake, tools\n@@ -12,20 +10,16 @@\n homepage = \"https://github.com/catchorg/Catch2\"\n url = \"https://github.com/conan-io/conan-center-index\"\n license = \"BSL-1.0\"\n-\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n-\n generators = \"cmake\"\n-\n _source_subfolder = \"source_subfolder\"\n+ _build_subfolder = \"build_subfolder\"\n \n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = \"Catch2-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n \n- _build_subfolder = \"build_subfolder\"\n-\n def _configure_cmake(self):\n cmake = CMake(self)\n cmake.definitions[\"BUILD_TESTING\"] = \"OFF\"\n@@ -42,14 +36,18 @@\n cmake.build()\n \n def package(self):\n- self.copy(pattern=\"LICENSE.txt\", dst=\"licenses\",\n- src=self._source_subfolder)\n-\n+ self.copy(pattern=\"LICENSE.txt\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n-\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n+ for cmake_file in [\"ParseAndAddCatchTests.cmake\", \"Catch.cmake\"]:\n+ self.copy(cmake_file,\n+ src=os.path.join(self._source_subfolder, \"contrib\"),\n+ dst=os.path.join(\"lib\", \"cmake\", \"Catch2\"))\n \n def package_id(self):\n self.info.header_only()\n+\n+ def package_info(self):\n+ self.cpp_info.builddirs = [os.path.join(\"lib\", \"cmake\", \"Catch2\")]\n", "issue": "[package] catch2/2.9.2: Expected CMake scripts to be included in the package \n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **catch2/2.9.2**\r\n\r\nI expected to have access to cmake scripts that are installed with Catch2.\r\n\r\nThe helper scripts are set to be installed.\r\n\r\nhttps://github.com/conan-io/conan-center-index/blob/6a7ff72be4e6fa6362112459f7319f6e6e565a99/recipes/catch2/2.x.x/conanfile.py#L33\r\n\r\nThen they are deleted during packaging.\r\n\r\nhttps://github.com/conan-io/conan-center-index/blob/6a7ff72be4e6fa6362112459f7319f6e6e565a99/recipes/catch2/2.x.x/conanfile.py#L51\r\n\r\nCurrently, I am using the older bincrafters package (catch2/2.5.0@bincrafters/stable) which still includes the CMake scripts. 
I would need to maintain my own conan package to use the newer version of Catch2.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\n\nfrom conans import ConanFile, CMake, tools\n\n\nclass ConanRecipe(ConanFile):\n name = \"catch2\"\n description = \"A modern, C++-native, header-only, framework for unit-tests, TDD and BDD\"\n topics = (\"conan\", \"catch2\", \"header-only\", \"unit-test\", \"tdd\", \"bdd\")\n homepage = \"https://github.com/catchorg/Catch2\"\n url = \"https://github.com/conan-io/conan-center-index\"\n license = \"BSL-1.0\"\n\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n\n generators = \"cmake\"\n\n _source_subfolder = \"source_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = \"Catch2-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n\n _build_subfolder = \"build_subfolder\"\n\n def _configure_cmake(self):\n cmake = CMake(self)\n cmake.definitions[\"BUILD_TESTING\"] = \"OFF\"\n cmake.definitions[\"CATCH_INSTALL_DOCS\"] = \"OFF\"\n cmake.definitions[\"CATCH_INSTALL_HELPERS\"] = \"ON\"\n cmake.configure(\n source_folder=self._source_subfolder,\n build_folder=self._build_subfolder\n )\n return cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(pattern=\"LICENSE.txt\", dst=\"licenses\",\n src=self._source_subfolder)\n\n cmake = self._configure_cmake()\n cmake.install()\n\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n\n def package_id(self):\n self.info.header_only()\n", "path": "recipes/catch2/2.x.x/conanfile.py"}]} | 1,348 | 508 |
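The catch2 recipe change above stops discarding the helper scripts: `Catch.cmake` and `ParseAndAddCatchTests.cmake` are copied from `contrib/` into `lib/cmake/Catch2`, and that directory is advertised through `cpp_info.builddirs`. The snippet below imitates only the file-staging step with the standard library, using throwaway directories and empty placeholder files so it runs outside Conan:

```python
import os
import shutil
import tempfile

HELPERS = ["Catch.cmake", "ParseAndAddCatchTests.cmake"]

def stage_catch2_helpers(source_dir, package_dir):
    """Copy Catch2's contrib CMake helpers into <package>/lib/cmake/Catch2."""
    dst = os.path.join(package_dir, "lib", "cmake", "Catch2")
    os.makedirs(dst, exist_ok=True)
    for name in HELPERS:
        shutil.copy(os.path.join(source_dir, "contrib", name), dst)
    return dst

# Self-test with temporary directories and empty stand-in files.
with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as pkg:
    os.makedirs(os.path.join(src, "contrib"))
    for name in HELPERS:
        open(os.path.join(src, "contrib", name), "w").close()
    print(sorted(os.listdir(stage_catch2_helpers(src, pkg))))
```

In the real recipe the same effect comes from `self.copy(...)` inside `package()`; the intent is that consumers can `include(Catch)` once their generator exposes the package's `builddirs` to CMake.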
gh_patches_debug_35972 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-1241 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Interface values don't convert correctly from Pydantic models
When calling `from_pydantic` on a class with an interface field, the field value is always converted into an instance of the base class, no matter what its starting type is. The expected behavior should probably be to convert to the corresponding subtype class instead. See here for an example: https://gist.github.com/Matt343/fbce0cdffe1523bb22016bed6f65473f
</issue>
<code>
[start of strawberry/experimental/pydantic/__init__.py]
1 from .error_type import error_type
2 from .exceptions import UnregisteredTypeException
3 from .object_type import input, type
4
5
6 __all__ = ["error_type", "UnregisteredTypeException", "input", "type"]
7
[end of strawberry/experimental/pydantic/__init__.py]
[start of strawberry/experimental/pydantic/conversion.py]
1 from typing import Union, cast
2
3 from strawberry.field import StrawberryField
4 from strawberry.scalars import is_scalar
5 from strawberry.type import StrawberryList, StrawberryOptional, StrawberryType
6 from strawberry.union import StrawberryUnion
7
8
9 def _convert_from_pydantic_to_strawberry_type(
10 type_: Union[StrawberryType, type], data_from_model=None, extra=None
11 ):
12 data = data_from_model if data_from_model is not None else extra
13
14 if isinstance(type_, StrawberryOptional):
15 if data is None:
16 return data
17 return _convert_from_pydantic_to_strawberry_type(
18 type_.of_type, data_from_model=data, extra=extra
19 )
20 if isinstance(type_, StrawberryUnion):
21 for option_type in type_.types:
22 if hasattr(option_type, "_pydantic_type"):
23 source_type = option_type._pydantic_type # type: ignore
24 else:
25 source_type = cast(type, option_type)
26 if isinstance(data, source_type):
27 return _convert_from_pydantic_to_strawberry_type(
28 option_type, data_from_model=data, extra=extra
29 )
30 if isinstance(type_, StrawberryList):
31 items = []
32 for index, item in enumerate(data):
33 items.append(
34 _convert_from_pydantic_to_strawberry_type(
35 type_.of_type,
36 data_from_model=item,
37 extra=extra[index] if extra else None,
38 )
39 )
40
41 return items
42 elif is_scalar(type_):
43 return data
44 else:
45 return convert_pydantic_model_to_strawberry_class(
46 type_, model_instance=data_from_model, extra=extra
47 )
48
49
50 def convert_pydantic_model_to_strawberry_class(cls, *, model_instance=None, extra=None):
51 extra = extra or {}
52 kwargs = {}
53
54 for field in cls._type_definition.fields:
55 field = cast(StrawberryField, field)
56 python_name = field.python_name
57
58 data_from_extra = extra.get(python_name, None)
59 data_from_model = (
60 getattr(model_instance, python_name, None) if model_instance else None
61 )
62 kwargs[python_name] = _convert_from_pydantic_to_strawberry_type(
63 field.type, data_from_model, extra=data_from_extra
64 )
65
66 return cls(**kwargs)
67
[end of strawberry/experimental/pydantic/conversion.py]
[start of strawberry/experimental/pydantic/object_type.py]
1 import builtins
2 import dataclasses
3 from functools import partial
4 from typing import Any, Dict, List, Optional, Tuple, Type, cast
5
6 from pydantic import BaseModel
7 from pydantic.fields import ModelField
8
9 from strawberry.arguments import UNSET
10 from strawberry.experimental.pydantic.conversion import (
11 convert_pydantic_model_to_strawberry_class,
12 )
13 from strawberry.experimental.pydantic.fields import get_basic_type
14 from strawberry.field import StrawberryField
15 from strawberry.object_type import _process_type, _wrap_dataclass
16 from strawberry.private import Private
17 from strawberry.types.type_resolver import _get_fields
18 from strawberry.types.types import FederationTypeParams, TypeDefinition
19
20 from .exceptions import MissingFieldsListError, UnregisteredTypeException
21
22
23 def replace_pydantic_types(type_: Any):
24 if hasattr(type_, "__args__"):
25 new_type = type_.copy_with(
26 tuple(replace_pydantic_types(t) for t in type_.__args__)
27 )
28
29 if isinstance(new_type, TypeDefinition):
30 # TODO: Not sure if this is necessary. No coverage in tests
31 # TODO: Unnecessary with StrawberryObject
32
33 new_type = builtins.type(
34 new_type.name,
35 (),
36 {"_type_definition": new_type},
37 )
38
39 return new_type
40
41 if issubclass(type_, BaseModel):
42 if hasattr(type_, "_strawberry_type"):
43 return type_._strawberry_type
44 else:
45 raise UnregisteredTypeException(type_)
46
47 return type_
48
49
50 def get_type_for_field(field: ModelField):
51 type_ = field.outer_type_
52 type_ = get_basic_type(type_)
53 type_ = replace_pydantic_types(type_)
54
55 if not field.required:
56 type_ = Optional[type_]
57
58 return type_
59
60
61 def _get_private_fields(cls: Type) -> List[dataclasses.Field]:
62 private_fields: List[dataclasses.Field] = []
63 for field in dataclasses.fields(cls):
64 if isinstance(field.type, Private):
65 private_fields.append(field)
66 return private_fields
67
68
69 def type(
70 model: Type[BaseModel],
71 *,
72 fields: List[str],
73 name: Optional[str] = None,
74 is_input: bool = False,
75 is_interface: bool = False,
76 description: Optional[str] = None,
77 federation: Optional[FederationTypeParams] = None,
78 ):
79 def wrap(cls):
80 if not fields:
81 raise MissingFieldsListError(model)
82
83 model_fields = model.__fields__
84 fields_set = set(fields)
85
86 all_fields: List[Tuple[str, Any, dataclasses.Field]] = [
87 (
88 name,
89 get_type_for_field(field),
90 StrawberryField(
91 python_name=field.name,
92 graphql_name=field.alias if field.has_alias else None,
93 default=field.default if not field.required else UNSET,
94 default_factory=(
95 field.default_factory if field.default_factory else UNSET
96 ),
97 type_annotation=get_type_for_field(field),
98 ),
99 )
100 for name, field in model_fields.items()
101 if name in fields_set
102 ]
103
104 wrapped = _wrap_dataclass(cls)
105 extra_fields = cast(List[dataclasses.Field], _get_fields(wrapped))
106 private_fields = _get_private_fields(wrapped)
107
108 all_fields.extend(
109 (
110 (
111 field.name,
112 field.type,
113 field,
114 )
115 for field in extra_fields + private_fields
116 )
117 )
118
119 # Sort fields so that fields with missing defaults go first
120 # because dataclasses require that fields with no defaults are defined
121 # first
122 missing_default = []
123 has_default = []
124 for field in all_fields:
125 if field[2].default is dataclasses.MISSING:
126 missing_default.append(field)
127 else:
128 has_default.append(field)
129
130 sorted_fields = missing_default + has_default
131
132 cls = dataclasses.make_dataclass(
133 cls.__name__,
134 sorted_fields,
135 )
136
137 _process_type(
138 cls,
139 name=name,
140 is_input=is_input,
141 is_interface=is_interface,
142 description=description,
143 federation=federation,
144 )
145
146 model._strawberry_type = cls # type: ignore
147 cls._pydantic_type = model # type: ignore
148
149 def from_pydantic(instance: Any, extra: Dict[str, Any] = None) -> Any:
150 return convert_pydantic_model_to_strawberry_class(
151 cls=cls, model_instance=instance, extra=extra
152 )
153
154 def to_pydantic(self) -> Any:
155 instance_kwargs = dataclasses.asdict(self)
156
157 return model(**instance_kwargs)
158
159 cls.from_pydantic = staticmethod(from_pydantic)
160 cls.to_pydantic = to_pydantic
161
162 return cls
163
164 return wrap
165
166
167 input = partial(type, is_input=True)
168
[end of strawberry/experimental/pydantic/object_type.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/experimental/pydantic/__init__.py b/strawberry/experimental/pydantic/__init__.py
--- a/strawberry/experimental/pydantic/__init__.py
+++ b/strawberry/experimental/pydantic/__init__.py
@@ -1,6 +1,6 @@
from .error_type import error_type
from .exceptions import UnregisteredTypeException
-from .object_type import input, type
+from .object_type import input, interface, type
-__all__ = ["error_type", "UnregisteredTypeException", "input", "type"]
+__all__ = ["error_type", "UnregisteredTypeException", "input", "type", "interface"]
diff --git a/strawberry/experimental/pydantic/conversion.py b/strawberry/experimental/pydantic/conversion.py
--- a/strawberry/experimental/pydantic/conversion.py
+++ b/strawberry/experimental/pydantic/conversion.py
@@ -1,5 +1,6 @@
from typing import Union, cast
+from strawberry.enum import EnumDefinition
from strawberry.field import StrawberryField
from strawberry.scalars import is_scalar
from strawberry.type import StrawberryList, StrawberryOptional, StrawberryType
@@ -27,6 +28,8 @@
return _convert_from_pydantic_to_strawberry_type(
option_type, data_from_model=data, extra=extra
)
+ if isinstance(type_, EnumDefinition):
+ return data
if isinstance(type_, StrawberryList):
items = []
for index, item in enumerate(data):
@@ -42,6 +45,10 @@
elif is_scalar(type_):
return data
else:
+ # in the case of an interface, the concrete type may be more specific
+ # than the type in the field definition
+ if hasattr(type(data), "_strawberry_type"):
+ type_ = type(data)._strawberry_type
return convert_pydantic_model_to_strawberry_class(
type_, model_instance=data_from_model, extra=extra
)
diff --git a/strawberry/experimental/pydantic/object_type.py b/strawberry/experimental/pydantic/object_type.py
--- a/strawberry/experimental/pydantic/object_type.py
+++ b/strawberry/experimental/pydantic/object_type.py
@@ -132,6 +132,7 @@
cls = dataclasses.make_dataclass(
cls.__name__,
sorted_fields,
+ bases=cls.__bases__,
)
_process_type(
@@ -165,3 +166,5 @@
input = partial(type, is_input=True)
+
+interface = partial(type, is_interface=True)
| {"golden_diff": "diff --git a/strawberry/experimental/pydantic/__init__.py b/strawberry/experimental/pydantic/__init__.py\n--- a/strawberry/experimental/pydantic/__init__.py\n+++ b/strawberry/experimental/pydantic/__init__.py\n@@ -1,6 +1,6 @@\n from .error_type import error_type\n from .exceptions import UnregisteredTypeException\n-from .object_type import input, type\n+from .object_type import input, interface, type\n \n \n-__all__ = [\"error_type\", \"UnregisteredTypeException\", \"input\", \"type\"]\n+__all__ = [\"error_type\", \"UnregisteredTypeException\", \"input\", \"type\", \"interface\"]\ndiff --git a/strawberry/experimental/pydantic/conversion.py b/strawberry/experimental/pydantic/conversion.py\n--- a/strawberry/experimental/pydantic/conversion.py\n+++ b/strawberry/experimental/pydantic/conversion.py\n@@ -1,5 +1,6 @@\n from typing import Union, cast\n \n+from strawberry.enum import EnumDefinition\n from strawberry.field import StrawberryField\n from strawberry.scalars import is_scalar\n from strawberry.type import StrawberryList, StrawberryOptional, StrawberryType\n@@ -27,6 +28,8 @@\n return _convert_from_pydantic_to_strawberry_type(\n option_type, data_from_model=data, extra=extra\n )\n+ if isinstance(type_, EnumDefinition):\n+ return data\n if isinstance(type_, StrawberryList):\n items = []\n for index, item in enumerate(data):\n@@ -42,6 +45,10 @@\n elif is_scalar(type_):\n return data\n else:\n+ # in the case of an interface, the concrete type may be more specific\n+ # than the type in the field definition\n+ if hasattr(type(data), \"_strawberry_type\"):\n+ type_ = type(data)._strawberry_type\n return convert_pydantic_model_to_strawberry_class(\n type_, model_instance=data_from_model, extra=extra\n )\ndiff --git a/strawberry/experimental/pydantic/object_type.py b/strawberry/experimental/pydantic/object_type.py\n--- a/strawberry/experimental/pydantic/object_type.py\n+++ b/strawberry/experimental/pydantic/object_type.py\n@@ -132,6 +132,7 @@\n cls = dataclasses.make_dataclass(\n cls.__name__,\n sorted_fields,\n+ bases=cls.__bases__,\n )\n \n _process_type(\n@@ -165,3 +166,5 @@\n \n \n input = partial(type, is_input=True)\n+\n+interface = partial(type, is_interface=True)\n", "issue": "Interface values don't convert correctly from Pydantic models\nWhen calling `from_pydantic` on a class with an interface field, the field value is always converted into an instance of the base class, no matter what its starting type is. The expected behavior should probably be to convert to the corresponding subtype class instead. 
See here for an example: https://gist.github.com/Matt343/fbce0cdffe1523bb22016bed6f65473f\n", "before_files": [{"content": "from .error_type import error_type\nfrom .exceptions import UnregisteredTypeException\nfrom .object_type import input, type\n\n\n__all__ = [\"error_type\", \"UnregisteredTypeException\", \"input\", \"type\"]\n", "path": "strawberry/experimental/pydantic/__init__.py"}, {"content": "from typing import Union, cast\n\nfrom strawberry.field import StrawberryField\nfrom strawberry.scalars import is_scalar\nfrom strawberry.type import StrawberryList, StrawberryOptional, StrawberryType\nfrom strawberry.union import StrawberryUnion\n\n\ndef _convert_from_pydantic_to_strawberry_type(\n type_: Union[StrawberryType, type], data_from_model=None, extra=None\n):\n data = data_from_model if data_from_model is not None else extra\n\n if isinstance(type_, StrawberryOptional):\n if data is None:\n return data\n return _convert_from_pydantic_to_strawberry_type(\n type_.of_type, data_from_model=data, extra=extra\n )\n if isinstance(type_, StrawberryUnion):\n for option_type in type_.types:\n if hasattr(option_type, \"_pydantic_type\"):\n source_type = option_type._pydantic_type # type: ignore\n else:\n source_type = cast(type, option_type)\n if isinstance(data, source_type):\n return _convert_from_pydantic_to_strawberry_type(\n option_type, data_from_model=data, extra=extra\n )\n if isinstance(type_, StrawberryList):\n items = []\n for index, item in enumerate(data):\n items.append(\n _convert_from_pydantic_to_strawberry_type(\n type_.of_type,\n data_from_model=item,\n extra=extra[index] if extra else None,\n )\n )\n\n return items\n elif is_scalar(type_):\n return data\n else:\n return convert_pydantic_model_to_strawberry_class(\n type_, model_instance=data_from_model, extra=extra\n )\n\n\ndef convert_pydantic_model_to_strawberry_class(cls, *, model_instance=None, extra=None):\n extra = extra or {}\n kwargs = {}\n\n for field in cls._type_definition.fields:\n field = cast(StrawberryField, field)\n python_name = field.python_name\n\n data_from_extra = extra.get(python_name, None)\n data_from_model = (\n getattr(model_instance, python_name, None) if model_instance else None\n )\n kwargs[python_name] = _convert_from_pydantic_to_strawberry_type(\n field.type, data_from_model, extra=data_from_extra\n )\n\n return cls(**kwargs)\n", "path": "strawberry/experimental/pydantic/conversion.py"}, {"content": "import builtins\nimport dataclasses\nfrom functools import partial\nfrom typing import Any, Dict, List, Optional, Tuple, Type, cast\n\nfrom pydantic import BaseModel\nfrom pydantic.fields import ModelField\n\nfrom strawberry.arguments import UNSET\nfrom strawberry.experimental.pydantic.conversion import (\n convert_pydantic_model_to_strawberry_class,\n)\nfrom strawberry.experimental.pydantic.fields import get_basic_type\nfrom strawberry.field import StrawberryField\nfrom strawberry.object_type import _process_type, _wrap_dataclass\nfrom strawberry.private import Private\nfrom strawberry.types.type_resolver import _get_fields\nfrom strawberry.types.types import FederationTypeParams, TypeDefinition\n\nfrom .exceptions import MissingFieldsListError, UnregisteredTypeException\n\n\ndef replace_pydantic_types(type_: Any):\n if hasattr(type_, \"__args__\"):\n new_type = type_.copy_with(\n tuple(replace_pydantic_types(t) for t in type_.__args__)\n )\n\n if isinstance(new_type, TypeDefinition):\n # TODO: Not sure if this is necessary. 
No coverage in tests\n # TODO: Unnecessary with StrawberryObject\n\n new_type = builtins.type(\n new_type.name,\n (),\n {\"_type_definition\": new_type},\n )\n\n return new_type\n\n if issubclass(type_, BaseModel):\n if hasattr(type_, \"_strawberry_type\"):\n return type_._strawberry_type\n else:\n raise UnregisteredTypeException(type_)\n\n return type_\n\n\ndef get_type_for_field(field: ModelField):\n type_ = field.outer_type_\n type_ = get_basic_type(type_)\n type_ = replace_pydantic_types(type_)\n\n if not field.required:\n type_ = Optional[type_]\n\n return type_\n\n\ndef _get_private_fields(cls: Type) -> List[dataclasses.Field]:\n private_fields: List[dataclasses.Field] = []\n for field in dataclasses.fields(cls):\n if isinstance(field.type, Private):\n private_fields.append(field)\n return private_fields\n\n\ndef type(\n model: Type[BaseModel],\n *,\n fields: List[str],\n name: Optional[str] = None,\n is_input: bool = False,\n is_interface: bool = False,\n description: Optional[str] = None,\n federation: Optional[FederationTypeParams] = None,\n):\n def wrap(cls):\n if not fields:\n raise MissingFieldsListError(model)\n\n model_fields = model.__fields__\n fields_set = set(fields)\n\n all_fields: List[Tuple[str, Any, dataclasses.Field]] = [\n (\n name,\n get_type_for_field(field),\n StrawberryField(\n python_name=field.name,\n graphql_name=field.alias if field.has_alias else None,\n default=field.default if not field.required else UNSET,\n default_factory=(\n field.default_factory if field.default_factory else UNSET\n ),\n type_annotation=get_type_for_field(field),\n ),\n )\n for name, field in model_fields.items()\n if name in fields_set\n ]\n\n wrapped = _wrap_dataclass(cls)\n extra_fields = cast(List[dataclasses.Field], _get_fields(wrapped))\n private_fields = _get_private_fields(wrapped)\n\n all_fields.extend(\n (\n (\n field.name,\n field.type,\n field,\n )\n for field in extra_fields + private_fields\n )\n )\n\n # Sort fields so that fields with missing defaults go first\n # because dataclasses require that fields with no defaults are defined\n # first\n missing_default = []\n has_default = []\n for field in all_fields:\n if field[2].default is dataclasses.MISSING:\n missing_default.append(field)\n else:\n has_default.append(field)\n\n sorted_fields = missing_default + has_default\n\n cls = dataclasses.make_dataclass(\n cls.__name__,\n sorted_fields,\n )\n\n _process_type(\n cls,\n name=name,\n is_input=is_input,\n is_interface=is_interface,\n description=description,\n federation=federation,\n )\n\n model._strawberry_type = cls # type: ignore\n cls._pydantic_type = model # type: ignore\n\n def from_pydantic(instance: Any, extra: Dict[str, Any] = None) -> Any:\n return convert_pydantic_model_to_strawberry_class(\n cls=cls, model_instance=instance, extra=extra\n )\n\n def to_pydantic(self) -> Any:\n instance_kwargs = dataclasses.asdict(self)\n\n return model(**instance_kwargs)\n\n cls.from_pydantic = staticmethod(from_pydantic)\n cls.to_pydantic = to_pydantic\n\n return cls\n\n return wrap\n\n\ninput = partial(type, is_input=True)\n", "path": "strawberry/experimental/pydantic/object_type.py"}]} | 2,810 | 600 |
gh_patches_debug_3764 | rasdani/github-patches | git_diff | quantumlib__Cirq-2296 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug in supremacy test circuit
I ran the function
```
generate_boixo_2018_supremacy_circuits_v2_bristlecone(
n_rows: int, cz_depth: int, seed: int) -> circuits.Circuit
```
in `cirq.experiments.google_v2_supremacy_circuit` with `n_rows = 1` and got trapped in an infinite loop.
I think this is because when we have `n_rows = 1`, `_make_cz_layer` would never return any CZ gate, since there's only one qubit; thus in `_add_cz_layer`, the loop
```
while not cz_layer:
qubits = cast(Iterable[devices.GridQubit], circuit.all_qubits())
cz_layer = list(_make_cz_layer(qubits, layer_index))
layer_index += 1
```
would never end.
My suggestion would be to change `assert 1 <= n_rows <= 11` in `generate_boixo_2018_supremacy_circuits_v2_bristlecone` to `assert 2 <= n_rows <= 11`, since it does not make any sense to have a one-qubit CZ layer in the first place.
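For illustration, a minimal call that reproduces the hang (a sketch only; the `cz_depth` and `seed` values are arbitrary):
```python
# Sketch of the reported hang: this call never returns for n_rows=1, because
# _add_cz_layer keeps searching for a CZ layer that can never be produced.
from cirq.experiments import google_v2_supremacy_circuit as supremacy

circuit = supremacy.generate_boixo_2018_supremacy_circuits_v2_bristlecone(
    n_rows=1, cz_depth=8, seed=0)
```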
</issue>
<code>
[start of cirq/experiments/google_v2_supremacy_circuit.py]
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import random
16 from typing import Callable, Iterable, TypeVar, cast, Sequence
17
18 from cirq.circuits import InsertStrategy
19 from cirq import circuits, devices, google, ops
20
21
22 def generate_boixo_2018_supremacy_circuits_v2(
23 qubits: Iterable[devices.GridQubit], cz_depth: int,
24 seed: int) -> circuits.Circuit:
25 """
26 Generates Google Random Circuits v2 as in github.com/sboixo/GRCS cz_v2.
27 See also https://arxiv.org/abs/1807.10749
28
29 Args:
30 qubits: qubit grid in which to generate the circuit.
31 cz_depth: number of layers with CZ gates.
32 seed: seed for the random instance.
33
34 Returns:
35 A circuit corresponding to instance
36 inst_{n_rows}x{n_cols}_{cz_depth+1}_{seed}
37
38 The mapping of qubits is cirq.GridQubit(j,k) -> q[j*n_cols+k]
39 (as in the QASM mapping)
40 """
41
42 non_diagonal_gates = [ops.pauli_gates.X**(1/2), ops.pauli_gates.Y**(1/2)]
43 rand_gen = random.Random(seed).random
44
45 circuit = circuits.Circuit()
46
47 # Add an initial moment of Hadamards
48 circuit.append(ops.common_gates.H(qubit) for qubit in qubits)
49
50 layer_index = 0
51 if cz_depth:
52 layer_index = _add_cz_layer(layer_index, circuit)
53 # In the first moment, add T gates when possible
54 for qubit in qubits:
55 if not circuit.operation_at(qubit, 1):
56 circuit.append(ops.common_gates.T(qubit),
57 strategy=InsertStrategy.EARLIEST)
58
59 for moment_index in range(2, cz_depth+1):
60 layer_index = _add_cz_layer(layer_index, circuit)
61 # Add single qubit gates in the same moment
62 for qubit in qubits:
63 if not circuit.operation_at(qubit, moment_index):
64 last_op = circuit.operation_at(qubit, moment_index-1)
65 if last_op:
66 gate = cast(ops.GateOperation, last_op).gate
67 # Add a random non diagonal gate after a CZ
68 if gate == ops.CZ:
69 circuit.append(_choice(rand_gen,
70 non_diagonal_gates).on(qubit),
71 strategy=InsertStrategy.EARLIEST)
72 # Add a T gate after a non diagonal gate
73 elif not gate == ops.T:
74 circuit.append(ops.common_gates.T(qubit),
75 strategy=InsertStrategy.EARLIEST)
76
77 # Add a final moment of Hadamards
78 circuit.append([ops.common_gates.H(qubit) for qubit in qubits],
79 strategy=InsertStrategy.NEW_THEN_INLINE)
80
81 return circuit
82
83
84 def generate_boixo_2018_supremacy_circuits_v2_grid(n_rows: int, n_cols: int,
85 cz_depth: int, seed: int
86 ) -> circuits.Circuit:
87 """
88 Generates Google Random Circuits v2 as in github.com/sboixo/GRCS cz_v2.
89 See also https://arxiv.org/abs/1807.10749
90
91 Args:
92 n_rows: number of rows of a 2D lattice.
93 n_cols: number of columns.
94 cz_depth: number of layers with CZ gates.
95 seed: seed for the random instance.
96
97 Returns:
98 A circuit corresponding to instance
99 inst_{n_rows}x{n_cols}_{cz_depth+1}_{seed}
100
101 The mapping of qubits is cirq.GridQubit(j,k) -> q[j*n_cols+k]
102 (as in the QASM mapping)
103 """
104 qubits = [devices.GridQubit(i, j) for i in range(n_rows)
105 for j in range(n_cols)]
106 return generate_boixo_2018_supremacy_circuits_v2(qubits, cz_depth, seed)
107
108
109 def generate_boixo_2018_supremacy_circuits_v2_bristlecone(
110 n_rows: int, cz_depth: int, seed: int) -> circuits.Circuit:
111 """
112 Generates Google Random Circuits v2 in Bristlecone.
113 See also https://arxiv.org/abs/1807.10749
114
115 Args:
116 n_rows: number of rows in a Bristlecone lattice.
117 Note that we do not include single qubit corners.
118 cz_depth: number of layers with CZ gates.
119 seed: seed for the random instance.
120
121 Returns:
122 A circuit with given size and seed.
123 """
124 def get_qubits(n_rows):
125 def count_neighbors(qubits, qubit):
126 """Counts the qubits that the given qubit can interact with."""
127 possibles = [
128 devices.GridQubit(qubit.row + 1, qubit.col),
129 devices.GridQubit(qubit.row - 1, qubit.col),
130 devices.GridQubit(qubit.row, qubit.col + 1),
131 devices.GridQubit(qubit.row, qubit.col - 1),
132 ]
133 return len(list(e for e in possibles if e in qubits))
134
135 assert 1 <= n_rows <= 11
136 max_row = n_rows - 1
137 dev = google.Bristlecone
138 # we need a consistent order of qubits
139 qubits = list(dev.qubits)
140 qubits.sort()
141 qubits = [q for q in qubits
142 if q.row <= max_row and q.row + q.col < n_rows + 6
143 and q.row - q.col < n_rows - 5]
144 qubits = [q for q in qubits if count_neighbors(qubits, q) > 1]
145 return qubits
146
147 qubits = get_qubits(n_rows)
148 return generate_boixo_2018_supremacy_circuits_v2(qubits, cz_depth, seed)
149
150
151 T = TypeVar('T')
152
153
154 def _choice(rand_gen: Callable[[], float], sequence: Sequence[T]) -> T:
155 """Choose a random element from a non-empty sequence.
156
157 Use this instead of random.choice, with random.random(), for reproducibility
158 """
159 return sequence[int(rand_gen() * len(sequence))]
160
161
162 def _add_cz_layer(layer_index: int, circuit: circuits.Circuit) -> int:
163 cz_layer = None
164 while not cz_layer:
165 qubits = cast(Iterable[devices.GridQubit], circuit.all_qubits())
166 cz_layer = list(_make_cz_layer(qubits, layer_index))
167 layer_index += 1
168
169 circuit.append(cz_layer, strategy=InsertStrategy.NEW_THEN_INLINE)
170 return layer_index
171
172
173 def _make_cz_layer(qubits: Iterable[devices.GridQubit], layer_index: int
174 ) -> Iterable[ops.Operation]:
175 """
176 Each layer index corresponds to a shift/transpose of this CZ pattern:
177
178 ●───● ● ● ●───● ● ● . . .
179
180 ● ● ●───● ● ● ●───● . . .
181
182 ●───● ● ● ●───● ● ● . . .
183
184 ● ● ●───● ● ● ●───● . . .
185
186 ●───● ● ● ●───● ● ● . . .
187
188 ● ● ●───● ● ● ●───● . . .
189 . . . . . . . . .
190 . . . . . . . . .
191 . . . . . . . . .
192
193 Labelled edges, showing the exact index-to-CZs mapping (mod 8):
194
195 ●─0─●─2─●─4─●─6─●─0─. . .
196 3│ 7│ 3│ 7│ 3│
197 ●─4─●─6─●─0─●─2─●─4─. . .
198 1│ 5│ 1│ 5│ 1│
199 ●─0─●─2─●─4─●─6─●─0─. . .
200 7│ 3│ 7│ 3│ 7│
201 ●─4─●─6─●─0─●─2─●─4─. . .
202 5│ 1│ 5│ 1│ 5│
203 ●─0─●─2─●─4─●─6─●─0─. . .
204 3│ 7│ 3│ 7│ 3│
205 . . . . . .
206 . . . . . .
207 . . . . . .
208
209 Note that, for small devices, some layers will be empty because the layer
210 only contains edges not present on the device.
211 """
212
213 # map to an internal layer index to match the cycle order of public circuits
214 layer_index_map = [0, 3, 2, 1, 4, 7, 6, 5]
215 internal_layer_index = layer_index_map[layer_index % 8]
216
217 dir_row = internal_layer_index % 2
218 dir_col = 1 - dir_row
219 shift = (internal_layer_index >> 1) % 4
220
221 for q in qubits:
222 q2 = devices.GridQubit(q.row + dir_row, q.col + dir_col)
223 if q2 not in qubits:
224 continue # This edge isn't on the device.
225 if (q.row * (2 - dir_row) + q.col * (2 - dir_col)) % 4 != shift:
226 continue # No CZ along this edge for this layer.
227
228 yield ops.common_gates.CZ(q, q2)
229
[end of cirq/experiments/google_v2_supremacy_circuit.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cirq/experiments/google_v2_supremacy_circuit.py b/cirq/experiments/google_v2_supremacy_circuit.py
--- a/cirq/experiments/google_v2_supremacy_circuit.py
+++ b/cirq/experiments/google_v2_supremacy_circuit.py
@@ -132,7 +132,7 @@
]
return len(list(e for e in possibles if e in qubits))
- assert 1 <= n_rows <= 11
+ assert 2 <= n_rows <= 11
max_row = n_rows - 1
dev = google.Bristlecone
# we need a consistent order of qubits
| {"golden_diff": "diff --git a/cirq/experiments/google_v2_supremacy_circuit.py b/cirq/experiments/google_v2_supremacy_circuit.py\n--- a/cirq/experiments/google_v2_supremacy_circuit.py\n+++ b/cirq/experiments/google_v2_supremacy_circuit.py\n@@ -132,7 +132,7 @@\n ]\n return len(list(e for e in possibles if e in qubits))\n \n- assert 1 <= n_rows <= 11\n+ assert 2 <= n_rows <= 11\n max_row = n_rows - 1\n dev = google.Bristlecone\n # we need a consistent order of qubits\n", "issue": "bug in supremacy test circuit\nI ran the function\r\n```\r\ngenerate_boixo_2018_supremacy_circuits_v2_bristlecone(\r\n n_rows: int, cz_depth: int, seed: int) -> circuits.Circuit\r\n```\r\nin `cirq.experiments.google_v2_supremacy_circuit` with `n_rows = 1` and got trapped in infinite loops.\r\n\r\nI think this is because when we have `n_rows = 1`, `_make_cz_layer` would never return any cz gate, since there's only one qubit, thus in ` _add_cz_layer`, the loop\r\n```\r\nwhile not cz_layer:\r\n qubits = cast(Iterable[devices.GridQubit], circuit.all_qubits())\r\n cz_layer = list(_make_cz_layer(qubits, layer_index))\r\n layer_index += 1\r\n```\r\nwould never end.\r\n\r\nMy suggestion would be change `assert 1 <= n_rows <= 11` in `generate_boixo_2018_supremacy_circuits_v2_bristlecone` to `assert 2 <= n_rows <= 11`, since it does not make anysense to have one-qubit cz layer in the first place.\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport random\nfrom typing import Callable, Iterable, TypeVar, cast, Sequence\n\nfrom cirq.circuits import InsertStrategy\nfrom cirq import circuits, devices, google, ops\n\n\ndef generate_boixo_2018_supremacy_circuits_v2(\n qubits: Iterable[devices.GridQubit], cz_depth: int,\n seed: int) -> circuits.Circuit:\n \"\"\"\n Generates Google Random Circuits v2 as in github.com/sboixo/GRCS cz_v2.\n See also https://arxiv.org/abs/1807.10749\n\n Args:\n qubits: qubit grid in which to generate the circuit.\n cz_depth: number of layers with CZ gates.\n seed: seed for the random instance.\n\n Returns:\n A circuit corresponding to instance\n inst_{n_rows}x{n_cols}_{cz_depth+1}_{seed}\n\n The mapping of qubits is cirq.GridQubit(j,k) -> q[j*n_cols+k]\n (as in the QASM mapping)\n \"\"\"\n\n non_diagonal_gates = [ops.pauli_gates.X**(1/2), ops.pauli_gates.Y**(1/2)]\n rand_gen = random.Random(seed).random\n\n circuit = circuits.Circuit()\n\n # Add an initial moment of Hadamards\n circuit.append(ops.common_gates.H(qubit) for qubit in qubits)\n\n layer_index = 0\n if cz_depth:\n layer_index = _add_cz_layer(layer_index, circuit)\n # In the first moment, add T gates when possible\n for qubit in qubits:\n if not circuit.operation_at(qubit, 1):\n circuit.append(ops.common_gates.T(qubit),\n strategy=InsertStrategy.EARLIEST)\n\n for moment_index in range(2, cz_depth+1):\n layer_index = _add_cz_layer(layer_index, circuit)\n # Add single qubit gates in the same moment\n for qubit in qubits:\n if not circuit.operation_at(qubit, 
moment_index):\n last_op = circuit.operation_at(qubit, moment_index-1)\n if last_op:\n gate = cast(ops.GateOperation, last_op).gate\n # Add a random non diagonal gate after a CZ\n if gate == ops.CZ:\n circuit.append(_choice(rand_gen,\n non_diagonal_gates).on(qubit),\n strategy=InsertStrategy.EARLIEST)\n # Add a T gate after a non diagonal gate\n elif not gate == ops.T:\n circuit.append(ops.common_gates.T(qubit),\n strategy=InsertStrategy.EARLIEST)\n\n # Add a final moment of Hadamards\n circuit.append([ops.common_gates.H(qubit) for qubit in qubits],\n strategy=InsertStrategy.NEW_THEN_INLINE)\n\n return circuit\n\n\ndef generate_boixo_2018_supremacy_circuits_v2_grid(n_rows: int, n_cols: int,\n cz_depth: int, seed: int\n ) -> circuits.Circuit:\n \"\"\"\n Generates Google Random Circuits v2 as in github.com/sboixo/GRCS cz_v2.\n See also https://arxiv.org/abs/1807.10749\n\n Args:\n n_rows: number of rows of a 2D lattice.\n n_cols: number of columns.\n cz_depth: number of layers with CZ gates.\n seed: seed for the random instance.\n\n Returns:\n A circuit corresponding to instance\n inst_{n_rows}x{n_cols}_{cz_depth+1}_{seed}\n\n The mapping of qubits is cirq.GridQubit(j,k) -> q[j*n_cols+k]\n (as in the QASM mapping)\n \"\"\"\n qubits = [devices.GridQubit(i, j) for i in range(n_rows)\n for j in range(n_cols)]\n return generate_boixo_2018_supremacy_circuits_v2(qubits, cz_depth, seed)\n\n\ndef generate_boixo_2018_supremacy_circuits_v2_bristlecone(\n n_rows: int, cz_depth: int, seed: int) -> circuits.Circuit:\n \"\"\"\n Generates Google Random Circuits v2 in Bristlecone.\n See also https://arxiv.org/abs/1807.10749\n\n Args:\n n_rows: number of rows in a Bristlecone lattice.\n Note that we do not include single qubit corners.\n cz_depth: number of layers with CZ gates.\n seed: seed for the random instance.\n\n Returns:\n A circuit with given size and seed.\n \"\"\"\n def get_qubits(n_rows):\n def count_neighbors(qubits, qubit):\n \"\"\"Counts the qubits that the given qubit can interact with.\"\"\"\n possibles = [\n devices.GridQubit(qubit.row + 1, qubit.col),\n devices.GridQubit(qubit.row - 1, qubit.col),\n devices.GridQubit(qubit.row, qubit.col + 1),\n devices.GridQubit(qubit.row, qubit.col - 1),\n ]\n return len(list(e for e in possibles if e in qubits))\n\n assert 1 <= n_rows <= 11\n max_row = n_rows - 1\n dev = google.Bristlecone\n # we need a consistent order of qubits\n qubits = list(dev.qubits)\n qubits.sort()\n qubits = [q for q in qubits\n if q.row <= max_row and q.row + q.col < n_rows + 6\n and q.row - q.col < n_rows - 5]\n qubits = [q for q in qubits if count_neighbors(qubits, q) > 1]\n return qubits\n\n qubits = get_qubits(n_rows)\n return generate_boixo_2018_supremacy_circuits_v2(qubits, cz_depth, seed)\n\n\nT = TypeVar('T')\n\n\ndef _choice(rand_gen: Callable[[], float], sequence: Sequence[T]) -> T:\n \"\"\"Choose a random element from a non-empty sequence.\n\n Use this instead of random.choice, with random.random(), for reproducibility\n \"\"\"\n return sequence[int(rand_gen() * len(sequence))]\n\n\ndef _add_cz_layer(layer_index: int, circuit: circuits.Circuit) -> int:\n cz_layer = None\n while not cz_layer:\n qubits = cast(Iterable[devices.GridQubit], circuit.all_qubits())\n cz_layer = list(_make_cz_layer(qubits, layer_index))\n layer_index += 1\n\n circuit.append(cz_layer, strategy=InsertStrategy.NEW_THEN_INLINE)\n return layer_index\n\n\ndef _make_cz_layer(qubits: Iterable[devices.GridQubit], layer_index: int\n ) -> Iterable[ops.Operation]:\n \"\"\"\n Each layer index corresponds 
to a shift/transpose of this CZ pattern:\n\n \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf . . .\n\n \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf . . .\n\n \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf . . .\n\n \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf . . .\n\n \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf . . .\n\n \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf . . .\n . . . . . . . . .\n . . . . . . . . .\n . . . . . . . . .\n\n Labelled edges, showing the exact index-to-CZs mapping (mod 8):\n\n \u25cf\u25000\u2500\u25cf\u25002\u2500\u25cf\u25004\u2500\u25cf\u25006\u2500\u25cf\u25000\u2500. . .\n 3\u2502 7\u2502 3\u2502 7\u2502 3\u2502\n \u25cf\u25004\u2500\u25cf\u25006\u2500\u25cf\u25000\u2500\u25cf\u25002\u2500\u25cf\u25004\u2500. . .\n 1\u2502 5\u2502 1\u2502 5\u2502 1\u2502\n \u25cf\u25000\u2500\u25cf\u25002\u2500\u25cf\u25004\u2500\u25cf\u25006\u2500\u25cf\u25000\u2500. . .\n 7\u2502 3\u2502 7\u2502 3\u2502 7\u2502\n \u25cf\u25004\u2500\u25cf\u25006\u2500\u25cf\u25000\u2500\u25cf\u25002\u2500\u25cf\u25004\u2500. . .\n 5\u2502 1\u2502 5\u2502 1\u2502 5\u2502\n \u25cf\u25000\u2500\u25cf\u25002\u2500\u25cf\u25004\u2500\u25cf\u25006\u2500\u25cf\u25000\u2500. . .\n 3\u2502 7\u2502 3\u2502 7\u2502 3\u2502\n . . . . . .\n . . . . . .\n . . . . . .\n\n Note that, for small devices, some layers will be empty because the layer\n only contains edges not present on the device.\n \"\"\"\n\n # map to an internal layer index to match the cycle order of public circuits\n layer_index_map = [0, 3, 2, 1, 4, 7, 6, 5]\n internal_layer_index = layer_index_map[layer_index % 8]\n\n dir_row = internal_layer_index % 2\n dir_col = 1 - dir_row\n shift = (internal_layer_index >> 1) % 4\n\n for q in qubits:\n q2 = devices.GridQubit(q.row + dir_row, q.col + dir_col)\n if q2 not in qubits:\n continue # This edge isn't on the device.\n if (q.row * (2 - dir_row) + q.col * (2 - dir_col)) % 4 != shift:\n continue # No CZ along this edge for this layer.\n\n yield ops.common_gates.CZ(q, q2)\n", "path": "cirq/experiments/google_v2_supremacy_circuit.py"}]} | 3,763 | 153 |
gh_patches_debug_5 | rasdani/github-patches | git_diff | freedomofpress__securedrop-1117 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update kernel module blacklist
During an installation last week, we encountered an issue with the kernel module blacklist. The install was using the new generation of Intel NUCs ([NUC5i5RYK](http://www.amazon.com/dp/B00SD9ISIQ) and [NUC5i5RYH](http://www.amazon.com/dp/B00SD9IS1S/)). Unlike the previous generation of NUCs, which did not include wireless networking hardware by default, the new generation includes wireless networking hardware for Wifi and Bluetooth on the motherboard.
This means that Ubuntu running on the servers not only loaded the high-level kernel modules for wifi and bluetooth support (`iwlwifi` and `bluetooth`), but also loaded modules necessary for support on the specific (included) hardware: `iwlmvm` and `btusb`. When the `remove kernel modules` Ansible role ran, it failed with an error because it could not remove the top-level modules without removing their dependencies first.
A quickfix to get this working on the new hardware was to change `disabled_kernel_modules` in `group_vars/securedrop.yml` from:
``` yml
disabled_kernel_modules:
- bluetooth
- iwlwifi
```
to:
``` yml
disabled_kernel_modules:
- btusb
- bluetooth
- iwlmvm
- iwlwifi
```
The order of the modules is important! We need to make sure the dependencies are removed prior to the target modules that depend on them.
This list is also likely specific to the new generation of Intel NUCs. If we want to support a wider variety of hardware, we may want to try being smart about removing kernel modules and their dependencies, e.g. something akin to this technique from [Stack Exchange](https://askubuntu.com/questions/317230/how-can-i-temporarily-disable-a-kernel-module).
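A rough Python sketch of that dependency-aware idea (an illustrative assumption, not the Ansible role itself; it reads holder information from `/sys/module`, so it assumes a Linux host):
```python
# Sketch only: order modules so that holders (dependants) are removed before
# the modules they depend on, e.g. btusb before bluetooth, iwlmvm before iwlwifi.
from pathlib import Path

def removal_order(modules):
    ordered = []

    def holders(name):
        d = Path("/sys/module") / name / "holders"
        return [p.name for p in d.iterdir()] if d.is_dir() else []

    def visit(name):
        for holder in holders(name):
            visit(holder)
        if name not in ordered:
            ordered.append(name)

    for module in modules:
        visit(module)
    return ordered

# e.g. removal_order(["bluetooth", "iwlwifi"]) might yield
# ["btusb", "bluetooth", "iwlmvm", "iwlwifi"] on the new NUCs
```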
Finally, we need to make sure this updated module blacklist still works on the old hardware as well.
</issue>
<code>
[start of securedrop/version.py]
1 __version__ = '0.3.4'
2
[end of securedrop/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/securedrop/version.py b/securedrop/version.py
--- a/securedrop/version.py
+++ b/securedrop/version.py
@@ -1 +1 @@
-__version__ = '0.3.4'
+__version__ = '0.3.5'
| {"golden_diff": "diff --git a/securedrop/version.py b/securedrop/version.py\n--- a/securedrop/version.py\n+++ b/securedrop/version.py\n@@ -1 +1 @@\n-__version__ = '0.3.4'\n+__version__ = '0.3.5'\n", "issue": "Update kernel module blacklist\nDuring an installation last week, we encountered an issue with the kernel module blacklist. The install was using the new generation of Intel NUCs ([NUC5i5RYK](http://www.amazon.com/dp/B00SD9ISIQ) and [NUC5i5RYH](http://www.amazon.com/dp/B00SD9IS1S/)). Unlike the previous generation of NUCs, which did not include wireless networking hardware by default, the new generation includes wireless networking hardware for Wifi and Bluetooth on the motherboard.\n\nThis means that Ubuntu running on the servers not only loaded the high-level kernel modules for wifi and bluetooth support (`iwlwifi` and `bluetooth`), it also loaded modules necessary for support on the specific (included) hardware: `iwlmvm` and `btusb`. When the `remove kernel modules` Ansible role ran, it failed with an error because it could not remove the top-level modules without removing their dependencies first.\n\nA quickfix to get this working on the new hardware was to change `disabled_kernel_modules` in `group_vars/securedrop.yml` from:\n\n``` yml\ndisabled_kernel_modules:\n - bluetooth\n - iwlwifi\n```\n\nto:\n\n``` yml\ndisabled_kernel_modules:\n - btusb\n - bluetooth\n - iwlmvm\n - iwlwifi\n```\n\nThe order of the modules is important! We need to make sure the the dependencies are removed prior to the target modules that depend on them.\n\nThis list is also likely specific to the new generation of Intel NUCs. If we want to support a wider variety of hardware, we may want to try being smart about removing kernel modules and their dependencies, e.g. something akin to this technique from [Stack Exchange](https://askubuntu.com/questions/317230/how-can-i-temporarily-disable-a-kernel-module).\n\nFinally, we need to make sure this updated module blacklist still works on the old hardware as well.\n\n", "before_files": [{"content": "__version__ = '0.3.4'\n", "path": "securedrop/version.py"}]} | 963 | 62 |
gh_patches_debug_6718 | rasdani/github-patches | git_diff | getmoto__moto-556 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix S3 issues with botocore 1.3.29
botocore 1.3.29 breaks s3 in tests
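A minimal round-trip of the kind that started failing (a sketch; bucket and key names are placeholders, and it assumes boto3 with botocore 1.3.29 installed):
```python
import boto3
from moto import mock_s3

@mock_s3
def test_bucket_roundtrip():
    s3 = boto3.client("s3", region_name="us-east-1")
    s3.create_bucket(Bucket="mybucket")
    s3.put_object(Bucket="mybucket", Key="the-key", Body=b"some data")
    # breaks under botocore 1.3.29
    assert s3.get_object(Bucket="mybucket", Key="the-key")["Body"].read() == b"some data"

test_bucket_roundtrip()
```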
</issue>
<code>
[start of moto/__init__.py]
1 from __future__ import unicode_literals
2 import logging
3 logging.getLogger('boto').setLevel(logging.CRITICAL)
4
5 __title__ = 'moto'
6 __version__ = '0.4.22'
7
8 from .autoscaling import mock_autoscaling # flake8: noqa
9 from .awslambda import mock_lambda # flake8: noqa
10 from .cloudformation import mock_cloudformation # flake8: noqa
11 from .cloudwatch import mock_cloudwatch # flake8: noqa
12 from .datapipeline import mock_datapipeline # flake8: noqa
13 from .dynamodb import mock_dynamodb # flake8: noqa
14 from .dynamodb2 import mock_dynamodb2 # flake8: noqa
15 from .ec2 import mock_ec2 # flake8: noqa
16 from .ecs import mock_ecs # flake8: noqa
17 from .elb import mock_elb # flake8: noqa
18 from .emr import mock_emr # flake8: noqa
19 from .glacier import mock_glacier # flake8: noqa
20 from .iam import mock_iam # flake8: noqa
21 from .kinesis import mock_kinesis # flake8: noqa
22 from .kms import mock_kms # flake8: noqa
23 from .rds import mock_rds # flake8: noqa
24 from .rds2 import mock_rds2 # flake8: noqa
25 from .redshift import mock_redshift # flake8: noqa
26 from .s3 import mock_s3 # flake8: noqa
27 from .s3bucket_path import mock_s3bucket_path # flake8: noqa
28 from .ses import mock_ses # flake8: noqa
29 from .sns import mock_sns # flake8: noqa
30 from .sqs import mock_sqs # flake8: noqa
31 from .sts import mock_sts # flake8: noqa
32 from .route53 import mock_route53 # flake8: noqa
33 from .swf import mock_swf # flake8: noqa
34
[end of moto/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/moto/__init__.py b/moto/__init__.py
--- a/moto/__init__.py
+++ b/moto/__init__.py
@@ -31,3 +31,13 @@
from .sts import mock_sts # flake8: noqa
from .route53 import mock_route53 # flake8: noqa
from .swf import mock_swf # flake8: noqa
+
+
+try:
+ # Need to monkey-patch botocore requests back to underlying urllib3 classes
+ from botocore.awsrequest import HTTPSConnectionPool, HTTPConnectionPool, HTTPConnection, VerifiedHTTPSConnection
+except ImportError:
+ pass
+else:
+ HTTPSConnectionPool.ConnectionCls = VerifiedHTTPSConnection
+ HTTPConnectionPool.ConnectionCls = HTTPConnection
| {"golden_diff": "diff --git a/moto/__init__.py b/moto/__init__.py\n--- a/moto/__init__.py\n+++ b/moto/__init__.py\n@@ -31,3 +31,13 @@\n from .sts import mock_sts # flake8: noqa\n from .route53 import mock_route53 # flake8: noqa\n from .swf import mock_swf # flake8: noqa\n+\n+\n+try:\n+ # Need to monkey-patch botocore requests back to underlying urllib3 classes\n+ from botocore.awsrequest import HTTPSConnectionPool, HTTPConnectionPool, HTTPConnection, VerifiedHTTPSConnection\n+except ImportError:\n+ pass\n+else:\n+ HTTPSConnectionPool.ConnectionCls = VerifiedHTTPSConnection\n+ HTTPConnectionPool.ConnectionCls = HTTPConnection\n", "issue": "Fix S3 issues with botocore 1.3.29\nbotocore 1.3.29 breaks s3 in tests\n\n", "before_files": [{"content": "from __future__ import unicode_literals\nimport logging\nlogging.getLogger('boto').setLevel(logging.CRITICAL)\n\n__title__ = 'moto'\n__version__ = '0.4.22'\n\nfrom .autoscaling import mock_autoscaling # flake8: noqa\nfrom .awslambda import mock_lambda # flake8: noqa\nfrom .cloudformation import mock_cloudformation # flake8: noqa\nfrom .cloudwatch import mock_cloudwatch # flake8: noqa\nfrom .datapipeline import mock_datapipeline # flake8: noqa\nfrom .dynamodb import mock_dynamodb # flake8: noqa\nfrom .dynamodb2 import mock_dynamodb2 # flake8: noqa\nfrom .ec2 import mock_ec2 # flake8: noqa\nfrom .ecs import mock_ecs # flake8: noqa\nfrom .elb import mock_elb # flake8: noqa\nfrom .emr import mock_emr # flake8: noqa\nfrom .glacier import mock_glacier # flake8: noqa\nfrom .iam import mock_iam # flake8: noqa\nfrom .kinesis import mock_kinesis # flake8: noqa\nfrom .kms import mock_kms # flake8: noqa\nfrom .rds import mock_rds # flake8: noqa\nfrom .rds2 import mock_rds2 # flake8: noqa\nfrom .redshift import mock_redshift # flake8: noqa\nfrom .s3 import mock_s3 # flake8: noqa\nfrom .s3bucket_path import mock_s3bucket_path # flake8: noqa\nfrom .ses import mock_ses # flake8: noqa\nfrom .sns import mock_sns # flake8: noqa\nfrom .sqs import mock_sqs # flake8: noqa\nfrom .sts import mock_sts # flake8: noqa\nfrom .route53 import mock_route53 # flake8: noqa\nfrom .swf import mock_swf # flake8: noqa\n", "path": "moto/__init__.py"}]} | 1,094 | 180 |
gh_patches_debug_4681 | rasdani/github-patches | git_diff | awslabs__gluonts-1159 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Multiprocessing hangs when num_workers > len(dataset)
## Description
I'm trying to serialize a predictor trained on multiple cores. When calling the `serialize` method, nothing happens.
Running the same code, but without specifying `num_workers`, it works as expected.
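One way such a hang can arise (an assumption, based on the `cyclic` helper shown in the code below): a worker that receives an empty dataset shard spins forever without yielding anything.
```python
# Minimal sketch: cyclic() as currently written never terminates on an empty iterable.
def cyclic(it):
    while True:
        yield from it      # yields nothing when `it` is empty ...

next(cyclic([]))           # ... so this call hangs instead of raising StopIteration
```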
## To Reproduce
```python
from pathlib import Path
from typing import Optional
from gluonts.dataset.multivariate_grouper import MultivariateGrouper
from gluonts.dataset.common import TrainDatasets
from gluonts.model.gpvar import GPVAREstimator
from gluonts.dataset.repository.datasets import get_dataset
from gluonts.mx.trainer import Trainer
def load_multivariate_dataset(dataset_name: str, target_dim: Optional[int] = None):
ds = get_dataset(dataset_name)
if target_dim is None:
target_dim = len(ds.train)
grouper = MultivariateGrouper(max_target_dim=target_dim)
meta = ds.metadata
meta.feat_static_cat[0].cardinality = target_dim
return (TrainDatasets(
metadata=meta,
train=grouper(ds.train),
test=grouper(ds.test)
), target_dim)
ds, target_dim = load_multivariate_dataset("exchange_rate")
metadata = ds.metadata
estimator = GPVAREstimator(
prediction_length=metadata.prediction_length,
freq=metadata.freq,
target_dim=target_dim,
trainer=Trainer(
epochs=2,
num_batches_per_epoch=10,
batch_size=8,
),
)
predictor = estimator.train(training_data=ds.train, num_workers=2)
predictor.serialize(Path("/tmp"))
```
## Error message or code output
Nothing happens.
## Environment
- Operating system: Mac OSX 10.15.7
- Python version: 3.6.12
- GluonTS version: 0.6.0
- MXNet version: 1.7.0post1
</issue>
<code>
[start of src/gluonts/itertools.py]
1 # Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License").
4 # You may not use this file except in compliance with the License.
5 # A copy of the License is located at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # or in the "license" file accompanying this file. This file is distributed
10 # on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
11 # express or implied. See the License for the specific language governing
12 # permissions and limitations under the License.
13
14 from typing import Iterable, Iterator, List, TypeVar
15 import itertools
16 import random
17
18 T = TypeVar("T")
19
20
21 def cyclic(it):
22 """Like `itertools.cycle`, but does not store the data."""
23
24 while True:
25 yield from it
26
27
28 def batcher(iterable: Iterable[T], batch_size: int) -> Iterator[List[T]]:
29 """Groups elements from `iterable` into batches of size `batch_size`.
30
31 >>> list(batcher("ABCDEFG", 3))
32 [['A', 'B', 'C'], ['D', 'E', 'F'], ['G']]
33
34 Unlike the grouper proposed in the documentation of itertools, `batcher`
35 doesn't fill up missing values.
36 """
37 it: Iterator[T] = iter(iterable)
38
39 def get_batch():
40 return list(itertools.islice(it, batch_size))
41
42 # has an empty list so that we have a 2D array for sure
43 return iter(get_batch, [])
44
45
46 class cached(Iterable):
47 """
48 An iterable wrapper, which caches values in a list the first time it is iterated.
49
50 The primary use-case for this is to avoid re-computing the element of the sequence,
51 in case the inner iterable does it on demand.
52
53 This should be used to wrap deterministic iterables, i.e. iterables where the data
54 generation process is not random, and that yield the same elements when iterated
55 multiple times.
56 """
57
58 def __init__(self, iterable: Iterable) -> None:
59 self.iterable = iterable
60 self.cache = None
61
62 def __iter__(self):
63 if self.cache is None:
64 self.cache = []
65 for element in self.iterable:
66 yield element
67 self.cache.append(element)
68 else:
69 yield from self.cache
70
71
72 def pseudo_shuffled(iterator: Iterator, shuffle_buffer_length: int):
73 """
74 An iterator that yields item from a given iterator in a pseudo-shuffled order.
75 """
76 shuffle_buffer = []
77
78 for element in iterator:
79 shuffle_buffer.append(element)
80 if len(shuffle_buffer) >= shuffle_buffer_length:
81 yield shuffle_buffer.pop(random.randrange(len(shuffle_buffer)))
82
83 while shuffle_buffer:
84 yield shuffle_buffer.pop(random.randrange(len(shuffle_buffer)))
85
[end of src/gluonts/itertools.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/gluonts/itertools.py b/src/gluonts/itertools.py
--- a/src/gluonts/itertools.py
+++ b/src/gluonts/itertools.py
@@ -21,8 +21,13 @@
def cyclic(it):
"""Like `itertools.cycle`, but does not store the data."""
+ at_least_one = False
while True:
- yield from it
+ for el in it:
+ at_least_one = True
+ yield el
+ if not at_least_one:
+ break
def batcher(iterable: Iterable[T], batch_size: int) -> Iterator[List[T]]:
| {"golden_diff": "diff --git a/src/gluonts/itertools.py b/src/gluonts/itertools.py\n--- a/src/gluonts/itertools.py\n+++ b/src/gluonts/itertools.py\n@@ -21,8 +21,13 @@\n def cyclic(it):\n \"\"\"Like `itertools.cycle`, but does not store the data.\"\"\"\n \n+ at_least_one = False\n while True:\n- yield from it\n+ for el in it:\n+ at_least_one = True\n+ yield el\n+ if not at_least_one:\n+ break\n \n \n def batcher(iterable: Iterable[T], batch_size: int) -> Iterator[List[T]]:\n", "issue": "Multiprocessing hangs when num_workers > len(dataset)\n## Description\r\nI'm trying to serialize a predictor trained on multiple cores. When calling the `serialize` method nothing happens.\r\nRunning the same code, but without specifying `num_workers`, it works as expected.\r\n\r\n## To Reproduce\r\n\r\n```python\r\nfrom pathlib import Path\r\nfrom typing import Optional\r\n\r\nfrom gluonts.dataset.multivariate_grouper import MultivariateGrouper\r\nfrom gluonts.dataset.common import TrainDatasets\r\nfrom gluonts.model.gpvar import GPVAREstimator\r\nfrom gluonts.dataset.repository.datasets import get_dataset\r\nfrom gluonts.mx.trainer import Trainer\r\n\r\n\r\ndef load_multivariate_dataset(dataset_name: str, target_dim: Optional[int] = None):\r\n ds = get_dataset(dataset_name)\r\n\r\n if target_dim is None:\r\n target_dim = len(ds.train)\r\n\r\n grouper = MultivariateGrouper(max_target_dim=target_dim)\r\n\r\n meta = ds.metadata\r\n meta.feat_static_cat[0].cardinality = target_dim\r\n\r\n return (TrainDatasets(\r\n metadata=meta,\r\n train=grouper(ds.train),\r\n test=grouper(ds.test)\r\n ), target_dim)\r\n\r\n\r\nds, target_dim = load_multivariate_dataset(\"exchange_rate\")\r\nmetadata = ds.metadata\r\n\r\nestimator = GPVAREstimator(\r\n prediction_length=metadata.prediction_length,\r\n freq=metadata.freq,\r\n target_dim=target_dim,\r\n trainer=Trainer(\r\n epochs=2,\r\n num_batches_per_epoch=10,\r\n batch_size=8,\r\n ),\r\n)\r\n\r\npredictor = estimator.train(training_data=ds.train, num_workers=2)\r\n\r\npredictor.serialize(Path(\"/tmp\"))\r\n\r\n```\r\n\r\n## Error message or code output\r\nNothing happens.\r\n\r\n\r\n## Environment\r\n- Operating system: Mac OSX 10.15.7\r\n- Python version: 3.6.12\r\n- GluonTS version: 0.6.0\r\n- MXNet version: 1.7.0post1\r\n\r\n\n", "before_files": [{"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\").\n# You may not use this file except in compliance with the License.\n# A copy of the License is located at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# or in the \"license\" file accompanying this file. This file is distributed\n# on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n# express or implied. 
See the License for the specific language governing\n# permissions and limitations under the License.\n\nfrom typing import Iterable, Iterator, List, TypeVar\nimport itertools\nimport random\n\nT = TypeVar(\"T\")\n\n\ndef cyclic(it):\n \"\"\"Like `itertools.cycle`, but does not store the data.\"\"\"\n\n while True:\n yield from it\n\n\ndef batcher(iterable: Iterable[T], batch_size: int) -> Iterator[List[T]]:\n \"\"\"Groups elements from `iterable` into batches of size `batch_size`.\n\n >>> list(batcher(\"ABCDEFG\", 3))\n [['A', 'B', 'C'], ['D', 'E', 'F'], ['G']]\n\n Unlike the grouper proposed in the documentation of itertools, `batcher`\n doesn't fill up missing values.\n \"\"\"\n it: Iterator[T] = iter(iterable)\n\n def get_batch():\n return list(itertools.islice(it, batch_size))\n\n # has an empty list so that we have a 2D array for sure\n return iter(get_batch, [])\n\n\nclass cached(Iterable):\n \"\"\"\n An iterable wrapper, which caches values in a list the first time it is iterated.\n\n The primary use-case for this is to avoid re-computing the element of the sequence,\n in case the inner iterable does it on demand.\n\n This should be used to wrap deterministic iterables, i.e. iterables where the data\n generation process is not random, and that yield the same elements when iterated\n multiple times.\n \"\"\"\n\n def __init__(self, iterable: Iterable) -> None:\n self.iterable = iterable\n self.cache = None\n\n def __iter__(self):\n if self.cache is None:\n self.cache = []\n for element in self.iterable:\n yield element\n self.cache.append(element)\n else:\n yield from self.cache\n\n\ndef pseudo_shuffled(iterator: Iterator, shuffle_buffer_length: int):\n \"\"\"\n An iterator that yields item from a given iterator in a pseudo-shuffled order.\n \"\"\"\n shuffle_buffer = []\n\n for element in iterator:\n shuffle_buffer.append(element)\n if len(shuffle_buffer) >= shuffle_buffer_length:\n yield shuffle_buffer.pop(random.randrange(len(shuffle_buffer)))\n\n while shuffle_buffer:\n yield shuffle_buffer.pop(random.randrange(len(shuffle_buffer)))\n", "path": "src/gluonts/itertools.py"}]} | 1,747 | 151 |
gh_patches_debug_33074 | rasdani/github-patches | git_diff | ARM-DOE__ACT-817 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in xsection plot map code
* ACT version: Current Version
* Python version: All
* Operating System: All
### Description
xsection plot map is generating images with duplicate axes; see the image below. I believe this is probably the cause of our baseline image failure.

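A sketch of how stacked axes like this can arise (an assumption about the cause, shown with plain matplotlib rather than ACT itself):
```python
# Sketch only: two axes occupy the same slot, so both sets of ticks/labels are drawn.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()                        # axes created up front
ax_map = fig.add_subplot(1, 1, 1, label="map")  # second axes added at the same position
print(len(fig.axes))                            # 2 with matplotlib >= 3.6
```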
</issue>
<code>
[start of act/plotting/xsectiondisplay.py]
1 """
2 Stores the class for XSectionDisplay.
3
4 """
5
6 # Import third party libraries
7 import matplotlib.pyplot as plt
8 import numpy as np
9
10 try:
11 import cartopy.crs as ccrs
12
13 CARTOPY_AVAILABLE = True
14 except ImportError:
15 CARTOPY_AVAILABLE = False
16
17 # Import Local Libs
18 from ..utils import data_utils
19 from .plot import Display
20
21
22 class XSectionDisplay(Display):
23 """
24 Plots cross sections of multidimensional datasets. The data
25 must be able to be sliced into a 2 dimensional slice using the
26 xarray :func:`xarray.Dataset.sel` and :func:`xarray.Dataset.isel` commands.
27
28 This is inherited from the :func:`act.plotting.Display`
29 class and has therefore has the same attributes as that class.
30 See :func:`act.plotting.Display`
31 for more information. There are no additional attributes or parameters
32 to this class.
33
34 In order to create geographic plots, ACT needs the Cartopy package to be
35 installed on your system. More information about
36 Cartopy go here:https://scitools.org.uk/cartopy/docs/latest/.
37
38 Examples
39 --------
40 For example, if you only want to do a cross section through the first
41 time period of a 3D dataset called :code:`ir_temperature`, you would
42 do the following in xarray:
43
44 .. code-block:: python
45
46 time_slice = my_ds["ir_temperature"].isel(time=0)
47
48 The methods of this class support passing in keyword arguments into
49 xarray :func:`xarray.Dataset.sel` and :func:`xarray.Dataset.isel` commands
50 so that new datasets do not need to be created when slicing by specific time
51 periods or spatial slices. For example, to plot the first time period
52 from :code:`my_ds`, simply do:
53
54 .. code-block:: python
55
56 xsection = XSectionDisplay(my_ds, figsize=(15, 8))
57 xsection.plot_xsection_map(
58 None,
59 "ir_temperature",
60 vmin=220,
61 vmax=300,
62 cmap="Greys",
63 x="longitude",
64 y="latitude",
65 isel_kwargs={"time": 0},
66 )
67
68 Here, the array is sliced by the first time period as specified
69 in :code:`isel_kwargs`. The other keyword arguments are standard keyword
70 arguments taken by :func:`matplotlib.pyplot.pcolormesh`.
71
72 """
73
74 def __init__(self, ds, subplot_shape=(1,), ds_name=None, **kwargs):
75 super().__init__(ds, subplot_shape, ds_name, **kwargs)
76
77 def set_subplot_to_map(self, subplot_index):
78 total_num_plots = self.axes.shape
79
80 if len(total_num_plots) == 2:
81 second_number = total_num_plots[0]
82 j = subplot_index[1]
83 else:
84 second_number = 1
85 j = 0
86
87 third_number = second_number * subplot_index[0] + j + 1
88
89 self.axes[subplot_index] = plt.subplot(
90 total_num_plots[0],
91 second_number,
92 third_number,
93 projection=ccrs.PlateCarree(),
94 )
95
96 def set_xrng(self, xrng, subplot_index=(0,)):
97 """
98 Sets the x range of the plot.
99
100 Parameters
101 ----------
102 xrng : 2 number array
103 The x limits of the plot.
104 subplot_index : 1 or 2D tuple, list, or array
105 The index of the subplot to set the x range of.
106
107 """
108 if self.axes is None:
109 raise RuntimeError('set_xrng requires the plot to be displayed.')
110
111 if not hasattr(self, 'xrng') and len(self.axes.shape) == 2:
112 self.xrng = np.zeros((self.axes.shape[0], self.axes.shape[1], 2), dtype=xrng[0].dtype)
113 elif not hasattr(self, 'xrng') and len(self.axes.shape) == 1:
114 self.xrng = np.zeros((self.axes.shape[0], 2), dtype=xrng[0].dtype)
115
116 self.axes[subplot_index].set_xlim(xrng)
117 self.xrng[subplot_index, :] = np.array(xrng)
118
119 def set_yrng(self, yrng, subplot_index=(0,)):
120 """
121 Sets the y range of the plot.
122
123 Parameters
124 ----------
125 yrng : 2 number array
126 The y limits of the plot.
127 subplot_index : 1 or 2D tuple, list, or array
128 The index of the subplot to set the x range of.
129
130 """
131 if self.axes is None:
132 raise RuntimeError('set_yrng requires the plot to be displayed.')
133
134 if not hasattr(self, 'yrng') and len(self.axes.shape) == 2:
135 self.yrng = np.zeros((self.axes.shape[0], self.axes.shape[1], 2), dtype=yrng[0].dtype)
136 elif not hasattr(self, 'yrng') and len(self.axes.shape) == 1:
137 self.yrng = np.zeros((self.axes.shape[0], 2), dtype=yrng[0].dtype)
138
139 if yrng[0] == yrng[1]:
140 yrng[1] = yrng[1] + 1
141
142 self.axes[subplot_index].set_ylim(yrng)
143
144 self.yrng[subplot_index, :] = yrng
145
146 def plot_xsection(
147 self,
148 dsname,
149 varname,
150 x=None,
151 y=None,
152 subplot_index=(0,),
153 sel_kwargs=None,
154 isel_kwargs=None,
155 **kwargs,
156 ):
157 """
158 This function plots a cross section whose x and y coordinates are
159 specified by the variable names either provided by the user or
160 automatically detected by xarray.
161
162 Parameters
163 ----------
164 dsname : str or None
165 The name of the datastream to plot from. Set to None to have
166 ACT attempt to automatically detect this.
167 varname : str
168 The name of the variable to plot.
169 x : str or None
170 The name of the x coordinate variable.
171 y : str or None
172 The name of the y coordinate variable.
173 subplot_index : tuple
174 The index of the subplot to create the plot in.
175 sel_kwargs : dict
176 The keyword arguments to pass into :py:func:`xarray.DataArray.sel`
177 This is useful when your data is in 3 or more dimensions and you
178 want to only view a cross section on a specific x-y plane. For more
179 information on how to use xarray's .sel and .isel functionality
180 to slice datasets, see the documentation on :func:`xarray.DataArray.sel`.
181 isel_kwargs : dict
182 The keyword arguments to pass into :py:func:`xarray.DataArray.sel`
183 **kwargs : keyword arguments
184 Additional keyword arguments will be passed into
185 :func:`xarray.DataArray.plot`.
186
187 Returns
188 -------
189 ax : matplotlib axis handle
190 The matplotlib axis handle corresponding to the plot.
191
192 """
193 if dsname is None and len(self._ds.keys()) > 1:
194 raise ValueError(
195 'You must choose a datastream when there are 2 '
196 'or more datasets in the TimeSeriesDisplay '
197 'object.'
198 )
199 elif dsname is None:
200 dsname = list(self._ds.keys())[0]
201 temp_ds = self._ds[dsname].copy()
202
203 if sel_kwargs is not None:
204 temp_ds = temp_ds.sel(**sel_kwargs, method='nearest')
205
206 if isel_kwargs is not None:
207 temp_ds = temp_ds.isel(**isel_kwargs)
208
209 if (x is not None and y is None) or (y is None and x is not None):
210 raise RuntimeError(
211 'Both x and y must be specified if we are'
212 + 'not trying to automatically detect them!'
213 )
214
215 if x is not None:
216 coord_list = {}
217 x_coord_dim = temp_ds[x].dims[0]
218 coord_list[x] = x_coord_dim
219 y_coord_dim = temp_ds[y].dims[0]
220 coord_list[y] = y_coord_dim
221 new_ds = data_utils.assign_coordinates(temp_ds, coord_list)
222 my_dataarray = new_ds[varname]
223 else:
224 my_dataarray = temp_ds[varname]
225
226 coord_keys = [key for key in my_dataarray.coords.keys()]
227 # X-array will sometimes shorten latitude and longitude variables
228 if x == 'longitude' and x not in coord_keys:
229 xc = 'lon'
230 else:
231 xc = x
232 if y == 'latitude' and y not in coord_keys:
233 yc = 'lat'
234 else:
235 yc = y
236
237 if x is None:
238 ax = my_dataarray.plot(ax=self.axes[subplot_index], **kwargs)
239 else:
240 ax = my_dataarray.plot(ax=self.axes[subplot_index], x=xc, y=yc, **kwargs)
241
242 the_coords = [the_keys for the_keys in my_dataarray.coords.keys()]
243 if x is None:
244 x = the_coords[0]
245 else:
246 x = coord_list[x]
247
248 if y is None:
249 y = the_coords[1]
250 else:
251 y = coord_list[y]
252
253 xrng = self.axes[subplot_index].get_xlim()
254 self.set_xrng(xrng, subplot_index)
255 yrng = self.axes[subplot_index].get_ylim()
256 self.set_yrng(yrng, subplot_index)
257 del temp_ds
258 return ax
259
260 def plot_xsection_map(
261 self, dsname, varname, subplot_index=(0,), coastlines=True, background=False, **kwargs
262 ):
263 """
264 Plots a cross section of 2D data on a geographical map.
265
266 Parameters
267 ----------
268 dsname : str or None
269 The name of the datastream to plot from. Set to None
270 to have ACT attempt to automatically detect this.
271 varname : str
272 The name of the variable to plot.
273 subplot_index : tuple
274 The index of the subplot to plot inside.
275 coastlines : bool
276 Set to True to plot the coastlines.
277 background : bool
278 Set to True to plot a stock image background.
279 **kwargs : keyword arguments
280 Additional keyword arguments will be passed into
281 :func:`act.plotting.XSectionDisplay.plot_xsection`
282
283 Returns
284 -------
285 ax : matplotlib axis handle
286 The matplotlib axis handle corresponding to the plot.
287
288 """
289 if not CARTOPY_AVAILABLE:
290 raise ImportError(
291 'Cartopy needs to be installed in order to plot ' + 'cross sections on maps!'
292 )
293
294 self.set_subplot_to_map(subplot_index)
295 self.plot_xsection(dsname, varname, subplot_index=subplot_index, **kwargs)
296 xlims = self.xrng[subplot_index].flatten()
297 ylims = self.yrng[subplot_index].flatten()
298 self.axes[subplot_index].set_xticks(np.linspace(round(xlims[0], 0), round(xlims[1], 0), 10))
299 self.axes[subplot_index].set_yticks(np.linspace(round(ylims[0], 0), round(ylims[1], 0), 10))
300
301 if coastlines:
302 self.axes[subplot_index].coastlines(resolution='10m')
303
304 if background:
305 self.axes[subplot_index].stock_img()
306
307 return self.axes[subplot_index]
308
[end of act/plotting/xsectiondisplay.py]
</code>
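The `sel_kwargs`/`isel_kwargs` parameters documented in the listing above are plain xarray selection dictionaries. As a point of reference only (synthetic data, not part of the ACT codebase), this is the kind of slicing they perform before the 2-D result is plotted:

```python
import numpy as np
import xarray as xr

# Hypothetical 3-D field with dimensions (time, height, range).
data = xr.DataArray(
    np.random.rand(4, 10, 20),
    dims=("time", "height", "range"),
    name="backscatter",
)

# isel_kwargs={"time": 0} selects the first time step by position,
# leaving a 2-D (height, range) slice suitable for plot_xsection.
slice_2d = data.isel(time=0)
print(slice_2d.dims)  # ('height', 'range')
```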
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/act/plotting/xsectiondisplay.py b/act/plotting/xsectiondisplay.py
--- a/act/plotting/xsectiondisplay.py
+++ b/act/plotting/xsectiondisplay.py
@@ -75,6 +75,7 @@
super().__init__(ds, subplot_shape, ds_name, **kwargs)
def set_subplot_to_map(self, subplot_index):
+ self.fig.delaxes(self.axes[subplot_index])
total_num_plots = self.axes.shape
if len(total_num_plots) == 2:
@@ -235,9 +236,9 @@
yc = y
if x is None:
- ax = my_dataarray.plot(ax=self.axes[subplot_index], **kwargs)
+ my_dataarray.plot(ax=self.axes[subplot_index], **kwargs)
else:
- ax = my_dataarray.plot(ax=self.axes[subplot_index], x=xc, y=yc, **kwargs)
+ my_dataarray.plot(ax=self.axes[subplot_index], x=xc, y=yc, **kwargs)
the_coords = [the_keys for the_keys in my_dataarray.coords.keys()]
if x is None:
@@ -255,7 +256,7 @@
yrng = self.axes[subplot_index].get_ylim()
self.set_yrng(yrng, subplot_index)
del temp_ds
- return ax
+ return self.axes[subplot_index]
def plot_xsection_map(
self, dsname, varname, subplot_index=(0,), coastlines=True, background=False, **kwargs
@@ -290,7 +291,6 @@
raise ImportError(
'Cartopy needs to be installed in order to plot ' + 'cross sections on maps!'
)
-
self.set_subplot_to_map(subplot_index)
self.plot_xsection(dsname, varname, subplot_index=subplot_index, **kwargs)
xlims = self.xrng[subplot_index].flatten()
| {"golden_diff": "diff --git a/act/plotting/xsectiondisplay.py b/act/plotting/xsectiondisplay.py\n--- a/act/plotting/xsectiondisplay.py\n+++ b/act/plotting/xsectiondisplay.py\n@@ -75,6 +75,7 @@\n super().__init__(ds, subplot_shape, ds_name, **kwargs)\n \n def set_subplot_to_map(self, subplot_index):\n+ self.fig.delaxes(self.axes[subplot_index])\n total_num_plots = self.axes.shape\n \n if len(total_num_plots) == 2:\n@@ -235,9 +236,9 @@\n yc = y\n \n if x is None:\n- ax = my_dataarray.plot(ax=self.axes[subplot_index], **kwargs)\n+ my_dataarray.plot(ax=self.axes[subplot_index], **kwargs)\n else:\n- ax = my_dataarray.plot(ax=self.axes[subplot_index], x=xc, y=yc, **kwargs)\n+ my_dataarray.plot(ax=self.axes[subplot_index], x=xc, y=yc, **kwargs)\n \n the_coords = [the_keys for the_keys in my_dataarray.coords.keys()]\n if x is None:\n@@ -255,7 +256,7 @@\n yrng = self.axes[subplot_index].get_ylim()\n self.set_yrng(yrng, subplot_index)\n del temp_ds\n- return ax\n+ return self.axes[subplot_index]\n \n def plot_xsection_map(\n self, dsname, varname, subplot_index=(0,), coastlines=True, background=False, **kwargs\n@@ -290,7 +291,6 @@\n raise ImportError(\n 'Cartopy needs to be installed in order to plot ' + 'cross sections on maps!'\n )\n-\n self.set_subplot_to_map(subplot_index)\n self.plot_xsection(dsname, varname, subplot_index=subplot_index, **kwargs)\n xlims = self.xrng[subplot_index].flatten()\n", "issue": "Bug in xsection plot map code\n* ACT version: Current Version\r\n* Python version: All\r\n* Operating System: All\r\n\r\n### Description\r\n\r\nxsection plot map is generating images with duplicate axes, see image below. I believe this is probably the cause to our baseline image failure.\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nStores the class for XSectionDisplay.\n\n\"\"\"\n\n# Import third party libraries\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ntry:\n import cartopy.crs as ccrs\n\n CARTOPY_AVAILABLE = True\nexcept ImportError:\n CARTOPY_AVAILABLE = False\n\n# Import Local Libs\nfrom ..utils import data_utils\nfrom .plot import Display\n\n\nclass XSectionDisplay(Display):\n \"\"\"\n Plots cross sections of multidimensional datasets. The data\n must be able to be sliced into a 2 dimensional slice using the\n xarray :func:`xarray.Dataset.sel` and :func:`xarray.Dataset.isel` commands.\n\n This is inherited from the :func:`act.plotting.Display`\n class and has therefore has the same attributes as that class.\n See :func:`act.plotting.Display`\n for more information. There are no additional attributes or parameters\n to this class.\n\n In order to create geographic plots, ACT needs the Cartopy package to be\n installed on your system. More information about\n Cartopy go here:https://scitools.org.uk/cartopy/docs/latest/.\n\n Examples\n --------\n For example, if you only want to do a cross section through the first\n time period of a 3D dataset called :code:`ir_temperature`, you would\n do the following in xarray:\n\n .. code-block:: python\n\n time_slice = my_ds[\"ir_temperature\"].isel(time=0)\n\n The methods of this class support passing in keyword arguments into\n xarray :func:`xarray.Dataset.sel` and :func:`xarray.Dataset.isel` commands\n so that new datasets do not need to be created when slicing by specific time\n periods or spatial slices. For example, to plot the first time period\n from :code:`my_ds`, simply do:\n\n .. 
code-block:: python\n\n xsection = XSectionDisplay(my_ds, figsize=(15, 8))\n xsection.plot_xsection_map(\n None,\n \"ir_temperature\",\n vmin=220,\n vmax=300,\n cmap=\"Greys\",\n x=\"longitude\",\n y=\"latitude\",\n isel_kwargs={\"time\": 0},\n )\n\n Here, the array is sliced by the first time period as specified\n in :code:`isel_kwargs`. The other keyword arguments are standard keyword\n arguments taken by :func:`matplotlib.pyplot.pcolormesh`.\n\n \"\"\"\n\n def __init__(self, ds, subplot_shape=(1,), ds_name=None, **kwargs):\n super().__init__(ds, subplot_shape, ds_name, **kwargs)\n\n def set_subplot_to_map(self, subplot_index):\n total_num_plots = self.axes.shape\n\n if len(total_num_plots) == 2:\n second_number = total_num_plots[0]\n j = subplot_index[1]\n else:\n second_number = 1\n j = 0\n\n third_number = second_number * subplot_index[0] + j + 1\n\n self.axes[subplot_index] = plt.subplot(\n total_num_plots[0],\n second_number,\n third_number,\n projection=ccrs.PlateCarree(),\n )\n\n def set_xrng(self, xrng, subplot_index=(0,)):\n \"\"\"\n Sets the x range of the plot.\n\n Parameters\n ----------\n xrng : 2 number array\n The x limits of the plot.\n subplot_index : 1 or 2D tuple, list, or array\n The index of the subplot to set the x range of.\n\n \"\"\"\n if self.axes is None:\n raise RuntimeError('set_xrng requires the plot to be displayed.')\n\n if not hasattr(self, 'xrng') and len(self.axes.shape) == 2:\n self.xrng = np.zeros((self.axes.shape[0], self.axes.shape[1], 2), dtype=xrng[0].dtype)\n elif not hasattr(self, 'xrng') and len(self.axes.shape) == 1:\n self.xrng = np.zeros((self.axes.shape[0], 2), dtype=xrng[0].dtype)\n\n self.axes[subplot_index].set_xlim(xrng)\n self.xrng[subplot_index, :] = np.array(xrng)\n\n def set_yrng(self, yrng, subplot_index=(0,)):\n \"\"\"\n Sets the y range of the plot.\n\n Parameters\n ----------\n yrng : 2 number array\n The y limits of the plot.\n subplot_index : 1 or 2D tuple, list, or array\n The index of the subplot to set the x range of.\n\n \"\"\"\n if self.axes is None:\n raise RuntimeError('set_yrng requires the plot to be displayed.')\n\n if not hasattr(self, 'yrng') and len(self.axes.shape) == 2:\n self.yrng = np.zeros((self.axes.shape[0], self.axes.shape[1], 2), dtype=yrng[0].dtype)\n elif not hasattr(self, 'yrng') and len(self.axes.shape) == 1:\n self.yrng = np.zeros((self.axes.shape[0], 2), dtype=yrng[0].dtype)\n\n if yrng[0] == yrng[1]:\n yrng[1] = yrng[1] + 1\n\n self.axes[subplot_index].set_ylim(yrng)\n\n self.yrng[subplot_index, :] = yrng\n\n def plot_xsection(\n self,\n dsname,\n varname,\n x=None,\n y=None,\n subplot_index=(0,),\n sel_kwargs=None,\n isel_kwargs=None,\n **kwargs,\n ):\n \"\"\"\n This function plots a cross section whose x and y coordinates are\n specified by the variable names either provided by the user or\n automatically detected by xarray.\n\n Parameters\n ----------\n dsname : str or None\n The name of the datastream to plot from. Set to None to have\n ACT attempt to automatically detect this.\n varname : str\n The name of the variable to plot.\n x : str or None\n The name of the x coordinate variable.\n y : str or None\n The name of the y coordinate variable.\n subplot_index : tuple\n The index of the subplot to create the plot in.\n sel_kwargs : dict\n The keyword arguments to pass into :py:func:`xarray.DataArray.sel`\n This is useful when your data is in 3 or more dimensions and you\n want to only view a cross section on a specific x-y plane. 
For more\n information on how to use xarray's .sel and .isel functionality\n to slice datasets, see the documentation on :func:`xarray.DataArray.sel`.\n isel_kwargs : dict\n The keyword arguments to pass into :py:func:`xarray.DataArray.sel`\n **kwargs : keyword arguments\n Additional keyword arguments will be passed into\n :func:`xarray.DataArray.plot`.\n\n Returns\n -------\n ax : matplotlib axis handle\n The matplotlib axis handle corresponding to the plot.\n\n \"\"\"\n if dsname is None and len(self._ds.keys()) > 1:\n raise ValueError(\n 'You must choose a datastream when there are 2 '\n 'or more datasets in the TimeSeriesDisplay '\n 'object.'\n )\n elif dsname is None:\n dsname = list(self._ds.keys())[0]\n temp_ds = self._ds[dsname].copy()\n\n if sel_kwargs is not None:\n temp_ds = temp_ds.sel(**sel_kwargs, method='nearest')\n\n if isel_kwargs is not None:\n temp_ds = temp_ds.isel(**isel_kwargs)\n\n if (x is not None and y is None) or (y is None and x is not None):\n raise RuntimeError(\n 'Both x and y must be specified if we are'\n + 'not trying to automatically detect them!'\n )\n\n if x is not None:\n coord_list = {}\n x_coord_dim = temp_ds[x].dims[0]\n coord_list[x] = x_coord_dim\n y_coord_dim = temp_ds[y].dims[0]\n coord_list[y] = y_coord_dim\n new_ds = data_utils.assign_coordinates(temp_ds, coord_list)\n my_dataarray = new_ds[varname]\n else:\n my_dataarray = temp_ds[varname]\n\n coord_keys = [key for key in my_dataarray.coords.keys()]\n # X-array will sometimes shorten latitude and longitude variables\n if x == 'longitude' and x not in coord_keys:\n xc = 'lon'\n else:\n xc = x\n if y == 'latitude' and y not in coord_keys:\n yc = 'lat'\n else:\n yc = y\n\n if x is None:\n ax = my_dataarray.plot(ax=self.axes[subplot_index], **kwargs)\n else:\n ax = my_dataarray.plot(ax=self.axes[subplot_index], x=xc, y=yc, **kwargs)\n\n the_coords = [the_keys for the_keys in my_dataarray.coords.keys()]\n if x is None:\n x = the_coords[0]\n else:\n x = coord_list[x]\n\n if y is None:\n y = the_coords[1]\n else:\n y = coord_list[y]\n\n xrng = self.axes[subplot_index].get_xlim()\n self.set_xrng(xrng, subplot_index)\n yrng = self.axes[subplot_index].get_ylim()\n self.set_yrng(yrng, subplot_index)\n del temp_ds\n return ax\n\n def plot_xsection_map(\n self, dsname, varname, subplot_index=(0,), coastlines=True, background=False, **kwargs\n ):\n \"\"\"\n Plots a cross section of 2D data on a geographical map.\n\n Parameters\n ----------\n dsname : str or None\n The name of the datastream to plot from. 
Set to None\n to have ACT attempt to automatically detect this.\n varname : str\n The name of the variable to plot.\n subplot_index : tuple\n The index of the subplot to plot inside.\n coastlines : bool\n Set to True to plot the coastlines.\n background : bool\n Set to True to plot a stock image background.\n **kwargs : keyword arguments\n Additional keyword arguments will be passed into\n :func:`act.plotting.XSectionDisplay.plot_xsection`\n\n Returns\n -------\n ax : matplotlib axis handle\n The matplotlib axis handle corresponding to the plot.\n\n \"\"\"\n if not CARTOPY_AVAILABLE:\n raise ImportError(\n 'Cartopy needs to be installed in order to plot ' + 'cross sections on maps!'\n )\n\n self.set_subplot_to_map(subplot_index)\n self.plot_xsection(dsname, varname, subplot_index=subplot_index, **kwargs)\n xlims = self.xrng[subplot_index].flatten()\n ylims = self.yrng[subplot_index].flatten()\n self.axes[subplot_index].set_xticks(np.linspace(round(xlims[0], 0), round(xlims[1], 0), 10))\n self.axes[subplot_index].set_yticks(np.linspace(round(ylims[0], 0), round(ylims[1], 0), 10))\n\n if coastlines:\n self.axes[subplot_index].coastlines(resolution='10m')\n\n if background:\n self.axes[subplot_index].stock_img()\n\n return self.axes[subplot_index]\n", "path": "act/plotting/xsectiondisplay.py"}]} | 3,981 | 429 |
gh_patches_debug_10230 | rasdani/github-patches | git_diff | streamlink__streamlink-925 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BBC iPlayer plugin cannot find VPID
### Checklist
- [x] This is a bug report.
- [ ] This is a feature request.
- [ ] This is a plugin (improvement) request.
- [ ] I have read the contribution guidelines.
### Description
The BBC iPlayer plugin cannot find the VPID for valid URLs.
### Reproduction steps / Explicit stream URLs to test
The following command:
`streamlink -l debug 'http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars' best`
produces this output:
```
[cli][info] Found matching plugin bbciplayer for URL http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars
[plugin.bbciplayer][debug] Loading streams for episode: b013pnv4
[plugin.bbciplayer][debug] Looking for vpid on http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars
[plugin.bbciplayer][error] Could not find VPID for episode b013pnv4
error: No playable streams found on this URL: http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars
```
and the same goes for any other valid iplayer url.
### Environment details
Operating system: arch linux
Streamlink and Python versions: streamlink-0.6.0 and python-3.6.1
### Comments, logs, screenshots, etc.
AFAICS, the page downloaded from the iplayer url does not contain the string "vpid".
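A quick standalone check of that observation (a diagnostic sketch only, not part of the plugin) is to fetch the episode page and search for the candidate keys; the accompanying diff below switches the plugin's regex from `"vpid"` to `"ident_id"`:

```python
import re
import urllib.request

url = ("http://www.bbc.co.uk/iplayer/episode/b013pnv4/"
       "horizon-20112012-2-seeing-stars")
page = urllib.request.urlopen(url).read().decode("utf-8", "replace")

# "vpid" is what the current plugin regex expects; "ident_id" is the key
# the fix in this record looks for instead.
for key in ("vpid", "ident_id"):
    match = re.search(r'"{0}"\s*:\s*"(\w+)"'.format(key), page)
    print(key, match.group(1) if match else "not found")
```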
</issue>
<code>
[start of src/streamlink/plugins/bbciplayer.py]
1 from __future__ import print_function
2
3 import base64
4 import re
5 from functools import partial
6 from hashlib import sha1
7
8 from streamlink.plugin import Plugin
9 from streamlink.plugin.api import http
10 from streamlink.plugin.api import validate
11 from streamlink.stream import HDSStream
12 from streamlink.stream import HLSStream
13 from streamlink.utils import parse_xml, parse_json
14
15
16 class BBCiPlayer(Plugin):
17 url_re = re.compile(r"""https?://(?:www\.)?bbc.co.uk/iplayer/
18 (
19 episode/(?P<episode_id>\w+)|
20 live/(?P<channel_name>\w+)
21 )
22 """, re.VERBOSE)
23 vpid_re = re.compile(r'"vpid"\s*:\s*"(\w+)"')
24 tvip_re = re.compile(r'event_master_brand=(\w+?)&')
25 swf_url = "http://emp.bbci.co.uk/emp/SMPf/1.18.3/StandardMediaPlayerChromelessFlash.swf"
26 hash = base64.b64decode(b"N2RmZjc2NzFkMGM2OTdmZWRiMWQ5MDVkOWExMjE3MTk5MzhiOTJiZg==")
27 api_url = ("http://open.live.bbc.co.uk/mediaselector/5/select/"
28 "version/2.0/mediaset/{platform}/vpid/{vpid}/atk/{vpid_hash}/asn/1/")
29 platforms = ("pc", "iptv-all")
30
31 mediaselector_schema = validate.Schema(
32 validate.transform(partial(parse_xml, ignore_ns=True)),
33 validate.union({
34 "hds": validate.xml_findall(".//media[@kind='video']//connection[@transferFormat='hds']"),
35 "hls": validate.xml_findall(".//media[@kind='video']//connection[@transferFormat='hls']")
36 }),
37 {validate.text: validate.all(
38 [validate.all(validate.getattr("attrib"), validate.get("href"))],
39 validate.transform(lambda x: list(set(x))) # unique
40 )}
41 )
42
43 @classmethod
44 def can_handle_url(cls, url):
45 return cls.url_re.match(url) is not None
46
47 @classmethod
48 def _hash_vpid(cls, vpid):
49 return sha1(cls.hash + str(vpid).encode("utf8")).hexdigest()
50
51 def find_vpid(self, url):
52 self.logger.debug("Looking for vpid on {0}", url)
53 res = http.get(url)
54 m = self.vpid_re.search(res.text)
55 return m and m.group(1)
56
57 def find_tvip(self, url):
58 self.logger.debug("Looking for tvip on {0}", url)
59 res = http.get(url)
60 m = self.tvip_re.search(res.text)
61 return m and m.group(1)
62
63 def mediaselector(self, vpid):
64 for platform in self.platforms:
65 url = self.api_url.format(vpid=vpid, vpid_hash=self._hash_vpid(vpid), platform=platform)
66 stream_urls = http.get(url, schema=self.mediaselector_schema)
67 for surl in stream_urls.get("hls"):
68 for s in HLSStream.parse_variant_playlist(self.session, surl).items():
69 yield s
70 for surl in stream_urls.get("hds"):
71 for s in HDSStream.parse_manifest(self.session, surl).items():
72 yield s
73
74 def _get_streams(self):
75 m = self.url_re.match(self.url)
76 episode_id = m.group("episode_id")
77 channel_name = m.group("channel_name")
78
79 if episode_id:
80 self.logger.debug("Loading streams for episode: {0}", episode_id)
81 vpid = self.find_vpid(self.url)
82 if vpid:
83 self.logger.debug("Found VPID: {0}", vpid)
84 for s in self.mediaselector(vpid):
85 yield s
86 else:
87 self.logger.error("Could not find VPID for episode {0}", episode_id)
88 elif channel_name:
89 self.logger.debug("Loading stream for live channel: {0}", channel_name)
90 tvip = self.find_tvip(self.url)
91 if tvip:
92 self.logger.debug("Found TVIP: {0}", tvip)
93 for s in self.mediaselector(tvip):
94 yield s
95
96
97 __plugin__ = BBCiPlayer
98
[end of src/streamlink/plugins/bbciplayer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/bbciplayer.py b/src/streamlink/plugins/bbciplayer.py
--- a/src/streamlink/plugins/bbciplayer.py
+++ b/src/streamlink/plugins/bbciplayer.py
@@ -20,7 +20,7 @@
live/(?P<channel_name>\w+)
)
""", re.VERBOSE)
- vpid_re = re.compile(r'"vpid"\s*:\s*"(\w+)"')
+ vpid_re = re.compile(r'"ident_id"\s*:\s*"(\w+)"')
tvip_re = re.compile(r'event_master_brand=(\w+?)&')
swf_url = "http://emp.bbci.co.uk/emp/SMPf/1.18.3/StandardMediaPlayerChromelessFlash.swf"
hash = base64.b64decode(b"N2RmZjc2NzFkMGM2OTdmZWRiMWQ5MDVkOWExMjE3MTk5MzhiOTJiZg==")
| {"golden_diff": "diff --git a/src/streamlink/plugins/bbciplayer.py b/src/streamlink/plugins/bbciplayer.py\n--- a/src/streamlink/plugins/bbciplayer.py\n+++ b/src/streamlink/plugins/bbciplayer.py\n@@ -20,7 +20,7 @@\n live/(?P<channel_name>\\w+)\n )\n \"\"\", re.VERBOSE)\n- vpid_re = re.compile(r'\"vpid\"\\s*:\\s*\"(\\w+)\"')\n+ vpid_re = re.compile(r'\"ident_id\"\\s*:\\s*\"(\\w+)\"')\n tvip_re = re.compile(r'event_master_brand=(\\w+?)&')\n swf_url = \"http://emp.bbci.co.uk/emp/SMPf/1.18.3/StandardMediaPlayerChromelessFlash.swf\"\n hash = base64.b64decode(b\"N2RmZjc2NzFkMGM2OTdmZWRiMWQ5MDVkOWExMjE3MTk5MzhiOTJiZg==\")\n", "issue": "BBC iPlayer plugin cannot find VPID\n### Checklist\r\n\r\n- [x] This is a bug report.\r\n- [ ] This is a feature request.\r\n- [ ] This is a plugin (improvement) request.\r\n- [ ] I have read the contribution guidelines.\r\n\r\n### Description\r\n\r\nThe BBC IPlayer plugin cannot find the VPID for valid urls.\r\n\r\n### Reproduction steps / Explicit stream URLs to test\r\n\r\nThe following command:\r\n\r\n`streamlink -l debug 'http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars' best`\r\n\r\nproduces this output:\r\n\r\n```\r\n[cli][info] Found matching plugin bbciplayer for URL http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars\r\n[plugin.bbciplayer][debug] Loading streams for episode: b013pnv4\r\n[plugin.bbciplayer][debug] Looking for vpid on http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars\r\n[plugin.bbciplayer][error] Could not find VPID for episode b013pnv4\r\nerror: No playable streams found on this URL: http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars\r\n\r\n```\r\n\r\nand the same goes for any other valid iplayer url.\r\n\r\n### Environment details\r\n\r\nOperating system: arch linux\r\nStreamlink and Python versions: streamlink-0.6.0 and python-3.6.1\r\n\r\n### Comments, logs, screenshots, etc.\r\n\r\nAFAICS, the page downloaded from the iplayer url does not contain the string \"vpid\".\r\n\n", "before_files": [{"content": "from __future__ import print_function\n\nimport base64\nimport re\nfrom functools import partial\nfrom hashlib import sha1\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HDSStream\nfrom streamlink.stream import HLSStream\nfrom streamlink.utils import parse_xml, parse_json\n\n\nclass BBCiPlayer(Plugin):\n url_re = re.compile(r\"\"\"https?://(?:www\\.)?bbc.co.uk/iplayer/\n (\n episode/(?P<episode_id>\\w+)|\n live/(?P<channel_name>\\w+)\n )\n \"\"\", re.VERBOSE)\n vpid_re = re.compile(r'\"vpid\"\\s*:\\s*\"(\\w+)\"')\n tvip_re = re.compile(r'event_master_brand=(\\w+?)&')\n swf_url = \"http://emp.bbci.co.uk/emp/SMPf/1.18.3/StandardMediaPlayerChromelessFlash.swf\"\n hash = base64.b64decode(b\"N2RmZjc2NzFkMGM2OTdmZWRiMWQ5MDVkOWExMjE3MTk5MzhiOTJiZg==\")\n api_url = (\"http://open.live.bbc.co.uk/mediaselector/5/select/\"\n \"version/2.0/mediaset/{platform}/vpid/{vpid}/atk/{vpid_hash}/asn/1/\")\n platforms = (\"pc\", \"iptv-all\")\n\n mediaselector_schema = validate.Schema(\n validate.transform(partial(parse_xml, ignore_ns=True)),\n validate.union({\n \"hds\": validate.xml_findall(\".//media[@kind='video']//connection[@transferFormat='hds']\"),\n \"hls\": validate.xml_findall(\".//media[@kind='video']//connection[@transferFormat='hls']\")\n }),\n {validate.text: validate.all(\n [validate.all(validate.getattr(\"attrib\"), 
validate.get(\"href\"))],\n validate.transform(lambda x: list(set(x))) # unique\n )}\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n @classmethod\n def _hash_vpid(cls, vpid):\n return sha1(cls.hash + str(vpid).encode(\"utf8\")).hexdigest()\n\n def find_vpid(self, url):\n self.logger.debug(\"Looking for vpid on {0}\", url)\n res = http.get(url)\n m = self.vpid_re.search(res.text)\n return m and m.group(1)\n\n def find_tvip(self, url):\n self.logger.debug(\"Looking for tvip on {0}\", url)\n res = http.get(url)\n m = self.tvip_re.search(res.text)\n return m and m.group(1)\n\n def mediaselector(self, vpid):\n for platform in self.platforms:\n url = self.api_url.format(vpid=vpid, vpid_hash=self._hash_vpid(vpid), platform=platform)\n stream_urls = http.get(url, schema=self.mediaselector_schema)\n for surl in stream_urls.get(\"hls\"):\n for s in HLSStream.parse_variant_playlist(self.session, surl).items():\n yield s\n for surl in stream_urls.get(\"hds\"):\n for s in HDSStream.parse_manifest(self.session, surl).items():\n yield s\n\n def _get_streams(self):\n m = self.url_re.match(self.url)\n episode_id = m.group(\"episode_id\")\n channel_name = m.group(\"channel_name\")\n\n if episode_id:\n self.logger.debug(\"Loading streams for episode: {0}\", episode_id)\n vpid = self.find_vpid(self.url)\n if vpid:\n self.logger.debug(\"Found VPID: {0}\", vpid)\n for s in self.mediaselector(vpid):\n yield s\n else:\n self.logger.error(\"Could not find VPID for episode {0}\", episode_id)\n elif channel_name:\n self.logger.debug(\"Loading stream for live channel: {0}\", channel_name)\n tvip = self.find_tvip(self.url)\n if tvip:\n self.logger.debug(\"Found TVIP: {0}\", tvip)\n for s in self.mediaselector(tvip):\n yield s\n\n\n__plugin__ = BBCiPlayer\n", "path": "src/streamlink/plugins/bbciplayer.py"}]} | 2,094 | 233 |
gh_patches_debug_149 | rasdani/github-patches | git_diff | apache__tvm-6399 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`import tvm` now requires pytest
With the merge of #6331, `import tvm` now requires pytest. I created this issue just to check whether this is something intentional or something that we want to fix.
The chain from `import tvm` to `import pytest` happens due to the `from . import testing` in `python/tvm/__init__.py`. Nothing is actually done with that import.
https://github.com/apache/incubator-tvm/blob/a4ebb16ed76bfea4ce4eed7be7ea73d4a01027e2/python/tvm/__init__.py#L53-L56
Within `python/tvm/testing.py`, there is then the `import pytest`. I was thinking that we might want to remove these lines from `__init__.py`, so that we don't load `tvm.testing` and only import it when required. I'm happy to submit a PR removing those lines if there is agreement that it makes sense.
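A quick way to confirm that chain in a fresh interpreter (assuming TVM is installed; this is purely a diagnostic sketch, not part of the proposed change):

```python
import sys

import tvm  # noqa: F401  # __init__.py runs `from . import testing`

# Before the fix this prints True, because tvm/testing.py does
# `import pytest` as a side effect of importing tvm.
print("pytest" in sys.modules)
```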
cc @tqchen
</issue>
<code>
[start of python/tvm/__init__.py]
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17 # pylint: disable=redefined-builtin, wildcard-import
18 """TVM: Open Deep Learning Compiler Stack."""
19 import multiprocessing
20 import sys
21 import traceback
22
23 # top-level alias
24 # tvm._ffi
25 from ._ffi.base import TVMError, __version__
26 from ._ffi.runtime_ctypes import DataTypeCode, DataType
27 from ._ffi import register_object, register_func, register_extension, get_global_func
28
29 # top-level alias
30 # tvm.runtime
31 from .runtime.object import Object
32 from .runtime.ndarray import context, cpu, gpu, opencl, cl, vulkan, metal, mtl
33 from .runtime.ndarray import vpi, rocm, ext_dev, micro_dev, hexagon
34 from .runtime import ndarray as nd
35
36 # tvm.error
37 from . import error
38
39 # tvm.ir
40 from .ir import IRModule
41 from .ir import transform
42 from .ir import container
43 from . import ir
44
45 # tvm.tir
46 from . import tir
47
48 # tvm.target
49 from . import target
50
51 # tvm.te
52 from . import te
53
54 # tvm.testing
55 from . import testing
56
57 # tvm.driver
58 from .driver import build, lower
59
60 # tvm.parser
61 from . import parser
62
63 # tvm tir hybrid script
64 from . import hybrid
65
66 # others
67 from . import arith
68
69 # support infra
70 from . import support
71
72 # Contrib initializers
73 from .contrib import rocm as _rocm, nvcc as _nvcc, sdaccel as _sdaccel
74
75
76 def tvm_wrap_excepthook(exception_hook):
77 """Wrap given excepthook with TVM additional work."""
78
79 def wrapper(exctype, value, trbk):
80 """Clean subprocesses when TVM is interrupted."""
81 exception_hook(exctype, value, trbk)
82 if hasattr(multiprocessing, 'active_children'):
83 # pylint: disable=not-callable
84 for p in multiprocessing.active_children():
85 p.terminate()
86
87 return wrapper
88
89
90 sys.excepthook = tvm_wrap_excepthook(sys.excepthook)
91
[end of python/tvm/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/tvm/__init__.py b/python/tvm/__init__.py
--- a/python/tvm/__init__.py
+++ b/python/tvm/__init__.py
@@ -51,9 +51,6 @@
# tvm.te
from . import te
-# tvm.testing
-from . import testing
-
# tvm.driver
from .driver import build, lower
| {"golden_diff": "diff --git a/python/tvm/__init__.py b/python/tvm/__init__.py\n--- a/python/tvm/__init__.py\n+++ b/python/tvm/__init__.py\n@@ -51,9 +51,6 @@\n # tvm.te\n from . import te\n \n-# tvm.testing\n-from . import testing\n-\n # tvm.driver\n from .driver import build, lower\n", "issue": "`import tvm` now requires pytest\nWith the merge of #6331, `import tvm` now requires pytest. I created this issue just to check whether this is something intentional or something that we want to fix.\r\n\r\nThe chain from `import tvm` to `import pytest` happens due to the `from .import testing` on `python/tvm/__init__.py`. There is nothing actually done with that import.\r\n\r\nhttps://github.com/apache/incubator-tvm/blob/a4ebb16ed76bfea4ce4eed7be7ea73d4a01027e2/python/tvm/__init__.py#L53-L56\r\n\r\nWithin `python/tvm/testing.py` then there is the `import pytest`. I was thinking that we might want to remove these lines from `__init__.py`, so that we don't load `tvm.testing` and will only import it when required. I'm happy to submit a PR removing those lines, in case there is an understanding that it makes sense.\r\n\r\ncc @tqchen \n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n# pylint: disable=redefined-builtin, wildcard-import\n\"\"\"TVM: Open Deep Learning Compiler Stack.\"\"\"\nimport multiprocessing\nimport sys\nimport traceback\n\n# top-level alias\n# tvm._ffi\nfrom ._ffi.base import TVMError, __version__\nfrom ._ffi.runtime_ctypes import DataTypeCode, DataType\nfrom ._ffi import register_object, register_func, register_extension, get_global_func\n\n# top-level alias\n# tvm.runtime\nfrom .runtime.object import Object\nfrom .runtime.ndarray import context, cpu, gpu, opencl, cl, vulkan, metal, mtl\nfrom .runtime.ndarray import vpi, rocm, ext_dev, micro_dev, hexagon\nfrom .runtime import ndarray as nd\n\n# tvm.error\nfrom . import error\n\n# tvm.ir\nfrom .ir import IRModule\nfrom .ir import transform\nfrom .ir import container\nfrom . import ir\n\n# tvm.tir\nfrom . import tir\n\n# tvm.target\nfrom . import target\n\n# tvm.te\nfrom . import te\n\n# tvm.testing\nfrom . import testing\n\n# tvm.driver\nfrom .driver import build, lower\n\n# tvm.parser\nfrom . import parser\n\n# tvm tir hybrid script\nfrom . import hybrid\n\n# others\nfrom . import arith\n\n# support infra\nfrom . 
import support\n\n# Contrib initializers\nfrom .contrib import rocm as _rocm, nvcc as _nvcc, sdaccel as _sdaccel\n\n\ndef tvm_wrap_excepthook(exception_hook):\n \"\"\"Wrap given excepthook with TVM additional work.\"\"\"\n\n def wrapper(exctype, value, trbk):\n \"\"\"Clean subprocesses when TVM is interrupted.\"\"\"\n exception_hook(exctype, value, trbk)\n if hasattr(multiprocessing, 'active_children'):\n # pylint: disable=not-callable\n for p in multiprocessing.active_children():\n p.terminate()\n\n return wrapper\n\n\nsys.excepthook = tvm_wrap_excepthook(sys.excepthook)\n", "path": "python/tvm/__init__.py"}]} | 1,561 | 87 |
gh_patches_debug_1023 | rasdani/github-patches | git_diff | pyca__cryptography-4037 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in HKDF?
I think the computation of [`max_length`](https://github.com/pyca/cryptography/blob/66460d8f62b3f27a009bb61be6ce7675c8451b6e/src/cryptography/hazmat/primitives/kdf/hkdf.py#L70) in `src/cryptography/hazmat/primitives/kdf/hkdf.py` is wrong.
[RFC5869](https://tools.ietf.org/html/rfc5869) states on page 3 that the input `L` of the HKDF-Expand function describes the "length of output keying material in octets (<= 255*HashLen)".
An octet consists of 8 bits.
Currently, `max_length` is computed as:
```
max_length = 255 * (algorithm.digest_size // 8)
```
The problem is that `algorithm.digest_size` already returns the size of the digest in bytes (there are 8 bits per byte). The division by 8 is therefore wrong, and `max_length` ends up unnecessarily small.
(The same applies to the computation of `salt` ([line 33](https://github.com/pyca/cryptography/blob/66460d8f62b3f27a009bb61be6ce7675c8451b6e/src/cryptography/hazmat/primitives/kdf/hkdf.py#L33)) in the case where `salt is None`.)
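A concrete check of the arithmetic for SHA-256, whose digest is 32 bytes (this snippet only illustrates the numbers and is not part of the library):

```python
from cryptography.hazmat.primitives import hashes

digest_size = hashes.SHA256().digest_size  # 32 bytes == 32 octets

buggy_max = 255 * (digest_size // 8)  # 255 * 4  == 1020
rfc_max = 255 * digest_size           # 255 * 32 == 8160, per RFC 5869

print(buggy_max, rfc_max)
```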
</issue>
<code>
[start of src/cryptography/hazmat/primitives/kdf/hkdf.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import six
8
9 from cryptography import utils
10 from cryptography.exceptions import (
11 AlreadyFinalized, InvalidKey, UnsupportedAlgorithm, _Reasons
12 )
13 from cryptography.hazmat.backends.interfaces import HMACBackend
14 from cryptography.hazmat.primitives import constant_time, hmac
15 from cryptography.hazmat.primitives.kdf import KeyDerivationFunction
16
17
18 @utils.register_interface(KeyDerivationFunction)
19 class HKDF(object):
20 def __init__(self, algorithm, length, salt, info, backend):
21 if not isinstance(backend, HMACBackend):
22 raise UnsupportedAlgorithm(
23 "Backend object does not implement HMACBackend.",
24 _Reasons.BACKEND_MISSING_INTERFACE
25 )
26
27 self._algorithm = algorithm
28
29 if not (salt is None or isinstance(salt, bytes)):
30 raise TypeError("salt must be bytes.")
31
32 if salt is None:
33 salt = b"\x00" * self._algorithm.digest_size
34
35 self._salt = salt
36
37 self._backend = backend
38
39 self._hkdf_expand = HKDFExpand(self._algorithm, length, info, backend)
40
41 def _extract(self, key_material):
42 h = hmac.HMAC(self._salt, self._algorithm, backend=self._backend)
43 h.update(key_material)
44 return h.finalize()
45
46 def derive(self, key_material):
47 if not isinstance(key_material, bytes):
48 raise TypeError("key_material must be bytes.")
49
50 return self._hkdf_expand.derive(self._extract(key_material))
51
52 def verify(self, key_material, expected_key):
53 if not constant_time.bytes_eq(self.derive(key_material), expected_key):
54 raise InvalidKey
55
56
57 @utils.register_interface(KeyDerivationFunction)
58 class HKDFExpand(object):
59 def __init__(self, algorithm, length, info, backend):
60 if not isinstance(backend, HMACBackend):
61 raise UnsupportedAlgorithm(
62 "Backend object does not implement HMACBackend.",
63 _Reasons.BACKEND_MISSING_INTERFACE
64 )
65
66 self._algorithm = algorithm
67
68 self._backend = backend
69
70 max_length = 255 * (algorithm.digest_size // 8)
71
72 if length > max_length:
73 raise ValueError(
74 "Can not derive keys larger than {0} octets.".format(
75 max_length
76 ))
77
78 self._length = length
79
80 if not (info is None or isinstance(info, bytes)):
81 raise TypeError("info must be bytes.")
82
83 if info is None:
84 info = b""
85
86 self._info = info
87
88 self._used = False
89
90 def _expand(self, key_material):
91 output = [b""]
92 counter = 1
93
94 while self._algorithm.digest_size * (len(output) - 1) < self._length:
95 h = hmac.HMAC(key_material, self._algorithm, backend=self._backend)
96 h.update(output[-1])
97 h.update(self._info)
98 h.update(six.int2byte(counter))
99 output.append(h.finalize())
100 counter += 1
101
102 return b"".join(output)[:self._length]
103
104 def derive(self, key_material):
105 if not isinstance(key_material, bytes):
106 raise TypeError("key_material must be bytes.")
107
108 if self._used:
109 raise AlreadyFinalized
110
111 self._used = True
112 return self._expand(key_material)
113
114 def verify(self, key_material, expected_key):
115 if not constant_time.bytes_eq(self.derive(key_material), expected_key):
116 raise InvalidKey
117
[end of src/cryptography/hazmat/primitives/kdf/hkdf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cryptography/hazmat/primitives/kdf/hkdf.py b/src/cryptography/hazmat/primitives/kdf/hkdf.py
--- a/src/cryptography/hazmat/primitives/kdf/hkdf.py
+++ b/src/cryptography/hazmat/primitives/kdf/hkdf.py
@@ -67,7 +67,7 @@
self._backend = backend
- max_length = 255 * (algorithm.digest_size // 8)
+ max_length = 255 * algorithm.digest_size
if length > max_length:
raise ValueError(
| {"golden_diff": "diff --git a/src/cryptography/hazmat/primitives/kdf/hkdf.py b/src/cryptography/hazmat/primitives/kdf/hkdf.py\n--- a/src/cryptography/hazmat/primitives/kdf/hkdf.py\n+++ b/src/cryptography/hazmat/primitives/kdf/hkdf.py\n@@ -67,7 +67,7 @@\n \n self._backend = backend\n \n- max_length = 255 * (algorithm.digest_size // 8)\n+ max_length = 255 * algorithm.digest_size\n \n if length > max_length:\n raise ValueError(\n", "issue": "Bug in HKDF?\nI think the computation of [`max_length`](https://github.com/pyca/cryptography/blob/66460d8f62b3f27a009bb61be6ce7675c8451b6e/src/cryptography/hazmat/primitives/kdf/hkdf.py#L70) in `src/cryptography/hazmat/primitives/kdf/hkdf.py` is wrong.\r\n\r\n[RFC5869](https://tools.ietf.org/html/rfc5869) states on page 3 that the input `L` of the HKDF-Expand function describes the \"length of output keying material in octets (<= 255*HashLen)\".\r\nAn octet consists of 8 bit. \r\n\r\nCurrently, `max_length` is computed as:\r\n\r\n```\r\nmax_length = 255 * (algorithm.digest_size // 8)\r\n```\r\n\r\nThe problem is, that `algorithm.digest_size` returns the size of the digest in bytes. (There are 8 bits per byte). Therefore, the division by 8 is wrong, and thus, `max_length` is unnecessarily small.\r\n\r\n(same applies for the computation of `salt` as well ([line 33](https://github.com/pyca/cryptography/blob/66460d8f62b3f27a009bb61be6ce7675c8451b6e/src/cryptography/hazmat/primitives/kdf/hkdf.py#L33)), in the case where `salt is None`)\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport six\n\nfrom cryptography import utils\nfrom cryptography.exceptions import (\n AlreadyFinalized, InvalidKey, UnsupportedAlgorithm, _Reasons\n)\nfrom cryptography.hazmat.backends.interfaces import HMACBackend\nfrom cryptography.hazmat.primitives import constant_time, hmac\nfrom cryptography.hazmat.primitives.kdf import KeyDerivationFunction\n\n\[email protected]_interface(KeyDerivationFunction)\nclass HKDF(object):\n def __init__(self, algorithm, length, salt, info, backend):\n if not isinstance(backend, HMACBackend):\n raise UnsupportedAlgorithm(\n \"Backend object does not implement HMACBackend.\",\n _Reasons.BACKEND_MISSING_INTERFACE\n )\n\n self._algorithm = algorithm\n\n if not (salt is None or isinstance(salt, bytes)):\n raise TypeError(\"salt must be bytes.\")\n\n if salt is None:\n salt = b\"\\x00\" * self._algorithm.digest_size\n\n self._salt = salt\n\n self._backend = backend\n\n self._hkdf_expand = HKDFExpand(self._algorithm, length, info, backend)\n\n def _extract(self, key_material):\n h = hmac.HMAC(self._salt, self._algorithm, backend=self._backend)\n h.update(key_material)\n return h.finalize()\n\n def derive(self, key_material):\n if not isinstance(key_material, bytes):\n raise TypeError(\"key_material must be bytes.\")\n\n return self._hkdf_expand.derive(self._extract(key_material))\n\n def verify(self, key_material, expected_key):\n if not constant_time.bytes_eq(self.derive(key_material), expected_key):\n raise InvalidKey\n\n\[email protected]_interface(KeyDerivationFunction)\nclass HKDFExpand(object):\n def __init__(self, algorithm, length, info, backend):\n if not isinstance(backend, HMACBackend):\n raise UnsupportedAlgorithm(\n \"Backend object does not implement HMACBackend.\",\n _Reasons.BACKEND_MISSING_INTERFACE\n 
)\n\n self._algorithm = algorithm\n\n self._backend = backend\n\n max_length = 255 * (algorithm.digest_size // 8)\n\n if length > max_length:\n raise ValueError(\n \"Can not derive keys larger than {0} octets.\".format(\n max_length\n ))\n\n self._length = length\n\n if not (info is None or isinstance(info, bytes)):\n raise TypeError(\"info must be bytes.\")\n\n if info is None:\n info = b\"\"\n\n self._info = info\n\n self._used = False\n\n def _expand(self, key_material):\n output = [b\"\"]\n counter = 1\n\n while self._algorithm.digest_size * (len(output) - 1) < self._length:\n h = hmac.HMAC(key_material, self._algorithm, backend=self._backend)\n h.update(output[-1])\n h.update(self._info)\n h.update(six.int2byte(counter))\n output.append(h.finalize())\n counter += 1\n\n return b\"\".join(output)[:self._length]\n\n def derive(self, key_material):\n if not isinstance(key_material, bytes):\n raise TypeError(\"key_material must be bytes.\")\n\n if self._used:\n raise AlreadyFinalized\n\n self._used = True\n return self._expand(key_material)\n\n def verify(self, key_material, expected_key):\n if not constant_time.bytes_eq(self.derive(key_material), expected_key):\n raise InvalidKey\n", "path": "src/cryptography/hazmat/primitives/kdf/hkdf.py"}]} | 1,943 | 131 |
gh_patches_debug_17802 | rasdani/github-patches | git_diff | python-discord__bot-919 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use appropriate log level for exceptions from event listeners
From @SebastiaanZ:
> Finally, `discord.py` currently "hides" errors/tracebacks that happen in event listeners as we only have a custom error handler for commands. This isn't too bad locally, since `d.py` **prints** those exceptions to stderr, but it obviously means they'll never show up in Sentry, as they are **not actually logged** with the appropriate level.
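A minimal sketch of how those exceptions can be surfaced is to override `on_error` on the bot and log with the active exception info (this mirrors the approach taken in the accompanying diff, minus the Sentry scope and statsd details):

```python
import logging

from discord.ext import commands

log = logging.getLogger("bot")


class Bot(commands.Bot):
    async def on_error(self, event: str, *args, **kwargs) -> None:
        # discord.py invokes on_error from inside its except block, so
        # log.exception() still sees the active traceback.
        log.exception(f"Unhandled exception in {event}.")
```

The actual change below additionally tags the Sentry scope with the event name and bumps a statsd counter before logging.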
</issue>
<code>
[start of bot/bot.py]
1 import asyncio
2 import logging
3 import socket
4 import warnings
5 from typing import Optional
6
7 import aiohttp
8 import discord
9 from discord.ext import commands
10
11 from bot import DEBUG_MODE, api, constants
12 from bot.async_stats import AsyncStatsClient
13
14 log = logging.getLogger('bot')
15
16
17 class Bot(commands.Bot):
18 """A subclass of `discord.ext.commands.Bot` with an aiohttp session and an API client."""
19
20 def __init__(self, *args, **kwargs):
21 if "connector" in kwargs:
22 warnings.warn(
23 "If login() is called (or the bot is started), the connector will be overwritten "
24 "with an internal one"
25 )
26
27 super().__init__(*args, **kwargs)
28
29 self.http_session: Optional[aiohttp.ClientSession] = None
30 self.api_client = api.APIClient(loop=self.loop)
31
32 self._connector = None
33 self._resolver = None
34 self._guild_available = asyncio.Event()
35
36 statsd_url = constants.Stats.statsd_host
37
38 if DEBUG_MODE:
39 # Since statsd is UDP, there are no errors for sending to a down port.
40 # For this reason, setting the statsd host to 127.0.0.1 for development
41 # will effectively disable stats.
42 statsd_url = "127.0.0.1"
43
44 self.stats = AsyncStatsClient(self.loop, statsd_url, 8125, prefix="bot")
45
46 def add_cog(self, cog: commands.Cog) -> None:
47 """Adds a "cog" to the bot and logs the operation."""
48 super().add_cog(cog)
49 log.info(f"Cog loaded: {cog.qualified_name}")
50
51 def clear(self) -> None:
52 """
53 Clears the internal state of the bot and recreates the connector and sessions.
54
55 Will cause a DeprecationWarning if called outside a coroutine.
56 """
57 # Because discord.py recreates the HTTPClient session, may as well follow suit and recreate
58 # our own stuff here too.
59 self._recreate()
60 super().clear()
61
62 async def close(self) -> None:
63 """Close the Discord connection and the aiohttp session, connector, statsd client, and resolver."""
64 await super().close()
65
66 await self.api_client.close()
67
68 if self.http_session:
69 await self.http_session.close()
70
71 if self._connector:
72 await self._connector.close()
73
74 if self._resolver:
75 await self._resolver.close()
76
77 if self.stats._transport:
78 await self.stats._transport.close()
79
80 async def login(self, *args, **kwargs) -> None:
81 """Re-create the connector and set up sessions before logging into Discord."""
82 self._recreate()
83 await self.stats.create_socket()
84 await super().login(*args, **kwargs)
85
86 def _recreate(self) -> None:
87 """Re-create the connector, aiohttp session, and the APIClient."""
88 # Use asyncio for DNS resolution instead of threads so threads aren't spammed.
89 # Doesn't seem to have any state with regards to being closed, so no need to worry?
90 self._resolver = aiohttp.AsyncResolver()
91
92 # Its __del__ does send a warning but it doesn't always show up for some reason.
93 if self._connector and not self._connector._closed:
94 log.warning(
95 "The previous connector was not closed; it will remain open and be overwritten"
96 )
97
98 # Use AF_INET as its socket family to prevent HTTPS related problems both locally
99 # and in production.
100 self._connector = aiohttp.TCPConnector(
101 resolver=self._resolver,
102 family=socket.AF_INET,
103 )
104
105 # Client.login() will call HTTPClient.static_login() which will create a session using
106 # this connector attribute.
107 self.http.connector = self._connector
108
109 # Its __del__ does send a warning but it doesn't always show up for some reason.
110 if self.http_session and not self.http_session.closed:
111 log.warning(
112 "The previous session was not closed; it will remain open and be overwritten"
113 )
114
115 self.http_session = aiohttp.ClientSession(connector=self._connector)
116 self.api_client.recreate(force=True, connector=self._connector)
117
118 async def on_guild_available(self, guild: discord.Guild) -> None:
119 """
120 Set the internal guild available event when constants.Guild.id becomes available.
121
122 If the cache appears to still be empty (no members, no channels, or no roles), the event
123 will not be set.
124 """
125 if guild.id != constants.Guild.id:
126 return
127
128 if not guild.roles or not guild.members or not guild.channels:
129 msg = "Guild available event was dispatched but the cache appears to still be empty!"
130 log.warning(msg)
131
132 try:
133 webhook = await self.fetch_webhook(constants.Webhooks.dev_log)
134 except discord.HTTPException as e:
135 log.error(f"Failed to fetch webhook to send empty cache warning: status {e.status}")
136 else:
137 await webhook.send(f"<@&{constants.Roles.admin}> {msg}")
138
139 return
140
141 self._guild_available.set()
142
143 async def on_guild_unavailable(self, guild: discord.Guild) -> None:
144 """Clear the internal guild available event when constants.Guild.id becomes unavailable."""
145 if guild.id != constants.Guild.id:
146 return
147
148 self._guild_available.clear()
149
150 async def wait_until_guild_available(self) -> None:
151 """
152 Wait until the constants.Guild.id guild is available (and the cache is ready).
153
154 The on_ready event is inadequate because it only waits 2 seconds for a GUILD_CREATE
155 gateway event before giving up and thus not populating the cache for unavailable guilds.
156 """
157 await self._guild_available.wait()
158
[end of bot/bot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bot/bot.py b/bot/bot.py
--- a/bot/bot.py
+++ b/bot/bot.py
@@ -7,6 +7,7 @@
import aiohttp
import discord
from discord.ext import commands
+from sentry_sdk import push_scope
from bot import DEBUG_MODE, api, constants
from bot.async_stats import AsyncStatsClient
@@ -155,3 +156,14 @@
gateway event before giving up and thus not populating the cache for unavailable guilds.
"""
await self._guild_available.wait()
+
+ async def on_error(self, event: str, *args, **kwargs) -> None:
+ """Log errors raised in event listeners rather than printing them to stderr."""
+ self.stats.incr(f"errors.event.{event}")
+
+ with push_scope() as scope:
+ scope.set_tag("event", event)
+ scope.set_extra("args", args)
+ scope.set_extra("kwargs", kwargs)
+
+ log.exception(f"Unhandled exception in {event}.")
| {"golden_diff": "diff --git a/bot/bot.py b/bot/bot.py\n--- a/bot/bot.py\n+++ b/bot/bot.py\n@@ -7,6 +7,7 @@\n import aiohttp\n import discord\n from discord.ext import commands\n+from sentry_sdk import push_scope\n \n from bot import DEBUG_MODE, api, constants\n from bot.async_stats import AsyncStatsClient\n@@ -155,3 +156,14 @@\n gateway event before giving up and thus not populating the cache for unavailable guilds.\n \"\"\"\n await self._guild_available.wait()\n+\n+ async def on_error(self, event: str, *args, **kwargs) -> None:\n+ \"\"\"Log errors raised in event listeners rather than printing them to stderr.\"\"\"\n+ self.stats.incr(f\"errors.event.{event}\")\n+\n+ with push_scope() as scope:\n+ scope.set_tag(\"event\", event)\n+ scope.set_extra(\"args\", args)\n+ scope.set_extra(\"kwargs\", kwargs)\n+\n+ log.exception(f\"Unhandled exception in {event}.\")\n", "issue": "Use appropriate log level for exceptions from event listeners\nFrom @SebastiaanZ:\r\n\r\n> Finally, `discord.py` currently \"hides\" errors/tracebacks that happen in event listeners as we only have a custom error handler for commands. This isn't too bad locally, since `d.py` **prints** those exceptions to stderr, but it obviously means they'll never show up in Sentry, as they are **not actually logged** with the appropriate level.\n", "before_files": [{"content": "import asyncio\nimport logging\nimport socket\nimport warnings\nfrom typing import Optional\n\nimport aiohttp\nimport discord\nfrom discord.ext import commands\n\nfrom bot import DEBUG_MODE, api, constants\nfrom bot.async_stats import AsyncStatsClient\n\nlog = logging.getLogger('bot')\n\n\nclass Bot(commands.Bot):\n \"\"\"A subclass of `discord.ext.commands.Bot` with an aiohttp session and an API client.\"\"\"\n\n def __init__(self, *args, **kwargs):\n if \"connector\" in kwargs:\n warnings.warn(\n \"If login() is called (or the bot is started), the connector will be overwritten \"\n \"with an internal one\"\n )\n\n super().__init__(*args, **kwargs)\n\n self.http_session: Optional[aiohttp.ClientSession] = None\n self.api_client = api.APIClient(loop=self.loop)\n\n self._connector = None\n self._resolver = None\n self._guild_available = asyncio.Event()\n\n statsd_url = constants.Stats.statsd_host\n\n if DEBUG_MODE:\n # Since statsd is UDP, there are no errors for sending to a down port.\n # For this reason, setting the statsd host to 127.0.0.1 for development\n # will effectively disable stats.\n statsd_url = \"127.0.0.1\"\n\n self.stats = AsyncStatsClient(self.loop, statsd_url, 8125, prefix=\"bot\")\n\n def add_cog(self, cog: commands.Cog) -> None:\n \"\"\"Adds a \"cog\" to the bot and logs the operation.\"\"\"\n super().add_cog(cog)\n log.info(f\"Cog loaded: {cog.qualified_name}\")\n\n def clear(self) -> None:\n \"\"\"\n Clears the internal state of the bot and recreates the connector and sessions.\n\n Will cause a DeprecationWarning if called outside a coroutine.\n \"\"\"\n # Because discord.py recreates the HTTPClient session, may as well follow suit and recreate\n # our own stuff here too.\n self._recreate()\n super().clear()\n\n async def close(self) -> None:\n \"\"\"Close the Discord connection and the aiohttp session, connector, statsd client, and resolver.\"\"\"\n await super().close()\n\n await self.api_client.close()\n\n if self.http_session:\n await self.http_session.close()\n\n if self._connector:\n await self._connector.close()\n\n if self._resolver:\n await self._resolver.close()\n\n if self.stats._transport:\n await 
self.stats._transport.close()\n\n async def login(self, *args, **kwargs) -> None:\n \"\"\"Re-create the connector and set up sessions before logging into Discord.\"\"\"\n self._recreate()\n await self.stats.create_socket()\n await super().login(*args, **kwargs)\n\n def _recreate(self) -> None:\n \"\"\"Re-create the connector, aiohttp session, and the APIClient.\"\"\"\n # Use asyncio for DNS resolution instead of threads so threads aren't spammed.\n # Doesn't seem to have any state with regards to being closed, so no need to worry?\n self._resolver = aiohttp.AsyncResolver()\n\n # Its __del__ does send a warning but it doesn't always show up for some reason.\n if self._connector and not self._connector._closed:\n log.warning(\n \"The previous connector was not closed; it will remain open and be overwritten\"\n )\n\n # Use AF_INET as its socket family to prevent HTTPS related problems both locally\n # and in production.\n self._connector = aiohttp.TCPConnector(\n resolver=self._resolver,\n family=socket.AF_INET,\n )\n\n # Client.login() will call HTTPClient.static_login() which will create a session using\n # this connector attribute.\n self.http.connector = self._connector\n\n # Its __del__ does send a warning but it doesn't always show up for some reason.\n if self.http_session and not self.http_session.closed:\n log.warning(\n \"The previous session was not closed; it will remain open and be overwritten\"\n )\n\n self.http_session = aiohttp.ClientSession(connector=self._connector)\n self.api_client.recreate(force=True, connector=self._connector)\n\n async def on_guild_available(self, guild: discord.Guild) -> None:\n \"\"\"\n Set the internal guild available event when constants.Guild.id becomes available.\n\n If the cache appears to still be empty (no members, no channels, or no roles), the event\n will not be set.\n \"\"\"\n if guild.id != constants.Guild.id:\n return\n\n if not guild.roles or not guild.members or not guild.channels:\n msg = \"Guild available event was dispatched but the cache appears to still be empty!\"\n log.warning(msg)\n\n try:\n webhook = await self.fetch_webhook(constants.Webhooks.dev_log)\n except discord.HTTPException as e:\n log.error(f\"Failed to fetch webhook to send empty cache warning: status {e.status}\")\n else:\n await webhook.send(f\"<@&{constants.Roles.admin}> {msg}\")\n\n return\n\n self._guild_available.set()\n\n async def on_guild_unavailable(self, guild: discord.Guild) -> None:\n \"\"\"Clear the internal guild available event when constants.Guild.id becomes unavailable.\"\"\"\n if guild.id != constants.Guild.id:\n return\n\n self._guild_available.clear()\n\n async def wait_until_guild_available(self) -> None:\n \"\"\"\n Wait until the constants.Guild.id guild is available (and the cache is ready).\n\n The on_ready event is inadequate because it only waits 2 seconds for a GUILD_CREATE\n gateway event before giving up and thus not populating the cache for unavailable guilds.\n \"\"\"\n await self._guild_available.wait()\n", "path": "bot/bot.py"}]} | 2,249 | 230 |
gh_patches_debug_2160 | rasdani/github-patches | git_diff | facebookresearch__hydra-1593 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug] Config composition error with latest version of OmegaConf
# 🐛 Bug
## Description
When using OmegaConf at commit 2dd15f9 (the first commit where this problem occurs), there are multiple Hydra test failures, for instance:
```
pytest "tests/test_basic_launcher.py::TestBasicLauncher::test_sweep_1_job[basic-overrides0]"
(...)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = __INVALID__, value = None
def validate_and_convert(self, value: Any) -> Any:
"""
Validates input and converts to canonical form
:param value: input value
:return: converted value ("100" may be converted to 100 for example)
"""
if value is None:
if self._is_optional():
return None
> raise ValidationError("Non optional field cannot be assigned None")
E hydra.errors.ConfigCompositionException
../omegaconf/omegaconf/nodes.py:55: ConfigCompositionException
```
## Checklist
- [X] I checked on the latest version of Hydra
- [X] I created a minimal repro (See [this](https://stackoverflow.com/help/minimal-reproducible-example) for tips).
## To reproduce
Use master branch of Hydra with OmegaConf's commit 2dd15f9
## Additional context
This might actually be an OmegaConf bug (I'm not sure).
</issue>
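A minimal sketch of the stricter element-type validation at play, assuming stock OmegaConf structured configs at or after the commit in question (the `callbacks` key below is only illustrative of a `None`-valued choice):
```
from dataclasses import dataclass, field
from typing import Dict

from omegaconf import OmegaConf
from omegaconf.errors import ValidationError


@dataclass
class RuntimeConf:
    # same element type as hydra.conf.RuntimeConf.choices before the fix
    choices: Dict[str, str] = field(default_factory=dict)


cfg = OmegaConf.structured(RuntimeConf)
try:
    # A deleted/optional default such as {"callbacks": None} ends up as a
    # None choice, which a non-optional str element type rejects.
    cfg.choices["callbacks"] = None
except ValidationError as err:
    print(err)  # e.g. "Non optional field cannot be assigned None"
```
This is consistent with the fix below, which widens `choices` to `Dict[str, Any]`.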
<code>
[start of hydra/conf/__init__.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 from dataclasses import dataclass, field
3 from typing import Any, Dict, List, Optional
4
5 from omegaconf import MISSING
6
7 from hydra.core.config_store import ConfigStore
8
9
10 @dataclass
11 class HelpConf:
12 app_name: str = MISSING
13 header: str = MISSING
14 footer: str = MISSING
15 template: str = MISSING
16
17
18 @dataclass
19 class HydraHelpConf:
20 hydra_help: str = MISSING
21 template: str = MISSING
22
23
24 @dataclass
25 class RunDir:
26 dir: str = MISSING
27
28
29 @dataclass
30 class SweepDir:
31 dir: str = MISSING
32 subdir: str = MISSING
33
34
35 @dataclass
36 class OverridesConf:
37 # Overrides for the hydra configuration
38 hydra: List[str] = field(default_factory=lambda: [])
39 # Overrides for the task configuration
40 task: List[str] = field(default_factory=lambda: [])
41
42
43 # job runtime information will be populated here
44 @dataclass
45 class JobConf:
46 # Job name, populated automatically unless specified by the user (in config or cli)
47 name: str = MISSING
48
49 # Populated automatically by Hydra.
50 # Concatenation of job overrides that can be used as a part
51 # of the directory name.
52 # This can be configured via hydra.job.config.override_dirname
53 override_dirname: str = MISSING
54
55 # Job ID in underlying scheduling system
56 id: str = MISSING
57
58 # Job number if job is a part of a sweep
59 num: int = MISSING
60
61 # The config name used by the job
62 config_name: Optional[str] = MISSING
63
64 # Environment variables to set remotely
65 env_set: Dict[str, str] = field(default_factory=dict)
66 # Environment variables to copy from the launching machine
67 env_copy: List[str] = field(default_factory=list)
68
69 # Job config
70 @dataclass
71 class JobConfig:
72 @dataclass
73 # configuration for the ${hydra.job.override_dirname} runtime variable
74 class OverrideDirname:
75 kv_sep: str = "="
76 item_sep: str = ","
77 exclude_keys: List[str] = field(default_factory=list)
78
79 override_dirname: OverrideDirname = OverrideDirname()
80
81 config: JobConfig = JobConfig()
82
83
84 @dataclass
85 class ConfigSourceInfo:
86 path: str
87 schema: str
88 provider: str
89
90
91 @dataclass
92 class RuntimeConf:
93 version: str = MISSING
94 cwd: str = MISSING
95 config_sources: List[ConfigSourceInfo] = MISSING
96
97 # Composition choices dictionary
98 choices: Dict[str, str] = field(default_factory=lambda: {})
99
100
101 @dataclass
102 class HydraConf:
103 defaults: List[Any] = field(
104 default_factory=lambda: [
105 {"output": "default"},
106 {"launcher": "basic"},
107 {"sweeper": "basic"},
108 {"help": "default"},
109 {"hydra_help": "default"},
110 {"hydra_logging": "default"},
111 {"job_logging": "default"},
112 {"callbacks": None},
113 # env specific overrides
114 {"env": "default"},
115 ]
116 )
117
118 # Elements to append to the config search path.
119 # Note: This can only be configured in the primary config.
120 searchpath: List[str] = field(default_factory=list)
121
122 # Normal run output configuration
123 run: RunDir = RunDir()
124 # Multi-run output configuration
125 sweep: SweepDir = SweepDir()
126 # Logging configuration for Hydra
127 hydra_logging: Any = MISSING
128 # Logging configuration for the job
129 job_logging: Any = MISSING
130
131 # Sweeper configuration
132 sweeper: Any = MISSING
133 # Launcher configuration
134 launcher: Any = MISSING
135 # Callbacks configuration
136 callbacks: Dict[str, Any] = field(default_factory=dict)
137
138 # Program Help template
139 help: HelpConf = HelpConf()
140 # Hydra's Help template
141 hydra_help: HydraHelpConf = HydraHelpConf()
142
143 # Output directory for produced configuration files and overrides.
144 # E.g., hydra.yaml, overrides.yaml will go here. Useful for debugging
145 # and extra context when looking at past runs.
146 # Setting to None will prevent the creation of the output subdir.
147 output_subdir: Optional[str] = ".hydra"
148
149 # Those lists will contain runtime overrides
150 overrides: OverridesConf = OverridesConf()
151
152 job: JobConf = JobConf()
153
154 # populated at runtime
155 runtime: RuntimeConf = RuntimeConf()
156
157 # Can be a boolean, string or a list of strings
158 # If a boolean, setting to true will set the log level for the root logger to debug
159 # If a string, it's interpreted as a the list [string]
160 # If a list, each element is interpreted as a logger to have logging level set to debug.
161 # Typical command lines to manipulate hydra.verbose:
162 # hydra.verbose=true
163 # hydra.verbose=[hydra,__main__]
164 # TODO: good use case for Union support in OmegaConf
165 verbose: Any = False
166
167
168 cs = ConfigStore.instance()
169
170 cs.store(
171 group="hydra",
172 name="config",
173 node=HydraConf(),
174 provider="hydra",
175 )
176
[end of hydra/conf/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hydra/conf/__init__.py b/hydra/conf/__init__.py
--- a/hydra/conf/__init__.py
+++ b/hydra/conf/__init__.py
@@ -95,7 +95,8 @@
config_sources: List[ConfigSourceInfo] = MISSING
# Composition choices dictionary
- choices: Dict[str, str] = field(default_factory=lambda: {})
+ # Ideally, the value type would be Union[str, List[str], None]
+ choices: Dict[str, Any] = field(default_factory=lambda: {})
@dataclass
| {"golden_diff": "diff --git a/hydra/conf/__init__.py b/hydra/conf/__init__.py\n--- a/hydra/conf/__init__.py\n+++ b/hydra/conf/__init__.py\n@@ -95,7 +95,8 @@\n config_sources: List[ConfigSourceInfo] = MISSING\n \n # Composition choices dictionary\n- choices: Dict[str, str] = field(default_factory=lambda: {})\n+ # Ideally, the value type would be Union[str, List[str], None]\n+ choices: Dict[str, Any] = field(default_factory=lambda: {})\n \n \n @dataclass\n", "issue": "[Bug] Config composition error with latest version of OmegaConf\n# \ud83d\udc1b Bug\r\n## Description\r\n\r\nWhen using OmegaConf at commit 2dd15f9 (first commit where this problem occurs), there are multiple Hydra tests failures, for instance:\r\n\r\n```\r\npytest \"tests/test_basic_launcher.py::TestBasicLauncher::test_sweep_1_job[basic-overrides0]\"\r\n(...)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = __INVALID__, value = None\r\n\r\n def validate_and_convert(self, value: Any) -> Any:\r\n \"\"\"\r\n Validates input and converts to canonical form\r\n :param value: input value\r\n :return: converted value (\"100\" may be converted to 100 for example)\r\n \"\"\"\r\n if value is None:\r\n if self._is_optional():\r\n return None\r\n> raise ValidationError(\"Non optional field cannot be assigned None\")\r\nE hydra.errors.ConfigCompositionException\r\n\r\n../omegaconf/omegaconf/nodes.py:55: ConfigCompositionException\r\n```\r\n\r\n## Checklist\r\n- [X] I checked on the latest version of Hydra\r\n- [X] I created a minimal repro (See [this](https://stackoverflow.com/help/minimal-reproducible-example) for tips).\r\n\r\n## To reproduce\r\n\r\nUse master branch of Hydra with OmegaConf's commit 2dd15f9\r\n\r\n## Additional context\r\n\r\nThis might actually be an OmegaConf bug (I'm not sure).\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, List, Optional\n\nfrom omegaconf import MISSING\n\nfrom hydra.core.config_store import ConfigStore\n\n\n@dataclass\nclass HelpConf:\n app_name: str = MISSING\n header: str = MISSING\n footer: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass HydraHelpConf:\n hydra_help: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass RunDir:\n dir: str = MISSING\n\n\n@dataclass\nclass SweepDir:\n dir: str = MISSING\n subdir: str = MISSING\n\n\n@dataclass\nclass OverridesConf:\n # Overrides for the hydra configuration\n hydra: List[str] = field(default_factory=lambda: [])\n # Overrides for the task configuration\n task: List[str] = field(default_factory=lambda: [])\n\n\n# job runtime information will be populated here\n@dataclass\nclass JobConf:\n # Job name, populated automatically unless specified by the user (in config or cli)\n name: str = MISSING\n\n # Populated automatically by Hydra.\n # Concatenation of job overrides that can be used as a part\n # of the directory name.\n # This can be configured via hydra.job.config.override_dirname\n override_dirname: str = MISSING\n\n # Job ID in underlying scheduling system\n id: str = MISSING\n\n # Job number if job is a part of a sweep\n num: int = MISSING\n\n # The config name used by the job\n config_name: Optional[str] = MISSING\n\n # Environment variables to set remotely\n env_set: Dict[str, str] = field(default_factory=dict)\n # Environment variables to copy from the launching machine\n env_copy: List[str] = field(default_factory=list)\n\n # Job config\n @dataclass\n class JobConfig:\n @dataclass\n # configuration for the ${hydra.job.override_dirname} runtime variable\n class OverrideDirname:\n kv_sep: str = \"=\"\n item_sep: str = \",\"\n exclude_keys: List[str] = field(default_factory=list)\n\n override_dirname: OverrideDirname = OverrideDirname()\n\n config: JobConfig = JobConfig()\n\n\n@dataclass\nclass ConfigSourceInfo:\n path: str\n schema: str\n provider: str\n\n\n@dataclass\nclass RuntimeConf:\n version: str = MISSING\n cwd: str = MISSING\n config_sources: List[ConfigSourceInfo] = MISSING\n\n # Composition choices dictionary\n choices: Dict[str, str] = field(default_factory=lambda: {})\n\n\n@dataclass\nclass HydraConf:\n defaults: List[Any] = field(\n default_factory=lambda: [\n {\"output\": \"default\"},\n {\"launcher\": \"basic\"},\n {\"sweeper\": \"basic\"},\n {\"help\": \"default\"},\n {\"hydra_help\": \"default\"},\n {\"hydra_logging\": \"default\"},\n {\"job_logging\": \"default\"},\n {\"callbacks\": None},\n # env specific overrides\n {\"env\": \"default\"},\n ]\n )\n\n # Elements to append to the config search path.\n # Note: This can only be configured in the primary config.\n searchpath: List[str] = field(default_factory=list)\n\n # Normal run output configuration\n run: RunDir = RunDir()\n # Multi-run output configuration\n sweep: SweepDir = SweepDir()\n # Logging configuration for Hydra\n hydra_logging: Any = MISSING\n # Logging configuration for the job\n job_logging: Any = MISSING\n\n # Sweeper configuration\n sweeper: Any = MISSING\n # Launcher configuration\n launcher: Any = MISSING\n # Callbacks configuration\n callbacks: Dict[str, Any] = field(default_factory=dict)\n\n # Program Help template\n help: HelpConf = HelpConf()\n # Hydra's Help template\n hydra_help: HydraHelpConf = HydraHelpConf()\n\n # Output directory for produced configuration files and overrides.\n # E.g., hydra.yaml, overrides.yaml will go 
here. Useful for debugging\n # and extra context when looking at past runs.\n # Setting to None will prevent the creation of the output subdir.\n output_subdir: Optional[str] = \".hydra\"\n\n # Those lists will contain runtime overrides\n overrides: OverridesConf = OverridesConf()\n\n job: JobConf = JobConf()\n\n # populated at runtime\n runtime: RuntimeConf = RuntimeConf()\n\n # Can be a boolean, string or a list of strings\n # If a boolean, setting to true will set the log level for the root logger to debug\n # If a string, it's interpreted as a the list [string]\n # If a list, each element is interpreted as a logger to have logging level set to debug.\n # Typical command lines to manipulate hydra.verbose:\n # hydra.verbose=true\n # hydra.verbose=[hydra,__main__]\n # TODO: good use case for Union support in OmegaConf\n verbose: Any = False\n\n\ncs = ConfigStore.instance()\n\ncs.store(\n group=\"hydra\",\n name=\"config\",\n node=HydraConf(),\n provider=\"hydra\",\n)\n", "path": "hydra/conf/__init__.py"}]} | 2,498 | 130 |
gh_patches_debug_18475 | rasdani/github-patches | git_diff | getnikola__nikola-1957 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
handle include tag in mako templates
Currently templates used via include tags are not considered dependencies. It's not hard.
</issue>
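A rough sketch of the dependency scan the report asks for, assuming Mako's `parsetree.IncludeTag` node type (standalone function for illustration; the real change lands in `MakoTemplates.get_deps` below):
```
from mako import lexer, parsetree, util


def get_template_deps(filename):
    """Collect templates referenced via inherit, namespace and include tags."""
    lex = lexer.Lexer(text=util.read_file(filename), filename=filename)
    lex.parse()
    deps = []
    for n in lex.template.nodes:
        keyword = getattr(n, 'keyword', None)
        # <%include file="..."/> parses into a parsetree.IncludeTag node
        if keyword in ["inherit", "namespace"] or isinstance(n, parsetree.IncludeTag):
            deps.append(n.attributes['file'])
    return deps
```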
<code>
[start of nikola/plugins/template/mako.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2015 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Mako template handler."""
28
29 from __future__ import unicode_literals, print_function, absolute_import
30 import os
31 import shutil
32 import sys
33 import tempfile
34
35 from mako import util, lexer
36 from mako.lookup import TemplateLookup
37 from mako.template import Template
38 from markupsafe import Markup # It's ok, Mako requires it
39
40 from nikola.plugin_categories import TemplateSystem
41 from nikola.utils import makedirs, get_logger, STDERR_HANDLER
42
43 LOGGER = get_logger('mako', STDERR_HANDLER)
44
45
46 class MakoTemplates(TemplateSystem):
47
48 """Support for Mako templates."""
49
50 name = "mako"
51
52 lookup = None
53 cache = {}
54 filters = {}
55 directories = []
56 cache_dir = None
57
58 def get_deps(self, filename):
59 """Get dependencies for a template (internal function)."""
60 text = util.read_file(filename)
61 lex = lexer.Lexer(text=text, filename=filename)
62 lex.parse()
63
64 deps = []
65 for n in lex.template.nodes:
66 keyword = getattr(n, 'keyword', None)
67 if keyword in ["inherit", "namespace"]:
68 deps.append(n.attributes['file'])
69 # TODO: include tags are not handled
70 return deps
71
72 def set_directories(self, directories, cache_folder):
73 """Create a new template lookup with set directories."""
74 cache_dir = os.path.join(cache_folder, '.mako.tmp')
75 # Workaround for a Mako bug, Issue #825
76 if sys.version_info[0] == 2:
77 try:
78 os.path.abspath(cache_dir).decode('ascii')
79 except UnicodeEncodeError:
80 cache_dir = tempfile.mkdtemp()
81 LOGGER.warning('Because of a Mako bug, setting cache_dir to {0}'.format(cache_dir))
82 if os.path.exists(cache_dir):
83 shutil.rmtree(cache_dir)
84 self.directories = directories
85 self.cache_dir = cache_dir
86 self.create_lookup()
87
88 def inject_directory(self, directory):
89 """Add a directory to the lookup and recreate it if it's not there yet."""
90 if directory not in self.directories:
91 self.directories.append(directory)
92 self.create_lookup()
93
94 def create_lookup(self):
95 """Create a template lookup."""
96 self.lookup = TemplateLookup(
97 directories=self.directories,
98 module_directory=self.cache_dir,
99 output_encoding='utf-8')
100
101 def set_site(self, site):
102 """Set the Nikola site."""
103 self.site = site
104 self.filters.update(self.site.config['TEMPLATE_FILTERS'])
105
106 def render_template(self, template_name, output_name, context):
107 """Render the template into output_name using context."""
108 context['striphtml'] = striphtml
109 template = self.lookup.get_template(template_name)
110 data = template.render_unicode(**context)
111 if output_name is not None:
112 makedirs(os.path.dirname(output_name))
113 with open(output_name, 'w+') as output:
114 output.write(data)
115 return data
116
117 def render_template_to_string(self, template, context):
118 """Render template to a string using context."""
119 context.update(self.filters)
120 return Template(template).render(**context)
121
122 def template_deps(self, template_name):
123 """Generate list of dependencies for a template."""
124 # We can cache here because dependencies should
125 # not change between runs
126 if self.cache.get(template_name, None) is None:
127 template = self.lookup.get_template(template_name)
128 dep_filenames = self.get_deps(template.filename)
129 deps = [template.filename]
130 for fname in dep_filenames:
131 deps += self.template_deps(fname)
132 self.cache[template_name] = tuple(deps)
133 return list(self.cache[template_name])
134
135
136 def striphtml(text):
137 """Strip HTML tags from text."""
138 return Markup(text).striptags()
139
[end of nikola/plugins/template/mako.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nikola/plugins/template/mako.py b/nikola/plugins/template/mako.py
--- a/nikola/plugins/template/mako.py
+++ b/nikola/plugins/template/mako.py
@@ -32,7 +32,7 @@
import sys
import tempfile
-from mako import util, lexer
+from mako import util, lexer, parsetree
from mako.lookup import TemplateLookup
from mako.template import Template
from markupsafe import Markup # It's ok, Mako requires it
@@ -64,9 +64,8 @@
deps = []
for n in lex.template.nodes:
keyword = getattr(n, 'keyword', None)
- if keyword in ["inherit", "namespace"]:
+ if keyword in ["inherit", "namespace"] or isinstance(n, parsetree.IncludeTag):
deps.append(n.attributes['file'])
- # TODO: include tags are not handled
return deps
def set_directories(self, directories, cache_folder):
| {"golden_diff": "diff --git a/nikola/plugins/template/mako.py b/nikola/plugins/template/mako.py\n--- a/nikola/plugins/template/mako.py\n+++ b/nikola/plugins/template/mako.py\n@@ -32,7 +32,7 @@\n import sys\n import tempfile\n \n-from mako import util, lexer\n+from mako import util, lexer, parsetree\n from mako.lookup import TemplateLookup\n from mako.template import Template\n from markupsafe import Markup # It's ok, Mako requires it\n@@ -64,9 +64,8 @@\n deps = []\n for n in lex.template.nodes:\n keyword = getattr(n, 'keyword', None)\n- if keyword in [\"inherit\", \"namespace\"]:\n+ if keyword in [\"inherit\", \"namespace\"] or isinstance(n, parsetree.IncludeTag):\n deps.append(n.attributes['file'])\n- # TODO: include tags are not handled\n return deps\n \n def set_directories(self, directories, cache_folder):\n", "issue": "handle include tag in mako templates\nCurrently templates used via include tags are not considered dependencies. It's not hard.\n\nhandle include tag in mako templates\nCurrently templates used via include tags are not considered dependencies. It's not hard.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2015 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Mako template handler.\"\"\"\n\nfrom __future__ import unicode_literals, print_function, absolute_import\nimport os\nimport shutil\nimport sys\nimport tempfile\n\nfrom mako import util, lexer\nfrom mako.lookup import TemplateLookup\nfrom mako.template import Template\nfrom markupsafe import Markup # It's ok, Mako requires it\n\nfrom nikola.plugin_categories import TemplateSystem\nfrom nikola.utils import makedirs, get_logger, STDERR_HANDLER\n\nLOGGER = get_logger('mako', STDERR_HANDLER)\n\n\nclass MakoTemplates(TemplateSystem):\n\n \"\"\"Support for Mako templates.\"\"\"\n\n name = \"mako\"\n\n lookup = None\n cache = {}\n filters = {}\n directories = []\n cache_dir = None\n\n def get_deps(self, filename):\n \"\"\"Get dependencies for a template (internal function).\"\"\"\n text = util.read_file(filename)\n lex = lexer.Lexer(text=text, filename=filename)\n lex.parse()\n\n deps = []\n for n in lex.template.nodes:\n keyword = getattr(n, 'keyword', None)\n if keyword in [\"inherit\", \"namespace\"]:\n deps.append(n.attributes['file'])\n # TODO: include tags are not handled\n return deps\n\n def set_directories(self, directories, cache_folder):\n \"\"\"Create a new template lookup with set directories.\"\"\"\n cache_dir = os.path.join(cache_folder, '.mako.tmp')\n # Workaround for a Mako bug, Issue #825\n if sys.version_info[0] == 2:\n try:\n os.path.abspath(cache_dir).decode('ascii')\n except UnicodeEncodeError:\n cache_dir = tempfile.mkdtemp()\n LOGGER.warning('Because of a Mako bug, setting cache_dir to {0}'.format(cache_dir))\n if os.path.exists(cache_dir):\n shutil.rmtree(cache_dir)\n self.directories = directories\n self.cache_dir = cache_dir\n self.create_lookup()\n\n def inject_directory(self, directory):\n \"\"\"Add a directory to the lookup and recreate it if it's not there yet.\"\"\"\n if directory not in self.directories:\n self.directories.append(directory)\n self.create_lookup()\n\n def create_lookup(self):\n \"\"\"Create a template lookup.\"\"\"\n self.lookup = TemplateLookup(\n directories=self.directories,\n module_directory=self.cache_dir,\n output_encoding='utf-8')\n\n def set_site(self, site):\n \"\"\"Set the Nikola site.\"\"\"\n self.site = site\n self.filters.update(self.site.config['TEMPLATE_FILTERS'])\n\n def render_template(self, template_name, output_name, context):\n \"\"\"Render the template into output_name using context.\"\"\"\n context['striphtml'] = striphtml\n template = self.lookup.get_template(template_name)\n data = template.render_unicode(**context)\n if output_name is not None:\n makedirs(os.path.dirname(output_name))\n with open(output_name, 'w+') as output:\n output.write(data)\n return data\n\n def render_template_to_string(self, template, context):\n \"\"\"Render template to a string using context.\"\"\"\n context.update(self.filters)\n return Template(template).render(**context)\n\n def template_deps(self, template_name):\n \"\"\"Generate list of dependencies for a template.\"\"\"\n # We can cache here because dependencies should\n # not change between runs\n if self.cache.get(template_name, None) is None:\n template = self.lookup.get_template(template_name)\n dep_filenames = self.get_deps(template.filename)\n deps = [template.filename]\n for fname in dep_filenames:\n deps += 
self.template_deps(fname)\n self.cache[template_name] = tuple(deps)\n return list(self.cache[template_name])\n\n\ndef striphtml(text):\n \"\"\"Strip HTML tags from text.\"\"\"\n return Markup(text).striptags()\n", "path": "nikola/plugins/template/mako.py"}]} | 1,962 | 217 |
gh_patches_debug_31198 | rasdani/github-patches | git_diff | cloudtools__troposphere-654 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AutoScalingRollingUpdate with 'If' AWS helper condition fails during validate
Hi, in an AutoScalingGroup UpdatePolicy, if I have an AutoScalingRollingUpdate wrapped in an If AWS helper condition, it fails during validate with:
##
AttributeError: 'If' object has no attribute 'MinInstancesInService'
##
Example code:
##
AutoScalingRollingUpdate=If(
'RollingUpdate',
pol.AutoScalingRollingUpdate(
MaxBatchSize=get_mapped_value('RollingUpdateMaxBatchSize'),
MinInstancesInService=get_mapped_value('RollingUpdateMinInstancesInService'),
MinSuccessfulInstancesPercent=get_mapped_value('RollingUpdateMinSuccessfulInstancesPercent'),
PauseTime=get_mapped_value('RollingUpdatePauseTime'),
SuspendProcesses=[
'HealthCheck',
'ReplaceUnhealthy',
'AlarmNotification',
'ScheduledActions'
],
WaitOnResourceSignals=True
),
Ref('AWS::NoValue')
),
##
To solve the issue, the validate function in troposphere/autoscaling.py should be changed as follows:
```
diff --git a/troposphere/autoscaling.py b/troposphere/autoscaling.py
index cc5873f..8f7a43d 100644
--- a/troposphere/autoscaling.py
+++ b/troposphere/autoscaling.py
@@ -136,7 +136,8 @@ class AutoScalingGroup(AWSObject):
update_policy = self.resource['UpdatePolicy']
if (not isinstance(update_policy, AWSHelperFn) and
- 'AutoScalingRollingUpdate' in update_policy.properties):
+ 'AutoScalingRollingUpdate' in update_policy.properties and
+ not isinstance(update_policy.AutoScalingRollingUpdate, AWSHelperFn)):
rolling_update = update_policy.AutoScalingRollingUpdate
isMinNoCheck = isinstance(
```
##
Regards, Alberto.
</issue>
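A short sketch of the guard being proposed, assuming the `AWSHelperFn`, `If`, `FindInMap` and `Ref` classes that `troposphere.autoscaling` already imports (the function name is illustrative):
```
from troposphere import AWSHelperFn, FindInMap, If, Ref


def check_min_instances(update_policy, max_size):
    rolling = getattr(update_policy, 'AutoScalingRollingUpdate', None)
    # If the whole rolling-update block is an AWSHelperFn (e.g. an If()),
    # its attributes are only resolved by CloudFormation, so skip the check.
    if rolling is None or isinstance(rolling, AWSHelperFn):
        return
    if isinstance(rolling.MinInstancesInService, (FindInMap, Ref)):
        return
    if isinstance(max_size, (If, FindInMap, Ref)):
        return
    if int(rolling.MinInstancesInService) >= int(max_size):
        raise ValueError("MinInstancesInService must be less than MaxSize")
```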
<code>
[start of troposphere/autoscaling.py]
1 # Copyright (c) 2012-2013, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6 from . import AWSHelperFn, AWSObject, AWSProperty, If, FindInMap, Ref
7 from .validators import boolean, integer
8 from . import cloudformation
9
10
11 EC2_INSTANCE_LAUNCH = "autoscaling:EC2_INSTANCE_LAUNCH"
12 EC2_INSTANCE_LAUNCH_ERROR = "autoscaling:EC2_INSTANCE_LAUNCH_ERROR"
13 EC2_INSTANCE_TERMINATE = "autoscaling:EC2_INSTANCE_TERMINATE"
14 EC2_INSTANCE_TERMINATE_ERROR = "autoscaling:EC2_INSTANCE_TERMINATE_ERROR"
15 TEST_NOTIFICATION = "autoscaling:TEST_NOTIFICATION"
16
17 # Termination Policy constants
18 Default = 'Default'
19 OldestInstance = 'OldestInstance'
20 NewestInstance = 'NewestInstance'
21 OldestLaunchConfiguration = 'OldestLaunchConfiguration'
22 ClosestToNextInstanceHour = 'ClosestToNextInstanceHour'
23
24
25 class Tag(AWSHelperFn):
26 def __init__(self, key, value, propogate):
27 self.data = {
28 'Key': key,
29 'Value': value,
30 'PropagateAtLaunch': propogate,
31 }
32
33 def JSONrepr(self):
34 return self.data
35
36
37 class Tags(AWSHelperFn):
38 defaultPropagateAtLaunch = True
39 manyType = [type([]), type(())]
40
41 def __init__(self, **kwargs):
42 self.tags = []
43 for k, v in sorted(kwargs.iteritems()):
44 if type(v) in self.manyType:
45 propagate = str(v[1]).lower()
46 v = v[0]
47 else:
48 propagate = str(self.defaultPropagateAtLaunch).lower()
49 self.tags.append({
50 'Key': k,
51 'Value': v,
52 'PropagateAtLaunch': propagate,
53 })
54
55 # append tags to list
56 def __add__(self, newtags):
57 newtags.tags = self.tags + newtags.tags
58 return newtags
59
60 def JSONrepr(self):
61 return self.tags
62
63
64 class NotificationConfigurations(AWSProperty):
65 props = {
66 'TopicARN': (basestring, True),
67 'NotificationTypes': (list, True),
68 }
69
70
71 class MetricsCollection(AWSProperty):
72 props = {
73 'Granularity': (basestring, True),
74 'Metrics': (list, False),
75 }
76
77
78 class Metadata(AWSHelperFn):
79 def __init__(self, init, authentication=None):
80 self.validate(init, authentication)
81 # get keys and values from init and authentication
82
83 # if there's only one data point, then we know its the default
84 # cfn-init; where the key is 'config'
85 if len(init.data) == 1:
86 initKey, initValue = init.data.popitem()
87 self.data = {initKey: initValue}
88 else:
89 self.data = init.data
90
91 if authentication:
92 authKey, authValue = authentication.data.popitem()
93 self.data[authKey] = authValue
94
95 def validate(self, init, authentication):
96 if not isinstance(init, cloudformation.Init):
97 raise ValueError(
98 'init must be of type cloudformation.Init'
99 )
100
101 is_instance = isinstance(authentication, cloudformation.Authentication)
102 if authentication and not is_instance:
103 raise ValueError(
104 'authentication must be of type cloudformation.Authentication'
105 )
106
107 def JSONrepr(self):
108 return self.data
109
110
111 class AutoScalingGroup(AWSObject):
112 resource_type = "AWS::AutoScaling::AutoScalingGroup"
113
114 props = {
115 'AvailabilityZones': (list, False),
116 'Cooldown': (integer, False),
117 'DesiredCapacity': (integer, False),
118 'HealthCheckGracePeriod': (integer, False),
119 'HealthCheckType': (basestring, False),
120 'InstanceId': (basestring, False),
121 'LaunchConfigurationName': (basestring, False),
122 'LoadBalancerNames': (list, False),
123 'MaxSize': (integer, True),
124 'MetricsCollection': ([MetricsCollection], False),
125 'MinSize': (integer, True),
126 'NotificationConfigurations': ([NotificationConfigurations], False),
127 'PlacementGroup': (basestring, False),
128 'Tags': (list, False),
129 'TargetGroupARNs': ([basestring], False),
130 'TerminationPolicies': ([basestring], False),
131 'VPCZoneIdentifier': (list, False),
132 }
133
134 def validate(self):
135 if 'UpdatePolicy' in self.resource:
136 update_policy = self.resource['UpdatePolicy']
137
138 if (not isinstance(update_policy, AWSHelperFn) and
139 'AutoScalingRollingUpdate' in update_policy.properties):
140 rolling_update = update_policy.AutoScalingRollingUpdate
141
142 isMinNoCheck = isinstance(
143 rolling_update.MinInstancesInService,
144 (FindInMap, Ref)
145 )
146 isMaxNoCheck = isinstance(self.MaxSize, (If, FindInMap, Ref))
147
148 if not (isMinNoCheck or isMaxNoCheck):
149 maxCount = int(self.MaxSize)
150 minCount = int(rolling_update.MinInstancesInService)
151
152 if minCount >= maxCount:
153 raise ValueError(
154 "The UpdatePolicy attribute "
155 "MinInstancesInService must be less than the "
156 "autoscaling group's MaxSize")
157
158 launch_config = self.properties.get('LaunchConfigurationName')
159 instance_id = self.properties.get('InstanceId')
160 if launch_config and instance_id:
161 raise ValueError("LaunchConfigurationName and InstanceId "
162 "are mutually exclusive.")
163 if not launch_config and not instance_id:
164 raise ValueError("Must specify either LaunchConfigurationName or "
165 "InstanceId: http://docs.aws.amazon.com/AWSCloud"
166 "Formation/latest/UserGuide/aws-properties-as-gr"
167 "oup.html#cfn-as-group-instanceid")
168
169 availability_zones = self.properties.get('AvailabilityZones')
170 vpc_zone_identifier = self.properties.get('VPCZoneIdentifier')
171 if not availability_zones and not vpc_zone_identifier:
172 raise ValueError("Must specify AvailabilityZones and/or "
173 "VPCZoneIdentifier: http://docs.aws.amazon.com/A"
174 "WSCloudFormation/latest/UserGuide/aws-propertie"
175 "s-as-group.html#cfn-as-group-vpczoneidentifier")
176 return True
177
178
179 class LaunchConfiguration(AWSObject):
180 resource_type = "AWS::AutoScaling::LaunchConfiguration"
181
182 props = {
183 'AssociatePublicIpAddress': (boolean, False),
184 'BlockDeviceMappings': (list, False),
185 'ClassicLinkVPCId': (basestring, False),
186 'ClassicLinkVPCSecurityGroups': ([basestring], False),
187 'EbsOptimized': (boolean, False),
188 'IamInstanceProfile': (basestring, False),
189 'ImageId': (basestring, True),
190 'InstanceId': (basestring, False),
191 'InstanceMonitoring': (boolean, False),
192 'InstanceType': (basestring, True),
193 'KernelId': (basestring, False),
194 'KeyName': (basestring, False),
195 'Metadata': (Metadata, False),
196 'PlacementTenancy': (basestring, False),
197 'RamDiskId': (basestring, False),
198 'SecurityGroups': (list, False),
199 'SpotPrice': (basestring, False),
200 'UserData': (basestring, False),
201 }
202
203
204 class StepAdjustments(AWSProperty):
205 props = {
206 'MetricIntervalLowerBound': (integer, False),
207 'MetricIntervalUpperBound': (integer, False),
208 'ScalingAdjustment': (integer, True),
209 }
210
211
212 class ScalingPolicy(AWSObject):
213 resource_type = "AWS::AutoScaling::ScalingPolicy"
214
215 props = {
216 'AdjustmentType': (basestring, True),
217 'AutoScalingGroupName': (basestring, True),
218 'Cooldown': (integer, False),
219 'EstimatedInstanceWarmup': (integer, False),
220 'MetricAggregationType': (basestring, False),
221 'MinAdjustmentMagnitude': (integer, False),
222 'PolicyType': (basestring, False),
223 'ScalingAdjustment': (integer, False),
224 'StepAdjustments': ([StepAdjustments], False),
225 }
226
227
228 class ScheduledAction(AWSObject):
229 resource_type = "AWS::AutoScaling::ScheduledAction"
230
231 props = {
232 'AutoScalingGroupName': (basestring, True),
233 'DesiredCapacity': (integer, False),
234 'EndTime': (basestring, False),
235 'MaxSize': (integer, False),
236 'MinSize': (integer, False),
237 'Recurrence': (basestring, False),
238 'StartTime': (basestring, False),
239 }
240
241
242 class LifecycleHook(AWSObject):
243 resource_type = "AWS::AutoScaling::LifecycleHook"
244
245 props = {
246 'AutoScalingGroupName': (basestring, True),
247 'DefaultResult': (basestring, False),
248 'HeartbeatTimeout': (integer, False),
249 'LifecycleHookName': (basestring, False),
250 'LifecycleTransition': (basestring, True),
251 'NotificationMetadata': (basestring, False),
252 'NotificationTargetARN': (basestring, True),
253 'RoleARN': (basestring, True),
254 }
255
256
257 class Trigger(AWSObject):
258 resource_type = "AWS::AutoScaling::Trigger"
259
260 props = {
261 'AutoScalingGroupName': (basestring, True),
262 'BreachDuration': (integer, True),
263 'Dimensions': (list, True),
264 'LowerBreachScaleIncrement': (integer, False),
265 'LowerThreshold': (integer, True),
266 'MetricName': (basestring, True),
267 'Namespace': (basestring, True),
268 'Period': (integer, True),
269 'Statistic': (basestring, True),
270 'Unit': (basestring, False),
271 'UpperBreachScaleIncrement': (integer, False),
272 'UpperThreshold': (integer, True),
273 }
274
275
276 class EBSBlockDevice(AWSProperty):
277 # http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig-blockdev-template.html
278 props = {
279 'DeleteOnTermination': (boolean, False),
280 'Encrypted': (boolean, False),
281 'Iops': (integer, False),
282 'SnapshotId': (basestring, False),
283 'VolumeSize': (integer, False),
284 'VolumeType': (basestring, False),
285 }
286
287
288 class BlockDeviceMapping(AWSProperty):
289 # http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig-blockdev-mapping.html
290 props = {
291 'DeviceName': (basestring, True),
292 'Ebs': (EBSBlockDevice, False),
293 'NoDevice': (boolean, False),
294 'VirtualName': (basestring, False),
295 }
296
[end of troposphere/autoscaling.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/troposphere/autoscaling.py b/troposphere/autoscaling.py
--- a/troposphere/autoscaling.py
+++ b/troposphere/autoscaling.py
@@ -137,23 +137,26 @@
if (not isinstance(update_policy, AWSHelperFn) and
'AutoScalingRollingUpdate' in update_policy.properties):
- rolling_update = update_policy.AutoScalingRollingUpdate
-
- isMinNoCheck = isinstance(
- rolling_update.MinInstancesInService,
- (FindInMap, Ref)
- )
- isMaxNoCheck = isinstance(self.MaxSize, (If, FindInMap, Ref))
-
- if not (isMinNoCheck or isMaxNoCheck):
- maxCount = int(self.MaxSize)
- minCount = int(rolling_update.MinInstancesInService)
-
- if minCount >= maxCount:
- raise ValueError(
- "The UpdatePolicy attribute "
- "MinInstancesInService must be less than the "
- "autoscaling group's MaxSize")
+ if not isinstance(
+ update_policy.AutoScalingRollingUpdate, AWSHelperFn):
+ rolling_update = update_policy.AutoScalingRollingUpdate
+
+ is_min_no_check = isinstance(
+ rolling_update.MinInstancesInService,
+ (FindInMap, Ref)
+ )
+ is_max_no_check = isinstance(self.MaxSize,
+ (If, FindInMap, Ref))
+
+ if not (is_min_no_check or is_max_no_check):
+ max_count = int(self.MaxSize)
+ min_count = int(rolling_update.MinInstancesInService)
+
+ if min_count >= max_count:
+ raise ValueError(
+ "The UpdatePolicy attribute "
+ "MinInstancesInService must be less than the "
+ "autoscaling group's MaxSize")
launch_config = self.properties.get('LaunchConfigurationName')
instance_id = self.properties.get('InstanceId')
| {"golden_diff": "diff --git a/troposphere/autoscaling.py b/troposphere/autoscaling.py\n--- a/troposphere/autoscaling.py\n+++ b/troposphere/autoscaling.py\n@@ -137,23 +137,26 @@\n \n if (not isinstance(update_policy, AWSHelperFn) and\n 'AutoScalingRollingUpdate' in update_policy.properties):\n- rolling_update = update_policy.AutoScalingRollingUpdate\n-\n- isMinNoCheck = isinstance(\n- rolling_update.MinInstancesInService,\n- (FindInMap, Ref)\n- )\n- isMaxNoCheck = isinstance(self.MaxSize, (If, FindInMap, Ref))\n-\n- if not (isMinNoCheck or isMaxNoCheck):\n- maxCount = int(self.MaxSize)\n- minCount = int(rolling_update.MinInstancesInService)\n-\n- if minCount >= maxCount:\n- raise ValueError(\n- \"The UpdatePolicy attribute \"\n- \"MinInstancesInService must be less than the \"\n- \"autoscaling group's MaxSize\")\n+ if not isinstance(\n+ update_policy.AutoScalingRollingUpdate, AWSHelperFn):\n+ rolling_update = update_policy.AutoScalingRollingUpdate\n+\n+ is_min_no_check = isinstance(\n+ rolling_update.MinInstancesInService,\n+ (FindInMap, Ref)\n+ )\n+ is_max_no_check = isinstance(self.MaxSize,\n+ (If, FindInMap, Ref))\n+\n+ if not (is_min_no_check or is_max_no_check):\n+ max_count = int(self.MaxSize)\n+ min_count = int(rolling_update.MinInstancesInService)\n+\n+ if min_count >= max_count:\n+ raise ValueError(\n+ \"The UpdatePolicy attribute \"\n+ \"MinInstancesInService must be less than the \"\n+ \"autoscaling group's MaxSize\")\n \n launch_config = self.properties.get('LaunchConfigurationName')\n instance_id = self.properties.get('InstanceId')\n", "issue": "AutoScalingRollingUpdate with 'If' Aws Helper Condition fail during validate\nHi, in autoscalinggroup UpdatePolicy if i have a AutoScalingRollingUpdate with a IF AWS Helper Condition it fails during validate with:\r\n##\r\nAttributeError: 'If' object has no attribute 'MinInstancesInService'\r\n##\r\n\r\nExample code:\r\n##\r\n AutoScalingRollingUpdate=If(\r\n 'RollingUpdate',\r\n pol.AutoScalingRollingUpdate(\r\n MaxBatchSize=get_mapped_value('RollingUpdateMaxBatchSize'),\r\n MinInstancesInService=get_mapped_value('RollingUpdateMinInstancesInService'),\r\n MinSuccessfulInstancesPercent=get_mapped_value('RollingUpdateMinSuccessfulInstancesPercent'),\r\n PauseTime=get_mapped_value('RollingUpdatePauseTime'),\r\n SuspendProcesses=[\r\n 'HealthCheck',\r\n 'ReplaceUnhealthy',\r\n 'AlarmNotification',\r\n 'ScheduledActions'\r\n ],\r\n WaitOnResourceSignals=True\r\n ),\r\n Ref('AWS::NoValue')\r\n ),\r\n##\r\n\r\nTo solve issue, in troposphere/autoscaling.py function validate should be:\r\n```\r\ndiff --git a/troposphere/autoscaling.py b/troposphere/autoscaling.py\r\nindex cc5873f..8f7a43d 100644\r\n--- a/troposphere/autoscaling.py\r\n+++ b/troposphere/autoscaling.py\r\n@@ -136,7 +136,8 @@ class AutoScalingGroup(AWSObject):\r\n update_policy = self.resource['UpdatePolicy']\r\n \r\n if (not isinstance(update_policy, AWSHelperFn) and\r\n- 'AutoScalingRollingUpdate' in update_policy.properties):\r\n+ 'AutoScalingRollingUpdate' in update_policy.properties and\r\n+ not isinstance(update_policy.AutoScalingRollingUpdate, AWSHelperFn)):\r\n rolling_update = update_policy.AutoScalingRollingUpdate\r\n \r\n isMinNoCheck = isinstance(\r\n```\r\n##\r\n\r\nRegards, Alberto.\n", "before_files": [{"content": "# Copyright (c) 2012-2013, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSHelperFn, AWSObject, AWSProperty, If, FindInMap, Ref\nfrom .validators import boolean, integer\nfrom . 
import cloudformation\n\n\nEC2_INSTANCE_LAUNCH = \"autoscaling:EC2_INSTANCE_LAUNCH\"\nEC2_INSTANCE_LAUNCH_ERROR = \"autoscaling:EC2_INSTANCE_LAUNCH_ERROR\"\nEC2_INSTANCE_TERMINATE = \"autoscaling:EC2_INSTANCE_TERMINATE\"\nEC2_INSTANCE_TERMINATE_ERROR = \"autoscaling:EC2_INSTANCE_TERMINATE_ERROR\"\nTEST_NOTIFICATION = \"autoscaling:TEST_NOTIFICATION\"\n\n# Termination Policy constants\nDefault = 'Default'\nOldestInstance = 'OldestInstance'\nNewestInstance = 'NewestInstance'\nOldestLaunchConfiguration = 'OldestLaunchConfiguration'\nClosestToNextInstanceHour = 'ClosestToNextInstanceHour'\n\n\nclass Tag(AWSHelperFn):\n def __init__(self, key, value, propogate):\n self.data = {\n 'Key': key,\n 'Value': value,\n 'PropagateAtLaunch': propogate,\n }\n\n def JSONrepr(self):\n return self.data\n\n\nclass Tags(AWSHelperFn):\n defaultPropagateAtLaunch = True\n manyType = [type([]), type(())]\n\n def __init__(self, **kwargs):\n self.tags = []\n for k, v in sorted(kwargs.iteritems()):\n if type(v) in self.manyType:\n propagate = str(v[1]).lower()\n v = v[0]\n else:\n propagate = str(self.defaultPropagateAtLaunch).lower()\n self.tags.append({\n 'Key': k,\n 'Value': v,\n 'PropagateAtLaunch': propagate,\n })\n\n # append tags to list\n def __add__(self, newtags):\n newtags.tags = self.tags + newtags.tags\n return newtags\n\n def JSONrepr(self):\n return self.tags\n\n\nclass NotificationConfigurations(AWSProperty):\n props = {\n 'TopicARN': (basestring, True),\n 'NotificationTypes': (list, True),\n }\n\n\nclass MetricsCollection(AWSProperty):\n props = {\n 'Granularity': (basestring, True),\n 'Metrics': (list, False),\n }\n\n\nclass Metadata(AWSHelperFn):\n def __init__(self, init, authentication=None):\n self.validate(init, authentication)\n # get keys and values from init and authentication\n\n # if there's only one data point, then we know its the default\n # cfn-init; where the key is 'config'\n if len(init.data) == 1:\n initKey, initValue = init.data.popitem()\n self.data = {initKey: initValue}\n else:\n self.data = init.data\n\n if authentication:\n authKey, authValue = authentication.data.popitem()\n self.data[authKey] = authValue\n\n def validate(self, init, authentication):\n if not isinstance(init, cloudformation.Init):\n raise ValueError(\n 'init must be of type cloudformation.Init'\n )\n\n is_instance = isinstance(authentication, cloudformation.Authentication)\n if authentication and not is_instance:\n raise ValueError(\n 'authentication must be of type cloudformation.Authentication'\n )\n\n def JSONrepr(self):\n return self.data\n\n\nclass AutoScalingGroup(AWSObject):\n resource_type = \"AWS::AutoScaling::AutoScalingGroup\"\n\n props = {\n 'AvailabilityZones': (list, False),\n 'Cooldown': (integer, False),\n 'DesiredCapacity': (integer, False),\n 'HealthCheckGracePeriod': (integer, False),\n 'HealthCheckType': (basestring, False),\n 'InstanceId': (basestring, False),\n 'LaunchConfigurationName': (basestring, False),\n 'LoadBalancerNames': (list, False),\n 'MaxSize': (integer, True),\n 'MetricsCollection': ([MetricsCollection], False),\n 'MinSize': (integer, True),\n 'NotificationConfigurations': ([NotificationConfigurations], False),\n 'PlacementGroup': (basestring, False),\n 'Tags': (list, False),\n 'TargetGroupARNs': ([basestring], False),\n 'TerminationPolicies': ([basestring], False),\n 'VPCZoneIdentifier': (list, False),\n }\n\n def validate(self):\n if 'UpdatePolicy' in self.resource:\n update_policy = self.resource['UpdatePolicy']\n\n if (not isinstance(update_policy, AWSHelperFn) and\n 
'AutoScalingRollingUpdate' in update_policy.properties):\n rolling_update = update_policy.AutoScalingRollingUpdate\n\n isMinNoCheck = isinstance(\n rolling_update.MinInstancesInService,\n (FindInMap, Ref)\n )\n isMaxNoCheck = isinstance(self.MaxSize, (If, FindInMap, Ref))\n\n if not (isMinNoCheck or isMaxNoCheck):\n maxCount = int(self.MaxSize)\n minCount = int(rolling_update.MinInstancesInService)\n\n if minCount >= maxCount:\n raise ValueError(\n \"The UpdatePolicy attribute \"\n \"MinInstancesInService must be less than the \"\n \"autoscaling group's MaxSize\")\n\n launch_config = self.properties.get('LaunchConfigurationName')\n instance_id = self.properties.get('InstanceId')\n if launch_config and instance_id:\n raise ValueError(\"LaunchConfigurationName and InstanceId \"\n \"are mutually exclusive.\")\n if not launch_config and not instance_id:\n raise ValueError(\"Must specify either LaunchConfigurationName or \"\n \"InstanceId: http://docs.aws.amazon.com/AWSCloud\"\n \"Formation/latest/UserGuide/aws-properties-as-gr\"\n \"oup.html#cfn-as-group-instanceid\")\n\n availability_zones = self.properties.get('AvailabilityZones')\n vpc_zone_identifier = self.properties.get('VPCZoneIdentifier')\n if not availability_zones and not vpc_zone_identifier:\n raise ValueError(\"Must specify AvailabilityZones and/or \"\n \"VPCZoneIdentifier: http://docs.aws.amazon.com/A\"\n \"WSCloudFormation/latest/UserGuide/aws-propertie\"\n \"s-as-group.html#cfn-as-group-vpczoneidentifier\")\n return True\n\n\nclass LaunchConfiguration(AWSObject):\n resource_type = \"AWS::AutoScaling::LaunchConfiguration\"\n\n props = {\n 'AssociatePublicIpAddress': (boolean, False),\n 'BlockDeviceMappings': (list, False),\n 'ClassicLinkVPCId': (basestring, False),\n 'ClassicLinkVPCSecurityGroups': ([basestring], False),\n 'EbsOptimized': (boolean, False),\n 'IamInstanceProfile': (basestring, False),\n 'ImageId': (basestring, True),\n 'InstanceId': (basestring, False),\n 'InstanceMonitoring': (boolean, False),\n 'InstanceType': (basestring, True),\n 'KernelId': (basestring, False),\n 'KeyName': (basestring, False),\n 'Metadata': (Metadata, False),\n 'PlacementTenancy': (basestring, False),\n 'RamDiskId': (basestring, False),\n 'SecurityGroups': (list, False),\n 'SpotPrice': (basestring, False),\n 'UserData': (basestring, False),\n }\n\n\nclass StepAdjustments(AWSProperty):\n props = {\n 'MetricIntervalLowerBound': (integer, False),\n 'MetricIntervalUpperBound': (integer, False),\n 'ScalingAdjustment': (integer, True),\n }\n\n\nclass ScalingPolicy(AWSObject):\n resource_type = \"AWS::AutoScaling::ScalingPolicy\"\n\n props = {\n 'AdjustmentType': (basestring, True),\n 'AutoScalingGroupName': (basestring, True),\n 'Cooldown': (integer, False),\n 'EstimatedInstanceWarmup': (integer, False),\n 'MetricAggregationType': (basestring, False),\n 'MinAdjustmentMagnitude': (integer, False),\n 'PolicyType': (basestring, False),\n 'ScalingAdjustment': (integer, False),\n 'StepAdjustments': ([StepAdjustments], False),\n }\n\n\nclass ScheduledAction(AWSObject):\n resource_type = \"AWS::AutoScaling::ScheduledAction\"\n\n props = {\n 'AutoScalingGroupName': (basestring, True),\n 'DesiredCapacity': (integer, False),\n 'EndTime': (basestring, False),\n 'MaxSize': (integer, False),\n 'MinSize': (integer, False),\n 'Recurrence': (basestring, False),\n 'StartTime': (basestring, False),\n }\n\n\nclass LifecycleHook(AWSObject):\n resource_type = \"AWS::AutoScaling::LifecycleHook\"\n\n props = {\n 'AutoScalingGroupName': (basestring, True),\n 
'DefaultResult': (basestring, False),\n 'HeartbeatTimeout': (integer, False),\n 'LifecycleHookName': (basestring, False),\n 'LifecycleTransition': (basestring, True),\n 'NotificationMetadata': (basestring, False),\n 'NotificationTargetARN': (basestring, True),\n 'RoleARN': (basestring, True),\n }\n\n\nclass Trigger(AWSObject):\n resource_type = \"AWS::AutoScaling::Trigger\"\n\n props = {\n 'AutoScalingGroupName': (basestring, True),\n 'BreachDuration': (integer, True),\n 'Dimensions': (list, True),\n 'LowerBreachScaleIncrement': (integer, False),\n 'LowerThreshold': (integer, True),\n 'MetricName': (basestring, True),\n 'Namespace': (basestring, True),\n 'Period': (integer, True),\n 'Statistic': (basestring, True),\n 'Unit': (basestring, False),\n 'UpperBreachScaleIncrement': (integer, False),\n 'UpperThreshold': (integer, True),\n }\n\n\nclass EBSBlockDevice(AWSProperty):\n # http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig-blockdev-template.html\n props = {\n 'DeleteOnTermination': (boolean, False),\n 'Encrypted': (boolean, False),\n 'Iops': (integer, False),\n 'SnapshotId': (basestring, False),\n 'VolumeSize': (integer, False),\n 'VolumeType': (basestring, False),\n }\n\n\nclass BlockDeviceMapping(AWSProperty):\n # http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig-blockdev-mapping.html\n props = {\n 'DeviceName': (basestring, True),\n 'Ebs': (EBSBlockDevice, False),\n 'NoDevice': (boolean, False),\n 'VirtualName': (basestring, False),\n }\n", "path": "troposphere/autoscaling.py"}]} | 4,080 | 431 |
gh_patches_debug_3836 | rasdani/github-patches | git_diff | StackStorm__st2-2925 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"st2 key delete" does not obey interface and populate user query params
st2 --debug key delete -s user -u stanley netdev_servicewrapper_address
2016-09-23 17:13:48,545 DEBUG - Using cached token from file "/home/vagrant/.st2/token-st2admin"
curl -X GET -H 'Connection: keep-alive' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'User-Agent: python-requests/2.11.1' -H 'X-Auth-Token: 235ca028e8e545efbd28806090ca3bd6' 'http://127.0.0.1:9101/v1/keys/netdev_servicewrapper_address?scope=user'
{
"faultstring": "KeyValuePair with name: st2admin:netdev_servicewrapper_address and scope: user not found in db."
}
</issue>
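The GET shown in the debug output is the lookup the CLI performs before issuing the delete, and for a user-scoped key it has to carry the owning user in addition to the scope. A minimal sketch of the request that would succeed, reusing the endpoint and token from the debug output (the `user` query parameter is an assumption drawn from how the list command builds its parameters; it is also what the fix below adds):

```python
# Illustration only; the real CLI goes through st2client's HTTP layer.
import requests

params = {"scope": "user", "user": "stanley"}  # both parameters, not just scope
resp = requests.get(
    "http://127.0.0.1:9101/v1/keys/netdev_servicewrapper_address",
    params=params,
    headers={"X-Auth-Token": "235ca028e8e545efbd28806090ca3bd6"},
)
print(resp.status_code, resp.text)
```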
<code>
[start of st2client/st2client/commands/keyvalue.py]
1 # Licensed to the StackStorm, Inc ('StackStorm') under one or more
2 # contributor license agreements. See the NOTICE file distributed with
3 # this work for additional information regarding copyright ownership.
4 # The ASF licenses this file to You under the Apache License, Version 2.0
5 # (the "License"); you may not use this file except in compliance with
6 # the License. You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import os
17 import json
18 import logging
19 from os.path import join as pjoin
20
21 from st2client.commands import resource
22 from st2client.commands.noop import NoopCommand
23 from st2client.commands.resource import add_auth_token_to_kwargs_from_cli
24 from st2client.formatters import table
25 from st2client.models.keyvalue import KeyValuePair
26 from st2client.utils.date import format_isodate_for_user_timezone
27
28 LOG = logging.getLogger(__name__)
29
30 DEFAULT_SCOPE = 'system'
31
32
33 class KeyValuePairBranch(resource.ResourceBranch):
34
35 def __init__(self, description, app, subparsers, parent_parser=None):
36 super(KeyValuePairBranch, self).__init__(
37 KeyValuePair, description, app, subparsers,
38 parent_parser=parent_parser,
39 commands={
40 'list': KeyValuePairListCommand,
41 'get': KeyValuePairGetCommand,
42 'delete': KeyValuePairDeleteCommand,
43 'create': NoopCommand,
44 'update': NoopCommand
45 })
46
47 # Registers extended commands
48 self.commands['set'] = KeyValuePairSetCommand(self.resource, self.app,
49 self.subparsers)
50 self.commands['load'] = KeyValuePairLoadCommand(
51 self.resource, self.app, self.subparsers)
52 self.commands['delete_by_prefix'] = KeyValuePairDeleteByPrefixCommand(
53 self.resource, self.app, self.subparsers)
54
55 # Remove unsupported commands
56 # TODO: Refactor parent class and make it nicer
57 del self.commands['create']
58 del self.commands['update']
59
60
61 class KeyValuePairListCommand(resource.ResourceListCommand):
62 display_attributes = ['name', 'value', 'secret', 'encrypted', 'scope', 'user',
63 'expire_timestamp']
64 attribute_transform_functions = {
65 'expire_timestamp': format_isodate_for_user_timezone,
66 }
67
68 def __init__(self, *args, **kwargs):
69 super(KeyValuePairListCommand, self).__init__(*args, **kwargs)
70
71 # Filter options
72 self.parser.add_argument('--prefix', help=('Only return values which name starts with the '
73 ' provided prefix.'))
74 self.parser.add_argument('--decrypt', action='store_true',
75 help='Decrypt secrets and display plain text.')
76 self.parser.add_argument('-s', '--scope', default='system', dest='scope',
77 help='Scope item is under. Example: "user".')
78 self.parser.add_argument('-u', '--user', dest='user', default=None,
79 help='User for user scoped items (admin only).')
80
81 def run_and_print(self, args, **kwargs):
82 if args.prefix:
83 kwargs['prefix'] = args.prefix
84
85 decrypt = getattr(args, 'decrypt', False)
86 kwargs['params'] = {'decrypt': str(decrypt).lower()}
87 scope = getattr(args, 'scope', DEFAULT_SCOPE)
88 kwargs['params']['scope'] = scope
89 kwargs['params']['user'] = args.user
90
91 instances = self.run(args, **kwargs)
92 self.print_output(reversed(instances), table.MultiColumnTable,
93 attributes=args.attr, widths=args.width,
94 json=args.json,
95 yaml=args.yaml,
96 attribute_transform_functions=self.attribute_transform_functions)
97
98
99 class KeyValuePairGetCommand(resource.ResourceGetCommand):
100 pk_argument_name = 'name'
101 display_attributes = ['name', 'value', 'secret', 'encrypted', 'scope', 'expire_timestamp']
102
103 def __init__(self, kv_resource, *args, **kwargs):
104 super(KeyValuePairGetCommand, self).__init__(kv_resource, *args, **kwargs)
105 self.parser.add_argument('-d', '--decrypt', action='store_true',
106 help='Decrypt secret if encrypted and show plain text.')
107 self.parser.add_argument('-s', '--scope', default=DEFAULT_SCOPE, dest='scope',
108 help='Scope item is under. Example: "user".')
109
110 @resource.add_auth_token_to_kwargs_from_cli
111 def run(self, args, **kwargs):
112 resource_name = getattr(args, self.pk_argument_name, None)
113 decrypt = getattr(args, 'decrypt', False)
114 scope = getattr(args, 'scope', DEFAULT_SCOPE)
115 kwargs['params'] = {'decrypt': str(decrypt).lower()}
116 kwargs['params']['scope'] = scope
117 return self.get_resource_by_id(id=resource_name, **kwargs)
118
119
120 class KeyValuePairSetCommand(resource.ResourceCommand):
121 display_attributes = ['name', 'value', 'expire_timestamp']
122
123 def __init__(self, resource, *args, **kwargs):
124 super(KeyValuePairSetCommand, self).__init__(
125 resource, 'set',
126 'Set an existing %s.' % resource.get_display_name().lower(),
127 *args, **kwargs
128 )
129
130 self.parser.add_argument('name',
131 metavar='name',
132 help='Name of the key value pair.')
133 self.parser.add_argument('value', help='Value paired with the key.')
134 self.parser.add_argument('-l', '--ttl', dest='ttl', type=int, default=None,
135 help='TTL (in seconds) for this value.')
136 self.parser.add_argument('-e', '--encrypt', dest='secret',
137 action='store_true',
138 help='Encrypt value before saving the value.')
139 self.parser.add_argument('-s', '--scope', dest='scope', default=DEFAULT_SCOPE,
140 help='Specify the scope under which you want ' +
141 'to place the item.')
142 self.parser.add_argument('-u', '--user', dest='user', default=None,
143 help='User for user scoped items (admin only).')
144
145 @add_auth_token_to_kwargs_from_cli
146 def run(self, args, **kwargs):
147 instance = KeyValuePair()
148 instance.id = args.name # TODO: refactor and get rid of id
149 instance.name = args.name
150 instance.value = args.value
151 instance.scope = args.scope
152 instance.user = args.user
153
154 if args.secret:
155 instance.secret = args.secret
156
157 if args.ttl:
158 instance.ttl = args.ttl
159
160 return self.manager.update(instance, **kwargs)
161
162 def run_and_print(self, args, **kwargs):
163 instance = self.run(args, **kwargs)
164 self.print_output(instance, table.PropertyValueTable,
165 attributes=self.display_attributes, json=args.json,
166 yaml=args.yaml)
167
168
169 class KeyValuePairDeleteCommand(resource.ResourceDeleteCommand):
170 pk_argument_name = 'name'
171
172 def __init__(self, resource, *args, **kwargs):
173 super(KeyValuePairDeleteCommand, self).__init__(resource, *args, **kwargs)
174
175 self.parser.add_argument('-s', '--scope', dest='scope', default=DEFAULT_SCOPE,
176 help='Specify the scope under which you want ' +
177 'to place the item.')
178 self.parser.add_argument('-u', '--user', dest='user', default=None,
179 help='User for user scoped items (admin only).')
180
181 @add_auth_token_to_kwargs_from_cli
182 def run(self, args, **kwargs):
183 resource_id = getattr(args, self.pk_argument_name, None)
184 scope = getattr(args, 'scope', DEFAULT_SCOPE)
185 kwargs['params'] = {}
186 kwargs['params']['scope'] = scope
187 instance = self.get_resource(resource_id, **kwargs)
188
189 if not instance:
190 raise resource.ResourceNotFoundError('KeyValuePair with id "%s" not found', resource_id)
191
192 instance.id = resource_id # TODO: refactor and get rid of id
193 self.manager.delete(instance, **kwargs)
194
195
196 class KeyValuePairDeleteByPrefixCommand(resource.ResourceCommand):
197 """
198 Commands which delete all the key value pairs which match the provided
199 prefix.
200 """
201 def __init__(self, resource, *args, **kwargs):
202 super(KeyValuePairDeleteByPrefixCommand, self).__init__(resource, 'delete_by_prefix',
203 'Delete KeyValue pairs which match the provided prefix', *args, **kwargs)
204
205 self.parser.add_argument('-p', '--prefix', required=True,
206 help='Name prefix (e.g. twitter.TwitterSensor:)')
207
208 @add_auth_token_to_kwargs_from_cli
209 def run(self, args, **kwargs):
210 prefix = args.prefix
211 key_pairs = self.manager.get_all(prefix=prefix)
212
213 to_delete = []
214 for key_pair in key_pairs:
215 key_pair.id = key_pair.name
216 to_delete.append(key_pair)
217
218 deleted = []
219 for key_pair in to_delete:
220 self.manager.delete(instance=key_pair, **kwargs)
221 deleted.append(key_pair)
222
223 return deleted
224
225 def run_and_print(self, args, **kwargs):
226 # TODO: Need to use args, instead of kwargs (args=) because of bad API
227 # FIX ME
228 deleted = self.run(args, **kwargs)
229 key_ids = [key_pair.id for key_pair in deleted]
230
231 print('Deleted %s keys' % (len(deleted)))
232 print('Deleted key ids: %s' % (', '.join(key_ids)))
233
234
235 class KeyValuePairLoadCommand(resource.ResourceCommand):
236 pk_argument_name = 'name'
237 display_attributes = ['name', 'value']
238
239 def __init__(self, resource, *args, **kwargs):
240 help_text = ('Load a list of %s from file.' %
241 resource.get_plural_display_name().lower())
242 super(KeyValuePairLoadCommand, self).__init__(resource, 'load',
243 help_text, *args, **kwargs)
244
245 self.parser.add_argument(
246 'file', help=('JSON file containing the %s to create.'
247 % resource.get_plural_display_name().lower()))
248
249 @add_auth_token_to_kwargs_from_cli
250 def run(self, args, **kwargs):
251 file_path = os.path.normpath(pjoin(os.getcwd(), args.file))
252
253 if not os.path.exists(args.file):
254 raise ValueError('File "%s" doesn\'t exist' % (file_path))
255
256 if not os.path.isfile(args.file):
257 raise ValueError('"%s" is not a file' % (file_path))
258
259 with open(file_path, 'r') as f:
260 kvps = json.loads(f.read())
261
262 instances = []
263 for item in kvps:
264 name = item['name']
265 value = item['value']
266
267 instance = KeyValuePair()
268 instance.id = name # TODO: refactor and get rid of id
269 instance.name = name
270 instance.value = value
271
272 self.manager.update(instance, **kwargs)
273 instances.append(instance)
274 return instances
275
276 def run_and_print(self, args, **kwargs):
277 instances = self.run(args, **kwargs)
278 self.print_output(instances, table.MultiColumnTable,
279 attributes=['id', 'name', 'value'], json=args.json, yaml=args.yaml)
280
[end of st2client/st2client/commands/keyvalue.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/st2client/st2client/commands/keyvalue.py b/st2client/st2client/commands/keyvalue.py
--- a/st2client/st2client/commands/keyvalue.py
+++ b/st2client/st2client/commands/keyvalue.py
@@ -184,6 +184,7 @@
scope = getattr(args, 'scope', DEFAULT_SCOPE)
kwargs['params'] = {}
kwargs['params']['scope'] = scope
+ kwargs['params']['user'] = args.user
instance = self.get_resource(resource_id, **kwargs)
if not instance:
| {"golden_diff": "diff --git a/st2client/st2client/commands/keyvalue.py b/st2client/st2client/commands/keyvalue.py\n--- a/st2client/st2client/commands/keyvalue.py\n+++ b/st2client/st2client/commands/keyvalue.py\n@@ -184,6 +184,7 @@\n scope = getattr(args, 'scope', DEFAULT_SCOPE)\n kwargs['params'] = {}\n kwargs['params']['scope'] = scope\n+ kwargs['params']['user'] = args.user\n instance = self.get_resource(resource_id, **kwargs)\n \n if not instance:\n", "issue": "\"st2 key delete\" does not obey interface and populate user query params\nst2 --debug key delete -s user -u stanley netdev_servicewrapper_address\n2016-09-23 17:13:48,545 DEBUG - Using cached token from file \"/home/vagrant/.st2/token-st2admin\"\n\ncurl -X GET -H 'Connection: keep-alive' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: _/_' -H 'User-Agent: python-requests/2.11.1' -H 'X-Auth-Token: 235ca028e8e545efbd28806090ca3bd6' 'http://127.0.0.1:9101/v1/keys/netdev_servicewrapper_address?scope=user'\n\n{\n \"faultstring\": \"KeyValuePair with name: st2admin:netdev_servicewrapper_address and scope: user not found in db.\"\n}\n\n", "before_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport json\nimport logging\nfrom os.path import join as pjoin\n\nfrom st2client.commands import resource\nfrom st2client.commands.noop import NoopCommand\nfrom st2client.commands.resource import add_auth_token_to_kwargs_from_cli\nfrom st2client.formatters import table\nfrom st2client.models.keyvalue import KeyValuePair\nfrom st2client.utils.date import format_isodate_for_user_timezone\n\nLOG = logging.getLogger(__name__)\n\nDEFAULT_SCOPE = 'system'\n\n\nclass KeyValuePairBranch(resource.ResourceBranch):\n\n def __init__(self, description, app, subparsers, parent_parser=None):\n super(KeyValuePairBranch, self).__init__(\n KeyValuePair, description, app, subparsers,\n parent_parser=parent_parser,\n commands={\n 'list': KeyValuePairListCommand,\n 'get': KeyValuePairGetCommand,\n 'delete': KeyValuePairDeleteCommand,\n 'create': NoopCommand,\n 'update': NoopCommand\n })\n\n # Registers extended commands\n self.commands['set'] = KeyValuePairSetCommand(self.resource, self.app,\n self.subparsers)\n self.commands['load'] = KeyValuePairLoadCommand(\n self.resource, self.app, self.subparsers)\n self.commands['delete_by_prefix'] = KeyValuePairDeleteByPrefixCommand(\n self.resource, self.app, self.subparsers)\n\n # Remove unsupported commands\n # TODO: Refactor parent class and make it nicer\n del self.commands['create']\n del self.commands['update']\n\n\nclass KeyValuePairListCommand(resource.ResourceListCommand):\n display_attributes = ['name', 'value', 'secret', 'encrypted', 'scope', 'user',\n 'expire_timestamp']\n attribute_transform_functions = {\n 'expire_timestamp': 
format_isodate_for_user_timezone,\n }\n\n def __init__(self, *args, **kwargs):\n super(KeyValuePairListCommand, self).__init__(*args, **kwargs)\n\n # Filter options\n self.parser.add_argument('--prefix', help=('Only return values which name starts with the '\n ' provided prefix.'))\n self.parser.add_argument('--decrypt', action='store_true',\n help='Decrypt secrets and display plain text.')\n self.parser.add_argument('-s', '--scope', default='system', dest='scope',\n help='Scope item is under. Example: \"user\".')\n self.parser.add_argument('-u', '--user', dest='user', default=None,\n help='User for user scoped items (admin only).')\n\n def run_and_print(self, args, **kwargs):\n if args.prefix:\n kwargs['prefix'] = args.prefix\n\n decrypt = getattr(args, 'decrypt', False)\n kwargs['params'] = {'decrypt': str(decrypt).lower()}\n scope = getattr(args, 'scope', DEFAULT_SCOPE)\n kwargs['params']['scope'] = scope\n kwargs['params']['user'] = args.user\n\n instances = self.run(args, **kwargs)\n self.print_output(reversed(instances), table.MultiColumnTable,\n attributes=args.attr, widths=args.width,\n json=args.json,\n yaml=args.yaml,\n attribute_transform_functions=self.attribute_transform_functions)\n\n\nclass KeyValuePairGetCommand(resource.ResourceGetCommand):\n pk_argument_name = 'name'\n display_attributes = ['name', 'value', 'secret', 'encrypted', 'scope', 'expire_timestamp']\n\n def __init__(self, kv_resource, *args, **kwargs):\n super(KeyValuePairGetCommand, self).__init__(kv_resource, *args, **kwargs)\n self.parser.add_argument('-d', '--decrypt', action='store_true',\n help='Decrypt secret if encrypted and show plain text.')\n self.parser.add_argument('-s', '--scope', default=DEFAULT_SCOPE, dest='scope',\n help='Scope item is under. Example: \"user\".')\n\n @resource.add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n resource_name = getattr(args, self.pk_argument_name, None)\n decrypt = getattr(args, 'decrypt', False)\n scope = getattr(args, 'scope', DEFAULT_SCOPE)\n kwargs['params'] = {'decrypt': str(decrypt).lower()}\n kwargs['params']['scope'] = scope\n return self.get_resource_by_id(id=resource_name, **kwargs)\n\n\nclass KeyValuePairSetCommand(resource.ResourceCommand):\n display_attributes = ['name', 'value', 'expire_timestamp']\n\n def __init__(self, resource, *args, **kwargs):\n super(KeyValuePairSetCommand, self).__init__(\n resource, 'set',\n 'Set an existing %s.' 
% resource.get_display_name().lower(),\n *args, **kwargs\n )\n\n self.parser.add_argument('name',\n metavar='name',\n help='Name of the key value pair.')\n self.parser.add_argument('value', help='Value paired with the key.')\n self.parser.add_argument('-l', '--ttl', dest='ttl', type=int, default=None,\n help='TTL (in seconds) for this value.')\n self.parser.add_argument('-e', '--encrypt', dest='secret',\n action='store_true',\n help='Encrypt value before saving the value.')\n self.parser.add_argument('-s', '--scope', dest='scope', default=DEFAULT_SCOPE,\n help='Specify the scope under which you want ' +\n 'to place the item.')\n self.parser.add_argument('-u', '--user', dest='user', default=None,\n help='User for user scoped items (admin only).')\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n instance = KeyValuePair()\n instance.id = args.name # TODO: refactor and get rid of id\n instance.name = args.name\n instance.value = args.value\n instance.scope = args.scope\n instance.user = args.user\n\n if args.secret:\n instance.secret = args.secret\n\n if args.ttl:\n instance.ttl = args.ttl\n\n return self.manager.update(instance, **kwargs)\n\n def run_and_print(self, args, **kwargs):\n instance = self.run(args, **kwargs)\n self.print_output(instance, table.PropertyValueTable,\n attributes=self.display_attributes, json=args.json,\n yaml=args.yaml)\n\n\nclass KeyValuePairDeleteCommand(resource.ResourceDeleteCommand):\n pk_argument_name = 'name'\n\n def __init__(self, resource, *args, **kwargs):\n super(KeyValuePairDeleteCommand, self).__init__(resource, *args, **kwargs)\n\n self.parser.add_argument('-s', '--scope', dest='scope', default=DEFAULT_SCOPE,\n help='Specify the scope under which you want ' +\n 'to place the item.')\n self.parser.add_argument('-u', '--user', dest='user', default=None,\n help='User for user scoped items (admin only).')\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n resource_id = getattr(args, self.pk_argument_name, None)\n scope = getattr(args, 'scope', DEFAULT_SCOPE)\n kwargs['params'] = {}\n kwargs['params']['scope'] = scope\n instance = self.get_resource(resource_id, **kwargs)\n\n if not instance:\n raise resource.ResourceNotFoundError('KeyValuePair with id \"%s\" not found', resource_id)\n\n instance.id = resource_id # TODO: refactor and get rid of id\n self.manager.delete(instance, **kwargs)\n\n\nclass KeyValuePairDeleteByPrefixCommand(resource.ResourceCommand):\n \"\"\"\n Commands which delete all the key value pairs which match the provided\n prefix.\n \"\"\"\n def __init__(self, resource, *args, **kwargs):\n super(KeyValuePairDeleteByPrefixCommand, self).__init__(resource, 'delete_by_prefix',\n 'Delete KeyValue pairs which match the provided prefix', *args, **kwargs)\n\n self.parser.add_argument('-p', '--prefix', required=True,\n help='Name prefix (e.g. 
twitter.TwitterSensor:)')\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n prefix = args.prefix\n key_pairs = self.manager.get_all(prefix=prefix)\n\n to_delete = []\n for key_pair in key_pairs:\n key_pair.id = key_pair.name\n to_delete.append(key_pair)\n\n deleted = []\n for key_pair in to_delete:\n self.manager.delete(instance=key_pair, **kwargs)\n deleted.append(key_pair)\n\n return deleted\n\n def run_and_print(self, args, **kwargs):\n # TODO: Need to use args, instead of kwargs (args=) because of bad API\n # FIX ME\n deleted = self.run(args, **kwargs)\n key_ids = [key_pair.id for key_pair in deleted]\n\n print('Deleted %s keys' % (len(deleted)))\n print('Deleted key ids: %s' % (', '.join(key_ids)))\n\n\nclass KeyValuePairLoadCommand(resource.ResourceCommand):\n pk_argument_name = 'name'\n display_attributes = ['name', 'value']\n\n def __init__(self, resource, *args, **kwargs):\n help_text = ('Load a list of %s from file.' %\n resource.get_plural_display_name().lower())\n super(KeyValuePairLoadCommand, self).__init__(resource, 'load',\n help_text, *args, **kwargs)\n\n self.parser.add_argument(\n 'file', help=('JSON file containing the %s to create.'\n % resource.get_plural_display_name().lower()))\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n file_path = os.path.normpath(pjoin(os.getcwd(), args.file))\n\n if not os.path.exists(args.file):\n raise ValueError('File \"%s\" doesn\\'t exist' % (file_path))\n\n if not os.path.isfile(args.file):\n raise ValueError('\"%s\" is not a file' % (file_path))\n\n with open(file_path, 'r') as f:\n kvps = json.loads(f.read())\n\n instances = []\n for item in kvps:\n name = item['name']\n value = item['value']\n\n instance = KeyValuePair()\n instance.id = name # TODO: refactor and get rid of id\n instance.name = name\n instance.value = value\n\n self.manager.update(instance, **kwargs)\n instances.append(instance)\n return instances\n\n def run_and_print(self, args, **kwargs):\n instances = self.run(args, **kwargs)\n self.print_output(instances, table.MultiColumnTable,\n attributes=['id', 'name', 'value'], json=args.json, yaml=args.yaml)\n", "path": "st2client/st2client/commands/keyvalue.py"}]} | 3,939 | 129 |
gh_patches_debug_34374 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-9297 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Set Athena Workgroup Encryption also tries to change readonly field
### Describe the bug
Hi, we are currently trying to set the encryption for our primary athena workgroups that are unencrypted. The policy looks like this:
```yaml
- name: set-athena-workgroup-encryption
resource: awscc.athena_workgroup
filters:
- type: value
key: Name
value: "primary"
- type: value
key: "WorkGroupConfiguration.ResultConfiguration.EncryptionConfiguration.EncryptionOption"
value: absent
actions:
- type: update
WorkGroupConfiguration:
EnforceWorkGroupConfiguration: true
- type: update
WorkGroupConfiguration:
ResultConfiguration:
EncryptionConfiguration:
EncryptionOption: SSE_S3
```
When executing this policy we get this error though:
```
2024-02-15 09:59:25,626: custodian.commands:ERROR Error while executing policy set-athena-workgroup-encryption, continuing
Traceback (most recent call last):
File "/home/.../policies/.venv/lib/python3.11/site-packages/c7n/commands.py", line 307, in run
policy()
File "/home/.../policies/.venv/lib/python3.11/site-packages/c7n/policy.py", line 1357, in __call__
resources = mode.run()
^^^^^^^^^^
File "/home/.../policies/.venv/lib/python3.11/site-packages/c7n/policy.py", line 364, in run
results = a.process(resources)
^^^^^^^^^^^^^^^^^^^^
File "/home/.../policies/.venv/lib/python3.11/site-packages/c7n_awscc/actions.py", line 43, in process
client.update_resource(
File "/home/.../policies/.venv/lib/python3.11/site-packages/botocore/client.py", line 553, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/.../policies/.venv/lib/python3.11/site-packages/botocore/client.py", line 1009, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the UpdateResource operation: Invalid patch update: readOnlyProperties [/properties/WorkGroupConfiguration/EngineVersion/EffectiveEngineVersion] cannot be updated
2024-02-15 09:59:25,627: custodian.commands:ERROR The following policies had errors while executing
- set-athena-workgroup-encryption
```
But we don't want to update the engine version itself.
### What did you expect to happen?
We expected the policy to update the encryption setting and not touch the engine version, because the attribute was not specified in our policy
### Cloud Provider
Amazon Web Services (AWS)
### Cloud Custodian version and dependency information
```shell
Custodian: 0.9.34
Python: 3.11.4 (main, Dec 7 2023, 15:43:41) [GCC 12.3.0]
Platform: posix.uname_result(sysname='Linux', nodename='marcel', release='6.2.0-39-generic', version='#40-Ubuntu SMP PREEMPT_DYNAMIC Tue Nov 14 14:18:00 UTC 2023', machine='x86_64')
Using venv: True
Docker: False
Installed:
argcomplete==3.2.1
attrs==23.2.0
boto3==1.34.21
botocore==1.34.21
docutils==0.18.1
importlib-metadata==6.11.0
jmespath==1.0.1
jsonschema==4.21.0
jsonschema-specifications==2023.12.1
python-dateutil==2.8.2
pyyaml==6.0.1
referencing==0.31.1
rpds-py==0.17.1
s3transfer==0.10.0
six==1.16.0
tabulate==0.9.0
urllib3==1.26.18
zipp==3.17.0
```
### Policy
```shell
- name: set-athena-workgroup-encryption
resource: awscc.athena_workgroup
filters:
- type: value
key: Name
value: "primary"
- type: value
key: "WorkGroupConfiguration.ResultConfiguration.EncryptionConfiguration.EncryptionOption"
value: absent
actions:
- type: update
WorkGroupConfiguration:
EnforceWorkGroupConfiguration: true
- type: update
WorkGroupConfiguration:
ResultConfiguration:
EncryptionConfiguration:
EncryptionOption: SSE_S3
```
### Relevant log/traceback output
```shell
2024-02-15 09:59:25,626: custodian.commands:ERROR Error while executing policy set-athena-workgroup-encryption, continuing
Traceback (most recent call last):
File "/home/.../policies/.venv/lib/python3.11/site-packages/c7n/commands.py", line 307, in run
policy()
File "/home/.../policies/.venv/lib/python3.11/site-packages/c7n/policy.py", line 1357, in __call__
resources = mode.run()
^^^^^^^^^^
File "/home/.../policies/.venv/lib/python3.11/site-packages/c7n/policy.py", line 364, in run
results = a.process(resources)
^^^^^^^^^^^^^^^^^^^^
File "/home/.../policies/.venv/lib/python3.11/site-packages/c7n_awscc/actions.py", line 43, in process
client.update_resource(
File "/home/.../policies/.venv/lib/python3.11/site-packages/botocore/client.py", line 553, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/.../policies/.venv/lib/python3.11/site-packages/botocore/client.py", line 1009, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the UpdateResource operation: Invalid patch update: readOnlyProperties [/properties/WorkGroupConfiguration/EngineVersion/EffectiveEngineVersion] cannot be updated
2024-02-15 09:59:25,627: custodian.commands:ERROR The following policies had errors while executing
- set-athena-workgroup-encryption
```
### Extra information or context
We tried to use the update attributes like this:
```yaml
- type: update
WorkGroupConfigurationUpdates:
EnforceWorkGroupConfiguration: true
ResultConfigurationUpdates:
EncryptionConfiguration:
EncryptionOption: SSE_S3
```
but there is currently a bug in AWS which resets the workgroup right after the operation. We are in communication with AWS Support there, but in the meantime we tried to make it work with the approach described above.
</issue>
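The validation error comes from how the update action builds its patch: it replaces a whole top-level key with the policy value and then diffs against the live resource, so the generated patch contains operations for every nested property the policy did not repeat, including read-only ones. A minimal sketch of that behaviour (the resource values are made up for illustration; only the `jsonpatch` mechanics matter):

```python
import jsonpatch

# Illustrative resource state; a real workgroup carries more properties.
current = {
    "Name": "primary",
    "WorkGroupConfiguration": {
        "EnforceWorkGroupConfiguration": False,
        "EngineVersion": {"EffectiveEngineVersion": "Athena engine version 2"},
    },
}

desired = dict(current)  # shallow copy, as the action does
desired["WorkGroupConfiguration"] = {"EnforceWorkGroupConfiguration": True}

patch = jsonpatch.make_patch(current, desired)
print(patch.to_string())
# The generated patch includes an operation against
# /WorkGroupConfiguration/EngineVersion, which Cloud Control rejects because
# EffectiveEngineVersion is a read-only property.
```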
<code>
[start of tools/c7n_awscc/c7n_awscc/actions.py]
1 # Copyright The Cloud Custodian Authors.
2 # SPDX-License-Identifier: Apache-2.0
3 import json
4
5 import jsonpatch
6
7 from c7n.actions import Action
8 from c7n.utils import local_session, type_schema
9
10
11 class ControlAction(Action):
12 def get_identity(self, r):
13 id_fields = self.manager.schema["primaryIdentifier"]
14 idv = {}
15 for idf in id_fields:
16 idn = idf.rsplit("/", 1)[-1]
17 idv[idn] = r[idn]
18 if len(idv) == 1:
19 return idv[idn]
20 return json.dumps(idv)
21
22
23 class Delete(ControlAction):
24 schema = type_schema("delete")
25
26 def process(self, resources):
27 client = local_session(self.manager.session_factory).client("cloudcontrol")
28 for r in resources:
29 client.delete_resource(
30 TypeName=self.manager.resource_type.cfn_type,
31 Identifier=self.get_identity(r),
32 )
33
34
35 class Update(ControlAction):
36 # schema is setup at resource type initialization
37
38 def process(self, resources):
39 client = local_session(self.manager.session_factory).client("cloudcontrol")
40 for r in resources:
41 patch = self.get_patch(r)
42 client.update_resource(
43 TypeName=self.manager.resource_type.cfn_type,
44 Identifier=self.get_identity(r),
45 PatchDocument=patch.to_string(),
46 )
47
48 def get_patch(self, r):
49 tgt = dict(r)
50 for k, v in self.data.items():
51 if k == "type":
52 continue
53 tgt[k] = v
54 return jsonpatch.make_patch(r, tgt)
55
[end of tools/c7n_awscc/c7n_awscc/actions.py]
[start of tools/c7n_awscc/c7n_awscc/manager.py]
1 # Copyright The Cloud Custodian Authors.
2 # SPDX-License-Identifier: Apache-2.0
3 import json
4 from pathlib import Path
5
6 from c7n.filters import Filter # noqa
7
8 from .actions import Delete, Update
9 from .query import CloudControl
10 from .provider import resources
11
12 from c7n.query import TypeInfo, QueryResourceManager
13
14
15 _IndexData = None
16
17
18 def get_index():
19 global _IndexData
20
21 if _IndexData is not None:
22 return _IndexData
23
24 index_path = Path(__file__).parent / "data" / "index.json"
25 _IndexData = json.loads(index_path.read_text(encoding="utf8"))
26 return _IndexData
27
28
29 def initialize_resource(resource_name):
30 """Load a resource class from its name"""
31 rpath = Path(__file__).parent / "data" / f"aws_{resource_name}.json"
32 if not rpath.exists():
33 return None
34 rinfo = json.loads(rpath.read_text(encoding="utf8"))
35
36 type_info = type(
37 "resource_type",
38 (TypeInfo,),
39 dict(
40 id=rinfo["primaryIdentifier"][0].split("/", 1)[-1],
41 service=rinfo["typeName"].split("::")[1].lower(),
42 cfn_type=rinfo["typeName"],
43 ),
44 )
45
46 rname = "_".join([s.lower() for s in rinfo["typeName"].split("::")[1:]])
47 class_name = "".join([s.lower().capitalize() for s in rinfo["typeName"].split("::")[1:]])
48 mod_name = f"c7n_awscc.resources.{resource_name}"
49
50 permissions = rinfo.get("handlers", {}).get("read", {}).get("permissions", []) + rinfo.get(
51 "handlers", {}
52 ).get("list", {}).get("permissions", [])
53
54 rtype = type(
55 class_name,
56 (QueryResourceManager,),
57 dict(
58 __module__=mod_name,
59 source_mapping={"describe": CloudControl},
60 resource_type=type_info,
61 permissions=permissions,
62 schema=rinfo,
63 ),
64 )
65
66 rtype.action_registry.register(
67 "delete",
68 type(
69 class_name + "Delete",
70 (Delete,),
71 {
72 "permissions": rinfo["handlers"]["delete"]["permissions"],
73 "__module__": mod_name,
74 },
75 ),
76 )
77
78 if "update" in rinfo["handlers"]:
79 rtype.action_registry.register(
80 "update",
81 type(
82 class_name + "Update",
83 (Update,),
84 {
85 "schema": get_update_schema(rtype.schema, rname),
86 "permissions": rinfo["handlers"]["update"]["permissions"],
87 "__module__": mod_name,
88 },
89 ),
90 )
91
92 process_supplementary_data(rtype)
93 resources.register(rname, rtype)
94
95 return {rtype.__name__: rtype}
96
97
98 def process_supplementary_data(rtype):
99 idx = get_index()
100 augment = idx["augment"][rtype.resource_type.cfn_type]
101 rtype.resource_type.service = augment.get("service") or ""
102
103
104 def get_update_schema(schema, rname):
105 prop_names = set(schema["properties"])
106 create_only = {s.rsplit("/", 1)[-1] for s in schema.get("createOnlyProperties", ())}
107 read_only = {s.rsplit("/", 1)[-1] for s in schema.get("readOnlyProperties", ())}
108
109 updatable = prop_names - (create_only | read_only)
110 update_schema = {
111 "additionalProperties": False,
112 "properties": {u: schema["properties"][u] for u in updatable},
113 }
114 update_schema["properties"]["type"] = {"enum": ["update"]}
115
116 if "definitions" in schema:
117 update_schema["definitions"] = dict(schema["definitions"])
118 update_refs(update_schema, rname)
119
120 return update_schema
121
122
123 def update_refs(schema_node, rname):
124 for k, v in schema_node.items():
125 if k == "$ref" and v.startswith("#/definitions/"):
126 # mutating while iterating but there's only ref value ever
127 schema_node[k] = "#/definitions/resources/awscc.%s/actions/update/%s" % (
128 rname,
129 v[2:],
130 )
131 elif isinstance(v, dict):
132 update_refs(v, rname)
133
[end of tools/c7n_awscc/c7n_awscc/manager.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tools/c7n_awscc/c7n_awscc/actions.py b/tools/c7n_awscc/c7n_awscc/actions.py
--- a/tools/c7n_awscc/c7n_awscc/actions.py
+++ b/tools/c7n_awscc/c7n_awscc/actions.py
@@ -33,8 +33,16 @@
class Update(ControlAction):
+ """Update a resource.
+
+ Supports either whole key/value replacement via an attribute mapping
+ or jsonpatch `patch:`
+ """
+
# schema is setup at resource type initialization
+ _action_meta = ("type", "patch")
+
def process(self, resources):
client = local_session(self.manager.session_factory).client("cloudcontrol")
for r in resources:
@@ -46,9 +54,23 @@
)
def get_patch(self, r):
- tgt = dict(r)
+ # we support either using json patch to do a partial modification.
+ if self.data.get("patch"):
+ return jsonpatch.JsonPatch(self.data["patch"])
+
+ current = dict(r)
+
+ # the action's schema reflects updatable properties
+ updatable = {k for k in self.schema["properties"] if k not in self._action_meta}
+ for k in list(set(current) - updatable):
+ del current[k]
+
+ # shallow copy for patch generation
+ tgt = dict(current)
+
+ # or whole key value replacement.
for k, v in self.data.items():
if k == "type":
continue
tgt[k] = v
- return jsonpatch.make_patch(r, tgt)
+ return jsonpatch.make_patch(current, tgt)
diff --git a/tools/c7n_awscc/c7n_awscc/manager.py b/tools/c7n_awscc/c7n_awscc/manager.py
--- a/tools/c7n_awscc/c7n_awscc/manager.py
+++ b/tools/c7n_awscc/c7n_awscc/manager.py
@@ -112,6 +112,19 @@
"properties": {u: schema["properties"][u] for u in updatable},
}
update_schema["properties"]["type"] = {"enum": ["update"]}
+ update_schema["properties"]["patch"] = {
+ # This schema is pretty minimal
+ "description": "Json patch to apply to resources",
+ "type": "array",
+ "items": {
+ "type": "object",
+ "required": ["op", "path"],
+ "properties": {
+ "path": {"type": "string"},
+ "op": {"enum": ["add", "remove", "update", "replace", "move", "copy", "test"]},
+ },
+ },
+ }
if "definitions" in schema:
update_schema["definitions"] = dict(schema["definitions"])
| {"golden_diff": "diff --git a/tools/c7n_awscc/c7n_awscc/actions.py b/tools/c7n_awscc/c7n_awscc/actions.py\n--- a/tools/c7n_awscc/c7n_awscc/actions.py\n+++ b/tools/c7n_awscc/c7n_awscc/actions.py\n@@ -33,8 +33,16 @@\n \n \n class Update(ControlAction):\n+ \"\"\"Update a resource.\n+\n+ Supports either whole key/value replacement via an attribute mapping\n+ or jsonpatch `patch:`\n+ \"\"\"\n+\n # schema is setup at resource type initialization\n \n+ _action_meta = (\"type\", \"patch\")\n+\n def process(self, resources):\n client = local_session(self.manager.session_factory).client(\"cloudcontrol\")\n for r in resources:\n@@ -46,9 +54,23 @@\n )\n \n def get_patch(self, r):\n- tgt = dict(r)\n+ # we support either using json patch to do a partial modification.\n+ if self.data.get(\"patch\"):\n+ return jsonpatch.JsonPatch(self.data[\"patch\"])\n+\n+ current = dict(r)\n+\n+ # the action's schema reflects updatable properties\n+ updatable = {k for k in self.schema[\"properties\"] if k not in self._action_meta}\n+ for k in list(set(current) - updatable):\n+ del current[k]\n+\n+ # shallow copy for patch generation\n+ tgt = dict(current)\n+\n+ # or whole key value replacement.\n for k, v in self.data.items():\n if k == \"type\":\n continue\n tgt[k] = v\n- return jsonpatch.make_patch(r, tgt)\n+ return jsonpatch.make_patch(current, tgt)\ndiff --git a/tools/c7n_awscc/c7n_awscc/manager.py b/tools/c7n_awscc/c7n_awscc/manager.py\n--- a/tools/c7n_awscc/c7n_awscc/manager.py\n+++ b/tools/c7n_awscc/c7n_awscc/manager.py\n@@ -112,6 +112,19 @@\n \"properties\": {u: schema[\"properties\"][u] for u in updatable},\n }\n update_schema[\"properties\"][\"type\"] = {\"enum\": [\"update\"]}\n+ update_schema[\"properties\"][\"patch\"] = {\n+ # This schema is pretty minimal\n+ \"description\": \"Json patch to apply to resources\",\n+ \"type\": \"array\",\n+ \"items\": {\n+ \"type\": \"object\",\n+ \"required\": [\"op\", \"path\"],\n+ \"properties\": {\n+ \"path\": {\"type\": \"string\"},\n+ \"op\": {\"enum\": [\"add\", \"remove\", \"update\", \"replace\", \"move\", \"copy\", \"test\"]},\n+ },\n+ },\n+ }\n \n if \"definitions\" in schema:\n update_schema[\"definitions\"] = dict(schema[\"definitions\"])\n", "issue": "Set Athena Workgroup Encryption also tries to change readonly field\n### Describe the bug\n\nHi, we are currently trying to set the encryption for our primary athena workgroups that are unencrypted. 
The policy looks like this:\r\n\r\n```yaml\r\n- name: set-athena-workgroup-encryption\r\n resource: awscc.athena_workgroup\r\n filters:\r\n - type: value\r\n key: Name\r\n value: \"primary\"\r\n - type: value\r\n key: \"WorkGroupConfiguration.ResultConfiguration.EncryptionConfiguration.EncryptionOption\"\r\n value: absent\r\n actions:\r\n - type: update\r\n WorkGroupConfiguration:\r\n EnforceWorkGroupConfiguration: true\r\n - type: update\r\n WorkGroupConfiguration:\r\n ResultConfiguration:\r\n EncryptionConfiguration:\r\n EncryptionOption: SSE_S3\r\n```\r\n\r\nWhen executing this policy we get this error though: \r\n\r\n```\r\n2024-02-15 09:59:25,626: custodian.commands:ERROR Error while executing policy set-athena-workgroup-encryption, continuing\r\nTraceback (most recent call last):\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/c7n/commands.py\", line 307, in run\r\n policy()\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/c7n/policy.py\", line 1357, in __call__\r\n resources = mode.run()\r\n ^^^^^^^^^^\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/c7n/policy.py\", line 364, in run\r\n results = a.process(resources)\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/c7n_awscc/actions.py\", line 43, in process\r\n client.update_resource(\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/botocore/client.py\", line 553, in _api_call\r\n return self._make_api_call(operation_name, kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/botocore/client.py\", line 1009, in _make_api_call\r\n raise error_class(parsed_response, operation_name)\r\nbotocore.exceptions.ClientError: An error occurred (ValidationException) when calling the UpdateResource operation: Invalid patch update: readOnlyProperties [/properties/WorkGroupConfiguration/EngineVersion/EffectiveEngineVersion] cannot be updated\r\n2024-02-15 09:59:25,627: custodian.commands:ERROR The following policies had errors while executing\r\n - set-athena-workgroup-encryption\r\n```\r\n\r\nBut we don't want to update the engine version itself.\n\n### What did you expect to happen?\n\nWe expected the policy to update the encryption setting and not touch the engine version, because the attribute was not specified in our policy\n\n### Cloud Provider\n\nAmazon Web Services (AWS)\n\n### Cloud Custodian version and dependency information\n\n```shell\nCustodian: 0.9.34\r\nPython: 3.11.4 (main, Dec 7 2023, 15:43:41) [GCC 12.3.0]\r\nPlatform: posix.uname_result(sysname='Linux', nodename='marcel', release='6.2.0-39-generic', version='#40-Ubuntu SMP PREEMPT_DYNAMIC Tue Nov 14 14:18:00 UTC 2023', machine='x86_64')\r\nUsing venv: True\r\nDocker: False\r\nInstalled:\r\n\r\nargcomplete==3.2.1\r\nattrs==23.2.0\r\nboto3==1.34.21\r\nbotocore==1.34.21\r\ndocutils==0.18.1\r\nimportlib-metadata==6.11.0\r\njmespath==1.0.1\r\njsonschema==4.21.0\r\njsonschema-specifications==2023.12.1\r\npython-dateutil==2.8.2\r\npyyaml==6.0.1\r\nreferencing==0.31.1\r\nrpds-py==0.17.1\r\ns3transfer==0.10.0\r\nsix==1.16.0\r\ntabulate==0.9.0\r\nurllib3==1.26.18\r\nzipp==3.17.0\n```\n\n\n### Policy\n\n```shell\n- name: set-athena-workgroup-encryption\r\n resource: awscc.athena_workgroup\r\n filters:\r\n - type: value\r\n key: Name\r\n value: \"primary\"\r\n - type: value\r\n key: \"WorkGroupConfiguration.ResultConfiguration.EncryptionConfiguration.EncryptionOption\"\r\n value: absent\r\n actions:\r\n - type: 
update\r\n WorkGroupConfiguration:\r\n EnforceWorkGroupConfiguration: true\r\n - type: update\r\n WorkGroupConfiguration:\r\n ResultConfiguration:\r\n EncryptionConfiguration:\r\n EncryptionOption: SSE_S3\n```\n\n\n### Relevant log/traceback output\n\n```shell\n2024-02-15 09:59:25,626: custodian.commands:ERROR Error while executing policy set-athena-workgroup-encryption, continuing\r\nTraceback (most recent call last):\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/c7n/commands.py\", line 307, in run\r\n policy()\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/c7n/policy.py\", line 1357, in __call__\r\n resources = mode.run()\r\n ^^^^^^^^^^\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/c7n/policy.py\", line 364, in run\r\n results = a.process(resources)\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/c7n_awscc/actions.py\", line 43, in process\r\n client.update_resource(\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/botocore/client.py\", line 553, in _api_call\r\n return self._make_api_call(operation_name, kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/botocore/client.py\", line 1009, in _make_api_call\r\n raise error_class(parsed_response, operation_name)\r\nbotocore.exceptions.ClientError: An error occurred (ValidationException) when calling the UpdateResource operation: Invalid patch update: readOnlyProperties [/properties/WorkGroupConfiguration/EngineVersion/EffectiveEngineVersion] cannot be updated\r\n2024-02-15 09:59:25,627: custodian.commands:ERROR The following policies had errors while executing\r\n - set-athena-workgroup-encryption\n```\n\n\n### Extra information or context\n\nWe tried to use the update attributes like this \r\n\r\n```yaml\r\n - type: update\r\n WorkGroupConfigurationUpdates:\r\n EnforceWorkGroupConfiguration: true\r\n ResultConfigurationUpdates:\r\n EncryptionConfiguration:\r\n EncryptionOption: SSE_S3\r\n```\r\n\r\nbut there is currently a bug in AWS which resets the workgroup right after the operation. 
We are in communication with AWS Support there, but in the meantime we tried to make it work with the approach described above.\n", "before_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\nimport json\n\nimport jsonpatch\n\nfrom c7n.actions import Action\nfrom c7n.utils import local_session, type_schema\n\n\nclass ControlAction(Action):\n def get_identity(self, r):\n id_fields = self.manager.schema[\"primaryIdentifier\"]\n idv = {}\n for idf in id_fields:\n idn = idf.rsplit(\"/\", 1)[-1]\n idv[idn] = r[idn]\n if len(idv) == 1:\n return idv[idn]\n return json.dumps(idv)\n\n\nclass Delete(ControlAction):\n schema = type_schema(\"delete\")\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client(\"cloudcontrol\")\n for r in resources:\n client.delete_resource(\n TypeName=self.manager.resource_type.cfn_type,\n Identifier=self.get_identity(r),\n )\n\n\nclass Update(ControlAction):\n # schema is setup at resource type initialization\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client(\"cloudcontrol\")\n for r in resources:\n patch = self.get_patch(r)\n client.update_resource(\n TypeName=self.manager.resource_type.cfn_type,\n Identifier=self.get_identity(r),\n PatchDocument=patch.to_string(),\n )\n\n def get_patch(self, r):\n tgt = dict(r)\n for k, v in self.data.items():\n if k == \"type\":\n continue\n tgt[k] = v\n return jsonpatch.make_patch(r, tgt)\n", "path": "tools/c7n_awscc/c7n_awscc/actions.py"}, {"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\nimport json\nfrom pathlib import Path\n\nfrom c7n.filters import Filter # noqa\n\nfrom .actions import Delete, Update\nfrom .query import CloudControl\nfrom .provider import resources\n\nfrom c7n.query import TypeInfo, QueryResourceManager\n\n\n_IndexData = None\n\n\ndef get_index():\n global _IndexData\n\n if _IndexData is not None:\n return _IndexData\n\n index_path = Path(__file__).parent / \"data\" / \"index.json\"\n _IndexData = json.loads(index_path.read_text(encoding=\"utf8\"))\n return _IndexData\n\n\ndef initialize_resource(resource_name):\n \"\"\"Load a resource class from its name\"\"\"\n rpath = Path(__file__).parent / \"data\" / f\"aws_{resource_name}.json\"\n if not rpath.exists():\n return None\n rinfo = json.loads(rpath.read_text(encoding=\"utf8\"))\n\n type_info = type(\n \"resource_type\",\n (TypeInfo,),\n dict(\n id=rinfo[\"primaryIdentifier\"][0].split(\"/\", 1)[-1],\n service=rinfo[\"typeName\"].split(\"::\")[1].lower(),\n cfn_type=rinfo[\"typeName\"],\n ),\n )\n\n rname = \"_\".join([s.lower() for s in rinfo[\"typeName\"].split(\"::\")[1:]])\n class_name = \"\".join([s.lower().capitalize() for s in rinfo[\"typeName\"].split(\"::\")[1:]])\n mod_name = f\"c7n_awscc.resources.{resource_name}\"\n\n permissions = rinfo.get(\"handlers\", {}).get(\"read\", {}).get(\"permissions\", []) + rinfo.get(\n \"handlers\", {}\n ).get(\"list\", {}).get(\"permissions\", [])\n\n rtype = type(\n class_name,\n (QueryResourceManager,),\n dict(\n __module__=mod_name,\n source_mapping={\"describe\": CloudControl},\n resource_type=type_info,\n permissions=permissions,\n schema=rinfo,\n ),\n )\n\n rtype.action_registry.register(\n \"delete\",\n type(\n class_name + \"Delete\",\n (Delete,),\n {\n \"permissions\": rinfo[\"handlers\"][\"delete\"][\"permissions\"],\n \"__module__\": mod_name,\n },\n ),\n )\n\n if \"update\" in rinfo[\"handlers\"]:\n 
rtype.action_registry.register(\n \"update\",\n type(\n class_name + \"Update\",\n (Update,),\n {\n \"schema\": get_update_schema(rtype.schema, rname),\n \"permissions\": rinfo[\"handlers\"][\"update\"][\"permissions\"],\n \"__module__\": mod_name,\n },\n ),\n )\n\n process_supplementary_data(rtype)\n resources.register(rname, rtype)\n\n return {rtype.__name__: rtype}\n\n\ndef process_supplementary_data(rtype):\n idx = get_index()\n augment = idx[\"augment\"][rtype.resource_type.cfn_type]\n rtype.resource_type.service = augment.get(\"service\") or \"\"\n\n\ndef get_update_schema(schema, rname):\n prop_names = set(schema[\"properties\"])\n create_only = {s.rsplit(\"/\", 1)[-1] for s in schema.get(\"createOnlyProperties\", ())}\n read_only = {s.rsplit(\"/\", 1)[-1] for s in schema.get(\"readOnlyProperties\", ())}\n\n updatable = prop_names - (create_only | read_only)\n update_schema = {\n \"additionalProperties\": False,\n \"properties\": {u: schema[\"properties\"][u] for u in updatable},\n }\n update_schema[\"properties\"][\"type\"] = {\"enum\": [\"update\"]}\n\n if \"definitions\" in schema:\n update_schema[\"definitions\"] = dict(schema[\"definitions\"])\n update_refs(update_schema, rname)\n\n return update_schema\n\n\ndef update_refs(schema_node, rname):\n for k, v in schema_node.items():\n if k == \"$ref\" and v.startswith(\"#/definitions/\"):\n # mutating while iterating but there's only ref value ever\n schema_node[k] = \"#/definitions/resources/awscc.%s/actions/update/%s\" % (\n rname,\n v[2:],\n )\n elif isinstance(v, dict):\n update_refs(v, rname)\n", "path": "tools/c7n_awscc/c7n_awscc/manager.py"}]} | 3,970 | 653 |
gh_patches_debug_40894 | rasdani/github-patches | git_diff | dask__distributed-3786 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Version mismatch warning is a little scary
## Background
When clients/scheduler/workers have mismatched versions, users get an informative error message like the following:
```
/home/mrocklin/workspace/distributed/distributed/client.py:1079: VersionMismatchWarning: Mismatched versions found
blosc
+---------------------------+---------+
| | version |
+---------------------------+---------+
| client | None |
| scheduler | 1.9.1 |
| tcp://172.31.15.170:46853 | 1.9.1 |
| tcp://172.31.18.92:41153 | 1.9.1 |
| tcp://172.31.42.33:42009 | 1.9.1 |
| tcp://172.31.7.159:38461 | 1.9.1 |
+---------------------------+---------+
cloudpickle
+---------------------------+---------+
| | version |
+---------------------------+---------+
| client | 1.3.0 |
| scheduler | 1.4.0 |
| tcp://172.31.15.170:46853 | 1.4.0 |
| tcp://172.31.18.92:41153 | 1.4.0 |
| tcp://172.31.42.33:42009 | 1.4.0 |
| tcp://172.31.7.159:38461 | 1.4.0 |
+---------------------------+---------+
dask
+---------------------------+---------------------+
| | version |
+---------------------------+---------------------+
| client | 2.14.0+34.g8ab7f942 |
| scheduler | 2.15.0 |
| tcp://172.31.15.170:46853 | 2.15.0 |
| tcp://172.31.18.92:41153 | 2.15.0 |
| tcp://172.31.42.33:42009 | 2.15.0 |
| tcp://172.31.7.159:38461 | 2.15.0 |
+---------------------------+---------------------+
distributed
+---------------------------+---------------------+
| | version |
+---------------------------+---------------------+
| client | 2.14.0+47.gb4dc9c64 |
| scheduler | 2.15.0 |
| tcp://172.31.15.170:46853 | 2.15.0 |
| tcp://172.31.18.92:41153 | 2.15.0 |
| tcp://172.31.42.33:42009 | 2.15.0 |
| tcp://172.31.7.159:38461 | 2.15.0 |
+---------------------------+---------------------+
lz4
+---------------------------+---------+
| | version |
+---------------------------+---------+
| client | 2.2.1 |
| scheduler | 3.0.2 |
| tcp://172.31.15.170:46853 | 3.0.2 |
| tcp://172.31.18.92:41153 | 3.0.2 |
| tcp://172.31.42.33:42009 | 3.0.2 |
| tcp://172.31.7.159:38461 | 3.0.2 |
+---------------------------+---------+
msgpack
+---------------------------+---------+
| | version |
+---------------------------+---------+
| client | 0.6.2 |
| scheduler | 1.0.0 |
| tcp://172.31.15.170:46853 | 1.0.0 |
| tcp://172.31.18.92:41153 | 1.0.0 |
| tcp://172.31.42.33:42009 | 1.0.0 |
| tcp://172.31.7.159:38461 | 1.0.0 |
+---------------------------+---------+
python
+---------------------------+---------------+
| | version |
+---------------------------+---------------+
| client | 3.7.6.final.0 |
| scheduler | 3.7.4.final.0 |
| tcp://172.31.15.170:46853 | 3.7.4.final.0 |
| tcp://172.31.18.92:41153 | 3.7.4.final.0 |
| tcp://172.31.42.33:42009 | 3.7.4.final.0 |
| tcp://172.31.7.159:38461 | 3.7.4.final.0 |
+---------------------------+---------------+
warnings.warn(version_module.VersionMismatchWarning(msg[0]["warning"]))
```
This is generally pretty great. We used to get a ton of github issues that reduced down to version mismatches, and now we don't. Hooray for informative error messages.
## Moving forward
However, I've run into a couple of issues that arise with these messages in practice, where I think that we might be able to improve them a bit.
1. They can get very long, especially if you have lots of workers.
2. We don't call out really important issues in relation to less important issues. It's entirely ok if your `msgpack` version is a little off, but probably not ok if some machines have `lz4` and some don't.
So I wonder if we might reorganize this message a bit. We might have something like the following:
```
+-----------+----------+------------+------------+
| Package | client | scheduler | workers |
+-----------+----------+------------+------------+
| python | 3.7 | 3.8 | {3.7, 3.8} |
| lz4 | ... | ... | ... |
| msgpack | | ... | ... |
+-----------+----------+------------+------------+
Notes:
- msgpack: Variation is ok, as long as everything is above 0.6
- lz4: Variation is ok, but missing libraries are not
- python: Variation is sometimes ok, sometimes not. It depends on your workloads
```
We pack down the one-line-per-worker policy, and instead include sets of versions in the table if necessary. This makes it a bit harder to debug if one of the workers is mismatched, but I think that this behavior isn't common.
We include optional prose around each of the libraries and include that prose if that library is found to be mismatched.
</issue>
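A rough sketch of the aggregation the proposal implies: collapse the per-node rows into one line per package, with worker versions reduced to a set (the names follow `error_message` in the listing below; the exact table rendering and the per-package notes are left open):

```python
from collections import defaultdict


def collapse_versions(scheduler, workers, client):
    """Group versions as client / scheduler / {worker versions} per package."""
    summary = defaultdict(lambda: {"client": None, "scheduler": None, "workers": set()})
    for source, info in [("client", client), ("scheduler", scheduler)]:
        for pkg, version in ((info or {}).get("packages") or {}).items():
            summary[pkg][source] = version
    for info in (workers or {}).values():
        for pkg, version in ((info or {}).get("packages") or {}).items():
            summary[pkg]["workers"].add(version)
    return summary
```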
<code>
[start of distributed/versions.py]
1 """ utilities for package version introspection """
2
3 from __future__ import print_function, division, absolute_import
4
5 from collections import defaultdict
6 import platform
7 import struct
8 import os
9 import sys
10 import importlib
11
12
13 required_packages = [
14 ("dask", lambda p: p.__version__),
15 ("distributed", lambda p: p.__version__),
16 ("msgpack", lambda p: ".".join([str(v) for v in p.version])),
17 ("cloudpickle", lambda p: p.__version__),
18 ("tornado", lambda p: p.version),
19 ("toolz", lambda p: p.__version__),
20 ]
21
22 optional_packages = [
23 ("numpy", lambda p: p.__version__),
24 ("lz4", lambda p: p.__version__),
25 ("blosc", lambda p: p.__version__),
26 ]
27
28
29 # only these scheduler packages will be checked for version mismatch
30 scheduler_relevant_packages = set(pkg for pkg, _ in required_packages) | set(
31 ["lz4", "blosc"]
32 )
33
34
35 def get_versions(packages=None):
36 """
37 Return basic information on our software installation, and our installed versions of packages.
38 """
39 if packages is None:
40 packages = []
41
42 d = {
43 "host": get_system_info(),
44 "packages": get_package_info(
45 required_packages + optional_packages + list(packages)
46 ),
47 }
48
49 return d
50
51
52 def get_system_info():
53 (sysname, nodename, release, version, machine, processor) = platform.uname()
54 host = {
55 "python": "%d.%d.%d.%s.%s" % sys.version_info[:],
56 "python-bits": struct.calcsize("P") * 8,
57 "OS": "%s" % sysname,
58 "OS-release": "%s" % release,
59 "machine": "%s" % machine,
60 "processor": "%s" % processor,
61 "byteorder": "%s" % sys.byteorder,
62 "LC_ALL": "%s" % os.environ.get("LC_ALL", "None"),
63 "LANG": "%s" % os.environ.get("LANG", "None"),
64 }
65
66 return host
67
68
69 def version_of_package(pkg):
70 """ Try a variety of common ways to get the version of a package """
71 from .utils import ignoring
72
73 with ignoring(AttributeError):
74 return pkg.__version__
75 with ignoring(AttributeError):
76 return str(pkg.version)
77 with ignoring(AttributeError):
78 return ".".join(map(str, pkg.version_info))
79 return None
80
81
82 def get_package_info(pkgs):
83 """ get package versions for the passed required & optional packages """
84
85 pversions = [("python", ".".join(map(str, sys.version_info)))]
86 for pkg in pkgs:
87 if isinstance(pkg, (tuple, list)):
88 modname, ver_f = pkg
89 else:
90 modname = pkg
91 ver_f = version_of_package
92
93 if ver_f is None:
94 ver_f = version_of_package
95
96 try:
97 mod = importlib.import_module(modname)
98 ver = ver_f(mod)
99 pversions.append((modname, ver))
100 except Exception:
101 pversions.append((modname, None))
102
103 return dict(pversions)
104
105
106 def error_message(scheduler, workers, client, client_name="client"):
107 from .utils import asciitable
108
109 nodes = {**{client_name: client}, **{"scheduler": scheduler}, **workers}
110
111 # Hold all versions, e.g. versions["scheduler"]["distributed"] = 2.9.3
112 node_packages = defaultdict(dict)
113
114 # Collect all package versions
115 packages = set()
116 for node, info in nodes.items():
117 if info is None or not (isinstance(info, dict)) or "packages" not in info:
118 node_packages[node] = defaultdict(lambda: "UNKNOWN")
119 else:
120 node_packages[node] = defaultdict(lambda: "MISSING")
121 for pkg, version in info["packages"].items():
122 node_packages[node][pkg] = version
123 packages.add(pkg)
124
125 errs = []
126 for pkg in sorted(packages):
127 versions = set(
128 node_packages[node][pkg]
129 for node in nodes
130 if node != "scheduler" or pkg in scheduler_relevant_packages
131 )
132 if len(versions) <= 1:
133 continue
134 rows = [
135 (node_name, node_packages[node_name][pkg]) for node_name in nodes.keys()
136 ]
137 errs.append("%s\n%s" % (pkg, asciitable(["", "version"], rows)))
138 if errs:
139 return "Mismatched versions found\n" "\n" "%s" % ("\n\n".join(errs))
140 else:
141 return ""
142
143
144 class VersionMismatchWarning(Warning):
145 """Indicates version mismatch between nodes"""
146
[end of distributed/versions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/distributed/versions.py b/distributed/versions.py
--- a/distributed/versions.py
+++ b/distributed/versions.py
@@ -2,7 +2,6 @@
from __future__ import print_function, division, absolute_import
-from collections import defaultdict
import platform
import struct
import os
@@ -32,6 +31,14 @@
)
+# notes to be displayed for mismatch packages
+notes_mismatch_package = {
+ "msgpack": "Variation is ok, as long as everything is above 0.6",
+ "lz4": "Variation is ok, but missing libraries are not",
+ "python": "Variation is sometimes ok, sometimes not. It depends on your workloads",
+}
+
+
def get_versions(packages=None):
"""
Return basic information on our software installation, and our installed versions of packages.
@@ -106,37 +113,56 @@
def error_message(scheduler, workers, client, client_name="client"):
from .utils import asciitable
- nodes = {**{client_name: client}, **{"scheduler": scheduler}, **workers}
-
- # Hold all versions, e.g. versions["scheduler"]["distributed"] = 2.9.3
- node_packages = defaultdict(dict)
+ client = client.get("packages") if client else "UNKNOWN"
+ scheduler = scheduler.get("packages") if scheduler else "UNKNOWN"
+ workers = {k: v.get("packages") if v else "UNKNOWN" for k, v in workers.items()}
- # Collect all package versions
packages = set()
- for node, info in nodes.items():
- if info is None or not (isinstance(info, dict)) or "packages" not in info:
- node_packages[node] = defaultdict(lambda: "UNKNOWN")
- else:
- node_packages[node] = defaultdict(lambda: "MISSING")
- for pkg, version in info["packages"].items():
- node_packages[node][pkg] = version
- packages.add(pkg)
+ packages.update(client)
+ packages.update(scheduler)
+ for worker in workers:
+ packages.update(workers.get(worker))
errs = []
+ notes = []
for pkg in sorted(packages):
- versions = set(
- node_packages[node][pkg]
- for node in nodes
- if node != "scheduler" or pkg in scheduler_relevant_packages
+ versions = set()
+ scheduler_version = (
+ scheduler.get(pkg, "MISSING") if isinstance(scheduler, dict) else scheduler
+ )
+ if pkg in scheduler_relevant_packages:
+ versions.add(scheduler_version)
+
+ client_version = (
+ client.get(pkg, "MISSING") if isinstance(client, dict) else client
)
+ versions.add(client_version)
+
+ worker_versions = set(
+ workers[w].get(pkg, "MISSING")
+ if isinstance(workers[w], dict)
+ else workers[w]
+ for w in workers
+ )
+ versions |= worker_versions
+
if len(versions) <= 1:
continue
- rows = [
- (node_name, node_packages[node_name][pkg]) for node_name in nodes.keys()
- ]
- errs.append("%s\n%s" % (pkg, asciitable(["", "version"], rows)))
+ if len(worker_versions) == 1:
+ worker_versions = list(worker_versions)[0]
+ elif len(worker_versions) == 0:
+ worker_versions = None
+
+ errs.append((pkg, client_version, scheduler_version, worker_versions))
+ if pkg in notes_mismatch_package.keys():
+ notes.append(f"- {pkg}: {notes_mismatch_package[pkg]}")
+
if errs:
- return "Mismatched versions found\n" "\n" "%s" % ("\n\n".join(errs))
+ err_table = asciitable(["Package", client_name, "scheduler", "workers"], errs)
+ err_msg = f"Mismatched versions found\n\n{err_table}"
+ if notes:
+ err_msg += "\nNotes: \n{}".format("\n".join(notes))
+ return err_msg
else:
return ""
| {"golden_diff": "diff --git a/distributed/versions.py b/distributed/versions.py\n--- a/distributed/versions.py\n+++ b/distributed/versions.py\n@@ -2,7 +2,6 @@\n \n from __future__ import print_function, division, absolute_import\n \n-from collections import defaultdict\n import platform\n import struct\n import os\n@@ -32,6 +31,14 @@\n )\n \n \n+# notes to be displayed for mismatch packages\n+notes_mismatch_package = {\n+ \"msgpack\": \"Variation is ok, as long as everything is above 0.6\",\n+ \"lz4\": \"Variation is ok, but missing libraries are not\",\n+ \"python\": \"Variation is sometimes ok, sometimes not. It depends on your workloads\",\n+}\n+\n+\n def get_versions(packages=None):\n \"\"\"\n Return basic information on our software installation, and our installed versions of packages.\n@@ -106,37 +113,56 @@\n def error_message(scheduler, workers, client, client_name=\"client\"):\n from .utils import asciitable\n \n- nodes = {**{client_name: client}, **{\"scheduler\": scheduler}, **workers}\n-\n- # Hold all versions, e.g. versions[\"scheduler\"][\"distributed\"] = 2.9.3\n- node_packages = defaultdict(dict)\n+ client = client.get(\"packages\") if client else \"UNKNOWN\"\n+ scheduler = scheduler.get(\"packages\") if scheduler else \"UNKNOWN\"\n+ workers = {k: v.get(\"packages\") if v else \"UNKNOWN\" for k, v in workers.items()}\n \n- # Collect all package versions\n packages = set()\n- for node, info in nodes.items():\n- if info is None or not (isinstance(info, dict)) or \"packages\" not in info:\n- node_packages[node] = defaultdict(lambda: \"UNKNOWN\")\n- else:\n- node_packages[node] = defaultdict(lambda: \"MISSING\")\n- for pkg, version in info[\"packages\"].items():\n- node_packages[node][pkg] = version\n- packages.add(pkg)\n+ packages.update(client)\n+ packages.update(scheduler)\n+ for worker in workers:\n+ packages.update(workers.get(worker))\n \n errs = []\n+ notes = []\n for pkg in sorted(packages):\n- versions = set(\n- node_packages[node][pkg]\n- for node in nodes\n- if node != \"scheduler\" or pkg in scheduler_relevant_packages\n+ versions = set()\n+ scheduler_version = (\n+ scheduler.get(pkg, \"MISSING\") if isinstance(scheduler, dict) else scheduler\n+ )\n+ if pkg in scheduler_relevant_packages:\n+ versions.add(scheduler_version)\n+\n+ client_version = (\n+ client.get(pkg, \"MISSING\") if isinstance(client, dict) else client\n )\n+ versions.add(client_version)\n+\n+ worker_versions = set(\n+ workers[w].get(pkg, \"MISSING\")\n+ if isinstance(workers[w], dict)\n+ else workers[w]\n+ for w in workers\n+ )\n+ versions |= worker_versions\n+\n if len(versions) <= 1:\n continue\n- rows = [\n- (node_name, node_packages[node_name][pkg]) for node_name in nodes.keys()\n- ]\n- errs.append(\"%s\\n%s\" % (pkg, asciitable([\"\", \"version\"], rows)))\n+ if len(worker_versions) == 1:\n+ worker_versions = list(worker_versions)[0]\n+ elif len(worker_versions) == 0:\n+ worker_versions = None\n+\n+ errs.append((pkg, client_version, scheduler_version, worker_versions))\n+ if pkg in notes_mismatch_package.keys():\n+ notes.append(f\"- {pkg}: {notes_mismatch_package[pkg]}\")\n+\n if errs:\n- return \"Mismatched versions found\\n\" \"\\n\" \"%s\" % (\"\\n\\n\".join(errs))\n+ err_table = asciitable([\"Package\", client_name, \"scheduler\", \"workers\"], errs)\n+ err_msg = f\"Mismatched versions found\\n\\n{err_table}\"\n+ if notes:\n+ err_msg += \"\\nNotes: \\n{}\".format(\"\\n\".join(notes))\n+ return err_msg\n else:\n return \"\"\n", "issue": "Version mismatch warning is a little scary\n## 
Background\r\n\r\nWhen clients/scheduler/workers have mismatched versions, users get an informative error message like the following:\r\n\r\n```\r\n/home/mrocklin/workspace/distributed/distributed/client.py:1079: VersionMismatchWarning: Mismatched versions found\r\n\r\nblosc\r\n+---------------------------+---------+\r\n| | version |\r\n+---------------------------+---------+\r\n| client | None |\r\n| scheduler | 1.9.1 |\r\n| tcp://172.31.15.170:46853 | 1.9.1 |\r\n| tcp://172.31.18.92:41153 | 1.9.1 |\r\n| tcp://172.31.42.33:42009 | 1.9.1 |\r\n| tcp://172.31.7.159:38461 | 1.9.1 |\r\n+---------------------------+---------+\r\n\r\ncloudpickle\r\n+---------------------------+---------+\r\n| | version |\r\n+---------------------------+---------+\r\n| client | 1.3.0 |\r\n| scheduler | 1.4.0 |\r\n| tcp://172.31.15.170:46853 | 1.4.0 |\r\n| tcp://172.31.18.92:41153 | 1.4.0 |\r\n| tcp://172.31.42.33:42009 | 1.4.0 |\r\n| tcp://172.31.7.159:38461 | 1.4.0 |\r\n+---------------------------+---------+\r\n\r\ndask\r\n+---------------------------+---------------------+\r\n| | version |\r\n+---------------------------+---------------------+\r\n| client | 2.14.0+34.g8ab7f942 |\r\n| scheduler | 2.15.0 |\r\n| tcp://172.31.15.170:46853 | 2.15.0 |\r\n| tcp://172.31.18.92:41153 | 2.15.0 |\r\n| tcp://172.31.42.33:42009 | 2.15.0 |\r\n| tcp://172.31.7.159:38461 | 2.15.0 |\r\n+---------------------------+---------------------+\r\n\r\ndistributed\r\n+---------------------------+---------------------+\r\n| | version |\r\n+---------------------------+---------------------+\r\n| client | 2.14.0+47.gb4dc9c64 |\r\n| scheduler | 2.15.0 |\r\n| tcp://172.31.15.170:46853 | 2.15.0 |\r\n| tcp://172.31.18.92:41153 | 2.15.0 |\r\n| tcp://172.31.42.33:42009 | 2.15.0 |\r\n| tcp://172.31.7.159:38461 | 2.15.0 |\r\n+---------------------------+---------------------+\r\n\r\nlz4\r\n+---------------------------+---------+\r\n| | version |\r\n+---------------------------+---------+\r\n| client | 2.2.1 |\r\n| scheduler | 3.0.2 |\r\n| tcp://172.31.15.170:46853 | 3.0.2 |\r\n| tcp://172.31.18.92:41153 | 3.0.2 |\r\n| tcp://172.31.42.33:42009 | 3.0.2 |\r\n| tcp://172.31.7.159:38461 | 3.0.2 |\r\n+---------------------------+---------+\r\n\r\nmsgpack\r\n+---------------------------+---------+\r\n| | version |\r\n+---------------------------+---------+\r\n| client | 0.6.2 |\r\n| scheduler | 1.0.0 |\r\n| tcp://172.31.15.170:46853 | 1.0.0 |\r\n| tcp://172.31.18.92:41153 | 1.0.0 |\r\n| tcp://172.31.42.33:42009 | 1.0.0 |\r\n| tcp://172.31.7.159:38461 | 1.0.0 |\r\n+---------------------------+---------+\r\n\r\npython\r\n+---------------------------+---------------+\r\n| | version |\r\n+---------------------------+---------------+\r\n| client | 3.7.6.final.0 |\r\n| scheduler | 3.7.4.final.0 |\r\n| tcp://172.31.15.170:46853 | 3.7.4.final.0 |\r\n| tcp://172.31.18.92:41153 | 3.7.4.final.0 |\r\n| tcp://172.31.42.33:42009 | 3.7.4.final.0 |\r\n| tcp://172.31.7.159:38461 | 3.7.4.final.0 |\r\n+---------------------------+---------------+\r\n warnings.warn(version_module.VersionMismatchWarning(msg[0][\"warning\"]))\r\n```\r\n\r\nThis is generally pretty great. We used to get a ton of github issues that reduced down to version mismatches, and now we don't. Hooray for informative error messages.\r\n\r\n## Moving forward\r\n\r\nHowever, I've run into a couple of issues that arise with these messages in practice, where I think that we might be able to improve them a bit.\r\n\r\n1. They can get very long, especially if you have lots of workers.\r\n2. 
We don't call out really important issues in relation to less important issues. It's entirely ok if your `msgpack` version is a little off, but probably not ok if some machines have `lz4` and some don't. \r\n\r\nSo I wonder if we might reorganize this message a bit. We might have something like the following:\r\n\r\n```\r\n+-----------+----------+------------+------------+\r\n| Package | client | scheduler | workers |\r\n+-----------+----------+------------+------------+\r\n| python | 3.7 | 3.8 | {3.7, 3.8} |\r\n| lz4 | ... | ... | ... |\r\n| msgpack | | ... | ... |\r\n+-----------+----------+------------+------------+\r\n\r\nNotes:\r\n\r\n- msgpack: Variation is ok, as long as everything is above 0.6\r\n- lz4: Variation is ok, but missing libraries are not\r\n- python: Variation is sometimes ok, sometimes not. It depends on your workloads\r\n```\r\n\r\nWe pack down the one-line-per-worker policy, and instead include sets of versions in the table if necessary. This makes it a bit harder to debug if one of the workers is mismatched, but I think that this behavior isn't common. \r\n\r\nWe include optional prose around each of the libraries and include that prose if that library is found to be mismatched.\n", "before_files": [{"content": "\"\"\" utilities for package version introspection \"\"\"\n\nfrom __future__ import print_function, division, absolute_import\n\nfrom collections import defaultdict\nimport platform\nimport struct\nimport os\nimport sys\nimport importlib\n\n\nrequired_packages = [\n (\"dask\", lambda p: p.__version__),\n (\"distributed\", lambda p: p.__version__),\n (\"msgpack\", lambda p: \".\".join([str(v) for v in p.version])),\n (\"cloudpickle\", lambda p: p.__version__),\n (\"tornado\", lambda p: p.version),\n (\"toolz\", lambda p: p.__version__),\n]\n\noptional_packages = [\n (\"numpy\", lambda p: p.__version__),\n (\"lz4\", lambda p: p.__version__),\n (\"blosc\", lambda p: p.__version__),\n]\n\n\n# only these scheduler packages will be checked for version mismatch\nscheduler_relevant_packages = set(pkg for pkg, _ in required_packages) | set(\n [\"lz4\", \"blosc\"]\n)\n\n\ndef get_versions(packages=None):\n \"\"\"\n Return basic information on our software installation, and our installed versions of packages.\n \"\"\"\n if packages is None:\n packages = []\n\n d = {\n \"host\": get_system_info(),\n \"packages\": get_package_info(\n required_packages + optional_packages + list(packages)\n ),\n }\n\n return d\n\n\ndef get_system_info():\n (sysname, nodename, release, version, machine, processor) = platform.uname()\n host = {\n \"python\": \"%d.%d.%d.%s.%s\" % sys.version_info[:],\n \"python-bits\": struct.calcsize(\"P\") * 8,\n \"OS\": \"%s\" % sysname,\n \"OS-release\": \"%s\" % release,\n \"machine\": \"%s\" % machine,\n \"processor\": \"%s\" % processor,\n \"byteorder\": \"%s\" % sys.byteorder,\n \"LC_ALL\": \"%s\" % os.environ.get(\"LC_ALL\", \"None\"),\n \"LANG\": \"%s\" % os.environ.get(\"LANG\", \"None\"),\n }\n\n return host\n\n\ndef version_of_package(pkg):\n \"\"\" Try a variety of common ways to get the version of a package \"\"\"\n from .utils import ignoring\n\n with ignoring(AttributeError):\n return pkg.__version__\n with ignoring(AttributeError):\n return str(pkg.version)\n with ignoring(AttributeError):\n return \".\".join(map(str, pkg.version_info))\n return None\n\n\ndef get_package_info(pkgs):\n \"\"\" get package versions for the passed required & optional packages \"\"\"\n\n pversions = [(\"python\", \".\".join(map(str, sys.version_info)))]\n for 
pkg in pkgs:\n if isinstance(pkg, (tuple, list)):\n modname, ver_f = pkg\n else:\n modname = pkg\n ver_f = version_of_package\n\n if ver_f is None:\n ver_f = version_of_package\n\n try:\n mod = importlib.import_module(modname)\n ver = ver_f(mod)\n pversions.append((modname, ver))\n except Exception:\n pversions.append((modname, None))\n\n return dict(pversions)\n\n\ndef error_message(scheduler, workers, client, client_name=\"client\"):\n from .utils import asciitable\n\n nodes = {**{client_name: client}, **{\"scheduler\": scheduler}, **workers}\n\n # Hold all versions, e.g. versions[\"scheduler\"][\"distributed\"] = 2.9.3\n node_packages = defaultdict(dict)\n\n # Collect all package versions\n packages = set()\n for node, info in nodes.items():\n if info is None or not (isinstance(info, dict)) or \"packages\" not in info:\n node_packages[node] = defaultdict(lambda: \"UNKNOWN\")\n else:\n node_packages[node] = defaultdict(lambda: \"MISSING\")\n for pkg, version in info[\"packages\"].items():\n node_packages[node][pkg] = version\n packages.add(pkg)\n\n errs = []\n for pkg in sorted(packages):\n versions = set(\n node_packages[node][pkg]\n for node in nodes\n if node != \"scheduler\" or pkg in scheduler_relevant_packages\n )\n if len(versions) <= 1:\n continue\n rows = [\n (node_name, node_packages[node_name][pkg]) for node_name in nodes.keys()\n ]\n errs.append(\"%s\\n%s\" % (pkg, asciitable([\"\", \"version\"], rows)))\n if errs:\n return \"Mismatched versions found\\n\" \"\\n\" \"%s\" % (\"\\n\\n\".join(errs))\n else:\n return \"\"\n\n\nclass VersionMismatchWarning(Warning):\n \"\"\"Indicates version mismatch between nodes\"\"\"\n", "path": "distributed/versions.py"}]} | 3,670 | 934 |
gh_patches_debug_35936 | rasdani/github-patches | git_diff | ESMCI__cime-3019 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Comment at the top of env_batch.xml is wrong
The comment at the top of env_batch.xml (imported from config/config_headers.xml) is:
> These variables may be changed anytime during a run, they
> control arguments to the batch submit command.
>
However, if I submit a job and then try:
```
./xmlchange JOB_WALLCLOCK_TIME=40:00:00
./xmlchange JOB_QUEUE=long
```
My job is killed (if it is in the queue when I make the change) or resubmit fails and I see the following message in my output file:
```
goldy@hobart^GFile /scratch/cluster/goldy/FQ3D_ne5pg3_ne5pg3_mg37/LockedFiles/env_batch.xml has been modified
found difference in JOB_WALLCLOCK_TIME : case '40:00:00' locked '80:00:00'
found difference in USER_REQUESTED_WALLTIME : case '40:00:00' locked ''
ERROR: Batch configuration has changed, please run case.setup --reset
```
The CIME documentation does not seem to mention the ability to change these settings (cf. http://esmci.github.io/cime/users_guide/running-a-case.html?highlight=job_wallclock_time).
Please fix the header with correct information.
Question: Is there a way to see the `<header>` information via `xmlquery`?
Reported by a CESM 2.0 user.
</issue>
<code>
[start of scripts/lib/CIME/case/case_submit.py]
1 #!/usr/bin/env python
2
3 """
4 case.submit - Submit a cesm workflow to the queueing system or run it
5 if there is no queueing system. A cesm workflow may include multiple
6 jobs.
7 submit, check_case and check_da_settings are members of class Case in file case.py
8 """
9 import socket
10 from six.moves import configparser
11 from CIME.XML.standard_module_setup import *
12 from CIME.utils import expect, run_and_log_case_status, verbatim_success_msg
13 from CIME.locked_files import unlock_file, lock_file
14 from CIME.test_status import *
15
16 logger = logging.getLogger(__name__)
17
18 def _build_prereq_str(case, prev_job_ids):
19 delimiter = case.get_value("depend_separator")
20 prereq_str = ""
21 for job_id in prev_job_ids.values():
22 prereq_str += str(job_id) + delimiter
23 return prereq_str[:-1]
24
25 def _submit(case, job=None, no_batch=False, prereq=None, allow_fail=False, resubmit=False,
26 resubmit_immediate=False, skip_pnl=False, mail_user=None, mail_type=None,
27 batch_args=None):
28 if job is None:
29 job = case.get_primary_job()
30
31 rundir = case.get_value("RUNDIR")
32 if job != "case.test":
33 continue_run = case.get_value("CONTINUE_RUN")
34 expect(os.path.isdir(rundir) or not continue_run,
35 " CONTINUE_RUN is true but RUNDIR {} does not exist".format(rundir))
36
37 # if case.submit is called with the no_batch flag then we assume that this
38 # flag will stay in effect for the duration of the RESUBMITs
39 env_batch = case.get_env("batch")
40 if resubmit:
41 if env_batch.get_batch_system_type() == "none":
42 no_batch = True
43
44 # This is a resubmission, do not reinitialize test values
45 if job == "case.test":
46 case.set_value("IS_FIRST_RUN", False)
47
48 resub = case.get_value("RESUBMIT")
49 logger.info("Submitting job '{}', resubmit={:d}".format(job, resub))
50 case.set_value("RESUBMIT", resub-1)
51 if case.get_value("RESUBMIT_SETS_CONTINUE_RUN"):
52 case.set_value("CONTINUE_RUN", True)
53
54 else:
55 if job == "case.test":
56 case.set_value("IS_FIRST_RUN", True)
57
58 if no_batch:
59 batch_system = "none"
60 else:
61 batch_system = env_batch.get_batch_system_type()
62
63 case.set_value("BATCH_SYSTEM", batch_system)
64
65 env_batch_has_changed = False
66 try:
67 case.check_lockedfile(os.path.basename(env_batch.filename))
68 except SystemExit:
69 env_batch_has_changed = True
70
71 if env_batch.get_batch_system_type() != "none" and env_batch_has_changed:
72 # May need to regen batch files if user made batch setting changes (e.g. walltime, queue, etc)
73 logger.warning(\
74 """
75 env_batch.xml appears to have changed, regenerating batch scripts
76 manual edits to these file will be lost!
77 """)
78 env_batch.make_all_batch_files(case)
79
80 unlock_file(os.path.basename(env_batch.filename))
81 lock_file(os.path.basename(env_batch.filename))
82
83 if job == case.get_primary_job():
84 case.check_case()
85 case.check_DA_settings()
86 if case.get_value("MACH") == "mira":
87 with open(".original_host", "w") as fd:
88 fd.write( socket.gethostname())
89
90 #Load Modules
91 case.load_env()
92
93 case.flush()
94
95 logger.warning("submit_jobs {}".format(job))
96 job_ids = case.submit_jobs(no_batch=no_batch, job=job, prereq=prereq,
97 skip_pnl=skip_pnl, resubmit_immediate=resubmit_immediate,
98 allow_fail=allow_fail, mail_user=mail_user,
99 mail_type=mail_type, batch_args=batch_args)
100
101 xml_jobids = []
102 for jobname, jobid in job_ids.items():
103 logger.info("Submitted job {} with id {}".format(jobname, jobid))
104 if jobid:
105 xml_jobids.append("{}:{}".format(jobname, jobid))
106
107 xml_jobid_text = ", ".join(xml_jobids)
108 if xml_jobid_text:
109 case.set_value("JOB_IDS", xml_jobid_text)
110
111 return xml_jobid_text
112
113 def submit(self, job=None, no_batch=False, prereq=None, allow_fail=False, resubmit=False,
114 resubmit_immediate=False, skip_pnl=False, mail_user=None, mail_type=None,
115 batch_args=None):
116 if resubmit_immediate and self.get_value("MACH") in ['mira', 'cetus']:
117 logger.warning("resubmit_immediate does not work on Mira/Cetus, submitting normally")
118 resubmit_immediate = False
119
120 if self.get_value("TEST"):
121 caseroot = self.get_value("CASEROOT")
122 casebaseid = self.get_value("CASEBASEID")
123 # This should take care of the race condition where the submitted job
124 # begins immediately and tries to set RUN phase. We proactively assume
125 # a passed SUBMIT phase. If this state is already PASS, don't set it again
126 # because then we'll lose RUN phase info if it's there. This info is important
127 # for system_tests_common to know if it needs to reinitialize the test or not.
128 with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:
129 phase_status = ts.get_status(SUBMIT_PHASE)
130 if phase_status != TEST_PASS_STATUS:
131 ts.set_status(SUBMIT_PHASE, TEST_PASS_STATUS)
132
133 # If this is a resubmit check the hidden file .submit_options for
134 # any submit options used on the original submit and use them again
135 caseroot = self.get_value("CASEROOT")
136 submit_options = os.path.join(caseroot, ".submit_options")
137 if resubmit and os.path.exists(submit_options):
138 config = configparser.SafeConfigParser()
139 config.read(submit_options)
140 if not skip_pnl and config.has_option('SubmitOptions','skip_pnl'):
141 skip_pnl = config.getboolean('SubmitOptions', 'skip_pnl')
142 if mail_user is None and config.has_option('SubmitOptions', 'mail_user'):
143 mail_user = config.get('SubmitOptions', 'mail_user')
144 if mail_type is None and config.has_option('SubmitOptions', 'mail_type'):
145 mail_type = config.get('SubmitOptions', 'mail_type').split(',')
146 if batch_args is None and config.has_option('SubmitOptions', 'batch_args'):
147 batch_args = config.get('SubmitOptions', 'batch_args')
148
149 try:
150 functor = lambda: _submit(self, job=job, no_batch=no_batch, prereq=prereq,
151 allow_fail=allow_fail, resubmit=resubmit,
152 resubmit_immediate=resubmit_immediate, skip_pnl=skip_pnl,
153 mail_user=mail_user, mail_type=mail_type,
154 batch_args=batch_args)
155 run_and_log_case_status(functor, "case.submit", caseroot=caseroot,
156 custom_success_msg_functor=verbatim_success_msg)
157 except:
158 # If something failed in the batch system, make sure to mark
159 # the test as failed if we are running a test.
160 if self.get_value("TEST"):
161 with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:
162 ts.set_status(SUBMIT_PHASE, TEST_FAIL_STATUS)
163
164 raise
165
166 def check_case(self):
167 self.check_lockedfiles()
168 self.create_namelists() # Must be called before check_all_input_data
169 logger.info("Checking that inputdata is available as part of case submission")
170 self.check_all_input_data()
171
172 if self.get_value('COMP_WAV') == 'ww':
173 # the ww3 buildnml has dependancies on inputdata so we must run it again
174 self.create_namelists(component='WAV')
175
176
177 expect(self.get_value("BUILD_COMPLETE"), "Build complete is "
178 "not True please rebuild the model by calling case.build")
179 logger.info("Check case OK")
180
181 def check_DA_settings(self):
182 script = self.get_value("DATA_ASSIMILATION_SCRIPT")
183 cycles = self.get_value("DATA_ASSIMILATION_CYCLES")
184 if len(script) > 0 and os.path.isfile(script) and cycles > 0:
185 logger.info("Data Assimilation enabled using script {} with {:d} cycles".format(script,
186 cycles))
187
[end of scripts/lib/CIME/case/case_submit.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/lib/CIME/case/case_submit.py b/scripts/lib/CIME/case/case_submit.py
--- a/scripts/lib/CIME/case/case_submit.py
+++ b/scripts/lib/CIME/case/case_submit.py
@@ -37,10 +37,35 @@
# if case.submit is called with the no_batch flag then we assume that this
# flag will stay in effect for the duration of the RESUBMITs
env_batch = case.get_env("batch")
- if resubmit:
- if env_batch.get_batch_system_type() == "none":
- no_batch = True
+ if resubmit and env_batch.get_batch_system_type() == "none":
+ no_batch = True
+ if no_batch:
+ batch_system = "none"
+ else:
+ batch_system = env_batch.get_batch_system_type()
+
+ case.set_value("BATCH_SYSTEM", batch_system)
+
+ env_batch_has_changed = False
+ try:
+ case.check_lockedfile(os.path.basename(env_batch.filename))
+ except:
+ env_batch_has_changed = True
+
+ if batch_system != "none" and env_batch_has_changed:
+ # May need to regen batch files if user made batch setting changes (e.g. walltime, queue, etc)
+ logger.warning(\
+"""
+env_batch.xml appears to have changed, regenerating batch scripts
+manual edits to these file will be lost!
+""")
+ env_batch.make_all_batch_files(case)
+
+ unlock_file(os.path.basename(env_batch.filename))
+ lock_file(os.path.basename(env_batch.filename))
+
+ if resubmit:
# This is a resubmission, do not reinitialize test values
if job == "case.test":
case.set_value("IS_FIRST_RUN", False)
@@ -55,31 +80,6 @@
if job == "case.test":
case.set_value("IS_FIRST_RUN", True)
- if no_batch:
- batch_system = "none"
- else:
- batch_system = env_batch.get_batch_system_type()
-
- case.set_value("BATCH_SYSTEM", batch_system)
-
- env_batch_has_changed = False
- try:
- case.check_lockedfile(os.path.basename(env_batch.filename))
- except SystemExit:
- env_batch_has_changed = True
-
- if env_batch.get_batch_system_type() != "none" and env_batch_has_changed:
- # May need to regen batch files if user made batch setting changes (e.g. walltime, queue, etc)
- logger.warning(\
-"""
-env_batch.xml appears to have changed, regenerating batch scripts
-manual edits to these file will be lost!
-""")
- env_batch.make_all_batch_files(case)
-
- unlock_file(os.path.basename(env_batch.filename))
- lock_file(os.path.basename(env_batch.filename))
-
if job == case.get_primary_job():
case.check_case()
case.check_DA_settings()
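In short, the patch above hoists the locked `env_batch.xml` check (and the batch-script regeneration that follows it) out of the first-submit branch so that it also runs on resubmits; batch-setting edits made after `case.setup` then trigger a regeneration rather than a fatal mismatch. A condensed sketch of the resulting control flow, with CIME specifics elided and names simplified:
```python
# Condensed sketch of the patched _submit() ordering; not the verbatim function.
def _submit_sketch(case, env_batch, resubmit, no_batch):
    batch_system = "none" if no_batch else env_batch.get_batch_system_type()
    case.set_value("BATCH_SYSTEM", batch_system)

    try:
        case.check_lockedfile("env_batch.xml")
        env_batch_has_changed = False
    except Exception:
        env_batch_has_changed = True

    if batch_system != "none" and env_batch_has_changed:
        # Batch settings (walltime, queue, ...) were edited after setup:
        # regenerate the batch scripts instead of aborting the submission.
        env_batch.make_all_batch_files(case)

    if resubmit:
        ...  # decrement RESUBMIT, optionally set CONTINUE_RUN, and so on
```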
| {"golden_diff": "diff --git a/scripts/lib/CIME/case/case_submit.py b/scripts/lib/CIME/case/case_submit.py\n--- a/scripts/lib/CIME/case/case_submit.py\n+++ b/scripts/lib/CIME/case/case_submit.py\n@@ -37,10 +37,35 @@\n # if case.submit is called with the no_batch flag then we assume that this\n # flag will stay in effect for the duration of the RESUBMITs\n env_batch = case.get_env(\"batch\")\n- if resubmit:\n- if env_batch.get_batch_system_type() == \"none\":\n- no_batch = True\n \n+ if resubmit and env_batch.get_batch_system_type() == \"none\":\n+ no_batch = True\n+ if no_batch:\n+ batch_system = \"none\"\n+ else:\n+ batch_system = env_batch.get_batch_system_type()\n+\n+ case.set_value(\"BATCH_SYSTEM\", batch_system)\n+\n+ env_batch_has_changed = False\n+ try:\n+ case.check_lockedfile(os.path.basename(env_batch.filename))\n+ except:\n+ env_batch_has_changed = True\n+\n+ if batch_system != \"none\" and env_batch_has_changed:\n+ # May need to regen batch files if user made batch setting changes (e.g. walltime, queue, etc)\n+ logger.warning(\\\n+\"\"\"\n+env_batch.xml appears to have changed, regenerating batch scripts\n+manual edits to these file will be lost!\n+\"\"\")\n+ env_batch.make_all_batch_files(case)\n+\n+ unlock_file(os.path.basename(env_batch.filename))\n+ lock_file(os.path.basename(env_batch.filename))\n+\n+ if resubmit:\n # This is a resubmission, do not reinitialize test values\n if job == \"case.test\":\n case.set_value(\"IS_FIRST_RUN\", False)\n@@ -55,31 +80,6 @@\n if job == \"case.test\":\n case.set_value(\"IS_FIRST_RUN\", True)\n \n- if no_batch:\n- batch_system = \"none\"\n- else:\n- batch_system = env_batch.get_batch_system_type()\n-\n- case.set_value(\"BATCH_SYSTEM\", batch_system)\n-\n- env_batch_has_changed = False\n- try:\n- case.check_lockedfile(os.path.basename(env_batch.filename))\n- except SystemExit:\n- env_batch_has_changed = True\n-\n- if env_batch.get_batch_system_type() != \"none\" and env_batch_has_changed:\n- # May need to regen batch files if user made batch setting changes (e.g. 
walltime, queue, etc)\n- logger.warning(\\\n-\"\"\"\n-env_batch.xml appears to have changed, regenerating batch scripts\n-manual edits to these file will be lost!\n-\"\"\")\n- env_batch.make_all_batch_files(case)\n-\n- unlock_file(os.path.basename(env_batch.filename))\n- lock_file(os.path.basename(env_batch.filename))\n-\n if job == case.get_primary_job():\n case.check_case()\n case.check_DA_settings()\n", "issue": "Comment at the top of env_batch.xml is wrong\nThe comment at the top of env_batch.xml (imported from config/config_headers.xml) is:\r\n\r\n> These variables may be changed anytime during a run, they\r\n> control arguments to the batch submit command.\r\n> \r\nHowever, if I submit a job and then try:\r\n```\r\n./xmlchange JOB_WALLCLOCK_TIME=40:00:00\r\n./xmlchange JOB_QUEUE=long\r\n```\r\nMy job is killed (if it is in the queue when I make the change) or resubmit fails and I see the following message in my output file:\r\n```\r\ngoldy@hobart^GFile /scratch/cluster/goldy/FQ3D_ne5pg3_ne5pg3_mg37/LockedFiles/env_batch.xml has been modified\r\n found difference in JOB_WALLCLOCK_TIME : case '40:00:00' locked '80:00:00'\r\n found difference in USER_REQUESTED_WALLTIME : case '40:00:00' locked ''\r\nERROR: Batch configuration has changed, please run case.setup --reset\r\n```\r\nThe CIME documentation does not seem to mention this ability to change (cf http://esmci.github.io/cime/users_guide/running-a-case.html?highlight=job_wallclock_time).\r\n\r\nPlease fix the header with correct information.\r\n\r\nQuestion: Is there a way to see the `<header>` information via `xmlquery`?\r\n\r\nReported by a CESM 2.0 user.\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\ncase.submit - Submit a cesm workflow to the queueing system or run it\nif there is no queueing system. 
A cesm workflow may include multiple\njobs.\nsubmit, check_case and check_da_settings are members of class Case in file case.py\n\"\"\"\nimport socket\nfrom six.moves import configparser\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.utils import expect, run_and_log_case_status, verbatim_success_msg\nfrom CIME.locked_files import unlock_file, lock_file\nfrom CIME.test_status import *\n\nlogger = logging.getLogger(__name__)\n\ndef _build_prereq_str(case, prev_job_ids):\n delimiter = case.get_value(\"depend_separator\")\n prereq_str = \"\"\n for job_id in prev_job_ids.values():\n prereq_str += str(job_id) + delimiter\n return prereq_str[:-1]\n\ndef _submit(case, job=None, no_batch=False, prereq=None, allow_fail=False, resubmit=False,\n resubmit_immediate=False, skip_pnl=False, mail_user=None, mail_type=None,\n batch_args=None):\n if job is None:\n job = case.get_primary_job()\n\n rundir = case.get_value(\"RUNDIR\")\n if job != \"case.test\":\n continue_run = case.get_value(\"CONTINUE_RUN\")\n expect(os.path.isdir(rundir) or not continue_run,\n \" CONTINUE_RUN is true but RUNDIR {} does not exist\".format(rundir))\n\n # if case.submit is called with the no_batch flag then we assume that this\n # flag will stay in effect for the duration of the RESUBMITs\n env_batch = case.get_env(\"batch\")\n if resubmit:\n if env_batch.get_batch_system_type() == \"none\":\n no_batch = True\n\n # This is a resubmission, do not reinitialize test values\n if job == \"case.test\":\n case.set_value(\"IS_FIRST_RUN\", False)\n\n resub = case.get_value(\"RESUBMIT\")\n logger.info(\"Submitting job '{}', resubmit={:d}\".format(job, resub))\n case.set_value(\"RESUBMIT\", resub-1)\n if case.get_value(\"RESUBMIT_SETS_CONTINUE_RUN\"):\n case.set_value(\"CONTINUE_RUN\", True)\n\n else:\n if job == \"case.test\":\n case.set_value(\"IS_FIRST_RUN\", True)\n\n if no_batch:\n batch_system = \"none\"\n else:\n batch_system = env_batch.get_batch_system_type()\n\n case.set_value(\"BATCH_SYSTEM\", batch_system)\n\n env_batch_has_changed = False\n try:\n case.check_lockedfile(os.path.basename(env_batch.filename))\n except SystemExit:\n env_batch_has_changed = True\n\n if env_batch.get_batch_system_type() != \"none\" and env_batch_has_changed:\n # May need to regen batch files if user made batch setting changes (e.g. 
walltime, queue, etc)\n logger.warning(\\\n\"\"\"\nenv_batch.xml appears to have changed, regenerating batch scripts\nmanual edits to these file will be lost!\n\"\"\")\n env_batch.make_all_batch_files(case)\n\n unlock_file(os.path.basename(env_batch.filename))\n lock_file(os.path.basename(env_batch.filename))\n\n if job == case.get_primary_job():\n case.check_case()\n case.check_DA_settings()\n if case.get_value(\"MACH\") == \"mira\":\n with open(\".original_host\", \"w\") as fd:\n fd.write( socket.gethostname())\n\n #Load Modules\n case.load_env()\n\n case.flush()\n\n logger.warning(\"submit_jobs {}\".format(job))\n job_ids = case.submit_jobs(no_batch=no_batch, job=job, prereq=prereq,\n skip_pnl=skip_pnl, resubmit_immediate=resubmit_immediate,\n allow_fail=allow_fail, mail_user=mail_user,\n mail_type=mail_type, batch_args=batch_args)\n\n xml_jobids = []\n for jobname, jobid in job_ids.items():\n logger.info(\"Submitted job {} with id {}\".format(jobname, jobid))\n if jobid:\n xml_jobids.append(\"{}:{}\".format(jobname, jobid))\n\n xml_jobid_text = \", \".join(xml_jobids)\n if xml_jobid_text:\n case.set_value(\"JOB_IDS\", xml_jobid_text)\n\n return xml_jobid_text\n\ndef submit(self, job=None, no_batch=False, prereq=None, allow_fail=False, resubmit=False,\n resubmit_immediate=False, skip_pnl=False, mail_user=None, mail_type=None,\n batch_args=None):\n if resubmit_immediate and self.get_value(\"MACH\") in ['mira', 'cetus']:\n logger.warning(\"resubmit_immediate does not work on Mira/Cetus, submitting normally\")\n resubmit_immediate = False\n\n if self.get_value(\"TEST\"):\n caseroot = self.get_value(\"CASEROOT\")\n casebaseid = self.get_value(\"CASEBASEID\")\n # This should take care of the race condition where the submitted job\n # begins immediately and tries to set RUN phase. We proactively assume\n # a passed SUBMIT phase. If this state is already PASS, don't set it again\n # because then we'll lose RUN phase info if it's there. 
This info is important\n # for system_tests_common to know if it needs to reinitialize the test or not.\n with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:\n phase_status = ts.get_status(SUBMIT_PHASE)\n if phase_status != TEST_PASS_STATUS:\n ts.set_status(SUBMIT_PHASE, TEST_PASS_STATUS)\n\n # If this is a resubmit check the hidden file .submit_options for\n # any submit options used on the original submit and use them again\n caseroot = self.get_value(\"CASEROOT\")\n submit_options = os.path.join(caseroot, \".submit_options\")\n if resubmit and os.path.exists(submit_options):\n config = configparser.SafeConfigParser()\n config.read(submit_options)\n if not skip_pnl and config.has_option('SubmitOptions','skip_pnl'):\n skip_pnl = config.getboolean('SubmitOptions', 'skip_pnl')\n if mail_user is None and config.has_option('SubmitOptions', 'mail_user'):\n mail_user = config.get('SubmitOptions', 'mail_user')\n if mail_type is None and config.has_option('SubmitOptions', 'mail_type'):\n mail_type = config.get('SubmitOptions', 'mail_type').split(',')\n if batch_args is None and config.has_option('SubmitOptions', 'batch_args'):\n batch_args = config.get('SubmitOptions', 'batch_args')\n\n try:\n functor = lambda: _submit(self, job=job, no_batch=no_batch, prereq=prereq,\n allow_fail=allow_fail, resubmit=resubmit,\n resubmit_immediate=resubmit_immediate, skip_pnl=skip_pnl,\n mail_user=mail_user, mail_type=mail_type,\n batch_args=batch_args)\n run_and_log_case_status(functor, \"case.submit\", caseroot=caseroot,\n custom_success_msg_functor=verbatim_success_msg)\n except:\n # If something failed in the batch system, make sure to mark\n # the test as failed if we are running a test.\n if self.get_value(\"TEST\"):\n with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:\n ts.set_status(SUBMIT_PHASE, TEST_FAIL_STATUS)\n\n raise\n\ndef check_case(self):\n self.check_lockedfiles()\n self.create_namelists() # Must be called before check_all_input_data\n logger.info(\"Checking that inputdata is available as part of case submission\")\n self.check_all_input_data()\n\n if self.get_value('COMP_WAV') == 'ww':\n # the ww3 buildnml has dependancies on inputdata so we must run it again\n self.create_namelists(component='WAV')\n\n\n expect(self.get_value(\"BUILD_COMPLETE\"), \"Build complete is \"\n \"not True please rebuild the model by calling case.build\")\n logger.info(\"Check case OK\")\n\ndef check_DA_settings(self):\n script = self.get_value(\"DATA_ASSIMILATION_SCRIPT\")\n cycles = self.get_value(\"DATA_ASSIMILATION_CYCLES\")\n if len(script) > 0 and os.path.isfile(script) and cycles > 0:\n logger.info(\"Data Assimilation enabled using script {} with {:d} cycles\".format(script,\n cycles))\n", "path": "scripts/lib/CIME/case/case_submit.py"}]} | 3,198 | 655 |
gh_patches_debug_339 | rasdani/github-patches | git_diff | pyro-ppl__pyro-3164 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PyTorch 2.0 compatibility: Explicit PyTorch 1.x check causing issues with packages that depend on PyTorch / pyro (e.g. BoTorch)
### Issue Description
The explicit check for PyTorch 1.x here (https://github.com/pyro-ppl/pyro/blob/dev/pyro/distributions/torch_patch.py#L10) is causing problems when another package has a dependency on PyTorch + Pyro, since PyTorch is now at 2.0. For example, it is causing BoTorch tests to fail here (https://github.com/pytorch/botorch/pull/1551).
Could this check be removed to allow for PyTorch 2.0?
### Environment
Mac OS 11.7.1
Python 3.10
PyTorch 2.0
Pyro 1.8.3
### Code Snippet
https://github.com/pytorch/botorch/actions/runs/3659534850/jobs/6185642011
</issue>
<code>
[start of pyro/distributions/torch_patch.py]
1 # Copyright (c) 2017-2019 Uber Technologies, Inc.
2 # SPDX-License-Identifier: Apache-2.0
3
4 import functools
5 import math
6 import weakref
7
8 import torch
9
10 assert torch.__version__.startswith("1.")
11
12
13 def patch_dependency(target, root_module=torch):
14 parts = target.split(".")
15 assert parts[0] == root_module.__name__
16 module = root_module
17 for part in parts[1:-1]:
18 module = getattr(module, part)
19 name = parts[-1]
20 old_fn = getattr(module, name, None)
21 old_fn = getattr(old_fn, "_pyro_unpatched", old_fn) # ensure patching is idempotent
22
23 def decorator(new_fn):
24 try:
25 functools.update_wrapper(new_fn, old_fn)
26 except Exception:
27 for attr in functools.WRAPPER_ASSIGNMENTS:
28 if hasattr(old_fn, attr):
29 setattr(new_fn, attr, getattr(old_fn, attr))
30 new_fn._pyro_unpatched = old_fn
31 setattr(module, name, new_fn)
32 return new_fn
33
34 return decorator
35
36
37 # TODO: Move upstream to allow for pickle serialization of transforms
38 @patch_dependency("torch.distributions.transforms.Transform.__getstate__")
39 def _Transform__getstate__(self):
40 attrs = {}
41 for k, v in self.__dict__.items():
42 if isinstance(v, weakref.ref):
43 attrs[k] = None
44 else:
45 attrs[k] = v
46 return attrs
47
48
49 # TODO move upstream
50 @patch_dependency("torch.distributions.transforms.Transform.clear_cache")
51 def _Transform_clear_cache(self):
52 if self._cache_size == 1:
53 self._cached_x_y = None, None
54
55
56 # TODO move upstream
57 @patch_dependency("torch.distributions.TransformedDistribution.clear_cache")
58 def _TransformedDistribution_clear_cache(self):
59 for t in self.transforms:
60 t.clear_cache()
61
62
63 # TODO fix https://github.com/pytorch/pytorch/issues/48054 upstream
64 @patch_dependency("torch.distributions.HalfCauchy.log_prob")
65 def _HalfCauchy_logprob(self, value):
66 if self._validate_args:
67 self._validate_sample(value)
68 value = torch.as_tensor(
69 value, dtype=self.base_dist.scale.dtype, device=self.base_dist.scale.device
70 )
71 log_prob = self.base_dist.log_prob(value) + math.log(2)
72 log_prob.masked_fill_(value.expand(log_prob.shape) < 0, -float("inf"))
73 return log_prob
74
75
76 # TODO fix batch_shape have an extra singleton dimension upstream
77 @patch_dependency("torch.distributions.constraints._PositiveDefinite.check")
78 def _PositiveDefinite_check(self, value):
79 matrix_shape = value.shape[-2:]
80 batch_shape = value.shape[:-2]
81 flattened_value = value.reshape((-1,) + matrix_shape)
82 return torch.stack(
83 [torch.linalg.eigvalsh(v)[:1] > 0.0 for v in flattened_value]
84 ).view(batch_shape)
85
86
87 @patch_dependency("torch.distributions.constraints._CorrCholesky.check")
88 def _CorrCholesky_check(self, value):
89 row_norm = torch.linalg.norm(value.detach(), dim=-1)
90 unit_row_norm = (row_norm - 1.0).abs().le(1e-4).all(dim=-1)
91 return torch.distributions.constraints.lower_cholesky.check(value) & unit_row_norm
92
93
94 # This adds a __call__ method to satisfy sphinx.
95 @patch_dependency("torch.distributions.utils.lazy_property.__call__")
96 def _lazy_property__call__(self):
97 raise NotImplementedError
98
99
100 __all__ = []
101
[end of pyro/distributions/torch_patch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyro/distributions/torch_patch.py b/pyro/distributions/torch_patch.py
--- a/pyro/distributions/torch_patch.py
+++ b/pyro/distributions/torch_patch.py
@@ -7,8 +7,6 @@
import torch
-assert torch.__version__.startswith("1.")
-
def patch_dependency(target, root_module=torch):
parts = target.split(".")
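The patch above simply drops the hard assertion, which is all the issue asks for. If a softer guard were ever wanted (purely hypothetical, not part of the accepted change), a warning that tolerates new major versions would avoid breaking downstream packages such as BoTorch:
```python
# Hypothetical, non-fatal alternative to the deleted assert.
import warnings

import torch

_torch_major = int(torch.__version__.split(".")[0])
if _torch_major < 1:  # assumed floor; adjust to whatever Pyro actually supports
    warnings.warn(
        "Pyro is developed against PyTorch >= 1.x; found {}".format(torch.__version__),
        RuntimeWarning,
    )
```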
| {"golden_diff": "diff --git a/pyro/distributions/torch_patch.py b/pyro/distributions/torch_patch.py\n--- a/pyro/distributions/torch_patch.py\n+++ b/pyro/distributions/torch_patch.py\n@@ -7,8 +7,6 @@\n \n import torch\n \n-assert torch.__version__.startswith(\"1.\")\n-\n \n def patch_dependency(target, root_module=torch):\n parts = target.split(\".\")\n", "issue": "PyTorch 2.0 compatibility: Explicit PyTorch 1.x check causing issues with packages that depend on PyTorch / pyro (e.g. BoTorch)\n### Issue Description\r\nThe explicit check for PyTorch 1.x here (https://github.com/pyro-ppl/pyro/blob/dev/pyro/distributions/torch_patch.py#L10) is causing problems when another package has a dependency on PyTorch + Pyro, since PyTorch is now at 2.0. For example, it is causing BoTorch tests to fail here (https://github.com/pytorch/botorch/pull/1551).\r\n\r\nCould this check be removed to allow for PyTorch 2.0?\r\n\r\n### Environment\r\nMac OS 11.7.1\r\nPython 3.10\r\nPyTorch 2.0\r\nPyro 1.8.3\r\n\r\n### Code Snippet\r\nhttps://github.com/pytorch/botorch/actions/runs/3659534850/jobs/6185642011\n", "before_files": [{"content": "# Copyright (c) 2017-2019 Uber Technologies, Inc.\n# SPDX-License-Identifier: Apache-2.0\n\nimport functools\nimport math\nimport weakref\n\nimport torch\n\nassert torch.__version__.startswith(\"1.\")\n\n\ndef patch_dependency(target, root_module=torch):\n parts = target.split(\".\")\n assert parts[0] == root_module.__name__\n module = root_module\n for part in parts[1:-1]:\n module = getattr(module, part)\n name = parts[-1]\n old_fn = getattr(module, name, None)\n old_fn = getattr(old_fn, \"_pyro_unpatched\", old_fn) # ensure patching is idempotent\n\n def decorator(new_fn):\n try:\n functools.update_wrapper(new_fn, old_fn)\n except Exception:\n for attr in functools.WRAPPER_ASSIGNMENTS:\n if hasattr(old_fn, attr):\n setattr(new_fn, attr, getattr(old_fn, attr))\n new_fn._pyro_unpatched = old_fn\n setattr(module, name, new_fn)\n return new_fn\n\n return decorator\n\n\n# TODO: Move upstream to allow for pickle serialization of transforms\n@patch_dependency(\"torch.distributions.transforms.Transform.__getstate__\")\ndef _Transform__getstate__(self):\n attrs = {}\n for k, v in self.__dict__.items():\n if isinstance(v, weakref.ref):\n attrs[k] = None\n else:\n attrs[k] = v\n return attrs\n\n\n# TODO move upstream\n@patch_dependency(\"torch.distributions.transforms.Transform.clear_cache\")\ndef _Transform_clear_cache(self):\n if self._cache_size == 1:\n self._cached_x_y = None, None\n\n\n# TODO move upstream\n@patch_dependency(\"torch.distributions.TransformedDistribution.clear_cache\")\ndef _TransformedDistribution_clear_cache(self):\n for t in self.transforms:\n t.clear_cache()\n\n\n# TODO fix https://github.com/pytorch/pytorch/issues/48054 upstream\n@patch_dependency(\"torch.distributions.HalfCauchy.log_prob\")\ndef _HalfCauchy_logprob(self, value):\n if self._validate_args:\n self._validate_sample(value)\n value = torch.as_tensor(\n value, dtype=self.base_dist.scale.dtype, device=self.base_dist.scale.device\n )\n log_prob = self.base_dist.log_prob(value) + math.log(2)\n log_prob.masked_fill_(value.expand(log_prob.shape) < 0, -float(\"inf\"))\n return log_prob\n\n\n# TODO fix batch_shape have an extra singleton dimension upstream\n@patch_dependency(\"torch.distributions.constraints._PositiveDefinite.check\")\ndef _PositiveDefinite_check(self, value):\n matrix_shape = value.shape[-2:]\n batch_shape = value.shape[:-2]\n flattened_value = value.reshape((-1,) + matrix_shape)\n return 
torch.stack(\n [torch.linalg.eigvalsh(v)[:1] > 0.0 for v in flattened_value]\n ).view(batch_shape)\n\n\n@patch_dependency(\"torch.distributions.constraints._CorrCholesky.check\")\ndef _CorrCholesky_check(self, value):\n row_norm = torch.linalg.norm(value.detach(), dim=-1)\n unit_row_norm = (row_norm - 1.0).abs().le(1e-4).all(dim=-1)\n return torch.distributions.constraints.lower_cholesky.check(value) & unit_row_norm\n\n\n# This adds a __call__ method to satisfy sphinx.\n@patch_dependency(\"torch.distributions.utils.lazy_property.__call__\")\ndef _lazy_property__call__(self):\n raise NotImplementedError\n\n\n__all__ = []\n", "path": "pyro/distributions/torch_patch.py"}]} | 1,763 | 86 |
gh_patches_debug_30012 | rasdani/github-patches | git_diff | TheAlgorithms__Python-2443 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Dev sprint ideas: More tests, type hints and less complexity
Currently, some of the programs use static type checking, like this [program](https://github.com/TheAlgorithms/Python/blob/master/dynamic_programming/fast_fibonacci.py), but some of the programs do not use static typing.
It's good practice to use static typing, as it makes code clearer and more readable. Should we make it a standard for this repository? We can use [mypy](http://mypy-lang.org/) to type-check the code.
[more on static typing](https://medium.com/@ageitgey/learn-how-to-use-static-type-checking-in-python-3-6-in-10-minutes-12c86d72677b)
thank you
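For illustration, the kind of annotation being proposed looks like the snippet below (the function is hypothetical, not taken from this repository); `mypy` can then check it statically without running it:
```python
from __future__ import annotations


def running_total(values: list[int]) -> list[int]:
    """Return the prefix sums of ``values`` (hypothetical example)."""
    totals: list[int] = []
    current = 0
    for value in values:
        current += value
        totals.append(current)
    return totals
```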
### Dev sprint ideas:
* [ ] [Add tests to Python files with <10% test coverage.](https://github.com/TheAlgorithms/Python/issues/2128#issuecomment-645231020)
* [ ] [Add static typing to functions and methods.](https://github.com/TheAlgorithms/Python/issues/2128)
* [ ] [Set `flake8 --max-complexity=15`](https://github.com/TheAlgorithms/Python/issues/2128#issuecomment-645190839) (Ensure files have strong tests ___before___ refactoring). Test results from #2139...
* [ ] ./boolean_algebra/quine_mc_cluskey.py:82:1: C901 'selection' is too complex (17)
* [ ] ./digital_image_processing/edge_detection/canny.py:20:1: C901 'canny' is too complex (17) @lighttxu
* [ ] ./graphs/minimum_spanning_tree_prims.py:5:1: C901 'PrimsAlgorithm' is too complex (21)
* [ ] Add doctests aligned with https://en.wikipedia.org/wiki/Prim%27s_algorithm
* [ ] In a ___separate___ PR reduce the McCabe complexity
* [ ] ./linear_algebra/src/polynom-for-points.py:1:1: C901 'points_to_polynomial' is too complex (23) @nic-dern
* [ ] ./machine_learning/linear_discriminant_analysis.py:251:1: C901 'main' is too complex (25)
* [x] ./hashes/hamming_code.py:71:1: C901 'emitterConverter' is too complex (16) #2140
* [x] ./hashes/hamming_code.py:153:1: C901 'receptorConverter' is too complex (20) #2140
* [x] ./project_euler/problem_551/sol1.py:20:1: C901 'next_term' is too complex (16) #2141
</issue>
<code>
[start of searches/simple_binary_search.py]
1 """
2 Pure Python implementation of a binary search algorithm.
3
4 For doctests run following command:
5 python3 -m doctest -v simple_binary_search.py
6
7 For manual testing run:
8 python3 simple_binary_search.py
9 """
10 from __future__ import annotations
11
12
13 def binary_search(a_list: list[int], item: int) -> bool:
14 """
15 >>> test_list = [0, 1, 2, 8, 13, 17, 19, 32, 42]
16 >>> print(binary_search(test_list, 3))
17 False
18 >>> print(binary_search(test_list, 13))
19 True
20 >>> print(binary_search([4, 4, 5, 6, 7], 4))
21 True
22 >>> print(binary_search([4, 4, 5, 6, 7], -10))
23 False
24 >>> print(binary_search([-18, 2], -18))
25 True
26 >>> print(binary_search([5], 5))
27 True
28 >>> print(binary_search(['a', 'c', 'd'], 'c'))
29 True
30 >>> print(binary_search(['a', 'c', 'd'], 'f'))
31 False
32 >>> print(binary_search([], 1))
33 False
34 >>> print(binary_search([-.1, .1 , .8], .1))
35 True
36 >>> binary_search(range(-5000, 5000, 10), 80)
37 True
38 >>> binary_search(range(-5000, 5000, 10), 1255)
39 False
40 >>> binary_search(range(0, 10000, 5), 2)
41 False
42 """
43 if len(a_list) == 0:
44 return False
45 midpoint = len(a_list) // 2
46 if a_list[midpoint] == item:
47 return True
48 if item < a_list[midpoint]:
49 return binary_search(a_list[:midpoint], item)
50 else:
51 return binary_search(a_list[midpoint + 1 :], item)
52
53
54 if __name__ == "__main__":
55 user_input = input("Enter numbers separated by comma:\n").strip()
56 sequence = [int(item.strip()) for item in user_input.split(",")]
57 target = int(input("Enter the number to be found in the list:\n").strip())
58 not_str = "" if binary_search(sequence, target) else "not "
59 print(f"{target} was {not_str}found in {sequence}")
60
[end of searches/simple_binary_search.py]
[start of graphs/minimum_spanning_tree_kruskal.py]
1 if __name__ == "__main__":
2 num_nodes, num_edges = list(map(int, input().strip().split()))
3
4 edges = []
5
6 for i in range(num_edges):
7 node1, node2, cost = list(map(int, input().strip().split()))
8 edges.append((i, node1, node2, cost))
9
10 edges = sorted(edges, key=lambda edge: edge[3])
11
12 parent = list(range(num_nodes))
13
14 def find_parent(i):
15 if i != parent[i]:
16 parent[i] = find_parent(parent[i])
17 return parent[i]
18
19 minimum_spanning_tree_cost = 0
20 minimum_spanning_tree = []
21
22 for edge in edges:
23 parent_a = find_parent(edge[1])
24 parent_b = find_parent(edge[2])
25 if parent_a != parent_b:
26 minimum_spanning_tree_cost += edge[3]
27 minimum_spanning_tree.append(edge)
28 parent[parent_a] = parent_b
29
30 print(minimum_spanning_tree_cost)
31 for edge in minimum_spanning_tree:
32 print(edge)
33
[end of graphs/minimum_spanning_tree_kruskal.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/graphs/minimum_spanning_tree_kruskal.py b/graphs/minimum_spanning_tree_kruskal.py
--- a/graphs/minimum_spanning_tree_kruskal.py
+++ b/graphs/minimum_spanning_tree_kruskal.py
@@ -1,13 +1,5 @@
-if __name__ == "__main__":
- num_nodes, num_edges = list(map(int, input().strip().split()))
-
- edges = []
-
- for i in range(num_edges):
- node1, node2, cost = list(map(int, input().strip().split()))
- edges.append((i, node1, node2, cost))
-
- edges = sorted(edges, key=lambda edge: edge[3])
+def kruskal(num_nodes, num_edges, edges):
+ edges = sorted(edges, key=lambda edge: edge[2])
parent = list(range(num_nodes))
@@ -20,13 +12,22 @@
minimum_spanning_tree = []
for edge in edges:
- parent_a = find_parent(edge[1])
- parent_b = find_parent(edge[2])
+ parent_a = find_parent(edge[0])
+ parent_b = find_parent(edge[1])
if parent_a != parent_b:
- minimum_spanning_tree_cost += edge[3]
+ minimum_spanning_tree_cost += edge[2]
minimum_spanning_tree.append(edge)
parent[parent_a] = parent_b
- print(minimum_spanning_tree_cost)
- for edge in minimum_spanning_tree:
- print(edge)
+ return minimum_spanning_tree
+
+
+if __name__ == "__main__": # pragma: no cover
+ num_nodes, num_edges = list(map(int, input().strip().split()))
+ edges = []
+
+ for _ in range(num_edges):
+ node1, node2, cost = [int(x) for x in input().strip().split()]
+ edges.append((node1, node2, cost))
+
+ kruskal(num_nodes, num_edges, edges)
diff --git a/searches/simple_binary_search.py b/searches/simple_binary_search.py
--- a/searches/simple_binary_search.py
+++ b/searches/simple_binary_search.py
@@ -42,7 +42,7 @@
if item < a_list[midpoint]:
return binary_search(a_list[:midpoint], item)
else:
- return binary_search(a_list[midpoint + 1:], item)
+ return binary_search(a_list[midpoint + 1 :], item)
if __name__ == "__main__":
| {"golden_diff": "diff --git a/graphs/minimum_spanning_tree_kruskal.py b/graphs/minimum_spanning_tree_kruskal.py\n--- a/graphs/minimum_spanning_tree_kruskal.py\n+++ b/graphs/minimum_spanning_tree_kruskal.py\n@@ -1,13 +1,5 @@\n-if __name__ == \"__main__\":\n- num_nodes, num_edges = list(map(int, input().strip().split()))\n-\n- edges = []\n-\n- for i in range(num_edges):\n- node1, node2, cost = list(map(int, input().strip().split()))\n- edges.append((i, node1, node2, cost))\n-\n- edges = sorted(edges, key=lambda edge: edge[3])\n+def kruskal(num_nodes, num_edges, edges):\n+ edges = sorted(edges, key=lambda edge: edge[2])\n \n parent = list(range(num_nodes))\n \n@@ -20,13 +12,22 @@\n minimum_spanning_tree = []\n \n for edge in edges:\n- parent_a = find_parent(edge[1])\n- parent_b = find_parent(edge[2])\n+ parent_a = find_parent(edge[0])\n+ parent_b = find_parent(edge[1])\n if parent_a != parent_b:\n- minimum_spanning_tree_cost += edge[3]\n+ minimum_spanning_tree_cost += edge[2]\n minimum_spanning_tree.append(edge)\n parent[parent_a] = parent_b\n \n- print(minimum_spanning_tree_cost)\n- for edge in minimum_spanning_tree:\n- print(edge)\n+ return minimum_spanning_tree\n+\n+\n+if __name__ == \"__main__\": # pragma: no cover\n+ num_nodes, num_edges = list(map(int, input().strip().split()))\n+ edges = []\n+\n+ for _ in range(num_edges):\n+ node1, node2, cost = [int(x) for x in input().strip().split()]\n+ edges.append((node1, node2, cost))\n+\n+ kruskal(num_nodes, num_edges, edges)\ndiff --git a/searches/simple_binary_search.py b/searches/simple_binary_search.py\n--- a/searches/simple_binary_search.py\n+++ b/searches/simple_binary_search.py\n@@ -42,7 +42,7 @@\n if item < a_list[midpoint]:\n return binary_search(a_list[:midpoint], item)\n else:\n- return binary_search(a_list[midpoint + 1:], item)\n+ return binary_search(a_list[midpoint + 1 :], item)\n \n \n if __name__ == \"__main__\":\n", "issue": "Dev sprint ideas: More tests, type hints and less complexity\ncurrently, some of the programs use static type checking like this [program](https://github.com/TheAlgorithms/Python/blob/master/dynamic_programming/fast_fibonacci.py) but some of the programs did not use static typing.\r\n\r\nit's a good practice to use static typing as it makes code more clear and readable, should we make it a standard for this repository.we can use [mypy](http://mypy-lang.org/) for testing code \r\n\r\n[more on static typing](https://medium.com/@ageitgey/learn-how-to-use-static-type-checking-in-python-3-6-in-10-minutes-12c86d72677b)\r\n \r\nthank you\r\n\r\n### Dev sprint ideas:\r\n* [ ] [Add tests to Python files with <10% test coverage.](https://github.com/TheAlgorithms/Python/issues/2128#issuecomment-645231020)\r\n* [ ] [Add static typing to functions and methods.](https://github.com/TheAlgorithms/Python/issues/2128)\r\n* [ ] [Set `flake8 --max-complexity=15`](https://github.com/TheAlgorithms/Python/issues/2128#issuecomment-645190839) (Ensure files have strong tests ___before___ refactoring). 
Test results from #2139...\r\n * [ ] ./boolean_algebra/quine_mc_cluskey.py:82:1: C901 'selection' is too complex (17)\r\n * [ ] ./digital_image_processing/edge_detection/canny.py:20:1: C901 'canny' is too complex (17) @lighttxu\r\n * [ ] ./graphs/minimum_spanning_tree_prims.py:5:1: C901 'PrimsAlgorithm' is too complex (21)\r\n * [ ] Add doctests aligned with https://en.wikipedia.org/wiki/Prim%27s_algorithm\r\n * [ ] In a ___separate___ PR reduce the McCabe complexity\r\n * [ ] ./linear_algebra/src/polynom-for-points.py:1:1: C901 'points_to_polynomial' is too complex (23) @nic-dern\r\n * [ ] ./machine_learning/linear_discriminant_analysis.py:251:1: C901 'main' is too complex (25)\r\n * [x] ./hashes/hamming_code.py:71:1: C901 'emitterConverter' is too complex (16) #2140\r\n * [x] ./hashes/hamming_code.py:153:1: C901 'receptorConverter' is too complex (20) #2140\r\n * [x] ./project_euler/problem_551/sol1.py:20:1: C901 'next_term' is too complex (16) #2141\n", "before_files": [{"content": "\"\"\"\nPure Python implementation of a binary search algorithm.\n\nFor doctests run following command:\npython3 -m doctest -v simple_binary_search.py\n\nFor manual testing run:\npython3 simple_binary_search.py\n\"\"\"\nfrom __future__ import annotations\n\n\ndef binary_search(a_list: list[int], item: int) -> bool:\n \"\"\"\n >>> test_list = [0, 1, 2, 8, 13, 17, 19, 32, 42]\n >>> print(binary_search(test_list, 3))\n False\n >>> print(binary_search(test_list, 13))\n True\n >>> print(binary_search([4, 4, 5, 6, 7], 4))\n True\n >>> print(binary_search([4, 4, 5, 6, 7], -10))\n False\n >>> print(binary_search([-18, 2], -18))\n True\n >>> print(binary_search([5], 5))\n True\n >>> print(binary_search(['a', 'c', 'd'], 'c'))\n True\n >>> print(binary_search(['a', 'c', 'd'], 'f'))\n False\n >>> print(binary_search([], 1))\n False\n >>> print(binary_search([-.1, .1 , .8], .1))\n True\n >>> binary_search(range(-5000, 5000, 10), 80)\n True\n >>> binary_search(range(-5000, 5000, 10), 1255)\n False\n >>> binary_search(range(0, 10000, 5), 2)\n False\n \"\"\"\n if len(a_list) == 0:\n return False\n midpoint = len(a_list) // 2\n if a_list[midpoint] == item:\n return True\n if item < a_list[midpoint]:\n return binary_search(a_list[:midpoint], item)\n else:\n return binary_search(a_list[midpoint + 1 :], item)\n\n\nif __name__ == \"__main__\":\n user_input = input(\"Enter numbers separated by comma:\\n\").strip()\n sequence = [int(item.strip()) for item in user_input.split(\",\")]\n target = int(input(\"Enter the number to be found in the list:\\n\").strip())\n not_str = \"\" if binary_search(sequence, target) else \"not \"\n print(f\"{target} was {not_str}found in {sequence}\")\n", "path": "searches/simple_binary_search.py"}, {"content": "if __name__ == \"__main__\":\n num_nodes, num_edges = list(map(int, input().strip().split()))\n\n edges = []\n\n for i in range(num_edges):\n node1, node2, cost = list(map(int, input().strip().split()))\n edges.append((i, node1, node2, cost))\n\n edges = sorted(edges, key=lambda edge: edge[3])\n\n parent = list(range(num_nodes))\n\n def find_parent(i):\n if i != parent[i]:\n parent[i] = find_parent(parent[i])\n return parent[i]\n\n minimum_spanning_tree_cost = 0\n minimum_spanning_tree = []\n\n for edge in edges:\n parent_a = find_parent(edge[1])\n parent_b = find_parent(edge[2])\n if parent_a != parent_b:\n minimum_spanning_tree_cost += edge[3]\n minimum_spanning_tree.append(edge)\n parent[parent_a] = parent_b\n\n print(minimum_spanning_tree_cost)\n for edge in minimum_spanning_tree:\n 
print(edge)\n", "path": "graphs/minimum_spanning_tree_kruskal.py"}]} | 2,174 | 565 |
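The golden diff in this row rewrites graphs/minimum_spanning_tree_kruskal.py so that the Kruskal logic lives in an importable kruskal(num_nodes, num_edges, edges) function that returns the selected edges instead of printing them. A minimal usage sketch of that refactored function, assuming the repository root is importable; the edge data and import path below are illustrative assumptions, not values taken from the row:

# Sketch only: exercises the kruskal() signature introduced by the golden diff above.
from graphs.minimum_spanning_tree_kruskal import kruskal  # assumed import path

# Hypothetical (node1, node2, cost) tuples, matching the refactored input format.
edges = [(0, 1, 1), (0, 2, 3), (1, 2, 2), (2, 3, 4)]
mst = kruskal(num_nodes=4, num_edges=len(edges), edges=edges)
assert sum(cost for _, _, cost in mst) == 7  # keeps (0,1), (1,2), (2,3)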
gh_patches_debug_32139 | rasdani/github-patches | git_diff | certbot__certbot-2141 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Travis: pep8: not found
In #2138 it was discovered that `pep8` is not run because it's not installed at the right time.
</issue>
<code>
[start of setup.py]
1 import codecs
2 import os
3 import re
4 import sys
5
6 from setuptools import setup
7 from setuptools import find_packages
8
9 # Workaround for http://bugs.python.org/issue8876, see
10 # http://bugs.python.org/issue8876#msg208792
11 # This can be removed when using Python 2.7.9 or later:
12 # https://hg.python.org/cpython/raw-file/v2.7.9/Misc/NEWS
13 if os.path.abspath(__file__).split(os.path.sep)[1] == 'vagrant':
14 del os.link
15
16
17 def read_file(filename, encoding='utf8'):
18 """Read unicode from given file."""
19 with codecs.open(filename, encoding=encoding) as fd:
20 return fd.read()
21
22
23 here = os.path.abspath(os.path.dirname(__file__))
24
25 # read version number (and other metadata) from package init
26 init_fn = os.path.join(here, 'letsencrypt', '__init__.py')
27 meta = dict(re.findall(r"""__([a-z]+)__ = '([^']+)""", read_file(init_fn)))
28
29 readme = read_file(os.path.join(here, 'README.rst'))
30 changes = read_file(os.path.join(here, 'CHANGES.rst'))
31 version = meta['version']
32
33 # Please update tox.ini when modifying dependency version requirements
34 install_requires = [
35 'acme=={0}'.format(version),
36 'configobj',
37 'cryptography>=0.7', # load_pem_x509_certificate
38 'parsedatetime',
39 'psutil>=2.1.0', # net_connections introduced in 2.1.0
40 'PyOpenSSL',
41 'pyrfc3339',
42 'python2-pythondialog>=3.2.2rc1', # Debian squeeze support, cf. #280
43 'pytz',
44 'setuptools', # pkg_resources
45 'six',
46 'zope.component',
47 'zope.interface',
48 ]
49
50 # env markers in extras_require cause problems with older pip: #517
51 if sys.version_info < (2, 7):
52 install_requires.extend([
53 # only some distros recognize stdlib argparse as already satisfying
54 'argparse',
55 'ConfigArgParse>=0.10.0', # python2.6 support, upstream #17
56 'mock<1.1.0',
57 ])
58 else:
59 install_requires.extend([
60 'ConfigArgParse',
61 'mock',
62 ])
63
64 dev_extras = [
65 # Pin astroid==1.3.5, pylint==1.4.2 as a workaround for #289
66 'astroid==1.3.5',
67 'pylint==1.4.2', # upstream #248
68 'twine',
69 'wheel',
70 ]
71
72 docs_extras = [
73 'repoze.sphinx.autointerface',
74 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags
75 'sphinx_rtd_theme',
76 'sphinxcontrib-programoutput',
77 ]
78
79 testing_extras = [
80 'coverage',
81 'nose',
82 'nosexcover',
83 'pep8',
84 'tox',
85 ]
86
87 setup(
88 name='letsencrypt',
89 version=version,
90 description="Let's Encrypt client",
91 long_description=readme, # later: + '\n\n' + changes
92 url='https://github.com/letsencrypt/letsencrypt',
93 author="Let's Encrypt Project",
94 author_email='[email protected]',
95 license='Apache License 2.0',
96 classifiers=[
97 'Development Status :: 3 - Alpha',
98 'Environment :: Console',
99 'Environment :: Console :: Curses',
100 'Intended Audience :: System Administrators',
101 'License :: OSI Approved :: Apache Software License',
102 'Operating System :: POSIX :: Linux',
103 'Programming Language :: Python',
104 'Programming Language :: Python :: 2',
105 'Programming Language :: Python :: 2.6',
106 'Programming Language :: Python :: 2.7',
107 'Topic :: Internet :: WWW/HTTP',
108 'Topic :: Security',
109 'Topic :: System :: Installation/Setup',
110 'Topic :: System :: Networking',
111 'Topic :: System :: Systems Administration',
112 'Topic :: Utilities',
113 ],
114
115 packages=find_packages(exclude=['docs', 'examples', 'tests', 'venv']),
116 include_package_data=True,
117
118 install_requires=install_requires,
119 extras_require={
120 'dev': dev_extras,
121 'docs': docs_extras,
122 'testing': testing_extras,
123 },
124
125 tests_require=install_requires,
126 # to test all packages run "python setup.py test -s
127 # {acme,letsencrypt_apache,letsencrypt_nginx}"
128 test_suite='letsencrypt',
129
130 entry_points={
131 'console_scripts': [
132 'letsencrypt = letsencrypt.cli:main',
133 'letsencrypt-renewer = letsencrypt.renewer:main',
134 ],
135 'letsencrypt.plugins': [
136 'manual = letsencrypt.plugins.manual:Authenticator',
137 'null = letsencrypt.plugins.null:Installer',
138 'standalone = letsencrypt.plugins.standalone:Authenticator',
139 'webroot = letsencrypt.plugins.webroot:Authenticator',
140 ],
141 },
142 )
143
[end of setup.py]
[start of acme/setup.py]
1 import sys
2
3 from setuptools import setup
4 from setuptools import find_packages
5
6
7 version = '0.2.0.dev0'
8
9 # Please update tox.ini when modifying dependency version requirements
10 install_requires = [
11 # load_pem_private/public_key (>=0.6)
12 # rsa_recover_prime_factors (>=0.8)
13 'cryptography>=0.8',
14 # Connection.set_tlsext_host_name (>=0.13)
15 'PyOpenSSL>=0.13',
16 'pyrfc3339',
17 'pytz',
18 'requests',
19 'setuptools', # pkg_resources
20 'six',
21 'werkzeug',
22 ]
23
24 # env markers in extras_require cause problems with older pip: #517
25 if sys.version_info < (2, 7):
26 install_requires.extend([
27 # only some distros recognize stdlib argparse as already satisfying
28 'argparse',
29 'mock<1.1.0',
30 ])
31 else:
32 install_requires.append('mock')
33
34 if sys.version_info < (2, 7, 9):
35 # For secure SSL connection with Python 2.7 (InsecurePlatformWarning)
36 install_requires.append('ndg-httpsclient')
37 install_requires.append('pyasn1')
38
39 docs_extras = [
40 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags
41 'sphinx_rtd_theme',
42 'sphinxcontrib-programoutput',
43 ]
44
45 testing_extras = [
46 'nose',
47 'tox',
48 ]
49
50
51 setup(
52 name='acme',
53 version=version,
54 description='ACME protocol implementation in Python',
55 url='https://github.com/letsencrypt/letsencrypt',
56 author="Let's Encrypt Project",
57 author_email='[email protected]',
58 license='Apache License 2.0',
59 classifiers=[
60 'Development Status :: 3 - Alpha',
61 'Intended Audience :: Developers',
62 'License :: OSI Approved :: Apache Software License',
63 'Programming Language :: Python',
64 'Programming Language :: Python :: 2',
65 'Programming Language :: Python :: 2.6',
66 'Programming Language :: Python :: 2.7',
67 'Programming Language :: Python :: 3',
68 'Programming Language :: Python :: 3.3',
69 'Programming Language :: Python :: 3.4',
70 'Programming Language :: Python :: 3.5',
71 'Topic :: Internet :: WWW/HTTP',
72 'Topic :: Security',
73 ],
74
75 packages=find_packages(),
76 include_package_data=True,
77 install_requires=install_requires,
78 extras_require={
79 'docs': docs_extras,
80 'testing': testing_extras,
81 },
82 entry_points={
83 'console_scripts': [
84 'jws = acme.jose.jws:CLI.run',
85 ],
86 },
87 test_suite='acme',
88 )
89
[end of acme/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/acme/setup.py b/acme/setup.py
--- a/acme/setup.py
+++ b/acme/setup.py
@@ -36,17 +36,18 @@
install_requires.append('ndg-httpsclient')
install_requires.append('pyasn1')
+dev_extras = [
+ 'nose',
+ 'pep8',
+ 'tox',
+]
+
docs_extras = [
'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags
'sphinx_rtd_theme',
'sphinxcontrib-programoutput',
]
-testing_extras = [
- 'nose',
- 'tox',
-]
-
setup(
name='acme',
@@ -76,8 +77,8 @@
include_package_data=True,
install_requires=install_requires,
extras_require={
+ 'dev': dev_extras,
'docs': docs_extras,
- 'testing': testing_extras,
},
entry_points={
'console_scripts': [
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -64,7 +64,12 @@
dev_extras = [
# Pin astroid==1.3.5, pylint==1.4.2 as a workaround for #289
'astroid==1.3.5',
+ 'coverage',
+ 'nose',
+ 'nosexcover',
+ 'pep8',
'pylint==1.4.2', # upstream #248
+ 'tox',
'twine',
'wheel',
]
@@ -76,14 +81,6 @@
'sphinxcontrib-programoutput',
]
-testing_extras = [
- 'coverage',
- 'nose',
- 'nosexcover',
- 'pep8',
- 'tox',
-]
-
setup(
name='letsencrypt',
version=version,
@@ -119,7 +116,6 @@
extras_require={
'dev': dev_extras,
'docs': docs_extras,
- 'testing': testing_extras,
},
tests_require=install_requires,
| {"golden_diff": "diff --git a/acme/setup.py b/acme/setup.py\n--- a/acme/setup.py\n+++ b/acme/setup.py\n@@ -36,17 +36,18 @@\n install_requires.append('ndg-httpsclient')\n install_requires.append('pyasn1')\n \n+dev_extras = [\n+ 'nose',\n+ 'pep8',\n+ 'tox',\n+]\n+\n docs_extras = [\n 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags\n 'sphinx_rtd_theme',\n 'sphinxcontrib-programoutput',\n ]\n \n-testing_extras = [\n- 'nose',\n- 'tox',\n-]\n-\n \n setup(\n name='acme',\n@@ -76,8 +77,8 @@\n include_package_data=True,\n install_requires=install_requires,\n extras_require={\n+ 'dev': dev_extras,\n 'docs': docs_extras,\n- 'testing': testing_extras,\n },\n entry_points={\n 'console_scripts': [\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -64,7 +64,12 @@\n dev_extras = [\n # Pin astroid==1.3.5, pylint==1.4.2 as a workaround for #289\n 'astroid==1.3.5',\n+ 'coverage',\n+ 'nose',\n+ 'nosexcover',\n+ 'pep8',\n 'pylint==1.4.2', # upstream #248\n+ 'tox',\n 'twine',\n 'wheel',\n ]\n@@ -76,14 +81,6 @@\n 'sphinxcontrib-programoutput',\n ]\n \n-testing_extras = [\n- 'coverage',\n- 'nose',\n- 'nosexcover',\n- 'pep8',\n- 'tox',\n-]\n-\n setup(\n name='letsencrypt',\n version=version,\n@@ -119,7 +116,6 @@\n extras_require={\n 'dev': dev_extras,\n 'docs': docs_extras,\n- 'testing': testing_extras,\n },\n \n tests_require=install_requires,\n", "issue": "Travis: pep8: not found\nIn #2138 it was discovered that `pep8` is not run because it's not installed at the right time.\n\n", "before_files": [{"content": "import codecs\nimport os\nimport re\nimport sys\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\n# Workaround for http://bugs.python.org/issue8876, see\n# http://bugs.python.org/issue8876#msg208792\n# This can be removed when using Python 2.7.9 or later:\n# https://hg.python.org/cpython/raw-file/v2.7.9/Misc/NEWS\nif os.path.abspath(__file__).split(os.path.sep)[1] == 'vagrant':\n del os.link\n\n\ndef read_file(filename, encoding='utf8'):\n \"\"\"Read unicode from given file.\"\"\"\n with codecs.open(filename, encoding=encoding) as fd:\n return fd.read()\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n# read version number (and other metadata) from package init\ninit_fn = os.path.join(here, 'letsencrypt', '__init__.py')\nmeta = dict(re.findall(r\"\"\"__([a-z]+)__ = '([^']+)\"\"\", read_file(init_fn)))\n\nreadme = read_file(os.path.join(here, 'README.rst'))\nchanges = read_file(os.path.join(here, 'CHANGES.rst'))\nversion = meta['version']\n\n# Please update tox.ini when modifying dependency version requirements\ninstall_requires = [\n 'acme=={0}'.format(version),\n 'configobj',\n 'cryptography>=0.7', # load_pem_x509_certificate\n 'parsedatetime',\n 'psutil>=2.1.0', # net_connections introduced in 2.1.0\n 'PyOpenSSL',\n 'pyrfc3339',\n 'python2-pythondialog>=3.2.2rc1', # Debian squeeze support, cf. 
#280\n 'pytz',\n 'setuptools', # pkg_resources\n 'six',\n 'zope.component',\n 'zope.interface',\n]\n\n# env markers in extras_require cause problems with older pip: #517\nif sys.version_info < (2, 7):\n install_requires.extend([\n # only some distros recognize stdlib argparse as already satisfying\n 'argparse',\n 'ConfigArgParse>=0.10.0', # python2.6 support, upstream #17\n 'mock<1.1.0',\n ])\nelse:\n install_requires.extend([\n 'ConfigArgParse',\n 'mock',\n ])\n\ndev_extras = [\n # Pin astroid==1.3.5, pylint==1.4.2 as a workaround for #289\n 'astroid==1.3.5',\n 'pylint==1.4.2', # upstream #248\n 'twine',\n 'wheel',\n]\n\ndocs_extras = [\n 'repoze.sphinx.autointerface',\n 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags\n 'sphinx_rtd_theme',\n 'sphinxcontrib-programoutput',\n]\n\ntesting_extras = [\n 'coverage',\n 'nose',\n 'nosexcover',\n 'pep8',\n 'tox',\n]\n\nsetup(\n name='letsencrypt',\n version=version,\n description=\"Let's Encrypt client\",\n long_description=readme, # later: + '\\n\\n' + changes\n url='https://github.com/letsencrypt/letsencrypt',\n author=\"Let's Encrypt Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Environment :: Console',\n 'Environment :: Console :: Curses',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n 'Topic :: System :: Installation/Setup',\n 'Topic :: System :: Networking',\n 'Topic :: System :: Systems Administration',\n 'Topic :: Utilities',\n ],\n\n packages=find_packages(exclude=['docs', 'examples', 'tests', 'venv']),\n include_package_data=True,\n\n install_requires=install_requires,\n extras_require={\n 'dev': dev_extras,\n 'docs': docs_extras,\n 'testing': testing_extras,\n },\n\n tests_require=install_requires,\n # to test all packages run \"python setup.py test -s\n # {acme,letsencrypt_apache,letsencrypt_nginx}\"\n test_suite='letsencrypt',\n\n entry_points={\n 'console_scripts': [\n 'letsencrypt = letsencrypt.cli:main',\n 'letsencrypt-renewer = letsencrypt.renewer:main',\n ],\n 'letsencrypt.plugins': [\n 'manual = letsencrypt.plugins.manual:Authenticator',\n 'null = letsencrypt.plugins.null:Installer',\n 'standalone = letsencrypt.plugins.standalone:Authenticator',\n 'webroot = letsencrypt.plugins.webroot:Authenticator',\n ],\n },\n)\n", "path": "setup.py"}, {"content": "import sys\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\n\nversion = '0.2.0.dev0'\n\n# Please update tox.ini when modifying dependency version requirements\ninstall_requires = [\n # load_pem_private/public_key (>=0.6)\n # rsa_recover_prime_factors (>=0.8)\n 'cryptography>=0.8',\n # Connection.set_tlsext_host_name (>=0.13)\n 'PyOpenSSL>=0.13',\n 'pyrfc3339',\n 'pytz',\n 'requests',\n 'setuptools', # pkg_resources\n 'six',\n 'werkzeug',\n]\n\n# env markers in extras_require cause problems with older pip: #517\nif sys.version_info < (2, 7):\n install_requires.extend([\n # only some distros recognize stdlib argparse as already satisfying\n 'argparse',\n 'mock<1.1.0',\n ])\nelse:\n install_requires.append('mock')\n\nif sys.version_info < (2, 7, 9):\n # For secure SSL connection with Python 2.7 (InsecurePlatformWarning)\n 
install_requires.append('ndg-httpsclient')\n install_requires.append('pyasn1')\n\ndocs_extras = [\n 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags\n 'sphinx_rtd_theme',\n 'sphinxcontrib-programoutput',\n]\n\ntesting_extras = [\n 'nose',\n 'tox',\n]\n\n\nsetup(\n name='acme',\n version=version,\n description='ACME protocol implementation in Python',\n url='https://github.com/letsencrypt/letsencrypt',\n author=\"Let's Encrypt Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n ],\n\n packages=find_packages(),\n include_package_data=True,\n install_requires=install_requires,\n extras_require={\n 'docs': docs_extras,\n 'testing': testing_extras,\n },\n entry_points={\n 'console_scripts': [\n 'jws = acme.jose.jws:CLI.run',\n ],\n },\n test_suite='acme',\n)\n", "path": "acme/setup.py"}]} | 2,851 | 503 |
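The certbot/letsencrypt row above resolves the Travis "pep8: not found" failure by folding the former testing extras into dev_extras in both setup.py and acme/setup.py, so installing the dev extra is what puts pep8 (along with tox, nose, and coverage) into the CI environment. A small verification sketch, assuming the patched letsencrypt package has been installed with that extra; the pkg_resources-based check is an illustrative assumption, not code from the repository:

# Sketch only: confirms the "dev" extra now carries the lint/test tools.
import pkg_resources

dist = pkg_resources.get_distribution("letsencrypt")
dev_requires = {req.project_name for req in dist.requires(extras=("dev",))}
assert {"pep8", "tox", "nose", "coverage"} <= dev_requires  # moved from testing_extras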