| Column | Type | Range |
|---|---|---|
| problem_id | string | lengths 18–22 |
| source | string | 1 distinct value |
| task_type | string | 1 distinct value |
| in_source_id | string | lengths 13–58 |
| prompt | string | lengths 1.71k–18.9k |
| golden_diff | string | lengths 145–5.13k |
| verification_info | string | lengths 465–23.6k |
| num_tokens_prompt | int64 | 556–4.1k |
| num_tokens_diff | int64 | 47–1.02k |
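
Rows with this schema are easiest to inspect programmatically. A minimal sketch, assuming the dataset is published on the Hugging Face Hub under the `rasdani/github-patches` id that appears in the `source` column and has a `train` split (both the repository id and the split name are assumptions, not stated on this page):

```python
# Sketch: load one row and unpack its fields (repo id and split are assumed).
import json

from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")  # assumed id/split
row = ds[0]

print(row["problem_id"], row["in_source_id"], row["num_tokens_prompt"])

# verification_info is a JSON string bundling the golden diff, the issue text,
# and the pre-patch file contents ("before_files").
info = json.loads(row["verification_info"])
print(sorted(info.keys()))  # e.g. ['before_files', 'golden_diff', 'issue']
```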

problem_id: gh_patches_debug_13031 | source: rasdani/github-patches | task_type: git_diff | in_source_id: inventree__InvenTree-6284

prompt:

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Platform UI refuses to log out
### Please verify that this bug has NOT been raised before.
- [X] I checked and didn't find a similar issue
### Describe the bug*
Discovered when I was setting up Platform UI for dev: trying to log out simply sends you to the Home page and tells you that you were already logged in

### Steps to Reproduce
Not sure about the exact trigger here. It's still occurring for me as it did yesterday.
### Expected behaviour
Being able to log out
### Deployment Method
- [ ] Docker
- [ ] Bare metal
### Version Information
InvenTree - inventree.org
The Open-Source Inventory Management System
Installation paths:
Base /workspaces/InvenTree
Config /workspaces/InvenTree/dev/config.yaml
Media /workspaces/InvenTree/dev/media
Static /workspaces/InvenTree/dev/static
Versions:
Python 3.10.10
Django 3.2.23
InvenTree 0.13.0 dev
API 152
Node v20.9.0
Yarn 1.22.19
Commit hash:dabd95d
Commit date:2023-11-21
### Please verify if you can reproduce this bug on the demo site.
- [ ] I can reproduce this bug on the demo site.
### Relevant log output
```shell
Created new API token for user 'admin' (name='inventree-web-app')
[22/Nov/2023 17:23:56] "GET /api/user/token/?name=inventree-web-app HTTP/1.1" 200 114
[22/Nov/2023 17:23:56] "GET /api/user/me/ HTTP/1.1" 200 134
[22/Nov/2023 17:23:56] "GET /api/notifications/?read=false&limit=1 HTTP/1.1" 200 52
[22/Nov/2023 17:23:57] "GET /api/user/roles/ HTTP/1.1" 200 527
[22/Nov/2023 17:23:57] "GET /api/settings/global/ HTTP/1.1" 200 27344
Created new API token for user 'admin' (name='inventree-web-app')
[22/Nov/2023 17:23:57] "GET /api/user/token/?name=inventree-web-app HTTP/1.1" 200 114
Background worker check failed
Email backend not configured
InvenTree system health checks failed
[22/Nov/2023 17:23:57] "GET /api/ HTTP/1.1" 200 1145
[22/Nov/2023 17:23:57] "GET /api/user/me/ HTTP/1.1" 200 134
[22/Nov/2023 17:23:57] "GET /api/generic/status/ HTTP/1.1" 200 5851
[22/Nov/2023 17:23:57] "GET /api/user/roles/ HTTP/1.1" 200 527
Background worker check failed
Email backend not configured
InvenTree system health checks failed
[22/Nov/2023 17:23:58] "GET /api/settings/global/ HTTP/1.1" 200 27344
[22/Nov/2023 17:23:58] "GET /api/ HTTP/1.1" 200 1145
[22/Nov/2023 17:23:58] "GET /api/settings/user/ HTTP/1.1" 200 13878
[22/Nov/2023 17:23:58] "GET /api/generic/status/ HTTP/1.1" 200 5851
[22/Nov/2023 17:23:58] "GET /api/settings/user/ HTTP/1.1" 200 13878
```
</issue>
<code>
[start of InvenTree/web/urls.py]
1 """URLs for web app."""
2
3 from django.conf import settings
4 from django.shortcuts import redirect
5 from django.urls import include, path
6 from django.views.decorators.csrf import ensure_csrf_cookie
7 from django.views.generic import TemplateView
8
9
10 class RedirectAssetView(TemplateView):
11 """View to redirect to static asset."""
12
13 def get(self, request, *args, **kwargs):
14 """Redirect to static asset."""
15 return redirect(
16 f"{settings.STATIC_URL}web/assets/{kwargs['path']}", permanent=True
17 )
18
19
20 spa_view = ensure_csrf_cookie(TemplateView.as_view(template_name='web/index.html'))
21 assets_path = path('assets/<path:path>', RedirectAssetView.as_view())
22
23
24 urlpatterns = [
25 path(
26 f'{settings.FRONTEND_URL_BASE}/',
27 include([
28 assets_path,
29 path(
30 'set-password?uid=<uid>&token=<token>',
31 spa_view,
32 name='password_reset_confirm',
33 ),
34 path('', spa_view),
35 ]),
36 ),
37 assets_path,
38 path(settings.FRONTEND_URL_BASE, spa_view, name='platform'),
39 ]
40
[end of InvenTree/web/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/InvenTree/web/urls.py b/InvenTree/web/urls.py
--- a/InvenTree/web/urls.py
+++ b/InvenTree/web/urls.py
@@ -2,7 +2,7 @@
from django.conf import settings
from django.shortcuts import redirect
-from django.urls import include, path
+from django.urls import include, path, re_path
from django.views.decorators.csrf import ensure_csrf_cookie
from django.views.generic import TemplateView
@@ -31,7 +31,7 @@
spa_view,
name='password_reset_confirm',
),
- path('', spa_view),
+ re_path('.*', spa_view),
]),
),
assets_path,
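
The one-line routing change is easier to read in context. Below is a sketch of how the frontend urlpatterns look after the diff is applied, reusing the `spa_view` and `assets_path` objects defined in the file above; it is an illustration, not a verbatim copy of the patched repository.

```python
# After the patch, every sub-path under the frontend base URL (for example
# /platform/logout) is routed to the single-page app, whereas path('', spa_view)
# only matched the empty sub-path.
from django.conf import settings
from django.urls import include, path, re_path

urlpatterns = [
    path(
        f'{settings.FRONTEND_URL_BASE}/',
        include([
            assets_path,
            path(
                'set-password?uid=<uid>&token=<token>',
                spa_view,
                name='password_reset_confirm',
            ),
            re_path('.*', spa_view),  # catch-all for client-side routes
        ]),
    ),
    assets_path,
    path(settings.FRONTEND_URL_BASE, spa_view, name='platform'),
]
```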
| {"golden_diff": "diff --git a/InvenTree/web/urls.py b/InvenTree/web/urls.py\n--- a/InvenTree/web/urls.py\n+++ b/InvenTree/web/urls.py\n@@ -2,7 +2,7 @@\n \n from django.conf import settings\n from django.shortcuts import redirect\n-from django.urls import include, path\n+from django.urls import include, path, re_path\n from django.views.decorators.csrf import ensure_csrf_cookie\n from django.views.generic import TemplateView\n \n@@ -31,7 +31,7 @@\n spa_view,\n name='password_reset_confirm',\n ),\n- path('', spa_view),\n+ re_path('.*', spa_view),\n ]),\n ),\n assets_path,\n", "issue": "Platform UI refuses to log out\n### Please verify that this bug has NOT been raised before.\n\n- [X] I checked and didn't find a similar issue\n\n### Describe the bug*\n\nDiscovered when I was setting up Platorm UI for dev, trying to log out simply sends you to the Home page and tells you that you were already logged in\r\n\r\n\n\n### Steps to Reproduce\n\nNot sure about the exact trigger here. It's still occuring to me as it did yesterday.\r\n\n\n### Expected behaviour\n\nBeing able to log out\n\n### Deployment Method\n\n- [ ] Docker\n- [ ] Bare metal\n\n### Version Information\n\nInvenTree - inventree.org\r\nThe Open-Source Inventory Management System\r\n\r\n\r\nInstallation paths:\r\nBase /workspaces/InvenTree\r\nConfig /workspaces/InvenTree/dev/config.yaml\r\nMedia /workspaces/InvenTree/dev/media\r\nStatic /workspaces/InvenTree/dev/static\r\n\r\nVersions:\r\nPython 3.10.10\r\nDjango 3.2.23\r\nInvenTree 0.13.0 dev\r\nAPI 152\r\nNode v20.9.0\r\nYarn 1.22.19\r\n\r\nCommit hash:dabd95d\r\nCommit date:2023-11-21\n\n### Please verify if you can reproduce this bug on the demo site.\n\n- [ ] I can reproduce this bug on the demo site.\n\n### Relevant log output\n\n```shell\nCreated new API token for user 'admin' (name='inventree-web-app')\r\n[22/Nov/2023 17:23:56] \"GET /api/user/token/?name=inventree-web-app HTTP/1.1\" 200 114\r\n[22/Nov/2023 17:23:56] \"GET /api/user/me/ HTTP/1.1\" 200 134\r\n[22/Nov/2023 17:23:56] \"GET /api/notifications/?read=false&limit=1 HTTP/1.1\" 200 52\r\n[22/Nov/2023 17:23:57] \"GET /api/user/roles/ HTTP/1.1\" 200 527\r\n[22/Nov/2023 17:23:57] \"GET /api/settings/global/ HTTP/1.1\" 200 27344\r\nCreated new API token for user 'admin' (name='inventree-web-app')\r\n[22/Nov/2023 17:23:57] \"GET /api/user/token/?name=inventree-web-app HTTP/1.1\" 200 114\r\nBackground worker check failed\r\nEmail backend not configured\r\nInvenTree system health checks failed\r\n[22/Nov/2023 17:23:57] \"GET /api/ HTTP/1.1\" 200 1145\r\n[22/Nov/2023 17:23:57] \"GET /api/user/me/ HTTP/1.1\" 200 134\r\n[22/Nov/2023 17:23:57] \"GET /api/generic/status/ HTTP/1.1\" 200 5851\r\n[22/Nov/2023 17:23:57] \"GET /api/user/roles/ HTTP/1.1\" 200 527\r\nBackground worker check failed\r\nEmail backend not configured\r\nInvenTree system health checks failed\r\n[22/Nov/2023 17:23:58] \"GET /api/settings/global/ HTTP/1.1\" 200 27344\r\n[22/Nov/2023 17:23:58] \"GET /api/ HTTP/1.1\" 200 1145\r\n[22/Nov/2023 17:23:58] \"GET /api/settings/user/ HTTP/1.1\" 200 13878\r\n[22/Nov/2023 17:23:58] \"GET /api/generic/status/ HTTP/1.1\" 200 5851\r\n[22/Nov/2023 17:23:58] \"GET /api/settings/user/ HTTP/1.1\" 200 13878\n```\n\n", "before_files": [{"content": "\"\"\"URLs for web app.\"\"\"\n\nfrom django.conf import settings\nfrom django.shortcuts import redirect\nfrom django.urls import include, path\nfrom django.views.decorators.csrf import ensure_csrf_cookie\nfrom django.views.generic import TemplateView\n\n\nclass 
RedirectAssetView(TemplateView):\n \"\"\"View to redirect to static asset.\"\"\"\n\n def get(self, request, *args, **kwargs):\n \"\"\"Redirect to static asset.\"\"\"\n return redirect(\n f\"{settings.STATIC_URL}web/assets/{kwargs['path']}\", permanent=True\n )\n\n\nspa_view = ensure_csrf_cookie(TemplateView.as_view(template_name='web/index.html'))\nassets_path = path('assets/<path:path>', RedirectAssetView.as_view())\n\n\nurlpatterns = [\n path(\n f'{settings.FRONTEND_URL_BASE}/',\n include([\n assets_path,\n path(\n 'set-password?uid=<uid>&token=<token>',\n spa_view,\n name='password_reset_confirm',\n ),\n path('', spa_view),\n ]),\n ),\n assets_path,\n path(settings.FRONTEND_URL_BASE, spa_view, name='platform'),\n]\n", "path": "InvenTree/web/urls.py"}]} | 1,956 | 155 |

problem_id: gh_patches_debug_2424 | source: rasdani/github-patches | task_type: git_diff | in_source_id: microsoft__ptvsd-362

prompt:

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PTVSD fails to run on windows
```
Traceback (most recent call last):
File "C:\Users\karth\.vscode\extensions\ms-python.python-2018.3.1\pythonFiles\experimental\ptvsd_launcher.py", line 96,
in <module>
vspd.debug(filename, port_num, debug_id, debug_options, run_as)
File "c:\git\ptvsd\ptvsd\debugger.py", line 36, in debug
run(address, filename, *args, **kwargs)
File "c:\git\ptvsd\ptvsd\__main__.py", line 37, in run_file
run(argv, addr, **kwargs)
File "c:\git\ptvsd\ptvsd\__main__.py", line 85, in _run
daemon = _install(_pydevd, addr, **kwargs)
File "c:\git\ptvsd\ptvsd\pydevd_hooks.py", line 52, in install
daemon = Daemon(**kwargs)
File "c:\git\ptvsd\ptvsd\daemon.py", line 53, in __init__
self.install_exit_handlers()
File "c:\git\ptvsd\ptvsd\daemon.py", line 91, in install_exit_handlers
signal.SIGHUP: [],
AttributeError: module 'signal' has no attribute 'SIGHUP'
```
</issue>
<code>
[start of ptvsd/daemon.py]
1 import atexit
2 import os
3 import platform
4 import signal
5 import sys
6
7 from ptvsd import wrapper
8 from ptvsd.socket import close_socket
9
10
11 def _wait_on_exit():
12 if sys.__stdout__ is not None:
13 try:
14 import msvcrt
15 except ImportError:
16 sys.__stdout__.write('Press Enter to continue . . . ')
17 sys.__stdout__.flush()
18 sys.__stdin__.read(1)
19 else:
20 sys.__stdout__.write('Press any key to continue . . . ')
21 sys.__stdout__.flush()
22 msvcrt.getch()
23
24
25 class DaemonClosedError(RuntimeError):
26 """Indicates that a Daemon was unexpectedly closed."""
27 def __init__(self, msg='closed'):
28 super(DaemonClosedError, self).__init__(msg)
29
30
31 class Daemon(object):
32 """The process-level manager for the VSC protocol debug adapter."""
33
34 exitcode = 0
35
36 def __init__(self, wait_on_exit=_wait_on_exit,
37 addhandlers=True, killonclose=True):
38 self.wait_on_exit = wait_on_exit
39 self.killonclose = killonclose
40
41 self._closed = False
42 self._exiting_via_atexit_handler = False
43
44 self._pydevd = None
45 self._server = None
46 self._client = None
47 self._adapter = None
48
49 self._signal_handlers = None
50 self._atexit_handlers = None
51 self._handlers_installed = False
52 if addhandlers:
53 self.install_exit_handlers()
54
55 @property
56 def pydevd(self):
57 return self._pydevd
58
59 @property
60 def server(self):
61 return self._server
62
63 @property
64 def client(self):
65 return self._client
66
67 @property
68 def adapter(self):
69 return self._adapter
70
71 def start(self, server=None):
72 """Return the "socket" to use for pydevd after setting it up."""
73 if self._closed:
74 raise DaemonClosedError()
75 if self._pydevd is not None:
76 raise RuntimeError('already started')
77 self._pydevd = wrapper.PydevdSocket(
78 self._handle_pydevd_message,
79 self._handle_pydevd_close,
80 self._getpeername,
81 self._getsockname,
82 )
83 self._server = server
84 return self._pydevd
85
86 def install_exit_handlers(self):
87 """Set the placeholder handlers."""
88 if self._signal_handlers is not None:
89 raise RuntimeError('exit handlers already installed')
90 self._signal_handlers = {
91 signal.SIGHUP: [],
92 }
93 self._atexit_handlers = []
94
95 if platform.system() != 'Windows':
96 try:
97 for sig in self._signal_handlers:
98 signal.signal(sig, self._signal_handler)
99 except ValueError:
100 # Wasn't called in main thread!
101 raise
102 atexit.register(self._atexit_handler)
103
104 def set_connection(self, client):
105 """Set the client socket to use for the debug adapter.
106
107 A VSC message loop is started for the client.
108 """
109 if self._closed:
110 raise DaemonClosedError()
111 if self._pydevd is None:
112 raise RuntimeError('not started yet')
113 if self._client is not None:
114 raise RuntimeError('connection already set')
115 self._client = client
116
117 self._adapter = wrapper.VSCodeMessageProcessor(
118 client,
119 self._pydevd.pydevd_notify,
120 self._pydevd.pydevd_request,
121 self._handle_vsc_disconnect,
122 self._handle_vsc_close,
123 )
124 name = 'ptvsd.Client' if self._server is None else 'ptvsd.Server'
125 self._adapter.start(name)
126 if self._signal_handlers is not None:
127 self._add_signal_handlers()
128 self._add_atexit_handler()
129 return self._adapter
130
131 def close(self):
132 """Stop all loops and release all resources."""
133 if self._closed:
134 raise DaemonClosedError('already closed')
135 self._closed = True
136
137 if self._adapter is not None:
138 normal, abnormal = self._adapter._wait_options()
139 if (normal and not self.exitcode) or (abnormal and self.exitcode):
140 self.wait_on_exit()
141
142 if self._pydevd is not None:
143 close_socket(self._pydevd)
144 if self._client is not None:
145 self._release_connection()
146
147 def re_build_breakpoints(self):
148 self.adapter.re_build_breakpoints()
149
150 # internal methods
151
152 def _signal_handler(self, signum, frame):
153 for handle_signal in self._signal_handlers.get(signum, ()):
154 handle_signal(signum, frame)
155
156 def _atexit_handler(self):
157 for handle_atexit in self._atexit_handlers:
158 handle_atexit()
159
160 def _add_atexit_handler(self):
161 def handler():
162 self._exiting_via_atexit_handler = True
163 if not self._closed:
164 self.close()
165 if self._adapter is not None:
166 # TODO: Do this in VSCodeMessageProcessor.close()?
167 self._adapter._wait_for_server_thread()
168 self._atexit_handlers.append(handler)
169
170 def _add_signal_handlers(self):
171 def handler(signum, frame):
172 if not self._closed:
173 self.close()
174 sys.exit(0)
175 self._signal_handlers[signal.SIGHUP].append(handler)
176
177 def _release_connection(self):
178 if self._adapter is not None:
179 # TODO: This is not correct in the "attach" case.
180 self._adapter.handle_pydevd_stopped(self.exitcode)
181 self._adapter.close()
182 close_socket(self._client)
183
184 # internal methods for PyDevdSocket().
185
186 def _handle_pydevd_message(self, cmdid, seq, text):
187 if self._adapter is not None:
188 self._adapter.on_pydevd_event(cmdid, seq, text)
189
190 def _handle_pydevd_close(self):
191 if self._closed:
192 return
193 self.close()
194
195 def _getpeername(self):
196 if self._client is None:
197 raise NotImplementedError
198 return self._client.getpeername()
199
200 def _getsockname(self):
201 if self._client is None:
202 raise NotImplementedError
203 return self._client.getsockname()
204
205 # internal methods for VSCodeMessageProcessor
206
207 def _handle_vsc_disconnect(self, kill=False):
208 if not self._closed:
209 self.close()
210 if kill and self.killonclose and not self._exiting_via_atexit_handler:
211 os.kill(os.getpid(), signal.SIGTERM)
212
213 def _handle_vsc_close(self):
214 if self._closed:
215 return
216 self.close()
217
[end of ptvsd/daemon.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/ptvsd/daemon.py b/ptvsd/daemon.py
--- a/ptvsd/daemon.py
+++ b/ptvsd/daemon.py
@@ -168,6 +168,9 @@
self._atexit_handlers.append(handler)
def _add_signal_handlers(self):
+ if platform.system() == 'Windows':
+ return
+
def handler(signum, frame):
if not self._closed:
self.close()
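
The patch simply returns early from `_add_signal_handlers` on Windows. For comparison, a more granular guard would register only the signals the running platform actually defines; a sketch of that alternative (not what the patch does) is shown below.

```python
import signal


def install_available_handlers(handler):
    """Register handler only for signals this platform defines."""
    # The Windows signal module has no SIGHUP attribute, which is exactly the
    # AttributeError shown in the traceback above.
    for name in ("SIGHUP", "SIGTERM", "SIGINT"):
        sig = getattr(signal, name, None)
        if sig is not None:
            signal.signal(sig, handler)
```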
| {"golden_diff": "diff --git a/ptvsd/daemon.py b/ptvsd/daemon.py\n--- a/ptvsd/daemon.py\n+++ b/ptvsd/daemon.py\n@@ -168,6 +168,9 @@\n self._atexit_handlers.append(handler)\n \n def _add_signal_handlers(self):\n+ if platform.system() == 'Windows':\n+ return\n+\n def handler(signum, frame):\n if not self._closed:\n self.close()\n", "issue": "PTVSD fails to run on windows\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\karth\\.vscode\\extensions\\ms-python.python-2018.3.1\\pythonFiles\\experimental\\ptvsd_launcher.py\", line 96,\r\nin <module>\r\n vspd.debug(filename, port_num, debug_id, debug_options, run_as)\r\n File \"c:\\git\\ptvsd\\ptvsd\\debugger.py\", line 36, in debug\r\n run(address, filename, *args, **kwargs)\r\n File \"c:\\git\\ptvsd\\ptvsd\\__main__.py\", line 37, in run_file\r\n run(argv, addr, **kwargs)\r\n File \"c:\\git\\ptvsd\\ptvsd\\__main__.py\", line 85, in _run\r\n daemon = _install(_pydevd, addr, **kwargs)\r\n File \"c:\\git\\ptvsd\\ptvsd\\pydevd_hooks.py\", line 52, in install\r\n daemon = Daemon(**kwargs)\r\n File \"c:\\git\\ptvsd\\ptvsd\\daemon.py\", line 53, in __init__\r\n self.install_exit_handlers()\r\n File \"c:\\git\\ptvsd\\ptvsd\\daemon.py\", line 91, in install_exit_handlers\r\n signal.SIGHUP: [],\r\nAttributeError: module 'signal' has no attribute 'SIGHUP'\r\n```\nPTVSD fails to run on windows\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\karth\\.vscode\\extensions\\ms-python.python-2018.3.1\\pythonFiles\\experimental\\ptvsd_launcher.py\", line 96,\r\nin <module>\r\n vspd.debug(filename, port_num, debug_id, debug_options, run_as)\r\n File \"c:\\git\\ptvsd\\ptvsd\\debugger.py\", line 36, in debug\r\n run(address, filename, *args, **kwargs)\r\n File \"c:\\git\\ptvsd\\ptvsd\\__main__.py\", line 37, in run_file\r\n run(argv, addr, **kwargs)\r\n File \"c:\\git\\ptvsd\\ptvsd\\__main__.py\", line 85, in _run\r\n daemon = _install(_pydevd, addr, **kwargs)\r\n File \"c:\\git\\ptvsd\\ptvsd\\pydevd_hooks.py\", line 52, in install\r\n daemon = Daemon(**kwargs)\r\n File \"c:\\git\\ptvsd\\ptvsd\\daemon.py\", line 53, in __init__\r\n self.install_exit_handlers()\r\n File \"c:\\git\\ptvsd\\ptvsd\\daemon.py\", line 91, in install_exit_handlers\r\n signal.SIGHUP: [],\r\nAttributeError: module 'signal' has no attribute 'SIGHUP'\r\n```\n", "before_files": [{"content": "import atexit\nimport os\nimport platform\nimport signal\nimport sys\n\nfrom ptvsd import wrapper\nfrom ptvsd.socket import close_socket\n\n\ndef _wait_on_exit():\n if sys.__stdout__ is not None:\n try:\n import msvcrt\n except ImportError:\n sys.__stdout__.write('Press Enter to continue . . . ')\n sys.__stdout__.flush()\n sys.__stdin__.read(1)\n else:\n sys.__stdout__.write('Press any key to continue . . . 
')\n sys.__stdout__.flush()\n msvcrt.getch()\n\n\nclass DaemonClosedError(RuntimeError):\n \"\"\"Indicates that a Daemon was unexpectedly closed.\"\"\"\n def __init__(self, msg='closed'):\n super(DaemonClosedError, self).__init__(msg)\n\n\nclass Daemon(object):\n \"\"\"The process-level manager for the VSC protocol debug adapter.\"\"\"\n\n exitcode = 0\n\n def __init__(self, wait_on_exit=_wait_on_exit,\n addhandlers=True, killonclose=True):\n self.wait_on_exit = wait_on_exit\n self.killonclose = killonclose\n\n self._closed = False\n self._exiting_via_atexit_handler = False\n\n self._pydevd = None\n self._server = None\n self._client = None\n self._adapter = None\n\n self._signal_handlers = None\n self._atexit_handlers = None\n self._handlers_installed = False\n if addhandlers:\n self.install_exit_handlers()\n\n @property\n def pydevd(self):\n return self._pydevd\n\n @property\n def server(self):\n return self._server\n\n @property\n def client(self):\n return self._client\n\n @property\n def adapter(self):\n return self._adapter\n\n def start(self, server=None):\n \"\"\"Return the \"socket\" to use for pydevd after setting it up.\"\"\"\n if self._closed:\n raise DaemonClosedError()\n if self._pydevd is not None:\n raise RuntimeError('already started')\n self._pydevd = wrapper.PydevdSocket(\n self._handle_pydevd_message,\n self._handle_pydevd_close,\n self._getpeername,\n self._getsockname,\n )\n self._server = server\n return self._pydevd\n\n def install_exit_handlers(self):\n \"\"\"Set the placeholder handlers.\"\"\"\n if self._signal_handlers is not None:\n raise RuntimeError('exit handlers already installed')\n self._signal_handlers = {\n signal.SIGHUP: [],\n }\n self._atexit_handlers = []\n\n if platform.system() != 'Windows':\n try:\n for sig in self._signal_handlers:\n signal.signal(sig, self._signal_handler)\n except ValueError:\n # Wasn't called in main thread!\n raise\n atexit.register(self._atexit_handler)\n\n def set_connection(self, client):\n \"\"\"Set the client socket to use for the debug adapter.\n\n A VSC message loop is started for the client.\n \"\"\"\n if self._closed:\n raise DaemonClosedError()\n if self._pydevd is None:\n raise RuntimeError('not started yet')\n if self._client is not None:\n raise RuntimeError('connection already set')\n self._client = client\n\n self._adapter = wrapper.VSCodeMessageProcessor(\n client,\n self._pydevd.pydevd_notify,\n self._pydevd.pydevd_request,\n self._handle_vsc_disconnect,\n self._handle_vsc_close,\n )\n name = 'ptvsd.Client' if self._server is None else 'ptvsd.Server'\n self._adapter.start(name)\n if self._signal_handlers is not None:\n self._add_signal_handlers()\n self._add_atexit_handler()\n return self._adapter\n\n def close(self):\n \"\"\"Stop all loops and release all resources.\"\"\"\n if self._closed:\n raise DaemonClosedError('already closed')\n self._closed = True\n\n if self._adapter is not None:\n normal, abnormal = self._adapter._wait_options()\n if (normal and not self.exitcode) or (abnormal and self.exitcode):\n self.wait_on_exit()\n\n if self._pydevd is not None:\n close_socket(self._pydevd)\n if self._client is not None:\n self._release_connection()\n\n def re_build_breakpoints(self):\n self.adapter.re_build_breakpoints()\n\n # internal methods\n\n def _signal_handler(self, signum, frame):\n for handle_signal in self._signal_handlers.get(signum, ()):\n handle_signal(signum, frame)\n\n def _atexit_handler(self):\n for handle_atexit in self._atexit_handlers:\n handle_atexit()\n\n def _add_atexit_handler(self):\n 
def handler():\n self._exiting_via_atexit_handler = True\n if not self._closed:\n self.close()\n if self._adapter is not None:\n # TODO: Do this in VSCodeMessageProcessor.close()?\n self._adapter._wait_for_server_thread()\n self._atexit_handlers.append(handler)\n\n def _add_signal_handlers(self):\n def handler(signum, frame):\n if not self._closed:\n self.close()\n sys.exit(0)\n self._signal_handlers[signal.SIGHUP].append(handler)\n\n def _release_connection(self):\n if self._adapter is not None:\n # TODO: This is not correct in the \"attach\" case.\n self._adapter.handle_pydevd_stopped(self.exitcode)\n self._adapter.close()\n close_socket(self._client)\n\n # internal methods for PyDevdSocket().\n\n def _handle_pydevd_message(self, cmdid, seq, text):\n if self._adapter is not None:\n self._adapter.on_pydevd_event(cmdid, seq, text)\n\n def _handle_pydevd_close(self):\n if self._closed:\n return\n self.close()\n\n def _getpeername(self):\n if self._client is None:\n raise NotImplementedError\n return self._client.getpeername()\n\n def _getsockname(self):\n if self._client is None:\n raise NotImplementedError\n return self._client.getsockname()\n\n # internal methods for VSCodeMessageProcessor\n\n def _handle_vsc_disconnect(self, kill=False):\n if not self._closed:\n self.close()\n if kill and self.killonclose and not self._exiting_via_atexit_handler:\n os.kill(os.getpid(), signal.SIGTERM)\n\n def _handle_vsc_close(self):\n if self._closed:\n return\n self.close()\n", "path": "ptvsd/daemon.py"}]} | 3,199 | 106 |

problem_id: gh_patches_debug_24482 | source: rasdani/github-patches | task_type: git_diff | in_source_id: sunpy__sunpy-3515

prompt:

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error in documentation for "Finding bright regions with ndimage" example.
### Description
There seems to be an error in the documentation for the "Finding bright regions with ndimage" example.
In the part where a mask is made, the surrounding text states: "We choose the criterion that the data should be at least 5% of the maximum value." However, if you look at the code immediately below, the threshold is based off 10% of the max value:
`mask = aiamap.data < aiamap.max() * 0.10`
### Expected behavior
Documentation needs to be modified to reflect that the threshold is based off a 10% threshold.
</issue>
<code>
[start of examples/map/image_bright_regions_gallery_example.py]
1 # coding: utf-8
2 """
3 ===================================
4 Finding bright regions with ndimage
5 ===================================
6
7 How you can to find the brightest regions in an AIA image and
8 count the approximate number of regions of interest using ndimage.
9 """
10 # sphinx_gallery_thumbnail_number = 2
11
12 from scipy import ndimage
13 import matplotlib.pyplot as plt
14
15 import sunpy.map
16 from sunpy.data.sample import AIA_193_IMAGE
17
18 ###############################################################################
19 # We start with the sample data
20 aiamap_mask = sunpy.map.Map(AIA_193_IMAGE)
21 aiamap = sunpy.map.Map(AIA_193_IMAGE)
22
23 ##############################################################################
24 # First we make a mask, which tells us which regions are bright. We
25 # choose the criterion that the data should be at least 5% of the maximum
26 # value. Pixels with intensity values greater than this are included in the
27 # mask, while all other pixels are excluded.
28 mask = aiamap.data < aiamap.max() * 0.10
29
30 ##############################################################################
31 # Mask is a `boolean` array. It can be used to modify the original map object
32 # without modifying the data. Once this mask attribute is set, we can plot the
33 # image again.
34 aiamap_mask.mask = mask
35 plt.figure()
36 aiamap.plot()
37 plt.colorbar()
38 plt.show()
39
40 ##############################################################################
41 # Only the brightest pixels remain in the image.
42 # However, these areas are artificially broken up into small regions.
43 # We can solve this by applying some smoothing to the image data.
44 # Here we apply a 2D Gaussian smoothing function to the data.
45 data2 = ndimage.gaussian_filter(aiamap.data * ~mask, 14)
46
47 ##############################################################################
48 # The issue with the filtering is that it create pixels where the values are
49 # small (<100), so when we go on later to label this array,
50 # we get one large region which encompasses the entire array.
51 # If you want to see, just remove this line.
52 data2[data2 < 100] = 0
53
54 ##############################################################################
55 # Now we will make a second SunPy map with this smoothed data.
56 aiamap2 = sunpy.map.Map(data2, aiamap.meta)
57
58 ##############################################################################
59 # The function `label` from the `scipy.ndimage` module, counts the number of
60 # contiguous regions in an image.
61 labels, n = ndimage.label(aiamap2.data)
62
63 ##############################################################################
64 # Finally, we plot the smoothed bright image data, along with the estimate of
65 # the number of distinct regions. We can see that approximately 6 distinct hot
66 # regions are present above the 5% of the maximum level.
67 plt.figure()
68 ax = plt.subplot(projection=aiamap)
69 aiamap.plot()
70 plt.contour(labels)
71 plt.figtext(0.3, 0.2, f'Number of regions = {n}', color='white')
72 plt.show()
73
[end of examples/map/image_bright_regions_gallery_example.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/examples/map/image_bright_regions_gallery_example.py b/examples/map/image_bright_regions_gallery_example.py
--- a/examples/map/image_bright_regions_gallery_example.py
+++ b/examples/map/image_bright_regions_gallery_example.py
@@ -22,7 +22,7 @@
##############################################################################
# First we make a mask, which tells us which regions are bright. We
-# choose the criterion that the data should be at least 5% of the maximum
+# choose the criterion that the data should be at least 10% of the maximum
# value. Pixels with intensity values greater than this are included in the
# mask, while all other pixels are excluded.
mask = aiamap.data < aiamap.max() * 0.10
@@ -63,7 +63,7 @@
##############################################################################
# Finally, we plot the smoothed bright image data, along with the estimate of
# the number of distinct regions. We can see that approximately 6 distinct hot
-# regions are present above the 5% of the maximum level.
+# regions are present above the 10% of the maximum level.
plt.figure()
ax = plt.subplot(projection=aiamap)
aiamap.plot()
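
Only the prose percentage is being corrected; the code already thresholds at 10% of the maximum. A tiny illustration of what that mask selects, using a made-up array in place of the AIA data:

```python
import numpy as np

data = np.array([0.0, 5.0, 40.0, 100.0])  # hypothetical pixel values
mask = data < data.max() * 0.10           # True where pixels are dim

bright_only = data * ~mask                # the example keeps bright pixels via ~mask
print(mask)         # [ True  True False False]
print(bright_only)  # [  0.   0.  40. 100.]
```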
| {"golden_diff": "diff --git a/examples/map/image_bright_regions_gallery_example.py b/examples/map/image_bright_regions_gallery_example.py\n--- a/examples/map/image_bright_regions_gallery_example.py\n+++ b/examples/map/image_bright_regions_gallery_example.py\n@@ -22,7 +22,7 @@\n \n ##############################################################################\n # First we make a mask, which tells us which regions are bright. We\n-# choose the criterion that the data should be at least 5% of the maximum\n+# choose the criterion that the data should be at least 10% of the maximum\n # value. Pixels with intensity values greater than this are included in the\n # mask, while all other pixels are excluded.\n mask = aiamap.data < aiamap.max() * 0.10\n@@ -63,7 +63,7 @@\n ##############################################################################\n # Finally, we plot the smoothed bright image data, along with the estimate of\n # the number of distinct regions. We can see that approximately 6 distinct hot\n-# regions are present above the 5% of the maximum level.\n+# regions are present above the 10% of the maximum level.\n plt.figure()\n ax = plt.subplot(projection=aiamap)\n aiamap.plot()\n", "issue": "Error in documentation for \"Finding bright regions with ndimage\" example.\n<!-- This comments are hidden when you submit the issue so you do not need to remove them!\r\nPlease be sure to check out our contributing guidelines: https://github.com/sunpy/sunpy/blob/master/CONTRIBUTING.rst\r\nPlease be sure to check out our code of conduct:\r\nhttps://github.com/sunpy/sunpy/blob/master/CODE_OF_CONDUCT.rst -->\r\n\r\n<!-- Please have a search on our GitHub repository to see if a similar issue has already been posted.\r\nIf a similar issue is closed, have a quick look to see if you are satisfied by the resolution.\r\nIf not please go ahead and open an issue! -->\r\n\r\n### Description\r\n<!-- Provide a general description of the bug. -->\r\nThere seems to be an error in the documentation for the \"Finding bright regions with ndimage\" example.\r\n\r\nIn the part where a mask is made, the surrounding text states: \" We choose the criterion that the data should be at least 5% of the maximum value. \" However, if you look at the code immediately below, the threshold is based off 10% the max value:\r\n`mask = aiamap.data < aiamap.max() * 0.10`\r\n\r\n### Expected behavior\r\n<!-- What did you expect to happen. -->\r\n\r\nDocumentation needs to be modified to reflect that the threshold is based off a 10% threshold. \n", "before_files": [{"content": "# coding: utf-8\n\"\"\"\n===================================\nFinding bright regions with ndimage\n===================================\n\nHow you can to find the brightest regions in an AIA image and\ncount the approximate number of regions of interest using ndimage.\n\"\"\"\n# sphinx_gallery_thumbnail_number = 2\n\nfrom scipy import ndimage\nimport matplotlib.pyplot as plt\n\nimport sunpy.map\nfrom sunpy.data.sample import AIA_193_IMAGE\n\n###############################################################################\n# We start with the sample data\naiamap_mask = sunpy.map.Map(AIA_193_IMAGE)\naiamap = sunpy.map.Map(AIA_193_IMAGE)\n\n##############################################################################\n# First we make a mask, which tells us which regions are bright. We\n# choose the criterion that the data should be at least 5% of the maximum\n# value. 
Pixels with intensity values greater than this are included in the\n# mask, while all other pixels are excluded.\nmask = aiamap.data < aiamap.max() * 0.10\n\n##############################################################################\n# Mask is a `boolean` array. It can be used to modify the original map object\n# without modifying the data. Once this mask attribute is set, we can plot the\n# image again.\naiamap_mask.mask = mask\nplt.figure()\naiamap.plot()\nplt.colorbar()\nplt.show()\n\n##############################################################################\n# Only the brightest pixels remain in the image.\n# However, these areas are artificially broken up into small regions.\n# We can solve this by applying some smoothing to the image data.\n# Here we apply a 2D Gaussian smoothing function to the data.\ndata2 = ndimage.gaussian_filter(aiamap.data * ~mask, 14)\n\n##############################################################################\n# The issue with the filtering is that it create pixels where the values are\n# small (<100), so when we go on later to label this array,\n# we get one large region which encompasses the entire array.\n# If you want to see, just remove this line.\ndata2[data2 < 100] = 0\n\n##############################################################################\n# Now we will make a second SunPy map with this smoothed data.\naiamap2 = sunpy.map.Map(data2, aiamap.meta)\n\n##############################################################################\n# The function `label` from the `scipy.ndimage` module, counts the number of\n# contiguous regions in an image.\nlabels, n = ndimage.label(aiamap2.data)\n\n##############################################################################\n# Finally, we plot the smoothed bright image data, along with the estimate of\n# the number of distinct regions. We can see that approximately 6 distinct hot\n# regions are present above the 5% of the maximum level.\nplt.figure()\nax = plt.subplot(projection=aiamap)\naiamap.plot()\nplt.contour(labels)\nplt.figtext(0.3, 0.2, f'Number of regions = {n}', color='white')\nplt.show()\n", "path": "examples/map/image_bright_regions_gallery_example.py"}]} | 1,591 | 259 |

problem_id: gh_patches_debug_38902 | source: rasdani/github-patches | task_type: git_diff | in_source_id: pypi__warehouse-3352

prompt:

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unnecessary purges on `User` model change
Currently, any time the `User` model changes, we purge all the cache keys for that user's projects.
This includes attribute changes that don't actually affect the project pages, like `last_login`, `password` etc.
We should filter out "purge-able" attribute changes and only issue purges when necessary. Said attributes include:
* `username`
* `name`
* `emails`
</issue>
<code>
[start of warehouse/packaging/__init__.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from celery.schedules import crontab
14
15 from warehouse.accounts.models import User
16 from warehouse.cache.origin import key_factory
17 from warehouse.packaging.interfaces import IFileStorage
18 from warehouse.packaging.models import Project, Release
19 from warehouse.packaging.tasks import compute_trending
20
21
22 def includeme(config):
23 # Register whatever file storage backend has been configured for storing
24 # our package files.
25 storage_class = config.maybe_dotted(
26 config.registry.settings["files.backend"],
27 )
28 config.register_service_factory(storage_class.create_service, IFileStorage)
29
30 # Register our origin cache keys
31 config.register_origin_cache_keys(
32 Project,
33 cache_keys=["project/{obj.normalized_name}"],
34 purge_keys=[
35 key_factory("project/{obj.normalized_name}"),
36 key_factory("user/{itr.username}", iterate_on='users'),
37 key_factory("all-projects"),
38 ],
39 )
40 config.register_origin_cache_keys(
41 Release,
42 cache_keys=["project/{obj.project.normalized_name}"],
43 purge_keys=[
44 key_factory("project/{obj.project.normalized_name}"),
45 key_factory("user/{itr.username}", iterate_on='project.users'),
46 key_factory("all-projects"),
47 ],
48 )
49 config.register_origin_cache_keys(
50 User,
51 cache_keys=["user/{obj.username}"],
52 purge_keys=[
53 key_factory("user/{obj.username}"),
54 key_factory("project/{itr.normalized_name}", iterate_on='projects')
55 ],
56 )
57
58 # Add a periodic task to compute trending once a day, assuming we have
59 # been configured to be able to access BigQuery.
60 if config.get_settings().get("warehouse.trending_table"):
61 config.add_periodic_task(crontab(minute=0, hour=3), compute_trending)
62
[end of warehouse/packaging/__init__.py]
[start of warehouse/cache/origin/__init__.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import collections
14 import functools
15 import operator
16 from itertools import chain
17
18 from warehouse import db
19 from warehouse.cache.origin.interfaces import IOriginCache
20
21
22 @db.listens_for(db.Session, "after_flush")
23 def store_purge_keys(config, session, flush_context):
24 cache_keys = config.registry["cache_keys"]
25
26 # We'll (ab)use the session.info dictionary to store a list of pending
27 # purges to the session.
28 purges = session.info.setdefault("warehouse.cache.origin.purges", set())
29
30 # Go through each new, changed, and deleted object and attempt to store
31 # a cache key that we'll want to purge when the session has been committed.
32 for obj in (session.new | session.dirty | session.deleted):
33 try:
34 key_maker = cache_keys[obj.__class__]
35 except KeyError:
36 continue
37
38 purges.update(key_maker(obj).purge)
39
40
41 @db.listens_for(db.Session, "after_commit")
42 def execute_purge(config, session):
43 purges = session.info.pop("warehouse.cache.origin.purges", set())
44
45 try:
46 cacher_factory = config.find_service_factory(IOriginCache)
47 except ValueError:
48 return
49
50 cacher = cacher_factory(None, config)
51 cacher.purge(purges)
52
53
54 def origin_cache(seconds, keys=None, stale_while_revalidate=None,
55 stale_if_error=None):
56 if keys is None:
57 keys = []
58
59 def inner(view):
60 @functools.wraps(view)
61 def wrapped(context, request):
62 cache_keys = request.registry["cache_keys"]
63
64 context_keys = []
65 if context.__class__ in cache_keys:
66 context_keys = cache_keys[context.__class__](context).cache
67
68 try:
69 cacher = request.find_service(IOriginCache)
70 except ValueError:
71 pass
72 else:
73 request.add_response_callback(
74 functools.partial(
75 cacher.cache,
76 sorted(context_keys + keys),
77 seconds=seconds,
78 stale_while_revalidate=stale_while_revalidate,
79 stale_if_error=stale_if_error,
80 )
81 )
82
83 return view(context, request)
84 return wrapped
85
86 return inner
87
88
89 CacheKeys = collections.namedtuple("CacheKeys", ["cache", "purge"])
90
91
92 def key_factory(keystring, iterate_on=None):
93
94 def generate_key(obj):
95 if iterate_on:
96 for itr in operator.attrgetter(iterate_on)(obj):
97 yield keystring.format(itr=itr, obj=obj)
98 else:
99 yield keystring.format(obj=obj)
100
101 return generate_key
102
103
104 def key_maker_factory(cache_keys, purge_keys):
105 if cache_keys is None:
106 cache_keys = []
107
108 if purge_keys is None:
109 purge_keys = []
110
111 def key_maker(obj):
112 return CacheKeys(
113 # Note: this does not support setting the `cache` argument via
114 # multiple `key_factories` as we do with `purge` because there is
115 # a limit to how many surrogate keys we can attach to a single HTTP
116 # response, and being able to use use `iterate_on` would allow this
117 # size to be unbounded.
118 # ref: https://github.com/pypa/warehouse/pull/3189
119 cache=[k.format(obj=obj) for k in cache_keys],
120 purge=chain.from_iterable(key(obj) for key in purge_keys),
121 )
122
123 return key_maker
124
125
126 def register_origin_cache_keys(config, klass, cache_keys=None,
127 purge_keys=None):
128 key_makers = config.registry.setdefault("cache_keys", {})
129 key_makers[klass] = key_maker_factory(
130 cache_keys=cache_keys,
131 purge_keys=purge_keys,
132 )
133
134
135 def includeme(config):
136 if "origin_cache.backend" in config.registry.settings:
137 cache_class = config.maybe_dotted(
138 config.registry.settings["origin_cache.backend"],
139 )
140 config.register_service_factory(
141 cache_class.create_service,
142 IOriginCache,
143 )
144
145 config.add_directive(
146 "register_origin_cache_keys",
147 register_origin_cache_keys,
148 )
149
[end of warehouse/cache/origin/__init__.py]
</code>
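
Since the issue is about which cache keys get purged, a small illustration of how `key_factory` from the module above expands into keys may help. The `FakeUser`/`FakeProject` stand-ins below are hypothetical and only mimic the attributes the format strings use.

```python
from warehouse.cache.origin import key_factory


class FakeUser:
    def __init__(self, username):
        self.username = username


class FakeProject:
    def __init__(self, normalized_name, users):
        self.normalized_name = normalized_name
        self.users = users


project = FakeProject("warehouse", [FakeUser("alice"), FakeUser("bob")])

simple = key_factory("project/{obj.normalized_name}")
per_user = key_factory("user/{itr.username}", iterate_on="users")

print(list(simple(project)))    # ['project/warehouse']
print(list(per_user(project)))  # ['user/alice', 'user/bob']
```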
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/warehouse/cache/origin/__init__.py b/warehouse/cache/origin/__init__.py
--- a/warehouse/cache/origin/__init__.py
+++ b/warehouse/cache/origin/__init__.py
@@ -15,6 +15,8 @@
import operator
from itertools import chain
+from sqlalchemy.orm.session import Session
+
from warehouse import db
from warehouse.cache.origin.interfaces import IOriginCache
@@ -132,6 +134,19 @@
)
+def receive_set(attribute, config, target):
+ cache_keys = config.registry["cache_keys"]
+ session = Session.object_session(target)
+ if session:
+ purges = session.info.setdefault(
+ "warehouse.cache.origin.purges",
+ set()
+ )
+ key_maker = cache_keys[attribute]
+ keys = key_maker(target).purge
+ purges.update(list(keys))
+
+
def includeme(config):
if "origin_cache.backend" in config.registry.settings:
cache_class = config.maybe_dotted(
diff --git a/warehouse/packaging/__init__.py b/warehouse/packaging/__init__.py
--- a/warehouse/packaging/__init__.py
+++ b/warehouse/packaging/__init__.py
@@ -11,14 +11,25 @@
# limitations under the License.
from celery.schedules import crontab
+from warehouse import db
-from warehouse.accounts.models import User
-from warehouse.cache.origin import key_factory
+from warehouse.accounts.models import User, Email
+from warehouse.cache.origin import key_factory, receive_set
from warehouse.packaging.interfaces import IFileStorage
from warehouse.packaging.models import Project, Release
from warehouse.packaging.tasks import compute_trending
[email protected]_for(User.name, 'set')
+def user_name_receive_set(config, target, value, oldvalue, initiator):
+ receive_set(User.name, config, target)
+
+
[email protected]_for(Email.primary, 'set')
+def email_primary_receive_set(config, target, value, oldvalue, initiator):
+ receive_set(Email.primary, config, target)
+
+
def includeme(config):
# Register whatever file storage backend has been configured for storing
# our package files.
@@ -49,11 +60,24 @@
config.register_origin_cache_keys(
User,
cache_keys=["user/{obj.username}"],
+ )
+ config.register_origin_cache_keys(
+ User.name,
purge_keys=[
key_factory("user/{obj.username}"),
key_factory("project/{itr.normalized_name}", iterate_on='projects')
],
)
+ config.register_origin_cache_keys(
+ Email.primary,
+ purge_keys=[
+ key_factory("user/{obj.user.username}"),
+ key_factory(
+ "project/{itr.normalized_name}",
+ iterate_on='user.projects',
+ )
+ ],
+ )
# Add a periodic task to compute trending once a day, assuming we have
# been configured to be able to access BigQuery.
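
The heart of the change is that purges now hang off attribute-level SQLAlchemy events rather than any flush of the model. A stripped-down sketch of that idea with a plain `event.listen` call, assuming `User` is the model from `warehouse.accounts.models`; the real wiring goes through Warehouse's `db.listens_for` decorator and the `receive_set` helper shown in the diff.

```python
# Sketch: an attribute-level "set" listener fires only when User.name changes,
# so updates to last_login or password no longer queue project purges.
from sqlalchemy import event
from sqlalchemy.orm.session import Session


def queue_purge_on_name_change(target, value, oldvalue, initiator):
    session = Session.object_session(target)
    if session is not None:
        purges = session.info.setdefault("warehouse.cache.origin.purges", set())
        purges.add(f"user/{target.username}")


event.listen(User.name, "set", queue_purge_on_name_change)
```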
| {"golden_diff": "diff --git a/warehouse/cache/origin/__init__.py b/warehouse/cache/origin/__init__.py\n--- a/warehouse/cache/origin/__init__.py\n+++ b/warehouse/cache/origin/__init__.py\n@@ -15,6 +15,8 @@\n import operator\n from itertools import chain\n \n+from sqlalchemy.orm.session import Session\n+\n from warehouse import db\n from warehouse.cache.origin.interfaces import IOriginCache\n \n@@ -132,6 +134,19 @@\n )\n \n \n+def receive_set(attribute, config, target):\n+ cache_keys = config.registry[\"cache_keys\"]\n+ session = Session.object_session(target)\n+ if session:\n+ purges = session.info.setdefault(\n+ \"warehouse.cache.origin.purges\",\n+ set()\n+ )\n+ key_maker = cache_keys[attribute]\n+ keys = key_maker(target).purge\n+ purges.update(list(keys))\n+\n+\n def includeme(config):\n if \"origin_cache.backend\" in config.registry.settings:\n cache_class = config.maybe_dotted(\ndiff --git a/warehouse/packaging/__init__.py b/warehouse/packaging/__init__.py\n--- a/warehouse/packaging/__init__.py\n+++ b/warehouse/packaging/__init__.py\n@@ -11,14 +11,25 @@\n # limitations under the License.\n \n from celery.schedules import crontab\n+from warehouse import db\n \n-from warehouse.accounts.models import User\n-from warehouse.cache.origin import key_factory\n+from warehouse.accounts.models import User, Email\n+from warehouse.cache.origin import key_factory, receive_set\n from warehouse.packaging.interfaces import IFileStorage\n from warehouse.packaging.models import Project, Release\n from warehouse.packaging.tasks import compute_trending\n \n \[email protected]_for(User.name, 'set')\n+def user_name_receive_set(config, target, value, oldvalue, initiator):\n+ receive_set(User.name, config, target)\n+\n+\[email protected]_for(Email.primary, 'set')\n+def email_primary_receive_set(config, target, value, oldvalue, initiator):\n+ receive_set(Email.primary, config, target)\n+\n+\n def includeme(config):\n # Register whatever file storage backend has been configured for storing\n # our package files.\n@@ -49,11 +60,24 @@\n config.register_origin_cache_keys(\n User,\n cache_keys=[\"user/{obj.username}\"],\n+ )\n+ config.register_origin_cache_keys(\n+ User.name,\n purge_keys=[\n key_factory(\"user/{obj.username}\"),\n key_factory(\"project/{itr.normalized_name}\", iterate_on='projects')\n ],\n )\n+ config.register_origin_cache_keys(\n+ Email.primary,\n+ purge_keys=[\n+ key_factory(\"user/{obj.user.username}\"),\n+ key_factory(\n+ \"project/{itr.normalized_name}\",\n+ iterate_on='user.projects',\n+ )\n+ ],\n+ )\n \n # Add a periodic task to compute trending once a day, assuming we have\n # been configured to be able to access BigQuery.\n", "issue": "Unnecessary purges on `User` model change\nCurrently right now any time the `User` model changes, we purge all the cache keys for that user's project.\r\n\r\nThis includes attribute changes that don't actually affect the project pages, like `last_login`, `password` etc.\r\n\r\nWe should filter out \"purge-able\" attribute changes and only issue purges when necessary. 
Said attributes include:\r\n* `username`\r\n* `name`\r\n* `emails`\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom celery.schedules import crontab\n\nfrom warehouse.accounts.models import User\nfrom warehouse.cache.origin import key_factory\nfrom warehouse.packaging.interfaces import IFileStorage\nfrom warehouse.packaging.models import Project, Release\nfrom warehouse.packaging.tasks import compute_trending\n\n\ndef includeme(config):\n # Register whatever file storage backend has been configured for storing\n # our package files.\n storage_class = config.maybe_dotted(\n config.registry.settings[\"files.backend\"],\n )\n config.register_service_factory(storage_class.create_service, IFileStorage)\n\n # Register our origin cache keys\n config.register_origin_cache_keys(\n Project,\n cache_keys=[\"project/{obj.normalized_name}\"],\n purge_keys=[\n key_factory(\"project/{obj.normalized_name}\"),\n key_factory(\"user/{itr.username}\", iterate_on='users'),\n key_factory(\"all-projects\"),\n ],\n )\n config.register_origin_cache_keys(\n Release,\n cache_keys=[\"project/{obj.project.normalized_name}\"],\n purge_keys=[\n key_factory(\"project/{obj.project.normalized_name}\"),\n key_factory(\"user/{itr.username}\", iterate_on='project.users'),\n key_factory(\"all-projects\"),\n ],\n )\n config.register_origin_cache_keys(\n User,\n cache_keys=[\"user/{obj.username}\"],\n purge_keys=[\n key_factory(\"user/{obj.username}\"),\n key_factory(\"project/{itr.normalized_name}\", iterate_on='projects')\n ],\n )\n\n # Add a periodic task to compute trending once a day, assuming we have\n # been configured to be able to access BigQuery.\n if config.get_settings().get(\"warehouse.trending_table\"):\n config.add_periodic_task(crontab(minute=0, hour=3), compute_trending)\n", "path": "warehouse/packaging/__init__.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport collections\nimport functools\nimport operator\nfrom itertools import chain\n\nfrom warehouse import db\nfrom warehouse.cache.origin.interfaces import IOriginCache\n\n\[email protected]_for(db.Session, \"after_flush\")\ndef store_purge_keys(config, session, flush_context):\n cache_keys = config.registry[\"cache_keys\"]\n\n # We'll (ab)use the session.info dictionary to store a list of pending\n # purges to the session.\n purges = session.info.setdefault(\"warehouse.cache.origin.purges\", set())\n\n # Go through each new, changed, and deleted object and attempt to store\n # a cache key that we'll 
want to purge when the session has been committed.\n for obj in (session.new | session.dirty | session.deleted):\n try:\n key_maker = cache_keys[obj.__class__]\n except KeyError:\n continue\n\n purges.update(key_maker(obj).purge)\n\n\[email protected]_for(db.Session, \"after_commit\")\ndef execute_purge(config, session):\n purges = session.info.pop(\"warehouse.cache.origin.purges\", set())\n\n try:\n cacher_factory = config.find_service_factory(IOriginCache)\n except ValueError:\n return\n\n cacher = cacher_factory(None, config)\n cacher.purge(purges)\n\n\ndef origin_cache(seconds, keys=None, stale_while_revalidate=None,\n stale_if_error=None):\n if keys is None:\n keys = []\n\n def inner(view):\n @functools.wraps(view)\n def wrapped(context, request):\n cache_keys = request.registry[\"cache_keys\"]\n\n context_keys = []\n if context.__class__ in cache_keys:\n context_keys = cache_keys[context.__class__](context).cache\n\n try:\n cacher = request.find_service(IOriginCache)\n except ValueError:\n pass\n else:\n request.add_response_callback(\n functools.partial(\n cacher.cache,\n sorted(context_keys + keys),\n seconds=seconds,\n stale_while_revalidate=stale_while_revalidate,\n stale_if_error=stale_if_error,\n )\n )\n\n return view(context, request)\n return wrapped\n\n return inner\n\n\nCacheKeys = collections.namedtuple(\"CacheKeys\", [\"cache\", \"purge\"])\n\n\ndef key_factory(keystring, iterate_on=None):\n\n def generate_key(obj):\n if iterate_on:\n for itr in operator.attrgetter(iterate_on)(obj):\n yield keystring.format(itr=itr, obj=obj)\n else:\n yield keystring.format(obj=obj)\n\n return generate_key\n\n\ndef key_maker_factory(cache_keys, purge_keys):\n if cache_keys is None:\n cache_keys = []\n\n if purge_keys is None:\n purge_keys = []\n\n def key_maker(obj):\n return CacheKeys(\n # Note: this does not support setting the `cache` argument via\n # multiple `key_factories` as we do with `purge` because there is\n # a limit to how many surrogate keys we can attach to a single HTTP\n # response, and being able to use use `iterate_on` would allow this\n # size to be unbounded.\n # ref: https://github.com/pypa/warehouse/pull/3189\n cache=[k.format(obj=obj) for k in cache_keys],\n purge=chain.from_iterable(key(obj) for key in purge_keys),\n )\n\n return key_maker\n\n\ndef register_origin_cache_keys(config, klass, cache_keys=None,\n purge_keys=None):\n key_makers = config.registry.setdefault(\"cache_keys\", {})\n key_makers[klass] = key_maker_factory(\n cache_keys=cache_keys,\n purge_keys=purge_keys,\n )\n\n\ndef includeme(config):\n if \"origin_cache.backend\" in config.registry.settings:\n cache_class = config.maybe_dotted(\n config.registry.settings[\"origin_cache.backend\"],\n )\n config.register_service_factory(\n cache_class.create_service,\n IOriginCache,\n )\n\n config.add_directive(\n \"register_origin_cache_keys\",\n register_origin_cache_keys,\n )\n", "path": "warehouse/cache/origin/__init__.py"}]} | 2,602 | 665 |
gh_patches_debug_7546 | rasdani/github-patches | git_diff | akvo__akvo-rsr-1594 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add content_owner to organisation REST API filters
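For illustration, the intended client-side usage would be a filtered list request along these lines (the endpoint path and the organisation id are assumptions for the sketch, not taken from the codebase):

```python
# Hypothetical request once 'content_owner' is accepted as a filter field;
# the base URL and the id value are placeholders for illustration only.
import requests

response = requests.get(
    "https://rsr.akvo.org/rest/v1/organisation/",
    params={"content_owner": 42, "format": "json"},
)
response.raise_for_status()
print(response.json())
```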
</issue>
<code>
[start of akvo/rest/views/organisation.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7 from django.conf import settings
8
9 from rest_framework.compat import etree, six
10 from rest_framework.exceptions import ParseError
11 from rest_framework.parsers import XMLParser, JSONParser
12
13 from akvo.rsr.models import Organisation, Country
14
15 from ..serializers import OrganisationSerializer
16 from ..viewsets import BaseRSRViewSet
17
18
19 class AkvoOrganisationParser(XMLParser):
20 def parse(self, stream, media_type=None, parser_context=None):
21 assert etree, 'XMLParser requires defusedxml to be installed'
22
23 parser_context = parser_context or {}
24 encoding = parser_context.get('encoding', settings.DEFAULT_CHARSET)
25 parser = etree.DefusedXMLParser(encoding=encoding)
26 try:
27 tree = etree.parse(stream, parser=parser, forbid_dtd=True)
28 except (etree.ParseError, ValueError) as exc:
29 raise ParseError('XML parse error - %s' % six.text_type(exc))
30 return self.organisation_data_from_etree(tree.getroot())
31
32 def organisation_data_from_etree(self, tree):
33 def find_text(tree, str):
34 element = tree.find(str)
35 if element is None:
36 return ''
37 return element.text.strip() if element.text else ""
38
39 def location_data(location_tree):
40 if location_tree is None:
41 return []
42 iso_code = find_text(location_tree, 'iso_code').lower()
43 country, created = Country.objects.get_or_create(**Country.fields_from_iso_code(iso_code))
44 country = country.id
45 latitude = find_text(location_tree, 'latitude') or 0
46 longitude = find_text(location_tree, 'longitude') or 0
47 primary = True
48 return [dict(latitude=latitude, longitude=longitude, country=country, primary=primary)]
49
50 #id = find_text(tree, 'org_id')
51 long_name = find_text(tree, 'name')
52 name = long_name[:25]
53 description = find_text(tree, 'description')
54 url = find_text(tree, 'url')
55 iati_type = find_text(tree, 'iati_organisation_type')
56 new_organisation_type = int(iati_type) if iati_type else 22
57 organisation_type = Organisation.org_type_from_iati_type(new_organisation_type)
58 locations = location_data(tree.find('location/object'))
59 return dict(
60 name=name, long_name=long_name, description=description, url=url,
61 organisation_type=organisation_type, new_organisation_type=new_organisation_type,
62 locations=locations
63 )
64
65
66 class OrganisationViewSet(BaseRSRViewSet):
67 """
68 API endpoint that allows organisations to be viewed or edited.
69 """
70 queryset = Organisation.objects.all()
71 serializer_class = OrganisationSerializer
72 parser_classes = (AkvoOrganisationParser, JSONParser,)
73 filter_fields = ('name', 'long_name', 'iati_org_id', )
74
75 def get_queryset(self):
76 """ Enable filtering of Organisations on iati_org_id or name
77 """
78 queryset = super(OrganisationViewSet, self).get_queryset()
79 pk = self.request.QUERY_PARAMS.get('id', None)
80 if pk is not None:
81 try:
82 queryset = queryset.filter(pk=pk)
83 except ValueError:
84 pass
85 iati_org_id = self.request.QUERY_PARAMS.get('iati_org_id', None)
86 if iati_org_id is not None:
87 queryset = queryset.filter(iati_org_id=iati_org_id)
88 name = self.request.QUERY_PARAMS.get('name', None)
89 if name is not None:
90 queryset = queryset.filter(name=name)
91 long_name = self.request.QUERY_PARAMS.get('long_name', None)
92 if long_name is not None:
93 queryset = queryset.filter(long_name=long_name)
94 return queryset
95
[end of akvo/rest/views/organisation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/rest/views/organisation.py b/akvo/rest/views/organisation.py
--- a/akvo/rest/views/organisation.py
+++ b/akvo/rest/views/organisation.py
@@ -70,7 +70,7 @@
queryset = Organisation.objects.all()
serializer_class = OrganisationSerializer
parser_classes = (AkvoOrganisationParser, JSONParser,)
- filter_fields = ('name', 'long_name', 'iati_org_id', )
+ filter_fields = ('name', 'long_name', 'iati_org_id', 'content_owner')
def get_queryset(self):
""" Enable filtering of Organisations on iati_org_id or name
| {"golden_diff": "diff --git a/akvo/rest/views/organisation.py b/akvo/rest/views/organisation.py\n--- a/akvo/rest/views/organisation.py\n+++ b/akvo/rest/views/organisation.py\n@@ -70,7 +70,7 @@\n queryset = Organisation.objects.all()\n serializer_class = OrganisationSerializer\n parser_classes = (AkvoOrganisationParser, JSONParser,)\n- filter_fields = ('name', 'long_name', 'iati_org_id', )\n+ filter_fields = ('name', 'long_name', 'iati_org_id', 'content_owner')\n \n def get_queryset(self):\n \"\"\" Enable filtering of Organisations on iati_org_id or name\n", "issue": "Add content_owner to organisation REST API filters\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom django.conf import settings\n\nfrom rest_framework.compat import etree, six\nfrom rest_framework.exceptions import ParseError\nfrom rest_framework.parsers import XMLParser, JSONParser\n\nfrom akvo.rsr.models import Organisation, Country\n\nfrom ..serializers import OrganisationSerializer\nfrom ..viewsets import BaseRSRViewSet\n\n\nclass AkvoOrganisationParser(XMLParser):\n def parse(self, stream, media_type=None, parser_context=None):\n assert etree, 'XMLParser requires defusedxml to be installed'\n\n parser_context = parser_context or {}\n encoding = parser_context.get('encoding', settings.DEFAULT_CHARSET)\n parser = etree.DefusedXMLParser(encoding=encoding)\n try:\n tree = etree.parse(stream, parser=parser, forbid_dtd=True)\n except (etree.ParseError, ValueError) as exc:\n raise ParseError('XML parse error - %s' % six.text_type(exc))\n return self.organisation_data_from_etree(tree.getroot())\n\n def organisation_data_from_etree(self, tree):\n def find_text(tree, str):\n element = tree.find(str)\n if element is None:\n return ''\n return element.text.strip() if element.text else \"\"\n\n def location_data(location_tree):\n if location_tree is None:\n return []\n iso_code = find_text(location_tree, 'iso_code').lower()\n country, created = Country.objects.get_or_create(**Country.fields_from_iso_code(iso_code))\n country = country.id\n latitude = find_text(location_tree, 'latitude') or 0\n longitude = find_text(location_tree, 'longitude') or 0\n primary = True\n return [dict(latitude=latitude, longitude=longitude, country=country, primary=primary)]\n\n #id = find_text(tree, 'org_id')\n long_name = find_text(tree, 'name')\n name = long_name[:25]\n description = find_text(tree, 'description')\n url = find_text(tree, 'url')\n iati_type = find_text(tree, 'iati_organisation_type')\n new_organisation_type = int(iati_type) if iati_type else 22\n organisation_type = Organisation.org_type_from_iati_type(new_organisation_type)\n locations = location_data(tree.find('location/object'))\n return dict(\n name=name, long_name=long_name, description=description, url=url,\n organisation_type=organisation_type, new_organisation_type=new_organisation_type,\n locations=locations\n )\n\n\nclass OrganisationViewSet(BaseRSRViewSet):\n \"\"\"\n API endpoint that allows organisations to be viewed or edited.\n \"\"\"\n queryset = Organisation.objects.all()\n serializer_class = OrganisationSerializer\n parser_classes = (AkvoOrganisationParser, JSONParser,)\n filter_fields = ('name', 'long_name', 'iati_org_id', )\n\n def get_queryset(self):\n \"\"\" Enable filtering of 
Organisations on iati_org_id or name\n \"\"\"\n queryset = super(OrganisationViewSet, self).get_queryset()\n pk = self.request.QUERY_PARAMS.get('id', None)\n if pk is not None:\n try:\n queryset = queryset.filter(pk=pk)\n except ValueError:\n pass\n iati_org_id = self.request.QUERY_PARAMS.get('iati_org_id', None)\n if iati_org_id is not None:\n queryset = queryset.filter(iati_org_id=iati_org_id)\n name = self.request.QUERY_PARAMS.get('name', None)\n if name is not None:\n queryset = queryset.filter(name=name)\n long_name = self.request.QUERY_PARAMS.get('long_name', None)\n if long_name is not None:\n queryset = queryset.filter(long_name=long_name)\n return queryset\n", "path": "akvo/rest/views/organisation.py"}]} | 1,584 | 147 |
gh_patches_debug_10083 | rasdani/github-patches | git_diff | Pyomo__pyomo-1806 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pyomo.common.tempfiles.TempfileManager raises FileNotFoundError
I've recently started using pyomo.common.tempfiles.TempfileManager to set the temporary directory in a package, as in https://pyomo.readthedocs.io/en/stable/working_models.html#changing-the-temporary-directory. I was previously using TempfileManager from PyUtilib for about 4 years.
My tests now fail. Here is some of the trace:
File "/home/user/anaconda38/lib/python3.8/site-packages/pyomo/opt/base/solvers.py", line 571, in solve
self._presolve(*args, **kwds)
File "/home/user/anaconda38/lib/python3.8/site-packages/pyomo/solvers/plugins/solvers/CPLEX.py", line 349, in _presolve
ILMLicensedSystemCallSolver._presolve(self, *args, **kwds)
File "/home/user/anaconda38/lib/python3.8/site-packages/pyomo/opt/solver/shellcmd.py", line 197, in _presolve
OptSolver._presolve(self, *args, **kwds)
File "/home/user/anaconda38/lib/python3.8/site-packages/pyomo/opt/base/solvers.py", line 668, in _presolve
self._convert_problem(args,
File "/home/user/anaconda38/lib/python3.8/site-packages/pyomo/opt/base/solvers.py", line 738, in _convert_problem
return convert_problem(args,
File "/home/user/anaconda38/lib/python3.8/site-packages/pyomo/opt/base/convert.py", line 105, in convert_problem
problem_files, symbol_map = converter.apply(*tmp, **tmpkw)
File "/home/user/anaconda38/lib/python3.8/site-packages/pyomo/solvers/plugins/converter/model.py", line 72, in apply
problem_filename = TempfileManager.\
File "/home/user/anaconda38/lib/python3.8/site-packages/pyomo/common/tempfiles.py", line 67, in create_tempfile
ans = tempfile.mkstemp(suffix=suffix, prefix=prefix, text=text, dir=dir)
File "/home/user/anaconda38/lib/python3.8/tempfile.py", line 332, in mkstemp
return _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "/home/user/anaconda38/lib/python3.8/tempfile.py", line 250, in _mkstemp_inner
fd = _os.open(file, flags, 0o600)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpc6s9g6hf/tmpjoxugx27.pyomo.lp'
If I run the tests individually on the commandline, then the FileNotFoundError does not occur.
If I run all the tests from a script, then the FileNotFoundError does occur.
If I run all the tests from the same script, but change the order of the tests, then the FileNotFoundError still occurs but during a different test.
Note that in all tests, I'm not actually setting TempfileManager.tempdir. It appears in a method, but this method is not called during these tests. So just the import \"from pyomo.common.tempfiles import TempfileManager\" is being run.\r\n\r\nNow if I change my code so that \"TempfileManager.tempdir = None\" is always called for each test, then the FileNotFoundError no longer occurs.\r\n\r\nCan you help?\r\n\r\nI'm using Python 3.8.5 from Anaconda, Pyomo 5.7.2 and Ubuntu 18.04 LTS.\r\n\r\nThanks,\r\nJason\n", "before_files": [{"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and\n# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain\n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\n#\n# This module was originally developed as part of the PyUtilib project\n# Copyright (c) 2008 Sandia Corporation.\n# This software is distributed under the BSD License.\n# Under the terms of Contract DE-AC04-94AL85000 with Sandia Corporation,\n# the U.S. Government retains certain rights in this software.\n# ___________________________________________________________________________\n\nimport os\nimport time\nimport tempfile\nimport logging\nimport shutil\nfrom pyomo.common.deprecation import deprecation_warning\ntry:\n    from pyutilib.component.config.tempfiles import (\n        TempfileManager as pyutilib_mngr\n    )\nexcept ImportError:\n    pyutilib_mngr = None\n\ndeletion_errors_are_fatal = True\n\n\nclass TempfileManagerClass:\n    \"\"\"A class that manages temporary files.\"\"\"\n\n    tempdir = None\n\n    def __init__(self, **kwds):\n        self._tempfiles = [[]]\n        self._ctr = -1\n\n    def create_tempfile(self, suffix=None, prefix=None, text=False, dir=None):\n        \"\"\"Create a unique temporary file\n\n        Returns the absolute path of a temporary filename that is\n        guaranteed to be unique.  This function generates the file and\n        returns the filename.\n\n        \"\"\"\n        if suffix is None:\n            suffix = ''\n        if prefix is None:\n            prefix = 'tmp'\n        if dir is None:\n            dir = self.tempdir\n            if dir is None and pyutilib_mngr is not None:\n                dir = pyutilib_mngr.tempdir\n                if dir is not None:\n                    deprecation_warning(\n                        \"The use of the PyUtilib TempfileManager.tempdir \"\n                        \"to specify the default location for Pyomo \"\n                        \"temporary files has been deprecated.  \"\n                        \"Please set TempfileManager.tempdir in \"\n                        \"pyomo.common.tempfiles\", version='5.7.2')\n\n        ans = tempfile.mkstemp(suffix=suffix, prefix=prefix, text=text, dir=dir)\n        ans = list(ans)\n        if not os.path.isabs(ans[1]):  #pragma:nocover\n            fname = os.path.join(dir, ans[1])\n        else:\n            fname = ans[1]\n        os.close(ans[0])\n        if self._ctr >= 0:\n            new_fname = os.path.join(dir, prefix + str(self._ctr) + suffix)\n            # Delete any file having the sequential name and then\n            # rename\n            if os.path.exists(new_fname):\n                os.remove(new_fname)\n            shutil.move(fname, new_fname)\n            fname = new_fname\n            self._ctr += 1\n        self._tempfiles[-1].append(fname)\n        return fname\n\n    def create_tempdir(self, suffix=None, prefix=None, dir=None):\n        \"\"\"Create a unique temporary directory\n\n        Returns the absolute path of a temporary directory that is\n        guaranteed to be unique.  
Now if I change my code so that "TempfileManager.tempdir = None" is always called for each test, then the FileNotFoundError no longer occurs.
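For reference, the per-test workaround described above amounts to something like the following sketch (the test class and method names are illustrative, not from my real suite):

```python
# Sketch of the workaround: reset the shared tempdir before each test so a
# previously deleted temporary directory is never reused by a later test.
import unittest
from pyomo.common.tempfiles import TempfileManager

class ModelSolveTests(unittest.TestCase):
    def setUp(self):
        TempfileManager.tempdir = None  # avoid inheriting a stale directory

    def test_solve_builds_lp_file(self):
        ...  # model construction and solver call would go here
```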
Can you help?
I'm using Python 3.8.5 from Anaconda, Pyomo 5.7.2 and Ubuntu 18.04 LTS.
Thanks,
Jason
</issue>
<code>
[start of pyomo/common/tempfiles.py]
1 # ___________________________________________________________________________
2 #
3 # Pyomo: Python Optimization Modeling Objects
4 # Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC
5 # Under the terms of Contract DE-NA0003525 with National Technology and
6 # Engineering Solutions of Sandia, LLC, the U.S. Government retains certain
7 # rights in this software.
8 # This software is distributed under the 3-clause BSD License.
9 # ___________________________________________________________________________
10 #
11 # This module was originally developed as part of the PyUtilib project
12 # Copyright (c) 2008 Sandia Corporation.
13 # This software is distributed under the BSD License.
14 # Under the terms of Contract DE-AC04-94AL85000 with Sandia Corporation,
15 # the U.S. Government retains certain rights in this software.
16 # ___________________________________________________________________________
17
18 import os
19 import time
20 import tempfile
21 import logging
22 import shutil
23 from pyomo.common.deprecation import deprecation_warning
24 try:
25 from pyutilib.component.config.tempfiles import (
26 TempfileManager as pyutilib_mngr
27 )
28 except ImportError:
29 pyutilib_mngr = None
30
31 deletion_errors_are_fatal = True
32
33
34 class TempfileManagerClass:
35 """A class that manages temporary files."""
36
37 tempdir = None
38
39 def __init__(self, **kwds):
40 self._tempfiles = [[]]
41 self._ctr = -1
42
43 def create_tempfile(self, suffix=None, prefix=None, text=False, dir=None):
44 """Create a unique temporary file
45
46 Returns the absolute path of a temporary filename that is
47 guaranteed to be unique. This function generates the file and
48 returns the filename.
49
50 """
51 if suffix is None:
52 suffix = ''
53 if prefix is None:
54 prefix = 'tmp'
55 if dir is None:
56 dir = self.tempdir
57 if dir is None and pyutilib_mngr is not None:
58 dir = pyutilib_mngr.tempdir
59 if dir is not None:
60 deprecation_warning(
61 "The use of the PyUtilib TempfileManager.tempdir "
62 "to specify the default location for Pyomo "
63 "temporary files has been deprecated. "
64 "Please set TempfileManager.tempdir in "
65 "pyomo.common.tempfiles", version='5.7.2')
66
67 ans = tempfile.mkstemp(suffix=suffix, prefix=prefix, text=text, dir=dir)
68 ans = list(ans)
69 if not os.path.isabs(ans[1]): #pragma:nocover
70 fname = os.path.join(dir, ans[1])
71 else:
72 fname = ans[1]
73 os.close(ans[0])
74 if self._ctr >= 0:
75 new_fname = os.path.join(dir, prefix + str(self._ctr) + suffix)
76 # Delete any file having the sequential name and then
77 # rename
78 if os.path.exists(new_fname):
79 os.remove(new_fname)
80 shutil.move(fname, new_fname)
81 fname = new_fname
82 self._ctr += 1
83 self._tempfiles[-1].append(fname)
84 return fname
85
86 def create_tempdir(self, suffix=None, prefix=None, dir=None):
87 """Create a unique temporary directory
88
89 Returns the absolute path of a temporary directory that is
90 guaranteed to be unique. This function generates the directory
91 and returns the directory name.
92
93 """
94 if suffix is None:
95 suffix = ''
96 if prefix is None:
97 prefix = 'tmp'
98 if dir is None:
99 dir = self.tempdir
100 if dir is None and pyutilib_mngr is not None:
101 dir = pyutilib_mngr.tempdir
102 if dir is not None:
103 deprecation_warning(
104 "The use of the PyUtilib TempfileManager.tempdir "
105 "to specify the default location for Pyomo "
106 "temporary directories has been deprecated. "
107 "Please set TempfileManager.tempdir in "
108 "pyomo.common.tempfiles", version='5.7.2')
109
110 dirname = tempfile.mkdtemp(suffix=suffix, prefix=prefix, dir=dir)
111 if self._ctr >= 0:
112 new_dirname = os.path.join(dir, prefix + str(self._ctr) + suffix)
113 # Delete any directory having the sequential name and then
114 # rename
115 if os.path.exists(new_dirname):
116 shutil.rmtree(new_dirname)
117 shutil.move(dirname, new_dirname)
118 dirname = new_dirname
119 self._ctr += 1
120
121 self._tempfiles[-1].append(dirname)
122 return dirname
123
124 def add_tempfile(self, filename, exists=True):
125 """Declare this file to be temporary."""
126 tmp = os.path.abspath(filename)
127 if exists and not os.path.exists(tmp):
128 raise IOError("Temporary file does not exist: " + tmp)
129 self._tempfiles[-1].append(tmp)
130
131 def clear_tempfiles(self, remove=True):
132 """Delete all temporary files."""
133 while len(self._tempfiles) > 1:
134 self.pop(remove)
135 self.pop(remove)
136
137 def sequential_files(self, ctr=0):
138 """Start generating sequential files, using the specified counter"""
139 self._ctr = ctr
140
141 def unique_files(self):
142 """Stop generating sequential files, using the specified counter"""
143 self._ctr = -1
144
145 #
146 # Support "with" statements, where the pop automatically
147 # takes place on exit.
148 #
149 def push(self):
150 self._tempfiles.append([])
151 return self
152
153 def __enter__(self):
154 self.push()
155
156 def __exit__(self, type, value, traceback):
157 self.pop(remove=True)
158
159 def pop(self, remove=True):
160 files = self._tempfiles.pop()
161 if remove:
162 for filename in files:
163 if os.path.exists(filename):
164 if os.path.isdir(filename):
165 shutil.rmtree(
166 filename,
167 ignore_errors=not deletion_errors_are_fatal)
168 else:
169 try:
170 os.remove(filename)
171 except WindowsError:
172 # Sometimes Windows doesn't release the
173 # file lock immediately when the process
174 # terminates. If we get an error, wait a
175 # second and try again.
176 try:
177 time.sleep(1)
178 os.remove(filename)
179 except WindowsError:
180 if deletion_errors_are_fatal:
181 raise
182 else:
183 # Failure to delete a tempfile
184 # should NOT be fatal
185 logger = logging.getLogger(__name__)
186 logger.warning("Unable to delete temporary "
187 "file %s" % (filename,))
188
189 if len(self._tempfiles) == 0:
190 self._tempfiles = [[]]
191
192 TempfileManager = TempfileManagerClass()
193
[end of pyomo/common/tempfiles.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyomo/common/tempfiles.py b/pyomo/common/tempfiles.py
--- a/pyomo/common/tempfiles.py
+++ b/pyomo/common/tempfiles.py
@@ -34,9 +34,8 @@
class TempfileManagerClass:
"""A class that manages temporary files."""
- tempdir = None
-
def __init__(self, **kwds):
+ self.tempdir = None
self._tempfiles = [[]]
self._ctr = -1
@@ -189,4 +188,5 @@
if len(self._tempfiles) == 0:
self._tempfiles = [[]]
+
TempfileManager = TempfileManagerClass()
| {"golden_diff": "diff --git a/pyomo/common/tempfiles.py b/pyomo/common/tempfiles.py\n--- a/pyomo/common/tempfiles.py\n+++ b/pyomo/common/tempfiles.py\n@@ -34,9 +34,8 @@\n class TempfileManagerClass:\n \"\"\"A class that manages temporary files.\"\"\"\n \n- tempdir = None\n-\n def __init__(self, **kwds):\n+ self.tempdir = None\n self._tempfiles = [[]]\n self._ctr = -1\n \n@@ -189,4 +188,5 @@\n if len(self._tempfiles) == 0:\n self._tempfiles = [[]]\n \n+\n TempfileManager = TempfileManagerClass()\n", "issue": "pyomo.common.tempfiles.TempfileManager raises FileNotFoundError\nI've recently started using pyomo.common.tempfiles.TempfileManager to set the temporary directory in a package, as in https://pyomo.readthedocs.io/en/stable/working_models.html#changing-the-temporary-directory. I was previously using TempfileManager from PyUtilib for about 4 years. \r\n\r\nMy tests now fail. Here is some of the trace:\r\n\r\n File \"/home/user/anaconda38/lib/python3.8/site-packages/pyomo/opt/base/solvers.py\", line 571, in solve\r\n self._presolve(*args, **kwds)\r\n File \"/home/user/anaconda38/lib/python3.8/site-packages/pyomo/solvers/plugins/solvers/CPLEX.py\", line 349, in _presolve\r\n ILMLicensedSystemCallSolver._presolve(self, *args, **kwds)\r\n File \"/home/user/anaconda38/lib/python3.8/site-packages/pyomo/opt/solver/shellcmd.py\", line 197, in _presolve\r\n OptSolver._presolve(self, *args, **kwds)\r\n File \"/home/user/anaconda38/lib/python3.8/site-packages/pyomo/opt/base/solvers.py\", line 668, in _presolve\r\n self._convert_problem(args,\r\n File \"/home/user/anaconda38/lib/python3.8/site-packages/pyomo/opt/base/solvers.py\", line 738, in _convert_problem\r\n return convert_problem(args,\r\n File \"/home/user/anaconda38/lib/python3.8/site-packages/pyomo/opt/base/convert.py\", line 105, in convert_problem\r\n problem_files, symbol_map = converter.apply(*tmp, **tmpkw)\r\n File \"/home/user/anaconda38/lib/python3.8/site-packages/pyomo/solvers/plugins/converter/model.py\", line 72, in apply\r\n problem_filename = TempfileManager.\\\r\n File \"/home/user/anaconda38/lib/python3.8/site-packages/pyomo/common/tempfiles.py\", line 67, in create_tempfile\r\n ans = tempfile.mkstemp(suffix=suffix, prefix=prefix, text=text, dir=dir)\r\n File \"/home/user/anaconda38/lib/python3.8/tempfile.py\", line 332, in mkstemp\r\n return _mkstemp_inner(dir, prefix, suffix, flags, output_type)\r\n File \"/home/user/anaconda38/lib/python3.8/tempfile.py\", line 250, in _mkstemp_inner\r\n fd = _os.open(file, flags, 0o600)\r\nFileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpc6s9g6hf/tmpjoxugx27.pyomo.lp'\r\n\r\nIf I run the tests individually on the commandline, then the FileNotFoundError does not occur.\r\n\r\nIf I run all the tests from a script, then the FileNotFoundError does occur.\r\n\r\nIf I run all the tests from the same script, but change the order of the tests, then the FileNotFoundError still occurs but during a different test.\r\n\r\nNote that in all tests, I'm not acutally setting TempfileManager.tempdir. It appears in a method, but this method is not called during these tests. 
So just the import \"from pyomo.common.tempfiles import TempfileManager\" is being run.\r\n\r\nNow if I change my code so that \"TempfileManager.tempdir = None\" is always called for each test, then the FileNotFoundError no longer occurs.\r\n\r\nCan you help?\r\n\r\nI'm using Python 3.8.5 from Anaconda, Pyomo 5.7.2 and Ubuntu 18.04 LTS.\r\n\r\nThanks,\r\nJason\n", "before_files": [{"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and\n# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain\n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\n#\n# This module was originally developed as part of the PyUtilib project\n# Copyright (c) 2008 Sandia Corporation.\n# This software is distributed under the BSD License.\n# Under the terms of Contract DE-AC04-94AL85000 with Sandia Corporation,\n# the U.S. Government retains certain rights in this software.\n# ___________________________________________________________________________\n\nimport os\nimport time\nimport tempfile\nimport logging\nimport shutil\nfrom pyomo.common.deprecation import deprecation_warning\ntry:\n from pyutilib.component.config.tempfiles import (\n TempfileManager as pyutilib_mngr\n )\nexcept ImportError:\n pyutilib_mngr = None\n\ndeletion_errors_are_fatal = True\n\n\nclass TempfileManagerClass:\n \"\"\"A class that manages temporary files.\"\"\"\n\n tempdir = None\n\n def __init__(self, **kwds):\n self._tempfiles = [[]]\n self._ctr = -1\n\n def create_tempfile(self, suffix=None, prefix=None, text=False, dir=None):\n \"\"\"Create a unique temporary file\n\n Returns the absolute path of a temporary filename that is\n guaranteed to be unique. This function generates the file and\n returns the filename.\n\n \"\"\"\n if suffix is None:\n suffix = ''\n if prefix is None:\n prefix = 'tmp'\n if dir is None:\n dir = self.tempdir\n if dir is None and pyutilib_mngr is not None:\n dir = pyutilib_mngr.tempdir\n if dir is not None:\n deprecation_warning(\n \"The use of the PyUtilib TempfileManager.tempdir \"\n \"to specify the default location for Pyomo \"\n \"temporary files has been deprecated. \"\n \"Please set TempfileManager.tempdir in \"\n \"pyomo.common.tempfiles\", version='5.7.2')\n\n ans = tempfile.mkstemp(suffix=suffix, prefix=prefix, text=text, dir=dir)\n ans = list(ans)\n if not os.path.isabs(ans[1]): #pragma:nocover\n fname = os.path.join(dir, ans[1])\n else:\n fname = ans[1]\n os.close(ans[0])\n if self._ctr >= 0:\n new_fname = os.path.join(dir, prefix + str(self._ctr) + suffix)\n # Delete any file having the sequential name and then\n # rename\n if os.path.exists(new_fname):\n os.remove(new_fname)\n shutil.move(fname, new_fname)\n fname = new_fname\n self._ctr += 1\n self._tempfiles[-1].append(fname)\n return fname\n\n def create_tempdir(self, suffix=None, prefix=None, dir=None):\n \"\"\"Create a unique temporary directory\n\n Returns the absolute path of a temporary directory that is\n guaranteed to be unique. 
This function generates the directory\n and returns the directory name.\n\n \"\"\"\n if suffix is None:\n suffix = ''\n if prefix is None:\n prefix = 'tmp'\n if dir is None:\n dir = self.tempdir\n if dir is None and pyutilib_mngr is not None:\n dir = pyutilib_mngr.tempdir\n if dir is not None:\n deprecation_warning(\n \"The use of the PyUtilib TempfileManager.tempdir \"\n \"to specify the default location for Pyomo \"\n \"temporary directories has been deprecated. \"\n \"Please set TempfileManager.tempdir in \"\n \"pyomo.common.tempfiles\", version='5.7.2')\n\n dirname = tempfile.mkdtemp(suffix=suffix, prefix=prefix, dir=dir)\n if self._ctr >= 0:\n new_dirname = os.path.join(dir, prefix + str(self._ctr) + suffix)\n # Delete any directory having the sequential name and then\n # rename\n if os.path.exists(new_dirname):\n shutil.rmtree(new_dirname)\n shutil.move(dirname, new_dirname)\n dirname = new_dirname\n self._ctr += 1\n\n self._tempfiles[-1].append(dirname)\n return dirname\n\n def add_tempfile(self, filename, exists=True):\n \"\"\"Declare this file to be temporary.\"\"\"\n tmp = os.path.abspath(filename)\n if exists and not os.path.exists(tmp):\n raise IOError(\"Temporary file does not exist: \" + tmp)\n self._tempfiles[-1].append(tmp)\n\n def clear_tempfiles(self, remove=True):\n \"\"\"Delete all temporary files.\"\"\"\n while len(self._tempfiles) > 1:\n self.pop(remove)\n self.pop(remove)\n\n def sequential_files(self, ctr=0):\n \"\"\"Start generating sequential files, using the specified counter\"\"\"\n self._ctr = ctr\n\n def unique_files(self):\n \"\"\"Stop generating sequential files, using the specified counter\"\"\"\n self._ctr = -1\n\n #\n # Support \"with\" statements, where the pop automatically\n # takes place on exit.\n #\n def push(self):\n self._tempfiles.append([])\n return self\n\n def __enter__(self):\n self.push()\n\n def __exit__(self, type, value, traceback):\n self.pop(remove=True)\n\n def pop(self, remove=True):\n files = self._tempfiles.pop()\n if remove:\n for filename in files:\n if os.path.exists(filename):\n if os.path.isdir(filename):\n shutil.rmtree(\n filename,\n ignore_errors=not deletion_errors_are_fatal)\n else:\n try:\n os.remove(filename)\n except WindowsError:\n # Sometimes Windows doesn't release the\n # file lock immediately when the process\n # terminates. If we get an error, wait a\n # second and try again.\n try:\n time.sleep(1)\n os.remove(filename)\n except WindowsError:\n if deletion_errors_are_fatal:\n raise\n else:\n # Failure to delete a tempfile\n # should NOT be fatal\n logger = logging.getLogger(__name__)\n logger.warning(\"Unable to delete temporary \"\n \"file %s\" % (filename,))\n\n if len(self._tempfiles) == 0:\n self._tempfiles = [[]]\n\nTempfileManager = TempfileManagerClass()\n", "path": "pyomo/common/tempfiles.py"}]} | 3,273 | 152 |
gh_patches_debug_19298 | rasdani/github-patches | git_diff | pyca__cryptography-6865 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Can load PKCS12 with ED25519 Keys but cannot Serialize them
Why does pkcs12.serialize_key_and_certificates() still reject Ed25519/Ed448 private keys? cryptography has no problem loading PKCS12 files that contain Ed25519 private keys and the related certificates.
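A minimal sketch of the asymmetry, assuming an existing PKCS12 bundle on disk (the file name and password are placeholders):

```python
# Loading a PKCS#12 bundle that contains an Ed25519 key works, but
# re-serializing the same objects is rejected by the type check.
from cryptography.hazmat.primitives.serialization import NoEncryption, pkcs12

with open("bundle_with_ed25519.p12", "rb") as f:
    data = f.read()

key, cert, extra_certs = pkcs12.load_key_and_certificates(data, b"password")  # succeeds

# This raises:
#   TypeError: Key must be RSA, DSA, or EllipticCurve private key or None.
pkcs12.serialize_key_and_certificates(
    b"friendly-name", key, cert, extra_certs, NoEncryption()
)
```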
</issue>
<code>
[start of src/cryptography/hazmat/primitives/serialization/pkcs12.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 import typing
6
7 from cryptography import x509
8 from cryptography.hazmat.primitives import serialization
9 from cryptography.hazmat.primitives.asymmetric import (
10 dsa,
11 ec,
12 ed25519,
13 ed448,
14 rsa,
15 )
16 from cryptography.hazmat.primitives.asymmetric.types import (
17 PRIVATE_KEY_TYPES,
18 )
19
20
21 _ALLOWED_PKCS12_TYPES = typing.Union[
22 rsa.RSAPrivateKey,
23 dsa.DSAPrivateKey,
24 ec.EllipticCurvePrivateKey,
25 ]
26
27
28 class PKCS12Certificate:
29 def __init__(
30 self,
31 cert: x509.Certificate,
32 friendly_name: typing.Optional[bytes],
33 ):
34 if not isinstance(cert, x509.Certificate):
35 raise TypeError("Expecting x509.Certificate object")
36 if friendly_name is not None and not isinstance(friendly_name, bytes):
37 raise TypeError("friendly_name must be bytes or None")
38 self._cert = cert
39 self._friendly_name = friendly_name
40
41 @property
42 def friendly_name(self) -> typing.Optional[bytes]:
43 return self._friendly_name
44
45 @property
46 def certificate(self) -> x509.Certificate:
47 return self._cert
48
49 def __eq__(self, other: object) -> bool:
50 if not isinstance(other, PKCS12Certificate):
51 return NotImplemented
52
53 return (
54 self.certificate == other.certificate
55 and self.friendly_name == other.friendly_name
56 )
57
58 def __ne__(self, other: object) -> bool:
59 return not self == other
60
61 def __hash__(self) -> int:
62 return hash((self.certificate, self.friendly_name))
63
64 def __repr__(self) -> str:
65 return "<PKCS12Certificate({}, friendly_name={!r})>".format(
66 self.certificate, self.friendly_name
67 )
68
69
70 class PKCS12KeyAndCertificates:
71 def __init__(
72 self,
73 key: typing.Optional[PRIVATE_KEY_TYPES],
74 cert: typing.Optional[PKCS12Certificate],
75 additional_certs: typing.List[PKCS12Certificate],
76 ):
77 if key is not None and not isinstance(
78 key,
79 (
80 rsa.RSAPrivateKey,
81 dsa.DSAPrivateKey,
82 ec.EllipticCurvePrivateKey,
83 ed25519.Ed25519PrivateKey,
84 ed448.Ed448PrivateKey,
85 ),
86 ):
87 raise TypeError(
88 "Key must be RSA, DSA, EllipticCurve, ED25519, or ED448"
89 " private key, or None."
90 )
91 if cert is not None and not isinstance(cert, PKCS12Certificate):
92 raise TypeError("cert must be a PKCS12Certificate object or None")
93 if not all(
94 isinstance(add_cert, PKCS12Certificate)
95 for add_cert in additional_certs
96 ):
97 raise TypeError(
98 "all values in additional_certs must be PKCS12Certificate"
99 " objects"
100 )
101 self._key = key
102 self._cert = cert
103 self._additional_certs = additional_certs
104
105 @property
106 def key(self) -> typing.Optional[PRIVATE_KEY_TYPES]:
107 return self._key
108
109 @property
110 def cert(self) -> typing.Optional[PKCS12Certificate]:
111 return self._cert
112
113 @property
114 def additional_certs(self) -> typing.List[PKCS12Certificate]:
115 return self._additional_certs
116
117 def __eq__(self, other: object) -> bool:
118 if not isinstance(other, PKCS12KeyAndCertificates):
119 return NotImplemented
120
121 return (
122 self.key == other.key
123 and self.cert == other.cert
124 and self.additional_certs == other.additional_certs
125 )
126
127 def __ne__(self, other: object) -> bool:
128 return not self == other
129
130 def __hash__(self) -> int:
131 return hash((self.key, self.cert, tuple(self.additional_certs)))
132
133 def __repr__(self) -> str:
134 fmt = (
135 "<PKCS12KeyAndCertificates(key={}, cert={}, additional_certs={})>"
136 )
137 return fmt.format(self.key, self.cert, self.additional_certs)
138
139
140 def load_key_and_certificates(
141 data: bytes,
142 password: typing.Optional[bytes],
143 backend: typing.Any = None,
144 ) -> typing.Tuple[
145 typing.Optional[PRIVATE_KEY_TYPES],
146 typing.Optional[x509.Certificate],
147 typing.List[x509.Certificate],
148 ]:
149 from cryptography.hazmat.backends.openssl.backend import backend as ossl
150
151 return ossl.load_key_and_certificates_from_pkcs12(data, password)
152
153
154 def load_pkcs12(
155 data: bytes,
156 password: typing.Optional[bytes],
157 backend: typing.Any = None,
158 ) -> PKCS12KeyAndCertificates:
159 from cryptography.hazmat.backends.openssl.backend import backend as ossl
160
161 return ossl.load_pkcs12(data, password)
162
163
164 def serialize_key_and_certificates(
165 name: typing.Optional[bytes],
166 key: typing.Optional[_ALLOWED_PKCS12_TYPES],
167 cert: typing.Optional[x509.Certificate],
168 cas: typing.Optional[typing.Iterable[x509.Certificate]],
169 encryption_algorithm: serialization.KeySerializationEncryption,
170 ) -> bytes:
171 if key is not None and not isinstance(
172 key,
173 (
174 rsa.RSAPrivateKey,
175 dsa.DSAPrivateKey,
176 ec.EllipticCurvePrivateKey,
177 ),
178 ):
179 raise TypeError(
180 "Key must be RSA, DSA, or EllipticCurve private key or None."
181 )
182 if cert is not None and not isinstance(cert, x509.Certificate):
183 raise TypeError("cert must be a certificate or None")
184
185 if cas is not None:
186 cas = list(cas)
187 if not all(isinstance(val, x509.Certificate) for val in cas):
188 raise TypeError("all values in cas must be certificates")
189
190 if not isinstance(
191 encryption_algorithm, serialization.KeySerializationEncryption
192 ):
193 raise TypeError(
194 "Key encryption algorithm must be a "
195 "KeySerializationEncryption instance"
196 )
197
198 if key is None and cert is None and not cas:
199 raise ValueError("You must supply at least one of key, cert, or cas")
200
201 from cryptography.hazmat.backends.openssl.backend import backend
202
203 return backend.serialize_key_and_certificates_to_pkcs12(
204 name, key, cert, cas, encryption_algorithm
205 )
206
[end of src/cryptography/hazmat/primitives/serialization/pkcs12.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cryptography/hazmat/primitives/serialization/pkcs12.py b/src/cryptography/hazmat/primitives/serialization/pkcs12.py
--- a/src/cryptography/hazmat/primitives/serialization/pkcs12.py
+++ b/src/cryptography/hazmat/primitives/serialization/pkcs12.py
@@ -22,6 +22,8 @@
rsa.RSAPrivateKey,
dsa.DSAPrivateKey,
ec.EllipticCurvePrivateKey,
+ ed25519.Ed25519PrivateKey,
+ ed448.Ed448PrivateKey,
]
@@ -174,10 +176,13 @@
rsa.RSAPrivateKey,
dsa.DSAPrivateKey,
ec.EllipticCurvePrivateKey,
+ ed25519.Ed25519PrivateKey,
+ ed448.Ed448PrivateKey,
),
):
raise TypeError(
- "Key must be RSA, DSA, or EllipticCurve private key or None."
+ "Key must be RSA, DSA, EllipticCurve, ED25519, or ED448"
+ " private key, or None."
)
if cert is not None and not isinstance(cert, x509.Certificate):
raise TypeError("cert must be a certificate or None")
| {"golden_diff": "diff --git a/src/cryptography/hazmat/primitives/serialization/pkcs12.py b/src/cryptography/hazmat/primitives/serialization/pkcs12.py\n--- a/src/cryptography/hazmat/primitives/serialization/pkcs12.py\n+++ b/src/cryptography/hazmat/primitives/serialization/pkcs12.py\n@@ -22,6 +22,8 @@\n rsa.RSAPrivateKey,\n dsa.DSAPrivateKey,\n ec.EllipticCurvePrivateKey,\n+ ed25519.Ed25519PrivateKey,\n+ ed448.Ed448PrivateKey,\n ]\n \n \n@@ -174,10 +176,13 @@\n rsa.RSAPrivateKey,\n dsa.DSAPrivateKey,\n ec.EllipticCurvePrivateKey,\n+ ed25519.Ed25519PrivateKey,\n+ ed448.Ed448PrivateKey,\n ),\n ):\n raise TypeError(\n- \"Key must be RSA, DSA, or EllipticCurve private key or None.\"\n+ \"Key must be RSA, DSA, EllipticCurve, ED25519, or ED448\"\n+ \" private key, or None.\"\n )\n if cert is not None and not isinstance(cert, x509.Certificate):\n raise TypeError(\"cert must be a certificate or None\")\n", "issue": "Can load PKCS12 with ED25519 Keys but cannot Serialize them\nWhy does the pkcs12.serialize_key_and_certificates() still sanitize against ed private keys? cryptography has no problem loading pkcs12 files which contain ed25519 private keys and related certificates.\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nimport typing\n\nfrom cryptography import x509\nfrom cryptography.hazmat.primitives import serialization\nfrom cryptography.hazmat.primitives.asymmetric import (\n dsa,\n ec,\n ed25519,\n ed448,\n rsa,\n)\nfrom cryptography.hazmat.primitives.asymmetric.types import (\n PRIVATE_KEY_TYPES,\n)\n\n\n_ALLOWED_PKCS12_TYPES = typing.Union[\n rsa.RSAPrivateKey,\n dsa.DSAPrivateKey,\n ec.EllipticCurvePrivateKey,\n]\n\n\nclass PKCS12Certificate:\n def __init__(\n self,\n cert: x509.Certificate,\n friendly_name: typing.Optional[bytes],\n ):\n if not isinstance(cert, x509.Certificate):\n raise TypeError(\"Expecting x509.Certificate object\")\n if friendly_name is not None and not isinstance(friendly_name, bytes):\n raise TypeError(\"friendly_name must be bytes or None\")\n self._cert = cert\n self._friendly_name = friendly_name\n\n @property\n def friendly_name(self) -> typing.Optional[bytes]:\n return self._friendly_name\n\n @property\n def certificate(self) -> x509.Certificate:\n return self._cert\n\n def __eq__(self, other: object) -> bool:\n if not isinstance(other, PKCS12Certificate):\n return NotImplemented\n\n return (\n self.certificate == other.certificate\n and self.friendly_name == other.friendly_name\n )\n\n def __ne__(self, other: object) -> bool:\n return not self == other\n\n def __hash__(self) -> int:\n return hash((self.certificate, self.friendly_name))\n\n def __repr__(self) -> str:\n return \"<PKCS12Certificate({}, friendly_name={!r})>\".format(\n self.certificate, self.friendly_name\n )\n\n\nclass PKCS12KeyAndCertificates:\n def __init__(\n self,\n key: typing.Optional[PRIVATE_KEY_TYPES],\n cert: typing.Optional[PKCS12Certificate],\n additional_certs: typing.List[PKCS12Certificate],\n ):\n if key is not None and not isinstance(\n key,\n (\n rsa.RSAPrivateKey,\n dsa.DSAPrivateKey,\n ec.EllipticCurvePrivateKey,\n ed25519.Ed25519PrivateKey,\n ed448.Ed448PrivateKey,\n ),\n ):\n raise TypeError(\n \"Key must be RSA, DSA, EllipticCurve, ED25519, or ED448\"\n \" private key, or None.\"\n )\n if cert is not None and not isinstance(cert, PKCS12Certificate):\n raise TypeError(\"cert must be a PKCS12Certificate 
object or None\")\n if not all(\n isinstance(add_cert, PKCS12Certificate)\n for add_cert in additional_certs\n ):\n raise TypeError(\n \"all values in additional_certs must be PKCS12Certificate\"\n \" objects\"\n )\n self._key = key\n self._cert = cert\n self._additional_certs = additional_certs\n\n @property\n def key(self) -> typing.Optional[PRIVATE_KEY_TYPES]:\n return self._key\n\n @property\n def cert(self) -> typing.Optional[PKCS12Certificate]:\n return self._cert\n\n @property\n def additional_certs(self) -> typing.List[PKCS12Certificate]:\n return self._additional_certs\n\n def __eq__(self, other: object) -> bool:\n if not isinstance(other, PKCS12KeyAndCertificates):\n return NotImplemented\n\n return (\n self.key == other.key\n and self.cert == other.cert\n and self.additional_certs == other.additional_certs\n )\n\n def __ne__(self, other: object) -> bool:\n return not self == other\n\n def __hash__(self) -> int:\n return hash((self.key, self.cert, tuple(self.additional_certs)))\n\n def __repr__(self) -> str:\n fmt = (\n \"<PKCS12KeyAndCertificates(key={}, cert={}, additional_certs={})>\"\n )\n return fmt.format(self.key, self.cert, self.additional_certs)\n\n\ndef load_key_and_certificates(\n data: bytes,\n password: typing.Optional[bytes],\n backend: typing.Any = None,\n) -> typing.Tuple[\n typing.Optional[PRIVATE_KEY_TYPES],\n typing.Optional[x509.Certificate],\n typing.List[x509.Certificate],\n]:\n from cryptography.hazmat.backends.openssl.backend import backend as ossl\n\n return ossl.load_key_and_certificates_from_pkcs12(data, password)\n\n\ndef load_pkcs12(\n data: bytes,\n password: typing.Optional[bytes],\n backend: typing.Any = None,\n) -> PKCS12KeyAndCertificates:\n from cryptography.hazmat.backends.openssl.backend import backend as ossl\n\n return ossl.load_pkcs12(data, password)\n\n\ndef serialize_key_and_certificates(\n name: typing.Optional[bytes],\n key: typing.Optional[_ALLOWED_PKCS12_TYPES],\n cert: typing.Optional[x509.Certificate],\n cas: typing.Optional[typing.Iterable[x509.Certificate]],\n encryption_algorithm: serialization.KeySerializationEncryption,\n) -> bytes:\n if key is not None and not isinstance(\n key,\n (\n rsa.RSAPrivateKey,\n dsa.DSAPrivateKey,\n ec.EllipticCurvePrivateKey,\n ),\n ):\n raise TypeError(\n \"Key must be RSA, DSA, or EllipticCurve private key or None.\"\n )\n if cert is not None and not isinstance(cert, x509.Certificate):\n raise TypeError(\"cert must be a certificate or None\")\n\n if cas is not None:\n cas = list(cas)\n if not all(isinstance(val, x509.Certificate) for val in cas):\n raise TypeError(\"all values in cas must be certificates\")\n\n if not isinstance(\n encryption_algorithm, serialization.KeySerializationEncryption\n ):\n raise TypeError(\n \"Key encryption algorithm must be a \"\n \"KeySerializationEncryption instance\"\n )\n\n if key is None and cert is None and not cas:\n raise ValueError(\"You must supply at least one of key, cert, or cas\")\n\n from cryptography.hazmat.backends.openssl.backend import backend\n\n return backend.serialize_key_and_certificates_to_pkcs12(\n name, key, cert, cas, encryption_algorithm\n )\n", "path": "src/cryptography/hazmat/primitives/serialization/pkcs12.py"}]} | 2,632 | 317 |
gh_patches_debug_31439 | rasdani/github-patches | git_diff | ethereum__consensus-specs-758 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Miscellaneous beacon chain changes—take 4
(See #128, #218, #322 for takes 1, 2, 3.)
Below is a list of miscellaneous suggestions for phase 0, most of which were discussed on the researchers' call on Feb 19. This issue keeps track of some of the phase 0 work remaining.
- [x] 1. **Friendlier GENESIS_SLOT**: Implemented in #655.
- [x] 2. **Granular state roots**: Expose state roots at every slot. Implemented in #649.
- [x] 3. **Previous block root reconstruction**: Provide enough information in `state` to reconstruct the current block's `previous_block_root`. Implemented in #649.
- [x] 4. **Define genesis Eth1 data**: Implemented in #649.
- [x] 5. **Mandatory deposits**: Mandatory processing of pending deposits.
- [x] 6. **Transfers during pre-activation**: Allow not-yet-activated validators to make transfers.
- [x] 7. **LMD GHOST tie breaker**: Compare block hashes to tie-break LMD GHOST.
- [ ] 8. **Maximum reversions**: Enshrine dynamic weak subjectivity revert period. See #577.
- [x] 9. **Double justifications**: Specify the fork choice rule when there are two justified blocks at the same height. (Possible solution: ignore both and fall back to the previous highest justified block.)
- [x] 10. **Move to SHA256**: See #612.
- [ ] 11. **Standardise BLS12-381**: See #605.
- [ ] 12. **Performance parameters tuning**: Fine-tune `SECONDS_PER_SLOT`, `SHARD_COUNT`, etc. based on benchmarks.
- [ ] 13a. **Increase proposer rewards**: See #621. Need to check incentive compatibility with inclusion distance reward.
- [x] 13b. **Incentive-compatible proposer rewards**: Make proposer rewards proportional to balance.
- [ ] 14. **Increase rewards in general**: Calculate theoretical max issuance rate and work backwards to expected issuance.
- [x] 15. **Reduce SSZ_CHUNK_SIZE to 32**: See #603 and #696.
- [x] 16. **SSZ tuples**: See #665 and #696.
- [x] <s>17. **Immediately withdrawable if bad proof of possession**: See #657.</s>
- [x] 18. **4-byte working balance**: See #685.
- [x] 19. **Merkleisation-friendly pending attestations**: See #697.
- [ ] 20. **Fine-tune container field ordering**: To do with constants fine-tuning.
- [x] 21. **Minimum activation period**: See [here](https://github.com/ethereum/eth2.0-specs/issues/675#issuecomment-468159678) and [here](https://github.com/ethereum/eth2.0-specs/pull/746).
- [x] 22. **Milder ejections**: Replace `exit_validator` by `initiate_validator_exit` in `process_ejections`.
- [x] 23. **Improved rate limiting**: Change the rate limiting logic (for entry/exit/withdrawal) based on [this Ethresear.ch post](https://ethresear.ch/t/rate-limiting-entry-exits-not-withdrawals/4942).
- [x] 24. **Epoch transitions at start of epoch**: Instead of at the very end of the epoch.
- [x] 25. **Epoch-based proposer slashing**: As opposed to slot-based.
- [x] 26. **Genesis epochs**: Use `GENESIS_EPOCH - 1` for `previous_shuffling_epoch` and maybe `previous_shuffling_epoch`.
- [x] <s>27. **No backfilling of latest_active_index_roots**: Only set the active index root for the first slot.</s>
- [x] 28. <s>**`start_shard` offsets**: For fairer crosslinking latency across shards.</s>
- [x] 29. **Remove deposit timestamps and `DepositData`**: See #760.
- [x] 30. **Fair proposer sampling**: See #733.
- [ ] 31. **Slashed validators and LMD GHOST**: Should attestations from slashed validators be ignored in LMD GHOST?
- [x] 32. **Incentives simplification**: Simplification of the rewards and penalties.
- [ ] 33. **Exit fee**: See [here](https://github.com/ethereum/eth2.0-specs/pull/850#issuecomment-478068655).
- [x] 34. **GENESIS_SLOT == 0**: From Danny.
- [ ] 35. **Incentive-compatible crosslink rewards**: Proportional to amount of crosslink data.
- [ ] 36. **No phase 0 transfers**: Push transfers to phase 1 so that no economically meaningful activity happens during phase 0. This allows for phase 0 (a "testnet") to be rebooted if things go horribly wrong.
- [ ] 37. **Explicit genesis deposits**: Put genesis deposits in `block.body.deposits`.
- [x] 38. **Remove serialization from consensus**: See #924.
- [ ] 39. **Do not store withdrawal credentials**: See #937.
- [ ] 40. **Increase SECONDS_PER_SLOT and remove MIN_ATTESTATION_INCLUSION_DELAY**: The idea is to set different `SECONDS_PER_BEACON_SLOT` and `SECONDS_PER_SHARD_SLOT`, e.g. to 8/4, 12/3 or 16/4.
- [ ] 41. **The slotering**: Remove various unnecessary slots and replace by epochs where appropriate. (Justin surprise cleanup.)
- [x] 42. **Graffiti**: 32-byte arbitrary data in blocks
- [ ] 43. **Merge historical stats**: In particular, merge constants under "State list lengths".
- [ ] 44. **Improve epoch processing**: See #1043.
</issue>
<code>
[start of utils/phase0/state_transition.py]
1 from . import spec
2
3
4 from typing import ( # noqa: F401
5 Any,
6 Callable,
7 List,
8 NewType,
9 Tuple,
10 )
11
12 from .spec import (
13 BeaconState,
14 BeaconBlock,
15 )
16
17
18 def process_transaction_type(state: BeaconState,
19 transactions: List[Any],
20 max_transactions: int,
21 tx_fn: Callable[[BeaconState, Any], None]) -> None:
22 assert len(transactions) <= max_transactions
23 for transaction in transactions:
24 tx_fn(state, transaction)
25
26
27 def process_transactions(state: BeaconState, block: BeaconBlock) -> None:
28 process_transaction_type(
29 state,
30 block.body.proposer_slashings,
31 spec.MAX_PROPOSER_SLASHINGS,
32 spec.process_proposer_slashing,
33 )
34 process_transaction_type(
35 state,
36 block.body.attester_slashings,
37 spec.MAX_ATTESTER_SLASHINGS,
38 spec.process_attester_slashing,
39 )
40 process_transaction_type(
41 state,
42 block.body.attestations,
43 spec.MAX_ATTESTATIONS,
44 spec.process_attestation,
45 )
46 process_transaction_type(
47 state,
48 block.body.deposits,
49 spec.MAX_DEPOSITS,
50 spec.process_deposit,
51 )
52 process_transaction_type(
53 state,
54 block.body.voluntary_exits,
55 spec.MAX_VOLUNTARY_EXITS,
56 spec.process_voluntary_exit,
57 )
58 assert len(block.body.transfers) == len(set(block.body.transfers))
59 process_transaction_type(
60 state,
61 block.body.transfers,
62 spec.MAX_TRANSFERS,
63 spec.process_transfer,
64 )
65
66
67 def process_block(state: BeaconState,
68 block: BeaconBlock,
69 verify_state_root: bool=False) -> None:
70 spec.process_block_header(state, block)
71 spec.process_randao(state, block)
72 spec.process_eth1_data(state, block)
73
74 process_transactions(state, block)
75 if verify_state_root:
76 spec.verify_block_state_root(state, block)
77
78
79 def process_epoch_transition(state: BeaconState) -> None:
80 spec.update_justification_and_finalization(state)
81 spec.process_crosslinks(state)
82 spec.maybe_reset_eth1_period(state)
83 spec.apply_rewards(state)
84 spec.process_ejections(state)
85 spec.update_registry_and_shuffling_data(state)
86 spec.process_slashings(state)
87 spec.process_exit_queue(state)
88 spec.finish_epoch_update(state)
89
90
91 def state_transition(state: BeaconState,
92 block: BeaconBlock,
93 verify_state_root: bool=False) -> BeaconState:
94 while state.slot < block.slot:
95 spec.cache_state(state)
96 if (state.slot + 1) % spec.SLOTS_PER_EPOCH == 0:
97 process_epoch_transition(state)
98 spec.advance_slot(state)
99 if block.slot == state.slot:
100 process_block(state, block, verify_state_root)
101
[end of utils/phase0/state_transition.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/utils/phase0/state_transition.py b/utils/phase0/state_transition.py
--- a/utils/phase0/state_transition.py
+++ b/utils/phase0/state_transition.py
@@ -15,6 +15,13 @@
)
+def expected_deposit_count(state: BeaconState) -> int:
+ return min(
+ spec.MAX_DEPOSITS,
+ state.latest_eth1_data.deposit_count - state.deposit_index
+ )
+
+
def process_transaction_type(state: BeaconState,
transactions: List[Any],
max_transactions: int,
@@ -31,30 +38,36 @@
spec.MAX_PROPOSER_SLASHINGS,
spec.process_proposer_slashing,
)
+
process_transaction_type(
state,
block.body.attester_slashings,
spec.MAX_ATTESTER_SLASHINGS,
spec.process_attester_slashing,
)
+
process_transaction_type(
state,
block.body.attestations,
spec.MAX_ATTESTATIONS,
spec.process_attestation,
)
+
+ assert len(block.body.deposits) == expected_deposit_count(state)
process_transaction_type(
state,
block.body.deposits,
spec.MAX_DEPOSITS,
spec.process_deposit,
)
+
process_transaction_type(
state,
block.body.voluntary_exits,
spec.MAX_VOLUNTARY_EXITS,
spec.process_voluntary_exit,
)
+
assert len(block.body.transfers) == len(set(block.body.transfers))
process_transaction_type(
state,
| {"golden_diff": "diff --git a/utils/phase0/state_transition.py b/utils/phase0/state_transition.py\n--- a/utils/phase0/state_transition.py\n+++ b/utils/phase0/state_transition.py\n@@ -15,6 +15,13 @@\n )\n \n \n+def expected_deposit_count(state: BeaconState) -> int:\n+ return min(\n+ spec.MAX_DEPOSITS,\n+ state.latest_eth1_data.deposit_count - state.deposit_index\n+ )\n+\n+\n def process_transaction_type(state: BeaconState,\n transactions: List[Any],\n max_transactions: int,\n@@ -31,30 +38,36 @@\n spec.MAX_PROPOSER_SLASHINGS,\n spec.process_proposer_slashing,\n )\n+\n process_transaction_type(\n state,\n block.body.attester_slashings,\n spec.MAX_ATTESTER_SLASHINGS,\n spec.process_attester_slashing,\n )\n+\n process_transaction_type(\n state,\n block.body.attestations,\n spec.MAX_ATTESTATIONS,\n spec.process_attestation,\n )\n+\n+ assert len(block.body.deposits) == expected_deposit_count(state)\n process_transaction_type(\n state,\n block.body.deposits,\n spec.MAX_DEPOSITS,\n spec.process_deposit,\n )\n+\n process_transaction_type(\n state,\n block.body.voluntary_exits,\n spec.MAX_VOLUNTARY_EXITS,\n spec.process_voluntary_exit,\n )\n+\n assert len(block.body.transfers) == len(set(block.body.transfers))\n process_transaction_type(\n state,\n", "issue": "Miscellaneous beacon chain changes\u2014take 4\n(See #128, #218, #322 for takes 1, 2, 3.)\r\n\r\nBelow is a list of miscellaneous suggestions for phase 0, most of which were discussed on the researcher's call on Feb 19. This issue keeps track of some of the phase 0 work remaining.\r\n\r\n- [x] 1. **Friendlier GENESIS_SLOT**: Implemented in #655.\r\n- [x] 2. **Granular state roots**: Expose state roots at every slot. Implemented in #649.\r\n- [x] 3. **Previous block root reconstruction**: Provide enough information in `state` to reconstruct the current block's `previous_block_root`. Implemented in #649.\r\n- [x] 4. **Define genesis Eth1 data**: Implemented in #649.\r\n- [x] 5. **Mandatory deposits**: Mandatory processing of pending deposits.\r\n- [x] 6. **Transfers during pre-activation**: Allow not-yet-activated validators to make transfers.\r\n- [x] 7. **LMD GHOST tie breaker**: Compare block hashes to tie-break LMD GHOST.\r\n- [ ] 8. **Maximum reversions**: Enshrine dynamic weak subjectivity revert period. See #577.\r\n- [x] 9. **Double justifications**: Specify fork choice rule when there are two justified blocks at the same height. (Possible solution: ignore both and fallback to the previous highest justified block.)\r\n- [x] 10. **Move to SHA256**: See #612.\r\n- [ ] 11. **Standardise BLS12-381**: See #605.\r\n- [ ] 12. **Performance parameters tuning**: Fine-tune `SECONDS_PER_SLOT`, `SHARD_COUNT`, etc. based on benchmarks.\r\n- [ ] 13a. **Increase proposer rewards**: See #621. Need to check incentive compatibility with inclusion distance reward.\r\n- [x] 13b. **Incentive-compatible proposer rewards**: Make proposer rewards proportional to balance. \r\n- [ ] 14. **Increase rewards in general**: Calculate theoretical max issuance rate and work backwards to expected issuance.\r\n- [x] 15. **Reduce SSZ_CHUNK_SIZE to 32**: See #603 and #696.\r\n- [x] 16. **SSZ tuples**: See #665 and #696.\r\n- [x] <s>17. **Immediately withdrawable if bad proof of possession**: See #657.</s>\r\n- [x] 18. **4-byte working balance**: See #685.\r\n- [x] 19. **Merkleisation-friendly pending attestations**: See #697.\r\n- [ ] 20. **Fine-tune container field ordering**: To do with constants fine-tuning.\r\n- [x] 21. 
**Minimum activation period**: See [here](https://github.com/ethereum/eth2.0-specs/issues/675#issuecomment-468159678) and [here](https://github.com/ethereum/eth2.0-specs/pull/746).\r\n- [x] 22. **Milder ejections**: Replace `exit_validator` by `initiate_validator_exit` in `process_ejections`.\r\n- [x] 23. **Improved rate limiting**: Change the rate limiting logic (for entry/exit/withdrawal) based on [this Ethresear.ch post](https://ethresear.ch/t/rate-limiting-entry-exits-not-withdrawals/4942).\r\n- [x] 24. **Epoch transitions at start of epoch**: Instead of at the very end of the epoch.\r\n- [x] 25. **Epoch-based proposer slashing**: As opposed to slot-based.\r\n- [x] 26. **Genesis epochs**: Use `GENESIS_EPOCH - 1` for `previous_shuffling_epoch` and maybe `previous_shuffling_epoch`.\r\n- [x] <s>27. **No backfilling of latest_active_index_roots**: Only set the active index root for the first slot.</s>\r\n- [x] 28. <s>**`start_shard` offsets**: For fairer crosslinking latency across shards.</s>\r\n- [x] 29. **Remove deposit timestamps and `DepositData`**: See #760.\r\n- [x] 30. **Fair proposer sampling**: See #733.\r\n- [ ] 31. **Slashed validators and LMD GHOST**: Should attestations from slashed validators be ignored in LMD GHOST?\r\n- [x] 32. **Incentives simplification**: Simplification of the rewards and penalties.\r\n- [ ] 33. **Exit fee**: See [here](https://github.com/ethereum/eth2.0-specs/pull/850#issuecomment-478068655).\r\n- [x] 34. **GENESIS_SLOT == 0**: From Danny.\r\n- [ ] 35. **Incentive-compatible crosslink rewards**: Proportional to amount of crosslink data.\r\n- [ ] 36. **No phase 0 transfers**: Push transfers to phase 1 so that no economically meaningful activity happens during phase 0. This allows for phase 0 (a \"testnet\") to be rebooted if things go horribly wrong.\r\n- [ ] 37. **Explicit genesis deposits**: Put genesis deposits in `block.body.deposits`.\r\n- [x] 38. **Remove serialization from consensus**: See #924.\r\n- [ ] 39. **Do not store withdrawal credentials**: See #937.\r\n- [ ] 40. **Increase SECONDS_PER_SLOT and remove MIN_ATTESTATION_INCLUSION_DELAY**: The idea is to set different `SECONDS_PER_BEACON_SLOT`and `SECONDS_PER_SHARD_SLOT`, e.g. to 8/4, 12/3 or 16/4.\r\n- [ ] 41. **The slotering**: Remove various unnecessary slots and replace by epochs where appropriate. (Justin surprise cleanup.)\r\n- [x] 42. **Graffiti**: 32-byte arbitrary data in blocks\r\n- [ ] 43. **Merge historical stats**: In particular, merge constants under \"State list lengths\".\r\n- [ ] 44. **Improve epoch processing**: See #1043.\n", "before_files": [{"content": "from . 
import spec\n\n\nfrom typing import ( # noqa: F401\n Any,\n Callable,\n List,\n NewType,\n Tuple,\n)\n\nfrom .spec import (\n BeaconState,\n BeaconBlock,\n)\n\n\ndef process_transaction_type(state: BeaconState,\n transactions: List[Any],\n max_transactions: int,\n tx_fn: Callable[[BeaconState, Any], None]) -> None:\n assert len(transactions) <= max_transactions\n for transaction in transactions:\n tx_fn(state, transaction)\n\n\ndef process_transactions(state: BeaconState, block: BeaconBlock) -> None:\n process_transaction_type(\n state,\n block.body.proposer_slashings,\n spec.MAX_PROPOSER_SLASHINGS,\n spec.process_proposer_slashing,\n )\n process_transaction_type(\n state,\n block.body.attester_slashings,\n spec.MAX_ATTESTER_SLASHINGS,\n spec.process_attester_slashing,\n )\n process_transaction_type(\n state,\n block.body.attestations,\n spec.MAX_ATTESTATIONS,\n spec.process_attestation,\n )\n process_transaction_type(\n state,\n block.body.deposits,\n spec.MAX_DEPOSITS,\n spec.process_deposit,\n )\n process_transaction_type(\n state,\n block.body.voluntary_exits,\n spec.MAX_VOLUNTARY_EXITS,\n spec.process_voluntary_exit,\n )\n assert len(block.body.transfers) == len(set(block.body.transfers))\n process_transaction_type(\n state,\n block.body.transfers,\n spec.MAX_TRANSFERS,\n spec.process_transfer,\n )\n\n\ndef process_block(state: BeaconState,\n block: BeaconBlock,\n verify_state_root: bool=False) -> None:\n spec.process_block_header(state, block)\n spec.process_randao(state, block)\n spec.process_eth1_data(state, block)\n\n process_transactions(state, block)\n if verify_state_root:\n spec.verify_block_state_root(state, block)\n\n\ndef process_epoch_transition(state: BeaconState) -> None:\n spec.update_justification_and_finalization(state)\n spec.process_crosslinks(state)\n spec.maybe_reset_eth1_period(state)\n spec.apply_rewards(state)\n spec.process_ejections(state)\n spec.update_registry_and_shuffling_data(state)\n spec.process_slashings(state)\n spec.process_exit_queue(state)\n spec.finish_epoch_update(state)\n\n\ndef state_transition(state: BeaconState,\n block: BeaconBlock,\n verify_state_root: bool=False) -> BeaconState:\n while state.slot < block.slot:\n spec.cache_state(state)\n if (state.slot + 1) % spec.SLOTS_PER_EPOCH == 0:\n process_epoch_transition(state)\n spec.advance_slot(state)\n if block.slot == state.slot:\n process_block(state, block, verify_state_root)\n", "path": "utils/phase0/state_transition.py"}]} | 2,765 | 332 |
gh_patches_debug_26667 | rasdani/github-patches | git_diff | getsentry__sentry-24151 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Accessibility Issues with User Feedback Widget
<!-- Requirements: please go through this checklist before opening a new issue -->
- [x] Review the documentation: https://docs.sentry.io/
- [x] Search for existing issues: https://github.com/getsentry/sentry-javascript/issues
- [x] Use the latest release: https://github.com/getsentry/sentry-javascript/releases
- [x] Provide a link to the affected event from your Sentry account <- Not applicable
## Package + Version
- [x] `@sentry/browser`
- [ ] `@sentry/node`
- [ ] `raven-js`
- [ ] `raven-node` _(raven for node)_
- [ ] other:
### Version:
```
5.29.2
```
## Description
The dialog opened by Sentry.showReportDialog is not accessible. See the [WAI-ARIA Authoring Practices guidance on accessible modals](https://www.w3.org/TR/wai-aria-practices-1.1/#dialog_modal).
Some specific issues which need to be fixed:
- `Tab` and `Shift` + `Tab` should not move focus to elements outside the modal (they currently can)
- When the modal loads, the first input of the form should receive focus (currently nothing in the modal is focused when the modal loads)
- The "Close" button is rendered using an anchor tag without an `href` attribute. As a result it is not in the tab sequence and keyboard users are not able to use it. To fix this, a `button` element should be used instead. Since the element does not cause navigation, the `button` element will also have the proper semantics and will improve the experience for users of assistive technology.
- The outermost div of the dialog needs `role` set to `dialog`
- The outermost div of the dialog needs `aria-modal` set to `true`
- The outermost div of the dialog needs `aria-labelledby` set to the `id` of the modal's h2
</issue>
<code>
[start of src/sentry/web/frontend/error_page_embed.py]
1 from django import forms
2 from django.db import IntegrityError, transaction
3 from django.http import HttpResponse
4 from django.views.generic import View
5 from django.utils import timezone
6 from django.utils.safestring import mark_safe
7 from django.utils.translation import ugettext_lazy as _
8 from django.views.decorators.csrf import csrf_exempt
9
10 from sentry import eventstore
11 from sentry.models import Project, ProjectKey, ProjectOption, UserReport
12 from sentry.web.helpers import render_to_response, render_to_string
13 from sentry.signals import user_feedback_received
14 from sentry.utils import json
15 from sentry.utils.http import absolute_uri, is_valid_origin, origin_from_request
16 from sentry.utils.validators import normalize_event_id
17
18 GENERIC_ERROR = _("An unknown error occurred while submitting your report. Please try again.")
19 FORM_ERROR = _("Some fields were invalid. Please correct the errors and try again.")
20 SENT_MESSAGE = _("Your feedback has been sent. Thank you!")
21
22 DEFAULT_TITLE = _("It looks like we're having issues.")
23 DEFAULT_SUBTITLE = _("Our team has been notified.")
24 DEFAULT_SUBTITLE2 = _("If you'd like to help, tell us what happened below.")
25
26 DEFAULT_NAME_LABEL = _("Name")
27 DEFAULT_EMAIL_LABEL = _("Email")
28 DEFAULT_COMMENTS_LABEL = _("What happened?")
29
30 DEFAULT_CLOSE_LABEL = _("Close")
31 DEFAULT_SUBMIT_LABEL = _("Submit Crash Report")
32
33 DEFAULT_OPTIONS = {
34 "title": DEFAULT_TITLE,
35 "subtitle": DEFAULT_SUBTITLE,
36 "subtitle2": DEFAULT_SUBTITLE2,
37 "labelName": DEFAULT_NAME_LABEL,
38 "labelEmail": DEFAULT_EMAIL_LABEL,
39 "labelComments": DEFAULT_COMMENTS_LABEL,
40 "labelClose": DEFAULT_CLOSE_LABEL,
41 "labelSubmit": DEFAULT_SUBMIT_LABEL,
42 "errorGeneric": GENERIC_ERROR,
43 "errorFormEntry": FORM_ERROR,
44 "successMessage": SENT_MESSAGE,
45 }
46
47
48 class UserReportForm(forms.ModelForm):
49 name = forms.CharField(
50 max_length=128, widget=forms.TextInput(attrs={"placeholder": _("Jane Bloggs")})
51 )
52 email = forms.EmailField(
53 max_length=75,
54 widget=forms.TextInput(attrs={"placeholder": _("[email protected]"), "type": "email"}),
55 )
56 comments = forms.CharField(
57 widget=forms.Textarea(attrs={"placeholder": _("I clicked on 'X' and then hit 'Confirm'")})
58 )
59
60 class Meta:
61 model = UserReport
62 fields = ("name", "email", "comments")
63
64
65 class ErrorPageEmbedView(View):
66 def _get_project_key(self, request):
67 try:
68 dsn = request.GET["dsn"]
69 except KeyError:
70 return
71
72 try:
73 key = ProjectKey.from_dsn(dsn)
74 except ProjectKey.DoesNotExist:
75 return
76
77 return key
78
79 def _get_origin(self, request):
80 return origin_from_request(request)
81
82 def _smart_response(self, request, context=None, status=200):
83 json_context = json.dumps(context or {})
84 accept = request.META.get("HTTP_ACCEPT") or ""
85 if "text/javascript" in accept:
86 content_type = "text/javascript"
87 content = ""
88 else:
89 content_type = "application/json"
90 content = json_context
91 response = HttpResponse(content, status=status, content_type=content_type)
92 response["Access-Control-Allow-Origin"] = request.META.get("HTTP_ORIGIN", "")
93 response["Access-Control-Allow-Methods"] = "GET, POST, OPTIONS"
94 response["Access-Control-Max-Age"] = "1000"
95 response["Access-Control-Allow-Headers"] = "Content-Type, Authorization, X-Requested-With"
96 response["Vary"] = "Accept"
97 if content == "" and context:
98 response["X-Sentry-Context"] = json_context
99 return response
100
101 @csrf_exempt
102 def dispatch(self, request):
103 try:
104 event_id = request.GET["eventId"]
105 except KeyError:
106 return self._smart_response(
107 request, {"eventId": "Missing or invalid parameter."}, status=400
108 )
109
110 normalized_event_id = normalize_event_id(event_id)
111 if normalized_event_id:
112 event_id = normalized_event_id
113 elif event_id:
114 return self._smart_response(
115 request, {"eventId": "Missing or invalid parameter."}, status=400
116 )
117
118 key = self._get_project_key(request)
119 if not key:
120 return self._smart_response(
121 request, {"dsn": "Missing or invalid parameter."}, status=404
122 )
123
124 origin = self._get_origin(request)
125 if not is_valid_origin(origin, key.project):
126 return self._smart_response(request, status=403)
127
128 if request.method == "OPTIONS":
129 return self._smart_response(request)
130
131 # customization options
132 options = DEFAULT_OPTIONS.copy()
133 for name in options.keys():
134 if name in request.GET:
135 options[name] = str(request.GET[name])
136
137 # TODO(dcramer): since we cant use a csrf cookie we should at the very
138 # least sign the request / add some kind of nonce
139 initial = {"name": request.GET.get("name"), "email": request.GET.get("email")}
140
141 form = UserReportForm(request.POST if request.method == "POST" else None, initial=initial)
142 if form.is_valid():
143 # TODO(dcramer): move this to post to the internal API
144 report = form.save(commit=False)
145 report.project_id = key.project_id
146 report.event_id = event_id
147
148 event = eventstore.get_event_by_id(report.project_id, report.event_id)
149
150 if event is not None:
151 report.environment_id = event.get_environment().id
152 report.group_id = event.group_id
153
154 try:
155 with transaction.atomic():
156 report.save()
157 except IntegrityError:
158 # There was a duplicate, so just overwrite the existing
159 # row with the new one. The only way this ever happens is
160 # if someone is messing around with the API, or doing
161 # something wrong with the SDK, but this behavior is
162 # more reasonable than just hard erroring and is more
163 # expected.
164 UserReport.objects.filter(
165 project_id=report.project_id, event_id=report.event_id
166 ).update(
167 name=report.name,
168 email=report.email,
169 comments=report.comments,
170 date_added=timezone.now(),
171 )
172
173 else:
174 if report.group_id:
175 report.notify()
176
177 user_feedback_received.send(
178 project=Project.objects.get(id=report.project_id),
179 sender=self,
180 )
181
182 return self._smart_response(request)
183 elif request.method == "POST":
184 return self._smart_response(request, {"errors": dict(form.errors)}, status=400)
185
186 show_branding = (
187 ProjectOption.objects.get_value(
188 project=key.project, key="feedback:branding", default="1"
189 )
190 == "1"
191 )
192
193 template = render_to_string(
194 "sentry/error-page-embed.html",
195 context={
196 "form": form,
197 "show_branding": show_branding,
198 "title": options["title"],
199 "subtitle": options["subtitle"],
200 "subtitle2": options["subtitle2"],
201 "name_label": options["labelName"],
202 "email_label": options["labelEmail"],
203 "comments_label": options["labelComments"],
204 "submit_label": options["labelSubmit"],
205 "close_label": options["labelClose"],
206 },
207 )
208
209 context = {
210 "endpoint": mark_safe("*/" + json.dumps(absolute_uri(request.get_full_path())) + ";/*"),
211 "template": mark_safe("*/" + json.dumps(template) + ";/*"),
212 "strings": json.dumps_htmlsafe(
213 {
214 "generic_error": str(options["errorGeneric"]),
215 "form_error": str(options["errorFormEntry"]),
216 "sent_message": str(options["successMessage"]),
217 }
218 ),
219 }
220
221 return render_to_response(
222 "sentry/error-page-embed.js", context, request, content_type="text/javascript"
223 )
224
[end of src/sentry/web/frontend/error_page_embed.py]
[start of src/sentry/web/frontend/debug/debug_error_embed.py]
1 from django.conf import settings
2 from django.views.generic import View
3 from urllib.parse import urlencode
4
5 from sentry.models import ProjectKey
6 from sentry.web.helpers import render_to_response
7
8
9 class DebugErrorPageEmbedView(View):
10 def _get_project_key(self):
11 return ProjectKey.objects.filter(project=settings.SENTRY_PROJECT)[0]
12
13 def get(self, request):
14 context = {
15 "query_params": urlencode(
16 {
17 "dsn": self._get_project_key().dsn_public,
18 "event_id": "342a3d7f690a49f8bd7c4cf0e61a9ded",
19 "options": dict(**request.GET),
20 }
21 )
22 }
23
24 return render_to_response("sentry/debug/error-page-embed.html", context, request)
25
[end of src/sentry/web/frontend/debug/debug_error_embed.py]
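As a rough illustration of how `ErrorPageEmbedView.dispatch` is exercised, here is a sketch using Django's test client. The URL path is an assumption made for this example (the project wires its own route), and the event id is simply the one hard-coded in the debug view above.

```python
from django.test import Client

# Default headers applied to every request made by this client.
client = Client(HTTP_ACCEPT="application/json", HTTP_ORIGIN="http://localhost")

resp = client.get(
    "/api/embed/error-page/",  # assumed route for ErrorPageEmbedView
    {"eventId": "342a3d7f690a49f8bd7c4cf0e61a9ded", "dsn": "<public project DSN>"},
)

# Per dispatch(): 400 for a missing/invalid eventId, 404 for an unknown dsn,
# 403 for a disallowed origin, otherwise the JavaScript embed is returned.
print(resp.status_code)
```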
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/sentry/web/frontend/debug/debug_error_embed.py b/src/sentry/web/frontend/debug/debug_error_embed.py
--- a/src/sentry/web/frontend/debug/debug_error_embed.py
+++ b/src/sentry/web/frontend/debug/debug_error_embed.py
@@ -15,8 +15,8 @@
"query_params": urlencode(
{
"dsn": self._get_project_key().dsn_public,
- "event_id": "342a3d7f690a49f8bd7c4cf0e61a9ded",
- "options": dict(**request.GET),
+ "eventId": "342a3d7f690a49f8bd7c4cf0e61a9ded",
+ **request.GET,
}
)
}
diff --git a/src/sentry/web/frontend/error_page_embed.py b/src/sentry/web/frontend/error_page_embed.py
--- a/src/sentry/web/frontend/error_page_embed.py
+++ b/src/sentry/web/frontend/error_page_embed.py
@@ -209,12 +209,16 @@
context = {
"endpoint": mark_safe("*/" + json.dumps(absolute_uri(request.get_full_path())) + ";/*"),
"template": mark_safe("*/" + json.dumps(template) + ";/*"),
- "strings": json.dumps_htmlsafe(
- {
- "generic_error": str(options["errorGeneric"]),
- "form_error": str(options["errorFormEntry"]),
- "sent_message": str(options["successMessage"]),
- }
+ "strings": mark_safe(
+ "*/"
+ + json.dumps_htmlsafe(
+ {
+ "generic_error": str(options["errorGeneric"]),
+ "form_error": str(options["errorFormEntry"]),
+ "sent_message": str(options["successMessage"]),
+ }
+ )
+ + ";/*"
),
}
| {"golden_diff": "diff --git a/src/sentry/web/frontend/debug/debug_error_embed.py b/src/sentry/web/frontend/debug/debug_error_embed.py\n--- a/src/sentry/web/frontend/debug/debug_error_embed.py\n+++ b/src/sentry/web/frontend/debug/debug_error_embed.py\n@@ -15,8 +15,8 @@\n \"query_params\": urlencode(\n {\n \"dsn\": self._get_project_key().dsn_public,\n- \"event_id\": \"342a3d7f690a49f8bd7c4cf0e61a9ded\",\n- \"options\": dict(**request.GET),\n+ \"eventId\": \"342a3d7f690a49f8bd7c4cf0e61a9ded\",\n+ **request.GET,\n }\n )\n }\ndiff --git a/src/sentry/web/frontend/error_page_embed.py b/src/sentry/web/frontend/error_page_embed.py\n--- a/src/sentry/web/frontend/error_page_embed.py\n+++ b/src/sentry/web/frontend/error_page_embed.py\n@@ -209,12 +209,16 @@\n context = {\n \"endpoint\": mark_safe(\"*/\" + json.dumps(absolute_uri(request.get_full_path())) + \";/*\"),\n \"template\": mark_safe(\"*/\" + json.dumps(template) + \";/*\"),\n- \"strings\": json.dumps_htmlsafe(\n- {\n- \"generic_error\": str(options[\"errorGeneric\"]),\n- \"form_error\": str(options[\"errorFormEntry\"]),\n- \"sent_message\": str(options[\"successMessage\"]),\n- }\n+ \"strings\": mark_safe(\n+ \"*/\"\n+ + json.dumps_htmlsafe(\n+ {\n+ \"generic_error\": str(options[\"errorGeneric\"]),\n+ \"form_error\": str(options[\"errorFormEntry\"]),\n+ \"sent_message\": str(options[\"successMessage\"]),\n+ }\n+ )\n+ + \";/*\"\n ),\n }\n", "issue": "Accessibility Issues with User Feedback Widget\n<!-- Requirements: please go through this checklist before opening a new issue -->\r\n\r\n- [x] Review the documentation: https://docs.sentry.io/\r\n- [x] Search for existing issues: https://github.com/getsentry/sentry-javascript/issues\r\n- [x] Use the latest release: https://github.com/getsentry/sentry-javascript/releases\r\n- [x] Provide a link to the affected event from your Sentry account <- Not applicable\r\n\r\n## Package + Version\r\n\r\n- [x] `@sentry/browser`\r\n- [ ] `@sentry/node`\r\n- [ ] `raven-js`\r\n- [ ] `raven-node` _(raven for node)_\r\n- [ ] other:\r\n\r\n### Version:\r\n\r\n```\r\n5.29.2\r\n```\r\n\r\n## Description\r\n\r\nThe dialog opened by Sentry.showReportDialog is not accessible. See the [WAI-ARIA Authoring Practices guidance on accessible modals](https://www.w3.org/TR/wai-aria-practices-1.1/#dialog_modal).\r\n\r\nSome specific issues which need to be fixed:\r\n\r\n- `Tab` and `Shift` + `Tab` should not move focus to elements outside the modal (they currently can)\r\n- When the modal loads, the first input of the form should receive focus (currently nothing in the modal is focused when the modal loads)\r\n- The \"Close\" button is rendered using an anchor tag without an `href` attribute. As a result it is not in the tab sequence and keyboard users are not able to use it. To fix this, a `button` element should be used instead. 
Since the element does not cause navigation, the `button` element will also have the proper semantics and will improve the experience for users of assistive technology.\r\n- The outermost div of the dialog needs `role` set to `dialog`\r\n- The outermost div of the dialog needs `aria-modal` set to `true`\r\n- The outermost div of the dialog needs `aria-labelledby` set to the `id` of the modal's h2\r\n\n", "before_files": [{"content": "from django import forms\nfrom django.db import IntegrityError, transaction\nfrom django.http import HttpResponse\nfrom django.views.generic import View\nfrom django.utils import timezone\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.views.decorators.csrf import csrf_exempt\n\nfrom sentry import eventstore\nfrom sentry.models import Project, ProjectKey, ProjectOption, UserReport\nfrom sentry.web.helpers import render_to_response, render_to_string\nfrom sentry.signals import user_feedback_received\nfrom sentry.utils import json\nfrom sentry.utils.http import absolute_uri, is_valid_origin, origin_from_request\nfrom sentry.utils.validators import normalize_event_id\n\nGENERIC_ERROR = _(\"An unknown error occurred while submitting your report. Please try again.\")\nFORM_ERROR = _(\"Some fields were invalid. Please correct the errors and try again.\")\nSENT_MESSAGE = _(\"Your feedback has been sent. Thank you!\")\n\nDEFAULT_TITLE = _(\"It looks like we're having issues.\")\nDEFAULT_SUBTITLE = _(\"Our team has been notified.\")\nDEFAULT_SUBTITLE2 = _(\"If you'd like to help, tell us what happened below.\")\n\nDEFAULT_NAME_LABEL = _(\"Name\")\nDEFAULT_EMAIL_LABEL = _(\"Email\")\nDEFAULT_COMMENTS_LABEL = _(\"What happened?\")\n\nDEFAULT_CLOSE_LABEL = _(\"Close\")\nDEFAULT_SUBMIT_LABEL = _(\"Submit Crash Report\")\n\nDEFAULT_OPTIONS = {\n \"title\": DEFAULT_TITLE,\n \"subtitle\": DEFAULT_SUBTITLE,\n \"subtitle2\": DEFAULT_SUBTITLE2,\n \"labelName\": DEFAULT_NAME_LABEL,\n \"labelEmail\": DEFAULT_EMAIL_LABEL,\n \"labelComments\": DEFAULT_COMMENTS_LABEL,\n \"labelClose\": DEFAULT_CLOSE_LABEL,\n \"labelSubmit\": DEFAULT_SUBMIT_LABEL,\n \"errorGeneric\": GENERIC_ERROR,\n \"errorFormEntry\": FORM_ERROR,\n \"successMessage\": SENT_MESSAGE,\n}\n\n\nclass UserReportForm(forms.ModelForm):\n name = forms.CharField(\n max_length=128, widget=forms.TextInput(attrs={\"placeholder\": _(\"Jane Bloggs\")})\n )\n email = forms.EmailField(\n max_length=75,\n widget=forms.TextInput(attrs={\"placeholder\": _(\"[email protected]\"), \"type\": \"email\"}),\n )\n comments = forms.CharField(\n widget=forms.Textarea(attrs={\"placeholder\": _(\"I clicked on 'X' and then hit 'Confirm'\")})\n )\n\n class Meta:\n model = UserReport\n fields = (\"name\", \"email\", \"comments\")\n\n\nclass ErrorPageEmbedView(View):\n def _get_project_key(self, request):\n try:\n dsn = request.GET[\"dsn\"]\n except KeyError:\n return\n\n try:\n key = ProjectKey.from_dsn(dsn)\n except ProjectKey.DoesNotExist:\n return\n\n return key\n\n def _get_origin(self, request):\n return origin_from_request(request)\n\n def _smart_response(self, request, context=None, status=200):\n json_context = json.dumps(context or {})\n accept = request.META.get(\"HTTP_ACCEPT\") or \"\"\n if \"text/javascript\" in accept:\n content_type = \"text/javascript\"\n content = \"\"\n else:\n content_type = \"application/json\"\n content = json_context\n response = HttpResponse(content, status=status, content_type=content_type)\n response[\"Access-Control-Allow-Origin\"] = 
request.META.get(\"HTTP_ORIGIN\", \"\")\n response[\"Access-Control-Allow-Methods\"] = \"GET, POST, OPTIONS\"\n response[\"Access-Control-Max-Age\"] = \"1000\"\n response[\"Access-Control-Allow-Headers\"] = \"Content-Type, Authorization, X-Requested-With\"\n response[\"Vary\"] = \"Accept\"\n if content == \"\" and context:\n response[\"X-Sentry-Context\"] = json_context\n return response\n\n @csrf_exempt\n def dispatch(self, request):\n try:\n event_id = request.GET[\"eventId\"]\n except KeyError:\n return self._smart_response(\n request, {\"eventId\": \"Missing or invalid parameter.\"}, status=400\n )\n\n normalized_event_id = normalize_event_id(event_id)\n if normalized_event_id:\n event_id = normalized_event_id\n elif event_id:\n return self._smart_response(\n request, {\"eventId\": \"Missing or invalid parameter.\"}, status=400\n )\n\n key = self._get_project_key(request)\n if not key:\n return self._smart_response(\n request, {\"dsn\": \"Missing or invalid parameter.\"}, status=404\n )\n\n origin = self._get_origin(request)\n if not is_valid_origin(origin, key.project):\n return self._smart_response(request, status=403)\n\n if request.method == \"OPTIONS\":\n return self._smart_response(request)\n\n # customization options\n options = DEFAULT_OPTIONS.copy()\n for name in options.keys():\n if name in request.GET:\n options[name] = str(request.GET[name])\n\n # TODO(dcramer): since we cant use a csrf cookie we should at the very\n # least sign the request / add some kind of nonce\n initial = {\"name\": request.GET.get(\"name\"), \"email\": request.GET.get(\"email\")}\n\n form = UserReportForm(request.POST if request.method == \"POST\" else None, initial=initial)\n if form.is_valid():\n # TODO(dcramer): move this to post to the internal API\n report = form.save(commit=False)\n report.project_id = key.project_id\n report.event_id = event_id\n\n event = eventstore.get_event_by_id(report.project_id, report.event_id)\n\n if event is not None:\n report.environment_id = event.get_environment().id\n report.group_id = event.group_id\n\n try:\n with transaction.atomic():\n report.save()\n except IntegrityError:\n # There was a duplicate, so just overwrite the existing\n # row with the new one. 
The only way this ever happens is\n # if someone is messing around with the API, or doing\n # something wrong with the SDK, but this behavior is\n # more reasonable than just hard erroring and is more\n # expected.\n UserReport.objects.filter(\n project_id=report.project_id, event_id=report.event_id\n ).update(\n name=report.name,\n email=report.email,\n comments=report.comments,\n date_added=timezone.now(),\n )\n\n else:\n if report.group_id:\n report.notify()\n\n user_feedback_received.send(\n project=Project.objects.get(id=report.project_id),\n sender=self,\n )\n\n return self._smart_response(request)\n elif request.method == \"POST\":\n return self._smart_response(request, {\"errors\": dict(form.errors)}, status=400)\n\n show_branding = (\n ProjectOption.objects.get_value(\n project=key.project, key=\"feedback:branding\", default=\"1\"\n )\n == \"1\"\n )\n\n template = render_to_string(\n \"sentry/error-page-embed.html\",\n context={\n \"form\": form,\n \"show_branding\": show_branding,\n \"title\": options[\"title\"],\n \"subtitle\": options[\"subtitle\"],\n \"subtitle2\": options[\"subtitle2\"],\n \"name_label\": options[\"labelName\"],\n \"email_label\": options[\"labelEmail\"],\n \"comments_label\": options[\"labelComments\"],\n \"submit_label\": options[\"labelSubmit\"],\n \"close_label\": options[\"labelClose\"],\n },\n )\n\n context = {\n \"endpoint\": mark_safe(\"*/\" + json.dumps(absolute_uri(request.get_full_path())) + \";/*\"),\n \"template\": mark_safe(\"*/\" + json.dumps(template) + \";/*\"),\n \"strings\": json.dumps_htmlsafe(\n {\n \"generic_error\": str(options[\"errorGeneric\"]),\n \"form_error\": str(options[\"errorFormEntry\"]),\n \"sent_message\": str(options[\"successMessage\"]),\n }\n ),\n }\n\n return render_to_response(\n \"sentry/error-page-embed.js\", context, request, content_type=\"text/javascript\"\n )\n", "path": "src/sentry/web/frontend/error_page_embed.py"}, {"content": "from django.conf import settings\nfrom django.views.generic import View\nfrom urllib.parse import urlencode\n\nfrom sentry.models import ProjectKey\nfrom sentry.web.helpers import render_to_response\n\n\nclass DebugErrorPageEmbedView(View):\n def _get_project_key(self):\n return ProjectKey.objects.filter(project=settings.SENTRY_PROJECT)[0]\n\n def get(self, request):\n context = {\n \"query_params\": urlencode(\n {\n \"dsn\": self._get_project_key().dsn_public,\n \"event_id\": \"342a3d7f690a49f8bd7c4cf0e61a9ded\",\n \"options\": dict(**request.GET),\n }\n )\n }\n\n return render_to_response(\"sentry/debug/error-page-embed.html\", context, request)\n", "path": "src/sentry/web/frontend/debug/debug_error_embed.py"}]} | 3,489 | 419 |
gh_patches_debug_41519 | rasdani/github-patches | git_diff | microsoft__Qcodes-1742 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GroupParameter initial_value cannot be set
Here's a MWE:
```py
from qcodes.instrument.group_parameter import GroupParameter, Group
from qcodes import Instrument
class MyInstrument(Instrument):
def __init__(self, name, *args, **kwargs):
super().__init__(name, *args, **kwargs)
self.add_parameter(name="foo",
initial_value=42,
parameter_class=GroupParameter
)
self.group = Group([self.foo])
instr = MyInstrument("test")
```
### Expected behaviour
The instrument should have the GroupParameter with the given initial value.
### Actual behaviour
Raises `RuntimeError("Trying to set Group value but no group defined")`.
### Proposed fix
The `GroupParameter` should defer setting the initial value until it has been added to the group. One way of doing it would be to add something like the following to `GroupParameter.__init__`, before the `super().__init__` call:
```py
if "initial_value" in kwargs:
self._initial_value = kwargs["initial_value"]
kwargs["initial_value"] = None
```
and then adding a `GroupParameter.add_to_group` method where the value is actually set, and calling that instead of just setting `parameter.group = self` in `Group.__init__`. I'm not 100% sure if this is the right way to do it.
### System
**qcodes branch**: master
**qcodes commit**: c7eef82d9ab68afb3546fb3c736f2d5b2ff02a14
</issue>
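For illustration, here is a minimal sketch of the deferral idea proposed in the issue: hold the initial value back until the parameter has joined its group, then apply it. Only the lines that change are shown and the rest of both classes is assumed unchanged; the names `_initial_value` and `add_to_group` come from the proposal above, while the ordering of assignments in `Group.__init__` is an assumption of this sketch, not the final patch.

```python
from collections import OrderedDict

from qcodes.instrument.parameter import Parameter


class GroupParameter(Parameter):
    def __init__(self, name, instrument=None, **kwargs):
        # Defer the initial value instead of letting Parameter.__init__ try to set it.
        self._initial_value = kwargs.pop("initial_value", None)
        self.group = None
        super().__init__(name, instrument=instrument, **kwargs)
        # ... the original set/get wrappers stay as they are ...

    def add_to_group(self, group):
        self.group = group
        if self._initial_value is not None:
            # The group now exists, so setting the value can be forwarded to it.
            self.set(self._initial_value)


class Group:
    def __init__(self, parameters, set_cmd=None, get_cmd=None, **kwargs):
        self.parameters = OrderedDict((p.name, p) for p in parameters)
        self.instrument = parameters[0].root_instrument
        self.set_cmd = set_cmd
        self.get_cmd = get_cmd
        # Only after the group can issue its set/get commands do the parameters
        # join it and flush any deferred initial values (note that Group.set()
        # may first call update() to learn the other parameters' values).
        for p in parameters:
            p.add_to_group(self)
```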
<code>
[start of qcodes/instrument/group_parameter.py]
1 """
2 This module implements a :class:`.Group` intended to hold multiple
3 parameters that are to be gotten and set by the same command. The parameters
4 should be of type :class:`GroupParameter`
5 """
6
7
8 from collections import OrderedDict
9 from typing import List, Union, Callable, Dict, Any, Optional
10
11 from qcodes.instrument.parameter import Parameter
12 from qcodes import Instrument
13
14
15 class GroupParameter(Parameter):
16 """
17 Group parameter is a :class:`.Parameter`, whose value can be set or get
18 only with other group parameters. This happens when an instrument
19 has commands which set and get more than one parameter per call.
20
21 The ``set_raw`` method of a group parameter forwards the call to the
22 group, and the group then makes sure that the values of other parameters
23 within the group are left unchanged. The ``get_raw`` method of a group
24 parameter also forwards the call to the group, and the group makes sure
25 that the command output is parsed correctly, and the value of the
26 parameter of interest is returned.
27
28 After initialization, the group parameters need to be added to a group.
29 See :class:`.Group` for more information.
30
31 Args:
32 name: Name of the parameter.
33 instrument: Instrument that this parameter belongs to; this
34 instrument is used by the group to call its get and set commands.
35
36 **kwargs: All kwargs used by the :class:`.Parameter` class, except
37 ``set_cmd`` and ``get_cmd``.
38 """
39
40 def __init__(self,
41 name: str,
42 instrument: Optional['Instrument'] = None,
43 **kwargs
44 ) -> None:
45
46 if "set_cmd" in kwargs or "get_cmd" in kwargs:
47 raise ValueError("A GroupParameter does not use 'set_cmd' or "
48 "'get_cmd' kwarg")
49
50 self.group: Union[Group, None] = None
51 super().__init__(name, instrument=instrument, **kwargs)
52
53 self.set = self._wrap_set(self.set_raw)
54
55 self.get_raw = lambda result=None: result if result is not None \
56 else self._get_raw_value()
57
58 self.get = self._wrap_get(self.get_raw)
59
60 def _get_raw_value(self) -> Any:
61 if self.group is None:
62 raise RuntimeError("Trying to get Group value but no "
63 "group defined")
64 self.group.update()
65 return self.raw_value
66
67 def set_raw(self, value: Any) -> None:
68 if self.group is None:
69 raise RuntimeError("Trying to set Group value but no "
70 "group defined")
71 self.group.set(self, value)
72
73
74 class Group:
75 """
76 The group combines :class:`.GroupParameter` s that are to be gotten or set
77 via the same command. The command has to be a string, for example,
78 a VISA command.
79
80 The :class:`Group`'s methods are used within :class:`GroupParameter` in
81 order to properly implement setting and getting of a single parameter in
82 the situation where one command sets or gets more than one parameter.
83
84 The command used for setting values of parameters has to be a format
85 string which contains the names of the parameters the group has been
86 initialized with. For example, if a command has syntax ``CMD a_value,
87 b_value``, where ``a_value`` and ``b_value`` are values of two parameters
88 with names ``a`` and ``b``, then the command string has to be ``CMD {a},
89 {b}``, and the group has to be initialized with two ``GroupParameter`` s
90 ``a_param`` and ``b_param``, where ``a_param.name=="a"`` and
91 ``b_param.name=="b"``.
92
93 **Note** that by default, it is assumed that the command used for getting
94 values returns a comma-separated list of values of parameters, and their
95 order corresponds to the order of :class:`.GroupParameter` s in the list
96 that is passed to the :class:`Group`'s constructor. Through keyword
97 arguments of the :class:`Group`'s constructor, it is possible to change
98 the separator, and even the parser of the output of the get command.
99
100 The get and set commands are called via the instrument that the first
101 parameter belongs to. It is assumed that all the parameters within the
102 group belong to the same instrument.
103
104 Example:
105
106 ::
107
108 class InstrumentWithGroupParameters(VisaInstrument):
109 def __init__(self, name, address, **kwargs):
110 super().__init__(name, address, **kwargs)
111
112 ...
113
114 # Here is how group of group parameters is defined for
115 # a simple case of an example "SGP" command that sets and gets
116 # values of "enabled" and "gain" parameters (it is assumed that
117 # "SGP?" returns the parameter values as comma-separated list
118 # "enabled_value,gain_value")
119 self.add_parameter('enabled',
120 label='Enabled',
121 val_mapping={True: 1, False: 0},
122 parameter_class=GroupParameter)
123 self.add_parameter('gain',
124 label='Some gain value',
125 get_parser=float,
126 parameter_class=GroupParameter)
127 self.output_group = Group([self.enabled, self.gain],
128 set_cmd='SGP {enabled}, {gain}',
129 get_cmd='SGP?')
130
131 ...
132
133 Args:
134 parameters: a list of :class:`.GroupParameter` instances which have
135 to be gotten and set via the same command; the order of
136 parameters in the list should correspond to the order of the
137 values returned by the ``get_cmd``.
138 set_cmd: Format string of the command that is used for setting the
139 valueS of the parameters; for example, ``CMD {a}, {b}``.
140 get_cmd: String of the command that is used for getting the values
141 of the parameters; for example, ``CMD?``.
142 separator: A separator that is used when parsing the output of the
143 ``get_cmd`` in order to obtain the values of the parameters; it
144 is ignored in case a custom ``get_parser`` is used.
145 get_parser: A callable with a single string argument that is used to
146 parse the output of the ``get_cmd``; the callable has to return a
147 dictionary where parameter names are keys, and the values are the
148 values (as directly obtained from the output of the get command;
149 note that parsers within the parameters will take care of
150 individual parsing of their values).
151 """
152 def __init__(self,
153 parameters: List[GroupParameter],
154 set_cmd: str = None,
155 get_cmd: str = None,
156 get_parser: Union[Callable[[str],
157 Dict[str, Any]], None] = None,
158 separator: str = ','
159 ) -> None:
160 self.parameters = OrderedDict((p.name, p) for p in parameters)
161
162 for p in parameters:
163 p.group = self
164
165 if len(set([p.root_instrument for p in parameters])) > 1:
166 raise ValueError(
167 "All parameters should belong to the same instrument")
168
169 self.instrument = parameters[0].root_instrument
170
171 self.set_cmd = set_cmd
172 self.get_cmd = get_cmd
173
174 if get_parser:
175 self.get_parser = get_parser
176 else:
177 self.get_parser = self._separator_parser(separator)
178
179 def _separator_parser(self, separator: str
180 ) -> Callable[[str], Dict[str, Any]]:
181 """A default separator-based string parser"""
182 def parser(ret_str: str) -> Dict[str, Any]:
183 keys = self.parameters.keys()
184 values = ret_str.split(separator)
185 return dict(zip(keys, values))
186
187 return parser
188
189 def set(self, set_parameter: GroupParameter, value: Any):
190 """
191 Sets the value of the given parameter within a group to the given
192 value by calling the ``set_cmd``.
193
194 Args:
195 set_parameter: The parameter within the group to set.
196 value: The new value for this parameter.
197 """
198 if any((p.get_latest() is None) for p in self.parameters.values()):
199 self.update()
200 calling_dict = {name: p.raw_value
201 for name, p in self.parameters.items()}
202 calling_dict[set_parameter.name] = value
203 if self.set_cmd is None:
204 raise RuntimeError("Calling set but no `set_cmd` defined")
205 command_str = self.set_cmd.format(**calling_dict)
206 if self.instrument is None:
207 raise RuntimeError("Trying to set GroupParameter not attached "
208 "to any instrument.")
209 self.instrument.write(command_str)
210
211 def update(self):
212 """
213 Update the values of all the parameters within the group by calling
214 the ``get_cmd``.
215 """
216 ret = self.get_parser(self.instrument.ask(self.get_cmd))
217 for name, p in list(self.parameters.items()):
218 p.get(result=ret[name])
219
[end of qcodes/instrument/group_parameter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qcodes/instrument/group_parameter.py b/qcodes/instrument/group_parameter.py
--- a/qcodes/instrument/group_parameter.py
+++ b/qcodes/instrument/group_parameter.py
@@ -30,8 +30,11 @@
Args:
name: Name of the parameter.
- instrument: Instrument that this parameter belongs to; this
- instrument is used by the group to call its get and set commands.
+ instrument: Instrument that this parameter belongs to; this instrument
+ is used by the group to call its get and set commands.
+ initial_value: Initial value of the parameter. Note that either none or
+ all of the parameters in a :class:`.Group` should have an initial
+ value.
**kwargs: All kwargs used by the :class:`.Parameter` class, except
``set_cmd`` and ``get_cmd``.
@@ -40,6 +43,7 @@
def __init__(self,
name: str,
instrument: Optional['Instrument'] = None,
+ initial_value: Union[float, int, str, None] = None,
**kwargs
) -> None:
@@ -48,6 +52,7 @@
"'get_cmd' kwarg")
self.group: Union[Group, None] = None
+ self._initial_value = initial_value
super().__init__(name, instrument=instrument, **kwargs)
self.set = self._wrap_set(self.set_raw)
@@ -176,6 +181,27 @@
else:
self.get_parser = self._separator_parser(separator)
+ have_initial_values = [p._initial_value is not None
+ for p in parameters]
+ if any(have_initial_values):
+ if not all(have_initial_values):
+ params_with_initial_values = [p.name for p in parameters
+ if p._initial_value is not None]
+ params_without_initial_values = [p.name for p in parameters
+ if p._initial_value is None]
+ error_msg = (f'Either none or all of the parameters in a '
+ f'group should have an initial value. Found '
+ f'initial values for '
+ f'{params_with_initial_values} but not for '
+ f'{params_without_initial_values}.')
+ raise ValueError(error_msg)
+
+ calling_dict = {name: p._initial_value
+ for name, p in self.parameters.items()}
+
+ self._set_from_dict(calling_dict)
+
+
def _separator_parser(self, separator: str
) -> Callable[[str], Dict[str, Any]]:
"""A default separator-based string parser"""
@@ -200,6 +226,14 @@
calling_dict = {name: p.raw_value
for name, p in self.parameters.items()}
calling_dict[set_parameter.name] = value
+
+ self._set_from_dict(calling_dict)
+
+ def _set_from_dict(self, calling_dict: Dict[str, Any]):
+ """
+ Use ``set_cmd`` to parse a dict that maps parameter names to parameter
+ values, and actually perform setting the values.
+ """
if self.set_cmd is None:
raise RuntimeError("Calling set but no `set_cmd` defined")
command_str = self.set_cmd.format(**calling_dict)
| {"golden_diff": "diff --git a/qcodes/instrument/group_parameter.py b/qcodes/instrument/group_parameter.py\n--- a/qcodes/instrument/group_parameter.py\n+++ b/qcodes/instrument/group_parameter.py\n@@ -30,8 +30,11 @@\n \n Args:\n name: Name of the parameter.\n- instrument: Instrument that this parameter belongs to; this\n- instrument is used by the group to call its get and set commands.\n+ instrument: Instrument that this parameter belongs to; this instrument\n+ is used by the group to call its get and set commands.\n+ initial_value: Initial value of the parameter. Note that either none or\n+ all of the parameters in a :class:`.Group` should have an initial\n+ value.\n \n **kwargs: All kwargs used by the :class:`.Parameter` class, except\n ``set_cmd`` and ``get_cmd``.\n@@ -40,6 +43,7 @@\n def __init__(self,\n name: str,\n instrument: Optional['Instrument'] = None,\n+ initial_value: Union[float, int, str, None] = None,\n **kwargs\n ) -> None:\n \n@@ -48,6 +52,7 @@\n \"'get_cmd' kwarg\")\n \n self.group: Union[Group, None] = None\n+ self._initial_value = initial_value\n super().__init__(name, instrument=instrument, **kwargs)\n \n self.set = self._wrap_set(self.set_raw)\n@@ -176,6 +181,27 @@\n else:\n self.get_parser = self._separator_parser(separator)\n \n+ have_initial_values = [p._initial_value is not None\n+ for p in parameters]\n+ if any(have_initial_values):\n+ if not all(have_initial_values):\n+ params_with_initial_values = [p.name for p in parameters\n+ if p._initial_value is not None]\n+ params_without_initial_values = [p.name for p in parameters\n+ if p._initial_value is None]\n+ error_msg = (f'Either none or all of the parameters in a '\n+ f'group should have an initial value. Found '\n+ f'initial values for '\n+ f'{params_with_initial_values} but not for '\n+ f'{params_without_initial_values}.')\n+ raise ValueError(error_msg)\n+\n+ calling_dict = {name: p._initial_value\n+ for name, p in self.parameters.items()}\n+\n+ self._set_from_dict(calling_dict)\n+\n+\n def _separator_parser(self, separator: str\n ) -> Callable[[str], Dict[str, Any]]:\n \"\"\"A default separator-based string parser\"\"\"\n@@ -200,6 +226,14 @@\n calling_dict = {name: p.raw_value\n for name, p in self.parameters.items()}\n calling_dict[set_parameter.name] = value\n+\n+ self._set_from_dict(calling_dict)\n+\n+ def _set_from_dict(self, calling_dict: Dict[str, Any]):\n+ \"\"\"\n+ Use ``set_cmd`` to parse a dict that maps parameter names to parameter\n+ values, and actually perform setting the values.\n+ \"\"\"\n if self.set_cmd is None:\n raise RuntimeError(\"Calling set but no `set_cmd` defined\")\n command_str = self.set_cmd.format(**calling_dict)\n", "issue": "GroupParameter initial_value cannot be set\nHere's a MWE:\r\n```py\r\nfrom qcodes.instrument.group_parameter import GroupParameter, Group\r\nfrom qcodes import Instrument\r\n\r\nclass MyInstrument(Instrument):\r\n def __init__(self, name, *args, **kwargs):\r\n super().__init__(name, *args, **kwargs)\r\n\r\n self.add_parameter(name=\"foo\",\r\n initial_value=42,\r\n parameter_class=GroupParameter\r\n )\r\n\r\n self.group = Group([self.foo])\r\n\r\ninstr = MyInstrument(\"test\")\r\n```\r\n\r\n### Expected behaviour\r\nThe instrument should have the GroupParameter with the given initial value.\r\n\r\n### Actual behaviour\r\nRaises `RuntimeError(\"Trying to set Group value but no group defined\")`.\r\n\r\n### Proposed fix\r\nThe `GroupParameter` should defer setting the initial value until it has been added to the group. 
One way of doing it would be to add something like the following to `GroupParameter.__init__`, before the `super().__init__` call:\r\n```py\r\nif \"initial_value\" in kwargs:\r\n self._initial_value = kwargs[\"initial_value\"]\r\n kwargs[\"initial_value\"] = None\r\n```\r\nand then adding a `GroupParameter.add_to_group` method where the value is actually set, and calling that instead of just setting `parameter.group = self` in `Group.__init__`. I'm not 100% sure if this is the right way to do it.\r\n\r\n### System\r\n**qcodes branch**: master\r\n\r\n**qcodes commit**: c7eef82d9ab68afb3546fb3c736f2d5b2ff02a14\r\n\n", "before_files": [{"content": "\"\"\"\nThis module implements a :class:`.Group` intended to hold multiple\nparameters that are to be gotten and set by the same command. The parameters\nshould be of type :class:`GroupParameter`\n\"\"\"\n\n\nfrom collections import OrderedDict\nfrom typing import List, Union, Callable, Dict, Any, Optional\n\nfrom qcodes.instrument.parameter import Parameter\nfrom qcodes import Instrument\n\n\nclass GroupParameter(Parameter):\n \"\"\"\n Group parameter is a :class:`.Parameter`, whose value can be set or get\n only with other group parameters. This happens when an instrument\n has commands which set and get more than one parameter per call.\n\n The ``set_raw`` method of a group parameter forwards the call to the\n group, and the group then makes sure that the values of other parameters\n within the group are left unchanged. The ``get_raw`` method of a group\n parameter also forwards the call to the group, and the group makes sure\n that the command output is parsed correctly, and the value of the\n parameter of interest is returned.\n\n After initialization, the group parameters need to be added to a group.\n See :class:`.Group` for more information.\n\n Args:\n name: Name of the parameter.\n instrument: Instrument that this parameter belongs to; this\n instrument is used by the group to call its get and set commands.\n\n **kwargs: All kwargs used by the :class:`.Parameter` class, except\n ``set_cmd`` and ``get_cmd``.\n \"\"\"\n\n def __init__(self,\n name: str,\n instrument: Optional['Instrument'] = None,\n **kwargs\n ) -> None:\n\n if \"set_cmd\" in kwargs or \"get_cmd\" in kwargs:\n raise ValueError(\"A GroupParameter does not use 'set_cmd' or \"\n \"'get_cmd' kwarg\")\n\n self.group: Union[Group, None] = None\n super().__init__(name, instrument=instrument, **kwargs)\n\n self.set = self._wrap_set(self.set_raw)\n\n self.get_raw = lambda result=None: result if result is not None \\\n else self._get_raw_value()\n\n self.get = self._wrap_get(self.get_raw)\n\n def _get_raw_value(self) -> Any:\n if self.group is None:\n raise RuntimeError(\"Trying to get Group value but no \"\n \"group defined\")\n self.group.update()\n return self.raw_value\n\n def set_raw(self, value: Any) -> None:\n if self.group is None:\n raise RuntimeError(\"Trying to set Group value but no \"\n \"group defined\")\n self.group.set(self, value)\n\n\nclass Group:\n \"\"\"\n The group combines :class:`.GroupParameter` s that are to be gotten or set\n via the same command. 
The command has to be a string, for example,\n a VISA command.\n\n The :class:`Group`'s methods are used within :class:`GroupParameter` in\n order to properly implement setting and getting of a single parameter in\n the situation where one command sets or gets more than one parameter.\n\n The command used for setting values of parameters has to be a format\n string which contains the names of the parameters the group has been\n initialized with. For example, if a command has syntax ``CMD a_value,\n b_value``, where ``a_value`` and ``b_value`` are values of two parameters\n with names ``a`` and ``b``, then the command string has to be ``CMD {a},\n {b}``, and the group has to be initialized with two ``GroupParameter`` s\n ``a_param`` and ``b_param``, where ``a_param.name==\"a\"`` and\n ``b_param.name==\"b\"``.\n\n **Note** that by default, it is assumed that the command used for getting\n values returns a comma-separated list of values of parameters, and their\n order corresponds to the order of :class:`.GroupParameter` s in the list\n that is passed to the :class:`Group`'s constructor. Through keyword\n arguments of the :class:`Group`'s constructor, it is possible to change\n the separator, and even the parser of the output of the get command.\n\n The get and set commands are called via the instrument that the first\n parameter belongs to. It is assumed that all the parameters within the\n group belong to the same instrument.\n\n Example:\n\n ::\n\n class InstrumentWithGroupParameters(VisaInstrument):\n def __init__(self, name, address, **kwargs):\n super().__init__(name, address, **kwargs)\n\n ...\n\n # Here is how group of group parameters is defined for\n # a simple case of an example \"SGP\" command that sets and gets\n # values of \"enabled\" and \"gain\" parameters (it is assumed that\n # \"SGP?\" returns the parameter values as comma-separated list\n # \"enabled_value,gain_value\")\n self.add_parameter('enabled',\n label='Enabled',\n val_mapping={True: 1, False: 0},\n parameter_class=GroupParameter)\n self.add_parameter('gain',\n label='Some gain value',\n get_parser=float,\n parameter_class=GroupParameter)\n self.output_group = Group([self.enabled, self.gain],\n set_cmd='SGP {enabled}, {gain}',\n get_cmd='SGP?')\n\n ...\n\n Args:\n parameters: a list of :class:`.GroupParameter` instances which have\n to be gotten and set via the same command; the order of\n parameters in the list should correspond to the order of the\n values returned by the ``get_cmd``.\n set_cmd: Format string of the command that is used for setting the\n valueS of the parameters; for example, ``CMD {a}, {b}``.\n get_cmd: String of the command that is used for getting the values\n of the parameters; for example, ``CMD?``.\n separator: A separator that is used when parsing the output of the\n ``get_cmd`` in order to obtain the values of the parameters; it\n is ignored in case a custom ``get_parser`` is used.\n get_parser: A callable with a single string argument that is used to\n parse the output of the ``get_cmd``; the callable has to return a\n dictionary where parameter names are keys, and the values are the\n values (as directly obtained from the output of the get command;\n note that parsers within the parameters will take care of\n individual parsing of their values).\n \"\"\"\n def __init__(self,\n parameters: List[GroupParameter],\n set_cmd: str = None,\n get_cmd: str = None,\n get_parser: Union[Callable[[str],\n Dict[str, Any]], None] = None,\n separator: str = ','\n ) -> None:\n self.parameters = 
OrderedDict((p.name, p) for p in parameters)\n\n for p in parameters:\n p.group = self\n\n if len(set([p.root_instrument for p in parameters])) > 1:\n raise ValueError(\n \"All parameters should belong to the same instrument\")\n\n self.instrument = parameters[0].root_instrument\n\n self.set_cmd = set_cmd\n self.get_cmd = get_cmd\n\n if get_parser:\n self.get_parser = get_parser\n else:\n self.get_parser = self._separator_parser(separator)\n\n def _separator_parser(self, separator: str\n ) -> Callable[[str], Dict[str, Any]]:\n \"\"\"A default separator-based string parser\"\"\"\n def parser(ret_str: str) -> Dict[str, Any]:\n keys = self.parameters.keys()\n values = ret_str.split(separator)\n return dict(zip(keys, values))\n\n return parser\n\n def set(self, set_parameter: GroupParameter, value: Any):\n \"\"\"\n Sets the value of the given parameter within a group to the given\n value by calling the ``set_cmd``.\n\n Args:\n set_parameter: The parameter within the group to set.\n value: The new value for this parameter.\n \"\"\"\n if any((p.get_latest() is None) for p in self.parameters.values()):\n self.update()\n calling_dict = {name: p.raw_value\n for name, p in self.parameters.items()}\n calling_dict[set_parameter.name] = value\n if self.set_cmd is None:\n raise RuntimeError(\"Calling set but no `set_cmd` defined\")\n command_str = self.set_cmd.format(**calling_dict)\n if self.instrument is None:\n raise RuntimeError(\"Trying to set GroupParameter not attached \"\n \"to any instrument.\")\n self.instrument.write(command_str)\n\n def update(self):\n \"\"\"\n Update the values of all the parameters within the group by calling\n the ``get_cmd``.\n \"\"\"\n ret = self.get_parser(self.instrument.ask(self.get_cmd))\n for name, p in list(self.parameters.items()):\n p.get(result=ret[name])\n", "path": "qcodes/instrument/group_parameter.py"}]} | 3,375 | 734 |
gh_patches_debug_18740 | rasdani/github-patches | git_diff | learningequality__kolibri-11846 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Impossible to create a learner account on a second (imported) facility?
## Observed behavior
I managed to replicate this on 2 separate Android devices. My Windows 7 server device has 2 facilities, one created during the server setup (**Win7new**), and the second one imported from another VM (**imported-from-Win11**). This second one has several classes, and I have been using it exclusively during this past week.
1. At first I tried to create a new learner account from the Huawei Android 8 tablet on the second facility, and got an error (_Whoops!_). I retried and the field validation was saying ``Username already exists``, which did not make sense, since I searched users on that second facility on the server VM and the username I was trying to create was not there. I moved on, imported another user and went on testing something else.
2. Some time later, while setting up another LoD (on the Motorola phone), I tried again to create (not import) a new user on the second facility, and got the same ``Username already exists`` message. After the initial confusion I had a hunch and went to check the first facility (the one I never really used for class interactions during this testing), and lo and behold, the user I was trying to create did exist there, probably from the setup on the Huawei tablet above. And this is not my fat finger mis-touching the wrong radio button during the facility selection; it actually seems that the option to create a new user is somehow _bound_ to the first facility on the device and does not _obey_ the user's selection of the second facility.
LoD | Win7 facility users | imported-from-Win11 facility users
-- | -- | --
 |  | 
## Errors and logs
[Windows 7 home folder](https://drive.google.com/file/d/1QaHEZON_yL3hnLRsW2PEURY76MVEL9JT/view?usp=drive_link) (without content)
## Expected behavior
Users need to be able to select the facility in which they want to create their new learner account.
## User-facing consequences
Impossible to create a new learner account on a desired facility.
## Steps to reproduce
You will need a server with 2 facilities; try setting up an LoD by creating a new account on the second one.
## Context
* Kolibri version: 0.16b13
* Operating system: Windows 7
cc @pcenov to try to replicate
</issue>
<code>
[start of kolibri/plugins/setup_wizard/api.py]
1 import requests
2 from django.urls import reverse
3 from rest_framework import decorators
4 from rest_framework.exceptions import AuthenticationFailed
5 from rest_framework.exceptions import NotFound
6 from rest_framework.exceptions import PermissionDenied
7 from rest_framework.exceptions import ValidationError
8 from rest_framework.permissions import BasePermission
9 from rest_framework.response import Response
10 from rest_framework.viewsets import ViewSet
11
12 from kolibri.core.auth.constants import user_kinds
13 from kolibri.core.auth.models import Facility
14 from kolibri.core.auth.models import FacilityUser
15 from kolibri.core.auth.utils.users import get_remote_users_info
16 from kolibri.core.device.models import DevicePermissions
17
18
19 # Basic class that makes these endpoints unusable if device is provisioned
20 class HasPermissionDuringSetup(BasePermission):
21 def has_permission(self, request, view):
22 from kolibri.core.device.utils import device_provisioned
23
24 return not device_provisioned()
25
26
27 class HasPermissionDuringLODSetup(BasePermission):
28 def has_permission(self, request, view):
29 from kolibri.core.device.utils import get_device_setting
30
31 return get_device_setting("subset_of_users_device")
32
33
34 class SetupWizardResource(ViewSet):
35 """
36 Generic endpoints for use during various setup wizard onboarding flows
37 """
38
39 permission_classes = (HasPermissionDuringSetup,)
40
41 @decorators.action(methods=["post"], detail=False)
42 def createuseronremote(self, request):
43 facility_id = request.data.get("facility_id", None)
44 username = request.data.get("username", None)
45 password = request.data.get("password", None)
46 full_name = request.data.get("full_name", "")
47 baseurl = request.data.get("baseurl", None)
48
49 api_url = reverse("kolibri:core:publicsignup-list")
50
51 url = "{}{}".format(baseurl, api_url)
52
53 payload = {
54 "facility_id": facility_id,
55 "username": username,
56 "password": password,
57 "full_name": full_name,
58 }
59
60 r = requests.post(url, data=payload)
61 return Response({"status": r.status_code, "data": r.content})
62
63
64 class FacilityImportViewSet(ViewSet):
65 """
66 A group of endpoints that are used by the SetupWizard to import a facility
67 and create a superuser
68 """
69
70 permission_classes = (HasPermissionDuringSetup,)
71
72 @decorators.action(methods=["get"], detail=False)
73 def facilityadmins(self, request):
74 # The filter is very loose, since we are assuming that the only
75 # users are from the new facility
76 queryset = FacilityUser.objects.filter(roles__kind__contains="admin")
77 response_data = [
78 {"full_name": user.full_name, "username": user.username, "id": user.id}
79 for user in queryset
80 ]
81 return Response(response_data)
82
83 @decorators.action(methods=["post"], detail=False)
84 def grantsuperuserpermissions(self, request):
85 """
86 Given a user ID and credentials, create a superuser DevicePermissions record
87 """
88 user_id = request.data.get("user_id", "")
89 password = request.data.get("password", "")
90
91 # Get the Facility User object
92 try:
93 facilityuser = FacilityUser.objects.get(id=user_id)
94 except (Exception, FacilityUser.DoesNotExist):
95 raise NotFound()
96
97 # Test for password and admin role
98 if (
99 not facilityuser.check_password(password)
100 or user_kinds.ADMIN not in facilityuser.session_data["kind"]
101 ):
102 raise PermissionDenied()
103
104 # If it succeeds, create a DevicePermissions model for the user
105 DevicePermissions.objects.update_or_create(
106 user=facilityuser,
107 defaults={"is_superuser": True, "can_manage_content": True},
108 )
109
110 # Finally: return a simple 200 so UI can continue on
111 return Response({"user_id": user_id})
112
113 @decorators.action(methods=["post"], detail=False)
114 def createsuperuser(self, request):
115 """
116 Given a username, full name and password, create a superuser attached
117 to the facility that was imported (or create a facility with given facility_name)
118 """
119 facility_name = request.data.get("facility_name", None)
120
121 # Get the imported facility (assuming its the only one at this point)
122 if Facility.objects.count() == 0:
123 the_facility = Facility.objects.create(name=facility_name)
124 else:
125 the_facility = Facility.objects.get()
126 if facility_name:
127 the_facility.name = facility_name
128 the_facility.save()
129
130 try:
131 superuser = FacilityUser.objects.create_superuser(
132 request.data.get("username"),
133 request.data.get("password"),
134 facility=the_facility,
135 full_name=request.data.get("full_name"),
136 )
137 return Response({"username": superuser.username})
138
139 except ValidationError:
140 raise ValidationError(detail="duplicate", code="duplicate_username")
141
142 @decorators.action(methods=["post"], detail=False)
143 def listfacilitylearners(self, request):
144 """
145 If the request is done by an admin user it will return a list of the users of the
146 facility
147
148 :param baseurl: First part of the url of the server that's going to be requested
149 :param facility_id: Id of the facility to authenticate and get the list of users
150 :param username: Username of the user that's going to authenticate
151 :param password: Password of the user that's going to authenticate
152 :return: List of the learners of the facility.
153 """
154 facility_id = request.data.get("facility_id")
155 baseurl = request.data.get("baseurl")
156 password = request.data.get("password")
157 username = request.data.get("username")
158 try:
159 facility_info = get_remote_users_info(
160 baseurl, facility_id, username, password
161 )
162 except AuthenticationFailed:
163 raise PermissionDenied()
164 user_info = facility_info["user"]
165 roles = user_info["roles"]
166 admin_roles = (user_kinds.ADMIN, user_kinds.SUPERUSER)
167 if not any(role in roles for role in admin_roles):
168 raise PermissionDenied()
169 students = [u for u in facility_info["users"] if not u["roles"]]
170 return Response({"students": students, "admin": facility_info["user"]})
171
[end of kolibri/plugins/setup_wizard/api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kolibri/plugins/setup_wizard/api.py b/kolibri/plugins/setup_wizard/api.py
--- a/kolibri/plugins/setup_wizard/api.py
+++ b/kolibri/plugins/setup_wizard/api.py
@@ -9,6 +9,7 @@
from rest_framework.response import Response
from rest_framework.viewsets import ViewSet
+from kolibri.core.auth.backends import FACILITY_CREDENTIAL_KEY
from kolibri.core.auth.constants import user_kinds
from kolibri.core.auth.models import Facility
from kolibri.core.auth.models import FacilityUser
@@ -51,7 +52,9 @@
url = "{}{}".format(baseurl, api_url)
payload = {
- "facility_id": facility_id,
+ # N.B. facility is keyed by facility not facility_id on the signup
+ # viewset serializer.
+ FACILITY_CREDENTIAL_KEY: facility_id,
"username": username,
"password": password,
"full_name": full_name,
| {"golden_diff": "diff --git a/kolibri/plugins/setup_wizard/api.py b/kolibri/plugins/setup_wizard/api.py\n--- a/kolibri/plugins/setup_wizard/api.py\n+++ b/kolibri/plugins/setup_wizard/api.py\n@@ -9,6 +9,7 @@\n from rest_framework.response import Response\n from rest_framework.viewsets import ViewSet\n \n+from kolibri.core.auth.backends import FACILITY_CREDENTIAL_KEY\n from kolibri.core.auth.constants import user_kinds\n from kolibri.core.auth.models import Facility\n from kolibri.core.auth.models import FacilityUser\n@@ -51,7 +52,9 @@\n url = \"{}{}\".format(baseurl, api_url)\n \n payload = {\n- \"facility_id\": facility_id,\n+ # N.B. facility is keyed by facility not facility_id on the signup\n+ # viewset serializer.\n+ FACILITY_CREDENTIAL_KEY: facility_id,\n \"username\": username,\n \"password\": password,\n \"full_name\": full_name,\n", "issue": "Impossible to create a learner account on a second (imported) facility?\n## Observed behavior\r\nI managed to replicate this on 2 separate Android devices. My Windows 7 server device has 2 facilities, one created during the server setup (**Win7new**), and the second one imported from another VM (**imported-from-Win11**). This second one has several classes, and I have been using it exclusively during this past week. \r\n\r\n1. At first I tried to create a new learner account from the Huawei Android 8 tablet on the second facility, and got an error (_Whoops!_). I retried and the field validation was saying ``Username already exists``, which did not make sense, since I searched users on that second facility on the server VM and the username I was trying to create was not there. I moved on, imported another user and went on testing something else.\r\n2. Some times after I was setting up another LoD (on the Motorola phone), tried again to create (not import) a new user on the second facility, and got the same message that the ``Username already exists``. After the initial confusion I had a hunch and went to check the first facility (the one I never really used for class interactions during this testing), and lo and behold, the user I was trying to create did exist there, probably from the above setup on Huawei tablet. And this is not my fat finger miss-touching the wrong radio button during the facility selection, it actually seems that the option to create a new user is somehow _bound_ to the first facility on the device, and would not _obey_ the user's selection of the second facility.\r\n\r\nLoD | Win7 facility users | imported-from-Win11 facility users\r\n-- | -- | --\r\n |  | \r\n\r\n## Errors and logs\r\n[Windows 7 home folder](https://drive.google.com/file/d/1QaHEZON_yL3hnLRsW2PEURY76MVEL9JT/view?usp=drive_link) (without content)\r\n\r\n## Expected behavior\r\nUser needs to be able to select the facility they need to create their new learner account.\r\n\r\n## User-facing consequences\r\nImpossible to create a new learner account on a desired facility.\r\n\r\n## Steps to reproduce\r\nYou will need a server with 2 facilities, try setting up an LoD by creating a new account on the second one. 
\r\n\r\n## Context\r\n\r\n * Kolibri version: 0.16b13\r\n * Operating system: Windows 7\r\n\r\ncc @pcenov to try to replicate\n", "before_files": [{"content": "import requests\nfrom django.urls import reverse\nfrom rest_framework import decorators\nfrom rest_framework.exceptions import AuthenticationFailed\nfrom rest_framework.exceptions import NotFound\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.permissions import BasePermission\nfrom rest_framework.response import Response\nfrom rest_framework.viewsets import ViewSet\n\nfrom kolibri.core.auth.constants import user_kinds\nfrom kolibri.core.auth.models import Facility\nfrom kolibri.core.auth.models import FacilityUser\nfrom kolibri.core.auth.utils.users import get_remote_users_info\nfrom kolibri.core.device.models import DevicePermissions\n\n\n# Basic class that makes these endpoints unusable if device is provisioned\nclass HasPermissionDuringSetup(BasePermission):\n def has_permission(self, request, view):\n from kolibri.core.device.utils import device_provisioned\n\n return not device_provisioned()\n\n\nclass HasPermissionDuringLODSetup(BasePermission):\n def has_permission(self, request, view):\n from kolibri.core.device.utils import get_device_setting\n\n return get_device_setting(\"subset_of_users_device\")\n\n\nclass SetupWizardResource(ViewSet):\n \"\"\"\n Generic endpoints for use during various setup wizard onboarding flows\n \"\"\"\n\n permission_classes = (HasPermissionDuringSetup,)\n\n @decorators.action(methods=[\"post\"], detail=False)\n def createuseronremote(self, request):\n facility_id = request.data.get(\"facility_id\", None)\n username = request.data.get(\"username\", None)\n password = request.data.get(\"password\", None)\n full_name = request.data.get(\"full_name\", \"\")\n baseurl = request.data.get(\"baseurl\", None)\n\n api_url = reverse(\"kolibri:core:publicsignup-list\")\n\n url = \"{}{}\".format(baseurl, api_url)\n\n payload = {\n \"facility_id\": facility_id,\n \"username\": username,\n \"password\": password,\n \"full_name\": full_name,\n }\n\n r = requests.post(url, data=payload)\n return Response({\"status\": r.status_code, \"data\": r.content})\n\n\nclass FacilityImportViewSet(ViewSet):\n \"\"\"\n A group of endpoints that are used by the SetupWizard to import a facility\n and create a superuser\n \"\"\"\n\n permission_classes = (HasPermissionDuringSetup,)\n\n @decorators.action(methods=[\"get\"], detail=False)\n def facilityadmins(self, request):\n # The filter is very loose, since we are assuming that the only\n # users are from the new facility\n queryset = FacilityUser.objects.filter(roles__kind__contains=\"admin\")\n response_data = [\n {\"full_name\": user.full_name, \"username\": user.username, \"id\": user.id}\n for user in queryset\n ]\n return Response(response_data)\n\n @decorators.action(methods=[\"post\"], detail=False)\n def grantsuperuserpermissions(self, request):\n \"\"\"\n Given a user ID and credentials, create a superuser DevicePermissions record\n \"\"\"\n user_id = request.data.get(\"user_id\", \"\")\n password = request.data.get(\"password\", \"\")\n\n # Get the Facility User object\n try:\n facilityuser = FacilityUser.objects.get(id=user_id)\n except (Exception, FacilityUser.DoesNotExist):\n raise NotFound()\n\n # Test for password and admin role\n if (\n not facilityuser.check_password(password)\n or user_kinds.ADMIN not in facilityuser.session_data[\"kind\"]\n ):\n raise PermissionDenied()\n\n # 
If it succeeds, create a DevicePermissions model for the user\n DevicePermissions.objects.update_or_create(\n user=facilityuser,\n defaults={\"is_superuser\": True, \"can_manage_content\": True},\n )\n\n # Finally: return a simple 200 so UI can continue on\n return Response({\"user_id\": user_id})\n\n @decorators.action(methods=[\"post\"], detail=False)\n def createsuperuser(self, request):\n \"\"\"\n Given a username, full name and password, create a superuser attached\n to the facility that was imported (or create a facility with given facility_name)\n \"\"\"\n facility_name = request.data.get(\"facility_name\", None)\n\n # Get the imported facility (assuming its the only one at this point)\n if Facility.objects.count() == 0:\n the_facility = Facility.objects.create(name=facility_name)\n else:\n the_facility = Facility.objects.get()\n if facility_name:\n the_facility.name = facility_name\n the_facility.save()\n\n try:\n superuser = FacilityUser.objects.create_superuser(\n request.data.get(\"username\"),\n request.data.get(\"password\"),\n facility=the_facility,\n full_name=request.data.get(\"full_name\"),\n )\n return Response({\"username\": superuser.username})\n\n except ValidationError:\n raise ValidationError(detail=\"duplicate\", code=\"duplicate_username\")\n\n @decorators.action(methods=[\"post\"], detail=False)\n def listfacilitylearners(self, request):\n \"\"\"\n If the request is done by an admin user it will return a list of the users of the\n facility\n\n :param baseurl: First part of the url of the server that's going to be requested\n :param facility_id: Id of the facility to authenticate and get the list of users\n :param username: Username of the user that's going to authenticate\n :param password: Password of the user that's going to authenticate\n :return: List of the learners of the facility.\n \"\"\"\n facility_id = request.data.get(\"facility_id\")\n baseurl = request.data.get(\"baseurl\")\n password = request.data.get(\"password\")\n username = request.data.get(\"username\")\n try:\n facility_info = get_remote_users_info(\n baseurl, facility_id, username, password\n )\n except AuthenticationFailed:\n raise PermissionDenied()\n user_info = facility_info[\"user\"]\n roles = user_info[\"roles\"]\n admin_roles = (user_kinds.ADMIN, user_kinds.SUPERUSER)\n if not any(role in roles for role in admin_roles):\n raise PermissionDenied()\n students = [u for u in facility_info[\"users\"] if not u[\"roles\"]]\n return Response({\"students\": students, \"admin\": facility_info[\"user\"]})\n", "path": "kolibri/plugins/setup_wizard/api.py"}]} | 3,021 | 212 |
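The golden diff above amounts to keying the remote-signup payload by the field name the `publicsignup` serializer expects (the `FACILITY_CREDENTIAL_KEY` it imports) instead of `facility_id`, so the selected facility is no longer ignored. A minimal sketch of that payload change, with the constant stubbed to `"facility"` as the diff's own comment indicates:

```python
# Sketch of the payload fix from the golden diff above.
# FACILITY_CREDENTIAL_KEY is stubbed here; in Kolibri it is imported from
# kolibri.core.auth.backends, as the diff shows.
FACILITY_CREDENTIAL_KEY = "facility"  # illustrative stand-in

def build_signup_payload(facility_id, username, password, full_name=""):
    return {
        # Keyed by the serializer's field name rather than "facility_id";
        # with the old key the new user landed in the wrong (first) facility,
        # which is exactly what the issue reports.
        FACILITY_CREDENTIAL_KEY: facility_id,
        "username": username,
        "password": password,
        "full_name": full_name,
    }

payload = build_signup_payload("imported-facility-id", "new_learner", "s3cret")
assert FACILITY_CREDENTIAL_KEY in payload and "facility_id" not in payload
```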
gh_patches_debug_19384 | rasdani/github-patches | git_diff | qutebrowser__qutebrowser-3328 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
:edit-command accepts invalid commands
When using `:edit-command` and changing the command to `foo` (without an initial `:`):
```
10:33:50 DEBUG procs guiprocess:on_finished:98 Process finished with code 0, status 0.
10:33:50 DEBUG procs editor:on_proc_closed:73 Editor closed
10:33:50 DEBUG procs editor:on_proc_closed:90 Read back: foo
10:33:50 ERROR misc crashsignal:exception_hook:216 Uncaught exception
Traceback (most recent call last):
File "/home/florian/proj/qutebrowser/git/qutebrowser/mainwindow/statusbar/command.py", line 179, in callback
self.set_cmd_text(text)
File "/home/florian/proj/qutebrowser/git/qutebrowser/mainwindow/statusbar/command.py", line 86, in set_cmd_text
self.setText(text)
File "/home/florian/proj/qutebrowser/git/qutebrowser/mainwindow/statusbar/command.py", line 212, in setText
"'{}'!".format(text))
AssertionError: setText got called with invalid text 'foo
'!
```
When changing it to an empty file and pressing enter:
```
10:34:38 DEBUG commands command:run:484 command called: command-accept
[...]
10:34:38 ERROR misc crashsignal:exception_hook:216 Uncaught exception
Traceback (most recent call last):
File "/home/florian/proj/qutebrowser/git/qutebrowser/app.py", line 935, in eventFilter
return handler(event)
File "/home/florian/proj/qutebrowser/git/qutebrowser/app.py", line 895, in _handle_key_event
return man.eventFilter(event)
File "/home/florian/proj/qutebrowser/git/qutebrowser/keyinput/modeman.py", line 326, in eventFilter
return self._eventFilter_keypress(event)
File "/home/florian/proj/qutebrowser/git/qutebrowser/keyinput/modeman.py", line 162, in _eventFilter_keypress
handled = parser.handle(event)
File "/home/florian/proj/qutebrowser/git/qutebrowser/keyinput/basekeyparser.py", line 266, in handle
handled = self._handle_special_key(e)
File "/home/florian/proj/qutebrowser/git/qutebrowser/keyinput/basekeyparser.py", line 139, in _handle_special_key
self.execute(cmdstr, self.Type.special, count)
File "/home/florian/proj/qutebrowser/git/qutebrowser/keyinput/keyparser.py", line 44, in execute
self._commandrunner.run(cmdstr, count)
File "/home/florian/proj/qutebrowser/git/qutebrowser/commands/runners.py", line 301, in run
result.cmd.run(self._win_id, args, count=count)
File "/home/florian/proj/qutebrowser/git/qutebrowser/commands/command.py", line 500, in run
self.handler(*posargs, **kwargs)
File "/home/florian/proj/qutebrowser/git/qutebrowser/mainwindow/statusbar/command.py", line 167, in command_accept
self.got_cmd[str].emit(prefixes[text[0]] + text[1:])
IndexError: string index out of range
```
Report: https://crashes.qutebrowser.org/view/97044b65
cc @rcorre
</issue>
<code>
[start of qutebrowser/mainwindow/statusbar/command.py]
1 # vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
2
3 # Copyright 2014-2017 Florian Bruhin (The Compiler) <[email protected]>
4 #
5 # This file is part of qutebrowser.
6 #
7 # qutebrowser is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # qutebrowser is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
19
20 """The commandline in the statusbar."""
21
22 from PyQt5.QtCore import pyqtSignal, pyqtSlot, Qt, QSize
23 from PyQt5.QtWidgets import QSizePolicy
24
25 from qutebrowser.keyinput import modeman, modeparsers
26 from qutebrowser.commands import cmdexc, cmdutils
27 from qutebrowser.misc import cmdhistory, editor
28 from qutebrowser.misc import miscwidgets as misc
29 from qutebrowser.utils import usertypes, log, objreg
30
31
32 class Command(misc.MinimalLineEditMixin, misc.CommandLineEdit):
33
34 """The commandline part of the statusbar.
35
36 Attributes:
37 _win_id: The window ID this widget is associated with.
38
39 Signals:
40 got_cmd: Emitted when a command is triggered by the user.
41 arg: The command string and also potentially the count.
42 clear_completion_selection: Emitted before the completion widget is
43 hidden.
44 hide_completion: Emitted when the completion widget should be hidden.
45 update_completion: Emitted when the completion should be shown/updated.
46 show_cmd: Emitted when command input should be shown.
47 hide_cmd: Emitted when command input can be hidden.
48 """
49
50 got_cmd = pyqtSignal([str], [str, int])
51 clear_completion_selection = pyqtSignal()
52 hide_completion = pyqtSignal()
53 update_completion = pyqtSignal()
54 show_cmd = pyqtSignal()
55 hide_cmd = pyqtSignal()
56
57 def __init__(self, *, win_id, private, parent=None):
58 misc.CommandLineEdit.__init__(self, parent=parent)
59 misc.MinimalLineEditMixin.__init__(self)
60 self._win_id = win_id
61 if not private:
62 command_history = objreg.get('command-history')
63 self.history.history = command_history.data
64 self.history.changed.connect(command_history.changed)
65 self.setSizePolicy(QSizePolicy.MinimumExpanding, QSizePolicy.Ignored)
66 self.cursorPositionChanged.connect(self.update_completion)
67 self.textChanged.connect(self.update_completion)
68 self.textChanged.connect(self.updateGeometry)
69
70 def prefix(self):
71 """Get the currently entered command prefix."""
72 text = self.text()
73 if not text:
74 return ''
75 elif text[0] in modeparsers.STARTCHARS:
76 return text[0]
77 else:
78 return ''
79
80 def set_cmd_text(self, text):
81 """Preset the statusbar to some text.
82
83 Args:
84 text: The text to set as string.
85 """
86 self.setText(text)
87 log.modes.debug("Setting command text, focusing {!r}".format(self))
88 modeman.enter(self._win_id, usertypes.KeyMode.command, 'cmd focus')
89 self.setFocus()
90 self.show_cmd.emit()
91
92 @cmdutils.register(instance='status-command', name='set-cmd-text',
93 scope='window', maxsplit=0)
94 @cmdutils.argument('count', count=True)
95 def set_cmd_text_command(self, text, count=None, space=False, append=False,
96 run_on_count=False):
97 """Preset the statusbar to some text.
98
99 //
100
101 Wrapper for set_cmd_text to check the arguments and allow multiple
102 strings which will get joined.
103
104 Args:
105 text: The commandline to set.
106 count: The count if given.
107 space: If given, a space is added to the end.
108 append: If given, the text is appended to the current text.
109 run_on_count: If given with a count, the command is run with the
110 given count rather than setting the command text.
111 """
112 if space:
113 text += ' '
114 if append:
115 if not self.text():
116 raise cmdexc.CommandError("No current text!")
117 text = self.text() + text
118
119 if not text or text[0] not in modeparsers.STARTCHARS:
120 raise cmdexc.CommandError(
121 "Invalid command text '{}'.".format(text))
122 if run_on_count and count is not None:
123 self.got_cmd[str, int].emit(text, count)
124 else:
125 self.set_cmd_text(text)
126
127 @cmdutils.register(instance='status-command',
128 modes=[usertypes.KeyMode.command], scope='window')
129 def command_history_prev(self):
130 """Go back in the commandline history."""
131 try:
132 if not self.history.is_browsing():
133 item = self.history.start(self.text().strip())
134 else:
135 item = self.history.previtem()
136 except (cmdhistory.HistoryEmptyError,
137 cmdhistory.HistoryEndReachedError):
138 return
139 if item:
140 self.set_cmd_text(item)
141
142 @cmdutils.register(instance='status-command',
143 modes=[usertypes.KeyMode.command], scope='window')
144 def command_history_next(self):
145 """Go forward in the commandline history."""
146 if not self.history.is_browsing():
147 return
148 try:
149 item = self.history.nextitem()
150 except cmdhistory.HistoryEndReachedError:
151 return
152 if item:
153 self.set_cmd_text(item)
154
155 @cmdutils.register(instance='status-command',
156 modes=[usertypes.KeyMode.command], scope='window')
157 def command_accept(self):
158 """Execute the command currently in the commandline."""
159 prefixes = {
160 ':': '',
161 '/': 'search -- ',
162 '?': 'search -r -- ',
163 }
164 text = self.text()
165 self.history.append(text)
166 modeman.leave(self._win_id, usertypes.KeyMode.command, 'cmd accept')
167 self.got_cmd[str].emit(prefixes[text[0]] + text[1:])
168
169 @cmdutils.register(instance='status-command', scope='window')
170 def edit_command(self, run=False):
171 """Open an editor to modify the current command.
172
173 Args:
174 run: Run the command if the editor exits successfully.
175 """
176 ed = editor.ExternalEditor(parent=self)
177
178 def callback(text):
179 self.set_cmd_text(text)
180 if run:
181 self.command_accept()
182
183 ed.editing_finished.connect(callback)
184 ed.edit(self.text())
185
186 @pyqtSlot(usertypes.KeyMode)
187 def on_mode_left(self, mode):
188 """Clear up when command mode was left.
189
190 - Clear the statusbar text if it's explicitly unfocused.
191 - Clear completion selection
192 - Hide completion
193
194 Args:
195 mode: The mode which was left.
196 """
197 if mode == usertypes.KeyMode.command:
198 self.setText('')
199 self.history.stop()
200 self.hide_cmd.emit()
201 self.clear_completion_selection.emit()
202 self.hide_completion.emit()
203
204 def setText(self, text):
205 """Extend setText to set prefix and make sure the prompt is ok."""
206 if not text:
207 pass
208 elif text[0] in modeparsers.STARTCHARS:
209 super().set_prompt(text[0])
210 else:
211 raise AssertionError("setText got called with invalid text "
212 "'{}'!".format(text))
213 super().setText(text)
214
215 def keyPressEvent(self, e):
216 """Override keyPressEvent to ignore Return key presses.
217
218 If this widget is focused, we are in passthrough key mode, and
219 Enter/Shift+Enter/etc. will cause QLineEdit to think it's finished
220 without command_accept to be called.
221 """
222 if e.key() == Qt.Key_Return:
223 e.ignore()
224 return
225 else:
226 super().keyPressEvent(e)
227
228 def sizeHint(self):
229 """Dynamically calculate the needed size."""
230 height = super().sizeHint().height()
231 text = self.text()
232 if not text:
233 text = 'x'
234 width = self.fontMetrics().width(text)
235 return QSize(width, height)
236
[end of qutebrowser/mainwindow/statusbar/command.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qutebrowser/mainwindow/statusbar/command.py b/qutebrowser/mainwindow/statusbar/command.py
--- a/qutebrowser/mainwindow/statusbar/command.py
+++ b/qutebrowser/mainwindow/statusbar/command.py
@@ -26,7 +26,7 @@
from qutebrowser.commands import cmdexc, cmdutils
from qutebrowser.misc import cmdhistory, editor
from qutebrowser.misc import miscwidgets as misc
-from qutebrowser.utils import usertypes, log, objreg
+from qutebrowser.utils import usertypes, log, objreg, message
class Command(misc.MinimalLineEditMixin, misc.CommandLineEdit):
@@ -176,6 +176,10 @@
ed = editor.ExternalEditor(parent=self)
def callback(text):
+ if not text or text[0] not in modeparsers.STARTCHARS:
+ message.error('command must start with one of {}'
+ .format(modeparsers.STARTCHARS))
+ return
self.set_cmd_text(text)
if run:
self.command_accept()
| {"golden_diff": "diff --git a/qutebrowser/mainwindow/statusbar/command.py b/qutebrowser/mainwindow/statusbar/command.py\n--- a/qutebrowser/mainwindow/statusbar/command.py\n+++ b/qutebrowser/mainwindow/statusbar/command.py\n@@ -26,7 +26,7 @@\n from qutebrowser.commands import cmdexc, cmdutils\n from qutebrowser.misc import cmdhistory, editor\n from qutebrowser.misc import miscwidgets as misc\n-from qutebrowser.utils import usertypes, log, objreg\n+from qutebrowser.utils import usertypes, log, objreg, message\n \n \n class Command(misc.MinimalLineEditMixin, misc.CommandLineEdit):\n@@ -176,6 +176,10 @@\n ed = editor.ExternalEditor(parent=self)\n \n def callback(text):\n+ if not text or text[0] not in modeparsers.STARTCHARS:\n+ message.error('command must start with one of {}'\n+ .format(modeparsers.STARTCHARS))\n+ return\n self.set_cmd_text(text)\n if run:\n self.command_accept()\n", "issue": ":edit-command accepts invalid commands\nWhen using `:edit-command` and changing the command to `foo` (without an initial `:`):\r\n\r\n```\r\n10:33:50 DEBUG procs guiprocess:on_finished:98 Process finished with code 0, status 0.\r\n10:33:50 DEBUG procs editor:on_proc_closed:73 Editor closed\r\n10:33:50 DEBUG procs editor:on_proc_closed:90 Read back: foo\r\n\r\n10:33:50 ERROR misc crashsignal:exception_hook:216 Uncaught exception\r\nTraceback (most recent call last):\r\n File \"/home/florian/proj/qutebrowser/git/qutebrowser/mainwindow/statusbar/command.py\", line 179, in callback\r\n self.set_cmd_text(text)\r\n File \"/home/florian/proj/qutebrowser/git/qutebrowser/mainwindow/statusbar/command.py\", line 86, in set_cmd_text\r\n self.setText(text)\r\n File \"/home/florian/proj/qutebrowser/git/qutebrowser/mainwindow/statusbar/command.py\", line 212, in setText\r\n \"'{}'!\".format(text))\r\nAssertionError: setText got called with invalid text 'foo\r\n'!\r\n```\r\n\r\nWhen changing it to an empty file and pressing enter:\r\n\r\n```\r\n10:34:38 DEBUG commands command:run:484 command called: command-accept\r\n[...]\r\n10:34:38 ERROR misc crashsignal:exception_hook:216 Uncaught exception\r\nTraceback (most recent call last):\r\n File \"/home/florian/proj/qutebrowser/git/qutebrowser/app.py\", line 935, in eventFilter\r\n return handler(event)\r\n File \"/home/florian/proj/qutebrowser/git/qutebrowser/app.py\", line 895, in _handle_key_event\r\n return man.eventFilter(event)\r\n File \"/home/florian/proj/qutebrowser/git/qutebrowser/keyinput/modeman.py\", line 326, in eventFilter\r\n return self._eventFilter_keypress(event)\r\n File \"/home/florian/proj/qutebrowser/git/qutebrowser/keyinput/modeman.py\", line 162, in _eventFilter_keypress\r\n handled = parser.handle(event)\r\n File \"/home/florian/proj/qutebrowser/git/qutebrowser/keyinput/basekeyparser.py\", line 266, in handle\r\n handled = self._handle_special_key(e)\r\n File \"/home/florian/proj/qutebrowser/git/qutebrowser/keyinput/basekeyparser.py\", line 139, in _handle_special_key\r\n self.execute(cmdstr, self.Type.special, count)\r\n File \"/home/florian/proj/qutebrowser/git/qutebrowser/keyinput/keyparser.py\", line 44, in execute\r\n self._commandrunner.run(cmdstr, count)\r\n File \"/home/florian/proj/qutebrowser/git/qutebrowser/commands/runners.py\", line 301, in run\r\n result.cmd.run(self._win_id, args, count=count)\r\n File \"/home/florian/proj/qutebrowser/git/qutebrowser/commands/command.py\", line 500, in run\r\n self.handler(*posargs, **kwargs)\r\n File \"/home/florian/proj/qutebrowser/git/qutebrowser/mainwindow/statusbar/command.py\", line 167, in 
command_accept\r\n self.got_cmd[str].emit(prefixes[text[0]] + text[1:])\r\nIndexError: string index out of range\r\n```\r\n\r\nReport: https://crashes.qutebrowser.org/view/97044b65\r\n\r\ncc @rcorre\r\n\n", "before_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2017 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"The commandline in the statusbar.\"\"\"\n\nfrom PyQt5.QtCore import pyqtSignal, pyqtSlot, Qt, QSize\nfrom PyQt5.QtWidgets import QSizePolicy\n\nfrom qutebrowser.keyinput import modeman, modeparsers\nfrom qutebrowser.commands import cmdexc, cmdutils\nfrom qutebrowser.misc import cmdhistory, editor\nfrom qutebrowser.misc import miscwidgets as misc\nfrom qutebrowser.utils import usertypes, log, objreg\n\n\nclass Command(misc.MinimalLineEditMixin, misc.CommandLineEdit):\n\n \"\"\"The commandline part of the statusbar.\n\n Attributes:\n _win_id: The window ID this widget is associated with.\n\n Signals:\n got_cmd: Emitted when a command is triggered by the user.\n arg: The command string and also potentially the count.\n clear_completion_selection: Emitted before the completion widget is\n hidden.\n hide_completion: Emitted when the completion widget should be hidden.\n update_completion: Emitted when the completion should be shown/updated.\n show_cmd: Emitted when command input should be shown.\n hide_cmd: Emitted when command input can be hidden.\n \"\"\"\n\n got_cmd = pyqtSignal([str], [str, int])\n clear_completion_selection = pyqtSignal()\n hide_completion = pyqtSignal()\n update_completion = pyqtSignal()\n show_cmd = pyqtSignal()\n hide_cmd = pyqtSignal()\n\n def __init__(self, *, win_id, private, parent=None):\n misc.CommandLineEdit.__init__(self, parent=parent)\n misc.MinimalLineEditMixin.__init__(self)\n self._win_id = win_id\n if not private:\n command_history = objreg.get('command-history')\n self.history.history = command_history.data\n self.history.changed.connect(command_history.changed)\n self.setSizePolicy(QSizePolicy.MinimumExpanding, QSizePolicy.Ignored)\n self.cursorPositionChanged.connect(self.update_completion)\n self.textChanged.connect(self.update_completion)\n self.textChanged.connect(self.updateGeometry)\n\n def prefix(self):\n \"\"\"Get the currently entered command prefix.\"\"\"\n text = self.text()\n if not text:\n return ''\n elif text[0] in modeparsers.STARTCHARS:\n return text[0]\n else:\n return ''\n\n def set_cmd_text(self, text):\n \"\"\"Preset the statusbar to some text.\n\n Args:\n text: The text to set as string.\n \"\"\"\n self.setText(text)\n log.modes.debug(\"Setting command text, focusing {!r}\".format(self))\n modeman.enter(self._win_id, usertypes.KeyMode.command, 'cmd focus')\n self.setFocus()\n self.show_cmd.emit()\n\n @cmdutils.register(instance='status-command', name='set-cmd-text',\n scope='window', maxsplit=0)\n 
@cmdutils.argument('count', count=True)\n def set_cmd_text_command(self, text, count=None, space=False, append=False,\n run_on_count=False):\n \"\"\"Preset the statusbar to some text.\n\n //\n\n Wrapper for set_cmd_text to check the arguments and allow multiple\n strings which will get joined.\n\n Args:\n text: The commandline to set.\n count: The count if given.\n space: If given, a space is added to the end.\n append: If given, the text is appended to the current text.\n run_on_count: If given with a count, the command is run with the\n given count rather than setting the command text.\n \"\"\"\n if space:\n text += ' '\n if append:\n if not self.text():\n raise cmdexc.CommandError(\"No current text!\")\n text = self.text() + text\n\n if not text or text[0] not in modeparsers.STARTCHARS:\n raise cmdexc.CommandError(\n \"Invalid command text '{}'.\".format(text))\n if run_on_count and count is not None:\n self.got_cmd[str, int].emit(text, count)\n else:\n self.set_cmd_text(text)\n\n @cmdutils.register(instance='status-command',\n modes=[usertypes.KeyMode.command], scope='window')\n def command_history_prev(self):\n \"\"\"Go back in the commandline history.\"\"\"\n try:\n if not self.history.is_browsing():\n item = self.history.start(self.text().strip())\n else:\n item = self.history.previtem()\n except (cmdhistory.HistoryEmptyError,\n cmdhistory.HistoryEndReachedError):\n return\n if item:\n self.set_cmd_text(item)\n\n @cmdutils.register(instance='status-command',\n modes=[usertypes.KeyMode.command], scope='window')\n def command_history_next(self):\n \"\"\"Go forward in the commandline history.\"\"\"\n if not self.history.is_browsing():\n return\n try:\n item = self.history.nextitem()\n except cmdhistory.HistoryEndReachedError:\n return\n if item:\n self.set_cmd_text(item)\n\n @cmdutils.register(instance='status-command',\n modes=[usertypes.KeyMode.command], scope='window')\n def command_accept(self):\n \"\"\"Execute the command currently in the commandline.\"\"\"\n prefixes = {\n ':': '',\n '/': 'search -- ',\n '?': 'search -r -- ',\n }\n text = self.text()\n self.history.append(text)\n modeman.leave(self._win_id, usertypes.KeyMode.command, 'cmd accept')\n self.got_cmd[str].emit(prefixes[text[0]] + text[1:])\n\n @cmdutils.register(instance='status-command', scope='window')\n def edit_command(self, run=False):\n \"\"\"Open an editor to modify the current command.\n\n Args:\n run: Run the command if the editor exits successfully.\n \"\"\"\n ed = editor.ExternalEditor(parent=self)\n\n def callback(text):\n self.set_cmd_text(text)\n if run:\n self.command_accept()\n\n ed.editing_finished.connect(callback)\n ed.edit(self.text())\n\n @pyqtSlot(usertypes.KeyMode)\n def on_mode_left(self, mode):\n \"\"\"Clear up when command mode was left.\n\n - Clear the statusbar text if it's explicitly unfocused.\n - Clear completion selection\n - Hide completion\n\n Args:\n mode: The mode which was left.\n \"\"\"\n if mode == usertypes.KeyMode.command:\n self.setText('')\n self.history.stop()\n self.hide_cmd.emit()\n self.clear_completion_selection.emit()\n self.hide_completion.emit()\n\n def setText(self, text):\n \"\"\"Extend setText to set prefix and make sure the prompt is ok.\"\"\"\n if not text:\n pass\n elif text[0] in modeparsers.STARTCHARS:\n super().set_prompt(text[0])\n else:\n raise AssertionError(\"setText got called with invalid text \"\n \"'{}'!\".format(text))\n super().setText(text)\n\n def keyPressEvent(self, e):\n \"\"\"Override keyPressEvent to ignore Return key presses.\n\n If this 
widget is focused, we are in passthrough key mode, and\n Enter/Shift+Enter/etc. will cause QLineEdit to think it's finished\n without command_accept to be called.\n \"\"\"\n if e.key() == Qt.Key_Return:\n e.ignore()\n return\n else:\n super().keyPressEvent(e)\n\n def sizeHint(self):\n \"\"\"Dynamically calculate the needed size.\"\"\"\n height = super().sizeHint().height()\n text = self.text()\n if not text:\n text = 'x'\n width = self.fontMetrics().width(text)\n return QSize(width, height)\n", "path": "qutebrowser/mainwindow/statusbar/command.py"}]} | 3,769 | 229 |
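Both crash traces in this record start from editor output that no longer begins with one of the command prefix characters, and the golden diff fixes this by validating the text in the `edit_command` callback before it reaches `set_cmd_text`. A standalone sketch of that guard, with `STARTCHARS` stubbed to the prefixes visible in `command_accept` (`:`, `/`, `?`):

```python
# Standalone sketch of the validation added in the golden diff above.
# STARTCHARS is stubbed; in qutebrowser it comes from keyinput.modeparsers.
STARTCHARS = ":/?"

def validate_edited_command(text):
    """Return an error message for unusable editor output, or None if it is fine."""
    if not text or text[0] not in STARTCHARS:
        return "command must start with one of {}".format(STARTCHARS)
    return None

# Both failure modes from the report are caught instead of raising:
assert validate_edited_command("foo\n") is not None   # the AssertionError case
assert validate_edited_command("") is not None        # the IndexError case
assert validate_edited_command(":open example.org") is None
```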
gh_patches_debug_13060 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-3089 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError in ard module
With the command given below, I get the error message below. I'm using version 2014.06.09.
`youtube-dl http://www.ardmediathek.de/tv/Klassiker-der-Weltliteratur/Max-Frisch/BR-alpha/Video\?documentId\=19067308\&bcastId\=14913194`
```
[ARD] 19067308: Downloading webpage
[ARD] 19067308: Downloading JSON metadata
Traceback (most recent call last):
File "/usr/bin/youtube-dl", line 9, in <module>
load_entry_point('youtube-dl==2014.06.09', 'console_scripts', 'youtube-dl')()
File "/usr/lib/python3.4/site-packages/youtube_dl/__init__.py", line 853, in main
_real_main(argv)
File "/usr/lib/python3.4/site-packages/youtube_dl/__init__.py", line 843, in _real_main
retcode = ydl.download(all_urls)
File "/usr/lib/python3.4/site-packages/youtube_dl/YoutubeDL.py", line 1050, in download
self.extract_info(url)
File "/usr/lib/python3.4/site-packages/youtube_dl/YoutubeDL.py", line 516, in extract_info
ie_result = ie.extract(url)
File "/usr/lib/python3.4/site-packages/youtube_dl/extractor/common.py", line 168, in extract
return self._real_extract(url)
File "/usr/lib/python3.4/site-packages/youtube_dl/extractor/ard.py", line 66, in _real_extract
determine_ext(format['url']), format['quality'])
File "/usr/lib/python3.4/site-packages/youtube_dl/utils.py", line 845, in determine_ext
guess = url.partition(u'?')[0].rpartition(u'.')[2]
AttributeError: 'list' object has no attribute 'partition'
```
</issue>
<code>
[start of youtube_dl/extractor/ard.py]
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 import re
5
6 from .common import InfoExtractor
7 from ..utils import (
8 determine_ext,
9 ExtractorError,
10 )
11
12
13 class ARDIE(InfoExtractor):
14 _VALID_URL = r'^https?://(?:(?:www\.)?ardmediathek\.de|mediathek\.daserste\.de)/(?:.*/)(?P<video_id>[^/\?]+)(?:\?.*)?'
15
16 _TEST = {
17 'url': 'http://www.ardmediathek.de/das-erste/guenther-jauch/edward-snowden-im-interview-held-oder-verraeter?documentId=19288786',
18 'file': '19288786.mp4',
19 'md5': '515bf47ce209fb3f5a61b7aad364634c',
20 'info_dict': {
21 'title': 'Edward Snowden im Interview - Held oder VerrΓ€ter?',
22 'description': 'Edward Snowden hat alles aufs Spiel gesetzt, um die weltweite \xdcberwachung durch die Geheimdienste zu enttarnen. Nun stellt sich der ehemalige NSA-Mitarbeiter erstmals weltweit in einem TV-Interview den Fragen eines NDR-Journalisten. Die Sendung vom Sonntagabend.',
23 'thumbnail': 'http://www.ardmediathek.de/ard/servlet/contentblob/19/28/87/90/19288790/bild/2250037',
24 },
25 'skip': 'Blocked outside of Germany',
26 }
27
28 def _real_extract(self, url):
29 # determine video id from url
30 m = re.match(self._VALID_URL, url)
31
32 numid = re.search(r'documentId=([0-9]+)', url)
33 if numid:
34 video_id = numid.group(1)
35 else:
36 video_id = m.group('video_id')
37
38 webpage = self._download_webpage(url, video_id)
39
40 title = self._html_search_regex(
41 [r'<h1(?:\s+class="boxTopHeadline")?>(.*?)</h1>',
42 r'<meta name="dcterms.title" content="(.*?)"/>',
43 r'<h4 class="headline">(.*?)</h4>'],
44 webpage, 'title')
45 description = self._html_search_meta(
46 'dcterms.abstract', webpage, 'description')
47 thumbnail = self._og_search_thumbnail(webpage)
48
49
50 media_info = self._download_json(
51 'http://www.ardmediathek.de/play/media/%s' % video_id, video_id)
52 # The second element of the _mediaArray contains the standard http urls
53 streams = media_info['_mediaArray'][1]['_mediaStreamArray']
54 if not streams:
55 if '"fsk"' in webpage:
56 raise ExtractorError('This video is only available after 20:00')
57
58 formats = []
59 for s in streams:
60 format = {
61 'quality': s['_quality'],
62 'url': s['_stream'],
63 }
64
65 format['format_id'] = '%s-%s' % (
66 determine_ext(format['url']), format['quality'])
67
68 formats.append(format)
69
70 self._sort_formats(formats)
71
72 return {
73 'id': video_id,
74 'title': title,
75 'description': description,
76 'formats': formats,
77 'thumbnail': thumbnail,
78 }
79
[end of youtube_dl/extractor/ard.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/youtube_dl/extractor/ard.py b/youtube_dl/extractor/ard.py
--- a/youtube_dl/extractor/ard.py
+++ b/youtube_dl/extractor/ard.py
@@ -56,7 +56,18 @@
raise ExtractorError('This video is only available after 20:00')
formats = []
+
for s in streams:
+ if type(s['_stream']) == list:
+ for index, url in enumerate(s['_stream'][::-1]):
+ quality = s['_quality'] + index
+ formats.append({
+ 'quality': quality,
+ 'url': url,
+ 'format_id': '%s-%s' % (determine_ext(url), quality)
+ })
+ continue
+
format = {
'quality': s['_quality'],
'url': s['_stream'],
| {"golden_diff": "diff --git a/youtube_dl/extractor/ard.py b/youtube_dl/extractor/ard.py\n--- a/youtube_dl/extractor/ard.py\n+++ b/youtube_dl/extractor/ard.py\n@@ -56,7 +56,18 @@\n raise ExtractorError('This video is only available after 20:00')\n \n formats = []\n+\n for s in streams:\n+ if type(s['_stream']) == list:\n+ for index, url in enumerate(s['_stream'][::-1]):\n+ quality = s['_quality'] + index\n+ formats.append({\n+ 'quality': quality,\n+ 'url': url,\n+ 'format_id': '%s-%s' % (determine_ext(url), quality)\n+ })\n+ continue\n+\n format = {\n 'quality': s['_quality'],\n 'url': s['_stream'],\n", "issue": "AttributeError in ard module\nWith the command given below, I get the error message below. I'm using version 2014.06.09.\n\n`youtube-dl http://www.ardmediathek.de/tv/Klassiker-der-Weltliteratur/Max-Frisch/BR-alpha/Video\\?documentId\\=19067308\\&bcastId\\=14913194`\n\n```\n[ARD] 19067308: Downloading webpage\n[ARD] 19067308: Downloading JSON metadata\nTraceback (most recent call last):\n File \"/usr/bin/youtube-dl\", line 9, in <module>\n load_entry_point('youtube-dl==2014.06.09', 'console_scripts', 'youtube-dl')()\n File \"/usr/lib/python3.4/site-packages/youtube_dl/__init__.py\", line 853, in main\n _real_main(argv)\n File \"/usr/lib/python3.4/site-packages/youtube_dl/__init__.py\", line 843, in _real_main\n retcode = ydl.download(all_urls)\n File \"/usr/lib/python3.4/site-packages/youtube_dl/YoutubeDL.py\", line 1050, in download\n self.extract_info(url)\n File \"/usr/lib/python3.4/site-packages/youtube_dl/YoutubeDL.py\", line 516, in extract_info\n ie_result = ie.extract(url)\n File \"/usr/lib/python3.4/site-packages/youtube_dl/extractor/common.py\", line 168, in extract\n return self._real_extract(url)\n File \"/usr/lib/python3.4/site-packages/youtube_dl/extractor/ard.py\", line 66, in _real_extract\n determine_ext(format['url']), format['quality'])\n File \"/usr/lib/python3.4/site-packages/youtube_dl/utils.py\", line 845, in determine_ext\n guess = url.partition(u'?')[0].rpartition(u'.')[2]\nAttributeError: 'list' object has no attribute 'partition'\n```\n\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nimport re\n\nfrom .common import InfoExtractor\nfrom ..utils import (\n determine_ext,\n ExtractorError,\n)\n\n\nclass ARDIE(InfoExtractor):\n _VALID_URL = r'^https?://(?:(?:www\\.)?ardmediathek\\.de|mediathek\\.daserste\\.de)/(?:.*/)(?P<video_id>[^/\\?]+)(?:\\?.*)?'\n\n _TEST = {\n 'url': 'http://www.ardmediathek.de/das-erste/guenther-jauch/edward-snowden-im-interview-held-oder-verraeter?documentId=19288786',\n 'file': '19288786.mp4',\n 'md5': '515bf47ce209fb3f5a61b7aad364634c',\n 'info_dict': {\n 'title': 'Edward Snowden im Interview - Held oder Verr\u00e4ter?',\n 'description': 'Edward Snowden hat alles aufs Spiel gesetzt, um die weltweite \\xdcberwachung durch die Geheimdienste zu enttarnen. Nun stellt sich der ehemalige NSA-Mitarbeiter erstmals weltweit in einem TV-Interview den Fragen eines NDR-Journalisten. 
Die Sendung vom Sonntagabend.',\n 'thumbnail': 'http://www.ardmediathek.de/ard/servlet/contentblob/19/28/87/90/19288790/bild/2250037',\n },\n 'skip': 'Blocked outside of Germany',\n }\n\n def _real_extract(self, url):\n # determine video id from url\n m = re.match(self._VALID_URL, url)\n\n numid = re.search(r'documentId=([0-9]+)', url)\n if numid:\n video_id = numid.group(1)\n else:\n video_id = m.group('video_id')\n\n webpage = self._download_webpage(url, video_id)\n\n title = self._html_search_regex(\n [r'<h1(?:\\s+class=\"boxTopHeadline\")?>(.*?)</h1>',\n r'<meta name=\"dcterms.title\" content=\"(.*?)\"/>',\n r'<h4 class=\"headline\">(.*?)</h4>'],\n webpage, 'title')\n description = self._html_search_meta(\n 'dcterms.abstract', webpage, 'description')\n thumbnail = self._og_search_thumbnail(webpage)\n\n\n media_info = self._download_json(\n 'http://www.ardmediathek.de/play/media/%s' % video_id, video_id)\n # The second element of the _mediaArray contains the standard http urls\n streams = media_info['_mediaArray'][1]['_mediaStreamArray']\n if not streams:\n if '\"fsk\"' in webpage:\n raise ExtractorError('This video is only available after 20:00')\n\n formats = []\n for s in streams:\n format = {\n 'quality': s['_quality'],\n 'url': s['_stream'],\n }\n\n format['format_id'] = '%s-%s' % (\n determine_ext(format['url']), format['quality'])\n\n formats.append(format)\n\n self._sort_formats(formats)\n\n return {\n 'id': video_id,\n 'title': title,\n 'description': description,\n 'formats': formats,\n 'thumbnail': thumbnail,\n }\n", "path": "youtube_dl/extractor/ard.py"}]} | 1,954 | 196 |
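The traceback in this record comes from `s['_stream']` sometimes being a list of URLs, so `determine_ext` is handed a list instead of a string; the golden diff adds a branch that enumerates such lists. A stripped-down sketch of the same branching, with `determine_ext` reduced to the one-liner from `utils.py` quoted in the traceback:

```python
# Sketch of the list-vs-string handling from the golden diff above.
def determine_ext(url):
    # simplified stand-in for youtube_dl.utils.determine_ext
    return url.partition('?')[0].rpartition('.')[2]

def build_formats(streams):
    formats = []
    for s in streams:
        stream = s['_stream']
        if isinstance(stream, list):
            # Enumerate the reversed list so the quality number grows with the
            # index, mirroring the enumerate(s['_stream'][::-1]) in the diff.
            for index, url in enumerate(stream[::-1]):
                quality = s['_quality'] + index
                formats.append({'quality': quality, 'url': url,
                                'format_id': '%s-%s' % (determine_ext(url), quality)})
            continue
        formats.append({'quality': s['_quality'], 'url': stream,
                        'format_id': '%s-%s' % (determine_ext(stream), s['_quality'])})
    return formats

# A list-valued stream no longer raises AttributeError:
print(build_formats([{'_quality': 1, '_stream': ['http://x/a.mp4', 'http://x/b.mp4']}]))
```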
gh_patches_debug_22327 | rasdani/github-patches | git_diff | kivy__kivy-5727 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Probesysfs provider requires getconf
<!--
The issue tracker is a tool to address bugs.
Please use the #kivy IRC channel on freenode or Stack Overflow for
support questions, more information at https://git.io/vM1yQ.
Before opening a new issue, make sure you do the following:
* check that your issue isn't already filed: https://git.io/vM1iE
* prepare a short, runnable example that reproduces the issue
* reproduce the problem with the latest development version of Kivy
* double-check that the issue is indeed a bug and not a support request
-->
### Versions
* Python: 3.6.4
* OS: Linux
* Kivy: 1.10.0
* Kivy installation method: setuptools
### Description
Kivy's probesysfs provider requires getconf, provided by glibc, to get the platform's LONG_BIT value.
This dependency precludes the use of other C libraries, such as musl, and rules out platforms that choose not to ship getconf.
</issue>
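For context on the description above: the only thing probesysfs shells out to `getconf` for is the number of bits in a C long, which can be computed in-process. A hedged, libc-agnostic sketch of that computation (an illustration of the idea, not necessarily the patch that was applied):

```python
# Illustration only: obtaining the platform's LONG_BIT without spawning getconf.
# struct.calcsize('l') is the size in bytes of a native C long for this build,
# so multiplying by 8 gives the number `getconf LONG_BIT` prints (32 or 64).
import struct

def long_bit():
    return struct.calcsize('l') * 8

print(long_bit())  # 64 on a typical x86_64 Linux build, 32 on 32-bit platforms
```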
<code>
[start of kivy/input/providers/probesysfs.py]
1 '''
2 Auto Create Input Provider Config Entry for Available MT Hardware (linux only).
3 ===============================================================================
4
5 Thanks to Marc Tardif for the probing code, taken from scan-for-mt-device.
6
7 The device discovery is done by this provider. However, the reading of
8 input can be performed by other providers like: hidinput, mtdev and
9 linuxwacom. mtdev is used prior to other providers. For more
10 information about mtdev, check :py:class:`~kivy.input.providers.mtdev`.
11
12 Here is an example of auto creation::
13
14 [input]
15 # using mtdev
16 device_%(name)s = probesysfs,provider=mtdev
17 # using hidinput
18 device_%(name)s = probesysfs,provider=hidinput
19 # using mtdev with a match on name
20 device_%(name)s = probesysfs,provider=mtdev,match=acer
21
22 # using hidinput with custom parameters to hidinput (all on one line)
23 %(name)s = probesysfs,
24 provider=hidinput,param=min_pressure=1,param=max_pressure=99
25
26 # you can also match your wacom touchscreen
27 touch = probesysfs,match=E3 Finger,provider=linuxwacom,
28 select_all=1,param=mode=touch
29 # and your wacom pen
30 pen = probesysfs,match=E3 Pen,provider=linuxwacom,
31 select_all=1,param=mode=pen
32
33 By default, ProbeSysfs module will enumerate hardware from the /sys/class/input
34 device, and configure hardware with ABS_MT_POSITION_X capability. But for
35 example, the wacom screen doesn't support this capability. You can prevent this
36 behavior by putting select_all=1 in your config line. Add use_mouse=1 to also
37 include touchscreen hardware that offers core pointer functionality.
38 '''
39
40 __all__ = ('ProbeSysfsHardwareProbe', )
41
42 import os
43 from os.path import sep
44
45 if 'KIVY_DOC' in os.environ:
46
47 ProbeSysfsHardwareProbe = None
48
49 else:
50 from re import match, IGNORECASE
51 from glob import glob
52 from subprocess import Popen, PIPE
53 from kivy.logger import Logger
54 from kivy.input.provider import MotionEventProvider
55 from kivy.input.providers.mouse import MouseMotionEventProvider
56 from kivy.input.factory import MotionEventFactory
57 from kivy.config import _is_rpi
58
59 EventLoop = None
60
61 # See linux/input.h
62 ABS_MT_POSITION_X = 0x35
63
64 _cache_input = None
65 _cache_xinput = None
66
67 class Input(object):
68
69 def __init__(self, path):
70 query_xinput()
71 self.path = path
72
73 @property
74 def device(self):
75 base = os.path.basename(self.path)
76 return os.path.join("/dev", "input", base)
77
78 @property
79 def name(self):
80 path = os.path.join(self.path, "device", "name")
81 return read_line(path)
82
83 def get_capabilities(self):
84 path = os.path.join(self.path, "device", "capabilities", "abs")
85 line = "0"
86 try:
87 line = read_line(path)
88 except OSError:
89 return []
90
91 capabilities = []
92 long_bit = getconf("LONG_BIT")
93 for i, word in enumerate(line.split(" ")):
94 word = int(word, 16)
95 subcapabilities = [bool(word & 1 << i)
96 for i in range(long_bit)]
97 capabilities[:0] = subcapabilities
98
99 return capabilities
100
101 def has_capability(self, capability):
102 capabilities = self.get_capabilities()
103 return len(capabilities) > capability and capabilities[capability]
104
105 @property
106 def is_mouse(self):
107 return self.device in _cache_xinput
108
109 def getout(*args):
110 try:
111 return Popen(args, stdout=PIPE).communicate()[0]
112 except OSError:
113 return ''
114
115 def getconf(var):
116 output = getout("getconf", var)
117 return int(output)
118
119 def query_xinput():
120 global _cache_xinput
121 if _cache_xinput is None:
122 _cache_xinput = []
123 devids = getout('xinput', '--list', '--id-only')
124 for did in devids.splitlines():
125 devprops = getout('xinput', '--list-props', did)
126 evpath = None
127 for prop in devprops.splitlines():
128 prop = prop.strip()
129 if (prop.startswith(b'Device Enabled') and
130 prop.endswith(b'0')):
131 evpath = None
132 break
133 if prop.startswith(b'Device Node'):
134 try:
135 evpath = prop.split('"')[1]
136 except Exception:
137 evpath = None
138 if evpath:
139 _cache_xinput.append(evpath)
140
141 def get_inputs(path):
142 global _cache_input
143 if _cache_input is None:
144 event_glob = os.path.join(path, "event*")
145 _cache_input = [Input(x) for x in glob(event_glob)]
146 return _cache_input
147
148 def read_line(path):
149 f = open(path)
150 try:
151 return f.readline().strip()
152 finally:
153 f.close()
154
155 class ProbeSysfsHardwareProbe(MotionEventProvider):
156
157 def __new__(self, device, args):
158 # hack to not return an instance of this provider.
159 # :)
160 instance = super(ProbeSysfsHardwareProbe, self).__new__(self)
161 instance.__init__(device, args)
162
163 def __init__(self, device, args):
164 super(ProbeSysfsHardwareProbe, self).__init__(device, args)
165 self.provider = 'mtdev'
166 self.match = None
167 self.input_path = '/sys/class/input'
168 self.select_all = True if _is_rpi else False
169 self.use_mouse = False
170 self.use_regex = False
171 self.args = []
172
173 args = args.split(',')
174 for arg in args:
175 if arg == '':
176 continue
177 arg = arg.split('=', 1)
178 # ensure it's a key = value
179 if len(arg) != 2:
180 Logger.error('ProbeSysfs: invalid parameters %s, not'
181 ' key=value format' % arg)
182 continue
183
184 key, value = arg
185 if key == 'match':
186 self.match = value
187 elif key == 'provider':
188 self.provider = value
189 elif key == 'use_regex':
190 self.use_regex = bool(int(value))
191 elif key == 'select_all':
192 self.select_all = bool(int(value))
193 elif key == 'use_mouse':
194 self.use_mouse = bool(int(value))
195 elif key == 'param':
196 self.args.append(value)
197 else:
198 Logger.error('ProbeSysfs: unknown %s option' % key)
199 continue
200
201 self.probe()
202
203 def should_use_mouse(self):
204 return (self.use_mouse or
205 not any(p for p in EventLoop.input_providers
206 if isinstance(p, MouseMotionEventProvider)))
207
208 def probe(self):
209 global EventLoop
210 from kivy.base import EventLoop
211
212 inputs = get_inputs(self.input_path)
213 Logger.debug('ProbeSysfs: using probesysfs!')
214
215 use_mouse = self.should_use_mouse()
216
217 if not self.select_all:
218 inputs = [x for x in inputs if
219 x.has_capability(ABS_MT_POSITION_X) and
220 (use_mouse or not x.is_mouse)]
221 for device in inputs:
222 Logger.debug('ProbeSysfs: found device: %s at %s' % (
223 device.name, device.device))
224
225 # must ignore ?
226 if self.match:
227 if self.use_regex:
228 if not match(self.match, device.name, IGNORECASE):
229 Logger.debug('ProbeSysfs: device not match the'
230 ' rule in config, ignoring.')
231 continue
232 else:
233 if self.match not in device.name:
234 continue
235
236 Logger.info('ProbeSysfs: device match: %s' % device.device)
237
238 d = device.device
239 devicename = self.device % dict(name=d.split(sep)[-1])
240
241 provider = MotionEventFactory.get(self.provider)
242 if provider is None:
243 Logger.info('ProbeSysfs: unable to found provider %s' %
244 self.provider)
245 Logger.info('ProbeSysfs: fallback on hidinput')
246 provider = MotionEventFactory.get('hidinput')
247 if provider is None:
248 Logger.critical('ProbeSysfs: no input provider found'
249 ' to handle this device !')
250 continue
251
252 instance = provider(devicename, '%s,%s' % (
253 device.device, ','.join(self.args)))
254 if instance:
255 EventLoop.add_input_provider(instance)
256
257 MotionEventFactory.register('probesysfs', ProbeSysfsHardwareProbe)
258
[end of kivy/input/providers/probesysfs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kivy/input/providers/probesysfs.py b/kivy/input/providers/probesysfs.py
--- a/kivy/input/providers/probesysfs.py
+++ b/kivy/input/providers/probesysfs.py
@@ -47,6 +47,7 @@
ProbeSysfsHardwareProbe = None
else:
+ import ctypes
from re import match, IGNORECASE
from glob import glob
from subprocess import Popen, PIPE
@@ -89,7 +90,7 @@
return []
capabilities = []
- long_bit = getconf("LONG_BIT")
+ long_bit = ctypes.sizeof(ctypes.c_long) * 8
for i, word in enumerate(line.split(" ")):
word = int(word, 16)
subcapabilities = [bool(word & 1 << i)
@@ -112,10 +113,6 @@
except OSError:
return ''
- def getconf(var):
- output = getout("getconf", var)
- return int(output)
-
def query_xinput():
global _cache_xinput
if _cache_xinput is None:
| {"golden_diff": "diff --git a/kivy/input/providers/probesysfs.py b/kivy/input/providers/probesysfs.py\n--- a/kivy/input/providers/probesysfs.py\n+++ b/kivy/input/providers/probesysfs.py\n@@ -47,6 +47,7 @@\n ProbeSysfsHardwareProbe = None\n \n else:\n+ import ctypes\n from re import match, IGNORECASE\n from glob import glob\n from subprocess import Popen, PIPE\n@@ -89,7 +90,7 @@\n return []\n \n capabilities = []\n- long_bit = getconf(\"LONG_BIT\")\n+ long_bit = ctypes.sizeof(ctypes.c_long) * 8\n for i, word in enumerate(line.split(\" \")):\n word = int(word, 16)\n subcapabilities = [bool(word & 1 << i)\n@@ -112,10 +113,6 @@\n except OSError:\n return ''\n \n- def getconf(var):\n- output = getout(\"getconf\", var)\n- return int(output)\n-\n def query_xinput():\n global _cache_xinput\n if _cache_xinput is None:\n", "issue": "Probesysfs provider requires getconf\n<!--\r\nThe issue tracker is a tool to address bugs.\r\nPlease use the #kivy IRC channel on freenode or Stack Overflow for\r\nsupport questions, more information at https://git.io/vM1yQ.\r\n\r\nBefore opening a new issue, make sure you do the following:\r\n * check that your issue isn't already filed: https://git.io/vM1iE\r\n * prepare a short, runnable example that reproduces the issue\r\n * reproduce the problem with the latest development version of Kivy\r\n * double-check that the issue is indeed a bug and not a support request\r\n-->\r\n\r\n### Versions\r\n\r\n* Python: 3.6.4\r\n* OS: Linux\r\n* Kivy: 1.10.0\r\n* Kivy installation method: setuptools\r\n\r\n### Description\r\n\r\nKivy's probesysfs provider requires getconf, provided by glibc, to get the platform's LONG_BIT value.\r\n\r\nThis dependency precludes the use of other C libraries, such as musl, as well as platforms that choose not to install getconf.\r\n\n", "before_files": [{"content": "'''\nAuto Create Input Provider Config Entry for Available MT Hardware (linux only).\n===============================================================================\n\nThanks to Marc Tardif for the probing code, taken from scan-for-mt-device.\n\nThe device discovery is done by this provider. However, the reading of\ninput can be performed by other providers like: hidinput, mtdev and\nlinuxwacom. mtdev is used prior to other providers. For more\ninformation about mtdev, check :py:class:`~kivy.input.providers.mtdev`.\n\nHere is an example of auto creation::\n\n [input]\n # using mtdev\n device_%(name)s = probesysfs,provider=mtdev\n # using hidinput\n device_%(name)s = probesysfs,provider=hidinput\n # using mtdev with a match on name\n device_%(name)s = probesysfs,provider=mtdev,match=acer\n\n # using hidinput with custom parameters to hidinput (all on one line)\n %(name)s = probesysfs,\n provider=hidinput,param=min_pressure=1,param=max_pressure=99\n\n # you can also match your wacom touchscreen\n touch = probesysfs,match=E3 Finger,provider=linuxwacom,\n select_all=1,param=mode=touch\n # and your wacom pen\n pen = probesysfs,match=E3 Pen,provider=linuxwacom,\n select_all=1,param=mode=pen\n\nBy default, ProbeSysfs module will enumerate hardware from the /sys/class/input\ndevice, and configure hardware with ABS_MT_POSITION_X capability. But for\nexample, the wacom screen doesn't support this capability. You can prevent this\nbehavior by putting select_all=1 in your config line. 
Add use_mouse=1 to also\ninclude touchscreen hardware that offers core pointer functionality.\n'''\n\n__all__ = ('ProbeSysfsHardwareProbe', )\n\nimport os\nfrom os.path import sep\n\nif 'KIVY_DOC' in os.environ:\n\n ProbeSysfsHardwareProbe = None\n\nelse:\n from re import match, IGNORECASE\n from glob import glob\n from subprocess import Popen, PIPE\n from kivy.logger import Logger\n from kivy.input.provider import MotionEventProvider\n from kivy.input.providers.mouse import MouseMotionEventProvider\n from kivy.input.factory import MotionEventFactory\n from kivy.config import _is_rpi\n\n EventLoop = None\n\n # See linux/input.h\n ABS_MT_POSITION_X = 0x35\n\n _cache_input = None\n _cache_xinput = None\n\n class Input(object):\n\n def __init__(self, path):\n query_xinput()\n self.path = path\n\n @property\n def device(self):\n base = os.path.basename(self.path)\n return os.path.join(\"/dev\", \"input\", base)\n\n @property\n def name(self):\n path = os.path.join(self.path, \"device\", \"name\")\n return read_line(path)\n\n def get_capabilities(self):\n path = os.path.join(self.path, \"device\", \"capabilities\", \"abs\")\n line = \"0\"\n try:\n line = read_line(path)\n except OSError:\n return []\n\n capabilities = []\n long_bit = getconf(\"LONG_BIT\")\n for i, word in enumerate(line.split(\" \")):\n word = int(word, 16)\n subcapabilities = [bool(word & 1 << i)\n for i in range(long_bit)]\n capabilities[:0] = subcapabilities\n\n return capabilities\n\n def has_capability(self, capability):\n capabilities = self.get_capabilities()\n return len(capabilities) > capability and capabilities[capability]\n\n @property\n def is_mouse(self):\n return self.device in _cache_xinput\n\n def getout(*args):\n try:\n return Popen(args, stdout=PIPE).communicate()[0]\n except OSError:\n return ''\n\n def getconf(var):\n output = getout(\"getconf\", var)\n return int(output)\n\n def query_xinput():\n global _cache_xinput\n if _cache_xinput is None:\n _cache_xinput = []\n devids = getout('xinput', '--list', '--id-only')\n for did in devids.splitlines():\n devprops = getout('xinput', '--list-props', did)\n evpath = None\n for prop in devprops.splitlines():\n prop = prop.strip()\n if (prop.startswith(b'Device Enabled') and\n prop.endswith(b'0')):\n evpath = None\n break\n if prop.startswith(b'Device Node'):\n try:\n evpath = prop.split('\"')[1]\n except Exception:\n evpath = None\n if evpath:\n _cache_xinput.append(evpath)\n\n def get_inputs(path):\n global _cache_input\n if _cache_input is None:\n event_glob = os.path.join(path, \"event*\")\n _cache_input = [Input(x) for x in glob(event_glob)]\n return _cache_input\n\n def read_line(path):\n f = open(path)\n try:\n return f.readline().strip()\n finally:\n f.close()\n\n class ProbeSysfsHardwareProbe(MotionEventProvider):\n\n def __new__(self, device, args):\n # hack to not return an instance of this provider.\n # :)\n instance = super(ProbeSysfsHardwareProbe, self).__new__(self)\n instance.__init__(device, args)\n\n def __init__(self, device, args):\n super(ProbeSysfsHardwareProbe, self).__init__(device, args)\n self.provider = 'mtdev'\n self.match = None\n self.input_path = '/sys/class/input'\n self.select_all = True if _is_rpi else False\n self.use_mouse = False\n self.use_regex = False\n self.args = []\n\n args = args.split(',')\n for arg in args:\n if arg == '':\n continue\n arg = arg.split('=', 1)\n # ensure it's a key = value\n if len(arg) != 2:\n Logger.error('ProbeSysfs: invalid parameters %s, not'\n ' key=value format' % arg)\n continue\n\n key, value = 
arg\n if key == 'match':\n self.match = value\n elif key == 'provider':\n self.provider = value\n elif key == 'use_regex':\n self.use_regex = bool(int(value))\n elif key == 'select_all':\n self.select_all = bool(int(value))\n elif key == 'use_mouse':\n self.use_mouse = bool(int(value))\n elif key == 'param':\n self.args.append(value)\n else:\n Logger.error('ProbeSysfs: unknown %s option' % key)\n continue\n\n self.probe()\n\n def should_use_mouse(self):\n return (self.use_mouse or\n not any(p for p in EventLoop.input_providers\n if isinstance(p, MouseMotionEventProvider)))\n\n def probe(self):\n global EventLoop\n from kivy.base import EventLoop\n\n inputs = get_inputs(self.input_path)\n Logger.debug('ProbeSysfs: using probesysfs!')\n\n use_mouse = self.should_use_mouse()\n\n if not self.select_all:\n inputs = [x for x in inputs if\n x.has_capability(ABS_MT_POSITION_X) and\n (use_mouse or not x.is_mouse)]\n for device in inputs:\n Logger.debug('ProbeSysfs: found device: %s at %s' % (\n device.name, device.device))\n\n # must ignore ?\n if self.match:\n if self.use_regex:\n if not match(self.match, device.name, IGNORECASE):\n Logger.debug('ProbeSysfs: device not match the'\n ' rule in config, ignoring.')\n continue\n else:\n if self.match not in device.name:\n continue\n\n Logger.info('ProbeSysfs: device match: %s' % device.device)\n\n d = device.device\n devicename = self.device % dict(name=d.split(sep)[-1])\n\n provider = MotionEventFactory.get(self.provider)\n if provider is None:\n Logger.info('ProbeSysfs: unable to found provider %s' %\n self.provider)\n Logger.info('ProbeSysfs: fallback on hidinput')\n provider = MotionEventFactory.get('hidinput')\n if provider is None:\n Logger.critical('ProbeSysfs: no input provider found'\n ' to handle this device !')\n continue\n\n instance = provider(devicename, '%s,%s' % (\n device.device, ','.join(self.args)))\n if instance:\n EventLoop.add_input_provider(instance)\n\n MotionEventFactory.register('probesysfs', ProbeSysfsHardwareProbe)\n", "path": "kivy/input/providers/probesysfs.py"}]} | 3,348 | 254 |
gh_patches_debug_42034 | rasdani/github-patches | git_diff | fossasia__open-event-server-4975 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Access code should only be linked to hidden tickets
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Right now we are allowing an access code to be linked to any ticket.
**To Reproduce**
Steps to reproduce the behavior:
1. Create an access code linking it to a public ticket
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
We should only allow creating access codes for hidden tickets.
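
As a rough sketch only (the session handle, model class and exception type here are assumptions for illustration, not the project's actual code), the creation hook could reject any linked ticket that is not hidden:

```python
# Hypothetical validation sketch: only hidden tickets may be linked to an access code.
def validate_hidden_tickets(session, Ticket, ticket_ids):
    for ticket_id in ticket_ids:
        ticket = session.query(Ticket).filter_by(id=int(ticket_id), deleted_at=None).first()
        if ticket is None:
            raise ValueError("Ticket with id {} does not exist".format(ticket_id))
        if not ticket.is_hidden:
            raise ValueError(
                "Ticket with id {} is public; access codes should only apply to hidden tickets".format(ticket_id)
            )
```

A check along these lines would run before the access code is persisted, so public tickets are rejected with a clear error instead of being silently accepted.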
</issue>
<code>
[start of app/api/schema/access_codes.py]
1 from marshmallow import validates_schema
2 from marshmallow_jsonapi import fields
3 from marshmallow_jsonapi.flask import Relationship
4
5 from app.api.helpers.exceptions import UnprocessableEntity
6 from app.api.helpers.utilities import dasherize
7 from app.api.schema.base import SoftDeletionSchema
8 from app.models.access_code import AccessCode
9 from utils.common import use_defaults
10
11
12 @use_defaults()
13 class AccessCodeSchema(SoftDeletionSchema):
14 """
15 Api schema for Access Code Model
16 """
17
18 class Meta:
19 """
20 Meta class for Access Code Api Schema
21 """
22 type_ = 'access-code'
23 self_view = 'v1.access_code_detail'
24 self_view_kwargs = {'id': '<id>'}
25 inflect = dasherize
26
27 @validates_schema(pass_original=True)
28 def validate_date(self, data, original_data):
29 if 'id' in original_data['data']:
30 access_code = AccessCode.query.filter_by(id=original_data['data']['id']).one()
31
32 if 'valid_from' not in data:
33 data['valid_from'] = access_code.valid_from
34
35 if 'valid_till' not in data:
36 data['valid_till'] = access_code.valid_till
37
38 if data['valid_from'] > data['valid_till']:
39 raise UnprocessableEntity({'pointer': '/data/attributes/valid-till'},
40 "valid_till should be after valid_from")
41
42 @validates_schema(pass_original=True)
43 def validate_order_quantity(self, data, original_data):
44 if 'id' in original_data['data']:
45 access_code = AccessCode.query.filter_by(id=original_data['data']['id']).one()
46
47 if 'min_quantity' not in data:
48 data['min_quantity'] = access_code.min_quantity
49
50 if 'max_quantity' not in data:
51 data['max_quantity'] = access_code.max_quantity
52
53 if 'tickets_number' not in data:
54 data['tickets_number'] = access_code.tickets_number
55
56 min_quantity = data.get('min_quantity', None)
57 max_quantity = data.get('max_quantity', None)
58 if min_quantity is not None and max_quantity is not None:
59 if min_quantity > max_quantity:
60 raise UnprocessableEntity(
61 {'pointer': '/data/attributes/min-quantity'},
62 "min-quantity should be less than max-quantity"
63 )
64
65 if 'tickets_number' in data and 'max_quantity' in data:
66 if data['tickets_number'] < data['max_quantity']:
67 raise UnprocessableEntity({'pointer': '/data/attributes/tickets-number'},
68 "tickets-number should be greater than max-quantity")
69
70 id = fields.Integer(dump_ony=True)
71 code = fields.Str(required=True)
72 access_url = fields.Url(allow_none=True)
73 is_active = fields.Boolean(default=False)
74
75 # For event level access this holds the max. uses
76 tickets_number = fields.Integer(validate=lambda n: n >= 0, allow_none=True)
77
78 min_quantity = fields.Integer(validate=lambda n: n >= 0, allow_none=True)
79 max_quantity = fields.Integer(validate=lambda n: n >= 0, allow_none=True)
80 valid_from = fields.DateTime(required=True)
81 valid_till = fields.DateTime(required=True)
82 event = Relationship(attribute='event',
83 self_view='v1.access_code_event',
84 self_view_kwargs={'id': '<id>'},
85 related_view='v1.event_detail',
86 related_view_kwargs={'access_code_id': '<id>'},
87 schema='EventSchemaPublic',
88 type_='event')
89 marketer = Relationship(attribute='user',
90 self_view='v1.access_code_user',
91 self_view_kwargs={'id': '<id>'},
92 related_view='v1.user_detail',
93 related_view_kwargs={'access_code_id': '<id>'},
94 schema='UserSchemaPublic',
95 type_='user')
96 tickets = Relationship(attribute='tickets',
97 self_view='v1.access_code_tickets',
98 self_view_kwargs={'id': '<id>'},
99 related_view='v1.ticket_list',
100 related_view_kwargs={'access_code_id': '<id>'},
101 schema='TicketSchemaPublic',
102 many=True,
103 type_='ticket')
104
[end of app/api/schema/access_codes.py]
[start of app/api/access_codes.py]
1 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
2 from flask_rest_jsonapi.exceptions import ObjectNotFound
3
4 from app.api.bootstrap import api
5 from app.api.helpers.db import safe_query
6 from app.api.helpers.exceptions import ForbiddenException, UnprocessableEntity
7 from app.api.helpers.permission_manager import has_access
8 from app.api.helpers.permissions import jwt_required
9 from app.api.helpers.query import event_query
10 from app.api.helpers.utilities import require_relationship
11 from app.api.schema.access_codes import AccessCodeSchema
12 from app.models import db
13 from app.models.access_code import AccessCode
14 from app.models.ticket import Ticket
15 from app.models.user import User
16
17
18 class AccessCodeListPost(ResourceList):
19 """
20 Create AccessCodes
21 """
22 def before_post(self, args, kwargs, data):
23 """
24 before post method to check for required relationships and permissions
25 :param args:
26 :param kwargs:
27 :param data:
28 :return:
29 """
30 require_relationship(['event', 'user'], data)
31 if not has_access('is_coorganizer', event_id=data['event']):
32 raise ForbiddenException({'source': ''}, "Minimum Organizer access required")
33
34 schema = AccessCodeSchema
35 methods = ['POST', ]
36 data_layer = {'session': db.session,
37 'model': AccessCode
38 }
39
40
41 class AccessCodeList(ResourceList):
42 """
43 List AccessCodes
44 """
45 def query(self, view_kwargs):
46 """
47 Method to get access codes list based on different view_kwargs
48 :param view_kwargs:
49 :return:
50 """
51 query_ = self.session.query(AccessCode)
52 query_ = event_query(self, query_, view_kwargs, permission='is_coorganizer')
53 if view_kwargs.get('user_id'):
54 user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')
55 if not has_access('is_user_itself', user_id=user.id):
56 raise ForbiddenException({'source': ''}, 'Access Forbidden')
57 query_ = query_.join(User).filter(User.id == user.id)
58 if view_kwargs.get('ticket_id'):
59 ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')
60 if not has_access('is_coorganizer', event_id=ticket.event_id):
61 raise ForbiddenException({'source': ''}, 'Access Forbidden')
62 # access_code - ticket :: many-to-many relationship
63 query_ = AccessCode.query.filter(AccessCode.tickets.any(id=ticket.id))
64 query_
65 return query_
66
67 view_kwargs = True
68 methods = ['GET', ]
69 schema = AccessCodeSchema
70 data_layer = {'session': db.session,
71 'model': AccessCode,
72 'methods': {
73 'query': query,
74 }}
75
76
77 class AccessCodeDetail(ResourceDetail):
78 """
79 AccessCode detail by id or code
80 """
81 def before_get(self, args, kwargs):
82 """
83 before get method of access code details.
84 Check for permissions on the basis of kwargs.
85 :param args:
86 :param kwargs:
87 :return:
88 """
89 # Any registered user can fetch access code details using the code.
90 if kwargs.get('code'):
91 access = db.session.query(AccessCode).filter_by(code=kwargs.get('code')).first()
92 if access:
93 kwargs['id'] = access.id
94 else:
95 raise ObjectNotFound({'parameter': '{code}'}, "Access Code: not found")
96 return
97
98 # Co-organizer or the admin can fetch access code details using the id.
99 if kwargs.get('id'):
100 access = db.session.query(AccessCode).filter_by(id=kwargs.get('id')).one()
101 if not access:
102 raise ObjectNotFound({'parameter': '{id}'}, "Access Code: not found")
103
104 if not has_access('is_coorganizer', event_id=access.event_id):
105 raise UnprocessableEntity({'source': ''},
106 "Please verify your permission")
107
108 decorators = (jwt_required, api.has_permission('is_coorganizer', fetch='event_id',
109 fetch_as="event_id", model=AccessCode, methods="PATCH"),
110 api.has_permission('is_coorganizer_but_not_admin', fetch='event_id',
111 fetch_as="event_id", model=AccessCode, methods="DELETE"),)
112 schema = AccessCodeSchema
113 data_layer = {'session': db.session,
114 'model': AccessCode
115 }
116
117
118 class AccessCodeRelationshipRequired(ResourceRelationship):
119 """
120 AccessCode Relationship Required
121 """
122 decorators = (jwt_required,)
123 methods = ['GET', 'PATCH']
124 schema = AccessCodeSchema
125 data_layer = {'session': db.session,
126 'model': AccessCode}
127
128
129 class AccessCodeRelationshipOptional(ResourceRelationship):
130 """
131 AccessCode Relationship Optional
132 """
133 decorators = (jwt_required,)
134 schema = AccessCodeSchema
135 data_layer = {'session': db.session,
136 'model': AccessCode}
137
[end of app/api/access_codes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/api/access_codes.py b/app/api/access_codes.py
--- a/app/api/access_codes.py
+++ b/app/api/access_codes.py
@@ -1,9 +1,11 @@
from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
from flask_rest_jsonapi.exceptions import ObjectNotFound
+from sqlalchemy.orm.exc import NoResultFound
from app.api.bootstrap import api
from app.api.helpers.db import safe_query
-from app.api.helpers.exceptions import ForbiddenException, UnprocessableEntity
+from app.api.helpers.exceptions import ForbiddenException, ConflictException
+from app.api.helpers.exceptions import UnprocessableEntity
from app.api.helpers.permission_manager import has_access
from app.api.helpers.permissions import jwt_required
from app.api.helpers.query import event_query
@@ -31,11 +33,33 @@
if not has_access('is_coorganizer', event_id=data['event']):
raise ForbiddenException({'source': ''}, "Minimum Organizer access required")
+ def before_create_object(self, data, view_kwargs):
+ """
+ before create object method for AccessCodeListPost Class
+ :param data:
+ :param view_kwargs:
+ :return:
+ """
+ if data.get('tickets', None):
+ for ticket in data['tickets']:
+ # Ensuring that the ticket exists and is hidden.
+ try:
+ ticket_object = self.session.query(Ticket).filter_by(id=int(ticket),
+ deleted_at=None).one()
+ if not ticket_object.is_hidden:
+ raise ConflictException({'pointer': '/data/relationships/tickets'},
+ "Ticket with id {} is public.".format(ticket) +
+ " Access code cannot be applied to public tickets")
+ except NoResultFound:
+ raise ConflictException({'pointer': '/data/relationships/tickets'},
+ "Ticket with id {} does not exists".format(str(ticket)))
+
schema = AccessCodeSchema
methods = ['POST', ]
data_layer = {'session': db.session,
- 'model': AccessCode
- }
+ 'model': AccessCode,
+ 'methods': {'before_create_object': before_create_object
+ }}
class AccessCodeList(ResourceList):
diff --git a/app/api/schema/access_codes.py b/app/api/schema/access_codes.py
--- a/app/api/schema/access_codes.py
+++ b/app/api/schema/access_codes.py
@@ -55,17 +55,16 @@
min_quantity = data.get('min_quantity', None)
max_quantity = data.get('max_quantity', None)
- if min_quantity is not None and max_quantity is not None:
- if min_quantity > max_quantity:
- raise UnprocessableEntity(
+ tickets_number = data.get('tickets_number', None)
+ if min_quantity and max_quantity and (min_quantity > max_quantity):
+ raise UnprocessableEntity(
{'pointer': '/data/attributes/min-quantity'},
"min-quantity should be less than max-quantity"
- )
+ )
- if 'tickets_number' in data and 'max_quantity' in data:
- if data['tickets_number'] < data['max_quantity']:
- raise UnprocessableEntity({'pointer': '/data/attributes/tickets-number'},
- "tickets-number should be greater than max-quantity")
+ if tickets_number and max_quantity and (tickets_number < max_quantity):
+ raise UnprocessableEntity({'pointer': '/data/attributes/tickets-number'},
+ "tickets-number should be greater than max-quantity")
id = fields.Integer(dump_ony=True)
code = fields.Str(required=True)
| {"golden_diff": "diff --git a/app/api/access_codes.py b/app/api/access_codes.py\n--- a/app/api/access_codes.py\n+++ b/app/api/access_codes.py\n@@ -1,9 +1,11 @@\n from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\n from flask_rest_jsonapi.exceptions import ObjectNotFound\n+from sqlalchemy.orm.exc import NoResultFound\n \n from app.api.bootstrap import api\n from app.api.helpers.db import safe_query\n-from app.api.helpers.exceptions import ForbiddenException, UnprocessableEntity\n+from app.api.helpers.exceptions import ForbiddenException, ConflictException\n+from app.api.helpers.exceptions import UnprocessableEntity\n from app.api.helpers.permission_manager import has_access\n from app.api.helpers.permissions import jwt_required\n from app.api.helpers.query import event_query\n@@ -31,11 +33,33 @@\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ForbiddenException({'source': ''}, \"Minimum Organizer access required\")\n \n+ def before_create_object(self, data, view_kwargs):\n+ \"\"\"\n+ before create object method for AccessCodeListPost Class\n+ :param data:\n+ :param view_kwargs:\n+ :return:\n+ \"\"\"\n+ if data.get('tickets', None):\n+ for ticket in data['tickets']:\n+ # Ensuring that the ticket exists and is hidden.\n+ try:\n+ ticket_object = self.session.query(Ticket).filter_by(id=int(ticket),\n+ deleted_at=None).one()\n+ if not ticket_object.is_hidden:\n+ raise ConflictException({'pointer': '/data/relationships/tickets'},\n+ \"Ticket with id {} is public.\".format(ticket) +\n+ \" Access code cannot be applied to public tickets\")\n+ except NoResultFound:\n+ raise ConflictException({'pointer': '/data/relationships/tickets'},\n+ \"Ticket with id {} does not exists\".format(str(ticket)))\n+\n schema = AccessCodeSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n- 'model': AccessCode\n- }\n+ 'model': AccessCode,\n+ 'methods': {'before_create_object': before_create_object\n+ }}\n \n \n class AccessCodeList(ResourceList):\ndiff --git a/app/api/schema/access_codes.py b/app/api/schema/access_codes.py\n--- a/app/api/schema/access_codes.py\n+++ b/app/api/schema/access_codes.py\n@@ -55,17 +55,16 @@\n \n min_quantity = data.get('min_quantity', None)\n max_quantity = data.get('max_quantity', None)\n- if min_quantity is not None and max_quantity is not None:\n- if min_quantity > max_quantity:\n- raise UnprocessableEntity(\n+ tickets_number = data.get('tickets_number', None)\n+ if min_quantity and max_quantity and (min_quantity > max_quantity):\n+ raise UnprocessableEntity(\n {'pointer': '/data/attributes/min-quantity'},\n \"min-quantity should be less than max-quantity\"\n- )\n+ )\n \n- if 'tickets_number' in data and 'max_quantity' in data:\n- if data['tickets_number'] < data['max_quantity']:\n- raise UnprocessableEntity({'pointer': '/data/attributes/tickets-number'},\n- \"tickets-number should be greater than max-quantity\")\n+ if tickets_number and max_quantity and (tickets_number < max_quantity):\n+ raise UnprocessableEntity({'pointer': '/data/attributes/tickets-number'},\n+ \"tickets-number should be greater than max-quantity\")\n \n id = fields.Integer(dump_ony=True)\n code = fields.Str(required=True)\n", "issue": "Access code should only be linked to hidden tickets\n**Describe the bug**\r\n<!-- A clear and concise description of what the bug is. -->\r\nRight now we are allowing access code to be linked to any ticket.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. 
Create an access code linking it to a public ticket\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nWe should only allow creating access codes for hidden tickets.\n", "before_files": [{"content": "from marshmallow import validates_schema\nfrom marshmallow_jsonapi import fields\nfrom marshmallow_jsonapi.flask import Relationship\n\nfrom app.api.helpers.exceptions import UnprocessableEntity\nfrom app.api.helpers.utilities import dasherize\nfrom app.api.schema.base import SoftDeletionSchema\nfrom app.models.access_code import AccessCode\nfrom utils.common import use_defaults\n\n\n@use_defaults()\nclass AccessCodeSchema(SoftDeletionSchema):\n \"\"\"\n Api schema for Access Code Model\n \"\"\"\n\n class Meta:\n \"\"\"\n Meta class for Access Code Api Schema\n \"\"\"\n type_ = 'access-code'\n self_view = 'v1.access_code_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n @validates_schema(pass_original=True)\n def validate_date(self, data, original_data):\n if 'id' in original_data['data']:\n access_code = AccessCode.query.filter_by(id=original_data['data']['id']).one()\n\n if 'valid_from' not in data:\n data['valid_from'] = access_code.valid_from\n\n if 'valid_till' not in data:\n data['valid_till'] = access_code.valid_till\n\n if data['valid_from'] > data['valid_till']:\n raise UnprocessableEntity({'pointer': '/data/attributes/valid-till'},\n \"valid_till should be after valid_from\")\n\n @validates_schema(pass_original=True)\n def validate_order_quantity(self, data, original_data):\n if 'id' in original_data['data']:\n access_code = AccessCode.query.filter_by(id=original_data['data']['id']).one()\n\n if 'min_quantity' not in data:\n data['min_quantity'] = access_code.min_quantity\n\n if 'max_quantity' not in data:\n data['max_quantity'] = access_code.max_quantity\n\n if 'tickets_number' not in data:\n data['tickets_number'] = access_code.tickets_number\n\n min_quantity = data.get('min_quantity', None)\n max_quantity = data.get('max_quantity', None)\n if min_quantity is not None and max_quantity is not None:\n if min_quantity > max_quantity:\n raise UnprocessableEntity(\n {'pointer': '/data/attributes/min-quantity'},\n \"min-quantity should be less than max-quantity\"\n )\n\n if 'tickets_number' in data and 'max_quantity' in data:\n if data['tickets_number'] < data['max_quantity']:\n raise UnprocessableEntity({'pointer': '/data/attributes/tickets-number'},\n \"tickets-number should be greater than max-quantity\")\n\n id = fields.Integer(dump_ony=True)\n code = fields.Str(required=True)\n access_url = fields.Url(allow_none=True)\n is_active = fields.Boolean(default=False)\n\n # For event level access this holds the max. 
uses\n tickets_number = fields.Integer(validate=lambda n: n >= 0, allow_none=True)\n\n min_quantity = fields.Integer(validate=lambda n: n >= 0, allow_none=True)\n max_quantity = fields.Integer(validate=lambda n: n >= 0, allow_none=True)\n valid_from = fields.DateTime(required=True)\n valid_till = fields.DateTime(required=True)\n event = Relationship(attribute='event',\n self_view='v1.access_code_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'access_code_id': '<id>'},\n schema='EventSchemaPublic',\n type_='event')\n marketer = Relationship(attribute='user',\n self_view='v1.access_code_user',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.user_detail',\n related_view_kwargs={'access_code_id': '<id>'},\n schema='UserSchemaPublic',\n type_='user')\n tickets = Relationship(attribute='tickets',\n self_view='v1.access_code_tickets',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.ticket_list',\n related_view_kwargs={'access_code_id': '<id>'},\n schema='TicketSchemaPublic',\n many=True,\n type_='ticket')\n", "path": "app/api/schema/access_codes.py"}, {"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.exceptions import ForbiddenException, UnprocessableEntity\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import jwt_required\nfrom app.api.helpers.query import event_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.access_codes import AccessCodeSchema\nfrom app.models import db\nfrom app.models.access_code import AccessCode\nfrom app.models.ticket import Ticket\nfrom app.models.user import User\n\n\nclass AccessCodeListPost(ResourceList):\n \"\"\"\n Create AccessCodes\n \"\"\"\n def before_post(self, args, kwargs, data):\n \"\"\"\n before post method to check for required relationships and permissions\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event', 'user'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ForbiddenException({'source': ''}, \"Minimum Organizer access required\")\n\n schema = AccessCodeSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n 'model': AccessCode\n }\n\n\nclass AccessCodeList(ResourceList):\n \"\"\"\n List AccessCodes\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n Method to get access codes list based on different view_kwargs\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(AccessCode)\n query_ = event_query(self, query_, view_kwargs, permission='is_coorganizer')\n if view_kwargs.get('user_id'):\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n if not has_access('is_user_itself', user_id=user.id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(User).filter(User.id == user.id)\n if view_kwargs.get('ticket_id'):\n ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')\n if not has_access('is_coorganizer', event_id=ticket.event_id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n # access_code - ticket :: many-to-many relationship\n query_ = AccessCode.query.filter(AccessCode.tickets.any(id=ticket.id))\n query_\n return query_\n\n view_kwargs = True\n methods = ['GET', ]\n schema = AccessCodeSchema\n 
data_layer = {'session': db.session,\n 'model': AccessCode,\n 'methods': {\n 'query': query,\n }}\n\n\nclass AccessCodeDetail(ResourceDetail):\n \"\"\"\n AccessCode detail by id or code\n \"\"\"\n def before_get(self, args, kwargs):\n \"\"\"\n before get method of access code details.\n Check for permissions on the basis of kwargs.\n :param args:\n :param kwargs:\n :return:\n \"\"\"\n # Any registered user can fetch access code details using the code.\n if kwargs.get('code'):\n access = db.session.query(AccessCode).filter_by(code=kwargs.get('code')).first()\n if access:\n kwargs['id'] = access.id\n else:\n raise ObjectNotFound({'parameter': '{code}'}, \"Access Code: not found\")\n return\n\n # Co-organizer or the admin can fetch access code details using the id.\n if kwargs.get('id'):\n access = db.session.query(AccessCode).filter_by(id=kwargs.get('id')).one()\n if not access:\n raise ObjectNotFound({'parameter': '{id}'}, \"Access Code: not found\")\n\n if not has_access('is_coorganizer', event_id=access.event_id):\n raise UnprocessableEntity({'source': ''},\n \"Please verify your permission\")\n\n decorators = (jwt_required, api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=AccessCode, methods=\"PATCH\"),\n api.has_permission('is_coorganizer_but_not_admin', fetch='event_id',\n fetch_as=\"event_id\", model=AccessCode, methods=\"DELETE\"),)\n schema = AccessCodeSchema\n data_layer = {'session': db.session,\n 'model': AccessCode\n }\n\n\nclass AccessCodeRelationshipRequired(ResourceRelationship):\n \"\"\"\n AccessCode Relationship Required\n \"\"\"\n decorators = (jwt_required,)\n methods = ['GET', 'PATCH']\n schema = AccessCodeSchema\n data_layer = {'session': db.session,\n 'model': AccessCode}\n\n\nclass AccessCodeRelationshipOptional(ResourceRelationship):\n \"\"\"\n AccessCode Relationship Optional\n \"\"\"\n decorators = (jwt_required,)\n schema = AccessCodeSchema\n data_layer = {'session': db.session,\n 'model': AccessCode}\n", "path": "app/api/access_codes.py"}]} | 3,095 | 774 |
gh_patches_debug_38422 | rasdani/github-patches | git_diff | encode__starlette-105 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Credentialed CORS standard requests should not respond with wildcard origins
See https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Credentialed_requests_and_wildcards
If a standard request that includes any cookie headers is made, then CORSMiddleware *ought* to respond strictly with the requested origin, rather than a wildcard.
This is actually potentially a bit fiddly since we maybe also need to make sure to *set or add* Vary: Origin in those cases, in order to ensure correct cacheability.
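
A rough sketch of the intended header behaviour, using a plain dictionary rather than Starlette's actual middleware types (the function name and parameters are assumptions for illustration), might look like this:

```python
# Illustrative only: echo the requesting origin for credentialed requests and
# append "Origin" to Vary so caches keep per-origin responses separate.
def apply_cors_origin(headers, origin, has_cookie, allow_all_origins):
    if allow_all_origins and not has_cookie:
        headers["Access-Control-Allow-Origin"] = "*"
    else:
        headers["Access-Control-Allow-Origin"] = origin
        vary = headers.get("Vary")
        headers["Vary"] = f"{vary}, Origin" if vary else "Origin"
    return headers

print(apply_cors_origin({}, "https://example.org", has_cookie=True, allow_all_origins=True))
```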
</issue>
<code>
[start of starlette/middleware/cors.py]
1 from starlette.datastructures import Headers, MutableHeaders, URL
2 from starlette.responses import PlainTextResponse
3 from starlette.types import ASGIApp, ASGIInstance, Scope
4 import functools
5 import typing
6 import re
7
8
9 ALL_METHODS = ("DELETE", "GET", "OPTIONS", "PATCH", "POST", "PUT")
10
11
12 class CORSMiddleware:
13 def __init__(
14 self,
15 app: ASGIApp,
16 allow_origins: typing.Sequence[str] = (),
17 allow_methods: typing.Sequence[str] = ("GET",),
18 allow_headers: typing.Sequence[str] = (),
19 allow_credentials: bool = False,
20 allow_origin_regex: str = None,
21 expose_headers: typing.Sequence[str] = (),
22 max_age: int = 600,
23 ) -> None:
24
25 if "*" in allow_methods:
26 allow_methods = ALL_METHODS
27
28 compiled_allow_origin_regex = None
29 if allow_origin_regex is not None:
30 compiled_allow_origin_regex = re.compile(allow_origin_regex)
31
32 simple_headers = {}
33 if "*" in allow_origins:
34 simple_headers["Access-Control-Allow-Origin"] = "*"
35 if allow_credentials:
36 simple_headers["Access-Control-Allow-Credentials"] = "true"
37 if expose_headers:
38 simple_headers["Access-Control-Expose-Headers"] = ", ".join(expose_headers)
39
40 preflight_headers = {}
41 if "*" in allow_origins:
42 preflight_headers["Access-Control-Allow-Origin"] = "*"
43 else:
44 preflight_headers["Vary"] = "Origin"
45 preflight_headers.update(
46 {
47 "Access-Control-Allow-Methods": ", ".join(allow_methods),
48 "Access-Control-Max-Age": str(max_age),
49 }
50 )
51 if allow_headers and "*" not in allow_headers:
52 preflight_headers["Access-Control-Allow-Headers"] = ", ".join(allow_headers)
53 if allow_credentials:
54 preflight_headers["Access-Control-Allow-Credentials"] = "true"
55
56 self.app = app
57 self.allow_origins = allow_origins
58 self.allow_methods = allow_methods
59 self.allow_headers = allow_headers
60 self.allow_all_origins = "*" in allow_origins
61 self.allow_all_headers = "*" in allow_headers
62 self.allow_origin_regex = compiled_allow_origin_regex
63 self.simple_headers = simple_headers
64 self.preflight_headers = preflight_headers
65
66 def __call__(self, scope: Scope):
67 if scope["type"] == "http":
68 method = scope["method"]
69 headers = Headers(scope["headers"])
70 origin = headers.get("origin")
71
72 if origin is not None:
73 if method == "OPTIONS" and "access-control-request-method" in headers:
74 return self.preflight_response(request_headers=headers)
75 else:
76 return functools.partial(
77 self.simple_response, scope=scope, origin=origin
78 )
79
80 return self.app(scope)
81
82 def is_allowed_origin(self, origin):
83 if self.allow_all_origins:
84 return True
85
86 if self.allow_origin_regex is not None and self.allow_origin_regex.match(
87 origin
88 ):
89 return True
90
91 return origin in self.allow_origins
92
93 def preflight_response(self, request_headers):
94 requested_origin = request_headers["origin"]
95 requested_method = request_headers["access-control-request-method"]
96 requested_headers = request_headers.get("access-control-request-headers")
97 requested_cookie = "cookie" in request_headers
98
99 headers = dict(self.preflight_headers)
100 failures = []
101
102 if self.is_allowed_origin(origin=requested_origin):
103 if not self.allow_all_origins:
104 # If self.allow_all_origins is True, then the "Access-Control-Allow-Origin"
105 # header is already set to "*".
106 # If we only allow specific origins, then we have to mirror back
107 # the Origin header in the response.
108 headers["Access-Control-Allow-Origin"] = requested_origin
109 else:
110 failures.append("origin")
111
112 if requested_method not in self.allow_methods:
113 failures.append("method")
114
115 # If we allow all headers, then we have to mirror back any requested
116 # headers in the response.
117 if self.allow_all_headers and requested_headers is not None:
118 headers["Access-Control-Allow-Headers"] = requested_headers
119 elif requested_headers is not None:
120 for header in requested_headers.split(","):
121 if header.strip() not in self.allow_headers:
122 failures.append("headers")
123
124 # We don't strictly need to use 400 responses here, since its up to
125 # the browser to enforce the CORS policy, but its more informative
126 # if we do.
127 if failures:
128 failure_text = "Disallowed CORS " + ", ".join(failures)
129 return PlainTextResponse(failure_text, status_code=400, headers=headers)
130
131 return PlainTextResponse("OK", status_code=200, headers=headers)
132
133 async def simple_response(self, receive, send, scope=None, origin=None):
134 inner = self.app(scope)
135 send = functools.partial(self.send, send=send, origin=origin)
136 await inner(receive, send)
137
138 async def send(self, message, send=None, origin=None):
139 if message["type"] != "http.response.start":
140 await send(message)
141 return
142
143 message.setdefault("headers", [])
144 headers = MutableHeaders(message["headers"])
145
146 # If we only allow specific origins, then we have to mirror back
147 # the Origin header in the response.
148 if not self.allow_all_origins and self.is_allowed_origin(origin=origin):
149 headers["Access-Control-Allow-Origin"] = origin
150 headers.update(self.simple_headers)
151 await send(message)
152
[end of starlette/middleware/cors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/starlette/middleware/cors.py b/starlette/middleware/cors.py
--- a/starlette/middleware/cors.py
+++ b/starlette/middleware/cors.py
@@ -32,6 +32,8 @@
simple_headers = {}
if "*" in allow_origins:
simple_headers["Access-Control-Allow-Origin"] = "*"
+ else:
+ simple_headers["Vary"] = "Origin"
if allow_credentials:
simple_headers["Access-Control-Allow-Credentials"] = "true"
if expose_headers:
@@ -74,7 +76,7 @@
return self.preflight_response(request_headers=headers)
else:
return functools.partial(
- self.simple_response, scope=scope, origin=origin
+ self.simple_response, scope=scope, request_headers=headers
)
return self.app(scope)
@@ -130,22 +132,31 @@
return PlainTextResponse("OK", status_code=200, headers=headers)
- async def simple_response(self, receive, send, scope=None, origin=None):
+ async def simple_response(self, receive, send, scope=None, request_headers=None):
inner = self.app(scope)
- send = functools.partial(self.send, send=send, origin=origin)
+ send = functools.partial(self.send, send=send, request_headers=request_headers)
await inner(receive, send)
- async def send(self, message, send=None, origin=None):
+ async def send(self, message, send=None, request_headers=None):
if message["type"] != "http.response.start":
await send(message)
return
message.setdefault("headers", [])
headers = MutableHeaders(message["headers"])
+ origin = request_headers["Origin"]
+ has_cookie = "cookie" in request_headers
+
+ # If request includes any cookie headers, then we must respond
+ # with the specific origin instead of '*'.
+ if self.allow_all_origins and has_cookie:
+ self.simple_headers["Access-Control-Allow-Origin"] = origin
# If we only allow specific origins, then we have to mirror back
# the Origin header in the response.
- if not self.allow_all_origins and self.is_allowed_origin(origin=origin):
+ elif not self.allow_all_origins and self.is_allowed_origin(origin=origin):
headers["Access-Control-Allow-Origin"] = origin
+ if "vary" in headers:
+ self.simple_headers["Vary"] = f"{headers.get('vary')}, Origin"
headers.update(self.simple_headers)
await send(message)
| {"golden_diff": "diff --git a/starlette/middleware/cors.py b/starlette/middleware/cors.py\n--- a/starlette/middleware/cors.py\n+++ b/starlette/middleware/cors.py\n@@ -32,6 +32,8 @@\n simple_headers = {}\n if \"*\" in allow_origins:\n simple_headers[\"Access-Control-Allow-Origin\"] = \"*\"\n+ else:\n+ simple_headers[\"Vary\"] = \"Origin\"\n if allow_credentials:\n simple_headers[\"Access-Control-Allow-Credentials\"] = \"true\"\n if expose_headers:\n@@ -74,7 +76,7 @@\n return self.preflight_response(request_headers=headers)\n else:\n return functools.partial(\n- self.simple_response, scope=scope, origin=origin\n+ self.simple_response, scope=scope, request_headers=headers\n )\n \n return self.app(scope)\n@@ -130,22 +132,31 @@\n \n return PlainTextResponse(\"OK\", status_code=200, headers=headers)\n \n- async def simple_response(self, receive, send, scope=None, origin=None):\n+ async def simple_response(self, receive, send, scope=None, request_headers=None):\n inner = self.app(scope)\n- send = functools.partial(self.send, send=send, origin=origin)\n+ send = functools.partial(self.send, send=send, request_headers=request_headers)\n await inner(receive, send)\n \n- async def send(self, message, send=None, origin=None):\n+ async def send(self, message, send=None, request_headers=None):\n if message[\"type\"] != \"http.response.start\":\n await send(message)\n return\n \n message.setdefault(\"headers\", [])\n headers = MutableHeaders(message[\"headers\"])\n+ origin = request_headers[\"Origin\"]\n+ has_cookie = \"cookie\" in request_headers\n+\n+ # If request includes any cookie headers, then we must respond\n+ # with the specific origin instead of '*'.\n+ if self.allow_all_origins and has_cookie:\n+ self.simple_headers[\"Access-Control-Allow-Origin\"] = origin\n \n # If we only allow specific origins, then we have to mirror back\n # the Origin header in the response.\n- if not self.allow_all_origins and self.is_allowed_origin(origin=origin):\n+ elif not self.allow_all_origins and self.is_allowed_origin(origin=origin):\n headers[\"Access-Control-Allow-Origin\"] = origin\n+ if \"vary\" in headers:\n+ self.simple_headers[\"Vary\"] = f\"{headers.get('vary')}, Origin\"\n headers.update(self.simple_headers)\n await send(message)\n", "issue": "Credentialed CORS standard requests should not respond with wildcard origins\nSee https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Credentialed_requests_and_wildcards \r\n\r\nIf a standard request is made, that includes any cookie headers, then CORSMiddleware *ought* to strictly respond with the requested origin, rather than a wildcard.\r\n\r\nThis is actually potentially a bit fiddly since we maybe also need to make sure to *set or add* Vary: Origin in those cases, in order to ensure correct cacheability.\n", "before_files": [{"content": "from starlette.datastructures import Headers, MutableHeaders, URL\nfrom starlette.responses import PlainTextResponse\nfrom starlette.types import ASGIApp, ASGIInstance, Scope\nimport functools\nimport typing\nimport re\n\n\nALL_METHODS = (\"DELETE\", \"GET\", \"OPTIONS\", \"PATCH\", \"POST\", \"PUT\")\n\n\nclass CORSMiddleware:\n def __init__(\n self,\n app: ASGIApp,\n allow_origins: typing.Sequence[str] = (),\n allow_methods: typing.Sequence[str] = (\"GET\",),\n allow_headers: typing.Sequence[str] = (),\n allow_credentials: bool = False,\n allow_origin_regex: str = None,\n expose_headers: typing.Sequence[str] = (),\n max_age: int = 600,\n ) -> None:\n\n if \"*\" in allow_methods:\n allow_methods = ALL_METHODS\n\n 
compiled_allow_origin_regex = None\n if allow_origin_regex is not None:\n compiled_allow_origin_regex = re.compile(allow_origin_regex)\n\n simple_headers = {}\n if \"*\" in allow_origins:\n simple_headers[\"Access-Control-Allow-Origin\"] = \"*\"\n if allow_credentials:\n simple_headers[\"Access-Control-Allow-Credentials\"] = \"true\"\n if expose_headers:\n simple_headers[\"Access-Control-Expose-Headers\"] = \", \".join(expose_headers)\n\n preflight_headers = {}\n if \"*\" in allow_origins:\n preflight_headers[\"Access-Control-Allow-Origin\"] = \"*\"\n else:\n preflight_headers[\"Vary\"] = \"Origin\"\n preflight_headers.update(\n {\n \"Access-Control-Allow-Methods\": \", \".join(allow_methods),\n \"Access-Control-Max-Age\": str(max_age),\n }\n )\n if allow_headers and \"*\" not in allow_headers:\n preflight_headers[\"Access-Control-Allow-Headers\"] = \", \".join(allow_headers)\n if allow_credentials:\n preflight_headers[\"Access-Control-Allow-Credentials\"] = \"true\"\n\n self.app = app\n self.allow_origins = allow_origins\n self.allow_methods = allow_methods\n self.allow_headers = allow_headers\n self.allow_all_origins = \"*\" in allow_origins\n self.allow_all_headers = \"*\" in allow_headers\n self.allow_origin_regex = compiled_allow_origin_regex\n self.simple_headers = simple_headers\n self.preflight_headers = preflight_headers\n\n def __call__(self, scope: Scope):\n if scope[\"type\"] == \"http\":\n method = scope[\"method\"]\n headers = Headers(scope[\"headers\"])\n origin = headers.get(\"origin\")\n\n if origin is not None:\n if method == \"OPTIONS\" and \"access-control-request-method\" in headers:\n return self.preflight_response(request_headers=headers)\n else:\n return functools.partial(\n self.simple_response, scope=scope, origin=origin\n )\n\n return self.app(scope)\n\n def is_allowed_origin(self, origin):\n if self.allow_all_origins:\n return True\n\n if self.allow_origin_regex is not None and self.allow_origin_regex.match(\n origin\n ):\n return True\n\n return origin in self.allow_origins\n\n def preflight_response(self, request_headers):\n requested_origin = request_headers[\"origin\"]\n requested_method = request_headers[\"access-control-request-method\"]\n requested_headers = request_headers.get(\"access-control-request-headers\")\n requested_cookie = \"cookie\" in request_headers\n\n headers = dict(self.preflight_headers)\n failures = []\n\n if self.is_allowed_origin(origin=requested_origin):\n if not self.allow_all_origins:\n # If self.allow_all_origins is True, then the \"Access-Control-Allow-Origin\"\n # header is already set to \"*\".\n # If we only allow specific origins, then we have to mirror back\n # the Origin header in the response.\n headers[\"Access-Control-Allow-Origin\"] = requested_origin\n else:\n failures.append(\"origin\")\n\n if requested_method not in self.allow_methods:\n failures.append(\"method\")\n\n # If we allow all headers, then we have to mirror back any requested\n # headers in the response.\n if self.allow_all_headers and requested_headers is not None:\n headers[\"Access-Control-Allow-Headers\"] = requested_headers\n elif requested_headers is not None:\n for header in requested_headers.split(\",\"):\n if header.strip() not in self.allow_headers:\n failures.append(\"headers\")\n\n # We don't strictly need to use 400 responses here, since its up to\n # the browser to enforce the CORS policy, but its more informative\n # if we do.\n if failures:\n failure_text = \"Disallowed CORS \" + \", \".join(failures)\n return 
PlainTextResponse(failure_text, status_code=400, headers=headers)\n\n return PlainTextResponse(\"OK\", status_code=200, headers=headers)\n\n async def simple_response(self, receive, send, scope=None, origin=None):\n inner = self.app(scope)\n send = functools.partial(self.send, send=send, origin=origin)\n await inner(receive, send)\n\n async def send(self, message, send=None, origin=None):\n if message[\"type\"] != \"http.response.start\":\n await send(message)\n return\n\n message.setdefault(\"headers\", [])\n headers = MutableHeaders(message[\"headers\"])\n\n # If we only allow specific origins, then we have to mirror back\n # the Origin header in the response.\n if not self.allow_all_origins and self.is_allowed_origin(origin=origin):\n headers[\"Access-Control-Allow-Origin\"] = origin\n headers.update(self.simple_headers)\n await send(message)\n", "path": "starlette/middleware/cors.py"}]} | 2,205 | 562 |
gh_patches_debug_16535 | rasdani/github-patches | git_diff | bridgecrewio__checkov-4243 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKV_AWS_118 Fails With MonitoringInterval Integer Value
**Describe the issue**
CKV_AWS_118 fails if the `MonitoringInterval` value is not wrapped in double quotes despite the fact that the source code says it should allow ints and strings.
**Examples**
```
RDSinstance:
Type: AWS::RDS::DBInstance
Properties:
DBClusterIdentifier: !Ref DBCluster
DBInstanceClass: !Ref DbType
DBInstanceIdentifier: !Sub ${AppName}-${EnvironmentName}
DBParameterGroupName: !Ref DbParameterGroup
DBSubnetGroupName: !Ref DBSubnetGroup
Engine: aurora-mysql
MonitoringInterval: 60
MonitoringRoleArn: !GetAtt RdsMonitoringRole.Arn
PubliclyAccessible: 'false'
```
**Version (please complete the following information):**
- Checkov Version 2.2.255 (CLI)
**Additional context**
The test failure happens with the CLI and also using a GitHub Action `bridgecrewio/checkov-action@master`

</issue>
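The behaviour traces to the index-normalisation loop in `scan_resource_conf` in the code below: `search_deep_keys` returns each matched path with the resolved value as its final element, and the loop rewrites every integer in that list, including a trailing integer value such as `60`, into a bracketed string before the value is compared against the expected values. A short, self-contained illustration follows; the match list is hypothetical and only mirrors the template from the report.

```python
# Hypothetical match list: path parts plus the resolved value as the last element.
match = ["Resources", "RDSinstance", "Properties", "MonitoringInterval", 60]

# Index-normalisation loop as it behaves before the fix: it also visits the final value.
for i in range(0, len(match)):
    if type(match[i]) == int:
        match[i] = f"[{match[i]}]"

print(match[-1])  # "[60]" -- the unquoted integer can no longer equal a numeric expected value
```

Stopping the loop one element early keeps the trailing `60` intact, so an unquoted `MonitoringInterval` remains comparable to the check's expected values.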
<code>
[start of checkov/cloudformation/checks/resource/base_resource_value_check.py]
1 import re
2 from abc import abstractmethod
3 from collections.abc import Iterable
4 from typing import List, Any, Dict
5
6 from checkov.cloudformation.checks.resource.base_resource_check import BaseResourceCheck
7 from checkov.cloudformation.context_parser import ContextParser
8 from checkov.common.parsers.node import StrNode, DictNode
9 from checkov.common.models.consts import ANY_VALUE
10 from checkov.common.models.enums import CheckResult, CheckCategories
11 from checkov.common.util.type_forcers import force_list
12 from checkov.common.util.var_utils import is_cloudformation_variable_dependent
13
14 VARIABLE_DEPENDANT_REGEX = re.compile(r"(?:Ref)\.[^\s]+")
15
16
17 class BaseResourceValueCheck(BaseResourceCheck):
18 def __init__(
19 self,
20 name: str,
21 id: str,
22 categories: "Iterable[CheckCategories]",
23 supported_resources: "Iterable[str]",
24 missing_block_result: CheckResult = CheckResult.FAILED,
25 ) -> None:
26 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
27 self.missing_block_result = missing_block_result
28
29 @staticmethod
30 def _filter_key_path(path: str) -> List[str]:
31 """
32 Filter an attribute path to contain only named attributes by dropping array indices from the path)
33 :param path: valid JSONPath of an attribute
34 :return: List of named attributes with respect to the input JSONPath order
35 """
36 regex = re.compile(r"^\[?\d+\]?$")
37 return [x for x in path.split("/") if not re.search(regex, x)]
38
39 @staticmethod
40 def _is_variable_dependant(value: Any) -> bool:
41 return is_cloudformation_variable_dependent(value)
42
43 @staticmethod
44 def _is_nesting_key(inspected_attributes: List[str], key: str) -> bool:
45 """
46 Resolves whether a key is a subset of the inspected nesting attributes
47 :param inspected_attributes: list of nesting attributes
48 :param key: JSONPath key of an attribute
49 :return: True/False
50 """
51 return any(x in key for x in inspected_attributes)
52
53 def scan_resource_conf(self, conf: Dict[StrNode, DictNode]) -> CheckResult:
54 inspected_key = self.get_inspected_key()
55 expected_values = self.get_expected_values()
56 path_elements = inspected_key.split("/")
57 matches = ContextParser.search_deep_keys(path_elements[-1], conf, [])
58 if len(matches) > 0:
59 for match in matches:
60 # CFN files are parsed differently from terraform, which causes the path search above to behave differently.
61 # The tesult is path parts with integer indexes, instead of strings like '[0]'. This logic replaces
62 # those, allowing inspected_keys in checks to use the same syntax.
63 for i in range(0, len(match)):
64 if type(match[i]) == int:
65 match[i] = f"[{match[i]}]"
66
67 if match[:-1] == path_elements:
68 # Inspected key exists
69 value = match[-1]
70 if ANY_VALUE in expected_values and value is not None and (not isinstance(value, str) or value):
71 # Key is found on the configuration - if it accepts any value, the check is PASSED
72 return CheckResult.PASSED
73 if isinstance(value, list) and len(value) == 1:
74 value = value[0]
75 if self._is_variable_dependant(value):
76 # If the tested attribute is variable-dependant, then result is PASSED
77 return CheckResult.PASSED
78 if value in expected_values:
79 return CheckResult.PASSED
80
81 # handle boolean case sensitivity (e.g., CFN accepts the string "true" as a boolean)
82 if isinstance(value, str) and value.lower() in ('true', 'false'):
83 value = value.lower() == 'true'
84 if value in expected_values:
85 return CheckResult.PASSED
86 return CheckResult.FAILED
87
88 return self.missing_block_result
89
90 @abstractmethod
91 def get_inspected_key(self) -> str:
92 """
93 :return: JSONPath syntax path of the checked attribute
94 """
95 raise NotImplementedError()
96
97 def get_expected_values(self) -> List[Any]:
98 """
99 Override the method with the list of acceptable values if the check has more than one possible expected value, given
100 the inspected key
101 :return: List of expected values, defaults to a list of the expected value
102 """
103 return [self.get_expected_value()]
104
105 def get_expected_value(self) -> Any:
106 """
107 Returns the default expected value, governed by provider best practices
108 """
109 return True
110
111 def get_evaluated_keys(self) -> List[str]:
112 return force_list(self.get_inspected_key())
113
[end of checkov/cloudformation/checks/resource/base_resource_value_check.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/cloudformation/checks/resource/base_resource_value_check.py b/checkov/cloudformation/checks/resource/base_resource_value_check.py
--- a/checkov/cloudformation/checks/resource/base_resource_value_check.py
+++ b/checkov/cloudformation/checks/resource/base_resource_value_check.py
@@ -60,7 +60,8 @@
# CFN files are parsed differently from terraform, which causes the path search above to behave differently.
# The tesult is path parts with integer indexes, instead of strings like '[0]'. This logic replaces
# those, allowing inspected_keys in checks to use the same syntax.
- for i in range(0, len(match)):
+ # The last value shouldn't be changed, because it could be indeed a valid number
+ for i in range(0, len(match) - 1):
if type(match[i]) == int:
match[i] = f"[{match[i]}]"
| {"golden_diff": "diff --git a/checkov/cloudformation/checks/resource/base_resource_value_check.py b/checkov/cloudformation/checks/resource/base_resource_value_check.py\n--- a/checkov/cloudformation/checks/resource/base_resource_value_check.py\n+++ b/checkov/cloudformation/checks/resource/base_resource_value_check.py\n@@ -60,7 +60,8 @@\n # CFN files are parsed differently from terraform, which causes the path search above to behave differently.\n # The tesult is path parts with integer indexes, instead of strings like '[0]'. This logic replaces\n # those, allowing inspected_keys in checks to use the same syntax.\n- for i in range(0, len(match)):\n+ # The last value shouldn't be changed, because it could be indeed a valid number\n+ for i in range(0, len(match) - 1):\n if type(match[i]) == int:\n match[i] = f\"[{match[i]}]\"\n", "issue": "CKV_AWS_118 Fails With MonitoringInterval Integer Value\n**Describe the issue**\r\nCKV_AWS_118 fails if the `MonitoringInterval` value is not wrapped in double quotes despite the fact that the source code says it should allow ints and strings.\r\n\r\n**Examples**\r\n```\r\nRDSinstance:\r\n Type: AWS::RDS::DBInstance\r\n Properties:\r\n DBClusterIdentifier: !Ref DBCluster\r\n DBInstanceClass: !Ref DbType\r\n DBInstanceIdentifier: !Sub ${AppName}-${EnvironmentName}\r\n DBParameterGroupName: !Ref DbParameterGroup\r\n DBSubnetGroupName: !Ref DBSubnetGroup\r\n Engine: aurora-mysql\r\n MonitoringInterval: 60\r\n MonitoringRoleArn: !GetAtt RdsMonitoringRole.Arn\r\n PubliclyAccessible: 'false'\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version 2.2.255 (CLI)\r\n\r\n**Additional context**\r\nThe test failure happens with the CLI and also using a GItHub Action `bridgecrewio/checkov-action@master`\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "import re\nfrom abc import abstractmethod\nfrom collections.abc import Iterable\nfrom typing import List, Any, Dict\n\nfrom checkov.cloudformation.checks.resource.base_resource_check import BaseResourceCheck\nfrom checkov.cloudformation.context_parser import ContextParser\nfrom checkov.common.parsers.node import StrNode, DictNode\nfrom checkov.common.models.consts import ANY_VALUE\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.common.util.type_forcers import force_list\nfrom checkov.common.util.var_utils import is_cloudformation_variable_dependent\n\nVARIABLE_DEPENDANT_REGEX = re.compile(r\"(?:Ref)\\.[^\\s]+\")\n\n\nclass BaseResourceValueCheck(BaseResourceCheck):\n def __init__(\n self,\n name: str,\n id: str,\n categories: \"Iterable[CheckCategories]\",\n supported_resources: \"Iterable[str]\",\n missing_block_result: CheckResult = CheckResult.FAILED,\n ) -> None:\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n self.missing_block_result = missing_block_result\n\n @staticmethod\n def _filter_key_path(path: str) -> List[str]:\n \"\"\"\n Filter an attribute path to contain only named attributes by dropping array indices from the path)\n :param path: valid JSONPath of an attribute\n :return: List of named attributes with respect to the input JSONPath order\n \"\"\"\n regex = re.compile(r\"^\\[?\\d+\\]?$\")\n return [x for x in path.split(\"/\") if not re.search(regex, x)]\n\n @staticmethod\n def _is_variable_dependant(value: Any) -> bool:\n return is_cloudformation_variable_dependent(value)\n\n @staticmethod\n def _is_nesting_key(inspected_attributes: List[str], key: str) -> bool:\n \"\"\"\n 
Resolves whether a key is a subset of the inspected nesting attributes\n :param inspected_attributes: list of nesting attributes\n :param key: JSONPath key of an attribute\n :return: True/False\n \"\"\"\n return any(x in key for x in inspected_attributes)\n\n def scan_resource_conf(self, conf: Dict[StrNode, DictNode]) -> CheckResult:\n inspected_key = self.get_inspected_key()\n expected_values = self.get_expected_values()\n path_elements = inspected_key.split(\"/\")\n matches = ContextParser.search_deep_keys(path_elements[-1], conf, [])\n if len(matches) > 0:\n for match in matches:\n # CFN files are parsed differently from terraform, which causes the path search above to behave differently.\n # The tesult is path parts with integer indexes, instead of strings like '[0]'. This logic replaces\n # those, allowing inspected_keys in checks to use the same syntax.\n for i in range(0, len(match)):\n if type(match[i]) == int:\n match[i] = f\"[{match[i]}]\"\n\n if match[:-1] == path_elements:\n # Inspected key exists\n value = match[-1]\n if ANY_VALUE in expected_values and value is not None and (not isinstance(value, str) or value):\n # Key is found on the configuration - if it accepts any value, the check is PASSED\n return CheckResult.PASSED\n if isinstance(value, list) and len(value) == 1:\n value = value[0]\n if self._is_variable_dependant(value):\n # If the tested attribute is variable-dependant, then result is PASSED\n return CheckResult.PASSED\n if value in expected_values:\n return CheckResult.PASSED\n\n # handle boolean case sensitivity (e.g., CFN accepts the string \"true\" as a boolean)\n if isinstance(value, str) and value.lower() in ('true', 'false'):\n value = value.lower() == 'true'\n if value in expected_values:\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n return self.missing_block_result\n\n @abstractmethod\n def get_inspected_key(self) -> str:\n \"\"\"\n :return: JSONPath syntax path of the checked attribute\n \"\"\"\n raise NotImplementedError()\n\n def get_expected_values(self) -> List[Any]:\n \"\"\"\n Override the method with the list of acceptable values if the check has more than one possible expected value, given\n the inspected key\n :return: List of expected values, defaults to a list of the expected value\n \"\"\"\n return [self.get_expected_value()]\n\n def get_expected_value(self) -> Any:\n \"\"\"\n Returns the default expected value, governed by provider best practices\n \"\"\"\n return True\n\n def get_evaluated_keys(self) -> List[str]:\n return force_list(self.get_inspected_key())\n", "path": "checkov/cloudformation/checks/resource/base_resource_value_check.py"}]} | 2,124 | 202 |
gh_patches_debug_59532 | rasdani/github-patches | git_diff | mit-ll-responsible-ai__hydra-zen-97 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PEP 561 compatibility
Hi,
Would it be possible to make hydra-zen compliant with [PEP 561](https://www.python.org/dev/peps/pep-0561) by distributing a `py.typed` file with the package?
Currently I'm getting `Skipping analyzing "hydra_zen": found module but no type hints or library stubs` when I run mypy on a test file. Here are steps to reproduce this error:
```text
$ pip install hydra-zen mypy
...
Successfully installed PyYAML-5.4.1 antlr4-python3-runtime-4.8 hydra-core-1.1.1 hydra-zen-0.2.0 mypy-0.910 mypy-extensions-0.4.3 omegaconf-2.1.1 toml-0.10.2 typing-extensions-3.10.0.2
...
$ echo "from hydra_zen import builds" > tmp.py
$ mypy tmp.py
tmp.py:1: error: Skipping analyzing "hydra_zen": found module but no type hints or library stubs
tmp.py:1: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports
Found 1 error in 1 file (checked 1 source file)
```
I believe that adding an empty `py.typed` file to the `src/hydra_zen` directory (and modifying `setup.py` so that the `py.typed` file is distributed with the `hydra-zen` package) would make it possible for type checkers following PEP 561 to discover the type hints in `src`.
(I'd be happy to submit a PR to this effect.)
</issue>
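For reference, PEP 561 compliance needs two pieces working together: an empty `py.typed` marker inside the installed package, and packaging metadata that actually ships that marker. A minimal sketch under the src layout described above follows; it omits most of the project's real `setup()` arguments, and the `package_data` entry is the relevant addition.

```python
# Sketch only: make the hydra_zen package advertise its inline type hints (PEP 561).
from pathlib import Path
from setuptools import find_packages, setup

# 1. An empty marker file inside the package is sufficient per PEP 561.
Path("src/hydra_zen/py.typed").touch()

# 2. Ship the marker with the built distribution so installed copies keep it.
setup(
    name="hydra_zen",
    packages=find_packages(where="src", exclude=["tests", "tests.*"]),
    package_dir={"": "src"},
    package_data={"hydra_zen": ["py.typed"]},
)
```

With both pieces in place, mypy should pick up the package's annotations instead of reporting the missing-stubs error shown above.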
<code>
[start of setup.py]
1 # Copyright (c) 2021 Massachusetts Institute of Technology
2 # SPDX-License-Identifier: MIT
3
4 from setuptools import find_packages, setup
5
6 import versioneer
7
8 DISTNAME = "hydra_zen"
9 LICENSE = "MIT"
10 AUTHOR = "Justin Goodwin, Ryan Soklaski"
11 AUTHOR_EMAIL = "[email protected]"
12 URL = "https://github.com/mit-ll-responsible-ai/hydra_zen"
13 CLASSIFIERS = [
14 "Development Status :: 4 - Beta",
15 "License :: OSI Approved :: MIT License",
16 "Operating System :: OS Independent",
17 "Intended Audience :: Science/Research",
18 "Programming Language :: Python :: 3.6",
19 "Programming Language :: Python :: 3.7",
20 "Programming Language :: Python :: 3.8",
21 "Programming Language :: Python :: 3.9",
22 "Topic :: Scientific/Engineering",
23 ]
24 KEYWORDS = "machine learning research configuration scalable reproducible"
25 INSTALL_REQUIRES = [
26 "hydra-core >= 1.1.0",
27 "typing-extensions >= 3.7.4.1",
28 ]
29 TESTS_REQUIRE = [
30 "pytest >= 3.8",
31 "hypothesis >= 5.32.0",
32 ]
33
34 DESCRIPTION = "Utilities for making hydra scale to ML workflows"
35 LONG_DESCRIPTION = """
36 hydra-zen helps you configure your project using the power of Hydra, while enjoying the Zen of Python!
37
38 hydra-zen eliminates the boilerplate code that you write to configure, orchestrate, and organize the results of large-scale projects, such as machine learning experiments. It does so by providing Hydra-compatible tools that dynamically generate "structured configurations" of your code, and enables Python-centric workflows for running configured instances of your code.
39
40 hydra-zen offers:
41
42 - Functions for automatically and dynamically generating structured configs that can be used to fully or partially instantiate objects in your application.
43 - The ability to launch Hydra jobs, complete with parameter sweeps and multi-run configurations, from within a notebook or any other Python environment.
44 - Incisive type annotations that provide enriched context about your project's configurations to IDEs, type checkers, and other tooling.
45 - Runtime validation of configurations to catch mistakes before your application launches.
46 - Equal support for both object-oriented libraries (e.g., torch.nn) and functional ones (e.g., jax and numpy).
47
48 These functions and capabilities can be used to great effect alongside PyTorch Lightning to design boilerplate-free machine learning projects!
49 """
50
51
52 setup(
53 name=DISTNAME,
54 version=versioneer.get_version(),
55 cmdclass=versioneer.get_cmdclass(),
56 license=LICENSE,
57 author=AUTHOR,
58 author_email=AUTHOR_EMAIL,
59 classifiers=CLASSIFIERS,
60 keywords=KEYWORDS,
61 description=DESCRIPTION,
62 long_description=LONG_DESCRIPTION,
63 install_requires=INSTALL_REQUIRES,
64 tests_require=TESTS_REQUIRE,
65 url=URL,
66 download_url="https://github.com/mit-ll-responsible-ai/hydra-zen/tarball/"
67 + versioneer.get_version(),
68 python_requires=">=3.6",
69 packages=find_packages(where="src", exclude=["tests", "tests.*"]),
70 package_dir={"": "src"},
71 )
72
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -68,4 +68,5 @@
python_requires=">=3.6",
packages=find_packages(where="src", exclude=["tests", "tests.*"]),
package_dir={"": "src"},
+ package_data={"hydra_zen": ["py.typed"]}
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -68,4 +68,5 @@\n python_requires=\">=3.6\",\n packages=find_packages(where=\"src\", exclude=[\"tests\", \"tests.*\"]),\n package_dir={\"\": \"src\"},\n+ package_data={\"hydra_zen\": [\"py.typed\"]}\n )\n", "issue": "PEP 561 compatibility\nHi,\r\n\r\nWould it be possible to make hydra-zen compliant with [PEP 561](https://www.python.org/dev/peps/pep-0561) by distributing a `py.typed` file with the package?\r\n\r\nCurrently I'm getting `Skipping analyzing \"hydra_zen\": found module but no type hints or library stubs` when I run mypy on a test file. Here are steps to reproduce this error:\r\n```text\r\n$ pip install hydra-zen mypy\r\n...\r\nSuccessfully installed PyYAML-5.4.1 antlr4-python3-runtime-4.8 hydra-core-1.1.1 hydra-zen-0.2.0 mypy-0.910 mypy-extensions-0.4.3 omegaconf-2.1.1 toml-0.10.2 typing-extensions-3.10.0.2\r\n...\r\n$ echo \"from hydra_zen import builds\" > tmp.py\r\n$ mypy tmp.py\r\ntmp.py:1: error: Skipping analyzing \"hydra_zen\": found module but no type hints or library stubs\r\ntmp.py:1: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports\r\nFound 1 error in 1 file (checked 1 source file)\r\n```\r\n\r\nI believe that adding an empty `py.typed` file to the `src/hydra_zen` directory (and modifying `setup.py` so that the `py.typed` file is distributed with the `hydra-zen` package) would make it possible for type checkers following PEP 561 to discover the type hints in `src`.\r\n(I'd be happy to submit a PR to this effect.)\n", "before_files": [{"content": "# Copyright (c) 2021 Massachusetts Institute of Technology\n# SPDX-License-Identifier: MIT\n\nfrom setuptools import find_packages, setup\n\nimport versioneer\n\nDISTNAME = \"hydra_zen\"\nLICENSE = \"MIT\"\nAUTHOR = \"Justin Goodwin, Ryan Soklaski\"\nAUTHOR_EMAIL = \"[email protected]\"\nURL = \"https://github.com/mit-ll-responsible-ai/hydra_zen\"\nCLASSIFIERS = [\n \"Development Status :: 4 - Beta\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Intended Audience :: Science/Research\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Scientific/Engineering\",\n]\nKEYWORDS = \"machine learning research configuration scalable reproducible\"\nINSTALL_REQUIRES = [\n \"hydra-core >= 1.1.0\",\n \"typing-extensions >= 3.7.4.1\",\n]\nTESTS_REQUIRE = [\n \"pytest >= 3.8\",\n \"hypothesis >= 5.32.0\",\n]\n\nDESCRIPTION = \"Utilities for making hydra scale to ML workflows\"\nLONG_DESCRIPTION = \"\"\"\nhydra-zen helps you configure your project using the power of Hydra, while enjoying the Zen of Python!\n\nhydra-zen eliminates the boilerplate code that you write to configure, orchestrate, and organize the results of large-scale projects, such as machine learning experiments. 
It does so by providing Hydra-compatible tools that dynamically generate \"structured configurations\" of your code, and enables Python-centric workflows for running configured instances of your code.\n\nhydra-zen offers:\n\n - Functions for automatically and dynamically generating structured configs that can be used to fully or partially instantiate objects in your application.\n - The ability to launch Hydra jobs, complete with parameter sweeps and multi-run configurations, from within a notebook or any other Python environment.\n - Incisive type annotations that provide enriched context about your project's configurations to IDEs, type checkers, and other tooling.\n - Runtime validation of configurations to catch mistakes before your application launches.\n - Equal support for both object-oriented libraries (e.g., torch.nn) and functional ones (e.g., jax and numpy).\n\nThese functions and capabilities can be used to great effect alongside PyTorch Lightning to design boilerplate-free machine learning projects!\n\"\"\"\n\n\nsetup(\n name=DISTNAME,\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n license=LICENSE,\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n classifiers=CLASSIFIERS,\n keywords=KEYWORDS,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n install_requires=INSTALL_REQUIRES,\n tests_require=TESTS_REQUIRE,\n url=URL,\n download_url=\"https://github.com/mit-ll-responsible-ai/hydra-zen/tarball/\"\n + versioneer.get_version(),\n python_requires=\">=3.6\",\n packages=find_packages(where=\"src\", exclude=[\"tests\", \"tests.*\"]),\n package_dir={\"\": \"src\"},\n)\n", "path": "setup.py"}]} | 1,746 | 81 |
gh_patches_debug_9569 | rasdani/github-patches | git_diff | ckan__ckan-2563 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Include the main.debug.css
Hi, I'm new to CKAN in my organization and turned debug to true for development and encountered the `AttributeError: 'module' object has no attribute 'css/main.debug.css'` error. It took me a while to figure out that I had to compile the less to get it.
Wouldn't it be easier to include this file so that debug mode automatically works without needing to change anything else?
</issue>
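The error comes from the debug-only substitution in `set_main_css` in the code below: with `debug: true`, CKAN switches to `/base/css/main.debug.css`, a stylesheet that only exists after the Less sources have been compiled. A simplified model of that branch follows; the helper name is illustrative, and config access plus the global assignment are stripped out.

```python
# Illustrative reduction of the stylesheet swap performed by set_main_css().
def select_css(css_file, debug):
    if debug and css_file == '/base/css/main.css':
        return '/base/css/main.debug.css'  # only present after a Less build
    return css_file

print(select_css('/base/css/main.css', debug=True))   # /base/css/main.debug.css (may be missing)
print(select_css('/base/css/main.css', debug=False))  # /base/css/main.css
```

Either shipping the pre-built debug stylesheet or dropping the substitution so that `main.css` is always served would avoid the missing-resource error.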
<code>
[start of ckan/lib/app_globals.py]
1 ''' The application's Globals object '''
2
3 import logging
4 import time
5 from threading import Lock
6 import re
7
8 from paste.deploy.converters import asbool
9 from pylons import config
10
11 import ckan
12 import ckan.model as model
13 import ckan.logic as logic
14
15
16 log = logging.getLogger(__name__)
17
18
19 # mappings translate between config settings and globals because our naming
20 # conventions are not well defined and/or implemented
21 mappings = {
22 # 'config_key': 'globals_key',
23 }
24
25
26 # This mapping is only used to define the configuration options (from the
27 # `config` object) that should be copied to the `app_globals` (`g`) object.
28 app_globals_from_config_details = {
29 'ckan.site_title': {},
30 'ckan.site_logo': {},
31 'ckan.site_url': {},
32 'ckan.site_description': {},
33 'ckan.site_about': {},
34 'ckan.site_intro_text': {},
35 'ckan.site_custom_css': {},
36 'ckan.favicon': {}, # default gets set in config.environment.py
37 'ckan.template_head_end': {},
38 'ckan.template_footer_end': {},
39 # has been setup in load_environment():
40 'ckan.site_id': {},
41 'ckan.recaptcha.publickey': {'name': 'recaptcha_publickey'},
42 'ckan.recaptcha.version': {'name': 'recaptcha_version', 'default': '1'},
43 'ckan.template_title_deliminater': {'default': '-'},
44 'ckan.template_head_end': {},
45 'ckan.template_footer_end': {},
46 'ckan.dumps_url': {},
47 'ckan.dumps_format': {},
48 'ofs.impl': {'name': 'ofs_impl'},
49 'ckan.homepage_style': {'default': '1'},
50
51 # split string
52 'search.facets': {'default': 'organization groups tags res_format license_id',
53 'type': 'split',
54 'name': 'facets'},
55 'package_hide_extras': {'type': 'split'},
56 'ckan.plugins': {'type': 'split'},
57
58 # bool
59 'debug': {'default': 'false', 'type' : 'bool'},
60 'ckan.debug_supress_header' : {'default': 'false', 'type' : 'bool'},
61 'ckan.legacy_templates' : {'default': 'false', 'type' : 'bool'},
62 'ckan.tracking_enabled' : {'default': 'false', 'type' : 'bool'},
63
64 # int
65 'ckan.datasets_per_page': {'default': '20', 'type': 'int'},
66 'ckan.activity_list_limit': {'default': '30', 'type': 'int'},
67 'search.facets.default': {'default': '10', 'type': 'int',
68 'name': 'facets_default_number'},
69 }
70
71
72 # A place to store the origional config options of we override them
73 _CONFIG_CACHE = {}
74
75 def set_main_css(css_file):
76 ''' Sets the main_css using debug css if needed. The css_file
77 must be of the form file.css '''
78 assert css_file.endswith('.css')
79 if config.get('debug') and css_file == '/base/css/main.css':
80 new_css = '/base/css/main.debug.css'
81 else:
82 new_css = css_file
83 # FIXME we should check the css file exists
84 app_globals.main_css = str(new_css)
85
86
87 def set_app_global(key, value):
88 '''
89 Set a new key on the app_globals (g) object
90
91 It will process the value according to the options on
92 app_globals_from_config_details (if any)
93 '''
94 key, value = process_app_global(key, value)
95 setattr(app_globals, key, value)
96
97
98 def process_app_global(key, value):
99 '''
100 Tweak a key, value pair meant to be set on the app_globals (g) object
101
102 According to the options on app_globals_from_config_details (if any)
103 '''
104 options = app_globals_from_config_details.get(key)
105 key = get_globals_key(key)
106 if options:
107 if 'name' in options:
108 key = options['name']
109 value = value or options.get('default', '')
110
111 data_type = options.get('type')
112 if data_type == 'bool':
113 value = asbool(value)
114 elif data_type == 'int':
115 value = int(value)
116 elif data_type == 'split':
117 value = value.split()
118
119 return key, value
120
121
122 def get_globals_key(key):
123 # create our globals key
124 # these can be specified in mappings or else we remove
125 # the `ckan.` part this is to keep the existing namings
126 # set the value
127 if key in mappings:
128 return mappings[key]
129 elif key.startswith('ckan.'):
130 return key[5:]
131 else:
132 return key
133
134
135 def reset():
136 ''' set updatable values from config '''
137 def get_config_value(key, default=''):
138 if model.meta.engine.has_table('system_info'):
139 value = model.get_system_info(key)
140 else:
141 value = None
142 config_value = config.get(key)
143 # sort encodeings if needed
144 if isinstance(config_value, str):
145 try:
146 config_value = config_value.decode('utf-8')
147 except UnicodeDecodeError:
148 config_value = config_value.decode('latin-1')
149 # we want to store the config the first time we get here so we can
150 # reset them if needed
151 if key not in _CONFIG_CACHE:
152 _CONFIG_CACHE[key] = config_value
153 if value is not None:
154 log.debug('config `%s` set to `%s` from db' % (key, value))
155 else:
156 value = _CONFIG_CACHE[key]
157 if value:
158 log.debug('config `%s` set to `%s` from config' % (key, value))
159 else:
160 value = default
161
162 set_app_global(key, value)
163
164 # update the config
165 config[key] = value
166 return value
167
168 # update the config settings in auto update
169 schema = logic.schema.update_configuration_schema()
170 for key in schema.keys():
171 get_config_value(key)
172
173 # cusom styling
174 main_css = get_config_value('ckan.main_css', '/base/css/main.css')
175 set_main_css(main_css)
176 # site_url_nice
177 site_url_nice = app_globals.site_url.replace('http://', '')
178 site_url_nice = site_url_nice.replace('www.', '')
179 app_globals.site_url_nice = site_url_nice
180
181 if app_globals.site_logo:
182 app_globals.header_class = 'header-image'
183 elif not app_globals.site_description:
184 app_globals.header_class = 'header-text-logo'
185 else:
186 app_globals.header_class = 'header-text-logo-tagline'
187
188
189 class _Globals(object):
190
191 ''' Globals acts as a container for objects available throughout the
192 life of the application. '''
193
194 def __init__(self):
195 '''One instance of Globals is created during application
196 initialization and is available during requests via the
197 'app_globals' variable
198 '''
199 self._init()
200 self._config_update = None
201 self._mutex = Lock()
202
203 def _check_uptodate(self):
204 ''' check the config is uptodate needed when several instances are
205 running '''
206 value = model.get_system_info('ckan.config_update')
207 if self._config_update != value:
208 if self._mutex.acquire(False):
209 reset()
210 self._config_update = value
211 self._mutex.release()
212
213 def _init(self):
214
215 self.ckan_version = ckan.__version__
216 self.ckan_base_version = re.sub('[^0-9\.]', '', self.ckan_version)
217 if self.ckan_base_version == self.ckan_version:
218 self.ckan_doc_version = 'ckan-{0}'.format(self.ckan_version)
219 else:
220 self.ckan_doc_version = 'latest'
221
222 # process the config details to set globals
223 for key in app_globals_from_config_details.keys():
224 new_key, value = process_app_global(key, config.get(key) or '')
225 setattr(self, new_key, value)
226
227
228 app_globals = _Globals()
229 del _Globals
230
[end of ckan/lib/app_globals.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ckan/lib/app_globals.py b/ckan/lib/app_globals.py
--- a/ckan/lib/app_globals.py
+++ b/ckan/lib/app_globals.py
@@ -73,13 +73,9 @@
_CONFIG_CACHE = {}
def set_main_css(css_file):
- ''' Sets the main_css using debug css if needed. The css_file
- must be of the form file.css '''
+ ''' Sets the main_css. The css_file must be of the form file.css '''
assert css_file.endswith('.css')
- if config.get('debug') and css_file == '/base/css/main.css':
- new_css = '/base/css/main.debug.css'
- else:
- new_css = css_file
+ new_css = css_file
# FIXME we should check the css file exists
app_globals.main_css = str(new_css)
| {"golden_diff": "diff --git a/ckan/lib/app_globals.py b/ckan/lib/app_globals.py\n--- a/ckan/lib/app_globals.py\n+++ b/ckan/lib/app_globals.py\n@@ -73,13 +73,9 @@\n _CONFIG_CACHE = {}\n \n def set_main_css(css_file):\n- ''' Sets the main_css using debug css if needed. The css_file\n- must be of the form file.css '''\n+ ''' Sets the main_css. The css_file must be of the form file.css '''\n assert css_file.endswith('.css')\n- if config.get('debug') and css_file == '/base/css/main.css':\n- new_css = '/base/css/main.debug.css'\n- else:\n- new_css = css_file\n+ new_css = css_file\n # FIXME we should check the css file exists\n app_globals.main_css = str(new_css)\n", "issue": "Include the main.debug.css\nHi, I'm new to CKAN in my organization and turned debug to true for development and encountered the `AttributeError: 'module' object has no attribute 'css/main.debug.css'` error. It took me a while to figure out that I had to compile the less to get it.\n\nWouldn't it be easier to include this file so that debug mode automatically works without needing to change anything else?\n\n", "before_files": [{"content": "''' The application's Globals object '''\n\nimport logging\nimport time\nfrom threading import Lock\nimport re\n\nfrom paste.deploy.converters import asbool\nfrom pylons import config\n\nimport ckan\nimport ckan.model as model\nimport ckan.logic as logic\n\n\nlog = logging.getLogger(__name__)\n\n\n# mappings translate between config settings and globals because our naming\n# conventions are not well defined and/or implemented\nmappings = {\n# 'config_key': 'globals_key',\n}\n\n\n# This mapping is only used to define the configuration options (from the\n# `config` object) that should be copied to the `app_globals` (`g`) object.\napp_globals_from_config_details = {\n 'ckan.site_title': {},\n 'ckan.site_logo': {},\n 'ckan.site_url': {},\n 'ckan.site_description': {},\n 'ckan.site_about': {},\n 'ckan.site_intro_text': {},\n 'ckan.site_custom_css': {},\n 'ckan.favicon': {}, # default gets set in config.environment.py\n 'ckan.template_head_end': {},\n 'ckan.template_footer_end': {},\n # has been setup in load_environment():\n 'ckan.site_id': {},\n 'ckan.recaptcha.publickey': {'name': 'recaptcha_publickey'},\n 'ckan.recaptcha.version': {'name': 'recaptcha_version', 'default': '1'},\n 'ckan.template_title_deliminater': {'default': '-'},\n 'ckan.template_head_end': {},\n 'ckan.template_footer_end': {},\n 'ckan.dumps_url': {},\n 'ckan.dumps_format': {},\n 'ofs.impl': {'name': 'ofs_impl'},\n 'ckan.homepage_style': {'default': '1'},\n\n # split string\n 'search.facets': {'default': 'organization groups tags res_format license_id',\n 'type': 'split',\n 'name': 'facets'},\n 'package_hide_extras': {'type': 'split'},\n 'ckan.plugins': {'type': 'split'},\n\n # bool\n 'debug': {'default': 'false', 'type' : 'bool'},\n 'ckan.debug_supress_header' : {'default': 'false', 'type' : 'bool'},\n 'ckan.legacy_templates' : {'default': 'false', 'type' : 'bool'},\n 'ckan.tracking_enabled' : {'default': 'false', 'type' : 'bool'},\n\n # int\n 'ckan.datasets_per_page': {'default': '20', 'type': 'int'},\n 'ckan.activity_list_limit': {'default': '30', 'type': 'int'},\n 'search.facets.default': {'default': '10', 'type': 'int',\n 'name': 'facets_default_number'},\n}\n\n\n# A place to store the origional config options of we override them\n_CONFIG_CACHE = {}\n\ndef set_main_css(css_file):\n ''' Sets the main_css using debug css if needed. 
The css_file\n must be of the form file.css '''\n assert css_file.endswith('.css')\n if config.get('debug') and css_file == '/base/css/main.css':\n new_css = '/base/css/main.debug.css'\n else:\n new_css = css_file\n # FIXME we should check the css file exists\n app_globals.main_css = str(new_css)\n\n\ndef set_app_global(key, value):\n '''\n Set a new key on the app_globals (g) object\n\n It will process the value according to the options on\n app_globals_from_config_details (if any)\n '''\n key, value = process_app_global(key, value)\n setattr(app_globals, key, value)\n\n\ndef process_app_global(key, value):\n '''\n Tweak a key, value pair meant to be set on the app_globals (g) object\n\n According to the options on app_globals_from_config_details (if any)\n '''\n options = app_globals_from_config_details.get(key)\n key = get_globals_key(key)\n if options:\n if 'name' in options:\n key = options['name']\n value = value or options.get('default', '')\n\n data_type = options.get('type')\n if data_type == 'bool':\n value = asbool(value)\n elif data_type == 'int':\n value = int(value)\n elif data_type == 'split':\n value = value.split()\n\n return key, value\n\n\ndef get_globals_key(key):\n # create our globals key\n # these can be specified in mappings or else we remove\n # the `ckan.` part this is to keep the existing namings\n # set the value\n if key in mappings:\n return mappings[key]\n elif key.startswith('ckan.'):\n return key[5:]\n else:\n return key\n\n\ndef reset():\n ''' set updatable values from config '''\n def get_config_value(key, default=''):\n if model.meta.engine.has_table('system_info'):\n value = model.get_system_info(key)\n else:\n value = None\n config_value = config.get(key)\n # sort encodeings if needed\n if isinstance(config_value, str):\n try:\n config_value = config_value.decode('utf-8')\n except UnicodeDecodeError:\n config_value = config_value.decode('latin-1')\n # we want to store the config the first time we get here so we can\n # reset them if needed\n if key not in _CONFIG_CACHE:\n _CONFIG_CACHE[key] = config_value\n if value is not None:\n log.debug('config `%s` set to `%s` from db' % (key, value))\n else:\n value = _CONFIG_CACHE[key]\n if value:\n log.debug('config `%s` set to `%s` from config' % (key, value))\n else:\n value = default\n\n set_app_global(key, value)\n\n # update the config\n config[key] = value\n return value\n\n # update the config settings in auto update\n schema = logic.schema.update_configuration_schema()\n for key in schema.keys():\n get_config_value(key)\n\n # cusom styling\n main_css = get_config_value('ckan.main_css', '/base/css/main.css')\n set_main_css(main_css)\n # site_url_nice\n site_url_nice = app_globals.site_url.replace('http://', '')\n site_url_nice = site_url_nice.replace('www.', '')\n app_globals.site_url_nice = site_url_nice\n\n if app_globals.site_logo:\n app_globals.header_class = 'header-image'\n elif not app_globals.site_description:\n app_globals.header_class = 'header-text-logo'\n else:\n app_globals.header_class = 'header-text-logo-tagline'\n\n\nclass _Globals(object):\n\n ''' Globals acts as a container for objects available throughout the\n life of the application. 
'''\n\n def __init__(self):\n '''One instance of Globals is created during application\n initialization and is available during requests via the\n 'app_globals' variable\n '''\n self._init()\n self._config_update = None\n self._mutex = Lock()\n\n def _check_uptodate(self):\n ''' check the config is uptodate needed when several instances are\n running '''\n value = model.get_system_info('ckan.config_update')\n if self._config_update != value:\n if self._mutex.acquire(False):\n reset()\n self._config_update = value\n self._mutex.release()\n\n def _init(self):\n\n self.ckan_version = ckan.__version__\n self.ckan_base_version = re.sub('[^0-9\\.]', '', self.ckan_version)\n if self.ckan_base_version == self.ckan_version:\n self.ckan_doc_version = 'ckan-{0}'.format(self.ckan_version)\n else:\n self.ckan_doc_version = 'latest'\n\n # process the config details to set globals\n for key in app_globals_from_config_details.keys():\n new_key, value = process_app_global(key, config.get(key) or '')\n setattr(self, new_key, value)\n\n\napp_globals = _Globals()\ndel _Globals\n", "path": "ckan/lib/app_globals.py"}]} | 2,989 | 193 |
gh_patches_debug_20671 | rasdani/github-patches | git_diff | scrapy__scrapy-5068 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MediaPipeline exceptions passed silently
### Description
MediaPipeline exceptions are passed silently, both for method body and method signature errors.
### Steps to Reproduce
```
from scrapy.pipelines.files import FilesPipeline
class BuggyFilesPipeline(FilesPipeline):
def file_path(self, request, response=None, info=None, *, item=None):
return 1 / 0
```
**Expected behavior:** Exception logged
**Actual behavior:** Exception passed silently
**Reproduces how often:** 100%
### Versions
Scrapy 2.4
</issue>
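The silence can be read off the deferred chain in `_process_request` in the code below: the existing logging errback is attached after `_cache_result_and_execute_waiters`, and that method caches the `Failure` and returns `None`, which puts the chain back into a success state before the errback can fire. A minimal, standalone sketch of the difference in plain Twisted follows; the helper names are illustrative and not Scrapy APIs.

```python
import logging

from twisted.internet import defer

logger = logging.getLogger(__name__)


def buggy_file_path():
    return 1 / 0  # stand-in for the failing override above


def cache_result(result):
    # Loosely mirrors _cache_result_and_execute_waiters: consumes the Failure, returns None.
    return None


def log_and_propagate(failure):
    logger.exception(failure)  # log the error ...
    return failure             # ... and keep the chain in its error state


# Current ordering: the cache step swallows the Failure, so the later errback never fires.
d1 = defer.maybeDeferred(buggy_file_path)
d1.addBoth(cache_result)
d1.addErrback(lambda f: logger.error(f))  # never called

# Log-and-propagate ordering: the error is logged first, then the chain continues as before.
d2 = defer.maybeDeferred(buggy_file_path)
d2.addErrback(log_and_propagate)          # ZeroDivisionError is logged here
d2.addBoth(cache_result)
```

Reordering the real pipeline chain along these lines, with logging before the caching step, is what surfaces exceptions raised in overridden methods such as `file_path`.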
<code>
[start of scrapy/pipelines/media.py]
1 import functools
2 import logging
3 from collections import defaultdict
4
5 from twisted.internet.defer import Deferred, DeferredList
6 from twisted.python.failure import Failure
7
8 from scrapy.http.request import NO_CALLBACK
9 from scrapy.settings import Settings
10 from scrapy.utils.datatypes import SequenceExclude
11 from scrapy.utils.defer import defer_result, mustbe_deferred
12 from scrapy.utils.log import failure_to_exc_info
13 from scrapy.utils.misc import arg_to_iter
14
15 logger = logging.getLogger(__name__)
16
17
18 def _DUMMY_CALLBACK(response):
19 return response
20
21
22 class MediaPipeline:
23 LOG_FAILED_RESULTS = True
24
25 class SpiderInfo:
26 def __init__(self, spider):
27 self.spider = spider
28 self.downloading = set()
29 self.downloaded = {}
30 self.waiting = defaultdict(list)
31
32 def __init__(self, download_func=None, settings=None):
33 self.download_func = download_func
34 self._expects_item = {}
35
36 if isinstance(settings, dict) or settings is None:
37 settings = Settings(settings)
38 resolve = functools.partial(
39 self._key_for_pipe, base_class_name="MediaPipeline", settings=settings
40 )
41 self.allow_redirects = settings.getbool(resolve("MEDIA_ALLOW_REDIRECTS"), False)
42 self._handle_statuses(self.allow_redirects)
43
44 def _handle_statuses(self, allow_redirects):
45 self.handle_httpstatus_list = None
46 if allow_redirects:
47 self.handle_httpstatus_list = SequenceExclude(range(300, 400))
48
49 def _key_for_pipe(self, key, base_class_name=None, settings=None):
50 """
51 >>> MediaPipeline()._key_for_pipe("IMAGES")
52 'IMAGES'
53 >>> class MyPipe(MediaPipeline):
54 ... pass
55 >>> MyPipe()._key_for_pipe("IMAGES", base_class_name="MediaPipeline")
56 'MYPIPE_IMAGES'
57 """
58 class_name = self.__class__.__name__
59 formatted_key = f"{class_name.upper()}_{key}"
60 if (
61 not base_class_name
62 or class_name == base_class_name
63 or settings
64 and not settings.get(formatted_key)
65 ):
66 return key
67 return formatted_key
68
69 @classmethod
70 def from_crawler(cls, crawler):
71 try:
72 pipe = cls.from_settings(crawler.settings)
73 except AttributeError:
74 pipe = cls()
75 pipe.crawler = crawler
76 pipe._fingerprinter = crawler.request_fingerprinter
77 return pipe
78
79 def open_spider(self, spider):
80 self.spiderinfo = self.SpiderInfo(spider)
81
82 def process_item(self, item, spider):
83 info = self.spiderinfo
84 requests = arg_to_iter(self.get_media_requests(item, info))
85 dlist = [self._process_request(r, info, item) for r in requests]
86 dfd = DeferredList(dlist, consumeErrors=True)
87 return dfd.addCallback(self.item_completed, item, info)
88
89 def _process_request(self, request, info, item):
90 fp = self._fingerprinter.fingerprint(request)
91 if not request.callback or request.callback is NO_CALLBACK:
92 cb = _DUMMY_CALLBACK
93 else:
94 cb = request.callback
95 eb = request.errback
96 request.callback = NO_CALLBACK
97 request.errback = None
98
99 # Return cached result if request was already seen
100 if fp in info.downloaded:
101 return defer_result(info.downloaded[fp]).addCallbacks(cb, eb)
102
103 # Otherwise, wait for result
104 wad = Deferred().addCallbacks(cb, eb)
105 info.waiting[fp].append(wad)
106
107 # Check if request is downloading right now to avoid doing it twice
108 if fp in info.downloading:
109 return wad
110
111 # Download request checking media_to_download hook output first
112 info.downloading.add(fp)
113 dfd = mustbe_deferred(self.media_to_download, request, info, item=item)
114 dfd.addCallback(self._check_media_to_download, request, info, item=item)
115 dfd.addBoth(self._cache_result_and_execute_waiters, fp, info)
116 dfd.addErrback(
117 lambda f: logger.error(
118 f.value, exc_info=failure_to_exc_info(f), extra={"spider": info.spider}
119 )
120 )
121 return dfd.addBoth(lambda _: wad) # it must return wad at last
122
123 def _modify_media_request(self, request):
124 if self.handle_httpstatus_list:
125 request.meta["handle_httpstatus_list"] = self.handle_httpstatus_list
126 else:
127 request.meta["handle_httpstatus_all"] = True
128
129 def _check_media_to_download(self, result, request, info, item):
130 if result is not None:
131 return result
132 if self.download_func:
133 # this ugly code was left only to support tests. TODO: remove
134 dfd = mustbe_deferred(self.download_func, request, info.spider)
135 dfd.addCallbacks(
136 callback=self.media_downloaded,
137 callbackArgs=(request, info),
138 callbackKeywords={"item": item},
139 errback=self.media_failed,
140 errbackArgs=(request, info),
141 )
142 else:
143 self._modify_media_request(request)
144 dfd = self.crawler.engine.download(request)
145 dfd.addCallbacks(
146 callback=self.media_downloaded,
147 callbackArgs=(request, info),
148 callbackKeywords={"item": item},
149 errback=self.media_failed,
150 errbackArgs=(request, info),
151 )
152 return dfd
153
154 def _cache_result_and_execute_waiters(self, result, fp, info):
155 if isinstance(result, Failure):
156 # minimize cached information for failure
157 result.cleanFailure()
158 result.frames = []
159 result.stack = None
160
161 # This code fixes a memory leak by avoiding to keep references to
162 # the Request and Response objects on the Media Pipeline cache.
163 #
164 # What happens when the media_downloaded callback raises an
165 # exception, for example a FileException('download-error') when
166 # the Response status code is not 200 OK, is that the original
167 # StopIteration exception (which in turn contains the failed
168 # Response and by extension, the original Request) gets encapsulated
169 # within the FileException context.
170 #
171 # Originally, Scrapy was using twisted.internet.defer.returnValue
172 # inside functions decorated with twisted.internet.defer.inlineCallbacks,
173 # encapsulating the returned Response in a _DefGen_Return exception
174 # instead of a StopIteration.
175 #
176 # To avoid keeping references to the Response and therefore Request
177 # objects on the Media Pipeline cache, we should wipe the context of
178 # the encapsulated exception when it is a StopIteration instance
179 #
180 # This problem does not occur in Python 2.7 since we don't have
181 # Exception Chaining (https://www.python.org/dev/peps/pep-3134/).
182 context = getattr(result.value, "__context__", None)
183 if isinstance(context, StopIteration):
184 setattr(result.value, "__context__", None)
185
186 info.downloading.remove(fp)
187 info.downloaded[fp] = result # cache result
188 for wad in info.waiting.pop(fp):
189 defer_result(result).chainDeferred(wad)
190
191 # Overridable Interface
192 def media_to_download(self, request, info, *, item=None):
193 """Check request before starting download"""
194 pass
195
196 def get_media_requests(self, item, info):
197 """Returns the media requests to download"""
198 pass
199
200 def media_downloaded(self, response, request, info, *, item=None):
201 """Handler for success downloads"""
202 return response
203
204 def media_failed(self, failure, request, info):
205 """Handler for failed downloads"""
206 return failure
207
208 def item_completed(self, results, item, info):
209 """Called per item when all media requests has been processed"""
210 if self.LOG_FAILED_RESULTS:
211 for ok, value in results:
212 if not ok:
213 logger.error(
214 "%(class)s found errors processing %(item)s",
215 {"class": self.__class__.__name__, "item": item},
216 exc_info=failure_to_exc_info(value),
217 extra={"spider": info.spider},
218 )
219 return item
220
221 def file_path(self, request, response=None, info=None, *, item=None):
222 """Returns the path where downloaded media should be stored"""
223 pass
224
[end of scrapy/pipelines/media.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/pipelines/media.py b/scrapy/pipelines/media.py
--- a/scrapy/pipelines/media.py
+++ b/scrapy/pipelines/media.py
@@ -112,14 +112,14 @@
info.downloading.add(fp)
dfd = mustbe_deferred(self.media_to_download, request, info, item=item)
dfd.addCallback(self._check_media_to_download, request, info, item=item)
+ dfd.addErrback(self._log_exception)
dfd.addBoth(self._cache_result_and_execute_waiters, fp, info)
- dfd.addErrback(
- lambda f: logger.error(
- f.value, exc_info=failure_to_exc_info(f), extra={"spider": info.spider}
- )
- )
return dfd.addBoth(lambda _: wad) # it must return wad at last
+ def _log_exception(self, result):
+ logger.exception(result)
+ return result
+
def _modify_media_request(self, request):
if self.handle_httpstatus_list:
request.meta["handle_httpstatus_list"] = self.handle_httpstatus_list
| {"golden_diff": "diff --git a/scrapy/pipelines/media.py b/scrapy/pipelines/media.py\n--- a/scrapy/pipelines/media.py\n+++ b/scrapy/pipelines/media.py\n@@ -112,14 +112,14 @@\n info.downloading.add(fp)\n dfd = mustbe_deferred(self.media_to_download, request, info, item=item)\n dfd.addCallback(self._check_media_to_download, request, info, item=item)\n+ dfd.addErrback(self._log_exception)\n dfd.addBoth(self._cache_result_and_execute_waiters, fp, info)\n- dfd.addErrback(\n- lambda f: logger.error(\n- f.value, exc_info=failure_to_exc_info(f), extra={\"spider\": info.spider}\n- )\n- )\n return dfd.addBoth(lambda _: wad) # it must return wad at last\n \n+ def _log_exception(self, result):\n+ logger.exception(result)\n+ return result\n+\n def _modify_media_request(self, request):\n if self.handle_httpstatus_list:\n request.meta[\"handle_httpstatus_list\"] = self.handle_httpstatus_list\n", "issue": "MediaPipeline exceptions passed silently\n### Description\r\n\r\nMediaPipeline exceptions passed silently both for method body or method signature errors.\r\n\r\n### Steps to Reproduce\r\n\r\n```\r\nfrom scrapy.pipelines.files import FilesPipeline\r\nclass BuggyFilesPipeline(FilesPipeline):\r\n def file_path(self, request, response=None, info=None, *, item=None):\r\n return 1 / 0\r\n```\r\n**Expected behavior:** Exception logged\r\n\r\n**Actual behavior:** Exception passed silently\r\n\r\n**Reproduces how often:** 100%\r\n\r\n### Versions\r\n\r\nScrapy 2.4\r\n\n", "before_files": [{"content": "import functools\nimport logging\nfrom collections import defaultdict\n\nfrom twisted.internet.defer import Deferred, DeferredList\nfrom twisted.python.failure import Failure\n\nfrom scrapy.http.request import NO_CALLBACK\nfrom scrapy.settings import Settings\nfrom scrapy.utils.datatypes import SequenceExclude\nfrom scrapy.utils.defer import defer_result, mustbe_deferred\nfrom scrapy.utils.log import failure_to_exc_info\nfrom scrapy.utils.misc import arg_to_iter\n\nlogger = logging.getLogger(__name__)\n\n\ndef _DUMMY_CALLBACK(response):\n return response\n\n\nclass MediaPipeline:\n LOG_FAILED_RESULTS = True\n\n class SpiderInfo:\n def __init__(self, spider):\n self.spider = spider\n self.downloading = set()\n self.downloaded = {}\n self.waiting = defaultdict(list)\n\n def __init__(self, download_func=None, settings=None):\n self.download_func = download_func\n self._expects_item = {}\n\n if isinstance(settings, dict) or settings is None:\n settings = Settings(settings)\n resolve = functools.partial(\n self._key_for_pipe, base_class_name=\"MediaPipeline\", settings=settings\n )\n self.allow_redirects = settings.getbool(resolve(\"MEDIA_ALLOW_REDIRECTS\"), False)\n self._handle_statuses(self.allow_redirects)\n\n def _handle_statuses(self, allow_redirects):\n self.handle_httpstatus_list = None\n if allow_redirects:\n self.handle_httpstatus_list = SequenceExclude(range(300, 400))\n\n def _key_for_pipe(self, key, base_class_name=None, settings=None):\n \"\"\"\n >>> MediaPipeline()._key_for_pipe(\"IMAGES\")\n 'IMAGES'\n >>> class MyPipe(MediaPipeline):\n ... 
pass\n >>> MyPipe()._key_for_pipe(\"IMAGES\", base_class_name=\"MediaPipeline\")\n 'MYPIPE_IMAGES'\n \"\"\"\n class_name = self.__class__.__name__\n formatted_key = f\"{class_name.upper()}_{key}\"\n if (\n not base_class_name\n or class_name == base_class_name\n or settings\n and not settings.get(formatted_key)\n ):\n return key\n return formatted_key\n\n @classmethod\n def from_crawler(cls, crawler):\n try:\n pipe = cls.from_settings(crawler.settings)\n except AttributeError:\n pipe = cls()\n pipe.crawler = crawler\n pipe._fingerprinter = crawler.request_fingerprinter\n return pipe\n\n def open_spider(self, spider):\n self.spiderinfo = self.SpiderInfo(spider)\n\n def process_item(self, item, spider):\n info = self.spiderinfo\n requests = arg_to_iter(self.get_media_requests(item, info))\n dlist = [self._process_request(r, info, item) for r in requests]\n dfd = DeferredList(dlist, consumeErrors=True)\n return dfd.addCallback(self.item_completed, item, info)\n\n def _process_request(self, request, info, item):\n fp = self._fingerprinter.fingerprint(request)\n if not request.callback or request.callback is NO_CALLBACK:\n cb = _DUMMY_CALLBACK\n else:\n cb = request.callback\n eb = request.errback\n request.callback = NO_CALLBACK\n request.errback = None\n\n # Return cached result if request was already seen\n if fp in info.downloaded:\n return defer_result(info.downloaded[fp]).addCallbacks(cb, eb)\n\n # Otherwise, wait for result\n wad = Deferred().addCallbacks(cb, eb)\n info.waiting[fp].append(wad)\n\n # Check if request is downloading right now to avoid doing it twice\n if fp in info.downloading:\n return wad\n\n # Download request checking media_to_download hook output first\n info.downloading.add(fp)\n dfd = mustbe_deferred(self.media_to_download, request, info, item=item)\n dfd.addCallback(self._check_media_to_download, request, info, item=item)\n dfd.addBoth(self._cache_result_and_execute_waiters, fp, info)\n dfd.addErrback(\n lambda f: logger.error(\n f.value, exc_info=failure_to_exc_info(f), extra={\"spider\": info.spider}\n )\n )\n return dfd.addBoth(lambda _: wad) # it must return wad at last\n\n def _modify_media_request(self, request):\n if self.handle_httpstatus_list:\n request.meta[\"handle_httpstatus_list\"] = self.handle_httpstatus_list\n else:\n request.meta[\"handle_httpstatus_all\"] = True\n\n def _check_media_to_download(self, result, request, info, item):\n if result is not None:\n return result\n if self.download_func:\n # this ugly code was left only to support tests. 
TODO: remove\n dfd = mustbe_deferred(self.download_func, request, info.spider)\n dfd.addCallbacks(\n callback=self.media_downloaded,\n callbackArgs=(request, info),\n callbackKeywords={\"item\": item},\n errback=self.media_failed,\n errbackArgs=(request, info),\n )\n else:\n self._modify_media_request(request)\n dfd = self.crawler.engine.download(request)\n dfd.addCallbacks(\n callback=self.media_downloaded,\n callbackArgs=(request, info),\n callbackKeywords={\"item\": item},\n errback=self.media_failed,\n errbackArgs=(request, info),\n )\n return dfd\n\n def _cache_result_and_execute_waiters(self, result, fp, info):\n if isinstance(result, Failure):\n # minimize cached information for failure\n result.cleanFailure()\n result.frames = []\n result.stack = None\n\n # This code fixes a memory leak by avoiding to keep references to\n # the Request and Response objects on the Media Pipeline cache.\n #\n # What happens when the media_downloaded callback raises an\n # exception, for example a FileException('download-error') when\n # the Response status code is not 200 OK, is that the original\n # StopIteration exception (which in turn contains the failed\n # Response and by extension, the original Request) gets encapsulated\n # within the FileException context.\n #\n # Originally, Scrapy was using twisted.internet.defer.returnValue\n # inside functions decorated with twisted.internet.defer.inlineCallbacks,\n # encapsulating the returned Response in a _DefGen_Return exception\n # instead of a StopIteration.\n #\n # To avoid keeping references to the Response and therefore Request\n # objects on the Media Pipeline cache, we should wipe the context of\n # the encapsulated exception when it is a StopIteration instance\n #\n # This problem does not occur in Python 2.7 since we don't have\n # Exception Chaining (https://www.python.org/dev/peps/pep-3134/).\n context = getattr(result.value, \"__context__\", None)\n if isinstance(context, StopIteration):\n setattr(result.value, \"__context__\", None)\n\n info.downloading.remove(fp)\n info.downloaded[fp] = result # cache result\n for wad in info.waiting.pop(fp):\n defer_result(result).chainDeferred(wad)\n\n # Overridable Interface\n def media_to_download(self, request, info, *, item=None):\n \"\"\"Check request before starting download\"\"\"\n pass\n\n def get_media_requests(self, item, info):\n \"\"\"Returns the media requests to download\"\"\"\n pass\n\n def media_downloaded(self, response, request, info, *, item=None):\n \"\"\"Handler for success downloads\"\"\"\n return response\n\n def media_failed(self, failure, request, info):\n \"\"\"Handler for failed downloads\"\"\"\n return failure\n\n def item_completed(self, results, item, info):\n \"\"\"Called per item when all media requests has been processed\"\"\"\n if self.LOG_FAILED_RESULTS:\n for ok, value in results:\n if not ok:\n logger.error(\n \"%(class)s found errors processing %(item)s\",\n {\"class\": self.__class__.__name__, \"item\": item},\n exc_info=failure_to_exc_info(value),\n extra={\"spider\": info.spider},\n )\n return item\n\n def file_path(self, request, response=None, info=None, *, item=None):\n \"\"\"Returns the path where downloaded media should be stored\"\"\"\n pass\n", "path": "scrapy/pipelines/media.py"}]} | 3,013 | 251 |
gh_patches_debug_29551 | rasdani/github-patches | git_diff | doccano__doccano-1770 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrong progress in collaborative annotation ('Share annotations across all users')
How to reproduce the behaviour
---------
Progress is shown as individual progress instead of total progress when 'Share annotations across all users' is ticked in project setting.
Your Environment
---------
<!-- Include details of your environment.-->
* Operating System: wsl2+ubuntu20.04
* Python Version Used: 3.8
* When you install doccano: 20220403
* How did you install doccano (Heroku button etc): source
</issue>
<code>
[start of backend/metrics/views.py]
1 import abc
2
3 from rest_framework import status
4 from rest_framework.permissions import IsAuthenticated
5 from rest_framework.response import Response
6 from rest_framework.views import APIView
7
8 from examples.models import Example, ExampleState
9 from label_types.models import CategoryType, LabelType, RelationType, SpanType
10 from labels.models import Category, Label, Relation, Span
11 from projects.models import Member
12 from projects.permissions import IsProjectAdmin, IsProjectStaffAndReadOnly
13
14
15 class ProgressAPI(APIView):
16 permission_classes = [IsAuthenticated & (IsProjectAdmin | IsProjectStaffAndReadOnly)]
17
18 def get(self, request, *args, **kwargs):
19 examples = Example.objects.filter(project=self.kwargs["project_id"]).values("id")
20 total = examples.count()
21 complete = ExampleState.objects.count_done(examples, user=self.request.user)
22 data = {"total": total, "remaining": total - complete, "complete": complete}
23 return Response(data=data, status=status.HTTP_200_OK)
24
25
26 class MemberProgressAPI(APIView):
27 permission_classes = [IsAuthenticated & (IsProjectAdmin | IsProjectStaffAndReadOnly)]
28
29 def get(self, request, *args, **kwargs):
30 examples = Example.objects.filter(project=self.kwargs["project_id"]).values("id")
31 members = Member.objects.filter(project=self.kwargs["project_id"])
32 data = ExampleState.objects.measure_member_progress(examples, members)
33 return Response(data=data, status=status.HTTP_200_OK)
34
35
36 class LabelDistribution(abc.ABC, APIView):
37 permission_classes = [IsAuthenticated & (IsProjectAdmin | IsProjectStaffAndReadOnly)]
38 model = Label
39 label_type = LabelType
40
41 def get(self, request, *args, **kwargs):
42 labels = self.label_type.objects.filter(project=self.kwargs["project_id"])
43 examples = Example.objects.filter(project=self.kwargs["project_id"]).values("id")
44 members = Member.objects.filter(project=self.kwargs["project_id"])
45 data = self.model.objects.calc_label_distribution(examples, members, labels)
46 return Response(data=data, status=status.HTTP_200_OK)
47
48
49 class CategoryTypeDistribution(LabelDistribution):
50 model = Category
51 label_type = CategoryType
52
53
54 class SpanTypeDistribution(LabelDistribution):
55 model = Span
56 label_type = SpanType
57
58
59 class RelationTypeDistribution(LabelDistribution):
60 model = Relation
61 label_type = RelationType
62
[end of backend/metrics/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend/metrics/views.py b/backend/metrics/views.py
--- a/backend/metrics/views.py
+++ b/backend/metrics/views.py
@@ -1,5 +1,6 @@
import abc
+from django.shortcuts import get_object_or_404
from rest_framework import status
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
@@ -8,7 +9,7 @@
from examples.models import Example, ExampleState
from label_types.models import CategoryType, LabelType, RelationType, SpanType
from labels.models import Category, Label, Relation, Span
-from projects.models import Member
+from projects.models import Member, Project
from projects.permissions import IsProjectAdmin, IsProjectStaffAndReadOnly
@@ -18,7 +19,11 @@
def get(self, request, *args, **kwargs):
examples = Example.objects.filter(project=self.kwargs["project_id"]).values("id")
total = examples.count()
- complete = ExampleState.objects.count_done(examples, user=self.request.user)
+ project = get_object_or_404(Project, pk=self.kwargs["project_id"])
+ if project.collaborative_annotation:
+ complete = ExampleState.objects.count_done(examples)
+ else:
+ complete = ExampleState.objects.count_done(examples, user=self.request.user)
data = {"total": total, "remaining": total - complete, "complete": complete}
return Response(data=data, status=status.HTTP_200_OK)
| {"golden_diff": "diff --git a/backend/metrics/views.py b/backend/metrics/views.py\n--- a/backend/metrics/views.py\n+++ b/backend/metrics/views.py\n@@ -1,5 +1,6 @@\n import abc\n \n+from django.shortcuts import get_object_or_404\n from rest_framework import status\n from rest_framework.permissions import IsAuthenticated\n from rest_framework.response import Response\n@@ -8,7 +9,7 @@\n from examples.models import Example, ExampleState\n from label_types.models import CategoryType, LabelType, RelationType, SpanType\n from labels.models import Category, Label, Relation, Span\n-from projects.models import Member\n+from projects.models import Member, Project\n from projects.permissions import IsProjectAdmin, IsProjectStaffAndReadOnly\n \n \n@@ -18,7 +19,11 @@\n def get(self, request, *args, **kwargs):\n examples = Example.objects.filter(project=self.kwargs[\"project_id\"]).values(\"id\")\n total = examples.count()\n- complete = ExampleState.objects.count_done(examples, user=self.request.user)\n+ project = get_object_or_404(Project, pk=self.kwargs[\"project_id\"])\n+ if project.collaborative_annotation:\n+ complete = ExampleState.objects.count_done(examples)\n+ else:\n+ complete = ExampleState.objects.count_done(examples, user=self.request.user)\n data = {\"total\": total, \"remaining\": total - complete, \"complete\": complete}\n return Response(data=data, status=status.HTTP_200_OK)\n", "issue": "Wrong progress in collaborative annotation ('Share annotations across all users')\nHow to reproduce the behaviour\r\n---------\r\nProgress is shown as individual progress instead of total progress when 'Share annotations across all users' is ticked in project setting.\r\n\r\nYour Environment\r\n---------\r\n<!-- Include details of your environment.-->\r\n* Operating System: wsl2+ubuntu20.04\r\n* Python Version Used: 3.8\r\n* When you install doccano: 20220403\r\n* How did you install doccano (Heroku button etc): source\r\n\n", "before_files": [{"content": "import abc\n\nfrom rest_framework import status\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom examples.models import Example, ExampleState\nfrom label_types.models import CategoryType, LabelType, RelationType, SpanType\nfrom labels.models import Category, Label, Relation, Span\nfrom projects.models import Member\nfrom projects.permissions import IsProjectAdmin, IsProjectStaffAndReadOnly\n\n\nclass ProgressAPI(APIView):\n permission_classes = [IsAuthenticated & (IsProjectAdmin | IsProjectStaffAndReadOnly)]\n\n def get(self, request, *args, **kwargs):\n examples = Example.objects.filter(project=self.kwargs[\"project_id\"]).values(\"id\")\n total = examples.count()\n complete = ExampleState.objects.count_done(examples, user=self.request.user)\n data = {\"total\": total, \"remaining\": total - complete, \"complete\": complete}\n return Response(data=data, status=status.HTTP_200_OK)\n\n\nclass MemberProgressAPI(APIView):\n permission_classes = [IsAuthenticated & (IsProjectAdmin | IsProjectStaffAndReadOnly)]\n\n def get(self, request, *args, **kwargs):\n examples = Example.objects.filter(project=self.kwargs[\"project_id\"]).values(\"id\")\n members = Member.objects.filter(project=self.kwargs[\"project_id\"])\n data = ExampleState.objects.measure_member_progress(examples, members)\n return Response(data=data, status=status.HTTP_200_OK)\n\n\nclass LabelDistribution(abc.ABC, APIView):\n permission_classes = [IsAuthenticated & (IsProjectAdmin | 
IsProjectStaffAndReadOnly)]\n model = Label\n label_type = LabelType\n\n def get(self, request, *args, **kwargs):\n labels = self.label_type.objects.filter(project=self.kwargs[\"project_id\"])\n examples = Example.objects.filter(project=self.kwargs[\"project_id\"]).values(\"id\")\n members = Member.objects.filter(project=self.kwargs[\"project_id\"])\n data = self.model.objects.calc_label_distribution(examples, members, labels)\n return Response(data=data, status=status.HTTP_200_OK)\n\n\nclass CategoryTypeDistribution(LabelDistribution):\n model = Category\n label_type = CategoryType\n\n\nclass SpanTypeDistribution(LabelDistribution):\n model = Span\n label_type = SpanType\n\n\nclass RelationTypeDistribution(LabelDistribution):\n model = Relation\n label_type = RelationType\n", "path": "backend/metrics/views.py"}]} | 1,275 | 320 |
gh_patches_debug_19988 | rasdani/github-patches | git_diff | CiviWiki__OpenCiviWiki-1186 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Fix Bugs when profile is being viewed
### Description
The serialization is not correct when the registered user tries to view their own profile
This comes from the serialization part of our code-base which can be viewed [here](https://github.com/CiviWiki/OpenCiviWiki/blob/develop/project/threads/views.py#L181-L245)
```py
thread_wiki_data = {
"thread_id": thread_id,
"title": Thread_filter.title,
"summary": Thread_filter.summary,
"image": Thread_filter.image_url,
"author": {
"username": Thread_filter.author.user.username,
"profile_image": Thread_filter.author.profile_image_url,
"first_name": Thread_filter.author.first_name,
"last_name": Thread_filter.author.last_name,
},
"contributors": [
Profile.objects.chip_summarize(a)
for a in Profile.objects.filter(
pk__in=civis.distinct("author").values_list("author", flat=True)
)
],
"category": {
"id": Thread_filter.category.id,
"name": Thread_filter.category.name,
},
"categories": [{"id": c.id, "name": c.name} for c in Category.objects.all()],
"created": Thread_filter.created_date_str,
"num_civis": Thread_filter.num_civis,
"num_views": Thread_filter.num_views,
"user_votes": [
{
"civi_id": act.civi.id,
"activity_type": act.activity_type,
"c_type": act.civi.c_type,
}
for act in Activity.objects.filter(
thread=Thread_filter.id, account=req_acct.id
)
],
}
```
### What should have happened?
The serialization should return user appropriately
### What browser(s) are you seeing the problem on?
Chrome, Firefox, Microsoft Edge, Safari
### Further details

</issue>
<code>
[start of project/accounts/views.py]
1 """
2 Class based views.
3
4 This module will include views for the accounts app.
5 """
6
7 from core.custom_decorators import full_profile, login_required
8 from django.conf import settings
9 from django.contrib.auth import get_user_model, login
10 from django.contrib.auth import views as auth_views
11 from django.contrib.auth.mixins import LoginRequiredMixin
12 from django.contrib.sites.shortcuts import get_current_site
13 from django.http import HttpResponseRedirect
14 from django.template.response import TemplateResponse
15 from django.urls import reverse_lazy
16 from django.utils.encoding import force_str
17 from django.utils.http import urlsafe_base64_decode
18 from django.views import View
19 from django.views.generic.edit import FormView, UpdateView
20
21 from accounts.authentication import account_activation_token, send_activation_email
22 from accounts.forms import ProfileEditForm, UpdateProfileImage, UserRegistrationForm
23 from accounts.models import Profile
24
25
26 class RegisterView(FormView):
27 """
28 A form view that handles user registration.
29 """
30
31 template_name = "accounts/register/register.html"
32 form_class = UserRegistrationForm
33 success_url = "/"
34
35 def _create_user(self, form):
36 username = form.cleaned_data["username"]
37 password = form.cleaned_data["password"]
38 email = form.cleaned_data["email"]
39 user = get_user_model().objects.create_user(username, email, password)
40 return user
41
42 def _send_email(self, user):
43 domain = get_current_site(self.request).domain
44 send_activation_email(user, domain)
45
46 def _login(self, user):
47 login(self.request, user)
48
49 def form_valid(self, form):
50 user = self._create_user(form)
51
52 self._send_email(user)
53 self._login(user)
54
55 return super(RegisterView, self).form_valid(form)
56
57
58 class PasswordResetView(auth_views.PasswordResetView):
59 template_name = "accounts/users/password_reset.html"
60 email_template_name = "accounts/users/password_reset_email.html"
61 subject_template_name = "accounts/users/password_reset_subject.txt"
62 from_email = settings.EMAIL_HOST_USER
63 success_url = reverse_lazy("accounts_password_reset_done")
64
65
66 class PasswordResetDoneView(auth_views.PasswordResetDoneView):
67 template_name = "accounts/users/password_reset_done.html"
68
69
70 class PasswordResetConfirmView(auth_views.PasswordResetConfirmView):
71 template_name = "accounts/users/password_reset_confirm.html"
72 success_url = reverse_lazy("accounts_password_reset_complete")
73
74
75 class PasswordResetCompleteView(auth_views.PasswordResetCompleteView):
76 template_name = "accounts/users/password_reset_complete.html"
77
78
79 class SettingsView(LoginRequiredMixin, UpdateView):
80 """A form view to edit Profile"""
81
82 login_url = "accounts_login"
83 form_class = ProfileEditForm
84 success_url = reverse_lazy("accounts_settings")
85 template_name = "accounts/update_settings.html"
86
87 def get_object(self, queryset=None):
88 return Profile.objects.get(user=self.request.user)
89
90 def get_initial(self):
91 profile = Profile.objects.get(user=self.request.user)
92 self.initial.update(
93 {
94 "username": profile.user.username,
95 "email": profile.user.email,
96 "first_name": profile.first_name or None,
97 "last_name": profile.last_name or None,
98 "about_me": profile.about_me or None,
99 }
100 )
101 return super(SettingsView, self).get_initial()
102
103
104 class ProfileActivationView(View):
105 """
106 This shows different views to the user when they are verifying
107 their account based on whether they are already verified or not.
108 """
109
110 def get(self, request, uidb64, token):
111
112 User = get_user_model()
113 try:
114 uid = force_str(urlsafe_base64_decode(uidb64))
115 user = User.objects.get(pk=uid)
116
117 except (TypeError, ValueError, OverflowError, User.DoesNotExist):
118 user = None
119
120 if user is not None and account_activation_token.check_token(user, token):
121 profile = Profile.objects.get(user=user)
122 if profile.is_verified:
123 redirect_link = {"href": "/", "label": "Back to Main"}
124 template_var = {
125 "title": "Email Already Verified",
126 "content": "You have already verified your email",
127 "link": redirect_link,
128 }
129 else:
130 profile.is_verified = True
131 profile.save()
132
133 redirect_link = {"href": "/", "label": "Back to Main"}
134 template_var = {
135 "title": "Email Verification Successful",
136 "content": "Thank you for verifying your email with CiviWiki",
137 "link": redirect_link,
138 }
139 else:
140 # invalid link
141 redirect_link = {"href": "/", "label": "Back to Main"}
142 template_var = {
143 "title": "Email Verification Error",
144 "content": "Email could not be verified",
145 "link": redirect_link,
146 }
147
148 return TemplateResponse(request, "general_message.html", template_var)
149
150
151 class ProfileSetupView(LoginRequiredMixin, View):
152 """A view to make the user profile full_profile"""
153
154 login_url = "accounts_login"
155
156 def get(self, request):
157 profile = Profile.objects.get(user=request.user)
158 if profile.full_profile:
159 return HttpResponseRedirect("/")
160 # start temp rep rendering TODO: REMOVE THIS
161 else:
162 data = {
163 "username": request.user.username,
164 "email": request.user.email,
165 }
166 return TemplateResponse(request, "accounts/user-setup.html", data)
167
168
169 @login_required
170 @full_profile
171 def user_profile(request, username=None):
172 User = get_user_model()
173 if request.method == "GET":
174 if not username:
175 return HttpResponseRedirect(f"/profile/{request.user}")
176 else:
177 is_owner = username == request.user.username
178 try:
179 user = User.objects.get(username=username)
180 profile = user.profile_set.first()
181 except User.DoesNotExist:
182 return HttpResponseRedirect("/404")
183
184 form = ProfileEditForm(
185 initial={
186 "username": user.username,
187 "email": user.email,
188 "first_name": profile.first_name or None,
189 "last_name": profile.last_name or None,
190 "about_me": profile.about_me or None,
191 },
192 readonly=True,
193 )
194 data = {
195 "username": user,
196 "profile_image_form": UpdateProfileImage,
197 "form": form if is_owner else None,
198 "readonly": True,
199 }
200 return TemplateResponse(request, "account.html", data)
201
[end of project/accounts/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/project/accounts/views.py b/project/accounts/views.py
--- a/project/accounts/views.py
+++ b/project/accounts/views.py
@@ -177,7 +177,6 @@
is_owner = username == request.user.username
try:
user = User.objects.get(username=username)
- profile = user.profile_set.first()
except User.DoesNotExist:
return HttpResponseRedirect("/404")
@@ -185,9 +184,9 @@
initial={
"username": user.username,
"email": user.email,
- "first_name": profile.first_name or None,
- "last_name": profile.last_name or None,
- "about_me": profile.about_me or None,
+ "first_name": user.profile.first_name or None,
+ "last_name": user.profile.last_name or None,
+ "about_me": user.profile.about_me or None,
},
readonly=True,
)
| {"golden_diff": "diff --git a/project/accounts/views.py b/project/accounts/views.py\n--- a/project/accounts/views.py\n+++ b/project/accounts/views.py\n@@ -177,7 +177,6 @@\n is_owner = username == request.user.username\n try:\n user = User.objects.get(username=username)\n- profile = user.profile_set.first()\n except User.DoesNotExist:\n return HttpResponseRedirect(\"/404\")\n \n@@ -185,9 +184,9 @@\n initial={\n \"username\": user.username,\n \"email\": user.email,\n- \"first_name\": profile.first_name or None,\n- \"last_name\": profile.last_name or None,\n- \"about_me\": profile.about_me or None,\n+ \"first_name\": user.profile.first_name or None,\n+ \"last_name\": user.profile.last_name or None,\n+ \"about_me\": user.profile.about_me or None,\n },\n readonly=True,\n )\n", "issue": "[BUG] Fix Bugs when profile is being viewed\n### Description\n\nThe serialization is not correct when the registered user tries to view their own profile\r\n\r\n\r\n\r\nThis comes from the serialization part of our code-base which can be viewed [here](https://github.com/CiviWiki/OpenCiviWiki/blob/develop/project/threads/views.py#L181-L245)\r\n\r\n```py\r\nthread_wiki_data = {\r\n \"thread_id\": thread_id,\r\n \"title\": Thread_filter.title,\r\n \"summary\": Thread_filter.summary,\r\n \"image\": Thread_filter.image_url,\r\n \"author\": {\r\n \"username\": Thread_filter.author.user.username,\r\n \"profile_image\": Thread_filter.author.profile_image_url,\r\n \"first_name\": Thread_filter.author.first_name,\r\n \"last_name\": Thread_filter.author.last_name,\r\n },\r\n \"contributors\": [\r\n Profile.objects.chip_summarize(a)\r\n for a in Profile.objects.filter(\r\n pk__in=civis.distinct(\"author\").values_list(\"author\", flat=True)\r\n )\r\n ],\r\n \"category\": {\r\n \"id\": Thread_filter.category.id,\r\n \"name\": Thread_filter.category.name,\r\n },\r\n \"categories\": [{\"id\": c.id, \"name\": c.name} for c in Category.objects.all()],\r\n \"created\": Thread_filter.created_date_str,\r\n \"num_civis\": Thread_filter.num_civis,\r\n \"num_views\": Thread_filter.num_views,\r\n \"user_votes\": [\r\n {\r\n \"civi_id\": act.civi.id,\r\n \"activity_type\": act.activity_type,\r\n \"c_type\": act.civi.c_type,\r\n }\r\n for act in Activity.objects.filter(\r\n thread=Thread_filter.id, account=req_acct.id\r\n )\r\n ],\r\n }\r\n```\n\n### What should have happened?\n\nThe serialization should return user appropriately\n\n### What browser(s) are you seeing the problem on?\n\nChrome, Firefox, Microsoft Edge, Safari\n\n### Further details\n\n\n", "before_files": [{"content": "\"\"\"\nClass based views.\n\nThis module will include views for the accounts app.\n\"\"\"\n\nfrom core.custom_decorators import full_profile, login_required\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model, login\nfrom django.contrib.auth import views as auth_views\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.contrib.sites.shortcuts import get_current_site\nfrom django.http import HttpResponseRedirect\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse_lazy\nfrom django.utils.encoding import force_str\nfrom django.utils.http import urlsafe_base64_decode\nfrom django.views import View\nfrom django.views.generic.edit import FormView, UpdateView\n\nfrom accounts.authentication import account_activation_token, send_activation_email\nfrom accounts.forms import ProfileEditForm, UpdateProfileImage, UserRegistrationForm\nfrom accounts.models import Profile\n\n\nclass 
RegisterView(FormView):\n \"\"\"\n A form view that handles user registration.\n \"\"\"\n\n template_name = \"accounts/register/register.html\"\n form_class = UserRegistrationForm\n success_url = \"/\"\n\n def _create_user(self, form):\n username = form.cleaned_data[\"username\"]\n password = form.cleaned_data[\"password\"]\n email = form.cleaned_data[\"email\"]\n user = get_user_model().objects.create_user(username, email, password)\n return user\n\n def _send_email(self, user):\n domain = get_current_site(self.request).domain\n send_activation_email(user, domain)\n\n def _login(self, user):\n login(self.request, user)\n\n def form_valid(self, form):\n user = self._create_user(form)\n\n self._send_email(user)\n self._login(user)\n\n return super(RegisterView, self).form_valid(form)\n\n\nclass PasswordResetView(auth_views.PasswordResetView):\n template_name = \"accounts/users/password_reset.html\"\n email_template_name = \"accounts/users/password_reset_email.html\"\n subject_template_name = \"accounts/users/password_reset_subject.txt\"\n from_email = settings.EMAIL_HOST_USER\n success_url = reverse_lazy(\"accounts_password_reset_done\")\n\n\nclass PasswordResetDoneView(auth_views.PasswordResetDoneView):\n template_name = \"accounts/users/password_reset_done.html\"\n\n\nclass PasswordResetConfirmView(auth_views.PasswordResetConfirmView):\n template_name = \"accounts/users/password_reset_confirm.html\"\n success_url = reverse_lazy(\"accounts_password_reset_complete\")\n\n\nclass PasswordResetCompleteView(auth_views.PasswordResetCompleteView):\n template_name = \"accounts/users/password_reset_complete.html\"\n\n\nclass SettingsView(LoginRequiredMixin, UpdateView):\n \"\"\"A form view to edit Profile\"\"\"\n\n login_url = \"accounts_login\"\n form_class = ProfileEditForm\n success_url = reverse_lazy(\"accounts_settings\")\n template_name = \"accounts/update_settings.html\"\n\n def get_object(self, queryset=None):\n return Profile.objects.get(user=self.request.user)\n\n def get_initial(self):\n profile = Profile.objects.get(user=self.request.user)\n self.initial.update(\n {\n \"username\": profile.user.username,\n \"email\": profile.user.email,\n \"first_name\": profile.first_name or None,\n \"last_name\": profile.last_name or None,\n \"about_me\": profile.about_me or None,\n }\n )\n return super(SettingsView, self).get_initial()\n\n\nclass ProfileActivationView(View):\n \"\"\"\n This shows different views to the user when they are verifying\n their account based on whether they are already verified or not.\n \"\"\"\n\n def get(self, request, uidb64, token):\n\n User = get_user_model()\n try:\n uid = force_str(urlsafe_base64_decode(uidb64))\n user = User.objects.get(pk=uid)\n\n except (TypeError, ValueError, OverflowError, User.DoesNotExist):\n user = None\n\n if user is not None and account_activation_token.check_token(user, token):\n profile = Profile.objects.get(user=user)\n if profile.is_verified:\n redirect_link = {\"href\": \"/\", \"label\": \"Back to Main\"}\n template_var = {\n \"title\": \"Email Already Verified\",\n \"content\": \"You have already verified your email\",\n \"link\": redirect_link,\n }\n else:\n profile.is_verified = True\n profile.save()\n\n redirect_link = {\"href\": \"/\", \"label\": \"Back to Main\"}\n template_var = {\n \"title\": \"Email Verification Successful\",\n \"content\": \"Thank you for verifying your email with CiviWiki\",\n \"link\": redirect_link,\n }\n else:\n # invalid link\n redirect_link = {\"href\": \"/\", \"label\": \"Back to Main\"}\n 
template_var = {\n \"title\": \"Email Verification Error\",\n \"content\": \"Email could not be verified\",\n \"link\": redirect_link,\n }\n\n return TemplateResponse(request, \"general_message.html\", template_var)\n\n\nclass ProfileSetupView(LoginRequiredMixin, View):\n \"\"\"A view to make the user profile full_profile\"\"\"\n\n login_url = \"accounts_login\"\n\n def get(self, request):\n profile = Profile.objects.get(user=request.user)\n if profile.full_profile:\n return HttpResponseRedirect(\"/\")\n # start temp rep rendering TODO: REMOVE THIS\n else:\n data = {\n \"username\": request.user.username,\n \"email\": request.user.email,\n }\n return TemplateResponse(request, \"accounts/user-setup.html\", data)\n\n\n@login_required\n@full_profile\ndef user_profile(request, username=None):\n User = get_user_model()\n if request.method == \"GET\":\n if not username:\n return HttpResponseRedirect(f\"/profile/{request.user}\")\n else:\n is_owner = username == request.user.username\n try:\n user = User.objects.get(username=username)\n profile = user.profile_set.first()\n except User.DoesNotExist:\n return HttpResponseRedirect(\"/404\")\n\n form = ProfileEditForm(\n initial={\n \"username\": user.username,\n \"email\": user.email,\n \"first_name\": profile.first_name or None,\n \"last_name\": profile.last_name or None,\n \"about_me\": profile.about_me or None,\n },\n readonly=True,\n )\n data = {\n \"username\": user,\n \"profile_image_form\": UpdateProfileImage,\n \"form\": form if is_owner else None,\n \"readonly\": True,\n }\n return TemplateResponse(request, \"account.html\", data)\n", "path": "project/accounts/views.py"}]} | 2,842 | 201 |
gh_patches_debug_807 | rasdani/github-patches | git_diff | bokeh__bokeh-10106 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] `cd sphinx; make serve` doesn't work
#### ALL software version info (bokeh, python, notebook, OS, browser, any other relevant packages)
Bokeh 2.0.2-76-ga417746c9
#### Description of expected behavior and the observed behavior
The page at https://docs.bokeh.org/en/latest/docs/dev_guide/documentation.html mentions that it's possible to run `make serve` to serve the documentation locally. But running it results in:
```
Exception in thread Thread-2:
Traceback (most recent call last):
File "/home/p-himik/soft/miniconda3/envs/bokeh-dev/lib/python3.7/threading.py", line 917, in _bootstrap_inner
self.run()
File "/home/p-himik/soft/miniconda3/envs/bokeh-dev/lib/python3.7/threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "docserver.py", line 43, in open_browser
webbrowser.open("http://localhost:%d/en/latest/index.html" % PORT, new="tab")
File "/home/p-himik/soft/miniconda3/envs/bokeh-dev/lib/python3.7/webbrowser.py", line 78, in open
if browser.open(url, new, autoraise):
File "/home/p-himik/soft/miniconda3/envs/bokeh-dev/lib/python3.7/webbrowser.py", line 251, in open
"expected 0, 1, or 2, got %s" % new)
webbrowser.Error: Bad 'new' parameter to open(); expected 0, 1, or 2, got tab
```
Not sure where `"tab"` has come from, but it has been there forever.
</issue>
<code>
[start of sphinx/docserver.py]
1 import os
2 import sys
3 import threading
4 import time
5 import webbrowser
6
7 import flask
8 import tornado
9 from tornado.httpserver import HTTPServer
10 from tornado.ioloop import IOLoop
11 from tornado.wsgi import WSGIContainer
12
13 _basedir = os.path.join("..", os.path.dirname(__file__))
14
15 app = flask.Flask(__name__, static_folder="/unused")
16 PORT=5009
17 http_server = HTTPServer(WSGIContainer(app))
18
19 @app.route('/')
20 def welcome():
21 return """
22 <h1>Welcome to the Bokeh documentation server</h1>
23 You probably want to go to <a href="/en/latest/index.html"> Index</a>
24 """
25
26 @app.route('/versions.json')
27 def send_versions():
28 return flask.send_from_directory(
29 os.path.join(_basedir, "sphinx"), "test_versions.json")
30
31 @app.route('/alert.html')
32 def send_alert():
33 return os.environ.get("BOKEH_DOCS_ALERT", "")
34
35 @app.route('/en/latest/<path:filename>')
36 def send_docs(filename):
37 return flask.send_from_directory(
38 os.path.join(_basedir, "sphinx/build/html/"), filename)
39
40 def open_browser():
41 # Child process
42 time.sleep(0.5)
43 webbrowser.open("http://localhost:%d/en/latest/index.html" % PORT, new="tab")
44
45 data = {}
46
47 def serve_http():
48 data['ioloop'] = IOLoop()
49 http_server.listen(PORT)
50 IOLoop.current().start()
51
52 def shutdown_server():
53 ioloop = data['ioloop']
54 ioloop.add_callback(ioloop.stop)
55 print("Asked Server to shut down.")
56
57 def ui():
58 try:
59 time.sleep(0.5)
60 input("Press <ENTER> to exit...\n") # lgtm [py/use-of-input]
61 except KeyboardInterrupt:
62 pass
63
64 if __name__ == "__main__":
65
66 if tornado.version_info[0] == 4:
67 print('docserver.py script requires tornado 5 or higher')
68 sys.exit(1)
69
70 print("\nStarting Bokeh plot server on port %d..." % PORT)
71 print("Visit http://localhost:%d/en/latest/index.html to see plots\n" % PORT)
72
73 t_server = threading.Thread(target=serve_http)
74 t_server.start()
75 t_browser = threading.Thread(target=open_browser)
76 t_browser.start()
77
78 ui()
79
80 shutdown_server()
81 t_server.join()
82 t_browser.join()
83 print("Server shut down.")
84
[end of sphinx/docserver.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sphinx/docserver.py b/sphinx/docserver.py
--- a/sphinx/docserver.py
+++ b/sphinx/docserver.py
@@ -40,7 +40,7 @@
def open_browser():
# Child process
time.sleep(0.5)
- webbrowser.open("http://localhost:%d/en/latest/index.html" % PORT, new="tab")
+ webbrowser.open("http://localhost:%d/en/latest/index.html" % PORT, new=2)
data = {}
| {"golden_diff": "diff --git a/sphinx/docserver.py b/sphinx/docserver.py\n--- a/sphinx/docserver.py\n+++ b/sphinx/docserver.py\n@@ -40,7 +40,7 @@\n def open_browser():\n # Child process\n time.sleep(0.5)\n- webbrowser.open(\"http://localhost:%d/en/latest/index.html\" % PORT, new=\"tab\")\n+ webbrowser.open(\"http://localhost:%d/en/latest/index.html\" % PORT, new=2)\n \n data = {}\n", "issue": "[BUG] `cd sphinx; make serve` doesn't work\n#### ALL software version info (bokeh, python, notebook, OS, browser, any other relevant packages)\r\nBokeh 2.0.2-76-ga417746c9\r\n\r\n#### Description of expected behavior and the observed behavior\r\nThe page at https://docs.bokeh.org/en/latest/docs/dev_guide/documentation.html mentions that it's possible to run `make serve` to serve the documentation locally. But running it results in:\r\n```\r\nException in thread Thread-2:\r\nTraceback (most recent call last):\r\n File \"/home/p-himik/soft/miniconda3/envs/bokeh-dev/lib/python3.7/threading.py\", line 917, in _bootstrap_inner\r\n self.run()\r\n File \"/home/p-himik/soft/miniconda3/envs/bokeh-dev/lib/python3.7/threading.py\", line 865, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"docserver.py\", line 43, in open_browser\r\n webbrowser.open(\"http://localhost:%d/en/latest/index.html\" % PORT, new=\"tab\")\r\n File \"/home/p-himik/soft/miniconda3/envs/bokeh-dev/lib/python3.7/webbrowser.py\", line 78, in open\r\n if browser.open(url, new, autoraise):\r\n File \"/home/p-himik/soft/miniconda3/envs/bokeh-dev/lib/python3.7/webbrowser.py\", line 251, in open\r\n \"expected 0, 1, or 2, got %s\" % new)\r\nwebbrowser.Error: Bad 'new' parameter to open(); expected 0, 1, or 2, got tab\r\n```\r\nNot sure where `\"tab\"` has come from, but it has been there forever.\n", "before_files": [{"content": "import os\nimport sys\nimport threading\nimport time\nimport webbrowser\n\nimport flask\nimport tornado\nfrom tornado.httpserver import HTTPServer\nfrom tornado.ioloop import IOLoop\nfrom tornado.wsgi import WSGIContainer\n\n_basedir = os.path.join(\"..\", os.path.dirname(__file__))\n\napp = flask.Flask(__name__, static_folder=\"/unused\")\nPORT=5009\nhttp_server = HTTPServer(WSGIContainer(app))\n\[email protected]('/')\ndef welcome():\n return \"\"\"\n <h1>Welcome to the Bokeh documentation server</h1>\n You probably want to go to <a href=\"/en/latest/index.html\"> Index</a>\n \"\"\"\n\[email protected]('/versions.json')\ndef send_versions():\n return flask.send_from_directory(\n os.path.join(_basedir, \"sphinx\"), \"test_versions.json\")\n\[email protected]('/alert.html')\ndef send_alert():\n return os.environ.get(\"BOKEH_DOCS_ALERT\", \"\")\n\[email protected]('/en/latest/<path:filename>')\ndef send_docs(filename):\n return flask.send_from_directory(\n os.path.join(_basedir, \"sphinx/build/html/\"), filename)\n\ndef open_browser():\n # Child process\n time.sleep(0.5)\n webbrowser.open(\"http://localhost:%d/en/latest/index.html\" % PORT, new=\"tab\")\n\ndata = {}\n\ndef serve_http():\n data['ioloop'] = IOLoop()\n http_server.listen(PORT)\n IOLoop.current().start()\n\ndef shutdown_server():\n ioloop = data['ioloop']\n ioloop.add_callback(ioloop.stop)\n print(\"Asked Server to shut down.\")\n\ndef ui():\n try:\n time.sleep(0.5)\n input(\"Press <ENTER> to exit...\\n\") # lgtm [py/use-of-input]\n except KeyboardInterrupt:\n pass\n\nif __name__ == \"__main__\":\n\n if tornado.version_info[0] == 4:\n print('docserver.py script requires tornado 5 or higher')\n sys.exit(1)\n\n print(\"\\nStarting Bokeh plot 
server on port %d...\" % PORT)\n print(\"Visit http://localhost:%d/en/latest/index.html to see plots\\n\" % PORT)\n\n t_server = threading.Thread(target=serve_http)\n t_server.start()\n t_browser = threading.Thread(target=open_browser)\n t_browser.start()\n\n ui()\n\n shutdown_server()\n t_server.join()\n t_browser.join()\n print(\"Server shut down.\")\n", "path": "sphinx/docserver.py"}]} | 1,649 | 111 |
gh_patches_debug_3215 | rasdani/github-patches | git_diff | python-discord__bot-733 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Write unit tests for `bot/rules/newlines.py`
Write unit tests for [`bot/rules/newlines.py`](../blob/master/bot/rules/newlines.py).
## Implementation details
Please make sure to read the general information in the [meta issue](553) and the [testing README](../blob/master/tests/README.md). We are aiming for a 100% [branch coverage](https://coverage.readthedocs.io/en/stable/branch.html) for this file, but if you think that is not possible, please discuss that in this issue.
## Additional information
If you want to work on this issue, **please make sure that you get assigned to it** by one of the core devs before starting to work on it. We would like to prevent the situation that multiple people are working on the same issue. To get assigned, leave a comment showing your interesting in tackling this issue.
</issue>
<code>
[start of bot/rules/attachments.py]
1 from typing import Dict, Iterable, List, Optional, Tuple
2
3 from discord import Member, Message
4
5
6 async def apply(
7 last_message: Message, recent_messages: List[Message], config: Dict[str, int]
8 ) -> Optional[Tuple[str, Iterable[Member], Iterable[Message]]]:
9 """Detects total attachments exceeding the limit sent by a single user."""
10 relevant_messages = tuple(
11 msg
12 for msg in recent_messages
13 if (
14 msg.author == last_message.author
15 and len(msg.attachments) > 0
16 )
17 )
18 total_recent_attachments = sum(len(msg.attachments) for msg in relevant_messages)
19
20 if total_recent_attachments > config['max']:
21 return (
22 f"sent {total_recent_attachments} attachments in {config['max']}s",
23 (last_message.author,),
24 relevant_messages
25 )
26 return None
27
[end of bot/rules/attachments.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bot/rules/attachments.py b/bot/rules/attachments.py
--- a/bot/rules/attachments.py
+++ b/bot/rules/attachments.py
@@ -19,7 +19,7 @@
if total_recent_attachments > config['max']:
return (
- f"sent {total_recent_attachments} attachments in {config['max']}s",
+ f"sent {total_recent_attachments} attachments in {config['interval']}s",
(last_message.author,),
relevant_messages
)
| {"golden_diff": "diff --git a/bot/rules/attachments.py b/bot/rules/attachments.py\n--- a/bot/rules/attachments.py\n+++ b/bot/rules/attachments.py\n@@ -19,7 +19,7 @@\n \n if total_recent_attachments > config['max']:\n return (\n- f\"sent {total_recent_attachments} attachments in {config['max']}s\",\n+ f\"sent {total_recent_attachments} attachments in {config['interval']}s\",\n (last_message.author,),\n relevant_messages\n )\n", "issue": "Write unit tests for `bot/rules/newlines.py`\nWrite unit tests for [`bot/rules/newlines.py`](../blob/master/bot/rules/newlines.py).\n\n## Implementation details\nPlease make sure to read the general information in the [meta issue](553) and the [testing README](../blob/master/tests/README.md). We are aiming for a 100% [branch coverage](https://coverage.readthedocs.io/en/stable/branch.html) for this file, but if you think that is not possible, please discuss that in this issue.\n\n## Additional information\nIf you want to work on this issue, **please make sure that you get assigned to it** by one of the core devs before starting to work on it. We would like to prevent the situation that multiple people are working on the same issue. To get assigned, leave a comment showing your interesting in tackling this issue.\n\n", "before_files": [{"content": "from typing import Dict, Iterable, List, Optional, Tuple\n\nfrom discord import Member, Message\n\n\nasync def apply(\n last_message: Message, recent_messages: List[Message], config: Dict[str, int]\n) -> Optional[Tuple[str, Iterable[Member], Iterable[Message]]]:\n \"\"\"Detects total attachments exceeding the limit sent by a single user.\"\"\"\n relevant_messages = tuple(\n msg\n for msg in recent_messages\n if (\n msg.author == last_message.author\n and len(msg.attachments) > 0\n )\n )\n total_recent_attachments = sum(len(msg.attachments) for msg in relevant_messages)\n\n if total_recent_attachments > config['max']:\n return (\n f\"sent {total_recent_attachments} attachments in {config['max']}s\",\n (last_message.author,),\n relevant_messages\n )\n return None\n", "path": "bot/rules/attachments.py"}]} | 950 | 112 |
gh_patches_debug_26022 | rasdani/github-patches | git_diff | mindee__doctr-173 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[docs] Add a visualization of the example script in the README
While the readme specifies how you can use the example script, it does not show any visualization examples. We could easily add one to help users.
</issue>
<code>
[start of doctr/utils/visualization.py]
1 # Copyright (C) 2021, Mindee.
2
3 # This program is licensed under the Apache License version 2.
4 # See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.
5
6 import matplotlib.pyplot as plt
7 import matplotlib.patches as patches
8 import mplcursors
9 import numpy as np
10 from typing import Tuple, List, Dict, Any
11
12 from .common_types import BoundingBox
13
14 __all__ = ['visualize_page']
15
16
17 def create_patch(
18 geometry: BoundingBox,
19 label: str,
20 page_dimensions: Tuple[int, int],
21 color: Tuple[int, int, int],
22 alpha: float = 0.3,
23 linewidth: int = 2,
24 ) -> patches.Patch:
25 """Create a matplotlib patch (rectangle) bounding the element
26
27 Args:
28 geometry: bounding box of the element
29 label: label to display when hovered
30 page_dimensions: dimensions of the Page
31 color: color to draw box
32 alpha: opacity parameter to fill the boxes, 0 = transparent
33 linewidth: line width
34
35 Returns:
36 a rectangular Patch
37 """
38 h, w = page_dimensions
39 (xmin, ymin), (xmax, ymax) = geometry
40 xmin, xmax = xmin * w, xmax * w
41 ymin, ymax = ymin * h, ymax * h
42 rect = patches.Rectangle(
43 (xmin, ymin),
44 xmax - xmin,
45 ymax - ymin,
46 fill=True,
47 linewidth=linewidth,
48 edgecolor=(*color, alpha),
49 facecolor=(*color, alpha),
50 label=label
51 )
52 return rect
53
54
55 def visualize_page(
56 page: Dict[str, Any],
57 image: np.ndarray,
58 words_only: bool = True,
59 ) -> None:
60 """Visualize a full page with predicted blocks, lines and words
61
62 Example::
63 >>> import numpy as np
64 >>> import matplotlib.pyplot as plt
65 >>> from doctr.utils.visualization import visualize_page
66 >>> from doctr.models import ocr_db_crnn
67 >>> model = ocr_db_crnn(pretrained=True)
68 >>> input_page = (255 * np.random.rand(600, 800, 3)).astype(np.uint8)
69 >>> out = model([[input_page]])
70 >>> visualize_page(out[0].pages[0].export(), input_page)
71 >>> plt.show()
72
73 Args:
74 page: the exported Page of a Document
75 image: np array of the page, needs to have the same shape than page['dimensions']
76 words_only: whether only words should be displayed
77 """
78 # Display the image
79 _, ax = plt.subplots()
80 ax.imshow(image)
81 # hide both axis
82 ax.axis('off')
83
84 artists: List[patches.Patch] = [] # instantiate an empty list of patches (to be drawn on the page)
85
86 for block in page['blocks']:
87 if not words_only:
88 rect = create_patch(block['geometry'], 'block', page['dimensions'], (0, 1, 0), linewidth=1)
89 # add patch on figure
90 ax.add_patch(rect)
91 # add patch to cursor's artists
92 artists.append(rect)
93
94 for line in block['lines']:
95 if not words_only:
96 rect = create_patch(line['geometry'], 'line', page['dimensions'], (1, 0, 0), linewidth=1)
97 ax.add_patch(rect)
98 artists.append(rect)
99
100 for word in line['words']:
101 rect = create_patch(word['geometry'], f"{word['value']} (confidence: {word['confidence']:.2%})",
102 page['dimensions'], (0, 0, 1))
103 ax.add_patch(rect)
104 artists.append(rect)
105
106 if not words_only:
107 for artefact in block['artefacts']:
108 rect = create_patch(artefact['geometry'], 'artefact', page['dimensions'], (0.5, 0.5, 0.5), linewidth=1)
109 ax.add_patch(rect)
110 artists.append(rect)
111
112 # Create mlp Cursor to hover patches in artists
113 mplcursors.Cursor(artists, hover=2).connect("add", lambda sel: sel.annotation.set_text(sel.artist.get_label()))
114
[end of doctr/utils/visualization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/doctr/utils/visualization.py b/doctr/utils/visualization.py
--- a/doctr/utils/visualization.py
+++ b/doctr/utils/visualization.py
@@ -56,6 +56,7 @@
page: Dict[str, Any],
image: np.ndarray,
words_only: bool = True,
+ scale: float = 10,
) -> None:
"""Visualize a full page with predicted blocks, lines and words
@@ -74,9 +75,13 @@
page: the exported Page of a Document
image: np array of the page, needs to have the same shape than page['dimensions']
words_only: whether only words should be displayed
+ scale: figsize of the largest windows side
"""
+ # Get proper scale and aspect ratio
+ h, w = image.shape[:2]
+ size = (scale * w / h, scale) if h > w else (scale, h / w * scale)
+ fig, ax = plt.subplots(figsize=size)
# Display the image
- _, ax = plt.subplots()
ax.imshow(image)
# hide both axis
ax.axis('off')
@@ -111,3 +116,4 @@
# Create mlp Cursor to hover patches in artists
mplcursors.Cursor(artists, hover=2).connect("add", lambda sel: sel.annotation.set_text(sel.artist.get_label()))
+ fig.tight_layout()
| {"golden_diff": "diff --git a/doctr/utils/visualization.py b/doctr/utils/visualization.py\n--- a/doctr/utils/visualization.py\n+++ b/doctr/utils/visualization.py\n@@ -56,6 +56,7 @@\n page: Dict[str, Any],\n image: np.ndarray,\n words_only: bool = True,\n+ scale: float = 10,\n ) -> None:\n \"\"\"Visualize a full page with predicted blocks, lines and words\n \n@@ -74,9 +75,13 @@\n page: the exported Page of a Document\n image: np array of the page, needs to have the same shape than page['dimensions']\n words_only: whether only words should be displayed\n+ scale: figsize of the largest windows side\n \"\"\"\n+ # Get proper scale and aspect ratio\n+ h, w = image.shape[:2]\n+ size = (scale * w / h, scale) if h > w else (scale, h / w * scale)\n+ fig, ax = plt.subplots(figsize=size)\n # Display the image\n- _, ax = plt.subplots()\n ax.imshow(image)\n # hide both axis\n ax.axis('off')\n@@ -111,3 +116,4 @@\n \n # Create mlp Cursor to hover patches in artists\n mplcursors.Cursor(artists, hover=2).connect(\"add\", lambda sel: sel.annotation.set_text(sel.artist.get_label()))\n+ fig.tight_layout()\n", "issue": "[docs] Add a visualization of the example script in the README\nWhile the readme specifies how you can use the example script, it does not show any visualization examples. We could easily add one to help users.\n", "before_files": [{"content": "# Copyright (C) 2021, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as patches\nimport mplcursors\nimport numpy as np\nfrom typing import Tuple, List, Dict, Any\n\nfrom .common_types import BoundingBox\n\n__all__ = ['visualize_page']\n\n\ndef create_patch(\n geometry: BoundingBox,\n label: str,\n page_dimensions: Tuple[int, int],\n color: Tuple[int, int, int],\n alpha: float = 0.3,\n linewidth: int = 2,\n) -> patches.Patch:\n \"\"\"Create a matplotlib patch (rectangle) bounding the element\n\n Args:\n geometry: bounding box of the element\n label: label to display when hovered\n page_dimensions: dimensions of the Page\n color: color to draw box\n alpha: opacity parameter to fill the boxes, 0 = transparent\n linewidth: line width\n\n Returns:\n a rectangular Patch\n \"\"\"\n h, w = page_dimensions\n (xmin, ymin), (xmax, ymax) = geometry\n xmin, xmax = xmin * w, xmax * w\n ymin, ymax = ymin * h, ymax * h\n rect = patches.Rectangle(\n (xmin, ymin),\n xmax - xmin,\n ymax - ymin,\n fill=True,\n linewidth=linewidth,\n edgecolor=(*color, alpha),\n facecolor=(*color, alpha),\n label=label\n )\n return rect\n\n\ndef visualize_page(\n page: Dict[str, Any],\n image: np.ndarray,\n words_only: bool = True,\n) -> None:\n \"\"\"Visualize a full page with predicted blocks, lines and words\n\n Example::\n >>> import numpy as np\n >>> import matplotlib.pyplot as plt\n >>> from doctr.utils.visualization import visualize_page\n >>> from doctr.models import ocr_db_crnn\n >>> model = ocr_db_crnn(pretrained=True)\n >>> input_page = (255 * np.random.rand(600, 800, 3)).astype(np.uint8)\n >>> out = model([[input_page]])\n >>> visualize_page(out[0].pages[0].export(), input_page)\n >>> plt.show()\n\n Args:\n page: the exported Page of a Document\n image: np array of the page, needs to have the same shape than page['dimensions']\n words_only: whether only words should be displayed\n \"\"\"\n # Display the image\n _, ax = plt.subplots()\n ax.imshow(image)\n # hide both axis\n ax.axis('off')\n\n 
artists: List[patches.Patch] = [] # instantiate an empty list of patches (to be drawn on the page)\n\n for block in page['blocks']:\n if not words_only:\n rect = create_patch(block['geometry'], 'block', page['dimensions'], (0, 1, 0), linewidth=1)\n # add patch on figure\n ax.add_patch(rect)\n # add patch to cursor's artists\n artists.append(rect)\n\n for line in block['lines']:\n if not words_only:\n rect = create_patch(line['geometry'], 'line', page['dimensions'], (1, 0, 0), linewidth=1)\n ax.add_patch(rect)\n artists.append(rect)\n\n for word in line['words']:\n rect = create_patch(word['geometry'], f\"{word['value']} (confidence: {word['confidence']:.2%})\",\n page['dimensions'], (0, 0, 1))\n ax.add_patch(rect)\n artists.append(rect)\n\n if not words_only:\n for artefact in block['artefacts']:\n rect = create_patch(artefact['geometry'], 'artefact', page['dimensions'], (0.5, 0.5, 0.5), linewidth=1)\n ax.add_patch(rect)\n artists.append(rect)\n\n # Create mlp Cursor to hover patches in artists\n mplcursors.Cursor(artists, hover=2).connect(\"add\", lambda sel: sel.annotation.set_text(sel.artist.get_label()))\n", "path": "doctr/utils/visualization.py"}]} | 1,736 | 317 |
gh_patches_debug_36520 | rasdani/github-patches | git_diff | vacanza__python-holidays-1555 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update Denmark holidays
I've received an email with a link to https://www.norden.org/en/info-norden/public-holidays-denmark
The author complained about the absence of June 5th from the list of holiday dates:
> The calendar for Denmark does not include 5 June.
Denmark holidays need to be extended using the categories approach.
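For illustration, a minimal sketch of what the categories approach could look like — splitting population into public vs. optional days so June 5 (Grundlovsdag) is exposed under an OPTIONAL category. The category constants and helper names below follow the library's group helpers but are an assumption here, not a final implementation:

```python
# Sketch only: expose Constitution Day (June 5) and similar days via an
# OPTIONAL category, keeping statutory days under PUBLIC.
from gettext import gettext as tr

from holidays.constants import OPTIONAL, PUBLIC  # assumed category constants
from holidays.groups import ChristianHolidays, InternationalHolidays
from holidays.holiday_base import HolidayBase


class Denmark(HolidayBase, ChristianHolidays, InternationalHolidays):
    country = "DK"
    default_language = "da"
    supported_categories = {OPTIONAL, PUBLIC}

    def _populate_public_holidays(self):
        # Statutory public holidays (Nytårsdag, Påskedag, Juledag, ...) stay here.
        self._add_new_years_day(tr("Nytårsdag"))
        ...

    def _populate_optional_holidays(self):
        # Widely observed days that are not statutory public holidays.
        self._add_labor_day(tr("Arbejdernes kampdag"))   # May 1
        self._add_holiday_jun_5(tr("Grundlovsdag"))      # Constitution Day
        self._add_christmas_eve(tr("Juleaftensdag"))     # December 24
        self._add_new_years_eve(tr("Nytårsaften"))       # December 31
```

Callers could then opt in with something like `Denmark(categories=(PUBLIC, OPTIONAL))` to see June 5 in the results.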
</issue>
<code>
[start of holidays/countries/denmark.py]
1 # python-holidays
2 # ---------------
3 # A fast, efficient Python library for generating country, province and state
4 # specific sets of holidays on the fly. It aims to make determining whether a
5 # specific date is a holiday as fast and flexible as possible.
6 #
7 # Authors: dr-prodigy <[email protected]> (c) 2017-2023
8 # ryanss <[email protected]> (c) 2014-2017
9 # Website: https://github.com/dr-prodigy/python-holidays
10 # License: MIT (see LICENSE file)
11
12 from datetime import timedelta as td
13 from gettext import gettext as tr
14
15 from holidays.groups import ChristianHolidays, InternationalHolidays
16 from holidays.holiday_base import HolidayBase
17
18
19 class Denmark(HolidayBase, ChristianHolidays, InternationalHolidays):
20 """
21 Denmark holidays.
22
23 References:
24 - https://en.wikipedia.org/wiki/Public_holidays_in_Denmark
25 - https://www.ft.dk/samling/20222/lovforslag/l13/index.htm
26 """
27
28 country = "DK"
29 default_language = "da"
30 supported_languages = ("da", "en_US", "uk")
31
32 def __init__(self, *args, **kwargs):
33 ChristianHolidays.__init__(self)
34 InternationalHolidays.__init__(self)
35 super().__init__(*args, **kwargs)
36
37 def _populate(self, year):
38 super()._populate(year)
39
40 # New Year's Day.
41         self._add_new_years_day(tr("Nytårsdag"))
42
43 # Holy Thursday.
44 self._add_holy_thursday(tr("Skærtorsdag"))
45
46 # Good Friday.
47 self._add_good_friday(tr("Langfredag"))
48
49 # Easter Sunday.
50         self._add_easter_sunday(tr("Påskedag"))
51
52 # Easter Monday.
53         self._add_easter_monday(tr("Anden påskedag"))
54
55 # See https://www.ft.dk/samling/20222/lovforslag/l13/index.htm
56 if year <= 2023:
57 # Great Day of Prayers.
58 self._add_holiday(tr("Store bededag"), self._easter_sunday + td(days=+26))
59
60 # Ascension Day.
61 self._add_ascension_thursday(tr("Kristi himmelfartsdag"))
62
63 # Whit Sunday.
64 self._add_whit_sunday(tr("Pinsedag"))
65
66 # Whit Monday.
67 self._add_whit_monday(tr("Anden pinsedag"))
68
69 # Christmas Day.
70 self._add_christmas_day(tr("Juledag"))
71
72 # Second Day of Christmas.
73 self._add_christmas_day_two(tr("Anden juledag"))
74
75
76 class DK(Denmark):
77 pass
78
79
80 class DNK(Denmark):
81 pass
82
[end of holidays/countries/denmark.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/holidays/countries/denmark.py b/holidays/countries/denmark.py
--- a/holidays/countries/denmark.py
+++ b/holidays/countries/denmark.py
@@ -12,6 +12,7 @@
from datetime import timedelta as td
from gettext import gettext as tr
+from holidays.constants import OPTIONAL, PUBLIC
from holidays.groups import ChristianHolidays, InternationalHolidays
from holidays.holiday_base import HolidayBase
@@ -22,11 +23,13 @@
References:
- https://en.wikipedia.org/wiki/Public_holidays_in_Denmark
+ - https://www.norden.org/en/info-norden/public-holidays-denmark
- https://www.ft.dk/samling/20222/lovforslag/l13/index.htm
"""
country = "DK"
default_language = "da"
+ supported_categories = {OPTIONAL, PUBLIC}
supported_languages = ("da", "en_US", "uk")
def __init__(self, *args, **kwargs):
@@ -34,9 +37,7 @@
InternationalHolidays.__init__(self)
super().__init__(*args, **kwargs)
- def _populate(self, year):
- super()._populate(year)
-
+ def _populate_public_holidays(self):
# New Year's Day.
self._add_new_years_day(tr("NytΓ₯rsdag"))
@@ -53,7 +54,7 @@
self._add_easter_monday(tr("Anden pΓ₯skedag"))
# See https://www.ft.dk/samling/20222/lovforslag/l13/index.htm
- if year <= 2023:
+ if self._year <= 2023:
# Great Day of Prayers.
self._add_holiday(tr("Store bededag"), self._easter_sunday + td(days=+26))
@@ -72,6 +73,19 @@
# Second Day of Christmas.
self._add_christmas_day_two(tr("Anden juledag"))
+ def _populate_optional_holidays(self):
+ # International Workers' Day.
+ self._add_labor_day(tr("Arbejdernes kampdag"))
+
+ # Constitution Day.
+ self._add_holiday_jun_5(tr("Grundlovsdag"))
+
+ # Christmas Eve.
+ self._add_christmas_eve(tr("Juleaftensdag"))
+
+ # New Year's Eve.
+ self._add_new_years_eve(tr("NytΓ₯rsaften"))
+
class DK(Denmark):
pass
| {"golden_diff": "diff --git a/holidays/countries/denmark.py b/holidays/countries/denmark.py\n--- a/holidays/countries/denmark.py\n+++ b/holidays/countries/denmark.py\n@@ -12,6 +12,7 @@\n from datetime import timedelta as td\n from gettext import gettext as tr\n \n+from holidays.constants import OPTIONAL, PUBLIC\n from holidays.groups import ChristianHolidays, InternationalHolidays\n from holidays.holiday_base import HolidayBase\n \n@@ -22,11 +23,13 @@\n \n References:\n - https://en.wikipedia.org/wiki/Public_holidays_in_Denmark\n+ - https://www.norden.org/en/info-norden/public-holidays-denmark\n - https://www.ft.dk/samling/20222/lovforslag/l13/index.htm\n \"\"\"\n \n country = \"DK\"\n default_language = \"da\"\n+ supported_categories = {OPTIONAL, PUBLIC}\n supported_languages = (\"da\", \"en_US\", \"uk\")\n \n def __init__(self, *args, **kwargs):\n@@ -34,9 +37,7 @@\n InternationalHolidays.__init__(self)\n super().__init__(*args, **kwargs)\n \n- def _populate(self, year):\n- super()._populate(year)\n-\n+ def _populate_public_holidays(self):\n # New Year's Day.\n self._add_new_years_day(tr(\"Nyt\u00e5rsdag\"))\n \n@@ -53,7 +54,7 @@\n self._add_easter_monday(tr(\"Anden p\u00e5skedag\"))\n \n # See https://www.ft.dk/samling/20222/lovforslag/l13/index.htm\n- if year <= 2023:\n+ if self._year <= 2023:\n # Great Day of Prayers.\n self._add_holiday(tr(\"Store bededag\"), self._easter_sunday + td(days=+26))\n \n@@ -72,6 +73,19 @@\n # Second Day of Christmas.\n self._add_christmas_day_two(tr(\"Anden juledag\"))\n \n+ def _populate_optional_holidays(self):\n+ # International Workers' Day.\n+ self._add_labor_day(tr(\"Arbejdernes kampdag\"))\n+\n+ # Constitution Day.\n+ self._add_holiday_jun_5(tr(\"Grundlovsdag\"))\n+\n+ # Christmas Eve.\n+ self._add_christmas_eve(tr(\"Juleaftensdag\"))\n+\n+ # New Year's Eve.\n+ self._add_new_years_eve(tr(\"Nyt\u00e5rsaften\"))\n+\n \n class DK(Denmark):\n pass\n", "issue": "Update Denmark holidays\nI've received an email with a link to https://www.norden.org/en/info-norden/public-holidays-denmark\r\n\r\nThe author complained about absence of June 5th in the list of holiday dates:\r\n\r\n> The calendar for Denmark does not include 5 June.\r\n\r\nDenmark holidays need to be extended using categories approach.\n", "before_files": [{"content": "# python-holidays\n# ---------------\n# A fast, efficient Python library for generating country, province and state\n# specific sets of holidays on the fly. 
It aims to make determining whether a\n# specific date is a holiday as fast and flexible as possible.\n#\n# Authors: dr-prodigy <[email protected]> (c) 2017-2023\n# ryanss <[email protected]> (c) 2014-2017\n# Website: https://github.com/dr-prodigy/python-holidays\n# License: MIT (see LICENSE file)\n\nfrom datetime import timedelta as td\nfrom gettext import gettext as tr\n\nfrom holidays.groups import ChristianHolidays, InternationalHolidays\nfrom holidays.holiday_base import HolidayBase\n\n\nclass Denmark(HolidayBase, ChristianHolidays, InternationalHolidays):\n \"\"\"\n Denmark holidays.\n\n References:\n - https://en.wikipedia.org/wiki/Public_holidays_in_Denmark\n - https://www.ft.dk/samling/20222/lovforslag/l13/index.htm\n \"\"\"\n\n country = \"DK\"\n default_language = \"da\"\n supported_languages = (\"da\", \"en_US\", \"uk\")\n\n def __init__(self, *args, **kwargs):\n ChristianHolidays.__init__(self)\n InternationalHolidays.__init__(self)\n super().__init__(*args, **kwargs)\n\n def _populate(self, year):\n super()._populate(year)\n\n # New Year's Day.\n self._add_new_years_day(tr(\"Nyt\u00e5rsdag\"))\n\n # Holy Thursday.\n self._add_holy_thursday(tr(\"Sk\u00e6rtorsdag\"))\n\n # Good Friday.\n self._add_good_friday(tr(\"Langfredag\"))\n\n # Easter Sunday.\n self._add_easter_sunday(tr(\"P\u00e5skedag\"))\n\n # Easter Monday.\n self._add_easter_monday(tr(\"Anden p\u00e5skedag\"))\n\n # See https://www.ft.dk/samling/20222/lovforslag/l13/index.htm\n if year <= 2023:\n # Great Day of Prayers.\n self._add_holiday(tr(\"Store bededag\"), self._easter_sunday + td(days=+26))\n\n # Ascension Day.\n self._add_ascension_thursday(tr(\"Kristi himmelfartsdag\"))\n\n # Whit Sunday.\n self._add_whit_sunday(tr(\"Pinsedag\"))\n\n # Whit Monday.\n self._add_whit_monday(tr(\"Anden pinsedag\"))\n\n # Christmas Day.\n self._add_christmas_day(tr(\"Juledag\"))\n\n # Second Day of Christmas.\n self._add_christmas_day_two(tr(\"Anden juledag\"))\n\n\nclass DK(Denmark):\n pass\n\n\nclass DNK(Denmark):\n pass\n", "path": "holidays/countries/denmark.py"}]} | 1,420 | 598 |
gh_patches_debug_21675 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-2385 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
counting contributions to polls on module tile
As discussed, please count the comments AND all answers on poll module tiles.
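A minimal sketch of how poll answers could be folded into the tile count, assuming the polls app exposes a `Vote` model reachable as choice → question → poll → module (an assumption for illustration):

```python
# Sketch only: count votes (poll answers) in addition to comments on a poll
# module. Assumes meinberlin.apps.polls provides a Vote model related as
# choice -> question -> poll -> module.
from adhocracy4.comments.models import Comment
from meinberlin.apps.polls.models import Vote


def poll_entry_count(module):
    """Comments on the poll plus every submitted answer (vote)."""
    comments = Comment.objects.filter(poll__module=module).count()
    votes = Vote.objects.filter(choice__question__poll__module=module).count()
    return comments + votes
```

`get_num_entries()` in the template tag could then add this value to the existing idea/proposal/comment totals.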
</issue>
<code>
[start of meinberlin/apps/projects/templatetags/meinberlin_project_tags.py]
1 from django import template
2
3 from adhocracy4.comments.models import Comment
4 from meinberlin.apps.budgeting.models import Proposal as budget_proposal
5 from meinberlin.apps.ideas.models import Idea
6 from meinberlin.apps.kiezkasse.models import Proposal as kiezkasse_proposal
7 from meinberlin.apps.mapideas.models import MapIdea
8 from meinberlin.apps.projects import get_project_type
9
10 register = template.Library()
11
12
13 @register.filter
14 def project_url(project):
15 if get_project_type(project) in ('external', 'bplan'):
16 return project.externalproject.url
17 return project.get_absolute_url()
18
19
20 @register.filter
21 def project_type(project):
22 return get_project_type(project)
23
24
25 @register.filter
26 def is_external(project):
27 return get_project_type(project) in ('external', 'bplan')
28
29
30 @register.filter
31 def is_container(project):
32 return get_project_type(project) == 'container'
33
34
35 @register.simple_tag
36 def to_class_name(value):
37 return value.__class__.__name__
38
39
40 @register.simple_tag
41 def get_num_entries(module):
42 """Count all user-generated items."""
43 item_count = \
44 Idea.objects.filter(module=module).count() \
45 + MapIdea.objects.filter(module=module).count() \
46 + budget_proposal.objects.filter(module=module).count() \
47 + kiezkasse_proposal.objects.filter(module=module).count() \
48 + Comment.objects.filter(idea__module=module).count() \
49 + Comment.objects.filter(mapidea__module=module).count() \
50 + Comment.objects.filter(budget_proposal__module=module).count() \
51 + Comment.objects.filter(kiezkasse_proposal__module=module).count() \
52 + Comment.objects.filter(topic__module=module).count() \
53 + Comment.objects.filter(maptopic__module=module).count() \
54 + Comment.objects.filter(paragraph__chapter__module=module).count() \
55 + Comment.objects.filter(chapter__module=module).count() \
56 + Comment.objects.filter(poll__module=module).count()
57 return item_count
58
[end of meinberlin/apps/projects/templatetags/meinberlin_project_tags.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py
--- a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py
+++ b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py
@@ -5,6 +5,7 @@
from meinberlin.apps.ideas.models import Idea
from meinberlin.apps.kiezkasse.models import Proposal as kiezkasse_proposal
from meinberlin.apps.mapideas.models import MapIdea
+from meinberlin.apps.polls.models import Vote
from meinberlin.apps.projects import get_project_type
register = template.Library()
@@ -53,5 +54,6 @@
+ Comment.objects.filter(maptopic__module=module).count() \
+ Comment.objects.filter(paragraph__chapter__module=module).count() \
+ Comment.objects.filter(chapter__module=module).count() \
- + Comment.objects.filter(poll__module=module).count()
+ + Comment.objects.filter(poll__module=module).count() \
+ + Vote.objects.filter(choice__question__poll__module=module).count()
return item_count
| {"golden_diff": "diff --git a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py\n--- a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py\n+++ b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py\n@@ -5,6 +5,7 @@\n from meinberlin.apps.ideas.models import Idea\n from meinberlin.apps.kiezkasse.models import Proposal as kiezkasse_proposal\n from meinberlin.apps.mapideas.models import MapIdea\n+from meinberlin.apps.polls.models import Vote\n from meinberlin.apps.projects import get_project_type\n \n register = template.Library()\n@@ -53,5 +54,6 @@\n + Comment.objects.filter(maptopic__module=module).count() \\\n + Comment.objects.filter(paragraph__chapter__module=module).count() \\\n + Comment.objects.filter(chapter__module=module).count() \\\n- + Comment.objects.filter(poll__module=module).count()\n+ + Comment.objects.filter(poll__module=module).count() \\\n+ + Vote.objects.filter(choice__question__poll__module=module).count()\n return item_count\n", "issue": "counting contributions to polls on module tile\nas discussed please count the comments AND all answers on poll module tiles.\n", "before_files": [{"content": "from django import template\n\nfrom adhocracy4.comments.models import Comment\nfrom meinberlin.apps.budgeting.models import Proposal as budget_proposal\nfrom meinberlin.apps.ideas.models import Idea\nfrom meinberlin.apps.kiezkasse.models import Proposal as kiezkasse_proposal\nfrom meinberlin.apps.mapideas.models import MapIdea\nfrom meinberlin.apps.projects import get_project_type\n\nregister = template.Library()\n\n\[email protected]\ndef project_url(project):\n if get_project_type(project) in ('external', 'bplan'):\n return project.externalproject.url\n return project.get_absolute_url()\n\n\[email protected]\ndef project_type(project):\n return get_project_type(project)\n\n\[email protected]\ndef is_external(project):\n return get_project_type(project) in ('external', 'bplan')\n\n\[email protected]\ndef is_container(project):\n return get_project_type(project) == 'container'\n\n\[email protected]_tag\ndef to_class_name(value):\n return value.__class__.__name__\n\n\[email protected]_tag\ndef get_num_entries(module):\n \"\"\"Count all user-generated items.\"\"\"\n item_count = \\\n Idea.objects.filter(module=module).count() \\\n + MapIdea.objects.filter(module=module).count() \\\n + budget_proposal.objects.filter(module=module).count() \\\n + kiezkasse_proposal.objects.filter(module=module).count() \\\n + Comment.objects.filter(idea__module=module).count() \\\n + Comment.objects.filter(mapidea__module=module).count() \\\n + Comment.objects.filter(budget_proposal__module=module).count() \\\n + Comment.objects.filter(kiezkasse_proposal__module=module).count() \\\n + Comment.objects.filter(topic__module=module).count() \\\n + Comment.objects.filter(maptopic__module=module).count() \\\n + Comment.objects.filter(paragraph__chapter__module=module).count() \\\n + Comment.objects.filter(chapter__module=module).count() \\\n + Comment.objects.filter(poll__module=module).count()\n return item_count\n", "path": "meinberlin/apps/projects/templatetags/meinberlin_project_tags.py"}]} | 1,138 | 280 |
gh_patches_debug_32506 | rasdani/github-patches | git_diff | mkdocs__mkdocs-2438 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Re-building w/ symbolic links stopped working, regression after #2385
Since a444c43 in master, when using the local development server via `mkdocs serve`, updating files that are symbolically linked no longer triggers a rebuild (and therefore does not reload browser tabs).
At first glance this is due to the switch to watchdog for detecting file-system changes, which needs more guidance to handle this file type.
Preparing a PR with a patch.
Ref: a444c43474f91dea089922dd8fb188d1db3a4535
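As an illustration of the direction such a patch could take, the watcher can resolve symlinks to their real targets before scheduling handlers, so edits to the target are observed (function and variable names here are illustrative, not the actual patch):

```python
# Sketch only: schedule watchdog handlers on the *resolved* target of a path,
# so a symlinked file or directory still triggers rebuilds when its target
# changes.
import pathlib

import watchdog.events


def watch_path(observer, callback, path, recursive=True):
    path = pathlib.Path(path).resolve()  # follow symlinks to the real location
    handler = watchdog.events.FileSystemEventHandler()
    handler.on_any_event = callback
    if path.is_file():
        # Watchdog cannot watch a single file; watch its parent directory and
        # let the callback filter events by the file's path.
        observer.schedule(handler, path.parent)
    else:
        observer.schedule(handler, path, recursive=recursive)
```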
</issue>
<code>
[start of mkdocs/livereload/__init__.py]
1 import functools
2 import io
3 import logging
4 import mimetypes
5 import os
6 import os.path
7 import pathlib
8 import re
9 import socketserver
10 import threading
11 import time
12 import warnings
13 import wsgiref.simple_server
14
15 import watchdog.events
16 import watchdog.observers
17
18
19 class _LoggerAdapter(logging.LoggerAdapter):
20 def process(self, msg, kwargs):
21 return time.strftime("[%H:%M:%S] ") + msg, kwargs
22
23
24 log = _LoggerAdapter(logging.getLogger(__name__), {})
25
26
27 class LiveReloadServer(socketserver.ThreadingMixIn, wsgiref.simple_server.WSGIServer):
28 daemon_threads = True
29 poll_response_timeout = 60
30
31 def __init__(
32 self,
33 builder,
34 host,
35 port,
36 root,
37 mount_path="/",
38 build_delay=0.25,
39 shutdown_delay=0.25,
40 **kwargs,
41 ):
42 self.builder = builder
43 self.server_name = host
44 self.server_port = port
45 self.root = os.path.abspath(root)
46 self.mount_path = ("/" + mount_path.lstrip("/")).rstrip("/") + "/"
47 self.url = f"http://{self.server_name}:{self.server_port}{self.mount_path}"
48 self.build_delay = build_delay
49 self.shutdown_delay = shutdown_delay
50 # To allow custom error pages.
51 self.error_handler = lambda code: None
52
53 super().__init__((host, port), _Handler, **kwargs)
54 self.set_app(self.serve_request)
55
56 self._wanted_epoch = _timestamp() # The version of the site that started building.
57 self._visible_epoch = self._wanted_epoch # Latest fully built version of the site.
58 self._epoch_cond = threading.Condition() # Must be held when accessing _visible_epoch.
59
60 self._to_rebuild = {} # Used as an ordered set of functions to call.
61 self._rebuild_cond = threading.Condition() # Must be held when accessing _to_rebuild.
62
63 self._shutdown = False
64 self.serve_thread = threading.Thread(target=lambda: self.serve_forever(shutdown_delay))
65 self.observer = watchdog.observers.Observer(timeout=shutdown_delay)
66
67 def watch(self, path, func=None, recursive=True):
68 """Add the 'path' to watched paths, call the function and reload when any file changes under it."""
69 path = os.path.abspath(path)
70 if func in (None, self.builder):
71 func = self.builder
72 else:
73 warnings.warn(
74 "Plugins should not pass the 'func' parameter of watch(). "
75 "The ability to execute custom callbacks will be removed soon.",
76 DeprecationWarning,
77 stacklevel=2,
78 )
79
80 def callback(event, allowed_path=None):
81 if isinstance(event, watchdog.events.DirCreatedEvent):
82 return
83 if allowed_path is not None and event.src_path != allowed_path:
84 return
85 # Text editors always cause a "file close" event in addition to "modified" when saving
86 # a file. Some editors also have "swap" functionality that keeps writing into another
87 # file that's never closed. Prevent such write events from causing a rebuild.
88 if isinstance(event, watchdog.events.FileModifiedEvent):
89 # But FileClosedEvent is implemented only on Linux, otherwise we mustn't skip this:
90 if type(self.observer).__name__ == "InotifyObserver":
91 return
92 log.debug(str(event))
93 with self._rebuild_cond:
94 self._to_rebuild[func] = True
95 self._rebuild_cond.notify_all()
96
97 dir_handler = watchdog.events.FileSystemEventHandler()
98 dir_handler.on_any_event = callback
99
100 seen = set()
101
102 def schedule(path):
103 seen.add(path)
104 if os.path.isfile(path):
105 # Watchdog doesn't support watching files, so watch its directory and filter by path
106 handler = watchdog.events.FileSystemEventHandler()
107 handler.on_any_event = lambda event: callback(event, allowed_path=path)
108
109 parent = os.path.dirname(path)
110 log.debug(f"Watching file '{path}' through directory '{parent}'")
111 self.observer.schedule(handler, parent)
112 else:
113 log.debug(f"Watching directory '{path}'")
114 self.observer.schedule(dir_handler, path, recursive=recursive)
115
116 schedule(os.path.realpath(path))
117
118 def watch_symlink_targets(path_obj): # path is os.DirEntry or pathlib.Path
119 if path_obj.is_symlink():
120 # The extra `readlink` is needed due to https://bugs.python.org/issue9949
121 target = os.path.realpath(os.readlink(os.fspath(path_obj)))
122 if target in seen or not os.path.exists(target):
123 return
124 schedule(target)
125
126 path_obj = pathlib.Path(target)
127
128 if path_obj.is_dir() and recursive:
129 with os.scandir(os.fspath(path_obj)) as scan:
130 for entry in scan:
131 watch_symlink_targets(entry)
132
133 watch_symlink_targets(pathlib.Path(path))
134
135 def serve(self):
136 self.observer.start()
137
138 log.info(f"Serving on {self.url}")
139 self.serve_thread.start()
140
141 self._build_loop()
142
143 def _build_loop(self):
144 while True:
145 with self._rebuild_cond:
146 while not self._rebuild_cond.wait_for(
147 lambda: self._to_rebuild or self._shutdown, timeout=self.shutdown_delay
148 ):
149 # We could have used just one wait instead of a loop + timeout, but we need
150 # occasional breaks, otherwise on Windows we can't receive KeyboardInterrupt.
151 pass
152 if self._shutdown:
153 break
154 log.info("Detected file changes")
155 while self._rebuild_cond.wait(timeout=self.build_delay):
156 log.debug("Waiting for file changes to stop happening")
157
158 self._wanted_epoch = _timestamp()
159 funcs = list(self._to_rebuild)
160 self._to_rebuild.clear()
161
162 for func in funcs:
163 func()
164
165 with self._epoch_cond:
166 log.info("Reloading browsers")
167 self._visible_epoch = self._wanted_epoch
168 self._epoch_cond.notify_all()
169
170 def shutdown(self):
171 self.observer.stop()
172 with self._rebuild_cond:
173 self._shutdown = True
174 self._rebuild_cond.notify_all()
175
176 if self.serve_thread.is_alive():
177 super().shutdown()
178 self.serve_thread.join()
179 self.observer.join()
180
181 def serve_request(self, environ, start_response):
182 try:
183 result = self._serve_request(environ, start_response)
184 except Exception:
185 code = 500
186 msg = "500 Internal Server Error"
187 log.exception(msg)
188 else:
189 if result is not None:
190 return result
191 code = 404
192 msg = "404 Not Found"
193
194 error_content = None
195 try:
196 error_content = self.error_handler(code)
197 except Exception:
198 log.exception("Failed to render an error message!")
199 if error_content is None:
200 error_content = msg.encode()
201
202 start_response(msg, [("Content-Type", "text/html")])
203 return [error_content]
204
205 def _serve_request(self, environ, start_response):
206 path = environ["PATH_INFO"]
207
208 m = re.fullmatch(r"/livereload/([0-9]+)/[0-9]+", path)
209 if m:
210 epoch = int(m[1])
211 start_response("200 OK", [("Content-Type", "text/plain")])
212
213 def condition():
214 return self._visible_epoch > epoch
215
216 with self._epoch_cond:
217 if not condition():
218 # Stall the browser, respond as soon as there's something new.
219 # If there's not, respond anyway after a minute.
220 self._log_poll_request(environ.get("HTTP_REFERER"), request_id=path)
221 self._epoch_cond.wait_for(condition, timeout=self.poll_response_timeout)
222 return [b"%d" % self._visible_epoch]
223
224 if path == "/js/livereload.js":
225 file_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "livereload.js")
226 elif path.startswith(self.mount_path):
227 if path.endswith("/"):
228 path += "index.html"
229 path = path[len(self.mount_path):]
230 file_path = os.path.join(self.root, path.lstrip("/"))
231 elif path == "/":
232 start_response("302 Found", [("Location", self.mount_path)])
233 return []
234 else:
235 return None # Not found
236
237 # Wait until the ongoing rebuild (if any) finishes, so we're not serving a half-built site.
238 with self._epoch_cond:
239 self._epoch_cond.wait_for(lambda: self._visible_epoch == self._wanted_epoch)
240 epoch = self._visible_epoch
241
242 try:
243 file = open(file_path, "rb")
244 except OSError:
245 return None # Not found
246
247 if path.endswith(".html"):
248 with file:
249 content = file.read()
250 content = self._inject_js_into_html(content, epoch)
251 file = io.BytesIO(content)
252 content_length = len(content)
253 else:
254 content_length = os.path.getsize(file_path)
255
256 content_type = self._guess_type(file_path)
257 start_response(
258 "200 OK", [("Content-Type", content_type), ("Content-Length", str(content_length))]
259 )
260 return wsgiref.util.FileWrapper(file)
261
262 @classmethod
263 def _inject_js_into_html(cls, content, epoch):
264 try:
265 body_end = content.rindex(b"</body>")
266 except ValueError:
267 body_end = len(content)
268 # The page will reload if the livereload poller returns a newer epoch than what it knows.
269 # The other timestamp becomes just a unique identifier for the initiating page.
270 return (
271 b'%b<script src="/js/livereload.js"></script><script>livereload(%d, %d);</script>%b'
272 % (content[:body_end], epoch, _timestamp(), content[body_end:])
273 )
274
275 @classmethod
276 @functools.lru_cache() # "Cache" to not repeat the same message for the same browser tab.
277 def _log_poll_request(cls, url, request_id):
278 log.info(f"Browser connected: {url}")
279
280 def _guess_type(cls, path):
281 # MkDocs only ensures a few common types (as seen in livereload_tests.py::test_mime_types).
282 # Other uncommon types will not be accepted.
283 if path.endswith((".js", ".JS")):
284 return "application/javascript"
285 if path.endswith(".gz"):
286 return "application/gzip"
287
288 guess, _ = mimetypes.guess_type(path)
289 if guess:
290 return guess
291 return "application/octet-stream"
292
293
294 class _Handler(wsgiref.simple_server.WSGIRequestHandler):
295 def log_request(self, code="-", size="-"):
296 level = logging.DEBUG if str(code) == "200" else logging.WARNING
297 log.log(level, f'"{self.requestline}" code {code}')
298
299 def log_message(self, format, *args):
300 log.debug(format, *args)
301
302
303 def _timestamp():
304 return round(time.monotonic() * 1000)
305
[end of mkdocs/livereload/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mkdocs/livereload/__init__.py b/mkdocs/livereload/__init__.py
--- a/mkdocs/livereload/__init__.py
+++ b/mkdocs/livereload/__init__.py
@@ -101,29 +101,26 @@
def schedule(path):
seen.add(path)
- if os.path.isfile(path):
+ if path.is_file():
# Watchdog doesn't support watching files, so watch its directory and filter by path
handler = watchdog.events.FileSystemEventHandler()
- handler.on_any_event = lambda event: callback(event, allowed_path=path)
+ handler.on_any_event = lambda event: callback(event, allowed_path=os.fspath(path))
- parent = os.path.dirname(path)
+ parent = path.parent
log.debug(f"Watching file '{path}' through directory '{parent}'")
self.observer.schedule(handler, parent)
else:
log.debug(f"Watching directory '{path}'")
self.observer.schedule(dir_handler, path, recursive=recursive)
- schedule(os.path.realpath(path))
+ schedule(pathlib.Path(path).resolve())
def watch_symlink_targets(path_obj): # path is os.DirEntry or pathlib.Path
if path_obj.is_symlink():
- # The extra `readlink` is needed due to https://bugs.python.org/issue9949
- target = os.path.realpath(os.readlink(os.fspath(path_obj)))
- if target in seen or not os.path.exists(target):
+ path_obj = pathlib.Path(path_obj).resolve()
+ if path_obj in seen or not path_obj.exists():
return
- schedule(target)
-
- path_obj = pathlib.Path(target)
+ schedule(path_obj)
if path_obj.is_dir() and recursive:
with os.scandir(os.fspath(path_obj)) as scan:
| {"golden_diff": "diff --git a/mkdocs/livereload/__init__.py b/mkdocs/livereload/__init__.py\n--- a/mkdocs/livereload/__init__.py\n+++ b/mkdocs/livereload/__init__.py\n@@ -101,29 +101,26 @@\n \n def schedule(path):\n seen.add(path)\n- if os.path.isfile(path):\n+ if path.is_file():\n # Watchdog doesn't support watching files, so watch its directory and filter by path\n handler = watchdog.events.FileSystemEventHandler()\n- handler.on_any_event = lambda event: callback(event, allowed_path=path)\n+ handler.on_any_event = lambda event: callback(event, allowed_path=os.fspath(path))\n \n- parent = os.path.dirname(path)\n+ parent = path.parent\n log.debug(f\"Watching file '{path}' through directory '{parent}'\")\n self.observer.schedule(handler, parent)\n else:\n log.debug(f\"Watching directory '{path}'\")\n self.observer.schedule(dir_handler, path, recursive=recursive)\n \n- schedule(os.path.realpath(path))\n+ schedule(pathlib.Path(path).resolve())\n \n def watch_symlink_targets(path_obj): # path is os.DirEntry or pathlib.Path\n if path_obj.is_symlink():\n- # The extra `readlink` is needed due to https://bugs.python.org/issue9949\n- target = os.path.realpath(os.readlink(os.fspath(path_obj)))\n- if target in seen or not os.path.exists(target):\n+ path_obj = pathlib.Path(path_obj).resolve()\n+ if path_obj in seen or not path_obj.exists():\n return\n- schedule(target)\n-\n- path_obj = pathlib.Path(target)\n+ schedule(path_obj)\n \n if path_obj.is_dir() and recursive:\n with os.scandir(os.fspath(path_obj)) as scan:\n", "issue": "Re-building w/ symbolic links stopped working, regression after #2385\nSince a444c43 in master using the local development server via `mkdocs serve` updating files that are symbolically linked is not triggering to rebuild (and therefore not reloading browser tabs).\r\n\r\nOn first glance this is due to the switch to watchdog for detecting file-system changes which needs more guidance to handle this file-type.\r\n\r\nPreparing a PR with a patch.\r\n\r\nRef: a444c43474f91dea089922dd8fb188d1db3a4535\n", "before_files": [{"content": "import functools\nimport io\nimport logging\nimport mimetypes\nimport os\nimport os.path\nimport pathlib\nimport re\nimport socketserver\nimport threading\nimport time\nimport warnings\nimport wsgiref.simple_server\n\nimport watchdog.events\nimport watchdog.observers\n\n\nclass _LoggerAdapter(logging.LoggerAdapter):\n def process(self, msg, kwargs):\n return time.strftime(\"[%H:%M:%S] \") + msg, kwargs\n\n\nlog = _LoggerAdapter(logging.getLogger(__name__), {})\n\n\nclass LiveReloadServer(socketserver.ThreadingMixIn, wsgiref.simple_server.WSGIServer):\n daemon_threads = True\n poll_response_timeout = 60\n\n def __init__(\n self,\n builder,\n host,\n port,\n root,\n mount_path=\"/\",\n build_delay=0.25,\n shutdown_delay=0.25,\n **kwargs,\n ):\n self.builder = builder\n self.server_name = host\n self.server_port = port\n self.root = os.path.abspath(root)\n self.mount_path = (\"/\" + mount_path.lstrip(\"/\")).rstrip(\"/\") + \"/\"\n self.url = f\"http://{self.server_name}:{self.server_port}{self.mount_path}\"\n self.build_delay = build_delay\n self.shutdown_delay = shutdown_delay\n # To allow custom error pages.\n self.error_handler = lambda code: None\n\n super().__init__((host, port), _Handler, **kwargs)\n self.set_app(self.serve_request)\n\n self._wanted_epoch = _timestamp() # The version of the site that started building.\n self._visible_epoch = self._wanted_epoch # Latest fully built version of the site.\n self._epoch_cond = threading.Condition() # 
Must be held when accessing _visible_epoch.\n\n self._to_rebuild = {} # Used as an ordered set of functions to call.\n self._rebuild_cond = threading.Condition() # Must be held when accessing _to_rebuild.\n\n self._shutdown = False\n self.serve_thread = threading.Thread(target=lambda: self.serve_forever(shutdown_delay))\n self.observer = watchdog.observers.Observer(timeout=shutdown_delay)\n\n def watch(self, path, func=None, recursive=True):\n \"\"\"Add the 'path' to watched paths, call the function and reload when any file changes under it.\"\"\"\n path = os.path.abspath(path)\n if func in (None, self.builder):\n func = self.builder\n else:\n warnings.warn(\n \"Plugins should not pass the 'func' parameter of watch(). \"\n \"The ability to execute custom callbacks will be removed soon.\",\n DeprecationWarning,\n stacklevel=2,\n )\n\n def callback(event, allowed_path=None):\n if isinstance(event, watchdog.events.DirCreatedEvent):\n return\n if allowed_path is not None and event.src_path != allowed_path:\n return\n # Text editors always cause a \"file close\" event in addition to \"modified\" when saving\n # a file. Some editors also have \"swap\" functionality that keeps writing into another\n # file that's never closed. Prevent such write events from causing a rebuild.\n if isinstance(event, watchdog.events.FileModifiedEvent):\n # But FileClosedEvent is implemented only on Linux, otherwise we mustn't skip this:\n if type(self.observer).__name__ == \"InotifyObserver\":\n return\n log.debug(str(event))\n with self._rebuild_cond:\n self._to_rebuild[func] = True\n self._rebuild_cond.notify_all()\n\n dir_handler = watchdog.events.FileSystemEventHandler()\n dir_handler.on_any_event = callback\n\n seen = set()\n\n def schedule(path):\n seen.add(path)\n if os.path.isfile(path):\n # Watchdog doesn't support watching files, so watch its directory and filter by path\n handler = watchdog.events.FileSystemEventHandler()\n handler.on_any_event = lambda event: callback(event, allowed_path=path)\n\n parent = os.path.dirname(path)\n log.debug(f\"Watching file '{path}' through directory '{parent}'\")\n self.observer.schedule(handler, parent)\n else:\n log.debug(f\"Watching directory '{path}'\")\n self.observer.schedule(dir_handler, path, recursive=recursive)\n\n schedule(os.path.realpath(path))\n\n def watch_symlink_targets(path_obj): # path is os.DirEntry or pathlib.Path\n if path_obj.is_symlink():\n # The extra `readlink` is needed due to https://bugs.python.org/issue9949\n target = os.path.realpath(os.readlink(os.fspath(path_obj)))\n if target in seen or not os.path.exists(target):\n return\n schedule(target)\n\n path_obj = pathlib.Path(target)\n\n if path_obj.is_dir() and recursive:\n with os.scandir(os.fspath(path_obj)) as scan:\n for entry in scan:\n watch_symlink_targets(entry)\n\n watch_symlink_targets(pathlib.Path(path))\n\n def serve(self):\n self.observer.start()\n\n log.info(f\"Serving on {self.url}\")\n self.serve_thread.start()\n\n self._build_loop()\n\n def _build_loop(self):\n while True:\n with self._rebuild_cond:\n while not self._rebuild_cond.wait_for(\n lambda: self._to_rebuild or self._shutdown, timeout=self.shutdown_delay\n ):\n # We could have used just one wait instead of a loop + timeout, but we need\n # occasional breaks, otherwise on Windows we can't receive KeyboardInterrupt.\n pass\n if self._shutdown:\n break\n log.info(\"Detected file changes\")\n while self._rebuild_cond.wait(timeout=self.build_delay):\n log.debug(\"Waiting for file changes to stop happening\")\n\n 
self._wanted_epoch = _timestamp()\n funcs = list(self._to_rebuild)\n self._to_rebuild.clear()\n\n for func in funcs:\n func()\n\n with self._epoch_cond:\n log.info(\"Reloading browsers\")\n self._visible_epoch = self._wanted_epoch\n self._epoch_cond.notify_all()\n\n def shutdown(self):\n self.observer.stop()\n with self._rebuild_cond:\n self._shutdown = True\n self._rebuild_cond.notify_all()\n\n if self.serve_thread.is_alive():\n super().shutdown()\n self.serve_thread.join()\n self.observer.join()\n\n def serve_request(self, environ, start_response):\n try:\n result = self._serve_request(environ, start_response)\n except Exception:\n code = 500\n msg = \"500 Internal Server Error\"\n log.exception(msg)\n else:\n if result is not None:\n return result\n code = 404\n msg = \"404 Not Found\"\n\n error_content = None\n try:\n error_content = self.error_handler(code)\n except Exception:\n log.exception(\"Failed to render an error message!\")\n if error_content is None:\n error_content = msg.encode()\n\n start_response(msg, [(\"Content-Type\", \"text/html\")])\n return [error_content]\n\n def _serve_request(self, environ, start_response):\n path = environ[\"PATH_INFO\"]\n\n m = re.fullmatch(r\"/livereload/([0-9]+)/[0-9]+\", path)\n if m:\n epoch = int(m[1])\n start_response(\"200 OK\", [(\"Content-Type\", \"text/plain\")])\n\n def condition():\n return self._visible_epoch > epoch\n\n with self._epoch_cond:\n if not condition():\n # Stall the browser, respond as soon as there's something new.\n # If there's not, respond anyway after a minute.\n self._log_poll_request(environ.get(\"HTTP_REFERER\"), request_id=path)\n self._epoch_cond.wait_for(condition, timeout=self.poll_response_timeout)\n return [b\"%d\" % self._visible_epoch]\n\n if path == \"/js/livereload.js\":\n file_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"livereload.js\")\n elif path.startswith(self.mount_path):\n if path.endswith(\"/\"):\n path += \"index.html\"\n path = path[len(self.mount_path):]\n file_path = os.path.join(self.root, path.lstrip(\"/\"))\n elif path == \"/\":\n start_response(\"302 Found\", [(\"Location\", self.mount_path)])\n return []\n else:\n return None # Not found\n\n # Wait until the ongoing rebuild (if any) finishes, so we're not serving a half-built site.\n with self._epoch_cond:\n self._epoch_cond.wait_for(lambda: self._visible_epoch == self._wanted_epoch)\n epoch = self._visible_epoch\n\n try:\n file = open(file_path, \"rb\")\n except OSError:\n return None # Not found\n\n if path.endswith(\".html\"):\n with file:\n content = file.read()\n content = self._inject_js_into_html(content, epoch)\n file = io.BytesIO(content)\n content_length = len(content)\n else:\n content_length = os.path.getsize(file_path)\n\n content_type = self._guess_type(file_path)\n start_response(\n \"200 OK\", [(\"Content-Type\", content_type), (\"Content-Length\", str(content_length))]\n )\n return wsgiref.util.FileWrapper(file)\n\n @classmethod\n def _inject_js_into_html(cls, content, epoch):\n try:\n body_end = content.rindex(b\"</body>\")\n except ValueError:\n body_end = len(content)\n # The page will reload if the livereload poller returns a newer epoch than what it knows.\n # The other timestamp becomes just a unique identifier for the initiating page.\n return (\n b'%b<script src=\"/js/livereload.js\"></script><script>livereload(%d, %d);</script>%b'\n % (content[:body_end], epoch, _timestamp(), content[body_end:])\n )\n\n @classmethod\n @functools.lru_cache() # \"Cache\" to not repeat the same message for 
the same browser tab.\n def _log_poll_request(cls, url, request_id):\n log.info(f\"Browser connected: {url}\")\n\n def _guess_type(cls, path):\n # MkDocs only ensures a few common types (as seen in livereload_tests.py::test_mime_types).\n # Other uncommon types will not be accepted.\n if path.endswith((\".js\", \".JS\")):\n return \"application/javascript\"\n if path.endswith(\".gz\"):\n return \"application/gzip\"\n\n guess, _ = mimetypes.guess_type(path)\n if guess:\n return guess\n return \"application/octet-stream\"\n\n\nclass _Handler(wsgiref.simple_server.WSGIRequestHandler):\n def log_request(self, code=\"-\", size=\"-\"):\n level = logging.DEBUG if str(code) == \"200\" else logging.WARNING\n log.log(level, f'\"{self.requestline}\" code {code}')\n\n def log_message(self, format, *args):\n log.debug(format, *args)\n\n\ndef _timestamp():\n return round(time.monotonic() * 1000)\n", "path": "mkdocs/livereload/__init__.py"}]} | 3,889 | 405 |
gh_patches_debug_4165 | rasdani/github-patches | git_diff | ivy-llc__ivy-14979 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
extract
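Presumably this refers to adding a `numpy.extract` counterpart to the frontend; as a rough sketch of the expected semantics (plain NumPy, for illustration only):

```python
# Rough sketch of numpy.extract semantics: return the elements of arr where
# the condition is truthy.
import numpy as np


def extract(cond, arr):
    cond, arr = np.asarray(cond), np.asarray(arr)
    if cond.dtype == bool:
        return arr[cond]       # boolean mask picks matching elements
    return arr[cond != 0]      # otherwise any non-zero entry counts as True


print(extract(np.array([0, 1, 0, 1]), np.array([10, 20, 30, 40])))  # [20 40]
```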
</issue>
<code>
[start of ivy/functional/frontends/numpy/sorting_searching_counting/searching.py]
1 # local
2
3 import ivy
4
5 from ivy.functional.frontends.numpy import promote_types_of_numpy_inputs
6
7 from ivy.functional.frontends.numpy.func_wrapper import (
8 to_ivy_arrays_and_back,
9 from_zero_dim_arrays_to_scalar,
10 handle_numpy_out,
11 )
12
13
14 @to_ivy_arrays_and_back
15 def where(cond, x1=None, x2=None, /):
16 if x1 is None and x2 is None:
17 # numpy where behaves as np.asarray(condition).nonzero() when x and y
18 # not included
19 return ivy.asarray(cond).nonzero()
20 elif x1 is not None and x2 is not None:
21 x1, x2 = promote_types_of_numpy_inputs(x1, x2)
22 return ivy.where(cond, x1, x2)
23 else:
24 raise ivy.utils.exceptions.IvyException("where takes either 1 or 3 arguments")
25
26
27 @to_ivy_arrays_and_back
28 def nonzero(a):
29 return ivy.nonzero(a)
30
31
32 @handle_numpy_out
33 @to_ivy_arrays_and_back
34 @from_zero_dim_arrays_to_scalar
35 def argmin(a, /, *, axis=None, keepdims=False, out=None):
36 return ivy.argmin(a, axis=axis, out=out, keepdims=keepdims)
37
38
39 @handle_numpy_out
40 @to_ivy_arrays_and_back
41 @from_zero_dim_arrays_to_scalar
42 def argmax(
43 a,
44 /,
45 *,
46 axis=None,
47 out=None,
48 keepdims=False,
49 ):
50 return ivy.argmax(a, axis=axis, out=out, keepdims=keepdims)
51
52
53 @to_ivy_arrays_and_back
54 def flatnonzero(a):
55 return ivy.nonzero(ivy.reshape(a, (-1,)))
56
57
58 @to_ivy_arrays_and_back
59 def searchsorted(a, v, side="left", sorter=None):
60 return ivy.searchsorted(a, v, side=side, sorter=sorter)
61
62
63 @to_ivy_arrays_and_back
64 def argwhere(a):
65 return ivy.argwhere(a)
66
67
68 # nanargmin and nanargmax composition helper
69 def _nanargminmax(a, axis=None):
70 # check nans
71 nans = ivy.isnan(a).astype(ivy.bool)
72 # replace nans with inf
73 a = ivy.where(nans, ivy.inf, a)
74 if nans is not None:
75 nans = ivy.all(nans, axis=axis)
76 if ivy.any(nans):
77 raise ivy.utils.exceptions.IvyError("All-NaN slice encountered")
78 return a
79
80
81 @handle_numpy_out
82 @to_ivy_arrays_and_back
83 @from_zero_dim_arrays_to_scalar
84 def nanargmax(a, /, *, axis=None, out=None, keepdims=False):
85 a = _nanargminmax(a, axis=axis)
86 return ivy.argmax(a, axis=axis, keepdims=keepdims, out=out)
87
88
89 @handle_numpy_out
90 @to_ivy_arrays_and_back
91 @from_zero_dim_arrays_to_scalar
92 def nanargmin(a, /, *, axis=None, out=None, keepdims=False):
93 a = _nanargminmax(a, axis=axis)
94 return ivy.argmin(a, axis=axis, keepdims=keepdims, out=out)
95
[end of ivy/functional/frontends/numpy/sorting_searching_counting/searching.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/numpy/sorting_searching_counting/searching.py b/ivy/functional/frontends/numpy/sorting_searching_counting/searching.py
--- a/ivy/functional/frontends/numpy/sorting_searching_counting/searching.py
+++ b/ivy/functional/frontends/numpy/sorting_searching_counting/searching.py
@@ -92,3 +92,11 @@
def nanargmin(a, /, *, axis=None, out=None, keepdims=False):
a = _nanargminmax(a, axis=axis)
return ivy.argmin(a, axis=axis, keepdims=keepdims, out=out)
+
+
+@to_ivy_arrays_and_back
+def extract(cond, arr, /):
+ if cond.dtype == 'bool':
+ return arr[cond]
+ else:
+ return arr[cond !=0]
\ No newline at end of file
| {"golden_diff": "diff --git a/ivy/functional/frontends/numpy/sorting_searching_counting/searching.py b/ivy/functional/frontends/numpy/sorting_searching_counting/searching.py\n--- a/ivy/functional/frontends/numpy/sorting_searching_counting/searching.py\n+++ b/ivy/functional/frontends/numpy/sorting_searching_counting/searching.py\n@@ -92,3 +92,11 @@\n def nanargmin(a, /, *, axis=None, out=None, keepdims=False):\n a = _nanargminmax(a, axis=axis)\n return ivy.argmin(a, axis=axis, keepdims=keepdims, out=out)\n+\n+\n+@to_ivy_arrays_and_back\n+def extract(cond, arr, /):\n+ if cond.dtype == 'bool':\n+ return arr[cond]\n+ else:\n+ return arr[cond !=0]\n\\ No newline at end of file\n", "issue": "extract\n\n", "before_files": [{"content": "# local\n\nimport ivy\n\nfrom ivy.functional.frontends.numpy import promote_types_of_numpy_inputs\n\nfrom ivy.functional.frontends.numpy.func_wrapper import (\n to_ivy_arrays_and_back,\n from_zero_dim_arrays_to_scalar,\n handle_numpy_out,\n)\n\n\n@to_ivy_arrays_and_back\ndef where(cond, x1=None, x2=None, /):\n if x1 is None and x2 is None:\n # numpy where behaves as np.asarray(condition).nonzero() when x and y\n # not included\n return ivy.asarray(cond).nonzero()\n elif x1 is not None and x2 is not None:\n x1, x2 = promote_types_of_numpy_inputs(x1, x2)\n return ivy.where(cond, x1, x2)\n else:\n raise ivy.utils.exceptions.IvyException(\"where takes either 1 or 3 arguments\")\n\n\n@to_ivy_arrays_and_back\ndef nonzero(a):\n return ivy.nonzero(a)\n\n\n@handle_numpy_out\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef argmin(a, /, *, axis=None, keepdims=False, out=None):\n return ivy.argmin(a, axis=axis, out=out, keepdims=keepdims)\n\n\n@handle_numpy_out\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef argmax(\n a,\n /,\n *,\n axis=None,\n out=None,\n keepdims=False,\n):\n return ivy.argmax(a, axis=axis, out=out, keepdims=keepdims)\n\n\n@to_ivy_arrays_and_back\ndef flatnonzero(a):\n return ivy.nonzero(ivy.reshape(a, (-1,)))\n\n\n@to_ivy_arrays_and_back\ndef searchsorted(a, v, side=\"left\", sorter=None):\n return ivy.searchsorted(a, v, side=side, sorter=sorter)\n\n\n@to_ivy_arrays_and_back\ndef argwhere(a):\n return ivy.argwhere(a)\n\n\n# nanargmin and nanargmax composition helper\ndef _nanargminmax(a, axis=None):\n # check nans\n nans = ivy.isnan(a).astype(ivy.bool)\n # replace nans with inf\n a = ivy.where(nans, ivy.inf, a)\n if nans is not None:\n nans = ivy.all(nans, axis=axis)\n if ivy.any(nans):\n raise ivy.utils.exceptions.IvyError(\"All-NaN slice encountered\")\n return a\n\n\n@handle_numpy_out\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef nanargmax(a, /, *, axis=None, out=None, keepdims=False):\n a = _nanargminmax(a, axis=axis)\n return ivy.argmax(a, axis=axis, keepdims=keepdims, out=out)\n\n\n@handle_numpy_out\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef nanargmin(a, /, *, axis=None, out=None, keepdims=False):\n a = _nanargminmax(a, axis=axis)\n return ivy.argmin(a, axis=axis, keepdims=keepdims, out=out)\n", "path": "ivy/functional/frontends/numpy/sorting_searching_counting/searching.py"}]} | 1,456 | 204 |
gh_patches_debug_35590 | rasdani/github-patches | git_diff | biolab__orange3-text-165 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GeoMap counting doesn't eliminate duplicates inside documents
With the `Locations` attribute for NY Times you sometimes get a result like `Ljubljana (Slovenia), Slovenia, Europe (Slovenia),` which would count Slovenia 3 times instead of once. For a specific country, a given document should not increment the count more than once.
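A small sketch of the de-duplication idea — collect each document's locations into a set before adding to the per-country counts, so one document contributes at most one per country (names are illustrative):

```python
# Sketch only: count each country at most once per document.
from collections import defaultdict


def country_counts(docs_locations, inv_cc_map, cc_map):
    """docs_locations yields one iterable of location strings per document."""
    counts = defaultdict(int)
    for locations in docs_locations:
        keys = {inv_cc_map.get(loc, loc) for loc in locations}  # de-duplicate
        for key in keys:
            if key in cc_map:
                counts[key] += 1  # +1 per document, not per mention
    return counts
```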
</issue>
<code>
[start of orangecontrib/text/widgets/owgeomap.py]
1 # coding: utf-8
2 import os
3 import re
4 from collections import defaultdict, Counter
5 from itertools import chain
6 from urllib.parse import urljoin
7 from urllib.request import pathname2url
8
9 import numpy as np
10 from AnyQt.QtCore import Qt, QTimer, pyqtSlot, QUrl
11 from AnyQt.QtWidgets import QApplication, QSizePolicy
12
13 from Orange.data import Table
14 from Orange.widgets import widget, gui, settings
15 from Orange.widgets.utils.itemmodels import VariableListModel
16 from orangecontrib.text.corpus import Corpus
17 from orangecontrib.text.country_codes import \
18 CC_EUROPE, INV_CC_EUROPE, SET_CC_EUROPE, \
19 CC_WORLD, INV_CC_WORLD, \
20 CC_USA, INV_CC_USA, SET_CC_USA
21
22 CC_NAMES = re.compile('[\w\s\.\-]+')
23
24
25 class Map:
26 WORLD = 'world_mill_en'
27 EUROPE = 'europe_mill_en'
28 USA = 'us_aea_en'
29 all = (('World', WORLD),
30 ('Europe', EUROPE),
31 ('USA', USA))
32
33
34 class OWGeoMap(widget.OWWidget):
35 name = "GeoMap"
36 priority = 20000
37 icon = "icons/GeoMap.svg"
38 inputs = [("Data", Table, "on_data")]
39 outputs = [('Corpus', Corpus)]
40
41 want_main_area = False
42
43 selected_attr = settings.Setting('')
44 selected_map = settings.Setting(0)
45 regions = settings.Setting([])
46
47 def __init__(self):
48 super().__init__()
49 self.data = None
50 self._create_layout()
51
52 @pyqtSlot(str)
53 def region_selected(self, regions):
54 """Called from JavaScript"""
55 if not regions:
56 self.regions = []
57 if not regions or self.data is None:
58 return self.send('Corpus', None)
59 self.regions = regions.split(',')
60 attr = self.data.domain[self.selected_attr]
61 if attr.is_discrete: return # TODO, FIXME: make this work for discrete attrs also
62 from Orange.data.filter import FilterRegex
63 filter = FilterRegex(attr, r'\b{}\b'.format(r'\b|\b'.join(self.regions)), re.IGNORECASE)
64 self.send('Corpus', self.data._filter_values(filter))
65
66 def _create_layout(self):
67 box = gui.widgetBox(self.controlArea,
68 orientation='horizontal')
69 self.varmodel = VariableListModel(parent=self)
70 self.attr_combo = gui.comboBox(box, self, 'selected_attr',
71 orientation=Qt.Horizontal,
72 label='Region attribute:',
73 callback=self.on_attr_change,
74 sendSelectedValue=True)
75 self.attr_combo.setModel(self.varmodel)
76 self.map_combo = gui.comboBox(box, self, 'selected_map',
77 orientation=Qt.Horizontal,
78 label='Map type:',
79 callback=self.on_map_change,
80 items=Map.all)
81 hexpand = QSizePolicy(QSizePolicy.Expanding,
82 QSizePolicy.Fixed)
83 self.attr_combo.setSizePolicy(hexpand)
84 self.map_combo.setSizePolicy(hexpand)
85
86 url = urljoin('file:',
87 pathname2url(os.path.join(
88 os.path.dirname(__file__),
89 'resources',
90 'owgeomap.html')))
91 self.webview = gui.WebviewWidget(self.controlArea, self, url=QUrl(url))
92 self.controlArea.layout().addWidget(self.webview)
93
94 QTimer.singleShot(
95 0, lambda: self.webview.evalJS('REGIONS = {};'.format({Map.WORLD: CC_WORLD,
96 Map.EUROPE: CC_EUROPE,
97 Map.USA: CC_USA})))
98
99 def _repopulate_attr_combo(self, data):
100 vars = [a for a in chain(data.domain.metas,
101 data.domain.attributes,
102 data.domain.class_vars)
103 if a.is_string] if data else []
104 self.varmodel.wrap(vars)
105 # Select default attribute
106 self.selected_attr = next((var.name
107 for var in vars
108 if var.name.lower().startswith(('country', 'location', 'region'))),
109 vars[0].name if vars else '')
110
111 def on_data(self, data):
112 if data and not isinstance(data, Corpus):
113 data = Corpus.from_table(data.domain, data)
114 self.data = data
115 self._repopulate_attr_combo(data)
116 if not data:
117 self.region_selected('')
118 QTimer.singleShot(0, lambda: self.webview.evalJS('DATA = {}; renderMap();'))
119 else:
120 QTimer.singleShot(0, self.on_attr_change)
121
122 def on_map_change(self, map_code=''):
123 if map_code:
124 self.map_combo.setCurrentIndex(self.map_combo.findData(map_code))
125 else:
126 map_code = self.map_combo.itemData(self.selected_map)
127
128 inv_cc_map, cc_map = {Map.USA: (INV_CC_USA, CC_USA),
129 Map.WORLD: (INV_CC_WORLD, CC_WORLD),
130 Map.EUROPE: (INV_CC_EUROPE, CC_EUROPE)} [map_code]
131 # Set country counts in JS
132 data = defaultdict(int)
133 for cc in getattr(self, 'cc_counts', ()):
134 key = inv_cc_map.get(cc, cc)
135 if key in cc_map:
136 data[key] += self.cc_counts[cc]
137 # Draw the new map
138 self.webview.evalJS('DATA = {};'
139 'MAP_CODE = "{}";'
140 'SELECTED_REGIONS = {};'
141 'renderMap();'.format(dict(data),
142 map_code,
143 self.regions))
144
145 def on_attr_change(self):
146 if not self.selected_attr:
147 return
148 attr = self.data.domain[self.selected_attr]
149 self.cc_counts = Counter(chain.from_iterable(
150 set(name.strip() for name in CC_NAMES.findall(i.lower())) if len(i) > 3 else (i,)
151 for i in self.data.get_column_view(self.data.domain.index(attr))[0]))
152 # Auto-select region map
153 values = set(self.cc_counts)
154 if 0 == len(values - SET_CC_USA):
155 map_code = Map.USA
156 elif 0 == len(values - SET_CC_EUROPE):
157 map_code = Map.EUROPE
158 else:
159 map_code = Map.WORLD
160 self.on_map_change(map_code)
161
162
163 def main():
164 from Orange.data import Table, Domain, StringVariable
165
166 words = np.column_stack([
167 'Slovenia Slovenia SVN USA Iraq Iraq Iraq Iraq France FR'.split(),
168 'Slovenia Slovenia SVN France FR Austria NL GB GB GB'.split(),
169 'Alabama AL Texas TX TX TX MS Montana US-MT MT'.split(),
170 ])
171 metas = [
172 StringVariable('World'),
173 StringVariable('Europe'),
174 StringVariable('USA'),
175 ]
176 domain = Domain([], metas=metas)
177 table = Table.from_numpy(domain,
178 X=np.zeros((len(words), 0)),
179 metas=words)
180 app = QApplication([''])
181 w = OWGeoMap()
182 w.on_data(table)
183 w.show()
184 app.exec()
185
186
187 if __name__ == "__main__":
188 main()
189
[end of orangecontrib/text/widgets/owgeomap.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/orangecontrib/text/widgets/owgeomap.py b/orangecontrib/text/widgets/owgeomap.py
--- a/orangecontrib/text/widgets/owgeomap.py
+++ b/orangecontrib/text/widgets/owgeomap.py
@@ -127,13 +127,14 @@
inv_cc_map, cc_map = {Map.USA: (INV_CC_USA, CC_USA),
Map.WORLD: (INV_CC_WORLD, CC_WORLD),
- Map.EUROPE: (INV_CC_EUROPE, CC_EUROPE)} [map_code]
- # Set country counts in JS
+ Map.EUROPE: (INV_CC_EUROPE, CC_EUROPE)}[map_code]
+ # Set country counts for JS
data = defaultdict(int)
- for cc in getattr(self, 'cc_counts', ()):
- key = inv_cc_map.get(cc, cc)
- if key in cc_map:
- data[key] += self.cc_counts[cc]
+ for locations in self._iter_locations():
+ keys = set(inv_cc_map.get(loc, loc) for loc in locations)
+ for key in keys:
+ if key in cc_map:
+ data[key] += 1
# Draw the new map
self.webview.evalJS('DATA = {};'
'MAP_CODE = "{}";'
@@ -145,12 +146,8 @@
def on_attr_change(self):
if not self.selected_attr:
return
- attr = self.data.domain[self.selected_attr]
- self.cc_counts = Counter(chain.from_iterable(
- set(name.strip() for name in CC_NAMES.findall(i.lower())) if len(i) > 3 else (i,)
- for i in self.data.get_column_view(self.data.domain.index(attr))[0]))
+ values = set(chain.from_iterable(self._iter_locations()))
# Auto-select region map
- values = set(self.cc_counts)
if 0 == len(values - SET_CC_USA):
map_code = Map.USA
elif 0 == len(values - SET_CC_EUROPE):
@@ -159,6 +156,16 @@
map_code = Map.WORLD
self.on_map_change(map_code)
+ def _iter_locations(self):
+ """ Iterator that yields an iterable per documents with all its's
+ locations. """
+ attr = self.data.domain[self.selected_attr]
+ for i in self.data.get_column_view(self.data.domain.index(attr))[0]:
+ if len(i) > 3:
+ yield map(lambda x: x.strip(), CC_NAMES.findall(i.lower()))
+ else:
+ yield (i, )
+
def main():
from Orange.data import Table, Domain, StringVariable
| {"golden_diff": "diff --git a/orangecontrib/text/widgets/owgeomap.py b/orangecontrib/text/widgets/owgeomap.py\n--- a/orangecontrib/text/widgets/owgeomap.py\n+++ b/orangecontrib/text/widgets/owgeomap.py\n@@ -127,13 +127,14 @@\n \n inv_cc_map, cc_map = {Map.USA: (INV_CC_USA, CC_USA),\n Map.WORLD: (INV_CC_WORLD, CC_WORLD),\n- Map.EUROPE: (INV_CC_EUROPE, CC_EUROPE)} [map_code]\n- # Set country counts in JS\n+ Map.EUROPE: (INV_CC_EUROPE, CC_EUROPE)}[map_code]\n+ # Set country counts for JS\n data = defaultdict(int)\n- for cc in getattr(self, 'cc_counts', ()):\n- key = inv_cc_map.get(cc, cc)\n- if key in cc_map:\n- data[key] += self.cc_counts[cc]\n+ for locations in self._iter_locations():\n+ keys = set(inv_cc_map.get(loc, loc) for loc in locations)\n+ for key in keys:\n+ if key in cc_map:\n+ data[key] += 1\n # Draw the new map\n self.webview.evalJS('DATA = {};'\n 'MAP_CODE = \"{}\";'\n@@ -145,12 +146,8 @@\n def on_attr_change(self):\n if not self.selected_attr:\n return\n- attr = self.data.domain[self.selected_attr]\n- self.cc_counts = Counter(chain.from_iterable(\n- set(name.strip() for name in CC_NAMES.findall(i.lower())) if len(i) > 3 else (i,)\n- for i in self.data.get_column_view(self.data.domain.index(attr))[0]))\n+ values = set(chain.from_iterable(self._iter_locations()))\n # Auto-select region map\n- values = set(self.cc_counts)\n if 0 == len(values - SET_CC_USA):\n map_code = Map.USA\n elif 0 == len(values - SET_CC_EUROPE):\n@@ -159,6 +156,16 @@\n map_code = Map.WORLD\n self.on_map_change(map_code)\n \n+ def _iter_locations(self):\n+ \"\"\" Iterator that yields an iterable per documents with all its's\n+ locations. \"\"\"\n+ attr = self.data.domain[self.selected_attr]\n+ for i in self.data.get_column_view(self.data.domain.index(attr))[0]:\n+ if len(i) > 3:\n+ yield map(lambda x: x.strip(), CC_NAMES.findall(i.lower()))\n+ else:\n+ yield (i, )\n+\n \n def main():\n from Orange.data import Table, Domain, StringVariable\n", "issue": "GeoMap counting doesn't eliminate duplicates inside documents\nWith `Locations` attribute for NY Times sometimes you get a result: `Ljubljana (Slovenia), Slovenia, Europe (Slovenia),` which would count Slovenia 3 times instead of once. 
For a specific county a given document should not increment the count for more than one.\n", "before_files": [{"content": "# coding: utf-8\nimport os\nimport re\nfrom collections import defaultdict, Counter\nfrom itertools import chain\nfrom urllib.parse import urljoin\nfrom urllib.request import pathname2url\n\nimport numpy as np\nfrom AnyQt.QtCore import Qt, QTimer, pyqtSlot, QUrl\nfrom AnyQt.QtWidgets import QApplication, QSizePolicy\n\nfrom Orange.data import Table\nfrom Orange.widgets import widget, gui, settings\nfrom Orange.widgets.utils.itemmodels import VariableListModel\nfrom orangecontrib.text.corpus import Corpus\nfrom orangecontrib.text.country_codes import \\\n CC_EUROPE, INV_CC_EUROPE, SET_CC_EUROPE, \\\n CC_WORLD, INV_CC_WORLD, \\\n CC_USA, INV_CC_USA, SET_CC_USA\n\nCC_NAMES = re.compile('[\\w\\s\\.\\-]+')\n\n\nclass Map:\n WORLD = 'world_mill_en'\n EUROPE = 'europe_mill_en'\n USA = 'us_aea_en'\n all = (('World', WORLD),\n ('Europe', EUROPE),\n ('USA', USA))\n\n\nclass OWGeoMap(widget.OWWidget):\n name = \"GeoMap\"\n priority = 20000\n icon = \"icons/GeoMap.svg\"\n inputs = [(\"Data\", Table, \"on_data\")]\n outputs = [('Corpus', Corpus)]\n\n want_main_area = False\n\n selected_attr = settings.Setting('')\n selected_map = settings.Setting(0)\n regions = settings.Setting([])\n\n def __init__(self):\n super().__init__()\n self.data = None\n self._create_layout()\n\n @pyqtSlot(str)\n def region_selected(self, regions):\n \"\"\"Called from JavaScript\"\"\"\n if not regions:\n self.regions = []\n if not regions or self.data is None:\n return self.send('Corpus', None)\n self.regions = regions.split(',')\n attr = self.data.domain[self.selected_attr]\n if attr.is_discrete: return # TODO, FIXME: make this work for discrete attrs also\n from Orange.data.filter import FilterRegex\n filter = FilterRegex(attr, r'\\b{}\\b'.format(r'\\b|\\b'.join(self.regions)), re.IGNORECASE)\n self.send('Corpus', self.data._filter_values(filter))\n\n def _create_layout(self):\n box = gui.widgetBox(self.controlArea,\n orientation='horizontal')\n self.varmodel = VariableListModel(parent=self)\n self.attr_combo = gui.comboBox(box, self, 'selected_attr',\n orientation=Qt.Horizontal,\n label='Region attribute:',\n callback=self.on_attr_change,\n sendSelectedValue=True)\n self.attr_combo.setModel(self.varmodel)\n self.map_combo = gui.comboBox(box, self, 'selected_map',\n orientation=Qt.Horizontal,\n label='Map type:',\n callback=self.on_map_change,\n items=Map.all)\n hexpand = QSizePolicy(QSizePolicy.Expanding,\n QSizePolicy.Fixed)\n self.attr_combo.setSizePolicy(hexpand)\n self.map_combo.setSizePolicy(hexpand)\n\n url = urljoin('file:',\n pathname2url(os.path.join(\n os.path.dirname(__file__),\n 'resources',\n 'owgeomap.html')))\n self.webview = gui.WebviewWidget(self.controlArea, self, url=QUrl(url))\n self.controlArea.layout().addWidget(self.webview)\n\n QTimer.singleShot(\n 0, lambda: self.webview.evalJS('REGIONS = {};'.format({Map.WORLD: CC_WORLD,\n Map.EUROPE: CC_EUROPE,\n Map.USA: CC_USA})))\n\n def _repopulate_attr_combo(self, data):\n vars = [a for a in chain(data.domain.metas,\n data.domain.attributes,\n data.domain.class_vars)\n if a.is_string] if data else []\n self.varmodel.wrap(vars)\n # Select default attribute\n self.selected_attr = next((var.name\n for var in vars\n if var.name.lower().startswith(('country', 'location', 'region'))),\n vars[0].name if vars else '')\n\n def on_data(self, data):\n if data and not isinstance(data, Corpus):\n data = Corpus.from_table(data.domain, data)\n self.data 
= data\n self._repopulate_attr_combo(data)\n if not data:\n self.region_selected('')\n QTimer.singleShot(0, lambda: self.webview.evalJS('DATA = {}; renderMap();'))\n else:\n QTimer.singleShot(0, self.on_attr_change)\n\n def on_map_change(self, map_code=''):\n if map_code:\n self.map_combo.setCurrentIndex(self.map_combo.findData(map_code))\n else:\n map_code = self.map_combo.itemData(self.selected_map)\n\n inv_cc_map, cc_map = {Map.USA: (INV_CC_USA, CC_USA),\n Map.WORLD: (INV_CC_WORLD, CC_WORLD),\n Map.EUROPE: (INV_CC_EUROPE, CC_EUROPE)} [map_code]\n # Set country counts in JS\n data = defaultdict(int)\n for cc in getattr(self, 'cc_counts', ()):\n key = inv_cc_map.get(cc, cc)\n if key in cc_map:\n data[key] += self.cc_counts[cc]\n # Draw the new map\n self.webview.evalJS('DATA = {};'\n 'MAP_CODE = \"{}\";'\n 'SELECTED_REGIONS = {};'\n 'renderMap();'.format(dict(data),\n map_code,\n self.regions))\n\n def on_attr_change(self):\n if not self.selected_attr:\n return\n attr = self.data.domain[self.selected_attr]\n self.cc_counts = Counter(chain.from_iterable(\n set(name.strip() for name in CC_NAMES.findall(i.lower())) if len(i) > 3 else (i,)\n for i in self.data.get_column_view(self.data.domain.index(attr))[0]))\n # Auto-select region map\n values = set(self.cc_counts)\n if 0 == len(values - SET_CC_USA):\n map_code = Map.USA\n elif 0 == len(values - SET_CC_EUROPE):\n map_code = Map.EUROPE\n else:\n map_code = Map.WORLD\n self.on_map_change(map_code)\n\n\ndef main():\n from Orange.data import Table, Domain, StringVariable\n\n words = np.column_stack([\n 'Slovenia Slovenia SVN USA Iraq Iraq Iraq Iraq France FR'.split(),\n 'Slovenia Slovenia SVN France FR Austria NL GB GB GB'.split(),\n 'Alabama AL Texas TX TX TX MS Montana US-MT MT'.split(),\n ])\n metas = [\n StringVariable('World'),\n StringVariable('Europe'),\n StringVariable('USA'),\n ]\n domain = Domain([], metas=metas)\n table = Table.from_numpy(domain,\n X=np.zeros((len(words), 0)),\n metas=words)\n app = QApplication([''])\n w = OWGeoMap()\n w.on_data(table)\n w.show()\n app.exec()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "orangecontrib/text/widgets/owgeomap.py"}]} | 2,609 | 613 |
gh_patches_debug_31606 | rasdani/github-patches | git_diff | fossasia__open-event-server-3128 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Do not show deleted orders in the organiser UI and do not auto-delete expired orders
</issue>
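The first half of the request (hiding deleted orders from organiser views) amounts to filtering soft-deleted rows out of the listing queries rather than purging them in a background job. A minimal sketch of that idea in the same Flask-SQLAlchemy style as `app/helpers/scheduled_jobs.py` below — the helper name and view wiring here are hypothetical, not actual project code:

```python
from app.models.order import Order


def get_visible_orders(event_id):
    # Hypothetical helper: organiser listings skip soft-deleted orders
    # instead of relying on a scheduled job to remove them.
    return Order.query.filter_by(event_id=event_id).filter(Order.status != 'deleted').all()
```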
<code>
[start of app/helpers/scheduled_jobs.py]
1 from datetime import datetime, timedelta
2
3 from dateutil.relativedelta import relativedelta
4 from flask import url_for
5 from sqlalchemy_continuum import transaction_class
6
7 from app.helpers.data import DataManager, delete_from_db, save_to_db
8 from app.helpers.data_getter import DataGetter
9 from app.helpers.helpers import send_after_event, monthdelta, send_followup_email_for_monthly_fee_payment
10 from app.helpers.helpers import send_email_for_expired_orders, send_email_for_monthly_fee_payment
11 from app.helpers.payment import get_fee
12 from app.helpers.ticketing import TicketingManager
13 from app.models.event import Event
14 from app.models.event_invoice import EventInvoice
15 from app.models.order import Order
16 from app.models.session import Session
17 from app.models.user import User
18
19
20 def empty_trash():
21 from app import current_app as app
22 with app.app_context():
23 events = Event.query.filter_by(in_trash=True)
24 users = User.query.filter_by(in_trash=True)
25 sessions = Session.query.filter_by(in_trash=True)
26 orders = Order.query.filter_by(status="deleted")
27 pending_orders = Order.query.filter_by(status="pending")
28 expired_orders = Order.query.filter_by(status="expired")
29 for event in events:
30 if datetime.now() - event.trash_date >= timedelta(days=30):
31 DataManager.delete_event(event.id)
32
33 for user in users:
34 if datetime.now() - user.trash_date >= timedelta(days=30):
35 transaction = transaction_class(Event)
36 transaction.query.filter_by(user_id=user.id).delete()
37 delete_from_db(user, "User deleted permanently")
38
39 for session_ in sessions:
40 if datetime.now() - session_.trash_date >= timedelta(days=30):
41 delete_from_db(session_, "Session deleted permanently")
42
43 for order in orders:
44 if datetime.now() - order.trashed_at >= timedelta(days=30):
45 delete_from_db(order, "Order deleted permanently")
46
47 for pending_order in pending_orders:
48 if datetime.now() - pending_order.created_at >= timedelta(days=3):
49 pending_order.status = "expired"
50 save_to_db(pending_order, "Pending order expired.")
51
52 for expired_order in expired_orders:
53 if datetime.now() - expired_order.created_at >= timedelta(days=6):
54 expired_order.status = "deleted"
55 expired_order.trashed_at = datetime.now()
56 save_to_db(expired_order, "Expired order deleted")
57
58
59 def send_after_event_mail():
60 from app import current_app as app
61 with app.app_context():
62 events = Event.query.all()
63 for event in events:
64 upcoming_events = DataGetter.get_upcoming_events()
65 organizers = DataGetter.get_user_event_roles_by_role_name(
66 event.id, 'organizer')
67 speakers = DataGetter.get_user_event_roles_by_role_name(event.id,
68 'speaker')
69 if datetime.now() > event.end_time:
70 for speaker in speakers:
71 send_after_event(speaker.user.email, event.id,
72 upcoming_events)
73 for organizer in organizers:
74 send_after_event(organizer.user.email, event.id,
75 upcoming_events)
76
77
78 def send_mail_to_expired_orders():
79 from app import current_app as app
80 with app.app_context():
81 orders = DataGetter.get_expired_orders()
82 for order in orders:
83 send_email_for_expired_orders(order.user.email, order.event.name, order.get_invoice_number(),
84 url_for('ticketing.view_order_after_payment',
85 order_identifier=order.identifier, _external=True))
86
87
88 def send_event_fee_notification():
89 from app import current_app as app
90 with app.app_context():
91 events = Event.query.all()
92 for event in events:
93 latest_invoice = EventInvoice.filter_by(event_id=event.id).order_by(EventInvoice.created_at.desc()).first()
94
95 if latest_invoice:
96 orders = Order.query \
97 .filter_by(event_id=event.id) \
98 .filter_by(status='completed') \
99 .filter(Order.completed_at > latest_invoice.created_at).all()
100 else:
101 orders = Order.query.filter_by(event_id=event.id).filter_by(status='completed').all()
102
103 fee_total = 0
104 for order in orders:
105 for order_ticket in order.tickets:
106 ticket = TicketingManager.get_ticket(order_ticket.ticket_id)
107 if order.paid_via != 'free' and order.amount > 0 and ticket.price > 0:
108 fee = ticket.price * (get_fee(order.event.payment_currency) / 100.0)
109 fee_total += fee
110
111 if fee_total > 0:
112 new_invoice = EventInvoice(amount=fee_total, event_id=event.id, user_id=event.creator_id)
113
114 if event.discount_code_id and event.discount_code:
115 r = relativedelta(datetime.utcnow(), event.created_at)
116 if r <= event.discount_code.max_quantity:
117 new_invoice.amount = fee_total - (fee_total * (event.discount_code.value / 100.0))
118 new_invoice.discount_code_id = event.discount_code_id
119
120 save_to_db(new_invoice)
121 prev_month = monthdelta(new_invoice.created_at, 1).strftime("%b %Y") # Displayed as Aug 2016
122 send_email_for_monthly_fee_payment(new_invoice.user.email,
123 event.name,
124 prev_month,
125 new_invoice.amount,
126 url_for('event_invoicing.view_invoice',
127 invoice_identifier=new_invoice.identifier, _external=True))
128
129
130 def send_event_fee_notification_followup():
131 from app import current_app as app
132 with app.app_context():
133 incomplete_invoices = EventInvoice.query.filter(EventInvoice.status != 'completed').all()
134 for incomplete_invoice in incomplete_invoices:
135 if incomplete_invoice.amount > 0:
136 prev_month = monthdelta(incomplete_invoice.created_at, 1).strftime("%b %Y") # Displayed as Aug 2016
137 send_followup_email_for_monthly_fee_payment(incomplete_invoice.user.email,
138 incomplete_invoice.event.name,
139 prev_month,
140 incomplete_invoice.amount,
141 url_for('event_invoicing.view_invoice',
142 invoice_identifier=incomplete_invoice.identifier,
143 _external=True))
144
[end of app/helpers/scheduled_jobs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/helpers/scheduled_jobs.py b/app/helpers/scheduled_jobs.py
--- a/app/helpers/scheduled_jobs.py
+++ b/app/helpers/scheduled_jobs.py
@@ -23,9 +23,8 @@
events = Event.query.filter_by(in_trash=True)
users = User.query.filter_by(in_trash=True)
sessions = Session.query.filter_by(in_trash=True)
- orders = Order.query.filter_by(status="deleted")
pending_orders = Order.query.filter_by(status="pending")
- expired_orders = Order.query.filter_by(status="expired")
+
for event in events:
if datetime.now() - event.trash_date >= timedelta(days=30):
DataManager.delete_event(event.id)
@@ -40,21 +39,11 @@
if datetime.now() - session_.trash_date >= timedelta(days=30):
delete_from_db(session_, "Session deleted permanently")
- for order in orders:
- if datetime.now() - order.trashed_at >= timedelta(days=30):
- delete_from_db(order, "Order deleted permanently")
-
for pending_order in pending_orders:
if datetime.now() - pending_order.created_at >= timedelta(days=3):
pending_order.status = "expired"
save_to_db(pending_order, "Pending order expired.")
- for expired_order in expired_orders:
- if datetime.now() - expired_order.created_at >= timedelta(days=6):
- expired_order.status = "deleted"
- expired_order.trashed_at = datetime.now()
- save_to_db(expired_order, "Expired order deleted")
-
def send_after_event_mail():
from app import current_app as app
| {"golden_diff": "diff --git a/app/helpers/scheduled_jobs.py b/app/helpers/scheduled_jobs.py\n--- a/app/helpers/scheduled_jobs.py\n+++ b/app/helpers/scheduled_jobs.py\n@@ -23,9 +23,8 @@\n events = Event.query.filter_by(in_trash=True)\n users = User.query.filter_by(in_trash=True)\n sessions = Session.query.filter_by(in_trash=True)\n- orders = Order.query.filter_by(status=\"deleted\")\n pending_orders = Order.query.filter_by(status=\"pending\")\n- expired_orders = Order.query.filter_by(status=\"expired\")\n+\n for event in events:\n if datetime.now() - event.trash_date >= timedelta(days=30):\n DataManager.delete_event(event.id)\n@@ -40,21 +39,11 @@\n if datetime.now() - session_.trash_date >= timedelta(days=30):\n delete_from_db(session_, \"Session deleted permanently\")\n \n- for order in orders:\n- if datetime.now() - order.trashed_at >= timedelta(days=30):\n- delete_from_db(order, \"Order deleted permanently\")\n-\n for pending_order in pending_orders:\n if datetime.now() - pending_order.created_at >= timedelta(days=3):\n pending_order.status = \"expired\"\n save_to_db(pending_order, \"Pending order expired.\")\n \n- for expired_order in expired_orders:\n- if datetime.now() - expired_order.created_at >= timedelta(days=6):\n- expired_order.status = \"deleted\"\n- expired_order.trashed_at = datetime.now()\n- save_to_db(expired_order, \"Expired order deleted\")\n-\n \n def send_after_event_mail():\n from app import current_app as app\n", "issue": "Do not show deleted orders in organiser ui and do not auto delete expired orders\n\n", "before_files": [{"content": "from datetime import datetime, timedelta\n\nfrom dateutil.relativedelta import relativedelta\nfrom flask import url_for\nfrom sqlalchemy_continuum import transaction_class\n\nfrom app.helpers.data import DataManager, delete_from_db, save_to_db\nfrom app.helpers.data_getter import DataGetter\nfrom app.helpers.helpers import send_after_event, monthdelta, send_followup_email_for_monthly_fee_payment\nfrom app.helpers.helpers import send_email_for_expired_orders, send_email_for_monthly_fee_payment\nfrom app.helpers.payment import get_fee\nfrom app.helpers.ticketing import TicketingManager\nfrom app.models.event import Event\nfrom app.models.event_invoice import EventInvoice\nfrom app.models.order import Order\nfrom app.models.session import Session\nfrom app.models.user import User\n\n\ndef empty_trash():\n from app import current_app as app\n with app.app_context():\n events = Event.query.filter_by(in_trash=True)\n users = User.query.filter_by(in_trash=True)\n sessions = Session.query.filter_by(in_trash=True)\n orders = Order.query.filter_by(status=\"deleted\")\n pending_orders = Order.query.filter_by(status=\"pending\")\n expired_orders = Order.query.filter_by(status=\"expired\")\n for event in events:\n if datetime.now() - event.trash_date >= timedelta(days=30):\n DataManager.delete_event(event.id)\n\n for user in users:\n if datetime.now() - user.trash_date >= timedelta(days=30):\n transaction = transaction_class(Event)\n transaction.query.filter_by(user_id=user.id).delete()\n delete_from_db(user, \"User deleted permanently\")\n\n for session_ in sessions:\n if datetime.now() - session_.trash_date >= timedelta(days=30):\n delete_from_db(session_, \"Session deleted permanently\")\n\n for order in orders:\n if datetime.now() - order.trashed_at >= timedelta(days=30):\n delete_from_db(order, \"Order deleted permanently\")\n\n for pending_order in pending_orders:\n if datetime.now() - pending_order.created_at >= timedelta(days=3):\n 
pending_order.status = \"expired\"\n save_to_db(pending_order, \"Pending order expired.\")\n\n for expired_order in expired_orders:\n if datetime.now() - expired_order.created_at >= timedelta(days=6):\n expired_order.status = \"deleted\"\n expired_order.trashed_at = datetime.now()\n save_to_db(expired_order, \"Expired order deleted\")\n\n\ndef send_after_event_mail():\n from app import current_app as app\n with app.app_context():\n events = Event.query.all()\n for event in events:\n upcoming_events = DataGetter.get_upcoming_events()\n organizers = DataGetter.get_user_event_roles_by_role_name(\n event.id, 'organizer')\n speakers = DataGetter.get_user_event_roles_by_role_name(event.id,\n 'speaker')\n if datetime.now() > event.end_time:\n for speaker in speakers:\n send_after_event(speaker.user.email, event.id,\n upcoming_events)\n for organizer in organizers:\n send_after_event(organizer.user.email, event.id,\n upcoming_events)\n\n\ndef send_mail_to_expired_orders():\n from app import current_app as app\n with app.app_context():\n orders = DataGetter.get_expired_orders()\n for order in orders:\n send_email_for_expired_orders(order.user.email, order.event.name, order.get_invoice_number(),\n url_for('ticketing.view_order_after_payment',\n order_identifier=order.identifier, _external=True))\n\n\ndef send_event_fee_notification():\n from app import current_app as app\n with app.app_context():\n events = Event.query.all()\n for event in events:\n latest_invoice = EventInvoice.filter_by(event_id=event.id).order_by(EventInvoice.created_at.desc()).first()\n\n if latest_invoice:\n orders = Order.query \\\n .filter_by(event_id=event.id) \\\n .filter_by(status='completed') \\\n .filter(Order.completed_at > latest_invoice.created_at).all()\n else:\n orders = Order.query.filter_by(event_id=event.id).filter_by(status='completed').all()\n\n fee_total = 0\n for order in orders:\n for order_ticket in order.tickets:\n ticket = TicketingManager.get_ticket(order_ticket.ticket_id)\n if order.paid_via != 'free' and order.amount > 0 and ticket.price > 0:\n fee = ticket.price * (get_fee(order.event.payment_currency) / 100.0)\n fee_total += fee\n\n if fee_total > 0:\n new_invoice = EventInvoice(amount=fee_total, event_id=event.id, user_id=event.creator_id)\n\n if event.discount_code_id and event.discount_code:\n r = relativedelta(datetime.utcnow(), event.created_at)\n if r <= event.discount_code.max_quantity:\n new_invoice.amount = fee_total - (fee_total * (event.discount_code.value / 100.0))\n new_invoice.discount_code_id = event.discount_code_id\n\n save_to_db(new_invoice)\n prev_month = monthdelta(new_invoice.created_at, 1).strftime(\"%b %Y\") # Displayed as Aug 2016\n send_email_for_monthly_fee_payment(new_invoice.user.email,\n event.name,\n prev_month,\n new_invoice.amount,\n url_for('event_invoicing.view_invoice',\n invoice_identifier=new_invoice.identifier, _external=True))\n\n\ndef send_event_fee_notification_followup():\n from app import current_app as app\n with app.app_context():\n incomplete_invoices = EventInvoice.query.filter(EventInvoice.status != 'completed').all()\n for incomplete_invoice in incomplete_invoices:\n if incomplete_invoice.amount > 0:\n prev_month = monthdelta(incomplete_invoice.created_at, 1).strftime(\"%b %Y\") # Displayed as Aug 2016\n send_followup_email_for_monthly_fee_payment(incomplete_invoice.user.email,\n incomplete_invoice.event.name,\n prev_month,\n incomplete_invoice.amount,\n url_for('event_invoicing.view_invoice',\n invoice_identifier=incomplete_invoice.identifier,\n 
_external=True))\n", "path": "app/helpers/scheduled_jobs.py"}]} | 2,160 | 356 |
gh_patches_debug_21723 | rasdani/github-patches | git_diff | graspologic-org__graspologic-85 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix notebooks on Netlify
Issues with setting Jupyter notebook kernels prevent them from running on Netlify.
</issue>
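The kernel problem described above usually comes down to the `kernelspec` stored in each notebook's metadata. A minimal sketch of normalising it with `nbformat` — an assumed fix for illustration, not code from this repository, and the notebook path is hypothetical:

```python
import nbformat

# Point the notebook at the standard "python3" kernel so a headless runner
# (such as the Netlify build) can execute it without a custom kernel installed.
nb = nbformat.read("notebooks/example.ipynb", as_version=4)
nb.metadata["kernelspec"] = {
    "name": "python3",
    "display_name": "Python 3",
    "language": "python",
}
nbformat.write(nb, "notebooks/example.ipynb")
```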
<code>
[start of graspy/embed/lse.py]
1 # ase.py
2 # Created by Ben Pedigo on 2018-09-26.
3 # Email: [email protected]
4 import warnings
5
6 from .embed import BaseEmbed
7 from .svd import selectSVD
8 from ..utils import import_graph, to_laplace, get_lcc, is_fully_connected
9
10
11 class LaplacianSpectralEmbed(BaseEmbed):
12 r"""
13 Class for computing the laplacian spectral embedding of a graph
14
15 The laplacian spectral embedding (LSE) is a k-dimensional Euclidean representation of
16 the graph based on its Laplacian matrix [1]_. It relies on an SVD to reduce the dimensionality
17 to the specified k, or if k is unspecified, can find a number of dimensions automatically.
18
19 Parameters
20 ----------
21 n_components : int or None, default = None
22 Desired dimensionality of output data. If "full",
23 n_components must be <= min(X.shape). Otherwise, n_components must be
24 < min(X.shape). If None, then optimal dimensions will be chosen by
25 ``select_dimension`` using ``n_elbows`` argument.
26 n_elbows : int, optional, default: 2
27 If `n_compoents=None`, then compute the optimal embedding dimension using
28 `select_dimension`. Otherwise, ignored.
29 algorithm : {'full', 'truncated' (default), 'randomized'}, optional
30 SVD solver to use:
31
32 - 'full'
33 Computes full svd using ``scipy.linalg.svd``
34 - 'truncated'
35 Computes truncated svd using ``scipy.sparse.linalg.svd``
36 - 'randomized'
37 Computes randomized svd using
38 ``sklearn.utils.extmath.randomized_svd``
39 n_iter : int, optional (default = 5)
40 Number of iterations for randomized SVD solver. Not used by 'full' or
41 'truncated'. The default is larger than the default in randomized_svd
42 to handle sparse matrices that may have large slowly decaying spectrum.
43 lcc : bool, optional (default=True)
44 If True, computes the largest connected component for the input graph.
45
46 Attributes
47 ----------
48 latent_left_ : array, shape (n_samples, n_components)
49 Estimated left latent positions of the graph.
50 latent_right_ : array, shape (n_samples, n_components), or None
51 Only computed when the graph is directed, or adjacency matrix is assymetric.
52 Estimated right latent positions of the graph. Otherwise, None.
53 singular_values_ : array, shape (n_components)
54 Singular values associated with the latent position matrices.
55 indices_ : array, or None
56 If ``lcc`` is True, these are the indices of the vertices that were kept.
57
58 See Also
59 --------
60 graspy.embed.selectSVD
61 graspy.embed.select_dimension
62 graspy.utils.to_laplace
63
64 Notes
65 -----
66 The singular value decomposition:
67
68 .. math:: A = U \Sigma V^T
69
70 is used to find an orthonormal basis for a matrix, which in our case is the Laplacian
71 matrix of the graph. These basis vectors (in the matrices U or V) are ordered according
72 to the amount of variance they explain in the original matrix. By selecting a subset of these
73 basis vectors (through our choice of dimensionality reduction) we can find a lower dimensional
74 space in which to represent the graph
75
76 References
77 ----------
78 .. [1] Sussman, D.L., Tang, M., Fishkind, D.E., Priebe, C.E. "A
79 Consistent Adjacency Spectral Embedding for Stochastic Blockmodel Graphs,"
80 Journal of the American Statistical Association, Vol. 107(499), 2012
81 """
82
83 def __init__(
84 self,
85 form='DAD',
86 n_components=None,
87 n_elbows=2,
88 algorithm='randomized',
89 n_iter=5,
90 ):
91 super().__init__(
92 n_components=n_components,
93 n_elbows=n_elbows,
94 algorithm=algorithm,
95 n_iter=n_iter,
96 )
97 self.form = form
98
99 def fit(self, graph):
100 """
101 Fit LSE model to input graph
102
103 By default, uses the Laplacian normalization of the form:
104
105 .. math:: L = D^{-1/2} A D^{-1/2}
106
107 Parameters
108 ----------
109 graph : array_like or networkx.Graph
110 Input graph to embed. see graphstats.utils.import_graph
111
112 form : {'DAD' (default), 'I-DAD'}, optional
113 Specifies the type of Laplacian normalization to use.
114
115 Returns
116 -------
117 self : returns an instance of self.
118 """
119 A = import_graph(graph)
120
121 if not is_fully_connected(A):
122 msg = """Input graph is not fully connected. Results may not \
123 be optimal. You can compute the largest connected component by \
124 using ``graspy.utils.get_lcc``."""
125 warnings.warn(msg, UserWarning)
126
127 L_norm = to_laplace(A, form=self.form)
128 self._reduce_dim(L_norm)
129 return self
130
[end of graspy/embed/lse.py]
[start of graspy/embed/ase.py]
1 # ase.py
2 # Created by Ben Pedigo on 2018-09-15.
3 # Email: [email protected]
4 import warnings
5
6 from .embed import BaseEmbed
7 from .svd import selectSVD
8 from ..utils import import_graph, get_lcc, is_fully_connected
9
10
11 class AdjacencySpectralEmbed(BaseEmbed):
12 r"""
13 Class for computing the adjacency spectral embedding of a graph
14
15 The adjacency spectral embedding (ASE) is a k-dimensional Euclidean representation of
16 the graph based on its adjacency matrix [1]_. It relies on an SVD to reduce the dimensionality
17 to the specified k, or if k is unspecified, can find a number of dimensions automatically
18 (see graphstats.embed.svd.selectSVD).
19
20 Parameters
21 ----------
22 n_components : int or None, default = None
23 Desired dimensionality of output data. If "full",
24 n_components must be <= min(X.shape). Otherwise, n_components must be
25 < min(X.shape). If None, then optimal dimensions will be chosen by
26 ``select_dimension`` using ``n_elbows`` argument.
27 n_elbows : int, optional, default: 2
28 If `n_compoents=None`, then compute the optimal embedding dimension using
29 `select_dimension`. Otherwise, ignored.
30 algorithm : {'full', 'truncated' (default), 'randomized'}, optional
31 SVD solver to use:
32
33 - 'full'
34 Computes full svd using ``scipy.linalg.svd``
35 - 'truncated'
36 Computes truncated svd using ``scipy.sparse.linalg.svd``
37 - 'randomized'
38 Computes randomized svd using
39 ``sklearn.utils.extmath.randomized_svd``
40 n_iter : int, optional (default = 5)
41 Number of iterations for randomized SVD solver. Not used by 'full' or
42 'truncated'. The default is larger than the default in randomized_svd
43 to handle sparse matrices that may have large slowly decaying spectrum.
44 lcc : bool, optional (default=True)
45 If True, computes the largest connected component for the input graph.
46
47 Attributes
48 ----------
49 latent_left_ : array, shape (n_samples, n_components)
50 Estimated left latent positions of the graph.
51 latent_right_ : array, shape (n_samples, n_components), or None
52 Only computed when the graph is directed, or adjacency matrix is assymetric.
53 Estimated right latent positions of the graph. Otherwise, None.
54 singular_values_ : array, shape (n_components)
55 Singular values associated with the latent position matrices.
56 indices_ : array, or None
57 If ``lcc`` is True, these are the indices of the vertices that were kept.
58
59 See Also
60 --------
61 graspy.embed.selectSVD
62 graspy.embed.select_dimension
63
64 Notes
65 -----
66 The singular value decomposition:
67
68 .. math:: A = U \Sigma V^T
69
70 is used to find an orthonormal basis for a matrix, which in our case is the adjacency
71 matrix of the graph. These basis vectors (in the matrices U or V) are ordered according
72 to the amount of variance they explain in the original matrix. By selecting a subset of these
73 basis vectors (through our choice of dimensionality reduction) we can find a lower dimensional
74 space in which to represent the graph
75
76 References
77 ----------
78 .. [1] Sussman, D.L., Tang, M., Fishkind, D.E., Priebe, C.E. "A
79 Consistent Adjacency Spectral Embedding for Stochastic Blockmodel Graphs,"
80 Journal of the American Statistical Association, Vol. 107(499), 2012
81 """
82
83 def __init__(
84 self,
85 n_components=None,
86 n_elbows=2,
87 algorithm='randomized',
88 n_iter=5,
89 ):
90 super().__init__(
91 n_components=n_components,
92 n_elbows=n_elbows,
93 algorithm=algorithm,
94 n_iter=n_iter,
95 )
96
97 def fit(self, graph):
98 """
99 Fit ASE model to input graph
100
101 Parameters
102 ----------
103 graph : array_like or networkx.Graph
104 input graph to embed. see graphstats.utils.import_graph
105
106 Returns
107 -------
108 self : returns an instance of self.
109 """
110 A = import_graph(graph)
111
112 if not is_fully_connected(A):
113 msg = """Input graph is not fully connected. Results may not \
114 be optimal. You can compute the largest connected component by \
115 using ``graspy.utils.get_lcc``."""
116 warnings.warn(msg, UserWarning)
117
118 self._reduce_dim(A)
119 return self
[end of graspy/embed/ase.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/graspy/embed/ase.py b/graspy/embed/ase.py
--- a/graspy/embed/ase.py
+++ b/graspy/embed/ase.py
@@ -41,8 +41,6 @@
Number of iterations for randomized SVD solver. Not used by 'full' or
'truncated'. The default is larger than the default in randomized_svd
to handle sparse matrices that may have large slowly decaying spectrum.
- lcc : bool, optional (default=True)
- If True, computes the largest connected component for the input graph.
Attributes
----------
diff --git a/graspy/embed/lse.py b/graspy/embed/lse.py
--- a/graspy/embed/lse.py
+++ b/graspy/embed/lse.py
@@ -40,8 +40,6 @@
Number of iterations for randomized SVD solver. Not used by 'full' or
'truncated'. The default is larger than the default in randomized_svd
to handle sparse matrices that may have large slowly decaying spectrum.
- lcc : bool, optional (default=True)
- If True, computes the largest connected component for the input graph.
Attributes
----------
| {"golden_diff": "diff --git a/graspy/embed/ase.py b/graspy/embed/ase.py\n--- a/graspy/embed/ase.py\n+++ b/graspy/embed/ase.py\n@@ -41,8 +41,6 @@\n Number of iterations for randomized SVD solver. Not used by 'full' or \n 'truncated'. The default is larger than the default in randomized_svd \n to handle sparse matrices that may have large slowly decaying spectrum.\n- lcc : bool, optional (default=True)\n- If True, computes the largest connected component for the input graph.\n \n Attributes\n ----------\ndiff --git a/graspy/embed/lse.py b/graspy/embed/lse.py\n--- a/graspy/embed/lse.py\n+++ b/graspy/embed/lse.py\n@@ -40,8 +40,6 @@\n Number of iterations for randomized SVD solver. Not used by 'full' or \n 'truncated'. The default is larger than the default in randomized_svd \n to handle sparse matrices that may have large slowly decaying spectrum.\n- lcc : bool, optional (default=True)\n- If True, computes the largest connected component for the input graph.\n \n Attributes\n ----------\n", "issue": "Fix notebooks on netlify\nIssues with setting jupyter notebook kernels prevent them from running on netlify\n", "before_files": [{"content": "# ase.py\n# Created by Ben Pedigo on 2018-09-26.\n# Email: [email protected]\nimport warnings\n\nfrom .embed import BaseEmbed\nfrom .svd import selectSVD\nfrom ..utils import import_graph, to_laplace, get_lcc, is_fully_connected\n\n\nclass LaplacianSpectralEmbed(BaseEmbed):\n r\"\"\"\n Class for computing the laplacian spectral embedding of a graph \n \n The laplacian spectral embedding (LSE) is a k-dimensional Euclidean representation of \n the graph based on its Laplacian matrix [1]_. It relies on an SVD to reduce the dimensionality\n to the specified k, or if k is unspecified, can find a number of dimensions automatically.\n\n Parameters\n ----------\n n_components : int or None, default = None\n Desired dimensionality of output data. If \"full\", \n n_components must be <= min(X.shape). Otherwise, n_components must be\n < min(X.shape). If None, then optimal dimensions will be chosen by\n ``select_dimension`` using ``n_elbows`` argument.\n n_elbows : int, optional, default: 2\n If `n_compoents=None`, then compute the optimal embedding dimension using\n `select_dimension`. Otherwise, ignored.\n algorithm : {'full', 'truncated' (default), 'randomized'}, optional\n SVD solver to use:\n\n - 'full'\n Computes full svd using ``scipy.linalg.svd``\n - 'truncated'\n Computes truncated svd using ``scipy.sparse.linalg.svd``\n - 'randomized'\n Computes randomized svd using \n ``sklearn.utils.extmath.randomized_svd``\n n_iter : int, optional (default = 5)\n Number of iterations for randomized SVD solver. Not used by 'full' or \n 'truncated'. The default is larger than the default in randomized_svd \n to handle sparse matrices that may have large slowly decaying spectrum.\n lcc : bool, optional (default=True)\n If True, computes the largest connected component for the input graph.\n\n Attributes\n ----------\n latent_left_ : array, shape (n_samples, n_components)\n Estimated left latent positions of the graph.\n latent_right_ : array, shape (n_samples, n_components), or None\n Only computed when the graph is directed, or adjacency matrix is assymetric.\n Estimated right latent positions of the graph. 
Otherwise, None.\n singular_values_ : array, shape (n_components)\n Singular values associated with the latent position matrices.\n indices_ : array, or None\n If ``lcc`` is True, these are the indices of the vertices that were kept.\n\n See Also\n --------\n graspy.embed.selectSVD\n graspy.embed.select_dimension\n graspy.utils.to_laplace\n\n Notes\n -----\n The singular value decomposition: \n\n .. math:: A = U \\Sigma V^T\n\n is used to find an orthonormal basis for a matrix, which in our case is the Laplacian\n matrix of the graph. These basis vectors (in the matrices U or V) are ordered according \n to the amount of variance they explain in the original matrix. By selecting a subset of these\n basis vectors (through our choice of dimensionality reduction) we can find a lower dimensional \n space in which to represent the graph\n\n References\n ----------\n .. [1] Sussman, D.L., Tang, M., Fishkind, D.E., Priebe, C.E. \"A\n Consistent Adjacency Spectral Embedding for Stochastic Blockmodel Graphs,\"\n Journal of the American Statistical Association, Vol. 107(499), 2012\n \"\"\"\n\n def __init__(\n self,\n form='DAD',\n n_components=None,\n n_elbows=2,\n algorithm='randomized',\n n_iter=5,\n ):\n super().__init__(\n n_components=n_components,\n n_elbows=n_elbows,\n algorithm=algorithm,\n n_iter=n_iter,\n )\n self.form = form\n\n def fit(self, graph):\n \"\"\"\n Fit LSE model to input graph\n\n By default, uses the Laplacian normalization of the form:\n\n .. math:: L = D^{-1/2} A D^{-1/2}\n\n Parameters\n ----------\n graph : array_like or networkx.Graph\n Input graph to embed. see graphstats.utils.import_graph\n\n form : {'DAD' (default), 'I-DAD'}, optional\n Specifies the type of Laplacian normalization to use.\n\n Returns\n -------\n self : returns an instance of self.\n \"\"\"\n A = import_graph(graph)\n\n if not is_fully_connected(A):\n msg = \"\"\"Input graph is not fully connected. Results may not \\\n be optimal. You can compute the largest connected component by \\\n using ``graspy.utils.get_lcc``.\"\"\"\n warnings.warn(msg, UserWarning)\n\n L_norm = to_laplace(A, form=self.form)\n self._reduce_dim(L_norm)\n return self\n", "path": "graspy/embed/lse.py"}, {"content": "# ase.py\n# Created by Ben Pedigo on 2018-09-15.\n# Email: [email protected]\nimport warnings\n\nfrom .embed import BaseEmbed\nfrom .svd import selectSVD\nfrom ..utils import import_graph, get_lcc, is_fully_connected\n\n\nclass AdjacencySpectralEmbed(BaseEmbed):\n r\"\"\"\n Class for computing the adjacency spectral embedding of a graph \n \n The adjacency spectral embedding (ASE) is a k-dimensional Euclidean representation of \n the graph based on its adjacency matrix [1]_. It relies on an SVD to reduce the dimensionality\n to the specified k, or if k is unspecified, can find a number of dimensions automatically\n (see graphstats.embed.svd.selectSVD).\n\n Parameters\n ----------\n n_components : int or None, default = None\n Desired dimensionality of output data. If \"full\", \n n_components must be <= min(X.shape). Otherwise, n_components must be\n < min(X.shape). If None, then optimal dimensions will be chosen by\n ``select_dimension`` using ``n_elbows`` argument.\n n_elbows : int, optional, default: 2\n If `n_compoents=None`, then compute the optimal embedding dimension using\n `select_dimension`. 
Otherwise, ignored.\n algorithm : {'full', 'truncated' (default), 'randomized'}, optional\n SVD solver to use:\n\n - 'full'\n Computes full svd using ``scipy.linalg.svd``\n - 'truncated'\n Computes truncated svd using ``scipy.sparse.linalg.svd``\n - 'randomized'\n Computes randomized svd using \n ``sklearn.utils.extmath.randomized_svd``\n n_iter : int, optional (default = 5)\n Number of iterations for randomized SVD solver. Not used by 'full' or \n 'truncated'. The default is larger than the default in randomized_svd \n to handle sparse matrices that may have large slowly decaying spectrum.\n lcc : bool, optional (default=True)\n If True, computes the largest connected component for the input graph.\n\n Attributes\n ----------\n latent_left_ : array, shape (n_samples, n_components)\n Estimated left latent positions of the graph. \n latent_right_ : array, shape (n_samples, n_components), or None\n Only computed when the graph is directed, or adjacency matrix is assymetric.\n Estimated right latent positions of the graph. Otherwise, None.\n singular_values_ : array, shape (n_components)\n Singular values associated with the latent position matrices. \n indices_ : array, or None\n If ``lcc`` is True, these are the indices of the vertices that were kept.\n\n See Also\n --------\n graspy.embed.selectSVD\n graspy.embed.select_dimension\n\n Notes\n -----\n The singular value decomposition: \n\n .. math:: A = U \\Sigma V^T\n\n is used to find an orthonormal basis for a matrix, which in our case is the adjacency\n matrix of the graph. These basis vectors (in the matrices U or V) are ordered according \n to the amount of variance they explain in the original matrix. By selecting a subset of these\n basis vectors (through our choice of dimensionality reduction) we can find a lower dimensional \n space in which to represent the graph\n\n References\n ----------\n .. [1] Sussman, D.L., Tang, M., Fishkind, D.E., Priebe, C.E. \"A\n Consistent Adjacency Spectral Embedding for Stochastic Blockmodel Graphs,\"\n Journal of the American Statistical Association, Vol. 107(499), 2012\n \"\"\"\n\n def __init__(\n self,\n n_components=None,\n n_elbows=2,\n algorithm='randomized',\n n_iter=5,\n ):\n super().__init__(\n n_components=n_components,\n n_elbows=n_elbows,\n algorithm=algorithm,\n n_iter=n_iter,\n )\n\n def fit(self, graph):\n \"\"\"\n Fit ASE model to input graph\n\n Parameters\n ----------\n graph : array_like or networkx.Graph\n input graph to embed. see graphstats.utils.import_graph\n\n Returns\n -------\n self : returns an instance of self.\n \"\"\"\n A = import_graph(graph)\n\n if not is_fully_connected(A):\n msg = \"\"\"Input graph is not fully connected. Results may not \\\n be optimal. You can compute the largest connected component by \\\n using ``graspy.utils.get_lcc``.\"\"\"\n warnings.warn(msg, UserWarning)\n\n self._reduce_dim(A)\n return self", "path": "graspy/embed/ase.py"}]} | 3,268 | 268 |
gh_patches_debug_40331 | rasdani/github-patches | git_diff | searxng__searxng-3418 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wikimedia Commons
**Working URL to the engine**
https://commons.wikimedia.org
**Why do you want to add this engine?**
Out of all of the Wikimedia projects, Wikimedia Commons is one of only two to not appear in any engine category in SearXNG, with the other being Wikispecies.
**Features of this engine**
It has a collection of [82,886,704](https://commons.wikimedia.org/wiki/Special:Statistics) [freely usable](https://commons.wikimedia.org/wiki/Commons:Reusing_content_outside_Wikimedia) media files.
**How can SearXNG fetch the information from this engine?**
`https://commons.wikimedia.org/w/index.php?search=%s` with `%s` being what you want to search.
**Applicable category of this engine**
General, files, images, music, videos.
</issue>
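As a quick illustration of the fetch pattern mentioned in the issue — a standalone sketch, not part of the engine file below, which talks to the MediaWiki `action=query` API rather than `index.php`:

```python
from urllib.parse import urlencode


def commons_search_url(query: str) -> str:
    # Build the plain search URL quoted in the issue; the real engine below
    # queries /w/api.php instead and parses the JSON response.
    return "https://commons.wikimedia.org/w/index.php?" + urlencode({"search": query})


print(commons_search_url("Eiffel Tower"))
# -> https://commons.wikimedia.org/w/index.php?search=Eiffel+Tower
```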
<code>
[start of searx/engines/wikicommons.py]
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 """Wikimedia Commons (images)
3
4 """
5
6 from urllib.parse import urlencode
7
8 # about
9 about = {
10 "website": 'https://commons.wikimedia.org/',
11 "wikidata_id": 'Q565',
12 "official_api_documentation": 'https://commons.wikimedia.org/w/api.php',
13 "use_official_api": True,
14 "require_api_key": False,
15 "results": 'JSON',
16 }
17
18 base_url = "https://commons.wikimedia.org"
19 search_prefix = (
20 '?action=query'
21 '&format=json'
22 '&generator=search'
23 '&gsrnamespace=6'
24 '&gsrprop=snippet'
25 '&prop=info|imageinfo'
26 '&iiprop=url|size|mime'
27 '&iiurlheight=180' # needed for the thumb url
28 )
29 paging = True
30 number_of_results = 10
31
32
33 def request(query, params):
34 language = 'en'
35 if params['language'] != 'all':
36 language = params['language'].split('-')[0]
37
38 args = {
39 'uselang': language,
40 'gsrlimit': number_of_results,
41 'gsroffset': number_of_results * (params["pageno"] - 1),
42 'gsrsearch': "filetype:bitmap|drawing " + query,
43 }
44
45 params["url"] = f"{base_url}/w/api.php{search_prefix}&{urlencode(args, safe=':|')}"
46 return params
47
48
49 def response(resp):
50 results = []
51 json = resp.json()
52
53 if not json.get("query", {}).get("pages"):
54 return results
55
56 for item in json["query"]["pages"].values():
57 imageinfo = item["imageinfo"][0]
58 title = item["title"].replace("File:", "").rsplit('.', 1)[0]
59 result = {
60 'url': imageinfo["descriptionurl"],
61 'title': title,
62 'content': item["snippet"],
63 'img_src': imageinfo["url"],
64 'resolution': f'{imageinfo["width"]} x {imageinfo["height"]}',
65 'thumbnail_src': imageinfo["thumburl"],
66 'template': 'images.html',
67 }
68 results.append(result)
69
70 return results
71
[end of searx/engines/wikicommons.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/engines/wikicommons.py b/searx/engines/wikicommons.py
--- a/searx/engines/wikicommons.py
+++ b/searx/engines/wikicommons.py
@@ -3,6 +3,8 @@
"""
+import datetime
+
from urllib.parse import urlencode
# about
@@ -14,6 +16,8 @@
"require_api_key": False,
"results": 'JSON',
}
+categories = ['images']
+search_type = 'images'
base_url = "https://commons.wikimedia.org"
search_prefix = (
@@ -29,17 +33,29 @@
paging = True
number_of_results = 10
+search_types = {
+ 'images': 'bitmap|drawing',
+ 'videos': 'video',
+ 'audio': 'audio',
+ 'files': 'multimedia|office|archive|3d',
+}
+
def request(query, params):
language = 'en'
if params['language'] != 'all':
language = params['language'].split('-')[0]
+ if search_type not in search_types:
+ raise ValueError(f"Unsupported search type: {search_type}")
+
+ filetype = search_types[search_type]
+
args = {
'uselang': language,
'gsrlimit': number_of_results,
'gsroffset': number_of_results * (params["pageno"] - 1),
- 'gsrsearch': "filetype:bitmap|drawing " + query,
+ 'gsrsearch': f"filetype:{filetype} {query}",
}
params["url"] = f"{base_url}/w/api.php{search_prefix}&{urlencode(args, safe=':|')}"
@@ -52,7 +68,6 @@
if not json.get("query", {}).get("pages"):
return results
-
for item in json["query"]["pages"].values():
imageinfo = item["imageinfo"][0]
title = item["title"].replace("File:", "").rsplit('.', 1)[0]
@@ -60,11 +75,28 @@
'url': imageinfo["descriptionurl"],
'title': title,
'content': item["snippet"],
- 'img_src': imageinfo["url"],
- 'resolution': f'{imageinfo["width"]} x {imageinfo["height"]}',
- 'thumbnail_src': imageinfo["thumburl"],
- 'template': 'images.html',
}
+
+ if search_type == "images":
+ result['template'] = 'images.html'
+ result['img_src'] = imageinfo["url"]
+ result['thumbnail_src'] = imageinfo["thumburl"]
+ result['resolution'] = f'{imageinfo["width"]} x {imageinfo["height"]}'
+ else:
+ result['thumbnail'] = imageinfo["thumburl"]
+
+ if search_type == "videos":
+ result['template'] = 'videos.html'
+ if imageinfo.get('duration'):
+ result['length'] = datetime.timedelta(seconds=int(imageinfo['duration']))
+ result['iframe_src'] = imageinfo['url']
+ elif search_type == "files":
+ result['template'] = 'files.html'
+ result['metadata'] = imageinfo['mime']
+ result['size'] = imageinfo['size']
+ elif search_type == "audio":
+ result['iframe_src'] = imageinfo['url']
+
results.append(result)
return results
| {"golden_diff": "diff --git a/searx/engines/wikicommons.py b/searx/engines/wikicommons.py\n--- a/searx/engines/wikicommons.py\n+++ b/searx/engines/wikicommons.py\n@@ -3,6 +3,8 @@\n \n \"\"\"\n \n+import datetime\n+\n from urllib.parse import urlencode\n \n # about\n@@ -14,6 +16,8 @@\n \"require_api_key\": False,\n \"results\": 'JSON',\n }\n+categories = ['images']\n+search_type = 'images'\n \n base_url = \"https://commons.wikimedia.org\"\n search_prefix = (\n@@ -29,17 +33,29 @@\n paging = True\n number_of_results = 10\n \n+search_types = {\n+ 'images': 'bitmap|drawing',\n+ 'videos': 'video',\n+ 'audio': 'audio',\n+ 'files': 'multimedia|office|archive|3d',\n+}\n+\n \n def request(query, params):\n language = 'en'\n if params['language'] != 'all':\n language = params['language'].split('-')[0]\n \n+ if search_type not in search_types:\n+ raise ValueError(f\"Unsupported search type: {search_type}\")\n+\n+ filetype = search_types[search_type]\n+\n args = {\n 'uselang': language,\n 'gsrlimit': number_of_results,\n 'gsroffset': number_of_results * (params[\"pageno\"] - 1),\n- 'gsrsearch': \"filetype:bitmap|drawing \" + query,\n+ 'gsrsearch': f\"filetype:{filetype} {query}\",\n }\n \n params[\"url\"] = f\"{base_url}/w/api.php{search_prefix}&{urlencode(args, safe=':|')}\"\n@@ -52,7 +68,6 @@\n \n if not json.get(\"query\", {}).get(\"pages\"):\n return results\n-\n for item in json[\"query\"][\"pages\"].values():\n imageinfo = item[\"imageinfo\"][0]\n title = item[\"title\"].replace(\"File:\", \"\").rsplit('.', 1)[0]\n@@ -60,11 +75,28 @@\n 'url': imageinfo[\"descriptionurl\"],\n 'title': title,\n 'content': item[\"snippet\"],\n- 'img_src': imageinfo[\"url\"],\n- 'resolution': f'{imageinfo[\"width\"]} x {imageinfo[\"height\"]}',\n- 'thumbnail_src': imageinfo[\"thumburl\"],\n- 'template': 'images.html',\n }\n+\n+ if search_type == \"images\":\n+ result['template'] = 'images.html'\n+ result['img_src'] = imageinfo[\"url\"]\n+ result['thumbnail_src'] = imageinfo[\"thumburl\"]\n+ result['resolution'] = f'{imageinfo[\"width\"]} x {imageinfo[\"height\"]}'\n+ else:\n+ result['thumbnail'] = imageinfo[\"thumburl\"]\n+\n+ if search_type == \"videos\":\n+ result['template'] = 'videos.html'\n+ if imageinfo.get('duration'):\n+ result['length'] = datetime.timedelta(seconds=int(imageinfo['duration']))\n+ result['iframe_src'] = imageinfo['url']\n+ elif search_type == \"files\":\n+ result['template'] = 'files.html'\n+ result['metadata'] = imageinfo['mime']\n+ result['size'] = imageinfo['size']\n+ elif search_type == \"audio\":\n+ result['iframe_src'] = imageinfo['url']\n+\n results.append(result)\n \n return results\n", "issue": "Wikimedia Commons\n**Working URL to the engine**\r\nhttps://commons.wikimedia.org\r\n\r\n**Why do you want to add this engine?**\r\nOut of all of the Wikimedia projects, Wikimedia Commons is one of only two to not appear in any engine category in SearXNG, with the other being Wikispecies.\r\n\r\n**Features of this engine**\r\nIt has a collection of [82,886,704](https://commons.wikimedia.org/wiki/Special:Statistics) [freely usable](https://commons.wikimedia.org/wiki/Commons:Reusing_content_outside_Wikimedia) media files.\r\n\r\n**How can SearXNG fetch the information from this engine?**\r\n`https://commons.wikimedia.org/w/index.php?search=%s` with `%s` being what you want to search.\r\n\r\n**Applicable category of this engine**\r\nGeneral, files, images, music, videos.\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n\"\"\"Wikimedia Commons 
(images)\n\n\"\"\"\n\nfrom urllib.parse import urlencode\n\n# about\nabout = {\n \"website\": 'https://commons.wikimedia.org/',\n \"wikidata_id\": 'Q565',\n \"official_api_documentation\": 'https://commons.wikimedia.org/w/api.php',\n \"use_official_api\": True,\n \"require_api_key\": False,\n \"results\": 'JSON',\n}\n\nbase_url = \"https://commons.wikimedia.org\"\nsearch_prefix = (\n '?action=query'\n '&format=json'\n '&generator=search'\n '&gsrnamespace=6'\n '&gsrprop=snippet'\n '&prop=info|imageinfo'\n '&iiprop=url|size|mime'\n '&iiurlheight=180' # needed for the thumb url\n)\npaging = True\nnumber_of_results = 10\n\n\ndef request(query, params):\n language = 'en'\n if params['language'] != 'all':\n language = params['language'].split('-')[0]\n\n args = {\n 'uselang': language,\n 'gsrlimit': number_of_results,\n 'gsroffset': number_of_results * (params[\"pageno\"] - 1),\n 'gsrsearch': \"filetype:bitmap|drawing \" + query,\n }\n\n params[\"url\"] = f\"{base_url}/w/api.php{search_prefix}&{urlencode(args, safe=':|')}\"\n return params\n\n\ndef response(resp):\n results = []\n json = resp.json()\n\n if not json.get(\"query\", {}).get(\"pages\"):\n return results\n\n for item in json[\"query\"][\"pages\"].values():\n imageinfo = item[\"imageinfo\"][0]\n title = item[\"title\"].replace(\"File:\", \"\").rsplit('.', 1)[0]\n result = {\n 'url': imageinfo[\"descriptionurl\"],\n 'title': title,\n 'content': item[\"snippet\"],\n 'img_src': imageinfo[\"url\"],\n 'resolution': f'{imageinfo[\"width\"]} x {imageinfo[\"height\"]}',\n 'thumbnail_src': imageinfo[\"thumburl\"],\n 'template': 'images.html',\n }\n results.append(result)\n\n return results\n", "path": "searx/engines/wikicommons.py"}]} | 1,366 | 786 |
gh_patches_debug_60750 | rasdani/github-patches | git_diff | larq__larq-80 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add docs on how to define your own quantizer
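A minimal sketch of what such documentation could show, following the pattern already used by the built-in quantizers in `quantizers.py` below: wrap the forward pass in `tf.custom_gradient` to supply the pseudo-gradient, and register the function so it can be referenced by name. The quantizer name and the identity gradient here are illustrative choices, not part of the library:

```python
import tensorflow as tf
from larq import utils


@utils.register_keras_custom_object
def my_custom_sign(x):
    """Example user-defined quantizer: sign forward pass, straight-through gradient."""

    @tf.custom_gradient
    def _quantize(x):
        def grad(dy):
            # Pseudo-gradient used on the backward pass (identity here).
            return dy

        # Forward pass: binarize, avoiding an output of exactly zero.
        return tf.sign(tf.sign(x) + 0.1), grad

    # Clipping first makes the overall gradient zero outside [-1, 1],
    # mirroring how ste_sign is written below.
    return _quantize(tf.clip_by_value(x, -1, 1))
```

Since the function is registered as a Keras custom object, it can presumably then be referenced from quantized layers either directly or by its string name, the same way the built-in quantizers are resolved through `quantizers.get()`.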
</issue>
<code>
[start of larq/quantizers.py]
1 """A Quantizer defines the way of transforming a full precision input to a
2 quantized output and the pseudo-gradient method used for the backwards pass."""
3
4 import tensorflow as tf
5 from larq import utils
6
7
8 def sign(x):
9 """A sign function that will never be zero"""
10 return tf.sign(tf.sign(x) + 0.1)
11
12
13 @tf.custom_gradient
14 def _binarize_with_identity_grad(x):
15 def grad(dy):
16 return dy
17
18 return sign(x), grad
19
20
21 @tf.custom_gradient
22 def _binarize_with_weighted_grad(x):
23 def grad(dy):
24 return (1 - tf.abs(x)) * 2 * dy
25
26 return sign(x), grad
27
28
29 @utils.register_keras_custom_object
30 def ste_sign(x):
31 r"""
32 Sign binarization function.
33 \\[
34 q(x) = \begin{cases}
35 -1 & x < 0 \\\
36 1 & x \geq 0
37 \end{cases}
38 \\]
39
40 The gradient is estimated using the Straight-Through Estimator
41 (essentially the binarization is replaced by a clipped identity on the
42 backward pass).
43 \\[\frac{\partial q(x)}{\partial x} = \begin{cases}
44 1 & \left|x\right| \leq 1 \\\
45 0 & \left|x\right| > 1
46 \end{cases}\\]
47
48 # Arguments
49 x: Input tensor.
50
51 # Returns
52 Binarized tensor.
53
54 # References
55 - [Binarized Neural Networks: Training Deep Neural Networks with Weights and
56 Activations Constrained to +1 or -1](http://arxiv.org/abs/1602.02830)
57 """
58
59 x = tf.clip_by_value(x, -1, 1)
60
61 return _binarize_with_identity_grad(x)
62
63
64 @utils.register_keras_custom_object
65 def magnitude_aware_sign(x):
66 r"""
67 Magnitude-aware sign for birealnet.
68
69
70 # Arguments
71 x: Input tensor
72
73 # Returns
74 Scaled binarized tensor (with values in $\{-a, a\}$, where $a$ is a float).
75
76 # References
77 - [Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved
78 Representational Capability and Advanced Training
79 Algorithm](https://arxiv.org/abs/1808.00278)
80
81 """
82 scale_factor = tf.stop_gradient(
83 tf.reduce_mean(tf.abs(x), axis=list(range(len(x.shape) - 1)))
84 )
85 return scale_factor * ste_sign(x)
86
87
88 @utils.register_keras_custom_object
89 def approx_sign(x):
90 r"""
91 Sign binarization function.
92 \\[
93 q(x) = \begin{cases}
94 -1 & x < 0 \\\
95 1 & x \geq 0
96 \end{cases}
97 \\]
98
99 The gradient is estimated using the ApproxSign method.
100 \\[\frac{\partial q(x)}{\partial x} = \begin{cases}
101 (2 - 2 \left|x\right|) & \left|x\right| \leq 1 \\\
102 0 & \left|x\right| > 1
103 \end{cases}
104 \\]
105
106 # Arguments
107 x: Input tensor.
108
109 # Returns
110 Binarized tensor.
111
112 # References
113 - [Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved
114 Representational Capability and Advanced
115 Training Algorithm](http://arxiv.org/abs/1808.00278)
116 """
117
118 x = tf.clip_by_value(x, -1, 1)
119
120 return _binarize_with_weighted_grad(x)
121
122
123 def serialize(initializer):
124 return tf.keras.utils.serialize_keras_object(initializer)
125
126
127 def deserialize(name, custom_objects=None):
128 return tf.keras.utils.deserialize_keras_object(
129 name,
130 module_objects=globals(),
131 custom_objects=custom_objects,
132 printable_module_name="quantization function",
133 )
134
135
136 def get(identifier):
137 if identifier is None:
138 return None
139 if isinstance(identifier, str):
140 return deserialize(str(identifier))
141 if callable(identifier):
142 return identifier
143 raise ValueError(
144 f"Could not interpret quantization function identifier: {identifier}"
145 )
146
[end of larq/quantizers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/larq/quantizers.py b/larq/quantizers.py
--- a/larq/quantizers.py
+++ b/larq/quantizers.py
@@ -64,7 +64,7 @@
@utils.register_keras_custom_object
def magnitude_aware_sign(x):
r"""
- Magnitude-aware sign for birealnet.
+ Magnitude-aware sign for Bi-Real Net.
# Arguments
| {"golden_diff": "diff --git a/larq/quantizers.py b/larq/quantizers.py\n--- a/larq/quantizers.py\n+++ b/larq/quantizers.py\n@@ -64,7 +64,7 @@\n @utils.register_keras_custom_object\n def magnitude_aware_sign(x):\n r\"\"\"\n- Magnitude-aware sign for birealnet.\n+ Magnitude-aware sign for Bi-Real Net.\n \n \n # Arguments\n", "issue": "Add docs on how to define your own quantizer\n\n", "before_files": [{"content": "\"\"\"A Quantizer defines the way of transforming a full precision input to a\nquantized output and the pseudo-gradient method used for the backwards pass.\"\"\"\n\nimport tensorflow as tf\nfrom larq import utils\n\n\ndef sign(x):\n \"\"\"A sign function that will never be zero\"\"\"\n return tf.sign(tf.sign(x) + 0.1)\n\n\[email protected]_gradient\ndef _binarize_with_identity_grad(x):\n def grad(dy):\n return dy\n\n return sign(x), grad\n\n\[email protected]_gradient\ndef _binarize_with_weighted_grad(x):\n def grad(dy):\n return (1 - tf.abs(x)) * 2 * dy\n\n return sign(x), grad\n\n\[email protected]_keras_custom_object\ndef ste_sign(x):\n r\"\"\"\n Sign binarization function.\n \\\\[\n q(x) = \\begin{cases}\n -1 & x < 0 \\\\\\\n 1 & x \\geq 0\n \\end{cases}\n \\\\]\n\n The gradient is estimated using the Straight-Through Estimator\n (essentially the binarization is replaced by a clipped identity on the\n backward pass).\n \\\\[\\frac{\\partial q(x)}{\\partial x} = \\begin{cases}\n 1 & \\left|x\\right| \\leq 1 \\\\\\\n 0 & \\left|x\\right| > 1\n \\end{cases}\\\\]\n\n # Arguments\n x: Input tensor.\n\n # Returns\n Binarized tensor.\n\n # References\n - [Binarized Neural Networks: Training Deep Neural Networks with Weights and\n Activations Constrained to +1 or -1](http://arxiv.org/abs/1602.02830)\n \"\"\"\n\n x = tf.clip_by_value(x, -1, 1)\n\n return _binarize_with_identity_grad(x)\n\n\[email protected]_keras_custom_object\ndef magnitude_aware_sign(x):\n r\"\"\"\n Magnitude-aware sign for birealnet.\n\n\n # Arguments\n x: Input tensor\n\n # Returns\n Scaled binarized tensor (with values in $\\{-a, a\\}$, where $a$ is a float).\n\n # References\n - [Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved\n Representational Capability and Advanced Training\n Algorithm](https://arxiv.org/abs/1808.00278)\n\n \"\"\"\n scale_factor = tf.stop_gradient(\n tf.reduce_mean(tf.abs(x), axis=list(range(len(x.shape) - 1)))\n )\n return scale_factor * ste_sign(x)\n\n\[email protected]_keras_custom_object\ndef approx_sign(x):\n r\"\"\"\n Sign binarization function.\n \\\\[\n q(x) = \\begin{cases}\n -1 & x < 0 \\\\\\\n 1 & x \\geq 0\n \\end{cases}\n \\\\]\n\n The gradient is estimated using the ApproxSign method.\n \\\\[\\frac{\\partial q(x)}{\\partial x} = \\begin{cases}\n (2 - 2 \\left|x\\right|) & \\left|x\\right| \\leq 1 \\\\\\\n 0 & \\left|x\\right| > 1\n \\end{cases}\n \\\\]\n\n # Arguments\n x: Input tensor.\n\n # Returns\n Binarized tensor.\n\n # References\n - [Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved\n Representational Capability and Advanced\n Training Algorithm](http://arxiv.org/abs/1808.00278)\n \"\"\"\n\n x = tf.clip_by_value(x, -1, 1)\n\n return _binarize_with_weighted_grad(x)\n\n\ndef serialize(initializer):\n return tf.keras.utils.serialize_keras_object(initializer)\n\n\ndef deserialize(name, custom_objects=None):\n return tf.keras.utils.deserialize_keras_object(\n name,\n module_objects=globals(),\n custom_objects=custom_objects,\n printable_module_name=\"quantization function\",\n )\n\n\ndef get(identifier):\n if identifier is None:\n return 
None\n if isinstance(identifier, str):\n return deserialize(str(identifier))\n if callable(identifier):\n return identifier\n raise ValueError(\n f\"Could not interpret quantization function identifier: {identifier}\"\n )\n", "path": "larq/quantizers.py"}]} | 1,855 | 99 |
gh_patches_debug_6197 | rasdani/github-patches | git_diff | gratipay__gratipay.com-2491 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
can't save any profile info
Reported by @chrisamaphone on [Twitter](https://twitter.com/chrisamaphone/status/476775868778704896).
</issue>
<code>
[start of gittip/security/csrf.py]
1 """Cross Site Request Forgery middleware, borrowed from Django.
2
3 See also:
4
5 https://github.com/django/django/blob/master/django/middleware/csrf.py
6 https://docs.djangoproject.com/en/dev/ref/contrib/csrf/
7 https://github.com/gittip/www.gittip.com/issues/88
8
9 """
10
11 from datetime import timedelta
12 import re
13 import urlparse
14 from aspen import log_dammit
15
16
17 #from django.utils.cache import patch_vary_headers
18 cc_delim_re = re.compile(r'\s*,\s*')
19 def patch_vary_headers(response, newheaders):
20 """
21 Adds (or updates) the "Vary" header in the given HttpResponse object.
22 newheaders is a list of header names that should be in "Vary". Existing
23 headers in "Vary" aren't removed.
24 """
25 # Note that we need to keep the original order intact, because cache
26 # implementations may rely on the order of the Vary contents in, say,
27 # computing an MD5 hash.
28 if 'Vary' in response.headers:
29 vary_headers = cc_delim_re.split(response.headers['Vary'])
30 else:
31 vary_headers = []
32 # Use .lower() here so we treat headers as case-insensitive.
33 existing_headers = set([header.lower() for header in vary_headers])
34 additional_headers = [newheader for newheader in newheaders
35 if newheader.lower() not in existing_headers]
36 response.headers['Vary'] = ', '.join(vary_headers + additional_headers)
37
38
39 #from django.utils.http import same_origin
40 def same_origin(url1, url2):
41 """
42 Checks if two URLs are 'same-origin'
43 """
44 p1, p2 = urlparse.urlparse(url1), urlparse.urlparse(url2)
45 return (p1.scheme, p1.hostname, p1.port) == (p2.scheme, p2.hostname, p2.port)
46
47
48 from aspen import Response
49 from crypto import constant_time_compare, get_random_string
50
51 REASON_NO_REFERER = "Referer checking failed - no Referer."
52 REASON_BAD_REFERER = "Referer checking failed - %s does not match %s."
53 REASON_NO_CSRF_COOKIE = "CSRF cookie not set."
54 REASON_BAD_TOKEN = "CSRF token missing or incorrect."
55
56 TOKEN_LENGTH = 32
57 CSRF_TIMEOUT = timedelta(days=7)
58
59
60 def _get_new_csrf_key():
61 return get_random_string(TOKEN_LENGTH)
62
63
64 def _sanitize_token(token):
65 # Allow only alphanum, and ensure we return a 'str' for the sake
66 # of the post processing middleware.
67 if len(token) > TOKEN_LENGTH:
68 return _get_new_csrf_key()
69 token = re.sub('[^a-zA-Z0-9]+', '', str(token.decode('ascii', 'ignore')))
70 if token == "":
71 # In case the cookie has been truncated to nothing at some point.
72 return _get_new_csrf_key()
73 return token
74
75 def _is_secure(request):
76 import gittip
77 return gittip.canonical_scheme == 'https'
78
79 def _get_host(request):
80 """Returns the HTTP host using the request headers.
81 """
82 return request.headers.get('X-Forwarded-Host', request.headers['Host'])
83
84
85
86 def inbound(request):
87 """Given a Request object, reject it if it's a forgery.
88 """
89 if request.line.uri.startswith('/assets/'): return
90
91 try:
92 csrf_token = request.headers.cookie.get('csrf_token')
93 csrf_token = '' if csrf_token is None else csrf_token.value
94 csrf_token = _sanitize_token(csrf_token)
95 except KeyError:
96 csrf_token = _get_new_csrf_key()
97
98 request.context['csrf_token'] = csrf_token
99
100 # Assume that anything not defined as 'safe' by RC2616 needs protection
101 if request.line.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):
102
103 if _is_secure(request):
104 # Suppose user visits http://example.com/
105 # An active network attacker (man-in-the-middle, MITM) sends a
106 # POST form that targets https://example.com/detonate-bomb/ and
107 # submits it via JavaScript.
108 #
109 # The attacker will need to provide a CSRF cookie and token, but
110 # that's no problem for a MITM and the session-independent
111 # nonce we're using. So the MITM can circumvent the CSRF
112 # protection. This is true for any HTTP connection, but anyone
113 # using HTTPS expects better! For this reason, for
114 # https://example.com/ we need additional protection that treats
115 # http://example.com/ as completely untrusted. Under HTTPS,
116 # Barth et al. found that the Referer header is missing for
117 # same-domain requests in only about 0.2% of cases or less, so
118 # we can use strict Referer checking.
119 referer = request.headers.get('Referer')
120 if referer is None:
121 raise Response(403, REASON_NO_REFERER)
122
123 good_referer = 'https://%s/' % _get_host(request)
124 if not same_origin(referer, good_referer):
125 reason = REASON_BAD_REFERER % (referer, good_referer)
126 log_dammit(reason)
127 raise Response(403, reason)
128
129 if csrf_token is None:
130 raise Response(403, REASON_NO_CSRF_COOKIE)
131
132 # Check non-cookie token for match.
133 request_csrf_token = ""
134 if request.line.method == "POST":
135 request_csrf_token = request.body.get('csrf_token', '')
136
137 if request_csrf_token == "":
138 # Fall back to X-CSRF-TOKEN, to make things easier for AJAX,
139 # and possible for PUT/DELETE.
140 request_csrf_token = request.headers.get('X-CSRF-TOKEN', '')
141
142 if not constant_time_compare(request_csrf_token, csrf_token):
143 raise Response(403, REASON_BAD_TOKEN)
144
145
146 def outbound(request, response):
147 """Store the latest CSRF token as a cookie.
148 """
149 csrf_token = request.context.get('csrf_token')
150 if csrf_token:
151 response.set_cookie('csrf_token', csrf_token, expires=CSRF_TIMEOUT)
152
153 # Content varies with the CSRF cookie, so set the Vary header.
154 patch_vary_headers(response, ('Cookie',))
155
[end of gittip/security/csrf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gittip/security/csrf.py b/gittip/security/csrf.py
--- a/gittip/security/csrf.py
+++ b/gittip/security/csrf.py
@@ -148,7 +148,7 @@
"""
csrf_token = request.context.get('csrf_token')
if csrf_token:
- response.set_cookie('csrf_token', csrf_token, expires=CSRF_TIMEOUT)
+ response.set_cookie('csrf_token', csrf_token, expires=CSRF_TIMEOUT, httponly=False)
# Content varies with the CSRF cookie, so set the Vary header.
patch_vary_headers(response, ('Cookie',))
| {"golden_diff": "diff --git a/gittip/security/csrf.py b/gittip/security/csrf.py\n--- a/gittip/security/csrf.py\n+++ b/gittip/security/csrf.py\n@@ -148,7 +148,7 @@\n \"\"\"\n csrf_token = request.context.get('csrf_token')\n if csrf_token:\n- response.set_cookie('csrf_token', csrf_token, expires=CSRF_TIMEOUT)\n+ response.set_cookie('csrf_token', csrf_token, expires=CSRF_TIMEOUT, httponly=False)\n \n # Content varies with the CSRF cookie, so set the Vary header.\n patch_vary_headers(response, ('Cookie',))\n", "issue": "can't save any profile info\nReported by @chrisamaphone on [Twitter](https://twitter.com/chrisamaphone/status/476775868778704896).\n\n", "before_files": [{"content": "\"\"\"Cross Site Request Forgery middleware, borrowed from Django.\n\nSee also:\n\n https://github.com/django/django/blob/master/django/middleware/csrf.py\n https://docs.djangoproject.com/en/dev/ref/contrib/csrf/\n https://github.com/gittip/www.gittip.com/issues/88\n\n\"\"\"\n\nfrom datetime import timedelta\nimport re\nimport urlparse\nfrom aspen import log_dammit\n\n\n#from django.utils.cache import patch_vary_headers\ncc_delim_re = re.compile(r'\\s*,\\s*')\ndef patch_vary_headers(response, newheaders):\n \"\"\"\n Adds (or updates) the \"Vary\" header in the given HttpResponse object.\n newheaders is a list of header names that should be in \"Vary\". Existing\n headers in \"Vary\" aren't removed.\n \"\"\"\n # Note that we need to keep the original order intact, because cache\n # implementations may rely on the order of the Vary contents in, say,\n # computing an MD5 hash.\n if 'Vary' in response.headers:\n vary_headers = cc_delim_re.split(response.headers['Vary'])\n else:\n vary_headers = []\n # Use .lower() here so we treat headers as case-insensitive.\n existing_headers = set([header.lower() for header in vary_headers])\n additional_headers = [newheader for newheader in newheaders\n if newheader.lower() not in existing_headers]\n response.headers['Vary'] = ', '.join(vary_headers + additional_headers)\n\n\n#from django.utils.http import same_origin\ndef same_origin(url1, url2):\n \"\"\"\n Checks if two URLs are 'same-origin'\n \"\"\"\n p1, p2 = urlparse.urlparse(url1), urlparse.urlparse(url2)\n return (p1.scheme, p1.hostname, p1.port) == (p2.scheme, p2.hostname, p2.port)\n\n\nfrom aspen import Response\nfrom crypto import constant_time_compare, get_random_string\n\nREASON_NO_REFERER = \"Referer checking failed - no Referer.\"\nREASON_BAD_REFERER = \"Referer checking failed - %s does not match %s.\"\nREASON_NO_CSRF_COOKIE = \"CSRF cookie not set.\"\nREASON_BAD_TOKEN = \"CSRF token missing or incorrect.\"\n\nTOKEN_LENGTH = 32\nCSRF_TIMEOUT = timedelta(days=7)\n\n\ndef _get_new_csrf_key():\n return get_random_string(TOKEN_LENGTH)\n\n\ndef _sanitize_token(token):\n # Allow only alphanum, and ensure we return a 'str' for the sake\n # of the post processing middleware.\n if len(token) > TOKEN_LENGTH:\n return _get_new_csrf_key()\n token = re.sub('[^a-zA-Z0-9]+', '', str(token.decode('ascii', 'ignore')))\n if token == \"\":\n # In case the cookie has been truncated to nothing at some point.\n return _get_new_csrf_key()\n return token\n\ndef _is_secure(request):\n import gittip\n return gittip.canonical_scheme == 'https'\n\ndef _get_host(request):\n \"\"\"Returns the HTTP host using the request headers.\n \"\"\"\n return request.headers.get('X-Forwarded-Host', request.headers['Host'])\n\n\n\ndef inbound(request):\n \"\"\"Given a Request object, reject it if it's a forgery.\n \"\"\"\n if 
request.line.uri.startswith('/assets/'): return\n\n try:\n csrf_token = request.headers.cookie.get('csrf_token')\n csrf_token = '' if csrf_token is None else csrf_token.value\n csrf_token = _sanitize_token(csrf_token)\n except KeyError:\n csrf_token = _get_new_csrf_key()\n\n request.context['csrf_token'] = csrf_token\n\n # Assume that anything not defined as 'safe' by RC2616 needs protection\n if request.line.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):\n\n if _is_secure(request):\n # Suppose user visits http://example.com/\n # An active network attacker (man-in-the-middle, MITM) sends a\n # POST form that targets https://example.com/detonate-bomb/ and\n # submits it via JavaScript.\n #\n # The attacker will need to provide a CSRF cookie and token, but\n # that's no problem for a MITM and the session-independent\n # nonce we're using. So the MITM can circumvent the CSRF\n # protection. This is true for any HTTP connection, but anyone\n # using HTTPS expects better! For this reason, for\n # https://example.com/ we need additional protection that treats\n # http://example.com/ as completely untrusted. Under HTTPS,\n # Barth et al. found that the Referer header is missing for\n # same-domain requests in only about 0.2% of cases or less, so\n # we can use strict Referer checking.\n referer = request.headers.get('Referer')\n if referer is None:\n raise Response(403, REASON_NO_REFERER)\n\n good_referer = 'https://%s/' % _get_host(request)\n if not same_origin(referer, good_referer):\n reason = REASON_BAD_REFERER % (referer, good_referer)\n log_dammit(reason)\n raise Response(403, reason)\n\n if csrf_token is None:\n raise Response(403, REASON_NO_CSRF_COOKIE)\n\n # Check non-cookie token for match.\n request_csrf_token = \"\"\n if request.line.method == \"POST\":\n request_csrf_token = request.body.get('csrf_token', '')\n\n if request_csrf_token == \"\":\n # Fall back to X-CSRF-TOKEN, to make things easier for AJAX,\n # and possible for PUT/DELETE.\n request_csrf_token = request.headers.get('X-CSRF-TOKEN', '')\n\n if not constant_time_compare(request_csrf_token, csrf_token):\n raise Response(403, REASON_BAD_TOKEN)\n\n\ndef outbound(request, response):\n \"\"\"Store the latest CSRF token as a cookie.\n \"\"\"\n csrf_token = request.context.get('csrf_token')\n if csrf_token:\n response.set_cookie('csrf_token', csrf_token, expires=CSRF_TIMEOUT)\n\n # Content varies with the CSRF cookie, so set the Vary header.\n patch_vary_headers(response, ('Cookie',))\n", "path": "gittip/security/csrf.py"}]} | 2,327 | 141 |
gh_patches_debug_16585 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-3111 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Marking emr cluster for termination throws exception
Marking an EMR cluster for termination throws an exception in version c7n-0.8.31.2, although I can see the tag created on the cluster.
`````
policies:
- name: emr-mark-clusters-for-termination
resource: emr
filters:
- type: value
key: "Id"
op: in
value:
- 'abcdefghij'
actions:
- type: mark-for-op
tag: 'custodian-emr-terminate'
op: terminate
days: 4
`````
This policy throws an exception:
2018-09-27 19:20:30,262: custodian.actions:INFO Tagging 1 resources for terminate on 2018/10/01
2018-09-27 19:20:31,720: custodian.actions:ERROR Exception with tags: [{u'Value': u'Resource does not meet policy: terminate@2018/10/01', u'Key': 'custodian-emr-terminate'}] on resources: abcdefghij
'dict' object is not callable
Although the EMR cluster is marked with the tag 'custodian-emr-terminate', filtering on type: marked-for-op returns 0 resources.
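The "'dict' object is not callable" error points at how the tagging actions call the retry helper: `self.retry(client.add_tags(...))` executes `add_tags` immediately and hands its dict response to `retry`, which then tries to call that dict. A stripped-down illustration of the difference (the two functions below are simplified stand-ins for c7n's `get_retry()` wrapper and the boto3 EMR client call, not the real implementations):

```python
def retry(func, **kwargs):
    """Stand-in for c7n's get_retry() result: it expects a callable plus its arguments."""
    return func(**kwargs)


def add_tags(ResourceId, Tags):
    """Stand-in for the boto3 EMR client's add_tags call."""
    return {"ResponseMetadata": {"HTTPStatusCode": 200}}


tags = [{"Key": "custodian-emr-terminate", "Value": "terminate@2018/10/01"}]

# Buggy pattern (as in emr.py): add_tags() runs first, and its dict response is
# handed to retry(), which then tries to call it -> "'dict' object is not callable".
# retry(add_tags(ResourceId="abcdefghij", Tags=tags))   # raises TypeError

# Working pattern: pass the callable itself and let retry() invoke it.
retry(add_tags, ResourceId="abcdefghij", Tags=tags)
```

This would also explain why the tag still shows up on the cluster: the eager `add_tags` call succeeds before the retry wrapper fails.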
</issue>
<code>
[start of c7n/resources/emr.py]
1 # Copyright 2016-2017 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from __future__ import absolute_import, division, print_function, unicode_literals
15
16 import logging
17 import time
18
19 import six
20
21 from c7n.actions import ActionRegistry, BaseAction
22 from c7n.exceptions import PolicyValidationError
23 from c7n.filters import FilterRegistry, MetricsFilter
24 from c7n.manager import resources
25 from c7n.query import QueryResourceManager
26 from c7n.utils import (
27 local_session, type_schema, get_retry)
28 from c7n.tags import (
29 TagDelayedAction, RemoveTag, TagActionFilter, Tag)
30
31 filters = FilterRegistry('emr.filters')
32 actions = ActionRegistry('emr.actions')
33 log = logging.getLogger('custodian.emr')
34
35 filters.register('marked-for-op', TagActionFilter)
36
37
38 @resources.register('emr')
39 class EMRCluster(QueryResourceManager):
40 """Resource manager for Elastic MapReduce clusters
41 """
42
43 class resource_type(object):
44 service = 'emr'
45 type = 'emr'
46 cluster_states = ['WAITING', 'BOOTSTRAPPING', 'RUNNING', 'STARTING']
47 enum_spec = ('list_clusters', 'Clusters', {'ClusterStates': cluster_states})
48 name = 'Name'
49 id = 'Id'
50 date = "Status.Timeline.CreationDateTime"
51 filter_name = None
52 dimension = None
53
54 action_registry = actions
55 filter_registry = filters
56 retry = staticmethod(get_retry(('ThrottlingException',)))
57
58 def __init__(self, ctx, data):
59 super(EMRCluster, self).__init__(ctx, data)
60 self.queries = QueryFilter.parse(
61 self.data.get('query', [
62 {'ClusterStates': [
63 'running', 'bootstrapping', 'waiting']}]))
64
65 @classmethod
66 def get_permissions(cls):
67 return ("elasticmapreduce:ListClusters",
68 "elasticmapreduce:DescribeCluster")
69
70 def get_resources(self, ids):
71 # no filtering by id set supported at the api
72 client = local_session(self.session_factory).client('emr')
73 results = []
74 for jid in ids:
75 results.append(
76 client.describe_cluster(ClusterId=jid)['Cluster'])
77 return results
78
79 def resources(self, query=None):
80 q = self.consolidate_query_filter()
81 if q is not None:
82 query = query or {}
83 for i in range(0, len(q)):
84 query[q[i]['Name']] = q[i]['Values']
85 return super(EMRCluster, self).resources(query=query)
86
87 def consolidate_query_filter(self):
88 result = []
89 names = set()
90 # allow same name to be specified multiple times and append the queries
91 # under the same name
92 for q in self.queries:
93 query_filter = q.query()
94 if query_filter['Name'] in names:
95 for filt in result:
96 if query_filter['Name'] == filt['Name']:
97 filt['Values'].extend(query_filter['Values'])
98 else:
99 names.add(query_filter['Name'])
100 result.append(query_filter)
101 if 'ClusterStates' not in names:
102 # include default query
103 result.append(
104 {
105 'Name': 'ClusterStates',
106 'Values': ['WAITING', 'RUNNING', 'BOOTSTRAPPING'],
107 }
108 )
109 return result
110
111 def augment(self, resources):
112 client = local_session(
113 self.get_resource_manager('emr').session_factory).client('emr')
114 result = []
115 # remap for cwmetrics
116 for r in resources:
117 cluster = self.retry(
118 client.describe_cluster, ClusterId=r['Id'])['Cluster']
119 result.append(cluster)
120 return result
121
122
123 @EMRCluster.filter_registry.register('metrics')
124 class EMRMetrics(MetricsFilter):
125
126 def get_dimensions(self, resource):
127 # Job flow id is legacy name for cluster id
128 return [{'Name': 'JobFlowId', 'Value': resource['Id']}]
129
130
131 @actions.register('mark-for-op')
132 class TagDelayedAction(TagDelayedAction):
133 """Action to specify an action to occur at a later date
134
135 :example:
136
137 .. code-block:: yaml
138
139 policies:
140 - name: emr-mark-for-op
141 resource: emr
142 filters:
143 - "tag:Name": absent
144 actions:
145 - type: mark-for-op
146 tag: custodian_cleanup
147 op: terminate
148 days: 4
149 msg: "Cluster does not have required tags"
150 """
151
152 permission = ('elasticmapreduce:AddTags',)
153 batch_size = 1
154 retry = staticmethod(get_retry(('ThrottlingException',)))
155
156 def process_resource_set(self, resources, tags):
157 client = local_session(
158 self.manager.session_factory).client('emr')
159 for r in resources:
160 self.retry(client.add_tags(ResourceId=r['Id'], Tags=tags))
161
162
163 @actions.register('tag')
164 class TagTable(Tag):
165 """Action to create tag(s) on a resource
166
167 :example:
168
169 .. code-block:: yaml
170
171 policies:
172 - name: emr-tag-table
173 resource: emr
174 filters:
175 - "tag:target-tag": absent
176 actions:
177 - type: tag
178 key: target-tag
179 value: target-tag-value
180 """
181
182 permissions = ('elasticmapreduce:AddTags',)
183 batch_size = 1
184 retry = staticmethod(get_retry(('ThrottlingException',)))
185
186 def process_resource_set(self, resources, tags):
187 client = local_session(self.manager.session_factory).client('emr')
188 for r in resources:
189 self.retry(client.add_tags(ResourceId=r['Id'], Tags=tags))
190
191
192 @actions.register('remove-tag')
193 class UntagTable(RemoveTag):
194 """Action to remove tag(s) on a resource
195
196 :example:
197
198 .. code-block:: yaml
199
200 policies:
201 - name: emr-remove-tag
202 resource: emr
203 filters:
204 - "tag:target-tag": present
205 actions:
206 - type: remove-tag
207 tags: ["target-tag"]
208 """
209
210 concurrency = 2
211 batch_size = 5
212 permissions = ('elasticmapreduce:RemoveTags',)
213
214 def process_resource_set(self, resources, tag_keys):
215 client = local_session(
216 self.manager.session_factory).client('emr')
217 for r in resources:
218 client.remove_tags(
219 ResourceId=r['Id'], TagKeys=tag_keys)
220
221
222 @actions.register('terminate')
223 class Terminate(BaseAction):
224 """Action to terminate EMR cluster(s)
225
226 It is recommended to apply a filter to the terminate action to avoid
227 termination of all EMR clusters
228
229 :example:
230
231 .. code-block:: yaml
232
233 policies:
234 - name: emr-terminate
235 resource: emr
236 query:
237 - ClusterStates: [STARTING, BOOTSTRAPPING, RUNNING, WAITING]
238 actions:
239 - terminate
240 """
241
242 schema = type_schema('terminate', force={'type': 'boolean'})
243 permissions = ("elasticmapreduce:TerminateJobFlows",)
244 delay = 5
245
246 def process(self, emrs):
247 client = local_session(self.manager.session_factory).client('emr')
248 cluster_ids = [emr['Id'] for emr in emrs]
249 if self.data.get('force'):
250 client.set_termination_protection(
251 JobFlowIds=cluster_ids, TerminationProtected=False)
252 time.sleep(self.delay)
253 client.terminate_job_flows(JobFlowIds=cluster_ids)
254 self.log.info("Deleted emrs: %s", cluster_ids)
255 return emrs
256
257
258 # Valid EMR Query Filters
259 EMR_VALID_FILTERS = set(('CreatedAfter', 'CreatedBefore', 'ClusterStates'))
260
261
262 class QueryFilter(object):
263
264 @classmethod
265 def parse(cls, data):
266 results = []
267 for d in data:
268 if not isinstance(d, dict):
269 raise PolicyValidationError(
270 "EMR Query Filter Invalid structure %s" % d)
271 results.append(cls(d).validate())
272 return results
273
274 def __init__(self, data):
275 self.data = data
276 self.key = None
277 self.value = None
278
279 def validate(self):
280 if not len(list(self.data.keys())) == 1:
281 raise PolicyValidationError(
282 "EMR Query Filter Invalid %s" % self.data)
283 self.key = list(self.data.keys())[0]
284 self.value = list(self.data.values())[0]
285
286 if self.key not in EMR_VALID_FILTERS and not self.key.startswith(
287 'tag:'):
288 raise PolicyValidationError(
289 "EMR Query Filter invalid filter name %s" % (self.data))
290
291 if self.value is None:
292 raise PolicyValidationError(
293 "EMR Query Filters must have a value, use tag-key"
294 " w/ tag name as value for tag present checks"
295 " %s" % self.data)
296 return self
297
298 def query(self):
299 value = self.value
300 if isinstance(self.value, six.string_types):
301 value = [self.value]
302
303 return {'Name': self.key, 'Values': value}
304
[end of c7n/resources/emr.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/c7n/resources/emr.py b/c7n/resources/emr.py
--- a/c7n/resources/emr.py
+++ b/c7n/resources/emr.py
@@ -157,7 +157,7 @@
client = local_session(
self.manager.session_factory).client('emr')
for r in resources:
- self.retry(client.add_tags(ResourceId=r['Id'], Tags=tags))
+ self.retry(client.add_tags, ResourceId=r['Id'], Tags=tags)
@actions.register('tag')
@@ -186,7 +186,7 @@
def process_resource_set(self, resources, tags):
client = local_session(self.manager.session_factory).client('emr')
for r in resources:
- self.retry(client.add_tags(ResourceId=r['Id'], Tags=tags))
+ self.retry(client.add_tags, ResourceId=r['Id'], Tags=tags)
@actions.register('remove-tag')
| {"golden_diff": "diff --git a/c7n/resources/emr.py b/c7n/resources/emr.py\n--- a/c7n/resources/emr.py\n+++ b/c7n/resources/emr.py\n@@ -157,7 +157,7 @@\n client = local_session(\n self.manager.session_factory).client('emr')\n for r in resources:\n- self.retry(client.add_tags(ResourceId=r['Id'], Tags=tags))\n+ self.retry(client.add_tags, ResourceId=r['Id'], Tags=tags)\n \n \n @actions.register('tag')\n@@ -186,7 +186,7 @@\n def process_resource_set(self, resources, tags):\n client = local_session(self.manager.session_factory).client('emr')\n for r in resources:\n- self.retry(client.add_tags(ResourceId=r['Id'], Tags=tags))\n+ self.retry(client.add_tags, ResourceId=r['Id'], Tags=tags)\n \n \n @actions.register('remove-tag')\n", "issue": "Marking emr cluster for termination throws exception\nWhen marking EMR cluster for termination throws exception in version c7n-0.8.31.2. I see the tag created in cluster\r\n`````\r\npolicies:\r\n- name: emr-mark-clusters-for-termination\r\n resource: emr\r\n filters:\r\n - type: value\r\n key: \"Id\"\r\n op: in\r\n value:\r\n - 'abcdefghij'\r\n actions:\r\n - type: mark-for-op\r\n tag: 'custodian-emr-terminate'\r\n op: terminate\r\n days: 4\r\n`````\r\n\r\nthis policy throws exception \r\n\r\n2018-09-27 19:20:30,262: custodian.actions:INFO Tagging 1 resources for terminate on 2018/10/01\r\n2018-09-27 19:20:31,720: custodian.actions:ERROR Exception with tags: [{u'Value': u'Resource does not meet policy: terminate@2018/10/01', u'Key': 'custodian-emr-terminate'}] on resources: abcdefghij\r\n 'dict' object is not callable\r\n`\r\nThough the EMR is marked with tag ''custodian-emr-terminate', filtering on type: marked-for-op, returns 0 resources.\n", "before_files": [{"content": "# Copyright 2016-2017 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nimport time\n\nimport six\n\nfrom c7n.actions import ActionRegistry, BaseAction\nfrom c7n.exceptions import PolicyValidationError\nfrom c7n.filters import FilterRegistry, MetricsFilter\nfrom c7n.manager import resources\nfrom c7n.query import QueryResourceManager\nfrom c7n.utils import (\n local_session, type_schema, get_retry)\nfrom c7n.tags import (\n TagDelayedAction, RemoveTag, TagActionFilter, Tag)\n\nfilters = FilterRegistry('emr.filters')\nactions = ActionRegistry('emr.actions')\nlog = logging.getLogger('custodian.emr')\n\nfilters.register('marked-for-op', TagActionFilter)\n\n\[email protected]('emr')\nclass EMRCluster(QueryResourceManager):\n \"\"\"Resource manager for Elastic MapReduce clusters\n \"\"\"\n\n class resource_type(object):\n service = 'emr'\n type = 'emr'\n cluster_states = ['WAITING', 'BOOTSTRAPPING', 'RUNNING', 'STARTING']\n enum_spec = ('list_clusters', 'Clusters', {'ClusterStates': cluster_states})\n name = 'Name'\n id = 'Id'\n date = \"Status.Timeline.CreationDateTime\"\n filter_name = None\n dimension = None\n\n action_registry = actions\n filter_registry = filters\n 
retry = staticmethod(get_retry(('ThrottlingException',)))\n\n def __init__(self, ctx, data):\n super(EMRCluster, self).__init__(ctx, data)\n self.queries = QueryFilter.parse(\n self.data.get('query', [\n {'ClusterStates': [\n 'running', 'bootstrapping', 'waiting']}]))\n\n @classmethod\n def get_permissions(cls):\n return (\"elasticmapreduce:ListClusters\",\n \"elasticmapreduce:DescribeCluster\")\n\n def get_resources(self, ids):\n # no filtering by id set supported at the api\n client = local_session(self.session_factory).client('emr')\n results = []\n for jid in ids:\n results.append(\n client.describe_cluster(ClusterId=jid)['Cluster'])\n return results\n\n def resources(self, query=None):\n q = self.consolidate_query_filter()\n if q is not None:\n query = query or {}\n for i in range(0, len(q)):\n query[q[i]['Name']] = q[i]['Values']\n return super(EMRCluster, self).resources(query=query)\n\n def consolidate_query_filter(self):\n result = []\n names = set()\n # allow same name to be specified multiple times and append the queries\n # under the same name\n for q in self.queries:\n query_filter = q.query()\n if query_filter['Name'] in names:\n for filt in result:\n if query_filter['Name'] == filt['Name']:\n filt['Values'].extend(query_filter['Values'])\n else:\n names.add(query_filter['Name'])\n result.append(query_filter)\n if 'ClusterStates' not in names:\n # include default query\n result.append(\n {\n 'Name': 'ClusterStates',\n 'Values': ['WAITING', 'RUNNING', 'BOOTSTRAPPING'],\n }\n )\n return result\n\n def augment(self, resources):\n client = local_session(\n self.get_resource_manager('emr').session_factory).client('emr')\n result = []\n # remap for cwmetrics\n for r in resources:\n cluster = self.retry(\n client.describe_cluster, ClusterId=r['Id'])['Cluster']\n result.append(cluster)\n return result\n\n\[email protected]_registry.register('metrics')\nclass EMRMetrics(MetricsFilter):\n\n def get_dimensions(self, resource):\n # Job flow id is legacy name for cluster id\n return [{'Name': 'JobFlowId', 'Value': resource['Id']}]\n\n\[email protected]('mark-for-op')\nclass TagDelayedAction(TagDelayedAction):\n \"\"\"Action to specify an action to occur at a later date\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: emr-mark-for-op\n resource: emr\n filters:\n - \"tag:Name\": absent\n actions:\n - type: mark-for-op\n tag: custodian_cleanup\n op: terminate\n days: 4\n msg: \"Cluster does not have required tags\"\n \"\"\"\n\n permission = ('elasticmapreduce:AddTags',)\n batch_size = 1\n retry = staticmethod(get_retry(('ThrottlingException',)))\n\n def process_resource_set(self, resources, tags):\n client = local_session(\n self.manager.session_factory).client('emr')\n for r in resources:\n self.retry(client.add_tags(ResourceId=r['Id'], Tags=tags))\n\n\[email protected]('tag')\nclass TagTable(Tag):\n \"\"\"Action to create tag(s) on a resource\n\n :example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: emr-tag-table\n resource: emr\n filters:\n - \"tag:target-tag\": absent\n actions:\n - type: tag\n key: target-tag\n value: target-tag-value\n \"\"\"\n\n permissions = ('elasticmapreduce:AddTags',)\n batch_size = 1\n retry = staticmethod(get_retry(('ThrottlingException',)))\n\n def process_resource_set(self, resources, tags):\n client = local_session(self.manager.session_factory).client('emr')\n for r in resources:\n self.retry(client.add_tags(ResourceId=r['Id'], Tags=tags))\n\n\[email protected]('remove-tag')\nclass UntagTable(RemoveTag):\n \"\"\"Action to remove tag(s) on a resource\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: emr-remove-tag\n resource: emr\n filters:\n - \"tag:target-tag\": present\n actions:\n - type: remove-tag\n tags: [\"target-tag\"]\n \"\"\"\n\n concurrency = 2\n batch_size = 5\n permissions = ('elasticmapreduce:RemoveTags',)\n\n def process_resource_set(self, resources, tag_keys):\n client = local_session(\n self.manager.session_factory).client('emr')\n for r in resources:\n client.remove_tags(\n ResourceId=r['Id'], TagKeys=tag_keys)\n\n\[email protected]('terminate')\nclass Terminate(BaseAction):\n \"\"\"Action to terminate EMR cluster(s)\n\n It is recommended to apply a filter to the terminate action to avoid\n termination of all EMR clusters\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: emr-terminate\n resource: emr\n query:\n - ClusterStates: [STARTING, BOOTSTRAPPING, RUNNING, WAITING]\n actions:\n - terminate\n \"\"\"\n\n schema = type_schema('terminate', force={'type': 'boolean'})\n permissions = (\"elasticmapreduce:TerminateJobFlows\",)\n delay = 5\n\n def process(self, emrs):\n client = local_session(self.manager.session_factory).client('emr')\n cluster_ids = [emr['Id'] for emr in emrs]\n if self.data.get('force'):\n client.set_termination_protection(\n JobFlowIds=cluster_ids, TerminationProtected=False)\n time.sleep(self.delay)\n client.terminate_job_flows(JobFlowIds=cluster_ids)\n self.log.info(\"Deleted emrs: %s\", cluster_ids)\n return emrs\n\n\n# Valid EMR Query Filters\nEMR_VALID_FILTERS = set(('CreatedAfter', 'CreatedBefore', 'ClusterStates'))\n\n\nclass QueryFilter(object):\n\n @classmethod\n def parse(cls, data):\n results = []\n for d in data:\n if not isinstance(d, dict):\n raise PolicyValidationError(\n \"EMR Query Filter Invalid structure %s\" % d)\n results.append(cls(d).validate())\n return results\n\n def __init__(self, data):\n self.data = data\n self.key = None\n self.value = None\n\n def validate(self):\n if not len(list(self.data.keys())) == 1:\n raise PolicyValidationError(\n \"EMR Query Filter Invalid %s\" % self.data)\n self.key = list(self.data.keys())[0]\n self.value = list(self.data.values())[0]\n\n if self.key not in EMR_VALID_FILTERS and not self.key.startswith(\n 'tag:'):\n raise PolicyValidationError(\n \"EMR Query Filter invalid filter name %s\" % (self.data))\n\n if self.value is None:\n raise PolicyValidationError(\n \"EMR Query Filters must have a value, use tag-key\"\n \" w/ tag name as value for tag present checks\"\n \" %s\" % self.data)\n return self\n\n def query(self):\n value = self.value\n if isinstance(self.value, six.string_types):\n value = [self.value]\n\n return {'Name': self.key, 'Values': value}\n", "path": "c7n/resources/emr.py"}]} | 3,751 | 208 |
gh_patches_debug_17274 | rasdani/github-patches | git_diff | CTPUG__wafer-307 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for Django's redirect app to wafer
It's useful to be able to add a redirect if a page is moved to a different point in the hierarchy.
Django's already got support for this, so we should leverage that.
The potentially problematic part is how this interacts with the static site generation, as django-medusa's handling of redirects is far from ideal.
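For reference, wiring in the stock redirects support is mostly a settings change; a rough sketch against the `wafer/settings.py` shown below (app and middleware names are Django's own, and this says nothing yet about how the static renderer would treat the resulting redirects):

```python
# Sketch of the settings additions; in wafer/settings.py these entries would
# simply be appended to the existing tuples.
INSTALLED_APPS = (
    # ... existing apps ...
    'django.contrib.sites',        # already present, required by redirects
    'django.contrib.redirects',
)

MIDDLEWARE_CLASSES = (
    # ... existing middleware ...
    # Turns 404 responses into database-backed redirects, looked up per site.
    'django.contrib.redirects.middleware.RedirectFallbackMiddleware',
)
```

Redirects would then be managed through the admin; the open question raised here, namely what django-medusa does with a 301 produced by the fallback middleware during static rendering, still needs separate handling.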
</issue>
<code>
[start of wafer/settings.py]
1 import os
2
3 from django.utils.translation import ugettext_lazy as _
4
5 try:
6 from localsettings import *
7 except ImportError:
8 pass
9
10 # Django settings for wafer project.
11
12 ADMINS = (
13 # The logging config below mails admins
14 # ('Your Name', '[email protected]'),
15 )
16
17 DATABASES = {
18 'default': {
19 'ENGINE': 'django.db.backends.sqlite3',
20 'NAME': 'wafer.db',
21 }
22 }
23
24 if os.environ.get('TESTDB', None) == 'postgres':
25 DATABASES['default'].update({
26 'ENGINE': 'django.db.backends.postgresql_psycopg2',
27 'USER': 'postgres',
28 'NAME': 'wafer',
29 })
30
31 # Hosts/domain names that are valid for this site; required if DEBUG is False
32 # See https://docs.djangoproject.com/en/1.5/ref/settings/#allowed-hosts
33 ALLOWED_HOSTS = []
34
35 # Local time zone for this installation. Choices can be found here:
36 # http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
37 # although not all choices may be available on all operating systems.
38 # In a Windows environment this must be set to your system time zone.
39 TIME_ZONE = 'UTC'
40
41 # Language code for this installation. All choices can be found here:
42 # http://www.i18nguy.com/unicode/language-identifiers.html
43 LANGUAGE_CODE = 'en-us'
44
45 SITE_ID = 1
46
47 # If you set this to False, Django will make some optimizations so as not
48 # to load the internationalization machinery.
49 USE_I18N = True
50
51 # If you set this to False, Django will not format dates, numbers and
52 # calendars according to the current locale.
53 USE_L10N = True
54
55 # If you set this to False, Django will not use timezone-aware datetimes.
56 USE_TZ = True
57
58 # Absolute filesystem path to the directory that will hold user-uploaded files.
59 # Example: "/var/www/example.com/media/"
60 project_root = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
61 MEDIA_ROOT = os.path.join(project_root, 'media')
62
63 # URL that handles the media served from MEDIA_ROOT. Make sure to use a
64 # trailing slash.
65 # Examples: "http://example.com/media/", "http://media.example.com/"
66 MEDIA_URL = '/media/'
67
68 # Absolute path to the directory static files should be collected to.
69 # Don't put anything in this directory yourself; store your static files
70 # in apps' "static/" subdirectories and in STATICFILES_DIRS.
71 # Example: "/var/www/example.com/static/"
72 STATIC_ROOT = ''
73
74 # URL prefix for static files.
75 # Example: "http://example.com/static/", "http://static.example.com/"
76 STATIC_URL = '/static/'
77
78 # Additional locations of static files
79 STATICFILES_DIRS = (
80 # Put strings here, like "/home/html/static" or "C:/www/django/static".
81 # Always use forward slashes, even on Windows.
82 # Don't forget to use absolute paths, not relative paths.
83 os.path.join(project_root, 'bower_components'),
84 )
85
86 # List of finder classes that know how to find static files in
87 # various locations.
88 STATICFILES_FINDERS = (
89 'django.contrib.staticfiles.finders.FileSystemFinder',
90 'django.contrib.staticfiles.finders.AppDirectoriesFinder',
91 # 'django.contrib.staticfiles.finders.DefaultStorageFinder',
92 )
93
94 # Make this unique, and don't share it with anybody.
95 SECRET_KEY = '8iysa30^no&oi5kv$k1w)#gsxzrylr-h6%)loz71expnbf7z%)'
96
97 # List of callables that know how to import templates from various sources.
98 TEMPLATE_LOADERS = (
99 'django.template.loaders.filesystem.Loader',
100 'django.template.loaders.app_directories.Loader',
101 # 'django.template.loaders.eggs.Loader',
102 )
103
104 MIDDLEWARE_CLASSES = (
105 'django.middleware.common.CommonMiddleware',
106 'django.contrib.sessions.middleware.SessionMiddleware',
107 'django.middleware.csrf.CsrfViewMiddleware',
108 'django.contrib.auth.middleware.AuthenticationMiddleware',
109 'django.contrib.messages.middleware.MessageMiddleware',
110 # Uncomment the next line for simple clickjacking protection:
111 # 'django.middleware.clickjacking.XFrameOptionsMiddleware',
112 )
113
114 ROOT_URLCONF = 'wafer.urls'
115
116 # Python dotted path to the WSGI application used by Django's runserver.
117 WSGI_APPLICATION = 'wafer.wsgi.application'
118
119 TEMPLATE_DIRS = (
120 # Put strings here, like "/home/html/django_templates" or
121 # "C:/www/django/templates". Always use forward slashes, even on Windows.
122 # Don't forget to use absolute paths, not relative paths.
123 )
124
125 TEMPLATE_CONTEXT_PROCESSORS = (
126 'django.contrib.auth.context_processors.auth',
127 'django.core.context_processors.debug',
128 'django.core.context_processors.i18n',
129 'django.core.context_processors.media',
130 'django.core.context_processors.static',
131 'django.core.context_processors.tz',
132 'django.contrib.messages.context_processors.messages',
133 'wafer.context_processors.site_info',
134 'wafer.context_processors.navigation_info',
135 'wafer.context_processors.menu_info',
136 'wafer.context_processors.registration_settings',
137 )
138
139 INSTALLED_APPS = (
140 'django.contrib.auth',
141 'django.contrib.contenttypes',
142 'django.contrib.sessions',
143 'django.contrib.sites',
144 'django.contrib.messages',
145 'django.contrib.staticfiles',
146 'reversion',
147 'django_medusa',
148 'crispy_forms',
149 'django_nose',
150 'markitup',
151 'rest_framework',
152 'easy_select2',
153 'wafer',
154 'wafer.kv',
155 'wafer.registration',
156 'wafer.talks',
157 'wafer.schedule',
158 'wafer.users',
159 'wafer.sponsors',
160 'wafer.pages',
161 'wafer.tickets',
162 'wafer.compare',
163 # Django isn't finding the overridden templates
164 'registration',
165 'django.contrib.admin',
166 )
167
168 TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'
169
170 # A sample logging configuration. The only tangible logging
171 # performed by this configuration is to send an email to
172 # the site admins on every HTTP 500 error when DEBUG=False.
173 # See http://docs.djangoproject.com/en/dev/topics/logging for
174 # more details on how to customize your logging configuration.
175 LOGGING = {
176 'version': 1,
177 'disable_existing_loggers': False,
178 'filters': {
179 'require_debug_false': {
180 '()': 'django.utils.log.RequireDebugFalse'
181 }
182 },
183 'handlers': {
184 'mail_admins': {
185 'level': 'ERROR',
186 'filters': ['require_debug_false'],
187 'class': 'django.utils.log.AdminEmailHandler'
188 }
189 },
190 'loggers': {
191 'django.request': {
192 'handlers': ['mail_admins'],
193 'level': 'ERROR',
194 'propagate': True,
195 },
196 }
197 }
198
199 # Django registration:
200 ACCOUNT_ACTIVATION_DAYS = 7
201
202 AUTH_USER_MODEL = 'auth.User'
203
204 # Forms:
205 CRISPY_TEMPLATE_PACK = 'bootstrap3'
206
207 # Wafer cache settings
208 # We assume that the WAFER_CACHE is cross-process
209 WAFER_CACHE = 'wafer_cache'
210 CACHES = {
211 'default': {
212 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
213 },
214 WAFER_CACHE: {
215 'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
216 'LOCATION': 'wafer_cache_table',
217 },
218 }
219
220
221 # Wafer menu settings
222
223 WAFER_MENUS = ()
224 # Example menus entries:
225 #
226 # {"label": _("Home"),
227 # "url": '/'},
228 # {"menu": "sponsors",
229 # "label": _("Sponsors"),
230 # "items": [
231 # {"name": "sponsors", "label": _("Our sponsors"),
232 # "url": reverse_lazy("wafer_sponsors")},
233 # {"name": "packages", "label": _("Sponsorship packages"),
234 # "url": reverse_lazy("wafer_sponsorship_packages")},
235 # ]},
236 # {"label": _("Talks"),
237 # "url": reverse_lazy("wafer_users_talks")},
238
239 WAFER_DYNAMIC_MENUS = (
240 'wafer.pages.models.page_menus',
241 )
242
243 # Enabled SSO mechanims:
244 WAFER_SSO = (
245 # 'github',
246 # 'debian',
247 )
248
249 # Log in with GitHub:
250 # WAFER_GITHUB_CLIENT_ID = 'register on github'
251 # WAFER_GITHUB_CLIENT_SECRET = 'to get these secrets'
252
253 # Log in with Debian SSO:
254 # Requires some Apache config:
255 # SSLCACertificateFile /srv/sso.debian.org/etc/debsso.crt
256 # SSLCARevocationCheck chain
257 # SSLCARevocationFile /srv/sso.debian.org/etc/debsso.crl
258 # <Location /accounts/debian-login/>
259 # SSLOptions +StdEnvVars
260 # SSLVerifyClient optional
261 # </Location>
262 # WAFER_DEBIAN_NM_API_KEY = 'obtain one from https://nm.debian.org/apikeys/'
263
264 # Set this to true to disable the login button on the navigation toolbar
265 WAFER_HIDE_LOGIN = False
266
267 # Talk submissions open
268 # Set this to False to disable talk submissions
269 WAFER_TALKS_OPEN = True
270
271 # The form used for talk submission
272 WAFER_TALK_FORM = 'wafer.talks.forms.TalkForm'
273
274 # Set this to False to disable registration
275 WAFER_REGISTRATION_OPEN = True
276 # Can be 'ticket' for Quicket tickets or 'form' for a classic form
277 WAFER_REGISTRATION_MODE = 'ticket'
278
279 # For REGISTRATION_MODE == 'form', the form to present
280 WAFER_REGISTRATION_FORM = 'wafer.users.forms.ExampleRegistrationForm'
281
282 # Allow registered and anonymous users to see registered users
283 WAFER_PUBLIC_ATTENDEE_LIST = True
284
285 # Ticket registration with Quicket
286 # WAFER_TICKET_SECRET = "i'm a shared secret"
287
288 # django_medusa -- disk-based renderer
289 MEDUSA_RENDERER_CLASS = "wafer.management.static.WaferDiskStaticSiteRenderer"
290 MEDUSA_DEPLOY_DIR = os.path.join(project_root, 'static_mirror')
291 MARKITUP_FILTER = ('markdown.markdown', {'safe_mode': True})
292
293 REST_FRAMEWORK = {
294 'DEFAULT_PERMISSION_CLASSES': ('rest_framework.permissions.IsAdminUser',),
295 'PAGE_SIZE': 50
296 }
297
[end of wafer/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wafer/settings.py b/wafer/settings.py
--- a/wafer/settings.py
+++ b/wafer/settings.py
@@ -107,6 +107,7 @@
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
+ 'django.contrib.redirects.middleware.RedirectFallbackMiddleware',
# Uncomment the next line for simple clickjacking protection:
# 'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
@@ -143,6 +144,7 @@
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
+ 'django.contrib.redirects',
'reversion',
'django_medusa',
'crispy_forms',
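The diff above wires Django's stock `django.contrib.redirects` app and its `RedirectFallbackMiddleware` into wafer's settings. As a rough illustration of what that enables — the paths below are invented, only the model and middleware names come from Django itself — a moved page can then be covered by a `Redirect` row created from `manage.py shell` or the admin:

```python
# Hypothetical sketch: run inside a configured Django project (e.g. manage.py shell).
from django.contrib.redirects.models import Redirect
from django.contrib.sites.models import Site

Redirect.objects.create(
    site=Site.objects.get_current(),
    old_path="/talks/old-location/",   # illustrative paths, not taken from the wafer project
    new_path="/talks/new-location/",
)
```

`RedirectFallbackMiddleware` only consults this table after a view has already produced a 404, so existing pages are unaffected and only genuinely moved URLs get redirected.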
| {"golden_diff": "diff --git a/wafer/settings.py b/wafer/settings.py\n--- a/wafer/settings.py\n+++ b/wafer/settings.py\n@@ -107,6 +107,7 @@\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n+ 'django.contrib.redirects.middleware.RedirectFallbackMiddleware',\n # Uncomment the next line for simple clickjacking protection:\n # 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n )\n@@ -143,6 +144,7 @@\n 'django.contrib.sites',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n+ 'django.contrib.redirects',\n 'reversion',\n 'django_medusa',\n 'crispy_forms',\n", "issue": "Add support for Django's redirect app to wafer\nIt's useful to be able to add a redirect if a page is moved to a different point in the hierachy.\n\nDjango's already got support for this, so we should leverage that.\n\nThe potentially problematic part is how this iteracts with the static site generation, as django-medusa's handling of redirects is far from ideal.\n\n", "before_files": [{"content": "import os\n\nfrom django.utils.translation import ugettext_lazy as _\n\ntry:\n from localsettings import *\nexcept ImportError:\n pass\n\n# Django settings for wafer project.\n\nADMINS = (\n # The logging config below mails admins\n # ('Your Name', '[email protected]'),\n)\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3',\n 'NAME': 'wafer.db',\n }\n}\n\nif os.environ.get('TESTDB', None) == 'postgres':\n DATABASES['default'].update({\n 'ENGINE': 'django.db.backends.postgresql_psycopg2',\n 'USER': 'postgres',\n 'NAME': 'wafer',\n })\n\n# Hosts/domain names that are valid for this site; required if DEBUG is False\n# See https://docs.djangoproject.com/en/1.5/ref/settings/#allowed-hosts\nALLOWED_HOSTS = []\n\n# Local time zone for this installation. Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# In a Windows environment this must be set to your system time zone.\nTIME_ZONE = 'UTC'\n\n# Language code for this installation. All choices can be found here:\n# http://www.i18nguy.com/unicode/language-identifiers.html\nLANGUAGE_CODE = 'en-us'\n\nSITE_ID = 1\n\n# If you set this to False, Django will make some optimizations so as not\n# to load the internationalization machinery.\nUSE_I18N = True\n\n# If you set this to False, Django will not format dates, numbers and\n# calendars according to the current locale.\nUSE_L10N = True\n\n# If you set this to False, Django will not use timezone-aware datetimes.\nUSE_TZ = True\n\n# Absolute filesystem path to the directory that will hold user-uploaded files.\n# Example: \"/var/www/example.com/media/\"\nproject_root = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))\nMEDIA_ROOT = os.path.join(project_root, 'media')\n\n# URL that handles the media served from MEDIA_ROOT. 
Make sure to use a\n# trailing slash.\n# Examples: \"http://example.com/media/\", \"http://media.example.com/\"\nMEDIA_URL = '/media/'\n\n# Absolute path to the directory static files should be collected to.\n# Don't put anything in this directory yourself; store your static files\n# in apps' \"static/\" subdirectories and in STATICFILES_DIRS.\n# Example: \"/var/www/example.com/static/\"\nSTATIC_ROOT = ''\n\n# URL prefix for static files.\n# Example: \"http://example.com/static/\", \"http://static.example.com/\"\nSTATIC_URL = '/static/'\n\n# Additional locations of static files\nSTATICFILES_DIRS = (\n # Put strings here, like \"/home/html/static\" or \"C:/www/django/static\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n os.path.join(project_root, 'bower_components'),\n)\n\n# List of finder classes that know how to find static files in\n# various locations.\nSTATICFILES_FINDERS = (\n 'django.contrib.staticfiles.finders.FileSystemFinder',\n 'django.contrib.staticfiles.finders.AppDirectoriesFinder',\n # 'django.contrib.staticfiles.finders.DefaultStorageFinder',\n)\n\n# Make this unique, and don't share it with anybody.\nSECRET_KEY = '8iysa30^no&oi5kv$k1w)#gsxzrylr-h6%)loz71expnbf7z%)'\n\n# List of callables that know how to import templates from various sources.\nTEMPLATE_LOADERS = (\n 'django.template.loaders.filesystem.Loader',\n 'django.template.loaders.app_directories.Loader',\n # 'django.template.loaders.eggs.Loader',\n)\n\nMIDDLEWARE_CLASSES = (\n 'django.middleware.common.CommonMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n # Uncomment the next line for simple clickjacking protection:\n # 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n)\n\nROOT_URLCONF = 'wafer.urls'\n\n# Python dotted path to the WSGI application used by Django's runserver.\nWSGI_APPLICATION = 'wafer.wsgi.application'\n\nTEMPLATE_DIRS = (\n # Put strings here, like \"/home/html/django_templates\" or\n # \"C:/www/django/templates\". Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n)\n\nTEMPLATE_CONTEXT_PROCESSORS = (\n 'django.contrib.auth.context_processors.auth',\n 'django.core.context_processors.debug',\n 'django.core.context_processors.i18n',\n 'django.core.context_processors.media',\n 'django.core.context_processors.static',\n 'django.core.context_processors.tz',\n 'django.contrib.messages.context_processors.messages',\n 'wafer.context_processors.site_info',\n 'wafer.context_processors.navigation_info',\n 'wafer.context_processors.menu_info',\n 'wafer.context_processors.registration_settings',\n)\n\nINSTALLED_APPS = (\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.sites',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'reversion',\n 'django_medusa',\n 'crispy_forms',\n 'django_nose',\n 'markitup',\n 'rest_framework',\n 'easy_select2',\n 'wafer',\n 'wafer.kv',\n 'wafer.registration',\n 'wafer.talks',\n 'wafer.schedule',\n 'wafer.users',\n 'wafer.sponsors',\n 'wafer.pages',\n 'wafer.tickets',\n 'wafer.compare',\n # Django isn't finding the overridden templates\n 'registration',\n 'django.contrib.admin',\n)\n\nTEST_RUNNER = 'django_nose.NoseTestSuiteRunner'\n\n# A sample logging configuration. 
The only tangible logging\n# performed by this configuration is to send an email to\n# the site admins on every HTTP 500 error when DEBUG=False.\n# See http://docs.djangoproject.com/en/dev/topics/logging for\n# more details on how to customize your logging configuration.\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'filters': {\n 'require_debug_false': {\n '()': 'django.utils.log.RequireDebugFalse'\n }\n },\n 'handlers': {\n 'mail_admins': {\n 'level': 'ERROR',\n 'filters': ['require_debug_false'],\n 'class': 'django.utils.log.AdminEmailHandler'\n }\n },\n 'loggers': {\n 'django.request': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': True,\n },\n }\n}\n\n# Django registration:\nACCOUNT_ACTIVATION_DAYS = 7\n\nAUTH_USER_MODEL = 'auth.User'\n\n# Forms:\nCRISPY_TEMPLATE_PACK = 'bootstrap3'\n\n# Wafer cache settings\n# We assume that the WAFER_CACHE is cross-process\nWAFER_CACHE = 'wafer_cache'\nCACHES = {\n 'default': {\n 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',\n },\n WAFER_CACHE: {\n 'BACKEND': 'django.core.cache.backends.db.DatabaseCache',\n 'LOCATION': 'wafer_cache_table',\n },\n}\n\n\n# Wafer menu settings\n\nWAFER_MENUS = ()\n# Example menus entries:\n#\n# {\"label\": _(\"Home\"),\n# \"url\": '/'},\n# {\"menu\": \"sponsors\",\n# \"label\": _(\"Sponsors\"),\n# \"items\": [\n# {\"name\": \"sponsors\", \"label\": _(\"Our sponsors\"),\n# \"url\": reverse_lazy(\"wafer_sponsors\")},\n# {\"name\": \"packages\", \"label\": _(\"Sponsorship packages\"),\n# \"url\": reverse_lazy(\"wafer_sponsorship_packages\")},\n# ]},\n# {\"label\": _(\"Talks\"),\n# \"url\": reverse_lazy(\"wafer_users_talks\")},\n\nWAFER_DYNAMIC_MENUS = (\n 'wafer.pages.models.page_menus',\n)\n\n# Enabled SSO mechanims:\nWAFER_SSO = (\n # 'github',\n # 'debian',\n)\n\n# Log in with GitHub:\n# WAFER_GITHUB_CLIENT_ID = 'register on github'\n# WAFER_GITHUB_CLIENT_SECRET = 'to get these secrets'\n\n# Log in with Debian SSO:\n# Requires some Apache config:\n# SSLCACertificateFile /srv/sso.debian.org/etc/debsso.crt\n# SSLCARevocationCheck chain\n# SSLCARevocationFile /srv/sso.debian.org/etc/debsso.crl\n# <Location /accounts/debian-login/>\n# SSLOptions +StdEnvVars\n# SSLVerifyClient optional\n# </Location>\n# WAFER_DEBIAN_NM_API_KEY = 'obtain one from https://nm.debian.org/apikeys/'\n\n# Set this to true to disable the login button on the navigation toolbar\nWAFER_HIDE_LOGIN = False\n\n# Talk submissions open\n# Set this to False to disable talk submissions\nWAFER_TALKS_OPEN = True\n\n# The form used for talk submission\nWAFER_TALK_FORM = 'wafer.talks.forms.TalkForm'\n\n# Set this to False to disable registration\nWAFER_REGISTRATION_OPEN = True\n# Can be 'ticket' for Quicket tickets or 'form' for a classic form\nWAFER_REGISTRATION_MODE = 'ticket'\n\n# For REGISTRATION_MODE == 'form', the form to present\nWAFER_REGISTRATION_FORM = 'wafer.users.forms.ExampleRegistrationForm'\n\n# Allow registered and anonymous users to see registered users\nWAFER_PUBLIC_ATTENDEE_LIST = True\n\n# Ticket registration with Quicket\n# WAFER_TICKET_SECRET = \"i'm a shared secret\"\n\n# django_medusa -- disk-based renderer\nMEDUSA_RENDERER_CLASS = \"wafer.management.static.WaferDiskStaticSiteRenderer\"\nMEDUSA_DEPLOY_DIR = os.path.join(project_root, 'static_mirror')\nMARKITUP_FILTER = ('markdown.markdown', {'safe_mode': True})\n\nREST_FRAMEWORK = {\n 'DEFAULT_PERMISSION_CLASSES': ('rest_framework.permissions.IsAdminUser',),\n 'PAGE_SIZE': 50\n}\n", "path": "wafer/settings.py"}]} | 3,656 | 174 |
gh_patches_debug_33924 | rasdani/github-patches | git_diff | PrefectHQ__prefect-710 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Context docs are broken
For some reason the actual `context` class signature is not being documented.
</issue>
<code>
[start of src/prefect/utilities/context.py]
1 # Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/beta-eula
2
3 """
4 This module implements the Prefect context that is available when tasks run.
5
6 Tasks can import prefect.context and access attributes that will be overwritten
7 when the task is run.
8
9 Example:
10
11 ```python
12 import prefect.context
13 with prefect.context(a=1, b=2):
14 print(prefect.context.a) # 1
15 print (prefect.context.a) # undefined
16 ```
17
18 Prefect provides various key / value pairs in context that are always available during task runs:
19
20 | Variable | Description |
21 | :--- | --- |
22 | `scheduled_start_time` | an actual datetime object representing the scheduled start time for the Flow run; falls back to `now` for unscheduled runs |
23 | `date` | an actual datetime object representing the current time |
24 | `today` | the current date formatted as `YYYY-MM-DD`|
25 | `today_nodash` | the current date formatted as `YYYYMMDD`|
26 | `yesterday` | yesterday's date formatted as `YYYY-MM-DD`|
27 | `yesterday_nodash` | yesterday's date formatted as `YYYYMMDD`|
28 | `tomorrow` | tomorrow's date formatted as `YYYY-MM-DD`|
29 | `tomorrow_nodash` | tomorrow's date formatted as `YYYYMMDD`|
30 | `task_name` | the name of the current task |
31 """
32
33 import contextlib
34 import threading
35 from typing import Any, Iterator, MutableMapping
36
37 from prefect.configuration import config
38 from prefect.utilities.collections import DotDict
39
40
41 class Context(DotDict, threading.local):
42 """
43 A thread safe context store for Prefect data.
44
45 The `Context` is a `DotDict` subclass, and can be instantiated the same way.
46
47 Args:
48 - *args (Any): arguments to provide to the `DotDict` constructor (e.g.,
49 an initial dictionary)
50 - *kwargs (Any): any key / value pairs to initialize this context with
51 """
52
53 def __init__(self, *args, **kwargs) -> None:
54 super().__init__(*args, **kwargs)
55 if "context" in config:
56 self.update(config.context)
57
58 def __repr__(self) -> str:
59 return "<Context>"
60
61 @contextlib.contextmanager
62 def __call__(self, *args: MutableMapping, **kwargs: Any) -> Iterator["Context"]:
63 """
64 A context manager for setting / resetting the Prefect context
65
66 Example:
67 import prefect.context
68 with prefect.context(dict(a=1, b=2), c=3):
69 print(prefect.context.a) # 1
70 """
71 previous_context = self.copy()
72 try:
73 self.update(*args, **kwargs)
74 yield self
75 finally:
76 self.clear()
77 self.update(previous_context)
78
79
80 context = Context()
81
[end of src/prefect/utilities/context.py]
[start of src/prefect/tasks/templates/jinja2.py]
1 # Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/beta-eula
2
3 from typing import Any
4
5 from jinja2 import Template
6
7 import prefect
8 from prefect import Task
9
10
11 class JinjaTemplateTask(Task):
12 """
13 This task contains a Jinja template which is formatted with the results of any
14 upstream tasks and returned.
15
16 Variables from `prefect.context` will also be used for rendering.
17
18 Args:
19 - template (str, optional): the optional _default_ template string to render at runtime;
20 can also be provided as a keyword to `run`, which takes precendence over this default.
21 - **kwargs (optional): additional keyword arguments to pass to the
22 standard Task constructor
23 """
24
25 def __init__(self, template: str = None, **kwargs: Any):
26 self.template = Template(template or "")
27 super().__init__(**kwargs)
28
29 def run(self, template: str = None, **format_kwargs: Any) -> str: # type: ignore
30 """
31 Formats the Jinja Template with the provided kwargs.
32
33 Args:
34 - template (str, optional): the template string to render; if not
35 provided, `self.template` will be used
36 - **format_kwargs (optional): keyword arguments to use for
37 rendering; note that variables from `prefect.context` will also be used
38
39 Returns:
40 - str: the rendered string
41 """
42 template = self.template if template is None else Template(template)
43 with prefect.context(**format_kwargs) as data:
44 return template.render(**data)
45
[end of src/prefect/tasks/templates/jinja2.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/prefect/tasks/templates/jinja2.py b/src/prefect/tasks/templates/jinja2.py
--- a/src/prefect/tasks/templates/jinja2.py
+++ b/src/prefect/tasks/templates/jinja2.py
@@ -6,6 +6,7 @@
import prefect
from prefect import Task
+from prefect.utilities.tasks import defaults_from_attrs
class JinjaTemplateTask(Task):
@@ -23,9 +24,10 @@
"""
def __init__(self, template: str = None, **kwargs: Any):
- self.template = Template(template or "")
+ self.template = template or ""
super().__init__(**kwargs)
+ @defaults_from_attrs("template")
def run(self, template: str = None, **format_kwargs: Any) -> str: # type: ignore
"""
Formats the Jinja Template with the provided kwargs.
@@ -39,6 +41,6 @@
Returns:
- str: the rendered string
"""
- template = self.template if template is None else Template(template)
+ template = Template(template)
with prefect.context(**format_kwargs) as data:
return template.render(**data)
diff --git a/src/prefect/utilities/context.py b/src/prefect/utilities/context.py
--- a/src/prefect/utilities/context.py
+++ b/src/prefect/utilities/context.py
@@ -10,9 +10,11 @@
```python
import prefect.context
+
with prefect.context(a=1, b=2):
print(prefect.context.a) # 1
-print (prefect.context.a) # undefined
+
+print(prefect.context.a) # undefined
```
Prefect provides various key / value pairs in context that are always available during task runs:
@@ -28,6 +30,8 @@
| `tomorrow` | tomorrow's date formatted as `YYYY-MM-DD`|
| `tomorrow_nodash` | tomorrow's date formatted as `YYYYMMDD`|
| `task_name` | the name of the current task |
+
+Users can also provide values to context at runtime.
"""
import contextlib
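The `defaults_from_attrs("template")` decorator in the first hunk makes the instance attribute the fallback whenever `run` is called without a `template` argument, which is what lets a template fixed at construction time be rendered later with runtime keywords. A hedged sketch of the intended usage, assuming the 0.x functional API this repository uses (the flow and task names here are made up):

```python
from prefect import Flow
from prefect.tasks.templates.jinja2 import JinjaTemplateTask

greet = JinjaTemplateTask(template="Hello, {{ name }}!")  # default template stored on the task

with Flow("templating-example") as flow:
    message = greet(name="Marvin")  # no template kwarg: the decorator falls back to the default

state = flow.run()  # `run` pushes name="Marvin" into prefect.context and renders "Hello, Marvin!"
```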
| {"golden_diff": "diff --git a/src/prefect/tasks/templates/jinja2.py b/src/prefect/tasks/templates/jinja2.py\n--- a/src/prefect/tasks/templates/jinja2.py\n+++ b/src/prefect/tasks/templates/jinja2.py\n@@ -6,6 +6,7 @@\n \n import prefect\n from prefect import Task\n+from prefect.utilities.tasks import defaults_from_attrs\n \n \n class JinjaTemplateTask(Task):\n@@ -23,9 +24,10 @@\n \"\"\"\n \n def __init__(self, template: str = None, **kwargs: Any):\n- self.template = Template(template or \"\")\n+ self.template = template or \"\"\n super().__init__(**kwargs)\n \n+ @defaults_from_attrs(\"template\")\n def run(self, template: str = None, **format_kwargs: Any) -> str: # type: ignore\n \"\"\"\n Formats the Jinja Template with the provided kwargs.\n@@ -39,6 +41,6 @@\n Returns:\n - str: the rendered string\n \"\"\"\n- template = self.template if template is None else Template(template)\n+ template = Template(template)\n with prefect.context(**format_kwargs) as data:\n return template.render(**data)\ndiff --git a/src/prefect/utilities/context.py b/src/prefect/utilities/context.py\n--- a/src/prefect/utilities/context.py\n+++ b/src/prefect/utilities/context.py\n@@ -10,9 +10,11 @@\n \n ```python\n import prefect.context\n+\n with prefect.context(a=1, b=2):\n print(prefect.context.a) # 1\n-print (prefect.context.a) # undefined\n+\n+print(prefect.context.a) # undefined\n ```\n \n Prefect provides various key / value pairs in context that are always available during task runs:\n@@ -28,6 +30,8 @@\n | `tomorrow` | tomorrow's date formatted as `YYYY-MM-DD`|\n | `tomorrow_nodash` | tomorrow's date formatted as `YYYYMMDD`|\n | `task_name` | the name of the current task |\n+\n+Users can also provide values to context at runtime.\n \"\"\"\n \n import contextlib\n", "issue": "Context docs are broken\nFor some reason the actual `context` class signature is not being documented.\n", "before_files": [{"content": "# Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/beta-eula\n\n\"\"\"\nThis module implements the Prefect context that is available when tasks run.\n\nTasks can import prefect.context and access attributes that will be overwritten\nwhen the task is run.\n\nExample:\n\n```python\nimport prefect.context\nwith prefect.context(a=1, b=2):\n print(prefect.context.a) # 1\nprint (prefect.context.a) # undefined\n```\n\nPrefect provides various key / value pairs in context that are always available during task runs:\n\n| Variable | Description |\n| :--- | --- |\n| `scheduled_start_time` | an actual datetime object representing the scheduled start time for the Flow run; falls back to `now` for unscheduled runs |\n| `date` | an actual datetime object representing the current time |\n| `today` | the current date formatted as `YYYY-MM-DD`|\n| `today_nodash` | the current date formatted as `YYYYMMDD`|\n| `yesterday` | yesterday's date formatted as `YYYY-MM-DD`|\n| `yesterday_nodash` | yesterday's date formatted as `YYYYMMDD`|\n| `tomorrow` | tomorrow's date formatted as `YYYY-MM-DD`|\n| `tomorrow_nodash` | tomorrow's date formatted as `YYYYMMDD`|\n| `task_name` | the name of the current task |\n\"\"\"\n\nimport contextlib\nimport threading\nfrom typing import Any, Iterator, MutableMapping\n\nfrom prefect.configuration import config\nfrom prefect.utilities.collections import DotDict\n\n\nclass Context(DotDict, threading.local):\n \"\"\"\n A thread safe context store for Prefect data.\n\n The `Context` is a `DotDict` subclass, and can be instantiated the same way.\n\n Args:\n - *args (Any): 
arguments to provide to the `DotDict` constructor (e.g.,\n an initial dictionary)\n - *kwargs (Any): any key / value pairs to initialize this context with\n \"\"\"\n\n def __init__(self, *args, **kwargs) -> None:\n super().__init__(*args, **kwargs)\n if \"context\" in config:\n self.update(config.context)\n\n def __repr__(self) -> str:\n return \"<Context>\"\n\n @contextlib.contextmanager\n def __call__(self, *args: MutableMapping, **kwargs: Any) -> Iterator[\"Context\"]:\n \"\"\"\n A context manager for setting / resetting the Prefect context\n\n Example:\n import prefect.context\n with prefect.context(dict(a=1, b=2), c=3):\n print(prefect.context.a) # 1\n \"\"\"\n previous_context = self.copy()\n try:\n self.update(*args, **kwargs)\n yield self\n finally:\n self.clear()\n self.update(previous_context)\n\n\ncontext = Context()\n", "path": "src/prefect/utilities/context.py"}, {"content": "# Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/beta-eula\n\nfrom typing import Any\n\nfrom jinja2 import Template\n\nimport prefect\nfrom prefect import Task\n\n\nclass JinjaTemplateTask(Task):\n \"\"\"\n This task contains a Jinja template which is formatted with the results of any\n upstream tasks and returned.\n\n Variables from `prefect.context` will also be used for rendering.\n\n Args:\n - template (str, optional): the optional _default_ template string to render at runtime;\n can also be provided as a keyword to `run`, which takes precendence over this default.\n - **kwargs (optional): additional keyword arguments to pass to the\n standard Task constructor\n \"\"\"\n\n def __init__(self, template: str = None, **kwargs: Any):\n self.template = Template(template or \"\")\n super().__init__(**kwargs)\n\n def run(self, template: str = None, **format_kwargs: Any) -> str: # type: ignore\n \"\"\"\n Formats the Jinja Template with the provided kwargs.\n\n Args:\n - template (str, optional): the template string to render; if not\n provided, `self.template` will be used\n - **format_kwargs (optional): keyword arguments to use for\n rendering; note that variables from `prefect.context` will also be used\n\n Returns:\n - str: the rendered string\n \"\"\"\n template = self.template if template is None else Template(template)\n with prefect.context(**format_kwargs) as data:\n return template.render(**data)\n", "path": "src/prefect/tasks/templates/jinja2.py"}]} | 1,777 | 469 |
gh_patches_debug_10742 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-1238 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
United States NY ISO calculation on Dual Fuel is probably wrong
Just found your site today, really cool. I'd actually done a tech demo with streaming data from the nyiso as part of an MQTT / Kubernetes talk. Demo site at http://ny-power.org.
The Dual Fuel column is getting mapped to pretty dirty fossil fuel systems, which I think is probably inaccurate. My understanding of the dual fuel plants is they mostly burn Natural Gas, but can burn Oil instead when Natural Gas is constrained and targeted for heating (typically during peak heating demand in the winter). I looked up the total Oil burned for electricity generation in 2016 (the last year that numbers are available), and it was actually really low. So when I was simulating it I gave it a kind of 30% Oil / 70% NG number as an approximation. It's probably as good a guess as one can get. But the worse than coal numbers are I think pretty far off.
Is there a way in the system to provide a custom value, or is it just mapping back to the IPCC averaged numbers?
</issue>
<code>
[start of parsers/US_NY.py]
1 #!/usr/bin/env python3
2
3 """Real time parser for the state of New York."""
4 from collections import defaultdict
5 from datetime import timedelta
6 from operator import itemgetter
7 from urllib.error import HTTPError
8
9 import arrow
10 import pandas as pd
11
12 mapping = {
13 'Dual Fuel': 'unknown',
14 'Natural Gas': 'gas',
15 'Nuclear': 'nuclear',
16 'Other Fossil Fuels': 'unknown',
17 'Other Renewables': 'unknown',
18 'Wind': 'wind',
19 'Hydro': 'hydro'
20 }
21
22
23 def read_csv_data(url):
24 """
25 Gets csv data from a url and returns a dataframe.
26 """
27
28 csv_data = pd.read_csv(url)
29
30 return csv_data
31
32
33 def timestamp_converter(timestamp_string):
34 """
35 Converts timestamps in nyiso data into aware datetime objects.
36 """
37
38 dt_naive = arrow.get(timestamp_string, 'MM/DD/YYYY HH:mm:ss')
39 dt_aware = dt_naive.replace(tzinfo='America/New_York').datetime
40
41 return dt_aware
42
43
44 def data_parser(df):
45 """
46 Takes dataframe and loops over rows to form dictionaries consisting of datetime and generation type.
47 Merges these dictionaries using datetime key.
48 Maps to type and returns a list of tuples containing datetime string and production.
49 """
50
51 chunks = []
52 for row in df.itertuples():
53 piece = {}
54 piece['datetime'] = row[1]
55 piece[row[3]] = row[4]
56 chunks.append(piece)
57
58 # Join dicts on shared 'datetime' keys.
59 combine = defaultdict(dict)
60 for elem in chunks:
61 combine[elem['datetime']].update(elem)
62
63 ordered = sorted(combine.values(), key=itemgetter("datetime"))
64
65 mapped_generation = []
66 for item in ordered:
67 mapped_types = [(mapping.get(k, k), v) for k, v in item.items()]
68
69 # Need to avoid multiple 'unknown' keys overwriting.
70 complete_production = defaultdict(lambda: 0.0)
71 for key, val in mapped_types:
72 try:
73 complete_production[key] += val
74 except TypeError:
75 # Datetime is a string at this point!
76 complete_production[key] = val
77
78 dt = complete_production.pop('datetime')
79 final = (dt, dict(complete_production))
80 mapped_generation.append(final)
81
82 return mapped_generation
83
84
85 def fetch_production(zone_key='US-NY', session=None, target_datetime=None, logger=None):
86 """
87 Requests the last known production mix (in MW) of a given zone
88
89 Arguments:
90 zone_key: used in case a parser is able to fetch multiple zones
91 session: requests session passed in order to re-use an existing session,
92 not used here due to difficulty providing it to pandas
93 target_datetime: the datetime for which we want production data. If not provided, we should
94 default it to now. The provided target_datetime is timezone-aware in UTC.
95 logger: an instance of a `logging.Logger`; all raised exceptions are also logged automatically
96
97 Return:
98 A list of dictionaries in the form:
99 {
100 'zoneKey': 'FR',
101 'datetime': '2017-01-01T00:00:00Z',
102 'production': {
103 'biomass': 0.0,
104 'coal': 0.0,
105 'gas': 0.0,
106 'hydro': 0.0,
107 'nuclear': null,
108 'oil': 0.0,
109 'solar': 0.0,
110 'wind': 0.0,
111 'geothermal': 0.0,
112 'unknown': 0.0
113 },
114 'storage': {
115 'hydro': -10.0,
116 },
117 'source': 'mysource.com'
118 }
119 """
120 if target_datetime:
121 # ensure we have an arrow object
122 target_datetime = arrow.get(target_datetime)
123 else:
124 target_datetime = arrow.now('America/New_York')
125
126 ny_date = target_datetime.format('YYYYMMDD')
127 mix_url = 'http://mis.nyiso.com/public/csv/rtfuelmix/{}rtfuelmix.csv'.format(ny_date)
128 try:
129 raw_data = read_csv_data(mix_url)
130 except HTTPError:
131 # this can happen when target_datetime has no data available
132 return None
133
134 clean_data = data_parser(raw_data)
135
136 production_mix = []
137 for datapoint in clean_data:
138 data = {
139 'zoneKey': zone_key,
140 'datetime': timestamp_converter(datapoint[0]),
141 'production': datapoint[1],
142 'storage': {'hydro': None},
143 'source': 'nyiso.com'
144 }
145
146 production_mix.append(data)
147
148 return production_mix
149
150
151 def fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None, logger=None):
152 """Requests the last known power exchange (in MW) between two zones
153
154 Arguments:
155 zone_key1, zone_key2: specifies which exchange to get
156 session: requests session passed in order to re-use an existing session,
157 not used here due to difficulty providing it to pandas
158 target_datetime: the datetime for which we want production data. If not provided, we should
159 default it to now. The provided target_datetime is timezone-aware in UTC.
160 logger: an instance of a `logging.Logger`; all raised exceptions are also logged automatically
161
162 Return:
163 A list of dictionaries in the form:
164 {
165 'sortedZoneKeys': 'DK->NO',
166 'datetime': '2017-01-01T00:00:00Z',
167 'netFlow': 0.0,
168 'source': 'mysource.com'
169 }
170 where net flow is from DK into NO
171 """
172 url = 'http://mis.nyiso.com/public/csv/ExternalLimitsFlows/{}ExternalLimitsFlows.csv'
173
174 sorted_zone_keys = '->'.join(sorted([zone_key1, zone_key2]))
175
176 # In the source CSV, positive is flow into NY, negative is flow out of NY.
177 # In Electricity Map, A->B means flow to B is positive.
178 if sorted_zone_keys == 'US-NEISO->US-NY':
179 direction = 1
180 relevant_exchanges = ['SCH - NE - NY', 'SCH - NPX_1385', 'SCH - NPX_CSC']
181 elif sorted_zone_keys == 'US-NY->US-PJM':
182 direction = -1
183 relevant_exchanges = ['SCH - PJ - NY', 'SCH - PJM_HTP', 'SCH - PJM_NEPTUNE', 'SCH - PJM_VFT']
184 elif sorted_zone_keys == 'CA-ON->US-NY':
185 direction = 1
186 relevant_exchanges = ['SCH - OH - NY']
187 elif sorted_zone_keys == 'CA-QC->US-NY':
188 direction = 1
189 relevant_exchanges = ['SCH - HQ_CEDARS', 'SCH - HQ - NY']
190 else:
191 raise NotImplementedError('Exchange pair not supported: {}'.format(sorted_zone_keys))
192
193 if target_datetime:
194 # ensure we have an arrow object
195 target_datetime = arrow.get(target_datetime)
196 else:
197 target_datetime = arrow.now('America/New_York')
198 ny_date = target_datetime.format('YYYYMMDD')
199 exchange_url = url.format(ny_date)
200
201 try:
202 exchange_data = read_csv_data(exchange_url)
203 except HTTPError:
204 # this can happen when target_datetime has no data available
205 return None
206
207 new_england_exs = exchange_data.loc[exchange_data['Interface Name'].isin(relevant_exchanges)]
208 consolidated_flows = new_england_exs.reset_index().groupby("Timestamp").sum()
209
210 now = arrow.utcnow()
211
212 exchange_5min = []
213 for row in consolidated_flows.itertuples():
214 flow = float(row[3]) * direction
215 # Timestamp for exchange does not include seconds.
216 dt = timestamp_converter(row[0] + ':00')
217
218 if (dt > now) and ((dt - now) < timedelta(seconds=300)):
219 # NYISO exchanges CSV (and only the exchanges CSV) includes data
220 # up to 5 minutes in the future (but only 5 minutes in the future).
221 # This also happens on their official website.
222 # Electricity Map raises error with data in the future, so skip
223 # that datapoint. If it's more than 5 minutes in the future,
224 # it's weird/unexpected and thus worthy of failure and logging.
225 continue
226
227 exchange = {
228 'sortedZoneKeys': sorted_zone_keys,
229 'datetime': dt,
230 'netFlow': flow,
231 'source': 'nyiso.com'
232 }
233
234 exchange_5min.append(exchange)
235
236 return exchange_5min
237
238
239 if __name__ == '__main__':
240 """Main method, never used by the Electricity Map backend, but handy for testing."""
241
242 from pprint import pprint
243 print('fetch_production() ->')
244 pprint(fetch_production())
245
246 print('fetch_production(target_datetime=arrow.get("2018-03-13T12:00Z") ->')
247 pprint(fetch_production(target_datetime=arrow.get("2018-03-13T12:00Z")))
248
249 print('fetch_production(target_datetime=arrow.get("2007-03-13T12:00Z") ->')
250 pprint(fetch_production(target_datetime=arrow.get("2007-03-13T12:00Z")))
251
252 print('fetch_exchange(US-NY, US-NEISO)')
253 pprint(fetch_exchange('US-NY', 'US-NEISO'))
254
255 print('fetch_exchange("US-NY", "CA-QC")')
256 pprint(fetch_exchange('US-NY', 'CA-QC'))
257
258 print('fetch_exchange("US-NY", "CA-QC", target_datetime=arrow.get("2018-03-13T12:00Z"))')
259 pprint(fetch_exchange('US-NY', 'CA-QC', target_datetime=arrow.get("2018-03-13T12:00Z")))
260
261 print('fetch_exchange("US-NY", "CA-QC", target_datetime=arrow.get("2007-03-13T12:00Z")))')
262 pprint(fetch_exchange('US-NY', 'CA-QC', target_datetime=arrow.get('2007-03-13T12:00Z')))
263
[end of parsers/US_NY.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/parsers/US_NY.py b/parsers/US_NY.py
--- a/parsers/US_NY.py
+++ b/parsers/US_NY.py
@@ -9,8 +9,15 @@
import arrow
import pandas as pd
+# Dual Fuel systems can run either Natural Gas or Oil, they represent
+# significantly more capacity in NY State than plants that can only
+# burn Natural Gas. When looking up fuel usage for NY in 2016 in
+# https://www.eia.gov/electricity/data/state/annual_generation_state.xls
+# 100 times more energy came from NG than Oil. That means Oil
+# consumption in the Dual Fuel systems is roughly ~1%, and to a first
+# approximation it's just Natural Gas.
mapping = {
- 'Dual Fuel': 'unknown',
+ 'Dual Fuel': 'gas',
'Natural Gas': 'gas',
'Nuclear': 'nuclear',
'Other Fossil Fuels': 'unknown',
| {"golden_diff": "diff --git a/parsers/US_NY.py b/parsers/US_NY.py\n--- a/parsers/US_NY.py\n+++ b/parsers/US_NY.py\n@@ -9,8 +9,15 @@\n import arrow\n import pandas as pd\n \n+# Dual Fuel systems can run either Natural Gas or Oil, they represent\n+# significantly more capacity in NY State than plants that can only\n+# burn Natural Gas. When looking up fuel usage for NY in 2016 in\n+# https://www.eia.gov/electricity/data/state/annual_generation_state.xls\n+# 100 times more energy came from NG than Oil. That means Oil\n+# consumption in the Dual Fuel systems is roughly ~1%, and to a first\n+# approximation it's just Natural Gas.\n mapping = {\n- 'Dual Fuel': 'unknown',\n+ 'Dual Fuel': 'gas',\n 'Natural Gas': 'gas',\n 'Nuclear': 'nuclear',\n 'Other Fossil Fuels': 'unknown',\n", "issue": "United States NY ISO calculation on Dual Fuel is probably wrong\nJust found your site today, really cool. I'd actually done a tech demo with streaming data from the nyiso as part of an MQTT / Kubernetes talk. Demo site at http://ny-power.org.\r\n\r\nThe Dual Fuel column is getting mapped to pretty dirty fossil fuel systems, which I think is probably inaccurate. My understanding of the dual fuel plants is they mostly burn Natural Gas, but can burn Oil instead when Natural Gas is constrained and targeted for heating (typically during peak heating demand in the winter). I looked up the total Oil burned for electricity generation in 2016 (the last year that numbers are available), and it was actually really low. So when I was simulating it I gave it a kind of 30% Oil / 70% NG number as an approximation. It's probably as good a guess as one can get. But the worse than coal numbers are I think pretty far off.\r\n\r\nIs there a way in the system to provide a custom value, or is it just mapping back to the IPCC averaged numbers? 
\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n\"\"\"Real time parser for the state of New York.\"\"\"\nfrom collections import defaultdict\nfrom datetime import timedelta\nfrom operator import itemgetter\nfrom urllib.error import HTTPError\n\nimport arrow\nimport pandas as pd\n\nmapping = {\n 'Dual Fuel': 'unknown',\n 'Natural Gas': 'gas',\n 'Nuclear': 'nuclear',\n 'Other Fossil Fuels': 'unknown',\n 'Other Renewables': 'unknown',\n 'Wind': 'wind',\n 'Hydro': 'hydro'\n}\n\n\ndef read_csv_data(url):\n \"\"\"\n Gets csv data from a url and returns a dataframe.\n \"\"\"\n\n csv_data = pd.read_csv(url)\n\n return csv_data\n\n\ndef timestamp_converter(timestamp_string):\n \"\"\"\n Converts timestamps in nyiso data into aware datetime objects.\n \"\"\"\n\n dt_naive = arrow.get(timestamp_string, 'MM/DD/YYYY HH:mm:ss')\n dt_aware = dt_naive.replace(tzinfo='America/New_York').datetime\n\n return dt_aware\n\n\ndef data_parser(df):\n \"\"\"\n Takes dataframe and loops over rows to form dictionaries consisting of datetime and generation type.\n Merges these dictionaries using datetime key.\n Maps to type and returns a list of tuples containing datetime string and production.\n \"\"\"\n\n chunks = []\n for row in df.itertuples():\n piece = {}\n piece['datetime'] = row[1]\n piece[row[3]] = row[4]\n chunks.append(piece)\n\n # Join dicts on shared 'datetime' keys.\n combine = defaultdict(dict)\n for elem in chunks:\n combine[elem['datetime']].update(elem)\n\n ordered = sorted(combine.values(), key=itemgetter(\"datetime\"))\n\n mapped_generation = []\n for item in ordered:\n mapped_types = [(mapping.get(k, k), v) for k, v in item.items()]\n\n # Need to avoid multiple 'unknown' keys overwriting.\n complete_production = defaultdict(lambda: 0.0)\n for key, val in mapped_types:\n try:\n complete_production[key] += val\n except TypeError:\n # Datetime is a string at this point!\n complete_production[key] = val\n\n dt = complete_production.pop('datetime')\n final = (dt, dict(complete_production))\n mapped_generation.append(final)\n\n return mapped_generation\n\n\ndef fetch_production(zone_key='US-NY', session=None, target_datetime=None, logger=None):\n \"\"\"\n Requests the last known production mix (in MW) of a given zone\n\n Arguments:\n zone_key: used in case a parser is able to fetch multiple zones\n session: requests session passed in order to re-use an existing session,\n not used here due to difficulty providing it to pandas\n target_datetime: the datetime for which we want production data. If not provided, we should\n default it to now. 
The provided target_datetime is timezone-aware in UTC.\n logger: an instance of a `logging.Logger`; all raised exceptions are also logged automatically\n\n Return:\n A list of dictionaries in the form:\n {\n 'zoneKey': 'FR',\n 'datetime': '2017-01-01T00:00:00Z',\n 'production': {\n 'biomass': 0.0,\n 'coal': 0.0,\n 'gas': 0.0,\n 'hydro': 0.0,\n 'nuclear': null,\n 'oil': 0.0,\n 'solar': 0.0,\n 'wind': 0.0,\n 'geothermal': 0.0,\n 'unknown': 0.0\n },\n 'storage': {\n 'hydro': -10.0,\n },\n 'source': 'mysource.com'\n }\n \"\"\"\n if target_datetime:\n # ensure we have an arrow object\n target_datetime = arrow.get(target_datetime)\n else:\n target_datetime = arrow.now('America/New_York')\n\n ny_date = target_datetime.format('YYYYMMDD')\n mix_url = 'http://mis.nyiso.com/public/csv/rtfuelmix/{}rtfuelmix.csv'.format(ny_date)\n try:\n raw_data = read_csv_data(mix_url)\n except HTTPError:\n # this can happen when target_datetime has no data available\n return None\n\n clean_data = data_parser(raw_data)\n\n production_mix = []\n for datapoint in clean_data:\n data = {\n 'zoneKey': zone_key,\n 'datetime': timestamp_converter(datapoint[0]),\n 'production': datapoint[1],\n 'storage': {'hydro': None},\n 'source': 'nyiso.com'\n }\n\n production_mix.append(data)\n\n return production_mix\n\n\ndef fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None, logger=None):\n \"\"\"Requests the last known power exchange (in MW) between two zones\n\n Arguments:\n zone_key1, zone_key2: specifies which exchange to get\n session: requests session passed in order to re-use an existing session,\n not used here due to difficulty providing it to pandas\n target_datetime: the datetime for which we want production data. If not provided, we should\n default it to now. The provided target_datetime is timezone-aware in UTC.\n logger: an instance of a `logging.Logger`; all raised exceptions are also logged automatically\n\n Return:\n A list of dictionaries in the form:\n {\n 'sortedZoneKeys': 'DK->NO',\n 'datetime': '2017-01-01T00:00:00Z',\n 'netFlow': 0.0,\n 'source': 'mysource.com'\n }\n where net flow is from DK into NO\n \"\"\"\n url = 'http://mis.nyiso.com/public/csv/ExternalLimitsFlows/{}ExternalLimitsFlows.csv'\n\n sorted_zone_keys = '->'.join(sorted([zone_key1, zone_key2]))\n\n # In the source CSV, positive is flow into NY, negative is flow out of NY.\n # In Electricity Map, A->B means flow to B is positive.\n if sorted_zone_keys == 'US-NEISO->US-NY':\n direction = 1\n relevant_exchanges = ['SCH - NE - NY', 'SCH - NPX_1385', 'SCH - NPX_CSC']\n elif sorted_zone_keys == 'US-NY->US-PJM':\n direction = -1\n relevant_exchanges = ['SCH - PJ - NY', 'SCH - PJM_HTP', 'SCH - PJM_NEPTUNE', 'SCH - PJM_VFT']\n elif sorted_zone_keys == 'CA-ON->US-NY':\n direction = 1\n relevant_exchanges = ['SCH - OH - NY']\n elif sorted_zone_keys == 'CA-QC->US-NY':\n direction = 1\n relevant_exchanges = ['SCH - HQ_CEDARS', 'SCH - HQ - NY']\n else:\n raise NotImplementedError('Exchange pair not supported: {}'.format(sorted_zone_keys))\n\n if target_datetime:\n # ensure we have an arrow object\n target_datetime = arrow.get(target_datetime)\n else:\n target_datetime = arrow.now('America/New_York')\n ny_date = target_datetime.format('YYYYMMDD')\n exchange_url = url.format(ny_date)\n\n try:\n exchange_data = read_csv_data(exchange_url)\n except HTTPError:\n # this can happen when target_datetime has no data available\n return None\n\n new_england_exs = exchange_data.loc[exchange_data['Interface Name'].isin(relevant_exchanges)]\n 
consolidated_flows = new_england_exs.reset_index().groupby(\"Timestamp\").sum()\n\n now = arrow.utcnow()\n\n exchange_5min = []\n for row in consolidated_flows.itertuples():\n flow = float(row[3]) * direction\n # Timestamp for exchange does not include seconds.\n dt = timestamp_converter(row[0] + ':00')\n\n if (dt > now) and ((dt - now) < timedelta(seconds=300)):\n # NYISO exchanges CSV (and only the exchanges CSV) includes data\n # up to 5 minutes in the future (but only 5 minutes in the future).\n # This also happens on their official website.\n # Electricity Map raises error with data in the future, so skip\n # that datapoint. If it's more than 5 minutes in the future,\n # it's weird/unexpected and thus worthy of failure and logging.\n continue\n\n exchange = {\n 'sortedZoneKeys': sorted_zone_keys,\n 'datetime': dt,\n 'netFlow': flow,\n 'source': 'nyiso.com'\n }\n\n exchange_5min.append(exchange)\n\n return exchange_5min\n\n\nif __name__ == '__main__':\n \"\"\"Main method, never used by the Electricity Map backend, but handy for testing.\"\"\"\n\n from pprint import pprint\n print('fetch_production() ->')\n pprint(fetch_production())\n\n print('fetch_production(target_datetime=arrow.get(\"2018-03-13T12:00Z\") ->')\n pprint(fetch_production(target_datetime=arrow.get(\"2018-03-13T12:00Z\")))\n\n print('fetch_production(target_datetime=arrow.get(\"2007-03-13T12:00Z\") ->')\n pprint(fetch_production(target_datetime=arrow.get(\"2007-03-13T12:00Z\")))\n\n print('fetch_exchange(US-NY, US-NEISO)')\n pprint(fetch_exchange('US-NY', 'US-NEISO'))\n\n print('fetch_exchange(\"US-NY\", \"CA-QC\")')\n pprint(fetch_exchange('US-NY', 'CA-QC'))\n\n print('fetch_exchange(\"US-NY\", \"CA-QC\", target_datetime=arrow.get(\"2018-03-13T12:00Z\"))')\n pprint(fetch_exchange('US-NY', 'CA-QC', target_datetime=arrow.get(\"2018-03-13T12:00Z\")))\n\n print('fetch_exchange(\"US-NY\", \"CA-QC\", target_datetime=arrow.get(\"2007-03-13T12:00Z\")))')\n pprint(fetch_exchange('US-NY', 'CA-QC', target_datetime=arrow.get('2007-03-13T12:00Z')))\n", "path": "parsers/US_NY.py"}]} | 3,764 | 217 |
gh_patches_debug_20113 | rasdani/github-patches | git_diff | googleapis__google-auth-library-python-708 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Getting project ID using Application Default Credentials fails when gcloud command writes anything to stderr
- OS: Ubuntu 20.04
- Python version: 3.8
- pip version: 20.0.2
- `google-auth` version: 1.19.2
#### Steps to reproduce
1. Arrange for gcloud to throw a warning. For example I'm suffering from this https://github.com/GoogleCloudPlatform/gsutil/issues/999
2. Attempt to use ADC e.g. `credentials, project = google.auth.default()`
3. Note that project always comes back at None even if `gcloud config set project` is correctly set
4. Root cause seems to be that in _cloud_sdk.py/get_project_id() the subprocess.check_output command merges stderr and stdout. So in the case that stderr is not empty and the subprocess does not fail, you might get badly formed JSON on which json.loads a few lines later chokes.
For example, my raw gcloud output is like:
/snap/google-cloud-sdk/165/lib/third_party/requests/__init__.py:83: RequestsDependencyWarning: Old version of cryptography ([1, 2, 3]) may cause slowdown.\n warnings.warn(warning, RequestsDependencyWarning)\n{\n "configuration": {\n "active_configuration": "default",\n "properties": {\n "core": {\n "account": "[email protected]",\n "disable_usage_reporting": "False",\n "project": "my-test-project"\n },\n "deployment_manager": {\n "glob_imports": "True"\n }\n }\n },\n "credential": {\n "access_token".... etc etc.
Expected behaviour: non-fatal errors or warnings from gcloud should not corrupt the output and cause the project ID lookup to fail.
</issue>
<code>
[start of google/auth/_cloud_sdk.py]
1 # Copyright 2015 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Helpers for reading the Google Cloud SDK's configuration."""
16
17 import json
18 import os
19 import subprocess
20
21 import six
22
23 from google.auth import environment_vars
24 from google.auth import exceptions
25
26
27 # The ~/.config subdirectory containing gcloud credentials.
28 _CONFIG_DIRECTORY = "gcloud"
29 # Windows systems store config at %APPDATA%\gcloud
30 _WINDOWS_CONFIG_ROOT_ENV_VAR = "APPDATA"
31 # The name of the file in the Cloud SDK config that contains default
32 # credentials.
33 _CREDENTIALS_FILENAME = "application_default_credentials.json"
34 # The name of the Cloud SDK shell script
35 _CLOUD_SDK_POSIX_COMMAND = "gcloud"
36 _CLOUD_SDK_WINDOWS_COMMAND = "gcloud.cmd"
37 # The command to get the Cloud SDK configuration
38 _CLOUD_SDK_CONFIG_COMMAND = ("config", "config-helper", "--format", "json")
39 # The command to get google user access token
40 _CLOUD_SDK_USER_ACCESS_TOKEN_COMMAND = ("auth", "print-access-token")
41 # Cloud SDK's application-default client ID
42 CLOUD_SDK_CLIENT_ID = (
43 "764086051850-6qr4p6gpi6hn506pt8ejuq83di341hur.apps.googleusercontent.com"
44 )
45
46
47 def get_config_path():
48 """Returns the absolute path the the Cloud SDK's configuration directory.
49
50 Returns:
51 str: The Cloud SDK config path.
52 """
53 # If the path is explicitly set, return that.
54 try:
55 return os.environ[environment_vars.CLOUD_SDK_CONFIG_DIR]
56 except KeyError:
57 pass
58
59 # Non-windows systems store this at ~/.config/gcloud
60 if os.name != "nt":
61 return os.path.join(os.path.expanduser("~"), ".config", _CONFIG_DIRECTORY)
62 # Windows systems store config at %APPDATA%\gcloud
63 else:
64 try:
65 return os.path.join(
66 os.environ[_WINDOWS_CONFIG_ROOT_ENV_VAR], _CONFIG_DIRECTORY
67 )
68 except KeyError:
69 # This should never happen unless someone is really
70 # messing with things, but we'll cover the case anyway.
71 drive = os.environ.get("SystemDrive", "C:")
72 return os.path.join(drive, "\\", _CONFIG_DIRECTORY)
73
74
75 def get_application_default_credentials_path():
76 """Gets the path to the application default credentials file.
77
78 The path may or may not exist.
79
80 Returns:
81 str: The full path to application default credentials.
82 """
83 config_path = get_config_path()
84 return os.path.join(config_path, _CREDENTIALS_FILENAME)
85
86
87 def get_project_id():
88 """Gets the project ID from the Cloud SDK.
89
90 Returns:
91 Optional[str]: The project ID.
92 """
93 if os.name == "nt":
94 command = _CLOUD_SDK_WINDOWS_COMMAND
95 else:
96 command = _CLOUD_SDK_POSIX_COMMAND
97
98 try:
99 output = subprocess.check_output(
100 (command,) + _CLOUD_SDK_CONFIG_COMMAND, stderr=subprocess.STDOUT
101 )
102 except (subprocess.CalledProcessError, OSError, IOError):
103 return None
104
105 try:
106 configuration = json.loads(output.decode("utf-8"))
107 except ValueError:
108 return None
109
110 try:
111 return configuration["configuration"]["properties"]["core"]["project"]
112 except KeyError:
113 return None
114
115
116 def get_auth_access_token(account=None):
117 """Load user access token with the ``gcloud auth print-access-token`` command.
118
119 Args:
120 account (Optional[str]): Account to get the access token for. If not
121 specified, the current active account will be used.
122
123 Returns:
124 str: The user access token.
125
126 Raises:
127 google.auth.exceptions.UserAccessTokenError: if failed to get access
128 token from gcloud.
129 """
130 if os.name == "nt":
131 command = _CLOUD_SDK_WINDOWS_COMMAND
132 else:
133 command = _CLOUD_SDK_POSIX_COMMAND
134
135 try:
136 if account:
137 command = (
138 (command,)
139 + _CLOUD_SDK_USER_ACCESS_TOKEN_COMMAND
140 + ("--account=" + account,)
141 )
142 else:
143 command = (command,) + _CLOUD_SDK_USER_ACCESS_TOKEN_COMMAND
144
145 access_token = subprocess.check_output(command, stderr=subprocess.STDOUT)
146 # remove the trailing "\n"
147 return access_token.decode("utf-8").strip()
148 except (subprocess.CalledProcessError, OSError, IOError) as caught_exc:
149 new_exc = exceptions.UserAccessTokenError(
150 "Failed to obtain access token", caught_exc
151 )
152 six.raise_from(new_exc, caught_exc)
153
[end of google/auth/_cloud_sdk.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/google/auth/_cloud_sdk.py b/google/auth/_cloud_sdk.py
--- a/google/auth/_cloud_sdk.py
+++ b/google/auth/_cloud_sdk.py
@@ -84,6 +84,13 @@
return os.path.join(config_path, _CREDENTIALS_FILENAME)
+def _run_subprocess_ignore_stderr(command):
+ """ Return subprocess.check_output with the given command and ignores stderr."""
+ with open(os.devnull, "w") as devnull:
+ output = subprocess.check_output(command, stderr=devnull)
+ return output
+
+
def get_project_id():
"""Gets the project ID from the Cloud SDK.
@@ -96,9 +103,9 @@
command = _CLOUD_SDK_POSIX_COMMAND
try:
- output = subprocess.check_output(
- (command,) + _CLOUD_SDK_CONFIG_COMMAND, stderr=subprocess.STDOUT
- )
+ # Ignore the stderr coming from gcloud, so it won't be mixed into the output.
+ # https://github.com/googleapis/google-auth-library-python/issues/673
+ output = _run_subprocess_ignore_stderr((command,) + _CLOUD_SDK_CONFIG_COMMAND)
except (subprocess.CalledProcessError, OSError, IOError):
return None
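The core of the fix is keeping gcloud's stderr out of the stream that gets JSON-decoded. A self-contained sketch of both behaviours follows; it assumes `gcloud` is on PATH and a project is configured, and the patch opens `os.devnull` rather than using `subprocess.DEVNULL`, presumably because the library still supported Python 2 at the time:

```python
import json
import os
import subprocess

cmd = ("gcloud", "config", "config-helper", "--format", "json")

# Old behaviour: stderr=subprocess.STDOUT folds any gcloud warning into stdout,
# so json.loads() can receive "WARNING: ...{...}" and raise ValueError.
# mixed = subprocess.check_output(cmd, stderr=subprocess.STDOUT)

# Behaviour after the patch: discard stderr so stdout stays valid JSON.
with open(os.devnull, "w") as devnull:
    raw = subprocess.check_output(cmd, stderr=devnull)

config = json.loads(raw.decode("utf-8"))
print(config["configuration"]["properties"]["core"]["project"])
```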
| {"golden_diff": "diff --git a/google/auth/_cloud_sdk.py b/google/auth/_cloud_sdk.py\n--- a/google/auth/_cloud_sdk.py\n+++ b/google/auth/_cloud_sdk.py\n@@ -84,6 +84,13 @@\n return os.path.join(config_path, _CREDENTIALS_FILENAME)\n \n \n+def _run_subprocess_ignore_stderr(command):\n+ \"\"\" Return subprocess.check_output with the given command and ignores stderr.\"\"\"\n+ with open(os.devnull, \"w\") as devnull:\n+ output = subprocess.check_output(command, stderr=devnull)\n+ return output\n+\n+\n def get_project_id():\n \"\"\"Gets the project ID from the Cloud SDK.\n \n@@ -96,9 +103,9 @@\n command = _CLOUD_SDK_POSIX_COMMAND\n \n try:\n- output = subprocess.check_output(\n- (command,) + _CLOUD_SDK_CONFIG_COMMAND, stderr=subprocess.STDOUT\n- )\n+ # Ignore the stderr coming from gcloud, so it won't be mixed into the output.\n+ # https://github.com/googleapis/google-auth-library-python/issues/673\n+ output = _run_subprocess_ignore_stderr((command,) + _CLOUD_SDK_CONFIG_COMMAND)\n except (subprocess.CalledProcessError, OSError, IOError):\n return None\n", "issue": "Getting project ID using Application Default Credentials fails when gcloud command writes anything to stderr\n - OS: Ubuntu 20.04\r\n - Python version: 3.8\r\n - pip version: 20.0.2\r\n - `google-auth` version: 1.19.2\r\n\r\n#### Steps to reproduce\r\n\r\n 1. Arrange for gcloud to throw a warning. For example I'm suffering from this https://github.com/GoogleCloudPlatform/gsutil/issues/999\r\n 2. Attempt to use ADC e.g. `credentials, project = google.auth.default()`\r\n 3. Note that project always comes back at None even if `gcloud config set project` is correctly set\r\n 4. Root cause seems to be that in _cloud_sdk.py/get_project_id() the subprocess.check_output command merges stderr and stdout. So in the case that stderr is not empty and the subprocess does not fail, you might get badly formed JSON on which json.loads a few lines later chokes.\r\n\r\nFor example, my raw gcloud output is like:\r\n\r\n/snap/google-cloud-sdk/165/lib/third_party/requests/__init__.py:83: RequestsDependencyWarning: Old version of cryptography ([1, 2, 3]) may cause slowdown.\\n warnings.warn(warning, RequestsDependencyWarning)\\n{\\n \"configuration\": {\\n \"active_configuration\": \"default\",\\n \"properties\": {\\n \"core\": {\\n \"account\": \"[email protected]\",\\n \"disable_usage_reporting\": \"False\",\\n \"project\": \"my-test-project\"\\n },\\n \"deployment_manager\": {\\n \"glob_imports\": \"True\"\\n }\\n }\\n },\\n \"credential\": {\\n \"access_token\".... 
etc etc.\r\n\r\nExpected behaviour: non-fatal errors or warnings from gcloud should not corrupt the output and cause the project ID lookup to fail.\n", "before_files": [{"content": "# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Helpers for reading the Google Cloud SDK's configuration.\"\"\"\n\nimport json\nimport os\nimport subprocess\n\nimport six\n\nfrom google.auth import environment_vars\nfrom google.auth import exceptions\n\n\n# The ~/.config subdirectory containing gcloud credentials.\n_CONFIG_DIRECTORY = \"gcloud\"\n# Windows systems store config at %APPDATA%\\gcloud\n_WINDOWS_CONFIG_ROOT_ENV_VAR = \"APPDATA\"\n# The name of the file in the Cloud SDK config that contains default\n# credentials.\n_CREDENTIALS_FILENAME = \"application_default_credentials.json\"\n# The name of the Cloud SDK shell script\n_CLOUD_SDK_POSIX_COMMAND = \"gcloud\"\n_CLOUD_SDK_WINDOWS_COMMAND = \"gcloud.cmd\"\n# The command to get the Cloud SDK configuration\n_CLOUD_SDK_CONFIG_COMMAND = (\"config\", \"config-helper\", \"--format\", \"json\")\n# The command to get google user access token\n_CLOUD_SDK_USER_ACCESS_TOKEN_COMMAND = (\"auth\", \"print-access-token\")\n# Cloud SDK's application-default client ID\nCLOUD_SDK_CLIENT_ID = (\n \"764086051850-6qr4p6gpi6hn506pt8ejuq83di341hur.apps.googleusercontent.com\"\n)\n\n\ndef get_config_path():\n \"\"\"Returns the absolute path the the Cloud SDK's configuration directory.\n\n Returns:\n str: The Cloud SDK config path.\n \"\"\"\n # If the path is explicitly set, return that.\n try:\n return os.environ[environment_vars.CLOUD_SDK_CONFIG_DIR]\n except KeyError:\n pass\n\n # Non-windows systems store this at ~/.config/gcloud\n if os.name != \"nt\":\n return os.path.join(os.path.expanduser(\"~\"), \".config\", _CONFIG_DIRECTORY)\n # Windows systems store config at %APPDATA%\\gcloud\n else:\n try:\n return os.path.join(\n os.environ[_WINDOWS_CONFIG_ROOT_ENV_VAR], _CONFIG_DIRECTORY\n )\n except KeyError:\n # This should never happen unless someone is really\n # messing with things, but we'll cover the case anyway.\n drive = os.environ.get(\"SystemDrive\", \"C:\")\n return os.path.join(drive, \"\\\\\", _CONFIG_DIRECTORY)\n\n\ndef get_application_default_credentials_path():\n \"\"\"Gets the path to the application default credentials file.\n\n The path may or may not exist.\n\n Returns:\n str: The full path to application default credentials.\n \"\"\"\n config_path = get_config_path()\n return os.path.join(config_path, _CREDENTIALS_FILENAME)\n\n\ndef get_project_id():\n \"\"\"Gets the project ID from the Cloud SDK.\n\n Returns:\n Optional[str]: The project ID.\n \"\"\"\n if os.name == \"nt\":\n command = _CLOUD_SDK_WINDOWS_COMMAND\n else:\n command = _CLOUD_SDK_POSIX_COMMAND\n\n try:\n output = subprocess.check_output(\n (command,) + _CLOUD_SDK_CONFIG_COMMAND, stderr=subprocess.STDOUT\n )\n except (subprocess.CalledProcessError, OSError, IOError):\n return None\n\n try:\n configuration = json.loads(output.decode(\"utf-8\"))\n except 
ValueError:\n return None\n\n try:\n return configuration[\"configuration\"][\"properties\"][\"core\"][\"project\"]\n except KeyError:\n return None\n\n\ndef get_auth_access_token(account=None):\n \"\"\"Load user access token with the ``gcloud auth print-access-token`` command.\n\n Args:\n account (Optional[str]): Account to get the access token for. If not\n specified, the current active account will be used.\n\n Returns:\n str: The user access token.\n\n Raises:\n google.auth.exceptions.UserAccessTokenError: if failed to get access\n token from gcloud.\n \"\"\"\n if os.name == \"nt\":\n command = _CLOUD_SDK_WINDOWS_COMMAND\n else:\n command = _CLOUD_SDK_POSIX_COMMAND\n\n try:\n if account:\n command = (\n (command,)\n + _CLOUD_SDK_USER_ACCESS_TOKEN_COMMAND\n + (\"--account=\" + account,)\n )\n else:\n command = (command,) + _CLOUD_SDK_USER_ACCESS_TOKEN_COMMAND\n\n access_token = subprocess.check_output(command, stderr=subprocess.STDOUT)\n # remove the trailing \"\\n\"\n return access_token.decode(\"utf-8\").strip()\n except (subprocess.CalledProcessError, OSError, IOError) as caught_exc:\n new_exc = exceptions.UserAccessTokenError(\n \"Failed to obtain access token\", caught_exc\n )\n six.raise_from(new_exc, caught_exc)\n", "path": "google/auth/_cloud_sdk.py"}]} | 2,421 | 278 |
gh_patches_debug_122 | rasdani/github-patches | git_diff | XanaduAI__strawberryfields-581 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Dependency versions error
#### Issue description
I made a fork of this project and tried to set up a new virtual environment.
```
python -m venv sf-venv
source sf-venv/bin/activate.fish
pip install -r requirements.txt
```
However, I got the following error
```
ERROR: Cannot install -r requirements.txt (line 4) and numpy>=1.20 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested numpy>=1.20
tensorflow 2.5.0 depends on numpy~=1.19.2
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies
```
#### Additional information
If it helps, I am using Python 3.9.4 and pip 21.1.1.
A quick fix would be to downgrade the version of numpy in requirements.txt and solve the issue, but I am not sure it is the best way to go.
</issue>
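As background for the resolver error quoted in the issue, the snippet below is a minimal, standalone sketch (not part of this repository) that checks a candidate numpy version against the two constraints pip reported: the `>=1.20` pin requested in `requirements.txt` and the `~=1.19.2` pin that tensorflow 2.5.0 declares according to the error output. The `packaging` library usage is standard; the specific version strings are taken from the issue text, not independently verified.

```
# Sketch only: illustrates the constraint clash quoted in the pip error above.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

requested = SpecifierSet(">=1.20")         # pin from requirements.txt, per the issue
required_by_tf = SpecifierSet("~=1.19.2")  # tensorflow 2.5.0's pin, per pip's output

candidate = Version("1.19.5")
print(candidate in requested)       # False -> rejected by the requirements.txt pin
print(candidate in required_by_tf)  # True  -> acceptable to tensorflow 2.5.0
```

No single version can satisfy both specifiers at once, which is exactly the `ResolutionImpossible` condition pip reports.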
<code>
[start of setup.py]
1 # Copyright 2019 Xanadu Quantum Technologies Inc.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import os
15 import sys
16
17 from setuptools import setup, find_packages
18
19
20 with open("strawberryfields/_version.py") as f:
21 version = f.readlines()[-1].split()[-1].strip("\"'")
22
23
24 requirements = [
25 "numpy>=1.17.4",
26 "scipy>=1.0.0",
27 "sympy>=1.5",
28 "networkx>=2.0",
29 "quantum-blackbird>=0.3.0",
30 "python-dateutil>=2.8.0",
31 "thewalrus>=0.15.0",
32 "numba",
33 "toml",
34 "appdirs",
35 "requests>=2.22.0",
36 "urllib3>=1.25.3",
37 ]
38
39 info = {
40 "name": "StrawberryFields",
41 "version": version,
42 "maintainer": "Xanadu Inc.",
43 "maintainer_email": "[email protected]",
44 "url": "https://github.com/XanaduAI/StrawberryFields",
45 "license": "Apache License 2.0",
46 "packages": find_packages(where="."),
47 "package_data": {"strawberryfields": ["backends/data/*", "apps/data/feature_data/*",
48 "apps/data/sample_data/*"]},
49 "include_package_data": True,
50 "entry_points" : {
51 'console_scripts': [
52 'sf=strawberryfields.cli:main'
53 ]
54 },
55 "description": "Open source library for continuous-variable quantum computation",
56 "long_description": open("README.rst", encoding="utf-8").read(),
57 "long_description_content_type": "text/x-rst",
58 "provides": ["strawberryfields"],
59 "install_requires": requirements,
60 # 'extras_require': extra_requirements,
61 "command_options": {
62 "build_sphinx": {"version": ("setup.py", version), "release": ("setup.py", version)}
63 },
64 }
65
66 classifiers = [
67 "Development Status :: 4 - Beta",
68 "Environment :: Console",
69 "Intended Audience :: Science/Research",
70 "License :: OSI Approved :: Apache Software License",
71 "Natural Language :: English",
72 "Operating System :: POSIX",
73 "Operating System :: MacOS :: MacOS X",
74 "Operating System :: POSIX :: Linux",
75 "Operating System :: Microsoft :: Windows",
76 "Programming Language :: Python",
77 "Programming Language :: Python :: 3",
78 "Programming Language :: Python :: 3.7",
79 "Programming Language :: Python :: 3.8",
80 "Programming Language :: Python :: 3.9",
81 "Programming Language :: Python :: 3 :: Only",
82 "Topic :: Scientific/Engineering :: Physics",
83 ]
84
85 setup(classifiers=classifiers, **(info))
86
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -22,7 +22,7 @@
requirements = [
- "numpy>=1.17.4",
+ "numpy>=1.19.2",
"scipy>=1.0.0",
"sympy>=1.5",
"networkx>=2.0",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -22,7 +22,7 @@\n \n \n requirements = [\n- \"numpy>=1.17.4\",\n+ \"numpy>=1.19.2\",\n \"scipy>=1.0.0\",\n \"sympy>=1.5\",\n \"networkx>=2.0\",\n", "issue": "Dependency versions error\n#### Issue description\r\nI made a fork of this project and tried to setup a new virtual environment.\r\n\r\n```\r\npython -m venv sf-venv\r\nsource sf-venv/bin/active.fish\r\npip install -r requirements.txt\r\n```\r\n\r\nHowever, I got the following error\r\n``` \r\nERROR: Cannot install -r requirements.txt (line 4) and numpy>=1.20 because these package versions have conflicting dependencies.\r\n\r\nThe conflict is caused by:\r\n The user requested numpy>=1.20\r\n tensorflow 2.5.0 depends on numpy~=1.19.2\r\n\r\nTo fix this you could try to:\r\n1. loosen the range of package versions you've specified\r\n2. remove package versions to allow pip attempt to solve the dependency conflict\r\n\r\nERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies\r\n```\r\n\r\n#### Additional information\r\n\r\nIf it helps, I am using Python 3.9.4 and pip 21.1.1. \r\n\r\nA quick fix would be to downgrade the version of numpy in requirements.txt and solve the issue, but I am not sure it is the best way to go.\r\n\n", "before_files": [{"content": "# Copyright 2019 Xanadu Quantum Technologies Inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\n\nwith open(\"strawberryfields/_version.py\") as f:\n version = f.readlines()[-1].split()[-1].strip(\"\\\"'\")\n\n\nrequirements = [\n \"numpy>=1.17.4\",\n \"scipy>=1.0.0\",\n \"sympy>=1.5\",\n \"networkx>=2.0\",\n \"quantum-blackbird>=0.3.0\",\n \"python-dateutil>=2.8.0\",\n \"thewalrus>=0.15.0\",\n \"numba\",\n \"toml\",\n \"appdirs\",\n \"requests>=2.22.0\",\n \"urllib3>=1.25.3\",\n]\n\ninfo = {\n \"name\": \"StrawberryFields\",\n \"version\": version,\n \"maintainer\": \"Xanadu Inc.\",\n \"maintainer_email\": \"[email protected]\",\n \"url\": \"https://github.com/XanaduAI/StrawberryFields\",\n \"license\": \"Apache License 2.0\",\n \"packages\": find_packages(where=\".\"),\n \"package_data\": {\"strawberryfields\": [\"backends/data/*\", \"apps/data/feature_data/*\",\n \"apps/data/sample_data/*\"]},\n \"include_package_data\": True,\n \"entry_points\" : {\n 'console_scripts': [\n 'sf=strawberryfields.cli:main'\n ]\n },\n \"description\": \"Open source library for continuous-variable quantum computation\",\n \"long_description\": open(\"README.rst\", encoding=\"utf-8\").read(),\n \"long_description_content_type\": \"text/x-rst\",\n \"provides\": [\"strawberryfields\"],\n \"install_requires\": requirements,\n # 'extras_require': extra_requirements,\n \"command_options\": {\n \"build_sphinx\": {\"version\": (\"setup.py\", version), \"release\": (\"setup.py\", version)}\n },\n}\n\nclassifiers = [\n \"Development Status :: 4 - Beta\",\n \"Environment :: Console\",\n 
\"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: POSIX\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Topic :: Scientific/Engineering :: Physics\",\n]\n\nsetup(classifiers=classifiers, **(info))\n", "path": "setup.py"}]} | 1,691 | 90 |
gh_patches_debug_36362 | rasdani/github-patches | git_diff | wright-group__WrightTools-829 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Recover trim, a method of channel
</issue>
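For context on what `trim` is meant to recover, the docstring in the code below describes a z-test: each point is compared to the mean and standard deviation of its neighborhood. The snippet here is a self-contained illustration of that idea on a plain 1-D numpy array; it is not the WrightTools implementation, and the small `factor` is chosen only so the toy data flags an outlier (the neighborhood includes the point itself, which inflates the standard deviation).

```
import numpy as np

def ztest_outliers(arr, half_width=2, factor=1.0):
    """Indices where |x - neighborhood mean| exceeds factor * neighborhood std."""
    outliers = []
    for i in range(arr.size):
        lo, hi = max(0, i - half_width), min(arr.size, i + half_width + 1)
        neighbors = arr[lo:hi]
        if np.abs(arr[i] - np.nanmean(neighbors)) > factor * np.nanstd(neighbors):
            outliers.append(i)
    return outliers

print(ztest_outliers(np.array([1.0, 1.1, 0.9, 50.0, 1.0, 1.2])))  # [3]
```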
<code>
[start of WrightTools/data/_channel.py]
1 """Channel class and associated."""
2
3
4 # --- import --------------------------------------------------------------------------------------
5
6
7 import numpy as np
8
9 import h5py
10
11 from .. import kit as wt_kit
12 from .._dataset import Dataset
13
14 __all__ = ["Channel"]
15
16 # --- class ---------------------------------------------------------------------------------------
17
18
19 class Channel(Dataset):
20 """Channel."""
21
22 class_name = "Channel"
23
24 def __init__(
25 self,
26 parent,
27 id,
28 *,
29 units=None,
30 null=None,
31 signed=None,
32 label=None,
33 label_seed=None,
34 **kwargs
35 ):
36 """Construct a channel object.
37
38 Parameters
39 ----------
40 values : array-like
41 Values.
42 name : string
43 Channel name.
44 units : string (optional)
45 Channel units. Default is None.
46 null : number (optional)
47 Channel null. Default is None (0).
48 signed : booelan (optional)
49 Channel signed flag. Default is None (guess).
50 label : string.
51 Label. Default is None.
52 label_seed : list of strings
53 Label seed. Default is None.
54 **kwargs
55 Additional keyword arguments are added to the attrs dictionary
56 and to the natural namespace of the object (if possible).
57 """
58 self._parent = parent
59 super().__init__(id)
60 self.label = label
61 self.label_seed = label_seed
62 self.units = units
63 self.dimensionality = len(self.shape)
64 # attrs
65 self.attrs.update(kwargs)
66 self.attrs["name"] = h5py.h5i.get_name(self.id).decode().split("/")[-1]
67 self.attrs["class"] = "Channel"
68 if signed is not None:
69 self.attrs["signed"] = signed
70 if null is not None:
71 self.attrs["null"] = null
72 for key, value in self.attrs.items():
73 identifier = wt_kit.string2identifier(key)
74 if not hasattr(self, identifier):
75 setattr(self, identifier, value)
76
77 @property
78 def major_extent(self) -> complex:
79 """Maximum deviation from null."""
80 return max((self.max() - self.null, self.null - self.min()))
81
82 @property
83 def minor_extent(self) -> complex:
84 """Minimum deviation from null."""
85 return min((self.max() - self.null, self.null - self.min()))
86
87 @property
88 def null(self) -> complex:
89 if "null" not in self.attrs.keys():
90 self.attrs["null"] = 0
91 return self.attrs["null"]
92
93 @null.setter
94 def null(self, value):
95 self.attrs["null"] = value
96
97 @property
98 def signed(self) -> bool:
99 if "signed" not in self.attrs.keys():
100 self.attrs["signed"] = False
101 return self.attrs["signed"]
102
103 @signed.setter
104 def signed(self, value):
105 self.attrs["signed"] = value
106
107 def mag(self) -> complex:
108 """Channel magnitude (maximum deviation from null)."""
109 return self.major_extent
110
111 def normalize(self, mag=1.):
112 """Normalize a Channel, set `null` to 0 and the mag to given value.
113
114 Parameters
115 ----------
116 mag : float (optional)
117 New value of mag. Default is 1.
118 """
119
120 def f(dataset, s, null, mag):
121 dataset[s] -= null
122 dataset[s] /= mag
123
124 if self.signed:
125 mag = self.mag() / mag
126 else:
127 mag = self.max() / mag
128 self.chunkwise(f, null=self.null, mag=mag)
129 self._null = 0
130
131 def trim(self, neighborhood, method="ztest", factor=3, replace="nan", verbose=True):
132 """Remove outliers from the dataset.
133
134 Identifies outliers by comparing each point to its
135 neighbors using a statistical test.
136
137 Parameters
138 ----------
139 neighborhood : list of integers
140 Size of the neighborhood in each dimension. Length of the list must
141 be equal to the dimensionality of the channel.
142 method : {'ztest'} (optional)
143 Statistical test used to detect outliers. Default is ztest.
144
145 ztest
146 Compare point deviation from neighborhood mean to neighborhood
147 standard deviation.
148
149 factor : number (optional)
150 Tolerance factor. Default is 3.
151 replace : {'nan', 'mean', 'mask', number} (optional)
152 Behavior of outlier replacement. Default is nan.
153
154 nan
155 Outliers are replaced by numpy nans.
156
157 mean
158 Outliers are replaced by the mean of its neighborhood.
159
160 mask
161 Array is masked at outliers.
162
163 number
164 Array becomes given number.
165
166 Returns
167 -------
168 list of tuples
169 Indicies of trimmed outliers.
170
171 See Also
172 --------
173 clip
174 Remove pixels outside of a certain range.
175 """
176 raise NotImplementedError
177 outliers = []
178 means = []
179 # find outliers
180 for idx in np.ndindex(self.shape):
181 slices = []
182 for i, di, size in zip(idx, neighborhood, self.shape):
183 start = max(0, i - di)
184 stop = min(size, i + di + 1)
185 slices.append(slice(start, stop, 1))
186 neighbors = self[slices]
187 mean = np.nanmean(neighbors)
188 limit = np.nanstd(neighbors) * factor
189 if np.abs(self[idx] - mean) > limit:
190 outliers.append(idx)
191 means.append(mean)
192 # replace outliers
193 i = tuple(zip(*outliers))
194 if replace == "nan":
195 self[i] = np.nan
196 elif replace == "mean":
197 self[i] = means
198 elif replace == "mask":
199 self[:] = np.ma.array(self[:])
200 self[i] = np.ma.masked
201 elif type(replace) in [int, float]:
202 self[i] = replace
203 else:
204 raise KeyError("replace must be one of {nan, mean, mask} or some number")
205 # finish
206 if verbose:
207 print("%i outliers removed" % len(outliers))
208 return outliers
209
[end of WrightTools/data/_channel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/WrightTools/data/_channel.py b/WrightTools/data/_channel.py
--- a/WrightTools/data/_channel.py
+++ b/WrightTools/data/_channel.py
@@ -8,8 +8,12 @@
import h5py
+import warnings
+import numbers
+
from .. import kit as wt_kit
from .._dataset import Dataset
+from .. import exceptions as wt_exceptions
__all__ = ["Channel"]
@@ -148,7 +152,7 @@
factor : number (optional)
Tolerance factor. Default is 3.
- replace : {'nan', 'mean', 'mask', number} (optional)
+ replace : {'nan', 'mean', number} (optional)
Behavior of outlier replacement. Default is nan.
nan
@@ -157,9 +161,6 @@
mean
Outliers are replaced by the mean of its neighborhood.
- mask
- Array is masked at outliers.
-
number
Array becomes given number.
@@ -173,7 +174,7 @@
clip
Remove pixels outside of a certain range.
"""
- raise NotImplementedError
+ warnings.warn("trim", category=wt_exceptions.EntireDatasetInMemoryWarning)
outliers = []
means = []
# find outliers
@@ -192,16 +193,19 @@
# replace outliers
i = tuple(zip(*outliers))
if replace == "nan":
- self[i] = np.nan
+ arr = self[:]
+ arr[i] = np.nan
+ self[:] = arr
elif replace == "mean":
- self[i] = means
- elif replace == "mask":
- self[:] = np.ma.array(self[:])
- self[i] = np.ma.masked
- elif type(replace) in [int, float]:
- self[i] = replace
+ arr = self[:]
+ arr[i] = means
+ self[:] = arr
+ elif isinstance(replace, numbers.Number):
+ arr = self[:]
+ arr[i] = replace
+ self[:] = arr
else:
- raise KeyError("replace must be one of {nan, mean, mask} or some number")
+ raise KeyError("replace must be one of {nan, mean} or some number")
# finish
if verbose:
print("%i outliers removed" % len(outliers))
| {"golden_diff": "diff --git a/WrightTools/data/_channel.py b/WrightTools/data/_channel.py\n--- a/WrightTools/data/_channel.py\n+++ b/WrightTools/data/_channel.py\n@@ -8,8 +8,12 @@\n \n import h5py\n \n+import warnings\n+import numbers\n+\n from .. import kit as wt_kit\n from .._dataset import Dataset\n+from .. import exceptions as wt_exceptions\n \n __all__ = [\"Channel\"]\n \n@@ -148,7 +152,7 @@\n \n factor : number (optional)\n Tolerance factor. Default is 3.\n- replace : {'nan', 'mean', 'mask', number} (optional)\n+ replace : {'nan', 'mean', number} (optional)\n Behavior of outlier replacement. Default is nan.\n \n nan\n@@ -157,9 +161,6 @@\n mean\n Outliers are replaced by the mean of its neighborhood.\n \n- mask\n- Array is masked at outliers.\n-\n number\n Array becomes given number.\n \n@@ -173,7 +174,7 @@\n clip\n Remove pixels outside of a certain range.\n \"\"\"\n- raise NotImplementedError\n+ warnings.warn(\"trim\", category=wt_exceptions.EntireDatasetInMemoryWarning)\n outliers = []\n means = []\n # find outliers\n@@ -192,16 +193,19 @@\n # replace outliers\n i = tuple(zip(*outliers))\n if replace == \"nan\":\n- self[i] = np.nan\n+ arr = self[:]\n+ arr[i] = np.nan\n+ self[:] = arr\n elif replace == \"mean\":\n- self[i] = means\n- elif replace == \"mask\":\n- self[:] = np.ma.array(self[:])\n- self[i] = np.ma.masked\n- elif type(replace) in [int, float]:\n- self[i] = replace\n+ arr = self[:]\n+ arr[i] = means\n+ self[:] = arr\n+ elif isinstance(replace, numbers.Number):\n+ arr = self[:]\n+ arr[i] = replace\n+ self[:] = arr\n else:\n- raise KeyError(\"replace must be one of {nan, mean, mask} or some number\")\n+ raise KeyError(\"replace must be one of {nan, mean} or some number\")\n # finish\n if verbose:\n print(\"%i outliers removed\" % len(outliers))\n", "issue": "Recover trim, a method of channel\n\n", "before_files": [{"content": "\"\"\"Channel class and associated.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport numpy as np\n\nimport h5py\n\nfrom .. import kit as wt_kit\nfrom .._dataset import Dataset\n\n__all__ = [\"Channel\"]\n\n# --- class ---------------------------------------------------------------------------------------\n\n\nclass Channel(Dataset):\n \"\"\"Channel.\"\"\"\n\n class_name = \"Channel\"\n\n def __init__(\n self,\n parent,\n id,\n *,\n units=None,\n null=None,\n signed=None,\n label=None,\n label_seed=None,\n **kwargs\n ):\n \"\"\"Construct a channel object.\n\n Parameters\n ----------\n values : array-like\n Values.\n name : string\n Channel name.\n units : string (optional)\n Channel units. Default is None.\n null : number (optional)\n Channel null. Default is None (0).\n signed : booelan (optional)\n Channel signed flag. Default is None (guess).\n label : string.\n Label. Default is None.\n label_seed : list of strings\n Label seed. 
Default is None.\n **kwargs\n Additional keyword arguments are added to the attrs dictionary\n and to the natural namespace of the object (if possible).\n \"\"\"\n self._parent = parent\n super().__init__(id)\n self.label = label\n self.label_seed = label_seed\n self.units = units\n self.dimensionality = len(self.shape)\n # attrs\n self.attrs.update(kwargs)\n self.attrs[\"name\"] = h5py.h5i.get_name(self.id).decode().split(\"/\")[-1]\n self.attrs[\"class\"] = \"Channel\"\n if signed is not None:\n self.attrs[\"signed\"] = signed\n if null is not None:\n self.attrs[\"null\"] = null\n for key, value in self.attrs.items():\n identifier = wt_kit.string2identifier(key)\n if not hasattr(self, identifier):\n setattr(self, identifier, value)\n\n @property\n def major_extent(self) -> complex:\n \"\"\"Maximum deviation from null.\"\"\"\n return max((self.max() - self.null, self.null - self.min()))\n\n @property\n def minor_extent(self) -> complex:\n \"\"\"Minimum deviation from null.\"\"\"\n return min((self.max() - self.null, self.null - self.min()))\n\n @property\n def null(self) -> complex:\n if \"null\" not in self.attrs.keys():\n self.attrs[\"null\"] = 0\n return self.attrs[\"null\"]\n\n @null.setter\n def null(self, value):\n self.attrs[\"null\"] = value\n\n @property\n def signed(self) -> bool:\n if \"signed\" not in self.attrs.keys():\n self.attrs[\"signed\"] = False\n return self.attrs[\"signed\"]\n\n @signed.setter\n def signed(self, value):\n self.attrs[\"signed\"] = value\n\n def mag(self) -> complex:\n \"\"\"Channel magnitude (maximum deviation from null).\"\"\"\n return self.major_extent\n\n def normalize(self, mag=1.):\n \"\"\"Normalize a Channel, set `null` to 0 and the mag to given value.\n\n Parameters\n ----------\n mag : float (optional)\n New value of mag. Default is 1.\n \"\"\"\n\n def f(dataset, s, null, mag):\n dataset[s] -= null\n dataset[s] /= mag\n\n if self.signed:\n mag = self.mag() / mag\n else:\n mag = self.max() / mag\n self.chunkwise(f, null=self.null, mag=mag)\n self._null = 0\n\n def trim(self, neighborhood, method=\"ztest\", factor=3, replace=\"nan\", verbose=True):\n \"\"\"Remove outliers from the dataset.\n\n Identifies outliers by comparing each point to its\n neighbors using a statistical test.\n\n Parameters\n ----------\n neighborhood : list of integers\n Size of the neighborhood in each dimension. Length of the list must\n be equal to the dimensionality of the channel.\n method : {'ztest'} (optional)\n Statistical test used to detect outliers. Default is ztest.\n\n ztest\n Compare point deviation from neighborhood mean to neighborhood\n standard deviation.\n\n factor : number (optional)\n Tolerance factor. Default is 3.\n replace : {'nan', 'mean', 'mask', number} (optional)\n Behavior of outlier replacement. 
Default is nan.\n\n nan\n Outliers are replaced by numpy nans.\n\n mean\n Outliers are replaced by the mean of its neighborhood.\n\n mask\n Array is masked at outliers.\n\n number\n Array becomes given number.\n\n Returns\n -------\n list of tuples\n Indicies of trimmed outliers.\n\n See Also\n --------\n clip\n Remove pixels outside of a certain range.\n \"\"\"\n raise NotImplementedError\n outliers = []\n means = []\n # find outliers\n for idx in np.ndindex(self.shape):\n slices = []\n for i, di, size in zip(idx, neighborhood, self.shape):\n start = max(0, i - di)\n stop = min(size, i + di + 1)\n slices.append(slice(start, stop, 1))\n neighbors = self[slices]\n mean = np.nanmean(neighbors)\n limit = np.nanstd(neighbors) * factor\n if np.abs(self[idx] - mean) > limit:\n outliers.append(idx)\n means.append(mean)\n # replace outliers\n i = tuple(zip(*outliers))\n if replace == \"nan\":\n self[i] = np.nan\n elif replace == \"mean\":\n self[i] = means\n elif replace == \"mask\":\n self[:] = np.ma.array(self[:])\n self[i] = np.ma.masked\n elif type(replace) in [int, float]:\n self[i] = replace\n else:\n raise KeyError(\"replace must be one of {nan, mean, mask} or some number\")\n # finish\n if verbose:\n print(\"%i outliers removed\" % len(outliers))\n return outliers\n", "path": "WrightTools/data/_channel.py"}]} | 2,379 | 544 |
gh_patches_debug_19530 | rasdani/github-patches | git_diff | mozmeao__snippets-service-995 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix NR reporting
See https://github.com/mozmeao/infra/issues/1106
</issue>
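The issue body is only a link, but the WSGI module below initializes New Relic conditionally, after the Django application has already been created. For comparison, the sketch below shows the ordering New Relic's Python agent documentation generally recommends (agent initialized before the application modules are imported). Treat the exact file name and ordering as assumptions, not as this project's required fix; only `newrelic.agent.initialize` and `newrelic.agent.wsgi_application` are taken from the code in this repository.

```
# Hedged sketch of an alternative module ordering; not taken from this repo.
import newrelic.agent
newrelic.agent.initialize("newrelic.ini")  # initialize before importing the app

import os  # noqa: E402
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "snippets.settings")

from django.core.wsgi import get_wsgi_application  # noqa: E402

application = newrelic.agent.wsgi_application()(get_wsgi_application())
```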
<code>
[start of snippets/wsgi/app.py]
1 """
2 WSGI config for snippets project.
3
4 It exposes the WSGI callable as a module-level variable named ``application``.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/1.7/howto/deployment/wsgi/
8 """
9 import os
10 os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'snippets.settings') # NOQA
11
12 from django.core.wsgi import get_wsgi_application
13
14 import newrelic.agent
15 from decouple import config
16 from raven.contrib.django.raven_compat.middleware.wsgi import Sentry
17
18 application = get_wsgi_application()
19
20 application = Sentry(application)
21
22 # Add NewRelic
23 newrelic_ini = config('NEW_RELIC_CONFIG_FILE', default='newrelic.ini')
24 newrelic_license_key = config('NEW_RELIC_LICENSE_KEY', default=None)
25 if newrelic_ini and newrelic_license_key:
26 newrelic.agent.initialize(newrelic_ini)
27 application = newrelic.agent.wsgi_application()(application)
28
[end of snippets/wsgi/app.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/snippets/wsgi/app.py b/snippets/wsgi/app.py
--- a/snippets/wsgi/app.py
+++ b/snippets/wsgi/app.py
@@ -6,22 +6,14 @@
For more information on this file, see
https://docs.djangoproject.com/en/1.7/howto/deployment/wsgi/
"""
-import os
-os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'snippets.settings') # NOQA
-
-from django.core.wsgi import get_wsgi_application
-
import newrelic.agent
-from decouple import config
-from raven.contrib.django.raven_compat.middleware.wsgi import Sentry
+newrelic.agent.initialize('newrelic.ini')
+import os # NOQA
+os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'snippets.settings') # NOQA
+
+from django.core.wsgi import get_wsgi_application # NOQA
application = get_wsgi_application()
+from raven.contrib.django.raven_compat.middleware.wsgi import Sentry # NOQA
application = Sentry(application)
-
-# Add NewRelic
-newrelic_ini = config('NEW_RELIC_CONFIG_FILE', default='newrelic.ini')
-newrelic_license_key = config('NEW_RELIC_LICENSE_KEY', default=None)
-if newrelic_ini and newrelic_license_key:
- newrelic.agent.initialize(newrelic_ini)
- application = newrelic.agent.wsgi_application()(application)
| {"golden_diff": "diff --git a/snippets/wsgi/app.py b/snippets/wsgi/app.py\n--- a/snippets/wsgi/app.py\n+++ b/snippets/wsgi/app.py\n@@ -6,22 +6,14 @@\n For more information on this file, see\n https://docs.djangoproject.com/en/1.7/howto/deployment/wsgi/\n \"\"\"\n-import os\n-os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'snippets.settings') # NOQA\n-\n-from django.core.wsgi import get_wsgi_application\n-\n import newrelic.agent\n-from decouple import config\n-from raven.contrib.django.raven_compat.middleware.wsgi import Sentry\n+newrelic.agent.initialize('newrelic.ini')\n \n+import os # NOQA\n+os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'snippets.settings') # NOQA\n+\n+from django.core.wsgi import get_wsgi_application # NOQA\n application = get_wsgi_application()\n \n+from raven.contrib.django.raven_compat.middleware.wsgi import Sentry # NOQA\n application = Sentry(application)\n-\n-# Add NewRelic\n-newrelic_ini = config('NEW_RELIC_CONFIG_FILE', default='newrelic.ini')\n-newrelic_license_key = config('NEW_RELIC_LICENSE_KEY', default=None)\n-if newrelic_ini and newrelic_license_key:\n- newrelic.agent.initialize(newrelic_ini)\n- application = newrelic.agent.wsgi_application()(application)\n", "issue": "Fix NR reporting\nSee https://github.com/mozmeao/infra/issues/1106\n", "before_files": [{"content": "\"\"\"\nWSGI config for snippets project.\n\nIt exposes the WSGI callable as a module-level variable named ``application``.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.7/howto/deployment/wsgi/\n\"\"\"\nimport os\nos.environ.setdefault('DJANGO_SETTINGS_MODULE', 'snippets.settings') # NOQA\n\nfrom django.core.wsgi import get_wsgi_application\n\nimport newrelic.agent\nfrom decouple import config\nfrom raven.contrib.django.raven_compat.middleware.wsgi import Sentry\n\napplication = get_wsgi_application()\n\napplication = Sentry(application)\n\n# Add NewRelic\nnewrelic_ini = config('NEW_RELIC_CONFIG_FILE', default='newrelic.ini')\nnewrelic_license_key = config('NEW_RELIC_LICENSE_KEY', default=None)\nif newrelic_ini and newrelic_license_key:\n newrelic.agent.initialize(newrelic_ini)\n application = newrelic.agent.wsgi_application()(application)\n", "path": "snippets/wsgi/app.py"}]} | 815 | 311 |
gh_patches_debug_16876 | rasdani/github-patches | git_diff | chainer__chainer-1355 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Possibly wrong doc or code for deconvolution cover_all
The documentation says:
```
cover_all (bool): If True, all spatial locations are convoluted into
some output pixels. It may make the output size larger.
```
However, when I prepare a small toy example, the output is larger when `cover_all=True`. I feel like either the code or the documentation needs to be inverted.
See a [gist notebook](https://gist.github.com/LukasDrude/8a9ebbaa3a6ba4ae0e2bef611afefd5a) for the toy example or the attached screenshot. I had set the weight matrices to ones and disabled normalization for clarity.

</issue>
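A quick, self-contained way to check which setting actually produces the larger output is to query the same helper the implementation below relies on, `chainer.utils.conv.get_deconv_outsize`, with both values of `cover_all`. The printed numbers depend on the installed Chainer version, so nothing here presumes which direction the discrepancy goes; the kernel, stride, and pad values are arbitrary.

```
from chainer.utils import conv

h, k, s, p = 4, 2, 2, 0  # arbitrary input size, kernel, stride, pad
for cover_all in (False, True):
    out = conv.get_deconv_outsize(h, k, s, p, cover_all=cover_all)
    print(cover_all, out)
```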
<code>
[start of chainer/functions/pooling/unpooling_2d.py]
1 from chainer import cuda
2 from chainer.functions.pooling import pooling_2d
3 from chainer.utils import conv
4 from chainer.utils import type_check
5
6
7 class Unpooling2D(pooling_2d.Pooling2D):
8
9 """Unpooling over a set of 2d planes."""
10
11 def __init__(self, ksize, stride=None, pad=0,
12 outsize=None, cover_all=True):
13 super(Unpooling2D, self).__init__(ksize, stride, pad, cover_all)
14 self.outh, self.outw = (None, None) if outsize is None else outsize
15
16 def check_type_forward(self, in_types):
17 n_in = in_types.size()
18 type_check.expect(n_in == 1)
19 x_type = in_types[0]
20
21 type_check.expect(
22 x_type.dtype.kind == 'f',
23 x_type.ndim == 4,
24 )
25
26 if self.outh is not None:
27 expected_h = conv.get_conv_outsize(
28 self.outh, self.kh, self.sy, self.ph, cover_all=self.cover_all)
29 type_check.expect(x_type.shape[2] == expected_h)
30 if self.outw is not None:
31 expected_w = conv.get_conv_outsize(
32 self.outw, self.kw, self.sx, self.pw, cover_all=self.cover_all)
33 type_check.expect(x_type.shape[3] == expected_w)
34
35 def forward(self, x):
36 h, w = x[0].shape[2:]
37 if self.outh is None:
38 self.outh = conv.get_deconv_outsize(
39 h, self.kh, self.sy, self.ph, cover_all=self.cover_all)
40 if self.outw is None:
41 self.outw = conv.get_deconv_outsize(
42 w, self.kw, self.sx, self.pw, cover_all=self.cover_all)
43 xp = cuda.get_array_module(*x)
44 col = xp.tile(x[0][:, :, None, None],
45 (1, 1, self.kh, self.kw, 1, 1))
46 if isinstance(x[0], cuda.ndarray):
47 y = conv.col2im_gpu(col, self.sy, self.sx, self.ph, self.pw,
48 self.outh, self.outw)
49 else:
50 y = conv.col2im_cpu(col, self.sy, self.sx, self.ph, self.pw,
51 self.outh, self.outw)
52 return y,
53
54 def backward(self, x, gy):
55 if isinstance(gy[0], cuda.ndarray):
56 gcol = conv.im2col_gpu(
57 gy[0], self.kh, self.kw, self.sy, self.sx, self.ph, self.pw,
58 cover_all=self.cover_all)
59 else:
60 gcol = conv.im2col_cpu(
61 gy[0], self.kh, self.kw, self.sy, self.sx, self.ph, self.pw,
62 cover_all=self.cover_all)
63 gx = gcol.sum(axis=(2, 3))
64 return gx,
65
66
67 def unpooling_2d(x, ksize, stride=None, pad=0, outsize=None, cover_all=True):
68 """Inverse operation of pooling for 2d array.
69
70 This function acts similarly to :class:`~functions.Deconvolution2D`, but
71 it spreads input 2d array's value without any parameter instead of
72 computing the inner products.
73
74 Args:
75 x (~chainer.Variable): Input variable.
76 ksize (int or pair of ints): Size of pooling window. ``ksize=k`` and
77 ``ksize=(k, k)`` are equivalent.
78 stride (int, pair of ints or None): Stride of pooling applications.
79 ``stride=s`` and ``stride=(s, s)`` are equivalent. If ``None`` is
80 specified, then it uses same stride as the pooling window size.
81 pad (int or pair of ints): Spatial padding width for the input array.
82 ``pad=p`` and ``pad=(p, p)`` are equivalent.
83 outsize (None or pair of ints): Expected output size (height, width)
84 of array after the operation. If ``None``, the size
85 (height or width) is estimated from the size of input array
86 in first batch with
87 :func:`~chainer.utils.conv.get_deconv_outsize`.
88 If outsize is not ``None``, the result of outsize applied to
89 :func:`~chainer.utils.conv.get_conv_outsize` must be equal to
90 the shape of the 2d array in the input batch ``x``.
91 cover_all (bool): If ``True``, all spatial locations are pooled
92 into some output pixels, and the output size is larger than that
93 when cover_all is ``False``.
94
95 Returns:
96 ~chainer.Variable: Output variable.
97
98 """
99 return Unpooling2D(ksize, stride, pad, outsize, cover_all)(x)
100
[end of chainer/functions/pooling/unpooling_2d.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/functions/pooling/unpooling_2d.py b/chainer/functions/pooling/unpooling_2d.py
--- a/chainer/functions/pooling/unpooling_2d.py
+++ b/chainer/functions/pooling/unpooling_2d.py
@@ -88,9 +88,12 @@
If outsize is not ``None``, the result of outsize applied to
:func:`~chainer.utils.conv.get_conv_outsize` must be equal to
the shape of the 2d array in the input batch ``x``.
- cover_all (bool): If ``True``, all spatial locations are pooled
- into some output pixels, and the output size is larger than that
- when cover_all is ``False``.
+ cover_all (bool): If ``True``, the output size may be smaller than
+ the size if ``cover_all`` is ``False``. This flag serves to
+ align behavior to the pooling functions which can cover all
+ input locations, see :func:`~chainer.functions.max_pooling_2d`
+ and :func:`~chainer.functions.convolution_2d`.
+
Returns:
~chainer.Variable: Output variable.
| {"golden_diff": "diff --git a/chainer/functions/pooling/unpooling_2d.py b/chainer/functions/pooling/unpooling_2d.py\n--- a/chainer/functions/pooling/unpooling_2d.py\n+++ b/chainer/functions/pooling/unpooling_2d.py\n@@ -88,9 +88,12 @@\n If outsize is not ``None``, the result of outsize applied to\n :func:`~chainer.utils.conv.get_conv_outsize` must be equal to\n the shape of the 2d array in the input batch ``x``.\n- cover_all (bool): If ``True``, all spatial locations are pooled\n- into some output pixels, and the output size is larger than that\n- when cover_all is ``False``.\n+ cover_all (bool): If ``True``, the output size may be smaller than\n+ the size if ``cover_all`` is ``False``. This flag serves to\n+ align behavior to the pooling functions which can cover all\n+ input locations, see :func:`~chainer.functions.max_pooling_2d`\n+ and :func:`~chainer.functions.convolution_2d`.\n+\n \n Returns:\n ~chainer.Variable: Output variable.\n", "issue": "Possibly wrong doc or code for deconvolution cover_all\nThe documentation says:\n\n```\ncover_all (bool): If True, all spatial locations are convoluted into\n some output pixels. It may make the output size larger.\n```\n\nHowever, when I prepare a small toy example, the output is larger when `cover_all=True`. I feel like either the code or the documentation needs to be inverted.\n\nSee an [gist notebook](https://gist.github.com/LukasDrude/8a9ebbaa3a6ba4ae0e2bef611afefd5a) for the toy example or the attached screenshot. I had set the weight matrices to ones and disabled normalization for clarity.\n\n\n\n", "before_files": [{"content": "from chainer import cuda\nfrom chainer.functions.pooling import pooling_2d\nfrom chainer.utils import conv\nfrom chainer.utils import type_check\n\n\nclass Unpooling2D(pooling_2d.Pooling2D):\n\n \"\"\"Unpooling over a set of 2d planes.\"\"\"\n\n def __init__(self, ksize, stride=None, pad=0,\n outsize=None, cover_all=True):\n super(Unpooling2D, self).__init__(ksize, stride, pad, cover_all)\n self.outh, self.outw = (None, None) if outsize is None else outsize\n\n def check_type_forward(self, in_types):\n n_in = in_types.size()\n type_check.expect(n_in == 1)\n x_type = in_types[0]\n\n type_check.expect(\n x_type.dtype.kind == 'f',\n x_type.ndim == 4,\n )\n\n if self.outh is not None:\n expected_h = conv.get_conv_outsize(\n self.outh, self.kh, self.sy, self.ph, cover_all=self.cover_all)\n type_check.expect(x_type.shape[2] == expected_h)\n if self.outw is not None:\n expected_w = conv.get_conv_outsize(\n self.outw, self.kw, self.sx, self.pw, cover_all=self.cover_all)\n type_check.expect(x_type.shape[3] == expected_w)\n\n def forward(self, x):\n h, w = x[0].shape[2:]\n if self.outh is None:\n self.outh = conv.get_deconv_outsize(\n h, self.kh, self.sy, self.ph, cover_all=self.cover_all)\n if self.outw is None:\n self.outw = conv.get_deconv_outsize(\n w, self.kw, self.sx, self.pw, cover_all=self.cover_all)\n xp = cuda.get_array_module(*x)\n col = xp.tile(x[0][:, :, None, None],\n (1, 1, self.kh, self.kw, 1, 1))\n if isinstance(x[0], cuda.ndarray):\n y = conv.col2im_gpu(col, self.sy, self.sx, self.ph, self.pw,\n self.outh, self.outw)\n else:\n y = conv.col2im_cpu(col, self.sy, self.sx, self.ph, self.pw,\n self.outh, self.outw)\n return y,\n\n def backward(self, x, gy):\n if isinstance(gy[0], cuda.ndarray):\n gcol = conv.im2col_gpu(\n gy[0], self.kh, self.kw, self.sy, self.sx, self.ph, self.pw,\n cover_all=self.cover_all)\n else:\n gcol = conv.im2col_cpu(\n gy[0], self.kh, self.kw, self.sy, self.sx, self.ph, self.pw,\n 
cover_all=self.cover_all)\n gx = gcol.sum(axis=(2, 3))\n return gx,\n\n\ndef unpooling_2d(x, ksize, stride=None, pad=0, outsize=None, cover_all=True):\n \"\"\"Inverse operation of pooling for 2d array.\n\n This function acts similarly to :class:`~functions.Deconvolution2D`, but\n it spreads input 2d array's value without any parameter instead of\n computing the inner products.\n\n Args:\n x (~chainer.Variable): Input variable.\n ksize (int or pair of ints): Size of pooling window. ``ksize=k`` and\n ``ksize=(k, k)`` are equivalent.\n stride (int, pair of ints or None): Stride of pooling applications.\n ``stride=s`` and ``stride=(s, s)`` are equivalent. If ``None`` is\n specified, then it uses same stride as the pooling window size.\n pad (int or pair of ints): Spatial padding width for the input array.\n ``pad=p`` and ``pad=(p, p)`` are equivalent.\n outsize (None or pair of ints): Expected output size (height, width)\n of array after the operation. If ``None``, the size\n (height or width) is estimated from the size of input array\n in first batch with\n :func:`~chainer.utils.conv.get_deconv_outsize`.\n If outsize is not ``None``, the result of outsize applied to\n :func:`~chainer.utils.conv.get_conv_outsize` must be equal to\n the shape of the 2d array in the input batch ``x``.\n cover_all (bool): If ``True``, all spatial locations are pooled\n into some output pixels, and the output size is larger than that\n when cover_all is ``False``.\n\n Returns:\n ~chainer.Variable: Output variable.\n\n \"\"\"\n return Unpooling2D(ksize, stride, pad, outsize, cover_all)(x)\n", "path": "chainer/functions/pooling/unpooling_2d.py"}]} | 2,064 | 271 |
gh_patches_debug_25164 | rasdani/github-patches | git_diff | Kinto__kinto-930 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'set' object has no attribute 'extend'
```
  File "kinto/views/permissions.py", line 107, in get_records
    perms_by_object_uri.setdefault(subobj_uri, []).extend(resource_perms)
AttributeError: 'set' object has no attribute 'extend'
```
</issue>
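The traceback is plain Python behaviour rather than anything Kinto-specific: `set` objects have no `extend` method, and the in-place analogue is `update` (or the `|=` operator). A minimal illustration:

```
perms = set()
try:
    perms.extend(["read", "write"])  # what the traceback above hits
except AttributeError as exc:
    print(exc)                       # 'set' object has no attribute 'extend'

perms.update(["read", "write"])      # the set equivalent of list.extend
print(sorted(perms))                 # ['read', 'write']
```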
<code>
[start of kinto/views/permissions.py]
1 import colander
2 from pyramid.security import NO_PERMISSION_REQUIRED, Authenticated
3 from pyramid.settings import aslist
4
5 from kinto.authorization import PERMISSIONS_INHERITANCE_TREE
6 from kinto.core import utils as core_utils, resource
7 from kinto.core.storage.memory import extract_record_set
8
9
10 def allowed_from_settings(settings, principals):
11 """Returns every permissions allowed from settings for the current user.
12 :param settings dict: app settings
13 :param principals list: list of principals of current user
14 :rtype: dict
15
16 Result example::
17
18 {
19 "bucket": {"write", "collection:create"},
20 "collection": {"read"}
21 }
22
23 XXX: This helper will be useful for Kinto/kinto#894
24 """
25 perms_settings = {k: aslist(v) for k, v in settings.items()
26 if k.endswith('_principals')}
27 from_settings = {}
28 for key, allowed_principals in perms_settings.items():
29 resource_name, permission, _ = key.split('_')
30 # Keep the known permissions only.
31 if resource_name not in PERMISSIONS_INHERITANCE_TREE.keys():
32 continue
33 # Keep the permissions of the current user only.
34 if not bool(set(principals) & set(allowed_principals)):
35 continue
36 # ``collection_create_principals`` means ``collection:create`` in bucket.
37 if permission == 'create':
38 permission = '%s:%s' % (resource_name, permission)
39 resource_name = { # resource parents.
40 'bucket': '',
41 'collection': 'bucket',
42 'group': 'bucket',
43 'record': 'collection'}[resource_name]
44 # Store them in a convenient way.
45 from_settings.setdefault(resource_name, set()).add(permission)
46 return from_settings
47
48
49 class PermissionsModel(object):
50 id_field = 'id'
51 modified_field = 'last_modified'
52 deleted_field = 'deleted'
53
54 def __init__(self, request):
55 self.request = request
56
57 def get_records(self, filters=None, sorting=None, pagination_rules=None,
58 limit=None, include_deleted=False, parent_id=None):
59 # Invert the permissions inheritance tree.
60 perms_descending_tree = {}
61 for on_resource, tree in PERMISSIONS_INHERITANCE_TREE.items():
62 for obtained_perm, obtained_from in tree.items():
63 for from_resource, perms in obtained_from.items():
64 for perm in perms:
65 perms_descending_tree.setdefault(from_resource, {})\
66 .setdefault(perm, {})\
67 .setdefault(on_resource, set())\
68 .add(obtained_perm)
69
70 # Obtain current principals.
71 principals = self.request.effective_principals
72 if Authenticated in principals:
73 # Since this view does not require any permission (can be used to
74 # obtain public users permissions), we have to add the prefixed
75 # userid among the principals
76 # (see :mod:`kinto.core.authentication`)
77 userid = self.request.prefixed_userid
78 principals.append(userid)
79
80 # Query every possible permission of the current user from backend.
81 backend = self.request.registry.permission
82 perms_by_object_uri = backend.get_accessible_objects(principals)
83
84 # Check settings for every allowed resources.
85 from_settings = allowed_from_settings(self.request.registry.settings, principals)
86
87 # Expand permissions obtained from backend with the object URIs that
88 # correspond to permissions allowed from settings.
89 allowed_resources = {'bucket', 'collection', 'group'} & set(from_settings.keys())
90 if allowed_resources:
91 storage = self.request.registry.storage
92 every_bucket, _ = storage.get_all(parent_id='', collection_id='bucket')
93 for bucket in every_bucket:
94 bucket_uri = '/buckets/{id}'.format(**bucket)
95 for res in allowed_resources:
96 resource_perms = from_settings[res]
97 # Bucket is always fetched.
98 if res == 'bucket':
99 perms_by_object_uri.setdefault(bucket_uri, []).extend(resource_perms)
100 continue
101 # Fetch bucket collections and groups.
102 # XXX: wrong approach: query in a loop!
103 every_subobjects, _ = storage.get_all(parent_id=bucket_uri,
104 collection_id=res)
105 for subobject in every_subobjects:
106 subobj_uri = bucket_uri + '/{0}s/{1}'.format(res, subobject['id'])
107 perms_by_object_uri.setdefault(subobj_uri, []).extend(resource_perms)
108
109 entries = []
110 for object_uri, perms in perms_by_object_uri.items():
111 try:
112 # Obtain associated res from object URI
113 resource_name, matchdict = core_utils.view_lookup(self.request,
114 object_uri)
115 except ValueError:
116 # Skip permissions entries that are not linked to an object URI
117 continue
118
119 # For consistency with event payloads, prefix id with resource name
120 matchdict[resource_name + '_id'] = matchdict.get('id')
121
122 # Expand implicit permissions using descending tree.
123 permissions = set(perms)
124 for perm in perms:
125 obtained = perms_descending_tree[resource_name][perm]
126 # Related to same resource only and not every sub-objects.
127 # (e.g "bucket:write" gives "bucket:read" but not "group:read")
128 permissions |= obtained[resource_name]
129
130 entry = dict(uri=object_uri,
131 resource_name=resource_name,
132 permissions=list(permissions),
133 **matchdict)
134 entries.append(entry)
135
136 return extract_record_set(entries, filters=filters, sorting=sorting,
137 pagination_rules=pagination_rules,
138 limit=limit)
139
140
141 class PermissionsSchema(resource.ResourceSchema):
142 uri = colander.SchemaNode(colander.String())
143 resource_name = colander.SchemaNode(colander.String())
144 permissions = colander.Sequence(colander.SchemaNode(colander.String()))
145 bucket_id = colander.SchemaNode(colander.String())
146 collection_id = colander.SchemaNode(colander.String(),
147 missing=colander.drop)
148 group_id = colander.SchemaNode(colander.String(),
149 missing=colander.drop)
150 record_id = colander.SchemaNode(colander.String(),
151 missing=colander.drop)
152
153 class Options:
154 preserve_unknown = False
155
156
157 @resource.register(name='permissions',
158 description='List of user permissions',
159 collection_path='/permissions',
160 record_path=None,
161 collection_methods=('GET',),
162 permission=NO_PERMISSION_REQUIRED)
163 class Permissions(resource.ShareableResource):
164
165 schema = PermissionsSchema
166
167 def __init__(self, request, context=None):
168 super(Permissions, self).__init__(request, context)
169 self.model = PermissionsModel(request)
170
171 def _extract_sorting(self, limit):
172 # Permissions entries are not stored with timestamp, so do not
173 # force it.
174 result = super(Permissions, self)._extract_sorting(limit)
175 without_last_modified = [s for s in result
176 if s.field != self.model.modified_field]
177 return without_last_modified
178
179 def _extract_filters(self, queryparams=None):
180 result = super(Permissions, self)._extract_filters(queryparams)
181 without_last_modified = [s for s in result
182 if s.field != self.model.modified_field]
183 return without_last_modified
184
[end of kinto/views/permissions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/views/permissions.py b/kinto/views/permissions.py
--- a/kinto/views/permissions.py
+++ b/kinto/views/permissions.py
@@ -96,7 +96,7 @@
resource_perms = from_settings[res]
# Bucket is always fetched.
if res == 'bucket':
- perms_by_object_uri.setdefault(bucket_uri, []).extend(resource_perms)
+ perms_by_object_uri.setdefault(bucket_uri, set()).update(resource_perms)
continue
# Fetch bucket collections and groups.
# XXX: wrong approach: query in a loop!
@@ -104,7 +104,7 @@
collection_id=res)
for subobject in every_subobjects:
subobj_uri = bucket_uri + '/{0}s/{1}'.format(res, subobject['id'])
- perms_by_object_uri.setdefault(subobj_uri, []).extend(resource_perms)
+ perms_by_object_uri.setdefault(subobj_uri, set()).update(resource_perms)
entries = []
for object_uri, perms in perms_by_object_uri.items():
| {"golden_diff": "diff --git a/kinto/views/permissions.py b/kinto/views/permissions.py\n--- a/kinto/views/permissions.py\n+++ b/kinto/views/permissions.py\n@@ -96,7 +96,7 @@\n resource_perms = from_settings[res]\n # Bucket is always fetched.\n if res == 'bucket':\n- perms_by_object_uri.setdefault(bucket_uri, []).extend(resource_perms)\n+ perms_by_object_uri.setdefault(bucket_uri, set()).update(resource_perms)\n continue\n # Fetch bucket collections and groups.\n # XXX: wrong approach: query in a loop!\n@@ -104,7 +104,7 @@\n collection_id=res)\n for subobject in every_subobjects:\n subobj_uri = bucket_uri + '/{0}s/{1}'.format(res, subobject['id'])\n- perms_by_object_uri.setdefault(subobj_uri, []).extend(resource_perms)\n+ perms_by_object_uri.setdefault(subobj_uri, set()).update(resource_perms)\n \n entries = []\n for object_uri, perms in perms_by_object_uri.items():\n", "issue": "'set' object has no attribute extends\n```kinto\\/views\\/permissions.py\\\", line 107, in get_records\\n perms_by_object_uri.setdefault(subobj_uri, []).extend(resource_perms)\\nAttributeError: 'set' object has no attribute 'extend'\"```\r\n\r\n\n", "before_files": [{"content": "import colander\nfrom pyramid.security import NO_PERMISSION_REQUIRED, Authenticated\nfrom pyramid.settings import aslist\n\nfrom kinto.authorization import PERMISSIONS_INHERITANCE_TREE\nfrom kinto.core import utils as core_utils, resource\nfrom kinto.core.storage.memory import extract_record_set\n\n\ndef allowed_from_settings(settings, principals):\n \"\"\"Returns every permissions allowed from settings for the current user.\n :param settings dict: app settings\n :param principals list: list of principals of current user\n :rtype: dict\n\n Result example::\n\n {\n \"bucket\": {\"write\", \"collection:create\"},\n \"collection\": {\"read\"}\n }\n\n XXX: This helper will be useful for Kinto/kinto#894\n \"\"\"\n perms_settings = {k: aslist(v) for k, v in settings.items()\n if k.endswith('_principals')}\n from_settings = {}\n for key, allowed_principals in perms_settings.items():\n resource_name, permission, _ = key.split('_')\n # Keep the known permissions only.\n if resource_name not in PERMISSIONS_INHERITANCE_TREE.keys():\n continue\n # Keep the permissions of the current user only.\n if not bool(set(principals) & set(allowed_principals)):\n continue\n # ``collection_create_principals`` means ``collection:create`` in bucket.\n if permission == 'create':\n permission = '%s:%s' % (resource_name, permission)\n resource_name = { # resource parents.\n 'bucket': '',\n 'collection': 'bucket',\n 'group': 'bucket',\n 'record': 'collection'}[resource_name]\n # Store them in a convenient way.\n from_settings.setdefault(resource_name, set()).add(permission)\n return from_settings\n\n\nclass PermissionsModel(object):\n id_field = 'id'\n modified_field = 'last_modified'\n deleted_field = 'deleted'\n\n def __init__(self, request):\n self.request = request\n\n def get_records(self, filters=None, sorting=None, pagination_rules=None,\n limit=None, include_deleted=False, parent_id=None):\n # Invert the permissions inheritance tree.\n perms_descending_tree = {}\n for on_resource, tree in PERMISSIONS_INHERITANCE_TREE.items():\n for obtained_perm, obtained_from in tree.items():\n for from_resource, perms in obtained_from.items():\n for perm in perms:\n perms_descending_tree.setdefault(from_resource, {})\\\n .setdefault(perm, {})\\\n .setdefault(on_resource, set())\\\n .add(obtained_perm)\n\n # Obtain current principals.\n principals = 
self.request.effective_principals\n if Authenticated in principals:\n # Since this view does not require any permission (can be used to\n # obtain public users permissions), we have to add the prefixed\n # userid among the principals\n # (see :mod:`kinto.core.authentication`)\n userid = self.request.prefixed_userid\n principals.append(userid)\n\n # Query every possible permission of the current user from backend.\n backend = self.request.registry.permission\n perms_by_object_uri = backend.get_accessible_objects(principals)\n\n # Check settings for every allowed resources.\n from_settings = allowed_from_settings(self.request.registry.settings, principals)\n\n # Expand permissions obtained from backend with the object URIs that\n # correspond to permissions allowed from settings.\n allowed_resources = {'bucket', 'collection', 'group'} & set(from_settings.keys())\n if allowed_resources:\n storage = self.request.registry.storage\n every_bucket, _ = storage.get_all(parent_id='', collection_id='bucket')\n for bucket in every_bucket:\n bucket_uri = '/buckets/{id}'.format(**bucket)\n for res in allowed_resources:\n resource_perms = from_settings[res]\n # Bucket is always fetched.\n if res == 'bucket':\n perms_by_object_uri.setdefault(bucket_uri, []).extend(resource_perms)\n continue\n # Fetch bucket collections and groups.\n # XXX: wrong approach: query in a loop!\n every_subobjects, _ = storage.get_all(parent_id=bucket_uri,\n collection_id=res)\n for subobject in every_subobjects:\n subobj_uri = bucket_uri + '/{0}s/{1}'.format(res, subobject['id'])\n perms_by_object_uri.setdefault(subobj_uri, []).extend(resource_perms)\n\n entries = []\n for object_uri, perms in perms_by_object_uri.items():\n try:\n # Obtain associated res from object URI\n resource_name, matchdict = core_utils.view_lookup(self.request,\n object_uri)\n except ValueError:\n # Skip permissions entries that are not linked to an object URI\n continue\n\n # For consistency with event payloads, prefix id with resource name\n matchdict[resource_name + '_id'] = matchdict.get('id')\n\n # Expand implicit permissions using descending tree.\n permissions = set(perms)\n for perm in perms:\n obtained = perms_descending_tree[resource_name][perm]\n # Related to same resource only and not every sub-objects.\n # (e.g \"bucket:write\" gives \"bucket:read\" but not \"group:read\")\n permissions |= obtained[resource_name]\n\n entry = dict(uri=object_uri,\n resource_name=resource_name,\n permissions=list(permissions),\n **matchdict)\n entries.append(entry)\n\n return extract_record_set(entries, filters=filters, sorting=sorting,\n pagination_rules=pagination_rules,\n limit=limit)\n\n\nclass PermissionsSchema(resource.ResourceSchema):\n uri = colander.SchemaNode(colander.String())\n resource_name = colander.SchemaNode(colander.String())\n permissions = colander.Sequence(colander.SchemaNode(colander.String()))\n bucket_id = colander.SchemaNode(colander.String())\n collection_id = colander.SchemaNode(colander.String(),\n missing=colander.drop)\n group_id = colander.SchemaNode(colander.String(),\n missing=colander.drop)\n record_id = colander.SchemaNode(colander.String(),\n missing=colander.drop)\n\n class Options:\n preserve_unknown = False\n\n\[email protected](name='permissions',\n description='List of user permissions',\n collection_path='/permissions',\n record_path=None,\n collection_methods=('GET',),\n permission=NO_PERMISSION_REQUIRED)\nclass Permissions(resource.ShareableResource):\n\n schema = PermissionsSchema\n\n def __init__(self, request, 
context=None):\n super(Permissions, self).__init__(request, context)\n self.model = PermissionsModel(request)\n\n def _extract_sorting(self, limit):\n # Permissions entries are not stored with timestamp, so do not\n # force it.\n result = super(Permissions, self)._extract_sorting(limit)\n without_last_modified = [s for s in result\n if s.field != self.model.modified_field]\n return without_last_modified\n\n def _extract_filters(self, queryparams=None):\n result = super(Permissions, self)._extract_filters(queryparams)\n without_last_modified = [s for s in result\n if s.field != self.model.modified_field]\n return without_last_modified\n", "path": "kinto/views/permissions.py"}]} | 2,554 | 226 |
gh_patches_debug_12576 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-1116 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Minor discrepancy in AveragePrecision documentation for `average="none"`
## 📚 Documentation
The [documentation](https://torchmetrics.readthedocs.io/en/latest/classification/average_precision.html) for the `torchmetrics.AveragePrecision` class and `torchmetrics.functional.average_precision()` function states that setting `average="none"` is permitted. However, the source code only seems to allow `average=None` (see [here](https://github.com/Lightning-AI/metrics/blob/master/src/torchmetrics/classification/avg_precision.py#L98) and [here](https://github.com/Lightning-AI/metrics/blob/master/src/torchmetrics/functional/classification/average_precision.py#L175)).
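A minimal way to see the mismatch, reusing the multiclass example from the functional docstring (the expected error is read off the source listing below, not re-run here):

```python
import torch
from torchmetrics.functional import average_precision

pred = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
                     [0.05, 0.75, 0.05, 0.05, 0.05],
                     [0.05, 0.05, 0.75, 0.05, 0.05],
                     [0.05, 0.05, 0.05, 0.75, 0.05]])
target = torch.tensor([0, 1, 3, 2])

average_precision(pred, target, num_classes=5, average=None)    # works: returns per-class scores
average_precision(pred, target, num_classes=5, average="none")  # ValueError from the allowed_average check
```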
I'd be happy to submit a PR for this but I would like some feedback on how to best resolve this, since I am not familiar with the design of this library. The two immediate directions I can think of are editing the documentation to only allow `average=None` or editing the source code to support `average="none"`.
Thanks!
</issue>
<code>
[start of src/torchmetrics/functional/classification/average_precision.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import warnings
15 from typing import List, Optional, Tuple, Union
16
17 import torch
18 from torch import Tensor
19
20 from torchmetrics.functional.classification.precision_recall_curve import (
21 _precision_recall_curve_compute,
22 _precision_recall_curve_update,
23 )
24 from torchmetrics.utilities.data import _bincount
25
26
27 def _average_precision_update(
28 preds: Tensor,
29 target: Tensor,
30 num_classes: Optional[int] = None,
31 pos_label: Optional[int] = None,
32 average: Optional[str] = "macro",
33 ) -> Tuple[Tensor, Tensor, int, Optional[int]]:
34 """Format the predictions and target based on the ``num_classes``, ``pos_label`` and ``average`` parameter.
35
36 Args:
37 preds: predictions from model (logits or probabilities)
38 target: ground truth values
39 num_classes: integer with number of classes.
40 pos_label: integer determining the positive class. Default is ``None`` which for binary problem is translated
41 to 1. For multiclass problems this argument should not be set as we iteratively change it in the
42 range ``[0, num_classes-1]``
43 average: reduction method for multi-class or multi-label problems
44 """
45 preds, target, num_classes, pos_label = _precision_recall_curve_update(preds, target, num_classes, pos_label)
46 if average == "micro" and preds.ndim != target.ndim:
47 raise ValueError("Cannot use `micro` average with multi-class input")
48
49 return preds, target, num_classes, pos_label
50
51
52 def _average_precision_compute(
53 preds: Tensor,
54 target: Tensor,
55 num_classes: int,
56 pos_label: Optional[int] = None,
57 average: Optional[str] = "macro",
58 ) -> Union[List[Tensor], Tensor]:
59 """Computes the average precision score.
60
61 Args:
62 preds: predictions from model (logits or probabilities)
63 target: ground truth values
64 num_classes: integer with number of classes.
65 pos_label: integer determining the positive class. Default is ``None`` which for binary problem is translated
66 to 1. For multiclass problems his argument should not be set as we iteratively change it in the
67 range ``[0, num_classes-1]``
68 average: reduction method for multi-class or multi-label problems
69
70 Example:
71 >>> # binary case
72 >>> preds = torch.tensor([0, 1, 2, 3])
73 >>> target = torch.tensor([0, 1, 1, 1])
74 >>> pos_label = 1
75 >>> preds, target, num_classes, pos_label = _average_precision_update(preds, target, pos_label=pos_label)
76 >>> _average_precision_compute(preds, target, num_classes, pos_label)
77 tensor(1.)
78
79 >>> # multiclass case
80 >>> preds = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
81 ... [0.05, 0.75, 0.05, 0.05, 0.05],
82 ... [0.05, 0.05, 0.75, 0.05, 0.05],
83 ... [0.05, 0.05, 0.05, 0.75, 0.05]])
84 >>> target = torch.tensor([0, 1, 3, 2])
85 >>> num_classes = 5
86 >>> preds, target, num_classes, pos_label = _average_precision_update(preds, target, num_classes)
87 >>> _average_precision_compute(preds, target, num_classes, average=None)
88 [tensor(1.), tensor(1.), tensor(0.2500), tensor(0.2500), tensor(nan)]
89 """
90
91 if average == "micro" and preds.ndim == target.ndim:
92 preds = preds.flatten()
93 target = target.flatten()
94 num_classes = 1
95
96 precision, recall, _ = _precision_recall_curve_compute(preds, target, num_classes, pos_label)
97 if average == "weighted":
98 if preds.ndim == target.ndim and target.ndim > 1:
99 weights = target.sum(dim=0).float()
100 else:
101 weights = _bincount(target, minlength=num_classes).float()
102 weights = weights / torch.sum(weights)
103 else:
104 weights = None
105 return _average_precision_compute_with_precision_recall(precision, recall, num_classes, average, weights)
106
107
108 def _average_precision_compute_with_precision_recall(
109 precision: Tensor,
110 recall: Tensor,
111 num_classes: int,
112 average: Optional[str] = "macro",
113 weights: Optional[Tensor] = None,
114 ) -> Union[List[Tensor], Tensor]:
115 """Computes the average precision score from precision and recall.
116
117 Args:
118 precision: precision values
119 recall: recall values
120 num_classes: integer with number of classes. Not nessesary to provide
121 for binary problems.
122 average: reduction method for multi-class or multi-label problems
123 weights: weights to use when average='weighted'
124
125 Example:
126 >>> # binary case
127 >>> preds = torch.tensor([0, 1, 2, 3])
128 >>> target = torch.tensor([0, 1, 1, 1])
129 >>> pos_label = 1
130 >>> preds, target, num_classes, pos_label = _average_precision_update(preds, target, pos_label=pos_label)
131 >>> precision, recall, _ = _precision_recall_curve_compute(preds, target, num_classes, pos_label)
132 >>> _average_precision_compute_with_precision_recall(precision, recall, num_classes, average=None)
133 tensor(1.)
134
135 >>> # multiclass case
136 >>> preds = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
137 ... [0.05, 0.75, 0.05, 0.05, 0.05],
138 ... [0.05, 0.05, 0.75, 0.05, 0.05],
139 ... [0.05, 0.05, 0.05, 0.75, 0.05]])
140 >>> target = torch.tensor([0, 1, 3, 2])
141 >>> num_classes = 5
142 >>> preds, target, num_classes, pos_label = _average_precision_update(preds, target, num_classes)
143 >>> precision, recall, _ = _precision_recall_curve_compute(preds, target, num_classes)
144 >>> _average_precision_compute_with_precision_recall(precision, recall, num_classes, average=None)
145 [tensor(1.), tensor(1.), tensor(0.2500), tensor(0.2500), tensor(nan)]
146 """
147
148 # Return the step function integral
149 # The following works because the last entry of precision is
150 # guaranteed to be 1, as returned by precision_recall_curve
151 if num_classes == 1:
152 return -torch.sum((recall[1:] - recall[:-1]) * precision[:-1])
153
154 res = []
155 for p, r in zip(precision, recall):
156 res.append(-torch.sum((r[1:] - r[:-1]) * p[:-1]))
157
158 # Reduce
159 if average in ("macro", "weighted"):
160 res = torch.stack(res)
161 if torch.isnan(res).any():
162 warnings.warn(
163 "Average precision score for one or more classes was `nan`. Ignoring these classes in average",
164 UserWarning,
165 )
166 if average == "macro":
167 return res[~torch.isnan(res)].mean()
168 weights = torch.ones_like(res) if weights is None else weights
169 return (res * weights)[~torch.isnan(res)].sum()
170 if average is None:
171 return res
172 allowed_average = ("micro", "macro", "weighted", None)
173 raise ValueError(f"Expected argument `average` to be one of {allowed_average}" f" but got {average}")
174
175
176 def average_precision(
177 preds: Tensor,
178 target: Tensor,
179 num_classes: Optional[int] = None,
180 pos_label: Optional[int] = None,
181 average: Optional[str] = "macro",
182 ) -> Union[List[Tensor], Tensor]:
183 """Computes the average precision score.
184
185 Args:
186 preds: predictions from model (logits or probabilities)
187 target: ground truth values
188 num_classes: integer with number of classes. Not nessesary to provide
189 for binary problems.
190 pos_label: integer determining the positive class. Default is ``None`` which for binary problem is translated
191 to 1. For multiclass problems his argument should not be set as we iteratively change it in the
192 range ``[0, num_classes-1]``
193 average:
194 defines the reduction that is applied in the case of multiclass and multilabel input.
195 Should be one of the following:
196
197 - ``'macro'`` [default]: Calculate the metric for each class separately, and average the
198 metrics across classes (with equal weights for each class).
199 - ``'micro'``: Calculate the metric globally, across all samples and classes. Cannot be
200 used with multiclass input.
201 - ``'weighted'``: Calculate the metric for each class separately, and average the
202 metrics across classes, weighting each class by its support.
203 - ``'none'`` or ``None``: Calculate the metric for each class separately, and return
204 the metric for every class.
205
206 Returns:
207 tensor with average precision. If multiclass will return list
208 of such tensors, one for each class
209
210 Example (binary case):
211 >>> from torchmetrics.functional import average_precision
212 >>> pred = torch.tensor([0, 1, 2, 3])
213 >>> target = torch.tensor([0, 1, 1, 1])
214 >>> average_precision(pred, target, pos_label=1)
215 tensor(1.)
216
217 Example (multiclass case):
218 >>> pred = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
219 ... [0.05, 0.75, 0.05, 0.05, 0.05],
220 ... [0.05, 0.05, 0.75, 0.05, 0.05],
221 ... [0.05, 0.05, 0.05, 0.75, 0.05]])
222 >>> target = torch.tensor([0, 1, 3, 2])
223 >>> average_precision(pred, target, num_classes=5, average=None)
224 [tensor(1.), tensor(1.), tensor(0.2500), tensor(0.2500), tensor(nan)]
225 """
226 preds, target, num_classes, pos_label = _average_precision_update(preds, target, num_classes, pos_label, average)
227 return _average_precision_compute(preds, target, num_classes, pos_label, average)
228
[end of src/torchmetrics/functional/classification/average_precision.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/torchmetrics/functional/classification/average_precision.py b/src/torchmetrics/functional/classification/average_precision.py
--- a/src/torchmetrics/functional/classification/average_precision.py
+++ b/src/torchmetrics/functional/classification/average_precision.py
@@ -167,9 +167,9 @@
return res[~torch.isnan(res)].mean()
weights = torch.ones_like(res) if weights is None else weights
return (res * weights)[~torch.isnan(res)].sum()
- if average is None:
+ if average is None or average == "none":
return res
- allowed_average = ("micro", "macro", "weighted", None)
+ allowed_average = ("micro", "macro", "weighted", "none", None)
raise ValueError(f"Expected argument `average` to be one of {allowed_average}" f" but got {average}")
| {"golden_diff": "diff --git a/src/torchmetrics/functional/classification/average_precision.py b/src/torchmetrics/functional/classification/average_precision.py\n--- a/src/torchmetrics/functional/classification/average_precision.py\n+++ b/src/torchmetrics/functional/classification/average_precision.py\n@@ -167,9 +167,9 @@\n return res[~torch.isnan(res)].mean()\n weights = torch.ones_like(res) if weights is None else weights\n return (res * weights)[~torch.isnan(res)].sum()\n- if average is None:\n+ if average is None or average == \"none\":\n return res\n- allowed_average = (\"micro\", \"macro\", \"weighted\", None)\n+ allowed_average = (\"micro\", \"macro\", \"weighted\", \"none\", None)\n raise ValueError(f\"Expected argument `average` to be one of {allowed_average}\" f\" but got {average}\")\n", "issue": "Minor discrepancy in AveragePrecision documentation for `average=\"none\"`\n## \ud83d\udcda Documentation\r\n\r\nThe [documentation](https://torchmetrics.readthedocs.io/en/latest/classification/average_precision.html) for the `torchmetrics.AveragePrecision` class and `torchmetrics.functional.average_precision()` function state that setting `average=\"none\"` is permitted. However, the source code only seems to allow `average=None` (see [here](https://github.com/Lightning-AI/metrics/blob/master/src/torchmetrics/classification/avg_precision.py#L98) and [here](https://github.com/Lightning-AI/metrics/blob/master/src/torchmetrics/functional/classification/average_precision.py#L175)).\r\n\r\nI'd be happy to submit a PR for this but I would like some feedback on how to best resolve this, since I am not familiar with the design of this library. The two immediate directions I can think of are editing the documentation to only allow `average=None` or editing the source code to support `average=\"none\"`.\r\n\r\nThanks!\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport warnings\nfrom typing import List, Optional, Tuple, Union\n\nimport torch\nfrom torch import Tensor\n\nfrom torchmetrics.functional.classification.precision_recall_curve import (\n _precision_recall_curve_compute,\n _precision_recall_curve_update,\n)\nfrom torchmetrics.utilities.data import _bincount\n\n\ndef _average_precision_update(\n preds: Tensor,\n target: Tensor,\n num_classes: Optional[int] = None,\n pos_label: Optional[int] = None,\n average: Optional[str] = \"macro\",\n) -> Tuple[Tensor, Tensor, int, Optional[int]]:\n \"\"\"Format the predictions and target based on the ``num_classes``, ``pos_label`` and ``average`` parameter.\n\n Args:\n preds: predictions from model (logits or probabilities)\n target: ground truth values\n num_classes: integer with number of classes.\n pos_label: integer determining the positive class. Default is ``None`` which for binary problem is translated\n to 1. 
For multiclass problems this argument should not be set as we iteratively change it in the\n range ``[0, num_classes-1]``\n average: reduction method for multi-class or multi-label problems\n \"\"\"\n preds, target, num_classes, pos_label = _precision_recall_curve_update(preds, target, num_classes, pos_label)\n if average == \"micro\" and preds.ndim != target.ndim:\n raise ValueError(\"Cannot use `micro` average with multi-class input\")\n\n return preds, target, num_classes, pos_label\n\n\ndef _average_precision_compute(\n preds: Tensor,\n target: Tensor,\n num_classes: int,\n pos_label: Optional[int] = None,\n average: Optional[str] = \"macro\",\n) -> Union[List[Tensor], Tensor]:\n \"\"\"Computes the average precision score.\n\n Args:\n preds: predictions from model (logits or probabilities)\n target: ground truth values\n num_classes: integer with number of classes.\n pos_label: integer determining the positive class. Default is ``None`` which for binary problem is translated\n to 1. For multiclass problems his argument should not be set as we iteratively change it in the\n range ``[0, num_classes-1]``\n average: reduction method for multi-class or multi-label problems\n\n Example:\n >>> # binary case\n >>> preds = torch.tensor([0, 1, 2, 3])\n >>> target = torch.tensor([0, 1, 1, 1])\n >>> pos_label = 1\n >>> preds, target, num_classes, pos_label = _average_precision_update(preds, target, pos_label=pos_label)\n >>> _average_precision_compute(preds, target, num_classes, pos_label)\n tensor(1.)\n\n >>> # multiclass case\n >>> preds = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],\n ... [0.05, 0.75, 0.05, 0.05, 0.05],\n ... [0.05, 0.05, 0.75, 0.05, 0.05],\n ... [0.05, 0.05, 0.05, 0.75, 0.05]])\n >>> target = torch.tensor([0, 1, 3, 2])\n >>> num_classes = 5\n >>> preds, target, num_classes, pos_label = _average_precision_update(preds, target, num_classes)\n >>> _average_precision_compute(preds, target, num_classes, average=None)\n [tensor(1.), tensor(1.), tensor(0.2500), tensor(0.2500), tensor(nan)]\n \"\"\"\n\n if average == \"micro\" and preds.ndim == target.ndim:\n preds = preds.flatten()\n target = target.flatten()\n num_classes = 1\n\n precision, recall, _ = _precision_recall_curve_compute(preds, target, num_classes, pos_label)\n if average == \"weighted\":\n if preds.ndim == target.ndim and target.ndim > 1:\n weights = target.sum(dim=0).float()\n else:\n weights = _bincount(target, minlength=num_classes).float()\n weights = weights / torch.sum(weights)\n else:\n weights = None\n return _average_precision_compute_with_precision_recall(precision, recall, num_classes, average, weights)\n\n\ndef _average_precision_compute_with_precision_recall(\n precision: Tensor,\n recall: Tensor,\n num_classes: int,\n average: Optional[str] = \"macro\",\n weights: Optional[Tensor] = None,\n) -> Union[List[Tensor], Tensor]:\n \"\"\"Computes the average precision score from precision and recall.\n\n Args:\n precision: precision values\n recall: recall values\n num_classes: integer with number of classes. 
Not nessesary to provide\n for binary problems.\n average: reduction method for multi-class or multi-label problems\n weights: weights to use when average='weighted'\n\n Example:\n >>> # binary case\n >>> preds = torch.tensor([0, 1, 2, 3])\n >>> target = torch.tensor([0, 1, 1, 1])\n >>> pos_label = 1\n >>> preds, target, num_classes, pos_label = _average_precision_update(preds, target, pos_label=pos_label)\n >>> precision, recall, _ = _precision_recall_curve_compute(preds, target, num_classes, pos_label)\n >>> _average_precision_compute_with_precision_recall(precision, recall, num_classes, average=None)\n tensor(1.)\n\n >>> # multiclass case\n >>> preds = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],\n ... [0.05, 0.75, 0.05, 0.05, 0.05],\n ... [0.05, 0.05, 0.75, 0.05, 0.05],\n ... [0.05, 0.05, 0.05, 0.75, 0.05]])\n >>> target = torch.tensor([0, 1, 3, 2])\n >>> num_classes = 5\n >>> preds, target, num_classes, pos_label = _average_precision_update(preds, target, num_classes)\n >>> precision, recall, _ = _precision_recall_curve_compute(preds, target, num_classes)\n >>> _average_precision_compute_with_precision_recall(precision, recall, num_classes, average=None)\n [tensor(1.), tensor(1.), tensor(0.2500), tensor(0.2500), tensor(nan)]\n \"\"\"\n\n # Return the step function integral\n # The following works because the last entry of precision is\n # guaranteed to be 1, as returned by precision_recall_curve\n if num_classes == 1:\n return -torch.sum((recall[1:] - recall[:-1]) * precision[:-1])\n\n res = []\n for p, r in zip(precision, recall):\n res.append(-torch.sum((r[1:] - r[:-1]) * p[:-1]))\n\n # Reduce\n if average in (\"macro\", \"weighted\"):\n res = torch.stack(res)\n if torch.isnan(res).any():\n warnings.warn(\n \"Average precision score for one or more classes was `nan`. Ignoring these classes in average\",\n UserWarning,\n )\n if average == \"macro\":\n return res[~torch.isnan(res)].mean()\n weights = torch.ones_like(res) if weights is None else weights\n return (res * weights)[~torch.isnan(res)].sum()\n if average is None:\n return res\n allowed_average = (\"micro\", \"macro\", \"weighted\", None)\n raise ValueError(f\"Expected argument `average` to be one of {allowed_average}\" f\" but got {average}\")\n\n\ndef average_precision(\n preds: Tensor,\n target: Tensor,\n num_classes: Optional[int] = None,\n pos_label: Optional[int] = None,\n average: Optional[str] = \"macro\",\n) -> Union[List[Tensor], Tensor]:\n \"\"\"Computes the average precision score.\n\n Args:\n preds: predictions from model (logits or probabilities)\n target: ground truth values\n num_classes: integer with number of classes. Not nessesary to provide\n for binary problems.\n pos_label: integer determining the positive class. Default is ``None`` which for binary problem is translated\n to 1. For multiclass problems his argument should not be set as we iteratively change it in the\n range ``[0, num_classes-1]``\n average:\n defines the reduction that is applied in the case of multiclass and multilabel input.\n Should be one of the following:\n\n - ``'macro'`` [default]: Calculate the metric for each class separately, and average the\n metrics across classes (with equal weights for each class).\n - ``'micro'``: Calculate the metric globally, across all samples and classes. 
Cannot be\n used with multiclass input.\n - ``'weighted'``: Calculate the metric for each class separately, and average the\n metrics across classes, weighting each class by its support.\n - ``'none'`` or ``None``: Calculate the metric for each class separately, and return\n the metric for every class.\n\n Returns:\n tensor with average precision. If multiclass will return list\n of such tensors, one for each class\n\n Example (binary case):\n >>> from torchmetrics.functional import average_precision\n >>> pred = torch.tensor([0, 1, 2, 3])\n >>> target = torch.tensor([0, 1, 1, 1])\n >>> average_precision(pred, target, pos_label=1)\n tensor(1.)\n\n Example (multiclass case):\n >>> pred = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],\n ... [0.05, 0.75, 0.05, 0.05, 0.05],\n ... [0.05, 0.05, 0.75, 0.05, 0.05],\n ... [0.05, 0.05, 0.05, 0.75, 0.05]])\n >>> target = torch.tensor([0, 1, 3, 2])\n >>> average_precision(pred, target, num_classes=5, average=None)\n [tensor(1.), tensor(1.), tensor(0.2500), tensor(0.2500), tensor(nan)]\n \"\"\"\n preds, target, num_classes, pos_label = _average_precision_update(preds, target, num_classes, pos_label, average)\n return _average_precision_compute(preds, target, num_classes, pos_label, average)\n", "path": "src/torchmetrics/functional/classification/average_precision.py"}]} | 3,919 | 197 |
gh_patches_debug_11166 | rasdani/github-patches | git_diff | DataDog__dd-agent-2443 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[marathon] Marathon plugin slows down agent when marathon has many apps running
We are monitoring a marathon framework using datadog which has over 150 apps, and the marathon check seems to be slowing down the entire datadog process.
After investigating what the plugin actually does, the problem seems to be this loop: https://github.com/DataDog/dd-agent/blob/5.4.4/checks.d/marathon.py#L46. It appears that the agent is sequentially hitting the API 150 times, which is enough to stop the agent from reporting metrics long enough to trigger some of our other alerts.
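Some back-of-the-envelope numbers (assuming roughly 200 ms per HTTP round trip, an assumption rather than a measurement): 150 sequential calls to `/v2/apps/<app_id>/versions` come to about 30 seconds of blocking I/O per check run, on top of the initial `/v2/apps` request, which is consistent with the agent pausing long enough to trip unrelated alerts.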
</issue>
<code>
[start of checks.d/marathon.py]
1 # (C) Datadog, Inc. 2014-2016
2 # (C) graemej <[email protected]> 2014
3 # All rights reserved
4 # Licensed under Simplified BSD License (see LICENSE)
5
6
7 # stdlib
8 from urlparse import urljoin
9
10 # 3rd party
11 import requests
12
13 # project
14 from checks import AgentCheck
15
16
17 class Marathon(AgentCheck):
18
19 DEFAULT_TIMEOUT = 5
20 SERVICE_CHECK_NAME = 'marathon.can_connect'
21
22 APP_METRICS = [
23 'backoffFactor',
24 'backoffSeconds',
25 'cpus',
26 'disk',
27 'instances',
28 'mem',
29 'taskRateLimit',
30 'tasksRunning',
31 'tasksStaged'
32 ]
33
34 def check(self, instance):
35 if 'url' not in instance:
36 raise Exception('Marathon instance missing "url" value.')
37
38 # Load values from the instance config
39 url = instance['url']
40 user = instance.get('user')
41 password = instance.get('password')
42 if user is not None and password is not None:
43 auth = (user,password)
44 else:
45 auth = None
46 instance_tags = instance.get('tags', [])
47 default_timeout = self.init_config.get('default_timeout', self.DEFAULT_TIMEOUT)
48 timeout = float(instance.get('timeout', default_timeout))
49
50 response = self.get_json(urljoin(url, "/v2/apps"), timeout, auth)
51 if response is not None:
52 self.gauge('marathon.apps', len(response['apps']), tags=instance_tags)
53 for app in response['apps']:
54 tags = ['app_id:' + app['id'], 'version:' + app['version']] + instance_tags
55 for attr in self.APP_METRICS:
56 if attr in app:
57 self.gauge('marathon.' + attr, app[attr], tags=tags)
58
59 query_url = urljoin(url, "/v2/apps/{0}/versions".format(app['id']))
60 versions_reply = self.get_json(query_url, timeout, auth)
61
62 if versions_reply is not None:
63 self.gauge('marathon.versions', len(versions_reply['versions']), tags=tags)
64
65 def get_json(self, url, timeout, auth):
66 try:
67 r = requests.get(url, timeout=timeout, auth=auth)
68 r.raise_for_status()
69 except requests.exceptions.Timeout:
70 # If there's a timeout
71 self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.CRITICAL,
72 message='%s timed out after %s seconds.' % (url, timeout),
73 tags = ["url:{0}".format(url)])
74 raise Exception("Timeout when hitting %s" % url)
75
76 except requests.exceptions.HTTPError:
77 self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.CRITICAL,
78 message='%s returned a status of %s' % (url, r.status_code),
79 tags = ["url:{0}".format(url)])
80 raise Exception("Got %s when hitting %s" % (r.status_code, url))
81
82 else:
83 self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.OK,
84 tags = ["url:{0}".format(url)]
85 )
86
87 return r.json()
88
[end of checks.d/marathon.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checks.d/marathon.py b/checks.d/marathon.py
--- a/checks.d/marathon.py
+++ b/checks.d/marathon.py
@@ -56,12 +56,6 @@
if attr in app:
self.gauge('marathon.' + attr, app[attr], tags=tags)
- query_url = urljoin(url, "/v2/apps/{0}/versions".format(app['id']))
- versions_reply = self.get_json(query_url, timeout, auth)
-
- if versions_reply is not None:
- self.gauge('marathon.versions', len(versions_reply['versions']), tags=tags)
-
def get_json(self, url, timeout, auth):
try:
r = requests.get(url, timeout=timeout, auth=auth)
| {"golden_diff": "diff --git a/checks.d/marathon.py b/checks.d/marathon.py\n--- a/checks.d/marathon.py\n+++ b/checks.d/marathon.py\n@@ -56,12 +56,6 @@\n if attr in app:\n self.gauge('marathon.' + attr, app[attr], tags=tags)\n \n- query_url = urljoin(url, \"/v2/apps/{0}/versions\".format(app['id']))\n- versions_reply = self.get_json(query_url, timeout, auth)\n-\n- if versions_reply is not None:\n- self.gauge('marathon.versions', len(versions_reply['versions']), tags=tags)\n-\n def get_json(self, url, timeout, auth):\n try:\n r = requests.get(url, timeout=timeout, auth=auth)\n", "issue": "[marathon] Marathon plugin slows down agent when marathon has many apps running\nWe are monitoring a marathon framework using datadog which has over 150 apps, and the marathon check seems to be slowing down the entire datadog process.\n\nAfter investigating what the plugin actually does, the problem seems to be this loop: https://github.com/DataDog/dd-agent/blob/5.4.4/checks.d/marathon.py#L46. It appears that the agent is sequentially hitting the API 150 times, which is enough to stop the agent from reporting metrics long enough to trigger some of our other alerts.\n\n", "before_files": [{"content": "# (C) Datadog, Inc. 2014-2016\n# (C) graemej <[email protected]> 2014\n# All rights reserved\n# Licensed under Simplified BSD License (see LICENSE)\n\n\n# stdlib\nfrom urlparse import urljoin\n\n# 3rd party\nimport requests\n\n# project\nfrom checks import AgentCheck\n\n\nclass Marathon(AgentCheck):\n\n DEFAULT_TIMEOUT = 5\n SERVICE_CHECK_NAME = 'marathon.can_connect'\n\n APP_METRICS = [\n 'backoffFactor',\n 'backoffSeconds',\n 'cpus',\n 'disk',\n 'instances',\n 'mem',\n 'taskRateLimit',\n 'tasksRunning',\n 'tasksStaged'\n ]\n\n def check(self, instance):\n if 'url' not in instance:\n raise Exception('Marathon instance missing \"url\" value.')\n\n # Load values from the instance config\n url = instance['url']\n user = instance.get('user')\n password = instance.get('password')\n if user is not None and password is not None:\n auth = (user,password)\n else:\n auth = None\n instance_tags = instance.get('tags', [])\n default_timeout = self.init_config.get('default_timeout', self.DEFAULT_TIMEOUT)\n timeout = float(instance.get('timeout', default_timeout))\n\n response = self.get_json(urljoin(url, \"/v2/apps\"), timeout, auth)\n if response is not None:\n self.gauge('marathon.apps', len(response['apps']), tags=instance_tags)\n for app in response['apps']:\n tags = ['app_id:' + app['id'], 'version:' + app['version']] + instance_tags\n for attr in self.APP_METRICS:\n if attr in app:\n self.gauge('marathon.' + attr, app[attr], tags=tags)\n\n query_url = urljoin(url, \"/v2/apps/{0}/versions\".format(app['id']))\n versions_reply = self.get_json(query_url, timeout, auth)\n\n if versions_reply is not None:\n self.gauge('marathon.versions', len(versions_reply['versions']), tags=tags)\n\n def get_json(self, url, timeout, auth):\n try:\n r = requests.get(url, timeout=timeout, auth=auth)\n r.raise_for_status()\n except requests.exceptions.Timeout:\n # If there's a timeout\n self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.CRITICAL,\n message='%s timed out after %s seconds.' 
% (url, timeout),\n tags = [\"url:{0}\".format(url)])\n raise Exception(\"Timeout when hitting %s\" % url)\n\n except requests.exceptions.HTTPError:\n self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.CRITICAL,\n message='%s returned a status of %s' % (url, r.status_code),\n tags = [\"url:{0}\".format(url)])\n raise Exception(\"Got %s when hitting %s\" % (r.status_code, url))\n\n else:\n self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.OK,\n tags = [\"url:{0}\".format(url)]\n )\n\n return r.json()\n", "path": "checks.d/marathon.py"}]} | 1,535 | 179 |
gh_patches_debug_31887 | rasdani/github-patches | git_diff | tobymao__sqlglot-1746 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Type error when converting datediff from redshift to trino
```
sql = "select datediff(week,'2009-01-01','2009-12-31')"
converted_sql = sqlglot.transpile(sql, read="redshift", write="trino")[0]
print(converted_sql)
SELECT DATE_DIFF('week', '2009-01-01', '2009-12-31')
```
Trino error: `Unexpected parameters (varchar(4), varchar(10), varchar(10)) for function date_diff. Expected: date_diff(varchar(x), date, date), date_diff(varchar(x), timestamp(p), timestamp(p)), date_diff(varchar(x), timestamp(p) with time zone, timestamp(p) with time zone), date_diff(varchar(x), time(p), time(p)), date_diff(varchar(x), time(p) with time zone, time(p) with time zone)'
`
Changing the SQL to `SELECT DATE_DIFF('week', DATE'2009-01-01', DATE'2009-12-31')` works in Trino
https://trino.io/docs/current/functions/datetime.html
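For reference, `DATE '2009-01-01'` is simply a typed literal, so a transpilation that wraps the string arguments in a date cast (for example `CAST('2009-01-01' AS DATE)`, assuming standard Trino casting) would equally satisfy the `date_diff(varchar(x), date, date)` signature from the error above.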
</issue>
<code>
[start of sqlglot/dialects/redshift.py]
1 from __future__ import annotations
2
3 import typing as t
4
5 from sqlglot import exp, transforms
6 from sqlglot.dialects.dialect import rename_func
7 from sqlglot.dialects.postgres import Postgres
8 from sqlglot.helper import seq_get
9 from sqlglot.tokens import TokenType
10
11
12 def _json_sql(self: Postgres.Generator, expression: exp.JSONExtract | exp.JSONExtractScalar) -> str:
13 return f'{self.sql(expression, "this")}."{expression.expression.name}"'
14
15
16 class Redshift(Postgres):
17 time_format = "'YYYY-MM-DD HH:MI:SS'"
18 time_mapping = {
19 **Postgres.time_mapping,
20 "MON": "%b",
21 "HH": "%H",
22 }
23
24 class Parser(Postgres.Parser):
25 FUNCTIONS = {
26 **Postgres.Parser.FUNCTIONS,
27 "DATEADD": lambda args: exp.DateAdd(
28 this=seq_get(args, 2),
29 expression=seq_get(args, 1),
30 unit=seq_get(args, 0),
31 ),
32 "DATEDIFF": lambda args: exp.DateDiff(
33 this=seq_get(args, 2),
34 expression=seq_get(args, 1),
35 unit=seq_get(args, 0),
36 ),
37 "NVL": exp.Coalesce.from_arg_list,
38 "STRTOL": exp.FromBase.from_arg_list,
39 }
40
41 CONVERT_TYPE_FIRST = True
42
43 def _parse_types(
44 self, check_func: bool = False, schema: bool = False
45 ) -> t.Optional[exp.Expression]:
46 this = super()._parse_types(check_func=check_func, schema=schema)
47
48 if (
49 isinstance(this, exp.DataType)
50 and this.is_type("varchar")
51 and this.expressions
52 and this.expressions[0].this == exp.column("MAX")
53 ):
54 this.set("expressions", [exp.Var(this="MAX")])
55
56 return this
57
58 class Tokenizer(Postgres.Tokenizer):
59 BIT_STRINGS = []
60 HEX_STRINGS = []
61 STRING_ESCAPES = ["\\"]
62
63 KEYWORDS = {
64 **Postgres.Tokenizer.KEYWORDS,
65 "HLLSKETCH": TokenType.HLLSKETCH,
66 "SUPER": TokenType.SUPER,
67 "SYSDATE": TokenType.CURRENT_TIMESTAMP,
68 "TIME": TokenType.TIMESTAMP,
69 "TIMETZ": TokenType.TIMESTAMPTZ,
70 "TOP": TokenType.TOP,
71 "UNLOAD": TokenType.COMMAND,
72 "VARBYTE": TokenType.VARBINARY,
73 }
74
75 # Redshift allows # to appear as a table identifier prefix
76 SINGLE_TOKENS = Postgres.Tokenizer.SINGLE_TOKENS.copy()
77 SINGLE_TOKENS.pop("#")
78
79 class Generator(Postgres.Generator):
80 LOCKING_READS_SUPPORTED = False
81 RENAME_TABLE_WITH_DB = False
82
83 TYPE_MAPPING = {
84 **Postgres.Generator.TYPE_MAPPING,
85 exp.DataType.Type.BINARY: "VARBYTE",
86 exp.DataType.Type.VARBINARY: "VARBYTE",
87 exp.DataType.Type.INT: "INTEGER",
88 }
89
90 PROPERTIES_LOCATION = {
91 **Postgres.Generator.PROPERTIES_LOCATION,
92 exp.LikeProperty: exp.Properties.Location.POST_WITH,
93 }
94
95 TRANSFORMS = {
96 **Postgres.Generator.TRANSFORMS,
97 exp.CurrentTimestamp: lambda self, e: "SYSDATE",
98 exp.DateAdd: lambda self, e: self.func(
99 "DATEADD", exp.var(e.text("unit") or "day"), e.expression, e.this
100 ),
101 exp.DateDiff: lambda self, e: self.func(
102 "DATEDIFF", exp.var(e.text("unit") or "day"), e.expression, e.this
103 ),
104 exp.DistKeyProperty: lambda self, e: f"DISTKEY({e.name})",
105 exp.DistStyleProperty: lambda self, e: self.naked_property(e),
106 exp.JSONExtract: _json_sql,
107 exp.JSONExtractScalar: _json_sql,
108 exp.Select: transforms.preprocess([transforms.eliminate_distinct_on]),
109 exp.SortKeyProperty: lambda self, e: f"{'COMPOUND ' if e.args['compound'] else ''}SORTKEY({self.format_args(*e.this)})",
110 exp.FromBase: rename_func("STRTOL"),
111 }
112
113 # Postgres maps exp.Pivot to no_pivot_sql, but Redshift support pivots
114 TRANSFORMS.pop(exp.Pivot)
115
116 # Redshift uses the POW | POWER (expr1, expr2) syntax instead of expr1 ^ expr2 (postgres)
117 TRANSFORMS.pop(exp.Pow)
118
119 RESERVED_KEYWORDS = {*Postgres.Generator.RESERVED_KEYWORDS, "snapshot", "type"}
120
121 def values_sql(self, expression: exp.Values) -> str:
122 """
123 Converts `VALUES...` expression into a series of unions.
124
125 Note: If you have a lot of unions then this will result in a large number of recursive statements to
126 evaluate the expression. You may need to increase `sys.setrecursionlimit` to run and it can also be
127 very slow.
128 """
129
130 # The VALUES clause is still valid in an `INSERT INTO ..` statement, for example
131 if not expression.find_ancestor(exp.From, exp.Join):
132 return super().values_sql(expression)
133
134 column_names = expression.alias and expression.args["alias"].columns
135
136 selects = []
137 rows = [tuple_exp.expressions for tuple_exp in expression.expressions]
138
139 for i, row in enumerate(rows):
140 if i == 0 and column_names:
141 row = [
142 exp.alias_(value, column_name)
143 for value, column_name in zip(row, column_names)
144 ]
145
146 selects.append(exp.Select(expressions=row))
147
148 subquery_expression: exp.Select | exp.Union = selects[0]
149 if len(selects) > 1:
150 for select in selects[1:]:
151 subquery_expression = exp.union(subquery_expression, select, distinct=False)
152
153 return self.subquery_sql(subquery_expression.subquery(expression.alias))
154
155 def with_properties(self, properties: exp.Properties) -> str:
156 """Redshift doesn't have `WITH` as part of their with_properties so we remove it"""
157 return self.properties(properties, prefix=" ", suffix="")
158
159 def datatype_sql(self, expression: exp.DataType) -> str:
160 """
161 Redshift converts the `TEXT` data type to `VARCHAR(255)` by default when people more generally mean
162 VARCHAR of max length which is `VARCHAR(max)` in Redshift. Therefore if we get a `TEXT` data type
163 without precision we convert it to `VARCHAR(max)` and if it does have precision then we just convert
164 `TEXT` to `VARCHAR`.
165 """
166 if expression.is_type("text"):
167 expression = expression.copy()
168 expression.set("this", exp.DataType.Type.VARCHAR)
169 precision = expression.args.get("expressions")
170
171 if not precision:
172 expression.append("expressions", exp.Var(this="MAX"))
173
174 return super().datatype_sql(expression)
175
[end of sqlglot/dialects/redshift.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sqlglot/dialects/redshift.py b/sqlglot/dialects/redshift.py
--- a/sqlglot/dialects/redshift.py
+++ b/sqlglot/dialects/redshift.py
@@ -25,13 +25,13 @@
FUNCTIONS = {
**Postgres.Parser.FUNCTIONS,
"DATEADD": lambda args: exp.DateAdd(
- this=seq_get(args, 2),
+ this=exp.TsOrDsToDate(this=seq_get(args, 2)),
expression=seq_get(args, 1),
unit=seq_get(args, 0),
),
"DATEDIFF": lambda args: exp.DateDiff(
- this=seq_get(args, 2),
- expression=seq_get(args, 1),
+ this=exp.TsOrDsToDate(this=seq_get(args, 2)),
+ expression=exp.TsOrDsToDate(this=seq_get(args, 1)),
unit=seq_get(args, 0),
),
"NVL": exp.Coalesce.from_arg_list,
@@ -103,11 +103,12 @@
),
exp.DistKeyProperty: lambda self, e: f"DISTKEY({e.name})",
exp.DistStyleProperty: lambda self, e: self.naked_property(e),
+ exp.FromBase: rename_func("STRTOL"),
exp.JSONExtract: _json_sql,
exp.JSONExtractScalar: _json_sql,
exp.Select: transforms.preprocess([transforms.eliminate_distinct_on]),
exp.SortKeyProperty: lambda self, e: f"{'COMPOUND ' if e.args['compound'] else ''}SORTKEY({self.format_args(*e.this)})",
- exp.FromBase: rename_func("STRTOL"),
+ exp.TsOrDsToDate: lambda self, e: self.sql(e.this),
}
# Postgres maps exp.Pivot to no_pivot_sql, but Redshift support pivots
| {"golden_diff": "diff --git a/sqlglot/dialects/redshift.py b/sqlglot/dialects/redshift.py\n--- a/sqlglot/dialects/redshift.py\n+++ b/sqlglot/dialects/redshift.py\n@@ -25,13 +25,13 @@\n FUNCTIONS = {\n **Postgres.Parser.FUNCTIONS,\n \"DATEADD\": lambda args: exp.DateAdd(\n- this=seq_get(args, 2),\n+ this=exp.TsOrDsToDate(this=seq_get(args, 2)),\n expression=seq_get(args, 1),\n unit=seq_get(args, 0),\n ),\n \"DATEDIFF\": lambda args: exp.DateDiff(\n- this=seq_get(args, 2),\n- expression=seq_get(args, 1),\n+ this=exp.TsOrDsToDate(this=seq_get(args, 2)),\n+ expression=exp.TsOrDsToDate(this=seq_get(args, 1)),\n unit=seq_get(args, 0),\n ),\n \"NVL\": exp.Coalesce.from_arg_list,\n@@ -103,11 +103,12 @@\n ),\n exp.DistKeyProperty: lambda self, e: f\"DISTKEY({e.name})\",\n exp.DistStyleProperty: lambda self, e: self.naked_property(e),\n+ exp.FromBase: rename_func(\"STRTOL\"),\n exp.JSONExtract: _json_sql,\n exp.JSONExtractScalar: _json_sql,\n exp.Select: transforms.preprocess([transforms.eliminate_distinct_on]),\n exp.SortKeyProperty: lambda self, e: f\"{'COMPOUND ' if e.args['compound'] else ''}SORTKEY({self.format_args(*e.this)})\",\n- exp.FromBase: rename_func(\"STRTOL\"),\n+ exp.TsOrDsToDate: lambda self, e: self.sql(e.this),\n }\n \n # Postgres maps exp.Pivot to no_pivot_sql, but Redshift support pivots\n", "issue": "Type error when converting datediff from redshift to trino \n```\r\nsql = \"select datediff(week,'2009-01-01','2009-12-31')\"\r\nconverted_sql = sqlglot.transpile(sql, read=\"redshift\", write=\"trino\")[0]\r\nprint(converted_sql)\r\nSELECT DATE_DIFF('week', '2009-01-01', '2009-12-31')\r\n```\r\n\r\nTrino error: `Unexpected parameters (varchar(4), varchar(10), varchar(10)) for function date_diff. Expected: date_diff(varchar(x), date, date), date_diff(varchar(x), timestamp(p), timestamp(p)), date_diff(varchar(x), timestamp(p) with time zone, timestamp(p) with time zone), date_diff(varchar(x), time(p), time(p)), date_diff(varchar(x), time(p) with time zone, time(p) with time zone)'\r\n`\r\n\r\nChanging the SQL to `SELECT DATE_DIFF('week', DATE'2009-01-01', DATE'2009-12-31')` works in Trino\r\n\r\nhttps://trino.io/docs/current/functions/datetime.html\n", "before_files": [{"content": "from __future__ import annotations\n\nimport typing as t\n\nfrom sqlglot import exp, transforms\nfrom sqlglot.dialects.dialect import rename_func\nfrom sqlglot.dialects.postgres import Postgres\nfrom sqlglot.helper import seq_get\nfrom sqlglot.tokens import TokenType\n\n\ndef _json_sql(self: Postgres.Generator, expression: exp.JSONExtract | exp.JSONExtractScalar) -> str:\n return f'{self.sql(expression, \"this\")}.\"{expression.expression.name}\"'\n\n\nclass Redshift(Postgres):\n time_format = \"'YYYY-MM-DD HH:MI:SS'\"\n time_mapping = {\n **Postgres.time_mapping,\n \"MON\": \"%b\",\n \"HH\": \"%H\",\n }\n\n class Parser(Postgres.Parser):\n FUNCTIONS = {\n **Postgres.Parser.FUNCTIONS,\n \"DATEADD\": lambda args: exp.DateAdd(\n this=seq_get(args, 2),\n expression=seq_get(args, 1),\n unit=seq_get(args, 0),\n ),\n \"DATEDIFF\": lambda args: exp.DateDiff(\n this=seq_get(args, 2),\n expression=seq_get(args, 1),\n unit=seq_get(args, 0),\n ),\n \"NVL\": exp.Coalesce.from_arg_list,\n \"STRTOL\": exp.FromBase.from_arg_list,\n }\n\n CONVERT_TYPE_FIRST = True\n\n def _parse_types(\n self, check_func: bool = False, schema: bool = False\n ) -> t.Optional[exp.Expression]:\n this = super()._parse_types(check_func=check_func, schema=schema)\n\n if (\n isinstance(this, exp.DataType)\n and this.is_type(\"varchar\")\n and 
this.expressions\n and this.expressions[0].this == exp.column(\"MAX\")\n ):\n this.set(\"expressions\", [exp.Var(this=\"MAX\")])\n\n return this\n\n class Tokenizer(Postgres.Tokenizer):\n BIT_STRINGS = []\n HEX_STRINGS = []\n STRING_ESCAPES = [\"\\\\\"]\n\n KEYWORDS = {\n **Postgres.Tokenizer.KEYWORDS,\n \"HLLSKETCH\": TokenType.HLLSKETCH,\n \"SUPER\": TokenType.SUPER,\n \"SYSDATE\": TokenType.CURRENT_TIMESTAMP,\n \"TIME\": TokenType.TIMESTAMP,\n \"TIMETZ\": TokenType.TIMESTAMPTZ,\n \"TOP\": TokenType.TOP,\n \"UNLOAD\": TokenType.COMMAND,\n \"VARBYTE\": TokenType.VARBINARY,\n }\n\n # Redshift allows # to appear as a table identifier prefix\n SINGLE_TOKENS = Postgres.Tokenizer.SINGLE_TOKENS.copy()\n SINGLE_TOKENS.pop(\"#\")\n\n class Generator(Postgres.Generator):\n LOCKING_READS_SUPPORTED = False\n RENAME_TABLE_WITH_DB = False\n\n TYPE_MAPPING = {\n **Postgres.Generator.TYPE_MAPPING,\n exp.DataType.Type.BINARY: \"VARBYTE\",\n exp.DataType.Type.VARBINARY: \"VARBYTE\",\n exp.DataType.Type.INT: \"INTEGER\",\n }\n\n PROPERTIES_LOCATION = {\n **Postgres.Generator.PROPERTIES_LOCATION,\n exp.LikeProperty: exp.Properties.Location.POST_WITH,\n }\n\n TRANSFORMS = {\n **Postgres.Generator.TRANSFORMS,\n exp.CurrentTimestamp: lambda self, e: \"SYSDATE\",\n exp.DateAdd: lambda self, e: self.func(\n \"DATEADD\", exp.var(e.text(\"unit\") or \"day\"), e.expression, e.this\n ),\n exp.DateDiff: lambda self, e: self.func(\n \"DATEDIFF\", exp.var(e.text(\"unit\") or \"day\"), e.expression, e.this\n ),\n exp.DistKeyProperty: lambda self, e: f\"DISTKEY({e.name})\",\n exp.DistStyleProperty: lambda self, e: self.naked_property(e),\n exp.JSONExtract: _json_sql,\n exp.JSONExtractScalar: _json_sql,\n exp.Select: transforms.preprocess([transforms.eliminate_distinct_on]),\n exp.SortKeyProperty: lambda self, e: f\"{'COMPOUND ' if e.args['compound'] else ''}SORTKEY({self.format_args(*e.this)})\",\n exp.FromBase: rename_func(\"STRTOL\"),\n }\n\n # Postgres maps exp.Pivot to no_pivot_sql, but Redshift support pivots\n TRANSFORMS.pop(exp.Pivot)\n\n # Redshift uses the POW | POWER (expr1, expr2) syntax instead of expr1 ^ expr2 (postgres)\n TRANSFORMS.pop(exp.Pow)\n\n RESERVED_KEYWORDS = {*Postgres.Generator.RESERVED_KEYWORDS, \"snapshot\", \"type\"}\n\n def values_sql(self, expression: exp.Values) -> str:\n \"\"\"\n Converts `VALUES...` expression into a series of unions.\n\n Note: If you have a lot of unions then this will result in a large number of recursive statements to\n evaluate the expression. 
You may need to increase `sys.setrecursionlimit` to run and it can also be\n very slow.\n \"\"\"\n\n # The VALUES clause is still valid in an `INSERT INTO ..` statement, for example\n if not expression.find_ancestor(exp.From, exp.Join):\n return super().values_sql(expression)\n\n column_names = expression.alias and expression.args[\"alias\"].columns\n\n selects = []\n rows = [tuple_exp.expressions for tuple_exp in expression.expressions]\n\n for i, row in enumerate(rows):\n if i == 0 and column_names:\n row = [\n exp.alias_(value, column_name)\n for value, column_name in zip(row, column_names)\n ]\n\n selects.append(exp.Select(expressions=row))\n\n subquery_expression: exp.Select | exp.Union = selects[0]\n if len(selects) > 1:\n for select in selects[1:]:\n subquery_expression = exp.union(subquery_expression, select, distinct=False)\n\n return self.subquery_sql(subquery_expression.subquery(expression.alias))\n\n def with_properties(self, properties: exp.Properties) -> str:\n \"\"\"Redshift doesn't have `WITH` as part of their with_properties so we remove it\"\"\"\n return self.properties(properties, prefix=\" \", suffix=\"\")\n\n def datatype_sql(self, expression: exp.DataType) -> str:\n \"\"\"\n Redshift converts the `TEXT` data type to `VARCHAR(255)` by default when people more generally mean\n VARCHAR of max length which is `VARCHAR(max)` in Redshift. Therefore if we get a `TEXT` data type\n without precision we convert it to `VARCHAR(max)` and if it does have precision then we just convert\n `TEXT` to `VARCHAR`.\n \"\"\"\n if expression.is_type(\"text\"):\n expression = expression.copy()\n expression.set(\"this\", exp.DataType.Type.VARCHAR)\n precision = expression.args.get(\"expressions\")\n\n if not precision:\n expression.append(\"expressions\", exp.Var(this=\"MAX\"))\n\n return super().datatype_sql(expression)\n", "path": "sqlglot/dialects/redshift.py"}]} | 2,735 | 434 |
gh_patches_debug_16169 | rasdani/github-patches | git_diff | networkx__networkx-4999 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrong degree_assortativity_coefficient for directed graphs
### Current Behavior
``degree_assortativity_coefficient`` will fail for most directed graphs except if the set of in- or out-degrees is the same as the set of total-degrees.
This issue was introduced in 2.6 by #4928 ([L78](https://github.com/networkx/networkx/pull/4928/files#diff-76675aa4f0d3a79d394219c8e15ec346b3f5af9f4a733d5ef9e7026421d43bd9R78)).
### Expected Behavior
The mapping should include all relevant in- and out-degrees for directed graphs.
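One possible shape of such a fix, sketched for illustration only (not necessarily the change that was merged), meant to replace the mapping construction inside `degree_assortativity_coefficient` where `G`, `nodes` and `weight` are already defined:

```python
# sketch: collect every degree value the mixing matrix may need to index
degrees = set()
if G.is_directed():
    degrees.update(d for _, d in G.in_degree(nodes, weight=weight))
    degrees.update(d for _, d in G.out_degree(nodes, weight=weight))
else:
    degrees.update(d for _, d in G.degree(nodes, weight=weight))
mapping = {d: i for i, d in enumerate(degrees)}
```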
### Steps to Reproduce
```python
G = nx.DiGraph()
G.add_edges_from([(0, 3), (1, 0), (1, 2), (2, 4), (4, 1), (4, 3), (4, 2)])
nx.degree_assortativity_coefficient(G) # returns NaN
nx.degree_pearson_correlation_coefficient(G) # returns the correct value 0.14852
```
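With this graph, the set of total degrees used to build `mapping` does not cover the out-/in-degree values the mixing matrix is indexed by, which appears to be how the NaN arises:

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([(0, 3), (1, 0), (1, 2), (2, 4), (4, 1), (4, 3), (4, 2)])

{d for _, d in G.degree()}      # {2, 3, 4}  <- what `mapping` is built from
{d for _, d in G.out_degree()}  # {0, 1, 2, 3}
{d for _, d in G.in_degree()}   # {1, 2}
```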
### Environment
Python version: 3.9
NetworkX version: 2.6+
</issue>
<code>
[start of networkx/algorithms/assortativity/correlation.py]
1 """Node assortativity coefficients and correlation measures.
2 """
3 from networkx.algorithms.assortativity.mixing import (
4 degree_mixing_matrix,
5 attribute_mixing_matrix,
6 numeric_mixing_matrix,
7 )
8 from networkx.algorithms.assortativity.pairs import node_degree_xy
9
10 __all__ = [
11 "degree_pearson_correlation_coefficient",
12 "degree_assortativity_coefficient",
13 "attribute_assortativity_coefficient",
14 "numeric_assortativity_coefficient",
15 ]
16
17
18 def degree_assortativity_coefficient(G, x="out", y="in", weight=None, nodes=None):
19 """Compute degree assortativity of graph.
20
21 Assortativity measures the similarity of connections
22 in the graph with respect to the node degree.
23
24 Parameters
25 ----------
26 G : NetworkX graph
27
28 x: string ('in','out')
29 The degree type for source node (directed graphs only).
30
31 y: string ('in','out')
32 The degree type for target node (directed graphs only).
33
34 weight: string or None, optional (default=None)
35 The edge attribute that holds the numerical value used
36 as a weight. If None, then each edge has weight 1.
37 The degree is the sum of the edge weights adjacent to the node.
38
39 nodes: list or iterable (optional)
40 Compute degree assortativity only for nodes in container.
41 The default is all nodes.
42
43 Returns
44 -------
45 r : float
46 Assortativity of graph by degree.
47
48 Examples
49 --------
50 >>> G = nx.path_graph(4)
51 >>> r = nx.degree_assortativity_coefficient(G)
52 >>> print(f"{r:3.1f}")
53 -0.5
54
55 See Also
56 --------
57 attribute_assortativity_coefficient
58 numeric_assortativity_coefficient
59 degree_mixing_dict
60 degree_mixing_matrix
61
62 Notes
63 -----
64 This computes Eq. (21) in Ref. [1]_ , where e is the joint
65 probability distribution (mixing matrix) of the degrees. If G is
66 directed than the matrix e is the joint probability of the
67 user-specified degree type for the source and target.
68
69 References
70 ----------
71 .. [1] M. E. J. Newman, Mixing patterns in networks,
72 Physical Review E, 67 026126, 2003
73 .. [2] Foster, J.G., Foster, D.V., Grassberger, P. & Paczuski, M.
74 Edge direction and the structure of networks, PNAS 107, 10815-20 (2010).
75 """
76 if nodes is None:
77 nodes = G.nodes
78 degrees = set([d for n, d in G.degree(nodes, weight=weight)])
79 mapping = {d: i for i, d, in enumerate(degrees)}
80 M = degree_mixing_matrix(G, x=x, y=y, nodes=nodes, weight=weight, mapping=mapping)
81 return numeric_ac(M, mapping=mapping)
82
83
84 def degree_pearson_correlation_coefficient(G, x="out", y="in", weight=None, nodes=None):
85 """Compute degree assortativity of graph.
86
87 Assortativity measures the similarity of connections
88 in the graph with respect to the node degree.
89
90 This is the same as degree_assortativity_coefficient but uses the
91 potentially faster scipy.stats.pearsonr function.
92
93 Parameters
94 ----------
95 G : NetworkX graph
96
97 x: string ('in','out')
98 The degree type for source node (directed graphs only).
99
100 y: string ('in','out')
101 The degree type for target node (directed graphs only).
102
103 weight: string or None, optional (default=None)
104 The edge attribute that holds the numerical value used
105 as a weight. If None, then each edge has weight 1.
106 The degree is the sum of the edge weights adjacent to the node.
107
108 nodes: list or iterable (optional)
109 Compute pearson correlation of degrees only for specified nodes.
110 The default is all nodes.
111
112 Returns
113 -------
114 r : float
115 Assortativity of graph by degree.
116
117 Examples
118 --------
119 >>> G = nx.path_graph(4)
120 >>> r = nx.degree_pearson_correlation_coefficient(G)
121 >>> print(f"{r:3.1f}")
122 -0.5
123
124 Notes
125 -----
126 This calls scipy.stats.pearsonr.
127
128 References
129 ----------
130 .. [1] M. E. J. Newman, Mixing patterns in networks
131 Physical Review E, 67 026126, 2003
132 .. [2] Foster, J.G., Foster, D.V., Grassberger, P. & Paczuski, M.
133 Edge direction and the structure of networks, PNAS 107, 10815-20 (2010).
134 """
135 import scipy as sp
136 import scipy.stats # call as sp.stats
137
138 xy = node_degree_xy(G, x=x, y=y, nodes=nodes, weight=weight)
139 x, y = zip(*xy)
140 return sp.stats.pearsonr(x, y)[0]
141
142
143 def attribute_assortativity_coefficient(G, attribute, nodes=None):
144 """Compute assortativity for node attributes.
145
146 Assortativity measures the similarity of connections
147 in the graph with respect to the given attribute.
148
149 Parameters
150 ----------
151 G : NetworkX graph
152
153 attribute : string
154 Node attribute key
155
156 nodes: list or iterable (optional)
157 Compute attribute assortativity for nodes in container.
158 The default is all nodes.
159
160 Returns
161 -------
162 r: float
163 Assortativity of graph for given attribute
164
165 Examples
166 --------
167 >>> G = nx.Graph()
168 >>> G.add_nodes_from([0, 1], color="red")
169 >>> G.add_nodes_from([2, 3], color="blue")
170 >>> G.add_edges_from([(0, 1), (2, 3)])
171 >>> print(nx.attribute_assortativity_coefficient(G, "color"))
172 1.0
173
174 Notes
175 -----
176 This computes Eq. (2) in Ref. [1]_ , (trace(M)-sum(M^2))/(1-sum(M^2)),
177 where M is the joint probability distribution (mixing matrix)
178 of the specified attribute.
179
180 References
181 ----------
182 .. [1] M. E. J. Newman, Mixing patterns in networks,
183 Physical Review E, 67 026126, 2003
184 """
185 M = attribute_mixing_matrix(G, attribute, nodes)
186 return attribute_ac(M)
187
188
189 def numeric_assortativity_coefficient(G, attribute, nodes=None):
190 """Compute assortativity for numerical node attributes.
191
192 Assortativity measures the similarity of connections
193 in the graph with respect to the given numeric attribute.
194
195 Parameters
196 ----------
197 G : NetworkX graph
198
199 attribute : string
200 Node attribute key.
201
202 nodes: list or iterable (optional)
203 Compute numeric assortativity only for attributes of nodes in
204 container. The default is all nodes.
205
206 Returns
207 -------
208 r: float
209 Assortativity of graph for given attribute
210
211 Examples
212 --------
213 >>> G = nx.Graph()
214 >>> G.add_nodes_from([0, 1], size=2)
215 >>> G.add_nodes_from([2, 3], size=3)
216 >>> G.add_edges_from([(0, 1), (2, 3)])
217 >>> print(nx.numeric_assortativity_coefficient(G, "size"))
218 1.0
219
220 Notes
221 -----
222 This computes Eq. (21) in Ref. [1]_ , for the mixing matrix
223 of the specified attribute.
224
225 References
226 ----------
227 .. [1] M. E. J. Newman, Mixing patterns in networks
228 Physical Review E, 67 026126, 2003
229 """
230 if nodes is None:
231 nodes = G.nodes
232 vals = set(G.nodes[n][attribute] for n in nodes)
233 mapping = {d: i for i, d, in enumerate(vals)}
234 M = attribute_mixing_matrix(G, attribute, nodes, mapping)
235 return numeric_ac(M, mapping)
236
237
238 def attribute_ac(M):
239 """Compute assortativity for attribute matrix M.
240
241 Parameters
242 ----------
243 M : numpy.ndarray
244 2D ndarray representing the attribute mixing matrix.
245
246 Notes
247 -----
248 This computes Eq. (2) in Ref. [1]_ , (trace(e)-sum(e^2))/(1-sum(e^2)),
249 where e is the joint probability distribution (mixing matrix)
250 of the specified attribute.
251
252 References
253 ----------
254 .. [1] M. E. J. Newman, Mixing patterns in networks,
255 Physical Review E, 67 026126, 2003
256 """
257 if M.sum() != 1.0:
258 M = M / M.sum()
259 s = (M @ M).sum()
260 t = M.trace()
261 r = (t - s) / (1 - s)
262 return r
263
264
265 def numeric_ac(M, mapping):
266 # M is a numpy matrix or array
267 # numeric assortativity coefficient, pearsonr
268 import numpy as np
269
270 if M.sum() != 1.0:
271 M = M / float(M.sum())
272 nx, ny = M.shape # nx=ny
273 x = np.array(list(mapping.keys()))
274 y = x # x and y have the same support
275 idx = list(mapping.values())
276 a = M.sum(axis=0)
277 b = M.sum(axis=1)
278 vara = (a[idx] * x ** 2).sum() - ((a[idx] * x).sum()) ** 2
279 varb = (b[idx] * y ** 2).sum() - ((b[idx] * y).sum()) ** 2
280 xy = np.outer(x, y)
281 ab = np.outer(a[idx], b[idx])
282 return (xy * (M - ab)).sum() / np.sqrt(vara * varb)
283
[end of networkx/algorithms/assortativity/correlation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/networkx/algorithms/assortativity/correlation.py b/networkx/algorithms/assortativity/correlation.py
--- a/networkx/algorithms/assortativity/correlation.py
+++ b/networkx/algorithms/assortativity/correlation.py
@@ -75,9 +75,27 @@
"""
if nodes is None:
nodes = G.nodes
- degrees = set([d for n, d in G.degree(nodes, weight=weight)])
+
+ degrees = None
+
+ if G.is_directed():
+ indeg = (
+ set([d for _, d in G.in_degree(nodes, weight=weight)])
+ if "in" in (x, y)
+ else set()
+ )
+ outdeg = (
+ set([d for _, d in G.out_degree(nodes, weight=weight)])
+ if "out" in (x, y)
+ else set()
+ )
+ degrees = set.union(indeg, outdeg)
+ else:
+ degrees = set([d for _, d in G.degree(nodes, weight=weight)])
+
mapping = {d: i for i, d, in enumerate(degrees)}
M = degree_mixing_matrix(G, x=x, y=y, nodes=nodes, weight=weight, mapping=mapping)
+
return numeric_ac(M, mapping=mapping)
| {"golden_diff": "diff --git a/networkx/algorithms/assortativity/correlation.py b/networkx/algorithms/assortativity/correlation.py\n--- a/networkx/algorithms/assortativity/correlation.py\n+++ b/networkx/algorithms/assortativity/correlation.py\n@@ -75,9 +75,27 @@\n \"\"\"\n if nodes is None:\n nodes = G.nodes\n- degrees = set([d for n, d in G.degree(nodes, weight=weight)])\n+\n+ degrees = None\n+\n+ if G.is_directed():\n+ indeg = (\n+ set([d for _, d in G.in_degree(nodes, weight=weight)])\n+ if \"in\" in (x, y)\n+ else set()\n+ )\n+ outdeg = (\n+ set([d for _, d in G.out_degree(nodes, weight=weight)])\n+ if \"out\" in (x, y)\n+ else set()\n+ )\n+ degrees = set.union(indeg, outdeg)\n+ else:\n+ degrees = set([d for _, d in G.degree(nodes, weight=weight)])\n+\n mapping = {d: i for i, d, in enumerate(degrees)}\n M = degree_mixing_matrix(G, x=x, y=y, nodes=nodes, weight=weight, mapping=mapping)\n+\n return numeric_ac(M, mapping=mapping)\n", "issue": "Wrong degree_assortativity_coefficient for directed graphs\n### Current Behavior\r\n``degree_assortativity_coefficient`` will fail for most directed graphs except if the set of in- or out-degrees is the same as the set of total-degrees.\r\nThis issue was introduced in 2.6 by #4928 ([L78](https://github.com/networkx/networkx/pull/4928/files#diff-76675aa4f0d3a79d394219c8e15ec346b3f5af9f4a733d5ef9e7026421d43bd9R78)).\r\n\r\n### Expected Behavior\r\nThe mapping should include all relevant in- and out-degrees for directed graphs.\r\n\r\n### Steps to Reproduce\r\n```python\r\nG = nx.DiGraph()\r\nG.add_edges_from([(0, 3), (1, 0), (1, 2), (2, 4), (4, 1), (4, 3), (4, 2)])\r\n\r\nnx.degree_assortativity_coefficient(G) # returns NaN\r\nnx.degree_pearson_correlation_coefficient(G) # returns the correct value 0.14852\r\n```\r\n\r\n### Environment\r\nPython version: 3.9\r\nNetworkX version: 2.6+\r\n\n", "before_files": [{"content": "\"\"\"Node assortativity coefficients and correlation measures.\n\"\"\"\nfrom networkx.algorithms.assortativity.mixing import (\n degree_mixing_matrix,\n attribute_mixing_matrix,\n numeric_mixing_matrix,\n)\nfrom networkx.algorithms.assortativity.pairs import node_degree_xy\n\n__all__ = [\n \"degree_pearson_correlation_coefficient\",\n \"degree_assortativity_coefficient\",\n \"attribute_assortativity_coefficient\",\n \"numeric_assortativity_coefficient\",\n]\n\n\ndef degree_assortativity_coefficient(G, x=\"out\", y=\"in\", weight=None, nodes=None):\n \"\"\"Compute degree assortativity of graph.\n\n Assortativity measures the similarity of connections\n in the graph with respect to the node degree.\n\n Parameters\n ----------\n G : NetworkX graph\n\n x: string ('in','out')\n The degree type for source node (directed graphs only).\n\n y: string ('in','out')\n The degree type for target node (directed graphs only).\n\n weight: string or None, optional (default=None)\n The edge attribute that holds the numerical value used\n as a weight. 
If None, then each edge has weight 1.\n The degree is the sum of the edge weights adjacent to the node.\n\n nodes: list or iterable (optional)\n Compute degree assortativity only for nodes in container.\n The default is all nodes.\n\n Returns\n -------\n r : float\n Assortativity of graph by degree.\n\n Examples\n --------\n >>> G = nx.path_graph(4)\n >>> r = nx.degree_assortativity_coefficient(G)\n >>> print(f\"{r:3.1f}\")\n -0.5\n\n See Also\n --------\n attribute_assortativity_coefficient\n numeric_assortativity_coefficient\n degree_mixing_dict\n degree_mixing_matrix\n\n Notes\n -----\n This computes Eq. (21) in Ref. [1]_ , where e is the joint\n probability distribution (mixing matrix) of the degrees. If G is\n directed than the matrix e is the joint probability of the\n user-specified degree type for the source and target.\n\n References\n ----------\n .. [1] M. E. J. Newman, Mixing patterns in networks,\n Physical Review E, 67 026126, 2003\n .. [2] Foster, J.G., Foster, D.V., Grassberger, P. & Paczuski, M.\n Edge direction and the structure of networks, PNAS 107, 10815-20 (2010).\n \"\"\"\n if nodes is None:\n nodes = G.nodes\n degrees = set([d for n, d in G.degree(nodes, weight=weight)])\n mapping = {d: i for i, d, in enumerate(degrees)}\n M = degree_mixing_matrix(G, x=x, y=y, nodes=nodes, weight=weight, mapping=mapping)\n return numeric_ac(M, mapping=mapping)\n\n\ndef degree_pearson_correlation_coefficient(G, x=\"out\", y=\"in\", weight=None, nodes=None):\n \"\"\"Compute degree assortativity of graph.\n\n Assortativity measures the similarity of connections\n in the graph with respect to the node degree.\n\n This is the same as degree_assortativity_coefficient but uses the\n potentially faster scipy.stats.pearsonr function.\n\n Parameters\n ----------\n G : NetworkX graph\n\n x: string ('in','out')\n The degree type for source node (directed graphs only).\n\n y: string ('in','out')\n The degree type for target node (directed graphs only).\n\n weight: string or None, optional (default=None)\n The edge attribute that holds the numerical value used\n as a weight. If None, then each edge has weight 1.\n The degree is the sum of the edge weights adjacent to the node.\n\n nodes: list or iterable (optional)\n Compute pearson correlation of degrees only for specified nodes.\n The default is all nodes.\n\n Returns\n -------\n r : float\n Assortativity of graph by degree.\n\n Examples\n --------\n >>> G = nx.path_graph(4)\n >>> r = nx.degree_pearson_correlation_coefficient(G)\n >>> print(f\"{r:3.1f}\")\n -0.5\n\n Notes\n -----\n This calls scipy.stats.pearsonr.\n\n References\n ----------\n .. [1] M. E. J. Newman, Mixing patterns in networks\n Physical Review E, 67 026126, 2003\n .. [2] Foster, J.G., Foster, D.V., Grassberger, P. 
& Paczuski, M.\n Edge direction and the structure of networks, PNAS 107, 10815-20 (2010).\n \"\"\"\n import scipy as sp\n import scipy.stats # call as sp.stats\n\n xy = node_degree_xy(G, x=x, y=y, nodes=nodes, weight=weight)\n x, y = zip(*xy)\n return sp.stats.pearsonr(x, y)[0]\n\n\ndef attribute_assortativity_coefficient(G, attribute, nodes=None):\n \"\"\"Compute assortativity for node attributes.\n\n Assortativity measures the similarity of connections\n in the graph with respect to the given attribute.\n\n Parameters\n ----------\n G : NetworkX graph\n\n attribute : string\n Node attribute key\n\n nodes: list or iterable (optional)\n Compute attribute assortativity for nodes in container.\n The default is all nodes.\n\n Returns\n -------\n r: float\n Assortativity of graph for given attribute\n\n Examples\n --------\n >>> G = nx.Graph()\n >>> G.add_nodes_from([0, 1], color=\"red\")\n >>> G.add_nodes_from([2, 3], color=\"blue\")\n >>> G.add_edges_from([(0, 1), (2, 3)])\n >>> print(nx.attribute_assortativity_coefficient(G, \"color\"))\n 1.0\n\n Notes\n -----\n This computes Eq. (2) in Ref. [1]_ , (trace(M)-sum(M^2))/(1-sum(M^2)),\n where M is the joint probability distribution (mixing matrix)\n of the specified attribute.\n\n References\n ----------\n .. [1] M. E. J. Newman, Mixing patterns in networks,\n Physical Review E, 67 026126, 2003\n \"\"\"\n M = attribute_mixing_matrix(G, attribute, nodes)\n return attribute_ac(M)\n\n\ndef numeric_assortativity_coefficient(G, attribute, nodes=None):\n \"\"\"Compute assortativity for numerical node attributes.\n\n Assortativity measures the similarity of connections\n in the graph with respect to the given numeric attribute.\n\n Parameters\n ----------\n G : NetworkX graph\n\n attribute : string\n Node attribute key.\n\n nodes: list or iterable (optional)\n Compute numeric assortativity only for attributes of nodes in\n container. The default is all nodes.\n\n Returns\n -------\n r: float\n Assortativity of graph for given attribute\n\n Examples\n --------\n >>> G = nx.Graph()\n >>> G.add_nodes_from([0, 1], size=2)\n >>> G.add_nodes_from([2, 3], size=3)\n >>> G.add_edges_from([(0, 1), (2, 3)])\n >>> print(nx.numeric_assortativity_coefficient(G, \"size\"))\n 1.0\n\n Notes\n -----\n This computes Eq. (21) in Ref. [1]_ , for the mixing matrix\n of the specified attribute.\n\n References\n ----------\n .. [1] M. E. J. Newman, Mixing patterns in networks\n Physical Review E, 67 026126, 2003\n \"\"\"\n if nodes is None:\n nodes = G.nodes\n vals = set(G.nodes[n][attribute] for n in nodes)\n mapping = {d: i for i, d, in enumerate(vals)}\n M = attribute_mixing_matrix(G, attribute, nodes, mapping)\n return numeric_ac(M, mapping)\n\n\ndef attribute_ac(M):\n \"\"\"Compute assortativity for attribute matrix M.\n\n Parameters\n ----------\n M : numpy.ndarray\n 2D ndarray representing the attribute mixing matrix.\n\n Notes\n -----\n This computes Eq. (2) in Ref. [1]_ , (trace(e)-sum(e^2))/(1-sum(e^2)),\n where e is the joint probability distribution (mixing matrix)\n of the specified attribute.\n\n References\n ----------\n .. [1] M. E. J. 
Newman, Mixing patterns in networks,\n Physical Review E, 67 026126, 2003\n \"\"\"\n if M.sum() != 1.0:\n M = M / M.sum()\n s = (M @ M).sum()\n t = M.trace()\n r = (t - s) / (1 - s)\n return r\n\n\ndef numeric_ac(M, mapping):\n # M is a numpy matrix or array\n # numeric assortativity coefficient, pearsonr\n import numpy as np\n\n if M.sum() != 1.0:\n M = M / float(M.sum())\n nx, ny = M.shape # nx=ny\n x = np.array(list(mapping.keys()))\n y = x # x and y have the same support\n idx = list(mapping.values())\n a = M.sum(axis=0)\n b = M.sum(axis=1)\n vara = (a[idx] * x ** 2).sum() - ((a[idx] * x).sum()) ** 2\n varb = (b[idx] * y ** 2).sum() - ((b[idx] * y).sum()) ** 2\n xy = np.outer(x, y)\n ab = np.outer(a[idx], b[idx])\n return (xy * (M - ab)).sum() / np.sqrt(vara * varb)\n", "path": "networkx/algorithms/assortativity/correlation.py"}]} | 3,844 | 302 |
gh_patches_debug_10306 | rasdani/github-patches | git_diff | pre-commit__pre-commit-1474 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc0 in position 9: invalid start byte
Hi there! I get this error on commit.
Probably because of the Cyrillic characters in the user name: 'C:\\Users\\Администратор\...'.
Is there a way to avoid this problem except by renaming the user?
Thanks for your cool product!
### version information
```
pre-commit version: 2.4.0
sys.version:
3.6.8 (tags/v3.6.8:3c6b436a57, Dec 24 2018, 00:16:47) [MSC v.1916 64 bit (AMD64)]
sys.executable: c:\program files\git\dev\core\venv\scripts\python.exe
os.name: nt
sys.platform: win32
```
### error information
```
An unexpected error has occurred: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc0 in position 9: invalid start byte
```
```
Traceback (most recent call last):
File "c:\program files\git\dev\core\venv\lib\site-packages\pre_commit\error_handler.py", line 56, in error_handler
yield
File "c:\program files\git\dev\core\venv\lib\site-packages\pre_commit\main.py", line 372, in main
args=args.rest[1:],
File "c:\program files\git\dev\core\venv\lib\site-packages\pre_commit\commands\hook_impl.py", line 217, in hook_impl
return retv | run(config, store, ns)
File "c:\program files\git\dev\core\venv\lib\site-packages\pre_commit\commands\run.py", line 357, in run
for hook in all_hooks(config, store)
File "c:\program files\git\dev\core\venv\lib\site-packages\pre_commit\repository.py", line 206, in all_hooks
for repo in root_config['repos']
File "c:\program files\git\dev\core\venv\lib\site-packages\pre_commit\repository.py", line 207, in <genexpr>
for hook in _repository_hooks(repo, store, root_config)
File "c:\program files\git\dev\core\venv\lib\site-packages\pre_commit\repository.py", line 182, in _repository_hooks
return _cloned_repository_hooks(repo_config, store, root_config)
File "c:\program files\git\dev\core\venv\lib\site-packages\pre_commit\repository.py", line 162, in _cloned_repository_hooks
for hook in repo_config['hooks']
File "c:\program files\git\dev\core\venv\lib\site-packages\pre_commit\repository.py", line 162, in <listcomp>
for hook in repo_config['hooks']
File "c:\program files\git\dev\core\venv\lib\site-packages\pre_commit\repository.py", line 110, in _hook
ret['language_version'] = languages[lang].get_default_version()
File "c:\program files\git\dev\core\venv\lib\site-packages\pre_commit\languages\python.py", line 113, in get_default_version
if _find_by_py_launcher(exe):
File "c:\program files\git\dev\core\venv\lib\site-packages\pre_commit\languages\python.py", line 72, in _find_by_py_launcher
return cmd_output(*cmd)[1].strip()
File "c:\program files\git\dev\core\venv\lib\site-packages\pre_commit\util.py", line 164, in cmd_output
stdout = stdout_b.decode() if stdout_b is not None else None
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc0 in position 9: invalid start byte
```
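
A minimal sketch of what seems to be going on (the path and code page here are assumptions for illustration, not taken from the machine above): on a Russian-locale Windows setup the `py` launcher child writes `sys.executable` to the pipe in the ANSI code page (e.g. cp1251), so a profile path containing "Администратор" is not valid UTF-8 and the plain `.decode()` in `cmd_output` fails exactly as in the traceback:

```python
# Illustrative only: reproduce the decode failure without pre-commit or the py launcher.
raw = 'C:\\Users\\Администратор\\python.exe'.encode('cp1251')

try:
    raw.decode()                # default UTF-8, like stdout_b.decode() in cmd_output()
except UnicodeDecodeError as exc:
    print(exc)                  # 'utf-8' codec can't decode byte 0xc0 in position 9 ...

print(raw.decode('cp1251'))     # decodes fine with the console code page
```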
</issue>
<code>
[start of pre_commit/languages/python.py]
1 import contextlib
2 import functools
3 import os
4 import sys
5 from typing import Dict
6 from typing import Generator
7 from typing import Optional
8 from typing import Sequence
9 from typing import Tuple
10
11 import pre_commit.constants as C
12 from pre_commit.envcontext import envcontext
13 from pre_commit.envcontext import PatchesT
14 from pre_commit.envcontext import UNSET
15 from pre_commit.envcontext import Var
16 from pre_commit.hook import Hook
17 from pre_commit.languages import helpers
18 from pre_commit.parse_shebang import find_executable
19 from pre_commit.prefix import Prefix
20 from pre_commit.util import CalledProcessError
21 from pre_commit.util import clean_path_on_failure
22 from pre_commit.util import cmd_output
23 from pre_commit.util import cmd_output_b
24
25 ENVIRONMENT_DIR = 'py_env'
26
27
28 @functools.lru_cache(maxsize=None)
29 def _version_info(exe: str) -> str:
30 prog = 'import sys;print(".".join(str(p) for p in sys.version_info))'
31 try:
32 return cmd_output(exe, '-S', '-c', prog)[1].strip()
33 except CalledProcessError:
34 return f'<<error retrieving version from {exe}>>'
35
36
37 def _read_pyvenv_cfg(filename: str) -> Dict[str, str]:
38 ret = {}
39 with open(filename) as f:
40 for line in f:
41 try:
42 k, v = line.split('=')
43 except ValueError: # blank line / comment / etc.
44 continue
45 else:
46 ret[k.strip()] = v.strip()
47 return ret
48
49
50 def bin_dir(venv: str) -> str:
51 """On windows there's a different directory for the virtualenv"""
52 bin_part = 'Scripts' if os.name == 'nt' else 'bin'
53 return os.path.join(venv, bin_part)
54
55
56 def get_env_patch(venv: str) -> PatchesT:
57 return (
58 ('PIP_DISABLE_PIP_VERSION_CHECK', '1'),
59 ('PYTHONHOME', UNSET),
60 ('VIRTUAL_ENV', venv),
61 ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),
62 )
63
64
65 def _find_by_py_launcher(
66 version: str,
67 ) -> Optional[str]: # pragma: no cover (windows only)
68 if version.startswith('python'):
69 num = version[len('python'):]
70 try:
71 cmd = ('py', f'-{num}', '-c', 'import sys; print(sys.executable)')
72 return cmd_output(*cmd)[1].strip()
73 except CalledProcessError:
74 pass
75 return None
76
77
78 def _find_by_sys_executable() -> Optional[str]:
79 def _norm(path: str) -> Optional[str]:
80 _, exe = os.path.split(path.lower())
81 exe, _, _ = exe.partition('.exe')
82 if exe not in {'python', 'pythonw'} and find_executable(exe):
83 return exe
84 return None
85
86 # On linux, I see these common sys.executables:
87 #
88 # system `python`: /usr/bin/python -> python2.7
89 # system `python2`: /usr/bin/python2 -> python2.7
90 # virtualenv v: v/bin/python (will not return from this loop)
91 # virtualenv v -ppython2: v/bin/python -> python2
92 # virtualenv v -ppython2.7: v/bin/python -> python2.7
93 # virtualenv v -ppypy: v/bin/python -> v/bin/pypy
94 for path in (sys.executable, os.path.realpath(sys.executable)):
95 exe = _norm(path)
96 if exe:
97 return exe
98 return None
99
100
101 @functools.lru_cache(maxsize=1)
102 def get_default_version() -> str: # pragma: no cover (platform dependent)
103 # First attempt from `sys.executable` (or the realpath)
104 exe = _find_by_sys_executable()
105 if exe:
106 return exe
107
108 # Next try the `pythonX.X` executable
109 exe = f'python{sys.version_info[0]}.{sys.version_info[1]}'
110 if find_executable(exe):
111 return exe
112
113 if _find_by_py_launcher(exe):
114 return exe
115
116 # Give a best-effort try for windows
117 default_folder_name = exe.replace('.', '')
118 if os.path.exists(fr'C:\{default_folder_name}\python.exe'):
119 return exe
120
121 # We tried!
122 return C.DEFAULT
123
124
125 def _sys_executable_matches(version: str) -> bool:
126 if version == 'python':
127 return True
128 elif not version.startswith('python'):
129 return False
130
131 try:
132 info = tuple(int(p) for p in version[len('python'):].split('.'))
133 except ValueError:
134 return False
135
136 return sys.version_info[:len(info)] == info
137
138
139 def norm_version(version: str) -> str:
140 if version == C.DEFAULT:
141 return os.path.realpath(sys.executable)
142
143 # first see if our current executable is appropriate
144 if _sys_executable_matches(version):
145 return sys.executable
146
147 if os.name == 'nt': # pragma: no cover (windows)
148 version_exec = _find_by_py_launcher(version)
149 if version_exec:
150 return version_exec
151
152 # Try looking up by name
153 version_exec = find_executable(version)
154 if version_exec and version_exec != version:
155 return version_exec
156
157 # If it is in the form pythonx.x search in the default
158 # place on windows
159 if version.startswith('python'):
160 default_folder_name = version.replace('.', '')
161 return fr'C:\{default_folder_name}\python.exe'
162
163 # Otherwise assume it is a path
164 return os.path.expanduser(version)
165
166
167 @contextlib.contextmanager
168 def in_env(
169 prefix: Prefix,
170 language_version: str,
171 ) -> Generator[None, None, None]:
172 directory = helpers.environment_dir(ENVIRONMENT_DIR, language_version)
173 envdir = prefix.path(directory)
174 with envcontext(get_env_patch(envdir)):
175 yield
176
177
178 def healthy(prefix: Prefix, language_version: str) -> bool:
179 directory = helpers.environment_dir(ENVIRONMENT_DIR, language_version)
180 envdir = prefix.path(directory)
181 pyvenv_cfg = os.path.join(envdir, 'pyvenv.cfg')
182
183 # created with "old" virtualenv
184 if not os.path.exists(pyvenv_cfg):
185 return False
186
187 exe_name = 'python.exe' if sys.platform == 'win32' else 'python'
188 py_exe = prefix.path(bin_dir(envdir), exe_name)
189 cfg = _read_pyvenv_cfg(pyvenv_cfg)
190
191 return (
192 'version_info' in cfg and
193 _version_info(py_exe) == cfg['version_info'] and (
194 'base-executable' not in cfg or
195 _version_info(cfg['base-executable']) == cfg['version_info']
196 )
197 )
198
199
200 def install_environment(
201 prefix: Prefix,
202 version: str,
203 additional_dependencies: Sequence[str],
204 ) -> None:
205 envdir = prefix.path(helpers.environment_dir(ENVIRONMENT_DIR, version))
206 python = norm_version(version)
207 venv_cmd = (sys.executable, '-mvirtualenv', envdir, '-p', python)
208 install_cmd = ('python', '-mpip', 'install', '.', *additional_dependencies)
209
210 with clean_path_on_failure(envdir):
211 cmd_output_b(*venv_cmd, cwd='/')
212 with in_env(prefix, version):
213 helpers.run_setup_cmd(prefix, install_cmd)
214
215
216 def run_hook(
217 hook: Hook,
218 file_args: Sequence[str],
219 color: bool,
220 ) -> Tuple[int, bytes]:
221 with in_env(hook.prefix, hook.language_version):
222 return helpers.run_xargs(hook, hook.cmd, file_args, color=color)
223
[end of pre_commit/languages/python.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py
--- a/pre_commit/languages/python.py
+++ b/pre_commit/languages/python.py
@@ -67,9 +67,10 @@
) -> Optional[str]: # pragma: no cover (windows only)
if version.startswith('python'):
num = version[len('python'):]
+ cmd = ('py', f'-{num}', '-c', 'import sys; print(sys.executable)')
+ env = dict(os.environ, PYTHONIOENCODING='UTF-8')
try:
- cmd = ('py', f'-{num}', '-c', 'import sys; print(sys.executable)')
- return cmd_output(*cmd)[1].strip()
+ return cmd_output(*cmd, env=env)[1].strip()
except CalledProcessError:
pass
return None
| {"golden_diff": "diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py\n--- a/pre_commit/languages/python.py\n+++ b/pre_commit/languages/python.py\n@@ -67,9 +67,10 @@\n ) -> Optional[str]: # pragma: no cover (windows only)\n if version.startswith('python'):\n num = version[len('python'):]\n+ cmd = ('py', f'-{num}', '-c', 'import sys; print(sys.executable)')\n+ env = dict(os.environ, PYTHONIOENCODING='UTF-8')\n try:\n- cmd = ('py', f'-{num}', '-c', 'import sys; print(sys.executable)')\n- return cmd_output(*cmd)[1].strip()\n+ return cmd_output(*cmd, env=env)[1].strip()\n except CalledProcessError:\n pass\n return None\n", "issue": "UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc0 in position 9: invalid start byte\nHi, there! Get such error on commit.\r\n\r\nProbably, because of cyrillic symbols in user name: 'C:\\\\Users\\\\\u0410\u0434\u043c\u0438\u043d\u0438\u0441\u0442\u0440\u0430\u0442\u043e\u0440\\...'.\r\n\r\nIs there a way to avoid this problem exept renaming user?\r\n\r\nThanks for your cool product!\r\n\r\n### version information\r\n\r\n```\r\npre-commit version: 2.4.0\r\nsys.version:\r\n 3.6.8 (tags/v3.6.8:3c6b436a57, Dec 24 2018, 00:16:47) [MSC v.1916 64 bit (AMD64)]\r\nsys.executable: c:\\program files\\git\\dev\\core\\venv\\scripts\\python.exe\r\nos.name: nt\r\nsys.platform: win32\r\n```\r\n\r\n### error information\r\n\r\n```\r\nAn unexpected error has occurred: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc0 in position 9: invalid start byte\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"c:\\program files\\git\\dev\\core\\venv\\lib\\site-packages\\pre_commit\\error_handler.py\", line 56, in error_handler\r\n yield\r\n File \"c:\\program files\\git\\dev\\core\\venv\\lib\\site-packages\\pre_commit\\main.py\", line 372, in main\r\n args=args.rest[1:],\r\n File \"c:\\program files\\git\\dev\\core\\venv\\lib\\site-packages\\pre_commit\\commands\\hook_impl.py\", line 217, in hook_impl\r\n return retv | run(config, store, ns)\r\n File \"c:\\program files\\git\\dev\\core\\venv\\lib\\site-packages\\pre_commit\\commands\\run.py\", line 357, in run\r\n for hook in all_hooks(config, store)\r\n File \"c:\\program files\\git\\dev\\core\\venv\\lib\\site-packages\\pre_commit\\repository.py\", line 206, in all_hooks\r\n for repo in root_config['repos']\r\n File \"c:\\program files\\git\\dev\\core\\venv\\lib\\site-packages\\pre_commit\\repository.py\", line 207, in <genexpr>\r\n for hook in _repository_hooks(repo, store, root_config)\r\n File \"c:\\program files\\git\\dev\\core\\venv\\lib\\site-packages\\pre_commit\\repository.py\", line 182, in _repository_hooks\r\n return _cloned_repository_hooks(repo_config, store, root_config)\r\n File \"c:\\program files\\git\\dev\\core\\venv\\lib\\site-packages\\pre_commit\\repository.py\", line 162, in _cloned_repository_hooks\r\n for hook in repo_config['hooks']\r\n File \"c:\\program files\\git\\dev\\core\\venv\\lib\\site-packages\\pre_commit\\repository.py\", line 162, in <listcomp>\r\n for hook in repo_config['hooks']\r\n File \"c:\\program files\\git\\dev\\core\\venv\\lib\\site-packages\\pre_commit\\repository.py\", line 110, in _hook\r\n ret['language_version'] = languages[lang].get_default_version()\r\n File \"c:\\program files\\git\\dev\\core\\venv\\lib\\site-packages\\pre_commit\\languages\\python.py\", line 113, in get_default_version\r\n if _find_by_py_launcher(exe):\r\n File \"c:\\program files\\git\\dev\\core\\venv\\lib\\site-packages\\pre_commit\\languages\\python.py\", line 72, 
in _find_by_py_launcher\r\n return cmd_output(*cmd)[1].strip()\r\n File \"c:\\program files\\git\\dev\\core\\venv\\lib\\site-packages\\pre_commit\\util.py\", line 164, in cmd_output\r\n stdout = stdout_b.decode() if stdout_b is not None else None\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xc0 in position 9: invalid start byte\r\n\r\n```\n", "before_files": [{"content": "import contextlib\nimport functools\nimport os\nimport sys\nfrom typing import Dict\nfrom typing import Generator\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport pre_commit.constants as C\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import PatchesT\nfrom pre_commit.envcontext import UNSET\nfrom pre_commit.envcontext import Var\nfrom pre_commit.hook import Hook\nfrom pre_commit.languages import helpers\nfrom pre_commit.parse_shebang import find_executable\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cmd_output_b\n\nENVIRONMENT_DIR = 'py_env'\n\n\[email protected]_cache(maxsize=None)\ndef _version_info(exe: str) -> str:\n prog = 'import sys;print(\".\".join(str(p) for p in sys.version_info))'\n try:\n return cmd_output(exe, '-S', '-c', prog)[1].strip()\n except CalledProcessError:\n return f'<<error retrieving version from {exe}>>'\n\n\ndef _read_pyvenv_cfg(filename: str) -> Dict[str, str]:\n ret = {}\n with open(filename) as f:\n for line in f:\n try:\n k, v = line.split('=')\n except ValueError: # blank line / comment / etc.\n continue\n else:\n ret[k.strip()] = v.strip()\n return ret\n\n\ndef bin_dir(venv: str) -> str:\n \"\"\"On windows there's a different directory for the virtualenv\"\"\"\n bin_part = 'Scripts' if os.name == 'nt' else 'bin'\n return os.path.join(venv, bin_part)\n\n\ndef get_env_patch(venv: str) -> PatchesT:\n return (\n ('PIP_DISABLE_PIP_VERSION_CHECK', '1'),\n ('PYTHONHOME', UNSET),\n ('VIRTUAL_ENV', venv),\n ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n\n\ndef _find_by_py_launcher(\n version: str,\n) -> Optional[str]: # pragma: no cover (windows only)\n if version.startswith('python'):\n num = version[len('python'):]\n try:\n cmd = ('py', f'-{num}', '-c', 'import sys; print(sys.executable)')\n return cmd_output(*cmd)[1].strip()\n except CalledProcessError:\n pass\n return None\n\n\ndef _find_by_sys_executable() -> Optional[str]:\n def _norm(path: str) -> Optional[str]:\n _, exe = os.path.split(path.lower())\n exe, _, _ = exe.partition('.exe')\n if exe not in {'python', 'pythonw'} and find_executable(exe):\n return exe\n return None\n\n # On linux, I see these common sys.executables:\n #\n # system `python`: /usr/bin/python -> python2.7\n # system `python2`: /usr/bin/python2 -> python2.7\n # virtualenv v: v/bin/python (will not return from this loop)\n # virtualenv v -ppython2: v/bin/python -> python2\n # virtualenv v -ppython2.7: v/bin/python -> python2.7\n # virtualenv v -ppypy: v/bin/python -> v/bin/pypy\n for path in (sys.executable, os.path.realpath(sys.executable)):\n exe = _norm(path)\n if exe:\n return exe\n return None\n\n\[email protected]_cache(maxsize=1)\ndef get_default_version() -> str: # pragma: no cover (platform dependent)\n # First attempt from `sys.executable` (or the realpath)\n exe = _find_by_sys_executable()\n if exe:\n return exe\n\n # Next try the `pythonX.X` executable\n exe = 
f'python{sys.version_info[0]}.{sys.version_info[1]}'\n if find_executable(exe):\n return exe\n\n if _find_by_py_launcher(exe):\n return exe\n\n # Give a best-effort try for windows\n default_folder_name = exe.replace('.', '')\n if os.path.exists(fr'C:\\{default_folder_name}\\python.exe'):\n return exe\n\n # We tried!\n return C.DEFAULT\n\n\ndef _sys_executable_matches(version: str) -> bool:\n if version == 'python':\n return True\n elif not version.startswith('python'):\n return False\n\n try:\n info = tuple(int(p) for p in version[len('python'):].split('.'))\n except ValueError:\n return False\n\n return sys.version_info[:len(info)] == info\n\n\ndef norm_version(version: str) -> str:\n if version == C.DEFAULT:\n return os.path.realpath(sys.executable)\n\n # first see if our current executable is appropriate\n if _sys_executable_matches(version):\n return sys.executable\n\n if os.name == 'nt': # pragma: no cover (windows)\n version_exec = _find_by_py_launcher(version)\n if version_exec:\n return version_exec\n\n # Try looking up by name\n version_exec = find_executable(version)\n if version_exec and version_exec != version:\n return version_exec\n\n # If it is in the form pythonx.x search in the default\n # place on windows\n if version.startswith('python'):\n default_folder_name = version.replace('.', '')\n return fr'C:\\{default_folder_name}\\python.exe'\n\n # Otherwise assume it is a path\n return os.path.expanduser(version)\n\n\[email protected]\ndef in_env(\n prefix: Prefix,\n language_version: str,\n) -> Generator[None, None, None]:\n directory = helpers.environment_dir(ENVIRONMENT_DIR, language_version)\n envdir = prefix.path(directory)\n with envcontext(get_env_patch(envdir)):\n yield\n\n\ndef healthy(prefix: Prefix, language_version: str) -> bool:\n directory = helpers.environment_dir(ENVIRONMENT_DIR, language_version)\n envdir = prefix.path(directory)\n pyvenv_cfg = os.path.join(envdir, 'pyvenv.cfg')\n\n # created with \"old\" virtualenv\n if not os.path.exists(pyvenv_cfg):\n return False\n\n exe_name = 'python.exe' if sys.platform == 'win32' else 'python'\n py_exe = prefix.path(bin_dir(envdir), exe_name)\n cfg = _read_pyvenv_cfg(pyvenv_cfg)\n\n return (\n 'version_info' in cfg and\n _version_info(py_exe) == cfg['version_info'] and (\n 'base-executable' not in cfg or\n _version_info(cfg['base-executable']) == cfg['version_info']\n )\n )\n\n\ndef install_environment(\n prefix: Prefix,\n version: str,\n additional_dependencies: Sequence[str],\n) -> None:\n envdir = prefix.path(helpers.environment_dir(ENVIRONMENT_DIR, version))\n python = norm_version(version)\n venv_cmd = (sys.executable, '-mvirtualenv', envdir, '-p', python)\n install_cmd = ('python', '-mpip', 'install', '.', *additional_dependencies)\n\n with clean_path_on_failure(envdir):\n cmd_output_b(*venv_cmd, cwd='/')\n with in_env(prefix, version):\n helpers.run_setup_cmd(prefix, install_cmd)\n\n\ndef run_hook(\n hook: Hook,\n file_args: Sequence[str],\n color: bool,\n) -> Tuple[int, bytes]:\n with in_env(hook.prefix, hook.language_version):\n return helpers.run_xargs(hook, hook.cmd, file_args, color=color)\n", "path": "pre_commit/languages/python.py"}]} | 3,715 | 190 |
gh_patches_debug_1296 | rasdani/github-patches | git_diff | wandb__wandb-424 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Install issue on DLAMI images, conflict with PyYAML
wandb has a dependency conflict when installing on AWS Deep Learning images -- DLAMI v23
You can get around it with 'pip install wandb --ignore-installed', but perhaps wandb could also relax the PyYAML version requirement to make life easier (i.e., I can't put wandb in requirements.txt because of this).
```
(pytorch_p36) ubuntu@ip-172-31-28-233:~$ pip install wandb
Collecting wandb
Using cached https://files.pythonhosted.org/packages/6a/d1/af8371f39d9383f4f1e9ba76c8894f75c01d5eddf4ec57bd45952fefab74/wandb-0.8.3-py2.py3-none-any.whl
Collecting watchdog>=0.8.3 (from wandb)
Requirement already satisfied: psutil>=5.0.0 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (5.4.5)
Collecting backports.tempfile>=1.0 (from wandb)
Using cached https://files.pythonhosted.org/packages/b4/5c/077f910632476281428fe254807952eb47ca78e720d059a46178c541e669/backports.tempfile-1.0-py2.py3-none-any.whl
Requirement already satisfied: requests>=2.0.0 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (2.20.0)
Requirement already satisfied: sentry-sdk>=0.4.0 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (0.9.5)
Requirement already satisfied: six>=1.10.0 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (1.11.0)
Collecting shortuuid>=0.5.0 (from wandb)
Collecting gql>=0.1.0 (from wandb)
Requirement already satisfied: subprocess32>=3.5.3 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (3.5.4)
Collecting GitPython>=1.0.0 (from wandb)
Using cached https://files.pythonhosted.org/packages/fe/e5/fafe827507644c32d6dc553a1c435cdf882e0c28918a5bab29f7fbebfb70/GitPython-2.1.11-py2.py3-none-any.whl
Requirement already satisfied: docker-pycreds>=0.4.0 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (0.4.0)
Requirement already satisfied: nvidia-ml-py3>=7.352.0 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (7.352.0)
Requirement already satisfied: Click>=7.0 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (7.0)
Requirement already satisfied: python-dateutil>=2.6.1 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (2.7.3)
Collecting PyYAML>=4.2b4 (from wandb)
Requirement already satisfied: argh>=0.24.1 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from watchdog>=0.8.3->wandb) (0.26.2)
Collecting pathtools>=0.1.1 (from watchdog>=0.8.3->wandb)
Collecting backports.weakref (from backports.tempfile>=1.0->wandb)
Using cached https://files.pythonhosted.org/packages/88/ec/f598b633c3d5ffe267aaada57d961c94fdfa183c5c3ebda2b6d151943db6/backports.weakref-1.0.post1-py2.py3-none-any.whl
Requirement already satisfied: urllib3<1.25,>=1.21.1 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.0.0->wandb) (1.23)
Requirement already satisfied: certifi>=2017.4.17 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.0.0->wandb) (2019.3.9)
Requirement already satisfied: idna<2.8,>=2.5 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.0.0->wandb) (2.6)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.0.0->wandb) (3.0.4)
Collecting graphql-core>=0.5.0 (from gql>=0.1.0->wandb)
Using cached https://files.pythonhosted.org/packages/f1/88/a4a7bf8ab66c35b146e44d77a1f9fd2c36e0ec9fb1a51581608c16deb6e3/graphql_core-2.2-py2.py3-none-any.whl
Collecting promise>=0.4.0 (from gql>=0.1.0->wandb)
Collecting gitdb2>=2.0.0 (from GitPython>=1.0.0->wandb)
Using cached https://files.pythonhosted.org/packages/da/30/a407568aa8d8f25db817cf50121a958722f3fc5f87e3a6fba1f40c0633e3/gitdb2-2.0.5-py2.py3-none-any.whl
Collecting rx>=1.6.0 (from graphql-core>=0.5.0->gql>=0.1.0->wandb)
Using cached https://files.pythonhosted.org/packages/33/0f/5ef4ac78e2a538cc1b054eb86285fe0bf7a5dbaeaac2c584757c300515e2/Rx-1.6.1-py2.py3-none-any.whl
Collecting smmap2>=2.0.0 (from gitdb2>=2.0.0->GitPython>=1.0.0->wandb)
Using cached https://files.pythonhosted.org/packages/55/d2/866d45e3a121ee15a1dc013824d58072fd5c7799c9c34d01378eb262ca8f/smmap2-2.0.5-py2.py3-none-any.whl
thinc 6.12.1 has requirement msgpack<0.6.0,>=0.5.6, but you'll have msgpack 0.6.0 which is incompatible.
tensorflow 1.13.1 has requirement protobuf>=3.6.1, but you'll have protobuf 3.5.2 which is incompatible.
tensorboard 1.13.1 has requirement protobuf>=3.6.0, but you'll have protobuf 3.5.2 which is incompatible.
docker-compose 1.24.0 has requirement PyYAML<4.3,>=3.10, but you'll have pyyaml 5.1.1 which is incompatible.
Installing collected packages: PyYAML, pathtools, watchdog, backports.weakref, backports.tempfile, shortuuid, rx, promise, graphql-core, gql, smmap2, gitdb2, GitPython, wandb
Found existing installation: PyYAML 3.12
Cannot uninstall 'PyYAML'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
You are using pip version 10.0.1, however version 19.1.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
(pytorch_p36) ubuntu@ip-172-31-28-233:~$ echo $?
```
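
The pin in question is the `'PyYAML>=4.2b4'` entry in `setup.py` below (added there because watchdog depends on pyyaml without a safe lower bound). A minimal sketch of the suggested relaxation (illustrative only, not necessarily how the project will resolve it) is to drop or loosen that pin so pip no longer has a reason to replace the distutils-installed PyYAML 3.12 that ships with the image:

```python
# Sketch only: wandb's install_requires without the explicit PyYAML pin.
requirements = [
    # ... other entries unchanged ...
    'watchdog>=0.8.3',
    # 'PyYAML>=4.2b4',  # dropping this pin removes the forced PyYAML upgrade
    'psutil>=5.0.0',
    # ...
]
```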
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 from setuptools import setup
5
6 with open('README.md') as readme_file:
7 readme = readme_file.read()
8
9 requirements = [
10 'backports.tempfile>=1.0',
11 'Click>=7.0',
12 'GitPython>=1.0.0',
13 'gql>=0.1.0',
14 'nvidia-ml-py3>=7.352.0',
15 'python-dateutil>=2.6.1',
16 'requests>=2.0.0',
17 'shortuuid>=0.5.0',
18 'six>=1.10.0',
19 'watchdog>=0.8.3',
20 'PyYAML>=4.2b4', # watchdog depends on pyyaml but doesnt specify safe version
21 'psutil>=5.0.0',
22 'sentry-sdk>=0.4.0',
23 'subprocess32>=3.5.3',
24 'docker-pycreds>=0.4.0',
25 # Removed until we bring back the board
26 # 'flask-cors>=3.0.3',
27 # 'flask-graphql>=1.4.0',
28 # 'graphene>=2.0.0',
29 ]
30
31 test_requirements = [
32 'mock>=2.0.0',
33 'tox-pyenv>=1.0.3'
34 ]
35
36 kubeflow_requirements = ['kubernetes', 'minio', 'google-cloud-storage', 'sh']
37
38 setup(
39 name='wandb',
40 version='0.8.4',
41 description="A CLI and library for interacting with the Weights and Biases API.",
42 long_description=readme,
43 long_description_content_type="text/markdown",
44 author="Weights & Biases",
45 author_email='[email protected]',
46 url='https://github.com/wandb/client',
47 packages=[
48 'wandb'
49 ],
50 package_dir={'wandb': 'wandb'},
51 entry_points={
52 'console_scripts': [
53 'wandb=wandb.cli:cli',
54 'wb=wandb.cli:cli',
55 'wanbd=wandb.cli:cli',
56 'wandb-docker-run=wandb.cli:docker_run'
57 ]
58 },
59 include_package_data=True,
60 install_requires=requirements,
61 license="MIT license",
62 zip_safe=False,
63 keywords='wandb',
64 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
65 classifiers=[
66 'Development Status :: 5 - Production/Stable',
67 'Intended Audience :: Developers',
68 'Intended Audience :: Science/Research',
69 'License :: OSI Approved :: MIT License',
70 'Natural Language :: English',
71 'Programming Language :: Python :: 2',
72 'Programming Language :: Python :: 2.7',
73 'Programming Language :: Python :: 3',
74 'Programming Language :: Python :: 3.4',
75 'Programming Language :: Python :: 3.5',
76 'Programming Language :: Python :: 3.6',
77 'Programming Language :: Python :: 3.7',
78 'Topic :: Scientific/Engineering :: Artificial Intelligence',
79 'Topic :: Software Development :: Libraries :: Python Modules',
80 'Topic :: System :: Logging',
81 'Topic :: System :: Monitoring'
82 ],
83 test_suite='tests',
84 tests_require=test_requirements,
85 extras_require={
86 'kubeflow': kubeflow_requirements
87 }
88 )
89
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -17,7 +17,6 @@
'shortuuid>=0.5.0',
'six>=1.10.0',
'watchdog>=0.8.3',
- 'PyYAML>=4.2b4', # watchdog depends on pyyaml but doesnt specify safe version
'psutil>=5.0.0',
'sentry-sdk>=0.4.0',
'subprocess32>=3.5.3',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -17,7 +17,6 @@\n 'shortuuid>=0.5.0',\n 'six>=1.10.0',\n 'watchdog>=0.8.3',\n- 'PyYAML>=4.2b4', # watchdog depends on pyyaml but doesnt specify safe version\n 'psutil>=5.0.0',\n 'sentry-sdk>=0.4.0',\n 'subprocess32>=3.5.3',\n", "issue": "Install issue on DLAMI images, conflict with PyYAML\nwandb has a dependency conflict when installing on AWS Deep Learning images -- DLAMI v23\r\nYou can get arround it with 'pip install wandb --ignore-installed', but also perhaps wandb could relax PyYAML version requirement to make life easier (ie, I can't put wandb in requirements.txt because of this)\r\n\r\n```\r\n(pytorch_p36) ubuntu@ip-172-31-28-233:~$ pip install wandb\r\nCollecting wandb\r\n Using cached https://files.pythonhosted.org/packages/6a/d1/af8371f39d9383f4f1e9ba76c8894f75c01d5eddf4ec57bd45952fefab74/wandb-0.8.3-py2.py3-none-any.whl\r\nCollecting watchdog>=0.8.3 (from wandb)\r\nRequirement already satisfied: psutil>=5.0.0 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (5.4.5)\r\nCollecting backports.tempfile>=1.0 (from wandb)\r\n Using cached https://files.pythonhosted.org/packages/b4/5c/077f910632476281428fe254807952eb47ca78e720d059a46178c541e669/backports.tempfile-1.0-py2.py3-none-any.whl\r\nRequirement already satisfied: requests>=2.0.0 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (2.20.0)\r\nRequirement already satisfied: sentry-sdk>=0.4.0 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (0.9.5)\r\nRequirement already satisfied: six>=1.10.0 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (1.11.0)\r\nCollecting shortuuid>=0.5.0 (from wandb)\r\nCollecting gql>=0.1.0 (from wandb)\r\nRequirement already satisfied: subprocess32>=3.5.3 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (3.5.4)\r\nCollecting GitPython>=1.0.0 (from wandb)\r\n Using cached https://files.pythonhosted.org/packages/fe/e5/fafe827507644c32d6dc553a1c435cdf882e0c28918a5bab29f7fbebfb70/GitPython-2.1.11-py2.py3-none-any.whl\r\nRequirement already satisfied: docker-pycreds>=0.4.0 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (0.4.0)\r\nRequirement already satisfied: nvidia-ml-py3>=7.352.0 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (7.352.0)\r\nRequirement already satisfied: Click>=7.0 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (7.0)\r\nRequirement already satisfied: python-dateutil>=2.6.1 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from wandb) (2.7.3)\r\nCollecting PyYAML>=4.2b4 (from wandb)\r\nRequirement already satisfied: argh>=0.24.1 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from watchdog>=0.8.3->wandb) (0.26.2)\r\nCollecting pathtools>=0.1.1 (from watchdog>=0.8.3->wandb)\r\nCollecting backports.weakref (from backports.tempfile>=1.0->wandb)\r\n Using cached https://files.pythonhosted.org/packages/88/ec/f598b633c3d5ffe267aaada57d961c94fdfa183c5c3ebda2b6d151943db6/backports.weakref-1.0.post1-py2.py3-none-any.whl\r\nRequirement already satisfied: urllib3<1.25,>=1.21.1 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.0.0->wandb) (1.23)\r\nRequirement already satisfied: certifi>=2017.4.17 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.0.0->wandb) (2019.3.9)\r\nRequirement already satisfied: idna<2.8,>=2.5 in 
./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.0.0->wandb) (2.6)\r\nRequirement already satisfied: chardet<3.1.0,>=3.0.2 in ./anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.0.0->wandb) (3.0.4)\r\nCollecting graphql-core>=0.5.0 (from gql>=0.1.0->wandb)\r\n Using cached https://files.pythonhosted.org/packages/f1/88/a4a7bf8ab66c35b146e44d77a1f9fd2c36e0ec9fb1a51581608c16deb6e3/graphql_core-2.2-py2.py3-none-any.whl\r\nCollecting promise>=0.4.0 (from gql>=0.1.0->wandb)\r\nCollecting gitdb2>=2.0.0 (from GitPython>=1.0.0->wandb)\r\n Using cached https://files.pythonhosted.org/packages/da/30/a407568aa8d8f25db817cf50121a958722f3fc5f87e3a6fba1f40c0633e3/gitdb2-2.0.5-py2.py3-none-any.whl\r\nCollecting rx>=1.6.0 (from graphql-core>=0.5.0->gql>=0.1.0->wandb)\r\n Using cached https://files.pythonhosted.org/packages/33/0f/5ef4ac78e2a538cc1b054eb86285fe0bf7a5dbaeaac2c584757c300515e2/Rx-1.6.1-py2.py3-none-any.whl\r\nCollecting smmap2>=2.0.0 (from gitdb2>=2.0.0->GitPython>=1.0.0->wandb)\r\n Using cached https://files.pythonhosted.org/packages/55/d2/866d45e3a121ee15a1dc013824d58072fd5c7799c9c34d01378eb262ca8f/smmap2-2.0.5-py2.py3-none-any.whl\r\nthinc 6.12.1 has requirement msgpack<0.6.0,>=0.5.6, but you'll have msgpack 0.6.0 which is incompatible.\r\ntensorflow 1.13.1 has requirement protobuf>=3.6.1, but you'll have protobuf 3.5.2 which is incompatible.\r\ntensorboard 1.13.1 has requirement protobuf>=3.6.0, but you'll have protobuf 3.5.2 which is incompatible.\r\ndocker-compose 1.24.0 has requirement PyYAML<4.3,>=3.10, but you'll have pyyaml 5.1.1 which is incompatible.\r\nInstalling collected packages: PyYAML, pathtools, watchdog, backports.weakref, backports.tempfile, shortuuid, rx, promise, graphql-core, gql, smmap2, gitdb2, GitPython, wandb\r\n Found existing installation: PyYAML 3.12\r\nCannot uninstall 'PyYAML'. 
It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.\r\nYou are using pip version 10.0.1, however version 19.1.1 is available.\r\nYou should consider upgrading via the 'pip install --upgrade pip' command.\r\n(pytorch_p36) ubuntu@ip-172-31-28-233:~$ echo $?\r\n\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom setuptools import setup\n\nwith open('README.md') as readme_file:\n readme = readme_file.read()\n\nrequirements = [\n 'backports.tempfile>=1.0',\n 'Click>=7.0',\n 'GitPython>=1.0.0',\n 'gql>=0.1.0',\n 'nvidia-ml-py3>=7.352.0',\n 'python-dateutil>=2.6.1',\n 'requests>=2.0.0',\n 'shortuuid>=0.5.0',\n 'six>=1.10.0',\n 'watchdog>=0.8.3',\n 'PyYAML>=4.2b4', # watchdog depends on pyyaml but doesnt specify safe version\n 'psutil>=5.0.0',\n 'sentry-sdk>=0.4.0',\n 'subprocess32>=3.5.3',\n 'docker-pycreds>=0.4.0',\n # Removed until we bring back the board\n # 'flask-cors>=3.0.3',\n # 'flask-graphql>=1.4.0',\n # 'graphene>=2.0.0',\n]\n\ntest_requirements = [\n 'mock>=2.0.0',\n 'tox-pyenv>=1.0.3'\n]\n\nkubeflow_requirements = ['kubernetes', 'minio', 'google-cloud-storage', 'sh']\n\nsetup(\n name='wandb',\n version='0.8.4',\n description=\"A CLI and library for interacting with the Weights and Biases API.\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n author=\"Weights & Biases\",\n author_email='[email protected]',\n url='https://github.com/wandb/client',\n packages=[\n 'wandb'\n ],\n package_dir={'wandb': 'wandb'},\n entry_points={\n 'console_scripts': [\n 'wandb=wandb.cli:cli',\n 'wb=wandb.cli:cli',\n 'wanbd=wandb.cli:cli',\n 'wandb-docker-run=wandb.cli:docker_run'\n ]\n },\n include_package_data=True,\n install_requires=requirements,\n license=\"MIT license\",\n zip_safe=False,\n keywords='wandb',\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: System :: Logging',\n 'Topic :: System :: Monitoring'\n ],\n test_suite='tests',\n tests_require=test_requirements,\n extras_require={\n 'kubeflow': kubeflow_requirements\n }\n)\n", "path": "setup.py"}]} | 3,549 | 127 |
gh_patches_debug_28242 | rasdani/github-patches | git_diff | kivy__kivy-4127 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Crash when Slider is imported before forking process on Mac OS X
Consider
```
#!/usr/bin/env python2
import multiprocessing
from kivy.app import App
from kivy.uix.slider import Slider
class Test(App):
def build(self):
return Slider()
def run_app():
app = Test()
app.run()
running_app = multiprocessing.Process(target=run_app)
running_app.daemon = True
running_app.start()
running_app.join()
```
This currently crashes on Mac OS X:
> \*\*\* multi-threaded process forked \*\*\*
> crashed on child side of fork pre-exec
> USING_FORK_WITHOUT_EXEC_IS_NOT_SUPPORTED_BY_FILE_MANAGER
This is because the property `padding` is initialized with `NumericProperty(sp(16))`. This call to `sp` will attempt to initialize SDL. Cf. [this question on StackOverflow](http://stackoverflow.com/questions/8106002/using-the-python-multiprocessing-module-for-io-with-pygame-on-mac-os-10-7)
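
One way to avoid the import-time call (a sketch of the idea, not necessarily the fix the project ships) is to express the default with a unit string such as `'16sp'`, which `NumericProperty` resolves lazily when the property is set up on a widget instance rather than while the module is imported:

```python
# Sketch: keep the default in unit-string form so importing kivy.uix.slider does not
# trigger sp() (and hence SDL initialisation) at import time.
from kivy.uix.widget import Widget
from kivy.properties import NumericProperty


class LazyPaddingSlider(Widget):       # illustrative name, not the real Slider class
    padding = NumericProperty('16sp')  # parsed per instance instead of sp(16) at import
```

The numeric value is then computed per widget, which in the example above happens inside the child process after `multiprocessing` has forked, so the import alone no longer touches the macOS file manager.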
</issue>
<code>
[start of kivy/uix/slider.py]
1 """
2 Slider
3 ======
4
5 .. image:: images/slider.jpg
6
7 The :class:`Slider` widget looks like a scrollbar. It supports horizontal and
8 vertical orientations, min/max values and a default value.
9
10 To create a slider from -100 to 100 starting from 25::
11
12 from kivy.uix.slider import Slider
13 s = Slider(min=-100, max=100, value=25)
14
15 To create a vertical slider::
16
17 from kivy.uix.slider import Slider
18 s = Slider(orientation='vertical')
19
20 """
21 __all__ = ('Slider', )
22
23 from kivy.uix.widget import Widget
24 from kivy.properties import (NumericProperty, AliasProperty, OptionProperty,
25 ReferenceListProperty, BoundedNumericProperty)
26 from kivy.metrics import sp
27
28
29 class Slider(Widget):
30 """Class for creating a Slider widget.
31
32 Check module documentation for more details.
33 """
34
35 value = NumericProperty(0.)
36 '''Current value used for the slider.
37
38 :attr:`value` is a :class:`~kivy.properties.NumericProperty` and defaults
39 to 0.'''
40
41 min = NumericProperty(0.)
42 '''Minimum value allowed for :attr:`value`.
43
44 :attr:`min` is a :class:`~kivy.properties.NumericProperty` and defaults to
45 0.'''
46
47 max = NumericProperty(100.)
48 '''Maximum value allowed for :attr:`value`.
49
50 :attr:`max` is a :class:`~kivy.properties.NumericProperty` and defaults to
51 100.'''
52
53 padding = NumericProperty(sp(16))
54 '''Padding of the slider. The padding is used for graphical representation
55 and interaction. It prevents the cursor from going out of the bounds of the
56 slider bounding box.
57
58 By default, padding is sp(16). The range of the slider is reduced from
59 padding \*2 on the screen. It allows drawing the default cursor of sp(32)
60 width without having the cursor go out of the widget.
61
62 :attr:`padding` is a :class:`~kivy.properties.NumericProperty` and defaults
63 to sp(16).'''
64
65 orientation = OptionProperty('horizontal', options=(
66 'vertical', 'horizontal'))
67 '''Orientation of the slider.
68
69 :attr:`orientation` is an :class:`~kivy.properties.OptionProperty` and
70 defaults to 'horizontal'. Can take a value of 'vertical' or 'horizontal'.
71 '''
72
73 range = ReferenceListProperty(min, max)
74 '''Range of the slider in the format (minimum value, maximum value)::
75
76 >>> slider = Slider(min=10, max=80)
77 >>> slider.range
78 [10, 80]
79 >>> slider.range = (20, 100)
80 >>> slider.min
81 20
82 >>> slider.max
83 100
84
85 :attr:`range` is a :class:`~kivy.properties.ReferenceListProperty` of
86 (:attr:`min`, :attr:`max`) properties.
87 '''
88
89 step = BoundedNumericProperty(0, min=0)
90 '''Step size of the slider.
91
92 .. versionadded:: 1.4.0
93
94 Determines the size of each interval or step the slider takes between
95 min and max. If the value range can't be evenly divisible by step the
96 last step will be capped by slider.max
97
98 :attr:`step` is a :class:`~kivy.properties.NumericProperty` and defaults
99 to 1.'''
100
101 # The following two methods constrain the slider's value
102 # to range(min,max). Otherwise it may happen that self.value < self.min
103 # at init.
104
105 def on_min(self, *largs):
106 self.value = min(self.max, max(self.min, self.value))
107
108 def on_max(self, *largs):
109 self.value = min(self.max, max(self.min, self.value))
110
111 def get_norm_value(self):
112 vmin = self.min
113 d = self.max - vmin
114 if d == 0:
115 return 0
116 return (self.value - vmin) / float(d)
117
118 def set_norm_value(self, value):
119 vmin = self.min
120 vmax = self.max
121 step = self.step
122 val = min(value * (vmax - vmin) + vmin, vmax)
123 if step == 0:
124 self.value = val
125 else:
126 self.value = min(round((val - vmin) / step) * step + vmin,
127 vmax)
128 value_normalized = AliasProperty(get_norm_value, set_norm_value,
129 bind=('value', 'min', 'max', 'step'))
130 '''Normalized value inside the :attr:`range` (min/max) to 0-1 range::
131
132 >>> slider = Slider(value=50, min=0, max=100)
133 >>> slider.value
134 50
135 >>> slider.value_normalized
136 0.5
137 >>> slider.value = 0
138 >>> slider.value_normalized
139 0
140 >>> slider.value = 100
141 >>> slider.value_normalized
142 1
143
144 You can also use it for setting the real value without knowing the minimum
145 and maximum::
146
147 >>> slider = Slider(min=0, max=200)
148 >>> slider.value_normalized = .5
149 >>> slider.value
150 100
151 >>> slider.value_normalized = 1.
152 >>> slider.value
153 200
154
155 :attr:`value_normalized` is an :class:`~kivy.properties.AliasProperty`.
156 '''
157
158 def get_value_pos(self):
159 padding = self.padding
160 x = self.x
161 y = self.y
162 nval = self.value_normalized
163 if self.orientation == 'horizontal':
164 return (x + padding + nval * (self.width - 2 * padding), y)
165 else:
166 return (x, y + padding + nval * (self.height - 2 * padding))
167
168 def set_value_pos(self, pos):
169 padding = self.padding
170 x = min(self.right - padding, max(pos[0], self.x + padding))
171 y = min(self.top - padding, max(pos[1], self.y + padding))
172 if self.orientation == 'horizontal':
173 if self.width == 0:
174 self.value_normalized = 0
175 else:
176 self.value_normalized = (x - self.x - padding
177 ) / float(self.width - 2 * padding)
178 else:
179 if self.height == 0:
180 self.value_normalized = 0
181 else:
182 self.value_normalized = (y - self.y - padding
183 ) / float(self.height - 2 * padding)
184 value_pos = AliasProperty(get_value_pos, set_value_pos,
185 bind=('x', 'y', 'width', 'height', 'min',
186 'max', 'value_normalized', 'orientation'))
187 '''Position of the internal cursor, based on the normalized value.
188
189 :attr:`value_pos` is an :class:`~kivy.properties.AliasProperty`.
190 '''
191
192 def on_touch_down(self, touch):
193 if self.disabled or not self.collide_point(*touch.pos):
194 return
195 if touch.is_mouse_scrolling:
196 if 'down' in touch.button or 'left' in touch.button:
197 if self.step:
198 self.value = min(self.max, self.value + self.step)
199 else:
200 self.value = min(
201 self.max,
202 self.value + (self.max - self.min) / 20)
203 if 'up' in touch.button or 'right' in touch.button:
204 if self.step:
205 self.value = max(self.min, self.value - self.step)
206 else:
207 self.value = max(
208 self.min,
209 self.value - (self.max - self.min) / 20)
210 else:
211 touch.grab(self)
212 self.value_pos = touch.pos
213 return True
214
215 def on_touch_move(self, touch):
216 if touch.grab_current == self:
217 self.value_pos = touch.pos
218 return True
219
220 def on_touch_up(self, touch):
221 if touch.grab_current == self:
222 self.value_pos = touch.pos
223 return True
224
225 if __name__ == '__main__':
226 from kivy.app import App
227
228 class SliderApp(App):
229 def build(self):
230 return Slider(padding=25)
231
232 SliderApp().run()
233
[end of kivy/uix/slider.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kivy/uix/slider.py b/kivy/uix/slider.py
--- a/kivy/uix/slider.py
+++ b/kivy/uix/slider.py
@@ -23,7 +23,6 @@
from kivy.uix.widget import Widget
from kivy.properties import (NumericProperty, AliasProperty, OptionProperty,
ReferenceListProperty, BoundedNumericProperty)
-from kivy.metrics import sp
class Slider(Widget):
@@ -50,17 +49,17 @@
:attr:`max` is a :class:`~kivy.properties.NumericProperty` and defaults to
100.'''
- padding = NumericProperty(sp(16))
+ padding = NumericProperty('16sp')
'''Padding of the slider. The padding is used for graphical representation
and interaction. It prevents the cursor from going out of the bounds of the
slider bounding box.
- By default, padding is sp(16). The range of the slider is reduced from
- padding \*2 on the screen. It allows drawing the default cursor of sp(32)
+ By default, padding is 16sp. The range of the slider is reduced from
+ padding \*2 on the screen. It allows drawing the default cursor of 32sp
width without having the cursor go out of the widget.
:attr:`padding` is a :class:`~kivy.properties.NumericProperty` and defaults
- to sp(16).'''
+ to 16sp.'''
orientation = OptionProperty('horizontal', options=(
'vertical', 'horizontal'))
| {"golden_diff": "diff --git a/kivy/uix/slider.py b/kivy/uix/slider.py\n--- a/kivy/uix/slider.py\n+++ b/kivy/uix/slider.py\n@@ -23,7 +23,6 @@\n from kivy.uix.widget import Widget\n from kivy.properties import (NumericProperty, AliasProperty, OptionProperty,\n ReferenceListProperty, BoundedNumericProperty)\n-from kivy.metrics import sp\n \n \n class Slider(Widget):\n@@ -50,17 +49,17 @@\n :attr:`max` is a :class:`~kivy.properties.NumericProperty` and defaults to\n 100.'''\n \n- padding = NumericProperty(sp(16))\n+ padding = NumericProperty('16sp')\n '''Padding of the slider. The padding is used for graphical representation\n and interaction. It prevents the cursor from going out of the bounds of the\n slider bounding box.\n \n- By default, padding is sp(16). The range of the slider is reduced from\n- padding \\*2 on the screen. It allows drawing the default cursor of sp(32)\n+ By default, padding is 16sp. The range of the slider is reduced from\n+ padding \\*2 on the screen. It allows drawing the default cursor of 32sp\n width without having the cursor go out of the widget.\n \n :attr:`padding` is a :class:`~kivy.properties.NumericProperty` and defaults\n- to sp(16).'''\n+ to 16sp.'''\n \n orientation = OptionProperty('horizontal', options=(\n 'vertical', 'horizontal'))\n", "issue": "Crash when Slider is imported before forking process on Mac OS X\nConsider\n\n```\n#!/usr/bin/env python2\nimport multiprocessing\nfrom kivy.app import App\n\nfrom kivy.uix.slider import Slider\n\nclass Test(App):\n def build(self):\n return Slider()\n\ndef run_app():\n app = Test()\n app.run()\n\nrunning_app = multiprocessing.Process(target=run_app)\nrunning_app.daemon = True\nrunning_app.start()\nrunning_app.join()\n```\n\nThis currently crashes on Mac OS X:\n\n> **\\* multi-threaded process forked ***\n> crashed on child side of fork pre-exec\n> USING_FORK_WITHOUT_EXEC_IS_NOT_SUPPORTED_BY_FILE_MANAGER\n\nThis is because the property `padding` is initialized with `NumericProperty(sp(16))`. This call to `sp` will attempt to initialize SDL. Cf. [this question on StackOverflow](http://stackoverflow.com/questions/8106002/using-the-python-multiprocessing-module-for-io-with-pygame-on-mac-os-10-7)\n\n", "before_files": [{"content": "\"\"\"\nSlider\n======\n\n.. image:: images/slider.jpg\n\nThe :class:`Slider` widget looks like a scrollbar. 
It supports horizontal and\nvertical orientations, min/max values and a default value.\n\nTo create a slider from -100 to 100 starting from 25::\n\n from kivy.uix.slider import Slider\n s = Slider(min=-100, max=100, value=25)\n\nTo create a vertical slider::\n\n from kivy.uix.slider import Slider\n s = Slider(orientation='vertical')\n\n\"\"\"\n__all__ = ('Slider', )\n\nfrom kivy.uix.widget import Widget\nfrom kivy.properties import (NumericProperty, AliasProperty, OptionProperty,\n ReferenceListProperty, BoundedNumericProperty)\nfrom kivy.metrics import sp\n\n\nclass Slider(Widget):\n \"\"\"Class for creating a Slider widget.\n\n Check module documentation for more details.\n \"\"\"\n\n value = NumericProperty(0.)\n '''Current value used for the slider.\n\n :attr:`value` is a :class:`~kivy.properties.NumericProperty` and defaults\n to 0.'''\n\n min = NumericProperty(0.)\n '''Minimum value allowed for :attr:`value`.\n\n :attr:`min` is a :class:`~kivy.properties.NumericProperty` and defaults to\n 0.'''\n\n max = NumericProperty(100.)\n '''Maximum value allowed for :attr:`value`.\n\n :attr:`max` is a :class:`~kivy.properties.NumericProperty` and defaults to\n 100.'''\n\n padding = NumericProperty(sp(16))\n '''Padding of the slider. The padding is used for graphical representation\n and interaction. It prevents the cursor from going out of the bounds of the\n slider bounding box.\n\n By default, padding is sp(16). The range of the slider is reduced from\n padding \\*2 on the screen. It allows drawing the default cursor of sp(32)\n width without having the cursor go out of the widget.\n\n :attr:`padding` is a :class:`~kivy.properties.NumericProperty` and defaults\n to sp(16).'''\n\n orientation = OptionProperty('horizontal', options=(\n 'vertical', 'horizontal'))\n '''Orientation of the slider.\n\n :attr:`orientation` is an :class:`~kivy.properties.OptionProperty` and\n defaults to 'horizontal'. Can take a value of 'vertical' or 'horizontal'.\n '''\n\n range = ReferenceListProperty(min, max)\n '''Range of the slider in the format (minimum value, maximum value)::\n\n >>> slider = Slider(min=10, max=80)\n >>> slider.range\n [10, 80]\n >>> slider.range = (20, 100)\n >>> slider.min\n 20\n >>> slider.max\n 100\n\n :attr:`range` is a :class:`~kivy.properties.ReferenceListProperty` of\n (:attr:`min`, :attr:`max`) properties.\n '''\n\n step = BoundedNumericProperty(0, min=0)\n '''Step size of the slider.\n\n .. versionadded:: 1.4.0\n\n Determines the size of each interval or step the slider takes between\n min and max. If the value range can't be evenly divisible by step the\n last step will be capped by slider.max\n\n :attr:`step` is a :class:`~kivy.properties.NumericProperty` and defaults\n to 1.'''\n\n # The following two methods constrain the slider's value\n # to range(min,max). 
Otherwise it may happen that self.value < self.min\n # at init.\n\n def on_min(self, *largs):\n self.value = min(self.max, max(self.min, self.value))\n\n def on_max(self, *largs):\n self.value = min(self.max, max(self.min, self.value))\n\n def get_norm_value(self):\n vmin = self.min\n d = self.max - vmin\n if d == 0:\n return 0\n return (self.value - vmin) / float(d)\n\n def set_norm_value(self, value):\n vmin = self.min\n vmax = self.max\n step = self.step\n val = min(value * (vmax - vmin) + vmin, vmax)\n if step == 0:\n self.value = val\n else:\n self.value = min(round((val - vmin) / step) * step + vmin,\n vmax)\n value_normalized = AliasProperty(get_norm_value, set_norm_value,\n bind=('value', 'min', 'max', 'step'))\n '''Normalized value inside the :attr:`range` (min/max) to 0-1 range::\n\n >>> slider = Slider(value=50, min=0, max=100)\n >>> slider.value\n 50\n >>> slider.value_normalized\n 0.5\n >>> slider.value = 0\n >>> slider.value_normalized\n 0\n >>> slider.value = 100\n >>> slider.value_normalized\n 1\n\n You can also use it for setting the real value without knowing the minimum\n and maximum::\n\n >>> slider = Slider(min=0, max=200)\n >>> slider.value_normalized = .5\n >>> slider.value\n 100\n >>> slider.value_normalized = 1.\n >>> slider.value\n 200\n\n :attr:`value_normalized` is an :class:`~kivy.properties.AliasProperty`.\n '''\n\n def get_value_pos(self):\n padding = self.padding\n x = self.x\n y = self.y\n nval = self.value_normalized\n if self.orientation == 'horizontal':\n return (x + padding + nval * (self.width - 2 * padding), y)\n else:\n return (x, y + padding + nval * (self.height - 2 * padding))\n\n def set_value_pos(self, pos):\n padding = self.padding\n x = min(self.right - padding, max(pos[0], self.x + padding))\n y = min(self.top - padding, max(pos[1], self.y + padding))\n if self.orientation == 'horizontal':\n if self.width == 0:\n self.value_normalized = 0\n else:\n self.value_normalized = (x - self.x - padding\n ) / float(self.width - 2 * padding)\n else:\n if self.height == 0:\n self.value_normalized = 0\n else:\n self.value_normalized = (y - self.y - padding\n ) / float(self.height - 2 * padding)\n value_pos = AliasProperty(get_value_pos, set_value_pos,\n bind=('x', 'y', 'width', 'height', 'min',\n 'max', 'value_normalized', 'orientation'))\n '''Position of the internal cursor, based on the normalized value.\n\n :attr:`value_pos` is an :class:`~kivy.properties.AliasProperty`.\n '''\n\n def on_touch_down(self, touch):\n if self.disabled or not self.collide_point(*touch.pos):\n return\n if touch.is_mouse_scrolling:\n if 'down' in touch.button or 'left' in touch.button:\n if self.step:\n self.value = min(self.max, self.value + self.step)\n else:\n self.value = min(\n self.max,\n self.value + (self.max - self.min) / 20)\n if 'up' in touch.button or 'right' in touch.button:\n if self.step:\n self.value = max(self.min, self.value - self.step)\n else:\n self.value = max(\n self.min,\n self.value - (self.max - self.min) / 20)\n else:\n touch.grab(self)\n self.value_pos = touch.pos\n return True\n\n def on_touch_move(self, touch):\n if touch.grab_current == self:\n self.value_pos = touch.pos\n return True\n\n def on_touch_up(self, touch):\n if touch.grab_current == self:\n self.value_pos = touch.pos\n return True\n\nif __name__ == '__main__':\n from kivy.app import App\n\n class SliderApp(App):\n def build(self):\n return Slider(padding=25)\n\n SliderApp().run()\n", "path": "kivy/uix/slider.py"}]} | 3,178 | 353 |
gh_patches_debug_22351 | rasdani/github-patches | git_diff | scalableminds__webknossos-libs-225 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow explicitly setting a layer's bounding box
For example, when one cuts data away and knows that the bounding box needs to be shrunken, there is no way of setting the box explicitly with the api.
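A hypothetical usage sketch of the setter being asked for; the method names and signature follow the patch shown further down in this entry, while the dataset/layer handles are assumed for illustration:

```python
# assumed: `dataset` is an already opened wkcuber dataset with a "color" layer
layer = dataset.get_layer("color")

# after cutting data away, shrink the recorded bounding box explicitly
layer.set_bounding_box(offset=(0, 0, 0), size=(512, 512, 256))

# or adjust only one component of it
layer.set_bounding_box_size((512, 512, 128))
```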
</issue>
<code>
[start of wkcuber/api/Layer.py]
1 from shutil import rmtree
2 from os.path import join
3 from os import makedirs
4 from wkw import wkw
5
6 from wkcuber.api.MagDataset import (
7 MagDataset,
8 WKMagDataset,
9 TiffMagDataset,
10 TiledTiffMagDataset,
11 )
12 from wkcuber.mag import Mag
13 from wkcuber.utils import DEFAULT_WKW_FILE_LEN
14
15
16 class Layer:
17
18 COLOR_TYPE = "color"
19 SEGMENTATION_TYPE = "segmentation"
20
21 def __init__(self, name, dataset, dtype, num_channels):
22 self.name = name
23 self.dataset = dataset
24 self.dtype = dtype
25 self.num_channels = num_channels
26 self.mags = {}
27
28 full_path = join(dataset.path, name)
29 makedirs(full_path, exist_ok=True)
30
31 def get_mag(self, mag) -> MagDataset:
32 mag = Mag(mag).to_layer_name()
33 if mag not in self.mags.keys():
34 raise IndexError("The mag {} is not a mag of this layer".format(mag))
35 return self.mags[mag]
36
37 def delete_mag(self, mag):
38 mag = Mag(mag).to_layer_name()
39 if mag not in self.mags.keys():
40 raise IndexError(
41 "Deleting mag {} failed. There is no mag with this name".format(mag)
42 )
43
44 del self.mags[mag]
45 self.dataset.properties._delete_mag(self.name, mag)
46 # delete files on disk
47 full_path = join(self.dataset.path, self.name, mag)
48 rmtree(full_path)
49
50 def _create_dir_for_mag(self, mag):
51 mag = Mag(mag).to_layer_name()
52 full_path = join(self.dataset.path, self.name, mag)
53 makedirs(full_path, exist_ok=True)
54
55 def _assert_mag_does_not_exist_yet(self, mag):
56 mag = Mag(mag).to_layer_name()
57 if mag in self.mags.keys():
58 raise IndexError(
59 "Adding mag {} failed. There is already a mag with this name".format(
60 mag
61 )
62 )
63
64
65 class WKLayer(Layer):
66 def add_mag(
67 self, mag, block_len=None, file_len=None, block_type=None
68 ) -> WKMagDataset:
69 if block_len is None:
70 block_len = 32
71 if file_len is None:
72 file_len = DEFAULT_WKW_FILE_LEN
73 if block_type is None:
74 block_type = wkw.Header.BLOCK_TYPE_RAW
75
76 # normalize the name of the mag
77 mag = Mag(mag).to_layer_name()
78
79 self._assert_mag_does_not_exist_yet(mag)
80 self._create_dir_for_mag(mag)
81
82 self.mags[mag] = WKMagDataset.create(self, mag, block_len, file_len, block_type)
83 self.dataset.properties._add_mag(self.name, mag, block_len * file_len)
84
85 return self.mags[mag]
86
87 def get_or_add_mag(
88 self, mag, block_len=None, file_len=None, block_type=None
89 ) -> WKMagDataset:
90 # normalize the name of the mag
91 mag = Mag(mag).to_layer_name()
92
93 if mag in self.mags.keys():
94 assert (
95 block_len is None or self.mags[mag].header.block_len == block_len
96 ), f"Cannot get_or_add_mag: The mag {mag} already exists, but the block lengths do not match"
97 assert (
98 file_len is None or self.mags[mag].header.file_len == file_len
99 ), f"Cannot get_or_add_mag: The mag {mag} already exists, but the file lengths do not match"
100 assert (
101 block_type is None or self.mags[mag].header.block_type == block_type
102 ), f"Cannot get_or_add_mag: The mag {mag} already exists, but the block types do not match"
103 return self.get_mag(mag)
104 else:
105 return self.add_mag(mag, block_len, file_len, block_type)
106
107 def setup_mag(self, mag):
108 # This method is used to initialize the mag when opening the Dataset. This does not create e.g. the wk_header.
109
110 # normalize the name of the mag
111 mag = Mag(mag).to_layer_name()
112
113 self._assert_mag_does_not_exist_yet(mag)
114
115 with wkw.Dataset.open(join(self.dataset.path, self.name, mag)) as wkw_dataset:
116 wk_header = wkw_dataset.header
117
118 self.mags[mag] = WKMagDataset(
119 self, mag, wk_header.block_len, wk_header.file_len, wk_header.block_type
120 )
121 self.dataset.properties._add_mag(
122 self.name, mag, wk_header.block_len * wk_header.file_len
123 )
124
125
126 class TiffLayer(Layer):
127 def add_mag(self, mag) -> MagDataset:
128 # normalize the name of the mag
129 mag = Mag(mag).to_layer_name()
130
131 self._assert_mag_does_not_exist_yet(mag)
132 self._create_dir_for_mag(mag)
133
134 self.mags[mag] = self._get_mag_dataset_class().create(
135 self, mag, self.dataset.properties.pattern
136 )
137 self.dataset.properties._add_mag(self.name, mag)
138
139 return self.mags[mag]
140
141 def get_or_add_mag(self, mag) -> MagDataset:
142 # normalize the name of the mag
143 mag = Mag(mag).to_layer_name()
144
145 if mag in self.mags.keys():
146 return self.get_mag(mag)
147 else:
148 return self.add_mag(mag)
149
150 def setup_mag(self, mag):
151 # This method is used to initialize the mag when opening the Dataset. This does not create e.g. folders.
152
153 # normalize the name of the mag
154 mag = Mag(mag).to_layer_name()
155
156 self._assert_mag_does_not_exist_yet(mag)
157
158 self.mags[mag] = self._get_mag_dataset_class()(
159 self, mag, self.dataset.properties.pattern
160 )
161 self.dataset.properties._add_mag(self.name, mag)
162
163 def _get_mag_dataset_class(self):
164 return TiffMagDataset
165
166
167 class TiledTiffLayer(TiffLayer):
168 def _get_mag_dataset_class(self):
169 return TiledTiffMagDataset
170
[end of wkcuber/api/Layer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wkcuber/api/Layer.py b/wkcuber/api/Layer.py
--- a/wkcuber/api/Layer.py
+++ b/wkcuber/api/Layer.py
@@ -1,6 +1,10 @@
from shutil import rmtree
from os.path import join
from os import makedirs
+from typing import Tuple
+
+import numpy as np
+
from wkw import wkw
from wkcuber.api.MagDataset import (
@@ -61,6 +65,28 @@
)
)
+ def set_bounding_box(
+ self, offset: Tuple[int, int, int], size: Tuple[int, int, int]
+ ):
+ self.set_bounding_box_offset(offset)
+ self.set_bounding_box_size(size)
+
+ def set_bounding_box_offset(self, offset: Tuple[int, int, int]):
+ size = self.dataset.properties.data_layers["color"].get_bounding_box_size()
+ self.dataset.properties._set_bounding_box_of_layer(
+ self.name, tuple(offset), tuple(size)
+ )
+ for _, mag in self.mags.items():
+ mag.view.global_offset = offset
+
+ def set_bounding_box_size(self, size: Tuple[int, int, int]):
+ offset = self.dataset.properties.data_layers["color"].get_bounding_box_offset()
+ self.dataset.properties._set_bounding_box_of_layer(
+ self.name, tuple(offset), tuple(size)
+ )
+ for _, mag in self.mags.items():
+ mag.view.size = size
+
class WKLayer(Layer):
def add_mag(
| {"golden_diff": "diff --git a/wkcuber/api/Layer.py b/wkcuber/api/Layer.py\n--- a/wkcuber/api/Layer.py\n+++ b/wkcuber/api/Layer.py\n@@ -1,6 +1,10 @@\n from shutil import rmtree\n from os.path import join\n from os import makedirs\n+from typing import Tuple\n+\n+import numpy as np\n+\n from wkw import wkw\n \n from wkcuber.api.MagDataset import (\n@@ -61,6 +65,28 @@\n )\n )\n \n+ def set_bounding_box(\n+ self, offset: Tuple[int, int, int], size: Tuple[int, int, int]\n+ ):\n+ self.set_bounding_box_offset(offset)\n+ self.set_bounding_box_size(size)\n+\n+ def set_bounding_box_offset(self, offset: Tuple[int, int, int]):\n+ size = self.dataset.properties.data_layers[\"color\"].get_bounding_box_size()\n+ self.dataset.properties._set_bounding_box_of_layer(\n+ self.name, tuple(offset), tuple(size)\n+ )\n+ for _, mag in self.mags.items():\n+ mag.view.global_offset = offset\n+\n+ def set_bounding_box_size(self, size: Tuple[int, int, int]):\n+ offset = self.dataset.properties.data_layers[\"color\"].get_bounding_box_offset()\n+ self.dataset.properties._set_bounding_box_of_layer(\n+ self.name, tuple(offset), tuple(size)\n+ )\n+ for _, mag in self.mags.items():\n+ mag.view.size = size\n+\n \n class WKLayer(Layer):\n def add_mag(\n", "issue": "Allow explicitly setting a layer's bounding box\nFor example, when one cuts data away and knows that the bounding box needs to be shrunken, there is no way of setting the box explicitly with the api.\n", "before_files": [{"content": "from shutil import rmtree\nfrom os.path import join\nfrom os import makedirs\nfrom wkw import wkw\n\nfrom wkcuber.api.MagDataset import (\n MagDataset,\n WKMagDataset,\n TiffMagDataset,\n TiledTiffMagDataset,\n)\nfrom wkcuber.mag import Mag\nfrom wkcuber.utils import DEFAULT_WKW_FILE_LEN\n\n\nclass Layer:\n\n COLOR_TYPE = \"color\"\n SEGMENTATION_TYPE = \"segmentation\"\n\n def __init__(self, name, dataset, dtype, num_channels):\n self.name = name\n self.dataset = dataset\n self.dtype = dtype\n self.num_channels = num_channels\n self.mags = {}\n\n full_path = join(dataset.path, name)\n makedirs(full_path, exist_ok=True)\n\n def get_mag(self, mag) -> MagDataset:\n mag = Mag(mag).to_layer_name()\n if mag not in self.mags.keys():\n raise IndexError(\"The mag {} is not a mag of this layer\".format(mag))\n return self.mags[mag]\n\n def delete_mag(self, mag):\n mag = Mag(mag).to_layer_name()\n if mag not in self.mags.keys():\n raise IndexError(\n \"Deleting mag {} failed. There is no mag with this name\".format(mag)\n )\n\n del self.mags[mag]\n self.dataset.properties._delete_mag(self.name, mag)\n # delete files on disk\n full_path = join(self.dataset.path, self.name, mag)\n rmtree(full_path)\n\n def _create_dir_for_mag(self, mag):\n mag = Mag(mag).to_layer_name()\n full_path = join(self.dataset.path, self.name, mag)\n makedirs(full_path, exist_ok=True)\n\n def _assert_mag_does_not_exist_yet(self, mag):\n mag = Mag(mag).to_layer_name()\n if mag in self.mags.keys():\n raise IndexError(\n \"Adding mag {} failed. 
There is already a mag with this name\".format(\n mag\n )\n )\n\n\nclass WKLayer(Layer):\n def add_mag(\n self, mag, block_len=None, file_len=None, block_type=None\n ) -> WKMagDataset:\n if block_len is None:\n block_len = 32\n if file_len is None:\n file_len = DEFAULT_WKW_FILE_LEN\n if block_type is None:\n block_type = wkw.Header.BLOCK_TYPE_RAW\n\n # normalize the name of the mag\n mag = Mag(mag).to_layer_name()\n\n self._assert_mag_does_not_exist_yet(mag)\n self._create_dir_for_mag(mag)\n\n self.mags[mag] = WKMagDataset.create(self, mag, block_len, file_len, block_type)\n self.dataset.properties._add_mag(self.name, mag, block_len * file_len)\n\n return self.mags[mag]\n\n def get_or_add_mag(\n self, mag, block_len=None, file_len=None, block_type=None\n ) -> WKMagDataset:\n # normalize the name of the mag\n mag = Mag(mag).to_layer_name()\n\n if mag in self.mags.keys():\n assert (\n block_len is None or self.mags[mag].header.block_len == block_len\n ), f\"Cannot get_or_add_mag: The mag {mag} already exists, but the block lengths do not match\"\n assert (\n file_len is None or self.mags[mag].header.file_len == file_len\n ), f\"Cannot get_or_add_mag: The mag {mag} already exists, but the file lengths do not match\"\n assert (\n block_type is None or self.mags[mag].header.block_type == block_type\n ), f\"Cannot get_or_add_mag: The mag {mag} already exists, but the block types do not match\"\n return self.get_mag(mag)\n else:\n return self.add_mag(mag, block_len, file_len, block_type)\n\n def setup_mag(self, mag):\n # This method is used to initialize the mag when opening the Dataset. This does not create e.g. the wk_header.\n\n # normalize the name of the mag\n mag = Mag(mag).to_layer_name()\n\n self._assert_mag_does_not_exist_yet(mag)\n\n with wkw.Dataset.open(join(self.dataset.path, self.name, mag)) as wkw_dataset:\n wk_header = wkw_dataset.header\n\n self.mags[mag] = WKMagDataset(\n self, mag, wk_header.block_len, wk_header.file_len, wk_header.block_type\n )\n self.dataset.properties._add_mag(\n self.name, mag, wk_header.block_len * wk_header.file_len\n )\n\n\nclass TiffLayer(Layer):\n def add_mag(self, mag) -> MagDataset:\n # normalize the name of the mag\n mag = Mag(mag).to_layer_name()\n\n self._assert_mag_does_not_exist_yet(mag)\n self._create_dir_for_mag(mag)\n\n self.mags[mag] = self._get_mag_dataset_class().create(\n self, mag, self.dataset.properties.pattern\n )\n self.dataset.properties._add_mag(self.name, mag)\n\n return self.mags[mag]\n\n def get_or_add_mag(self, mag) -> MagDataset:\n # normalize the name of the mag\n mag = Mag(mag).to_layer_name()\n\n if mag in self.mags.keys():\n return self.get_mag(mag)\n else:\n return self.add_mag(mag)\n\n def setup_mag(self, mag):\n # This method is used to initialize the mag when opening the Dataset. This does not create e.g. folders.\n\n # normalize the name of the mag\n mag = Mag(mag).to_layer_name()\n\n self._assert_mag_does_not_exist_yet(mag)\n\n self.mags[mag] = self._get_mag_dataset_class()(\n self, mag, self.dataset.properties.pattern\n )\n self.dataset.properties._add_mag(self.name, mag)\n\n def _get_mag_dataset_class(self):\n return TiffMagDataset\n\n\nclass TiledTiffLayer(TiffLayer):\n def _get_mag_dataset_class(self):\n return TiledTiffMagDataset\n", "path": "wkcuber/api/Layer.py"}]} | 2,350 | 353 |
gh_patches_debug_17970 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-604 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
link formatting not working properly
I made a markdown link, but the "<a href" part was trimmed and garbled HTML remained.
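A rough, simplified reproduction of the failure mode (the hard-coded `replace` below is only a stand-in for the view's URL-detection regex):

```python
from markdown import markdown

text = '[my book](https://example.com/book/1)'

# auto-linking the bare URL *before* the markdown pass rewrites the link target...
pre_linked = text.replace(
    'https://example.com/book/1',
    '<a href="https://example.com/book/1">https://example.com/book/1</a>',
)

# ...so the explicit markdown link can no longer be converted cleanly
print(markdown(pre_linked))
```

Converting the markdown first and only auto-linking bare URLs afterwards keeps the two passes from interfering with each other.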
</issue>
<code>
[start of bookwyrm/views/status.py]
1 ''' what are we here for if not for posting '''
2 import re
3 from django.contrib.auth.decorators import login_required
4 from django.http import HttpResponseBadRequest
5 from django.shortcuts import get_object_or_404, redirect
6 from django.utils.decorators import method_decorator
7 from django.views import View
8 from markdown import markdown
9
10 from bookwyrm import forms, models
11 from bookwyrm.sanitize_html import InputHtmlParser
12 from bookwyrm.settings import DOMAIN
13 from bookwyrm.status import create_notification, delete_status
14 from bookwyrm.utils import regex
15 from .helpers import handle_remote_webfinger
16
17
18 # pylint: disable= no-self-use
19 @method_decorator(login_required, name='dispatch')
20 class CreateStatus(View):
21 ''' the view for *posting* '''
22 def post(self, request, status_type):
23 ''' create status of whatever type '''
24 status_type = status_type[0].upper() + status_type[1:]
25
26 try:
27 form = getattr(forms, '%sForm' % status_type)(request.POST)
28 except AttributeError:
29 return HttpResponseBadRequest()
30 if not form.is_valid():
31 return redirect(request.headers.get('Referer', '/'))
32
33 status = form.save(commit=False)
34 if not status.sensitive and status.content_warning:
35 # the cw text field remains populated when you click "remove"
36 status.content_warning = None
37 status.save(broadcast=False)
38
39 # inspect the text for user tags
40 content = status.content
41 for (mention_text, mention_user) in find_mentions(content):
42 # add them to status mentions fk
43 status.mention_users.add(mention_user)
44
45 # turn the mention into a link
46 content = re.sub(
47 r'%s([^@]|$)' % mention_text,
48 r'<a href="%s">%s</a>\g<1>' % \
49 (mention_user.remote_id, mention_text),
50 content)
51
52 # add reply parent to mentions and notify
53 if status.reply_parent:
54 status.mention_users.add(status.reply_parent.user)
55
56 if status.reply_parent.user.local:
57 create_notification(
58 status.reply_parent.user,
59 'REPLY',
60 related_user=request.user,
61 related_status=status
62 )
63
64 # deduplicate mentions
65 status.mention_users.set(set(status.mention_users.all()))
66 # create mention notifications
67 for mention_user in status.mention_users.all():
68 if status.reply_parent and mention_user == status.reply_parent.user:
69 continue
70 if mention_user.local:
71 create_notification(
72 mention_user,
73 'MENTION',
74 related_user=request.user,
75 related_status=status
76 )
77
78 # don't apply formatting to generated notes
79 if not isinstance(status, models.GeneratedNote):
80 status.content = to_markdown(content)
81 # do apply formatting to quotes
82 if hasattr(status, 'quote'):
83 status.quote = to_markdown(status.quote)
84
85 status.save(created=True)
86 return redirect(request.headers.get('Referer', '/'))
87
88
89 class DeleteStatus(View):
90 ''' tombstone that bad boy '''
91 def post(self, request, status_id):
92 ''' delete and tombstone a status '''
93 status = get_object_or_404(models.Status, id=status_id)
94
95 # don't let people delete other people's statuses
96 if status.user != request.user:
97 return HttpResponseBadRequest()
98
99 # perform deletion
100 delete_status(status)
101 return redirect(request.headers.get('Referer', '/'))
102
103 def find_mentions(content):
104 ''' detect @mentions in raw status content '''
105 for match in re.finditer(regex.strict_username, content):
106 username = match.group().strip().split('@')[1:]
107 if len(username) == 1:
108 # this looks like a local user (@user), fill in the domain
109 username.append(DOMAIN)
110 username = '@'.join(username)
111
112 mention_user = handle_remote_webfinger(username)
113 if not mention_user:
114 # we can ignore users we don't know about
115 continue
116 yield (match.group(), mention_user)
117
118
119 def format_links(content):
120 ''' detect and format links '''
121 return re.sub(
122 r'([^(href=")]|^|\()(https?:\/\/(%s([\w\.\-_\/+&\?=:;,])*))' % \
123 regex.domain,
124 r'\g<1><a href="\g<2>">\g<3></a>',
125 content)
126
127 def to_markdown(content):
128 ''' catch links and convert to markdown '''
129 content = format_links(content)
130 content = markdown(content)
131 # sanitize resulting html
132 sanitizer = InputHtmlParser()
133 sanitizer.feed(content)
134 return sanitizer.get_output()
135
[end of bookwyrm/views/status.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bookwyrm/views/status.py b/bookwyrm/views/status.py
--- a/bookwyrm/views/status.py
+++ b/bookwyrm/views/status.py
@@ -48,7 +48,6 @@
r'<a href="%s">%s</a>\g<1>' % \
(mention_user.remote_id, mention_text),
content)
-
# add reply parent to mentions and notify
if status.reply_parent:
status.mention_users.add(status.reply_parent.user)
@@ -126,8 +125,8 @@
def to_markdown(content):
''' catch links and convert to markdown '''
- content = format_links(content)
content = markdown(content)
+ content = format_links(content)
# sanitize resulting html
sanitizer = InputHtmlParser()
sanitizer.feed(content)
| {"golden_diff": "diff --git a/bookwyrm/views/status.py b/bookwyrm/views/status.py\n--- a/bookwyrm/views/status.py\n+++ b/bookwyrm/views/status.py\n@@ -48,7 +48,6 @@\n r'<a href=\"%s\">%s</a>\\g<1>' % \\\n (mention_user.remote_id, mention_text),\n content)\n-\n # add reply parent to mentions and notify\n if status.reply_parent:\n status.mention_users.add(status.reply_parent.user)\n@@ -126,8 +125,8 @@\n \n def to_markdown(content):\n ''' catch links and convert to markdown '''\n- content = format_links(content)\n content = markdown(content)\n+ content = format_links(content)\n # sanitize resulting html\n sanitizer = InputHtmlParser()\n sanitizer.feed(content)\n", "issue": "link formatting not working properly\nI made a markdown link, but the \"<a href\" part was trimmed and garbled html remained\n", "before_files": [{"content": "''' what are we here for if not for posting '''\nimport re\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import HttpResponseBadRequest\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\nfrom markdown import markdown\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.sanitize_html import InputHtmlParser\nfrom bookwyrm.settings import DOMAIN\nfrom bookwyrm.status import create_notification, delete_status\nfrom bookwyrm.utils import regex\nfrom .helpers import handle_remote_webfinger\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name='dispatch')\nclass CreateStatus(View):\n ''' the view for *posting* '''\n def post(self, request, status_type):\n ''' create status of whatever type '''\n status_type = status_type[0].upper() + status_type[1:]\n\n try:\n form = getattr(forms, '%sForm' % status_type)(request.POST)\n except AttributeError:\n return HttpResponseBadRequest()\n if not form.is_valid():\n return redirect(request.headers.get('Referer', '/'))\n\n status = form.save(commit=False)\n if not status.sensitive and status.content_warning:\n # the cw text field remains populated when you click \"remove\"\n status.content_warning = None\n status.save(broadcast=False)\n\n # inspect the text for user tags\n content = status.content\n for (mention_text, mention_user) in find_mentions(content):\n # add them to status mentions fk\n status.mention_users.add(mention_user)\n\n # turn the mention into a link\n content = re.sub(\n r'%s([^@]|$)' % mention_text,\n r'<a href=\"%s\">%s</a>\\g<1>' % \\\n (mention_user.remote_id, mention_text),\n content)\n\n # add reply parent to mentions and notify\n if status.reply_parent:\n status.mention_users.add(status.reply_parent.user)\n\n if status.reply_parent.user.local:\n create_notification(\n status.reply_parent.user,\n 'REPLY',\n related_user=request.user,\n related_status=status\n )\n\n # deduplicate mentions\n status.mention_users.set(set(status.mention_users.all()))\n # create mention notifications\n for mention_user in status.mention_users.all():\n if status.reply_parent and mention_user == status.reply_parent.user:\n continue\n if mention_user.local:\n create_notification(\n mention_user,\n 'MENTION',\n related_user=request.user,\n related_status=status\n )\n\n # don't apply formatting to generated notes\n if not isinstance(status, models.GeneratedNote):\n status.content = to_markdown(content)\n # do apply formatting to quotes\n if hasattr(status, 'quote'):\n status.quote = to_markdown(status.quote)\n\n status.save(created=True)\n return redirect(request.headers.get('Referer', 
'/'))\n\n\nclass DeleteStatus(View):\n ''' tombstone that bad boy '''\n def post(self, request, status_id):\n ''' delete and tombstone a status '''\n status = get_object_or_404(models.Status, id=status_id)\n\n # don't let people delete other people's statuses\n if status.user != request.user:\n return HttpResponseBadRequest()\n\n # perform deletion\n delete_status(status)\n return redirect(request.headers.get('Referer', '/'))\n\ndef find_mentions(content):\n ''' detect @mentions in raw status content '''\n for match in re.finditer(regex.strict_username, content):\n username = match.group().strip().split('@')[1:]\n if len(username) == 1:\n # this looks like a local user (@user), fill in the domain\n username.append(DOMAIN)\n username = '@'.join(username)\n\n mention_user = handle_remote_webfinger(username)\n if not mention_user:\n # we can ignore users we don't know about\n continue\n yield (match.group(), mention_user)\n\n\ndef format_links(content):\n ''' detect and format links '''\n return re.sub(\n r'([^(href=\")]|^|\\()(https?:\\/\\/(%s([\\w\\.\\-_\\/+&\\?=:;,])*))' % \\\n regex.domain,\n r'\\g<1><a href=\"\\g<2>\">\\g<3></a>',\n content)\n\ndef to_markdown(content):\n ''' catch links and convert to markdown '''\n content = format_links(content)\n content = markdown(content)\n # sanitize resulting html\n sanitizer = InputHtmlParser()\n sanitizer.feed(content)\n return sanitizer.get_output()\n", "path": "bookwyrm/views/status.py"}]} | 1,842 | 177 |
gh_patches_debug_41702 | rasdani/github-patches | git_diff | mars-project__mars-482 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[ENH] --ui-port option for web service can be removed
As no actor pools are created in Mars Worker, the option -p can be adopted as the HTTP port, and --ui-port can be merged into it.
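One way the merge could look, mirroring the approach of the patch at the end of this entry: keep `--ui-port` as a hidden legacy alias and fall back to the shared `-p/--port` value (a standalone sketch, not the service's actual argument parser):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-p', '--port', help='port of the service')
parser.add_argument('--ui-port', help=argparse.SUPPRESS)  # hidden legacy alias

args = parser.parse_args(['-p', '7103'])

port_arg = args.ui_port or args.port
ui_port = int(port_arg) if port_arg else None
print(ui_port)  # -> 7103
```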
</issue>
<code>
[start of mars/web/__main__.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 # Copyright 1999-2018 Alibaba Group Holding Ltd.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 import gevent.monkey
18 gevent.monkey.patch_all(thread=False)
19
20 import logging # noqa: E402
21 import random # noqa: E402
22 import time # noqa: E402
23
24 from ..base_app import BaseApplication # noqa: E402
25 from ..compat import six # noqa: E402
26 from ..errors import StartArgumentError # noqa: E402
27
28 logger = logging.getLogger(__name__)
29
30
31 class WebApplication(BaseApplication):
32 def __init__(self):
33 super(WebApplication, self).__init__()
34 self.mars_web = None
35 self.require_pool = False
36
37 def config_args(self, parser):
38 parser.add_argument('--ui-port', help='port of Mars UI')
39
40 def validate_arguments(self):
41 if not self.args.schedulers and not self.args.kv_store:
42 raise StartArgumentError('Either schedulers or url of kv store is required.')
43
44 def main_loop(self):
45 try:
46 self.start()
47 while True:
48 time.sleep(0.1)
49 finally:
50 self.stop()
51
52 def start(self):
53 from .server import MarsWeb
54 if MarsWeb is None:
55 self.mars_web = None
56 logger.warning('Mars UI cannot be loaded. Please check if necessary components are installed.')
57 else:
58 ui_port = int(self.args.ui_port) if self.args.ui_port else None
59 scheduler_ip = self.args.schedulers or None
60 if isinstance(scheduler_ip, six.string_types):
61 schedulers = scheduler_ip.split(',')
62 scheduler_ip = random.choice(schedulers)
63 self.mars_web = MarsWeb(port=ui_port, scheduler_ip=scheduler_ip)
64 self.mars_web.start()
65
66 def stop(self):
67 if self.mars_web:
68 self.mars_web.stop()
69
70
71 main = WebApplication()
72
73 if __name__ == '__main__':
74 main()
75
[end of mars/web/__main__.py]
[start of mars/base_app.py]
1 # Copyright 1999-2018 Alibaba Group Holding Ltd.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import argparse
16 import logging
17 import os
18 import sys
19
20 from .actors import create_actor_pool
21 from .config import options
22 from .errors import StartArgumentError
23 from .lib.tblib import pickling_support
24 from .utils import get_next_port
25
26 pickling_support.install()
27 logger = logging.getLogger(__name__)
28
29 try:
30 from pytest_cov.embed import cleanup_on_sigterm
31 cleanup_on_sigterm()
32 except ImportError: # pragma: no cover
33 pass
34
35
36 class BaseApplication(object):
37 """
38 :type pool mars.actors.pool.gevent_pool.ActorContext
39 """
40 service_description = ''
41 service_logger = logger
42
43 def __init__(self):
44 self.args = None
45 self.endpoint = None
46 self.pool = None
47 self.n_process = None
48
49 self._running = False
50
51 def __call__(self, argv=None):
52 import json
53
54 if argv is None:
55 argv = sys.argv[1:]
56 new_argv = []
57 for a in argv:
58 if not a.startswith('-D'):
59 new_argv.append(a)
60 continue
61 conf, val = a[2:].split('=', 1)
62 conf_parts = conf.split('.')
63 conf_obj = options
64 for g in conf_parts[:-1]:
65 conf_obj = getattr(conf_obj, g)
66 try:
67 setattr(conf_obj, conf_parts[-1], json.loads(val))
68 except:
69 setattr(conf_obj, conf_parts[-1], val)
70
71 return self._main(new_argv)
72
73 def _main(self, argv=None):
74 parser = argparse.ArgumentParser(description=self.service_description)
75 parser.add_argument('-a', '--advertise', help='advertise ip')
76 parser.add_argument('-k', '--kv-store', help='address of kv store service, '
77 'for instance, etcd://localhost:4001')
78 parser.add_argument('-e', '--endpoint', help='endpoint of the service')
79 parser.add_argument('-s', '--schedulers', help='endpoint of scheduler, when single scheduler '
80 'and etcd is not available')
81 parser.add_argument('-H', '--host', help='host of the scheduler service, only available '
82 'when `endpoint` is absent')
83 parser.add_argument('-p', '--port', help='port of the scheduler service, only available '
84 'when `endpoint` is absent')
85 parser.add_argument('--level', help='log level')
86 parser.add_argument('--format', help='log format')
87 parser.add_argument('--log_conf', help='log config file')
88 parser.add_argument('--inspect', help='inspection endpoint')
89 parser.add_argument('--load-modules', nargs='*', help='modules to import')
90 self.config_args(parser)
91 args = parser.parse_args(argv)
92 self.args = args
93
94 endpoint = args.endpoint
95 host = args.host
96 port = args.port
97 options.kv_store = args.kv_store if args.kv_store else options.kv_store
98
99 load_modules = []
100 for mod in args.load_modules or ():
101 load_modules.extend(mod.split(','))
102 if not args.load_modules:
103 load_module_str = os.environ.get('MARS_LOAD_MODULES')
104 if load_module_str:
105 load_modules = load_module_str.split(',')
106 load_modules.append('mars.executor')
107 for m in load_modules:
108 __import__(m, globals(), locals(), [])
109 self.service_logger.info('Modules %s loaded', ','.join(load_modules))
110
111 self.n_process = 1
112
113 self.config_service()
114 self.config_logging()
115
116 if not host:
117 host = args.advertise or '0.0.0.0'
118 if not endpoint and port:
119 endpoint = host + ':' + port
120
121 try:
122 self.validate_arguments()
123 except StartArgumentError as ex:
124 parser.error('Failed to start application: %s' % ex)
125
126 if getattr(self, 'require_pool', True):
127 self.endpoint, self.pool = self._try_create_pool(endpoint=endpoint, host=host, port=port)
128 self.service_logger.info('%s started at %s.', self.service_description, self.endpoint)
129 self.main_loop()
130
131 def config_logging(self):
132 import logging.config
133 log_conf = self.args.log_conf or 'logging.conf'
134
135 conf_file_paths = [
136 '', os.path.abspath('.'),
137 os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
138 ]
139 log_configured = False
140 for path in conf_file_paths:
141 conf_path = log_conf
142 if path:
143 conf_path = os.path.join(conf_path)
144 if os.path.exists(conf_path):
145 logging.config.fileConfig(conf_path, disable_existing_loggers=False)
146 log_configured = True
147
148 if not log_configured:
149 if not self.args.level:
150 level = logging.INFO
151 else:
152 level = getattr(logging, self.args.level.upper())
153 logging.getLogger('mars').setLevel(level)
154 logging.basicConfig(format=self.args.format)
155
156 def validate_arguments(self):
157 pass
158
159 def _try_create_pool(self, endpoint=None, host=None, port=None):
160 pool = None
161 if endpoint:
162 pool = self.create_pool(address=endpoint)
163 else:
164 use_port = None
165 retrial = 5
166 while use_port is None:
167 use_port = port or get_next_port()
168 try:
169 endpoint = '{0}:{1}'.format(host, use_port)
170 pool = self.create_pool(address=endpoint)
171 break
172 except:
173 retrial -= 1
174 if retrial == 0:
175 raise
176
177 if port is None:
178 use_port = None
179 else:
180 raise
181 return endpoint, pool
182
183 def create_pool(self, *args, **kwargs):
184 kwargs.update(dict(n_process=self.n_process, backend='gevent'))
185 return create_actor_pool(*args, **kwargs)
186
187 def main_loop(self):
188 try:
189 with self.pool:
190 try:
191 self.start()
192 self._running = True
193 while True:
194 self.pool.join(1)
195 stopped = []
196 for idx, proc in enumerate(self.pool.processes):
197 if not proc.is_alive():
198 stopped.append(idx)
199 if stopped:
200 self.handle_process_down(stopped)
201 finally:
202 self.stop()
203 finally:
204 self._running = False
205
206 def handle_process_down(self, proc_indices):
207 """
208 Handle process down event, the default action is to quit
209 the whole application. Applications can inherit this method
210 to do customized process-level failover.
211
212 :param proc_indices: indices of processes (not pids)
213 """
214 for idx in proc_indices:
215 proc = self.pool.processes[idx]
216 self.service_logger.fatal(
217 'Process %d exited unpredictably. exitcode=%d', proc.pid, proc.exitcode)
218 raise KeyboardInterrupt
219
220 def config_service(self):
221 pass
222
223 def config_args(self, parser):
224 raise NotImplementedError
225
226 def start(self):
227 raise NotImplementedError
228
229 def stop(self):
230 raise NotImplementedError
231
[end of mars/base_app.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mars/base_app.py b/mars/base_app.py
--- a/mars/base_app.py
+++ b/mars/base_app.py
@@ -76,11 +76,11 @@
parser.add_argument('-k', '--kv-store', help='address of kv store service, '
'for instance, etcd://localhost:4001')
parser.add_argument('-e', '--endpoint', help='endpoint of the service')
- parser.add_argument('-s', '--schedulers', help='endpoint of scheduler, when single scheduler '
+ parser.add_argument('-s', '--schedulers', help='endpoint of schedulers, when single scheduler '
'and etcd is not available')
- parser.add_argument('-H', '--host', help='host of the scheduler service, only available '
+ parser.add_argument('-H', '--host', help='host of the service, only available '
'when `endpoint` is absent')
- parser.add_argument('-p', '--port', help='port of the scheduler service, only available '
+ parser.add_argument('-p', '--port', help='port of the service, only available '
'when `endpoint` is absent')
parser.add_argument('--level', help='log level')
parser.add_argument('--format', help='log format')
@@ -125,7 +125,7 @@
if getattr(self, 'require_pool', True):
self.endpoint, self.pool = self._try_create_pool(endpoint=endpoint, host=host, port=port)
- self.service_logger.info('%s started at %s.', self.service_description, self.endpoint)
+ self.service_logger.info('%s started at %s.', self.service_description, self.endpoint)
self.main_loop()
def config_logging(self):
diff --git a/mars/web/__main__.py b/mars/web/__main__.py
--- a/mars/web/__main__.py
+++ b/mars/web/__main__.py
@@ -17,9 +17,10 @@
import gevent.monkey
gevent.monkey.patch_all(thread=False)
-import logging # noqa: E402
-import random # noqa: E402
-import time # noqa: E402
+import argparse # noqa: E402
+import logging # noqa: E402
+import random # noqa: E402
+import time # noqa: E402
from ..base_app import BaseApplication # noqa: E402
from ..compat import six # noqa: E402
@@ -35,7 +36,7 @@
self.require_pool = False
def config_args(self, parser):
- parser.add_argument('--ui-port', help='port of Mars UI')
+ parser.add_argument('--ui-port', help=argparse.SUPPRESS)
def validate_arguments(self):
if not self.args.schedulers and not self.args.kv_store:
@@ -55,7 +56,8 @@
self.mars_web = None
logger.warning('Mars UI cannot be loaded. Please check if necessary components are installed.')
else:
- ui_port = int(self.args.ui_port) if self.args.ui_port else None
+ port_arg = self.args.ui_port or self.args.port
+ ui_port = int(port_arg) if port_arg else None
scheduler_ip = self.args.schedulers or None
if isinstance(scheduler_ip, six.string_types):
schedulers = scheduler_ip.split(',')
| {"golden_diff": "diff --git a/mars/base_app.py b/mars/base_app.py\n--- a/mars/base_app.py\n+++ b/mars/base_app.py\n@@ -76,11 +76,11 @@\n parser.add_argument('-k', '--kv-store', help='address of kv store service, '\n 'for instance, etcd://localhost:4001')\n parser.add_argument('-e', '--endpoint', help='endpoint of the service')\n- parser.add_argument('-s', '--schedulers', help='endpoint of scheduler, when single scheduler '\n+ parser.add_argument('-s', '--schedulers', help='endpoint of schedulers, when single scheduler '\n 'and etcd is not available')\n- parser.add_argument('-H', '--host', help='host of the scheduler service, only available '\n+ parser.add_argument('-H', '--host', help='host of the service, only available '\n 'when `endpoint` is absent')\n- parser.add_argument('-p', '--port', help='port of the scheduler service, only available '\n+ parser.add_argument('-p', '--port', help='port of the service, only available '\n 'when `endpoint` is absent')\n parser.add_argument('--level', help='log level')\n parser.add_argument('--format', help='log format')\n@@ -125,7 +125,7 @@\n \n if getattr(self, 'require_pool', True):\n self.endpoint, self.pool = self._try_create_pool(endpoint=endpoint, host=host, port=port)\n- self.service_logger.info('%s started at %s.', self.service_description, self.endpoint)\n+ self.service_logger.info('%s started at %s.', self.service_description, self.endpoint)\n self.main_loop()\n \n def config_logging(self):\ndiff --git a/mars/web/__main__.py b/mars/web/__main__.py\n--- a/mars/web/__main__.py\n+++ b/mars/web/__main__.py\n@@ -17,9 +17,10 @@\n import gevent.monkey\n gevent.monkey.patch_all(thread=False)\n \n-import logging # noqa: E402\n-import random # noqa: E402\n-import time # noqa: E402\n+import argparse # noqa: E402\n+import logging # noqa: E402\n+import random # noqa: E402\n+import time # noqa: E402\n \n from ..base_app import BaseApplication # noqa: E402\n from ..compat import six # noqa: E402\n@@ -35,7 +36,7 @@\n self.require_pool = False\n \n def config_args(self, parser):\n- parser.add_argument('--ui-port', help='port of Mars UI')\n+ parser.add_argument('--ui-port', help=argparse.SUPPRESS)\n \n def validate_arguments(self):\n if not self.args.schedulers and not self.args.kv_store:\n@@ -55,7 +56,8 @@\n self.mars_web = None\n logger.warning('Mars UI cannot be loaded. 
Please check if necessary components are installed.')\n else:\n- ui_port = int(self.args.ui_port) if self.args.ui_port else None\n+ port_arg = self.args.ui_port or self.args.port\n+ ui_port = int(port_arg) if port_arg else None\n scheduler_ip = self.args.schedulers or None\n if isinstance(scheduler_ip, six.string_types):\n schedulers = scheduler_ip.split(',')\n", "issue": "[ENH] --ui-port option for web service can be removed\nAs no actor pools are created in Mars Worker, the option -p can be adopted as http port, and --ui-port can be merged.\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport gevent.monkey\ngevent.monkey.patch_all(thread=False)\n\nimport logging # noqa: E402\nimport random # noqa: E402\nimport time # noqa: E402\n\nfrom ..base_app import BaseApplication # noqa: E402\nfrom ..compat import six # noqa: E402\nfrom ..errors import StartArgumentError # noqa: E402\n\nlogger = logging.getLogger(__name__)\n\n\nclass WebApplication(BaseApplication):\n def __init__(self):\n super(WebApplication, self).__init__()\n self.mars_web = None\n self.require_pool = False\n\n def config_args(self, parser):\n parser.add_argument('--ui-port', help='port of Mars UI')\n\n def validate_arguments(self):\n if not self.args.schedulers and not self.args.kv_store:\n raise StartArgumentError('Either schedulers or url of kv store is required.')\n\n def main_loop(self):\n try:\n self.start()\n while True:\n time.sleep(0.1)\n finally:\n self.stop()\n\n def start(self):\n from .server import MarsWeb\n if MarsWeb is None:\n self.mars_web = None\n logger.warning('Mars UI cannot be loaded. 
Please check if necessary components are installed.')\n else:\n ui_port = int(self.args.ui_port) if self.args.ui_port else None\n scheduler_ip = self.args.schedulers or None\n if isinstance(scheduler_ip, six.string_types):\n schedulers = scheduler_ip.split(',')\n scheduler_ip = random.choice(schedulers)\n self.mars_web = MarsWeb(port=ui_port, scheduler_ip=scheduler_ip)\n self.mars_web.start()\n\n def stop(self):\n if self.mars_web:\n self.mars_web.stop()\n\n\nmain = WebApplication()\n\nif __name__ == '__main__':\n main()\n", "path": "mars/web/__main__.py"}, {"content": "# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport argparse\nimport logging\nimport os\nimport sys\n\nfrom .actors import create_actor_pool\nfrom .config import options\nfrom .errors import StartArgumentError\nfrom .lib.tblib import pickling_support\nfrom .utils import get_next_port\n\npickling_support.install()\nlogger = logging.getLogger(__name__)\n\ntry:\n from pytest_cov.embed import cleanup_on_sigterm\n cleanup_on_sigterm()\nexcept ImportError: # pragma: no cover\n pass\n\n\nclass BaseApplication(object):\n \"\"\"\n :type pool mars.actors.pool.gevent_pool.ActorContext\n \"\"\"\n service_description = ''\n service_logger = logger\n\n def __init__(self):\n self.args = None\n self.endpoint = None\n self.pool = None\n self.n_process = None\n\n self._running = False\n\n def __call__(self, argv=None):\n import json\n\n if argv is None:\n argv = sys.argv[1:]\n new_argv = []\n for a in argv:\n if not a.startswith('-D'):\n new_argv.append(a)\n continue\n conf, val = a[2:].split('=', 1)\n conf_parts = conf.split('.')\n conf_obj = options\n for g in conf_parts[:-1]:\n conf_obj = getattr(conf_obj, g)\n try:\n setattr(conf_obj, conf_parts[-1], json.loads(val))\n except:\n setattr(conf_obj, conf_parts[-1], val)\n\n return self._main(new_argv)\n\n def _main(self, argv=None):\n parser = argparse.ArgumentParser(description=self.service_description)\n parser.add_argument('-a', '--advertise', help='advertise ip')\n parser.add_argument('-k', '--kv-store', help='address of kv store service, '\n 'for instance, etcd://localhost:4001')\n parser.add_argument('-e', '--endpoint', help='endpoint of the service')\n parser.add_argument('-s', '--schedulers', help='endpoint of scheduler, when single scheduler '\n 'and etcd is not available')\n parser.add_argument('-H', '--host', help='host of the scheduler service, only available '\n 'when `endpoint` is absent')\n parser.add_argument('-p', '--port', help='port of the scheduler service, only available '\n 'when `endpoint` is absent')\n parser.add_argument('--level', help='log level')\n parser.add_argument('--format', help='log format')\n parser.add_argument('--log_conf', help='log config file')\n parser.add_argument('--inspect', help='inspection endpoint')\n parser.add_argument('--load-modules', nargs='*', help='modules to import')\n self.config_args(parser)\n args = parser.parse_args(argv)\n self.args = args\n\n endpoint = args.endpoint\n host = 
args.host\n port = args.port\n options.kv_store = args.kv_store if args.kv_store else options.kv_store\n\n load_modules = []\n for mod in args.load_modules or ():\n load_modules.extend(mod.split(','))\n if not args.load_modules:\n load_module_str = os.environ.get('MARS_LOAD_MODULES')\n if load_module_str:\n load_modules = load_module_str.split(',')\n load_modules.append('mars.executor')\n for m in load_modules:\n __import__(m, globals(), locals(), [])\n self.service_logger.info('Modules %s loaded', ','.join(load_modules))\n\n self.n_process = 1\n\n self.config_service()\n self.config_logging()\n\n if not host:\n host = args.advertise or '0.0.0.0'\n if not endpoint and port:\n endpoint = host + ':' + port\n\n try:\n self.validate_arguments()\n except StartArgumentError as ex:\n parser.error('Failed to start application: %s' % ex)\n\n if getattr(self, 'require_pool', True):\n self.endpoint, self.pool = self._try_create_pool(endpoint=endpoint, host=host, port=port)\n self.service_logger.info('%s started at %s.', self.service_description, self.endpoint)\n self.main_loop()\n\n def config_logging(self):\n import logging.config\n log_conf = self.args.log_conf or 'logging.conf'\n\n conf_file_paths = [\n '', os.path.abspath('.'),\n os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n ]\n log_configured = False\n for path in conf_file_paths:\n conf_path = log_conf\n if path:\n conf_path = os.path.join(conf_path)\n if os.path.exists(conf_path):\n logging.config.fileConfig(conf_path, disable_existing_loggers=False)\n log_configured = True\n\n if not log_configured:\n if not self.args.level:\n level = logging.INFO\n else:\n level = getattr(logging, self.args.level.upper())\n logging.getLogger('mars').setLevel(level)\n logging.basicConfig(format=self.args.format)\n\n def validate_arguments(self):\n pass\n\n def _try_create_pool(self, endpoint=None, host=None, port=None):\n pool = None\n if endpoint:\n pool = self.create_pool(address=endpoint)\n else:\n use_port = None\n retrial = 5\n while use_port is None:\n use_port = port or get_next_port()\n try:\n endpoint = '{0}:{1}'.format(host, use_port)\n pool = self.create_pool(address=endpoint)\n break\n except:\n retrial -= 1\n if retrial == 0:\n raise\n\n if port is None:\n use_port = None\n else:\n raise\n return endpoint, pool\n\n def create_pool(self, *args, **kwargs):\n kwargs.update(dict(n_process=self.n_process, backend='gevent'))\n return create_actor_pool(*args, **kwargs)\n\n def main_loop(self):\n try:\n with self.pool:\n try:\n self.start()\n self._running = True\n while True:\n self.pool.join(1)\n stopped = []\n for idx, proc in enumerate(self.pool.processes):\n if not proc.is_alive():\n stopped.append(idx)\n if stopped:\n self.handle_process_down(stopped)\n finally:\n self.stop()\n finally:\n self._running = False\n\n def handle_process_down(self, proc_indices):\n \"\"\"\n Handle process down event, the default action is to quit\n the whole application. Applications can inherit this method\n to do customized process-level failover.\n\n :param proc_indices: indices of processes (not pids)\n \"\"\"\n for idx in proc_indices:\n proc = self.pool.processes[idx]\n self.service_logger.fatal(\n 'Process %d exited unpredictably. exitcode=%d', proc.pid, proc.exitcode)\n raise KeyboardInterrupt\n\n def config_service(self):\n pass\n\n def config_args(self, parser):\n raise NotImplementedError\n\n def start(self):\n raise NotImplementedError\n\n def stop(self):\n raise NotImplementedError\n", "path": "mars/base_app.py"}]} | 3,522 | 762 |
gh_patches_debug_15384 | rasdani/github-patches | git_diff | bridgecrewio__checkov-2303 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKV_GIT_3 should not be triggered on archived repositories
**Describe the issue**
_CKV_GIT_3_ is currently also triggered on archived GitHub repositories. When archiving a repository, the configuration `vulnerability_alerts` is automatically changed to `false`, and it cannot be turned back on while the repository remains archived. _CKV_GIT_3_ should be changed to ignore archived repositories.
**Examples**
```terraform
resource "github_repository" "test" {
name = "test"
visibility = "private"
archived = true
vulnerability_alerts = false
}
```
**Version (please complete the following information):**
- Starting with Checkov Version 2.0.764
**Additional context**
See the [GitHub documentation](https://docs.github.com/en/code-security/supply-chain-security/managing-vulnerabilities-in-your-projects-dependencies/configuring-dependabot-security-updates#supported-repositories) that Dependabot is only supported on non-archived repositories.
</issue>
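One way to address this, sketched below rather than taken from the project's actual patch (the function name is illustrative), is to treat archived repositories as passing before the existing private/public logic runs. Checkov's HCL parser wraps attribute values in lists, as the check above already relies on, hence the comparison against `[True]`:

```python
from checkov.common.models.enums import CheckResult


def scan_resource_conf_sketch(conf: dict) -> CheckResult:
    # Archived repositories cannot re-enable vulnerability alerts,
    # so they should not fail CKV_GIT_3.
    if conf.get("archived") == [True]:
        return CheckResult.PASSED
    # Unchanged behaviour for private/internal repositories.
    if conf.get("private") == [True] or conf.get("visibility") in [["private"], ["internal"]]:
        if conf.get("vulnerability_alerts"):
            return CheckResult.PASSED
        return CheckResult.FAILED
    # Unchanged behaviour for public repositories.
    if conf.get("vulnerability_alerts") == [False]:
        return CheckResult.FAILED
    return CheckResult.PASSED
```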
<code>
[start of checkov/terraform/checks/resource/github/RepositoryEnableVulnerabilityAlerts.py]
1 from typing import Any
2
3 from checkov.common.models.enums import CheckCategories, CheckResult
4 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceCheck
5
6
7 class GithubRepositoryVulnerabilityAlerts(BaseResourceCheck):
8 def __init__(self) -> None:
9 name = "Ensure GitHub repository has vulnerability alerts enabled"
10 id = "CKV_GIT_3"
11 supported_resources = ["github_repository"]
12 categories = [CheckCategories.GENERAL_SECURITY]
13 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
14
15 def scan_resource_conf(self, conf) -> CheckResult:
16 # GitHub enables the alerts on public repos but disables them on private repos by default.
17 # is private repo
18 if conf.get("private") == [True] or conf.get("visibility") in [["private"], ["internal"]]:
19 if conf.get("vulnerability_alerts"):
20 return CheckResult.PASSED
21 return CheckResult.FAILED
22 # is public repo
23 if conf.get("vulnerability_alerts") == [False]:
24 return CheckResult.FAILED
25 return CheckResult.PASSED
26
27
28 check = GithubRepositoryVulnerabilityAlerts()
29
[end of checkov/terraform/checks/resource/github/RepositoryEnableVulnerabilityAlerts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/terraform/checks/resource/github/RepositoryEnableVulnerabilityAlerts.py b/checkov/terraform/checks/resource/github/RepositoryEnableVulnerabilityAlerts.py
--- a/checkov/terraform/checks/resource/github/RepositoryEnableVulnerabilityAlerts.py
+++ b/checkov/terraform/checks/resource/github/RepositoryEnableVulnerabilityAlerts.py
@@ -13,6 +13,9 @@
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
def scan_resource_conf(self, conf) -> CheckResult:
+ # GitHub disables the alerts when archiving the repository without an option to turn them on again.
+ if conf.get("archived") == [True]:
+ return CheckResult.PASSED
# GitHub enables the alerts on public repos but disables them on private repos by default.
# is private repo
if conf.get("private") == [True] or conf.get("visibility") in [["private"], ["internal"]]:
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/github/RepositoryEnableVulnerabilityAlerts.py b/checkov/terraform/checks/resource/github/RepositoryEnableVulnerabilityAlerts.py\n--- a/checkov/terraform/checks/resource/github/RepositoryEnableVulnerabilityAlerts.py\n+++ b/checkov/terraform/checks/resource/github/RepositoryEnableVulnerabilityAlerts.py\n@@ -13,6 +13,9 @@\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n def scan_resource_conf(self, conf) -> CheckResult:\n+ # GitHub disables the alerts when archiving the repository without an option to turn them on again.\n+ if conf.get(\"archived\") == [True]:\n+ return CheckResult.PASSED\n # GitHub enables the alerts on public repos but disables them on private repos by default.\n # is private repo\n if conf.get(\"private\") == [True] or conf.get(\"visibility\") in [[\"private\"], [\"internal\"]]:\n", "issue": "CKV_GIT_3 should not be triggered on archived repositories\n**Describe the issue**\r\n_CKV_GIT_3_ currently gets triggered also on archived GitHub repositories. When archiving a repository the configuration `vulnerability_alerts` will get changed to `false` automatically. It's also not possible to turn it on again on an archived repository. _CKV_GIT_3_ should be changed to ignore archived repositories.\r\n\r\n**Examples**\r\n\r\n```terraform\r\nresource \"github_repository\" \"test\" {\r\n name = \"test\"\r\n visibility = \"private\"\r\n archived = true\r\n vulnerability_alerts = false\r\n}\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - Starting with Checkov Version 2.0.764\r\n\r\n**Additional context**\r\nSee the [GitHub documentation](https://docs.github.com/en/code-security/supply-chain-security/managing-vulnerabilities-in-your-projects-dependencies/configuring-dependabot-security-updates#supported-repositories) that Dependabot is only supported on non-archived repositories.\r\n\n", "before_files": [{"content": "from typing import Any\n\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceCheck\n\n\nclass GithubRepositoryVulnerabilityAlerts(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure GitHub repository has vulnerability alerts enabled\"\n id = \"CKV_GIT_3\"\n supported_resources = [\"github_repository\"]\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf) -> CheckResult:\n # GitHub enables the alerts on public repos but disables them on private repos by default.\n # is private repo\n if conf.get(\"private\") == [True] or conf.get(\"visibility\") in [[\"private\"], [\"internal\"]]:\n if conf.get(\"vulnerability_alerts\"):\n return CheckResult.PASSED\n return CheckResult.FAILED\n # is public repo\n if conf.get(\"vulnerability_alerts\") == [False]:\n return CheckResult.FAILED\n return CheckResult.PASSED\n\n\ncheck = GithubRepositoryVulnerabilityAlerts()\n", "path": "checkov/terraform/checks/resource/github/RepositoryEnableVulnerabilityAlerts.py"}]} | 1,092 | 219 |
gh_patches_debug_63 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-378 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot create type with multiple Unions
```python
from typing import Union
import strawberry
@strawberry.type
class CoolType:
@strawberry.type
class UnionA1:
value: int
@strawberry.type
class UnionA2:
value: int
@strawberry.type
class UnionB1:
value: int
@strawberry.type
class UnionB2:
value: int
field1: Union[UnionA1, UnionA2]
field2: Union[UnionB1, UnionB2]
schema = strawberry.Schema(query=CoolType)
```
```.pytb
Traceback (most recent call last):
File "/home/ignormies/.config/JetBrains/PyCharm2020.1/scratches/scratch.py", line 28, in <module>
schema = strawberry.Schema(query=CoolType)
File "/home/ignormies/.local/share/virtualenvs/gql-bf-XGX4szKA-py3.8/lib/python3.8/site-packages/strawberry/schema.py", line 25, in __init__
super().__init__(
File "/home/ignormies/.local/share/virtualenvs/gql-bf-XGX4szKA-py3.8/lib/python3.8/site-packages/graphql/type/schema.py", line 239, in __init__
raise TypeError(
TypeError: Schema must contain uniquely named types but contains multiple types named '_resolver'.
```
Removing either `field1` or `field2` allows the schema to be created
</issue>
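The error message is the clue: every dynamically created field resolver is a closure literally named `_resolver`, and Strawberry at this point appears to derive the GraphQL type name for a `Union` field from that resolver's `__name__`, so two union-typed fields collide on the same name. The framework-free core of the problem, and of the fix, fits in a few lines of plain Python (no Strawberry required):

```python
def make_resolver(field_name):
    def _resolver(root, info):
        return getattr(root, field_name, None)
    return _resolver


a = make_resolver("field1")
b = make_resolver("field2")
print(a.__name__, b.__name__)  # _resolver _resolver -> anything keyed on __name__ collides

# Renaming each closure after its field keeps derived names unique:
a.__name__, b.__name__ = "field1", "field2"
print(a.__name__, b.__name__)  # field1 field2
```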
<code>
[start of strawberry/type.py]
1 import copy
2 import dataclasses
3 from functools import partial
4 from typing import Optional
5
6 from graphql import GraphQLInputObjectType, GraphQLInterfaceType, GraphQLObjectType
7
8 from .constants import IS_STRAWBERRY_FIELD, IS_STRAWBERRY_INPUT, IS_STRAWBERRY_INTERFACE
9 from .field import field, strawberry_field
10 from .type_registry import register_type
11 from .utils.str_converters import to_camel_case
12 from .utils.typing import get_actual_type, has_type_var, is_type_var
13
14
15 def _interface_resolve_type(result, info, return_type):
16 """Resolves the correct type for an interface"""
17 return result.__class__.graphql_type
18
19
20 def _get_resolver(cls, field_name):
21 class_field = getattr(cls, field_name, None)
22
23 if class_field and getattr(class_field, "resolver", None):
24 return class_field.resolver
25
26 def _resolver(root, info):
27 if not root:
28 return None
29
30 field_resolver = getattr(root, field_name, None)
31
32 if getattr(field_resolver, IS_STRAWBERRY_FIELD, False):
33 return field_resolver(root, info)
34
35 elif field_resolver.__class__ is strawberry_field:
36 # TODO: support default values
37 return None
38
39 return field_resolver
40
41 return _resolver
42
43
44 def _process_type(
45 cls, *, name=None, is_input=False, is_interface=False, description=None
46 ):
47 name = name or cls.__name__
48
49 def _get_fields(wrapped, types_replacement_map=None):
50 class_fields = dataclasses.fields(wrapped)
51
52 fields = {}
53
54 for class_field in class_fields:
55 # we want to make a copy of the original field when dealing
56 # with generic types and also get the actual type for the type var
57 if is_type_var(class_field.type) or has_type_var(class_field.type):
58 class_field = copy.copy(class_field)
59 class_field.type = get_actual_type(
60 class_field.type, types_replacement_map
61 )
62 # like args, a None default implies Optional
63 if class_field.default is None:
64 class_field.type = Optional[class_field.type]
65
66 field_name = getattr(class_field, "field_name", None) or to_camel_case(
67 class_field.name
68 )
69 description = getattr(class_field, "field_description", None)
70 permission_classes = getattr(class_field, "field_permission_classes", None)
71 resolver = getattr(class_field, "field_resolver", None) or _get_resolver(
72 cls, class_field.name
73 )
74 resolver.__annotations__["return"] = class_field.type
75
76 fields[field_name] = field(
77 resolver,
78 is_input=is_input,
79 description=description,
80 permission_classes=permission_classes,
81 ).graphql_type
82 # supply a graphql default_value if the type annotation has a default
83 if class_field.default not in (dataclasses.MISSING, None):
84 fields[field_name].default_value = class_field.default
85
86 strawberry_fields = {}
87
88 for base in [cls, *cls.__bases__]:
89 strawberry_fields.update(
90 {
91 key: value
92 for key, value in base.__dict__.items()
93 if getattr(value, IS_STRAWBERRY_FIELD, False)
94 }
95 )
96
97 for key, value in strawberry_fields.items():
98 name = getattr(value, "field_name", None) or to_camel_case(key)
99
100 fields[name] = value.graphql_type
101
102 return fields
103
104 if is_input:
105 setattr(cls, IS_STRAWBERRY_INPUT, True)
106 elif is_interface:
107 setattr(cls, IS_STRAWBERRY_INTERFACE, True)
108
109 extra_kwargs = {"description": description or cls.__doc__}
110
111 wrapped = dataclasses.dataclass(cls)
112
113 if is_input:
114 TypeClass = GraphQLInputObjectType
115 elif is_interface:
116 TypeClass = GraphQLInterfaceType
117
118 # TODO: in future we might want to be able to override this
119 # for example to map a class (like a django model) to one
120 # type of the interface
121 extra_kwargs["resolve_type"] = _interface_resolve_type
122 else:
123 TypeClass = GraphQLObjectType
124
125 extra_kwargs["interfaces"] = [
126 klass.graphql_type
127 for klass in cls.__bases__
128 if hasattr(klass, IS_STRAWBERRY_INTERFACE)
129 ]
130
131 graphql_type = TypeClass(
132 name,
133 lambda types_replacement_map=None: _get_fields(wrapped, types_replacement_map),
134 **extra_kwargs
135 )
136 register_type(cls, graphql_type)
137
138 return wrapped
139
140
141 def type(cls=None, *, name=None, is_input=False, is_interface=False, description=None):
142 """Annotates a class as a GraphQL type.
143
144 Example usage:
145
146 >>> @strawberry.type:
147 >>> class X:
148 >>> field_abc: str = "ABC"
149 """
150
151 def wrap(cls):
152 return _process_type(
153 cls,
154 name=name,
155 is_input=is_input,
156 is_interface=is_interface,
157 description=description,
158 )
159
160 if cls is None:
161 return wrap
162
163 return wrap(cls)
164
165
166 input = partial(type, is_input=True)
167 interface = partial(type, is_interface=True)
168
[end of strawberry/type.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/type.py b/strawberry/type.py
--- a/strawberry/type.py
+++ b/strawberry/type.py
@@ -38,6 +38,8 @@
return field_resolver
+ _resolver.__name__ = field_name
+
return _resolver
| {"golden_diff": "diff --git a/strawberry/type.py b/strawberry/type.py\n--- a/strawberry/type.py\n+++ b/strawberry/type.py\n@@ -38,6 +38,8 @@\n \n return field_resolver\n \n+ _resolver.__name__ = field_name\n+\n return _resolver\n", "issue": "Cannot create type with multiple Unions\n```python\r\nfrom typing import Union\r\n\r\nimport strawberry\r\n\r\n\r\[email protected]\r\nclass CoolType:\r\n @strawberry.type\r\n class UnionA1:\r\n value: int\r\n\r\n @strawberry.type\r\n class UnionA2:\r\n value: int\r\n\r\n @strawberry.type\r\n class UnionB1:\r\n value: int\r\n\r\n @strawberry.type\r\n class UnionB2:\r\n value: int\r\n\r\n field1: Union[UnionA1, UnionA2]\r\n field2: Union[UnionB1, UnionB2]\r\n\r\n\r\nschema = strawberry.Schema(query=CoolType)\r\n```\r\n\r\n```.pytb\r\nTraceback (most recent call last):\r\n File \"/home/ignormies/.config/JetBrains/PyCharm2020.1/scratches/scratch.py\", line 28, in <module>\r\n schema = strawberry.Schema(query=CoolType)\r\n File \"/home/ignormies/.local/share/virtualenvs/gql-bf-XGX4szKA-py3.8/lib/python3.8/site-packages/strawberry/schema.py\", line 25, in __init__\r\n super().__init__(\r\n File \"/home/ignormies/.local/share/virtualenvs/gql-bf-XGX4szKA-py3.8/lib/python3.8/site-packages/graphql/type/schema.py\", line 239, in __init__\r\n raise TypeError(\r\nTypeError: Schema must contain uniquely named types but contains multiple types named '_resolver'.\r\n```\r\n\r\nRemoving either `field1` or `field2` allows the schema to be created\n", "before_files": [{"content": "import copy\nimport dataclasses\nfrom functools import partial\nfrom typing import Optional\n\nfrom graphql import GraphQLInputObjectType, GraphQLInterfaceType, GraphQLObjectType\n\nfrom .constants import IS_STRAWBERRY_FIELD, IS_STRAWBERRY_INPUT, IS_STRAWBERRY_INTERFACE\nfrom .field import field, strawberry_field\nfrom .type_registry import register_type\nfrom .utils.str_converters import to_camel_case\nfrom .utils.typing import get_actual_type, has_type_var, is_type_var\n\n\ndef _interface_resolve_type(result, info, return_type):\n \"\"\"Resolves the correct type for an interface\"\"\"\n return result.__class__.graphql_type\n\n\ndef _get_resolver(cls, field_name):\n class_field = getattr(cls, field_name, None)\n\n if class_field and getattr(class_field, \"resolver\", None):\n return class_field.resolver\n\n def _resolver(root, info):\n if not root:\n return None\n\n field_resolver = getattr(root, field_name, None)\n\n if getattr(field_resolver, IS_STRAWBERRY_FIELD, False):\n return field_resolver(root, info)\n\n elif field_resolver.__class__ is strawberry_field:\n # TODO: support default values\n return None\n\n return field_resolver\n\n return _resolver\n\n\ndef _process_type(\n cls, *, name=None, is_input=False, is_interface=False, description=None\n):\n name = name or cls.__name__\n\n def _get_fields(wrapped, types_replacement_map=None):\n class_fields = dataclasses.fields(wrapped)\n\n fields = {}\n\n for class_field in class_fields:\n # we want to make a copy of the original field when dealing\n # with generic types and also get the actual type for the type var\n if is_type_var(class_field.type) or has_type_var(class_field.type):\n class_field = copy.copy(class_field)\n class_field.type = get_actual_type(\n class_field.type, types_replacement_map\n )\n # like args, a None default implies Optional\n if class_field.default is None:\n class_field.type = Optional[class_field.type]\n\n field_name = getattr(class_field, \"field_name\", None) or to_camel_case(\n class_field.name\n )\n 
description = getattr(class_field, \"field_description\", None)\n permission_classes = getattr(class_field, \"field_permission_classes\", None)\n resolver = getattr(class_field, \"field_resolver\", None) or _get_resolver(\n cls, class_field.name\n )\n resolver.__annotations__[\"return\"] = class_field.type\n\n fields[field_name] = field(\n resolver,\n is_input=is_input,\n description=description,\n permission_classes=permission_classes,\n ).graphql_type\n # supply a graphql default_value if the type annotation has a default\n if class_field.default not in (dataclasses.MISSING, None):\n fields[field_name].default_value = class_field.default\n\n strawberry_fields = {}\n\n for base in [cls, *cls.__bases__]:\n strawberry_fields.update(\n {\n key: value\n for key, value in base.__dict__.items()\n if getattr(value, IS_STRAWBERRY_FIELD, False)\n }\n )\n\n for key, value in strawberry_fields.items():\n name = getattr(value, \"field_name\", None) or to_camel_case(key)\n\n fields[name] = value.graphql_type\n\n return fields\n\n if is_input:\n setattr(cls, IS_STRAWBERRY_INPUT, True)\n elif is_interface:\n setattr(cls, IS_STRAWBERRY_INTERFACE, True)\n\n extra_kwargs = {\"description\": description or cls.__doc__}\n\n wrapped = dataclasses.dataclass(cls)\n\n if is_input:\n TypeClass = GraphQLInputObjectType\n elif is_interface:\n TypeClass = GraphQLInterfaceType\n\n # TODO: in future we might want to be able to override this\n # for example to map a class (like a django model) to one\n # type of the interface\n extra_kwargs[\"resolve_type\"] = _interface_resolve_type\n else:\n TypeClass = GraphQLObjectType\n\n extra_kwargs[\"interfaces\"] = [\n klass.graphql_type\n for klass in cls.__bases__\n if hasattr(klass, IS_STRAWBERRY_INTERFACE)\n ]\n\n graphql_type = TypeClass(\n name,\n lambda types_replacement_map=None: _get_fields(wrapped, types_replacement_map),\n **extra_kwargs\n )\n register_type(cls, graphql_type)\n\n return wrapped\n\n\ndef type(cls=None, *, name=None, is_input=False, is_interface=False, description=None):\n \"\"\"Annotates a class as a GraphQL type.\n\n Example usage:\n\n >>> @strawberry.type:\n >>> class X:\n >>> field_abc: str = \"ABC\"\n \"\"\"\n\n def wrap(cls):\n return _process_type(\n cls,\n name=name,\n is_input=is_input,\n is_interface=is_interface,\n description=description,\n )\n\n if cls is None:\n return wrap\n\n return wrap(cls)\n\n\ninput = partial(type, is_input=True)\ninterface = partial(type, is_interface=True)\n", "path": "strawberry/type.py"}]} | 2,399 | 71 |
gh_patches_debug_12131 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-1460 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MultioutputWrapper does not reset cleanly
## 🐛 Bug
Calling `MultioutputWrapper.compute()` after `MultioutputWrapper.reset()` returns old metrics that should have been cleared by the reset.
### To Reproduce
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
#### Code sample
```py
import torch
import torchmetrics
base_metric = torchmetrics.ConfusionMatrix(task="multiclass", num_classes=2)
cf = torchmetrics.MultioutputWrapper(base_metric, num_outputs=2)
cf(torch.tensor([[0,0]]), torch.tensor([[0,0]]))
print("First result: ", cf.compute())
cf.reset()
cf(torch.tensor([[1,1]]), torch.tensor([[0,0]]))
print("Second result: ", cf.compute())
```
Output:
```
First result: [tensor([[1, 0], [0, 0]]), tensor([[1, 0], [0, 0]])]
Second result: [tensor([[1, 0], [0, 0]]), tensor([[1, 0], [0, 0]])]
```
The old output is returned even after resetting and entering new data. If the first metric computation is omitted, the second metric is as expected.
Importantly, this bug only occurs when using `forward()` to enter data, while `update()` works as expected.
### Expected behavior
The result of the second computation should be independent of the first. Furthermore, forward and update should produce the same state as specified in the docs.
### Environment
- torchmetrics 0.10.3, installed from pypi
- Python 3.8.9
### Attempts to fix
Adding `super().reset()` (as done in e.g. the minmax wrapper) at the top of the reset method seems to fix the bug.
https://github.com/Lightning-AI/metrics/blob/7b505ff1a3b88181bef2b0cdfa21ec593dcda3ff/src/torchmetrics/wrappers/multioutput.py#L133
</issue>
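Why `forward()` misbehaves while `update()` does not: `compute()` on a torchmetrics `Metric` caches its result on the instance (in `_computed`), the wrapper's overridden `forward()` feeds the child metrics directly without invalidating that cache, and `reset()` as written only resets the children; `update()` still passes through `Metric`'s own wrapping, which clears the cache, which appears to be why that path works. A deliberately simplified, torchmetrics-free model of that interaction (it only loosely mirrors the library's caching and is not the actual implementation) shows why the wrapper must also clear its own state:

```python
class Child:
    def __init__(self):
        self.total = 0

    def update(self, x):
        self.total += x

    def reset(self):
        self.total = 0


class Wrapper:
    """Loose stand-in for MultioutputWrapper plus Metric's compute cache."""

    def __init__(self, children):
        self.children = children
        self._computed = None  # like Metric._computed

    def forward(self, x):
        # Children are updated directly; the wrapper's cache is not touched here.
        results = []
        for c in self.children:
            c.update(x)
            results.append(c.total)
        return results

    def compute(self):
        # Like Metric's compute wrapper: reuse the cached value when it exists.
        if self._computed is None:
            self._computed = [c.total for c in self.children]
        return self._computed

    def reset(self):
        for c in self.children:
            c.reset()
        # The line the buggy wrapper is missing (super().reset() in torchmetrics):
        self._computed = None


w = Wrapper([Child(), Child()])
w.forward(3)
print(w.compute())  # [3, 3]
w.reset()
w.forward(5)
print(w.compute())  # [5, 5]; drop the cache-clearing line and this still prints [3, 3]
```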
<code>
[start of src/torchmetrics/wrappers/multioutput.py]
1 from copy import deepcopy
2 from typing import Any, List, Tuple
3
4 import torch
5 from torch import Tensor
6 from torch.nn import ModuleList
7
8 from torchmetrics import Metric
9 from torchmetrics.utilities import apply_to_collection
10
11
12 def _get_nan_indices(*tensors: Tensor) -> Tensor:
13 """Get indices of rows along dim 0 which have NaN values."""
14 if len(tensors) == 0:
15 raise ValueError("Must pass at least one tensor as argument")
16 sentinel = tensors[0]
17 nan_idxs = torch.zeros(len(sentinel), dtype=torch.bool, device=sentinel.device)
18 for tensor in tensors:
19 permuted_tensor = tensor.flatten(start_dim=1)
20 nan_idxs |= torch.any(torch.isnan(permuted_tensor), dim=1)
21 return nan_idxs
22
23
24 class MultioutputWrapper(Metric):
25 """Wrap a base metric to enable it to support multiple outputs.
26
27 Several torchmetrics metrics, such as :class:`torchmetrics.regression.spearman.SpearmanCorrcoef` lack support for
28 multioutput mode. This class wraps such metrics to support computing one metric per output.
29 Unlike specific torchmetric metrics, it doesn't support any aggregation across outputs.
30 This means if you set ``num_outputs`` to 2, ``.compute()`` will return a Tensor of dimension
31 ``(2, ...)`` where ``...`` represents the dimensions the metric returns when not wrapped.
32
33 In addition to enabling multioutput support for metrics that lack it, this class also supports, albeit in a crude
34 fashion, dealing with missing labels (or other data). When ``remove_nans`` is passed, the class will remove the
35 intersection of NaN containing "rows" upon each update for each output. For example, suppose a user uses
36 `MultioutputWrapper` to wrap :class:`torchmetrics.regression.r2.R2Score` with 2 outputs, one of which occasionally
37 has missing labels for classes like ``R2Score`` is that this class supports removing ``NaN`` values
38 (parameter ``remove_nans``) on a per-output basis. When ``remove_nans`` is passed the wrapper will remove all rows
39
40 Args:
41 base_metric: Metric being wrapped.
42 num_outputs: Expected dimensionality of the output dimension.
43 This parameter is used to determine the number of distinct metrics we need to track.
44 output_dim:
45 Dimension on which output is expected. Note that while this provides some flexibility, the output dimension
46 must be the same for all inputs to update. This applies even for metrics such as `Accuracy` where the labels
47 can have a different number of dimensions than the predictions. This can be worked around if the output
48 dimension can be set to -1 for both, even if -1 corresponds to different dimensions in different inputs.
49 remove_nans:
50 Whether to remove the intersection of rows containing NaNs from the values passed through to each underlying
51 metric. Proper operation requires all tensors passed to update to have dimension ``(N, ...)`` where N
52 represents the length of the batch or dataset being passed in.
53 squeeze_outputs:
54 If ``True``, will squeeze the 1-item dimensions left after ``index_select`` is applied.
55 This is sometimes unnecessary but harmless for metrics such as `R2Score` but useful
56 for certain classification metrics that can't handle additional 1-item dimensions.
57
58 Example:
59
60 >>> # Mimic R2Score in `multioutput`, `raw_values` mode:
61 >>> import torch
62 >>> from torchmetrics import MultioutputWrapper, R2Score
63 >>> target = torch.tensor([[0.5, 1], [-1, 1], [7, -6]])
64 >>> preds = torch.tensor([[0, 2], [-1, 2], [8, -5]])
65 >>> r2score = MultioutputWrapper(R2Score(), 2)
66 >>> r2score(preds, target)
67 [tensor(0.9654), tensor(0.9082)]
68 """
69
70 is_differentiable = False
71
72 def __init__(
73 self,
74 base_metric: Metric,
75 num_outputs: int,
76 output_dim: int = -1,
77 remove_nans: bool = True,
78 squeeze_outputs: bool = True,
79 ):
80 super().__init__()
81 self.metrics = ModuleList([deepcopy(base_metric) for _ in range(num_outputs)])
82 self.output_dim = output_dim
83 self.remove_nans = remove_nans
84 self.squeeze_outputs = squeeze_outputs
85
86 def _get_args_kwargs_by_output(self, *args: Tensor, **kwargs: Tensor) -> List[Tuple[Tensor, Tensor]]:
87 """Get args and kwargs reshaped to be output-specific and (maybe) having NaNs stripped out."""
88 args_kwargs_by_output = []
89 for i in range(len(self.metrics)):
90 selected_args = apply_to_collection(
91 args, Tensor, torch.index_select, dim=self.output_dim, index=torch.tensor(i, device=self.device)
92 )
93 selected_kwargs = apply_to_collection(
94 kwargs, Tensor, torch.index_select, dim=self.output_dim, index=torch.tensor(i, device=self.device)
95 )
96 if self.remove_nans:
97 args_kwargs = selected_args + tuple(selected_kwargs.values())
98 nan_idxs = _get_nan_indices(*args_kwargs)
99 selected_args = [arg[~nan_idxs] for arg in selected_args]
100 selected_kwargs = {k: v[~nan_idxs] for k, v in selected_kwargs.items()}
101
102 if self.squeeze_outputs:
103 selected_args = [arg.squeeze(self.output_dim) for arg in selected_args]
104 args_kwargs_by_output.append((selected_args, selected_kwargs))
105 return args_kwargs_by_output
106
107 def update(self, *args: Any, **kwargs: Any) -> None:
108 """Update each underlying metric with the corresponding output."""
109 reshaped_args_kwargs = self._get_args_kwargs_by_output(*args, **kwargs)
110 for metric, (selected_args, selected_kwargs) in zip(self.metrics, reshaped_args_kwargs):
111 metric.update(*selected_args, **selected_kwargs)
112
113 def compute(self) -> List[Tensor]:
114 """Compute metrics."""
115 return [m.compute() for m in self.metrics]
116
117 @torch.jit.unused
118 def forward(self, *args: Any, **kwargs: Any) -> Any:
119 """Call underlying forward methods and aggregate the results if they're non-null.
120
121 We override this method to ensure that state variables get copied over on the underlying metrics.
122 """
123 results = []
124 reshaped_args_kwargs = self._get_args_kwargs_by_output(*args, **kwargs)
125 for metric, (selected_args, selected_kwargs) in zip(self.metrics, reshaped_args_kwargs):
126 results.append(metric(*selected_args, **selected_kwargs))
127 if results[0] is None:
128 return None
129 return results
130
131 def reset(self) -> None:
132 """Reset all underlying metrics."""
133 for metric in self.metrics:
134 metric.reset()
135
[end of src/torchmetrics/wrappers/multioutput.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/torchmetrics/wrappers/multioutput.py b/src/torchmetrics/wrappers/multioutput.py
--- a/src/torchmetrics/wrappers/multioutput.py
+++ b/src/torchmetrics/wrappers/multioutput.py
@@ -1,5 +1,5 @@
from copy import deepcopy
-from typing import Any, List, Tuple
+from typing import Any, Callable, List, Tuple
import torch
from torch import Tensor
@@ -132,3 +132,12 @@
"""Reset all underlying metrics."""
for metric in self.metrics:
metric.reset()
+ super().reset()
+
+ def _wrap_update(self, update: Callable) -> Callable:
+ """Overwrite to do nothing."""
+ return update
+
+ def _wrap_compute(self, compute: Callable) -> Callable:
+ """Overwrite to do nothing."""
+ return compute
| {"golden_diff": "diff --git a/src/torchmetrics/wrappers/multioutput.py b/src/torchmetrics/wrappers/multioutput.py\n--- a/src/torchmetrics/wrappers/multioutput.py\n+++ b/src/torchmetrics/wrappers/multioutput.py\n@@ -1,5 +1,5 @@\n from copy import deepcopy\n-from typing import Any, List, Tuple\n+from typing import Any, Callable, List, Tuple\n \n import torch\n from torch import Tensor\n@@ -132,3 +132,12 @@\n \"\"\"Reset all underlying metrics.\"\"\"\n for metric in self.metrics:\n metric.reset()\n+ super().reset()\n+\n+ def _wrap_update(self, update: Callable) -> Callable:\n+ \"\"\"Overwrite to do nothing.\"\"\"\n+ return update\n+\n+ def _wrap_compute(self, compute: Callable) -> Callable:\n+ \"\"\"Overwrite to do nothing.\"\"\"\n+ return compute\n", "issue": "MultioutputWrapper does not reset cleanly\n## \ud83d\udc1b Bug\r\n\r\nCalling `MultioutputWrapper.compute()` after `MultioutputWrapper.reset()` returns old metrics that should have been cleared by the reset. \r\n\r\n### To Reproduce\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n#### Code sample\r\n\r\n```py\r\nimport torch\r\nimport torchmetrics\r\n\r\nbase_metric = torchmetrics.ConfusionMatrix(task=\"multiclass\", num_classes=2)\r\ncf = torchmetrics.MultioutputWrapper(base_metric, num_outputs=2)\r\n\r\ncf(torch.tensor([[0,0]]), torch.tensor([[0,0]]))\r\nprint(\"First result: \", cf.compute())\r\n\r\ncf.reset()\r\n\r\ncf(torch.tensor([[1,1]]), torch.tensor([[0,0]]))\r\nprint(\"Second result: \", cf.compute())\r\n```\r\n\r\nOutput:\r\n```\r\nFirst result: [tensor([[1, 0], [0, 0]]), tensor([[1, 0], [0, 0]])]\r\nSecond result: [tensor([[1, 0], [0, 0]]), tensor([[1, 0], [0, 0]])]\r\n```\r\nThe old output is returned even after resetting and entering new data. If the fist metric computation is omitted, the second metric is as expected.\r\n\r\nImportantly, this bug only occurs when using `forward()` to enter data, while `update()` works as expected.\r\n\r\n### Expected behavior\r\n\r\nThe result of the second computation should be independent of the first. Furthermore, forward and update should produce the same state as specified in the docs.\r\n\r\n### Environment\r\n\r\n- torchmetrics 0.10.3, installed from pypi\r\n- Python 3.8.9\r\n\r\n### Attempts to fix\r\n\r\nAdding `super().reset()` (as done in e.g. the minmax wrapper) at the top of the reset method seems to fix the bug. \r\nhttps://github.com/Lightning-AI/metrics/blob/7b505ff1a3b88181bef2b0cdfa21ec593dcda3ff/src/torchmetrics/wrappers/multioutput.py#L133\n", "before_files": [{"content": "from copy import deepcopy\nfrom typing import Any, List, Tuple\n\nimport torch\nfrom torch import Tensor\nfrom torch.nn import ModuleList\n\nfrom torchmetrics import Metric\nfrom torchmetrics.utilities import apply_to_collection\n\n\ndef _get_nan_indices(*tensors: Tensor) -> Tensor:\n \"\"\"Get indices of rows along dim 0 which have NaN values.\"\"\"\n if len(tensors) == 0:\n raise ValueError(\"Must pass at least one tensor as argument\")\n sentinel = tensors[0]\n nan_idxs = torch.zeros(len(sentinel), dtype=torch.bool, device=sentinel.device)\n for tensor in tensors:\n permuted_tensor = tensor.flatten(start_dim=1)\n nan_idxs |= torch.any(torch.isnan(permuted_tensor), dim=1)\n return nan_idxs\n\n\nclass MultioutputWrapper(Metric):\n \"\"\"Wrap a base metric to enable it to support multiple outputs.\n\n Several torchmetrics metrics, such as :class:`torchmetrics.regression.spearman.SpearmanCorrcoef` lack support for\n multioutput mode. 
This class wraps such metrics to support computing one metric per output.\n Unlike specific torchmetric metrics, it doesn't support any aggregation across outputs.\n This means if you set ``num_outputs`` to 2, ``.compute()`` will return a Tensor of dimension\n ``(2, ...)`` where ``...`` represents the dimensions the metric returns when not wrapped.\n\n In addition to enabling multioutput support for metrics that lack it, this class also supports, albeit in a crude\n fashion, dealing with missing labels (or other data). When ``remove_nans`` is passed, the class will remove the\n intersection of NaN containing \"rows\" upon each update for each output. For example, suppose a user uses\n `MultioutputWrapper` to wrap :class:`torchmetrics.regression.r2.R2Score` with 2 outputs, one of which occasionally\n has missing labels for classes like ``R2Score`` is that this class supports removing ``NaN`` values\n (parameter ``remove_nans``) on a per-output basis. When ``remove_nans`` is passed the wrapper will remove all rows\n\n Args:\n base_metric: Metric being wrapped.\n num_outputs: Expected dimensionality of the output dimension.\n This parameter is used to determine the number of distinct metrics we need to track.\n output_dim:\n Dimension on which output is expected. Note that while this provides some flexibility, the output dimension\n must be the same for all inputs to update. This applies even for metrics such as `Accuracy` where the labels\n can have a different number of dimensions than the predictions. This can be worked around if the output\n dimension can be set to -1 for both, even if -1 corresponds to different dimensions in different inputs.\n remove_nans:\n Whether to remove the intersection of rows containing NaNs from the values passed through to each underlying\n metric. 
Proper operation requires all tensors passed to update to have dimension ``(N, ...)`` where N\n represents the length of the batch or dataset being passed in.\n squeeze_outputs:\n If ``True``, will squeeze the 1-item dimensions left after ``index_select`` is applied.\n This is sometimes unnecessary but harmless for metrics such as `R2Score` but useful\n for certain classification metrics that can't handle additional 1-item dimensions.\n\n Example:\n\n >>> # Mimic R2Score in `multioutput`, `raw_values` mode:\n >>> import torch\n >>> from torchmetrics import MultioutputWrapper, R2Score\n >>> target = torch.tensor([[0.5, 1], [-1, 1], [7, -6]])\n >>> preds = torch.tensor([[0, 2], [-1, 2], [8, -5]])\n >>> r2score = MultioutputWrapper(R2Score(), 2)\n >>> r2score(preds, target)\n [tensor(0.9654), tensor(0.9082)]\n \"\"\"\n\n is_differentiable = False\n\n def __init__(\n self,\n base_metric: Metric,\n num_outputs: int,\n output_dim: int = -1,\n remove_nans: bool = True,\n squeeze_outputs: bool = True,\n ):\n super().__init__()\n self.metrics = ModuleList([deepcopy(base_metric) for _ in range(num_outputs)])\n self.output_dim = output_dim\n self.remove_nans = remove_nans\n self.squeeze_outputs = squeeze_outputs\n\n def _get_args_kwargs_by_output(self, *args: Tensor, **kwargs: Tensor) -> List[Tuple[Tensor, Tensor]]:\n \"\"\"Get args and kwargs reshaped to be output-specific and (maybe) having NaNs stripped out.\"\"\"\n args_kwargs_by_output = []\n for i in range(len(self.metrics)):\n selected_args = apply_to_collection(\n args, Tensor, torch.index_select, dim=self.output_dim, index=torch.tensor(i, device=self.device)\n )\n selected_kwargs = apply_to_collection(\n kwargs, Tensor, torch.index_select, dim=self.output_dim, index=torch.tensor(i, device=self.device)\n )\n if self.remove_nans:\n args_kwargs = selected_args + tuple(selected_kwargs.values())\n nan_idxs = _get_nan_indices(*args_kwargs)\n selected_args = [arg[~nan_idxs] for arg in selected_args]\n selected_kwargs = {k: v[~nan_idxs] for k, v in selected_kwargs.items()}\n\n if self.squeeze_outputs:\n selected_args = [arg.squeeze(self.output_dim) for arg in selected_args]\n args_kwargs_by_output.append((selected_args, selected_kwargs))\n return args_kwargs_by_output\n\n def update(self, *args: Any, **kwargs: Any) -> None:\n \"\"\"Update each underlying metric with the corresponding output.\"\"\"\n reshaped_args_kwargs = self._get_args_kwargs_by_output(*args, **kwargs)\n for metric, (selected_args, selected_kwargs) in zip(self.metrics, reshaped_args_kwargs):\n metric.update(*selected_args, **selected_kwargs)\n\n def compute(self) -> List[Tensor]:\n \"\"\"Compute metrics.\"\"\"\n return [m.compute() for m in self.metrics]\n\n @torch.jit.unused\n def forward(self, *args: Any, **kwargs: Any) -> Any:\n \"\"\"Call underlying forward methods and aggregate the results if they're non-null.\n\n We override this method to ensure that state variables get copied over on the underlying metrics.\n \"\"\"\n results = []\n reshaped_args_kwargs = self._get_args_kwargs_by_output(*args, **kwargs)\n for metric, (selected_args, selected_kwargs) in zip(self.metrics, reshaped_args_kwargs):\n results.append(metric(*selected_args, **selected_kwargs))\n if results[0] is None:\n return None\n return results\n\n def reset(self) -> None:\n \"\"\"Reset all underlying metrics.\"\"\"\n for metric in self.metrics:\n metric.reset()\n", "path": "src/torchmetrics/wrappers/multioutput.py"}]} | 2,779 | 199 |
gh_patches_debug_20526 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-2019 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pearson Correlation Coefficient fails when updating one batch at a time
## 🐛 Bug
The `PearsonCorrCoef` fails when using a single batch per update.
### To Reproduce
```python
import torch
from torchmetrics import PearsonCorrCoef
metric = PearsonCorrCoef()
# Works
metric(torch.tensor([3.0, -0.5, 2.0, 7.0]), torch.tensor([2.5, 0.0, 2.0, 8.0]))
print(metric.compute()) # tensor(0.9849)
metric.reset()
# Doesn't work.
metric(torch.tensor([3.0]), torch.tensor([2.5]))
metric(torch.tensor([-0.5]), torch.tensor([0.0]))
metric(torch.tensor([2.0]), torch.tensor([2.0]))
metric(torch.tensor([7.0]), torch.tensor([8.0]))
print(metric.compute()) # tensor(nan)
```
### Expected behavior
Both ways of updating the metric should work.
### Environment
Python 3.10
torchmetrics==1.03
torch==2.01
</issue>
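The failure is plain arithmetic in `_pearson_corrcoef_update` (listed below): on the very first update `cond` is false, so the code takes the `preds.var(0) * (n_obs - 1)` branch; `Tensor.var` over a single element uses the default unbiased estimator and divides by `n - 1 = 0`, giving NaN, and `nan * 0` is still NaN, so `var_x` and `var_y` are poisoned from the first one-element batch onward. A two-line check confirms it:

```python
import torch

x = torch.tensor([3.0])
print(x.var(0))            # tensor(nan): unbiased variance of a single sample divides by zero
print(x.var(0) * (1 - 1))  # nan * 0 is still nan, so var_x never recovers
```

This is also why the four-element batch in the first snippet works (its variance is finite), and why the accompanying diff extends `cond` with `n_obs == 1` so that one-element updates take the running-update branch instead.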
<code>
[start of src/torchmetrics/functional/regression/pearson.py]
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import math
15 from typing import Tuple
16
17 import torch
18 from torch import Tensor
19
20 from torchmetrics.functional.regression.utils import _check_data_shape_to_num_outputs
21 from torchmetrics.utilities import rank_zero_warn
22 from torchmetrics.utilities.checks import _check_same_shape
23
24
25 def _pearson_corrcoef_update(
26 preds: Tensor,
27 target: Tensor,
28 mean_x: Tensor,
29 mean_y: Tensor,
30 var_x: Tensor,
31 var_y: Tensor,
32 corr_xy: Tensor,
33 n_prior: Tensor,
34 num_outputs: int,
35 ) -> Tuple[Tensor, Tensor, Tensor, Tensor, Tensor, Tensor]:
36 """Update and returns variables required to compute Pearson Correlation Coefficient.
37
38 Check for same shape of input tensors.
39
40 Args:
41 preds: estimated scores
42 target: ground truth scores
43 mean_x: current mean estimate of x tensor
44 mean_y: current mean estimate of y tensor
45 var_x: current variance estimate of x tensor
46 var_y: current variance estimate of y tensor
47 corr_xy: current covariance estimate between x and y tensor
48 n_prior: current number of observed observations
49 num_outputs: Number of outputs in multioutput setting
50
51 """
52 # Data checking
53 _check_same_shape(preds, target)
54 _check_data_shape_to_num_outputs(preds, target, num_outputs)
55 cond = n_prior.mean() > 0
56
57 n_obs = preds.shape[0]
58 if cond:
59 mx_new = (n_prior * mean_x + preds.sum(0)) / (n_prior + n_obs)
60 my_new = (n_prior * mean_y + target.sum(0)) / (n_prior + n_obs)
61 else:
62 mx_new = preds.mean(0)
63 my_new = target.mean(0)
64
65 n_prior += n_obs
66
67 if cond:
68 var_x += ((preds - mx_new) * (preds - mean_x)).sum(0)
69 var_y += ((target - my_new) * (target - mean_y)).sum(0)
70
71 else:
72 var_x += preds.var(0) * (n_obs - 1)
73 var_y += target.var(0) * (n_obs - 1)
74 corr_xy += ((preds - mx_new) * (target - mean_y)).sum(0)
75 mean_x = mx_new
76 mean_y = my_new
77
78 return mean_x, mean_y, var_x, var_y, corr_xy, n_prior
79
80
81 def _pearson_corrcoef_compute(
82 var_x: Tensor,
83 var_y: Tensor,
84 corr_xy: Tensor,
85 nb: Tensor,
86 ) -> Tensor:
87 """Compute the final pearson correlation based on accumulated statistics.
88
89 Args:
90 var_x: variance estimate of x tensor
91 var_y: variance estimate of y tensor
92 corr_xy: covariance estimate between x and y tensor
93 nb: number of observations
94
95 """
96 var_x /= nb - 1
97 var_y /= nb - 1
98 corr_xy /= nb - 1
99 # if var_x, var_y is float16 and on cpu, make it bfloat16 as sqrt is not supported for float16
100 # on cpu, remove this after https://github.com/pytorch/pytorch/issues/54774 is fixed
101 if var_x.dtype == torch.float16 and var_x.device == torch.device("cpu"):
102 var_x = var_x.bfloat16()
103 var_y = var_y.bfloat16()
104
105 bound = math.sqrt(torch.finfo(var_x.dtype).eps)
106 if (var_x < bound).any() or (var_y < bound).any():
107 rank_zero_warn(
108 "The variance of predictions or target is close to zero. This can cause instability in Pearson correlation"
109 "coefficient, leading to wrong results. Consider re-scaling the input if possible or computing using a"
110 f"larger dtype (currently using {var_x.dtype}).",
111 UserWarning,
112 )
113
114 corrcoef = (corr_xy / (var_x * var_y).sqrt()).squeeze()
115 return torch.clamp(corrcoef, -1.0, 1.0)
116
117
118 def pearson_corrcoef(preds: Tensor, target: Tensor) -> Tensor:
119 """Compute pearson correlation coefficient.
120
121 Args:
122 preds: estimated scores
123 target: ground truth scores
124
125 Example (single output regression):
126 >>> from torchmetrics.functional.regression import pearson_corrcoef
127 >>> target = torch.tensor([3, -0.5, 2, 7])
128 >>> preds = torch.tensor([2.5, 0.0, 2, 8])
129 >>> pearson_corrcoef(preds, target)
130 tensor(0.9849)
131
132 Example (multi output regression):
133 >>> from torchmetrics.functional.regression import pearson_corrcoef
134 >>> target = torch.tensor([[3, -0.5], [2, 7]])
135 >>> preds = torch.tensor([[2.5, 0.0], [2, 8]])
136 >>> pearson_corrcoef(preds, target)
137 tensor([1., 1.])
138
139 """
140 d = preds.shape[1] if preds.ndim == 2 else 1
141 _temp = torch.zeros(d, dtype=preds.dtype, device=preds.device)
142 mean_x, mean_y, var_x = _temp.clone(), _temp.clone(), _temp.clone()
143 var_y, corr_xy, nb = _temp.clone(), _temp.clone(), _temp.clone()
144 _, _, var_x, var_y, corr_xy, nb = _pearson_corrcoef_update(
145 preds, target, mean_x, mean_y, var_x, var_y, corr_xy, nb, num_outputs=1 if preds.ndim == 1 else preds.shape[-1]
146 )
147 return _pearson_corrcoef_compute(var_x, var_y, corr_xy, nb)
148
[end of src/torchmetrics/functional/regression/pearson.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/torchmetrics/functional/regression/pearson.py b/src/torchmetrics/functional/regression/pearson.py
--- a/src/torchmetrics/functional/regression/pearson.py
+++ b/src/torchmetrics/functional/regression/pearson.py
@@ -52,9 +52,9 @@
# Data checking
_check_same_shape(preds, target)
_check_data_shape_to_num_outputs(preds, target, num_outputs)
- cond = n_prior.mean() > 0
-
n_obs = preds.shape[0]
+ cond = n_prior.mean() > 0 or n_obs == 1
+
if cond:
mx_new = (n_prior * mean_x + preds.sum(0)) / (n_prior + n_obs)
my_new = (n_prior * mean_y + target.sum(0)) / (n_prior + n_obs)
@@ -67,7 +67,6 @@
if cond:
var_x += ((preds - mx_new) * (preds - mean_x)).sum(0)
var_y += ((target - my_new) * (target - mean_y)).sum(0)
-
else:
var_x += preds.var(0) * (n_obs - 1)
var_y += target.var(0) * (n_obs - 1)
| {"golden_diff": "diff --git a/src/torchmetrics/functional/regression/pearson.py b/src/torchmetrics/functional/regression/pearson.py\n--- a/src/torchmetrics/functional/regression/pearson.py\n+++ b/src/torchmetrics/functional/regression/pearson.py\n@@ -52,9 +52,9 @@\n # Data checking\n _check_same_shape(preds, target)\n _check_data_shape_to_num_outputs(preds, target, num_outputs)\n- cond = n_prior.mean() > 0\n-\n n_obs = preds.shape[0]\n+ cond = n_prior.mean() > 0 or n_obs == 1\n+\n if cond:\n mx_new = (n_prior * mean_x + preds.sum(0)) / (n_prior + n_obs)\n my_new = (n_prior * mean_y + target.sum(0)) / (n_prior + n_obs)\n@@ -67,7 +67,6 @@\n if cond:\n var_x += ((preds - mx_new) * (preds - mean_x)).sum(0)\n var_y += ((target - my_new) * (target - mean_y)).sum(0)\n-\n else:\n var_x += preds.var(0) * (n_obs - 1)\n var_y += target.var(0) * (n_obs - 1)\n", "issue": "Pearson Correlation Coefficient fails when updating one batch at a time\n## \ud83d\udc1b Bug\r\n\r\nThe `PearsonCorrCoef` fails when using a single batch per update.\r\n\r\n### To Reproduce\r\n```python\r\nimport torch\r\nfrom torchmetrics import PearsonCorrCoef\r\n\r\nmetric = PearsonCorrCoef()\r\n\r\n# Works\r\nmetric(torch.tensor([3.0, -0.5, 2.0, 7.0]), torch.tensor([2.5, 0.0, 2.0, 8.0]))\r\nprint(metric.compute()) # tensor(0.9849)\r\n\r\nmetric.reset()\r\n\r\n# Doesn't work.\r\nmetric(torch.tensor([3.0]), torch.tensor([2.5]))\r\nmetric(torch.tensor([-0.5]), torch.tensor([0.0]))\r\nmetric(torch.tensor([2.0]), torch.tensor([2.0]))\r\nmetric(torch.tensor([7.0]), torch.tensor([8.0]))\r\nprint(metric.compute()) # tensor(nan)\r\n```\r\n\r\n### Expected behavior\r\n\r\nBoth ways of updating the metric should work.\r\n\r\n### Environment\r\nPython 3.10\r\ntorchmetrics==1.03\r\ntorch==2.01\r\n\n", "before_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport math\nfrom typing import Tuple\n\nimport torch\nfrom torch import Tensor\n\nfrom torchmetrics.functional.regression.utils import _check_data_shape_to_num_outputs\nfrom torchmetrics.utilities import rank_zero_warn\nfrom torchmetrics.utilities.checks import _check_same_shape\n\n\ndef _pearson_corrcoef_update(\n preds: Tensor,\n target: Tensor,\n mean_x: Tensor,\n mean_y: Tensor,\n var_x: Tensor,\n var_y: Tensor,\n corr_xy: Tensor,\n n_prior: Tensor,\n num_outputs: int,\n) -> Tuple[Tensor, Tensor, Tensor, Tensor, Tensor, Tensor]:\n \"\"\"Update and returns variables required to compute Pearson Correlation Coefficient.\n\n Check for same shape of input tensors.\n\n Args:\n preds: estimated scores\n target: ground truth scores\n mean_x: current mean estimate of x tensor\n mean_y: current mean estimate of y tensor\n var_x: current variance estimate of x tensor\n var_y: current variance estimate of y tensor\n corr_xy: current covariance estimate between x and y tensor\n n_prior: current number of observed observations\n num_outputs: Number of outputs in multioutput setting\n\n \"\"\"\n # Data checking\n _check_same_shape(preds, 
target)\n _check_data_shape_to_num_outputs(preds, target, num_outputs)\n cond = n_prior.mean() > 0\n\n n_obs = preds.shape[0]\n if cond:\n mx_new = (n_prior * mean_x + preds.sum(0)) / (n_prior + n_obs)\n my_new = (n_prior * mean_y + target.sum(0)) / (n_prior + n_obs)\n else:\n mx_new = preds.mean(0)\n my_new = target.mean(0)\n\n n_prior += n_obs\n\n if cond:\n var_x += ((preds - mx_new) * (preds - mean_x)).sum(0)\n var_y += ((target - my_new) * (target - mean_y)).sum(0)\n\n else:\n var_x += preds.var(0) * (n_obs - 1)\n var_y += target.var(0) * (n_obs - 1)\n corr_xy += ((preds - mx_new) * (target - mean_y)).sum(0)\n mean_x = mx_new\n mean_y = my_new\n\n return mean_x, mean_y, var_x, var_y, corr_xy, n_prior\n\n\ndef _pearson_corrcoef_compute(\n var_x: Tensor,\n var_y: Tensor,\n corr_xy: Tensor,\n nb: Tensor,\n) -> Tensor:\n \"\"\"Compute the final pearson correlation based on accumulated statistics.\n\n Args:\n var_x: variance estimate of x tensor\n var_y: variance estimate of y tensor\n corr_xy: covariance estimate between x and y tensor\n nb: number of observations\n\n \"\"\"\n var_x /= nb - 1\n var_y /= nb - 1\n corr_xy /= nb - 1\n # if var_x, var_y is float16 and on cpu, make it bfloat16 as sqrt is not supported for float16\n # on cpu, remove this after https://github.com/pytorch/pytorch/issues/54774 is fixed\n if var_x.dtype == torch.float16 and var_x.device == torch.device(\"cpu\"):\n var_x = var_x.bfloat16()\n var_y = var_y.bfloat16()\n\n bound = math.sqrt(torch.finfo(var_x.dtype).eps)\n if (var_x < bound).any() or (var_y < bound).any():\n rank_zero_warn(\n \"The variance of predictions or target is close to zero. This can cause instability in Pearson correlation\"\n \"coefficient, leading to wrong results. Consider re-scaling the input if possible or computing using a\"\n f\"larger dtype (currently using {var_x.dtype}).\",\n UserWarning,\n )\n\n corrcoef = (corr_xy / (var_x * var_y).sqrt()).squeeze()\n return torch.clamp(corrcoef, -1.0, 1.0)\n\n\ndef pearson_corrcoef(preds: Tensor, target: Tensor) -> Tensor:\n \"\"\"Compute pearson correlation coefficient.\n\n Args:\n preds: estimated scores\n target: ground truth scores\n\n Example (single output regression):\n >>> from torchmetrics.functional.regression import pearson_corrcoef\n >>> target = torch.tensor([3, -0.5, 2, 7])\n >>> preds = torch.tensor([2.5, 0.0, 2, 8])\n >>> pearson_corrcoef(preds, target)\n tensor(0.9849)\n\n Example (multi output regression):\n >>> from torchmetrics.functional.regression import pearson_corrcoef\n >>> target = torch.tensor([[3, -0.5], [2, 7]])\n >>> preds = torch.tensor([[2.5, 0.0], [2, 8]])\n >>> pearson_corrcoef(preds, target)\n tensor([1., 1.])\n\n \"\"\"\n d = preds.shape[1] if preds.ndim == 2 else 1\n _temp = torch.zeros(d, dtype=preds.dtype, device=preds.device)\n mean_x, mean_y, var_x = _temp.clone(), _temp.clone(), _temp.clone()\n var_y, corr_xy, nb = _temp.clone(), _temp.clone(), _temp.clone()\n _, _, var_x, var_y, corr_xy, nb = _pearson_corrcoef_update(\n preds, target, mean_x, mean_y, var_x, var_y, corr_xy, nb, num_outputs=1 if preds.ndim == 1 else preds.shape[-1]\n )\n return _pearson_corrcoef_compute(var_x, var_y, corr_xy, nb)\n", "path": "src/torchmetrics/functional/regression/pearson.py"}]} | 2,536 | 294 |
gh_patches_debug_23499 | rasdani/github-patches | git_diff | getpelican__pelican-1140 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Atom feeds don't validate with SITEURL containing HTTPS scheme and/or a specific service port.
When SITEURL = 'https://example.com' or 'http://example.com:8080', `writers.py` generates `unique_id` producing wrong 'TAG:' IDs.
A possible fix could be to switch **line 45** from :
``` python
unique_id='tag:%s,%s:%s' % (self.site_url.replace('http://', ''),
item.date.date(), item.url),
```
to :
``` python
unique_id='tag:%s,%s:%s' % (re.sub('^https?://(?P<host>.*?)(:\d+)?$','\g<host>',self.site_url),
item.date.date(), item.url),
```
</issue>
<code>
[start of pelican/writers.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import with_statement, unicode_literals, print_function
3 import six
4
5 import os
6 import locale
7 import logging
8
9 if not six.PY3:
10 from codecs import open
11
12 from feedgenerator import Atom1Feed, Rss201rev2Feed
13 from jinja2 import Markup
14
15 from pelican.paginator import Paginator
16 from pelican.utils import get_relative_path, path_to_url, set_date_tzinfo
17 from pelican import signals
18
19 logger = logging.getLogger(__name__)
20
21
22 class Writer(object):
23
24 def __init__(self, output_path, settings=None):
25 self.output_path = output_path
26 self.reminder = dict()
27 self.settings = settings or {}
28 self._written_files = set()
29 self._overridden_files = set()
30
31 def _create_new_feed(self, feed_type, context):
32 feed_class = Rss201rev2Feed if feed_type == 'rss' else Atom1Feed
33 sitename = Markup(context['SITENAME']).striptags()
34 feed = feed_class(
35 title=sitename,
36 link=(self.site_url + '/'),
37 feed_url=self.feed_url,
38 description=context.get('SITESUBTITLE', ''))
39 return feed
40
41 def _add_item_to_the_feed(self, feed, item):
42
43 title = Markup(item.title).striptags()
44 feed.add_item(
45 title=title,
46 link='%s/%s' % (self.site_url, item.url),
47 unique_id='tag:%s,%s:%s' % (self.site_url.replace('http://', ''),
48 item.date.date(), item.url),
49 description=item.get_content(self.site_url),
50 categories=item.tags if hasattr(item, 'tags') else None,
51 author_name=getattr(item, 'author', ''),
52 pubdate=set_date_tzinfo(item.date,
53 self.settings.get('TIMEZONE', None)))
54
55 def _open_w(self, filename, encoding, override=False):
56 """Open a file to write some content to it.
57
58 Exit if we have already written to that file, unless one (and no more
59 than one) of the writes has the override parameter set to True.
60 """
61 if filename in self._overridden_files:
62 if override:
63 raise RuntimeError('File %s is set to be overridden twice'
64 % filename)
65 else:
66 logger.info('skipping %s' % filename)
67 filename = os.devnull
68 elif filename in self._written_files:
69 if override:
70 logger.info('overwriting %s' % filename)
71 else:
72 raise RuntimeError('File %s is to be overwritten' % filename)
73 if override:
74 self._overridden_files.add(filename)
75 self._written_files.add(filename)
76 return open(filename, 'w', encoding=encoding)
77
78 def write_feed(self, elements, context, path=None, feed_type='atom'):
79 """Generate a feed with the list of articles provided
80
81 Return the feed. If no path or output_path is specified, just
82 return the feed object.
83
84 :param elements: the articles to put on the feed.
85 :param context: the context to get the feed metadata.
86 :param path: the path to output.
87 :param feed_type: the feed type to use (atom or rss)
88 """
89 old_locale = locale.setlocale(locale.LC_ALL)
90 locale.setlocale(locale.LC_ALL, str('C'))
91 try:
92 self.site_url = context.get(
93 'SITEURL', path_to_url(get_relative_path(path)))
94
95 self.feed_domain = context.get('FEED_DOMAIN')
96 self.feed_url = '{}/{}'.format(self.feed_domain, path)
97
98 feed = self._create_new_feed(feed_type, context)
99
100 max_items = len(elements)
101 if self.settings['FEED_MAX_ITEMS']:
102 max_items = min(self.settings['FEED_MAX_ITEMS'], max_items)
103 for i in range(max_items):
104 self._add_item_to_the_feed(feed, elements[i])
105
106 if path:
107 complete_path = os.path.join(self.output_path, path)
108 try:
109 os.makedirs(os.path.dirname(complete_path))
110 except Exception:
111 pass
112
113 encoding = 'utf-8' if six.PY3 else None
114 with self._open_w(complete_path, encoding) as fp:
115 feed.write(fp, 'utf-8')
116 logger.info('writing %s' % complete_path)
117 return feed
118 finally:
119 locale.setlocale(locale.LC_ALL, old_locale)
120
121 def write_file(self, name, template, context, relative_urls=False,
122 paginated=None, override_output=False, **kwargs):
123 """Render the template and write the file.
124
125 :param name: name of the file to output
126 :param template: template to use to generate the content
127 :param context: dict to pass to the templates.
128 :param relative_urls: use relative urls or absolutes ones
129 :param paginated: dict of article list to paginate - must have the
130 same length (same list in different orders)
131 :param override_output: boolean telling if we can override previous
132 output with the same name (and if next files written with the same
133 name should be skipped to keep that one)
134 :param **kwargs: additional variables to pass to the templates
135 """
136
137 if name is False:
138 return
139 elif not name:
140 # other stuff, just return for now
141 return
142
143 def _write_file(template, localcontext, output_path, name, override):
144 """Render the template write the file."""
145 old_locale = locale.setlocale(locale.LC_ALL)
146 locale.setlocale(locale.LC_ALL, str('C'))
147 try:
148 output = template.render(localcontext)
149 finally:
150 locale.setlocale(locale.LC_ALL, old_locale)
151 path = os.path.join(output_path, name)
152 try:
153 os.makedirs(os.path.dirname(path))
154 except Exception:
155 pass
156
157 with self._open_w(path, 'utf-8', override=override) as f:
158 f.write(output)
159 logger.info('writing {}'.format(path))
160
161 # Send a signal to say we're writing a file with some specific
162 # local context.
163 signals.content_written.send(path, context=localcontext)
164
165 localcontext = context.copy()
166 if relative_urls:
167 relative_url = path_to_url(get_relative_path(name))
168 context['localsiteurl'] = relative_url
169 localcontext['SITEURL'] = relative_url
170
171 localcontext['output_file'] = name
172 localcontext.update(kwargs)
173
174 # check paginated
175 paginated = paginated or {}
176 if paginated:
177 name_root = os.path.splitext(name)[0]
178
179 # pagination needed, init paginators
180 paginators = {}
181 for key in paginated.keys():
182 object_list = paginated[key]
183
184 paginators[key] = Paginator(
185 name_root,
186 object_list,
187 self.settings,
188 )
189
190 # generated pages, and write
191 for page_num in range(list(paginators.values())[0].num_pages):
192 paginated_localcontext = localcontext.copy()
193 for key in paginators.keys():
194 paginator = paginators[key]
195 previous_page = paginator.page(page_num) \
196 if page_num > 0 else None
197 page = paginator.page(page_num + 1)
198 next_page = paginator.page(page_num + 2) \
199 if page_num + 1 < paginator.num_pages else None
200 paginated_localcontext.update(
201 {'%s_paginator' % key: paginator,
202 '%s_page' % key: page,
203 '%s_previous_page' % key: previous_page,
204 '%s_next_page' % key: next_page})
205
206 _write_file(template, paginated_localcontext, self.output_path,
207 page.save_as, override_output)
208 else:
209 # no pagination
210 _write_file(template, localcontext, self.output_path, name,
211 override_output)
212
[end of pelican/writers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pelican/writers.py b/pelican/writers.py
--- a/pelican/writers.py
+++ b/pelican/writers.py
@@ -8,6 +8,9 @@
if not six.PY3:
from codecs import open
+ from urlparse import urlparse
+else:
+ from urllib.parse import urlparse
from feedgenerator import Atom1Feed, Rss201rev2Feed
from jinja2 import Markup
@@ -41,11 +44,13 @@
def _add_item_to_the_feed(self, feed, item):
title = Markup(item.title).striptags()
+ link = '%s/%s' % (self.site_url, item.url)
feed.add_item(
title=title,
- link='%s/%s' % (self.site_url, item.url),
- unique_id='tag:%s,%s:%s' % (self.site_url.replace('http://', ''),
- item.date.date(), item.url),
+ link=link,
+ unique_id='tag:%s,%s:%s' % (urlparse(link).netloc,
+ item.date.date(),
+ urlparse(link).path.lstrip('/')),
description=item.get_content(self.site_url),
categories=item.tags if hasattr(item, 'tags') else None,
author_name=getattr(item, 'author', ''),
| {"golden_diff": "diff --git a/pelican/writers.py b/pelican/writers.py\n--- a/pelican/writers.py\n+++ b/pelican/writers.py\n@@ -8,6 +8,9 @@\n \n if not six.PY3:\n from codecs import open\n+ from urlparse import urlparse\n+else:\n+ from urllib.parse import urlparse\n \n from feedgenerator import Atom1Feed, Rss201rev2Feed\n from jinja2 import Markup\n@@ -41,11 +44,13 @@\n def _add_item_to_the_feed(self, feed, item):\n \n title = Markup(item.title).striptags()\n+ link = '%s/%s' % (self.site_url, item.url)\n feed.add_item(\n title=title,\n- link='%s/%s' % (self.site_url, item.url),\n- unique_id='tag:%s,%s:%s' % (self.site_url.replace('http://', ''),\n- item.date.date(), item.url),\n+ link=link,\n+ unique_id='tag:%s,%s:%s' % (urlparse(link).netloc,\n+ item.date.date(),\n+ urlparse(link).path.lstrip('/')),\n description=item.get_content(self.site_url),\n categories=item.tags if hasattr(item, 'tags') else None,\n author_name=getattr(item, 'author', ''),\n", "issue": "Atom feeds don't validate with SITEURL containing HTTPS scheme and/or a specific service port.\nWhen SITEURL = 'https://example.com' or 'http://example.com:8080', `writers.py` generates `unique_id` producing wrong 'TAG:' IDs.\n\nA possible fix could be to switch **line 45** from :\n\n``` python\nunique_id='tag:%s,%s:%s' % (self.site_url.replace('http://', ''),\n item.date.date(), item.url),\n```\n\nto :\n\n``` python\nunique_id='tag:%s,%s:%s' % (re.sub('^https?://(?P<host>.*?)(:\\d+)?$','\\g<host>',self.site_url),\n item.date.date(), item.url),\n```\n\nAtom feeds don't validate with SITEURL containing HTTPS scheme and/or a specific service port.\nWhen SITEURL = 'https://example.com' or 'http://example.com:8080', `writers.py` generates `unique_id` producing wrong 'TAG:' IDs.\n\nA possible fix could be to switch **line 45** from :\n\n``` python\nunique_id='tag:%s,%s:%s' % (self.site_url.replace('http://', ''),\n item.date.date(), item.url),\n```\n\nto :\n\n``` python\nunique_id='tag:%s,%s:%s' % (re.sub('^https?://(?P<host>.*?)(:\\d+)?$','\\g<host>',self.site_url),\n item.date.date(), item.url),\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import with_statement, unicode_literals, print_function\nimport six\n\nimport os\nimport locale\nimport logging\n\nif not six.PY3:\n from codecs import open\n\nfrom feedgenerator import Atom1Feed, Rss201rev2Feed\nfrom jinja2 import Markup\n\nfrom pelican.paginator import Paginator\nfrom pelican.utils import get_relative_path, path_to_url, set_date_tzinfo\nfrom pelican import signals\n\nlogger = logging.getLogger(__name__)\n\n\nclass Writer(object):\n\n def __init__(self, output_path, settings=None):\n self.output_path = output_path\n self.reminder = dict()\n self.settings = settings or {}\n self._written_files = set()\n self._overridden_files = set()\n\n def _create_new_feed(self, feed_type, context):\n feed_class = Rss201rev2Feed if feed_type == 'rss' else Atom1Feed\n sitename = Markup(context['SITENAME']).striptags()\n feed = feed_class(\n title=sitename,\n link=(self.site_url + '/'),\n feed_url=self.feed_url,\n description=context.get('SITESUBTITLE', ''))\n return feed\n\n def _add_item_to_the_feed(self, feed, item):\n\n title = Markup(item.title).striptags()\n feed.add_item(\n title=title,\n link='%s/%s' % (self.site_url, item.url),\n unique_id='tag:%s,%s:%s' % (self.site_url.replace('http://', ''),\n item.date.date(), item.url),\n description=item.get_content(self.site_url),\n categories=item.tags if hasattr(item, 'tags') else None,\n 
author_name=getattr(item, 'author', ''),\n pubdate=set_date_tzinfo(item.date,\n self.settings.get('TIMEZONE', None)))\n\n def _open_w(self, filename, encoding, override=False):\n \"\"\"Open a file to write some content to it.\n\n Exit if we have already written to that file, unless one (and no more\n than one) of the writes has the override parameter set to True.\n \"\"\"\n if filename in self._overridden_files:\n if override:\n raise RuntimeError('File %s is set to be overridden twice'\n % filename)\n else:\n logger.info('skipping %s' % filename)\n filename = os.devnull\n elif filename in self._written_files:\n if override:\n logger.info('overwriting %s' % filename)\n else:\n raise RuntimeError('File %s is to be overwritten' % filename)\n if override:\n self._overridden_files.add(filename)\n self._written_files.add(filename)\n return open(filename, 'w', encoding=encoding)\n\n def write_feed(self, elements, context, path=None, feed_type='atom'):\n \"\"\"Generate a feed with the list of articles provided\n\n Return the feed. If no path or output_path is specified, just\n return the feed object.\n\n :param elements: the articles to put on the feed.\n :param context: the context to get the feed metadata.\n :param path: the path to output.\n :param feed_type: the feed type to use (atom or rss)\n \"\"\"\n old_locale = locale.setlocale(locale.LC_ALL)\n locale.setlocale(locale.LC_ALL, str('C'))\n try:\n self.site_url = context.get(\n 'SITEURL', path_to_url(get_relative_path(path)))\n\n self.feed_domain = context.get('FEED_DOMAIN')\n self.feed_url = '{}/{}'.format(self.feed_domain, path)\n\n feed = self._create_new_feed(feed_type, context)\n\n max_items = len(elements)\n if self.settings['FEED_MAX_ITEMS']:\n max_items = min(self.settings['FEED_MAX_ITEMS'], max_items)\n for i in range(max_items):\n self._add_item_to_the_feed(feed, elements[i])\n\n if path:\n complete_path = os.path.join(self.output_path, path)\n try:\n os.makedirs(os.path.dirname(complete_path))\n except Exception:\n pass\n\n encoding = 'utf-8' if six.PY3 else None\n with self._open_w(complete_path, encoding) as fp:\n feed.write(fp, 'utf-8')\n logger.info('writing %s' % complete_path)\n return feed\n finally:\n locale.setlocale(locale.LC_ALL, old_locale)\n\n def write_file(self, name, template, context, relative_urls=False,\n paginated=None, override_output=False, **kwargs):\n \"\"\"Render the template and write the file.\n\n :param name: name of the file to output\n :param template: template to use to generate the content\n :param context: dict to pass to the templates.\n :param relative_urls: use relative urls or absolutes ones\n :param paginated: dict of article list to paginate - must have the\n same length (same list in different orders)\n :param override_output: boolean telling if we can override previous\n output with the same name (and if next files written with the same\n name should be skipped to keep that one)\n :param **kwargs: additional variables to pass to the templates\n \"\"\"\n\n if name is False:\n return\n elif not name:\n # other stuff, just return for now\n return\n\n def _write_file(template, localcontext, output_path, name, override):\n \"\"\"Render the template write the file.\"\"\"\n old_locale = locale.setlocale(locale.LC_ALL)\n locale.setlocale(locale.LC_ALL, str('C'))\n try:\n output = template.render(localcontext)\n finally:\n locale.setlocale(locale.LC_ALL, old_locale)\n path = os.path.join(output_path, name)\n try:\n os.makedirs(os.path.dirname(path))\n except Exception:\n pass\n\n with 
self._open_w(path, 'utf-8', override=override) as f:\n f.write(output)\n logger.info('writing {}'.format(path))\n\n # Send a signal to say we're writing a file with some specific\n # local context.\n signals.content_written.send(path, context=localcontext)\n\n localcontext = context.copy()\n if relative_urls:\n relative_url = path_to_url(get_relative_path(name))\n context['localsiteurl'] = relative_url\n localcontext['SITEURL'] = relative_url\n\n localcontext['output_file'] = name\n localcontext.update(kwargs)\n\n # check paginated\n paginated = paginated or {}\n if paginated:\n name_root = os.path.splitext(name)[0]\n\n # pagination needed, init paginators\n paginators = {}\n for key in paginated.keys():\n object_list = paginated[key]\n\n paginators[key] = Paginator(\n name_root,\n object_list,\n self.settings,\n )\n\n # generated pages, and write\n for page_num in range(list(paginators.values())[0].num_pages):\n paginated_localcontext = localcontext.copy()\n for key in paginators.keys():\n paginator = paginators[key]\n previous_page = paginator.page(page_num) \\\n if page_num > 0 else None\n page = paginator.page(page_num + 1)\n next_page = paginator.page(page_num + 2) \\\n if page_num + 1 < paginator.num_pages else None\n paginated_localcontext.update(\n {'%s_paginator' % key: paginator,\n '%s_page' % key: page,\n '%s_previous_page' % key: previous_page,\n '%s_next_page' % key: next_page})\n\n _write_file(template, paginated_localcontext, self.output_path,\n page.save_as, override_output)\n else:\n # no pagination\n _write_file(template, localcontext, self.output_path, name,\n override_output)\n", "path": "pelican/writers.py"}]} | 3,101 | 297 |
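The merged patch prefers `urlparse` over the regex suggested in the issue: the scheme never leaks into the tag ID, and the entry path is derived from the full link. A small self-contained sketch of that construction (the example URLs and date are illustrative only):

```python
from urllib.parse import urlparse

def unique_id(site_url, item_url, date):
    # Mirrors the patched _add_item_to_the_feed(): build the full link first,
    # then take host and path from it instead of stripping 'http://'.
    link = '%s/%s' % (site_url, item_url)
    return 'tag:%s,%s:%s' % (urlparse(link).netloc, date,
                             urlparse(link).path.lstrip('/'))

print(unique_id('https://example.com', 'posts/a.html', '2013-11-17'))
# tag:example.com,2013-11-17:posts/a.html
print(unique_id('http://example.com:8080', 'posts/a.html', '2013-11-17'))
# tag:example.com:8080,2013-11-17:posts/a.html
```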
gh_patches_debug_12005 | rasdani/github-patches | git_diff | chainer__chainer-722 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`/usr/local/cuda/lib` in Linux for Tegra
In CUDA for L4T (Linux for Tegra), shared objects are located in `/usr/local/cuda/lib`, not in `lib64`. I failed to install Chainer.
</issue>
<code>
[start of chainer_setup_build.py]
1 from __future__ import print_function
2 import copy
3 import distutils
4 import os
5 from os import path
6 import pkg_resources
7 import shutil
8 import subprocess
9 import sys
10 import tempfile
11
12 import setuptools
13 from setuptools.command import build_ext
14
15
16 dummy_extension = setuptools.Extension('chainer', ['chainer.c'])
17
18 MODULES = [
19 {
20 'name': 'cuda',
21 'file': [
22 'cupy.core.core',
23 'cupy.core.flags',
24 'cupy.cuda.cublas',
25 'cupy.cuda.curand',
26 'cupy.cuda.device',
27 'cupy.cuda.driver',
28 'cupy.cuda.memory',
29 'cupy.cuda.function',
30 'cupy.cuda.runtime',
31 'cupy.util',
32 ],
33 'include': [
34 'cublas_v2.h',
35 'cuda.h',
36 'cuda_runtime.h',
37 'curand.h',
38 ],
39 'libraries': [
40 'cublas',
41 'cuda',
42 'cudart',
43 'curand',
44 ],
45 },
46 {
47 'name': 'cudnn',
48 'file': [
49 'cupy.cuda.cudnn',
50 ],
51 'include': [
52 'cudnn.h',
53 ],
54 'libraries': [
55 'cudnn',
56 ],
57 }
58 ]
59
60
61 def get_compiler_setting():
62 nvcc_path = search_on_path(('nvcc', 'nvcc.exe'))
63 cuda_path_default = None
64 if nvcc_path is None:
65 print('**************************************************************')
66 print('*** WARNING: nvcc not in path.')
67 print('*** WARNING: Please set path to nvcc.')
68 print('**************************************************************')
69 else:
70 cuda_path_default = path.normpath(
71 path.join(path.dirname(nvcc_path), '..'))
72
73 cuda_path = os.environ.get('CUDA_PATH', '') # Nvidia default on Windows
74 if len(cuda_path) > 0 and cuda_path != cuda_path_default:
75 print('**************************************************************')
76 print('*** WARNING: nvcc path != CUDA_PATH')
77 print('*** WARNING: nvcc path: %s', cuda_path_default)
78 print('*** WARNING: CUDA_PATH: %s', cuda_path)
79 print('**************************************************************')
80
81 if not path.exists(cuda_path):
82 cuda_path = cuda_path_default
83
84 if not cuda_path and path.exists('/usr/local/cuda'):
85 cuda_path = '/usr/local/cuda'
86
87 include_dirs = []
88 library_dirs = []
89 define_macros = []
90
91 if cuda_path:
92 include_dirs.append(path.join(cuda_path, 'include'))
93 if sys.platform == 'win32':
94 library_dirs.append(path.join(cuda_path, 'bin'))
95 library_dirs.append(path.join(cuda_path, 'lib', 'x64'))
96 elif sys.platform == 'darwin':
97 library_dirs.append(path.join(cuda_path, 'lib'))
98 else:
99 library_dirs.append(path.join(cuda_path, 'lib64'))
100 if sys.platform == 'darwin':
101 library_dirs.append('/usr/local/cuda/lib')
102
103 return {
104 'include_dirs': include_dirs,
105 'library_dirs': library_dirs,
106 'define_macros': define_macros,
107 'language': 'c++',
108 }
109
110
111 def localpath(*args):
112 return path.abspath(path.join(path.dirname(__file__), *args))
113
114
115 def get_path(key):
116 return os.environ.get(key, '').split(os.pathsep)
117
118
119 def search_on_path(filenames):
120 for p in get_path('PATH'):
121 for filename in filenames:
122 full = path.join(p, filename)
123 if path.exists(full):
124 return path.abspath(full)
125
126
127 def check_include(dirs, file_path):
128 return any(path.exists(path.join(dir, file_path)) for dir in dirs)
129
130
131 def check_readthedocs_environment():
132 return os.environ.get('READTHEDOCS', None) == 'True'
133
134
135 def check_library(compiler, includes=[], libraries=[],
136 include_dirs=[], library_dirs=[]):
137 temp_dir = tempfile.mkdtemp()
138
139 try:
140 source = '''
141 int main(int argc, char* argv[]) {
142 return 0;
143 }
144 '''
145 fname = os.path.join(temp_dir, 'a.cpp')
146 with open(fname, 'w') as f:
147 for header in includes:
148 f.write('#include <%s>\n' % header)
149 f.write(source)
150
151 try:
152 objects = compiler.compile([fname], output_dir=temp_dir,
153 include_dirs=include_dirs)
154 except distutils.errors.CompileError:
155 return False
156
157 try:
158 compiler.link_shared_lib(objects,
159 os.path.join(temp_dir, 'a'),
160 libraries=libraries,
161 library_dirs=library_dirs)
162 except (distutils.errors.LinkError, TypeError):
163 return False
164
165 return True
166
167 finally:
168 shutil.rmtree(temp_dir, ignore_errors=True)
169
170
171 def make_extensions(options, compiler):
172
173 """Produce a list of Extension instances which passed to cythonize()."""
174
175 no_cuda = options['no_cuda']
176 settings = get_compiler_setting()
177
178 try:
179 import numpy
180 numpy_include = numpy.get_include()
181 except AttributeError:
182 # if numpy is not installed get the headers from the .egg directory
183 import numpy.core
184 numpy_include = path.join(
185 path.dirname(numpy.core.__file__), 'include')
186 include_dirs = settings['include_dirs']
187 include_dirs.append(numpy_include)
188
189 settings['include_dirs'] = [
190 x for x in include_dirs if path.exists(x)]
191 settings['library_dirs'] = [
192 x for x in settings['library_dirs'] if path.exists(x)]
193 if sys.platform != 'win32':
194 settings['runtime_library_dirs'] = settings['library_dirs']
195
196 if options['linetrace']:
197 settings['define_macros'].append(('CYTHON_TRACE', '1'))
198 settings['define_macros'].append(('CYTHON_TRACE_NOGIL', '1'))
199 if no_cuda:
200 settings['define_macros'].append(('CUPY_NO_CUDA', '1'))
201
202 ret = []
203 for module in MODULES:
204 print('Include directories:', settings['include_dirs'])
205 print('Library directories:', settings['library_dirs'])
206
207 if not no_cuda:
208 if not check_library(compiler,
209 includes=module['include'],
210 include_dirs=settings['include_dirs']):
211 print('**************************************************')
212 print('*** Include files not found: %s' % module['include'])
213 print('*** Skip installing %s support' % module['name'])
214 print('*** Check your CPATH environment variable')
215 print('**************************************************')
216 continue
217
218 if not check_library(compiler,
219 libraries=module['libraries'],
220 library_dirs=settings['library_dirs']):
221 print('**************************************************')
222 print('*** Cannot link libraries: %s' % module['libraries'])
223 print('*** Skip installing %s support' % module['name'])
224 print('*** Check your LIBRARY_PATH environment variable')
225 print('**************************************************')
226 continue
227
228 s = settings.copy()
229 if not no_cuda:
230 s['libraries'] = module['libraries']
231 ret.extend([
232 setuptools.Extension(f, [path.join(*f.split('.')) + '.pyx'], **s)
233 for f in module['file']])
234 return ret
235
236
237 _arg_options = {}
238
239
240 def parse_args():
241 global _arg_options
242 _arg_options['profile'] = '--cupy-profile' in sys.argv
243 if _arg_options['profile']:
244 sys.argv.remove('--cupy-profile')
245
246 cupy_coverage = '--cupy-coverage' in sys.argv
247 if cupy_coverage:
248 sys.argv.remove('--cupy-coverage')
249 _arg_options['linetrace'] = cupy_coverage
250 _arg_options['annotate'] = cupy_coverage
251
252 _arg_options['no_cuda'] = '--cupy-no-cuda' in sys.argv
253 if _arg_options['no_cuda']:
254 sys.argv.remove('--cupy-no-cuda')
255 if check_readthedocs_environment():
256 _arg_options['no_cuda'] = True
257
258
259 def cythonize(extensions, force=False, annotate=False, compiler_directives={}):
260 cython_pkg = pkg_resources.get_distribution('cython')
261 cython_path = path.join(cython_pkg.location, 'cython.py')
262 print("cython path:%s" % cython_pkg.location)
263 cython_cmdbase = [sys.executable, cython_path]
264 subprocess.check_call(cython_cmdbase + ['--version'])
265
266 cython_cmdbase.extend(['--fast-fail', '--verbose', '--cplus'])
267 ret = []
268 for ext in extensions:
269 cmd = list(cython_cmdbase)
270 for i in compiler_directives.items():
271 cmd.append('--directive')
272 cmd.append('%s=%s' % i)
273 cpp_files = [path.splitext(f)[0] + ".cpp" for f in ext.sources]
274 cmd += ext.sources
275 subprocess.check_call(cmd)
276 ext = copy.copy(ext)
277 ext.sources = cpp_files
278 ret.append(ext)
279 return ret
280
281
282 class chainer_build_ext(build_ext.build_ext):
283
284 """`build_ext` command for cython files."""
285
286 def finalize_options(self):
287 ext_modules = self.distribution.ext_modules
288 if dummy_extension in ext_modules:
289 print('Executing cythonize()')
290 print('Options:', _arg_options)
291
292 directive_keys = ('linetrace', 'profile')
293 directives = {key: _arg_options[key] for key in directive_keys}
294
295 cythonize_option_keys = ('annotate',)
296 cythonize_options = {
297 key: _arg_options[key] for key in cythonize_option_keys}
298
299 compiler = distutils.ccompiler.new_compiler(self.compiler)
300 distutils.sysconfig.customize_compiler(compiler)
301
302 extensions = make_extensions(_arg_options, compiler)
303 extensions = cythonize(
304 extensions,
305 force=True,
306 compiler_directives=directives,
307 **cythonize_options)
308
309 # Modify ext_modules for cython
310 ext_modules.remove(dummy_extension)
311 ext_modules.extend(extensions)
312
313 build_ext.build_ext.finalize_options(self)
314
[end of chainer_setup_build.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer_setup_build.py b/chainer_setup_build.py
--- a/chainer_setup_build.py
+++ b/chainer_setup_build.py
@@ -93,10 +93,9 @@
if sys.platform == 'win32':
library_dirs.append(path.join(cuda_path, 'bin'))
library_dirs.append(path.join(cuda_path, 'lib', 'x64'))
- elif sys.platform == 'darwin':
- library_dirs.append(path.join(cuda_path, 'lib'))
else:
library_dirs.append(path.join(cuda_path, 'lib64'))
+ library_dirs.append(path.join(cuda_path, 'lib'))
if sys.platform == 'darwin':
library_dirs.append('/usr/local/cuda/lib')
| {"golden_diff": "diff --git a/chainer_setup_build.py b/chainer_setup_build.py\n--- a/chainer_setup_build.py\n+++ b/chainer_setup_build.py\n@@ -93,10 +93,9 @@\n if sys.platform == 'win32':\n library_dirs.append(path.join(cuda_path, 'bin'))\n library_dirs.append(path.join(cuda_path, 'lib', 'x64'))\n- elif sys.platform == 'darwin':\n- library_dirs.append(path.join(cuda_path, 'lib'))\n else:\n library_dirs.append(path.join(cuda_path, 'lib64'))\n+ library_dirs.append(path.join(cuda_path, 'lib'))\n if sys.platform == 'darwin':\n library_dirs.append('/usr/local/cuda/lib')\n", "issue": "`/usr/local/cuda/lib` in Linux for Tegra\nIn CUDA for L4T (Linux for Tegra), shared objects are located in `/usr/local/cuda/lib`, not in `lib64`. I failed to install Chainer.\n\n", "before_files": [{"content": "from __future__ import print_function\nimport copy\nimport distutils\nimport os\nfrom os import path\nimport pkg_resources\nimport shutil\nimport subprocess\nimport sys\nimport tempfile\n\nimport setuptools\nfrom setuptools.command import build_ext\n\n\ndummy_extension = setuptools.Extension('chainer', ['chainer.c'])\n\nMODULES = [\n {\n 'name': 'cuda',\n 'file': [\n 'cupy.core.core',\n 'cupy.core.flags',\n 'cupy.cuda.cublas',\n 'cupy.cuda.curand',\n 'cupy.cuda.device',\n 'cupy.cuda.driver',\n 'cupy.cuda.memory',\n 'cupy.cuda.function',\n 'cupy.cuda.runtime',\n 'cupy.util',\n ],\n 'include': [\n 'cublas_v2.h',\n 'cuda.h',\n 'cuda_runtime.h',\n 'curand.h',\n ],\n 'libraries': [\n 'cublas',\n 'cuda',\n 'cudart',\n 'curand',\n ],\n },\n {\n 'name': 'cudnn',\n 'file': [\n 'cupy.cuda.cudnn',\n ],\n 'include': [\n 'cudnn.h',\n ],\n 'libraries': [\n 'cudnn',\n ],\n }\n]\n\n\ndef get_compiler_setting():\n nvcc_path = search_on_path(('nvcc', 'nvcc.exe'))\n cuda_path_default = None\n if nvcc_path is None:\n print('**************************************************************')\n print('*** WARNING: nvcc not in path.')\n print('*** WARNING: Please set path to nvcc.')\n print('**************************************************************')\n else:\n cuda_path_default = path.normpath(\n path.join(path.dirname(nvcc_path), '..'))\n\n cuda_path = os.environ.get('CUDA_PATH', '') # Nvidia default on Windows\n if len(cuda_path) > 0 and cuda_path != cuda_path_default:\n print('**************************************************************')\n print('*** WARNING: nvcc path != CUDA_PATH')\n print('*** WARNING: nvcc path: %s', cuda_path_default)\n print('*** WARNING: CUDA_PATH: %s', cuda_path)\n print('**************************************************************')\n\n if not path.exists(cuda_path):\n cuda_path = cuda_path_default\n\n if not cuda_path and path.exists('/usr/local/cuda'):\n cuda_path = '/usr/local/cuda'\n\n include_dirs = []\n library_dirs = []\n define_macros = []\n\n if cuda_path:\n include_dirs.append(path.join(cuda_path, 'include'))\n if sys.platform == 'win32':\n library_dirs.append(path.join(cuda_path, 'bin'))\n library_dirs.append(path.join(cuda_path, 'lib', 'x64'))\n elif sys.platform == 'darwin':\n library_dirs.append(path.join(cuda_path, 'lib'))\n else:\n library_dirs.append(path.join(cuda_path, 'lib64'))\n if sys.platform == 'darwin':\n library_dirs.append('/usr/local/cuda/lib')\n\n return {\n 'include_dirs': include_dirs,\n 'library_dirs': library_dirs,\n 'define_macros': define_macros,\n 'language': 'c++',\n }\n\n\ndef localpath(*args):\n return path.abspath(path.join(path.dirname(__file__), *args))\n\n\ndef get_path(key):\n return os.environ.get(key, '').split(os.pathsep)\n\n\ndef 
search_on_path(filenames):\n for p in get_path('PATH'):\n for filename in filenames:\n full = path.join(p, filename)\n if path.exists(full):\n return path.abspath(full)\n\n\ndef check_include(dirs, file_path):\n return any(path.exists(path.join(dir, file_path)) for dir in dirs)\n\n\ndef check_readthedocs_environment():\n return os.environ.get('READTHEDOCS', None) == 'True'\n\n\ndef check_library(compiler, includes=[], libraries=[],\n include_dirs=[], library_dirs=[]):\n temp_dir = tempfile.mkdtemp()\n\n try:\n source = '''\n int main(int argc, char* argv[]) {\n return 0;\n }\n '''\n fname = os.path.join(temp_dir, 'a.cpp')\n with open(fname, 'w') as f:\n for header in includes:\n f.write('#include <%s>\\n' % header)\n f.write(source)\n\n try:\n objects = compiler.compile([fname], output_dir=temp_dir,\n include_dirs=include_dirs)\n except distutils.errors.CompileError:\n return False\n\n try:\n compiler.link_shared_lib(objects,\n os.path.join(temp_dir, 'a'),\n libraries=libraries,\n library_dirs=library_dirs)\n except (distutils.errors.LinkError, TypeError):\n return False\n\n return True\n\n finally:\n shutil.rmtree(temp_dir, ignore_errors=True)\n\n\ndef make_extensions(options, compiler):\n\n \"\"\"Produce a list of Extension instances which passed to cythonize().\"\"\"\n\n no_cuda = options['no_cuda']\n settings = get_compiler_setting()\n\n try:\n import numpy\n numpy_include = numpy.get_include()\n except AttributeError:\n # if numpy is not installed get the headers from the .egg directory\n import numpy.core\n numpy_include = path.join(\n path.dirname(numpy.core.__file__), 'include')\n include_dirs = settings['include_dirs']\n include_dirs.append(numpy_include)\n\n settings['include_dirs'] = [\n x for x in include_dirs if path.exists(x)]\n settings['library_dirs'] = [\n x for x in settings['library_dirs'] if path.exists(x)]\n if sys.platform != 'win32':\n settings['runtime_library_dirs'] = settings['library_dirs']\n\n if options['linetrace']:\n settings['define_macros'].append(('CYTHON_TRACE', '1'))\n settings['define_macros'].append(('CYTHON_TRACE_NOGIL', '1'))\n if no_cuda:\n settings['define_macros'].append(('CUPY_NO_CUDA', '1'))\n\n ret = []\n for module in MODULES:\n print('Include directories:', settings['include_dirs'])\n print('Library directories:', settings['library_dirs'])\n\n if not no_cuda:\n if not check_library(compiler,\n includes=module['include'],\n include_dirs=settings['include_dirs']):\n print('**************************************************')\n print('*** Include files not found: %s' % module['include'])\n print('*** Skip installing %s support' % module['name'])\n print('*** Check your CPATH environment variable')\n print('**************************************************')\n continue\n\n if not check_library(compiler,\n libraries=module['libraries'],\n library_dirs=settings['library_dirs']):\n print('**************************************************')\n print('*** Cannot link libraries: %s' % module['libraries'])\n print('*** Skip installing %s support' % module['name'])\n print('*** Check your LIBRARY_PATH environment variable')\n print('**************************************************')\n continue\n\n s = settings.copy()\n if not no_cuda:\n s['libraries'] = module['libraries']\n ret.extend([\n setuptools.Extension(f, [path.join(*f.split('.')) + '.pyx'], **s)\n for f in module['file']])\n return ret\n\n\n_arg_options = {}\n\n\ndef parse_args():\n global _arg_options\n _arg_options['profile'] = '--cupy-profile' in sys.argv\n if _arg_options['profile']:\n 
sys.argv.remove('--cupy-profile')\n\n cupy_coverage = '--cupy-coverage' in sys.argv\n if cupy_coverage:\n sys.argv.remove('--cupy-coverage')\n _arg_options['linetrace'] = cupy_coverage\n _arg_options['annotate'] = cupy_coverage\n\n _arg_options['no_cuda'] = '--cupy-no-cuda' in sys.argv\n if _arg_options['no_cuda']:\n sys.argv.remove('--cupy-no-cuda')\n if check_readthedocs_environment():\n _arg_options['no_cuda'] = True\n\n\ndef cythonize(extensions, force=False, annotate=False, compiler_directives={}):\n cython_pkg = pkg_resources.get_distribution('cython')\n cython_path = path.join(cython_pkg.location, 'cython.py')\n print(\"cython path:%s\" % cython_pkg.location)\n cython_cmdbase = [sys.executable, cython_path]\n subprocess.check_call(cython_cmdbase + ['--version'])\n\n cython_cmdbase.extend(['--fast-fail', '--verbose', '--cplus'])\n ret = []\n for ext in extensions:\n cmd = list(cython_cmdbase)\n for i in compiler_directives.items():\n cmd.append('--directive')\n cmd.append('%s=%s' % i)\n cpp_files = [path.splitext(f)[0] + \".cpp\" for f in ext.sources]\n cmd += ext.sources\n subprocess.check_call(cmd)\n ext = copy.copy(ext)\n ext.sources = cpp_files\n ret.append(ext)\n return ret\n\n\nclass chainer_build_ext(build_ext.build_ext):\n\n \"\"\"`build_ext` command for cython files.\"\"\"\n\n def finalize_options(self):\n ext_modules = self.distribution.ext_modules\n if dummy_extension in ext_modules:\n print('Executing cythonize()')\n print('Options:', _arg_options)\n\n directive_keys = ('linetrace', 'profile')\n directives = {key: _arg_options[key] for key in directive_keys}\n\n cythonize_option_keys = ('annotate',)\n cythonize_options = {\n key: _arg_options[key] for key in cythonize_option_keys}\n\n compiler = distutils.ccompiler.new_compiler(self.compiler)\n distutils.sysconfig.customize_compiler(compiler)\n\n extensions = make_extensions(_arg_options, compiler)\n extensions = cythonize(\n extensions,\n force=True,\n compiler_directives=directives,\n **cythonize_options)\n\n # Modify ext_modules for cython\n ext_modules.remove(dummy_extension)\n ext_modules.extend(extensions)\n\n build_ext.build_ext.finalize_options(self)\n", "path": "chainer_setup_build.py"}]} | 3,589 | 155 |
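The change is safe on every platform because `make_extensions()` already filters `library_dirs` with `path.exists`, so listing both `lib64` and `lib` only keeps whichever directory actually exists. A rough sketch of the resulting search order (the helper function is hypothetical, not part of the build script):

```python
import sys
from os import path

def cuda_library_dirs(cuda_path):
    dirs = []
    if sys.platform == 'win32':
        dirs += [path.join(cuda_path, 'bin'), path.join(cuda_path, 'lib', 'x64')]
    else:
        # Desktop Linux ships lib64; Linux for Tegra (and macOS) ships lib.
        dirs += [path.join(cuda_path, 'lib64'), path.join(cuda_path, 'lib')]
    if sys.platform == 'darwin':
        dirs.append('/usr/local/cuda/lib')
    # Non-existent candidates are dropped, just as make_extensions() does,
    # so listing both variants costs nothing on either platform.
    return [d for d in dirs if path.exists(d)]

print(cuda_library_dirs('/usr/local/cuda'))
```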
gh_patches_debug_19462 | rasdani/github-patches | git_diff | sublimelsp__LSP-1997 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
goto commands don't restore selection when location picking is canceled
**Describe the bug**
When there's more than one location available for a `goto*` command, a quick panel is shown to pick one.
Highlighting entries modifies the selection, and canceling the operation doesn't restore the initial selection.
**Expected behavior**
it should restore the selection, like ST's built-in
**Screenshots**

**Environment (please complete the following information):**
- OS: Windows 10
- Sublime Text version: 4126
- LSP version: 1.16.3
- Language servers used: LSP-rust-analyzer
**Additional context**
Add any other context about the problem here. For example, whether you're using a helper
package or your manual server configuration in LSP.sublime-settings. When using
a manual server configuration please include it here if you believe it's applicable.
</issue>
<code>
[start of plugin/locationpicker.py]
1 from .core.logging import debug
2 from .core.protocol import DocumentUri, Location, Position
3 from .core.protocol import LocationLink
4 from .core.sessions import Session
5 from .core.typing import Union, List, Optional, Tuple
6 from .core.views import get_uri_and_position_from_location
7 from .core.views import location_to_human_readable
8 from .core.views import to_encoded_filename
9 import functools
10 import sublime
11 import weakref
12
13
14 def open_location_async(
15 session: Session,
16 location: Union[Location, LocationLink],
17 side_by_side: bool,
18 force_group: bool
19 ) -> None:
20 flags = sublime.ENCODED_POSITION
21 if force_group:
22 flags |= sublime.FORCE_GROUP
23 if side_by_side:
24 flags |= sublime.ADD_TO_SELECTION | sublime.SEMI_TRANSIENT
25
26 def check_success_async(view: Optional[sublime.View]) -> None:
27 if not view:
28 sublime.error_message("Unable to open URI")
29
30 session.open_location_async(location, flags).then(check_success_async)
31
32
33 def open_basic_file(
34 session: Session,
35 uri: str,
36 position: Position,
37 flags: int = 0,
38 group: Optional[int] = None
39 ) -> sublime.View:
40 filename = session.config.map_server_uri_to_client_path(uri)
41 if group is None:
42 group = session.window.active_group()
43 return session.window.open_file(to_encoded_filename(filename, position), flags=flags, group=group)
44
45
46 class LocationPicker:
47
48 def __init__(
49 self,
50 view: sublime.View,
51 session: Session,
52 locations: Union[List[Location], List[LocationLink]],
53 side_by_side: bool
54 ) -> None:
55 self._view = view
56 window = view.window()
57 if not window:
58 raise ValueError("missing window")
59 self._window = window
60 self._weaksession = weakref.ref(session)
61 self._side_by_side = side_by_side
62 self._items = locations
63 self._highlighted_view = None # type: Optional[sublime.View]
64 manager = session.manager()
65 base_dir = manager.get_project_path(view.file_name() or "") if manager else None
66 self._window.show_quick_panel(
67 items=[location_to_human_readable(session.config, base_dir, location) for location in locations],
68 on_select=self._select_entry,
69 on_highlight=self._highlight_entry,
70 flags=sublime.KEEP_OPEN_ON_FOCUS_LOST
71 )
72
73 def _unpack(self, index: int) -> Tuple[Optional[Session], Union[Location, LocationLink], DocumentUri, Position]:
74 location = self._items[index]
75 uri, position = get_uri_and_position_from_location(location)
76 return self._weaksession(), location, uri, position
77
78 def _select_entry(self, index: int) -> None:
79 if index >= 0 and self._view.is_valid():
80 session, location, uri, position = self._unpack(index)
81 if not session:
82 return
83 # Note: this has to run on the main thread (and not via open_location_async)
84 # otherwise the bevior feels weird. It's the only reason why open_basic_file exists.
85 if uri.startswith("file:"):
86 flags = sublime.ENCODED_POSITION
87 if not self._side_by_side:
88 open_basic_file(session, uri, position, flags)
89 else:
90 sublime.set_timeout_async(
91 functools.partial(open_location_async, session, location, self._side_by_side, True))
92 else:
93 self._window.focus_view(self._view)
94 # When in side-by-side mode close the current highlighted
95 # sheet upon canceling if the sheet is semi-transient
96 if self._side_by_side and self._highlighted_view:
97 sheet = self._highlighted_view.sheet()
98 if sheet and sheet.is_semi_transient():
99 self._highlighted_view.close()
100
101 def _highlight_entry(self, index: int) -> None:
102 session, _, uri, position = self._unpack(index)
103 if not session:
104 return
105 if uri.startswith("file:"):
106 flags = sublime.ENCODED_POSITION | sublime.FORCE_GROUP
107 if self._side_by_side:
108 if self._highlighted_view and self._highlighted_view.is_valid():
109 # Replacing the MRU is done relative to the current highlighted sheet
110 self._window.focus_view(self._highlighted_view)
111 flags |= sublime.REPLACE_MRU | sublime.SEMI_TRANSIENT
112 else:
113 flags |= sublime.ADD_TO_SELECTION | sublime.SEMI_TRANSIENT
114 else:
115 flags |= sublime.TRANSIENT
116 self._highlighted_view = open_basic_file(session, uri, position, flags, self._window.active_group())
117 else:
118 # TODO: Preview non-file uris?
119 debug("no preview for", uri)
120
[end of plugin/locationpicker.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugin/locationpicker.py b/plugin/locationpicker.py
--- a/plugin/locationpicker.py
+++ b/plugin/locationpicker.py
@@ -53,6 +53,7 @@
side_by_side: bool
) -> None:
self._view = view
+ self._view_states = ([r.to_tuple() for r in view.sel()], view.viewport_position())
window = view.window()
if not window:
raise ValueError("missing window")
@@ -76,6 +77,9 @@
return self._weaksession(), location, uri, position
def _select_entry(self, index: int) -> None:
+ if self._view.is_valid() and not self._side_by_side:
+ self._view.set_viewport_position(self._view_states[1])
+ self._view.run_command('lsp_selection_set', {'regions': self._view_states[0]})
if index >= 0 and self._view.is_valid():
session, location, uri, position = self._unpack(index)
if not session:
| {"golden_diff": "diff --git a/plugin/locationpicker.py b/plugin/locationpicker.py\n--- a/plugin/locationpicker.py\n+++ b/plugin/locationpicker.py\n@@ -53,6 +53,7 @@\n side_by_side: bool\n ) -> None:\n self._view = view\n+ self._view_states = ([r.to_tuple() for r in view.sel()], view.viewport_position())\n window = view.window()\n if not window:\n raise ValueError(\"missing window\")\n@@ -76,6 +77,9 @@\n return self._weaksession(), location, uri, position\n \n def _select_entry(self, index: int) -> None:\n+ if self._view.is_valid() and not self._side_by_side:\n+ self._view.set_viewport_position(self._view_states[1])\n+ self._view.run_command('lsp_selection_set', {'regions': self._view_states[0]})\n if index >= 0 and self._view.is_valid():\n session, location, uri, position = self._unpack(index)\n if not session:\n", "issue": "goto commands don't restore selection when location picking is canceled\n**Describe the bug**\r\nwhen there's more than one location available for a `goto*` command, a quick panel is shown to pick.\r\nhighlighting entries modifies the selection, canceling the operation doesn't restore the initial selection.\r\n\r\n**Expected behavior**\r\nit should restore the selection, like ST's built-in\r\n\r\n**Screenshots**\r\n\r\n\r\n**Environment (please complete the following information):**\r\n- OS: Windows 10\r\n- Sublime Text version: 4126\r\n- LSP version: 1.16.3\r\n- Language servers used: LSP-rust-analyzer\r\n\r\n**Additional context**\r\nAdd any other context about the problem here. For example, whether you're using a helper\r\npackage or your manual server configuration in LSP.sublime-settings. When using\r\na manual server configuration please include it here if you believe it's applicable.\r\n\n", "before_files": [{"content": "from .core.logging import debug\nfrom .core.protocol import DocumentUri, Location, Position\nfrom .core.protocol import LocationLink\nfrom .core.sessions import Session\nfrom .core.typing import Union, List, Optional, Tuple\nfrom .core.views import get_uri_and_position_from_location\nfrom .core.views import location_to_human_readable\nfrom .core.views import to_encoded_filename\nimport functools\nimport sublime\nimport weakref\n\n\ndef open_location_async(\n session: Session,\n location: Union[Location, LocationLink],\n side_by_side: bool,\n force_group: bool\n) -> None:\n flags = sublime.ENCODED_POSITION\n if force_group:\n flags |= sublime.FORCE_GROUP\n if side_by_side:\n flags |= sublime.ADD_TO_SELECTION | sublime.SEMI_TRANSIENT\n\n def check_success_async(view: Optional[sublime.View]) -> None:\n if not view:\n sublime.error_message(\"Unable to open URI\")\n\n session.open_location_async(location, flags).then(check_success_async)\n\n\ndef open_basic_file(\n session: Session,\n uri: str,\n position: Position,\n flags: int = 0,\n group: Optional[int] = None\n) -> sublime.View:\n filename = session.config.map_server_uri_to_client_path(uri)\n if group is None:\n group = session.window.active_group()\n return session.window.open_file(to_encoded_filename(filename, position), flags=flags, group=group)\n\n\nclass LocationPicker:\n\n def __init__(\n self,\n view: sublime.View,\n session: Session,\n locations: Union[List[Location], List[LocationLink]],\n side_by_side: bool\n ) -> None:\n self._view = view\n window = view.window()\n if not window:\n raise ValueError(\"missing window\")\n self._window = window\n self._weaksession = weakref.ref(session)\n self._side_by_side = side_by_side\n self._items = locations\n self._highlighted_view = None # type: 
Optional[sublime.View]\n manager = session.manager()\n base_dir = manager.get_project_path(view.file_name() or \"\") if manager else None\n self._window.show_quick_panel(\n items=[location_to_human_readable(session.config, base_dir, location) for location in locations],\n on_select=self._select_entry,\n on_highlight=self._highlight_entry,\n flags=sublime.KEEP_OPEN_ON_FOCUS_LOST\n )\n\n def _unpack(self, index: int) -> Tuple[Optional[Session], Union[Location, LocationLink], DocumentUri, Position]:\n location = self._items[index]\n uri, position = get_uri_and_position_from_location(location)\n return self._weaksession(), location, uri, position\n\n def _select_entry(self, index: int) -> None:\n if index >= 0 and self._view.is_valid():\n session, location, uri, position = self._unpack(index)\n if not session:\n return\n # Note: this has to run on the main thread (and not via open_location_async)\n # otherwise the bevior feels weird. It's the only reason why open_basic_file exists.\n if uri.startswith(\"file:\"):\n flags = sublime.ENCODED_POSITION\n if not self._side_by_side:\n open_basic_file(session, uri, position, flags)\n else:\n sublime.set_timeout_async(\n functools.partial(open_location_async, session, location, self._side_by_side, True))\n else:\n self._window.focus_view(self._view)\n # When in side-by-side mode close the current highlighted\n # sheet upon canceling if the sheet is semi-transient\n if self._side_by_side and self._highlighted_view:\n sheet = self._highlighted_view.sheet()\n if sheet and sheet.is_semi_transient():\n self._highlighted_view.close()\n\n def _highlight_entry(self, index: int) -> None:\n session, _, uri, position = self._unpack(index)\n if not session:\n return\n if uri.startswith(\"file:\"):\n flags = sublime.ENCODED_POSITION | sublime.FORCE_GROUP\n if self._side_by_side:\n if self._highlighted_view and self._highlighted_view.is_valid():\n # Replacing the MRU is done relative to the current highlighted sheet\n self._window.focus_view(self._highlighted_view)\n flags |= sublime.REPLACE_MRU | sublime.SEMI_TRANSIENT\n else:\n flags |= sublime.ADD_TO_SELECTION | sublime.SEMI_TRANSIENT\n else:\n flags |= sublime.TRANSIENT\n self._highlighted_view = open_basic_file(session, uri, position, flags, self._window.active_group())\n else:\n # TODO: Preview non-file uris?\n debug(\"no preview for\", uri)\n", "path": "plugin/locationpicker.py"}]} | 2,066 | 226 |
gh_patches_debug_40931 | rasdani/github-patches | git_diff | psychopy__psychopy-465 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Invalid literal for int() when assigning variable to "N Vertices"
I'm trying to tell psychopy to display a polygon with a random number of vertices on each presentation by using randint. I keep running into this error:
`invalid literal for int() with base 10: 'randint(3, 4, 1)'`
This occurs before the script compiles, so it doesn't produce any output / traceback other than that.
I've also tried this using $randint(3, 4, 1) but that doesn't work as well. In addition, my friend had this problem when specifying the number of angles via an excel file.
When I just insert a number to the field, it works fine, so it seems like python is trying to interpret randint(3, 4, 1) literally, sees that it isn't an integer, and throws an error.
Variable assignment:

Error after clicking run:

</issue>
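For illustration only (an editor's note, not part of the original report), the quoted error can be reproduced in plain Python: the Builder field stores the literal string the user typed, and `int()` cannot parse it.

```python
# Minimal reproduction of the reported error, independent of PsychoPy:
# the field value arrives as a plain string, not an evaluated expression.
value = "randint(3, 4, 1)"
int(value)  # raises ValueError: invalid literal for int() with base 10: 'randint(3, 4, 1)'
```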
<code>
[start of psychopy/app/builder/components/polygon.py]
1 # Part of the PsychoPy library
2 # Copyright (C) 2013 Jonathan Peirce
3 # Distributed under the terms of the GNU General Public License (GPL).
4
5 from _visual import * #to get the template visual component
6 from os import path
7 from psychopy.app.builder.components import getInitVals
8
9 thisFolder = path.abspath(path.dirname(__file__))#the absolute path to the folder containing this path
10 iconFile = path.join(thisFolder,'polygon.png')
11 tooltip = 'Polygon: any regular polygon (line, triangle, square...circle)'
12
13 class PolygonComponent(VisualComponent):
14 """A class for presenting grating stimuli"""
15 def __init__(self, exp, parentName, name='polygon', interpolate='linear',
16 units='from exp settings',
17 lineColor='$[1,1,1]', lineColorSpace='rgb', lineWidth=1,
18 fillColor='$[1,1,1]', fillColorSpace='rgb',
19 nVertices = 4,
20 pos=[0,0], size=[0.5,0.5], ori=0,
21 startType='time (s)', startVal=0.0,
22 stopType='duration (s)', stopVal=1.0,
23 startEstim='', durationEstim=''):
24 #initialise main parameters from base stimulus
25 VisualComponent.__init__(self,exp,parentName,name=name, units=units,
26 pos=pos, size=size, ori=ori,
27 startType=startType, startVal=startVal,
28 stopType=stopType, stopVal=stopVal,
29 startEstim=startEstim, durationEstim=durationEstim)
30 self.type='Polygon'
31 self.url="http://www.psychopy.org/builder/components/shape.html"
32 self.exp.requirePsychopyLibs(['visual'])
33 self.order=['nVertices']
34 #params
35 self.params['nVertices']=Param(nVertices, valType='code',
36 updates='constant', allowedUpdates=['constant','set every repeat'],
37 hint="How many vertices? 2=line, 3=triangle... (90 approximates a circle)",
38 label="N Vertices")
39 self.params['fillColorSpace']=Param(fillColorSpace, valType='str', allowedVals=['rgb','dkl','lms'],
40 updates='constant',
41 hint="Choice of color space for the fill color (rgb, dkl, lms)",
42 label="Fill color space")
43 self.params['fillColor']=Param(fillColor, valType='str', allowedTypes=[],
44 updates='constant', allowedUpdates=['constant','set every repeat','set every frame'],
45 hint="Fill color of this shape; Right-click to bring up a color-picker (rgb only)",
46 label="Fill color")
47 self.params['lineColorSpace']=Param(lineColorSpace, valType='str', allowedVals=['rgb','dkl','lms'],
48 updates='constant',
49 hint="Choice of color space for the fill color (rgb, dkl, lms)",
50 label="Line color space")
51 self.params['lineColor']=Param(lineColor, valType='str', allowedTypes=[],
52 updates='constant', allowedUpdates=['constant','set every repeat','set every frame'],
53 hint="Line color of this shape; Right-click to bring up a color-picker (rgb only)",
54 label="Line color")
55 self.params['lineWidth']=Param(lineWidth, valType='code', allowedTypes=[],
56 updates='constant', allowedUpdates=['constant','set every repeat','set every frame'],
57 hint="Width of the shape's line (always in pixels - this does NOT use 'units')",
58 label="Line width")
59 self.params['interpolate']=Param(interpolate, valType='str', allowedVals=['linear','nearest'],
60 updates='constant', allowedUpdates=[],
61 hint="How should the image be interpolated if/when rescaled",
62 label="Interpolate")
63 self.params['size']=Param(size, valType='code', allowedTypes=[],
64 updates='constant', allowedUpdates=['constant','set every repeat','set every frame'],
65 hint="Size of this stimulus [w,h]. Note that for a line only the first value is used, for triangle and rect the [w,h] is as expected, but for higher-order polygons it represents the [w,h] of the ellipse that the polygon sits on!! ",
66 label="Size [w,h]")
67 del self.params['color']
68 del self.params['colorSpace']
69
70 def writeInitCode(self,buff):
71 #do we need units code?
72 if self.params['units'].val=='from exp settings': unitsStr=""
73 else: unitsStr="units=%(units)s, " %self.params
74 inits = getInitVals(self.params)#replaces variable params with defaults
75 if int(self.params['nVertices'].val) == 2:
76 buff.writeIndented("%s = visual.Line(win=win, name='%s',%s\n" %(inits['name'],inits['name'],unitsStr))
77 buff.writeIndented(" start=(-%(size)s[0]/2.0, 0), end=(+%(size)s[0]/2.0, 0),\n" %(inits) )
78 elif int(self.params['nVertices'].val) == 3:
79 buff.writeIndented("%s = visual.ShapeStim(win=win, name='%s',%s\n" %(inits['name'],inits['name'],unitsStr))
80 buff.writeIndented(" vertices = [[-%(size)s[0]/2.0,-%(size)s[1]/2.0], [+%(size)s[0]/2.0,-%(size)s[1]/2.0], [0,%(size)s[1]/2.0]],\n" %(inits) )
81 elif int(self.params['nVertices'].val) == 4:
82 buff.writeIndented("%s = visual.Rect(win=win, name='%s',%s\n" %(inits['name'],inits['name'],unitsStr))
83 buff.writeIndented(" width=%(size)s[0], height=%(size)s[1],\n" %(inits) )
84 else:
85 buff.writeIndented("%s = visual.Polygon(win=win, name='%s',%s\n" %(inits['name'],inits['name'],unitsStr))
86 buff.writeIndented(" edges = %(nVertices)s, size=%(size)s,\n" %(inits) )
87 buff.writeIndented(" ori=%(ori)s, pos=%(pos)s,\n" %(inits) )
88 buff.writeIndented(" lineWidth=%(lineWidth)s, lineColor=%(lineColor)s, lineColorSpace=%(lineColorSpace)s,\n" %(inits) )
89 buff.writeIndented(" fillColor=%(fillColor)s, fillColorSpace=%(fillColorSpace)s,\n" %(inits) )
90 buff.writeIndented(" opacity=%(opacity)s," %(inits) )
91 if self.params['interpolate'].val=='linear':
92 buff.write("interpolate=True)\n")
93 else: buff.write("interpolate=False)\n")
94
[end of psychopy/app/builder/components/polygon.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/psychopy/app/builder/components/polygon.py b/psychopy/app/builder/components/polygon.py
--- a/psychopy/app/builder/components/polygon.py
+++ b/psychopy/app/builder/components/polygon.py
@@ -32,8 +32,8 @@
self.exp.requirePsychopyLibs(['visual'])
self.order=['nVertices']
#params
- self.params['nVertices']=Param(nVertices, valType='code',
- updates='constant', allowedUpdates=['constant','set every repeat'],
+ self.params['nVertices']=Param(nVertices, valType='int',
+ updates='constant', allowedUpdates=['constant'],
hint="How many vertices? 2=line, 3=triangle... (90 approximates a circle)",
label="N Vertices")
self.params['fillColorSpace']=Param(fillColorSpace, valType='str', allowedVals=['rgb','dkl','lms'],
@@ -72,18 +72,19 @@
if self.params['units'].val=='from exp settings': unitsStr=""
else: unitsStr="units=%(units)s, " %self.params
inits = getInitVals(self.params)#replaces variable params with defaults
- if int(self.params['nVertices'].val) == 2:
+ if self.params['nVertices'].val == '2':
buff.writeIndented("%s = visual.Line(win=win, name='%s',%s\n" %(inits['name'],inits['name'],unitsStr))
buff.writeIndented(" start=(-%(size)s[0]/2.0, 0), end=(+%(size)s[0]/2.0, 0),\n" %(inits) )
- elif int(self.params['nVertices'].val) == 3:
+ elif self.params['nVertices'].val == '3':
buff.writeIndented("%s = visual.ShapeStim(win=win, name='%s',%s\n" %(inits['name'],inits['name'],unitsStr))
buff.writeIndented(" vertices = [[-%(size)s[0]/2.0,-%(size)s[1]/2.0], [+%(size)s[0]/2.0,-%(size)s[1]/2.0], [0,%(size)s[1]/2.0]],\n" %(inits) )
- elif int(self.params['nVertices'].val) == 4:
+ elif self.params['nVertices'].val == '4':
buff.writeIndented("%s = visual.Rect(win=win, name='%s',%s\n" %(inits['name'],inits['name'],unitsStr))
buff.writeIndented(" width=%(size)s[0], height=%(size)s[1],\n" %(inits) )
else:
buff.writeIndented("%s = visual.Polygon(win=win, name='%s',%s\n" %(inits['name'],inits['name'],unitsStr))
- buff.writeIndented(" edges = %(nVertices)s, size=%(size)s,\n" %(inits) )
+ buff.writeIndented(" edges = %s," % str(inits['nVertices'].val))
+ buff.writeIndented(" size=%(size)s,\n" %(inits) )
buff.writeIndented(" ori=%(ori)s, pos=%(pos)s,\n" %(inits) )
buff.writeIndented(" lineWidth=%(lineWidth)s, lineColor=%(lineColor)s, lineColorSpace=%(lineColorSpace)s,\n" %(inits) )
buff.writeIndented(" fillColor=%(fillColor)s, fillColorSpace=%(fillColorSpace)s,\n" %(inits) )
| {"golden_diff": "diff --git a/psychopy/app/builder/components/polygon.py b/psychopy/app/builder/components/polygon.py\n--- a/psychopy/app/builder/components/polygon.py\n+++ b/psychopy/app/builder/components/polygon.py\n@@ -32,8 +32,8 @@\n self.exp.requirePsychopyLibs(['visual'])\n self.order=['nVertices']\n #params\n- self.params['nVertices']=Param(nVertices, valType='code',\n- updates='constant', allowedUpdates=['constant','set every repeat'],\n+ self.params['nVertices']=Param(nVertices, valType='int',\n+ updates='constant', allowedUpdates=['constant'],\n hint=\"How many vertices? 2=line, 3=triangle... (90 approximates a circle)\",\n label=\"N Vertices\")\n self.params['fillColorSpace']=Param(fillColorSpace, valType='str', allowedVals=['rgb','dkl','lms'],\n@@ -72,18 +72,19 @@\n if self.params['units'].val=='from exp settings': unitsStr=\"\"\n else: unitsStr=\"units=%(units)s, \" %self.params\n inits = getInitVals(self.params)#replaces variable params with defaults\n- if int(self.params['nVertices'].val) == 2:\n+ if self.params['nVertices'].val == '2':\n buff.writeIndented(\"%s = visual.Line(win=win, name='%s',%s\\n\" %(inits['name'],inits['name'],unitsStr))\n buff.writeIndented(\" start=(-%(size)s[0]/2.0, 0), end=(+%(size)s[0]/2.0, 0),\\n\" %(inits) )\n- elif int(self.params['nVertices'].val) == 3:\n+ elif self.params['nVertices'].val == '3':\n buff.writeIndented(\"%s = visual.ShapeStim(win=win, name='%s',%s\\n\" %(inits['name'],inits['name'],unitsStr))\n buff.writeIndented(\" vertices = [[-%(size)s[0]/2.0,-%(size)s[1]/2.0], [+%(size)s[0]/2.0,-%(size)s[1]/2.0], [0,%(size)s[1]/2.0]],\\n\" %(inits) )\n- elif int(self.params['nVertices'].val) == 4:\n+ elif self.params['nVertices'].val == '4':\n buff.writeIndented(\"%s = visual.Rect(win=win, name='%s',%s\\n\" %(inits['name'],inits['name'],unitsStr))\n buff.writeIndented(\" width=%(size)s[0], height=%(size)s[1],\\n\" %(inits) )\n else:\n buff.writeIndented(\"%s = visual.Polygon(win=win, name='%s',%s\\n\" %(inits['name'],inits['name'],unitsStr))\n- buff.writeIndented(\" edges = %(nVertices)s, size=%(size)s,\\n\" %(inits) )\n+ buff.writeIndented(\" edges = %s,\" % str(inits['nVertices'].val))\n+ buff.writeIndented(\" size=%(size)s,\\n\" %(inits) )\n buff.writeIndented(\" ori=%(ori)s, pos=%(pos)s,\\n\" %(inits) )\n buff.writeIndented(\" lineWidth=%(lineWidth)s, lineColor=%(lineColor)s, lineColorSpace=%(lineColorSpace)s,\\n\" %(inits) )\n buff.writeIndented(\" fillColor=%(fillColor)s, fillColorSpace=%(fillColorSpace)s,\\n\" %(inits) )\n", "issue": "Invalid literal for int() when assigning variable to \"N Vertices\"\nI'm trying to tell psychopy to display a polygon with a random number of vertices on each presentation by using randint. I keep running into this error:\n\n`invalid literal for int() with base 10: 'randint(3, 4, 1)'`\n\nThis occurs before the script compiles, so it doesn't produce any output / traceback other than that. \n\nI've also tried this using $randint(3, 4, 1) but that doesn't work as well. 
In addition, my friend had this problem when specifying the number of angles via an excel file.\n\nWhen I just insert a number to the field, it works fine, so it seems like python is trying to interpret randint(3, 4, 1) literally, sees that it isn't an integer, and throws an error.\n\nVariable assignment:\n\n\nError after clicking run:\n\n\n", "before_files": [{"content": "# Part of the PsychoPy library\n# Copyright (C) 2013 Jonathan Peirce\n# Distributed under the terms of the GNU General Public License (GPL).\n\nfrom _visual import * #to get the template visual component\nfrom os import path\nfrom psychopy.app.builder.components import getInitVals\n\nthisFolder = path.abspath(path.dirname(__file__))#the absolute path to the folder containing this path\niconFile = path.join(thisFolder,'polygon.png')\ntooltip = 'Polygon: any regular polygon (line, triangle, square...circle)'\n\nclass PolygonComponent(VisualComponent):\n \"\"\"A class for presenting grating stimuli\"\"\"\n def __init__(self, exp, parentName, name='polygon', interpolate='linear',\n units='from exp settings',\n lineColor='$[1,1,1]', lineColorSpace='rgb', lineWidth=1,\n fillColor='$[1,1,1]', fillColorSpace='rgb',\n nVertices = 4,\n pos=[0,0], size=[0.5,0.5], ori=0,\n startType='time (s)', startVal=0.0,\n stopType='duration (s)', stopVal=1.0,\n startEstim='', durationEstim=''):\n #initialise main parameters from base stimulus\n VisualComponent.__init__(self,exp,parentName,name=name, units=units,\n pos=pos, size=size, ori=ori,\n startType=startType, startVal=startVal,\n stopType=stopType, stopVal=stopVal,\n startEstim=startEstim, durationEstim=durationEstim)\n self.type='Polygon'\n self.url=\"http://www.psychopy.org/builder/components/shape.html\"\n self.exp.requirePsychopyLibs(['visual'])\n self.order=['nVertices']\n #params\n self.params['nVertices']=Param(nVertices, valType='code',\n updates='constant', allowedUpdates=['constant','set every repeat'],\n hint=\"How many vertices? 2=line, 3=triangle... 
(90 approximates a circle)\",\n label=\"N Vertices\")\n self.params['fillColorSpace']=Param(fillColorSpace, valType='str', allowedVals=['rgb','dkl','lms'],\n updates='constant',\n hint=\"Choice of color space for the fill color (rgb, dkl, lms)\",\n label=\"Fill color space\")\n self.params['fillColor']=Param(fillColor, valType='str', allowedTypes=[],\n updates='constant', allowedUpdates=['constant','set every repeat','set every frame'],\n hint=\"Fill color of this shape; Right-click to bring up a color-picker (rgb only)\",\n label=\"Fill color\")\n self.params['lineColorSpace']=Param(lineColorSpace, valType='str', allowedVals=['rgb','dkl','lms'],\n updates='constant',\n hint=\"Choice of color space for the fill color (rgb, dkl, lms)\",\n label=\"Line color space\")\n self.params['lineColor']=Param(lineColor, valType='str', allowedTypes=[],\n updates='constant', allowedUpdates=['constant','set every repeat','set every frame'],\n hint=\"Line color of this shape; Right-click to bring up a color-picker (rgb only)\",\n label=\"Line color\")\n self.params['lineWidth']=Param(lineWidth, valType='code', allowedTypes=[],\n updates='constant', allowedUpdates=['constant','set every repeat','set every frame'],\n hint=\"Width of the shape's line (always in pixels - this does NOT use 'units')\",\n label=\"Line width\")\n self.params['interpolate']=Param(interpolate, valType='str', allowedVals=['linear','nearest'],\n updates='constant', allowedUpdates=[],\n hint=\"How should the image be interpolated if/when rescaled\",\n label=\"Interpolate\")\n self.params['size']=Param(size, valType='code', allowedTypes=[],\n updates='constant', allowedUpdates=['constant','set every repeat','set every frame'],\n hint=\"Size of this stimulus [w,h]. Note that for a line only the first value is used, for triangle and rect the [w,h] is as expected, but for higher-order polygons it represents the [w,h] of the ellipse that the polygon sits on!! 
\",\n label=\"Size [w,h]\")\n del self.params['color']\n del self.params['colorSpace']\n\n def writeInitCode(self,buff):\n #do we need units code?\n if self.params['units'].val=='from exp settings': unitsStr=\"\"\n else: unitsStr=\"units=%(units)s, \" %self.params\n inits = getInitVals(self.params)#replaces variable params with defaults\n if int(self.params['nVertices'].val) == 2:\n buff.writeIndented(\"%s = visual.Line(win=win, name='%s',%s\\n\" %(inits['name'],inits['name'],unitsStr))\n buff.writeIndented(\" start=(-%(size)s[0]/2.0, 0), end=(+%(size)s[0]/2.0, 0),\\n\" %(inits) )\n elif int(self.params['nVertices'].val) == 3:\n buff.writeIndented(\"%s = visual.ShapeStim(win=win, name='%s',%s\\n\" %(inits['name'],inits['name'],unitsStr))\n buff.writeIndented(\" vertices = [[-%(size)s[0]/2.0,-%(size)s[1]/2.0], [+%(size)s[0]/2.0,-%(size)s[1]/2.0], [0,%(size)s[1]/2.0]],\\n\" %(inits) )\n elif int(self.params['nVertices'].val) == 4:\n buff.writeIndented(\"%s = visual.Rect(win=win, name='%s',%s\\n\" %(inits['name'],inits['name'],unitsStr))\n buff.writeIndented(\" width=%(size)s[0], height=%(size)s[1],\\n\" %(inits) )\n else:\n buff.writeIndented(\"%s = visual.Polygon(win=win, name='%s',%s\\n\" %(inits['name'],inits['name'],unitsStr))\n buff.writeIndented(\" edges = %(nVertices)s, size=%(size)s,\\n\" %(inits) )\n buff.writeIndented(\" ori=%(ori)s, pos=%(pos)s,\\n\" %(inits) )\n buff.writeIndented(\" lineWidth=%(lineWidth)s, lineColor=%(lineColor)s, lineColorSpace=%(lineColorSpace)s,\\n\" %(inits) )\n buff.writeIndented(\" fillColor=%(fillColor)s, fillColorSpace=%(fillColorSpace)s,\\n\" %(inits) )\n buff.writeIndented(\" opacity=%(opacity)s,\" %(inits) )\n if self.params['interpolate'].val=='linear':\n buff.write(\"interpolate=True)\\n\")\n else: buff.write(\"interpolate=False)\\n\")\n", "path": "psychopy/app/builder/components/polygon.py"}]} | 2,554 | 813 |
gh_patches_debug_17636 | rasdani/github-patches | git_diff | svthalia__concrexit-3528 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Albums pagination doesn't maintain search terms
### Describe the bug
If you open https://thalia.nu/members/photos/?keywords=borrel#photos-albums, then go to the second page using the pagination buttons, the search term is dropped.
### Expected behaviour
<!-- A clear and concise description of what you expected to happen. -->
The search term remains
### Additional context
<!-- Add any other context about the problem here. -->
Could be since we introduced the shared paginated view template? So it's quite likely this occurs for other paginated filterable/searchable views as well.
</issue>
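For illustration only (a generic Django pattern, not the project's actual view or template code), pagination links keep a search term when the existing query string is re-encoded without the `page` parameter and used as the base of every page link:

```python
# Sketch of the usual Django approach: copy request.GET, drop "page",
# and build a base URL that each pagination link can append "page=N" to.
def pagination_base_url(request):
    params = request.GET.copy()
    params.pop("page", None)
    if params:
        return f"{request.path}?{params.urlencode()}&"
    return f"{request.path}?"
```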
<code>
[start of website/thaliawebsite/views.py]
1 """General views for the website."""
2
3 from django.contrib.admin.views.decorators import staff_member_required
4 from django.contrib.auth.views import LoginView, PasswordResetView
5 from django.core.exceptions import PermissionDenied
6 from django.http import HttpResponse, HttpResponseForbidden
7 from django.shortcuts import redirect
8 from django.utils.decorators import method_decorator
9 from django.views.generic import ListView, TemplateView
10 from django.views.generic.base import View
11
12 from django_ratelimit.decorators import ratelimit
13
14
15 class IndexView(TemplateView):
16 template_name = "index.html"
17
18
19 @method_decorator(staff_member_required, "dispatch")
20 class TestCrashView(View):
21 """Test view to intentionally crash to test the error handling."""
22
23 def dispatch(self, request, *args, **kwargs) -> HttpResponse:
24 if not request.user.is_superuser:
25 return HttpResponseForbidden("This is not for you")
26 raise Exception("Test exception")
27
28
29 class PagedView(ListView):
30 """A ListView with automatic pagination."""
31
32 def get_context_data(self, **kwargs) -> dict:
33 context = super().get_context_data(**kwargs)
34 page = context["page_obj"].number
35 paginator = context["paginator"]
36
37 # Show the two pages before and after the current page
38 page_range_start = max(1, page - 2)
39 page_range_stop = min(page + 3, paginator.num_pages + 1)
40
41 # Add extra pages if we show less than 5 pages
42 page_range_start = min(page_range_start, page_range_stop - 5)
43 page_range_start = max(1, page_range_start)
44
45 # Add extra pages if we still show less than 5 pages
46 page_range_stop = max(page_range_stop, page_range_start + 5)
47 page_range_stop = min(page_range_stop, paginator.num_pages + 1)
48
49 page_range = range(page_range_start, page_range_stop)
50
51 context.update(
52 {
53 "page_range": page_range,
54 }
55 )
56
57 return context
58
59
60 class RateLimitedPasswordResetView(PasswordResetView):
61 @method_decorator(ratelimit(key="ip", rate="5/h"))
62 def post(self, request, *args, **kwargs):
63 return super().post(request, *args, **kwargs)
64
65
66 class RateLimitedLoginView(LoginView):
67 @method_decorator(ratelimit(key="ip", rate="30/h"))
68 @method_decorator(ratelimit(key="post:username", rate="30/h"))
69 def post(self, request, *args, **kwargs):
70 return super().post(request, *args, **kwargs)
71
72
73 def rate_limited_view(request, *args, **kwargs):
74 return HttpResponse("You are rate limited", status=429)
75
76
77 def admin_unauthorized_view(request):
78 if not request.member:
79 url = "/user/login"
80 args = request.META.get("QUERY_STRING", "")
81 if args:
82 url = f"{url}?{args}"
83 return redirect(url)
84 elif not request.member.is_staff and not request.member.is_superuser:
85 raise PermissionDenied("You are not allowed to access the administration page.")
86 else:
87 return redirect(request.GET.get("next", "/"))
88
[end of website/thaliawebsite/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/thaliawebsite/views.py b/website/thaliawebsite/views.py
--- a/website/thaliawebsite/views.py
+++ b/website/thaliawebsite/views.py
@@ -31,6 +31,7 @@
def get_context_data(self, **kwargs) -> dict:
context = super().get_context_data(**kwargs)
+ print(kwargs)
page = context["page_obj"].number
paginator = context["paginator"]
@@ -48,9 +49,17 @@
page_range = range(page_range_start, page_range_stop)
+ querydict = self.request.GET.copy()
+
+ if "page" in querydict:
+ del querydict["page"]
+
context.update(
{
"page_range": page_range,
+ "base_url": f"{self.request.path}?{querydict.urlencode()}&"
+ if querydict
+ else f"{self.request.path}?",
}
)
| {"golden_diff": "diff --git a/website/thaliawebsite/views.py b/website/thaliawebsite/views.py\n--- a/website/thaliawebsite/views.py\n+++ b/website/thaliawebsite/views.py\n@@ -31,6 +31,7 @@\n \n def get_context_data(self, **kwargs) -> dict:\n context = super().get_context_data(**kwargs)\n+ print(kwargs)\n page = context[\"page_obj\"].number\n paginator = context[\"paginator\"]\n \n@@ -48,9 +49,17 @@\n \n page_range = range(page_range_start, page_range_stop)\n \n+ querydict = self.request.GET.copy()\n+\n+ if \"page\" in querydict:\n+ del querydict[\"page\"]\n+\n context.update(\n {\n \"page_range\": page_range,\n+ \"base_url\": f\"{self.request.path}?{querydict.urlencode()}&\"\n+ if querydict\n+ else f\"{self.request.path}?\",\n }\n )\n", "issue": "Albums pagination doesn't maintain search terms\n### Describe the bug\r\nIf you open https://thalia.nu/members/photos/?keywords=borrel#photos-albums, then go to the second page using the pagination buttons, the search term is dropped.\r\n\r\n### Expected behaviour\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nThe search term remains\r\n\r\n### Additional context\r\n<!-- Add any other context about the problem here. -->\r\nCould be since we introduced the shared paginated view template? So it's quite likely this occurs for other paginated filterable/searchable views as well.\n", "before_files": [{"content": "\"\"\"General views for the website.\"\"\"\n\nfrom django.contrib.admin.views.decorators import staff_member_required\nfrom django.contrib.auth.views import LoginView, PasswordResetView\nfrom django.core.exceptions import PermissionDenied\nfrom django.http import HttpResponse, HttpResponseForbidden\nfrom django.shortcuts import redirect\nfrom django.utils.decorators import method_decorator\nfrom django.views.generic import ListView, TemplateView\nfrom django.views.generic.base import View\n\nfrom django_ratelimit.decorators import ratelimit\n\n\nclass IndexView(TemplateView):\n template_name = \"index.html\"\n\n\n@method_decorator(staff_member_required, \"dispatch\")\nclass TestCrashView(View):\n \"\"\"Test view to intentionally crash to test the error handling.\"\"\"\n\n def dispatch(self, request, *args, **kwargs) -> HttpResponse:\n if not request.user.is_superuser:\n return HttpResponseForbidden(\"This is not for you\")\n raise Exception(\"Test exception\")\n\n\nclass PagedView(ListView):\n \"\"\"A ListView with automatic pagination.\"\"\"\n\n def get_context_data(self, **kwargs) -> dict:\n context = super().get_context_data(**kwargs)\n page = context[\"page_obj\"].number\n paginator = context[\"paginator\"]\n\n # Show the two pages before and after the current page\n page_range_start = max(1, page - 2)\n page_range_stop = min(page + 3, paginator.num_pages + 1)\n\n # Add extra pages if we show less than 5 pages\n page_range_start = min(page_range_start, page_range_stop - 5)\n page_range_start = max(1, page_range_start)\n\n # Add extra pages if we still show less than 5 pages\n page_range_stop = max(page_range_stop, page_range_start + 5)\n page_range_stop = min(page_range_stop, paginator.num_pages + 1)\n\n page_range = range(page_range_start, page_range_stop)\n\n context.update(\n {\n \"page_range\": page_range,\n }\n )\n\n return context\n\n\nclass RateLimitedPasswordResetView(PasswordResetView):\n @method_decorator(ratelimit(key=\"ip\", rate=\"5/h\"))\n def post(self, request, *args, **kwargs):\n return super().post(request, *args, **kwargs)\n\n\nclass RateLimitedLoginView(LoginView):\n 
@method_decorator(ratelimit(key=\"ip\", rate=\"30/h\"))\n @method_decorator(ratelimit(key=\"post:username\", rate=\"30/h\"))\n def post(self, request, *args, **kwargs):\n return super().post(request, *args, **kwargs)\n\n\ndef rate_limited_view(request, *args, **kwargs):\n return HttpResponse(\"You are rate limited\", status=429)\n\n\ndef admin_unauthorized_view(request):\n if not request.member:\n url = \"/user/login\"\n args = request.META.get(\"QUERY_STRING\", \"\")\n if args:\n url = f\"{url}?{args}\"\n return redirect(url)\n elif not request.member.is_staff and not request.member.is_superuser:\n raise PermissionDenied(\"You are not allowed to access the administration page.\")\n else:\n return redirect(request.GET.get(\"next\", \"/\"))\n", "path": "website/thaliawebsite/views.py"}]} | 1,506 | 216 |
gh_patches_debug_23410 | rasdani/github-patches | git_diff | OCA__bank-payment-630 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[12.0][BUG] account_payment_sale
Hi
I have found a bug in module account_payment_sale, but I am not sure how to fix it nicely.
The payment_mode_id does not propagate from the sale order to the invoice.
I guess the tests are a bit to naive, that is why they pass anyway.
Here we try to propagate the payment mode : https://github.com/OCA/bank-payment/blob/12.0/account_payment_sale/models/sale_order.py#L35
Here, the invoice is created with the right value (coming from the SO) : https://github.com/OCA/OCB/blob/12.0/addons/sale/models/sale.py#L521
And it is overriden here https://github.com/OCA/OCB/blob/12.0/addons/sale/models/sale.py#L570
I really don't get why they have refactored it this way, they create the invoice and then they override a lot of values...
And I do not really see a clean solution to solve this.
Any idea?
</issue>
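One possible direction (an editor's sketch against the standard Odoo ORM, not necessarily the fix the module ends up adopting) is to remember the payment values set from the sale order and re-apply them after the base sale module has run its partner onchange on the freshly created invoice:

```python
# Hypothetical helper: restore payment fields that the sale module's
# invoice post-processing may have overwritten via the partner onchange.
def restore_payment_mode(invoice, wanted_mode_id, wanted_bank_id):
    vals = {}
    if invoice.payment_mode_id.id != wanted_mode_id:
        vals["payment_mode_id"] = wanted_mode_id
    if invoice.partner_bank_id.id != wanted_bank_id:
        vals["partner_bank_id"] = wanted_bank_id
    if vals:
        invoice.write(vals)
```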
<code>
[start of account_payment_sale/models/sale_order.py]
1 # Copyright 2014-2016 Akretion - Alexis de Lattre
2 # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
3
4 from odoo import models, fields, api
5
6
7 class SaleOrder(models.Model):
8 _inherit = "sale.order"
9
10 payment_mode_id = fields.Many2one(
11 'account.payment.mode', string='Payment Mode',
12 domain=[('payment_type', '=', 'inbound')])
13
14 def _get_payment_mode_vals(self, vals):
15 if self.payment_mode_id:
16 vals['payment_mode_id'] = self.payment_mode_id.id
17 if self.payment_mode_id.bank_account_link == 'fixed':
18 vals['partner_bank_id'] =\
19 self.payment_mode_id.fixed_journal_id.bank_account_id.id
20 return vals
21
22 @api.onchange('partner_id')
23 def onchange_partner_id(self):
24 res = super().onchange_partner_id()
25 if self.partner_id:
26 self.payment_mode_id = self.partner_id.customer_payment_mode_id
27 else:
28 self.payment_mode_id = False
29 return res
30
31 @api.multi
32 def _prepare_invoice(self):
33 """Copy bank partner from sale order to invoice"""
34 vals = super()._prepare_invoice()
35 return self._get_payment_mode_vals(vals)
36
[end of account_payment_sale/models/sale_order.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/account_payment_sale/models/sale_order.py b/account_payment_sale/models/sale_order.py
--- a/account_payment_sale/models/sale_order.py
+++ b/account_payment_sale/models/sale_order.py
@@ -33,3 +33,31 @@
"""Copy bank partner from sale order to invoice"""
vals = super()._prepare_invoice()
return self._get_payment_mode_vals(vals)
+
+ def _finalize_invoices(self, invoices, references):
+ """
+ Invoked after creating invoices at the end of action_invoice_create.
+
+ We must override this method since the onchange on partner is called by
+ the base method and therefore will change the specific payment_mode set
+ on the SO if one is defined on the partner..
+
+ :param invoices: {group_key: invoice}
+ :param references: {invoice: order}
+ """
+ payment_vals_by_invoice = {}
+ for invoice in invoices.values():
+ payment_vals_by_invoice[invoice] = {
+ 'payment_mode_id': invoice.payment_mode_id.id,
+ 'partner_bank_id': invoice.partner_bank_id.id
+ }
+ res = super()._finalize_invoices(invoices, references)
+ for invoice in invoices.values():
+ payment_vals = payment_vals_by_invoice[invoice]
+ if invoice.payment_mode_id.id == payment_vals['payment_mode_id']:
+ payment_vals.pop("payment_mode_id")
+ if invoice.partner_bank_id.id == payment_vals["partner_bank_id"]:
+ payment_vals.pop("partner_bank_id")
+ if payment_vals:
+ invoice.write(payment_vals)
+ return res
| {"golden_diff": "diff --git a/account_payment_sale/models/sale_order.py b/account_payment_sale/models/sale_order.py\n--- a/account_payment_sale/models/sale_order.py\n+++ b/account_payment_sale/models/sale_order.py\n@@ -33,3 +33,31 @@\n \"\"\"Copy bank partner from sale order to invoice\"\"\"\n vals = super()._prepare_invoice()\n return self._get_payment_mode_vals(vals)\n+\n+ def _finalize_invoices(self, invoices, references):\n+ \"\"\"\n+ Invoked after creating invoices at the end of action_invoice_create.\n+\n+ We must override this method since the onchange on partner is called by\n+ the base method and therefore will change the specific payment_mode set\n+ on the SO if one is defined on the partner..\n+\n+ :param invoices: {group_key: invoice}\n+ :param references: {invoice: order}\n+ \"\"\"\n+ payment_vals_by_invoice = {}\n+ for invoice in invoices.values():\n+ payment_vals_by_invoice[invoice] = {\n+ 'payment_mode_id': invoice.payment_mode_id.id,\n+ 'partner_bank_id': invoice.partner_bank_id.id\n+ }\n+ res = super()._finalize_invoices(invoices, references)\n+ for invoice in invoices.values():\n+ payment_vals = payment_vals_by_invoice[invoice]\n+ if invoice.payment_mode_id.id == payment_vals['payment_mode_id']:\n+ payment_vals.pop(\"payment_mode_id\")\n+ if invoice.partner_bank_id.id == payment_vals[\"partner_bank_id\"]:\n+ payment_vals.pop(\"partner_bank_id\")\n+ if payment_vals:\n+ invoice.write(payment_vals)\n+ return res\n", "issue": "[12.0][BUG] account_payment_sale\nHi\r\nI have found a bug in module account_payment_sale, but I am not sure how to fix it nicely.\r\nThe payment_mode_id does not propagate from the sale order to the invoice. \r\nI guess the tests are a bit to naive, that is why they pass anyway.\r\nHere we try to propagate the payment mode : https://github.com/OCA/bank-payment/blob/12.0/account_payment_sale/models/sale_order.py#L35\r\nHere, the invoice is created with the right value (coming from the SO) : https://github.com/OCA/OCB/blob/12.0/addons/sale/models/sale.py#L521\r\nAnd it is overriden here https://github.com/OCA/OCB/blob/12.0/addons/sale/models/sale.py#L570\r\n\r\nI really don't get why they have refactored it this way, they create the invoice and then they override a lot of values...\r\nAnd I do not really see a clean solution to solve this.\r\nAny idea?\n", "before_files": [{"content": "# Copyright 2014-2016 Akretion - Alexis de Lattre\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).\n\nfrom odoo import models, fields, api\n\n\nclass SaleOrder(models.Model):\n _inherit = \"sale.order\"\n\n payment_mode_id = fields.Many2one(\n 'account.payment.mode', string='Payment Mode',\n domain=[('payment_type', '=', 'inbound')])\n\n def _get_payment_mode_vals(self, vals):\n if self.payment_mode_id:\n vals['payment_mode_id'] = self.payment_mode_id.id\n if self.payment_mode_id.bank_account_link == 'fixed':\n vals['partner_bank_id'] =\\\n self.payment_mode_id.fixed_journal_id.bank_account_id.id\n return vals\n\n @api.onchange('partner_id')\n def onchange_partner_id(self):\n res = super().onchange_partner_id()\n if self.partner_id:\n self.payment_mode_id = self.partner_id.customer_payment_mode_id\n else:\n self.payment_mode_id = False\n return res\n\n @api.multi\n def _prepare_invoice(self):\n \"\"\"Copy bank partner from sale order to invoice\"\"\"\n vals = super()._prepare_invoice()\n return self._get_payment_mode_vals(vals)\n", "path": "account_payment_sale/models/sale_order.py"}]} | 1,115 | 350 |
gh_patches_debug_15077 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-1748 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
botbuilder-core library is missing the botframework-streaming dependency
## Version
4.14.0.20210616.dev252366
## Describe the bug
The botbuilder-core library is missing the botframework-streaming dependency.
When running a python bot with the botbuilder-core library installed, it won't run because it is missing the botframework-streaming dependency.
The dependency reference is missing from the requirements.txt file, and this new library is not published in any of the regular packages indexes ([test.pypi](https://test.pypi.org/), [pypi](https://pypi.org/) and [azure artifacts](https://dev.azure.com/ConversationalAI/BotFramework/_packaging?_a=feed&feed=SDK%40Local)), so it can't be installed manually.
When running the bots locally it is possible to install the dependency from a local folder with the code cloned from the repo.
## To Reproduce
1. Open a bot that uses the botbuilder-core library.
2. Install a preview version (4.14.x).
3. Run the bot.
## Expected behavior
The dependencies being installed should install all the required sub-dependencies or have them available for manual installation.
## Screenshots

## Additional context
This issue is blocking the pipelines from the [BotFramework-FunctionalTests](https://github.com/microsoft/BotFramework-FunctionalTests/) repository from testing preview versions of the BotBuilder Python libraries.
</issue>
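For illustration (an editor's note, not part of the original report), the missing dependency is easy to confirm in a fresh virtual environment that only installs the preview `botbuilder-core` build, since nothing pulls in the streaming distribution:

```python
# Quick sanity check: with only botbuilder-core and its declared
# dependencies installed, the streaming package is not importable.
try:
    import botframework.streaming  # noqa: F401
    print("botframework-streaming is available")
except ImportError as exc:
    print(f"missing dependency: {exc}")
```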
<code>
[start of libraries/botframework-streaming/setup.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 import os
5 from setuptools import setup
6
7 VERSION = os.environ["packageVersion"] if "packageVersion" in os.environ else "4.12.0"
8 REQUIRES = [
9 "botbuilder-schema>=4.12.0",
10 "botframework-connector>=4.12.0",
11 "botbuilder-core>=4.12.0",
12 ]
13
14 root = os.path.abspath(os.path.dirname(__file__))
15
16 with open(os.path.join(root, "botframework", "streaming", "about.py")) as f:
17 package_info = {}
18 info = f.read()
19 exec(info, package_info)
20
21 with open(os.path.join(root, "README.rst"), encoding="utf-8") as f:
22 long_description = f.read()
23
24 setup(
25 name=package_info["__title__"],
26 version=package_info["__version__"],
27 url=package_info["__uri__"],
28 author=package_info["__author__"],
29 description=package_info["__description__"],
30 keywords=["BotFrameworkStreaming", "bots", "ai", "botframework", "botframework",],
31 long_description=long_description,
32 long_description_content_type="text/x-rst",
33 license=package_info["__license__"],
34 packages=[
35 "botframework.streaming",
36 "botframework.streaming.payloads",
37 "botframework.streaming.payloads.models",
38 "botframework.streaming.payload_transport",
39 "botframework.streaming.transport",
40 "botframework.streaming.transport.web_socket",
41 ],
42 install_requires=REQUIRES,
43 classifiers=[
44 "Programming Language :: Python :: 3.7",
45 "Intended Audience :: Developers",
46 "License :: OSI Approved :: MIT License",
47 "Operating System :: OS Independent",
48 "Development Status :: 5 - Production/Stable",
49 "Topic :: Scientific/Engineering :: Artificial Intelligence",
50 ],
51 )
52
[end of libraries/botframework-streaming/setup.py]
[start of libraries/botbuilder-core/setup.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 import os
5 from setuptools import setup
6
7 VERSION = os.environ["packageVersion"] if "packageVersion" in os.environ else "4.14.0"
8 REQUIRES = [
9 "botbuilder-schema==4.14.0",
10 "botframework-connector==4.14.0",
11 "jsonpickle>=1.2,<1.5",
12 ]
13
14 root = os.path.abspath(os.path.dirname(__file__))
15
16 with open(os.path.join(root, "botbuilder", "core", "about.py")) as f:
17 package_info = {}
18 info = f.read()
19 exec(info, package_info)
20
21 with open(os.path.join(root, "README.rst"), encoding="utf-8") as f:
22 long_description = f.read()
23
24 setup(
25 name=package_info["__title__"],
26 version=package_info["__version__"],
27 url=package_info["__uri__"],
28 author=package_info["__author__"],
29 description=package_info["__description__"],
30 keywords=["BotBuilderCore", "bots", "ai", "botframework", "botbuilder"],
31 long_description=long_description,
32 long_description_content_type="text/x-rst",
33 license=package_info["__license__"],
34 packages=[
35 "botbuilder.core",
36 "botbuilder.core.adapters",
37 "botbuilder.core.inspection",
38 "botbuilder.core.integration",
39 "botbuilder.core.skills",
40 "botbuilder.core.streaming",
41 "botbuilder.core.teams",
42 "botbuilder.core.oauth",
43 ],
44 install_requires=REQUIRES,
45 classifiers=[
46 "Programming Language :: Python :: 3.7",
47 "Intended Audience :: Developers",
48 "License :: OSI Approved :: MIT License",
49 "Operating System :: OS Independent",
50 "Development Status :: 5 - Production/Stable",
51 "Topic :: Scientific/Engineering :: Artificial Intelligence",
52 ],
53 )
54
[end of libraries/botbuilder-core/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libraries/botbuilder-core/setup.py b/libraries/botbuilder-core/setup.py
--- a/libraries/botbuilder-core/setup.py
+++ b/libraries/botbuilder-core/setup.py
@@ -8,6 +8,7 @@
REQUIRES = [
"botbuilder-schema==4.14.0",
"botframework-connector==4.14.0",
+ "botframework-streaming==4.14.0",
"jsonpickle>=1.2,<1.5",
]
diff --git a/libraries/botframework-streaming/setup.py b/libraries/botframework-streaming/setup.py
--- a/libraries/botframework-streaming/setup.py
+++ b/libraries/botframework-streaming/setup.py
@@ -4,11 +4,10 @@
import os
from setuptools import setup
-VERSION = os.environ["packageVersion"] if "packageVersion" in os.environ else "4.12.0"
+VERSION = os.environ["packageVersion"] if "packageVersion" in os.environ else "4.14.0"
REQUIRES = [
"botbuilder-schema>=4.12.0",
"botframework-connector>=4.12.0",
- "botbuilder-core>=4.12.0",
]
root = os.path.abspath(os.path.dirname(__file__))
| {"golden_diff": "diff --git a/libraries/botbuilder-core/setup.py b/libraries/botbuilder-core/setup.py\n--- a/libraries/botbuilder-core/setup.py\n+++ b/libraries/botbuilder-core/setup.py\n@@ -8,6 +8,7 @@\n REQUIRES = [\n \"botbuilder-schema==4.14.0\",\n \"botframework-connector==4.14.0\",\n+ \"botframework-streaming==4.14.0\",\n \"jsonpickle>=1.2,<1.5\",\n ]\n \ndiff --git a/libraries/botframework-streaming/setup.py b/libraries/botframework-streaming/setup.py\n--- a/libraries/botframework-streaming/setup.py\n+++ b/libraries/botframework-streaming/setup.py\n@@ -4,11 +4,10 @@\n import os\n from setuptools import setup\n \n-VERSION = os.environ[\"packageVersion\"] if \"packageVersion\" in os.environ else \"4.12.0\"\n+VERSION = os.environ[\"packageVersion\"] if \"packageVersion\" in os.environ else \"4.14.0\"\n REQUIRES = [\n \"botbuilder-schema>=4.12.0\",\n \"botframework-connector>=4.12.0\",\n- \"botbuilder-core>=4.12.0\",\n ]\n \n root = os.path.abspath(os.path.dirname(__file__))\n", "issue": "botbuilder-core library is missing the botframework-streaming dependency\n## Version\r\n4.14.0.20210616.dev252366\r\n\r\n## Describe the bug\r\nThe botbuilder-core library is missing the botframework-streaming dependency.\r\nWhen running a python bot with the botbuilder-core library installed, it won't run because it is missing the botframework-streaming dependency.\r\nThe dependency reference is missing from the requirements.txt file, and this new library is not published in any of the regular packages indexes ([test.pypi](https://test.pypi.org/), [pypi](https://pypi.org/) and [azure artifacts](https://dev.azure.com/ConversationalAI/BotFramework/_packaging?_a=feed&feed=SDK%40Local)), so it can't be installed manually.\r\nWhen running the bots locally it is possible to install the dependency from a local folder with the code cloned from the repo.\r\n\r\n## To Reproduce\r\n1. Open a bot that uses the botbuilder-core library.\r\n2. Install a preview version (4.14.x).\r\n3. Run the bot.\r\n\r\n## Expected behavior\r\nThe dependencies being installed should install all the required sub-dependencies or have them available for manual installation.\r\n\r\n## Screenshots\r\n\r\n\r\n## Additional context\r\nThis issue is blocking the pipelines from the [BotFramework-FunctionalTests](https://github.com/microsoft/BotFramework-FunctionalTests/) repository from testing preview versions of the BotBuilder Python libraries.\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\n\nimport os\nfrom setuptools import setup\n\nVERSION = os.environ[\"packageVersion\"] if \"packageVersion\" in os.environ else \"4.12.0\"\nREQUIRES = [\n \"botbuilder-schema>=4.12.0\",\n \"botframework-connector>=4.12.0\",\n \"botbuilder-core>=4.12.0\",\n]\n\nroot = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(root, \"botframework\", \"streaming\", \"about.py\")) as f:\n package_info = {}\n info = f.read()\n exec(info, package_info)\n\nwith open(os.path.join(root, \"README.rst\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nsetup(\n name=package_info[\"__title__\"],\n version=package_info[\"__version__\"],\n url=package_info[\"__uri__\"],\n author=package_info[\"__author__\"],\n description=package_info[\"__description__\"],\n keywords=[\"BotFrameworkStreaming\", \"bots\", \"ai\", \"botframework\", \"botframework\",],\n long_description=long_description,\n long_description_content_type=\"text/x-rst\",\n license=package_info[\"__license__\"],\n packages=[\n \"botframework.streaming\",\n \"botframework.streaming.payloads\",\n \"botframework.streaming.payloads.models\",\n \"botframework.streaming.payload_transport\",\n \"botframework.streaming.transport\",\n \"botframework.streaming.transport.web_socket\",\n ],\n install_requires=REQUIRES,\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "libraries/botframework-streaming/setup.py"}, {"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nimport os\nfrom setuptools import setup\n\nVERSION = os.environ[\"packageVersion\"] if \"packageVersion\" in os.environ else \"4.14.0\"\nREQUIRES = [\n \"botbuilder-schema==4.14.0\",\n \"botframework-connector==4.14.0\",\n \"jsonpickle>=1.2,<1.5\",\n]\n\nroot = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(root, \"botbuilder\", \"core\", \"about.py\")) as f:\n package_info = {}\n info = f.read()\n exec(info, package_info)\n\nwith open(os.path.join(root, \"README.rst\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nsetup(\n name=package_info[\"__title__\"],\n version=package_info[\"__version__\"],\n url=package_info[\"__uri__\"],\n author=package_info[\"__author__\"],\n description=package_info[\"__description__\"],\n keywords=[\"BotBuilderCore\", \"bots\", \"ai\", \"botframework\", \"botbuilder\"],\n long_description=long_description,\n long_description_content_type=\"text/x-rst\",\n license=package_info[\"__license__\"],\n packages=[\n \"botbuilder.core\",\n \"botbuilder.core.adapters\",\n \"botbuilder.core.inspection\",\n \"botbuilder.core.integration\",\n \"botbuilder.core.skills\",\n \"botbuilder.core.streaming\",\n \"botbuilder.core.teams\",\n \"botbuilder.core.oauth\",\n ],\n install_requires=REQUIRES,\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "libraries/botbuilder-core/setup.py"}]} | 1,956 | 293 |
gh_patches_debug_30090 | rasdani/github-patches | git_diff | Textualize__textual-4299 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`Placeholder` has no `disabled` `__init__` keyword parameter
It is intended that `disabled` is one of the "standard" keyword parameters for widgets in Textual; this seems to have never been added to `Placeholder`.
</issue>
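As a quick illustration (editor's sketch, not from the original report), the gap shows up as soon as the keyword is passed the way it is for other widgets:

```python
# Before the fix, passing the otherwise-standard keyword fails at construction
# with TypeError: unexpected keyword argument 'disabled'.
from textual.widgets import Placeholder

Placeholder("demo", id="demo", disabled=True)
```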
<code>
[start of src/textual/widgets/_placeholder.py]
1 """Provides a Textual placeholder widget; useful when designing an app's layout."""
2
3 from __future__ import annotations
4
5 from itertools import cycle
6 from typing import TYPE_CHECKING, Iterator
7 from weakref import WeakKeyDictionary
8
9 from typing_extensions import Literal, Self
10
11 from .. import events
12
13 if TYPE_CHECKING:
14 from ..app import RenderResult
15
16 from ..css._error_tools import friendly_list
17 from ..reactive import Reactive, reactive
18 from ..widget import Widget
19
20 if TYPE_CHECKING:
21 from textual.app import App
22
23 PlaceholderVariant = Literal["default", "size", "text"]
24 """The different variants of placeholder."""
25
26 _VALID_PLACEHOLDER_VARIANTS_ORDERED: list[PlaceholderVariant] = [
27 "default",
28 "size",
29 "text",
30 ]
31 _VALID_PLACEHOLDER_VARIANTS: set[PlaceholderVariant] = set(
32 _VALID_PLACEHOLDER_VARIANTS_ORDERED
33 )
34 _PLACEHOLDER_BACKGROUND_COLORS = [
35 "#881177",
36 "#aa3355",
37 "#cc6666",
38 "#ee9944",
39 "#eedd00",
40 "#99dd55",
41 "#44dd88",
42 "#22ccbb",
43 "#00bbcc",
44 "#0099cc",
45 "#3366bb",
46 "#663399",
47 ]
48 _LOREM_IPSUM_PLACEHOLDER_TEXT = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Etiam feugiat ac elit sit amet accumsan. Suspendisse bibendum nec libero quis gravida. Phasellus id eleifend ligula. Nullam imperdiet sem tellus, sed vehicula nisl faucibus sit amet. Praesent iaculis tempor ultricies. Sed lacinia, tellus id rutrum lacinia, sapien sapien congue mauris, sit amet pellentesque quam quam vel nisl. Curabitur vulputate erat pellentesque mauris posuere, non dictum risus mattis."
49
50
51 class InvalidPlaceholderVariant(Exception):
52 """Raised when an invalid Placeholder variant is set."""
53
54
55 class Placeholder(Widget):
56 """A simple placeholder widget to use before you build your custom widgets.
57
58 This placeholder has a couple of variants that show different data.
59 Clicking the placeholder cycles through the available variants, but a placeholder
60 can also be initialised in a specific variant.
61
62 The variants available are:
63
64 | Variant | Placeholder shows |
65 |---------|------------------------------------------------|
66 | default | Identifier label or the ID of the placeholder. |
67 | size | Size of the placeholder. |
68 | text | Lorem Ipsum text. |
69 """
70
71 DEFAULT_CSS = """
72 Placeholder {
73 content-align: center middle;
74 overflow: hidden;
75 color: $text;
76 }
77 Placeholder.-text {
78 padding: 1;
79 }
80 """
81
82 # Consecutive placeholders get assigned consecutive colors.
83 _COLORS: WeakKeyDictionary[App, Iterator[str]] = WeakKeyDictionary()
84 _SIZE_RENDER_TEMPLATE = "[b]{} x {}[/b]"
85
86 variant: Reactive[PlaceholderVariant] = reactive[PlaceholderVariant]("default")
87
88 _renderables: dict[PlaceholderVariant, str]
89
90 def __init__(
91 self,
92 label: str | None = None,
93 variant: PlaceholderVariant = "default",
94 *,
95 name: str | None = None,
96 id: str | None = None,
97 classes: str | None = None,
98 ) -> None:
99 """Create a Placeholder widget.
100
101 Args:
102 label: The label to identify the placeholder.
103 If no label is present, uses the placeholder ID instead.
104 variant: The variant of the placeholder.
105 name: The name of the placeholder.
106 id: The ID of the placeholder in the DOM.
107 classes: A space separated string with the CSS classes
108 of the placeholder, if any.
109 """
110 # Create and cache renderables for all the variants.
111 self._renderables = {
112 "default": label if label else f"#{id}" if id else "Placeholder",
113 "size": "",
114 "text": "\n\n".join(_LOREM_IPSUM_PLACEHOLDER_TEXT for _ in range(5)),
115 }
116
117 super().__init__(name=name, id=id, classes=classes)
118
119 self.variant = self.validate_variant(variant)
120 """The current variant of the placeholder."""
121
122 # Set a cycle through the variants with the correct starting point.
123 self._variants_cycle = cycle(_VALID_PLACEHOLDER_VARIANTS_ORDERED)
124 while next(self._variants_cycle) != self.variant:
125 pass
126
127 async def _on_compose(self, event: events.Compose) -> None:
128 """Set the color for this placeholder."""
129 colors = Placeholder._COLORS.setdefault(
130 self.app, cycle(_PLACEHOLDER_BACKGROUND_COLORS)
131 )
132 self.styles.background = f"{next(colors)} 50%"
133
134 def render(self) -> RenderResult:
135 """Render the placeholder.
136
137 Returns:
138 The value to render.
139 """
140 return self._renderables[self.variant]
141
142 def cycle_variant(self) -> Self:
143 """Get the next variant in the cycle.
144
145 Returns:
146 The `Placeholder` instance.
147 """
148 self.variant = next(self._variants_cycle)
149 return self
150
151 def watch_variant(
152 self, old_variant: PlaceholderVariant, variant: PlaceholderVariant
153 ) -> None:
154 self.remove_class(f"-{old_variant}")
155 self.add_class(f"-{variant}")
156
157 def validate_variant(self, variant: PlaceholderVariant) -> PlaceholderVariant:
158 """Validate the variant to which the placeholder was set."""
159 if variant not in _VALID_PLACEHOLDER_VARIANTS:
160 raise InvalidPlaceholderVariant(
161 "Valid placeholder variants are "
162 + f"{friendly_list(_VALID_PLACEHOLDER_VARIANTS)}"
163 )
164 return variant
165
166 async def _on_click(self, _: events.Click) -> None:
167 """Click handler to cycle through the placeholder variants."""
168 self.cycle_variant()
169
170 def _on_resize(self, event: events.Resize) -> None:
171 """Update the placeholder "size" variant with the new placeholder size."""
172 self._renderables["size"] = self._SIZE_RENDER_TEMPLATE.format(*event.size)
173 if self.variant == "size":
174 self.refresh()
175
[end of src/textual/widgets/_placeholder.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/textual/widgets/_placeholder.py b/src/textual/widgets/_placeholder.py
--- a/src/textual/widgets/_placeholder.py
+++ b/src/textual/widgets/_placeholder.py
@@ -73,6 +73,10 @@
content-align: center middle;
overflow: hidden;
color: $text;
+
+ &:disabled {
+ opacity: 0.7;
+ }
}
Placeholder.-text {
padding: 1;
@@ -95,6 +99,7 @@
name: str | None = None,
id: str | None = None,
classes: str | None = None,
+ disabled: bool = False,
) -> None:
"""Create a Placeholder widget.
@@ -106,6 +111,7 @@
id: The ID of the placeholder in the DOM.
classes: A space separated string with the CSS classes
of the placeholder, if any.
+ disabled: Whether the placeholder is disabled or not.
"""
# Create and cache renderables for all the variants.
self._renderables = {
@@ -114,7 +120,7 @@
"text": "\n\n".join(_LOREM_IPSUM_PLACEHOLDER_TEXT for _ in range(5)),
}
- super().__init__(name=name, id=id, classes=classes)
+ super().__init__(name=name, id=id, classes=classes, disabled=disabled)
self.variant = self.validate_variant(variant)
"""The current variant of the placeholder."""
| {"golden_diff": "diff --git a/src/textual/widgets/_placeholder.py b/src/textual/widgets/_placeholder.py\n--- a/src/textual/widgets/_placeholder.py\n+++ b/src/textual/widgets/_placeholder.py\n@@ -73,6 +73,10 @@\n content-align: center middle;\n overflow: hidden;\n color: $text;\n+\n+ &:disabled {\n+ opacity: 0.7;\n+ }\n }\n Placeholder.-text {\n padding: 1;\n@@ -95,6 +99,7 @@\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n+ disabled: bool = False,\n ) -> None:\n \"\"\"Create a Placeholder widget.\n \n@@ -106,6 +111,7 @@\n id: The ID of the placeholder in the DOM.\n classes: A space separated string with the CSS classes\n of the placeholder, if any.\n+ disabled: Whether the placeholder is disabled or not.\n \"\"\"\n # Create and cache renderables for all the variants.\n self._renderables = {\n@@ -114,7 +120,7 @@\n \"text\": \"\\n\\n\".join(_LOREM_IPSUM_PLACEHOLDER_TEXT for _ in range(5)),\n }\n \n- super().__init__(name=name, id=id, classes=classes)\n+ super().__init__(name=name, id=id, classes=classes, disabled=disabled)\n \n self.variant = self.validate_variant(variant)\n \"\"\"The current variant of the placeholder.\"\"\"\n", "issue": "`Placeholder` has no `disabled` `__init__` keyword parameter\nIt is intended that `disabled` is one of the \"standard\" keyword parameters for widgets in Textual; this seems to have never been added to `Placeholder`.\n", "before_files": [{"content": "\"\"\"Provides a Textual placeholder widget; useful when designing an app's layout.\"\"\"\n\nfrom __future__ import annotations\n\nfrom itertools import cycle\nfrom typing import TYPE_CHECKING, Iterator\nfrom weakref import WeakKeyDictionary\n\nfrom typing_extensions import Literal, Self\n\nfrom .. import events\n\nif TYPE_CHECKING:\n from ..app import RenderResult\n\nfrom ..css._error_tools import friendly_list\nfrom ..reactive import Reactive, reactive\nfrom ..widget import Widget\n\nif TYPE_CHECKING:\n from textual.app import App\n\nPlaceholderVariant = Literal[\"default\", \"size\", \"text\"]\n\"\"\"The different variants of placeholder.\"\"\"\n\n_VALID_PLACEHOLDER_VARIANTS_ORDERED: list[PlaceholderVariant] = [\n \"default\",\n \"size\",\n \"text\",\n]\n_VALID_PLACEHOLDER_VARIANTS: set[PlaceholderVariant] = set(\n _VALID_PLACEHOLDER_VARIANTS_ORDERED\n)\n_PLACEHOLDER_BACKGROUND_COLORS = [\n \"#881177\",\n \"#aa3355\",\n \"#cc6666\",\n \"#ee9944\",\n \"#eedd00\",\n \"#99dd55\",\n \"#44dd88\",\n \"#22ccbb\",\n \"#00bbcc\",\n \"#0099cc\",\n \"#3366bb\",\n \"#663399\",\n]\n_LOREM_IPSUM_PLACEHOLDER_TEXT = \"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Etiam feugiat ac elit sit amet accumsan. Suspendisse bibendum nec libero quis gravida. Phasellus id eleifend ligula. Nullam imperdiet sem tellus, sed vehicula nisl faucibus sit amet. Praesent iaculis tempor ultricies. Sed lacinia, tellus id rutrum lacinia, sapien sapien congue mauris, sit amet pellentesque quam quam vel nisl. 
Curabitur vulputate erat pellentesque mauris posuere, non dictum risus mattis.\"\n\n\nclass InvalidPlaceholderVariant(Exception):\n \"\"\"Raised when an invalid Placeholder variant is set.\"\"\"\n\n\nclass Placeholder(Widget):\n \"\"\"A simple placeholder widget to use before you build your custom widgets.\n\n This placeholder has a couple of variants that show different data.\n Clicking the placeholder cycles through the available variants, but a placeholder\n can also be initialised in a specific variant.\n\n The variants available are:\n\n | Variant | Placeholder shows |\n |---------|------------------------------------------------|\n | default | Identifier label or the ID of the placeholder. |\n | size | Size of the placeholder. |\n | text | Lorem Ipsum text. |\n \"\"\"\n\n DEFAULT_CSS = \"\"\"\n Placeholder {\n content-align: center middle;\n overflow: hidden;\n color: $text;\n }\n Placeholder.-text {\n padding: 1;\n }\n \"\"\"\n\n # Consecutive placeholders get assigned consecutive colors.\n _COLORS: WeakKeyDictionary[App, Iterator[str]] = WeakKeyDictionary()\n _SIZE_RENDER_TEMPLATE = \"[b]{} x {}[/b]\"\n\n variant: Reactive[PlaceholderVariant] = reactive[PlaceholderVariant](\"default\")\n\n _renderables: dict[PlaceholderVariant, str]\n\n def __init__(\n self,\n label: str | None = None,\n variant: PlaceholderVariant = \"default\",\n *,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n ) -> None:\n \"\"\"Create a Placeholder widget.\n\n Args:\n label: The label to identify the placeholder.\n If no label is present, uses the placeholder ID instead.\n variant: The variant of the placeholder.\n name: The name of the placeholder.\n id: The ID of the placeholder in the DOM.\n classes: A space separated string with the CSS classes\n of the placeholder, if any.\n \"\"\"\n # Create and cache renderables for all the variants.\n self._renderables = {\n \"default\": label if label else f\"#{id}\" if id else \"Placeholder\",\n \"size\": \"\",\n \"text\": \"\\n\\n\".join(_LOREM_IPSUM_PLACEHOLDER_TEXT for _ in range(5)),\n }\n\n super().__init__(name=name, id=id, classes=classes)\n\n self.variant = self.validate_variant(variant)\n \"\"\"The current variant of the placeholder.\"\"\"\n\n # Set a cycle through the variants with the correct starting point.\n self._variants_cycle = cycle(_VALID_PLACEHOLDER_VARIANTS_ORDERED)\n while next(self._variants_cycle) != self.variant:\n pass\n\n async def _on_compose(self, event: events.Compose) -> None:\n \"\"\"Set the color for this placeholder.\"\"\"\n colors = Placeholder._COLORS.setdefault(\n self.app, cycle(_PLACEHOLDER_BACKGROUND_COLORS)\n )\n self.styles.background = f\"{next(colors)} 50%\"\n\n def render(self) -> RenderResult:\n \"\"\"Render the placeholder.\n\n Returns:\n The value to render.\n \"\"\"\n return self._renderables[self.variant]\n\n def cycle_variant(self) -> Self:\n \"\"\"Get the next variant in the cycle.\n\n Returns:\n The `Placeholder` instance.\n \"\"\"\n self.variant = next(self._variants_cycle)\n return self\n\n def watch_variant(\n self, old_variant: PlaceholderVariant, variant: PlaceholderVariant\n ) -> None:\n self.remove_class(f\"-{old_variant}\")\n self.add_class(f\"-{variant}\")\n\n def validate_variant(self, variant: PlaceholderVariant) -> PlaceholderVariant:\n \"\"\"Validate the variant to which the placeholder was set.\"\"\"\n if variant not in _VALID_PLACEHOLDER_VARIANTS:\n raise InvalidPlaceholderVariant(\n \"Valid placeholder variants are \"\n + 
f\"{friendly_list(_VALID_PLACEHOLDER_VARIANTS)}\"\n )\n return variant\n\n async def _on_click(self, _: events.Click) -> None:\n \"\"\"Click handler to cycle through the placeholder variants.\"\"\"\n self.cycle_variant()\n\n def _on_resize(self, event: events.Resize) -> None:\n \"\"\"Update the placeholder \"size\" variant with the new placeholder size.\"\"\"\n self._renderables[\"size\"] = self._SIZE_RENDER_TEMPLATE.format(*event.size)\n if self.variant == \"size\":\n self.refresh()\n", "path": "src/textual/widgets/_placeholder.py"}]} | 2,366 | 338 |
gh_patches_debug_8950 | rasdani/github-patches | git_diff | gratipay__gratipay.com-3047 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot close (delete) my account
It works for a while, but then I see an "Application Error" page from Heroku. Ostensibly the operation takes too long and Heroku kills the request.
</issue>
<code>
[start of gratipay/models/_mixin_team.py]
1 """Teams on Gratipay are plural participants with members.
2 """
3 from collections import OrderedDict
4 from decimal import Decimal
5
6 from aspen.utils import typecheck
7
8
9 class MemberLimitReached(Exception): pass
10
11 class StubParticipantAdded(Exception): pass
12
13 class MixinTeam(object):
14 """This class provides methods for working with a Participant as a Team.
15
16 :param Participant participant: the underlying :py:class:`~gratipay.participant.Participant` object for this team
17
18 """
19
20 # XXX These were all written with the ORM and need to be converted.
21
22 def __init__(self, participant):
23 self.participant = participant
24
25 def show_as_team(self, user):
26 """Return a boolean, whether to show this participant as a team.
27 """
28 if not self.IS_PLURAL:
29 return False
30 if user.ADMIN:
31 return True
32 if not self.get_current_takes():
33 if self == user.participant:
34 return True
35 return False
36 return True
37
38 def add_member(self, member):
39 """Add a member to this team.
40 """
41 assert self.IS_PLURAL
42 if len(self.get_current_takes()) == 149:
43 raise MemberLimitReached
44 if not member.is_claimed:
45 raise StubParticipantAdded
46 self.__set_take_for(member, Decimal('0.01'), self)
47
48 def remove_member(self, member):
49 """Remove a member from this team.
50 """
51 assert self.IS_PLURAL
52 self.__set_take_for(member, Decimal('0.00'), self)
53
54 def remove_all_members(self, cursor=None):
55 (cursor or self.db).run("""
56 INSERT INTO takes (ctime, member, team, amount, recorder) (
57 SELECT ctime, member, %(username)s, 0.00, %(username)s
58 FROM current_takes
59 WHERE team=%(username)s
60 AND amount > 0
61 );
62 """, dict(username=self.username))
63
64 def member_of(self, team):
65 """Given a Participant object, return a boolean.
66 """
67 assert team.IS_PLURAL
68 for take in team.get_current_takes():
69 if take['member'] == self.username:
70 return True
71 return False
72
73 def get_take_last_week_for(self, member):
74 """Get the user's nominal take last week. Used in throttling.
75 """
76 assert self.IS_PLURAL
77 membername = member.username if hasattr(member, 'username') \
78 else member['username']
79 return self.db.one("""
80
81 SELECT amount
82 FROM takes
83 WHERE team=%s AND member=%s
84 AND mtime < (
85 SELECT ts_start
86 FROM paydays
87 WHERE ts_end > ts_start
88 ORDER BY ts_start DESC LIMIT 1
89 )
90 ORDER BY mtime DESC LIMIT 1
91
92 """, (self.username, membername), default=Decimal('0.00'))
93
94 def get_take_for(self, member):
95 """Return a Decimal representation of the take for this member, or 0.
96 """
97 assert self.IS_PLURAL
98 return self.db.one( "SELECT amount FROM current_takes "
99 "WHERE member=%s AND team=%s"
100 , (member.username, self.username)
101 , default=Decimal('0.00')
102 )
103
104 def compute_max_this_week(self, last_week):
105 """2x last week's take, but at least a dollar.
106 """
107 return max(last_week * Decimal('2'), Decimal('1.00'))
108
109 def set_take_for(self, member, take, recorder, cursor=None):
110 """Sets member's take from the team pool.
111 """
112 assert self.IS_PLURAL
113
114 # lazy import to avoid circular import
115 from gratipay.security.user import User
116 from gratipay.models.participant import Participant
117
118 typecheck( member, Participant
119 , take, Decimal
120 , recorder, (Participant, User)
121 )
122
123 last_week = self.get_take_last_week_for(member)
124 max_this_week = self.compute_max_this_week(last_week)
125 if take > max_this_week:
126 take = max_this_week
127
128 self.__set_take_for(member, take, recorder, cursor)
129 return take
130
131 def __set_take_for(self, member, amount, recorder, cursor=None):
132 assert self.IS_PLURAL
133 # XXX Factored out for testing purposes only! :O Use .set_take_for.
134 with self.db.get_cursor(cursor) as cursor:
135 # Lock to avoid race conditions
136 cursor.run("LOCK TABLE takes IN EXCLUSIVE MODE")
137 # Compute the current takes
138 old_takes = self.compute_actual_takes(cursor)
139 # Insert the new take
140 cursor.run("""
141
142 INSERT INTO takes (ctime, member, team, amount, recorder)
143 VALUES ( COALESCE (( SELECT ctime
144 FROM takes
145 WHERE member=%(member)s
146 AND team=%(team)s
147 LIMIT 1
148 ), CURRENT_TIMESTAMP)
149 , %(member)s
150 , %(team)s
151 , %(amount)s
152 , %(recorder)s
153 )
154
155 """, dict(member=member.username, team=self.username, amount=amount,
156 recorder=recorder.username))
157 # Compute the new takes
158 new_takes = self.compute_actual_takes(cursor)
159 # Update receiving amounts in the participants table
160 self.update_taking(old_takes, new_takes, cursor, member)
161 # Update is_funded on member's tips
162 member.update_giving(cursor)
163
164 def update_taking(self, old_takes, new_takes, cursor=None, member=None):
165 """Update `taking` amounts based on the difference between `old_takes`
166 and `new_takes`.
167 """
168 for username in set(old_takes.keys()).union(new_takes.keys()):
169 if username == self.username:
170 continue
171 old = old_takes.get(username, {}).get('actual_amount', Decimal(0))
172 new = new_takes.get(username, {}).get('actual_amount', Decimal(0))
173 diff = new - old
174 if diff != 0:
175 r = (self.db or cursor).one("""
176 UPDATE participants
177 SET taking = (taking + %(diff)s)
178 , receiving = (receiving + %(diff)s)
179 WHERE username=%(username)s
180 RETURNING taking, receiving
181 """, dict(username=username, diff=diff))
182 if member and username == member.username:
183 member.set_attributes(**r._asdict())
184
185 def get_current_takes(self, cursor=None):
186 """Return a list of member takes for a team.
187 """
188 assert self.IS_PLURAL
189 TAKES = """
190 SELECT member, amount, ctime, mtime
191 FROM current_takes
192 WHERE team=%(team)s
193 ORDER BY ctime DESC
194 """
195 records = (cursor or self.db).all(TAKES, dict(team=self.username))
196 return [r._asdict() for r in records]
197
198 def get_team_take(self, cursor=None):
199 """Return a single take for a team, the team itself's take.
200 """
201 assert self.IS_PLURAL
202 TAKE = "SELECT sum(amount) FROM current_takes WHERE team=%s"
203 total_take = (cursor or self.db).one(TAKE, (self.username,), default=0)
204 team_take = max(self.receiving - total_take, 0)
205 membership = { "ctime": None
206 , "mtime": None
207 , "member": self.username
208 , "amount": team_take
209 }
210 return membership
211
212 def compute_actual_takes(self, cursor=None):
213 """Get the takes, compute the actual amounts, and return an OrderedDict.
214 """
215 actual_takes = OrderedDict()
216 nominal_takes = self.get_current_takes(cursor=cursor)
217 nominal_takes.append(self.get_team_take(cursor=cursor))
218 budget = balance = self.balance + self.receiving - self.giving
219 for take in nominal_takes:
220 nominal_amount = take['nominal_amount'] = take.pop('amount')
221 actual_amount = take['actual_amount'] = min(nominal_amount, balance)
222 if take['member'] != self.username:
223 balance -= actual_amount
224 take['balance'] = balance
225 take['percentage'] = (actual_amount / budget) if budget > 0 else 0
226 actual_takes[take['member']] = take
227 return actual_takes
228
229 def get_members(self, current_participant):
230 """Return a list of member dicts.
231 """
232 assert self.IS_PLURAL
233 takes = self.compute_actual_takes()
234 members = []
235 for take in takes.values():
236 member = {}
237 member['username'] = take['member']
238 member['take'] = take['nominal_amount']
239 member['balance'] = take['balance']
240 member['percentage'] = take['percentage']
241
242 member['removal_allowed'] = current_participant == self
243 member['editing_allowed'] = False
244 member['is_current_user'] = False
245 if current_participant is not None:
246 if member['username'] == current_participant.username:
247 member['is_current_user'] = True
248 if take['ctime'] is not None:
249 # current user, but not the team itself
250 member['editing_allowed']= True
251
252 member['last_week'] = last_week = self.get_take_last_week_for(member)
253 member['max_this_week'] = self.compute_max_this_week(last_week)
254 members.append(member)
255 return members
256
[end of gratipay/models/_mixin_team.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gratipay/models/_mixin_team.py b/gratipay/models/_mixin_team.py
--- a/gratipay/models/_mixin_team.py
+++ b/gratipay/models/_mixin_team.py
@@ -172,7 +172,7 @@
new = new_takes.get(username, {}).get('actual_amount', Decimal(0))
diff = new - old
if diff != 0:
- r = (self.db or cursor).one("""
+ r = (cursor or self.db).one("""
UPDATE participants
SET taking = (taking + %(diff)s)
, receiving = (receiving + %(diff)s)
| {"golden_diff": "diff --git a/gratipay/models/_mixin_team.py b/gratipay/models/_mixin_team.py\n--- a/gratipay/models/_mixin_team.py\n+++ b/gratipay/models/_mixin_team.py\n@@ -172,7 +172,7 @@\n new = new_takes.get(username, {}).get('actual_amount', Decimal(0))\n diff = new - old\n if diff != 0:\n- r = (self.db or cursor).one(\"\"\"\n+ r = (cursor or self.db).one(\"\"\"\n UPDATE participants\n SET taking = (taking + %(diff)s)\n , receiving = (receiving + %(diff)s)\n", "issue": "Cannot close (delete) my account\nI works for a while but then I see an \"Application Error\" page from Heroku. Ostensibly the operation takes too long and Heroku kills the request.\n\nCannot close (delete) my account\nI works for a while but then I see an \"Application Error\" page from Heroku. Ostensibly the operation takes too long and Heroku kills the request.\n\n", "before_files": [{"content": "\"\"\"Teams on Gratipay are plural participants with members.\n\"\"\"\nfrom collections import OrderedDict\nfrom decimal import Decimal\n\nfrom aspen.utils import typecheck\n\n\nclass MemberLimitReached(Exception): pass\n\nclass StubParticipantAdded(Exception): pass\n\nclass MixinTeam(object):\n \"\"\"This class provides methods for working with a Participant as a Team.\n\n :param Participant participant: the underlying :py:class:`~gratipay.participant.Participant` object for this team\n\n \"\"\"\n\n # XXX These were all written with the ORM and need to be converted.\n\n def __init__(self, participant):\n self.participant = participant\n\n def show_as_team(self, user):\n \"\"\"Return a boolean, whether to show this participant as a team.\n \"\"\"\n if not self.IS_PLURAL:\n return False\n if user.ADMIN:\n return True\n if not self.get_current_takes():\n if self == user.participant:\n return True\n return False\n return True\n\n def add_member(self, member):\n \"\"\"Add a member to this team.\n \"\"\"\n assert self.IS_PLURAL\n if len(self.get_current_takes()) == 149:\n raise MemberLimitReached\n if not member.is_claimed:\n raise StubParticipantAdded\n self.__set_take_for(member, Decimal('0.01'), self)\n\n def remove_member(self, member):\n \"\"\"Remove a member from this team.\n \"\"\"\n assert self.IS_PLURAL\n self.__set_take_for(member, Decimal('0.00'), self)\n\n def remove_all_members(self, cursor=None):\n (cursor or self.db).run(\"\"\"\n INSERT INTO takes (ctime, member, team, amount, recorder) (\n SELECT ctime, member, %(username)s, 0.00, %(username)s\n FROM current_takes\n WHERE team=%(username)s\n AND amount > 0\n );\n \"\"\", dict(username=self.username))\n\n def member_of(self, team):\n \"\"\"Given a Participant object, return a boolean.\n \"\"\"\n assert team.IS_PLURAL\n for take in team.get_current_takes():\n if take['member'] == self.username:\n return True\n return False\n\n def get_take_last_week_for(self, member):\n \"\"\"Get the user's nominal take last week. 
Used in throttling.\n \"\"\"\n assert self.IS_PLURAL\n membername = member.username if hasattr(member, 'username') \\\n else member['username']\n return self.db.one(\"\"\"\n\n SELECT amount\n FROM takes\n WHERE team=%s AND member=%s\n AND mtime < (\n SELECT ts_start\n FROM paydays\n WHERE ts_end > ts_start\n ORDER BY ts_start DESC LIMIT 1\n )\n ORDER BY mtime DESC LIMIT 1\n\n \"\"\", (self.username, membername), default=Decimal('0.00'))\n\n def get_take_for(self, member):\n \"\"\"Return a Decimal representation of the take for this member, or 0.\n \"\"\"\n assert self.IS_PLURAL\n return self.db.one( \"SELECT amount FROM current_takes \"\n \"WHERE member=%s AND team=%s\"\n , (member.username, self.username)\n , default=Decimal('0.00')\n )\n\n def compute_max_this_week(self, last_week):\n \"\"\"2x last week's take, but at least a dollar.\n \"\"\"\n return max(last_week * Decimal('2'), Decimal('1.00'))\n\n def set_take_for(self, member, take, recorder, cursor=None):\n \"\"\"Sets member's take from the team pool.\n \"\"\"\n assert self.IS_PLURAL\n\n # lazy import to avoid circular import\n from gratipay.security.user import User\n from gratipay.models.participant import Participant\n\n typecheck( member, Participant\n , take, Decimal\n , recorder, (Participant, User)\n )\n\n last_week = self.get_take_last_week_for(member)\n max_this_week = self.compute_max_this_week(last_week)\n if take > max_this_week:\n take = max_this_week\n\n self.__set_take_for(member, take, recorder, cursor)\n return take\n\n def __set_take_for(self, member, amount, recorder, cursor=None):\n assert self.IS_PLURAL\n # XXX Factored out for testing purposes only! :O Use .set_take_for.\n with self.db.get_cursor(cursor) as cursor:\n # Lock to avoid race conditions\n cursor.run(\"LOCK TABLE takes IN EXCLUSIVE MODE\")\n # Compute the current takes\n old_takes = self.compute_actual_takes(cursor)\n # Insert the new take\n cursor.run(\"\"\"\n\n INSERT INTO takes (ctime, member, team, amount, recorder)\n VALUES ( COALESCE (( SELECT ctime\n FROM takes\n WHERE member=%(member)s\n AND team=%(team)s\n LIMIT 1\n ), CURRENT_TIMESTAMP)\n , %(member)s\n , %(team)s\n , %(amount)s\n , %(recorder)s\n )\n\n \"\"\", dict(member=member.username, team=self.username, amount=amount,\n recorder=recorder.username))\n # Compute the new takes\n new_takes = self.compute_actual_takes(cursor)\n # Update receiving amounts in the participants table\n self.update_taking(old_takes, new_takes, cursor, member)\n # Update is_funded on member's tips\n member.update_giving(cursor)\n\n def update_taking(self, old_takes, new_takes, cursor=None, member=None):\n \"\"\"Update `taking` amounts based on the difference between `old_takes`\n and `new_takes`.\n \"\"\"\n for username in set(old_takes.keys()).union(new_takes.keys()):\n if username == self.username:\n continue\n old = old_takes.get(username, {}).get('actual_amount', Decimal(0))\n new = new_takes.get(username, {}).get('actual_amount', Decimal(0))\n diff = new - old\n if diff != 0:\n r = (self.db or cursor).one(\"\"\"\n UPDATE participants\n SET taking = (taking + %(diff)s)\n , receiving = (receiving + %(diff)s)\n WHERE username=%(username)s\n RETURNING taking, receiving\n \"\"\", dict(username=username, diff=diff))\n if member and username == member.username:\n member.set_attributes(**r._asdict())\n\n def get_current_takes(self, cursor=None):\n \"\"\"Return a list of member takes for a team.\n \"\"\"\n assert self.IS_PLURAL\n TAKES = \"\"\"\n SELECT member, amount, ctime, mtime\n FROM current_takes\n WHERE 
team=%(team)s\n ORDER BY ctime DESC\n \"\"\"\n records = (cursor or self.db).all(TAKES, dict(team=self.username))\n return [r._asdict() for r in records]\n\n def get_team_take(self, cursor=None):\n \"\"\"Return a single take for a team, the team itself's take.\n \"\"\"\n assert self.IS_PLURAL\n TAKE = \"SELECT sum(amount) FROM current_takes WHERE team=%s\"\n total_take = (cursor or self.db).one(TAKE, (self.username,), default=0)\n team_take = max(self.receiving - total_take, 0)\n membership = { \"ctime\": None\n , \"mtime\": None\n , \"member\": self.username\n , \"amount\": team_take\n }\n return membership\n\n def compute_actual_takes(self, cursor=None):\n \"\"\"Get the takes, compute the actual amounts, and return an OrderedDict.\n \"\"\"\n actual_takes = OrderedDict()\n nominal_takes = self.get_current_takes(cursor=cursor)\n nominal_takes.append(self.get_team_take(cursor=cursor))\n budget = balance = self.balance + self.receiving - self.giving\n for take in nominal_takes:\n nominal_amount = take['nominal_amount'] = take.pop('amount')\n actual_amount = take['actual_amount'] = min(nominal_amount, balance)\n if take['member'] != self.username:\n balance -= actual_amount\n take['balance'] = balance\n take['percentage'] = (actual_amount / budget) if budget > 0 else 0\n actual_takes[take['member']] = take\n return actual_takes\n\n def get_members(self, current_participant):\n \"\"\"Return a list of member dicts.\n \"\"\"\n assert self.IS_PLURAL\n takes = self.compute_actual_takes()\n members = []\n for take in takes.values():\n member = {}\n member['username'] = take['member']\n member['take'] = take['nominal_amount']\n member['balance'] = take['balance']\n member['percentage'] = take['percentage']\n\n member['removal_allowed'] = current_participant == self\n member['editing_allowed'] = False\n member['is_current_user'] = False\n if current_participant is not None:\n if member['username'] == current_participant.username:\n member['is_current_user'] = True\n if take['ctime'] is not None:\n # current user, but not the team itself\n member['editing_allowed']= True\n\n member['last_week'] = last_week = self.get_take_last_week_for(member)\n member['max_this_week'] = self.compute_max_this_week(last_week)\n members.append(member)\n return members\n", "path": "gratipay/models/_mixin_team.py"}]} | 3,331 | 146 |
gh_patches_debug_10452 | rasdani/github-patches | git_diff | sublimelsp__LSP-285 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Feature request: Toggle diagnostic panel
Hi Tom, I have a suggestion.
Right now I have a keyboard shortcut to open the diagnostic panel `ctrl+shift+m`.
As you can see, when I open the panel with that keybinding and press the keybinding again, the panel is still visible.

Wouldn't it be better if the panel could be toggled like this? :)

If the answer is yes: I have already done that, and I could create a pull request if you want :)
</issue>
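For context, here is a minimal sketch of the toggle behaviour the issue asks for, written against Sublime Text's window API (`active_panel`, `show_panel`, `hide_panel`). The command name is made up for illustration and is not necessarily how the plugin implements it:

```python
import sublime_plugin


class ToggleDiagnosticsPanelSketch(sublime_plugin.WindowCommand):
    """Hypothetical command: hide the diagnostics panel when it is showing, show it otherwise."""

    def run(self):
        if self.window.active_panel() == "output.diagnostics":
            self.window.run_command("hide_panel", {"panel": "output.diagnostics"})
        else:
            self.window.run_command("show_panel", {"panel": "output.diagnostics"})
```

Binding a key to such a command gives the toggle shown in the second screenshot.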
<code>
[start of plugin/diagnostics.py]
1 import html
2 import os
3 import sublime
4 import sublime_plugin
5
6 try:
7 from typing import Any, List, Dict, Tuple, Callable, Optional
8 assert Any and List and Dict and Tuple and Callable and Optional
9 except ImportError:
10 pass
11
12 from .core.settings import settings, PLUGIN_NAME
13 from .core.protocol import Diagnostic, DiagnosticSeverity
14 from .core.events import Events
15 from .core.configurations import is_supported_syntax
16 from .core.diagnostics import DiagnosticsUpdate, get_window_diagnostics, get_line_diagnostics
17 from .core.workspace import get_project_path
18 from .core.panels import create_output_panel
19
20 diagnostic_severity_names = {
21 DiagnosticSeverity.Error: "error",
22 DiagnosticSeverity.Warning: "warning",
23 DiagnosticSeverity.Information: "info",
24 DiagnosticSeverity.Hint: "hint"
25 }
26
27 diagnostic_severity_scopes = {
28 DiagnosticSeverity.Error: 'markup.deleted.lsp sublimelinter.mark.error markup.error.lsp',
29 DiagnosticSeverity.Warning: 'markup.changed.lsp sublimelinter.mark.warning markup.warning.lsp',
30 DiagnosticSeverity.Information: 'markup.inserted.lsp sublimelinter.gutter-mark markup.info.lsp',
31 DiagnosticSeverity.Hint: 'markup.inserted.lsp sublimelinter.gutter-mark markup.info.suggestion.lsp'
32 }
33
34 stylesheet = '''
35 <style>
36 div.error-arrow {
37 border-top: 0.4rem solid transparent;
38 border-left: 0.5rem solid color(var(--redish) blend(var(--background) 30%));
39 width: 0;
40 height: 0;
41 }
42 div.error {
43 padding: 0.4rem 0 0.4rem 0.7rem;
44 margin: 0 0 0.2rem;
45 border-radius: 0 0.2rem 0.2rem 0.2rem;
46 }
47
48 div.error span.message {
49 padding-right: 0.7rem;
50 }
51
52 div.error a {
53 text-decoration: inherit;
54 padding: 0.35rem 0.7rem 0.45rem 0.8rem;
55 position: relative;
56 bottom: 0.05rem;
57 border-radius: 0 0.2rem 0.2rem 0;
58 font-weight: bold;
59 }
60 html.dark div.error a {
61 background-color: #00000018;
62 }
63 html.light div.error a {
64 background-color: #ffffff18;
65 }
66 </style>
67 '''
68
69 UNDERLINE_FLAGS = (sublime.DRAW_SQUIGGLY_UNDERLINE
70 | sublime.DRAW_NO_OUTLINE
71 | sublime.DRAW_NO_FILL
72 | sublime.DRAW_EMPTY_AS_OVERWRITE)
73
74 BOX_FLAGS = sublime.DRAW_NO_FILL | sublime.DRAW_EMPTY_AS_OVERWRITE
75
76
77 def create_phantom_html(text: str) -> str:
78 global stylesheet
79 return """<body id=inline-error>{}
80 <div class="error-arrow"></div>
81 <div class="error">
82 <span class="message">{}</span>
83 <a href="code-actions">Code Actions</a>
84 </div>
85 </body>""".format(stylesheet, html.escape(text, quote=False))
86
87
88 def on_phantom_navigate(view: sublime.View, href: str, point: int):
89 # TODO: don't mess with the user's cursor.
90 sel = view.sel()
91 sel.clear()
92 sel.add(sublime.Region(point))
93 view.run_command("lsp_code_actions")
94
95
96 def create_phantom(view: sublime.View, diagnostic: Diagnostic) -> sublime.Phantom:
97 region = diagnostic.range.to_region(view)
98 # TODO: hook up hide phantom (if keeping them)
99 content = create_phantom_html(diagnostic.message)
100 return sublime.Phantom(
101 region,
102 '<p>' + content + '</p>',
103 sublime.LAYOUT_BELOW,
104 lambda href: on_phantom_navigate(view, href, region.begin())
105 )
106
107
108 def format_severity(severity: int) -> str:
109 return diagnostic_severity_names.get(severity, "???")
110
111
112 def format_diagnostic(diagnostic: Diagnostic) -> str:
113 location = "{:>8}:{:<4}".format(
114 diagnostic.range.start.row + 1, diagnostic.range.start.col + 1)
115 message = diagnostic.message.replace("\n", " ").replace("\r", "")
116 return " {}\t{:<12}\t{:<10}\t{}".format(
117 location, diagnostic.source, format_severity(diagnostic.severity), message)
118
119
120 phantom_sets_by_buffer = {} # type: Dict[int, sublime.PhantomSet]
121
122
123 def update_diagnostics_phantoms(view: sublime.View, diagnostics: 'List[Diagnostic]'):
124 global phantom_sets_by_buffer
125
126 buffer_id = view.buffer_id()
127 if not settings.show_diagnostics_phantoms or view.is_dirty():
128 phantoms = None
129 else:
130 phantoms = list(
131 create_phantom(view, diagnostic) for diagnostic in diagnostics)
132 if phantoms:
133 phantom_set = phantom_sets_by_buffer.get(buffer_id)
134 if not phantom_set:
135 phantom_set = sublime.PhantomSet(view, "lsp_diagnostics")
136 phantom_sets_by_buffer[buffer_id] = phantom_set
137 phantom_set.update(phantoms)
138 else:
139 phantom_sets_by_buffer.pop(buffer_id, None)
140
141
142 def update_diagnostics_regions(view: sublime.View, diagnostics: 'List[Diagnostic]', severity: int):
143 region_name = "lsp_" + format_severity(severity)
144 if settings.show_diagnostics_phantoms and not view.is_dirty():
145 regions = None
146 else:
147 regions = list(diagnostic.range.to_region(view) for diagnostic in diagnostics
148 if diagnostic.severity == severity)
149 if regions:
150 scope_name = diagnostic_severity_scopes[severity]
151 view.add_regions(
152 region_name, regions, scope_name, settings.diagnostics_gutter_marker,
153 UNDERLINE_FLAGS if settings.diagnostics_highlight_style == "underline" else BOX_FLAGS)
154 else:
155 view.erase_regions(region_name)
156
157
158 def update_diagnostics_in_view(view: sublime.View, diagnostics: 'List[Diagnostic]'):
159 if view and view.is_valid():
160 update_diagnostics_phantoms(view, diagnostics)
161 for severity in range(DiagnosticSeverity.Error, DiagnosticSeverity.Information):
162 update_diagnostics_regions(view, diagnostics, severity)
163
164
165 Events.subscribe("document.diagnostics",
166 lambda update: handle_diagnostics(update))
167
168
169 def handle_diagnostics(update: DiagnosticsUpdate):
170 window = update.window
171 view = window.find_open_file(update.file_path)
172 if view:
173 update_diagnostics_in_view(view, update.diagnostics)
174 update_diagnostics_panel(window)
175
176
177 class DiagnosticsCursorListener(sublime_plugin.ViewEventListener):
178 def __init__(self, view):
179 self.view = view
180 self.has_status = False
181
182 @classmethod
183 def is_applicable(cls, view_settings):
184 syntax = view_settings.get('syntax')
185 return settings.show_diagnostics_in_view_status and syntax and is_supported_syntax(syntax)
186
187 def on_selection_modified_async(self):
188 selections = self.view.sel()
189 if len(selections) > 0:
190 pos = selections[0].begin()
191 line_diagnostics = get_line_diagnostics(self.view, pos)
192 if len(line_diagnostics) > 0:
193 self.show_diagnostics_status(line_diagnostics)
194 elif self.has_status:
195 self.clear_diagnostics_status()
196
197 def show_diagnostics_status(self, line_diagnostics):
198 self.has_status = True
199 self.view.set_status('lsp_diagnostics', line_diagnostics[0].message)
200
201 def clear_diagnostics_status(self):
202 self.view.erase_status('lsp_diagnostics')
203 self.has_status = False
204
205
206 class LspShowDiagnosticsPanelCommand(sublime_plugin.WindowCommand):
207 def run(self):
208 ensure_diagnostics_panel(self.window)
209 self.window.run_command("show_panel", {"panel": "output.diagnostics"})
210
211
212 def create_diagnostics_panel(window):
213 panel = create_output_panel(window, "diagnostics")
214 panel.settings().set("result_file_regex", r"^\s*\S\s+(\S.*):$")
215 panel.settings().set("result_line_regex", r"^\s+([0-9]+):?([0-9]+).*$")
216 panel.assign_syntax("Packages/" + PLUGIN_NAME +
217 "/Syntaxes/Diagnostics.sublime-syntax")
218 # Call create_output_panel a second time after assigning the above
219 # settings, so that it'll be picked up as a result buffer
220 # see: Packages/Default/exec.py#L228-L230
221 panel = window.create_output_panel("diagnostics")
222 return panel
223
224
225 def ensure_diagnostics_panel(window: sublime.Window):
226 return window.find_output_panel("diagnostics") or create_diagnostics_panel(window)
227
228
229 def update_diagnostics_panel(window: sublime.Window):
230 assert window, "missing window!"
231 base_dir = get_project_path(window)
232
233 panel = ensure_diagnostics_panel(window)
234 assert panel, "must have a panel now!"
235
236 diagnostics_by_file = get_window_diagnostics(window)
237 if diagnostics_by_file is not None:
238 active_panel = window.active_panel()
239 is_active_panel = (active_panel == "output.diagnostics")
240 panel.settings().set("result_base_dir", base_dir)
241 panel.set_read_only(False)
242 if diagnostics_by_file:
243 to_render = []
244 for file_path, source_diagnostics in diagnostics_by_file.items():
245 relative_file_path = os.path.relpath(file_path, base_dir) if base_dir else file_path
246 if source_diagnostics:
247 to_render.append(format_diagnostics(relative_file_path, source_diagnostics))
248 panel.run_command("lsp_update_panel", {"characters": "\n".join(to_render)})
249 if settings.auto_show_diagnostics_panel and not active_panel:
250 window.run_command("show_panel",
251 {"panel": "output.diagnostics"})
252 else:
253 panel.run_command("lsp_clear_panel")
254 if settings.auto_show_diagnostics_panel and is_active_panel:
255 window.run_command("hide_panel",
256 {"panel": "output.diagnostics"})
257 panel.set_read_only(True)
258
259
260 def format_diagnostics(file_path, origin_diagnostics):
261     content = " ◌ {}:\n".format(file_path)
262 for origin, diagnostics in origin_diagnostics.items():
263 for diagnostic in diagnostics:
264 item = format_diagnostic(diagnostic)
265 content += item + "\n"
266 return content
267
[end of plugin/diagnostics.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugin/diagnostics.py b/plugin/diagnostics.py
--- a/plugin/diagnostics.py
+++ b/plugin/diagnostics.py
@@ -206,7 +206,13 @@
class LspShowDiagnosticsPanelCommand(sublime_plugin.WindowCommand):
def run(self):
ensure_diagnostics_panel(self.window)
- self.window.run_command("show_panel", {"panel": "output.diagnostics"})
+ active_panel = self.window.active_panel()
+ is_active_panel = (active_panel == "output.diagnostics")
+
+ if is_active_panel:
+ self.window.run_command("hide_panel", {"panel": "output.diagnostics"})
+ else:
+ self.window.run_command("show_panel", {"panel": "output.diagnostics"})
def create_diagnostics_panel(window):
| {"golden_diff": "diff --git a/plugin/diagnostics.py b/plugin/diagnostics.py\n--- a/plugin/diagnostics.py\n+++ b/plugin/diagnostics.py\n@@ -206,7 +206,13 @@\n class LspShowDiagnosticsPanelCommand(sublime_plugin.WindowCommand):\n def run(self):\n ensure_diagnostics_panel(self.window)\n- self.window.run_command(\"show_panel\", {\"panel\": \"output.diagnostics\"})\n+ active_panel = self.window.active_panel()\n+ is_active_panel = (active_panel == \"output.diagnostics\")\n+\n+ if is_active_panel:\n+ self.window.run_command(\"hide_panel\", {\"panel\": \"output.diagnostics\"})\n+ else:\n+ self.window.run_command(\"show_panel\", {\"panel\": \"output.diagnostics\"})\n \n \n def create_diagnostics_panel(window):\n", "issue": "Feature request: Toggle diagnostic panel\nHi Tom, I have a suggestion.\r\n\r\nRight now I have a keyboard shortcut to open the diagnostic panel `ctrl+shift+m`.\r\nAs you can see. When I open the panel with that keybinding, and press the keybinding again the panel is still visible. \r\n\r\n\r\n\r\nWouldn't it be better if the panel could be toggled like this? :)\r\n\r\n\r\n\r\n\r\nIf the answer is yes? I have already done that and I could create a pull request if you want :)\n", "before_files": [{"content": "import html\nimport os\nimport sublime\nimport sublime_plugin\n\ntry:\n from typing import Any, List, Dict, Tuple, Callable, Optional\n assert Any and List and Dict and Tuple and Callable and Optional\nexcept ImportError:\n pass\n\nfrom .core.settings import settings, PLUGIN_NAME\nfrom .core.protocol import Diagnostic, DiagnosticSeverity\nfrom .core.events import Events\nfrom .core.configurations import is_supported_syntax\nfrom .core.diagnostics import DiagnosticsUpdate, get_window_diagnostics, get_line_diagnostics\nfrom .core.workspace import get_project_path\nfrom .core.panels import create_output_panel\n\ndiagnostic_severity_names = {\n DiagnosticSeverity.Error: \"error\",\n DiagnosticSeverity.Warning: \"warning\",\n DiagnosticSeverity.Information: \"info\",\n DiagnosticSeverity.Hint: \"hint\"\n}\n\ndiagnostic_severity_scopes = {\n DiagnosticSeverity.Error: 'markup.deleted.lsp sublimelinter.mark.error markup.error.lsp',\n DiagnosticSeverity.Warning: 'markup.changed.lsp sublimelinter.mark.warning markup.warning.lsp',\n DiagnosticSeverity.Information: 'markup.inserted.lsp sublimelinter.gutter-mark markup.info.lsp',\n DiagnosticSeverity.Hint: 'markup.inserted.lsp sublimelinter.gutter-mark markup.info.suggestion.lsp'\n}\n\nstylesheet = '''\n <style>\n div.error-arrow {\n border-top: 0.4rem solid transparent;\n border-left: 0.5rem solid color(var(--redish) blend(var(--background) 30%));\n width: 0;\n height: 0;\n }\n div.error {\n padding: 0.4rem 0 0.4rem 0.7rem;\n margin: 0 0 0.2rem;\n border-radius: 0 0.2rem 0.2rem 0.2rem;\n }\n\n div.error span.message {\n padding-right: 0.7rem;\n }\n\n div.error a {\n text-decoration: inherit;\n padding: 0.35rem 0.7rem 0.45rem 0.8rem;\n position: relative;\n bottom: 0.05rem;\n border-radius: 0 0.2rem 0.2rem 0;\n font-weight: bold;\n }\n html.dark div.error a {\n background-color: #00000018;\n }\n html.light div.error a {\n background-color: #ffffff18;\n }\n </style>\n '''\n\nUNDERLINE_FLAGS = (sublime.DRAW_SQUIGGLY_UNDERLINE\n | sublime.DRAW_NO_OUTLINE\n | sublime.DRAW_NO_FILL\n | sublime.DRAW_EMPTY_AS_OVERWRITE)\n\nBOX_FLAGS = sublime.DRAW_NO_FILL | sublime.DRAW_EMPTY_AS_OVERWRITE\n\n\ndef create_phantom_html(text: str) -> str:\n global stylesheet\n return \"\"\"<body id=inline-error>{}\n <div class=\"error-arrow\"></div>\n <div 
class=\"error\">\n <span class=\"message\">{}</span>\n <a href=\"code-actions\">Code Actions</a>\n </div>\n </body>\"\"\".format(stylesheet, html.escape(text, quote=False))\n\n\ndef on_phantom_navigate(view: sublime.View, href: str, point: int):\n # TODO: don't mess with the user's cursor.\n sel = view.sel()\n sel.clear()\n sel.add(sublime.Region(point))\n view.run_command(\"lsp_code_actions\")\n\n\ndef create_phantom(view: sublime.View, diagnostic: Diagnostic) -> sublime.Phantom:\n region = diagnostic.range.to_region(view)\n # TODO: hook up hide phantom (if keeping them)\n content = create_phantom_html(diagnostic.message)\n return sublime.Phantom(\n region,\n '<p>' + content + '</p>',\n sublime.LAYOUT_BELOW,\n lambda href: on_phantom_navigate(view, href, region.begin())\n )\n\n\ndef format_severity(severity: int) -> str:\n return diagnostic_severity_names.get(severity, \"???\")\n\n\ndef format_diagnostic(diagnostic: Diagnostic) -> str:\n location = \"{:>8}:{:<4}\".format(\n diagnostic.range.start.row + 1, diagnostic.range.start.col + 1)\n message = diagnostic.message.replace(\"\\n\", \" \").replace(\"\\r\", \"\")\n return \" {}\\t{:<12}\\t{:<10}\\t{}\".format(\n location, diagnostic.source, format_severity(diagnostic.severity), message)\n\n\nphantom_sets_by_buffer = {} # type: Dict[int, sublime.PhantomSet]\n\n\ndef update_diagnostics_phantoms(view: sublime.View, diagnostics: 'List[Diagnostic]'):\n global phantom_sets_by_buffer\n\n buffer_id = view.buffer_id()\n if not settings.show_diagnostics_phantoms or view.is_dirty():\n phantoms = None\n else:\n phantoms = list(\n create_phantom(view, diagnostic) for diagnostic in diagnostics)\n if phantoms:\n phantom_set = phantom_sets_by_buffer.get(buffer_id)\n if not phantom_set:\n phantom_set = sublime.PhantomSet(view, \"lsp_diagnostics\")\n phantom_sets_by_buffer[buffer_id] = phantom_set\n phantom_set.update(phantoms)\n else:\n phantom_sets_by_buffer.pop(buffer_id, None)\n\n\ndef update_diagnostics_regions(view: sublime.View, diagnostics: 'List[Diagnostic]', severity: int):\n region_name = \"lsp_\" + format_severity(severity)\n if settings.show_diagnostics_phantoms and not view.is_dirty():\n regions = None\n else:\n regions = list(diagnostic.range.to_region(view) for diagnostic in diagnostics\n if diagnostic.severity == severity)\n if regions:\n scope_name = diagnostic_severity_scopes[severity]\n view.add_regions(\n region_name, regions, scope_name, settings.diagnostics_gutter_marker,\n UNDERLINE_FLAGS if settings.diagnostics_highlight_style == \"underline\" else BOX_FLAGS)\n else:\n view.erase_regions(region_name)\n\n\ndef update_diagnostics_in_view(view: sublime.View, diagnostics: 'List[Diagnostic]'):\n if view and view.is_valid():\n update_diagnostics_phantoms(view, diagnostics)\n for severity in range(DiagnosticSeverity.Error, DiagnosticSeverity.Information):\n update_diagnostics_regions(view, diagnostics, severity)\n\n\nEvents.subscribe(\"document.diagnostics\",\n lambda update: handle_diagnostics(update))\n\n\ndef handle_diagnostics(update: DiagnosticsUpdate):\n window = update.window\n view = window.find_open_file(update.file_path)\n if view:\n update_diagnostics_in_view(view, update.diagnostics)\n update_diagnostics_panel(window)\n\n\nclass DiagnosticsCursorListener(sublime_plugin.ViewEventListener):\n def __init__(self, view):\n self.view = view\n self.has_status = False\n\n @classmethod\n def is_applicable(cls, view_settings):\n syntax = view_settings.get('syntax')\n return settings.show_diagnostics_in_view_status and syntax and 
is_supported_syntax(syntax)\n\n def on_selection_modified_async(self):\n selections = self.view.sel()\n if len(selections) > 0:\n pos = selections[0].begin()\n line_diagnostics = get_line_diagnostics(self.view, pos)\n if len(line_diagnostics) > 0:\n self.show_diagnostics_status(line_diagnostics)\n elif self.has_status:\n self.clear_diagnostics_status()\n\n def show_diagnostics_status(self, line_diagnostics):\n self.has_status = True\n self.view.set_status('lsp_diagnostics', line_diagnostics[0].message)\n\n def clear_diagnostics_status(self):\n self.view.erase_status('lsp_diagnostics')\n self.has_status = False\n\n\nclass LspShowDiagnosticsPanelCommand(sublime_plugin.WindowCommand):\n def run(self):\n ensure_diagnostics_panel(self.window)\n self.window.run_command(\"show_panel\", {\"panel\": \"output.diagnostics\"})\n\n\ndef create_diagnostics_panel(window):\n panel = create_output_panel(window, \"diagnostics\")\n panel.settings().set(\"result_file_regex\", r\"^\\s*\\S\\s+(\\S.*):$\")\n panel.settings().set(\"result_line_regex\", r\"^\\s+([0-9]+):?([0-9]+).*$\")\n panel.assign_syntax(\"Packages/\" + PLUGIN_NAME +\n \"/Syntaxes/Diagnostics.sublime-syntax\")\n # Call create_output_panel a second time after assigning the above\n # settings, so that it'll be picked up as a result buffer\n # see: Packages/Default/exec.py#L228-L230\n panel = window.create_output_panel(\"diagnostics\")\n return panel\n\n\ndef ensure_diagnostics_panel(window: sublime.Window):\n return window.find_output_panel(\"diagnostics\") or create_diagnostics_panel(window)\n\n\ndef update_diagnostics_panel(window: sublime.Window):\n assert window, \"missing window!\"\n base_dir = get_project_path(window)\n\n panel = ensure_diagnostics_panel(window)\n assert panel, \"must have a panel now!\"\n\n diagnostics_by_file = get_window_diagnostics(window)\n if diagnostics_by_file is not None:\n active_panel = window.active_panel()\n is_active_panel = (active_panel == \"output.diagnostics\")\n panel.settings().set(\"result_base_dir\", base_dir)\n panel.set_read_only(False)\n if diagnostics_by_file:\n to_render = []\n for file_path, source_diagnostics in diagnostics_by_file.items():\n relative_file_path = os.path.relpath(file_path, base_dir) if base_dir else file_path\n if source_diagnostics:\n to_render.append(format_diagnostics(relative_file_path, source_diagnostics))\n panel.run_command(\"lsp_update_panel\", {\"characters\": \"\\n\".join(to_render)})\n if settings.auto_show_diagnostics_panel and not active_panel:\n window.run_command(\"show_panel\",\n {\"panel\": \"output.diagnostics\"})\n else:\n panel.run_command(\"lsp_clear_panel\")\n if settings.auto_show_diagnostics_panel and is_active_panel:\n window.run_command(\"hide_panel\",\n {\"panel\": \"output.diagnostics\"})\n panel.set_read_only(True)\n\n\ndef format_diagnostics(file_path, origin_diagnostics):\n content = \" \u25cc {}:\\n\".format(file_path)\n for origin, diagnostics in origin_diagnostics.items():\n for diagnostic in diagnostics:\n item = format_diagnostic(diagnostic)\n content += item + \"\\n\"\n return content\n", "path": "plugin/diagnostics.py"}]} | 3,702 | 169 |
gh_patches_debug_39569 | rasdani/github-patches | git_diff | celery__celery-6917 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
I can't stop a task by its task_id
[2018-12-02 23:53:58,955: INFO/MainProcess] Received task: tasks.add[bb1fe102-c1f9-4361-9370-1129900c0d52]
[2018-12-02 23:54:02,479: INFO/MainProcess] Terminating bb1fe102-c1f9-4361-9370-1129900c0d52 (Signals.SIGTERM)
[2018-12-02 23:54:02,490: ERROR/MainProcess] pidbox command error: NotImplementedError("<class 'celery.concurrency.eventlet.TaskPool'> does not implement kill_job",)
Traceback (most recent call last):
File "d:\envs\aidcs\lib\site-packages\kombu\pidbox.py", line 101, in dispatch
reply = handle(method, arguments)
File "d:\envs\aidcs\lib\site-packages\kombu\pidbox.py", line 122, in handle_cast
return self.handle(method, arguments)
File "d:\envs\aidcs\lib\site-packages\kombu\pidbox.py", line 116, in handle
return self.handlers[method](self.state, **arguments)
File "d:\envs\aidcs\lib\site-packages\celery\worker\control.py", line 163, in revoke
request.terminate(state.consumer.pool, signal=signum)
File "d:\envs\aidcs\lib\site-packages\celery\worker\request.py", line 249, in terminate
pool.terminate_job(self.worker_pid, signal)
File "d:\envs\aidcs\lib\site-packages\celery\concurrency\base.py", line 115, in terminate_job
'{0} does not implement kill_job'.format(type(self)))
NotImplementedError: <class 'celery.concurrency.eventlet.TaskPool'> does not implement kill_job
[2018-12-02 23:55:38,956: INFO/MainProcess] Task tasks.add[bb1fe102-c1f9-4361-9370-1129900c0d52] succeeded in 100.0s: 8
This is my main code:
from celery.app.control import Control
from tasks import add, app
myControl=Control(app)
myControl.revoke(task_id="b11729b0-6272-4527-af9d-dc24c0ad492d", terminate=True)
Finally: if I want to look at the state of the task only by its task_id (just like above), how can I do that?
</issue>
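As an aside on the last question in the issue: a task's state can be read back from just its id with `AsyncResult`. A minimal sketch, assuming the same `app` object as in the snippet above and a configured result backend:

```python
from celery.result import AsyncResult

res = AsyncResult("bb1fe102-c1f9-4361-9370-1129900c0d52", app=app)
print(res.state)       # e.g. PENDING, STARTED, SUCCESS, FAILURE, REVOKED
if res.ready():
    print(res.result)  # the task's return value, or the raised exception
```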
<code>
[start of celery/concurrency/eventlet.py]
1 """Eventlet execution pool."""
2 import sys
3 from time import monotonic
4
5 from kombu.asynchronous import timer as _timer
6
7 from celery import signals
8
9 from . import base
10
11 __all__ = ('TaskPool',)
12
13 W_RACE = """\
14 Celery module with %s imported before eventlet patched\
15 """
16 RACE_MODS = ('billiard.', 'celery.', 'kombu.')
17
18
19 #: Warn if we couldn't patch early enough,
20 #: and thread/socket depending celery modules have already been loaded.
21 for mod in (mod for mod in sys.modules if mod.startswith(RACE_MODS)):
22 for side in ('thread', 'threading', 'socket'): # pragma: no cover
23 if getattr(mod, side, None):
24 import warnings
25 warnings.warn(RuntimeWarning(W_RACE % side))
26
27
28 def apply_target(target, args=(), kwargs=None, callback=None,
29 accept_callback=None, getpid=None):
30 kwargs = {} if not kwargs else kwargs
31 return base.apply_target(target, args, kwargs, callback, accept_callback,
32 pid=getpid())
33
34
35 class Timer(_timer.Timer):
36 """Eventlet Timer."""
37
38 def __init__(self, *args, **kwargs):
39 from eventlet.greenthread import spawn_after
40 from greenlet import GreenletExit
41 super().__init__(*args, **kwargs)
42
43 self.GreenletExit = GreenletExit
44 self._spawn_after = spawn_after
45 self._queue = set()
46
47 def _enter(self, eta, priority, entry, **kwargs):
48 secs = max(eta - monotonic(), 0)
49 g = self._spawn_after(secs, entry)
50 self._queue.add(g)
51 g.link(self._entry_exit, entry)
52 g.entry = entry
53 g.eta = eta
54 g.priority = priority
55 g.canceled = False
56 return g
57
58 def _entry_exit(self, g, entry):
59 try:
60 try:
61 g.wait()
62 except self.GreenletExit:
63 entry.cancel()
64 g.canceled = True
65 finally:
66 self._queue.discard(g)
67
68 def clear(self):
69 queue = self._queue
70 while queue:
71 try:
72 queue.pop().cancel()
73 except (KeyError, self.GreenletExit):
74 pass
75
76 def cancel(self, tref):
77 try:
78 tref.cancel()
79 except self.GreenletExit:
80 pass
81
82 @property
83 def queue(self):
84 return self._queue
85
86
87 class TaskPool(base.BasePool):
88 """Eventlet Task Pool."""
89
90 Timer = Timer
91
92 signal_safe = False
93 is_green = True
94 task_join_will_block = False
95 _pool = None
96 _quick_put = None
97
98 def __init__(self, *args, **kwargs):
99 from eventlet import greenthread
100 from eventlet.greenpool import GreenPool
101 self.Pool = GreenPool
102 self.getcurrent = greenthread.getcurrent
103 self.getpid = lambda: id(greenthread.getcurrent())
104 self.spawn_n = greenthread.spawn_n
105
106 super().__init__(*args, **kwargs)
107
108 def on_start(self):
109 self._pool = self.Pool(self.limit)
110 signals.eventlet_pool_started.send(sender=self)
111 self._quick_put = self._pool.spawn_n
112 self._quick_apply_sig = signals.eventlet_pool_apply.send
113
114 def on_stop(self):
115 signals.eventlet_pool_preshutdown.send(sender=self)
116 if self._pool is not None:
117 self._pool.waitall()
118 signals.eventlet_pool_postshutdown.send(sender=self)
119
120 def on_apply(self, target, args=None, kwargs=None, callback=None,
121 accept_callback=None, **_):
122 self._quick_apply_sig(
123 sender=self, target=target, args=args, kwargs=kwargs,
124 )
125 self._quick_put(apply_target, target, args, kwargs,
126 callback, accept_callback,
127 self.getpid)
128
129 def grow(self, n=1):
130 limit = self.limit + n
131 self._pool.resize(limit)
132 self.limit = limit
133
134 def shrink(self, n=1):
135 limit = self.limit - n
136 self._pool.resize(limit)
137 self.limit = limit
138
139 def _get_info(self):
140 info = super()._get_info()
141 info.update({
142 'max-concurrency': self.limit,
143 'free-threads': self._pool.free(),
144 'running-threads': self._pool.running(),
145 })
146 return info
147
[end of celery/concurrency/eventlet.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/celery/concurrency/eventlet.py b/celery/concurrency/eventlet.py
--- a/celery/concurrency/eventlet.py
+++ b/celery/concurrency/eventlet.py
@@ -2,6 +2,7 @@
import sys
from time import monotonic
+from greenlet import GreenletExit
from kombu.asynchronous import timer as _timer
from celery import signals
@@ -93,6 +94,7 @@
is_green = True
task_join_will_block = False
_pool = None
+ _pool_map = None
_quick_put = None
def __init__(self, *args, **kwargs):
@@ -107,8 +109,9 @@
def on_start(self):
self._pool = self.Pool(self.limit)
+ self._pool_map = {}
signals.eventlet_pool_started.send(sender=self)
- self._quick_put = self._pool.spawn_n
+ self._quick_put = self._pool.spawn
self._quick_apply_sig = signals.eventlet_pool_apply.send
def on_stop(self):
@@ -119,12 +122,17 @@
def on_apply(self, target, args=None, kwargs=None, callback=None,
accept_callback=None, **_):
- self._quick_apply_sig(
- sender=self, target=target, args=args, kwargs=kwargs,
+ target = TaskPool._make_killable_target(target)
+ self._quick_apply_sig(sender=self, target=target, args=args, kwargs=kwargs,)
+ greenlet = self._quick_put(
+ apply_target,
+ target, args,
+ kwargs,
+ callback,
+ accept_callback,
+ self.getpid
)
- self._quick_put(apply_target, target, args, kwargs,
- callback, accept_callback,
- self.getpid)
+ self._add_to_pool_map(id(greenlet), greenlet)
def grow(self, n=1):
limit = self.limit + n
@@ -136,6 +144,12 @@
self._pool.resize(limit)
self.limit = limit
+ def terminate_job(self, pid, signal=None):
+ if pid in self._pool_map.keys():
+ greenlet = self._pool_map[pid]
+ greenlet.kill()
+ greenlet.wait()
+
def _get_info(self):
info = super()._get_info()
info.update({
@@ -144,3 +158,24 @@
'running-threads': self._pool.running(),
})
return info
+
+ @staticmethod
+ def _make_killable_target(target):
+ def killable_target(*args, **kwargs):
+ try:
+ return target(*args, **kwargs)
+ except GreenletExit:
+ return (False, None, None)
+ return killable_target
+
+ def _add_to_pool_map(self, pid, greenlet):
+ self._pool_map[pid] = greenlet
+ greenlet.link(
+ TaskPool._cleanup_after_job_finish,
+ self._pool_map,
+ pid
+ )
+
+ @staticmethod
+ def _cleanup_after_job_finish(greenlet, pool_map, pid):
+ del pool_map[pid]
| {"golden_diff": "diff --git a/celery/concurrency/eventlet.py b/celery/concurrency/eventlet.py\n--- a/celery/concurrency/eventlet.py\n+++ b/celery/concurrency/eventlet.py\n@@ -2,6 +2,7 @@\n import sys\n from time import monotonic\n \n+from greenlet import GreenletExit\n from kombu.asynchronous import timer as _timer\n \n from celery import signals\n@@ -93,6 +94,7 @@\n is_green = True\n task_join_will_block = False\n _pool = None\n+ _pool_map = None\n _quick_put = None\n \n def __init__(self, *args, **kwargs):\n@@ -107,8 +109,9 @@\n \n def on_start(self):\n self._pool = self.Pool(self.limit)\n+ self._pool_map = {}\n signals.eventlet_pool_started.send(sender=self)\n- self._quick_put = self._pool.spawn_n\n+ self._quick_put = self._pool.spawn\n self._quick_apply_sig = signals.eventlet_pool_apply.send\n \n def on_stop(self):\n@@ -119,12 +122,17 @@\n \n def on_apply(self, target, args=None, kwargs=None, callback=None,\n accept_callback=None, **_):\n- self._quick_apply_sig(\n- sender=self, target=target, args=args, kwargs=kwargs,\n+ target = TaskPool._make_killable_target(target)\n+ self._quick_apply_sig(sender=self, target=target, args=args, kwargs=kwargs,)\n+ greenlet = self._quick_put(\n+ apply_target,\n+ target, args,\n+ kwargs,\n+ callback,\n+ accept_callback,\n+ self.getpid\n )\n- self._quick_put(apply_target, target, args, kwargs,\n- callback, accept_callback,\n- self.getpid)\n+ self._add_to_pool_map(id(greenlet), greenlet)\n \n def grow(self, n=1):\n limit = self.limit + n\n@@ -136,6 +144,12 @@\n self._pool.resize(limit)\n self.limit = limit\n \n+ def terminate_job(self, pid, signal=None):\n+ if pid in self._pool_map.keys():\n+ greenlet = self._pool_map[pid]\n+ greenlet.kill()\n+ greenlet.wait()\n+\n def _get_info(self):\n info = super()._get_info()\n info.update({\n@@ -144,3 +158,24 @@\n 'running-threads': self._pool.running(),\n })\n return info\n+\n+ @staticmethod\n+ def _make_killable_target(target):\n+ def killable_target(*args, **kwargs):\n+ try:\n+ return target(*args, **kwargs)\n+ except GreenletExit:\n+ return (False, None, None)\n+ return killable_target\n+\n+ def _add_to_pool_map(self, pid, greenlet):\n+ self._pool_map[pid] = greenlet\n+ greenlet.link(\n+ TaskPool._cleanup_after_job_finish,\n+ self._pool_map,\n+ pid\n+ )\n+\n+ @staticmethod\n+ def _cleanup_after_job_finish(greenlet, pool_map, pid):\n+ del pool_map[pid]\n", "issue": "I can\u2018t stop a task by its task_id\n[2018-12-02 23:53:58,955: INFO/MainProcess] Received task: tasks.add[bb1fe102-c1f9-4361-9370-1129900c0d52]\r\n[2018-12-02 23:54:02,479: INFO/MainProcess] Terminating bb1fe102-c1f9-4361-9370-1129900c0d52 (Signals.SIGTERM)\r\n[2018-12-02 23:54:02,490: ERROR/MainProcess] pidbox command error: NotImplementedError(\"<class 'celery.concurrency.eventlet.TaskPool'> does not implement kill_job\",)\r\nTraceback (most recent call last):\r\n File \"d:\\envs\\aidcs\\lib\\site-packages\\kombu\\pidbox.py\", line 101, in dispatch\r\n reply = handle(method, arguments)\r\n File \"d:\\envs\\aidcs\\lib\\site-packages\\kombu\\pidbox.py\", line 122, in handle_cast\r\n return self.handle(method, arguments)\r\n File \"d:\\envs\\aidcs\\lib\\site-packages\\kombu\\pidbox.py\", line 116, in handle\r\n return self.handlers[method](self.state, **arguments)\r\n File \"d:\\envs\\aidcs\\lib\\site-packages\\celery\\worker\\control.py\", line 163, in revoke\r\n request.terminate(state.consumer.pool, signal=signum)\r\n File \"d:\\envs\\aidcs\\lib\\site-packages\\celery\\worker\\request.py\", line 249, in terminate\r\n 
pool.terminate_job(self.worker_pid, signal)\r\n File \"d:\\envs\\aidcs\\lib\\site-packages\\celery\\concurrency\\base.py\", line 115, in terminate_job\r\n '{0} does not implement kill_job'.format(type(self)))\r\nNotImplementedError: <class 'celery.concurrency.eventlet.TaskPool'> does not implement kill_job\r\n[2018-12-02 23:55:38,956: INFO/MainProcess] Task tasks.add[bb1fe102-c1f9-4361-9370-1129900c0d52] succeeded in 100.0s: 8\r\n\r\n\r\n\r\n\r\nthis is my main code:\r\n\r\nfrom celery.app.control import Control\r\nfrom tasks import add, app\r\n\r\nmyControl=Control(app)\r\nmyControl.revoke(task_id=\"b11729b0-6272-4527-af9d-dc24c0ad492d\", terminate=True)\r\n\r\n\r\n\r\nfinally\uff0cif i want to look at the state of the task only by task_id (just like above), how .\n", "before_files": [{"content": "\"\"\"Eventlet execution pool.\"\"\"\nimport sys\nfrom time import monotonic\n\nfrom kombu.asynchronous import timer as _timer\n\nfrom celery import signals\n\nfrom . import base\n\n__all__ = ('TaskPool',)\n\nW_RACE = \"\"\"\\\nCelery module with %s imported before eventlet patched\\\n\"\"\"\nRACE_MODS = ('billiard.', 'celery.', 'kombu.')\n\n\n#: Warn if we couldn't patch early enough,\n#: and thread/socket depending celery modules have already been loaded.\nfor mod in (mod for mod in sys.modules if mod.startswith(RACE_MODS)):\n for side in ('thread', 'threading', 'socket'): # pragma: no cover\n if getattr(mod, side, None):\n import warnings\n warnings.warn(RuntimeWarning(W_RACE % side))\n\n\ndef apply_target(target, args=(), kwargs=None, callback=None,\n accept_callback=None, getpid=None):\n kwargs = {} if not kwargs else kwargs\n return base.apply_target(target, args, kwargs, callback, accept_callback,\n pid=getpid())\n\n\nclass Timer(_timer.Timer):\n \"\"\"Eventlet Timer.\"\"\"\n\n def __init__(self, *args, **kwargs):\n from eventlet.greenthread import spawn_after\n from greenlet import GreenletExit\n super().__init__(*args, **kwargs)\n\n self.GreenletExit = GreenletExit\n self._spawn_after = spawn_after\n self._queue = set()\n\n def _enter(self, eta, priority, entry, **kwargs):\n secs = max(eta - monotonic(), 0)\n g = self._spawn_after(secs, entry)\n self._queue.add(g)\n g.link(self._entry_exit, entry)\n g.entry = entry\n g.eta = eta\n g.priority = priority\n g.canceled = False\n return g\n\n def _entry_exit(self, g, entry):\n try:\n try:\n g.wait()\n except self.GreenletExit:\n entry.cancel()\n g.canceled = True\n finally:\n self._queue.discard(g)\n\n def clear(self):\n queue = self._queue\n while queue:\n try:\n queue.pop().cancel()\n except (KeyError, self.GreenletExit):\n pass\n\n def cancel(self, tref):\n try:\n tref.cancel()\n except self.GreenletExit:\n pass\n\n @property\n def queue(self):\n return self._queue\n\n\nclass TaskPool(base.BasePool):\n \"\"\"Eventlet Task Pool.\"\"\"\n\n Timer = Timer\n\n signal_safe = False\n is_green = True\n task_join_will_block = False\n _pool = None\n _quick_put = None\n\n def __init__(self, *args, **kwargs):\n from eventlet import greenthread\n from eventlet.greenpool import GreenPool\n self.Pool = GreenPool\n self.getcurrent = greenthread.getcurrent\n self.getpid = lambda: id(greenthread.getcurrent())\n self.spawn_n = greenthread.spawn_n\n\n super().__init__(*args, **kwargs)\n\n def on_start(self):\n self._pool = self.Pool(self.limit)\n signals.eventlet_pool_started.send(sender=self)\n self._quick_put = self._pool.spawn_n\n self._quick_apply_sig = signals.eventlet_pool_apply.send\n\n def on_stop(self):\n 
signals.eventlet_pool_preshutdown.send(sender=self)\n if self._pool is not None:\n self._pool.waitall()\n signals.eventlet_pool_postshutdown.send(sender=self)\n\n def on_apply(self, target, args=None, kwargs=None, callback=None,\n accept_callback=None, **_):\n self._quick_apply_sig(\n sender=self, target=target, args=args, kwargs=kwargs,\n )\n self._quick_put(apply_target, target, args, kwargs,\n callback, accept_callback,\n self.getpid)\n\n def grow(self, n=1):\n limit = self.limit + n\n self._pool.resize(limit)\n self.limit = limit\n\n def shrink(self, n=1):\n limit = self.limit - n\n self._pool.resize(limit)\n self.limit = limit\n\n def _get_info(self):\n info = super()._get_info()\n info.update({\n 'max-concurrency': self.limit,\n 'free-threads': self._pool.free(),\n 'running-threads': self._pool.running(),\n })\n return info\n", "path": "celery/concurrency/eventlet.py"}]} | 2,518 | 729 |
gh_patches_debug_18536 | rasdani/github-patches | git_diff | learningequality__kolibri-2113 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
print detailed exception info to server console on 500 errors
Currently, the web server middleware swallows all Python exceptions and returns the traceback information to the client in a 500 error. This makes debugging difficult.
It should be printed to the console and saved to log files.
</issue>
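The direction the issue asks for can be illustrated with a small sketch against the `LOGGING` dict defined in the settings file below; the handler name `request_console` is made up here and is not part of Kolibri's actual configuration:

```python
# Sketch only: also send unhandled request errors to the console.
LOGGING['handlers']['request_console'] = {
    'level': 'ERROR',
    'class': 'logging.StreamHandler',
    'formatter': 'color',
}
LOGGING['loggers']['django.request']['handlers'].append('request_console')
```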
<code>
[start of kolibri/deployment/default/settings/base.py]
1 # -*- coding: utf-8 -*-
2 """
3 Django settings for kolibri project.
4
5 For more information on this file, see
6 https://docs.djangoproject.com/en/1.9/topics/settings/
7
8 For the full list of settings and their values, see
9 https://docs.djangoproject.com/en/1.9/ref/settings/
10 """
11 from __future__ import absolute_import, print_function, unicode_literals
12
13 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
14 import os
15
16 # import kolibri, so we can get the path to the module.
17 import kolibri
18 # we load other utilities related to i18n
19 # This is essential! We load the kolibri conf INSIDE the Django conf
20 from kolibri.utils import conf, i18n
21 from tzlocal import get_localzone
22
23 KOLIBRI_MODULE_PATH = os.path.dirname(kolibri.__file__)
24
25 BASE_DIR = os.path.abspath(os.path.dirname(__name__))
26
27 KOLIBRI_HOME = os.environ['KOLIBRI_HOME']
28
29 KOLIBRI_CORE_JS_NAME = 'kolibriGlobal'
30
31 LOCALE_PATHS = [
32 os.path.join(KOLIBRI_MODULE_PATH, "locale"),
33 ]
34
35 # Quick-start development settings - unsuitable for production
36 # See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/
37
38 # SECURITY WARNING: keep the secret key used in production secret!
39 SECRET_KEY = 'f@ey3)y^03r9^@mou97apom*+c1m#b1!cwbm50^s4yk72xce27'
40
41 # SECURITY WARNING: don't run with debug turned on in production!
42 DEBUG = False
43
44 ALLOWED_HOSTS = ['*']
45
46 # Application definition
47
48 INSTALLED_APPS = [
49 'kolibri.core',
50 'django.contrib.admin',
51 'django.contrib.auth',
52 'django.contrib.contenttypes',
53 'django.contrib.sessions',
54 'django.contrib.messages',
55 'django.contrib.staticfiles',
56 'kolibri.auth.apps.KolibriAuthConfig',
57 'kolibri.content',
58 'kolibri.logger',
59 'kolibri.tasks.apps.KolibriTasksConfig',
60 'kolibri.core.webpack',
61 'kolibri.core.exams',
62 'kolibri.core.device',
63 'kolibri.core.discovery',
64 'rest_framework',
65 'django_js_reverse',
66 'jsonfield',
67 'morango',
68 ] + conf.config['INSTALLED_APPS']
69
70 # Add in the external plugins' locale paths. Our frontend messages depends
71 # specifically on the value of LOCALE_PATHS to find its catalog files.
72 LOCALE_PATHS += [
73 i18n.get_installed_app_locale_path(app) for app in INSTALLED_APPS
74 if i18n.is_external_plugin(app)
75 ]
76
77 MIDDLEWARE_CLASSES = (
78 'django.contrib.sessions.middleware.SessionMiddleware',
79 'django.middleware.locale.LocaleMiddleware',
80 'django.middleware.common.CommonMiddleware',
81 'django.middleware.csrf.CsrfViewMiddleware',
82 'kolibri.plugins.setup_wizard.middleware.SetupWizardMiddleware',
83 'kolibri.auth.middleware.CustomAuthenticationMiddleware',
84 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
85 'django.contrib.messages.middleware.MessageMiddleware',
86 'django.middleware.clickjacking.XFrameOptionsMiddleware',
87 'django.middleware.security.SecurityMiddleware',
88 )
89
90 QUEUE_JOB_STORAGE_PATH = os.path.join(KOLIBRI_HOME, "job_storage.sqlite3")
91
92 ROOT_URLCONF = 'kolibri.deployment.default.urls'
93
94 TEMPLATES = [
95 {
96 'BACKEND': 'django.template.backends.django.DjangoTemplates',
97 'DIRS': [],
98 'APP_DIRS': True,
99 'OPTIONS': {
100 'context_processors': [
101 'django.template.context_processors.debug',
102 'django.template.context_processors.request',
103 'django.contrib.auth.context_processors.auth',
104 'django.contrib.messages.context_processors.messages',
105 'kolibri.core.context_processors.custom_context_processor.return_session',
106 ],
107 },
108 },
109 ]
110
111 WSGI_APPLICATION = 'kolibri.deployment.default.wsgi.application'
112
113
114 # Database
115 # https://docs.djangoproject.com/en/1.9/ref/settings/#databases
116
117 DATABASES = {
118 'default': {
119 'ENGINE': 'django.db.backends.sqlite3',
120 'NAME': os.path.join(KOLIBRI_HOME, 'db.sqlite3'),
121 'OPTIONS': {
122 'timeout': 100,
123 }
124 },
125 }
126
127 # Content directories and URLs for channel metadata and content files
128
129 # Directory and URL for storing content databases for channel data
130 CONTENT_DATABASE_DIR = os.path.join(KOLIBRI_HOME, 'content', 'databases')
131 if not os.path.exists(CONTENT_DATABASE_DIR):
132 os.makedirs(CONTENT_DATABASE_DIR)
133
134 # Directory and URL for storing de-duped content files for all channels
135 CONTENT_STORAGE_DIR = os.path.join(KOLIBRI_HOME, 'content', 'storage')
136 if not os.path.exists(CONTENT_STORAGE_DIR):
137 os.makedirs(CONTENT_STORAGE_DIR)
138
139 # Base default URL for downloading content from an online server
140 CENTRAL_CONTENT_DOWNLOAD_BASE_URL = "https://contentworkshop.learningequality.org"
141
142 # Internationalization
143 # https://docs.djangoproject.com/en/1.9/topics/i18n/
144
145 LANGUAGES = [
146 ('en', 'English'),
147 ('sw-tz', 'Kiswahili'),
148     ('es-es', 'Español'),
149     ('es-mx', 'Español (México)'),
150     ('fr-fr', 'Français, langue française'),
151     ('pt-pt', 'Português'),
152     ('hi-in', 'हिंदी'),
153 ]
154
155 LANGUAGE_CODE = conf.config.get("LANGUAGE_CODE") or "en"
156
157 TIME_ZONE = get_localzone().zone
158
159 USE_I18N = True
160
161 USE_L10N = True
162
163 USE_TZ = True
164
165 # Static files (CSS, JavaScript, Images)
166 # https://docs.djangoproject.com/en/1.9/howto/static-files/
167
168 STATIC_URL = '/static/'
169 STATIC_ROOT = os.path.join(KOLIBRI_HOME, "static")
170
171 # https://docs.djangoproject.com/en/1.9/ref/settings/#std:setting-LOGGING
172 # https://docs.djangoproject.com/en/1.9/topics/logging/
173
174 LOGGING = {
175 'version': 1,
176 'disable_existing_loggers': False,
177 'formatters': {
178 'verbose': {
179 'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
180 },
181 'simple': {
182 'format': '%(levelname)s %(message)s'
183 },
184 'simple_date': {
185 'format': '%(levelname)s %(asctime)s %(module)s %(message)s'
186 },
187 'color': {
188 '()': 'colorlog.ColoredFormatter',
189 'format': '%(log_color)s%(levelname)-8s %(message)s',
190 'log_colors': {
191 'DEBUG': 'bold_black',
192 'INFO': 'white',
193 'WARNING': 'yellow',
194 'ERROR': 'red',
195 'CRITICAL': 'bold_red',
196 },
197 }
198 },
199 'filters': {
200 'require_debug_true': {
201 '()': 'django.utils.log.RequireDebugTrue',
202 },
203 'require_debug_false': {
204 '()': 'django.utils.log.RequireDebugFalse',
205 },
206 },
207 'handlers': {
208 'console': {
209 'level': 'INFO',
210 'class': 'logging.StreamHandler',
211 'formatter': 'color'
212 },
213 'mail_admins': {
214 'level': 'ERROR',
215 'class': 'django.utils.log.AdminEmailHandler',
216 'filters': ['require_debug_false'],
217 },
218 'file_debug': {
219 'level': 'DEBUG',
220 'filters': ['require_debug_true'],
221 'class': 'logging.FileHandler',
222 'filename': os.path.join(KOLIBRI_HOME, 'debug.log'),
223 'formatter': 'simple_date',
224 },
225 'file': {
226 'level': 'INFO',
227 'filters': [],
228 'class': 'logging.FileHandler',
229 'filename': os.path.join(KOLIBRI_HOME, 'kolibri.log'),
230 'formatter': 'simple_date',
231 },
232 },
233 'loggers': {
234 'django': {
235 'handlers': ['console', 'file'],
236 'propagate': True,
237 },
238 'django.request': {
239 'handlers': ['mail_admins', 'file'],
240 'level': 'ERROR',
241 'propagate': False,
242 },
243 'kolibri': {
244 'handlers': ['console', 'mail_admins', 'file', 'file_debug'],
245 'level': 'INFO',
246 }
247 }
248 }
249
250
251 # Customizing Django auth system
252 # https://docs.djangoproject.com/en/1.9/topics/auth/customizing/
253
254 AUTH_USER_MODEL = 'kolibriauth.FacilityUser'
255
256 AUTHENTICATION_BACKENDS = ['kolibri.auth.backends.FacilityUserBackend']
257
258
259 # Django REST Framework
260 # http://www.django-rest-framework.org/api-guide/settings/
261
262 REST_FRAMEWORK = {
263 "UNAUTHENTICATED_USER": "kolibri.auth.models.KolibriAnonymousUser",
264 'DEFAULT_RENDERER_CLASSES': (
265 'rest_framework.renderers.JSONRenderer',
266 'rest_framework.renderers.BrowsableAPIRenderer',
267 'rest_framework_csv.renderers.CSVRenderer',
268 ),
269 }
270
271 # System warnings to disable
272 # see https://docs.djangoproject.com/en/1.9/ref/settings/#silenced-system-checks
273 SILENCED_SYSTEM_CHECKS = ["auth.W004"]
274
275 # Configuration for Django JS Reverse
276 # https://github.com/ierror/django-js-reverse#options
277
278 JS_REVERSE_JS_VAR_NAME = 'kolibriUrls'
279
280 JS_REVERSE_EXCLUDE_NAMESPACES = ['admin', ]
281
282 ENABLE_DATA_BOOTSTRAPPING = True
283
[end of kolibri/deployment/default/settings/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kolibri/deployment/default/settings/base.py b/kolibri/deployment/default/settings/base.py
--- a/kolibri/deployment/default/settings/base.py
+++ b/kolibri/deployment/default/settings/base.py
@@ -215,6 +215,12 @@
'class': 'django.utils.log.AdminEmailHandler',
'filters': ['require_debug_false'],
},
+ 'request_debug': {
+ 'level': 'ERROR',
+ 'class': 'logging.StreamHandler',
+ 'formatter': 'color',
+ 'filters': ['require_debug_true'],
+ },
'file_debug': {
'level': 'DEBUG',
'filters': ['require_debug_true'],
@@ -236,7 +242,7 @@
'propagate': True,
},
'django.request': {
- 'handlers': ['mail_admins', 'file'],
+ 'handlers': ['mail_admins', 'file', 'request_debug'],
'level': 'ERROR',
'propagate': False,
},
| {"golden_diff": "diff --git a/kolibri/deployment/default/settings/base.py b/kolibri/deployment/default/settings/base.py\n--- a/kolibri/deployment/default/settings/base.py\n+++ b/kolibri/deployment/default/settings/base.py\n@@ -215,6 +215,12 @@\n 'class': 'django.utils.log.AdminEmailHandler',\n 'filters': ['require_debug_false'],\n },\n+ 'request_debug': {\n+ 'level': 'ERROR',\n+ 'class': 'logging.StreamHandler',\n+ 'formatter': 'color',\n+ 'filters': ['require_debug_true'],\n+ },\n 'file_debug': {\n 'level': 'DEBUG',\n 'filters': ['require_debug_true'],\n@@ -236,7 +242,7 @@\n 'propagate': True,\n },\n 'django.request': {\n- 'handlers': ['mail_admins', 'file'],\n+ 'handlers': ['mail_admins', 'file', 'request_debug'],\n 'level': 'ERROR',\n 'propagate': False,\n },\n", "issue": "print detailed exception info to server console on 500 errors\n\r\nCurrently, the web server middleware swallows all Python exceptions and returns the traceback information to the client in a 500 error. This makes debugging difficult.\r\n\r\nIt should be printed to the console and saved to log files.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nDjango settings for kolibri project.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.9/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.9/ref/settings/\n\"\"\"\nfrom __future__ import absolute_import, print_function, unicode_literals\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nimport os\n\n# import kolibri, so we can get the path to the module.\nimport kolibri\n# we load other utilities related to i18n\n# This is essential! We load the kolibri conf INSIDE the Django conf\nfrom kolibri.utils import conf, i18n\nfrom tzlocal import get_localzone\n\nKOLIBRI_MODULE_PATH = os.path.dirname(kolibri.__file__)\n\nBASE_DIR = os.path.abspath(os.path.dirname(__name__))\n\nKOLIBRI_HOME = os.environ['KOLIBRI_HOME']\n\nKOLIBRI_CORE_JS_NAME = 'kolibriGlobal'\n\nLOCALE_PATHS = [\n os.path.join(KOLIBRI_MODULE_PATH, \"locale\"),\n]\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = 'f@ey3)y^03r9^@mou97apom*+c1m#b1!cwbm50^s4yk72xce27'\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = False\n\nALLOWED_HOSTS = ['*']\n\n# Application definition\n\nINSTALLED_APPS = [\n 'kolibri.core',\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'kolibri.auth.apps.KolibriAuthConfig',\n 'kolibri.content',\n 'kolibri.logger',\n 'kolibri.tasks.apps.KolibriTasksConfig',\n 'kolibri.core.webpack',\n 'kolibri.core.exams',\n 'kolibri.core.device',\n 'kolibri.core.discovery',\n 'rest_framework',\n 'django_js_reverse',\n 'jsonfield',\n 'morango',\n] + conf.config['INSTALLED_APPS']\n\n# Add in the external plugins' locale paths. 
Our frontend messages depends\n# specifically on the value of LOCALE_PATHS to find its catalog files.\nLOCALE_PATHS += [\n i18n.get_installed_app_locale_path(app) for app in INSTALLED_APPS\n if i18n.is_external_plugin(app)\n]\n\nMIDDLEWARE_CLASSES = (\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'kolibri.plugins.setup_wizard.middleware.SetupWizardMiddleware',\n 'kolibri.auth.middleware.CustomAuthenticationMiddleware',\n 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n 'django.middleware.security.SecurityMiddleware',\n)\n\nQUEUE_JOB_STORAGE_PATH = os.path.join(KOLIBRI_HOME, \"job_storage.sqlite3\")\n\nROOT_URLCONF = 'kolibri.deployment.default.urls'\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n 'kolibri.core.context_processors.custom_context_processor.return_session',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'kolibri.deployment.default.wsgi.application'\n\n\n# Database\n# https://docs.djangoproject.com/en/1.9/ref/settings/#databases\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3',\n 'NAME': os.path.join(KOLIBRI_HOME, 'db.sqlite3'),\n 'OPTIONS': {\n 'timeout': 100,\n }\n },\n}\n\n# Content directories and URLs for channel metadata and content files\n\n# Directory and URL for storing content databases for channel data\nCONTENT_DATABASE_DIR = os.path.join(KOLIBRI_HOME, 'content', 'databases')\nif not os.path.exists(CONTENT_DATABASE_DIR):\n os.makedirs(CONTENT_DATABASE_DIR)\n\n# Directory and URL for storing de-duped content files for all channels\nCONTENT_STORAGE_DIR = os.path.join(KOLIBRI_HOME, 'content', 'storage')\nif not os.path.exists(CONTENT_STORAGE_DIR):\n os.makedirs(CONTENT_STORAGE_DIR)\n\n# Base default URL for downloading content from an online server\nCENTRAL_CONTENT_DOWNLOAD_BASE_URL = \"https://contentworkshop.learningequality.org\"\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.9/topics/i18n/\n\nLANGUAGES = [\n ('en', 'English'),\n ('sw-tz', 'Kiswahili'),\n ('es-es', 'Espa\u00f1ol'),\n ('es-mx', 'Espa\u00f1ol (M\u00e9xico)'),\n ('fr-fr', 'Fran\u00e7ais, langue fran\u00e7aise'),\n ('pt-pt', 'Portugu\u00eas'),\n ('hi-in', '\u0939\u093f\u0902\u0926\u0940'),\n]\n\nLANGUAGE_CODE = conf.config.get(\"LANGUAGE_CODE\") or \"en\"\n\nTIME_ZONE = get_localzone().zone\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.9/howto/static-files/\n\nSTATIC_URL = '/static/'\nSTATIC_ROOT = os.path.join(KOLIBRI_HOME, \"static\")\n\n# https://docs.djangoproject.com/en/1.9/ref/settings/#std:setting-LOGGING\n# https://docs.djangoproject.com/en/1.9/topics/logging/\n\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'formatters': {\n 'verbose': {\n 'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'\n },\n 'simple': {\n 'format': '%(levelname)s %(message)s'\n },\n 'simple_date': {\n 'format': '%(levelname)s %(asctime)s %(module)s 
%(message)s'\n },\n 'color': {\n '()': 'colorlog.ColoredFormatter',\n 'format': '%(log_color)s%(levelname)-8s %(message)s',\n 'log_colors': {\n 'DEBUG': 'bold_black',\n 'INFO': 'white',\n 'WARNING': 'yellow',\n 'ERROR': 'red',\n 'CRITICAL': 'bold_red',\n },\n }\n },\n 'filters': {\n 'require_debug_true': {\n '()': 'django.utils.log.RequireDebugTrue',\n },\n 'require_debug_false': {\n '()': 'django.utils.log.RequireDebugFalse',\n },\n },\n 'handlers': {\n 'console': {\n 'level': 'INFO',\n 'class': 'logging.StreamHandler',\n 'formatter': 'color'\n },\n 'mail_admins': {\n 'level': 'ERROR',\n 'class': 'django.utils.log.AdminEmailHandler',\n 'filters': ['require_debug_false'],\n },\n 'file_debug': {\n 'level': 'DEBUG',\n 'filters': ['require_debug_true'],\n 'class': 'logging.FileHandler',\n 'filename': os.path.join(KOLIBRI_HOME, 'debug.log'),\n 'formatter': 'simple_date',\n },\n 'file': {\n 'level': 'INFO',\n 'filters': [],\n 'class': 'logging.FileHandler',\n 'filename': os.path.join(KOLIBRI_HOME, 'kolibri.log'),\n 'formatter': 'simple_date',\n },\n },\n 'loggers': {\n 'django': {\n 'handlers': ['console', 'file'],\n 'propagate': True,\n },\n 'django.request': {\n 'handlers': ['mail_admins', 'file'],\n 'level': 'ERROR',\n 'propagate': False,\n },\n 'kolibri': {\n 'handlers': ['console', 'mail_admins', 'file', 'file_debug'],\n 'level': 'INFO',\n }\n }\n}\n\n\n# Customizing Django auth system\n# https://docs.djangoproject.com/en/1.9/topics/auth/customizing/\n\nAUTH_USER_MODEL = 'kolibriauth.FacilityUser'\n\nAUTHENTICATION_BACKENDS = ['kolibri.auth.backends.FacilityUserBackend']\n\n\n# Django REST Framework\n# http://www.django-rest-framework.org/api-guide/settings/\n\nREST_FRAMEWORK = {\n \"UNAUTHENTICATED_USER\": \"kolibri.auth.models.KolibriAnonymousUser\",\n 'DEFAULT_RENDERER_CLASSES': (\n 'rest_framework.renderers.JSONRenderer',\n 'rest_framework.renderers.BrowsableAPIRenderer',\n 'rest_framework_csv.renderers.CSVRenderer',\n ),\n}\n\n# System warnings to disable\n# see https://docs.djangoproject.com/en/1.9/ref/settings/#silenced-system-checks\nSILENCED_SYSTEM_CHECKS = [\"auth.W004\"]\n\n# Configuration for Django JS Reverse\n# https://github.com/ierror/django-js-reverse#options\n\nJS_REVERSE_JS_VAR_NAME = 'kolibriUrls'\n\nJS_REVERSE_EXCLUDE_NAMESPACES = ['admin', ]\n\nENABLE_DATA_BOOTSTRAPPING = True\n", "path": "kolibri/deployment/default/settings/base.py"}]} | 3,431 | 229 |
gh_patches_debug_18689 | rasdani/github-patches | git_diff | sanic-org__sanic-1553 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to start server -- Running run_async.py failed
**Describe the bug**
[2019-04-14 19:22:02 +0800] [21512] [INFO] Goin' Fast @ http://0.0.0.0:8000
[2019-04-14 19:22:02 +0800] [21512] [ERROR] Unable to start server
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\envs\venom\lib\site-packages\sanic\server.py", line 745, in serve
http_server = loop.run_until_complete(server_coroutine)
File "C:\ProgramData\Anaconda3\envs\venom\lib\asyncio\base_events.py", line 571, in run_until_complete
self.run_forever()
File "C:\ProgramData\Anaconda3\envs\venom\lib\asyncio\base_events.py", line 529, in run_forever
'Cannot run the event loop while another loop is running')
RuntimeError: Cannot run the event loop while another loop is running
**Code snippet**
Relevant source code, make sure to remove what is not necessary.
https://github.com/huge-success/sanic/blob/master/examples/run_async.py
**Expected behavior**
A clear and concise description of what you expected to happen.
**Environment (please complete the following information):**
- OS: [e.g. iOS]
- Version [e.g. 0.8.3]
Windows and Linux, Python 3.6 or 3.7 don't work
**Additional context**
Add any other context about the problem here.
Does this example still work?
</issue>
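For context, the patch further down in this record avoids the nested-loop error by asking `create_server()` for an awaitable asyncio server instead of letting it drive its own loop. Below is a sketch of the corrected example, with the `return_asyncio_server=True` keyword taken from that patch (so its availability depends on the Sanic version in use):

```python
import asyncio
from sanic import Sanic, response

app = Sanic(__name__)

@app.route("/")
async def test(request):
    return response.json({"answer": "42"})

# return_asyncio_server=True makes create_server return a coroutine that
# yields an asyncio server, leaving loop management to the caller.
server = app.create_server(host="0.0.0.0", port=8000,
                           return_asyncio_server=True)
loop = asyncio.get_event_loop()
asyncio.ensure_future(server)
loop.run_forever()
```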
<code>
[start of examples/log_request_id.py]
1 '''
2 Based on example from https://github.com/Skyscanner/aiotask-context
3 and `examples/{override_logging,run_async}.py`.
4
5 Needs https://github.com/Skyscanner/aiotask-context/tree/52efbc21e2e1def2d52abb9a8e951f3ce5e6f690 or newer
6
7 $ pip install git+https://github.com/Skyscanner/aiotask-context.git
8 '''
9
10 import asyncio
11 import uuid
12 import logging
13 from signal import signal, SIGINT
14
15 from sanic import Sanic
16 from sanic import response
17
18 import uvloop
19 import aiotask_context as context
20
21 log = logging.getLogger(__name__)
22
23
24 class RequestIdFilter(logging.Filter):
25 def filter(self, record):
26 record.request_id = context.get('X-Request-ID')
27 return True
28
29
30 LOG_SETTINGS = {
31 'version': 1,
32 'disable_existing_loggers': False,
33 'handlers': {
34 'console': {
35 'class': 'logging.StreamHandler',
36 'level': 'DEBUG',
37 'formatter': 'default',
38 'filters': ['requestid'],
39 },
40 },
41 'filters': {
42 'requestid': {
43 '()': RequestIdFilter,
44 },
45 },
46 'formatters': {
47 'default': {
48 'format': '%(asctime)s %(levelname)s %(name)s:%(lineno)d %(request_id)s | %(message)s',
49 },
50 },
51 'loggers': {
52 '': {
53 'level': 'DEBUG',
54 'handlers': ['console'],
55 'propagate': True
56 },
57 }
58 }
59
60
61 app = Sanic(__name__, log_config=LOG_SETTINGS)
62
63
64 @app.middleware('request')
65 async def set_request_id(request):
66 request_id = request.headers.get('X-Request-ID') or str(uuid.uuid4())
67 context.set("X-Request-ID", request_id)
68
69
70 @app.route("/")
71 async def test(request):
72 log.debug('X-Request-ID: %s', context.get('X-Request-ID'))
73 log.info('Hello from test!')
74 return response.json({"test": True})
75
76
77 if __name__ == '__main__':
78 asyncio.set_event_loop(uvloop.new_event_loop())
79 server = app.create_server(host="0.0.0.0", port=8000)
80 loop = asyncio.get_event_loop()
81 loop.set_task_factory(context.task_factory)
82 task = asyncio.ensure_future(server)
83 try:
84 loop.run_forever()
85 except:
86 loop.stop()
87
[end of examples/log_request_id.py]
[start of examples/run_async.py]
1 from sanic import Sanic
2 from sanic import response
3 from signal import signal, SIGINT
4 import asyncio
5 import uvloop
6
7 app = Sanic(__name__)
8
9
10 @app.route("/")
11 async def test(request):
12 return response.json({"answer": "42"})
13
14 asyncio.set_event_loop(uvloop.new_event_loop())
15 server = app.create_server(host="0.0.0.0", port=8000)
16 loop = asyncio.get_event_loop()
17 task = asyncio.ensure_future(server)
18 signal(SIGINT, lambda s, f: loop.stop())
19 try:
20 loop.run_forever()
21 except:
22 loop.stop()
23
[end of examples/run_async.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/log_request_id.py b/examples/log_request_id.py
--- a/examples/log_request_id.py
+++ b/examples/log_request_id.py
@@ -76,7 +76,7 @@
if __name__ == '__main__':
asyncio.set_event_loop(uvloop.new_event_loop())
- server = app.create_server(host="0.0.0.0", port=8000)
+ server = app.create_server(host="0.0.0.0", port=8000, return_asyncio_server=True)
loop = asyncio.get_event_loop()
loop.set_task_factory(context.task_factory)
task = asyncio.ensure_future(server)
diff --git a/examples/run_async.py b/examples/run_async.py
--- a/examples/run_async.py
+++ b/examples/run_async.py
@@ -12,7 +12,7 @@
return response.json({"answer": "42"})
asyncio.set_event_loop(uvloop.new_event_loop())
-server = app.create_server(host="0.0.0.0", port=8000)
+server = app.create_server(host="0.0.0.0", port=8000, return_asyncio_server=True)
loop = asyncio.get_event_loop()
task = asyncio.ensure_future(server)
signal(SIGINT, lambda s, f: loop.stop())
| {"golden_diff": "diff --git a/examples/log_request_id.py b/examples/log_request_id.py\n--- a/examples/log_request_id.py\n+++ b/examples/log_request_id.py\n@@ -76,7 +76,7 @@\n \n if __name__ == '__main__':\n asyncio.set_event_loop(uvloop.new_event_loop())\n- server = app.create_server(host=\"0.0.0.0\", port=8000)\n+ server = app.create_server(host=\"0.0.0.0\", port=8000, return_asyncio_server=True)\n loop = asyncio.get_event_loop()\n loop.set_task_factory(context.task_factory)\n task = asyncio.ensure_future(server)\ndiff --git a/examples/run_async.py b/examples/run_async.py\n--- a/examples/run_async.py\n+++ b/examples/run_async.py\n@@ -12,7 +12,7 @@\n return response.json({\"answer\": \"42\"})\n \n asyncio.set_event_loop(uvloop.new_event_loop())\n-server = app.create_server(host=\"0.0.0.0\", port=8000)\n+server = app.create_server(host=\"0.0.0.0\", port=8000, return_asyncio_server=True)\n loop = asyncio.get_event_loop()\n task = asyncio.ensure_future(server)\n signal(SIGINT, lambda s, f: loop.stop())\n", "issue": "Unable to start server -- Running run_async.py failed\n**Describe the bug**\r\n[2019-04-14 19:22:02 +0800] [21512] [INFO] Goin' Fast @ http://0.0.0.0:8000\r\n[2019-04-14 19:22:02 +0800] [21512] [ERROR] Unable to start server\r\nTraceback (most recent call last):\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\venom\\lib\\site-packages\\sanic\\server.py\", line 745, in serve\r\n http_server = loop.run_until_complete(server_coroutine)\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\venom\\lib\\asyncio\\base_events.py\", line 571, in run_until_complete\r\n self.run_forever()\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\venom\\lib\\asyncio\\base_events.py\", line 529, in run_forever\r\n 'Cannot run the event loop while another loop is running')\r\nRuntimeError: Cannot run the event loop while another loop is running\r\n\r\n**Code snippet**\r\nRelevant source code, make sure to remove what is not necessary.\r\n\r\nhttps://github.com/huge-success/sanic/blob/master/examples/run_async.py\r\n\r\n**Expected behavior**\r\nA clear and concise description of what you expected to happen.\r\n\r\n\r\n**Environment (please complete the following information):**\r\n - OS: [e.g. iOS]\r\n - Version [e.g. 
0.8.3]\r\nWindow and Linux, Python 3.6 or 3.7 don't work\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\r\n\r\nIs this example still work ?\n", "before_files": [{"content": "'''\nBased on example from https://github.com/Skyscanner/aiotask-context\nand `examples/{override_logging,run_async}.py`.\n\nNeeds https://github.com/Skyscanner/aiotask-context/tree/52efbc21e2e1def2d52abb9a8e951f3ce5e6f690 or newer\n\n$ pip install git+https://github.com/Skyscanner/aiotask-context.git\n'''\n\nimport asyncio\nimport uuid\nimport logging\nfrom signal import signal, SIGINT\n\nfrom sanic import Sanic\nfrom sanic import response\n\nimport uvloop\nimport aiotask_context as context\n\nlog = logging.getLogger(__name__)\n\n\nclass RequestIdFilter(logging.Filter):\n def filter(self, record):\n record.request_id = context.get('X-Request-ID')\n return True\n\n\nLOG_SETTINGS = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'handlers': {\n 'console': {\n 'class': 'logging.StreamHandler',\n 'level': 'DEBUG',\n 'formatter': 'default',\n 'filters': ['requestid'],\n },\n },\n 'filters': {\n 'requestid': {\n '()': RequestIdFilter,\n },\n },\n 'formatters': {\n 'default': {\n 'format': '%(asctime)s %(levelname)s %(name)s:%(lineno)d %(request_id)s | %(message)s',\n },\n },\n 'loggers': {\n '': {\n 'level': 'DEBUG',\n 'handlers': ['console'],\n 'propagate': True\n },\n }\n}\n\n\napp = Sanic(__name__, log_config=LOG_SETTINGS)\n\n\[email protected]('request')\nasync def set_request_id(request):\n request_id = request.headers.get('X-Request-ID') or str(uuid.uuid4())\n context.set(\"X-Request-ID\", request_id)\n\n\[email protected](\"/\")\nasync def test(request):\n log.debug('X-Request-ID: %s', context.get('X-Request-ID'))\n log.info('Hello from test!')\n return response.json({\"test\": True})\n\n\nif __name__ == '__main__':\n asyncio.set_event_loop(uvloop.new_event_loop())\n server = app.create_server(host=\"0.0.0.0\", port=8000)\n loop = asyncio.get_event_loop()\n loop.set_task_factory(context.task_factory)\n task = asyncio.ensure_future(server)\n try:\n loop.run_forever()\n except:\n loop.stop()\n", "path": "examples/log_request_id.py"}, {"content": "from sanic import Sanic\nfrom sanic import response\nfrom signal import signal, SIGINT\nimport asyncio\nimport uvloop\n\napp = Sanic(__name__)\n\n\[email protected](\"/\")\nasync def test(request):\n return response.json({\"answer\": \"42\"})\n\nasyncio.set_event_loop(uvloop.new_event_loop())\nserver = app.create_server(host=\"0.0.0.0\", port=8000)\nloop = asyncio.get_event_loop()\ntask = asyncio.ensure_future(server)\nsignal(SIGINT, lambda s, f: loop.stop())\ntry:\n loop.run_forever()\nexcept:\n loop.stop()\n", "path": "examples/run_async.py"}]} | 1,840 | 284 |
gh_patches_debug_4598 | rasdani/github-patches | git_diff | vispy__vispy-2223 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
scene.visuals.Graph is not working with directed = True
I am trying to render a directed graph but I am getting the following error.
Code (based on [example from gallery](https://vispy.org/gallery/scene/graph.html#sphx-glr-gallery-scene-graph-py), I just set directed=True):
```py
import sys
import networkx as nx
from vispy import app, scene
from vispy.visuals.graphs import layouts
canvas = scene.SceneCanvas(title='Simple NetworkX Graph', size=(600, 600),
bgcolor='white', show=True)
view = canvas.central_widget.add_view('panzoom')
graph = nx.adjacency_matrix(
nx.fast_gnp_random_graph(500, 0.005, directed=True)
)
layout = layouts.get_layout('force_directed', iterations=100)
visual = scene.visuals.Graph(
graph, layout=layout, line_color='black', arrow_type="stealth",
arrow_size=30, node_symbol="disc", node_size=20,
face_color=(1, 0, 0, 0.2), border_width=0.0, animate=True, directed=True,
parent=view.scene)
@canvas.events.draw.connect
def on_draw(event):
if not visual.animate_layout():
canvas.update()
if __name__ == '__main__':
if sys.flags.interactive != 1:
app.run()
```
Error:
```
<< caught exception here: >>
File "C:\Users\maxim\AppData\Local\Programs\Python\Python39\lib\site-packages\vispy\util\event.py", line 469, in _invoke_callback
cb(event)
File "D:\dev\university\UniversityProjects\3\alg_and_struct\2\demo.py", line 27, in on_draw
if not visual.animate_layout():
File "C:\Users\maxim\AppData\Local\Programs\Python\Python39\lib\site-packages\vispy\visuals\graphs\graph.py", line 143, in animate_layout
node_vertices, line_vertices, arrows = next(self._layout_iter)
File "C:\Users\maxim\AppData\Local\Programs\Python\Python39\lib\site-packages\vispy\visuals\graphs\layouts\force_directed.py", line 95, in __call__
for result in solver(adjacency_mat, directed):
File "C:\Users\maxim\AppData\Local\Programs\Python\Python39\lib\site-packages\vispy\visuals\graphs\layouts\force_directed.py", line 162, in _sparse_fruchterman_reingold
line_vertices, arrows = _straight_line_vertices(adjacency_coo, pos,
File "C:\Users\maxim\AppData\Local\Programs\Python\Python39\lib\site-packages\vispy\visuals\graphs\util.py", line 92, in _straight_line_vertices
arrow_vertices = arrow_vertices.reshape((len(arrow_vertices)/2, 4))
TypeError: 'float' object cannot be interpreted as an integer
ERROR: Invoking <function on_draw at 0x000001EB3573EDC0> for DrawEvent
```
Maybe typecasting or `//` at [this line](https://github.com/vispy/vispy/blob/feeaf8afa99ddbbac86a03e3e611a52c1c89584d/vispy/visuals/graphs/util.py#L92) is needed.
</issue>
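The suggestion above is the crux: in Python 3, `/` always yields a float, and NumPy's `reshape` rejects float dimensions. A standalone illustration follows (not vispy code, just the shape arithmetic):

```python
import numpy as np

arrow_vertices = np.zeros((6, 2))  # three edges, start/end points
n = len(arrow_vertices)
# reshape((n / 2, 4)) would raise TypeError: 'float' object cannot be
# interpreted as an integer, because n / 2 == 3.0 in Python 3.
fixed = arrow_vertices.reshape((n // 2, 4))  # floor division -> shape (3, 4)
```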
<code>
[start of vispy/visuals/graphs/util.py]
1 # -*- coding: utf-8 -*-
2 # Copyright (c) Vispy Development Team. All Rights Reserved.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4 """
5 Graph utilities
6 ===============
7
8 A module containing several graph utility functions.
9 """
10
11 import numpy as np
12
13 try:
14 from scipy.sparse import issparse
15 from scipy import sparse
16 except ImportError:
17 def issparse(*args, **kwargs):
18 return False
19
20
21 def _get_edges(adjacency_mat):
22 func = _sparse_get_edges if issparse(adjacency_mat) else _ndarray_get_edges
23 return func(adjacency_mat)
24
25
26 def _sparse_get_edges(adjacency_mat):
27 return np.concatenate((adjacency_mat.row[:, np.newaxis],
28 adjacency_mat.col[:, np.newaxis]), axis=-1)
29
30
31 def _ndarray_get_edges(adjacency_mat):
32 # Get indices of all non zero values
33 i, j = np.where(adjacency_mat)
34
35 return np.concatenate((i[:, np.newaxis], j[:, np.newaxis]), axis=-1)
36
37
38 def _get_directed_edges(adjacency_mat):
39 func = _sparse_get_edges if issparse(adjacency_mat) else _ndarray_get_edges
40
41 if issparse(adjacency_mat):
42 triu = sparse.triu
43 tril = sparse.tril
44 else:
45 triu = np.triu
46 tril = np.tril
47
48 upper = triu(adjacency_mat)
49 lower = tril(adjacency_mat)
50
51 return np.concatenate((func(upper), func(lower)))
52
53
54 def _straight_line_vertices(adjacency_mat, node_coords, directed=False):
55 """
56 Generate the vertices for straight lines between nodes.
57
58 If it is a directed graph, it also generates the vertices which can be
59 passed to an :class:`ArrowVisual`.
60
61 Parameters
62 ----------
63 adjacency_mat : array
64 The adjacency matrix of the graph
65 node_coords : array
66 The current coordinates of all nodes in the graph
67 directed : bool
68 Wether the graph is directed. If this is true it will also generate
69 the vertices for arrows which can be passed to :class:`ArrowVisual`.
70
71 Returns
72 -------
73 vertices : tuple
74 Returns a tuple containing containing (`line_vertices`,
75 `arrow_vertices`)
76 """
77 if not issparse(adjacency_mat):
78 adjacency_mat = np.asarray(adjacency_mat, float)
79
80 if (adjacency_mat.ndim != 2 or adjacency_mat.shape[0] !=
81 adjacency_mat.shape[1]):
82 raise ValueError("Adjacency matrix should be square.")
83
84 arrow_vertices = np.array([])
85
86 edges = _get_edges(adjacency_mat)
87 line_vertices = node_coords[edges.ravel()]
88
89 if directed:
90 arrows = np.array(list(_get_directed_edges(adjacency_mat)))
91 arrow_vertices = node_coords[arrows.ravel()]
92 arrow_vertices = arrow_vertices.reshape((len(arrow_vertices)/2, 4))
93
94 return line_vertices, arrow_vertices
95
96
97 def _rescale_layout(pos, scale=1):
98 """
99 Normalize the given coordinate list to the range [0, `scale`].
100
101 Parameters
102 ----------
103 pos : array
104 Coordinate list
105 scale : number
106 The upperbound value for the coordinates range
107
108 Returns
109 -------
110 pos : array
111 The rescaled (normalized) coordinates in the range [0, `scale`].
112
113 Notes
114 -----
115 Changes `pos` in place.
116 """
117 pos -= pos.min(axis=0)
118 pos *= scale / pos.max()
119
120 return pos
121
[end of vispy/visuals/graphs/util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/vispy/visuals/graphs/util.py b/vispy/visuals/graphs/util.py
--- a/vispy/visuals/graphs/util.py
+++ b/vispy/visuals/graphs/util.py
@@ -89,7 +89,7 @@
if directed:
arrows = np.array(list(_get_directed_edges(adjacency_mat)))
arrow_vertices = node_coords[arrows.ravel()]
- arrow_vertices = arrow_vertices.reshape((len(arrow_vertices)/2, 4))
+ arrow_vertices = arrow_vertices.reshape((len(arrow_vertices)//2, 4))
return line_vertices, arrow_vertices
| {"golden_diff": "diff --git a/vispy/visuals/graphs/util.py b/vispy/visuals/graphs/util.py\n--- a/vispy/visuals/graphs/util.py\n+++ b/vispy/visuals/graphs/util.py\n@@ -89,7 +89,7 @@\n if directed:\n arrows = np.array(list(_get_directed_edges(adjacency_mat)))\n arrow_vertices = node_coords[arrows.ravel()]\n- arrow_vertices = arrow_vertices.reshape((len(arrow_vertices)/2, 4))\n+ arrow_vertices = arrow_vertices.reshape((len(arrow_vertices)//2, 4))\n \n return line_vertices, arrow_vertices\n", "issue": "scene.visuals.Graph is not working with directed = True\nI am trying to render an directed graph but I am getting the error.\r\n\r\nCode (based on [example from gallery](https://vispy.org/gallery/scene/graph.html#sphx-glr-gallery-scene-graph-py), I just set directed=True):\r\n```py\r\nimport sys\r\n\r\nimport networkx as nx\r\n\r\nfrom vispy import app, scene\r\nfrom vispy.visuals.graphs import layouts\r\n\r\n\r\ncanvas = scene.SceneCanvas(title='Simple NetworkX Graph', size=(600, 600),\r\n bgcolor='white', show=True)\r\nview = canvas.central_widget.add_view('panzoom')\r\n\r\ngraph = nx.adjacency_matrix(\r\n nx.fast_gnp_random_graph(500, 0.005, directed=True)\r\n)\r\nlayout = layouts.get_layout('force_directed', iterations=100)\r\n\r\nvisual = scene.visuals.Graph(\r\n graph, layout=layout, line_color='black', arrow_type=\"stealth\",\r\n arrow_size=30, node_symbol=\"disc\", node_size=20,\r\n face_color=(1, 0, 0, 0.2), border_width=0.0, animate=True, directed=True,\r\n parent=view.scene)\r\n\r\n\r\[email protected]\r\ndef on_draw(event):\r\n if not visual.animate_layout():\r\n canvas.update()\r\n\r\nif __name__ == '__main__':\r\n if sys.flags.interactive != 1:\r\n app.run()\r\n```\r\n\r\nError:\r\n```\r\n<< caught exception here: >>\r\n File \"C:\\Users\\maxim\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\vispy\\util\\event.py\", line 469, in _invoke_callback\r\n cb(event)\r\n File \"D:\\dev\\university\\UniversityProjects\\3\\alg_and_struct\\2\\demo.py\", line 27, in on_draw\r\n if not visual.animate_layout():\r\n File \"C:\\Users\\maxim\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\vispy\\visuals\\graphs\\graph.py\", line 143, in animate_layout\r\n node_vertices, line_vertices, arrows = next(self._layout_iter)\r\n File \"C:\\Users\\maxim\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\vispy\\visuals\\graphs\\layouts\\force_directed.py\", line 95, in __call__\r\n for result in solver(adjacency_mat, directed):\r\n File \"C:\\Users\\maxim\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\vispy\\visuals\\graphs\\layouts\\force_directed.py\", line 162, in _sparse_fruchterman_reingold\r\n line_vertices, arrows = _straight_line_vertices(adjacency_coo, pos,\r\n File \"C:\\Users\\maxim\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\vispy\\visuals\\graphs\\util.py\", line 92, in _straight_line_vertices\r\n arrow_vertices = arrow_vertices.reshape((len(arrow_vertices)/2, 4))\r\nTypeError: 'float' object cannot be interpreted as an integer\r\nERROR: Invoking <function on_draw at 0x000001EB3573EDC0> for DrawEvent\r\n```\r\n\r\nMay be typecasting or `//` at [this line](https://github.com/vispy/vispy/blob/feeaf8afa99ddbbac86a03e3e611a52c1c89584d/vispy/visuals/graphs/util.py#L92) is needed.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. 
See LICENSE.txt for more info.\n\"\"\"\nGraph utilities\n===============\n\nA module containing several graph utility functions.\n\"\"\"\n\nimport numpy as np\n\ntry:\n from scipy.sparse import issparse\n from scipy import sparse\nexcept ImportError:\n def issparse(*args, **kwargs):\n return False\n\n\ndef _get_edges(adjacency_mat):\n func = _sparse_get_edges if issparse(adjacency_mat) else _ndarray_get_edges\n return func(adjacency_mat)\n\n\ndef _sparse_get_edges(adjacency_mat):\n return np.concatenate((adjacency_mat.row[:, np.newaxis],\n adjacency_mat.col[:, np.newaxis]), axis=-1)\n\n\ndef _ndarray_get_edges(adjacency_mat):\n # Get indices of all non zero values\n i, j = np.where(adjacency_mat)\n\n return np.concatenate((i[:, np.newaxis], j[:, np.newaxis]), axis=-1)\n\n\ndef _get_directed_edges(adjacency_mat):\n func = _sparse_get_edges if issparse(adjacency_mat) else _ndarray_get_edges\n\n if issparse(adjacency_mat):\n triu = sparse.triu\n tril = sparse.tril\n else:\n triu = np.triu\n tril = np.tril\n\n upper = triu(adjacency_mat)\n lower = tril(adjacency_mat)\n\n return np.concatenate((func(upper), func(lower)))\n\n\ndef _straight_line_vertices(adjacency_mat, node_coords, directed=False):\n \"\"\"\n Generate the vertices for straight lines between nodes.\n\n If it is a directed graph, it also generates the vertices which can be\n passed to an :class:`ArrowVisual`.\n\n Parameters\n ----------\n adjacency_mat : array\n The adjacency matrix of the graph\n node_coords : array\n The current coordinates of all nodes in the graph\n directed : bool\n Wether the graph is directed. If this is true it will also generate\n the vertices for arrows which can be passed to :class:`ArrowVisual`.\n\n Returns\n -------\n vertices : tuple\n Returns a tuple containing containing (`line_vertices`,\n `arrow_vertices`)\n \"\"\"\n if not issparse(adjacency_mat):\n adjacency_mat = np.asarray(adjacency_mat, float)\n\n if (adjacency_mat.ndim != 2 or adjacency_mat.shape[0] !=\n adjacency_mat.shape[1]):\n raise ValueError(\"Adjacency matrix should be square.\")\n\n arrow_vertices = np.array([])\n\n edges = _get_edges(adjacency_mat)\n line_vertices = node_coords[edges.ravel()]\n\n if directed:\n arrows = np.array(list(_get_directed_edges(adjacency_mat)))\n arrow_vertices = node_coords[arrows.ravel()]\n arrow_vertices = arrow_vertices.reshape((len(arrow_vertices)/2, 4))\n\n return line_vertices, arrow_vertices\n\n\ndef _rescale_layout(pos, scale=1):\n \"\"\"\n Normalize the given coordinate list to the range [0, `scale`].\n\n Parameters\n ----------\n pos : array\n Coordinate list\n scale : number\n The upperbound value for the coordinates range\n\n Returns\n -------\n pos : array\n The rescaled (normalized) coordinates in the range [0, `scale`].\n\n Notes\n -----\n Changes `pos` in place.\n \"\"\"\n pos -= pos.min(axis=0)\n pos *= scale / pos.max()\n\n return pos\n", "path": "vispy/visuals/graphs/util.py"}]} | 2,361 | 141 |
gh_patches_debug_27242 | rasdani/github-patches | git_diff | google__openhtf-473 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update documentation and examples/measurement.py
Ran into some issues on a fresh install following the documentation. I needed to add the package libprotobuf-dev to the apt-get install line in CONTRIBUTING.md to get protobufs to build. I also got an error when trying to run the example measurements.py saying that units could not be found; this was resolved by importing openhtf.utils.units.
</issue>
<code>
[start of examples/measurements.py]
1 # Copyright 2016 Google Inc. All Rights Reserved.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Example OpenHTF test demonstrating use of measurements.
16
17 Run with (your virtualenv must be activated first):
18
19 python measurements.py
20
21 Afterwards, check out the output in measurements.json. If you open both this
22 example test and that output file and compare them, you should be able to see
23 where measurement values end up in the output and what the corresponding code
24 looks like that sets them.
25
26 TODO(someone): Write these examples.
27 For more complex topics, see the validators.py and dimensions.py examples.
28
29 For a simpler example, see the hello_world.py example. If the output of this
30 test is confusing, start with the hello_world.py output and compare it to this
31 test's output.
32
33 Some constraints on measurements:
34
35 - Measurement names must be valid python variable names. This is mostly for
36 sanity, but also ensures you can access them via attribute access in phases.
37 This applies *after* any with_args() substitution (not covered in this
38 tutorial, see the phases.py example for more details).
39
40 - You cannot declare the same measurement name multiple times on the same
41 phase. Technically, you *can* declare the same measurement on multiple
42 phases; measurements are attached to a specific phase in the output. This
43 isn't recommended, though, because it makes it difficult to flatten a test's
44 measurements, which some output formats require.
45 """
46
47 # Import openhtf with an abbreviated name, as we'll be using a bunch of stuff
48 # from it throughout our test scripts. See __all__ at the top of
49 # openhtf/__init__.py for details on what's in top-of-module namespace.
50 import openhtf as htf
51
52 # Import this output mechanism as it's the specific one we want to use.
53 from openhtf.output.callbacks import json_factory
54
55 # You won't normally need to import this, see validators.py example for
56 # more details. It's used for the inline measurement declaration example
57 # below, but normally you'll only import it when you want to define custom
58 # measurement validators.
59 from openhtf.util import validators
60
61
62 # Simple example of measurement use, similar to hello_world.py usage.
63 @htf.measures(htf.Measurement('hello_world_measurement'))
64 def hello_phase(test):
65 test.measurements.hello_world_measurement = 'Hello!'
66
67
68 # An alternative simpler syntax that creates the Measurement for you.
69 @htf.measures('hello_again_measurement')
70 def again_phase(test):
71 test.measurements.hello_again_measurement = 'Again!'
72
73
74 # Multiple measurements can be specified in a single decorator, using either of
75 # the above syntaxes. Technically, these syntaxes can be mixed and matched, but
76 # as a matter of convention you should always use one or the other within a
77 # single decorator call. You'll also note that you can stack multiple
78 # decorations on a single phase. This is useful if you have a handful of simple
79 # measurements, and then one or two with more complex declarations (see below).
80 @htf.measures('first_measurement', 'second_measurement')
81 @htf.measures(htf.Measurement('third'), htf.Measurement('fourth'))
82 def lots_of_measurements(test):
83 test.measurements.first_measurement = 'First!'
84 # Measurements can also be access via indexing rather than attributes.
85 test.measurements['second_measurement'] = 'Second :('
86 # This can be handy for iterating over measurements.
87 for measurement in ('third', 'fourth'):
88 test.measurements[measurement] = measurement + ' is the best!'
89
90
91 # Basic key/value measurements are handy, but we may also want to validate a
92 # measurement against some criteria, or specify additional information
93 # describing the measurement. Validators can get quite complex, for more
94 # details, see the validators.py example.
95 @htf.measures(htf.Measurement('validated_measurement').in_range(0, 10).doc(
96 'This measurement is validated.').with_units(units.SECOND))
97 def measure_seconds(test):
98 # The 'outcome' of this measurement in the test_record result will be a PASS
99 # because its value passes the validator specified (0 <= 5 <= 10).
100 test.measurements.validated_measurement = 5
101
102
103 # These additional attributes can also be specified inline as kwargs passed
104 # directly to the @measures decorator. If you do so, however, you must
105 # specify exactly one measurement with that decorator (ie. the first argument
106 # must be a string containing the measurement name). If you want to specify
107 # multiple measurements this way, you can stack multiple decorators.
108 @htf.measures('inline_kwargs', docstring='This measurement is declared inline!',
109 units=units.HERTZ, validators=[validators.in_range(0, 10)])
110 @htf.measures('another_inline', docstring='Because why not?')
111 def inline_phase(test):
112 # This measurement will have an outcome of FAIL, because the set value of 15
113 # will not pass the 0 <= x <= 10 validator.
114 test.measurements.inline_kwargs = 15
115 test.measurements.another_inline = 'This one is unvalidated.'
116
117 # Let's log a message so the operator knows the test should fail.
118 test.logger.info('Set inline_kwargs to a failing value, test should FAIL!')
119
120
121 if __name__ == '__main__':
122 # We instantiate our OpenHTF test with the phases we want to run as args.
123 test = htf.Test(hello_phase, again_phase, lots_of_measurements,
124 measure_seconds, inline_phase)
125
126 # In order to view the result of the test, we have to output it somewhere,
127 # and a local JSON file is a convenient way to do this. Custom output
128 # mechanisms can be implemented, but for now we'll just keep it simple.
129 # This will always output to the same ./measurements.json file, formatted
130 # slightly for human readability.
131 test.add_output_callbacks(
132 json_factory.OutputToJSON('./measurements.json', indent=2))
133
134 # Unlike hello_world.py, where we prompt for a DUT ID, here we'll just
135 # use an arbitrary one.
136 test.execute(test_start=lambda: 'MyDutId')
137
[end of examples/measurements.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/measurements.py b/examples/measurements.py
--- a/examples/measurements.py
+++ b/examples/measurements.py
@@ -93,7 +93,7 @@
# describing the measurement. Validators can get quite complex, for more
# details, see the validators.py example.
@htf.measures(htf.Measurement('validated_measurement').in_range(0, 10).doc(
- 'This measurement is validated.').with_units(units.SECOND))
+ 'This measurement is validated.').with_units(htf.units.SECOND))
def measure_seconds(test):
# The 'outcome' of this measurement in the test_record result will be a PASS
# because its value passes the validator specified (0 <= 5 <= 10).
@@ -106,7 +106,7 @@
# must be a string containing the measurement name). If you want to specify
# multiple measurements this way, you can stack multiple decorators.
@htf.measures('inline_kwargs', docstring='This measurement is declared inline!',
- units=units.HERTZ, validators=[validators.in_range(0, 10)])
+ units=htf.units.HERTZ, validators=[validators.in_range(0, 10)])
@htf.measures('another_inline', docstring='Because why not?')
def inline_phase(test):
# This measurement will have an outcome of FAIL, because the set value of 15
| {"golden_diff": "diff --git a/examples/measurements.py b/examples/measurements.py\n--- a/examples/measurements.py\n+++ b/examples/measurements.py\n@@ -93,7 +93,7 @@\n # describing the measurement. Validators can get quite complex, for more\n # details, see the validators.py example.\n @htf.measures(htf.Measurement('validated_measurement').in_range(0, 10).doc(\n- 'This measurement is validated.').with_units(units.SECOND))\n+ 'This measurement is validated.').with_units(htf.units.SECOND))\n def measure_seconds(test):\n # The 'outcome' of this measurement in the test_record result will be a PASS\n # because its value passes the validator specified (0 <= 5 <= 10).\n@@ -106,7 +106,7 @@\n # must be a string containing the measurement name). If you want to specify\n # multiple measurements this way, you can stack multiple decorators.\n @htf.measures('inline_kwargs', docstring='This measurement is declared inline!',\n- units=units.HERTZ, validators=[validators.in_range(0, 10)])\n+ units=htf.units.HERTZ, validators=[validators.in_range(0, 10)])\n @htf.measures('another_inline', docstring='Because why not?')\n def inline_phase(test):\n # This measurement will have an outcome of FAIL, because the set value of 15\n", "issue": "Update documentation and examples/measurement.py\nRan into some issues on a fresh install from the documentation. I needed to add the package libprotobuf-dev to the apt-get install line in CONTRIBUTING.md to get protobufs to build and got an error when trying to run the example measurements.py that units could not be found, resolved by importing openhtf.utils.units\n", "before_files": [{"content": "# Copyright 2016 Google Inc. All Rights Reserved.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Example OpenHTF test demonstrating use of measurements.\n\nRun with (your virtualenv must be activated first):\n\n python measurements.py\n\nAfterwards, check out the output in measurements.json. If you open both this\nexample test and that output file and compare them, you should be able to see\nwhere measurement values end up in the output and what the corresponding code\nlooks like that sets them.\n\nTODO(someone): Write these examples.\nFor more complex topics, see the validators.py and dimensions.py examples.\n\nFor a simpler example, see the hello_world.py example. If the output of this\ntest is confusing, start with the hello_world.py output and compare it to this\ntest's output.\n\nSome constraints on measurements:\n\n - Measurement names must be valid python variable names. This is mostly for\n sanity, but also ensures you can access them via attribute access in phases.\n This applies *after* any with_args() substitution (not covered in this\n tutorial, see the phases.py example for more details).\n\n - You cannot declare the same measurement name multiple times on the same\n phase. Technically, you *can* declare the same measurement on multiple\n phases; measurements are attached to a specific phase in the output. 
This\n isn't recommended, though, because it makes it difficult to flatten a test's\n measurements, which some output formats require.\n\"\"\"\n\n# Import openhtf with an abbreviated name, as we'll be using a bunch of stuff\n# from it throughout our test scripts. See __all__ at the top of\n# openhtf/__init__.py for details on what's in top-of-module namespace.\nimport openhtf as htf\n\n# Import this output mechanism as it's the specific one we want to use.\nfrom openhtf.output.callbacks import json_factory\n\n# You won't normally need to import this, see validators.py example for\n# more details. It's used for the inline measurement declaration example\n# below, but normally you'll only import it when you want to define custom\n# measurement validators.\nfrom openhtf.util import validators\n\n\n# Simple example of measurement use, similar to hello_world.py usage.\[email protected](htf.Measurement('hello_world_measurement'))\ndef hello_phase(test):\n test.measurements.hello_world_measurement = 'Hello!'\n\n\n# An alternative simpler syntax that creates the Measurement for you.\[email protected]('hello_again_measurement')\ndef again_phase(test):\n test.measurements.hello_again_measurement = 'Again!'\n\n\n# Multiple measurements can be specified in a single decorator, using either of\n# the above syntaxes. Technically, these syntaxes can be mixed and matched, but\n# as a matter of convention you should always use one or the other within a\n# single decorator call. You'll also note that you can stack multiple\n# decorations on a single phase. This is useful if you have a handful of simple\n# measurements, and then one or two with more complex declarations (see below).\[email protected]('first_measurement', 'second_measurement')\[email protected](htf.Measurement('third'), htf.Measurement('fourth'))\ndef lots_of_measurements(test):\n test.measurements.first_measurement = 'First!'\n # Measurements can also be access via indexing rather than attributes.\n test.measurements['second_measurement'] = 'Second :('\n # This can be handy for iterating over measurements.\n for measurement in ('third', 'fourth'):\n test.measurements[measurement] = measurement + ' is the best!'\n\n\n# Basic key/value measurements are handy, but we may also want to validate a\n# measurement against some criteria, or specify additional information\n# describing the measurement. Validators can get quite complex, for more\n# details, see the validators.py example.\[email protected](htf.Measurement('validated_measurement').in_range(0, 10).doc(\n 'This measurement is validated.').with_units(units.SECOND))\ndef measure_seconds(test):\n # The 'outcome' of this measurement in the test_record result will be a PASS\n # because its value passes the validator specified (0 <= 5 <= 10).\n test.measurements.validated_measurement = 5\n\n\n# These additional attributes can also be specified inline as kwargs passed\n# directly to the @measures decorator. If you do so, however, you must\n# specify exactly one measurement with that decorator (ie. the first argument\n# must be a string containing the measurement name). 
If you want to specify\n# multiple measurements this way, you can stack multiple decorators.\[email protected]('inline_kwargs', docstring='This measurement is declared inline!',\n units=units.HERTZ, validators=[validators.in_range(0, 10)])\[email protected]('another_inline', docstring='Because why not?')\ndef inline_phase(test):\n # This measurement will have an outcome of FAIL, because the set value of 15\n # will not pass the 0 <= x <= 10 validator.\n test.measurements.inline_kwargs = 15\n test.measurements.another_inline = 'This one is unvalidated.'\n\n # Let's log a message so the operator knows the test should fail.\n test.logger.info('Set inline_kwargs to a failing value, test should FAIL!')\n\n\nif __name__ == '__main__':\n # We instantiate our OpenHTF test with the phases we want to run as args.\n test = htf.Test(hello_phase, again_phase, lots_of_measurements,\n measure_seconds, inline_phase)\n\n # In order to view the result of the test, we have to output it somewhere,\n # and a local JSON file is a convenient way to do this. Custom output\n # mechanisms can be implemented, but for now we'll just keep it simple.\n # This will always output to the same ./measurements.json file, formatted\n # slightly for human readability.\n test.add_output_callbacks(\n json_factory.OutputToJSON('./measurements.json', indent=2))\n\n # Unlike hello_world.py, where we prompt for a DUT ID, here we'll just\n # use an arbitrary one.\n test.execute(test_start=lambda: 'MyDutId')\n", "path": "examples/measurements.py"}]} | 2,383 | 314 |
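Editor's note: the openhtf fix is a namespace qualification. The example imports `openhtf as htf` but referred to the bare name `units`, which is never imported, so the diff routes the reference through the alias. A short hedged sketch of the corrected declaration style (the measurement name and value are made up):

```python
import openhtf as htf

@htf.measures(htf.Measurement('frequency')
              .in_range(0, 10)
              .with_units(htf.units.HERTZ))  # qualified through the htf alias
def sketch_phase(test):
    test.measurements.frequency = 5
```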
gh_patches_debug_32683 | rasdani/github-patches | git_diff | rotki__rotki-152 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BTG or other assets with no market in bittrex crash the app
## Problem Definition
If a user holds an asset in bittrex that does not have a market on the exchange, such as `BTG`, then during a balance query rotkehlchen will crash with `ValueError: Bittrex: Could not find BTC market for "BTG"`
## Task
Fix the crash, and use other sources for market price data when this happens.
</issue>
<code>
[start of rotkehlchen/bittrex.py]
1 import time
2 import hmac
3 import hashlib
4 from urllib.parse import urlencode
5 from json.decoder import JSONDecodeError
6
7 from typing import Dict, Tuple, Optional, Union, List, cast
8 from rotkehlchen.utils import (
9 createTimeStamp,
10 get_pair_position,
11 rlk_jsonloads,
12 cache_response_timewise,
13 )
14 from rotkehlchen.exchange import Exchange
15 from rotkehlchen.order_formatting import Trade
16 from rotkehlchen.fval import FVal
17 from rotkehlchen.errors import RemoteError
18 from rotkehlchen.inquirer import Inquirer
19 from rotkehlchen import typing
20
21 import logging
22 logger = logging.getLogger(__name__)
23
24 BITTREX_MARKET_METHODS = {
25 'getopenorders',
26 'cancel',
27 'sellmarket',
28 'selllimit',
29 'buymarket',
30 'buylimit'
31 }
32 BITTREX_ACCOUNT_METHODS = {
33 'getbalances',
34 'getbalance',
35 'getdepositaddress',
36 'withdraw',
37 'getorderhistory'
38 }
39
40
41 def bittrex_pair_to_world(pair: str) -> str:
42 return pair.replace('-', '_')
43
44
45 def world_pair_to_bittrex(pair: str) -> str:
46 return pair.replace('_', '-')
47
48
49 def trade_from_bittrex(bittrex_trade: Dict) -> Trade:
50 """Turn a bittrex trade returned from bittrex trade history to our common trade
51 history format"""
52 amount = FVal(bittrex_trade['Quantity']) - FVal(bittrex_trade['QuantityRemaining'])
53 rate = FVal(bittrex_trade['PricePerUnit'])
54 order_type = bittrex_trade['OrderType']
55 bittrex_price = FVal(bittrex_trade['Price'])
56 bittrex_commission = FVal(bittrex_trade['Commission'])
57 pair = bittrex_pair_to_world(bittrex_trade['Exchange'])
58 base_currency = get_pair_position(pair, 'first')
59 if order_type == 'LIMIT_BUY':
60 order_type = 'buy'
61 cost = bittrex_price + bittrex_commission
62 fee = bittrex_commission
63 elif order_type == 'LIMIT_SEL':
64 order_type = 'sell'
65 cost = bittrex_price - bittrex_commission
66 fee = bittrex_commission
67 else:
68 raise ValueError('Got unexpected order type "{}" for bittrex trade'.format(order_type))
69
70 return Trade(
71 timestamp=bittrex_trade['TimeStamp'],
72 pair=pair,
73 type=order_type,
74 rate=rate,
75 cost=cost,
76 cost_currency=base_currency,
77 fee=fee,
78 fee_currency=base_currency,
79 amount=amount,
80 location='bittrex'
81 )
82
83
84 class Bittrex(Exchange):
85 def __init__(
86 self,
87 api_key: typing.ApiKey,
88 secret: typing.ApiSecret,
89 inquirer: Inquirer,
90 data_dir: typing.FilePath
91 ):
92 super(Bittrex, self).__init__('bittrex', api_key, secret, data_dir)
93 self.apiversion = 'v1.1'
94 self.uri = 'https://bittrex.com/api/{}/'.format(self.apiversion)
95 self.inquirer = inquirer
96
97 def first_connection(self):
98 self.first_connection_made = True
99
100 def validate_api_key(self) -> Tuple[bool, str]:
101 try:
102 self.api_query('getbalance', {'currency': 'BTC'})
103 except ValueError as e:
104 error = str(e)
105 if error == 'APIKEY_INVALID':
106 return False, 'Provided API Key is invalid'
107 elif error == 'INVALID_SIGNATURE':
108 return False, 'Provided API Secret is invalid'
109 else:
110 raise
111 return True, ''
112
113 def api_query(
114 self,
115 method: str,
116 options: Optional[Dict] = None,
117 ) -> Union[List, Dict]:
118 """
119 Queries Bittrex with given method and options
120 """
121 if not options:
122 options = {}
123 nonce = str(int(time.time() * 1000))
124 method_type = 'public'
125
126 if method in BITTREX_MARKET_METHODS:
127 method_type = 'market'
128 elif method in BITTREX_ACCOUNT_METHODS:
129 method_type = 'account'
130
131 request_url = self.uri + method_type + '/' + method + '?'
132
133 if method_type != 'public':
134 request_url += 'apikey=' + self.api_key.decode() + "&nonce=" + nonce + '&'
135
136 request_url += urlencode(options)
137 signature = hmac.new(
138 self.secret,
139 request_url.encode(),
140 hashlib.sha512
141 ).hexdigest()
142 self.session.headers.update({'apisign': signature})
143 response = self.session.get(request_url)
144 try:
145 json_ret = rlk_jsonloads(response.text)
146 except JSONDecodeError:
147 raise RemoteError('Bittrex returned invalid JSON response')
148
149 if json_ret['success'] is not True:
150 raise RemoteError(json_ret['message'])
151 return json_ret['result']
152
153 def get_btc_price(self, asset: typing.BlockchainAsset) -> Optional[FVal]:
154 if asset == 'BTC':
155 return None
156 btc_price = None
157 btc_pair = 'BTC-' + asset
158 for market in self.markets:
159 if market['MarketName'] == btc_pair:
160 btc_price = FVal(market['Last'])
161 break
162
163 if btc_price is None:
164 raise ValueError('Bittrex: Could not find BTC market for "{}"'.format(asset))
165
166 return btc_price
167
168 @cache_response_timewise()
169 def query_balances(self) -> Tuple[Optional[dict], str]:
170 try:
171 self.markets = self.api_query('getmarketsummaries')
172 resp = self.api_query('getbalances')
173 except RemoteError as e:
174 msg = (
175 'Bittrex API request failed. Could not reach bittrex due '
176 'to {}'.format(e)
177 )
178 logger.error(msg)
179 return None, msg
180
181 returned_balances = dict()
182 for entry in resp:
183 currency = entry['Currency']
184 usd_price = self.inquirer.find_usd_price(
185 asset=currency,
186 asset_btc_price=self.get_btc_price(currency)
187 )
188
189 balance = dict()
190 balance['amount'] = FVal(entry['Balance'])
191 balance['usd_value'] = FVal(balance['amount']) * usd_price
192 returned_balances[currency] = balance
193
194 return returned_balances, ''
195
196 def query_trade_history(
197 self,
198 start_ts: typing.Timestamp,
199 end_ts: typing.Timestamp,
200 end_at_least_ts: typing.Timestamp,
201 market: Optional[str] = None,
202 count: Optional[int] = None,
203 ) -> List:
204
205 options: Dict[str, Union[str, int]] = dict()
206 cache = self.check_trades_cache(start_ts, end_at_least_ts)
207 cache = cast(List, cache)
208 if market is not None:
209 options['market'] = world_pair_to_bittrex(market)
210 elif cache is not None:
211 return cache
212
213 if count is not None:
214 options['count'] = count
215 order_history = self.api_query('getorderhistory', options)
216
217 returned_history = list()
218 for order in order_history:
219 order_timestamp = createTimeStamp(order['TimeStamp'], formatstr="%Y-%m-%dT%H:%M:%S.%f")
220 if start_ts is not None and order_timestamp < start_ts:
221 continue
222 if end_ts is not None and order_timestamp > end_ts:
223 break
224 order['TimeStamp'] = order_timestamp
225 returned_history.append(order)
226
227 self.update_trades_cache(returned_history, start_ts, end_ts)
228 return returned_history
229
[end of rotkehlchen/bittrex.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rotkehlchen/bittrex.py b/rotkehlchen/bittrex.py
--- a/rotkehlchen/bittrex.py
+++ b/rotkehlchen/bittrex.py
@@ -1,24 +1,24 @@
-import time
-import hmac
import hashlib
-from urllib.parse import urlencode
+import hmac
+import logging
+import time
from json.decoder import JSONDecodeError
+from typing import Dict, List, Optional, Tuple, Union, cast
+from urllib.parse import urlencode
-from typing import Dict, Tuple, Optional, Union, List, cast
+from rotkehlchen import typing
+from rotkehlchen.errors import RemoteError
+from rotkehlchen.exchange import Exchange
+from rotkehlchen.fval import FVal
+from rotkehlchen.inquirer import Inquirer
+from rotkehlchen.order_formatting import Trade
from rotkehlchen.utils import (
+ cache_response_timewise,
createTimeStamp,
get_pair_position,
rlk_jsonloads,
- cache_response_timewise,
)
-from rotkehlchen.exchange import Exchange
-from rotkehlchen.order_formatting import Trade
-from rotkehlchen.fval import FVal
-from rotkehlchen.errors import RemoteError
-from rotkehlchen.inquirer import Inquirer
-from rotkehlchen import typing
-import logging
logger = logging.getLogger(__name__)
BITTREX_MARKET_METHODS = {
@@ -160,9 +160,6 @@
btc_price = FVal(market['Last'])
break
- if btc_price is None:
- raise ValueError('Bittrex: Could not find BTC market for "{}"'.format(asset))
-
return btc_price
@cache_response_timewise()
@@ -181,9 +178,10 @@
returned_balances = dict()
for entry in resp:
currency = entry['Currency']
+ asset_btc_price = self.get_btc_price(currency)
usd_price = self.inquirer.find_usd_price(
asset=currency,
- asset_btc_price=self.get_btc_price(currency)
+ asset_btc_price=asset_btc_price
)
balance = dict()
| {"golden_diff": "diff --git a/rotkehlchen/bittrex.py b/rotkehlchen/bittrex.py\n--- a/rotkehlchen/bittrex.py\n+++ b/rotkehlchen/bittrex.py\n@@ -1,24 +1,24 @@\n-import time\n-import hmac\n import hashlib\n-from urllib.parse import urlencode\n+import hmac\n+import logging\n+import time\n from json.decoder import JSONDecodeError\n+from typing import Dict, List, Optional, Tuple, Union, cast\n+from urllib.parse import urlencode\n \n-from typing import Dict, Tuple, Optional, Union, List, cast\n+from rotkehlchen import typing\n+from rotkehlchen.errors import RemoteError\n+from rotkehlchen.exchange import Exchange\n+from rotkehlchen.fval import FVal\n+from rotkehlchen.inquirer import Inquirer\n+from rotkehlchen.order_formatting import Trade\n from rotkehlchen.utils import (\n+ cache_response_timewise,\n createTimeStamp,\n get_pair_position,\n rlk_jsonloads,\n- cache_response_timewise,\n )\n-from rotkehlchen.exchange import Exchange\n-from rotkehlchen.order_formatting import Trade\n-from rotkehlchen.fval import FVal\n-from rotkehlchen.errors import RemoteError\n-from rotkehlchen.inquirer import Inquirer\n-from rotkehlchen import typing\n \n-import logging\n logger = logging.getLogger(__name__)\n \n BITTREX_MARKET_METHODS = {\n@@ -160,9 +160,6 @@\n btc_price = FVal(market['Last'])\n break\n \n- if btc_price is None:\n- raise ValueError('Bittrex: Could not find BTC market for \"{}\"'.format(asset))\n-\n return btc_price\n \n @cache_response_timewise()\n@@ -181,9 +178,10 @@\n returned_balances = dict()\n for entry in resp:\n currency = entry['Currency']\n+ asset_btc_price = self.get_btc_price(currency)\n usd_price = self.inquirer.find_usd_price(\n asset=currency,\n- asset_btc_price=self.get_btc_price(currency)\n+ asset_btc_price=asset_btc_price\n )\n \n balance = dict()\n", "issue": "BTG or other assets with no market in bittrex crash the app\n## Problem Definition\r\n\r\nIf a user holds an asset in bittrex that does not have a market in the exchange, like say `BTG`, then during balances query rotkehlchen will crash with `ValueError: Bittrex: Could not find BTC market for \"BTG\"`\r\n\r\n## Task\r\n\r\nFix the crash, and use other sources for market price data in the case this happens.\n", "before_files": [{"content": "import time\nimport hmac\nimport hashlib\nfrom urllib.parse import urlencode\nfrom json.decoder import JSONDecodeError\n\nfrom typing import Dict, Tuple, Optional, Union, List, cast\nfrom rotkehlchen.utils import (\n createTimeStamp,\n get_pair_position,\n rlk_jsonloads,\n cache_response_timewise,\n)\nfrom rotkehlchen.exchange import Exchange\nfrom rotkehlchen.order_formatting import Trade\nfrom rotkehlchen.fval import FVal\nfrom rotkehlchen.errors import RemoteError\nfrom rotkehlchen.inquirer import Inquirer\nfrom rotkehlchen import typing\n\nimport logging\nlogger = logging.getLogger(__name__)\n\nBITTREX_MARKET_METHODS = {\n 'getopenorders',\n 'cancel',\n 'sellmarket',\n 'selllimit',\n 'buymarket',\n 'buylimit'\n}\nBITTREX_ACCOUNT_METHODS = {\n 'getbalances',\n 'getbalance',\n 'getdepositaddress',\n 'withdraw',\n 'getorderhistory'\n}\n\n\ndef bittrex_pair_to_world(pair: str) -> str:\n return pair.replace('-', '_')\n\n\ndef world_pair_to_bittrex(pair: str) -> str:\n return pair.replace('_', '-')\n\n\ndef trade_from_bittrex(bittrex_trade: Dict) -> Trade:\n \"\"\"Turn a bittrex trade returned from bittrex trade history to our common trade\n history format\"\"\"\n amount = FVal(bittrex_trade['Quantity']) - FVal(bittrex_trade['QuantityRemaining'])\n rate = 
FVal(bittrex_trade['PricePerUnit'])\n order_type = bittrex_trade['OrderType']\n bittrex_price = FVal(bittrex_trade['Price'])\n bittrex_commission = FVal(bittrex_trade['Commission'])\n pair = bittrex_pair_to_world(bittrex_trade['Exchange'])\n base_currency = get_pair_position(pair, 'first')\n if order_type == 'LIMIT_BUY':\n order_type = 'buy'\n cost = bittrex_price + bittrex_commission\n fee = bittrex_commission\n elif order_type == 'LIMIT_SEL':\n order_type = 'sell'\n cost = bittrex_price - bittrex_commission\n fee = bittrex_commission\n else:\n raise ValueError('Got unexpected order type \"{}\" for bittrex trade'.format(order_type))\n\n return Trade(\n timestamp=bittrex_trade['TimeStamp'],\n pair=pair,\n type=order_type,\n rate=rate,\n cost=cost,\n cost_currency=base_currency,\n fee=fee,\n fee_currency=base_currency,\n amount=amount,\n location='bittrex'\n )\n\n\nclass Bittrex(Exchange):\n def __init__(\n self,\n api_key: typing.ApiKey,\n secret: typing.ApiSecret,\n inquirer: Inquirer,\n data_dir: typing.FilePath\n ):\n super(Bittrex, self).__init__('bittrex', api_key, secret, data_dir)\n self.apiversion = 'v1.1'\n self.uri = 'https://bittrex.com/api/{}/'.format(self.apiversion)\n self.inquirer = inquirer\n\n def first_connection(self):\n self.first_connection_made = True\n\n def validate_api_key(self) -> Tuple[bool, str]:\n try:\n self.api_query('getbalance', {'currency': 'BTC'})\n except ValueError as e:\n error = str(e)\n if error == 'APIKEY_INVALID':\n return False, 'Provided API Key is invalid'\n elif error == 'INVALID_SIGNATURE':\n return False, 'Provided API Secret is invalid'\n else:\n raise\n return True, ''\n\n def api_query(\n self,\n method: str,\n options: Optional[Dict] = None,\n ) -> Union[List, Dict]:\n \"\"\"\n Queries Bittrex with given method and options\n \"\"\"\n if not options:\n options = {}\n nonce = str(int(time.time() * 1000))\n method_type = 'public'\n\n if method in BITTREX_MARKET_METHODS:\n method_type = 'market'\n elif method in BITTREX_ACCOUNT_METHODS:\n method_type = 'account'\n\n request_url = self.uri + method_type + '/' + method + '?'\n\n if method_type != 'public':\n request_url += 'apikey=' + self.api_key.decode() + \"&nonce=\" + nonce + '&'\n\n request_url += urlencode(options)\n signature = hmac.new(\n self.secret,\n request_url.encode(),\n hashlib.sha512\n ).hexdigest()\n self.session.headers.update({'apisign': signature})\n response = self.session.get(request_url)\n try:\n json_ret = rlk_jsonloads(response.text)\n except JSONDecodeError:\n raise RemoteError('Bittrex returned invalid JSON response')\n\n if json_ret['success'] is not True:\n raise RemoteError(json_ret['message'])\n return json_ret['result']\n\n def get_btc_price(self, asset: typing.BlockchainAsset) -> Optional[FVal]:\n if asset == 'BTC':\n return None\n btc_price = None\n btc_pair = 'BTC-' + asset\n for market in self.markets:\n if market['MarketName'] == btc_pair:\n btc_price = FVal(market['Last'])\n break\n\n if btc_price is None:\n raise ValueError('Bittrex: Could not find BTC market for \"{}\"'.format(asset))\n\n return btc_price\n\n @cache_response_timewise()\n def query_balances(self) -> Tuple[Optional[dict], str]:\n try:\n self.markets = self.api_query('getmarketsummaries')\n resp = self.api_query('getbalances')\n except RemoteError as e:\n msg = (\n 'Bittrex API request failed. 
Could not reach bittrex due '\n 'to {}'.format(e)\n )\n logger.error(msg)\n return None, msg\n\n returned_balances = dict()\n for entry in resp:\n currency = entry['Currency']\n usd_price = self.inquirer.find_usd_price(\n asset=currency,\n asset_btc_price=self.get_btc_price(currency)\n )\n\n balance = dict()\n balance['amount'] = FVal(entry['Balance'])\n balance['usd_value'] = FVal(balance['amount']) * usd_price\n returned_balances[currency] = balance\n\n return returned_balances, ''\n\n def query_trade_history(\n self,\n start_ts: typing.Timestamp,\n end_ts: typing.Timestamp,\n end_at_least_ts: typing.Timestamp,\n market: Optional[str] = None,\n count: Optional[int] = None,\n ) -> List:\n\n options: Dict[str, Union[str, int]] = dict()\n cache = self.check_trades_cache(start_ts, end_at_least_ts)\n cache = cast(List, cache)\n if market is not None:\n options['market'] = world_pair_to_bittrex(market)\n elif cache is not None:\n return cache\n\n if count is not None:\n options['count'] = count\n order_history = self.api_query('getorderhistory', options)\n\n returned_history = list()\n for order in order_history:\n order_timestamp = createTimeStamp(order['TimeStamp'], formatstr=\"%Y-%m-%dT%H:%M:%S.%f\")\n if start_ts is not None and order_timestamp < start_ts:\n continue\n if end_ts is not None and order_timestamp > end_ts:\n break\n order['TimeStamp'] = order_timestamp\n returned_history.append(order)\n\n self.update_trades_cache(returned_history, start_ts, end_ts)\n return returned_history\n", "path": "rotkehlchen/bittrex.py"}]} | 2,897 | 488 |
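Editor's note: the core of the rotki fix is to stop raising when an asset has no BTC market and instead hand `None` to the price inquirer, which can then consult other data sources. A simplified, self-contained sketch of that lookup pattern (the market data is fabricated and this is not rotki's actual Inquirer API):

```python
markets = [
    {'MarketName': 'BTC-ETH', 'Last': 0.031},
    # note: no 'BTC-BTG' entry, mirroring the bug report
]

def get_btc_price(asset):
    """BTC price of `asset` on the exchange, or None when it has no market."""
    if asset == 'BTC':
        return None
    pair = 'BTC-' + asset
    for market in markets:
        if market['MarketName'] == pair:
            return market['Last']
    return None  # e.g. BTG: let the caller fall back to another price source

print(get_btc_price('ETH'))  # 0.031
print(get_btc_price('BTG'))  # None -> caller queries another oracle instead of crashing
```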
gh_patches_debug_6849 | rasdani/github-patches | git_diff | WordPress__openverse-api-233 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug] A circular import prevents starting the project correctly
## Description
<!-- Concisely describe the bug. -->
There is a problem with the model imports; run the project and you will see:
```
web_1 | Exception in thread django-main-thread:
web_1 | Traceback (most recent call last):
web_1 | File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
web_1 | self.run()
web_1 | File "/usr/local/lib/python3.9/threading.py", line 910, in run
web_1 | self._target(*self._args, **self._kwargs)
web_1 | File "/usr/local/lib/python3.9/site-packages/django/utils/autoreload.py", line 64, in wrapper
web_1 | fn(*args, **kwargs)
web_1 | File "/usr/local/lib/python3.9/site-packages/django/core/management/commands/runserver.py", line 110, in inner_run
web_1 | autoreload.raise_last_exception()
web_1 | File "/usr/local/lib/python3.9/site-packages/django/utils/autoreload.py", line 87, in raise_last_exception
web_1 | raise _exception[1]
web_1 | File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 375, in execute
web_1 | autoreload.check_errors(django.setup)()
web_1 | File "/usr/local/lib/python3.9/site-packages/django/utils/autoreload.py", line 64, in wrapper
web_1 | fn(*args, **kwargs)
web_1 | File "/usr/local/lib/python3.9/site-packages/django/__init__.py", line 24, in setup
web_1 | apps.populate(settings.INSTALLED_APPS)
web_1 | File "/usr/local/lib/python3.9/site-packages/django/apps/registry.py", line 114, in populate
web_1 | app_config.import_models()
web_1 | File "/usr/local/lib/python3.9/site-packages/django/apps/config.py", line 301, in import_models
web_1 | self.models_module = import_module(models_module_name)
web_1 | File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
web_1 | return _bootstrap._gcd_import(name[level:], package, level)
web_1 | File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
web_1 | File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
web_1 | File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
web_1 | File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
web_1 | File "<frozen importlib._bootstrap_external>", line 850, in exec_module
web_1 | File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
web_1 | File "/openverse-api/catalog/api/models/__init__.py", line 1, in <module>
web_1 | from catalog.api.models.audio import (
web_1 | File "/openverse-api/catalog/api/models/audio.py", line 2, in <module>
web_1 | from catalog.api.models import OpenLedgerModel
web_1 | ImportError: cannot import name 'OpenLedgerModel' from partially initialized module 'catalog.api.models' (most likely due to a circular import) (/openverse-api/catalog/api/models/__init__.py)
```
## Expectation
<!-- Concisely describe what you expected to happen. -->
The project should start without errors and run normally, passing tests.
## Additional context
<!-- Add any other context about the problem here; or delete the section entirely. -->
The wrong order is introduced by the `isort` rules, so we should make an exception for these lines or for the whole file.
## Resolution
<!-- Replace the [ ] with [x] to check the box. -->
- [ ] I would be interested in resolving this bug.
</issue>
<code>
[start of openverse-api/catalog/api/models/__init__.py]
1 from catalog.api.models.audio import (
2 AltAudioFile,
3 Audio,
4 AudioList,
5 AudioReport,
6 AudioSet,
7 DeletedAudio,
8 MatureAudio,
9 )
10 from catalog.api.models.base import OpenLedgerModel
11 from catalog.api.models.image import (
12 DeletedImage,
13 Image,
14 ImageList,
15 ImageReport,
16 MatureImage,
17 )
18 from catalog.api.models.media import (
19 DEINDEXED,
20 DMCA,
21 MATURE,
22 MATURE_FILTERED,
23 NO_ACTION,
24 OTHER,
25 PENDING,
26 )
27 from catalog.api.models.models import ContentProvider, ShortenedLink, SourceLogo, Tag
28 from catalog.api.models.oauth import (
29 OAuth2Registration,
30 OAuth2Verification,
31 ThrottledApplication,
32 )
33
[end of openverse-api/catalog/api/models/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/openverse-api/catalog/api/models/__init__.py b/openverse-api/catalog/api/models/__init__.py
--- a/openverse-api/catalog/api/models/__init__.py
+++ b/openverse-api/catalog/api/models/__init__.py
@@ -1,3 +1,4 @@
+from catalog.api.models.base import OpenLedgerModel # isort:skip
from catalog.api.models.audio import (
AltAudioFile,
Audio,
@@ -7,7 +8,6 @@
DeletedAudio,
MatureAudio,
)
-from catalog.api.models.base import OpenLedgerModel
from catalog.api.models.image import (
DeletedImage,
Image,
| {"golden_diff": "diff --git a/openverse-api/catalog/api/models/__init__.py b/openverse-api/catalog/api/models/__init__.py\n--- a/openverse-api/catalog/api/models/__init__.py\n+++ b/openverse-api/catalog/api/models/__init__.py\n@@ -1,3 +1,4 @@\n+from catalog.api.models.base import OpenLedgerModel # isort:skip\n from catalog.api.models.audio import (\n AltAudioFile,\n Audio,\n@@ -7,7 +8,6 @@\n DeletedAudio,\n MatureAudio,\n )\n-from catalog.api.models.base import OpenLedgerModel\n from catalog.api.models.image import (\n DeletedImage,\n Image,\n", "issue": "[Bug] A circular import prevents starting the project correctly\n## Description\r\n<!-- Concisely describe the bug. -->\r\nThere is a problem with models imports, run the project and see:\r\n\r\n```\r\nweb_1 | Exception in thread django-main-thread:\r\nweb_1 | Traceback (most recent call last):\r\nweb_1 | File \"/usr/local/lib/python3.9/threading.py\", line 973, in _bootstrap_inner\r\nweb_1 | self.run()\r\nweb_1 | File \"/usr/local/lib/python3.9/threading.py\", line 910, in run\r\nweb_1 | self._target(*self._args, **self._kwargs)\r\nweb_1 | File \"/usr/local/lib/python3.9/site-packages/django/utils/autoreload.py\", line 64, in wrapper\r\nweb_1 | fn(*args, **kwargs)\r\nweb_1 | File \"/usr/local/lib/python3.9/site-packages/django/core/management/commands/runserver.py\", line 110, in inner_run\r\nweb_1 | autoreload.raise_last_exception()\r\nweb_1 | File \"/usr/local/lib/python3.9/site-packages/django/utils/autoreload.py\", line 87, in raise_last_exception\r\nweb_1 | raise _exception[1]\r\nweb_1 | File \"/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py\", line 375, in execute\r\nweb_1 | autoreload.check_errors(django.setup)()\r\nweb_1 | File \"/usr/local/lib/python3.9/site-packages/django/utils/autoreload.py\", line 64, in wrapper\r\nweb_1 | fn(*args, **kwargs)\r\nweb_1 | File \"/usr/local/lib/python3.9/site-packages/django/__init__.py\", line 24, in setup\r\nweb_1 | apps.populate(settings.INSTALLED_APPS)\r\nweb_1 | File \"/usr/local/lib/python3.9/site-packages/django/apps/registry.py\", line 114, in populate\r\nweb_1 | app_config.import_models()\r\nweb_1 | File \"/usr/local/lib/python3.9/site-packages/django/apps/config.py\", line 301, in import_models\r\nweb_1 | self.models_module = import_module(models_module_name)\r\nweb_1 | File \"/usr/local/lib/python3.9/importlib/__init__.py\", line 127, in import_module\r\nweb_1 | return _bootstrap._gcd_import(name[level:], package, level)\r\nweb_1 | File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\r\nweb_1 | File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\r\nweb_1 | File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\r\nweb_1 | File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\r\nweb_1 | File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\r\nweb_1 | File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\r\nweb_1 | File \"/openverse-api/catalog/api/models/__init__.py\", line 1, in <module>\r\nweb_1 | from catalog.api.models.audio import (\r\nweb_1 | File \"/openverse-api/catalog/api/models/audio.py\", line 2, in <module>\r\nweb_1 | from catalog.api.models import OpenLedgerModel\r\nweb_1 | ImportError: cannot import name 'OpenLedgerModel' from partially initialized module 'catalog.api.models' (most likely due to a circular import) (/openverse-api/catalog/api/models/__init__.py)\r\n```\r\n\r\n## Expectation\r\n<!-- Concisely describe what you expected 
to happen. -->\r\nThe project should start without errors and run normally, passing tests.\r\n\r\n## Additional context\r\n<!-- Add any other context about the problem here; or delete the section entirely. -->\r\nThe wrong order is introduced due to the `isort` rules so we should make an exception for these lines or the file.\r\n\r\n## Resolution\r\n<!-- Replace the [ ] with [x] to check the box. -->\r\n- [ ] \ud83d\ude4b I would be interested in resolving this bug.\r\n\n", "before_files": [{"content": "from catalog.api.models.audio import (\n AltAudioFile,\n Audio,\n AudioList,\n AudioReport,\n AudioSet,\n DeletedAudio,\n MatureAudio,\n)\nfrom catalog.api.models.base import OpenLedgerModel\nfrom catalog.api.models.image import (\n DeletedImage,\n Image,\n ImageList,\n ImageReport,\n MatureImage,\n)\nfrom catalog.api.models.media import (\n DEINDEXED,\n DMCA,\n MATURE,\n MATURE_FILTERED,\n NO_ACTION,\n OTHER,\n PENDING,\n)\nfrom catalog.api.models.models import ContentProvider, ShortenedLink, SourceLogo, Tag\nfrom catalog.api.models.oauth import (\n OAuth2Registration,\n OAuth2Verification,\n ThrottledApplication,\n)\n", "path": "openverse-api/catalog/api/models/__init__.py"}]} | 1,747 | 139 |
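Editor's note: the openverse patch breaks the cycle by binding the base class in `catalog.api.models` before any submodule that runs `from catalog.api.models import OpenLedgerModel` is loaded, and it adds an `isort:skip` pragma so the formatter cannot reorder it back. The ordering it establishes, abridged from the diff above with explanatory comments added:

```python
# catalog/api/models/__init__.py (after the fix, abridged)
from catalog.api.models.base import OpenLedgerModel  # isort:skip

# audio.py runs `from catalog.api.models import OpenLedgerModel` at import time,
# so OpenLedgerModel has to be bound above before the submodules are imported.
from catalog.api.models.audio import Audio
from catalog.api.models.image import Image
```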
gh_patches_debug_25787 | rasdani/github-patches | git_diff | pypa__setuptools-1905 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TestDepends.testRequire regression in 41.6.0 (py3)
While trying to build the new release of setuptools, I get the following test failure:
```
==================================================================== FAILURES =====================================================================
_____________________________________________________________ TestDepends.testRequire _____________________________________________________________
self = <setuptools.tests.test_setuptools.TestDepends object at 0x7fbfae31d710>
@needs_bytecode
def testRequire(self):
req = Require('Json', '1.0.3', 'json')
assert req.name == 'Json'
assert req.module == 'json'
assert req.requested_version == '1.0.3'
assert req.attribute == '__version__'
assert req.full_name() == 'Json-1.0.3'
from json import __version__
assert req.get_version() == __version__
assert req.version_ok('1.0.9')
assert not req.version_ok('0.9.1')
assert not req.version_ok('unknown')
assert req.is_present()
assert req.is_current()
req = Require('Json 3000', '03000', 'json', format=LooseVersion)
assert req.is_present()
assert not req.is_current()
assert not req.version_ok('unknown')
req = Require('Do-what-I-mean', '1.0', 'd-w-i-m')
assert not req.is_present()
assert not req.is_current()
req = Require('Tests', None, 'tests', homepage="http://example.com")
assert req.format is None
assert req.attribute is None
assert req.requested_version is None
assert req.full_name() == 'Tests'
assert req.homepage == 'http://example.com'
from setuptools.tests import __path__
paths = [os.path.dirname(p) for p in __path__]
> assert req.is_present(paths)
E AssertionError: assert False
E + where False = <bound method Require.is_present of <setuptools.depends.Require object at 0x7fbfae0d0b38>>(['/tmp/portage/dev-python/setuptools-41.6.0/work/setuptools-41.6.0-python3_5/setuptools'])
E + where <bound method Require.is_present of <setuptools.depends.Require object at 0x7fbfae0d0b38>> = <setuptools.depends.Require object at 0x7fbfae0d0b38>.is_present
setuptools/tests/test_setuptools.py:120: AssertionError
```
I can reproduce it reliably with at least pypy3.6 (7.2.0) & python3.5 (3.5.7). I haven't tested other versions yet.
Full build log: [dev-python:setuptools-41.6.0:20191030-083347.log](https://github.com/pypa/setuptools/files/3787797/dev-python.setuptools-41.6.0.20191030-083347.log)
</issue>
<code>
[start of setuptools/_imp.py]
1 """
2 Re-implementation of find_module and get_frozen_object
3 from the deprecated imp module.
4 """
5
6 import os
7 import importlib.util
8 import importlib.machinery
9
10 from .py34compat import module_from_spec
11
12
13 PY_SOURCE = 1
14 PY_COMPILED = 2
15 C_EXTENSION = 3
16 C_BUILTIN = 6
17 PY_FROZEN = 7
18
19
20 def find_module(module, paths=None):
21 """Just like 'imp.find_module()', but with package support"""
22 spec = importlib.util.find_spec(module, paths)
23 if spec is None:
24 raise ImportError("Can't find %s" % module)
25 if not spec.has_location and hasattr(spec, 'submodule_search_locations'):
26 spec = importlib.util.spec_from_loader('__init__.py', spec.loader)
27
28 kind = -1
29 file = None
30 static = isinstance(spec.loader, type)
31 if spec.origin == 'frozen' or static and issubclass(
32 spec.loader, importlib.machinery.FrozenImporter):
33 kind = PY_FROZEN
34 path = None # imp compabilty
35 suffix = mode = '' # imp compability
36 elif spec.origin == 'built-in' or static and issubclass(
37 spec.loader, importlib.machinery.BuiltinImporter):
38 kind = C_BUILTIN
39 path = None # imp compabilty
40 suffix = mode = '' # imp compability
41 elif spec.has_location:
42 path = spec.origin
43 suffix = os.path.splitext(path)[1]
44 mode = 'r' if suffix in importlib.machinery.SOURCE_SUFFIXES else 'rb'
45
46 if suffix in importlib.machinery.SOURCE_SUFFIXES:
47 kind = PY_SOURCE
48 elif suffix in importlib.machinery.BYTECODE_SUFFIXES:
49 kind = PY_COMPILED
50 elif suffix in importlib.machinery.EXTENSION_SUFFIXES:
51 kind = C_EXTENSION
52
53 if kind in {PY_SOURCE, PY_COMPILED}:
54 file = open(path, mode)
55 else:
56 path = None
57 suffix = mode = ''
58
59 return file, path, (suffix, mode, kind)
60
61
62 def get_frozen_object(module, paths=None):
63 spec = importlib.util.find_spec(module, paths)
64 if not spec:
65 raise ImportError("Can't find %s" % module)
66 return spec.loader.get_code(module)
67
68
69 def get_module(module, paths, info):
70 spec = importlib.util.find_spec(module, paths)
71 if not spec:
72 raise ImportError("Can't find %s" % module)
73 return module_from_spec(spec)
74
[end of setuptools/_imp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setuptools/_imp.py b/setuptools/_imp.py
--- a/setuptools/_imp.py
+++ b/setuptools/_imp.py
@@ -17,9 +17,18 @@
PY_FROZEN = 7
+def find_spec(module, paths):
+ finder = (
+ importlib.machinery.PathFinder().find_spec
+ if isinstance(paths, list) else
+ importlib.util.find_spec
+ )
+ return finder(module, paths)
+
+
def find_module(module, paths=None):
"""Just like 'imp.find_module()', but with package support"""
- spec = importlib.util.find_spec(module, paths)
+ spec = find_spec(module, paths)
if spec is None:
raise ImportError("Can't find %s" % module)
if not spec.has_location and hasattr(spec, 'submodule_search_locations'):
@@ -60,14 +69,14 @@
def get_frozen_object(module, paths=None):
- spec = importlib.util.find_spec(module, paths)
+ spec = find_spec(module, paths)
if not spec:
raise ImportError("Can't find %s" % module)
return spec.loader.get_code(module)
def get_module(module, paths, info):
- spec = importlib.util.find_spec(module, paths)
+ spec = find_spec(module, paths)
if not spec:
raise ImportError("Can't find %s" % module)
return module_from_spec(spec)
| {"golden_diff": "diff --git a/setuptools/_imp.py b/setuptools/_imp.py\n--- a/setuptools/_imp.py\n+++ b/setuptools/_imp.py\n@@ -17,9 +17,18 @@\n PY_FROZEN = 7\n \n \n+def find_spec(module, paths):\n+ finder = (\n+ importlib.machinery.PathFinder().find_spec\n+ if isinstance(paths, list) else\n+ importlib.util.find_spec\n+ )\n+ return finder(module, paths)\n+\n+\n def find_module(module, paths=None):\n \"\"\"Just like 'imp.find_module()', but with package support\"\"\"\n- spec = importlib.util.find_spec(module, paths)\n+ spec = find_spec(module, paths)\n if spec is None:\n raise ImportError(\"Can't find %s\" % module)\n if not spec.has_location and hasattr(spec, 'submodule_search_locations'):\n@@ -60,14 +69,14 @@\n \n \n def get_frozen_object(module, paths=None):\n- spec = importlib.util.find_spec(module, paths)\n+ spec = find_spec(module, paths)\n if not spec:\n raise ImportError(\"Can't find %s\" % module)\n return spec.loader.get_code(module)\n \n \n def get_module(module, paths, info):\n- spec = importlib.util.find_spec(module, paths)\n+ spec = find_spec(module, paths)\n if not spec:\n raise ImportError(\"Can't find %s\" % module)\n return module_from_spec(spec)\n", "issue": "TestDepends.testRequire regression in 41.6.0 (py3)\nWhile trying to build the new release of setuptools, I get the following test failure:\r\n\r\n```\r\n==================================================================== FAILURES =====================================================================\r\n_____________________________________________________________ TestDepends.testRequire _____________________________________________________________\r\n\r\nself = <setuptools.tests.test_setuptools.TestDepends object at 0x7fbfae31d710>\r\n\r\n @needs_bytecode\r\n def testRequire(self):\r\n req = Require('Json', '1.0.3', 'json')\r\n \r\n assert req.name == 'Json'\r\n assert req.module == 'json'\r\n assert req.requested_version == '1.0.3'\r\n assert req.attribute == '__version__'\r\n assert req.full_name() == 'Json-1.0.3'\r\n \r\n from json import __version__\r\n assert req.get_version() == __version__\r\n assert req.version_ok('1.0.9')\r\n assert not req.version_ok('0.9.1')\r\n assert not req.version_ok('unknown')\r\n \r\n assert req.is_present()\r\n assert req.is_current()\r\n \r\n req = Require('Json 3000', '03000', 'json', format=LooseVersion)\r\n assert req.is_present()\r\n assert not req.is_current()\r\n assert not req.version_ok('unknown')\r\n \r\n req = Require('Do-what-I-mean', '1.0', 'd-w-i-m')\r\n assert not req.is_present()\r\n assert not req.is_current()\r\n \r\n req = Require('Tests', None, 'tests', homepage=\"http://example.com\")\r\n assert req.format is None\r\n assert req.attribute is None\r\n assert req.requested_version is None\r\n assert req.full_name() == 'Tests'\r\n assert req.homepage == 'http://example.com'\r\n \r\n from setuptools.tests import __path__\r\n paths = [os.path.dirname(p) for p in __path__]\r\n> assert req.is_present(paths)\r\nE AssertionError: assert False\r\nE + where False = <bound method Require.is_present of <setuptools.depends.Require object at 0x7fbfae0d0b38>>(['/tmp/portage/dev-python/setuptools-41.6.0/work/setuptools-41.6.0-python3_5/setuptools'])\r\nE + where <bound method Require.is_present of <setuptools.depends.Require object at 0x7fbfae0d0b38>> = <setuptools.depends.Require object at 0x7fbfae0d0b38>.is_present\r\n\r\nsetuptools/tests/test_setuptools.py:120: AssertionError\r\n```\r\n\r\nI can reproduce it reliably with at least pypy3.6 (7.2.0) & python3.5 (3.5.7). 
I haven't tested other versions yet.\r\n\r\nFull build log: [dev-python:setuptools-41.6.0:20191030-083347.log](https://github.com/pypa/setuptools/files/3787797/dev-python.setuptools-41.6.0.20191030-083347.log)\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nRe-implementation of find_module and get_frozen_object\nfrom the deprecated imp module.\n\"\"\"\n\nimport os\nimport importlib.util\nimport importlib.machinery\n\nfrom .py34compat import module_from_spec\n\n\nPY_SOURCE = 1\nPY_COMPILED = 2\nC_EXTENSION = 3\nC_BUILTIN = 6\nPY_FROZEN = 7\n\n\ndef find_module(module, paths=None):\n \"\"\"Just like 'imp.find_module()', but with package support\"\"\"\n spec = importlib.util.find_spec(module, paths)\n if spec is None:\n raise ImportError(\"Can't find %s\" % module)\n if not spec.has_location and hasattr(spec, 'submodule_search_locations'):\n spec = importlib.util.spec_from_loader('__init__.py', spec.loader)\n\n kind = -1\n file = None\n static = isinstance(spec.loader, type)\n if spec.origin == 'frozen' or static and issubclass(\n spec.loader, importlib.machinery.FrozenImporter):\n kind = PY_FROZEN\n path = None # imp compabilty\n suffix = mode = '' # imp compability\n elif spec.origin == 'built-in' or static and issubclass(\n spec.loader, importlib.machinery.BuiltinImporter):\n kind = C_BUILTIN\n path = None # imp compabilty\n suffix = mode = '' # imp compability\n elif spec.has_location:\n path = spec.origin\n suffix = os.path.splitext(path)[1]\n mode = 'r' if suffix in importlib.machinery.SOURCE_SUFFIXES else 'rb'\n\n if suffix in importlib.machinery.SOURCE_SUFFIXES:\n kind = PY_SOURCE\n elif suffix in importlib.machinery.BYTECODE_SUFFIXES:\n kind = PY_COMPILED\n elif suffix in importlib.machinery.EXTENSION_SUFFIXES:\n kind = C_EXTENSION\n\n if kind in {PY_SOURCE, PY_COMPILED}:\n file = open(path, mode)\n else:\n path = None\n suffix = mode = ''\n\n return file, path, (suffix, mode, kind)\n\n\ndef get_frozen_object(module, paths=None):\n spec = importlib.util.find_spec(module, paths)\n if not spec:\n raise ImportError(\"Can't find %s\" % module)\n return spec.loader.get_code(module)\n\n\ndef get_module(module, paths, info):\n spec = importlib.util.find_spec(module, paths)\n if not spec:\n raise ImportError(\"Can't find %s\" % module)\n return module_from_spec(spec)\n", "path": "setuptools/_imp.py"}]} | 1,933 | 323 |
gh_patches_debug_7219 | rasdani/github-patches | git_diff | spack__spack-18478 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
nn-c uses invalid self.compiler.pic_flag? (breaks nn-c build, via elmerfem build)
These lines fail because there is no such member; looking at other packages, it seems that flags like
```
self.compiler.cc_pic_flag
self.compiler.cxx_pic_flag
self.compiler.fc_pic_flag
#or ?
self.compiler.f77_pic_flag
```
would be appropriate.
https://github.com/spack/spack/blob/601f97d8a50b1840df9b056a34256b6dd2b54ce3/var/spack/repos/builtin/packages/nn-c/package.py#L29-L31
I triggered this on recent `devel` (today) by
```
spack install --test=root elmerfem@devel +mpi +hypre +lua +mumps +openmp +scatt2d +trilinos +zoltan
```
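
For illustration, a minimal stand-alone sketch (not Spack code; `FakeCompiler` is invented here) of the per-language attributes the report proposes in place of the missing generic `pic_flag`:

```python
# Stand-in object, not the real Spack compiler wrapper: Spack exposes
# cc_pic_flag / cxx_pic_flag / fc_pic_flag (and f77_pic_flag), but no
# generic `pic_flag` attribute.
class FakeCompiler:
    cc_pic_flag = "-fPIC"
    cxx_pic_flag = "-fPIC"
    fc_pic_flag = "-fPIC"


compiler = FakeCompiler()
configure_args = [
    "CFLAGS={0}".format(compiler.cc_pic_flag),
    "CXXFLAGS={0}".format(compiler.cxx_pic_flag),
    "FFLAGS={0}".format(compiler.fc_pic_flag),
]
print(configure_args)  # ['CFLAGS=-fPIC', 'CXXFLAGS=-fPIC', 'FFLAGS=-fPIC']
```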
</issue>
<code>
[start of var/spack/repos/builtin/packages/nn-c/package.py]
1 # Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack import *
7
8
9 class NnC(AutotoolsPackage):
10 """nn: Natural Neighbours interpolation. nn is a C code
11 for Natural Neighbours interpolation of 2D scattered data.
12 It provides a C library and a command line utility nnbathy."""
13
14 homepage = "https://github.com/sakov/nn-c"
15 git = "https://github.com/sakov/nn-c.git"
16
17 version('master', branch='master')
18 version('1.86.2', commit='343c7784d38d3270d75d450569fc0b64767c37e9')
19
20 variant('pic', default=True,
21 description='Produce position-independent code (for shared libs)')
22
23 configure_directory = 'nn'
24
25 def configure_args(self):
26 args = []
27 if '+pic' in self.spec:
28 args.extend([
29 'CFLAGS={0}'.format(self.compiler.pic_flag),
30 'CXXFLAGS={0}'.format(self.compiler.pic_flag),
31 'FFLAGS={0}'.format(self.compiler.pic_flag)
32 ])
33 return args
34
[end of var/spack/repos/builtin/packages/nn-c/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/var/spack/repos/builtin/packages/nn-c/package.py b/var/spack/repos/builtin/packages/nn-c/package.py
--- a/var/spack/repos/builtin/packages/nn-c/package.py
+++ b/var/spack/repos/builtin/packages/nn-c/package.py
@@ -26,8 +26,8 @@
args = []
if '+pic' in self.spec:
args.extend([
- 'CFLAGS={0}'.format(self.compiler.pic_flag),
- 'CXXFLAGS={0}'.format(self.compiler.pic_flag),
- 'FFLAGS={0}'.format(self.compiler.pic_flag)
+ 'CFLAGS={0}'.format(self.compiler.cc_pic_flag),
+ 'CXXFLAGS={0}'.format(self.compiler.cxx_pic_flag),
+ 'FFLAGS={0}'.format(self.compiler.fc_pic_flag)
])
return args
| {"golden_diff": "diff --git a/var/spack/repos/builtin/packages/nn-c/package.py b/var/spack/repos/builtin/packages/nn-c/package.py\n--- a/var/spack/repos/builtin/packages/nn-c/package.py\n+++ b/var/spack/repos/builtin/packages/nn-c/package.py\n@@ -26,8 +26,8 @@\n args = []\n if '+pic' in self.spec:\n args.extend([\n- 'CFLAGS={0}'.format(self.compiler.pic_flag),\n- 'CXXFLAGS={0}'.format(self.compiler.pic_flag),\n- 'FFLAGS={0}'.format(self.compiler.pic_flag)\n+ 'CFLAGS={0}'.format(self.compiler.cc_pic_flag),\n+ 'CXXFLAGS={0}'.format(self.compiler.cxx_pic_flag),\n+ 'FFLAGS={0}'.format(self.compiler.fc_pic_flag)\n ])\n return args\n", "issue": "nn-c uses invalid self.compiler.pic_flag? (breaks nn-c build, via elmerfem build)\nThese lines fail, because there is no such member, and looking at other packages, it seems that flags like\r\n```\r\nself.compiler.cc_pic_flag\r\nself.compiler.cxx_pic_flag\r\nself.compiler.fc_pic_flag\r\n#or ?\r\nself.compiler.f77_pic_flag\r\n```\r\nwould be appropriate.\r\n\r\nhttps://github.com/spack/spack/blob/601f97d8a50b1840df9b056a34256b6dd2b54ce3/var/spack/repos/builtin/packages/nn-c/package.py#L29-L31\r\n\r\nI triggered this on recent `devel` (today) by\r\n```\r\nspack install --test=root elmerfem@devel +mpi +hypre +lua +mumps +openmp +scatt2d +trilinos +zoltan\r\n```\n", "before_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass NnC(AutotoolsPackage):\n \"\"\"nn: Natural Neighbours interpolation. nn is a C code\n for Natural Neighbours interpolation of 2D scattered data.\n It provides a C library and a command line utility nnbathy.\"\"\"\n\n homepage = \"https://github.com/sakov/nn-c\"\n git = \"https://github.com/sakov/nn-c.git\"\n\n version('master', branch='master')\n version('1.86.2', commit='343c7784d38d3270d75d450569fc0b64767c37e9')\n\n variant('pic', default=True,\n description='Produce position-independent code (for shared libs)')\n\n configure_directory = 'nn'\n\n def configure_args(self):\n args = []\n if '+pic' in self.spec:\n args.extend([\n 'CFLAGS={0}'.format(self.compiler.pic_flag),\n 'CXXFLAGS={0}'.format(self.compiler.pic_flag),\n 'FFLAGS={0}'.format(self.compiler.pic_flag)\n ])\n return args\n", "path": "var/spack/repos/builtin/packages/nn-c/package.py"}]} | 1,126 | 187 |
gh_patches_debug_2611 | rasdani/github-patches | git_diff | freedomofpress__securedrop-703 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Don't armor encrypted submissions
SecureDrop currently armors encrypted submissions. This bloats the size of stored submissions significantly due to the encoding. For example, a 93 MB upload results in a 125.7 MB submission for the journalist to download.
Downloading anything over Tor is very slow (the aforementioned download took me, on average, 9 minutes). Therefore, unnecessarily increasing the size of submissions severely impacts usability. There is no reason that I can think of to ASCII-armor submissions - they are uploaded and downloaded over HTTP, which handles encoding and decoding binary data automatically.
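
As a rough sketch of the change being requested (the `armor` keyword is part of the python-gnupg API used in `crypto_util.py` below; the size overhead is just the usual base64 factor, 93 MB * 4/3 ≈ 124 MB):

```python
# Sketch only: keep the ciphertext binary instead of ASCII-armoring it,
# avoiding the ~33% base64 growth on stored submissions.
out = encrypt_fn(plaintext,
                 *fingerprints,
                 output=output,
                 always_trust=True,
                 armor=False)
```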
</issue>
<code>
[start of securedrop/crypto_util.py]
1 # -*- coding: utf-8 -*-
2 import os
3 import subprocess
4 from base64 import b32encode
5
6 from Crypto.Random import random
7 import gnupg
8 import scrypt
9
10 import config
11 import store
12
13 # to fix gpg error #78 on production
14 os.environ['USERNAME'] = 'www-data'
15
16 GPG_KEY_TYPE = "RSA"
17 if os.environ.get('SECUREDROP_ENV') == 'test':
18 # Optiimize crypto to speed up tests (at the expense of security - DO NOT
19 # use these settings in production)
20 GPG_KEY_LENGTH = 1024
21 SCRYPT_PARAMS = dict(N=2**1, r=1, p=1)
22 else:
23 GPG_KEY_LENGTH = 4096
24 SCRYPT_PARAMS = config.SCRYPT_PARAMS
25
26 SCRYPT_ID_PEPPER = config.SCRYPT_ID_PEPPER
27 SCRYPT_GPG_PEPPER = config.SCRYPT_GPG_PEPPER
28
29 DEFAULT_WORDS_IN_RANDOM_ID = 8
30
31 # Make sure these pass before the app can run
32 # TODO: Add more tests
33 def do_runtime_tests():
34 assert(config.SCRYPT_ID_PEPPER != config.SCRYPT_GPG_PEPPER)
35 # crash if we don't have srm:
36 try:
37 subprocess.check_call(['srm'], stdout=subprocess.PIPE)
38 except subprocess.CalledProcessError:
39 pass
40
41 do_runtime_tests()
42
43 GPG_BINARY = 'gpg2'
44 try:
45 p = subprocess.Popen([GPG_BINARY, '--version'], stdout=subprocess.PIPE)
46 except OSError:
47 GPG_BINARY = 'gpg'
48 p = subprocess.Popen([GPG_BINARY, '--version'], stdout=subprocess.PIPE)
49
50 assert p.stdout.readline().split()[
51 -1].split('.')[0] == '2', "upgrade GPG to 2.0"
52 del p
53
54 gpg = gnupg.GPG(binary=GPG_BINARY, homedir=config.GPG_KEY_DIR)
55
56 words = file(config.WORD_LIST).read().split('\n')
57 nouns = file(config.NOUNS).read().split('\n')
58 adjectives = file(config.ADJECTIVES).read().split('\n')
59
60
61 class CryptoException(Exception):
62 pass
63
64
65 def clean(s, also=''):
66 """
67 >>> clean("Hello, world!")
68 Traceback (most recent call last):
69 ...
70 CryptoException: invalid input
71 >>> clean("Helloworld")
72 'Helloworld'
73 """
74 # safe characters for every possible word in the wordlist includes capital
75 # letters because codename hashes are base32-encoded with capital letters
76 ok = ' !#%$&)(+*-1032547698;:=?@acbedgfihkjmlonqpsrutwvyxzABCDEFGHIJKLMNOPQRSTUVWXYZ'
77 for c in s:
78 if c not in ok and c not in also:
79 raise CryptoException("invalid input: %s" % s)
80 # scrypt.hash requires input of type str. Since the wordlist is all ASCII
81 # characters, this conversion is not problematic
82 return str(s)
83
84
85 def genrandomid(words_in_random_id=DEFAULT_WORDS_IN_RANDOM_ID):
86 return ' '.join(random.choice(words) for x in range(words_in_random_id))
87
88
89 def display_id():
90 return ' '.join([random.choice(adjectives), random.choice(nouns)])
91
92
93 def hash_codename(codename, salt=SCRYPT_ID_PEPPER):
94 """
95 >>> hash_codename('Hello, world!')
96 'EQZGCJBRGISGOTC2NZVWG6LILJBHEV3CINNEWSCLLFTUWZLFHBTS6WLCHFHTOLRSGQXUQLRQHFMXKOKKOQ4WQ6SXGZXDAS3Z'
97 """
98 return b32encode(scrypt.hash(clean(codename), salt, **SCRYPT_PARAMS))
99
100
101 def genkeypair(name, secret):
102 """
103 >>> if not gpg.list_keys(hash_codename('randomid')):
104 ... genkeypair(hash_codename('randomid'), 'randomid').type
105 ... else:
106 ... u'P'
107 u'P'
108 """
109 name = clean(name)
110 secret = hash_codename(secret, salt=SCRYPT_GPG_PEPPER)
111 return gpg.gen_key(gpg.gen_key_input(
112 key_type=GPG_KEY_TYPE, key_length=GPG_KEY_LENGTH,
113 passphrase=secret,
114 name_email=name
115 ))
116
117
118 def delete_reply_keypair(source_id):
119 key = getkey(source_id)
120 # If this source was never flagged for reivew, they won't have a reply keypair
121 if not key: return
122 # The private key needs to be deleted before the public key can be deleted
123 # http://pythonhosted.org/python-gnupg/#deleting-keys
124 gpg.delete_keys(key, True) # private key
125 gpg.delete_keys(key) # public key
126 # TODO: srm?
127
128
129 def getkey(name):
130 for key in gpg.list_keys():
131 for uid in key['uids']:
132 if name in uid:
133 return key['fingerprint']
134 return None
135
136
137 def get_key_by_fingerprint(fingerprint):
138 matches = filter(lambda k: k['fingerprint'] == fingerprint, gpg.list_keys())
139 return matches[0] if matches else None
140
141
142 def encrypt(plaintext, fingerprints, output=None):
143 # Verify the output path
144 if output:
145 store.verify(output)
146
147 # Remove any spaces from provided fingerpints
148 # GPG outputs fingerprints with spaces for readability, but requires the
149 # spaces to be removed when using fingerprints to specify recipients.
150 if not isinstance(fingerprints, (list, tuple)):
151 fingerprints = [fingerprints,]
152 fingerprints = [ fpr.replace(' ', '') for fpr in fingerprints ]
153
154 if isinstance(plaintext, unicode):
155 plaintext = plaintext.encode('utf8')
156
157 encrypt_fn = gpg.encrypt if isinstance(plaintext, str) else gpg.encrypt_file
158 out = encrypt_fn(plaintext,
159 *fingerprints,
160 output=output,
161 always_trust=True)
162 if out.ok:
163 return out.data
164 else:
165 raise CryptoException(out.stderr)
166
167
168 def decrypt(secret, plain_text):
169 """
170 >>> key = genkeypair('randomid', 'randomid')
171 >>> decrypt('randomid', 'randomid',
172 ... encrypt('randomid', 'Goodbye, cruel world!')
173 ... )
174 'Goodbye, cruel world!'
175 """
176 hashed_codename = hash_codename(secret, salt=SCRYPT_GPG_PEPPER)
177 return gpg.decrypt(plain_text, passphrase=hashed_codename).data
178
179
180 if __name__ == "__main__":
181 import doctest
182 doctest.testmod()
183
[end of securedrop/crypto_util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/securedrop/crypto_util.py b/securedrop/crypto_util.py
--- a/securedrop/crypto_util.py
+++ b/securedrop/crypto_util.py
@@ -158,7 +158,8 @@
out = encrypt_fn(plaintext,
*fingerprints,
output=output,
- always_trust=True)
+ always_trust=True,
+ armor=False)
if out.ok:
return out.data
else:
| {"golden_diff": "diff --git a/securedrop/crypto_util.py b/securedrop/crypto_util.py\n--- a/securedrop/crypto_util.py\n+++ b/securedrop/crypto_util.py\n@@ -158,7 +158,8 @@\n out = encrypt_fn(plaintext,\n *fingerprints,\n output=output,\n- always_trust=True)\n+ always_trust=True,\n+ armor=False)\n if out.ok:\n return out.data\n else:\n", "issue": "Don't armor encrypted submissions\nSecureDrop currently armors encrypted submissions. This bloats the size of stored submissions significantly due to the encoding. For example, a 93 MB upload results in a 125.7 MB submission for the journalist to download.\n\nDownloading anything over Tor is very slow (the aforementioned download took me, on average, 9 minutes to download). Therefore, unnecessarily increasing the size of submissions severely impacts usability. There is no reason that I can think of to ascii armor submissions - they are uploaded and downloaded over HTTP, which automatically handles encoding and de-encoding binary data.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport os\nimport subprocess\nfrom base64 import b32encode\n\nfrom Crypto.Random import random\nimport gnupg\nimport scrypt\n\nimport config\nimport store\n\n# to fix gpg error #78 on production\nos.environ['USERNAME'] = 'www-data'\n\nGPG_KEY_TYPE = \"RSA\"\nif os.environ.get('SECUREDROP_ENV') == 'test':\n # Optiimize crypto to speed up tests (at the expense of security - DO NOT\n # use these settings in production)\n GPG_KEY_LENGTH = 1024\n SCRYPT_PARAMS = dict(N=2**1, r=1, p=1)\nelse:\n GPG_KEY_LENGTH = 4096\n SCRYPT_PARAMS = config.SCRYPT_PARAMS\n\nSCRYPT_ID_PEPPER = config.SCRYPT_ID_PEPPER\nSCRYPT_GPG_PEPPER = config.SCRYPT_GPG_PEPPER\n\nDEFAULT_WORDS_IN_RANDOM_ID = 8\n\n# Make sure these pass before the app can run\n# TODO: Add more tests\ndef do_runtime_tests():\n assert(config.SCRYPT_ID_PEPPER != config.SCRYPT_GPG_PEPPER)\n # crash if we don't have srm:\n try:\n subprocess.check_call(['srm'], stdout=subprocess.PIPE)\n except subprocess.CalledProcessError:\n pass\n\ndo_runtime_tests()\n\nGPG_BINARY = 'gpg2'\ntry:\n p = subprocess.Popen([GPG_BINARY, '--version'], stdout=subprocess.PIPE)\nexcept OSError:\n GPG_BINARY = 'gpg'\n p = subprocess.Popen([GPG_BINARY, '--version'], stdout=subprocess.PIPE)\n\nassert p.stdout.readline().split()[\n -1].split('.')[0] == '2', \"upgrade GPG to 2.0\"\ndel p\n\ngpg = gnupg.GPG(binary=GPG_BINARY, homedir=config.GPG_KEY_DIR)\n\nwords = file(config.WORD_LIST).read().split('\\n')\nnouns = file(config.NOUNS).read().split('\\n')\nadjectives = file(config.ADJECTIVES).read().split('\\n')\n\n\nclass CryptoException(Exception):\n pass\n\n\ndef clean(s, also=''):\n \"\"\"\n >>> clean(\"Hello, world!\")\n Traceback (most recent call last):\n ...\n CryptoException: invalid input\n >>> clean(\"Helloworld\")\n 'Helloworld'\n \"\"\"\n # safe characters for every possible word in the wordlist includes capital\n # letters because codename hashes are base32-encoded with capital letters\n ok = ' !#%$&)(+*-1032547698;:=?@acbedgfihkjmlonqpsrutwvyxzABCDEFGHIJKLMNOPQRSTUVWXYZ'\n for c in s:\n if c not in ok and c not in also:\n raise CryptoException(\"invalid input: %s\" % s)\n # scrypt.hash requires input of type str. 
Since the wordlist is all ASCII\n # characters, this conversion is not problematic\n return str(s)\n\n\ndef genrandomid(words_in_random_id=DEFAULT_WORDS_IN_RANDOM_ID):\n return ' '.join(random.choice(words) for x in range(words_in_random_id))\n\n\ndef display_id():\n return ' '.join([random.choice(adjectives), random.choice(nouns)])\n\n\ndef hash_codename(codename, salt=SCRYPT_ID_PEPPER):\n \"\"\"\n >>> hash_codename('Hello, world!')\n 'EQZGCJBRGISGOTC2NZVWG6LILJBHEV3CINNEWSCLLFTUWZLFHBTS6WLCHFHTOLRSGQXUQLRQHFMXKOKKOQ4WQ6SXGZXDAS3Z'\n \"\"\"\n return b32encode(scrypt.hash(clean(codename), salt, **SCRYPT_PARAMS))\n\n\ndef genkeypair(name, secret):\n \"\"\"\n >>> if not gpg.list_keys(hash_codename('randomid')):\n ... genkeypair(hash_codename('randomid'), 'randomid').type\n ... else:\n ... u'P'\n u'P'\n \"\"\"\n name = clean(name)\n secret = hash_codename(secret, salt=SCRYPT_GPG_PEPPER)\n return gpg.gen_key(gpg.gen_key_input(\n key_type=GPG_KEY_TYPE, key_length=GPG_KEY_LENGTH,\n passphrase=secret,\n name_email=name\n ))\n\n\ndef delete_reply_keypair(source_id):\n key = getkey(source_id)\n # If this source was never flagged for reivew, they won't have a reply keypair\n if not key: return\n # The private key needs to be deleted before the public key can be deleted\n # http://pythonhosted.org/python-gnupg/#deleting-keys\n gpg.delete_keys(key, True) # private key\n gpg.delete_keys(key) # public key\n # TODO: srm?\n\n\ndef getkey(name):\n for key in gpg.list_keys():\n for uid in key['uids']:\n if name in uid:\n return key['fingerprint']\n return None\n\n\ndef get_key_by_fingerprint(fingerprint):\n matches = filter(lambda k: k['fingerprint'] == fingerprint, gpg.list_keys())\n return matches[0] if matches else None\n\n\ndef encrypt(plaintext, fingerprints, output=None):\n # Verify the output path\n if output:\n store.verify(output)\n\n # Remove any spaces from provided fingerpints\n # GPG outputs fingerprints with spaces for readability, but requires the\n # spaces to be removed when using fingerprints to specify recipients.\n if not isinstance(fingerprints, (list, tuple)):\n fingerprints = [fingerprints,]\n fingerprints = [ fpr.replace(' ', '') for fpr in fingerprints ]\n\n if isinstance(plaintext, unicode):\n plaintext = plaintext.encode('utf8')\n\n encrypt_fn = gpg.encrypt if isinstance(plaintext, str) else gpg.encrypt_file\n out = encrypt_fn(plaintext,\n *fingerprints,\n output=output,\n always_trust=True)\n if out.ok:\n return out.data\n else:\n raise CryptoException(out.stderr)\n\n\ndef decrypt(secret, plain_text):\n \"\"\"\n >>> key = genkeypair('randomid', 'randomid')\n >>> decrypt('randomid', 'randomid',\n ... encrypt('randomid', 'Goodbye, cruel world!')\n ... )\n 'Goodbye, cruel world!'\n \"\"\"\n hashed_codename = hash_codename(secret, salt=SCRYPT_GPG_PEPPER)\n return gpg.decrypt(plain_text, passphrase=hashed_codename).data\n\n\nif __name__ == \"__main__\":\n import doctest\n doctest.testmod()\n", "path": "securedrop/crypto_util.py"}]} | 2,564 | 99 |
gh_patches_debug_57938 | rasdani/github-patches | git_diff | coreruleset__coreruleset-3500 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Google link/crawler blocked at PL2
### Description
Hello everyone,
Here is another false positive found in our production.
The `ARGS:gclid` contains a token in URL when someone visits a website by clicking a shared link on google/youtube.
However, it matches the following rules:
942440 PL2 SQL Comment Sequence Detected
949110 PL1 Inbound Anomaly Score Exceeded (Total Score: 5)
980170 PL1 Anomaly Scores: (Inbound Scores: blocking=5, detection=5, per_pl=0-5-0-0, threshold=5) - (Outbound Scores: blocking=0, detection=0, per_pl=0-0-0-0, threshold=4) - (SQLI=5, XSS=0, RFI=0, LFI=0, RCE=0, PHPI=0, HTTP=0, SESS=0)
Example:
`example.com/file/?gclid=j0KCQiA1NebBhDDARIsAANiDD3_RJeMv8zScF--mC1jf8fO8PDYJCxD9xdwT7iQ59QIIwL-86ncQtMaAh0lEALw_wcB`
Test on sandbox:
`curl -s -H "x-format-output: txt-matched-rules" -H 'x-crs-paranoia-level: 2' 'https://sandbox.coreruleset.org/file/?gclid=Cj0KCQiA1NebBhDDARIsAANiDD3_RJeMv8zScF--mC1jf8fO8PDYJCxD9xdwT7iQ59QIIwL-86ncQtMaAh0lEALw_wcB'`
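
A small worked sketch of why this single PL2 hit is enough to block (numbers taken from the 980170 log line above; CRS defaults assumed, i.e. a CRITICAL-severity match adds 5 and the inbound blocking threshold is 5):

```python
critical_severity_score = 5                      # rule 942440 (CRITICAL) matched once
inbound_anomaly_score = critical_severity_score  # no other rule contributed
blocking_threshold = 5                           # default inbound threshold
print(inbound_anomaly_score >= blocking_threshold)  # True -> 949110 blocks the request
```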
We excluded it in the following way:
```
SecRule &ARGS:gclid "@gt 0" "id:xxxxxxxx,\
....,\
....,\
ctl:ruleRemoveTargetById=942440;ARGS:gclid,\
chain"
SecRule ARGS:gclid "@rx ^[a-zA-Z0-9_-]{0,100}$" "t:none"
```
### Confirmation
- [x] I have removed any personal data (email addresses, IP addresses,
passwords, domain names) from any logs posted.
Thanks as always, @theMiddleBlue
</issue>
<code>
[start of util/find-rules-without-test/find-rules-without-test.py]
1 #!/usr/bin/env python3
2
3 # This file helps to find the rules which does not have any test cases.
4 #
5 # You just have to pass the CORERULESET_ROOT as argument.
6 #
7 # At the end, the script will print the list of rules without any tests.
8 #
9 # Please note, that there are some exclusions:
10 # * only REQUEST-NNN rules are checked
11 # * there are some hardcoded exlucions:
12 # * REQUEST-900-
13 # * REQUEST-901-
14 # * REQUEST-905-
15 # * REQUEST-910-
16 # * REQUEST-912.
17 # * REQUEST-949-
18 #
19 # and the rule 921170
20
21 import sys
22 import glob
23 import msc_pyparser
24 import argparse
25
26 EXCLUSION_LIST = ["900", "901", "905", "910", "912", "949", "921170"]
27 oformat = "native"
28
29 def find_ids(s, test_cases):
30 """
31 s: the parsed structure
32 test_cases: all available test cases
33 """
34 rids = {}
35 for i in s:
36 # only SecRule counts
37 if i['type'] == "SecRule":
38 for a in i['actions']:
39 # find the `id` action
40 if a['act_name'] == "id":
41 # get the argument of the action
42 rid = int(a['act_arg']) # int
43 srid = a['act_arg'] # string
44 if (rid%1000) >= 100: # skip the PL control rules
45 # also skip these hardcoded rules
46 need_check = True
47 for excl in EXCLUSION_LIST:
48 if srid[:len(excl)] == excl:
49 need_check = False
50 if need_check:
51 # if there is no test cases, just print it
52 if rid not in test_cases:
53 rids[rid] = a['lineno']
54 return rids
55
56 def errmsgf(msg):
57 if oformat == "github":
58 print("::error file={file},line={line},endLine={endLine},title={title}::{message}".format(**msg))
59 else:
60 print("file={file}, line={line}, endLine={endLine}, title={title}: {message}".format(**msg))
61
62 if __name__ == "__main__":
63
64 desc = """This script helps to find the rules without test cases. It needs a mandatory
65 argument where you pass the path to your coreruleset. The tool collects the
66 tests with name REQUEST-*, but not with RESPONSE-*. Then reads the rule id's,
67 and check which rule does not have any test. Some rules does not need test
68 case, these are hardcoded as exclusions: 900NNN, 901NNN, 905NNN, 910NNN,
69 912NNN, 949NNN."""
70
71 parser = argparse.ArgumentParser(description=desc, formatter_class=argparse.RawTextHelpFormatter)
72 parser.add_argument("--output", dest="output", help="Output format native[default]|github", required=False)
73 parser.add_argument('crspath', metavar='/path/to/coreruleset', type=str,
74 help='Directory path to CRS')
75 args = parser.parse_args()
76
77 if args.output is not None:
78 if args.output not in ["native", "github"]:
79 print("--output can be one of the 'native' or 'github'. Default value is 'native'")
80 sys.exit(1)
81 oformat = args.output
82
83 test_cases = {}
84 # from argument, build the rules path and regression test paths
85 crspath = args.crspath.rstrip("/") + "/rules/*.conf"
86 testpath = args.crspath.rstrip("/") + "/tests/regression/tests/*"
87 retval = 0
88 # collect rules
89 flist = glob.glob(crspath)
90 flist.sort()
91 if len(flist) == 0:
92 print("Can't open files in given path!")
93 sys.exit(1)
94
95 # collect test cases
96 tlist = glob.glob(testpath)
97 tlist.sort()
98 if len(tlist) == 0:
99 print("Can't open files in given path (%s)!" % (testpath))
100 sys.exit(1)
101 # find the yaml files with name REQUEST at the begin
102 # collect them in a dictionary
103 for t in tlist:
104 tname = t.split("/")[-1]
105 if tname[:7] == "REQUEST":
106 testlist = glob.glob(t + "/*.yaml")
107 testlist.sort()
108 for tc in testlist:
109 tcname = tc.split("/")[-1].split(".")[0]
110 test_cases[int(tcname)] = 1
111
112 # iterate the rule files
113 for f in flist:
114 fname = f.split("/")[-1]
115 if fname[:7] == "REQUEST":
116 try:
117 with open(f, 'r') as inputfile:
118 data = inputfile.read()
119 except:
120 print("Can't open file: %s" % f)
121 print(sys.exc_info())
122 sys.exit(1)
123
124 try:
125 # make a structure
126 mparser = msc_pyparser.MSCParser()
127 mparser.parser.parse(data)
128 # add the parsed structure to a function, which finds the 'id'-s,
129 # and the collected test cases
130 rids = find_ids(mparser.configlines, test_cases)
131 for k in rids.keys():
132 errmsgf({'file': f, 'line': rids[k], 'endLine': rids[k], 'title': "Test file missing", 'message': ("rule %d does not have any regression test" % k)})
133 except:
134 print("Can't parse config file: %s" % (f))
135 print(sys.exc_info()[1])
136 sys.exit(1)
137 sys.exit(retval)
138
[end of util/find-rules-without-test/find-rules-without-test.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/util/find-rules-without-test/find-rules-without-test.py b/util/find-rules-without-test/find-rules-without-test.py
--- a/util/find-rules-without-test/find-rules-without-test.py
+++ b/util/find-rules-without-test/find-rules-without-test.py
@@ -23,7 +23,7 @@
import msc_pyparser
import argparse
-EXCLUSION_LIST = ["900", "901", "905", "910", "912", "949", "921170"]
+EXCLUSION_LIST = ["900", "901", "905", "910", "912", "949", "921170", "942441", "942442"]
oformat = "native"
def find_ids(s, test_cases):
| {"golden_diff": "diff --git a/util/find-rules-without-test/find-rules-without-test.py b/util/find-rules-without-test/find-rules-without-test.py\n--- a/util/find-rules-without-test/find-rules-without-test.py\n+++ b/util/find-rules-without-test/find-rules-without-test.py\n@@ -23,7 +23,7 @@\n import msc_pyparser\n import argparse\n \n-EXCLUSION_LIST = [\"900\", \"901\", \"905\", \"910\", \"912\", \"949\", \"921170\"]\n+EXCLUSION_LIST = [\"900\", \"901\", \"905\", \"910\", \"912\", \"949\", \"921170\", \"942441\", \"942442\"]\n oformat = \"native\"\n \n def find_ids(s, test_cases):\n", "issue": "Google link/crawler blocked at PL2\n### Description\r\nHello everyone,\r\n\r\nHere is another false positive found in our production.\r\nThe `ARGS:gclid` contains a token in URL when someone visits a website by clicking a shared link on google/youtube.\r\nHowever, it matches the following rules:\r\n\r\n942440 PL2 SQL Comment Sequence Detected\r\n949110 PL1 Inbound Anomaly Score Exceeded (Total Score: 5)\r\n980170 PL1 Anomaly Scores: (Inbound Scores: blocking=5, detection=5, per_pl=0-5-0-0, threshold=5) - (Outbound Scores: blocking=0, detection=0, per_pl=0-0-0-0, threshold=4) - (SQLI=5, XSS=0, RFI=0, LFI=0, RCE=0, PHPI=0, HTTP=0, SESS=0)\r\n\r\nExample:\r\n`example.com/file/?gclid=j0KCQiA1NebBhDDARIsAANiDD3_RJeMv8zScF--mC1jf8fO8PDYJCxD9xdwT7iQ59QIIwL-86ncQtMaAh0lEALw_wcB`\r\n\r\nTest on sandbox:\r\n`curl -s -H \"x-format-output: txt-matched-rules\" -H 'x-crs-paranoia-level: 2' 'https://sandbox.coreruleset.org/file/?gclid=Cj0KCQiA1NebBhDDARIsAANiDD3_RJeMv8zScF--mC1jf8fO8PDYJCxD9xdwT7iQ59QIIwL-86ncQtMaAh0lEALw_wcB'`\r\n\r\nWe excluded following way:\r\n```\r\nSecRule &ARGS:gclid \"@gt 0\" \"id:xxxxxxxx,\\\r\n ....,\\\r\n ....,\\\r\n ctl:ruleRemoveTargetById=942440;ARGS:gclid,\\\r\n chain\"\r\n SecRule ARGS:gclid \"@rx ^[a-zA-Z0-9_-]{0,100}$\" \"t:none\"\r\n\r\n```\r\n### Confirmation\r\n\r\n- [x] I have removed any personal data (email addresses, IP addresses,\r\n passwords, domain names) from any logs posted.\r\n\r\nThanks as always, @theMiddleBlue \r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# This file helps to find the rules which does not have any test cases.\n#\n# You just have to pass the CORERULESET_ROOT as argument.\n#\n# At the end, the script will print the list of rules without any tests.\n#\n# Please note, that there are some exclusions:\n# * only REQUEST-NNN rules are checked\n# * there are some hardcoded exlucions:\n# * REQUEST-900-\n# * REQUEST-901-\n# * REQUEST-905-\n# * REQUEST-910-\n# * REQUEST-912.\n# * REQUEST-949-\n#\n# and the rule 921170\n\nimport sys\nimport glob\nimport msc_pyparser\nimport argparse\n\nEXCLUSION_LIST = [\"900\", \"901\", \"905\", \"910\", \"912\", \"949\", \"921170\"]\noformat = \"native\"\n\ndef find_ids(s, test_cases):\n \"\"\"\n s: the parsed structure\n test_cases: all available test cases\n \"\"\"\n rids = {}\n for i in s:\n # only SecRule counts\n if i['type'] == \"SecRule\":\n for a in i['actions']:\n # find the `id` action\n if a['act_name'] == \"id\":\n # get the argument of the action\n rid = int(a['act_arg']) # int\n srid = a['act_arg'] # string\n if (rid%1000) >= 100: # skip the PL control rules\n # also skip these hardcoded rules\n need_check = True\n for excl in EXCLUSION_LIST:\n if srid[:len(excl)] == excl:\n need_check = False\n if need_check:\n # if there is no test cases, just print it\n if rid not in test_cases:\n rids[rid] = a['lineno']\n return rids\n\ndef errmsgf(msg):\n if oformat == \"github\":\n print(\"::error 
file={file},line={line},endLine={endLine},title={title}::{message}\".format(**msg))\n else:\n print(\"file={file}, line={line}, endLine={endLine}, title={title}: {message}\".format(**msg))\n\nif __name__ == \"__main__\":\n\n desc = \"\"\"This script helps to find the rules without test cases. It needs a mandatory\nargument where you pass the path to your coreruleset. The tool collects the\ntests with name REQUEST-*, but not with RESPONSE-*. Then reads the rule id's,\nand check which rule does not have any test. Some rules does not need test\ncase, these are hardcoded as exclusions: 900NNN, 901NNN, 905NNN, 910NNN,\n912NNN, 949NNN.\"\"\"\n\n parser = argparse.ArgumentParser(description=desc, formatter_class=argparse.RawTextHelpFormatter)\n parser.add_argument(\"--output\", dest=\"output\", help=\"Output format native[default]|github\", required=False)\n parser.add_argument('crspath', metavar='/path/to/coreruleset', type=str,\n help='Directory path to CRS')\n args = parser.parse_args()\n\n if args.output is not None:\n if args.output not in [\"native\", \"github\"]:\n print(\"--output can be one of the 'native' or 'github'. Default value is 'native'\")\n sys.exit(1)\n oformat = args.output\n\n test_cases = {}\n # from argument, build the rules path and regression test paths\n crspath = args.crspath.rstrip(\"/\") + \"/rules/*.conf\"\n testpath = args.crspath.rstrip(\"/\") + \"/tests/regression/tests/*\"\n retval = 0\n # collect rules\n flist = glob.glob(crspath)\n flist.sort()\n if len(flist) == 0:\n print(\"Can't open files in given path!\")\n sys.exit(1)\n\n # collect test cases\n tlist = glob.glob(testpath)\n tlist.sort()\n if len(tlist) == 0:\n print(\"Can't open files in given path (%s)!\" % (testpath))\n sys.exit(1)\n # find the yaml files with name REQUEST at the begin\n # collect them in a dictionary\n for t in tlist:\n tname = t.split(\"/\")[-1]\n if tname[:7] == \"REQUEST\":\n testlist = glob.glob(t + \"/*.yaml\")\n testlist.sort()\n for tc in testlist:\n tcname = tc.split(\"/\")[-1].split(\".\")[0]\n test_cases[int(tcname)] = 1\n\n # iterate the rule files\n for f in flist:\n fname = f.split(\"/\")[-1]\n if fname[:7] == \"REQUEST\":\n try:\n with open(f, 'r') as inputfile:\n data = inputfile.read()\n except:\n print(\"Can't open file: %s\" % f)\n print(sys.exc_info())\n sys.exit(1)\n\n try:\n # make a structure\n mparser = msc_pyparser.MSCParser()\n mparser.parser.parse(data)\n # add the parsed structure to a function, which finds the 'id'-s,\n # and the collected test cases\n rids = find_ids(mparser.configlines, test_cases)\n for k in rids.keys():\n errmsgf({'file': f, 'line': rids[k], 'endLine': rids[k], 'title': \"Test file missing\", 'message': (\"rule %d does not have any regression test\" % k)})\n except:\n print(\"Can't parse config file: %s\" % (f))\n print(sys.exc_info()[1])\n sys.exit(1)\n sys.exit(retval)\n", "path": "util/find-rules-without-test/find-rules-without-test.py"}]} | 2,693 | 208 |
gh_patches_debug_392 | rasdani/github-patches | git_diff | Nitrate__Nitrate-527 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove dependency mock
Use `unittest.mock` instead.
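
A minimal sketch of the swap (illustrative only, not taken from Nitrate's test suite; `unittest.mock` has shipped the same API since Python 3.3, and the project already requires Python >= 3.6 per `setup.py` below):

```python
import os
from unittest import mock  # replaces: import mock

with mock.patch("os.getcwd", return_value="/tmp"):
    assert os.getcwd() == "/tmp"
```

Beyond the test imports, the packaging side only needs `mock` dropped from the test requirements.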
</issue>
<code>
[start of setup.py]
1 # -*- coding: utf-8 -*-
2
3 from setuptools import setup, find_packages
4
5
6 with open('VERSION.txt', 'r') as f:
7 pkg_version = f.read().strip()
8
9
10 def get_long_description():
11 with open('README.rst', 'r') as f:
12 return f.read()
13
14
15 install_requires = [
16 'beautifulsoup4 >= 4.1.1',
17 'django >= 2.0,<3.0',
18 'django-contrib-comments == 1.9.1',
19 'django-tinymce == 2.7.0',
20 'django-uuslug == 1.1.8',
21 'html2text',
22 'odfpy >= 0.9.6',
23 'python-bugzilla',
24 'xmltodict',
25 'kobo == 0.9.0'
26 ]
27
28 extras_require = {
29 'mysql': ['mysqlclient >= 1.2.3'],
30 'pgsql': ['psycopg2 == 2.7.5'],
31
32 # Required for tcms.auth.backends.KerberosBackend
33 'krbauth': [
34 'kerberos == 1.2.5'
35 ],
36
37 # Packages for building documentation
38 'docs': [
39 'Sphinx >= 1.1.2',
40 'sphinx_rtd_theme',
41 ],
42
43 # Necessary packages for running tests
44 'tests': [
45 'beautifulsoup4',
46 'coverage',
47 'factory_boy',
48 'flake8',
49 'mock',
50 'pytest',
51 'pytest-cov',
52 'pytest-django',
53 ],
54
55 # Contain tools that assists the development
56 'devtools': [
57 'django-debug-toolbar',
58 'tox',
59 'django-extensions',
60 'pygraphviz',
61 ],
62
63 # Required packages required to run async tasks
64 'async': [
65 'celery == 4.2.0',
66 ],
67
68 'multiauth': [
69 'social-auth-app-django == 3.1.0',
70 ]
71 }
72
73 setup(
74 name='nitrate-tcms',
75 version=pkg_version,
76 description='A full-featured Test Case Management System',
77 long_description=get_long_description(),
78 author='Nitrate Team',
79 maintainer='Chenxiong Qi',
80 maintainer_email='[email protected]',
81 url='https://github.com/Nitrate/Nitrate/',
82 license='GPLv2+',
83 keywords='test case',
84 install_requires=install_requires,
85 extras_require=extras_require,
86 python_requires='>=3.6',
87 package_dir={'': 'src'},
88 packages=find_packages('src', exclude=['test*']),
89 include_package_data=True,
90 zip_safe=False,
91 classifiers=[
92 'Framework :: Django',
93 'Framework :: Django :: 2.0',
94 'Framework :: Django :: 2.1',
95 'Framework :: Django :: 2.2',
96 'Intended Audience :: Developers',
97 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',
98 'Programming Language :: Python :: 3',
99 'Programming Language :: Python :: 3.6',
100 'Programming Language :: Python :: 3.7',
101 'Programming Language :: Python :: 3 :: Only',
102 'Topic :: Software Development :: Quality Assurance',
103 'Topic :: Software Development :: Testing',
104 ],
105 project_urls={
106 'Issue Tracker': 'https://github.com/Nitrate/Nitrate/issues',
107 'Source Code': 'https://github.com/Nitrate/Nitrate',
108 'Documentation': 'https://nitrate.readthedocs.io/',
109 },
110 )
111
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -46,7 +46,6 @@
'coverage',
'factory_boy',
'flake8',
- 'mock',
'pytest',
'pytest-cov',
'pytest-django',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -46,7 +46,6 @@\n 'coverage',\n 'factory_boy',\n 'flake8',\n- 'mock',\n 'pytest',\n 'pytest-cov',\n 'pytest-django',\n", "issue": "Remove dependency mock\nUse `unittest.mock` instead.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom setuptools import setup, find_packages\n\n\nwith open('VERSION.txt', 'r') as f:\n pkg_version = f.read().strip()\n\n\ndef get_long_description():\n with open('README.rst', 'r') as f:\n return f.read()\n\n\ninstall_requires = [\n 'beautifulsoup4 >= 4.1.1',\n 'django >= 2.0,<3.0',\n 'django-contrib-comments == 1.9.1',\n 'django-tinymce == 2.7.0',\n 'django-uuslug == 1.1.8',\n 'html2text',\n 'odfpy >= 0.9.6',\n 'python-bugzilla',\n 'xmltodict',\n 'kobo == 0.9.0'\n]\n\nextras_require = {\n 'mysql': ['mysqlclient >= 1.2.3'],\n 'pgsql': ['psycopg2 == 2.7.5'],\n\n # Required for tcms.auth.backends.KerberosBackend\n 'krbauth': [\n 'kerberos == 1.2.5'\n ],\n\n # Packages for building documentation\n 'docs': [\n 'Sphinx >= 1.1.2',\n 'sphinx_rtd_theme',\n ],\n\n # Necessary packages for running tests\n 'tests': [\n 'beautifulsoup4',\n 'coverage',\n 'factory_boy',\n 'flake8',\n 'mock',\n 'pytest',\n 'pytest-cov',\n 'pytest-django',\n ],\n\n # Contain tools that assists the development\n 'devtools': [\n 'django-debug-toolbar',\n 'tox',\n 'django-extensions',\n 'pygraphviz',\n ],\n\n # Required packages required to run async tasks\n 'async': [\n 'celery == 4.2.0',\n ],\n\n 'multiauth': [\n 'social-auth-app-django == 3.1.0',\n ]\n}\n\nsetup(\n name='nitrate-tcms',\n version=pkg_version,\n description='A full-featured Test Case Management System',\n long_description=get_long_description(),\n author='Nitrate Team',\n maintainer='Chenxiong Qi',\n maintainer_email='[email protected]',\n url='https://github.com/Nitrate/Nitrate/',\n license='GPLv2+',\n keywords='test case',\n install_requires=install_requires,\n extras_require=extras_require,\n python_requires='>=3.6',\n package_dir={'': 'src'},\n packages=find_packages('src', exclude=['test*']),\n include_package_data=True,\n zip_safe=False,\n classifiers=[\n 'Framework :: Django',\n 'Framework :: Django :: 2.0',\n 'Framework :: Django :: 2.1',\n 'Framework :: Django :: 2.2',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Software Development :: Quality Assurance',\n 'Topic :: Software Development :: Testing',\n ],\n project_urls={\n 'Issue Tracker': 'https://github.com/Nitrate/Nitrate/issues',\n 'Source Code': 'https://github.com/Nitrate/Nitrate',\n 'Documentation': 'https://nitrate.readthedocs.io/',\n },\n)\n", "path": "setup.py"}]} | 1,551 | 68 |
gh_patches_debug_1370 | rasdani/github-patches | git_diff | pystiche__pystiche-103 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ZeroDivisionError with default_epoch_optim_loop
I get a `ZeroDivisionError: integer division or modulo by zero` when using the `default_transformer_epoch_optim_loop`. This is probably because the `num_batches` of the `batch_sampler` is much smaller than in the `default_transformer_optim_loop`, which results in `log_freq=0` in `default_transformer_optim_log_fn`.
Below is a minimal example to reproduce the error:
```python
from pystiche.optim.log import default_transformer_optim_log_fn, OptimLogger
logger = OptimLogger()
num_batches = 300
log_fn = default_transformer_optim_log_fn(logger, num_batches)
image_loading_velocity = 1
image_processing_velocity = 1
batch = 1
loss = 1
log_fn(batch, loss, image_loading_velocity, image_processing_velocity)
```
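
Working through `default_transformer_optim_log_fn` with these values shows where the zero comes from (arithmetic only; the clamp in the last line is one possible mitigation, not necessarily the fix the project chooses):

```python
num_batches = 300
log_freq = min(round(1e-3 * num_batches) * 10, 50)  # round(0.3) == 0, so log_freq == 0
batch = 1
# batch % log_freq  -> ZeroDivisionError: integer division or modulo by zero
log_freq = max(min(round(1e-3 * num_batches) * 10, 50), 1)  # clamping to >= 1 avoids it
```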
</issue>
<code>
[start of pystiche/optim/log.py]
1 from typing import Union, Optional, Tuple, Callable
2 import contextlib
3 import sys
4 import logging
5 import torch
6 from torch.optim.optimizer import Optimizer
7 from torch.optim.lr_scheduler import _LRScheduler as LRScheduler
8 import pystiche
9 from pystiche.pyramid.level import PyramidLevel
10 from .meter import FloatMeter, LossMeter, ProgressMeter
11
12 __all__ = [
13 "default_logger",
14 "OptimLogger",
15 "default_image_optim_log_fn",
16 "default_pyramid_level_header",
17 "default_transformer_optim_log_fn",
18 ]
19
20
21 def default_logger(name: Optional[str] = None, log_file: Optional[str] = None):
22 logger = logging.getLogger(name)
23 logger.setLevel(logging.INFO)
24
25 fmt = logging.Formatter(
26 fmt="|%(asctime)s| %(message)s", datefmt="%d.%m.%Y %H:%M:%S"
27 )
28
29 sh = logging.StreamHandler(sys.stdout)
30 sh.setLevel(logging.INFO)
31 sh.addFilter(lambda record: record.levelno <= logging.INFO)
32 sh.setFormatter(fmt)
33 logger.addHandler(sh)
34
35 sh = logging.StreamHandler(sys.stderr)
36 sh.setLevel(logging.WARNING)
37 sh.setFormatter(fmt)
38 logger.addHandler(sh)
39
40 if log_file is not None:
41 fh = logging.FileHandler(log_file)
42 fh.setLevel(logging.INFO)
43 fh.setFormatter(fmt)
44 logger.addHandler(fh)
45
46 return logger
47
48
49 class OptimLogger:
50 INDENT = 2
51 SEP_LINE_LENGTH = 80
52 SEP_CHARS = ("#", "=", "-", ".")
53
54 def __init__(self, logger: Optional[logging.Logger] = None):
55 if logger is None:
56 logger = default_logger()
57 self.logger = logger
58
59 self._environ_indent_offset = 0
60 self._environ_level_offset = 0
61
62 def _calc_abs_indent(self, indent: int, rel: bool):
63 abs_indent = indent
64 if rel:
65 abs_indent += self._environ_indent_offset
66 return abs_indent
67
68 def _calc_abs_level(self, level: int, rel: bool):
69 abs_level = level
70 if rel:
71 abs_level += self._environ_level_offset
72 return abs_level
73
74 def message(self, msg: str, indent: int = 0, rel=True) -> None:
75 abs_indent = self._calc_abs_indent(indent, rel)
76 for line in msg.splitlines():
77 self.logger.info(" " * abs_indent + line)
78
79 def sepline(self, level: int = 0, rel=True):
80 abs_level = self._calc_abs_level(level, rel)
81 self.message(self.SEP_CHARS[abs_level] * self.SEP_LINE_LENGTH)
82
83 def sep_message(
84 self, msg: str, level: int = 0, rel=True, top_sep=True, bottom_sep=True
85 ):
86 if top_sep:
87 self.sepline(level=level, rel=rel)
88 self.message(msg, rel=rel)
89 if bottom_sep:
90 self.sepline(level=level, rel=rel)
91
92 @contextlib.contextmanager
93 def environment(self, header: str):
94 self.sep_message(header)
95 self._environ_indent_offset += self.INDENT
96 self._environ_level_offset += 1
97 try:
98 yield
99 finally:
100 self._environ_level_offset -= 1
101 self._environ_indent_offset -= self.INDENT
102
103
104 def default_image_optim_log_fn(
105 optim_logger: OptimLogger, log_freq: int = 50, max_depth: int = 1
106 ) -> Callable[[int, Union[torch.Tensor, pystiche.LossDict]], None]:
107 def log_fn(step: int, loss: Union[torch.Tensor, pystiche.LossDict]) -> None:
108 if step % log_freq == 0:
109 with optim_logger.environment(f"Step {step}"):
110 if isinstance(loss, torch.Tensor):
111 optim_logger.message(f"loss: {loss.item():.3e}")
112 else: # isinstance(loss, pystiche.LossDict)
113 optim_logger.message(loss.aggregate(max_depth).format())
114
115 return log_fn
116
117
118 def default_pyramid_level_header(
119 num: int, level: PyramidLevel, input_image_size: Tuple[int, int]
120 ):
121 height, width = input_image_size
122 return f"Pyramid level {num} with {level.num_steps} steps " f"({width} x {height})"
123
124
125 def default_transformer_optim_log_fn(
126 optim_logger: OptimLogger,
127 num_batches: int,
128 log_freq: Optional[int] = None,
129 show_loading_velocity: bool = True,
130 show_processing_velocity: bool = True,
131 show_running_means: bool = True,
132 ):
133 if log_freq is None:
134 log_freq = min(round(1e-3 * num_batches) * 10, 50)
135
136 window_size = min(10 * log_freq, 1000)
137
138 meters = [LossMeter(show_avg=show_running_means, window_size=window_size)]
139 if show_loading_velocity:
140 meters.append(
141 FloatMeter(
142 name="loading_velocity",
143 fmt="{:3.1f} img/s",
144 show_avg=show_running_means,
145 window_size=window_size,
146 )
147 )
148 if show_processing_velocity:
149 meters.append(
150 FloatMeter(
151 name="processing_velocity",
152 fmt="{:3.1f} img/s",
153 show_avg=show_running_means,
154 window_size=window_size,
155 )
156 )
157
158 progress_meter = ProgressMeter(num_batches, *meters)
159
160 def log_fn(batch, loss, loading_velocity, processing_velocity):
161 progress_meter.update(
162 batch,
163 loss=loss,
164 loading_velocity=loading_velocity,
165 processing_velocity=processing_velocity,
166 )
167
168 if batch % log_freq == 0:
169 optim_logger.message(str(progress_meter))
170
171 return log_fn
172
173
174 def default_epoch_header_fn(
175 epoch: int, optimizer: Optimizer, lr_scheduler: Optional[LRScheduler]
176 ):
177 return f"Epoch {epoch}"
178
[end of pystiche/optim/log.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pystiche/optim/log.py b/pystiche/optim/log.py
--- a/pystiche/optim/log.py
+++ b/pystiche/optim/log.py
@@ -131,7 +131,7 @@
show_running_means: bool = True,
):
if log_freq is None:
- log_freq = min(round(1e-3 * num_batches) * 10, 50)
+ log_freq = max(min(round(1e-3 * num_batches) * 10, 50), 1)
window_size = min(10 * log_freq, 1000)
| {"golden_diff": "diff --git a/pystiche/optim/log.py b/pystiche/optim/log.py\n--- a/pystiche/optim/log.py\n+++ b/pystiche/optim/log.py\n@@ -131,7 +131,7 @@\n show_running_means: bool = True,\n ):\n if log_freq is None:\n- log_freq = min(round(1e-3 * num_batches) * 10, 50)\n+ log_freq = max(min(round(1e-3 * num_batches) * 10, 50), 1)\n \n window_size = min(10 * log_freq, 1000)\n", "issue": "ZeroDivisionError with default_epoch_optim_loop\nI get an `ZeroDivisionError: integer division or modulo by zero` when using the `default_transformer_epoch_optim_loop`. This is probably because the `num_batches` of the `batch_sampler` is much smaller than in the `default_transformer_optim_loop` which results in `log_freq=0` in `default_transformer_optim_log_fn.` \r\n\r\nBelow is a minimal example to reproduce the error: \r\n```python\r\nfrom pystiche.optim.log import default_transformer_optim_log_fn, OptimLogger\r\n\r\nlogger = OptimLogger()\r\nnum_batches = 300\r\nlog_fn = default_transformer_optim_log_fn(logger, num_batches)\r\nimage_loading_velocity = 1\r\nimage_processing_velocity = 1\r\nbatch = 1\r\nloss = 1\r\nlog_fn(batch, loss, image_loading_velocity, image_processing_velocity)\r\n```\n", "before_files": [{"content": "from typing import Union, Optional, Tuple, Callable\nimport contextlib\nimport sys\nimport logging\nimport torch\nfrom torch.optim.optimizer import Optimizer\nfrom torch.optim.lr_scheduler import _LRScheduler as LRScheduler\nimport pystiche\nfrom pystiche.pyramid.level import PyramidLevel\nfrom .meter import FloatMeter, LossMeter, ProgressMeter\n\n__all__ = [\n \"default_logger\",\n \"OptimLogger\",\n \"default_image_optim_log_fn\",\n \"default_pyramid_level_header\",\n \"default_transformer_optim_log_fn\",\n]\n\n\ndef default_logger(name: Optional[str] = None, log_file: Optional[str] = None):\n logger = logging.getLogger(name)\n logger.setLevel(logging.INFO)\n\n fmt = logging.Formatter(\n fmt=\"|%(asctime)s| %(message)s\", datefmt=\"%d.%m.%Y %H:%M:%S\"\n )\n\n sh = logging.StreamHandler(sys.stdout)\n sh.setLevel(logging.INFO)\n sh.addFilter(lambda record: record.levelno <= logging.INFO)\n sh.setFormatter(fmt)\n logger.addHandler(sh)\n\n sh = logging.StreamHandler(sys.stderr)\n sh.setLevel(logging.WARNING)\n sh.setFormatter(fmt)\n logger.addHandler(sh)\n\n if log_file is not None:\n fh = logging.FileHandler(log_file)\n fh.setLevel(logging.INFO)\n fh.setFormatter(fmt)\n logger.addHandler(fh)\n\n return logger\n\n\nclass OptimLogger:\n INDENT = 2\n SEP_LINE_LENGTH = 80\n SEP_CHARS = (\"#\", \"=\", \"-\", \".\")\n\n def __init__(self, logger: Optional[logging.Logger] = None):\n if logger is None:\n logger = default_logger()\n self.logger = logger\n\n self._environ_indent_offset = 0\n self._environ_level_offset = 0\n\n def _calc_abs_indent(self, indent: int, rel: bool):\n abs_indent = indent\n if rel:\n abs_indent += self._environ_indent_offset\n return abs_indent\n\n def _calc_abs_level(self, level: int, rel: bool):\n abs_level = level\n if rel:\n abs_level += self._environ_level_offset\n return abs_level\n\n def message(self, msg: str, indent: int = 0, rel=True) -> None:\n abs_indent = self._calc_abs_indent(indent, rel)\n for line in msg.splitlines():\n self.logger.info(\" \" * abs_indent + line)\n\n def sepline(self, level: int = 0, rel=True):\n abs_level = self._calc_abs_level(level, rel)\n self.message(self.SEP_CHARS[abs_level] * self.SEP_LINE_LENGTH)\n\n def sep_message(\n self, msg: str, level: int = 0, rel=True, top_sep=True, bottom_sep=True\n ):\n if top_sep:\n 
self.sepline(level=level, rel=rel)\n self.message(msg, rel=rel)\n if bottom_sep:\n self.sepline(level=level, rel=rel)\n\n @contextlib.contextmanager\n def environment(self, header: str):\n self.sep_message(header)\n self._environ_indent_offset += self.INDENT\n self._environ_level_offset += 1\n try:\n yield\n finally:\n self._environ_level_offset -= 1\n self._environ_indent_offset -= self.INDENT\n\n\ndef default_image_optim_log_fn(\n optim_logger: OptimLogger, log_freq: int = 50, max_depth: int = 1\n) -> Callable[[int, Union[torch.Tensor, pystiche.LossDict]], None]:\n def log_fn(step: int, loss: Union[torch.Tensor, pystiche.LossDict]) -> None:\n if step % log_freq == 0:\n with optim_logger.environment(f\"Step {step}\"):\n if isinstance(loss, torch.Tensor):\n optim_logger.message(f\"loss: {loss.item():.3e}\")\n else: # isinstance(loss, pystiche.LossDict)\n optim_logger.message(loss.aggregate(max_depth).format())\n\n return log_fn\n\n\ndef default_pyramid_level_header(\n num: int, level: PyramidLevel, input_image_size: Tuple[int, int]\n):\n height, width = input_image_size\n return f\"Pyramid level {num} with {level.num_steps} steps \" f\"({width} x {height})\"\n\n\ndef default_transformer_optim_log_fn(\n optim_logger: OptimLogger,\n num_batches: int,\n log_freq: Optional[int] = None,\n show_loading_velocity: bool = True,\n show_processing_velocity: bool = True,\n show_running_means: bool = True,\n):\n if log_freq is None:\n log_freq = min(round(1e-3 * num_batches) * 10, 50)\n\n window_size = min(10 * log_freq, 1000)\n\n meters = [LossMeter(show_avg=show_running_means, window_size=window_size)]\n if show_loading_velocity:\n meters.append(\n FloatMeter(\n name=\"loading_velocity\",\n fmt=\"{:3.1f} img/s\",\n show_avg=show_running_means,\n window_size=window_size,\n )\n )\n if show_processing_velocity:\n meters.append(\n FloatMeter(\n name=\"processing_velocity\",\n fmt=\"{:3.1f} img/s\",\n show_avg=show_running_means,\n window_size=window_size,\n )\n )\n\n progress_meter = ProgressMeter(num_batches, *meters)\n\n def log_fn(batch, loss, loading_velocity, processing_velocity):\n progress_meter.update(\n batch,\n loss=loss,\n loading_velocity=loading_velocity,\n processing_velocity=processing_velocity,\n )\n\n if batch % log_freq == 0:\n optim_logger.message(str(progress_meter))\n\n return log_fn\n\n\ndef default_epoch_header_fn(\n epoch: int, optimizer: Optimizer, lr_scheduler: Optional[LRScheduler]\n):\n return f\"Epoch {epoch}\"\n", "path": "pystiche/optim/log.py"}]} | 2,462 | 144 |
gh_patches_debug_5653 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-contrib-537 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Problem with using domain socket as bind address
**Describe your environment**
The OT-wsgi library throws an error if a domain socket is used as the bind address.
**Steps to reproduce**
Here is a test program:
```
import web
from time import sleep
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
ConsoleSpanExporter,
SimpleSpanProcessor,
)
from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware
from cheroot import wsgi
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
SimpleSpanProcessor(ConsoleSpanExporter())
)
#tracer = trace.get_tracer(__name__)
urls = (
'/', 'index'
)
class index:
def GET(self):
return "Hello, world!"
if __name__ == "__main__":
app = web.application(urls, globals())
func = app.wsgifunc()
func = OpenTelemetryMiddleware(func)
server = wsgi.WSGIServer("/tmp/lr.sock", func, server_name="localhost")
server.start()
```
invocation:
```
(base) kamalh-mbp:~ kamalh$ echo -ne 'GET / HTTP/1.1\r\nHost: test.com\r\n\r\n' | nc -U /tmp/lr.sock
HTTP/1.1 500 Internal Server Error
Content-Length: 0
Content-Type: text/plain
```
Error from the program
```
(base) kamalh-mbp:opentelemetry kamalh$ python3 wsgi-lr.py
Overriding of current TracerProvider is not allowed
ValueError("invalid literal for int() with base 10: ''")
Traceback (most recent call last):
File "/Users/kamalh/miniconda3/lib/python3.7/site-packages/cheroot/server.py", line 1287, in communicate
req.respond()
File "/Users/kamalh/miniconda3/lib/python3.7/site-packages/cheroot/server.py", line 1077, in respond
self.server.gateway(self).respond()
File "/Users/kamalh/miniconda3/lib/python3.7/site-packages/cheroot/wsgi.py", line 140, in respond
response = self.req.server.wsgi_app(self.env, self.start_response)
File "/Users/kamalh/miniconda3/lib/python3.7/site-packages/opentelemetry/instrumentation/wsgi/__init__.py", line 229, in __call__
attributes=collect_request_attributes(environ),
File "/Users/kamalh/miniconda3/lib/python3.7/site-packages/opentelemetry/instrumentation/wsgi/__init__.py", line 122, in collect_request_attributes
result.update({SpanAttributes.NET_HOST_PORT: int(host_port)})
ValueError: invalid literal for int() with base 10: ''
```
**What is the expected behavior?**
Expected the server to respond normally, just as it does when bound to a TCP socket.
**What is the actual behavior?**
Error message. Please see the paste above.
**Additional context**
Add any other context about the problem here.
</issue>
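
A detail the traceback makes clear: the WSGI environ evidently still carries a `SERVER_PORT` key under the Unix-socket bind, but its value is the empty string, so the unconditional `int(host_port)` in `collect_request_attributes` blows up. Below is a minimal sketch of the failure mode and of one possible guard; the `_safe_host_port` helper is purely illustrative and not part of the instrumentation package.

```python
# Minimal sketch of the failure mode, independent of cheroot/web.py.
# Assumption (backed by the traceback): under an AF_UNIX bind the WSGI
# environ carries SERVER_PORT as an empty string.
from typing import Optional


def _safe_host_port(environ: dict) -> Optional[int]:
    """Illustrative helper: return SERVER_PORT as an int, or None if unusable."""
    host_port = environ.get("SERVER_PORT")
    if host_port is None or host_port == "":
        return None
    return int(host_port)


environ_unix_socket = {"SERVER_PORT": "", "REQUEST_METHOD": "GET"}

# What the unpatched middleware effectively does -- raises ValueError:
try:
    int(environ_unix_socket["SERVER_PORT"])
except ValueError as exc:
    print("unguarded cast fails:", exc)

# A guarded version simply skips the attribute:
print(_safe_host_port(environ_unix_socket))        # None
print(_safe_host_port({"SERVER_PORT": "8080"}))    # 8080
```
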
<code>
[start of instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 This library provides a WSGI middleware that can be used on any WSGI framework
16 (such as Django / Flask) to track requests timing through OpenTelemetry.
17
18 Usage (Flask)
19 -------------
20
21 .. code-block:: python
22
23 from flask import Flask
24 from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware
25
26 app = Flask(__name__)
27 app.wsgi_app = OpenTelemetryMiddleware(app.wsgi_app)
28
29 @app.route("/")
30 def hello():
31 return "Hello!"
32
33 if __name__ == "__main__":
34 app.run(debug=True)
35
36
37 Usage (Django)
38 --------------
39
40 Modify the application's ``wsgi.py`` file as shown below.
41
42 .. code-block:: python
43
44 import os
45 from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware
46 from django.core.wsgi import get_wsgi_application
47
48 os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'application.settings')
49
50 application = get_wsgi_application()
51 application = OpenTelemetryMiddleware(application)
52
53 API
54 ---
55 """
56
57 import functools
58 import typing
59 import wsgiref.util as wsgiref_util
60
61 from opentelemetry import context, trace
62 from opentelemetry.instrumentation.utils import http_status_to_status_code
63 from opentelemetry.instrumentation.wsgi.version import __version__
64 from opentelemetry.propagate import extract
65 from opentelemetry.propagators.textmap import Getter
66 from opentelemetry.semconv.trace import SpanAttributes
67 from opentelemetry.trace.status import Status, StatusCode
68
69 _HTTP_VERSION_PREFIX = "HTTP/"
70 _CARRIER_KEY_PREFIX = "HTTP_"
71 _CARRIER_KEY_PREFIX_LEN = len(_CARRIER_KEY_PREFIX)
72
73
74 class WSGIGetter(Getter):
75 def get(
76 self, carrier: dict, key: str
77 ) -> typing.Optional[typing.List[str]]:
78 """Getter implementation to retrieve a HTTP header value from the
79 PEP3333-conforming WSGI environ
80
81 Args:
82 carrier: WSGI environ object
83 key: header name in environ object
84 Returns:
85 A list with a single string with the header value if it exists,
86 else None.
87 """
88 environ_key = "HTTP_" + key.upper().replace("-", "_")
89 value = carrier.get(environ_key)
90 if value is not None:
91 return [value]
92 return None
93
94 def keys(self, carrier):
95 return [
96 key[_CARRIER_KEY_PREFIX_LEN:].lower().replace("_", "-")
97 for key in carrier
98 if key.startswith(_CARRIER_KEY_PREFIX)
99 ]
100
101
102 wsgi_getter = WSGIGetter()
103
104
105 def setifnotnone(dic, key, value):
106 if value is not None:
107 dic[key] = value
108
109
110 def collect_request_attributes(environ):
111 """Collects HTTP request attributes from the PEP3333-conforming
112 WSGI environ and returns a dictionary to be used as span creation attributes."""
113
114 result = {
115 SpanAttributes.HTTP_METHOD: environ.get("REQUEST_METHOD"),
116 SpanAttributes.HTTP_SERVER_NAME: environ.get("SERVER_NAME"),
117 SpanAttributes.HTTP_SCHEME: environ.get("wsgi.url_scheme"),
118 }
119
120 host_port = environ.get("SERVER_PORT")
121 if host_port is not None:
122 result.update({SpanAttributes.NET_HOST_PORT: int(host_port)})
123
124 setifnotnone(result, SpanAttributes.HTTP_HOST, environ.get("HTTP_HOST"))
125 target = environ.get("RAW_URI")
126 if target is None: # Note: `"" or None is None`
127 target = environ.get("REQUEST_URI")
128 if target is not None:
129 result[SpanAttributes.HTTP_TARGET] = target
130 else:
131 result[SpanAttributes.HTTP_URL] = wsgiref_util.request_uri(environ)
132
133 remote_addr = environ.get("REMOTE_ADDR")
134 if remote_addr:
135 result[SpanAttributes.NET_PEER_IP] = remote_addr
136 remote_host = environ.get("REMOTE_HOST")
137 if remote_host and remote_host != remote_addr:
138 result[SpanAttributes.NET_PEER_NAME] = remote_host
139
140 user_agent = environ.get("HTTP_USER_AGENT")
141 if user_agent is not None and len(user_agent) > 0:
142 result[SpanAttributes.HTTP_USER_AGENT] = user_agent
143
144 setifnotnone(
145 result, SpanAttributes.NET_PEER_PORT, environ.get("REMOTE_PORT")
146 )
147 flavor = environ.get("SERVER_PROTOCOL", "")
148 if flavor.upper().startswith(_HTTP_VERSION_PREFIX):
149 flavor = flavor[len(_HTTP_VERSION_PREFIX) :]
150 if flavor:
151 result[SpanAttributes.HTTP_FLAVOR] = flavor
152
153 return result
154
155
156 def add_response_attributes(
157 span, start_response_status, response_headers
158 ): # pylint: disable=unused-argument
159 """Adds HTTP response attributes to span using the arguments
160 passed to a PEP3333-conforming start_response callable."""
161 if not span.is_recording():
162 return
163 status_code, _ = start_response_status.split(" ", 1)
164
165 try:
166 status_code = int(status_code)
167 except ValueError:
168 span.set_status(
169 Status(
170 StatusCode.ERROR,
171 "Non-integer HTTP status: " + repr(status_code),
172 )
173 )
174 else:
175 span.set_attribute(SpanAttributes.HTTP_STATUS_CODE, status_code)
176 span.set_status(Status(http_status_to_status_code(status_code)))
177
178
179 def get_default_span_name(environ):
180 """Default implementation for name_callback, returns HTTP {METHOD_NAME}."""
181 return "HTTP {}".format(environ.get("REQUEST_METHOD", "")).strip()
182
183
184 class OpenTelemetryMiddleware:
185 """The WSGI application middleware.
186
187 This class is a PEP 3333 conforming WSGI middleware that starts and
188 annotates spans for any requests it is invoked with.
189
190 Args:
191 wsgi: The WSGI application callable to forward requests to.
192 request_hook: Optional callback which is called with the server span and WSGI
193 environ object for every incoming request.
194 response_hook: Optional callback which is called with the server span,
195 WSGI environ, status_code and response_headers for every
196 incoming request.
197 tracer_provider: Optional tracer provider to use. If omitted the current
198 globally configured one is used.
199 """
200
201 def __init__(
202 self, wsgi, request_hook=None, response_hook=None, tracer_provider=None
203 ):
204 self.wsgi = wsgi
205 self.tracer = trace.get_tracer(__name__, __version__, tracer_provider)
206 self.request_hook = request_hook
207 self.response_hook = response_hook
208
209 @staticmethod
210 def _create_start_response(span, start_response, response_hook):
211 @functools.wraps(start_response)
212 def _start_response(status, response_headers, *args, **kwargs):
213 add_response_attributes(span, status, response_headers)
214 if response_hook:
215 response_hook(status, response_headers)
216 return start_response(status, response_headers, *args, **kwargs)
217
218 return _start_response
219
220 def __call__(self, environ, start_response):
221 """The WSGI application
222
223 Args:
224 environ: A WSGI environment.
225 start_response: The WSGI start_response callable.
226 """
227
228 token = context.attach(extract(environ, getter=wsgi_getter))
229
230 span = self.tracer.start_span(
231 get_default_span_name(environ),
232 kind=trace.SpanKind.SERVER,
233 attributes=collect_request_attributes(environ),
234 )
235
236 if self.request_hook:
237 self.request_hook(span, environ)
238
239 response_hook = self.response_hook
240 if response_hook:
241 response_hook = functools.partial(response_hook, span, environ)
242
243 try:
244 with trace.use_span(span):
245 start_response = self._create_start_response(
246 span, start_response, response_hook
247 )
248 iterable = self.wsgi(environ, start_response)
249 return _end_span_after_iterating(
250 iterable, span, self.tracer, token
251 )
252 except Exception as ex:
253 if span.is_recording():
254 span.set_status(Status(StatusCode.ERROR, str(ex)))
255 span.end()
256 context.detach(token)
257 raise
258
259
260 # Put this in a subfunction to not delay the call to the wrapped
261 # WSGI application (instrumentation should change the application
262 # behavior as little as possible).
263 def _end_span_after_iterating(iterable, span, tracer, token):
264 try:
265 with trace.use_span(span):
266 for yielded in iterable:
267 yield yielded
268 finally:
269 close = getattr(iterable, "close", None)
270 if close:
271 close()
272 span.end()
273 context.detach(token)
274
275
276 # TODO: inherit from opentelemetry.instrumentation.propagators.Setter
277
278
279 class ResponsePropagationSetter:
280 def set(self, carrier, key, value): # pylint: disable=no-self-use
281 carrier.append((key, value))
282
283
284 default_response_propagation_setter = ResponsePropagationSetter()
285
[end of instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py b/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py
@@ -118,7 +118,7 @@
}
host_port = environ.get("SERVER_PORT")
- if host_port is not None:
+ if host_port is not None and not host_port == "":
result.update({SpanAttributes.NET_HOST_PORT: int(host_port)})
setifnotnone(result, SpanAttributes.HTTP_HOST, environ.get("HTTP_HOST"))
| {"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py b/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py\n@@ -118,7 +118,7 @@\n }\n \n host_port = environ.get(\"SERVER_PORT\")\n- if host_port is not None:\n+ if host_port is not None and not host_port == \"\":\n result.update({SpanAttributes.NET_HOST_PORT: int(host_port)})\n \n setifnotnone(result, SpanAttributes.HTTP_HOST, environ.get(\"HTTP_HOST\"))\n", "issue": "Problem with using domain socket as bind address\n**Describe your environment** \r\nThe OT-wsgi library throws error if domain socket is used for the bind address.\r\n\r\n**Steps to reproduce**\r\nHere is a test program:\r\n\r\n```\r\nimport web\r\nfrom time import sleep\r\nfrom opentelemetry import trace\r\nfrom opentelemetry.sdk.trace import TracerProvider\r\nfrom opentelemetry.sdk.trace.export import (\r\n ConsoleSpanExporter,\r\n SimpleSpanProcessor,\r\n)\r\nfrom opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware\r\nfrom cheroot import wsgi\r\n\r\ntrace.set_tracer_provider(TracerProvider())\r\ntrace.get_tracer_provider().add_span_processor(\r\n SimpleSpanProcessor(ConsoleSpanExporter())\r\n)\r\n\r\n#tracer = trace.get_tracer(__name__)\r\n\r\nurls = (\r\n '/', 'index'\r\n)\r\nclass index:\r\n def GET(self):\r\n return \"Hello, world!\"\r\n\r\nif __name__ == \"__main__\":\r\n app = web.application(urls, globals())\r\n func = app.wsgifunc()\r\n\r\n func = OpenTelemetryMiddleware(func)\r\n\r\n server = wsgi.WSGIServer(\"/tmp/lr.sock\", func, server_name=\"localhost\")\r\n server.start()\r\n```\r\n\r\ninvocation:\r\n```\r\n(base) kamalh-mbp:~ kamalh$ echo -ne 'GET / HTTP/1.1\\r\\nHost: test.com\\r\\n\\r\\n' | nc -U /tmp/lr.sock\r\nHTTP/1.1 500 Internal Server Error\r\nContent-Length: 0\r\nContent-Type: text/plain\r\n```\r\n\r\nError from the program\r\n```\r\n(base) kamalh-mbp:opentelemetry kamalh$ python3 wsgi-lr.py\r\nOverriding of current TracerProvider is not allowed\r\nValueError(\"invalid literal for int() with base 10: ''\")\r\nTraceback (most recent call last):\r\n File \"/Users/kamalh/miniconda3/lib/python3.7/site-packages/cheroot/server.py\", line 1287, in communicate\r\n req.respond()\r\n File \"/Users/kamalh/miniconda3/lib/python3.7/site-packages/cheroot/server.py\", line 1077, in respond\r\n self.server.gateway(self).respond()\r\n File \"/Users/kamalh/miniconda3/lib/python3.7/site-packages/cheroot/wsgi.py\", line 140, in respond\r\n response = self.req.server.wsgi_app(self.env, self.start_response)\r\n File \"/Users/kamalh/miniconda3/lib/python3.7/site-packages/opentelemetry/instrumentation/wsgi/__init__.py\", line 229, in __call__\r\n attributes=collect_request_attributes(environ),\r\n File \"/Users/kamalh/miniconda3/lib/python3.7/site-packages/opentelemetry/instrumentation/wsgi/__init__.py\", line 122, in collect_request_attributes\r\n result.update({SpanAttributes.NET_HOST_PORT: int(host_port)})\r\nValueError: invalid literal for int() with base 10: ''\r\n```\r\n\r\n**What is the expected behavior?**\r\nExpect to see the server returning normally as in TCP sockets.\r\n\r\n**What is the actual behavior?**\r\nError message. 
Please see the paste above.\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nThis library provides a WSGI middleware that can be used on any WSGI framework\n(such as Django / Flask) to track requests timing through OpenTelemetry.\n\nUsage (Flask)\n-------------\n\n.. code-block:: python\n\n from flask import Flask\n from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware\n\n app = Flask(__name__)\n app.wsgi_app = OpenTelemetryMiddleware(app.wsgi_app)\n\n @app.route(\"/\")\n def hello():\n return \"Hello!\"\n\n if __name__ == \"__main__\":\n app.run(debug=True)\n\n\nUsage (Django)\n--------------\n\nModify the application's ``wsgi.py`` file as shown below.\n\n.. code-block:: python\n\n import os\n from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware\n from django.core.wsgi import get_wsgi_application\n\n os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'application.settings')\n\n application = get_wsgi_application()\n application = OpenTelemetryMiddleware(application)\n\nAPI\n---\n\"\"\"\n\nimport functools\nimport typing\nimport wsgiref.util as wsgiref_util\n\nfrom opentelemetry import context, trace\nfrom opentelemetry.instrumentation.utils import http_status_to_status_code\nfrom opentelemetry.instrumentation.wsgi.version import __version__\nfrom opentelemetry.propagate import extract\nfrom opentelemetry.propagators.textmap import Getter\nfrom opentelemetry.semconv.trace import SpanAttributes\nfrom opentelemetry.trace.status import Status, StatusCode\n\n_HTTP_VERSION_PREFIX = \"HTTP/\"\n_CARRIER_KEY_PREFIX = \"HTTP_\"\n_CARRIER_KEY_PREFIX_LEN = len(_CARRIER_KEY_PREFIX)\n\n\nclass WSGIGetter(Getter):\n def get(\n self, carrier: dict, key: str\n ) -> typing.Optional[typing.List[str]]:\n \"\"\"Getter implementation to retrieve a HTTP header value from the\n PEP3333-conforming WSGI environ\n\n Args:\n carrier: WSGI environ object\n key: header name in environ object\n Returns:\n A list with a single string with the header value if it exists,\n else None.\n \"\"\"\n environ_key = \"HTTP_\" + key.upper().replace(\"-\", \"_\")\n value = carrier.get(environ_key)\n if value is not None:\n return [value]\n return None\n\n def keys(self, carrier):\n return [\n key[_CARRIER_KEY_PREFIX_LEN:].lower().replace(\"_\", \"-\")\n for key in carrier\n if key.startswith(_CARRIER_KEY_PREFIX)\n ]\n\n\nwsgi_getter = WSGIGetter()\n\n\ndef setifnotnone(dic, key, value):\n if value is not None:\n dic[key] = value\n\n\ndef collect_request_attributes(environ):\n \"\"\"Collects HTTP request attributes from the PEP3333-conforming\n WSGI environ and returns a dictionary to be used as span creation attributes.\"\"\"\n\n result = {\n SpanAttributes.HTTP_METHOD: environ.get(\"REQUEST_METHOD\"),\n SpanAttributes.HTTP_SERVER_NAME: environ.get(\"SERVER_NAME\"),\n SpanAttributes.HTTP_SCHEME: environ.get(\"wsgi.url_scheme\"),\n }\n\n 
host_port = environ.get(\"SERVER_PORT\")\n if host_port is not None:\n result.update({SpanAttributes.NET_HOST_PORT: int(host_port)})\n\n setifnotnone(result, SpanAttributes.HTTP_HOST, environ.get(\"HTTP_HOST\"))\n target = environ.get(\"RAW_URI\")\n if target is None: # Note: `\"\" or None is None`\n target = environ.get(\"REQUEST_URI\")\n if target is not None:\n result[SpanAttributes.HTTP_TARGET] = target\n else:\n result[SpanAttributes.HTTP_URL] = wsgiref_util.request_uri(environ)\n\n remote_addr = environ.get(\"REMOTE_ADDR\")\n if remote_addr:\n result[SpanAttributes.NET_PEER_IP] = remote_addr\n remote_host = environ.get(\"REMOTE_HOST\")\n if remote_host and remote_host != remote_addr:\n result[SpanAttributes.NET_PEER_NAME] = remote_host\n\n user_agent = environ.get(\"HTTP_USER_AGENT\")\n if user_agent is not None and len(user_agent) > 0:\n result[SpanAttributes.HTTP_USER_AGENT] = user_agent\n\n setifnotnone(\n result, SpanAttributes.NET_PEER_PORT, environ.get(\"REMOTE_PORT\")\n )\n flavor = environ.get(\"SERVER_PROTOCOL\", \"\")\n if flavor.upper().startswith(_HTTP_VERSION_PREFIX):\n flavor = flavor[len(_HTTP_VERSION_PREFIX) :]\n if flavor:\n result[SpanAttributes.HTTP_FLAVOR] = flavor\n\n return result\n\n\ndef add_response_attributes(\n span, start_response_status, response_headers\n): # pylint: disable=unused-argument\n \"\"\"Adds HTTP response attributes to span using the arguments\n passed to a PEP3333-conforming start_response callable.\"\"\"\n if not span.is_recording():\n return\n status_code, _ = start_response_status.split(\" \", 1)\n\n try:\n status_code = int(status_code)\n except ValueError:\n span.set_status(\n Status(\n StatusCode.ERROR,\n \"Non-integer HTTP status: \" + repr(status_code),\n )\n )\n else:\n span.set_attribute(SpanAttributes.HTTP_STATUS_CODE, status_code)\n span.set_status(Status(http_status_to_status_code(status_code)))\n\n\ndef get_default_span_name(environ):\n \"\"\"Default implementation for name_callback, returns HTTP {METHOD_NAME}.\"\"\"\n return \"HTTP {}\".format(environ.get(\"REQUEST_METHOD\", \"\")).strip()\n\n\nclass OpenTelemetryMiddleware:\n \"\"\"The WSGI application middleware.\n\n This class is a PEP 3333 conforming WSGI middleware that starts and\n annotates spans for any requests it is invoked with.\n\n Args:\n wsgi: The WSGI application callable to forward requests to.\n request_hook: Optional callback which is called with the server span and WSGI\n environ object for every incoming request.\n response_hook: Optional callback which is called with the server span,\n WSGI environ, status_code and response_headers for every\n incoming request.\n tracer_provider: Optional tracer provider to use. 
If omitted the current\n globally configured one is used.\n \"\"\"\n\n def __init__(\n self, wsgi, request_hook=None, response_hook=None, tracer_provider=None\n ):\n self.wsgi = wsgi\n self.tracer = trace.get_tracer(__name__, __version__, tracer_provider)\n self.request_hook = request_hook\n self.response_hook = response_hook\n\n @staticmethod\n def _create_start_response(span, start_response, response_hook):\n @functools.wraps(start_response)\n def _start_response(status, response_headers, *args, **kwargs):\n add_response_attributes(span, status, response_headers)\n if response_hook:\n response_hook(status, response_headers)\n return start_response(status, response_headers, *args, **kwargs)\n\n return _start_response\n\n def __call__(self, environ, start_response):\n \"\"\"The WSGI application\n\n Args:\n environ: A WSGI environment.\n start_response: The WSGI start_response callable.\n \"\"\"\n\n token = context.attach(extract(environ, getter=wsgi_getter))\n\n span = self.tracer.start_span(\n get_default_span_name(environ),\n kind=trace.SpanKind.SERVER,\n attributes=collect_request_attributes(environ),\n )\n\n if self.request_hook:\n self.request_hook(span, environ)\n\n response_hook = self.response_hook\n if response_hook:\n response_hook = functools.partial(response_hook, span, environ)\n\n try:\n with trace.use_span(span):\n start_response = self._create_start_response(\n span, start_response, response_hook\n )\n iterable = self.wsgi(environ, start_response)\n return _end_span_after_iterating(\n iterable, span, self.tracer, token\n )\n except Exception as ex:\n if span.is_recording():\n span.set_status(Status(StatusCode.ERROR, str(ex)))\n span.end()\n context.detach(token)\n raise\n\n\n# Put this in a subfunction to not delay the call to the wrapped\n# WSGI application (instrumentation should change the application\n# behavior as little as possible).\ndef _end_span_after_iterating(iterable, span, tracer, token):\n try:\n with trace.use_span(span):\n for yielded in iterable:\n yield yielded\n finally:\n close = getattr(iterable, \"close\", None)\n if close:\n close()\n span.end()\n context.detach(token)\n\n\n# TODO: inherit from opentelemetry.instrumentation.propagators.Setter\n\n\nclass ResponsePropagationSetter:\n def set(self, carrier, key, value): # pylint: disable=no-self-use\n carrier.append((key, value))\n\n\ndefault_response_propagation_setter = ResponsePropagationSetter()\n", "path": "instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py"}]} | 4,059 | 191 |
gh_patches_debug_34727 | rasdani/github-patches | git_diff | ansible__ansible-43542 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'slack' callback plugin not working
<!---
Verify first that your issue/request is not already reported on GitHub.
THIS FORM WILL BE READ BY A MACHINE, COMPLETE ALL SECTIONS AS DESCRIBED.
Also test if the latest release, and devel branch are affected too.
ALWAYS add information AFTER (OUTSIDE) these html comments.
Otherwise it may end up being automatically closed by our bot. -->
##### SUMMARY
Running ansible-playbook with the slack callback plugin enabled produces an error in the callback plugin.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
slack.CallbackModule
##### ANSIBLE VERSION
<!--- Paste, BELOW THIS COMMENT, verbatim output from "ansible --version" between quotes below -->
```
ansible 2.6.1
config file = /Users/mikejames/GitHub/ConfigurationManagement/ansible/ansible.cfg
configured module search path = [u'/Users/mikejames/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.6.1/libexec/lib/python2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15 (default, Jun 17 2018, 12:46:58) [GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.2)]
```
##### CONFIGURATION
<!--- If using Ansible 2.4 or above, paste, BELOW THIS COMMENT, the results of "ansible-config dump --only-changed"
Otherwise, mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).-->
```
DEFAULT_CALLBACK_PLUGIN_PATH(/Users/mikejames/GitHub/ConfigurationManagement/ansible/ansible.cfg) = [u'/Users/mikejames/GitHub/ConfigurationManagement/ansible/playbooks/callback_plugins']
DEFAULT_CALLBACK_WHITELIST(/Users/mikejames/GitHub/ConfigurationManagement/ansible/ansible.cfg) = ['profile_tasks', 'timer', 'slack']
DEFAULT_HOST_LIST(/Users/mikejames/GitHub/ConfigurationManagement/ansible/ansible.cfg) = [u'/Users/mikejames/GitHub/ConfigurationManagement/ansible/inventory']
DEFAULT_LOG_PATH(/Users/mikejames/GitHub/ConfigurationManagement/ansible/ansible.cfg) = /Users/mikejames/GitHub/ConfigurationManagement/ansible/logs/ansible.log
DEFAULT_ROLES_PATH(/Users/mikejames/GitHub/ConfigurationManagement/ansible/ansible.cfg) = [u'/Users/mikejames/GitHub/ConfigurationManagement/ansible/playbooks/roles']
HOST_KEY_CHECKING(/Users/mikejames/GitHub/ConfigurationManagement/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Mention, BELOW THIS COMMENT, the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
Also mention the specific version of what you are trying to control,
e.g. if this is a network bug the version of firmware on the network device.-->
macOS High Sierra 10.13.6
##### STEPS TO REPRODUCE
<!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used. -->
https://gist.github.com/tightly-clutched/05a40814d3271b51a6530163e3209299
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I expected a result to be posted to the Slack channel.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
playbook execution failed
<!--- Paste verbatim command output between quotes below -->
https://gist.github.com/tightly-clutched/05a40814d3271b51a6530163e3209299
</issue>
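
The gist output is not reproduced above, but reading the plugin source below together with the patch shown further below suggests where things go wrong: `v2_playbook_on_start` fetches playbook invocation details (`tags`, `skip_tags`, `extra_vars`, `subset`, `inventory`, `remote_user`) through `self.get_option(...)`, even though the plugin's DOCUMENTATION block only declares `webhook_url`, `channel` and `username`, so those lookups have nothing to resolve against; the eventual fix reads them from the CLI options object instead. The classes and values in the sketch below are made up for illustration and are not Ansible's real API.

```python
# Made-up, self-contained illustration -- not Ansible's real classes.
DECLARED_PLUGIN_OPTIONS = {"webhook_url", "channel", "username"}


class TinyCallback:
    """Mimics the two lookup paths available to the slack callback."""

    def __init__(self, plugin_options, cli_options):
        self._plugin_options = plugin_options   # only what DOCUMENTATION declares
        self._options = cli_options             # the full ansible-playbook invocation

    def get_option(self, key):
        if key not in DECLARED_PLUGIN_OPTIONS:
            raise KeyError(f"{key!r} is not a declared option of this plugin")
        return self._plugin_options.get(key)


class FakeCliOptions:
    tags = ["all"]
    skip_tags = []
    inventory = ["/etc/ansible/hosts"]
    remote_user = "deploy"


cb = TinyCallback({"webhook_url": "https://hooks.slack.example/XXX"}, FakeCliOptions())
print(cb._options.tags)        # reading from the CLI invocation works
try:
    cb.get_option("tags")      # never declared by the plugin -> fails
except KeyError as exc:
    print("lookup failed:", exc)
```
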
<code>
[start of lib/ansible/plugins/callback/slack.py]
1 # (C) 2014-2015, Matt Martz <[email protected]>
2 # (C) 2017 Ansible Project
3 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
4
5 # Make coding more python3-ish
6 from __future__ import (absolute_import, division, print_function)
7 __metaclass__ = type
8
9 DOCUMENTATION = '''
10 callback: slack
11 callback_type: notification
12 requirements:
13 - whitelist in configuration
14 - prettytable (python library)
15 short_description: Sends play events to a Slack channel
16 version_added: "2.1"
17 description:
18 - This is an ansible callback plugin that sends status updates to a Slack channel during playbook execution.
19 - Before 2.4 only environment variables were available for configuring this plugin
20 options:
21 webhook_url:
22 required: True
23 description: Slack Webhook URL
24 env:
25 - name: SLACK_WEBHOOK_URL
26 ini:
27 - section: callback_slack
28 key: webhook_url
29 channel:
30 default: "#ansible"
31 description: Slack room to post in.
32 env:
33 - name: SLACK_CHANNEL
34 ini:
35 - section: callback_slack
36 key: channel
37 username:
38 description: Username to post as.
39 env:
40 - name: SLACK_USERNAME
41 default: ansible
42 ini:
43 - section: callback_slack
44 key: username
45 '''
46
47 import json
48 import os
49 import uuid
50
51 try:
52 from __main__ import cli
53 except ImportError:
54 cli = None
55
56 from ansible.module_utils.urls import open_url
57 from ansible.plugins.callback import CallbackBase
58
59 try:
60 import prettytable
61 HAS_PRETTYTABLE = True
62 except ImportError:
63 HAS_PRETTYTABLE = False
64
65
66 class CallbackModule(CallbackBase):
67 """This is an ansible callback plugin that sends status
68 updates to a Slack channel during playbook execution.
69 """
70 CALLBACK_VERSION = 2.0
71 CALLBACK_TYPE = 'notification'
72 CALLBACK_NAME = 'slack'
73 CALLBACK_NEEDS_WHITELIST = True
74
75 def __init__(self, display=None):
76
77 super(CallbackModule, self).__init__(display=display)
78
79 if not HAS_PRETTYTABLE:
80 self.disabled = True
81 self._display.warning('The `prettytable` python module is not '
82 'installed. Disabling the Slack callback '
83 'plugin.')
84
85 self.playbook_name = None
86
87 # This is a 6 character identifier provided with each message
88 # This makes it easier to correlate messages when there are more
89 # than 1 simultaneous playbooks running
90 self.guid = uuid.uuid4().hex[:6]
91
92 def set_options(self, task_keys=None, var_options=None, direct=None):
93
94 super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
95
96 self.webhook_url = self.get_option('webhook_url')
97 self.channel = self.get_option('channel')
98 self.username = self.get_option('username')
99 self.show_invocation = (self._display.verbosity > 1)
100
101 if self.webhook_url is None:
102 self.disabled = True
103 self._display.warning('Slack Webhook URL was not provided. The '
104 'Slack Webhook URL can be provided using '
105 'the `SLACK_WEBHOOK_URL` environment '
106 'variable.')
107
108 def send_msg(self, attachments):
109 payload = {
110 'channel': self.channel,
111 'username': self.username,
112 'attachments': attachments,
113 'parse': 'none',
114 'icon_url': ('http://cdn2.hubspot.net/hub/330046/'
115 'file-449187601-png/ansible_badge.png'),
116 }
117
118 data = json.dumps(payload)
119 self._display.debug(data)
120 self._display.debug(self.webhook_url)
121 try:
122 response = open_url(self.webhook_url, data=data)
123 return response.read()
124 except Exception as e:
125 self._display.warning('Could not submit message to Slack: %s' %
126 str(e))
127
128 def v2_playbook_on_start(self, playbook):
129 self.playbook_name = os.path.basename(playbook._file_name)
130
131 title = [
132 '*Playbook initiated* (_%s_)' % self.guid
133 ]
134 invocation_items = []
135 if self._plugin_options and self.show_invocation:
136 tags = self.get_option('tags')
137 skip_tags = self.get_option('skip_tags')
138 extra_vars = self.get_option('extra_vars')
139 subset = self.get_option('subset')
140 inventory = os.path.basename(
141 os.path.realpath(self.get_option('inventory'))
142 )
143
144 invocation_items.append('Inventory: %s' % inventory)
145 if tags and tags != 'all':
146 invocation_items.append('Tags: %s' % tags)
147 if skip_tags:
148 invocation_items.append('Skip Tags: %s' % skip_tags)
149 if subset:
150 invocation_items.append('Limit: %s' % subset)
151 if extra_vars:
152 invocation_items.append('Extra Vars: %s' %
153 ' '.join(extra_vars))
154
155 title.append('by *%s*' % self.get_option('remote_user'))
156
157 title.append('\n\n*%s*' % self.playbook_name)
158 msg_items = [' '.join(title)]
159 if invocation_items:
160 msg_items.append('```\n%s\n```' % '\n'.join(invocation_items))
161
162 msg = '\n'.join(msg_items)
163
164 attachments = [{
165 'fallback': msg,
166 'fields': [
167 {
168 'value': msg
169 }
170 ],
171 'color': 'warning',
172 'mrkdwn_in': ['text', 'fallback', 'fields'],
173 }]
174
175 self.send_msg(attachments=attachments)
176
177 def v2_playbook_on_play_start(self, play):
178 """Display Play start messages"""
179
180 name = play.name or 'Play name not specified (%s)' % play._uuid
181 msg = '*Starting play* (_%s_)\n\n*%s*' % (self.guid, name)
182 attachments = [
183 {
184 'fallback': msg,
185 'text': msg,
186 'color': 'warning',
187 'mrkdwn_in': ['text', 'fallback', 'fields'],
188 }
189 ]
190 self.send_msg(attachments=attachments)
191
192 def v2_playbook_on_stats(self, stats):
193 """Display info about playbook statistics"""
194
195 hosts = sorted(stats.processed.keys())
196
197 t = prettytable.PrettyTable(['Host', 'Ok', 'Changed', 'Unreachable',
198 'Failures'])
199
200 failures = False
201 unreachable = False
202
203 for h in hosts:
204 s = stats.summarize(h)
205
206 if s['failures'] > 0:
207 failures = True
208 if s['unreachable'] > 0:
209 unreachable = True
210
211 t.add_row([h] + [s[k] for k in ['ok', 'changed', 'unreachable',
212 'failures']])
213
214 attachments = []
215 msg_items = [
216 '*Playbook Complete* (_%s_)' % self.guid
217 ]
218 if failures or unreachable:
219 color = 'danger'
220 msg_items.append('\n*Failed!*')
221 else:
222 color = 'good'
223 msg_items.append('\n*Success!*')
224
225 msg_items.append('```\n%s\n```' % t)
226
227 msg = '\n'.join(msg_items)
228
229 attachments.append({
230 'fallback': msg,
231 'fields': [
232 {
233 'value': msg
234 }
235 ],
236 'color': color,
237 'mrkdwn_in': ['text', 'fallback', 'fields']
238 })
239
240 self.send_msg(attachments=attachments)
241
[end of lib/ansible/plugins/callback/slack.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/ansible/plugins/callback/slack.py b/lib/ansible/plugins/callback/slack.py
--- a/lib/ansible/plugins/callback/slack.py
+++ b/lib/ansible/plugins/callback/slack.py
@@ -76,6 +76,8 @@
super(CallbackModule, self).__init__(display=display)
+ self._options = cli.options
+
if not HAS_PRETTYTABLE:
self.disabled = True
self._display.warning('The `prettytable` python module is not '
@@ -132,27 +134,25 @@
'*Playbook initiated* (_%s_)' % self.guid
]
invocation_items = []
- if self._plugin_options and self.show_invocation:
- tags = self.get_option('tags')
- skip_tags = self.get_option('skip_tags')
- extra_vars = self.get_option('extra_vars')
- subset = self.get_option('subset')
- inventory = os.path.basename(
- os.path.realpath(self.get_option('inventory'))
- )
-
- invocation_items.append('Inventory: %s' % inventory)
- if tags and tags != 'all':
- invocation_items.append('Tags: %s' % tags)
+ if self._options and self.show_invocation:
+ tags = self._options.tags
+ skip_tags = self._options.skip_tags
+ extra_vars = self._options.extra_vars
+ subset = self._options.subset
+ inventory = [os.path.abspath(i) for i in self._options.inventory]
+
+ invocation_items.append('Inventory: %s' % ', '.join(inventory))
+ if tags and tags != ['all']:
+ invocation_items.append('Tags: %s' % ', '.join(tags))
if skip_tags:
- invocation_items.append('Skip Tags: %s' % skip_tags)
+ invocation_items.append('Skip Tags: %s' % ', '.join(skip_tags))
if subset:
invocation_items.append('Limit: %s' % subset)
if extra_vars:
invocation_items.append('Extra Vars: %s' %
' '.join(extra_vars))
- title.append('by *%s*' % self.get_option('remote_user'))
+ title.append('by *%s*' % self._options.remote_user)
title.append('\n\n*%s*' % self.playbook_name)
msg_items = [' '.join(title)]
| {"golden_diff": "diff --git a/lib/ansible/plugins/callback/slack.py b/lib/ansible/plugins/callback/slack.py\n--- a/lib/ansible/plugins/callback/slack.py\n+++ b/lib/ansible/plugins/callback/slack.py\n@@ -76,6 +76,8 @@\n \n super(CallbackModule, self).__init__(display=display)\n \n+ self._options = cli.options\n+\n if not HAS_PRETTYTABLE:\n self.disabled = True\n self._display.warning('The `prettytable` python module is not '\n@@ -132,27 +134,25 @@\n '*Playbook initiated* (_%s_)' % self.guid\n ]\n invocation_items = []\n- if self._plugin_options and self.show_invocation:\n- tags = self.get_option('tags')\n- skip_tags = self.get_option('skip_tags')\n- extra_vars = self.get_option('extra_vars')\n- subset = self.get_option('subset')\n- inventory = os.path.basename(\n- os.path.realpath(self.get_option('inventory'))\n- )\n-\n- invocation_items.append('Inventory: %s' % inventory)\n- if tags and tags != 'all':\n- invocation_items.append('Tags: %s' % tags)\n+ if self._options and self.show_invocation:\n+ tags = self._options.tags\n+ skip_tags = self._options.skip_tags\n+ extra_vars = self._options.extra_vars\n+ subset = self._options.subset\n+ inventory = [os.path.abspath(i) for i in self._options.inventory]\n+\n+ invocation_items.append('Inventory: %s' % ', '.join(inventory))\n+ if tags and tags != ['all']:\n+ invocation_items.append('Tags: %s' % ', '.join(tags))\n if skip_tags:\n- invocation_items.append('Skip Tags: %s' % skip_tags)\n+ invocation_items.append('Skip Tags: %s' % ', '.join(skip_tags))\n if subset:\n invocation_items.append('Limit: %s' % subset)\n if extra_vars:\n invocation_items.append('Extra Vars: %s' %\n ' '.join(extra_vars))\n \n- title.append('by *%s*' % self.get_option('remote_user'))\n+ title.append('by *%s*' % self._options.remote_user)\n \n title.append('\\n\\n*%s*' % self.playbook_name)\n msg_items = [' '.join(title)]\n", "issue": "'slack' callback plugin not working\n<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nTHIS FORM WILL BE READ BY A MACHINE, COMPLETE ALL SECTIONS AS DESCRIBED.\r\nAlso test if the latest release, and devel branch are affected too.\r\nALWAYS add information AFTER (OUTSIDE) these html comments.\r\nOtherwise it may end up being automatically closed by our bot. 
-->\r\n\r\n##### SUMMARY \r\nansible-playbook with slack module produces an error in callback plugin\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\nslack.CallbackModule\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste, BELOW THIS COMMENT, verbatim output from \"ansible --version\" between quotes below -->\r\n```\r\nansible 2.6.1\r\n config file = /Users/mikejames/GitHub/ConfigurationManagement/ansible/ansible.cfg\r\n configured module search path = [u'/Users/mikejames/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/local/Cellar/ansible/2.6.1/libexec/lib/python2.7/site-packages/ansible\r\n executable location = /usr/local/bin/ansible\r\n python version = 2.7.15 (default, Jun 17 2018, 12:46:58) [GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.2)]\r\n```\r\n\r\n##### CONFIGURATION\r\n<!--- If using Ansible 2.4 or above, paste, BELOW THIS COMMENT, the results of \"ansible-config dump --only-changed\"\r\nOtherwise, mention any settings you have changed/added/removed in ansible.cfg\r\n(or using the ANSIBLE_* environment variables).-->\r\n```\r\nDEFAULT_CALLBACK_PLUGIN_PATH(/Users/mikejames/GitHub/ConfigurationManagement/ansible/ansible.cfg) = [u'/Users/mikejames/GitHub/ConfigurationManagement/ansible/playbooks/callback_plugins']\r\nDEFAULT_CALLBACK_WHITELIST(/Users/mikejames/GitHub/ConfigurationManagement/ansible/ansible.cfg) = ['profile_tasks', 'timer', 'slack']\r\nDEFAULT_HOST_LIST(/Users/mikejames/GitHub/ConfigurationManagement/ansible/ansible.cfg) = [u'/Users/mikejames/GitHub/ConfigurationManagement/ansible/inventory']\r\nDEFAULT_LOG_PATH(/Users/mikejames/GitHub/ConfigurationManagement/ansible/ansible.cfg) = /Users/mikejames/GitHub/ConfigurationManagement/ansible/logs/ansible.log\r\nDEFAULT_ROLES_PATH(/Users/mikejames/GitHub/ConfigurationManagement/ansible/ansible.cfg) = [u'/Users/mikejames/GitHub/ConfigurationManagement/ansible/playbooks/roles']\r\nHOST_KEY_CHECKING(/Users/mikejames/GitHub/ConfigurationManagement/ansible/ansible.cfg) = False\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n<!--- Mention, BELOW THIS COMMENT, the OS you are running Ansible from, and the OS you are\r\nmanaging, or say \"N/A\" for anything that is not platform-specific.\r\nAlso mention the specific version of what you are trying to control,\r\ne.g. if this is a network bug the version of firmware on the network device.-->\r\nmacOS High Sierra 10.13.6\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used. -->\r\n\r\nhttps://gist.github.com/tightly-clutched/05a40814d3271b51a6530163e3209299\r\n<!--- You can also paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\n<!--- What did you expect to happen when running the steps above? -->\r\nI expected a result to be posted to slack channel\r\n\r\n##### ACTUAL RESULTS\r\n<!--- What actually happened? 
If possible run with extra verbosity (-vvvv) -->\r\nplaybook execution failed \r\n\r\n<!--- Paste verbatim command output between quotes below -->\r\nhttps://gist.github.com/tightly-clutched/05a40814d3271b51a6530163e3209299\n", "before_files": [{"content": "# (C) 2014-2015, Matt Martz <[email protected]>\n# (C) 2017 Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\n# Make coding more python3-ish\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nDOCUMENTATION = '''\n callback: slack\n callback_type: notification\n requirements:\n - whitelist in configuration\n - prettytable (python library)\n short_description: Sends play events to a Slack channel\n version_added: \"2.1\"\n description:\n - This is an ansible callback plugin that sends status updates to a Slack channel during playbook execution.\n - Before 2.4 only environment variables were available for configuring this plugin\n options:\n webhook_url:\n required: True\n description: Slack Webhook URL\n env:\n - name: SLACK_WEBHOOK_URL\n ini:\n - section: callback_slack\n key: webhook_url\n channel:\n default: \"#ansible\"\n description: Slack room to post in.\n env:\n - name: SLACK_CHANNEL\n ini:\n - section: callback_slack\n key: channel\n username:\n description: Username to post as.\n env:\n - name: SLACK_USERNAME\n default: ansible\n ini:\n - section: callback_slack\n key: username\n'''\n\nimport json\nimport os\nimport uuid\n\ntry:\n from __main__ import cli\nexcept ImportError:\n cli = None\n\nfrom ansible.module_utils.urls import open_url\nfrom ansible.plugins.callback import CallbackBase\n\ntry:\n import prettytable\n HAS_PRETTYTABLE = True\nexcept ImportError:\n HAS_PRETTYTABLE = False\n\n\nclass CallbackModule(CallbackBase):\n \"\"\"This is an ansible callback plugin that sends status\n updates to a Slack channel during playbook execution.\n \"\"\"\n CALLBACK_VERSION = 2.0\n CALLBACK_TYPE = 'notification'\n CALLBACK_NAME = 'slack'\n CALLBACK_NEEDS_WHITELIST = True\n\n def __init__(self, display=None):\n\n super(CallbackModule, self).__init__(display=display)\n\n if not HAS_PRETTYTABLE:\n self.disabled = True\n self._display.warning('The `prettytable` python module is not '\n 'installed. Disabling the Slack callback '\n 'plugin.')\n\n self.playbook_name = None\n\n # This is a 6 character identifier provided with each message\n # This makes it easier to correlate messages when there are more\n # than 1 simultaneous playbooks running\n self.guid = uuid.uuid4().hex[:6]\n\n def set_options(self, task_keys=None, var_options=None, direct=None):\n\n super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)\n\n self.webhook_url = self.get_option('webhook_url')\n self.channel = self.get_option('channel')\n self.username = self.get_option('username')\n self.show_invocation = (self._display.verbosity > 1)\n\n if self.webhook_url is None:\n self.disabled = True\n self._display.warning('Slack Webhook URL was not provided. 
The '\n 'Slack Webhook URL can be provided using '\n 'the `SLACK_WEBHOOK_URL` environment '\n 'variable.')\n\n def send_msg(self, attachments):\n payload = {\n 'channel': self.channel,\n 'username': self.username,\n 'attachments': attachments,\n 'parse': 'none',\n 'icon_url': ('http://cdn2.hubspot.net/hub/330046/'\n 'file-449187601-png/ansible_badge.png'),\n }\n\n data = json.dumps(payload)\n self._display.debug(data)\n self._display.debug(self.webhook_url)\n try:\n response = open_url(self.webhook_url, data=data)\n return response.read()\n except Exception as e:\n self._display.warning('Could not submit message to Slack: %s' %\n str(e))\n\n def v2_playbook_on_start(self, playbook):\n self.playbook_name = os.path.basename(playbook._file_name)\n\n title = [\n '*Playbook initiated* (_%s_)' % self.guid\n ]\n invocation_items = []\n if self._plugin_options and self.show_invocation:\n tags = self.get_option('tags')\n skip_tags = self.get_option('skip_tags')\n extra_vars = self.get_option('extra_vars')\n subset = self.get_option('subset')\n inventory = os.path.basename(\n os.path.realpath(self.get_option('inventory'))\n )\n\n invocation_items.append('Inventory: %s' % inventory)\n if tags and tags != 'all':\n invocation_items.append('Tags: %s' % tags)\n if skip_tags:\n invocation_items.append('Skip Tags: %s' % skip_tags)\n if subset:\n invocation_items.append('Limit: %s' % subset)\n if extra_vars:\n invocation_items.append('Extra Vars: %s' %\n ' '.join(extra_vars))\n\n title.append('by *%s*' % self.get_option('remote_user'))\n\n title.append('\\n\\n*%s*' % self.playbook_name)\n msg_items = [' '.join(title)]\n if invocation_items:\n msg_items.append('```\\n%s\\n```' % '\\n'.join(invocation_items))\n\n msg = '\\n'.join(msg_items)\n\n attachments = [{\n 'fallback': msg,\n 'fields': [\n {\n 'value': msg\n }\n ],\n 'color': 'warning',\n 'mrkdwn_in': ['text', 'fallback', 'fields'],\n }]\n\n self.send_msg(attachments=attachments)\n\n def v2_playbook_on_play_start(self, play):\n \"\"\"Display Play start messages\"\"\"\n\n name = play.name or 'Play name not specified (%s)' % play._uuid\n msg = '*Starting play* (_%s_)\\n\\n*%s*' % (self.guid, name)\n attachments = [\n {\n 'fallback': msg,\n 'text': msg,\n 'color': 'warning',\n 'mrkdwn_in': ['text', 'fallback', 'fields'],\n }\n ]\n self.send_msg(attachments=attachments)\n\n def v2_playbook_on_stats(self, stats):\n \"\"\"Display info about playbook statistics\"\"\"\n\n hosts = sorted(stats.processed.keys())\n\n t = prettytable.PrettyTable(['Host', 'Ok', 'Changed', 'Unreachable',\n 'Failures'])\n\n failures = False\n unreachable = False\n\n for h in hosts:\n s = stats.summarize(h)\n\n if s['failures'] > 0:\n failures = True\n if s['unreachable'] > 0:\n unreachable = True\n\n t.add_row([h] + [s[k] for k in ['ok', 'changed', 'unreachable',\n 'failures']])\n\n attachments = []\n msg_items = [\n '*Playbook Complete* (_%s_)' % self.guid\n ]\n if failures or unreachable:\n color = 'danger'\n msg_items.append('\\n*Failed!*')\n else:\n color = 'good'\n msg_items.append('\\n*Success!*')\n\n msg_items.append('```\\n%s\\n```' % t)\n\n msg = '\\n'.join(msg_items)\n\n attachments.append({\n 'fallback': msg,\n 'fields': [\n {\n 'value': msg\n }\n ],\n 'color': color,\n 'mrkdwn_in': ['text', 'fallback', 'fields']\n })\n\n self.send_msg(attachments=attachments)\n", "path": "lib/ansible/plugins/callback/slack.py"}]} | 3,782 | 536 |
gh_patches_debug_23512 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-173 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MetricCollection should enforce order when passed a dict
## 🐛 Bug
Not a clear bug yet, but just thinking about distributed metric computation, in here: https://github.com/PyTorchLightning/metrics/blob/53d570158a503497351ae45ec895ca44a0546068/torchmetrics/collections.py#L81
we should make sure to sort the names before inserting them, so that every process inserts the metrics in the same order (ModuleDict is otherwise just an OrderedDict). If we don't, we will get deadlocks when doing distributed metric updates.
Additionally, we might want to enforce sorting when passed list/tuple, but that might be more on the user end.
### To Reproduce
On each of the workers, pass a dictionary with the same metrics but in a different order, then call compute and observe the deadlock.
</issue>
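
The deadlock risk exists because distributed reductions are collective operations: every rank has to reach them for the same metrics in the same order, and a ModuleDict iterates in insertion order, so two ranks that insert the same dict of metrics with different key order end up out of step. The sketch below illustrates the sort-before-insert idea from the report, with plain dicts standing in for real Metric objects.

```python
# Plain-dict sketch of the ordering problem and the sort-before-insert remedy.
from collections import OrderedDict


def build_collection(metrics, deterministic):
    collection = OrderedDict()
    keys = sorted(metrics) if deterministic else list(metrics)
    for name in keys:
        collection[name] = metrics[name]
    return collection


# Two "ranks" build the same collection but receive the dict in different order.
rank0 = build_collection({"micro_recall": 0, "macro_recall": 1}, deterministic=False)
rank1 = build_collection({"macro_recall": 1, "micro_recall": 0}, deterministic=False)
print(list(rank0), list(rank1))   # different iteration order -> mismatched collectives

rank0 = build_collection({"micro_recall": 0, "macro_recall": 1}, deterministic=True)
rank1 = build_collection({"macro_recall": 1, "micro_recall": 0}, deterministic=True)
print(list(rank0), list(rank1))   # identical order on every rank
```
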
<code>
[start of torchmetrics/collections.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from copy import deepcopy
16 from typing import Any, Dict, List, Optional, Tuple, Union
17
18 from torch import nn
19
20 from torchmetrics.metric import Metric
21
22
23 class MetricCollection(nn.ModuleDict):
24 """
25 MetricCollection class can be used to chain metrics that have the same
26 call pattern into one single class.
27
28 Args:
29 metrics: One of the following
30
31 * list or tuple: if metrics are passed in as a list, will use the
32 metrics class name as key for output dict. Therefore, two metrics
33 of the same class cannot be chained this way.
34
35 * dict: if metrics are passed in as a dict, will use each key in the
36 dict as key for output dict. Use this format if you want to chain
37 together multiple of the same metric with different parameters.
38
39 prefix: a string to append in front of the keys of the output dict
40
41 Raises:
42 ValueError:
43 If one of the elements of ``metrics`` is not an instance of ``pl.metrics.Metric``.
44 ValueError:
45 If two elements in ``metrics`` have the same ``name``.
46 ValueError:
47 If ``metrics`` is not a ``list``, ``tuple`` or a ``dict``.
48
49 Example (input as list):
50 >>> import torch
51 >>> from pprint import pprint
52 >>> from torchmetrics import MetricCollection, Accuracy, Precision, Recall
53 >>> target = torch.tensor([0, 2, 0, 2, 0, 1, 0, 2])
54 >>> preds = torch.tensor([2, 1, 2, 0, 1, 2, 2, 2])
55 >>> metrics = MetricCollection([Accuracy(),
56 ... Precision(num_classes=3, average='macro'),
57 ... Recall(num_classes=3, average='macro')])
58 >>> metrics(preds, target)
59 {'Accuracy': tensor(0.1250), 'Precision': tensor(0.0667), 'Recall': tensor(0.1111)}
60
61 Example (input as dict):
62 >>> metrics = MetricCollection({'micro_recall': Recall(num_classes=3, average='micro'),
63 ... 'macro_recall': Recall(num_classes=3, average='macro')})
64 >>> same_metric = metrics.clone()
65 >>> pprint(metrics(preds, target))
66 {'macro_recall': tensor(0.1111), 'micro_recall': tensor(0.1250)}
67 >>> pprint(same_metric(preds, target))
68 {'macro_recall': tensor(0.1111), 'micro_recall': tensor(0.1250)}
69 >>> metrics.persistent()
70
71 """
72
73 def __init__(
74 self,
75 metrics: Union[List[Metric], Tuple[Metric], Dict[str, Metric]],
76 prefix: Optional[str] = None,
77 ):
78 super().__init__()
79 if isinstance(metrics, dict):
80 # Check all values are metrics
81 for name, metric in metrics.items():
82 if not isinstance(metric, Metric):
83 raise ValueError(
84 f"Value {metric} belonging to key {name}"
85 " is not an instance of `pl.metrics.Metric`"
86 )
87 self[name] = metric
88 elif isinstance(metrics, (tuple, list)):
89 for metric in metrics:
90 if not isinstance(metric, Metric):
91 raise ValueError(
92 f"Input {metric} to `MetricCollection` is not a instance"
93 " of `pl.metrics.Metric`"
94 )
95 name = metric.__class__.__name__
96 if name in self:
97 raise ValueError(f"Encountered two metrics both named {name}")
98 self[name] = metric
99 else:
100 raise ValueError("Unknown input to MetricCollection.")
101
102 self.prefix = self._check_prefix_arg(prefix)
103
104 def forward(self, *args, **kwargs) -> Dict[str, Any]: # pylint: disable=E0202
105 """
106 Iteratively call forward for each metric. Positional arguments (args) will
107 be passed to every metric in the collection, while keyword arguments (kwargs)
108 will be filtered based on the signature of the individual metric.
109 """
110 return {self._set_prefix(k): m(*args, **m._filter_kwargs(**kwargs)) for k, m in self.items()}
111
112 def update(self, *args, **kwargs): # pylint: disable=E0202
113 """
114 Iteratively call update for each metric. Positional arguments (args) will
115 be passed to every metric in the collection, while keyword arguments (kwargs)
116 will be filtered based on the signature of the individual metric.
117 """
118 for _, m in self.items():
119 m_kwargs = m._filter_kwargs(**kwargs)
120 m.update(*args, **m_kwargs)
121
122 def compute(self) -> Dict[str, Any]:
123 return {self._set_prefix(k): m.compute() for k, m in self.items()}
124
125 def reset(self) -> None:
126 """ Iteratively call reset for each metric """
127 for _, m in self.items():
128 m.reset()
129
130 def clone(self, prefix: Optional[str] = None) -> 'MetricCollection':
131 """ Make a copy of the metric collection
132 Args:
133 prefix: a string to append in front of the metric keys
134 """
135 mc = deepcopy(self)
136 mc.prefix = self._check_prefix_arg(prefix)
137 return mc
138
139 def persistent(self, mode: bool = True) -> None:
140 """Method for post-init to change if metric states should be saved to
141 its state_dict
142 """
143 for _, m in self.items():
144 m.persistent(mode)
145
146 def _set_prefix(self, k: str) -> str:
147 return k if self.prefix is None else self.prefix + k
148
149 @staticmethod
150 def _check_prefix_arg(prefix: str) -> Optional[str]:
151 if prefix is not None:
152 if isinstance(prefix, str):
153 return prefix
154 else:
155 raise ValueError('Expected input `prefix` to be a string')
156 return None
157
[end of torchmetrics/collections.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchmetrics/collections.py b/torchmetrics/collections.py
--- a/torchmetrics/collections.py
+++ b/torchmetrics/collections.py
@@ -35,6 +35,7 @@
* dict: if metrics are passed in as a dict, will use each key in the
dict as key for output dict. Use this format if you want to chain
together multiple of the same metric with different parameters.
+ Note that the keys in the output dict will be sorted alphabetically.
prefix: a string to append in front of the keys of the output dict
@@ -78,7 +79,9 @@
super().__init__()
if isinstance(metrics, dict):
# Check all values are metrics
- for name, metric in metrics.items():
+ # Make sure that metrics are added in deterministic order
+ for name in sorted(metrics.keys()):
+ metric = metrics[name]
if not isinstance(metric, Metric):
raise ValueError(
f"Value {metric} belonging to key {name}"
| {"golden_diff": "diff --git a/torchmetrics/collections.py b/torchmetrics/collections.py\n--- a/torchmetrics/collections.py\n+++ b/torchmetrics/collections.py\n@@ -35,6 +35,7 @@\n * dict: if metrics are passed in as a dict, will use each key in the\n dict as key for output dict. Use this format if you want to chain\n together multiple of the same metric with different parameters.\n+ Note that the keys in the output dict will be sorted alphabetically.\n \n prefix: a string to append in front of the keys of the output dict\n \n@@ -78,7 +79,9 @@\n super().__init__()\n if isinstance(metrics, dict):\n # Check all values are metrics\n- for name, metric in metrics.items():\n+ # Make sure that metrics are added in deterministic order\n+ for name in sorted(metrics.keys()):\n+ metric = metrics[name]\n if not isinstance(metric, Metric):\n raise ValueError(\n f\"Value {metric} belonging to key {name}\"\n", "issue": "MetricCollection should enforce order when passed a dict\n## \ud83d\udc1b Bug\r\n\r\nNot a clear bug yet, but just thinking about distributed metric computation, in here: https://github.com/PyTorchLightning/metrics/blob/53d570158a503497351ae45ec895ca44a0546068/torchmetrics/collections.py#L81\r\nwe should make sure to sort the names before the insert so that we insert things in the same order (ModuleDict is already OrderedDict otherwise). If we don't we will get deadlocks when doing distributed metric updates.\r\n\r\nAdditionally, we might want to enforce sorting when passed list/tuple, but that might be more on the user end.\r\n\r\n\r\n### To Reproduce\r\n\r\nOn each of the workers, pass dictionary with same metrics but in different order, try compute and observe deadlock.\r\n\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom copy import deepcopy\nfrom typing import Any, Dict, List, Optional, Tuple, Union\n\nfrom torch import nn\n\nfrom torchmetrics.metric import Metric\n\n\nclass MetricCollection(nn.ModuleDict):\n \"\"\"\n MetricCollection class can be used to chain metrics that have the same\n call pattern into one single class.\n\n Args:\n metrics: One of the following\n\n * list or tuple: if metrics are passed in as a list, will use the\n metrics class name as key for output dict. Therefore, two metrics\n of the same class cannot be chained this way.\n\n * dict: if metrics are passed in as a dict, will use each key in the\n dict as key for output dict. 
Use this format if you want to chain\n together multiple of the same metric with different parameters.\n\n prefix: a string to append in front of the keys of the output dict\n\n Raises:\n ValueError:\n If one of the elements of ``metrics`` is not an instance of ``pl.metrics.Metric``.\n ValueError:\n If two elements in ``metrics`` have the same ``name``.\n ValueError:\n If ``metrics`` is not a ``list``, ``tuple`` or a ``dict``.\n\n Example (input as list):\n >>> import torch\n >>> from pprint import pprint\n >>> from torchmetrics import MetricCollection, Accuracy, Precision, Recall\n >>> target = torch.tensor([0, 2, 0, 2, 0, 1, 0, 2])\n >>> preds = torch.tensor([2, 1, 2, 0, 1, 2, 2, 2])\n >>> metrics = MetricCollection([Accuracy(),\n ... Precision(num_classes=3, average='macro'),\n ... Recall(num_classes=3, average='macro')])\n >>> metrics(preds, target)\n {'Accuracy': tensor(0.1250), 'Precision': tensor(0.0667), 'Recall': tensor(0.1111)}\n\n Example (input as dict):\n >>> metrics = MetricCollection({'micro_recall': Recall(num_classes=3, average='micro'),\n ... 'macro_recall': Recall(num_classes=3, average='macro')})\n >>> same_metric = metrics.clone()\n >>> pprint(metrics(preds, target))\n {'macro_recall': tensor(0.1111), 'micro_recall': tensor(0.1250)}\n >>> pprint(same_metric(preds, target))\n {'macro_recall': tensor(0.1111), 'micro_recall': tensor(0.1250)}\n >>> metrics.persistent()\n\n \"\"\"\n\n def __init__(\n self,\n metrics: Union[List[Metric], Tuple[Metric], Dict[str, Metric]],\n prefix: Optional[str] = None,\n ):\n super().__init__()\n if isinstance(metrics, dict):\n # Check all values are metrics\n for name, metric in metrics.items():\n if not isinstance(metric, Metric):\n raise ValueError(\n f\"Value {metric} belonging to key {name}\"\n \" is not an instance of `pl.metrics.Metric`\"\n )\n self[name] = metric\n elif isinstance(metrics, (tuple, list)):\n for metric in metrics:\n if not isinstance(metric, Metric):\n raise ValueError(\n f\"Input {metric} to `MetricCollection` is not a instance\"\n \" of `pl.metrics.Metric`\"\n )\n name = metric.__class__.__name__\n if name in self:\n raise ValueError(f\"Encountered two metrics both named {name}\")\n self[name] = metric\n else:\n raise ValueError(\"Unknown input to MetricCollection.\")\n\n self.prefix = self._check_prefix_arg(prefix)\n\n def forward(self, *args, **kwargs) -> Dict[str, Any]: # pylint: disable=E0202\n \"\"\"\n Iteratively call forward for each metric. Positional arguments (args) will\n be passed to every metric in the collection, while keyword arguments (kwargs)\n will be filtered based on the signature of the individual metric.\n \"\"\"\n return {self._set_prefix(k): m(*args, **m._filter_kwargs(**kwargs)) for k, m in self.items()}\n\n def update(self, *args, **kwargs): # pylint: disable=E0202\n \"\"\"\n Iteratively call update for each metric. 
Positional arguments (args) will\n be passed to every metric in the collection, while keyword arguments (kwargs)\n will be filtered based on the signature of the individual metric.\n \"\"\"\n for _, m in self.items():\n m_kwargs = m._filter_kwargs(**kwargs)\n m.update(*args, **m_kwargs)\n\n def compute(self) -> Dict[str, Any]:\n return {self._set_prefix(k): m.compute() for k, m in self.items()}\n\n def reset(self) -> None:\n \"\"\" Iteratively call reset for each metric \"\"\"\n for _, m in self.items():\n m.reset()\n\n def clone(self, prefix: Optional[str] = None) -> 'MetricCollection':\n \"\"\" Make a copy of the metric collection\n Args:\n prefix: a string to append in front of the metric keys\n \"\"\"\n mc = deepcopy(self)\n mc.prefix = self._check_prefix_arg(prefix)\n return mc\n\n def persistent(self, mode: bool = True) -> None:\n \"\"\"Method for post-init to change if metric states should be saved to\n its state_dict\n \"\"\"\n for _, m in self.items():\n m.persistent(mode)\n\n def _set_prefix(self, k: str) -> str:\n return k if self.prefix is None else self.prefix + k\n\n @staticmethod\n def _check_prefix_arg(prefix: str) -> Optional[str]:\n if prefix is not None:\n if isinstance(prefix, str):\n return prefix\n else:\n raise ValueError('Expected input `prefix` to be a string')\n return None\n", "path": "torchmetrics/collections.py"}]} | 2,508 | 226 |
gh_patches_debug_21464 | rasdani/github-patches | git_diff | netbox-community__netbox-9547 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Provide Markdown help with links to local documentation pages
### NetBox version
v3.2.4
### Feature type
New functionality
### Proposed functionality
Currently netbox supports a documentation package as part of the main release due to https://github.com/netbox-community/netbox/issues/6328
I propose to change the Markdown assistance available in some text areas (for example in comment fields), which currently points to "https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet", so that it points to a URL that is part of the offline documentation package

I propose that a new section be created in the documentation package, based on the GitHub URL above, and that the corresponding link be used within these assistance fields.
The final URL could then be something like the following, if this section is placed under references:
https://netboxfqdn/static/docs/reference/markdown/
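A rough sketch of how the help text in `CommentField` could point at the bundled docs instead of GitHub (hypothetical; it assumes Django's `static()` helper resolves the offline docs path):

```python
from django.templatetags.static import static

# Sketch only: build the Markdown help link from the locally served documentation
# instead of the external GitHub cheatsheet URL.
markdown_help_url = static('docs/reference/markdown/')
help_text = (
    '<i class="mdi mdi-information-outline"></i> '
    f'<a href="{markdown_help_url}" target="_blank" tabindex="-1">Markdown</a> syntax is supported'
)
```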
### Use case
The following use cases are applicable:
Provide documentation that always matches the specific version being used, instead of the online version, which refers to the latest version.
Provide access to the documentation for systems installed in an isolated management environment that does not have internet access.
### Database changes
none
### External dependencies
none
</issue>
<code>
[start of netbox/utilities/forms/fields/fields.py]
1 import json
2
3 from django import forms
4 from django.db.models import Count
5 from django.forms.fields import JSONField as _JSONField, InvalidJSONInput
6 from netaddr import AddrFormatError, EUI
7
8 from utilities.forms import widgets
9 from utilities.validators import EnhancedURLValidator
10
11 __all__ = (
12 'ChoiceField',
13 'ColorField',
14 'CommentField',
15 'JSONField',
16 'LaxURLField',
17 'MACAddressField',
18 'MultipleChoiceField',
19 'SlugField',
20 'TagFilterField',
21 )
22
23
24 class CommentField(forms.CharField):
25 """
26 A textarea with support for Markdown rendering. Exists mostly just to add a standard `help_text`.
27 """
28 widget = forms.Textarea
29 # TODO: Port Markdown cheat sheet to internal documentation
30 help_text = """
31 <i class="mdi mdi-information-outline"></i>
32 <a href="https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet" target="_blank" tabindex="-1">
33 Markdown</a> syntax is supported
34 """
35
36 def __init__(self, *, label='', help_text=help_text, required=False, **kwargs):
37 super().__init__(label=label, help_text=help_text, required=required, **kwargs)
38
39
40 class SlugField(forms.SlugField):
41 """
42 Extend Django's built-in SlugField to automatically populate from a field called `name` unless otherwise specified.
43
44 Parameters:
45 slug_source: Name of the form field from which the slug value will be derived
46 """
47 widget = widgets.SlugWidget
48 help_text = "URL-friendly unique shorthand"
49
50 def __init__(self, *, slug_source='name', help_text=help_text, **kwargs):
51 super().__init__(help_text=help_text, **kwargs)
52
53 self.widget.attrs['slug-source'] = slug_source
54
55
56 class ColorField(forms.CharField):
57 """
58 A field which represents a color value in hexadecimal `RRGGBB` format. Utilizes NetBox's `ColorSelect` widget to
59 render choices.
60 """
61 widget = widgets.ColorSelect
62
63
64 class TagFilterField(forms.MultipleChoiceField):
65 """
66 A filter field for the tags of a model. Only the tags used by a model are displayed.
67
68 :param model: The model of the filter
69 """
70 widget = widgets.StaticSelectMultiple
71
72 def __init__(self, model, *args, **kwargs):
73 def get_choices():
74 tags = model.tags.annotate(
75 count=Count('extras_taggeditem_items')
76 ).order_by('name')
77 return [
78 (str(tag.slug), '{} ({})'.format(tag.name, tag.count)) for tag in tags
79 ]
80
81 # Choices are fetched each time the form is initialized
82 super().__init__(label='Tags', choices=get_choices, required=False, *args, **kwargs)
83
84
85 class LaxURLField(forms.URLField):
86 """
87 Modifies Django's built-in URLField to remove the requirement for fully-qualified domain names
88 (e.g. http://myserver/ is valid)
89 """
90 default_validators = [EnhancedURLValidator()]
91
92
93 class JSONField(_JSONField):
94 """
95 Custom wrapper around Django's built-in JSONField to avoid presenting "null" as the default text.
96 """
97 def __init__(self, *args, **kwargs):
98 super().__init__(*args, **kwargs)
99 if not self.help_text:
100 self.help_text = 'Enter context data in <a href="https://json.org/">JSON</a> format.'
101 self.widget.attrs['placeholder'] = ''
102
103 def prepare_value(self, value):
104 if isinstance(value, InvalidJSONInput):
105 return value
106 if value is None:
107 return ''
108 return json.dumps(value, sort_keys=True, indent=4)
109
110
111 class MACAddressField(forms.Field):
112 """
113 Validates a 48-bit MAC address.
114 """
115 widget = forms.CharField
116 default_error_messages = {
117 'invalid': 'MAC address must be in EUI-48 format',
118 }
119
120 def to_python(self, value):
121 value = super().to_python(value)
122
123 # Validate MAC address format
124 try:
125 value = EUI(value.strip())
126 except AddrFormatError:
127 raise forms.ValidationError(self.error_messages['invalid'], code='invalid')
128
129 return value
130
131
132 #
133 # Choice fields
134 #
135
136 class ChoiceField(forms.ChoiceField):
137 """
138 Overrides Django's built-in `ChoiceField` to use NetBox's `StaticSelect` widget
139 """
140 widget = widgets.StaticSelect
141
142
143 class MultipleChoiceField(forms.MultipleChoiceField):
144 """
145 Overrides Django's built-in `MultipleChoiceField` to use NetBox's `StaticSelectMultiple` widget
146 """
147 widget = widgets.StaticSelectMultiple
148
[end of netbox/utilities/forms/fields/fields.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/netbox/utilities/forms/fields/fields.py b/netbox/utilities/forms/fields/fields.py
--- a/netbox/utilities/forms/fields/fields.py
+++ b/netbox/utilities/forms/fields/fields.py
@@ -3,6 +3,7 @@
from django import forms
from django.db.models import Count
from django.forms.fields import JSONField as _JSONField, InvalidJSONInput
+from django.templatetags.static import static
from netaddr import AddrFormatError, EUI
from utilities.forms import widgets
@@ -26,10 +27,9 @@
A textarea with support for Markdown rendering. Exists mostly just to add a standard `help_text`.
"""
widget = forms.Textarea
- # TODO: Port Markdown cheat sheet to internal documentation
- help_text = """
+ help_text = f"""
<i class="mdi mdi-information-outline"></i>
- <a href="https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet" target="_blank" tabindex="-1">
+ <a href="{static('docs/reference/markdown/')}" target="_blank" tabindex="-1">
Markdown</a> syntax is supported
"""
| {"golden_diff": "diff --git a/netbox/utilities/forms/fields/fields.py b/netbox/utilities/forms/fields/fields.py\n--- a/netbox/utilities/forms/fields/fields.py\n+++ b/netbox/utilities/forms/fields/fields.py\n@@ -3,6 +3,7 @@\n from django import forms\n from django.db.models import Count\n from django.forms.fields import JSONField as _JSONField, InvalidJSONInput\n+from django.templatetags.static import static\n from netaddr import AddrFormatError, EUI\n \n from utilities.forms import widgets\n@@ -26,10 +27,9 @@\n A textarea with support for Markdown rendering. Exists mostly just to add a standard `help_text`.\n \"\"\"\n widget = forms.Textarea\n- # TODO: Port Markdown cheat sheet to internal documentation\n- help_text = \"\"\"\n+ help_text = f\"\"\"\n <i class=\"mdi mdi-information-outline\"></i>\n- <a href=\"https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet\" target=\"_blank\" tabindex=\"-1\">\n+ <a href=\"{static('docs/reference/markdown/')}\" target=\"_blank\" tabindex=\"-1\">\n Markdown</a> syntax is supported\n \"\"\"\n", "issue": "Provide Markdown help with links to local documentation pages\n### NetBox version\n\nv3.2.4\n\n### Feature type\n\nNew functionality\n\n### Proposed functionality\n\nCurrently netbox supports a documentation package as part of the main release due to https://github.com/netbox-community/netbox/issues/6328\r\n\r\nI propose to change the Markdown assistance available in some text areas ( for example in comments fields) that is currently going to \"https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet\" to another URL as part of the offline documentation package \r\n\r\n\r\nI propose that a new section in the documentation package is created, based in the github url above, and use the corresponding link within this assistance fields. \r\n\r\nThe final url could be something like, if this section is placed under references. \r\nhttps://netboxfqdn/static/docs/reference/markdown/\n\n### Use case\n\n\r\n\r\nThe following use cases are applicable:\r\n\r\n Provide the correct documentation that is always related to the specific version being used, instead of the online version that refers the latest version.\r\n Provide access to the documentation to system installed in a isolated management environment that do not have internet access.\r\n\n\n### Database changes\n\nnone\n\n### External dependencies\n\nnone\n", "before_files": [{"content": "import json\n\nfrom django import forms\nfrom django.db.models import Count\nfrom django.forms.fields import JSONField as _JSONField, InvalidJSONInput\nfrom netaddr import AddrFormatError, EUI\n\nfrom utilities.forms import widgets\nfrom utilities.validators import EnhancedURLValidator\n\n__all__ = (\n 'ChoiceField',\n 'ColorField',\n 'CommentField',\n 'JSONField',\n 'LaxURLField',\n 'MACAddressField',\n 'MultipleChoiceField',\n 'SlugField',\n 'TagFilterField',\n)\n\n\nclass CommentField(forms.CharField):\n \"\"\"\n A textarea with support for Markdown rendering. 
Exists mostly just to add a standard `help_text`.\n \"\"\"\n widget = forms.Textarea\n # TODO: Port Markdown cheat sheet to internal documentation\n help_text = \"\"\"\n <i class=\"mdi mdi-information-outline\"></i>\n <a href=\"https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet\" target=\"_blank\" tabindex=\"-1\">\n Markdown</a> syntax is supported\n \"\"\"\n\n def __init__(self, *, label='', help_text=help_text, required=False, **kwargs):\n super().__init__(label=label, help_text=help_text, required=required, **kwargs)\n\n\nclass SlugField(forms.SlugField):\n \"\"\"\n Extend Django's built-in SlugField to automatically populate from a field called `name` unless otherwise specified.\n\n Parameters:\n slug_source: Name of the form field from which the slug value will be derived\n \"\"\"\n widget = widgets.SlugWidget\n help_text = \"URL-friendly unique shorthand\"\n\n def __init__(self, *, slug_source='name', help_text=help_text, **kwargs):\n super().__init__(help_text=help_text, **kwargs)\n\n self.widget.attrs['slug-source'] = slug_source\n\n\nclass ColorField(forms.CharField):\n \"\"\"\n A field which represents a color value in hexadecimal `RRGGBB` format. Utilizes NetBox's `ColorSelect` widget to\n render choices.\n \"\"\"\n widget = widgets.ColorSelect\n\n\nclass TagFilterField(forms.MultipleChoiceField):\n \"\"\"\n A filter field for the tags of a model. Only the tags used by a model are displayed.\n\n :param model: The model of the filter\n \"\"\"\n widget = widgets.StaticSelectMultiple\n\n def __init__(self, model, *args, **kwargs):\n def get_choices():\n tags = model.tags.annotate(\n count=Count('extras_taggeditem_items')\n ).order_by('name')\n return [\n (str(tag.slug), '{} ({})'.format(tag.name, tag.count)) for tag in tags\n ]\n\n # Choices are fetched each time the form is initialized\n super().__init__(label='Tags', choices=get_choices, required=False, *args, **kwargs)\n\n\nclass LaxURLField(forms.URLField):\n \"\"\"\n Modifies Django's built-in URLField to remove the requirement for fully-qualified domain names\n (e.g. 
http://myserver/ is valid)\n \"\"\"\n default_validators = [EnhancedURLValidator()]\n\n\nclass JSONField(_JSONField):\n \"\"\"\n Custom wrapper around Django's built-in JSONField to avoid presenting \"null\" as the default text.\n \"\"\"\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n if not self.help_text:\n self.help_text = 'Enter context data in <a href=\"https://json.org/\">JSON</a> format.'\n self.widget.attrs['placeholder'] = ''\n\n def prepare_value(self, value):\n if isinstance(value, InvalidJSONInput):\n return value\n if value is None:\n return ''\n return json.dumps(value, sort_keys=True, indent=4)\n\n\nclass MACAddressField(forms.Field):\n \"\"\"\n Validates a 48-bit MAC address.\n \"\"\"\n widget = forms.CharField\n default_error_messages = {\n 'invalid': 'MAC address must be in EUI-48 format',\n }\n\n def to_python(self, value):\n value = super().to_python(value)\n\n # Validate MAC address format\n try:\n value = EUI(value.strip())\n except AddrFormatError:\n raise forms.ValidationError(self.error_messages['invalid'], code='invalid')\n\n return value\n\n\n#\n# Choice fields\n#\n\nclass ChoiceField(forms.ChoiceField):\n \"\"\"\n Overrides Django's built-in `ChoiceField` to use NetBox's `StaticSelect` widget\n \"\"\"\n widget = widgets.StaticSelect\n\n\nclass MultipleChoiceField(forms.MultipleChoiceField):\n \"\"\"\n Overrides Django's built-in `MultipleChoiceField` to use NetBox's `StaticSelectMultiple` widget\n \"\"\"\n widget = widgets.StaticSelectMultiple\n", "path": "netbox/utilities/forms/fields/fields.py"}]} | 2,226 | 265 |
gh_patches_debug_9398 | rasdani/github-patches | git_diff | saulpw__visidata-1890 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
fixed width saver truncates data if columns are not fully expanded
**Small description**
If you save or syscopy a table as `fixed` format, and the visible column width is less than the width of the data in the column, the data is truncated. Also, the resulting file is not a valid fixed width format file as the columns are not aligned with the headers.
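A minimal sketch of the sizing the saver would need, using VisiData's existing `Column.getMaxWidth` so that output widths come from the data itself rather than the on-screen column width (names taken from the loader shown below; treat this as illustrative only):

```python
# Sketch: size every visible column from its widest formatted value,
# ignoring whatever width the column happens to have in the UI.
def fixed_widths(sheet):
    widths = {}
    for col in sheet.visibleCols:
        widths[col] = col.getMaxWidth(sheet.rows)
    return widths
```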
**Expected result**
Saving or copying a table when the columns are not fully expanded should yield the same result as when the columns are expanded.
**Actual result with screenshot**

**Steps to reproduce with sample data and a .vd**
[test-vd-fixed.zip](https://github.com/saulpw/visidata/files/11217144/test-vd-fixed.zip)
**Additional context**
saul.pw/VisiData v2.11
</issue>
<code>
[start of visidata/loaders/fixed_width.py]
1
2 from visidata import VisiData, vd, Sheet, Column, Progress, SequenceSheet
3
4
5 vd.option('fixed_rows', 1000, 'number of rows to check for fixed width columns')
6 vd.option('fixed_maxcols', 0, 'max number of fixed-width columns to create (0 is no max)')
7
8 @VisiData.api
9 def open_fixed(vd, p):
10 return FixedWidthColumnsSheet(p.name, source=p, headerlines=[])
11
12 class FixedWidthColumn(Column):
13 def __init__(self, name, i, j, **kwargs):
14 super().__init__(name, **kwargs)
15 self.i, self.j = i, j
16
17 def calcValue(self, row):
18 return row[0][self.i:self.j]
19
20 def putValue(self, row, value):
21 value = str(value)[:self.j-self.i]
22 j = self.j or len(row)
23 row[0] = row[0][:self.i] + '%-*s' % (j-self.i, value) + row[0][self.j:]
24
25 def columnize(rows):
26 'Generate (i,j) indexes for fixed-width columns found in rows'
27
28 ## find all character columns that are not spaces ever
29 allNonspaces = set()
30 for r in rows:
31 for i, ch in enumerate(r):
32 if not ch.isspace():
33 allNonspaces.add(i)
34
35 colstart = 0
36 prev = 0
37
38 # collapse fields
39 for i in allNonspaces:
40 if i > prev+1:
41 yield colstart, i
42 colstart = i
43 prev = i
44
45 yield colstart, prev+1 # final column gets rest of line
46
47
48 class FixedWidthColumnsSheet(SequenceSheet):
49 rowtype = 'lines' # rowdef: [line] (wrapping in list makes it unique and modifiable)
50 def addRow(self, row, index=None):
51 Sheet.addRow(self, row, index=index)
52
53 def iterload(self):
54 itsource = iter(self.source)
55
56 # compute fixed width columns from first fixed_rows lines
57 maxcols = self.options.fixed_maxcols
58 self.columns = []
59 fixedRows = list([x] for x in self.optlines(itsource, 'fixed_rows'))
60 for i, j in columnize(list(r[0] for r in fixedRows)):
61 if maxcols and self.nCols >= maxcols-1:
62 self.addColumn(FixedWidthColumn('', i, None))
63 break
64 else:
65 self.addColumn(FixedWidthColumn('', i, j))
66
67 yield from fixedRows
68
69 self.setColNames(self.headerlines)
70
71 yield from ([line] for line in itsource)
72
73 def setCols(self, headerlines):
74 self.headerlines = headerlines
75
76
77 @VisiData.api
78 def save_fixed(vd, p, *vsheets):
79 with p.open(mode='w', encoding=vsheets[0].options.save_encoding) as fp:
80 for sheet in vsheets:
81 if len(vsheets) > 1:
82 fp.write('%s\n\n' % sheet.name)
83
84 widths = {} # Column -> width:int
85 # headers
86 for col in Progress(sheet.visibleCols, gerund='sizing'):
87 maxWidth = col.getMaxWidth(sheet.rows)
88 widths[col] = col.width if col.width >= maxWidth else sheet.options.default_width or maxWidth
89 fp.write(('{0:%s} ' % widths[col]).format(col.name))
90 fp.write('\n')
91
92 # rows
93 with Progress(gerund='saving'):
94 for dispvals in sheet.iterdispvals(format=True):
95 for col, val in dispvals.items():
96 fp.write(('{0:%s%s.%s} ' % ('>' if vd.isNumeric(col) else '<', widths[col], widths[col])).format(val))
97 fp.write('\n')
98
99 vd.status('%s save finished' % p)
100
[end of visidata/loaders/fixed_width.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/visidata/loaders/fixed_width.py b/visidata/loaders/fixed_width.py
--- a/visidata/loaders/fixed_width.py
+++ b/visidata/loaders/fixed_width.py
@@ -84,8 +84,7 @@
widths = {} # Column -> width:int
# headers
for col in Progress(sheet.visibleCols, gerund='sizing'):
- maxWidth = col.getMaxWidth(sheet.rows)
- widths[col] = col.width if col.width >= maxWidth else sheet.options.default_width or maxWidth
+ widths[col] = col.getMaxWidth(sheet.rows) #1849
fp.write(('{0:%s} ' % widths[col]).format(col.name))
fp.write('\n')
| {"golden_diff": "diff --git a/visidata/loaders/fixed_width.py b/visidata/loaders/fixed_width.py\n--- a/visidata/loaders/fixed_width.py\n+++ b/visidata/loaders/fixed_width.py\n@@ -84,8 +84,7 @@\n widths = {} # Column -> width:int\n # headers\n for col in Progress(sheet.visibleCols, gerund='sizing'):\n- maxWidth = col.getMaxWidth(sheet.rows)\n- widths[col] = col.width if col.width >= maxWidth else sheet.options.default_width or maxWidth\n+ widths[col] = col.getMaxWidth(sheet.rows) #1849 \n fp.write(('{0:%s} ' % widths[col]).format(col.name))\n fp.write('\\n')\n", "issue": "fixed width saver truncates data if columns are not fully expanded\n**Small description**\r\n\r\nIf you save or syscopy a table as `fixed` format, and the visible column width is less than the width of the data in the column, the data is truncated. Also, the resulting file is not a valid fixed width format file as the columns are not aligned with the headers.\r\n\r\n**Expected result**\r\n\r\nSaving or copying a table when the columns are not fully expanded should yield the same result as when the columns are expanded.\r\n\r\n**Actual result with screenshot**\r\n\r\n\r\n**Steps to reproduce with sample data and a .vd**\r\n[test-vd-fixed.zip](https://github.com/saulpw/visidata/files/11217144/test-vd-fixed.zip)\r\n\r\n**Additional context**\r\nsaul.pw/VisiData v2.11\r\n\n", "before_files": [{"content": "\nfrom visidata import VisiData, vd, Sheet, Column, Progress, SequenceSheet\n\n\nvd.option('fixed_rows', 1000, 'number of rows to check for fixed width columns')\nvd.option('fixed_maxcols', 0, 'max number of fixed-width columns to create (0 is no max)')\n\[email protected]\ndef open_fixed(vd, p):\n return FixedWidthColumnsSheet(p.name, source=p, headerlines=[])\n\nclass FixedWidthColumn(Column):\n def __init__(self, name, i, j, **kwargs):\n super().__init__(name, **kwargs)\n self.i, self.j = i, j\n\n def calcValue(self, row):\n return row[0][self.i:self.j]\n\n def putValue(self, row, value):\n value = str(value)[:self.j-self.i]\n j = self.j or len(row)\n row[0] = row[0][:self.i] + '%-*s' % (j-self.i, value) + row[0][self.j:]\n\ndef columnize(rows):\n 'Generate (i,j) indexes for fixed-width columns found in rows'\n\n ## find all character columns that are not spaces ever\n allNonspaces = set()\n for r in rows:\n for i, ch in enumerate(r):\n if not ch.isspace():\n allNonspaces.add(i)\n\n colstart = 0\n prev = 0\n\n # collapse fields\n for i in allNonspaces:\n if i > prev+1:\n yield colstart, i\n colstart = i\n prev = i\n\n yield colstart, prev+1 # final column gets rest of line\n\n\nclass FixedWidthColumnsSheet(SequenceSheet):\n rowtype = 'lines' # rowdef: [line] (wrapping in list makes it unique and modifiable)\n def addRow(self, row, index=None):\n Sheet.addRow(self, row, index=index)\n\n def iterload(self):\n itsource = iter(self.source)\n\n # compute fixed width columns from first fixed_rows lines\n maxcols = self.options.fixed_maxcols\n self.columns = []\n fixedRows = list([x] for x in self.optlines(itsource, 'fixed_rows'))\n for i, j in columnize(list(r[0] for r in fixedRows)):\n if maxcols and self.nCols >= maxcols-1:\n self.addColumn(FixedWidthColumn('', i, None))\n break\n else:\n self.addColumn(FixedWidthColumn('', i, j))\n\n yield from fixedRows\n\n self.setColNames(self.headerlines)\n\n yield from ([line] for line in itsource)\n\n def setCols(self, headerlines):\n self.headerlines = headerlines\n\n\[email protected]\ndef save_fixed(vd, p, *vsheets):\n with p.open(mode='w', 
encoding=vsheets[0].options.save_encoding) as fp:\n for sheet in vsheets:\n if len(vsheets) > 1:\n fp.write('%s\\n\\n' % sheet.name)\n\n widths = {} # Column -> width:int\n # headers\n for col in Progress(sheet.visibleCols, gerund='sizing'):\n maxWidth = col.getMaxWidth(sheet.rows)\n widths[col] = col.width if col.width >= maxWidth else sheet.options.default_width or maxWidth\n fp.write(('{0:%s} ' % widths[col]).format(col.name))\n fp.write('\\n')\n\n # rows\n with Progress(gerund='saving'):\n for dispvals in sheet.iterdispvals(format=True):\n for col, val in dispvals.items():\n fp.write(('{0:%s%s.%s} ' % ('>' if vd.isNumeric(col) else '<', widths[col], widths[col])).format(val))\n fp.write('\\n')\n\n vd.status('%s save finished' % p)\n", "path": "visidata/loaders/fixed_width.py"}]} | 1,823 | 164 |
gh_patches_debug_4240 | rasdani/github-patches | git_diff | liqd__adhocracy4-210 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Keep html time field optional even if a DateTimeField is set to be required
Time is optional in the backend, but the HTML input field still gets the required attribute if the DateTimeField is initialized with `required=True`.
The time widget should therefore always be initialized without the `required` attribute.
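A minimal sketch of that change inside the widget's `render()` (attribute names assumed from the existing `time_attrs` handling):

```python
# Sketch: keep the time input optional regardless of the field's required flag.
time_attrs = self.build_attrs(attrs)
time_attrs.update({
    'class': 'timepicker',
    'id': attrs['id'] + '_time',
    'required': False,
})
```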
</issue>
<code>
[start of adhocracy4/forms/widgets.py]
1 import datetime
2
3 from django.contrib.staticfiles.storage import staticfiles_storage
4 from django.forms import widgets as form_widgets
5 from django.template.loader import render_to_string
6 from django.utils.timezone import localtime
7
8
9 class DateTimeInput(form_widgets.SplitDateTimeWidget):
10 def __init__(self, time_label='', time_default=None, *args, **kwargs):
11 super().__init__(*args, **kwargs)
12 self.time_label = time_label
13 self.time_default = time_default or datetime.time(hour=0, minute=0)
14
15 class Media:
16 js = (
17 staticfiles_storage.url('datepicker.js'),
18 )
19 css = {'all': [
20 staticfiles_storage.url('datepicker.css'),
21 ]}
22
23 def render(self, name, value, attrs=None):
24 date_attrs = self.build_attrs(attrs)
25 date_attrs.update({
26 'class': 'datepicker',
27 'placeholder': self.widgets[0].format_value(datetime.date.today()),
28 'id': attrs['id'] + '_date'
29 })
30 time_attrs = self.build_attrs(attrs)
31 time_attrs.update({
32 'class': 'timepicker',
33 'placeholder': self.widgets[1].format_value(
34 self.get_default_time()),
35 'id': attrs['id'] + '_time'
36 })
37
38 if isinstance(value, datetime.datetime):
39 value = localtime(value)
40 date = value.date()
41 time = value.time()
42 else:
43 # value's just a list in case of an error
44 date = value[0] if value else None
45 time = value[1] if value else None
46
47 return render_to_string(
48 'a4forms/datetime_input.html', {
49 'date': self.widgets[0].render(
50 name + '_0',
51 date,
52 date_attrs
53 ),
54 'time': self.widgets[1].render(
55 name + '_1',
56 time,
57 time_attrs
58 ),
59 'time_label': {
60 'label': self.time_label,
61 'id_for_label': attrs['id'] + '_time'
62 },
63 })
64
65 def id_for_label(self, id_):
66 if id_:
67 id_ += '_date'
68 return id_
69
70 def get_default_time(self):
71 time_widget = self.widgets[1]
72
73 if not self.time_default:
74 return time_widget.format_value(datetime.time(hour=0, minute=0))
75 elif isinstance(self.time_default, (datetime.time, datetime.datetime)):
76 return time_widget.format_value(self.time_default)
77 else:
78 return self.time_default
79
[end of adhocracy4/forms/widgets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/adhocracy4/forms/widgets.py b/adhocracy4/forms/widgets.py
--- a/adhocracy4/forms/widgets.py
+++ b/adhocracy4/forms/widgets.py
@@ -32,7 +32,8 @@
'class': 'timepicker',
'placeholder': self.widgets[1].format_value(
self.get_default_time()),
- 'id': attrs['id'] + '_time'
+ 'id': attrs['id'] + '_time',
+ 'required': False
})
if isinstance(value, datetime.datetime):
| {"golden_diff": "diff --git a/adhocracy4/forms/widgets.py b/adhocracy4/forms/widgets.py\n--- a/adhocracy4/forms/widgets.py\n+++ b/adhocracy4/forms/widgets.py\n@@ -32,7 +32,8 @@\n 'class': 'timepicker',\n 'placeholder': self.widgets[1].format_value(\n self.get_default_time()),\n- 'id': attrs['id'] + '_time'\n+ 'id': attrs['id'] + '_time',\n+ 'required': False\n })\n \n if isinstance(value, datetime.datetime):\n", "issue": "Keep html time field optional even if a DateTimeField is set to be required\nTime is optional in the backend but the html input field still gets the required attribute if the the DateTimeField is initializes with `required=True`\r\nThe time Widget should always be initialized without required.\n", "before_files": [{"content": "import datetime\n\nfrom django.contrib.staticfiles.storage import staticfiles_storage\nfrom django.forms import widgets as form_widgets\nfrom django.template.loader import render_to_string\nfrom django.utils.timezone import localtime\n\n\nclass DateTimeInput(form_widgets.SplitDateTimeWidget):\n def __init__(self, time_label='', time_default=None, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.time_label = time_label\n self.time_default = time_default or datetime.time(hour=0, minute=0)\n\n class Media:\n js = (\n staticfiles_storage.url('datepicker.js'),\n )\n css = {'all': [\n staticfiles_storage.url('datepicker.css'),\n ]}\n\n def render(self, name, value, attrs=None):\n date_attrs = self.build_attrs(attrs)\n date_attrs.update({\n 'class': 'datepicker',\n 'placeholder': self.widgets[0].format_value(datetime.date.today()),\n 'id': attrs['id'] + '_date'\n })\n time_attrs = self.build_attrs(attrs)\n time_attrs.update({\n 'class': 'timepicker',\n 'placeholder': self.widgets[1].format_value(\n self.get_default_time()),\n 'id': attrs['id'] + '_time'\n })\n\n if isinstance(value, datetime.datetime):\n value = localtime(value)\n date = value.date()\n time = value.time()\n else:\n # value's just a list in case of an error\n date = value[0] if value else None\n time = value[1] if value else None\n\n return render_to_string(\n 'a4forms/datetime_input.html', {\n 'date': self.widgets[0].render(\n name + '_0',\n date,\n date_attrs\n ),\n 'time': self.widgets[1].render(\n name + '_1',\n time,\n time_attrs\n ),\n 'time_label': {\n 'label': self.time_label,\n 'id_for_label': attrs['id'] + '_time'\n },\n })\n\n def id_for_label(self, id_):\n if id_:\n id_ += '_date'\n return id_\n\n def get_default_time(self):\n time_widget = self.widgets[1]\n\n if not self.time_default:\n return time_widget.format_value(datetime.time(hour=0, minute=0))\n elif isinstance(self.time_default, (datetime.time, datetime.datetime)):\n return time_widget.format_value(self.time_default)\n else:\n return self.time_default\n", "path": "adhocracy4/forms/widgets.py"}]} | 1,274 | 122 |
gh_patches_debug_15813 | rasdani/github-patches | git_diff | netbox-community__netbox-14367 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bulk edit of Interfaces of VMs without cluster fails
### NetBox version
v3.6.5
### Python version
3.8
### Steps to Reproduce
1. Create VM and assign only a Site, not a Cluster
2. Create Interface for VM
3. Mark Interface and click on "Edit Selected"
### Expected Behavior
Edit form for selected VM Interface(s) appears
### Observed Behavior
Exception Window with the following Content:
```
<class 'AttributeError'>
'NoneType' object has no attribute 'site'
Python version: 3.8.10
NetBox version: 3.6.5
Plugins:
netbox_demo: 0.3.1
```
When generating the bulk edit form, the site is extracted from the cluster of the virtual machine, which fails if no cluster is assigned to the VM:
```
File "/opt/netbox/netbox/virtualization/forms/bulk_edit.py", line 272, in __init__
site = interface.virtual_machine.cluster.site
AttributeError: 'NoneType' object has no attribute 'site'
```
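A null-safe lookup along these lines would avoid the exception; this is only a sketch and assumes the VM's own `site` field can be used when no cluster is assigned:

```python
def vm_site_for(interface):
    # Sketch: fall back gracefully when the virtual machine has no cluster.
    vm = interface.virtual_machine
    return vm.site or (vm.cluster.site if vm.cluster else None)
```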
</issue>
<code>
[start of netbox/virtualization/forms/bulk_edit.py]
1 from django import forms
2 from django.utils.translation import gettext_lazy as _
3
4 from dcim.choices import InterfaceModeChoices
5 from dcim.constants import INTERFACE_MTU_MAX, INTERFACE_MTU_MIN
6 from dcim.models import Device, DeviceRole, Platform, Region, Site, SiteGroup
7 from extras.models import ConfigTemplate
8 from ipam.models import VLAN, VLANGroup, VRF
9 from netbox.forms import NetBoxModelBulkEditForm
10 from tenancy.models import Tenant
11 from utilities.forms import BulkRenameForm, add_blank_choice
12 from utilities.forms.fields import CommentField, DynamicModelChoiceField, DynamicModelMultipleChoiceField
13 from utilities.forms.widgets import BulkEditNullBooleanSelect
14 from virtualization.choices import *
15 from virtualization.models import *
16
17 __all__ = (
18 'ClusterBulkEditForm',
19 'ClusterGroupBulkEditForm',
20 'ClusterTypeBulkEditForm',
21 'VirtualMachineBulkEditForm',
22 'VMInterfaceBulkEditForm',
23 'VMInterfaceBulkRenameForm',
24 )
25
26
27 class ClusterTypeBulkEditForm(NetBoxModelBulkEditForm):
28 description = forms.CharField(
29 label=_('Description'),
30 max_length=200,
31 required=False
32 )
33
34 model = ClusterType
35 fieldsets = (
36 (None, ('description',)),
37 )
38 nullable_fields = ('description',)
39
40
41 class ClusterGroupBulkEditForm(NetBoxModelBulkEditForm):
42 description = forms.CharField(
43 label=_('Description'),
44 max_length=200,
45 required=False
46 )
47
48 model = ClusterGroup
49 fieldsets = (
50 (None, ('description',)),
51 )
52 nullable_fields = ('description',)
53
54
55 class ClusterBulkEditForm(NetBoxModelBulkEditForm):
56 type = DynamicModelChoiceField(
57 label=_('Type'),
58 queryset=ClusterType.objects.all(),
59 required=False
60 )
61 group = DynamicModelChoiceField(
62 label=_('Group'),
63 queryset=ClusterGroup.objects.all(),
64 required=False
65 )
66 status = forms.ChoiceField(
67 label=_('Status'),
68 choices=add_blank_choice(ClusterStatusChoices),
69 required=False,
70 initial=''
71 )
72 tenant = DynamicModelChoiceField(
73 label=_('Tenant'),
74 queryset=Tenant.objects.all(),
75 required=False
76 )
77 region = DynamicModelChoiceField(
78 label=_('Region'),
79 queryset=Region.objects.all(),
80 required=False,
81 )
82 site_group = DynamicModelChoiceField(
83 label=_('Site group'),
84 queryset=SiteGroup.objects.all(),
85 required=False,
86 )
87 site = DynamicModelChoiceField(
88 label=_('Site'),
89 queryset=Site.objects.all(),
90 required=False,
91 query_params={
92 'region_id': '$region',
93 'group_id': '$site_group',
94 }
95 )
96 description = forms.CharField(
97 label=_('Site'),
98 max_length=200,
99 required=False
100 )
101 comments = CommentField()
102
103 model = Cluster
104 fieldsets = (
105 (None, ('type', 'group', 'status', 'tenant', 'description')),
106 (_('Site'), ('region', 'site_group', 'site')),
107 )
108 nullable_fields = (
109 'group', 'site', 'tenant', 'description', 'comments',
110 )
111
112
113 class VirtualMachineBulkEditForm(NetBoxModelBulkEditForm):
114 status = forms.ChoiceField(
115 label=_('Status'),
116 choices=add_blank_choice(VirtualMachineStatusChoices),
117 required=False,
118 initial='',
119 )
120 site = DynamicModelChoiceField(
121 label=_('Site'),
122 queryset=Site.objects.all(),
123 required=False
124 )
125 cluster = DynamicModelChoiceField(
126 label=_('Cluster'),
127 queryset=Cluster.objects.all(),
128 required=False,
129 query_params={
130 'site_id': '$site'
131 }
132 )
133 device = DynamicModelChoiceField(
134 label=_('Device'),
135 queryset=Device.objects.all(),
136 required=False,
137 query_params={
138 'cluster_id': '$cluster'
139 }
140 )
141 role = DynamicModelChoiceField(
142 label=_('Role'),
143 queryset=DeviceRole.objects.filter(
144 vm_role=True
145 ),
146 required=False,
147 query_params={
148 "vm_role": "True"
149 }
150 )
151 tenant = DynamicModelChoiceField(
152 label=_('Tenant'),
153 queryset=Tenant.objects.all(),
154 required=False
155 )
156 platform = DynamicModelChoiceField(
157 label=_('Platform'),
158 queryset=Platform.objects.all(),
159 required=False
160 )
161 vcpus = forms.IntegerField(
162 required=False,
163 label=_('vCPUs')
164 )
165 memory = forms.IntegerField(
166 required=False,
167 label=_('Memory (MB)')
168 )
169 disk = forms.IntegerField(
170 required=False,
171 label=_('Disk (GB)')
172 )
173 description = forms.CharField(
174 label=_('Description'),
175 max_length=200,
176 required=False
177 )
178 config_template = DynamicModelChoiceField(
179 queryset=ConfigTemplate.objects.all(),
180 required=False
181 )
182 comments = CommentField()
183
184 model = VirtualMachine
185 fieldsets = (
186 (None, ('site', 'cluster', 'device', 'status', 'role', 'tenant', 'platform', 'description')),
187 (_('Resources'), ('vcpus', 'memory', 'disk')),
188 ('Configuration', ('config_template',)),
189 )
190 nullable_fields = (
191 'site', 'cluster', 'device', 'role', 'tenant', 'platform', 'vcpus', 'memory', 'disk', 'description', 'comments',
192 )
193
194
195 class VMInterfaceBulkEditForm(NetBoxModelBulkEditForm):
196 virtual_machine = forms.ModelChoiceField(
197 label=_('Virtual machine'),
198 queryset=VirtualMachine.objects.all(),
199 required=False,
200 disabled=True,
201 widget=forms.HiddenInput()
202 )
203 parent = DynamicModelChoiceField(
204 label=_('Parent'),
205 queryset=VMInterface.objects.all(),
206 required=False
207 )
208 bridge = DynamicModelChoiceField(
209 label=_('Bridge'),
210 queryset=VMInterface.objects.all(),
211 required=False
212 )
213 enabled = forms.NullBooleanField(
214 label=_('Enabled'),
215 required=False,
216 widget=BulkEditNullBooleanSelect()
217 )
218 mtu = forms.IntegerField(
219 required=False,
220 min_value=INTERFACE_MTU_MIN,
221 max_value=INTERFACE_MTU_MAX,
222 label=_('MTU')
223 )
224 description = forms.CharField(
225 label=_('Description'),
226 max_length=100,
227 required=False
228 )
229 mode = forms.ChoiceField(
230 label=_('Mode'),
231 choices=add_blank_choice(InterfaceModeChoices),
232 required=False
233 )
234 vlan_group = DynamicModelChoiceField(
235 queryset=VLANGroup.objects.all(),
236 required=False,
237 label=_('VLAN group')
238 )
239 untagged_vlan = DynamicModelChoiceField(
240 queryset=VLAN.objects.all(),
241 required=False,
242 query_params={
243 'group_id': '$vlan_group',
244 },
245 label=_('Untagged VLAN')
246 )
247 tagged_vlans = DynamicModelMultipleChoiceField(
248 queryset=VLAN.objects.all(),
249 required=False,
250 query_params={
251 'group_id': '$vlan_group',
252 },
253 label=_('Tagged VLANs')
254 )
255 vrf = DynamicModelChoiceField(
256 queryset=VRF.objects.all(),
257 required=False,
258 label=_('VRF')
259 )
260
261 model = VMInterface
262 fieldsets = (
263 (None, ('mtu', 'enabled', 'vrf', 'description')),
264 (_('Related Interfaces'), ('parent', 'bridge')),
265 (_('802.1Q Switching'), ('mode', 'vlan_group', 'untagged_vlan', 'tagged_vlans')),
266 )
267 nullable_fields = (
268 'parent', 'bridge', 'mtu', 'vrf', 'description',
269 )
270
271 def __init__(self, *args, **kwargs):
272 super().__init__(*args, **kwargs)
273 if 'virtual_machine' in self.initial:
274 vm_id = self.initial.get('virtual_machine')
275
276 # Restrict parent/bridge interface assignment by VM
277 self.fields['parent'].widget.add_query_param('virtual_machine_id', vm_id)
278 self.fields['bridge'].widget.add_query_param('virtual_machine_id', vm_id)
279
280 # Limit VLAN choices by virtual machine
281 self.fields['untagged_vlan'].widget.add_query_param('available_on_virtualmachine', vm_id)
282 self.fields['tagged_vlans'].widget.add_query_param('available_on_virtualmachine', vm_id)
283
284 else:
285 # See 5643
286 if 'pk' in self.initial:
287 site = None
288 interfaces = VMInterface.objects.filter(
289 pk__in=self.initial['pk']
290 ).prefetch_related(
291 'virtual_machine__site'
292 )
293
294 # Check interface sites. First interface should set site, further interfaces will either continue the
295 # loop or reset back to no site and break the loop.
296 for interface in interfaces:
297 if site is None:
298 site = interface.virtual_machine.cluster.site
299 elif interface.virtual_machine.cluster.site is not site:
300 site = None
301 break
302
303 if site is not None:
304 self.fields['untagged_vlan'].widget.add_query_param('site_id', site.pk)
305 self.fields['tagged_vlans'].widget.add_query_param('site_id', site.pk)
306
307 self.fields['parent'].choices = ()
308 self.fields['parent'].widget.attrs['disabled'] = True
309 self.fields['bridge'].choices = ()
310 self.fields['bridge'].widget.attrs['disabled'] = True
311
312
313 class VMInterfaceBulkRenameForm(BulkRenameForm):
314 pk = forms.ModelMultipleChoiceField(
315 queryset=VMInterface.objects.all(),
316 widget=forms.MultipleHiddenInput()
317 )
318
[end of netbox/virtualization/forms/bulk_edit.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/netbox/virtualization/forms/bulk_edit.py b/netbox/virtualization/forms/bulk_edit.py
--- a/netbox/virtualization/forms/bulk_edit.py
+++ b/netbox/virtualization/forms/bulk_edit.py
@@ -294,9 +294,10 @@
# Check interface sites. First interface should set site, further interfaces will either continue the
# loop or reset back to no site and break the loop.
for interface in interfaces:
+ vm_site = interface.virtual_machine.site or interface.virtual_machine.cluster.site
if site is None:
- site = interface.virtual_machine.cluster.site
- elif interface.virtual_machine.cluster.site is not site:
+ site = vm_site
+ elif vm_site is not site:
site = None
break
| {"golden_diff": "diff --git a/netbox/virtualization/forms/bulk_edit.py b/netbox/virtualization/forms/bulk_edit.py\n--- a/netbox/virtualization/forms/bulk_edit.py\n+++ b/netbox/virtualization/forms/bulk_edit.py\n@@ -294,9 +294,10 @@\n # Check interface sites. First interface should set site, further interfaces will either continue the\n # loop or reset back to no site and break the loop.\n for interface in interfaces:\n+ vm_site = interface.virtual_machine.site or interface.virtual_machine.cluster.site\n if site is None:\n- site = interface.virtual_machine.cluster.site\n- elif interface.virtual_machine.cluster.site is not site:\n+ site = vm_site\n+ elif vm_site is not site:\n site = None\n break\n", "issue": "Bulk edit of Interfaces of VMs without cluster fails\n### NetBox version\n\nv3.6.5\n\n### Python version\n\n3.8\n\n### Steps to Reproduce\n\n1. Create VM and assign only a Site, not a Cluster\r\n2. Create Interface for VM\r\n3. Mark Interface and click on \"Edit Selected\"\n\n### Expected Behavior\n\nEdit form for selected VM Interface(s) appear\n\n### Observed Behavior\n\nException Window with the following Content:\r\n```\r\n<class 'AttributeError'>\r\n\r\n'NoneType' object has no attribute 'site'\r\n\r\nPython version: 3.8.10\r\nNetBox version: 3.6.5\r\nPlugins: \r\n netbox_demo: 0.3.1\r\n```\r\n\r\nWhen generating the bulk edit form, the site is extracted from the cluster of the virtual machine, which fails if no cluster is assigned to the VM:\r\n```\r\n File \"/opt/netbox/netbox/virtualization/forms/bulk_edit.py\", line 272, in __init__\r\n site = interface.virtual_machine.cluster.site\r\nAttributeError: 'NoneType' object has no attribute 'site'\r\n```\n", "before_files": [{"content": "from django import forms\nfrom django.utils.translation import gettext_lazy as _\n\nfrom dcim.choices import InterfaceModeChoices\nfrom dcim.constants import INTERFACE_MTU_MAX, INTERFACE_MTU_MIN\nfrom dcim.models import Device, DeviceRole, Platform, Region, Site, SiteGroup\nfrom extras.models import ConfigTemplate\nfrom ipam.models import VLAN, VLANGroup, VRF\nfrom netbox.forms import NetBoxModelBulkEditForm\nfrom tenancy.models import Tenant\nfrom utilities.forms import BulkRenameForm, add_blank_choice\nfrom utilities.forms.fields import CommentField, DynamicModelChoiceField, DynamicModelMultipleChoiceField\nfrom utilities.forms.widgets import BulkEditNullBooleanSelect\nfrom virtualization.choices import *\nfrom virtualization.models import *\n\n__all__ = (\n 'ClusterBulkEditForm',\n 'ClusterGroupBulkEditForm',\n 'ClusterTypeBulkEditForm',\n 'VirtualMachineBulkEditForm',\n 'VMInterfaceBulkEditForm',\n 'VMInterfaceBulkRenameForm',\n)\n\n\nclass ClusterTypeBulkEditForm(NetBoxModelBulkEditForm):\n description = forms.CharField(\n label=_('Description'),\n max_length=200,\n required=False\n )\n\n model = ClusterType\n fieldsets = (\n (None, ('description',)),\n )\n nullable_fields = ('description',)\n\n\nclass ClusterGroupBulkEditForm(NetBoxModelBulkEditForm):\n description = forms.CharField(\n label=_('Description'),\n max_length=200,\n required=False\n )\n\n model = ClusterGroup\n fieldsets = (\n (None, ('description',)),\n )\n nullable_fields = ('description',)\n\n\nclass ClusterBulkEditForm(NetBoxModelBulkEditForm):\n type = DynamicModelChoiceField(\n label=_('Type'),\n queryset=ClusterType.objects.all(),\n required=False\n )\n group = DynamicModelChoiceField(\n label=_('Group'),\n queryset=ClusterGroup.objects.all(),\n required=False\n )\n status = forms.ChoiceField(\n label=_('Status'),\n 
choices=add_blank_choice(ClusterStatusChoices),\n required=False,\n initial=''\n )\n tenant = DynamicModelChoiceField(\n label=_('Tenant'),\n queryset=Tenant.objects.all(),\n required=False\n )\n region = DynamicModelChoiceField(\n label=_('Region'),\n queryset=Region.objects.all(),\n required=False,\n )\n site_group = DynamicModelChoiceField(\n label=_('Site group'),\n queryset=SiteGroup.objects.all(),\n required=False,\n )\n site = DynamicModelChoiceField(\n label=_('Site'),\n queryset=Site.objects.all(),\n required=False,\n query_params={\n 'region_id': '$region',\n 'group_id': '$site_group',\n }\n )\n description = forms.CharField(\n label=_('Site'),\n max_length=200,\n required=False\n )\n comments = CommentField()\n\n model = Cluster\n fieldsets = (\n (None, ('type', 'group', 'status', 'tenant', 'description')),\n (_('Site'), ('region', 'site_group', 'site')),\n )\n nullable_fields = (\n 'group', 'site', 'tenant', 'description', 'comments',\n )\n\n\nclass VirtualMachineBulkEditForm(NetBoxModelBulkEditForm):\n status = forms.ChoiceField(\n label=_('Status'),\n choices=add_blank_choice(VirtualMachineStatusChoices),\n required=False,\n initial='',\n )\n site = DynamicModelChoiceField(\n label=_('Site'),\n queryset=Site.objects.all(),\n required=False\n )\n cluster = DynamicModelChoiceField(\n label=_('Cluster'),\n queryset=Cluster.objects.all(),\n required=False,\n query_params={\n 'site_id': '$site'\n }\n )\n device = DynamicModelChoiceField(\n label=_('Device'),\n queryset=Device.objects.all(),\n required=False,\n query_params={\n 'cluster_id': '$cluster'\n }\n )\n role = DynamicModelChoiceField(\n label=_('Role'),\n queryset=DeviceRole.objects.filter(\n vm_role=True\n ),\n required=False,\n query_params={\n \"vm_role\": \"True\"\n }\n )\n tenant = DynamicModelChoiceField(\n label=_('Tenant'),\n queryset=Tenant.objects.all(),\n required=False\n )\n platform = DynamicModelChoiceField(\n label=_('Platform'),\n queryset=Platform.objects.all(),\n required=False\n )\n vcpus = forms.IntegerField(\n required=False,\n label=_('vCPUs')\n )\n memory = forms.IntegerField(\n required=False,\n label=_('Memory (MB)')\n )\n disk = forms.IntegerField(\n required=False,\n label=_('Disk (GB)')\n )\n description = forms.CharField(\n label=_('Description'),\n max_length=200,\n required=False\n )\n config_template = DynamicModelChoiceField(\n queryset=ConfigTemplate.objects.all(),\n required=False\n )\n comments = CommentField()\n\n model = VirtualMachine\n fieldsets = (\n (None, ('site', 'cluster', 'device', 'status', 'role', 'tenant', 'platform', 'description')),\n (_('Resources'), ('vcpus', 'memory', 'disk')),\n ('Configuration', ('config_template',)),\n )\n nullable_fields = (\n 'site', 'cluster', 'device', 'role', 'tenant', 'platform', 'vcpus', 'memory', 'disk', 'description', 'comments',\n )\n\n\nclass VMInterfaceBulkEditForm(NetBoxModelBulkEditForm):\n virtual_machine = forms.ModelChoiceField(\n label=_('Virtual machine'),\n queryset=VirtualMachine.objects.all(),\n required=False,\n disabled=True,\n widget=forms.HiddenInput()\n )\n parent = DynamicModelChoiceField(\n label=_('Parent'),\n queryset=VMInterface.objects.all(),\n required=False\n )\n bridge = DynamicModelChoiceField(\n label=_('Bridge'),\n queryset=VMInterface.objects.all(),\n required=False\n )\n enabled = forms.NullBooleanField(\n label=_('Enabled'),\n required=False,\n widget=BulkEditNullBooleanSelect()\n )\n mtu = forms.IntegerField(\n required=False,\n min_value=INTERFACE_MTU_MIN,\n max_value=INTERFACE_MTU_MAX,\n label=_('MTU')\n 
)\n description = forms.CharField(\n label=_('Description'),\n max_length=100,\n required=False\n )\n mode = forms.ChoiceField(\n label=_('Mode'),\n choices=add_blank_choice(InterfaceModeChoices),\n required=False\n )\n vlan_group = DynamicModelChoiceField(\n queryset=VLANGroup.objects.all(),\n required=False,\n label=_('VLAN group')\n )\n untagged_vlan = DynamicModelChoiceField(\n queryset=VLAN.objects.all(),\n required=False,\n query_params={\n 'group_id': '$vlan_group',\n },\n label=_('Untagged VLAN')\n )\n tagged_vlans = DynamicModelMultipleChoiceField(\n queryset=VLAN.objects.all(),\n required=False,\n query_params={\n 'group_id': '$vlan_group',\n },\n label=_('Tagged VLANs')\n )\n vrf = DynamicModelChoiceField(\n queryset=VRF.objects.all(),\n required=False,\n label=_('VRF')\n )\n\n model = VMInterface\n fieldsets = (\n (None, ('mtu', 'enabled', 'vrf', 'description')),\n (_('Related Interfaces'), ('parent', 'bridge')),\n (_('802.1Q Switching'), ('mode', 'vlan_group', 'untagged_vlan', 'tagged_vlans')),\n )\n nullable_fields = (\n 'parent', 'bridge', 'mtu', 'vrf', 'description',\n )\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n if 'virtual_machine' in self.initial:\n vm_id = self.initial.get('virtual_machine')\n\n # Restrict parent/bridge interface assignment by VM\n self.fields['parent'].widget.add_query_param('virtual_machine_id', vm_id)\n self.fields['bridge'].widget.add_query_param('virtual_machine_id', vm_id)\n\n # Limit VLAN choices by virtual machine\n self.fields['untagged_vlan'].widget.add_query_param('available_on_virtualmachine', vm_id)\n self.fields['tagged_vlans'].widget.add_query_param('available_on_virtualmachine', vm_id)\n\n else:\n # See 5643\n if 'pk' in self.initial:\n site = None\n interfaces = VMInterface.objects.filter(\n pk__in=self.initial['pk']\n ).prefetch_related(\n 'virtual_machine__site'\n )\n\n # Check interface sites. First interface should set site, further interfaces will either continue the\n # loop or reset back to no site and break the loop.\n for interface in interfaces:\n if site is None:\n site = interface.virtual_machine.cluster.site\n elif interface.virtual_machine.cluster.site is not site:\n site = None\n break\n\n if site is not None:\n self.fields['untagged_vlan'].widget.add_query_param('site_id', site.pk)\n self.fields['tagged_vlans'].widget.add_query_param('site_id', site.pk)\n\n self.fields['parent'].choices = ()\n self.fields['parent'].widget.attrs['disabled'] = True\n self.fields['bridge'].choices = ()\n self.fields['bridge'].widget.attrs['disabled'] = True\n\n\nclass VMInterfaceBulkRenameForm(BulkRenameForm):\n pk = forms.ModelMultipleChoiceField(\n queryset=VMInterface.objects.all(),\n widget=forms.MultipleHiddenInput()\n )\n", "path": "netbox/virtualization/forms/bulk_edit.py"}]} | 3,684 | 175 |
gh_patches_debug_24120 | rasdani/github-patches | git_diff | conan-io__conan-center-index-11233 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[request] kcov/40
### Package Details
* Package Name/Version: **kcov/40**
* Changelog: **https://github.com/SimonKagstrom/kcov/blob/master/ChangeLog**
Hello,
Currently conan-center provides only release 38; I would like to have the latest release (40) available as well.
I'll provide a pull request.
</issue>
<code>
[start of recipes/kcov/all/conanfile.py]
1 import os
2 from conans import ConanFile, CMake, tools
3 from conans.errors import ConanInvalidConfiguration
4
5
6 class KcovConan(ConanFile):
7 name = "kcov"
8 license = "GPL-2.0"
9 url = "https://github.com/conan-io/conan-center-index/"
10 homepage = "http://simonkagstrom.github.io/kcov/index.html"
11 description = "Code coverage tool for compiled programs, Python and Bash\
12 which uses debugging information to collect and report data without\
13 special compilation options"
14 topics = ("coverage", "linux", "debug")
15 settings = "os", "compiler", "build_type", "arch"
16 exports_sources = "CMakeLists.txt", "patches/**"
17 requires = ["zlib/1.2.11",
18 "libiberty/9.1.0",
19 "libcurl/7.64.1",
20 "elfutils/0.180"]
21 generators = "cmake"
22 _cmake = None
23 _source_subfolder = "source_subfolder"
24 _build_subfolder = "build_subfolder"
25
26 def configure(self):
27 if self.settings.os == "Windows":
28 raise ConanInvalidConfiguration(
29 "kcov can not be built on windows.")
30
31 def source(self):
32 tools.get(**self.conan_data["sources"][self.version])
33 extracted_dir = self.name + "-" + self.version
34 os.rename(extracted_dir, self._source_subfolder)
35
36 def _patch_sources(self):
37 for patch in self.conan_data["patches"][self.version]:
38 tools.patch(**patch)
39
40 def _configure_cmake(self):
41 if self._cmake is not None:
42 return self._cmake
43 self._cmake = CMake(self)
44 self._cmake.configure(build_folder=self._build_subfolder)
45 return self._cmake
46
47 def build(self):
48 self._patch_sources()
49 cmake = self._configure_cmake()
50 cmake.build()
51
52 def package(self):
53 cmake = self._configure_cmake()
54 cmake.install()
55 tools.rmdir(os.path.join(self.package_folder, "share"))
56 self.copy("COPYING*", dst="licenses", src=self._source_subfolder)
57
58 def package_info(self):
59 bindir = os.path.join(self.package_folder, "bin")
60 self.output.info("Appending PATH environment variable: {}"
61 .format(bindir))
62 self.env_info.PATH.append(bindir)
63
[end of recipes/kcov/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/kcov/all/conanfile.py b/recipes/kcov/all/conanfile.py
--- a/recipes/kcov/all/conanfile.py
+++ b/recipes/kcov/all/conanfile.py
@@ -1,8 +1,8 @@
import os
-from conans import ConanFile, CMake, tools
+from conan import ConanFile
+from conans import CMake, tools
from conans.errors import ConanInvalidConfiguration
-
class KcovConan(ConanFile):
name = "kcov"
license = "GPL-2.0"
@@ -14,9 +14,9 @@
topics = ("coverage", "linux", "debug")
settings = "os", "compiler", "build_type", "arch"
exports_sources = "CMakeLists.txt", "patches/**"
- requires = ["zlib/1.2.11",
+ requires = ["zlib/1.2.12",
"libiberty/9.1.0",
- "libcurl/7.64.1",
+ "libcurl/7.83.1",
"elfutils/0.180"]
generators = "cmake"
_cmake = None
@@ -60,3 +60,4 @@
self.output.info("Appending PATH environment variable: {}"
.format(bindir))
self.env_info.PATH.append(bindir)
+ self.cpp_info.includedirs = []
| {"golden_diff": "diff --git a/recipes/kcov/all/conanfile.py b/recipes/kcov/all/conanfile.py\n--- a/recipes/kcov/all/conanfile.py\n+++ b/recipes/kcov/all/conanfile.py\n@@ -1,8 +1,8 @@\n import os\n-from conans import ConanFile, CMake, tools\n+from conan import ConanFile\n+from conans import CMake, tools\n from conans.errors import ConanInvalidConfiguration\n \n-\n class KcovConan(ConanFile):\n name = \"kcov\"\n license = \"GPL-2.0\"\n@@ -14,9 +14,9 @@\n topics = (\"coverage\", \"linux\", \"debug\")\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n exports_sources = \"CMakeLists.txt\", \"patches/**\"\n- requires = [\"zlib/1.2.11\",\n+ requires = [\"zlib/1.2.12\",\n \"libiberty/9.1.0\",\n- \"libcurl/7.64.1\",\n+ \"libcurl/7.83.1\",\n \"elfutils/0.180\"]\n generators = \"cmake\"\n _cmake = None\n@@ -60,3 +60,4 @@\n self.output.info(\"Appending PATH environment variable: {}\"\n .format(bindir))\n self.env_info.PATH.append(bindir)\n+ self.cpp_info.includedirs = []\n", "issue": "[request] kcov/40\n### Package Details\r\n * Package Name/Version: **kcov/40**\r\n * Changelog: **https://github.com/SimonKagstrom/kcov/blob/master/ChangeLog**\r\n\r\nHello,\r\n\r\nCurrently conan-center provides only 38 release, I would like to have latest release (40) also available.\r\nI'll provides a pull request.\r\n\n", "before_files": [{"content": "import os\nfrom conans import ConanFile, CMake, tools\nfrom conans.errors import ConanInvalidConfiguration\n\n\nclass KcovConan(ConanFile):\n name = \"kcov\"\n license = \"GPL-2.0\"\n url = \"https://github.com/conan-io/conan-center-index/\"\n homepage = \"http://simonkagstrom.github.io/kcov/index.html\"\n description = \"Code coverage tool for compiled programs, Python and Bash\\\n which uses debugging information to collect and report data without\\\n special compilation options\"\n topics = (\"coverage\", \"linux\", \"debug\")\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n exports_sources = \"CMakeLists.txt\", \"patches/**\"\n requires = [\"zlib/1.2.11\",\n \"libiberty/9.1.0\",\n \"libcurl/7.64.1\",\n \"elfutils/0.180\"]\n generators = \"cmake\"\n _cmake = None\n _source_subfolder = \"source_subfolder\"\n _build_subfolder = \"build_subfolder\"\n\n def configure(self):\n if self.settings.os == \"Windows\":\n raise ConanInvalidConfiguration(\n \"kcov can not be built on windows.\")\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = self.name + \"-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n\n def _patch_sources(self):\n for patch in self.conan_data[\"patches\"][self.version]:\n tools.patch(**patch)\n\n def _configure_cmake(self):\n if self._cmake is not None:\n return self._cmake\n self._cmake = CMake(self)\n self._cmake.configure(build_folder=self._build_subfolder)\n return self._cmake\n\n def build(self):\n self._patch_sources()\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n cmake = self._configure_cmake()\n cmake.install()\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n self.copy(\"COPYING*\", dst=\"licenses\", src=self._source_subfolder)\n\n def package_info(self):\n bindir = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\"\n .format(bindir))\n self.env_info.PATH.append(bindir)\n", "path": "recipes/kcov/all/conanfile.py"}]} | 1,278 | 325 |
gh_patches_debug_33677 | rasdani/github-patches | git_diff | kedro-org__kedro-3300 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Revisit: Make KedroContext a dataclass and add `config_loader` as a property
# Description
As `KedroSession` now controls the lifecycle of a Kedro run, `KedroContext` acts like a container that stores important attributes. Since we have dropped Python 3.6 support, we can make use of Python's `dataclass` to further tidy up `KedroContext`'s constructor code.
todo:
- [x] Make `KedroContext` a dataclass(https://github.com/kedro-org/kedro/pull/1465)
- [x] Add `config_loader` as @property instead of relying the private `context._get_config_loader()` https://github.com/kedro-org/kedro/pull/1505
- [x] Update corresponding document in Starter/Kedro
</issue>
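The direction described in the todo list can be illustrated with a small standalone sketch. This is illustrative only and uses placeholder names from the standard-library `dataclasses` module; the actual implementation in `kedro/framework/context/context.py` (reproduced below) is built on the `attrs` library and gets its config loader injected by `KedroSession`.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from pathlib import Path
from typing import Any


@dataclass(frozen=True)
class ContextSketch:
    """Illustrative stand-in for a dataclass-style KedroContext."""

    package_name: str
    project_path: Path
    config_loader: Any  # stands in for kedro.config.AbstractConfigLoader
    env: str | None = None
    extra_params: dict[str, Any] = field(default_factory=dict)

    @property
    def params(self) -> dict[str, Any]:
        # Parameters are read through the injected config_loader instead of a
        # private _get_config_loader() helper.
        merged = dict(self.config_loader["parameters"])
        merged.update(self.extra_params)
        return merged
```

Making `config_loader` a plain constructor attribute is what lets callers write `context.config_loader` directly instead of going through a private accessor.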
<code>
[start of kedro/framework/context/context.py]
1 """This module provides context for Kedro project."""
2 from __future__ import annotations
3
4 import logging
5 from copy import deepcopy
6 from pathlib import Path, PurePosixPath, PureWindowsPath
7 from typing import Any
8 from urllib.parse import urlparse
9 from warnings import warn
10
11 from attrs import field, frozen
12 from omegaconf import OmegaConf
13 from pluggy import PluginManager
14
15 from kedro.config import AbstractConfigLoader, MissingConfigException
16 from kedro.framework.project import settings
17 from kedro.io import DataCatalog
18 from kedro.pipeline.pipeline import _transcode_split
19
20
21 def _is_relative_path(path_string: str) -> bool:
22 """Checks whether a path string is a relative path.
23
24 Example:
25 ::
26 >>> _is_relative_path("data/01_raw") == True
27 >>> _is_relative_path("info.log") == True
28 >>> _is_relative_path("/tmp/data/01_raw") == False
29 >>> _is_relative_path(r"C:\\info.log") == False
30 >>> _is_relative_path(r"\\'info.log") == False
31 >>> _is_relative_path("c:/info.log") == False
32 >>> _is_relative_path("s3://info.log") == False
33
34 Args:
35 path_string: The path string to check.
36 Returns:
37 Whether the string is a relative path.
38 """
39 # os.path.splitdrive does not reliably work on non-Windows systems
40 # breaking the coverage, using PureWindowsPath instead
41 is_full_windows_path_with_drive = bool(PureWindowsPath(path_string).drive)
42 if is_full_windows_path_with_drive:
43 return False
44
45 is_remote_path = bool(urlparse(path_string).scheme)
46 if is_remote_path:
47 return False
48
49 is_absolute_path = PurePosixPath(path_string).is_absolute()
50 if is_absolute_path:
51 return False
52
53 return True
54
55
56 def _convert_paths_to_absolute_posix(
57 project_path: Path, conf_dictionary: dict[str, Any]
58 ) -> dict[str, Any]:
59 """Turn all relative paths inside ``conf_dictionary`` into absolute paths by appending them
60 to ``project_path`` and convert absolute Windows paths to POSIX format. This is a hack to
61 make sure that we don't have to change user's working directory for logging and datasets to
62 work. It is important for non-standard workflows such as IPython notebook where users don't go
63 through `kedro run` or `__main__.py` entrypoints.
64
65 Example:
66 ::
67 >>> conf = _convert_paths_to_absolute_posix(
68 >>> project_path=Path("/path/to/my/project"),
69 >>> conf_dictionary={
70 >>> "handlers": {
71 >>> "info_file_handler": {
72 >>> "filename": "info.log"
73 >>> }
74 >>> }
75 >>> }
76 >>> )
77 >>> print(conf['handlers']['info_file_handler']['filename'])
78 "/path/to/my/project/info.log"
79
80 Args:
81 project_path: The root directory to prepend to relative path to make absolute path.
82 conf_dictionary: The configuration containing paths to expand.
83 Returns:
84 A dictionary containing only absolute paths.
85 Raises:
86 ValueError: If the provided ``project_path`` is not an absolute path.
87 """
88 if not project_path.is_absolute():
89 raise ValueError(
90 f"project_path must be an absolute path. Received: {project_path}"
91 )
92
93 # only check a few conf keys that are known to specify a path string as value
94 conf_keys_with_filepath = ("filename", "filepath", "path")
95
96 for conf_key, conf_value in conf_dictionary.items():
97 # if the conf_value is another dictionary, absolutify its paths first.
98 if isinstance(conf_value, dict):
99 conf_dictionary[conf_key] = _convert_paths_to_absolute_posix(
100 project_path, conf_value
101 )
102 continue
103
104 # if the conf_value is not a dictionary nor a string, skip
105 if not isinstance(conf_value, str):
106 continue
107
108 # if the conf_value is a string but the conf_key isn't one associated with filepath, skip
109 if conf_key not in conf_keys_with_filepath:
110 continue
111
112 if _is_relative_path(conf_value):
113 # Absolute local path should be in POSIX format
114 conf_value_absolute_path = (project_path / conf_value).as_posix()
115 conf_dictionary[conf_key] = conf_value_absolute_path
116 elif PureWindowsPath(conf_value).drive:
117 # Convert absolute Windows path to POSIX format
118 conf_dictionary[conf_key] = PureWindowsPath(conf_value).as_posix()
119
120 return conf_dictionary
121
122
123 def _validate_transcoded_datasets(catalog: DataCatalog):
124 """Validates transcoded datasets are correctly named
125
126 Args:
127 catalog (DataCatalog): The catalog object containing the
128 datasets to be validated.
129
130 Raises:
131 ValueError: If a dataset name does not conform to the expected
132 transcoding naming conventions,a ValueError is raised by the
133 `_transcode_split` function.
134
135 """
136 # noqa: protected-access
137 for dataset_name in catalog._datasets.keys():
138 _transcode_split(dataset_name)
139
140
141 def _expand_full_path(project_path: str | Path) -> Path:
142 return Path(project_path).expanduser().resolve()
143
144
145 @frozen
146 class KedroContext:
147 """``KedroContext`` is the base class which holds the configuration and
148 Kedro's main functionality.
149 """
150
151 _package_name: str
152 project_path: Path = field(converter=_expand_full_path)
153 config_loader: AbstractConfigLoader
154 _hook_manager: PluginManager
155 env: str | None = None
156 _extra_params: dict[str, Any] | None = field(default=None, converter=deepcopy)
157
158 @property
159 def catalog(self) -> DataCatalog:
160 """Read-only property referring to Kedro's ``DataCatalog`` for this context.
161
162 Returns:
163 DataCatalog defined in `catalog.yml`.
164 Raises:
165 KedroContextError: Incorrect ``DataCatalog`` registered for the project.
166
167 """
168 return self._get_catalog()
169
170 @property
171 def params(self) -> dict[str, Any]:
172 """Read-only property referring to Kedro's parameters for this context.
173
174 Returns:
175 Parameters defined in `parameters.yml` with the addition of any
176 extra parameters passed at initialization.
177 """
178 try:
179 params = self.config_loader["parameters"]
180 except MissingConfigException as exc:
181 warn(f"Parameters not found in your Kedro project config.\n{str(exc)}")
182 params = {}
183
184 if self._extra_params:
185 # Merge nested structures
186 params = OmegaConf.merge(params, self._extra_params)
187
188 return OmegaConf.to_container(params) if OmegaConf.is_config(params) else params
189
190 def _get_catalog(
191 self,
192 save_version: str = None,
193 load_versions: dict[str, str] = None,
194 ) -> DataCatalog:
195 """A hook for changing the creation of a DataCatalog instance.
196
197 Returns:
198 DataCatalog defined in `catalog.yml`.
199 Raises:
200 KedroContextError: Incorrect ``DataCatalog`` registered for the project.
201
202 """
203 # '**/catalog*' reads modular pipeline configs
204 conf_catalog = self.config_loader["catalog"]
205 # turn relative paths in conf_catalog into absolute paths
206 # before initializing the catalog
207 conf_catalog = _convert_paths_to_absolute_posix(
208 project_path=self.project_path, conf_dictionary=conf_catalog
209 )
210 conf_creds = self._get_config_credentials()
211
212 catalog = settings.DATA_CATALOG_CLASS.from_config(
213 catalog=conf_catalog,
214 credentials=conf_creds,
215 load_versions=load_versions,
216 save_version=save_version,
217 )
218
219 feed_dict = self._get_feed_dict()
220 catalog.add_feed_dict(feed_dict)
221 _validate_transcoded_datasets(catalog)
222 self._hook_manager.hook.after_catalog_created(
223 catalog=catalog,
224 conf_catalog=conf_catalog,
225 conf_creds=conf_creds,
226 feed_dict=feed_dict,
227 save_version=save_version,
228 load_versions=load_versions,
229 )
230 return catalog
231
232 def _get_feed_dict(self) -> dict[str, Any]:
233 """Get parameters and return the feed dictionary."""
234 params = self.params
235 feed_dict = {"parameters": params}
236
237 def _add_param_to_feed_dict(param_name, param_value):
238 """This recursively adds parameter paths to the `feed_dict`,
239 whenever `param_value` is a dictionary itself, so that users can
240 specify specific nested parameters in their node inputs.
241
242 Example:
243
244 >>> param_name = "a"
245 >>> param_value = {"b": 1}
246 >>> _add_param_to_feed_dict(param_name, param_value)
247 >>> assert feed_dict["params:a"] == {"b": 1}
248 >>> assert feed_dict["params:a.b"] == 1
249 """
250 key = f"params:{param_name}"
251 feed_dict[key] = param_value
252 if isinstance(param_value, dict):
253 for key, val in param_value.items():
254 _add_param_to_feed_dict(f"{param_name}.{key}", val)
255
256 for param_name, param_value in params.items():
257 _add_param_to_feed_dict(param_name, param_value)
258
259 return feed_dict
260
261 def _get_config_credentials(self) -> dict[str, Any]:
262 """Getter for credentials specified in credentials directory."""
263 try:
264 conf_creds = self.config_loader["credentials"]
265 except MissingConfigException as exc:
266 logging.getLogger(__name__).debug(
267 "Credentials not found in your Kedro project config.\n %s", str(exc)
268 )
269 conf_creds = {}
270 return conf_creds
271
272
273 class KedroContextError(Exception):
274 """Error occurred when loading project and running context pipeline."""
275
[end of kedro/framework/context/context.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kedro/framework/context/context.py b/kedro/framework/context/context.py
--- a/kedro/framework/context/context.py
+++ b/kedro/framework/context/context.py
@@ -8,7 +8,7 @@
from urllib.parse import urlparse
from warnings import warn
-from attrs import field, frozen
+from attrs import define, field
from omegaconf import OmegaConf
from pluggy import PluginManager
@@ -142,18 +142,38 @@
return Path(project_path).expanduser().resolve()
-@frozen
+@define(slots=False) # Enable setting new attributes to `KedroContext`
class KedroContext:
"""``KedroContext`` is the base class which holds the configuration and
Kedro's main functionality.
+
+ Create a context object by providing the root of a Kedro project and
+ the environment configuration subfolders (see ``kedro.config.OmegaConfigLoader``)
+ Raises:
+ KedroContextError: If there is a mismatch
+ between Kedro project version and package version.
+ Args:
+ project_path: Project path to define the context for.
+ config_loader: Kedro's ``OmegaConfigLoader`` for loading the configuration files.
+ env: Optional argument for configuration default environment to be used
+ for running the pipeline. If not specified, it defaults to "local".
+ package_name: Package name for the Kedro project the context is
+ created for.
+ hook_manager: The ``PluginManager`` to activate hooks, supplied by the session.
+ extra_params: Optional dictionary containing extra project parameters.
+ If specified, will update (and therefore take precedence over)
+ the parameters retrieved from the project configuration.
+
"""
- _package_name: str
- project_path: Path = field(converter=_expand_full_path)
- config_loader: AbstractConfigLoader
- _hook_manager: PluginManager
- env: str | None = None
- _extra_params: dict[str, Any] | None = field(default=None, converter=deepcopy)
+ project_path: Path = field(init=True, converter=_expand_full_path)
+ config_loader: AbstractConfigLoader = field(init=True)
+ env: str | None = field(init=True)
+ _package_name: str = field(init=True)
+ _hook_manager: PluginManager = field(init=True)
+ _extra_params: dict[str, Any] | None = field(
+ init=True, default=None, converter=deepcopy
+ )
@property
def catalog(self) -> DataCatalog:
| {"golden_diff": "diff --git a/kedro/framework/context/context.py b/kedro/framework/context/context.py\n--- a/kedro/framework/context/context.py\n+++ b/kedro/framework/context/context.py\n@@ -8,7 +8,7 @@\n from urllib.parse import urlparse\n from warnings import warn\n \n-from attrs import field, frozen\n+from attrs import define, field\n from omegaconf import OmegaConf\n from pluggy import PluginManager\n \n@@ -142,18 +142,38 @@\n return Path(project_path).expanduser().resolve()\n \n \n-@frozen\n+@define(slots=False) # Enable setting new attributes to `KedroContext`\n class KedroContext:\n \"\"\"``KedroContext`` is the base class which holds the configuration and\n Kedro's main functionality.\n+\n+ Create a context object by providing the root of a Kedro project and\n+ the environment configuration subfolders (see ``kedro.config.OmegaConfigLoader``)\n+ Raises:\n+ KedroContextError: If there is a mismatch\n+ between Kedro project version and package version.\n+ Args:\n+ project_path: Project path to define the context for.\n+ config_loader: Kedro's ``OmegaConfigLoader`` for loading the configuration files.\n+ env: Optional argument for configuration default environment to be used\n+ for running the pipeline. If not specified, it defaults to \"local\".\n+ package_name: Package name for the Kedro project the context is\n+ created for.\n+ hook_manager: The ``PluginManager`` to activate hooks, supplied by the session.\n+ extra_params: Optional dictionary containing extra project parameters.\n+ If specified, will update (and therefore take precedence over)\n+ the parameters retrieved from the project configuration.\n+\n \"\"\"\n \n- _package_name: str\n- project_path: Path = field(converter=_expand_full_path)\n- config_loader: AbstractConfigLoader\n- _hook_manager: PluginManager\n- env: str | None = None\n- _extra_params: dict[str, Any] | None = field(default=None, converter=deepcopy)\n+ project_path: Path = field(init=True, converter=_expand_full_path)\n+ config_loader: AbstractConfigLoader = field(init=True)\n+ env: str | None = field(init=True)\n+ _package_name: str = field(init=True)\n+ _hook_manager: PluginManager = field(init=True)\n+ _extra_params: dict[str, Any] | None = field(\n+ init=True, default=None, converter=deepcopy\n+ )\n \n @property\n def catalog(self) -> DataCatalog:\n", "issue": "Revisit: Make KedroContext a dataclass and add `config_loader` as a property\n# Description\r\nAs `KedroSession` now control the lifecycle of Kedro's run, `KedroContext` act like a container and it stores important attributes. 
Since we now dropped Python 3.6 support, we can make use of Python's `dataclass` to further tidy up `KedroContext`'s constructor code.\r\n\r\ntodo:\r\n- [x] Make `KedroContext` a dataclass(https://github.com/kedro-org/kedro/pull/1465)\r\n- [x] Add `config_loader` as @property instead of relying the private `context._get_config_loader()` https://github.com/kedro-org/kedro/pull/1505\r\n- [x] Update corresponding document in Starter/Kedro\r\n\n", "before_files": [{"content": "\"\"\"This module provides context for Kedro project.\"\"\"\nfrom __future__ import annotations\n\nimport logging\nfrom copy import deepcopy\nfrom pathlib import Path, PurePosixPath, PureWindowsPath\nfrom typing import Any\nfrom urllib.parse import urlparse\nfrom warnings import warn\n\nfrom attrs import field, frozen\nfrom omegaconf import OmegaConf\nfrom pluggy import PluginManager\n\nfrom kedro.config import AbstractConfigLoader, MissingConfigException\nfrom kedro.framework.project import settings\nfrom kedro.io import DataCatalog\nfrom kedro.pipeline.pipeline import _transcode_split\n\n\ndef _is_relative_path(path_string: str) -> bool:\n \"\"\"Checks whether a path string is a relative path.\n\n Example:\n ::\n >>> _is_relative_path(\"data/01_raw\") == True\n >>> _is_relative_path(\"info.log\") == True\n >>> _is_relative_path(\"/tmp/data/01_raw\") == False\n >>> _is_relative_path(r\"C:\\\\info.log\") == False\n >>> _is_relative_path(r\"\\\\'info.log\") == False\n >>> _is_relative_path(\"c:/info.log\") == False\n >>> _is_relative_path(\"s3://info.log\") == False\n\n Args:\n path_string: The path string to check.\n Returns:\n Whether the string is a relative path.\n \"\"\"\n # os.path.splitdrive does not reliably work on non-Windows systems\n # breaking the coverage, using PureWindowsPath instead\n is_full_windows_path_with_drive = bool(PureWindowsPath(path_string).drive)\n if is_full_windows_path_with_drive:\n return False\n\n is_remote_path = bool(urlparse(path_string).scheme)\n if is_remote_path:\n return False\n\n is_absolute_path = PurePosixPath(path_string).is_absolute()\n if is_absolute_path:\n return False\n\n return True\n\n\ndef _convert_paths_to_absolute_posix(\n project_path: Path, conf_dictionary: dict[str, Any]\n) -> dict[str, Any]:\n \"\"\"Turn all relative paths inside ``conf_dictionary`` into absolute paths by appending them\n to ``project_path`` and convert absolute Windows paths to POSIX format. This is a hack to\n make sure that we don't have to change user's working directory for logging and datasets to\n work. It is important for non-standard workflows such as IPython notebook where users don't go\n through `kedro run` or `__main__.py` entrypoints.\n\n Example:\n ::\n >>> conf = _convert_paths_to_absolute_posix(\n >>> project_path=Path(\"/path/to/my/project\"),\n >>> conf_dictionary={\n >>> \"handlers\": {\n >>> \"info_file_handler\": {\n >>> \"filename\": \"info.log\"\n >>> }\n >>> }\n >>> }\n >>> )\n >>> print(conf['handlers']['info_file_handler']['filename'])\n \"/path/to/my/project/info.log\"\n\n Args:\n project_path: The root directory to prepend to relative path to make absolute path.\n conf_dictionary: The configuration containing paths to expand.\n Returns:\n A dictionary containing only absolute paths.\n Raises:\n ValueError: If the provided ``project_path`` is not an absolute path.\n \"\"\"\n if not project_path.is_absolute():\n raise ValueError(\n f\"project_path must be an absolute path. 
Received: {project_path}\"\n )\n\n # only check a few conf keys that are known to specify a path string as value\n conf_keys_with_filepath = (\"filename\", \"filepath\", \"path\")\n\n for conf_key, conf_value in conf_dictionary.items():\n # if the conf_value is another dictionary, absolutify its paths first.\n if isinstance(conf_value, dict):\n conf_dictionary[conf_key] = _convert_paths_to_absolute_posix(\n project_path, conf_value\n )\n continue\n\n # if the conf_value is not a dictionary nor a string, skip\n if not isinstance(conf_value, str):\n continue\n\n # if the conf_value is a string but the conf_key isn't one associated with filepath, skip\n if conf_key not in conf_keys_with_filepath:\n continue\n\n if _is_relative_path(conf_value):\n # Absolute local path should be in POSIX format\n conf_value_absolute_path = (project_path / conf_value).as_posix()\n conf_dictionary[conf_key] = conf_value_absolute_path\n elif PureWindowsPath(conf_value).drive:\n # Convert absolute Windows path to POSIX format\n conf_dictionary[conf_key] = PureWindowsPath(conf_value).as_posix()\n\n return conf_dictionary\n\n\ndef _validate_transcoded_datasets(catalog: DataCatalog):\n \"\"\"Validates transcoded datasets are correctly named\n\n Args:\n catalog (DataCatalog): The catalog object containing the\n datasets to be validated.\n\n Raises:\n ValueError: If a dataset name does not conform to the expected\n transcoding naming conventions,a ValueError is raised by the\n `_transcode_split` function.\n\n \"\"\"\n # noqa: protected-access\n for dataset_name in catalog._datasets.keys():\n _transcode_split(dataset_name)\n\n\ndef _expand_full_path(project_path: str | Path) -> Path:\n return Path(project_path).expanduser().resolve()\n\n\n@frozen\nclass KedroContext:\n \"\"\"``KedroContext`` is the base class which holds the configuration and\n Kedro's main functionality.\n \"\"\"\n\n _package_name: str\n project_path: Path = field(converter=_expand_full_path)\n config_loader: AbstractConfigLoader\n _hook_manager: PluginManager\n env: str | None = None\n _extra_params: dict[str, Any] | None = field(default=None, converter=deepcopy)\n\n @property\n def catalog(self) -> DataCatalog:\n \"\"\"Read-only property referring to Kedro's ``DataCatalog`` for this context.\n\n Returns:\n DataCatalog defined in `catalog.yml`.\n Raises:\n KedroContextError: Incorrect ``DataCatalog`` registered for the project.\n\n \"\"\"\n return self._get_catalog()\n\n @property\n def params(self) -> dict[str, Any]:\n \"\"\"Read-only property referring to Kedro's parameters for this context.\n\n Returns:\n Parameters defined in `parameters.yml` with the addition of any\n extra parameters passed at initialization.\n \"\"\"\n try:\n params = self.config_loader[\"parameters\"]\n except MissingConfigException as exc:\n warn(f\"Parameters not found in your Kedro project config.\\n{str(exc)}\")\n params = {}\n\n if self._extra_params:\n # Merge nested structures\n params = OmegaConf.merge(params, self._extra_params)\n\n return OmegaConf.to_container(params) if OmegaConf.is_config(params) else params\n\n def _get_catalog(\n self,\n save_version: str = None,\n load_versions: dict[str, str] = None,\n ) -> DataCatalog:\n \"\"\"A hook for changing the creation of a DataCatalog instance.\n\n Returns:\n DataCatalog defined in `catalog.yml`.\n Raises:\n KedroContextError: Incorrect ``DataCatalog`` registered for the project.\n\n \"\"\"\n # '**/catalog*' reads modular pipeline configs\n conf_catalog = self.config_loader[\"catalog\"]\n # turn relative paths in 
conf_catalog into absolute paths\n # before initializing the catalog\n conf_catalog = _convert_paths_to_absolute_posix(\n project_path=self.project_path, conf_dictionary=conf_catalog\n )\n conf_creds = self._get_config_credentials()\n\n catalog = settings.DATA_CATALOG_CLASS.from_config(\n catalog=conf_catalog,\n credentials=conf_creds,\n load_versions=load_versions,\n save_version=save_version,\n )\n\n feed_dict = self._get_feed_dict()\n catalog.add_feed_dict(feed_dict)\n _validate_transcoded_datasets(catalog)\n self._hook_manager.hook.after_catalog_created(\n catalog=catalog,\n conf_catalog=conf_catalog,\n conf_creds=conf_creds,\n feed_dict=feed_dict,\n save_version=save_version,\n load_versions=load_versions,\n )\n return catalog\n\n def _get_feed_dict(self) -> dict[str, Any]:\n \"\"\"Get parameters and return the feed dictionary.\"\"\"\n params = self.params\n feed_dict = {\"parameters\": params}\n\n def _add_param_to_feed_dict(param_name, param_value):\n \"\"\"This recursively adds parameter paths to the `feed_dict`,\n whenever `param_value` is a dictionary itself, so that users can\n specify specific nested parameters in their node inputs.\n\n Example:\n\n >>> param_name = \"a\"\n >>> param_value = {\"b\": 1}\n >>> _add_param_to_feed_dict(param_name, param_value)\n >>> assert feed_dict[\"params:a\"] == {\"b\": 1}\n >>> assert feed_dict[\"params:a.b\"] == 1\n \"\"\"\n key = f\"params:{param_name}\"\n feed_dict[key] = param_value\n if isinstance(param_value, dict):\n for key, val in param_value.items():\n _add_param_to_feed_dict(f\"{param_name}.{key}\", val)\n\n for param_name, param_value in params.items():\n _add_param_to_feed_dict(param_name, param_value)\n\n return feed_dict\n\n def _get_config_credentials(self) -> dict[str, Any]:\n \"\"\"Getter for credentials specified in credentials directory.\"\"\"\n try:\n conf_creds = self.config_loader[\"credentials\"]\n except MissingConfigException as exc:\n logging.getLogger(__name__).debug(\n \"Credentials not found in your Kedro project config.\\n %s\", str(exc)\n )\n conf_creds = {}\n return conf_creds\n\n\nclass KedroContextError(Exception):\n \"\"\"Error occurred when loading project and running context pipeline.\"\"\"\n", "path": "kedro/framework/context/context.py"}]} | 3,557 | 579 |
gh_patches_debug_8416 | rasdani/github-patches | git_diff | optuna__optuna-449 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ValueError when calling suggest_categorical with int and str
**Conditions**
- Optuna version: 0.13.0
- Python version: 3.7.3
- OS: Windows 10 Education
- Machine Learning library to be optimized: none
**Code to reproduce**
```
def objective(trial: optuna.Trial):
x = trial.suggest_categorical("x", [1, "0"])
print(x)
optuna.create_study( study_name="test_" + now_string(), storage="sqlite:///tmp/example.db").optimize(objective, n_trials=10)
```
**Error messages, stack traces, or logs**
```
Traceback (most recent call last):
File "C:\Users\imri\github\scoring-model\venv\lib\site-packages\optuna\study.py", line 468, in _run_trial
result = func(trial)
File "~\github\scoring-model\tests\TestOptuna.py", line 12, in objective
x = trial.suggest_categorical("x", [1, "0"])
File "~\github\scoring-model\venv\lib\site-packages\optuna\trial.py", line 337, in suggest_categorical
return self._suggest(name, distributions.CategoricalDistribution(choices=choices))
File "~\github\scoring-model\venv\lib\site-packages\optuna\trial.py", line 457, in _suggest
return self._set_new_param_or_get_existing(name, param_value, distribution)
File "~\github\scoring-model\venv\lib\site-packages\optuna\trial.py", line 462, in _set_new_param_or_get_existing
param_value_in_internal_repr = distribution.to_internal_repr(param_value)
File "~\github\scoring-model\venv\lib\site-packages\optuna\distributions.py", line 236, in to_internal_repr
return self.choices.index(param_value_in_external_repr)
ValueError: tuple.index(x): x not in tuple
```
</issue>
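The traceback comes down to a dtype coercion in the random sampler rather than anything in the trial itself: `numpy.random.RandomState.choice` converts the mixed `[1, "0"]` choices into a string array, so the value that later reaches `CategoricalDistribution.to_internal_repr` is a `numpy.str_` that no longer equals the original `1`. A minimal standalone sketch of the behaviour (plain numpy, no Optuna required):

```python
import numpy

rng = numpy.random.RandomState(0)
choices = (1, "0")

picked = rng.choice(choices)
# numpy coerces the mixed tuple to a unicode array, so picked is a numpy
# string such as '1' or '0', never the original int 1.
print(type(picked), repr(picked))  # e.g. <class 'numpy.str_'> '1'

# choices.index(picked) then fails for the coerced '1', which is the
# ValueError shown in the traceback above.

# Selecting by position keeps the original Python objects intact:
picked = choices[rng.randint(0, len(choices))]
print(type(picked), repr(picked))  # int 1 or str "0"
```

This is also why the eventual fix indexes into `choices` by a random position instead of calling `rng.choice` on them directly.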
<code>
[start of optuna/samplers/random.py]
1 import numpy
2
3 from optuna import distributions
4 from optuna.samplers.base import BaseSampler
5 from optuna import types
6
7 if types.TYPE_CHECKING:
8 from typing import Any # NOQA
9 from typing import Dict # NOQA
10 from typing import Optional # NOQA
11
12 from optuna.distributions import BaseDistribution # NOQA
13 from optuna.structs import FrozenTrial # NOQA
14 from optuna.study import InTrialStudy # NOQA
15
16
17 class RandomSampler(BaseSampler):
18 """Sampler using random sampling.
19
20 Example:
21
22 .. code::
23
24 >>> study = optuna.create_study(sampler=RandomSampler())
25 >>> study.optimize(objective, direction='minimize')
26
27 Args:
28 seed: Seed for random number generator.
29 """
30
31 def __init__(self, seed=None):
32 # type: (Optional[int]) -> None
33
34 self.seed = seed
35 self.rng = numpy.random.RandomState(seed)
36
37 def infer_relative_search_space(self, study, trial):
38 # type: (InTrialStudy, FrozenTrial) -> Dict[str, BaseDistribution]
39
40 return {}
41
42 def sample_relative(self, study, trial, search_space):
43 # type: (InTrialStudy, FrozenTrial, Dict[str, BaseDistribution]) -> Dict[str, Any]
44
45 return {}
46
47 def sample_independent(self, study, trial, param_name, param_distribution):
48 # type: (InTrialStudy, FrozenTrial, str, distributions.BaseDistribution) -> Any
49 """Please consult the documentation for :func:`BaseSampler.sample_independent`."""
50
51 if isinstance(param_distribution, distributions.UniformDistribution):
52 return self.rng.uniform(param_distribution.low, param_distribution.high)
53 elif isinstance(param_distribution, distributions.LogUniformDistribution):
54 log_low = numpy.log(param_distribution.low)
55 log_high = numpy.log(param_distribution.high)
56 return float(numpy.exp(self.rng.uniform(log_low, log_high)))
57 elif isinstance(param_distribution, distributions.DiscreteUniformDistribution):
58 q = param_distribution.q
59 r = param_distribution.high - param_distribution.low
60 # [low, high] is shifted to [0, r] to align sampled values at regular intervals.
61 low = 0 - 0.5 * q
62 high = r + 0.5 * q
63 s = self.rng.uniform(low, high)
64 v = numpy.round(s / q) * q + param_distribution.low
65 # v may slightly exceed range due to round-off errors.
66 return float(min(max(v, param_distribution.low), param_distribution.high))
67 elif isinstance(param_distribution, distributions.IntUniformDistribution):
68 # numpy.random.randint includes low but excludes high.
69 return self.rng.randint(param_distribution.low, param_distribution.high + 1)
70 elif isinstance(param_distribution, distributions.CategoricalDistribution):
71 choices = param_distribution.choices
72 return self.rng.choice(choices)
73 else:
74 raise NotImplementedError
75
[end of optuna/samplers/random.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/optuna/samplers/random.py b/optuna/samplers/random.py
--- a/optuna/samplers/random.py
+++ b/optuna/samplers/random.py
@@ -69,6 +69,7 @@
return self.rng.randint(param_distribution.low, param_distribution.high + 1)
elif isinstance(param_distribution, distributions.CategoricalDistribution):
choices = param_distribution.choices
- return self.rng.choice(choices)
+ index = self.rng.randint(0, len(choices))
+ return choices[index]
else:
raise NotImplementedError
| {"golden_diff": "diff --git a/optuna/samplers/random.py b/optuna/samplers/random.py\n--- a/optuna/samplers/random.py\n+++ b/optuna/samplers/random.py\n@@ -69,6 +69,7 @@\n return self.rng.randint(param_distribution.low, param_distribution.high + 1)\n elif isinstance(param_distribution, distributions.CategoricalDistribution):\n choices = param_distribution.choices\n- return self.rng.choice(choices)\n+ index = self.rng.randint(0, len(choices))\n+ return choices[index]\n else:\n raise NotImplementedError\n", "issue": "ValueError when calling suggest_categorical with int and str\n**Conditions**\r\n- Optuna version: 0.13.0\r\n- Python version: 3.7.3\r\n- OS: Windows 10 Education\r\n- Machine Learning library to be optimized: none\r\n\r\n**Code to reproduce**\r\n```\r\ndef objective(trial: optuna.Trial):\r\n x = trial.suggest_categorical(\"x\", [1, \"0\"])\r\n print(x)\r\noptuna.create_study( study_name=\"test_\" + now_string(), storage=\"sqlite:///tmp/example.db\").optimize(objective, n_trials=10)\r\n```\r\n\r\n**Error messages, stack traces, or logs**\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\imri\\github\\scoring-model\\venv\\lib\\site-packages\\optuna\\study.py\", line 468, in _run_trial\r\n result = func(trial)\r\n File \"~\\github\\scoring-model\\tests\\TestOptuna.py\", line 12, in objective\r\n x = trial.suggest_categorical(\"x\", [1, \"0\"])\r\n File \"~\\github\\scoring-model\\venv\\lib\\site-packages\\optuna\\trial.py\", line 337, in suggest_categorical\r\n return self._suggest(name, distributions.CategoricalDistribution(choices=choices))\r\n File \"~\\github\\scoring-model\\venv\\lib\\site-packages\\optuna\\trial.py\", line 457, in _suggest\r\n return self._set_new_param_or_get_existing(name, param_value, distribution)\r\n File \"~\\github\\scoring-model\\venv\\lib\\site-packages\\optuna\\trial.py\", line 462, in _set_new_param_or_get_existing\r\n param_value_in_internal_repr = distribution.to_internal_repr(param_value)\r\n File \"~\\github\\scoring-model\\venv\\lib\\site-packages\\optuna\\distributions.py\", line 236, in to_internal_repr\r\n return self.choices.index(param_value_in_external_repr)\r\nValueError: tuple.index(x): x not in tuple\r\n```\r\n\r\n\n", "before_files": [{"content": "import numpy\n\nfrom optuna import distributions\nfrom optuna.samplers.base import BaseSampler\nfrom optuna import types\n\nif types.TYPE_CHECKING:\n from typing import Any # NOQA\n from typing import Dict # NOQA\n from typing import Optional # NOQA\n\n from optuna.distributions import BaseDistribution # NOQA\n from optuna.structs import FrozenTrial # NOQA\n from optuna.study import InTrialStudy # NOQA\n\n\nclass RandomSampler(BaseSampler):\n \"\"\"Sampler using random sampling.\n\n Example:\n\n .. 
code::\n\n >>> study = optuna.create_study(sampler=RandomSampler())\n >>> study.optimize(objective, direction='minimize')\n\n Args:\n seed: Seed for random number generator.\n \"\"\"\n\n def __init__(self, seed=None):\n # type: (Optional[int]) -> None\n\n self.seed = seed\n self.rng = numpy.random.RandomState(seed)\n\n def infer_relative_search_space(self, study, trial):\n # type: (InTrialStudy, FrozenTrial) -> Dict[str, BaseDistribution]\n\n return {}\n\n def sample_relative(self, study, trial, search_space):\n # type: (InTrialStudy, FrozenTrial, Dict[str, BaseDistribution]) -> Dict[str, Any]\n\n return {}\n\n def sample_independent(self, study, trial, param_name, param_distribution):\n # type: (InTrialStudy, FrozenTrial, str, distributions.BaseDistribution) -> Any\n \"\"\"Please consult the documentation for :func:`BaseSampler.sample_independent`.\"\"\"\n\n if isinstance(param_distribution, distributions.UniformDistribution):\n return self.rng.uniform(param_distribution.low, param_distribution.high)\n elif isinstance(param_distribution, distributions.LogUniformDistribution):\n log_low = numpy.log(param_distribution.low)\n log_high = numpy.log(param_distribution.high)\n return float(numpy.exp(self.rng.uniform(log_low, log_high)))\n elif isinstance(param_distribution, distributions.DiscreteUniformDistribution):\n q = param_distribution.q\n r = param_distribution.high - param_distribution.low\n # [low, high] is shifted to [0, r] to align sampled values at regular intervals.\n low = 0 - 0.5 * q\n high = r + 0.5 * q\n s = self.rng.uniform(low, high)\n v = numpy.round(s / q) * q + param_distribution.low\n # v may slightly exceed range due to round-off errors.\n return float(min(max(v, param_distribution.low), param_distribution.high))\n elif isinstance(param_distribution, distributions.IntUniformDistribution):\n # numpy.random.randint includes low but excludes high.\n return self.rng.randint(param_distribution.low, param_distribution.high + 1)\n elif isinstance(param_distribution, distributions.CategoricalDistribution):\n choices = param_distribution.choices\n return self.rng.choice(choices)\n else:\n raise NotImplementedError\n", "path": "optuna/samplers/random.py"}]} | 1,755 | 123 |
gh_patches_debug_59763 | rasdani/github-patches | git_diff | pretix__pretix-1120 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Not all Backend-Signals are displayed in documentation
I'm not sure why, but when looking at https://docs.pretix.eu/en/latest/development/api/general.html#backend, it seems to me like quite a few signals are not being displayed here...
Compared to https://github.com/pretix/pretix/blob/master/doc/development/api/general.rst#backend, for example, all the `html` and `navbar` signals are missing...
</issue>
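For context on how these signals carry their documentation: in `src/pretix/presale/signals.py` (reproduced below) every signal is followed by a bare string literal that serves as its documentation, presumably collected by the docs build. Drift between that text and the actual signal name is therefore easy to introduce; the fix accompanying this report only corrects such a drift in the ``footer_link`` docstring rather than changing how the ``html`` and ``navbar`` signals are rendered. The pattern, after that correction (a sketch, not the full file):

```python
from pretix.base.signals import EventPluginSignal

footer_link = EventPluginSignal(
    providing_args=["request"]
)
"""
The signal ``pretix.presale.signals.footer_link`` allows you to add links to the
footer of an event page. You are expected to return a dictionary containing the
keys ``label`` and ``url``.

As with all plugin signals, the ``sender`` keyword argument will contain the event.
"""
```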
<code>
[start of src/pretix/presale/signals.py]
1 from pretix.base.signals import EventPluginSignal
2
3 html_head = EventPluginSignal(
4 providing_args=["request"]
5 )
6 """
7 This signal allows you to put code inside the HTML ``<head>`` tag
8 of every page in the frontend. You will get the request as the keyword argument
9 ``request`` and are expected to return plain HTML.
10
11 As with all plugin signals, the ``sender`` keyword argument will contain the event.
12 """
13
14 html_footer = EventPluginSignal(
15 providing_args=["request"]
16 )
17 """
18 This signal allows you to put code before the end of the HTML ``<body>`` tag
19 of every page in the frontend. You will get the request as the keyword argument
20 ``request`` and are expected to return plain HTML.
21
22 As with all plugin signals, the ``sender`` keyword argument will contain the event.
23 """
24
25 footer_link = EventPluginSignal(
26 providing_args=["request"]
27 )
28 """
29 The signal ``pretix.presale.signals.footer_links`` allows you to add links to the footer of an event page. You
30 are expected to return a dictionary containing the keys ``label`` and ``url``.
31
32 As with all plugin signals, the ``sender`` keyword argument will contain the event.
33 """
34
35 checkout_confirm_messages = EventPluginSignal()
36 """
37 This signal is sent out to retrieve short messages that need to be acknowledged by the user before the
38 order can be completed. This is typically used for something like "accept the terms and conditions".
39 Receivers are expected to return a dictionary where the keys are globally unique identifiers for the
40 message and the values can be arbitrary HTML.
41
42 As with all plugin signals, the ``sender`` keyword argument will contain the event.
43 """
44
45 checkout_flow_steps = EventPluginSignal()
46 """
47 This signal is sent out to retrieve pages for the checkout flow
48
49 As with all plugin signals, the ``sender`` keyword argument will contain the event.
50 """
51
52 voucher_redeem_info = EventPluginSignal(
53 providing_args=["voucher"]
54 )
55 """
56 This signal is sent out to display additional information on the "redeem a voucher" page
57
58 As with all plugin signals, the ``sender`` keyword argument will contain the event.
59 """
60
61 order_meta_from_request = EventPluginSignal(
62 providing_args=["request"]
63 )
64 """
65 This signal is sent before an order is created through the pretixpresale frontend. It allows you
66 to return a dictionary that will be merged in the meta_info attribute of the order.
67 You will receive the request triggering the order creation as the ``request`` keyword argument.
68
69 As with all event-plugin signals, the ``sender`` keyword argument will contain the event.
70 """
71 checkout_confirm_page_content = EventPluginSignal(
72 providing_args=['request']
73 )
74 """
75 This signals allows you to add HTML content to the confirmation page that is presented at the
76 end of the checkout process, just before the order is being created.
77
78 As with all plugin signals, the ``sender`` keyword argument will contain the event. A ``request``
79 argument will contain the request object.
80 """
81
82 fee_calculation_for_cart = EventPluginSignal(
83 providing_args=['request', 'invoice_address', 'total']
84 )
85 """
86 This signals allows you to add fees to a cart. You are expected to return a list of ``OrderFee``
87 objects that are not yet saved to the database (because there is no order yet).
88
89 As with all plugin signals, the ``sender`` keyword argument will contain the event. A ``request``
90 argument will contain the request object and ``invoice_address`` the invoice address (useful for
91 tax calculation). The ``total`` keyword argument will contain the total cart sum without any fees.
92 You should not rely on this ``total`` value for fee calculations as other fees might interfere.
93 """
94
95 contact_form_fields = EventPluginSignal(
96 providing_args=[]
97 )
98 """
99 This signals allows you to add form fields to the contact form that is presented during checkout
100 and by default only asks for the email address. You are supposed to return a dictionary of
101 form fields with globally unique keys. The validated form results will be saved into the
102 ``contact_form_data`` entry of the order's meta_info dictionary.
103
104 As with all plugin signals, the ``sender`` keyword argument will contain the event. A ``request``
105 argument will contain the request object.
106 """
107
108 question_form_fields = EventPluginSignal(
109 providing_args=["position"]
110 )
111 """
112 This signals allows you to add form fields to the questions form that is presented during checkout
113 and by default asks for the questions configured in the backend. You are supposed to return a dictionary
114 of form fields with globally unique keys. The validated form results will be saved into the
115 ``question_form_data`` entry of the position's meta_info dictionary.
116
117 The ``position`` keyword argument will contain either a ``CartPosition`` object or an ``OrderPosition``
118 object, depending on whether the form is called as part of the order checkout or for changing an order
119 later.
120
121 As with all plugin signals, the ``sender`` keyword argument will contain the event.
122 """
123
124 order_info = EventPluginSignal(
125 providing_args=["order"]
126 )
127 """
128 This signal is sent out to display additional information on the order detail page
129
130 As with all plugin signals, the ``sender`` keyword argument will contain the event.
131 """
132
133 process_request = EventPluginSignal(
134 providing_args=["request"]
135 )
136 """
137 This signal is sent out whenever a request is made to a event presale page. Most of the
138 time, this will be called from the middleware layer (except on plugin-provided pages
139 this will be called by the @event_view decorator). Similarly to Django's process_request
140 middleware method, if you return a Response, that response will be used and the request
141 won't be processed any further down the stack.
142
143 WARNING: Be very careful about using this signal as listening to it makes it really
144 easy to cause serious performance problems.
145
146 As with all plugin signals, the ``sender`` keyword argument will contain the event.
147 """
148
149 process_response = EventPluginSignal(
150 providing_args=["request", "response"]
151 )
152 """
153 This signal is sent out whenever a response is sent from a event presale page. Most of
154 the time, this will be called from the middleware layer (except on plugin-provided pages
155 this will be called by the @event_view decorator). Similarly to Django's process_response
156 middleware method you must return a response object, that will be passed further up the
157 stack to other handlers of the signal. If you do not want to alter the response, just
158 return the ``response`` parameter.
159
160 WARNING: Be very careful about using this signal as listening to it makes it really
161 easy to cause serious performance problems.
162
163 As with all plugin signals, the ``sender`` keyword argument will contain the event.
164 """
165
166 front_page_top = EventPluginSignal(
167 providing_args=[]
168 )
169 """
170 This signal is sent out to display additional information on the frontpage above the list
171 of products and but below a custom frontpage text.
172
173 As with all plugin signals, the ``sender`` keyword argument will contain the event. The
174 receivers are expected to return HTML.
175 """
176
177 front_page_bottom = EventPluginSignal(
178 providing_args=[]
179 )
180 """
181 This signal is sent out to display additional information on the frontpage below the list
182 of products.
183
184 As with all plugin signals, the ``sender`` keyword argument will contain the event. The
185 receivers are expected to return HTML.
186 """
187
188 checkout_all_optional = EventPluginSignal(
189 providing_args=['request']
190 )
191 """
192 If any receiver of this signal returns ``True``, all input fields during checkout (contact data,
193 invoice address, confirmations) will be optional, except for questions. Use with care!
194
195 As with all plugin signals, the ``sender`` keyword argument will contain the event. A ``request``
196 argument will contain the request object.
197 """
198
[end of src/pretix/presale/signals.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/pretix/presale/signals.py b/src/pretix/presale/signals.py
--- a/src/pretix/presale/signals.py
+++ b/src/pretix/presale/signals.py
@@ -26,7 +26,7 @@
providing_args=["request"]
)
"""
-The signal ``pretix.presale.signals.footer_links`` allows you to add links to the footer of an event page. You
+The signal ``pretix.presale.signals.footer_link`` allows you to add links to the footer of an event page. You
are expected to return a dictionary containing the keys ``label`` and ``url``.
As with all plugin signals, the ``sender`` keyword argument will contain the event.
| {"golden_diff": "diff --git a/src/pretix/presale/signals.py b/src/pretix/presale/signals.py\n--- a/src/pretix/presale/signals.py\n+++ b/src/pretix/presale/signals.py\n@@ -26,7 +26,7 @@\n providing_args=[\"request\"]\n )\n \"\"\"\n-The signal ``pretix.presale.signals.footer_links`` allows you to add links to the footer of an event page. You\n+The signal ``pretix.presale.signals.footer_link`` allows you to add links to the footer of an event page. You\n are expected to return a dictionary containing the keys ``label`` and ``url``.\n \n As with all plugin signals, the ``sender`` keyword argument will contain the event.\n", "issue": "Not all Backend-Signals are displayed in documentation\nI'm not sure why, but when looking at https://docs.pretix.eu/en/latest/development/api/general.html#backend, it seems to me like quite a few signals are not being displayed here...\r\n\r\nComparing to https://github.com/pretix/pretix/blob/master/doc/development/api/general.rst#backend, for example all the `html` and `navbar`-signals are missing...\n", "before_files": [{"content": "from pretix.base.signals import EventPluginSignal\n\nhtml_head = EventPluginSignal(\n providing_args=[\"request\"]\n)\n\"\"\"\nThis signal allows you to put code inside the HTML ``<head>`` tag\nof every page in the frontend. You will get the request as the keyword argument\n``request`` and are expected to return plain HTML.\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nhtml_footer = EventPluginSignal(\n providing_args=[\"request\"]\n)\n\"\"\"\nThis signal allows you to put code before the end of the HTML ``<body>`` tag\nof every page in the frontend. You will get the request as the keyword argument\n``request`` and are expected to return plain HTML.\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nfooter_link = EventPluginSignal(\n providing_args=[\"request\"]\n)\n\"\"\"\nThe signal ``pretix.presale.signals.footer_links`` allows you to add links to the footer of an event page. You\nare expected to return a dictionary containing the keys ``label`` and ``url``.\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\ncheckout_confirm_messages = EventPluginSignal()\n\"\"\"\nThis signal is sent out to retrieve short messages that need to be acknowledged by the user before the\norder can be completed. This is typically used for something like \"accept the terms and conditions\".\nReceivers are expected to return a dictionary where the keys are globally unique identifiers for the\nmessage and the values can be arbitrary HTML.\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\ncheckout_flow_steps = EventPluginSignal()\n\"\"\"\nThis signal is sent out to retrieve pages for the checkout flow\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nvoucher_redeem_info = EventPluginSignal(\n providing_args=[\"voucher\"]\n)\n\"\"\"\nThis signal is sent out to display additional information on the \"redeem a voucher\" page\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\norder_meta_from_request = EventPluginSignal(\n providing_args=[\"request\"]\n)\n\"\"\"\nThis signal is sent before an order is created through the pretixpresale frontend. 
It allows you\nto return a dictionary that will be merged in the meta_info attribute of the order.\nYou will receive the request triggering the order creation as the ``request`` keyword argument.\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\ncheckout_confirm_page_content = EventPluginSignal(\n providing_args=['request']\n)\n\"\"\"\nThis signals allows you to add HTML content to the confirmation page that is presented at the\nend of the checkout process, just before the order is being created.\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event. A ``request``\nargument will contain the request object.\n\"\"\"\n\nfee_calculation_for_cart = EventPluginSignal(\n providing_args=['request', 'invoice_address', 'total']\n)\n\"\"\"\nThis signals allows you to add fees to a cart. You are expected to return a list of ``OrderFee``\nobjects that are not yet saved to the database (because there is no order yet).\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event. A ``request``\nargument will contain the request object and ``invoice_address`` the invoice address (useful for\ntax calculation). The ``total`` keyword argument will contain the total cart sum without any fees.\nYou should not rely on this ``total`` value for fee calculations as other fees might interfere.\n\"\"\"\n\ncontact_form_fields = EventPluginSignal(\n providing_args=[]\n)\n\"\"\"\nThis signals allows you to add form fields to the contact form that is presented during checkout\nand by default only asks for the email address. You are supposed to return a dictionary of\nform fields with globally unique keys. The validated form results will be saved into the\n``contact_form_data`` entry of the order's meta_info dictionary.\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event. A ``request``\nargument will contain the request object.\n\"\"\"\n\nquestion_form_fields = EventPluginSignal(\n providing_args=[\"position\"]\n)\n\"\"\"\nThis signals allows you to add form fields to the questions form that is presented during checkout\nand by default asks for the questions configured in the backend. You are supposed to return a dictionary\nof form fields with globally unique keys. The validated form results will be saved into the\n``question_form_data`` entry of the position's meta_info dictionary.\n\nThe ``position`` keyword argument will contain either a ``CartPosition`` object or an ``OrderPosition``\nobject, depending on whether the form is called as part of the order checkout or for changing an order\nlater.\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\norder_info = EventPluginSignal(\n providing_args=[\"order\"]\n)\n\"\"\"\nThis signal is sent out to display additional information on the order detail page\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nprocess_request = EventPluginSignal(\n providing_args=[\"request\"]\n)\n\"\"\"\nThis signal is sent out whenever a request is made to a event presale page. Most of the\ntime, this will be called from the middleware layer (except on plugin-provided pages\nthis will be called by the @event_view decorator). 
Similarly to Django's process_request\nmiddleware method, if you return a Response, that response will be used and the request\nwon't be processed any further down the stack.\n\nWARNING: Be very careful about using this signal as listening to it makes it really\neasy to cause serious performance problems.\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nprocess_response = EventPluginSignal(\n providing_args=[\"request\", \"response\"]\n)\n\"\"\"\nThis signal is sent out whenever a response is sent from a event presale page. Most of\nthe time, this will be called from the middleware layer (except on plugin-provided pages\nthis will be called by the @event_view decorator). Similarly to Django's process_response\nmiddleware method you must return a response object, that will be passed further up the\nstack to other handlers of the signal. If you do not want to alter the response, just\nreturn the ``response`` parameter.\n\nWARNING: Be very careful about using this signal as listening to it makes it really\neasy to cause serious performance problems.\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nfront_page_top = EventPluginSignal(\n providing_args=[]\n)\n\"\"\"\nThis signal is sent out to display additional information on the frontpage above the list\nof products and but below a custom frontpage text.\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event. The\nreceivers are expected to return HTML.\n\"\"\"\n\nfront_page_bottom = EventPluginSignal(\n providing_args=[]\n)\n\"\"\"\nThis signal is sent out to display additional information on the frontpage below the list\nof products.\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event. The\nreceivers are expected to return HTML.\n\"\"\"\n\ncheckout_all_optional = EventPluginSignal(\n providing_args=['request']\n)\n\"\"\"\nIf any receiver of this signal returns ``True``, all input fields during checkout (contact data,\ninvoice address, confirmations) will be optional, except for questions. Use with care!\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event. A ``request``\nargument will contain the request object.\n\"\"\"\n", "path": "src/pretix/presale/signals.py"}]} | 2,730 | 160 |
gh_patches_debug_24056 | rasdani/github-patches | git_diff | pypi__warehouse-2574 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve sorting on simple page
I'd like to submit a patch for this but I have a few questions :)
First I'll describe what I'd like to do...
## sort by version number
See https://pypi.org/simple/pre-commit/
You'll notice that `0.10.0` erroneously sorts *before* `0.2.0` (I'd like to fix this)
## investigation
I've found the code which does this sorting [here](https://github.com/pypa/warehouse/blob/3bdfe5a89cc9a922ee97304c98384c24822a09ee/warehouse/legacy/api/simple.py#L76-L89)
This seems to just sort by filename, but by inspecting and viewing [this page](https://pypi.org/simple/pre-commit-mirror-maker/) I notice it seems to ignore `_` vs. `-` (which is good, that's what I want to continue to happen but I'm just not seeing it from the code!)
## other questions
The `File` objects which come back from the database contain a `.version` attribute that I'd like to use to participate in sorting, my main question is: **Can I depend on this version to be a valid [PEP440](https://www.python.org/dev/peps/pep-0440/) version and use something like `pkg_resources.parse_version`?**
I'd basically like to replicate something close to the sorting which @chriskuehl's [dumb-pypi](https://github.com/chriskuehl/dumb-pypi) does [here](https://github.com/chriskuehl/dumb-pypi/blob/fd0f93fc2e82cbd9bae41b3c60c5f006b2319c60/dumb_pypi/main.py#L77-L91).
Thanks in advance :)
---
**Good First Issue**: This issue is good for first time contributors. If there is not a corresponding pull request for this issue, it is up for grabs. For directions for getting set up, see our [Getting Started Guide](https://warehouse.pypa.io/development/getting-started/). If you are working on this issue and have questions, please feel free to ask them here, [`#pypa-dev` on Freenode](https://webchat.freenode.net/?channels=%23pypa-dev), or the [pypa-dev mailing list](https://groups.google.com/forum/#!forum/pypa-dev).
</issue>
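As an illustrative aside (editor's note, not from the original report), the PEP 440-aware ordering the reporter asks about can be sketched with `packaging.version.parse`; the version strings below are made-up examples:

```python
# Sketch: a plain string sort mis-orders releases, a PEP 440 aware key does not.
# parse() returns Version objects that compare numerically, so 0.10.0 > 0.2.0.
from packaging.version import parse

versions = ["0.2.0", "0.9.1", "0.10.0"]

print(sorted(versions))             # ['0.10.0', '0.2.0', '0.9.1']  (lexicographic)
print(sorted(versions, key=parse))  # ['0.2.0', '0.9.1', '0.10.0']  (PEP 440 order)
```

The accompanying patch applies the same idea to the view below by sorting the query results with `key=lambda f: (parse(f.version), f.packagetype)`.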
<code>
[start of warehouse/legacy/api/simple.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from pyramid.httpexceptions import HTTPMovedPermanently
14 from pyramid.view import view_config
15 from sqlalchemy import func
16 from sqlalchemy.orm import joinedload
17
18 from warehouse.cache.http import cache_control
19 from warehouse.cache.origin import origin_cache
20 from warehouse.packaging.models import JournalEntry, File, Project, Release
21
22
23 @view_config(
24 route_name="legacy.api.simple.index",
25 renderer="legacy/api/simple/index.html",
26 decorator=[
27 cache_control(10 * 60), # 10 minutes
28 origin_cache(
29 1 * 24 * 60 * 60, # 1 day
30 stale_while_revalidate=5 * 60, # 5 minutes
31 stale_if_error=1 * 24 * 60 * 60, # 1 day
32 ),
33 ],
34 )
35 def simple_index(request):
36 # Get the latest serial number
37 serial = request.db.query(func.max(JournalEntry.id)).scalar() or 0
38 request.response.headers["X-PyPI-Last-Serial"] = str(serial)
39
40 # Fetch the name and normalized name for all of our projects
41 projects = (
42 request.db.query(Project.name, Project.normalized_name)
43 .order_by(Project.normalized_name)
44 .all()
45 )
46
47 return {"projects": projects}
48
49
50 @view_config(
51 route_name="legacy.api.simple.detail",
52 renderer="legacy/api/simple/detail.html",
53 decorator=[
54 cache_control(10 * 60), # 10 minutes
55 origin_cache(
56 1 * 24 * 60 * 60, # 1 day
57 stale_while_revalidate=5 * 60, # 5 minutes
58 stale_if_error=1 * 24 * 60 * 60, # 1 day
59 ),
60 ],
61 )
62 def simple_detail(project, request):
63 # TODO: Handle files which are not hosted on PyPI
64
65 # Make sure that we're using the normalized version of the URL.
66 if (project.normalized_name !=
67 request.matchdict.get("name", project.normalized_name)):
68 return HTTPMovedPermanently(
69 request.current_route_path(name=project.normalized_name),
70 )
71
72 # Get the latest serial number for this project.
73 request.response.headers["X-PyPI-Last-Serial"] = str(project.last_serial)
74
75 # Get all of the files for this project.
76 files = (
77 request.db.query(File)
78 .options(joinedload(File.release))
79 .filter(
80 File.name == project.name,
81 File.version.in_(
82 request.db.query(Release)
83 .filter(Release.project == project)
84 .with_entities(Release.version)
85 )
86 )
87 .order_by(File.filename)
88 .all()
89 )
90
91 return {"project": project, "files": files}
92
[end of warehouse/legacy/api/simple.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/warehouse/legacy/api/simple.py b/warehouse/legacy/api/simple.py
--- a/warehouse/legacy/api/simple.py
+++ b/warehouse/legacy/api/simple.py
@@ -10,6 +10,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+
+from packaging.version import parse
from pyramid.httpexceptions import HTTPMovedPermanently
from pyramid.view import view_config
from sqlalchemy import func
@@ -73,7 +75,7 @@
request.response.headers["X-PyPI-Last-Serial"] = str(project.last_serial)
# Get all of the files for this project.
- files = (
+ files = sorted(
request.db.query(File)
.options(joinedload(File.release))
.filter(
@@ -84,8 +86,8 @@
.with_entities(Release.version)
)
)
- .order_by(File.filename)
- .all()
+ .all(),
+ key=lambda f: (parse(f.version), f.packagetype)
)
return {"project": project, "files": files}
| {"golden_diff": "diff --git a/warehouse/legacy/api/simple.py b/warehouse/legacy/api/simple.py\n--- a/warehouse/legacy/api/simple.py\n+++ b/warehouse/legacy/api/simple.py\n@@ -10,6 +10,8 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+\n+from packaging.version import parse\n from pyramid.httpexceptions import HTTPMovedPermanently\n from pyramid.view import view_config\n from sqlalchemy import func\n@@ -73,7 +75,7 @@\n request.response.headers[\"X-PyPI-Last-Serial\"] = str(project.last_serial)\n \n # Get all of the files for this project.\n- files = (\n+ files = sorted(\n request.db.query(File)\n .options(joinedload(File.release))\n .filter(\n@@ -84,8 +86,8 @@\n .with_entities(Release.version)\n )\n )\n- .order_by(File.filename)\n- .all()\n+ .all(),\n+ key=lambda f: (parse(f.version), f.packagetype)\n )\n \n return {\"project\": project, \"files\": files}\n", "issue": "Improve sorting on simple page\nI'd like to submit a patch for this but I have a few questions :)\r\n\r\nFirst I'll describe what I'd like to do...\r\n\r\n## sort by version number\r\n\r\nSee https://pypi.org/simple/pre-commit/\r\n\r\nYou'll notice that `0.10.0` erroneously sorts *before* `0.2.0` (I'd like to fix this)\r\n\r\n## investigation\r\n\r\nI've found the code which does this sorting [here](https://github.com/pypa/warehouse/blob/3bdfe5a89cc9a922ee97304c98384c24822a09ee/warehouse/legacy/api/simple.py#L76-L89)\r\n\r\nThis seems to just sort by filename, but by inspecting and viewing [this page](https://pypi.org/simple/pre-commit-mirror-maker/) I notice it seems to ignore `_` vs. `-` (which is good, that's what I want to continue to happen but I'm just not seeing it from the code!)\r\n\r\n## other questions\r\n\r\nThe `File` objects which come back from the database contain a `.version` attribute that I'd like to use to participate in sorting, my main question is: **Can I depend on this version to be a valid [PEP440](https://www.python.org/dev/peps/pep-0440/) version and use something like `pkg_resources.parse_version`?**\r\n\r\nI'd basically like to replicate something close to the sorting which @chriskuehl's [dumb-pypi](https://github.com/chriskuehl/dumb-pypi) does [here](https://github.com/chriskuehl/dumb-pypi/blob/fd0f93fc2e82cbd9bae41b3c60c5f006b2319c60/dumb_pypi/main.py#L77-L91).\r\n\r\nThanks in advance :)\r\n\r\n---\r\n\r\n**Good First Issue**: This issue is good for first time contributors. If there is not a corresponding pull request for this issue, it is up for grabs. For directions for getting set up, see our [Getting Started Guide](https://warehouse.pypa.io/development/getting-started/). 
If you are working on this issue and have questions, please feel free to ask them here, [`#pypa-dev` on Freenode](https://webchat.freenode.net/?channels=%23pypa-dev), or the [pypa-dev mailing list](https://groups.google.com/forum/#!forum/pypa-dev).\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom pyramid.httpexceptions import HTTPMovedPermanently\nfrom pyramid.view import view_config\nfrom sqlalchemy import func\nfrom sqlalchemy.orm import joinedload\n\nfrom warehouse.cache.http import cache_control\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.packaging.models import JournalEntry, File, Project, Release\n\n\n@view_config(\n route_name=\"legacy.api.simple.index\",\n renderer=\"legacy/api/simple/index.html\",\n decorator=[\n cache_control(10 * 60), # 10 minutes\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=5 * 60, # 5 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n ],\n)\ndef simple_index(request):\n # Get the latest serial number\n serial = request.db.query(func.max(JournalEntry.id)).scalar() or 0\n request.response.headers[\"X-PyPI-Last-Serial\"] = str(serial)\n\n # Fetch the name and normalized name for all of our projects\n projects = (\n request.db.query(Project.name, Project.normalized_name)\n .order_by(Project.normalized_name)\n .all()\n )\n\n return {\"projects\": projects}\n\n\n@view_config(\n route_name=\"legacy.api.simple.detail\",\n renderer=\"legacy/api/simple/detail.html\",\n decorator=[\n cache_control(10 * 60), # 10 minutes\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=5 * 60, # 5 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n ],\n)\ndef simple_detail(project, request):\n # TODO: Handle files which are not hosted on PyPI\n\n # Make sure that we're using the normalized version of the URL.\n if (project.normalized_name !=\n request.matchdict.get(\"name\", project.normalized_name)):\n return HTTPMovedPermanently(\n request.current_route_path(name=project.normalized_name),\n )\n\n # Get the latest serial number for this project.\n request.response.headers[\"X-PyPI-Last-Serial\"] = str(project.last_serial)\n\n # Get all of the files for this project.\n files = (\n request.db.query(File)\n .options(joinedload(File.release))\n .filter(\n File.name == project.name,\n File.version.in_(\n request.db.query(Release)\n .filter(Release.project == project)\n .with_entities(Release.version)\n )\n )\n .order_by(File.filename)\n .all()\n )\n\n return {\"project\": project, \"files\": files}\n", "path": "warehouse/legacy/api/simple.py"}]} | 2,008 | 249 |
gh_patches_debug_9708 | rasdani/github-patches | git_diff | praw-dev__praw-1810 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Failed to upload a video.
**Describe the bug**
Failed to upload a video.
**To Reproduce**
Steps to reproduce the behavior:
submit any video
**Code/Logs**
```
>>> s = sbrdt.submit_video ('video', 'myvideo.mp4')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/gaspar/.local/lib/python3.9/site-packages/praw/models/reddit/subreddit.py", line 1383, in submit_video
video_poster_url=self._upload_media(thumbnail_path)[0],
File "/home/gaspar/.local/lib/python3.9/site-packages/praw/models/reddit/subreddit.py", line 695, in _upload_media
with open(media_path, "rb") as media:
FileNotFoundError: [Errno 2] No such file or directory: '/home/gaspar/.local/lib/python3.9/site-packages/praw/images/PRAW logo.png'
```
**System Info**
- OS: Arch Linux
- Python: 3.9.5
- PRAW Version: 7.4.0
</issue>
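An editor's reading of the traceback (not stated in the original report): the missing file is the PNG placeholder thumbnail that `submit_video` falls back to, while the `package_data` entry in the `setup.py` below only globs `images/*.jpg`, so the logo never gets installed. A quick check on an affected install:

```python
# Sketch: confirm whether the bundled placeholder thumbnail was installed.
# submit_video() uses it as the default video poster, per the traceback above.
from pathlib import Path

import praw

logo = Path(praw.__file__).parent / "images" / "PRAW logo.png"
print(logo, "->", "present" if logo.exists() else "missing")
```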
<code>
[start of setup.py]
1 """praw setup.py"""
2
3 import re
4 from codecs import open
5 from os import path
6
7 from setuptools import find_packages, setup
8
9 PACKAGE_NAME = "praw"
10 HERE = path.abspath(path.dirname(__file__))
11 with open(path.join(HERE, "README.rst"), encoding="utf-8") as fp:
12 README = fp.read()
13 with open(path.join(HERE, PACKAGE_NAME, "const.py"), encoding="utf-8") as fp:
14 VERSION = re.search('__version__ = "([^"]+)"', fp.read()).group(1)
15
16 extras = {
17 "ci": ["coveralls"],
18 "dev": ["packaging"],
19 "lint": [
20 "pre-commit",
21 "sphinx",
22 "sphinx_rtd_theme",
23 ],
24 "readthedocs": ["sphinx", "sphinx_rtd_theme"],
25 "test": [
26 "betamax >=0.8, <0.9",
27 "betamax-matchers >=0.3.0, <0.5",
28 "pytest >=2.7.3",
29 ],
30 }
31 extras["dev"] += extras["lint"] + extras["test"]
32
33 setup(
34 name=PACKAGE_NAME,
35 author="Bryce Boe",
36 author_email="[email protected]",
37 python_requires="~=3.6",
38 classifiers=[
39 "Development Status :: 5 - Production/Stable",
40 "Environment :: Console",
41 "Intended Audience :: Developers",
42 "License :: OSI Approved :: BSD License",
43 "Natural Language :: English",
44 "Operating System :: OS Independent",
45 "Programming Language :: Python",
46 "Programming Language :: Python :: 3",
47 "Programming Language :: Python :: 3.6",
48 "Programming Language :: Python :: 3.7",
49 "Programming Language :: Python :: 3.8",
50 "Programming Language :: Python :: 3.9",
51 "Programming Language :: Python :: 3.10",
52 "Topic :: Utilities",
53 ],
54 description=(
55 "PRAW, an acronym for `Python Reddit API Wrapper`, is a python package that"
56 " allows for simple access to reddit's API."
57 ),
58 extras_require=extras,
59 install_requires=[
60 "prawcore >=2.1, <3",
61 "update_checker >=0.18",
62 "websocket-client >=0.54.0",
63 ],
64 keywords="reddit api wrapper",
65 license="Simplified BSD License",
66 long_description=README,
67 package_data={"": ["LICENSE.txt"], PACKAGE_NAME: ["*.ini", "images/*.jpg"]},
68 packages=find_packages(exclude=["tests", "tests.*", "tools", "tools.*"]),
69 project_urls={
70 "Change Log": "https://praw.readthedocs.io/en/latest/package_info/change_log.html",
71 "Documentation": "https://praw.readthedocs.io/",
72 "Issue Tracker": "https://github.com/praw-dev/praw/issues",
73 "Source Code": "https://github.com/praw-dev/praw",
74 },
75 version=VERSION,
76 )
77
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -64,7 +64,7 @@
keywords="reddit api wrapper",
license="Simplified BSD License",
long_description=README,
- package_data={"": ["LICENSE.txt"], PACKAGE_NAME: ["*.ini", "images/*.jpg"]},
+ package_data={"": ["LICENSE.txt"], PACKAGE_NAME: ["*.ini", "images/*.png"]},
packages=find_packages(exclude=["tests", "tests.*", "tools", "tools.*"]),
project_urls={
"Change Log": "https://praw.readthedocs.io/en/latest/package_info/change_log.html",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -64,7 +64,7 @@\n keywords=\"reddit api wrapper\",\n license=\"Simplified BSD License\",\n long_description=README,\n- package_data={\"\": [\"LICENSE.txt\"], PACKAGE_NAME: [\"*.ini\", \"images/*.jpg\"]},\n+ package_data={\"\": [\"LICENSE.txt\"], PACKAGE_NAME: [\"*.ini\", \"images/*.png\"]},\n packages=find_packages(exclude=[\"tests\", \"tests.*\", \"tools\", \"tools.*\"]),\n project_urls={\n \"Change Log\": \"https://praw.readthedocs.io/en/latest/package_info/change_log.html\",\n", "issue": "Failed to upload a video.\n**Describe the bug**\r\nFailed to upload a video.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\nsubmit any video\r\n\r\n**Code/Logs**\r\n```\r\n>>> s = sbrdt.submit_video ('video', 'myvideo.mp4')\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/gaspar/.local/lib/python3.9/site-packages/praw/models/reddit/subreddit.py\", line 1383, in submit_video\r\n video_poster_url=self._upload_media(thumbnail_path)[0],\r\n File \"/home/gaspar/.local/lib/python3.9/site-packages/praw/models/reddit/subreddit.py\", line 695, in _upload_media\r\n with open(media_path, \"rb\") as media:\r\nFileNotFoundError: [Errno 2] No such file or directory: '/home/gaspar/.local/lib/python3.9/site-packages/praw/images/PRAW logo.png'\r\n```\r\n\r\n**System Info**\r\n - OS: Arch Linux\r\n - Python: 3.9.5\r\n - PRAW Version: 7.4.0\r\n\n", "before_files": [{"content": "\"\"\"praw setup.py\"\"\"\n\nimport re\nfrom codecs import open\nfrom os import path\n\nfrom setuptools import find_packages, setup\n\nPACKAGE_NAME = \"praw\"\nHERE = path.abspath(path.dirname(__file__))\nwith open(path.join(HERE, \"README.rst\"), encoding=\"utf-8\") as fp:\n README = fp.read()\nwith open(path.join(HERE, PACKAGE_NAME, \"const.py\"), encoding=\"utf-8\") as fp:\n VERSION = re.search('__version__ = \"([^\"]+)\"', fp.read()).group(1)\n\nextras = {\n \"ci\": [\"coveralls\"],\n \"dev\": [\"packaging\"],\n \"lint\": [\n \"pre-commit\",\n \"sphinx\",\n \"sphinx_rtd_theme\",\n ],\n \"readthedocs\": [\"sphinx\", \"sphinx_rtd_theme\"],\n \"test\": [\n \"betamax >=0.8, <0.9\",\n \"betamax-matchers >=0.3.0, <0.5\",\n \"pytest >=2.7.3\",\n ],\n}\nextras[\"dev\"] += extras[\"lint\"] + extras[\"test\"]\n\nsetup(\n name=PACKAGE_NAME,\n author=\"Bryce Boe\",\n author_email=\"[email protected]\",\n python_requires=\"~=3.6\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Utilities\",\n ],\n description=(\n \"PRAW, an acronym for `Python Reddit API Wrapper`, is a python package that\"\n \" allows for simple access to reddit's API.\"\n ),\n extras_require=extras,\n install_requires=[\n \"prawcore >=2.1, <3\",\n \"update_checker >=0.18\",\n \"websocket-client >=0.54.0\",\n ],\n keywords=\"reddit api wrapper\",\n license=\"Simplified BSD License\",\n long_description=README,\n package_data={\"\": [\"LICENSE.txt\"], PACKAGE_NAME: [\"*.ini\", \"images/*.jpg\"]},\n 
packages=find_packages(exclude=[\"tests\", \"tests.*\", \"tools\", \"tools.*\"]),\n project_urls={\n \"Change Log\": \"https://praw.readthedocs.io/en/latest/package_info/change_log.html\",\n \"Documentation\": \"https://praw.readthedocs.io/\",\n \"Issue Tracker\": \"https://github.com/praw-dev/praw/issues\",\n \"Source Code\": \"https://github.com/praw-dev/praw\",\n },\n version=VERSION,\n)\n", "path": "setup.py"}]} | 1,571 | 145 |
gh_patches_debug_35228 | rasdani/github-patches | git_diff | mirumee__ariadne-529 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
OpenTracing plugin performs deepcopy of resolver's args, which fails when file upload for larger file is used.
OpenTracing performs a deep copy of the arguments passed to the resolver function when arg filtering is used (e.g. to hide passwords), but this apparently fails when there is a larger uploaded file in the args.
A potential fix would be a default filter that replaces uploaded files with a cheap str representation (e.g. `<UploadedFile(name="test.jpg", type="image/jpeg", size=44100)>`) before the custom filtering logic is run.
</issue>
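To illustrate the suggested default filter (a sketch only, not Ariadne's actual API): walk the resolver kwargs and swap any upload object for a cheap string before the values reach `deepcopy` or a custom arg filter.

```python
# Minimal sketch of replacing uploads with a cheap repr prior to deep-copying.
# UploadFile is starlette's upload wrapper, which Ariadne's ASGI app hands to resolvers.
from starlette.datastructures import UploadFile


def strip_uploads(value):
    if isinstance(value, UploadFile):
        return f'<UploadedFile(name="{value.filename}", type="{value.content_type}")>'
    if isinstance(value, dict):
        return {key: strip_uploads(item) for key, item in value.items()}
    if isinstance(value, list):
        return [strip_uploads(item) for item in value]
    return value


# strip_uploads(kwargs) is then cheap to deepcopy or log, even for huge uploads.
```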
<code>
[start of ariadne/contrib/tracing/opentracing.py]
1 from copy import deepcopy
2 from functools import partial
3 from inspect import isawaitable
4 from typing import Any, Callable, Dict, Optional
5
6 from graphql import GraphQLResolveInfo
7 from opentracing import Scope, Tracer, global_tracer
8 from opentracing.ext import tags
9
10 from ...types import ContextValue, Extension, Resolver
11 from .utils import format_path, should_trace
12
13 ArgFilter = Callable[[Dict[str, Any], GraphQLResolveInfo], Dict[str, Any]]
14
15
16 class OpenTracingExtension(Extension):
17 _arg_filter: Optional[ArgFilter]
18 _root_scope: Scope
19 _tracer: Tracer
20
21 def __init__(self, *, arg_filter: Optional[ArgFilter] = None):
22 self._arg_filter = arg_filter
23 self._tracer = global_tracer()
24 self._root_scope = None
25
26 def request_started(self, context: ContextValue):
27 self._root_scope = self._tracer.start_active_span("GraphQL Query")
28 self._root_scope.span.set_tag(tags.COMPONENT, "graphql")
29
30 def request_finished(self, context: ContextValue):
31 self._root_scope.close()
32
33 async def resolve(
34 self, next_: Resolver, parent: Any, info: GraphQLResolveInfo, **kwargs
35 ):
36 if not should_trace(info):
37 result = next_(parent, info, **kwargs)
38 if isawaitable(result):
39 result = await result
40 return result
41
42 with self._tracer.start_active_span(info.field_name) as scope:
43 span = scope.span
44 span.set_tag(tags.COMPONENT, "graphql")
45 span.set_tag("graphql.parentType", info.parent_type.name)
46
47 graphql_path = ".".join(
48 map(str, format_path(info.path)) # pylint: disable=bad-builtin
49 )
50 span.set_tag("graphql.path", graphql_path)
51
52 if kwargs:
53 filtered_kwargs = self.filter_resolver_args(kwargs, info)
54 for kwarg, value in filtered_kwargs.items():
55 span.set_tag(f"graphql.param.{kwarg}", value)
56
57 result = next_(parent, info, **kwargs)
58 if isawaitable(result):
59 result = await result
60 return result
61
62 def filter_resolver_args(
63 self, args: Dict[str, Any], info: GraphQLResolveInfo
64 ) -> Dict[str, Any]:
65 if not self._arg_filter:
66 return args
67
68 return self._arg_filter(deepcopy(args), info)
69
70
71 class OpenTracingExtensionSync(OpenTracingExtension):
72 def resolve(
73 self, next_: Resolver, parent: Any, info: GraphQLResolveInfo, **kwargs
74 ): # pylint: disable=invalid-overridden-method
75 if not should_trace(info):
76 result = next_(parent, info, **kwargs)
77 return result
78
79 with self._tracer.start_active_span(info.field_name) as scope:
80 span = scope.span
81 span.set_tag(tags.COMPONENT, "graphql")
82 span.set_tag("graphql.parentType", info.parent_type.name)
83
84 graphql_path = ".".join(
85 map(str, format_path(info.path)) # pylint: disable=bad-builtin
86 )
87 span.set_tag("graphql.path", graphql_path)
88
89 if kwargs:
90 filtered_kwargs = self.filter_resolver_args(kwargs, info)
91 for kwarg, value in filtered_kwargs.items():
92 span.set_tag(f"graphql.param.{kwarg}", value)
93
94 result = next_(parent, info, **kwargs)
95 return result
96
97
98 def opentracing_extension(*, arg_filter: Optional[ArgFilter] = None):
99 return partial(OpenTracingExtension, arg_filter=arg_filter)
100
101
102 def opentracing_extension_sync(*, arg_filter: Optional[ArgFilter] = None):
103 return partial(OpenTracingExtensionSync, arg_filter=arg_filter)
104
[end of ariadne/contrib/tracing/opentracing.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ariadne/contrib/tracing/opentracing.py b/ariadne/contrib/tracing/opentracing.py
--- a/ariadne/contrib/tracing/opentracing.py
+++ b/ariadne/contrib/tracing/opentracing.py
@@ -1,11 +1,13 @@
-from copy import deepcopy
+import cgi
+import os
from functools import partial
from inspect import isawaitable
-from typing import Any, Callable, Dict, Optional
+from typing import Any, Callable, Dict, Optional, Union
from graphql import GraphQLResolveInfo
from opentracing import Scope, Tracer, global_tracer
from opentracing.ext import tags
+from starlette.datastructures import UploadFile
from ...types import ContextValue, Extension, Resolver
from .utils import format_path, should_trace
@@ -62,10 +64,12 @@
def filter_resolver_args(
self, args: Dict[str, Any], info: GraphQLResolveInfo
) -> Dict[str, Any]:
+ args_to_trace = copy_args_for_tracing(args)
+
if not self._arg_filter:
- return args
+ return args_to_trace
- return self._arg_filter(deepcopy(args), info)
+ return self._arg_filter(args_to_trace, info)
class OpenTracingExtensionSync(OpenTracingExtension):
@@ -101,3 +105,34 @@
def opentracing_extension_sync(*, arg_filter: Optional[ArgFilter] = None):
return partial(OpenTracingExtensionSync, arg_filter=arg_filter)
+
+
+def copy_args_for_tracing(value: Any) -> Any:
+ if isinstance(value, dict):
+ return {k: copy_args_for_tracing(v) for k, v in value.items()}
+ if isinstance(value, list):
+ return [copy_args_for_tracing(v) for v in value]
+ if isinstance(value, (UploadFile, cgi.FieldStorage)):
+ return repr_upload_file(value)
+ return value
+
+
+def repr_upload_file(upload_file: Union[UploadFile, cgi.FieldStorage]) -> str:
+ filename = upload_file.filename
+
+ if isinstance(upload_file, cgi.FieldStorage):
+ mime_type = upload_file.type
+ else:
+ mime_type = upload_file.content_type
+
+ if upload_file.file is None and isinstance(upload_file, cgi.FieldStorage):
+ size = len(upload_file.value) if upload_file.value is not None else 0
+ else:
+ file_ = upload_file.file
+ file_.seek(0, os.SEEK_END)
+ size = file_.tell()
+ file_.seek(0)
+
+ return (
+ f"{type(upload_file)}(mime_type={mime_type}, size={size}, filename={filename})"
+ )
| {"golden_diff": "diff --git a/ariadne/contrib/tracing/opentracing.py b/ariadne/contrib/tracing/opentracing.py\n--- a/ariadne/contrib/tracing/opentracing.py\n+++ b/ariadne/contrib/tracing/opentracing.py\n@@ -1,11 +1,13 @@\n-from copy import deepcopy\n+import cgi\n+import os\n from functools import partial\n from inspect import isawaitable\n-from typing import Any, Callable, Dict, Optional\n+from typing import Any, Callable, Dict, Optional, Union\n \n from graphql import GraphQLResolveInfo\n from opentracing import Scope, Tracer, global_tracer\n from opentracing.ext import tags\n+from starlette.datastructures import UploadFile\n \n from ...types import ContextValue, Extension, Resolver\n from .utils import format_path, should_trace\n@@ -62,10 +64,12 @@\n def filter_resolver_args(\n self, args: Dict[str, Any], info: GraphQLResolveInfo\n ) -> Dict[str, Any]:\n+ args_to_trace = copy_args_for_tracing(args)\n+\n if not self._arg_filter:\n- return args\n+ return args_to_trace\n \n- return self._arg_filter(deepcopy(args), info)\n+ return self._arg_filter(args_to_trace, info)\n \n \n class OpenTracingExtensionSync(OpenTracingExtension):\n@@ -101,3 +105,34 @@\n \n def opentracing_extension_sync(*, arg_filter: Optional[ArgFilter] = None):\n return partial(OpenTracingExtensionSync, arg_filter=arg_filter)\n+\n+\n+def copy_args_for_tracing(value: Any) -> Any:\n+ if isinstance(value, dict):\n+ return {k: copy_args_for_tracing(v) for k, v in value.items()}\n+ if isinstance(value, list):\n+ return [copy_args_for_tracing(v) for v in value]\n+ if isinstance(value, (UploadFile, cgi.FieldStorage)):\n+ return repr_upload_file(value)\n+ return value\n+\n+\n+def repr_upload_file(upload_file: Union[UploadFile, cgi.FieldStorage]) -> str:\n+ filename = upload_file.filename\n+\n+ if isinstance(upload_file, cgi.FieldStorage):\n+ mime_type = upload_file.type\n+ else:\n+ mime_type = upload_file.content_type\n+\n+ if upload_file.file is None and isinstance(upload_file, cgi.FieldStorage):\n+ size = len(upload_file.value) if upload_file.value is not None else 0\n+ else:\n+ file_ = upload_file.file\n+ file_.seek(0, os.SEEK_END)\n+ size = file_.tell()\n+ file_.seek(0)\n+\n+ return (\n+ f\"{type(upload_file)}(mime_type={mime_type}, size={size}, filename={filename})\"\n+ )\n", "issue": "OpenTracing plugin performs deepcopy of resolver's args, which fails when file upload for larger file is used.\nOpenTracing performs deep copy of arguments passed to the resolver function when args filtering is used (eg. to hide passwords), but this apparently fails there's larger uploaded file in the args.\r\n\r\nPotential fix would be default filter that replaces uploaded files with cheap str representation (eg. 
`<UploadedFile(name=\"test.jpg\", type=\"image/jpeg\", size=44100)>`) before custom filtering logic is ran next.\n", "before_files": [{"content": "from copy import deepcopy\nfrom functools import partial\nfrom inspect import isawaitable\nfrom typing import Any, Callable, Dict, Optional\n\nfrom graphql import GraphQLResolveInfo\nfrom opentracing import Scope, Tracer, global_tracer\nfrom opentracing.ext import tags\n\nfrom ...types import ContextValue, Extension, Resolver\nfrom .utils import format_path, should_trace\n\nArgFilter = Callable[[Dict[str, Any], GraphQLResolveInfo], Dict[str, Any]]\n\n\nclass OpenTracingExtension(Extension):\n _arg_filter: Optional[ArgFilter]\n _root_scope: Scope\n _tracer: Tracer\n\n def __init__(self, *, arg_filter: Optional[ArgFilter] = None):\n self._arg_filter = arg_filter\n self._tracer = global_tracer()\n self._root_scope = None\n\n def request_started(self, context: ContextValue):\n self._root_scope = self._tracer.start_active_span(\"GraphQL Query\")\n self._root_scope.span.set_tag(tags.COMPONENT, \"graphql\")\n\n def request_finished(self, context: ContextValue):\n self._root_scope.close()\n\n async def resolve(\n self, next_: Resolver, parent: Any, info: GraphQLResolveInfo, **kwargs\n ):\n if not should_trace(info):\n result = next_(parent, info, **kwargs)\n if isawaitable(result):\n result = await result\n return result\n\n with self._tracer.start_active_span(info.field_name) as scope:\n span = scope.span\n span.set_tag(tags.COMPONENT, \"graphql\")\n span.set_tag(\"graphql.parentType\", info.parent_type.name)\n\n graphql_path = \".\".join(\n map(str, format_path(info.path)) # pylint: disable=bad-builtin\n )\n span.set_tag(\"graphql.path\", graphql_path)\n\n if kwargs:\n filtered_kwargs = self.filter_resolver_args(kwargs, info)\n for kwarg, value in filtered_kwargs.items():\n span.set_tag(f\"graphql.param.{kwarg}\", value)\n\n result = next_(parent, info, **kwargs)\n if isawaitable(result):\n result = await result\n return result\n\n def filter_resolver_args(\n self, args: Dict[str, Any], info: GraphQLResolveInfo\n ) -> Dict[str, Any]:\n if not self._arg_filter:\n return args\n\n return self._arg_filter(deepcopy(args), info)\n\n\nclass OpenTracingExtensionSync(OpenTracingExtension):\n def resolve(\n self, next_: Resolver, parent: Any, info: GraphQLResolveInfo, **kwargs\n ): # pylint: disable=invalid-overridden-method\n if not should_trace(info):\n result = next_(parent, info, **kwargs)\n return result\n\n with self._tracer.start_active_span(info.field_name) as scope:\n span = scope.span\n span.set_tag(tags.COMPONENT, \"graphql\")\n span.set_tag(\"graphql.parentType\", info.parent_type.name)\n\n graphql_path = \".\".join(\n map(str, format_path(info.path)) # pylint: disable=bad-builtin\n )\n span.set_tag(\"graphql.path\", graphql_path)\n\n if kwargs:\n filtered_kwargs = self.filter_resolver_args(kwargs, info)\n for kwarg, value in filtered_kwargs.items():\n span.set_tag(f\"graphql.param.{kwarg}\", value)\n\n result = next_(parent, info, **kwargs)\n return result\n\n\ndef opentracing_extension(*, arg_filter: Optional[ArgFilter] = None):\n return partial(OpenTracingExtension, arg_filter=arg_filter)\n\n\ndef opentracing_extension_sync(*, arg_filter: Optional[ArgFilter] = None):\n return partial(OpenTracingExtensionSync, arg_filter=arg_filter)\n", "path": "ariadne/contrib/tracing/opentracing.py"}]} | 1,674 | 623 |
gh_patches_debug_37501 | rasdani/github-patches | git_diff | pytorch__vision-1005 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Mobilenet v2 width multiplier incorrect
There seems to be a small issue with the width multiplier in mobilenet v2. The official implementation rounds filter channels to a multiple of 8.
For example, with mobilenet v2 at width 1.4, the first conv layer has 44 channels as opposed to 48 in the official implementation:
model = torchvision.models.mobilenet_v2(width_mult=1.4)
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        print(module.weight.shape)
torch.Size([44, 3, 3, 3])
torch.Size([44, 1, 3, 3])
torch.Size([22, 44, 1, 1])
torch.Size([132, 22, 1, 1])
torch.Size([132, 1, 3, 3])
torch.Size([33, 132, 1, 1])
Corresponding tensorflow 2.0 keras code:
model = tf.keras.applications.MobileNetV2(
weights="imagenet", input_shape=(224, 224, 3), alpha=1.4)
model.summary()
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 224, 224, 3) 0
__________________________________________________________________________________________________
Conv1_pad (ZeroPadding2D) (None, 225, 225, 3) 0 input_1[0][0]
__________________________________________________________________________________________________
Conv1 (Conv2D) (None, 112, 112, 48) 1296 Conv1_pad[0][0]
__________________________________________________________________________________________________
bn_Conv1 (BatchNormalization) (None, 112, 112, 48) 192 Conv1[0][0]
__________________________________________________________________________________________________
Conv1_relu (ReLU) (None, 112, 112, 48) 0 bn_Conv1[0][0]
__________________________________________________________________________________________________
expanded_conv_depthwise (Depthw (None, 112, 112, 48) 432 Conv1_relu[0][0]
__________________________________________________________________________________________________
expanded_conv_depthwise_BN (Bat (None, 112, 112, 48) 192 expanded_conv_depthwise[0][0]
__________________________________________________________________________________________________
expanded_conv_depthwise_relu (R (None, 112, 112, 48) 0 expanded_conv_depthwise_BN[0][0]
__________________________________________________________________________________________________
expanded_conv_project (Conv2D) (None, 112, 112, 24) 1152 expanded_conv_depthwise_relu[0][0
__________________________________________________________________________________________________
expanded_conv_project_BN (Batch (None, 112, 112, 24) 96 expanded_conv_project[0][0]
__________________________________________________________________________________________________
block_1_expand (Conv2D) (None, 112, 112, 144 3456 expanded_conv_project_BN[0][0]
__________________________________________________________________________________________________
block_1_expand_BN (BatchNormali (None, 112, 112, 144 576 block_1_expand[0][0]
__________________________________________________________________________________________________
block_1_expand_relu (ReLU) (None, 112, 112, 144 0 block_1_expand_BN[0][0]
__________________________________________________________________________________________________
block_1_pad (ZeroPadding2D) (None, 113, 113, 144 0 block_1_expand_relu[0][0]
__________________________________________________________________________________________________
block_1_depthwise (DepthwiseCon (None, 56, 56, 144) 1296 block_1_pad[0][0]
__________________________________________________________________________________________________
block_1_depthwise_BN (BatchNorm (None, 56, 56, 144) 576 block_1_depthwise[0][0]
__________________________________________________________________________________________________
block_1_depthwise_relu (ReLU) (None, 56, 56, 144) 0 block_1_depthwise_BN[0][0]
__________________________________________________________________________________________________
block_1_project (Conv2D) (None, 56, 56, 32) 4608 block_1_depthwise_relu[0][0]
__________________________________________________________________________________________________
I've implemented a fix here:
https://github.com/yaysummeriscoming/vision/blob/master/torchvision/models/mobilenet.py
Can I merge it in?
</issue>
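For reference, the rounding rule the report describes (and which the linked fix mirrors from the TensorFlow implementation) can be sketched as follows; with it, the first layer's 32 * 1.4 = 44.8 channels round up to 48 instead of truncating to 44:

```python
# Sketch of the channel rounding used by the reference MobileNet implementations:
# round to the nearest multiple of `divisor`, but never drop below 90% of the
# requested width.
def make_divisible(v, divisor=8, min_value=None):
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v


print(int(32 * 1.4))              # 44, what torchvision currently produces
print(make_divisible(32 * 1.4))   # 48, matching the Keras/TF reference
```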
<code>
[start of torchvision/models/mobilenet.py]
1 from torch import nn
2 from .utils import load_state_dict_from_url
3
4
5 __all__ = ['MobileNetV2', 'mobilenet_v2']
6
7
8 model_urls = {
9 'mobilenet_v2': 'https://download.pytorch.org/models/mobilenet_v2-b0353104.pth',
10 }
11
12
13 class ConvBNReLU(nn.Sequential):
14 def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1):
15 padding = (kernel_size - 1) // 2
16 super(ConvBNReLU, self).__init__(
17 nn.Conv2d(in_planes, out_planes, kernel_size, stride, padding, groups=groups, bias=False),
18 nn.BatchNorm2d(out_planes),
19 nn.ReLU6(inplace=True)
20 )
21
22
23 class InvertedResidual(nn.Module):
24 def __init__(self, inp, oup, stride, expand_ratio):
25 super(InvertedResidual, self).__init__()
26 self.stride = stride
27 assert stride in [1, 2]
28
29 hidden_dim = int(round(inp * expand_ratio))
30 self.use_res_connect = self.stride == 1 and inp == oup
31
32 layers = []
33 if expand_ratio != 1:
34 # pw
35 layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1))
36 layers.extend([
37 # dw
38 ConvBNReLU(hidden_dim, hidden_dim, stride=stride, groups=hidden_dim),
39 # pw-linear
40 nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
41 nn.BatchNorm2d(oup),
42 ])
43 self.conv = nn.Sequential(*layers)
44
45 def forward(self, x):
46 if self.use_res_connect:
47 return x + self.conv(x)
48 else:
49 return self.conv(x)
50
51
52 class MobileNetV2(nn.Module):
53 def __init__(self, num_classes=1000, width_mult=1.0, inverted_residual_setting=None):
54 super(MobileNetV2, self).__init__()
55 block = InvertedResidual
56 input_channel = 32
57 last_channel = 1280
58
59 if inverted_residual_setting is None:
60 inverted_residual_setting = [
61 # t, c, n, s
62 [1, 16, 1, 1],
63 [6, 24, 2, 2],
64 [6, 32, 3, 2],
65 [6, 64, 4, 2],
66 [6, 96, 3, 1],
67 [6, 160, 3, 2],
68 [6, 320, 1, 1],
69 ]
70
71 # only check the first element, assuming user knows t,c,n,s are required
72 if len(inverted_residual_setting) == 0 or len(inverted_residual_setting[0]) != 4:
73 raise ValueError("inverted_residual_setting should be non-empty "
74 "or a 4-element list, got {}".format(inverted_residual_setting))
75
76 # building first layer
77 input_channel = int(input_channel * width_mult)
78 self.last_channel = int(last_channel * max(1.0, width_mult))
79 features = [ConvBNReLU(3, input_channel, stride=2)]
80 # building inverted residual blocks
81 for t, c, n, s in inverted_residual_setting:
82 output_channel = int(c * width_mult)
83 for i in range(n):
84 stride = s if i == 0 else 1
85 features.append(block(input_channel, output_channel, stride, expand_ratio=t))
86 input_channel = output_channel
87 # building last several layers
88 features.append(ConvBNReLU(input_channel, self.last_channel, kernel_size=1))
89 # make it nn.Sequential
90 self.features = nn.Sequential(*features)
91
92 # building classifier
93 self.classifier = nn.Sequential(
94 nn.Dropout(0.2),
95 nn.Linear(self.last_channel, num_classes),
96 )
97
98 # weight initialization
99 for m in self.modules():
100 if isinstance(m, nn.Conv2d):
101 nn.init.kaiming_normal_(m.weight, mode='fan_out')
102 if m.bias is not None:
103 nn.init.zeros_(m.bias)
104 elif isinstance(m, nn.BatchNorm2d):
105 nn.init.ones_(m.weight)
106 nn.init.zeros_(m.bias)
107 elif isinstance(m, nn.Linear):
108 nn.init.normal_(m.weight, 0, 0.01)
109 nn.init.zeros_(m.bias)
110
111 def forward(self, x):
112 x = self.features(x)
113 x = x.mean([2, 3])
114 x = self.classifier(x)
115 return x
116
117
118 def mobilenet_v2(pretrained=False, progress=True, **kwargs):
119 """
120 Constructs a MobileNetV2 architecture from
121 `"MobileNetV2: Inverted Residuals and Linear Bottlenecks" <https://arxiv.org/abs/1801.04381>`_.
122
123 Args:
124 pretrained (bool): If True, returns a model pre-trained on ImageNet
125 progress (bool): If True, displays a progress bar of the download to stderr
126 """
127 model = MobileNetV2(**kwargs)
128 if pretrained:
129 state_dict = load_state_dict_from_url(model_urls['mobilenet_v2'],
130 progress=progress)
131 model.load_state_dict(state_dict)
132 return model
133
[end of torchvision/models/mobilenet.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchvision/models/mobilenet.py b/torchvision/models/mobilenet.py
--- a/torchvision/models/mobilenet.py
+++ b/torchvision/models/mobilenet.py
@@ -10,6 +10,26 @@
}
+def _make_divisible(v, divisor, min_value=None):
+ """
+ This function is taken from the original tf repo.
+ It ensures that all layers have a channel number that is divisible by 8
+ It can be seen here:
+ https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py
+ :param v:
+ :param divisor:
+ :param min_value:
+ :return:
+ """
+ if min_value is None:
+ min_value = divisor
+ new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
+ # Make sure that round down does not go down by more than 10%.
+ if new_v < 0.9 * v:
+ new_v += divisor
+ return new_v
+
+
class ConvBNReLU(nn.Sequential):
def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1):
padding = (kernel_size - 1) // 2
@@ -50,7 +70,17 @@
class MobileNetV2(nn.Module):
- def __init__(self, num_classes=1000, width_mult=1.0, inverted_residual_setting=None):
+ def __init__(self, num_classes=1000, width_mult=1.0, inverted_residual_setting=None, round_nearest=8):
+ """
+ MobileNet V2 main class
+
+ Args:
+ num_classes (int): Number of classes
+ width_mult (float): Width multiplier - adjusts number of channels in each layer by this amount
+ inverted_residual_setting: Network structure
+ round_nearest (int): Round the number of channels in each layer to be a multiple of this number
+ Set to 1 to turn off rounding
+ """
super(MobileNetV2, self).__init__()
block = InvertedResidual
input_channel = 32
@@ -74,12 +104,12 @@
"or a 4-element list, got {}".format(inverted_residual_setting))
# building first layer
- input_channel = int(input_channel * width_mult)
- self.last_channel = int(last_channel * max(1.0, width_mult))
+ input_channel = _make_divisible(input_channel * width_mult, round_nearest)
+ self.last_channel = _make_divisible(last_channel * max(1.0, width_mult), round_nearest)
features = [ConvBNReLU(3, input_channel, stride=2)]
# building inverted residual blocks
for t, c, n, s in inverted_residual_setting:
- output_channel = int(c * width_mult)
+ output_channel = _make_divisible(c * width_mult, round_nearest)
for i in range(n):
stride = s if i == 0 else 1
features.append(block(input_channel, output_channel, stride, expand_ratio=t))
| {"golden_diff": "diff --git a/torchvision/models/mobilenet.py b/torchvision/models/mobilenet.py\n--- a/torchvision/models/mobilenet.py\n+++ b/torchvision/models/mobilenet.py\n@@ -10,6 +10,26 @@\n }\n \n \n+def _make_divisible(v, divisor, min_value=None):\n+ \"\"\"\n+ This function is taken from the original tf repo.\n+ It ensures that all layers have a channel number that is divisible by 8\n+ It can be seen here:\n+ https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py\n+ :param v:\n+ :param divisor:\n+ :param min_value:\n+ :return:\n+ \"\"\"\n+ if min_value is None:\n+ min_value = divisor\n+ new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)\n+ # Make sure that round down does not go down by more than 10%.\n+ if new_v < 0.9 * v:\n+ new_v += divisor\n+ return new_v\n+\n+\n class ConvBNReLU(nn.Sequential):\n def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1):\n padding = (kernel_size - 1) // 2\n@@ -50,7 +70,17 @@\n \n \n class MobileNetV2(nn.Module):\n- def __init__(self, num_classes=1000, width_mult=1.0, inverted_residual_setting=None):\n+ def __init__(self, num_classes=1000, width_mult=1.0, inverted_residual_setting=None, round_nearest=8):\n+ \"\"\"\n+ MobileNet V2 main class\n+\n+ Args:\n+ num_classes (int): Number of classes\n+ width_mult (float): Width multiplier - adjusts number of channels in each layer by this amount\n+ inverted_residual_setting: Network structure\n+ round_nearest (int): Round the number of channels in each layer to be a multiple of this number\n+ Set to 1 to turn off rounding\n+ \"\"\"\n super(MobileNetV2, self).__init__()\n block = InvertedResidual\n input_channel = 32\n@@ -74,12 +104,12 @@\n \"or a 4-element list, got {}\".format(inverted_residual_setting))\n \n # building first layer\n- input_channel = int(input_channel * width_mult)\n- self.last_channel = int(last_channel * max(1.0, width_mult))\n+ input_channel = _make_divisible(input_channel * width_mult, round_nearest)\n+ self.last_channel = _make_divisible(last_channel * max(1.0, width_mult), round_nearest)\n features = [ConvBNReLU(3, input_channel, stride=2)]\n # building inverted residual blocks\n for t, c, n, s in inverted_residual_setting:\n- output_channel = int(c * width_mult)\n+ output_channel = _make_divisible(c * width_mult, round_nearest)\n for i in range(n):\n stride = s if i == 0 else 1\n features.append(block(input_channel, output_channel, stride, expand_ratio=t))\n", "issue": "Mobilenet v2 width multiplier incorrect\nThere seems to be a small issue with the width multiplier in mobilenet v2. The official implementation rounds filter channels to a multiple of 8. \r\n\r\nFor example, mobilenet v2 width 1.4. 
The first conv layer has 44 channels as opposed to 48 in the official implementation:\r\nmodel = torchvision.models.mobilenet_v2(width_mult=1.4)\r\n\r\nfor module in model.modules():\r\n if isinstance(module, nn.Conv2d):\r\n print(module.weight.shape)\r\n\r\ntorch.Size([44, 3, 3, 3])\r\ntorch.Size([44, 1, 3, 3])\r\ntorch.Size([22, 44, 1, 1])\r\ntorch.Size([132, 22, 1, 1])\r\ntorch.Size([132, 1, 3, 3])\r\ntorch.Size([33, 132, 1, 1])\r\n\r\n\r\nCorresponding tensorflow 2.0 keras code:\r\nmodel = tf.keras.applications.MobileNetV2(\r\n weights=\"imagenet\", input_shape=(224, 224, 3), alpha=1.4)\r\nmodel.summary()\r\n\r\n__________________________________________________________________________________________________\r\nLayer (type) Output Shape Param # Connected to \r\n==================================================================================================\r\ninput_1 (InputLayer) [(None, 224, 224, 3) 0 \r\n__________________________________________________________________________________________________\r\nConv1_pad (ZeroPadding2D) (None, 225, 225, 3) 0 input_1[0][0] \r\n__________________________________________________________________________________________________\r\nConv1 (Conv2D) (None, 112, 112, 48) 1296 Conv1_pad[0][0] \r\n__________________________________________________________________________________________________\r\nbn_Conv1 (BatchNormalization) (None, 112, 112, 48) 192 Conv1[0][0] \r\n__________________________________________________________________________________________________\r\nConv1_relu (ReLU) (None, 112, 112, 48) 0 bn_Conv1[0][0] \r\n__________________________________________________________________________________________________\r\nexpanded_conv_depthwise (Depthw (None, 112, 112, 48) 432 Conv1_relu[0][0] \r\n__________________________________________________________________________________________________\r\nexpanded_conv_depthwise_BN (Bat (None, 112, 112, 48) 192 expanded_conv_depthwise[0][0] \r\n__________________________________________________________________________________________________\r\nexpanded_conv_depthwise_relu (R (None, 112, 112, 48) 0 expanded_conv_depthwise_BN[0][0] \r\n__________________________________________________________________________________________________\r\nexpanded_conv_project (Conv2D) (None, 112, 112, 24) 1152 expanded_conv_depthwise_relu[0][0\r\n__________________________________________________________________________________________________\r\nexpanded_conv_project_BN (Batch (None, 112, 112, 24) 96 expanded_conv_project[0][0] \r\n__________________________________________________________________________________________________\r\nblock_1_expand (Conv2D) (None, 112, 112, 144 3456 expanded_conv_project_BN[0][0] \r\n__________________________________________________________________________________________________\r\nblock_1_expand_BN (BatchNormali (None, 112, 112, 144 576 block_1_expand[0][0] \r\n__________________________________________________________________________________________________\r\nblock_1_expand_relu (ReLU) (None, 112, 112, 144 0 block_1_expand_BN[0][0] \r\n__________________________________________________________________________________________________\r\nblock_1_pad (ZeroPadding2D) (None, 113, 113, 144 0 block_1_expand_relu[0][0] \r\n__________________________________________________________________________________________________\r\nblock_1_depthwise (DepthwiseCon (None, 56, 56, 144) 1296 block_1_pad[0][0] 
\r\n__________________________________________________________________________________________________\r\nblock_1_depthwise_BN (BatchNorm (None, 56, 56, 144) 576 block_1_depthwise[0][0] \r\n__________________________________________________________________________________________________\r\nblock_1_depthwise_relu (ReLU) (None, 56, 56, 144) 0 block_1_depthwise_BN[0][0] \r\n__________________________________________________________________________________________________\r\nblock_1_project (Conv2D) (None, 56, 56, 32) 4608 block_1_depthwise_relu[0][0] \r\n__________________________________________________________________________________________________\r\n\r\n\r\nI've implemented a fix here:\r\nhttps://github.com/yaysummeriscoming/vision/blob/master/torchvision/models/mobilenet.py\r\n\r\nCan I merge it in?\n", "before_files": [{"content": "from torch import nn\nfrom .utils import load_state_dict_from_url\n\n\n__all__ = ['MobileNetV2', 'mobilenet_v2']\n\n\nmodel_urls = {\n 'mobilenet_v2': 'https://download.pytorch.org/models/mobilenet_v2-b0353104.pth',\n}\n\n\nclass ConvBNReLU(nn.Sequential):\n def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1):\n padding = (kernel_size - 1) // 2\n super(ConvBNReLU, self).__init__(\n nn.Conv2d(in_planes, out_planes, kernel_size, stride, padding, groups=groups, bias=False),\n nn.BatchNorm2d(out_planes),\n nn.ReLU6(inplace=True)\n )\n\n\nclass InvertedResidual(nn.Module):\n def __init__(self, inp, oup, stride, expand_ratio):\n super(InvertedResidual, self).__init__()\n self.stride = stride\n assert stride in [1, 2]\n\n hidden_dim = int(round(inp * expand_ratio))\n self.use_res_connect = self.stride == 1 and inp == oup\n\n layers = []\n if expand_ratio != 1:\n # pw\n layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1))\n layers.extend([\n # dw\n ConvBNReLU(hidden_dim, hidden_dim, stride=stride, groups=hidden_dim),\n # pw-linear\n nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),\n nn.BatchNorm2d(oup),\n ])\n self.conv = nn.Sequential(*layers)\n\n def forward(self, x):\n if self.use_res_connect:\n return x + self.conv(x)\n else:\n return self.conv(x)\n\n\nclass MobileNetV2(nn.Module):\n def __init__(self, num_classes=1000, width_mult=1.0, inverted_residual_setting=None):\n super(MobileNetV2, self).__init__()\n block = InvertedResidual\n input_channel = 32\n last_channel = 1280\n\n if inverted_residual_setting is None:\n inverted_residual_setting = [\n # t, c, n, s\n [1, 16, 1, 1],\n [6, 24, 2, 2],\n [6, 32, 3, 2],\n [6, 64, 4, 2],\n [6, 96, 3, 1],\n [6, 160, 3, 2],\n [6, 320, 1, 1],\n ]\n\n # only check the first element, assuming user knows t,c,n,s are required\n if len(inverted_residual_setting) == 0 or len(inverted_residual_setting[0]) != 4:\n raise ValueError(\"inverted_residual_setting should be non-empty \"\n \"or a 4-element list, got {}\".format(inverted_residual_setting))\n\n # building first layer\n input_channel = int(input_channel * width_mult)\n self.last_channel = int(last_channel * max(1.0, width_mult))\n features = [ConvBNReLU(3, input_channel, stride=2)]\n # building inverted residual blocks\n for t, c, n, s in inverted_residual_setting:\n output_channel = int(c * width_mult)\n for i in range(n):\n stride = s if i == 0 else 1\n features.append(block(input_channel, output_channel, stride, expand_ratio=t))\n input_channel = output_channel\n # building last several layers\n features.append(ConvBNReLU(input_channel, self.last_channel, kernel_size=1))\n # make it nn.Sequential\n self.features = nn.Sequential(*features)\n\n 
# building classifier\n self.classifier = nn.Sequential(\n nn.Dropout(0.2),\n nn.Linear(self.last_channel, num_classes),\n )\n\n # weight initialization\n for m in self.modules():\n if isinstance(m, nn.Conv2d):\n nn.init.kaiming_normal_(m.weight, mode='fan_out')\n if m.bias is not None:\n nn.init.zeros_(m.bias)\n elif isinstance(m, nn.BatchNorm2d):\n nn.init.ones_(m.weight)\n nn.init.zeros_(m.bias)\n elif isinstance(m, nn.Linear):\n nn.init.normal_(m.weight, 0, 0.01)\n nn.init.zeros_(m.bias)\n\n def forward(self, x):\n x = self.features(x)\n x = x.mean([2, 3])\n x = self.classifier(x)\n return x\n\n\ndef mobilenet_v2(pretrained=False, progress=True, **kwargs):\n \"\"\"\n Constructs a MobileNetV2 architecture from\n `\"MobileNetV2: Inverted Residuals and Linear Bottlenecks\" <https://arxiv.org/abs/1801.04381>`_.\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on ImageNet\n progress (bool): If True, displays a progress bar of the download to stderr\n \"\"\"\n model = MobileNetV2(**kwargs)\n if pretrained:\n state_dict = load_state_dict_from_url(model_urls['mobilenet_v2'],\n progress=progress)\n model.load_state_dict(state_dict)\n return model\n", "path": "torchvision/models/mobilenet.py"}]} | 3,117 | 731 |
gh_patches_debug_8344 | rasdani/github-patches | git_diff | spack__spack-42976 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
package error linux-pam: fatal error: rpc/rpc.h: No such file or directory
I'm trying to install flux-security, and "linux-pam" looks to be one of its dependencies.
```console
==> Installing linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm [69/79]
==> No binary for linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm found: installing from source
==> Using cached archive: /opt/spack-environment/spack/var/spack/cache/_source-cache/archive/e4/e4ec7131a91da44512574268f493c6d8ca105c87091691b8e9b56ca685d4f94d.tar.xz
==> No patches needed for linux-pam
==> linux-pam: Executing phase: 'autoreconf'
==> linux-pam: Executing phase: 'configure'
==> linux-pam: Executing phase: 'build'
==> Error: ProcessError: Command exited with status 2:
'make' '-j16' 'V=1'
5 errors found in build log:
964 mv -f .deps/unix_chkpwd-unix_chkpwd.Tpo .deps/unix_chkpwd-unix_chkpwd.Po
965 libtool: compile: /opt/spack-environment/spack/lib/spack/env/gcc/gcc -DHAVE_CONFIG_H -I. -I../.. -I../../libpam/include -I../../libpamc/include -DCHKPWD_HELPER=\"/opt/soft
ware/linux-ubuntu22.04-neoverse_v1/gcc-11.4.0/linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm/sbin/unix_chkpwd\" -DUPDATE_HELPER=\"/opt/software/linux-ubuntu22.04-neoverse
_v1/gcc-11.4.0/linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm/sbin/unix_update\" -W -Wall -Wbad-function-cast -Wcast-align -Wcast-align=strict -Wcast-qual -Wdeprecated -W
inline -Wmain -Wmissing-declarations -Wmissing-format-attribute -Wmissing-prototypes -Wpointer-arith -Wreturn-type -Wshadow -Wstrict-prototypes -Wuninitialized -Wwrite-stri
ngs -g -O2 -MT bigcrypt.lo -MD -MP -MF .deps/bigcrypt.Tpo -c bigcrypt.c -fPIC -DPIC -o .libs/bigcrypt.o
966 /bin/bash ../../libtool --tag=CC --mode=compile /opt/spack-environment/spack/lib/spack/env/gcc/gcc -DHAVE_CONFIG_H -I. -I../.. -I../../libpam/include -I../../libpamc/
include -DCHKPWD_HELPER=\"/opt/software/linux-ubuntu22.04-neoverse_v1/gcc-11.4.0/linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm/sbin/unix_chkpwd\" -DUPDATE_HELPER=\"/opt/
software/linux-ubuntu22.04-neoverse_v1/gcc-11.4.0/linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm/sbin/unix_update\" -W -Wall -Wbad-function-cast -Wcast-align -Wcast-align
=strict -Wcast-qual -Wdeprecated -Winline -Wmain -Wmissing-declarations -Wmissing-format-attribute -Wmissing-prototypes -Wpointer-arith -Wreturn-type -Wshadow -Wstrict-prot
otypes -Wuninitialized -Wwrite-strings -g -O2 -MT md5_good.lo -MD -MP -MF .deps/md5_good.Tpo -c -o md5_good.lo md5_good.c
967 mv -f .deps/unix_update-unix_update.Tpo .deps/unix_update-unix_update.Po
968 /bin/bash ../../libtool --tag=CC --mode=compile /opt/spack-environment/spack/lib/spack/env/gcc/gcc -DHAVE_CONFIG_H -I. -I../.. -I../../libpam/include -I../../libpamc/
include -DCHKPWD_HELPER=\"/opt/software/linux-ubuntu22.04-neoverse_v1/gcc-11.4.0/linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm/sbin/unix_chkpwd\" -DUPDATE_HELPER=\"/opt/
software/linux-ubuntu22.04-neoverse_v1/gcc-11.4.0/linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm/sbin/unix_update\" -W -Wall -Wbad-function-cast -Wcast-align -Wcast-align
=strict -Wcast-qual -Wdeprecated -Winline -Wmain -Wmissing-declarations -Wmissing-format-attribute -Wmissing-prototypes -Wpointer-arith -Wreturn-type -Wshadow -Wstrict-prot
otypes -Wuninitialized -Wwrite-strings -g -O2 -MT md5_broken.lo -MD -MP -MF .deps/md5_broken.Tpo -c -o md5_broken.lo md5_broken.c
969 libtool: compile: /opt/spack-environment/spack/lib/spack/env/gcc/gcc -DHAVE_CONFIG_H -I. -I../.. -I../../libpam/include -I../../libpamc/include -DCHKPWD_HELPER=\"/opt/soft
ware/linux-ubuntu22.04-neoverse_v1/gcc-11.4.0/linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm/sbin/unix_chkpwd\" -DUPDATE_HELPER=\"/opt/software/linux-ubuntu22.04-neoverse
_v1/gcc-11.4.0/linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm/sbin/unix_update\" -W -Wall -Wbad-function-cast -Wcast-align -Wcast-align=strict -Wcast-qual -Wdeprecated -W
inline -Wmain -Wmissing-declarations -Wmissing-format-attribute -Wmissing-prototypes -Wpointer-arith -Wreturn-type -Wshadow -Wstrict-prototypes -Wuninitialized -Wwrite-stri
ngs -g -O2 -MT pam_unix_sess.lo -MD -MP -MF .deps/pam_unix_sess.Tpo -c pam_unix_sess.c -fPIC -DPIC -o .libs/pam_unix_sess.o
>> 970 pam_unix_passwd.c:80:11: fatal error: rpc/rpc.h: No such file or directory
971 80 | # include <rpc/rpc.h>
972 | ^~~~~~~~~~~
973 compilation terminated.
```
I tried installing rpc.h on my host, but to no avail - it likely needs to be defined with the package here. Thanks for the help!
</issue>
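For background, a hedged sketch of the direction such a fix usually takes in the Spack recipe, assuming the `libtirpc` package is what provides the `rpc/rpc.h` header that the build cannot find:

```python
# Hypothetical excerpt of package.py, not the merged fix: it assumes libtirpc
# supplies <rpc/rpc.h> on systems whose glibc no longer bundles Sun RPC.
from spack.package import *


class LinuxPam(AutotoolsPackage):
    """Linux PAM recipe excerpt showing an added RPC dependency."""

    depends_on("libtirpc")  # headers and library for <rpc/rpc.h>
    depends_on("m4", type="build")
    depends_on("autoconf", type="build")
    depends_on("automake", type="build")
    depends_on("libtool", type="build")
```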
<code>
[start of var/spack/repos/builtin/packages/linux-pam/package.py]
1 # Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack.package import *
7
8
9 class LinuxPam(AutotoolsPackage):
10 """Linux PAM (Pluggable Authentication Modules for Linux) project."""
11
12 homepage = "http://www.linux-pam.org/"
13 url = "https://github.com/linux-pam/linux-pam/releases/download/v1.5.2/Linux-PAM-1.5.2.tar.xz"
14
15 license("BSD-3-Clause")
16
17 version("1.5.1", sha256="201d40730b1135b1b3cdea09f2c28ac634d73181ccd0172ceddee3649c5792fc")
18 version("1.5.2", sha256="e4ec7131a91da44512574268f493c6d8ca105c87091691b8e9b56ca685d4f94d")
19 version("1.5.0", sha256="02d39854b508fae9dc713f7733bbcdadbe17b50de965aedddd65bcb6cc7852c8")
20 version("1.4.0", sha256="cd6d928c51e64139be3bdb38692c68183a509b83d4f2c221024ccd4bcddfd034")
21 version("1.3.1", sha256="eff47a4ecd833fbf18de9686632a70ee8d0794b79aecb217ebd0ce11db4cd0db")
22
23 depends_on("m4", type="build")
24 depends_on("autoconf", type="build")
25 depends_on("automake", type="build")
26 depends_on("libtool", type="build")
27
28 def configure_args(self):
29 config_args = ["--includedir=" + self.prefix.include.security]
30 return config_args
31
[end of var/spack/repos/builtin/packages/linux-pam/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/var/spack/repos/builtin/packages/linux-pam/package.py b/var/spack/repos/builtin/packages/linux-pam/package.py
--- a/var/spack/repos/builtin/packages/linux-pam/package.py
+++ b/var/spack/repos/builtin/packages/linux-pam/package.py
@@ -20,6 +20,8 @@
version("1.4.0", sha256="cd6d928c51e64139be3bdb38692c68183a509b83d4f2c221024ccd4bcddfd034")
version("1.3.1", sha256="eff47a4ecd833fbf18de9686632a70ee8d0794b79aecb217ebd0ce11db4cd0db")
+ depends_on("libtirpc")
+
depends_on("m4", type="build")
depends_on("autoconf", type="build")
depends_on("automake", type="build")
| {"golden_diff": "diff --git a/var/spack/repos/builtin/packages/linux-pam/package.py b/var/spack/repos/builtin/packages/linux-pam/package.py\n--- a/var/spack/repos/builtin/packages/linux-pam/package.py\n+++ b/var/spack/repos/builtin/packages/linux-pam/package.py\n@@ -20,6 +20,8 @@\n version(\"1.4.0\", sha256=\"cd6d928c51e64139be3bdb38692c68183a509b83d4f2c221024ccd4bcddfd034\")\n version(\"1.3.1\", sha256=\"eff47a4ecd833fbf18de9686632a70ee8d0794b79aecb217ebd0ce11db4cd0db\")\n \n+ depends_on(\"libtirpc\")\n+\n depends_on(\"m4\", type=\"build\")\n depends_on(\"autoconf\", type=\"build\")\n depends_on(\"automake\", type=\"build\")\n", "issue": "package error linux-pam: fatal error: rpc/rpc.h: No such file or directory\nI'm trying to install flux-security, and this looks to be a dependency \"linux-pam\"\r\n\r\n```console\r\n==> Installing linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm [69/79]\r\n==> No binary for linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm found: installing from source\r\n==> Using cached archive: /opt/spack-environment/spack/var/spack/cache/_source-cache/archive/e4/e4ec7131a91da44512574268f493c6d8ca105c87091691b8e9b56ca685d4f94d.tar.xz\r\n==> No patches needed for linux-pam\r\n==> linux-pam: Executing phase: 'autoreconf'\r\n==> linux-pam: Executing phase: 'configure'\r\n==> linux-pam: Executing phase: 'build'\r\n==> Error: ProcessError: Command exited with status 2:\r\n 'make' '-j16' 'V=1'\r\n\r\n5 errors found in build log:\r\n 964 mv -f .deps/unix_chkpwd-unix_chkpwd.Tpo .deps/unix_chkpwd-unix_chkpwd.Po\r\n 965 libtool: compile: /opt/spack-environment/spack/lib/spack/env/gcc/gcc -DHAVE_CONFIG_H -I. -I../.. -I../../libpam/include -I../../libpamc/include -DCHKPWD_HELPER=\\\"/opt/soft\r\n ware/linux-ubuntu22.04-neoverse_v1/gcc-11.4.0/linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm/sbin/unix_chkpwd\\\" -DUPDATE_HELPER=\\\"/opt/software/linux-ubuntu22.04-neoverse\r\n _v1/gcc-11.4.0/linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm/sbin/unix_update\\\" -W -Wall -Wbad-function-cast -Wcast-align -Wcast-align=strict -Wcast-qual -Wdeprecated -W\r\n inline -Wmain -Wmissing-declarations -Wmissing-format-attribute -Wmissing-prototypes -Wpointer-arith -Wreturn-type -Wshadow -Wstrict-prototypes -Wuninitialized -Wwrite-stri\r\n ngs -g -O2 -MT bigcrypt.lo -MD -MP -MF .deps/bigcrypt.Tpo -c bigcrypt.c -fPIC -DPIC -o .libs/bigcrypt.o\r\n 966 /bin/bash ../../libtool --tag=CC --mode=compile /opt/spack-environment/spack/lib/spack/env/gcc/gcc -DHAVE_CONFIG_H -I. -I../.. -I../../libpam/include -I../../libpamc/\r\n include -DCHKPWD_HELPER=\\\"/opt/software/linux-ubuntu22.04-neoverse_v1/gcc-11.4.0/linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm/sbin/unix_chkpwd\\\" -DUPDATE_HELPER=\\\"/opt/\r\n software/linux-ubuntu22.04-neoverse_v1/gcc-11.4.0/linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm/sbin/unix_update\\\" -W -Wall -Wbad-function-cast -Wcast-align -Wcast-align\r\n =strict -Wcast-qual -Wdeprecated -Winline -Wmain -Wmissing-declarations -Wmissing-format-attribute -Wmissing-prototypes -Wpointer-arith -Wreturn-type -Wshadow -Wstrict-prot\r\n otypes -Wuninitialized -Wwrite-strings -g -O2 -MT md5_good.lo -MD -MP -MF .deps/md5_good.Tpo -c -o md5_good.lo md5_good.c\r\n 967 mv -f .deps/unix_update-unix_update.Tpo .deps/unix_update-unix_update.Po\r\n 968 /bin/bash ../../libtool --tag=CC --mode=compile /opt/spack-environment/spack/lib/spack/env/gcc/gcc -DHAVE_CONFIG_H -I. -I../.. 
-I../../libpam/include -I../../libpamc/\r\n include -DCHKPWD_HELPER=\\\"/opt/software/linux-ubuntu22.04-neoverse_v1/gcc-11.4.0/linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm/sbin/unix_chkpwd\\\" -DUPDATE_HELPER=\\\"/opt/\r\n software/linux-ubuntu22.04-neoverse_v1/gcc-11.4.0/linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm/sbin/unix_update\\\" -W -Wall -Wbad-function-cast -Wcast-align -Wcast-align\r\n =strict -Wcast-qual -Wdeprecated -Winline -Wmain -Wmissing-declarations -Wmissing-format-attribute -Wmissing-prototypes -Wpointer-arith -Wreturn-type -Wshadow -Wstrict-prot\r\n otypes -Wuninitialized -Wwrite-strings -g -O2 -MT md5_broken.lo -MD -MP -MF .deps/md5_broken.Tpo -c -o md5_broken.lo md5_broken.c\r\n 969 libtool: compile: /opt/spack-environment/spack/lib/spack/env/gcc/gcc -DHAVE_CONFIG_H -I. -I../.. -I../../libpam/include -I../../libpamc/include -DCHKPWD_HELPER=\\\"/opt/soft\r\n ware/linux-ubuntu22.04-neoverse_v1/gcc-11.4.0/linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm/sbin/unix_chkpwd\\\" -DUPDATE_HELPER=\\\"/opt/software/linux-ubuntu22.04-neoverse\r\n _v1/gcc-11.4.0/linux-pam-1.5.2-offodszhf3suwkcqq5z2c4anlyfzwykm/sbin/unix_update\\\" -W -Wall -Wbad-function-cast -Wcast-align -Wcast-align=strict -Wcast-qual -Wdeprecated -W\r\n inline -Wmain -Wmissing-declarations -Wmissing-format-attribute -Wmissing-prototypes -Wpointer-arith -Wreturn-type -Wshadow -Wstrict-prototypes -Wuninitialized -Wwrite-stri\r\n ngs -g -O2 -MT pam_unix_sess.lo -MD -MP -MF .deps/pam_unix_sess.Tpo -c pam_unix_sess.c -fPIC -DPIC -o .libs/pam_unix_sess.o\r\n >> 970 pam_unix_passwd.c:80:11: fatal error: rpc/rpc.h: No such file or directory\r\n 971 80 | # include <rpc/rpc.h>\r\n 972 | ^~~~~~~~~~~\r\n 973 compilation terminated.\r\n```\r\n\r\nI tried installing rpc.h on my host, but to no avail - it likely needs to be defined with the package here. Thanks for the help!\n", "before_files": [{"content": "# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack.package import *\n\n\nclass LinuxPam(AutotoolsPackage):\n \"\"\"Linux PAM (Pluggable Authentication Modules for Linux) project.\"\"\"\n\n homepage = \"http://www.linux-pam.org/\"\n url = \"https://github.com/linux-pam/linux-pam/releases/download/v1.5.2/Linux-PAM-1.5.2.tar.xz\"\n\n license(\"BSD-3-Clause\")\n\n version(\"1.5.1\", sha256=\"201d40730b1135b1b3cdea09f2c28ac634d73181ccd0172ceddee3649c5792fc\")\n version(\"1.5.2\", sha256=\"e4ec7131a91da44512574268f493c6d8ca105c87091691b8e9b56ca685d4f94d\")\n version(\"1.5.0\", sha256=\"02d39854b508fae9dc713f7733bbcdadbe17b50de965aedddd65bcb6cc7852c8\")\n version(\"1.4.0\", sha256=\"cd6d928c51e64139be3bdb38692c68183a509b83d4f2c221024ccd4bcddfd034\")\n version(\"1.3.1\", sha256=\"eff47a4ecd833fbf18de9686632a70ee8d0794b79aecb217ebd0ce11db4cd0db\")\n\n depends_on(\"m4\", type=\"build\")\n depends_on(\"autoconf\", type=\"build\")\n depends_on(\"automake\", type=\"build\")\n depends_on(\"libtool\", type=\"build\")\n\n def configure_args(self):\n config_args = [\"--includedir=\" + self.prefix.include.security]\n return config_args\n", "path": "var/spack/repos/builtin/packages/linux-pam/package.py"}]} | 2,882 | 252 |
gh_patches_debug_20046 | rasdani/github-patches | git_diff | pytorch__vision-4649 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
WIDERFace download fails with zip error
Originally from https://github.com/pytorch/vision/pull/4614#issuecomment-943468223
```py
In [4]: torchvision.datasets.WIDERFace(root='/tmp/lol', split='train', download=True)
```
```
91473it [00:00, 9168293.30it/s]
---------------------------------------------------------------------------
BadZipFile Traceback (most recent call last)
<ipython-input-4-61c4acdeef4e> in <module>
----> 1 torchvision.datasets.WIDERFace(root='/tmp/lol', split='train', download=True)
~/dev/vision/torchvision/datasets/widerface.py in __init__(self, root, split, transform, target_transform, download)
70
71 if download:
---> 72 self.download()
73
74 if not self._check_integrity():
~/dev/vision/torchvision/datasets/widerface.py in download(self)
191 download_file_from_google_drive(file_id, self.root, filename, md5)
192 filepath = os.path.join(self.root, filename)
--> 193 extract_archive(filepath)
194
195 # download and extract annotation files
~/dev/vision/torchvision/datasets/utils.py in extract_archive(from_path, to_path, remove_finished)
407 extractor = _ARCHIVE_EXTRACTORS[archive_type]
408
--> 409 extractor(from_path, to_path, compression)
410
411 return to_path
~/dev/vision/torchvision/datasets/utils.py in _extract_zip(from_path, to_path, compression)
281
282 def _extract_zip(from_path: str, to_path: str, compression: Optional[str]) -> None:
--> 283 with zipfile.ZipFile(
284 from_path, "r", compression=_ZIP_COMPRESSION_MAP[compression] if compression else zipfile.ZIP_STORED
285 ) as zip:
~/opt/miniconda3/envs/pt/lib/python3.8/zipfile.py in __init__(self, file, mode, compression, allowZip64, compresslevel, strict_timestamps)
1267 try:
1268 if mode == 'r':
-> 1269 self._RealGetContents()
1270 elif mode in ('w', 'x'):
1271 # set the modified flag so central directory gets written
~/opt/miniconda3/envs/pt/lib/python3.8/zipfile.py in _RealGetContents(self)
1334 raise BadZipFile("File is not a zip file")
1335 if not endrec:
-> 1336 raise BadZipFile("File is not a zip file")
1337 if self.debug > 1:
1338 print(endrec)
BadZipFile: File is not a zip file
```
cc @pmeier
</issue>
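For background, `BadZipFile` means the bytes saved as `WIDER_train.zip` are not actually a ZIP archive (for example, an HTML error page returned by the download host instead of the file). A small stdlib check, sketched here and not part of torchvision, can confirm that before extraction; the path in the comment is just the one from the report above:

```python
import zipfile

def looks_like_zip(path: str) -> bool:
    # zipfile.is_zipfile reads the end-of-central-directory record,
    # so a truncated download or an HTML response is rejected up front.
    return zipfile.is_zipfile(path)

# Hypothetical usage against the cached download:
# print(looks_like_zip("/tmp/lol/widerface/WIDER_train.zip"))
```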
<code>
[start of torchvision/datasets/widerface.py]
1 import os
2 from os.path import abspath, expanduser
3 from typing import Any, Callable, List, Dict, Optional, Tuple, Union
4
5 import torch
6 from PIL import Image
7
8 from .utils import (
9 download_file_from_google_drive,
10 download_and_extract_archive,
11 extract_archive,
12 verify_str_arg,
13 )
14 from .vision import VisionDataset
15
16
17 class WIDERFace(VisionDataset):
18 """`WIDERFace <http://shuoyang1213.me/WIDERFACE/>`_ Dataset.
19
20 Args:
21 root (string): Root directory where images and annotations are downloaded to.
22 Expects the following folder structure if download=False:
23
24 .. code::
25
26 <root>
27                 └── widerface
28                     ├── wider_face_split ('wider_face_split.zip' if compressed)
29                     ├── WIDER_train ('WIDER_train.zip' if compressed)
30                     ├── WIDER_val ('WIDER_val.zip' if compressed)
31                     └── WIDER_test ('WIDER_test.zip' if compressed)
32 split (string): The dataset split to use. One of {``train``, ``val``, ``test``}.
33 Defaults to ``train``.
34 transform (callable, optional): A function/transform that takes in a PIL image
35 and returns a transformed version. E.g, ``transforms.RandomCrop``
36 target_transform (callable, optional): A function/transform that takes in the
37 target and transforms it.
38 download (bool, optional): If true, downloads the dataset from the internet and
39 puts it in root directory. If dataset is already downloaded, it is not
40 downloaded again.
41
42 """
43
44 BASE_FOLDER = "widerface"
45 FILE_LIST = [
46 # File ID MD5 Hash Filename
47 ("0B6eKvaijfFUDQUUwd21EckhUbWs", "3fedf70df600953d25982bcd13d91ba2", "WIDER_train.zip"),
48 ("0B6eKvaijfFUDd3dIRmpvSk8tLUk", "dfa7d7e790efa35df3788964cf0bbaea", "WIDER_val.zip"),
49 ("0B6eKvaijfFUDbW4tdGpaYjgzZkU", "e5d8f4248ed24c334bbd12f49c29dd40", "WIDER_test.zip"),
50 ]
51 ANNOTATIONS_FILE = (
52 "http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/support/bbx_annotation/wider_face_split.zip",
53 "0e3767bcf0e326556d407bf5bff5d27c",
54 "wider_face_split.zip",
55 )
56
57 def __init__(
58 self,
59 root: str,
60 split: str = "train",
61 transform: Optional[Callable] = None,
62 target_transform: Optional[Callable] = None,
63 download: bool = False,
64 ) -> None:
65 super(WIDERFace, self).__init__(
66 root=os.path.join(root, self.BASE_FOLDER), transform=transform, target_transform=target_transform
67 )
68 # check arguments
69 self.split = verify_str_arg(split, "split", ("train", "val", "test"))
70
71 if download:
72 self.download()
73
74 if not self._check_integrity():
75 raise RuntimeError(
76 "Dataset not found or corrupted. " + "You can use download=True to download and prepare it"
77 )
78
79 self.img_info: List[Dict[str, Union[str, Dict[str, torch.Tensor]]]] = []
80 if self.split in ("train", "val"):
81 self.parse_train_val_annotations_file()
82 else:
83 self.parse_test_annotations_file()
84
85 def __getitem__(self, index: int) -> Tuple[Any, Any]:
86 """
87 Args:
88 index (int): Index
89
90 Returns:
91 tuple: (image, target) where target is a dict of annotations for all faces in the image.
92 target=None for the test split.
93 """
94
95 # stay consistent with other datasets and return a PIL Image
96 img = Image.open(self.img_info[index]["img_path"])
97
98 if self.transform is not None:
99 img = self.transform(img)
100
101 target = None if self.split == "test" else self.img_info[index]["annotations"]
102 if self.target_transform is not None:
103 target = self.target_transform(target)
104
105 return img, target
106
107 def __len__(self) -> int:
108 return len(self.img_info)
109
110 def extra_repr(self) -> str:
111 lines = ["Split: {split}"]
112 return "\n".join(lines).format(**self.__dict__)
113
114 def parse_train_val_annotations_file(self) -> None:
115 filename = "wider_face_train_bbx_gt.txt" if self.split == "train" else "wider_face_val_bbx_gt.txt"
116 filepath = os.path.join(self.root, "wider_face_split", filename)
117
118 with open(filepath, "r") as f:
119 lines = f.readlines()
120 file_name_line, num_boxes_line, box_annotation_line = True, False, False
121 num_boxes, box_counter = 0, 0
122 labels = []
123 for line in lines:
124 line = line.rstrip()
125 if file_name_line:
126 img_path = os.path.join(self.root, "WIDER_" + self.split, "images", line)
127 img_path = abspath(expanduser(img_path))
128 file_name_line = False
129 num_boxes_line = True
130 elif num_boxes_line:
131 num_boxes = int(line)
132 num_boxes_line = False
133 box_annotation_line = True
134 elif box_annotation_line:
135 box_counter += 1
136 line_split = line.split(" ")
137 line_values = [int(x) for x in line_split]
138 labels.append(line_values)
139 if box_counter >= num_boxes:
140 box_annotation_line = False
141 file_name_line = True
142 labels_tensor = torch.tensor(labels)
143 self.img_info.append(
144 {
145 "img_path": img_path,
146 "annotations": {
147 "bbox": labels_tensor[:, 0:4], # x, y, width, height
148 "blur": labels_tensor[:, 4],
149 "expression": labels_tensor[:, 5],
150 "illumination": labels_tensor[:, 6],
151 "occlusion": labels_tensor[:, 7],
152 "pose": labels_tensor[:, 8],
153 "invalid": labels_tensor[:, 9],
154 },
155 }
156 )
157 box_counter = 0
158 labels.clear()
159 else:
160 raise RuntimeError("Error parsing annotation file {}".format(filepath))
161
162 def parse_test_annotations_file(self) -> None:
163 filepath = os.path.join(self.root, "wider_face_split", "wider_face_test_filelist.txt")
164 filepath = abspath(expanduser(filepath))
165 with open(filepath, "r") as f:
166 lines = f.readlines()
167 for line in lines:
168 line = line.rstrip()
169 img_path = os.path.join(self.root, "WIDER_test", "images", line)
170 img_path = abspath(expanduser(img_path))
171 self.img_info.append({"img_path": img_path})
172
173 def _check_integrity(self) -> bool:
174 # Allow original archive to be deleted (zip). Only need the extracted images
175 all_files = self.FILE_LIST.copy()
176 all_files.append(self.ANNOTATIONS_FILE)
177 for (_, md5, filename) in all_files:
178 file, ext = os.path.splitext(filename)
179 extracted_dir = os.path.join(self.root, file)
180 if not os.path.exists(extracted_dir):
181 return False
182 return True
183
184 def download(self) -> None:
185 if self._check_integrity():
186 print("Files already downloaded and verified")
187 return
188
189 # download and extract image data
190 for (file_id, md5, filename) in self.FILE_LIST:
191 download_file_from_google_drive(file_id, self.root, filename, md5)
192 filepath = os.path.join(self.root, filename)
193 extract_archive(filepath)
194
195 # download and extract annotation files
196 download_and_extract_archive(
197 url=self.ANNOTATIONS_FILE[0], download_root=self.root, md5=self.ANNOTATIONS_FILE[1]
198 )
199
[end of torchvision/datasets/widerface.py]
</code>
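For reference, a short usage sketch of the dataset class listed above, assuming the archives have already been extracted under `<root>/widerface` so the Google Drive download can be skipped entirely (`/tmp/lol` is just the root used in the report):

```python
from torchvision.datasets import WIDERFace

# With download=False the constructor only checks that the extracted
# directories exist and then parses the annotation files.
ds = WIDERFace(root="/tmp/lol", split="train", download=False)
img, target = ds[0]          # PIL image and a dict of per-face annotations
print(len(ds), target["bbox"].shape)
```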
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchvision/datasets/widerface.py b/torchvision/datasets/widerface.py
--- a/torchvision/datasets/widerface.py
+++ b/torchvision/datasets/widerface.py
@@ -43,13 +43,13 @@
BASE_FOLDER = "widerface"
FILE_LIST = [
- # File ID MD5 Hash Filename
- ("0B6eKvaijfFUDQUUwd21EckhUbWs", "3fedf70df600953d25982bcd13d91ba2", "WIDER_train.zip"),
- ("0B6eKvaijfFUDd3dIRmpvSk8tLUk", "dfa7d7e790efa35df3788964cf0bbaea", "WIDER_val.zip"),
- ("0B6eKvaijfFUDbW4tdGpaYjgzZkU", "e5d8f4248ed24c334bbd12f49c29dd40", "WIDER_test.zip"),
+ # File ID MD5 Hash Filename
+ ("15hGDLhsx8bLgLcIRD5DhYt5iBxnjNF1M", "3fedf70df600953d25982bcd13d91ba2", "WIDER_train.zip"),
+ ("1GUCogbp16PMGa39thoMMeWxp7Rp5oM8Q", "dfa7d7e790efa35df3788964cf0bbaea", "WIDER_val.zip"),
+ ("1HIfDbVEWKmsYKJZm4lchTBDLW5N7dY5T", "e5d8f4248ed24c334bbd12f49c29dd40", "WIDER_test.zip"),
]
ANNOTATIONS_FILE = (
- "http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/support/bbx_annotation/wider_face_split.zip",
+ "http://shuoyang1213.me/WIDERFACE/support/bbx_annotation/wider_face_split.zip",
"0e3767bcf0e326556d407bf5bff5d27c",
"wider_face_split.zip",
)
| {"golden_diff": "diff --git a/torchvision/datasets/widerface.py b/torchvision/datasets/widerface.py\n--- a/torchvision/datasets/widerface.py\n+++ b/torchvision/datasets/widerface.py\n@@ -43,13 +43,13 @@\n \n BASE_FOLDER = \"widerface\"\n FILE_LIST = [\n- # File ID MD5 Hash Filename\n- (\"0B6eKvaijfFUDQUUwd21EckhUbWs\", \"3fedf70df600953d25982bcd13d91ba2\", \"WIDER_train.zip\"),\n- (\"0B6eKvaijfFUDd3dIRmpvSk8tLUk\", \"dfa7d7e790efa35df3788964cf0bbaea\", \"WIDER_val.zip\"),\n- (\"0B6eKvaijfFUDbW4tdGpaYjgzZkU\", \"e5d8f4248ed24c334bbd12f49c29dd40\", \"WIDER_test.zip\"),\n+ # File ID MD5 Hash Filename\n+ (\"15hGDLhsx8bLgLcIRD5DhYt5iBxnjNF1M\", \"3fedf70df600953d25982bcd13d91ba2\", \"WIDER_train.zip\"),\n+ (\"1GUCogbp16PMGa39thoMMeWxp7Rp5oM8Q\", \"dfa7d7e790efa35df3788964cf0bbaea\", \"WIDER_val.zip\"),\n+ (\"1HIfDbVEWKmsYKJZm4lchTBDLW5N7dY5T\", \"e5d8f4248ed24c334bbd12f49c29dd40\", \"WIDER_test.zip\"),\n ]\n ANNOTATIONS_FILE = (\n- \"http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/support/bbx_annotation/wider_face_split.zip\",\n+ \"http://shuoyang1213.me/WIDERFACE/support/bbx_annotation/wider_face_split.zip\",\n \"0e3767bcf0e326556d407bf5bff5d27c\",\n \"wider_face_split.zip\",\n )\n", "issue": "WIDERFace download fails with zip error\nOriginally from https://github.com/pytorch/vision/pull/4614#issuecomment-943468223\r\n\r\n```py\r\nIn [4]: torchvision.datasets.WIDERFace(root='/tmp/lol', split='train', download=True)\r\n```\r\n\r\n```\r\n91473it [00:00, 9168293.30it/s]\r\n---------------------------------------------------------------------------\r\nBadZipFile Traceback (most recent call last)\r\n<ipython-input-4-61c4acdeef4e> in <module>\r\n----> 1 torchvision.datasets.WIDERFace(root='/tmp/lol', split='train', download=True)\r\n\r\n~/dev/vision/torchvision/datasets/widerface.py in __init__(self, root, split, transform, target_transform, download)\r\n 70\r\n 71 if download:\r\n---> 72 self.download()\r\n 73\r\n 74 if not self._check_integrity():\r\n\r\n~/dev/vision/torchvision/datasets/widerface.py in download(self)\r\n 191 download_file_from_google_drive(file_id, self.root, filename, md5)\r\n 192 filepath = os.path.join(self.root, filename)\r\n--> 193 extract_archive(filepath)\r\n 194\r\n 195 # download and extract annotation files\r\n\r\n~/dev/vision/torchvision/datasets/utils.py in extract_archive(from_path, to_path, remove_finished)\r\n 407 extractor = _ARCHIVE_EXTRACTORS[archive_type]\r\n 408\r\n--> 409 extractor(from_path, to_path, compression)\r\n 410\r\n 411 return to_path\r\n\r\n~/dev/vision/torchvision/datasets/utils.py in _extract_zip(from_path, to_path, compression)\r\n 281\r\n 282 def _extract_zip(from_path: str, to_path: str, compression: Optional[str]) -> None:\r\n--> 283 with zipfile.ZipFile(\r\n 284 from_path, \"r\", compression=_ZIP_COMPRESSION_MAP[compression] if compression else zipfile.ZIP_STORED\r\n 285 ) as zip:\r\n\r\n~/opt/miniconda3/envs/pt/lib/python3.8/zipfile.py in __init__(self, file, mode, compression, allowZip64, compresslevel, strict_timestamps)\r\n 1267 try:\r\n 1268 if mode == 'r':\r\n-> 1269 self._RealGetContents()\r\n 1270 elif mode in ('w', 'x'):\r\n 1271 # set the modified flag so central directory gets written\r\n\r\n~/opt/miniconda3/envs/pt/lib/python3.8/zipfile.py in _RealGetContents(self)\r\n 1334 raise BadZipFile(\"File is not a zip file\")\r\n 1335 if not endrec:\r\n-> 1336 raise BadZipFile(\"File is not a zip file\")\r\n 1337 if self.debug > 1:\r\n 1338 print(endrec)\r\n\r\nBadZipFile: File is not a zip file\r\n```\n\ncc @pmeier\n", "before_files": 
[{"content": "import os\nfrom os.path import abspath, expanduser\nfrom typing import Any, Callable, List, Dict, Optional, Tuple, Union\n\nimport torch\nfrom PIL import Image\n\nfrom .utils import (\n download_file_from_google_drive,\n download_and_extract_archive,\n extract_archive,\n verify_str_arg,\n)\nfrom .vision import VisionDataset\n\n\nclass WIDERFace(VisionDataset):\n \"\"\"`WIDERFace <http://shuoyang1213.me/WIDERFACE/>`_ Dataset.\n\n Args:\n root (string): Root directory where images and annotations are downloaded to.\n Expects the following folder structure if download=False:\n\n .. code::\n\n <root>\n \u2514\u2500\u2500 widerface\n \u251c\u2500\u2500 wider_face_split ('wider_face_split.zip' if compressed)\n \u251c\u2500\u2500 WIDER_train ('WIDER_train.zip' if compressed)\n \u251c\u2500\u2500 WIDER_val ('WIDER_val.zip' if compressed)\n \u2514\u2500\u2500 WIDER_test ('WIDER_test.zip' if compressed)\n split (string): The dataset split to use. One of {``train``, ``val``, ``test``}.\n Defaults to ``train``.\n transform (callable, optional): A function/transform that takes in a PIL image\n and returns a transformed version. E.g, ``transforms.RandomCrop``\n target_transform (callable, optional): A function/transform that takes in the\n target and transforms it.\n download (bool, optional): If true, downloads the dataset from the internet and\n puts it in root directory. If dataset is already downloaded, it is not\n downloaded again.\n\n \"\"\"\n\n BASE_FOLDER = \"widerface\"\n FILE_LIST = [\n # File ID MD5 Hash Filename\n (\"0B6eKvaijfFUDQUUwd21EckhUbWs\", \"3fedf70df600953d25982bcd13d91ba2\", \"WIDER_train.zip\"),\n (\"0B6eKvaijfFUDd3dIRmpvSk8tLUk\", \"dfa7d7e790efa35df3788964cf0bbaea\", \"WIDER_val.zip\"),\n (\"0B6eKvaijfFUDbW4tdGpaYjgzZkU\", \"e5d8f4248ed24c334bbd12f49c29dd40\", \"WIDER_test.zip\"),\n ]\n ANNOTATIONS_FILE = (\n \"http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/support/bbx_annotation/wider_face_split.zip\",\n \"0e3767bcf0e326556d407bf5bff5d27c\",\n \"wider_face_split.zip\",\n )\n\n def __init__(\n self,\n root: str,\n split: str = \"train\",\n transform: Optional[Callable] = None,\n target_transform: Optional[Callable] = None,\n download: bool = False,\n ) -> None:\n super(WIDERFace, self).__init__(\n root=os.path.join(root, self.BASE_FOLDER), transform=transform, target_transform=target_transform\n )\n # check arguments\n self.split = verify_str_arg(split, \"split\", (\"train\", \"val\", \"test\"))\n\n if download:\n self.download()\n\n if not self._check_integrity():\n raise RuntimeError(\n \"Dataset not found or corrupted. 
\" + \"You can use download=True to download and prepare it\"\n )\n\n self.img_info: List[Dict[str, Union[str, Dict[str, torch.Tensor]]]] = []\n if self.split in (\"train\", \"val\"):\n self.parse_train_val_annotations_file()\n else:\n self.parse_test_annotations_file()\n\n def __getitem__(self, index: int) -> Tuple[Any, Any]:\n \"\"\"\n Args:\n index (int): Index\n\n Returns:\n tuple: (image, target) where target is a dict of annotations for all faces in the image.\n target=None for the test split.\n \"\"\"\n\n # stay consistent with other datasets and return a PIL Image\n img = Image.open(self.img_info[index][\"img_path\"])\n\n if self.transform is not None:\n img = self.transform(img)\n\n target = None if self.split == \"test\" else self.img_info[index][\"annotations\"]\n if self.target_transform is not None:\n target = self.target_transform(target)\n\n return img, target\n\n def __len__(self) -> int:\n return len(self.img_info)\n\n def extra_repr(self) -> str:\n lines = [\"Split: {split}\"]\n return \"\\n\".join(lines).format(**self.__dict__)\n\n def parse_train_val_annotations_file(self) -> None:\n filename = \"wider_face_train_bbx_gt.txt\" if self.split == \"train\" else \"wider_face_val_bbx_gt.txt\"\n filepath = os.path.join(self.root, \"wider_face_split\", filename)\n\n with open(filepath, \"r\") as f:\n lines = f.readlines()\n file_name_line, num_boxes_line, box_annotation_line = True, False, False\n num_boxes, box_counter = 0, 0\n labels = []\n for line in lines:\n line = line.rstrip()\n if file_name_line:\n img_path = os.path.join(self.root, \"WIDER_\" + self.split, \"images\", line)\n img_path = abspath(expanduser(img_path))\n file_name_line = False\n num_boxes_line = True\n elif num_boxes_line:\n num_boxes = int(line)\n num_boxes_line = False\n box_annotation_line = True\n elif box_annotation_line:\n box_counter += 1\n line_split = line.split(\" \")\n line_values = [int(x) for x in line_split]\n labels.append(line_values)\n if box_counter >= num_boxes:\n box_annotation_line = False\n file_name_line = True\n labels_tensor = torch.tensor(labels)\n self.img_info.append(\n {\n \"img_path\": img_path,\n \"annotations\": {\n \"bbox\": labels_tensor[:, 0:4], # x, y, width, height\n \"blur\": labels_tensor[:, 4],\n \"expression\": labels_tensor[:, 5],\n \"illumination\": labels_tensor[:, 6],\n \"occlusion\": labels_tensor[:, 7],\n \"pose\": labels_tensor[:, 8],\n \"invalid\": labels_tensor[:, 9],\n },\n }\n )\n box_counter = 0\n labels.clear()\n else:\n raise RuntimeError(\"Error parsing annotation file {}\".format(filepath))\n\n def parse_test_annotations_file(self) -> None:\n filepath = os.path.join(self.root, \"wider_face_split\", \"wider_face_test_filelist.txt\")\n filepath = abspath(expanduser(filepath))\n with open(filepath, \"r\") as f:\n lines = f.readlines()\n for line in lines:\n line = line.rstrip()\n img_path = os.path.join(self.root, \"WIDER_test\", \"images\", line)\n img_path = abspath(expanduser(img_path))\n self.img_info.append({\"img_path\": img_path})\n\n def _check_integrity(self) -> bool:\n # Allow original archive to be deleted (zip). 
Only need the extracted images\n all_files = self.FILE_LIST.copy()\n all_files.append(self.ANNOTATIONS_FILE)\n for (_, md5, filename) in all_files:\n file, ext = os.path.splitext(filename)\n extracted_dir = os.path.join(self.root, file)\n if not os.path.exists(extracted_dir):\n return False\n return True\n\n def download(self) -> None:\n if self._check_integrity():\n print(\"Files already downloaded and verified\")\n return\n\n # download and extract image data\n for (file_id, md5, filename) in self.FILE_LIST:\n download_file_from_google_drive(file_id, self.root, filename, md5)\n filepath = os.path.join(self.root, filename)\n extract_archive(filepath)\n\n # download and extract annotation files\n download_and_extract_archive(\n url=self.ANNOTATIONS_FILE[0], download_root=self.root, md5=self.ANNOTATIONS_FILE[1]\n )\n", "path": "torchvision/datasets/widerface.py"}]} | 3,564 | 576 |
gh_patches_debug_2417 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-1864 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Invalid table limit error
**Describe the bug**
When running a fresh dev instance I get an `Invalid table limit` error, coming from `initdb.py`. Not sure if something is broken in the latest main branch, or if I need to update my configuration.
**To Reproduce**
Steps to reproduce the behavior:
1. fetch latest `main` branch
2. `./bw-dev resetdb`
3. Get error (see below)
**Expected behavior**
BookWyrm resets the database and the new install works without errors.
**Screenshots**
```
Applying sessions.0001_initial... OK
+ execweb python manage.py initdb
+ docker-compose exec web python manage.py initdb
Traceback (most recent call last):
File "/app/manage.py", line 18, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/app/bookwyrm/management/commands/initdb.py", line 168, in handle
raise Exception("Invalid table limit:", limit)
Exception: ('Invalid table limit:', None)
```
**Instance**
local development, current `main` branch.
**Additional context**
I initially started getting this error on a branch I was working on, but it's occurring on the latest `main` branch without any changes.
---
**Desktop (please complete the following information):**
- OS: MacOS
</issue>
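For context, the exception comes from the `--limit` validation in `handle()`: when the flag is omitted the option is `None`, so only an explicitly supplied value should be checked against the table list. A minimal sketch of that guard (not the project's exact code):

```python
# Assumes `options` is the parsed option dict handed to Command.handle().
TABLES = ["group", "permission", "connector", "federatedserver", "settings", "linkdomain"]

def validate_limit(options):
    limit = options.get("limit")
    # --limit is optional, so None must be allowed through untouched.
    if limit and limit not in TABLES:
        raise Exception("Invalid table limit:", limit)
    return limit
```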
<code>
[start of bookwyrm/management/commands/initdb.py]
1 """ What you need in the database to make it work """
2 from django.core.management.base import BaseCommand
3 from django.contrib.auth.models import Group, Permission
4 from django.contrib.contenttypes.models import ContentType
5
6 from bookwyrm import models
7
8
9 def init_groups():
10 """permission levels"""
11 groups = ["admin", "moderator", "editor"]
12 for group in groups:
13 Group.objects.create(name=group)
14
15
16 def init_permissions():
17 """permission types"""
18 permissions = [
19 {
20 "codename": "edit_instance_settings",
21 "name": "change the instance info",
22 "groups": [
23 "admin",
24 ],
25 },
26 {
27 "codename": "set_user_group",
28 "name": "change what group a user is in",
29 "groups": ["admin", "moderator"],
30 },
31 {
32 "codename": "control_federation",
33 "name": "control who to federate with",
34 "groups": ["admin", "moderator"],
35 },
36 {
37 "codename": "create_invites",
38 "name": "issue invitations to join",
39 "groups": ["admin", "moderator"],
40 },
41 {
42 "codename": "moderate_user",
43 "name": "deactivate or silence a user",
44 "groups": ["admin", "moderator"],
45 },
46 {
47 "codename": "moderate_post",
48 "name": "delete other users' posts",
49 "groups": ["admin", "moderator"],
50 },
51 {
52 "codename": "edit_book",
53 "name": "edit book info",
54 "groups": ["admin", "moderator", "editor"],
55 },
56 ]
57
58 content_type = models.ContentType.objects.get_for_model(User)
59 for permission in permissions:
60 permission_obj = Permission.objects.create(
61 codename=permission["codename"],
62 name=permission["name"],
63 content_type=content_type,
64 )
65 # add the permission to the appropriate groups
66 for group_name in permission["groups"]:
67 Group.objects.get(name=group_name).permissions.add(permission_obj)
68
69 # while the groups and permissions shouldn't be changed because the code
70 # depends on them, what permissions go with what groups should be editable
71
72
73 def init_connectors():
74 """access book data sources"""
75 models.Connector.objects.create(
76 identifier="bookwyrm.social",
77 name="BookWyrm dot Social",
78 connector_file="bookwyrm_connector",
79 base_url="https://bookwyrm.social",
80 books_url="https://bookwyrm.social/book",
81 covers_url="https://bookwyrm.social/images/",
82 search_url="https://bookwyrm.social/search?q=",
83 isbn_search_url="https://bookwyrm.social/isbn/",
84 priority=2,
85 )
86
87 models.Connector.objects.create(
88 identifier="inventaire.io",
89 name="Inventaire",
90 connector_file="inventaire",
91 base_url="https://inventaire.io",
92 books_url="https://inventaire.io/api/entities",
93 covers_url="https://inventaire.io",
94 search_url="https://inventaire.io/api/search?types=works&types=works&search=",
95 isbn_search_url="https://inventaire.io/api/entities?action=by-uris&uris=isbn%3A",
96 priority=3,
97 )
98
99 models.Connector.objects.create(
100 identifier="openlibrary.org",
101 name="OpenLibrary",
102 connector_file="openlibrary",
103 base_url="https://openlibrary.org",
104 books_url="https://openlibrary.org",
105 covers_url="https://covers.openlibrary.org",
106 search_url="https://openlibrary.org/search?q=",
107 isbn_search_url="https://openlibrary.org/api/books?jscmd=data&format=json&bibkeys=ISBN:",
108 priority=3,
109 )
110
111
112 def init_federated_servers():
113 """big no to nazis"""
114 built_in_blocks = ["gab.ai", "gab.com"]
115 for server in built_in_blocks:
116 models.FederatedServer.objects.create(
117 server_name=server,
118 status="blocked",
119 )
120
121
122 def init_settings():
123 """info about the instance"""
124 models.SiteSettings.objects.create(
125 support_link="https://www.patreon.com/bookwyrm",
126 support_title="Patreon",
127 )
128
129
130 def init_link_domains(*_):
131 """safe book links"""
132 domains = [
133 ("standardebooks.org", "Standard EBooks"),
134 ("www.gutenberg.org", "Project Gutenberg"),
135 ("archive.org", "Internet Archive"),
136 ("openlibrary.org", "Open Library"),
137 ("theanarchistlibrary.org", "The Anarchist Library"),
138 ]
139 for domain, name in domains:
140 models.LinkDomain.objects.create(
141 domain=domain,
142 name=name,
143 status="approved",
144 )
145
146
147 class Command(BaseCommand):
148 help = "Initializes the database with starter data"
149
150 def add_arguments(self, parser):
151 parser.add_argument(
152 "--limit",
153 default=None,
154 help="Limit init to specific table",
155 )
156
157 def handle(self, *args, **options):
158 limit = options.get("limit")
159 tables = [
160 "group",
161 "permission",
162 "connector",
163 "federatedserver",
164 "settings",
165 "linkdomain",
166 ]
167 if limit not in tables:
168 raise Exception("Invalid table limit:", limit)
169
170 if not limit or limit == "group":
171 init_groups()
172 if not limit or limit == "permission":
173 init_permissions()
174 if not limit or limit == "connector":
175 init_connectors()
176 if not limit or limit == "federatedserver":
177 init_federated_servers()
178 if not limit or limit == "settings":
179 init_settings()
180 if not limit or limit == "linkdomain":
181 init_link_domains()
182
[end of bookwyrm/management/commands/initdb.py]
</code>
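For reference, a hedged sketch of invoking the management command defined above from Python, assuming Django settings are already configured; the `limit` keyword maps to the command's `--limit` option:

```python
# Hypothetical: exercising the command from a Django shell or a test.
from django.core.management import call_command

call_command("initdb")                 # seed every starter table
call_command("initdb", limit="group")  # seed only the permission groups
```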
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bookwyrm/management/commands/initdb.py b/bookwyrm/management/commands/initdb.py
--- a/bookwyrm/management/commands/initdb.py
+++ b/bookwyrm/management/commands/initdb.py
@@ -164,7 +164,7 @@
"settings",
"linkdomain",
]
- if limit not in tables:
+ if limit and limit not in tables:
raise Exception("Invalid table limit:", limit)
if not limit or limit == "group":
| {"golden_diff": "diff --git a/bookwyrm/management/commands/initdb.py b/bookwyrm/management/commands/initdb.py\n--- a/bookwyrm/management/commands/initdb.py\n+++ b/bookwyrm/management/commands/initdb.py\n@@ -164,7 +164,7 @@\n \"settings\",\n \"linkdomain\",\n ]\n- if limit not in tables:\n+ if limit and limit not in tables:\n raise Exception(\"Invalid table limit:\", limit)\n \n if not limit or limit == \"group\":\n", "issue": "Invalid table limit error\n**Describe the bug**\r\nWhen running a fresh dev instance I get an `Invalid table limit` error, coming from `initdb.py`. Not sure if something is broken in the latest main branch, or I need to update my configuration.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. fetch latest `main` branch\r\n2. `./bw-dev resetdb`\r\n3. Get error (see below)\r\n\r\n**Expected behavior**\r\nBookWyrm resets database and new install works without errors.\r\n\r\n**Screenshots**\r\n```\r\n Applying sessions.0001_initial... OK\r\n+ execweb python manage.py initdb\r\n+ docker-compose exec web python manage.py initdb\r\nTraceback (most recent call last):\r\n File \"/app/manage.py\", line 18, in <module>\r\n execute_from_command_line(sys.argv)\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py\", line 419, in execute_from_command_line\r\n utility.execute()\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py\", line 413, in execute\r\n self.fetch_command(subcommand).run_from_argv(self.argv)\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/base.py\", line 354, in run_from_argv\r\n self.execute(*args, **cmd_options)\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/base.py\", line 398, in execute\r\n output = self.handle(*args, **options)\r\n File \"/app/bookwyrm/management/commands/initdb.py\", line 168, in handle\r\n raise Exception(\"Invalid table limit:\", limit)\r\nException: ('Invalid table limit:', None)\r\n```\r\n\r\n**Instance**\r\nlocal development, current `main` branch.\r\n\r\n**Additional context**\r\nI initially started getting this error on a branch I was working on, but it's occuring on the latest `main` branch without any changes.\r\n\r\n---\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: MacOS\r\n\n", "before_files": [{"content": "\"\"\" What you need in the database to make it work \"\"\"\nfrom django.core.management.base import BaseCommand\nfrom django.contrib.auth.models import Group, Permission\nfrom django.contrib.contenttypes.models import ContentType\n\nfrom bookwyrm import models\n\n\ndef init_groups():\n \"\"\"permission levels\"\"\"\n groups = [\"admin\", \"moderator\", \"editor\"]\n for group in groups:\n Group.objects.create(name=group)\n\n\ndef init_permissions():\n \"\"\"permission types\"\"\"\n permissions = [\n {\n \"codename\": \"edit_instance_settings\",\n \"name\": \"change the instance info\",\n \"groups\": [\n \"admin\",\n ],\n },\n {\n \"codename\": \"set_user_group\",\n \"name\": \"change what group a user is in\",\n \"groups\": [\"admin\", \"moderator\"],\n },\n {\n \"codename\": \"control_federation\",\n \"name\": \"control who to federate with\",\n \"groups\": [\"admin\", \"moderator\"],\n },\n {\n \"codename\": \"create_invites\",\n \"name\": \"issue invitations to join\",\n \"groups\": [\"admin\", \"moderator\"],\n },\n {\n \"codename\": \"moderate_user\",\n \"name\": \"deactivate or silence a user\",\n \"groups\": [\"admin\", \"moderator\"],\n },\n {\n 
\"codename\": \"moderate_post\",\n \"name\": \"delete other users' posts\",\n \"groups\": [\"admin\", \"moderator\"],\n },\n {\n \"codename\": \"edit_book\",\n \"name\": \"edit book info\",\n \"groups\": [\"admin\", \"moderator\", \"editor\"],\n },\n ]\n\n content_type = models.ContentType.objects.get_for_model(User)\n for permission in permissions:\n permission_obj = Permission.objects.create(\n codename=permission[\"codename\"],\n name=permission[\"name\"],\n content_type=content_type,\n )\n # add the permission to the appropriate groups\n for group_name in permission[\"groups\"]:\n Group.objects.get(name=group_name).permissions.add(permission_obj)\n\n # while the groups and permissions shouldn't be changed because the code\n # depends on them, what permissions go with what groups should be editable\n\n\ndef init_connectors():\n \"\"\"access book data sources\"\"\"\n models.Connector.objects.create(\n identifier=\"bookwyrm.social\",\n name=\"BookWyrm dot Social\",\n connector_file=\"bookwyrm_connector\",\n base_url=\"https://bookwyrm.social\",\n books_url=\"https://bookwyrm.social/book\",\n covers_url=\"https://bookwyrm.social/images/\",\n search_url=\"https://bookwyrm.social/search?q=\",\n isbn_search_url=\"https://bookwyrm.social/isbn/\",\n priority=2,\n )\n\n models.Connector.objects.create(\n identifier=\"inventaire.io\",\n name=\"Inventaire\",\n connector_file=\"inventaire\",\n base_url=\"https://inventaire.io\",\n books_url=\"https://inventaire.io/api/entities\",\n covers_url=\"https://inventaire.io\",\n search_url=\"https://inventaire.io/api/search?types=works&types=works&search=\",\n isbn_search_url=\"https://inventaire.io/api/entities?action=by-uris&uris=isbn%3A\",\n priority=3,\n )\n\n models.Connector.objects.create(\n identifier=\"openlibrary.org\",\n name=\"OpenLibrary\",\n connector_file=\"openlibrary\",\n base_url=\"https://openlibrary.org\",\n books_url=\"https://openlibrary.org\",\n covers_url=\"https://covers.openlibrary.org\",\n search_url=\"https://openlibrary.org/search?q=\",\n isbn_search_url=\"https://openlibrary.org/api/books?jscmd=data&format=json&bibkeys=ISBN:\",\n priority=3,\n )\n\n\ndef init_federated_servers():\n \"\"\"big no to nazis\"\"\"\n built_in_blocks = [\"gab.ai\", \"gab.com\"]\n for server in built_in_blocks:\n models.FederatedServer.objects.create(\n server_name=server,\n status=\"blocked\",\n )\n\n\ndef init_settings():\n \"\"\"info about the instance\"\"\"\n models.SiteSettings.objects.create(\n support_link=\"https://www.patreon.com/bookwyrm\",\n support_title=\"Patreon\",\n )\n\n\ndef init_link_domains(*_):\n \"\"\"safe book links\"\"\"\n domains = [\n (\"standardebooks.org\", \"Standard EBooks\"),\n (\"www.gutenberg.org\", \"Project Gutenberg\"),\n (\"archive.org\", \"Internet Archive\"),\n (\"openlibrary.org\", \"Open Library\"),\n (\"theanarchistlibrary.org\", \"The Anarchist Library\"),\n ]\n for domain, name in domains:\n models.LinkDomain.objects.create(\n domain=domain,\n name=name,\n status=\"approved\",\n )\n\n\nclass Command(BaseCommand):\n help = \"Initializes the database with starter data\"\n\n def add_arguments(self, parser):\n parser.add_argument(\n \"--limit\",\n default=None,\n help=\"Limit init to specific table\",\n )\n\n def handle(self, *args, **options):\n limit = options.get(\"limit\")\n tables = [\n \"group\",\n \"permission\",\n \"connector\",\n \"federatedserver\",\n \"settings\",\n \"linkdomain\",\n ]\n if limit not in tables:\n raise Exception(\"Invalid table limit:\", limit)\n\n if not limit or limit == \"group\":\n 
init_groups()\n if not limit or limit == \"permission\":\n init_permissions()\n if not limit or limit == \"connector\":\n init_connectors()\n if not limit or limit == \"federatedserver\":\n init_federated_servers()\n if not limit or limit == \"settings\":\n init_settings()\n if not limit or limit == \"linkdomain\":\n init_link_domains()\n", "path": "bookwyrm/management/commands/initdb.py"}]} | 2,686 | 116 |
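For context, the golden diff recorded in this row changes the `--limit` validation in `initdb.py` so that omitting the option means "initialize every table" instead of raising `Invalid table limit: None`. Below is a minimal standalone sketch of that corrected check; the table list and exception are taken from the patched file, while the `validate_limit` helper and the example calls are illustrative additions, not part of the original command class.

```python
# Sketch of the validation logic after the golden diff is applied.
# `limit` corresponds to the --limit CLI option and defaults to None.
tables = [
    "group",
    "permission",
    "connector",
    "federatedserver",
    "settings",
    "linkdomain",
]


def validate_limit(limit):
    # Only reject a limit that was explicitly given but names an unknown table;
    # limit=None (no --limit flag) means "initialize everything".
    if limit and limit not in tables:
        raise Exception("Invalid table limit:", limit)


validate_limit(None)      # OK: full initialization
validate_limit("group")   # OK: restrict init to the group table
# validate_limit("books") # would raise: not a known table
```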